Machine learning in trading: theory, models, practice and algo-trading

On the topic of how to transfer a neural network from R to MQL:
I did it with the nnet package. You need to look at the source: https://cran.r-project.org/web/packages/nnet/index.html (package source: nnet_7.3-12.tar.gz). There is a src/nnet.c file in the archive.
We need the function fpass(), specifically the first few lines before Error = 0.0 (that line is the beginning of the error calculation for backprop, which is not needed for simple prediction).
That is the code we can use in our EA.
This code will only work with the model parameter skip = FALSE.
If that is not the case, or if softmax is used, look for the corresponding differences in the fpass() function.
The weights (the nnet_weights array) must be copied from R itself after training.
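A minimal sketch of what that forward pass could look like in MQL5, assuming a single-hidden-layer nnet model with skip = FALSE and logistic (sigmoid) activations. The layer sizes here are placeholders, and the nnet_weights array is what you would fill with the flat weight vector exported from R (e.g. from model$wts):

// Sketch of nnet's fpass() forward pass for skip = FALSE, no softmax.
// Layer sizes are placeholders; fill nnet_weights[] with model$wts from R,
// e.g. cat(model$wts, sep = ",") after training.
#define N_INPUT  10   // assumed number of inputs
#define N_HIDDEN 5    // assumed number of hidden units
#define N_OUTPUT 1    // assumed number of outputs

// Flat weight vector in nnet's order (to the best of my reading of nnet.c):
// for each hidden unit: bias, then one weight per input;
// for each output unit: bias, then one weight per hidden unit.
double nnet_weights[(N_INPUT + 1) * N_HIDDEN + (N_HIDDEN + 1) * N_OUTPUT];

double Sigmoid(const double x) { return 1.0 / (1.0 + MathExp(-x)); }

void NnetPredict(const double &inputs[], double &outputs[])
  {
   double hidden[N_HIDDEN];
   int w = 0;
   // input -> hidden layer
   for(int h = 0; h < N_HIDDEN; h++)
     {
      double sum = nnet_weights[w++];            // bias
      for(int i = 0; i < N_INPUT; i++)
         sum += nnet_weights[w++] * inputs[i];
      hidden[h] = Sigmoid(sum);
     }
   // hidden -> output layer (drop the Sigmoid() here if linout = TRUE was used)
   for(int o = 0; o < N_OUTPUT; o++)
     {
      double sum = nnet_weights[w++];            // bias
      for(int h = 0; h < N_HIDDEN; h++)
         sum += nnet_weights[w++] * hidden[h];
      outputs[o] = Sigmoid(sum);
     }
  }

Verify the weight ordering against your own fpass() reading before relying on it; a quick check is to compare NnetPredict() against predict(model) in R on a few known input vectors.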
Another small introductory article on MQL while we are on the subject.
It would be interesting to organize some small teamwork and a brainstorming session using MQL and share experience; the topic is very interesting and other approaches have already been discussed:
https://proglib.io/p/trade-learning/
In my opinion, reinforcement learning usually produces a model with very heavy overfitting, and cross-validation will not save you in this case. The model itself should contain some special mechanisms against overfitting.
For those who are not familiar with the subject, briefly: instead of the usual model evaluation (accuracy, R2, logloss, etc.), a special fitness function with its own model-evaluation logic is created. Such a fitness function can, for example, calculate the profit of the model during trading, or its Sharpe ratio. Model parameters are then selected by a genetic algorithm.
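As an illustration, a fitness function of this kind could look like the following MQL5 sketch, where trade_returns[] is a hypothetical array of per-trade returns collected during the test run; the optimizer then maximizes this value instead of a standard metric:

// Sketch of a custom fitness: a Sharpe-like ratio over per-trade returns.
// trade_returns[] is a hypothetical array filled during the backtest.
double FitnessSharpe(const double &trade_returns[])
  {
   int n = ArraySize(trade_returns);
   if(n < 2)
      return 0.0;
   double mean = 0.0;
   for(int i = 0; i < n; i++)
      mean += trade_returns[i];
   mean /= n;
   double var = 0.0;
   for(int i = 0; i < n; i++)
      var += MathPow(trade_returns[i] - mean, 2);
   var /= (n - 1);
   double sd = MathSqrt(var);
   if(sd == 0.0)
      return 0.0;
   return mean / sd;   // higher is better; genetics selects parameters maximizing this
  }

In the MT5 tester the same idea is exposed through the OnTester() handler: its return value can be used as the custom optimization criterion for the genetic optimizer.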
Interestingly, the optimizer in MT4 and MT5 also works on the reinforcement-learning principle: at every bar (or even tick) the EA receives the current values of the price and indicators, makes some decisions, trades on them, and the final EA score is taken from the trade result. The Expert Advisor's parameters are then selected by genetics in order to increase that score.
Judging by the freely available Expert Advisors for MT5, it is clear that this learning method produces a lot of account "sinkers". But sometimes good EAs come out of it too.
Good analogy with the optimizer; my previous bot worked roughly like that, but it is too simple an approach.
In general, though, it is not like that: RL is not opposed to genetics, it rather emphasizes its own advantages, for example the ability to work on non-stationary processes. In particular, it is possible to use non-greedy fitness functions, thanks to which the model is constantly updated as it runs and takes random steps. And as the optimizer an NN wrapper is used (not genetics).
It is a bit more complicated than that, but I have not finished reading the book yet.
So your statement about genetics, the optimizer and RL is wrong.
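For reference, the "non-greedy" behaviour mentioned above usually means epsilon-greedy action selection: with a small probability the agent takes a random action instead of the best-valued one, so it keeps exploring while it runs. A minimal MQL5 sketch, where q_values[] is a placeholder for the action-value estimates produced by the model (e.g. a neural network):

// Epsilon-greedy action selection: mostly exploit the best-valued action,
// occasionally explore a random one. q_values[] is a placeholder for the
// action-value estimates produced by the model.
int SelectAction(const double &q_values[], const double epsilon)
  {
   int n = ArraySize(q_values);
   // explore with probability epsilon (MathRand() returns 0..32767)
   if((double)MathRand() / 32767.0 < epsilon)
      return MathRand() % n;
   // otherwise exploit: pick the action with the highest estimated value
   int best = 0;
   for(int i = 1; i < n; i++)
      if(q_values[i] > q_values[best])
         best = i;
   return best;
  }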
*a moment of emotion and philosophy and all that*
Imagine a clear, sunny day. You are sitting on the grass beside a pond, a warm spring breeze is blowing, leaving ripples on the water, and somewhere nearby you can hear ducks. Fish are swimming in the shallows, sometimes touching the surface with their fins and leaving circles on the water. A neighbor is scooping up water, her grandson is throwing stones into the pond and chasing the frogs from place to place. On the other side of the pond, some guy is washing his car.
Price is like the waves on that water: the product of all this commotion, and of storms too. We can follow the waves and try to guess when the water will rise or fall at a particular point, but if we do not watch the environment, we will be wrong most of the time.
The future value of the price depends not on its past values, but on the global world processes that caused the past price changes and will cause the new ones.
It is necessary to monitor not only the waves on the water but also the wind, the trajectories of the fish, the neighbor's bucket, etc.; then the nature of the waves on the water becomes clear and predictable.
Accordingly, if you have information on all the global processes affecting the price, you can learn to predict it, and any simple model from the last century will do.
The problem is that usually all you have is a price chart, and that is not enough.
Here is how to connect RL and NN,
and a video; there is a second part as well.
That's it, I won't spam any more; those who are interested will read it.
Neural networks in MT are simple. There is the CNTK library from Microsoft, implemented for Python, C# and C++. All the network analysis and training is done in Python, while C++ is used to write a DLL that loads the trained network and runs calculations with it. In my opinion this is the best option. The second option is connecting Python to MT; I have written a simple library for this. We connect it and can use everything that is available in Python, and there is a lot available. I am thinking about starting to write about machine learning on my blog.
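On the MQL5 side, calling such a DLL reduces to a #import block. The DLL name and function signature below are purely hypothetical placeholders for whatever the C++ wrapper around the trained network actually exports; it is only a sketch of the wiring:

// Hypothetical import of a C++ DLL that wraps a trained network.
// "trained_net.dll" and NetPredict() are placeholders, not a real API.
// Requires "Allow DLL imports" in the terminal settings.
#import "trained_net.dll"
   // fills outputs[] from inputs[]; returns 0 on success
   int NetPredict(double &inputs[], int n_inputs,
                  double &outputs[], int n_outputs);
#import

void OnTick()
  {
   double in[4] = {0.0, 0.0, 0.0, 0.0};  // feature vector (placeholder)
   double out[1];
   if(NetPredict(in, 4, out, 1) == 0)
      Print("network prediction: ", out[0]);
  }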
It would be interesting to read about strategies and personal thoughts/experience, for me personally at least...
...because most of the stuff here is just churn about trying lots of libraries and which language is better to write in. It's an epidemic, and it all goes to waste.
Although the basic idea was voiced by fxsaber a long time ago: with such an approach the subject might as well be closed, because it was wrong to begin with.