Machine learning in trading: theory, models, practice and algo-trading - page 680

 

On the topic of how to transfer a neural network from R to MQL:

I did it with the nnet package. You need to look at the source: https://cran.r-project.org/web/packages/nnet/index.html (Package source: nnet_7.3-12.tar.gz). There is a src/nnet.c file in the archive.

We need the fpass function, more precisely its first few lines before `Error = 0.0` (that is the beginning of the error calculation for backprop, which is not needed for simple prediction).


This is the code we can use in our EA.

   double sigmoid(double sum)
     {
      if(sum<-15.0)
         return (0.0);
      else if(sum>15.0)
         return (1.0);
      else
         return (1.0 / (1.0 + exp(-sum)));
     }

//nnet_hidden - number of neurons in the hidden layer
//single output
//nnet_weights - array with the network weights taken from R after training the nnet model

int weights_it=0;
double hiddenLayer[];
ArrayResize(hiddenLayer,nnet_hidden);
for(int i=0; i<nnet_hidden; i++)
 {
  hiddenLayer[i]=nnet_weights[weights_it++];
  for(int j=1; j<=nnet_bars; j++)
    {
     hiddenLayer[i]+=openPrices[j]*nnet_weights[weights_it++];
    }
  hiddenLayer[i]=sigmoid(hiddenLayer[i]);
 }
double prediction=nnet_weights[weights_it++];
for(int i=0; i<nnet_hidden; i++)
 {
  prediction+=hiddenLayer[i]*nnet_weights[weights_it++];
 }
if(!linout)
 {
  prediction = sigmoid(prediction);
 }

This code will only work with the model parameter skip = FALSE.
If that is not the case, or softmax is used, look for the differences in the fpass() function.


The weights (the nnet_weights array) must be copied from R itself after training:

library(nnet)
trainedModel <- nnet(y = 1:10, x = matrix(runif(20),ncol=2), size=10)
nnet_weights <- trainedModel$wts
nnet_weights <- trainedModel$wts
cat("double nnet_weights[] = {", paste(format(nnet_weights, digits=16, scientific=T), collapse=","), "};", file="D:/weights.txt") # save the weights to a file so they can be copy-pasted into the EA
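For cross-checking the EA code against R, the same forward pass can be sketched in Python (a minimal sketch; the weight ordering matches nnet with skip = FALSE, and the function and variable names here are hypothetical). Comparing its output with R's predict() on the same inputs is a quick way to catch a weight-ordering mistake before porting to MQL:

```python
import math

def sigmoid(s):
    # clamp extreme sums, as in the MQL snippet above
    if s < -15.0:
        return 0.0
    if s > 15.0:
        return 1.0
    return 1.0 / (1.0 + math.exp(-s))

def nnet_predict(weights, inputs, n_hidden, linout=False):
    """Forward pass using nnet's weight ordering (skip = FALSE):
    per hidden unit: bias, then one weight per input;
    then the output bias, then one weight per hidden unit."""
    it = 0
    hidden = []
    for _ in range(n_hidden):
        s = weights[it]; it += 1          # hidden-unit bias
        for x in inputs:
            s += x * weights[it]; it += 1  # input-to-hidden weight
        hidden.append(sigmoid(s))
    out = weights[it]; it += 1             # output bias
    for h in hidden:
        out += h * weights[it]; it += 1    # hidden-to-output weight
    return out if linout else sigmoid(out)
```

With a single hidden unit, zero biases, a zero input and a hidden-to-output weight of 1, the linear output is exactly the sigmoid of zero, i.e. 0.5, which is easy to verify by hand.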
 
Oh! That's something. A little more work and you can use it. Thank you!
 

Meanwhile, another small introductory article on the subject:

It would be interesting to organize a small team effort and brainstorming session in MQL and share experience; the topic is very interesting, and other approaches have already been discussed here.

https://proglib.io/p/trade-learning/

Trading and reinforcement learning
  • 2018.02.15
  • matyushkin
  • proglib.io
The article discusses how reinforcement learning can be applied to trading on financial markets and cryptocurrency exchanges. The academic Deep Learning community has largely stayed away from financial markets, whether because the financial industry does not have the best reputation, or because the problems being solved do not seem interesting enough for...
 

In my opinion, reinforcement learning usually gives a model with a very large overfit, and cross-validation will not save you in this case. The model itself should contain some special mechanisms against overfitting.

For those who are not familiar with the subject, briefly: instead of the usual model evaluation (accuracy, R², logloss, etc.), a special fitness function with its own evaluation logic is created. Such a fitness function can, for example, calculate the profit of the model during trading, or its Sharpe ratio. The model parameters are then selected by a genetic algorithm.
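The scheme described above can be sketched in a few lines (a toy example, not anyone's actual strategy: the price series, the moving-average strategy and all the numbers are made up). Profit plays the role that accuracy or logloss plays in ordinary model evaluation, and a tiny genetic algorithm selects the two strategy parameters:

```python
import math
import random

random.seed(42)

# Toy price series: a hypothetical stand-in for real quotes
prices = [100 + i * 0.1 + 3 * math.sin(i / 5) for i in range(200)]

def fitness(fast, slow):
    """Fitness = total profit of a simple MA-crossover strategy."""
    if fast >= slow:
        return -1e9  # invalid parameter combination
    profit, position, entry = 0.0, 0, 0.0
    for i in range(slow, len(prices)):
        ma_fast = sum(prices[i - fast:i]) / fast
        ma_slow = sum(prices[i - slow:i]) / slow
        if ma_fast > ma_slow and position == 0:
            position, entry = 1, prices[i]     # open long
        elif ma_fast < ma_slow and position == 1:
            profit += prices[i] - entry        # close long
            position = 0
    return profit

# Tiny genetic algorithm over the two parameters
pop = [(random.randint(2, 20), random.randint(21, 60)) for _ in range(20)]
for _ in range(15):
    pop.sort(key=lambda p: fitness(*p), reverse=True)
    parents = pop[:5]                          # keep the fittest
    children = [
        (max(2, p[0] + random.randint(-2, 2)),  # mutate fast period
         max(21, p[1] + random.randint(-2, 2))) # mutate slow period
        for p in random.choices(parents, k=15)
    ]
    pop = parents + children
best = max(pop, key=lambda p: fitness(*p))
```

This is exactly the shape of the overfitting problem mentioned above: nothing in the loop stops the parameters from being fitted to the noise of this particular series.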

Interestingly, the optimizer in MT4 and MT5 also works on the reinforcement learning principle: at every bar (or even tick) the EA receives the current price and indicator values, makes decisions, trades on them, and the final trading result serves as the EA's score. The parameters of the Expert Advisor are then selected by the genetic algorithm so as to increase that score.
Judging by the freely available Expert Advisors for MT5, this learning method clearly produces a lot of losing EAs. But occasionally good ones come out of it too.

 
Dr. Trader:

In my opinion, reinforcement learning usually gives a model with a very large overfit, and cross-validation will not save you in this case. The model itself should contain some special mechanisms against overfitting.

For those who are not familiar with the subject, briefly: instead of the usual model evaluation (accuracy, R², logloss, etc.), a special fitness function with its own evaluation logic is created. Such a fitness function can, for example, calculate the profit of the model during trading, or its Sharpe ratio. The model parameters are then selected by a genetic algorithm.

Interestingly, the optimizer in MT4 and MT5 also works on the reinforcement learning principle: at every bar (or even tick) the EA receives the current price and indicator values, makes decisions, trades on them, and the final trading result serves as the EA's score. The parameters of the Expert Advisor are then selected by the genetic algorithm so as to increase that score.
Judging by the freely available Expert Advisors for MT5, this learning method clearly produces a lot of losing EAs. But occasionally good ones come out of it too.

Good analogy with the optimizer; my previous bot worked along those lines, but it's too simple.

But in general that's not right: RL is not opposed to genetics, rather it has its own advantages, for example the ability to work with non-stationary processes. In particular, it is possible to use non-greedy policies, thanks to which the model is constantly updated as it runs and takes random exploratory steps. And as the optimizer a neural network is used (not genetics).

It's a bit more complicated than that, but I haven't finished reading the book yet.

So your statement about genetics, optimizer and RL is wrong.
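The point about non-greedy behaviour and constant updating can be illustrated with a toy example (a sketch, not the poster's actual setup; the drifting payoff and all numbers are made up). An epsilon-greedy agent with a constant step size keeps tracking a payoff that flips over time, which a one-shot fit of fixed parameters by genetics would miss:

```python
import random

random.seed(0)

# Two "strategies" whose payoffs drift over time (a non-stationary process):
# which arm is profitable flips every 500 steps
def payoff(arm, t):
    drift = 1.0 if (t // 500) % 2 == 0 else -1.0
    mean = drift if arm == 0 else -drift
    return mean + random.gauss(0, 1)

q = [0.0, 0.0]         # running action-value estimates
alpha, eps = 0.1, 0.1  # constant step size tracks drift; eps = random steps
reward = 0.0
for t in range(2000):
    # epsilon-greedy: mostly exploit the current estimate,
    # but occasionally take a random exploratory step
    arm = random.randrange(2) if random.random() < eps else q.index(max(q))
    r = payoff(arm, t)
    q[arm] += alpha * (r - q[arm])   # online update while running
    reward += r
```

Because the step size is constant rather than decaying, old observations are forgotten and the estimates re-adapt after each regime flip; that is the advantage over selecting a fixed parameter set once.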

 

*a moment of emotion and philosophy and all that*

Imagine a clear, sunny day. You are sitting on the grass beside a pond, a warm spring breeze is blowing, leaving ripples on the water, and somewhere nearby you can hear ducks. Shallow underwater, fish are swimming, sometimes touching the surface with their fins and making circles on the water. A neighbor is scooping water, and her grandson is throwing stones into the pond, chasing frogs back and forth. On the other side of the pond, some guy is washing his car.

Price is like the waves on the water from all this commotion, and sometimes from a storm. We can follow the waves and try to guess when the water will start to rise or fall at a particular point, but if we don't follow the environment, we will be wrong most of the time.

The future value of the price depends not on its past values, but on global world processes which caused past price changes and will cause new changes.
It is necessary to monitor not only the waves on the water, but also the wind, the trajectory of fish, the neighbor's bucket, etc., then the nature of waves on the water will be clear and predictable.

Accordingly, if you have information on all global processes affecting the price, you can learn to predict, and any simple model from the last century will do.
The problem is that usually there is only a price chart, and that is not enough.

 

Here's how to connect RL and NN:

 

and a video; there's also a second part

That's it, I won't spam any more; those who are interested will read it for themselves.


 
With neural networks in MT, everything is simple. There is a library from Microsoft called CNTK. It is implemented for Python, C# and C++. All the analysis and training of the network is done in Python, and in C++ we write a DLL that loads the trained network and runs calculations with it. In my opinion this is the best option. The second option is connecting Python to MT. I have written a simple library for this: Library. We connect it and can use everything that is available in Python. And there is a lot available. I'm thinking about starting to write about machine learning on my blog.
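CNTK has its own model serialization, so the following is only a generic illustration of the Python-to-DLL hand-off described above (the file name, layout and weights are all hypothetical): dump the trained weights to a simple binary file from Python, and a C++ DLL can read the same layout back with fread into an int and a double array:

```python
import os
import struct
import tempfile

# Hypothetical trained weights (in practice taken from the framework's model)
weights = [0.12, -0.5, 1.7, 0.03]

path = os.path.join(tempfile.gettempdir(), "weights.bin")

# Layout: little-endian int32 count, then count little-endian float64 values
with open(path, "wb") as f:
    f.write(struct.pack("<i", len(weights)))
    f.write(struct.pack(f"<{len(weights)}d", *weights))

# Read it back to verify the round trip (this is what the C++ side would do)
with open(path, "rb") as f:
    (n,) = struct.unpack("<i", f.read(4))
    loaded = list(struct.unpack(f"<{n}d", f.read(8 * n)))
```

A fixed, explicit byte layout like this keeps the Python and C++ sides decoupled: neither needs the other's framework installed.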
 
Grigoriy Chaunin:
With neural networks in MT, everything is simple. There is a library from Microsoft called CNTK. It is implemented for Python, C# and C++. All the analysis and training of the network is done in Python, and in C++ we write a DLL that loads the trained network and runs calculations with it. In my opinion this is the best option. The second option is connecting Python to MT. I have written a simple library for this: Library. We connect it and can use everything that is available in Python. And there is a lot available. I'm thinking about starting to write about machine learning on my blog.

It would be interesting to read about strategies and personal thoughts/experiences... for me personally

because most of the stuff here is just noise about trying lots of libraries and which one is better to write in... it's an epidemic, and it all goes nowhere.

Although the basic idea was voiced by fxsaber a long time ago - with such an approach the subject might as well be closed, because it was wrong to begin with
