Discussion of article "Neural Networks: From Theory to Practice" - page 3

 

marketeer:

Yedelkin: So, for a full-fledged (self-learning) neuro-advisor, is it necessary to embed the "standard genetic optimisation algorithm" into the program code?

No, of course not! It is called standard precisely because it is already built into the optimiser, and the optimiser tunes the network weights by itself.
Then I don't understand. If the "standard genetic optimisation algorithm" is embedded in the optimiser, how can a self-learning neuro-advisor use this "external" algorithm for self-learning purposes?
 
Yedelkin:
Then I don't understand. If the "standard genetic optimisation algorithm" is embedded in the optimiser, how can a self-learning neuro-advisor use this "external" algorithm for self-learning purposes?
The direction of interaction is the opposite. By analogy with an ordinary Expert Advisor: there is an optimiser that drives the "black box" of the EA (any EA) through its input parameters. If there is a neural network inside the Expert Advisor, it does not cease to be a "black box"; the optimised parameters are simply the set of network weights.
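By way of illustration, this relationship can be sketched as follows (a minimal Python sketch, not MQL5; `ea_backtest`, its toy one-neuron strategy, and the random-search `optimise` are all hypothetical stand-ins for a real EA and the tester's genetic optimiser):

```python
import random

def ea_backtest(weights, price_changes):
    """Hypothetical 'black box' EA: the optimiser never looks inside,
    it only supplies input parameters (here, the network weights)
    and reads back a fitness value (here, total profit)."""
    profit = 0.0
    for i in range(1, len(price_changes)):
        # a one-neuron 'network' turns the last price change into a signal
        signal = weights[0] * price_changes[i - 1] + weights[1]
        position = 1 if signal > 0 else -1  # long or short one unit
        profit += position * price_changes[i]
    return profit

def optimise(fitness, n_params, iterations=300, seed=1):
    """Stand-in for the tester's genetic optimiser (plain random search
    here, for brevity): it pulls the black box only by its input
    parameters and keeps the best candidate seen so far."""
    rng = random.Random(seed)
    best = [0.0] * n_params
    best_fit = fitness(best)
    for _ in range(iterations):
        candidate = [rng.uniform(-1.0, 1.0) for _ in range(n_params)]
        fit = fitness(candidate)
        if fit > best_fit:
            best, best_fit = candidate, fit
    return best, best_fit

history = [0.1, -0.2, 0.3, 0.2, -0.1, 0.4, -0.3, 0.2]
best, fit = optimise(lambda w: ea_backtest(w, history), 2)
```

Whether the black box contains a neural network or plain if-else rules makes no difference to the optimiser; only the parameter vector changes.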
 
Yedelkin:
Then I don't understand. If the "standard genetic optimisation algorithm" is embedded in the optimiser, how can a self-learning neuro-advisor use this "external" algorithm for self-learning purposes?

A neural network is, simplistically, a function of the form f(x1, x2, ..., xn; w1, w2, ..., wn), where x is the input information (it changes with the market situation) and w are the network weights: fixed coefficients (input parameters, in the context of this article) selected by optimisation in the tester.
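This formula can be written down directly (a Python sketch; the two-input tanh neuron and the particular weight values are arbitrary illustrative choices, not anything from the article):

```python
import math

def net(x, w):
    """f(x1,...,xn; w1,...,wn): x is the changing market input,
    w is the fixed weight vector chosen once by the optimiser.
    Here: a single neuron, with the last weight acting as the bias."""
    s = sum(xi * wi for xi, wi in zip(x, w)) + w[-1]
    return math.tanh(s)

# the same fixed weights, applied to different market situations
w = [0.5, -0.3, 0.1]               # found by the tester, then frozen
signal_calm = net([0.0, 0.0], w)   # tanh(0.1)
signal_trend = net([1.0, 2.0], w)  # tanh(0.5 - 0.6 + 0.1) = tanh(0.0) = 0.0
```

The x vector is recomputed on every tick or bar; the w vector stays constant until the EA is re-optimised.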

So, if the network has to be trained online, the standard optimiser cannot be used, and some optimisation algorithm will have to be built into the Expert Advisor itself.
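What such an optimisation algorithm built into the Expert Advisor could look like in the simplest case (a Python sketch of one (1+1) evolutionary step; the `sliding_fitness` target is a hypothetical stand-in for profit measured over a window of recent bars):

```python
import random

def online_step(weights, fitness, rng, sigma=0.1):
    """One self-learning step the EA itself could run on every new bar:
    mutate the weights with Gaussian noise, keep the mutant only if the
    fitness improves. This is the part that replaces the tester's
    genetic optimiser when training happens online."""
    candidate = [w + rng.gauss(0.0, sigma) for w in weights]
    return candidate if fitness(candidate) > fitness(weights) else weights

def sliding_fitness(weights):
    """Hypothetical fitness: negative squared distance to a target
    weight vector, standing in for recent trading results."""
    target = [0.7, -0.2]
    return -sum((w - t) ** 2 for w, t in zip(weights, target))

rng = random.Random(7)
w = [0.0, 0.0]
for _ in range(500):   # e.g. one step per incoming bar
    w = online_step(w, sliding_fitness, rng)
```

A real embedded optimiser would maintain a population rather than a single candidate, but the principle is the same: the learning loop lives inside the EA, not in the tester.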

 
marketeer:
The direction of interaction is the opposite. By analogy with an ordinary Expert Advisor: there is an optimiser that drives the "black box" of the EA (any EA) through its input parameters. If there is a neural network inside the Expert Advisor, it does not cease to be a "black box"; the optimised parameters are simply the set of network weights.
If that is the case, then there is no self-training of neuro-advisors to speak of, and what is called training is just ordinary parameter fitting.
 
joo: So, if the network has to be trained online, the standard optimiser cannot be used, and some optimisation algorithm will have to be built into the Expert Advisor itself.
Yes, this is exactly the point I wanted to clarify. It turns out that only in this case can a neuro-advisor really be called self-learning.
 
yu-sha: http://lancet.mit.edu/ga/ - Massachusetts Institute of Technology

Yedelkin:

Thanks, everyone! Now I have a rough idea of the direction.
joo: All the necessary tools for MQL5 are already available here, on the native forum.
 
joo: All the necessary tools for MQL5 are already available here, on the native forum.
That's for sure :) I just needed to understand the basic trick.
 
Yedelkin:
If that is the case, then there is no self-training of neuro-advisors to speak of, and what is called training is just ordinary parameter fitting.
And do you naively believe that self-learning is an unusual kind of fitting?
 

Reshetov:
And do you naively believe that self-learning is an unusual kind of fitting?

Network learning = fitting

Self-learning = self-fitting