Discussion of article "Neural Networks: From Theory to Practice" - page 2

 
alexeymosc:

The point is that an NN, as you know, can learn any function, and does so successfully; the main thing is that the out-of-sample data stay within the training range.

Actually, that's exactly what I was saying. Go outside that range and the answers will be wrong. That is why I say an NN can be taught the multiplication table from 1 to 9, but not multiplication of arbitrary numbers on the whole number line; that would be a feat on the order of "cooking delicious eggs".
 
joo:
Actually, that's exactly what I was saying. Go outside that range and the answers will be wrong. That is why I say an NN can be taught the multiplication table from 1 to 9, but not multiplication of arbitrary numbers on the whole number line; that would be a feat on the order of "cooking delicious eggs".

Yes, unfortunately, the current generation of NNs cannot work on inputs in a range different from the training range. Maybe there are custom architectures that can handle it, but a multilayer perceptron with a non-linear activation function definitely cannot.

Specially for you :)

In this case, the validation sample had both inputs and outputs outside the range on which the NN was trained, and the test sample also lies outside the range of the training sample. Validation starts from the 201st case, and you can see the error begin to grow exponentially. The mean squared error for each sample is highlighted in yellow at the top. Everything is visible to the naked eye.
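A similar experiment can be reproduced in a few lines. The sketch below is Python/NumPy, not MQL5 and not the network from the article: a toy 1-8-1 tanh perceptron is trained on inputs confined to [0, 1] (the network size, learning rate, and the target function x² are arbitrary choices for the demo). In-range it fits fine, but tanh activations are bounded, so the output saturates and the error explodes outside the training range.

```python
import numpy as np

rng = np.random.default_rng(0)
# toy 1-8-1 MLP with a tanh hidden layer (sizes chosen arbitrarily for the demo)
W1 = rng.normal(0.0, 1.0, (8, 1)); b1 = np.zeros((8, 1))
W2 = rng.normal(0.0, 0.3, (1, 8)); b2 = np.zeros((1, 1))

def forward(x):
    h = np.tanh(W1 @ x + b1)          # bounded activations: the output saturates
    return W2 @ h + b2, h

X = rng.uniform(0.0, 1.0, (1, 200))   # training inputs confined to [0, 1]
Y = X ** 2                            # target function for the demo

lr = 0.1
for _ in range(8000):                 # plain full-batch gradient descent
    out, h = forward(X)
    err = out - Y
    gW2 = err @ h.T / 200; gb2 = err.mean(axis=1, keepdims=True)
    dh = (W2.T @ err) * (1.0 - h ** 2)
    gW1 = dh @ X.T / 200;  gb1 = dh.mean(axis=1, keepdims=True)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

in_err  = abs(forward(np.array([[0.5]]))[0][0, 0] - 0.5 ** 2)  # inside the range
out_err = abs(forward(np.array([[3.0]]))[0][0, 0] - 3.0 ** 2)  # outside the range
print(in_err, out_err)
```

The in-range error ends up small, while at x = 3 the saturated network cannot get anywhere near the target of 9, which is exactly the exponential error growth visible on the chart above the 200th case.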

 
Maybe we should gather the NN discussion in one topic? This one is unremarkable, but its title fits: https://www.mql5.com/ru/forum/8158.
Artificial neural networks.
  • www.mql5.com
Their potential is practically limitless: you can plug in any number of indicators with any number of parameters... and it turns out this can be done in pure MQL5.
 
Thanks for the kind words and criticism.
 

Neural networks are a branch of artificial intelligence research based on attempts to replicate the human nervous system, namely its ability to learn and to correct errors...

I don't get it. How exactly does the neuro-advisor's self-learning take place? In other words, how does the program change the weight coefficients?

 
joo:
Actually, that's exactly what I was saying. Go outside that range and the answers will be wrong. That is why I say an NN can be taught the multiplication table from 1 to 9, but not multiplication of arbitrary numbers on the whole number line; that would be a feat on the order of "cooking delicious eggs".
Well, this problem is sometimes solved by transforming the variables. For example, in the multiplication case, if the input numbers are represented as binary bit sequences, i.e. effectively mapped into the range [0,1], then a recurrent network fed those bit sequences could probably be taught to multiply arbitrary numbers.
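The transformation itself can be sketched without the recurrent network: just the encoding and decoding step, in Python (the bit width is an arbitrary choice; LSB-first ordering is a common convention for sequence models, not a requirement).

```python
def to_bits(n, width=16):
    """Encode a non-negative integer as a list of `width` bits, LSB first.
    Each bit is 0 or 1, so every network input lies in [0, 1]."""
    return [(n >> i) & 1 for i in range(width)]

def from_bits(bits):
    """Decode an LSB-first bit list back into an integer."""
    return sum(b << i for i, b in enumerate(bits))

# A sequence model would be fed the two factors bit by bit and asked to
# emit the product's bits; here we only check that the encoding round-trips:
a, b = 123, 45
assert from_bits(to_bits(a * b, 32)) == a * b
```

The point of the trick is that however large the numbers get, the network only ever sees inputs of 0 or 1, so nothing is ever outside the training range.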
 
Yedelkin:


I don't get it. How exactly does the neuro-advisor's self-learning take place? In other words, how does the program change the weight coefficients?

This is done by the standard genetic optimisation algorithm. This implementation of the network does not include any learning algorithms of its own; you can treat that as a convenient simplification, and many people have long done the same in MetaTrader 4. But like any simplification, it limits what is possible: in effect it constrains both the network structure and the learning principle. In particular, such training cannot be run in an online Expert Advisor (at least until an optimiser appears in the MQL5 API).
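The tester's genetic optimiser is a black box, but the principle can be sketched generically. Below is a minimal Python sketch (the population size, mutation rate, annealing schedule, and the toy fitness function are all arbitrary assumptions, not the tester's actual parameters): a population of weight vectors is improved by selection, crossover, and mutation. In the real setup, the "genes" would be the network weights exposed as input parameters, and fitness would be the Expert Advisor's backtest result.

```python
import random
random.seed(1)

IDEAL = [0.2, -0.7, 1.5]   # hypothetical 'best' weight vector, for the toy fitness only

def fitness(w):
    # toy stand-in for a backtest score: closer to IDEAL is better
    return -sum((x - t) ** 2 for x, t in zip(w, IDEAL))

def crossover(a, b):
    # uniform crossover: each gene is taken from either parent
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(w, scale):
    # perturb some genes with Gaussian noise
    return [x + random.gauss(0.0, scale) if random.random() < 0.4 else x for x in w]

pop = [[random.uniform(-2.0, 2.0) for _ in range(3)] for _ in range(40)]
for gen in range(120):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]                       # keep the best weight sets unchanged
    scale = 0.5 * 0.96 ** gen              # anneal the mutation size over time
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)), scale)
                   for _ in range(30)]

best = max(pop, key=fitness)
print(best)
```

Nothing here requires gradients or even a differentiable network, which is exactly why the tester can "train" a grid whose code contains no learning algorithm at all.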
 
marketeer: This is done by the standard genetic optimisation algorithm.
I.e., for a neuro-advisor to operate fully (self-learn), does a "standard genetic optimisation algorithm" have to be embedded in the program code? Are there ready-made implementations of such algorithms in the public domain?
 
Yedelkin:
I.e., for a neuro-advisor to operate fully (self-learn), does a "standard genetic optimisation algorithm" have to be embedded in the program code? Are there ready-made implementations of such algorithms in the public domain?

http://lancet.mit.edu/ga/ - Massachusetts Institute of Technology

 
Yedelkin:
I.e., for a neuro-advisor to operate fully (self-learn), does a "standard genetic optimisation algorithm" have to be embedded in the program code? Are there ready-made implementations of such algorithms in the public domain?
No, of course not! It is called "standard" precisely because it is already built into the optimiser, which optimises the network weights by itself. Read the article on the MQL4 site; it may make clear how the network is optimised (that is, trained) with this approach.
How to Find a Profitable Trading Strategy - MQL4 Articles
  • www.mql5.com
How to Find a Profitable Trading Strategy - MQL4 Articles: trading systems