
The point is that a neural network can, as you know, learn any function, and does so successfully; the key condition is that the out-of-sample data must fall within the range of the training data.
Actually, that's exactly what I was saying. If you go outside the range, the answers will be wrong. That is why I say a network can be taught the multiplication table from 1 to 9, but not multiplication of arbitrary numbers over the whole number line; that would be a feat on the order of "cooking delicious fried eggs".
Yes, unfortunately, the current generation of neural networks cannot work on inputs in a range different from the training range. Maybe there are custom architectures that can handle it, but a multilayer perceptron with a non-linear activation function definitely cannot.
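A small sketch of this limitation (a hypothetical stand-in, not the network discussed above): a tiny NumPy perceptron with one tanh hidden layer is trained on the 1-9 multiplication table and then asked for 15 × 15. The architecture, scaling, and learning rate are my own choices for illustration. Because the tanh units saturate, the output is bounded, so the network physically cannot track the product once the inputs leave the training range.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training set: the 1..9 multiplication table, scaled into a small range.
a, b = np.meshgrid(np.arange(1, 10), np.arange(1, 10))
X = np.column_stack([a.ravel(), b.ravel()]) / 10.0   # inputs in [0.1, 0.9]
y = (a.ravel() * b.ravel() / 100.0).reshape(-1, 1)   # targets in [0.01, 0.81]

# One tanh hidden layer, linear output: a plain multilayer perceptron.
W1 = rng.normal(0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)

lr = 0.05
for _ in range(20000):
    H = np.tanh(X @ W1 + b1)            # forward pass
    out = H @ W2 + b2
    d_out = 2 * (out - y) / len(X)      # gradient of mean squared error
    d_H = (d_out @ W2.T) * (1 - H**2)   # backprop through tanh
    W2 -= lr * H.T @ d_out; b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_H;   b1 -= lr * d_H.sum(0)

def predict(p, q):
    h = np.tanh(np.array([p, q]) / 10.0 @ W1 + b1)
    return float((h @ W2 + b2) * 100.0)

in_err = abs(predict(7, 8) - 56)        # inside the training range
out_err = abs(predict(15, 15) - 225)    # far outside it
```

Inside the table the fitted error is small; at 15 × 15 the saturated hidden units flatten the output and the error is dozens of times larger, which is exactly the "error grows once you leave the range" effect described above.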
Specially for you :)
In this case, the validation sample had both inputs and outputs outside the range on which the network was trained, and the test sample is likewise outside the training range. Validation starts with the 201st case, and you can see the error start to grow exponentially. The mean squared error for each sample is highlighted in yellow at the top. Everything is visible to the naked eye.
Neural networks are a branch of research in artificial intelligence based on attempts to replicate the human nervous system, namely the ability of the nervous system to learn and correct errors....
I don't get it. How exactly does the neuro-advisor's self-learning take place? In other words, how does the program change the weight coefficients?
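For the weight-update question: one common scheme (not necessarily what any particular neuro-advisor uses; the thread below suggests genetic optimisation instead) is gradient descent, where each weight is nudged against the gradient of the prediction error. A minimal single-neuron sketch with made-up numbers:

```python
# Delta-rule update for a single linear neuron: the classic way a
# program adjusts its weights from the prediction error.
inputs  = [1.0, 0.5]
target  = 2.0
weights = [0.1, -0.2]   # arbitrary starting weights
lr      = 0.1           # learning rate

for step in range(100):
    prediction = sum(w * x for w, x in zip(weights, inputs))
    error = prediction - target
    # Each weight moves against the gradient of the squared error:
    # dE/dw_i = 2 * error * x_i
    weights = [w - lr * 2 * error * x for w, x in zip(weights, inputs)]

prediction = sum(w * x for w, x in zip(weights, inputs))
```

After the loop the prediction has converged to the target; a multilayer network does the same thing, with backpropagation supplying the gradient for the hidden-layer weights.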
So, for a neuro-advisor to operate fully (i.e. to self-learn), a "standard genetic optimisation algorithm" needs to be embedded in the program code? Are there any ready-made implementations of such algorithms in the public domain?
http://lancet.mit.edu/ga/ - Massachusetts Institute of Technology
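Besides ready-made libraries like the one linked above, a basic generational genetic algorithm is short enough to embed directly. Here is a sketch under assumed parameters (population size, mutation scale, the toy fitness function standing in for a backtest score are all my own choices):

```python
import random

random.seed(1)

def fitness(genome):
    # Toy objective standing in for an advisor's backtest score:
    # best at w = (3, -1). A real neuro-advisor would run a backtest here.
    w1, w2 = genome
    return -((w1 - 3) ** 2 + (w2 + 1) ** 2)

def mutate(genome, scale=0.3):
    return [g + random.gauss(0, scale) for g in genome]

def crossover(a, b):
    return [random.choice(pair) for pair in zip(a, b)]

# Standard generational GA: selection -> crossover -> mutation.
pop = [[random.uniform(-5, 5), random.uniform(-5, 5)] for _ in range(30)]
for gen in range(60):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                      # truncation selection
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(len(pop) - len(parents))]
    pop = parents + children                # elitism: keep the best as-is

best = max(pop, key=fitness)
```

With the network's weights as the genome and the backtest result as the fitness, this is the "standard genetic optimisation algorithm" asked about above; elitism guarantees the best candidate never gets lost between generations.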