
>> ...but Better had very good results, so something can be gotten out of it if used correctly.
Judging by the results of the 2008 Championship, where Mr. Better and other participants used NS, what places did their advisors take?
As we saw in 2008, the leaders of the Championship were advisors of all sorts, but not ones with NS!
Which leads to another question: wasn't Mr. Better's 2007 result just a fluke?
but even neural network redundancy may not be enough to make a profit.
Bullshit.
In many problems, a 4-layer perceptron shows much better results and convergence.
And in some places a 5-layer one is used; its middle hidden layer captures intermediate data for additional analysis.
By the way, for that matter, unfolded recirculation networks are nothing but a perceptron, and an unfolded non-linear RNS is just a 5-layer perceptron.
I'll keep quiet for now about complex networks (with multiple output layers and intricate links) built on the perceptron.
It's kind of complicated.
I know that two theorems were proved not so long ago. According to the first, a three-layer nonlinear NS (three layers of neurons, with a nonlinearity at the output of each) is a universal approximator, and increasing the number of layers further does not increase the power of the network. According to the second, the computational power of the network does not depend on the specific type of nonlinearity at the outputs of its neurons: what matters is that there is a nonlinearity at all, not whether it is a sigmoid or an arctangent. This saves us from trying to pick the best among equals.
These two theorems radically simplify the choice of NS architecture for us and noticeably reduce the amount of research work required.
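For reference, the standard construction behind that first theorem (textbook material, not anything specific to this thread) is one hidden layer of nonlinear neurons feeding the output:

f(x) ≈ Σ_j v_j · σ( Σ_i w_ji·x_i + b_j ),

i.e. with enough hidden neurons such a network can approximate any continuous function on a bounded region to arbitrary accuracy, and per the second theorem it does not matter whether σ is a sigmoid, tanh or arctangent.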
In addition, an unambiguous relation can be proved between the optimal length of the training sample on historical data, the dimensionality of the NS input and the total number of its synapses, in the sense of minimizing the prediction error on data that did not take part in training the network. This lets you avoid hand-picking that optimum by trial and error. With the computing power available today, it saves a noticeable amount of time and effort.
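The exact relation meant here isn't written out in the thread, but a widely quoted rule of thumb of the same flavour (Widrow's, also given in Haykin) ties the number of training vectors P to the total number of synapses W and the tolerated generalization error ε:

P ≳ W / ε,

so a net with, say, W = 500 weights and a 10% target error wants on the order of 5000 training vectors; train it on much less history and the "optimum" found is likely just memorized noise.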
Dear forum members, the topic of this thread is Neural Networks, how to master them, where to start?
Let's get closer to the topic....
Closer to the subject? No problem! Start by writing a neuron, and then combine neurons into a network. Advanced software packages come later. All other advice is rubbish.
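A minimal sketch of that advice in plain Python/numpy (all names here are mine, nothing from the thread): one neuron is a weighted sum plus a nonlinearity, and a network is just neurons stacked into layers.

import numpy as np

def neuron(x, w, b):
    # a single neuron: weighted sum of the inputs passed through a nonlinearity
    return np.tanh(np.dot(w, x) + b)

class Layer:
    def __init__(self, n_in, n_out, rng):
        # one row of weights per neuron in the layer
        self.W = rng.normal(0.0, 1.0 / np.sqrt(n_in), (n_out, n_in))
        self.b = np.zeros(n_out)
    def forward(self, x):
        # a layer is n_out such neurons looking at the same input vector
        return np.tanh(self.W @ x + self.b)

class Net:
    def __init__(self, sizes, rng):
        # e.g. sizes = [10, 7, 1]: 10 inputs, 7 hidden neurons, 1 output
        self.layers = [Layer(a, b, rng) for a, b in zip(sizes[:-1], sizes[1:])]
    def forward(self, x):
        for layer in self.layers:
            x = layer.forward(x)
        return x

rng = np.random.default_rng(0)
net = Net([10, 7, 1], rng)
print(net.forward(rng.normal(size=10)))   # output of the untrained net for a random input

Training (picking the weights) is a separate exercise; a backpropagation sketch appears further down the thread.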
An NS in a TS is a fit, and an excessive one at that...
any matrix apparatus in a TS is a fit,
Take, for example, that ancient way of fitting, two-parameter exponential smoothing: it is not at all worse than AI and NS.
Two-parameter exponential smoothing, NOT worse than AI and NS.
That's right: two-parameter exponential smoothing is NOT worse than a two-input NS.
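For the record, "two-parameter exponential smoothing" is presumably Holt's double exponential smoothing: a smoothed level plus a smoothed trend, each with its own coefficient. A bare-bones sketch (coefficient values and names are my own choice, not anything agreed in the thread):

def holt(series, alpha=0.3, beta=0.1):
    # level and trend initialised from the first two points
    level, trend = series[0], series[1] - series[0]
    forecasts = [level]
    for price in series[1:]:
        prev_level = level
        level = alpha * price + (1 - alpha) * (level + trend)      # smooth the level
        trend = beta * (level - prev_level) + (1 - beta) * trend   # smooth the trend
        forecasts.append(level + trend)                            # one-step-ahead forecast
    return forecasts

print(holt([1.10, 1.12, 1.11, 1.15, 1.16, 1.14]))

Only two free parameters (alpha, beta) have to be fitted on history, which is exactly what the comparison with a two-input NS is about.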
On the other hand, in a real NS we are free to choose the dimensionality of the input. For a price-type series the typical dimensionality lies in the range of 10-100. It would be difficult to build a moving average with that many parameters, and certainly impossible to optimize them in reasonable time. That is what NS uses error backpropagation for: it is much faster than the genetic algorithm in the tester, let alone a brute-force search over the parameters (a sketch of such a training loop is a little further down).
i.e. all the worries are like babysitting: what to feed it, when to feed it, what to compare with what = a kind of sinecure,
I've made up my mind, tested it, and stuck my hands in my pockets))) waiting for the results.
computer works - trader rests)))) nothing personal.
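To make that comparison concrete, here is what gradient training looks like, as opposed to handing the weights to the tester's genetic optimizer. A toy sketch of the standard textbook loop (one hidden layer, mean-squared error, made-up data; none of it comes from the posters):

import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))              # 200 training vectors, 10 inputs each
y = np.sin(X.sum(axis=1, keepdims=True))    # a made-up nonlinear target

W1 = rng.normal(0, 0.3, (10, 7)); b1 = np.zeros(7)   # input -> hidden
W2 = rng.normal(0, 0.3, (7, 1));  b2 = np.zeros(1)   # hidden -> output
lr = 0.05

for epoch in range(500):
    h = np.tanh(X @ W1 + b1)                 # forward pass: hidden layer
    out = h @ W2 + b2                        # linear output
    err = out - y
    g_out = 2 * err / len(X)                 # gradient of the mean-squared error
    g_W2 = h.T @ g_out;  g_b2 = g_out.sum(axis=0)
    g_h = (g_out @ W2.T) * (1 - h ** 2)      # tanh'(z) = 1 - tanh(z)^2
    g_W1 = X.T @ g_h;    g_b1 = g_h.sum(axis=0)
    # gradient step: this is what replaces the genetic search over the weights
    W1 -= lr * g_W1; b1 -= lr * g_b1
    W2 -= lr * g_W2; b2 -= lr * g_b2

print("final MSE:", float((err ** 2).mean()))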
...
uses error backpropagation, which is much faster than the genetic algorithm in the tester, let alone a brute-force search over the parameters.
Backpropagation gives NO guarantee of finding the global minimum of the error function; and as for 10-100 inputs
at the input of the network... no one has repealed input reduction yet: two inputs would be enough if the principal components are extracted beforehand, but the trouble is that this analysis, like the training itself, is also done on history.
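For the reduction mentioned above, a minimal sketch of projecting a set of lag-vector inputs onto their first two principal components (plain numpy SVD; the lag construction is only an illustration, not anything prescribed in the thread):

import numpy as np

rng = np.random.default_rng(2)
prices = np.cumsum(rng.normal(size=1000))     # stand-in for a price series
lags = 20
X = np.array([prices[i - lags:i] for i in range(lags, len(prices))])  # 20 lagged values per sample

Xc = X - X.mean(axis=0)                 # centre each input column
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
inputs_2d = Xc @ Vt[:2].T               # the two inputs that would go into the NS

explained = (S[:2] ** 2).sum() / (S ** 2).sum()
print(inputs_2d.shape, "variance kept:", round(float(explained), 3))
# the caveat from the post stands: Vt was itself estimated on history,
# so these "two inputs" are only as good as that history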