Neural networks, how to master them, where to start?

 
nord >> :

>> ...but Better had very good results, so something can be squeezed out of them if they are used correctly.

Judging by the results of the 2008 Championship, in which Mr. Better and other participants used NNs - what places did their Expert Advisors take?

As we saw in 2008, the leaders of the Championship were Expert Advisors of every kind - just not ones with NNs!

Which leads to another question: wasn't Mr. Better's 2007 win just a fluke?

 
An NN in a trading system is a curve fit with redundancy,
but even a neural network's redundancy may not be enough to make a profit.
 
TheXpert wrote >>

Bullshit.

In many problems a 4-layer perceptron shows much better results and convergence.

And in some places a 5-layer one is used; its middle hidden layer captures an intermediate representation of the data for additional analysis.

By the way, for that matter, unfolded recirculation networks are nothing but a perceptron, and an unfolded nonlinear recirculation network is just a 5-layer perceptron.

I'll keep silent for now about the complex networks (with multiple output layers and intricate links) that are built on the perceptron.

It's kind of complicated.

I know that not long ago two theorems were proved. According to the first, a three-layer nonlinear NN (three layers of neurons, with a nonlinearity at the output of each) is a universal approximator, and increasing the number of layers further does not increase the network's power. According to the second, the computational power of the network does not depend on the specific type of nonlinearity at the outputs of its neurons. What matters is that there is a nonlinearity at all; whether it is a sigmoid or an arctangent makes no difference. This spares us from trying to pick the best among equals.
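A minimal sketch of what the second theorem means in practice (Python, with toy dimensions assumed purely for illustration): the nonlinearity is just a pluggable function, and swapping sigmoid for tanh or arctangent changes nothing structurally.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, weights, act=np.tanh):
    """Three-layer nonlinear network: a nonlinearity after every layer.
    Per the second theorem, `act` can be tanh, sigmoid, arctan, ... -
    the network's approximation power is the same."""
    for W in weights:
        x = act(W @ x)
    return x

# Toy dimensions: 10 inputs -> 7 hidden -> 5 hidden -> 1 output.
rng = np.random.default_rng(0)
weights = [rng.normal(size=(7, 10)),
           rng.normal(size=(5, 7)),
           rng.normal(size=(1, 5))]
x = rng.normal(size=10)
print(forward(x, weights, act=np.tanh))
print(forward(x, weights, act=sigmoid))   # same architecture, equal power
print(forward(x, weights, act=np.arctan))
```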

These two theorems radically simplify the choice of NN architecture for us and noticeably cut down the amount of research work required.

In addition, we can prove an unambiguous relation between the optimal length of the training sample on historical data, the dimensionality of the NN's input, and the total number of its synapses, in the sense of minimizing the prediction error on data that took no part in training. This spares us from hunting for that optimum by hand; given current computing power, it saves appreciable time and effort.
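The exact relation is not written out in the post, but a classic rule of thumb in the same spirit (the Baum-Haussler bound, assumed here as a stand-in, not the poster's own formula) ties the training sample length to the number of synapses and the tolerated generalization error:

```python
def rough_training_length(n_inputs, n_hidden, tolerated_error=0.1):
    """Hypothetical stand-in for the relation mentioned above (the exact
    formula is not given in the thread). Uses the classic Baum-Haussler
    rule of thumb: samples >= weights / tolerated error."""
    n_weights = n_inputs * n_hidden + n_hidden  # synapses of a 1-hidden-layer net
    return int(n_weights / tolerated_error)

# e.g. 20 inputs, 10 hidden neurons, 10% tolerated generalization error:
print(rough_training_length(20, 10))  # -> 2100 training samples
```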

 
Andrey4-min >> :

Dear forum members, the topic of this thread is Neural Networks, how to master them, where to start?

Let's get back on topic...

Back on topic? No problem! Start by writing a single neuron, then combine neurons into a network. The advanced software can come later. All other advice is rubbish.
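A minimal sketch of exactly that path (Python for illustration; all names here are made up): one neuron as a weighted sum plus a nonlinearity, then neurons combined into layers and layers into a network.

```python
import math
import random

class Neuron:
    """A single neuron: weighted sum of inputs plus bias, squashed by tanh."""
    def __init__(self, n_inputs):
        self.w = [random.uniform(-1, 1) for _ in range(n_inputs)]
        self.b = random.uniform(-1, 1)

    def fire(self, inputs):
        s = sum(w * x for w, x in zip(self.w, inputs)) + self.b
        return math.tanh(s)

class Layer:
    """A layer is just several neurons reading the same inputs."""
    def __init__(self, n_neurons, n_inputs):
        self.neurons = [Neuron(n_inputs) for _ in range(n_neurons)]

    def fire(self, inputs):
        return [n.fire(inputs) for n in self.neurons]

# Combine layers into a network: 3 inputs -> 4 hidden -> 1 output.
hidden, out = Layer(4, 3), Layer(1, 4)
print(out.fire(hidden.fire([0.5, -0.2, 0.1])))
```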

 
Korey >> :
An NN in a trading system is a curve fit with redundancy...

Any matrix apparatus in a trading system is a curve fit.

Take, for example, an ancient fitting method: two-parameter exponential smoothing. It is not at all worse than AI and NNs.
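For reference, two-parameter exponential smoothing (Holt's method, with a level parameter alpha and a trend parameter beta) fits in a dozen lines; the parameter and price values below are arbitrary illustrations.

```python
def holt_smoothing(series, alpha=0.5, beta=0.3):
    """Two-parameter (Holt) exponential smoothing: `alpha` smooths the
    level, `beta` smooths the trend. Returns the smoothed series and a
    one-step-ahead forecast."""
    level, trend = series[0], series[1] - series[0]
    smoothed = [level]
    for x in series[1:]:
        prev_level = level
        level = alpha * x + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
        smoothed.append(level)
    return smoothed, level + trend  # forecast = current level + trend

prices = [1.302, 1.305, 1.303, 1.308, 1.311, 1.309]
smoothed, forecast = holt_smoothing(prices)
print(forecast)
```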

 
PraVedNiK wrote >>

Two-parameter exponential smoothing is NOT worse than AI and NNs.

That's right: two-parameter exponential smoothing is NOT worse than a two-input NN.

On the other hand, with a real NN we are free to choose the dimensionality of the input. For price-type time series, the typical dimensionality lies in the range of 10-100. It would be hard to build a moving average with that many parameters, and certainly impossible to optimize them in reasonable time. For this, an NN uses error backpropagation, which is noticeably faster than the genetic algorithm in the tester, let alone brute-force parameter search.
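A bare-bones sketch of such a backpropagation loop (numpy, with toy data standing in for lagged prices; not code from the thread): each pass pushes the output error back through the layers and nudges every synapse at once, instead of re-running a tester per parameter combination.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hid = 10, 5                        # e.g. 10 lagged inputs
X = rng.normal(size=(200, n_in))           # toy training set
y = np.tanh(X.sum(axis=1, keepdims=True))  # toy target

W1 = rng.normal(size=(n_in, n_hid)) * 0.1
W2 = rng.normal(size=(n_hid, 1)) * 0.1
lr = 0.05

for epoch in range(500):
    # forward pass
    h = np.tanh(X @ W1)
    out = np.tanh(h @ W2)
    err = out - y
    # backward pass: the error flows from the output to every weight
    d_out = err * (1 - out**2)             # tanh'(z) = 1 - tanh(z)^2
    d_h = (d_out @ W2.T) * (1 - h**2)
    W2 -= lr * h.T @ d_out / len(X)
    W1 -= lr * X.T @ d_h / len(X)

print("final MSE:", float((err**2).mean()))
```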

 
Working with an NN in a trading system makes me shudder - there is no need to think at all,
i.e. all the worries are like a girl's: what to fill it with, when to feed it, what to compare with what = a kind of sinecure.
I make up my mind, test it, and stick my hands in my trouser pockets (to play with the balls))), waiting for the results.
The computer works - the trader rests)))) Nothing personal.
 
No, if you have enough diligence and knowledge to fit a theoretical framework to a regularity you found in the quotes, then by all means - exploit it to your advantage! But how long will that pattern live? These are not the laws of mechanics, which are eternal. Tomorrow the market's behavior will change, and you will have to take out a sheet of paper and do the sums again... Better to let a trading system with an NN do it. Dumb, but ironclad.
 
Neutron >> :

...

uses error backpropagation, which is noticeably faster than the genetic algorithm in the tester, let alone brute-force parameter search.

The backpropagation method gives NO guarantee of finding the global minimum of the error function. And as for 10-100 inputs to the network... Actually, no one has repealed dimensionality reduction yet: two inputs would be enough if a principal component analysis is done beforehand. The trouble is that this analysis, just like the training itself, is also done on history.
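For the record, that preliminary principal component analysis is only a few lines (numpy sketch with made-up data; note that the components are themselves estimated from history, which is exactly the objection).

```python
import numpy as np

def first_components(X, k=2):
    """Project the data onto its first k principal components - the kind
    of pre-analysis described above. The directions are estimated from
    the historical sample X itself."""
    Xc = X - X.mean(axis=0)              # center each input
    # SVD of the centered data yields the principal directions.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                 # N x k reduced inputs

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 30))           # e.g. 30 raw network inputs
X2 = first_components(X, k=2)            # down to 2 inputs for the net
print(X2.shape)                          # (500, 2)
```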

 
Well, don't paint it so black. After all, the network is periodically retrained (I do it on every time series I analyze), and on average the method does find a minimum. As for dimensionality reduction, it unfortunately does not work for price-type time series to the degree you describe.