Neural networks. Questions for the experts.

 

Gentlemen, good afternoon. A question for the experts in the field of neural networks. The situation is this. I installed Statistica and began my research with its automatic neural networks, a multilayer perceptron. My goal was to find out how good neural networks are at discovering patterns.

What did I do? I took the most ordinary LWMA (linearly weighted moving average) over the last 20 bars. As the target (output) I gave the current LWMA value, and as inputs the last 20 prices on which that value depends. Obviously, a person who knows the last 20 prices and the LWMA formula could reconstruct its value with 100% accuracy. The network did not know the formula; its task was to work it out in its own way. The result: the network reconstructed the LWMA 100%, i.e. it understood how the LWMA is built. We can say it coped with the task perfectly; if there is a pattern, the network really does find it. I then ran a similar experiment with the EMA, SAR and oscillators. Same result: 100%.

After that I decided to complicate the task and took an adaptive moving average (AMA). Let me remind you that it changes its averaging parameter depending on market volatility, and the volatility, in turn, is calculated over a certain number of bars. I fed in all the bars needed to build the AMA and started the network. The result was much worse than 100%, although a person who knows the AMA formula and has all the points could build the AMA 100%. In effect, the network failed. I am talking about automatic neural networks.
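For anyone who wants to reproduce the setup outside Statistica, here is a minimal sketch in Python; the synthetic price series and all names are mine, purely for illustration. Each input row is the last 20 closes, and the target is the LWMA computed from exactly those closes.

import numpy as np

# Synthetic close prices; any real series would do just as well
rng = np.random.default_rng(0)
close = 100 + np.cumsum(rng.normal(0.0, 1.0, 5000))

period = 20
w = np.arange(1, period + 1)   # LWMA weights 1..20, newest bar weighted heaviest

# Inputs: rows of 20 consecutive closes; target: the LWMA of each row
X = np.lib.stride_tricks.sliding_window_view(close, period)
y = (X * w).sum(axis=1) / w.sum()

print(X.shape, y.shape)   # (4981, 20) (4981,)

Since y is an exact, static linear function of X, a network that sees these pairs has everything it needs to recover the formula, which matches the 100% result.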

Conclusions and questions for the experts in the field.

1) Did I understand correctly that a neural network cannot reconstruct a function that is inherently dynamic, as with the AMA, even when it is given all the data needed for the calculation, whereas if the formula is rigidly static, as with the LWMA or EMA, there is no problem?

2) If I am wrong, which networks should be used? I used an MLP in Statistica.

3) I have heard the opinion that there is no fundamental difference between automatic nets and nets of one's own, er... design, if I may put it that way. Is that really the case?

4) What networks and what programs would you advise for financial markets, in particular for the task I described, i.e. reconstructing a value from fully known data?

Respectfully, mrstock

 

1) The network is able to recover the function if the input data contains it. If in the last experiment the averaging period depends on volatility, then the network had to produce some estimate of that volatility itself, i.e. you may not have supplied all the data needed for reconstruction (see the AMA sketch after these answers).

2) You can squeeze everything you need out of an MLP. Use other networks only when you can show mathematically that another architecture is better than an MLP (a small MLP sketch follows below).

3) NS2 - fast, quality results, easy to transfer anywhere...
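On answer 1), a concrete illustration. The thread does not say which adaptive average was used, so take Kaufman's AMA as a representative example (an assumption on my part). Its formula is recursive, so a fixed window of closes never determines the current value exactly:

import numpy as np

def kama(close, er_period=10, fast=2, slow=30):
    # Kaufman's adaptive moving average
    fast_sc, slow_sc = 2 / (fast + 1), 2 / (slow + 1)
    ama = np.empty(len(close))
    ama[:er_period] = close[:er_period]            # seed the recursion
    for t in range(er_period, len(close)):
        change = abs(close[t] - close[t - er_period])
        vol = np.abs(np.diff(close[t - er_period:t + 1])).sum()
        er = change / vol if vol > 0 else 0.0      # efficiency ratio (volatility-based)
        sc = (er * (fast_sc - slow_sc) + slow_sc) ** 2   # adaptive smoothing constant
        # The crux: today's AMA depends on yesterday's AMA, hence on the
        # entire price history, not just the last er_period bars
        ama[t] = ama[t - 1] + sc * (close[t] - ama[t - 1])
    return ama

An MLP fed only a fixed window of closes is therefore asked to approximate a value with hidden state, which is exactly why the reconstruction comes out worse than 100%.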
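And on answer 2), a sketch of the MLP experiment in scikit-learn (not what Statistica or NS2 use internally; the layer size and other parameters are arbitrary choices of mine):

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
close = 100 + np.cumsum(rng.normal(0.0, 1.0, 5000))
w = np.arange(1, 21)
X = np.lib.stride_tricks.sliding_window_view(close, 20)
y = (X * w).sum(axis=1) / w.sum()             # LWMA target, as in the first post

split = int(0.7 * len(X))                     # chronological train/test split
mlp = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000,
                                 random_state=0))
mlp.fit(X[:split], y[:split])
print("test R^2:", mlp.score(X[split:], y[split:]))   # close to 1.0 for a static formula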

 

That's not even the main problem. Fine, you have taught the network to understand that 2x2=4, and even that a simple moving average is the arithmetic mean of prices. But how do you teach the network to predict? This is where the main question about the network's mental capacity arises.

 
I completely agree with you, but at the stage I am at now, and for the tasks I am solving, I need exactly what I described. In the screenshot we have the price, the EMA (purple) and the adaptive MA (red); the EMA is recovered perfectly by feeding in only the close, but the red average is not. And if you fit results that make no sense outside the training sample, then everything looks OK! What should I do? NS2 is a neurosolver, as I understand it?
 
Hmmm... The file isn't attached for some reason. I'll try to figure out what's causing it.
 

 
There, it all worked out.
 
StatBars wrote >> 3) NS2 - fast, quality results, easy to transfer anywhere...

I wonder what training stop criterion you use in NS2?

 
LeoV wrote >>

I wonder what training stop criterion you use in NS2?

The error on the test set stops decreasing... I usually do at least 3-5 training runs, sometimes more when the result matters, with selection of the number of neurons in the layers, or rather in one layer; a few runs to see the spread and the minimum.
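In generic terms, that criterion is ordinary early stopping plus random restarts. A sketch using scikit-learn's built-in version (NS2's internals are unknown to me; validation_fraction stands in for the "test" set here, and all parameters and the toy data are my assumptions):

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Toy inputs/targets; in practice, the dataset under study
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 20))
y = X.mean(axis=1)

# "3-5 trainings to see the spread and the minimum": restart with different
# seeds; each run stops once the monitored error stops decreasing
for seed in range(5):
    mlp = make_pipeline(StandardScaler(),
                        MLPRegressor(hidden_layer_sizes=(10,),
                                     early_stopping=True,      # monitor held-out error
                                     validation_fraction=0.2,  # the held-out part
                                     n_iter_no_change=20,      # "stops decreasing"
                                     max_iter=2000,
                                     random_state=seed))
    mlp.fit(X, y)
    print(seed, mlp.score(X, y))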

 
StatBars wrote >> The error on the test set stops decreasing...

In my opinion, when the error on the test set stops decreasing, it is most likely already overtraining. How does the network behave out of sample (OOS), with such a minimal error on the test set?

 
LeoV wrote >>

In my opinion, when the error on the test set stops decreasing, it is most likely already overtraining. How does the network behave OOS, with such a minimal error on the test set?

If the number of neurons is selected correctly, the network behaves exactly the same as on the training set; moreover, with a 200,000 sample the same result is obtained with a much smaller training sample (more than 5 times smaller).

I.e. sometimes, by selecting the number of neurons, we can make the errors on the test and training samples equal.

If the neurons are selected incorrectly, the error on the test set is a bit larger, but it holds up on the "general" sample.
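What "selecting the neurons" can look like in sketch form (my own illustration, not StatBars's actual procedure): sweep the hidden-layer size and keep the one where the training and test errors come out close.

import numpy as np
from sklearn.metrics import mean_squared_error
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Toy data with a nonlinear target, noise included
rng = np.random.default_rng(1)
X = rng.normal(size=(3000, 20))
y = np.tanh(X[:, :5].sum(axis=1)) + 0.1 * rng.normal(size=3000)

split = int(0.7 * len(X))
for n in (2, 4, 8, 16, 32):                   # candidate neuron counts in the layer
    mlp = make_pipeline(StandardScaler(),
                        MLPRegressor(hidden_layer_sizes=(n,), max_iter=1000,
                                     random_state=0))
    mlp.fit(X[:split], y[:split])
    e_train = mean_squared_error(y[:split], mlp.predict(X[:split]))
    e_test = mean_squared_error(y[split:], mlp.predict(X[split:]))
    print(n, round(e_train, 4), round(e_test, 4))   # pick n where the two are close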
