Market etiquette or good manners in a minefield - page 70

 
Neutron >> :

The variance is a bit high though, you need to increase the stats.

Maybe for an hourly quotes breakdown this is quite normal? I increased the statistics to 1000. The result is a bit lower. The statistics over d are being computed now; I'll show them when they're done.
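For context on why more statistics help, here is a minimal sketch (my own illustration, not the thread's code) of how the spread of an averaged statistic shrinks roughly as 1/sqrt(N) with the number of independent experiments. The "tangent" below is just a dummy least-squares slope on synthetic Wiener-process increments; the sample sizes are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def tangent_one_experiment(n=500):
    # Dummy "tangent": least-squares slope between an uninformative forecast
    # and Wiener-process increments; its true value is zero.
    realization = rng.normal(size=n)
    forecast = rng.normal(size=n)
    return np.polyfit(forecast, realization, 1)[0]

for n_experiments in (10, 100, 1000):
    tg = np.array([tangent_one_experiment() for _ in range(n_experiments)])
    # The mean stays near zero; its standard error falls roughly as 1/sqrt(n_experiments).
    print(n_experiments, round(tg.mean(), 4), round(tg.std() / np.sqrt(n_experiments), 4))
```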

 
Neutron >> :

More precisely, from stating the fact of quasi-non-stationarity it does not follow that the market is completely efficient.

Yeah, I get it, and it makes more sense to me that way too. Although I would use the word "quasi-non-stationarity" in a word game, to give the guessers more time. :)

 

Here are the statistics over d for the Wiener process:


 

With K=2 it looks nicer (on the Wiener process). I just set K=1 everywhere, because with it the network both learns and performs significantly better.



And this is for 1000 experiments on the quotes (K=1):


 
paralocus wrote >>

Here are the statistics over d for the Wiener process:

The figure doesn't look very informative. Plot only the tangents as a function of the input dimension, for the training sample and for the test sample, in one figure, and enable autoscaling on the ordinate axis.
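A quick, hedged sketch of the figure being asked for (assuming matplotlib; the d range and the tg values below are placeholders, not real results): tangent versus input dimension d, training and test curves in one axes, ordinate autoscaled.

```python
import numpy as np
import matplotlib.pyplot as plt

d = np.arange(2, 25)               # scanned input dimensions (placeholder range)
tg_train = np.full(d.size, 0.5)    # placeholder values; substitute the measured training tangents
tg_test = np.full(d.size, 0.1)     # placeholder values; substitute the measured test tangents

fig, ax = plt.subplots()
ax.plot(d, tg_train, "o-", label="training sample")
ax.plot(d, tg_test, "s-", label="test sample")
ax.set_xlabel("input dimension d")
ax.set_ylabel("tangent tg")
ax.autoscale(enable=True, axis="y")   # autoscaling on the ordinate axis
ax.legend()
plt.show()
```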

 

Is that OK?

 

Now explain what you are getting at.

If you look at your figure where the tangent for the Wiener process is shown:

it is not difficult to estimate its value visually: tg = 1/2 for the training sample. But in your last figure the value of tg does not exceed 0.1.

Comment.
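For what it's worth, here is how I read the tangent being discussed: the least-squares slope of the line through the cloud of (prediction, realization) points, so tg = 1/2 means that cloud leans at a slope of one half. A minimal sketch under that assumption (the function and variable names are mine, not the thread's code):

```python
import numpy as np

def tangent(prediction, realization):
    # Least-squares slope of realization vs. prediction: cov(x, y) / var(x).
    prediction = np.asarray(prediction, dtype=float)
    realization = np.asarray(realization, dtype=float)
    return np.cov(prediction, realization)[0, 1] / np.var(prediction, ddof=1)

rng = np.random.default_rng(1)
realized = rng.normal(size=1000)
informative = 0.5 * realized + rng.normal(size=1000)   # forecast correlated with the move
useless = rng.normal(size=1000)                        # forecast uncorrelated with the move
print(tangent(informative, realized))   # clearly above zero
print(tangent(useless, realized))       # near zero
```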

 
Could it be the speed? I'm going to look into it.
 

This is because I multiplied the calculated tangent by the Wiener-process volatility, which was itself calculated incorrectly (the differences between samples were accumulated without being squared).
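A hedged sketch of that correction (my own code, not the thread's): the per-step volatility of a Wiener-like series has to come from squared increments (an RMS), whereas accumulating the raw differences lets positive and negative steps cancel and drives the estimate toward zero.

```python
import numpy as np

def volatility_wrong(series):
    # The bug, roughly: accumulating raw increments; on a Wiener process the
    # positive and negative steps cancel and the result hovers near zero.
    d = np.diff(series)
    return abs(d.sum()) / d.size

def volatility(series):
    # Correct per-step volatility: root of the mean squared increment.
    d = np.diff(series)
    return np.sqrt(np.mean(d ** 2))

rng = np.random.default_rng(0)
wiener = np.cumsum(rng.normal(scale=1.0, size=10_000))
print(volatility_wrong(wiener))   # badly underestimates (close to 0)
print(volatility(wiener))         # close to 1.0, the true step scale
```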


These pictures are for K=1


 

Now congratulations!

We can see that the single-layer NN code works correctly: the tangent is zero on a random process (once we accumulate more statistics) and statistically significantly different from zero on a market time series. Now we can move on to the universal approximator (a two-layer nonlinear NN) and compare its results with those of the linear neuron. After polishing, we can compare the returns obtained with different numbers of neurons in the hidden layer and experiment with different input data.
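Since the next step is a comparison of the linear neuron with a two-layer nonlinear NN, here is a minimal, hedged sketch of such a comparison on lagged increments. Everything here is an assumption for illustration (layer size, learning rate, epoch count, the closed-form fit for the linear neuron); it is not the code used in the thread.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_dataset(series, d):
    # Inputs: d lagged increments; target: the next increment.
    inc = np.diff(series)
    X = np.array([inc[i:i + d] for i in range(inc.size - d)])
    return X, inc[d:]

def train_linear(X, y):
    # Single linear neuron, fitted in closed form (no bias term, for brevity).
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return lambda Z: Z @ w

def train_two_layer(X, y, hidden=4, lr=0.05, epochs=500):
    # Two-layer NN: tanh hidden layer, linear output, plain gradient descent on MSE.
    w1 = rng.normal(scale=0.1, size=(X.shape[1], hidden))
    w2 = rng.normal(scale=0.1, size=hidden)
    for _ in range(epochs):
        h = np.tanh(X @ w1)
        err = h @ w2 - y
        w2 -= lr * (h.T @ err) / y.size
        w1 -= lr * (X.T @ (np.outer(err, w2) * (1.0 - h ** 2))) / y.size
    return lambda Z: np.tanh(Z @ w1) @ w2

# On a pure Wiener process both test-sample tangents should average to zero over many runs;
# on a market series the interesting question is whether either stays above zero.
series = np.cumsum(rng.normal(size=20_000))     # synthetic Wiener series as a stand-in
X, y = make_dataset(series, d=8)
split = X.shape[0] // 2
for name, fit in (("linear neuron", train_linear), ("two-layer NN", train_two_layer)):
    model = fit(X[:split], y[:split])
    pred = model(X[split:])
    tg = np.cov(pred, y[split:])[0, 1] / np.var(pred, ddof=1)
    print(name, tg)
```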
