Market etiquette or good manners in a minefield - page 80

 
gpwr wrote >>

Why is there statistical scatter around the blue circles? If the weights start from zero, there shouldn't be any scatter at all.

The point is that I'm not gathering the statistics on the same training sample; I shift the sample by one bar on each cycle. That's why the training results don't coincide with one another. I don't remember why I did it that way, but it doesn't change the essence. Apparently I wanted to show the quasi-stationary processes in the market and their influence on the learning speed.
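For readers following along, here is a minimal Python sketch of the difference between the two ways of collecting statistics (the original work was done in Mathcad; the window length, run count and function name below are illustrative assumptions, not the thread's exact values):

```python
import numpy as np

def training_windows(series, window=120, n_runs=10, shift_per_run=1):
    """Yield one training window per run.
    shift_per_run=0 reproduces the 'same training sample' case, so any scatter
    comes only from random initialization; shift_per_run=1 moves the window by
    one bar each cycle, so every run trains on slightly different data."""
    series = np.asarray(series, dtype=float)
    for k in range(n_runs):
        start = k * shift_per_run
        yield series[start:start + window]
```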

Here's what the results look like when averaging 10 experiments on the same training sample (fig. left):

You can see that there is no statistical variation for weights with zero initialization.

The figure on the right is for a network with 12 inputs, 5 neurons in the hidden layer and 1 output neuron, with a training sample of 120 examples, i.e. a copy of your case. The statistics were gathered over 50 independent numerical experiments. Again, everything works correctly.
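For illustration, a minimal Python sketch of such an experiment: a 12-5-1 network trained 50 times on the same 120-example sample, with weight statistics collected across runs. This is an assumed reconstruction (the thread's experiments were done in Mathcad, and the synthetic data below merely stands in for the price-difference series):

```python
import numpy as np

def train_12_5_1(X, y, epochs=300, lr=0.05, rng=None):
    """Train a 12-5-1 MLP (tanh hidden layer, linear output) by plain
    gradient descent on the mean squared error; return the flattened weights."""
    rng = np.random.default_rng() if rng is None else rng
    n_in, n_hid = X.shape[1], 5
    W1 = rng.normal(0.0, 0.1, size=(n_in, n_hid))   # input -> hidden weights
    W2 = rng.normal(0.0, 0.1, size=n_hid)           # hidden -> output weights
    for _ in range(epochs):
        H = np.tanh(X @ W1)                  # hidden activations, shape (N, 5)
        err = H @ W2 - y                     # output error, shape (N,)
        gW2 = H.T @ err / len(y)             # gradient w.r.t. W2
        gW1 = X.T @ (np.outer(err, W2) * (1.0 - H**2)) / len(y)
        W2 -= lr * gW2
        W1 -= lr * gW1
    return np.concatenate([W1.ravel(), W2])

# Statistics over 50 independent runs on the same 120-example training sample.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(120, 12)), rng.normal(size=120)
runs = np.vstack([train_12_5_1(X, y, rng=np.random.default_rng(k))
                  for k in range(50)])
print(runs.mean(axis=0)[:5])    # average weights across experiments
print(runs.std(axis=0)[:5])     # their scatter
```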

If you used EURUSD H1 opening prices normalized by their standard deviation, their mean is not zero. Or did you subtract the mean?

No, I used the first differences of the opening prices as inputs (I thought that was clear from the context). Their mean is obviously zero. I predicted the amplitude and sign of the next difference.
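A short sketch of this preprocessing, assuming the inputs are the last few normalized differences and the target is the next one (the helper name and the choice of 12 inputs are assumptions for illustration):

```python
import numpy as np

def make_dataset(open_prices, n_inputs=12):
    """Build (X, y) from first differences of opening prices.
    Each row of X holds the last n_inputs normalized differences;
    y is the next difference, whose sign and amplitude are the target."""
    d = np.diff(np.asarray(open_prices, dtype=float))  # first differences, mean ~ 0
    d = d / d.std()                                    # normalize by standard deviation
    X = np.array([d[i:i + n_inputs] for i in range(len(d) - n_inputs)])
    y = d[n_inputs:]                                   # the next difference to predict
    return X, y
```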

As for the theorem, I liked it. But it applies to our networks only as a special case!

You have proved the degenerate case where the length of the training sample tends to infinity. Indeed, in that case, for an input vector that is a random variable with zero expectation, we obtain zero weights: the best forecast of tomorrow's value of an integrated random variable is today's value! But as soon as we take a training sample of finite length, the trained weights tend to the equilibrium values that minimise the squared error. As an example proving this statement, take the case of a system of linear algebraic equations (SLAE), which is essentially the same NN. There the weights are uniquely determined, the training error on the training sample is identically zero (the number of unknowns equals the number of equations), and the weights (the coefficients of the unknowns) are obviously not zero.
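The SLAE argument can be checked directly in a few lines. Below is a hedged Python illustration with synthetic zero-mean data: when the number of training examples equals the number of weights of a linear single-layer "network", the weights are the unique solution of the linear system, the in-sample error is zero to machine precision, and the weights are clearly nonzero:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 12                              # number of unknowns = number of equations
X = rng.normal(size=(N, N))         # zero-mean random inputs (one row per example)
y = rng.normal(size=N)              # targets (the next increments)

w = np.linalg.solve(X, y)           # uniquely determined weights of the SLAE X w = y
print(np.abs(X @ w - y).max())      # training error ~ 0 (machine precision)
print(w)                            # the coefficients are obviously not zero
```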

 

Something in this construct doesn't work for me:


 
Why does the abscissa scale show a range of +/-1? It should be +/-N... Maybe you've hard-set the +/-1 limits and now you can't see anything beyond what's in the picture.
 

I don't think that's the reason. I didn't set the range limits at all. Now they are hard-set from -N to +N:


I suspect it's a Mathcad glitch. I've already got the new version, but the post office isn't working today. I'll only be able to pick it up tomorrow.

 
paralocus wrote >>

I don't think that's the reason. I didn't set the range limits at all. Now they are hard-set from -N to +N:

I suspect it's a Mathcad glitch. I've already got the new version, but the post office isn't working today. I'll only be able to pick it up tomorrow.

It works fine for me:

Show me the values of the vector, something like F=... What do you get?

 
 

Ahhh. You know what to do - truncate your dif to an integer inside the loop: dif[i]=trunc(K*(Open[i]-Open[i-1])). Maybe your source quotes are not 4-digit. Look at how the quotes themselves appear in the table.
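A small Python sketch of that one-liner, assuming 4-digit quotes so the scale factor is K = 10^4 (the function name is illustrative; the thread's original is a Mathcad expression):

```python
import numpy as np

def integer_point_diffs(open_prices, digits=4):
    """Convert price differences to whole points:
    dif[i] = trunc(K*(Open[i] - Open[i-1])) with K = 10**digits.
    For 4-digit EURUSD quotes K = 10000; for a 5-digit feed use digits=5."""
    K = 10 ** digits
    open_prices = np.asarray(open_prices, dtype=float)
    return np.trunc(K * np.diff(open_prices)).astype(int)

print(integer_point_diffs([1.3000, 1.2988, 1.2993]))  # differences in whole points
```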

 

Yes, it worked...

Strange, why don't I have four digits in my quotes?


 

Where from, where from... Because you've set Mathcad to display numbers to three decimal places.

Oh no. The problem is in the original quotes after all. Look at the raw data.

 
This is because of the difference in data types. trunc() simply truncates the first quote difference to an integer value.