Market etiquette or good manners in a minefield - page 31

 
paralocus wrote >>

Hi Neurton! Anyway, no luck with the double layer yet.

I wrote a single-layer perceptron with error backpropagation and ran it all day yesterday. It behaves strangely: sometimes it learns, sometimes it doesn't, and it is catastrophically sensitive to the number of epochs.

So my results are as follows: at 8 epochs the grid does not learn, at 12 epochs it learns, at 13 epochs it does not learn.

In short, I can't boast of any results yet.

In any case, I'll describe the algorithm I implemented. See if I've missed something.

1. The perceptron has D binary inputs, one of which is a constant +1.

2. The input series (BP) is the sequence of successive quote increments taken over the Open prices.

3. All weights are initialised with small random values from the +/-1 range before starting.

4. The length of the training vector is calculated as P = 4 * D*D/D = 4*D.

5. The training vector is fed to the grid input and the grid error is calculated as Qs = Test - OUT, where Test is the value of the series at reading n+1 (i.e. the next reading) and OUT is the grid output at reading n.

6. To propagate the error back to the inputs, the grid error Qs is multiplied by the derivative of the squashing function (1 - OUT*OUT): Q = Qs*(1 - OUT*OUT).

7. During the epoch, the correction for each weight entering the neuron is calculated and accumulated: COR[i] += Q*D[i].

8. Separately, SQR[i] += COR[i]*COR[i] is calculated and accumulated over the entire epoch for each weight entering the neuron.

9. At the end of the epoch, the individual correction for each weight is calculated and added to it: W[i] += COR[i]/SQR[i].

I tried using a factor of (1 - j/N), as well as randomizing weights whose absolute values grew above 20. Randomizing helps more.
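The nine steps above can be sketched in numpy. This is a minimal sketch under my reading of the post: tanh is assumed as the squashing function (consistent with the 1 - OUT*OUT derivative), step 8 is taken to accumulate the square of each per-sample correction increment (the wording is ambiguous), and the square root that the later replies show was missing from step 9 is already included here:

```python
import numpy as np

def train_epoch(w, X, y):
    """One epoch of the single-layer perceptron training described above.

    w : (D,) weight vector, small random initial values in the +/-1 range
    X : (P, D) binary inputs; column 0 is the constant +1 input
    y : (P,) targets: the next reading of the series (Test)
    """
    cor = np.zeros_like(w)                # accumulated correction COR[i]
    sqr = np.zeros_like(w)                # accumulated squares SQR[i]
    for x, test in zip(X, y):
        out = np.tanh(w @ x)              # grid output OUT
        qs = test - out                   # grid error Qs = Test - OUT
        q = qs * (1.0 - out * out)        # Q = Qs * (1 - OUT*OUT)
        cor += q * x                      # step 7: COR[i] += Q * D[i]
        sqr += (q * x) ** 2               # step 8: accumulate squared increments
    # step 9, with the root the thread later adds (eps guards against /0):
    w = w + cor / np.sqrt(sqr + 1e-12)    # W[i] += COR[i]/sqrt(SQR[i])
    return w
```

The update is applied once per epoch, as in the post, rather than per sample; that matches the "accumulated during the epoch" wording of steps 7 and 8.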

P.S corrected an error in the text.

paralocus, feed this to your grid's input:

instead of the opening prices. And post the result. If it doesn't learn, then the error is catastrophic and you need to do some serious searching.

P.S. Do you divide by the vector norm or by its square? It should be the norm, but what you wrote looks like a sum of squares without taking the root.
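The norm-versus-square distinction is easy to see numerically: dividing the accumulated correction by the sum of squares instead of by its root changes the step length. A small numpy illustration with made-up numbers:

```python
import numpy as np

cor = np.array([0.3, -0.4])       # hypothetical accumulated corrections, |cor| = 0.5
sqr = np.sum(cor ** 2)            # sum of squares = 0.25 (no root taken)

step_wrong = cor / sqr            # dividing by the square: step length 2.0
step_right = cor / np.sqrt(sqr)   # dividing by the norm: unit step length
```

With the root, the correction is a unit vector along the accumulated direction; without it, small accumulated corrections are blown up and large ones crushed, which fits the erratic epoch-dependence reported above.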

 

Thanks, I'm trying it out.

Here are the results for AUDUSD, H4, D = 13, 33 epochs. Testing was carried out on a part of history from 2009.01.08 to 2009.05.21

There are some more points (epoch counts: 31, 25, 14, 10, 7) obtained with the optimizer, but their results are worse.





P.S. Exactly! The root is... square... I forgot to take it out!

 
paralocus wrote >>

P.S. Exactly! The root is... square... I forgot to take it out!

Just like in that Chapaev joke! :-)

 
Neutron >> :

It's like that Chapaev joke! :-)

-:) ... Crying, but sharpening his sabre...

 

No, it's the one where Petka yanked out all the bushes in the yard when he was asked to extract the square root :-)

Here, look. This is a double-layer NS (neural net) with two inputs and 8 training epochs per sample position. There are 500 samples in total, and the probability of a correct prediction is calculated as the average over 10 independent experiments (for statistical significance of the results):

Red shows the result on the training sample, blue on the test sample. You can see that the results differ little, and that is good: it shows the network is not overtrained. A frequent mistake is to choose a training vector that is too short (less than the optimal length), get almost 100% hits on the training sample, and never look at the predictions for the sample that did not take part in training. In that case the out-of-sample result is usually close to zero. This is overtraining: the grid has simply learned the lesson by heart and cannot generalize. Then people wonder why they lose.
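The overtraining check described here can be sketched in a few lines: fit on a deliberately short training vector of pure noise and compare hit rates in and out of sample. A hypothetical numpy illustration, with a least-squares fit standing in for many training epochs:

```python
import numpy as np

def hit_rate(w, X, y):
    """Fraction of samples where sign(tanh(w.x)) matches the target sign."""
    return float(np.mean(np.sign(np.tanh(X @ w)) == np.sign(y)))

# made-up data: pure-noise targets, so there is nothing real to learn
rng = np.random.default_rng(1)
D, P = 16, 8                                  # training vector far shorter than 4*D
X_tr = np.sign(rng.standard_normal((P, D)))   # binary +/-1 inputs
y_tr = np.sign(rng.standard_normal(P))
X_te = np.sign(rng.standard_normal((200, D)))
y_te = np.sign(rng.standard_normal(200))

# underdetermined fit: near-perfect on the training sample by construction
w, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)
```

On this data the training hit rate is near 1 while the test hit rate hovers around 0.5: exactly the "learned the lesson by heart" signature the post warns about.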

 

Hooray!!!

My deposit just passed 1K !

Neutron, are the inputs binary for the grid trained on the series built by your formula?

 

I divide the series (BP) constructed by this formula into segments of the same price increment H and take the first difference. I round the increments to +/-1 and try to predict them. In the figure, the trading horizon H in pips is plotted on the abscissa and the probability on the ordinate. So yes, the inputs are binary.
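The construction described here (segments of equal price increment H, first difference rounded to +/-1) reads like a kagi/renko-style partition. The function below is my sketch of that reading; the name and interface are hypothetical:

```python
import numpy as np

def binary_increments(price, H):
    """Reduce a price series to a +/-1 series of equal-size moves.

    A new reading is emitted each time price has moved by at least H
    from the last recorded level, so every first difference of the
    partitioned series has magnitude H and rounds to +1 or -1.
    """
    level = price[0]
    out = []
    for p in price[1:]:
        while abs(p - level) >= H:
            step = H if p > level else -H
            level += step
            out.append(1 if step > 0 else -1)
    return np.array(out)
```

For example, a price path 0.0 -> 0.25 -> -0.1 -> 0.05 with H = 0.1 yields two up-moves, three down-moves, then one up-move.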

 

That's how I understood it. To look at this formula with my own eyes, I put it on the chart as an indicator. I feed all of this to the grid input, but I can't figure out how to see the results.

I.e., I can't do it with the tester. I can only print out the weights and see whether it's... alive or not.

No, that's not how I did it. I did not break the resulting series into segments of equal increment; I took consecutive increments of the series over D readings.

 
paralocus wrote >>

That's how I understood it. To look at this formula with my own eyes, I put it on the chart as an indicator. I feed all of this to the grid input, but I can't figure out how to see the results.

I.e., I can't do it with the tester. I can only print out the weights and see whether it's... alive or not.

No, that's not how I did it. I did not break the resulting series into segments of equal increment; I took consecutive increments of the series over D readings.

How about displaying the weights via Comment()?

 
FION >> :

Can weights be displayed through Comment()?

You can, but it is inconvenient, because each subsequent call to Comment() overwrites the output of the previous one, since it is drawn at the same graphical coordinates. Print() is therefore better.
