How to form the input values for the NN correctly.

 
sergeev wrote >>

klot, I think he posted normalisation of a regular MA with StdDev.

It is better to feed the rate of change of the MA, i.e. its first derivative, rather than the MA itself.
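
A minimal sketch of that idea, combining the MA derivative with the StdDev normalisation mentioned above (the periods and the rolling-StdDev scaling are my own illustration, not something sergeev specified):

```python
import numpy as np

def ma_derivative_input(close: np.ndarray, ma_period: int = 14,
                        std_period: int = 50) -> np.ndarray:
    """First difference of a simple MA, scaled by a rolling StdDev."""
    # Simple moving average via convolution ('valid' keeps full windows only).
    ma = np.convolve(close, np.ones(ma_period) / ma_period, mode="valid")
    d_ma = np.diff(ma)  # first derivative: bar-to-bar change of the MA
    # Rolling standard deviation of the derivative as the scale factor.
    out = np.full_like(d_ma, np.nan)
    for i in range(std_period, len(d_ma)):
        sigma = d_ma[i - std_period:i].std()
        out[i] = d_ma[i] / sigma if sigma > 0 else 0.0
    return out
```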

 
LeoV wrote >>

It wasn't about indicators, it was about normalising the price over a given window, where the highs and lows are picked.

There it was about normalising a sample...
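
For reference, min-max normalisation of price over a sliding window of highs and lows might look like this (a minimal sketch; the window length and the use of close prices are my assumptions, not anything from the thread):

```python
import numpy as np

def minmax_window(close: np.ndarray, window: int = 100) -> np.ndarray:
    """Scale each price into [0, 1] using the high/low of the last `window` bars."""
    out = np.full_like(close, np.nan, dtype=float)
    for i in range(window, len(close)):
        lo = close[i - window:i + 1].min()
        hi = close[i - window:i + 1].max()
        out[i] = (close[i] - lo) / (hi - lo) if hi > lo else 0.5
    return out
```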

 
sergeev wrote >>


Roughly speaking, I have outlined a plan for my work with the neural network, or rather the things I need to pay attention to when developing it.

1. Preparation of the input data (mean removal, decorrelation, covariance equalization).

Can you be more specific? Preferably with at least a minimum of detail. Interesting. (A whitening sketch of this preprocessing follows after this post.)

3. The issue of overtraining (overfitting) the network.

Well, it's easy to check, and it can be treated by reducing the number of parameters. If that doesn't cure it, we have to resample.

4. Cross-checking

More details, please.

7. Possibility of using self-organising maps (or Kohonen and Grossberg layers?).

I think they're perfect for pattern search.

8. Committee of networks.

Not quite: committees are used in expert systems, and this is a bit different.

9. Recurrent networks.

Not worth it yet, IMHO.
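
On point 1, the three preprocessing steps (mean removal, decorrelation, covariance equalization) together amount to whitening the inputs, as the Haykin figure referenced below discusses. A minimal sketch via PCA whitening; the function itself and the epsilon guard are my own illustration, not anything posted in the thread:

```python
import numpy as np

def whiten(X: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Mean removal, decorrelation, covariance equalization (PCA whitening).

    X is an (n_samples, n_features) matrix of raw inputs; the result has
    zero mean and (approximately) identity covariance.
    """
    Xc = X - X.mean(axis=0)               # 1) mean removal
    cov = np.cov(Xc, rowvar=False)        # feature covariance matrix
    eigval, eigvec = np.linalg.eigh(cov)  # 2) principal axes of the data
    Xrot = Xc @ eigvec                    #    rotate -> decorrelated features
    return Xrot / np.sqrt(eigval + eps)   # 3) equalize variances to 1
```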

 
sergeev wrote >>

klot, I think he posted normalisation of a regular MA with StdDev.

I know you can normalise price too, just not by the maximum-to-minimum method...
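
The StdDev-based alternative to min-max scaling would be a rolling z-score, something like this (a sketch under my own choice of window; the details are assumptions, not from the thread):

```python
import numpy as np

def zscore_window(x: np.ndarray, window: int = 100) -> np.ndarray:
    """Rolling z-score: (x - rolling mean) / rolling StdDev."""
    out = np.full_like(x, np.nan, dtype=float)
    for i in range(window, len(x)):
        w = x[i - window:i + 1]
        sigma = w.std()
        out[i] = (x[i] - w.mean()) / sigma if sigma > 0 else 0.0
    return out
```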

 
TheXpert wrote >>

Can you be more specific? Preferably with at least a minimum of detail. Interesting.

Fig. 4.11 from S. Haykin, the book mentioned at the beginning of the thread.

Well, it's easy to check, and it can be treated by reducing the number of parameters. If that doesn't cure it, we have to resample.

Figs. 4.18, 4.19 ibid.

More details, please.

Figs. 4.20, 4.21 ibid.

 
sergeev wrote >>

Fig. 4.11 from S. Haykin, the book mentioned at the beginning of the thread.

Figs. 4.18, 4.19 ibid.

Figs. 4.20, 4.21 ibid.


Yep, I'll have something to read tonight, quite possibly breaking out the code soon :)

 
Will you share?
 
sergeev wrote >>
Will you share?

Of course, otherwise I wouldn't have brought it up. If there turns out to be anything worth sharing, that is.

 
sergeev wrote >> Cross-checking
TheXpert wrote >> More details, please.
A cross-check is when a network is trained, for example, on the 2007 segment, and the best result achieved on 2007 is then "tested" on the 2008 segment; if that result is better than the previous one (also "tested" on 2008), that network is kept, and so on. This way you may not get the best possible result on 2007, but that doesn't matter, because the network is validated on 2008. This avoids over-training (for the network) or over-optimisation (for the trading system).
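
A minimal sketch of that procedure: keep the candidate whose score on the *out-of-sample* period improves. Here `train_model` and `score` are hypothetical placeholders standing in for whatever training and evaluation one actually uses, not anything from the thread:

```python
def cross_check(train_data, test_data, n_trials, train_model, score):
    """Model selection by out-of-sample validation.

    train_model(data) -> model   (e.g. optimised on the 2007 segment)
    score(model, data) -> float  (e.g. profit on the 2008 segment)
    n_trials: number of training runs (e.g. different initialisations).
    """
    best_model, best_score = None, float("-inf")
    for _ in range(n_trials):
        model = train_model(train_data)   # train on 2007
        s = score(model, test_data)       # "test" on 2008
        if s > best_score:                # keep only if out-of-sample improves
            best_model, best_score = model, s
    return best_model, best_score
```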
 
sergeev wrote >> 8. Committee of networks.

Usually it's done 2 out of 3 or 3 out of 5. That is, out of 3 nets, at least 2 must agree on the signal. Committees are of course better, since 3 not-very-profitable nets can give a much higher profit than each one individually. But the nets for a committee have to be chosen very deliberately, as not every network works properly alongside another.
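
A 2-out-of-3 vote could be sketched like this (the +1/-1/0 signal encoding is my own assumption for illustration):

```python
def committee_signal(signals: list, min_agree: int = 2) -> int:
    """Majority vote of a committee: each net outputs +1 (buy), -1 (sell) or 0.

    Returns the direction only if at least `min_agree` nets agree, else 0.
    """
    buys = signals.count(1)
    sells = signals.count(-1)
    if buys >= min_agree and buys > sells:
        return 1
    if sells >= min_agree and sells > buys:
        return -1
    return 0

# e.g. a 2-out-of-3 committee:
print(committee_signal([1, 1, -1]))  # -> 1 (two nets vote buy)
```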
