Using Neural Networks in Trading.

 
StatBars wrote >>

The pictures show non-normalized data, and from different datasets; I just gave an example of what I did (what it looks like as a result)

And here's the script; you can use it to see what the output will look like (but don't be too picky about the input data: this script was made just for illustration...)

//Type 0 - linear normalization, 1 - non-linear
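The script itself is not reproduced here, so below is a minimal Python sketch of the two types that comment names; reading "linear" as min-max scaling to [-1, 1] and "non-linear" as a tanh squash is an assumption, not taken from the attached script.

import numpy as np

# Hypothetical sketch: 'Type 0' as min-max scaling to [-1, 1], 'Type 1' as a
# tanh squash of the standardized series. Neither is taken from the script.
def normalize(x, norm_type=0):
    x = np.asarray(x, dtype=float)
    if norm_type == 0:
        lo, hi = x.min(), x.max()
        return 2.0 * (x - lo) / (hi - lo) - 1.0   # linear: exact range [-1, 1]
    # Non-linear: center, scale by the standard deviation, squash with tanh.
    # Output stays inside (-1, 1), but the mapping depends on the variance.
    return np.tanh((x - x.mean()) / x.std())

prices = np.cumsum(np.random.randn(500))          # synthetic series, for illustration
linear = normalize(prices, 0)
nonlinear = normalize(prices, 1)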

StatBars, I have no complaints!

Well, it works, and that's good. I just want to make it clear that the normalization procedure is not the same as equalizing the distribution, and a somewhat more sophisticated approach is needed. But, on the other hand, we get the input value in a finite range of +/-1 and in the form of a shelf - a tasty treat. Also, when you whiten the input, you get an aesthetic sense of pleasure.

 
Neutron wrote >>

StatBars, I have no complaints!

Well, it works, and that's good. I just want to make it clear that the normalization procedure is not the same as equalizing the distribution, and a somewhat more sophisticated approach is needed. But, on the other hand, we get the input value in a finite range of +/-1 and in the form of a shelf - a tasty treat. Also, when you whiten the input, you get an aesthetic sense of pleasure.

)))

By the way, I implemented a method described in an article; I don't remember exactly where I read it... In general, it uses the cumulative distribution function; the second picture is precisely the result of this method.

Anyway, here's a file with simple normalization and distribution equalization; not the best example, but nevertheless...

Have you whitened the inputs? I just have a couple of questions...
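For reference, a minimal sketch of the distribution-equalization method described above, assuming it is the usual mapping through the empirical (cumulative) distribution function; the data below are illustrative, not from the attached file.

import numpy as np

def equalize(x):
    # Pass each sample through the empirical CDF: F(X) is uniform on (0, 1)
    # for a continuous distribution, so the output histogram is a flat shelf.
    x = np.asarray(x, dtype=float)
    ranks = np.argsort(np.argsort(x))     # rank of each sample, 0..n-1
    u = (ranks + 0.5) / len(x)            # empirical CDF values in (0, 1)
    return 2.0 * u - 1.0                  # rescale to (-1, 1)

increments = np.random.randn(1000)        # synthetic increments, for illustration
shelf = equalize(increments)              # roughly uniform on (-1, 1)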


I did.

But there is whitening and then there is whitening. You can foolishly feed two MAs to two NN inputs and set out to whiten them... That is one case (a clinical one). Or you can feed bar opening prices to a dozen inputs - that's another; there is nothing to whiten there, everything is white as it is.
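A minimal sketch of what whitening means in this context, assuming standard PCA whitening; the two-input example mirrors the "clinical" case just described.

import numpy as np

def whiten(X):
    # PCA whitening: rotate into the eigenbasis of the covariance matrix and
    # rescale each component to unit variance, removing all correlations.
    Xc = X - X.mean(axis=0)
    eigval, eigvec = np.linalg.eigh(np.cov(Xc, rowvar=False))
    return Xc @ eigvec / np.sqrt(eigval + 1e-12)

a = np.random.randn(1000)
b = a + 0.1 * np.random.randn(1000)            # almost a copy of a: the 'clinical' case
W = whiten(np.column_stack([a, b]))
print(np.cov(W, rowvar=False).round(3))        # ~identity matrix: inputs decorrelated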

 
I've read the thread; it's more informative than the others, and today is Good Deeds Day, so I'll bump it to the T-O-P... :-O)
 
Neutron >> :

Leonid, overfitting the NN is not a problem in itself - it comes from not understanding the essence of what goes on inside this box with a beautiful name and three nails inside. Don't take a training sample shorter than the minimal one and you won't have to decide by gut feeling what is better and what is worse!

About "correct" inputs I agree with you 100% about this key to success - all that can be solved for NS-ku - you need to solve yourself. It needs to be left with things that the solution doesn't have or are unjustifiably difficult. For example, it does not make sense at all to feed the Zig-Zag input. In this case the behaviour of NS is obvious - it will learn what lies on the surface - familiarity of ZZ arms, and the use of such input data is zero.

There are no wrong inputs. There is a wrong task.

 

On the subject of normalisation. Not every type of normalisation can be applied in the context of the task at hand.

 
registred wrote >>

There are no wrong inputs. There is a wrong task.

Why not? There are. Besides a correctly set task, there is also the data with which that task will be solved...

And about normalization - any explanations?

 
One difficulty arises here: while all neurons are identical when a neural network is first described, so a single neuron description suffices, after the severed connections are removed the neurons usually end up with different structures.
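One possible way around this, sketched minimally: keep a single neuron description and record the pruned connections in a per-neuron mask. This is an illustrative design, not the poster's implementation.

import numpy as np

class Neuron:
    # One generic neuron description; a boolean mask records which incoming
    # connections survived pruning, so every neuron keeps the same class even
    # though, structurally, different neurons end up with different inputs.
    def __init__(self, n_inputs):
        self.w = 0.1 * np.random.randn(n_inputs)
        self.mask = np.ones(n_inputs, dtype=bool)    # True = connection kept

    def prune(self, dead):
        self.mask[list(dead)] = False                # detach selected inputs

    def activate(self, x):
        x = np.asarray(x, dtype=float)
        return np.tanh(self.w[self.mask] @ x[self.mask])

n = Neuron(5)
n.prune([1, 3])                   # this neuron now effectively has 3 inputs
print(n.activate(np.random.randn(5)))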
 
StatBars >> :

Why not? There are. Besides a correctly set task, there is also the data with which that task will be solved...

And about normalization - any explanations?

Well, there is linear normalization and there is non-linear normalization. Non-linear normalization is sensitive to the variance of new data. Linear normalization is simple and requires fewer computational steps, but with it any asymmetry of the data, so to speak, will affect the duration of training. The variance can end up being anything as a result of training, subject to certain conditions. But during training itself, a series that hasn't been normalized to zero mean and unit variance will make the network learn longer than if such normalization had been carried out.

On the first question, my personal opinion is this. I can take an MA, or I can take the increments of the series; for me there will be no difference. The essence is that if, after training the net, the result depends on which one I chose (the MA or simply the increments of the series), that would only indicate that the task was set incorrectly and the net was merely learning to reproduce what I tried to teach it, that is, just to perform the actions I showed it. It will not find regularities, that is, generalize, or will not do so correctly.

The point is to reduce the generalization error on the kind of data required at the output of the neural network. The input data can be the increments of a time series, not necessarily its smoothed version in the form of an MA or anything else. Everyone writes that the series must be smoothed, but I think it doesn't matter, since the objective regularities in the data are preserved; the main thing is to choose the right number of increments of the series.
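A minimal sketch of the zero-mean, unit-variance normalization mentioned above, applied to raw increments of a series; the statistics are fitted on the training part only. All names and split sizes are illustrative.

import numpy as np

def standardize(train, new):
    # Zero mean, unit variance; the statistics are estimated on the training
    # series only and reused for new data, so the network keeps seeing inputs
    # on the scale it was trained with.
    mu, sigma = train.mean(), train.std()
    return (train - mu) / sigma, (new - mu) / sigma

series = np.cumsum(np.random.randn(1200))     # synthetic price-like series
increments = np.diff(series)                  # raw increments, no smoothing
train_s, test_s = standardize(increments[:1000], increments[1000:])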

 
registred wrote >>

Well, there is linear normalization and there is non-linear normalization. Non-linear normalization is sensitive to the variance of new data. Linear normalization is simple and requires fewer computational steps, but with it any asymmetry of the data, so to speak, will affect the duration of training. The variance can end up being anything as a result of training, subject to certain conditions. But during training itself, a series that hasn't been normalized to zero mean and unit variance will make the network learn longer than if such normalization had been carried out.

On the first question, my personal opinion is this. I can take an MA, or I can take the increments of the series; for me there will be no difference. The essence is that if, after training the net, the result depends on which one I chose (the MA or simply the increments of the series), that would only indicate that the task was set incorrectly and the net was merely learning to reproduce what I tried to teach it, that is, just to perform the actions I showed it. It will not find regularities, that is, generalize, or will not do so correctly.

The point is to reduce the generalization error on the kind of data required at the output of the neural network. The input data can be the increments of a time series, not necessarily its smoothed version in the form of an MA or anything else. Everyone writes that the series must be smoothed, but I think it doesn't matter, since the objective regularities in the data are preserved; the main thing is to choose the right number of increments of the series.

I think it follows from your post that normalization depends more on the data than on the task at hand.

About the second part: are you comparing increments of the MA with increments of the series itself?

And in a general sense, do you mean that a trained network must be insensitive to the input data? Or that you can simply change the input data and the network must go on predicting?
