Machine learning in trading: theory, models, practice and algo-trading - page 363

 
Maxim Dmitrievsky:

Why are you all so narrow-minded? Everyone here writes as if they had never seen anything sweeter than a carrot :) Fuck predictors, teach the neuron to find the predictors itself, teach a neuron to teach a neuron, experiment :) Obviously it is very dumb to just take indicators, feed them to the input and put a zigzag on the output; I don't know why everyone keeps discussing it :)

I would like it very much, of course, but...

For example, convolutional neural networks partially have this "skill": they can learn the pre-processing (filtering) themselves. But they are very narrowly specialized, descendants of Fukushima's Neocognitron, designed as an "analogue" of the eye's retina and trained by backprop, and even for pictures a CNN is not a trivial thing. There are a lot of magic numbers and shamanic architectures in the configuration: one step to the left or to the right and everything breaks. In other words, it is very fine tuning to a task, not a universal solver of anything and everything. Unfortunately, from an engineering point of view, developing a miracle system that does everything by itself is a naive utopia; it is much easier to build individual solutions to individual tasks, which can still take years of teamwork, while general AI remains something philosophical and humanitarian.

 
Alyosha:

I would like it very much, of course, but...

For example, convolutional neural networks partially have this "skill": they can learn the pre-processing (filtering) themselves. But they are very narrowly specialized, descendants of Fukushima's Neocognitron, designed as an "analogue" of the eye's retina and trained by backprop, and even for pictures a CNN is not a trivial thing. There are a lot of magic numbers and shamanic architectures in the configuration: one step to the left or to the right and everything breaks. In other words, it is very fine tuning to a task, not a universal solver of anything and everything. Unfortunately, from an engineering point of view, developing a miracle system that does everything by itself is a naive utopia; it is much easier to build individual solutions to individual tasks, which can still take years of teamwork, while general AI remains something philosophical and humanitarian.


But I believe it is possible to make a good self-optimizing device: it will not work perfectly all the time, but from time to time it will give the best results.

It obviously will not be done with standard indicators and a zigzag on the output, though :) To me that even sounds like child's play, fit only as a toy example.

 
Maxim Dmitrievsky:

It's not just that they are uncorrelated: as the sample grows they will outright contradict each other. As a result the output is either confusion (always 0 or 1) or 0.5... and that is for a single output neuron, i.e. sell if > 0.5 and buy if < 0.5. You feed it correct (in your opinion) predictors and correct answers: if RSI is oversold and the market rises over the next n bars, output 0; if RSI is overbought and the market falls, output 1. But there will be plenty of cases where it is the other way around, and the network will go dull, mix the signals up and fall into a trance. In the end the output will always sit around 0.5 with very small deviations to one side or the other... and the same will happen with all oscillators, because they are not predictors, they are derivatives of the price :)
You should buy if it is > 0.8 and sell if it is < 0.2. That way you sift out the noise that lies in the middle of the range, i.e. around 0.5.
 
elibrarius:
You should buy if it is > 0.8 and sell if it is < 0.2. That way you sift out the noise that lies in the middle of the range, i.e. around 0.5.

you don't understand)
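
Whatever its merits for oscillator inputs, the thresholding itself is easy to sketch. A minimal Python illustration (the 0.8/0.2 cut-offs come from the post above; the outputs are made-up numbers):

    def signal_from_output(p, hi=0.8, lo=0.2):
        # Map a network output in [0, 1] to a trade signal;
        # outputs near 0.5 are treated as noise and skipped.
        if p > hi:
            return "buy"   # strong vote for the "up" class
        if p < lo:
            return "sell"  # strong vote for the "down" class
        return None        # middle of the range: stay out

    outputs = [0.93, 0.55, 0.48, 0.12, 0.81]  # made-up network outputs
    print([signal_from_output(p) for p in outputs])
    # prints ['buy', None, None, 'sell', 'buy']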
 
A neural network is artificial intelligence, and no thinking system will work for free... it is not stupid... Offer it money before training, or promise it a percentage of the proceeds, and then it will start finding real patterns and bringing in income.
 
A question for those who have been in the NS field for a long time.
In the neural networks in ALGLIB and in R, are the weight coefficients selected in the range from -1 to 1?
 
elibrarius:

It's complicated... It will take more time than understanding the algorithm (as with the K-correlation above) and writing it myself. I think a function that tries all the inputs, calculates the correlations and sifts out the highly correlated ones will take a couple of hours.

Hopefully the other solutions for sifting out predictors will be just as easy)

So, are there other solutions for finding unnecessary predictors?
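
The correlation sifting mentioned above really does fit in a few lines. A minimal Python sketch (the 0.9 threshold and the toy columns are illustrative assumptions, not numbers from the posts):

    import numpy as np
    import pandas as pd

    def drop_correlated(df, threshold=0.9):
        # Drop every column whose absolute Pearson correlation with an
        # earlier, already-kept column exceeds the threshold.
        corr = df.corr().abs()
        keep = []
        for col in df.columns:
            if all(corr.loc[col, k] < threshold for k in keep):
                keep.append(col)
        return df[keep]

    # toy example: x2 duplicates x1 almost exactly, so it is sifted out
    rng = np.random.default_rng(0)
    x1 = rng.normal(size=500)
    df = pd.DataFrame({"x1": x1,
                       "x2": x1 + 0.01 * rng.normal(size=500),
                       "x3": rng.normal(size=500)})
    print(drop_correlated(df).columns.tolist())  # ['x1', 'x3']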

Take a look here.

When estimating the importance of predictors, keep in mind that it is a composite value, not determined by information criteria alone. In the article I gave an example of selecting predictors with RandomUniformForest precisely because it shows the importance of predictors from different angles. In my opinion it is the most successful package for this purpose.

Specifically for neural networks, the importance of predictors can be determined from their weights in the first layer. This approach is used in H2O. If I find time, I will give an example.
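
A rough sketch of the first-layer idea: sum the absolute outgoing weights of each input and normalize. This is the simplest variant; H2O's built-in variable importance for deep learning uses a more elaborate scheme, so the Python below illustrates the principle, not H2O's exact algorithm:

    import numpy as np

    def first_layer_importance(W1):
        # W1: first-layer weight matrix, shape (n_inputs, n_hidden).
        # An input that only ever gets near-zero weights contributes
        # little, so the sum of absolute outgoing weights serves as
        # a crude importance score.
        imp = np.abs(W1).sum(axis=1)
        return imp / imp.sum()  # normalize so the scores sum to 1

    # toy 3-input, 4-neuron layer: input 0 carries the largest weights
    W1 = np.array([[ 0.9, -1.2,  0.8, -0.7],
                   [ 0.1,  0.0, -0.2,  0.1],
                   [-0.4,  0.3,  0.2, -0.3]])
    print(first_layer_importance(W1))  # ~[0.69, 0.08, 0.23]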

Besides the importance of predictors, it is also necessary to identify noise examples (rows) and either put them into a separate class or remove them from the training set. See NoiseFilterR.
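
The noise-row idea can also be sketched without the package: flag the rows whose out-of-fold prediction disagrees with their label, a common classification-filter scheme. The classifier and the toy data below are assumptions for illustration, not what NoiseFilterR does internally:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_predict

    def noise_rows(X, y, n_splits=5, seed=0):
        # Rows whose out-of-fold prediction disagrees with their label
        # are candidates for removal or for a separate "noise" class.
        clf = RandomForestClassifier(n_estimators=200, random_state=seed)
        pred = cross_val_predict(clf, X, y, cv=n_splits)
        return pred != y

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 4))
    y = (X[:, 0] > 0).astype(int)
    y[:10] = 1 - y[:10]  # inject 10 deliberately mislabeled rows
    mask = noise_rows(X, y)
    print(mask[:10].sum(), "of the 10 injected noise rows were flagged")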

Good luck

 
elibrarius:
A question for those who have been in the NS field for a long time.
In the neural networks in ALGLIB and in R, are the weight coefficients selected in the range from -1 to 1?
Do you mean the initial initialization of the weights, or something else?
 

Vladimir Perervenko:
In the article I gave an example of selecting predictors with RandomUniformForest precisely because it shows the importance of predictors from different angles.

It would be interesting to know the algorithm that calculates the importance in this function, in order to create an analogue in MQL.

Vladimir Perervenko:

Specifically for neural networks, the importance of predictors can be determined from their weights in the first layer. This approach is used in H2O. If I find time, I will give an example.

I have thought about that too; it is not too difficult to implement.

Vladimir Perervenko:

Besides the importance of predictors, it is also necessary to identify noise examples (rows) and either put them into a separate class or remove them from the training set. See NoiseFilterR.

This is new to me, thanks for the idea, I will have to think about it).


Vladimir Perervenko:
Do you mean the initial initialization of the weights, or something else?
I mean the full range within which they are selected: is it from -1 to 1?
The initial values, I suppose, are chosen randomly (or, as a variant, in the middle of the range).
 
elibrarius:

It would be interesting to know the algorithm that calculates the importance in this function, in order to create an analogue in MQL.

I have thought about that too; it is not too difficult to implement.

This is new to me, thanks for the idea, I will have to think about it).


I mean the full range within which they are selected: is it from -1 to 1?
The initial values, I suppose, are chosen randomly (or, as a variant, in the middle of the range).

Neural networks are very sensitive to the initialization of the weights. This is not a trivial question at all. There are quite a few initialization methods, of which one, pretraining (used in deep neural networks), is the most promising (IMHO).

During training the weights of the neurons can take values in a wide range, from -inf to +inf. To prevent such distortions, regularization and other stabilization methods are used to limit the range of the weights.
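
As a concrete illustration of both points (the numbers are illustrative, not taken from ALGLIB or any specific R package): a typical scheme initializes weights in a small symmetric range that depends on the layer size, e.g. Xavier/Glorot uniform, and an L2 penalty (weight decay) then pulls the weights toward zero on every update, which is what keeps them bounded during training:

    import numpy as np

    rng = np.random.default_rng(0)

    # Xavier/Glorot uniform initialization: the range shrinks as the
    # layer widens, so initial weights sit well inside [-1, 1]
    n_in, n_out = 10, 5
    limit = np.sqrt(6.0 / (n_in + n_out))   # ~0.63 for a 10x5 layer
    W = rng.uniform(-limit, limit, size=(n_in, n_out))
    print("initial range: +/-", round(limit, 3))

    # L2 weight decay: each step also shrinks the weights, preventing
    # the drift toward -inf/+inf mentioned above
    lr, lam = 0.01, 1e-3
    grad = rng.normal(size=W.shape)         # stand-in for a real gradient
    W -= lr * (grad + lam * W)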

Good luck
