Machine learning in trading: theory, models, practice and algo-trading - page 3084

 
Viktor Kudriavtsev #:

Hello everyone. I am trying to train the Expert Advisors from the large series of articles about neural networks on this site. I get the impression that they are not trainable. I tried asking the author questions under the articles, but unfortunately he practically never answers them... :(

Accordingly, a question for the forum members: please tell me how long a neural network needs to be trained before it starts giving some non-random result?

I have tried all the EAs from article 27 to the latest one - the result is always the same: random. I ran from 300 to 1000 training epochs, as the author indicates. For the EAs that train by iterations, I ran from 100,000 to 20,000,000 iterations, in 2-3 passes - still random.

How long should training take? What is a sufficient training-sample size (if the sample is created in advance)?

PS: I have read the basic material on neural networks on Google, and I am generally familiar with them. Everyone writes that 100-200 epochs should already give a result (on images, digits, classification tasks).

And where is it written that they should give a non-random result? :) The abundance of near-identical articles already suggests a wrong direction.

Reinforcement learning is not designed for such tasks; its field of application is quite different. You can play with it, though.
 
Lilita Bogachkova #:

Yes,

but the large number of identical values makes me question the overall quality of the data.
Example: seq = [5,5,5,...,5] (45 fives) produces window/target pairs [5,5,5,5,5,5,5] → [5]; [5,5,5,5,5,5,5] → [5]; ...
I don't see the point of feeding the model such training data;

So I'm still sifting out all data that is not unique.
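That filtering step can be sketched in a few lines of Python (the function names are illustrative, not taken from the article series): exactly repeated windows add no information, so only the first occurrence of each (features, target) pair is kept.

```python
def make_windows(seq, n):
    """Split a sequence into (features, target) pairs with a sliding window."""
    return [(tuple(seq[i:i + n]), seq[i + n]) for i in range(len(seq) - n)]

def drop_duplicate_windows(pairs):
    """Keep only the first occurrence of each (features, target) pair."""
    seen, out = set(), []
    for p in pairs:
        if p not in seen:
            seen.add(p)
            out.append(p)
    return out

seq = [5] * 45                        # the degenerate all-fives sequence from the example
pairs = make_windows(seq, 7)          # 38 identical ((5,)*7, 5) samples
unique = drop_duplicate_windows(pairs)
print(len(pairs), len(unique))        # 38 1
```

On a real price series most windows differ, so far fewer samples are dropped; the degenerate sequence above is the extreme case Lilita describes.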

I could be wrong, but it also seems wrong to me to feed the model training data like the following:

[1,2,3,4,5] [5];

[1,2,3,4,5] [6];

[1,2,3,4,5] [7];

[1,2,3,4,5] [8];

...
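This second case - identical feature vectors mapped to different targets - can be flagged mechanically before training. A minimal sketch in Python (illustrative names, not from the article series):

```python
from collections import defaultdict

def find_contradictions(pairs):
    """Group samples by feature vector; return the vectors mapped to several targets."""
    targets = defaultdict(set)
    for features, target in pairs:
        targets[tuple(features)].add(target)
    return {f: t for f, t in targets.items() if len(t) > 1}

pairs = [([1, 2, 3, 4, 5], 5),
         ([1, 2, 3, 4, 5], 6),
         ([1, 2, 3, 4, 5], 7),
         ([2, 3, 4, 5, 6], 7)]
print(find_contradictions(pairs))    # {(1, 2, 3, 4, 5): {5, 6, 7}}
```

Whether to drop such samples or keep them (a model can still learn a conditional distribution over the targets) is a modelling decision; the check itself is cheap.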

It's a load of crap

 
Aleksey Vyazmikin #:

You don't have a result on the training sample either?

That series of articles is not a ready-made, out-of-the-box solution - nobody will reveal the most valuable thing in machine learning: the predictors. So before trying the methods proposed there, you need to develop a set of predictors that can potentially describe price behaviour.

Yes, it doesn't work on the training sample either. It just doesn't work anywhere. And what are the predictors in this case? The author describes taking parameters from the chart in the form of candlesticks, time, and 4 indicators. The neural network model is given there as well.
 
Maxim Dmitrievsky #:

where is it written that they should give a non-random result? :) The abundance of near-identical articles already suggests a wrong direction.

Reinforcement learning is not designed for such tasks; its field of application is quite different. You can play with it, though.
The author gives a graph and statistics from the strategy tester at the end of each article. Well, if those statistics are fictitious, then yes...
 
Viktor Kudriavtsev #:
It doesn't work on the training sample either. It just doesn't work anywhere. And what are the predictors in this case? The author describes taking parameters from the chart in the form of candlesticks, time, and 4 indicators. The neural network model is given there as well.

If it does not work even on the training sample, then the problem is probably on your side. Neural networks take a long time to train - unlike tree models.

Predictors can be anything: any factor affecting the price with a stable probabilistic outcome - indicators, for example.

Well, the author did have some errors in the code (critical for non-Intel cards) - he fixed them a couple of versions back at people's request.
 
Aleksey Vyazmikin #:

If it does not work even on the training sample, then the problem is probably on your side. Neural networks take a long time to train - unlike tree models.

Predictors can be anything: any factor affecting the price with a stable probabilistic outcome - indicators, for example.

Well, the author did have some errors in the code (critical for non-Intel cards) - he fixed them a couple of versions back at people's request.

Well, I have the latest articles, and everything from 27 to 35 (GoExplore onwards) seems to compile and run normally. Articles 36-38 did not work - those are the ones that train in the tester. My card is an Nvidia GTX 660 Ti.

What could the problem be on my side? My EA compiles and runs, and the training process (error and progress on the graph) proceeds. And how long is "a long time"? The author also writes repeatedly that you need to repeat the iterations of collecting examples and training, but nowhere does he give even approximate figures. For example, I trained for 500 epochs and the first trades started coming out negative. At least some concrete numbers would help. Otherwise it is not clear at all whether I am training too much and something is wrong, or I have not trained enough and it is too early to expect anything.

 
Viktor Kudriavtsev #:
The author provides a graph and statistics from the strategy tester at the end of each article. Well, if the statistics is fictitious, then yes....
Those are very modest tests over a short period, and it is impossible to draw unambiguous conclusions from them. If it doesn't work even on the training set, it means something went badly wrong somewhere :) The approach itself is unsuitable, because such a training process is difficult to control. And if you manage to find the right control function (the rewards), you no longer need the approach at all.
I've tried different ones, and I couldn't get stable results.

And it is computationally more complicated than genetic optimisation, yet no better in terms of efficiency. The same can be done in a single iteration with similar results. Without expert knowledge in the field of trading, nothing good will come of it.
 
Maxim Dmitrievsky #:
Those are very modest tests over a short period, and it is impossible to draw unambiguous conclusions from them. If it doesn't work even on the training set, it means something went badly wrong somewhere :) The approach itself is unsuitable, because such a training process is difficult to control. And if you manage to find the right control function (the rewards), you no longer need the approach at all.

I've tried different ones, and I couldn't get stable results.

And it is computationally more complicated than genetic optimisation, yet no better in terms of efficiency. The same can be done in a single iteration with similar results. Without expert knowledge in the field of trading, nothing good will come of it.
I tried to train the Expert Advisors from this series using the genetic and evolutionary methods (articles 30 and 31). The author uses 1000 epochs in the parameters, with a population of 50 individuals per epoch, as I understand it. The best result is printed to the log during training. Over 200 epochs this best result did not change from the initial one. I also set a population of 100 individuals and trained for about 150 epochs - same effect. So I gave up on this method and moved on to the newer ones.
 
Viktor Kudriavtsev #:
I tried to train the EAs from this series using the genetic and evolutionary methods (articles 30 and 31). The author uses 1000 epochs in the parameters, with a population of 50 individuals per epoch, as I understand it. The best result is printed to the log during training. Over 200 epochs this best result did not change from the initial one. I also set a population of 100 individuals and trained for about 150 epochs - same effect. So I gave up on this method and moved on to the newer ones.

It is a complete waste of time to chase any newfangled models, especially complex ones.

RF is ideal - a simple and very clear model. You can add two or three more old, well-tested models if you plan to get the final result through an ensemble of models (this gives roughly a 5% error reduction).
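An ensemble of the kind mentioned here usually amounts to averaging or voting over the individual predictions. A minimal majority-vote sketch in plain Python (the ~5% figure above is the poster's estimate, not something this code demonstrates):

```python
from collections import Counter

def majority_vote(predictions):
    """Combine class predictions from several models by per-sample majority vote."""
    return [Counter(col).most_common(1)[0][0] for col in zip(*predictions)]

# Hypothetical class predictions of three models on five samples
model_a = [1, 0, 1, 1, 0]
model_b = [1, 1, 1, 0, 0]
model_c = [0, 0, 1, 1, 0]
print(majority_vote([model_a, model_b, model_c]))   # [1, 0, 1, 1, 0]
```

Voting helps only when the models' errors are not perfectly correlated, which is why the poster suggests combining models of different, well-tested types.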

My scepticism has a very simple explanation: our main enemy is the NON-stationarity of financial markets, which means that the central limit theorem and all the statistics built on it - the various correlations, variances and so on - do not work. And, by the way, neither do estimates like RMSE.


That is why you have to start with preprocessing (data mining). Without obtaining a set of predictors with a sufficiently stable connection to the target (the teacher), it is pointless to talk about anything else. It is the quality of this connection that determines the prediction error and its stability across different parts of the quotes; the model has almost nothing to do with it. If with RF you get approximately the same prediction error in-sample and out-of-sample, below 20%, then you can try more advanced models or an ensemble of models on the same set of predictors and target to reduce the error further - but that buys only a few per cent, which is not worth spending time on anything other than RF.
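The acceptance test described here - roughly equal in-sample and out-of-sample error, both under 20% - can be written down directly. A sketch under the assumption that "approximately the same" means within about 5 percentage points (that gap threshold is my reading, not stated in the post):

```python
def error_rate(y_true, y_pred):
    """Fraction of misclassified samples."""
    return sum(a != b for a, b in zip(y_true, y_pred)) / len(y_true)

def stable_enough(err_in, err_out, max_err=0.20, max_gap=0.05):
    """Both errors below max_err and close to each other, per the criterion above."""
    return err_in < max_err and err_out < max_err and abs(err_in - err_out) < max_gap

print(stable_enough(0.15, 0.17))   # True: acceptable
print(stable_enough(0.15, 0.35))   # False: out-of-sample error much worse (overfitting)
```

In practice `err_in` and `err_out` would come from, say, a random forest scored on the training sample and on a held-out section of the quotes.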

There is one more condition: a mathematical demonstration of the stability of the predictors' connection with the target, i.e. of the stationarity of that connection - obtaining the variance of the connection and showing that it is at least approximately stable in the GARCH sense.
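A proper GARCH treatment needs a dedicated library (e.g. the Python `arch` package), but a crude proxy for "stability of the connection" can be computed with nothing but the standard library: measure the predictor-target correlation in consecutive windows and look at how much it varies. This is my simplification, not the poster's method:

```python
import math

def corr(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

def rolling_link_stability(predictor, target, window=50):
    """Correlation in consecutive non-overlapping windows, plus its variance.
    A small variance suggests the predictor-target link is roughly stationary."""
    cs = [corr(predictor[i:i + window], target[i:i + window])
          for i in range(0, len(predictor) - window + 1, window)]
    mean = sum(cs) / len(cs)
    return cs, sum((c - mean) ** 2 for c in cs) / len(cs)

# Synthetic predictor with a deliberately stable link to the target
predictor = [math.sin(i / 5) for i in range(200)]
target = [p + 0.1 * math.sin(1.7 * i) for i, p in enumerate(predictor)]
cs, var = rolling_link_stability(predictor, target)
print([round(c, 3) for c in cs], round(var, 6))
```

On real quotes the interesting outcome is the opposite one: windows where the correlation collapses mark exactly the regime changes that make the connection non-stationary.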

And all these "epochs" and eras of testing mean nothing - you fuss a lot and seem to be on topic, but you are running in place.

 
Viktor Kudriavtsev #:
Don't listen to anyone, there are no experts here.