Machine learning in trading: theory, models, practice and algo-trading - page 3680

 
Forester #:

It's neural networks that need scaling; tree models don't need it. It won't help, and that's one of the reasons neural networks stopped being used.


My post isn't about where one should or shouldn't scale.

My post is an example of look-ahead. The example shows that in practice it is quite difficult to identify predictors with look-ahead. The main sign is when a step-by-step run, as in a tester, gives a classification error close to 0.5, even though in training, testing and cross-validation the error is many times smaller.
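For anyone who wants to reproduce this symptom, here is a minimal, self-contained sketch (Python; assumes numpy, pandas and scikit-learn; all data is synthetic, nothing from this thread). The deliberately leaky feature is a centered moving average: computed over the whole history it quietly uses future bars, so shuffled cross-validation looks fine while an honest bar-by-bar run drifts toward 0.5. The exact numbers will differ; only the gap matters.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
price = pd.Series(np.cumsum(rng.normal(size=3000)))          # synthetic random walk
y = (price.shift(-1) > price).astype(int).iloc[:-1]           # direction of the next bar

# "Research" feature: a centered moving average computed over the whole series.
# Its window quietly includes future bars - the look-ahead.
feat_full = (price.rolling(5, center=True, min_periods=1).mean() - price).iloc[:-1]
X_full = feat_full.to_frame("f").values

clf = RandomForestClassifier(n_estimators=50, random_state=0)
cv_error = 1 - cross_val_score(clf, X_full, y, cv=5).mean()   # shuffled folds: looks good

# Step-by-step run "as in a tester": fit once on the first 2000 bars, then at every
# new bar recompute the feature only from the data available at that moment; the
# centered window is truncated there and the future is no longer in the feature.
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_full[:2000], y.iloc[:2000])
hits = []
for t in range(2000, len(y)):
    avail = price.iloc[:t + 1]
    f_t = (avail.rolling(5, center=True, min_periods=1).mean() - avail).iloc[-1]
    hits.append(model.predict([[f_t]])[0] == y.iloc[t])

print(f"cross-validation error: {cv_error:.2f}")
print(f"step-by-step error:     {1 - np.mean(hits):.2f}")
```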

 
mytarmailS #:
Or just use regression for training; it doesn't need normalisation for features with absolute values.


https://stats.stackexchange.com/questions/613029/why-random-forest-loses-stability-on-new-data-but-gmdh-works-great


In this example GMDH is an analogue of linear regression.

I'm afraid that with this approach extrapolation will still distort the values a lot on new data. But one can experiment with it :)
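A tiny sketch of that concern (Python with scikit-learn; the data is a made-up trend, not anything from this thread): a random forest cannot predict outside the range of targets it saw in training, while a linear model, which is what GMDH reduces to here, continues the trend on new data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

x_train = np.arange(100.0).reshape(-1, 1)
y_train = x_train.ravel() * 2.0                 # a simple trending series
x_new = np.array([[150.0], [200.0]])            # values beyond the training range

rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(x_train, y_train)
lin = LinearRegression().fit(x_train, y_train)

print("random forest:", rf.predict(x_new))      # clamped near max(y_train) = 198
print("linear model: ", lin.predict(x_new))     # continues the trend: ~300, ~400
```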

 
Maxim Dmitrievsky #:

I'm afraid that with this approach extrapolation will still distort the values a lot on new data. But one can experiment with it :)

There is no extrapolation; it's just an ordinary one-step-ahead prediction. (Although that is also extrapolation, but you probably mean something else.)


 
mytarmailS #:
There is no extrapolation; it's just an ordinary one-step-ahead prediction. (Although that is also extrapolation, but you probably mean something else.)

Well, the new data can still turn out to be unpredictable; that's not good.
 

Below is the balance chart on the hourly timeframe = 6000 hours. The parameter settings vary widely, but the drop in the balance is always in the same place and does not depend on the settings! The OOB and OOS classification errors are about the same, within 20%. And yet this is what the balance chart of the step-by-step run looks like.

How can I determine the reason for the drop?

PS. Retraining every Monday at 00:00.
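For reference, a rough sketch of how such a step-by-step run with a weekly retrain might be organised in Python (scikit-learn). The helpers make_features, make_target and signal_to_pnl are hypothetical placeholders, not the actual code behind the chart above.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

def walk_forward_balance(prices: pd.Series, make_features, make_target, signal_to_pnl,
                         train_weeks: int = 52) -> pd.Series:
    """Hour-by-hour run with a weekly retrain, mimicking the procedure described above."""
    X, y = make_features(prices), make_target(prices)    # hypothetical helpers
    model, pnl = None, {}
    for ts in X.index:                                   # X.index is hourly timestamps
        # retrain once a week, on Monday at 00:00, using only data already seen
        if ts.weekday() == 0 and ts.hour == 0:
            train = X.loc[ts - pd.Timedelta(weeks=train_weeks):ts].index[:-1]
            if len(train) > 100:                         # enough history to fit
                model = RandomForestClassifier(n_estimators=200, random_state=0)
                model.fit(X.loc[train], y.loc[train])
        if model is not None:
            signal = model.predict(X.loc[[ts]])[0]
            pnl[ts] = signal_to_pnl(signal, prices, ts)  # hypothetical trade accounting
    return pd.Series(pnl).cumsum()                       # the balance curve to inspect
```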



 
СанСаныч Фоменко #:

Below is the balance chart on the hourly timeframe = 6000 hours. The parameter settings vary widely, but the drop in the balance is always in the same place and does not depend on the settings! The OOB and OOS classification errors are about the same, within 20%. And yet this is what the balance chart of the step-by-step run looks like.

How can I determine the reason for the drop?

PS. Retraining every Monday at 00:00.



Maybe a black swan? Check what happened on that day, whether there was a super-strong price move.
You could also not trade every hour, but skip an entry if there are already open trades at approximately the same price level. At night it can sometimes pick up several trades at roughly the same price, and then during the day they all get closed out together at a loss or a profit. So the drop might not be as sharp, but the rises elsewhere would also become weaker, i.e. the chart would become less volatile.
The chart covers about a year; check it over 5 years, there may be more dips like this.
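A rough sketch of that filter (Python; the 0.0020, i.e. 20-pip, threshold is only an assumption for illustration):

```python
def should_skip(new_price: float, open_trade_prices: list[float],
                min_distance: float = 0.0020) -> bool:
    """Return True if any open trade is within min_distance of the new entry price."""
    return any(abs(new_price - p) < min_distance for p in open_trade_prices)

# Example: with trades already open at 1.1000 and 1.1005,
# a new signal at 1.1003 is skipped, a signal at 1.1100 is not.
print(should_skip(1.1003, [1.1000, 1.1005]))   # True
print(should_skip(1.1100, [1.1000, 1.1005]))   # False
```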
 
Forester #:
Maybe a black swan? Check what happened on that day, whether there was a super-strong price move.
You could also not trade every hour, but skip an entry if there are already open trades at approximately the same price level. At night it can sometimes pick up several trades at roughly the same price, and then during the day they all get closed out together at a loss or a profit. So the drop might not be as sharp, but the rises elsewhere would also become weaker, i.e. the chart would become less volatile.
The chart covers about a year; check it over 5 years, there may be more dips like this.

It's not a matter of dips of a few hours.

The dip above lasts more than 1000 (a thousand) hours, roughly 2 months.

 
СанСаныч Фоменко #:

It's not a matter of dips of a few hours.

The dip above lasts more than 1000 (a thousand) hours, roughly 2 months.

Then apparently the pattern changes over time. Try training on fewer rows (a shallower history) or, on the contrary, on more rows (a deeper history).
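A quick sketch of how to compare training-window lengths (Python with scikit-learn; the data here is synthetic and stationary, so on real data you would look at where the balance drop moves rather than at these particular numbers):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
X = rng.normal(size=(6000, 10))
y = (X[:, 0] + rng.normal(scale=0.5, size=6000) > 0).astype(int)
X_oos, y_oos = X[5000:], y[5000:]               # fixed out-of-sample chunk

for rows in (500, 1000, 2000, 5000):            # "shallower" vs "deeper" history
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X[5000 - rows:5000], y[5000 - rows:5000])
    err = 1 - model.score(X_oos, y_oos)
    print(f"trained on {rows} rows -> OOS error {err:.3f}")
```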

 
СанСаныч Фоменко #:

Below is the balance chart on the hourly timeframe = 6000 hours. The parameter settings vary widely, but the drop in the balance is always in the same place and does not depend on the settings! The OOB and OOS classification errors are about the same, within 20%. And yet this is what the balance chart of the step-by-step run looks like.

How can I determine the reason for the drop?

PS. Retraining every Monday at 00:00.



An error in the interpretation of the raw data. And/or insufficient data.

PS: by the way, "always in the same place and does not depend on the settings" also hints at an error in the method.
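One way to make that check concrete: for each settings run, find the bar at which the deepest drawdown of the balance curve begins and see whether the dates really coincide. A small sketch (Python with pandas; `curves` is a hypothetical dict of balance series, one per settings run):

```python
import pandas as pd

def drawdown_start(balance: pd.Series):
    """Index of the running peak from which the deepest drawdown began."""
    running_max = balance.cummax()
    trough = (balance - running_max).idxmin()   # deepest point of the drawdown
    return balance.loc[:trough].idxmax()        # the peak it fell from

# Hypothetical usage:
# for name, curve in curves.items():
#     print(name, "drawdown starts at", drawdown_start(curve))
```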

 
Maxim Kuznetsov #:

An error in the interpretation of the raw data. And/or insufficient data.

PS: by the way, "always in the same place and does not depend on the settings" also hints at an error in the method.

What is "method"?

The formation of a teacher?

A set of predictors?

Selection of predictors for the teacher?

Formation of the training sample?

The algorithm for fitting the model?

An algorithm for computing the threshold?

Anything else?