Machine learning in trading: theory, models, practice and algo-trading - page 3174

 
Maxim Dmitrievsky #:
It is connected, because you laid down the parameters beforehand, based on your knowledge or preferences. From the start you know which parameters give a better curve. Plus you could have traded earlier history before and, from that experience, built the TS on new history. The depth of such gestalt therapy can be enormous :)

I am familiar with this effect of overtraining the brain. In this case, the TS was written chronologically before this one.

In any case, I'm grateful for the thoughts.

 
fxsaber #:

I don't understand the need to split into train/test/exam.

Please clarify: what is the point of these intervals?

At the moment I imagine the following scheme for them.

  1. The number cruncher (optimiser) runs on train, filtering on test.
  2. The number cruncher is switched off completely, and a few best results are taken on exam.


The first point seems odd, a la a "forward test" in the tester. Is it better than plain optimisation without filtering, but on the combined train+test interval?
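The chronological train/test/exam scheme described above can be sketched as follows (a minimal illustration; the 60/20/20 proportions and the synthetic return series are assumptions, not from the thread):

```python
import numpy as np

# Hypothetical series of bar returns, ordered chronologically.
rng = np.random.default_rng(0)
returns = rng.normal(0.0, 0.01, size=1000)

# Chronological three-way split: optimise on train, filter the
# optimiser's candidates on test, judge the final few on exam.
n = len(returns)
train = returns[: int(0.6 * n)]              # number cruncher (optimiser) runs here
test = returns[int(0.6 * n): int(0.8 * n)]   # candidates are filtered here
exam = returns[int(0.8 * n):]                # optimiser off; only the best few are inspected here

print(len(train), len(test), len(exam))  # 600 200 200
```

The key property is that the three intervals never overlap and keep their chronological order, so no information from exam leaks into the selection steps.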

 
fxsaber #:

I don't practice that kind of self-deception. This is the only way I do it:

  1. Optimisation on train.
  2. From the results found I take the top five and watch their behaviour on OOS. There is no optimisation at this point in any case.
This is exactly how the original images were obtained. So the nice OOS on the left is not fitting at all.
You are doing one-to-one what I described, re-read more carefully.
===================
Make a comparison with SB (random walk); until there is such a comparison and it is proved that the TS behaves differently on market data than on SB, there is no point in making hypotheses.
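The suggested SB comparison could look like this minimal sketch: evaluate the same TS logic on many synthetic random-walk price series, and demand that the market-data result lie far outside the resulting null distribution. The price generator and the toy momentum rule here are illustrative assumptions, not anyone's actual TS:

```python
import numpy as np

def random_walk_prices(n_bars, start=1.0, vol=0.01, seed=None):
    """Generate a synthetic geometric random-walk (SB) price series."""
    rng = np.random.default_rng(seed)
    steps = rng.normal(0.0, vol, size=n_bars)
    return start * np.exp(np.cumsum(steps))

def toy_ts_profit(prices):
    """Illustrative stand-in for a TS: trade in the direction of the last bar."""
    rets = np.diff(np.log(prices))
    signal = np.sign(rets[:-1])
    return float(np.sum(signal * rets[1:]))

# Null distribution of the TS's profit on pure random walks; a result on
# real market data is only interesting if it sits far in this tail.
profits = [toy_ts_profit(random_walk_prices(500, seed=s)) for s in range(200)]
print(np.mean(profits), np.std(profits))
```

On SB the expected profit of any such rule is zero, so the mean of `profits` hovers near zero and its spread gives the scale of results obtainable by pure chance.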
 
mytarmailS #:
You are doing the same thing that I described, re-read more carefully.

Forum on trading, automated trading systems and testing trading strategies

Machine learning in trading: theory, models, practice and algo-trading

mytarmailS, 2023.08.16 13:23

Imagine that you have only 1000 variants of the TS in total.


your steps 1 and 2

1) You start to optimise / search for a good TS; this is the train data (fitting/searching/optimisation).

Let's say you have found 300 variants where the TS makes money...

2) Now, out of these 300 variants, you look for a TS that will also pass OOS; this is the test data. Say you have found 10 TSs that earn both on train and on test (OOS).
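The two-step selection just quoted can be sketched like this (the variant count matches the example, but the profit scores are random placeholders, not real backtest results):

```python
import numpy as np

rng = np.random.default_rng(1)

# 1000 hypothetical TS variants, each with an (illustrative, random)
# profit score on the train interval and on the test (OOS) interval.
n_variants = 1000
train_profit = rng.normal(0.0, 1.0, size=n_variants)
test_profit = rng.normal(0.0, 1.0, size=n_variants)

# Step 1: optimisation on train keeps the variants that earn there.
step1 = np.flatnonzero(train_profit > 0)

# Step 2: of those, keep only the ones that also earn on test (OOS).
step2 = step1[test_profit[step1] > 0]

print(len(step1), len(step2))  # step 2 survivors are a subset of step 1
```

With independent random scores roughly half survive each filter; the point of the argument is that step 2 is itself a selection on OOS, i.e. a second fitting stage.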

I don't do the second step.

 
fxsaber #:

I don't do the second step.

That is exactly the second step that you "don't do" ))


What is the difference?

 
fxsaber #:

1. Didn't quite understand the question. The left OOS is a year. Should it be extended further into the past?

2. I assume that an indication that there are no errors in the code is that the code does exactly what was intended before programming. In that sense, everything is fine.

And in the general case, a TS with errors in the code is still a TS. It's just not exactly what the author originally intended.

1. Yes.

2. Yes.

If a sharp loss always comes right after training, then just trade in reverse and that's it (but this is a very strange scenario and the reasons would be hard to explain). More likely, of course, the loss does not always come immediately after training (it is chance), and that indicates weak predictive ability of the model.

 
fxsaber #:

Highly categorical statements without a shred of doubt. I made a post on the subject of the location of the OOS.

It's not the first time I've encountered dislike for the tester. I don't know what is wrong with the number cruncher.

I don't understand how one can look ahead when optimising.


On methodology. I don't understand the necessity of splitting into train/test/exam. Claiming, even after the most favourable statistical study, that the TS is NOT overtrained seems overconfident.

The most I can get as a conclusion is: "it is likely that the TS found some pattern that was present for some time before and after the training interval. At the same time, there is no guarantee that this pattern has not already broken down."

My high categoricalness is based on the described approach, which is standard in ML. And I have not yet even added cross-validation to what was described. This is the professional approach to analysing markets in ML.

What you describe is amateur level, TA level, where conclusions cannot be justified with statistics. Because of this, statistics is replaced by the tester, which at its core has NOTHING to do with statistics.

If you understand this, you can and should use the tester ONLY after preliminary calculations whose conclusions are based on statistics.

This is why the approach I described of preparing the raw data goes beyond the tester and is DEFINITELY a guarantee against overtraining and looking ahead. Compare sequential testing and testing on shuffled data.

From the fact that you don't understand how the OOS on the left can be a consequence of looking ahead, it does NOT follow that it is not. Looking at the picture, it is highly suspect. For example, it is quite likely that the algorithm is catching something in the segment that lies in the future relative to the OOS, which gives the same pretty picture as the OOS itself. As soon as you move into the future relative to the test segment, it fails immediately.

Conclusion.

A picture where the loss is to the right of testing is evidence of overtraining and/or looking ahead.

 

Separately about the tester.

The tester has an optimisation graph in the form of a "two-dimensional surface".

It can be used to monitor overtraining.

If in this surface you can identify a patch where one cell is surrounded by other cells of approximately the same colour, then this central cell gives the parameters of a NOT overtrained TS. Such a position corresponds to the found optimum being a plateau.

However, if the "surface" looks like a leopard's skin, the TS is hopeless: the tester has found a large number of maxima, which indicates an extremely low probability of hitting any of them in the future.
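The plateau idea can be expressed as a simple check on the optimisation surface: a cell whose neighbours all have roughly the same value suggests a robust optimum, while an isolated spike amid noise suggests overfitting. A minimal sketch (the surfaces and the 10% tolerance are assumptions for illustration):

```python
import numpy as np

def is_plateau_cell(surface, i, j, rel_tol=0.1):
    """True if cell (i, j) is surrounded by neighbours whose values differ
    from it by at most rel_tol (relative), i.e. the optimum sits on a
    plateau rather than on an isolated spike."""
    centre = surface[i, j]
    neighbours = surface[max(i - 1, 0): i + 2, max(j - 1, 0): j + 2]
    return bool(np.all(np.abs(neighbours - centre) <= rel_tol * abs(centre)))

# Smooth hill: the maximum sits on a plateau, so the parameters look robust.
x = np.linspace(-1, 1, 11)
smooth = 10 - np.add.outer(x**2, x**2)
si, sj = np.unravel_index(np.argmax(smooth), smooth.shape)
print(is_plateau_cell(smooth, si, sj))   # True

# "Leopard skin": a noisy surface with many isolated maxima.
leopard = np.random.default_rng(2).normal(0, 5, size=(11, 11))
li, lj = np.unravel_index(np.argmax(leopard), leopard.shape)
print(is_plateau_cell(leopard, li, lj))  # typically False on a noisy surface
```

The tolerance and neighbourhood size are free choices; the point is only that "plateau" can be made into an explicit, checkable criterion instead of eyeballing colours.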

 

A reference for fans of market models in general.

Can a Simple Multi-Agent Model Replicate Complex Stock Market Behaviour?
  • www.r-bloggers.com
The stock market is one of the most complex systems we know about. Millions of intelligent, highly competitive people (and increasingly AIs) try to outwit each other to earn as much money as possible. In...
 
СанСаныч Фоменко #:

The tester has an optimisation graph in the form of a "two-dimensional surface".

It can be used to monitor overtraining.

If in this surface you can identify a patch where one cell is surrounded by other cells of approximately the same colour, then this central cell gives the parameters of a NOT overtrained TS. This position corresponds to the found optimum being a plateau.

However, if the "surface" looks like a leopard's skin, the TS is hopeless: the tester has found a large number of maxima, which indicates an extremely low probability of hitting any of them in the future.

You can't.

The nature of the pattern (surface) speaks only to the characteristics of the TS under a particular optimisation criterion. Take another criterion and the pattern will be different. Misunderstanding this leads to the misconception that optimisation (learning) should not be carried to the global maximum; on the contrary, it should. It is the choice of an optimisation criterion adequate to the strategy that is the key to correct learning.
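The point about criterion choice can be illustrated: the very same set of backtest results yields a different surface, and possibly a different "best" parameter cell, under different optimisation criteria. Everything here is synthetic placeholder data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic per-trade returns for an 11x11 grid of TS parameter pairs.
trades = rng.normal(0.001, 0.02, size=(11, 11, 200))

# Two different optimisation criteria computed over the SAME results.
profit_surface = trades.sum(axis=2)                        # total profit
sharpe_surface = trades.mean(axis=2) / trades.std(axis=2)  # Sharpe-like ratio

best_by_profit = np.unravel_index(np.argmax(profit_surface), profit_surface.shape)
best_by_sharpe = np.unravel_index(np.argmax(sharpe_surface), sharpe_surface.shape)
print(best_by_profit, best_by_sharpe)  # the two criteria may pick different cells
```

Whether the surface looks like a plateau or a leopard skin is therefore a property of the pair (TS, criterion), not of the TS alone.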

All this has been discussed many times before.
