Machine learning in trading: theory, models, practice and algo-trading - page 360

 
Mihail Marchukajtes:
Interesting!!! But the problem is a little different. Suppose your TS has dropped by 20%. The question is: will it climb out of the drawdown and go on to earn, or will it keep draining? How do you determine that the TS needs to be re-optimized?
Imho, this "HOW..." is no different from any other system. ML or not ML is irrelevant. Pick a criterion, and decide when and what.
 
Mihail Marchukajtes:
Interesting!!! But the problem is a little different. Suppose your TS has dropped by 20%. The question is: will it climb out of the drawdown and go on to earn, or will it keep draining? How do you determine that the TS needs to be re-optimized?

The TS should NOT be overfitted - that is the whole point of creating a TS. Everything else is a numbers game.
 
SanSanych Fomenko:

The TS should NOT be overfitted - that is the whole point of creating a TS. Everything else is a numbers game.
Overfitted or not, sooner or later it will start to drain anyway. I think that was Mihail Marchukajtes' question - how do you know when?
 
Yuriy Asaulenko:
Overfitted or not, sooner or later it will start to drain anyway. I think that was Mihail Marchukajtes' question - how do you know when?


You do not understand the word "overfitted".

First you must make sure the TS is not overfitted - that fact has to be proven. And then the proof must be repeated. If you cannot prove that it is not overfitted, you cannot use it.

 
SanSanych Fomenko:


You do not understand the word "overfitted".

First you must make sure the TS is not overfitted - that fact has to be proven. And then the proof must be repeated. If you cannot prove that it is not overfitted, you cannot use it.

I suppose I understand).

Overfitting ("over-" in the sense of "too much") in machine learning and statistics is a phenomenon where the constructed model explains the examples from the training sample well but performs relatively poorly on examples that did not take part in training (examples from the test sample).

This happens because, while the model is being built ("during training"), some random regularities are found in the training sample that are absent from the general population.

Even when the trained model does not have an excessive number of parameters, its performance on new data can be expected to be lower than on the data used for training[1]. In particular, the value of the coefficient of determination will shrink compared to the original training data.

Ways of combating overfitting depend on the modelling method and the way the model is built. For example, if a decision tree is being constructed, some of its branches can be pruned during construction.
https://ru.wikipedia.org/wiki/Переобучение

I think this is a somewhat simplified definition. So it is still not only possible, but perhaps even necessary, to use. It all depends on the specifics.

We are using crude models, and that can also be interpreted as overfitting.
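The Wikipedia definition quoted above can be illustrated with a minimal sketch (toy data, NumPy only; the function and degrees are arbitrary choices for illustration): a flexible model fits the random noise in the training sample and therefore looks better on it than on held-out points.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples of a simple underlying function
x = np.linspace(0, 1, 30)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, size=x.size)

train_idx = np.arange(0, 30, 2)  # even points used for training
test_idx = np.arange(1, 30, 2)   # odd points held out

def mse(degree):
    """Fit a polynomial on the training points; return (train MSE, test MSE)."""
    coeffs = np.polyfit(x[train_idx], y[train_idx], degree)
    pred = np.polyval(coeffs, x)
    return (np.mean((pred[train_idx] - y[train_idx]) ** 2),
            np.mean((pred[test_idx] - y[test_idx]) ** 2))

for degree in (3, 9):
    tr, te = mse(degree)
    print(f"degree {degree}: train MSE {tr:.4f}, test MSE {te:.4f}")
```

The degree-9 fit drives the training error down by chasing the noise, and the gap between its training and test error is exactly the overfitting the definition describes.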

 
Mihail Marchukajtes:
Interesting!!! But the problem is a little different. Suppose your TS has dropped by 20%. The question is: will it climb out of the drawdown and go on to earn, or will it keep draining? How do you determine that the TS needs to be re-optimized?

If the newly trained model does not show a 20% drawdown over this period in the tester, while the old model on the real account did - then definitely retrain: the model has lost its relevance and needs to capture the new patterns. Why not retrain the model after each new trade? And feed it the updated history of trades as input.
 
Maxim Dmitrievsky:

If the newly trained model does not show a 20% drawdown over this period in the tester, while the old model on the real account did - then definitely retrain: the model has lost its relevance and needs to capture the new patterns. Why not retrain the model after each new trade? And feed it the updated history of trades as input.
)) Just yesterday I was pondering this very subject. I decided it would be better to record the trades and, once they are completed, feed them into the training sample as input. The model would keep learning as the game goes on.
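The "retrain after every trade" idea can be sketched as a sliding-window loop (toy data; the least-squares refit, the window size, and all names here are illustrative stand-ins for whatever model and history a real TS would use):

```python
from collections import deque

import numpy as np

rng = np.random.default_rng(1)

WINDOW = 100                     # how many recent trades to keep (assumed)
history = deque(maxlen=WINDOW)   # (features, profit) pairs

def retrain(history):
    """Refit the model on the current trade history.
    A least-squares fit stands in for the real model."""
    X = np.array([f for f, _ in history])
    y = np.array([p for _, p in history])
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

weights = np.zeros(3)
for step in range(300):
    features = rng.normal(size=3)
    # Toy market: profit really does depend on the first feature plus noise
    profit = 0.8 * features[0] + rng.normal(0, 0.5)
    history.append((features, profit))
    if len(history) >= 10:       # need a minimal sample before fitting
        weights = retrain(history)

print(len(history), np.round(weights, 2))
```

The deque automatically drops the oldest trades, so each refit only sees the most recent window - one simple way to let the model track new regularities as they appear.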
 
Yuriy Asaulenko:

I suppose I understand.)

I think this is a somewhat simplified definition. So it is still not only possible, but perhaps even necessary, to use. It all depends on the specifics.

We are using crude models, and that can also be interpreted as overfitting.


In the quote, overfitting is an overly fine accounting of features, yet for you coarsening is overfitting?!

You know better. This is not the first time.

 
Maxim Dmitrievsky:

If the newly trained model does not show a 20% drawdown over this period in the tester, while the old model on the real account did - then definitely retrain: the model has lost its relevance and needs to capture the new patterns. Why not retrain the model after each new trade? And feed it the updated history of trades as input.

Training, retraining, and overfitting are fundamentally different things.

All this retraining on every new bar has been chewed over and over on this forum and within TA in general.

In the fight against overfitting I know two methods.

1. Clear the set of predictors of those unrelated to the target variable - i.e., clean the input set of noise predictors. This question was considered in detail in the first 100 pages of this thread.

2. With the set of predictors cleaned of noise, we start fitting the model on the training sample and then the test and validation samples, which are random samples from one file. The error on all three sets should be approximately the same.

3. Then we take a file that is separate from the previous one and run the model on it. The error, again, should be about the same as the previous ones.

4. If these checks are done regularly, then your question "is a 20% drawdown a signal for retraining?" does not arise at all, since as a result of the first three steps the drawdown is obtained as a model parameter, and going beyond it means the model is not working and everything should be started over.
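Steps 1-4 above can be sketched as follows (synthetic data; `make_data` and the least-squares model are assumptions standing in for a real predictor set and TS): fit on the training sample, then check that the error is roughly equal on the test sample, the validation sample, and a separate file.

```python
import numpy as np

rng = np.random.default_rng(2)

def make_data(n):
    # Toy dataset: the target depends on two informative predictors plus noise
    X = rng.normal(size=(n, 3))
    y = 1.5 * X[:, 0] - 0.7 * X[:, 1] + rng.normal(0, 0.3, n)
    return X, y

# One "file", randomly split into train / test / validation samples
X, y = make_data(900)
idx = rng.permutation(900)
tr, te, va = idx[:300], idx[300:600], idx[600:]

# Fit the model on the training sample only
w, *_ = np.linalg.lstsq(X[tr], y[tr], rcond=None)

def err(i):
    return np.mean((X[i] @ w - y[i]) ** 2)

errors = {"train": err(tr), "test": err(te), "validation": err(va)}

# A separate file from the same process (step 3 in the post)
X2, y2 = make_data(300)
errors["separate"] = np.mean((X2 @ w - y2) ** 2)

for name, e in errors.items():
    print(f"{name}: {e:.3f}")
```

If any of these errors diverged sharply from the others, the model would be suspect by the criterion described above; their spread also gives the "model parameter" that step 4 says a live drawdown should be compared against.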

 
SanSanych Fomenko:


In the quote, overfitting is an overly fine accounting of features, yet for you coarsening is overfitting?!

You know best. It's not the first time.

It is not the first time either. But why only coarsening? Another example is right there in the definition: an overly complex model finds something that does not exist - apparent regularities.

You have a very simplistic, or one-sided, understanding of overfitting, imho.

SanSanych Fomenko:

Training, retraining, and overfitting are fundamentally different things.

All this retraining on every new bar has been chewed over and over on this forum and within TA in general.

In the fight against overfitting I know two methods.

.....

4. If these checks are done regularly, then your question "is a 20% drawdown a signal for retraining?" does not arise at all, since as a result of the first three steps the drawdown is obtained as a model parameter, and going beyond it means the model is not working and everything should be started over.

Well, yes. But it has not exactly been chewed over. In the literature this option - retraining as the game goes on - is seriously considered. How and when it can be done, and when it cannot, is another question. There are limitations everywhere.