Machine learning in trading: theory, models, practice and algo-trading - page 1228

 
Maxim Dmitrievsky:

Then there will be few examples, and on new data the NN will be blind and lost; it needs to have "seen" as much as possible in its lifetime

As an option...

1) Create an "ideal trade" with marked deals, accounting for commission: the most profitable sequence of trades possible. We get something like a zigzag, with deals at the peaks and troughs...

2) Build the equity curve of this ideal trade

3) Train the model. The goal of training is to maximize the correlation between the model's equity curve (trades + commission) and the ideal equity curve; that way the model's quality can be expressed as a single number, the correlation coefficient

This way the model can fit the data as smoothly and accurately as possible

And of course we shouldn't forget about OOS during training

p.s. Everything I wrote is pure theory
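The three steps above can be sketched roughly like this (a minimal sketch, not anyone's actual implementation; `ideal_equity` and `model_quality` are hypothetical helpers, assuming one position per bar and a flat commission charged on each position flip):

```python
import numpy as np

def ideal_equity(prices, commission=0.0):
    # Perfect foresight: be long on every up-move, short on every down-move,
    # paying commission whenever the position flips (zigzag-style ideal trading).
    moves = np.diff(prices)
    position = np.sign(moves)                                  # +1 long, -1 short per bar
    flips = np.abs(np.diff(position, prepend=position[:1])) / 2
    pnl = np.abs(moves) - commission * flips
    return np.cumsum(pnl)

def model_quality(model_equity, ideal):
    # Model quality as a single number: the correlation coefficient
    # between the model's equity curve and the ideal one.
    return np.corrcoef(model_equity, ideal)[0, 1]
```

With `commission=0` on prices `[1, 2, 1, 3]` the ideal equity is `[1, 2, 4]`, and a model whose equity matches it exactly scores a correlation of 1.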

 
mytarmailS:

As an option...

1) Create an "ideal trade" with marked deals, accounting for commission: the most profitable sequence of trades possible. We get something like a zigzag, with deals at the peaks and troughs...

2) Build the equity curve of this ideal trade

3) Train the model. The goal of training is to maximize the correlation between the model's equity curve (trades + commission) and the ideal equity curve; that way the model's quality can be expressed as a single number, the correlation coefficient

This way the model can fit the data as smoothly and accurately as possible

And of course we shouldn't forget about OOS during training

p.s. Everything I wrote is pure theory

Essentially that's how it's done, but you can vary how "ideal" the ideal equity is, because the more ideal it is, the more overfitting

Error on the train set: 0; on the OOS: 0.4.

An "ideal" trade with OOS included (inside): losing trades are only 15%, which roughly matches the OOS share (here 20%). It's not hard to guess what will happen on new data
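The "zero error on train, near-random on OOS" pattern is easy to reproduce when the predictors carry no signal. A toy illustration (not the actual setup from the post) using a 1-nearest-neighbour model, which simply memorises its training set:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))                 # predictors with no real signal
y = (rng.normal(size=500) > 0).astype(int)     # purely random binary target

split = 400                                    # 80% train, 20% OOS, as in the post
X_tr, y_tr, X_oos, y_oos = X[:split], y[:split], X[split:], y[split:]

def predict_1nn(X_train, y_train, X_query):
    # 1-nearest-neighbour "model": it memorises the training set outright
    d = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    return y_train[d.argmin(axis=1)]

# On the train set every point's nearest neighbour is itself, so error is 0
train_err = (predict_1nn(X_tr, y_tr, X_tr) != y_tr).mean()
# On OOS the target is independent noise, so error lands near 0.5 (random)
oos_err = (predict_1nn(X_tr, y_tr, X_oos) != y_oos).mean()
```

Perfect memorisation of noise buys nothing out of sample, which is exactly the overfitting the post describes.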


 
Maxim Dmitrievsky:

Essentially that's how it's done, but you can vary how "ideal" the ideal equity is, because the more ideal it is, the more overfitting

Error on the train set: 0; on the OOS: 0.4.

An "ideal" trade with OOS included (inside): losing trades are only 15%, which roughly matches the OOS share (here 20%). It's not hard to guess what will happen on new data


The problem is probably in the variability of the predictors' properties; I don't see any other explanation (

 
mytarmailS:

The problem is probably in the variability of the predictors' properties; I don't see any other explanation (

Variability with respect to the target

That's what I wanted to show: training on "perfect" inputs is a flawed approach, especially since it assigns the same probability to every output

 
Maxim Dmitrievsky:

Variability with respect to the target

That's what I wanted to show: training on "perfect" inputs is a flawed approach, especially since it assigns the same probability to every output

The beginning of the OOS looks fine...

Have you tried retraining every n bars?

 
mytarmailS:

The beginning of the OOS looks fine...

I haven't tried a full retrain every n bars

This is just an example; there are ways to smooth out the difference, not very effective, but they exist

What you're looking at there isn't the beginning )) I already posted screenshots where it looks about the same before and after

I'm interested in the research topic I described earlier, but since nobody did it, I'll do it myself

 
Maxim Dmitrievsky:

This is just an example; there are ways to smooth out the difference, not very effective, but they exist

What you're looking at there isn't the beginning )) I already posted screenshots where it looks about the same before and after

I'm interested in the research topic I described earlier, but since nobody did it, I'll do it myself

It will be the same as with win/loss probabilities: you may learn something, but on new data it will be close to random

 
mytarmailS:

It will be the same as with win/loss probabilities: you may learn something, but on new data it will be close to random

I can't visualize this process in my head.

 
Maxim Dmitrievsky:

I can't visualize this process in my head.

But try it with constant retraining; it's more promising imho
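"Retraining every n bars" can be sketched as a walk-forward loop (a rough sketch under stated assumptions; `fit` and `predict` are placeholders for whatever model is used, not any specific library API):

```python
import numpy as np

def walk_forward(features, target, fit, predict, window=500, step=50):
    # Every `step` bars, refit on the trailing `window` bars, then predict
    # only the next `step` bars -- so every prediction is out-of-sample
    # relative to the data the model was fitted on.
    preds = np.full(len(target), np.nan)
    for start in range(window, len(target), step):
        model = fit(features[start - window:start], target[start - window:start])
        end = min(start + step, len(target))
        preds[start:end] = predict(model, features[start:end])
    return preds
```

The first `window` bars get no prediction (NaN), since there is not yet enough history to fit on; everything after that is predicted by a model that never saw those bars.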

 
mytarmailS:

But try it with constant retraining; it's more promising imho

You think I haven't tried? The virtual optimizer has long existed in two variants: full retraining and Bayesian correction

It's all nonsense; until you do it yourself you won't understand. It will only work once the main problem is solved

Because I've tested it on all kinds of mathematical functions, and the equity goes to the sky almost everywhere

Neural networks are useless, early stopping, late stopping, bagging is useless, ensembles are useless, cross-validation too