Machine learning in trading: theory, models, practice and algo-trading - page 1438

 
Maxim Dmitrievsky:

No, Kesha, neither in real life nor here on the forum do you yet have enough authority for anything to be shared with you. Work on it.

Ok, I'll get over it.

 
Maxim Dmitrievsky:

I have no idea how it works, but it says it produces an orders-of-magnitude smaller forest, i.e. in fact it should overfit less because the number of variants is smaller, even though the depth is the same throughout.

I don't know what they mean, but no new parameters are passed to the function: no max depth, no max number of examples in a leaf, nothing; everything is the same.

 
Kesha Rutov:

So you probably use returns, like everyone who has been disinformed by the forex demotivators Alyosha and the wacky Wizard, and returns are independent, there is no information left in them, no levels or trend lines, a pure random walk.

Not exactly returns, but TP/SL, for example at 50 or 100 points. I.e. not over 1 bar: over several bars those 100 points could have been triggered as either TP or SL.
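
A minimal sketch of that labelling idea, in C++-like code (my own illustration; the function name, the point-size parameter and the bar horizon are assumptions, not anything stated in the thread): each bar is labelled by which barrier, a TP or SL of e.g. 100 points, would have been hit first within the next few bars.

#include <cstddef>
#include <vector>

// +1 = the TP barrier was hit first, -1 = the SL barrier, 0 = neither in time
std::vector<int> LabelByBarriers(const std::vector<double> &close,
                                 double barrier_pts,  // e.g. 50 or 100 points
                                 double point,        // price value of one point
                                 int horizon_bars)    // how many bars to look ahead
{
    std::vector<int> label(close.size(), 0);
    const double barrier = barrier_pts * point;
    for (std::size_t i = 0; i < close.size(); ++i)
    {
        const std::size_t last = i + static_cast<std::size_t>(horizon_bars);
        for (std::size_t j = i + 1; j < close.size() && j <= last; ++j)
        {
            const double move = close[j] - close[i];
            if (move >= barrier)  { label[i] = +1; break; }  // TP of a long would have triggered
            if (move <= -barrier) { label[i] = -1; break; }  // SL of a long would have triggered
        }
    }
    return label;
}

The barrier size and horizon would of course have to match the TP/SL actually used by the strategy.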
 
elibrarius:

I don't know what they mean, but no new parameters are passed to the function: no max depth, no max number of examples in a leaf, nothing; everything is the same.

It's complicated, but you'll have to figure it out.

 
Maxim Dmitrievsky:

It's complicated, but you'll have to figure it out.

If there were some kind of limit, there would be a variable setting the level at which to apply it. That makes sense, doesn't it?
Unless they decided to do it without a setting, for example always limiting to 10 examples per leaf. But that's not usable: some people need 100, others 1000.
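
For illustration, a minimal sketch (the names ForestParams, max_depth and min_samples_leaf are hypothetical, not taken from any particular library) of what such user-settable limits would look like instead of a hard-coded "always 10 examples per leaf":

// user-settable limits instead of hard-coded constants
struct ForestParams
{
    int max_depth        = 0;   // 0 = no depth limit
    int min_samples_leaf = 10;  // the user picks 10, 100 or 1000 as needed
};

// a node may be split further only if the limits allow it
bool CanSplit(int depth, int samples_in_node, const ForestParams &p)
{
    if (p.max_depth > 0 && depth >= p.max_depth) return false;
    if (samples_in_node < 2 * p.min_samples_leaf) return false; // each child must keep >= min_samples_leaf
    return true;
}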
 
elibrarius:
If there were some kind of limit, there would be a variable setting the level at which to apply it. That makes sense, doesn't it?
Unless they decided to do it without a setting, for example always limiting to 10 examples per leaf. But that's not usable: some people need 100, others 1000.

I don't know what's better. Maybe there are simple implementations in C++; I'll have to look on GitHub.

I'd like to have everything in one place. But that hassle will also take a month or more.

 
Maxim Dmitrievsky:

I don't know what's better. Maybe there are simple implementations in C++; I'll have to look on GitHub.

I'd like to have everything in one place. But that hassle will also take a month or more.

Putting a depth counter into the existing forest:

int n_max = 10;
int n = 0;
....
n++;
if(n > n_max) { return; }

In a couple of days you can figure out where to put these 4 lines. Not a month at all.
The counter for the number of examples in a leaf is similar, only with if(n < n_max) { return; }
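
For illustration, a guess at where those 4 lines would sit in a typical recursive tree builder (the Node type, GrowNode and the surrounding structure are my assumptions, not the actual library code being discussed):

struct Node
{
    int    feature   = -1;
    double threshold = 0.0;
    Node  *left      = nullptr;
    Node  *right     = nullptr;
};

const int n_max = 10;   // the depth limit from the snippet above

void GrowNode(Node *node, int n)   // n = current depth of this node
{
    if (n > n_max) { node->left = node->right = nullptr; return; }  // the added check: stop and leave a leaf

    // ... find the best split here, partition the samples ...
    // GrowNode(node->left,  n + 1);
    // GrowNode(node->right, n + 1);
}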

 
elibrarius:

Putting a depth counter into the existing forest:

int n_max = 10;
int n = 0;
....
n++;
if(n > n_max) { return; }

In a couple of days you can figure out where to put these 4 lines. Not a month at all.

I remember I once tried putting such a counter in, and it turned out to be a mess.

Realistically, rewriting the whole TS for CatBoost just to try it is also a lot of trouble. But the fact remains: trained on small datasets the forest generalizes well and works, for example on 2-5k samples; increase that only 2 times and, on the same new data, it overfits completely. That's a fact.
 
Maxim Dmitrievsky:

I remember I once tried putting such a counter in, and it turned out to be a mess.

I think we all have garbage in our predictors/targets, and it's the problems with the patterns that need to be solved.

 
elibrarius:

I think we all have garbage in our predictors/targets, and it's the problems with the patterns that need to be solved.

I have solved these problems, if not completely, then at least to the point where something works: the out-of-sample error is 0.1-0.2 (classification); entropy is worse, but that's understandable, that metric is responsible for other things.
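
A small self-contained illustration (made-up numbers, my own example) of why the 0/1 classification error and the entropy (log-loss) metric can tell different stories: log-loss also punishes weak, near-0.5 probabilities even when most hard decisions are right.

#include <cmath>
#include <cstddef>
#include <cstdio>
#include <vector>

int main()
{
    // true labels and predicted probabilities P(y = 1), invented for the example
    std::vector<int>    y = {1, 0, 1, 1, 0};
    std::vector<double> p = {0.95, 0.40, 0.55, 0.45, 0.30};

    double errors = 0.0, logloss = 0.0;
    for (std::size_t i = 0; i < y.size(); ++i)
    {
        const int pred = (p[i] >= 0.5) ? 1 : 0;
        errors += (pred != y[i]);                        // 0/1 classification error
        const double pt = (y[i] == 1) ? p[i] : 1.0 - p[i];
        logloss += -std::log(pt);                        // cross-entropy / log-loss
    }
    std::printf("classification error = %.2f\n", errors / y.size());   // 0.20
    std::printf("entropy (log-loss)   = %.2f\n", logloss / y.size());  // about 0.46
    return 0;
}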
