Machine learning in trading: theory, models, practice and algo-trading - page 1275

 
Maxim Dmitrievsky:

It turns out you're doing nonsense: you're imitating the forest and boosting algorithms instead of reading the theory of why they work, again.

You're not reading carefully again: I discard the root predictor according to a certain criterion, while CatBoost does it randomly.

Thanks for the picture; when I had a chance to communicate with Doc, it was like that.

 
Aleksey Vyazmikin:

You're not reading carefully again: I discard the root predictor according to a certain criterion, while CatBoost does it randomly.

Thanks for the picture; when I had a chance to communicate with Doc, it was like that.

It doesn't matter which one you take; the algorithm doesn't depend on it at all. Importance depends on the number of splits on a feature, not on the root split.
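For what it's worth, "number of splits on a feature" importance can be sketched like this; the tree structure below is a toy nested-dict representation invented for illustration, not any particular library's format:

```python
# Sketch: "split-count" feature importance -- count how many internal
# nodes of each tree split on a given feature. Trees here are toy
# nested dicts; leaves are represented by None.
from collections import Counter

def count_splits(node, counts):
    if node is None or "feature" not in node:   # leaf: nothing to count
        return
    counts[node["feature"]] += 1
    count_splits(node.get("left"), counts)
    count_splits(node.get("right"), counts)

# two toy trees; feature indices are arbitrary
forest = [
    {"feature": 0, "left": {"feature": 2, "left": None, "right": None},
     "right": None},
    {"feature": 2, "left": None,
     "right": {"feature": 2, "left": None, "right": None}},
]

importance = Counter()
for tree in forest:
    count_splits(tree, importance)
# importance: feature 2 used 3 times, feature 0 used once
```

Note the root split contributes exactly one count per tree, so with this metric it carries no special weight.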

 
Maxim Dmitrievsky:

The AlphaStar algorithm was CHANGED specifically for the rematch, from full map view to piecemeal viewing; they did not finish everything properly.

You can see that the bot is slow, switching between windows; it can't figure out where the Prism is and runs back and forth.

It's a bug!

I lose respect for myself when communicating with you.

Don't talk to me, Maxim, I'm not forcing you. I respect all interlocutors, as long as they don't start being rude.

Again you're referring to behind-the-scenes comments; how do you technically imagine what happened? Don't you think there was simply a reduced frame rate for capturing the overall map? Surely the visualization of the whole map, supposedly "through the eyes of the network," is done by rapid frame-by-frame movement across the screen; there can be no other option. And the talk of "done the way a human sees" is nonsense, and can only reflect a fantasy of how a human sees.

The bot sees a unit and reacts adequately according to its algorithm: it defends the base. Its trouble is that it couldn't split its forces to defend the base and attack at the same time, which is the whole point of a rush; and when it switched to economic development, the bot folded.

 
Maxim Dmitrievsky:

It doesn't matter which one you take; the algorithm doesn't depend on it at all. Importance depends on the number of splits on a feature, not on the root split.

This sounds like ignorance... If the books don't describe such a method, then it's obviously bad...

 
Aleksey Vyazmikin:

This sounds like ignorance... If the books don't describe such a method, then it's obviously bad...

The ignorance here is yours... it's not even clear what is being discussed, or why it's so cleverly endowed with special definitions and meanings))

 
Aleksey Vyazmikin:

Don't talk to me, Maxim, I'm not forcing you. I respect all interlocutors, as long as they don't start being rude.

Again you're referring to behind-the-scenes comments; how do you technically imagine what happened? Don't you think there was simply a reduced frame rate for capturing the overall map? Surely the visualization of the whole map, supposedly "through the eyes of the network," is done by rapid frame-by-frame movement across the screen; there can be no other option. And the talk of "done the way a human sees" is nonsense, and can only reflect a fantasy of how a human sees.

The bot sees a unit and reacts adequately according to its algorithm: it defends the base. Its trouble is that it couldn't split its forces to defend the base and attack at the same time, which is the whole point of a rush; and when it switched to economic development, the bot folded.

Shurik, you're stupid, sorry.

 
Maxim Dmitrievsky:

Shurik, you're stupid, sorry.

I don't know why you stoop so low and allow yourself to get hysterical...

Don't be afraid to think with your head and don't judge those who do.

 
Aleksey Vyazmikin:

I don't know why you stoop so low and allow yourself to get hysterical...

Don't be afraid to think with your head, and don't judge those who do.

God forbid

 
The topic has turned into trash :).
 
elibrarius:
Please attach the code (or send it in a private message); it would be interesting to see. Maybe something new will be found.

By the way, Alglib uses a random set of predictors (50% of the total number by default) to choose the split at each node. This seems to be the standard approach from the creators of Random Forest. The result is a wide variety of trees.
But it is hard to find the best ones, since the difference in the final error is no more than 1%. That is, all the trees arrive at approximately the same result, but in one tree a given predictor was split on earlier, and in another tree later (because earlier it had been excluded from the candidate list for splitting).
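The per-node feature subsampling described above can be sketched as follows; this is a toy illustration of the general Random Forest idea, not Alglib's actual code, and all names are invented:

```python
# Sketch of per-node feature subsampling as in Random Forest:
# at each split, only a random fraction of the predictors (here 50%)
# is considered as split candidates.
import random

def candidate_features(n_features, frac=0.5, rng=random):
    """Pick a random subset of feature indices for one node's split."""
    k = max(1, int(n_features * frac))
    return rng.sample(range(n_features), k)

random.seed(42)
subset = candidate_features(10)   # e.g. 5 of 10 predictors for this node
```

Because a different subset is drawn at every node, the "best" predictor is sometimes unavailable early in a tree and gets split on later instead, which is exactly why the trees differ while converging to similar error.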


Predictor selection is a problem in general. For testing 100 predictors I am thinking of doing a full search: add them one at a time and keep the results that improve. If you end up excluding the root predictor 40 times after complex calculations, then maybe a full enumeration is simpler? Or do you have on the order of a thousand predictors there?
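The "add one at a time and keep the improving results" idea is greedy forward selection. A minimal sketch, where `score` is a hypothetical stand-in for whatever quality metric (e.g. cross-validated accuracy) you compute on a candidate feature set:

```python
# Greedy forward feature selection: repeatedly add the single feature
# that most improves the score; stop when no addition helps.
def forward_select(all_features, score):
    chosen, best = [], float("-inf")
    improved = True
    while improved:
        improved = False
        best_f = None
        for f in set(all_features) - set(chosen):
            s = score(chosen + [f])
            if s > best:
                best, best_f, improved = s, f, True
        if improved:
            chosen.append(best_f)
    return chosen, best

# toy example: only features 0 and 2 actually help; each extra
# feature costs a small penalty
useful = {0, 2}
score = lambda fs: sum(f in useful for f in fs) - 0.1 * len(fs)
chosen, best = forward_select(range(5), score)
```

Note this is O(n) model fits per added feature, so with ~600 predictors it grows quickly, which matches the "full enumeration is no longer realistic" concern below.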

The point is that even if you take a random 50% of all predictors, there is still a deterministic choice among those 50% for the first (root) split (or is that not the case in Alglib?). CatBoost has not only random selection of predictors but also randomized splits (random weights are added to the score calculations) in the first trees.
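For reference, CatBoost exposes this randomness through training parameters. The sketch below uses parameter names as I recall them from the CatBoost documentation (`rsm` for per-split feature subsampling, `random_strength` for noise added to split scoring); verify them against your CatBoost version before relying on this:

```python
# Hedged sketch of CatBoost parameters controlling the randomness
# discussed above; values are arbitrary examples, not recommendations.
params = {
    "rsm": 0.5,              # fraction of features considered per split
    "random_strength": 1.0,  # magnitude of noise added to split scores
    "iterations": 500,
}
# would be passed as CatBoostClassifier(**params) in a real run
```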

I get different results, and my goal is not to evaluate the whole model but to obtain leaves that describe most of the sample with high probability. Such leaves are then checked on history year by year, and a composition is assembled from them. It may not describe the whole market, but I think it is better to have more accurate answers for what you do know than to guess with 50% probability most of the time.
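That leaf-filtering idea can be sketched like this; the leaf statistics, field names, and thresholds below are invented purely for illustration:

```python
# Sketch: keep only tree leaves whose historical accuracy and sample
# coverage pass thresholds, and act only when a kept leaf fires.
leaves = [
    {"id": 1, "accuracy": 0.68, "coverage": 0.12},
    {"id": 2, "accuracy": 0.51, "coverage": 0.30},  # near coin-flip: drop
    {"id": 3, "accuracy": 0.74, "coverage": 0.05},
]

def select_leaves(leaves, min_acc=0.6, min_cov=0.04):
    """Prefer leaves that answer accurately on enough of the sample."""
    return [lf for lf in leaves
            if lf["accuracy"] >= min_acc and lf["coverage"] >= min_cov]

kept = select_leaves(leaves)    # leaves 1 and 3 survive, leaf 2 is dropped
```

The resulting composition deliberately abstains on most of the market and answers only where the selected leaves have historically been reliable.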

The predictors will now number around 600, so a complete enumeration is no longer realistic.
