Machine learning in trading: theory, models, practice and algo-trading - page 1279

 
elibrarius:

Figuring it out and making releases are different things. I'm still experimenting; at the moment I'm shuffling predictors. Maybe I'll abandon it, just as I abandoned neural networks for their inability to deal with noise.

It's easy to figure out. It takes a few hours to look at the code and everything becomes clear.

Yes, of course these are different things, but skilled hands can do a lot! It's hard for me to understand someone else's code, so I don't even dig into it.

And the forest is by design always going to be noisy (in theory the noise should cancel out, leaving more confident collective answers), because it works through quantity rather than quality. I think we also need to somehow control the quality of the trees and their uniqueness.

I don't understand why nobody likes my idea of collecting leaves from the trees: then you can either arrange a vote among them or, on the contrary, try to distribute the leaves so that they don't overlap in the sample... There is obviously less noise, but it's important that the rule in the leaf has a real meaning; then it will be stable over time.
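For what it's worth, the leaf-collection idea can be sketched in a few lines. This is a minimal illustration with scikit-learn, not the author's actual code; the purity and population thresholds are made-up assumptions:

```python
# A minimal sketch of the "collect leaves as rules" idea using scikit-learn.
# The forest, thresholds and data here are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
forest = RandomForestClassifier(n_estimators=50, max_depth=4, random_state=0).fit(X, y)

# Treat every leaf of every tree as a standalone rule and keep only those
# whose class purity on the training sample exceeds a threshold.
good_leaves = []  # (tree_index, leaf_id) pairs
for t_idx, tree in enumerate(forest.estimators_):
    leaf_ids = tree.apply(X)                   # leaf each sample falls into
    for leaf in np.unique(leaf_ids):
        mask = leaf_ids == leaf
        purity = y[mask].mean()                # share of class 1 in the leaf
        if purity > 0.8 and mask.sum() >= 20:  # "real meaning": pure and populated
            good_leaves.append((t_idx, leaf))

def vote(x):
    """Let only the selected leaves vote on a single sample x."""
    votes = 0
    for t_idx, leaf in good_leaves:
        if forest.estimators_[t_idx].apply(x.reshape(1, -1))[0] == leaf:
            votes += 1
    return votes

print(len(good_leaves), "leaves kept;", vote(X[0]), "votes for sample 0")
```

Distributing the leaves so they don't overlap would then amount to keeping only leaves whose sample masks are (nearly) disjoint.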

 
Aleksey Vyazmikin:

I don't understand why nobody likes my idea of collecting leaves from the trees: then you can either arrange a vote among them or, on the contrary, try to distribute the leaves so that they don't overlap in the sample... There is obviously less noise, but it's important that the rule in the leaf has a real meaning; then it will be stable over time.

I haven't fully explored random forests yet, so I have no time to be distracted by anything else.
 
elibrarius:
I haven't fully explored random forests yet, so I have no time to be distracted by anything else.

I see. Get to the bottom of it and write up your results - it's interesting.

 

Now I'm experimenting with rather odd CatBoost models: models with a small number of trees (1-30) serve as a starting pool for further training, and I'm assessing whether deeper training with 100-300 trees makes sense on top of them.
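As I understand it (a sketch, not the author's exact pipeline), this corresponds to CatBoost's boosting continuation: fit a small model first, then pass it as `init_model` when training further. Data and iteration counts below are illustrative:

```python
# A sketch of training a small CatBoost model first and then continuing
# boosting from it via init_model; data and sizes are made up.
from catboost import CatBoostClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=3000, n_features=30, random_state=0)

# Shallow "pool" model: 1-30 trees.
small = CatBoostClassifier(iterations=30, depth=4, verbose=False)
small.fit(X, y)

# Deeper training continued on top of the small model: 100-300 trees in total.
deep = CatBoostClassifier(iterations=270, depth=4, verbose=False)
deep.fit(X, y, init_model=small)

print(small.tree_count_, "->", deep.tree_count_, "trees")
```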

Interestingly, after applying the model the first part of the sample (highlighted in blue, though a little more than necessary, because it marked a breakout of the last balance high, which was some time ago) used for training looks very mediocre, and then a pattern appears that the model begins to exploit; on the test sample (on which the model is selected) it is not as obvious as on the exam sample (not involved in training). Perhaps this is a sign of undertraining; whether that is better than overtraining is the question.

 

If anyone is interested, I can post the batch files, the sample, and the settings for running it from the command line - for those who, like me, haven't mastered Python.
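For reference, such a batch file boils down to one call of the standalone CatBoost command-line binary. This is an illustrative sketch with placeholder file names, not the actual batch files or settings mentioned above:

```
rem Illustrative .bat for the standalone CatBoost command-line binary.
rem File names and option values are placeholders.
catboost.exe fit --learn-set train.tsv --test-set test.tsv ^
    --column-description train.cd ^
    --loss-function Logloss --iterations 300 --model-file model.bin
```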

 
Aleksey Vyazmikin:

I'm posting the minute-timeframe variant and attaching the tester's trading report.

The metrics have indeed improved a little.

The Sharpe ratio is now 0.29.
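For readers who don't trade: a Sharpe ratio like the 0.29 above is the mean excess return divided by its standard deviation. A minimal sketch with made-up per-period returns:

```python
# Sharpe ratio = mean excess return / standard deviation of excess returns.
# The returns below are invented for illustration only.
import numpy as np

returns = np.array([0.004, -0.002, 0.003, 0.001, -0.005, 0.006, 0.002])
risk_free = 0.0                       # per-period risk-free rate

excess = returns - risk_free
print(round(excess.mean() / excess.std(ddof=1), 2))
```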

 
elibrarius:

I've finally computed permutation importance and, for comparison, actually retrained the forest with one predictor removed. The results are not at all similar.

The authors of the permutation method experimented with 6 predictors, while I used 65. Maybe with 6 predictors it's easier to separate out a noise input. Plus the forest itself is random, which may have added randomness to the importance ranking.

I'll now run it again on the same data, and in the morning I'll compare the 4 importance tables.

The most similar were the 2 permutation importance tables (just from different runs on the same data).

The importance tables obtained by retraining the forest with one predictor removed are similar neither to each other nor to the permutation tables.
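For context, the two importance estimates being compared can be reproduced like this - a sketch on synthetic data, assuming scikit-learn's `permutation_importance` and a manual drop-column retraining loop:

```python
# Permutation importance on a fitted forest vs. drop-column importance
# (retraining with one predictor removed). Data and sizes are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
base = forest.score(X_te, y_te)

# 1) Permutation importance: shuffle one column, measure the score drop.
perm = permutation_importance(forest, X_te, y_te, n_repeats=10, random_state=0)

# 2) Drop-column importance: retrain the whole forest without the column.
drop = []
for j in range(X.shape[1]):
    X_tr_j = np.delete(X_tr, j, axis=1)
    X_te_j = np.delete(X_te, j, axis=1)
    f_j = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr_j, y_tr)
    drop.append(base - f_j.score(X_te_j, y_te))

for j in range(X.shape[1]):
    print(f"feature {j}: perm={perm.importances_mean[j]:+.4f}  drop={drop[j]:+.4f}")
```

Because the forest is retrained from scratch for each dropped column, the drop-column ranking inherits the forest's own randomness, which is one plausible reason the tables disagree.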

 
Hello, machine learners!

Is this chart a tester grail?


 
Alexander Ivanov:
Hello, machine learners!

Is this chart a tester grail?

That's right, there are only tester grails here, and the Marlezon ballet is driven by The Stallion and company :)

 
Kesha Rutov:

That's right, there are only tester grails here, and the Marlezon ballet is driven by The Stallion and company :)

Keshenka, my son.

You make me laugh.

Such depth of thought,

such a turn of phrase.
