Machine learning in trading: theory, models, practice and algo-trading - page 1776

Those "chances", as you put it, can be stacked, which is why they are kept in that form.
Yes, on the new data, but I've now realized the target is wrong: I took the actual ZZ (ZigZag) vector with an offset, which is incorrect.
I'll have to write a script to extract the target properly.
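A minimal sketch of what such a target-extraction script could look like, assuming the ZZ vector is a ZigZag export with values at swing points and NaN elsewhere, and that the goal is to label every bar with the direction of the segment it lies on instead of a shifted copy of the series (the column names and the export format are my assumptions):

```python
import numpy as np
import pandas as pd

def zz_target(zz: pd.Series) -> pd.Series:
    """Label each bar with the direction of the ZigZag segment it lies on.

    Assumes `zz` holds the indicator value at swing points and NaN elsewhere.
    """
    pivots = zz.dropna()
    # Sign of pivot-to-pivot change = direction of the segment ending at
    # each pivot; shift(-1) turns it into the direction of the segment
    # that *starts* there.
    seg_dir = np.sign(pivots.diff()).shift(-1)
    # Propagate each segment's direction forward to every bar inside it.
    return seg_dir.reindex(zz.index).ffill()

# Usage: df["target"] = zz_target(df["zz"])
# The last, unfinished segment gets NaN and should be dropped from
# training rather than patched with an offset copy of the series.
```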
So what's up? What's the result?
I saw it somewhere in the tutorials... I think it's more convenient to do it during pretraining, or something along those lines.
Maxim, you seem to be doing clustering now.
This article shows that a random forest is similar to clustering:
https://habr.com/ru/company/ods/blog/324402/
Section "Similarity of random forest to k-nearest neighbor algorithm".
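The similarity the article describes is easy to demonstrate: with bagging turned off, a forest's prediction is exactly a weighted average of the training targets of the points that share leaves with the query, i.e. an adaptive nearest-neighbor scheme. A small sketch on synthetic data (all parameters illustrative):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=300, n_features=5, random_state=0)
# bootstrap=False: every tree averages over the full training set,
# which makes the neighbor-weight identity exact.
rf = RandomForestRegressor(n_estimators=50, bootstrap=False,
                           random_state=0).fit(X, y)

x0 = X[:1]                    # query point
leaves_q = rf.apply(x0)[0]    # leaf of x0 in each tree, shape (n_trees,)
leaves_tr = rf.apply(X)       # leaves of all points, (n_samples, n_trees)

match = leaves_tr == leaves_q                        # shares a leaf with x0
weights = (match / match.sum(axis=0)).mean(axis=1)   # adaptive k-NN weights

print(np.dot(weights, y))     # weighted-neighbor average ...
print(rf.predict(x0)[0])      # ... equals the forest's prediction
```

So the forest acts like k-NN with a data-driven notion of "nearest", which is why it can serve clustering-like purposes.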
How am I doing... I started and then gave up. :) Yes, a forest can cluster too.
As for clustering itself, it separates the increments into 3 groups quite well, including on new data. It makes sense to use the cluster labels as categorical features; that's what I wanted to do.
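A minimal sketch of that idea, assuming "increments" means bar-to-bar price changes; the synthetic series and the single 1-D feature are only illustrative:

```python
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans

# Synthetic stand-in for a price series.
rng = np.random.default_rng(0)
df = pd.DataFrame({"close": 100 + rng.normal(size=1000).cumsum()})
df["ret"] = df["close"].diff()

# Fit 3 clusters on the training portion of the increments only.
train = df["ret"].iloc[1:800].to_numpy().reshape(-1, 1)
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(train)

# Label all bars, including new data, and feed the label to the
# downstream model as a categorical feature.
mask = df["ret"].notna()
df.loc[mask, "ret_group"] = km.predict(
    df.loc[mask, "ret"].to_numpy().reshape(-1, 1))
```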
THIS IS VICTORY!!! Brothers!!! HURRAAAH!!! Happy holidays, everyone.
Because as soon as we forget that war, another one will start immediately. Let's remember it always!!! VICTORYYY!!! Pew, pew (that's me firing my imaginary TT pistol into the air and running down the street in my officer's uniform).
So what's up? What accuracy did you get?
10 CatBoost models with tree depth 6, early stopping when 100 new trees bring no improvement, random seed stepped in increments of 100.
Training sample: 80% of 2018-2019; the remaining 20% used to control the early stop. Independent sample: January-May 2020.
If you torture the sample with different partitioning methods and build more models, I think you can get to 72%.
[Attached chart: classification balance]
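A sketch of how that setup could look with the CatBoost Python package, under my reading that "increments of 100" refers to the random seed being stepped by 100 (which would also be what distinguishes the 10 models); the data here is synthetic:

```python
import numpy as np
from catboost import CatBoostClassifier

rng = np.random.default_rng(0)
# Stand-ins for the post's data: X_train/y_train = 80% of 2018-2019,
# X_eval/y_eval = the 20% early-stopping control, X_new = Jan-May 2020.
X = rng.normal(size=(5000, 20))
y = (X[:, 0] + rng.normal(size=5000) > 0).astype(int)
X_train, y_train = X[:4000], y[:4000]
X_eval, y_eval = X[4000:], y[4000:]
X_new = rng.normal(size=(500, 20))

models = []
for seed in range(100, 1100, 100):   # 10 models, seed stepped by 100
    m = CatBoostClassifier(depth=6,
                           od_type="Iter", od_wait=100,  # stop after 100
                           random_seed=seed,             # idle iterations
                           verbose=False)
    m.fit(X_train, y_train, eval_set=(X_eval, y_eval))
    models.append(m)

# Average the 10 probability forecasts on the independent sample.
proba = np.mean([m.predict_proba(X_new)[:, 1] for m in models], axis=0)
pred = (proba > 0.5).astype(int)
```

Averaging the probabilities is one simple way to combine the 10 models; the post doesn't say how they were actually combined.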
Well... nice and plausible. I'd like to see the trading balance and a chart with the entries.
What is the difference between these 10 models?