Discussion of article "Gradient Boosting (CatBoost) in the development of trading systems. A naive approach" - page 7
This is a completely wrong approach. You can't build a model on the same dataset used for both training and testing and then claim it works; that is useless and baseless. It's called curve fitting the model. Such models look good on paper but will never work in the real world. Please use a correct approach to machine learning; there are several ways to do it, but yours is completely wrong.
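The complaint above is about evaluating on the training data. A minimal sketch of the honest alternative, using scikit-learn's GradientBoostingClassifier as a stand-in for CatBoost (the synthetic data and all names here are illustrative, not from the article):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for price-based features and labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# Hold out data the model never sees during training; shuffle=False
# keeps the split chronological, which matters for time series.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, shuffle=False)

model = GradientBoostingClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# The held-out score is the honest estimate; the training score
# overstates performance (that overstatement is the curve fitting).
print("train accuracy:", accuracy_score(y_train, model.predict(X_train)))
print("test  accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

The same split logic applies unchanged if CatBoostClassifier is substituted for the scikit-learn model.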
I think you are both completely wrong, guys. The article is marked as a 'naive approach', as an introduction to the CatBoost model. No one forbids you to test the model on new data. Also, at the end of the article you can see a test on new data plus the learning period.
Just read the articles more carefully, because the next part is on the way.

I must have burnt my eyes while figuring out what you're doing here. Everything is simple to the point of impossibility, but it's still hard to follow someone else's thoughts...
What can I say?
1) The features are definitely not the best; there is huge room for improvement.
2) It is not necessary to shuffle the data before training the classifier (I checked it).
3) What is really valuable is the target; it is what carries the whole article (I compared it with ZZ, and ZZ is rubbish).
In general, the approach turned out to be the following: tweak everything as much as possible "just to get a good result on validation".
And we select the best model ))))
And I managed to "tweak" it this way.
In fact, it's the same approach I was talking about when I optimised the weights of a neural network, only there I tweaked straight for maximal profit.
Well, I don't know what else to write...
It doesn't work like that without GMM. Plus, you didn't check how short an interval you can train on and how long the model lives on new data. That's not the limit; there are ways to extend its life. The whole approach is an organic whole: you can't separate the parts.
Yes, you can try more meaningful ones. Use it, improve it.
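The GMM step referred to here fits a Gaussian mixture to the training examples and samples synthetic ones from it. A minimal sketch with scikit-learn's GaussianMixture (the toy data and component count are assumptions for illustration, not the article's settings):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Toy feature matrix standing in for the labeled trading dataset.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2.0, 1.0, size=(200, 3)),
               rng.normal(+2.0, 1.0, size=(200, 3))])

# Fit a mixture of Gaussians to the feature distribution...
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)

# ...then sample synthetic examples from it to augment training data.
X_new, components = gmm.sample(500)
print(X_new.shape)  # (500, 3)
```

Training on a mix of real and GMM-sampled examples is one way to smooth the dataset and reduce overfitting to a short training interval.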
I'll try it with gmm tomorrow.
Oh, that's the article you wrote about. Yeah. It's an introductory article to keep the reader from going completely off the rails.
Yeah, I thought I'd go through it in order
In general, without shuffling, the system predictably starts losing money right away.
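If results only hold up with shuffled data, the gain can come from look-ahead leakage, since shuffling lets future bars inform the training folds. A stricter, leak-free check is chronological validation, sketched here with scikit-learn's TimeSeriesSplit (the tiny array is purely illustrative):

```python
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

# TimeSeriesSplit validates without shuffling: each fold trains on
# the past and tests on the future, so there is no look-ahead leakage.
X = np.arange(10).reshape(-1, 1)
tscv = TimeSeriesSplit(n_splits=3)
for train_idx, test_idx in tscv.split(X):
    # Every test index comes strictly after every train index.
    assert train_idx.max() < test_idx.min()
    print(train_idx, test_idx)
```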
Now you can move on to the next article.