Machine learning in trading: theory, models, practice and algo-trading - page 3432

which makes the data comparable
Normalisation pushes all predictors onto the same scale, while standardisation makes the behaviour of a single predictor comparable across different external conditions. Or do you mean a different kind of normalisation? If not, what would make different predictors comparable for tree-based models?
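For readers following the distinction, a minimal NumPy sketch of the two transforms being contrasted (the data here is synthetic and the variable names are hypothetical):

```python
# Min-max normalisation puts every predictor on the same [0, 1] scale;
# z-score standardisation centres each predictor and expresses it in
# units of its own volatility.
import numpy as np

rng = np.random.default_rng(0)
# Two predictors on very different scales.
X = rng.normal(loc=[0.0, 100.0], scale=[1.0, 25.0], size=(1000, 2))

# Min-max normalisation: same [0, 1] range for every column.
X_norm = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

# Z-score standardisation: zero mean, unit variance per column.
X_std = (X - X.mean(axis=0)) / X.std(axis=0)

print(X_norm.min(axis=0), X_norm.max(axis=0))  # ~[0 0] and [1 1]
print(X_std.mean(axis=0), X_std.std(axis=0))   # ~[0 0] and [1 1]
```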
In feature space, the points become more comparable in both cases.
The catch is that on non-stationary series, making them more comparable in-sample (IS) makes them less comparable out-of-sample (OOS).
So there is a trap waiting for us everywhere.
* I'm writing in the context of a multicurrency system with data from different sources.
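That trap is easy to reproduce. A small sketch with a synthetic random walk (illustrative only, all names hypothetical): min-max borders are fitted in-sample and then applied unchanged out-of-sample.

```python
# On a non-stationary series, scaling parameters fitted in-sample no
# longer describe the out-of-sample data, so the "same scale" guarantee
# silently breaks.
import numpy as np

rng = np.random.default_rng(1)
price = np.cumsum(rng.normal(size=2000))        # non-stationary random walk
is_part, oos_part = price[:1000], price[1000:]  # IS / OOS split

lo, hi = is_part.min(), is_part.max()           # borders fitted on IS only
oos_scaled = (oos_part - lo) / (hi - lo)        # applied unchanged to OOS

# On a random walk the OOS values routinely leave the fitted [0, 1] range.
print(f"OOS scaled range: [{oos_scaled.min():.2f}, {oos_scaled.max():.2f}]")
```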
I don't see the point - for networks or clustering it may be useful, but where is the benefit for trees? The algorithm doesn't need it.
Besides, if you use CatBoost, it effectively performs normalisation already through quantisation, preserving the number and range of predictor values. You can run the quantisation yourself and save the sample as quantum-segment numbers; that kind of normalisation will then behave consistently on new data.
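A minimal NumPy sketch of that save-the-borders idea (illustrative only; CatBoost has its own internal quantisation, this merely mimics the concept): fit quantile borders on the training sample, persist them, and map any new data to the same quantum-segment numbers.

```python
# Quantile binning: borders come from the training sample only, then the
# same borders are reused to turn new data into bin (segment) numbers.
import numpy as np

rng = np.random.default_rng(2)
train = rng.normal(size=5000)
new = rng.normal(loc=0.5, size=1000)  # shifted "new" data

n_bins = 254  # border count in the same ballpark as CatBoost defaults
borders = np.quantile(train, np.linspace(0, 1, n_bins + 1)[1:-1])  # inner borders

train_bins = np.digitize(train, borders)  # quantum-segment numbers, 0..n_bins-1
new_bins = np.digitize(new, borders)      # same borders reused on new data

np.save("borders.npy", borders)           # persist the borders for later use
```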
And you don't have to understand me, you just have to love me :)
Well, then I won't discuss your ideas - I would waste my time and get no result for myself - that's how it turns out.
That's a quote
That's it, I'll be adding fractals now...ready? 😈
What's on the chart?
Balances after training.
The chart of the number of open deals is missing at the bottom.