Machine learning in trading: theory, models, practice and algo-trading - page 3432

 
Maxim Dmitrievsky #:

which makes the data comparable

Normalisation pushes all predictors onto the same scale, while standardisation makes the behaviour of one predictor comparable under different external conditions. Or do you mean a different kind of normalisation? If not, what would make different predictors comparable for tree-based models?
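A minimal sketch of the distinction being discussed (my own illustration, not from the posts, using NumPy and made-up predictor values):

```python
import numpy as np

def normalise(x):
    """Min-max normalisation: rescales a predictor onto the [0, 1] range."""
    return (x - x.min()) / (x.max() - x.min())

def standardise(x):
    """Z-score standardisation: zero mean, unit variance."""
    return (x - x.mean()) / x.std()

# Hypothetical predictors from two different instruments/sources
a = np.random.normal(loc=1.10, scale=0.01, size=1000)   # e.g. EURUSD-like values
b = np.random.normal(loc=150.0, scale=1.50, size=1000)  # e.g. USDJPY-like values

# After either transform the two predictors live on comparable scales
print(normalise(a).min(), normalise(a).max())       # ~0.0 ... ~1.0
print(standardise(b).mean(), standardise(b).std())  # ~0.0, ~1.0
```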

 
Aleksey Vyazmikin #:

Normalisation pushes all predictors onto the same scale, while standardisation makes the behaviour of one predictor comparable under different external conditions. Or do you mean a different kind of normalisation? If not, what would make different predictors comparable for tree-based models?

In feature space, points become more comparable in both cases.

The catch is that making them more comparable in-sample (IS) makes them less comparable out-of-sample (OOS) on non-stationary series.

So there is a pitfall waiting for us everywhere.

* I'm writing in the context of a multicurrency system with data from different sources.
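A minimal sketch of that IS/OOS pitfall (my own illustration with hypothetical drifting data): a min-max scaler fitted on the in-sample part no longer bounds the out-of-sample part when the series level has shifted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical non-stationary series: the level drifts over time
series = np.cumsum(rng.normal(0.01, 1.0, size=2000))

is_part, oos_part = series[:1000], series[1000:]

# Fit min-max normalisation on the in-sample part only
lo, hi = is_part.min(), is_part.max()
scale = lambda x: (x - lo) / (hi - lo)

print(scale(is_part).min(), scale(is_part).max())    # stays within [0, 1]
print(scale(oos_part).min(), scale(oos_part).max())  # falls outside [0, 1] once the level shifts
```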
 
Maxim Dmitrievsky #:

In feature space, points become more comparable in both cases.

The catch is that making them more comparable in-sample (IS) makes them less comparable out-of-sample (OOS) on non-stationary series.

So there is a pitfall waiting for us everywhere.

I don't understand the idea - for networks or clustering it may be useful, but where is the benefit for trees? The algorithm doesn't need it.

Besides, if you use CatBoost, it effectively performs normalisation already through quantisation, keeping the same number and range of predictor values. You can run the quantisation yourself and save the sample with the indices of the quantum segments, and then such normalisation behaves consistently on new data.
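A minimal sketch of that idea (my own illustration, not Aleksey's actual procedure or CatBoost internals): fit quantile-based bin edges on the training sample, store them, and map new data into the same quantum segments.

```python
import numpy as np

def fit_quantiser(x, n_bins=255):
    """Compute quantile-based bin edges on the training sample."""
    qs = np.linspace(0.0, 1.0, n_bins + 1)
    return np.unique(np.quantile(x, qs))

def quantise(x, edges):
    """Replace raw values with the index of their quantum segment (bin)."""
    return np.clip(np.digitize(x, edges[1:-1]), 0, len(edges) - 2)

rng = np.random.default_rng(1)
train = rng.normal(0.0, 1.0, size=5000)
new_data = rng.normal(0.5, 1.2, size=1000)  # hypothetical shifted out-of-sample data

edges = fit_quantiser(train, n_bins=32)
train_bins = quantise(train, edges)
new_bins = quantise(new_data, edges)  # same segment numbers, same range, on new data

print(train_bins.min(), train_bins.max(), new_bins.min(), new_bins.max())
```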

 
Aleksey Vyazmikin #:

I don't understand the idea - for networks or clustering it may be useful, but where is the benefit for trees? The algorithm doesn't need it.

Besides, if you use CatBoost, it effectively performs normalisation already through quantisation, keeping the same number and range of predictor values. You can run the quantisation yourself and save the sample with the indices of the quantum segments, and then such normalisation behaves consistently on new data.

Maxim Dmitrievsky #:
* I'm writing in the context of a multicurrency system with data from different sources.
And you don't have to understand me, you just have to love me :)
 
Maxim Dmitrievsky #:
And you don't have to understand me, you just have to love me :)

Well, then I won't discuss your ideas - I would waste my time and get no result for myself. That's how it turns out.

 
Aleksey Vyazmikin #:

Well, then I won't discuss your ideas - I would waste my time and get no result for myself. That's how it turns out.

That's a quote

 

That's it, I'll be adding fractals now...ready? 😈


 
Maxim Dmitrievsky #:

That's it, I'll be adding fractals now...ready? 😈

What's on the chart?

 
fxsaber #:

What's on the chart?

balances after training

 
Maxim Dmitrievsky #:

balances after training

The chart of the number of open trades is missing at the bottom.
