Machine learning in trading: theory, models, practice and algo-trading - page 2406

 
Entropy has long been used everywhere as a loss function, i.e. cross-entropy. And anyway, none of that matters much on series whose distribution law keeps drifting; and if it does matter, it is of secondary importance. The solution lies on the surface and has already been suggested in this thread: shaky-wobble works. I won't give the details yet, but I'll write an article someday. P.S. I probably got a bit carried away about "on the surface", but at least it is on the surface for me 😀
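A minimal sketch of what is meant by cross-entropy as a loss function (binary case, numpy only; the function name is illustrative):

```python
import numpy as np

def binary_cross_entropy(y_true, p_pred, eps=1e-12):
    """Average binary cross-entropy (log loss) between labels and predicted probabilities."""
    p = np.clip(p_pred, eps, 1.0 - eps)          # clip to avoid log(0)
    return -np.mean(y_true * np.log(p) + (1.0 - y_true) * np.log(1.0 - p))

# toy check: confident correct predictions vs. an uninformative 0.5 everywhere
y = np.array([1, 0, 1, 1])
print(binary_cross_entropy(y, np.array([0.9, 0.1, 0.8, 0.7])))  # small loss
print(binary_cross_entropy(y, np.full(4, 0.5)))                 # ~0.693 = ln 2
```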
 
Reference on metrics, I don't know some of them myself: https://russianblogs.com/article/7586220986/
 
Maxim Dmitrievsky:
Reference on metrics, I don't know some of them myself: https://russianblogs.com/article/7586220986/

If the features are uniformly distributed in a unit cube, the Chebyshev metric intuitively seems the most correct one. Another thing is that arbitrary features are unlikely to be normalized that well.
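A small illustration of the Chebyshev (L-infinity) metric on features uniformly distributed in the unit cube, using scipy; the data here is synthetic:

```python
import numpy as np
from scipy.spatial.distance import chebyshev, euclidean

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(5, 3))     # 5 samples, 3 features, uniform in the unit cube

a, b = X[0], X[1]
print("Chebyshev:", chebyshev(a, b))        # largest coordinate-wise difference, max|a_i - b_i|
print("Euclidean:", euclidean(a, b))        # for comparison
print("manual   :", np.max(np.abs(a - b)))  # same as the Chebyshev value
```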

 
Aleksey Nikolayev:

If the features are uniformly distributed in a unit cube, the Chebyshev metric intuitively seems the most correct one. Another thing is that arbitrary features are unlikely to be normalized that well.

I experimented with normalization, I get decent losses in the models; it's better without it. That's why tree forests, not neural networks.
 
Maxim Dmitrievsky:
I experimented with normalization, I get decent losses in the models; it's better without it. That's why tree forests, not neural networks.
Similar conclusions - only trees. It's especially fun when the maxima get updated and everything shifts. You can of course set the maxima manually or automatically (for each feature), but that's a crutch.
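What "the maxima get updated and everything shifts" looks like with plain min-max scaling; minmax_scale here is just an illustrative helper, not anyone's library code:

```python
import numpy as np

def minmax_scale(x, lo=None, hi=None):
    """Min-max scaling to [0, 1]; lo/hi can be pinned manually to stop the drift."""
    lo = np.min(x) if lo is None else lo
    hi = np.max(x) if hi is None else hi
    return (x - lo) / (hi - lo)

prices = np.array([1.10, 1.12, 1.15, 1.18])
print(minmax_scale(prices))              # scaled against the current maximum 1.18

prices_new = np.append(prices, 1.30)     # a new maximum arrives
print(minmax_scale(prices_new)[:4])      # the same old points now map to different values

# the "crutch": fix the bounds per feature so old values keep their meaning
print(minmax_scale(prices_new, lo=1.00, hi=1.50))
```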
 
Maxim Dmitrievsky:
I experimented with normalization, I get decent losses in the models; it's better without it. That's why tree forests, not neural networks.

I am also inclined (thanks in part to your work) towards something like xgboost. But normalization, as well as general preparatory research work with the features, never hurts. A flexible approach to building a custom objective function is also needed.
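For what a custom objective looks like in practice, here is a sketch with XGBoost's native API (the objective itself is just squared error, chosen only to show the plumbing; the data is synthetic):

```python
import numpy as np
import xgboost as xgb

def squared_error_obj(preds, dtrain):
    """Custom objective: gradient and Hessian of 0.5 * (pred - label)^2."""
    labels = dtrain.get_label()
    grad = preds - labels           # first derivative w.r.t. the prediction
    hess = np.ones_like(preds)      # second derivative
    return grad, hess

X = np.random.rand(200, 5)
y = X[:, 0] * 2.0 + np.random.rand(200) * 0.1
dtrain = xgb.DMatrix(X, label=y)

booster = xgb.train({"max_depth": 3, "eta": 0.1}, dtrain,
                    num_boost_round=50, obj=squared_error_obj)
```

LightGBM and CatBoost accept user-defined objectives in a similar gradient/Hessian form.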

 
Maxim Dmitrievsky:
That's why tree forests, not neural networks.

Yes, a few years ago someone in this thread wrote that idea - he suggested forests and compared neural networks to nuclear weapons. He said they should be used only when other methods cannot help at all. But a certain Maxim then threw his poop at it.

I wonder... Was he right?

 
Aleksey Nikolayev:

I am also inclined (thanks in part to your work) towards something like xgboost. But normalization, as well as general preparatory research work with the features, never hurts. A flexible approach to building a custom objective function is also needed.

I recommend LightGBM or CatBoost; XGBoost lags behind.

In essence it turns out that any preprocessing kills the alpha. That is, if you take the increments and then keep drying them out further. Ideally you would take the original series (the quotes), but it can't be trained on because of its non-stationarity. This can be clearly seen in the article about fractional differentiation (we trample the market's memory). The more transformations are applied, the less of anything is left.
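The fixed-window fractional differencing discussed in that article (following de Prado) can be sketched roughly like this; the function names, the window and d=0.4 are illustrative:

```python
import numpy as np

def fracdiff_weights(d, size):
    """Weights of the fractional difference operator (1 - B)^d, truncated to `size` terms."""
    w = [1.0]
    for k in range(1, size):
        w.append(-w[-1] * (d - k + 1) / k)
    return np.array(w)

def fracdiff(series, d, window=20):
    """Fixed-window fractional differencing: d=0 keeps the series, d=1 gives plain increments."""
    w = fracdiff_weights(d, window)[::-1]             # oldest weight first
    out = np.full(len(series), np.nan)
    for t in range(window - 1, len(series)):
        out[t] = np.dot(w, series[t - window + 1: t + 1])
    return out

prices = np.cumsum(np.random.randn(500)) + 100.0      # synthetic "quotes"
fd = fracdiff(prices, d=0.4)                          # part of the memory kept, closer to stationary
```

With d between 0 and 1 part of the series' memory is preserved, which is exactly the trade-off the post refers to.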

 
Dmytryi Nazarchuk:

Yes, a few years ago someone in this thread wrote that idea - he suggested forests and compared neural networks to nuclear weapons. He said they should be used only when other methods cannot help at all. But a certain Maxim then threw his poop at it.

I wonder... Was he right?

You'd better give a link as proof, I have no idea what that was about.

 
elibrarius:
Similar conclusions - only trees. It's especially fun when the maxima get updated and everything shifts. You can of course set the maxima manually or automatically (for each feature), but that's a crutch.

Yes, no matter how you turn it, it comes out nonsense.
