Machine learning in trading: theory, models, practice and algo-trading - page 2594

 
mytarmailS #:
Criticism from on high again.
It's six of one, half a dozen of the other.
Try to be logical.
 
Maxim Dmitrievsky #:
It's six of one, half a dozen of the other.
Try to be logical.
State your point instead of this nonsense of "it won't work because it won't work."
 
mytarmailS #:
State your point instead of this nonsense of "it will work because it won't work."
I see - no imagination at all. The OP won't cover the whole space of model variants; you would have to take whatever it optimized, since it settles on the best variant by itself. Off to the factory with you, in short. You take on things without even a rough understanding of what you are working with. And in the case of boosting it is impossible to build the OP at all, because the number of parameters grows at every iteration. And of course you have never heard of regularization. You still think you can weed out some of the peaks by hand and do it well.
 
Maxim Dmitrievsky #:
I see - no imagination at all. The OP won't cover the whole space of model variants; you would have to take whatever it optimized, since it settles on the best variant by itself. Off to the factory with you, in short. You take on things without even a rough understanding of what you are working with. And in the case of boosting it is impossible to build the OP at all, because the number of parameters grows at every iteration.
Yeah...
Aleksey here understood right away how it can be done.
You still don't get it after the tenth time... Come up with a name for that yourself)
 
mytarmailS #:
Yeah...
Aleksey here understood right away how it can be done.
You still don't get it after the tenth time... Come up with a name for that yourself)
Look who's talking. It's easy to do, but there is no point. Fine, stay in your swamp ) You will just end up at Bayesian ML models that don't need to be retrained every moment.
 
There are more interesting questions about applying ML in trading. For example, an algorithm for deciding which interval of history to take for training. It can probably be specified by some meta-parameters that are optimized by cross-validation. I really should read Prado. )
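Treating the training-window length as a meta-parameter tuned by walk-forward validation could be sketched roughly like this. Everything below is a hypothetical illustration on synthetic data: the "model" is just the sign of the window's mean return, and all function names are made up.

```python
import numpy as np

def walk_forward_score(y, window, horizon):
    """Walk forward through the series: 'fit' a toy model (the sign of the
    training-window mean) and score it on the next out-of-sample block."""
    scores = []
    for end in range(window, len(y) - horizon, horizon):
        train = y[end - window:end]
        pred = np.sign(train.mean())             # toy "model": a direction bet
        oos = y[end:end + horizon]
        scores.append(float(pred * oos.mean()))  # directional payoff on OOS
    return float(np.mean(scores)) if scores else float("-inf")

def best_window(y, candidates, horizon=20):
    """Treat the training-window length as a meta-parameter and pick the
    candidate with the best average out-of-sample score."""
    return max(candidates, key=lambda w: walk_forward_score(y, w, horizon))

rng = np.random.default_rng(0)
y = rng.normal(0.01, 1.0, 2000)  # synthetic returns with a small drift
w = best_window(y, candidates=[50, 100, 250, 500])
print(w)
```

A real version would replace the sign-of-mean stub with an actual model fit, but the selection loop over window lengths stays the same.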
 
Aleksey Nikolayev #:
There are more interesting questions about applying ML in trading. For example, an algorithm for deciding which interval of history to take for training. It can probably be specified by some meta-parameters that are optimized by cross-validation. I really should read Prado. )

I wanted to write that the more data the better, but then I remembered one of my small experiments (it was done without sufficient representativeness, so the result may well be random, but still). Namely: there are two markets - by my subjective assessment the first is more efficient, the second less so. The model trained on the more efficient market gave worse OOS results on its own market than the model trained on the less efficient market gave on the same kind of segment.

 
Models often stop working at one point in time, regardless of the size of the training set. I trained on samples of different lengths, and all of them stopped working at a certain point in past history. From that you can see that some pattern has disappeared or changed.

And if you enlarge the training set so that it covers that point, the model can learn the noise and not work at all in the future. Growing the training set to a gigantic size is also an evil.

This way you can find the length of the segment where the model works, and then train on all of it, which will somewhat improve performance for a while.
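The idea of locating the point in past history where the model "stops working" can be sketched as follows. This is a toy illustration with a synthetic regime change and a stub model (a fixed direction bet), not anyone's actual method: score the model block by block and walk back from the present until the score turns negative.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic returns with a regime change: the pattern flips at index 1500.
y = np.concatenate([rng.normal(-0.05, 1.0, 1500), rng.normal(0.05, 1.0, 1000)])

def block_scores(y, model_sign, block=100):
    """Score a fixed toy 'model' (a direction bet) on consecutive blocks."""
    return [float(model_sign * y[i:i + block].mean())
            for i in range(0, len(y) - block + 1, block)]

# The "model" fitted on recent history just bets the sign of the last segment.
model_sign = np.sign(y[-500:].mean())
scores = block_scores(y, model_sign)

# Walk back from the most recent block: the length of the trailing run of
# positive scores approximates the segment where the model still works.
run = 0
for s in reversed(scores):
    if s <= 0:
        break
    run += 1
print(run * 100)  # approximate length (in bars) of the working segment
```

One could then retrain on exactly that trailing segment, which is the procedure described in the post above.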
 
Maxim Dmitrievsky #:
Models often stop working at one point in time, regardless of the size of the training set. I trained on samples of different lengths, and all of them stopped working at a certain point in past history. From that you can see that some pattern has disappeared or changed.

It follows, then, that you need to train on the shortest possible segment, so that after a pattern change the new pattern is picked up faster.

For example, if you train on 12 months, then 6 months after a pattern change the new and old patterns will be mixed 50/50 in the training set. Only after about a year will training and trading be on the new pattern alone. That is, for almost a whole year the model was trading an outdated pattern and most likely losing.

If you train on 1 month, the model will learn to work correctly again within a month.

It would be good to train on 1 week... but I don't have enough data.
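The 12-month / 50-50 arithmetic above can be written out as a one-line formula (a trivial illustration, not anyone's actual code): the share of new-pattern data in a rolling window is the months elapsed since the change, capped at the window length, divided by the window length.

```python
def new_pattern_share(window_months, months_since_change):
    """Fraction of 'new pattern' data in a rolling training window of the
    given length, some months after a regime change."""
    return min(months_since_change, window_months) / window_months

assert new_pattern_share(12, 6) == 0.5   # 12-month window, 6 months in: 50/50
assert new_pattern_share(12, 12) == 1.0  # fully on the new pattern after a year
assert new_pattern_share(1, 1) == 1.0    # a 1-month window adapts in one month
```

This makes the trade-off explicit: a shorter window adapts faster to a new pattern, at the cost of fitting on less (and noisier) data.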

 
Maxim Dmitrievsky #:
Models often stop working at one point in time, regardless of the size of the training set. I trained on samples of different lengths, and all of them stopped working at a certain point in past history. From that you can see that some pattern has disappeared or changed.

And if you enlarge the training set so that it covers that point, the model can learn the noise and not work at all in the future. Growing the training set to a gigantic size is also an evil.

This way you can find the length of the segment where the model works, and then train on all of it, which will somewhat improve performance for a while.

About the noise - yes. Though I hadn't thought about it in terms of taking sections of history with and without noise. And by the way, how do you determine that before training the model? Iteratively? Train on the whole segment, see where performance was best, keep only those sections, and then train only on them? Here a second question arises, which, pending experimental confirmation, may be called philosophical: is it better for the model to see different sections right away, including noisy ones, and thus train on noisier data on average, or to learn from cleaner data but never see the noisy data at all?


What's wrong with gigantic sizes, apart from the increased computation time?
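The iterative procedure Aleksey describes ("train on everything, keep the sections where performance was best, retrain only on those") could be sketched like this. Again a toy illustration: synthetic segments alternating between "clean" (strong signal) and "noisy" (no signal), with a stub model standing in for a real one.

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic data: alternating "clean" (drift 0.2) and "noisy" (no drift) segments.
n_seg, seg_len = 10, 200
segs = [rng.normal(0.2 if i % 2 == 0 else 0.0, 1.0, seg_len) for i in range(n_seg)]

def fit(data):
    """Toy 'model': just the mean direction of its training data."""
    return np.sign(np.concatenate(data).mean())

def score(model, seg):
    """Directional payoff of the model on one segment."""
    return float(model * seg.mean())

# Pass 1: train on all of history, then score each segment separately.
m0 = fit(segs)
scores = [score(m0, s) for s in segs]

# Pass 2: keep only the above-median segments and retrain on them alone.
cut = float(np.median(scores))
kept = [s for s, sc in zip(segs, scores) if sc >= cut]
m1 = fit(kept)
print(len(kept), score(m1, segs[0]))
```

This also frames the "philosophical" question: m1 never saw the dropped noisy segments, so whether it generalizes better than m0 there is exactly what would need experimental confirmation.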
