Machine learning in trading: theory, models, practice and algo-trading - page 3095

 
Renat Fatkhullin #:
We may release a previously written package for R to the public. We should revise it and add missing functionality.

The previously released Python package has shown explosive growth and continues to grow. We did not expect this to happen.

This would be a very good decision. Willing to participate in testing if needed.

Good luck

 
Vladimir Perervenko #:

That would be a very good solution. Ready to participate in testing if needed.

Good luck

I'm willing to participate, too.
 
With the method suggested in the article, you can't iterate over different models in order to pick the best one (crude brute-force style). That's the multiple comparisons problem: https://en.wikipedia.org/wiki/Multiple_comparisons_problem.
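A quick simulation makes the problem concrete. The sketch below is illustrative only (plain Python, toy data, all names mine): it generates a thousand strategies with zero true edge, then shows that the best in-sample Sharpe ratio of the batch looks impressive purely by selection, while the same strategy out of sample is just noise.

```python
# Multiple comparisons in a nutshell: pick the best of 1000 zero-edge
# strategies in-sample, then look at that same strategy out-of-sample.
import random

random.seed(0)

def sharpe(returns):
    """Naive per-period Sharpe ratio: mean / sample std."""
    m = sum(returns) / len(returns)
    var = sum((r - m) ** 2 for r in returns) / (len(returns) - 1)
    return m / var ** 0.5 if var > 0 else 0.0

n_strategies, n_obs = 1000, 250
is_scores, oos_scores = [], []
for _ in range(n_strategies):
    # Pure noise: none of these strategies has any real edge.
    is_scores.append(sharpe([random.gauss(0, 1) for _ in range(n_obs)]))
    oos_scores.append(sharpe([random.gauss(0, 1) for _ in range(n_obs)]))

best = max(range(n_strategies), key=lambda i: is_scores[i])
print(f"best in-sample Sharpe: {is_scores[best]:.3f}")  # large, by selection
print(f"same strategy OOS:     {oos_scores[best]:.3f}")
```

The in-sample winner is just the luckiest draw; its out-of-sample Sharpe collapses back toward zero.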
 
mytarmailS #:

only one trading system (TS) takes part in one experiment

I struggled with overfitting for a long time and have hinted several times: look into causal inference. All these techniques come from there; Prado was (partially) inspired by it.

It's a generalisation of statistics to ML.

You can do inference on models with respect to a given criterion (treatment). It's about removing bias and variance in the data so that models work better on new data.
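The treatment-effect idea can be shown with a toy example. The sketch below is a minimal illustration on synthetic data (all names and numbers are mine, not from any package): a confounder drives both the "treatment" and the outcome, so a naive comparison is biased, while a stratified estimate recovers the true effect. Stratification here is a toy stand-in for what causal-inference tooling does with propensity scores.

```python
# Minimal causal "treatment effect" estimate on hypothetical data.
import random

random.seed(1)

# Synthetic observational data: confounder z raises both the chance of
# treatment and the outcome, so a naive comparison is biased.
data = []
for _ in range(20000):
    z = random.random() < 0.5                   # binary confounder
    t = random.random() < (0.8 if z else 0.2)   # treatment depends on z
    y = 1.0 * t + 2.0 * z + random.gauss(0, 1)  # true treatment effect = 1.0
    data.append((z, t, y))

def mean_y(rows):
    return sum(y for *_, y in rows) / len(rows)

# Naive difference in means (biased upward by z):
naive = mean_y([r for r in data if r[1]]) - mean_y([r for r in data if not r[1]])

# Stratified estimate: difference in means within each z-stratum, weighted.
ate = 0.0
for z in (False, True):
    stratum = [r for r in data if r[0] == z]
    w = len(stratum) / len(data)
    ate += w * (mean_y([r for r in stratum if r[1]]) -
                mean_y([r for r in stratum if not r[1]]))

print(f"naive estimate:      {naive:.2f}")  # inflated by confounding
print(f"stratified estimate: {ate:.2f}")    # close to the true effect 1.0
```

The same logic, applied to trading, is why "removing bias in the data" matters: the apparent effect of a filter or signal can come from a confounder rather than from the signal itself.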
 
Maxim Dmitrievsky #:
With the method suggested in the article, you can't iterate over different models in order to pick the best one (crude brute-force style). That's the multiple comparisons problem: https://en.wikipedia.org/wiki/Multiple_comparisons_problem.

I'm still curious about the details... how do you propose to select a model to work with out of thousands of options?
That package rather evaluates the likelihood of successfully training the selected target on the selected predictors. If most of the models are successful, then the selected model is likely to be successful too.
There is no selection of one particular model (as I understood it). It has to be selected by another method, which is not discussed in the article. And there are a lot of caveats and restrictions (a couple of pages' worth), some of which I have already recounted.

mytarmailS #: in the article, only one trading system (TS) is involved in the experiment.

The models there are different because they have different indicator parameters, but the set of indicators can be the same. I think this is the source of the confusion.
You could say the strategy is the same, but the models (variants) are different.

 
Forester #:

I'm still curious about the details... how do you propose to select a model to work with out of thousands of options?
That package rather evaluates the likelihood of successfully training the selected target on the selected predictors. If most of the models are successful, then the selected model is likely to be successful too.
There is no selection of one particular model (as I understood it). It has to be selected by another method, which is not discussed in the article. And there are a lot of caveats and restrictions (a couple of pages' worth), some of which I have already recounted.

The models there are different because they have different indicator parameters, but the set of indicators can be the same. I think this is the source of the confusion.

If all of them are good, you can choose from just the good ones. If you pose the question of choice that way, or with some confidence interval, then any of them will do. Otherwise you run into the multiple comparisons problem described above, which requires more effort.

I don't know who does what exactly, so the question is vague to me.
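One way to formalise "choose only from the good ones" is to keep every model whose Sharpe ratio clears a threshold corrected for the number of models tested, rather than picking the single best. The sketch below is a toy illustration with made-up numbers (a Bonferroni correction, which is only one of several possible adjustments):

```python
# Keep all models clearing a Bonferroni-adjusted Sharpe threshold,
# instead of selecting the single in-sample best.
import math, random

random.seed(2)

def sharpe(returns):
    """Per-period Sharpe ratio: mean / sample std."""
    m = sum(returns) / len(returns)
    var = sum((r - m) ** 2 for r in returns) / (len(returns) - 1)
    return m / math.sqrt(var) if var > 0 else 0.0

n_models, n_obs = 200, 500
# Toy pool: 190 pure-noise models plus 10 with a small real edge.
models = [[random.gauss(0.0, 1.0) for _ in range(n_obs)] for _ in range(190)]
models += [[random.gauss(0.15, 1.0) for _ in range(n_obs)] for _ in range(10)]

# Under the null, the per-period Sharpe is roughly N(0, 1/n_obs).
# Bonferroni: test each model at alpha / n_models = 0.05 / 200 = 0.00025,
# i.e. a one-sided z of about 3.48, instead of the single-test 1.64.
threshold = 3.48 / math.sqrt(n_obs)

good = [i for i, m in enumerate(models) if sharpe(m) > threshold]
print(f"threshold: {threshold:.3f}, models kept: {len(good)} of {n_models}")
```

The corrected threshold is deliberately strict: it sacrifices some genuinely good models in exchange for almost never promoting a noise model.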

 

do experiments, write code...

I've tried it, it works; then I think about which trading systems are better to use, and so on. I go further and deeper...

Whereas you will spend another year discussing theory, then give up, and that will be the end of it.

 
I still don't get this one.
Forester #:
But I don't understand how they do cross-validation without training. They just feed in a ready-made set of returns and then reshuffle it into 12,000 variants. Properly, a model should be trained on each of the 12,000 IS sets and predicted on each corresponding OOS set.
Do they feed in the returns from the training set? And then shuffle them into 12,000+ variants of supposed IS and OOS.
In my opinion, this is just one way of assessing the straightness of the balance curve on the training set.
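What Forester describes can be sketched in a few lines. This is a toy illustration of resplitting a fixed return series (no retraining, synthetic data, all names mine): split the same series many times into pseudo IS/OOS halves and measure how stable the mean return is across splits.

```python
# Resplit one fixed series of trade returns into many pseudo IS/OOS
# halves and measure the spread of (IS mean - OOS mean) across splits.
import random

random.seed(3)

returns = [random.gauss(0.05, 1.0) for _ in range(400)]  # toy trade returns

n_splits = 2000
diffs = []
for _ in range(n_splits):
    idx = list(range(len(returns)))
    random.shuffle(idx)
    half = len(idx) // 2
    is_mean = sum(returns[i] for i in idx[:half]) / half
    oos_mean = sum(returns[i] for i in idx[half:]) / (len(idx) - half)
    diffs.append(is_mean - oos_mean)

spread = (sum(d * d for d in diffs) / len(diffs)) ** 0.5
print(f"IS/OOS mean-return spread over {n_splits} splits: {spread:.3f}")
```

As the post says, a small spread only shows that the curve is uniform on the training set; since every split comes from the same series, this procedure cannot detect overfitting to that series.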
 
Maxim Dmitrievsky #:

Because we remove bias (that's the main thing) and variance through cross-validation, the model starts to behave more or less adequately on new data. After that it can be fine-tuned.

Beautiful balance curve)
Such a curve could even be put into live trading.
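The claim that cross-validation removes optimistic bias can be illustrated with out-of-fold predictions. The sketch below is a toy example (synthetic data, a deliberately overfitting 1-nearest-neighbour "model", all names mine): in-sample it scores perfectly, while its out-of-fold error reflects how it would behave on new data.

```python
# In-sample vs out-of-fold error for a model that memorises its data.
import random

random.seed(4)

X = [random.uniform(-3, 3) for _ in range(300)]
y = [x ** 2 + random.gauss(0, 1) for x in X]  # noisy toy target

def predict_1nn(x, xs, ys):
    """Predict with the single nearest training point (overfits badly)."""
    j = min(range(len(xs)), key=lambda i: abs(xs[i] - x))
    return ys[j]

def mse(pred, true):
    return sum((p - t) ** 2 for p, t in zip(pred, true)) / len(true)

# In-sample: each point is its own nearest neighbour -> zero error.
in_sample = mse([predict_1nn(x, X, y) for x in X], y)

# 5-fold out-of-fold predictions: each point is predicted by a model
# that never saw it.
k = 5
oof = [0.0] * len(X)
for f in range(k):
    train_idx = [i for i in range(len(X)) if i % k != f]
    for i in range(f, len(X), k):
        oof[i] = predict_1nn(X[i], [X[j] for j in train_idx],
                             [y[j] for j in train_idx])

print(f"in-sample MSE:   {in_sample:.3f}")   # exactly 0: pure memorisation
print(f"out-of-fold MSE: {mse(oof, y):.3f}") # honest error estimate
```

The out-of-fold error is the one worth trusting; fine-tuning on top of it, as the post suggests, then starts from an honest baseline.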
 
Forester #:
Beautiful balance curve)
Such a curve could even be put into live trading.

It's been running like that for more than a year now; so far we haven't lost.

Read it (I already gave the link earlier); then there will at least be someone to talk to. The handbook is mega cool, I've read it. Especially section 22, the explanation of Chernozhukov's approach.

It doesn't use any "packages"; everything is written from scratch. So don't be scared of Python.

Because Sanych is talking groundless nonsense.

The backstory began with my last article, "Meta model...", in which I decided to use the model's errors to correct it. Then I rewrote it in various other ways. Then I found out that causal inference is about the same thing, and that there are no other adequate ways to improve models yet.

Metamodels in machine learning and trading: Original timing of trading orders
  • www.mql5.com
Metamodels in machine learning: Automatic creation of trading systems with almost no human involvement: the model itself decides how to trade and when to trade.