Machine learning in trading: theory, models, practice and algo-trading - page 301

 

Hi all!!!! A miracle has finally happened and my article has seen the light of day. I invite you to the discussion in the thread devoted to the article.

https://www.mql5.com/ru/articles/2773

DeMark's Sequential (TD SEQUENTIAL) Using Artificial Intelligence
  • 2017.03.29
  • Mihail Marchukajtes
  • www.mql5.com
In this article I will show how, by "crossbreeding" one very well-known strategy with a neural network, you can trade successfully. We will talk about Thomas DeMark's "Sequential" strategy combined with an artificial intelligence system. We will work ONLY with the first part of the strategy, using the "Setup" and "Intersection" signals.
 
Mihail Marchukajtes:

Hi all!!!! A miracle has finally happened and my article has seen the light of day. I invite you to the discussion in the thread devoted to the article.

https://www.mql5.com/ru/articles/2773


Great!) Let's read it at our leisure.
 
Mihail Marchukajtes:

Hi all!!!! A miracle has finally happened and my article has seen the light of day. I invite you to the discussion in the thread devoted to the article.

https://www.mql5.com/ru/articles/2773


Went off to read the article, thank you )
 
Andrey Dik:
You know that history does not repeat itself. That is why you were asked to try the same thing on a random series: the result will not differ much (and may even be better than on historical data).


I want to note that you and fxsaber are in a thread where your statement was refuted in a highly professional manner. Look at Burnakov's materials in this thread and in his blogs.


In addition, there is a fundamental point that distinguishes ML from the mental patterns in our heads that have shaped TA.

Machine learning necessarily consists of three parts that form a whole:

  • Preparation of the raw data (data mining), which has no counterpart in TA.
  • Automatic pattern search, for which many algorithms exist. In TA this is usually done manually, "by eye".
  • Evaluation of the modeling result. In the MT terminal this role is played by the tester, which can replace the tools available in ML frameworks only to a small extent.

The stage with the greatest influence on the final result is the first one: preparation of the input data.

If you carry out all three steps consistently and with an understanding of what you are doing, it pays off: I personally have managed to reduce the prediction error for some target variables below 30%, and below 40% almost immediately. If you get a random 50%, it means you do not understand something very important in ML.
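
For the record, a minimal Python sketch of the three steps just described. The quotes file, the two return features, and the random-forest model are illustrative assumptions, not anything prescribed in this thread:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# 1. Raw data preparation: build predictors and a target from quotes.
df = pd.read_csv("quotes.csv")                 # hypothetical OHLC history
df["ret_1"] = df["close"].pct_change()         # 1-bar return
df["ret_5"] = df["close"].pct_change(5)        # 5-bar return
df["target"] = (df["close"].shift(-1) > df["close"]).astype(int)
df = df.dropna()

# 2. Automatic pattern search: the model, not the eye, finds the patterns.
X, y = df[["ret_1", "ret_5"]], df["target"]
split = int(len(df) * 0.7)                     # chronological split, no shuffling
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X.iloc[:split], y.iloc[:split])

# 3. Evaluation of the modeling result on data the model has never seen.
pred = model.predict(X.iloc[split:])
print("out-of-sample accuracy:", accuracy_score(y.iloc[split:], pred))
```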

 
SanSanych Fomenko:


I want to note that you and fxsaber are in a thread where your statement was refuted in a highly professional manner. Look at Burnakov's materials in this thread and in his blogs.


In addition, there is a fundamental point that distinguishes ML from the mental patterns in our heads that have shaped TA.

Machine learning necessarily consists of three parts that form a whole:

  • Preparation of the raw data (data mining), which has no counterpart in TA.
  • Automatic pattern search, for which many algorithms exist. In TA this is usually done manually, "by eye".
  • Evaluation of the modeling result. In the MT terminal this role is played by the tester, which can replace the tools available in ML frameworks only to a small extent.

The stage with the greatest influence on the final result is the first one: preparation of the input data.

If you carry out all three steps consistently and with an understanding of what you are doing, it pays off: I personally have managed to reduce the prediction error for some target variables below 30%, and below 40% almost immediately. If you get a random 50%, it means you do not understand something very important in ML.

You masterfully dodged the question/suggestion, congratulations! While reading you, one forgets what one was asking about..... For "some target variables" I have also obtained a lower error on a random series; what is the use of that? My experiments with the results are somewhere on the fourth forum (this is in response to "Look at Burnakov's materials").
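
A hedged sketch of the random-series check being discussed: run the same kind of pipeline on a synthetic random walk, where by construction there is nothing to predict. The features and model below are illustrative, not anyone's actual setup:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

def direction_accuracy(close: pd.Series) -> float:
    """Out-of-sample accuracy of a simple next-bar direction model."""
    df = pd.DataFrame({"close": close})
    df["ret_1"] = df["close"].pct_change()
    df["ret_5"] = df["close"].pct_change(5)
    df["target"] = (df["close"].shift(-1) > df["close"]).astype(int)
    df = df.dropna()
    X, y = df[["ret_1", "ret_5"]], df["target"]
    split = int(len(df) * 0.7)                 # chronological split
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X.iloc[:split], y.iloc[:split])
    return accuracy_score(y.iloc[split:], model.predict(X.iloc[split:]))

# A geometric random walk: positive prices, no exploitable pattern.
rng = np.random.default_rng(0)
random_walk = pd.Series(100.0 * np.exp(np.cumsum(rng.normal(0, 0.001, 20_000))))
print("accuracy on a random walk:", direction_accuracy(random_walk))
# An honest pipeline should land near 0.5 here; a model whose real-data
# score matches its random-walk score has found noise, not a market pattern.
```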
 
Maxim Dmitrievsky:

Went off to read the article, thank you )

From the heart, brothers!!! Your opinion is very important to me. After this article, a treatise on input and output variables will be published; there will be philosophy, of course, and also practical techniques for when the choice is difficult....
 
SanSanych Fomenko:


If you carry out all three steps consistently and with an understanding of what you are doing, it pays off: I personally have managed to reduce the prediction error for some target variables below 30%, and below 40% almost immediately. If you get a random 50%, it means you do not understand something very important in ML.

If you are talking about out-of-sample error, with at least 100k samples in the test on properly prepared data, such results are seriously impressive even for HFT data; on minute bars and above it is either fantasy or a trivial overfit. On low-frequency data you can hope for a 2-3% edge at best, same as with numerai.

It is cool when a model predicts price direction one second ahead with 65-70% accuracy (for RI). I know such guys, but their data is serious stuff and costs accordingly. I have 60-65%, but that is very cool for my data; I hardly buy anything separately now. I used Plaza before, but now I use a regular QUIK and MT to get my forex data.
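
A back-of-the-envelope check on why the 100k test samples mentioned above matter: the standard error of a measured accuracy shrinks with the square root of the sample count, so a 2-3% edge is only distinguishable from coin-flipping on a large out-of-sample set:

```python
import math

def accuracy_stderr(p: float, n: int) -> float:
    """Binomial standard error of an accuracy estimate p over n samples."""
    return math.sqrt(p * (1 - p) / n)

for n in (1_000, 10_000, 100_000):
    se = accuracy_stderr(0.5, n)
    print(f"n={n}: stderr ~ {se:.4f}, edge needed at 2 sigma ~ {2 * se:.1%}")
# With n=100,000 the 2-sigma band is ~0.3%, so a 2-3% edge is clearly
# measurable; with n=1,000 the band is ~3.2% and the same edge may be noise.
```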

 

This is an interesting thread. A lot of idle chatter, but some smart thoughts too. Thank you.

 
Andrey:

This is an interesting thread. A lot of idle chatter, but some smart thoughts too. Thank you.


))) The main thing is the communication and the process. It seems that some people are already creating neural bots. I'd like to try it.
 
I'm not interested:

If you are talking about out-of-sample error, with at least 100k samples in the test on properly prepared data, such results are seriously impressive even for HFT data; on minute bars and above it is either fantasy or a trivial overfit. On low-frequency data you can hope for a 2-3% edge at best, same as with numerai.

It is cool when a model predicts price direction one second ahead with 65-70% accuracy (for RI). I know such guys, but their data is serious stuff and costs accordingly. I have 60-65%, but that is very cool for my data; I hardly buy anything separately now. I used Plaza before, but now I use a regular QUIK and MT to get my forex data.


For me, prediction error is not the main problem. The main problem is overfitting the model. Either I have at least some evidence that the model is NOT overfitted, or the model is of no use at all.

I have written many times in this thread (and others) about diagnosing overfitting and the tools to combat it. In short, it comes down to clearing the input predictors of noise ones; the model itself is of secondary importance.

Everything else is of no interest to me, since any result that does not account for overfitting is worthless: it works now, maybe tomorrow, and the day after it drains the deposit.
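
As one concrete illustration of clearing input predictors of noise (a common screening idea along these lines, not necessarily the exact method meant above): compare each predictor's importance against a shuffled "shadow" copy of itself and drop whatever cannot beat pure noise.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

def screen_noise_predictors(X: pd.DataFrame, y: pd.Series, seed: int = 0) -> list:
    """Keep only predictors whose importance beats their own shuffled copy."""
    rng = np.random.default_rng(seed)
    # Shadow features: same distribution, all relation to the target destroyed.
    shadows = X.apply(lambda col: rng.permutation(col.to_numpy()))
    shadows.columns = ["shadow_" + c for c in X.columns]
    both = pd.concat([X, shadows], axis=1)
    model = RandomForestClassifier(n_estimators=300, random_state=seed)
    model.fit(both, y)
    imp = pd.Series(model.feature_importances_, index=both.columns)
    threshold = imp[shadows.columns].max()     # best score pure noise achieves
    return [c for c in X.columns if imp[c] > threshold]

# Usage sketch: kept = screen_noise_predictors(X_train, y_train)
# Predictors dropped here add nothing but overfitting capacity to the model.
```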
