Machine learning in trading: theory, models, practice and algo-trading - page 2839

 

Dik has worn everyone out with this. We keep chewing over the same thing...

The quality of the optimisation is irrelevant. The tester, whatever the criterion, is useful for a rough selection of parameters. But all of this is much ado about nothing: it is enough to run the MT5 tester with a forward period.

Some time ago I wrote a JMA-MACD Expert Advisor. It turned out great: it held the trend perfectly and even behaved well in sideways markets. Profit factor over 3, over 80% profitable trades.

The result of the optimisation looked great, exactly according to Dik's requirements. A significant portion of the profit or profit-factor values formed a dense set. On the chart in the tester it looked great: an even green colour with no blotches and a slight brightening towards one edge - a typical, slightly convex plateau, with the most frequent parameter values slightly below the maximum.


I did the following: I optimised over 2, 3, 6 and 12 months on 6 currency pairs, took the obtained parameters and ran them on the following week. Positive results came in fewer than half of the cases, and very often there was nothing but a loss.
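For what it's worth, the walk-forward check described above is easy to automate outside the tester as well. Below is a minimal Python sketch, assuming a hypothetical backtest(params, prices) function that returns the net profit of the strategy with the given parameters on the given price slice; the function, the data layout and the weekly step are illustrative assumptions, not MT5 API.

```python
from itertools import product

import pandas as pd


def walk_forward_check(prices: pd.DataFrame, param_grid: dict,
                       opt_months: int, backtest) -> float:
    """Optimise on `opt_months` of history, trade the following week,
    and return the share of forward weeks that ended in profit."""
    combos = [dict(zip(param_grid, v)) for v in product(*param_grid.values())]
    week_starts = pd.date_range(prices.index[0] + pd.DateOffset(months=opt_months),
                                prices.index[-1], freq="W-MON")
    wins, total = 0, 0
    for start in week_starts:
        opt_slice = prices[start - pd.DateOffset(months=opt_months):start]
        fwd_slice = prices[start:start + pd.Timedelta(weeks=1)]
        if opt_slice.empty or fwd_slice.empty:
            continue
        # "Optimisation": pick the parameter set with the best in-sample profit.
        best = max(combos, key=lambda p: backtest(p, opt_slice))
        # Forward test: does the chosen set survive the next week?
        wins += backtest(best, fwd_slice) > 0
        total += 1
    return wins / total if total else float("nan")
```

A returned value below 0.5 is exactly the outcome described above.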

The Expert Advisor, which had looked excellent in optimisation, kept losing persistently.

I couldn't accept it for a long time. I exported the optimisation results to Excel and computed various statistical characteristics of the optimisation criterion there, but I couldn't improve anything. Right in front of me was a typical, slightly convex plateau; in Excel I computed the frequency of the parameter values and found that the most frequent values lay slightly below the maximum, i.e. the distribution of parameter frequencies is strongly skewed towards the maximum.
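Incidentally, the same frequency analysis does not need Excel. A small pandas sketch, assuming the optimisation report has been exported to CSV with one row per pass; the file name and column names are hypothetical.

```python
import pandas as pd

# Hypothetical export of the optimisation report: one row per pass,
# parameter columns plus the criterion column ("Profit").
passes = pd.read_csv("optimization_report.csv")

# Keep only the top decile of passes by the optimisation criterion.
top = passes[passes["Profit"] >= passes["Profit"].quantile(0.9)]

# Frequency of each value of a parameter among the top passes; a distribution
# skewed towards the best value is the "dense plateau" picture described above.
for param in ["JMA_Period", "MACD_Fast", "MACD_Slow"]:
    print(top[param].value_counts(normalize=True).sort_index())
```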

I will write it once again: all arguments about extraordinary optimisation are worthless without control outside the optimisation sample. Future profit does not follow from the optimisation algorithm.

 
The topic is being taken in the wrong direction again :)
 
Maxim Dmitrievsky #:
Just take the MT5 optimiser and use it. It has long since become the common method anyway 😀

However, TSes mostly earn on other principles (they are found differently), so optimisation will not grow in importance. I wouldn't even devote so much time to such nuances.

A simple analogy: optimising a random TS in MT5 by different criteria does not lead to success, but optimising an initially good TS leads to success with any criterion.

All these peaks and plateaus have nothing to do with the presence or absence of regularities. They may coincide on train and test by chance, or they may not. That is not the subject of Andrei's research.

Maxim, I confirm, it is not the subject of my research.

Maxim Dmitrievsky #:
The topic is being taken in the wrong direction again :)

It is as if Fomenko does not hear what is being said. I have already said several times that the tester does not affect profitability or the ability of a TS to keep working profitably in the future. The tester is a tool, nothing more. An optimisation algorithm is a tool, nothing more. It is like discussing how "successful" a shovel is at making money.

 
Maxim Dmitrievsky #:
Just take the MT5 optimiser and use it. It has long since become the common method anyway 😀

I would like to combine the possibility of using a custom criterion in MT5 with the flexibility of ML models, which have either a very large number of parameters or a number of parameters that is not fixed in advance.

For example, I don't see how one could organically implement even a single decision-tree model in MT5.
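As an outside-of-MT5 illustration of what is meant, here is a small Python sketch that fits a single decision tree and then selects its depth by a trading criterion (profit factor on a held-out, later period) rather than by accuracy. The data, the label rule and the criterion are all illustrative assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Illustrative data: X are features, r are future bar returns; the label is
# simply the sign of the return. Everything here is a stand-in, not MT5 data.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
r = rng.normal(scale=0.001, size=2000)
y = (r > 0).astype(int)

# Time-ordered split: first 70% to fit, last 30% to evaluate the criterion.
cut = int(len(X) * 0.7)


def profit_factor(signal, returns):
    """Custom criterion: gross profit / gross loss of trading the signal."""
    pnl = np.where(signal == 1, returns, -returns)
    gains, losses = pnl[pnl > 0].sum(), -pnl[pnl < 0].sum()
    return gains / losses if losses > 0 else np.inf


# Select the tree depth not by accuracy but by the trading criterion
# computed on the held-out period.
best_depth, best_pf = None, -np.inf
for depth in range(2, 11):
    tree = DecisionTreeClassifier(max_depth=depth, random_state=0)
    tree.fit(X[:cut], y[:cut])
    pf = profit_factor(tree.predict(X[cut:]), r[cut:])
    if pf > best_pf:
        best_depth, best_pf = depth, pf

print(f"depth={best_depth}, profit factor on hold-out={best_pf:.2f}")
```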

 
Aleksey Nikolayev #:

I would like to combine the possibility of using a custom criterion in ... with the flexibility of ML models, which have either a very large number of parameters or a number of parameters that is not fixed in advance.

there is only R

 
Maxim Dmitrievsky #:

If you come across an adequate description of ADAM, please share a link. I've seen several, and they are all different and unclear.

 
Maxim Dmitrievsky #:
However, TSes mostly earn on other principles (they are found differently), so optimisation will not grow in importance. I wouldn't even devote so much time to such nuances.

A simple analogy: optimising a random TS in MT5 by different criteria does not lead to success, but optimising an initially good TS leads to success with any criterion.

All these peaks and plateaus have nothing to do with the presence or absence of regularities. They may coincide on train and test by chance, or they may not. That is not the subject of Andrei's research.

Nothing guarantees making money) It's about technical capabilities: the more of them, the better.

The only reason I am trying to discuss this topic is a small hope that MetaQuotes might take it into account when they deliver on their promise to add ML to MT5.

 
Andrey Dik #:

If you come across an adequate description of ADAM, please share a link. I've seen several, and they are all different and unclear.

I am only familiar with it in general terms; I have not gone into it in depth.
 
Aleksey Nikolayev #:

Nothing guarantees making money) It's about technical capabilities: the more of them, the better.

The only reason I am trying to discuss this topic is a small hope that MetaQuotes might take it into account when they deliver on their promise to add ML to MT5.

I can't picture it in my head... We have a labelled dataset and we want to train as close to those labels as possible. If we take another criterion that is not related to them, do these labels stop mattering?

Then the learning process completely changes the strategy. We end up with a custom criterion.
 
Maxim Dmitrievsky #:
I can't picture it in my head... we have a labelled dataset and we want to train as close to these labels as possible. If we take a different criterion, do these labels stop mattering?

Then the learning process completely changes the strategy
Here is an excellent question. The robustness of a TS is not a question of how good the AO or a particular testing tool is; it is a question of criterion selection. The more adequate the evaluation criterion, the more adequately the model behaves on new data. Choosing the best AO means choosing the best tool for optimising the CRITERION. It cannot be the AO's fault or the tester's fault. The criterion is to blame.
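A toy illustration of that last point: the same set of optimisation passes, re-ranked by two different criteria, points to different "best" parameters. The numbers and column names below are made up purely for the example.

```python
import pandas as pd

# Hypothetical optimisation report: one row per pass.
passes = pd.DataFrame({
    "Period":      [10,    14,    20,    28,    50],
    "NetProfit":   [5200., 4800., 4100., 3900., 1500.],
    "MaxDrawdown": [2600., 1200., 900.,  700.,  600.],
})

# Criterion 1: raw net profit.
best_by_profit = passes.loc[passes["NetProfit"].idxmax()]

# Criterion 2: recovery-factor-like ratio, penalising drawdown.
passes["Recovery"] = passes["NetProfit"] / passes["MaxDrawdown"]
best_by_recovery = passes.loc[passes["Recovery"].idxmax()]

print(int(best_by_profit["Period"]), int(best_by_recovery["Period"]))  # 10 vs 28
```

Same passes, same tester, different criterion, and a different "best" parameter set.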