Discussion of article "Continuous Walk-Forward Optimization (Part 3): Adapting a Robot to Auto Optimizer"
Dear author! The work you have done is colossal, and the level of programming is impressive. But as it stands, the result cannot be used: you have to travel through the whole cycle of articles, collect the scattered files, compile them, hunt for missing ones, and wonder whether they have been updated in newer articles. Work of this calibre surely deserves to have its result collected in one place.
Meanwhile, as far as I can see, no one on this resource has yet appreciated the real (trader's) significance of this work. That is not surprising: you still need to grow to that level. It would be good for MT5 to grow up to proper WFO support, but it does not want to... And what space this opens up for add-ons! What I want most of all is to implement cross-validation: split the history into K parts, throw each of them out in turn, optimize on the remaining ones, then test on the discarded one, and repeat K times. Any hypothesis, starting with the simplest "stable parameter set", can then be tested far more reliably than with ordinary walk-forward optimization.
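The K-fold split described above can be sketched in a few lines. This is a minimal C++ illustration (MQL5 is close enough in syntax); `FoldRanges` is a hypothetical helper, not part of the author's project:

```cpp
#include <ctime>
#include <utility>
#include <vector>

// For each of the K folds, return the [begin, end) time range that is held
// out for validation; optimization then runs on the remaining history.
std::vector<std::pair<time_t, time_t>> FoldRanges(time_t histBegin, time_t histEnd, int k)
{
    std::vector<std::pair<time_t, time_t>> folds;
    time_t len = (histEnd - histBegin) / k; // equal parts; remainder goes to the last fold
    for (int i = 0; i < k; ++i)
    {
        time_t b = histBegin + i * len;
        time_t e = (i == k - 1) ? histEnd : b + len;
        folds.push_back({b, e});
    }
    return folds;
}
```

Each pair is the interval thrown out on that pass; the optimizer sees everything outside it, and the discarded interval serves as the test section.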
Thank you for such a flattering review. I have three more articles planned in which the auto optimizer itself will be described; one of them will contain specific usage instructions and the program itself. There are so many articles because otherwise I would only be posting the result of the work without explaining how it is achieved. So far, the uploading of optimization reports and how they are formed has been described; the next articles (which I am working on now) cover the optimizer itself - a program written in C# that runs optimization in the terminal. Incidentally, in this program the optimization algorithm is isolated behind an interface, so once the series is complete you will be able to implement your idea: you will need to implement the IOptimiser interface in C# and hook up the constructor of your custom optimizer. How this is done will be explained in detail in one of the upcoming articles.
As for everything being scattered: in the articles I am working on I will try to include the complete code, even though it will break the thread of the narrative, and I may also post the code on GitHub. I still will not post compiled files, since compilation is not that difficult. If anyone has questions about compiling and using the project, I am always ready to help.
Recently I have come to the point where I need to automate the process. I have developed an optimization method that gives a stable positive result on out-of-sample history, but it requires long calculations and a lot of repetitive manual work. And walk-forward optimization is not the only answer. One of the most pressing challenges is how to make the genetic optimizer expand the ranges where the best results sit on a boundary, refine the ranges where they cluster, and then run a new optimization. This is very different from simply setting a wide enough range for every parameter at once and picking the best results. My robots have so many parameters that such an optimization would either take weeks or fail to explore the parameter space in sufficient detail.
So now I wonder: should I wait for you or look for solutions myself? :) In any case, I should start digging into your project.
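The range-adaptation idea above (expand a range when the best values hug its boundary, otherwise narrow it around the cluster for a refining pass) could be sketched like this. A minimal C++ sketch under my own assumptions; `AdaptRange` and the 10% "near the boundary" threshold are hypothetical, not anything from the articles:

```cpp
struct Range { double lo, hi; };

// Given the current range and the min/max of the best values the genetic
// pass settled on, extend the range in a direction where the best values
// press against a boundary; otherwise shrink it around the observed cluster.
Range AdaptRange(Range r, double bestMin, double bestMax)
{
    double span = r.hi - r.lo;
    double margin = 0.1 * span;          // "near the boundary" threshold (assumption)
    Range next = { bestMin, bestMax };   // default: narrow to the cluster
    if (bestMin - r.lo < margin) next.lo = r.lo - span; // expand left
    if (r.hi - bestMax < margin) next.hi = r.hi + span; // expand right
    return next;
}
```

Repeatedly feeding the adapted ranges back into a new optimization pass is exactly the manual loop described above.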
At the moment, when launching an Expert Advisor that calls the panel, my terminal just crashes, leaving the last entry:
2020.02.02 22:45:07.187 Terminal exit with code 0
In addition, every terminal startup begins with the note:
2020.02.02 23:35:17.747 IPC failed to initialise IPC, with message:
2020.02.02 23:35:17.748 Terminal IPC dispatcher not started
If you optimise on your system by hand, you will definitely be able to program it in my auto optimiser.
Regarding the error: did it start happening after you connected my files, or are your own robots affecting the terminal this way? I have not seen this problem, and I have run robots on more than one computer with the additions already published in the articles. I also have a 64-bit system.
If you mean my latest series of articles (on optimization management), it is better to investigate directly on your computer in debug mode, as I am unlikely to reproduce the problem; in any case, the last time I launched the project it did not occur.
It did not even get as far as robots: I simply try to open your panel from the first two articles by running the OptimisationManagerExtention Expert Advisor, and the terminal crashes.
Then, as I said, this has to be examined on the specific computer and example; I cannot tell at a glance what the problem might be, as I have not touched that project for a long time. At a minimum, you need more than one terminal installed on the computer, and the DLL with the graphical interface must be in the MQL5/Libraries directory.
In the current Auto Optimiser project I moved the graphics out of the terminal, so it runs as a regular program rather than as a robot, and there will be no problems of this kind; I have been testing it for over three months and have run more than one robot in it. As for the old project, as I said, it has to be examined on a specific example - I cannot tell you the reason.
I have just deliberately recompiled and run the old project from scratch. Everything worked for me, so I cannot reproduce the error.
Most of all, I want to implement cross-validation: split the history into K parts, throw each of them out in turn, optimize on the remaining ones, then test on the discarded one, and so on K times.
The highlighted part will not work in the general case. You need to add two input parameters to your TS that define the non-trading (discarded) interval; then it becomes feasible.
For the general case, you can create a custom symbol obtained from the original one by cutting out the interval.
That is exactly how I was going to do it. One parameter is enough, because the history is divided into equal parts: the parameter specifies the index of the segment to discard. A "number of parts" parameter could be added as well.
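The single-parameter scheme just described can be sketched as a check the robot would run on each tick. A minimal C++ illustration (in MQL5 the same check would sit at the top of OnTick); `InDiscardedSegment` is a hypothetical name:

```cpp
#include <ctime>

// History [histBegin, histEnd) is divided into `parts` equal segments.
// Returns true if time t falls into the discarded (non-trading) segment
// with index `skip` - the robot should then refrain from trading.
bool InDiscardedSegment(time_t t, time_t histBegin, time_t histEnd, int parts, int skip)
{
    time_t len = (histEnd - histBegin) / parts;
    time_t b = histBegin + skip * len;
    time_t e = (skip == parts - 1) ? histEnd : b + len; // last segment absorbs the remainder
    return t >= b && t < e;
}
```

With `skip` exposed as an input parameter, each of the K optimization runs simply gets a different segment index.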
With Andrey's tools, you can give the master terminal a task to perform K optimizations, each with its own "validation segment number" parameter. You will, however, have to write an add-on to bring the statistics together.
Everything would be a hundred times simpler if the tester could forcibly enumerate certain parameters exhaustively during genetic optimization. Then the optimization results could be analyzed by splitting them by the "segment number" parameter.
Another option is the OnTesterDeinit() function. I have already implemented full-fledged WFO in it, and cross-validation by any criterion can easily be done there too. But it will only be "correct" for a full search, because it works by enumerating the frames of the whole testing interval, and a full enumeration is unrealistic in most cases. If we run genetics instead, the set of frames will be biased, because during the optimization it selects results partly on the very intervals we want to use as test intervals. How much real damage this does is an open question: if the ratio of the test interval's length to the total length is small, genetics should still produce enough variants where the test interval turns out poorly. And after all of this, one more interval that took no part in the procedure can be set aside to check the final result.
There is also fxsaber's tool, which will help with the rest.

- www.mql5.com
New article Continuous Walk-Forward Optimization (Part 3): Adapting a Robot to Auto Optimizer has been published:
The third part serves as a bridge between the previous two parts: it describes the mechanism of interaction with the DLL considered in the first article and the objects for report downloading, which were described in the second article. We will analyze the process of wrapper creation for a class which is imported from DLL and which forms an XML file with the trading history. We will also consider a method for interacting with this wrapper.
In the first article, the mechanism of operation with XML report files and the creation of the file structure was analyzed. Creation of reports was considered in the second article. The report generation mechanism was examined, starting with the history downloading object and ending with the objects generating the report. When studying the objects which are involved in the report creation process, the calculation part was analyzed in detail. The article also contained the main coefficient formulas, as well as the description of possible calculation issues.
As mentioned in the introduction, the objects described in this part serve as a bridge between the data downloading mechanism and the report generation mechanism. In addition to the functions which save trading report files, the article describes the classes participating in XML report export, as well as robot templates that can use these features automatically. It also shows how to add the created features to an existing algorithm, which means that auto optimizer users can optimize both old and new algorithms.
Author: Andrey Azatskiy