Discussing the article: "Role of random number generator quality in the efficiency of optimization algorithms" - page 6
Therefore, it is critical to divide the original dataset into two and, while performing optimisation on one half, control the quality on the other half.
Hold on. Andrey's algorithms handle exactly the type of optimisation that MT5-Tester performs. In the quote you are talking about an add-on layer on top of the optimisation algorithm.
In the discussion one can still discern a certain optimisation algorithm, one that is not formally a competitor of the algorithm in MT5-Tester but solves quite different search tasks. There were no such algorithms in Andrey's series of articles.
So, is there any way to select, from the results of a full search over all possible parameters, the set that we will use on unseen data? We have done a full search; there is no optimisation here.
Now it is very important to answer this question.
Yes, we believe there is a way.
What's the intrigue?
There seems to be some confusion in terms.
I call optimisation the process of searching for the best parameters (in this case, of a trading strategy). And the "best" parameters are those that will pass the forward test well.
I guess there is indeed a terminological misunderstanding.
Optimisation in the sense of MT5-Tester is the search for the highest FF value.
Stanislav and I were clearly talking about something else.
Finding the maximum is what the articles cover. But it may not always be useful from a practical point of view.
So this is somewhat outside the context of a series of articles about different ways of solving a classical optimisation problem.
The most direct, head-on way to find interesting candidates for OOS (out-of-sample) testing is to forcibly interrupt the optimisation algorithm while it is solving the classical problem.
For example, let the GA make 10,000 passes to solve the problem. Obviously, the best 100 results from the first 3,000 passes cover more local extrema than the best 100 results from all 10,000 passes.
Therefore, interrupting after 3,000 passes and examining the best 100 is a reasonable way to look for potentially robust settings.
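A minimal sketch of that interruption idea in Python (the fitness function and the GA here are toy stand-ins, not MT5-Tester's actual optimiser; all names and numbers are illustrative):

```python
import math
import random

def fitness(x):
    """Toy multimodal stand-in for an EA backtest score:
    several local peaks around one global peak near x = 0."""
    return math.cos(5 * x) - 0.05 * x * x

def evolve(budget, pop_size=50):
    """Very simplified GA: keep the best half of each generation and
    mutate it into the next one.  Returns all evaluated (score, x)
    pairs sorted best-first, so a smaller budget is a true interruption
    of the same run."""
    random.seed(0)  # identical run, just stopped at different points
    pop = [random.uniform(-10.0, 10.0) for _ in range(pop_size)]
    evaluated = []
    while len(evaluated) < budget:
        scored = sorted(((fitness(x), x) for x in pop), reverse=True)
        evaluated.extend(scored)
        parents = [x for _, x in scored[: pop_size // 2]]
        pop = [random.choice(parents) + random.gauss(0.0, 0.5)
               for _ in range(pop_size)]
    return sorted(evaluated, reverse=True)

def distinct_regions(top, width=0.5):
    """How many separate areas of the search space the results cover."""
    return len({round(x / width) for _, x in top})

# The best 100 of the first 3,000 passes typically span more local extrema
# than the best 100 of all 10,000 passes, which cluster at the global peak.
print(distinct_regions(evolve(3000)[:100]))
print(distinct_regions(evolve(10000)[:100]))
```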
What's the intrigue?
There is no intrigue. I asked the question partly as a test of terminology, to see who means what by the term "optimisation".
The one using the terminology closest to its intended meaning is fxsaber.
Either way, that is neither bad nor good, neither wrong nor right. Precise terminology simply makes it easier to achieve your goals: it avoids reading meanings into things where there are none (and vice versa), and it lets you pay more attention to the aspects that really affect what needs to be achieved.
Architecturally, the way the built-in tester and optimiser are structured is done quite correctly: the flies are kept separate from the cutlets, as the saying goes. That is why I can well imagine the developers at MetaQuotes mentally swearing, perhaps even gesturing at their desks, when they read user phrases like "optimisation in the built-in optimiser is curve-fitting" and similar statements.
I will try to clear up the terminological confusion.
Stanislav and I were clearly talking about something else.
Finding the maximum is what the articles cover. But it may not always be useful from a practical point of view.
Yes, you were talking about something else. It sounds like "the topic of kebab cooking is covered, but kebabs may not be healthy". That's OK, we'll sort it out together and separate the flies from the cutlets.
Is this some kind of terminology game? I suggested three ways to choose the best set; they are also suitable for the case of a full run over the history with all possible parameter combinations.
For example, take a well-known problem: there is a neural network (say, one trading on price increments), and optimisation is used to find the weights of this network. If we apply your algorithms head-on, we will get an over-trained network that cannot work on new data. Therefore, it is critical to divide the original dataset into two and, while performing optimisation on one half, control the quality on the other half.
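As an illustration of that split (a minimal sketch; the one-weight linear "network", the perturbation search, and the toy data are hypothetical stand-ins, not anything from the article):

```python
import random

random.seed(42)

def sample():
    """One toy observation: a price increment x and the next increment y."""
    x = random.gauss(0.0, 1.0)
    return x, 0.5 * x + random.gauss(0.0, 0.3)

# Toy history, split in two: optimise on the first half,
# control quality on the second half.
data = [sample() for _ in range(1000)]
train, valid = data[:500], data[500:]

def loss(w, dataset):
    """Mean squared error of a one-weight linear 'network'."""
    return sum((w * x - y) ** 2 for x, y in dataset) / len(dataset)

w = 0.0
best_w, best_valid = w, loss(w, valid)
for _ in range(200):
    candidate = w + random.gauss(0.0, 0.1)       # crude perturbation search
    if loss(candidate, train) < loss(w, train):  # optimise on the train half
        w = candidate
    v = loss(w, valid)                           # but *select* by validation
    if v < best_valid:
        best_w, best_valid = w, v

print("selected weight:", best_w, "validation loss:", best_valid)
```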
Could you please explain what it means to "apply algorithms head-on"? I find it hard to see how optimisation algorithms can be misused. The concepts of "overtraining" and/or "curve-fitting" do not apply to optimisation algorithms themselves.
No, not a terminology game, but my attempt to set the record straight on a long-standing misunderstanding that has been going on around optimisation in general and optimisation algorithms in particular.
So this is somewhat outside the context of a series of articles about different ways of solving a classical optimisation problem.
Yes, exactly: Stanislav and Andrey are talking about things outside the context of this series of articles. And the series is not about solving optimisation problems, but about optimisation algorithms. An optimisation algorithm is only one part of an optimisation problem (this is true both terminologically and from the point of view of comparing algorithms with one another; otherwise it would be impossible to compare algorithms at all), so in this series of articles I consider only optimisation algorithms.
I'll explain below in the comments. I really hope it will help to look at familiar things from a different angle.
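That separation can be expressed as an interface (a minimal sketch; the names and the random-search algorithm are illustrative, not the article's actual code): the algorithm sees only a fitness function (FF) and a search space, while everything else (the data split, what the FF measures, forward control) belongs to the problem.

```python
import random
from typing import Callable, List, Sequence, Tuple

Bounds = Sequence[Tuple[float, float]]
FF = Callable[[Sequence[float]], float]

def random_search(ff: FF, bounds: Bounds, budget: int) -> List[float]:
    """An optimisation algorithm: it knows nothing about trading,
    only how to query the FF within the given bounds."""
    best, best_ff = None, float("-inf")
    for _ in range(budget):
        x = [random.uniform(lo, hi) for lo, hi in bounds]
        v = ff(x)
        if v > best_ff:
            best, best_ff = x, v
    return best

# The *problem* supplies the FF.  Comparing algorithms fairly means fixing
# the FF, bounds, and budget, and swapping only the algorithm.
def sphere(x: Sequence[float]) -> float:
    return -sum(xi * xi for xi in x)

print(random_search(sphere, bounds=[(-5.0, 5.0)] * 3, budget=1000))
```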