
I've been looking at the test figures here. The pictures match.
I haven't figured out the trick with the multidimensional ones yet; I need to work it out.
This is the first article in the series. Things have not stood still since then: the test set of functions has been made more difficult, specifically to counter fake successes of methods that initialise with zero, and false positives in general. That's why you should look at the recent articles, for example this one.
Up-to-date sources are always available on GitHub.
Testing gradient methods:
https://www.mql5.com/ru/forum/475597/page2#comment_55006029
I see. If I have the time and the inclination, I will do the same for the multidimensional case.
The three-dimensional pictures already showed that gradient methods get stuck in local optima even there. If you divide the search space into batches, the problem is fixed. That is the way to work with gradient solvers, and no other.
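To illustrate the batching idea, a minimal sketch in plain C++ (not MQL5, and not the code from the articles; the test function, batch count and step size are all invented for the example): run an independent gradient ascent in each batch of the bounds and keep the best result.

```cpp
// Sketch: partition the search bounds into batches and run an independent
// gradient ascent in each batch, keeping the best result overall.
#include <cmath>
#include <cstdio>

// A multimodal 1D test function with many local maxima (illustrative only).
double f(double x) { return std::sin(5.0 * x) + 0.5 * std::sin(17.0 * x) - 0.05 * x * x; }

// Numerical derivative via central differences.
double df(double x) { const double h = 1e-6; return (f(x + h) - f(x - h)) / (2.0 * h); }

int main() {
    const double lo = -10.0, hi = 10.0; // search bounds
    const int    batches = 20;          // number of batches
    double bestX = lo, bestF = f(lo);

    for (int b = 0; b < batches; ++b) {
        // Start each ascent from the centre of its own batch.
        double x = lo + (hi - lo) * (b + 0.5) / batches;
        for (int it = 0; it < 200; ++it) {  // plain gradient ascent
            x += 0.01 * df(x);
            if (x < lo) x = lo;             // clamp to the bounds
            if (x > hi) x = hi;
        }
        if (f(x) > bestF) { bestF = f(x); bestX = x; }
    }
    std::printf("best x = %.4f, f = %.4f\n", bestX, bestF);
}
```

Each ascent can only climb the hill inside its own batch, so narrow batches prevent one deep locality from capturing every start point.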
Result for 1000 dimensions. Server time spent: 9 minutes.
There seem to be no errors; I did everything following the article's templates.
There's this function in the include file; is it used?
For that same function, how do I determine the correct bounds? As I understand it, only a part of it is taken.
If I had not reduced the bounds, would the calculation (finding the maximum) be more difficult?
For the 25-dimensional case I found it. I don't know where the errors might be.
If I had not reduced the bounds, would the calculation (finding the maximum) be more difficult?
No, it's not more difficult. On your function, how many skyscrapers rise above 50% of the range between the function's minimum and maximum? How many on mine? On which surface is it easier to land above the 50% height if you scatter points randomly? On yours. So, to emphasise once again: the bounds are set incorrectly.
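To make this concrete, a small C++ sketch (the single-peak surface and both pairs of bounds are invented for the illustration): it estimates what fraction of randomly scattered points lands above 50% of the function's height. With narrow bounds hugging the peak the fraction is large; with wide bounds it collapses.

```cpp
// Sketch: estimate the fraction of random points that land above 50% of the
// function's height (midpoint of the sampled min..max range) for given bounds.
#include <cmath>
#include <cstdio>
#include <cstdlib>
#include <vector>

// Toy single-peak surface; the real test functions are in the articles' sources.
double f(double x, double y) { return std::exp(-(x * x + y * y) / 10.0); }

double rnd(double lo, double hi) { return lo + (hi - lo) * std::rand() / (double)RAND_MAX; }

double fracAboveHalf(double lo, double hi, int n) {
    std::vector<double> v(n);
    double fmin = 1e308, fmax = -1e308;
    for (int i = 0; i < n; ++i) {
        v[i] = f(rnd(lo, hi), rnd(lo, hi));
        if (v[i] < fmin) fmin = v[i];
        if (v[i] > fmax) fmax = v[i];
    }
    double half = fmin + 0.5 * (fmax - fmin); // the 50% height level
    int above = 0;
    for (int i = 0; i < n; ++i) if (v[i] > half) ++above;
    return (double)above / n;
}

int main() {
    // Narrow bounds hug the single peak; wide bounds expose the flat plain.
    std::printf("narrow [-2, 2]:   %.3f\n", fracAboveHalf(-2.0, 2.0, 100000));
    std::printf("wide   [-20, 20]: %.3f\n", fracAboveHalf(-20.0, 20.0, 100000));
}
```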
This is discussed here: https://www.mql5.com/ru/articles/13923#tag3
I got this result from your code:
Not a very encouraging result, yet you persistently post the best results from different trials. Run 20 trials (20 presses of the play button), or write a loop that simulates multiple trials, and then calculate the average result, as is done in the articles.
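A minimal C++ sketch of such a loop (RunOptimization() is a hypothetical stand-in for whatever one press of the play button executes):

```cpp
// Sketch: average the result over many independent trials instead of
// reporting a single lucky run.
#include <cstdio>
#include <cstdlib>

// Hypothetical: one full optimisation run, returning the best FF value found.
// Replace the placeholder body with a call to the algorithm under test.
double RunOptimization() {
    return std::rand() / (double)RAND_MAX; // placeholder result
}

int main() {
    const int trials = 20;
    double sum = 0.0;
    for (int t = 0; t < trials; ++t) {
        double r = RunOptimization();
        std::printf("trial %2d: %.6f\n", t + 1, r);
        sum += r;
    }
    std::printf("average over %d trials: %.6f\n", trials, sum / trials);
}
```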
Which raises the question: why 100,000 and not 1,000,000,000,000,000?
Don't be shy, set it to a billion. But for some reason you don't show the number of calls to the target function; how many calls were there? In the rating tests only 10,000 calls to the target function are allowed, and in the case of gradient methods (where the algorithm may try to make many more calls to the FF) there is a cutoff in the code: if the limit is exceeded, the minimum value of the target is returned (the methods search for a minimum, so the value is inverted).
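A sketch of one way to implement such a cutoff (all names, the test function and the penalty constant are illustrative, not the articles' code):

```cpp
// Sketch: count calls to the target; once the budget is exhausted, hand back
// the worst possible value for a minimiser so extra calls cannot pay off.
#include <cfloat>
#include <cmath>
#include <cstdio>

const int maxFFCalls = 10000; // budget adopted in the rating tests
int       ffCalls    = 0;     // actual number of calls made so far

double Target(double x) { return std::sin(x); } // stand-in test function (maximised)

// The solvers minimise, so the target's value is inverted; over budget, the
// wrapper returns a penalty value instead of evaluating the target.
double TargetForMinimizer(double x) {
    ffCalls++;
    if (ffCalls > maxFFCalls) return DBL_MAX; // worst value for a minimiser
    return -Target(x);                        // invert maximisation into minimisation
}

int main() {
    std::printf("first call: %.6f, calls used: %d\n", TargetForMinimizer(1.0), ffCalls);
}
```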
All of this has been described earlier in the articles.
The class has the methods GetMinRangeX(), GetMaxRangeX(), GetMinRangeY(), GetMaxRangeY(), which let you query the bounds (and you can also simply see the bounds in the code of the corresponding test functions).
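A usage sketch (the mock class below only mirrors the method names mentioned above; the real class and the real bounds live in the articles' include file):

```cpp
// Sketch: query the bounds from a test-function object and use them to
// scatter a starting point.
#include <cstdio>
#include <cstdlib>

class CTestFunction {
public:
    double GetMinRangeX() { return -10.0; } // illustrative bounds only
    double GetMaxRangeX() { return  10.0; }
    double GetMinRangeY() { return -10.0; }
    double GetMaxRangeY() { return  10.0; }
};

int main() {
    CTestFunction func;
    // Scatter a starting point uniformly inside the queried bounds.
    double u = std::rand() / (double)RAND_MAX;
    double v = std::rand() / (double)RAND_MAX;
    double x = func.GetMinRangeX() + (func.GetMaxRangeX() - func.GetMinRangeX()) * u;
    double y = func.GetMinRangeY() + (func.GetMaxRangeY() - func.GetMinRangeY()) * v;
    std::printf("start point: (%.3f, %.3f)\n", x, y);
}
```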
In practice there is always a limit on the maximum allowed number of calls to the target function; in the tests we have adopted a limit of 10,000 calls.
If there were no limit on either computational resources or time, it would be better not to use optimisation algorithms at all and simply do a full enumeration, but that never happens in real life. Testing and comparison of methods is carried out with a limit of 10,000 calls to the target. The whole point of comparing algorithms is to see which ones achieve a better result with fewer calls to the target. Accordingly, the more calls an algorithm needs to reach comparable results, the weaker that algorithm is considered to be on the corresponding type of task.
Unfortunately, these very subtle points, all of which are described in detail in the articles on optimisation methods, escape you.
There are situations when even full brute force will not find the optimum, because the nodes of the enumeration grid do not land on it.
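A tiny C++ demonstration (the needle-like function and the grid step are contrived for the example): the grid nodes straddle a narrow peak, so full enumeration reports a value nowhere near the true maximum.

```cpp
// Sketch: a full grid enumeration misses a narrow optimum that falls
// between grid nodes.
#include <cmath>
#include <cstdio>

// Narrow peak of width ~0.0001 at x = 0.5005, on a nearly flat background.
double f(double x) { return std::exp(-std::pow((x - 0.5005) / 0.0001, 2.0)); }

int main() {
    const double lo = 0.0, hi = 1.0, step = 0.001; // 1001-node grid
    double bestX = lo, bestF = f(lo);
    for (double x = lo; x <= hi; x += step)
        if (f(x) > bestF) { bestF = f(x); bestX = x; }
    // The nodes land at multiples of 0.001 and straddle the peak at 0.5005,
    // so enumeration reports a value far below the true maximum of 1.0.
    std::printf("grid best: f(%.4f) = %.3e (true max = 1.0 at x = 0.5005)\n", bestX, bestF);
}
```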