Discussing the article: "ALGLIB library optimization methods (Part II)" - page 4

 
Maxim Dmitrievsky #:

I've been looking at the test figures here. The pictures match.

I haven't understood the trick with multidimensional ones yet, I need to figure it out.

This is the first article in the series. Thinking has not stood still: the test set of functions has been changed to make it more difficult, to counter the fake successes of methods that are initialised with zero, and to counter false positives in general. That's why you should look at the recent articles, for example this one.

The up-to-date sources are always on GitHub.

Testing gradient methods:

https://www.mql5.com/ru/forum/475597/page2#comment_55006029

GitHub - JQSakaJoo/Population-optimization-algorithms-MQL5: Population optimization algorithms (github.com)
https://t.me/+vazsAAcney4zYmZi

A list of implemented optimization algorithms (M is a modified, improved version; joo is an algorithm of my own development):
 
Andrey Dik #:

This is the first article in the series. Thinking has not stood still: the test set of functions has been changed to make it more difficult, to counter the fake successes of methods that are initialised with zero, and to counter false positives in general. That's why you should look at the recent articles, for example this one.

The sources are always up-to-date.

I see. If I have the time and the inclination, I'll do it for the multidimensional case.

On the three-dimensional ones I already showed pictures: even on them, gradient methods get stuck in local optima. If you divide the search space into batches, that fixes it (a sketch of the idea is below). That is the way to work with gradient solvers, and no other way.
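A minimal sketch of that batching idea, not the article's test stand: the range is split into a grid of sub-regions, a local gradient ascent (here a crude one with a numerical gradient, instead of the ALGLIB solver) is started from the centre of each batch, and the best of the local results is kept. The objective below is an assumed stand-in multimodal function.

//------------------------------------------------------------------------------
// Stand-in multimodal objective: many local maxima, one global maximum at (0, 0)
double Objective (double x, double y)
{
  return MathCos (x) * MathCos (y) * MathExp (-(x * x + y * y) / 50.0);
}

// Crude local gradient ascent using a numerical gradient
void LocalAscent (double &x, double &y)
{
  double h = 1e-4, step = 0.5;
  for (int i = 0; i < 300; i++)
  {
    double gx = (Objective (x + h, y) - Objective (x - h, y)) / (2 * h);
    double gy = (Objective (x, y + h) - Objective (x, y - h)) / (2 * h);
    x += step * gx;
    y += step * gy;
  }
}

void OnStart ()
{
  double xMin = -10, xMax = 10, yMin = -10, yMax = 10;
  int    k     = 5;            // 5 x 5 = 25 batches over the search space
  double bestF = -DBL_MAX;

  for (int i = 0; i < k; i++)
    for (int j = 0; j < k; j++)
    {
      // start the local solver from the centre of batch (i, j)
      double x = xMin + (i + 0.5) * (xMax - xMin) / k;
      double y = yMin + (j + 0.5) * (yMax - yMin) / k;

      LocalAscent (x, y);
      bestF = MathMax (bestF, Objective (x, y));
    }

  PrintFormat ("best value over %d batches: %f", k * k, bestF);
}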

 
Added the multivariate Megacity function for LBFGS

Result for 1000 dimensions. Server time spent: 9 minutes.

There seem to be no errors; it was made following the templates from the article.


 
Andrey Dik #:

This is the first article in the series. Thinking has not stood still: the test set of functions has been changed to make it more difficult, to counter the fake successes of methods that are initialised with zero, and to counter false positives in general. That's why you should look at the recent articles, for example this one.

The up-to-date sources are always on GitHub.

Testing gradient methods:

https://www.mql5.com/ru/forum/475597/page2#comment_55006029

There's this function in the include file, is it used?

Is it the same function? How do I determine the correct bounds? As I understand it, only a part of it is taken.

If I hadn't narrowed the bounds, would the calculation (finding the maximum) be harder?

//------------------------------------------------------------------------------
class C_Megacity : public C_Function
{
  public: //===================================================================
  C_Megacity ()
  {
    fuName = "Megacity";

    // function boundaries
    xMinRange = -10.0; xMaxRange = -2;
    yMinRange = -10.5; yMaxRange = 10;

    //maximum coordinates
    globalMaxFunValue = 1.0; //12.0
    xGlobalMax        = -3.1357545740179393;
    yGlobalMax        = 2.006136371058429;

    //minimum coordinates
    globalMinFunValue = 0.0; //-1
    xGlobalMin        = -9.5;
    yGlobalMin        = -7.5;
  }

  double Core (double x, double y)
  {
    double a = MathSin (MathSqrt (MathAbs (x - 1.13) + MathAbs (y - 2.0)));
    double b = MathCos (MathSqrt (MathAbs (MathSin (x))) + MathSqrt (MathAbs (MathSin (y - 2.0))));
    double f = a + b;

    double res = floor (MathPow (f, 4)) -
                 floor (2 * exp (-(pow (x + 9.5, 2) + pow (y + 7.5, 2)) / 0.4));

    return Scale (res, -1.0, 12.0, 0.0, 1.0);
  }
};
 
globalMaxFunValue = 1.0; //12.0

Found it for the 25-dimensional case. I don't know where the errors are.
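For reference: Core() above calls Scale() from the C_Function base class, which is not shown in this thread. Assuming it is a plain linear remapping of a value from [inMin, inMax] to [outMin, outMax], a sketch of it would be:

// assumption: Scale() linearly remaps 'value' from [inMin, inMax] to [outMin, outMax]
double Scale (double value, double inMin, double inMax, double outMin, double outMax)
{
  if (inMax == inMin) return outMin;   // guard against a degenerate input range
  return outMin + (value - inMin) * (outMax - outMin) / (inMax - inMin);
}

With a Scale() like that, Scale (res, -1.0, 12.0, 0.0, 1.0) maps the raw range [-1, 12] onto [0, 1], which would explain why globalMaxFunValue is 1.0 (the //12.0 comment being the unscaled maximum).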


 
Maxim Dmitrievsky #:

If I hadn't narrowed the bounds, would the calculation (finding the maximum) be harder?

No, it would not be harder. On your function, how many "skyscrapers" rise above 50% of the range between the function's minimum and maximum? How many on mine? On which surface is it easier to get above 50% of the height if you scatter points at random? On yours. So, to stress it once again, the bounds are set incorrectly.

This is discussed here: https://www.mql5.com/ru/articles/13923#tag3
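A rough way to check this for any chosen bounds is to scatter random points inside them and count what share lands above the 50% level between the raw function's minimum (-1) and maximum (12). A minimal sketch that reuses the Core() formula from the post above (bounds taken from the posted class; widen them to compare):

// Raw (unscaled) Megacity core, copied from the posted class
double MegacityRaw (double x, double y)
{
  double a = MathSin (MathSqrt (MathAbs (x - 1.13) + MathAbs (y - 2.0)));
  double b = MathCos (MathSqrt (MathAbs (MathSin (x))) + MathSqrt (MathAbs (MathSin (y - 2.0))));
  double f = a + b;

  return MathFloor (MathPow (f, 4)) -
         MathFloor (2 * MathExp (-(MathPow (x + 9.5, 2) + MathPow (y + 7.5, 2)) / 0.4));
}

void OnStart ()
{
  double xMin = -10.0, xMax = -2.0;              // bounds from the posted class
  double yMin = -10.5, yMax = 10.0;
  double level = -1.0 + 0.5 * (12.0 - (-1.0));   // 50% of the raw range [-1, 12] = 5.5

  int n = 100000, above = 0;
  MathSrand ((int)GetTickCount ());
  for (int i = 0; i < n; i++)
  {
    double x = xMin + (xMax - xMin) * MathRand () / 32767.0;
    double y = yMin + (yMax - yMin) * MathRand () / 32767.0;
    if (MegacityRaw (x, y) > level) above++;
  }
  PrintFormat ("share of random points above the 50%% level: %.4f%%", 100.0 * above / n);
}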

I got this result from your code:

Not a very encouraging result, yet you persistently post only the best results from different runs. Run 20 trials (20 presses of the play button), or write a loop that simulates multiple trials (see the sketch below), and then calculate the average result, as is done in the articles.
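A minimal sketch of such a loop; RunOneTrial() here is a hypothetical placeholder for one complete optimisation run that returns its best found value:

// Hypothetical: performs one complete optimisation run and returns its best value
double RunOneTrial (int seed)
{
  MathSrand (seed);
  // ... set up and run the solver here ...
  return 0.0;   // placeholder result
}

void OnStart ()
{
  int    trials = 20;
  double sum    = 0.0;

  for (int t = 0; t < trials; t++)
    sum += RunOneTrial (t + 1);   // a different seed for every trial

  PrintFormat ("average result over %d trials: %f", trials, sum / trials);
}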

Which begs the question, why 100,000, why not 1,000,000,000,000,000?


Don't be shy, put in a billion. But for some reason you don't show the number of calls to the target function: how many calls were there? In the rating tests only 10 000 calls to the target function are allowed, and for gradient methods (where an algorithm may try to make many more FF calls) there is a cutoff in the code: if the limit is exceeded, the worst possible value of the target is returned (the methods search for a minimum, so the value is inverted):

// Increase the FF call counter and enforce the limit on the number of calls
  numberLaunches++;
  if (numberLaunches >= maxNumbLaunchesAllowed)
  {
    fi.Set (0, DBL_MAX);                          // return the worst value (the solver minimizes)
    CAlglib::MinNLCRequestTermination (state);    // ask the solver to stop the optimization
    return;
  }

All of this has been described earlier in the articles.

Population optimization algorithms: Evolution Strategies, (μ,λ)-ES and (μ+λ)-ES (www.mql5.com)
This article looks at a group of optimisation algorithms known as Evolution Strategies (ES). They are among the earliest population-based algorithms that use the principles of evolution to search for optimal solutions. It presents the changes made to the classical ES variants and also revises the test function and the test-stand methodology for the algorithms.
 
The question was how to get the same surface; it is not in the include file. When the dimensionality of the problem increases, the number of iterations increases too, which is natural. The comparison should be made by execution time until convergence to the same maximum, if the goal is to get a silver bullet for all occasions.
 
Maxim Dmitrievsky #:

...a silver bullet for all occasions.

The class has the methods GetMinRangeX(), GetMaxRangeX(), GetMinRangeY(), GetMaxRangeY(), which you can use to query the bounds (and you can also simply see the bounds in the code of the corresponding test functions).
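A minimal usage sketch, assuming the include file with C_Megacity is available (its name here is hypothetical) and that the getters simply return the corresponding fields:

#include "TestFunctions.mqh"   // hypothetical name of the include with C_Megacity

void OnStart ()
{
  C_Megacity func;

  // query the search bounds instead of hard-coding them
  PrintFormat ("x: [%g, %g], y: [%g, %g]",
               func.GetMinRangeX (), func.GetMaxRangeX (),
               func.GetMinRangeY (), func.GetMaxRangeY ());
}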

In practice there is always a limit on the maximum allowed number of calls to the target; in the tests a limit of 10 000 calls has been adopted.

If there is no limit on either computational resources or time, it is better not to use optimisation algorithms at all and simply do a full enumeration, but that never happens in real life. Testing and comparison of the methods is carried out with a limit of 10 000 calls to the target. The whole point of comparing algorithms is to see which of them achieve a better result with fewer calls to the target. Accordingly, the more calls an algorithm needs to reach comparable results, the weaker it is considered to be on the corresponding class of problems.

Unfortunately, these rather subtle points, all of which are described in detail in the articles on optimisation methods, escape you.

 
Is there any specific criterion for evaluating the quality of optimisation? If an algorithm needs more iterations, is that bad?

Comparing the speed of finding the optimum against a full brute-force search is a normal comparison.
 
Maxim Dmitrievsky #:
Comparing the speed of finding the optimum against a full brute-force search is a normal comparison.

There are situations where a full brute-force search will not find the optimum at all, because the nodes of the enumeration grid simply do not land on it.
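A simple illustration with an assumed narrow-peak function: a coarse enumeration grid steps right past the peak, while a much finer grid (at a far higher cost in target calls) is needed to hit it.

// Assumed test function: a very narrow peak of height 1 near x = 0.337
double NarrowPeak (double x)
{
  return MathExp (-MathPow ((x - 0.337) / 0.001, 2));
}

void OnStart ()
{
  // coarse enumeration, step 0.01: 101 nodes on [0, 1], none lands on the peak
  double best = -DBL_MAX;
  for (int i = 0; i <= 100; i++)
    best = MathMax (best, NarrowPeak (i * 0.01));
  PrintFormat ("coarse grid best: %f", best);   // far below 1.0

  // fine enumeration, step 0.001: 1001 nodes, one of them lands on the peak
  best = -DBL_MAX;
  for (int i = 0; i <= 1000; i++)
    best = MathMax (best, NarrowPeak (i * 0.001));
  PrintFormat ("fine grid best:   %f", best);   // about 1.0
}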