Discussing the article: "Population optimization algorithms: Evolution of Social Groups (ESG)" - page 2

 
Andrey Dik #:
I doubt that GetMicrosecondCount can return the same value on repeated runs, even if you try hard. Provided, of course, that an individual test takes longer than a microsecond.
2024.02.03 16:11:25.824 OPTIMIZATION_METHOD_AO_BGA
2024.02.03 16:11:25.873 AmountCycles = 5000, Repeats = 1
2024.02.03 16:11:25.873 BestResult = 0.9999285571886549: X1 = -1.48, Y1 = 0.6299999999999999, X2 = -1.48, Y2 = 0.6299999999999999, X3 = -1.48, Y3 = 0.6299999999999999
2024.02.03 16:11:25.873 Check = 0.9999285571886549: X1 = -1.48, Y1 = 0.6299999999999999, X2 = -1.48, Y2 = 0.6299999999999999, X3 = -1.48, Y3 = 0.6299999999999999
2024.02.03 16:11:36.957 OPTIMIZATION_METHOD_AO_BGA
2024.02.03 16:11:37.007 AmountCycles = 5000, Repeats = 1
2024.02.03 16:11:37.007 BestResult = 0.9999285571886549: X1 = -1.48, Y1 = 0.6299999999999999, X2 = -1.48, Y2 = 0.6299999999999999, X3 = -1.48, Y3 = 0.6299999999999999
2024.02.03 16:11:37.007 Check = 0.9999285571886549: X1 = -1.48, Y1 = 0.6299999999999999, X2 = -1.48, Y2 = 0.6299999999999999, X3 = -1.48, Y3 = 0.6299999999999999
2024.02.03 16:11:47.218 OPTIMIZATION_METHOD_AO_BGA
2024.02.03 16:11:47.267 AmountCycles = 5000, Repeats = 1
2024.02.03 16:11:47.267 BestResult = 0.9999285571886549: X1 = -1.48, Y1 = 0.6299999999999999, X2 = -1.48, Y2 = 0.6299999999999999, X3 = -1.48, Y3 = 0.6299999999999999
2024.02.03 16:11:47.267 Check = 0.9999285571886549: X1 = -1.48, Y1 = 0.6299999999999999, X2 = -1.48, Y2 = 0.6299999999999999, X3 = -1.48, Y3 = 0.6299999999999999
2024.02.03 16:12:49.104 OPTIMIZATION_METHOD_AO_BGA
2024.02.03 16:12:49.152 AmountCycles = 5000, Repeats = 1
2024.02.03 16:12:49.152 BestResult = 0.9999285571886549: X1 = -1.48, Y1 = 0.6299999999999999, X2 = -1.48, Y2 = 0.6299999999999999, X3 = -1.48, Y3 = 0.6299999999999999
2024.02.03 16:12:49.152 Check = 0.9999285571886549: X1 = -1.48, Y1 = 0.6299999999999999, X2 = -1.48, Y2 = 0.6299999999999999, X3 = -1.48, Y3 = 0.6299999999999999

On the left is the column of script start times.
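
For context, a minimal sketch of the seeding scheme under discussion (assuming the test scripts seed MathSrand from GetMicrosecondCount, which is my assumption, not necessarily what the article's scripts actually do):

// Sketch: seed the standard PRNG from the microsecond counter.
// Identical random sequences across runs would require the counter
// value itself to repeat, which is what is being doubted here.
void OnStart ()
{
  ulong seed = GetMicrosecondCount ();   // microseconds since the program started
  MathSrand ((int)seed);                 // seed MQL5's standard PRNG

  PrintFormat ("seed = %I64u, first draws: %d %d %d",
               seed, MathRand (), MathRand (), MathRand ());
}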

 
fxsaber #:

On the left is the column of script start times.


99.99% is the maximum that can be achieved with the selected step; as far as I understand, an accuracy of two decimal places is selected in the BGA settings.
So this is not a consequence of identical initialisation of the RNG, but the result of the full theoretically possible convergence within the given task.
You can verify this by printing the very first chromosome created in each individual test: they will be different chromosomes. This is the remarkable property of the optimisation algorithm - finding the same solution from different (random) initial states.
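
To see the effect of the step numerically, here is a toy sketch (this is not the article's test function, just an illustration of how a 0.01 grid caps the attainable fitness):

double ToyFF (const double x)               // toy function with a peak of 1.0 at x = pi/10
{
  return MathExp (-100.0 * MathPow (x - M_PI / 10.0, 2.0));
}

void OnStart ()
{
  const double step  = 0.01;                // two decimal places of accuracy
  double bestX = 0.0, bestF = -DBL_MAX;

  for (double x = -1.0; x <= 1.0 + 1e-12; x += step)
  {
    double xq = NormalizeDouble (x, 2);     // snap to the 0.01 grid
    double f  = ToyFF (xq);
    if (f > bestF) { bestF = f; bestX = xq; }
  }

  // bestF comes out slightly below 1.0 because pi/10 is not on the grid -
  // the same effect as the 0.9999... plateau in the BGA logs above
  PrintFormat ("best on the 0.01 grid: x = %.2f, f = %.16f", bestX, bestF);
}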
 
I came up with a test of an algorithm's resistance to getting stuck in local extrema.
At the first iteration we place all agents not randomly over the whole field, but at the global minimum. The task is to find the global maximum. I am sure that many algorithms will stay in that hole.
This is a very artificial case, but it leads to interesting conclusions.
 
Andrey Dik #:
I came up with a test of an algorithm's resistance to getting stuck in local extrema.
At the first iteration we place all agents not randomly over the whole field, but at the global minimum. The task is to find the global maximum. I am sure that many algorithms will stay in that hole.
This is a very artificial case, but it leads to interesting conclusions.

A whole population sitting at one point is a degenerate population. So this is also a test of the effect of decreasing diversity in the population; the algorithm should be able to get out of such a bottleneck.

That's just thinking out loud.
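
One way to quantify that diversity as the test runs is to track how spread out the agents are; a small sketch (the flattened coords[] layout here is just an illustration, not the article's data structures):

// Mean per-coordinate standard deviation of the agents' positions:
// 0.0 for a fully degenerate population (all agents at one point),
// growing as the population spreads out again.
double PopulationDiversity (const double &coords[], const int agents, const int dims)
{
  double sumStd = 0.0;

  for (int d = 0; d < dims; d++)
  {
    double mean = 0.0, var = 0.0;
    for (int a = 0; a < agents; a++) mean += coords [a * dims + d];
    mean /= agents;
    for (int a = 0; a < agents; a++) var += MathPow (coords [a * dims + d] - mean, 2.0);
    sumStd += MathSqrt (var / agents);
  }
  return sumStd / dims;
}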

 
Andrey Dik #:
At the first iteration we place all agents not randomly over the whole field, but at the global minimum. The task is to find the global maximum.

  1. The usual launch, searching for the global maximum.
  2. Then launch from the found point (MaxTmp) to search for the global minimum.
  3. Then launch from the found point (MinTmp) to search for the global maximum.
  4. Go to step 2.

In steps 2-3 we measure how far we are from the global minimum/maximum: (MaxGlobal - MaxTmp + MinTmp - MinGlobal). The average of this is rating1 of the optimisation algorithm.

The average of the sum of (MaxTmp[i] - MinTmp[i]) is rating2 of the optimisation algorithm.


At the moment there is no function that allows running the algorithm from a point.
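
A small sketch of how rating1 and rating2 could then be computed, assuming the MaxTmp[]/MinTmp[] values from steps 2-3 have already been collected (the function below is illustrative only; the "run from a point" functionality itself does not exist yet):

void ComputeRatings (const double &maxTmp[], const double &minTmp[],
                     const double maxGlobal, const double minGlobal)
{
  int    n    = MathMin (ArraySize (maxTmp), ArraySize (minTmp));
  double sum1 = 0.0, sum2 = 0.0;

  for (int i = 0; i < n; i++)
  {
    sum1 += (maxGlobal - maxTmp [i]) + (minTmp [i] - minGlobal); // distance to the true extrema
    sum2 += (maxTmp [i] - minTmp [i]);                           // swing between the found extrema
  }

  // rating1: smaller is better (found extrema are close to the global ones)
  // rating2: larger is better (the algorithm covers a wider range of the function)
  PrintFormat ("rating1 = %g, rating2 = %g", sum1 / n, sum2 / n);
}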

 
fxsaber #:

  1. The usual launch, searching for the global maximum.
  2. Then launch from the found point (MaxTmp) to search for the global minimum.
  3. Then launch from the found point (MinTmp) to search for the global maximum.
  4. Go to step 2.

In steps 2-3 we measure how far we are from the global minimum/maximum: (MaxGlobal - MaxTmp + MinTmp - MinGlobal). The average of this is rating1 of the optimisation algorithm.

The average of the sum of (MaxTmp[i] - MinTmp[i]) is rating2 of the optimisation algorithm.


You can do it this way)))

At the moment there is no function that allows running the algorithm from a point.

You can "forcibly" initialise the coordinates of the agents with any values before the FF is measured at the very first epoch. The agents' fields are public. The "guts" of the algorithms tried to make them as accessible as possible from the outside, of course, it contradicts the usual approach to ensure the safety of fields, but the one who wants to shoot himself in the foot in any case will find a way, and so the convenience of using the algorithms is preserved.

And is the issue with the initialisation of the RNG resolved?

 
Andrey Dik #:

99.99% is the maximum that can be achieved with the selected step; as far as I understand, an accuracy of two decimal places is selected in the BGA settings.
So this is not a consequence of identical initialisation of the RNG, but the result of the full theoretically possible convergence within the given task.
You can verify this by printing the very first chromosome created in each individual test: they will be different chromosomes. This is the remarkable property of the optimisation algorithm - finding the same solution from different (random) initial states.

You are right, thank you!

void OnStart()
{
  Print(GetMicrosecondCount());
}
2024.02.04 15:31:28.422 2142
2024.02.04 15:31:40.553 2367
2024.02.04 15:31:42.385 2326
 
Andrey Dik #:

You can "forcibly" initialise the agents' coordinates from outside with any values before the FF is measured at the very first epoch.

Please show examples with two of your algorithms.

 
fxsaber #:

Please show examples with two of your algorithms.

In the main epoch loop, insert code that overwrites the agents' coordinates with the coordinates of the function's global minimum:

for (int epochCNT = 1; epochCNT <= epochCount && !IsStopped (); epochCNT++)
    {
      AO.Moving ();
      
      //--- inserted code: place all agents at the function's global minimum ----
      if (epochCNT == 1)
      {
        for (int set = 0; set < ArraySize (AO.a); set++)
        {
          for (int i = 0; i < funcCount; i++)
          {
            AO.a [set].c [i * 2]     = f.GetMinFuncX ();
            AO.a [set].c [i * 2 + 1] = f.GetMinFuncY ();
          }
        }
      }
      //--- end of inserted code -------------------------------------------------

      for (int set = 0; set < ArraySize (AO.a); set++)
      {
        AO.a [set].f = f.CalcFunc (AO.a [set].c, funcCount);
      }

      AO.Revision  ();

      if (Video_P)
      {
        //drawing a population--------------------------------------------------
        SendGraphToCanvas  (1, 1);

        for (int i = 0; i < ArraySize (AO.a); i++)
        {
          PointDr (AO.a [i].c, f, 1, 1, funcCount, false);
        }
        PointDr (AO.cB, f, 1, 1, funcCount, true);

        MaxMinDr (f);

        //drawing a convergence graph---------------------------------------------
        xConv = (int)Scale (epochCNT, 1,              epochCount,     H + 2, W - 3, false);
        yConv = (int)Scale (AO.fB,   f.GetMinFun (), f.GetMaxFun (), 2,     H - 2, true);
        Canvas.FillCircle (xConv, yConv, 1, COLOR2RGB (clrConv));

        Canvas.Update ();
      }
    }

This trick will not work with the BGA algorithm, because this way we would only overwrite the phenotype, while the binary genotype would remain unchanged. For BGA we would have to get inside the algorithm and perform such a surgical operation at the moment the population is being created.
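
To illustrate why overwriting only the decoded value is not enough for a binary GA, here is a conceptual sketch of real-to-binary encoding (the function names and bit layout are illustrative only, not the actual fields of the BGA class):

// The coordinate lives in the genotype (a bit pattern); the phenotype is merely
// decoded from it, so a "surgical" initialisation has to write the bits themselves.
ulong EncodeToGenes (const double x, const double minX, const double maxX, const int bits)
{
  double t = (x - minX) / (maxX - minX);             // normalise to [0, 1]
  return (ulong)MathRound (t * ((1 << bits) - 1));   // quantise into a bit pattern
}

double DecodeFromGenes (const ulong genes, const double minX, const double maxX, const int bits)
{
  double t = (double)genes / (double)((1 << bits) - 1);
  return minX + t * (maxX - minX);                   // back to the original range
}

void OnStart ()
{
  ulong  genes = EncodeToGenes   (-1.48, -10.0, 10.0, 16); // write the genotype first...
  double pheno = DecodeFromGenes (genes, -10.0, 10.0, 16); // ...only then decode the phenotype
  Print ("genes = ", genes, "  decoded = ", pheno);
}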

This is what you get if you initialise ESG in this way:

[attached animation: expansion]

Here I recorded a video of the ESG; you can see how it climbs out from a single point and spreads around in search of the global maximum. Not all algorithms will behave this way - I wrote about this in my articles; some algorithms have no mechanisms for "escaping from the trap" at all.

 
Hi, I am just starting to learn about alternatives to the built-in fast genetic algorithm, and I was wondering if you could help me get your BGA optimisation working. I have been looking at some of your articles on this topic, but I feel like I am starting late, have missed some information somewhere and don't know how to actually optimise an EA with a different algorithm. I downloaded and compiled Test_AO_BGA.mq5. When I load it, the terminal says: "Invalid program type, loading Test_AO_BGA.ex5 failed". If I try to run it, the terminal reports "Test_AO_BGA.ex5 not found". Could you please help me get it working? And how do I configure my own EA to use BGA optimisation? Thanks.