Chaos optimization algorithm (COA): Continued

Andrey Dik

Contents

  1. Introduction
  2. Implementation of the algorithm
  3. Test results
  4. Summary


Introduction

In the previous article, we introduced the chaotic optimization method and analyzed some of the methods included in the algorithm. In this article, we will complete the analysis of the remaining methods and move directly to testing the algorithm on test functions. 

In this implementation, the chaotic optimization method uses deterministic chaos to explore the solution space. The key principle is the use of three different chaotic maps (logistic, sinusoidal and tent maps) to generate sequences that have pseudo-randomness and ergodicity properties. The algorithm operates in three phases: an initial chaotic search, a solution refinement using the weighted gradient method, and a final local search with adaptive scope narrowing.

The use of velocity vectors with inertia and stagnation-detection mechanisms helps the algorithm avoid local extrema. Dynamic adaptation of parameters and several types of mutation maintain a balance between global exploration and local exploitation of the solutions found. Now let's look at the remaining methods.



Implementation of the algorithm

The ApplyMutation method is used to introduce mutations into the agent. Its main task is to randomly change the parameters of a certain agent.

The method starts by checking the validity of the index of the specified agent. If the index is out of range, the method terminates. Next, information is collected about the size of various arrays associated with the agent, such as the size of its parameters and velocity. This prevents the method from going beyond the bounds of these arrays.

The method calculates the maximum number of coordinates that can be changed based on the sizes of available arrays to ensure the safety of operations. A random number is selected to determine how many coordinates will be mutated. This value ranges from 1 to 30% of the maximum number of available coordinates. An array of indices is formed representing the coordinates that can be changed. This array is shuffled to ensure random selection when applying mutations.

In the loop, coordinates for mutation are selected from the shuffled array. For each selected coordinate, a mutation type is determined based on a random value. Possible types include complete random mutation (where a new value is chosen randomly within a given range), mutation relative to the global best solution, and mutation using chaotic maps.

After changing the value for each coordinate, the agent's velocity is reset so that the new value is not affected by previous velocities. Finally, the new value for the corresponding agent coordinate is set taking into account the ranges and steps, which ensures that the values remain within the acceptable range.

//——————————————————————————————————————————————————————————————————————————————
void C_AO_COA_chaos::ApplyMutation (int agentIdx)
{
  // Determine the number of coordinates for mutation (from 1 to 30% of coordinates)
  int mutationCount = 1 + (int)(u.RNDprobab () * coords * 0.3);
  mutationCount = MathMin (mutationCount, coords);

  // Create an array of indices for mutation without repetitions
  int mutationIndices [];
  ArrayResize (mutationIndices, coords);

  // Fill the array with indices
  for (int i = 0; i < coords; i++)
  {
    mutationIndices [i] = i;
  }

  // Shuffle the indices
  for (int i = coords - 1; i > 0; i--)
  {
    int j = (int)(u.RNDprobab () * (i + 1));
    if (j <= i) // Guard against u.RNDprobab () returning exactly 1.0, which would yield j = i + 1
    {
      int temp = mutationIndices [i];
      mutationIndices [i] = mutationIndices [j];
      mutationIndices [j] = temp;
    }
  }

  // Apply mutations to the selected coordinates
  for (int m = 0; m < mutationCount; m++)
  {
    int c = mutationIndices [m];

    // Different types of mutations for variety
    double r = u.RNDprobab ();
    double x;

    if (r < 0.3)
    {
      // Complete random mutation
      x = rangeMin [c] + u.RNDprobab () * (rangeMax [c] - rangeMin [c]);
    }
    else
      if (r < 0.6)
      {
        // Mutation relative to global best
        double offset = (u.RNDprobab () - 0.5) * (rangeMax [c] - rangeMin [c]) * 0.2;
        x = cB [c] + offset;
      }
      else
      {
        // Mutation using chaotic map
        agent [agentIdx].gamma [c] = SelectChaosMap (agent [agentIdx].gamma [c], (epochNow + c) % 3);
        x = rangeMin [c] + agent [agentIdx].gamma [c] * (rangeMax [c] - rangeMin [c]);
      }

    // Reset velocity
    agent [agentIdx].velocity [c] = 0.0;

    // Apply the new value with range check
    a [agentIdx].c [c] = u.SeInDiSp (x, rangeMin [c], rangeMax [c], rangeStep [c]);
  }
}
//——————————————————————————————————————————————————————————————————————————————

The UpdateSigma method in the class is responsible for dynamically adapting the penalty parameter used during the optimization. It regulates the penalty value depending on the number of feasible solutions in the agent population.

If the method is called for the first time (epoch is 1), the current penalty value (currentSigma) is set to half the base value (sigma). The method runs through the entire population of agents and calculates the number of feasible solutions. To do this, an auxiliary function is used to determine whether the agent is valid. Here, iteration is performed over all agents, which allows us to determine how many of them meet the given criteria.

Next, the method calculates the proportion of feasible solutions in the population by dividing the number of feasible solutions by the total number of agents. This ratio shows how well the current strategy is working. Based on it, the method adjusts the penalty: if the proportion of feasible solutions is below 30%, the penalty is increased to change the search dynamics and push agents toward the feasible region; if it exceeds 70%, too many agents are already feasible, and the penalty is reduced to avoid premature convergence.

In the end, the method ensures that the value of the current penalty remains within feasible limits. If it becomes too small (less than 10% of the base value), the penalty increases to that level. Similarly, if the penalty exceeds 500% of the base value, it is capped to that level.

//——————————————————————————————————————————————————————————————————————————————
void C_AO_COA_chaos::UpdateSigma ()
{
  // Dynamic adaptation of the penalty parameter
  // Start with a small value and increase it if necessary

  if (epochNow == 1)
  {
    currentSigma = sigma * 0.5;
    return;
  }

  // Calculate the number of feasible solutions
  int feasibleCount = 0;
  for (int i = 0; i < popSize; i++)
  {
    if (IsFeasible (i))
    {
      feasibleCount++;
    }
  }

  double feasibleRatio = (double)feasibleCount / MathMax (1, popSize);

  // Adapt the penalty parameter depending on the proportion of feasible solutions
  if (feasibleRatio < 0.3)
  {
    // Too few feasible solutions - increase the penalty
    currentSigma *= 1.2;
  }
  else
    if (feasibleRatio > 0.7)
    {
      // Too many feasible solutions - reduce the penalty
      currentSigma *= 0.9;
    }

  // Limit the sigma value
  if (currentSigma < sigma * 0.1) currentSigma = sigma * 0.1;
  else
    if (currentSigma > sigma * 5.0) currentSigma = sigma * 5.0;
}
//——————————————————————————————————————————————————————————————————————————————

The IsFeasible method determines whether a particular solution represented by an agent with a given agentIdx index is feasible with respect to the given constraints.

First, the method checks that the given agent index (agentIdx) is within the allowed range, that is, greater than or equal to 0 and less than the population size (popSize). If the index is invalid, the method returns 'false', signaling that the solution is invalid. If the agent index is valid, the method begins checking the constraints for the given solution. It iterates through all the solution's coordinates and calculates the constraint violation value for each coordinate using the CalculateConstraintValue function.

The CalculateConstraintValue function returns a value that reflects the degree of constraint violation for a particular solution coordinate. If after checking all coordinates, no constraint is violated, that is, all violation values are less than or equal to "eps", then the solution is considered feasible, and the method returns "true".

//——————————————————————————————————————————————————————————————————————————————
bool C_AO_COA_chaos::IsFeasible (int agentIdx)
{
  // Check if the solution is within the feasible region
  for (int c = 0; c < coords; c++)
  {
    double violation = CalculateConstraintValue (agentIdx, c);
    if (violation > eps)
    {
      return false;
    }
  }
  return true;
}
//——————————————————————————————————————————————————————————————————————————————

The UpdateBestHistory method stores the current best value in the search history. It first checks that the new best value is valid; if so, the value is written into the history array at the current index position. The index is then advanced cyclically using the remainder of division by the history size, so the array always holds the last 10 values without going out of bounds.

The method allows tracking search progress and using the history of the best solutions for analysis.

//——————————————————————————————————————————————————————————————————————————————
void C_AO_COA_chaos::UpdateBestHistory (double newBest)
{
  // Protection against invalid values
  if (!MathIsValidNumber (newBest)) return;

  // Update the history of the best values
  globalBestHistory [historyIndex] = newBest;
  historyIndex = (historyIndex + 1) % 10;
}
//——————————————————————————————————————————————————————————————————————————————

The IsConverged method is used to determine whether the algorithm has reached convergence. It analyzes the history of the best global solutions to assess how significantly the solutions improve over time. If the improvements become insignificant, the algorithm is considered to have converged.

The method initializes variables to count the number of allowed (valid) values, the sum of values, the minimum and maximum values from the history of the best global solutions (globalBestHistory). The minVal and maxVal variables are initialized to the maximum and minimum possible double values respectively to correctly determine the minimum and maximum in the future.

The method iterates over the globalBestHistory array, skipping invalid entries while accumulating the minimum, maximum and sum. If there are fewer than five valid values, or all values coincide, convergence is not declared. Otherwise, the mean is computed and the relative spread (maxVal - minVal) / |average| is compared against a 0.1% threshold. In general, the IsConverged method determines the convergence of the optimization algorithm by analyzing the history of best solutions and assessing how significant the improvements are over time.

//——————————————————————————————————————————————————————————————————————————————
bool C_AO_COA_chaos::IsConverged ()
{
  // Check if there is enough data in the history
  int validValues = 0;
  double sum      = 0.0;
  double minVal   = DBL_MAX;
  double maxVal   = -DBL_MAX;

  // Find min, max, and sum of values in history
  for (int i = 0; i < 10; i++)
  {
    if (globalBestHistory [i] == -DBL_MAX || !MathIsValidNumber (globalBestHistory [i])) continue;

    minVal = MathMin (minVal, globalBestHistory [i]);
    maxVal = MathMax (maxVal, globalBestHistory [i]);
    sum += globalBestHistory [i];
    validValues++;
  }

  // If there is not enough data or all values are the same
  if (validValues < 5 || minVal == maxVal) return false;

  // Calculate the average value
  double average = sum / validValues;

  // Check the case when the mean is close to zero
  if (MathAbs (average) < eps) return MathAbs (maxVal - minVal) < eps * 10.0;

  // Relative difference for convergence testing
  double relDiff = MathAbs (maxVal - minVal) / MathAbs (average);

  return relDiff < 0.001; // Convergence threshold - 0.1%
}
//——————————————————————————————————————————————————————————————————————————————

The ResetStagnatingAgents method is designed to manage agents during the optimization, especially to handle cases where agents stop showing improvement in their results. It monitors the state of stagnation of agents and, if necessary, applies mutation to those of them that are stuck in local extremes.

For each agent, it is checked whether its current fitness function value (a[i].f) has improved compared to its previous value (a[i].fB). If the current value has not improved, or has become worse, the agent's stagnationCounter is incremented, signaling that the agent is in a state of stagnation. If the value has improved, the stagnation counter is reset.

If an agent stagnates for more than 5 iterations, the method calculates a reset probability (resetProb). This probability is proportional to the current value of the stagnation counter: the longer the agent stagnates, the higher the probability of it being reset. The formula combines a fixed base value (0.2) with a term proportional to the stagnation counter. If a randomly generated value is less than resetProb, a mutation is applied to the agent via the ApplyMutation function, after which the agent's stagnation counter is reset and the resetCount counter is incremented.

The method finishes after all agents have been processed; mutations are applied only to those that exceeded the stagnation threshold and passed the probability check.

//——————————————————————————————————————————————————————————————————————————————
void C_AO_COA_chaos::ResetStagnatingAgents ()
{
  int resetCount = 0;

  for (int i = 0; i < popSize; i++)
  {
    // Increase the stagnation counter if there is no improvement
    if (a [i].f <= a [i].fB)
    {
      agent [i].stagnationCounter++;
    }
    else
    {
      agent [i].stagnationCounter = 0;
    }

    // Reset the agent if it has been stagnating for too long
    if (agent [i].stagnationCounter > 5)
    {
      // Reset only some of the agents with a probability depending on stagnation
      double resetProb = 0.2 * (1.0 + (double)agent [i].stagnationCounter / 10.0);

      if (u.RNDprobab () < resetProb)
      {
        ApplyMutation (i);
        agent [i].stagnationCounter = 0;
        resetCount++;
      }
    }
  }
}
//——————————————————————————————————————————————————————————————————————————————

The CalculateWeightedGradient method is used to estimate the gradient of a constraint for a specific agent and coordinate, taking into account the degree to which those constraints are violated. It helps determine in which direction and to what degree the solution should be adjusted to eliminate the violation, weighting the gradient by the level of the violation.

First, the agent and coordinate indices are checked to ensure they are within acceptable limits. If either index is invalid, zero is returned, meaning there is no direction for adjustment. The method then loops through all coordinates associated with the current agent, calculates the violation value for each, and finds the maximum violation among them. If the violation for the given coordinate exceeds the eps threshold, corrective action is worthwhile; otherwise, zero is returned, since movement in this direction is not required.

If the violation is significant, the constraint gradient is calculated for the coordinate — a measure of the direction and strength of the change in the constraint relative to changes in that coordinate. The gradient is multiplied by a weight, which is defined as the ratio of the violation of the current coordinate to the maximum violation among all constraints. This ensures that a larger violation has a stronger influence on the direction of the adjustment. The resulting value is a weighted constraint gradient that indicates how much and in what direction the solution needs to be adjusted to eliminate the violation in a given coordinate, proportional to its degree.

//——————————————————————————————————————————————————————————————————————————————
double C_AO_COA_chaos::CalculateWeightedGradient (int agentIdx, int coordIdx)
{
  // Calculate the maximum value of the constraint violation
  double maxViolation = eps;

  for (int c = 0; c < coords; c++)
  {
    double violation = CalculateConstraintValue (agentIdx, c);
    maxViolation = MathMax (maxViolation, violation);
  }

  // Violation for the current coordinate
  double violation = CalculateConstraintValue (agentIdx, coordIdx);

  // If there is no significant violation, return 0
  if (violation <= eps) return 0.0;

  // Calculate the gradient
  double gradient = CalculateConstraintGradient (agentIdx, coordIdx);

  // Normalize the impact depending on the degree of violation
  double weight = violation / maxViolation;

  return gradient * weight;
}
//——————————————————————————————————————————————————————————————————————————————

The CalculateConstraintValue method is used to estimate the degree of constraint violation for a given coordinate of a given agent. It determines how far the current coordinate value is outside the acceptable limits (boundaries) and returns a numerical value reflecting the magnitude of this violation.

First, the method checks the agent (agentIdx) and coordinate (coordIdx) indices for validity. It then gets the current value x of the coordinate from the agent's state, along with the minimum (min) and maximum (max) acceptable bounds for this coordinate. Next, the method checks whether x lies within [min, max]. Finally, it returns a violation value representing the total amount of constraint violation for the given coordinate of the given agent.

Key features include:

  • The method returns a positive value if the constraint is violated and 0 if it is met.
  • The magnitude of the violation is proportional to the extent to which the coordinate value goes beyond the acceptable limits.

Ultimately, the CalculateConstraintValue method provides important information about how well each agent in the population satisfies the problem constraints. It measures the magnitude of the violation, which helps the algorithm understand how much the agent's position needs to be adjusted.

//——————————————————————————————————————————————————————————————————————————————
double C_AO_COA_chaos::CalculateConstraintValue (int agentIdx, int coordIdx)
{
  double x = a [agentIdx].c [coordIdx];
  double min = rangeMin [coordIdx];
  double max = rangeMax [coordIdx];

  // Smoothed constraint violation function with a smooth transition at the boundary
  double violation = 0.0;

  // Check the lower bound
  if (x < min)
  {
    violation += min - x;
  }
  // Check the upper bound
  else
    if (x > max)
    {
      violation += x - max;
    }

  return violation;
}
//——————————————————————————————————————————————————————————————————————————————

The CalculateConstraintGradient method calculates the constraint gradient for a given coordinate of a specific agent. This method determines in which direction and to what extent the coordinate value should be changed due to constraint violation.

The method first checks the agent (agentIdx) and coordinate (coordIdx) indices to ensure they are correct. If either is outside the valid range, the method returns 0.0, indicating that the gradient cannot be calculated. It then retrieves the current value of the agent's coordinate, as well as the minimum and maximum boundaries set for this coordinate, and checks whether the current value x lies outside those boundaries.

The final value returned by the method reflects the direction and necessity of changing the current value of the agent's coordinate, taking into account the specified constraints.

//——————————————————————————————————————————————————————————————————————————————
double C_AO_COA_chaos::CalculateConstraintGradient (int agentIdx, int coordIdx)
{
  double x = a [agentIdx].c [coordIdx];
  double min = rangeMin [coordIdx];
  double max = rangeMax [coordIdx];

  // Smoothed gradient for better stability
  if (x < min) return -1.0; // Need to increase the value
  else
    if (x > max) return 1.0;  // Need to decrease the value

  return 0.0;
}
//——————————————————————————————————————————————————————————————————————————————

The CalculatePenaltyFunction method calculates the values of the objective function taking into account the penalty for violating constraints for a specific agent. It combines a base value of the objective function with a penalty based on the magnitude of constraint violations.

The method starts by checking the validity of the agent index (agentIdx). If the index is valid, the method gets the base (non-penalized) value of the agent's objective function (a[agentIdx].f) and stores it in the baseValue variable. It then loops through all coordinates (coords) of the agent; for each coordinate, the magnitude of the constraint violation is calculated, and if it exceeds the small threshold eps, its square is added to the penaltySum.

A quadratic penalty is used so that minor violations are penalized softly while major ones are penalized heavily. After all the constraint-violation penalties are summed, the total penalty is obtained by multiplying the sum by the dynamic ratio currentSigma.

Finally, the method calculates the "penalized" value of the objective function by subtracting the 'penalty' from the base value of the baseValue objective function. The penalized value of the objective function is returned.

//——————————————————————————————————————————————————————————————————————————————
double C_AO_COA_chaos::CalculatePenaltyFunction (int agentIdx)
{
  // Base value of the objective function
  double baseValue = a [agentIdx].f;

  // Penalty term
  double penaltySum = 0.0;

  for (int c = 0; c < coords; c++)
  {
    double violation = CalculateConstraintValue (agentIdx, c);

    // Quadratic penalty for better resolution of solutions close to the boundary
    if (violation > eps)
    {
      penaltySum += violation * violation;
    }
  }

  // Apply a dynamic penalty ratio
  double penalty = currentSigma * penaltySum;

  // For the maximization problem: f_penalty = f - penalty
  return baseValue - penalty;
}
//——————————————————————————————————————————————————————————————————————————————

The Moving method is the main control procedure for moving the solution through the optimization. It performs sequential steps aimed at updating the current state of the system, adapting parameters and managing the search phases.

First, the current epoch counter is incremented. On the first call, the method initializes the population and returns. Next, the penalty parameter (sigma) is dynamically updated, which helps regulate the influence of penalty functions during the search process. An important task is to periodically check and reset agents that are stuck or not showing progress, which helps prevent stagnation in local traps.

The current phase of the search is then determined by the value of the epoch counter - at the initial stages, a global search is performed, and then it moves on to a narrower, local search. Depending on the phase, the corresponding search strategies are called, which control the movement of the agents.

//——————————————————————————————————————————————————————————————————————————————
void C_AO_COA_chaos::Moving ()
{
  epochNow++;

  if (!revision)
  {
    InitialPopulation ();

    revision = true;
    return;
  }

  // Dynamically update the penalty parameter
  UpdateSigma ();

  // Reset stagnating agents
  if (epochNow % 5 == 0)
  {
    ResetStagnatingAgents ();
  }

  // Determine the search phase
  if (epochNow <= S1)
  {
    FirstCarrierWaveSearch ();
  }
  else
    if (epochNow <= S1 + S2)
    {
      SecondCarrierWaveSearch ();
    }
}
//——————————————————————————————————————————————————————————————————————————————

The Revision method is responsible for improving solutions and adapting search parameters during the iterative optimization. It re-evaluates the current state of the population, updates the best solutions, and adjusts the search parameters based on the performance of the latest steps.

The method iterates through all solutions in the population, calculating their new "penalty" value using the penalty function, which allows for constraint violations to be taken into account. The resulting value is then validated, and if it is valid, it is stored as the current value of the function. For each agent whose new function value is better than the previous one, its personal data is updated: the new value is saved, and the current coordinates are also copied. This stores each agent's personal best solution. If an agent is found to have a solution better than the current global solution, the global best solution is updated and the corresponding coordinates are remembered. The history of best solutions is also updated.

After the first epoch (epochNow > 1), the success of the search is assessed: the proportion of improvements among all agents is determined. Based on this assessment, the parameters are adapted, primarily alpha (the search scope for each coordinate). The adjustment rules depend on the current phase: in the global search phase, a low improvement rate widens the search scope while a high one narrows it; in the local search phase, an insufficient improvement rate also widens the scope, and significant success narrows it, which helps localize the optimal solution. The alpha parameter is clamped between minimum and maximum values derived from the coordinate's acceptable range.

This approach allows for a dynamic balance between exploring new solutions and localizing optimal ones, using information about the performance of past iterations.

//——————————————————————————————————————————————————————————————————————————————
void C_AO_COA_chaos::Revision ()
{
  int    improvementCount = 0;
  double bestImprovement  = 0.0;

  // Evaluate all solutions
  for (int i = 0; i < popSize; i++)
  {
    double newValue = CalculatePenaltyFunction (i);

    // Check the validity of the new value
    if (!MathIsValidNumber (newValue)) continue;

    // Save the current value for the next iteration
    a [i].f = newValue;

    // Update the personal best solution (before a global one for efficiency)
    if (newValue > a [i].fB)
    {
      a [i].fB = newValue;
      ArrayCopy (a [i].cB, a [i].c, 0, 0, WHOLE_ARRAY);
      improvementCount++;
    }

    // Update the global best solution
    if (newValue > fB)
    {
      double improvement = newValue - fB;
      fB = newValue;
      ArrayCopy (cB, a [i].c, 0, 0, WHOLE_ARRAY);

      // Update the history of the best values
      UpdateBestHistory (fB);

      bestImprovement = MathMax (bestImprovement, improvement);
      improvementCount++;
    }
  }

  // Adapt search parameters depending on the phase and success
  if (epochNow > 1)
  {
    // Search success rate (prevention of zero divide)
    double successRate = (double)improvementCount / MathMax (1, 2 * popSize);

    // Adaptation of the alpha parameter
    for (int c = 0; c < coords; c++)
    {
      double oldAlpha = alpha [c];

      if (epochNow <= S1)
      {
        // In the global search phase
        if (successRate < 0.1)
        {
          // Very few improvements - increasing the search scope
          alpha [c] *= t3;
        }
        else
          if (successRate < 0.3)
          {
            // Few improvements - slightly expanding the scope
            alpha [c] *= 1.2;
          }
          else
            if (successRate > 0.7)
            {
              // Many improvements - narrowing the scope
              alpha [c] *= 0.9;
            }
      }
      else
      {
        // In the local search phase
        if (successRate < 0.05)
        {
          // Very few improvements - increasing the search scope
          alpha [c] *= t3;
        }
        else
          if (successRate > 0.2)
          {
            // Enough improvements - narrowing the scope
            alpha [c] *= 0.8;
          }
      }

      // Limits on alpha range with safe bounds verification
      if (c < ArraySize (rangeMax) && c < ArraySize (rangeMin))
      {
        double maxAlpha = 0.2 * (rangeMax [c] - rangeMin [c]);
        double minAlpha = 0.0001 * (rangeMax [c] - rangeMin [c]);

        if (alpha [c] > maxAlpha)
        {
          alpha [c] = maxAlpha;
        }
        else
          if (alpha [c] < minAlpha)
          {
            alpha [c] = minAlpha;
          }
      }
    }
  }
}
//——————————————————————————————————————————————————————————————————————————————

Now that we have examined all the methods, we can move on directly to testing the algorithm.


Test results

The external parameters were slightly optimized, while the internal ones remained unchanged, exactly as presented in the methods above. The results speak for themselves:

COA(CHAOS)|Chaos Optimization Algorithm|50.0|10.0|40.0|2.0|1.2|0.0001|
=============================
5 Hilly's; Func runs: 10000; result: 0.6083668756744218
25 Hilly's; Func runs: 10000; result: 0.4079413401686229
500 Hilly's; Func runs: 10000; result: 0.2678056430575926
=============================
5 Forest's; Func runs: 10000; result: 0.668343763134901
25 Forest's; Func runs: 10000; result: 0.38894787694645416
500 Forest's; Func runs: 10000; result: 0.2198998763835145
=============================
5 Megacity's; Func runs: 10000; result: 0.32307692307692304
25 Megacity's; Func runs: 10000; result: 0.1987692307692308
500 Megacity's; Func runs: 10000; result: 0.10598461538461638
=============================
All score: 3.18914 (35.43%)

Note the visualization of the algorithm's operation: the combination of diverse search methods produces an unusual visual effect.

COA(CHAOS) on the Hilly test function

COA(CHAOS) on the Forest test function

COA(CHAOS) on the Megacity test function

Based on the test results, the COA(CHAOS) algorithm is presented in our rating table.

| # | AO | Description | Hilly 10p (5F) | Hilly 50p (25F) | Hilly 1000p (500F) | Hilly Final | Forest 10p (5F) | Forest 50p (25F) | Forest 1000p (500F) | Forest Final | Megacity (discrete) 10p (5F) | Megacity 50p (25F) | Megacity 1000p (500F) | Megacity Final | Final Result | % of MAX |
|---|----|-------------|----------------|-----------------|--------------------|-------------|-----------------|------------------|---------------------|--------------|------------------------------|--------------------|-----------------------|----------------|--------------|----------|
| 1 | ANS | across neighbourhood search | 0.94948 | 0.84776 | 0.43857 | 2.23581 | 1.00000 | 0.92334 | 0.39988 | 2.32323 | 0.70923 | 0.63477 | 0.23091 | 1.57491 | 6.134 | 68.15 |
| 2 | CLA | code lock algorithm (joo) | 0.95345 | 0.87107 | 0.37590 | 2.20042 | 0.98942 | 0.91709 | 0.31642 | 2.22294 | 0.79692 | 0.69385 | 0.19303 | 1.68380 | 6.107 | 67.86 |
| 3 | AMOm | animal migration optimization M | 0.90358 | 0.84317 | 0.46284 | 2.20959 | 0.99001 | 0.92436 | 0.46598 | 2.38034 | 0.56769 | 0.59132 | 0.23773 | 1.39675 | 5.987 | 66.52 |
| 4 | (P+O)ES | (P+O) evolution strategies | 0.92256 | 0.88101 | 0.40021 | 2.20379 | 0.97750 | 0.87490 | 0.31945 | 2.17185 | 0.67385 | 0.62985 | 0.18634 | 1.49003 | 5.866 | 65.17 |
| 5 | CTA | comet tail algorithm (joo) | 0.95346 | 0.86319 | 0.27770 | 2.09435 | 0.99794 | 0.85740 | 0.33949 | 2.19484 | 0.88769 | 0.56431 | 0.10512 | 1.55712 | 5.846 | 64.96 |
| 6 | TETA | time evolution travel algorithm (joo) | 0.91362 | 0.82349 | 0.31990 | 2.05701 | 0.97096 | 0.89532 | 0.29324 | 2.15952 | 0.73462 | 0.68569 | 0.16021 | 1.58052 | 5.797 | 64.41 |
| 7 | SDSm | stochastic diffusion search M | 0.93066 | 0.85445 | 0.39476 | 2.17988 | 0.99983 | 0.89244 | 0.19619 | 2.08846 | 0.72333 | 0.61100 | 0.10670 | 1.44103 | 5.709 | 63.44 |
| 8 | BOAm | billiards optimization algorithm M | 0.95757 | 0.82599 | 0.25235 | 2.03590 | 1.00000 | 0.90036 | 0.30502 | 2.20538 | 0.73538 | 0.52523 | 0.09563 | 1.35625 | 5.598 | 62.19 |
| 9 | AAm | archery algorithm M | 0.91744 | 0.70876 | 0.42160 | 2.04780 | 0.92527 | 0.75802 | 0.35328 | 2.03657 | 0.67385 | 0.55200 | 0.23738 | 1.46323 | 5.548 | 61.64 |
| 10 | ESG | evolution of social groups (joo) | 0.99906 | 0.79654 | 0.35056 | 2.14616 | 1.00000 | 0.82863 | 0.13102 | 1.95965 | 0.82333 | 0.55300 | 0.04725 | 1.42358 | 5.529 | 61.44 |
| 11 | SIA | simulated isotropic annealing (joo) | 0.95784 | 0.84264 | 0.41465 | 2.21513 | 0.98239 | 0.79586 | 0.20507 | 1.98332 | 0.68667 | 0.49300 | 0.09053 | 1.27020 | 5.469 | 60.76 |
| 12 | ACS | artificial cooperative search | 0.75547 | 0.74744 | 0.30407 | 1.80698 | 1.00000 | 0.88861 | 0.22413 | 2.11274 | 0.69077 | 0.48185 | 0.13322 | 1.30583 | 5.226 | 58.06 |
| 13 | DA | dialectical algorithm | 0.86183 | 0.70033 | 0.33724 | 1.89940 | 0.98163 | 0.72772 | 0.28718 | 1.99653 | 0.70308 | 0.45292 | 0.16367 | 1.31967 | 5.216 | 57.95 |
| 14 | BHAm | black hole algorithm M | 0.75236 | 0.76675 | 0.34583 | 1.86493 | 0.93593 | 0.80152 | 0.27177 | 2.00923 | 0.65077 | 0.51646 | 0.15472 | 1.32195 | 5.196 | 57.73 |
| 15 | ASO | anarchy society optimization | 0.84872 | 0.74646 | 0.31465 | 1.90983 | 0.96148 | 0.79150 | 0.23803 | 1.99101 | 0.57077 | 0.54062 | 0.16614 | 1.27752 | 5.178 | 57.54 |
| 16 | RFO | royal flush optimization (joo) | 0.83361 | 0.73742 | 0.34629 | 1.91733 | 0.89424 | 0.73824 | 0.24098 | 1.87346 | 0.63154 | 0.50292 | 0.16421 | 1.29867 | 5.089 | 56.55 |
| 17 | AOSm | atomic orbital search M | 0.80232 | 0.70449 | 0.31021 | 1.81702 | 0.85660 | 0.69451 | 0.21996 | 1.77107 | 0.74615 | 0.52862 | 0.14358 | 1.41835 | 5.006 | 55.63 |
| 18 | TSEA | turtle shell evolution algorithm (joo) | 0.96798 | 0.64480 | 0.29672 | 1.90949 | 0.99449 | 0.61981 | 0.22708 | 1.84139 | 0.69077 | 0.42646 | 0.13598 | 1.25322 | 5.004 | 55.60 |
| 19 | DE | differential evolution | 0.95044 | 0.61674 | 0.30308 | 1.87026 | 0.95317 | 0.78896 | 0.16652 | 1.90865 | 0.78667 | 0.36033 | 0.02953 | 1.17653 | 4.955 | 55.06 |
| 20 | SRA | successful restaurateur algorithm (joo) | 0.96883 | 0.63455 | 0.29217 | 1.89555 | 0.94637 | 0.55506 | 0.19124 | 1.69267 | 0.74923 | 0.44031 | 0.12526 | 1.31480 | 4.903 | 54.48 |
| 21 | CRO | chemical reaction optimization | 0.94629 | 0.66112 | 0.29853 | 1.90593 | 0.87906 | 0.58422 | 0.21146 | 1.67473 | 0.75846 | 0.42646 | 0.12686 | 1.31178 | 4.892 | 54.36 |
| 22 | BIO | blood inheritance optimization (joo) | 0.81568 | 0.65336 | 0.30877 | 1.77781 | 0.89937 | 0.65319 | 0.21760 | 1.77016 | 0.67846 | 0.47631 | 0.13902 | 1.29378 | 4.842 | 53.80 |
| 23 | BSA | bird swarm algorithm | 0.89306 | 0.64900 | 0.26250 | 1.80455 | 0.92420 | 0.71121 | 0.24939 | 1.88479 | 0.69385 | 0.32615 | 0.10012 | 1.12012 | 4.809 | 53.44 |
| 24 | HS | harmony search | 0.86509 | 0.68782 | 0.32527 | 1.87818 | 0.99999 | 0.68002 | 0.09590 | 1.77592 | 0.62000 | 0.42267 | 0.05458 | 1.09725 | 4.751 | 52.79 |
| 25 | SSG | saplings sowing and growing | 0.77839 | 0.64925 | 0.39543 | 1.82308 | 0.85973 | 0.62467 | 0.17429 | 1.65869 | 0.64667 | 0.44133 | 0.10598 | 1.19398 | 4.676 | 51.95 |
| 26 | BCOm | bacterial chemotaxis optimization M | 0.75953 | 0.62268 | 0.31483 | 1.69704 | 0.89378 | 0.61339 | 0.22542 | 1.73259 | 0.65385 | 0.42092 | 0.14435 | 1.21912 | 4.649 | 51.65 |
| 27 | ABO | african buffalo optimization | 0.83337 | 0.62247 | 0.29964 | 1.75548 | 0.92170 | 0.58618 | 0.19723 | 1.70511 | 0.61000 | 0.43154 | 0.13225 | 1.17378 | 4.634 | 51.49 |
| 28 | (PO)ES | (PO) evolution strategies | 0.79025 | 0.62647 | 0.42935 | 1.84606 | 0.87616 | 0.60943 | 0.19591 | 1.68151 | 0.59000 | 0.37933 | 0.11322 | 1.08255 | 4.610 | 51.22 |
| 29 | TSm | tabu search M | 0.87795 | 0.61431 | 0.29104 | 1.78330 | 0.92885 | 0.51844 | 0.19054 | 1.63783 | 0.61077 | 0.38215 | 0.12157 | 1.11449 | 4.536 | 50.40 |
| 30 | BSO | brain storm optimization | 0.93736 | 0.57616 | 0.29688 | 1.81041 | 0.93131 | 0.55866 | 0.23537 | 1.72534 | 0.55231 | 0.29077 | 0.11914 | 0.96222 | 4.498 | 49.98 |
| 31 | WOAm | whale optimization algorithm M | 0.84521 | 0.56298 | 0.26263 | 1.67081 | 0.93100 | 0.52278 | 0.16365 | 1.61743 | 0.66308 | 0.41138 | 0.11357 | 1.18803 | 4.476 | 49.74 |
| 32 | AEFA | artificial electric field algorithm | 0.87700 | 0.61753 | 0.25235 | 1.74688 | 0.92729 | 0.72698 | 0.18064 | 1.83490 | 0.66615 | 0.11631 | 0.09508 | 0.87754 | 4.459 | 49.55 |
| 33 | AEO | artificial ecosystem-based optimization algorithm | 0.91380 | 0.46713 | 0.26470 | 1.64563 | 0.90223 | 0.43705 | 0.21400 | 1.55327 | 0.66154 | 0.30800 | 0.28563 | 1.25517 | 4.454 | 49.49 |
| 34 | ACOm | ant colony optimization M | 0.88190 | 0.66127 | 0.30377 | 1.84693 | 0.85873 | 0.58680 | 0.15051 | 1.59604 | 0.59667 | 0.37333 | 0.02472 | 0.99472 | 4.438 | 49.31 |
| 35 | BFO-GA | bacterial foraging optimization - ga | 0.89150 | 0.55111 | 0.31529 | 1.75790 | 0.96982 | 0.39612 | 0.06305 | 1.42899 | 0.72667 | 0.27500 | 0.03525 | 1.03692 | 4.224 | 46.93 |
| 36 | SOA | simple optimization algorithm | 0.91520 | 0.46976 | 0.27089 | 1.65585 | 0.89675 | 0.37401 | 0.16984 | 1.44060 | 0.69538 | 0.28031 | 0.10852 | 1.08422 | 4.181 | 46.45 |
| 37 | ABHA | artificial bee hive algorithm | 0.84131 | 0.54227 | 0.26304 | 1.64663 | 0.87858 | 0.47779 | 0.17181 | 1.52818 | 0.50923 | 0.33877 | 0.10397 | 0.95197 | 4.127 | 45.85 |
| 38 | ACMO | atmospheric cloud model optimization | 0.90321 | 0.48546 | 0.30403 | 1.69270 | 0.80268 | 0.37857 | 0.19178 | 1.37303 | 0.62308 | 0.24400 | 0.10795 | 0.97503 | 4.041 | 44.90 |
| 39 | ADAMm | adaptive moment estimation M | 0.88635 | 0.44766 | 0.26613 | 1.60014 | 0.84497 | 0.38493 | 0.16889 | 1.39880 | 0.66154 | 0.27046 | 0.10594 | 1.03794 | 4.037 | 44.85 |
| 40 | CGO | chaos game optimization | 0.57256 | 0.37158 | 0.32018 | 1.26432 | 0.61176 | 0.61931 | 0.62161 | 1.85267 | 0.37538 | 0.21923 | 0.19028 | 0.78490 | 3.902 | 43.35 |
| 41 | ATAm | artificial tribe algorithm M | 0.71771 | 0.55304 | 0.25235 | 1.52310 | 0.82491 | 0.55904 | 0.20473 | 1.58867 | 0.44000 | 0.18615 | 0.09411 | 0.72026 | 3.832 | 42.58 |
| 42 | CROm | coral reefs optimization M | 0.78512 | 0.46032 | 0.25958 | 1.50502 | 0.86688 | 0.35297 | 0.16267 | 1.38252 | 0.63231 | 0.26738 | 0.10734 | 1.00703 | 3.895 | 43.27 |
| 43 | CFO | central force optimization | 0.60961 | 0.54958 | 0.27831 | 1.43750 | 0.63418 | 0.46833 | 0.22541 | 1.32792 | 0.57231 | 0.23477 | 0.09586 | 0.90294 | 3.668 | 40.76 |
| 44 | ASHA | artificial showering algorithm | 0.89686 | 0.40433 | 0.25617 | 1.55737 | 0.80360 | 0.35526 | 0.19160 | 1.35046 | 0.47692 | 0.18123 | 0.09774 | 0.75589 | 3.664 | 40.71 |
| 45 | ASBO | adaptive social behavior optimization | 0.76331 | 0.49253 | 0.32619 | 1.58202 | 0.79546 | 0.40035 | 0.26097 | 1.45677 | 0.26462 | 0.17169 | 0.18200 | 0.61831 | 3.657 | 40.63 |
|   | CHAOS | chaos optimization algorithm | 0.60837 | 0.40794 | 0.26781 | 1.28412 | 0.66834 | 0.38895 | 0.21990 | 1.27719 | 0.32308 | 0.19877 | 0.10598 | 0.62783 | 3.189 | 35.43 |
|   | RW | random walk | 0.48754 | 0.32159 | 0.25781 | 1.06694 | 0.37554 | 0.21944 | 0.15877 | 0.75375 | 0.27969 | 0.14917 | 0.09847 | 0.52734 | 2.348 | 26.09 |


Summary

The 35% test result shows the good potential of the COA(CHAOS) algorithm, especially considering its complexity and the number of configurable external parameters, as well as the internal parameters that were not optimized. The algorithm simply has not yet demonstrated its maximum performance.

The most interesting and promising aspects of the algorithm are the following:

  1. The use of three different chaotic maps (logistic, sinusoidal, and tent) expands the algorithm's search capabilities.
  2. The addition of inertia helps the algorithm maintain momentum while overcoming local traps.
  3. Dynamic parameter changes, depending on the success and phase of optimization, increase its flexibility.
  4. Tracking the history of solutions and resetting "stuck" agents keeps the algorithm from wasting computing resources on unpromising areas.

Furthermore, the use of Latin hypercube sampling and different approaches to initial agent placement ensures good coverage of the search space from the very beginning. To further improve the algorithm, it would be worthwhile to conduct a more thorough optimization of its internal parameters.

This algorithm provides a rich source of ideas for improving other optimization methods; in particular, its adaptive mechanisms and stagnation-handling techniques could be successfully transferred to other machine learning and optimization algorithms for financial markets.

Tab

Figure 1. Color gradation of algorithms according to the corresponding tests

chart

Figure 2. Histogram of algorithm testing results (scale from 0 to 100, the higher the better, where 100 is the maximum possible theoretical result; the archive contains a script for calculating the rating table)

COA(CHAOS) pros and cons:

Pros:

  1. A variety of search methods.
  2. Promising development.
  3. Stable results.

Cons:

  1. A large number of external parameters.
  2. A large number of internal non-optimizable parameters.

The article is accompanied by an archive with the current versions of the algorithm codes. The author of the article is not responsible for the absolute accuracy in the description of canonical algorithms. Changes have been made to many of them to improve search capabilities. The conclusions and judgments presented in the articles are based on the results of the experiments.


Programs used in the article

| # | Name | Type | Description |
|---|------|------|-------------|
| 1 | #C_AO.mqh | Include | Parent class of population optimization algorithms |
| 2 | #C_AO_enum.mqh | Include | Enumeration of population optimization algorithms |
| 3 | TestFunctions.mqh | Include | Library of test functions |
| 4 | TestStandFunctions.mqh | Include | Test stand function library |
| 5 | Utilities.mqh | Include | Library of auxiliary functions |
| 6 | CalculationTestResults.mqh | Include | Script for calculating results in the comparison table |
| 7 | Testing AOs.mq5 | Script | The unified test stand for all population optimization algorithms |
| 8 | Simple use of population optimization algorithms.mq5 | Script | A simple example of using population optimization algorithms without visualization |
| 9 | Test_AO_COA_chaos.mq5 | Script | COA(CHAOS) test stand |

Translated from Russian by MetaQuotes Ltd.
Original article: https://www.mql5.com/ru/articles/17874

Attached files |
COAcCHAOSd.zip (203.55 KB)