Discussion of article "Population optimization algorithms: Saplings Sowing and Growing up (SSG)" - page 12

 
Andrey Dik #:

Yes, a very interesting organism.))

But the slug only uses 2 dimensions, and even the simplest AOs can cope with 2 dimensions. How it will behave in 1000 dimensions is a big question, because the complexity of the problem grows non-linearly with the number of dimensions.

The ant algorithm is the same, but they generalise it to multidimensional cases.

I asked ChatGPT here, no idea whether it's true or not :)

The Slime Mould Algorithm is a heuristic optimisation method based on the behaviour of slime moulds. This algorithm was developed by Chiaro D'Ambrosio and belongs to the group of "natural" optimisation algorithms.


The essence of the algorithm is to place a "slug" at a specific point and observe how it spreads around that point in search of the most food. As the algorithm runs, the slug leaves behind a trail that gradually evaporates. The more trail left in an area, the more likely another "slug" is to adopt that trail as a path towards a higher concentration of food.


In the context of optimisation, this slug behaviour is used to find optimal solutions in parameter spaces. For example, we can use this algorithm to find the minimum value of a function within given parameter bounds.


The advantages of this algorithm are its simplicity and its ability to converge quickly to optimal solutions in multidimensional spaces. However, it can also have problems converging to the optimal solution in more complex and nonlinear problems.

 
Maxim Dmitrievsky #:
The ant algorithm is the same, but it's generalised to multidimensional cases.

The ant's convergence decreases with increasing dimensionality, and it decreases rapidly. All algorithms behave this way in general; it depends on the search strategy, and we don't know the slug's strategy))).

Convergence dropping more slowly than complexity grows with increasing dimensionality is what I consider AO scalability. Scalability depends on the type of function; different algorithms scale differently on different functions.

 
Maxim Dmitrievsky #:

I asked ChatGPT here, no idea whether it's true or not :)

The Slime Mould Algorithm is a heuristic optimisation method based on the behaviour of slime moulds. This algorithm was developed by Chiaro D'Ambrosio and belongs to the group of "natural" optimisation algorithms.

The essence of the algorithm is to place a "slug" at a specific point and observe how it spreads around that point in search of the most food. As the algorithm runs, the slug leaves behind a trail that gradually evaporates. The more trail left in an area, the more likely another "slug" is to adopt that trail as a path towards a higher concentration of food.

In the context of optimisation, this slug behaviour is used to find optimal solutions in parameter spaces. For example, we can use this algorithm to find the minimum value of a function within given parameter bounds.

The advantages of this algorithm are its simplicity and its ability to converge quickly to optimal solutions in multidimensional spaces. However, it can also have problems converging to the optimal solution in more complex and nonlinear problems.

Wow, how interesting. I'll look for this algo; of course it will be interesting to take it apart. And put it back together)))))))

 
As soon as I get to it, I will make a comparative graph of the convergence of all known algorithms against problem dimensionality, on three types of problems. This task looks very time-consuming to me.
 
Andrey Dik #:

Wow, how interesting. I'll look for this algo; of course it will be interesting to take it apart. And put it back together))))))

Here is an example implementation of the slug algorithm in the MQL programming language:


```
double slimeMouldAlgorithm(double (*f)(double[]), double x[], int n,
                           double LB[], double UB[], int maxIter, int N)
{
   const double k = 0.1;          // trail evaporation coefficient
   const double stepSize = 0.001; // size of one "slug" movement step

   double trail[N];               // trail (fitness) of each particle
   double bestTrail[N];           // best trail found by each particle
   double bestFitness = DBL_MAX;  // best fitness value found so far
   double bestX[n];               // parameters corresponding to the best trail

   // Place the "slugs" randomly within the given parameter bounds.
   for (int i = 0; i < N; i++)
   {
      for (int j = 0; j < n; j++)
         x[j] = LB[j] + (UB[j] - LB[j]) * (double)rand() / RAND_MAX;

      trail[i] = f(x); // fitness of this starting point

      if (trail[i] < bestFitness) // better than the current best - remember it
      {
         bestFitness = trail[i];
         memcpy(bestX, x, sizeof(bestX));             // store the best parameters
         memcpy(bestTrail, trail, sizeof(bestTrail)); // store the best trail values
      }
   }

   for (int iter = 0; iter < maxIter; iter++) // main iteration loop
   {
      for (int i = 0; i < N; i++)
      {
         double r[n], d[n];
         for (int j = 0; j < n; j++)
         {
            r[j] = (double)rand() / RAND_MAX - 0.5;   // random increment vector
            d[j] = r[j] * stepSize * (UB[j] - LB[j]); // step width
            x[j] += d[j];                             // move the parameter
            if (x[j] < LB[j])      // clamp to the lower bound
               x[j] = LB[j];
            else if (x[j] > UB[j]) // clamp to the upper bound
               x[j] = UB[j];
         }

         double newTrail = f(x);   // fitness of the new parameters
         if (newTrail <= trail[i]) // improved or unchanged - accept the move
         {
            memcpy(bestX, x, sizeof(bestX));                    // store the new parameters
            memcpy(&bestTrail[i], &newTrail, sizeof(newTrail)); // store the new fitness value
            trail[i] = newTrail;
            if (bestTrail[i] < bestFitness) // new global best?
               bestFitness = bestTrail[i];
         }
         else // worse - revert to the previous parameters
         {
            memcpy(x, bestX, sizeof(bestX));
            trail[i] = bestTrail[i];
         }
      }

      // Evaporate the trails to reduce the chance of getting stuck
      // in a local optimum.
      for (int i = 0; i < N; i++)
         trail[i] *= (1.0 - k);
   }

   return bestFitness;
}
```


This code uses random values of the selected parameters as a starting point, then processes each particle sequentially. For each particle it generates a random increment vector, calculates the step size and adds it to each parameter. It then calculates the function value for the new parameters and keeps them if the value decreases or stays the same. At the end of each cycle it evaporates the slug trail to avoid getting stuck at a local optimum.


You'll probably have to tweak it 😀 that's what GPT produced.

 
Correction:

You are correct, there is no `memcpy` function in MQL. To copy values from one variable to another, you can use the assignment operator `=`. Here is an example of the modified code without using the `memcpy` function:


```
double slimeMouldAlgorithm(double (*f)(double[]), double x[], int n,
                           double LB[], double UB[], int maxIter, int N)
{
   const double k = 0.1;          // trail evaporation coefficient
   const double stepSize = 0.001; // size of one "slug" movement step

   double trail[N];               // trail (fitness) of each particle
   double bestTrail[N];           // best trail found by each particle
   double bestFitness = DBL_MAX;  // best fitness value found so far
   double bestX[n];               // parameters corresponding to the best trail

   // Place the "slugs" randomly within the given parameter bounds.
   for (int i = 0; i < N; i++)
   {
      for (int j = 0; j < n; j++)
         x[j] = LB[j] + (UB[j] - LB[j]) * (double)rand() / RAND_MAX;

      trail[i] = f(x); // fitness of this starting point

      if (trail[i] < bestFitness) // better than the current best - remember it
      {
         bestFitness = trail[i];
         for (int j = 0; j < n; j++)
            bestX[j] = x[j];      // store the best parameters
         bestTrail[i] = trail[i]; // store the best trail value
      }
   }

   for (int iter = 0; iter < maxIter; iter++) // main iteration loop
   {
      for (int i = 0; i < N; i++)
      {
         double r[n], d[n];
         for (int j = 0; j < n; j++)
         {
            r[j] = (double)rand() / RAND_MAX - 0.5;   // random increment vector
            d[j] = r[j] * stepSize * (UB[j] - LB[j]); // step width
            x[j] += d[j];                             // move the parameter
            if (x[j] < LB[j])      // clamp to the lower bound
               x[j] = LB[j];
            else if (x[j] > UB[j]) // clamp to the upper bound
               x[j] = UB[j];
         }

         double newTrail = f(x);   // fitness of the new parameters
         if (newTrail <= trail[i]) // improved or unchanged - accept the move
         {
            for (int j = 0; j < n; j++)
               bestX[j] = x[j];      // store the new parameters
            bestTrail[i] = newTrail; // store the new fitness value
            trail[i] = newTrail;
            if (bestTrail[i] < bestFitness) // new global best?
               bestFitness = bestTrail[i];
         }
         else // worse - revert to the previous parameters
         {
            for (int j = 0; j < n; j++)
               x[j] = bestX[j];
            trail[i] = bestTrail[i];
         }
      }

      // Evaporate the trails to reduce the chance of getting stuck
      // in a local optimum.
      for (int i = 0; i < N; i++)
         trail[i] *= (1.0 - k);
   }

   return bestFitness;
}
```


As you can see, we simply use loops to copy the array values element by element.

 


This machine knows everything...))))

In the general case you can probably trust the machine to pick an algo for a task, but I'd rather look in my rating table)))

 
Maxim Dmitrievsky #:

```

double slimeMouldAlgorithm(double (*f)(double[]), double x[], int n, double LB[], double UB[], int maxIter, int N)
```

It will need to be broken down according to the scheme used for all the algos in the articles; in this form it is useless.

 
Andrey Dik #:

It will need to be broken down according to the scheme used for all the algos in the articles; in this form it is useless.

Try it via chat, I wonder if it can handle such a task :) I've run out of tokens, so I only dabble occasionally. Feed it an example article and tell it to keep the style. By the way, such a bot in Telegram would be a great startup for pennies.
 
Maxim Dmitrievsky #:
Try it via chat, I wonder if it can handle such a task :) I've run out of tokens, so I only dabble occasionally. Feed it an example article and tell it to keep the style. By the way, such a bot in Telegram would be a great startup for pennies.

Unfortunately, or maybe fortunately, the bot does not generate new information, and the information that is publicly available is often distorted. I think this is due to its information-interpolation mechanisms. It lies, makes up non-existent abbreviations of algorithms, and even invents authors' names and dates of an algorithm's appearance on the fly))). You should be very careful with such information.

As an assistant for text editing, stylistic corrections and reference material when writing articles - yes, an indispensable helper.