Discussion of article "Population optimization algorithms: Saplings Sowing and Growing up (SSG)"
What is the best algorithm for finding local extrema?
Roughly speaking, we need to produce a list of parameter sets that land as precisely as possible on local extrema (peaks).
To use a hedgehog as an analogy: show the coordinates of the tips of its needles.
Tell me, are there any optimisation algorithms with the following features:
selection of only a subset of parameters when their total number is large;
the selected parameters are optimised only with a genetic algorithm, or with an algorithm that can replace a genetic one;
the way active parameters are selected for the current iteration does not matter; changing the ranges and step of optimisation is allowed.
For any global search task, the algorithms at the top of the table will work best. You can choose the most suitable one individually if you know, for example, that the function is smooth or discrete, or pick one according to the number of optimised parameters in the problem.
Usually the problem is not set up so that local extrema must be found, because in most practical problems the number of local extrema is unknown (if it were known, it would be easier to use analytical methods), while the number of global extrema is known: one (if several globals share the same value, that is equivalent to there being one). That is why the problem is usually stated so that the function is as monotonic as possible; error minimisation is an example.
I don't know of any algorithms for finding local extrema in the general case, and it is rarely worthwhile to write specialised algorithms for a particular problem. In such cases, I would consider how one might represent the FF (fitness function) to solve the problem. Different approaches are possible, such as using an AO (optimisation algorithm) as an add-on to clustering. Another approach is to split the FF into hypothetical "layers", from the global minimum to the global maximum (which must be found beforehand), and then examine each layer sequentially, i.e. you need a batch task manager for the AO; a sketch of the layers idea is given below.
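Just to illustrate the "layers" idea (a minimal C++ sketch; the toy FF, the band count and the penalty wrapper are my assumptions, not code from the articles):

```cpp
#include <cmath>
#include <cstdio>

double FF(double x) { return std::sin(3.0 * x) + 0.1 * x; }   // toy function

// Wrapped FF: full value inside the current band, heavy penalty outside it.
double Penalised(double x, double lo, double hi)
{
    double f = FF(x);
    return (f >= lo && f <= hi) ? f : -1e9;
}

int main()
{
    const double fMin = -1.2, fMax = 1.6;   // assumed found beforehand
    const int bands = 4;
    for (int b = 0; b < bands; ++b)
    {
        double lo = fMin + (fMax - fMin) * b / bands;
        double hi = fMin + (fMax - fMin) * (b + 1) / bands;
        printf("band %d: examine FF values in [%.2f, %.2f]\n", b, lo, hi);
        // ...run any AO on Penalised(x, lo, hi) here, one batch task per band...
    }
    return 0;
}
```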
In short, I will disappoint you: there are no algorithms for finding local extrema in the general case. It is easier to modify the FF than to build a special algorithm.
1. Well, an AO does not care how many and which parameters are submitted to it for optimisation, so you can submit all of the parameters to the AO or only some of them.
2. You can apply any algorithm individually, jointly, sequentially or in combination. I tried to make the algorithms in the articles uniform.
3. Any of the presented algorithms can, in principle, be adjusted directly during the optimisation process. You only need to correct the initialisation so that the accumulated population is not reset. It is possible, for example, to reduce (or increase) the step of the optimised parameters in proportion to the epochs passed, as in the sketch below.
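A minimal C++ sketch of that step schedule (the function name, bounds and the linear decay are my assumptions, not code from the articles):

```cpp
#include <cstdio>

// Shrink (or grow) a parameter's optimisation step linearly with the
// number of epochs passed; stepStart and stepEnd are assumed bounds.
double StepForEpoch(double stepStart, double stepEnd, int epoch, int maxEpochs)
{
    double t = (double)epoch / (double)maxEpochs;   // progress in [0, 1]
    return stepStart + (stepEnd - stepStart) * t;   // linear interpolation
}

int main()
{
    for (int epoch = 0; epoch <= 10; ++epoch)
        printf("epoch %2d  step %.4f\n", epoch, StepForEpoch(0.1, 0.001, epoch, 10));
    return 0;
}
```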
Thank you. I indirectly find local extrema through forced interruption of the optimisation when a large number of cores are involved. Roughly speaking, with 20 agents in the Tester, I interrupt the optimisation after 2000 passes.
Well, it's absolutely anti-scientific! ))))) Clever, though.
I was thinking that there are algorithms with increased "clumping", where the population tends to split into groups around local extrema, such as IWO, COA, ABC and BFO. The populations of these algorithms can be analysed for clumps of agents (a logical search unit in an AO is called an agent) by measuring the Euclidean distance between agents, i.e. searching for groups of agents with minimal distance between them; a sketch is given below.
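A hypothetical C++ sketch of such clump detection (the Agent structure, the eps threshold and the greedy grouping are my assumptions, not IWO/COA/ABC/BFO code):

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

struct Agent { std::vector<double> x; };        // one candidate solution

double Dist(const Agent &a, const Agent &b)     // Euclidean distance
{
    double s = 0.0;
    for (size_t i = 0; i < a.x.size(); ++i)
        s += (a.x[i] - b.x[i]) * (a.x[i] - b.x[i]);
    return std::sqrt(s);
}

// Greedy grouping: agents closer than eps share a clump label.
// Approximate on purpose: already-labelled groups are not merged later.
std::vector<int> Clumps(const std::vector<Agent> &pop, double eps)
{
    std::vector<int> label(pop.size(), -1);
    int next = 0;
    for (size_t i = 0; i < pop.size(); ++i)
    {
        if (label[i] < 0) label[i] = next++;
        for (size_t j = i + 1; j < pop.size(); ++j)
            if (label[j] < 0 && Dist(pop[i], pop[j]) < eps)
                label[j] = label[i];
    }
    return label;
}

int main()
{
    std::vector<Agent> pop = { {{0.0, 0.0}}, {{0.1, 0.1}}, {{5.0, 5.0}} };
    std::vector<int> label = Clumps(pop, 1.0);
    for (size_t i = 0; i < label.size(); ++i)
        printf("agent %zu -> clump %d\n", i, label[i]);   // two clumps expected
    return 0;
}
```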
You can also try, in such "rollback" algorithms (where an agent repeatedly makes search attempts in different directions from the same position) as COA, HS or MA, to add a counter of attempts and take slices of the population state every few iterations; the agents with the largest number of attempts will mark local extrema. MA and BFO have such a counter natively. A toy example follows.
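A toy C++ illustration of the attempt counter over population slices (eps, the slice data and the counter are my assumptions, not MA's or BFO's native counters):

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

int main()
{
    // One agent's position sampled every few iterations (toy 1-D data).
    std::vector<double> slices = { 0.90, 0.40, 0.41, 0.40, 0.41, 0.40 };
    const double eps = 0.05;   // tolerance for "the same position"

    int attempts = 0;
    for (size_t i = 1; i < slices.size(); ++i)
        if (std::fabs(slices[i] - slices[i - 1]) < eps)
            ++attempts;        // another attempt made from the same position

    // A large counter suggests the agent is circling a local extremum.
    printf("attempts from the same position: %d\n", attempts);
    return 0;
}
```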
In other words, I said that there are no exact methods for finding local extrema (a global search can be considered "exact" in this respect), but you can search approximately as I described above. For an exact solution you need differential information about the FF.
P.S. An interesting question for everyone interested in this topic: what is the difference between a local extremum and a global extremum (leaving aside the difference in their FF values)?
P.P.S. Once the first question is answered, many other questions disappear by themselves.
Unfortunately, I am not competent in this area at all, so I am interested in an approximate but ready-made solution.
If we take the optimisation result as a set of passes, then hints of local extrema become visible after applying clustering to the space of the computed passes. Then, within each cluster, we carry out a "narrow" optimisation; see the sketch below.
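A rough C++ sketch of the per-cluster narrowing (the Pass structure, the pre-assigned cluster labels and the toy data are my assumptions; the clustering step itself is omitted):

```cpp
#include <algorithm>
#include <cstdio>
#include <limits>
#include <vector>

struct Pass { std::vector<double> p; int cluster; };   // one tester pass

int main()
{
    // Toy passes already assigned to clusters by some clustering step.
    std::vector<Pass> passes = { {{1.0, 2.0}, 0}, {{1.2, 2.1}, 0}, {{5.0, 7.0}, 1} };
    const int clusters = 2, dim = 2;

    for (int c = 0; c < clusters; ++c)
        for (int d = 0; d < dim; ++d)
        {
            double lo = std::numeric_limits<double>::max();
            double hi = std::numeric_limits<double>::lowest();
            for (const Pass &ps : passes)
                if (ps.cluster == c)
                {
                    lo = std::min(lo, ps.p[d]);
                    hi = std::max(hi, ps.p[d]);
                }
            // Re-run the optimiser with this parameter confined to [lo, hi].
            printf("cluster %d, parameter %d: narrowed range [%.2f, %.2f]\n", c, d, lo, hi);
        }
    return 0;
}
```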
New article Population optimization algorithms: Saplings Sowing and Growing up (SSG) has been published:
The Saplings Sowing and Growing up (SSG) algorithm is inspired by one of the most resilient organisms on the planet, which demonstrates an outstanding capability for survival in a wide variety of conditions.
The algorithm is one of the few that lacks a clear description by its authors (only general provisions and ideas are provided). The operators presented by the authors are likewise not ready-made instructions for an algorithmic implementation. There are no clear instructions about child and parent trees and their interaction, and no requirements for the order in which the operators are executed; any user can change their order to obtain a better seedling.
In a broad sense, SSG is not an optimization algorithm; it is a general set of rules designed to complement other algorithms and improve the quality of optimization. In other words, SSG is an add-on for any evolutionary population algorithm, so I had room for imagination and the opportunity to experiment with a specific implementation of the optimization algorithm. I applied some of my own ideas and the experience gained while writing the previous algorithms to the work with SSG. The results of the experiments are presented for the reader's judgment below.
To start understanding the algorithm, think of a tree as an optimization agent. A tree is a solution to the optimization problem, where each branch is an optimized parameter. An abstract, artistic illustration of child and parent trees is provided in Figure 1. The tree trunk is the set of parameters to be optimized. Each branch is a separate optimized parameter, whose length is limited by the allowable range of values of the corresponding parameter. The direction of the branches does not matter and is shown in the figure only to highlight their difference. A minimal structure sketch follows.
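A minimal C++ sketch of the tree-as-agent idea from the paragraph above (the names Branch, Tree and Clamp are illustrative assumptions, not the article's actual MQL5 implementation):

```cpp
#include <algorithm>
#include <vector>

struct Branch            // one optimized parameter
{
    double value;        // current "length" of the branch
    double minVal;       // lower bound of the allowable range
    double maxVal;       // upper bound of the allowable range
};

struct Tree              // one agent: a complete candidate solution
{
    std::vector<Branch> trunk;   // the set of optimized parameters
    double fitness;              // FF value for this solution

    void Clamp()                 // keep every branch within its range
    {
        for (Branch &b : trunk)
            b.value = std::min(std::max(b.value, b.minVal), b.maxVal);
    }
};

int main()
{
    Tree t;
    t.trunk = { {1.5, 0.0, 1.0}, {-0.2, 0.0, 5.0} };  // values outside ranges
    t.Clamp();                                        // clipped to 1.0 and 0.0
    return 0;
}
```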
Author: Andrey Dik