Discussing the article: "Population optimization algorithms: Micro Artificial immune system (Micro-AIS)" - page 4

Forum on trading, automated trading systems and testing trading strategies
Discussion of the article "Population optimization algorithms: Micro Artificial immune system (Micro-AIS)"
fxsaber, 2024.01.21 01:38 AM
In IWO, the best coordinates do not match the returned value.
Pulled it in.
As far as I understood, custom optimization runs only on the terminal chart, on a single core, whereas I was talking about multithreaded optimization in the strategy tester (I described this for the particle swarm algorithm in the article; it should be possible by analogy for most other algorithms, since there is usually a way to divide the work into groups of agents). But the tester hangs on the most primitive example (I posted the test above), which nipped the idea in the bud.
I have assembled and compiled the project. Technically everything works, except for the PSO algorithm taken here from Stanislav: it threw errors, so it is not in the logs I give below.
While looking through the source code, a question came up about the number of FF runs; the default is 1000. This is very low — the results would be little better than random. The built-in tester did 19968 FF runs, so I set the source to 20000.
I did 5 optimization runs in the built-in tester; the results below show the best of them. In the articles, 10 optimization runs are used and the average result is displayed.
Next, I used the Megacity FF, the discrete function closest to real discrete trading-strategy problems (Hilly and Forest are smooth and better suited for evaluating performance on machine-learning tasks).
Settings in the tester:
Optimisation results after 5 runs:
Results of custom run #1:
Results of custom run #2:
Results of custom run #3:
Results with 100% convergence are highlighted in yellow.
At such a coarse step, as we can see, even the RND algorithm can converge. Only multiple runs (as done in the articles) and averaging of the results can make them reliable — and as we can see, not all of the highlighted algorithms converged repeatedly.
Conclusions:
1. Algorithms fully show themselves only over multiple tests; in a single test the results can be good by chance.
2. The algorithms' capabilities show up with many variables and small steps; otherwise the results tend toward random (RND, which showed 100% convergence in a single test, is an example).
3. The number of FF runs should be large, on the order of 10000 (for example, with a population of 50 the number of epochs is 10000/50 = 200, while with 1000 runs there are only 20 epochs). The fewer the FF runs, the more the results tend toward random, for obvious reasons.
The results of the built-in tester with cursor highlighting: the first run gave 0.666 and the fourth a maximum of 0.97. In the list of results, the best is shown as 0.97.
everything works, except for the PSO algorithm taken here from Stanislav: it threw errors
I'd like to get to the bottom of this.
These are the errors: [screenshots attached in the original posts]
Get TypeToBytes.mqh from here.