Discussion of article "Population optimization algorithms: Saplings Sowing and Growing up (SSG)" - page 6
Clearly, with a competent approach, no one optimises by the balance criterion alone. They also try to account for the absence of drawdown spikes, a statistically significant number of trades, and so on.
I wrote a bit on the topic here.
Each pass should produce a similar picture of deals. There you can spot outlier trades and dead, "brick"-like trading systems.
To form the optimisation criterion, it is desirable to discard the deals that are outliers.
But that is really the topic of designing a convenient FF (fitness function). Even if the FF is completely free of sharp peaks and has a hilly landscape, those hills will not be found by a single completed pass. That's why I do it this way.
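A minimal sketch of the idea above - scoring a pass only after discarding outlier deals and penalising a statistically insignificant trade count. The trimming fraction, the significance threshold, and the function name are illustrative assumptions, not the author's exact criterion:

```python
def robust_fitness(deal_profits, trim_frac=0.05):
    """Score a tester pass after removing the most extreme deals.

    deal_profits: list of per-deal profits from one optimisation pass.
    trim_frac:    fraction of deals to cut from each tail (assumed value).
    """
    deals = sorted(deal_profits)
    k = int(len(deals) * trim_frac)
    trimmed = deals[k:len(deals) - k] if k > 0 else deals
    if not trimmed:
        return 0.0
    total = sum(trimmed)
    # Penalise small samples: a statistically insignificant number of
    # trades should not produce a high score (30 is an assumed threshold).
    significance = min(1.0, len(trimmed) / 30.0)
    return total * significance
```

In the real Tester this would live in a custom optimisation criterion; the sketch only shows the trimming-plus-significance shape of such an FF.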
Forum on trading, automated trading systems and testing trading strategies
Discussion of the article "Population Optimisation Algorithms: Saplings Sowing and Growing up (SSG) Algorithm"
fxsaber, 2023.03.22 00:32
I indirectly find local maxima through forced interruption of the optimisation when a large number of cores is involved. Roughly speaking, with 20 agents in the Tester, I interrupt the optimisation after 2000 passes.
If, every day for a year or two, you run a full optimisation of a TS with two parameters to get such a frame, and then assemble those frames into a video, you will get something like this:
It would be naive to assume that the surface will remain static.
So the question is:
What's the point?
Frankly speaking, this is not a specific wish, but a tool that must be included in a serious software for algo-trading - Tester.
But it is realistic to tune it yourself.
Forum on trading, automated trading systems and testing trading strategies.
Discussion of the article "Population Optimisation Algorithms: Saplings Sowing and Growing up (SSG) Algorithm"
fxsaber, 2023.03.23 19:51
If you have the coordinates of the area to be cut out, you can quite easily (even in the regular Tester) run the optimisation excluding that part of the space.
But I completely lack the competence to define the area around the found global maximum from the GA results.
GA algorithms differ quite a bit. I suppose that if a maximum is found in 1000 passes and the algorithm stops, then the last 100 passes are the points around the found maximum. We can then take these 100 points and immediately form the area to be thrown away in future optimisations.
In principle, if the same regular GA is run only once, then the last 100 entries in the opt file should be these points. However, I'm not sure this is the case with a multi-core approach. That's why some kind of clustering is needed. There are quite a lot of articles on this topic, you should study it.
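A hedged sketch of that idea: take the last N passes of a finished GA run as points near the found maximum and turn them into a bounding box to exclude from future optimisations. The box-shaped region and the padding factor are my illustrative choices; with multi-core results, proper clustering of the parameter vectors may be needed instead, as noted above:

```python
def exclusion_box(last_points, pad=0.1):
    """Bounding box around the last GA points, expanded by `pad` per side.

    last_points: list of parameter tuples from the final passes.
    Returns (lo, hi) corner lists defining the area to discard.
    """
    dims = len(last_points[0])
    lo = [min(p[d] for p in last_points) for d in range(dims)]
    hi = [max(p[d] for p in last_points) for d in range(dims)]
    span = [hi[d] - lo[d] for d in range(dims)]
    return ([lo[d] - pad * span[d] for d in range(dims)],
            [hi[d] + pad * span[d] for d in range(dims)])

def inside(point, box):
    """True if a parameter point falls in the excluded box."""
    lo, hi = box
    return all(lo[d] <= point[d] <= hi[d] for d in range(len(point)))
```

A future optimisation would then simply skip (or heavily penalise) any parameter set for which `inside()` is true.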
It would be naive to assume that the surface would remain static.
I need a relatively static point in this bubbling surface. I've found those.
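One way to look for such "relatively static" points, sketched under my own assumptions: evaluate the same parameter grid over several time windows (the frames of the video above) and keep the point whose worst-frame fitness is highest. The max-min criterion here is an illustrative choice, not the author's stated method:

```python
def most_stable_point(frames):
    """Pick the parameter point least affected by the shifting surface.

    frames: list of dicts {param_point: fitness}, one dict per time window.
    """
    # Only consider points evaluated in every frame.
    common = set(frames[0])
    for f in frames[1:]:
        common &= set(f)
    # Max-min: a point is "static" if it never collapses in any frame.
    return max(common, key=lambda p: min(f[p] for f in frames))
```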
And yet we shouldn't reduce the topic to the skill and meaning of cooking. You need a pot to cook in.
+++
Frankly speaking, this is not a specific wish, but a tool that must be included in a serious algo-trading software - Tester.
But you can really tune it yourself.
GA algorithms are quite different. I suppose that if a maximum is found for 1000 passes and the algorithm stops, then the last 100 passes are points around the found maximum. Then we just take these 100 points and immediately form a region to be thrown away in future optimisations.
In principle, if the same regular GA is run only once, then the last 100 entries in the opt file should be these points. However, I'm not sure this is the case with a multi-core approach. That's why some kind of clustering is needed. There are quite a few articles on this topic, need to research.
I suppose the last 100 records will differ strikingly between algorithms. From experience I can allow myself to assume so, which makes it reasonable to note that choosing an algorithm to fit the task makes sense, rather than just taking the best one from the table...
Therefore, I find the idea of creating such a rating table useful.
I'm all for a ranking - more objectivity! But for now I keep looking at it through the practical problems I'm facing.
In Tester I would add a tick "throw away areas of found maxima of previous optimisations".
Then run the first optimisation without this tick, and the rest with it enabled. Do 20 optimisations - get 20 peaks.
Then load them into OOS-check (20 single runs) and evaluate them, at the same time evaluating the whole TS/FF.
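The loop described above can be sketched as follows. Here `optimise` is a hypothetical stand-in for one Tester optimisation that respects an exclusion filter, and the cube-shaped neighbourhood with a fixed `radius` is my simplifying assumption for "areas of found maxima":

```python
def collect_peaks(optimise, n_peaks=20, radius=1.0):
    """Repeat optimisation, each time discarding neighbourhoods of
    previously found maxima, and collect the resulting peaks."""
    excluded = []  # centres of already-discarded neighbourhoods

    def allowed(point):
        # A point is allowed if it is outside every excluded cube.
        return all(max(abs(a - b) for a, b in zip(point, c)) > radius
                   for c in excluded)

    peaks = []
    for _ in range(n_peaks):
        peak = optimise(allowed)   # best point outside excluded areas
        if peak is None:           # search space exhausted
            break
        peaks.append(peak)
        excluded.append(peak)
    return peaks
```

The collected peaks would then each get a single OOS run, evaluating both the individual parameter sets and the TS/FF as a whole.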
I need a relatively static point in this bubbling surface. I've found these...
But still, I think, finding such points was probabilistic in nature.
My point is that a TS should have no input parameters requiring "optimisation". Such parameters turn the TS into a coin flip.
Even internal self-optimisation of parameters is self-deception as well.
In Tester, I would add a tick box "discard areas of found maxima of previous optimisations".
Independent implementation: