Optimisation range - page 6

 
ITeXPert wrote >>

Hi all!

I would like to ask a question about the data range used to optimise EAs, i.e. which range to choose for which timeframe. For example, for H1, is it enough to optimise the Expert Advisor on one month, three months, or one year of data? I would like to see these values for different timeframes, with at least a brief justification of the choice. Thank you very much.

First you need to figure out what optimization means to you. If you take a system out of thin air and try to optimise it over the last X days/years, it will be a curve fit and will go straight to the trash. The same applies when the optimization range is defined by a number of deals. The method implemented in the TS should work for a long time (the longer the better) and preferably on different symbols.

But "working" does not mean grinding out money with fixed parameters; the system should be adaptable. That is, its optimal parameters should change slowly and evenly enough that you can earn money using parameters fitted to the most recent history, or even suspend trading in time if the market no longer fits your system. For this you should know which parameters it makes sense to optimize and within what limits, as well as the criteria for rejecting the system (for example, when no optimal values fall inside the predetermined zones over the optimization period).

In other words, you need to know the applicability and optimization boundaries of your system, and this can be found out from testing or trading history. You should aim for robustness of the method, not of its individual optimal parameters. To do that, analyze not individual runs but the behavior within the optimal ranges of the parameters, and the dynamics of those optimal parameters within these ranges over time.
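The idea above can be sketched as a walk-forward check: re-optimize a single parameter on each successive history window and apply the rejection criterion when the optimum falls outside a predetermined zone. Everything here (the profit function, the data, the zone) is a made-up illustration, not anyone's actual system.

```python
def best_param(history, candidates, profit_fn):
    """Return the candidate parameter with the highest profit on this window."""
    return max(candidates, key=lambda p: profit_fn(p, history))

def walk_forward(series, window, candidates, profit_fn, zone):
    """Slide an optimization window over the data; report the optimum per step
    and whether it stayed inside the allowed zone (the rejection criterion)."""
    lo, hi = zone
    results = []
    for start in range(0, len(series) - window + 1, window):
        hist = series[start:start + window]
        p = best_param(hist, candidates, profit_fn)
        results.append((p, lo <= p <= hi))
    return results
```

When a step reports `False`, the criterion says to suspend trading rather than keep trading the last fitted parameters.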
 
Avals >> :

I feel like I wasn't talking about the same thing :(((

 
Avals >> :
... That is, its optimal parameters should change slowly and evenly enough that you can earn money using parameters fitted to the most recent history, or even stop trading in time if the market no longer fits your system. To do this, you need to know which parameters make sense to optimize and within what limits, as well as the criteria for abandoning the system (for example, if no optimal values fall inside the predetermined zones over the optimization period).

That's the whole point: it will NOT work. The optimal parameters are supposed to change slowly, but the market is NOT stationary, and at any moment these parameters and their limits can change drastically :)

 
budimir >> :

That's the whole point: it will NOT work. The optimal parameters are supposed to change slowly, but the market is NOT stationary, and at any moment these parameters and their limits can change drastically :)

that's the point :)))

 
budimir wrote >>

That's the tricky part: it will NOT work. The optimal parameters are supposed to change slowly, but the market is NOT stationary, and at any moment these parameters and their limits may change drastically :-o)

For that purpose there are system abandonment criteria, and in most cases they fire before the problem shows up in equity. Also, no one forbids trading only longs if shorts do not work, and vice versa ;) All of this can be done in time, as long as you do not make decisions based solely on the equity curve of the traded parameters.

 

Lately I have been trying to use a kind of stability coefficient.

For example: optimise over a year, then for each month compute the growth coefficient (the increase in the deposit over that month). Take the maximum and minimum coefficients; their ratio is the stability coefficient. The closer it is to one, the better. The minimal coefficient should also be greater than one. All parameters are saved to a file. I don't have time to put all this into decent shape; I want to post it on my forum.
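A minimal sketch of that coefficient, assuming we already have one growth coefficient per month (end-of-month deposit divided by start-of-month deposit); the sample figures below are invented:

```python
def stability(monthly_growth):
    """Ratio of the minimum to the maximum monthly growth coefficient.
    The closer the ratio is to 1, the more even the equity growth.
    Also report whether every month was profitable (min coefficient > 1)."""
    k_min, k_max = min(monthly_growth), max(monthly_growth)
    return k_min / k_max, k_min > 1.0
```

Taking min/max (rather than max/min) keeps the coefficient in (0, 1], so "tends to one" means even month-to-month growth.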

 
Vinin wrote >>

Lately I have been trying to use a kind of stability coefficient.

For example: optimise over a year, then for each month compute the growth coefficient (the increase in the deposit over that month). Take the maximum and minimum coefficients; their ratio is the stability coefficient. The closer it is to one, the better. The minimal coefficient should also be greater than one. All parameters are saved to a file. I don't have time to put all this into decent shape; I want to post it on my forum.

IMHO the drawback is the fixed time ranges (month, year). Here I agree with Neutron: compare parameters over a fixed number of trades, and then you can calculate not only the growth of the deposit (profit) but also profit/risk, by comparing, for instance, the profit factor.

 
Avals wrote >>

IMHO the drawback is the fixed time ranges (month, year). Here I agree with Neutron: compare parameters over a fixed number of trades, and then you can calculate not only the growth of the deposit (profit) but also profit/risk, by comparing, for instance, the profit factor.

The system can always be improved, if only we had the criteria.

 
Vinin >> :

........Whenever there are criteria.

That's the whole point :) that everyone adjusts the criteria for themselves, even after reading a "great book on optimisation"... There are NO ANSWERS to all the questions... for some it works somewhere, for others it doesn't... etc., etc.

..................

Unfortunately I don't have the statistical and mathematical apparatus to calculate all this, but I don't think it would help either: there are too many options...

 

In general, if you take a bird's-eye view of the Strategy Tester optimizer, it is clear that it is no different from a neural network. Indeed, we have a certain number of adjustable parameters, a certain number of indicators used, and one output that signals us to open a position long or short. As a rule, the number of adjustable parameters equals the number of indicators (inputs); this is a variant of the classic single-layer perceptron. We do not realize it, yet we actively use it in trading. So it would be useful to know the apparatus used when working with NNs, which would help avoid standard errors and suboptimal behavior in parameter optimization. For example, it immediately implies a limitation of the strategy tester: a single-layer perceptron is not an optimal approximator, so in this formulation it is in principle impossible to get the best result, in terms of TS profitability, for an MTS on it.

For a NN there is an optimal number of fitting parameters for a given history length; ignoring this leads to the over-optimization of parameters I mentioned above. This is where all the problems with the tester memorizing the history, and the losses in forward tests, come from. Moreover, since a two-layer perceptron is a universal approximator, any TS with arbitrarily cunning links between the indicators used (with multiplication, division, etc.) can be reduced, without loss of power, to a weighted sum of the same indicators. That is the classical NN architecture, and we can then use the most effective parameter-optimization method there is: backpropagation of error. It is obviously orders of magnitude faster than the simple brute force, or even the genetic algorithm, used in the tester. And there is nothing difficult in moving to this architecture: you just take the sum of the indicator signals and find the optimal weights.
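A toy sketch of the reduction described above: treat the TS decision as the sign of a weighted sum of indicator signals and fit the weights by gradient descent (the single-layer case of backpropagation) instead of brute-force enumeration. The "indicator signals" and long/short targets here are synthetic; a real EA would feed actual indicator outputs per bar.

```python
import math

def train_weights(signals, targets, lr=0.1, epochs=500):
    """signals: one feature vector of indicator values per bar;
    targets: +1 (long) or -1 (short). Fits w in sign(w . x) by
    gradient descent on the logistic loss log(1 + exp(-y * w.x))."""
    n = len(signals[0])
    w = [0.0] * n
    for _ in range(epochs):
        for x, y in zip(signals, targets):
            z = sum(wi * xi for wi, xi in zip(w, x))
            # gradient of log(1 + exp(-y*z)) with respect to z, times x below
            g = -y / (1.0 + math.exp(y * z))
            for i in range(n):
                w[i] -= lr * g * x[i]
    return w

def decide(w, x):
    """Open long (+1) or short (-1) from the weighted sum of signals."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else -1
```

Each gradient step adjusts all weights at once, which is where the speedup over enumerating parameter combinations in the tester comes from.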

What I want to say is this: we are all very skeptical of Artificial Intelligence and everything related to it, especially NNs. But we do not notice that we implicitly exploit this field at every step: optimization in the strategy tester! And we exploit it in the most suboptimal way, by groping in the dark. Hence the frequent desire to discard "bad" passes in a series of tests, and so on. In fact, things are simpler: you just need to know the method's area of applicability and its limitations.
