Optimisation and Out-of-Sample Testing.

 

Good afternoon to all.

After optimizing an EA, we often have to test out-of-sample a dozen or more sets of parameters suggested by the optimizer.

I have an idea for optimizing Expert Advisors out of sample. Suppose we set the Expert Advisor to optimize over a number of parameters, with the date range set, for example, from January 1, 2006 to January 1, 2007.

We get several thousand parameter sets. After that, we save the page with the OPTIMIZATION RESULTS as a separate file. Next, we set the next history period for optimization, i.e. we add a month or two, or however many we need.

In our case, say, from January 1, 2007 to June 1, 2007. And we run the optimization again. This time the optimizer should not take parameters from the EXPERT'S PROPERTIES, but re-select them one by one from the file we saved after the first optimization. After this second optimization, we are left with only those variants that have yielded profits out of sample!
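For concreteness, here is what such a saved file might look like, one parameter set per line. The column order (Period, StopLoss, TakeProfit) and the values are purely illustrative; the saved results page would have to be boiled down to some such plain comma-separated layout by hand or with a small script:

14,40,120
21,35,90
28,50,100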

Ideally, the result is that we end up with the "ideal parameters" to work with and to test online later!

I think this would be a useful addition to the mt4 tester. Most likely it has already been implemented by someone somewhere. If anyone knows, please share the link!

With my modest knowledge, I cannot figure out how to implement the idea in practice.

 
leonid553, you are heading in the right direction, since the existing "optimization" without testing on out-of-sample data is pure curve fitting. But the basic algorithm should be more complex, as in neural network packages: all "optimization" should run on all datasets at once (a wish for the developers). You can, of course, work with only two datasets, but it is better to provide for three: training (A), validation (B) and testing (C). For example: optimize on 2006 (A), keep only the sets that remain profitable on January-March 2007 (B), and judge the survivors on April-June 2007 (C). Still, under the existing conditions you will have to work more or less as you suggest.
 
You could have the Expert Advisor read its parameters from a file, which would then be used in the optimisation. Or, more simply, optimise on one time interval, then on another, save everything in Excel and compare :-)
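A minimal sketch of the file idea, assuming the sets were saved in the comma-separated layout shown earlier under the invented name param_sets.csv (note that during testing MT4 reads files from the tester's own files folder, not from the terminal's usual one):

extern int SetIndex = 0;        // the single input the second pass optimizes

int    Period_;
double StopLoss_, TakeProfit_;
bool   Loaded = false;

int init()
{
   int handle = FileOpen("param_sets.csv", FILE_CSV|FILE_READ, ',');
   if(handle < 0)
      return(0);                      // file not found: Loaded stays false
   // skip the sets that precede the one the optimizer asked for
   for(int i = 0; i < SetIndex && !FileIsEnding(handle); i++)
   {
      FileReadNumber(handle);         // Period
      FileReadNumber(handle);         // StopLoss
      FileReadNumber(handle);         // TakeProfit
   }
   if(!FileIsEnding(handle))
   {
      Period_     = (int)FileReadNumber(handle);
      StopLoss_   = FileReadNumber(handle);
      TakeProfit_ = FileReadNumber(handle);
      Loaded      = true;
   }
   FileClose(handle);
   return(0);
}

int start()
{
   if(!Loaded) return(0);             // invalid pass: trade nothing
   // ... the EA's usual trading logic, driven by Period_,
   // StopLoss_ and TakeProfit_ instead of its own externs ...
   return(0);
}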
 
dimontus:
Or, more simply, optimise on one time interval, then on another, save everything in Excel and compare :-)
No, dimontus, it doesn't work that way. Two different curve fits on different data won't do any good.
 
So if they are the same, i.e. the same parameters on different time intervals give similar results, isn't that what the author of the thread wants?
 
What's the point of a second curve fit when you can simply sift the promising optimisation sets from the first interval through the second?
 
What do you mean?
 
I have tried the following variant:
I test the Expert Advisor over the whole available period, pick the segment with the worst expected payoff (the dip on the chart), and optimize on that worst interval.
I sift out (as far as possible) the local extrema by hand.
Then comes the routine work: feed the parameter sets optimized on the worst interval back into the tester and run the Expert Advisor with them over the whole available interval.
From what comes out, I pick the meat... :-)
 

In light of the above, I see the following approach:

Build a simple auxiliary Expert Advisor, and load into it all the parameter sets obtained from the first optimization.

Each set gets its own index. Then we simply put this auxiliary EA into the tester instead of the original one and optimize it on the out-of-sample period, with the ORDINAL NUMBER of the set as the only optimization parameter (a sketch follows below)!

It may be a bit fiddly, but it's much better than running out-of-sample tests by hand...

The only thing left is to make this add-on versatile.
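Here is a sketch of what such an auxiliary EA might look like, with the sets from the first optimization compiled straight into arrays; the three sets and the parameter names are invented for illustration, and the real trading logic would go into start():

extern int SetIndex = 0;   // the ONLY parameter the out-of-sample run optimizes

// parameter sets copied in from the first optimization (illustrative values)
int    Periods[3]     = { 14,    21,   28    };
double StopLosses[3]  = { 40.0,  35.0, 50.0  };
double TakeProfits[3] = { 120.0, 90.0, 100.0 };

int    Period_;
double StopLoss_, TakeProfit_;
bool   SetOk = false;

int init()
{
   SetOk = (SetIndex >= 0 && SetIndex < ArraySize(Periods));
   if(SetOk)
   {
      Period_     = Periods[SetIndex];
      StopLoss_   = StopLosses[SetIndex];
      TakeProfit_ = TakeProfits[SetIndex];
   }
   return(0);
}

int start()
{
   if(!SetOk) return(0);   // out-of-range index: this pass trades nothing
   // ... the original EA's trading logic here, using the selected set ...
   return(0);
}

Optimizing SetIndex from 0 to the number of sets minus 1 with step 1 then replays every candidate over the new period in a single optimizer run, and the passes that stay profitable are the ones worth forward-testing.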

 
Makes sense, leonid553. When you've done it, drop it into the Code Base or post it here, if you don't mind. I'm sure plenty of people want it already... I've been thinking about it for a long time, I just can't get around to it. The only thing to think through is the optimization criterion for the out-of-sample run, because the results of testing on the first dataset have to be taken into account somehow.
 
leonid553:

In light of the above, I see the following approach:

Build a simple auxiliary Expert Advisor, and load into it all the parameter sets obtained from the first optimization.

Each set gets its own index. Then we simply put this auxiliary EA into the tester instead of the original one and optimize it on the out-of-sample period, with the ORDINAL NUMBER of the set as the only optimization parameter!

It may be a bit fiddly, but it's much better than running out-of-sample tests by hand...

The only thing left is to make this auxiliary EA versatile.

I don't think it will be that easy: for each optimized parameter, in combination with the other parameters, several extrema will turn up. A solution might be found if these extrema were fed to the input of a neural network.