Optimisation results differ from single tests on them

 

I see that the issue of the spread used in testing has been raised again and again. I have recently arrived at a system that looks more or less serious (in terms of real trading) and therefore requires thorough testing, and this issue has concerned me as well. As a result, I wrote a simple script that sets the required spread for offline testing.

The principle is well known: the Ask price is overwritten in the symbols.sel file. So, in an offline terminal, copy the file from the history folder to the experts/files folder and launch the script; then close the terminal, copy symbols.sel back, and start the terminal again.
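For reference, the general shape of such a script might look like the sketch below. This is a minimal sketch, not the attached script itself: the binary layout of symbols.sel is undocumented, so RECORD_SIZE, BID_OFFSET and ASK_OFFSET are hypothetical placeholders that would have to be verified against your terminal build before any real use.

// Sketch: force a fixed spread by rewriting the Ask field of every
// symbol record in a copy of symbols.sel placed in experts/files.
// WARNING: RECORD_SIZE, BID_OFFSET and ASK_OFFSET are hypothetical
// placeholders; the real layout of symbols.sel must be checked first.
#define RECORD_SIZE 1936   // placeholder: size of one symbol record
#define BID_OFFSET   936   // placeholder: offset of Bid in a record
#define ASK_OFFSET   944   // placeholder: offset of Ask in a record

extern int SpreadPoints = 20;  // spread to force, in points

int start()
{
  int h = FileOpen("symbols.sel", FILE_BIN|FILE_READ|FILE_WRITE);
  if(h < 1)
  {
    Print("Cannot open symbols.sel, error ", GetLastError());
    return(0);
  }
  for(int pos = 0; pos + RECORD_SIZE <= FileSize(h); pos += RECORD_SIZE)
  {
    FileSeek(h, pos + BID_OFFSET, SEEK_SET);
    double bid = FileReadDouble(h);
    FileSeek(h, pos + ASK_OFFSET, SEEK_SET);
    FileWriteDouble(h, bid + SpreadPoints*Point);  // overwrite Ask
  }
  FileClose(h);
  return(0);
}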

P.S. I have replaced the script; there was a small oversight. If anyone happened to download SetSpread instead of SetSpread_1, please download it again.

I have the same problem: I get consistent results during optimization, but drastically different results during single runs. Thanks to Mathemat for pointing me in the right direction.

Mathemat:
Be careful with objects during testing. It's better not to use them at all.
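A common way to follow that advice is to skip object calls outright when no one can see them anyway. A minimal sketch (not from this thread; the function name and arrow code are illustrative):

// Objects are not drawn during optimization or non-visual testing,
// so skip them there; per the advice above, it is safest not to
// rely on them while testing at all.
void DrawEntryArrow(string name, datetime t, double price)
{
  if(IsOptimization() || (IsTesting() && !IsVisualMode()))
    return;                          // no visible chart in these modes
  ObjectCreate(name, OBJ_ARROW, 0, t, price);
  ObjectSet(name, OBJPROP_ARROWCODE, 233);  // 233 = wingdings up arrow
}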

I'll share what happened and how it was solved; maybe someone will find it useful. I wrote my own indicator that uses trend lines. The indicator passed the number of the broken trend line to a terminal global variable, like this:

GlobalVariableSet("GV_name", number);

and the Expert Advisor read the value back:

int dc = GlobalVariableGet("GV_name");

Everything was fine in the visualizer and in single tests; I had decided to save on declaring an "extra" variable to keep the code smaller. After I corrected the code as follows, everything started working with identical results in both the optimizer and single tests.

// In the indicator: declared at the program's global level
string GlobVar = "GV_name";

int start(){
  GlobalVariableSet(GlobVar, number);  // number: the broken trend line's index
  return(0);
}

// In the Expert Advisor: the same name, also at the program's global level
string GlobVar = "GV_name";
int dc;

int start(){
  dc = GlobalVariableGet(GlobVar);  // GlobalVariableGet() returns a double
  return(0);
}

Thus, declaring the variable holding the global variable's name at the program's global level, in both the indicator and the EA, made everything work properly.

 

Good evening, colleagues.

I decided to revive this topic, as I have encountered an identical problem.

My Expert Advisor does not use graphical objects, and I set a custom spread that is the same everywhere. However, single tests differ greatly from the optimization results. Moreover, I ran single tests on different computers; they all agree with each other, but they do not coincide with the optimization results.

Maybe someone has found a solution?

 
Andrey Kaunov:

Good evening, colleagues.

I decided to revive this topic, as I have encountered an identical problem.

My Expert Advisor does not use graphical objects, and I set a custom spread that is the same everywhere. However, single tests differ greatly from the optimization results. Moreover, I ran single tests on different computers; they all agree with each other, but they do not coincide with the optimization results.

Maybe somebody has found a solution?

Why should they be the same? They would be only if you ran through all the parameter combinations and picked the best one, but a full search like that is expensive and resource-intensive. That is why genetic algorithms are used, and in essence they are built like this: random sampling of parameter sets from the ranges being optimized, then selection of the best ones and a more detailed search around them. Take, for example, 6 parameters. The best solution is like a point of highest density in 6-dimensional space, and there can be many such densification points. A well-behaved objective gives smooth 6-dimensional "glades" with few density points, and the optimization will find them; if the objective produces sharp densities, the results can be random, i.e. the optimization will find density points, but not the same set of parameters (the same models) every time.
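To see why such a search need not land on the same parameter set twice, here is a toy sketch of the sampling-and-refinement idea described above. It is not the tester's actual algorithm; Fitness() is a hypothetical stand-in for a full backtest over a single parameter.

// Toy model of a genetic-style search: random sampling first, then a
// more detailed search around the best point found. NOT the tester's
// real algorithm, just the idea described above.
double Fitness(double x)             // stand-in for one backtest pass
{
  return(-MathPow(x - 42.0, 2));     // toy objective peaking at x = 42
}

double GeneticSearch(double lo, double hi, int samples, int refineSteps)
{
  double best = lo, bestFit = Fitness(lo);
  for(int i = 0; i < samples; i++)   // 1) random sampling of parameters
  {
    double x = lo + (hi - lo)*MathRand()/32767.0;
    double f = Fitness(x);
    if(f > bestFit) { best = x; bestFit = f; }
  }
  double radius = (hi - lo)/10.0;    // 2) refine around the best sample
  for(int j = 0; j < refineSteps; j++)
  {
    double x = best + radius*(MathRand()/32767.0 - 0.5);
    double f = Fitness(x);
    if(f > bestFit) { best = x; bestFit = f; }
    radius *= 0.9;                   // shrink the neighbourhood
  }
  return(best);                      // depends on the random draws above
}

Because the result depends on the random draws, two runs of GeneticSearch() can return different parameter sets; on a small scale, that is exactly why two genetic optimizations may disagree.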

 

Valery, instead of answering, I'll just quote, if I may...

eugene-last:

Um... I think a lot of people simply refuse to understand the problem, or deliberately dodge it.

What is optimization, and what is a single test? Answer: optimization is a number of single tests.
What does that mean? Answer: in theory, an optimization pass proceeds the same way and ends with the same result as the corresponding single test.

Well, in practice it turns out that this is not the case. And it is not the Expert Advisor that fails (it is not a martingale, by the way; I see that bothers some people here), because the single test shows exactly the same result every time. So why does that same single test give a different result inside an optimization?!

 
Andrey Kaunov:

Good evening, colleagues.

I decided to revive this topic, as I have encountered an identical problem.

My Expert Advisor does not use graphical objects, and I set a custom spread that is the same everywhere. However, single tests differ greatly from the optimization results. Moreover, I ran single tests on different computers; they all agree with each other, but they do not coincide with the optimization results.

Maybe somebody has found a solution?

1. Check that all variables are initialized. Previously, in MQL4, uninitialized variables were equal to 0; I don't know whether that is still the case. By the way, this also concerns indicators.

2. If you use dynamic arrays, check the result of ArrayResize(); a sketch of that check follows below. I had this problem: I built an EA on 4-5 indicators, and it turned out that one indicator ate all the memory, so in my EA ArrayResize() did not always deliver the requested array size: sometimes it worked, sometimes it didn't. If I'm not mistaken, MQL programs get about 3 GB of memory at most in MQL4, since the terminal is 32-bit.
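A minimal sketch of that check (the buffer and function names are illustrative): treat a short ArrayResize() as a hard error instead of silently writing past the allocated size.

// ArrayResize() returns the number of elements after resizing,
// or -1 on failure, so anything below the requested size means trouble.
double buffer[];

bool GrowBuffer(int newSize)
{
  if(ArrayResize(buffer, newSize) < newSize)
  {
    Print("ArrayResize(", newSize, ") failed, error ", GetLastError());
    return(false);
  }
  return(true);
}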

 
Andrey Kaunov:

Valery, instead of answering, I'll just quote, may I...

I can't say exactly, I don't know. Optimization is, after all, not a few single tests but many, so perhaps for the sake of speed the input data are prepared differently. To get to the bottom of this, we need simple code that reproduces the problem. Then maybe the developers will answer.

 
Igor Makanu:

1. Check that all variables are initialized. Previously, in MQL4, uninitialized variables were equal to 0; I don't know whether that is still the case. By the way, this also concerns indicators.

2. If you use dynamic arrays, check the result of ArrayResize(). I had this problem: I built an EA on 4-5 indicators, and it turned out that one indicator ate all the memory, so in my EA ArrayResize() did not always deliver the requested array size: sometimes it worked, sometimes it didn't. If I'm not mistaken, MQL programs get about 3 GB of memory at most in MQL4, since the terminal is 32-bit.

Igor, thanks for the tip. I'll try to dig in that direction.

Valeriy Yastremskiy:

I can't say exactly, I don't know. Optimization is, after all, not a few single tests but many, so perhaps for the sake of speed the input data are prepared differently. To get to the bottom of this, we need simple code that reproduces the problem. Then maybe the developers will answer.

Well, nothing should be different, otherwise the whole point of optimization is lost. And the developers haven't answered anything for 10 years...

 
Andrey Kaunov:

Igor, thanks for the tip. I'll try to dig in that direction.

Well, nothing should be different, otherwise the whole point of optimization is lost. And the developers haven't answered anything for 10 years...

The developers don't respond to words and complaints, only to clear code that reproduces the problem. :)

 
Igor Makanu:

1. Check that all variables are initialized. Previously, in MQL4, uninitialized variables were equal to 0; I don't know whether that is still the case. By the way, this also concerns indicators.

2. If you use dynamic arrays, check the result of ArrayResize(). I had this problem: I built an EA on 4-5 indicators, and it turned out that one indicator ate all the memory, so in my EA ArrayResize() did not always deliver the requested array size: sometimes it worked, sometimes it didn't. If I'm not mistaken, MQL programs get about 3 GB of memory at most in MQL4, since the terminal is 32-bit.

In MQL4 uninitialized variables hold zeros; in MQL5 they hold garbage. Last time, such problems turned out to be solved precisely by finding variables that were initialized outside OnInit and changed during an optimization pass, so that on the next pass they did not start with their original value.
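A minimal sketch of that pitfall and its fix, assuming the tester reuses the loaded program between passes instead of reloading it (the variable name is illustrative):

// Pitfall: a program-scope variable initialized at load time and then
// modified during a pass may keep the modified value on the next pass.
int tradesToday = 0;    // initialized once, when the program is loaded

int OnInit()
{
  // Fix: reset all mutable program-scope state here; OnInit() runs
  // at the start of every optimization pass.
  tradesToday = 0;
  return(INIT_SUCCEEDED);
}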
