Optimisation results differ from single tests of them

 
eugene-last:

1) Is the spread fixed? - yes.

Is the stop level fixed too, for example?
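For reference, a minimal MQL4 sketch of how to query the values being asked about (spread and stop level); the function name here is only illustrative:

void PrintSpreadAndStopLevel()
  {
   double point     = MarketInfo(Symbol(), MODE_POINT);
   double spread    = MarketInfo(Symbol(), MODE_SPREAD);     // spread in points
   double stopLevel = MarketInfo(Symbol(), MODE_STOPLEVEL);  // minimum stop distance in points
   Print(Symbol(), ": spread = ", spread, " pts (", spread*point, "), stoplevel = ", stopLevel, " pts");
  }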

******

There may be many reasons. I wish at least one of the complainers would post his EA :) I feel sorry for an EA that makes 100 quid during optimisation and loses 5000 in a single test...

******

This is a way of localising the problem. Take the standard MACD_Sample EA, for example, optimise it, and then run a single test on the same data, the same symbol, for the same period. If everything matches there, the problem is in your Expert Advisor.

 
Figar0:

There may be many reasons.

Another clever one, he has plenty of reasons... Either name the reasons and offer solutions, or keep quiet and test your MACD Sample until it starts earning money.
 
eugene-last:
Another clever one, he has plenty of reasons... Either name the reasons and offer solutions, or keep quiet and test your MACD Sample until it starts earning money.


This is, for the most part, a forum for programmers, and you are asking us to solve your problem without showing any code. A person who has "already sold the latest version of ****, based on a probabilistic neural network and able to earn profit in fully automatic mode" should understand what kind of answers he can expect. The simple check I suggested with MACD_Sample unambiguously localises the problem and takes 5 minutes. The elementary question about the stop level was left unanswered. Laziness? There's a psychics' club around here somewhere...

You want the causes? Here they are: crooked hands and an inability to use the tester. One of the two, or both at once. You want a solution? Straighten your hands and study the basics until you're blue in the face and enlightenment arrives. I won't guess any further. And don't count on help from the terminal developers either: the tester has been "tested" a billion times, by them and by us (hundreds of thousands of terminal users), and no such problems have been found. What does that tell you? See a couple of lines above.

 
eugene-last:

So how did this topic end? Time goes by and the story is still the same: the results of optimisation passes and of single tests differ... sometimes so much that it's a crying shame. At the same time, run a single test once, twice, three times and the result is the same. But compare it with the optimisation results and it's different... It's just absurd.

1) Is the spread fixed? - yes
2) Is the quote history of good quality, without holes? - checked manually, no gaps
3) Have you checked the Expert Advisor's algorithm? - Yes, of course I checked it. In a single test the result is the same no matter how many times you run it.
4) Does the same story repeat with other brokers? - it does, so it's not the brokers!
5) Have you tried a smaller and a bigger timeframe? - yes
6) Have you tried explicit control of bar formation? - well, I tried... it's just not for my EA

Well, if I've tried everything, then the only thing left is to shoot myself?

Points 1) and 2) raise the suspicion that if there is a problem, it's in the EA. Everything on the DC's side is perfect, I'm even surprised.

Judging by 6) and 4), my hypothesis is that you have something like pip-scalping. In that case don't expect identical results. Frankly, testing pip-scalpers in the tester on DC quotes ought to be banned by law.

Another thing: point 3) isn't really a proper check. Identical results only show that the Expert Advisor runs, not that it runs correctly.

In general, post the result of a single test - the full report (image + numbers). We'll suggest something.

 
eugene-last:

So how did this topic end? Time goes by and the story is still the same: the results of optimisation passes and of single tests differ... sometimes so much that it's a crying shame. At the same time, run a single test once, twice, three times and the result is the same. But compare it with the optimisation results and it's different... It's just absurd.

1) Is the spread fixed? - yes
2) Is the quote history of good quality, without holes? - checked manually, no gaps
3) Have you checked the Expert Advisor's algorithm? - Yes, of course I checked it. In a single test the result is the same no matter how many times you run it.
4) Does the same story repeat with other brokers? - it does, so it's not the brokers!
5) Have you tried a smaller and a bigger timeframe? - yes
6) Have you tried explicit control of bar formation? - well, I tried... it's just not for my EA

Well, if I've tried everything, then the only thing left is to shoot myself?

The answer is in point 6.

Relying on ticks in the tester is strictly forbidden.

 
mersi:

The answer is in point 6.

Relying on ticks in the tester is strictly forbidden.


Well, here, by the way, it's not impossible... as long as the test runs only on M1 with the "Open prices only" model.

2 eugene-last: "Well, if I've tried everything, then the only thing left is to shoot myself???" - before that, still try to organise your EA's trading with control over the formation of a new bar, say on M1 (a sketch of such a check is below). If the answer is still "...but that's not for my EA", then the tester is not your helper: put the EA on a demo account at your DC for testing instead. Plus, the search is there to help you - for a start, ALL the articles.
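A minimal MQL4 sketch of the new-bar control mentioned above (assuming the EA is attached to the timeframe it should react to, e.g. M1):

datetime lastBarTime = 0;

// returns true only on the first tick of a new bar on the current chart timeframe
bool IsNewBar()
  {
   if(Time[0] != lastBarTime)
     {
      lastBarTime = Time[0];
      return(true);
     }
   return(false);
  }

void OnTick()
  {
   if(!IsNewBar())
      return;                     // skip all intra-bar ticks
   // ... trading logic evaluated once per bar goes here ...
  }

With a gate like this the EA's decisions depend only on data available at the bar open, not on the intra-bar tick sequence.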

In your EA, don't forget the necessary checks of the minimum allowed distances (stop level) when placing or modifying orders (see the sketch below).
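A hedged sketch of that minimum-distance check (the helper name is made up; MODE_STOPLEVEL is returned in points):

// true if both SL and TP are far enough from the price to satisfy the broker's stop level
bool StopsAreValid(double price, double sl, double tp)
  {
   double minDist = MarketInfo(Symbol(), MODE_STOPLEVEL) * Point;
   if(sl > 0.0 && MathAbs(price - sl) < minDist)
      return(false);
   if(tp > 0.0 && MathAbs(tp - price) < minDist)
      return(false);
   return(true);
  }

Call it before every OrderSend()/OrderModify() and skip or adjust the request when it returns false, otherwise the request can fail with error 130 (invalid stops) and the test will drift from what you expect.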

 

Um... I think a lot of people simply refuse to understand the problem. Or deliberately sidestep it.

What is optimisation and what is a single test? Answer: optimisation is a series of single tests.
What does that mean? Answer: in theory, an optimisation pass is the same thing as a single test and should end with the same result.

Well, in practice it turns out that this is not the case. And the Expert Advisor isn't to blame (and it's not the MACD sample, by the way, which I see bothers some people here), because a single test shows exactly the same result every time. So why does that same single test give a different result inside an optimisation?!?!

 

Oh, and one last quirk. Run the optimisation several times without the genetic algorithm, say just 32 passes. Comparing the reports of SEVERAL optimisations, the results coincide 100%.

Then pick any pass, run it as a single test, and you get a different result.

Even if we assume something goes wrong between passes, surely at least the first pass of the optimisation should be identical to a single test?!

Off to shoot myself....

 
eugene-last:

Oh, and one last quirk. Run the optimisation several times without the genetic algorithm, say just 32 passes. Comparing the reports of SEVERAL optimisations, the results coincide 100%.

Pick any pass, run it as a single test and you get a different result. Off to shoot myself.......


Perhaps the error lies in how the working timeframe is determined... Do you select the TF to use through optimisation? What TF do you run the test on? How does the Expert Advisor determine the TF it works with?
 
Determine the TF... Yes, one indicator is used. Its tf parameters there are NULL, PERIOD_H1.
That's pretty much standard. How or what else could this be related to the TF?
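For context, a small illustration (the indicator name is hypothetical) of what "NULL, PERIOD_H1" usually means in an MQL4 call - NULL is the current symbol and PERIOD_H1 pins the indicator to H1 data regardless of the timeframe the test or optimisation runs on:

double GetH1Signal()
  {
   // NULL = current symbol, PERIOD_H1 = always read H1 data,
   // buffer 0, bar 1 (the last completed H1 bar)
   return(iCustom(NULL, PERIOD_H1, "SomeIndicator", 0, 1));
  }

If 0 were passed as the timeframe instead, the indicator would follow the chart/test TF, and the same pass could behave differently on different TFs.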