First of all, I suggest using a static value for the delay rather than the ping value, which can change at any time and affect operations.
No disrespect, but I wasn't asking for criticism of my workflow. Honestly, there's a lot more to it than just testing on all Market Watch symbols and selecting the best performer. I just want to find out why there is a mismatch in the results.
Well, 1. Not many will respond because you are using 1-minute data. Depending on your strategy this may work; however, in general most of us will only use "Every tick based on real ticks".
2. A question: have you recently deleted and redownloaded your tick data with a clean test folder? If you have not, this may be your cause. As a rule, I do this every couple of MT5 updates and also after every few monthly cumulative Windows updates, as these will often break or fragment the data files and corrupt MT5 somehow.
Yes, I use the 1-minute OHLC model for backtesting with my M1 custom data. My code only references OHLC values, and it saves me not only time in backtesting but also disk space, which I don't have much of at the moment. Also, I did recently delete and redownload my data, and there have been a few MT5 and Windows updates since then. I should have started there, but I assumed my data was fine, and we all know what happens when you assume. I'll look more into my data. Thanks.
It makes very little difference if your code only references OHLC. Search this site; I know I have read a few big threads on this subject.
If I remember correctly, the 1-minute OHLC data that the strategy tester uses is always built from virtual ticks, rarely even close to real ticks. You could prove me wrong; however, I am quite sure of this.
EDIT: Using virtual ticks could be your trouble. I believe that virtual ticks are recreated each time you start a backtest, whether that is the optimizer or a single test. This could also mean that you get different virtual ticks even if you restart the same single test right after finishing one. You may get the same resulting ticks after consecutive tests, but there are no guarantees.
But if you happen to find a solution, then please report it on this thread of yours so that all of us can benefit.
That's right, MT5 will use synthetic/virtual ticks to fill in the intrabar price movement under the 1-minute OHLC model. I try to avoid creating systems that depend on this intrabar movement, because at that point you need tick data. I've been backtesting with the 1-minute OHLC model for quite some time now, and this issue with mismatching reports only came up recently. So I don't think the issue is how ticks are being delivered: as I've stated, my robots reference OHLC values on candle index 1 (so the values are fixed), not to mention my trade durations are 4+ hours and I'm not trading on anything lower than the H1 timeframe.
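To make that concrete, here is a minimal sketch of what I mean by only reading the closed candle at index 1 (an assumed bare-bones example, not my actual robot code):

// Minimal sketch: the EA only reads the last *closed* H1 candle (index 1),
// whose OHLC values are fixed and do not depend on intrabar/virtual ticks.
void OnTick()
  {
   double open1  = iOpen (_Symbol, PERIOD_H1, 1);
   double high1  = iHigh (_Symbol, PERIOD_H1, 1);
   double low1   = iLow  (_Symbol, PERIOD_H1, 1);
   double close1 = iClose(_Symbol, PERIOD_H1, 1);

   // These functions return 0 while history is not yet synchronized,
   // so skip this tick instead of acting on bad data.
   if(open1==0.0 || high1==0.0 || low1==0.0 || close1==0.0)
      return;

   // ... signal logic based only on the closed candle goes here ...
  }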
Of course! I worked in IT before and always made it a habit to update threads once I find a solution :D
I'm pretty sure 1-minute OHLC only sends four ticks per bar: open, high, low, then close. I think you are referring to "Every tick" mode, which in my opinion is more misleading than 1-minute OHLC or real ticks, because you can force EAs and indicators to only process new 1-minute bars.
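For example, a new-bar guard along these lines (an assumed generic sketch, not code from this thread):

// Sketch: run the EA logic only once per new M1 bar, so any extra
// intrabar ticks generated by the tester are simply ignored.
void OnTick()
  {
   static datetime lastBarTime=0;
   datetime barTime=iTime(_Symbol,PERIOD_M1,0);  // open time of the current bar

   if(barTime==0 || barTime==lastBarTime)
      return;                                    // no new bar yet (or no data)

   lastBarTime=barTime;
   // ... per-bar logic goes here ...
  }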
Hello
When I initially backtest my system, I test using all Market Watch symbols and then individually backtest each result that shows positive performance. The problem is that the individual backtest doesn't match the all-Market-Watch-symbols test.
For example, here are the top-performing symbols using all Market Watch symbols.
Now, let's run a single test on CHFJPY.m1 so that I can save the report for further analysis.
The results don't match! The issue seems to happen randomly and is not tied to any specific symbol. I don't control timeframes or symbols in the code; that's done by MT5.
Here are the backtester settings I'm using. Nothing out of the ordinary.
Any ideas?
Hi there! I have read so much misinformation in this thread that I decided to step in and give a proper answer, because I have had these issues many times and I know the struggle of coping with them. I will try to cover only the things directly relevant to Victor's issue; otherwise the thread will go off topic very quickly and then Miguel Angel Vico Alba will come to pull my ears :-)
The problem is that an individual/single "run" differs from the set of multiple "runs", as in your case, but also in other cases, such as when you rerun a single pass from the optimization results list and it doesn't match: not only is the P/L different, the number of trades is completely off.
This problem has 2 main causes:
1) Lack of the "tester_everytick_calculate" property: if your indicator needs every tick to work properly and can't handle being updated at random times, you need this.
2) RACE CONDITIONS: many of MT5's "internal" functions are asynchronous, and if your code doesn't handle that properly, race conditions appear, even if they go unnoticed most of the time.
To be clear, both of these issues are the developer's fault; MT5 works well.
Let me explain a little more.
- You hit cause 1) if your visual tester run differs from the non-visual one: in the visual tester the indicators are forced to recalculate on every tick, while in non-visual mode the indicators are generally only updated when accessed (if you don't set the mentioned property). See the sketch right after this list.
- For cause 2), you have to check your code. A lot of data fetching in MT5 is asynchronous; for example, iClose/iHigh/iLow/iOpen can return 0 if the data is not present at the moment of the call. This is a simple example, but there are many functions like this that need to be checked correctly. What causes confusion is that most of the time these functions return the proper value, but sometimes they don't. This is why it is called a RACE CONDITION, a common concept in computer science: most of the time MT5's data-fetching thread is faster and your call to "iSomething" returns the correct value, but sometimes it is late (loses the race) and your call returns 0. I repeat, this is just an example; there are a lot of functions in MT5 that can return an error, and this is properly documented, so pay attention to the documentation. Race conditions are the most annoying issues to fix in general, as they are unpredictable; some of them trigger the error only once in a month of heavy load. It also depends on the load of your system at that specific moment: during optimization, race conditions usually trigger more errors because the system is under heavy load. Many serious race conditions are also not findable with the debugger, as the debugger alters the internal timing of the function calls, and if you are unlucky the issue will never reproduce under the debugger, which makes it hard to spot.
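To illustrate point 1), here is a bare-bones custom-indicator skeleton showing where that property goes (an assumed minimal example, not your actual indicator):

// With tester_everytick_calculate set, the strategy tester recalculates the
// indicator on every tick instead of only when its buffers are accessed.
#property indicator_chart_window
#property indicator_buffers 1
#property indicator_plots   1
#property indicator_type1   DRAW_LINE
#property tester_everytick_calculate

double Buffer[];

int OnInit()
  {
   SetIndexBuffer(0,Buffer,INDICATOR_DATA);
   return(INIT_SUCCEEDED);
  }

int OnCalculate(const int rates_total,const int prev_calculated,
                const datetime &time[],const double &open[],
                const double &high[],const double &low[],
                const double &close[],const long &tick_volume[],
                const long &volume[],const int &spread[])
  {
   // Dummy calculation: just mirror the close prices.
   for(int i=(prev_calculated>0 ? prev_calculated-1 : 0); i<rates_total; i++)
      Buffer[i]=close[i];
   return(rates_total);
  }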
So, my final advice is to add the property stated in 1) and also to recheck all of your code for functions that can return errors, and handle them properly (there are a lot of them, be sure to check them all); assume they WILL return errors. Checking these two things solved practically all of my issues with mismatches between runs.
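As an assumed, generic example of that defensive style (not code from this thread), here is a fetch of a closed bar's close price that treats a zero return as "data not ready yet":

// Never trust an iSomething call blindly: a return value of 0 means the
// data was not available at the moment of the call.
bool GetClosedBarClose(const string symbol,const ENUM_TIMEFRAMES tf,double &value)
  {
   value=iClose(symbol,tf,1);          // last closed bar
   if(value==0.0)
     {
      // History not synchronized yet: report failure so the caller can
      // skip this tick and retry later instead of acting on a bad value.
      Print("iClose failed for ",symbol,", error ",GetLastError());
      return(false);
     }
   return(true);
  }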
Also, there is no problem running it in "1 minute OHLC" mode, as this mode is not generated randomly every time but in a solid, deterministic way. The people behind MT5 do things properly (otherwise this wouldn't be, hands down, the best platform for testing and optimization; other platforms are very primitive in comparison). Anyway, I see there is a lot of confusion about how these things work (the modelling part), but I will not explain that here; I'll keep it for another thread if needed :-)
Enjoy it!
Edit: If it wasn't obvious, first you have to check all the iSomething functions and all the CopyBuffer ones.
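For the CopyBuffer side, an assumed generic sketch of checking the number of copied values before using them; "maHandle" is a hypothetical indicator handle created elsewhere (e.g. via iMA() in OnInit()):

// CopyBuffer returns the number of copied elements, or -1 on error,
// so always check it before reading the array.
bool GetMAValues(const int maHandle,double &ma[])
  {
   ArraySetAsSeries(ma,true);                 // index 0 = most recent requested bar
   int copied=CopyBuffer(maHandle,0,0,3,ma);  // buffer 0, bars 0..2
   if(copied<3)
     {
      // Not everything is available yet (history loading, indicator still
      // calculating, ...): fail now and let the caller retry on the next tick.
      Print("CopyBuffer copied ",copied," values, error ",GetLastError());
      return(false);
     }
   return(true);
  }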