The same test with the same settings (EA, symbol, etc.) run twice - why is modeling quality different?

 

In another post I complained about modeling quality itself. Now I have found that the modeling quality is not even constant for the same test run multiple times. This ultimately proves that the tester is not reliable (build 222).


1. First, let's make sure that we delete all history and tester cache:
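For reference, here is a minimal sketch of that cleanup step as a shell function. It assumes the default MT4 directory layout (bar history in `history/<Server>/*.hst`, generated tester ticks in `tester/history/*.fxt`); the installation path is yours to supply, and on Windows the same deletions can of course be done in Explorer.

```shell
# clear_mt4_cache: delete cached bar history (*.hst) and the tester's
# generated tick files (*.fxt) under a given MT4 installation directory.
# Directory layout is an assumption based on default MT4 installs.
clear_mt4_cache() {
    mt4_dir="$1"
    rm -f "$mt4_dir"/history/*/*.hst      # per-broker bar files
    rm -f "$mt4_dir"/tester/history/*.fxt # tester tick cache
}

# Example (path is an assumption -- point it at your own terminal):
# clear_mt4_cache "$HOME/MT4"
```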




2. Now let's start the MT4 terminal and import M1 data from MetaQuotes server.




3. Here we verify that the M1 data was imported and all higher time frames were calculated.






The imported data is stored in a broker-specific subdirectory (InterbankFX-Demo in this case). We have to remember not to open an online chart, because that makes MT4 fetch new data from this particular broker and can overwrite the data we downloaded from MetaQuotes. In my opinion this makes testing very cumbersome. Why isn't this data kept separate and made available to the tester? For this reason it is a good idea to keep a dedicated MT4 instance for testing only and to make sure no online charts are opened in it.
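Since an accidentally opened chart can silently overwrite the imported files, one workaround is to snapshot the broker subdirectory right after the import. A minimal sketch, again assuming the default `history/<Server>` layout (the server name "InterbankFX-Demo" below is just this thread's example):

```shell
# backup_mt4_history: copy the broker-specific history folder aside so
# imported MetaQuotes data can be restored if an online chart
# overwrites it. Paths/layout are assumptions based on default installs.
backup_mt4_history() {
    mt4_dir="$1"   # MT4 installation directory
    server="$2"    # broker subdirectory, e.g. "InterbankFX-Demo"
    cp -r "$mt4_dir/history/$server" "$mt4_dir/history/$server.bak"
}

# Example:
# backup_mt4_history "$HOME/MT4" "InterbankFX-Demo"
```

Restoring is then just copying the `.bak` folder back before a test run.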



4. It is time now to set up the EA in the tester and run the backtest.




And here are the results; note the number of errors and the modeling quality:




5. Now let's run the same test again using the same settings (do not change anything, just press "Start").




And here are the results; note the number of errors and the modeling quality:




Wait a sec - why are the results completely different?


Let's do one more test: close MT4, remove the tester tick history, and try again:




Open the MT4 terminal, run the test again with the same parameters, and observe the results:




These results are again different, although the total net profit is the same for runs 2 and 3 (still different from run 1). However, it is still not clear why:


1) Results from the 2nd and 3rd runs are substantially different from the results of the 1st run


2) Modeling quality is "n/a" for the 2nd and 3rd runs


Is it just me, or is there a serious problem with MT4? I'm also attaching the reports.


Please don't get distracted by the profitability of this EA - it is irrelevant.


PS. Screenshots courtesy of Paint.


Thanks,

Lukasz

Files:
reports.zip  27 kb
 

I've just noticed that the start date was different for the 2nd and 3rd runs - it changed from 1999.05.02 to 1989.11.09.


According to the manual, the tester will use all available data if the "Use date" checkbox is not selected.


This brings the following questions:


1) Since the imported data starts on 1999.01.04, why was 1999.05.02 used as the start date for the 1st run?


2) Why did the tester use different start dates in the subsequent runs?


3) Does the tester try to download historical data from the broker associated with the account, possibly overwriting the imported data, when "Use date" is not checked?


I gave it one more try, this time explicitly specifying the date range, and got identical results across runs (reports attached), although the modeling quality is still "n/a".


Any thoughts?


PS. While writing this post I noticed that a newer MT4 build (223) came out. Let's see what it fixes.

Files:
reports_1.zip  21 kb
 

After updating to build 223, I'm still getting different modeling quality for subsequent runs: 90% for the first run and again "n/a" for the second (with an explicit date range).
