Different Tester Results for Two Computers Using the Same EA Input Parameters - page 3

 
Michael Charles Schefe #:
more or less proven than tongue rolling?

More no than yes. The probability of cosmic radiation flipping a bit in a personal computer (Intel CPU) is estimated at roughly once every 14 years. Quants at HFT firms use radiation-hardened computers for hardware that would otherwise be sensitive, presumably FPGA/quantum/hybrid machines:

What's the Probability Cosmic Rays Will Affect Your Program? Debunking a Developer's Design Review Claim
  • 2025.11.21
  • www.codestudy.net
 
Peter Mueller #:

I have the same issue (with the same broker, though, as different brokers have completely different data feeds). I think there can be a difference in the performance of the CPUs of the two computers.

I ran the same test: same timeframe, same time interval, same inputs, same deposit (same currency as well), same leverage, same broker, same modelling. (I could screenshot them all if you want proof, but I ask you to trust me on this.)

I even set the delay to the same value, even though my strategy can't really be affected by it, because it's a swing-trading strategy. I checked the symbol parameters and they all align. I can't find a logical reason for the difference.

My strategy is based on a Hidden Markov Model regime-detection system. Maybe the CPU causes the difference: one of my computers has a 2.8 GHz Intel Core i7-7700HQ (this test performed better) and the other has a 1.4 GHz Intel Celeron 2955U (this computer is basically used as a VPN). I should also add that I did not use any multithreaded logic.

What's quite disturbing is that the difference is not marginal. On one computer I get a Sharpe ratio of 1.79, whereas on the other I get 1.06. I attached 2 screenshots so that you can compare the results yourself.

From the tick counts you can see that these are indeed the same tests, or at least they should be.

If anyone has an idea what the cause could be, please share your secrets.
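One plausible (though unconfirmed) cause worth ruling out is floating-point accumulation order. Different CPUs, compilers, or optimization settings can evaluate the same chain of multiply-adds in a different order or with fused instructions, and an HMM forward pass accumulates thousands of such operations, so a last-bit difference can eventually flip a regime label and change which trades are taken. A minimal Python sketch of the underlying effect (the numbers are purely illustrative):

```python
# Floating-point addition is not associative: summing the same numbers in a
# different order (e.g. after compiler reordering, or FMA on another CPU)
# can give a different last bit. A long chain of such operations, as in an
# HMM forward pass, can amplify this into a visibly different result.

a, b, c = 0.1, 0.2, 0.3

left_to_right = (a + b) + c   # 0.6000000000000001
right_to_left = a + (b + c)   # 0.6

print(left_to_right == right_to_left)  # False
```

If this is the cause, the fix is to make the evaluation order explicit (or round intermediate probabilities to a fixed precision) rather than trust that two builds will evaluate expressions identically.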

Make sure you are really using the same settings (both for the testers and the EAs). For that purpose, just copy the file MQL5/Profiles/Tester/<EANAME><SYMBOL><RANGE><MODE>.ini from one terminal to the other, or compare these two text files with any diff/comparison tool (transfer via network or a flash drive).

Also, I'd suggest not using the delay - it's probably "randomized" due to the clock differences between the PCs.

If all conditions match and you still get different results, then it's 99.999% certain that the problem is in your code; you could provide it to a willing developer/tester under NDA, or via Freelance, for proofreading.
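One common in-code culprit worth checking specifically for an HMM-based EA is unseeded randomness: Baum-Welch training and similar regime-initialization steps are often started from random values, and if the seed is not pinned, every machine (and every run) converges from a different starting point. A sketch of the principle in Python (`regime_sample` is a hypothetical stand-in for whatever random initialization the EA performs):

```python
import random

def regime_sample(seed=None):
    """Stand-in for a random initialization step (e.g. initial HMM
    transition probabilities). With no seed, results vary per run and
    per machine; with a fixed seed, they are reproducible everywhere."""
    rng = random.Random(seed)
    return [rng.random() for _ in range(4)]

# Pinning the seed makes the initialization identical on both computers:
assert regime_sample(seed=42) == regime_sample(seed=42)
```

In MQL5, the analogous fix is to call MathSrand() with a fixed constant before any MathRand() use, instead of seeding from the clock.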

BTW, what tick modelling mode do you use?