This is exactly why I have not used the MT4 strategy tester for a long time. It took me years to get my MT4 strategy tester to work 98% the same as my broker. I do not see the reason for your post: you are obviously well aware of the same issue from the past, so why make a post about it now?
But I also did not use that online generator for very long because, like all the other generators, it is very misleading. I even got a 100% refund from my bank after I found that the differences between the data from that site, my broker, and Dukascopy (where support claimed their data came from) were so large they may as well have been from different universes.
NOTE: you are best to remove the name of the 3rd-party product, otherwise this thread may be removed by a moderator and you may get a warning.

Even if you're using the same FXT and HST files, MT4 might still simulate ticks a bit differently between an optimization and a regular backtest. That's because the "Every tick" mode doesn't rely on real market data: it generates synthetic ticks from OHLC values, and that process isn't always consistent between test types.
If you're working with FXT and HST files from StrategyQuant, make sure MT4 doesn’t overwrite them. One way to do that is to launch the terminal in offline mode before running the backtest. Otherwise, MT4 might silently replace the files, and your backtest could end up using different tick data than the optimization did.
Also, depending on how your EA is written, especially if it uses static variables or relies on time-based logic, it might behave slightly differently during optimization runs.
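If you want to see which mode a given run is in and rule out leftover state, here is a minimal sketch (compiles as MQL4 build 600+ or MQL5; the two globals are just placeholders for whatever state your EA keeps):

datetime g_lastSignalTime = 0;   // example persistent state
int      g_tickCount      = 0;

int OnInit()
  {
   // MQL_TESTER / MQL_OPTIMIZATION are standard flags in both MQL4 (build 600+) and MQL5
   Print("tester=", MQLInfoInteger(MQL_TESTER),
         " optimization=", MQLInfoInteger(MQL_OPTIMIZATION));
   // Reset state explicitly instead of trusting default initialization,
   // so nothing can leak from one run or pass into the next
   g_lastSignalTime = 0;
   g_tickCount      = 0;
   return(INIT_SUCCEEDED);
  }

As far as I remember, MT4 suppresses Print output during optimization, so you will only see this in a single test run; writing to a file gets around that.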
These little differences may not seem like a big deal at first, but if your strategy depends on precise tick flow, like in scalping or timing-based entries, they can completely change the outcome over time.
Regarding the mention of third-party tools, as long as there's context and no intent to promote, it's generally fine. The real issue is when there's clear advertising behind it.
Broker names, on the other hand, are not allowed under any circumstance. Just to clarify, this includes even neutral mentions; for example, saying that something was tested on a specific broker or works better with one is still not permitted, regardless of intent. These references are restricted to keep the forum neutral and avoid any appearance of bias or promotion.
In cases like StrategyQuant, if it's part of a technical discussion and not used for promotion, I personally have no problem with it. That said, if another moderator sees it differently, I’ll fully support their decision.
Hi, thanks for the tips, very helpful. A few follow-ups and notes on what I’ve tried so far:
Offline mode / file overwrites
I did launch MT4 in offline mode (started with the /portable switch and removed all symbols from Market Watch except GBPJPY), and I’ve also set file permissions to read-only on the FXT/HST pair. That seems to prevent MT4 from rewriting them, but I still see the tick generation diverge between optimiser and backtest. Do you know of any other tricks to be absolutely sure MT4 isn’t silently swapping in its own files?
Synthetic-tick determinism
As you point out, “Every tick” on MT4 is purely synthetic, reconstructed from OHLC. I found references in the MetaQuotes docs suggesting the optimiser uses a different RNG seed for each pass, which would explain why the same date-range run delivers different sequences. Have you ever managed to extract or fix the RNG seed so that you can replay exactly the same tick stream? Any pointers to registry hacks or .mqh tweaks would be hugely appreciated.
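I haven’t found a supported way to pin the seed, but it is at least possible to prove whether the two modes saw the same stream. A rough fingerprint EA skeleton (common MQL4/MQL5 subset; the hash constants are arbitrary):

long g_ticks = 0;
long g_hash  = 1;   // order-sensitive rolling hash; overflow is harmless here

void OnTick()
  {
   long bid = (long)MathRound(SymbolInfoDouble(_Symbol, SYMBOL_BID) / _Point);
   long ask = (long)MathRound(SymbolInfoDouble(_Symbol, SYMBOL_ASK) / _Point);
   g_hash = g_hash * 1000003 + bid * 31 + ask;   // fold each tick into the hash
   g_ticks++;
  }

void OnDeinit(const int reason)
  {
   // Identical fingerprints => identical tick streams; anything else means
   // the optimizer pass and the backtest diverged somewhere
   PrintFormat("ticks=%I64d fingerprint=%I64d", g_ticks, g_hash);
  }

Since MT4 apparently does not log Print output during optimization, you may have to write the fingerprint to a file to capture it from an optimization pass.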
EA-side effects
My EA doesn’t call OnTester() or make any static/global state changes, but it does use time-based logic. Is there any chance the optimiser itself adjusts the “current time” used for decision-making? If so, is there a known workaround to force the EA to see the same timestamp in both modes?
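While waiting for an answer, the safest pattern I know is to key all time logic off the modeled time (TimeCurrent or the bar-open time) and never off real wall-clock sources like GetTickCount(), which keeps ticking in real milliseconds even inside the tester. A sketch of the pattern (common MQL4/MQL5 subset; the 8-18h session filter is just an example):

datetime g_lastBar = 0;

void OnTick()
  {
   // Bar-open time comes from the data itself, so it is identical in
   // every run that uses the same history
   datetime bar = (datetime)SeriesInfoInteger(_Symbol, PERIOD_CURRENT, SERIES_LASTBAR_DATE);
   if(bar == g_lastBar)
      return;                          // act once per bar
   g_lastBar = bar;

   MqlDateTime dt;
   TimeToStruct(TimeCurrent(), dt);    // modeled server time inside the tester
   if(dt.hour < 8 || dt.hour >= 18)    // hypothetical session filter
      return;

   // ... entry/exit logic here ...
  }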
Custom FXT from real ticks
I’ve been experimenting with Tick Data Suite to build a “real-tick” FXT, then pointing MT4 at that. Early results look promising: optimisation and backtest P&L line up almost perfectly. If anyone else reading this thread is on that path, make sure you disable “Skip week-end ticks” in TDS and that the tick counts match the broker’s live server feed; small differences there still throw off the final equity curve. (A quick way to check the counts is sketched below.)
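For that tick-count check, a throwaway counter like this (common MQL4/MQL5 subset) prints per-day totals you can line up against what TDS or the broker reports:

long     g_dayTicks = 0;
datetime g_day      = 0;

void OnTick()
  {
   datetime today = TimeCurrent() - TimeCurrent() % 86400;   // midnight of the modeled day
   if(today != g_day)
     {
      if(g_day > 0)
         PrintFormat("%s ticks=%I64d", TimeToString(g_day, TIME_DATE), g_dayTicks);
      g_day      = today;
      g_dayTicks = 0;
     }
   g_dayTicks++;
  }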
Considering MT5
You’re right that MT4’s tester is pretty ancient. In MT5 you can import “true tick” CSVs, and there’s an option to “Use same ticks for all runs,” which seems to guarantee reproducibility. I’ve successfully rewritten my EA from MQL4 to MQL5 and managed to download tick data from both StrategyQuant and TickStory. Unfortunately, I’m running into a spread issue: in MT4 I’m used to setting my own spread in the tester, but in MT5 I have to use the spreads that come with the tick data. My live account is a raw-ECN account, so my real spreads are very tight, whereas the downloaded data shows much wider spreads, making it impossible to run the optimizations the way I’m accustomed to.
Any advice on how to override or adjust the spread in the MQL5 tester would be greatly appreciated!
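In case it’s a viable direction, one thing I’ve been trying on the MT5 side is custom symbols: clone the broker’s real ticks into a custom symbol, overwrite the ask with bid plus a fixed spread, and point the tester at the clone. A rough MQL5-only script sketch; the symbol name, date range, and 10-point spread are illustrative, and CopyTicksRange can only return ticks the terminal has actually downloaded:

#property script_show_inputs
input string InpSource       = "GBPJPY";  // source symbol (must exist at the broker)
input double InpSpreadPoints = 10;        // fixed spread to impose, in points

void OnStart()
  {
   string custom = InpSource + ".fixed";
   // Create the custom symbol once, inheriting the source's specification;
   // this fails harmlessly if it already exists
   CustomSymbolCreate(custom, "Custom\\Fixed", InpSource);

   double   point = SymbolInfoDouble(InpSource, SYMBOL_POINT);
   MqlTick  ticks[];
   datetime from = D'2024.01.01', to = D'2024.02.01';   // illustrative range
   int n = CopyTicksRange(InpSource, ticks, COPY_TICKS_ALL,
                          (ulong)from * 1000, (ulong)to * 1000);
   if(n <= 0) { Print("CopyTicksRange failed: ", GetLastError()); return; }

   for(int i = 0; i < n; i++)
      ticks[i].ask = ticks[i].bid + InpSpreadPoints * point;   // force a fixed spread

   if(CustomTicksReplace(custom, (long)from * 1000, (long)to * 1000, ticks) < 0)
      Print("CustomTicksReplace failed: ", GetLastError());
   else
      PrintFormat("%d ticks written to %s", n, custom);
  }

Then select the ".fixed" symbol in the tester with “Every tick based on real ticks” modeling. I haven’t verified how this interacts with swaps or commissions, so treat it as a sketch, and corrections are welcome.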
Read up on how to use Tickstory. You can add your own spread before importing the data into MT4. You can also add the Tickstory EA to a live chart to record all your symbol specifications, including the current spreads, although you may need to do this during a live trading session when spreads are normalised; a bare-bones snapshot script is below.
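If you cannot find that EA, you can snapshot the main specs yourself with a one-shot script (common MQL4/MQL5 subset), run on a live chart while spreads are normalised:

void OnStart()
  {
   // One-shot snapshot of the specs worth mirroring in your tester data;
   // spread here is the live spread in points at the moment you run it
   PrintFormat("%s digits=%d point=%g spread=%d ticksize=%g contractsize=%g stoplevel=%d",
               _Symbol,
               (int)SymbolInfoInteger(_Symbol, SYMBOL_DIGITS),
               SymbolInfoDouble(_Symbol, SYMBOL_POINT),
               (int)SymbolInfoInteger(_Symbol, SYMBOL_SPREAD),
               SymbolInfoDouble(_Symbol, SYMBOL_TRADE_TICK_SIZE),
               SymbolInfoDouble(_Symbol, SYMBOL_TRADE_CONTRACT_SIZE),
               (int)SymbolInfoInteger(_Symbol, SYMBOL_TRADE_STOPS_LEVEL));
  }

Unlike the recorder EA, this captures a single moment, so run it a few times across the session.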
Hi,
Thanks for the tip. Before I dive in, I wanted to point out two resources that might be helpful:
Token reference for file exports
You can find the full list of export-file tokens (timestamp formats, price fields, volume, etc.) here:
https://tickstory.com/help/tickstorylite/doku.php?id=token_reference_for_file_exports
Custom tick-format example
On the TickStory forum, one user demonstrated how to add a fixed premium to the bid price by defining a custom format string:
{Timestamp:yyyyMMdd},{Timestamp:HH\:mm\:ss},{BidPrice},{BidPrice+0.00001},{BidPrice+0.00001},{BidVolume:F0}
In this example, the “0.00001” is just an arbitrary amount added on top of the bid price to create a constant spread. Note that the offset has to match the symbol’s quoting scale: 0.00001 is one point on a 5-digit pair, while a 3-digit JPY pair such as GBPJPY would need something like 0.001.
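To sanity-check an export before importing it, something like this MQL5 script can print the first few spreads. It assumes the CSV ended up under the terminal’s MQL5\Files folder and uses the exact column order from the format string above; the file name is made up:

void OnStart()
  {
   // Hypothetical file name; adjust to wherever your export landed
   int h = FileOpen("GBPJPY_ticks.csv", FILE_READ | FILE_CSV | FILE_ANSI, ',');
   if(h == INVALID_HANDLE) { Print("open failed: ", GetLastError()); return; }

   for(int i = 0; i < 5 && !FileIsEnding(h); i++)
     {
      string d   = FileReadString(h);                  // {Timestamp:yyyyMMdd}
      string t   = FileReadString(h);                  // {Timestamp:HH:mm:ss}
      double bid = StringToDouble(FileReadString(h));  // {BidPrice}
      double ask = StringToDouble(FileReadString(h));  // {BidPrice+offset}
      FileReadString(h);                               // duplicate ask column
      FileReadString(h);                               // {BidVolume:F0}
      PrintFormat("%s %s bid=%g ask=%g spread=%g", d, t, bid, ask, ask - bid);
     }
   FileClose(h);
  }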
Could you let me know if that’s the approach you’ve been using to set your own spreads? And if there are any other tokens or formatting tricks you’ve found useful for MT5 imports, I’d love to hear about them.
Thanks again!
That is one way, yeah, but it is also the hard way :D
This is how I usually do it: see here <<--- this method got my tick data to 97% the same as my broker's.
Or import the broker specs from the broker tick data, as described here, also using Tickstory.
Or, a third method, also using Tickstory: there is the Tickstory "EA". This method may be the fastest, but it also needs to be done at a time when spreads are normalised; you can always modify the spreads and commissions for each symbol yourself with the methods above.
Remember, as Miguel said, the data can be modified by MT4, and also by MT5, before the strategy tester is opened. There is a small process to follow to block this from happening, but you will need to redo it before every single backtest. There is a write-up on Tickstory; sorry, I cannot find the link myself, but there was also a thread on this site about it some time ago. Maybe you can find it.

Hello everyone,
I’ve been experimenting with my Expert Advisor in MT4’s Strategy Tester using the genetic algorithm optimizer, and I’ve noticed a puzzling inconsistency: the parameter combination that delivers outstanding results during optimization almost flat-lines when I plug the same inputs into a standard backtest. I’d like to share my setup and observations in the hope that someone can shed light on what’s happening “under the hood” of MT4’s tester. This happens on both MetaTrader 4 Build 1440 and Build 1441, so it doesn’t seem tied to a single release.
Last week I ran a full optimization over the same GBPJPY data set (HST/FXT files downloaded from StrategyQuant), covering the exact same date range, using a fixed spread of 8 points and “Every tick” modeling. No visual mode, no code changes between runs, just pure Strategy Tester.
One of the standout runs (“Run 17”) delivered the following figures in the optimization report:
Seeing over 28 600 USD profit on nearly 1 800 trades looked incredibly promising. However, when I immediately switched to a standard backtest with those exact same externals, the results collapsed to roughly 300 USD net profit on about 1 600 trades, with a drastically different equity curve (see attached screenshot).
I’ve double-checked that:
Identical data (same HST/FXT files) are loaded in both optimization and backtest.
Tester settings—date range, spread, “Every tick” model—are unchanged.
No calls to OnTester() or other hooks could be artificially boosting the optimized results.
Visual mode remains off throughout.
Despite all that, the optimized results simply don’t reproduce in the backtest. Has anyone seen behavior like this before? Are there known quirks in MT4’s genetic optimizer, perhaps related to how it re-samples ticks, applies randomization within the spread, or caches data, that could explain the gap? Would generating custom FXT files with tick data from a separate source help ensure a one-to-one match? Or is there a logging or debug mode I’m missing that would reveal the exact tick sequence and execution times used during optimization? Failing that, I’m considering dumping the ticks myself and diffing the two runs; a sketch of what I mean follows.
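Something like this (MQL4 build 600+; as far as I know, files written by the tester land under tester\files, and I’m not sure each optimization pass keeps its own copy, so it may only be useful for comparing two single backtests):

int g_file = INVALID_HANDLE;

int OnInit()
  {
   // In the tester this is created under tester\files rather than MQL4\Files
   g_file = FileOpen("tickdump.csv", FILE_WRITE | FILE_CSV | FILE_ANSI, ',');
   return(g_file == INVALID_HANDLE ? INIT_FAILED : INIT_SUCCEEDED);
  }

void OnTick()
  {
   // One row per tick: modeled time, bid, ask
   FileWrite(g_file,
             TimeToString(TimeCurrent(), TIME_DATE | TIME_SECONDS),
             DoubleToString(SymbolInfoDouble(_Symbol, SYMBOL_BID), _Digits),
             DoubleToString(SymbolInfoDouble(_Symbol, SYMBOL_ASK), _Digits));
  }

void OnDeinit(const int reason)
  {
   if(g_file != INVALID_HANDLE)
      FileClose(g_file);
  }

Rename the dump between runs and diff the two files to see exactly where the streams part ways.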
Any insights, links to forum discussions, or best practices to guarantee fully reproducible strategy testing would be greatly appreciated. Thank you in advance for your time and expertise!
—
I’ve attached another screenshot below that illustrates an even more extreme case of the same problem. In this run the optimizer reported:
Net Profit: 17 744.93 USD
Trades: 558
Profit Factor: 3.61
Expected Payoff: 31.80
…but when I take those exact externals into a standalone backtest, I get only 1.51 USD net profit on 584 trades!
This is yet another clear example of how MT4’s genetic-algorithm optimization can produce results that simply don’t reproducibly match a regular backtest, even though all data, modeling quality and settings remain identical.
—
Best regards,
Carl-Emil Bograd