Curve-Fitted Backtests vs Real Edge — What’s the actual filter?


I’ve been around MQL5 for quite a while, coding EAs, and I keep coming back to the same question:

What really separates one product from another in this marketplace when almost anyone can present a curve-fitted backtest?

What seems to be happening in practice is that presentation is rewarded more than actual robustness.

This creates an environment where well-presented systems can easily attract less technical users, who may not be able to distinguish between genuine robustness and over-optimized strategies.


The core issue, in my view, is that there is no real filtering mechanism that can distinguish a system with a genuine edge from one that is simply optimized on historical data.

Maybe it would make sense to introduce a second layer of evaluation beyond basic technical validation.

For example:

1) groups of experienced backtesters who can identify curve fitting;

2) evaluation under live/demo conditions, e.g. convergence: whether the strategy's behavior remains consistent after running live for 100-200 trades;

3) or, more generally, a framework that emphasizes robustness rather than just tester results.
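Point 2 could even be automated. As a rough illustration (in Python rather than MQL5, purely for brevity), one could compare the distribution of per-trade returns from the live/demo run against the backtest's per-trade returns using a two-sample Kolmogorov-Smirnov-style statistic. The function names, the 0.25 threshold, and the synthetic data below are all my own assumptions for the sketch, not any existing marketplace mechanism:

```python
def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the largest gap
    between the empirical CDFs of the two samples."""
    sa, sb = sorted(a), sorted(b)
    d = 0.0
    for x in sa + sb:
        cdf_a = sum(1 for v in sa if v <= x) / len(sa)
        cdf_b = sum(1 for v in sb if v <= x) / len(sb)
        d = max(d, abs(cdf_a - cdf_b))
    return d

def looks_consistent(backtest_returns, live_returns, max_ks=0.25):
    """Crude convergence check: flag the strategy as inconsistent once
    the live trade-return distribution drifts further than max_ks from
    the backtest's. The 0.25 cutoff is illustrative, not calibrated."""
    return ks_statistic(backtest_returns, live_returns) <= max_ks

# Synthetic per-trade returns (deterministic grids, demo only):
backtest = [i / 50 - 1 for i in range(100)]   # ~uniform on [-1, 1)
live_ok  = [j / 25 - 1 for j in range(50)]    # same distribution
live_bad = [j / 25 - 2 for j in range(50)]    # shifted a full unit down

print(looks_consistent(backtest, live_ok))    # → True
print(looks_consistent(backtest, live_bad))   # → False
```

A real filter would of course need proper sample-size handling and multiple metrics (drawdown profile, win-rate drift, holding times), but even a crude distributional check like this would catch an EA whose live behavior diverges sharply from its tester results after 100-200 trades.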


Because right now, distinguishing a genuinely robust system from a well-presented backtest remains difficult — even for experienced users.

What I have seen is that live signals are what actually demonstrate a bot's validity...