Strategy Tester still broken in build 2007

 

Two things are broken:


* After optimization, the Strategy Tester resets all optimization parameters to their default settings. (Totally unacceptable, unworkable) 

* Selecting 'Run Single Test' from the 'Optimization Results' tab does NOT copy the settings to the Inputs tab. (Previous versions always did.)  


Please fix ASAP. 

 
I have this problem too
 
Decler:
I have this problem too

Same here. Have you found a solution?

 
Same here. I wish we could talk about this a bit. Can I enter the inputs directly and then immediately save them to a file? The tester doesn't even want to load them, on top of not filling in the previous optimization results. If this were something making a lot of money you'd think people would be on top of it, but I only find a few people saying anything in one forum.
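If it helps, a possible workaround is to have the EA itself write out the input values each test actually ran with, so they can at least be saved and re-entered by hand. A minimal MQL5 sketch, with hypothetical input names; the file ends up in the Files folder of whichever terminal or agent runs the test:

//+------------------------------------------------------------------+
//| Workaround sketch: dump the inputs this run actually used        |
//+------------------------------------------------------------------+
input int    InpFastPeriod = 12;   // hypothetical example inputs
input int    InpSlowPeriod = 26;
input double InpLots       = 0.10;

int OnInit()
  {
   // Write the current input values to a text file so they can be
   // compared with (or copied back into) the tester's Inputs tab.
   int handle = FileOpen("last_inputs.txt", FILE_WRITE|FILE_TXT|FILE_ANSI);
   if(handle != INVALID_HANDLE)
     {
      FileWrite(handle, "InpFastPeriod=", IntegerToString(InpFastPeriod));
      FileWrite(handle, "InpSlowPeriod=", IntegerToString(InpSlowPeriod));
      FileWrite(handle, "InpLots=",       DoubleToString(InpLots, 2));
      FileClose(handle);
     }
   return(INIT_SUCCEEDED);
  }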
 
This might be related to Win 10. It's working here under Win 7.
 
No, same problem here on Win 10 and Win 7.
 
Hello, Dex Peters and everyone.

Here is a suggestion: how about downgrading to a previous version?

I have the Build 1940 terminal, editor and tester on my PC.

Please feel free to use them.

However, the files are too large to attach here.

Is there somewhere we could share the files?

I'll be waiting for your reply.

Also, I am a native Japanese speaker, so my English may read a little oddly; please bear with me.
 

In build 2006 and build 2007 I am seeing similar issues with parameter transfer from the Optimisation Results tab to the Inputs tab, although 2007 seems better.

With 2006, fields using enumerated types, such as MA Smoothing Method and Applied Price, are often not copied across correctly. I have not tested this in 2007.
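To illustrate the kind of fields involved, here are typical enum-typed inputs as they might appear in an MQL5 EA (the names are just an example); it is these drop-down values, rather than plain numeric inputs, that seem to get lost when copying back:

// Hypothetical example of enum-typed inputs of the sort that fail to copy back
input ENUM_MA_METHOD     InpMAMethod     = MODE_SMA;     // MA smoothing method
input ENUM_APPLIED_PRICE InpAppliedPrice = PRICE_CLOSE;  // applied price
input int                InpMAPeriod     = 14;           // plain numeric input, for comparison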

Comparing how build 1940 used to behave with builds 2006/2007, what I have also noticed, and this is a particularly bad problem, is the large difference between the 'Result' value for each parameter combination in an optimisation run and the result shown when invoking a single test with the same optimised parameters.

For example, a given line in the Optimisation Results tab might show 245 trades and a loss of 176.2, yet when running those parameters as a single test to see the equity curve, the end result (confirmed from the backtest report) was 284 trades (39 more) and a loss of 822.62 (a further 646.42).  

I tried this without cloud optimisation, but with and without local network tester agents, using 'Every tick based on real ticks' and connected to a live account for data.

This difference means I don't know which to trust: the optimisation results or the single-run result.

Without trust in the back tests, I can't proceed with testing and development.
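One way to cross-check independently of the tester UI is to have the EA report its own final statistics at the end of every pass via the standard OnTester() hook. A minimal sketch; the logging format is just an assumption, and during optimisation the output appears in the agents' journals:

// Cross-check sketch: print this pass's statistics so the figures shown in the
// Optimisation Results tab can be verified against the EA's own view of the run.
double OnTester()
  {
   double profit = TesterStatistics(STAT_PROFIT);   // net profit of the pass
   double trades = TesterStatistics(STAT_TRADES);   // number of trades in the pass
   PrintFormat("Pass finished: trades=%.0f, profit=%.2f", trades, profit);
   return(profit);                                  // used as the custom criterion
  }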

I have even tested the demo applications that come with MT5, for example the Moving Averages EA and the ExpertMAMA EA. The difference between the Results tab and a single run for Moving Averages was not large, but for the Wizard / Standard Library based ExpertMAMA it was much worse, so I am concerned that the Standard Library is exposing some calculation bugs. 

 
Mark Flint:

In build 2006 and build 2007 I am seeing similar issues with parameter transfer from the Optimisation Results tab to the Inputs tab, although 2007 seems better. [...]

See the reproducible case and report here: https://www.mql5.com/en/forum/305758 

MT5 build 2007 - optimisation is broken. Rerun of tests silently fail. Often optimisation results give wildly different results when 'Run Single Test'. Number of CPU cores affects results. (www.mql5.com, 2019.03.05)
 

I have the same problem with build 2009, running it on a VPS with Windows Server 2012 R2 Datacenter Edition (64-bit). The OS is updated as of 03/Apr/19.

Strangely, with build 2007 on an un-updated Windows Server 2012 R2, I don't see the problem...




 
I still have the same problem with my backtests.