Strategy Tester missing ticks - page 2

 
Alain Verleyen:

I have serious doubts that a lack of resources could lead to missing ticks; it's not impossible, but it is unlikely. It should only be checked as a last resort.

But as I understand it, these tests come from different MT5 instances, maybe even from different computers, right?

So, about the generated ticks: they are based on M1 data, so you need to check that these data are the same between instances; if they are not, the generated ticks will be different. Can you check that? Please also check whether there is any difference in the generated ticks between tests on the same instance.
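One practical way to do that check is to run a small script on each instance that reads the M1 bars for the test range and prints a simple checksum; identical M1 history should produce identical output everywhere. The sketch below is only an illustration (the symbol name and date range are placeholder values, not taken from this thread):

//+------------------------------------------------------------------+
//| Illustrative script: prints a naive checksum of the M1 bars in a |
//| fixed range so the output can be compared between instances.     |
//+------------------------------------------------------------------+
#property script_show_inputs

input string   InpSymbol = "WIN$N";              // symbol to check (placeholder)
input datetime InpFrom   = D'2021.05.01 00:00';  // start of the range
input datetime InpTo     = D'2021.06.01 00:00';  // end of the range

void OnStart()
  {
   MqlRates rates[];
   int copied = CopyRates(InpSymbol, PERIOD_M1, InpFrom, InpTo, rates);
   if(copied <= 0)
     {
      Print("CopyRates failed, error ", GetLastError());
      return;
     }

   // Naive checksum over OHLC prices and tick volume; identical M1
   // history should give identical sums on every instance.
   double price_sum  = 0.0;
   long   volume_sum = 0;
   for(int i = 0; i < copied; i++)
     {
      price_sum  += rates[i].open + rates[i].high + rates[i].low + rates[i].close;
      volume_sum += rates[i].tick_volume;
     }

   PrintFormat("%s: M1 bars=%d  price checksum=%.5f  tick volume=%I64d",
               InpSymbol, copied, price_sum, volume_sum);
  }

If two instances print different bar counts or checksums for the same range, the ticks generated from that M1 data will differ as well, regardless of the tester settings.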

Alain,

I've changed the broker from XP Investimentos to Modalmais, and now I get the reliability I was expecting. The best result I had with XP Investimentos, over a one-month data range, was 455 trades, but it was always oscillating (between 276 and 455 trades).

With Modalmais, running the same script with up to 10 instances (the maximum I tried, but I'm guessing it would support more), it achieved 471 trades, always stable, always the same number of trades across 4 test sessions. I executed everything with Modal=0.

Thank you again to you and Fernando for helping.

 

Continuing...

About your question: yes, these instances are installed on multiple machines, and they are all controlled from one of the machines. After changing the broker and running the test several times (the instances execute each test randomly, taken from a push/pop stack of tasks), I always get the same result. Considering that I can't predict which instance will run which test (each instance picks a test based on a random timer), and I always get the same result, I can conclude that all of them are now receiving the same number of ticks.
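A simple way to confirm that every pass really receives the same number of generated ticks is to have the tested EA count its OnTick calls and print the total at the end of the pass, so the totals can be compared across instances and machines. This is only a minimal sketch, not the poster's actual script:

// Minimal tick-counting sketch for the Strategy Tester.
long g_ticks = 0;                       // OnTick events received in this pass

void OnTick()
  {
   g_ticks++;                           // count every generated tick
  }

void OnDeinit(const int reason)
  {
   // Printed to the tester journal when the pass finishes.
   PrintFormat("Ticks received in this pass: %I64d", g_ticks);
  }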

Moreover, I have a routine that synchronizes all the instances before starting the tests; it deletes most of the cached data, such as the "Ticks", "History" and "Files" directories (a clean-up process), and ensures that all the machines and instances have the same script version. At the end of the test, that routine gathers all the results and summarizes them in a single report.

I'm suggesting it was some broker issue because, for example, in one test session the date 5/May/2021 may be processed by instance 3 on machine 1, while in the next test session the same date may be processed by instance 6 on machine 2. The push/pop on the stack deliberately picks the tasks (test dates) at random.
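For illustration only, the random pop described above could look roughly like the sketch below; the array, the function name and the idea of keeping the queue inside a terminal are assumptions, since the poster's controller actually runs outside MetaTrader:

// Illustrative sketch of a task pool of test dates from which each free
// instance pops one entry at a random position.

datetime g_tasks[];                     // pending test dates, one per tester pass

// Remove and return a task at a random position; returns 0 when empty.
datetime PopRandomTask()
  {
   int total = ArraySize(g_tasks);
   if(total == 0)
      return 0;

   int idx = (int)(MathRand() % total); // random position in the pool
   datetime task = g_tasks[idx];

   g_tasks[idx] = g_tasks[total - 1];   // fill the gap with the last entry
   ArrayResize(g_tasks, total - 1);     // shrink the pool

   return task;
  }

void OnStart()
  {
   MathSrand((int)TimeLocal());         // seed the pseudo-random generator

   // Example pool: three test dates (placeholder values).
   datetime pool[] = { D'2021.05.05', D'2021.05.06', D'2021.05.07' };
   ArrayCopy(g_tasks, pool);

   // Pop until the pool is empty; the order changes from run to run.
   datetime t;
   while((t = PopRandomTask()) != 0)
      Print("Next test date: ", TimeToString(t, TIME_DATE));
  }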