MT5 optimization: how to speed up runtime

 

Hello, I am using a server to run MT5. When I run optimisations, I can split the work across up to 8 CPU cores, but it still does not run particularly fast.

I was wondering if there are any best practices on speeding up optimisations?


Thanks a lot in advance.

 
KjLNi: Hello, I am using a server to run MT5. When I run optimisations, I can split the work across up to 8 CPU cores, but it still does not run particularly fast. I was wondering if there are any best practices on speeding up optimisations?

You will need to "optimise" your code to be more efficient.

Things such as checking OHLC or indicator buffer data only when a new bar forms, or caching values so they are not fetched on every tick, along with many other techniques, can improve the speed and efficiency of your code.

It is difficult to offer exact, detailed suggestions without knowing your level of coding skill and knowledge, or the condition of your current code.

 
  1. EAs: Don't do per tick what you can do per bar, or on open.
    If you are waiting for a level, don't re-evaluate on every tick; wait until price reaches it (or until a new bar starts, and then recalculate).
    If you are waiting for an order to close, only check when OrdersTotal() (or the MT5 equivalent) has changed.
              How to get backtesting faster ? - MT4 - MQL4 programming forum (2017)

  2. Indicators: Code it properly so it only recomputes bar zero (after the initial run.)
              How to do your lookbacks correctly. (2016)
              3 Methods of Indicators Acceleration by the Example of the Linear Regression - MQL5 Articles. (2011)
    Or reduce Tools → Options (Ctrl+O) → Charts → Max bars in chart to something reasonable (such as 1,000).
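The per-bar advice in point 1 usually comes down to a simple new-bar check at the top of OnTick. Here is a minimal sketch in MQL5; `RecalculateLevels()` is a hypothetical placeholder for whatever heavy work the EA does:

```mql5
// Hypothetical OnTick handler: heavy work runs only once per bar.
datetime g_lastBarTime = 0;   // opening time of the last bar we processed

void OnTick()
  {
   datetime barTime = iTime(_Symbol, _Period, 0); // opening time of the current bar
   if(barTime != g_lastBarTime)                   // a new bar has just started
     {
      g_lastBarTime = barTime;
      RecalculateLevels();                        // hypothetical heavy routine
     }
   // per-tick logic stays minimal, e.g. comparing price to cached levels
  }
```

In the tester this means the expensive path runs once per bar instead of once per tick, which alone can cut runtime by an order of magnitude on tick-based models.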

 

Hello Gentlemen,

thanks a lot for your tips!

I tried a couple of things. With regard to per-tick vs. new-bar handling, I was already in good shape.

But I ran the profiler and found something that was being repeated over and over again.

So I made a change there, and as a result I reduced the runtime of one scenario from 5 minutes 40 seconds down to just 21 seconds!

So overall, thanks a lot, and I can recommend the profiling function.

 
KjLNi #: Hello Gentlemen, thanks a lot for your tips! I tried a couple of things. With regard to per-tick vs. new-bar handling, I was already in good shape. But I ran the profiler and found something that was being repeated over and over again. So I made a change there, and as a result I reduced the runtime of one scenario from 5 minutes 40 seconds down to just 21 seconds! So overall, thanks a lot, and I can recommend the profiling function.

Well done...

One habit I have picked up over the years from other systems is to avoid defining too many local variables in functions that are called frequently (e.g. on every tick) unless absolutely necessary. Instead of allocating and deallocating memory for variables, objects, or structures on every call, I instantiate them once at startup and reuse them for the entire runtime.

In theory this should be faster, but I have yet to profile my code to see whether it makes a significant difference in MQL5; so far I have not had any noticeable performance problems.
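The allocate-once-and-reuse idea can be sketched like this in MQL5. This is only an illustration of the pattern, not anyone's production code; the buffer size of 100 is arbitrary:

```mql5
// Sketch: reuse one pre-sized buffer instead of declaring a local array per tick.
double g_closes[];               // module-level dynamic array, sized once

int OnInit()
  {
   ArrayResize(g_closes, 100);   // reserve capacity once at startup
   return(INIT_SUCCEEDED);
  }

void OnTick()
  {
   // CopyClose fills the buffer in place; once it has been sized to 100
   // elements, no further reallocation happens on subsequent calls.
   int copied = CopyClose(_Symbol, _Period, 0, 100, g_closes);
   if(copied > 0)
     {
      // ... work with g_closes without any per-tick allocation ...
     }
  }
```

The same reasoning applies to objects and structures: construct them in OnInit and reset their fields each call rather than creating fresh instances.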

 

Thanks, just to let you know the reason in my case:

When I test historical values, I call up a file with some information in it (recommendations).

To avoid reading the file multiple times, I was checking whether the file had changed in the meantime.

But that check is what took so much time ...

So now I have declared a global variable (bool) to record that the file has already been read.

Then I check against this variable instead.

So it seems that file operations are something that takes particularly long.
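The read-once pattern described above might look like the following sketch; the file name and the line-per-record layout are assumptions for illustration only:

```mql5
// Sketch of the read-once pattern: the file is parsed a single time,
// after which a cheap in-memory flag short-circuits all disk access.
bool   g_fileLoaded = false;
string g_lines[];                // cached recommendations

void LoadRecommendations()
  {
   if(g_fileLoaded)              // in-memory check: no disk I/O at all
      return;
   int h = FileOpen("recommendations.csv", FILE_READ|FILE_TXT|FILE_ANSI);
   if(h == INVALID_HANDLE)
      return;
   while(!FileIsEnding(h))       // read every line into the cache
     {
      int n = ArraySize(g_lines);
      ArrayResize(g_lines, n + 1);
      g_lines[n] = FileReadString(h);
     }
   FileClose(h);
   g_fileLoaded = true;          // all later calls return immediately
  }
```

Even a "has the file changed?" check hits the filesystem, so replacing it with a boolean flag removes the disk from the hot path entirely, which matches the speedup reported above.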

 
KjLNi #: Thanks, just to let you know the reason in my case: When I test historical values, I call up a file with some information in it (recommendations). To avoid reading the file multiple times, I was checking whether the file had changed in the meantime. But that check is what took so much time ... So now I have declared a global variable (bool) to record that the file has already been read. Then I check against this variable instead. So it seems that file operations are something that takes particularly long.

That makes perfect sense: disk access was always the most important bottleneck in many of the VLDBs and enterprise systems I have tuned in the past.

If you can reduce disk I/O and do more of the processing in RAM, it always helps. In this age of in-memory and cloud computing such bottlenecks are less common, so well done for finding it.

 
KjLNi #: Thanks, just to let you know the reason in my case: When I test historical values, I call up a file with some information in it (recommendations). In order to avoid to read the file multiple times, I check if the file has been changed in the meantime. But this check is what took so much time ... So now I have declared a global variable (bool) to document that it has already been read. And then I check against this variable.

So it seems that doing a "file" operation is something that takes particularly long.

For testing purposes, embed the data from the file in the EA code itself as a globally scoped variable, such as an array or an array of structures, and recompile.

During testing use that data and don't do any file operations.

You can easily make your code detect whether it is running in the tester or live, and use one method or the other accordingly.
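A minimal sketch of that tester/live switch, assuming a hypothetical embedded data array and a hypothetical `LoadFromFile` helper for live mode:

```mql5
// Sketch: choose between embedded data and file reads depending on context.
// g_embedded[] is a made-up compiled-in copy of the file contents.
string g_embedded[] = { "EURUSD;BUY;1.1000", "GBPUSD;SELL;1.2500" };

bool InTester()
  {
   // true inside the Strategy Tester / optimiser, false on a live chart
   return (bool)MQLInfoInteger(MQL_TESTER);
  }

void LoadFromFile(string &out[])
  {
   // hypothetical live-mode reader: parse the recommendations file here
  }

void GetRecommendations(string &out[])
  {
   if(InTester())
      ArrayCopy(out, g_embedded);   // no file I/O at all during backtests
   else
      LoadFromFile(out);            // live mode still reads the real file
  }
```

`MQLInfoInteger(MQL_TESTER)` is the standard MQL5 way to detect tester execution; during optimisation this removes file operations from the run completely.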
