Hi all,
I was just curious whether anyone has any ideas on why this code (in a script) takes roughly 100,000 microseconds for the first CopyBuffer() call, while subsequent CopyBuffer() calls are considerably faster (<100 microseconds).
Similar calls to GetMicrosecondCount() inside the custom indicators show that the indicators' OnCalculate() functions are not responsible for the delay; they all finish within a couple of hundred microseconds.
Further testing also revealed that if multiple indicators are created alongside each other, the first CopyBuffer() call to any one of them has this delay, but subsequent CopyBuffer() calls to any of the indicators do not.
Example below (this uses iMAs, but again, I observed the same with multiple custom indicators):
Should CopyBuffer() (or whatever code/functions/processes run inside it) really take that long to retrieve a single value into an array?
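(The original example code was not captured in this copy of the thread; below is a minimal sketch of the kind of timing script described, using the standard iMA, CopyBuffer and GetMicrosecondCount MQL5 calls. The MA parameters are placeholders, not the original settings.)

```mql5
// Sketch (not the original example): time the first vs. second
// CopyBuffer() call on a freshly created iMA handle.
void OnStart()
  {
   int    handle = iMA(_Symbol, _Period, 14, 0, MODE_SMA, PRICE_CLOSE);
   double buf[];

   ulong t0 = GetMicrosecondCount();
   CopyBuffer(handle, 0, 0, 1, buf);   // first call: ~100,000 us observed
   ulong t1 = GetMicrosecondCount();
   CopyBuffer(handle, 0, 0, 1, buf);   // second call: <100 us observed
   ulong t2 = GetMicrosecondCount();

   PrintFormat("first: %I64u us, second: %I64u us", t1 - t0, t2 - t1);
   IndicatorRelease(handle);
  }
```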
For those who wonder why it might matter: I am collecting thousands of samples across several symbols by creating and destroying multiple indicator combinations with different settings.
I am only reading close, low and high data at each bar to make this as fast as possible. The Strategy Tester is not an option: it disallows destruction of handles (see here), and it crawls to a halt late in the process once tens of thousands of un-released handles pile up.
Brief as it is, this delay adds a considerable (×10 to ×100) increase in total running time, so I am seeking to eliminate it if possible.
Kind regards,
~J8
Probably lazy evaluation; it's possible that routines dealing with the cache are triggered by the first call to CopyBuffer(). A solution could be to not destroy the indicators: just allocate all the variations you need at the start of the script. Even if you have thousands of them, most would only keep updating the last value, so it shouldn't hit the CPU too much.
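A sketch of that approach, assuming the variations differ only in MA period (the period list below is hypothetical):

```mql5
// Sketch: allocate every handle variation up front instead of
// creating/destroying them per sample, so the slow first call
// is paid once per handle at startup.
int handles[];

void OnStart()
  {
   int periods[] = {5, 8, 13, 21, 34, 55};   // hypothetical settings
   ArrayResize(handles, ArraySize(periods));

   for(int i = 0; i < ArraySize(periods); i++)
     {
      handles[i] = iMA(_Symbol, _Period, periods[i], 0, MODE_SMA, PRICE_CLOSE);
      double warm[];
      CopyBuffer(handles[i], 0, 0, 1, warm); // absorb the slow first call
     }

   // ...sampling loop: all subsequent CopyBuffer() calls are fast...
  }
```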
https://www.mql5.com/en/docs/indicators
It seems perfectly normal to me: it's multi-threading and the launch of the indicator for the first time. The solution is the same as the problem: multi-threading.
Also, @Alexandre Borela is perfectly right, your second call to CopyBuffer() is irrelevant; the second part of the solution is to not release the handles and to optimize memory usage by reducing the max bars in chart.
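Since the first CopyBuffer() apparently blocks while the indicator is calculated in another thread, one way to make that wait explicit is to poll BarsCalculated() and only read once the handle reports ready. A sketch (BarsCalculated, GetTickCount and Sleep are standard MQL5 calls; the timeout value is an arbitrary choice):

```mql5
// Sketch: wait until an indicator handle has finished its initial
// calculation before the first CopyBuffer() read.
bool WaitReady(const int handle, const uint timeout_ms = 1000)
  {
   uint start = GetTickCount();
   while(BarsCalculated(handle) < 0)   // returns -1 while not yet ready
     {
      if(GetTickCount() - start > timeout_ms)
         return false;                 // gave up waiting
      Sleep(1);
     }
   return true;
  }
```

Note that Sleep() is available in scripts and Expert Advisors, but not inside indicators.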