Running an optimization in the MQL5 Cloud Network crashes with a 505 "out of memory" error

 
Hello,

I’ve developed a custom EA that has a maximum memory footprint of 1.5 GiB per agent/thread when running locally, with an average memory utilization of under 1 GiB (~600-800 MiB).

Whenever I run the current year period, that is from January 2025 until August 2025, I repeatedly get a critical error in the journal: error 505, “out of memory”.

If I run only the current month, the optimization works just fine.

Given that the docs specify that each agent admits up to 4 GiB, I’m not sure what the cause of the issue is, since the EA consumes less than half of that limit.

I have tried OHLC 1M modeling, but with that setting the journal outputs the same error and the agents crash as well.

I understand that, as a safety feature or a limit to keep me from consuming all my account credits, the different cloud nodes get deactivated, and I need to install a fresh terminal to be able to perform a new optimization run.

The EA has been thoroughly optimized, and its logic is simple enough not to require huge amounts of processing power or memory.

Have you come across the same issue?
Can you suggest any solutions?

Thanks,
gsus.fx
 
gsus.fx:
Hello,

Sounds like you need to debug your EA. This happens to me when I get recurring errors, such as 4756 and "not enough money" errors, when I backtest many EAs from the Market.

 
gsus.fx:

The 4 GB limit per agent in the Cloud is theoretical: in practice, the available memory is lower because each node adds history, buffers, arrays, optimisation results, and several passes can run in parallel.

This is why RAM peaks are reached over long periods, even if your EA does not reach 2 GB locally. Check dynamic arrays, free up indicators/resources, avoid unnecessary copies, and reduce data load or concurrency.

This is not a terminal error, but rather an inherent limitation of the Cloud network, so the solution involves adapting the code and usage to these conditions.
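To make the advice above concrete, here is a minimal sketch of that kind of cleanup in MQL5: release indicator handles when a pass ends, free dynamic arrays, and copy only the bars you actually need. The `hMA` handle and buffer sizes are assumptions for illustration, not anyone's actual EA.

```mql5
// Sketch of the cleanup suggested above (illustrative, not the actual EA).
int    hMA = INVALID_HANDLE;   // hypothetical indicator handle
double buf[];                  // dynamic buffer for indicator values

int OnInit()
  {
   hMA = iMA(_Symbol, PERIOD_CURRENT, 50, 0, MODE_SMA, PRICE_CLOSE);
   if(hMA == INVALID_HANDLE)
      return INIT_FAILED;
   return INIT_SUCCEEDED;
  }

void OnTick()
  {
   // Copy only the bars actually needed instead of large history chunks
   if(CopyBuffer(hMA, 0, 0, 3, buf) < 3)
      return;                  // data not ready yet; skip this tick
  }

void OnDeinit(const int reason)
  {
   // Release the handle so the tester agent can reclaim its memory
   if(hMA != INVALID_HANDLE)
      IndicatorRelease(hMA);
   // Free the dynamic array instead of letting it keep its peak size
   ArrayFree(buf);
  }
```

Doing this in every EA and custom indicator keeps each agent's footprint closer to what the pass actually needs.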

 
Miguel Angel Vico Alba #:

The 4 GB limit per agent in the Cloud is theoretical: in practice, the available memory is lower because each node adds history, buffers, arrays, optimisation results, and several passes can run in parallel.

This is why RAM peaks are reached over long periods, even if your EA does not reach 2 GB locally. Check dynamic arrays, free up indicators/resources, avoid unnecessary copies, and reduce data load or concurrency.

This is not a terminal error, but rather an inherent limitation of the Cloud network, so the solution involves adapting the code and usage to these conditions.

Thank you both for your comments.

I found that if I decrease the date range from one year to six months, the optimization runs properly.

Although I was looking forward to performing long-running optimizations to rule out any overfitting.

I’ve contacted the MQL support team, and we are trying to debug the reason.

Thanks
 
gsus.fx #:

Please update this thread when you have worked out how to avoid the issue and how you "debugged" your EA, with the reason why it occurred, besides decreasing the testing period, of course; because I am guessing that the cause of the error is likely that you get lots of errors during the period you removed from your test.

 
Michael Charles Schefe #:

Please update this thread when you have worked out how to avoid the issue and how you "debugged" your EA, with the reason why it occurred, besides decreasing the testing period, of course.


Hello,

Sure, I will comment here any updates to the case for visibility.

I now think I know what the issue is:

I’m using a custom indicator in the EA which uses several timeframes apart from the one set in my Strategy Tester settings.
I think this multiplies the amount of historical data that gets requested and loaded into memory/RAM.
If the custom indicator needs to load 3 timeframes, my EA loads 2 other indicators on another timeframe, and the Strategy Tester itself needs to load historical data for the timeframe I want to backtest, then it has to fetch quotes from every one of those timeframes to perform its calculations on each tick.

That’s why, if I choose a 1-year date range (e.g. 2024-2025), it’s going to load 5 × 1 year = 5 years of historical quote data.

I could probably try to cache the different indicator values for the selected periods, proxying the CopyRates/CopyBuffer calls or similar approaches, then bundle that cache as a resource together with the EA and load it into in-memory data structures to be queried on each pass.

That way I would only load the historical data for a single timeframe and avoid the issue stated above.
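A lighter-weight variant of the caching idea, which doesn't require a pre-built resource, is to refresh a higher-timeframe buffer only when a new bar of that timeframe appears instead of on every tick. The sketch below assumes a hypothetical handle `hInd` on H1 and a cache depth of 100 values; none of this is the actual EA's code.

```mql5
// Hypothetical sketch of caching a higher-timeframe indicator buffer:
// CopyBuffer is called once per new H1 bar, not once per tick.
int      hInd    = INVALID_HANDLE;   // indicator handle created elsewhere (e.g. OnInit)
double   cache[];                    // cached buffer values
datetime lastBar = 0;                // opening time of the last H1 bar we copied

double CachedValue(const int shift)
  {
   datetime curBar = iTime(_Symbol, PERIOD_H1, 0);
   if(curBar != lastBar)             // refresh only when a new H1 bar opens
     {
      if(CopyBuffer(hInd, 0, 0, 100, cache) == 100)
         lastBar = curBar;           // cache is now valid for this bar
     }
   int n = ArraySize(cache);
   if(shift >= n)
      return EMPTY_VALUE;            // not enough data cached yet
   return cache[n - 1 - shift];      // without ArraySetAsSeries, index 0 is the oldest value
  }
```

This trades a small amount of staleness within the current bar for far fewer history requests, which is exactly the pressure point in the multi-timeframe scenario described above.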

Thanks all,
I’ll keep you posted,
gsus.fx
 
gsus.fx #:

I concur with your observations regarding the multiple timeframes, as I have had a similar issue in the past.

As for CopyBuffer, it can be a very big issue. This comes up in a couple of threads every now and then. Developers/coders forget, or seem unaware, that its results need to be checked for valid/invalid values before allowing the EA to continue. If you do a search on this site, there are some good threads on this subject, discussed only recently.
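For anyone landing here from a search, this is roughly the validation pattern being described: check that the indicator has finished calculating, check CopyBuffer's return count, and reject EMPTY_VALUE slots. `hInd` and `GetSignal` are illustrative names, not from any particular EA.

```mql5
// Sketch of the CopyBuffer checks discussed above: never use the copied
// values without verifying them first.
bool GetSignal(const int hInd, double &value)
  {
   // Make sure the indicator has actually calculated some bars
   if(BarsCalculated(hInd) < 2)
      return false;

   double buf[2];
   int copied = CopyBuffer(hInd, 0, 0, 2, buf);
   if(copied < 2)
     {
      PrintFormat("CopyBuffer failed, copied=%d, error=%d", copied, GetLastError());
      return false;
     }
   if(buf[1] == EMPTY_VALUE)   // buffer slot not yet filled by the indicator
      return false;

   value = buf[1];             // series order not set, so the last element is the newest bar
   return true;
  }
```

Skipping these checks is how an EA ends up acting on garbage values, and in the tester it can also mask the real source of repeated errors in the journal.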