Optimiser Errors "no disk space in ticks generating function" - page 4

 
Michael Charles Schefe #:

All you said makes complete sense to me. And as for "higher memory functions", I do remember seeing memory mentioned many times in more advanced coding threads.

Yeah, in MQL5 we need to use simple but effective memory management... that's why the language is so much faster than Pine Script. It's almost like a poor man's C++ :D
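
A minimal sketch of what that usually means in practice: create indicator handles once in OnInit() and release everything in OnDeinit(), so each tester pass starts and ends clean. The moving average here is just a placeholder, not anyone's actual EA:

int    ma_handle = INVALID_HANDLE;
double buffer[];

int OnInit()
  {
   // create the handle once up front instead of inside OnTick()
   ma_handle = iMA(_Symbol, _Period, 20, 0, MODE_SMA, PRICE_CLOSE);
   if(ma_handle == INVALID_HANDLE)
      return(INIT_FAILED);
   return(INIT_SUCCEEDED);
  }

void OnDeinit(const int reason)
  {
   // release the handle and free the dynamic array so the agent can reclaim memory
   if(ma_handle != INVALID_HANDLE)
      IndicatorRelease(ma_handle);
   ArrayFree(buffer);
  }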

 
Imre Sopp #:


It is definitely under control, but 1 GB per instrument? That's nothing! Check out the images I posted in the 2nd comment: 5 GB per thread, but then I was also using "every tick based on real ticks" and an AUD account to test EURGBP. Once I changed the account currency from AUD to EUR, it dropped to 500 - 650 MB per thread. After that the memory never increased, but it did decrease by at least 100 - 250 MB until the start of the next "testing iteration" (as it is called in the journal); then memory went back up to the 650 MB and gradually fell away again until the next iteration started.

Imre Sopp #:

 But that's also a theory... Need to test... 

It's a shame that the MQL5 developers themselves never answer users' questions and we have to discover everything ourselves...

I would almost be willing to bet that even the devs don't know.

And each EA tested in the optimiser uses a different amount of memory, which seems to vary with how many inputs you are testing, albeit only a 50 - 100 MB (per thread) difference between testing 2, 3, 4 or 5 inputs.

 
Imre Sopp #:

I mean the result; it doesn't matter whether this optimizer is a visible GUI that we have to run or whether we get the same optimized values by processing the data stream...

I can't comment - if 1 GB per core stably maintains its value without eating into hard drive capacity, everything would seem to be fine? Although 1 GB per instrument seems like a lot to me. But then the strategy tester is built that way, and it would be wrong if memory accumulated massively on every pass. So I believe the main thing is that the data volume remains uniform while the strategy tester runs and does not grow massively. In that case, everything seems to be under control. There is no point in picking at why so much memory is used; the main thing is that the memory usage does not grow over time.

However, it would be interesting to test with programs optimized for small memory usage and speed, to see whether the memory usage in the optimizer is still that high. I'll try it out soon and let you know. 1 GB per instrument still seems crazy to me...
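
If anyone wants to measure this for themselves, a minimal sketch is to log the agent's memory footprint from inside the EA and watch whether the numbers stay flat across passes. MQL_MEMORY_USED and TERMINAL_MEMORY_USED are both reported in MB; logging once per bar just keeps the journal readable:

datetime last_bar = 0;

void OnTick()
  {
   datetime bar = iTime(_Symbol, _Period, 0);
   if(bar == last_bar)
      return;                        // log only once per new bar
   last_bar = bar;
   PrintFormat("EA memory: %d of %d MB, terminal/agent memory: %d MB",
               MQLInfoInteger(MQL_MEMORY_USED),
               MQLInfoInteger(MQL_MEMORY_LIMIT),
               TerminalInfoInteger(TERMINAL_MEMORY_USED));
  }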

It could also be that the tester itself keeps 1 GB of data in memory at a time for one instrument, because it would also be unthinkable if too little data were loaded into memory and disk I/O were constantly needed to refresh it.

 But that's also a theory... Need to test... 

It's a shame that the MQL5 developers themselves never answer users' questions and we have to discover everything ourselves...

It's not per instrument, it's per core, and each core is running 3 separate symbols across 3 years, so it's more like 0.33 GB of RAM per symbol for three years.

 

Have you compared these volumes with the size of the tick data you are using? As I said before, I unfortunately really don't know how the strategy tester technically handles the data. Is it possible that all the usable tick data is loaded into memory at once? I'll have my current project code ready soon, and then I can test it myself and let you know whether the strategy tester's data processing can be optimized at all by coding...

 
Imre Sopp #:

Have you compared these volumes with the size of the tick data you are using? As I said before, I unfortunately really don't know how the strategy tester technically handles the data. Is it possible that all the usable tick data is loaded into memory at once? I'll have my current project code ready soon, and then I can test it myself and let you know whether the strategy tester's data processing can be optimized at all by coding...

I don't think the total volume of the tick data matters too much, because if the tester were loading the entire available tick history (which I'm pretty sure it does not), then it wouldn't throw memory errors when I increase the date range of the backtest/optimization.
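
One way to sanity-check that is to copy the ticks for a short range and extrapolate the raw size; the dates below are placeholders, and note that CopyTicksRange itself loads the ticks into an array, so keep the range small:

void EstimateTickMemory()
  {
   MqlTick ticks[];
   ulong from_msc = (ulong)D'2024.01.01' * 1000;   // placeholder range, in ms
   ulong to_msc   = (ulong)D'2024.02.01' * 1000;
   int copied = CopyTicksRange(_Symbol, ticks, COPY_TICKS_ALL, from_msc, to_msc);
   if(copied > 0)
      PrintFormat("%d ticks = %.1f MB raw (sizeof(MqlTick) = %d bytes)",
                  copied, (double)copied * sizeof(MqlTick) / 1048576.0,
                  sizeof(MqlTick));
   else
      Print("CopyTicksRange failed, error ", GetLastError());
   ArrayFree(ticks);   // hand the memory straight back
  }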

 

Now getting 100% the same error despite making all the changes I mentioned earlier in the discussion. I have to assume, now, that it is a memory leak: since I made all these changes to my optimisations, the error has only occurred a handful of times, but every so often a couple of the cores will jump from 200 - 550 MB to 7 GB each within a minute. Whether I have my pagefile set at a ridiculous number or at the Windows-recommended value of 8.8 GB, I have continued to get the same error, albeit not very frequently anymore. It is a Marketplace EA, so I cannot guarantee that it is not the code; however, in the past I have gotten 100% the same error with many EAs from the Marketplace.

And yet I have NOT gotten this error, EVER, with any EA that I got from elsewhere!

But the reason I propose it is a "leak" is that each process still holds the 7 GB or more once the optimisation is stopped and I close the strategy tester. Normally this would clear the memory used by each metatester process, and after a few minutes that process is closed; after an error like the above, however, the memory amount is stuck. If I resume the optimisation or do a single backtest, the memory already claimed by each process stays stuck, and the new backtest just claims more memory on top of it. The only way to clear this claimed memory is to shut down MT5 - which does NOT clear the metatester processes - and then right-click each process and exit them one by one. That is the only way I can clear them, other than restarting the computer.
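
For what it's worth, instead of right-clicking each one, the stuck agents can usually be killed in one go from a command prompt; this assumes they run under the default process name of a 64-bit install:

taskkill /F /IM metatester64.exe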
 
Michael Charles Schefe #:

Now getting 100% the same error despite making all the changes I mentioned earlier in the discussion. I have to assume, now, that it is a memory leak: since I made all these changes to my optimisations, the error has only occurred a handful of times, but every so often a couple of the cores will jump from 200 - 550 MB to 7 GB each within a minute. Whether I have my pagefile set at a ridiculous number or at the Windows-recommended value of 8.8 GB, I have continued to get the same error, albeit not very frequently anymore. It is a Marketplace EA, so I cannot guarantee that it is not the code; however, in the past I have gotten 100% the same error with many EAs from the Marketplace.

And yet I have NOT gotten this error, EVER, with any EA that I got from elsewhere!

But the reason I propose it is a "leak" is that each process still holds the 7 GB or more once the optimisation is stopped and I close the strategy tester. Normally this would clear the memory used by each metatester process, and after a few minutes that process is closed; after an error like the above, however, the memory amount is stuck. If I resume the optimisation or do a single backtest, the memory already claimed by each process stays stuck, and the new backtest just claims more memory on top of it. The only way to clear this claimed memory is to shut down MT5 - which does NOT clear the metatester processes - and then right-click each process and exit them one by one. That is the only way I can clear them, other than restarting the computer.

I think your EAs are just badly coded, because mine should theoretically be using a lot more RAM per core compared to your situation, and it doesn't randomly max out. I think your Marketplace EAs are made with some sort of EA builder and are not really memory efficient. It could be a memory leak, but I don't know what would be causing that, since the not-enough-RAM thing is a separate issue. It also sounds like you've done some stuff to your computer to make it more memory efficient, which might interfere. Good luck solving it.

 
Casey Courtney #:

I think your EAs are just badly coded, because mine should theoretically be using a lot more RAM per core compared to your situation, and it doesn't randomly max out. I think your Marketplace EAs are made with some sort of EA builder and are not really memory efficient. It could be a memory leak, but I don't know what would be causing that, since the not-enough-RAM thing is a separate issue. It also sounds like you've done some stuff to your computer to make it more memory efficient, which might interfere. Good luck solving it.

I was able to find the iteration where the last error happened. I believe it happens when pages of stop-level errors occur; I think that because these invalid-stops errors show up for pages upon pages when I run a single test with the settings from the optimiser. However, I thought the Marketplace validated all EAs for this error before allowing them to be published. And yet an hour later, trades open without any such errors.
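
For reference, those invalid-stops errors normally mean the EA is placing SL/TP closer to the price than the broker's minimum stop distance. A minimal sketch of the check a well-coded EA would do before sending the order (the helper name is mine, not from any of the EAs discussed here):

bool StopsAreValid(double price, double sl, double tp)
  {
   // SYMBOL_TRADE_STOPS_LEVEL is the broker's minimum stop distance in points
   double min_dist = SymbolInfoInteger(_Symbol, SYMBOL_TRADE_STOPS_LEVEL) * _Point;
   return(MathAbs(price - sl) >= min_dist && MathAbs(tp - price) >= min_dist);
  }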

 

The seller of this last EA had not responded to support or to me since May 15, so support has given me a refund today, which is VERY disappointing, since the strategy seems to work on a demo account despite the pages of these errors.

Casey Courtney #:

I think your EAs are just badly coded.

Agreed. And this is what I assumed it was for years now, but the changes discussed above got me into a false sense of euphoria, thinking that the issue was "down to just me, the user". Apparently the changes I made to my process have only improved things "so much".