I finished the test yesterday, so I am no longer able to give you any screenshots of the current situation.
The test period was the whole year of 2022, with "Every tick based on real ticks", a 50 ms delay, and "Slow complete" optimisation (about 3000 passes).
The EA itself is 678 lines and is based on a two-EMA cross (of tick data, not OHLC), using ATR and StdDev (of OHLC) for volatility calculations and risk assessment.
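Since the EA source isn't posted, the following is only a rough sketch of how that core idea (EMAs updated per tick rather than per bar, plus ATR and StdDev handles on OHLC for the volatility side) might be wired up in MQL5; all names and period values are illustrative, not the OP's.

```mql5
// Illustrative sketch only: two EMAs updated on every tick (not per bar),
// with ATR and StdDev indicator handles on OHLC data for volatility context.
// Inputs are placeholders, not the OP's settings.
input int FastTicks = 50;    // hypothetical fast EMA length, in ticks
input int SlowTicks = 200;   // hypothetical slow EMA length, in ticks
input int AtrPeriod = 14;
input int StdPeriod = 20;

double fastEma = 0.0, slowEma = 0.0;
bool   seeded  = false;
int    atrHandle = INVALID_HANDLE, stdHandle = INVALID_HANDLE;

int OnInit()
{
   atrHandle = iATR(_Symbol, PERIOD_CURRENT, AtrPeriod);
   stdHandle = iStdDev(_Symbol, PERIOD_CURRENT, StdPeriod, 0, MODE_SMA, PRICE_CLOSE);
   return (atrHandle == INVALID_HANDLE || stdHandle == INVALID_HANDLE)
          ? INIT_FAILED : INIT_SUCCEEDED;
}

void OnDeinit(const int reason)
{
   IndicatorRelease(atrHandle);
   IndicatorRelease(stdHandle);
}

void OnTick()
{
   MqlTick tick;
   if(!SymbolInfoTick(_Symbol, tick))
      return;

   double price = (tick.bid + tick.ask) * 0.5;   // mid price of the incoming tick
   if(!seeded) { fastEma = slowEma = price; seeded = true; return; }

   // Standard exponential smoothing, but driven by ticks instead of bars.
   fastEma += 2.0 / (FastTicks + 1.0) * (price - fastEma);
   slowEma += 2.0 / (SlowTicks + 1.0) * (price - slowEma);

   // Volatility context from the OHLC-based indicators.
   double atr[], sd[];
   if(CopyBuffer(atrHandle, 0, 0, 1, atr) != 1) return;
   if(CopyBuffer(stdHandle, 0, 0, 1, sd)  != 1) return;

   // A real EA would make its trade and risk decisions here from the EMA cross
   // and the ATR/StdDev readings; this sketch only displays them.
   Comment(StringFormat("fastEMA=%.5f  slowEMA=%.5f  ATR=%.5f  StdDev=%.5f",
                        fastEma, slowEma, atr[0], sd[0]));
}
```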
The necessary OHLC and tick data from the broker had already been downloaded (from previous runs) when I ran the test.
Those were the honest values I presented. I have never had the MetaTester overwhelm my computer.
Also, I usually run the optimisations using only 4 out of 8 threads, so that I can continue doing other work.
PS! It was trading 4 symbols simultaneously, each one independent of the others in terms of strategy. My aim was to analyse whether the parameters behaved well in such a scenario and whether the gains of one symbol helped compensate for the drawdown of another.
I have no doubt about your values, of course; I just don't understand how it's possible.
An idea: the value you showed was taken after the optimization had already been running for a while, so maybe the memory consumption was higher at the start and then decreased?
I'll check.
The screenshot was taken during the optimisations. It was about 25-30% of the way through at the time.
All of the passes used valid parameter values, and all of them produced trades. The smallest had 70+ trades, while the larger ones had about 2000+.
OK, my idea was the right one.
I started an optimization with a minimal EA (tester settings and EA attached).
I got this resource usage at the start:
Then after 20 minutes, the memory usage suddenly fell. The optimization was around 35% complete.
So obviously the values shown by Fernando were taken after this decrease.
Interesting. Thanks to the participants.
PS: This EA doesn't even trade, so don't waste your time on this Grail :-D
Very good investigative work. 👍
Hello! I would like to know what happened, did you upgrade the RAM? Did you change something?
I lost 9% of my SSD life for 1 year of genetic optimizations (big ones, like 10 years of tick data in 5 minutes and so on).
Any help is welcome. Thanks.
Considering the OP's post was almost 2 years ago, I think you will not get a response from them.
The likelihood of your having lost 9% of your SSD's lifespan seems unreal, even though MetaTester and the Strategy Tester do use a lot of resources in some scenarios; however, the reasons mentioned and described by the other responders almost certainly explain it, if you have in fact lost that much.
And without giving more details of your system, you are unlikely to get any more responses from others.
1. It's a RAM issue. There isn't enough memory to go around, so it starts paging to the SSD and probably freezes/slows down your computer a lot.
2. Are you using "Every tick based on real ticks" or "1 minute OHLC"? Real ticks eat up an enormous amount of RAM for me, even with what would be called an efficient EA (I don't use EA builders; I code them all myself). I still max out my RAM if I want to test a lot of symbols, and my current computer has 32 GB of RAM.
The way I avoid this problem is to use 1-minute OHLC, but to be able to sleep at night I also force everything inside my EA to react only on new M1 bar open/close ticks (a minimal sketch of that gate follows this post). This has the added benefit of being able to confidently optimize the EA on historical data and still expect it to perform the same on different brokers/platforms, because brokers have WILDLY different real-tick generation paths but will generally have the same open and close prices for M1 data (barring news events, but you can either close trades before them or hold trades through them, depending on the strategy).
It's also SO MUCH FASTER, I cannot stress this point enough. And you aren't really sacrificing anything; in fact, I now prefer this method and don't use real ticks at all unless I am checking differences between brokers. Also, I use M1 data from Dukascopy.
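Here is a minimal sketch of such a new-bar gate in MQL5 (not the poster's actual EA; the variable names are illustrative): the strategy code runs only on the first tick of each M1 bar and works from the just-closed bar's OHLC.

```mql5
// Run strategy logic once per new M1 bar and ignore every other incoming tick.
// Combined with "1 minute OHLC" modelling, decisions depend only on bar data,
// which is far more consistent across brokers than individual ticks.
datetime lastBarTime = 0;

void OnTick()
{
   // iTime(..., 0) returns the open time of the current, still-forming bar.
   datetime currentBarTime = iTime(_Symbol, PERIOD_M1, 0);
   if(currentBarTime == lastBarTime)
      return;                        // same M1 bar as before: do nothing
   lastBarTime = currentBarTime;

   // First tick of a new M1 bar: read the just-closed bar (shift 1), which is
   // stable across brokers in a way that tick-by-tick data is not.
   double prevOpen  = iOpen(_Symbol, PERIOD_M1, 1);
   double prevClose = iClose(_Symbol, PERIOD_M1, 1);

   // ...strategy logic based on closed-bar data would go here...
   Comment(StringFormat("last closed M1 bar: open=%.5f close=%.5f", prevOpen, prevClose));
}
```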
Forum on trading, automated trading systems and testing trading strategies
Libraries: TicksShort
fxsaber, 2025.09.26 21:34
Ticks were requested for 5.5 years, representing almost 500 million ticks. A request made through the standard CopyTicksRange would have required over 28 GB of RAM. In this implementation, it requires 10 times less. This allowed us to retrieve and store all 500 million ticks in memory even on a very old computer.
Since all ticks are stored in memory, they can be manipulated. In particular, they can be filtered—see the source code.
Such low-cost filling of custom symbols with ticks allows us to obtain a tick sequence identical to the tick history for these symbols in the Tester. This completely eliminates the influence of desynchronized bars of real symbols on the Tester's tick generation. Roughly speaking, the Tester only displays the ticks that were recorded.
As a reminder, there is a low-cost way to view data on existing ticks.
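For a sense of the arithmetic: sizeof(MqlTick) is 60 bytes, so roughly 500 million ticks returned by CopyTicksRange come to about 28 GB, which matches the figure quoted above. The sketch below is not the TicksShort implementation, only an illustration of the general idea that a compact layout can cut the per-tick cost by an order of magnitude; the field choices (bid stored as an offset in points plus the spread, with times, volumes and flags omitted) are hypothetical.

```mql5
// NOT the TicksShort format: a hypothetical compact layout to show why a
// smaller per-tick footprint matters. A full MqlTick is 60 bytes; the two
// parallel arrays below cost about 6 bytes per tick (int + ushort).
int    g_bidPoints[];     // bid as an offset in points from a base price (4 bytes)
ushort g_spreadPoints[];  // ask - bid in points                          (2 bytes)

bool CompressTicks(const MqlTick &ticks[], const double basePrice, const double point)
{
   int n = ArraySize(ticks);
   if(ArrayResize(g_bidPoints, n) != n || ArrayResize(g_spreadPoints, n) != n)
      return false;

   for(int i = 0; i < n; i++)
   {
      g_bidPoints[i]    = (int)MathRound((ticks[i].bid - basePrice) / point);
      g_spreadPoints[i] = (ushort)MathRound((ticks[i].ask - ticks[i].bid) / point);
   }
   return true;
}

// Script entry point: pull one day of info ticks and re-encode them.
void OnStart()
{
   MqlTick  ticks[];
   datetime to   = TimeCurrent();
   datetime from = to - 86400;                 // last 24 hours, illustration only
   int copied = CopyTicksRange(_Symbol, ticks, COPY_TICKS_INFO,
                               (ulong)from * 1000, (ulong)to * 1000);
   if(copied <= 0)
      return;

   double point = SymbolInfoDouble(_Symbol, SYMBOL_POINT);
   if(CompressTicks(ticks, ticks[0].bid, point))
      PrintFormat("Re-encoded %d ticks: ~%d KB instead of ~%d KB",
                  copied, copied * 6 / 1024, copied * 60 / 1024);
}
```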