MetaTrader 5 Strategy Tester and MQL5 Cloud Network - page 30
I'm afraid that with 24 agents on 8 cores (really 4 physical cores plus hyper-threading) you will spend all the CPU performance on the supporting infrastructure.
Running an excessive number of agents makes their PR (performance rating) drop drastically, which in turn cuts the payment by the same multiple.
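A toy illustration of the point above (my own back-of-the-envelope model, not the actual PR formula): once the number of agents exceeds the number of physical cores, each agent's effective throughput, and with it its measured PR, scales down roughly in proportion.

```python
# Toy model: effective per-agent throughput when oversubscribing cores.
# Assumes CPU time is shared evenly between agents; the real PR metric
# is more involved, so treat this as an illustration only.
def per_agent_share(physical_cores: int, agents: int) -> float:
    """Fraction of one core's throughput each agent gets."""
    return min(1.0, physical_cores / agents)

# 4 physical cores, 24 agents -> each agent runs at ~1/6 speed
print(per_agent_share(4, 24))   # ~0.167
print(per_agent_share(4, 4))    # 1.0
```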
I hadn't used the cloud in a while. I decided to use it for parameter optimization, and the way the cloud worked was a pleasant surprise.
As the saying goes, if you grind away at a distributed network system long enough, you get a good result.
All in all, the whole thing took no more than an hour and a half.
P.S. I toggled the cloud on the fly. After an internet dropout the remote agents disconnected, and then refused to reconnect (stuck in the "authorized" state for at least two genetic generations) - apparently the tester decided there was enough work on the cloud and let the free agents rest. I disconnected the cloud, the remote agents connected again, I turned the cloud back on - and ended up with a hang.
The network logic needs a little polishing to avoid such situations - for example, remember the maximum pass time, and if waiting for a pass takes more than twice that maximum, restart the same pass on the best free local (or remote) core.
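The re-dispatch idea above can be sketched as a simple watchdog. This is my own pseudologic in Python, not the tester's actual scheduler; all names are made up:

```python
import time

class PassWatchdog:
    """Flag a dispatched pass for re-dispatch if it has been waiting
    longer than 2x the slowest pass time observed so far
    (the heuristic suggested above)."""

    def __init__(self):
        self.max_pass_time = 0.0
        self.started = {}  # pass_id -> dispatch timestamp

    def on_dispatched(self, pass_id, now=None):
        self.started[pass_id] = time.monotonic() if now is None else now

    def on_finished(self, pass_id, now=None):
        now = time.monotonic() if now is None else now
        elapsed = now - self.started.pop(pass_id)
        self.max_pass_time = max(self.max_pass_time, elapsed)

    def stale_passes(self, now=None):
        """Passes that should be restarted on the best free local core."""
        now = time.monotonic() if now is None else now
        if self.max_pass_time == 0.0:
            return []  # no baseline yet
        return [pid for pid, t0 in self.started.items()
                if now - t0 > 2 * self.max_pass_time]

wd = PassWatchdog()
wd.on_dispatched("A", now=0.0)
wd.on_finished("A", now=10.0)      # baseline pass time: 10 s
wd.on_dispatched("B", now=12.0)
print(wd.stale_passes(now=25.0))   # 13 s elapsed < 20 s -> []
print(wd.stale_passes(now=40.0))   # 28 s elapsed > 20 s -> ['B']
```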
+ TerminalInfoInteger(TERMINAL_MEMORY_AVAILABLE) needs to be refined
+ the speed of genetics is limited by the speed of the weakest core. My cores have a PR of 160-180, while tasks in the cloud are handed to cores with PR as low as 100. As a result, on every generation my cores idle for a significant amount of time, waiting for responses from the cloud before a new population can be generated.
I think the 100 PR floor should be dropped and first priority given to agents whose PR is greater than the PR of the weakest local (or remote, if connected) core. Failing that, some load balancing must be done. For example, assume all passes run at the same speed on a given core (not true in real life, of course, but many EAs can, with some assumptions, be considered stable in testing time regardless of parameters). If a local core has PR 150 and a cloud core has PR 100, the local agent should be given 1.5 times as many tasks as the cloud agent. Alternatively, for lower-PR agents, don't hand cloud agents one task each; spread the tasks across a wider circle of agents. In that case downtime would be minimal. In general, I would like to see progress on this issue.
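The PR-proportional split described above can be sketched like this. This is a toy allocator of my own, not how the tester actually schedules; the names are illustrative:

```python
def split_tasks(total_tasks, core_prs):
    """Allocate tasks to cores proportionally to their PR so that,
    under the 'stable pass time' assumption, all cores finish a
    generation at roughly the same moment."""
    total_pr = sum(core_prs.values())
    shares = {name: total_tasks * pr / total_pr
              for name, pr in core_prs.items()}
    # Round down, then hand leftover tasks to the fastest cores.
    alloc = {name: int(s) for name, s in shares.items()}
    leftover = total_tasks - sum(alloc.values())
    for name in sorted(core_prs, key=core_prs.get, reverse=True)[:leftover]:
        alloc[name] += 1
    return alloc

# PR 150 local vs PR 100 cloud: the local agent gets 1.5x the tasks
print(split_tasks(100, {"local": 150, "cloud": 100}))  # {'local': 60, 'cloud': 40}
```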
In the last 12 hours, the network has hung up three more times :(
(And the genetics journals still show agents with PR < 100.)
By the way, has anyone tried running agents on an SSD? Considering how my drive starts crunching with 8 agents even without tasks, I suspect the SSD's write endurance would be used up quickly. And when testing a fairly light EA at full compute speed, the hard drive becomes the bottleneck. How many terabytes get pumped through the cache is a good question.
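A back-of-the-envelope wear estimate for the endurance question (the numbers here are purely illustrative assumptions; the actual per-agent write rate would have to be measured):

```python
def ssd_lifetime_days(write_rate_mb_s, agents, endurance_tbw, hours_per_day=24):
    """Days until the drive's rated TBW (terabytes written) is exhausted,
    assuming each agent writes at a constant rate round the clock -
    an assumption, not a measurement."""
    mb_per_day = write_rate_mb_s * agents * hours_per_day * 3600
    tb_per_day = mb_per_day / 1_000_000
    return endurance_tbw / tb_per_day

# e.g. 8 agents each writing 5 MB/s against a 300 TBW drive
print(round(ssd_lifetime_days(5, 8, 300)))  # ~87 days
```

Under these (invented) numbers the drive's rated endurance would be gone in under three months, which is why the question is worth measuring properly.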
Yes, such a thing exists (I mean an SSD), but I haven't run specific tests, as the server with such a drive is at the other end of the city. But IMHO, any system has a disk cache that smooths out frequent disk access.
I decided to optimize a simple grid EA (30-second timer, new M1 bar control) on all ticks for two pairs. My 4-core i5 (PR=160-170) and 8-core i7 (PR=170-180) optimized for about 90 (!) hours.
Then it turned out that the i5's passes ran twice as slowly (although, as I've written several times before, it used to be the other way around: i5 + WinXP x64 was faster than i7 + Win7 x64). At first I blamed the memory - the i7 has more of it.
Then I accidentally glanced at Task Manager and saw that the agents were running at the lowest priority (Low) - on both machines. While I managed to raise the priority to Normal on Win7, WinXP x64 doesn't allow it for some reason. After half a day with the new priority, testing time on the i7 was (seemingly :) ) reduced by several hours.
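For reference, raising a process's priority class can also be done programmatically through the WinAPI `SetPriorityClass` instead of clicking through Task Manager. A minimal Python sketch (Windows-only; you would look up the agent PIDs yourself):

```python
import ctypes
import sys

# Windows priority-class constants (from winbase.h)
IDLE_PRIORITY_CLASS         = 0x00000040   # what "Low" corresponds to
BELOW_NORMAL_PRIORITY_CLASS = 0x00004000
NORMAL_PRIORITY_CLASS       = 0x00000020

def set_priority_class(pid, priority_class=NORMAL_PRIORITY_CLASS):
    """Raise a process's priority class via SetPriorityClass.
    Windows-only sketch; returns False elsewhere or on failure."""
    if sys.platform != "win32":
        return False
    PROCESS_SET_INFORMATION = 0x0200
    kernel32 = ctypes.windll.kernel32
    handle = kernel32.OpenProcess(PROCESS_SET_INFORMATION, False, pid)
    if not handle:
        return False
    try:
        return bool(kernel32.SetPriorityClass(handle, priority_class))
    finally:
        kernel32.CloseHandle(handle)
```

Note that on older systems (the WinXP x64 case above) this can still fail for permission reasons, in which case the call simply returns False.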
Such "lags" seem to have appeared in the last two builds (or maybe it only seems that way to me).
And Low priority is too harsh - hardware that is free at least 12 hours a day could give agents maximum priority during that time.
In general, I thought the priority would change automatically depending on resource load, but apparently it doesn't change by itself :(