
My results on the following configuration:
Genetics mode on the MQL5 Cloud Network: 2,624 of the 8,704 expected passes were actually calculated in 15 minutes and 52 seconds. The genetic optimizer stopped the calculation early because it hit a ceiling of results due to a highly sparse search space.
After clearing all caches on the disk and restarting the terminal, I ran genetics on the local cores of an i7-2600 (3.4 GHz, 8 cores, 16 GB RAM, Windows 7 x64):
The average local pass time was 19 to 25 seconds (the computer was not loaded with anything else):
2012.02.05 01:06:34 Core 2 genetic pass (184, 344771) returned result 97426.26 in 20 sec
2012.02.05 01:06:31 Core 7 genetic pass (191, 419403, 1) started
2012.02.05 01:06:31 Core 7 genetic pass (181, 347989) returned result 94247.90 in 25 sec
2012.02.05 01:06:31 Core 3 genetic pass (190, 1048934, 1) started
2012.02.05 01:06:31 Core 3 genetic pass (183, 255717) returned result 92939.02 in 20 sec
2012.02.05 01:06:28 Core 4 genetic pass (189, 535782, 1) started
2012.02.05 01:06:28 Core 4 genetic pass (182, 131277) returned result 98194.52 in 21 sec
Realising that I would be waiting a long time at this rate, I stopped the calculation at 211 passes:
2012.02.05 01:07:59 Statistics locals 211 tasks (100%), remote 0 tasks (0%), cloud 0 tasks (0%)
2012.02.05 01:07:59 Statistics optimization passed in 11 minutes 16 seconds
2012.02.05 01:07:59 Tester genetic optimization finished on pass 211 (of 1276290)
2012.02.05 01:07:59 Tester result cache was used 0 times
You can see that these are 211 clean passes with zero hits on the old result cache.
Since different genetic runs rarely match in the number of passes, and I ran the tests fairly cleanly, the time comparison can be done mathematically: 211 local passes took 676 seconds, so 2,624 passes would take about (2,624 / 211) * 676 ≈ 8,406 seconds on the local cores.
In total, 8,406 / 952 = 8.8, i.e. genetics is 8.8 times faster in the cloud. This corresponds to the power of 64 local cores.
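For clarity, here is a minimal MQL5-style script sketch that reproduces this extrapolation using only the figures from the logs above (211 local passes in 676 seconds, 2,624 cloud passes in 952 seconds):

void OnStart()
  {
   // Figures taken from the logs in this post.
   double local_passes  = 211;     // local run stopped at 211 passes
   double local_seconds = 676;     // 11 min 16 sec
   double cloud_passes  = 2624;    // passes actually calculated in the cloud
   double cloud_seconds = 952;     // 15 min 52 sec

   // Scale the local run to the cloud's pass count, then compare.
   double local_estimate = cloud_passes/local_passes*local_seconds;  // ~8,406 sec
   double speedup        = local_estimate/cloud_seconds;             // ~8.8

   PrintFormat("local estimate: %.0f sec, cloud: %.0f sec, speedup: %.1fx",
               local_estimate, cloud_seconds, speedup);
  }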
What is the explanation:
It follows directly from the very idea of crossing generations and from the adaptive population size: in genetic mode, the entire cloud network simply cannot be used at once.
As a result, out of 64-256 agents, many spend time waiting for a new chunk of work rather than working continuously. Often each agent only has time to complete a single task out of a population of 64-256 tasks. The local cores, on the other hand, being fewer in number and equal in power, are almost never idle: each of them processes a proportional share of the task batch.
So on the one hand any cloud result practically has to be divided by 4-8, while on the other hand the capabilities of the local cores are used as efficiently as possible.
Batching and an efficient network protocol mechanism rule here.
Full enumeration, of course, has no such latency problems caused by small population batches, and its speed scales linearly by hundreds and thousands of times.
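To illustrate the per-generation effect, here is a rough, purely illustrative model. The 20-second pass time and the 8 local cores come from the figures above; the population size and the per-generation dispatch/synchronization overhead are my own assumptions:

void OnStart()
  {
   double pass_sec     = 20;    // one genetic pass, from the logs above
   double local_cores  = 8;     // i7-2600 cores used locally
   double population   = 128;   // ASSUMPTION: tasks per generation (64-256 in practice)
   double overhead_sec = 15;    // ASSUMPTION: dispatch/sync overhead per generation

   // 8 local cores chew through the whole batch back to back.
   double local_gen = population/local_cores*pass_sec;   // 320 sec per generation

   // With at least 'population' agents, each cloud agent runs ~1 task and then waits.
   double cloud_gen = pass_sec+overhead_sec;              // ~35 sec per generation

   PrintFormat("per-generation speedup: ~%.1fx", local_gen/cloud_gen);  // ~9x
  }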
I found a mistake: I was testing on a third-party RoboForex server and spent a fair amount of time on the initial download of the chart history.
On fresh history the cloud just warms up for a few minutes while the agents synchronize the history. I will repeat the tests now.
I ran the tests on MetaQuotes-Demo: there is more M1 history on our server, almost all cloud agents already have it, and the network warm-up time is shorter.
The increased volume of M1 bars raised the time per pass to 30 seconds.
Here are the results on the MQL5 Cloud Network: 3,704 clean tasks in 25 minutes and 8 seconds (1,508 seconds).
Here are the results of local passes: 181 tasks in 11 minutes and 57 seconds (717 seconds) - I stopped to avoid waiting for 4 hours (the total time can be easily calculated).
If we calculate how long it would take the local cores to compute 3,704 tasks, we get (3,704 / 181) * 717 = 14,672 seconds (244 minutes and 32 seconds, i.e. 4 hours, 4 minutes and 32 seconds).
In total it turns out that genetics is 14,672 / 1,508 = 9.7 times faster in the cloud.
Although the result is close to the previous 8.8 times, it still approaches 10 times, which gives the right to declare it "an order of magnitude faster".
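The same extrapolation as a small sketch, this time with the MetaQuotes-Demo figures above (181 local tasks in 717 seconds versus 3,704 cloud tasks in 1,508 seconds):

// Extrapolate the local machine's time to the cloud's task count and
// return the speedup factor.
double Speedup(double local_tasks, double local_sec,
               double cloud_tasks, double cloud_sec)
  {
   double local_estimate = cloud_tasks/local_tasks*local_sec;  // extrapolated local time
   return(local_estimate/cloud_sec);
  }

void OnStart()
  {
   // 181 local tasks in 717 sec vs. 3,704 cloud tasks in 1,508 sec
   PrintFormat("speedup: %.1fx", Speedup(181, 717, 3704, 1508));  // ~9.7x
  }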
The explanations given in the comment https://www.mql5.com/ru/forum/6071/page2#comment_148584 still apply.
During 3 minutes 28 seconds of network usage I was charged either 2 or 3 cents (3 cents in the terminal, 2 cents on the website, and 3 cents frozen). Call it 3, or for simplicity say that using the network for genetics costs 1 cent per minute. That makes 60 cents per hour, and 24 hours = $14.40. That sounds very expensive to me. Prices need to be cut at least three-fold to make the service attractive to consumers (many people test EAs, but not everyone can or wants to shell out about $15 per day for the Cloud; at $5 or less there would be far more takers).
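A trivial check of this cost estimate (the 1 cent per minute rate is the rough figure observed above):

void OnStart()
  {
   double cents_per_minute = 1.0;                              // ~1 cent per minute, as observed above
   double dollars_per_day  = cents_per_minute*60.0*24.0/100.0; // = $14.40 for 24 hours
   PrintFormat("24 h of cloud genetics at %.0f cent/min ~ $%.2f", cents_per_minute, dollars_per_day);
  }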
...
I think my train of thought is clear.
@ Hey, neighbour, I don't like you visiting my wife while I'm at work.
@ I can't figure you Johnsons out: you don't like it, she does. @ :))
Buyers always want it cheaper and sellers want it dearer; only quoting will settle the dispute.
Buyers should be able to quote a desired price for a task and sellers a desired price for their resources, and then a market will form.
But that is all for the future, when the service becomes commonplace and its usefulness is no longer in question. For now, MQ has to actively prove that the Cloud is cool.
My experience: I have run and am still running a lot of tests in the cloud, but in all the time since the start of the project I have spent only 44 dollars.
This is very little, taking into account that at times 2-3 thousand agents were used.
In total, genetics is accelerated 7 to 10 times, that is, by an order of magnitude. And since the local tests used far from weak cores, for some users the acceleration will probably reach 40 times.
I have decided for myself that I will use the cloud whenever optimization on my own hardware would take more than a day.
Plus a request: please extend the mechanisms for identifying slow agents and redistributing tasks between them to local + remote agents. It turns out that the same test run with the same cores as remote agents, plus 10 weaker ones and 2 weak local agents, gives a result that is practically 2.5 times slower. And one does not always have the desire or the means to work out which combination of agents gives the maximum performance.
Is there any mention that you need to pay for this service?
The MQL5 Cloud Network web page (https://cloud.mql5.com/) advertises earning extra $$$ by letting others use your CPU for optimizations. Why is there no mention that if you use the Cloud Network for optimization you must pay? If you don't pay, who is paying the people who put their CPUs on the network but don't run any optimizations?
Is the cloud safe? Or will my EA get stolen if I use the cloud?
Thanks
I would say that if MQ wanted to, they could already have implemented something like this: if your expert's backtest results are good enough, they get the expert plus the backtest reports straight from your local computer. But I don't think they are doing anything of the sort.
If you really think your expert might be that good, then just split the work (I don't remember the right term): basically, you test the pieces separately, and you only see the real results when you combine all of the results (splits) you obtained before.
You should read the article The Fundamentals of Testing in MetaTrader 5:
The Data Exchange between the Terminal and the Agent
...
For security reasons, the agents never write to the hard disk the EX5 files obtained from the terminal (EAs, indicators, libraries, etc.), so that a computer with a running agent cannot use the sent data. All other files, including DLLs, are written to the sandbox. On remote agents you cannot test EAs that use DLLs.
The terminal gathers the testing results into a special result cache for quick access when they are needed. For each set of parameters, the terminal searches the result cache for results already available from previous runs in order to avoid re-runs. If no result for that set of parameters is found, an agent is given the task of running the test.
All traffic between the terminal and the agent is encrypted.
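Conceptually, the result cache described in the quote works like a lookup keyed by the parameter set. Below is a purely illustrative sketch, not the terminal's actual implementation; the struct and function names are hypothetical:

// Hypothetical result-cache sketch, keyed by the serialized parameter set.
// Illustrative only - not the terminal's actual implementation.
struct CacheEntry
  {
   string params;   // serialized input-parameter set, e.g. "TakeProfit=50;StopLoss=30"
   double result;   // optimization criterion from a previous pass
  };

CacheEntry cache[];

// Store the result of a finished pass.
void Remember(const string params, const double result)
  {
   int n=ArraySize(cache);
   ArrayResize(cache, n+1);
   cache[n].params=params;
   cache[n].result=result;
  }

// Return true and fill 'result' if this parameter set was already tested;
// otherwise the pass would have to be sent to an agent.
bool TryGetCached(const string params, double &result)
  {
   for(int i=0; i<ArraySize(cache); i++)
      if(cache[i].params==params)
        {
         result=cache[i].result;
         return(true);
        }
   return(false);
  }

void OnStart()
  {
   Remember("TakeProfit=50;StopLoss=30", 97426.26);   // value taken from the logs above
   double r;
   if(TryGetCached("TakeProfit=50;StopLoss=30", r))
      PrintFormat("cache hit: %.2f", r);
  }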