What should be added for additional support of universal mathematical calculations in MQL5 and MQL5 Cloud Network? - page 7

 
Help me understand and fix this. Yesterday I installed the MetaTrader 5 Agents Manager to make my PC available for cloud computing in the MQL5 Cloud Network. But here is the problem: my account at http://www.mql5.com does not show any agents, which means no payment will accrue for the use of my PC. I have entered my account name in the MetaTrader 5 Agents Manager itself.
 
Victuar:
But here is the problem: my account at http://www.mql5.com shows no agents, which means no payment will accrue for the use of my PC. I have entered my account name in the MetaTrader 5 Agents Manager itself.
How about reading the FAQ - https://cloud.mql5.com/ru/faq
 
Renat:

Hence the question - what other functions should be included to improve the capabilities of the computing network?

Probably class methods that can be called remotely, with their values obtained from agents: Remote Procedure Call (RPC). Something like this:

remote:
   ...
   double f(int x);
   double y(double a, double b, int &c[]);
   void z(double &arr[]);
   void func(SomeObject *so);
   ...

Along with a method call, of course, we need to pass current field values of the object that calls the method remotely to the agent.

The idea is that the main class instance calls some method, and inside that method instances of other classes are created, which send tasks out to the cloud. The results are then returned.

For example, suppose the task is to calculate several chess moves ahead. In the main method, which is executed remotely, the various one-move continuations are created as objects of some class and sent out. Those in turn, if the move did not end the game and the calculation depth has not exceeded the limit, call the same method again. And so on and so forth.

 
her.human:

Without the involvement of the terminal, this is a good thing.

Who will generate the data for this 'one of the agents'? Will a script or indicator be able to do it?

Any of the agents can generate raw data for the others. It can send either by broadcast to all agents or to a selected agent.

Any agent will be able to send data frames to any other agents.


What is the purpose of agent-to-agent communication? Enlighten the ignorant, if you can.

For related tasks where data/results of previous calculations need to be exchanged.

It doesn't have to be in the cloud. You can build a high-speed network of agents on your local network and run a complex task with heavy data exchange on it.

As a result, you can build a powerful network without any supercomputers.

 
Reshetov:

Probably class methods that can be called remotely and get their values from agents. Something like this:

Here, of course, along with the method call we would also have to pass the current field values of the object that calls this method remotely to the agent.

No, the only workable and realistic option is to exchange data frames. Remote execution of functions is not a serious option, because no one in their right mind would replicate the entire information environment.

Within the frame mechanism, the functionality can be extended:

bool  FrameSend(const long    agent,       // agent number, or BROADCAST
                const string  name,        // public name/label
                const long    id,          // public id
                const double  value,       // value
                const string  filename     // name of the file with the data
               );
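For illustration only, a hypothetical call using this proposed extension might look like the sketch below (BROADCAST, the label and the file name are made up; nothing like this exists in the current API):

// Hypothetical usage of the proposed extension: broadcast a new search
// condition to all agents as a value plus a small data file.
long   task_id = 42;
double bound   = 1.3575;
FrameSend(BROADCAST, "new_bound", task_id, bound, "bounds.bin");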

For reference, just in case:

The cost of network latency is such that, to optimise the overall process, you should batch results explicitly and transfer data as rarely as possible. For example, if you have a fast mathematical problem (fractions of a second per pass) with 100,000,000 passes, it is better to restructure the process algorithmically into portions of 1,000-10,000 passes and write batch-processing code that returns the results in batches. This gives a huge advantage over 100,000,000 individual returns, where a great deal of time would be lost on the network.

For our part, we already help considerably with high-speed tasks by batching the passes handed out to each agent in dozens or hundreds and by batching the responses as well. This gives huge savings on network transmission and keeps network latency to a minimum.
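As an illustration of this batching with the existing frame API, the agent side of a pass might accumulate a whole portion of results and return them in a single frame. A minimal sketch (PortionStart, PortionSize and Calculate() are placeholders, not part of any real product):

// Agent side: each optimization pass computes a whole portion of the problem
// and sends one frame instead of one frame per elementary calculation.
input long PortionStart = 0;      // first item of this portion (optimized parameter)
input int  PortionSize  = 10000;  // batch size

double OnTester()
  {
   double results[];
   ArrayResize(results, PortionSize);
   for(int i = 0; i < PortionSize; i++)
      results[i] = Calculate(PortionStart + i);             // the actual math goes here
   FrameAdd("portion", PortionStart, PortionSize, results); // one network transfer
   return(0.0);
  }

double Calculate(const long n) { return MathSqrt((double)n); } // stub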

 
Renat:

No, the only workable and realistic option is to exchange data frames. Remote execution of functions is not a serious option, because no one in their right mind would replicate the entire information environment.

Not all tasks can be batched, because in some applied and very resource-intensive tasks there may be only one result, or none at all; futile tasks are discarded along the way, i.e. missing results should not even be returned.

Then there is another way to do it: the main task generates jobs on its side and informs the agents about them. The agents call remote methods to obtain jobs, compute them, and if they get results, call remote methods to return those results.

For example, the task: searching for prime divisors of Fermat numbers. There may be no result at all, or one, or several. The point is that testing each potential divisor is very resource-intensive: first you need to create an object representing a large number (the task itself can be described by just two integers, e.g. the exponent and the mantissa, to reduce the cost of transferring information). Then the candidate is checked for primality (a quick test that weeds out more than 90 percent of non-primes). Then, if the primality test is passed, you search for a match in a loop by squaring modulo the candidate. If the condition is not met before the end of the loop, there is no result and nothing to return. In this case, the agent should remotely request the next job by calling the appropriate method of the host application. If it finds a result, it should call another method and pass that result back.
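The squaring-modulo check mentioned here is compact. A minimal sketch, assuming the candidate divisor fits into 32 bits so that the intermediate squares fit into 64-bit arithmetic (prime divisors of F(n) are known to have the form k*2^(n+2)+1):

// Does d = k*2^(n+2) + 1 divide the Fermat number F(n) = 2^(2^n) + 1?
// d divides F(n) exactly when 2^(2^n) == -1 (mod d), which n successive
// modular squarings of 2 reveal. Assumes d < 2^31 so that d*d fits in a ulong.
bool DividesFermat(const int n, const ulong k)
  {
   ulong d = k * ((ulong)1 << (n + 2)) + 1;   // candidate divisor
   ulong x = 2;
   for(int i = 0; i < n; i++)                 // after n squarings x = 2^(2^n) mod d
      x = (x * x) % d;
   return(x == d - 1);                        // i.e. x == -1 (mod d)
  }
// Example: DividesFermat(5, 5) is true, since 641 = 5*2^7 + 1 divides F(5).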

I.e. tasks differ, and frame structures are not adequate for all of them. And the network latency cost in the above example is negligible, since one task consists of passing two integers to an agent.

 
Reshetov:

Not all tasks can be batched, as in some applied and very resource-intensive tasks there may be only one result or no result at all, and inconclusive tasks are discarded along the way, i.e. missing results do not even need to be returned.

If you use a frame scheme, simply don't return empty results to the "server agent", or return just a "packet calculated, no data" flag.

Are you aware of how frame mode works? The master part of the EA runs right on a chart in the terminal and waits for responses (data frames) from remote agents. That is, the server part sits on the chart, receives the data and can visualise anything.

Read and try it for yourself: https://www.mql5.com/ru/code/914 (an example of visualising testing results, i.e. balance-curve dynamics and statistical characteristics of an Expert Advisor, during optimization).
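In outline, the master side of that example boils down to the standard handlers. A minimal sketch of receiving frames on the chart (the payload layout and the empty-result convention are up to the sender):

// Master side: the EA runs on a chart during optimization and receives frames.
// (OnTesterInit/OnTesterDeinit must also be defined for the EA to run in this mode.)
void OnTesterPass()
  {
   ulong  pass;
   string name;
   long   id;
   double value;
   double data[];
   while(FrameNext(pass, name, id, value, data))   // drain all arrived frames
     {
      if(value == 0.0)
         continue;                                 // e.g. "calculated, no data" flag
      Print("pass ", pass, ": ", name, " id=", id, " points=", ArraySize(data));
      // ...visualise, process or save the received batch here...
     }
  }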


 
Renat:

If using a frame scheme, just don't return empty results to the "server agent".

Well, that's just the basics. The main tasks, which are very computationally intensive, are recursive, and the cloud isn't suited to them because it is designed only for exhaustive search. In many applied tasks brute force is not used at all, since it is hopeless. Recursive tasks are needed for searching in depth, in breadth, and in depth with breadth. For example, the synthesis of molecules: a tree of potential solutions is built as the search progresses, and each branch is computationally expensive, but not every branch is promising. I.e. the search stops somewhere, while continuing along other potential branches in depth or breadth.

Exhaustive search is practically never used anywhere, because for most applied tasks there will not be enough time to find a solution (e.g. the problem of analysing chess moves). But recursive methods that prune unpromising solution branches are fast, especially in distributed calculations. That is why, if you want to attract applied developers to the cloud, you should adapt the cloud to their tasks instead of assuming that they will drop everything and try all variants in a row regardless of their prospects. It will be easier for them to create their own distributed computing network; even if it is slower in gigaflops and has fewer computers, it will be more efficient, because it will search only in promising directions and will find the needed solution much faster than the Cloud Network. And many programming languages have a toolkit for this, i.e. ready-made RPC implementations.

For example, the same search for prime divisors of Fermat numbers can be broken down into subtasks. The main application generates the tasks. The next layer creates objects from them and performs a quick primality check, generating further tasks from the survivors. The next layer checks the condition, i.e. whether a divisor of a Fermat number has been found or not; jobs are again generated from the numbers found. The next layer performs a full primality check and, if the number is not prime, generates jobs; if it is prime, it returns the result to the main application. The final layer factorises the non-prime divisors of Fermat numbers and generates jobs for the previous layer from the factors.

This creates a pipeline in which the agents of each layer perform their own tasks. Whether a result will be found is unknown. What matters is that numbers that are clearly hopeless for the further search are discarded along the pipeline. In other words, this saves enormous computational resources, instead of piling thousands of agents onto unpromising tasks and trying to grind them down.

 
Reshetov:

That's just the basics. The main tasks, which are very computationally intensive, are recursive, and the cloud isn't suited to them because it is designed only for exhaustive search. In many applied tasks brute force is not used at all, since it is hopeless. Recursive tasks are needed for searching in depth, in breadth, and in depth with breadth. For example, the synthesis of molecules: a tree of potential solutions is built as the search progresses, and each branch is computationally expensive, but not every branch is promising. I.e. the search stops somewhere, while continuing along other potential branches in depth or breadth.

Batch the calculations into portions of 1,000-10,000 passes and return only the significant results. This is a very effective algorithmic technique.

I wrote about it specifically above.


Exhaustive search is practically never used, because for most applied tasks there will not be enough time to find a solution (e.g. the problem of analysing chess moves). But recursive methods that prune unpromising solution branches are fast, especially in distributed calculations. That is why, if you want to attract applied developers to the cloud, you should adapt the cloud to their tasks instead of assuming that they will drop everything and try all variants in a row regardless of their prospects. It will be easier for them to create their own distributed computing network; even if it is slower in gigaflops and has fewer computers, it will be more efficient, because it will search only in promising directions and will find the needed solution much faster than the Cloud Network. And many programming languages have a toolkit for this, i.e. ready-made RPC implementations.

For example, the same search for prime divisors of Fermat numbers can be broken down into subtasks. The main application generates the tasks. The next layer creates objects from them and performs a quick primality check, generating further tasks from the survivors. The next layer checks the condition, i.e. whether a divisor of a Fermat number has been found or not; jobs are again generated from the numbers found. The next layer performs a full primality check and, if the number is not prime, generates jobs; if it is prime, it returns the result to the main application. The final layer factorises the non-prime divisors of Fermat numbers and generates jobs for the previous layer.

Read above about data exchange and the demo example:

  1. You already have a master process that controls the work of the agents. It sits on a chart and accepts frames (of arbitrary custom size) from the agents.
  2. The master process can already retrieve, visualise, process and save the resulting custom data.

A further extension of the data exchange is proposed, so that the master process can additionally pass arbitrary custom data to any agent. This makes it possible to calculate in parts, doling out new custom conditions to the remote agents, i.e. the master can compute however it likes, changing the conditions each time.

One more possible extension is for agents not only to receive tasks from the master but also to exchange data with each other. You can of course do this through the master (which can be very slow if there is a lot of data), but it is even more efficient and faster to do it directly through the cloud servers.

 

Renat:

Another possible extension is for agents not only to receive tasks from the master, but also to transfer data between themselves. You can of course do this through the master (which can be very slow if there is a lot of data), but it is even more efficient and faster to do it directly through the cloud servers.

This is what is needed, i.e. recursive transfer of tasks from one agent to another without the master, but with a guaranteed return of results to the master. In other words, so that it cannot happen that an agent takes a task and terminates without completing it (for example, because the computer was shut down), breaking off a potentially promising branch of the solution.

For example, take the task of analysing a chess game. The master arranges the pieces and generates assignments for the side whose turn it is to move, i.e. one piece - one assignment. Each agent, having received the task for its piece, discards the variants that are unpromising for further analysis (where the piece cannot move) and forms new positions, which are passed on as tasks for the opponent's pieces. And so on, until mate, stalemate, or the search depth is exceeded.

If such a task is entrusted to the current cloud implementation, one can only generate task packages for an exhaustive search. The cloud doesn't have enough agents for that, and the master is unlikely to have enough memory to batch all those jobs, because there is no mechanism for sifting out unpromising variants. With each newly analysed move the number of tasks grows exponentially, yet a considerable part of them should be discarded instead of generating meaningless tasks, as happens in exhaustive search. And the prospects of a variant can only be judged after diving to some depth or breadth of the decision tree, while the depth in the current cloud implementation is 1, i.e. from master to agent and back.

My point is this. For trading, an implementation of recursive search with pruning of dead ends is also needed. It is better to search not for a single extremum but for a set of local extrema (there really are many of them). And the space of all possible variants is astronomical, i.e. all the agents of all distributed computing networks taken together would not be enough. To do this, at each step we enumerate the nearest neighbourhood of a point (the point's coordinates are the EA's input parameters): points at some distance from it, spaced by some angular value, checking whether they improve the result compared to the current one. Points that are worse, or that exceed the search depth, are discarded. Points that improve the result are examined further recursively, and a set of new tasks formed from them is distributed to the agents. If a local extremum is found (all points in the neighbourhood only worsen the current result), the result is returned to the main application. Once the extrema have been identified, they are handed over to the master and analysed further using forward tests.
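A hedged sketch of the single step described above, with two parameters for readability (Evaluate(), the radius and the angular step are placeholders for the real task):

// Probe points on a circle around the current point (coordinates = EA inputs);
// improving points become new tasks, and an empty set means a local extremum.
struct Point { double x; double y; };

double Evaluate(const Point &p) { return -(p.x * p.x + p.y * p.y); } // stub objective

int ProbeNeighborhood(const Point &center, const double radius,
                      const double angle_step, const double current_result,
                      Point &improved[])
  {
   int count = 0;
   for(double a = 0.0; a < 2.0 * M_PI; a += angle_step)
     {
      Point p;
      p.x = center.x + radius * MathCos(a);
      p.y = center.y + radius * MathSin(a);
      if(Evaluate(p) > current_result)       // keep only improving points
        {
         ArrayResize(improved, count + 1);
         improved[count++] = p;
        }
     }
   return(count);  // 0 => local extremum: return center to the master
  }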

Such a task cannot be solved head-on, as the number of variants is astronomical. A genetic algorithm does not look for local extrema either (it, too, stops in the immediate neighbourhood of the global one) and only shows intermediate results, regardless of whether they are extrema. Not to mention that the search space of genetic and brute-force algorithms is limited and discrete. What is needed is a search for the maximal number of local extrema, but a fast one, i.e. with pruning of unpromising generations of tasks from master to agents and from agent to agent, and with an unlimited range (though with the possibility of setting restrictions if needed; e.g. the search depth in such algorithms is always limited). If the cloud implemented recursive job transfer, the problem would be solved.
