Errors, bugs, questions - page 1944

 
Anton Ohmat:

Question for the developers (I apologize if I'm being a nuisance)

I don't understand: my genetic algorithm reports 12,000 passes, but my agents actually perform only 9,000 passes. What happens to the other 3,000 results?

In genetic optimization, 12,000 is a rough estimate of the number of passes at the primary stage. The actual number is usually less, but sometimes more - it depends on the task.
 
Anton Ohmat:

I sat and waited to see what the slow agent would return. In the end it returned the error INIT_PARAMETERS_INCORRECT (no operations are performed), which in my case indicates that the input parameters don't match. So with 99-out-of-100 probability I can say that someone simply plugged an old laptop into the system, and that makes the whole idea meaningless. Observed in MQL5 Cloud USA.


Here is what's in the logs:

MQL5 Cloud USA genetic pass (0, 206) tested with error "incorrect input parameters" at 0:00:00.359 (PR 142)

Please write to the Service Desk with all the details so we can find the case in the logs.

There are no old computers in the cloud; tasks are distributed to the most powerful participants.
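For context, INIT_PARAMETERS_INCORRECT is the return code an Expert Advisor can give from OnInit() when it rejects its own inputs, which is what the log entry above reflects. A minimal sketch - the input names and the validation rule here are hypothetical, for illustration only:

//+------------------------------------------------------------------+
//| Sketch: an EA rejecting its inputs in OnInit().                   |
//| Such a pass is logged as "incorrect input parameters".            |
//| InpFast/InpSlow are hypothetical inputs used only as an example.  |
//+------------------------------------------------------------------+
input int InpFast = 10;   // fast period (example input)
input int InpSlow = 50;   // slow period (example input)

int OnInit()
  {
   // reject parameter combinations that make no sense for this EA
   if(InpFast<=0 || InpSlow<=0 || InpFast>=InpSlow)
      return(INIT_PARAMETERS_INCORRECT);   // no trading operations are performed
   return(INIT_SUCCEEDED);
  }

void OnTick()
  {
   // trading logic would go here
  }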
 
Renat Fatkhullin:
In genetic optimization, 12,000 is a rough estimate of the number of passes at the primary stage. The actual number is usually less, but sometimes more - it depends on the task.
You misunderstood a little: 12,000 is what it reports as already passed, not what it plans to pass. And for the agents it's 9,000 passes.
 
Renat Fatkhullin:
No, tasks are not distributed to old agents in the cloud.

I don't think that is the case. Otherwise, as soon as a new beta of the terminal came out, the Cloud would not work for it.

 
Anton Ohmat:
You misunderstood a little: 12,000 is what it reports as already passed, not what it plans to pass. And for the agents it's 9,000 passes.
Read the logs. In genetic optimization you can see many entries like "result found in cache". This means that the genetic operations of crossover, mutation and/or inversion produced a set of parameters that had already been calculated earlier. In that case the task is not sent to the agents; the previously obtained result is used instead.
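In other words, every candidate parameter set is checked against the results already obtained before it is dispatched, which is why the number of passes actually sent to agents is lower than the nominal count. A rough sketch of that idea - purely illustrative, not the tester's actual code, and the parameter strings are made up:

//+------------------------------------------------------------------+
//| Sketch: why agents receive fewer passes than the nominal count.   |
//| Hypothetical illustration of the "result found in cache" idea.    |
//+------------------------------------------------------------------+
string evaluated[];   // parameter sets already calculated

// returns true if this parameter set was seen before ("result found in cache")
bool FoundInCache(const string param_set)
  {
   for(int i=0; i<ArraySize(evaluated); i++)
      if(evaluated[i]==param_set)
         return(true);
   int n=ArraySize(evaluated);
   ArrayResize(evaluated,n+1);
   evaluated[n]=param_set;
   return(false);
  }

void OnStart()
  {
   // crossover/mutation/inversion can reproduce a set generated earlier
   string candidates[]={"Fast=10;Slow=50","Fast=12;Slow=50","Fast=10;Slow=50"};
   int sent=0;
   for(int i=0; i<ArraySize(candidates); i++)
      if(!FoundInCache(candidates[i]))
         sent++;
   PrintFormat("generated=%d, sent to agents=%d (the rest served from cache)",
               ArraySize(candidates),sent);
  }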
 
fxsaber:

I don't think that is the case. Otherwise, as soon as a new beta of the terminal came out, the Cloud would not work for it.

We periodically cut off old builds in the cloud and wait for them to be updated, which happens very quickly and goes unnoticed.

This is not done for every version; it depends on the importance of the changes made.
 
Please advise why the result of a pass during optimization and the result of a single test may differ. The difference in the account balance is almost twofold. Could it be that the agents and I have different quotes? I am using agents from the cloud.
 
Anton Ohmat:
Please advise why the result of a pass during optimization and the result of a single test may differ. The difference in the account balance is almost twofold. Could it be that the agents and I have different quotes? I am using agents from the cloud.

Does this happen with any Expert Advisor or only with a particular one?

In any case, it needs to be investigated. Please create a Service Desk ticket and attach the testing settings (broker, account type, and the contents of the Settings and Parameters tabs in the tester) and the Expert Advisor itself - as much information as possible.

 
Anton Ohmat:
Please advise why the result of a pass during optimization and the result of a single test may differ. The difference in the account balance is almost twofold. Could it be that the agents and I have different quotes? I am using agents from the cloud.

This is one of the options described here.

 

How can I find out the input parameters of an EA, at least in single-run mode?

For indicators there is IndicatorParameters.

For optimization there is FrameInputs.

But for a single run of an Expert Advisor in the tester, or for its normal run, there is nothing.
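For reference, here is how the optimization-side mechanism mentioned above is typically used: FrameInputs() retrieves the input parameters of a given pass while frames are collected in OnTesterPass() on the terminal side. A minimal sketch, assuming the agents publish frames with FrameAdd() during optimization; for a plain single run there is indeed no comparable built-in call:

//+------------------------------------------------------------------+
//| Sketch: reading the inputs of optimization passes via frames.     |
//| Runs in the EA on the terminal side during optimization.          |
//+------------------------------------------------------------------+
void OnTesterPass()
  {
   ulong  pass;
   string name;
   long   id;
   double value;
   // pull the frames published by agents via FrameAdd() in OnTester()
   while(FrameNext(pass,name,id,value))
     {
      string params[];
      uint   count;
      // FrameInputs() returns the input parameters used in that pass
      if(FrameInputs(pass,params,count))
         for(uint i=0; i<count; i++)
            PrintFormat("pass %I64u: %s",pass,params[i]);
     }
  }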
