Features of the mql5 language, subtleties and tricks - page 92

 
Slava:
What is the probability of the local computer time changing between two calls to GetMicrosecondCount used to measure time in microseconds?

Not zero.

 
TheXpert:
Very constructive conversation )

We'll just permanently delete a few more scribblers and that's it.

We will no longer tolerate those who rush into the fray trying to call the real behavior of WinAPI functions a bug and blame us for it. The conversation will definitely become more constructive.

 
fxsaber:

Not zero.

And what is the probability of losing milliseconds in the client/server exchange time? Probably higher than the probability of the local time changing.

 
Renat Fatkhullin:

We'll just permanently delete a few more scribblers and that's it.

We will no longer tolerate those who rush into the fray, trying to call reality a bug and blame us. The conversation will clearly become more constructive.

Slightly off topic, in the direction of OnTimer() )

I don't remember where I read it, but an MQ representative wrote that it is possible (for those with a strong itch) to switch the system to a 1 ms timer resolution; then, if you use EventSetMillisecondTimer(...), OnTimer() will also fire with an error of about 1 ms rather than 16 ms.

If I understood correctly, OnTimer() works with the system timer's resolution, right?


P.S. I sent a request to the service desk yesterday. Not processed. Started: 2018.07.30 12:52, #2117844. Can you help with its processing? It has been hanging since yesterday ))
 

OnTimer works with the error of the system WinAPI timer, whose value is read via the WinAPI function GetTickCount. This is a very fast and cheap way of timing that has minimal impact on the process being measured; that is, it does not greatly distort the final result.

The accuracy of this timer can be raised for the entire operating system, but at the cost of both increased CPU consumption and random, widespread side effects, as the mass of running programs will start to:

  • measure time more precisely
  • spend less time in sleeps
  • have timeouts that used to mask common errors degenerate into outright wrong behavior
  • and exhibit a number of other fun glitches

The Windows system timer problem is over 20 years old. But the behavior and accuracy of the old timer is dangerous to change.

That's why new, more accurate timing methods have long been introduced. But they are resource-intensive and it is unreasonable to use them as a complete replacement for the old timer.

In our case, the higher-precision timer is implemented with GetMicrosecondCount. It should be used consciously, with the understanding that it costs more than GetTickCount. In addition, the cost of the GetMicrosecondCount calls themselves should be explicitly accounted for when making precise measurements.

It is very easy to fool yourself and others by misusing the timer and by not keeping the benchmark clean.

 
Renat Fatkhullin:

We will no longer tolerate those who rush into the fray, trying to call the real behavior of WinAPI functions a bug and blame us. The conversation will clearly become more constructive.

You could just write in the Help that GetMicrosecondCount depends on the local computer time and can behave inadequately when it is changed, while GetTickCount does not depend on it.

The microsecond measurement problem is also solvable, albeit somewhat crookedly, and it can be solved either on our side or on yours. Apparently, we will have to solve it on our side.

Why should you ban me?

 
Renat Fatkhullin:

OnTimer works with the error of the system WinAPI timer, whose value is read via the WinAPI function GetTickCount. This is a very fast and cheap way of timing that has minimal impact on the process being measured; that is, it does not greatly distort the final result.

The accuracy of this timer can be raised for the entire operating system, but at the cost of both increased CPU consumption and random, widespread side effects, as the mass of running programs will start to:

  • measure time more precisely
  • spend less time in sleeps
  • have timeouts that used to mask common errors degenerate into outright wrong behavior
  • and exhibit a number of other fun glitches

The Windows system timer problem is over 20 years old. But the behavior and accuracy of the old timer is dangerous to change.

That's why new, more accurate methods of timing have long been introduced. But they are resource-intensive and it is unreasonable to use them as a complete replacement for the old timer.

In our case, the higher-precision timer is implemented with GetMicrosecondCount. It should be used consciously, with the understanding that it costs more than GetTickCount. In addition, the cost of the GetMicrosecondCount calls themselves should be explicitly accounted for when making precise measurements.

It is very easy to fool yourself and others by misusing the timer and failing to keep the benchmark clean.

Oops, I wrote roughly the same thought right after the MQ representative posted about reducing the system timer interval ))

So I agree that nothing should be changed in this direction.

By the way, I would like to know whether there are any developments towards reflection as in C#, or at least as in boost? For example, serialization/deserialization would be more convenient to implement.

 
TheXpert:

You could just write in the Help that GetMicrosecondCount depends on the local time of the computer and may not work adequately when it is changed, while GetTickCount does not.

It is written in the Help: the GetMicrosecondCount() function returns the number of microseconds elapsed since the start of the MQL5 program.

That states it clearly: it is for measuring intervals of time.

The microsecond measurement problem is also solvable, albeit somewhat crookedly. And it can be solved either on our side or on yours.

Why should you ban me?

We should ban.

First, there is no problem with measuring time via the microsecond timer. Second, some people are positively itching to invent a pretext for an attack, and then cling to it to the last.

Once again, the rules have changed.

No more insults or "you must" demands will be accepted. We will conduct sweeps without warning.

 
Renat Fatkhullin:

It says so in the Help: The GetMicrosecondCount() function returns the number of microseconds that have passed since the start of the MQL5 program.

And it is written for the GetTickCount function:

The GetTickCount() function returns the number of milliseconds that have elapsed since the system started.

The phrases are almost identical, yet one function depends on the local time and the other does not. How are we supposed to guess that?

 
TheXpert:

And it is written for the GetTickCount function:

The phrases are almost identical, yet one function depends on the local time and the other does not. How are we supposed to guess that?

This is the WinAPI.

A reminder about the use of "should" phrasing, explicit or implicit: using "MetaQuotes should" instead of "please consider" is now unacceptable.
