Errors, bugs, questions - page 2822

Only the rounding is done not with the standard round(), ceil(), floor(), because they also return a double, but with functions like these, which moreover work faster than the standard ones:
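The code block from that post is not shown above. As a minimal sketch of the idea (illustrative names, not the poster's code, and only assuming the point is to get an int back instead of a double):
// Int-returning rounding helpers. These simply wrap the standard functions and change
// the return type; the original post apparently used a faster trick that is not shown here.
int iFloor(const double x) { return (int)MathFloor(x); }
int iCeil (const double x) { return (int)MathCeil(x);  }
int iRound(const double x) { return (int)MathRound(x); }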
It might be faster, but it's just wrong.
Have you tried it yourself?
Try it:
Output:
It should be 12346, because it is a ceil ("Returns the closest integer numeric value from above"). The first case gives 12345 because the double type keeps only about 17 significant digits, while your number has 18.
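The numbers from the original example are not reproduced above; the effect can be sketched with illustrative literals: with 18 significant digits the tiny fraction is lost already when the literal is stored in a double, so MathCeil() has nothing left to round up.
void OnStart()
  {
   double a = 12345.000000000001;    // 17 significant digits: the small fraction survives
   double b = 12345.0000000000001;   // 18 significant digits: stored as exactly 12345.0
   Print((long)MathCeil(a));         // 12346
   Print((long)MathCeil(b));         // 12345, the "unexpected" case discussed above
  }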
Really, you can't compare doubles directly. It's just a hard rule.
Of course, it is possible and sometimes even necessary to compare doubles directly with each other.
For example, during optimization OnTick is sometimes called a trillion times. To decide whether or not to execute a pending limit order, the built-in tester compares the current price of the corresponding symbol with the limit price. It does this for every pending order before every OnTick call, i.e. these checks are performed tens and hundreds of billions of times.
And each time it is done through normalization. That is a horrible waste of computing resources, because the prices of the pending orders and of the symbol are already normalized beforehand. Therefore they can and should be compared directly with each other.
A custom tester written in MQL easily outperforms the native built-in tester in performance.
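A minimal sketch of the point being made, with illustrative helpers (not fxsaber's code): if both prices were normalized once when the order was placed, the per-tick check needs no further normalization.
// Buy-limit trigger check on already-normalized prices: a plain comparison is enough.
bool BuyLimitTriggered(const double ask, const double limit_price)
  {
   return(ask <= limit_price);
  }

// The wasteful variant complained about: re-normalizing both prices on every single check.
bool BuyLimitTriggeredSlow(const double ask, const double limit_price, const int digits)
  {
   return(NormalizeDouble(ask, digits) <= NormalizeDouble(limit_price, digits));
  }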
fxsaber:
Of course, it is possible and sometimes even necessary to compare doubles directly with each other.
For example, during optimization OnTick is sometimes called a trillion times. To decide whether or not to execute a pending limit order, the built-in tester compares the current price of the corresponding symbol with the limit price. It does this for every pending order before every OnTick call, i.e. these checks are performed tens and hundreds of billions of times.
And each time it is done through normalization. That is a horrible waste of computing resources, because the prices of the pending orders and of the symbol are already normalized beforehand. Therefore they can and should be compared directly with each other.
A custom tester written in MQL easily outperforms the native built-in tester in performance.
NormalizeDouble() is a very expensive function. Therefore, you'd better forget about it.
Here is a script that demonstrates the difference between NormalizeDouble() and normalization via int:
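The script itself is missing above; the sketch below shows the kind of comparison presumably meant (the iteration count, test value and number of digits are assumptions): a loop that sums NormalizeDouble() results against a loop that sums scaled-and-rounded values and divides the total once at the end.
void OnStart()
  {
   const int    N      = 10000000;     // assumed iteration count
   const double price  = 1.2345678;    // assumed test value
   const double factor = 100000.0;     // 10^5, i.e. 5 digits

   double sum1 = 0.0;
   ulong  t1   = GetMicrosecondCount();
   for(int i = 0; i < N; i++)
      sum1 += NormalizeDouble(price, 5);
   t1 = GetMicrosecondCount() - t1;

   double sum2 = 0.0;
   ulong  t2   = GetMicrosecondCount();
   for(int i = 0; i < N; i++)
      sum2 += MathRound(price * factor);   // keep the running sum in scaled units
   sum2 /= factor;                         // divide by 10^digits only once, at the end
   t2 = GetMicrosecondCount() - t2;

   Print("NormalizeDouble: ", t1, " us, sum = ", DoubleToString(sum1, 16));
   Print("int scaling:     ", t2, " us, sum = ", DoubleToString(sum2, 16));
  }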
result:
P.S. The normalization via int also turns out to be more accurate (you can see it by the number of nines after the last digit of the normalization, highlighted in blue).
NormalizeDouble() is a very expensive function. That's why it's better to forget about it.
Here is a script that demonstrates the difference between NormalizeDouble() and normalize with int:
result:
P.S. The normalization via int turns out to be even more accurate (you can see it by the number of nines after the last digit of the normalization, highlighted in blue).
And if the summation is done not via double but via long, the result is even more impressive, since summation via int (multiplying and rounding each term, then dividing the final sum) is faster than an ordinary double sum.
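Again the script is not shown; a sketch of the long-based variant, under the same assumptions as above:
void OnStart()
  {
   const int    N      = 10000000;     // assumed iteration count
   const double price  = 1.2345678;    // assumed test value
   const double factor = 100000.0;     // 10^5, i.e. 5 digits

   long  acc = 0;                      // accumulate in integer (scaled) units
   ulong t   = GetMicrosecondCount();
   for(int i = 0; i < N; i++)
      acc += (long)MathRound(price * factor);   // multiply and round each term
   double sum = acc / factor;                   // divide the total only once, at the end
   t = GetMicrosecondCount() - t;

   Print("long accumulation: ", t, " us, sum = ", DoubleToString(sum, 16));
  }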
result:
And if the summation is not via double, but via long, then the result is even more impressive, because summation via int (multiplication and rounding, followed by division of the total sum) is faster than a normal double sum.
result:
Add Decimal for comparison.
Wrong link, it's not a complete implementation.
And it's done through normalisation every time. Well, this is a terrible waste of computing resources.
How do you know that? After all, even if the prices are not normalized, the check is easily done without any normalization,
given that prices are multiples of the tick size:
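A sketch of the kind of check that seems to be meant (the helper name is illustrative): prices that are multiples of the tick size can be compared as integer numbers of ticks, with no normalization of either price.
bool SamePrice(const double p1, const double p2, const double tick_size)
  {
   // convert each price to a whole number of ticks and compare the integers
   return((long)MathRound(p1 / tick_size) == (long)MathRound(p2 / tick_size));
  }

// usage, taking the tick size from the symbol properties:
// bool equal = SamePrice(price1, price2, SymbolInfoDouble(_Symbol, SYMBOL_TRADE_TICK_SIZE));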
Moreover, normalization through int also turns out to be more accurate (you can see it by the number of nines after the last digit of normalization - highlighted in blue).
The test is incorrect. Why do you divide by 100000.0 only once, at the end? The division should be performed at each iteration and the results then summed; that would be a fair comparison. Besides, this is not normalization at all, you have simply optimized your test algorithm. Naturally it will be faster and more accurate (because the accumulated error is smaller).
How do you know this?
Because you can feed non-normalized prices into the tester and it will handle them just the same.
After all, even if the prices are not normalized, the check is easily done without any normalization.
By normalization I meant, in this case, a single standard algorithm; after applying it, doubles that conform to that standard can be compared directly.
So the tester does not compare doubles directly. It does it through NormalizeDouble, the tick size or something else, but certainly not through a direct comparison of doubles. And that is not rational at all.
Of course, it is possible and sometimes even necessary to compare doubles directly with each other.
For example, during optimization OnTick is sometimes called a trillion times. To decide whether or not to execute a pending limit order, the built-in tester compares the current price of the corresponding symbol with the limit price. It does this for every pending order before every OnTick call, i.e. these checks are performed tens and hundreds of billions of times.
And each time it is done through normalization. That is a horrible waste of computing resources, because the prices of the pending orders and of the symbol are already normalized beforehand. Therefore they can and should be compared directly with each other.
A custom tester written in MQL does not beat the native built-in tester in performance.
So I decided to check this "nonsense" about performance.
And the result was surprising.
On average, even comparing pre-normalized doubles directly turned out to be slower than comparing doubles through an epsilon or through a conversion to int.
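The benchmark itself is not reproduced above; the sketch below is one way such a test could look (the iteration count, the random price generation and the IsEqual helper are assumptions), timing a direct comparison of pre-normalized doubles against an epsilon comparison and a comparison via conversion to an integer.
// Illustrative epsilon comparison: one subtraction plus a call with three parameters.
bool IsEqual(const double a, const double b, const double eps)
  {
   return(MathAbs(a - b) < eps);
  }

void OnStart()
  {
   const int N = 1000000;                       // assumed iteration count
   double a[], b[];
   ArrayResize(a, N);
   ArrayResize(b, N);
   for(int i = 0; i < N; i++)                   // pre-normalized random prices,
     {                                          // so the loops time only the comparison
      a[i] = NormalizeDouble(MathRand() / 32767.0, 5);
      b[i] = NormalizeDouble(MathRand() / 32767.0, 5);
     }

   int   hits1 = 0, hits2 = 0, hits3 = 0;
   ulong t1 = GetMicrosecondCount();
   for(int i = 0; i < N; i++)
      if(a[i] == b[i]) hits1++;                 // direct comparison
   t1 = GetMicrosecondCount() - t1;

   ulong t2 = GetMicrosecondCount();
   for(int i = 0; i < N; i++)
      if(IsEqual(a[i], b[i], 0.000001)) hits2++;   // comparison through an epsilon
   t2 = GetMicrosecondCount() - t2;

   ulong t3 = GetMicrosecondCount();
   for(int i = 0; i < N; i++)
      if((long)MathRound(a[i] * 100000.0) == (long)MathRound(b[i] * 100000.0)) hits3++;   // via int
   t3 = GetMicrosecondCount() - t3;

   Print("direct:  ", t1, " us, matches = ", hits1);
   Print("epsilon: ", t2, " us, matches = ", hits2);
   Print("int:     ", t3, " us, matches = ", hits3);
  }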
The result:
I don't rule out that a lot depends on the generation and architecture of the processor, and someone else may get a different result.
To tell the truth, I don't even understand why this happens.
The compiler seems to have nothing to optimize in a sum of random numbers; you can't factor the rounding out of the sum.
And a comparison of two doubles should be a single processor instruction.
When comparing through an epsilon (the fastest way) we still have the comparison of two doubles, but on top of it a function call with three parameters passed and one subtraction.
Can the performance of comparing two doubles depend on the values of the variables themselves? I doubt it.
Geez, I don't get it. Please help me: what have I failed to take into account, or where did I go wrong?