Get the number of decimal places of any numbers (not just quotes) bypassing Digits() in MQL4 and MQL5 - page 21

 
Nikolai Semko:
I'm on the road right now, but you can try it yourself. The idea is to use unions with arrays of structures of different sizes, e.g. 10, 100, 1000, 10000...
This shortens the loop and reduces the number of ArrayCopy calls by orders of magnitude.
The result should be close to the memcpy variant.

That idea was used there.

You can see everything in the source code.
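
For readers who don't open the attachment, here is a minimal sketch of the union-chunk idea. The names (SimpleTick, ChunkConverter, StructsToInts) and the stand-in 16-byte struct are mine, not fxsaber's; the real source works with MqlTick, but the mechanics are the same: reinterpret a fixed block of structs as an int array through a union and copy block by block.

#define CONVERT_AMOUNT 128

struct SimpleTick            // stand-in POD struct (16 bytes), not the real MqlTick
  {
   long              time_msc;
   double            price;
  };

union ChunkConverter         // the same 128 structs viewed as 512 ints
  {
   SimpleTick        ticks[CONVERT_AMOUNT];
   int               ints[CONVERT_AMOUNT * 4];   // 16 bytes per struct = 4 ints
  };

// Convert a struct array to an int array, CONVERT_AMOUNT structs per ArrayCopy call.
void StructsToInts(const SimpleTick &src[], int &dst[])
  {
   const int total = ArraySize(src);
   ArrayResize(dst, total * 4);

   ChunkConverter conv;
   for(int i = 0; i < total; i += CONVERT_AMOUNT)
     {
      const int amount = MathMin(CONVERT_AMOUNT, total - i);
      ArrayCopy(conv.ticks, src, 0, i, amount);          // structs into the union...
      ArrayCopy(dst, conv.ints, i * 4, 0, amount * 4);   // ...and the same bytes back out as ints
     }
  }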
 
fxsaber:

That idea was used there.

You can see everything in the source code.
Yes, I looked at it. Strange that it has no effect.
 
Nikolai Semko:
Yes, I looked at it. Strange that it has no effect.

There is a line in the source that controls the chunk size:

#define  CONVERT_AMOUNT 128

You can change this value and see the result. Above roughly a hundred the speed stops growing. That is easy to explain: the total number of elements copied stays the same, and the overhead of making many small copies has already been eliminated.
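
As a back-of-the-envelope illustration (my numbers, not from the thread): converting 10,000,000 ticks with CONVERT_AMOUNT = 128 needs about 78,000 ArrayCopy calls, and with CONVERT_AMOUNT = 1024 about 9,800, but in both cases all 10,000,000 elements still have to be moved, so the fixed per-call overhead is already negligible and the timings barely differ.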

 
fxsaber:

I'm afraid we've already hit maximum performance.

Yes, I agree.
I tried it and got the same result as with your TicksToIntArray_fxsaber4/IntArrayToTicks_fxsaber4.

 
Andrey Khatimlianskii:

You have the source code, you can measure it yourself.

So measure it yourself. I'm pretty sure I don't need it, so I don't see the point in wasting time on either the article or the measurements.

 
fxsaber:

I'm afraid we've already hit maximum performance.

To be honest, I'm very surprised it was possible to get so close to memcpy. It just can't be; something is wrong.

 
fxsaber:

I'm afraid we've already hit maximum performance.

I think I see a very serious flaw in your method.
Your benchmark takes the minimum of 50 absolutely identical runs.
But the compiler is clever and lazy: it will not do the same work 50 times, it will optimize the code away. So you should at least change the arrays on every run, or replace 50 with 1 and increase the number of tests instead. Then the results are quite different and more objective (a minimal sketch of this idea follows the log below).

2018.12.09 13:55:43.048 StructToArray__2        https://www.mql5.com/ru/forum/287618/page18#comment_9813963
2018.12.09 13:55:43.048 StructToArray__2        TicksToIntArray_thexpert
2018.12.09 13:55:43.296 StructToArray__2        Time[TicksToIntArray(TicksIn,Array)] = 247579
2018.12.09 13:55:43.296 StructToArray__2        IntArrayToTicks_thexpert
2018.12.09 13:55:43.544 StructToArray__2        Time[IntArrayToTicks(Array,TicksOut)] = 247840
2018.12.09 13:55:43.634 StructToArray__2        true
2018.12.09 13:55:43.766 StructToArray__2        
2018.12.09 13:55:43.766 StructToArray__2        https://www.mql5.com/ru/forum/287618/page18#comment_9814108
2018.12.09 13:55:43.766 StructToArray__2        TicksToIntArray_fxsaber4
2018.12.09 13:55:44.118 StructToArray__2        Time[TicksToIntArray(TicksIn,Array)] = 351847
2018.12.09 13:55:44.118 StructToArray__2        IntArrayToTicks_fxsaber4
2018.12.09 13:55:44.452 StructToArray__2        Time[IntArrayToTicks(Array,TicksOut)] = 334011
2018.12.09 13:55:44.548 StructToArray__2        true
2018.12.09 13:55:44.692 StructToArray__2        
2018.12.09 13:55:44.692 StructToArray__2        https://www.mql5.com/ru/forum/287618/page18#comment_9814108
2018.12.09 13:55:44.692 StructToArray__2        TicksToIntArray_semko
2018.12.09 13:55:45.037 StructToArray__2        Time[TicksToIntArray(TicksIn,Array)] = 344707
2018.12.09 13:55:45.037 StructToArray__2        IntArrayToTicks_semko
2018.12.09 13:55:45.373 StructToArray__2        Time[IntArrayToTicks(Array,TicksOut)] = 336193
2018.12.09 13:55:45.462 StructToArray__2        true

A difference of about 40% compared to memcpy is much more plausible.
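
For what it's worth, here is a minimal sketch of the "change the data on every run" idea. BenchConvert and the perturbation scheme are mine, not the thread's benchmark macro, and it assumes one of the thread's TicksToIntArray(const MqlTick &[], int &[]) implementations is compiled in the same file:

// Sketch: time the conversion while modifying the input on every run, so neither
// the compiler nor the CPU cache can simply reuse the result of a previous identical pass.
void BenchConvert(MqlTick &ticks[], int &out[], const int runs = 50)
  {
   ulong best = ULONG_MAX;
   for(int r = 0; r < runs; r++)
     {
      // perturb one tick so every run works on slightly different data
      ticks[r % ArraySize(ticks)].bid += 0.00001;

      const ulong start   = GetMicrosecondCount();
      TicksToIntArray(ticks, out);                     // function under test (from the thread)
      const ulong elapsed = GetMicrosecondCount() - start;
      if(elapsed < best)
         best = elapsed;
     }
   Print("Time[TicksToIntArray] = ", best, " us (min of ", runs, " varied runs)");
  }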

I wonder whether compressing the array would help. An array of ticks can be compressed by a factor of 10-12; the only question is whether that actually saves time overall when sending and receiving through the resource.
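
To illustrate why tick arrays compress so well (my own sketch, not Nikolai's code): consecutive ticks differ only slightly, so storing deltas produces streams of small numbers that any general-purpose compressor, or simple varint packing, shrinks dramatically. The function name and the explicit point parameter are assumptions:

// Sketch: delta-encode tick times and bid prices; 'point' is the symbol's price step.
void TicksToDeltas(const MqlTick &ticks[], long &dtime[], long &dbid[], const double point)
  {
   const int n = ArraySize(ticks);
   ArrayResize(dtime, n);
   ArrayResize(dbid, n);
   for(int i = 0; i < n; i++)
     {
      dtime[i] = (i == 0) ? ticks[0].time_msc
                          : ticks[i].time_msc - ticks[i - 1].time_msc;              // ms since previous tick
      dbid[i]  = (long)MathRound(((i == 0) ? ticks[0].bid
                                           : ticks[i].bid - ticks[i - 1].bid) / point); // price steps
     }
  }

Most deltas then fit in one or two bytes instead of eight, which makes ratios on the order of 10x plausible; whether the compression and decompression time pays for itself is exactly the open question above.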

 
Nikolai Semko:

I think I see a very serious flaw in your method.
Your benchmark takes the minimum of 50 absolutely identical runs.
But the compiler is clever and lazy: it will not do the same work 50 times, it will optimize the code away.

The code is written so that it does exactly what it is supposed to do. The compiler cannot affect the speed of memcpy. The results of the passes are as follows:

Loop with a single pass:

https://www.mql5.com/ru/forum/287618/page18#comment_9813963
TicksToIntArray_thexpert
Time[TicksToIntArray(TicksIn,Array)] = 235285
IntArrayToTicks_thexpert
Time[IntArrayToTicks(Array,TicksOut)] = 192509
true


Minimum of 50 passes:

https://www.mql5.com/ru/forum/287618/page18#comment_9813963
TicksToIntArray_thexpert
Time[TicksToIntArray(TicksIn,Array)] = 80970
IntArrayToTicks_thexpert
Time[IntArrayToTicks(Array,TicksOut)] = 81103
true
 
fxsaber:

The code is written so that it does exactly what it is supposed to do. The compiler cannot affect the speed of memcpy. The results of the passes are as follows:

Loop with a single pass:


Minimum of 50 passes:

But then why does this happen? Of course, the compiler cannot affect how memcpy executes, but it can refuse to execute it at all, reusing the stored result of the first calculation if it sees that none of the inputs change during the loop. That is how I would design a compiler myself, to fix such illogical code.
 
Ilya Malev:

So measure it yourself. I'm pretty sure I don't need it, so I don't see the point in wasting time on either the article or the measurements.

I don't have to.
