Features of the mql5 language, subtleties and tricks - page 305

 
Vladimir Simakov #:

Learning the pros and smiling))))))

https://en.cppreference.com/w/cpp/language/operator_arithmetic.html

Thanks. I would like to find a quick way to multiply an int by a negative power of 10.

 
fxsaber #:

Thanks. I would like to find a quick way to multiply int by the negative power of 10.

Is this so you don't have to divide by 10 in a loop? Something that would work roughly like this:

C++17:

#include <cstdint>
#include <iostream>
#include <type_traits>

template<int64_t N, typename = std::enable_if_t<(N < 0)>>
double mult(int i) {
    constexpr double kMult = []() constexpr {
        double ret = 1.0;
        int64_t n = N;
        while (n++ < 0) {
            ret /= 10.0;
        }
        return ret;
    }();
    return i * kMult;
}

int main() {
    auto x = mult<-7>(100);
    std::cout << x << '\n';
}

 
Vladimir Simakov #:

Is this so you don't have to divide by 10 in a loop?

About that.
TickOut.bid = ::NormalizeDouble((this.PriceBid += ShiftBid) * this.Pow, 7/* this.digits*/);
If Pow is made float, it will speed up immediately. But then there is a loss of accuracy on the same BTCUSD.
 
fxsaber #:
About that. If Pow is made float, it will speed up immediately. But then there is a loss of accuracy on the same BTCUSD.
Have you tried using OpenCL for this type of task (packing/unpacking)?

 
Nikolai Semko #:
Have you tried using OpenCL for this type of task (packing/unpacking)?

I'm not really into this kind of thing. The speed is already quite good. But for self-education it would be interesting to see a ready-made variant.

 
amrali #:

Here https://www.mql5.com/en/forum/349798/page3#comment_57408237

#property script_show_inputs

input datetime inFrom = D'2024.12.01';

#include <fxsaber\TicksShort\TicksShort.mqh> // https://www.mql5.com/en/code/61126
#include <TickCompressor__3.mqh> // https://www.mql5.com/ru/forum/170952/page303#comment_57411774

// Returns the size of the array in bytes.
template <typename T>
ulong GetSize( const T &Array[] ) { return((ulong)sizeof(T) * ArraySize(Array)); }

template <typename T1, typename T2>
double Criterion( const T1 &Decompression[], const T2 &Compression[], const ulong Interval )
{
  const double Performance = (double)ArraySize(Decompression) / Interval;

  return(Performance * ((double)GetSize(Decompression) / GetSize(Compression)));
}

void OnStart()
{
  MqlTick Ticks[]; // For source ticks.

  if (CopyTicksRange(_Symbol, Ticks, COPY_TICKS_ALL, (ulong)inFrom * 1000) > 0)
  {
// TICK_SHORT Ticks2[]; // For compressed ticks.
    MqlTickBidAsk Ticks2[]; // For compressed ticks.

    ulong Interval = GetMicrosecondCount();
// TICKS_SHORT::Compress(Ticks, Ticks2); // Compressed.
    TickCompressor::compress(Ticks, Ticks2); // Compress.
    Interval = GetMicrosecondCount() - Interval;
    const double Performance = (double)ArraySize(Ticks) / Interval;

    Print("Compress performance: " + DoubleToString(Performance, 1) + " Ticks (millions)/sec.");
    Print("Compress performance criterion: " + DoubleToString(Criterion(Ticks, Ticks2, Interval), 1));

    MqlTick Ticks3[]; // For decompressed ticks.

    ulong Interval2 = GetMicrosecondCount();
// TICKS_SHORT::Decompress(Ticks2, Ticks3); // Decompress.
    TickCompressor::decompress(Ticks2, Ticks3); // Decompress.
    Interval2 = GetMicrosecondCount() - Interval2;
    const double Performance2 = (double)ArraySize(Ticks3) / Interval2;

    Print("Decompress performance: " + DoubleToString(Performance2, 1) + " Ticks (millions)/sec.");
    Print("Decompress performance criterion: " + DoubleToString(Criterion(Ticks3, Ticks2, Interval2), 1));

    Print("Correct = " + (string)TICKS_SHORT::IsEqual(Ticks, Ticks3)); // Compare results.
  }
}


Result (BTCUSD).

Compress performance: 106.2 Ticks (millions)/sec.
Compress performance criterion: 354.0
Decompress performance: 28.8 Ticks (millions)/sec.
Decompress performance criterion: 96.0
Correct = false
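For reference, the Criterion printed above multiplies throughput (ticks per microsecond, i.e. millions of ticks per second) by the compression ratio (uncompressed bytes over compressed bytes). A minimal C++ analogue of the two helpers, with hypothetical 24-byte and 8-byte tick structures standing in for MqlTick and MqlTickBidAsk:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical stand-ins for the tick structures; sizes chosen for
// illustration only (24 bytes uncompressed vs 8 bytes compressed).
struct TickFull  { double bid, ask, extra; };
struct TickSmall { float bid, ask; };

// Size of the array in bytes, as in the script's GetSize().
template <typename T>
uint64_t GetSizeBytes(const std::vector<T>& v) { return sizeof(T) * v.size(); }

// Criterion = throughput (ticks per microsecond) * compression ratio.
template <typename T1, typename T2>
double Criterion(const std::vector<T1>& decompressed,
                 const std::vector<T2>& compressed,
                 uint64_t interval_us) {
    const double performance =
        static_cast<double>(decompressed.size()) / interval_us;
    return performance * (static_cast<double>(GetSizeBytes(decompressed)) /
                          GetSizeBytes(compressed));
}
```

With 1,000,000 ticks processed in 10,000 µs and a 3:1 size ratio, the criterion is 100 × 3 = 300, which is the shape of the numbers in the output above.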
 
fxsaber #:

I'm not really into this kind of thing. The speed is already quite good. But for self-education it would be interesting to see a ready-made variant.

I'm out of the loop, too. Still trying to get into it.
Here is a code sample that is impressive:
https://www.mql5.com/ru/forum/227736/page64#comment_20414078

 
Nikolai Semko #:
I'm out of the loop, too. Still trying to get into it.
Here is a code sample that is impressive:
https://www.mql5.com/ru/forum/227736/page64#comment_20414078

Of course, you need to define the tasks for these technologies.

 
Nikolai Semko #:
I'm out of the loop, too. Still trying to get into it.
Here is a code sample that is impressive:
https://www.mql5.com/ru/forum/227736/page64#comment_20414078

Here are all the files to play with

Files:
Swirl2_OCL.mq5  14 kb
iCanvas_CB.mqh  75 kb
Files.zip  2 kb
Swirl2_GPU.mq5  10 kb
Swirl2.mq5  5 kb
 
fxsaber #:

Of course, you need to define the tasks for these technologies.

And what is there to define?
All tasks that require heavy computational performance run on GPUs, whether it's mining or AI, which is gaining momentum.
All the major cloud providers (AWS, Azure, Google Cloud) have been the main buyers of NVIDIA's chips lately, precisely because of the growing use of AI. That's why NVIDIA's stock has been rising steadily.
CPUs can't compete with GPUs in this kind of computing.
Even with my laptop's GPU, integrated into the CPU, the performance gain on this example is more than an order of magnitude: 440 vs 37 frames per second.