Questions about OOP - page 2

 

https://www.mql5.com/en/docs/basis/variables/local#stack

In every MQL5 program, a special memory area called stack is allocated for storing local function variables that are created automatically.

Static local variables are stored in the same place where other static and global variables are stored - in a special memory area, which exists separately from the stack. Dynamically created variables also use a memory area separate from the stack.

With each function call, a place on the stack is allocated for internal non-static variables. After exiting the function, the memory is available for use again.

Therefore, for large local data it is better to use dynamic memory: when entering a function, allocate the memory required for local needs from the system (new, ArrayResize()), and when exiting the function, release it (delete, ArrayFree()).

What memory area do local objects (instances of a class) with an automatic pointer use?

class Foo {};

void OnStart()
  {
   Foo *a = new Foo; // This object is allocated in a memory area separate from the stack
   Foo b;            // In what memory area is this object allocated?
  }
 
Vladislav Boyko #:

What memory area do local objects (instances of a class) with an automatic pointer use?

Usually they are created on the heap.

The heap grows bottom-up and will not collide with the stack, which grows top-down.

Unless, of course, you are running out of available memory.

Your second example is created on the stack. Depending on what members it has, it might contain a dynamic array, which would be allocated on the heap.
 
Dominik Egert #:
Usually they are created on the heap.

The heap grows bottom-up and will not collide with the stack, which grows top-down.

Unless, of course, you are running out of available memory.

Your second example is created on the stack. Depending on what members it has, it might contain a dynamic array, which would be allocated on the heap.

Thanks for your answer.

I can't say I completely understood😄 I need to figure this out... Looks like this requires C++ knowledge

 
Vladislav Boyko #:

Thanks for your answer.

I can't say I completely understood😄 I need to figure this out... Looks like this requires C++ knowledge

The stack is allocated at the highest available address in memory and works its way down as it grows.

The heap is allocated from the lowest memory address and grows upward.


 
Dominik Egert #:
The stack is allocated at the highest available address in memory and works its way down as it grows.

The heap is allocated from the lowest memory address and grows upward.

I'm trying to understand whether allocating memory to an object is costly in terms of performance.

With each tick, I collect all current orders into one complex object (which consists of other objects). I'm guessing that the total size of this object is about one kilobyte.

I'm wondering if the EA would work faster if this object existed all the time instead of being created and deleted every tick.

I believe that given the small size of the object, the time spent on memory allocation can be neglected. But I can't be sure about it.

 
Vladislav Boyko #:
I collect all current orders into one complex object (which consists of other objects)

I'm talking about market and pending orders in MT4 (not historical ones). This is just a clarification so that you understand my example correctly, since this topic is devoted to MQL5.

 
Vladislav Boyko #:

I'm trying to understand whether allocating memory to an object is costly in terms of performance.

With each tick, I collect all current orders into one complex object (which consists of other objects). I'm guessing that the total size of this object is about one kilobyte.

I'm wondering if the EA would work faster if this object existed all the time instead of being created and deleted every tick.

I believe that given the small size of the object, the time spent on memory allocation can be neglected. But I can't be sure about it.

So test it. With the Strategy Tester you can easily check that and see what is faster.

In general, even a small improvement can be very useful if you are using optimizations. Even if it's negligible when running on a live chart.

 
Alain Verleyen #:

So test it. With the Strategy Tester you can easily check that and see what is faster.

In general, even a small improvement can be very useful if you are using optimizations. Even if it's negligible when running on a live chart.

I'll try to check this in the next couple of days.

But my initial conclusion is that in most cases this is not advisable (for small data). It makes the code more cumbersome and inconvenient, and accordingly increases the cost of writing the code and of changing it in the future if necessary.

And if you assume this gives some performance gain on small objects, then by the same logic you could abandon OOP, make all variables global and stop using functions - macros at most😄

 
Vladislav Boyko #:

I'll try to check this in the next couple of days.

But my initial conclusion is that in most cases this is not advisable (for small data). It makes the code more cumbersome and inconvenient, and accordingly increases the cost of writing the code and of changing it in the future if necessary.

And if you assume this gives some performance gain on small objects, then by the same logic you could abandon OOP, make all variables global and stop using functions - macros at most😄

Generally speaking, the stack is usually faster for one simple reason: it is already in the CPU cache.

CPU caches are built up of cache lines (typically 64 or 128 bytes), and you can fit most objects into the cache. So, if you want every possible performance gain, make your data local and keep each chunk of work at a reasonable size, so that it fits into the cache (the cache is also used for other tasks). L1 cache nowadays is on the order of 32-128 KB per CPU core.

But much more interesting is to avoid branches in your code. Branch mispredictions are extremely expensive.

See these two videos explaining why, and how it works:




 
Dominik Egert #:

But much more interesting is to avoid branches in your code. Branch mispredictions are extremely expensive.

This is mainly something for the compiler and hardware, not something for the average MQL coder.

The example in the second video with the double loops is very bad. Unfortunately we can't inspect the assembler code produced by the MQL compiler, but here is the C++ result:

int c;

int main() {
    int i,j;
    for(i=0;i<100;i++)
        for(j=0;j<4;j++)
            c++;
    return c;
}
main:
 mov    eax,DWORD PTR [rip+0x2fee]        # 404014 <c>
 add    eax,0x190                         // The loops are optimized and the value 400 is directly set in the binary code
 mov    DWORD PTR [rip+0x2fe3],eax        # 404014 <c>
 ret
 cs nop WORD PTR [rax+rax*1+0x0]
 nop    DWORD PTR [rax+0x0]

MQL coders don't need to worry much about "branches in your code".

Reason: