Now I have a problem: I need an efficient way to sort arrays whose elements are expensive to copy (i.e. the elements are large, "heavy" structures, class objects, long strings, etc.). Common sense suggests leaving them in place and sorting some kind of pointers instead - the indices of the cells where the elements originally reside. Here is the quote from https://www.mql5.com/ru/forum/6476#comment_178318. Let's leave it at that for now, and implement it in MQL5.
Everything has already been stolen before us :)
Spreadsheets in MQL5
The input should be given a copy of the array to be sorted; uncomment the comments you need and comment out the unnecessary parts.
You're mean! You ruined the song... Or rather, you tried to. :)
// It's clear that there are good approximations. I might as well have pulled a sample from the standard library. :)
There are samples, and some excellent ones too. But still, what matters here is coding and debugging everything into working condition as a separate usable product. Moreover (which is why I'm posting it here) - as fast as possible. That is, I suggest squeezing out all possible performance, down to hundredths of a percent. :)
That's the first point. Second, for objects we need as many index arrays as there are sorting criteria, of which in the general case there may be several, plus (preferably) a function for inserting into an index array sorted by several criteria.
Same answer: Spreadsheets in MQL5.
It's all there. For a specific problem it can be reworked to manipulate rows instead of columns; the columns are built so that they can be declared as different types. If the table is of a single type, everything can be rearranged.
Made an includer with index sorting for the base types.
The default sorting is descending; to sort in ascending order, set the sorting direction flag to false.
Test results: // indexing arrays double[], int[], string[]; sequentially: raw array, sorted descending, sorted ascending
Library and test are attached.
Put the indexer in the folder "MQL5\Include\Indexes\".
Here's a sample class for working with OCL (OpenCL). Of course, some things are incomplete and awkward, but maybe someone will find it useful.
I've reworked the initialization a bit; now you can run multidimensional calculations.
Great topic!
Just now I ran into an optimization problem with an algorithm for finding a price extremum (minimum). The conditions are as follows: there is a bar, and the n bars to its left and right are below (above) its maximum:
n is a free, arbitrarily chosen value. The total period N is always odd, because the number of bars to the right plus the number to the left is always even, and the central bar - the price extremum proper - is added to it.
I didn't think much about the first version of the algorithm and wrote the most obvious code. I am now writing in C# on the WealthLab platform, but I think the essence of the problematic algorithm is easy to follow; here is its most problematic part:
The whole problem is in the second loop. It handles the left and right branches of a potential extremum simultaneously and therefore passes through only (N - 1)/2 bars, but even that is not enough. Measurements show that the time spent identifying an extremum grows with the period N, which is very, very bad:
Iterating over the periods takes time proportional to the sum of an arithmetic progression, and that is a very large value.
One possible solution is to introduce an additional variable. After all, once an extremum is identified, it is guaranteed that no bar within (N - 1)/2 bars to its right can form one, so the search for the next extremum can start at bar current_bar + (N - 1)/2. However, maxima need to be identified together with minima, and a new minimum can appear before current_bar + (N - 1)/2. Splitting the search for maxima and minima into two passes would therefore reduce the performance gain to zero. In C# the two passes could easily be split into two threads running simultaneously on two cores, but first of all I would like to find the optimal algorithm. I am waiting for the experts' help.
Well, this is not really an optimization problem.
Iterating over the periods takes time proportional to the sum of an arithmetic progression, and that is a very large value.
Finding an extremum looks like a problem of order O(n), where n is the amount of data. How you managed to make the asymptotics worse, i.e. O(n^2), I can't even imagine. Or you are confusing the terms.
The simplest approach is ArraySort(); the built-in sort is fast enough, around O(n * log(n)). But it is probably redundant for this problem.
You could come up with something recursive that would be faster.
How long does the minimum search take, and over how many bars? I just don't believe that over 100 bars the search for a maximum takes a second and a half.
The simplest approach is ArraySort(); the built-in sort is fast enough. But it is probably redundant for this task.
The best comparison sorting is O(n*log(n)). Definitely redundant.
We could come up with something recursive which would be faster.
Slower. Recursion is more often than not evil. Recursive? This is probably a case where no matter how you do it, the speed will be about the same.
Regarding the code:
The loops for min and for max must be explicitly separated, and the loop should be exited immediately on the first failed comparison.
In principle, yes. But still no more than O(n).
OCL would help here. The asymptotics would remain the same, of course, but the speed could well increase a hundredfold.