Mladen Rakic: I removed it from inside the function since this way it will be created (and resized) only if the size of the input arrays changes. That way you avoid repeated array creation (on each new call of the function, since the array remains allocated between calls) and resizing within the loop (which, when you take into account that, for example, for period 32 the array is of size 496 - i.e., creating a new array and resizing it 495 times - can slow down the overall work of the function significantly)

It would not have been resized 495 times but (rates_total-InpPeriod+1) times (on history data...). Anyway, it doesn't matter; you are right about this point.

However, the main speed problem is that the same calculations are repeated again and again for each candle. The speed could probably be improved by a factor of 10 (depending on the period value).
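The allocation pattern being debated (a persistent buffer regrown only when the window size changes, instead of created and resized on every call) can be sketched roughly like this. All names here are hypothetical; this is not the actual indicator code:

```cpp
#include <cstddef>
#include <vector>

// Sketch: keep the work buffer outside the per-call function so it is
// created once and regrown only when the input size changes. A window of
// n bars needs n*(n-1)/2 pair slots, e.g. 32 bars -> 496 slots.
static std::vector<double> g_pairBuffer;  // persists between calls

std::size_t preparePairBuffer(std::size_t n) {
    std::size_t need = n * (n - 1) / 2;
    if (g_pairBuffer.size() != need)      // resize only on a size change
        g_pairBuffer.resize(need);
    return g_pairBuffer.size();
}
```

With this pattern, repeated calls with the same window size touch no allocator at all.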

Alain Verleyen: It would not have been resized 495 times but (rates_total-InpPeriod+1) times (on history data...). Anyway, it doesn't matter; you are right about this point.

However, the main speed problem is that the same calculations are repeated again and again for each candle. The speed could probably be improved by a factor of 10 (depending on the period value).

Alain

I was referring to the original algorithm - it created a new array on each call and then resized it in a loop for each newly calculated element.

As for speed: I am not sure that it can be improved - it is one of the typical "heavy" algorithms. Just for those who are not aware of it, some examples of the loop count (within that function) depending on the period (the size of the input arrays):

period 25 - loops 300 times

period 50 - loops 1225 times

period 100 - loops 4950 times

I was checking it, but so far I haven't found any way to avoid that - maybe someone else will come up with a solution.
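For reference, those counts are just the number of (i, j) bar pairs in the window, which follows the closed form n(n-1)/2 - a standalone check, not the indicator code:

```cpp
// Number of iterations of a double loop over all pairs (i, j), i < j,
// in a window of n bars: n * (n - 1) / 2.
long pairLoops(long n) { return n * (n - 1) / 2; }
```

This also matches the array size mentioned earlier: 32 bars give 32*31/2 = 496 pairs.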

Mladen Rakic: Alain

I was referring to the original algorithm - it created a new array on each call and then resized it in a loop for each newly calculated element.

As for speed: I am not sure that it can be improved - it is one of the typical "heavy" algorithms. Just for those who are not aware of it, some examples of the loop count (within that function) depending on the period (the size of the input arrays)

Ah yes, I see what you mean now.

It can be improved - I told you: probably 10 times (on average) ;-)

Of course that's not so easy.

EDIT: If you send me the original I will work on it and publish it.

It simply can't. It calculates the combination of all possible coefficients. The next step is to calculate their quantiles. So I expect the median coefficient to cross exactly through the middle of all points, while coefficients that are just too far out are ignored. So what I want is the most likely slope.

I also wrote the percent-rank code, so I can calculate the 5th, 50th (median) and 95th percentile coefficients. So I need them all. The original code also weights them by volume, simply by making copies of them. Using R or NumPy is somewhat fast, but not usable inside MT5.

You can also look up Quantile Regression by Koenker. But I don't need to minimize anything, because I use the last price and forecast the percent value using the coefficient. :)

As you can see, the center line represents the median coefficient, while the upper and lower ones represent the 5th and 95th percentile coefficients. Notice the asymmetry, and how the angles are slightly different.
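The approach described - all pairwise slopes, then quantiles of the sorted list - can be sketched as follows. Function and parameter names are hypothetical, and the real indicator additionally weights the slopes by volume, which is omitted here:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Sketch: compute every pairwise slope (y[j]-y[i])/(j-i) over the window,
// sort, and read off a quantile. The 50th percentile is the Theil-Sen
// style "most likely" slope; the 5th and 95th give the outer lines.
double slopeQuantile(const std::vector<double>& y, double q) {
    std::vector<double> slopes;
    for (std::size_t i = 0; i < y.size(); ++i)
        for (std::size_t j = i + 1; j < y.size(); ++j)
            slopes.push_back((y[j] - y[i]) / double(j - i));
    std::sort(slopes.begin(), slopes.end());
    // simple nearest-rank quantile pick
    std::size_t k = (std::size_t)(q * (slopes.size() - 1));
    return slopes[k];
}
```

The double loop over pairs is exactly the n(n-1)/2 cost discussed above, which is why the function is heavy for large periods.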


Alain

All the calculation is done in this part:

And that is the part that needs to be optimized.


Mladen Rakic: Alain

All the calculation is done in this part:

And that is the part that needs to be optimized.

You are wrong; send me the original and I will show you. I will not build the indicator from the snippets posted here.

If you are not interested, that's OK.

Mladen Rakic: Alain

All the calculation is done in this part:

And that is the part that needs to be optimized.

or

I really do not know which one is faster. But considering that all the (time(j)-time(i)) differences are constant, it will do.
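The posted snippets are not shown here, but the equivalence being claimed - that on a regular time grid the two slope variants agree - can be illustrated with a small sketch (hypothetical names):

```cpp
// With a fixed bar spacing `step`, time(j) - time(i) == (j - i) * step,
// so dividing by the time difference or by the index difference times
// `step` yields the same slope; with `step` factored out entirely,
// (j - i) alone still preserves the ordering of the slopes, and hence
// their quantiles.
double slopeByTime(double yi, double yj, long ti, long tj) {
    return (yj - yi) / double(tj - ti);
}
double slopeByIndex(double yi, double yj, long i, long j, long step) {
    return (yj - yi) / double((j - i) * step);
}
```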