Thank you for your effort. Don't be sad, I won't call you "Master of efficiency" :D
I really appreciate your approach. Even though you've failed, you are still honest (unlike others).
I still have some difficulty accepting that double arithmetic is so fast and efficient nowadays.
I don't mind failing or being wrong, that's how we learn. However, I still have a feeling that it can be optimized :-D
Well, that's not so simple. Considering a period of 32, for 129122 bars there are 31*129122 coefficients, not 129121 (neglecting the fact that the oldest bars don't have enough data to be calculated correctly).
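To illustrate the count above, here is a minimal C++ sketch (MQL5 would look nearly identical). It assumes, as the figure 31*129122 suggests, that each bar contributes one slope coefficient to each of the other period-1 bars in its window; the function name is illustrative, not from the attached code.

```cpp
#include <cassert>
#include <cstdint>

// Number of slope coefficients for a median-slope filter:
// each of the `bars` windows yields (period - 1) coefficients.
int64_t coefficient_count(int64_t bars, int64_t period) {
    return (period - 1) * bars;
}
```

With period 32 and 129122 bars this gives 31 * 129122 = 4002782 coefficients, matching the post's count rather than 129121.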
BTW, I think a better approach for the OP's needs is linear regression, a Kalman filter, or something similar.
https://en.wikipedia.org/wiki/Anscombe%27s_quartet
Look at example 3. Linear regression gives a slope of 0.5. However, the point at x[13]=12.74 is clearly an outlier. So, if one takes all possible pairwise slope coefficients and takes the median of them, one gets the orange line (median slope of 0.3456). This is the simplest case of quantile regression, in a practical way. :P
So there's no chance of using the mean (LR uses it), or any other calculation based on it, to predict the future; all calculations based on the mean will fail. People forget that descriptive statistics is used to compare different known datasets.
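The median-of-pairwise-slopes idea described above (known as the Theil–Sen estimator) can be sketched in C++ like this, using the x and y values of Anscombe's third dataset from the linked Wikipedia page:

```cpp
#include <algorithm>
#include <vector>

// Median of all pairwise slopes (y[j]-y[i])/(x[j]-x[i]) over distinct x.
// For 11 points this is the median of 55 slopes.
double median_pairwise_slope(const std::vector<double>& x,
                             const std::vector<double>& y) {
    std::vector<double> slopes;
    for (size_t i = 0; i < x.size(); ++i)
        for (size_t j = i + 1; j < x.size(); ++j)
            if (x[i] != x[j])
                slopes.push_back((y[j] - y[i]) / (x[j] - x[i]));
    std::sort(slopes.begin(), slopes.end());
    return slopes[slopes.size() / 2];  // upper median
}
```

Run on Anscombe III (x = {10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5}, y = {7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73}), the result lands near 0.345, largely unaffected by the outlier at x=13, while the least-squares slope is pulled to 0.5.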
I had some ideas (eliminating some elements, which would also make the sorting much faster), but since the decisive factor is the price (regardless of the rest), we cannot "predict" where the median will fall without calculating it, and that brings us back to square one.
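One standard way to avoid re-sorting the whole window for every bar, without having to predict anything, is to keep the window in an ordered container and update it incrementally. A minimal C++ sketch (names are illustrative; MQL5 has no std::multiset, so this is the idea rather than a drop-in port):

```cpp
#include <iterator>
#include <set>
#include <vector>

// Sliding-window median: insert the new price, evict the one leaving the
// window, and read the median from the ordered multiset. Each update costs
// O(log period) for insert/erase plus O(period) to step to the median,
// versus re-sorting the window from scratch every bar.
std::vector<double> sliding_median(const std::vector<double>& price,
                                   size_t period) {
    std::vector<double> out;
    std::multiset<double> window;
    for (size_t i = 0; i < price.size(); ++i) {
        window.insert(price[i]);
        if (window.size() > period)
            window.erase(window.find(price[i - period]));
        if (window.size() == period) {
            auto it = window.begin();
            std::advance(it, period / 2);  // upper median
            out.push_back(*it);
        }
    }
    return out;
}
```

A further refinement keeps an iterator anchored at the median and nudges it left or right depending on where the incoming and outgoing prices fall, making each update O(log period).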
One of those :)
I was experimenting with something more like this:
But, as is obvious, the results (regardless of the fact that they are sometimes interesting) are just approximations compared to the "full calculation".
So, to my surprise, I didn't succeed. The main optimization idea I had was to reduce the number of division operations needed:
As this operation is done in a loop for every candle i, the exact same operation is repeated many times. So the idea was to do all the operations once and memorize the results (see the attachment for how I did it). However, it doesn't improve the speed, even though the number of operations was reduced by a factor of 16!
From 64 million to 4 million division operations, but no change in execution time. I didn't expect that. It means the CPU's double arithmetic is very efficient and caches the results very well.
Also, though this nested loop with division operations is time-consuming, the main bottleneck is ArraySort(): its speed impact is more than 3 times that of the loops. So even if the "division" optimization had worked, the global impact would have been low (~20% max).
That was an interesting exercise, even if it failed.
The code is attached (as it's the weekend, I didn't pay attention to "live" updating).
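The memorization idea described above can be sketched as follows, in C++ rather than MQL5 and with illustrative names (the attached code is what actually ran; this only shows the shape of the transformation). Since the divisor in a slope calculation is the bar distance j - k, one division per distance suffices:

```cpp
#include <vector>

// Precompute reciprocals 1/d for every bar distance d in the window:
// period - 1 divisions total, each reused for every bar and every pair.
std::vector<double> make_inv_dist(int period) {
    std::vector<double> inv(period, 0.0);  // inv[0] unused
    for (int d = 1; d < period; ++d)
        inv[d] = 1.0 / d;
    return inv;
}

// Slope between bars j and k (j > k): a multiplication by the stored
// reciprocal replaces the division by (j - k) in the inner loop.
double slope(const std::vector<double>& close, int j, int k,
             const std::vector<double>& inv_dist) {
    return (close[j] - close[k]) * inv_dist[j - k];
}
```

As the post found, this kind of rewrite does not necessarily help: modern CPUs pipeline floating-point division well enough that, with the sort dominating, the saved divisions vanish into the noise.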
Is quickselect implemented in MQL5? If not, how can I contact one of the developers to ask for it to be included?
Selection sort can also find the median, but I believe it is slower than ArraySort(), which uses quicksort. I did not test it.
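For comparison, standard C++ does ship the quickselect being asked about, as std::nth_element: it finds the median in O(n) average time without fully sorting the array. I am not aware of an MQL5 built-in equivalent, hence the question above.

```cpp
#include <algorithm>
#include <vector>

// Median via quickselect: std::nth_element partially reorders the array so
// the element at the middle position is the one a full sort would put there.
// Taking the vector by value keeps the caller's array untouched.
double median_quickselect(std::vector<double> a) {
    size_t mid = a.size() / 2;
    std::nth_element(a.begin(), a.begin() + mid, a.end());
    return a[mid];  // upper median for even-sized input
}
```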
Here is my implementation of quickselect. But there's something wrong...