
For example, the FGN package has the function HurstK(z), which produces a nonparametric estimate of the Hurst coefficient and gives a much more accurate value.
Replace the phrase "Hurst coefficient" in the highlighted statement with, say, "Pearson correlation coefficient", and then perhaps you will feel how absurd the highlighted statement is.
I won't elaborate, as all of my posts were really directed at the author of the article.
I looked at his profile, and my impression is that the man tends to maintain a certain level in his reasoning and actions. Using the Hurst calculation as an example, I tried to convey to the author that the level of the article can be ensured ONLY by taking into account the results already available in the relevant field. And that level, the point of reference, the baseline from which everything starts, is exactly what R provides. One could take another system, for example Python, or various paid ones... But in any case one should not pretend that this article is the first word on the topic.
I was not interested in everything else.
Read my comment above. If we insert Pearson into the phrase, it somehow becomes ridiculous. If we insert Hurst, it does not. Why is that?
Apparently it is because Pearson has a clear-cut calculation algorithm, while Hurst is anything but.
There is Hurst-DmitriyPiskarev, there is Hurst-R, and there are many others. The funny thing is that they cannot be compared, because there can be no comparison criterion when there is no clear definition.
That is why it is funny to hear that one Hurst variant is more accurate than another. They are simply different quantities which, through a historical accident, people call by the same name: Hurst.
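The point about incomparable definitions is easy to demonstrate. Below is a minimal Python sketch (not the author's code and not the FGN package — just two textbook estimators written out for illustration): a single-window rescaled-range (R/S) estimate and an aggregated-variance estimate. Run on the same i.i.d. noise series (true H = 0.5), they return noticeably different numbers, even though both are called "the Hurst exponent".

```python
import math
import random

def hurst_rs(x):
    """Single-window rescaled-range (R/S) estimate: H ~ log(R/S) / log(n)."""
    n = len(x)
    mean = sum(x) / n
    dev = [xi - mean for xi in x]
    # cumulative deviations from the mean
    z, c = [], 0.0
    for d in dev:
        c += d
        z.append(c)
    r = max(z) - min(z)                         # range of cumulative deviations
    s = math.sqrt(sum(d * d for d in dev) / n)  # standard deviation
    return math.log(r / s) / math.log(n)

def hurst_var(x, scales=(1, 2, 4, 8, 16)):
    """Aggregated-variance estimate: Var(mean over blocks of size m) ~ m^(2H-2)."""
    pts = []
    for m in scales:
        k = len(x) // m
        means = [sum(x[i * m:(i + 1) * m]) / m for i in range(k)]
        mu = sum(means) / k
        v = sum((b - mu) ** 2 for b in means) / (k - 1)
        pts.append((math.log(m), math.log(v)))
    # OLS slope of log-variance vs log-scale; H = 1 + slope/2
    mx = sum(p[0] for p in pts) / len(pts)
    my = sum(p[1] for p in pts) / len(pts)
    slope = (sum((px - mx) * (py - my) for px, py in pts)
             / sum((px - mx) ** 2 for px, py in pts))
    return 1 + slope / 2

random.seed(1)
noise = [random.gauss(0, 1) for _ in range(4096)]  # i.i.d. noise, "true" H = 0.5
print(hurst_rs(noise), hurst_var(noise))           # two different numbers, same series
```

Both numbers hover around 0.5, but they do not coincide, and neither has any claim to being "the" Hurst value without a fixed definition and an error estimate.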
I absolutely agree with you that this Hurst business is extremely vague, both in terms of the calculation algorithm and its interpretation.
I am writing about something completely different: if a person presents an algorithm, he should justify that algorithm. Code that implements an incorrect algorithm will be incorrect as well.
If you look specifically at the algorithm given in the article, it uses linear regression estimated by least squares. This piece of the article bears no relation to reality at all, because least-squares estimation of linear regression coefficients yields ESTIMATES of two random variables: the intercept "a" and the slope "b". If the author had used, for example, the lm() function from R, he would have seen surprising things: the value of "b", which he treats as the Hurst coefficient, exists on paper, but in reality it may not exist at all, because the standard lm() function reports, besides the value of "b" itself, its standard error and significance level. Quite often with linear regression the confidence level turns out to be well below 90%.
Here is an example of a standard linear regression estimation table with many variables
Estimate Std. Error z value Pr(>|z|)
(Intercept) -338.88337 152.55692 -2.221 0.026327 *
rsi_eurusd 0.01237 0.01363 0.908 0.363934
macd_eurusd 13.94972 4.36041 3.199 0.001378 **
trix_eurusd -741.34816 148.31309 -4.999 0.00000057768 ***
sig_eurusd 1118.41702 212.31435 5.268 0.00000013811 ***
trix_eurusd_trend NA NA NA NA
trix_gbpusd 407.84268 131.29586 3.106 0.001895 **
sig_gbpusd -918.57282 202.12341 -4.545 0.00000550361 ***
trix_gbpusd_trend NA NA NA NA
trix_eurgbp 264.59572 115.74195 2.286 0.022249 *
sig_eurgbp -795.43634 159.17763 -4.997 0.00000058180 ***
trix_eurgbp_trend NA NA NA NA
trix_usdchf -76.32606 27.15637 -2.811 0.004945 **
sig_usdchf 14.28410 31.35889 0.456 0.648747
trix_usdjpy 5.42010 8.93393 0.607 0.544059
sig_usdjpy 65.28629 11.08181 5.891 0.00000000383 ***
trix_usdjpy_trend NA NA NA NA
trix_usdcad 32.76774 21.62655 1.515 0.129731
sig_usdcad -25.12268 25.27109 -0.994 0.320161
trix_usdcad_trend NA NA NA NA
fit.eurusd -72.05260 149.20763 -0.483 0.629166
fit.gbpusd -304.38920 121.47457 -2.506 0.012218 *
fit.eurgbp 253.58306 132.96820 1.907 0.056508 .
fit.usdchf -387.54743 100.37962 -3.861 0.000113 ***
fit.usdjpy 1.82458 0.41496 4.397 0.00001097684 ***
fit.usdcad -133.88962 81.83316 -1.636 0.101813
fit.eurusd.2 25.03730 160.94619 0.156 0.876377
fit.gbpusd.2 423.37220 143.07774 2.959 0.003086 **
fit.eurgbp.2 -227.97261 192.34022 -1.185 0.235916
fit.usdchf.2 426.74965 101.14174 4.219 0.00002450374 ***
fit.usdjpy.2 -2.15458 0.42133 -5.114 0.00000031587 ***
fit.usdcad.2 321.48459 86.36230 3.723 0.000197 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Only the values marked with asterisks can be trusted at the stated significance level. The rest are fiction: the number is printed, but statistically it does not exist!
That is what this is about: accuracy and careful attention to every computed result.
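The warning that least-squares estimates are themselves random variables can be illustrated without R. The sketch below is a hypothetical pure-Python stand-in for the kind of output lm() produces (using a normal approximation in place of the exact t distribution): it fits y = a + b*x and returns the slope together with its standard error and a two-sided p-value. With noisy data the slope is printed either way; the p-value is what tells you whether it can be trusted.

```python
import math
import random

def ols_with_pvalue(x, y):
    """Simple OLS fit y = a + b*x; returns slope b, its standard error, and a
    two-sided p-value (normal approximation to the t distribution)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    a = my - b * mx
    resid = [yi - (a + b * xi) for xi, yi in zip(x, y)]
    sigma2 = sum(r * r for r in resid) / (n - 2)   # residual variance
    se_b = math.sqrt(sigma2 / sxx)                 # standard error of the slope
    z = b / se_b
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # two-sided p-value
    return b, se_b, p

random.seed(2)
xs = [i / 10 for i in range(100)]
ys = [0.5 * xi + random.gauss(0, 10) for xi in xs]  # true slope 0.5, heavy noise
b, se, p = ols_with_pvalue(xs, ys)
print(f"slope = {b:.3f} +/- {se:.3f}, p = {p:.3f}")
```

With noise this heavy, the fitted slope may well come out statistically indistinguishable from zero: the figure is there, but there is nothing behind it — exactly the situation the asterisks in the lm() table are meant to flag.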
СанСаныч Фоменко:
For example, the FGN package has the function HurstK(z), which produces a nonparametric estimate of the Hurst coefficient and gives a much more accurate value.
fxsaber:
Replace the phrase "Hurst coefficient" in the highlighted statement with, say, "Pearson correlation coefficient", and then perhaps you will feel the absurdity of the highlighted statement.
Before drawing any conclusions, one needs to understand what data the regression is computed from.
San Sanych, I'm sorry, but your "expert judgements" have really become tiresome. We never see anything from you except the endless pushing of R. At least post some MQL code somewhere, so it is clear that you understand something.
Maxim, thank you for your comment!
Yes, you are right: calculating the Hurst coefficient is just a basis for getting at least some idea of how mathematical statistics can be applied to the study of time series. I support your remark, and I also think it would be naive and wrong to rely on coefficient analysis alone for forecasting market dynamics. A strategy should, of course, be built on aggregate measures, using various indicators and sources.
In the next article I will definitely show my understanding of fractal analysis.
Thanks again for your comment.
P.S. I was asked to do a review of the MT5 tools for such analysis. I took the opportunity to promote it.