A trading strategy based on Elliott Wave Theory

 
Sorry for butting in so long after this topic was discussed, but that is how it worked out. It is probably no longer very relevant, but I still hope someone will help me sort out my questions.

To apply the approach suggested in the book, you have to do exactly what the book describes. The book gives a detailed example ONLY for Brownian traffic! That is, it shows how a sample of Brownian "inflows" should look visually at different Hurst exponents. If you take a random number generator and create interdependent steps in white noise by setting their probability of occurrence, you will get roughly the same pictures as in the book: first you get fractal observation noise (a sample of "inflows"), and by summing it up you get the physical motion of something (in this case, the oscillogram of Brownian noise). From the amplitude of the physical motion you will see that the larger your Hurst exponent (the probability of interdependent steps), the larger the amplitude spread of the physical motion itself.

What can we ultimately understand from the example in the book? Only what I have already said: the greater your Hurst exponent (probability of interdependent transactions), the greater the amplitude spread of the physical movement. Now please answer: what exactly does THIS information give us in predictive terms? I can answer precisely - NOTHING, beyond what I have now written twice (it only lets us determine the degree of interdependence of transactions)!

What do the authors do next in the book? They apply the proposed calculation (Brownian motion analysis) to various capital markets. On all (or almost all) markets the Hurst exponent is greater than 0.5; for EURUSD in particular it is 0.64, if I remember correctly. So what next? WELL, NOTHING! Except that we now know that trades on the markets are mostly interdependent. But we effectively knew that all along: people are more likely to go with the trend than against it, looking at which direction the price moved yesterday. That is why markets show periods of a clear trend continuing the previous movement. It is obvious to everyone.

And Vladislav has tried to apply this approach to predicting linear regression channels. That is, he significantly changed the way the "inflows" are calculated from the existing price movement in order to answer the question: "What will happen to the channel in the very near future - will it continue, or will it cease to exist?"
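To make that concrete, here is a minimal sketch of such a generator (in Python rather than MQL4, purely for brevity; the generation scheme and names below are my own assumptions, just one simple way to produce interdependent steps):

```python
import random

def persistent_inflows(n, p_follow, seed=0):
    """+/-1 'inflows' where each step repeats the previous one with
    probability p_follow; p_follow > 0.5 means persistent (trendy)
    increments, i.e. an effective Hurst exponent above 0.5."""
    rng = random.Random(seed)
    steps = [rng.choice((-1, 1))]
    for _ in range(n - 1):
        if rng.random() < p_follow:
            steps.append(steps[-1])      # follow the previous step
        else:
            steps.append(-steps[-1])     # reverse it
    return steps

def cumulate(inflows):
    """Sum the inflows to get the 'physical motion' (the oscillogram)."""
    path, total = [], 0.0
    for v in inflows:
        total += v
        path.append(total)
    return path

for p in (0.5, 0.6, 0.7):
    path = cumulate(persistent_inflows(10000, p))
    print(f"p_follow={p}: amplitude spread = {max(path) - min(path):.0f}")
```

Running it shows exactly the effect described: the higher the probability of following the previous step, the wider the amplitude spread of the summed motion.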



Does the highlighted "SOFTLY THERE" indicate that I was mistaken? Possibly, but it seems I tried very hard and did everything by the book. And in it the general approach to calculation of the index is stated, and as an example the result is given for Brownian motion and for cycle 19 of Wolf series (honestly, I don't know what it is).

I checked my algorithm on a random walk and got an almost correct result (I attached the plot of log(R/S) against log(N)).

In Vladislav's algorithms and in yours, an approximate estimate of the exponent itself is ultimately made using the formula H = log(R/S)/log(0.5*N) - exactly as in the book. And, as I wrote before, I decided to implement a more accurate algorithm.
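For reference, this is the quick estimate I mean, as a minimal sketch (Python; the exact definitions of R and S below follow the standard rescaled-range procedure and are my assumption, since the posts do not spell them out):

```python
import math

def rs_statistic(x):
    """Classical rescaled range R/S of a sample of 'inflows'."""
    n = len(x)
    mean = sum(x) / n
    cum, dev = 0.0, []
    for v in x:
        cum += v - mean              # cumulative deviation from the mean
        dev.append(cum)
    r = max(dev) - min(dev)          # range of the cumulative deviations
    s = math.sqrt(sum((v - mean) ** 2 for v in x) / n)
    return r / s

def hurst_empirical(x):
    """The quick book estimate: H = log(R/S) / log(0.5 * N)."""
    return math.log(rs_statistic(x)) / math.log(0.5 * len(x))
```

With the persistent inflows from the earlier sketch, hurst_empirical comes out above 0.5 whenever p_follow is above 0.5.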

Thanks for the detailed explanation; I had only a vague grasp of some details of Vladislav's approach. Now it seems to be clearing up. I am not questioning your and Vladislav's calculations, especially since they work fine.
:о)))))
 
Does the highlighted "EXACTLY THERE" indicate that I am mistaken?

No, you are not wrong! By "EXACTLY THE SAME" I meant that the approach used in the book is only suitable for solving the problem for which it was developed, and which I have repeated several times: "to estimate the interdependence of transactions (inflows) in some process similar to a Brownian process". But using it to solve our problem of "forecasting movement along the channel in the very near future" in the form in which it is given in the book is definitely NOT possible! Vladislav adapted it to our problem in terms of sampling the "inflows": the mean is taken to be the forecast value of a linear regression channel built on a sample that does not include the current bar. If you think deeply about the meaning of the revision he proposed, it could fill at least a PhD thesis (in mathematics or economics, whichever is emphasized more), with appropriate elaboration and presentation of additional material ;o))))! Vladislav, think about it, if you need it at all!
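If I sketch that revision the way I understand it (Python; this is only my reading of the description above, not Vladislav's actual code, and channel_inflows is a name I made up):

```python
def channel_inflows(close, window):
    """'Inflows' in the spirit of the revision described above (my reading,
    not Vladislav's actual algorithm): for each bar, fit a linear regression
    to the previous `window` closes (excluding the current bar), extrapolate
    it one bar ahead, and take the deviation of the current close from that
    forecast as the inflow."""
    inflows = []
    xs = list(range(window))
    mx = (window - 1) / 2.0
    sxx = sum((u - mx) ** 2 for u in xs)
    for i in range(window, len(close)):
        ys = close[i - window:i]
        my = sum(ys) / window
        slope = sum((u - mx) * (y - my) for u, y in zip(xs, ys)) / sxx
        forecast = my + slope * (window - mx)   # regression value at x = window
        inflows.append(close[i] - forecast)
    return inflows
```

The point of the substitution is that the "mean" against which deviations are accumulated is no longer the sample average but the channel's own forecast.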
 
I forgot to add: once there is a more accurate algorithm, all that remains is to feed it the modified "inflow" data, following the advice of the old-timers - well thought out. :о)))
 
Yes, but the algorithm given in the book does not impose any special requirements on the content of the "inflow". At least I did not find anything of the kind there, and one of the questions we discussed was precisely what the inflow should be. I have received valuable advice from you. Thank you.
 
Take a sample of, say, 10000 bars, cut it into non-overlapping intervals of 20 bars and calculate the average Hurst; then cut it into intervals of 21, 22 and so on, up to 5000 bars. Then build an approximating straight line. It is just not clear what to do with it in our case.

It is not the average Hurst that is calculated, but two coordinates, Y=Log(R/S) and X=Log(N). And what to do with them also seems clear.
There is an equation Y=Y(X) which looks like this: Log(R/S) = H*Log(N) + A. You need to build a linear regression and determine its coefficient and free term; Hurst is the coefficient.
And a simple ratio of logarithms is not Hurst at all.
IMHO
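A minimal sketch of that regression approach (Python; the averaging of R/S over non-overlapping windows of each size is my assumption, based on the procedure quoted above):

```python
import math

def rs_statistic(x):
    """Rescaled range R/S of one window (same helper as in the earlier sketch)."""
    n = len(x)
    m = sum(x) / n
    cum, dev = 0.0, []
    for v in x:
        cum += v - m
        dev.append(cum)
    s = math.sqrt(sum((v - m) ** 2 for v in x) / n)
    return (max(dev) - min(dev)) / s

def hurst_regression(x, window_sizes):
    """H as the slope of Log(R/S) = H*Log(N) + A, with R/S averaged
    over non-overlapping windows of each size N."""
    pts = []
    for n in window_sizes:
        rs = [rs_statistic(x[i:i + n]) for i in range(0, len(x) - n + 1, n)]
        pts.append((math.log(n), math.log(sum(rs) / len(rs))))
    k = len(pts)
    mx = sum(u for u, _ in pts) / k
    my = sum(v for _, v in pts) / k
    h = (sum((u - mx) * (v - my) for u, v in pts)
         / sum((u - mx) ** 2 for u, _ in pts))
    return h, my - h * mx          # slope H and free term A
```

Note that the simple ratio H = Log(R/S)/Log(0.5*N) is equivalent to forcing the free term to A = -H*Log(2) instead of fitting it, which is exactly why it is not the regression coefficient.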


No, it's an average Hurst over these samples :)
 

I read about this algorithm only today, in the book "Fractal Analysis". I had implemented it differently, using different formulas: I go from 1 to N and for each current n I calculate log(R/S) and log(N). Then I build an approximating straight line y(x)=ax+b. The coefficient a is the Hurst exponent. There may be a fundamental mistake here.
:о)

PS: Can't it be calculated that way?
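In code, the way I read this variant (Python sketch; hurst_expanding is my own name, and it reuses rs_statistic() from the sketch above):

```python
import math

def hurst_expanding(x, n_min=8):
    """One point (log n, log R/S of x[:n]) per expanding prefix, then an
    OLS line y = a*x + b; the slope a is taken as the Hurst estimate.
    Uses rs_statistic() from the earlier sketch; n_min keeps the shortest
    prefixes, where R/S is unstable, out of the fit."""
    pts = [(math.log(n), math.log(rs_statistic(x[:n])))
           for n in range(n_min, len(x) + 1)]
    k = len(pts)
    mx = sum(u for u, _ in pts) / k
    my = sum(v for _, v in pts) / k
    return (sum((u - mx) * (v - my) for u, v in pts)
            / sum((u - mx) ** 2 for u, _ in pts))
```

One caveat with this variant: consecutive prefixes overlap almost entirely, so the regression points are strongly correlated rather than independent, which may be part of why the results look questionable.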
 
<br / translate="no"> After reading additional material on the calculation of the Hurst index, I came across a study by Federer. He claimed that the empirical law - H=Log(R/S)/Log(0.5*N) works rather badly and gives relatively correct data only for small samples (however, nothing was said about the size of these samples). So, I decided to implement Hearst index calculation strictly according to methodical materials (it seems that it turned out to be even worse than Mr. Feder warned).

I realize the code may not be optimal in terms of performance (lots of function calls and so on), but the main thing is that I wanted to check whether I understood the logic of the calculation correctly, because the results seem questionable to me, so I decided to ask people who know.
...
PS: I hope the forum participants will help me figure this out. I would be very grateful if Vladislav could spare some time and explain where I go wrong with such a simple methodology.



Take 1000 Brownian particles, each in its own coordinate grid at the zero point. Start bombarding these points with random forces in random directions. Hurst argues that over time the distance between a particle and the origin of coordinates (the length of the vector) will be proportional to the square root of time. Why 1000 particles? For good averaging. This problem is not hard to program and test.
 
Yeah, here's a thought: if for a supposedly Brownian particle the Hurst exponent is greater than 0.5, there is a force pushing it away from the centre of coordinates (like unipolar magnets); if it is less than 0.5, the force attracts it to the centre (a sort of potential field). That's the physical meaning.
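It really is easy to program; a minimal sketch (Python; taking the particles as 2-D is my assumption, the idea works in any dimension):

```python
import math
import random

def mean_distances(n_particles=1000, checkpoints=(100, 400, 900), seed=0):
    """2-D random walkers starting at the origin, each kicked by a unit
    force in a uniformly random direction at every step. Reports the mean
    distance from the origin at the given times."""
    rng = random.Random(seed)
    xs = [0.0] * n_particles
    ys = [0.0] * n_particles
    out = {}
    for t in range(1, max(checkpoints) + 1):
        for i in range(n_particles):
            a = rng.uniform(0.0, 2.0 * math.pi)
            xs[i] += math.cos(a)
            ys[i] += math.sin(a)
        if t in checkpoints:
            out[t] = sum(math.hypot(px, py)
                         for px, py in zip(xs, ys)) / n_particles
    return out

for t, r in mean_distances().items():
    # if distance ~ sqrt(t), the ratio r/sqrt(t) stays roughly constant
    print(f"t={t}: mean distance {r:.1f}, sqrt(t) = {math.sqrt(t):.1f}")
```

If the claim holds, the ratio of the printed mean distance to sqrt(t) stays roughly constant as t quadruples from checkpoint to checkpoint.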
 
<br / translate="no">
Take 1000 random Brownian particles in a thousand coordinate grids at zero point. Along these points, random forces start bombarding in a random direction. Here's Hirst's reasoning that over time the distance between the particle and the origin of the corrdinates (vector length) will be proportional to the square root of time. Why 1000 chats? For good averaging. This problem is not hard to program and test.


I believe him. But Feder argued that if an exact value is needed, you should also calculate more accurately. So I tried to do just that. And today I found out that Mr. Peters does not calculate it that way at all.
 
Dear solandr, I would like to ask you a favour, if it is not too much trouble. Send me, as a text file, the sample (inflow) for which you calculated the exponent (you can e-mail it to grasn@rambler.ru), and I will try to calculate the Hurst exponent with my algorithm and post the result. Right now I am simply using Close[i] as the inflow.

A simple column of numbers will be enough - I will do the rest myself.