Volumes, volatility and the Hurst index - page 13

 
Andrei01:

Doesn't the first postulate contradict the second?

If there are no statistics, or they are meaningless, then how can you apply probability theory, which deals only with meaningful statistics and meaningful processes?


No, that's exactly the point. I have pointed out several times: there is no quotation process "as a whole". In other words, there is no whole, but there are unrelated parts (hence the 0.5 for the whole process), and each part, if identified, offers a good chance.

PS: this is a separate, big topic

 
Farnsworth:

No, that's exactly the point. I have pointed out several times: there is no quotation process "as a whole". In other words, there is no whole, but there are unrelated parts (hence the 0.5 for the whole process), and each part, if identified, offers a good chance.

If a random process can be represented by several constituent independent processes, then why would the aggregate statistics of those processes be meaningless?
 
Candid:

The question here is not what definition Hurst personally gave, but what the officially accepted definition is of the quantity called the Hurst exponent.

And if the definition through the range is not the definition, then what is? The question is not rhetorical; I am genuinely curious.


OK, you've got me confused, which surprises me, knowing you :o). Apparently I didn't catch the subtle thread of your reasoning. I've been away from that research for about 2-3 years now. I'll have to recall more precisely what exactly Hurst meant and how it was understood back then :o).
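For reference, the textbook definition usually cited here is the rescaled range: over a window of n increments, the range R of the cumulative deviations from the window mean is divided by the window's standard deviation S, and H is taken as the slope of log(R/S) against log(n). A minimal Python sketch of that definition (an illustration only, not the algorithm debated later in this thread; window sizes and series length are arbitrary):

```python
# Sketch of the rescaled-range (R/S) estimate of the Hurst exponent.
import math
import random

def rs(increments, n):
    """Average R/S over non-overlapping windows of n increments."""
    values = []
    for start in range(0, len(increments) - n + 1, n):
        window = increments[start:start + n]
        mean = sum(window) / n
        dev = [x - mean for x in window]
        cum, c = [], 0.0
        for d in dev:                                  # cumulative deviations from the mean
            c += d
            cum.append(c)
        r = max(cum) - min(cum)                        # range of the cumulative deviations
        s = math.sqrt(sum(d * d for d in dev) / n)     # standard deviation of the window
        if s > 0:
            values.append(r / s)
    return sum(values) / len(values)

# i.i.d. Gaussian increments (a plain random walk): the slope should come out near 0.5
rng = random.Random(0)
increments = [rng.gauss(0.0, 1.0) for _ in range(2 ** 14)]

sizes = [2 ** k for k in range(4, 11)]
log_n = [math.log(n) for n in sizes]
log_rs = [math.log(rs(increments, n)) for n in sizes]

# least-squares slope of log(R/S) against log(n); short windows bias it slightly above 0.5
mx, my = sum(log_n) / len(log_n), sum(log_rs) / len(log_rs)
h = sum((x - mx) * (y - my) for x, y in zip(log_n, log_rs)) / sum((x - mx) ** 2 for x in log_n)
print(f"estimated H = {h:.2f}")
```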

 
Andrei01:
If a random process can be represented by several constituent independent processes, then why would the aggregate statistics of those processes be meaningless?

And what do you want to investigate by taking statistics over such a series as a whole? A characteristic of what exactly - of which object - do you want to obtain?
 
Farnsworth:

And what do you want to investigate by taking statistics over such a series as a whole? A characteristic of what exactly - of which object - do you want to obtain?
Nothing so far; for a start I am trying to understand, from the standpoint of probability theory, your postulate about the meaninglessness of the whole process (series), in which the parts are at the same time quite meaningful and predictable.
 
For real instruments, the (High-Low) / |Open-Close| ratio:

Instrument   M5       M15      H1       D1       W1
EURUSD       2.3079   2.3827   2.2744   2.0254   1.9709
GBPUSD       2.2024   2.3190   2.2349   2.0559   1.9958
JPYUSD       2.3931   2.4003   2.2974   2.0745   1.9692

Roughly speaking, for an average candle each shadow is about half of the body. For a random walk (SB) this ratio seems to converge to two as the series length increases (judging by R/M in Yurixx's Table 2a). On low timeframes, though, the deviation of the real data is significant. It could be explained by a small number of ticks (as on SB with small N), but on H1, for example, there should be enough of them. On SB, by contrast, the ratio approaches two from below:

N     R/M
2     1.58
4     1.74
8     1.92
15    1.99
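A minimal Python sketch of the comparison above (not Yurixx's code; it assumes R/M means the mean High-Low divided by the mean |Open-Close|, with each candle built from N Gaussian increments, so the exact small-N values may differ from the table):

```python
# Sketch: R/M = mean(High - Low) / mean(|Open - Close|) for candles built from
# N Gaussian increments (Open = 0, Close = final value, High/Low = running extrema).
import random

def r_over_m(n_steps, n_candles=50000, seed=1):
    rng = random.Random(seed)
    sum_range = sum_body = 0.0
    for _ in range(n_candles):
        x = hi = lo = 0.0
        for _ in range(n_steps):
            x += rng.gauss(0.0, 1.0)
            hi = max(hi, x)
            lo = min(lo, x)
        sum_range += hi - lo          # High - Low
        sum_body += abs(x)            # |Open - Close|
    return sum_range / sum_body

for n in (2, 4, 8, 15, 100):
    print(f"N = {n:3d}   R/M = {r_over_m(n):.2f}")
```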

 
Andrei01:
Nothing so far; for a start I am trying to understand, from the standpoint of probability theory, your postulate about the meaninglessness of the whole process (series), in which the parts are at the same time quite meaningful and predictable.

It's simple (IMHO). I assume that you want to form an understanding of the process that generates the series - to build some kind of model that adequately describes the original process.

Then how do you make assumptions about randomness? There are two fundamentally different approaches:

  • (1) Randomness is an objective reality, like "everything else". This is essentially classical probability theory, based solely on the study of frequencies.
  • (2) Randomness is the degree of our ignorance about the process; this is already the Bayesian approach.

Suppose there are 3 people (A, B, C), each with their own button. When a button is pressed:

  • A - generates a "sine" process (with its own sine parameters)
  • B - generates a "parabola" process (with its own parabola parameters)
  • C - generates a "hyperbola" process (with its own hyperbola parameters)

They press their buttons completely at random and are not connected in any way, but immediately after a press, control over the common process is seized by the button that was pressed. The transition can be anything:

  • instantaneous,
  • or it may involve a "transient" process with its own characteristics.

Statistics of the whole series say nothing about the process itself, about its essence, and in this sense predicting the series is very difficult (almost meaningless). Even if correlations "suddenly" show up statistically, they give no guarantees. A slightly different approach is needed here - some combination of (1) and (2).

There is nothing exotic about it - the approach is based on self-organising stochastic processes with a random structure. The topic is quite large and requires a separate thread and time, but it is the only thing that can somehow describe forex.
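A toy Python sketch of the "three buttons" example (illustration only; the regime lengths, parameters and splicing rule are invented for the sketch):

```python
# Toy sketch (not from the original post): a series that switches at random moments
# between a "sine", a "parabola" and a "hyperbola" regime, each with its own random
# parameters. All names and parameter ranges here are invented.
import math
import random

def generate(total_len=3000, seed=7):
    rng = random.Random(seed)
    series, level = [], 0.0
    while len(series) < total_len:
        seg_len = rng.randint(100, 400)                  # how long this "button" keeps control
        regime = rng.choice(("sine", "parabola", "hyperbola"))
        a = rng.uniform(0.5, 3.0)                        # regime-specific random parameter
        for t in range(seg_len):
            if regime == "sine":
                y = a * math.sin(2 * math.pi * t / seg_len)
            elif regime == "parabola":
                y = a * (t / seg_len) ** 2
            else:                                        # "hyperbola"
                y = a / (1.0 + t / seg_len)
            series.append(level + y)
        level = series[-1]                               # the next segment starts from the last value
    return series[:total_len]

x = generate()
# Whole-series statistics mix all three regimes and say little about any one of them.
mean = sum(x) / len(x)
var = sum((v - mean) ** 2 for v in x) / len(x)
print(f"mean = {mean:.3f}, variance = {var:.3f}")
```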

 
Candid:
Here is the algorithm description from 11.09.2010 20:40:

H = (Log(R2) - Log(R1)) / (Log(N2) - Log(N1))

So where is the standard deviation in this formula, then?

R2 and R1 are still the average ranges for N2 and N1. The intricacy of Yurixx's calculation algorithm does not change the picture: it still divides the log of a range proportional to the square root of N by the log of N itself. The substitution High - Low = k * sqrt(N) works again.

[ln(k2 * sqrt(N2)) - ln(k1 * sqrt(N1))] / (ln(N2) - ln(N1)) = [ln(k2) - ln(k1) + (1/2) * (ln(N2) - ln(N1))] / ln(N2/N1) = 1/2 + ln(k1/k2) / ln(2)

Voila! Again we see how the calculation of H tends from above to 1/2. Again Hurst has nothing to do with it.

Notice that the larger n is, the more nearly k1 = k2. Of course, it cannot be otherwise with the correct formulas from the textbook. ;)
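A minimal Python sketch of this argument (not code from the thread): for a plain random walk the mean range behaves like R(N) ≈ k * sqrt(N), so the two-point estimate equals 1/2 plus the residual ln(k2/k1) / ln(N2/N1), with the sign as corrected later in the thread:

```python
# Sketch: the two-point estimate H = (ln R2 - ln R1) / (ln N2 - ln N1) on a plain
# random walk, where the mean range behaves like R(N) ~ k * sqrt(N).
import math
import random

def mean_range(n, samples=20000, seed=3):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        x = hi = lo = 0.0
        for _ in range(n):
            x += rng.gauss(0.0, 1.0)
            hi = max(hi, x)
            lo = min(lo, x)
        total += hi - lo
    return total / samples

n1, n2 = 64, 128                      # N2/N1 = 2, as in the two-point calculation above
r1, r2 = mean_range(n1), mean_range(n2)
h = (math.log(r2) - math.log(r1)) / (math.log(n2) - math.log(n1))
k1, k2 = r1 / math.sqrt(n1), r2 / math.sqrt(n2)
# identical by algebra: H = 1/2 + ln(k2/k1) / ln(N2/N1)
print(f"H = {h:.3f},  0.5 + ln(k2/k1)/ln(2) = {0.5 + math.log(k2 / k1) / math.log(2):.3f}")
```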

 
Vita:

[ln(k2 * sqrt(N2)) - ln(k1 * sqrt(N1))] / (ln(N2) - ln(N1)) = [ln(k2) - ln(k1) + (1/2) * (ln(N2) - ln(N1))] / ln(N2/N1) = 1/2 + ln(k1/k2) / ln(2)

Voila! Again we see how the calculation of H tends from above to 1/2. Again Hurst has nothing to do with it.

Notice that the larger n is, the more nearly k1 = k2. Of course, it cannot be otherwise with the correct formulas from the textbook. ;)


What are these wonders of mathematics? How does ln(N2/N1) turn into ln(2), and how do ln(k2) and ln(k1) turn into ln(k1/k2)? Where does the value of n suddenly come from, and what does it mean? And finally, the main trick: it turns out that the coefficient k is not a constant? It turns out to depend on the value of N? And this you call direct proportionality?

Vita, did you notice that the last term in your formula is actually a constant? Unlike the previous version, where ln(N) stood in the denominator and ensured the term went to zero in the limit. But what amused me most was the bold type.

You must be a writer. You didn't have the patience to read the whole thread and jumped straight at the formula on the first page. And in vain. That really is an incorrect result. If you had read to the end, you would have understood that the study was conducted to check whether the formula from the first page can be applied. The study showed that neither that formula nor Hurst's formula can be applied: the former is simply incorrect, and the latter becomes valid only in the limit. And to clarify this circumstance, a model series of random numbers was used - equiprobable, generated once by a PRNG. Not a real tick series, as some here have (why?) decided.

But if you, Vita, have read it all the way through and still don't understand it, I can hardly help you. You don't hear anyone, you don't show anything of your own (apart from that ridiculous "conclusion" in the quote), you just post your original, unsubstantiated statement over and over again.

PS

By the way, what is this turn of phrase "reveals next"? What language is it in?

 
Yurixx:


What are these wonders of mathematics? How does ln(N2/N1) turn into ln(2), and how do ln(k2) and ln(k1) turn into ln(k1/k2)? Where does the value of n suddenly come from, and what does it mean? And finally, the main trick: it turns out that the coefficient k is not a constant? It turns out to depend on the value of N? And this you call direct proportionality?

Vita, did you notice that the last term in your formula is actually a constant? Unlike the previous version, where ln(N) stood in the denominator and ensured the term went to zero in the limit. But what amused me most was the bold type.

You must be a writer. You didn't have the patience to read the whole thread and jumped straight at the formula on the first page. And in vain. That really is an incorrect result. If you had read to the end, you would have understood that the study was conducted to check whether the formula from the first page can be applied. The study showed that neither that formula nor Hurst's formula can be applied: the former is simply incorrect, and the latter becomes valid only in the limit. And to clarify this circumstance, a model series of random numbers was used - equiprobable, generated once by a PRNG. Not a real tick series, as some here have (why?) decided.

But if you, Vita, have read it all the way through and still don't understand it, I can hardly help you. You don't hear anyone, you don't show anything of your own (apart from that ridiculous "conclusion" in the quote), you just post your original, unsubstantiated statement over and over again.

PS

By the way, what is this turn of phrase "reveals next"? What language is it in?

All the designations are from your table 2b:

Yurixx 11.09.2010 20:58

Table 2b.

Further, you yourself wrote:

The main interest is in the last column, where the Hurst value is given. The result in the n-th line was calculated from two points - the n-th and the previous one.

ln(k2) - ln(k1) = ln(k2/k1) - this is an oversight, it doesn't change the point.

n and N are from your table. Since your calculation is based on two points - the n-th and the previous one - N2/N1 = 2 according to your table.

The coefficient k is a constant. The rest is your fiction.

The last term is a constant only in theory, when n tends to infinity; then k1 = k2 and the last term is zero. In numerical calculations k1 does not equal k2, which is why you get 0.5 + error in the last column. Everything is very simple and straightforward.

Neither your first formula nor your second, which is exactly the same, is a calculation of Hurst.

What you are imputing to me is your own invention. I attached a file that calculates Hurst, whereas you only write the word "Hurst": your algorithm does not calculate Hurst. In the limit your second formula yields the logarithm of the average range, not Hurst. No series other than yours fits your formula. Show the Hurst calculation, in the limit, for a series like N cubed using your "not funny" formula before you call anyone a writer or accuse them of not understanding and not listening.

Next time you set out to calculate Hurst, practise on test examples first.


