Volumes, volatility and Hurst exponent - page 33

 
Farnsworth:
I plan to investigate:
  • The possibility of obtaining Hss and H-sssi series from a quote process
  • The stationarity-correlation-self-similarity relationship for these processes

Apparently everyone knows what Hss and Hsssi series are, except me. :)

You mean this: H-self-similar with stationary increments (H-sssi) ?

 
Candid:

> Apparently everyone knows what Hss and Hsssi series are, except me. :)
>
> You mean this: H-self-similar with stationary increments (H-sssi)?

Sorry, yes, that's it :o) I forgot to spell it out :o) It's quite a new direction for me; I'll look into it and maybe I'll find something. :o)

 
Farnsworth:

> Sorry, yes, that's it :o) I forgot to spell it out :o) It's quite a new direction for me; I'll look into it and maybe I'll find something. :o)


Could you spell out what it is? An example, a formula... Thank you. Maybe this new thing is the well-forgotten old?
 
Avals:


In any case, the causes and consequences lie outside the chart. They are real economic processes, like the inflating and deflating of a speculative bubble, for example. A pattern can show the change of these phases in a timely manner and help to ride this process.

Lately I have been inclined to think that the characteristics of price series are so close to those of random series for a reason. Probably randomness really does dominate price formation most of the time, at least on some horizons (timeframes). However, there apparently are also "fair" prices, and when the market drifts too far away from them, the price recovers by way of catastrophe (black swans, fat tails, etc.), usually with an overshoot on the way back.
 
Prival:

> Could you spell out what it is? An example, a formula... Thank you. Maybe this new thing is the well-forgotten old?

I came across this.


 
Prival:

> Could you spell out what it is in more detail? An example, a formula... Thank you. Maybe this new thing is the well-forgotten old?

It's "new" to me, though the topic itself is old enough, I believe:

H-sssi is a self-similar process with stationary increments and similarity parameter H. Hss is simply a self-similar process.

That's why I typed it that way :o)
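For reference, a formal version of these definitions in standard notation (equality here is equality in distribution):

```latex
% X is H-ss (self-similar with similarity parameter H) if, for every a > 0,
\{X(at)\}_{t \ge 0} \overset{d}{=} \{a^{H} X(t)\}_{t \ge 0},
% and H-sssi if, in addition, its increments are stationary:
X(t+h) - X(t) \overset{d}{=} X(h) - X(0) \quad \text{for all } t, h \ge 0.
```

Brownian motion is the textbook example: it is H-sssi with H = 0.5.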

 
Candid:
> I sense that the procedure of measuring the length of a coastline has made a strong impression on you :). However, you raised a different (though related) question, about the R/S analysis procedure, and there we get a new average at each step: that is the new ruler size for the new series size.

I still don't understand what is meant by a new series size. The series under R/S analysis stays the same the whole time; its size does not change. The series is sliced into K pieces. K is what I call the ruler size, not the new average. The new average (I assume this refers to the mean R/S over the splits of the series into K chunks) is already the result of measuring with a ruler of size K. We plot it on the plane. You end up with many points for the same series, from measurements with rulers of different sizes. And no asymptotes.
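For the record, the measurement procedure being argued about can be sketched as follows. This is a minimal textbook-style R/S recipe, assuming non-overlapping chunks and a log-log slope fit; the function and variable names are mine, not from the thread:

```python
import numpy as np

def rescaled_range(x):
    """R/S of one chunk: range of the cumulative mean-adjusted sum over the std."""
    y = np.cumsum(x - x.mean())
    r = y.max() - y.min()
    s = x.std()
    return r / s if s > 0 else np.nan

def hurst_rs(series, chunk_sizes):
    """Estimate H as the log-log slope of mean R/S versus chunk (ruler) size."""
    series = np.asarray(series, dtype=float)
    points = []
    for n in chunk_sizes:
        chunks = [series[i:i + n] for i in range(0, len(series) - n + 1, n)]
        rs_mean = np.nanmean([rescaled_range(c) for c in chunks])
        points.append((np.log(n), np.log(rs_mean)))
    xs, ys = zip(*points)
    slope, _intercept = np.polyfit(xs, ys, 1)
    return slope

rng = np.random.default_rng(0)
increments = rng.standard_normal(2**14)  # white-noise increments of a random walk
h = hurst_rs(increments, chunk_sizes=[16, 32, 64, 128, 256, 512])
print(round(h, 2))  # theory says 0.5 asymptotically; small-sample estimates run high
```

Each chunk size produces one point in log-log coordinates, and the Hurst estimate is the slope over all of them, which matches the "many points for the same series" picture above.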

As for the reference to Hurst asymptotics, indeed Wikipedia points out that

The Hurst exponent, H, is defined in terms of the asymptotic behaviour of the rescaled range as follows:[2]

In doing so it refers to the work:

Bo Qian and Khaled Rasheed, "Hurst exponent and financial market predictability", IASTED conference on Financial Engineering and Applications (FEA 2004), pp. 203-209, 2004.

The openly available part of this article mentions the asymptote only once, and only in the following sense:

3. Monte Carlo simulation

For a random series, Feller [13] gave the expected (R/S)_t formula as 3.1:

E((R/S)_t) = (t·π/2)^0.5    (3.1)

However, this is an asymptotic relationship and is only valid for large t.

There it is stated in plain words that Feller's asymptotic relation (the range of a random walk, SB, growing as the square root of the number of steps) is only valid for large t. No Hurst, as we see, and certainly nothing about series other than SB.

The bottom line: we have a story about someone reading a paper on Hurst in which Feller's asymptotic equality for SB is mentioned, after which an asymptotic equality for Hurst appears on Wikipedia. Unfortunately, the Internet is the Internet: any easily digested heresy (just compute Hurst's asymptote!) has an advantage in spreading over the hard to digest (no, you can't do the counting without R/S analysis). Don't trust anyone; demand the code and the ability to validate the results. So far, no code for computing Hurst's asymptote has been presented.

Anyway, Candid, I understand why you need a jokingly condescending tone. So far the thread is overloaded with everything but results and ways to check them. I genuinely wish you well and hope to see a denouement. Please make me happy.

 
to Vita

I'm going to jump in a bit, if you don't mind my saying so.

> The bottom line: we have a story about someone reading a paper on Hurst

I don't think his work will be useful; I'm sure it is too specific and would require deep knowledge of the subject area. The paper itself seems to be Hurst H., Trans. Amer. Soc. of Civil Eng., 1951, V. 116, pp. 770-808; you can probably track it down by this reference, though perhaps not in electronic form. The model I am going to study is a classic one and has been rediscovered by several scientists. I really hope it reconciles everyone.

> So far, no code for computing Hurst's asymptote has been presented.

As for the code, I'm going to write and post the algorithm. The only problem is, if I don't make it in the next few days, I'll have to postpone it for a few weeks - business :o(

> So far, the thread is overloaded with everything but results and ways to check them.

Personally, I am only trying to state the problem clearly. Besides, the experiment still has to be planned and carried out.

> ... hope to see a denouement

and I'm looking forward to seeing the denouement :o/

 
Yurixx:

> Trying to judge self-similarity by the coincidence or repetition of candlestick patterns is, imho, a significant oversimplification. Not justified in any way.

I wasn't talking about candlesticks at all, so this argument misses the point.

> It is even more simplistic, from my point of view, to judge it by trading results.

This is very debatable. In fact, "trading results" are also a kind of statistic, a non-parametric one.

> They are trying to explain the self-similarity of the market to newbies who have never heard of fractals.

I think not. I think this is the basic idea of market fractality. And likewise, I think there is nothing else besides this "visual" idea.

> The self-similarity lies primarily in the structural similarity of the different levels of the phenomenon. Those levels that make up the fractal structure. However, and this is the basic mistake of many, similarity does not follow from sameness. Similarity is not equality. Therefore on each fractal level different processes can develop.

So where does this boundary, which separates the similar and the same, lie?

> Don't you know that trends at different levels (roughly speaking, at different timeframes) can be directed in different directions? Or a trend at one level may coincide with a flat at another?

Excessive primitivisation of the discussion gets us nowhere. All the more so since you would then have to define a trend.

> Based on what I said just above, the difference in H-volatility for different levels is quite normal and reflects the difference in processes occurring at those levels.

Am I the only one who sees a big logical inconsistency here? If different processes are going on at different levels, why should they look the same? And if they look the same, then we cannot separate them; so what is the point of all this?

> It is only for a pure and perfectly stationary SB that there should be the same H-volatility value at all levels.

That's right, the H-volatility on the SB tends to the same value.

> That is, by the way, the difference between H-volatility and Hurst: it can be easily measured locally. And Hurst is a global characteristic of the process. Not because it is so abrupt but because it is such a curve - its definition and measuring procedure do not allow to obtain local values and hence it is impossible to measure it on different levels. But whoever can localise it or comes up with another, more practical characterisation, will be able to do so and see that for non-stationary processes with memory it will be different at different levels.

For non-stationary processes, Hurst makes no sense at all. But what you get in log-log coordinates many researchers interpret as changing trends at different levels.

> The self-similarity of a series of quotes is not that the H-wave or something similar is always the same, but that its definition, calculation methodology and meaning is the same at all levels. And the difference in the quantitative measure is just a consequence of the state.

Self-similarity is exactly that, if you look at the numbers. The dimension of the space should be the same, and that dimension is related simply and directly to the Hurst coefficient.
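The relation alluded to here is the standard one for self-affine curves: if a record has Hurst exponent H, the fractal (box-counting) dimension D of its graph satisfies

```latex
D = 2 - H, \qquad 0 < H < 1 .
```

So SB with H = 0.5 gives D = 1.5, while a persistent series (H > 0.5) has a smoother graph with D < 1.5.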

> You seem to have missed the point at which this mess started. There are several of my posts on pp. 5-6 where I posted the results of my research into the behaviour of the Hurst exponent for SB. In theory it should equal 0.5; in practice it turns out otherwise. These results are not original: all this has long been studied, and the scientific community is well aware of it. Even Wikipedia gives a definition of Hurst that tells an attentive reader everything: the Hurst exponent is a limiting characteristic. Therefore, for small intervals its values differ from what we would like to see. That is also why the procedure for determining it is so heavy-handed (how else could we reach the asymptote?), and why its application in practice is of so little use. The Hurst plots that differ from a straight line are also given on p. 6, along with the interpretation of those results.

I don't understand the point of calling the Hurst coefficient a limiting value. It is a non-parametric statistic, and like any statistic it only makes sense in the limit; why emphasise that? The real issue is the speed of convergence. If you don't like Hurst's convergence, take the coefficient of variation: there the convergence is faster and the result is the same Hurst.

> But these are all Hurst's problems. If you want a straight line, work with the variance of the increments. But what does self-similarity have to do with it? Why cross out a huge phenomenon just because some curve there is not constant? And along with self-similarity you give up the theory of fractals. Is that adequate?

You don't need a constant value; that would be absurd. You need a quantity that deviates from the constant in a random, preferably controlled, way. And one can see from the charts that the deviation doesn't even smell of randomness.


And by the way, I am troubled by vague doubts: did you by any chance use the C standard library PRNG in your experiments? If so, that is a big mistake; you cannot use it to generate data for Hurst estimation.
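For what it's worth, a minimal sketch of the safer route in Python: NumPy's default PCG64 generator (a modern PRNG with period 2^128 that passes stringent statistical test batteries) instead of a small-state LCG of the C rand() family. The setup is my illustration, not anything posted in the thread:

```python
import numpy as np

# default_rng uses the PCG64 bit generator; small-state LCGs behind many
# historical C rand() implementations have short periods and correlated
# low-order bits, which can distort Hurst-type experiments.
rng = np.random.default_rng(seed=42)

increments = rng.choice([-1.0, 1.0], size=100_000)  # Bernoulli +-1 steps
walk = np.cumsum(increments)                        # the SB itself

print(increments.mean(), walk[-1])
```

Seeding the generator also makes such experiments reproducible, which matters for the "demand the code and validate the results" point raised earlier in the thread.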

 
Farnsworth:

> and I'm looking forward to seeing the denouement :o/

Thank you for the excellent graphs.

That's where I saw the model. I hope for your success.

As for Slutsky-Yule, it was the paradoxical effect on the different components of the series that alarmed me...

So not only Harst but also Hurst will be namesakes.

Though you and Shiryaev X(Y)... understand.

;)
