Volumes, volatility and Hurst index

 
Vita:

I still don't understand what is meant by a new series size. The series under R/S analysis stays the same throughout, and its size does not change. The series is sliced into K chunks. K is what I call the size of the ruler, not the new average.

When people get into this level of detail, they have to describe exactly what they are starting from. Otherwise, with a probability close to 100%, they end up talking about different things. You finally gave a link in this post to a fairly specific description of the procedure, but there is nothing in there about breaking the series into K chunks. If I were a Hurst fan, I could probably guess which procedure, and from which source, is being discussed. But I'm not going to guess.
The new average (hopefully this refers to the R/S average over the series split into K chunks) is already the result of measuring with a ruler of size K. We plot it on the plane. The result is many points for the same series, from measurements with rulers of different sizes.

I was probably imprecise in speaking about the average. I meant the series of cumulative deviations, Z(t) in the reference you suggested. From the initial series of n points, n series of size t = 1, 2, ..., n are produced, and for each such series there is a ruler: Z(t). To make it clearer: for me, any cumulative sum is almost synonymous with the mean, up to a normalization factor.
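To pin down the terms, here is a minimal sketch of one R/S measurement as I read the procedure (the helper names rescaled_range and rs_at_scale are mine for illustration, not from any source):

```python
import numpy as np

def rescaled_range(x: np.ndarray) -> float:
    """R/S of one chunk: range of cumulative deviations over the chunk's std."""
    z = np.cumsum(x - x.mean())   # Z(t): cumulative deviations from the mean
    r = z.max() - z.min()         # R: the range of Z(t)
    s = x.std()                   # S: standard deviation of the chunk itself
    return r / s if s > 0 else np.nan

def rs_at_scale(series: np.ndarray, n: int) -> float:
    """Average R/S over the K chunks produced by a 'ruler' of size n."""
    k = len(series) // n          # K: how many chunks this ruler yields
    chunks = series[:k * n].reshape(k, n)
    return float(np.nanmean([rescaled_range(c) for c in chunks]))
```

Measuring the same series with rulers of different sizes n then gives exactly the cloud of points on the plane mentioned above.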

And no asymptotes.

As for the reference to Hurst asymptotics, indeed Wikipedia points out that

.... So far, no Hurst asymptote calculation code has been presented.
What asymptote calculation? You'd do better to give a proof that the Hurst exponent of a random walk is 0.5 for any N. The asymptote arises precisely because 0.5 is the asymptote of the random walk's spread. Actually, since this question is so important to you, why don't you repeat Yuri's calculation with "that" spread calculation?
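Since the argument keeps circling, here is a quick numerical check anyone can run (a sketch only: the seed, scales and chunking are illustrative). For an i.i.d. ±1 increment series the fitted H tends to 0.5 only asymptotically; finite-N estimates are known to be biased upward, which is exactly the asymptote question:

```python
import numpy as np

def rs(x: np.ndarray) -> float:
    z = np.cumsum(x - x.mean())          # cumulative deviations Z(t)
    s = x.std()
    return (z.max() - z.min()) / s if s > 0 else np.nan

rng = np.random.default_rng(0)
steps = rng.choice([-1.0, 1.0], size=2**16)    # increments of a random walk

scales = [2**p for p in range(4, 13)]          # rulers n = 16 .. 4096
rs_avg = []
for n in scales:
    k = len(steps) // n                        # slice into K chunks of size n
    rs_avg.append(np.nanmean([rs(c) for c in steps[:k * n].reshape(k, n)]))

# H is the slope of log(R/S) versus log(n)
h, _ = np.polyfit(np.log(scales), np.log(rs_avg), 1)
print(f"estimated H = {h:.3f}")                # drifts toward 0.5 as N grows
```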

In any case, Candid, I understand why you need the jokingly condescending tone. So far the thread is overloaded with anything but results and ways to verify them. I sincerely wish you well and hope to see a denouement. Please make me happy.

Hmm, our discussion overloads the thread too, hence the tone. You seem to have missed one of the results of the discussion - agreeing that the original Hurst is not the best indicator for our purposes. So you are engaging in a sort of historical reconstruction.

 

HideYourRichess:

And by the way, I am troubled by vague doubts. You didn't happen to use the C PRNG in your experiments, did you? If so, it's a big mistake: you can't use it to generate data for Hurst.


Thank you for your thoroughness, you have commented on each point. And I could have responded with the same. However, I decided to limit myself to the cited quote. I think therein lies the fundamental difference in our approaches.

I used the MQ PRNG, which is just a wrapper around the C one. That is, from your point of view "it's a big mistake" and it is not suitable "to generate data for Hurst".

Here the very formulation of the question seems to me completely unacceptable. It turns out you need special data for Hurst, and it doesn't work on non-special data. What kind of indicator is it that is so selective? What does it have to do with mathematics then?

Vita, for example, suggested that Nikolai calculate Hurst on the series N cubed. Even though Vita never said what the result should be, he behaved as if this were quite possible and the result must make sense. And I believe him.

As for the PRNG: if you paid attention, I made the calculation for three quantities - the range, the modulus of the increments, and the variance. For the last of these there is a strictly proven formula: <D> = N. This formula was my criterion for the correctness of the calculations. If the calculations had shown that it does not hold, I would have assumed that the calculations were wrong (for whatever reason), not the formula. However, again, if you pay attention, the results showed that this formula holds for all values of N. And for Hurst they showed exactly what was to be expected. So I personally have no doubts about them.
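For anyone who wants to repeat the criterion check, here is a minimal Monte Carlo sketch. It assumes D is the squared endpoint displacement of a ±1 random walk of N steps (for which <D> = N holds exactly); the exact definitions from the original calculation are not restated here:

```python
import numpy as np

rng = np.random.default_rng(1)
trials = 200_000
for N in (100, 1_000, 10_000):
    # endpoint of a +/-1 walk of N steps: S = 2*Binomial(N, 1/2) - N
    S = 2.0 * rng.binomial(N, 0.5, size=trials) - N
    print(N, np.mean(S**2))        # <D> = <S^2> should stay close to N
```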

Apparently we have quite different notions of what the Hurst index is, how it is calculated and what it represents. There is no sense in arguing in this situation; we must first agree on the notions. I suggest that you look at the English-language Wikipedia and see what it says about it. Vita referred to it somewhere earlier. I've looked at that link and I think it is quite correct. And simple.

 
Thanks, Farnsworth, Candid, Yurixx, Avals

Candid:

I took it as an introduction. It arouses curiosity, some resonances arise, something falls into the void (in the sense of lack of association).
But the introduction should be followed by the main text :).
The mere division of the pattern into cause and effect is entirely consistent with my views - only in that case do they deserve a separate title and separate consideration. Distancing yourself from similarity, correlation and other vivisection tools rather suggests an early stage in the development of the idea, when, apart from the feeling of having clearly grasped something and some very general imagery, there is almost nothing.
On the whole, the new world drawn in broad strokes is rather likeable, but I would like to understand what it has to do with reality.
It is more correct to characterize my approach to time series analysis as a theory rather than a mere method - the Transcending Patterns Theory (TPT, still a working title). It has been developed since late 2008 as a complete replacement for Dow Theory and the TA built on it. Since then I have almost completely stopped torturing Lady MA and Sir MACD and their kin in the tester, having finally realised the dead end of TA in terms of further development.

The dead end lies in the discreteness of TA signals, that famous and ubiquitous "buy, sell, smoke bamboo", which is where the lag and the other inevitable "burdens" of TA come from. In addition, TA is full of internal contradictions.

It all started with wanting, on the one hand, to be able to freely write ATSs, and on the other, to emulate the decision-making process of a live trader. One of the features of manual trading (I mean without using indicators) is the ability to make a decision immediately, at a glance at the chart, i.e. on any bar. I thought then: how do I teach a silly machine to do what I do myself? Besides, here and there I came across opinions that some "manual" tactics cannot be formalized.

In mid-2009 I got acquainted with ANNs; at the time these were primitive Reshetov linear perceptrons (no offence intended). From that moment on I started mastering ANNs as a means of non-linear time series transformations, as I felt in my gut that this was exactly what I was looking for.

This was followed by the study of optimization algorithms in general, and GA in particular, as a tool for training networks and as a potential way to create adaptive self-organizing systems with AI (limited, of course, to the purposes of trading). This is where the contours of TPT began to emerge: a theory free of the contradictions of Dow Theory and TA, a theory on whose basis it becomes possible to build an ATS with "analog" buy/sell signals, i.e. acting at any moment in time - an almost "live", adaptive ATS with absolutely no settings related to the trading functions of the system itself, excluding, of course, settings related to service maintenance.

I have already voiced the basic principles of TPT repeatedly in various threads of the forum.
These are:
-At any time it is possible to enter both long and short positions.
-Long and short positions may exist together at any given time (each with its own objectives).
-Each position has constraints that uniquely identify the trade decision, such as TP and SL.
-Each position has its own "lifetime", essentially limited by the trailing pattern.


The main procedures for applying TPT are:

-Preparing the time series to obtain a distribution without heavy tails (see the sketch after this list).

-Transforming it to a stationary form on the relative scales of the pattern.
-Analysing the set of patterns from different TFs (different tools can be used; I use ANNs).
-Formalizing on the basis of signal analysis.
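A hedged sketch of the first two steps, as promised above. This is one plausible reading (log increments plus trailing normalisation), not the actual TPT procedure; all names are illustrative:

```python
import numpy as np

def log_returns(prices: np.ndarray) -> np.ndarray:
    """Step 1 (one option): log increments instead of raw prices."""
    return np.diff(np.log(prices))

def trailing_zscore(x: np.ndarray, window: int = 100) -> np.ndarray:
    """Step 2 (one option): normalise each point by a trailing window,
    pushing the series toward a stationary form."""
    out = np.empty(len(x) - window)
    for i in range(window, len(x)):
        w = x[i - window:i]
        out[i - window] = (x[i] - w.mean()) / (w.std() + 1e-12)
    return out
```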

Yurixx:

The general idea of the direction raises no objections. But it is quite an ambitious programme. It won't be easy to implement, because on the one hand there is no formal definition of a pattern, and on the other hand identical patterns can consist of a different number of points.
Not using correlation as a measure of pattern similarity could be interesting if an alternative (and efficient) method is proposed. Without one, abandoning correlation could lead to a dead end.

and

Farnsworth:

It doesn't really matter whether it's MathCAD, MQL or C++ - it has to be formalized somehow in the end. I've investigated patterns, and I've investigated ZZ in the past/future framework, to no avail: no connections. None at all. Hurst's 0.5 explains everything.

This is the theory. For each of the points there are nuances and details that no one is interested in. Each of the items can be implemented in different ways, but the fact is that everything is formalizable.

Chewing TPT over and over inevitably leads to very interesting conclusions.

Avals:

imho, a pattern, or a combination of them on different timeframes, makes sense only in a certain context - the phase of the market. A pattern is not the cause of a move, but only a probable sign of a transition. The context can be quite different.

I was influenced in my thinking by a famous thread on Context. Essentially, the totality of patterns from different TFs (different horizons) at each moment in time is that very Context. The context of each moment in time. The totality of patterns unambiguously describes the current market phase.


Avals:

Although I use technical analysis to make trading decisions, there are a number of important differences between my method and the approaches of most other traders in this group. First, I don't think very many technical traders go further back in their research than thirty years, let alone a hundred years or more. Secondly, I do not always interpret the same stereotypical figure in the same way. I also take into account which part of the long-term economic cycle we are in. This alone can lead to very significant differences between the conclusions I draw from the charts and those reached by traders who don't do this. Finally, I do not treat classic chart patterns (head and shoulders, triangle, etc.) just as independent formations. Rather, I try to look for certain combinations of figures or, in other words, figures within figures. These more complex multi-figure combinations can provide signals for trades with a higher probability of success.
There are indeed some similarities. This is a trader's "getting used" to a trading instrument, which is different for everyone. But the difference is that I don't have to search for "certain combinations of figures or, in other words, figures within figures" - a set of patterns that unambiguously describes the current market phase is always present.

Candid:

I took this as an introduction. ......

But the introduction should be followed by the main text :).....
There will be no main (more detailed) text - it is beyond the scope of this thread. Besides, I was going to tell Farnsworth about something interesting.
At the moment I am writing an ATS based on TPT; I've been posting some of my theoretical findings in the thread about locking.
 
Yurixx:


Thank you for your thoroughness, you have commented on each point. [...] I used the MQ PRNG, which is just a wrapper around the C one. That is, from your point of view "it's a big mistake" and it is not suitable "to generate data for Hurst". [...] Apparently we have quite different notions of what the Hurst index is, how it is calculated and what it represents. [...] I suggest that you look at the English-language Wikipedia and see what it says about it.

You misunderstood my comment about the C PRNG. It's not exactly "random", and not even exactly "pseudo-random", so expecting a set of "random" numbers from it, you can run into an unpleasant set of internal cyclicities of the generator itself, which can affect the results. But this is just an aside - if the question of the correctness of the input data doesn't bother you much, you can ignore it.
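To make the point concrete, a hedged illustration (one way to sidestep the issue, not a prescription): a modern generator such as numpy's PCG64 avoids the short cycles of many classic C rand() implementations:

```python
import numpy as np

# PCG64 (numpy's default) has a period of 2^128 and passes modern test
# batteries, unlike the short-period generators behind many C rand()s.
rng = np.random.default_rng()
steps = rng.choice([-1.0, 1.0], size=1_000_000)
walk = steps.cumsum()    # test data free of rand()'s internal cyclicities
```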

And the Hurst definition from Wikipedia didn't surprise me in any way.

 

Vita:

I sincerely wish you well and hope to see a denouement. Please make me happy.

My previous reply was written under time pressure and in a rather nervous atmosphere :).
Now I have reread my explanations about rulers and realised that I hadn't clarified anything. Imagine one person has worked on something for one day and another for two days, and you can hardly avoid taking this fact into account when comparing their results. In human terms this would be called a double standard - that is, you would have two rulers, one for each person.

Similarly, when successive values of the cumulative sum turn out to be fully equal members of the same series, you can, imho, speak of a separate ruler for each of these values.

But, of course, it wouldn't occur to me to impose my associations on anyone.


And the most likely denouement, alas, is a more or less gradual fading of the discussion :)

 
Candid: And the most likely denouement, alas, is a more or less gradual fading of the discussion :)
And there are topics that manage to live forever, such as the one about fishing :)
 
Mathemat:
And there are topics that manage to live forever, such as the one about fishing :)
Apparently some topics are transient, and some are eternal :)
 
HideYourRichess:

You misunderstood my comment about the C PRNG. It's not exactly "random", and not even exactly "pseudo-random", so expecting a set of "random" numbers from it, you can run into an unpleasant set of internal cyclicities of the generator itself, which can affect the results. But this is just an aside - if the question of the correctness of the input data doesn't bother you much, you can ignore it.


I get the impression that you don't read the posts. Is it not enough for you that the theoretical and calculated results coincide? Or do you think such a coincidence could happen by chance?

Oh, come on. For the rest of our disagreements the situation is the same - we speak different languages. Leaving aside the minutiae and returning to self-similarity, we have: you believe that self-similarity is well defined by a number, and that this number should be constant, while the only argument for self-similarity is the similarity of charts on different TFs. All of this seems to me an unwarranted simplification. How can we come to an agreement? Can you give your definition of self-similarity?

 

to FreeLance

Thank you for the excellent graphics.

That's where I saw the model.

No, you didn't! It was an illustration of the Slutsky effect. I was trying to reinforce the words with an artistic form, to show that 99% of the information does not change when the MA is shifted by one step, i.e. in essence an integral characteristic of the same series - roughly speaking, the series "itself" - is being "shown". And MA is not the best thing to build strategies on.
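For the record, the effect is easy to reproduce numerically (a minimal sketch, all numbers illustrative): a moving average of pure white noise acquires smooth, spurious "waves", precisely because consecutive windows share 99% of their data:

```python
import numpy as np

rng = np.random.default_rng(2)
noise = rng.standard_normal(1000)      # pure white noise: no cycles at all

n = 100                                # MA window
ma = np.convolve(noise, np.ones(n) / n, mode="valid")

# consecutive MA values share n-1 of their n inputs: ~99% overlap for n=100
corr = np.corrcoef(ma[:-1], ma[1:])[0, 1]
print(f"lag-1 autocorrelation of the MA: {corr:.3f}")   # close to 0.99
```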

But the model is quite different and much more complicated.

I hope for your success

Thanks a lot for the wishes, same to you :o)

to Joo

Thank you for the food for thought. I will think about it.

To FreeLance, Joo

Thanks for the wishes - I feel as if I'm being seen off to the front line :o) It seems to me that your expectations are a bit high. It's quite a chore, and given my workload the research will take at least 6 months, or even 10, in spare-time mode.

I have a lot to prepare and debug. I will start with the calculation of the singular spectrum, and only after a business trip (2 weeks). So drop in now and then... :o)
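For those curious what a singular spectrum calculation looks like, a minimal sketch (basic SSA; the window length L and the test signal are illustrative assumptions, not my working choices):

```python
import numpy as np

def singular_spectrum(series: np.ndarray, L: int = 50) -> np.ndarray:
    """Singular values of the trajectory (Hankel) matrix - basic SSA."""
    K = len(series) - L + 1
    X = np.column_stack([series[i:i + L] for i in range(K)])
    return np.linalg.svd(X, compute_uv=False)

# a pure sinusoid concentrates almost all its energy in ~2 singular values
sv = singular_spectrum(np.sin(0.1 * np.arange(500)), L=50)
print(sv[:4] / sv.sum())
```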

 
to Candid

And the most likely denouement, alas, is a more or less gradual fading of the discussion :)

With what self-similarity coefficient? :о)
