FR H-Volatility

 
rsi:
And the conclusion is more down-to-earth, one to which I think we are steadily approaching ("we" as in "we have ploughed our way to it" :-) ): there is no more information in ticks than in bars, or not much more. Please forgive the incorrigible scepticism.

Following this logic, one big bar spanning some 30 years holds almost as much information as the ticks do. That cannot be right. I think the bigger the bar, the less information it contains.
 
Prival:
I would like to know the formula by which the blue curve was generated, and I would appreciate your comments on each of its components: how it was built and what was done. Thanks. Or just the file; I can figure it out on my own, I know Mathcad.

1. Construct the series of first differences (FD) of the initial time series on the selected TF (let it be 5 min);

2. find the volatility of the FD series; call it m;

3. replace every increment in the FD series with m, keeping the sign of each increment; this gives the first-difference series of the equal-increment series;

4. integrate that first-difference series, and the output is a synthetic series: the equal-increment series.

That's all.
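A minimal sketch of these four steps, assuming `price` is a NumPy array of 5-minute quotes; the function name, and the use of the standard deviation as the volatility measure, are my assumptions rather than anything stated in the thread:

```python
import numpy as np

def equal_increment_series(price):
    """Synthetic series whose increments all have the same amplitude m (steps 1-4)."""
    d = np.diff(price)                   # 1. first differences of the initial series
    m = np.std(d)                        # 2. volatility of the first differences (assumed: std)
    d_eq = m * np.sign(d)                # 3. keep each sign, replace each amplitude with m
    return price[0] + np.concatenate(([0.0], np.cumsum(d_eq)))  # 4. integrate back
```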

rsi wrote (a):
But I disagree with the last assumption: it is an unsubstantiated hypothesis, nothing more. The movement of the crowd only seems to create "agitation" (similar to the behaviour of quantum collectives) that disturbs your equal-increment series, so it is unlikely to support the conspiracy theory here :). And the conclusion is more down-to-earth, one to which I think we are steadily approaching ("we" as in "we have ploughed our way to it!" :-) ): there is no more information in ticks than in bars, or not much more.

How about that!

The crowd is those who are in the majority. If we plot the distribution of price increments, the bulk of the volume comes from the small increments: they are the vast majority, so they are the crowd. Remove from the initial price series all increments greater than some threshold, leaving only the "small" ones, i.e. those "from the crowd", and we get a series close to the equal-increment series. In the limit, replacing all increments in the initial series by the same value, we obtain a kind of indicator of "the crowd's mood", which, as you can see, runs perpendicular to the rate...
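A hypothetical illustration of the filter just described: drop the increments above a chosen threshold, keep the "crowd", and integrate the remainder back into a series (the exact threshold handling is my reading of the text, not the author's code):

```python
import numpy as np

def crowd_series(price, threshold):
    """Series built only from the small, 'crowd' increments."""
    d = np.diff(price)
    d_small = np.where(np.abs(d) <= threshold, d, 0.0)  # discard moves larger than the threshold
    return price[0] + np.concatenate(([0.0], np.cumsum(d_small)))
```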

The answer to the question of where there is more information is obvious: having the tick history, I can certainly reconstruct every imaginable TF. But having a given TF, I cannot reconstruct any smaller TF, let alone ticks. So there is more information in ticks than in bars! Prival is right: "...one large bar over 30 years holds almost as much information as the ticks. That cannot be right. I think the bigger the bar, the less information it contains."

As for the statement "there's not much more information in ticks than in bars": I think that in Forex information is never superfluous! It is worth its weight in gold :-)

 
By saying "in bars", I meant the existing representation in the form of bars, on all TFs. Clearly we are not using a 30-year bar. It is not even about the bars (OHLC); it is about the fractal, discrete representation of price. A tick, too, is not a single trade in forex but the result of a broker's integral view of the price (no matter where it is taken from): it may be formed by a crowd, i.e. by many traders "simultaneously" (over an interval shorter than the minimum tick interval), or it may be the response of the dealing centre's filters. Likewise, the information from traders may be passed "up" not automatically but through the broker's filter. So, Neutron, I do not see a basis for separating the actions of the "crowd" from the actions of individual players in the ticks.

Now about information. Information is what we can use. In this case the information is a significant change of price within the chosen observation interval and, most importantly, the direction of that movement. Significant means, for example, more than spread + n pips. As a result, the solution often comes down to comparing some scalar (a sufficient statistic) with one or more thresholds, i.e. the information is compressed to a few bits, and with a single threshold to one bit: "up or down". So how much information really needs to be processed (compressed) to reach the right decision? When I said that there is no more of this information in ticks than in bars, I meant exactly this aspect: sufficiency for making a correct decision on the chosen time interval.
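A sketch of the one-bit compression described above, assuming an observation interval measured in samples and with price, spread and n_pips all in the same units; the names and the exact form of the threshold are illustrative:

```python
import numpy as np

def one_bit_signal(price, interval, spread, n_pips):
    """Compress each interval's move to one of {-1, 0, +1}: down, no information, up."""
    move = price[interval:] - price[:-interval]  # price change over the observation interval
    threshold = spread + n_pips                  # "significant" means beyond spread + n pips
    signal = np.zeros(move.shape, dtype=int)
    signal[move > threshold] = 1
    signal[move < -threshold] = -1
    return signal
```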
 
Neutron:

The crowd is those who are in the majority. If we plot the distribution of price increments, the bulk of the volume comes from the small increments: they are the vast majority, so they are the crowd. Remove from the initial price series all increments greater than some threshold, leaving only the "small" ones, i.e. those "from the crowd", and we get a series close to the equal-increment series. In the limit, replacing all increments in the initial series by the same value, we obtain a kind of indicator of "the crowd's mood", which, as you can see, runs perpendicular to the rate...

In the program, the K coefficient, as I understand it, is what sifts out the small increments (the crowd), but there is not a word about how it is constructed. I need point 3.1: what is K, and from what considerations is it chosen as 10^2 or 10^4? If you don't mind, a piece of the file "eurjpy.prn" with a description of the columns. PLEASE.
 
Prival wrote (a): In the program, the K coefficient, as I understand it, is what eliminates the small increments (the crowd), but there is not a word about how it is constructed; I need point 3.1: what is K, and from what considerations is it 10^2 or 10^4? If you don't mind, a piece of the file "eurjpy.prn" with a description of the columns, please.

The K coefficient is analogous to the Point value in MT4. It is determined automatically: for example, for EURUSD it is 10^4 and for EURJPY it is 10^2. It plays no part in the sifting of increments.

The file format is standard: <DTYYYYMMDD>,<TIME>,<OPEN>,<HIGH>,<LOW>,<CLOSE>,<VOL>. In the program, the <OPEN> column is used for the constructions.
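For what it's worth, a hedged sketch of loading such a file; pandas is my choice here, not something used in the thread, and the header handling may need adjusting to the actual file:

```python
import pandas as pd

cols = ["DTYYYYMMDD", "TIME", "OPEN", "HIGH", "LOW", "CLOSE", "VOL"]
quotes = pd.read_csv("eurjpy.prn", names=cols, header=None)
price = quotes["OPEN"].to_numpy()  # the <OPEN> column is the one the program uses

K = 10 ** 2                        # for EURJPY; analogous to MT4's Point (EURUSD: 10 ** 4)
points = price * K                 # price expressed in whole points
```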

rsi wrote (a): Information is what we can use. In this case, the information is a significant change of price within the chosen observation interval and, more importantly, the direction of that movement...

rsi, I agree with you; moreover, I hold a similar opinion myself. For example, I am convinced that the Zig-Zag is the optimal approximation operator for a price chart. In this way of decomposing, all the information inside a step of H points on the price scale is considered uninteresting. This lets us compress the incoming information considerably without throwing the baby out with the bathwater.

As for the phenomenon under discussion, this is something different. I am trying to decompose the price chart over some basis, in particular a basis defined in the space of possible values of the price increments. The thing is that the small movements, of no interest individually, are many, while the large and strong ones are few! In this situation I would not risk discarding the small ones: there are too many of them.

For example:

Here the 1-minute quote (shown in red) is decomposed into 12 vectors consisting of equal increments of 1 to 12 pips, according to the following principle:

In the original series we keep only those increments whose modulus equals n points (e.g. n = 5 points). At the points where this condition is not met, the series is assigned the value to its left, and so on. We obtain a set of realizations (vectors), one for each n. The figure shows the vectors for increments of 2, 5 and 8 points; the black line is the sum of all the vectors from 1 to 12. It can be seen that the original series can be reconstructed to any accuracy by taking more terms of the decomposition.
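A sketch of this decomposition, assuming increments are measured in whole points; the `point` value plays the role of the K coefficient discussed earlier, and all names are illustrative:

```python
import numpy as np

def decompose(price, point=0.01, n_max=12):
    """Split a series into vectors, one per increment amplitude n = 1..n_max."""
    d = np.round(np.diff(price) / point).astype(int)  # increments in whole points
    vectors = {}
    for n in range(1, n_max + 1):
        d_n = np.where(np.abs(d) == n, d, 0)          # keep only the n-point moves;
        vectors[n] = np.concatenate(([0], np.cumsum(d_n))) * point  # hold the last value elsewhere
    return vectors  # the sum of all vectors reproduces price - price[0] (up to moves > n_max)
```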

The vectors themselves are of interest, first of all for analysing the dynamics of the quote. Perhaps these realizations are easier to predict than the original series. Or their dynamics may warn us in advance of an expected change of trend in the market... Let me remind you: if we eliminate the null elements from the first-difference series (the informativeness does not decrease), we end up with a stationary series, with all the positive consequences that follow. Mathemat, can you hear me?

Files:
eurjpy5m.zip  624 kb
 

I hear you, Neutron, of course. I see the words "decomposition" and "vector" and wonder where to squeeze orthonormality in here. Just kidding. Actually it is a curious experiment; I have not thought it through yet. And about stationarity: of course, it must be strictly justified. The call has already gone out here to the mekhmat people.

P.S. Aren't there any tools for checking stationarity in Mathcad?
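I don't know Mathcad's toolset, but as an illustration of the kind of check being asked for, here is an augmented Dickey-Fuller test (from statsmodels) applied to the first differences with the null elements removed, as Neutron suggests; the synthetic `price` is only a stand-in, and the ADF test targets unit roots specifically rather than strict stationarity:

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

price = 100 + np.cumsum(np.random.randn(1000))  # stand-in for a real quote series
d = np.diff(price)                              # first differences
d = d[d != 0]                                   # eliminate the null elements
stat, pvalue = adfuller(d)[:2]
print(f"ADF statistic = {stat:.3f}, p-value = {pvalue:.4f}")  # small p-value: no unit root
```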

 
> The vectors themselves are of interest, first of all for analysing the dynamics of the quote. Perhaps these realisations are easier to predict than the initial series. Or their dynamics may warn us in advance of an expected change of trend in the market...


That is what I meant when I said the charts are interesting. That is where the statistics should be gathered!
P.S. I remember from my youth there was such a device: threshold statistics...
 

I have made a small indicator that draws the price chart in equal increments whose amplitude equals the volatility of the instrument on the selected timeframe.

Now it's time to think how to use it.

 
Thanks for the indicator. But it seems that my hopes for it were too high. I tried it on different TFs and found no stable signals. Life goes on!
 

I don't think we can expect an "easy" solution.

Here is a pattern I have noticed. Sometimes you can see a stable combination of the price chart and the equal-increment line:

In the pictures, the price chart is shown in green and the equal-increment line in red. Usually the price chart and the line stay close to each other, as if competing, but sometimes the price chart makes a sharp forward movement, leaving the line far behind. It appears that such a market state is not random (not efficient), and with non-zero probability the market tends to return to the unperturbed state (figure on the left). By contrast, the figure on the right shows a situation where price and the equal-increment line move almost in concert, without sharp perturbations. In such a state the market is efficient and further developments can follow an arbitrary path.
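A speculative sketch of flagging the "breakaway" state described above: mark bars where the gap between price and the equal-increment line exceeds a few typical one-bar moves (the factor k is my assumption, not something from the thread):

```python
import numpy as np

def breakaway(price, eq_line, k=3.0):
    """True where price has run far ahead of the equal-increment line."""
    gap = np.abs(price - eq_line)   # distance between price and the line
    scale = np.std(np.diff(price))  # typical one-bar move
    return gap > k * scale
```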
