FR H-Volatility

 
Yurixx:

We either have to visualize it somehow in order to describe the decision logic,


Sometimes it helps to visualize the Voronoi diagram; you just need to understand what to put on the X and Y axes. Here is an example with explanations that I managed to dig up on the Internet.
Files:
doc.zip  631 kb
 
Yurixx:

Even with a not very large number of parameters, the phase space of the system turns out to be too multidimensional for human perception. If the approach is correct and the chosen estimates enable clustering of the phase space, then the location and shape of the clusters can have a very complex topology. We either have to visualize it somehow in order to describe the decision logic, or blindly introduce classes and membership criteria. An NS handles this much better, as it does probabilistic evaluations (as we can see).


If I understand you correctly, the researcher needs to prepare the input data for the NS in advance in order to achieve "phase space clustering". In this case the NS will independently identify "significant" areas in the multidimensional phase space (PS) of the input parameters and their arbitrary combinations, which will significantly reduce the volume of the PS and, consequently, the volume of necessary calculations. Right?

But I don't understand what the "probabilistic evaluations" are that the NS "handles much better".

 
Prival:
Sometimes the Voronoi diagram helps in visualisation, but you need to understand what to put on the X and Y axes. Here is an example with explanations that I managed to dig up on the web.

Correct me if I'm wrong. The Voronoi diagram shows the optimal (in a certain sense) boundary partitioning a space on which boundary conditions for a given class of differential equations are set. How, then, does this relate to the topic at hand?
 

Neutron:

Sorry, it really has nothing to do with the H-volatility FR; it has to do with NS, or rather with recognition theory. The diagram sometimes helps to visualize classes and how to separate them.

Just saw the question and tried to help.

 
Neutron:


If I understood you correctly, the researcher must first prepare the input data for the NS in order to achieve "phase space clustering". In this case the NS will independently identify "significant" areas in the multidimensional phase space (PS) of the input parameters and their arbitrary combinations, which will considerably reduce the volume of the PS and, consequently, the volume of necessary calculations. Right?

But I don't understand what the "probabilistic estimates" are that the NS "handles much better".


The clustering of the PS is a separate task, and it is performed by a Kohonen network. It is a single-layer network which, in the process of learning (without a teacher! i.e. self-learning), clusters the PS. Then a kernel function is fitted to these data to describe the cluster distribution. Then a probabilistic network is built which (as far as I understood) in the simplest version does not even require training: it simply uses Bayesian statistics to compute the probability that a new sample belongs to a particular cluster. The output is the winning cluster. This is the scheme simplified as much as possible.
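To make this scheme concrete, here is a minimal sketch of both stages: teacherless Kohonen clustering, followed by a Bayesian winner-takes-all assignment. Everything in it is illustrative - the `samples` array, the 1-D node topology, the node count and the Gaussian kernel are assumptions made for the sketch, not details from the thread.

```python
import numpy as np

def train_kohonen(samples, n_nodes=8, epochs=50, lr0=0.5, sigma0=2.0):
    """Stage 1: self-learning (no teacher) clustering of the phase space."""
    rng = np.random.default_rng(0)
    w = samples[rng.choice(len(samples), n_nodes)].astype(float)
    for epoch in range(epochs):
        lr = lr0 * (1.0 - epoch / epochs)              # decaying learning rate
        sigma = 0.5 + sigma0 * (1.0 - epoch / epochs)  # shrinking neighborhood
        for x in samples:
            winner = np.argmin(np.linalg.norm(w - x, axis=1))
            d = np.abs(np.arange(n_nodes) - winner)    # 1-D node topology
            h = np.exp(-d**2 / (2.0 * sigma**2))       # neighborhood kernel
            w += lr * h[:, None] * (x - w)             # pull nodes toward x
    return w

def classify_bayes(x, w, samples):
    """Stages 2-3: fit a Gaussian kernel to each cluster, then pick the
    winning cluster by Bayesian posterior (prior = cluster share)."""
    labels = np.argmin(np.linalg.norm(samples[:, None, :] - w, axis=2), axis=1)
    log_post = np.full(len(w), -np.inf)
    for k in range(len(w)):
        pts = samples[labels == k]
        if len(pts) < 2:
            continue                                   # degenerate cluster
        mu, var = pts.mean(axis=0), pts.var(axis=0) + 1e-9
        log_prior = np.log(len(pts) / len(samples))
        log_lik = -0.5 * np.sum((x - mu)**2 / var + np.log(2.0 * np.pi * var))
        log_post[k] = log_prior + log_lik
    return int(np.argmax(log_post))                    # the winning cluster
```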

The architecture of the NS, the way the input data is prepared, and the learning algorithm are the three keystones on which everything rests. As you can see, each of the three components involves something unformalizable. As far as I understand, this is what the NS inherits from its creator and what allows it to work successfully. And the numbers - the weights and the parameters of the activation function - are merely an appendix. Everybody has a head, but some people think with it while others eat with it. :-)

 

Thank you, Yura. Great answer!

If you don't mind, I'll ask a question about the applicability of NS. By way of example, let me return to my pet subject - Kagi constructions. We have a generating Zig-Zag (blue line) and a transaction line (red).

It follows from the theory (Pastukhov's thesis) that the behavior of the red line is statistically predictable: it is expected to be a zigzag with amplitude S = (H-volatility - 2)*H. This is the average statistical return of the strategy on a representative sample. Unfortunately, the estimated value is usually smaller than the spread. And this is all that the statistical method of time series analysis can give us in this case.
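To pin down the quantities involved, here is a minimal sketch, assuming a synthetic random-walk price series and the standard kagi reversal rule (a pivot is fixed once price retraces by at least H); `kagi_pivots` and `h_volatility` are illustrative helpers, not anything from the thread.

```python
import numpy as np

def kagi_pivots(prices, H):
    """Kagi zigzag vertices: a pivot is fixed once price retraces
    by at least H against the current swing."""
    pivots = []
    hi = lo = prices[0]
    direction = 0                             # +1 up swing, -1 down swing
    for p in prices[1:]:
        hi, lo = max(hi, p), min(lo, p)
        if direction >= 0 and hi - p >= H:    # reversal down from the high
            pivots.append(hi); hi = lo = p; direction = -1
        elif direction <= 0 and p - lo >= H:  # reversal up from the low
            pivots.append(lo); hi = lo = p; direction = +1
    return np.array(pivots)

def h_volatility(prices, H):
    """Pastukhov's Hvol: mean zigzag segment length in units of H."""
    segs = np.abs(np.diff(kagi_pivots(prices, H)))
    return segs.mean() / H

# Synthetic random walk; for it Hvol is close to 2, so S is close to 0.
prices = np.cumsum(np.random.default_rng(1).normal(0.0, 1.0, 200_000))
H = 10.0
hvol = h_volatility(prices, H)
S = (hvol - 2.0) * H                          # average statistical return per trade
print(f"Hvol = {hvol:.3f}, S = {S:.3f}")
```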

On the other hand, we have the generating Zig-Zag, whose return (practically impossible to realize) over a fixed time interval is the maximum possible for a given partition step H; with partition step H = spread it is the maximum possible for any time series (BP) at all. I wish I could get my hands on a tool capable of predicting the Zig-Zag! Or at least prove that such prediction, with a yield higher than that given by the statistical method (S), is possible in principle.

Do I understand correctly that the problem in this formulation is suitable for analysis with NS?

P.S. It seems to me that predicting the equidistant Zig-Zag (with a single step) is the best option. Among other things, we get rid of the dimension connected with the time scale - it is not needed, because we trade only the price change, and the time interval over which this change occurred does not, to a first approximation, enter into profitability.

 
Neutron:

I wish I could get my hands on a tool capable of predicting the Zig-Zag! Or at least prove that such prediction, with a yield higher than that given by the statistical method (S), is possible in principle.

Do I understand correctly that the problem in this formulation is suitable for analysis with NS?

P.S. It seems to me that predicting the equidistant Zig-Zag (with a single step) is the best option. Among other things, we get rid of the dimension connected with the time scale - it is not needed, because we trade only the price change, and the time interval over which this change occurred does not, to a first approximation, enter into profitability.


Theoretically, of course, it is suitable. But practically ...

The few things I've read about networks abound with advice to beginners: predicting price behaviour is ineffective. Indeed, if you think about it, how would the network suddenly know how the price will move in the future? Just because we packed it with a lot of neurons and fed it a lot of data? In this matter I am a rationalist. Such knowledge does not appear out of thin air and is not born by itself. I wrote about the three keystones for a reason. Besides these keystones, the source from which they are drawn is even more important - the author's concept. And this concept should contain an idea of what data, and in what form, may carry the essential information about the market; how the network must process them to obtain other numbers from which a meaningful conclusion for decision-making can be drawn; and, finally, how to teach the network to find these numbers.

From this point of view, imho, the problem in this formulation, although suitable for the network, is complicated and holds little promise. Since the ticks and the zigzag built on them have quite similar distributions, predicting the zigzag is no easier than predicting the price.

It seems to me that the ZigZag is really interesting to use as a network input, but as the most convenient form of representing price patterns. Those very patterns from the website you linked to could be a very interesting option. But in this case the net will not predict the price; it will determine the state of the market. This is a slightly different approach. Getting the NS to produce a statistical up/down output is a much more realistic task than predicting the movement itself. And this variant combines well with the ZigZag. So there are prospects; we just need to state the problem in such a way that it is solvable.

 

Thank you, Yura, for the lucid explanations - now my head is a bit clearer.

By the way, I was so sure that the transaction line FR (the red one in the previous picture) has a normal distribution that I didn't even want to study this point. Imagine my surprise when I saw THIS:

You must agree, it is an unexpected result... Compare it with the picture in the first post of this thread, where the FR for the sides of the Zig-Zag is given.

 

Yes, it's an interesting picture. If I understand correctly, it is for a kagi partitioning with parameter H=10? A certain connection with the picture from the first post is still apparent, though.

By the way, a thought occurred to me. I think you were right after all about the prospects of using NS for ZigZag prediction. Only it should be not a kagi but a renko construction. In that case a more or less clear formalization of ZigZag patterns is indeed possible, and hence clustering of the space of those patterns, and predicting segment sizes together with a statistical evaluation of the validity of that prediction. I am interested in your assessment of this thought. The main point is the difference between kagi and renko. For renko I am clear on how patterns can be formalized, and hence how to compare them with each other and assess their proximity. For kagi the picture is very fuzzy, and hence the same procedure may not work.
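For what it's worth, here is a sketch of the point about renko: since every renko segment is an integer multiple of H, a pattern collapses into a short tuple of signed step counts that can be compared exactly or by a simple distance, whereas kagi segments are real-valued, so the same trick does not apply directly. The pivot arrays and the L1 distance here are purely illustrative.

```python
import numpy as np

def renko_encode(pivots, H):
    """Encode a renko zigzag as signed step counts, e.g. (+3, -1, +2)."""
    return tuple(int(round(s)) for s in np.diff(pivots) / H)

def pattern_distance(a, b):
    """Proximity of two equal-length patterns: L1 distance in steps."""
    return sum(abs(x - y) for x, y in zip(a, b))

# Two hypothetical 3-segment renko zigzags with H = 10:
p1 = renko_encode(np.array([100.0, 130.0, 120.0, 140.0]), 10.0)  # (3, -1, 2)
p2 = renko_encode(np.array([200.0, 240.0, 230.0, 240.0]), 10.0)  # (4, -1, 1)
print(p1, p2, pattern_distance(p1, p2))                          # distance = 2
```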

On the other hand, I know a priori that what is true for kagi will also be true for renko. I'm not sure about the converse, though. If the converse is also true, then my bias towards renko is a fallacy and NS can just as well be applied to predict the segment size of any ZigZag, whether renko or kagi.

What do you think?

 

On the one hand, the Kagi construction determines the position of the BP extremum to within a point (Renko only to within the partition step H). On the other hand, it is not clear whether we need such precision. In this sense Renko looks more attractive because of its equidistant step H on the price scale. In short, the question needs to be investigated.

As for the formalization of the Zig-Zag forecasting task, I picture it as estimating the probable amplitude of the price movement U (red vector) from the point where the current extremum finishes forming, t=0, to the point of the expected extremum, t=1 (see Fig.).

In this setting the only thing to predict is the amplitude of the movement U, because its direction is predetermined! - it coincides with the direction of vector H (green solid arrow). The range of values that vector U may take runs from 0 points to infinity (see fig. on the right). The most probable value of the amplitude of vector U is 2 points, and its average value is a bit less than H. By the way, if that probable value were larger than the spread, we could consider the strategy positively profitable. I'm talking about the possible application of the FR to arbitrage analysis of a BP.
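The claim about the range and the typical amplitude can be checked empirically. A short sketch, under the reading that U = |zigzag segment| - H (the travel remaining once the extremum has been confirmed by an H-retracement; this reading of vector U is my assumption); it reuses the hypothetical kagi_pivots() helper, prices and H from the earlier kagi sketch.

```python
import numpy as np

# Reuses kagi_pivots(), prices and H from the earlier kagi sketch.
segs = np.abs(np.diff(kagi_pivots(prices, H)))  # zigzag segments, each >= H
U = segs - H                                    # travel left after confirmation
counts, edges = np.histogram(U, bins=50)
k = int(np.argmax(counts))
mode = 0.5 * (edges[k] + edges[k + 1])          # center of the tallest bin
print(f"mode of U ~ {mode:.1f}, mean of U = {U.mean():.2f}, H = {H}")
# For a random walk the mode sits near 0 and the mean near (Hvol - 1)*H.
```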

Of course, the Zig-Zag is what should be fed to the NS output, but what to feed to the input... also a Zig-Zag, shifted by one step? But you don't need an NS to analyse that situation! I think our task is to detect incipient arbitrage early (see figure). To do that, we would have to analyse the shape of the transaction line. The trouble is that it usually consists of 1-2, more rarely 3, kinks, and by the time it is identified the market has become efficient. Maybe there are some indirect signs of incipient arbitrage; in that case the task of their early detection, classification and constant updating is exactly a job for an NS.

What do you think?

I read on the internet:

Which is better, statistical methods or neural networks? The best answer to this purely practical question for an applied worker is "It depends". In Russian this means "it all depends on the situation".

The main practical conclusion that can be drawn boils down to a phrase that has already become an aphorism: "If nothing helps, try neural networks".