a trading strategy based on Elliott Wave Theory - page 131
I understood your idea very well, and I liked it very much. When I was testing my systems, I also faced the problem of separate entry assessment. I cannot say that I have solved it. But I can share my IMHO.
I asked the question about your entry level because, to understand your approach to estimation, I needed to see the ratio between SL and TP. I have now realised that it is 1:4. It follows that you are making a non-equilibrium entry estimate. This is one of the variants that I have applied as well. In general I imagine the options are:
1. Equilibrium valuation. SL = TP. I like this variant because it is simple and gives an objective assessment of the "correctness" of the entry. That is, it gives an estimate of the system's increase in probability of winning.
2. Nonequilibrium estimation SL < TP. This variant allows you to estimate how close to the reversal point the system enters (for counter-trend entry) or how far it enters from the end of the trend (for trend entry).
3. Complex estimations. There are many of them, of course, and each can evaluate a specific property of the entries the system provides. Let me give just one example, which I have also used. No SL is set; the only parameter is TP. For each entry, the maximum drawdown reached before the entry hits TP is measured. By varying the TP we obtain a series that can be analysed statistically. This is just an example, and it has its disadvantages; in particular, TP may never be reached at all. So each such estimation variant requires its own refinement.
In general, when estimating the system as a whole, we rely on two values: the number of winning trades per losing trade, and the ratio of the average profit on profitable trades to the average loss on unprofitable ones. All these values come out as a bundle when testing the system as a whole. They are therefore not independent, in the sense that we cannot say why the results look the way they do: whether the entries are bad, or the exits are bad, or the SLs and TPs are wrong, etc. So it would, of course, be great to standardise a methodology for assessing entries and exits (and they are related). Then it would be possible to evaluate the two main characteristics of the system independently, and that would immediately show where the strengths of the system lie and what still needs to be improved.
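Estimation variant 3 can be sketched in a few lines. This is only an illustration of the idea described above; the function name, the toy price path, and the long-only convention are my assumptions, not the poster's actual code.

```python
# Variant 3: no SL; for each entry, measure the maximum adverse excursion
# (drawdown) reached before price first hits TP. Returning None covers the
# weakness mentioned in the post: TP may never be reached at all.

def drawdown_before_tp(prices, entry_index, tp):
    """Max drawdown (in price units) for a long entry before TP is reached.
    Returns None if TP is never reached."""
    entry = prices[entry_index]
    max_dd = 0.0
    for p in prices[entry_index + 1:]:
        max_dd = max(max_dd, entry - p)   # worst adverse move so far
        if p - entry >= tp:
            return max_dd                 # TP reached: report the drawdown
    return None                           # TP never reached

# Toy long entry at index 0; varying TP yields a series to analyse.
path = [100.0, 99.5, 99.8, 100.7, 99.2, 101.2, 102.5]
for tp in (0.5, 1.0, 2.0, 3.0):
    print(tp, drawdown_before_tp(path, 0, tp))
```

Varying `tp` over a grid produces the series of drawdowns the post describes, which can then be examined statistically (mean, quantiles, tail behaviour).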
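The two aggregate values mentioned above can be computed from a trade list as follows. A minimal sketch: the trade results and names are made up for illustration.

```python
# Compute (a) winners per loser and (b) average win / average loss
# from a list of trade results. As the post notes, these aggregates
# cannot say WHY they look this way (entries vs. exits vs. SL/TP).

def summarise(trades):
    wins = [t for t in trades if t > 0]
    losses = [t for t in trades if t < 0]
    win_per_loss = len(wins) / len(losses)                  # value (a)
    payoff = (sum(wins) / len(wins)) / abs(sum(losses) / len(losses))  # (b)
    return win_per_loss, payoff

trades = [40, -20, 35, -25, 50, -15, 30, -20]   # illustrative results
w, p = summarise(trades)
print(w, p)
```

Note that very different systems (good entries with bad exits, or the reverse) can produce identical values of these two numbers, which is exactly the non-independence the post complains about.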
Yes, I am more familiar with variational analysis than with integral methods. Incidentally, it is intended precisely to investigate functionals, not functions. So Vladislav's statement about finding an extremum of the potential-energy functional is more understandable to me than his use of the potentiality of the field to determine something there. Determine what, exactly? What does Vladislav use the potentiality of the price field for?
You once wrote that you don't understand why Vladislav has so much code and why each cycle takes so long. That's exactly why. Variation of the trajectory. Too many degrees of freedom.
I think, to assess the sufficiency of the approximation: one has to stop at some point rather than keep fitting to infinity.
"The better the model, the less empirical it is and the more theoretical it contains." Academician Zeldovich and Professor Myshkis, in a course on applied mathematics.
"There is nothing more practical than a good theory" Einstein.
Quote from the book
As for the formal closeness of an empirical distribution to the theoretical distribution (model) adequate to it: they cannot coincide exactly, because sampling limitations generate random deviations of frequencies and parameters. Moreover, a very small discrepancy between the empirical and theoretical distributions paradoxically indicates their inconsistency, since by the law of large numbers empirical frequencies converge to probabilities only as the sample size grows without bound. A limited sample must therefore show some discrepancy with the model, which admits two alternative interpretations:
the discrepancy between the empirical and theoretical distributions is random within the limits of acceptable variation, they do not contradict each other and the hypothesis of agreement with the theoretical model can be accepted;
differences between the empirical and theoretical distributions are not explained by random fluctuations and are statistically significant, and the hypothesis of agreement with the theoretical model can be rejected.
The rules by which consistency with the theoretical model is established or rejected are called acceptance criteria. The probability of error in rejecting a hypothesis of agreement is usually estimated.
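The acceptance criterion described in the quote can be illustrated with Pearson's chi-square statistic. A sketch, not from the book: the die-roll counts are invented, and 11.07 is the standard chi-square critical value for 5 degrees of freedom at the 5% significance level.

```python
# Pearson's chi-square goodness-of-fit statistic: measures whether the
# discrepancy between empirical and theoretical frequencies stays within
# the limits of acceptable random variation.

def chi_square(observed, expected):
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

observed = [95, 108, 102, 88, 111, 96]   # 600 rolls of a die (made up)
expected = [100] * 6                      # theoretical model: uniform
stat = chi_square(observed, expected)
# Critical value for df = 5 at the 5% level is 11.07.
print(stat, "accept" if stat < 11.07 else "reject")
```

If the statistic is below the critical value, the discrepancy is consistent with random fluctuation and the hypothesis of agreement is accepted; above it, the differences are statistically significant and the model is rejected.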
Rosh, thanks for the tip - I've sorted out the pictures.
By the way, is anyone selecting channels by swings? I haven't fully understood Vladislav's method and did it my own way, but my calculations are now too slow. In general, I run the ZigZag several times with different periods, then take the penultimate extremum point and search for the channel with minimum RMS in the range around it. Can anyone advise how to simplify this?
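One way to phrase the "channel with minimum RMS" search is: fit a least-squares regression line to each candidate window and score it by the RMS of the residuals. A minimal sketch under my own assumptions (window sizes, prices, and the scoring rule are illustrative, not Vladislav's or the poster's actual algorithm):

```python
# Score a candidate channel by the RMS of residuals around a
# least-squares regression line, then pick the window with minimum RMS.

def rms_of_channel(prices):
    n = len(prices)
    xs = range(n)
    mx, my = (n - 1) / 2, sum(prices) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (p - my) for x, p in zip(xs, prices))
    b = sxy / sxx                       # slope of the regression line
    a = my - b * mx                     # intercept
    resid = [p - (a + b * x) for x, p in zip(xs, prices)]
    return (sum(r * r for r in resid) / n) ** 0.5

prices = [1.20, 1.21, 1.23, 1.22, 1.25, 1.26, 1.28]
# Try windows anchored at the last bar; keep the one with smallest RMS.
best = min(range(4, len(prices) + 1),
           key=lambda w: rms_of_channel(prices[-w:]))
print(best, rms_of_channel(prices[-best:]))
```

The slow part in practice is rescanning every window from scratch; incremental updates of the regression sums as the window grows avoid the repeated O(n) passes.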
I was using Omega before I came across this thread. However, I have to deal with MQL as well. I hope I will be able to catch up with the others. :))
Here the larger channels (their boundaries) serve as a base for the smaller ones. So you get fractality, plenty of investment horizons, and multiple timeframes (the three Elder screens).
I haven't implemented the channel colouring myself yet :)
So it's better to abandon such a method? Well, I think the quality of the selection is satisfactory... But the calculation time is not. :))
And channel colouring is easy to implement via two triangles.
Naturally, all moves respect the imposed constraints on the chain length and the coordinates of the start and end.
I do not quite understand the term "roving". If you mean searching for a maximum or minimum by gradual approximation, say by the conjugate gradient method (I once gave a link), then that method is more suitable for our case and has no relation to roving. And if it means defining a new chain line each time, I think that is wrong: numerical methods do not solve the problem that way. What they do solve are differential and integral equations, interpolation problems, etc. That is, as a result of solving a system of equations we get a set of curves.
As for representing a price series as a chain: I do not like this approach, and moreover I do not understand its meaning or its analogy for our case.
I started my research from a different basis. This link http://www.rfbr.ru/default.asp?doc_id=5169 describes the potential energy surface of a reaction (I realise it is hard to make head or tail of it: there is chemistry there and mechanics here :o). Of course, I took only the idea and nothing more. And now I am "inventing" equilibrium equations in Mathcad to find the minimum of such a surface.
The analogy is very close, though not complete. A price series also has two fixed ends: the beginning and the end of the trajectory. In between, the trajectory arranges itself so as to minimise the potential-energy functional. This is the classical approach of theoretical mechanics, if we neglect the distinction between the Hamiltonian and the potential energy. The fact that Vladislav used this in his model impressed me at first sight.
But then the trouble begins. Since the price field is potential, ANY price trajectory linking the two fixed ends corresponds to the same work of moving between them. This gives us the right to vary the trajectory as we please, without caring what happens inside it along the way. But this is precisely what makes the potentiality principle non-constructive: all trajectories become equivalent. At the same time, Vladislav wrote:
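The potentiality paradox above can be stated in one line. For a potential (conservative) field, the work integral depends only on the endpoints:

```latex
W \;=\; \int_{A}^{B} \vec{F}\cdot d\vec{l} \;=\; U(A) - U(B)
\qquad \text{for \emph{any} path from } A \text{ to } B .
```

Since the right-hand side contains only the fixed ends, every trajectory between them does the same work, and potentiality alone gives the variation nothing to select on.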
This is what I don't understand.
Rosh wrote everything correctly about numerical methods. Only the point is not that the method is "numerical" but that it is "integral".
And when asked what exactly Vladislav uses potentiality of the price field for, Rosh replied
I have my doubts about that too. I don't think Vladislav uses approximations above first order, i.e. above LR.
I too am sure that there is no need for an approximation above first order, because otherwise the whole theory of normally distributed residuals goes to hell.
And about the price-potentiality paradox: recall the definition of piecewise smooth functions, and the existence of left and right derivatives.
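For reference, the one-sided derivatives mentioned here are defined as:

```latex
f'_{-}(x_0) \;=\; \lim_{h \to 0^{-}} \frac{f(x_0+h)-f(x_0)}{h},
\qquad
f'_{+}(x_0) \;=\; \lim_{h \to 0^{+}} \frac{f(x_0+h)-f(x_0)}{h}.
```

A piecewise smooth function is smooth except at finitely many points, where both one-sided derivatives exist but may differ (a kink). Presumably the hint is that at such kinks the trajectories stop being interchangeable.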
I remember, but I don't see the connection yet.