a trading strategy based on Elliott Wave Theory - page 131

 
2 Candid
The point is to compare different entry conditions. In principle, I got hooked on 2.5 RMS from the very beginning, and so far the impression persists that the true (sharp) boundary of the channels usually lies right around that level. I want to clarify that I meant not so much comparing results between the project participants (everyone has his own plan, and the stages of its implementation differ substantially) as the correctness of the entry optimization procedure. In this sense, the variant mentioned more or less follows from the basic model: a successful entry from the border of the channel should move the price inwards, ideally to the other border (and vice versa for an unsuccessful one), and the RMS levels serve as a dimensionless coordinate. But comparing entries is a very delicate matter, which is why I wrote that post precisely in anticipation of comments and objections.
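To make the 2.5 RMS idea concrete, here is a minimal sketch (my own illustration, not the poster's code), assuming the channel is a linear regression over a price window with boundaries at +/- k standard deviations of the residuals; the function name `rms_channel` and the sample prices are invented for the example.

```python
import numpy as np

def rms_channel(prices, k=2.5):
    """Fit a linear-regression channel to a price window.
    Boundaries sit at +/- k * RMS of the regression residuals
    (k = 2.5, the level discussed in the post)."""
    x = np.arange(len(prices), dtype=float)
    slope, intercept = np.polyfit(x, prices, 1)   # first-order (LR) fit
    midline = slope * x + intercept
    rms = np.sqrt(np.mean((prices - midline) ** 2))
    return midline, rms

prices = np.array([1.20, 1.21, 1.19, 1.22, 1.23, 1.21, 1.24, 1.25])
midline, rms = rms_channel(prices)
# The "dimensionless coordinate": how many RMS units the last price
# sits from the channel midline.
print((prices[-1] - midline[-1]) / rms)
```

The coordinate in RMS units is what makes entries from channels of different absolute width comparable.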

I understood your idea very well, and I liked it very much. When I was testing my systems, I also faced the problem of separate entry assessment. I cannot say that I have solved it. But I can share my IMHO.

I asked about your entry level because, to understand your approach to estimation, I needed to see the ratio between SL and TP. I now realise that it is 1:4. It follows that you are making a non-equilibrium entry estimate. This is one of the variants I have applied as well. In general, I see the options as:

1. Equilibrium estimation: SL = TP. I like this variant because it is simple and gives an objective assessment of the "correctness" of the entry; that is, it estimates how much the system increases the probability of winning.
2. Non-equilibrium estimation: SL < TP. This variant lets you estimate how close to the reversal point the system enters (for a counter-trend entry) or how far from the end of the trend it enters (for a trend entry).
3. Complex estimations. There are many of them, of course, and each can evaluate a specific property of the entries the system provides. One example, which I also used: no SL is set, and the only parameter is TP. For each entry, the maximum drawdown reached before TP was hit is measured. Varying TP produces a series that can be analysed statistically. This is just an example, and it has its drawbacks; in particular, TP may never be reached at all. So each such estimation variant requires its own refinement.
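As a rough illustration of variants 1 and 2 (a sketch of mine, not the poster's actual test harness; `evaluate_entry` and the sample path are invented), one can classify each entry by which level the subsequent price path touches first:

```python
def evaluate_entry(path, entry, sl, tp, long=True):
    """Walk the post-entry price path and report which level is hit first.
    Equilibrium estimation uses sl == tp; non-equilibrium uses sl < tp.
    (Simplification: close prices only, so a bar that spans both
    levels is not disambiguated.)"""
    for price in path:
        move = (price - entry) if long else (entry - price)
        if move <= -sl:
            return 'loss'   # stop-loss reached first
        if move >= tp:
            return 'win'    # take-profit reached first
    return 'open'           # neither level reached on this path

# Equilibrium check (SL = TP): over many entries, the win rate directly
# shows how far the entry rule shifts the odds away from 50/50.
print(evaluate_entry([1.001, 0.999, 1.004, 1.006], entry=1.000, sl=0.005, tp=0.005))
```

With SL = TP, a win rate above 50% over many entries is exactly the "increase in probability of winning" mentioned in variant 1.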

In general, when estimating the system as a whole, we rely on two values: the number of winning trades per losing trade, and the ratio of the average profit on profitable trades to the average loss on unprofitable ones. All these values come out together when the system is tested as a whole. They are therefore not independent, in the sense that we cannot say why the results are what they are: because the entries are bad, or the exits are bad, or the SLs and TPs are wrong, and so on. So it would, of course, be great to standardise a methodology for assessing entries and exits (and they are related). Then it would be possible to evaluate the two main characteristics of the system independently, and that would immediately show where the system's strengths lie and what still needs to be improved.
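For reference, the two figures mentioned can be computed from a trade list like this (a trivial sketch; the function name and the sample numbers are mine):

```python
def system_stats(trades):
    """Summarise a list of trade P/L values by the two figures discussed:
    winning trades per losing trade, and average win / average loss."""
    wins = [t for t in trades if t > 0]
    losses = [-t for t in trades if t < 0]
    if not wins or not losses:
        raise ValueError("need at least one winning and one losing trade")
    wins_per_loss = len(wins) / len(losses)
    payoff_ratio = (sum(wins) / len(wins)) / (sum(losses) / len(losses))
    # The two figures jointly determine the per-trade expectancy:
    expectancy = sum(trades) / len(trades)
    return wins_per_loss, payoff_ratio, expectancy

print(system_stats([40, -10, 25, -15, 30, -5]))
```

The point in the post stands: these numbers describe the whole system, and nothing in them says whether entries or exits are responsible.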
 
2 Rosh
Numerical methods solve it approximately as follows: first, roughly draw any line of length L with its ends at the tops of the bars. Calculate the potential energy of the chain (integration). Then "wiggle" the line a little and calculate the energy again.

Yes, I am more familiar with the calculus of variations than with integral methods. By the way, it is intended precisely for investigating functionals, not functions. So Vladislav's statement about finding an extremum of the potential energy functional is more understandable to me than his use of the potentiality of the field to determine something there. By the way, what? What exactly does Vladislav use the potentiality of the price field for?

There are many points to wiggle; an algorithm is required that eventually arrives at the minimum potential energy (the requirement that the method converge).

You once wrote that you don't understand why Vladislav has so much code and why each cycle takes so long. That's exactly why. Variation of the trajectory. Too many degrees of freedom.
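The "wiggling" Rosh describes can be sketched as a toy relaxation (my own illustration under simplifying assumptions: a discretized chain with fixed ends, potential energy taken as the sum of heights, and the fixed-length constraint enforced by a penalty term; none of this is Vladislav's actual code):

```python
import random

def chain_energy(ys, xs, target_len, penalty=50.0):
    """Potential energy of a discretized chain: the sum of heights (a crude
    stand-in for the integral), plus a penalty enforcing the fixed length."""
    pe = sum(ys)
    length = sum(((xs[i + 1] - xs[i]) ** 2 + (ys[i + 1] - ys[i]) ** 2) ** 0.5
                 for i in range(len(xs) - 1))
    return pe + penalty * (length - target_len) ** 2

def relax(xs, ys, target_len, iters=20000, step=0.01, seed=1):
    """'Wiggle' one interior point at a time and keep the move only if the
    energy decreases (the variation step). The two ends stay fixed."""
    rng = random.Random(seed)
    ys = list(ys)
    energy = chain_energy(ys, xs, target_len)
    for _ in range(iters):
        i = rng.randrange(1, len(ys) - 1)        # never move the endpoints
        delta = rng.choice((-step, step))
        ys[i] += delta
        trial = chain_energy(ys, xs, target_len)
        if trial < energy:
            energy = trial                        # accept: energy decreased
        else:
            ys[i] -= delta                        # reject: undo the wiggle
    return ys

xs = [i * 0.1 for i in range(11)]                 # ends fixed at x=0 and x=1
ys = relax(xs, [0.0] * 11, target_len=1.3)        # chain longer than the span
print(min(ys))
```

Each accepted move lowers the energy, so the loop can only go downhill; that is the convergence requirement mentioned above, and the many interior points are exactly the "many degrees of freedom" that make such variation slow.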
 
So Vladislav's statement about finding an extremum of the potential energy functional makes more sense to me than his use of field potentiality to determine something. By the way, what? What exactly is Vladislav using the price field potentiality for?


I think to assess the sufficiency of the approximation. One has to stop at some point, rather than fitting to infinity.

"The better the model, the less empirical it is and the more theoretical it contains. " Academician Zeldovich and Professor Myshkis in a course on applied mathematics.

"There is nothing more practical than a good theory" Einstein.

Quote from the book

As for the formal proximity of the empirical distribution and the theoretical distribution (model) adequate to it: they cannot coincide exactly, because the limited sample generates random deviations of frequencies and parameters. Moreover, paradoxically, a very small discrepancy between the empirical and theoretical distributions indicates their inconsistency, since by the law of large numbers empirical frequencies converge to probabilities only as the sample size grows without bound. A limited sample must show some discrepancy with the model, which allows two alternative interpretations:

1. the discrepancy between the empirical and theoretical distributions is random, within the limits of acceptable variation; they do not contradict each other, and the hypothesis of agreement with the theoretical model can be accepted;
2. the differences between the empirical and theoretical distributions are not explained by random fluctuations and are statistically significant, so the hypothesis of agreement with the theoretical model is rejected.

The rules by which agreement with the theoretical model is accepted or rejected are called goodness-of-fit criteria. What is usually estimated is the probability of erroneously rejecting the hypothesis of agreement.

 
While MTS is still a long way off, I liked Rosh's idea of colouring the channels. Implemented it. It is easier on the eyes.

Rosh, thanks for the tip - I've sorted out the pictures.

By the way, is anyone selecting channels by swings? I have not fully understood Vladislav, so I did it by my own methods, but now my calculations are slow. In general, I run the Zig-Zag several times with different periods, then take the penultimate extremum point and look for the channel with the minimum RMS in the range around it. Can anyone advise how to simplify this?
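In case it helps to compare notes, the search described above can be written as a brute-force loop (a sketch of my own, not the poster's indicator; `best_channel`, the synthetic prices, and the `slack` parameter are all invented for the example): try every channel start within a few bars of the chosen extremum, fit a regression to each window ending at the current bar, and keep the start with the smallest residual RMS.

```python
import numpy as np

def best_channel(prices, anchor, slack=3):
    """Around a candidate start bar ('anchor', e.g. the penultimate ZigZag
    extremum), try every start within +/- slack bars, fit a linear
    regression to the window running to the last bar, and keep the start
    whose residual RMS is smallest."""
    best = None
    lo = max(0, anchor - slack)
    hi = min(len(prices) - 3, anchor + slack)
    for start in range(lo, hi + 1):
        window = prices[start:]
        x = np.arange(len(window), dtype=float)
        slope, intercept = np.polyfit(x, window, 1)
        rms = np.sqrt(np.mean((window - (slope * x + intercept)) ** 2))
        if best is None or rms < best[0]:
            best = (rms, start)
    return best  # (rms, start index of the channel)

# Jumbled prices first, then an exactly linear stretch from bar 5 onward:
prices = np.array([5.0, 1.0, 4.0, 0.0, 3.0] + list(range(20)), dtype=float)
print(best_channel(prices, anchor=5))
```

One obvious speed-up is to maintain the regression sums (of x, y, xy, x squared) incrementally as the window shrinks, so each candidate start costs O(1) instead of a full refit; the repeated Zig-Zag passes can then be run once and cached.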

I was using Omega before I came across this thread. However, I have to deal with MQL as well. I hope I will be able to catch up with the others. :))

 
Swings as such (in the conventional sense) are, in my opinion, sort of unnecessary. I mean the standard zig-zag and the like (which is why it slows down).
Here the larger channels (their boundaries) serve as a base for the smaller ones. Hence the fractality, plenty of investment horizons, and multiple timeframes (Elder's three screens).

I haven't implemented the channel colouring myself yet :)
 
Swings as such (in the conventional sense) are, in my opinion, sort of unnecessary. I mean the standard zig-zag and the like (that's why it slows down). Here the larger channels (their boundaries) serve as a base for the smaller ones. Hence the fractality, plenty of investment horizons, and multiple timeframes (Elder's three screens).

I haven't implemented the channel colouring myself yet :)



So it is better to drop that method? Well, the quality of selection is satisfactory... but the calculation time is not. :))

And channel colouring is easy to implement via two triangles.
 
Rosh

Numerical methods solve it roughly as follows: first, roughly draw any line of length L with its ends at the tops of the bars. Calculate the potential energy of the chain (integration). Then "wiggle" the line a little and calculate the energy again. The difference produced by this "wiggling" is checked: a kind of differentiation (variation) has taken place. If the variation reduces the potential energy, move further in that direction; if it increases it, move the other way. There are many points to wiggle, so we need an algorithm that eventually arrives at the minimum potential energy (the requirement that the method converge).

Naturally, all moves respect the imposed constraints on the chain length and the coordinates of the start and end.


I do not quite understand the term "wiggling". If it means searching for a maximum or minimum by successive approximation, say by the method of conjugate gradients (I once gave a link), then that method is better suited to our case and has nothing to do with wiggling. And if it implies defining a new chain line each time, I think that is wrong, and numerical methods do not solve the problem this way. What they do solve are differential and integral equations, interpolation problems, and so on; that is, as a result of solving a system of equations we obtain a set of curves.

As for representing a price series as a chain, I do not like this approach; moreover, I do not understand its meaning or its analogy for our case.

I started my research on a different basis. At this link http://www.rfbr.ru/default.asp?doc_id=5169 there is a description of the potential energy surface of a reaction (I understand it is hard to make head or tail of it: that is mechanics and chemistry over there, and the market over here :o). Of course, I took only the idea and nothing more. And now I am "inventing" equilibrium equations in Mathcad to find the minimum of such a surface.
 
By "wiggling" Rosh meant variation of the curve. In differential calculus, an infinitesimal change of a variable is denoted by "d": dx. In the calculus of variations, an infinitesimal change of a function (!!!) is denoted by the Greek letter delta. The meaning is similar, once you remember that what changes is not a variable (i.e. a number) but a function.
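For reference, the textbook form of this (standard calculus of variations, nothing specific to the code discussed here): for a functional J[y], requiring the first variation to vanish for every admissible delta-y yields the Euler-Lagrange equation.

```latex
J[y] = \int_a^b F\bigl(x,\, y(x),\, y'(x)\bigr)\,dx ,
\qquad
\delta J = \int_a^b \left( \frac{\partial F}{\partial y}
  - \frac{d}{dx}\,\frac{\partial F}{\partial y'} \right) \delta y \, dx = 0
\;\;\Longrightarrow\;\;
\frac{\partial F}{\partial y} - \frac{d}{dx}\,\frac{\partial F}{\partial y'} = 0 .
```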

If you represent a price series as a chain, I don't like this approach and moreover, I don't understand its meaning and analogy for our case.

The analogy is very close, though not complete. A price series also has two fixed ends, the beginning and the end of the trajectory. In between, the trajectory arranges itself so as to minimise the potential energy functional. This is the classical approach of theoretical mechanics, if we neglect the distinction between the notions of the Hamiltonian and the potential energy. The fact that Vladislav used this in his model impressed me at first sight.

But then the trouble begins. Since the price field is potential, ANY price trajectory linking the two fixed ends corresponds to the same work of moving between them. This gives us the right to vary the trajectory as we please, without caring what happens inside along the way. But this is precisely what makes the potentiality principle unconstructive, since all trajectories become equivalent. At the same time, Vladislav wrote:
The potentiality of the price field, on the other hand, provides an opportunity and a method for reconstructing the function from the derivative.

This is what I don't understand.
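Spelled out (standard vector-calculus facts, not a claim about Vladislav's method): if the field has a potential, the work between fixed ends is path-independent, which is the equivalence of trajectories noted above; and the same potentiality is what allows the potential to be reconstructed from its derivative by a line integral, which seems to be the property the quoted sentence refers to.

```latex
\mathbf{F} = -\nabla \Phi
\;\Longrightarrow\;
W_{A\to B} = \int_{\gamma} \mathbf{F}\cdot d\mathbf{l} = \Phi(A) - \Phi(B)
\quad \text{for every path } \gamma \text{ from } A \text{ to } B ,
\qquad
\Phi(B) = \Phi(A) - \int_{A}^{B} \mathbf{F}\cdot d\mathbf{l} .
```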

Rosh has written everything about the numerical methods correctly. Only the point is not that the method is "numerical", it is that it is "integral".
And when asked what exactly Vladislav uses potentiality of the price field for, Rosh replied
I think to assess the adequacy of the approximation. One has to stop at some point, rather than adjusting to infinity.

I have my doubts about that too. I don't think Vladislav uses approximations above first order, i.e. above LR.
 
Rosh wrote about numerical methods correctly. But the point is not that the method is "numerical", it is that it is "integral".
And when asked what exactly Vladislav uses the potentiality of the price field for, Rosh replied
I think to assess the adequacy of the approximation. You have to stop at some point, rather than adjusting to infinity.

I have my doubts about that too. I don't think Vladislav uses approximations above first order, i.e. above LR.


I too am sure that there is no need for approximation above first order, because otherwise the whole theory of normally distributed residuals falls apart.
As for the price potentiality paradox, remember the definition of piecewise smooth functions, and the existence of one-sided (left and right) derivatives.
 
As for the price potentiality paradox, remember the definition of piecewise smooth functions, and the existence of one-sided (left and right) derivatives.

I remember, but I don't see the connection yet.