a trading strategy based on Elliott Wave Theory - page 237

 
Yuri and Northwind, thanks for the clarification. Intuitively I don't really like it, but we'll see.
 
to Neutron

Sergey, here is a very rough but correct simulation of a Wiener process. The process is modelled by the sum of a convergent series, where N is, generally speaking, infinite.

The point, in general, is that the next element is not obtained by adding to the previous one: the elements are independent, and this is one of the properties of the process.

This approach (or rather, it is not a method but the formula derived by N. Wiener) also cannot really be applied to modelling. Usually a Wiener process is modelled by the Monte Carlo method, but my machine is rather weak for that.
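A minimal sketch of both approaches, assuming the series in question is the standard Karhunen-Loeve (Fourier) expansion of the Wiener process on [0, 1]; the function names here are illustrative, not the author's:

```python
import numpy as np

def wiener_series(t, n_terms=1000, rng=None, chunk=10_000):
    # Truncated Karhunen-Loeve (Fourier) expansion of W(t) on [0, 1]:
    #   W(t) = sqrt(2) * sum_{k>=1} Z_k * sin((k - 1/2)*pi*t) / ((k - 1/2)*pi)
    # with independent Z_k ~ N(0, 1); exact only in the limit n_terms -> inf.
    rng = np.random.default_rng() if rng is None else rng
    t = np.asarray(t, dtype=float)
    w = np.zeros_like(t)
    for start in range(0, n_terms, chunk):  # sum in chunks to bound memory
        k = np.arange(start, min(start + chunk, n_terms)) + 0.5
        z = rng.standard_normal(k.size)
        w += (np.sin(np.outer(t, k) * np.pi) / (k * np.pi)) @ z
    return np.sqrt(2.0) * w

def wiener_monte_carlo(n_steps, rng=None):
    # Monte Carlo alternative: cumulative sum of independent N(0, dt) increments.
    rng = np.random.default_rng() if rng is None else rng
    dw = rng.standard_normal(n_steps) * np.sqrt(1.0 / n_steps)
    return np.concatenate(([0.0], np.cumsum(dw)))

t = np.linspace(0.0, 1.0, 501)
w_series = wiener_series(t, n_terms=50_000)
w_mc = wiener_monte_carlo(500)
```

Note that in the series the independent random elements are the coefficients Z_k, whereas in the Monte Carlo version the independence sits in the increments.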



With satisfaction I hasten to note that, as N increases, my criterion registers a weakening connection between samples and a shrinking "memory length":

N=50000


N=100000


Phew, at this point I will stop proving anything. Everything I wanted to check I have checked and reported, and I have given all my arguments. Thanks a lot for the ideas, Sergei, you've helped me once again. :о)))
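The criterion itself is not spelled out in the post. As a rough stand-in, one can gauge the "memory" via the lag-1 autocorrelation of the increments of the series-simulated process, which should shrink towards zero as N grows; this sketch reuses wiener_series from above and may well differ from the author's actual criterion:

```python
import numpy as np

def lag1_autocorr(x):
    # Sample lag-1 autocorrelation: a crude measure of dependence
    # between successive values (near 0 means no linear memory).
    x = x - x.mean()
    return float((x[:-1] * x[1:]).sum() / (x * x).sum())

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 501)
for n_terms in (50_000, 100_000):  # the N values from the figures above
    w = wiener_series(t, n_terms=n_terms, rng=rng)
    print(n_terms, lag1_autocorr(np.diff(w)))
```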
 
Well, now that everyone has reached a local consensus, let us simulate real trading using Pastukhov's scheme on ticks.
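A minimal sketch of what such a renko-style partition and the H-volatility estimate on it might look like; this is a simplification (in particular, the one-brick reversal rule here need not match Pastukhov's exact renko construction), and the names are illustrative:

```python
import numpy as np

def renko_levels(prices, h):
    # Grid partition: the recorded level advances by whole bricks of
    # size h whenever price moves at least h beyond the last level.
    levels = [prices[0]]
    for p in prices[1:]:
        while p >= levels[-1] + h:
            levels.append(levels[-1] + h)
        while p <= levels[-1] - h:
            levels.append(levels[-1] - h)
    return np.array(levels)

def renko_vertices(levels):
    # Vertices of the construction: the levels at which the brick
    # direction reverses, plus the two endpoints.
    d = np.sign(np.diff(levels))
    turns = np.where(d[1:] != d[:-1])[0] + 1
    return np.concatenate(([levels[0]], levels[turns], [levels[-1]]))

def h_volatility(vertices, h):
    # Pastukhov's H-volatility: the mean absolute swing between
    # successive vertices, measured in units of h. It tends to 2 for a
    # martingale; the per-trade edge is roughly (Hvol - 2)*h - spread.
    return np.abs(np.diff(vertices)).mean() / h
```

With the tick arrays loaded, h would be scanned downwards from the estimate-derived starting value until the year's profit peaks, as described below.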

For the modelling we took 2006 tick data for EURUSD (spread = 1 point), EURCHF (spread = 2 points) and EURGBP (spread = 2 points). Because the estimates for these pairs gave the larger return for the renko scheme, real trading was simulated for the renko scheme only. There is a single optimisation parameter: the partitioning amplitude (the vertical brick size). Its starting value was taken from the estimation results for each pair ("a trading strategy based on Elliott Wave Theory", 26.01.07 15:47), after which the calculation was repeated with ever smaller partitioning amplitudes until the maximum profit for the year was found. The results of the real-trading simulation are shown below:



The figure below shows the behaviour of the difference between the yield curve and its smoothed value. This quantity reflects the characteristic magnitude and dynamics of possible drawdowns, expressed in points.
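A rough sketch of how such a curve can be produced; the smoothing window is an assumption, not necessarily the one used for the figure:

```python
import numpy as np

def drawdown_proxy(equity, window=100):
    # Deviation of the yield curve from its moving-average smoothing;
    # the extremes of this series gauge the characteristic drawdown
    # in points. Edge effects of mode="same" are ignored in this sketch.
    kernel = np.ones(window) / window
    return equity - np.convolve(equity, kernel, mode="same")
```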



Conclusions:

1. Simulation of real trading using the renko scheme proposed by Pastukhov confirmed the possibility of obtaining arbitrage profit on the instruments discussed.

2. The average return for EURCHF and EURGBP at a spread of 2 points is 1.5 and 2.5 points per trade respectively, and 6 points per trade for EURUSD at a spread of 1 point, which is in satisfactory agreement with the estimates obtained from the formula (Hvol - 2)*H - Spread
("a trading strategy based on Elliott Wave Theory", 26.01.07 15:47).

3. Only one parameter was used in the optimisation: the partitioning amplitude. The parameter showed good stability in time
("a trading strategy based on Elliott Wave Theory", 27.01.07 09:28).

4. The moderate drawdown of the EURCHF pair (up to 50 points) allows using this instrument with leverage up to 50. With an annual income of about 400 points and reinvestment of funds, this lets one hope for 100-200% annual income with a maximum drawdown of up to 25%.
The moderate drawdown of the EURGBP pair (up to 20 points) allows leverage up to 100; with an annual income of about 100 points and reinvestment of funds, one may hope for 100-150% annual income with a maximum drawdown of up to 50%.
The average drawdown of the EURUSD pair (up to 100 points) allows leverage up to 30; with an annual income of about 500 points and reinvestment of funds, one may hope for 100-150% annual income with a maximum drawdown of up to 30%.
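The percentages above follow from simple arithmetic. A sketch of the conversion, assuming a quote near 1.0 (about 10,000 points per unit of price) and full reinvestment at the given leverage:

```python
def percent_of_deposit(points, leverage, points_per_unit=10_000):
    # Convert a move measured in points into percent of deposit at a
    # given leverage, for a quote near 1.0 (~10,000 points per unit).
    return 100.0 * points * leverage / points_per_unit

# EURCHF from item 4: ~400 points/year income, 50-point drawdown, leverage 50
print(percent_of_deposit(400, 50))  # -> 200.0 (% annual income)
print(percent_of_deposit(50, 50))   # -> 25.0  (% maximum drawdown)
```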

These are preliminary results. I ask everyone to take part in the discussion.
 
to Neutron

2. The average return for EURCHF and EURGBP at a spread of 2 points is 1.5 and 2.5 points per trade respectively, and 6 points per trade for EURUSD at a spread of 1 point, which is in satisfactory agreement with the estimates obtained from the formula (Hvol - 2)*H - Spread.


Actually, my broker has a 2-point spread for EURUSD and 4 points for the two other pairs.
As far as I understand, the additivity present in the income-calculation formula is not violated when modelling real trades. That means the results obtained can be recalculated trivially, and we don't need to model the trades again. Is that so?
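If the additivity holds as stated, the recalculation is indeed a one-liner; a sketch using the figures quoted in this thread (the trade count is the approximate one mentioned just below):

```python
def rescale_annual_profit(annual_profit, n_trades, old_spread, new_spread):
    # The spread enters the per-trade profit additively, so a different
    # broker spread just shifts every trade by the spread difference.
    return annual_profit - n_trades * (new_spread - old_spread)

# EURUSD: ~500 points/year over ~80 trades at a 1-point spread
# becomes ~420 points/year at a 2-point spread.
print(rescale_annual_profit(500, 80, 1, 2))  # -> 420
```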

And one more question. So it turns out that we get only about 80 trades for EURUSD over the year?
 
to Yurixx.

That's exactly right.

What I was thinking: although the kagi constructions show a lower return in the estimates, judging by those same estimates they allow 1.5-2 times more trades over the same test period, all other things being equal. In that light, kagi will probably show the higher return over the test period...
Yuri, since you have the methodology at hand, could you post the results of real-trading simulations for the kagi constructions?
 
OK, but not today. I've been out of the loop for a couple of days and have only managed to post on the forum.
I'll post the bar results today, and then the modelling for the kagi.
 
These are the preliminary results. Please take part in the discussion.

And how do things stand in testing outside the sample on which the partitioning parameter was optimised?
The 100-500 points per year obtained as a result of optimisation (under ideal conditions) look "slightly" doubtful in terms of viability in the real market. How do we avoid falling into the trap of "fitting to history"?..
 
And how do things stand in testing outside the sample on which the partitioning parameter was optimised? The 100-500 points per year obtained as a result of optimisation (under ideal conditions) look "slightly" doubtful in terms of viability in the real market. How do we avoid falling into the trap of "fitting to history"?..


1. There is not and cannot be any fitting or optimisation here. A fully self-consistent scheme has been constructed and theoretically justified. The scheme contains a single parameter, H. You can take it as an analogue of the timeframe on which the strategy is applied. You will agree that a timeframe cannot be "fitted". In testing on history we simply determine the H at which the strategy gives the best effect. By the way, any strategy gives different results on different timeframes; that is why authors tend to apply theirs to one particular timeframe and not to just any, and warn about this.

2. Out-of-sample testing is a valid and logical step. But what can it show? If market conditions have not changed (which here means that the H-volatility has not changed), the results will be statistically similar. If they have changed, the results will change too. There is no strategy that works under all market conditions. Here the condition is a single one: H-volatility = const.

3. Andrei, do you think there can be an Expert Advisor that guarantees "not falling" into any traps?
Or an Expert Advisor whose working parameters are determined from history, but which does not depend on history?

4. If you have understood this scheme, you should have noticed one detail: it is, in essence, a demonstration of the power of mathematical statistics. That is, the possibility of making money in the market has been scientifically proven and, moreover, a method for doing so, depending on the conditions, has been formulated. That is the good news. The bad news is that mathematical statistics is the law of large numbers, and it requires long participation in the market for the predicted income to be realised. But the longer you stay in the market, the more likely it is that market conditions will change and the scheme will stop working. And you will learn of that through your losses.

5. There can be no profit without the risk of loss - this is an AXIOM. The only thing you can afford is to know WHERE you are taking the risk. Now you know. :-))
 
And how do things stand in testing outside the sample on which the partitioning parameter was optimised? The 100-500 points per year obtained as a result of optimisation (under ideal conditions) look "slightly" doubtful in terms of viability in the real market. How do we avoid falling into the trap of "fitting to history"?..

It should be noted that there is only one optimisation parameter, and it shows good robustness. Consequently, we can expect the strategy's profitability to depend only weakly on possible over-optimisation on historical data. And if Yuri manages to work correctly with minute bars, we will have no problem testing the strategy adequately in the future: archives of minute bars for any period are available everywhere.
 
Here are the results for the kagi-partitioning of the EURUSD candlestick chart, M1, 2006.


Here H = 1...50 points is plotted along the x-axis and H-volatility along the y-axis.
Price chart details: about 350,000 bars in total; the ATR over this interval is 2.19 points.
Therefore Hvol[H=1] = 3.63 and Hvol[H=2] = 2.14 are results that make no physical sense: such H are below the average bar range, so the bars cannot resolve moves of that size.
From Hvol[H=3] = 1.83 onwards, the results fit the theory quite well.
As with the tick charts, it can be seen that for H > 20, Hvol tends to 2.0 very quickly and then fluctuates around that value.

Alongside this, here is the dependence of the number of kagi vertices of the same chart on the value of H.
Perhaps it will be of interest to someone.
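For completeness, a minimal sketch of a kagi construction behind such figures; the initialisation is simplified, and close_points is a hypothetical array of M1 close prices expressed in points, standing in for the series actually used:

```python
import numpy as np

def kagi_vertices(prices, h):
    # Kagi construction: extend the running extremum in the current
    # direction; once price retraces at least h from it, fix a vertex
    # (the extremum) and reverse direction.
    vertices = [prices[0]]
    ext, direction = prices[0], 1  # simplified initial direction
    for p in prices[1:]:
        if (p - ext) * direction >= 0:
            ext = p                        # extremum extends
        elif (ext - p) * direction >= h:
            vertices.append(ext)           # reversal: vertex is fixed
            ext, direction = p, -direction
    vertices.append(ext)
    return np.array(vertices)

# Hvol and vertex count as functions of H, as in the two figures above:
# for h in range(1, 51):
#     v = kagi_vertices(close_points, h)
#     print(h, np.abs(np.diff(v)).mean() / h, len(v))
```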