a trading strategy based on Elliott Wave Theory - page 58

 
There are many things in life that seem important or unimportant at first. Like the results of a primitive Expert Advisor which, according to Avals, should have been justified first, although the idea behind it is intuitively correct. But it was easier for me to put it on a real account for testing than to try to disprove it. If it survives until the end of the year, then I will probably get around to running it through the tester :)
Summary:
Deposit/Withdrawal: 5 000.00   Credit Facility: 0.00
Closed Trade P/L: 1 186.63   Floating P/L: 560.29   Margin: 961.62
Balance: 6 186.63   Equity: 6 746.92   Free Margin: 5 785.30

Gross Profit: 3 669.45   Gross Loss: 2 482.82   Total Net Profit: 1 186.63
Profit Factor: 1.48   Expected Payoff: 23.27
Absolute Drawdown: 143.75   Maximal Drawdown: 858.39 (13.20%)

Total Trades: 51   Short Positions (won %): 26 (42.31%)   Long Positions (won %): 25 (56.00%)
Profit Trades (% of total): 25 (49.02%)   Loss Trades (% of total): 26 (50.98%)
Largest profit trade: 540.94   Largest loss trade: -292.62
Average profit trade: 146.78   Average loss trade: -95.49
Maximum consecutive wins ($): 6 (1 099.13)   Maximum consecutive losses ($): 4 (-744.95)
Maximal consecutive profit (count): 1 099.13 (6)   Maximal consecutive loss (count): -744.95 (4)
Average consecutive wins: 2   Average consecutive losses: 2
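As a quick sanity check (not part of the original report), the headline figures are internally consistent: the profit factor is gross profit divided by gross loss, and the expected payoff is net profit per trade. A minimal Python sketch with the values from the report above:

```python
# Figures taken from the strategy-tester report above
gross_profit = 3669.45
gross_loss = 2482.82
total_trades = 51

net_profit = gross_profit - gross_loss       # total net profit
profit_factor = gross_profit / gross_loss    # gross profit per unit of gross loss
expected_payoff = net_profit / total_trades  # average net profit per trade

print(round(net_profit, 2), round(profit_factor, 2), round(expected_payoff, 2))
```

Running it reproduces the reported 1 186.63, 1.48 and 23.27.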
 
I tweaked my Expert Advisor a bit, relaxing the market entry conditions, and added a variable lot size.
The results are here https://c.mql5.com/mql4/forum/2006/06/new_results.zip
The lot size varied from $5 to $80.
The entry/exit algorithm was developed with EURUSD in mind, so the Expert Advisor showed considerable results on three years of history. Because of the heavy computations, I simply limited the sample on which the channels are estimated to 300 bars, which on H1 is 12.5 trading days. In fact, much longer samples should be considered to increase the reliability of an entry point. I will run the Expert Advisor again on longer samples to obtain experimental confirmation of this assumption, but that will take time to compute.

After obtaining the result shown in the file for EURUSD, I ran the Expert Advisor on GBPUSD and USDCAD with the same parameters (even including the trailing stop parameter!) at which it was optimized for EURUSD. Of course, no comparable result was obtained on the other two pairs, but I think the result is still positively biased. Therefore I can draw two conclusions. The first is that the system proposed by Vladislav may well be profitable with an accurate technical realization. Perhaps Vladislav managed to find universal characteristics that work on all currency pairs without adjustment.

If we compare these three pairs, the following differences can be seen. USDCAD moves faster than EURUSD: where EURUSD takes 3 hours to pass 1.5 figures, USDCAD may take 1 hour, which creates a serious lag problem, at least on H1, the timeframe for which the result was obtained. Generally speaking, I am going to continue research in this direction. I now consider a reasonable tightening of the stops to be of paramount importance: when, and by how much, to pull them up while trading. It is obvious that a standard trailing stop will not let you take much. We will analyze this further.
 
Good afternoon Solandr!

Could you tell me which conditions you have relaxed?
 
Could you tell me which conditions you have relaxed?

I have reduced the sample size for the calculation and also the period for calculating the Murray levels indicator. With a short sample there are more local highs and lows satisfying the entry conditions. Also, when the Murray levels are calculated over a shorter period, they lie closer to each other, increasing the probability of entering the market. Compare 31 trades in the previous file with 80 in the new one. As a result, the number of deals has increased, but their quality has decreased accordingly. However, introducing a variable lot size, as Vladislav does, made it possible to exceed the result from the previous file over the same period with the same amount of risk.
 
I have posted new test results of the Expert Advisor https://c.mql5.com/mql4/forum/2006/06/new_results2.zip
The results show the data obtained on EURUSD at different lengths of the sample on which the search for channels takes place. I tested at 300 and 1000 bars on the H1 period. As expected, the longer sample improved the Expert Advisor's test report, though not very significantly: increasing the sample length more than threefold raised the final profit by only 4%. As expected, the number of deals on the longer sample decreased (by 19%), but their quality improved, as evidenced by the 14% growth of the profit factor. As a result, fewer deals at higher profitability produced some increase in the total profit.

This file also contains the results of the Expert Advisor developed for EURUSD, run on USDCHF and USDJPY with a 300-bar sample on H1. On USDCHF the result has a clear positive bias, while on USDJPY it wandered around zero throughout the entire testing period. At the same time, the number of trades was about 3 times smaller than on the other currency pairs over the same period. From this I conclude that USDJPY differs significantly in character from the pairs already tested. So if my version of the strategy is to work on USDJPY, it will take some effort and possibly a revised entry/exit algorithm for this pair.
By the way, for some reason Vladislav is not trading all pairs on the demo (although there are no technical problems, or they are solvable), but only 3 pairs: EURUSD, USDCHF, GBPUSD. And I think that maybe it is not just for fun; he has some reasons for that.
 
Hi all! More or less finished with MTSinka ;) As I said earlier, on Monday I will close direct access to the Lites account, the one used for uploading to the Empire. I may just change the investor password and perhaps even leave it for testing, though there is no point in running the computer for nothing: I am going to test it on a real account. What happens next, we'll see.
I hope many of you have progressed to the practical realization?

Good luck and good trends.
 
Good afternoon Vladislav!

Very nice of you to come back to this thread.
While you were gone there were a lot of discussions on several questions:

1.
Jhonny 16.06.06 16:48
Vladislav, could you please answer a few questions....
1) Are the channel selection criteria equally important to you (i.e. you find the optimal combination of these criteria), or is there a sequential selection from more significant criteria to less significant ones?
2) Having read a lot of clever books, I forgot what we are supposed to find :). If I have understood correctly what you mean by the potential energy functional, it is not clear why we search for it, since the result of the search will be the equation (not a value, but a function!) of a trajectory along which the change of potential energy is minimal (during the movement, not at reaching the end point!). I understand that the price moves along this very trajectory, and we have already chosen the equation that approximates this trajectory (the regression equation), so it only remains to conclude how well we approximate it. But if we search for it anyway, we may actually find a quadratic function, and if the coefficients B and C in the equation Ax^2+Bx+C are equal (or very close) to those in the regression equation, perhaps this is the required channel, although I have already begun to doubt :)


2. The question about distribution in the channel...

Vladislav 13.03.06 21:38

....There is also a central limit theorem, which says that any convergent distribution with increasing degrees of freedom converges to a normal distribution (so we don't really care what it is inside, as long as it converges ;)....

I understood this phrase to mean that we accept the distribution in the channel as normal and calculate the confidence intervals using the normal distribution function. But there were other logically sound opinions...


solandr 18.06.06 08:42

Honestly, it is just the difference at different values of N that interests me. While remaining qualitatively the same in form, the Student's distribution changes its quantitative parameters with the number of degrees of freedom. When the number of degrees of freedom is very large, it coincides with the normal distribution; when it is small, it differs from it.
Quantiles of the Student's distribution for different numbers of degrees of freedom:
For a probability of 99%:
q(30 bars)=2.750
q(100 bars)=2.626
q(300 bars)=2.593
q(1000 bars)=2.581
If you think that the difference of 6% between the quantile value for 30 bars and 1000 bars is not worth the extra effort, then it is your personal choice. I have a slightly different opinion.
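The 6% figure follows directly from the quoted quantiles; a small Python check (quantile values as quoted above, not recomputed):

```python
# 99% Student quantiles as quoted, keyed by sample size in bars
q = {30: 2.750, 100: 2.626, 300: 2.593, 1000: 2.581}

# relative widening of the confidence interval at 30 bars vs 1000 bars
rel_diff = q[30] / q[1000] - 1
print(f"{rel_diff:.1%}")  # about 6.5%
```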


3. And one more question about the Hurst coefficient


Jhonny 17.06.06 16:49

...I actually get a Hurst value >1 for channels with samples smaller than about 80, so there must be an error somewhere...

But on the page http://www.xaoc.ru/index.php?option=com_forum&Itemid=0&page=viewtopic&p=2380, there is such a statement about it:
It is just that there is an error in finding the exponent by linear approximation; consider it to be 1

Can we consider this statement to be true or am I more likely to have an error in my algorithm?

P.S. I hope I'm not bothering you too much with these questions and I'm looking forward to hearing from you.
 
Good afternoon Vladislav!

Very nice of you to look in on this thread again.
While you were away there were a few questions that were actively discussed in the thread:

1.
Jhonny 16.06.06 16:48

Vladislav, could you please answer a few questions....
1) Are the channel selection criteria equally important to you (i.e. you find the optimal combination of these criteria), or is there a sequential selection from more significant criteria to less significant ones?

No, it is not uniform: each criterion has its own weighting factor.

2) Having read a lot of clever books, I forgot what we are supposed to find :). If I have understood correctly what you mean by the functional of potential energy, it is not clear why we search for it, since the result of the search will be the equation (not a value, but a function!) of a trajectory along which the change of potential energy is minimal (during the movement, not at reaching the end point!). I understand that the price moves along this very trajectory, and we have already chosen the equation that approximates this trajectory (the regression equation), so it only remains to conclude how well we approximate it. But if we search for it anyway, we may find a quadratic function, and if the coefficients B and C in the equation Ax^2+Bx+C are equal (or very close) to those in the regression equation, perhaps this is the required channel, although I have already started to have doubts :)


I have written about this many times before. When you look for a sample on which to construct a regression channel, you will see that at a given moment in time this sample can be constructed in more than one way. Somehow a selection has to be made. I have chosen this as my main criterion; the rationale has been set out above in the thread. The algorithm is simple: build, check, select the extreme one.

2. Question about the distribution in the channel...

Vladislav 13.03.06 21:38

....There is also a central limit theorem which says that any convergent distribution with increasing degrees of freedom converges to a normal distribution (so we don't really care what it is inside, as long as it converges ;)....

I understood the phrase as meaning that we take the distribution in the channel as normal and calculate the confidence intervals using the normal distribution function. But there were other logically sound opinions...


No - we take the worst of the convergent ones :) (it also converges to the normal distribution as the degrees of freedom increase). I wrote about this, and solandr understood it quite correctly.

solandr 18.06.06 08:42

Honestly, it is just the difference at different values of N that interests me. While remaining qualitatively the same in form, the Student's distribution changes its quantitative parameters with the number of degrees of freedom. When the number of degrees of freedom is very large, it coincides with the normal distribution; when it is small, it differs from it.
Quantiles of the Student's distribution for different numbers of degrees of freedom:
For a probability of 99%:
q(30 bars)=2.750
q(100 bars)=2.626
q(300 bars)=2.593
q(1000 bars)=2.581
If you think that the difference of 6% between the quantile value for 30 bars and 1000 bars is not worth the extra effort, then it is your personal choice. I have a slightly different opinion.



3. And one more question about the Hurst coefficient


Jhonny 17.06.06 16:49

...I actually get a Hurst value >1 for channels with samples smaller than about 80, so there must be an error somewhere...

But on http://www.xaoc.ru/index.php?option=com_forum&Itemid=0&page=viewtopic&p=2380, there is this statement about it:
it is just that there is an error in finding the exponent by linear approximation; consider it to be 1

Can we consider this statement to be true or am I more likely to have an error in my algorithm?

P.S. I hope I'm not bothering you too much with these questions and I'm looking forward to hearing from you.



In the calculations, of course, an error arises. The main criterion is whether the value is above or below 0.5 (that is for the version of the formula centered at 0.5; if you shifted it to zero or one, then judge relative to the shift parameter, respectively). As for an error in your algorithm - there may be one ;). Who can check it but you?

Good luck and good trends.
 
Thank you, Vladislav. Forgive me for being so brazen already...
But the question about the functional, at least for me, remains open. I will try to invent something in this area myself :). Thanks again!
 
I have been reading this thread for quite a long time, but back then there was no fourth thread yet, which is where all the interesting stuff started. Thanks to all participants; I learned a lot of interesting things. But as they say, much knowledge brings not so much sorrow as questions.

After reading additional material on calculating the Hurst exponent, I came across Feder's study. He claimed that the empirical law H = log(R/S)/log(0.5*N) works rather badly and gives relatively correct results only for small samples (though nothing was said about how small). So I decided to implement the Hurst exponent calculation strictly according to the methodological materials (and it seems to have turned out even worse than Feder warned).
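For reference, the empirical law mentioned is only a few lines of code. A minimal Python sketch, assuming the whole-sample mean and the series itself as the "inflow" (the function name hurst_empirical is mine):

```python
import math

def hurst_empirical(v):
    """Feder's empirical estimate H = log(R/S) / log(0.5 * N) for a series v."""
    N = len(v)
    mean = sum(v) / N
    # cumulative deviation of the series from its mean
    cum, dev = 0.0, []
    for x in v:
        cum += x - mean
        dev.append(cum)
    r = max(dev) - min(dev)                             # range of the cumulative deviation
    s = math.sqrt(sum((x - mean) ** 2 for x in v) / N)  # standard deviation
    return math.log(r / s) / math.log(0.5 * N)
```

For the toy series [1, 2, 3, 4] this gives R = 2, S = sqrt(1.25) and H ≈ 0.839.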

I realize that the code may not be optimal in terms of performance (a lot of function calls and so on), but that is not the point. I wanted to check whether I have understood the logic of the calculation correctly, because the results seem questionable to me, so I decided to ask knowledgeable people.

In my calculations I used only the formulas from the manuals, nothing extra.

The implementation is quite simple, in the form of a script. There are only eight functions; their descriptions are given below, along with comments in the source code. Returning an array of calculated data from a function is implemented as follows: each function receives a double out[] array, the array is resized and initialized in the function body, and the computed data are written into it according to the function's purpose.

void GetVn(double out[], int N, int i)
Forms the "inflow" v[]. The close price is taken as the inflow. The result is written to the out[] array.

void SetArrayMv(double v[], double out[], int N)
Calculates the average inflow mv[] for all observations. The function receives the inflow and the number of observations N.

void SetArrayDv(double v[], double out[], double mv[], int N)
Calculates the accumulated deviation of the inflow dv[]. The inflow, the average inflow and the number of observations are passed in.

void SetArrayR(double dv[], int N, double out[])
Calculates the range r[]. The accumulated deviation and the number of observations are passed to the function.

void SetArrayS(double v[], double mv[], double out[], int N)
Calculates the standard deviation s[]. The inflow, the average inflow and the number of observations are passed in.

void SetArrayRS(double r[], double s[], double out[], int N)
Calculates the normalized range. The values of log(R/S) are written into the rs[] array. The range, the standard deviation and the number of observations are passed in.

void LogN(double out[], int N)
Fills an array with the logarithm of n.

void GetLine(double x[], double y[], int N)
Calculates the coefficients of the approximating line.

//+------------------------------------------------------------------+
//|                                                        HERST.mq4 |
//+------------------------------------------------------------------+
#property copyright ""
#property link      "grasn@rambler.ru"

int start()
{
   int i=0;
   int N=100;

   double v[];                         // Inflow
   
   double mv[];                        // Average inflow
   double dv[];                        // Accumulated deviation
   
   double r[];                         // Range
   double s[];                         // Standard deviation
   
   double rs[];                        // Normalized range
   double logN[];                      // Logarithm of n
   
   GetVn(v, N, i);
   SetArrayMv(v, mv, N);
   SetArrayDv(v, dv, mv, N);

   SetArrayR(dv, N, r);
   SetArrayS(v, mv, s, N);
   SetArrayRS(r, s, rs, N);
   
   LogN(logN, N);
            
   GetLine(logN, rs, N);
   
   return(0);
}

// Inflow
// ________________________________________________________________________________________
// "Inflow".....................................................................double out[]
// Sample length......................................................................int N
// Number of the bar from which the sample starts.....................................int i

void GetVn(double out[], int N, int i)
{
   int n;
   int k;      

   double x[];

   ArrayResize(x, N);
   ArrayInitialize(x, 0.0);

   ArrayResize(out, N);
   ArrayInitialize(out, 0.0);

   k=i+N-1;
      
   for(n=0; n<=N-1; n++)
   {
      x[n]=Close[k];
      k=k-1;
   }

   ArrayCopy(out, x, 0, 0, WHOLE_ARRAY);
      
   return;
}

// Average inflow
// ________________________________________________________________________________________
// Inflow.........................................................................double v[]
// Average inflow...............................................................double out[]
// Number of observations.............................................................int N

void SetArrayMv(double v[], double out[], int N)
{
   int n;
   int i;
   
   double SUM;
   double mv[];

   ArrayResize(mv, N);
   ArrayInitialize(mv, 0.0);

   ArrayResize(out, N);
   ArrayInitialize(out, 0.0);
      
   for(n=0; n<=N-1; n++)
   {
      SUM=0.0;   
      
      for(i=0; i<=n; i++)
      {
         SUM=SUM+v[i];
      }
      
      mv[n]=(1.0/(n+1))*SUM;
   }
   
   ArrayCopy(out, mv, 0, 0, WHOLE_ARRAY);
 
   return;
}

// Accumulated deviation of the inflow from the average
// ________________________________________________________________________________________
// Inflow.........................................................................double v[]
// Accumulated deviation of the inflow..........................................double out[]
// Average inflow................................................................double mv[]
// Number of observations.............................................................int N

void SetArrayDv(double v[], double out[], double mv[], int N)
{
   int n;
   int i;
   
   double dv[];
   double SUM;

   ArrayResize(dv, N);
   ArrayInitialize(dv, 0.0);

   ArrayResize(out, N);
   ArrayInitialize(out, 0.0);

   for(n=0; n<=N-1; n++)
   {
      SUM=0.0;
      
      for(i=0; i<=n; i++)
      {
         SUM=SUM+(v[i]-mv[i]);
      }
      
      dv[n]=SUM;
   }
   
   ArrayCopy(out, dv, 0, 0, WHOLE_ARRAY);
      
   return;
}

// Range
// ________________________________________________________________________________________
// Accumulated deviation of the inflow from the average..........................double dv[]
// Range........................................................................double out[]
// Number of observations.............................................................int N

void SetArrayR(double dv[], int N, double out[])
{
   int n;
   
   int idMax;
   int idMin;
   
   double rn[];
      
   double max;
   double min;

   ArrayResize(rn, N);
   ArrayInitialize(rn, 0.0);

   ArrayResize(out, N);
   ArrayInitialize(out, 0.0);   

   for(n=1; n<=N-1; n++)
   {
      // range of the accumulated deviation over the first n+1 observations;
      // the original loop ran to n=N and wrote past the end of rn[]
      idMax=ArrayMaximum(dv, n+1, 0);
      idMin=ArrayMinimum(dv, n+1, 0);
      
      max=dv[idMax];
      min=dv[idMin];
      
      rn[n]=MathAbs(max-min);
   }
   
   ArrayCopy(out, rn, 0, 0, WHOLE_ARRAY);
   
   return;
}

// Standard deviation
// ________________________________________________________________________________________
// Inflow.........................................................................double v[]
// Average inflow................................................................double mv[]
// Standard deviation...........................................................double out[]
// Number of observations.............................................................int N

void SetArrayS(double v[], double mv[], double out[], int N)
{
   int n;
   int i;
   
   double sn[];
   double SUM;

   ArrayResize(sn, N);
   ArrayInitialize(sn, 0.0);

   ArrayResize(out, N);
   ArrayInitialize(out, 0.0);   
   
   for(n=0; n<=N-1; n++)
   {
      SUM=0.0;
      
      for(i=0; i<=n; i++)
      {
         SUM=SUM+MathPow((v[i]-mv[i]), 2);
      }
     
      sn[n]=MathSqrt((1.0/(n+1))*SUM);
   }
   
   ArrayCopy(out, sn, 0, 0, WHOLE_ARRAY);
   
   return;
}

// Normalized range
// ________________________________________________________________________________________
// Range..........................................................................double r[]
// Standard deviation.............................................................double s[]
// Normalized range (log R/S)...................................................double out[]
// Number of observations.............................................................int N
void SetArrayRS(double r[], double s[], double out[], int N)
{
   int n;
   
   double rs[];

   ArrayResize(rs, N);
   ArrayInitialize(rs, 0.0);

   ArrayResize(out, N);
   ArrayInitialize(out, 0.0);  
      
   for(n=3; n<=N-1; n++)
   {  
      if(s[n]==0)
      {
         rs[n]=0.0;
      }
      else 
      {
         rs[n]=MathLog(r[n]/s[n]);
//         Print("rs ", rs[n]);
      }
   }
   
   ArrayCopy(out, rs, 0, 0, WHOLE_ARRAY);
   
   return;
}

// Logarithm of N
// ________________________________________________________________________________________
// Logarithm of N...............................................................double out[]
// Number of observations.............................................................int N
void LogN(double out[], int N)
{
   int n;
      
   double logN[];
   
   ArrayResize(logN, N);
   ArrayInitialize(logN, 0.0);

   ArrayResize(out, N);
   ArrayInitialize(out, 0.0);   
   
   for(n=3; n<=N-1; n++)
   {
      logN[n]=MathLog(n);
//      Print("n ", logN[n]);
   }
   
   ArrayCopy(out, logN, 0, 0, WHOLE_ARRAY);
   
   return;
}

// Calculation of the coefficients of the approximating line
// ________________________________________________________________________________________
// log(N).........................................................................double x[]
// log(R/S).......................................................................double y[]
// Number of observations.............................................................int N
void GetLine(double x[],double y[], int N)
{
   double m=0;
   double b=0;
   double tan = 0;
      
   int size=N;
   int cnt=size-3;                  // number of points actually summed below
   double sum_x=0.0;
   double sum_y=0.0;
   double sum_xy=0.0;
   double sum_xx=0.0;
      
   for (int i=3; i<size; i++)
   {
      sum_x=sum_x+x[i];
      sum_y=sum_y+y[i];
      sum_xy=sum_xy+(x[i]*y[i]);
      sum_xx=sum_xx+(x[i]*x[i]);
   }

   // the sums skip the first 3 points, so average over cnt points, not size
   m = (cnt*sum_xy - sum_x*sum_y) / (cnt*sum_xx - sum_x*sum_x);
   b = sum_y/cnt - m * sum_x/cnt;

           
   Print("Approximating line: y(x)=", m, "x+", b);
   Print("Hurst exponent= ", m);
   
   return;
}



Here are the log(R/S) and log(N) plots for a random series and for EURUSD daily quotes.
The random series was generated using MathRand() with the following parameters:
N=100
Result:
H=0.5454
y(x)=0.5454x+0.2653

As N increases, the Hurst exponent approaches 0.5


EURUSD D1 calculation was performed with the following parameters:
i=0 (starting from 27.06)
N=100

Calculation results:
H=1.0107 (why is it so huge?)
y(x)=1.0197x-0.5885



There are a few questions:
1. What should be taken as the inflow? The price itself, the absolute difference, only the positive difference? In other words, does the notion of "inflow" in this method imply any pre-processing of the data, or should the data under investigation be taken as the inflow as-is? Intuitively, I took the closing price in my calculations.

2. How should the normalized range be determined when the range, and especially the standard deviation, is zero? This almost always happens at small array indexes. When zeros are detected, for now I also assign zero to the normalized range. Is this correct? Or should the first values simply be excluded from the calculation?

3. I do not like the results themselves. Here is an example of several calculation variants for EURUSD:

Inflow (Close[] D1) on all bars: Hurst exponent 0.9069
Inflow (Close[] D1), i=0, N=200: Hurst exponent 0.8264

The figures are too optimistic, and they do not agree with Vladislav's (0.64 for the whole series), but there are also very low values: for the price differences over all bars I get a Hurst exponent of 0.5119.

4. Am I calculating the average inflow correctly? Should it be one value for all iterations, or should it change depending on the current n?

5. There is a chance that I have mixed everything up and the exponent is calculated in the wrong way altogether. Please explain where I went wrong :o(
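For cross-checking the script, here is a compact Python sketch of the same R/S pipeline (the name rs_hurst is mine). Note two deliberate choices, both assumptions rather than the "right" answers: it uses the whole-subsample mean in the cumulative deviation (one possible answer to question 4), and it simply skips points where S is zero (question 2):

```python
import math

def rs_hurst(v):
    """Estimate the Hurst exponent of series v by regressing log(R/S) on log(n).

    For each subsample length it computes the mean, the cumulative deviation
    from that mean, the range R of the cumulative deviation and the standard
    deviation S, then fits a least-squares line to (log n, log(R/S)).
    Indices below 3 are skipped, as in the MQL4 script above.
    """
    N = len(v)
    xs, ys = [], []
    for n in range(3, N):
        sub = v[:n + 1]
        mean = sum(sub) / len(sub)       # whole-subsample mean (assumption)
        cum, dev = 0.0, []
        for value in sub:
            cum += value - mean
            dev.append(cum)              # cumulative deviation from the mean
        r = max(dev) - min(dev)          # range
        s = math.sqrt(sum((x - mean) ** 2 for x in sub) / len(sub))
        if s == 0 or r == 0:
            continue                     # guard against log(0) and division by zero
        xs.append(math.log(n))
        ys.append(math.log(r / s))
    # least-squares slope over the points actually collected (not over N)
    cnt = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxy = sum(a * b for a, b in zip(xs, ys))
    sxx = sum(a * a for a in xs)
    return (cnt * sxy - sx * sy) / (cnt * sxx - sx * sx)
```

On a pure trend such as 1..100 it returns a slope close to 1, which is consistent with the suspiciously high values reported above for trending price series.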

PS: I hope the members of this forum will help me figure this out. I would be very grateful if Vladislav could take some time and explain where I go wrong with such a simple methodology.
