Advisors on neural networks, sharing experiences. - page 9

 
Andrey Emelyanov:

I also tried to implement a similar algorithm back in 2013... but I used 7 indicators, and a ZigZag was used to form the training vector for the NS. The essence is the same: I was looking for reversal points... When I started using the ZigZag I had no idea what to do with it, until I accidentally came across some patterns. That radically changed my TS. Now my algorithm is much simpler:

1. Calculate patterns on the minute and hour timeframes over the last year;

2. Build a dictionary of turning points (pairs of "minute pattern - hour pattern");

3. Train the NS on the turning-point dictionary (150-160 pairs);
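Steps 2 and 3 can be sketched roughly as follows. This is a C++ sketch, not the author's MQL code; the string pattern codes (e.g. "W2", "M15") and the container choices are illustrative assumptions:

```cpp
// Sketch of the turning-point dictionary: unique (minute, hour) pattern pairs
// plus per-pattern participation counters for each timeframe.
// Pattern codes and containers are assumptions, not the author's implementation.
#include <map>
#include <set>
#include <string>
#include <utility>

struct PairDictionary {
    std::set<std::pair<std::string, std::string>> pairs; // unique pairs only
    std::map<std::string, int> minuteCount;              // lower-timeframe counters
    std::map<std::string, int> hourCount;                // higher-timeframe counters

    // Count every occurrence, but store each pair only once.
    void Add(const std::string &minutePattern, const std::string &hourPattern) {
        ++minuteCount[minutePattern];
        ++hourCount[hourPattern];
        pairs.insert({minutePattern, hourPattern}); // no-op on duplicates
    }
};
```

The counters keep growing even for duplicate pairs, which matches the description below of counting participation even when a pair is not written into the dictionary.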

This is the result of my approach:

The disadvantages of my approach:

1) The TS carries high risk: since the exact breakout price cannot be determined, the TS places 9 pending orders with lots 1, 1, 3, 6, 14, 31, 70, 158, 355;

2) The exit algorithm (trailing stop) is difficult to implement;

So the NS can be used for trading; the only question is what to teach the NS...

P.S.: By patterns I mean A. Merrill's patterns (M & W).


 
-Aleks-:

A sensible approach. And were the patterns described simply as the positions of the bars in a matrix, without taking the actual price delta into account - only the relative position?

I have an idea to try patterns on indicators, but with a different frame: for the first five bars we analyse the indicators on the last 5 values, and for the two trend indicators we analyse in steps of 10, taking the absolute changes into account.

The zig-zag is a smart idea, but how are the peaks filtered from flat wobbles? There could be false trend-change points.

I do it this way:

There is a dynamic array that stores only pairs of patterns (I call it the dictionary); if a pair of patterns shows up a second time, it is not written again. There are also two counter arrays, one for the higher timeframe and one for the lower: they count how often each pattern participated in forming pairs, even when the pair was not written into the dictionary.

The training vector is formed from the dictionary; the weight of an individual pattern = pattern_counter / maximum_counter. That is, the pattern that participates in pair formation most often gets a weight of 1, and all other patterns get less than 1. This is the table you get after training the NS:

Main Pattern Main Count Slave Pattern Slave Count Sum_Multilayer_Perceptron
W2 18 W2 21 0.94914702
W14 14 W2 21 0.84972197
M15 20 M15 14 0.83269191
W1 11 W2 21 0.77499075
W13 10 W2 21 0.75006553
M15 20 M3 10 0.73813147
M15 20 M10 10 0.73812512
M15 20 M16 10 0.738099
W5 9 W2 21 0.72506739
W10 9 W2 21 0.72505412
M15 20 M11 9 0.71431236
W2 18 W1 11 0.71204136
W2 18 W5 11 0.7118911
W4 8 W2 21 0.70017271
W2 18 W4 10 0.68815217
W2 18 W7 10 0.68802818
M15 20 M7 7 0.66682395
M15 20 M14 6 0.64291215
W2 18 W13 8 0.64045346
M3 12 M15 14 0.63254238
W9 5 W2 21 0.62522345
W3 5 W2 21 0.62509623
W7 5 W2 21 0.62505511
M15 20 M12 5 0.61917222
M15 20 M8 5 0.6191331
W14 14 W1 11 0.61210667
W6 4 W2 21 0.60012943
W2 18 W14 6 0.59301682

Structure of the NS: 64 input neurons, 4 hidden, 1 output. That is, one input neuron describes one pattern. The network takes 40-50 minutes to train, and the NS error does not exceed 0.00001.
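The weighting rule (pattern_counter / maximum_counter) can be sketched as follows. This is a C++ illustration, not the author's code; the counts in the test are only examples in the spirit of the table above:

```cpp
// Sketch of the input-weight rule: weight = pattern_counter / maximum_counter,
// so the most frequent pattern maps to 1.0 and all others to less than 1.0.
#include <algorithm>
#include <map>
#include <string>

std::map<std::string, double> PatternWeights(const std::map<std::string, int> &counts) {
    std::map<std::string, double> weights;
    int maxCount = 0;
    for (const auto &kv : counts)
        maxCount = std::max(maxCount, kv.second);
    if (maxCount == 0)
        return weights; // no data: return an empty map
    for (const auto &kv : counts)
        weights[kv.first] = (double)kv.second / maxCount;
    return weights;
}
```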

Thus I have a model that can predict the significance of a pair of patterns even if that pair was not in the dictionary before.

I have been struggling with flat ranges and false peaks for a long time; I am attacking the problem at the level of the ZigZag calculation. I slightly modified the standard ZigZag code, implementing a percentage ZZ on its basis. So far the code looks roughly as follows:

int MyCExtremum::GetCombiZigzag(const double    &high[],     // high price buffer
                                const double    &low[],      // low price buffer
                                const datetime  &time[],     // time buffer
                                int             ExtDepth,    // extremum search depth (first pass)
                                double          ExtDeviation,// threshold: hard step + % of price change
                                int             ExtBackstep  // extremum search depth (second pass)
                               )
  {
   //--- values
   int    shift=0, whatlookfor=0, lasthighpos=0, lastlowpos=0, Deviat=1;
   double lasthigh=0.0, lastlow=0.0, percent=0.0;
   int    rates_total = ArraySize(time);          // size of the input timeseries
   int    limit       = rates_total - ExtDepth;   // calculation limit...
   //+---------------------------------------------------------------+
   //| VERY IMPORTANT CHECK, AFFECTS CALCULATION CORRECTNESS!        |
   //+---------------------------------------------------------------+
   if(ArrayIsSeries(high)) ArraySetAsSeries(high,false);
   if(ArrayIsSeries(low))  ArraySetAsSeries(low,false);
   if(ArrayIsSeries(time)) ArraySetAsSeries(time,false);
   //+---------------------------------------------------------------+
   //| INPUT PARAMETER CHECKS                                        |
   //+---------------------------------------------------------------+
   if(rates_total<20)
     {
      Print(__FUNCTION__," ERROR: the buffer size is too small.");
      return(-1);
     }
   if(ExtDeviation<0 || ExtDeviation>100)
     {
      Print(__FUNCTION__," ERROR: incorrect Deviation. The value of Deviation must be in the interval [0..100].");
      return(-1);
     }
   //--- check: Depth and Backstep
   if((ExtDepth < ExtBackstep)||(ExtDepth < 2))
     {
      Print(__FUNCTION__+" ERROR: incorrect Depth and Backstep. The value of Depth must be greater than Backstep.");
      return(-1);
     }
   //--- prepare the ZigzagBuffer[]
   if(ArraySize(ZigzagBuffer)>0) ArrayFree(ZigzagBuffer);               // remove old data
   ArrayResize(ZigzagBuffer,rates_total,EXTREMUM_RESERVE);
   ArrayFill(ZigzagBuffer,0,rates_total,0.0);
   if(ArrayIsSeries(ZigzagBuffer))  ArraySetAsSeries(ZigzagBuffer,false);
   //---
   if(ArraySize(HighMapBuffer)>0) ArrayFree(HighMapBuffer);             // remove old data
   ArrayResize(HighMapBuffer,rates_total,EXTREMUM_RESERVE);
   ArrayFill(HighMapBuffer,0,rates_total,0.0);
   if(ArrayIsSeries(HighMapBuffer)) ArraySetAsSeries(HighMapBuffer,false);
   //---
   if(ArraySize(LowMapBuffer)>0) ArrayFree(LowMapBuffer);               // remove old data
   ArrayResize(LowMapBuffer,rates_total,EXTREMUM_RESERVE);
   ArrayFill(LowMapBuffer,0,rates_total,0.0);
   if(ArrayIsSeries(LowMapBuffer))  ArraySetAsSeries(LowMapBuffer,false);
   //---
   if(ArraySize(TimeBuffer)>0) ArrayFree(TimeBuffer);                   // remove old data
   ArrayResize(TimeBuffer,rates_total,EXTREMUM_RESERVE);
   ArrayFill(TimeBuffer,0,rates_total,0);
   if(ArrayIsSeries(TimeBuffer))  ArraySetAsSeries(TimeBuffer,false);
   //--- adjust Deviation
   if(ExtDeviation < 1)
     {
      Deviat = 1;
     }
   else
     {
      Deviat = (int)ExtDeviation;
     }
   //--- get the "fresh" lows and highs
   if(GetHighMapZigzag(high,ExtDepth,Deviat,ExtBackstep) < 0) return(0);
   if(GetLowMapZigzag(low,ExtDepth,Deviat,ExtBackstep)   < 0) return(0);
   //--- final selection
   for(shift=ExtDepth; shift<rates_total; shift++)
     {
      switch(whatlookfor)
        {
         case Start: // search for a peak or a trough
            if(lastlow==0 && lasthigh==0)
              {
               if(HighMapBuffer[shift]!=0)
                 {
                  lasthigh=high[shift];
                  lasthighpos=shift;
                  whatlookfor=Sill;
                  ZigzagBuffer[shift]=lasthigh;
                  TimeBuffer[shift]=time[shift];
                 }
               if(LowMapBuffer[shift]!=0)
                 {
                  lastlow=low[shift];
                  lastlowpos=shift;
                  whatlookfor=Pike;
                  ZigzagBuffer[shift]=lastlow;
                  TimeBuffer[shift]=time[shift];
                 }
              }
            break;
         case Pike: // search for a peak
            if(LowMapBuffer[shift]!=0.0 && LowMapBuffer[shift]<lastlow && HighMapBuffer[shift]==0.0)
              {
               //--- the low moved lower: relocate the pivot
               ZigzagBuffer[lastlowpos] = 0.0;
               TimeBuffer[lastlowpos]   = 0;
               lastlowpos=shift;
               lastlow=LowMapBuffer[shift];
               ZigzagBuffer[shift]=lastlow;
               TimeBuffer[shift]=time[shift];
               //--- mandatory: leave the switch
               break;
              }
            //--- resolving the "ambiguity" (high and low on the same bar)
            if(LowMapBuffer[shift]!=0.0 && HighMapBuffer[shift]!=0.0 && LowMapBuffer[shift]<lastlow)
              {
               ZigzagBuffer[lastlowpos] = 0.0;
               TimeBuffer[lastlowpos]   = 0;
               lastlowpos=shift;
               lastlow=LowMapBuffer[shift];
               ZigzagBuffer[shift]=lastlow;
               TimeBuffer[shift]=time[shift];
               //--- mandatory: leave the switch
               break;
              }
            if(HighMapBuffer[shift]!=0.0 && LowMapBuffer[shift]==0.0)
              {
               //--- check: % of price change
               percent = (HighMapBuffer[shift]-lastlow)/(lastlow/100);
               if(percent > ExtDeviation)
                 {
                  lasthigh=HighMapBuffer[shift];
                  lasthighpos=shift;
                  ZigzagBuffer[shift]=lasthigh;
                  TimeBuffer[shift]=time[shift];
                  whatlookfor=Sill;
                 }
               percent = 0.0;
              }
            break;
         case Sill: // search for a trough
            if(HighMapBuffer[shift]!=0.0 && HighMapBuffer[shift]>lasthigh && LowMapBuffer[shift]==0.0)
              {
               //--- the high moved higher: relocate the pivot
               ZigzagBuffer[lasthighpos] = 0.0;
               TimeBuffer[lasthighpos]   = 0;
               lasthighpos=shift;
               lasthigh=HighMapBuffer[shift];
               ZigzagBuffer[shift]=lasthigh;
               TimeBuffer[shift]=time[shift];
               //--- mandatory: leave the switch
               break;
              }
            //--- resolving the "ambiguity" (high and low on the same bar)
            if(HighMapBuffer[shift]!=0.0 && LowMapBuffer[shift]!=0.0 && HighMapBuffer[shift]>lasthigh)
              {
               ZigzagBuffer[lasthighpos] = 0.0;
               TimeBuffer[lasthighpos]   = 0;
               lasthighpos=shift;
               lasthigh=HighMapBuffer[shift];
               ZigzagBuffer[shift]=lasthigh;
               TimeBuffer[shift]=time[shift];
               //--- mandatory: leave the switch
               break;
              }
            if(LowMapBuffer[shift]!=0.0 && HighMapBuffer[shift]==0.0)
              {
               //--- check: % of price change
               percent = (lasthigh-LowMapBuffer[shift])/(lasthigh/100);
               if(percent > ExtDeviation)
                 {
                  lastlow=LowMapBuffer[shift];
                  lastlowpos=shift;
                  ZigzagBuffer[shift]=lastlow;
                  TimeBuffer[shift]=time[shift];
                  whatlookfor=Pike;
                 }
               percent = 0.0;
              }
            break;
         default:
            return(-1);
        }
     }
   //--- return value of prev_calculated for next call
   return(rates_total);
  }

MyCExtremum is a class for calculating ZigZag...
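The percentage test in the Pike/Sill branches can be isolated into a small helper. The following C++ sketch is not the author's MQL code; it generalizes the two one-directional checks with fabs, which is a slight assumption:

```cpp
// Sketch of the percentage-deviation test from the ZigZag above: a candidate
// extremum is accepted only when price has moved more than extDeviation
// percent away from the previous pivot.
// Mirrors: percent = (high - lastlow) / (lastlow / 100), and its low-side twin.
#include <cmath>

bool ExceedsDeviation(double newExtreme, double lastPivot, double extDeviation) {
    if (lastPivot <= 0.0)
        return false; // guard against division by zero / nonsense prices
    double percent = std::fabs(newExtreme - lastPivot) / (lastPivot / 100.0);
    return percent > extDeviation; // strict comparison, as in the original
}
```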

Files:
MyCExtremum.mqh  37 kb
 
-Aleks-:

A sensible approach. And were the patterns described simply as the positions of the bars in a matrix, without taking the actual price delta into account - only the relative position?

I have an idea to try patterns on indicators, but with a different frame: for the first five bars we analyse the indicators on the last 5 values, and for the two trend indicators we analyse in steps of 10, taking the absolute changes into account.

The zig-zag is a smart idea, but how are the peaks filtered from flat wobbles? There could be false trend-change points.

About analysing indicators with patterns - that is very interesting... I think there is less noise in indicators, but the indicators should be chosen so that some suppress "low noise" and others "high noise"; then you get a multi-filter.
 

Andrey Emelyanov:

Structure of the NS: 64 input neurons, 4 hidden, 1 output. That is, one input neuron describes one pattern.

Are you hoping for results with this model? Your inner layer acts as an intermediate compressor, not a classifier.
 
Andrey Emelyanov:

I do the following:

There is a dynamic array that stores only pairs of patterns (I call it the dictionary); if a pair of patterns shows up a second time, it is not written again. There are also two counter arrays, one for the higher timeframe and one for the lower: they count how often each pattern participated in forming pairs, even when the pair was not written into the dictionary.

The training vector is formed from the dictionary; the weight of an individual pattern = pattern_counter / maximum_counter. That is, the pattern that participates in pair formation most often gets a weight of 1, and all other patterns get less than 1.

Structure of the NS: 64 input neurons, 4 hidden, 1 output. That is, one input neuron describes one pattern. The network takes 40-50 minutes to train, and the NS error does not exceed 0.00001.

Thus I have a model that can predict the significance of a pair of patterns even if that pair was not in the dictionary before.

I have been struggling with flat ranges and false peaks for a long time; I am attacking the problem at the level of the ZigZag calculation. I slightly modified the standard ZigZag code, implementing a percentage ZZ on its basis.

The array is an interesting solution. Are there any differences in the statistics between pairs/periods? How stable, overall, is the frequency of occurrence of a pattern that gives a positive prediction result?

About the zig-zag: I also have a percentage solution, but I additionally use deeper history to calculate a reference section of the zig-zag, against which I compare the percentage change of the others.

 
Andrey Emelyanov:
About analysing indicators with patterns - that is very interesting... I think there is less noise in indicators, but the indicators should be chosen so that some suppress "low noise" and others "high noise"; then you get a multi-filter.
There are many different indicators. I have experimented with standard oscillators (and their variations), and I managed to make a profit on all of them - it all depends on the settings... The only question is whether it is randomness or regularity.
 
Комбинатор:
Are you hoping for results with this model? Your inner layer acts as an intermediate compressor, not a classifier.
That is exactly the compression I need... At any one moment (on the current bar) only 2 of the 64 inputs are non-zero... and the task of the network is not to classify buy/sell, but to measure the probability of a bounce given those inputs. Or is my reasoning wrong?
 
-Aleks-:

The array is an interesting solution. Are there any differences in the statistics between pairs/periods? How stable, overall, is the frequency of occurrence of a pattern that gives a positive prediction result?

About the zig-zag: I also have a percentage solution, but I additionally use deeper history to calculate a reference section of the zig-zag, against which I compare the percentage change of the others.

As everyone knows, A. Merrill's patterns do not give an exact answer as to whether the pattern will develop further (maintaining the trend) or morph into another pattern (price rebound). That is why I decided to search for the answer using two timeframes: one hour and one minute. I am collecting statistics on the recurrence of pairs and do not yet have a universal training dictionary. However, I am sure this connection must exist... Otherwise there would be no harmonic patterns: butterflies, bats, etc.
 
Andrey Emelyanov:
As everyone knows, A. Merrill's patterns do not give an exact answer as to whether the pattern will develop further (maintaining the trend) or morph into another pattern (price rebound). That is why I decided to search for the answer using two timeframes: one hour and one minute. I am collecting statistics on the recurrence of pairs and do not yet have a universal training dictionary. However, I am sure this connection must exist... Otherwise there would be no harmonic patterns: butterflies, bats, etc.
Patterns that a person visually perceives as "butterflies, bats, etc." are born only in the human brain, and I think this factor has to be taken into account. It is worth studying cognitive psychology on this topic to understand what the brain treats as essential, and what it ignores, when perceiving a pattern - that is, which errors are acceptable and which are not. The brain often completes what it expects to see: its pattern vocabulary is limited, so it maps similar candlestick combinations onto a single picture. In other words, it does not use an exact mathematical model to describe what it sees.
 

My baby is still dumb and dull, but it is getting somewhere... 8 input indicators, 1 output, 15 neurons in the hidden layer. An input vector of 2000, and 10000 training epochs.

This is actually the 3rd or 4th network, and all of them get pretty much the same results. I guess I need more neurons and a larger input vector, but that takes a long time to train.

I have an approximate idea of the pattern it should pick up; I have selected indicators from different timeframes, and the outputs seem to carry meaningful information.
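For reference, the 8-15-1 architecture described above amounts to a forward pass like the following C++ sketch. The constant weights and the tanh/sigmoid activations are assumptions, and training (backpropagation) is omitted entirely:

```cpp
// Sketch of a forward pass for an 8-15-1 network (8 inputs, 15 hidden, 1 output).
// Weight values and activation choices here are illustrative assumptions.
#include <cmath>
#include <vector>

struct TinyMLP {
    static const int IN = 8, HID = 15;
    std::vector<std::vector<double>> w1; // HID x IN hidden-layer weights
    std::vector<double> b1;              // hidden biases
    std::vector<double> w2;              // output weights (one per hidden neuron)
    double b2 = 0.0;                     // output bias

    TinyMLP() : w1(HID, std::vector<double>(IN, 0.1)), b1(HID, 0.0), w2(HID, 0.1) {}

    double Forward(const std::vector<double> &x) const {
        double out = b2;
        for (int j = 0; j < HID; ++j) {
            double s = b1[j];
            for (int i = 0; i < IN; ++i)
                s += w1[j][i] * x[i];
            out += w2[j] * std::tanh(s);     // hidden activation: tanh (assumed)
        }
        return 1.0 / (1.0 + std::exp(-out)); // sigmoid output in (0, 1)
    }
};
```

With zero inputs and zero biases the sigmoid output is exactly 0.5, a useful sanity check when wiring up a network like this.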
