Market etiquette or good manners in a minefield - page 24

 
Neutron wrote >>

That's the truth! Just don't lump everything together - that's a separate matter.

As for bars, I use only opening prices - no averaging. And I'm going to do what the wise Prival does: switch to ticks. True, I'll have to fuss with the saving mode and data collection, but if it's worth it, why not?

...and won't all this fuss with the NN then be of use only to a mere scalper? I find it hard to believe that one can catch, say, an hourly trend at the tick level. Although, of course, it depends on how many ticks... but our computing power here is limited...
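For the saving mode and data collection mentioned above, tick logging in old-style MQL4 might look like this (a minimal sketch; the file name is illustrative, and FILE_READ|FILE_WRITE is used so the file is appended rather than truncated):

int start()
  {
   // Append every incoming tick to a CSV file (hypothetical name)
   int h = FileOpen("EURUSD_ticks.csv", FILE_CSV|FILE_READ|FILE_WRITE, ';');
   if(h < 0) return(0);
   FileSeek(h, 0, SEEK_END);                                   // move to the end of the file
   FileWrite(h, TimeToStr(TimeCurrent(), TIME_DATE|TIME_SECONDS), Bid, Ask);
   FileClose(h);
   return(0);
  }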

 
YDzh >> :

I'm not making a fuss :) The usefulness of ticks is acknowledged in the literature... It smells of chaos theory... As for whether it's worth it - is it? And where does Prival advise that?

Check it out here.

 
YDzh >> :

...and won't all this fuss with the NN then be of use only to a mere scalper? I find it hard to believe that one can catch, say, an hourly trend at the tick level. Although, of course, it depends on how many ticks... but our computing power here is limited...

Well, planes did eventually fly... And how much scepticism there was at first...

 
YDzh wrote >>

...and won't all this fuss with the NN then be of use only to a mere scalper? I find it hard to believe that one can catch, say, an hourly trend at the tick level. Although, of course, it depends on how many ticks... but our computing power here is limited...

It's a matter of how you approach it... There's no need to look at ticks more than 5-10 minutes back; beyond that, bars are enough for the analysis, and the ticks of the last few one-minute bars can be memorized and processed. You don't need huge computing power for that.
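FION's idea of memorizing the ticks of the last few one-minute bars could look like this (a minimal sketch; the buffer size and names are illustrative, not his code):

#define TICK_BUFFER 600                       // roughly 5-10 minutes of ticks, depending on activity

double   TickBuf[TICK_BUFFER];                // ring buffer of recent Bid ticks
datetime TickTime[TICK_BUFFER];               // arrival time of each stored tick
int      TickHead = 0;

int start()
  {
   TickBuf[TickHead]  = Bid;                  // remember the newest tick...
   TickTime[TickHead] = TimeCurrent();
   TickHead = (TickHead + 1) % TICK_BUFFER;   // ...overwriting the oldest one
   return(0);
  }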

 
FION >> :

It's a matter of how you approach it... There's no need to look at ticks more than 5-10 minutes back; beyond that, bars are enough for the analysis, and the ticks of the last few one-minute bars can be memorized and processed. You don't need huge computing power for that.

I've long been dissatisfied with the data representation formats in MT4. Timeframes were the first thing that caught my eye from my very first days in the market. I.e., looking at the MT4 timeframes, you quickly realize there's a reason for them! Made for convenience, so to speak... :)

So once I've figured out the net, the timeframes will follow the indicators - to hell with them. In fact, all I need from this terminal is the quotes and the trading interface. The rest can be discarded without any harm to the deposit.

 
paralocus wrote >>

I've long been dissatisfied with the data representation formats in MT4. Timeframes were the first thing that caught my eye from my very first days in the market. I.e., looking at the MT4 timeframes, you quickly realize there's a reason for them! Made for convenience, so to speak... :)

So once I've figured out the net, the timeframes will follow the indicators - to hell with them. In fact, all I need from this terminal is the quotes and the trading interface. The rest can be discarded without any harm to the deposit.

Then don't bother - take a ready-made net; there are plenty of them written in Java, for example... Why rack your brains in MQL4 and reinvent the wheel...

 
YDzh >> :

Then don't bother - take a ready-made net; there are plenty of them written in Java, for example... Why rack your brains in MQL4 and reinvent the wheel...

Just for the sake of knowledge.

 
YDzh wrote >>

Then don't bother - take a ready-made net; there are plenty of them written in Java, for example... Why rack your brains in MQL4 and reinvent the wheel...

You must be kidding! Why would I dig through other people's bugs? My own algorithm is better optimized - it's tailored to a specific tactical task. :)

paralocus wrote >>

Well, that seems to be everything, up to the weight correction.

for(int i = cikl; i >= 0; i--)
  {
   out  = OUT2(i);                                            // Get the net's output signal
   test = (Close[i]-Close[i+1])/Close[i+1];                   // Get the (n+1)-th sample

   d_2_out = test - out;                                      // Error at the net output
   d_2_in  = d_2_out * (1 - out*out);                         // Error at the input of the output neuron

   Correction2[0]       += d_2_in * D2[0];                    // Accumulate the micro-corrections
   SquareCorrection2[0] += Correction2[0] * Correction2[0];   // for each weight entering the output neuron
   Correction2[1]       += d_2_in * D2[1];                    // and accumulate the squares of those micro-corrections
   SquareCorrection2[1] += Correction2[1] * Correction2[1];
   Correction2[2]       += d_2_in * D2[2];
   SquareCorrection2[2] += Correction2[2] * Correction2[2];

   d_11_in = d_2_in * (1 - D2[1]*D2[1]);                      // Compute the error at the inputs
   d_12_in = d_2_in * (1 - D2[2]*D2[2]);                      // of the hidden-layer neurons

   for (int k = 0; k < 17; k++)
     {                                                        // Accumulate micro-corrections for the inputs
      Correction11[k]       += d_11_in * D1[k];               // of the first neuron
      SquareCorrection11[k] += Correction11[k] * Correction11[k];
     }

   for (k = 0; k < 17; k++)
     {                                                        // Accumulate micro-corrections for the inputs
      Correction12[k]       += d_12_in * D1[k];               // of the second neuron
      SquareCorrection12[k] += Correction12[k] * Correction12[k];
     }
  }
Paralocus, now, for each weight, take the square root of the accumulated sum of squares and divide that weight's total correction by this norm. That is what you add to each weight! That's one epoch. Repeat it as many times as you have planned training epochs, gradually reducing each epoch's contribution multiplier to zero by the end of training.
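In symbols, for each weight w_j (a sketch of the rule as stated; C_j is the accumulated correction, S_j the accumulated sum of squares, and eta_q an epoch multiplier that falls to zero by the last epoch):

w_j <- w_j + eta_q * C_j / sqrt(S_j)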
 

Gentlemen, tell me how you deal with falling into shallow local minima and with bad initial weights. I understand that at the start they don't affect training much, but later they begin to affect the results a lot.

 
Neutron >> :
...now, for each weight, take the square root of the accumulated sum of squares and divide that weight's total correction by this norm. That is what you add to each weight! That's one epoch. Repeat it as many times as you have planned training epochs, gradually reducing each epoch's contribution multiplier to zero by the end of training.

Done:

for(int q = 1; q <= 1000; q++)
  {
// ----------------------------------------- EPOCH ----------------------------------------------------
   for(int i = cikl; i >= 0; i--)
     {
      out  = OUT2(i);                                            // Get the net's output signal
      test = (Close[i]-Close[i+1])/Close[i+1];                   // Get the (n+1)-th sample

      d_2_out = test - out;                                      // Error at the net output
      d_2_in  = d_2_out * (1 - out*out);                         // Error at the input of the output neuron

      Correction2[0]       += d_2_in * D2[0];                    // Accumulate the micro-corrections
      SquareCorrection2[0] += Correction2[0] * Correction2[0];   // for each weight entering the output neuron
      Correction2[1]       += d_2_in * D2[1];                    // and accumulate the squares of those micro-corrections
      SquareCorrection2[1] += Correction2[1] * Correction2[1];
      Correction2[2]       += d_2_in * D2[2];
      SquareCorrection2[2] += Correction2[2] * Correction2[2];

      d_11_in = d_2_in * (1 - D2[1]*D2[1]);                      // Compute the error at the inputs
      d_12_in = d_2_in * (1 - D2[2]*D2[2]);                      // of the hidden-layer neurons

      for (int k = 0; k < 17; k++)
        {                                                        // Accumulate micro-corrections for the inputs
         Correction11[k]       += d_11_in * D1[k];               // of the first neuron
         SquareCorrection11[k] += Correction11[k] * Correction11[k];
        }

      for (k = 0; k < 17; k++)
        {                                                        // Accumulate micro-corrections for the inputs
         Correction12[k]       += d_12_in * D1[k];               // of the second neuron
         SquareCorrection12[k] += Correction12[k] * Correction12[k];
        }
     }
// ------------------------------------- END OF EPOCH -------------------------------------------------

// ----------------------------------- WEIGHT CORRECTION ----------------------------------------------
   for(k = 0; k < 3; k++)
      W2[k] += Correction2[k]/MathSqrt(SquareCorrection2[k]);

   for(k = 0; k < 17; k++)
     {
      W1[0,k] += Correction11[k]/MathSqrt(SquareCorrection11[k]);
      W1[1,k] += Correction12[k]/MathSqrt(SquareCorrection12[k]);
     }
  }


I'm a little confused about how to reduce an epoch's contribution multiplier... And at the end of training I get very small output-layer weights and large hidden-layer weights.

Alert: W2[0] = -0.0414 W2[1] = 0.0188 W2[2] = -0.0539

Alert: W1[1,0]=-27.0731 W1[1,1]=-30.2069 W1[1,2]=37.6292 W1[1,3]=30.4359 W1[1,4]=-22.7556 W1[1,5]=-37.5899
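As for reducing an epoch's contribution multiplier: one reading of Neutron's advice is a factor that falls linearly to zero over the planned epochs (a minimal sketch using the names from the code above; the linear ramp is an assumption, not his confirmed scheme):

// Weight correction scaled by an epoch-decay factor (assumed linear ramp)
double eta = 1.0 - q/1000.0;                 // q is the epoch counter, 1..1000

for(k = 0; k < 3; k++)
   W2[k] += eta * Correction2[k]/MathSqrt(SquareCorrection2[k]);

for(k = 0; k < 17; k++)
  {
   W1[0,k] += eta * Correction11[k]/MathSqrt(SquareCorrection11[k]);
   W1[1,k] += eta * Correction12[k]/MathSqrt(SquareCorrection12[k]);
  }
// Note: the Correction/SquareCorrection accumulators presumably also need
// to be zeroed at the start of each epoch, or the sums carry over.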