6. Feed-forward method

The feed-forward pass is traditionally implemented in the FeedForward method. This method is declared virtual in the neural layer base class and is overridden in each new class to build the algorithm specific to that class. We will do the same for this class and override the method.

In the parameters, the CNeuronDropout::FeedForward method receives a pointer to the object of the previous layer of our model. In the method body, we immediately set up a control block that validates the pointers to the objects used in the method. As usual, we check not only the pointers to external objects received in the parameters but also those to the internal objects of the class. In this case, we check the pointer to the previous layer object and to its result buffer, as well as the pointer to the result buffer of the current layer.

bool CNeuronDropout::FeedForward(CNeuronBase *prevLayer)
  {
//--- control block
   if(!prevLayer || !prevLayer.GetOutputs() || !m_cOutputs)
      return false;

After successfully passing the control block, we proceed to execute the algorithm of the Dropout method.

To execute the algorithm in training mode, we prepare a masking buffer. First, we fill the entire buffer with the scaling factor 1/q, which we stored in the m_dInitValue variable at the class initialization stage.
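To see why exactly 1/q is used, consider the expected value of a masked element (here p denotes the dropout probability and q = 1 − p the probability of keeping an element; the notation is ours for this quick check):

$$E\left[\tilde{x}_i\right] = q \cdot \frac{1}{q}\,x_i + (1 - q) \cdot 0 = x_i$$

Scaling the retained elements by 1/q keeps the expected value of each element unchanged, so the next layer receives signals of the same scale both during training and in operation.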

After that, we create a loop with the number of iterations equal to the number of elements to be dropped. In the loop body, we generate a random index in the range from 0 to the number of elements in the sequence and replace the multiplier at that position in the masking buffer with 0.

Although they say lightning never strikes twice in the same place, we still have to handle the case when the same element is selected twice. Before writing 0 to the masking buffer, we check the current coefficient of the selected element. If it is already zero, we decrement the loop iteration counter and move on to selecting the next element. This approach allows us to exclude exactly the specified number of elements.

//--- generate a data masking tensor
   ulong total = m_cOutputs.Total();
   if(!m_cDropOutMultiplier.m_mMatrix.Fill(m_dInitValue))
      return false;
   for(int i = 0; i < m_iOutNumber; i++)
     {
      int pos = (int)(MathRand() * MathRand() / MathPow(32767.0, 2) * total);
      if(m_cDropOutMultiplier.m_mMatrix.Flat(pos) == 0)
        {
         i--;        // element already dropped: repeat the iteration
         continue;
        }
      if(!m_cDropOutMultiplier.m_mMatrix.Flat(pos, 0))
         return false;
     }
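To illustrate this exact-count masking outside the class, here is a minimal standalone script sketch. The sizes, local variables, and the zero-counting check are ours for illustration only; the class members are replaced with plain local objects:

void OnStart()
  {
   const ulong  total = 100;                            // sequence length (illustrative)
   const int    drop  = 30;                             // number of elements to drop
   const double init  = (double)total / (total - drop); // scaling factor 1/q
   matrix mask = matrix::Full(1, total, init);          // fill the mask with 1/q
   for(int i = 0; i < drop; i++)
     {
      int pos = (int)(MathRand() * MathRand() /
                MathPow(32767.0, 2) * total);           // random index in [0, total)
      if(mask.Flat(pos) == 0)                           // already dropped: retry
        {
         i--;
         continue;
        }
      mask.Flat(pos, 0);                                // drop the element
     }
   int zeros = 0;
   for(ulong j = 0; j < total; j++)                     // count the dropped elements
      if(mask.Flat(j) == 0)
         zeros++;
   PrintFormat("zeros in mask: %d (expected %d)", zeros, drop);
  }

Thanks to the retry on an already-zeroed position, the printed count always matches drop exactly.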

After generating the masking vector, we only need to apply it to the initial data. To do this, we multiply two buffers element-wise: the initial data buffer and the masking buffer.
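For example (the numbers are illustrative), suppose the previous layer produced the vector (1, 2, 3, 4) and one element out of four is dropped, so q = 0.75 and 1/q ≈ 1.333. If the second element was selected, the mask is (1.333, 0, 1.333, 1.333), and the element-wise product yields (1.333, 0, 4, 5.333).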

According to our library building concept, in each method of the class, we create two execution branches whenever possible: one using standard MQL5 means and the other one using OpenCL for multi-threaded computations. Therefore, next, we create a branching of the algorithm depending on the selected device for computing operations.

As always, we will first look at the implementation of the algorithm using MQL5 and return to the multi-threaded OpenCL implementation a little later. In the MQL5 block, we use matrix operations.

//--- branching of the algorithm depending on the execution device
   if(!m_cOpenCL)
     {

As you remember, the layer operates in two modes: training and operation. Therefore, before executing the algorithm, we check the current operating mode. If the class runs in operation mode, we simply copy the contents of the result buffer of the previous layer into the result buffer of the current layer. During training, we multiply the tensor of the original data element-wise by the masking tensor.

//--- checking the operating mode flag
      if(!m_bTrain)
         m_cOutputs.m_mMatrix = prevLayer.GetOutputs().m_mMatrix;
      else
         m_cOutputs.m_mMatrix = prevLayer.GetOutputs().m_mMatrix *
                                m_cDropOutMultiplier.m_mMatrix;
     }
   else  // OpenCL block
     {
      return false;
     }
//---
   return true;
  }

As a result of the operations described above, the result buffer of our layer contains the masked data from the previous layer. The task set for the feed-forward method is complete, and we can exit the method. For now, we have placed a temporary stub where the multi-threaded computation algorithm will go.

Next, we move on to organizing the backpropagation process.