Getting NaN out of double multiplication


Hi everybody, hope you are all doing well!

First, thank you for taking the time to read my topic; I hope you can help me.


So, the bug I'm having is simple to explain: I'm getting

-(nan)

after doing a series of multiplications of double values. Here are some details I noticed:

  • In the backtest this bug never happens, only on the live account
  • When running with the debugger on the live account, the bug doesn't happen
  • Depending on which PC the EA runs on, the bug may not happen


Now, here is some code. It is a simple feed-forward algorithm for running a neural network:

const ushort layersArrayRef[] = { 120,32,2 };
   //'layersArrayRef' is a small const array that keeps track of the size of the last dimension for each first dimension, so we don't waste time calling ArraySize
const ushort numLayers = 2;
   //'numLayers' likewise keeps track of ArraySize(layersArrayRef)-1, so we don't have to call it all the time
const ushort maxWeights = 120;
   //'maxWeights' is the largest number of weights found in any of the last dimensions of the 'layers' array

//+------------------------------------------------------------------+
void feedFoward(const double &inputs[], double &result[], const double &layers[][120][121])
  {
      //'layers' is a 3-dimensional array that contains all the double values to be multiplied with the inputs; if you are acquainted with neural networks, these are the weight values
      //None of these weights are NaN; they are all ordinary double values. Below is a sample so you know what to expect:
      //{-0.2777355450744791,-0.7567252900916879,0.6602865137730505,-0.2598532455377704,-0.7134906462589329,0.1609123375956665,0.3382764091756482,0.1060638487098541,0.1298993085141322}
      //'layers' is a big array defined at global scope; showing all weight values would take a lot of lines, so here is the declaration, which gives a good idea of what to expect:
      //const double layers[2][120][121] = {{{...}}};
   
   if((ushort)ArraySize(inputs)!=layersArrayRef[0])
     {
      Print("Not enough inputs.");
      return;
     }
   if((ushort)ArraySize(result)!=layersArrayRef[numLayers])
     {
      Print("Result size wrong.");
      return;
     }

   double inLayer[];
   ArrayResize(inLayer,maxWeights);

   for(int i=0; i<layersArrayRef[0]; i++)
     {
      inLayer[i]=inputs[i];
     }

   for(ushort l=0; l<numLayers; l++)
     {
      double outLayer[];
      ArrayResize(outLayer,layersArrayRef[l+1]);

      for(ushort n=0; n<layersArrayRef[l+1]; n++)
        {
         outLayer[n]=layers[l][n][0];//this is the bias, for those who know how neural networks work
         for(ushort w=0; w<layersArrayRef[l]; w++)
           {
            outLayer[n]+= inLayer[w]*layers[l][n][w+1];//this is the weight multiplication that at some point results in a NaN value
           }
        }

      for(ushort n=0; n<layersArrayRef[l+1]; n++)
        {
         if(MathClassify(MathAbs(outLayer[n]))==FP_NAN)
           {
            Print("Exception: unstable calculations!");//here is where I receive the bug
            ExpertRemove();
           }

         inLayer[n] = 1/(1+MathExp(outLayer[n])); //activation function (note: 1/(1+e^x) is sigmoid(-x); the conventional sigmoid uses MathExp(-outLayer[n]))
        }
     }

   for(ushort n=0; n<layersArrayRef[numLayers]; n++)
     {
      result[n]=inLayer[n];
     }
  }

Thank you all for your time!

If you have any idea where the bug might be, or any questions, I'll be glad to hear them and to look into anything that might help me solve it :-)

My bad, I was feeding -nan into the network.