Array out of range (Troubleshooting help)

 

Hello everyone,

When I start the EA, it loads briefly but is removed from the chart at the same moment. The error is: array out of range in "DeepFeedForward.mqh" (94,25). (DeepFeedForward is the MQH file.)

Unfortunately I am still a beginner at coding, so I would be really happy if someone could help me with this problem. As far as I can tell, the error occurs in the MQH file.

The Expert Advisor is not finished yet, so some lines might look a bit odd!

Thank you in advance for the help.


MQH File:

#property copyright ""
#property link      ""
#property version   ""

#define SIZEI 5
#define SIZEA 5
#define SIZEB 6
#define SIZEC 5 

//+------------------------------------------------------------------+
//|                                                                  |
//+------------------------------------------------------------------+
class DeepFeedForward
  {
private:

   int               numInput;
   int               numHiddenA;
   int               numHiddenB;
   int               numHiddenC;
   int               numOutput;

   double            inputs[];

   double            iaWeights[][SIZEI]; //Weights Input to HiddenA
   double            abWeights[][SIZEA]; //Weights HiddenA to HiddenB
   double            bcWeights[][SIZEB]; //Weights HiddenB to HiddenC
   double            coWeights[][SIZEC]; //Weights HiddenC to Outputs

   double            aBiases[];
   double            bBiases[];
   double            cBiases[];
   double            oBiases[];

   double            aOutputs[];
   double            bOutputs[];
   double            cOutputs[];
   double            outputs[];

public:

                     DeepFeedForward(int _numInput,int _numHiddenA,int _numHiddenB,int _numHiddenC, int _numOutput)
     {

      this.numInput= _numInput;
      this.numHiddenA = _numHiddenA;
      this.numHiddenB = _numHiddenB;
      this.numHiddenC = _numHiddenC;
      this.numOutput= _numOutput;

      ArrayResize(inputs,numInput);

      ArrayResize(iaWeights,numInput);
      ArrayResize(abWeights,numHiddenA);
      ArrayResize(bcWeights,numHiddenB);
      ArrayResize(coWeights,numHiddenC);

      ArrayResize(aBiases,numHiddenA);
      ArrayResize(bBiases,numHiddenB);
      ArrayResize(cBiases,numHiddenC);
      ArrayResize(oBiases,numOutput);

      ArrayResize(aOutputs,numHiddenA);
      ArrayResize(bOutputs,numHiddenB);
      ArrayResize(cOutputs,numHiddenC);
      ArrayResize(outputs,numOutput);
     }

   void SetWeights(double &weights[])
     {
      int numWeights=(numInput*numHiddenA)+numHiddenA+(numHiddenA*numHiddenB)+numHiddenB+(numHiddenB*numHiddenC)+numHiddenC+(numHiddenC*numOutput)+numOutput;
      if(ArraySize(weights)!=numWeights)
        {
         Print("Bad weights length");
         return;
        }

      int k=0;

      for(int i=0; i<numInput;++i)
         for(int j=0; j<numHiddenA;++j)
            iaWeights[i][j]=NormalizeDouble(weights[k++],2);

      for(int i=0; i<numHiddenA;++i)
         aBiases[i]=NormalizeDouble(weights[k++],2);

      for(int i=0; i<numHiddenA;++i)
         for(int j=0; j<numHiddenB;++j)
            abWeights[i][j]=NormalizeDouble(weights[k++],2);

      for(int i=0; i<numHiddenB;++i)
         bBiases[i]=NormalizeDouble(weights[k++],2);

      for(int i=0; i<numHiddenB;++i)
         for(int j=0; j<numHiddenC;++j)
            bcWeights[i][j]=NormalizeDouble(weights[k++],2);
            
      for(int i=0; i<numHiddenC;++i)
         cBiases[i]=NormalizeDouble(weights[k++],2);

      for(int i=0; i<numHiddenC;++i)
         for(int j=0; j<numOutput;++j)
            coWeights[i][j]=NormalizeDouble(weights[k++],2);

      for(int i=0; i<numOutput;++i)
         oBiases[i]=NormalizeDouble(weights[k++],2);
     }

   void ComputeOutputs(double &xValues[],double &yValues[])
     {
      double aSums[]; // hidden A nodes sums scratch array
      double bSums[]; // hidden B nodes sums scratch array
      double cSums[]; // hidden C nodes sums scratch array
      double oSums[]; // output nodes sums

      ArrayResize(aSums,numHiddenA);
      ArrayFill(aSums,0,numHiddenA,0);
      ArrayResize(bSums,numHiddenB);
      ArrayFill(bSums,0,numHiddenB,0);
      ArrayResize(cSums,numHiddenC);
      ArrayFill(cSums,0,numHiddenC,0);
      ArrayResize(oSums,numOutput);
      ArrayFill(oSums,0,numOutput,0);

      int size=ArraySize(xValues);

      for(int i=0; i<size;++i) // copy x-values to inputs
         this.inputs[i]=xValues[i];

      for(int j=0; j<numHiddenA;++j) // compute sum of (ia) weights * inputs
         for(int i=0; i<numInput;++i)
            aSums[j]+=this.inputs[i]*this.iaWeights[i][j]; // note +=

      for(int i=0; i<numHiddenA;++i) // add biases to a sums
         aSums[i]+=this.aBiases[i];

      for(int i=0; i<numHiddenA;++i) // apply activation
         this.aOutputs[i]=HyperTanFunction(aSums[i]); // hard-coded

      for(int j=0; j<numHiddenB;++j) // compute sum of (ab) weights * a outputs = local inputs
         for(int i=0; i<numHiddenA;++i)
            bSums[j]+=aOutputs[i]*this.abWeights[i][j]; // note +=

      for(int i=0; i<numHiddenB;++i) // add biases to b sums
         bSums[i]+=this.bBiases[i];

      for(int i=0; i<numHiddenB;++i) // apply activation
         this.bOutputs[i]=HyperTanFunction(bSums[i]); // hard-coded
         
       for(int j=0; j<numHiddenC;++j) // compute sum of (bc) weights * b outputs = local inputs
         for(int i=0; i<numHiddenB;++i)
            cSums[j]+=bOutputs[i]*this.bcWeights[i][j]; // note +=

      for(int i=0; i<numHiddenC;++i) // add biases to c sums
         cSums[i]+=this.cBiases[i];

      for(int i=0; i<numHiddenC;++i) // apply activation
         this.cOutputs[i]=HyperTanFunction(cSums[i]); // hard-coded

      for(int j=0; j<numOutput;++j) // compute sum of (co) weights * c outputs = local inputs
         for(int i=0; i<numHiddenC;++i)
            oSums[j]+=cOutputs[i]*coWeights[i][j];

      for(int i=0; i<numOutput;++i) // add biases to input-to-hidden sums
         oSums[i]+=oBiases[i];

      for(int i=0; i<numOutput;++i) // apply activation
         this.outputs[i]=HyperTanFunction(oSums[i]); // hard-coded

      ArrayCopy(yValues,this.outputs);

     }

   double HyperTanFunction(double x)
     {
      if(x<-20.0) return -1.0; // approximation is correct to 30 decimals
      else if(x > 20.0) return 1.0;
      else return (1-exp(-2*x))/(1+exp(-2*x));//MathTanh(x);
     }
     
   double SigmoidalFunction(double x)
     {
      if(x<-20.0) return -1.0; // approximation is correct to 30 decimals
      else if(x > 20.0) return 1.0;
      else return 1/(1+exp(-x));//Sigmoidal(x);
     }     
  };
//+------------------------------------------------------------------+


EA:

#property copyright ""
#property link      ""
#property version   ""
#property strict
//+------------------------------------------------------------------+
//| Expert initialization function                                   |
//+------------------------------------------------------------------+
#include <Trade\Trade.mqh>        //include the library for execution of trades
#include <Trade\PositionInfo.mqh> //include the library for obtaining information on positions
#include <DeepFeedForward.mqh> 

int numInput= 5;
int numHiddenA = 5;
int numHiddenB = 6;
int numHiddenC = 5;
int numOutput= 1;

DeepFeedForward dff(numInput,numHiddenA,numHiddenB,numHiddenC,numOutput);


input double Lot=0.1;

input long order_magic=6390;//MagicNumber

//--- weight values
input double w0=1.0;
input double w1=1.0;
input double w2=1.0;
input double w3=1.0;
input double w4=1.0;
input double w5=1.0;
input double w6=1.0;
input double w7=1.0;
input double w8=1.0;
input double w9=1.0;
input double w10=1.0;
input double w11=1.0;
input double w12=1.0;
input double w13=1.0;
input double w14=1.0;
input double w15=1.0;
input double w16=1.0;
input double w17=1.0;
input double w18=1.0;
input double w19=1.0;
input double w20=1.0;
input double w21=1.0;
input double w22=1.0;
input double w23=1.0;
input double w24=1.0;
input double b0=1.0;
input double b1=1.0;
input double b2=1.0;
input double b3=1.0;
input double b4=1.0;
input double w25=1.0;
input double w26=1.0;
input double w27=1.0;
input double w28=1.0;
input double w29=1.0;
input double w30=1.0;
input double w31=1.0;
input double w32=1.0;
input double w33=1.0;
input double w34=1.0;
input double w35=1.0;
input double w36=1.0;
input double w37=1.0;
input double w38=1.0;
input double w39=1.0;
input double w40=1.0;
input double w41=1.0;
input double w42=1.0;
input double w43=1.0;
input double w44=1.0;
input double w45=1.0;
input double w46=1.0;
input double w47=1.0;
input double w48=1.0;
input double w49=1.0;
input double w50=1.0;
input double w51=1.0;
input double w52=1.0;
input double w53=1.0;
input double w54=1.0;
input double b5=1.0;
input double b6=1.0;
input double b7=1.0;
input double b8=1.0;
input double b9=1.0;
input double b10=1.0;
input double w55=1.0;
input double w56=1.0;
input double w57=1.0;
input double w58=1.0;
input double w59=1.0;
input double w60=1.0;
input double w61=1.0;
input double w62=1.0;
input double w63=1.0;
input double w64=1.0;
input double w65=1.0;
input double w66=1.0;
input double w67=1.0;
input double w68=1.0;
input double w69=1.0;
input double w70=1.0;
input double w71=1.0;
input double w72=1.0;
input double w73=1.0;
input double w74=1.0;
input double w75=1.0;
input double w76=1.0;
input double w77=1.0;
input double w78=1.0;
input double w79=1.0;
input double w80=1.0;
input double w81=1.0;
input double w82=1.0;
input double w83=1.0;
input double w84=1.0;
input double b11=1.0;
input double b12=1.0;
input double b13=1.0;
input double b14=1.0;
input double b15=1.0;
input double w85=1.0;
input double w86=1.0;
input double w87=1.0;
input double w88=1.0;
input double w89=1.0;
input double b16=1.0;


double            _xValues[5];   // array for storing inputs
double            weight[107];   // array for storing weights

double            out;          // variable for storing the output of the neuron

string            my_symbol;    // variable for storing the symbol
ENUM_TIMEFRAMES   my_timeframe; // variable for storing the time frame
double            lot_size;     // variable for storing the minimum lot size of the transaction to be performed

CTrade            m_Trade;      // entity for execution of trades
CPositionInfo     m_Position;   // entity for obtaining information on positions
//+------------------------------------------------------------------+
//|                                                                  |
//+------------------------------------------------------------------+
int OnInit()
  {
   my_symbol=Symbol();
   my_timeframe=PERIOD_CURRENT;
   lot_size=Lot;
   m_Trade.SetExpertMagicNumber(order_magic);

   weight[0]=w0;
   weight[1]=w1;
   weight[2]=w2;
   weight[3]=w3;
   weight[4]=w4;
   weight[5]=w5;
   weight[6]=w6;
   weight[7]=w7;
   weight[8]=w8;
   weight[9]=w9;
   weight[10]=w10;
   weight[11]=w11;
   weight[12]=w12;
   weight[13]=w13;
   weight[14]=w14;
   weight[15]=w15;
   weight[16]=w16;
   weight[17]=w17;
   weight[18]=w18;
   weight[19]=w19;
   weight[20]=w20;
   weight[21]=w21;
   weight[22]=w22;
   weight[23]=w23;
   weight[24]=w24;
   weight[25]=b0;
   weight[26]=b1;
   weight[27]=b2;
   weight[28]=b3;
   weight[29]=b4;
   weight[30]=w25;
   weight[31]=w26;
   weight[32]=w27;
   weight[33]=w28;
   weight[34]=w29;
   weight[35]=w30;
   weight[36]=w31;
   weight[37]=w32;
   weight[38]=w33;
   weight[39]=w34;
   weight[40]=w35;
   weight[41]=w36;
   weight[42]=w37;
   weight[43]=w38;
   weight[44]=w39;
   weight[45]=w40;
   weight[46]=w41;
   weight[47]=w72;
   weight[48]=w43;
   weight[49]=w44;
   weight[50]=w45;
   weight[51]=w46;
   weight[52]=w47;
   weight[53]=w48;
   weight[54]=w49;
   weight[55]=w50;
   weight[56]=w51;
   weight[57]=w52;
   weight[58]=w53;
   weight[59]=w54;
   weight[60]=b5;
   weight[61]=b6;
   weight[62]=b7;
   weight[63]=b8;
   weight[64]=b9;
   weight[65]=b10;
   weight[66]=w55;
   weight[67]=w56;
   weight[68]=w57;
   weight[69]=w58;
   weight[70]=w59;
   weight[71]=w60;
   weight[72]=w61;
   weight[73]=w62;
   weight[74]=w63;
   weight[75]=w64;
   weight[76]=w65;
   weight[77]=w66;
   weight[78]=w67;
   weight[79]=w68;
   weight[80]=w69;
   weight[81]=w70;
   weight[82]=w71;
   weight[83]=w72;
   weight[84]=w73;
   weight[85]=w74;
   weight[86]=w75;
   weight[87]=w76;
   weight[88]=w77;
   weight[89]=w78;
   weight[90]=w79;
   weight[91]=w80;
   weight[92]=w81;
   weight[93]=w82;
   weight[94]=w83;
   weight[95]=w84;
   weight[96]=b11;
   weight[97]=b12;
   weight[98]=b13;
   weight[99]=b14;
   weight[100]=b15;
   weight[101]=w85;
   weight[102]=w86;
   weight[103]=w87;
   weight[104]=w88;
   weight[105]=w89;
   weight[106]=b15;

//--- return 0, initialization complete
   return(0);
  }
//+------------------------------------------------------------------+
//| Expert deinitialization function                                 |
//+------------------------------------------------------------------+
void OnDeinit(const int reason)
  {

  }
//+------------------------------------------------------------------+
//| Expert tick function                                             |
//+------------------------------------------------------------------+
void OnTick()
  {

   MqlRates rates[];
   ArraySetAsSeries(rates,true);
   int copied=CopyRates(_Symbol,0,1,5,rates);

   dff.SetWeights(weight);

   double yValues[];
   dff.ComputeOutputs(_xValues,yValues);

//--- if the output value of the neuron is more than 0
   if(yValues[0]>0)
     {
      if(m_Position.Select(my_symbol))//check if there is an open position
        {
         if(m_Position.PositionType()==POSITION_TYPE_SELL) m_Trade.PositionClose(my_symbol);//Close the opposite position if exists
         if(m_Position.PositionType()==POSITION_TYPE_BUY) return;
        }
      m_Trade.Buy(lot_size,my_symbol);//open a Long position
     }
//--- if the output value of the neuron is less than 0
   if(yValues[0]<0)
     {
      if(m_Position.Select(my_symbol))//check if there is an open position
        {
         if(m_Position.PositionType()==POSITION_TYPE_BUY) m_Trade.PositionClose(my_symbol);//Close the opposite position if exists
         if(m_Position.PositionType()==POSITION_TYPE_SELL) return;
        }
      m_Trade.Sell(lot_size,my_symbol);//open a Short position
     }

   if(yValues[0]==0)
     {
      m_Trade.PositionClose(my_symbol);//close any position

     }
  }
//+------------------------------------------------------------------+
 
IXO30I:

When I start the EA, it loads briefly but is removed from the chart at the same moment. The error is: array out of range in "DeepFeedForward.mqh" (94,25). (DeepFeedForward is the MQH file.)

The line that triggered this error is quoted below, so one of the array indexes (i.e. i, j or k) used to access the arrays (i.e. abWeights[][] or weights[]) must have gone out of range:

      for(int i=0; i<numHiddenA;++i)
         for(int j=0; j<numHiddenB;++j)
            abWeights[i][j]=NormalizeDouble(weights[k++],2);

If you check the sizes of abWeights[][] and weights[] at the time of the crash, they are [5][5] and [107] respectively, which means the problem is that j reaches the value 5: since numHiddenB is 6, you iterate j from 0 to 5, but the second dimension of abWeights[][] only has valid indexes 0 to 4.

 
Seng Joo Thio:

The line that triggered this error is quoted below, so one of the array indexes (i.e. i, j or k) used to access the arrays (i.e. abWeights[][] or weights[]) must have gone out of range:

If you check the sizes of abWeights[][] and weights[] at the time of the crash, they are [5][5] and [107] respectively, which means the problem is that j reaches the value 5: since numHiddenB is 6, you iterate j from 0 to 5, but the second dimension of abWeights[][] only has valid indexes 0 to 4.

Thanks a lot! As you correctly recognized, the error disappears when I set numHiddenB to 5. Unfortunately, that doesn't really solve my problem, because the value must be 6.

Although I've learned a lot about arrays from your answer, I still haven't really gotten anywhere yet.

Do you have any idea what I need to change so that it accepts the value 6?

You would help me a lot with that!
 
IXO30I:
Thanks a lot! As you correctly recognized, the error disappears when I set numHiddenB to 5. Unfortunately, that doesn't really solve my problem, because the value must be 6.
Although I've learned a lot about arrays from your answer, I still haven't really gotten anywhere yet.
Do you have any idea what I need to change so that it accepts the value 6?
You would help me a lot with that!

I looked through the code again and noticed that it won't run properly even if the array is fixed to accept the 6th value, because another array, _xValues[], is used for the computation but is never filled with any data.

So my suggestion is: if you can describe the execution logic that you're after, go to the freelance section and get someone to add whatever functions are obviously missing. Otherwise, fixing the array index won't get you anywhere.

 
IXO30I:
Thanks a lot! As you correctly recognized, the error disappears when I set numHiddenB to 5. Unfortunately, that doesn't really solve my problem, because the value must be 6.

Although I've learned a lot about arrays from your answer, I still haven't really gotten anywhere yet.

Do you have any idea what I need to change so that it accepts the value 6?

You would help me a lot with that!

Your problem is bad design: you are using a hardcoded value to declare your array, then a variable with a different value to loop through it.

You are using a macro to declare the size of your array :

   double            abWeights[][SIZEA]; //Weights HiddenA to HiddenB

SIZEA is hardcoded as :

#define SIZEA 5

So valid indexes for the second dimension of abWeights[][] run from 0 to 4. But you are using the variable numHiddenB to loop through that dimension:

int numHiddenB = 6;

...

      for(int i=0; i<numHiddenA;++i)
         for(int j=0; j<numHiddenB;++j)
            abWeights[i][j]=NormalizeDouble(weights[k++],2);
 

By the way, although it wasn't your question: if you initialize all weights and biases with 1.0, the network will never learn anything. You need to assign random values, e.g. using something like weight[i]=MathRand()/32767.0. Initializing the weights and biases then becomes a simple loop of just a few lines, which is also far more flexible than hardcoding 107 separate input variables like you do.

The principle behind backpropagation in a neural network is to compute the error between the output result and the correct (label) result and propagate it backwards, correcting each weight with a "penalty" that depends on how much that individual weight contributed to the overall error. This way the overall error (expressed by the cost function) will hopefully, step by step, reach a minimum.

If all weights are the same, they will always be corrected by the same amount (the same "penalty"), and the weight matrix won't learn anything.

Please feel free to ask if there's anything you don't understand.

Chris.
