Libraries: MLP Neural Network Class

 

MLP Neural Network Class:

CNetMLP provides a multilayer perceptron (MLP).

A distinctive feature of the class is that the input vector and the network structure are described separately, i.e. their descriptions are not tied to each other.

The size of the input vector can be any value within reasonable limits. Input data should be normalized, i.e. kept within the range -1..1 or 0..1. The activation function should match the range of the data used: the hyperbolic tangent should be used for the -1..1 range, while the sigmoid is used for the 0..1 range.
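As an illustration of the normalization step (this helper is my own sketch, not part of CNetMLP), a simple min-max rescaling keeps the inputs inside the 0..1 range expected by the sigmoid:

```mql5
// Rescale an arbitrary data series into the 0..1 range (min-max normalization).
// Illustration only; not part of the CNetMLP class.
void NormalizeToUnitRange(double &data[])
  {
   int    n  =ArraySize(data);
   double lo =data[ArrayMinimum(data)];
   double hi =data[ArrayMaximum(data)];
   double rng=hi-lo;
   if(rng==0.0) rng=1.0;            // avoid division by zero on a constant series
   for(int i=0; i<n; i++)
      data[i]=(data[i]-lo)/rng;     // map [lo..hi] -> [0..1]
  }
```

For the -1..1 range used with the hyperbolic tangent, multiply the result by 2 and subtract 1.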

The network has a layer-by-layer, feed-forward structure. The structure is described by a one-dimensional array whose element values give the number of neurons in the corresponding layer. The number of layers and neurons is not limited. The network may consist of a single neuron.
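For example, mirroring the structure arrays used later in this thread (the input vector size is passed to the constructor separately, not encoded in this array):

```mql5
// Each element is the neuron count of one layer; the last element is the
// output layer, so this network produces a single response.
int snn[]={4,3,1};   // layer 1: 4 neurons, layer 2: 3 neurons, output layer: 1 neuron
```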

Each neuron has multiple inputs, determined by its place in the network, and one output. If you need the network to give out N responses, the last layer should contain N neurons. The learning algorithm is iRprop. Input and output training data are laid out in one-dimensional arrays, vector by vector. The learning process is limited either by the number of learning epochs or by a permissible error.
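A sketch of this "vector by vector" layout, assuming an input vector of size 2 and one output per example, and using the Learn call signature that appears in the code later in this thread:

```mql5
// Two training examples for a 2-input, 1-output network, packed back to back:
// inpdata = { x0_a, x1_a,  x0_b, x1_b }     outdata = { y_a, y_b }
double inpdata[]={0.1, 0.9,   0.8, 0.2};
double outdata[]={1.0,        0.0};
// net.Learn(2,inpdata,outdata,1000,1.0e-8); // 2 examples, at most 1000 epochs, target MSE 1e-8
```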

Author: Yury Kulikov 

 

The network behaves strangely.

During the learning process, the error first decreases, then starts to increase.

Is that the way it's designed? Or am I doing something wrong?

 

Test case result:

2011.12.25 12:42:52 TestMLPs (GBPUSD,H1) Input=0, 0 Output=0 Check=0
2011.12.25 12:42:52 TestMLPs (GBPUSD,H1) Input=0, 1 Output=0 Check=1
2011.12.25 12:42:52 TestMLPs (GBPUSD,H1) Input=1, 0 Output=0 Check=1
2011.12.25 12:42:52 TestMLPs (GBPUSD,H1) Input=1, 1 Output=0 Check=0
2011.12.25 12:42:52 TestMLPs (GBPUSD,H1) MSE=0.375 Epoch=1001
2011.12.25 12:42:52 TestMLPs (GBPUSD,H1) Example for input data range from 0 to 1
2011.12.25 12:42:52 TestMLPs (GBPUSD,H1) Input=-1, -1 Output=0 Check=-1
2011.12.25 12:42:52 TestMLPs (GBPUSD,H1) Input=-1, 1 Output=0 Check=1
2011.12.25 12:42:52 TestMLPs (GBPUSD,H1) Input=1, -1 Output=0 Check=1
2011.12.25 12:42:52 TestMLPs (GBPUSD,H1) Input=1, 1 Output=0 Check=-1
2011.12.25 12:42:52 TestMLPs (GBPUSD,H1) MSE=0.9375 Epoch=1001
2011.12.25 12:42:52 TestMLPs (GBPUSD,H1) Example for input data range from -1 to 1.

Is this how it should be? (The output is 0 for every case, and the error is huge.)

 

Hello Yury,

How can I make an Expert Advisor using this MLP class?

 

Thanks. 

 
supercoder2006:

Hello Yury,

How can I make an Expert Advisor using this MLP class?

 

Thanks. 

Can someone make a simple Expert Advisor using the sample code?
 

Maybe I'm doing something wrong, or the code doesn't work correctly.

I want to teach the neural network the multiplication table and then have it compute 2x3, so I do this:

#property copyright "Yurich"
//+------------------------------------------------------------------+
#include <class_NetMLP.mqh>

void OnStart(){
   double vector[2];     // Input vector
   int    snn[]={2,2,1}; // Network structure
   double out[1];        // Array for network responses

   double inpdata[];     // Array of input training data
   double outdata[];     // Array of output training data

   CNetMLP *net;
   int epoch=1000;
   int AFT=0;
   net=new CNetMLP(ArraySize(snn),snn,2,AFT);
   
   ArrayResize(inpdata,20);
   ArrayResize(outdata,10);
   
   for(int i=0;i<10;i++){
      for(int j=0;j<10;j++){
         inpdata[j*2] = (i+1)/10.0;
         inpdata[j*2+1] = (j+1)/10.0;
         outdata[j] = inpdata[j*2] * inpdata[j*2+1];
// Print("inpdata[",j*2,"]=",DoubleToString(inpdata[j*2])," / inpdata[",j*2+1,"]=",DoubleToString(inpdata[j*2+1]));
      }
      net.Learn(10,inpdata,outdata,epoch,1.0e-8);
      vector[0] = 0.2;
      vector[1] = 0.3;
      net.Calculate(vector,out);
      Print("MSE=",net.mse," , out =",out[0]*100);
   }
   
   Print("MSE=",net.mse,"  Epoch=",net.epoch);
}
//+------------------------------------------------------------------+

in the log I have:

2012.10.07 22:46:43     TestMLPs (EURUSD,D1)    1824 bytes of leaked memory
2012.10.07 22:46:43     TestMLPs (EURUSD,D1)    3 objects of type CLayerMLP left
2012.10.07 22:46:43     TestMLPs (EURUSD,D1)    1 object of type CNetMLP left
2012.10.07 22:46:43     TestMLPs (EURUSD,D1)    4 undeleted objects left
2012.10.07 22:46:43     TestMLPs (EURUSD,D1)    MSE=3.215934174267907e-005  Epoch=1001
2012.10.07 22:46:43     TestMLPs (EURUSD,D1)    MSE=3.215934174267907e-005, out=23.81042803092551
2012.10.07 22:46:43     TestMLPs (EURUSD,D1)    MSE=2.506540371444645e-006, out=22.233366741152
2012.10.07 22:46:43     TestMLPs (EURUSD,D1)    MSE=1.524148111498897e-006, out=20.42036901380543
2012.10.07 22:46:43     TestMLPs (EURUSD,D1)    MSE=1.519171222235065e-006, out=18.89110154263913
2012.10.07 22:46:43     TestMLPs (EURUSD,D1)    MSE=1.047462369320528e-006, out=16.63410153653344
2012.10.07 22:46:43     TestMLPs (EURUSD,D1)    MSE=9.477321159986828e-007, out=14.24605748950336
2012.10.07 22:46:42     TestMLPs (EURUSD,D1)    MSE=6.585902193183645e-007, out=11.66913117122246
2012.10.07 22:46:42     TestMLPs (EURUSD,D1)    MSE=2.237858920539329e-007, out=8.906822741170629
2012.10.07 22:46:42     TestMLPs (EURUSD,D1)    MSE=2.540333890146069e-007, out=6.033412338430783
2012.10.07 22:46:42     TestMLPs (EURUSD,D1)    MSE=2.26424262746638e-007, out=2.942888766617119
 
IgorM:

Maybe I'm doing something wrong, or the code doesn't work correctly.

I want to teach the neural network the multiplication table and then have it compute 2x3, so I do this:

In fact, you are training the network with only 10 examples. If you want to pass all 100 examples to the network, you need to move the training out of the data-preparation loop. It is also important to choose the number of neurons and the stopping criterion properly - 1000 epochs is too few.

#include <class_NetMLP.mqh>
void OnStart()
{
   double vector[2];   // Input vector
   int snn[]={2,2,1};  // Network structure
   double out[1];      // Array for network responses
   double inpdata[];   // Array of input training data
   double outdata[];   // Array of output training data
   // network creation
   CNetMLP *net;
   int epoch=1000000;
   int AFT=0;
   net=new CNetMLP(ArraySize(snn),snn,2,AFT);
   // preparing data for training
   ArrayResize(inpdata,200);
   ArrayResize(outdata,100);
   int m=0, k=0;
   for(int i=1; i<=10; i++)
      for(int j=1; j<=10; j++)
      {
         inpdata[m++]=i/10.0;
         inpdata[m++]=j/10.0;
         outdata[k++]=(i*j)/100.0;
      }
   // network training
   net.Learn(100,inpdata,outdata,epoch,1.0e-8);
   Print("MSE=",net.mse,"  Epoch=",net.epoch);
   // network check
   for(int i=1; i<=10; i++)
   {
       vector[0]=i/10.0;
       vector[1]=i/10.0;
       net.Calculate(vector,out);
       Print(i,"*",i,"=",DoubleToString(out[0]*100,1));
   }
   // deleting the network
   delete net;
}
2012.10.08 13:46:59     test_nn (EURUSD,M15)    MSE=4.22005256254196e-005  Epoch=1000001
2012.10.08 13:46:59     test_nn (EURUSD,M15)    1*1=1.3
2012.10.08 13:46:59     test_nn (EURUSD,M15)    2*2=3.4
2012.10.08 13:46:59     test_nn (EURUSD,M15)    3*3=7.6
2012.10.08 13:46:59     test_nn (EURUSD,M15)    4*4=14.8
2012.10.08 13:46:59     test_nn (EURUSD,M15)    5*5=25.0
2012.10.08 13:46:59     test_nn (EURUSD,M15)    6*6=37.2
2012.10.08 13:46:59     test_nn (EURUSD,M15)    7*7=50.2
2012.10.08 13:46:59     test_nn (EURUSD,M15)    8*8=64.3
2012.10.08 13:46:59     test_nn (EURUSD,M15)    9*9=82.2
2012.10.08 13:46:59     test_nn (EURUSD,M15)    10*10=96.9
 
Yurich:

In fact, you are training the network with only 10 examples. If you want to pass all 100 examples to the network, you need to move the training out of the data-preparation loop. It is also important to choose the number of neurons and the stopping criterion properly - 1000 epochs is too few.

Thanks, I've figured it out; I'll experiment with your code some more.

Just one request:

CNetMLP *net=new CNetMLP(number of layers, network structure array, input vector size, activation function type: 0 - sigmoid, 1 - hyperbolic tangent).

do it this way: CNetMLP *net=new CNetMLP(network structure array, activation function type: 0 - sigmoid, 1 - hyperbolic tangent).

i.e. have your code derive the "number of layers" and "input vector size" parameters from the network structure array itself; IMHO this would make the code clearer and more readable.
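Such a wrapper could be built on top of the existing constructor without touching the class itself. The helper below is hypothetical (CreateNet is not part of the library) and assumes one possible convention: the first element of the extended array holds the input vector size, and the remaining elements describe the layers.

```mql5
// Hypothetical factory: derives "number of layers" and "input vector size"
// from an extended structure array {inputs, layer1, ..., layerN}.
CNetMLP *CreateNet(const int &structure[],int aft)
  {
   int layers=ArraySize(structure)-1;   // everything after the first element
   int snn[];
   ArrayResize(snn,layers);
   for(int i=0; i<layers; i++)
      snn[i]=structure[i+1];
   return(new CNetMLP(layers,snn,structure[0],aft));
  }
```

Usage would then be, e.g., `int structure[]={2,2,1}; CNetMLP *net=CreateNet(structure,0); // 0 = sigmoid`.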

 

Hello Yuri,

first of all, many thanks for sharing this piece of code with the community.

I used your code to build an Expert Advisor that predicts chart values, but there seems to be a bug in class_netmlp.mqh.

As soon as I try to use 3 or more input values, the output no longer seems quite right... can you help me fix this problem?

 
please see the pictures
Files:
example1.jpg  67 kb
example2.jpg  39 kb
 

Comparing results on the multiplication-table task, your network loses noticeably. With ALGLIB, a 2-5-1 network trained for 100 epochs (https://www.mql5.com/ru/forum/8265/page2) gives better answers than yours after 1000000 epochs. Nor is the speed of computing 10000000000 epochs pleasing.

Apparently the learning method is not very efficient. Still, thanks for your work - small code is easier to understand than ALGLIB. But eventually we will need to move there anyway.
