Libraries: MLP Neural Network Class

 

MLP Neural Network Class:

CNetMLP provides a multilayer perceptron (MLP).

A distinctive feature of the class is that the description of the input vector is separated from the description of the network structure, i.e. the two are not tied to each other.

The input vector can be of any size within reasonable limits. Input data should be normalized, i.e. scaled into the range -1..1 or 0..1. The activation function is chosen according to the data range: use the hyperbolic tangent for data in -1..1 and the sigmoid for data in 0..1.
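
For example, an input series can be scaled with a simple min-max transform before being fed to the network. The helper below is not part of the class; it is only a minimal sketch of the idea:

void NormalizeSeries(const double &src[],double &dst[])
  {
   int    n  =ArraySize(src);
   double lo =src[ArrayMinimum(src)];
   double hi =src[ArrayMaximum(src)];
   double rng=(hi-lo!=0.0) ? hi-lo : 1.0;   // guard against a flat series
   ArrayResize(dst,n);
   for(int i=0; i<n; i++)
      dst[i]=(src[i]-lo)/rng;               // 0..1, suitable for sigmoid
   // for tanh scale instead to -1..1: dst[i]=2.0*(src[i]-lo)/rng-1.0;
  }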

The network has a layer-by-layer, feed-forward structure. The structure is described by a one-dimensional array in which each element holds the number of neurons in the corresponding layer. The number of layers and neurons is not limited; the network may even consist of a single neuron. An example structure array is shown below.
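
For instance, a three-element structure array describes a three-layer network. The construction call below is hypothetical and given for illustration only; take the exact constructor signature from the class source:

int layers[]={5,3,1};   // three layers with 5, 3 and 1 neurons; the last layer gives 1 response
// Hypothetical construction call; the real signature in the class header may differ:
// CNetMLP *net=new CNetMLP(ArraySize(layers),layers,8 /*input vector size*/,1 /*activation type*/);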

Each neuron has one output and as many inputs as its place in the network dictates. If the network must give N responses, the last layer should contain N neurons. The learning algorithm is iRprop. Input and output training data are laid out in one-dimensional arrays, vector by vector. The learning process stops either after a given number of learning epochs or when the error falls below a permissible level.
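
As an illustration, training vectors are simply concatenated in flat arrays. The training call below is hypothetical and only shows the intent; take the actual method name and parameters from the class source:

// Two training patterns for a network with 3 inputs and 1 output,
// packed vector by vector into one-dimensional arrays:
double in[] ={0.1,0.4,0.9,     // pattern 1 inputs
              0.7,0.2,0.5};    // pattern 2 inputs
double out[]={1.0,             // pattern 1 target
              0.0};            // pattern 2 target
// Hypothetical call: stop after 1000 epochs or when the error
// falls below 0.001 (method name/signature may differ):
// net.Learn(2,in,out,1000,0.001);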

Author: Yury Kulikov 

 

Hello Yury,

How can I make an Expert Advisor using this MLP class?

 

Thanks. 

 
supercoder2006:

Hello Yury,

How can I make an Expert Advisor using this MLP class?

 

Thanks. 

Can someone make a simple Expert Advisor using the sample code?
 

Hi, much appreciated that you took the time to make this native library! 

I tried one-hot encoding and added ReLU to the hidden layers and a Softmax function to the output layer. It works and the outputs sum to 100%.

But the training got messed up, even after switching back to sigmoid. Is iRprop unfit for classification? The network gives pretty much the same output no matter what the input is; it changes sometimes, but not by much.


Second question: I see that tanh and sigmoid are treated differently in the Lern method. I don't understand that part of the code yet, but which of the two is appropriate for ReLU?
