Discussion of article "Data Science and Machine Learning — Neural Network (Part 02): Feed forward NN Architectures Design"

 

New article Data Science and Machine Learning — Neural Network (Part 02): Feed forward NN Architectures Design has been published:

There are a few minor things to cover on the feed-forward neural network before we are through, its design being one of them. Let's see how we can build and design a neural network that is flexible in its inputs, the number of hidden layers, and the number of nodes in each layer of the network.

We all know that hard-coded models fall flat when we want to optimize for new parameters: the whole procedure is time-consuming and causes headaches, back pain, and so on. (It's not worth it.)

If we take a closer look at the operations behind a neural network, you'll notice that each input gets multiplied by the weight assigned to it, the products are summed, and the bias is added. This can be handled well by matrix operations.

[Image: neural network matrix multiplication. Author: Omega J Msigwa]
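As a rough, hypothetical illustration of that idea (not the article's actual code), a single layer can be computed as weights times inputs plus bias; all names and numbers below are made up for the example:

//+------------------------------------------------------------------+
//| LayerDemo.mq5 - illustrative sketch only, not from the article   |
//+------------------------------------------------------------------+
void OnStart()
  {
   // 2 inputs feeding 3 neurons: the weights form a 3x2 matrix (made-up values)
   double W[3][2] = {{0.10,0.40},
                     {0.25,0.05},
                     {0.30,0.90}};
   double x[2]    = {1.0,0.5};        // input vector
   double b[3]    = {0.1,0.2,0.3};    // one bias per neuron
   double z[3];                       // pre-activation output of the layer

   // z = W*x + b, i.e. the matrix operation described above
   for(int i=0; i<3; i++)
     {
      z[i] = b[i];
      for(int j=0; j<2; j++)
         z[i] += W[i][j]*x[j];
      PrintFormat("neuron %d pre-activation = %.4f",i,z[i]);
     }
  }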

 

Imho, in this series the material is presented much better than, for example, in the "Neural networks made easy" series...

A question to the admins: is it possible to insert links to paid libraries in the code?

//+------------------------------------------------------------------+
//|                                                   NeuralNets.mqh |
//|                                     Copyright 2022, Omega Joctan. |
//|                         https://www.mql5.com/en/users/omegajoctan |
//+------------------------------------------------------------------+
#property copyright "Copyright 2022, Omega Joctan."
#property link      "https://www.mql5.com/en/users/omegajoctan"
//+------------------------------------------------------------------+

#import "The_Matrix.ex5" //sourcecode here >>> https://www.mql5.com/en/market/product/81533
   void MatrixMultiply(double &A[],double &B[],double &AxBMatrix[], int colsA,int rowsB,int &new_rows,int &new_cols);
   void CSVToMatrix(double &Matrix[],int &mat_rows,int &mat_cols,string csv_file,string sep=",");
   void MatrixPrint(double &Matrix[],int cols,int digits=5);
#import

bool m_debug = true;
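(As an aside, here is a hedged sketch of how the imported helpers above might be called, going only by the signatures shown; the array values are placeholders:)

void DemoMatrixLibUsage()
  {
   // A is 2x3 and B is 3x2, both flattened row by row (example values)
   double A[] = {1,2,3, 4,5,6};
   double B[] = {7,8, 9,10, 11,12};
   double C[];                      // result buffer, filled by the library
   int rows_c=0, cols_c=0;

   // colsA=3 and rowsB=3 must match for the product to be defined
   MatrixMultiply(A,B,C,3,3,rows_c,cols_c);
   MatrixPrint(C,cols_c);           // expect a 2x2 result
  }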
 
Denis Kirichenko #:

Imho, in this series the material is presented much better than, for example, in the "Neural networks made easy" series...

A question to the admins: is it possible to insert links to paid libraries in the code?

No, it's not possible. I forgot to remove the link.

 

There is such a thing in the article:

Ok so here is the function responsible for training the neural network.

void CNeuralNets::train_feedforwardMLP(double &XMatrix[],int epochs=1)

I am purposely giving an excerpt in the language in which the author wrote the article.

I am embarrassed to ask: where does the learning actually take place? Imho, only forward propagation is happening there....

That's funny:

CNeuralNets::CNeuralNets(fx HActivationFx, fx OActivationFx, int inputs, int &NodesHL[], int outputs=NULL, bool SoftMax=false)
   {
   e = 2.718281828;
    ...
   }

And what if we write it like this? )))

CNeuralNets::CNeuralNets(fx HActivationFx, fx OActivationFx, int inputs, int &NodesHL[], int outputs=NULL, bool SoftMax=false)
   {
    e = M_E;
   ...
   }
 

When I saw that there is a section in the article:

Matrices to the rescue

If you suddenly need to change the parameters of a model with static code, optimisation can take a lot of time - it's a headache, backache and other troubles.

I thought that finally someone would describe ML in terms of native matrices. But the headache from home-made matrices in the form of a one-dimensional array à la XMatrix[] only increased....

MQL5 Documentation: Language Basics / Data Types / Matrices and Vectors (www.mql5.com)
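For comparison, a minimal sketch of the same kind of multiplication with the built-in matrix type (arbitrary values, just to show that no hand-rolled 1-D bookkeeping is needed):

void NativeMatrixDemo()
  {
   matrix W = {{0.10,0.40},
               {0.25,0.05},
               {0.30,0.90}};        // 3x2 weight matrix, example values
   matrix x = {{1.0},{0.5}};        // 2x1 input column
   matrix z = W.MatMul(x);          // 3x1 product, computed by the built-in type
   Print("z = ",z[0][0],", ",z[1][0],", ",z[2][0]);
  }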
 
What does this mean:
int hlnodes[3] = {4,6,1};

4 inputs, 1 hidden layer with 6 neurons, and one output?


You don't explain the most important thing well: how to declare the architecture of the model.

How many hidden layers can I use?

How do I define how many neurons each hidden layer has?
Example: I want a network with 8 inputs,
3 hidden layers with 16, 8, and 4 neurons,
and 2 outputs.
Is this possible?
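Judging only from the constructor signature quoted earlier in this thread, NodesHL[] appears to hold the hidden-layer sizes while the inputs and outputs are separate arguments, so a declaration along these lines might work. This is a hedged guess, and the activation values passed here are placeholders, not names confirmed by the article:

// Assumes NeuralNets.mqh from the article is included; HIDDEN_ACT and OUTPUT_ACT
// are stand-ins for whatever fx enum values the header actually defines.
int hlnodes[3] = {16,8,4};     // three hidden layers with 16, 8 and 4 neurons
int inputs     = 8;            // 8 input features
int outputs    = 2;            // 2 output neurons

CNeuralNets *nn = new CNeuralNets(HIDDEN_ACT,OUTPUT_ACT,inputs,hlnodes,outputs,true);
// ... use nn, then free it when done:
// delete nn;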