Libraries: MLP Neural Network Class - page 2

 
elibrarius:

Comparing results on multiplication-table training, your network loses noticeably. In ALGLIB, a 2-5-1 network trained for 100 epochs (https://www.mql5.com/ru/forum/8265/page2) gives better answers than yours does after 1000000 epochs. The computation speed over 10000000000 epochs is not encouraging either.

Apparently the learning method is not very efficient. Still, thanks for your work: it is easier to understand a small codebase than ALGLIB. But eventually we will still need to move there.

That is the wrong comparison: in the ALGLIB variant all 100 examples are shown during training, which is why the answers are more correct. I think if you cut down the number of training examples in ALGLIB, its results will be no better.
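
For context, "all 100 examples" here means the full multiplication table, 1x1 through 10x10. A minimal sketch of how such a training set could be packed into the flat arrays that Learn() takes; the [0,1] scaling and the per-pattern layout of c_inp/c_res are assumptions for illustration, not taken from the library:

// Hypothetical training-set setup: all 100 multiplication-table
// patterns (1x1 .. 10x10) packed into flat arrays, 2 inputs and
// 1 output per pattern, scaled into [0,1] for a sigmoid output.
int    npat = 100;
double inp[], res[];
ArrayResize(inp, npat*2);
ArrayResize(res, npat);
int k = 0;
for(int a = 1; a <= 10; a++)
   for(int b = 1; b <= 10; b++)
     {
      inp[2*k]   = a/10.0;     // first factor
      inp[2*k+1] = b/10.0;     // second factor
      res[k]     = a*b/100.0;  // product
      k++;
     }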

 

Hi, I much appreciate that you took the time to make this native library!

I tried one-hot encoding and added ReLU to the hidden layer and a Softmax function to the output layer call. It works, and the outputs sum to 100%.

But the training got messed up, even after switching back to Sigmoid. Is iRprop unfit for classification? The network gives pretty much the same output no matter what the input is; it changes sometimes, but not much.
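
(For reference, a softmax that behaves as described could look like the sketch below; it is not part of CNetMLP. It also explains the "100% total": softmax outputs always sum to 1 by construction, whether or not the weights are trained well.)

// Minimal softmax sketch (illustrative, not the library's code):
// turns raw output-layer activations into probabilities summing to 1.
// The maximum is subtracted first for numerical stability.
void Softmax(double &out[])
  {
   int    n   = ArraySize(out);
   double mx  = out[ArrayMaximum(out)];
   double sum = 0.0;
   for(int i = 0; i < n; i++)
     {
      out[i] = MathExp(out[i] - mx);
      sum   += out[i];
     }
   for(int i = 0; i < n; i++)
      out[i] /= sum;
  }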


Second question: I see that Tanh and Sigmoid are treated differently in the Learn method. I don't understand that part of the code yet, but which treatment is appropriate for ReLU?
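
(The different treatment comes from backpropagation expressing each activation's derivative through the neuron's own output y. A sketch of the idea, with a ReLU branch added; this is illustrative only, not the library's actual code:)

// Activation derivatives expressed through the output y = f(x):
// this is why Sigmoid and Tanh need different formulas in Learn(),
// and what a ReLU branch could look like (an assumption, not library code).
double ActDeriv(double y, int type)
  {
   switch(type)
     {
      case 0: return y*(1.0 - y);           // sigmoid: f' = f*(1 - f)
      case 1: return 1.0 - y*y;             // tanh:    f' = 1 - f^2
      case 2: return (y > 0.0) ? 1.0 : 0.0; // ReLU:    f' = 1 if x > 0, else 0
     }
   return 0.0;
  }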

 
Thanks for this amazing contribution... Could you explain how I can determine the permissible error?
 
northedan:
Thanks for this amazing contribution... Could you explain how I can determine the permissible error?
//+------------------------------------------------------------------+
//| Learn                                                            |
//+------------------------------------------------------------------+
void CNetMLP::Learn(int c_npat,       // number of teaching patterns
                    double &c_inp[],  // input data
                    double &c_res[],  // output data
                    int c_nep,        // number of learning epochs
                    double c_err)     // permissible error
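
Judging by the parameter comments, c_err looks like an early-stopping threshold: training runs for at most c_nep epochs and stops once the epoch error falls below c_err, so you would pick it according to how precise the outputs need to be. A hypothetical usage sketch follows; the construction of the network and the exact error metric are assumptions, not shown in this excerpt:

// Hypothetical call sketch -- construction details are assumed.
CNetMLP net;            // assuming the network is set up elsewhere
int     npat = 100;     // number of teaching patterns
double  inp[], res[];   // flat arrays with npat input/output patterns
// ... fill inp[] and res[] ...
net.Learn(npat, inp, res,
          1000,         // train for at most 1000 epochs...
          0.0001);      // ...or stop once the error falls below 0.0001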