When comparing results on multiplication-table training, your network loses noticeably. In ALGLIB, a 2,5,1 network trained for 100 epochs (https://www.mql5.com/ru/forum/8265/page2) gives better answers than yours after 1,000,000 epochs. The computation speed over 10,000,000,000 epochs is not encouraging either.
Apparently the learning method is not very efficient. Still, thanks for your work: it is easier to understand in a small codebase than in ALGLIB. But eventually we will still need to move to ALGLIB.
That is not a fair comparison: in the ALGLIB variant all 100 examples are presented during training, which is why its answers are more accurate. I think that if you reduce the number of training examples given to ALGLIB, its results will not be any better.
Hi, much appreciated that you took the time to make this native library!
I tried one-hot encoding, added ReLU to the hidden layer and a Softmax function to the output layer call. It works, and the Softmax outputs sum to 100% as expected.
But the training got messed up, even after switching back to Sigmoid. Is iRprop unfit for classification? The network gives pretty much the same output no matter what the input is; it changes sometimes, but not by much.
Second question: I see that Tanh and Sigmoid are treated differently in the Learn method. I don't understand that part of the code yet, but which treatment is appropriate for ReLU?
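For reference, the difference usually comes from the fact that each activation's derivative is expressed in terms of the neuron's output, and that expression is different for each function. A minimal MQL5-style sketch of the three derivatives, assuming the derivative is evaluated at the activated output y (this is not the library's actual Learn code):

// Derivatives expressed through the activated output y, as is common in backprop code.
double SigmoidDeriv(double y) { return y * (1.0 - y);         } // y = sigmoid(x)
double TanhDeriv(double y)    { return 1.0 - y * y;           } // y = tanh(x)
double ReluDeriv(double y)    { return (y > 0.0) ? 1.0 : 0.0; } // y = relu(x): slope 1 for positive outputs, 0 otherwise

Under this convention, ReLU follows the same pattern as the other two: its derivative is 1 where the output is positive and 0 everywhere else.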
Thanks for this amazing contribution... Could you explain how I can determine the permissible error?
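For context, "permissible error" usually means the error threshold at which training is considered good enough to stop. A hypothetical MQL5-style sketch, assuming an MSE-based stopping check (TrainOneEpoch is an illustrative placeholder, not part of the library):

// Hypothetical training loop: stop once the mean squared error over the
// training set falls below the chosen tolerance, or the epoch limit is hit.
void TrainUntilTolerance(double tolerance, int maxEpochs)
  {
   for(int epoch = 0; epoch < maxEpochs; epoch++)
     {
      double mse = TrainOneEpoch();  // hypothetical helper: runs one epoch, returns its MSE
      if(mse <= tolerance)
         break;                      // error is below the permissible threshold, stop training
     }
  }

The tolerance value itself depends on the task: for a regression problem like the multiplication table, it follows from how accurate you need the outputs to be.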