Discussion of article "Data Science and Machine Learning — Neural Network (Part 01): Feed Forward Neural Network demystified"
Hi,
Very good article. Good job!
I've been reading about neural networks, but I still haven't figured out what advantages a neural network offers, or how it differs, compared to MT5's own optimization system.
For example: if I have a strategy using MACD and ATR, I can "train" it to find the best parameters with the MT5 optimization system, and I can also include a weighting system for the indicators or other data.
Both search for the best parameters or "weights" in the past to apply in the future.
Maybe I'm wrong and didn't get the whole idea.
Could you explain it, or give some examples?

The difference between optimization in the strategy tester and optimizing the neural network's parameters is the goal. In the strategy tester we tend to focus on the parameters that produce the most profitable outputs, or at least the trading results we want; that doesn't necessarily mean the neural network has a good model behind those results.
Some folks prefer to expose the weights and biases as input parameters of neural-net-based systems (feed-forward, roughly speaking), but I think optimizing with the strategy tester is basically finding random values that happened to give the best results (finding the optimal ones sounds like it depends on luck), whereas if we optimize with stochastic gradient descent, every step moves the model toward the least error in its predictions.
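To illustrate the distinction above, here is a minimal sketch in Python (not the article's MQL5 code, and a toy linear model rather than a real neural network): random search over parameters, which is roughly what tuning weights in the strategy tester amounts to, versus gradient descent, where each step explicitly reduces the prediction error.

```python
import random

# Toy data: the "true" relationship is y = 2*x + 1, which a good model should recover.
data = [(x, 2 * x + 1) for x in range(-5, 6)]

def mse(w, b):
    """Mean squared error of the linear model y_hat = w*x + b."""
    return sum((w * x + b - y) ** 2 for x, y in data) / len(data)

# 1) Random search: sample parameter sets and keep whichever happened to score best.
#    This is the "finding random values of the best results" approach.
random.seed(0)
best_w, best_b = random.uniform(-3, 3), random.uniform(-3, 3)
for _ in range(200):
    w, b = random.uniform(-3, 3), random.uniform(-3, 3)
    if mse(w, b) < mse(best_w, best_b):
        best_w, best_b = w, b

# 2) Gradient descent: every step moves the weights in the direction that
#    reduces the prediction error, instead of relying on a lucky draw.
w, b = 0.0, 0.0
lr = 0.01
for _ in range(200):
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * grad_w
    b -= lr * grad_b

print(f"random search   : w={best_w:.3f}, b={best_b:.3f}, mse={mse(best_w, best_b):.4f}")
print(f"gradient descent: w={w:.3f}, b={b:.3f}, mse={mse(w, b):.4f}")
```

Gradient descent converges close to w=2, b=1 because each update follows the error gradient; random search only lands near the optimum if one of its draws happens to be close.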
Thank you for your response.
I got your point.
Why did you start again from a first part?
Old article:
Data Science and Machine Learning (Part 01): Linear Regression

New article Data Science and Machine Learning — Neural Network (Part 01): Feed Forward Neural Network demystified has been published:
Many people love them, but few understand the operations behind Neural Networks. In this article I will try to explain everything that goes on behind closed doors of a feed-forward multi-layer perceptron in plain English.
The Hyperbolic Tangent Function.
It's given by the formula:
tanh(x) = (e^x - e^(-x)) / (e^x + e^(-x))
[Graph of the hyperbolic tangent function]
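As a quick check of the tanh formula, here is a small Python sketch (not the article's MQL5 code) that computes it directly from the exponentials and shows how it squashes inputs into the range (-1, 1):

```python
import math

def tanh(x):
    """Hyperbolic tangent from its definition: (e^x - e^-x) / (e^x + e^-x)."""
    return (math.exp(x) - math.exp(-x)) / (math.exp(x) + math.exp(-x))

# Outputs are squashed into (-1, 1) and centered at 0,
# which is why tanh is a popular activation function.
for x in (-2.0, 0.0, 2.0):
    print(f"tanh({x:+.1f}) = {tanh(x):+.4f}")
```

The hand-rolled version agrees with the library's `math.tanh` to floating-point precision.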
Author: Omega J Msigwa