Neural Networks - page 28

 

A simple neural net implemented as an object, and the same net implemented with matrix-vector operations, for comparison of the two structures.
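The original code is not shown here, so what follows is only a minimal sketch of what such a comparison might look like in MQL5 (my assumption of the intent): the same one-hidden-layer forward pass written once as a class with plain arrays and explicit loops, and once with the built-in matrix and vector types available in recent MQL5 builds. The names CSimpleNet and ForwardMatrix, the sigmoid activation and the random initialisation are illustrative choices, not the posted code.

//--- object style: a small net as a class with plain arrays and explicit loops
class CSimpleNet
  {
private:
   int      m_inputs;
   int      m_hidden;
   double   m_w1[];        // hidden-layer weights, size m_hidden*m_inputs
   double   m_b1[];        // hidden-layer biases
   double   m_w2[];        // output weights, one per hidden neuron
   double   m_b2;          // output bias
   double   Sigmoid(const double v) { return(1.0/(1.0+MathExp(-v))); }
public:
   void     Init(const int inputs,const int hidden)
     {
      m_inputs=inputs;
      m_hidden=hidden;
      ArrayResize(m_w1,hidden*inputs);
      ArrayResize(m_b1,hidden);
      ArrayResize(m_w2,hidden);
      for(int i=0;i<hidden*inputs;i++) m_w1[i]=MathRand()/32767.0-0.5;
      for(int j=0;j<hidden;j++) { m_b1[j]=0.0; m_w2[j]=MathRand()/32767.0-0.5; }
      m_b2=0.0;
     }
   double   Forward(const double &x[])
     {
      double out=m_b2;
      for(int j=0;j<m_hidden;j++)
        {
         double s=m_b1[j];
         for(int i=0;i<m_inputs;i++)
            s+=m_w1[j*m_inputs+i]*x[i];
         out+=m_w2[j]*Sigmoid(s);          // per-neuron activation, explicit loop
        }
      return(out);
     }
  };

//--- matrix/vector style: the same forward pass with the built-in types
double ForwardMatrix(const matrix &W1,const vector &b1,
                     const vector &w2,const double b2,const vector &x)
  {
   vector h=W1.MatMul(x)+b1;               // whole hidden layer in one call
   for(ulong j=0;j<h.Size();j++)
      h[j]=1.0/(1.0+MathExp(-h[j]));       // element-wise sigmoid
   return(h.Dot(w2)+b2);                   // single output neuron
  }

Both versions compute the same mapping; the class makes the per-neuron structure explicit, while the matrix-vector form keeps each layer in a single expression.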

 

Experiments with neural networks (Part 6): Perceptron as a self-sufficient tool for price forecast - the article 


The perceptron is a machine learning technique that can be used to predict market prices. It is a useful tool for traders and investors who want a price forecast.

Experiments with neural networks (Part 6): Perceptron as a self-sufficient tool for price forecast
  • www.mql5.com
The article provides an example of using a perceptron as a self-sufficient price prediction tool by showcasing general concepts and the simplest ready-made Expert Advisor followed by the results of its optimization.
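As a rough illustration of the idea (not the article's actual Expert Advisor), such a perceptron can be reduced to a weighted sum of a few recent close-price changes, with the weights exposed as input parameters so the strategy tester can optimize them; the sign of the sum is read as the forecast direction. The input names, the bar step and the use of close-price differences below are assumptions made for this sketch.

//--- hypothetical perceptron forecaster, not the article's EA
input int    InpStep = 20;    // bar step between the sampled closes
input double InpW1   = 0.0;   // perceptron weights - intended to be optimized
input double InpW2   = 0.0;
input double InpW3   = 0.0;
input double InpW4   = 0.0;

//--- weighted sum of four recent close-price changes
double Perceptron()
  {
   double a1=iClose(_Symbol,PERIOD_CURRENT,0)         -iClose(_Symbol,PERIOD_CURRENT,InpStep);
   double a2=iClose(_Symbol,PERIOD_CURRENT,InpStep)   -iClose(_Symbol,PERIOD_CURRENT,2*InpStep);
   double a3=iClose(_Symbol,PERIOD_CURRENT,2*InpStep) -iClose(_Symbol,PERIOD_CURRENT,3*InpStep);
   double a4=iClose(_Symbol,PERIOD_CURRENT,3*InpStep) -iClose(_Symbol,PERIOD_CURRENT,4*InpStep);
   return(InpW1*a1+InpW2*a2+InpW3*a3+InpW4*a4);
  }

void OnTick()
  {
   double signal=Perceptron();
   // positive sum is read as an "up" forecast, negative as "down"
   Comment(signal>0.0 ? "Perceptron forecast: up" : "Perceptron forecast: down");
  }

The weights are deliberately left as inputs so the tester can search them, as the article's optimization results suggest; all trading and money-management logic is omitted from this sketch.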
 

Forum on trading, automated trading systems and testing trading strategies

Better NN EA development

Sergey Golubev, 2024.03.08 16:08

The Disagreement Problem: Diving Deeper into The Complexity Explainability in AI

The disagreement problem is an open area of research in an interdisciplinary field known as Explainable Artificial Intelligence (XAI). Explainable Artificial Intelligence attempts to help us understand how our models arrive at their decisions, but unfortunately this is easier said than done.

We are all aware that machine learning models and the available datasets are growing larger and more complex. In fact, the data scientists who develop machine learning algorithms cannot fully explain their algorithms' behaviour across all possible datasets. Explainable Artificial Intelligence (XAI) helps us build trust in our models, explain their functionality and validate that they are ready to be deployed in production; but as promising as that may sound, this article shows the reader why we cannot blindly trust any explanation we may get from any application of Explainable Artificial Intelligence technology.
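A toy numerical sketch of why explanations can disagree (all numbers invented here, not taken from the article): two equally plausible attribution rules applied to the same two-feature linear model can rank the features in opposite orders.

//--- toy illustration of the disagreement problem (invented numbers)
void OnStart()
  {
   // the model being explained: y = w1*x1 + w2*x2
   double w1=0.9, w2=0.3;
   // typical magnitude of each input in the data
   double scale1=0.1, scale2=1.0;

   // explanation A: importance = |weight|
   double a1=MathAbs(w1);            // 0.9  -> x1 ranked first
   double a2=MathAbs(w2);            // 0.3

   // explanation B: importance = |weight| * typical input scale
   double b1=MathAbs(w1)*scale1;     // 0.09 -> x2 ranked first
   double b2=MathAbs(w2)*scale2;     // 0.30

   PrintFormat("Method A: x1 ranked %s x2",a1>a2 ? "above" : "below");
   PrintFormat("Method B: x1 ranked %s x2",b1>b2 ? "above" : "below");
   // same model, two reasonable explanations, opposite rankings
  }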

