Neural Networks - page 28

 

A simple neural net object, and the same model expressed with matrix-vector operations, for comparison of structure.
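A minimal illustration of that comparison (a sketch, not the attached files' actual code): the same one-hidden-layer forward pass written twice, once as a plain Python "object" with explicit loops, once as NumPy matrix-vector products. All weights and inputs below are made-up example values.

```python
import math
import numpy as np

class SimpleNet:
    """Loop-based forward pass: weights stored as nested lists."""
    def __init__(self, w_hidden, b_hidden, w_out, b_out):
        self.w_hidden, self.b_hidden = w_hidden, b_hidden
        self.w_out, self.b_out = w_out, b_out

    def forward(self, x):
        hidden = []
        for w_row, b in zip(self.w_hidden, self.b_hidden):
            s = sum(wi * xi for wi, xi in zip(w_row, x)) + b
            hidden.append(math.tanh(s))
        return sum(wi * hi for wi, hi in zip(self.w_out, hidden)) + self.b_out

def forward_matrix(x, W_hidden, b_hidden, w_out, b_out):
    """The same computation, one matrix-vector product per layer."""
    h = np.tanh(W_hidden @ x + b_hidden)
    return float(w_out @ h + b_out)

# Identical weights -> identical output from both formulations.
W = [[0.5, -0.2], [0.1, 0.4]]
bh = [0.0, 0.1]
wo = [1.0, -1.0]
bo = 0.05
x = [0.3, -0.7]

net = SimpleNet(W, bh, wo, bo)
y_loop = net.forward(x)
y_mat = forward_matrix(np.array(x), np.array(W), np.array(bh), np.array(wo), bo)
print(y_loop, y_mat)
```

The two versions compute the same thing; the matrix form just collapses the per-neuron loops into linear algebra, which is why the structures are worth comparing side by side.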

 

Experiments with neural networks (Part 6): Perceptron as a self-sufficient tool for price forecast

A perceptron is a machine learning technique that can be used to predict market prices, making it a useful tool for traders and investors seeking a price forecast.

  • www.mql5.com
The article provides an example of using a perceptron as a self-sufficient price prediction tool by showcasing general concepts and the simplest ready-made Expert Advisor followed by the results of its optimization.
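As a hedged sketch of the idea (not the article's Expert Advisor, which runs in MQL5 and is optimized in the strategy tester): a classic perceptron that votes on next-bar direction from a few lagged returns. The data and the momentum "label" below are synthetic and purely illustrative.

```python
import random

def perceptron_predict(weights, bias, inputs):
    s = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if s >= 0 else -1

def perceptron_train(samples, labels, epochs=20, lr=0.1):
    n = len(samples[0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            if perceptron_predict(weights, bias, x) != y:
                # Classic perceptron rule: nudge weights toward the miss.
                weights = [w + lr * y * xi for w, xi in zip(weights, x)]
                bias += lr * y
    return weights, bias

# Synthetic "returns" with a momentum pattern the perceptron can learn.
random.seed(1)
rets = [random.gauss(0, 1) for _ in range(300)]
X = [rets[i:i + 3] for i in range(len(rets) - 3)]
y = [1 if sum(x) > 0 else -1 for x in X]   # toy label: 3-bar momentum sign

w, b = perceptron_train(X, y)
hits = sum(perceptron_predict(w, b, x) == t for x, t in zip(X, y))
print(hits / len(X))  # in-sample hit rate on the toy pattern
```

In the article's setting the inputs would be indicator values and the weights would be found by the tester's optimizer rather than by this online update rule.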
 

Forum on trading, automated trading systems and testing trading strategies

Better NN EA development

Sergey Golubev, 2024.03.08 16:08

The Disagreement Problem: Diving Deeper into The Complexity of Explainability in AI

The disagreement problem is an open area of research in an interdisciplinary field known as Explainable Artificial Intelligence (XAI). XAI attempts to help us understand how our models arrive at their decisions, but that is easier said than done.

We are all aware that machine learning models and available datasets are growing larger and more complex. As a matter of fact, the data scientists who develop machine learning algorithms cannot exactly explain their algorithm’s behaviour across all possible datasets.  Explainable Artificial Intelligence (XAI) helps us build trust in our models, explain their functionality and validate that the models are ready to be deployed in production; but as promising as that may sound, this article will show the reader why we cannot blindly trust any explanation we may get from any application of Explainable Artificial Intelligence technology.
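A small illustration of why explanations deserve scrutiny (my own toy sketch, not taken from the article): two common explanation techniques, fitted coefficient magnitudes and permutation-style error increase, applied to the same linear model with correlated inputs. Their feature rankings need not agree, which is exactly the disagreement problem in miniature.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x0 = rng.normal(size=n)
x1 = x0 + rng.normal(scale=0.05, size=n)   # nearly a copy of x0
x2 = rng.normal(size=n)
X = np.column_stack([x0, x1, x2])
y = 3.0 * x0 + 1.0 * x2 + rng.normal(scale=0.1, size=n)

# Explanation 1: rank features by fitted least-squares coefficient size.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
rank_by_coef = np.argsort(-np.abs(coef))

# Explanation 2: rank by error increase when a feature is shuffled.
def mse(pred):
    return float(np.mean((y - pred) ** 2))

base = mse(X @ coef)
importance = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importance.append(mse(Xp @ coef) - base)
rank_by_perm = np.argsort(-np.array(importance))

print(rank_by_coef, rank_by_perm)  # the two rankings can disagree
```

Because x0 and x1 are nearly collinear, the least-squares fit can split their weight arbitrarily while predictions stay unchanged, so the two "explanations" of the same model can tell different stories.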


 

Overcoming The Limitation of Machine Learning (Part 1): Lack of Interoperable Metrics

In our related series of articles, such as Self-Optimizing Expert Advisors, we discussed an unsettling truth: even when you follow all the “best practices” in algorithmic trading development, things can still go horribly wrong. Briefly, we observed that practitioners using the RSI according to its standardized rules may wait several months without the indicator generating any of the expected signals, leaving trading accounts exposed to more market risk than intended.

In this series of articles, we shall explore critical problems that algorithmic traders are exposed to every day, by the very guidelines and practices intended to keep them safe when using machine learning models.
  • www.mql5.com
There is a powerful and pervasive force quietly corrupting the collective efforts of our community to build reliable trading strategies that employ AI in any shape or form. This article establishes that part of the problems we face are rooted in blind adherence to "best practices". Using simple, real-world, market-based evidence, we explain why we must refrain from such conduct and instead adopt domain-bound best practices if our community is to stand any chance of recovering the latent potential of AI.
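The RSI observation above can be made concrete with a quick sketch (synthetic random-walk prices, not the article's market study): Wilder's RSI with the standard 70/30 thresholds can go long stretches without firing either signal.

```python
import random

def wilder_rsi(closes, period=14):
    """Wilder-smoothed RSI; returns None during the warm-up bars."""
    gains, losses = 0.0, 0.0
    rsi = []
    for i in range(1, len(closes)):
        change = closes[i] - closes[i - 1]
        gain, loss = max(change, 0.0), max(-change, 0.0)
        if i <= period:
            gains += gain / period      # build the initial averages
            losses += loss / period
            rsi.append(None)
        else:
            gains = (gains * (period - 1) + gain) / period
            losses = (losses * (period - 1) + loss) / period
            rs = gains / losses if losses else float("inf")
            rsi.append(100 - 100 / (1 + rs))
    return rsi

# Synthetic price path with modest volatility.
random.seed(7)
closes = [100.0]
for _ in range(500):
    closes.append(closes[-1] * (1 + random.gauss(0, 0.002)))

values = [v for v in wilder_rsi(closes) if v is not None]
signals = sum(v > 70 or v < 30 for v in values)
print(signals, len(values))  # how rarely the standard thresholds trigger
```

Counting how few of the bars ever cross 70 or 30 makes the "months without a signal" failure mode easy to see.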
 

Overcoming The Limitation of Machine Learning (Part 3): A Fresh Perspective on Irreducible Error

This article introduces the reader to advanced limitations of current machine learning models that practitioners are rarely taught explicitly before they deploy these models. The field of machine learning is dominated by mathematical notation and literature, and because there are many levels of abstraction at which a practitioner can study it, approaches often differ. For example, some practitioners study machine learning simply through high-level libraries such as scikit-learn, which provide an easy and intuitive framework for using models while abstracting away the mathematical concepts that underpin them.

  • 2025.09.08
  • www.mql5.com
This article takes a fresh perspective on a hidden, geometric source of error that quietly shapes every prediction your models make. By rethinking how we measure and apply machine learning forecasts in trading, we reveal how this overlooked perspective can unlock sharper decisions, stronger returns, and a more intelligent way to work with models we thought we already understood.
 

Overcoming The Limitation of Machine Learning (Part 4): Overcoming Irreducible Error Using Multiple Forecast Horizons

Machine learning is a very broad field that can be studied and interpreted from many different perspectives. This very breadth makes it materially challenging for any one of us to master. In our series of articles, we have covered some material on machine learning from a statistical point of view or from the perspective of linear algebra. However, we rarely give attention to the geometric interpretation of machine learning models. Traditionally, machine learning models are described as approximating a function that maps inputs to outputs. From a geometric perspective, however, this is incomplete.

  • 2025.09.23
  • www.mql5.com
Machine learning is often viewed through statistical or linear algebraic lenses, but this article emphasizes a geometric perspective of model predictions. It demonstrates that models do not truly approximate the target but rather map it onto a new coordinate system, creating an inherent misalignment that results in irreducible error. The article proposes that multi-step predictions, comparing the model’s forecasts across different horizons, offer a more effective approach than direct comparisons with the target. By applying this method to a trading model, the article demonstrates significant improvements in profitability and accuracy without changing the underlying model.
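One way to sketch the multi-horizon idea (synthetic data and a deliberately simple AR(2) model; the article's model and trading rules differ): instead of judging a single forecast against the target, require the forecasts at two horizons to agree in sign before acting on the signal.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 800
x = np.zeros(n)
for t in range(2, n):
    x[t] = 0.5 * x[t - 1] - 0.4 * x[t - 2] + rng.normal()

# Fit AR(2) coefficients on the first half by least squares.
train = 400
A = np.column_stack([x[1:train - 1], x[0:train - 2]])
b = x[2:train]
phi, *_ = np.linalg.lstsq(A, b, rcond=None)

def forecast(x1, x2, steps):
    """Recursive AR(2) point forecast `steps` ahead."""
    for _ in range(steps):
        x1, x2 = phi[0] * x1 + phi[1] * x2, x1
    return x1

plain_hits = plain_n = filt_hits = filt_n = 0
for t in range(train, n - 5):
    f1 = forecast(x[t], x[t - 1], 1)   # 1-step-ahead forecast
    f5 = forecast(x[t], x[t - 1], 5)   # 5-step-ahead forecast
    hit = (f1 > 0) == (x[t + 1] > 0)
    plain_n += 1
    plain_hits += hit
    if (f1 > 0) == (f5 > 0):           # act only when horizons agree
        filt_n += 1
        filt_hits += hit
print(plain_hits / plain_n, filt_hits / max(filt_n, 1))
```

The filter trades less often, and whether its hit rate improves depends on the process; the point is structural: the agreement test compares the model's own forecasts across horizons, never touching the target directly.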
 

Overcoming The Limitation of Machine Learning (Part 5): A Quick Recap of Time Series Cross Validation

In our related series of articles, we’ve covered numerous tactics on how to deal with issues created by market behavior. However, in this series, we focus on problems caused by the machine learning algorithms we wish to employ in our strategies. Many of these issues arise from the architecture of the model, the algorithms used in model selection, the loss functions we define to measure performance, and many other subjects of the same nature.

  • 2025.10.13
  • www.mql5.com
In this series of articles, we look at the challenges faced by algorithmic traders when deploying machine-learning-powered trading strategies. Some challenges within our community remain unseen because they demand deeper technical understanding. Today’s discussion acts as a springboard toward examining the blind spots of cross-validation in machine learning. Although often treated as routine, this step can easily produce misleading or suboptimal results if handled carelessly. This article briefly revisits the essentials of time series cross-validation to prepare us for more in-depth insight into its hidden blind spots.
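The essentials being recapped can be shown in a few lines (a generic sketch, not the article's exact setup): expanding-window splits, where each fold trains only on data that precedes its test block, which is the basic ordering rule time series cross-validation must respect.

```python
def expanding_window_splits(n_samples, n_folds, min_train):
    """Yield (train_indices, test_indices) with test always after train."""
    fold_size = (n_samples - min_train) // n_folds
    for k in range(n_folds):
        train_end = min_train + k * fold_size
        test_end = min(train_end + fold_size, n_samples)
        yield list(range(train_end)), list(range(train_end, test_end))

# Each fold's training set grows; its test block always lies in the future.
for train_idx, test_idx in expanding_window_splits(20, 3, min_train=8):
    print(len(train_idx), test_idx)
```

Shuffled k-fold CV would leak future information into the training set here, which is precisely why time series work uses ordered splits like these (scikit-learn's TimeSeriesSplit implements the same pattern).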
 

Overcoming The Limitation of Machine Learning (Part 6): Effective Memory Cross Validation

In our previous discussion on cross-validation, we reviewed the classical approach and how it is applied to time series data to optimize models and mitigate overfitting; a link to that discussion is provided for your convenience. We also suggested that we can achieve better performance than the traditional interpretation implies. In this article, we explore the blind spots of conventional cross-validation techniques and show how they can be enhanced through domain-specific validation methods.

  • 2025.10.23
  • www.mql5.com
In this discussion, we contrast the classical approach to time series cross-validation with modern alternatives that challenge its core assumptions. We expose key blind spots in the traditional method—especially its failure to account for evolving market conditions. To address these gaps, we introduce Effective Memory Cross-Validation (EMCV), a domain-aware approach that questions the long-held belief that more historical data always improves performance.
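The article defines EMCV itself; as a loose sketch of the underlying idea only (old data may stop being representative as market conditions evolve), here is a rolling-window splitter that caps how far back each fold's training set reaches. The function name and parameters are illustrative, not the article's.

```python
def rolling_window_splits(n_samples, n_folds, train_window, test_size):
    """Each fold trains only on the most recent `train_window` points."""
    for k in range(n_folds):
        test_start = train_window + k * test_size
        test_end = test_start + test_size
        if test_end > n_samples:
            break
        yield (list(range(test_start - train_window, test_start)),
               list(range(test_start, test_end)))

# Unlike an expanding window, the training set slides forward and
# forgets the oldest data instead of accumulating all history.
for tr, te in rolling_window_splits(30, 3, train_window=12, test_size=5):
    print(tr[0], tr[-1], te)
```

Comparing this against an expanding window on the same series is one simple way to test whether "more history" actually helps a given strategy.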