A simple neural net object, and the same with matrix-vector operations, for comparison of structure.
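As a minimal illustration of that structural comparison, here is a Python sketch (the class names and the toy layer are ours, not code from the thread, which would normally be MQL5): the same layer computed once as a list of neuron objects with explicit loops, and once as a single matrix-vector product.

```python
import numpy as np

class SimpleNeuron:
    """One neuron: explicit loop over inputs (object-style structure)."""
    def __init__(self, weights, bias):
        self.weights = list(weights)
        self.bias = bias

    def forward(self, inputs):
        total = self.bias
        for w, x in zip(self.weights, inputs):
            total += w * x
        return max(0.0, total)  # ReLU activation

class SimpleLayer:
    """A layer as a list of neuron objects."""
    def __init__(self, neurons):
        self.neurons = neurons

    def forward(self, inputs):
        return [n.forward(inputs) for n in self.neurons]

def layer_forward_matrix(W, b, x):
    """The same layer as one matrix-vector product."""
    return np.maximum(0.0, W @ x + b)

# The two structures compute identical outputs:
W = np.array([[0.5, -0.2], [0.1, 0.4]])
b = np.array([0.1, -0.3])
x = np.array([1.0, 2.0])

layer = SimpleLayer([SimpleNeuron(W[0], b[0]), SimpleNeuron(W[1], b[1])])
print(layer.forward(x))               # object version
print(layer_forward_matrix(W, b, x))  # matrix-vector version
```

The matrix-vector form is what makes larger nets practical: one `W @ x` replaces the per-neuron loops while producing the same numbers.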
Experiments with neural networks (Part 6): Perceptron as a self-sufficient tool for price forecast (the article)
The perceptron is a machine learning technique that can be used to predict market prices, making it a useful tool for traders and investors seeking a price forecast.
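To make the idea concrete, here is a minimal perceptron sketch with the classic update rule, trained on synthetic lagged-return features (the feature choice and data are illustrative assumptions of ours, not the article's EA):

```python
import numpy as np

def train_perceptron(X, y, epochs=50, lr=0.1):
    """Classic perceptron rule: w += lr * (target - prediction) * x."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            update = lr * (yi - pred)
            w += update * xi
            b += update
    return w, b

def predict(w, b, X):
    """Binary forecast: 1 = up, 0 = down."""
    return (X @ w + b > 0).astype(int)

# Hypothetical features: two lagged returns; target: next bar up (1) or down (0).
rng = np.random.default_rng(0)
lags = rng.normal(size=(200, 2))
target = (lags[:, 0] + 0.5 * lags[:, 1] > 0).astype(int)  # synthetic, separable

w, b = train_perceptron(lags, target)
accuracy = (predict(w, b, lags) == target).mean()
print(f"in-sample accuracy: {accuracy:.2f}")
```

On real prices the target is of course far noisier than this separable toy data; the point is only the mechanics of the learning rule.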
Forum on trading, automated trading systems and testing trading strategies
Better NN EA development
Sergey Golubev, 2024.03.08 16:08
The Disagreement Problem: Diving Deeper into the Complexity of Explainability in AI
The disagreement problem is an open area of research in an interdisciplinary field known as Explainable Artificial Intelligence (XAI). XAI attempts to help us understand how our models arrive at their decisions, but unfortunately that is easier said than done.
We are all aware that machine learning models and the datasets available to us are growing larger and more complex. As a matter of fact, the data scientists who develop machine learning algorithms cannot exactly explain their algorithms' behaviour across all possible datasets. Explainable Artificial Intelligence (XAI) helps us build trust in our models, explain their functionality, and validate that they are ready to be deployed in production. But as promising as that may sound, this article will show the reader why we cannot blindly trust every explanation produced by XAI techniques.
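A quick way to see the disagreement problem is to explain the same model with two methods and watch them rank features differently. The construction below is our own toy example (two near-collinear features), not taken from the article: regression coefficients call the second feature irrelevant, while marginal correlation calls it almost as important as the first.

```python
import numpy as np

# Toy construction (ours): x2 is almost a copy of x1, but only x1 drives y.
rng = np.random.default_rng(42)
x1 = rng.normal(size=500)
x2 = x1 + 0.05 * rng.normal(size=500)
y = 2.0 * x1 + 0.1 * rng.normal(size=500)

# Explanation 1: least-squares coefficients attribute everything to x1.
X = np.column_stack([x1, x2])
coefs, *_ = np.linalg.lstsq(X, y, rcond=None)

# Explanation 2: marginal correlation rates x2 nearly as important as x1.
corr_x1 = np.corrcoef(x1, y)[0, 1]
corr_x2 = np.corrcoef(x2, y)[0, 1]

print("coefficients:", coefs)          # x2 gets a near-zero weight
print("correlations:", corr_x1, corr_x2)  # both features look important
```

Neither explanation is wrong on its own terms; they answer different questions, which is exactly why explanations from different XAI methods can legitimately disagree.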
Overcoming The Limitation of Machine Learning (Part 1): Lack of Interoperable Metrics
In this series of articles, we explore critical problems that algorithmic traders are exposed to every day, created by the very guidelines and practices intended to keep them safe when using machine learning models.
Overcoming The Limitation of Machine Learning (Part 3): A Fresh Perspective on Irreducible Error
This article introduces the reader to advanced limitations of current machine learning models that practitioners are not explicitly taught before they deploy these models. The field of machine learning is dominated by mathematical notation and literature, and since there are many levels of abstraction from which a practitioner can study it, approaches often differ. For example, some practitioners study machine learning simply through high-level libraries such as scikit-learn, which provide an easy and intuitive framework for using models while abstracting away the mathematical concepts that underpin them.
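Those abstraction levels can be illustrated by estimating the same linear fit twice: once through a one-line high-level call, and once by solving the normal equations that such a call hides (a sketch using numpy's `polyfit` to stand in for a high-level library):

```python
import numpy as np

x = np.array([0., 1., 2., 3., 4.])
y = 2.0 * x + 1.0

# High-level abstraction: one call hides the estimation math.
slope, intercept = np.polyfit(x, y, deg=1)

# The math underneath: solve the least-squares normal equations directly.
X = np.column_stack([x, np.ones_like(x)])
beta = np.linalg.solve(X.T @ X, X.T @ y)

print(slope, intercept)  # high-level result
print(beta)              # same estimates, derived explicitly
```

Both paths return the same estimates; what differs is how much of the underlying mathematics the practitioner has to engage with.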
Overcoming The Limitation of Machine Learning (Part 4): Overcoming Irreducible Error Using Multiple Forecast Horizons
Machine learning is a very broad field that can be studied and interpreted from many different perspectives, and this very breadth makes it challenging for any one of us to master. In our series of articles, we have covered machine learning from a statistical point of view and from the perspective of linear algebra. However, we rarely give attention to the geometric interpretation of machine learning models. Traditionally, machine learning models are described as approximating a function that maps inputs to outputs; from a geometric perspective, however, that description is incomplete.
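A small sketch of the multi-horizon setup named in the title: from a single price series, build one target column per forecast horizon, all aligned to the same feature rows (the helper and the toy feature are our illustrative assumptions, not the article's code):

```python
import numpy as np

def multi_horizon_targets(close, horizons):
    """Return aligned feature rows and one future-return target per horizon."""
    max_h = max(horizons)
    X = close[:-max_h].reshape(-1, 1)  # toy feature: current price
    Y = np.column_stack([close[h:len(close) - max_h + h] - close[:-max_h]
                         for h in horizons])
    return X, Y

close = np.array([100., 101., 103., 102., 105., 107., 106., 108.])
X, Y = multi_horizon_targets(close, horizons=[1, 2, 3])
print(X.shape, Y.shape)  # one target column per horizon
```

Fitting one model per column (or one multi-output model) then yields forecasts at several horizons that can be combined or cross-checked against each other.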
Overcoming The Limitation of Machine Learning (Part 5): A Quick Recap of Time Series Cross Validation
In our related series of articles, we’ve covered numerous tactics on how to deal with issues created by market behavior. However, in this series, we focus on problems caused by the machine learning algorithms we wish to employ in our strategies. Many of these issues arise from the architecture of the model, the algorithms used in model selection, the loss functions we define to measure performance, and many other subjects of the same nature.
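Classical time series cross-validation can be sketched as an expanding walk-forward split, where each fold trains only on the past and tests on the next block (a minimal re-implementation of ours, not any specific library's API):

```python
import numpy as np

def walk_forward_splits(n_samples, n_splits):
    """Expanding-window time series CV: train on the past, test on the next block."""
    fold = n_samples // (n_splits + 1)
    for k in range(1, n_splits + 1):
        train_idx = np.arange(0, k * fold)
        test_idx = np.arange(k * fold, min((k + 1) * fold, n_samples))
        yield train_idx, test_idx

for train_idx, test_idx in walk_forward_splits(12, n_splits=3):
    # Every training index precedes every test index, so no look-ahead leakage.
    print(train_idx, "->", test_idx)
```

Unlike shuffled k-fold, the temporal ordering is preserved, which is the property that makes the scheme suitable for market data in the first place.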
Overcoming The Limitation of Machine Learning (Part 6): Effective Memory Cross Validation
In our previous discussion on cross-validation, we reviewed the classical approach and how it is applied to time series data to optimize models and mitigate overfitting. A link to that discussion has been provided for your convenience, here. We also suggested that we can achieve better performance than the traditional interpretation implies. In this article, we explore the blind-spots of conventional cross-validation techniques and show how they can be enhanced through domain-specific validation methods.
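The article's exact method is not reproduced here, but one common domain-specific enhancement along these lines is an embargo: dropping a few samples between the train and test blocks so overlapping labels cannot leak information across the boundary (a hedged sketch; the function and its parameters are our own, not the article's):

```python
import numpy as np

def embargoed_splits(n_samples, n_splits, embargo):
    """Walk-forward CV that purges `embargo` samples before each test block,
    so labels built from overlapping windows cannot leak into training."""
    fold = n_samples // (n_splits + 1)
    for k in range(1, n_splits + 1):
        train_end = k * fold - embargo          # purge the boundary region
        train_idx = np.arange(0, max(train_end, 0))
        test_idx = np.arange(k * fold, min((k + 1) * fold, n_samples))
        yield train_idx, test_idx

for train_idx, test_idx in embargoed_splits(12, n_splits=3, embargo=1):
    # A gap of `embargo` samples now separates each train set from its test set.
    print(train_idx, "->", test_idx)
```

The embargo width should match the label horizon: if each label looks h bars into the future, an embargo of at least h bars keeps the folds genuinely independent.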