Articles on machine learning in trading

Creating AI-based trading robots: native integration with Python, matrices and vectors, math and statistics libraries and much more.

Find out how to use machine learning in trading. Neurons, perceptrons, convolutional and recurrent networks, predictive models — start with the basics and work your way up to developing your own AI. You will learn how to train and apply neural networks for algorithmic trading in financial markets.

MQL5 Wizard Techniques you should know (Part 57): Supervised Learning with Moving Average and Stochastic Oscillator

The Moving Average and the Stochastic Oscillator are very common indicators that some traders may shun because of their lagging nature. In a three-part mini-series that considers the three main forms of machine learning, we look at whether this bias against these indicators is justified, or whether they might in fact hold an edge. We carry out our examination in wizard-assembled Expert Advisors.

Neural Networks Made Easy (Part 91): Frequency Domain Forecasting (FreDF)

We continue to explore the analysis and forecasting of time series in the frequency domain. In this article, we will get acquainted with a new method to forecast data in the frequency domain, which can be added to many of the algorithms we have studied previously.

Reimagining Classic Strategies in Python: MA Crossovers

In this article, we revisit the classic moving average crossover strategy to assess its current effectiveness. Given the amount of time since its inception, we explore the potential enhancements that AI can bring to this traditional trading strategy. By incorporating AI techniques, we aim to leverage advanced predictive capabilities to potentially optimize trade entry and exit points, adapt to varying market conditions, and enhance overall performance compared to conventional approaches.
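
As a reference point for the article above, the plain crossover rule it starts from can be sketched in a few lines of Python; this is an illustrative baseline only, not the article's code, and the window lengths are arbitrary:

import numpy as np

def ma_crossover_signals(close, fast=10, slow=50):
    """Return +1 at bullish fast/slow SMA crossovers, -1 at bearish ones, 0 elsewhere."""
    close = np.asarray(close, dtype=float)
    fast_ma = np.convolve(close, np.ones(fast) / fast, mode="valid")
    slow_ma = np.convolve(close, np.ones(slow) / slow, mode="valid")
    n = min(len(fast_ma), len(slow_ma))            # align both averages to the latest bars
    cross = np.sign(fast_ma[-n:] - slow_ma[-n:])   # +1 fast above slow, -1 below
    return np.diff(cross) / 2.0                    # non-zero only on the bar of a crossover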

Generative Adversarial Networks (GANs) for Synthetic Data in Financial Modeling (Part 1): Introduction to GANs and Synthetic Data in Financial Modeling

This article introduces traders to Generative Adversarial Networks (GANs) for generating synthetic financial data, addressing data limitations in model training. It covers GAN basics, Python and MQL5 code implementations, and practical applications in finance, empowering traders to enhance model accuracy and robustness through synthetic data.
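
To give a feel for the adversarial setup introduced in the article above, here is a compressed PyTorch sketch of a generator and discriminator trained on a stand-in return series; the network sizes and data are placeholders chosen for illustration, not the article's implementation:

import torch
import torch.nn as nn

latent_dim, feat_dim = 16, 1
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, feat_dim))
D = nn.Sequential(nn.Linear(feat_dim, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

real = 0.01 * torch.randn(512, feat_dim)                     # stand-in for real returns
ones, zeros = torch.ones(512, 1), torch.zeros(512, 1)
for step in range(200):
    fake = G(torch.randn(512, latent_dim)).detach()          # discriminator: real -> 1, fake -> 0
    d_loss = bce(D(real), ones) + bce(D(fake), zeros)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    g_loss = bce(D(G(torch.randn(512, latent_dim))), ones)   # generator tries to fool D
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

synthetic = G(torch.randn(1000, latent_dim)).detach().numpy()  # sample synthetic returns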

Category Theory in MQL5 (Part 6): Monomorphic Pull-Backs and Epimorphic Push-Outs

Category Theory is a diverse and expanding branch of mathematics that has only recently begun to receive coverage in the MQL5 community. This series of articles explores and examines some of its concepts and axioms, with the overall goal of establishing an open library that provides insight while also furthering the use of this remarkable field in traders' strategy development.

Neural Networks Made Easy (Part 81): Context-Guided Motion Analysis (CCMR)

In previous works, we always assessed the current state of the environment, while the dynamics of changes in the indicators remained "behind the scenes". In this article, I introduce an algorithm that evaluates the direct change in data between two successive environmental states.

Chemical reaction optimization (CRO) algorithm (Part II): Assembling and results

In the second part, we will collect chemical operators into a single algorithm and present a detailed analysis of its results. Let's find out how the Chemical reaction optimization (CRO) method copes with solving complex problems on test functions.

Data label for time series mining (Part 6):Apply and Test in EA Using ONNX

This series of articles introduces several time-series labeling methods that can produce data suitable for most artificial intelligence models. Targeted data labeling, tailored to your needs, can make the trained model better match the intended design, improve its accuracy, and even help it make a qualitative leap.

Artificial Cooperative Search (ACS) algorithm

Artificial Cooperative Search (ACS) is an innovative method that uses a binary matrix and multiple dynamic populations based on mutualistic relationships and cooperation to find optimal solutions quickly and accurately. ACS's unique approach to predators and prey enables it to achieve excellent results in numerical optimization problems.

The case for using a Composite Data Set this Q4 in weighing SPDR XLY's next performance

We consider XLY, SPDR's consumer discretionary spending ETF, and examine whether, with the tools in MetaTrader's IDE, we can sift through an array of data sets to select what could work with a forecasting model whose forward outlook is no more than a year.

Time series clustering in causal inference

Clustering algorithms in machine learning are important unsupervised learning algorithms that can divide the original data into groups with similar observations. By using these groups, you can analyze the market for a specific cluster, search for the most stable clusters using new data, and make causal inferences. The article proposes an original method for time series clustering in Python.
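
The article above proposes its own clustering method; purely as a generic baseline for comparison (assuming scikit-learn and sliding windows of returns as the feature vectors), k-means clustering might be set up like this:

import numpy as np
from sklearn.cluster import KMeans

def cluster_return_windows(returns, window=20, n_clusters=5, seed=0):
    """Cluster overlapping windows of returns; labels can then be inspected per regime."""
    r = np.asarray(returns, dtype=float)
    X = np.lib.stride_tricks.sliding_window_view(r, window)   # shape: (n - window + 1, window)
    model = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
    labels = model.fit_predict(X)
    return labels, model.cluster_centers_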

Population optimization algorithms: Resistance to getting stuck in local extrema (Part II)

We continue our experiment that aims to examine the behavior of population optimization algorithms in the context of their ability to efficiently escape local minima when population diversity is low and reach global maxima. Research results are provided.

Neural Networks in Trading: Hierarchical Vector Transformer (Final Part)

We continue studying the Hierarchical Vector Transformer method. In this article, we will complete the construction of the model. We will also train and test it on real historical data.

Category Theory in MQL5 (Part 21): Natural Transformations with LDA

This article, the 21st in our series, continues with a look at Natural Transformations and how they can be implemented using linear discriminant analysis. We present applications of this in a signal class format, as in the previous article.

Population optimization algorithms: Intelligent Water Drops (IWD) algorithm

The article considers an interesting algorithm derived from inanimate nature - intelligent water drops (IWD) simulating the process of river bed formation. The ideas of this algorithm made it possible to significantly improve the previous leader of the rating - SDS. As usual, the new leader (modified SDSm) can be found in the attachment.

MQL5 Wizard Techniques you should know (Part 58): Reinforcement Learning (DDPG) with Moving Average and Stochastic Oscillator Patterns

Moving Average and Stochastic Oscillator are very common indicators whose collective patterns we explored in the prior article, via a supervised learning network, to see which patterns would stick. We take the analysis from that article a step further by considering the effect reinforcement learning, when used with this trained network, would have on performance. Readers should note that our testing covers a very limited time window. Nonetheless, we continue to harness the minimal coding requirements afforded by the MQL5 wizard in showcasing this.

Population optimization algorithms: Simulated Isotropic Annealing (SIA) algorithm. Part II

The first part was devoted to the well-known and popular simulated annealing algorithm, whose pros and cons we considered in detail. The second part of the article is devoted to a radical transformation of the algorithm, which turns it into a new optimization method: Simulated Isotropic Annealing (SIA).
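
For readers who want to recall the classic scheme covered in the first part, here is a textbook simulated-annealing sketch in Python; it is a generic illustration with an assumed geometric cooling schedule, not the SIA variant developed in the article above:

import math, random

def simulated_annealing(f, x0, step=0.1, t0=1.0, cooling=0.995, iters=5000):
    """Minimize f starting from x0, accepting worse moves with Boltzmann probability."""
    x, fx, t = x0, f(x0), t0
    best_x, best_f = x, fx
    for _ in range(iters):
        cand = x + random.uniform(-step, step)
        fc = f(cand)
        if fc < fx or random.random() < math.exp(-(fc - fx) / max(t, 1e-12)):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x, fx
        t *= cooling                     # gradually lower the temperature
    return best_x, best_f

print(simulated_annealing(lambda v: (v - 2.0) ** 2 + math.sin(5 * v), x0=0.0))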

Category Theory in MQL5 (Part 19): Naturality Square Induction

We continue our look at natural transformations by considering naturality square induction. Slight constraints on multicurrency implementation for experts assembled with the MQL5 wizard mean we showcase our data classification abilities with a script. The principal applications considered are price change classification and, by extension, its forecasting.

Neural networks made easy (Part 61): Optimism issue in offline reinforcement learning

During offline learning, we optimize the Agent's policy based on the training sample data. The resulting strategy gives the Agent confidence in its actions. However, such optimism is not always justified and can increase risks during model operation. Today we will look at one of the methods for reducing these risks.

MQL5 Wizard Techniques you should know (Part 07): Dendrograms

Data classification for purposes of analysis and forecasting is a very diverse arena within machine learning and it features a large number of approaches and methods. This piece looks at one such approach, namely Agglomerative Hierarchical Classification.
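
As a quick illustration of the agglomerative approach discussed in the article above, a few SciPy calls are enough to build the merge tree and plot the dendrogram; the feature matrix here is random stand-in data rather than the article's wizard classes:

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster, dendrogram
import matplotlib.pyplot as plt

X = np.random.RandomState(0).randn(30, 4)         # stand-in for per-bar feature vectors
Z = linkage(X, method="ward")                      # agglomerative merge tree
labels = fcluster(Z, t=4, criterion="maxclust")    # cut the tree into 4 clusters
dendrogram(Z)
plt.show()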

Neural Network in Practice: Secant Line

As already explained in the theoretical part, when working with neural networks we need to use linear regressions and derivatives. Why? Because linear regression is one of the simplest formulas in existence: essentially, it is just an affine function. However, when we talk about neural networks, we are not interested in the line itself, but in the equation that generates it. Do you know the main equation we need to understand? If not, I recommend reading this article to understand it.
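
A tiny numeric sketch of the idea (not the article's code): the secant line through two points of a curve is an affine function, and its slope approaches the derivative as the two points move together:

def secant_slope(f, a, b):
    """Slope of the affine function (secant line) through (a, f(a)) and (b, f(b))."""
    return (f(b) - f(a)) / (b - a)

f = lambda x: x ** 2
for h in (1.0, 0.1, 0.001):
    print(h, secant_slope(f, 1.0, 1.0 + h))   # tends to f'(1) = 2 as h shrinks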

Integrating MQL5 with data processing packages (Part 2): Machine Learning and Predictive Analytics

In our series on integrating MQL5 with data processing packages, we delve into the powerful combination of machine learning and predictive analytics. We will explore how to seamlessly connect MQL5 with popular machine learning libraries to enable sophisticated predictive models for financial markets.

Neural Networks in Trading: Superpoint Transformer (SPFormer)

In this article, we introduce a method for segmenting 3D objects based on Superpoint Transformer (SPFormer), which eliminates the need for intermediate data aggregation. This speeds up the segmentation process and improves the performance of the model.

The Group Method of Data Handling: Implementing the Combinatorial Algorithm in MQL5

In this article we continue our exploration of the Group Method of Data Handling family of algorithms, with the implementation of the Combinatorial Algorithm along with its refined incarnation, the Combinatorial Selective Algorithm in MQL5.

MQL5 Wizard Techniques you should know (Part 34): Price-Embedding with an Unconventional RBM

Restricted Boltzmann Machines are a form of neural network developed in the mid-1980s, at a time when compute resources were prohibitively expensive. At the onset, the method relied on Gibbs Sampling and Contrastive Divergence to reduce dimensionality or capture the hidden probabilities/properties of input training data sets. We examine how backpropagation can perform similarly when the RBM 'embeds' prices for a forecasting Multi-Layer Perceptron.

Propensity score in causal inference

The article examines the topic of matching in causal inference. Matching is used to compare similar observations in a data set. This is necessary to correctly determine causal effects and get rid of bias. The author explains how this helps in building trading systems based on machine learning, which become more stable on new data they were not trained on. The propensity score plays a central role and is widely used in causal inference.
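
The core mechanic described above, estimating P(treatment | covariates) and then matching observations with similar scores, can be condensed into a short scikit-learn sketch; simple nearest-neighbour matching is assumed here for illustration, and this is not the author's own library:

import numpy as np
from sklearn.linear_model import LogisticRegression

def propensity_match(X, treated):
    """Match each treated unit to the control unit with the closest propensity score."""
    treated = np.asarray(treated, dtype=bool)
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    t_idx, c_idx = np.where(treated)[0], np.where(~treated)[0]
    pairs = [(i, c_idx[np.argmin(np.abs(ps[c_idx] - ps[i]))]) for i in t_idx]
    return ps, pairs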

Matrix Factorization: The Basics

Since the goal here is didactic, we will proceed as simply as possible. That is, we will implement only what we need: matrix multiplication. You will see that this is enough to simulate matrix-scalar multiplication. The most significant difficulty many people encounter when implementing code based on matrix factorization is this: unlike scalar multiplication, where the order of the factors almost never changes the result, with matrices the order does matter.
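
A few lines are enough to verify the ordering point numerically, shown here in Python rather than the article's MQL5:

import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0]])
print(A @ B)               # [[2. 1.] [4. 3.]]
print(B @ A)               # [[3. 4.] [1. 2.]] -- swapping the order changes the result
print(3.0 * A - A * 3.0)   # scalar multiplication commutes: all zeros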

Adaptive Social Behavior Optimization (ASBO): Schwefel, Box-Muller Method

This article provides a fascinating insight into the world of social behavior in living organisms and its influence on the creation of a new mathematical model - ASBO (Adaptive Social Behavior Optimization). We will examine how the principles of leadership, neighborhood, and cooperation observed in living societies inspire the development of innovative optimization algorithms.

Integrate Your Own LLM into EA (Part 5): Develop and Test Trading Strategy with LLMs(I)-Fine-tuning

With the rapid development of artificial intelligence, large language models (LLMs) have become an important part of it, so we should think about how to integrate powerful LLMs into our algorithmic trading. For most people, it is difficult to fine-tune these powerful models to their own needs, deploy them locally, and then apply them to algorithmic trading. This series of articles takes a step-by-step approach to achieving this goal.

Neural networks made easy (Part 89): Frequency Enhanced Decomposition Transformer (FEDformer)

All the models we have considered so far analyze the state of the environment as a time sequence. However, the time series can also be represented in the form of frequency features. In this article, I introduce you to an algorithm that uses frequency components of a time sequence to predict future states.
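
The change of representation the article above builds on, from the time domain to frequency components, is a single FFT call; a minimal NumPy sketch on stand-in data, not the FEDformer model itself:

import numpy as np

prices = np.cumsum(np.random.RandomState(1).randn(256))   # stand-in for a close series
returns = np.diff(prices)
spectrum = np.fft.rfft(returns)                            # complex frequency components
freqs = np.fft.rfftfreq(len(returns), d=1.0)               # cycles per bar
strongest = freqs[np.argsort(np.abs(spectrum))[-3:]]       # three dominant cycles
print(strongest)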

Reimagining Classic Strategies: Crude Oil

In this article, we revisit a classic crude oil trading strategy with the aim of enhancing it by leveraging supervised machine learning algorithms. We will construct a least-squares model to predict future Brent crude oil prices based on the spread between Brent and WTI crude oil prices. Our goal is to identify a leading indicator of future changes in Brent prices.
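
A compressed NumPy sketch of the least-squares idea described above; the five-bar forecast horizon and the use of the raw Brent-WTI spread as the only regressor are assumptions made here for illustration, not the article's final model:

import numpy as np

def fit_spread_model(brent, wti, horizon=5):
    """Ordinary least squares: forward change in Brent ~ a + b * (Brent - WTI)."""
    brent, wti = np.asarray(brent, float), np.asarray(wti, float)
    spread = brent - wti
    y = brent[horizon:] - brent[:-horizon]                  # forward change in Brent
    X = np.column_stack([np.ones(len(y)), spread[:-horizon]])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef                                             # [intercept, slope]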

Developing an MQL5 RL agent with RestAPI integration (Part 3): Creating automatic moves and test scripts in MQL5

This article discusses the implementation of automatic moves in the tic-tac-toe game in Python, integrated with MQL5 functions and unit tests. The goal is to improve the interactivity of the game and ensure the reliability of the system through testing in MQL5. The presentation covers game logic development, integration, and hands-on testing, and concludes with the creation of a dynamic game environment and a robust integrated system.

Neural Networks Made Easy (Part 97): Training Models With MSFformer

When exploring various model architecture designs, we often devote insufficient attention to the process of model training. In this article, I aim to address this gap.

MQL5 Wizard Techniques you should know (Part 49): Reinforcement Learning with Proximal Policy Optimization

Proximal Policy Optimization is another reinforcement learning algorithm that updates the policy, often in network form, in very small incremental steps to ensure model stability. We examine how this could be of use, as we have in previous articles, in a wizard-assembled Expert Advisor.
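
The "very small incremental steps" come from PPO's clipped surrogate objective; a one-function PyTorch sketch of that loss, shown for illustration rather than as the wizard signal class from the article above:

import torch

def ppo_clip_loss(new_logp, old_logp, advantage, eps=0.2):
    """PPO clipped surrogate loss: keeps the policy ratio within [1 - eps, 1 + eps]."""
    ratio = torch.exp(new_logp - old_logp)
    unclipped = ratio * advantage
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * advantage
    return -torch.min(unclipped, clipped).mean()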

Neural Networks in Trading: Node-Adaptive Graph Representation with NAFS

We invite you to get acquainted with the NAFS (Node-Adaptive Feature Smoothing) method, which is a non-parametric approach to creating node representations that does not require parameter training. NAFS extracts features of each node given its neighbors and then adaptively combines these features to form a final representation.

MQL5 Wizard Techniques you should know (Part 21): Testing with Economic Calendar Data

By default, Economic Calendar data is not available for testing with Expert Advisors in the Strategy Tester. We look at how databases could help provide a workaround for this limitation. In this article, we explore how SQLite databases can be used to archive Economic Calendar news so that wizard-assembled Expert Advisors can use it to generate trade signals.
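
On the archiving side, the workflow amounts to a small SQLite schema; the table and column names below are assumptions for illustration, shown in Python, although MQL5's own DatabaseOpen/DatabaseExecute functions follow the same pattern:

import sqlite3

con = sqlite3.connect("calendar.sqlite")
con.execute("""CREATE TABLE IF NOT EXISTS calendar (
    event_time TEXT, country TEXT, event TEXT,
    actual REAL, forecast REAL, previous REAL)""")
con.execute("INSERT INTO calendar VALUES (?, ?, ?, ?, ?, ?)",
            ("2024-01-01 13:30", "US", "ExampleEvent", 1.0, 0.8, 0.9))   # placeholder row
con.commit()
print(con.execute("SELECT * FROM calendar WHERE country = 'US'").fetchall())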

Reimagining Classic Strategies (Part IV): SP500 and US Treasury Notes

In this series of articles, we analyze classical trading strategies using modern algorithms to determine whether we can improve the strategy using AI. In today's article, we revisit a classical approach for trading the SP500 using the relationship it has with US Treasury Notes.

Neural networks are easy (Part 59): Dichotomy of Control (DoC)

In the previous article, we got acquainted with the Decision Transformer. But the complex stochastic environment of the foreign exchange market did not allow us to fully implement the potential of the presented method. In this article, I will introduce an algorithm that is aimed at improving the performance of algorithms in stochastic environments.

Population optimization algorithms: Bird Swarm Algorithm (BSA)

The article explores the bird swarm-based algorithm (BSA) inspired by the collective flocking interactions of birds in nature. The different search strategies of individuals in BSA, including switching between flight, vigilance and foraging behavior, make this algorithm multifaceted. It uses the principles of bird flocking, communication, adaptability, leading and following to efficiently find optimal solutions.

MQL5 Wizard Techniques you should know (Part 32): Regularization

Regularization is a way of penalizing the loss function in proportion to the weighting applied throughout the various layers of a neural network. We look at the significance this can have, for some of the various regularization forms, in test runs with a wizard-assembled Expert Advisor.
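
As a reminder of what the penalty looks like, added to the loss in proportion to the network weights, here is a short NumPy sketch of the two common forms; it is an illustration only, not the wizard's implementation:

import numpy as np

def regularized_loss(base_loss, weights, lam=1e-3, kind="l2"):
    """Add an L1 or L2 weight penalty to a base loss value."""
    w = np.concatenate([np.ravel(layer) for layer in weights])
    penalty = np.sum(np.abs(w)) if kind == "l1" else np.sum(w ** 2)
    return base_loss + lam * penalty

print(regularized_loss(0.25, [np.array([[0.5, -0.2], [0.1, 0.3]])], lam=0.01))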