Articles on machine learning in trading

Creating AI-based trading robots: native integration with Python, matrices and vectors, math and statistics libraries and much more.

Find out how to use machine learning in trading. Neurons, perceptrons, convolutional and recurrent networks, predictive models — start with the basics and work your way up to developing your own AI. You will learn how to train and apply neural networks for algorithmic trading in financial markets.

Neural Networks in Trading: Controlled Segmentation

In this article, we will discuss a method for complex multimodal interaction analysis and feature understanding.

Neural networks made easy (Part 72): Trajectory prediction in noisy environments

The quality of future state predictions plays an important role in the Goal-Conditioned Predictive Coding method, which we discussed in the previous article. In this article I want to introduce you to an algorithm that can significantly improve the prediction quality in stochastic environments, such as financial markets.

Build Self Optimizing Expert Advisors in MQL5 (Part 7): Trading With Multiple Periods At Once

In this series of articles, we have considered multiple ways of identifying the best period for our technical indicators. Today, we demonstrate the opposite logic: instead of picking the single best period, we show the reader how to employ all available periods effectively. This approach reduces the amount of data discarded and offers alternative use cases for machine learning algorithms beyond ordinary price prediction.

Time series clustering in causal inference

Clustering algorithms are important unsupervised machine learning methods that divide the original data into groups of similar observations. Using these groups, you can analyze the market for a specific cluster, search for the most stable clusters on new data, and make causal inferences. The article proposes an original method for time series clustering in Python.
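
For orientation, here is a minimal, generic sketch of clustering windowed returns with scikit-learn's KMeans. It is not the article's original method; the synthetic price series and the window length of 20 are placeholder assumptions.

```python
# Generic sketch (not the article's original method): cluster windowed returns
# with scikit-learn's KMeans; the synthetic `prices` series and the window
# length of 20 are placeholder assumptions for illustration only.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
prices = 100 + np.cumsum(rng.normal(0, 1, 1000))      # synthetic price series
returns = np.diff(np.log(prices))

window = 20
segments = np.array([returns[i:i + window] for i in range(len(returns) - window)])

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(segments)
labels = kmeans.labels_                                 # cluster id per window
print(np.bincount(labels))                              # how many windows per regime
```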

Neural Network in Practice: Pseudoinverse (I)

Today we begin to consider how to implement the pseudoinverse calculation in pure MQL5. The code we are going to look at will be much more complex for beginners than I expected, and I am still figuring out how to explain it in a simple way. So for now, consider this an opportunity to study some unusual code, calmly and attentively. Although it is not aimed at efficient or fast application, its goal is to be as didactic as possible.

Integrate Your Own LLM into EA (Part 5): Develop and Test Trading Strategy with LLMs (I) - Fine-tuning

With the rapid development of artificial intelligence, large language models (LLMs) have become an important part of it, so we should think about how to integrate powerful LLMs into our algorithmic trading. For most people, it is difficult to fine-tune these powerful models to their own needs, deploy them locally, and then apply them to algorithmic trading. This series of articles takes a step-by-step approach to achieving this goal.

Artificial Bee Hive Algorithm (ABHA): Tests and results

In this article, we will continue exploring the Artificial Bee Hive Algorithm (ABHA) by diving into the code and considering the remaining methods. As you might remember, each bee in the model is represented as an individual agent whose behavior depends on internal and external information, as well as motivational state. We will test the algorithm on various functions and summarize the results by presenting them in the rating table.

Neural Network in Practice: Straight Line Function

In this article, we will take a quick look at some methods for obtaining a function that can represent our data in the database. I will not go into detail about how to use statistics and probability studies to interpret the results; let's leave that for those who really want to delve into the mathematical side of the matter. Exploring these questions will be critical to understanding what is involved in studying neural networks. Here we will consider this issue quite calmly.

Category Theory in MQL5 (Part 23): A different look at the Double Exponential Moving Average

In this article, we continue the theme from the previous one of looking at everyday trading indicators in a 'new' light. Here we handle the horizontal composition of natural transformations, and the best indicator for this, expanding on what we just covered, is the double exponential moving average (DEMA).

Neural networks made easy (Part 65): Distance Weighted Supervised Learning (DWSL)

In this article, we will get acquainted with an interesting algorithm that is built at the intersection of supervised and reinforcement learning methods.

Neural Networks in Trading: Directional Diffusion Models (DDM)

In this article, we discuss Directional Diffusion Models that exploit data-dependent anisotropic and directed noise in a forward diffusion process to capture meaningful graph representations.

Neural networks made easy (Part 62): Using Decision Transformer in hierarchical models

In recent articles, we have seen several options for using the Decision Transformer method. The method allows analyzing not only the current state, but also the trajectory of previous states and actions performed in them. In this article, we will focus on using this method in hierarchical models.

Neural networks made easy (Part 52): Research with optimism and distribution correction

As the model is trained on the experience replay buffer, the current Actor policy moves further and further away from the stored examples, which reduces the efficiency of training the model as a whole. In this article, we will look at an algorithm that improves sample efficiency in reinforcement learning.

Population optimization algorithms: Changing shape, shifting probability distributions and testing on Smart Cephalopod (SC)

The article examines the impact of changing the shape of probability distributions on the performance of optimization algorithms. We will conduct experiments using the Smart Cephalopod (SC) test algorithm to evaluate the efficiency of various probability distributions in the context of optimization problems.
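
As a minimal illustration of what "changing the shape of the distribution" means in this context, the sketch below draws candidate solutions around a current best point from distributions of different shapes and compares them on a simple sphere test function. It is not the Smart Cephalopod algorithm itself; all numbers are placeholder assumptions.

```python
# Minimal illustration (not the Smart Cephalopod algorithm itself): generate
# candidate solutions around a current best point using distributions of
# different shapes and compare them on a simple sphere test function.
import numpy as np

rng = np.random.default_rng(1)
best = np.zeros(5) + 2.0                      # current best point (assumption)
f = lambda x: np.sum(x ** 2, axis=1)          # sphere test function

candidates = {
    "uniform": best + rng.uniform(-1, 1, (1000, 5)),
    "normal":  best + rng.normal(0, 0.5, (1000, 5)),
    "cauchy":  best + 0.1 * rng.standard_cauchy((1000, 5)),   # heavy tails
}
for name, pop in candidates.items():
    print(name, f(pop).min())                 # best value found by each shape
```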

Neural networks made easy (Part 64): ConserWeightive Behavioral Cloning (CWBC) method

As a result of tests performed in previous articles, we came to the conclusion that the optimality of the trained strategy largely depends on the training set used. In this article, we will get acquainted with a fairly simple yet effective method for selecting trajectories to train models.

Neural Networks in Trading: Spatio-Temporal Neural Network (STNN)

In this article we will talk about using space-time transformations to effectively predict upcoming price movement. To improve the numerical prediction accuracy in STNN, a continuous attention mechanism is proposed that allows the model to better consider important aspects of the data.

Data label for time series mining (Part 6): Apply and Test in EA Using ONNX

This series of articles introduces several time series labeling methods that can produce data suitable for most artificial intelligence models. Targeted data labeling, tailored to your needs, can make the trained model better match the intended design, improve its accuracy, and even help the model make a qualitative leap!
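
A generic example of such targeted labeling, not necessarily the article's own scheme, is to label each bar by whether price rises by more than a threshold over the next few bars; the `close` column name, horizon, and threshold below are assumptions.

```python
# Generic example of targeted labeling (not necessarily the article's scheme):
# label each bar 1/0 depending on whether price rises by more than a threshold
# over the next `horizon` bars. The `close` column name is an assumption.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
df = pd.DataFrame({"close": 100 + np.cumsum(rng.normal(0, 0.3, 500))})

horizon, threshold = 5, 0.002                         # look-ahead and return cut-off
future_return = df["close"].shift(-horizon) / df["close"] - 1
df["label"] = (future_return > threshold).astype(int)
df = df.iloc[:-horizon]                               # drop rows without a future value
print(df["label"].value_counts())
```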

Population optimization algorithms: Simulated Annealing (SA) algorithm. Part I

The Simulated Annealing algorithm is a metaheuristic inspired by the metal annealing process. In the article, we will conduct a thorough analysis of the algorithm and debunk a number of common beliefs and myths surrounding this widely known optimization method. The second part of the article will consider the custom Simulated Isotropic Annealing (SIA) algorithm.
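
For readers new to the method, here is a textbook simulated annealing loop on a one-dimensional test function, for orientation only; the article's MQL5 implementation and the SIA variant differ in detail, and the temperature and cooling values here are assumptions.

```python
# Textbook simulated annealing on a 1-D test function, for orientation only;
# the article's MQL5 implementation and the SIA variant differ in detail.
import math
import random

def f(x):                                    # objective to minimize
    return x * x + 10 * math.sin(x)

x = random.uniform(-10, 10)
best_x, best_f = x, f(x)
T = 10.0                                     # initial temperature (assumption)
while T > 1e-3:
    candidate = x + random.gauss(0, 1)
    delta = f(candidate) - f(x)
    if delta < 0 or random.random() < math.exp(-delta / T):
        x = candidate                        # accept better or occasionally worse
    if f(x) < best_f:
        best_x, best_f = x, f(x)
    T *= 0.99                                # geometric cooling schedule
print(best_x, best_f)
```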

MQL5 Wizard Techniques you should know (Part 11): Number Walls

Number Walls are a variant of Linear Feedback Shift Registers that prescreen sequences for predictability by checking for convergence. We look at how these ideas could be of use in MQL5.

MQL5 Wizard Techniques you should know (Part 51): Reinforcement Learning with SAC

Soft Actor Critic is a reinforcement learning algorithm that utilizes three neural networks: an actor network and two critic networks. These models are paired in a master-slave partnership where the critics are trained to improve the forecast accuracy of the actor network. While also introducing ONNX in this series, we explore how these ideas could be put to the test as a custom signal of a wizard-assembled Expert Advisor.
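
The twin-critic idea can be shown with a tiny numerical example: the target value uses the minimum of the two critics' estimates to curb overestimation. This is a toy calculation with assumed numbers, not a full SAC or ONNX implementation.

```python
# Tiny numerical illustration of the twin-critic idea behind SAC: the target
# uses the minimum of two critic estimates to curb overestimation. This is a
# toy calculation, not a full SAC or ONNX implementation.
import numpy as np

reward, gamma, alpha = 0.5, 0.99, 0.2         # toy values (assumptions)
q1_next, q2_next = 1.30, 1.10                 # the two critics' next-state estimates
log_prob_next = -1.2                          # actor's log-probability of next action

target_q = reward + gamma * (min(q1_next, q2_next) - alpha * log_prob_next)
print(round(target_q, 4))                     # value both critics are trained toward
```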

MQL5 Wizard Techniques you should know (Part 22): Conditional GANs

Generative Adversarial Networks are a pairing of neural networks that train off each other for more accurate results. We adopt the conditional type of these networks as we look at possible applications in forecasting financial time series within an Expert Signal Class.

Implementing Practical Modules from Other Languages in MQL5 (Part 03): Schedule Module from Python, the OnTimer Event on Steroids

The schedule module in Python offers a simple way to schedule repeated tasks. While MQL5 lacks a built-in equivalent, in this article we’ll implement a similar library to make it easier to set up timed events in MetaTrader 5.
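
This is the typical usage pattern of Python's real schedule module, which the article's MQL5 library mimics; the `check_market` job body is a placeholder.

```python
# Typical usage of Python's real `schedule` module, which the article's MQL5
# library mimics; the `check_market` job body is a placeholder.
import time
import schedule

def check_market():
    print("checking market...")              # placeholder task

schedule.every(10).minutes.do(check_market)
schedule.every().day.at("17:00").do(check_market)

while True:
    schedule.run_pending()
    time.sleep(1)
```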

Market Simulation (Part 06): Transferring Information from MetaTrader 5 to Excel

Many people, especially non-programmers, find it very difficult to transfer information between MetaTrader 5 and other programs. One such program is Excel. Many traders use Excel to manage and maintain their risk control. It is an excellent program and easy to learn, even for those who are not VBA programmers. Here we will look at how to establish a connection between MetaTrader 5 and Excel using a very simple method.
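
As a point of comparison, one common route is the official MetaTrader5 Python package plus pandas; the article's own "very simple method" may differ. The symbol, bar count, and output file name below are assumptions, and `to_excel` needs openpyxl installed.

```python
# One common route from MetaTrader 5 to Excel via the official MetaTrader5
# Python package and pandas; the article's own "very simple method" may differ.
# Symbol, bar count, and output file name are assumptions; to_excel needs openpyxl.
import MetaTrader5 as mt5
import pandas as pd

if not mt5.initialize():
    raise RuntimeError(f"initialize() failed: {mt5.last_error()}")

rates = mt5.copy_rates_from_pos("EURUSD", mt5.TIMEFRAME_H1, 0, 500)
mt5.shutdown()

df = pd.DataFrame(rates)
df["time"] = pd.to_datetime(df["time"], unit="s")      # convert epoch seconds
df.to_excel("EURUSD_H1.xlsx", index=False)
```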

Data Science and ML (Part 43): Hidden Patterns Detection in Indicators Data Using Latent Gaussian Mixture Models (LGMM)

Have you ever looked at the chart and felt that strange sensation… that there’s a pattern hidden just beneath the surface? A secret code that might reveal where prices are headed if only you could crack it? Meet LGMM, the Market’s Hidden Pattern Detector. A machine learning model that helps identify those hidden patterns in the market.
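
A minimal sketch of the underlying idea, fitting a Gaussian mixture to indicator-like features with scikit-learn, is shown below; the article's LGMM pipeline is more involved, and the synthetic RSI/momentum features are placeholder assumptions.

```python
# Minimal sketch of fitting a Gaussian mixture to indicator-like features with
# scikit-learn; the article's LGMM pipeline is more involved. The synthetic
# RSI/momentum features here are placeholder assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
features = np.column_stack([
    rng.uniform(0, 100, 1000),          # stand-in for an RSI-like indicator
    rng.normal(0, 1, 1000),             # stand-in for a momentum reading
])

gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
hidden_state = gmm.fit_predict(features)         # latent regime per observation
print(np.bincount(hidden_state), gmm.means_)
```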

Data Science and ML (Part 46): Stock Markets Forecasting Using N-BEATS in Python

N-BEATS is a revolutionary deep learning model designed for time series forecasting. It was introduced to surpass classical time series forecasting models such as ARIMA, Prophet, and VAR. In this article, we will discuss this model and use it to predict the stock market.

MQL5 Wizard Techniques you should know (Part 72): Using Patterns of MACD and the OBV with Supervised Learning

We follow up on our last article, where we introduced the indicator pair of MACD and OBV, by looking at how this pairing could be enhanced with machine learning. MACD and OBV are a complementary trend-and-volume pairing. Our machine learning approach uses a convolutional neural network that engages the Exponential kernel when sizing its kernels and channels to fine-tune the forecasts of this indicator pairing. As always, this is done in a custom signal class file that works with the MQL5 Wizard to assemble an Expert Advisor.

Comet Tail Algorithm (CTA)

In this article, we will look at the Comet Tail Optimization Algorithm (CTA), which draws inspiration from unique space objects - comets and their impressive tails that form when approaching the Sun. The algorithm is based on the concept of the motion of comets and their tails, and is designed to find optimal solutions in optimization problems.

Neural Networks in Trading: Superpoint Transformer (SPFormer)

In this article, we introduce a method for segmenting 3D objects based on Superpoint Transformer (SPFormer), which eliminates the need for intermediate data aggregation. This speeds up the segmentation process and improves the performance of the model.

Generative Adversarial Networks (GANs) for Synthetic Data in Financial Modeling (Part 1): Introduction to GANs and Synthetic Data in Financial Modeling

This article introduces traders to Generative Adversarial Networks (GANs) for generating synthetic financial data, addressing data limitations in model training. It covers GAN basics, Python and MQL5 code implementations, and practical applications in finance, empowering traders to enhance model accuracy and robustness through synthetic data.

Neural networks made easy (Part 41): Hierarchical models

The article describes hierarchical training models that offer an effective approach to solving complex machine learning problems. Hierarchical models consist of several levels, each of which is responsible for different aspects of the task.

Neural Networks Made Easy (Part 91): Frequency Domain Forecasting (FreDF)

We continue to explore the analysis and forecasting of time series in the frequency domain. In this article, we will get acquainted with a new method to forecast data in the frequency domain, which can be added to many of the algorithms we have studied previously.

MQL5 Wizard Techniques you should know (Part 49): Reinforcement Learning with Proximal Policy Optimization

Proximal Policy Optimization is another reinforcement learning algorithm that updates the policy, often in network form, in very small incremental steps to ensure model stability. We examine how this could be of use, as we have in previous articles, in a wizard-assembled Expert Advisor.
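
The "small incremental steps" come from PPO's clipped surrogate objective, shown here as a small numpy calculation with toy numbers; this is the generic formula, not the wizard signal class from the article.

```python
# The clipped surrogate objective at the heart of PPO, shown as a small numpy
# calculation; toy numbers only, not the wizard signal class from the article.
import numpy as np

log_prob_new = np.array([-0.9, -1.1, -0.4])
log_prob_old = np.array([-1.0, -1.0, -1.0])
advantages   = np.array([ 1.0, -0.5,  2.0])
eps = 0.2                                         # PPO clipping parameter

ratio = np.exp(log_prob_new - log_prob_old)       # new policy / old policy
clipped = np.clip(ratio, 1 - eps, 1 + eps)
objective = np.mean(np.minimum(ratio * advantages, clipped * advantages))
print(objective)                                  # maximized => small, safe policy steps
```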

Self Optimizing Expert Advisors in MQL5 (Part 8): Multiple Strategy Analysis (2)

Join us for our follow-up discussion, where we merge our first two trading strategies into an ensemble trading strategy. We demonstrate the different schemes possible for combining multiple strategies and how to exercise control over the parameter space, so that effective optimization remains possible even as the number of parameters grows.

Overcoming The Limitation of Machine Learning (Part 6): Effective Memory Cross Validation

In this discussion, we contrast the classical approach to time series cross-validation with modern alternatives that challenge its core assumptions. We expose key blind spots in the traditional method—especially its failure to account for evolving market conditions. To address these gaps, we introduce Effective Memory Cross-Validation (EMCV), a domain-aware approach that questions the long-held belief that more historical data always improves performance.

MQL5 Wizard Techniques you should know (Part 31): Selecting the Loss Function

The loss function is the key metric of machine learning algorithms: it provides feedback to the training process by quantifying how well a given set of parameters is performing compared to the intended target. We explore the various forms of this function in an MQL5 custom wizard class.
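
To make the trade-offs between common choices concrete, here are a few loss functions written out in numpy; these are the generic formulas, not the MQL5 custom wizard class itself, and the Huber threshold is an assumption.

```python
# A few common loss functions written out in numpy so the trade-offs are
# concrete; generic formulas, not the MQL5 custom wizard class itself.
import numpy as np

y_true = np.array([1.10, 1.20, 0.95])
y_pred = np.array([1.05, 1.30, 1.00])

mse = np.mean((y_true - y_pred) ** 2)                 # punishes large errors heavily
mae = np.mean(np.abs(y_true - y_pred))                # robust to outliers

delta = 0.05                                          # Huber threshold (assumption)
err = np.abs(y_true - y_pred)
huber = np.mean(np.where(err <= delta,
                         0.5 * err ** 2,
                         delta * (err - 0.5 * delta)))
print(mse, mae, huber)
```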

Neural Networks in Trading: Hierarchical Feature Learning for Point Clouds

We continue to study algorithms for extracting features from a point cloud. In this article, we will get acquainted with the mechanisms for increasing the efficiency of the PointNet method.

Neural Networks in Trading: Market Analysis Using a Pattern Transformer

When we use models to analyze the market situation, we mainly focus on candlesticks. However, it has long been known that candlestick patterns can help in predicting future price movements. In this article, we will get acquainted with a method that allows us to integrate both of these approaches.

Neural Networks in Trading: Hierarchical Vector Transformer (Final Part)

We continue studying the Hierarchical Vector Transformer method. In this article, we will complete the construction of the model. We will also train and test it on real historical data.

Neural Networks Made Easy (Part 97): Training Models With MSFformer

When exploring various model architecture designs, we often devote insufficient attention to the process of model training. In this article, I aim to address this gap.

Population optimization algorithms: Intelligent Water Drops (IWD) algorithm

The article considers an interesting algorithm inspired by inanimate nature: Intelligent Water Drops (IWD), which simulates the process of river bed formation. The ideas of this algorithm made it possible to significantly improve the previous leader of the rating, SDS. As usual, the new leader (the modified SDSm) can be found in the attachment.