Articles on machine learning in trading

Creating AI-based trading robots: native integration with Python, matrices and vectors, math and statistics libraries and much more.

Find out how to use machine learning in trading. Neurons, perceptrons, convolutional and recurrent networks, predictive models — start with the basics and work your way up to developing your own AI. You will learn how to train and apply neural networks for algorithmic trading in financial markets.

Neural Networks in Trading: Hierarchical Vector Transformer (HiVT)

We invite you to get acquainted with the Hierarchical Vector Transformer (HiVT) method, which was developed for fast and accurate forecasting of multimodal time series.

Reimagining Classic Strategies (Part XI): Moving Average Cross Over (II)

Moving averages and the stochastic oscillator can be used to generate trend-following trading signals. However, these signals appear only after the price action has occurred. We can effectively overcome this inherent lag in technical indicators using AI. This article will teach you how to create a fully autonomous AI-powered Expert Advisor in a way that can improve any of your existing trading strategies, even the oldest ones.

Quantization in machine learning (Part 1): Theory, sample code, analysis of implementation in CatBoost

The article considers the theoretical application of quantization in the construction of tree models and showcases the implemented quantization methods in CatBoost. No complex mathematical equations are used.

Population optimization algorithms: Bat algorithm (BA)

In this article, I will consider the Bat Algorithm (BA), which shows good convergence on smooth functions.

Neural Networks in Trading: Using Language Models for Time Series Forecasting

We continue to study time series forecasting models. In this article, we get acquainted with a complex algorithm built on the use of a pre-trained language model.

Neural networks made easy (Part 68): Offline Preference-guided Policy Optimization

Since the first articles devoted to reinforcement learning, we have in one way or another touched upon two problems: exploring the environment and determining the reward function. Recent articles have been devoted to the problem of exploration in offline learning. In this article, I would like to introduce you to an algorithm whose authors completely eliminated the reward function.

Neural Networks Made Easy (Part 92): Adaptive Forecasting in Frequency and Time Domains

The authors of the FreDF method experimentally confirmed the advantage of combined forecasting in the frequency and time domains. However, the use of the weight hyperparameter is not optimal for non-stationary time series. In this article, we will get acquainted with the method of adaptive combination of forecasts in frequency and time domains.

Population optimization algorithms: Nelder–Mead, or simplex search (NM) method

The article presents a complete exploration of the Nelder-Mead method, explaining how the simplex (function parameter space) is modified and rearranged at each iteration to achieve an optimal solution, and describes how the method can be improved.

Neural networks made easy (Part 60): Online Decision Transformer (ODT)

The last two articles were devoted to the Decision Transformer method, which models action sequences in the context of an autoregressive model of desired rewards. In this article, we will look at another optimization algorithm for this method.

Neural networks made easy (Part 39): Go-Explore, a different approach to exploration

We continue studying environment exploration in reinforcement learning models. In this article, we will look at another algorithm, Go-Explore, which allows the environment to be explored effectively during model training.

MQL5 Wizard Techniques you should know (Part 08): Perceptrons

Perceptrons, single hidden layer networks, can be a good segue for anyone familiar with basic automated trading who is looking to dip into neural networks. We take a step-by-step look at how this could be realized in a signal class assembly that is part of the MQL5 Wizard classes for Expert Advisors.

Reimagining Classic Strategies (Part III): Forecasting Higher Highs And Lower Lows

In this series of articles, we empirically analyze classic trading strategies to see if we can improve them using AI. In today's discussion, we try to predict higher highs and lower lows using the Linear Discriminant Analysis model.

From Python to MQL5: A Journey into Quantum-Inspired Trading Systems

The article explores the development of a quantum-inspired trading system, transitioning from a Python prototype to an MQL5 implementation for real-world trading. The system uses quantum computing principles like superposition and entanglement to analyze market states, though it runs on classical computers using quantum simulators. Key features include a three-qubit system for analyzing eight market states simultaneously, 24-hour lookback periods, and seven technical indicators for market analysis. While the accuracy rates might seem modest, they provide a significant edge when combined with proper risk management strategies.
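
As a rough illustration of why three qubits cover eight market states (2^3 = 8), the short NumPy sketch below enumerates the basis states of a three-qubit register and puts them into a uniform superposition. The state labels are hypothetical and this is not the article's actual simulator.

import numpy as np

# Hypothetical labels for the 2**3 = 8 basis states of a three-qubit register
labels = ["strong_up", "up", "weak_up", "flat_bull",
          "flat_bear", "weak_down", "down", "strong_down"]

n_qubits = 3
dim = 2 ** n_qubits                       # 8 market states

# Uniform superposition: every state starts with equal amplitude
state = np.ones(dim, dtype=complex) / np.sqrt(dim)

# Measurement probabilities are the squared amplitude magnitudes
probs = np.abs(state) ** 2
for label, p in zip(labels, probs):
    print(f"{label:>11}: {p:.3f}")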

Turtle Shell Evolution Algorithm (TSEA)

This is a unique optimization algorithm inspired by the evolution of the turtle shell. The TSEA algorithm emulates the gradual formation of keratinized skin areas, which represent optimal solutions to a problem. The best solutions become "harder" and are located closer to the outer surface, while the less successful solutions remain "softer" and are located inside. The algorithm clusters solutions by quality and distance, which allows it to preserve less successful options and provides flexibility and adaptability.
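
The grouping of candidates by quality and distance can be illustrated with a minimal, generic sketch (this is not the TSEA algorithm itself; the objective function and group count are placeholders): a population is split into distance-based groups, and each group keeps its best "hard" member along with a weaker "soft" one.

import numpy as np

rng = np.random.default_rng(0)

def fitness(x):
    # Hypothetical objective: minimize the sphere function
    return np.sum(x ** 2, axis=1)

pop = rng.uniform(-5, 5, size=(60, 2))    # population of candidate solutions in 2-D
fit = fitness(pop)

# Crude distance-based grouping: assign each candidate to its nearest anchor point
k = 4
anchors = pop[rng.choice(len(pop), k, replace=False)]
clusters = np.argmin(np.linalg.norm(pop[:, None] - anchors[None], axis=2), axis=1)

# Within each group keep the best ("hard") solution and sample one weaker ("soft") one
for c in range(k):
    members = np.where(clusters == c)[0]
    if members.size == 0:
        continue
    hard = pop[members[np.argmin(fit[members])]]
    soft = pop[rng.choice(members)]
    print(f"cluster {c}: hard {hard.round(2)}, soft {soft.round(2)}")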

Data Science and ML (Part 33): Pandas Dataframe in MQL5, Data Collection for ML Usage made easier

When working with machine learning models, it is essential to ensure consistency in the data used for training, validation, and testing. In this article, we will create our own version of the Pandas library in MQL5 to provide a unified approach to handling machine learning data and to ensure that the same data is used both inside and outside MQL5, where most of the training occurs.

Neural Networks Made Easy (Part 87): Time Series Patching

Forecasting plays an important role in time series analysis. In the new article, we will talk about the benefits of time series patching.

MQL5 Wizard Techniques you should know (Part 13): DBSCAN for Expert Signal Class

Density-Based Spatial Clustering of Applications with Noise (DBSCAN) is an unsupervised way of grouping data that requires only two input parameters, which, compared to other approaches like k-means, is a boon. We delve into how this could be constructive for testing and, eventually, trading with Wizard-assembled Expert Advisors.
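
The two parameters the summary alludes to are the neighborhood radius (eps) and the minimum number of points needed to form a dense region (min_samples). A minimal scikit-learn sketch with made-up feature data, not tied to the article's MQL5 implementation, might look like this:

import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(42)
# Hypothetical 2-D feature vectors (e.g. normalized indicator readings)
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)),
               rng.normal(3.0, 0.3, (50, 2))])

# Only two parameters to choose: neighborhood radius and minimum cluster density
model = DBSCAN(eps=0.5, min_samples=5).fit(X)

print("cluster labels found:", set(model.labels_))   # -1 marks noise points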

Category Theory in MQL5 (Part 5): Equalizers

Category Theory is a diverse and expanding branch of mathematics which has only recently started getting some coverage in the MQL5 community. This series of articles looks to explore and examine some of its concepts and axioms, with the overall goal of establishing an open library that provides insight while also, hopefully, furthering the use of this remarkable field in traders' strategy development.

Data Science and ML (Part 26): The Ultimate Battle in Time Series Forecasting — LSTM vs GRU Neural Networks

In the previous article, we discussed a simple RNN which, despite its inability to capture long-term dependencies in the data, was able to produce a profitable strategy. In this article, we discuss both the Long Short-Term Memory (LSTM) and the Gated Recurrent Unit (GRU), architectures introduced to overcome the shortcomings of a simple RNN and to outperform it.
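
To make the comparison concrete, a minimal Keras sketch (assuming TensorFlow is available; the look-back window, feature count, and layer width are placeholders) can define the two competing architectures with identical topology, so that only the recurrent cell differs:

from tensorflow import keras
from tensorflow.keras import layers

timesteps, features = 30, 4               # hypothetical look-back window and feature count

def build(cell):
    # Same topology for both models, so only the recurrent cell differs
    return keras.Sequential([
        layers.Input(shape=(timesteps, features)),
        cell(32),                          # LSTM or GRU layer with 32 units
        layers.Dense(1),                   # next-bar forecast
    ])

lstm_model = build(layers.LSTM)
gru_model = build(layers.GRU)
for model in (lstm_model, gru_model):
    model.compile(optimizer="adam", loss="mse")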

Neural networks made easy (Part 51): Behavior-Guided Actor-Critic (BAC)

The last two articles considered the Soft Actor-Critic algorithm, which incorporates entropy regularization into the reward function. This approach balances environmental exploration and model exploitation, but it is only applicable to stochastic models. The current article proposes an alternative approach that is applicable to both stochastic and deterministic models.
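
For readers unfamiliar with Soft Actor-Critic, the entropy-regularized objective mentioned here can be written in standard notation (not quoted from the article) as

J(\pi) = \mathbb{E}_{\tau \sim \pi}\Big[\sum_{t} r(s_t, a_t) + \alpha \, \mathcal{H}\big(\pi(\cdot \mid s_t)\big)\Big],

where the temperature \alpha weighs the entropy bonus that rewards exploration against the plain return that rewards exploitation.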

Neural networks made easy (Part 45): Training state exploration skills

Training useful skills without an explicit reward function is one of the main challenges in hierarchical reinforcement learning. We have previously become acquainted with two algorithms that address this problem, but the question of how completely the environment is explored remains open. This article demonstrates a different approach to skill training, in which the skills used depend directly on the current state of the system.

Category Theory in MQL5 (Part 22): A different look at Moving Averages

In this article, we attempt to simplify our illustration of the concepts covered in this series by focusing on just one indicator, the most common and probably the easiest to understand: the moving average. In doing so, we consider the significance and possible applications of vertical natural transformations.

Neural networks made easy (Part 46): Goal-conditioned reinforcement learning (GCRL)

In this article, we will have a look at yet another reinforcement learning approach. It is called goal-conditioned reinforcement learning (GCRL). In this approach, an agent is trained to achieve different goals in specific scenarios.

Category Theory in MQL5 (Part 13): Calendar Events with Database Schemas

This article, which follows the Category Theory implementation of orders in MQL5, considers how database schemas can be incorporated for classification in MQL5. We take an introductory look at how database schema concepts could be married with category theory when identifying trade-relevant text (string) information. Calendar events are the focus.

Neural Networks in Trading: Practical Results of the TEMPO Method

We continue our acquaintance with the TEMPO method. In this article we will evaluate the actual effectiveness of the proposed approaches on real historical data.

Data Science and Machine Learning (Part 20): Algorithmic Trading Insights, A Faceoff Between LDA and PCA in MQL5

Uncover the secrets behind these powerful dimensionality reduction techniques as we dissect their applications within the MQL5 trading environment. Delve into the nuances of Linear Discriminant Analysis (LDA) and Principal Component Analysis (PCA), gaining a profound understanding of their impact on strategy development and market analysis.

Neural networks made easy (Part 78): Decoder-free Object Detector with Transformer (DFFT)

In this article, I propose to look at the issue of building a trading strategy from a different angle. We will not predict future price movements, but will try to build a trading system based on the analysis of historical data.

Data Science and Machine Learning (Part 25): Forex Timeseries Forecasting Using a Recurrent Neural Network (RNN)

Recurrent neural networks (RNNs) excel at leveraging past information to predict future events. Their remarkable predictive capabilities have been applied across various domains with great success. In this article, we will deploy RNN models to predict trends in the forex market, demonstrating their potential to enhance forecasting accuracy in forex trading.

Feature Engineering With Python And MQL5 (Part I): Forecasting Moving Averages For Long-Range AI Models

The moving averages are by far the best indicators for our AI models to predict. However, we can improve our accuracy even further by carefully transforming our data. This article will demonstrate how you can build AI models capable of forecasting further into the future than you may currently be doing, without significant drops in accuracy. It is truly remarkable how useful the moving averages are.

Using PSAR, Heiken Ashi, and Deep Learning Together for Trading

This project explores the fusion of deep learning and technical analysis to test trading strategies in forex. A Python script is used for rapid experimentation, employing an ONNX model alongside traditional indicators like PSAR, SMA, and RSI to predict EUR/USD movements. A MetaTrader 5 script then brings this strategy into a live environment, using historical data and technical analysis to make informed trading decisions. The backtesting results indicate a cautious yet consistent approach, with a focus on risk management and steady growth rather than aggressive profit-seeking.

Introduction to MQL5 (Part 5): A Beginner's Guide to Array Functions in MQL5

Explore the world of MQL5 arrays in Part 5, designed for absolute beginners. Simplifying complex coding concepts, this article focuses on clarity and inclusivity. Join our community of learners, where questions are embraced, and knowledge is shared!

Reimagining Classic Strategies (Part V): Multiple Symbol Analysis on USDZAR

In this series of articles, we revisit classical strategies to see if we can improve them using AI. In today's article, we examine the popular strategy of multiple symbol analysis using a basket of correlated securities, focusing on the exotic USDZAR currency pair.

Neural Networks in Trading: Unified Trajectory Generation Model (UniTraj)

Understanding agent behavior is important in many different areas, but most methods focus on just one of the tasks (understanding, noise removal, or prediction), which reduces their effectiveness in real-world scenarios. In this article, we will get acquainted with a model that can adapt to solving various problems.

MQL5 Wizard Techniques you should know (Part 16): Principal Component Analysis with Eigen Vectors

This article looks at Principal Component Analysis, a dimensionality-reduction technique in data analysis, and at how it can be implemented with eigenvalues and eigenvectors. As always, we aim to develop a prototype expert signal class usable in the MQL5 Wizard.
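
As a quick refresher on the mechanics (a generic NumPy sketch with synthetic data, not the article's MQL5 signal class), PCA via eigendecomposition reduces to three steps: center the data, eigendecompose the covariance matrix, and project onto the leading eigenvectors.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))             # hypothetical feature matrix: 200 samples, 5 features

Xc = X - X.mean(axis=0)                   # 1. center each feature
cov = np.cov(Xc, rowvar=False)            # 2. covariance matrix of the features
eigvals, eigvecs = np.linalg.eigh(cov)    #    its eigenvalues and eigenvectors (ascending)

order = np.argsort(eigvals)[::-1]         #    rank components by explained variance
components = eigvecs[:, order[:2]]        #    keep the two leading principal directions

Z = Xc @ components                       # 3. project the data onto the components
print(Z.shape)                            # (200, 2)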

Neural Networks in Trading: Dual-Attention-Based Trend Prediction Model

We continue the discussion about the use of piecewise linear representation of time series, which was started in the previous article. Today we will see how to combine this method with other approaches to time series analysis to improve the price trend prediction quality.

William Gann methods (Part III): Does Astrology Work?

Do the positions of planets and stars affect financial markets? Let's arm ourselves with statistics and big data, and embark on an exciting journey into the world where stars and stock charts intersect.

Artificial Electric Field Algorithm (AEFA)

The article presents the Artificial Electric Field Algorithm (AEFA), inspired by Coulomb's law of electrostatic force. The algorithm simulates electrical phenomena to solve complex optimization problems using charged particles and their interactions. Among other algorithms based on the laws of nature, AEFA exhibits some unique properties.
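
The underlying physical analogy is Coulomb's law, which in its standard form (not quoted from the article) gives the force between two point charges as

F = k \frac{q_1 q_2}{r^2},

so in algorithms of this kind better solutions are typically assigned larger "charges" and therefore pull the rest of the population toward themselves more strongly.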

Matrix Factorization: A more practical modeling

You might not have noticed that the matrix modeling was a little strange: only columns were specified, not rows and columns. This looks odd when reading code that performs matrix factorization, and if you were expecting to see both rows and columns listed, you might get confused when trying to factorize. Moreover, this way of modeling matrices is not the best, because it introduces limitations that force us to use other methods or functions that would not be necessary if the modeling were done in a more appropriate way.

Data Science and Machine Learning (Part 23): Why LightGBM and XGBoost outperform a lot of AI models?

These advanced gradient-boosted decision tree techniques offer superior performance and flexibility, making them ideal for financial modeling and algorithmic trading. Learn how to leverage these tools to optimize your trading strategies, improve predictive accuracy, and gain a competitive edge in the financial markets.
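
As a point of reference for the comparison (a generic sketch with synthetic data, assuming the lightgbm and xgboost Python packages are installed), both libraries expose a similar scikit-learn-style interface:

import numpy as np
from lightgbm import LGBMClassifier
from xgboost import XGBClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 10))                                     # hypothetical feature matrix
y = (X[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)    # synthetic up/down label

for Model in (LGBMClassifier, XGBClassifier):
    clf = Model(n_estimators=200, max_depth=4, learning_rate=0.05)
    clf.fit(X[:400], y[:400])                                      # train on the first 400 samples
    print(Model.__name__, "holdout accuracy:", clf.score(X[400:], y[400:]))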

Data Science and ML (Part 29): Essential Tips for Selecting the Best Forex Data for AI Training Purposes

In this article, we dive deep into the crucial aspects of choosing the most relevant and high-quality Forex data to enhance the performance of AI models.