Articles on machine learning in trading

Creating AI-based trading robots: native integration with Python, matrices and vectors, math and statistics libraries and much more.

Find out how to use machine learning in trading. Neurons, perceptrons, convolutional and recurrent networks, predictive models — start with the basics and work your way up to developing your own AI. You will learn how to train and apply neural networks for algorithmic trading in financial markets.

Population optimization algorithms: Artificial Bee Colony (ABC)

In this article, we will study the Artificial Bee Colony algorithm and supplement our knowledge with new principles of exploring functional spaces. I will showcase my interpretation of the classic version of the algorithm.

Data Science and Machine Learning (Part 14): Finding Your Way in the Markets with Kohonen Maps

Are you looking for a cutting-edge approach to trading that can help you navigate complex and ever-changing markets? Look no further than Kohonen maps, an innovative form of artificial neural networks that can help you uncover hidden patterns and trends in market data. In this article, we'll explore how Kohonen maps work, and how they can be used to develop smarter, more effective trading strategies. Whether you're a seasoned trader or just starting out, you won't want to miss this exciting new approach to trading.
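
As a rough illustration of the idea behind Kohonen maps (also known as self-organizing maps), here is a minimal NumPy sketch of the classic SOM update rule; the grid size, learning rate, decay schedule, and random training data are assumptions made purely for the example, not the article's implementation.

```python
import numpy as np

# Minimal self-organizing map (Kohonen map) sketch.
# Assumptions: a 10x10 grid of neurons, 4-dimensional inputs, random data.
rng = np.random.default_rng(42)
grid_h, grid_w, n_features = 10, 10, 4
weights = rng.random((grid_h, grid_w, n_features))   # codebook vectors
data = rng.random((500, n_features))                 # stand-in for market features

# Grid coordinates of every neuron, used by the neighborhood function.
coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w), indexing="ij"), axis=-1)

n_iterations, lr0, sigma0 = 1000, 0.5, 3.0
for t in range(n_iterations):
    x = data[rng.integers(len(data))]
    # Best matching unit (BMU): the neuron whose weights are closest to the sample.
    dist = np.linalg.norm(weights - x, axis=-1)
    bmu = np.unravel_index(np.argmin(dist), dist.shape)
    # Decaying learning rate and neighborhood radius.
    lr = lr0 * np.exp(-t / n_iterations)
    sigma = sigma0 * np.exp(-t / n_iterations)
    # Gaussian neighborhood around the BMU pulls nearby neurons toward the sample.
    grid_dist = np.linalg.norm(coords - np.array(bmu), axis=-1)
    h = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))[..., None]
    weights += lr * h * (x - weights)

print("Trained SOM codebook shape:", weights.shape)
```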

Neural Networks Made Easy (Part 88): Time-Series Dense Encoder (TiDE)

In an attempt to obtain the most accurate forecasts, researchers often complicate forecasting models, which in turn leads to increased model training and maintenance costs. Is such an increase always justified? This article introduces an algorithm that exploits the simplicity and speed of linear models while demonstrating results on par with the best models of more complex architecture.

Neural Networks in Trading: Optimizing the Transformer for Time Series Forecasting (LSEAttention)

The LSEAttention framework offers improvements to the Transformer architecture. It was designed specifically for long-term multivariate time series forecasting. The approaches proposed by the authors of the method can be applied to address entropy collapse and training instability, which are often encountered with the vanilla Transformer.
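
For readers unfamiliar with the log-sum-exp (LSE) trick that gives the framework its name, here is a minimal NumPy sketch of numerically stable attention weights; the toy scores are invented, and the snippet only illustrates the general idea, not the authors' implementation.

```python
import numpy as np

def stable_softmax(scores: np.ndarray) -> np.ndarray:
    """Softmax computed via the log-sum-exp trick to avoid overflow."""
    # Subtracting the row-wise maximum does not change the result,
    # but keeps the exponentials in a safe numeric range.
    shifted = scores - scores.max(axis=-1, keepdims=True)
    log_sum_exp = np.log(np.exp(shifted).sum(axis=-1, keepdims=True))
    return np.exp(shifted - log_sum_exp)

# Toy attention scores with large magnitudes that would overflow a naive softmax.
scores = np.array([[1000.0, 999.0, 998.0],
                   [  -5.0,   0.0,   5.0]])
weights = stable_softmax(scores)
print(weights)                 # rows sum to 1, no NaN/inf
print(weights.sum(axis=-1))
```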

Integrating ML models with the Strategy Tester (Conclusion): Implementing a regression model for price prediction

This article describes the implementation of a regression model based on a decision tree. The model should predict prices of financial assets. We have already prepared the data, trained and evaluated the model, as well as adjusted and optimized it. However, it is important to note that this model is intended for study purposes only and should not be used in real trading.
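
As a rough idea of what such a model looks like in Python, here is a minimal scikit-learn sketch of a decision-tree regressor trained on lagged closing prices; the synthetic price series, lag count, and tree depth are assumptions made for the example and are unrelated to the article's actual dataset.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_squared_error

# Synthetic price series as a stand-in for real quotes.
rng = np.random.default_rng(0)
close = np.cumsum(rng.normal(0, 1, 1000)) + 100

# Supervised dataset: predict the next close from the previous 5 closes.
n_lags = 5
X = np.column_stack([close[i:len(close) - n_lags + i] for i in range(n_lags)])
y = close[n_lags:]

split = int(len(X) * 0.8)
X_train, X_test, y_train, y_test = X[:split], X[split:], y[:split], y[split:]

model = DecisionTreeRegressor(max_depth=5, random_state=0)
model.fit(X_train, y_train)
pred = model.predict(X_test)
print("Test MSE:", mean_squared_error(y_test, pred))
```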

Population optimization algorithms: Stochastic Diffusion Search (SDS)

The article discusses Stochastic Diffusion Search (SDS), which is a very powerful and efficient optimization algorithm based on the principles of random walk. The algorithm allows finding optimal solutions in complex multidimensional spaces, while featuring a high speed of convergence and the ability to avoid local extrema.

Fast trading strategy tester in Python using Numba

The article implements a fast strategy tester for machine learning models using Numba. It is 50 times faster than the pure Python strategy tester. The author recommends using this library to speed up mathematical calculations, especially the ones involving loops.
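
To illustrate the kind of loop the speed-up applies to, here is a minimal sketch of a Numba-jitted backtest over a synthetic price array; the moving-average rule, position logic, and data are invented for the example and are not the author's tester.

```python
import numpy as np
from numba import njit

@njit(cache=True)
def backtest(close: np.ndarray, fast: int, slow: int) -> float:
    """Toy moving-average crossover backtest; returns cumulative PnL in points."""
    pnl = 0.0
    position = 0  # 1 = long, -1 = short, 0 = flat
    for i in range(slow, len(close)):
        fast_ma = close[i - fast:i].mean()
        slow_ma = close[i - slow:i].mean()
        if position != 0:
            pnl += position * (close[i] - close[i - 1])
        position = 1 if fast_ma > slow_ma else -1
    return pnl

rng = np.random.default_rng(1)
close = np.cumsum(rng.normal(0, 1, 1_000_000)) + 1000.0
print("PnL:", backtest(close, 10, 50))  # the jitted loop runs as native code
```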

Neural Networks in Trading: An Ensemble of Agents with Attention Mechanisms (Final Part)

In the previous article, we introduced the multi-agent adaptive framework MASAAT, which uses an ensemble of agents to perform cross-analysis of multimodal time series at different data scales. Today we will continue implementing the approaches of this framework in MQL5 and bring this work to a logical conclusion.

Experiments with neural networks (Part 4): Templates

In this article, I will use experimentation and non-standard approaches to develop a profitable trading system and check whether neural networks can be of any help to traders. The article treats MetaTrader 5 as a self-sufficient tool for using neural networks in trading and keeps the explanation simple.

Data Science and ML (Part 30): The Power Couple for Predicting the Stock Market, Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs)

In this article, we explore the dynamic integration of Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) in stock market prediction, leveraging CNNs' ability to extract patterns and RNNs' proficiency in handling sequential data. Let us see how this powerful combination can enhance the accuracy and efficiency of trading algorithms.
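
As a rough sketch of how the two architectures can be stacked, here is a minimal Keras model that feeds convolutional features into an LSTM layer; the window length, layer sizes, and random input data are assumptions made for illustration only.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

# Assumed input: windows of 60 time steps with 4 features (e.g. OHLC).
window_len, n_features = 60, 4

model = tf.keras.Sequential([
    layers.Input(shape=(window_len, n_features)),
    layers.Conv1D(32, kernel_size=3, activation="relu"),  # local pattern extraction
    layers.MaxPooling1D(pool_size=2),
    layers.LSTM(32),                                       # sequential dependencies
    layers.Dense(1),                                       # next-step price/return forecast
])
model.compile(optimizer="adam", loss="mse")
model.summary()

# Tiny synthetic batch just to show the expected shapes.
X = np.random.rand(128, window_len, n_features).astype("float32")
y = np.random.rand(128, 1).astype("float32")
model.fit(X, y, epochs=1, batch_size=32, verbose=0)
```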

Category Theory in MQL5 (Part 3)

Category Theory is a diverse and expanding branch of mathematics that is as yet relatively unexplored in the MQL5 community. This series of articles aims to introduce and examine some of its concepts, with the overall goal of establishing an open library that provides insight while hopefully furthering the use of this remarkable field in traders' strategy development.

Neural Networks Made Easy (Part 94): Optimizing the Input Sequence

When working with time series, we always use the source data in its historical sequence. But is this the best option? There is an opinion that changing the sequence of the input data will improve the efficiency of trained models. In this article, I invite you to get acquainted with one of the methods for optimizing the input sequence.

Neural Networks in Trading: A Multimodal, Tool-Augmented Agent for Financial Markets (FinAgent)

We invite you to explore FinAgent, a multimodal financial trading agent framework designed to analyze various types of data reflecting market dynamics and historical trading patterns.

Integrating ML models with the Strategy Tester (Part 3): Managing CSV files (II)

This material provides a complete guide to creating a class in MQL5 for efficient management of CSV files. We will see the implementation of methods for opening, writing, reading, and transforming data. We will also consider how to use them to store and access information. In addition, we will discuss the limitations and the most important aspects of using such a class. This article can be a valuable resource for those who want to learn how to process CSV files in MQL5.

Reimagining Classic Strategies (Part 16): Double Bollinger Band Breakouts

This article walks the reader through a reimagined version of the classical Bollinger Band breakout strategy. It identifies key weaknesses in the original approach, such as its well-known susceptibility to false breakouts, and then introduces a possible solution: the Double Bollinger Band trading strategy. This relatively lesser-known approach compensates for the weaknesses of the classical version and offers a more dynamic perspective on financial markets, helping us overcome the old limitations defined by the original rules and providing traders with a stronger and more adaptive framework.
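
To make the idea concrete, here is a minimal pandas sketch that computes two Bollinger Bands of different widths and flags a signal only when price sits in the zone between them; the period, the 1.0/2.0 deviation multipliers, and the synthetic prices are assumptions for the example, not the article's exact rules.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
close = pd.Series(np.cumsum(rng.normal(0, 1, 500)) + 100, name="close")

period = 20
ma = close.rolling(period).mean()
std = close.rolling(period).std()

# Inner and outer bands with different deviation multipliers.
inner_upper, inner_lower = ma + 1.0 * std, ma - 1.0 * std
outer_upper, outer_lower = ma + 2.0 * std, ma - 2.0 * std

# Price in the upper zone (between inner and outer upper band) suggests a
# sustained breakout rather than a shallow poke above the average.
long_signal = (close > inner_upper) & (close < outer_upper)
short_signal = (close < inner_lower) & (close > outer_lower)

signals = pd.DataFrame({"close": close, "long": long_signal, "short": short_signal})
print(signals.tail())
```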

Build Self Optimizing Expert Advisors in MQL5 (Part 8): Multiple Strategy Analysis

How best can we combine multiple strategies to create a powerful ensemble strategy? Join us in this discussion as we look to fit together three different strategies into our trading application. Traders often employ specialized strategies for opening and closing positions, and we want to know if our machines can perform this task better. For our opening discussion, we will get familiar with the faculties of the strategy tester and the principles of OOP we will need for this task.

MQL5 Wizard Techniques you should know (Part 80): Using Patterns of Ichimoku and the ADX-Wilder with TD3 Reinforcement Learning

This article follows up Part 74, where we examined the pairing of Ichimoku and the ADX under a Supervised Learning framework, by moving our focus to Reinforcement Learning. Ichimoku and ADX form a complementary combination of support/resistance mapping and trend-strength detection. In this installment, we explore how the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm can be used with this indicator set. As with earlier parts of the series, the implementation is carried out in a custom signal class designed for integration with the MQL5 Wizard, which facilitates seamless Expert Advisor assembly.

Build Self Optimizing Expert Advisors With MQL5 And Python (Part II): Tuning Deep Neural Networks

Machine learning models come with various adjustable parameters. In this series of articles, we will explore how to customize your AI models to fit your specific market using the SciPy library.
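
As a rough sketch of the general idea, not the article's exact procedure, here is how a SciPy optimizer can be pointed at a model's validation error to tune a hyperparameter; the scikit-learn MLP, the regularization parameter being tuned, and the synthetic data are all assumptions made for the example.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Synthetic regression data standing in for engineered market features.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 5))
y = X @ np.array([0.5, -1.0, 0.2, 0.0, 0.3]) + rng.normal(0, 0.1, 400)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.25, random_state=0)

def validation_loss(log_alpha: float) -> float:
    """Train a small MLP with the given L2 penalty and return its validation MSE."""
    model = MLPRegressor(hidden_layer_sizes=(16,), alpha=10 ** log_alpha,
                         max_iter=500, random_state=0)
    model.fit(X_tr, y_tr)
    return mean_squared_error(y_val, model.predict(X_val))

# Let SciPy search the exponent of alpha over a bounded interval.
result = minimize_scalar(validation_loss, bounds=(-6, 1), method="bounded")
print("Best alpha:", 10 ** result.x, "validation MSE:", result.fun)
```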

Neural networks made easy (Part 44): Learning skills with dynamics in mind

In the previous article, we introduced the DIAYN method, which offers an algorithm for learning a variety of skills. The acquired skills can be used for various tasks, but they can be quite unpredictable, which can make them difficult to use. In this article, we will look at an algorithm for learning predictable skills.

Example of Auto Optimized Take Profits and Indicator Parameters with SMA and EMA

This article presents a sophisticated Expert Advisor for forex trading, combining machine learning with technical analysis. It focuses on trading Apple stock, featuring adaptive optimization, risk management, and multiple strategies. Backtesting shows promising results with high profitability but also significant drawdowns, indicating potential for further refinement.

Data Science and Machine Learning (Part 19): Supercharge Your AI models with AdaBoost

AdaBoost is a powerful boosting algorithm designed to elevate the performance of your AI models. Short for Adaptive Boosting, it is a sophisticated ensemble learning technique that seamlessly integrates weak learners, enhancing their collective predictive strength.
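
For readers who want to see the technique in a few lines before diving into the article, here is a minimal scikit-learn sketch of AdaBoost combining shallow decision trees; the synthetic classification data and the estimator count are assumptions made for illustration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

# Synthetic binary classification data standing in for up/down labels.
X, y = make_classification(n_samples=1000, n_features=10, n_informative=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# AdaBoost repeatedly reweights the samples the previous weak learner got wrong;
# the default weak learner is a depth-1 decision "stump".
model = AdaBoostClassifier(n_estimators=100, random_state=0)
model.fit(X_tr, y_tr)
print("Test accuracy:", model.score(X_te, y_te))
```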

Introduction to MQL5 (Part 5): A Beginner's Guide to Array Functions in MQL5

Explore the world of MQL5 arrays in Part 5, designed for absolute beginners. Simplifying complex coding concepts, this article focuses on clarity and inclusivity. Join our community of learners, where questions are embraced, and knowledge is shared!

Neural networks made easy (Part 50): Soft Actor-Critic (model optimization)

In the previous article, we implemented the Soft Actor-Critic algorithm, but were unable to train a profitable model. Here we will optimize the previously created model to obtain the desired results.

Matrix Utils, Extending the Matrices and Vector Standard Library Functionality

Matrices serve as the foundation of machine learning algorithms and of computing in general because of their ability to handle large mathematical operations efficiently. The Standard Library has most of what one needs, but let's see how we can extend it by introducing, in a utils file, several functions that are not yet available in the library.

Population optimization algorithms: Cuckoo Optimization Algorithm (COA)

The next algorithm I will consider is cuckoo search optimization using Levy flights. This is one of the latest optimization algorithms and a new leader on the leaderboard.

Neural networks made easy (Part 35): Intrinsic Curiosity Module

We continue to study reinforcement learning algorithms. All the algorithms we have considered so far required the creation of a reward policy to enable the agent to evaluate each of its actions at each transition from one system state to another. However, this approach is rather artificial. In practice, there is some time lag between an action and a reward. In this article, we will get acquainted with a model training algorithm which can work with various time delays from the action to the reward.

Population optimization algorithms: Firefly Algorithm (FA)

In this article, I will consider the Firefly Algorithm (FA) optimization method. Thanks to the modification, the algorithm has turned from an outsider into a real rating table leader.

Neural networks made easy (Part 38): Self-Supervised Exploration via Disagreement

One of the key problems within reinforcement learning is environmental exploration. Previously, we have already seen the exploration method based on Intrinsic Curiosity. Today I propose to look at another algorithm: Exploration via Disagreement.

Data Science and ML (Part 26): The Ultimate Battle in Time Series Forecasting — LSTM vs GRU Neural Networks

In the previous article, we discussed a simple RNN which, despite its inability to capture long-term dependencies in the data, was able to produce a profitable strategy. In this article, we discuss both the Long Short-Term Memory (LSTM) and the Gated Recurrent Unit (GRU). These two architectures were introduced to overcome the shortcomings of a simple RNN and to outperform it.
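
For a quick feel of how similar the two architectures are in code, here is a minimal Keras sketch that builds an LSTM and a GRU forecaster with identical shapes; the window length, unit counts, and random data are assumptions made purely for illustration.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

window_len, n_features = 30, 1  # assumed: 30-step windows of a single series

def build_model(cell: str) -> tf.keras.Model:
    """Build a one-layer recurrent forecaster with either an LSTM or a GRU cell."""
    rnn = layers.LSTM(32) if cell == "lstm" else layers.GRU(32)
    model = tf.keras.Sequential([
        layers.Input(shape=(window_len, n_features)),
        rnn,
        layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

X = np.random.rand(256, window_len, n_features).astype("float32")
y = np.random.rand(256, 1).astype("float32")

for cell in ("lstm", "gru"):
    model = build_model(cell)
    hist = model.fit(X, y, epochs=2, batch_size=32, verbose=0)
    print(cell, "final training loss:", hist.history["loss"][-1])
```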

Hidden Markov Models for Trend-Following Volatility Prediction

Hidden Markov Models (HMMs) are powerful statistical tools that identify underlying market states by analyzing observable price movements. In trading, HMMs enhance volatility prediction and inform trend-following strategies by modeling and anticipating shifts in market regimes. In this article, we will present the complete procedure for developing a trend-following strategy that utilizes HMMs to predict volatility as a filter.
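
To illustrate the basic mechanics rather than the article's full procedure, here is a minimal sketch using the hmmlearn package to fit a two-state Gaussian HMM to returns and read off the inferred regimes; the synthetic return series and the two-state assumption are made up for the example.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

# Synthetic returns with a calm regime followed by a volatile regime.
rng = np.random.default_rng(3)
calm = rng.normal(0.0, 0.2, 500)
storm = rng.normal(0.0, 1.0, 500)
returns = np.concatenate([calm, storm]).reshape(-1, 1)  # hmmlearn expects 2-D input

# Fit a 2-state HMM; each hidden state gets its own Gaussian emission.
model = GaussianHMM(n_components=2, covariance_type="diag", n_iter=200, random_state=0)
model.fit(returns)

states = model.predict(returns)          # most likely regime for each observation
for s in range(model.n_components):
    vol = np.sqrt(model.covars_[s]).ravel()[0]
    print("state", s, "estimated volatility:", round(float(vol), 3),
          "share of samples:", round(float((states == s).mean()), 3))
```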

Neural networks made easy (Part 43): Mastering skills without the reward function

The problem of reinforcement learning lies in the need to define a reward function. It can be complex or difficult to formalize. To address this problem, activity-based and environment-based approaches are being explored to learn skills without an explicit reward function.

Integrating MQL5 with data processing packages (Part 5): Adaptive Learning and Flexibility

This part focuses on building a flexible, adaptive trading model trained on historical XAUUSD data, preparing it for ONNX export and potential integration into live trading systems.

Non-linear regression models on the stock exchange

Non-linear regression models on the stock exchange: is it possible to predict financial markets? Let's consider creating a model for forecasting EURUSD prices and build two robots based on it, one in Python and one in MQL5.
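
As a small, generic illustration of fitting a non-linear regression to a price series (the article's own model is more involved), here is a scikit-learn sketch of polynomial regression on a time index; the synthetic series and the degree-3 polynomial are assumptions made for the example.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

# Synthetic price series standing in for EURUSD closes.
rng = np.random.default_rng(5)
t = np.arange(300).reshape(-1, 1)
price = 1.10 + 0.0001 * t.ravel() + 0.01 * np.sin(t.ravel() / 20) + rng.normal(0, 0.002, 300)

# Non-linear (polynomial) regression of price on the time index.
model = make_pipeline(PolynomialFeatures(degree=3), LinearRegression())
model.fit(t, price)

# Extrapolate a few steps ahead with the fitted curve.
future_t = np.arange(300, 310).reshape(-1, 1)
print(model.predict(future_t))
```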

Neural networks made easy (Part 48): Methods for reducing overestimation of Q-function values

In the previous article, we introduced the DDPG method, which allows training models in a continuous action space. However, like other Q-learning methods, DDPG is prone to overestimating Q-function values. This problem often results in training an agent with a suboptimal strategy. In this article, we will look at some approaches to overcome the mentioned issue.
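
One widely used remedy for Q-value overestimation is the clipped double-Q target (as in TD3), where the Bellman target takes the minimum of two critics. The short NumPy sketch below shows only that target calculation; the toy rewards, discount factor, and critic outputs are invented, and the snippet is not the article's implementation.

```python
import numpy as np

def clipped_double_q_target(reward, done, q1_next, q2_next, gamma=0.99):
    """Bellman target using the minimum of two target critics.

    Taking the element-wise minimum counteracts the upward bias that a single
    maximizing critic accumulates during Q-learning-style updates.
    """
    min_q = np.minimum(q1_next, q2_next)
    return reward + gamma * (1.0 - done) * min_q

# Toy batch: two transitions with two target-critic estimates of Q(s', a').
reward  = np.array([0.5, -0.2])
done    = np.array([0.0, 1.0])          # the second transition is terminal
q1_next = np.array([10.0, 3.0])
q2_next = np.array([ 8.0, 4.0])

print(clipped_double_q_target(reward, done, q1_next, q2_next))
# -> [0.5 + 0.99 * 8.0, -0.2]: the terminal transition ignores future value
```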

Data Science and Machine Learning (Part 25): Forex Timeseries Forecasting Using a Recurrent Neural Network (RNN)

Recurrent neural networks (RNNs) excel at leveraging past information to predict future events. Their remarkable predictive capabilities have been applied across various domains with great success. In this article, we will deploy RNN models to predict trends in the forex market, demonstrating their potential to enhance forecasting accuracy in forex trading.

Neural Networks in Trading: Hierarchical Vector Transformer (HiVT)

We invite you to get acquainted with the Hierarchical Vector Transformer (HiVT) method, which was developed for fast and accurate forecasting of multimodal time series.

Neural Networks in Trading: Hyperbolic Latent Diffusion Model (Final Part)

The use of anisotropic diffusion processes for encoding the initial data in a hyperbolic latent space, as proposed in the HypDIff framework, assists in preserving the topological features of the current market situation and improves the quality of its analysis. In the previous article, we started implementing the proposed approaches using MQL5. Today we will continue the work we started and will bring it to its logical conclusion.

Data Science and ML (Part 33): Pandas Dataframe in MQL5, Data Collection for ML Usage made easier

When working with machine learning models, it's essential to ensure consistency in the data used for training, validation, and testing. In this article, we will create our own version of the Pandas library in MQL5 to provide a unified approach to handling machine learning data and to ensure that the same data is used both inside and outside MQL5, where most of the training occurs.

Category Theory in MQL5 (Part 20): A detour to Self-Attention and the Transformer

We take a detour in our series to ponder part of the algorithm behind ChatGPT. Are there any similarities with, or concepts borrowed from, natural transformations? We attempt to answer these and other questions in a fun piece, with our code in a signal class format.

Neural Networks Made Easy (Part 87): Time Series Patching

Forecasting plays an important role in time series analysis. In this article, we will talk about the benefits of time series patching.
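
To give a sense of what "patching" means for a time series, here is a minimal NumPy sketch that splits a series into fixed-length, possibly overlapping patches, similar in spirit to the technique discussed in the article; the patch length and stride are arbitrary assumptions.

```python
import numpy as np

def make_patches(series: np.ndarray, patch_len: int, stride: int) -> np.ndarray:
    """Split a 1-D series into overlapping patches of length patch_len."""
    n_patches = (len(series) - patch_len) // stride + 1
    return np.stack([series[i * stride : i * stride + patch_len] for i in range(n_patches)])

series = np.arange(20, dtype=float)        # stand-in for a price series
patches = make_patches(series, patch_len=8, stride=4)
print(patches.shape)   # (4, 8): each row is one patch fed to the model as a "token"
print(patches)
```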