Articles on machine learning in trading

Creating AI-based trading robots: native integration with Python, matrices and vectors, math and statistics libraries and much more.

Find out how to use machine learning in trading. Neurons, perceptrons, convolutional and recurrent networks, predictive models — start with the basics and work your way up to developing your own AI. You will learn how to train and apply neural networks for algorithmic trading in financial markets.

Quantization in machine learning (Part 2): Data preprocessing, table selection, training CatBoost models

The article examines the practical application of quantization in the construction of tree models. It covers methods for selecting quantization tables and data preprocessing. No complex mathematical equations are used.
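
As a rough illustration of the kind of setting involved (a hypothetical sketch, not code from the article), CatBoost exposes the number of quantization borders per feature through its border_count parameter; the data below is synthetic.

# Minimal sketch (not from the article): controlling CatBoost's feature
# quantization via border_count, on synthetic data.
import numpy as np
from catboost import CatBoostClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))                          # synthetic features
y = (X[:, 0] + rng.normal(size=1000) > 0).astype(int)    # synthetic target

model = CatBoostClassifier(
    iterations=200,
    depth=4,
    border_count=32,   # number of quantization borders (bins) per feature
    verbose=False,
)
model.fit(X, y)
print("train accuracy:", model.score(X, y))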

The Disagreement Problem: Diving Deeper into The Complexity of Explainability in AI

Dive into the heart of Artificial Intelligence's enigma as we navigate the tumultuous waters of explainability. In a realm where models conceal their inner workings, our exploration unveils the "disagreement problem" that echoes through the corridors of machine learning.

MQL5 Wizard Techniques you should know (Part 21): Testing with Economic Calendar Data

By default, Economic Calendar data is not available for testing with Expert Advisors in the Strategy Tester. We look at how databases can help work around this limitation. In this article, we explore how SQLite databases can be used to archive Economic Calendar news so that wizard-assembled Expert Advisors can use it to generate trade signals.
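
To give a feel for what such an archive could look like, here is a hypothetical sketch using Python's standard sqlite3 module (the article itself works with MQL5's database functions); the table name, columns and event values are assumptions, not the article's schema or data.

# Hypothetical sketch of an economic-calendar archive in SQLite, written with
# Python's sqlite3 module; the article implements this in MQL5.
import sqlite3

con = sqlite3.connect("calendar.sqlite")
con.execute("""
CREATE TABLE IF NOT EXISTS calendar_events (
    event_time TEXT,   -- timestamp of the news release
    country    TEXT,   -- e.g. 'US', 'EU'
    event_name TEXT,   -- e.g. 'Nonfarm Payrolls'
    actual     REAL,
    forecast   REAL,
    previous   REAL
)""")

# Archive one made-up event, then query events inside a test window.
con.execute("INSERT INTO calendar_events VALUES (?, ?, ?, ?, ?, ?)",
            ("2024-05-03 12:30:00", "US", "Nonfarm Payrolls", 175.0, 240.0, 315.0))
con.commit()

rows = con.execute("""
SELECT event_time, event_name, actual, forecast
FROM calendar_events
WHERE country = 'US' AND event_time BETWEEN '2024-05-01' AND '2024-05-31'
""").fetchall()
print(rows)
con.close()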

MQL5 Wizard Techniques you should know (Part 18): Neural Architecture Search with Eigen Vectors

Neural Architecture Search, an automated approach to determining the ideal neural network settings, can be a plus when facing many options and large test data sets. We examine how, when paired with eigenvectors, this process can be made even more efficient.

Neural networks made easy (Part 74): Trajectory prediction with adaptation

This article introduces a fairly effective method of multi-agent trajectory forecasting, which is able to adapt to various environmental conditions.

Spurious Regressions in Python

Spurious regressions occur when two time series exhibit a high degree of correlation purely by chance, leading to misleading results in regression analysis. In such cases, even though variables may appear to be related, the correlation is coincidental and the model may be unreliable.
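
A quick way to see the effect (a minimal sketch, not code from the article): regress one independent random walk on another and note how high the R-squared typically comes out.

# Minimal sketch: two independent random walks often show a high R^2,
# even though there is no real relationship between them.
import numpy as np

rng = np.random.default_rng(42)
x = np.cumsum(rng.normal(size=500))   # random walk 1
y = np.cumsum(rng.normal(size=500))   # random walk 2, independent of x

slope, intercept = np.polyfit(x, y, 1)
y_hat = slope * x + intercept
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - np.mean(y)) ** 2)
print(f"R^2 of y regressed on x: {r2:.2f}")   # frequently far above zero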

The Group Method of Data Handling: Implementing the Multilayered Iterative Algorithm in MQL5

In this article we describe the implementation of the Multilayered Iterative Algorithm of the Group Method of Data Handling in MQL5.

MQL5 Wizard Techniques you should know (Part 20): Symbolic Regression

Symbolic regression is a form of regression that starts with minimal or no assumptions about what the underlying model mapping the studied data sets would look like. Even though it can be implemented with Bayesian methods or neural networks, we look at how an implementation based on genetic algorithms can help customize an expert signal class usable in the MQL5 Wizard.

Population optimization algorithms: Evolution Strategies, (μ,λ)-ES and (μ+λ)-ES

The article considers a group of optimization algorithms known as Evolution Strategies (ES). They are among the very first population algorithms to use evolutionary principles for finding optimal solutions. We will implement changes to the conventional ES variants and revise the test function and test bench methodology for the algorithms.
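
To give a flavour of the comma-selection scheme (a simplified sketch, not the article's implementation): in a (μ,λ)-ES, μ parents produce λ offspring by Gaussian mutation, and the next generation keeps only the best μ of those offspring; a (μ+λ)-ES would select from parents and offspring together.

# Simplified (mu,lambda)-ES sketch: mu parents produce lambda offspring by
# Gaussian mutation; the next generation is the best mu of the offspring only.
import numpy as np

def sphere(x):                         # toy objective to minimize
    return float(np.sum(x ** 2))

rng = np.random.default_rng(1)
mu, lam, dim, sigma = 5, 30, 10, 0.3
parents = rng.normal(size=(mu, dim))

for gen in range(100):
    idx = rng.integers(0, mu, size=lam)                            # pick parents
    offspring = parents[idx] + sigma * rng.normal(size=(lam, dim))
    fitness = np.array([sphere(o) for o in offspring])
    parents = offspring[np.argsort(fitness)[:mu]]                  # comma selection

print("best objective value:", min(sphere(p) for p in parents))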

Reimagining Classic Strategies: Crude Oil

In this article, we revisit a classic crude oil trading strategy with the aim of enhancing it by leveraging supervised machine learning algorithms. We will construct a least-squares model to predict future Brent crude oil prices based on the spread between Brent and WTI crude oil prices. Our goal is to identify a leading indicator of future changes in Brent prices.
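
As an illustration of the general idea (a sketch on synthetic prices, not the article's data or model), one can fit an ordinary least-squares line from the Brent-WTI spread to the future change in the Brent price:

# Sketch with synthetic prices: regress the future change in Brent on the
# Brent-WTI spread using ordinary least squares (numpy.linalg.lstsq).
import numpy as np

rng = np.random.default_rng(7)
n = 500
wti = 70 + np.cumsum(rng.normal(scale=0.5, size=n))     # synthetic WTI series
brent = wti + 4 + rng.normal(scale=0.8, size=n)         # synthetic Brent series

horizon = 5
spread = brent[:-horizon] - wti[:-horizon]              # predictor
future_change = brent[horizon:] - brent[:-horizon]      # target

A = np.column_stack([spread, np.ones_like(spread)])     # add an intercept term
(beta, alpha), *_ = np.linalg.lstsq(A, future_change, rcond=None)
print(f"slope={beta:.3f}, intercept={alpha:.3f}")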

A feature selection algorithm using energy based learning in pure MQL5

In this article, we present the implementation of a feature selection algorithm described in the academic paper "FREL: A stable feature selection algorithm", where FREL stands for feature weighting as regularized energy-based learning.

Neural networks made easy (Part 70): Closed-Form Policy Improvement Operators (CFPI)

In this article, we will get acquainted with an algorithm that uses closed-form policy improvement operators to optimize Agent actions in offline mode.

Population optimization algorithms: Binary Genetic Algorithm (BGA). Part I

In this article, we will explore various methods used in binary genetic and other population algorithms. We will look at the main components of the algorithm, such as selection, crossover and mutation, and their impact on optimization. In addition, we will study data representation methods and their impact on optimization results.
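
For readers unfamiliar with the terms, here is a compressed illustration (not the article's code) of one-point crossover and bit-flip mutation on binary genomes:

# Compressed illustration of two binary-GA operators: one-point crossover
# and bit-flip mutation (not the article's implementation).
import numpy as np

rng = np.random.default_rng(3)

def crossover(a, b):
    point = rng.integers(1, len(a))            # random crossover point
    return np.concatenate([a[:point], b[point:]])

def mutate(genome, rate=0.05):
    flips = rng.random(len(genome)) < rate     # which bits to flip
    return np.where(flips, 1 - genome, genome)

p1 = rng.integers(0, 2, size=16)
p2 = rng.integers(0, 2, size=16)
child = mutate(crossover(p1, p2))
print(p1, p2, child, sep="\n")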

Neural networks made easy (Part 69): Density-based support constraint for the behavioral policy (SPOT)

In offline learning, we use a fixed dataset, which limits the coverage of environmental diversity. During the learning process, our Agent can generate actions beyond this dataset. If there is no feedback from the environment, how can we be sure that the assessments of such actions are correct? Maintaining the Agent's policy within the training dataset becomes an important aspect to ensure the reliability of training. This is what we will talk about in this article.

Population optimization algorithms: Bacterial Foraging Optimization - Genetic Algorithm (BFO-GA)

The article presents a new approach to solving optimization problems by combining ideas from bacterial foraging optimization (BFO) algorithms and techniques used in the genetic algorithm (GA) into a hybrid BFO-GA algorithm. It uses bacterial swarming to globally search for an optimal solution and genetic operators to refine local optima. Unlike the original BFO, bacteria can now mutate and inherit genes.

Data Science and ML (Part 23): Why LightGBM and XGBoost outperform a lot of AI models?

These advanced gradient-boosted decision tree techniques offer superior performance and flexibility, making them ideal for financial modeling and algorithmic trading. Learn how to leverage these tools to optimize your trading strategies, improve predictive accuracy, and gain a competitive edge in the financial markets.
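
As a minimal, hedged example of the kind of workflow involved (synthetic data and arbitrary parameters, not the article's code):

# Minimal LightGBM sketch on synthetic data: train a gradient-boosted
# classifier and check hold-out accuracy (not the article's code).
import numpy as np
import lightgbm as lgb
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 20))
y = (X[:, 0] * X[:, 1] + X[:, 2] > 0).astype(int)      # synthetic target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = lgb.LGBMClassifier(n_estimators=300, learning_rate=0.05, num_leaves=31)
model.fit(X_tr, y_tr)
print("hold-out accuracy:", accuracy_score(y_te, model.predict(X_te)))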

Causal inference in time series classification problems

In this article, we will look at the theory of causal inference using machine learning, as well as the implementation of a custom approach in Python. Causal inference and causal thinking have their roots in philosophy and psychology and play an important role in our understanding of reality.

Neural networks made easy (Part 75): Improving the performance of trajectory prediction models

The models we create are becoming larger and more complex. This increases the cost of not only training them, but also of operating them. At the same time, the time required to make a decision is often critical. With this in mind, let us consider methods for optimizing model performance without loss of quality.