
Neural Networks in Trading: Hierarchical Vector Transformer (HiVT)
We invite you to get acquainted with the Hierarchical Vector Transformer (HiVT) method, which was developed for fast and accurate forecasting of multimodal time series.

Bacterial Chemotaxis Optimization (BCO)
The article presents the original version of the Bacterial Chemotaxis Optimization (BCO) algorithm and its modified version. We will take a closer look at all the differences, with a special focus on the modified version, BCOm, which simplifies the bacterial movement mechanism, reduces the dependence on positional history, and uses simpler math than the computationally heavy original. We will also run the tests and summarize the results.

Neural Networks in Trading: Unified Trajectory Generation Model (UniTraj)
Understanding agent behavior is important in many different areas, but most methods focus on just one of the tasks (understanding, noise removal, or prediction), which reduces their effectiveness in real-world scenarios. In this article, we will get acquainted with a model that can adapt to solving various problems.

Data Science and ML (Part 35): NumPy in MQL5 – The Art of Making Complex Algorithms with Less Code
The NumPy library powers almost all machine learning algorithms at their core in the Python programming language. In this article, we are going to implement a similar module in MQL5 that gathers all of that complex code in one place to aid us in building sophisticated models and algorithms of any kind.
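To give a flavor of what such a module aims to reproduce, here is a minimal Python sketch of the kind of vectorized NumPy operations in question; the array values are made up purely for illustration.

```python
import numpy as np

# A short series of closing prices (illustrative values only).
closes = np.array([1.1012, 1.1034, 1.1021, 1.1050, 1.1075])

returns = np.diff(closes) / closes[:-1]                   # one-line percentage returns
zscore = (returns - returns.mean()) / returns.std()       # standardized returns
sma3 = np.convolve(closes, np.ones(3) / 3, mode="valid")  # 3-period moving average

print(returns, zscore, sma3, sep="\n")
```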

Exploring Advanced Machine Learning Techniques on the Darvas Box Breakout Strategy
The Darvas Box Breakout Strategy, created by Nicolas Darvas, is a technical trading approach that spots potential buy signals when a stock’s price rises above a set "box" range, suggesting strong upward momentum. In this article, we will apply this strategy concept as an example to explore three advanced machine learning techniques. These include using a machine learning model to generate signals rather than to filter trades, employing continuous signals rather than discrete ones, and using models trained on different timeframes to confirm trades.
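As a taste of the second technique, below is a hedged Python sketch of how a continuous model output (a predicted return) might be mapped to a position size instead of a discrete buy/sell signal; the function name and scaling are hypothetical and not the article's exact implementation.

```python
import numpy as np

def continuous_position(predicted_return: float, max_lots: float = 1.0,
                        scale: float = 100.0) -> float:
    """Map a continuous forecast to a position size in [-max_lots, +max_lots]."""
    return float(np.clip(predicted_return * scale, -max_lots, max_lots))

# A small positive forecast opens a proportionally small long position,
# while a large negative one is capped at the maximum short size.
print(continuous_position(0.004))   # 0.4 lots long
print(continuous_position(-0.02))   # 1.0 lot short (capped)
```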

Resampling techniques for prediction and classification assessment in MQL5
In this article, we will explore and implement methods for assessing model quality that utilize a single dataset as both the training and validation set.
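As a minimal illustration of the idea, the Python sketch below reuses one synthetic dataset for both fitting and validation via k-fold cross-validation and the bootstrap; it is only a sketch of the general approach, not the article's MQL5 implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

# k-fold cross-validation: every sample serves as validation data exactly once.
cv_acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print("5-fold accuracy:", cv_acc.mean())

# Bootstrap: train on a resample with replacement, validate on the left-out rows.
boot_acc = []
for _ in range(100):
    idx = rng.integers(0, len(y), size=len(y))
    oob = np.setdiff1d(np.arange(len(y)), idx)
    model = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
    boot_acc.append(model.score(X[oob], y[oob]))
print("bootstrap out-of-bag accuracy:", np.mean(boot_acc))
```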

Tabu Search (TS)
The article discusses the Tabu Search algorithm, one of the first and most well-known metaheuristic methods. We will go through the algorithm operation in detail, starting with choosing an initial solution and exploring neighboring options, with an emphasis on using a tabu list. The article covers the key aspects of the algorithm and its features.

MQL5 Wizard Techniques you should know (Part 57): Supervised Learning with Moving Average and Stochastic Oscillator
Moving Average and Stochastic Oscillator are very common indicators that some traders may not use a lot because of their lagging nature. In a 3-part 'miniseries' that considers the 3 main forms of machine learning, we look to see whether this bias against these indicators is justified, or whether they might be holding an edge. We do our examination in wizard-assembled Expert Advisors.

Neural Networks in Trading: A Complex Trajectory Prediction Method (Traj-LLM)
In this article, I would like to introduce you to an interesting trajectory prediction method developed to solve problems in the field of autonomous vehicle movements. The authors of the method combined the best elements of various architectural solutions.

Data Science and ML (Part 34): Time series decomposition, Breaking the stock market down to the core
In a world overflowing with noisy and unpredictable data, identifying meaningful patterns can be challenging. In this article, we'll explore seasonal decomposition, a powerful analytical technique that helps separate data into its key components: trend, seasonal patterns, and noise. By breaking data down this way, we can uncover hidden insights and work with cleaner, more interpretable information.
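For readers who want to experiment right away, here is a short Python sketch of additive seasonal decomposition with statsmodels on a synthetic series; the article itself works with market data.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

# Synthetic daily series: trend + 12-day seasonality + noise.
t = np.arange(240)
series = pd.Series(
    0.05 * t + 2.0 * np.sin(2 * np.pi * t / 12) + np.random.normal(scale=0.5, size=t.size),
    index=pd.date_range("2020-01-01", periods=t.size, freq="D"),
)

result = seasonal_decompose(series, model="additive", period=12)
print(result.trend.dropna().head())     # the slow-moving component
print(result.seasonal.head())           # the repeating pattern
print(result.resid.dropna().head())     # what is left: the noise
```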

An introduction to Receiver Operating Characteristic curves
ROC curves are graphical representations used to evaluate the performance of classifiers. Despite ROC graphs being relatively straightforward, there exist common misconceptions and pitfalls when using them in practice. This article aims to provide an introduction to ROC graphs as a tool for practitioners seeking to understand classifier performance evaluation.
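A compact example of what such a graph is built from: the scikit-learn snippet below computes the false and true positive rates over all thresholds, plus the area under the curve, for a toy classifier.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

# Class-1 probabilities are the scores from which the ROC curve is drawn.
scores = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
fpr, tpr, thresholds = roc_curve(y_te, scores)   # one (FPR, TPR) point per threshold
print("AUC:", roc_auc_score(y_te, scores))
```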

Neural Networks in Trading: State Space Models
A large number of the models we have reviewed so far are based on the Transformer architecture. However, they may be inefficient when dealing with long sequences. In this article, we will get acquainted with an alternative direction in time series forecasting based on state space models.

William Gann methods (Part III): Does Astrology Work?
Do the positions of planets and stars affect financial markets? Let's arm ourselves with statistics and big data, and embark on an exciting journey into the world where stars and stock charts intersect.

Artificial Algae Algorithm (AAA)
The article considers the Artificial Algae Algorithm (AAA), based on biological processes characteristic of microalgae. The algorithm includes spiral motion, an evolutionary process, and adaptation, which allows it to solve optimization problems. The article provides an in-depth analysis of the working principles of AAA and its potential in mathematical modeling, highlighting the connection between nature and algorithmic solutions.

Neural Network in Practice: Sketching a Neuron
In this article, we will build a basic neuron. Although it looks simple, and many may consider this code completely trivial and meaningless, I want you to have fun studying this simple sketch of a neuron. Don't be afraid to modify the code; understanding it fully is the goal.

Anarchic Society Optimization (ASO) algorithm
In this article, we will get acquainted with the Anarchic Society Optimization (ASO) algorithm and discuss how an algorithm based on the irrational and adventurous behavior of participants in an anarchic society (an anomalous system of social interaction free from centralized power and various kinds of hierarchies) is able to explore the solution space and avoid the traps of local optimum. The article presents a unified ASO structure applicable to both continuous and discrete problems.

MQL5 Wizard Techniques you should know (Part 55): SAC with Prioritized Experience Replay
Replay buffers in Reinforcement Learning are particularly important with off-policy algorithms like DQN or SAC. This puts the spotlight on the sampling process of this memory buffer. While default options with SAC, for instance, use random selection from this buffer, Prioritized Experience Replay buffers fine-tune this by sampling from the buffer based on a TD score. We review the importance of Reinforcement Learning and, as always, examine just this hypothesis (not cross-validation) in a wizard-assembled Expert Advisor.
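To make the sampling idea concrete, here is a hedged Python sketch of a proportional Prioritized Experience Replay buffer: transitions are drawn with probability proportional to |TD error|^alpha, and importance-sampling weights correct the resulting bias. Class and field names are illustrative, not the article's code.

```python
import numpy as np

class PrioritizedReplay:
    def __init__(self, capacity, alpha=0.6):
        self.alpha, self.capacity = alpha, capacity
        self.data, self.priorities = [], []

    def add(self, transition, td_error):
        if len(self.data) >= self.capacity:           # drop the oldest entry when full
            self.data.pop(0); self.priorities.pop(0)
        self.data.append(transition)
        self.priorities.append((abs(td_error) + 1e-6) ** self.alpha)

    def sample(self, batch_size, beta=0.4):
        p = np.array(self.priorities); p /= p.sum()   # probabilities ~ priority
        idx = np.random.choice(len(self.data), batch_size, p=p)
        weights = (len(self.data) * p[idx]) ** (-beta)
        weights /= weights.max()                      # normalized importance weights
        return [self.data[i] for i in idx], idx, weights

buf = PrioritizedReplay(capacity=1000)
for i in range(50):
    buf.add(("state", "action", 0.0, "next_state"), td_error=np.random.randn())
batch, idx, w = buf.sample(8)
```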

Neural Networks in Trading: Injection of Global Information into Independent Channels (InjectTST)
Most modern multimodal time series forecasting methods use the independent channels approach. This ignores the natural dependence of different channels of the same time series. Smart use of two approaches (independent and mixed channels) is the key to improving the performance of the models.

Build Self Optimizing Expert Advisors in MQL5 (Part 6): Stop Out Prevention
Join us in our discussion today as we look for an algorithmic procedure to minimize the total number of times we get stopped out of winning trades. The problem we faced is significantly challenging, and most solutions offered in community discussions lack fixed, well-defined rules. Our algorithmic approach to solving the problem increased the profitability of our trades and reduced our average loss per trade. However, there are further advancements to be made to completely filter out all trades that will be stopped out; our solution is a good first step for anyone to try.

Neural Networks in Trading: Practical Results of the TEMPO Method
We continue our acquaintance with the TEMPO method. In this article we will evaluate the actual effectiveness of the proposed approaches on real historical data.

Animal Migration Optimization (AMO) algorithm
The article is devoted to the AMO algorithm, which models the seasonal migration of animals in search of optimal conditions for life and reproduction. The main features of AMO include the use of topological neighborhood and a probabilistic update mechanism, which makes it easy to implement and flexible for various optimization tasks.

MQL5 Wizard Techniques you should know (Part 54): Reinforcement Learning with hybrid SAC and Tensors
Soft Actor Critic is a Reinforcement Learning algorithm that we looked at in a previous article, where we also introduced Python and ONNX to this series as efficient approaches to training networks. We revisit the algorithm with the aim of exploiting tensors and the computational graphs that are often exploited in Python.

Neural Networks in Trading: Using Language Models for Time Series Forecasting
We continue to study time series forecasting models. In this article, we get acquainted with a complex algorithm built on the use of a pre-trained language model.

Neural Networks in Trading: Lightweight Models for Time Series Forecasting
Lightweight time series forecasting models achieve high performance using a minimum number of parameters. This, in turn, reduces the consumption of computing resources and speeds up decision-making. Despite being lightweight, such models achieve forecast quality comparable to more complex ones.

Artificial Bee Hive Algorithm (ABHA): Tests and results
In this article, we will continue exploring the Artificial Bee Hive Algorithm (ABHA) by diving into the code and considering the remaining methods. As you might remember, each bee in the model is represented as an individual agent whose behavior depends on internal and external information, as well as motivational state. We will test the algorithm on various functions and summarize the results by presenting them in the rating table.

Artificial Bee Hive Algorithm (ABHA): Theory and methods
In this article, we will consider the Artificial Bee Hive Algorithm (ABHA) developed in 2009. The algorithm is aimed at solving continuous optimization problems. We will look at how ABHA draws inspiration from the behavior of a bee colony, where each bee has a unique role that helps them find resources more efficiently.

Trend Prediction with LSTM for Trend-Following Strategies
Long Short-Term Memory (LSTM) is a type of recurrent neural network (RNN) designed to model sequential data by effectively capturing long-term dependencies and addressing the vanishing gradient problem. In this article, we will explore how to utilize LSTM to predict future trends, enhancing the performance of trend-following strategies. The article will cover the introduction of key concepts and the motivation behind development, fetching data from MetaTrader 5, using that data to train the model in Python, integrating the machine learning model into MQL5, and reflecting on the results and future aspirations based on statistical backtesting.
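As a minimal taste of the Python side, the sketch below defines a small PyTorch LSTM that maps a window of bars to an up-trend probability; the data, window length, and labels are placeholders rather than the article's actual pipeline.

```python
import torch
import torch.nn as nn

class TrendLSTM(nn.Module):
    def __init__(self, n_features=1, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                              # x: (batch, window, n_features)
        out, _ = self.lstm(x)
        return torch.sigmoid(self.head(out[:, -1]))    # probability of an up-trend

model = TrendLSTM()
x = torch.randn(64, 30, 1)                 # 64 samples, 30-bar window of returns
y = (torch.rand(64, 1) > 0.5).float()      # placeholder up/down labels
loss = nn.BCELoss()(model(x), y)
loss.backward()                            # gradients flow back through the LSTM
```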

Neural Networks in Trading: Reducing Memory Consumption with Adam-mini Optimization
One of the directions for increasing the efficiency of the model training and convergence process is the improvement of optimization methods. Adam-mini is an adaptive optimization method designed to improve on the basic Adam algorithm.
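For context, here is the standard per-parameter Adam update in plain NumPy. The pain point Adam-mini targets is that the second-moment estimate v is as large as the model itself; as I understand the method, it shares such statistics across groups of parameters to cut that memory cost. The sketch below shows only vanilla Adam.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * grad            # first moment: running mean of gradients
    v = b2 * v + (1 - b2) * grad ** 2       # second moment: one value per weight
    m_hat = m / (1 - b1 ** t)               # bias correction
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

theta, m, v = np.zeros(4), np.zeros(4), np.zeros(4)
for t in range(1, 6):
    grad = np.random.default_rng(t).normal(size=4)
    theta, m, v = adam_step(theta, grad, m, v, t)
print(theta)
```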

Generative Adversarial Networks (GANs) for Synthetic Data in Financial Modeling (Part 2): Creating Synthetic Symbol for Testing
In this article, we create a synthetic symbol using a Generative Adversarial Network (GAN). This involves generating realistic financial data that mimics the behavior of actual market instruments, such as EURUSD. The GAN model learns patterns and volatility from historical market data and creates synthetic price data with similar characteristics.
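A bare-bones, hedged PyTorch sketch of the adversarial training step is shown below; it generates synthetic return vectors rather than a full OHLC symbol, and the network sizes are arbitrary.

```python
import torch
import torch.nn as nn

SEQ_LEN, NOISE = 64, 16
G = nn.Sequential(nn.Linear(NOISE, 128), nn.ReLU(), nn.Linear(128, SEQ_LEN))
D = nn.Sequential(nn.Linear(SEQ_LEN, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

real = torch.randn(32, SEQ_LEN) * 0.01     # placeholder for real return sequences

# Discriminator step: push real samples toward 1 and generated ones toward 0.
fake = G(torch.randn(32, NOISE)).detach()
d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake), torch.zeros(32, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: try to make the discriminator label fakes as real.
fake = G(torch.randn(32, NOISE))
g_loss = bce(D(fake), torch.ones(32, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```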

Data Science and ML (Part 33): Pandas Dataframe in MQL5, Data Collection for ML Usage made easier
When working with machine learning models, it's essential to ensure consistency in the data used for training, validation, and testing. In this article, we will create our own version of the Pandas library in MQL5 to ensure a unified approach to handling machine learning data, so that the same data can be used both inside and outside MQL5, where most of the training occurs.
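For reference, this is the kind of pandas workflow the MQL5 counterpart aims to mirror; the column names and values are purely illustrative.

```python
import pandas as pd

# A tiny OHLC-style frame standing in for exported MetaTrader 5 rates.
df = pd.DataFrame({
    "time":  pd.date_range("2024-01-01", periods=5, freq="D"),
    "open":  [1.10, 1.11, 1.12, 1.11, 1.13],
    "close": [1.11, 1.12, 1.11, 1.13, 1.14],
})

df["return"] = df["close"].pct_change()     # feature engineering
df = df.dropna().reset_index(drop=True)     # the same cleaning step everywhere
print(df.describe())                        # quick sanity check of the data
```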

Integrate Your Own LLM into EA (Part 5): Develop and Test Trading Strategy with LLMs(IV) — Test Trading Strategy
With the rapid development of artificial intelligence today, large language models (LLMs) have become an important part of it, so we should think about how to integrate powerful LLMs into our algorithmic trading. For most people, it is difficult to fine-tune these powerful models according to their needs, deploy them locally, and then apply them to algorithmic trading. This series of articles takes a step-by-step approach to achieving this goal.

Gating mechanisms in ensemble learning
In this article, we continue our exploration of ensemble models by discussing the concept of gates, specifically how they may be useful in combining model outputs to enhance either prediction accuracy or model generalization.
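A small, hedged sketch of the gating idea: a gate model decides, per sample, how much weight each base model's prediction receives. The data and model choices below are illustrative only.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 4))
y = np.where(X[:, 0] > 0, X[:, 1], -X[:, 1]) + rng.normal(scale=0.1, size=300)

# Two base regressors, each fitted on a different region of the feature space.
m1 = LinearRegression().fit(X[X[:, 0] > 0], y[X[:, 0] > 0])
m2 = LinearRegression().fit(X[X[:, 0] <= 0], y[X[:, 0] <= 0])

# The gate predicts which expert is likely to be right for each sample.
gate = LogisticRegression().fit(X, (X[:, 0] > 0).astype(int))
w = gate.predict_proba(X)[:, 1]

blended = w * m1.predict(X) + (1 - w) * m2.predict(X)
print("gated ensemble MSE:", np.mean((blended - y) ** 2))
```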

Adaptive Social Behavior Optimization (ASBO): Two-phase evolution
We continue our discussion of the social behavior of living organisms and its impact on the development of a new mathematical model - ASBO (Adaptive Social Behavior Optimization). We will dive into the two-phase evolution, test the algorithm and draw conclusions. Just as in nature a group of living organisms join their efforts to survive, ASBO uses principles of collective behavior to solve complex optimization problems.

Neural Networks in Trading: Spatio-Temporal Neural Network (STNN)
In this article we will talk about using space-time transformations to effectively predict upcoming price movement. To improve the numerical prediction accuracy in STNN, a continuous attention mechanism is proposed that allows the model to better consider important aspects of the data.

Neural Network in Practice: Pseudoinverse (II)
Since these articles are educational in nature and are not intended to show the implementation of specific functionality, we will do things a little differently in this article. Instead of showing how to apply factorization to obtain the inverse of a matrix, we will focus on factorization of the pseudoinverse. The reason is that there is no point in showing how to get the general coefficient if we can do it in a special way. Even better, the reader can gain a deeper understanding of why things happen the way they do. So, let's now figure out why hardware is replacing software over time.
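As a quick numerical aside (not the article's MQL5 code), the NumPy snippet below compares the SVD-based pseudoinverse with the normal-equation form that exists when the matrix has full column rank.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])        # tall matrix: no ordinary inverse exists

pinv_svd    = np.linalg.pinv(A)                  # SVD-based pseudoinverse
pinv_normal = np.linalg.inv(A.T @ A) @ A.T       # (A^T A)^-1 A^T, full column rank only

print(np.allclose(pinv_svd, pinv_normal))        # True: both give the same matrix
b = np.array([1.0, 2.0, 3.0])
print(pinv_svd @ b)                              # least-squares solution of A x = b
```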

Neural Networks in Trading: Dual-Attention-Based Trend Prediction Model
We continue the discussion about the use of piecewise linear representation of time series, which was started in the previous article. Today we will see how to combine this method with other approaches to time series analysis to improve the price trend prediction quality.

Hidden Markov Models for Trend-Following Volatility Prediction
Hidden Markov Models (HMMs) are powerful statistical tools that identify underlying market states by analyzing observable price movements. In trading, HMMs enhance volatility prediction and inform trend-following strategies by modeling and anticipating shifts in market regimes. In this article, we will present the complete procedure for developing a trend-following strategy that utilizes HMMs to predict volatility as a filter.
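A hedged Python sketch of the core step: fit a two-state Gaussian HMM to returns with hmmlearn and read off the inferred regime for every bar, which a strategy can then use as a volatility filter. The returns here are synthetic.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(7)
calm = rng.normal(0, 0.3, size=500)       # low-volatility regime
wild = rng.normal(0, 1.5, size=500)       # high-volatility regime
returns = np.concatenate([calm, wild]).reshape(-1, 1)

model = GaussianHMM(n_components=2, covariance_type="full", n_iter=200, random_state=7)
model.fit(returns)
states = model.predict(returns)           # 0/1 regime label for every bar

high_vol = int(np.argmax(model.covars_.ravel()))   # state with the larger variance
print("bars labeled high-volatility:", int(np.sum(states == high_vol)))
```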

Neural Networks in Trading: Piecewise Linear Representation of Time Series
This article is somewhat different from my earlier publications. Here, we will talk about an alternative representation of time series. Piecewise linear representation is a method of approximating a time series using linear functions over small intervals.
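The idea is easy to prototype: the Python sketch below splits a series into fixed-length windows and replaces each window with its least-squares line. It is a simplified, equal-length variant, not the article's exact segmentation scheme.

```python
import numpy as np

def plr(series: np.ndarray, segment_len: int) -> np.ndarray:
    """Approximate `series` with straight-line segments of length `segment_len`."""
    approx = np.empty_like(series, dtype=float)
    for start in range(0, len(series), segment_len):
        seg = series[start:start + segment_len]
        x = np.arange(len(seg))
        slope, intercept = np.polyfit(x, seg, deg=1)   # best-fit line for this window
        approx[start:start + segment_len] = slope * x + intercept
    return approx

prices = np.cumsum(np.random.normal(size=200)) + 100.0
approx = plr(prices, segment_len=20)
print("mean absolute error of the PLR:", np.mean(np.abs(prices - approx)))
```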

Adaptive Social Behavior Optimization (ASBO): Schwefel, Box-Muller Method
This article provides a fascinating insight into the world of social behavior in living organisms and its influence on the creation of a new mathematical model - ASBO (Adaptive Social Behavior Optimization). We will examine how the principles of leadership, neighborhood, and cooperation observed in living societies inspire the development of innovative optimization algorithms.
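Since the title mentions it, here is the Box-Muller transform in a few lines of Python: two uniform random numbers become two independent standard normal ones, the kind of normally distributed steps such algorithms typically need.

```python
import numpy as np

def box_muller(n: int, seed: int = 0):
    """Generate n pairs of independent standard normal variates from uniforms."""
    rng = np.random.default_rng(seed)
    u1 = 1.0 - rng.uniform(size=n)     # shift to (0, 1] to avoid log(0)
    u2 = rng.uniform(size=n)
    r = np.sqrt(-2.0 * np.log(u1))
    return r * np.cos(2.0 * np.pi * u2), r * np.sin(2.0 * np.pi * u2)

z0, z1 = box_muller(100_000)
print(round(z0.mean(), 3), round(z0.std(), 3))   # should be close to 0 and 1
```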

Neural Networks Made Easy (Part 97): Training Models With MSFformer
When exploring various model architecture designs, we often devote insufficient attention to the process of model training. In this article, I aim to address this gap.