
Neural Networks in Trading: Hierarchical Feature Learning for Point Clouds
We continue to study algorithms for extracting features from a point cloud. In this article, we will get acquainted with the mechanisms for increasing the efficiency of the PointNet method.

The Group Method of Data Handling: Implementing the Combinatorial Algorithm in MQL5
In this article, we continue our exploration of the Group Method of Data Handling family of algorithms by implementing the Combinatorial Algorithm, along with its refined variant, the Combinatorial Selective Algorithm, in MQL5.

Integrate Your Own LLM into EA (Part 5): Develop and Test Trading Strategy with LLMs (IV) — Test Trading Strategy
With the rapid development of artificial intelligence, large language models (LLMs) have become one of its most important components, so we should think about how to integrate powerful LLMs into our algorithmic trading. For most people, it is difficult to fine-tune these models to their needs, deploy them locally, and then apply them to algorithmic trading. This series of articles takes a step-by-step approach to achieving that goal.

Neural Network in Practice: Secant Line
As already explained in the theoretical part, when working with neural networks we need to use linear regressions and derivatives. Why? Because linear regression is one of the simplest formulas in existence; essentially, it is just an affine function. However, when we talk about neural networks, we are not interested in the line itself, but in the equation that generates it. Do you know the main equation we need to understand? If not, I recommend reading this article to understand it.
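As a minimal illustration of the secant idea discussed in the article (a generic Python sketch, not the article's own code), the slope of the secant line through two points of a function approaches the derivative as the points move closer together:

```python
# Minimal sketch (not from the article): the slope of the secant line through
# two points of a function approaches the derivative as the points get closer.

def f(x):
    return x * x  # hypothetical example function


def secant_slope(func, a, b):
    """Slope of the secant line through (a, func(a)) and (b, func(b))."""
    return (func(b) - func(a)) / (b - a)


for h in (1.0, 0.1, 0.001):
    print(h, secant_slope(f, 2.0, 2.0 + h))  # tends to f'(2) = 4 as h shrinks
```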

Time series clustering in causal inference
Clustering algorithms in machine learning are important unsupervised learning algorithms that can divide the original data into groups with similar observations. By using these groups, you can analyze the market for a specific cluster, search for the most stable clusters using new data, and make causal inferences. The article proposes an original method for time series clustering in Python.
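For readers who want a concrete starting point (a generic scikit-learn sketch with synthetic data, not the original method proposed in the article), clustering fixed-length windows of a series might look like this:

```python
# Generic reference sketch (not the article's original method): cluster
# fixed-length, normalized windows of a synthetic series with k-means.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
series = rng.normal(size=500).cumsum()          # synthetic price-like series

window = 20                                     # arbitrary window length
segments = np.array([series[i:i + window]
                     for i in range(0, len(series) - window, window)])
segments = (segments - segments.mean(axis=1, keepdims=True)) \
           / segments.std(axis=1, keepdims=True)

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(segments)
print(labels)                                   # cluster index for each window
```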

MQL5 Wizard Techniques you should know (Part 34): Price-Embedding with an Unconventional RBM
Restricted Boltzmann Machines are a form of neural network developed in the mid-1980s, at a time when compute resources were prohibitively expensive. Originally, they relied on Gibbs Sampling and Contrastive Divergence to reduce dimensionality or capture the hidden probabilities/properties of input training data sets. We examine how Backpropagation can perform a similar role when the RBM ‘embeds’ prices for a forecasting Multi-Layer Perceptron.

Propensity score in causal inference
The article examines the topic of matching in causal inference. Matching is used to compare similar observations in a data set, which is necessary to correctly estimate causal effects and eliminate bias. The author explains how this helps in building machine-learning-based trading systems that remain more stable on new data they were not trained on. The propensity score plays a central role here and is widely used in causal inference.
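As a rough illustration of how a propensity score is commonly estimated (a generic sketch with synthetic data, not the author's exact pipeline), a classifier predicts treatment from covariates and its predicted probability serves as the score:

```python
# Generic sketch (not the author's exact pipeline): the propensity score is the
# estimated probability of "treatment" given the covariates.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))                                # synthetic covariates
treated = (X[:, 0] + rng.normal(size=200) > 0).astype(int)   # synthetic treatment flag

model = LogisticRegression().fit(X, treated)
propensity = model.predict_proba(X)[:, 1]                    # score per observation
print(propensity[:5])
```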

Matrix Factorization: The Basics
Since the goal here is didactic, we will proceed as simply as possible. That is, we will implement only what we need: matrix multiplication. You will see today that this is enough to simulate matrix-scalar multiplication. The most significant difficulty many people encounter when implementing code based on matrix factorization is this: unlike scalar factorization, where in almost all cases the order of the factors does not change the result, with matrices the order does matter.
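A quick numerical illustration of that last point (a minimal Python sketch, not the article's MQL5 implementation): with matrices, A*B and B*A generally differ.

```python
# Minimal sketch: matrix multiplication is not commutative in general.
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])

print(A @ B)                          # [[2. 1.]  [4. 3.]]
print(B @ A)                          # [[3. 4.]  [1. 2.]]
print(np.array_equal(A @ B, B @ A))   # False: the order of the factors matters
```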

Adaptive Social Behavior Optimization (ASBO): Schwefel, Box-Muller Method
This article provides a fascinating insight into the world of social behavior in living organisms and its influence on the creation of a new mathematical model - ASBO (Adaptive Social Behavior Optimization). We will examine how the principles of leadership, neighborhood, and cooperation observed in living societies inspire the development of innovative optimization algorithms.
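For reference, the Box-Muller method named in the title converts two uniform random numbers into two independent standard normal deviates; below is a textbook Python version, not the article's MQL5 implementation:

```python
# Textbook Box-Muller transform (reference sketch, not the article's MQL5 code):
# two uniform samples in (0, 1) become two independent standard normal samples.
import math
import random

def box_muller():
    u1 = random.random() or 1e-12          # guard against log(0)
    u2 = random.random()
    r = math.sqrt(-2.0 * math.log(u1))
    z0 = r * math.cos(2.0 * math.pi * u2)
    z1 = r * math.sin(2.0 * math.pi * u2)
    return z0, z1

print(box_muller())
```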

Neural Networks in Trading: Hierarchical Vector Transformer (Final Part)
We continue studying the Hierarchical Vector Transformer method. In this article, we will complete the construction of the model. We will also train and test it on real historical data.

Developing an MQL5 RL agent with RestAPI integration (Part 3): Creating automatic moves and test scripts in MQL5
This article discusses the implementation of automatic moves in the tic-tac-toe game in Python, integrated with MQL5 functions and unit tests. The goal is to improve the interactivity of the game and ensure the reliability of the system through testing in MQL5. The presentation covers game logic development, integration, and hands-on testing, and concludes with the creation of a dynamic game environment and a robust integrated system.

Neural Networks Made Easy (Part 97): Training Models With MSFformer
When exploring various model architecture designs, we often devote insufficient attention to the process of model training. In this article, I aim to address this gap.

Neural networks made easy (Part 59): Dichotomy of Control (DoC)
In the previous article, we got acquainted with the Decision Transformer. However, the complex stochastic environment of the foreign exchange market did not allow us to fully realize the potential of the presented method. In this article, I will introduce an algorithm aimed at improving the performance of algorithms in stochastic environments.

Reimagining Classic Strategies (Part IV): SP500 and US Treasury Notes
In this series of articles, we analyze classical trading strategies using modern algorithms to determine whether we can improve the strategy using AI. In today's article, we revisit a classical approach for trading the SP500 using the relationship it has with US Treasury Notes.

Reimagining Classic Strategies: Crude Oil
In this article, we revisit a classic crude oil trading strategy with the aim of enhancing it by leveraging supervised machine learning algorithms. We will construct a least-squares model to predict future Brent crude oil prices based on the spread between Brent and WTI crude oil prices. Our goal is to identify a leading indicator of future changes in Brent prices.
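As a schematic of the least-squares idea (using synthetic placeholder numbers, not real Brent or WTI prices), one can regress a future price change on the current spread:

```python
# Schematic least-squares fit (synthetic placeholder numbers, not real Brent or
# WTI prices): regress the future Brent change on the current Brent-WTI spread.
import numpy as np

rng = np.random.default_rng(2)
spread = rng.normal(size=300)                                    # placeholder spread
future_change = 0.5 * spread + rng.normal(scale=0.1, size=300)   # placeholder target

X = np.column_stack([spread, np.ones_like(spread)])              # slope and intercept
coef, *_ = np.linalg.lstsq(X, future_change, rcond=None)
print("slope, intercept:", coef)
```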

Category Theory in MQL5 (Part 17): Functors and Monoids
This article, the final in our series on functors, revisits monoids as a category. Monoids, which we have already introduced in this series, are used here to aid in position sizing, together with multi-layer perceptrons.

Integrate Your Own LLM into EA (Part 5): Develop and Test Trading Strategy with LLMs (I) - Fine-tuning
With the rapid development of artificial intelligence, large language models (LLMs) have become one of its most important components, so we should think about how to integrate powerful LLMs into our algorithmic trading. For most people, it is difficult to fine-tune these models to their needs, deploy them locally, and then apply them to algorithmic trading. This series of articles takes a step-by-step approach to achieving that goal.

MQL5 Wizard Techniques you should know (Part 58): Reinforcement Learning (DDPG) with Moving Average and Stochastic Oscillator Patterns
Moving Average and Stochastic Oscillator are very common indicators whose collective patterns we explored in the prior article, via a supervised learning network, to see which “patterns would stick”. We take the analysis from that article a step further by considering the effect reinforcement learning, when used with this trained network, would have on performance. Readers should note that our testing is over a very limited time window. Nonetheless, we continue to harness the minimal coding requirements afforded by the MQL5 wizard in showcasing this.

MQL5 Wizard Techniques you should know (Part 21): Testing with Economic Calendar Data
Economic Calendar data is not available for testing with Expert Advisors in the Strategy Tester by default. We look at how databases can provide a workaround for this limitation. In this article, we explore how SQLite databases can be used to archive Economic Calendar news so that wizard assembled Expert Advisors can use it to generate trade signals.
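As a language-agnostic sketch of the archiving idea (shown here with Python's sqlite3 rather than MQL5's database functions, and with an assumed table layout), the workflow is: create a table, insert news records, and query them by currency and time:

```python
# Sketch of the archiving idea using Python's sqlite3 (the article works with
# MQL5's database functions; the table layout and values here are assumptions).
import sqlite3

con = sqlite3.connect("calendar.sqlite")
con.execute("""CREATE TABLE IF NOT EXISTS calendar_events (
                   event_time TEXT,
                   currency   TEXT,
                   event_name TEXT,
                   actual     REAL,
                   forecast   REAL,
                   previous   REAL)""")
con.execute("INSERT INTO calendar_events VALUES (?, ?, ?, ?, ?, ?)",
            ("2020-01-10 13:30", "USD", "Nonfarm Payrolls", 145.0, 160.0, 250.0))
con.commit()

# Tester-side query: events for a currency up to a given bar time.
rows = con.execute("""SELECT event_time, event_name, actual, forecast
                      FROM calendar_events
                      WHERE currency = ? AND event_time <= ?""",
                   ("USD", "2020-01-10 14:00")).fetchall()
print(rows)
con.close()
```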

MQL5 Wizard Techniques you should know (Part 32): Regularization
Regularization is a form of penalizing the loss function in proportion to the discrete weighting applied throughout the various layers of a neural network. We look at the significance this can have, for some of the various regularization forms, in test runs with a wizard assembled Expert Advisor.
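As a compact reminder of what such a penalty looks like (a generic L2 example, not the wizard class from the article), the regularized loss adds a term proportional to the squared weights of every layer:

```python
# Generic L2 regularization sketch (not the wizard class from the article): the
# loss is penalized in proportion to the squared weights of every layer.
import numpy as np

def l2_penalized_loss(base_loss, weight_matrices, lam=0.01):
    """base_loss plus lam times the sum of squared weights across all layers."""
    penalty = sum(np.sum(w ** 2) for w in weight_matrices)
    return base_loss + lam * penalty

weights = [np.array([[0.2, -0.5], [0.1, 0.4]]),   # layer 1 (hypothetical values)
           np.array([[0.3], [-0.2]])]             # layer 2 (hypothetical values)
print(l2_penalized_loss(base_loss=0.85, weight_matrices=weights, lam=0.01))
```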

Integrating MQL5 with data processing packages (Part 2): Machine Learning and Predictive Analytics
In our series on integrating MQL5 with data processing packages, we delve into the powerful combination of machine learning and predictive analytics. We will explore how to seamlessly connect MQL5 with popular machine learning libraries to enable sophisticated predictive models for financial markets.

Population optimization algorithms: Bird Swarm Algorithm (BSA)
The article explores the Bird Swarm Algorithm (BSA), inspired by the collective flocking interactions of birds in nature. The different search strategies of individuals in BSA, including switching between flight, vigilance and foraging behavior, make this algorithm multifaceted. It uses the principles of bird flocking, communication, adaptability, leading and following to efficiently find optimal solutions.

Most notable Artificial Cooperative Search algorithm modifications (ACSm)
Here we consider the evolution of the ACS algorithm: three modifications aimed at improving its convergence characteristics and efficiency. We trace the transformation of one of the leading optimization algorithms, from matrix modifications to revolutionary approaches to population formation.

Neural networks made easy (Part 89): Frequency Enhanced Decomposition Transformer (FEDformer)
All the models we have considered so far analyze the state of the environment as a time sequence. However, the time series can also be represented in the form of frequency features. In this article, I introduce you to an algorithm that uses frequency components of a time sequence to predict future states.

MQL5 Wizard Techniques you should know (Part 49): Reinforcement Learning with Proximal Policy Optimization
Proximal Policy Optimization is another reinforcement learning algorithm that updates the policy, often in network form, in very small incremental steps to ensure model stability. We examine how this could be of use, as we have in previous articles, in a wizard assembled Expert Advisor.
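The "small incremental steps" come from PPO's clipped surrogate objective; the snippet below shows that objective in isolation (a generic sketch, not the article's wizard implementation):

```python
# Generic sketch of PPO's clipped surrogate objective (not the article's wizard
# code): the probability ratio is clipped so each policy update stays small.
import numpy as np

def ppo_clip_objective(new_logp, old_logp, advantage, eps=0.2):
    ratio = np.exp(new_logp - old_logp)            # pi_new(a|s) / pi_old(a|s)
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps)
    return np.minimum(ratio * advantage, clipped * advantage).mean()

new_logp = np.array([-0.9, -1.1, -0.4])            # hypothetical log-probabilities
old_logp = np.array([-1.0, -1.0, -1.0])
advantage = np.array([1.0, -0.5, 2.0])             # hypothetical advantages
print(ppo_clip_objective(new_logp, old_logp, advantage))
```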

Neural Network in Practice: Pseudoinverse (II)
Since these articles are educational in nature and are not intended to show the implementation of specific functionality, we will do things a little differently in this article. Instead of showing how to apply factorization to obtain the inverse of a matrix, we will focus on factorization of the pseudoinverse. The reason is that there is no point in showing how to get the general coefficient if we can do it in a special way. Even better, the reader can gain a deeper understanding of why things happen the way they do. So, let's now figure out why hardware is replacing software over time.
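For readers who want a numerical anchor (a generic sketch, unrelated to the article's closing remark about hardware), the Moore-Penrose pseudoinverse yields the least-squares solution even when the matrix is not square:

```python
# Minimal sketch: the Moore-Penrose pseudoinverse gives the least-squares
# solution of an overdetermined system A x = b.
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])           # 3 equations, 2 unknowns
b = np.array([1.0, 2.0, 2.0])

x = np.linalg.pinv(A) @ b            # least-squares solution via the pseudoinverse
print(x)
print(np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0]))  # matches lstsq: True
```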

Artificial Bee Hive Algorithm (ABHA): Theory and methods
In this article, we will consider the Artificial Bee Hive Algorithm (ABHA) developed in 2009. The algorithm is aimed at solving continuous optimization problems. We will look at how ABHA draws inspiration from the behavior of a bee colony, where each bee has a unique role that helps them find resources more efficiently.

Neural Networks in Trading: Injection of Global Information into Independent Channels (InjectTST)
Most modern multivariate time series forecasting methods use the independent channels approach. This ignores the natural dependence between the different channels of the same time series. Smart use of the two approaches (independent and mixed channels) is the key to improving the performance of the models.

Category Theory in MQL5 (Part 12): Orders
This article, part of a series that follows the Category Theory implementation of Graphs in MQL5, delves into Orders. We examine how concepts of Order Theory can support monoid sets in informing trade decisions by considering two major ordering types.

Data label for time series mining (Part 5): Apply and Test in EA Using Socket
This series of articles introduces several time series labeling methods that can create data suitable for most artificial intelligence models. Targeted data labeling can make the trained model better match the intended design, improve its accuracy, and even help it make a qualitative leap!

Neural networks made easy (Part 82): Ordinary Differential Equation models (NeuralODE)
In this article, we will discuss another type of model, one aimed at studying the dynamics of the environmental state.

Hybridization of population algorithms. Sequential and parallel structures
Here we will dive into the world of hybridization of optimization algorithms by looking at three key types: strategy mixing, sequential and parallel hybridization. We will conduct a series of experiments combining and testing relevant optimization algorithms.

Chemical reaction optimization (CRO) algorithm (Part I): Process chemistry in optimization
In the first part of this article, we will dive into the world of chemical reactions and discover a new approach to optimization! Chemical reaction optimization (CRO) uses principles derived from the laws of thermodynamics to achieve efficient results. We will reveal the secrets of decomposition, synthesis and other chemical processes that became the basis of this innovative method.

Neural Networks in Trading: Point Cloud Analysis (PointNet)
Direct point cloud analysis avoids unnecessary data growth and improves the performance of models in classification and segmentation tasks. Such approaches demonstrate high performance and robustness to perturbations in the original data.

Anarchic Society Optimization (ASO) algorithm
In this article, we will get acquainted with the Anarchic Society Optimization (ASO) algorithm and discuss how an algorithm based on the irrational and adventurous behavior of participants in an anarchic society (an anomalous system of social interaction free from centralized power and various kinds of hierarchies) is able to explore the solution space and avoid the traps of local optima. The article presents a unified ASO structure applicable to both continuous and discrete problems.

Mastering Model Interpretation: Gaining Deeper Insight From Your Machine Learning Models
Machine learning is a complex and rewarding field for practitioners of any experience level. In this article, we dive deep into the inner mechanisms powering the models you build, exploring the intricate world of features, predictions and impactful decisions, unravelling the complexities and gaining a firm grasp of model interpretation. Learn the art of navigating tradeoffs, enhancing predictions and ranking feature importance, all while ensuring robust decision making. This essential read helps you get more performance from your machine learning models and extract more value from employing machine learning methodologies.

Population optimization algorithms: Spiral Dynamics Optimization (SDO) algorithm
The article presents an optimization algorithm based on the patterns of constructing spiral trajectories in nature, such as mollusk shells - the spiral dynamics optimization (SDO) algorithm. I have thoroughly revised and modified the algorithm proposed by the authors. The article will consider the necessity of these changes.

Population optimization algorithms: Boids Algorithm
The article considers the Boids algorithm, based on unique examples of animal flocking behavior. The Boids algorithm, in turn, serves as the basis for a whole class of algorithms united under the name "Swarm Intelligence".

MQL5 Wizard Techniques you should know (Part 41): Deep-Q-Networks
The Deep-Q-Network is a reinforcement learning algorithm that engages neural networks in projecting the next Q-value and ideal action during the training process of a machine learning module. We have already considered an alternative reinforcement learning algorithm, Q-Learning. This article therefore presents another example of how an MLP trained with reinforcement learning can be used within a custom signal class.
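The core of that projection is the temporal-difference target the network is trained toward; a bare-bones version (a generic sketch, not the article's signal class) looks like this:

```python
# Bare-bones DQN target (generic sketch, not the article's signal class): the
# network is trained toward r + gamma * max_a Q(s', a).
import numpy as np

def dqn_target(reward, next_q_values, gamma=0.99, done=False):
    """Temporal-difference target for a single transition."""
    return reward if done else reward + gamma * np.max(next_q_values)

next_q = np.array([0.3, 1.2, -0.4])                  # Q(s', a) per action (hypothetical)
print(dqn_target(reward=0.5, next_q_values=next_q))  # 0.5 + 0.99 * 1.2
```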

MQL5 Wizard Techniques you should know (Part 36): Q-Learning with Markov Chains
Reinforcement Learning is one of the three main paradigms in machine learning, alongside supervised learning and unsupervised learning. It is concerned with optimal control, that is, learning the long-term policy that best suits the objective function. It is with this backdrop that we explore its possible role in informing the learning process of an MLP in a wizard assembled Expert Advisor.
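The tabular update behind Q-Learning (shown here in its textbook form, independent of the article's Markov-chain construction) is essentially a one-liner:

```python
# Textbook tabular Q-Learning update (independent of the article's Markov-chain
# construction): nudge Q(s, a) toward r + gamma * max_a' Q(s', a').
import numpy as np

n_states, n_actions = 4, 2
Q = np.zeros((n_states, n_actions))

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (td_target - Q[s, a])

q_update(Q, s=0, a=1, r=1.0, s_next=2)
print(Q[0])        # the value of action 1 in state 0 moved toward the target
```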