Better NN EA development

 

Neural networks made easy (Part 46): Goal-conditioned reinforcement learning (GCRL)

"Goal-conditioned reinforcement learning" sounds a little unusual or even strange. After all, the basic principle of reinforcement learning is aimed at maximizing the total reward during the interaction of the agent with the environment. But in this context, we are looking at achieving a specific goal at a specific stage or within a specific scenario.

In this article, we will have a look at yet another reinforcement learning approach. It is called goal-conditioned reinforcement learning (GCRL). In this approach, an agent is trained to achieve different goals in specific scenarios.
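A minimal sketch of the core idea (not the article's MQL5 implementation): in GCRL the policy and critic receive the goal alongside the state, and the reward is computed relative to that goal. The environment, dimensions, and threshold below are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def goal_conditioned_reward(state, goal, eps=0.1):
    # Sparse GCRL reward: +1 only when the state is close enough
    # to the current goal, 0 otherwise.
    return float(np.linalg.norm(state - goal) < eps)

def policy_input(state, goal):
    # The agent's input is the concatenation of state and goal,
    # so a single model can learn behaviour for many goals.
    return np.concatenate([state, goal])

state = rng.normal(size=4)   # hypothetical 4-dim market state
goal = rng.normal(size=4)    # hypothetical goal vector
x = policy_input(state, goal)            # 8-dim actor/critic input
r = goal_conditioned_reward(state, goal)
print(x.shape, r)
```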
 

Neural networks made easy (Part 47): Continuous action space

The EA assessed the market situation at each new trading candle and made a decision on a trading operation. But every upcoming bar carries risks for our account. Price movement within a bar can be detrimental to our balance. This is why it is always recommended to use stop losses. This simple approach allows us to limit risks per trade.

In this article, we expand the range of tasks of our agent. The training process will include some aspects of money and risk management, which are an integral part of any trading strategy.
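For the risk-limiting idea above, the standard arithmetic is to size the position so that a stop-loss hit costs a fixed fraction of the balance. A hedged sketch in Python (the symbol parameters are illustrative, not taken from the article):

```python
def lots_for_risk(balance, risk_pct, sl_points, point_value, lot_step=0.01):
    # Position size such that hitting the stop loss loses ~risk_pct
    # of the balance. point_value = account-currency value of one
    # point of price movement for a 1.0-lot position.
    risk_money = balance * risk_pct
    raw_lots = risk_money / (sl_points * point_value)
    return max(lot_step, round(raw_lots / lot_step) * lot_step)

# Example: 10,000 balance, 1% risk per trade, 200-point stop
print(lots_for_risk(10_000, 0.01, sl_points=200, point_value=1.0))  # 0.5
```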
 

Neural networks made easy (Part 48): Methods for reducing overestimation of Q-function values

As you might remember, in DDPG, the Critic model learns the Q-function (prediction of expected reward) based on the results of interaction with the environment, while the Agent model is trained to maximize the expected reward, based only on the results of the Critic’s assessment of actions. Consequently, the quality of the Critic’s training greatly influences the Agent’s behavioral strategy and its ability to make optimal decisions.

In the previous article, we introduced the DDPG method, which allows training models in a continuous action space. However, like other Q-learning methods, DDPG is prone to overestimating Q-function values. This problem often results in training an agent with a suboptimal strategy. In this article, we will look at some approaches to overcome the mentioned issue.
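One of the approaches covered, clipped double Q-learning from TD3, replaces the single target critic with the minimum of two independent target critics. A minimal numpy sketch of just the target computation (the critic values here are hypothetical numbers):

```python
import numpy as np

def td_target_clipped(reward, done, q1_next, q2_next, gamma=0.99):
    # Taking the MINIMUM of two target critics counteracts the upward
    # bias that max-based Q-learning targets accumulate.
    q_next = np.minimum(q1_next, q2_next)
    return reward + gamma * (1.0 - done) * q_next

# A single critic would use q1_next alone and inherit its optimism;
# the min of two independent estimates errs high far less often.
print(td_target_clipped(reward=1.0, done=0.0, q1_next=5.2, q2_next=4.7))
```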
 

Neural networks made easy (Part 49): Soft Actor-Critic

In this article, we will focus our attention on another algorithm - Soft Actor-Critic (SAC). It was first presented in the paper "Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor" (January 2018), almost simultaneously with TD3. The two methods share some similarities, but the algorithms also differ. The main goal of SAC is to maximize the expected reward given the maximum entropy of the policy, which allows finding a variety of optimal solutions in stochastic environments.
We continue our discussion of reinforcement learning algorithms for solving continuous action space problems. In this article, I will present the Soft Actor-Critic (SAC) algorithm. The main advantage of SAC is the ability to find optimal policies that not only maximize the expected reward, but also have maximum entropy (diversity) of actions.
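The entropy term enters the Bellman target directly. A sketch of the soft target under hypothetical next-action log-probabilities, where alpha is the temperature that trades reward off against entropy:

```python
import numpy as np

def soft_td_target(reward, done, q1_next, q2_next, log_prob_next,
                   gamma=0.99, alpha=0.2):
    # Like the clipped double-Q target, but the agent is also paid
    # for entropy: -alpha * log pi(a'|s') is added to the value.
    q_next = np.minimum(q1_next, q2_next) - alpha * log_prob_next
    return reward + gamma * (1.0 - done) * q_next

print(soft_td_target(reward=1.0, done=0.0,
                     q1_next=5.2, q2_next=4.7, log_prob_next=-1.3))
```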
 

Data Science and Machine Learning (Part 15): SVM, A Must-Have Tool in Every Trader's Toolbox


Support Vector Machine (SVM) is a powerful supervised machine learning algorithm used for linear or nonlinear classification and regression tasks, and sometimes outlier detection tasks.

Unlike Bayesian classification techniques and logistic regression, which deploy simple mathematical models to classify information, SVM relies on more complex mathematical machinery aimed at finding the optimal hyperplane that separates the data in an N-dimensional space.

SVM is usually used for classification tasks, which is also what we will do in this article.

Discover the role of Support Vector Machines (SVM) in trading. This guide explores how SVM can improve your trading strategies and decision-making, with real-world applications, step-by-step tutorials, and practical insights for navigating modern markets.
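For readers who want to try the idea outside MQL5 first, a minimal classification example with scikit-learn's SVC on synthetic data (the article itself implements SVM in MQL5):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=8, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)

# The RBF kernel lets SVC fit a nonlinear separating surface;
# kernel="linear" would fit a plain hyperplane instead.
clf = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```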
 

Data Science and Machine Learning (Part 20): Algorithmic Trading Insights, A Faceoff Between LDA and PCA in MQL5

LDA is a supervised machine learning algorithm that aims to find a linear combination of features that best separates the classes in a dataset.

Just like Principal Component Analysis (PCA), it is a dimensionality reduction algorithm. Both are common choices for dimensionality reduction, so in this article we will compare them and observe in which situations each algorithm works best. We have already discussed PCA in prior articles of this series, so let us commence by looking at what the LDA algorithm is all about, as it will be our main focus. Finally, we will compare their performance on a simple dataset and in the Strategy Tester, so make sure you stick around to the end for some great data science material.
Uncover the secrets behind these powerful dimensionality reduction techniques as we dissect their applications within the MQL5 trading environment. Delve into the nuances of Linear Discriminant Analysis (LDA) and Principal Component Analysis (PCA), gaining a profound understanding of their impact on strategy development and market analysis.
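The key practical difference: PCA is unsupervised (it maximizes variance), while LDA uses the class labels (it maximizes class separation) and yields at most k-1 components for k classes. A small scikit-learn sketch of the comparison on a toy dataset:

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_iris(return_X_y=True)

# PCA ignores y: components follow directions of maximum variance.
X_pca = PCA(n_components=2).fit_transform(X)

# LDA uses y: components follow directions that best separate the
# classes (3 iris classes -> at most 2 components).
X_lda = LinearDiscriminantAnalysis(n_components=2).fit_transform(X, y)

print(X_pca.shape, X_lda.shape)  # (150, 2) (150, 2)
```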
 

Quantization in machine learning (Part 1): Theory, sample code, analysis of implementation in CatBoost 

The article considers the theoretical application of quantization in the construction of tree models. No complex mathematical equations are used. While writing the article, I discovered the absence of an established, unified terminology in the scientific works of different authors, so I will choose the terminology options that, in my opinion, best reflect the meaning. Besides, I will use terms of my own for matters left unaddressed by other researchers. This article uses terms and concepts I have previously described in the article "CatBoost machine learning algorithm from Yandex without learning Python or R", so I recommend that you familiarize yourself with it before reading the current article.
The article considers the theoretical application of quantization in the construction of tree models and showcases the implemented quantization methods in CatBoost. No complex mathematical equations are used.
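In this context, quantization means replacing a continuous feature with a small set of border-delimited buckets before the trees split on it. A minimal numpy sketch of a quantile-based quantization table (CatBoost itself offers several border-selection modes):

```python
import numpy as np

rng = np.random.default_rng(0)
feature = rng.normal(size=10_000)  # hypothetical continuous feature

# Quantization table: border values splitting the feature into
# (n_borders + 1) buckets of roughly equal population.
n_borders = 15
borders = np.quantile(feature, np.linspace(0, 1, n_borders + 2)[1:-1])

# Quantized feature: each value is replaced by its bucket index.
quantized = np.searchsorted(borders, feature)
print(borders.shape, quantized.min(), quantized.max())  # (15,) 0 15
```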
 

The Disagreement Problem: Diving Deeper into The Complexity of Explainability in AI

The disagreement problem is an open area of research in an interdisciplinary field known as Explainable Artificial Intelligence (XAI). Explainable Artificial Intelligence attempts to help us understand how our models arrive at their decisions, but unfortunately everything is easier said than done.

We are all aware that machine learning models and available datasets are growing larger and more complex. As a matter of fact, the data scientists who develop machine learning algorithms cannot exactly explain their algorithm’s behaviour across all possible datasets.  Explainable Artificial Intelligence (XAI) helps us build trust in our models, explain their functionality and validate that the models are ready to be deployed in production; but as promising as that may sound, this article will show the reader why we cannot blindly trust any explanation we may get from any application of Explainable Artificial Intelligence technology.

Dive into the heart of Artificial Intelligence's enigma as we navigate the tumultuous waters of explainability. In a realm where models conceal their inner workings, our exploration unveils the "disagreement problem" that echoes through the corridors of machine learning.
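A concrete way to see the disagreement problem is to explain the same model with two different methods and compare the feature rankings. A sketch with scikit-learn, using impurity-based importances versus permutation importances (the rankings frequently differ):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=1000, n_features=10, random_state=1)
model = RandomForestClassifier(random_state=1).fit(X, y)

# Explanation 1: impurity-based importances from the trees themselves.
rank_impurity = np.argsort(model.feature_importances_)[::-1]

# Explanation 2: permutation importances from shuffling each column.
perm = permutation_importance(model, X, y, n_repeats=10, random_state=1)
rank_perm = np.argsort(perm.importances_mean)[::-1]

# If the two explanations agreed, the rankings would be identical.
print("impurity ranking:   ", rank_impurity)
print("permutation ranking:", rank_perm)
```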
 
Sergey Golubev #:
Quantization in machine learning (Part 1): Theory, sample code, analysis of implementation in CatBoost

Quantization in machine learning (Part 2): Data preprocessing, table selection, training CatBoost models 

The article considers the practical application of quantization in the construction of tree models. No complex mathematical equations are used. This is the second part of the article "Quantization and other methods of preprocessing input data in machine learning", so I strongly recommend reading the first part before this one. Here we will talk about the following:

  • First, we will consider the methods for preprocessing sample data implemented in MQL5.
  • Second, we will conduct an experiment that provides insight into the feasibility of data quantization.
The article considers the practical application of quantization in the construction of tree models. The methods for selecting quantum tables and data preprocessing are considered. No complex mathematical equations are used.
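Since quantum-table selection is the theme here, a small sketch of two candidate tables for the same skewed feature: uniform borders versus quantile borders. Which table actually helps the model is exactly the kind of question the article's experiment addresses.

```python
import numpy as np

rng = np.random.default_rng(0)
feature = rng.lognormal(size=10_000)  # skewed, like many market features
n_borders = 15

# Candidate table 1: uniform grid between min and max.
uniform = np.linspace(feature.min(), feature.max(), n_borders + 2)[1:-1]

# Candidate table 2: quantile borders (equal-population buckets).
quantile = np.quantile(feature, np.linspace(0, 1, n_borders + 2)[1:-1])

for name, borders in [("uniform", uniform), ("quantile", quantile)]:
    counts = np.bincount(np.searchsorted(borders, feature),
                         minlength=n_borders + 1)
    print(name, "bucket sizes:", counts)
```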