
Forum on trading, automated trading systems and testing trading strategies
Better NN EA development
Sergey Golubev, 2023.10.16 06:17
Neural networks made easy (Part 38): Self-Supervised Exploration via Disagreement
Forum on trading, automated trading systems and testing trading strategies
Better NN EA development
Sergey Golubev, 2024.02.27 17:50
Data Science and Machine Learning (Part 20) : Algorithmic Trading Insights, A Faceoff Between LDA and PCA in MQL5
LDA is a supervised machine learning algorithm that aims to find a linear combination of features that best separates the classes in a dataset.
Just like Principal Component Analysis (PCA), it is a dimensionality reduction algorithm. Both are common choices for dimensionality reduction, so in this article we will compare them and observe in which situations each algorithm works best. We already discussed PCA in the prior articles of this series, so let us commence by observing what the LDA algorithm is all about, as we will discuss it mostly; finally, we will compare their performance on a simple dataset and in the Strategy Tester. Make sure you stick around to the end for awesome data science stuff.
Neural networks made easy (Part 60): Online Decision Transformer (ODT)
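To make the LDA vs. PCA comparison above concrete, here is a minimal sketch in Python with scikit-learn (my own illustration, not the article's MQL5 implementation; the synthetic dataset and the logistic-regression classifier are assumptions for the example):

```python
# Minimal sketch: LDA (supervised) vs. PCA (unsupervised) dimensionality reduction.
# Assumes scikit-learn; synthetic features/labels stand in for price-based data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=1000, n_features=20, n_informative=5,
                           n_classes=2, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# PCA ignores the labels: it keeps the directions of maximum variance.
pca_model = make_pipeline(PCA(n_components=2), LogisticRegression(max_iter=1000))

# LDA uses the labels: it keeps the directions that best separate the classes
# (at most n_classes - 1 components, so a single component here).
lda_model = make_pipeline(LinearDiscriminantAnalysis(n_components=1),
                          LogisticRegression(max_iter=1000))

for name, model in [("PCA", pca_model), ("LDA", lda_model)]:
    model.fit(X_train, y_train)
    print(f"{name} test accuracy: {model.score(X_test, y_test):.3f}")
```

On data where class separation does not line up with the directions of largest variance, the supervised LDA projection typically preserves more class information than the unsupervised PCA projection.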
Forum on trading, automated trading systems and testing trading strategies
Better NN EA
Sergey Golubev, 2024.03.07 13:50
Neural networks made easy (Part 61): Optimism issue in offline reinforcement learning
Forum on trading, automated trading systems and testing trading strategies
Better NN EA development
Sergey Golubev, 2024.03.08 16:08
The Disagreement Problem: Diving Deeper into the Complexity of Explainability in AI
The disagreement problem is an open area of research in an interdisciplinary field known as Explainable Artificial Intelligence (XAI). Explainable Artificial Intelligence attempts to help us understand how our models arrive at their decisions, but unfortunately this is easier said than done.
We are all aware that machine learning models and available datasets are growing larger and more complex. As a matter of fact, the data scientists who develop machine learning algorithms cannot exactly explain their algorithm’s behaviour across all possible datasets. Explainable Artificial Intelligence (XAI) helps us build trust in our models, explain their functionality and validate that the models are ready to be deployed in production; but as promising as that may sound, this article will show the reader why we cannot blindly trust any explanation we may get from any application of Explainable Artificial Intelligence technology.
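As a rough illustration of this disagreement, the Python sketch below asks two explanation techniques about the same model and compares their feature rankings. The synthetic data and the particular pair of methods (impurity-based vs. permutation importance) are my own choices for the example, not taken from the article:

```python
# Minimal sketch of the disagreement problem: two explanation methods can rank
# the same model's features differently. Synthetic data; illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=8, n_informative=3,
                           n_redundant=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Explanation 1: impurity-based feature importance (computed from the trees).
rank_impurity = np.argsort(model.feature_importances_)[::-1]

# Explanation 2: permutation importance (computed on held-out data).
perm = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
rank_permutation = np.argsort(perm.importances_mean)[::-1]

print("Impurity ranking:   ", rank_impurity)
print("Permutation ranking:", rank_permutation)
# With redundant, correlated features the two rankings often disagree,
# which is exactly the kind of conflict XAI research tries to resolve.
```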
Forum on trading, automated trading systems and testing trading strategies
Better NN EA
Sergey Golubev, 2024.04.18 16:47
Neural networks made easy (Part 66): Exploration problems in offline learning
As we move through this series of articles devoted to reinforcement learning methods, we face the question of balancing environment exploration against the exploitation of learned policies. We have previously considered various methods of stimulating the Agent to explore. But quite often, algorithms that demonstrate excellent results in online learning are not as effective offline. The problem is that in offline mode, information about the environment is limited by the size of the training dataset. Most often, the data selected for model training is narrowly targeted, as it is collected within a small subspace of the task. This gives an even more limited picture of the environment. However, in order to find the optimal solution, the Agent needs the most complete understanding of the environment and its patterns. We have noted earlier that learning results often depend on the training dataset.
Forum on trading, automated trading systems and testing trading strategies
Better NN EA
Sergey Golubev, 2024.04.18 16:54
Neural networks made easy (Part 67): Using past experience to solve new tasks
Forum on trading, automated trading systems and testing trading strategies
Better NN EA
Sergey Golubev, 2024.04.28 17:35
Neural networks made easy (Part 68): Offline Preference-guided Policy Optimization
Forum on trading, automated trading systems and testing trading strategies
Better NN EA
Sergey Golubev, 2024.06.01 09:15
Neural networks made easy (Part 69): Density-based support constraint for the behavioral policy (SPOT)
Offline reinforcement learning allows training models on data previously collected from interactions with the environment. This significantly reduces the amount of direct interaction with the environment that is required. Moreover, given the complexity of modeling the environment, we can collect real-time data from multiple exploration agents and then train the model on this data.
At the same time, using a static training dataset significantly reduces the environment information available to us. Due to limited resources, we cannot preserve the entire diversity of the environment in the training dataset.
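The sketch below illustrates the general idea of learning only from a fixed dataset of transitions, without any new environment interaction. It is a toy tabular Q-learning loop in Python/NumPy; the buffer layout, sizes, and hyperparameters are assumptions for illustration, not the article's MQL5 code:

```python
# Minimal sketch of offline RL: the model only sees a fixed dataset of
# (state, action, reward, next_state) transitions collected in advance.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 10, 4

# Static training dataset: transitions gathered earlier by exploration agents.
# In practice it covers only a small subspace of the environment.
dataset = [(rng.integers(n_states), rng.integers(n_actions),
            rng.normal(), rng.integers(n_states)) for _ in range(5000)]

Q = np.zeros((n_states, n_actions))
gamma, lr = 0.99, 0.1

for epoch in range(20):
    for s, a, r, s_next in dataset:          # no new environment interaction
        target = r + gamma * Q[s_next].max()
        Q[s, a] += lr * (target - Q[s, a])

# The learned greedy policy is only as good as the dataset's coverage.
policy = Q.argmax(axis=1)
print(policy)
```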
Forum on trading, automated trading systems and testing trading strategies
Better NN EA
Sergey Golubev, 2024.06.01 09:17
Neural networks made easy (Part 70): Closed-Form Policy Improvement Operators (CFPI)
The approach of optimizing the Agent's policy with constraints on its behavior has proven promising for solving offline reinforcement learning problems. By exploiting historical transitions, the Agent's policy is trained to maximize a learned value function.
A behavior-constrained policy helps to avoid a significant distribution shift in the Agent's actions, which provides sufficient confidence in the estimates of action values. In the previous article, we got acquainted with the SPOT method, which exploits this approach. As a continuation of the topic, I propose getting acquainted with the Closed-Form Policy Improvement (CFPI) algorithm, which was presented in the paper "Offline Reinforcement Learning with Closed-Form Policy Improvement Operators".
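To show the general shape of a behavior-constrained policy update, here is a minimal Python/PyTorch sketch in the style of a TD3+BC penalty: maximize the learned value function while penalizing deviation from the actions recorded in the dataset. This is my own illustration of the constrained-improvement idea that SPOT and CFPI build on, not the closed-form CFPI operator from the paper; the networks, batch, and coefficient are assumed:

```python
# Minimal sketch of a behavior-constrained policy-improvement step.
# Maximize Q(s, pi(s)) while keeping pi(s) close to the dataset actions.
import torch
import torch.nn as nn

state_dim, action_dim = 8, 2
actor = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                      nn.Linear(64, action_dim), nn.Tanh())
critic = nn.Sequential(nn.Linear(state_dim + action_dim, 64), nn.ReLU(),
                       nn.Linear(64, 1))
optimizer = torch.optim.Adam(actor.parameters(), lr=3e-4)
alpha = 2.5  # trade-off between value maximization and behavior cloning

def policy_update(states, behavior_actions):
    """One constrained policy-improvement step on a batch from the offline dataset."""
    actions = actor(states)
    q_values = critic(torch.cat([states, actions], dim=-1))
    # Penalize deviation from the actions actually stored in the training
    # dataset, which keeps the policy inside the data distribution.
    lam = alpha / q_values.abs().mean().detach()
    loss = -(lam * q_values).mean() + ((actions - behavior_actions) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example call on a random batch standing in for offline transitions.
batch_s = torch.randn(32, state_dim)
batch_a = torch.rand(32, action_dim) * 2 - 1
print(policy_update(batch_s, batch_a))
```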