The EA opened a buy position and added to it on every candle. I've seen this somewhere before.
Does the Actor have a negative error, or is it just a hyphen?
Dear Dmitriy Gizlyk. You once promised us to convert Fractal_LSTM to multithreading. Would you be so kind as to find the time? I can still understand something at that level, but beyond that I am completely lost, and doing it purely mechanically in this case is unlikely to work. I think many of those present here will be grateful to you. After all, this is far from being a programmers-only forum.
star-ik #:
Dear Dmitriy Gizlyk. You once promised us to convert Fractal_LSTM to multithreading. Would you be so kind as to find the time? I can still understand something at that level, but beyond that I am completely lost, and doing it purely mechanically in this case is unlikely to work. I think many of those present here will be grateful to you. After all, this is far from being a programmers-only forum.
The LSTM layer in the OpenCL implementation is described in the article "Neural networks made easy (Part 22): Unsupervised learning of recurrent models".

Dmitriy Gizlyk #:
The LSTM layer in the OpenCL implementation is described in the article "Neural networks made easy (Part 22): Unsupervised learning of recurrent models".
This is one of those EAs that I have not managed to get to trade. That is why I would like to see multithreading specifically in the EA from Part 4 (supervised learning), or in this one (Part 22), but with some trading functionality.

Check out the new article: Neural networks made easy (Part 41): Hierarchical models.
The article describes hierarchical learning models that offer an effective approach to solving complex machine learning problems. Hierarchical models consist of several levels, each responsible for a different aspect of the task.
The Scheduled Auxiliary Control (SAC-X) algorithm is a reinforcement learning method that uses a hierarchical structure for decision making. It represents a new approach to solving problems with sparse rewards. It is based on four main principles:
1. Each state-action pair is accompanied by a vector of rewards, consisting of the (usually sparse) external reward and internal auxiliary rewards.
2. For each entry of the reward vector, a separate policy, called an intention, is trained to maximize it.
3. A scheduler selects and switches between intentions with the goal of maximizing the external reward.
4. Learning is performed off-policy, so experience collected while following one intention can be used to train all the others.
The SAC-X algorithm uses these principles to efficiently solve sparse reward problems. Reward vectors allow learning from different aspects of a task and create multiple intentions, each of which maximizes its own reward. The scheduler manages the execution of intentions by choosing the optimal strategy for achieving external objectives. Learning is performed off-policy, which allows experience gathered by different intentions to be used for effective learning.
This approach allows the agent to efficiently solve sparse reward problems by learning from both external and internal rewards. The scheduler coordinates the agent's actions, and the exchange of experience between intentions promotes efficient use of information and improves the overall performance of the agent.
SAC-X enables more efficient and flexible agent training in sparse reward environments. A key feature of SAC-X is the use of internal auxiliary rewards, which help overcome the sparsity problem and facilitate learning on tasks where rewards are rare.
In the SAC-X learning process, each intention has its own policy that maximizes the corresponding auxiliary reward. The scheduler determines which intention is selected and executed at any given time. This allows the agent to learn from different aspects of the task and effectively use the available information to achieve optimal results.
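To make this structure more concrete, below is a minimal, self-contained sketch of the SAC-X control flow in Python. The article itself implements the approach in MQL5/OpenCL, so this is only an illustration under stated assumptions: the environment ToyEnv, the reward names and the hyperparameters are hypothetical, tabular Q-learning stands in for the soft actor-critic updates, and the scheduler is the simple uniform variant (SAC-U). It shows the three key ingredients: a reward vector per transition, one policy (intention) per reward entry, and off-policy updates in which every intention learns from the shared replay buffer.

import random
from collections import defaultdict

N_STATES, N_ACTIONS = 10, 4
REWARDS = ["extrinsic", "aux_touch", "aux_move"]   # entries of the reward vector
GAMMA, ALPHA, EPS = 0.95, 0.1, 0.1

class ToyEnv:
    """Hypothetical environment that returns a vector of rewards at every step."""
    def reset(self):
        self.s = 0
        return self.s
    def step(self, a):
        self.s = (self.s + a) % N_STATES
        r = {"extrinsic": 1.0 if self.s == N_STATES - 1 else 0.0,  # sparse main reward
             "aux_touch": 0.1 if a == 1 else 0.0,                  # dense auxiliary reward
             "aux_move":  0.1 if a != 0 else 0.0}                  # dense auxiliary reward
        return self.s, r, self.s == N_STATES - 1

# One policy ("intention") per reward entry; a Q-table stands in for an actor-critic.
Q = {name: defaultdict(lambda: [0.0] * N_ACTIONS) for name in REWARDS}
replay = []                                        # shared experience buffer

def act(intention, s):
    """Epsilon-greedy action of the currently executed intention."""
    if random.random() < EPS:
        return random.randrange(N_ACTIONS)
    return max(range(N_ACTIONS), key=lambda a: Q[intention][s][a])

def update_all_intentions(batch):
    """Off-policy step: every intention learns from the same transitions,
    each using its own component of the reward vector."""
    for s, a, r_vec, s2, done in batch:
        for name in REWARDS:
            target = r_vec[name] + (0.0 if done else GAMMA * max(Q[name][s2]))
            Q[name][s][a] += ALPHA * (target - Q[name][s][a])

env = ToyEnv()
for episode in range(200):
    s, done, steps = env.reset(), False, 0
    while not done and steps < 200:
        intention = random.choice(REWARDS)         # uniform scheduler (SAC-U variant)
        for _ in range(5):                         # execute the chosen intention briefly
            a = act(intention, s)
            s2, r_vec, done = env.step(a)
            replay.append((s, a, r_vec, s2, done))
            s, steps = s2, steps + 1
            if done:
                break
        update_all_intentions(random.sample(replay, min(32, len(replay))))

print("greedy actions of the extrinsic intention:",
      [max(range(N_ACTIONS), key=lambda a: Q["extrinsic"][st][a]) for st in range(N_STATES)])

In the full algorithm, each Q-table would be replaced by a soft actor-critic policy and critic, and the scheduler itself could be trained (the SAC-Q variant) to pick the intention that is currently most useful for the external task.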
Author: Dmitriy Gizlyk