As I understand it, in your pipeline Step 1 requires running Research.mq5 or ResearchRealORL.mq5, with the following details:
- Research.mq5 output: TotalBase.dat (binary trajectory data).
- ResearchRealORL.mq5 output: TotalBase.dat (binary trajectory data).
So how can we run Step 1? We don't have a previously trained Encoder (Enc.nnw) or Actor (Act.nnw), so we cannot run Research.mq5, and we don't have the Signals\Signal1.csv file, so we cannot run ResearchRealORL.mq5 either.
Hello,
In Research.mq5 you can find
//--- load models
   float temp;
//---
   if(!Encoder.Load(FileName + "Enc.nnw", temp, temp, temp, dtStudied, true))
     {
      CArrayObj *encoder = new CArrayObj();
      if(!CreateEncoderDescriptions(encoder))
        {
         delete encoder;
         return INIT_FAILED;
        }
      if(!Encoder.Create(encoder))
        {
         delete encoder;
         return INIT_FAILED;
        }
      delete encoder;
     }
   if(!Actor.Load(FileName + "Act.nnw", temp, temp, temp, dtStudied, true))
     {
      CArrayObj *actor = new CArrayObj();
      CArrayObj *critic = new CArrayObj();
      if(!CreateDescriptions(actor, critic))
        {
         delete actor;
         delete critic;
         return INIT_FAILED;
        }
      if(!Actor.Create(actor))
        {
         delete actor;
         delete critic;
         return INIT_FAILED;
        }
      delete actor;
      delete critic;
     }
//---
So, if you don't have a pretrained model, the EA will generate models with random parameters, and you can collect data from random trajectories.
You can read more about ResearchRealORL.mq5 in the article.

Check out the new article: Neural Networks in Trading: State Space Models.
Many of the models we have reviewed so far are based on the Transformer architecture. However, they can be inefficient when dealing with long sequences. In this article, we will get acquainted with an alternative direction in time series forecasting based on state space models.
In recent times, the paradigm of adapting large models to new tasks has become increasingly widespread. These models are pre-trained on extensive datasets containing arbitrary raw data from a broad spectrum of domains, including text, images, audio, time series, and more.
Although this concept is not tied to any specific architectural choice, most models are based on a single architecture: the Transformer and its core layer, Self-Attention. The efficiency of Self-Attention is attributed to its ability to densely route information within a context window, enabling the modeling of complex data. However, this property has fundamental limitations: the inability to model anything beyond the finite window, and quadratic scaling with respect to the window length.
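The quadratic cost is easy to see in code: the attention scores for a window of T tokens form a T x T matrix, so memory and compute grow with the square of the window length. A minimal illustrative sketch in MQL5 using the built-in matrix type (the function name AttentionScores is mine, not from the article):

matrix AttentionScores(const matrix &Q,   // queries (T x d)
                       const matrix &K)   // keys    (T x d)
  {
   double scale = 1.0 / MathSqrt((double)Q.Cols());    // 1/sqrt(d) scaling
   return Q.MatMul(K.Transpose()) * scale;             // T x T score matrix
  }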
For sequence modeling tasks, an alternative solution involves using structured state space sequence models (SSM). These models can be interpreted as a combination of recurrent neural networks (RNN) and convolutional neural networks (CNN). This class of models can be computed very efficiently, with linear or near-linear scaling in sequence length. Furthermore, it possesses inherent mechanisms for modeling long-range dependencies in specific data modalities.
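To make the recurrent view concrete, here is a minimal sketch of the discretized SSM recurrence h_t = A*h_{t-1} + B*x_t, y_t = C*h_t for a scalar input signal, written in MQL5 with the built-in matrix/vector types. The name SSMScan and the scalar-input setup are my illustrative assumptions, not code from the article:

vector SSMScan(const matrix &A,   // state transition (N x N)
               const vector &B,   // input projection (N)
               const vector &C,   // output projection (N)
               const vector &x)   // input sequence of length T
  {
   ulong T = x.Size();
   ulong N = A.Rows();
   vector h = vector::Zeros(N);          // hidden state
   vector y = vector::Zeros(T);          // outputs
   for(ulong t = 0; t < T; t++)
     {
      h    = A.MatMul(h) + B * x[t];     // h_t = A*h_{t-1} + B*x_t
      y[t] = C.Dot(h);                   // y_t = C*h_t
     }
   return y;
  }

Each step touches the state exactly once, so the whole scan is linear in the sequence length, in contrast to the quadratic score matrix above.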
One algorithm that enables the use of state space models for time series forecasting was introduced in the paper "Mamba: Linear-Time Sequence Modeling with Selective State Spaces". This paper presents a new class of selective state space models.
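Roughly, "selective" means that the projections B, C and the discretization step are themselves computed from the current input, so the model decides at every step what to write into and read from its state. Below is a hedged sketch under the same assumptions as above, using a simple Euler discretization rather than the exact scheme from the paper (the names SelectiveSSMScan, Wb, Wc and wd are hypothetical):

vector SelectiveSSMScan(const matrix &A,    // state transition (N x N)
                        const vector &Wb,   // maps x_t to B_t (N)
                        const vector &Wc,   // maps x_t to C_t (N)
                        const double  wd,   // maps x_t to the step size
                        const vector &x)    // input sequence of length T
  {
   ulong T = x.Size();
   ulong N = A.Rows();
   vector h = vector::Zeros(N);
   vector y = vector::Zeros(T);
   for(ulong t = 0; t < T; t++)
     {
      double delta = MathLog(1.0 + MathExp(wd * x[t])); // softplus step size
      vector Bt    = Wb * x[t];                         // input-dependent B_t
      vector Ct    = Wc * x[t];                         // input-dependent C_t
      h    = h + (A.MatMul(h) + Bt * x[t]) * delta;     // Euler state update
      y[t] = Ct.Dot(h);
     }
   return y;
  }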
Author: Dmitriy Gizlyk