Discussing the article: "Neural Networks Made Easy (Part 93): Adaptive Forecasting in Frequency and Time Domains (Final Part)"
Hello, Dmitry!
How do you train and grow the example database over a year of history? I have a problem adding new examples to the .bd file in your Expert Advisors from the latest articles (the ones that use a year of history). Once this file reaches 2 GB, it apparently starts to be saved incorrectly, and the model-training Expert Advisor can no longer read it and throws an error. Alternatively, the .bd file suddenly starts shrinking with each new batch of examples, down to a few megabytes, and the training advisor still errors out. The problem appears at up to 150 trajectories with a year of history, and at about 250 with 7 months of history. The .bd file grows very quickly: for example, 18 trajectories take almost 500 MB, and 30 trajectories take 700 MB.
As a result, in order to train, we have to delete this file with its set of 230 trajectories over 7 months and create it anew with a pre-trained Expert Advisor. But in this mode, the mechanism for updating trajectories when replenishing the database does not work. I assume this is due to the 4 GB RAM limit per thread in MT5; it is mentioned somewhere in the help.
Interestingly, in earlier articles (where the history covered 7 months and a database of 500 trajectories weighed about 1 GB), this problem did not occur. I am not limited by PC resources, as I have more than 32 GB of RAM and enough video memory.
Dmitry, how do you train with this in mind, or do you configure MT5 beforehand?
I use the files from the articles without any modification.
Victor,
I don't know what to tell you. I work with larger files.
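For readers hitting the same 2 GB threshold, one possible workaround is to split the replay database across several .bd parts and roll over before any single file approaches the limit. This is only a sketch under assumptions: the MAX_DB_BYTES constant, the part-naming scheme, and the NextFileName helper are hypothetical and are not part of the article's actual storage code.

#define MAX_DB_BYTES ((ulong)1024*1024*1024)   // roll over at 1 GiB, well below the 2 GB mark

// Return the name of the first database part that still has room.
// The collecting EA would append new trajectories to this part; the
// training EA would load all existing parts in turn.
string NextFileName(const string base)
  {
   for(int part=0; part<100; part++)
     {
      string name=StringFormat("%s_%02d.bd", base, part);
      if(!FileIsExist(name))
         return(name);                     // fresh part, nothing written yet
      int handle=FileOpen(name, FILE_READ|FILE_BIN);
      if(handle==INVALID_HANDLE)
         continue;                         // locked or unreadable, try the next part
      ulong size=FileSize(handle);
      FileClose(handle);
      if(size<MAX_DB_BYTES)
         return(name);                     // this part still has room
     }
   return(NULL);                           // all parts are full
  }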

Hi, I read this article, and it's interesting. I understood a little; I will go through it again after reading the original paper.
I came across this paper: https://www.mdpi.com/2076-3417/14/9/3797#
It claims they achieved 94% accuracy in Bitcoin image classification. Is that really possible?
Check out the new article: Neural Networks Made Easy (Part 93): Adaptive Forecasting in Frequency and Time Domains (Final Part).
In this article, we continue implementing the approaches of the ATFNet model, which adaptively combines the results of two blocks (frequency and time) for time series forecasting.
In the previous article, we got acquainted with the ATFNet algorithm, an ensemble of two time series forecasting models. One works in the time domain and constructs predicted values of the series under study by analyzing signal amplitudes. The other works with the frequency characteristics of the analyzed series, capturing its global dependencies, periodicity, and spectrum. According to the authors of the method, adaptively merging the two independent forecasts produces impressive results.
The key feature of the frequency-domain F-Block is that its algorithm is built entirely on complex-number mathematics. To meet this requirement, in the previous article we built the CNeuronComplexMLMHAttention class, which fully reproduces the multi-layer Transformer Encoder algorithm with multi-headed Self-Attention. This complex-valued attention class is the foundation of the F-Block. In this article, we continue implementing the approaches proposed by the authors of the ATFNet method.
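As a rough illustration of the adaptive merge described above, the sketch below combines the two branch forecasts as a convex combination whose weight grows with how strongly the spectral power is concentrated in a dominant harmonic. This is a simplified stand-in for ATFNet's energy-based weighting, and the names MergeForecasts and DominantEnergyShare are illustrative, not the article's class interface.

// Share of total spectral power held by the single strongest harmonic.
double DominantEnergyShare(const double &spectrum_power[])
  {
   double total=0.0, peak=0.0;
   for(int i=0; i<ArraySize(spectrum_power); i++)
     {
      total+=spectrum_power[i];
      if(spectrum_power[i]>peak)
         peak=spectrum_power[i];
     }
   return(total>0.0 ? peak/total : 0.5);   // fall back to an even split
  }

// Convex combination of the F-Block and T-Block forecasts: a strongly
// periodic series concentrates power in one harmonic, so the frequency
// branch receives a larger weight.
void MergeForecasts(const double &freq_forecast[],
                    const double &time_forecast[],
                    const double &spectrum_power[],
                    double &result[])
  {
   double w=DominantEnergyShare(spectrum_power);
   int n=MathMin(ArraySize(freq_forecast), ArraySize(time_forecast));
   ArrayResize(result, n);
   for(int i=0; i<n; i++)
      result[i]=w*freq_forecast[i]+(1.0-w)*time_forecast[i];
  }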
Author: Dmitriy Gizlyk