Hi Dmitriy,
Using an MLP instead of other, more complex networks is quite interesting, especially since the results are better.
Unfortunately, I encountered several errors while testing this algorithm. Here are a few key lines from the log:
2024.11.15 00:15:51.269 Core 01 Iterations=100000
2024.11.15 00:15:51.269 Core 01 2024.01.01 00:00:00 TiDEEnc.nnw
2024.11.15 00:15:51.269 Core 01 2024.01.01 00:00:00 Create new model
2024.11.15 00:15:51.269 Core 01 2024.01.01 00:00:00 OpenCL: GPU device 'GeForce GTX 1060' selected
2024.11.15 00:15:51.269 Core 01 2024.01.01 00:00:00 Error of execution kernel bool CNeuronBaseOCL::SumAndNormilize(CBufferFloat*,CBufferFloat*,CBufferFloat*,int,bool,int,int,int,float) MatrixSum: unknown OpenCL error 65536
2024.11.15 00:15:51.269 Core 01 2024.01.01 00:00:00 Train -> 164
2024.11.15 00:15:51.269 Core 01 2024.01.01 00:00:00 Train -> 179 -> Encoder 1543.0718994
2024.11.15 00:15:51.269 Core 01 2024.01.01 00:00:00 ExpertRemove() function called
Do you have any idea what the reason could be?
OpenCL worked quite well before.
Chris.

Hi, Chris.
Did you make any changes to the model architecture, or did you use the default models from the article?
Hi. No changes were made. I simply copied the "Experts" folder in full and, after compilation, ran the programs as they were, in this order: "Research", "StudyEncoder", "Study" and "Test". The errors appeared at the "Test" stage. The only difference was the instrument: I switched from EURUSD to EURJPY.
Chris
Dmitriy, I have an important correction. The error actually appeared after starting StudyEncoder. Here is another sample:
2024.11.18 03:23:51.770 Core 01 Iterations=100000
2024.11.18 03:23:51.770 Core 01 2023.11.01 00:00:00 TiDEEnc.nnw
2024.11.18 03:23:51.770 Core 01 2023.11.01 00:00:00 Create new model
2024.11.18 03:23:51.770 Core 01 opencl.dll successfully loaded
2024.11.18 03:23:51.770 Core 01 device #0: GPU 'GeForce GTX 1060' with OpenCL 1.2 (10 units, 1771 MHz, 6144 Mb, version 457.20, rating 4444)
2024.11.18 03:23:51.770 Core 01 2023.11.01 00:00:00 OpenCL: GPU device 'GeForce GTX 1060' selected
2024.11.18 03:23:51.770 Core 01 2023.11.01 00:00:00 Error of execution kernel bool CNeuronBaseOCL::SumAndNormilize(CBufferFloat*,CBufferFloat*,CBufferFloat*,int,bool,int,int,int,float) MatrixSum: unknown OpenCL error 65536
2024.11.18 03:23:51.770 Core 01 2023.11.01 00:00:00 Train -> 164
2024.11.18 03:23:51.770 Core 01 2023.11.01 00:00:00 Train -> 179 -> Encoder 1815.1101074
2024.11.18 03:23:51.770 Core 01 2023.11.01 00:00:00 ExpertRemove() function called
Chris

Check out the new article: Neural Networks Made Easy (Part 88): Time-Series Dense Encoder (TiDE).
In an attempt to obtain the most accurate forecasts, researchers often complicate forecasting models, which in turn increases model training and maintenance costs. Is such an increase always justified? This article introduces an algorithm that exploits the simplicity and speed of linear models and demonstrates results on par with the best models with a more complex architecture.
As in a number of previous articles, the Environment State Encoder model is independent of the account state and open positions. Therefore, we can train the model on a training sample collected in a single pass of interaction with the environment, until we obtain the desired accuracy in predicting future states. Naturally, the "desired prediction accuracy" cannot exceed the capabilities of the model itself: you cannot get more out of it than it is able to give.
After training the environment state prediction model, we move on to the second stage: training the Actor's behavior policy. At this step, we iteratively train the Actor and Critic models and periodically update the experience replay buffer.
By updating the experience replay buffer, we mean additionally collecting environment interaction experience under the Actor's current behavior policy. The financial market environment we study is quite multifaceted, so we cannot capture all of its manifestations in the experience replay buffer. Instead, we capture only a small neighborhood of the actions taken by the Actor's current policy. By analyzing this data, we take a small step towards optimizing the Actor's behavior policy. As we approach the boundaries of this neighborhood, we need to collect additional data, expanding the visible area slightly beyond the updated Actor policy.
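To make the two-stage procedure easier to follow, here is a minimal, self-contained C++ sketch of the training flow described above. All names (Trajectory, CollectExperience, TrainEncoderStep, TrainActorCriticStep) as well as the iteration counts and thresholds are illustrative assumptions, not the article's actual MQL5 code.

#include <cstdio>
#include <vector>

// Hypothetical stand-ins for the article's components. A Trajectory represents
// one pass of interaction with the environment (states, actions, rewards).
struct Trajectory { };

// Collect the given number of environment interaction passes under the current policy.
std::vector<Trajectory> CollectExperience(int passes)
{
   return std::vector<Trajectory>(passes);
}

// One Encoder training step; returns the current forecast error (dummy decay here).
double TrainEncoderStep(const std::vector<Trajectory> &buffer)
{
   static double error = 2000.0;
   return error *= 0.999;
}

// One joint Actor + Critic update on the replay buffer (stub).
void TrainActorCriticStep(const std::vector<Trajectory> &buffer) { }

int main()
{
   // Stage 1: the Environment State Encoder ignores the account state, so a
   // buffer collected in a single pass is enough. Train until the forecast
   // error reaches an acceptable level (the threshold is an assumption).
   std::vector<Trajectory> buffer = CollectExperience(1);
   for(int i = 0; i < 100000; i++)
      if(TrainEncoderStep(buffer) < 100.0)
         break;

   // Stage 2: alternate Actor/Critic training with periodic updates of the
   // experience replay buffer collected under the updated Actor policy.
   for(int epoch = 0; epoch < 10; epoch++)
   {
      for(int i = 0; i < 10000; i++)
         TrainActorCriticStep(buffer);

      // Expand the buffer with fresh experience near the current policy.
      std::vector<Trajectory> fresh = CollectExperience(100);
      buffer.insert(buffer.end(), fresh.begin(), fresh.end());
      std::printf("Epoch %d: replay buffer size = %zu\n", epoch, buffer.size());
   }
   return 0;
}

The key point of this structure is that the Encoder is trained once on a fixed buffer, while the Actor/Critic stage interleaves optimization with fresh data collection, so the buffer always covers a neighborhood of the current policy.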
As a result of these iterations, I have trained an Actor policy capable of generating profit on both the training and testing datasets.
In the chart above, we see losses at the beginning, which then turn into a clear profitable trend. The share of profitable trades is less than 40%: there are almost 2 losing trades for every profitable one. However, the losing trades are significantly smaller than the profitable ones. The average profitable trade is almost 2 times larger than the average losing trade. All this allows the model to end up with a profit over the test period. Based on the test results, the profit factor was 1.23.
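As a rough cross-check of these figures (the profit factor is the ratio of gross profit to gross loss): assuming exactly 40% profitable trades and an average win exactly twice the average loss, it would be (0.40 × 2) / (0.60 × 1) ≈ 1.33. The reported 1.23 is slightly lower, which is consistent with the share of winners being somewhat below 40% and the win/loss size ratio somewhat below 2.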
Author: Dmitriy Gizlyk