Discussion of article "Practical Use of Kohonen Neural Networks in Algorithmic Trading. Part II. Optimizing and forecasting"
The work is certainly of high quality. A good tutorial on the use of neural networks in analytics and trading.
As a discussion, I would like to add the following (on the "Optimisation" section):
The author claims that "the experiment did not reveal optimal settings". In my opinion, the reasons are as follows:
1. Parameters of traditional indicators (RSI, MA, Parabolic) were used as the ones under study (during optimisation). In general, these are very rough analytical tools, so experienced traders, as a rule, do not use them for real trading. The result should improve if candlestick analysis based on non-classical models is used instead of these indicators (my experience of testing the "classics" has shown their inefficiency). That is, candlestick parameters (amplitude, size of the candlestick body, sizes of the shadows) can be used as the optimised parameters. The change of these parameters over time is a much more accurate analytical tool than the signals of traditional indicators.
2. Lack of multiscale analysis (only one timeframe is used), which seriously reduces the accuracy of the analysis. It is therefore better to analyse candlesticks of 3-4 timeframes simultaneously (of the same financial instrument, of course).
3. The author chooses gold and silver as the analysed financial instruments (safe-haven assets that, in the author's opinion, are less dependent on fundamental factors). I think that, on the contrary, a full-fledged analysis requires the most traded instruments, first of all EURUSD as the most characteristic (in terms of dynamics) financial instrument. Also, one should not be afraid of volatility and the so-called "noise" of the market (movements on small timeframes) in the analysis, as these are manifestations of the nature of the market, its natural elements.
I agree with the comments. Some points were explained in the article (e.g., the use of genetic optimisation results, the need to add other scales/indicators, etc.). One of the goals was to provide tools and general approaches for research on real ("combat") trading robots. Share your findings.
These are not comments, just suggestions, as the article is excellent.
The search through a parameter space of very large dimensionality is performed by the genetic algorithm, and in order for it not to "collapse" into locally profitable areas, you can use a synthetic optimisation criterion, such as TesterStatistics(STAT_PROFIT_TRADES) / TesterStatistics(STAT_LOSS_TRADES), or TesterStatistics(STAT_MAX_CONPROFIT_TRADES) / TesterStatistics(STAT_MAX_CONLOSS_TRADES). In reality you have to combine more parameters.
I was interested in the part of the article about plateau selection on the optimisation results. I don't quite understand how any analysis can be done from the maps if they are derived from a random CSV. The article explicitly says that a simple GA run was performed, with the rubbish discarded afterwards.
How can we get closer to practice? Suppose we have an EA and need to find a plateau.
Also, what principle is used to select cells when building the maps? I have probably missed something in the description, but it is very difficult to understand.
The article is more about the tool itself; how the data for analysis is obtained is mentioned, but no research has been done on it. This is enough work for more than one article - in particular, comparing the solution spaces of conventional and genetic optimisations. An exhaustive search does not introduce any bias - use it if you can. However, the term "random CSV" is hardly appropriate, or chosen incorrectly - otherwise you could say that all optimisation results in MT are random. Discarding obvious outliers is normal practice, imho.
Regarding selection of cells when building maps - I don't understand the question.
The randomness of the GA lies in its sliding into a local extremum, which may happen differently each time. That is, the pictures from the GA and from a full search can be very different (after discarding the rubbish in both variants).
I encountered the following oddity with the GA:
Forum on trading, automated trading systems and testing trading strategies
New version of MetaTrader 5 build 1930: Floating chart windows and .Net libraries in MQL5
fxsaber, 2019.02.02 09:54 AM
Genetic optimisation log:
PH	0	11:40:35.073	Tester	genetic optimization finished on pass 9216 (of 30240000)
FI	0	11:40:35.563	Statistics	optimization done in 1 minutes 51 seconds
JP	0	11:40:35.563	Statistics	shortest pass 0:00:00.014, longest pass 0:00:01.329, average pass 0:00:00.177
II	0	11:40:35.563	Statistics	local 3891 tasks (100%), remote 0 tasks (0%), cloud 0 tasks (0%)
Out of 30 million variants, fewer than 4K were actually evaluated, and the run lasted less than two minutes. How can the accuracy of the GA be increased? I would rather it took 2 hours to compute but produced a more objective result.
The same does not happen in MT4.
Regarding selection of cells when building maps - I don't understand the question.
In the article, during the analysis of different maps, the property is used that a cell with matching coordinates (X; Y) in two maps corresponds to the same set of trading system (EA) input parameters. How is this rule formed?
Thank you so much for your labours in the form of articles and blog posts! The material is very different from the average.
On the basis of your implementation of the MHR I want to make an alternative to the standard GA and do a comparative analysis. The standard GA has very large variation from run to run. And the 10K-pass limitation does not match the MT4 GA. I have seen your implementation of virtualisation; I want to use a similar approach, but via the tester's GUI.
A GA will never converge to the global optimum; it is an evolutionary method aimed at diversity. It does not have to converge to anything at all.
New article Practical Use of Kohonen Neural Networks in Algorithmic Trading. Part II. Optimizing and forecasting has been published:
Based on universal tools designed for working with Kohonen networks, we construct the system of analyzing and selecting the optimal EA parameters and consider forecasting time series. In Part I, we corrected and improved the publicly available neural network classes, having added necessary algorithms. Now, it is time to apply them to practice.
This is how testing appears on the period from July 1, 2018 - December 1, 2018.
Unity-Forecast: Forecasting the movements of silver on the Forex and gold cluster in MetaTrader 5 Tester
At times, the accuracy reaches 60%. We can conclude that the method basically works, even though it requires careful selection of the forecasting object, preparation of the inputs, and long, meticulous configuration.
Author: Stanislav Korotky