Discussion of article "Neural network: Self-optimizing Expert Advisor"

 

New article Neural network: Self-optimizing Expert Advisor has been published:

Is it possible to develop an Expert Advisor able to optimize position open and close conditions at regular intervals according to the code commands? What happens if we implement a neural network (multilayer perceptron) in the form of a module to analyze history and provide strategy? We can make the EA optimize a neural network monthly (weekly, daily or hourly) and continue its work afterwards. Thus, we can develop a self-optimizing EA.

If we conduct a test with the following network structure: "10 input layer neurons, 35 first hidden layer neurons, 10 second hidden layer neurons and 1 output layer neuron", the script displays the following data during its execution:

Author: Jose Miguel Soriano
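For reference, here is a minimal sketch of how such a 10-35-10-1 network could be created with the ALGLIB library shipped with MQL5 and retrained once a month, as the article proposes. The wrapper names (CAlglib::MLPCreate2, CMultilayerPerceptronShell) are assumed to match the standard MQL5 ALGLIB port; the data-preparation and training steps are placeholders rather than the article's actual code.

```mql5
#include <Math\Alglib\alglib.mqh>

CMultilayerPerceptronShell g_net;                 // perceptron used by the EA
int                        g_lastTrainedMonth = -1;

// Build the 10-35-10-1 structure described above:
// 10 inputs, 35 neurons in the first hidden layer,
// 10 neurons in the second hidden layer, 1 output neuron.
void CreateNetwork()
  {
   CAlglib::MLPCreate2(10, 35, 10, 1, g_net);
  }

// Retrain once per calendar month, then keep using the network on every tick.
void OnTick()
  {
   MqlDateTime dt;
   TimeToStruct(TimeCurrent(), dt);

   if(dt.mon != g_lastTrainedMonth)
     {
      CreateNetwork();
      // ...build the training matrix from history and call one of the
      // ALGLIB training routines here (placeholder, not the article's code)...
      g_lastTrainedMonth = dt.mon;
     }

   // ...derive position open/close signals from the trained network here...
  }
```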

 

Self-optimisation of any target function was unfortunately not seen.

 

Maybe a translation flaw, but I don't get it:

What are you teaching the neural network?

What is the input and what is the target?

What exactly are you optimising?

It's a rambling article.

So much for the advantage of using ALGLIB in MT5.

 
In articles devoted to creating Expert Advisors, I would like to see results in the tester and in demo trading. Otherwise it's a spherical horse in a vacuum again. There has not been a single worthwhile article on using neural networks in trading. Only theories.
 

Congratulations on the information.

The script was missing, though.

 

Hi and thanks, it was great to see this self-optimizer by NN.

How can I use this in MT4? Do you have an MT4 version?

 
Comments on the article.
1. Neural networks (and machine learning models in general) are not optimised! They are trained, tested and can be trained further.
2. Data preparation. Before normalisation, the input/output data set should be split into train/test (and sometimes validation) parts in suitable proportions. The normalisation parameters are determined on the training set! When solving classification problems the split should preferably be stratified, and class imbalance must of course be taken into account.
The article does not give the specific input and output data, but this passage: "The variable "nVelasPredic" allows extrapolating these indicator values for n candles ahead." raises an unanswered question.
The final paragraph of the section: "That is, each row of the "arDatos" array of the CMatrixDouble class will have as many columns as there are input data or indicator values used in the strategy and as many output data defined by it." And how many output values are defined? As one can guess, a regression problem is being solved and the number of outputs = 1.
In general, the input and output data remain an abstraction throughout the article; nevertheless, an RMSE training error is quoted.
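To make the point concrete, here is a minimal sketch (the 11-column layout of 10 inputs plus 1 output is an assumption, not the article's code): the min/max used for scaling are computed on the training rows only and then applied unchanged to the remaining rows.

```mql5
#define N_COLS 11   // assumed layout: 10 inputs + 1 output per row

// Scale every column to [0,1] using min/max taken from the training part only.
void SplitAndNormalize(double &data[][N_COLS], const int nRows, const double trainShare)
  {
   int nTrain = (int)(nRows * trainShare);          // e.g. 0.7 -> first 70% is the training set

   for(int c = 0; c < N_COLS; c++)
     {
      double lo = data[0][c], hi = data[0][c];
      for(int r = 1; r < nTrain; r++)               // statistics from training rows only
        {
         if(data[r][c] < lo) lo = data[r][c];
         if(data[r][c] > hi) hi = data[r][c];
        }
      double range = (hi - lo == 0.0) ? 1.0 : (hi - lo);

      for(int r = 0; r < nRows; r++)                // same parameters applied to all rows
         data[r][c] = (data[r][c] - lo) / range;
     }
  }
```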
3."Oversampling of input/output data."
"To avoid the tendencies associated with inheritance of values within a data array, we can arbitrarily change (reorder) the order of rows within the array. To do this, we apply the function "barajaDatosEntra" which traverses the rows of the CMatrixDouble array, defines a new target row for each row, respecting the data position of each column and performing data shuffling using the bubble method " Are you talking about sorting?
For classification problems, shuffling the rows randomly (not overshooting) is desirable, for regression it is not necessary.
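For comparison, reordering the rows is normally done with a plain Fisher-Yates shuffle of whole rows: no sorting and no oversampling is involved. A minimal sketch, again with an assumed 11-column row layout (MathRand() only spans 0..32767, which is sufficient for a 2000-row matrix):

```mql5
#define N_COLS 11   // assumed number of columns per row

// Randomly reorder whole rows; each column keeps its position inside the row.
void ShuffleRows(double &data[][N_COLS], const int nRows)
  {
   MathSrand((int)GetTickCount());

   for(int i = nRows - 1; i > 0; i--)
     {
      int j = MathRand() % (i + 1);                 // random row index in 0..i

      for(int c = 0; c < N_COLS; c++)               // swap row i with row j
        {
         double tmp = data[i][c];
         data[i][c] = data[j][c];
         data[j][c] = tmp;
        }
     }
  }
```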
4."3.6.Training/optimisation of a neural network". Let me repeat - a neural network is not optimised or tuned - it is trained. There are quite a lot of training methods today. Each with its own advantages and disadvantages.
"The tests I have done have not shown any improvement in results with increasing the number of training epochs. "
"These algorithms perform tuning in such a way that reiteration of training cycles (variable "ciclosEntrena") has almost no effect on the resulting error, unlike the "back propagation" algorithm, where reiteration can significantly change the resulting accuracy."
If your neural network does not respond to changing the number of training epochs - It's a failing neural network.
"A network of 4 layers with 35, 45, 10 and 2 neurons and an input matrix of 2000 rows can be optimised using the above function in 4 - 6 minutes (I5, core 4, RAM 8 gb) with an error of the order of 2 - 4 hundred thousandths (4x10^-5)." - As you can see from the article you are talking about learning error, which is always very good. But it is the error of testing, which you obviously have not done, that is indicative. Without having specific data on inputs and outputs, the figure given by you does not tell anything. Besides, I did not see in the article the shift of output data during training.
5."3.7 Saving the network..."
For further work it is necessary to save not only the network itself, but also the normalisation parameters that were obtained on the training set. Otherwise, further work will be a waste of time.
"This function (respuestaRed()) is supposed to be able to change the normalisation applied to the output data in the training matrix." How? The normalisation parameters are set at the training stage and cannot be changed at the testing or prediction stage.
6. "Self-optimisation" There is neither optimisation nor self-optimisation here. You have a neural network learning periodically on new data.

To summarise: as a programming exercise it is probably useful, but as a guide to creating and using a neural network it is completely unacceptable. The author (or translator) should stick to the established terms and definitions in this field so as not to mislead users. And I think a preliminary review of texts before publication is highly desirable. I am not even talking about the absence of graphs and figures illustrating the text; without them it is just a wall of code.

Good luck




 

Alexey Volchanskiy:
In articles devoted to creating Expert Advisors, I would like to see results in the tester and in demo trading. Otherwise it is a spherical horse in a vacuum again. There has not been a single worthwhile article on using neural networks in trading. Only theories.

The statement is strong, but unfounded.

The articles (mine, at least) have Expert Advisor code attached. Put it on a demo account and test it. Or do you prefer to trust other people's pictures? Don't be lazy, try it.

The purpose of the articles is to offer new ideas, approaches and methods that can be applied in Expert Advisors. Bringing them to a production-ready state is up to you. It is in the Market that you get lured with beautiful pictures.

Good luck

 
Thank you for your hard work! Very profound for me.
 
Excellent work and a super-enriching article, well explained and clear. I can only thank you for your contribution to this field.

Greetings.
 
MetaQuotes Software Corp.:

New article Neural network: Self-optimizing Expert Advisor has been published:

By Jose Miguel Soriano

Thanks for sharing