Discussion of article "Experiments with neural networks (Part 3): Practical application" - page 2

 

It's now February 2023, so there are more than two months of new history since the balance-chart tests. Can you show the results of the EAs with the same settings on the new data? Or on earlier data that was not involved in optimisation?

 
Aleksey Vyazmikin #:

It's now February 2023, so there are more than two months of new history since the balance-chart tests. Can you show the results of the EAs with the same settings on the new data? Or on earlier data that was not involved in optimisation?

TP = 60
SL = 600
This is no longer serious.
 
Aleksey Vyazmikin #:

It's now February 2023, so there are more than two months of new history since the balance-chart tests. Can you show the results of the EAs with the same settings on the new data? Or on earlier data that was not involved in optimisation?

I can't right now. Do the optimisation yourself.

 
Aliaksandr Hryshyn #:
TP = 60
SL = 600
That's not serious anymore.

Yeah, I've seen it. But still, I found the author's view on predictors interesting.

 
Roman Poshtar #:

I can't right now. Do the optimisation yourself.

Well, maybe later, in a week or a month?

 
Aleksey Vyazmikin #:

Well, maybe later, in a week or a month?

Maybe.

 
I have tested many versions of neural networks on EURUSD over the last year, and I must say that the year itself is a good one. For some reason, many neural networks optimised on 2021 show growth on 2022.

But if you train on 2020 and test on 2021, the opposite happens: they lose.

Moreover, while I was studying what neural networks are, I came to understand what the activation function is for: backpropagation of the error, which is the training itself. That is, adjusting the weights during optimisation defeats the purpose of activation functions and, consequently, of classical neural networks. With an activation function, the result is almost the same as with the simplest Reshetov perceptron.

I will say even more: the simplest perceptrons show the picture even better, and all the clutter in the form of libraries simply overloads the terminal.
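For reference, a Reshetov-style perceptron of the kind mentioned above is just a weighted sum of a few indicator readings with no activation function at all. A minimal sketch (hypothetical inputs and weights, plain C++ standing in for MQL5) shows how little there is to it:

```cpp
#include <cassert>

// A Reshetov-style perceptron: nothing more than a weighted sum of
// four indicator readings, with no activation function. The sign of
// the score decides the trade direction (>0 buy, <0 sell); the
// weights are what the strategy tester optimises. The inputs and
// weights here are hypothetical.
double perceptron(const double x[4], const double w[4]) {
    double s = 0.0;
    for (int i = 0; i < 4; ++i)
        s += w[i] * x[i];   // plain linear score
    return s;
}
```

The whole "network" is one dot product, which is why it adds essentially no load to the terminal compared with a library-based DNN.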

Therefore, it is better to check on several years in a row and on several currency pairs. Yes, it is labour-intensive, but the result will be more objective.

All of the above is IMHO. Thanks to the author for the article; the part about the perceptron is interesting, and I will dive deeper into it.
 

Wow! Roman,

This article covers exactly what I am trying to accomplish!

Using either a 3- or 4-layer DNN, I ran the tests for a day and exported the results to Excel via the XML process on the Optimization tab, then saved the spreadsheet as a CSV file. Using the CSV file, I am planning to import it into the EA and then run an optimization to select the best strategy out of the highest 1000 optimized results in a forward data test. A couple of things I have learned. First, save the EA inputs to a .SET file in mql5\profiles\tester; you can then edit the .SET file in Notepad, which is much easier than using the input tab to modify groups of inputs. Second, many of the tests produced a very small number of trades, fewer than 100 over 2 years, so I eliminated them. Finally, be careful of commas in the CSV file, especially if you have values over $1000.00. The equity and profit columns use a comma as a thousands separator, so when the data is saved to a CSV file, additional commas are included. If, like me, you are using StringSplit to identify the start and then parse the optimized neurons into the Weight array, those two additional commas must be included in the calculations.
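To illustrate the comma problem, here is a sketch (plain C++ standing in for MQL5's StringSplit; the row layout is hypothetical) of what happens when money columns carry a thousands separator:

```cpp
#include <sstream>
#include <string>
#include <vector>

// Splits one CSV line on a separator, the way MQL5's StringSplit
// would: every comma becomes a field boundary, including the
// thousands separators inside unquoted money values.
std::vector<std::string> SplitCsv(const std::string& line, char sep) {
    std::vector<std::string> fields;
    std::stringstream ss(line);
    std::string field;
    while (std::getline(ss, field, sep))
        fields.push_back(field);
    return fields;
}
```

Splitting a hypothetical row such as `Pass,2,900.00,1,234.50,60` yields six fields instead of four, because the thousands separators inside `2,900.00` and `1,234.50` are indistinguishable from column separators. Every column after the equity and profit columns is therefore shifted two positions to the right, which is the offset described above.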

I am attaching a PNG file of the equity scatter graph for a completed optimization run of a 2-year 433 DNN using EURUSD H4 on the Original function. As you can see, there is a preponderance of results at or above the 2900 line, and the number above it increases dramatically as the number of optimizations nears the end of the run, which is expected. My plan is to choose the best 1000 and then use the forward data to identify the best corresponding optimized weights from the previous optimization. Since genetic optimizations grow exponentially with the number of layers as well as the number of neurons, a full GA optimization for a large number of neurons with complex trade strategies and stop-loss calculations will be impossible for most machines. However, identifying a baseline, e.g. 2900, and collecting a couple of thousand results to use should give much more reasonable GA run times, and should yield good, if not the best, options for the EA for forward testing on live data. I have found that you can export the optimizations to Excel while the GA agents continue to run, so you can use Excel's COUNTIF function to determine when you have 100 optimizations above the baseline.
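The COUNTIF check above amounts to a single predicate count. A sketch (plain C++; the 2900 baseline and the equity values are just examples from the discussion) of the same early-stopping test:

```cpp
#include <algorithm>
#include <vector>

// C++ equivalent of Excel's COUNTIF(range, ">=2900"): counts how many
// optimization passes already clear the chosen equity baseline, so
// you can tell when enough candidates exist without waiting for the
// full GA run to finish.
int CountAboveBaseline(const std::vector<double>& equity, double baseline) {
    return static_cast<int>(
        std::count_if(equity.begin(), equity.end(),
                      [baseline](double e) { return e >= baseline; }));
}
```

Polling this count while the GA agents are still running gives the stopping criterion described above: stop once, say, 100 passes clear the baseline.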

 
CapeCoddah #:

Wow! Roman,

This article covers exactly what I am trying to accomplish!

Using either a 3- or 4-layer DNN, I ran the tests for a day and exported the results to Excel via the XML process on the Optimization tab, then saved the spreadsheet as a CSV file. Using the CSV file, I am planning to import it into the EA and then run an optimization to select the best strategy out of the highest 1000 optimized results in a forward data test. A couple of things I have learned. First, save the EA inputs to a .SET file in mql5\profiles\tester; you can then edit the .SET file in Notepad, which is much easier than using the input tab to modify groups of inputs. Second, many of the tests produced a very small number of trades, fewer than 100 over 2 years, so I eliminated them. Finally, be careful of commas in the CSV file, especially if you have values over $1000.00. The equity and profit columns use a comma as a thousands separator, so when the data is saved to a CSV file, additional commas are included. If, like me, you are using StringSplit to identify the start and then parse the optimized neurons into the Weight array, those two additional commas must be included in the calculations.

I am attaching a PNG file of the equity scatter graph for a completed optimization run of a 2-year 433 DNN using EURUSD H4 on the Original function. As you can see, there is a preponderance of results at or above the 2900 line, and the number above it increases dramatically as the number of optimizations nears the end of the run, which is expected. My plan is to choose the best 1000 and then use the forward data to identify the best corresponding optimized weights from the previous optimization. Since genetic optimizations grow exponentially with the number of layers as well as the number of neurons, a full GA optimization for a large number of neurons with complex trade strategies and stop-loss calculations will be impossible for most machines. However, identifying a baseline, e.g. 2900, and collecting a couple of thousand results to use should give much more reasonable GA run times, and should yield good, if not the best, options for the EA for forward testing on live data. I have found that you can export the optimizations to Excel while the GA agents continue to run, so you can use Excel's COUNTIF function to determine when you have 100 optimizations above the baseline.

Thank you for your interest in my publications. I think your ideas can be implemented. But as you can see, everything comes down to the hardware side of the question: the computers.

 

This is incredible work, thank you Roman!

I am running into an issue where I am unable to compile any of the perceptron MQ5 files: "1 perceptron 4 angle SL TP - trade", for example, has 22 errors, most of them "semicolon expected". Am I missing something, or did I do something wrong?