Neuro-forecasting of financial series (based on one article) - page 9

 
nikelodeon:


In the end, after gathering statistics, we can conclude that training towards maximum balance is not always a good target. But that raises a slightly different question: how do we find a training goal so that the NS keeps working well in the future?

I tried optimizing different variants of my Expert Advisor with the NS by various criteria: balance, profit factor, expected payoff, drawdown in the deposit currency, and drawdown in %. And I looked at forward tests on the periods both before and after the optimization window.

It turns out that if we optimize by minimal drawdown in the deposit currency and then pick the pass with that minimal drawdown from the optimization results, both forward tests are successful. If several optimization results share the same minimal drawdown, choose the one with the maximal balance.

Additionally, it turned out that if you optimize by minimal drawdown in the deposit currency and then select the result with the maximal profit factor, the forward tests are also successful, but the results are worse than in the previous case.

But this method works only for a single EA. Other Expert Advisors with the same NS but different inputs do not show this property, and so far we have not found, for them, any attribute of the optimization results that would indicate a successful forward test.
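
A minimal Python sketch of that selection rule, assuming the optimization results have been exported to a CSV file; the file name and column names are illustrative assumptions, not part of the tester's output format:

```python
import csv

# Load optimizer passes exported to CSV (file name and column names are assumed for illustration).
with open("optimization_results.csv", newline="") as f:
    passes = [
        {
            "balance": float(row["Balance"]),
            "profit_factor": float(row["Profit Factor"]),
            "drawdown": float(row["Drawdown $"]),
        }
        for row in csv.DictReader(f)
    ]

# Rule from the post: take the pass with the minimal drawdown in deposit currency;
# if several passes share that drawdown, prefer the one with the maximal balance.
best = min(passes, key=lambda p: (p["drawdown"], -p["balance"]))
print(best)
```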

 
Reshetov:

I tried optimizing different variants of my EA with the NS by various criteria: balance, profit factor, expected payoff, drawdown in the deposit currency, and drawdown in %. And I looked at forward tests on the periods both before and after the optimization window.

It turns out that if we optimize by minimal drawdown in the deposit currency and then pick the pass with that minimal drawdown from the optimization results, both forward tests are successful. If several optimization results share the same minimal drawdown, choose the one with the maximal balance.

Additionally, it turned out that if you optimize by minimal drawdown in the deposit currency and then select the result with the maximal profit factor, the forward tests are also successful, but the results are worse than in the previous case.

But this method works only for one EA. Other EAs with the same NS but different inputs do not show this property, and so far we have not found, for them, any attribute of the optimization results that would indicate a successful forward test.

Selecting the neural network weights requires sifting through trillions of combinations, while the GA can only run through 10-18 thousand passes.

So the correct approach is to run the optimization several times (at least five) in GA mode and only then choose something suitable.

 
Reshetov:

It turns out that if you optimize by minimal drawdown in the deposit currency and then pick the pass with that minimal drawdown from the optimization results, both forward tests are successful. If several optimization results share the same minimal drawdown, choose the one with the maximal balance.

Additionally, it turned out that if you optimize by minimal drawdown in the deposit currency and then select the result with the maximal profit factor, the forward tests are also successful, but the results are worse than in the previous case.

But this method works only for a single EA. Other Expert Advisors with the same NS but different inputs do not show this property, and so far we have not found, for them, any attribute of the optimization results that would indicate a successful forward test.

And is the number of deals controlled? The NS is flexible: if we simply set minimum drawdown as the training target function, it can easily find a variant with zero drawdown. If the architecture and mathematics of the particular NS allow it, it may simply find weights such that there are very few deals (a statistically insignificant number) but no drawdown... Maybe that is why it does not work with other inputs and networks?

I often use a similar variant: criterion = balance minus drawdown, maximized, but with mandatory control of the minimum number of trades. That is, I require the NS to make at least 100 trades per year; if it shows a superb result but with only 99 trades, the result is thrown out automatically...
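
A small Python sketch of that acceptance rule. The function name and the way the pass statistics are passed in are illustrative assumptions; only the balance-minus-drawdown criterion and the 100-trades-per-year threshold come from the post above:

```python
def score_pass(balance, drawdown, trades, years=1.0, min_trades_per_year=100):
    """Score an optimization pass as balance minus drawdown,
    but discard any pass with too few trades to be statistically meaningful."""
    if trades < min_trades_per_year * years:
        return None  # a "super result" on 99 trades is thrown out automatically
    return balance - drawdown

# Example with made-up numbers: a spectacular pass with only 99 trades is rejected,
# a modest pass with 150 trades is kept and scored.
print(score_pass(balance=25_000, drawdown=50, trades=99))      # None
print(score_pass(balance=12_000, drawdown=1_500, trades=150))  # 10500
```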

 
mersi:

Selecting the neural network weights requires sifting through trillions of combinations, while the GA can only run through 10-18 thousand passes.

So the correct approach is to run the optimization several times (at least five) in GA mode and only then choose something suitable.


Do you use the GA in the tester to train the NS? How did you do it, and what kind of NS is it? How many "weights" can you "fit" with this approach?
 
Figar0:

Do you use the GA in the tester to train the NS? How did you do it, and what kind of NS is it? How many "weights" can you "fit" with this approach?

While Yuri's answering, I'll tell you what I did.

There are 21 weights in total. The variables take values from -1 to 1, and I set the optimization step to 0.05.

I cannot make the step any smaller, because the number of combinations is already at the optimizer's limit: a 19-digit number, I don't even know what such numbers are called.

That is, it was the limit for the optimizer, something like 9999999999999999999.

My topic: https://www.mql5.com/ru/forum/126476
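
For a sense of scale, here is a quick Python check of the search-space size under the numbers given above (21 weights, each swept from -1 to 1 in steps of 0.05, i.e. 41 values per weight), compared with the 10-18 thousand passes a single GA run actually evaluates; how the ranges were actually entered in the optimizer may have differed:

```python
values_per_weight = len(range(-100, 101, 5))  # -1.00 .. 1.00 in steps of 0.05 -> 41 values
weights = 21

total_combinations = values_per_weight ** weights
ga_passes = 18_000  # upper end of the 10-18 thousand passes mentioned above

print(f"{values_per_weight} values per weight, {weights} weights")
print(f"total combinations: {total_combinations:.3e}")
print(f"fraction covered by one GA run: {ga_passes / total_combinations:.1e}")
```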

 
Figar0:

Do you use the GA in the tester to train the NS? How did you do it, and what kind of NS is it? How many "weights" can you "fit" with this approach?

20 synapses.

8 inputs

two hidden layers (4 and 3 neurons)

1 output

all neurons use the hyperbolic tangent as the activation function

===========

There are three such networks. Outputs of all three form a committee.

First, the first network is trained, with the outputs of the other two held at zero.

Then, with the first one enabled, the second one is optimized so as to reduce the drawdown to a minimum while maintaining or increasing the profit.

Then the third network is connected to the two existing ones, and its weights are adjusted in the same way as in the previous step.
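
A rough Python sketch of such a committee, assuming fully connected layers of the sizes listed above (8 inputs, hidden layers of 4 and 3 neurons, one output, tanh everywhere). The real net is evidently wired more sparsely (about 20 synapses), and summing the three outputs as the committee decision is also an assumption here:

```python
import math
import random

LAYERS = [8, 4, 3, 1]  # inputs, two hidden layers, one output (sizes from the post)

def new_net(zeroed=False):
    """Create weight matrices for a fully connected tanh net (the wiring is an assumption)."""
    rnd = (lambda: 0.0) if zeroed else (lambda: random.uniform(-1, 1))
    return [[[rnd() for _ in range(n_in)] for _ in range(n_out)]
            for n_in, n_out in zip(LAYERS, LAYERS[1:])]

def forward(net, x):
    """Forward pass: every neuron uses the hyperbolic tangent as activation."""
    for layer in net:
        x = [math.tanh(sum(w * xi for w, xi in zip(neuron, x))) for neuron in layer]
    return x[0]

# Three such networks; the committee output is taken here as the sum of the three.
# Training order per the post: net 1 first (nets 2 and 3 output zero), then net 2
# with net 1 fixed, then net 3 with nets 1 and 2 fixed.
nets = [new_net(), new_net(zeroed=True), new_net(zeroed=True)]

def committee(x):
    return sum(forward(net, x) for net in nets)

print(committee([0.1] * 8))
```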

 
Figar0:

And is the number of deals controlled? The NS is flexible: if we simply set minimum drawdown as the training target function, it can easily find a variant with zero drawdown.

MetaTrader's tester will not produce zero drawdown, because it calculates drawdown on equity, not on balance. That is, even if all trades are profitable, the drawdown will still not be zero, since candlesticks have shadows.

And it is not desirable anyway to fit the system so tightly that there are no losing trades at all. Such fits, with very rare exceptions, fail on forward tests.
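
A toy Python illustration of that point, with made-up numbers: a single long trade that closes in profit still shows a non-zero equity drawdown, because the shadow of one candle dips below the entry price while the position is open:

```python
# Made-up numbers: one long trade of 100 000 units that closes in profit.
position_size = 100_000
entry_price = 1.3000
exit_price = 1.3050                              # closed in profit -> balance only grows
candle_lows = [1.2995, 1.2980, 1.3010, 1.3040]   # lows (shadows) of candles while the trade was open

balance_drawdown = 0.0  # the only closed trade is a winner, so the balance curve never dips
equity_drawdown = max(0.0, (entry_price - min(candle_lows)) * position_size)

print(f"trade result:     {(exit_price - entry_price) * position_size:+.2f}")
print(f"balance drawdown: {balance_drawdown:.2f}")
print(f"equity drawdown:  {equity_drawdown:.2f}")  # non-zero even with no losing trades
```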

 
mersi:

Selecting the neural network weights requires sifting through trillions of combinations, while the GA can only run through 10-18 thousand passes.

So the correct approach is to run the optimization several times (at least five) in GA mode and only then choose something suitable.

You are choosing the wrong neural network architecture. The net should be such that a slight change in the settings (weights and thresholds) gives the same result at the outputs. If the architecture is overcomplicated, it needs extremely fine tuning, and the result of such fine-tuning will be over-training (curve fitting).

For example, my architecture is such that 10,000 GA passes are already redundant: after optimization, similar results (by balance, profit factor, expected payoff and drawdown) appear with slightly different settings. This lets the net produce correct results over a wider range of settings; it is more thick-skinned.
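
One way to read that "thick-skinned" property in code: perturb every weight of a trained net by one optimizer step and check whether the sign of the output (the trading signal) stays the same. A self-contained Python sketch with a toy one-neuron stand-in for the real NS; the weights, the input vector and the 0.05 step are illustrative assumptions, not the author's procedure:

```python
import math
import random

def forward(weights, x):
    """Toy single-neuron tanh 'net', a stand-in for the real EA's NS."""
    return math.tanh(sum(w * xi for w, xi in zip(weights, x)))

random.seed(1)
trained = [random.uniform(-1, 1) for _ in range(8)]   # pretend these came out of the GA
sample = [0.3, -0.1, 0.7, 0.2, -0.5, 0.0, 0.4, -0.2]  # one input vector
step = 0.05                                           # optimizer step borrowed from the earlier post

base_signal = forward(trained, sample) > 0
flips = 0
for i in range(len(trained)):
    for delta in (-step, +step):
        perturbed = trained.copy()
        perturbed[i] += delta
        if (forward(perturbed, sample) > 0) != base_signal:
            flips += 1

# A robust ("thick-skinned") net keeps the same signal under small weight changes,
# so nearby GA passes land on similar balance / profit factor / drawdown figures.
print(f"signal flips under one-step perturbations: {flips} of {2 * len(trained)}")
```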

 

Explanation of the previous post.

Let's assume you managed to train a network and it is able to distinguish between patterns 3 and 6.

The purpose of the second and third nets (in my case) is to prevent the Expert Advisor from triggering when it encounters the patterns h and b, which the first net mistakes for 3 and 6.

 
Reshetov:

You are choosing the wrong neural network architecture. The net should be such that a slight change in the settings (weights and thresholds) gives the same result at the outputs. If the architecture is overcomplicated, it needs extremely fine tuning, and the result of such fine-tuning will be over-training (curve fitting).

For example, my architecture is such that 10,000 GA passes are already redundant: after optimization, similar results (by balance, profit factor, expected payoff and drawdown) appear with slightly different settings. This lets the net produce correct results over a wider range of settings; it is more thick-skinned.

All neural network researchers disagree with this statement.

In almost every article on neural networks you can read that the more neurons a network has the better, but at the same time it must not have too many of them.

That's why most of them tend towards networks with 2-3 hidden layers.

Reason: