Advisor for an article. Testing for all comers.

 
Avals:


There is no need for a forward test at all if the analysis of the whole optimization is done properly.

The whole point of a forward test is to assess whether the extrema of the optimized parameters float over time, i.e. to cut off the cases where there are several local extrema across the entire testing interval (optimization plus out-of-sample). This is done far better by a separate analysis of each parameter for the uniqueness of its extremum and for monotonicity around it; that alone already guarantees the parameter does not "float" in time. A forward test also has a serious disadvantage: it considers only individual points on the optimization surface rather than the surface as a whole. That, coupled with an arbitrary division of the data into optimization and out-of-sample segments, drives the statistical reliability of such an analysis through the floor)) It is just one realization: you may be lucky with the chosen out-of-sample period and a poor parameter set will pass, or, vice versa, the out-of-sample period may land in a temporary drawdown of a "good" parameter set.
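A minimal sketch of the per-parameter check Avals describes, assuming we already have a fitness profile over a grid of one parameter's values from a full-history optimization (all names here are illustrative, not MetaTrader API calls):

```python
def is_unimodal(profile):
    """True if fitness rises to a single peak and then only falls,
    i.e. one extremum with monotonic slopes on both sides."""
    peak = profile.index(max(profile))
    rising = all(profile[i] <= profile[i + 1] for i in range(peak))
    falling = all(profile[i] >= profile[i + 1]
                  for i in range(peak, len(profile) - 1))
    return rising and falling

# Fitness measured over a grid of one parameter's values:
smooth = [1.0, 2.5, 4.0, 3.2, 2.1, 1.5]   # one clean extremum -> robust
jagged = [1.0, 3.9, 1.2, 4.0, 0.8, 2.2]   # several local extrema -> suspect

print(is_unimodal(smooth))  # True
print(is_unimodal(jagged))  # False
```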

But in any case, I repeat: the task of optimization is to assess the robustness of each parameter within the system. If a parameter is in doubt, it is better to discard or modify it. Leave only what is 100% supported by statistics and by the trading logic.


So you mean that instead of running a series of forward tests it is better to optimize the parameters over as long a stretch of history as possible? In principle, there is some truth in this too, since it is harder to "fit" parameters on a longer interval to get a nice curve. :)
 
Avals:


... so as not to accidentally distort anything. The GA itself takes care of that via its mutation mechanism. Optimization is needed to check each individual parameter set for robustness, not to find global extrema.

That's the point: a GA is not just looking for any extremum, it is looking for the one extremum that surpasses all the others in the value of a multivariate function, i.e.

y = f(x0, x1, ..., xn) → max

where:

x0, x1, ..., xn are the input parameters of the TS;

y is the fitness function of the GA.

And there is no guarantee that the extremum the GA finds at the point of multidimensional space with coordinates {x0, x1, ..., xn} does not float in time, i.e. that it is not an extremum only on some particular section of the historical data. If optimization were capable of checking robustness, curve-fitting would not exist at all. And since curve-fitting does exist, additional checks are needed, including forward tests.
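A toy sketch of what the formula above says a GA does: it only maximizes y = f(x0, ..., xn) on the data it is given, via selection and mutation; the fitness here is an arbitrary stand-in, not a trading metric:

```python
import random

def fitness(x):
    """Stand-in for y = f(x0, x1, ..., xn); single global maximum at xi = 3."""
    return -sum((xi - 3.0) ** 2 for xi in x)

def ga(n_params=4, pop_size=30, generations=60, sigma=0.5):
    pop = [[random.uniform(-10, 10) for _ in range(n_params)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                    # selection
        children = [[xi + random.gauss(0, sigma)          # mutation
                     for xi in random.choice(parents)]
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = ga()
print(best, fitness(best))  # nothing here checks whether the extremum floats in time
```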

Avals:

But in any case, I repeat: the task of optimization is to assess the robustness of each parameter within the system.

Nonsense and sabotage. Optimization is about finding extrema by definition and does not solve any other problems.
 
tol64:

So you mean that instead of running a series of forward tests it is better to optimize the parameters over as long a stretch of history as possible? In principle, there is some truth in this too, since it is harder to "fit" parameters on a longer interval to get a nice curve. :)

If quotes were stationary, we would indeed get a statistically more correct result, because the law of large numbers holds for stationary data.

But since we are dealing with non-stationary data, statistics and the law of large numbers do not work here. Chebyshev's law of large numbers states that, as the number of trials grows, sample statistics converge to their constant values, provided the expectation is constant and the variance is finite. Non-stationarity rules out a constant expectation and a finite variance, so we cannot refine anything: there is no point in estimating what does not and cannot exist by definition.
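A small simulation of this point, taking i.i.d. noise as the stationary case and a random walk as my stand-in for non-stationary quotes: the running mean of the former settles near a constant, while for the latter there is simply no constant to converge to:

```python
import random

random.seed(1)
n = 100_000

iid = [random.gauss(0, 1) for _ in range(n)]   # stationary: constant mean, finite variance

walk, s = [], 0.0
for _ in range(n):                             # non-stationary: no constant expectation
    s += random.gauss(0, 1)
    walk.append(s)

def running_means(xs, checkpoints=(1_000, 10_000, 100_000)):
    """Sample mean at several checkpoints along the series."""
    total, out = 0.0, {}
    for i, x in enumerate(xs, 1):
        total += x
        if i in checkpoints:
            out[i] = round(total / i, 3)
    return out

print(running_means(iid))    # approaches the true mean 0 as n grows
print(running_means(walk))   # keeps drifting; the law of large numbers has nothing to grip
```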

It is hard to look for a black cat in a dark room, especially if it is not there (attributed to Confucius).

 
Reshetov:


Nonsense and sabotage. Optimization is about finding extrema by definition and does not solve any other problems.
The optimizer can still do a lot of things if you use it correctly. It's only amateurs who give it a bad name, optimizing a heap of parameters just to take the single top line of the results ;)
 
IgorM:
Dear Yuri, what about the article? When will it be published?

I have just sent the text of the article for review.

Once it passes review, it will be available at: https://www.mql5.com/ru/articles/366

 
Reshetov:

I have just sent the text of the article for review.

Once it passes review, it will be available at: https://www.mql5.com/ru/articles/366

Thank you!

PS: I can already see the first lines of the article: "404 Requested page not found". Takes my breath away .... )))))

PPS: I hope your article will explain how to choose the optimal network structure, and when a network is considered sufficiently trained; so far I keep spoiling even the training of 2x2 ...

 
IgorM:

Thank you!

PS: I can already see the first lines of the article: "404 Requested page not found". Takes my breath away .... )))))

PPS: I hope your article will explain how to choose the optimal network structure, and when a network is considered sufficiently trained; so far I keep spoiling even the training of 2x2 ...

Yes, but it is not exactly a network: the first layer is an expert system over three inputs rather than neurons, the hidden layer is a perceptron, i.e. already a neuron, and the output is a linear sigmoid. The necessity and sufficiency of the rule selection for the expert system's knowledge base are described in detail, i.e. there is nothing left to optimize there. The expert system must fully satisfy all the conditions described in the article, and no other architecture will do: nothing can be removed, or it will be undertrained, and nothing can be added, or it is certain to be overfitted.
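A rough sketch of that structure as I read the description: a rule-based first layer over three inputs (an expert system rather than neurons), a single perceptron as the hidden layer, and a "linear sigmoid" at the output. The concrete rules and weights below are placeholders; the article itself defines the real ones:

```python
def expert_layer(a, b, c):
    """Hypothetical knowledge-base rules turning three inputs into signals."""
    return [
        1.0 if a > b else -1.0,   # placeholder rule 1
        1.0 if b > c else -1.0,   # placeholder rule 2
        1.0 if a > c else -1.0,   # placeholder rule 3
    ]

def perceptron(signals, weights, bias=0.0):
    """Hidden layer: a single neuron, a weighted sum of the rule signals."""
    return sum(w * s for w, s in zip(weights, signals)) + bias

def linear_sigmoid(x, limit=1.0):
    """Piecewise-linear sigmoid: linear inside [-limit, limit], clipped outside."""
    return max(-limit, min(limit, x))

weights = [0.5, 0.3, 0.2]   # placeholder weights
print(linear_sigmoid(perceptron(expert_layer(1.2, 0.7, 0.9), weights)))  # 0.4
```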

There are detailed instructions on how to optimize the EA with forward tests and how to run an additional stability check on the extremum that is found. One cannot say that all of this is more than enough to consider a TS 100% trained under non-stationary conditions, but one can say that all of it should be done so as not to fall victim to instability or to a random forward-test result.
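A minimal sketch of the walk-forward scheme mentioned here: optimize on an in-sample window, verify on the adjacent out-of-sample window, then slide forward through history (the window lengths are arbitrary assumptions):

```python
def walk_forward_windows(n_bars, in_sample=1000, out_sample=250):
    """Yield (optimization_range, forward_range) bar-index pairs."""
    start = 0
    while start + in_sample + out_sample <= n_bars:
        opt = (start, start + in_sample)
        fwd = (start + in_sample, start + in_sample + out_sample)
        yield opt, fwd
        start += out_sample   # slide by one forward window

for opt, fwd in walk_forward_windows(3000):
    print(f"optimize on bars {opt}, forward-test on bars {fwd}")
```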

 
Reshetov:

Yes, but it is not exactly a network: the first layer is an expert system over three inputs rather than neurons, the hidden layer is a perceptron, i.e. already a neuron, and the output is a linear sigmoid. The necessity and sufficiency of the rule selection for the expert system's knowledge base are described in detail.

Interesting... I am thinking of trying to build a system out of an array of NNs, where the input of one NN is fed with the outputs of already trained NNs.
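A sketch of that idea (stacking): the outputs of already trained networks become the inputs of a combiner network. The "trained" base nets here are trivial stand-in functions, not real models:

```python
import math

def base_net_1(x):   # pretend pre-trained network
    return math.tanh(2.0 * x)

def base_net_2(x):   # pretend pre-trained network
    return math.tanh(0.5 - x)

def top_net(features, weights=(0.7, 0.3), bias=0.0):
    """Combiner network fed with the base networks' outputs."""
    z = sum(w * f for w, f in zip(weights, features)) + bias
    return math.tanh(z)

x = 0.8
print(top_net([base_net_1(x), base_net_2(x)]))
```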
 

The publication of the article has been temporarily delayed. The text has been edited, but the screenshots, through my inattention, exceeded the allowed size. I will have to re-run the whole optimization to retake the screenshots, and optimization in MT5 is terribly slow. So publication has been postponed indefinitely for the time being.

 
Reshetov:


The publication of the article has been temporarily delayed. The text has been edited, but the screenshots, through my inattention, exceeded the allowed size. I will have to re-run the whole optimization to retake the screenshots, and optimization in MT5 is terribly slow. So publication has been postponed indefinitely for the time being.


Can the screenshots be made smaller in Photoshop without loss of quality?

P.S. Although, if they contain terminal interface elements, it won't come out very well.
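For what it's worth, downscaling can also be scripted outside Photoshop; a sketch assuming Pillow is available (the file names are placeholders), using the LANCZOS filter, which preserves fine detail best among the standard resampling filters, though terminal UI text will still degrade, as noted:

```python
from PIL import Image

img = Image.open("optimization_results.png")     # placeholder file name
scale = 0.5                                      # halve each side
smaller = img.resize(
    (int(img.width * scale), int(img.height * scale)),
    Image.LANCZOS,                               # high-quality downscaling filter
)
smaller.save("optimization_results_small.png", optimize=True)
```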
