Evaluating the effectiveness of filters in the construction of an ATC - page 7

 
-Aleks-:

If you do not know how to use this method, you will not be able to use it the first time, but you will the second time; and if you still do not know how to use it, you will again not be able to the first time.

If I may give an example: I do not immediately understand the phrase "uniform growth of PF with an even decline in total profit is good". Growth relative to what, the previous pass? And what exactly is the overall decline in profit?

When a filter parameter is tightened, the number of trades decreases and hence so does the total profit, but a quality metric (for example, PF, the profit factor) should improve. Then it is an effective filter. Take a volatility filter, say max - min over the last X hours > Y: if the larger Y is, the higher the PF but the fewer the trades, then the filter is good. If PF jumps up and down as Y increases, then either the filter is poor and needs to be replaced, or there are not enough trades in the sample at large Y.
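The sweep described above can be sketched in code. This is a hypothetical illustration, not anyone's actual tool: `results`, `MIN_TRADES`, and all the numbers are invented for the example.

```python
# Hypothetical sketch of judging a volatility-filter sweep as described above.
# 'results' maps each threshold Y to (number of trades, profit factor);
# names and figures are illustrative, not taken from any real backtest.

MIN_TRADES = 30  # below this, the PF estimate is statistically unreliable


def filter_looks_effective(results):
    """Return True if PF rises monotonically as the threshold Y is tightened,
    counting only the passes that still have a large enough trade sample."""
    usable = [(y, pf) for y, (trades, pf) in sorted(results.items())
              if trades >= MIN_TRADES]
    if len(usable) < 3:
        return False  # not enough passes left to judge the trend
    pfs = [pf for _, pf in usable]
    return all(a <= b for a, b in zip(pfs, pfs[1:]))


sweep = {0.5: (400, 1.10), 1.0: (250, 1.25), 1.5: (120, 1.40), 2.0: (20, 2.50)}
print(filter_looks_effective(sweep))  # True; the Y=2.0 pass is dropped (20 trades)
```

Dropping the small-sample pass reflects the caveat in the post: a jump in PF on a handful of trades says nothing about the filter.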
 
Avals:
When a filter parameter is tightened, the number of trades decreases and hence so does the total profit, but a quality metric (for example, PF) should improve. Then it is an effective filter. Take a volatility filter, say max - min over the last X hours > Y: if the larger Y is, the higher the PF but the fewer the trades, then the filter is good. If PF jumps up and down as Y increases, then either the filter is poor and needs to be replaced, or there are not enough trades in the sample at large Y.

In the example I posted, the average PF grows with all the filters, but this contradicts the analysis method used by Denis Kirichenko, according to which, as I understand it, a filter should uniformly increase the financial result across all optimization passes.

 
-Aleks-:

In the example I posted, the average PF grows with all the filters, but this contradicts the analysis method used by Denis Kirichenko, according to which, as I understand it, a filter should uniformly increase the financial result across all optimization passes.

I am a proponent of testing and analyzing each part of the system separately, and this applies not only to filters but to any parameter. For example, we optimize the volatility filter on its own, and if PF grows as the filter is tightened, the filter is presumably good. The same goes for every other part of the system.

There are also parameters that do not reduce the number of trades, for example the period over which the volatility filter is calculated (X days, hours, etc.). When optimizing such a parameter, if the filter is suitable, the quality metric (for example, PF) will show an extremum at some value, and how smoothly that extremum is reached also matters. That is, if PF runs ...2.1; 2.2; 2.4; 2.7; 3.0; 2.9; 2.6; 2.2..., that is statistically much better than 1.0; 2.2; 0.9; 4.0; 3.1; 2.5; 6.0. There should be a single extremum, with the PF curve rising uniformly before it and falling after it. Then the optimization is not mere brute-force fitting but market research, and the parameter value at the extremum is likely to have a justification in market logic. The presence of such a justification increases the chances that the system found is robust, and it also helps in finding its other parts.
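The "single smooth extremum" criterion above is easy to check mechanically. A minimal sketch, using the two PF sequences from the post:

```python
# Check whether a PF curve over an optimization parameter rises to one peak
# and then falls, as the post requires; multiple peaks suggest curve-fitting
# rather than market logic.

def has_single_extremum(pf_values):
    """True if the sequence rises (non-strictly) to one maximum, then falls."""
    peak = pf_values.index(max(pf_values))
    rising = all(a <= b for a, b in zip(pf_values[:peak], pf_values[1:peak + 1]))
    falling = all(a >= b for a, b in zip(pf_values[peak:], pf_values[peak + 1:]))
    return rising and falling


smooth = [2.1, 2.2, 2.4, 2.7, 3.0, 2.9, 2.6, 2.2]  # first example in the text
ragged = [1.0, 2.2, 0.9, 4.0, 3.1, 2.5, 6.0]       # second example in the text
print(has_single_extremum(smooth))  # True
print(has_single_extremum(ragged))  # False
```

A real check might also tolerate small noise around the peak, but the bare version shows the shape of the test.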

 
Personally, I use my own system for assessing the quality of an Expert Advisor, implemented as an algorithm inside OnTester. In short, it analyzes the profit, the test duration, and the risk that had to be taken to make that profit, and returns a certain number of points. Then I add a filter to the code, test again, and look at the score: if it is lower, I discard the filter; if it is higher, I keep it. In practice it turns out to be very hard to choose a filter that does no harm, and I run many optimizations.
 
Denis Glaz:
Personally, I use my own system for assessing the quality of an Expert Advisor, implemented as an algorithm inside OnTester. In short, it analyzes the profit, the test duration, and the risk that had to be taken to make that profit, and returns a certain number of points. Then I add a filter to the code, test again, and look at the score: if it is lower, I discard the filter; if it is higher, I keep it. In practice it turns out to be very hard to choose a filter that does no harm, and I run many optimizations.

This is interesting. Please describe it in more than just a few words: how does the evaluation work, preferably with an example?
 
-Aleks-:

This is interesting. Please describe it in more than just a few words: how does the evaluation work, preferably with an example?
That is a trade secret) I have worked on it for quite a long time, and it runs to two thousand lines of code.
 
Denis Glaz:
That is a trade secret) I have worked on it for quite a long time, and it runs to two thousand lines of code.

So our whole forum is one big secret...

Is it an averaging Expert Advisor or a regular one?

 
-Aleks-:

So our whole forum is one big secret...

Is it an averaging Expert Advisor or a regular one?

An averaging one? What do you mean?) I wrote the algorithm once in an external file and simply include it in the code. I have already used it with several Expert Advisors; it works with any of them.
 
Denis Glaz:
An averaging one? What do you mean?) I wrote the algorithm once in an external file and simply include it in the code. I have already used it with several Expert Advisors; it works with any of them.

Hmm, how can it suit any of them? If the EA is an averager (more than one order per position), the analysis has to be fundamentally different: separate for the groups of profitable and losing orders.
 
-Aleks-:

Hmm, how can it suit any of them? If the EA is an averager (more than one order per position), the analysis has to be fundamentally different: separate for the groups of profitable and losing orders...
Why? It only analyzes the changes of the balance, and the realized and potential losses of each change (order) separately. It also takes into account the frequency of the changes, which covers situations of averaging or locking of trades. The only condition is that a single strategy is used at a time.
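The per-order view described above can be sketched as follows. This is a hedged illustration of the idea only (the actual tool is private): every field name and figure is invented.

```python
# Hedged sketch of the per-order analysis described above: look only at each
# balance change (a closed order), its realized result, and the worst floating
# loss it carried, so the evaluation does not depend on the strategy type.

orders = [
    {"profit":  50, "worst_floating_loss": -20},
    {"profit": -30, "worst_floating_loss": -45},
    {"profit":  80, "worst_floating_loss": -10},
]

realized = sum(o["profit"] for o in orders)                    # total realized result
worst_risk = min(o["worst_floating_loss"] for o in orders)     # deepest potential loss
print(realized, worst_risk)  # 100 -45
```

Because only closed orders and their floating losses are examined, the same computation applies whether the strategy opens one order per position or many.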