Methods of carrying out a rolling forward

 
Youri Tarshecki:
And how does your winning set become workable for this OOS?
The OOS is internal: I let the tester trade on the OOS period with the parameters I choose, and the set has already been found by then.
 
Alexandr Andreev:

I started digging into walk-forward trading about four years ago... and dug in hard.

And I have a question for you: what do you want from walk-forward? To find out whether the system works, so you don't have to test it on a demo?

That's nice, but you should still try it on a demo anyway. If you hope that walk-forward will confirm the prototype system works, it is not so: under extensive testing it quite often says that everything is bad.

We give walk-forward a huge sample that simply cannot be trimmed down (to set a direction), otherwise the whole principle of walk-forward breaks on the fly. And a ton of resources is needed, all because 80% of the calculations go to waste due to the way the agents work: even when we can tell from the first three days that the result will be worse than the one we already have, for some reason we keep testing to the end.

Do you have any idea how generalized a strategy has to be for walk-forward trading? There will be more parameters to optimize than when you manually try to optimize passes, pre-selecting the profitable ones.

Everything you have written shows a complete misunderstanding of the question.

First, WF is needed to evaluate an EA that is regularly re-optimized.

Second, we can more accurately choose both the length of history used for optimization and the length of the confidence interval of the EA's working run (how long an optimized set can stay in use).

Third, WF shows whether there was curve fitting. And that is probably the main advantage of WF.
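For reference, the mechanics behind those three points are just a rolling window: optimize on an in-sample slice, forward-test on the out-of-sample slice that follows it, shift both and repeat. A minimal sketch of how the window boundaries could be generated (the dates and window lengths below are assumptions for illustration, not recommendations):

```
// Script sketch: print the in-sample / out-of-sample boundaries of a rolling forward.
void OnStart()
  {
   datetime start     = D'2018.01.01';   // assumed history range
   datetime finish    = D'2021.01.01';
   int      is_days   = 360;             // in-sample (optimization) window
   int      oos_days  = 60;              // out-of-sample (forward) window
   int      step_days = 60;              // shift between consecutive windows
   int      day       = 86400;           // seconds per day

   for(datetime is_from = start;
       is_from + (is_days + oos_days)*day <= finish;
       is_from += step_days*day)
     {
      datetime is_to  = is_from + is_days*day;
      datetime oos_to = is_to   + oos_days*day;
      PrintFormat("optimize %s .. %s, forward test %s .. %s",
                  TimeToString(is_from, TIME_DATE), TimeToString(is_to,  TIME_DATE),
                  TimeToString(is_to,   TIME_DATE), TimeToString(oos_to, TIME_DATE));
     }
  }
```

The ratio of is_days to oos_days is exactly the "history length vs. working run" trade-off from the second point, and forwards that consistently fall apart are the fitting signal from the third.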

 
Alexandr Andreev:

And I have a question for you: what do you want from walk-forward trading?

Automatically weed out the monkeys. In fact, having such a tool is simply a huge plus and a help to everyone who automates trading strategies.

Maybe that's why it will never be among the standard terminal features ))

 
Nikolay Demko:

Everything you have written shows a complete misunderstanding of the question.

First, WF is needed to evaluate an EA that is regularly re-optimized.

Second, we can more accurately choose both the length of history used for optimization and the length of the confidence interval of the EA's working run (how long an optimized set can stay in use).

Third, WF shows whether there was curve fitting. And that is probably the main advantage of WF.

Believe me, here understanding equals implementation: back in 2014 I bought grid (cloud) computing from MetaQuotes precisely to tackle this issue. And I had to send the agents a lot of unnecessary information, because there is no way to have a dialogue with an agent.

Yes, it gives an answer, but the answer it gives is "everything is bad" unless you feed it specifics.

For example, we have a strategy and we push only the stop level through WF - that is not correct. We should send as general a variant as possible.

We should also add one more step if we want to go further. Plus, if we are going to do this, we should not do it that way at all; if we do something, we should do it the other way around. And the point of the question is not what we will get - but where to compute it all!

 
elibrarius:

Write your own genetic algorithm? That is a lot of work that has already been done in the built-in tester. I think MetaQuotes spent more than a hundred hours developing it.

A hundred hours? We wrote implementations of it for university lab work about ten years ago; there is nothing complicated about it.
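To illustrate the point, this is roughly the lab-level algorithm being referred to: tournament selection, blend crossover and mutation over a real-valued parameter vector. A sketch only; Fitness() here is a dummy objective that a home-made tester would replace with a backtest score over the in-sample window.

```
// Minimal genetic search over a real-valued parameter vector (script sketch).
#define POP_SIZE 50
#define N_PARAMS  3
#define N_GENS   30

double Fitness(const double &p[])
  {
   // dummy objective with a known optimum at (1, -2, 0); replace with a backtest score
   return -(p[0]-1.0)*(p[0]-1.0) - (p[1]+2.0)*(p[1]+2.0) - p[2]*p[2];
  }

void OnStart()
  {
   double pop[POP_SIZE][N_PARAMS], nxt[POP_SIZE][N_PARAMS], fit[POP_SIZE];
   MathSrand((int)TimeLocal());

   // random initial population in [-10, 10]
   for(int i=0; i<POP_SIZE; i++)
      for(int j=0; j<N_PARAMS; j++)
         pop[i][j] = -10.0 + 20.0*MathRand()/32767.0;

   for(int gen=0; gen<N_GENS; gen++)
     {
      // evaluate the population and report the best individual
      int best = 0;
      for(int i=0; i<POP_SIZE; i++)
        {
         double p[N_PARAMS];
         for(int j=0; j<N_PARAMS; j++) p[j] = pop[i][j];
         fit[i] = Fitness(p);
         if(fit[i] > fit[best]) best = i;
        }
      PrintFormat("generation %d, best fitness %.4f", gen, fit[best]);

      // next generation: tournament selection + blend crossover + mutation
      for(int i=0; i<POP_SIZE; i++)
        {
         int a = MathRand()%POP_SIZE, b = MathRand()%POP_SIZE;
         int p1 = (fit[a] > fit[b]) ? a : b;
         a = MathRand()%POP_SIZE; b = MathRand()%POP_SIZE;
         int p2 = (fit[a] > fit[b]) ? a : b;
         for(int j=0; j<N_PARAMS; j++)
           {
            double w = MathRand()/32767.0;
            nxt[i][j] = w*pop[p1][j] + (1.0 - w)*pop[p2][j];
            if(MathRand()%100 < 10)                       // 10% mutation chance
               nxt[i][j] += -1.0 + 2.0*MathRand()/32767.0;
           }
        }
      for(int i=0; i<POP_SIZE; i++)
         for(int j=0; j<N_PARAMS; j++)
            pop[i][j] = nxt[i][j];
     }
  }
```

The search loop itself is short; in a real tester the effort goes into the fitness evaluation and the bookkeeping around it.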
 
Alexandr Andreev:
And a ton of resources is needed, all because 80% of the calculations go to waste due to the way the agents work: even when we can tell from the first three days that the result will be worse than the one we already have, for some reason we keep testing to the end.

I solve this problem like this: if the drawdown reaches 60% during the test, ExpertRemove() exits the pass. If that drawdown happens on the 3rd day, the rest of the time interval is not calculated with these parameters. This really does speed up the calculations.
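A minimal sketch of that early exit, assuming the drawdown is measured on account equity relative to its peak within the pass (the 60% threshold from the post is exposed as an input):

```
// Abort a tester pass early once equity drawdown exceeds a threshold.
input double InpMaxDrawdownPct = 60.0;   // abort threshold, percent

double peak_equity = 0.0;                // highest equity seen in this pass

void OnTick()
  {
   double equity = AccountInfoDouble(ACCOUNT_EQUITY);
   if(equity > peak_equity)
      peak_equity = equity;

   double dd_pct = (peak_equity > 0.0)
                   ? 100.0*(peak_equity - equity)/peak_equity
                   : 0.0;

   if(dd_pct >= InpMaxDrawdownPct)
     {
      // the rest of the interval is not simulated for this set - that is the speed-up
      ExpertRemove();
      return;
     }

   // ... the EA's normal trading logic goes here ...
  }
```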

And I have a question for you: what do you want from walk-forward trading? To find out whether the system works, so you don't have to test it on a demo?

It seems to me walk-forward should help to define the "criterion for choosing one of the optimization variants (the winning set) that will then be run".

Igor Volodin: A hundred hours? We wrote implementations of it for university lab work about ten years ago; there is nothing complicated about it.

Well, I am a self-taught programmer. I will not argue - you know better.)

 
In other words, this is for those who have huge computing resources and are ready to discuss running a ready-made WF with all its pitfalls and nuances - the pro+ version, so to speak.
 
elibrarius:

I solve this problem like this: if the drawdown reaches 60% during the test, ExpertRemove() exits the pass. If that drawdown happens on the 3rd day, the rest of the time interval is not calculated with these parameters. This really does speed up the calculations.

It seems to me walk-forward should help to define the "criterion for choosing one of the optimization variants (the winning set) that will then be run".
Even a rough variant of that does not solve half of the problems. For example, there may be no good passes at all. Or, say, the best pass so far sits at 2% (with an overall score of 87); during a new pass we already know the score will not go above 10, but since the agent has no way of knowing the current best score, the resources go down the drain again.
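The "overall score" side of this can at least be made explicit with a custom optimization criterion, though it does not remove the limitation described above: the value is only computed after the pass finishes, so a pass in progress cannot be compared with the best score found so far. A sketch with an assumed weighting:

```
// Custom pass score for "Custom max" optimization; the weighting is illustrative only.
double OnTester()
  {
   double profit = TesterStatistics(STAT_PROFIT);
   double dd_pct = TesterStatistics(STAT_EQUITY_DDREL_PERCENT);
   double trades = TesterStatistics(STAT_TRADES);

   if(trades < 30 || dd_pct >= 60.0)     // reject thin or blown-up passes outright
      return 0.0;

   return profit/(1.0 + dd_pct);         // profit penalized by drawdown
  }
```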
 
Alexandr Andreev:

Believe me, here understanding equals implementation: back in 2014 I bought grid (cloud) computing from MetaQuotes precisely to tackle this issue. And I had to send the agents a lot of unnecessary information, because there is no way to have a dialogue with an agent.

Yes, it gives an answer, but the answer it gives is "everything is bad" unless you feed it specifics.

For example, we have a strategy and we push only the stop level through WF - that is not correct. We should send as general a variant as possible.

We should also add one more step if we want to go further. Plus, if we are going to do this, we should not do it that way at all; if we do something, we should do it the other way around. And the point of the question is not what we will get - but where to compute it all!

The grid (the Cloud) is optimized for a different job; using it for this is like hammering nails with a microscope. To use it correctly you need to run the GA search many times with forwards, record the forwards accurately, and then reconstruct the whole picture from the records.

The Cloud is intended for one-off optimizations: it takes a long time to warm up, but once the grid is up it quickly calculates everything and then spins down again. You pay that startup overhead on every launch, and WF consists of a great many such micro-launches.

Until MQ implements WF natively, there is no point in slaving away without understanding how the resource you are using actually works. It is easier to write your own GA and your own tester (it can be simplified to work on indicators, as TheXpert said) and implement WF inside it.
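A sketch of what "simplified on indicators" could mean in practice: a script that loops over closed bars and scores a signal straight from indicator buffers, with no ticks, spread or order accounting. The symbol, timeframe, MA periods and bar count are assumptions.

```
// Bar-by-bar "poor man's tester" on indicator buffers: score an MA cross in price units.
void OnStart()
  {
   int fast_h = iMA(_Symbol, PERIOD_H1, 20,  0, MODE_SMA, PRICE_CLOSE);
   int slow_h = iMA(_Symbol, PERIOD_H1, 100, 0, MODE_SMA, PRICE_CLOSE);

   int    bars = 5000;                  // closed bars to evaluate
   double fast[], slow[], close[];
   ArraySetAsSeries(fast,  false);      // index 0 = oldest bar
   ArraySetAsSeries(slow,  false);
   ArraySetAsSeries(close, false);

   // data may still be loading on a cold start; a real tool would wait and retry
   if(CopyBuffer(fast_h, 0, 1, bars, fast) != bars) return;
   if(CopyBuffer(slow_h, 0, 1, bars, slow) != bars) return;
   if(CopyClose(_Symbol, PERIOD_H1, 1, bars, close) != bars) return;

   int    pos    = 0;                   // +1 long, -1 short, 0 flat
   double entry  = 0.0;
   double result = 0.0;                 // accumulated result in price units

   for(int i=1; i<bars; i++)
     {
      int signal = 0;
      if(fast[i-1] <= slow[i-1] && fast[i] > slow[i]) signal = +1;   // cross up
      if(fast[i-1] >= slow[i-1] && fast[i] < slow[i]) signal = -1;   // cross down

      if(signal != 0 && signal != pos)
        {
         if(pos != 0)
            result += pos*(close[i] - entry);    // close the previous position
         pos   = signal;
         entry = close[i];
        }
     }
   PrintFormat("net result over %d bars: %.5f price units", bars, result);
  }
```

Wrap the parameters (the MA periods here) in a search loop and split the bars into the in-sample/out-of-sample windows sketched earlier, and you have the skeleton of a home-made WF.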

 
Igor Volodin:
A hundred hours? We wrote implementations of it for university lab work about ten years ago; there is nothing complicated about it.
That's just it - this way we move away from the platform itself. By the way, another problem with large projects is that when MT is updated you get a lot of errors.