
Interesting things were discussed on this subject in this thread; it is a long thread, but it contains valuable information.
Nice
With a simple Custom Criterion Max optimization I tried balancing the buy and sell trades and their frequency. All the trades were virtual, and that did not stop the optimization
(even though no trades were actually taken!)
I just used OnTesterPass to pass my evaluation score to the optimizer.
This is really handy; kudos to MQ.
(I ran it on one asset; balancing multiple assets is a bit harder, I will try that later.)
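A minimal sketch of what such a pass-scoring criterion might look like (the virtual-trade counters and the weighting here are my assumptions, not the poster's actual code):

```mql5
// Hypothetical sketch: score each optimization pass by how balanced the
// virtual buy/sell trade counts are, and return it from OnTester so the
// optimizer can rank passes in "Custom max" mode.
double g_virtual_buys  = 0;   // incremented wherever the EA "takes" a virtual buy
double g_virtual_sells = 0;   // incremented wherever the EA "takes" a virtual sell

double OnTester()
  {
   double total = g_virtual_buys + g_virtual_sells;
   if(total == 0.0)
      return(0.0);                       // nothing happened this pass
   // 1.0 when perfectly balanced, 0.0 when completely one-sided
   double balance = 1.0 - MathAbs(g_virtual_buys - g_virtual_sells) / total;
   // reward trade frequency a little as well
   return(balance * total);
  }
```

Because the trades are only virtual, the tester's own statistics stay empty, yet the optimization still ranks the passes by this returned value.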
Great!!! Nice work!
Tx
Although there is a potential issue.
The balance factor affects the score by being multiplied into the P/L.
That means a bad setup with a negative score will have its score shrunk toward zero, so the less balanced it is, the higher it ranks.
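The pitfall can be sketched numerically, along with one possible fix (the function name and weighting are hypothetical, not from the thread):

```mql5
// Multiplying a 0..1 balance factor into a negative P/L shrinks the penalty:
//   profit = -1000, balance = 0.1  ->  score = -100  (bad AND unbalanced, ranks higher)
//   profit = -1000, balance = 0.9  ->  score = -900  (bad but balanced, ranks lower)
// One way around it: scale positive scores down, but scale negative ones up,
// so poor balance always hurts the ranking regardless of sign.
double BalancedScore(const double profit, const double balance) // balance in [0..1]
  {
   if(profit >= 0.0)
      return(profit * balance);          // less balanced -> smaller reward
   return(profit * (2.0 - balance));     // less balanced -> bigger penalty
  }
```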
I played around with frames a bit, as a noob. They are actually not difficult.
If you consider the solution you would need to deploy to access the results (and the extra statistics that are not displayed) from each optimization pass,
and then read the frames documentation, that is exactly what frames are.
Here is a little low-level example you can try by optimizing the only parameter from 1 to 10 in custom max mode.
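The original attachment is not reproduced here, but a minimal low-level frames example along the lines described might look like this (the frame name, id, and per-pass statistic are my placeholders):

```mql5
// Sketch: optimize the single input "Parameter" from 1 to 10 in "Custom max"
// mode. OnTesterInit/OnTesterPass/OnTesterDeinit run on the extra chart that
// the terminal opens; OnTester runs inside each testing agent.
input int Parameter = 1;

void OnTick() {}                                  // no trading needed for the demo

void OnTesterInit()    { Print("optimization starting"); }

double OnTester()
  {
   double data[1];
   data[0] = (double)Parameter * 2.0;             // some per-pass statistic
   FrameAdd("demo", 1, Parameter, data);          // ship it to the collecting chart
   return(data[0]);                               // also the custom criterion
  }

void OnTesterPass()                               // fires per finished pass
  {
   ulong  pass;  string name;  long id;  double value;  double data[];
   while(FrameNext(pass, name, id, value, data))
      PrintFormat("pass %I64u : %s id=%I64d value=%.2f", pass, name, id, value);
  }

void OnTesterDeinit()  { Print("optimization finished"); }
```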
Barbarian! What are newbie tests for you would take me 100 hours of thinking and executing ;)
A great advance on your part. I'm looking closely at all of this. Really interesting.
xD Nowhere near fxsaber's solution; I just wanted something you set up, fill up, and collect at the end.
In fact, MQ expected these things: you can start stuff in OnTesterInit and finish it in OnTesterDeinit, as if a separate program starts at OnTesterInit and ends at OnTesterDeinit. Hence the extra chart to the left of the per-run stats; it can run commands, display tables, etc. Really handy.
Okay, here is my attempt.
So what on earth does it do?
Frames send the inputs and a data array to the end of the optimization.
Since the tester starts a "program" at OnTesterInit which ends at OnTesterDeinit, we create a structure
which contains a list of our custom (or default, or both) statistics.
Then we make sure to add the stats in the same sequence in the OnTester function,
which means we receive a nice package of named statistics and inputs at the end of the test.
This package can be saved in the normal MQL5\Files location, and the idea is to then view it and
inspect it (and export set files too). (Neither is implemented yet.)
Here is an example of usage, a very simple one.
I hope this is not confusing; the include is a little bloated because of some additions during testing. Not all functions are needed (damn it).
Cheers
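The "named statistics in a fixed sequence" idea above can be sketched roughly like this (every identifier here is mine, not the actual include's API; the file name is a placeholder):

```mql5
// Sketch: the names live on the terminal-side chart (OnTesterInit/OnTesterDeinit),
// the values are pushed by the agents in OnTester — the only contract is that
// both sides use the SAME ordering.
#define STATS_TOTAL 3
string StatNames[STATS_TOTAL];

void OnTesterInit()
  {
   StatNames[0] = "custom_score";
   StatNames[1] = "profit";
   StatNames[2] = "trades";
  }

double OnTester()
  {
   // stats MUST be filled in the same order as the names above
   double stats[STATS_TOTAL];
   stats[0] = 42.0;                               // the custom score
   stats[1] = TesterStatistics(STAT_PROFIT);
   stats[2] = TesterStatistics(STAT_TRADES);
   FrameAdd("stats", 0, stats[0], stats);
   return(stats[0]);
  }

void OnTesterDeinit()
  {
   // collect every frame and save the named package under MQL5\Files
   ulong pass;  string name;  long id;  double value;  double stats[];
   int h = FileOpen("opt_results.csv", FILE_WRITE|FILE_CSV);
   FrameFirst();                                  // rewind the frame pointer
   while(FrameNext(pass, name, id, value, stats))
      for(int i = 0; i < STATS_TOTAL && i < ArraySize(stats); i++)
         FileWrite(h, pass, StatNames[i], stats[i]);
   FileClose(h);
  }
```

The matching-order convention is the whole trick: the frame only carries an anonymous double array, so the names array on the terminal side is what turns it back into named statistics.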
I think you've done a spectacular job with this. I'm out right now and can't try it, although I'm sure it's really impressive. I think you should write an article or something similar; this is a very important step forward for the community, and I hope you know how to value it. As soon as I'm back at work I'll get to work integrating it.
Thanks. A library for these things actually already exists, and it's probably superior. I'm just anal like that; the best way to understand something is to build it.
And it fits what I need right now, because I will try an approach with more than one custom factor, so I need to be able to see the additional custom stats as well, choose a solution, export the set file, run it again, collect the patterns, move on to the next symbol's optimization, etc.