I updated to the latest version of BestInterval and Virtual.
When compiling with the new lines at the beginning of the code, I got many errors. In the end only 3 remain (see image).
Thank you very much for your help
Put these lines at the beginning, not at the end.
Usually such a backtest goes straight to the trash during Optimisation (or a single pass).
However, during Optimisation BestInterval will show this pass as one of the best. The reason:
The disadvantage is that there is no metric to determine how badly it overfits the interval being optimised.
For example, my library draws exactly the same picture for a single pass in the tester, but instead of throwing out intervals it flips losing trades and approximates the new sequence.
It turns out that the more trades the initial TS has, the prettier the pictures BestInterval will show, and it does not matter what the distribution of trades was before that.
You can write any metric you want for OnTester. The library allows it.
An ambiguous statement. I determine overfitting simply: I take ONE best result of BestInterval and run it (a single pass) on an interval twice as large. If extending the interval does not change the character of the straight balance line, it is not overfitting. In the second figure above there are > 1000 trades. If I double the interval, the straight line remains. I.e. it is not overfitting, 100%.
Another thing is that "not overfitting" means only one thing: the patterns found were real. But nothing guarantees that they will continue. And this lack of a guarantee says nothing about overfitting at all. It's just a law of life.
In general, you are doing pure machine learning, but for some reason you ignore that topic :)
Yes, if you build in some metrics and screen out models based on them, then at least the overfitting routine would be reduced; that's my point. Ideally, the machine would automatically select the best model, the one more likely to show something on OOS as well.
I can't get to grips with that topic.
Well, if there were orders of magnitude more computational resources, no BestInterval would be needed. I.e. there is no hint of ML here. It is just a filter that is convenient and fast to apply, nothing more.
As for the probability on OOS, it is determined exactly as I said above. That eliminates any selection/optimisation factor. Either everything is great, or it is rubbish.
A much more complicated question is when there is a working TS and you need to select the values of the optimal input parameters for a real account.
It was a discovery for me that GA+BestInterval is a disgusting combination. I.e. convergence to some local extremum occurs, and when checked it turns out to be absolute overfitting.
Bruteforce+BestInterval, as it turns out, is an excellent combination. In many cases where GA made me believe the TS was rubbish, Bruteforce showed it to be promising.
But for Bruteforce to be fast, you need a small number of ticks and fast TS logic. That is why all the acceleration mechanisms give not only quantitative but also qualitative results.
GA by itself is about as useful as milk from a bull ) It is considered an exploratory optimisation, which owes nothing at all in terms of the reliability of its results.
I just can't get along with GA.