Machine learning in trading: theory, models, practice and algo-trading - page 3173

It's more about which pill to take: blue or red :)
It's luck and p-hacking, yes. So the results could be anything.
Wait, what kind of selection are we talking about at this point?
Well, you chose this option and showed that the left side is good and the right side is bad.
I mean, you had several optimisation results to choose from, but you picked this one to show.
I didn't choose it. Here is the method:
fxsaber, 2023.08.17 06:58
I do not practice such self-deception. I only do it this way.
Well, it's just more sophisticated p-hacking :) there is still multiple testing and the selection of TS (trading system) parameter ranges - after all, they were not pulled out of thin air.
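To make the multiple-testing point concrete, here is a minimal Python sketch (my own illustration, not code from the thread): it scores a simple moving-average crossover over dozens of parameter pairs on a pure random walk, and the best in-sample result still looks attractive even though there is no edge at all.

```python
# Minimal sketch (illustration only): optimising many parameter pairs on a
# random walk - the best in-sample result looks good purely by chance.
import numpy as np

rng = np.random.default_rng(0)
returns = rng.normal(0.0, 0.001, size=5000)   # pure noise, no edge
prices = np.cumsum(returns)

def sma_crossover_pnl(prices, returns, fast, slow):
    """PnL of a simple SMA-crossover rule: long when fast SMA > slow SMA."""
    fast_ma = np.convolve(prices, np.ones(fast) / fast, mode="valid")
    slow_ma = np.convolve(prices, np.ones(slow) / slow, mode="valid")
    n = min(len(fast_ma), len(slow_ma))
    signal = (fast_ma[-n:] > slow_ma[-n:]).astype(float)
    # trade the next bar's return with the previous bar's signal (no lookahead)
    return float(np.sum(signal[:-1] * returns[-n + 1:]))

results = []
for fast in range(5, 50, 5):
    for slow in range(60, 300, 20):
        results.append(((fast, slow), sma_crossover_pnl(prices, returns, fast, slow)))

best_params, best_pnl = max(results, key=lambda r: r[1])
print(f"best in-sample params {best_params}, PnL {best_pnl:.4f} -- on pure noise")
```

The more parameter combinations are tried, the better the best one looks, which is exactly the multiple-testing effect being discussed.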
If the training period is reduced, will the graph trend reversal occur just as quickly?
It varies, of course. But very often you can see a break right after the Sample period. Perhaps it is a cognitive bias: when you pay more attention to something, you get the impression that it happens too often.
I don't know much about tick strategies, but one factor behind this behaviour is a lack of comparable data during training - for example, on some timeframe the training data was mostly trending down.
The chart shows three years of daily trading.
I don't know what training method you use, but whether it is tree-based models or filters that simply constrain the range of some indicator (function), it is worth estimating the number of examples that fall into each such range.
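For what it's worth, a minimal Python sketch of that kind of check (the indicator, the bin count, and the 50-example threshold are my assumptions, not anything from the post): bin the indicator and count how many training examples land in each range, flagging thinly populated ones.

```python
# Minimal sketch (assumed example): count training examples per indicator range,
# so that thinly populated ranges can be treated with suspicion.
import numpy as np

rng = np.random.default_rng(1)
indicator = rng.normal(size=2000)   # hypothetical indicator values on the training sample

edges = np.linspace(indicator.min(), indicator.max(), 11)   # 10 equal-width ranges
counts, _ = np.histogram(indicator, bins=edges)

for lo, hi, c in zip(edges[:-1], edges[1:], counts):
    note = "   <- few examples, any statistic here is unreliable" if c < 50 else ""
    print(f"[{lo:+.2f}, {hi:+.2f}): {c:4d} examples{note}")
```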
What I didn't do was plot a graph for each range. I computed the statistics, but did not look at the graph itself.
A possible situation is data drift - a shift in the probability distribution for the filter/leaf.
For example, when I select quantum segments on the training sample and then estimate their distribution (the percentage of correct and incorrect responses to the 0/1 target) on two other samples, only about 25-30% of them meet the stability criterion on all three samples. Clearly, in that case the model has a good chance of picking an unstable predictor that will stop working on one of the intervals.
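A rough Python sketch of that kind of stability check (here the "quantum segments" are approximated by plain quantile bins of a single synthetic predictor, and the +-0.05 tolerance is an arbitrary illustrative choice, not the poster's actual criterion):

```python
# Rough sketch (assumptions: segments = quantile bins of one predictor,
# tolerance = 0.05): check whether each bin's class-1 share on the
# training sample holds up on two other samples.
import numpy as np

def bin_target_share(x, y, edges):
    """Share of target==1 inside each predictor bin."""
    idx = np.digitize(x, edges[1:-1])              # bin index 0..len(edges)-2
    return np.array([y[idx == b].mean() if np.any(idx == b) else np.nan
                     for b in range(len(edges) - 1)])

rng = np.random.default_rng(2)

def make_sample(n):
    """Synthetic predictor and binary target with a weak dependence."""
    x = rng.normal(size=n)
    y = (rng.random(n) < 0.5 + 0.1 * np.tanh(x)).astype(int)
    return x, y

x_tr, y_tr = make_sample(4000)   # training sample
x_va, y_va = make_sample(2000)   # second sample
x_te, y_te = make_sample(2000)   # third sample

edges = np.quantile(x_tr, np.linspace(0, 1, 9))    # 8 segments built on the training sample
shares = [bin_target_share(x, y, edges)
          for x, y in ((x_tr, y_tr), (x_va, y_va), (x_te, y_te))]

# A segment counts as stable if its class-1 share on both other samples
# stays within +-0.05 of the training value.
stable = (np.abs(shares[1] - shares[0]) < 0.05) & (np.abs(shares[2] - shares[0]) < 0.05)
print(f"stable segments: {stable.sum()} of {stable.size} ({100 * stable.mean():.0f}%)")
```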
In the end, it all comes down to analysing simple patterns - namely, finding reasons to consider them genuine patterns rather than a chance observation of a comet's tail through a telescope.
I don't understand the highlighted part.
Well, it's just more sophisticated p-hacking :) there is still multiple testing and the selection of TS parameter ranges - after all, they were not pulled out of thin air.
Stop. You are not against the optimisation process itself, are you? Obtaining the desired curve on the Sample interval is not logically related to the other intervals.
How long does the system stay profitable?
I don't quite understand the question. The left OOS is a year. Should it be extended further back?
I have seen this kind of system behaviour, where the right-hand OOS shows a sharp losing streak. I don't think it is directly connected with a sharp 180-degree reversal of the discovered market patterns (that would point to causes of a mystical nature, the use of voodoo practices, and generally anything other than real problems like overfitting or curve-fitting, because it is at least strange that a sharp losing streak always starts right after the end of the training period). Usually it is due to errors in the code causing false positives (or false negatives), as Max said above. Correcting them leads, in the worst case, to random behaviour on the right-hand OOS (overfitting) or, in the best case, to a gradual fading of profitability (the discovered patterns fading and/or gradually changing).
I assume that an indication that there are no bugs in the code is that the code does exactly what was intended before programming. In this sense, everything is fine.
And in the general case, a TS with errors in the code is still a TS. It's just not exactly the one the author originally intended.