Machine learning in trading: theory, models, practice and algo-trading - page 3704

 
There can also be similarities at different scales (multifractal); it's worth looking for those as well. For example, one pattern is 100 bars long and the other is 50, but they are very similar. In that case, use skipping or interpolation to equalize the lengths.
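As a rough sketch of that idea (Python/NumPy; the function names and toy data are mine, not from anyone's posted code), the shorter segment can be interpolated to the other's length and then compared by correlation:

```python
import numpy as np

def resample_pattern(pattern, target_len):
    """Stretch/compress a pattern to target_len points via linear interpolation."""
    pattern = np.asarray(pattern, dtype=float)
    old_x = np.linspace(0.0, 1.0, len(pattern))
    new_x = np.linspace(0.0, 1.0, target_len)
    return np.interp(new_x, old_x, pattern)

def scale_similarity(a, b):
    """Correlation of two patterns after equalizing their lengths."""
    n = max(len(a), len(b))
    a_r = resample_pattern(a, n)
    b_r = resample_pattern(b, n)
    return np.corrcoef(a_r, b_r)[0, 1]

# toy example: the same shape at 100 bars and at 50 bars
long_pat = np.sin(np.linspace(0, np.pi, 100))
short_pat = np.sin(np.linspace(0, np.pi, 50))
print(scale_similarity(long_pat, short_pat))  # close to 1.0
```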
 
So it turns out you should look for trend-continuation patterns, not all sorts of double tops, heads/shoulders, etc. that haven't worked for a long time. Only the trend is your friend))
 
Forester #:

double tops, heads/shoulders, etc. that haven't worked for a long time. Only the trend is your friend)))

I don't know what you end up with there in aggregate; more likely a mush of slightly-more-probable signals than randomly marked trades :)

The trend doesn't necessarily continue; it can reverse at the end of the structure.
 
First off, machine learning: learning what? The past. Do markets run exactly the same in exactly the same scenarios? No, they are ever-changing. Second, native MT5 without external sources can't do machine learning, super AI, blah blah, no matter what some marketing ploys tell you.
 
Forester #:

double tops, heads/shoulders, etc. that haven't worked for a long time. Only the trend is your friend)))

Had a similar thought as well.

 

I would also like to point out that if we are talking about correlation between prices (and not between their increments), then it is really about cointegration. After all, cointegration is found through linear regression (not for nothing called "spurious"), and that always comes down to correlation formulas. But the point is not that correlation is some kind of basis for everything, but simply that there are not that many different formulas in linear algebra, and one of them (the scalar product) is commonly called correlation on our forum.

Perhaps a connection to cointegration would help to interpret the results somehow.
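For reference, such a check is usually done roughly like this in Python (a sketch with statsmodels and synthetic data, nothing from the thread): the Engle-Granger test regresses one price series on the other and tests the residual for stationarity, while plain correlation of the prices themselves can be spuriously high.

```python
import numpy as np
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(0)

# two synthetic price series sharing a common stochastic trend
common = np.cumsum(rng.normal(size=1000))
price_a = common + rng.normal(scale=0.5, size=1000)
price_b = 0.8 * common + rng.normal(scale=0.5, size=1000)

# Pearson correlation of prices (often spuriously high for trending series)
corr = np.corrcoef(price_a, price_b)[0, 1]

# Engle-Granger cointegration test: regress one price on the other,
# then test the residual for a unit root
t_stat, p_value, _ = coint(price_a, price_b)

print(f"price correlation: {corr:.3f}")
print(f"cointegration p-value: {p_value:.4f}")  # small p-value -> cointegrated
```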

 
Aleksey Nikolayev #:

linear regression (not for nothing called "spurious"), and that always comes down to correlation formulas. But the point is not that correlation is some basis for everything, but simply that there are not that many different formulas in linear algebra, and one of them (the scalar product) is commonly called correlation on our forum.

Perhaps a connection to cointegration would help to interpret the results somehow.

Correlation with a CC (correlation coefficient) threshold can be seen in the backtest.

CC threshold 0.5 (all patterns above the threshold are selected).

Threshold 0.85.

I don't know if this is enough for anything.
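If it helps to make that selection step concrete, it presumably boils down to something like this (a hypothetical sketch, not the actual code behind those backtests):

```python
import numpy as np

def select_patterns(reference, candidates, threshold=0.85):
    """Keep only candidate windows whose correlation with the reference
    exceeds the threshold. Assumes all candidates already have the same
    length as the reference (see the interpolation sketch above)."""
    ref = np.asarray(reference, dtype=float)
    selected = []
    for i, cand in enumerate(candidates):
        cc = np.corrcoef(ref, np.asarray(cand, dtype=float))[0, 1]
        if cc >= threshold:
            selected.append((i, cc))
    return selected
```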

 
In general, this approach finds little. I'll also try entropy. What's the point of fiddling with chart segments piece by piece if you can just compute entropy in a window?
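For the entropy variant, a rolling-window Shannon entropy of the increments can be computed along these lines (a sketch; the window length, bin count, and the use of Shannon entropy are my own choices, not something specified in the thread):

```python
import numpy as np

def rolling_entropy(returns, window=50, bins=10):
    """Shannon entropy of the return histogram in a sliding window."""
    returns = np.asarray(returns, dtype=float)
    out = np.full(len(returns), np.nan)
    for t in range(window, len(returns) + 1):
        chunk = returns[t - window:t]
        hist, _ = np.histogram(chunk, bins=bins)
        p = hist / hist.sum()
        p = p[p > 0]                       # ignore empty bins
        out[t - 1] = -np.sum(p * np.log(p))
    return out

# usage: entropy of price increments in a 50-bar window
prices = np.cumsum(np.random.default_rng(1).normal(size=500))
ent = rolling_entropy(np.diff(prices))
```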
 

A new probabilistic booster from Stanford University. Well, "new" is a stretch, it's 6 years old.


NGBoost: Natural Gradient Boosting for Probabilistic Prediction
  • stanfordmlgroup.github.io
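For anyone who wants to try it, basic usage of the ngboost Python package looks roughly like this (a sketch based on the library's documented interface, with synthetic data; treat the parameter choices as placeholders):

```python
import numpy as np
from ngboost import NGBRegressor
from sklearn.model_selection import train_test_split

# synthetic regression data just to show the interface
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = X @ rng.normal(size=5) + rng.normal(scale=0.3, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# NGBoost fits a full predictive distribution (Normal by default), not just a point estimate
ngb = NGBRegressor(n_estimators=300, verbose=False).fit(X_train, y_train)

point_pred = ngb.predict(X_test)   # mean of the predictive distribution
dist = ngb.pred_dist(X_test)       # full predictive distribution per sample
print(dist.params["loc"][:5], dist.params["scale"][:5])
```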
 
Maxim Dmitrievsky #:

A new probabilistic booster from Stanford University. Well, "new" is a stretch, it's six years old.


Interesting, I'll have to check it out. It's good that probabilistic learning continues to evolve.