Machine learning in trading: theory, models, practice and algo-trading - page 3521


Horizon 50 :)

Iteration: 0, Cluster: 9, PE: 0.19187815187230203
R2: 0.9251399849015964
Iteration: 0, Cluster: 5, PE: 0.1777799526873072
R2: 0.9411135040325027
Iteration: 0, Cluster: 10, PE: 0.1911386275608683
R2: 0.9824844090017448
Iteration: 0, Cluster: 0, PE: 0.17365747619232763
R2: 0.9526050618178151
Iteration: 0, Cluster: 2, PE: 0.1955128258151347
R2: 0.94080469873982
Iteration: 0, Cluster: 14, PE: 0.18717625687320816
R2: 0.9494205895982205
Iteration: 0, Cluster: 7, PE: 0.18377017584958724
R2: 0.9854412735000905
Iteration: 0, Cluster: 4, PE: 0.19457491293528226
R2: 0.9707028666409455
Iteration: 0, Cluster: 8, PE: 0.19303068880218144
R2: 0.9436050178630804
Iteration: 0, Cluster: 1, PE: 0.17772969862939111
R2: 0.9644481799583889
Iteration: 0, Cluster: 12, PE: 0.19687310008150688
R2: 0.9403352291614797
Iteration: 0, Cluster: 3, PE: 0.2035721978224435
R2: 0.946321085154211
Iteration: 0, Cluster: 6, PE: 0.16732132934343538
R2: 0.6934348651830222
Iteration: 0, Cluster: 13, PE: 0.133697960635451
R2: 0.8435168292151949
Iteration: 0, Cluster: 11, PE: 0.20661933275140204
R2: -0.49276758465328296


 

And at a horizon of 100 it's not much different from 50:

Iteration: 0, Cluster: 10, PE: 0.16038670523330656
R2: 0.9580619927003785
Iteration: 0, Cluster: 1, PE: 0.15737235389885312
R2: 0.9659289040195005
Iteration: 0, Cluster: 9, PE: 0.1478247664210394
R2: 0.9693791189100427
Iteration: 0, Cluster: 2, PE: 0.15021002743081394
R2: 0.9673070214237375
Iteration: 0, Cluster: 3, PE: 0.15299954318048092
R2: 0.9231769724429475
Iteration: 0, Cluster: 11, PE: 0.1384676715929523
R2: 0.8693818621168186
Iteration: 0, Cluster: 14, PE: 0.15557181624465333
R2: 0.9368067810197325
Iteration: 0, Cluster: 0, PE: 0.15229071787639473
R2: 0.9607822838854807
Iteration: 0, Cluster: 5, PE: 0.14474537028244805
R2: 0.9698991100312909
Iteration: 0, Cluster: 8, PE: 0.14420260682560085
R2: 0.8769507302434456
Iteration: 0, Cluster: 4, PE: 0.15773505587243142
R2: 0.8376469887869636
Iteration: 0, Cluster: 12, PE: 0.1421691062142389
R2: 0.8871514149822588
Iteration: 0, Cluster: 6, PE: 0.1244569768624934
R2: -0.023682081750673878
Iteration: 0, Cluster: 7, PE: 0.16146347813874914
R2: 0.598090344655112
Iteration: 0, Cluster: 13, PE: 0.35335933502142136
 

The longer the forecasting horizon, the lower the entropy. This is related to capturing trends: you can get many unidirectional deals in a row.

So you still need to consider the balance between horizon and entropy.
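For reference, PE in these logs presumably stands for permutation entropy of the label/price sequence. A minimal sketch (my own implementation, not the thread's code) illustrating why long unidirectional runs push entropy down: a monotone series has far lower permutation entropy than a choppy one.

```python
import math
import random

def permutation_entropy(series, order=3):
    """Normalized permutation entropy (Bandt-Pompe): 0 for a monotone
    series, close to 1 for an i.i.d. random one."""
    counts = {}
    for i in range(len(series) - order + 1):
        window = series[i:i + order]
        # ordinal pattern: ranking of the values inside the window
        pattern = tuple(sorted(range(order), key=lambda k: window[k]))
        counts[pattern] = counts.get(pattern, 0) + 1
    total = sum(counts.values())
    h = -sum((c / total) * math.log(c / total) for c in counts.values())
    return h / math.log(math.factorial(order))  # normalize to [0, 1]

random.seed(0)
noise = [random.random() for _ in range(500)]   # choppy series
trend = sorted(noise)                           # one long unidirectional run

print(permutation_entropy(trend))  # ~0.0: a single ordinal pattern
print(permutation_entropy(noise))  # ~1.0: all patterns roughly equally likely
```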

It turns out there is no direct correlation between label entropy and stability on the OOS, only an indirect one.

 

Here I trained 100 models on the same data with different seeds; below are 3 scatter plots of PE versus balance on my samples.

There is a dependence, but not the expected one — it turned out the other way round: the higher the PE, the better the financial result.

Trying to predict the balance on exam using PE from train.

Trying to predict the balance on exam using PE from test.

And for comparison, using the balance from test to predict the balance on exam.
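One way to put a number on each of these scatter plots is the correlation between the predictor metric and the exam balance across the 100 seeds. A minimal sketch with synthetic stand-ins for the real runs (the positive PE-balance link is injected artificially here, purely to show the mechanics):

```python
import numpy as np

rng = np.random.default_rng(42)
n_models = 100  # one model per seed, as in the experiment above

# Hypothetical stand-ins for the real measurements:
pe_train = rng.uniform(0.13, 0.21, n_models)                 # PE on train
balance_exam = 500 * pe_train + rng.normal(0, 10, n_models)  # toy positive link

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length arrays."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return float((x * y).mean())

print(pearson(pe_train, balance_exam))  # strongly positive by construction
```

With real data one would replace the synthetic arrays with the per-seed PE and balance values and read the scatter plots off the same correlation.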

Well, it's not much better than the balance. I don't know — I think Recall is still the answer.

Below are 3 scatter plots of PE and Recall on my sample.

Trying PE from train to predict Recall on exam.


Trying PE from test to predict Recall on exam.

And, similarly, trying Recall from test to predict Recall on the exam sample.
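For readers following along, Recall here is the standard classification metric: the share of actual positives (e.g. labeled profitable deals) that the model catches. A minimal reference implementation (my own sketch, not the thread's code):

```python
def recall(y_true, y_pred, positive=1):
    """Recall = TP / (TP + FN): share of actual positives the model caught."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    return tp / (tp + fn) if (tp + fn) else 0.0

# 3 real "buy" labels, the model catches 2 of them:
print(recall([1, 1, 0, 1, 0], [1, 0, 0, 1, 1]))  # 0.666...
```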


 
As they say, nice try
 
Maxim Dmitrievsky #:
As they say, nice try

Maybe next time I'll get lucky.

 
Maxim Dmitrievsky #:

7 bars ahead prediction

TF, symbol?
 
fxsaber #:
TF, symbol?

H1, EURGBP in this case

 
Aleksey Vyazmikin #:

Maybe next time I'll get lucky.

Explore this symbol.

Maxim Dmitrievsky #:

H1, EURGBP in this case

 
fxsaber #:

Explore this symbol.

I think it's pretty self-explanatory.
