Is there a pattern to the chaos? Let's try to find it! Machine learning on the example of a specific sample. - page 13

 
Maxim Dmitrievsky #:

in small groups of five or ten

1-3

If none of them gives anything on its own, what is the point of talking about some mythical connection between them? Garbage + garbage...?

Can you calculate how many combinations that would be if I have, say, 5000 predictors? It would be good to train each combination 100 times with a different seed.... and at, say, a minute per training run, how much time would that take?
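The arithmetic behind this objection is easy to check. A back-of-the-envelope sketch (my own toy calculation, not anyone's actual code), assuming groups of 1 to 3 predictors as mentioned earlier in the thread, 100 seeds per combination, and one minute per training run:

```python
from math import comb

n = 5000                                       # number of predictors
total = sum(comb(n, k) for k in range(1, 4))   # groups of 1, 2 or 3 predictors
print(total)                                   # 20833337500 combinations

minutes = total * 100                          # 100 seeds, ~1 minute per run
years = minutes / 60 / 24 / 365                # roughly 4 million years
print(f"{years:,.0f} years")
```

So even restricting to triples, exhaustive search over 5000 predictors is out of reach by many orders of magnitude, which is the point being made.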

 
Aleksey Vyazmikin #:

Can you calculate how many combinations it will be if I have, say, 5000 predictors? Each combination should be trained 100 times with a different seed.... and at, say, a minute per training run, how long will it take?

Take a few features from different indicators; all the others will be similar anyway.

There is no point in looking for a connection between them in the hope that some combinations will give better results when, individually, they give nothing.

And if they do work individually, their combination can strengthen the TS, but not always.

 
Maxim Dmitrievsky #:

Take a few features from different indicators; all the others will be similar anyway.

There is no point in looking for a connection between them in the hope that some combinations will give better results when, individually, they give nothing.

And if they do work individually, their combination can strengthen the TS, but not always.

What does "give nothing" mean, and how do you measure it? For example, I estimate the probability shift of each quantum segment of a predictor and select, say, those that shift the probability by 5% in one direction or the other.
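The idea described here can be sketched on toy data (this is my own illustrative reconstruction, with synthetic data and arbitrary bin counts, not the poster's actual procedure): split a predictor into quantile segments, compare the class-1 rate inside each segment with the base rate, and keep segments whose shift is at least 5 percentage points.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=10_000)                    # toy predictor
# toy target: class-1 probability is 0.5, rising to 0.6 where x > 1
y = (rng.random(10_000) < 0.5 + 0.1 * (x > 1)).astype(int)

base = y.mean()                                # base rate of class 1
edges = np.quantile(x, np.linspace(0, 1, 11))  # boundaries of 10 quantile segments
seg = np.clip(np.digitize(x, edges[1:-1]), 0, 9)

selected = []
for s in range(10):
    shift = y[seg == s].mean() - base          # probability shift inside segment s
    if abs(shift) >= 0.05:                     # keep shifts of >= 5 points
        selected.append((s, round(shift, 3)))
print(selected)
```

On this construction the top decile (where x > 1 for every observation) shows a clear positive shift, while the middle segments stay near the base rate, so the filter passes only the informative tail.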

And about different indicators - well, I have already published the model's internals here, and they include a decent number of indicators. Are you sure that if you randomly replace them with others, the model will earn the same?

Forum on trading, automated trading systems and testing trading strategies.

Is there a pattern in chaos? Let's try to find it! Machine learning on the example of a specific sample.

Aleksey Vyazmikin, 2022.11.02 18:10

Here's another variant - I like it even more, as the result is stable across all samples.

606,1048,1060,1083,1095,1103,1108,1110,1137,1198,1347,1353,1511,1525,1526,2055,2581,2582,3078,3153,3273,3341,3676,3690,3695,3839,3919,3967,4397,4433,5052,5364,5579



(Balance chart)


 
Aleksey Vyazmikin #:

What does "give nothing" mean, and how do you measure it? For example, I estimate the probability shift of each quantum segment of a predictor and select, say, those that shift the probability by 5% in one direction or the other.

And about different indicators - well, I have already published the model's internals here, and they include a decent number of indicators. Are you sure that if you randomly replace them with others, the model will earn the same?

Well, how do you measure it? On new data.

And when there are a lot of them, it is impossible to interpret the results.

 
Maxim Dmitrievsky #:

Well, how do you measure it? On new data.

And when there are a lot of them, it is impossible to interpret the results.

It is impossible, which is why I am developing different selection methods.

It is possible to look at new data, but should we expect the result to repeat on the next batch of new data?... I would like some internal criterion for evaluating a predictor that lets me expect this with a higher probability.

I have been running some experiments - I will probably publish them later. It turned out that you can train a model on the first half of 2014 and it will earn in 2022... but not always in the intermediate half-years between those periods. So what conclusion should we draw: is the model junk, or does it just need additional predictors to identify the difference between those half-years?

 
Aleksey Vyazmikin #:

It is impossible, which is why I am developing different selection methods.

It is possible to look at new data, but should we expect the result to repeat on the next batch of new data?... I would like some internal criterion for evaluating a predictor that lets me expect this with a higher probability.

I have been running some experiments - I will probably publish them later. It turned out that you can train a model on the first half of 2014 and it will earn in 2022... but not always in the intermediate half-years between those periods. So what conclusion should we draw: is the model junk, or does it just need additional predictors to identify the difference between those half-years?

Well, I took 4 std indicators with different periods and trained on them over the last 12 years; the previous 10 years are the test (to the left of the dotted line).

There it falls right at the change of the global trend (orange chart), but the TS somehow holds.

Then you can look at what trades it opens on the chart and where, to roughly estimate the principle and go from there.


 
Maxim Dmitrievsky #:

Well, I took 4 std indicators with different periods and trained on them over the last 12 years; the previous 10 years are the test (to the left of the dotted line).

There it falls right at the change of the global trend (orange chart), but the TS somehow holds.

Then you can look at what trades it opens on the chart and where, to roughly estimate the principle and go from there.


It looks like a counter-trend strategy with entries on strong outliers/moves against the trend - few trades in 10 years.

If Recall is very low, it also makes sense to combine models - they may overlap very rarely, but the number of trades over the period will increase.
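The combination argument can be checked numerically (again a toy sketch with synthetic signals, not anyone's real models): if two low-recall models fire independently and rarely, their entry signals almost never overlap, so OR-ing them roughly doubles the trade count.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000                                  # bars in the test period
m1 = rng.random(n) < 0.001                   # model 1 signals on ~0.1% of bars
m2 = rng.random(n) < 0.001                   # model 2, independent of model 1

combined = m1 | m2                           # trade when either model signals
overlap = (m1 & m2).sum()                    # bars where both fire at once
print(m1.sum(), m2.sum(), combined.sum(), overlap)
```

With signal rates this low the expected overlap is a fraction of a trade, so the combined trade count sits essentially at the sum of the individual counts - which is the "number of trades will increase" effect described above.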

 
Aleksey Vyazmikin #:

It looks like a counter-trend strategy with entries on strong outliers/moves against the trend - few trades in 10 years.

If Recall is very low, it also makes sense to combine models - they may overlap very rarely, but the number of trades over the period will increase.

this is just an example

 
Maxim Dmitrievsky #:

this is just an example

An example of a stable pattern revealed by the model over a long period of time? All right.

 
Aleksey Vyazmikin #:

An example of a stable pattern revealed by the model over a long period of time? All right.

An example that it is possible to train on a few features and then build a clear, interpretable TS out of it. This will not work with hundreds of features.