Machine learning in trading: theory, models, practice and algo-trading - page 3038

 
Aleksey Vyazmikin #:

Selecting rules does work, but we must realise that some of them do not perform consistently from year to year, some stop working altogether, and another part keeps working steadily.

Naturally, we are interested in the ones that keep working. What distinguishes them from the rest is the mystery whose solution would significantly improve any TS.

That is exactly why I am trying to increase the potential number of good rules by selecting a limited number of predictor ranges for them. To do this, we need to identify regions of "stable" behaviour of each predictor, which will then be used to build rules. That is the task I am currently working on.

I have not experimented with other targets, as I am first looking for a less costly way of mining these rules than the one I have now.

Did I understand correctly that you decided not to make a Colab to compare the two methods?

For now I am thinking of doing it automatically.
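(For illustration of the "stable regions" idea above: a rough Python sketch, not Aleksey's actual tooling. The column names, the yearly grouping and the same-sign stability criterion are all my assumptions.)

```python
# Sketch: find quantile ranges of a predictor whose relation to the
# target keeps the same sign year after year. Column names ("feature",
# "target", "year") and the criterion itself are illustrative only.
import numpy as np
import pandas as pd

def stable_regions(df, feature, target="target", year="year",
                   n_bins=10, min_lift=0.05):
    df = df.copy()
    df["bin"] = pd.qcut(df[feature], q=n_bins, duplicates="drop")
    base = df.groupby(year)[target].mean()               # per-year base rate
    rate = df.groupby([year, "bin"], observed=True)[target].mean()
    lift = rate.unstack("bin").sub(base, axis=0)         # per-year excess rate
    keep = []
    for col in lift.columns:
        s = lift[col].dropna()
        # "Stable" = same sign of the excess rate in every year,
        # plus a non-negligible average effect.
        if len(s) and (np.sign(s) == np.sign(s.iloc[0])).all() \
                and s.abs().mean() >= min_lift:
            keep.append(col)
    return keep  # predictor intervals worth building rules on
```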

 
Aleksey Vyazmikin #:

Selecting rules does work, but we must realise that some of them do not perform consistently from year to year, some stop working altogether, and another part keeps working steadily.

The truth is that if you take 1000 random TSs, you will get exactly the same results.

Moreover, if you add one more test sample, a fourth one (Train-Test-Valid-Test2),

then you will find that the part which "continues to work steadily"

will fail in exactly the same way ))

It is all random, and the pattern that was found is actually random too.
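(A quick way to check this claim: simulate purely random strategies and count survivors through the sequential Train/Test/Valid/Test2 periods. A minimal sketch; the noise returns and the "profitable" threshold are assumptions.)

```python
# Sketch: survivorship of purely random strategies across sequential
# hold-out periods. All data here is synthetic noise.
import numpy as np

rng = np.random.default_rng(0)
n_strategies, n_days = 1000, 4 * 250                    # 4 periods of ~1 year
returns = rng.normal(0.0, 0.01, size=(n_strategies, n_days))

alive = np.ones(n_strategies, dtype=bool)
for name, idx in zip(["Train", "Test", "Valid", "Test2"],
                     np.split(np.arange(n_days), 4)):
    alive &= returns[:, idx].sum(axis=1) > 0            # keep profitable ones
    print(f"{name}: {alive.sum()} of {n_strategies} still profitable")
# Roughly half drop out at each stage: the "steadily working" remainder
# is just the tail of a binomial distribution, not a genuine pattern.
```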

 
mytarmailS #:

The truth is that if you take 1000 random TSs, you will get exactly the same results.

Moreover, if you add one more test sample, a fourth one (Train-Test-Valid-Test2),

then you will find that the part which "continues to work steadily"

will fail in exactly the same way ))

It is all random, and the pattern that was found is actually random too.

I have trained 10,000 models and I know that this is not the case: most of them do stop working on new data.

As for the leaves, apparently you did not read carefully: I wrote that I trained on data from 2014 to 2019 (a couple of months), including validation, and showed how they performed in 2021. That was an honest test, without looking into the future, and 50% were in profit.

Maybe everything is random, but every randomness has a cyclicality by which it turns out not to be random :)

 
Aleksey Vyazmikin #:

I have trained 10,000 models and I know that this is not the case: most of them do stop working on new data.

As for the leaves, apparently you did not read carefully: I wrote that I trained on data from 2014 to 2019 (a couple of months), including validation, and showed how they performed in 2021. That was an honest test, without looking into the future, and 50% were in profit.

Maybe everything is random, but every randomness has a cyclicality by which it turns out not to be random :)

A cyclicality that is not cyclical... you are back to randomness.
 
Maxim Dmitrievsky #:
So if you get a stable state via an FF, it will characterise a stable TS? Everyone realises that this is kurwafitting.
That is the only way to get a stable TS: by chance, through brute force.

The FF must describe the rules of a stable TS. If the TS turns out to be unstable, then the rules in the FF are wrong.

The problem is that nobody has managed to find such rules for an FF yet (at least I have not seen them). There are two ways: either simplify the TS so that it has as few degrees of freedom as possible, which makes it more likely to be either steadily losing or steadily profitable, or find rules for the FF, which is the harder way.

In general, there is no universal recipe for a grail builder. There is a third way: creating a flexible self-learning AI like ChatGPT, but even there an FF is used in training. The one thing I wanted to say is that an FF will always be present in one form or another; there is no getting rid of it.

The problem is not the construction of a stable TS as such, but the descriptive characteristics that are put into the FF.
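(If one does try to encode "stability rules" in an FF, as described above, a common trick is to score consistency across sub-periods rather than total profit. A hypothetical sketch; the window count, the Sharpe-like score and the penalty weight are my assumptions, not Andrey's formula.)

```python
# Sketch of a fitness function that rewards stable behaviour: mean
# per-window risk-adjusted score minus a penalty on its dispersion.
import numpy as np

def stability_ff(trade_returns, n_windows=8, penalty=2.0):
    windows = np.array_split(np.asarray(trade_returns, dtype=float), n_windows)
    scores = np.array([w.mean() / (w.std() + 1e-9) for w in windows])
    # High mean with low spread across windows scores best; a TS that
    # earns everything in one window is penalised heavily.
    return scores.mean() - penalty * scores.std()
```

An optimiser maximising such an FF can still kurwafit, of course; it only shifts what exactly gets overfitted.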

 
Andrey Dik #:

The FF must describe the rules of a stable TS. If the TS turns out to be unstable, then the rules in the FF are wrong.

The problem is that nobody has managed to find such rules for an FF yet (at least I have not seen them). There are two ways: either simplify the TS so that it has as few degrees of freedom as possible, which makes it more likely to be either steadily losing or steadily profitable, or find rules for the FF, which is the harder way.

In general, there is no universal recipe for a grail builder. There is a third way: creating a flexible self-learning AI like ChatGPT, but even there an FF is used in training. The one thing I wanted to say is that an FF will always be present in one form or another; there is no getting rid of it.

The problem is not the construction of a stable TS as such, but the descriptive characteristics that are put into the FF.

Quite correct and competent reasoning, though of course contradictory: "...not the construction of a stable TS" is precisely the technology of building/selecting/evolving steadily profitable TSs, and that is exactly what we are striving for.

 
Andrey Dik #:

The FF must describe the rules of a stable TS. If the TS turns out to be unstable, then the rules in the FF are wrong.


An FF cannot solve any problem by definition. Either there is SOMETHING there to improve by single percentage points, or there is not. You cannot improve rubbish: no matter how much you dig through rubbish, rubbish will remain rubbish.

Therefore, considerations of the relationship between the target and the predictors come first. Moreover, a quantitative assessment of this relationship is needed, and not merely an assessment of the relationship, but a quantitative assessment of the predictors' ability to predict future values of the target variable (the teacher). There is no place for an FF in this chain of reasoning, so one can simply take an ML algorithm, of which there are hundreds, and use it as a black box without trying to "improve" anything inside specific algorithms.

Moreover, exercises with FFs are a fraught business: the smell of overfitting to history is too strong.
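(One concrete form of such a quantitative assessment: mutual information between the predictors and the future target, computed per chronological fold so that a predictor only counts if it works in every period. A sketch; the data layout and the fold count are assumptions.)

```python
# Sketch: score each predictor's ability to predict the FUTURE target
# across chronological folds. X rows must be aligned so that y_future
# is strictly later in time than the corresponding features.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def predictive_power(X, y_future, n_splits=5):
    folds = np.array_split(np.arange(len(y_future)), n_splits)
    scores = np.array([
        mutual_info_classif(X[idx], y_future[idx], random_state=0)
        for idx in folds
    ])
    # Average score plus the worst fold: a credible predictor should
    # show a relationship in every period, not just on average.
    return scores.mean(axis=0), scores.min(axis=0)
```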

 
Forester #:

Exactly, none. It's unclear why you keep touting 20% as an achievement...
Neither 20%, nor 8%, nor 50% means anything by itself. The numbers mean nothing.

The balance curve is what is interesting. Is there no graph?

You have a column called "classification error", but now classification is irrelevant.

It is impossible to understand, much as I would like to.

Maybe you can state your results more clearly?

 
crazy, crazy, crazy, crazy, crazy.

When will it end?
 
СанСаныч Фоменко #:

An FF cannot solve any problem by definition. Either there is SOMETHING there to improve by single percentage points, or there is not. You cannot improve rubbish: no matter how much you dig through rubbish, rubbish will remain rubbish.

Therefore, considerations of the relationship between the target and the predictors come first. Moreover, a quantitative assessment of this relationship is needed, and not merely an assessment of the relationship, but a quantitative assessment of the predictors' ability to predict future values of the target variable (the teacher). There is no place for an FF in this chain of reasoning, so one can simply take an ML algorithm, of which there are hundreds, and use it as a black box without trying to "improve" anything inside specific algorithms.

Moreover, exercises with FFs are a fraught business: the smell of overfitting to history is too strong.

I knew that you would not pass my post by; moreover, I knew what you would say. Unfortunately, you do not realise that what is highlighted in bold red is itself an FF... I do not understand why you are so allergic to FFs.

By the way, you can make a lot of useful things out of rubbish; it is called recycling. I will say more: it is precisely the presence of "rubbish" that still makes it possible to earn a stable profit in the markets; there are examples even on this forum.
