Machine learning in trading: theory, models, practice and algo-trading - page 3292

 
Andrey Dik #:

On what experiment? You could say "personal", but that's not much of an answer.

You and Alexei won't be satisfied with any answer at this stage. Because you're not familiar with the techniques.

 
Maxim Dmitrievsky #:

You and Alexei won't be satisfied with any answer at this stage. Because you're not familiar with the techniques.

You stated a hypothesis. I stated mine.

My hypothesis is that there is a certain threshold beyond which quantity turns into quality. Your hypothesis is the opposite (or mine is the opposite, if you find the word "opposite" offensive).

Under my hypothesis, you have not reached the threshold.


 
Andrey Dik #:

You stated a hypothesis. I stated mine.

My hypothesis is that there is a certain threshold beyond which quantity turns into quality. Your hypothesis is the opposite (or mine is the opposite, if you find the word "opposite" offensive).

Under my hypothesis, you have not reached the threshold.

Quantity does not turn into quality: estimators stop working in high dimensions. You can't estimate an effect on over-trained models; you can on under-trained ones.
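The high-dimensions claim can be sketched with a small pure-stdlib example (the point counts and dimensions here are my own illustrative choices, not anything from the thread): as dimensionality grows, pairwise distances between random points concentrate, so any estimator that relies on distance contrast loses its footing.

```python
import math
import random

def pairwise_contrast(dim, n_points=50, seed=0):
    """(max distance - min distance) / min distance among random points.

    When this contrast approaches zero, all points look roughly
    equidistant and distance-based estimators stop discriminating.
    """
    rng = random.Random(seed)
    pts = [[rng.random() for _ in range(dim)] for _ in range(n_points)]
    dists = [
        math.dist(pts[i], pts[j])
        for i in range(n_points)
        for j in range(i + 1, n_points)
    ]
    return (max(dists) - min(dists)) / min(dists)

low_d = pairwise_contrast(dim=2)
high_d = pairwise_contrast(dim=1000)
print(f"distance contrast in 2-D:    {low_d:.2f}")
print(f"distance contrast in 1000-D: {high_d:.2f}")
```

In 2-D the nearest and farthest pairs differ by a large factor; in 1000-D the contrast collapses toward zero, which is the usual "curse of dimensionality" picture.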
 
Andrey Dik #:

You stated a hypothesis. I stated mine.

My hypothesis is that there is a certain threshold beyond which quantity turns into quality. Your hypothesis is the opposite (or mine is the opposite, if you find the word "opposite" offensive).

Under my hypothesis, you have not reached the threshold.


Under your hypothesis, if quality drops rapidly, you just have to do more shit...

If the factory turns out crooked nuts, it must produce more of them no matter what - then the quantity of nuts will turn into quality and communism with universal prosperity will come.

 
Maxim Kuznetsov #:

Under your hypothesis, if quality drops rapidly, you just have to do more shit...

If the factory turns out crooked nuts, it must produce more of them no matter what - then the quantity of nuts will turn into quality and communism with universal prosperity will come.

No, not like that. You can't achieve communism that way. Try making nuts if you don't believe it: the time rate will fall, you'll earn less and work more (under communism it should be the other way round).
Information is not nuts.
 

In ML a different graph is used.

In causal inference it is easier to work with bias than with variance. Hence the non-hypothetical conclusion that model complexity, or a growing number of features, hinders more than it helps.
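A toy sketch of that bias/variance trade-off (my own illustrative setup, not from the thread: a quadratic target, Gaussian noise, and two extreme predictors): on noisy data a deliberately biased constant model can generalise better out of sample than a flexible, high-variance nearest-neighbour one.

```python
import random

def make_data(n, rng):
    """Noisy samples of y = x^2 on [-1, 1]."""
    return [(x, x * x + rng.gauss(0, 0.5))
            for x in (rng.uniform(-1, 1) for _ in range(n))]

def mse(predict, data):
    return sum((predict(x) - y) ** 2 for x, y in data) / len(data)

rng = random.Random(7)
train = make_data(30, rng)
test = make_data(2000, rng)

# High bias, low variance: ignore x, always predict the training mean.
mean_y = sum(y for _, y in train) / len(train)
def simple(x):
    return mean_y

# Low bias, high variance: echo the nearest training point (1-NN),
# which faithfully reproduces the training noise.
def flexible(x):
    return min(train, key=lambda p: abs(p[0] - x))[1]

print(f"simple (biased) model, test MSE:     {mse(simple, test):.3f}")
print(f"flexible (high-variance) model, MSE: {mse(flexible, test):.3f}")
```

The flexible model memorises the training noise and pays roughly double the noise variance on fresh data, while the crude constant only pays its bias once.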


 
Andrey Dik #:
No, not like that. You can't achieve communism that way. Try making nuts if you don't believe it: the time rate will fall, you'll earn less and work more (under communism it should be the other way round).
Information is not nuts.

Well, it's like that all over the forum.

"My programme/method/calculation/model does it faster/bigger/prettier/deeper than everyone else!... but at a loss" - and the frequency of these creations is increasing.

It becomes easier to make crap, so they make it more often, and the bar for evaluation falls...

 
Maxim Kuznetsov #:

Well, it's like that all over the forum.

"My programme/method/calculation/model does it faster/bigger/prettier/deeper than everyone else!... but at a loss" - and the frequency of these creations is increasing.

It becomes easier to make crap, so they make it more often, and the bar for evaluation falls...

That's not what I'm talking about. You're not comparing like with like.

 
The status of wise elders has not been confirmed.
 
Maxim Dmitrievsky #:

In ML a different graph is used.

In causal inference it is easier to work with bias than with variance. Hence the non-hypothetical conclusion that model complexity, or a growing number of features, hinders more than it helps.


Where does this graph come from?

ML uses completely different criteria, such as AIC, which penalises a model for having too many parameters.

This and other information criteria reflect a common assumption in modelling: of two models with the same performance, the one with fewer parameters is chosen.
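The parsimony rule that AIC encodes can be shown in two lines (the parameter counts and log-likelihoods below are made-up illustrative numbers): given an identical fit, the model with fewer free parameters gets the lower, i.e. better, score.

```python
def aic(k, log_likelihood):
    """Akaike Information Criterion: 2k - 2*ln(L), lower is better.

    The 2*k term is the penalty for every extra free parameter.
    """
    return 2 * k - 2 * log_likelihood

# Two hypothetical models reaching the same log-likelihood:
small_model = aic(k=4, log_likelihood=-120.0)   # 2*4 + 240 = 248.0
big_model = aic(k=20, log_likelihood=-120.0)    # 2*20 + 240 = 280.0
print(small_model, big_model)  # the smaller model wins
```

With equal likelihoods the comparison reduces entirely to the 2k penalty, which is exactly the "fewer parameters wins" convention described above.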

Let's not forget that the very concept of a "model" is a coarsening of reality. There are no extremes here: there is a balance between coarsening and acceptable model accuracy. But the main thing is not the model's accuracy but its coarseness, that is, its generalisation ability. And this is understandable, since the main enemy of modelling is over-fitting, the twin of model accuracy.
