Machine learning in trading: theory, models, practice and algo-trading - page 3298

 
Andrey Dik #:

Here you yourself have shown that a brain pre-trained on unrelated data solves specific problems it did not know before, and yet you say you don't need extra "knowledge".

I didn't say that, stop lying 😀
Features are not knowledge.
 
Andrey Dik #:

You keep confusing the concept of "extremum" with "sharp peak" (a point at which the function has no derivative).

Even a flat surface has an extremum.

Another matter is that one always tries to choose the FF (fitness function) so that its surface is as smooth as possible and the global extremum is unique. That unique global extremum must correspond to the single, unambiguous solution of the problem.

If the global extremum of the FF is not unique, and all the more so if the function has no derivative there, it means the FF (the model evaluation criterion) was chosen incorrectly. Misunderstanding this gives rise to the term "overfitting", and to searching for some ambiguous local extremum.

We can draw an analogy: a specialist, a doctor, is trained, and qualification examinations (the FF) are developed for certification. For a doctor there can be no concept of "overtrained" or "overfitted": if a doctor does not get the maximum score, he is undertrained. Yet according to you, a good doctor should always be an undertrained ignoramus.

Once again: the problem of "overtraining" is a wrong choice of criteria for evaluating the model. There seem to be such cool experts on this forum, yet they repeat the same mistakes over and over. Developing correct evaluation criteria is no less important than selecting predictors; otherwise it is simply impossible to evaluate the model adequately.
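
As an editor's illustration of the point about evaluation criteria, here is a minimal Python sketch (the data and both criteria are hypothetical): the same candidate parameter sets, scored once by raw final profit and once by profit per unit of drawdown, can produce different "optimal" choices, which is exactly why the choice of criterion (FF) matters.

```python
# Minimal sketch (hypothetical data and criteria): the same candidate models
# ranked by two different evaluation criteria (fitness functions) can give
# different "optimal" choices.
import numpy as np

rng = np.random.default_rng(0)

# Fake equity curves for three candidate parameter sets of some strategy.
equity = {name: np.cumsum(rng.normal(mu, 1.0, 500))
          for name, mu in [("A", 0.06), ("B", 0.05), ("C", 0.04)]}

def total_profit(curve):
    return curve[-1]                      # criterion 1: final profit only

def profit_over_drawdown(curve):
    peak = np.maximum.accumulate(curve)
    max_dd = np.max(peak - curve) + 1e-9  # worst drawdown along the curve
    return curve[-1] / max_dd             # criterion 2: profit per unit of risk

for ff in (total_profit, profit_over_drawdown):
    best = max(equity, key=lambda k: ff(equity[k]))
    print(ff.__name__, "->", best)
```

Neither criterion is being recommended here; the point is only that the ranking of models depends on the FF you define.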

I anticipate a flurry of objections; that's okay, I'm used to it. If this is useful to someone, great; and for those to whom it isn't, fine, let them think everything is good as it is.

Over-fitting models has NOTHING to do with optimisation at all.

The ideal overfitted model of a quote is the quote itself. As in any other such case of model building: there is no optimisation, the evaluation of such a model is degenerate, and so on.

You do not understand the meaning of the word "model". Take, for example, the model that is Newton's law of universal gravitation: an idealisation, applicable under ideal conditions such as a vacuum with no other bodies in the universe. Nevertheless, it lets you perform a great many calculations with accuracy sufficient for practice.

And the whole problem of model building is finding a model whose error on real data suits us. It should be understood that the error we observe now will not necessarily be the error in the future; it will lie within some confidence interval. So we are looking for a model whose error stays within an interval acceptable in practice. We do not need extrema.
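
A minimal sketch of the "error within an acceptable interval" idea, with synthetic data and an arbitrary Ridge model as stand-ins: repeated random train/test splits give a rough confidence interval for the out-of-sample error, which can then be compared with what is acceptable in practice instead of hunting for an extremum.

```python
# Minimal sketch (synthetic data, hypothetical setup): estimate an interval
# for the out-of-sample error rather than a single "best" score.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5))
y = X @ np.array([0.5, -0.2, 0.0, 0.1, 0.3]) + rng.normal(scale=1.0, size=1000)

errors = []
for seed in range(200):                       # repeated random splits
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=seed)
    model = Ridge(alpha=1.0).fit(Xtr, ytr)
    errors.append(mean_absolute_error(yte, model.predict(Xte)))

lo, hi = np.percentile(errors, [2.5, 97.5])   # rough 95% interval for the error
print(f"expected MAE ~ {np.mean(errors):.3f}, 95% interval [{lo:.3f}, {hi:.3f}]")
```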

 
Forester #:

150 billion neurons, and not just one output per neuron but many. AI will not reach this level of intelligence for a long time, if ever.
In terms of intelligence, an NS (neural network) is compared to a cockroach: run, bite, run away.

Multiplied by several billion human individuals, current civilisation shows that from a super-small amount of data one can put forward hypotheses that accurately predict many observed phenomena and even recreate unobserved ones.

 
Maxim Dmitrievsky #:
One-shot learning: when a large NS (the brain), pre-trained on unrelated data, is fine-tuned with just a few examples. If the model has already learnt the laws of the world, it cracks a new task at a glance.

This is how large language models, in particular, are adapted to new tasks. But if you force the model to learn these new examples for too long, it starts to forget its previous experience and becomes biased towards the new data.
It would be interesting to plot a graph: one-shot learning quality against the age of the child.
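
The forgetting effect described in the quote above can be reproduced on a toy example. The sketch below is a hypothetical setup (a tiny MLP and made-up "old" and "new" tasks), not anyone's actual model: a network pre-trained on one task is fine-tuned on just five examples of another, and the longer the fine-tuning runs, the worse its loss on the original task becomes.

```python
# Minimal sketch (toy tasks, hypothetical architecture): few-shot fine-tuning
# and catastrophic forgetting of the original task.
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))
loss_fn = nn.MSELoss()

# "Old" task: y = x0 + x1, plenty of data.
x_old = torch.randn(512, 2)
y_old = x_old.sum(dim=1, keepdim=True)

# Pre-training on the old task.
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for _ in range(500):
    opt.zero_grad(); loss_fn(net(x_old), y_old).backward(); opt.step()

# "New" task: y = x0 - x1, but only a few examples (few-shot).
x_new = torch.randn(5, 2)
y_new = x_new[:, :1] - x_new[:, 1:]

opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for step in range(1, 1001):
    opt.zero_grad(); loss_fn(net(x_new), y_new).backward(); opt.step()
    if step in (10, 100, 1000):
        with torch.no_grad():
            # Loss on the old task grows as fine-tuning on the new one drags on.
            print(step, "old-task loss:", loss_fn(net(x_old), y_old).item())
```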
 

Well... trees do have optimisation: in selecting the best split. All columns/features are checked, different splits are tried, and the one with the minimum class impurity for classification (or the minimum error for regression) is used.
For random forests that is where it ends; we then simply average the results of a set of trees, each of which is randomly given, for example, 50% of the features.
In boosting, each subsequent tree learns the error (residual) of the sum of the previous trees and minimises that error (see the sketch after this post).

But all of this is hidden from the user under the bonnet, and there is little point in discussing it. It is not like the optimisation we do in the tester by searching over parameter values that change the features or the targets for the model (e.g. selecting TP/SL).
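
For reference, here is a minimal sketch of the boosting loop described above, with synthetic data and sklearn's DecisionTreeRegressor standing in for whatever implementation one actually uses: each tree is fitted to the residual of the sum of the previous trees, so the split-search "optimisation" happens inside the library, not in the tester.

```python
# Minimal sketch (synthetic data): the "optimisation" hidden inside boosting.
# Each tree's split search minimises error on the residuals of the previous
# trees, so the ensemble keeps reducing the training error.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(2)
X = rng.uniform(-3, 3, size=(400, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=400)

learning_rate, trees = 0.1, []
pred = np.zeros_like(y)                 # F_0 = 0
for m in range(100):
    residual = y - pred                 # error of the sum of the previous trees
    tree = DecisionTreeRegressor(max_depth=2).fit(X, residual)
    trees.append(tree)
    pred += learning_rate * tree.predict(X)

print("training MSE:", np.mean((y - pred) ** 2))
```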

 
Maxim Dmitrievsky #:
Features are not knowledge.

And what do you think features are, then? Stay calm.
And what is knowledge?

You said today that optimisation has nothing to do with ML, and then you admitted that it does.
Wait, we're about to get to the point where features are knowledge.
 
fxsaber #:
It would be interesting to plot a graph: one-shot learning quality against the age of the child.

Most likely the NS-brain is hugely influenced by its environment (including the digital one), especially during the period of most rapid NS formation: early childhood.

It would be interesting to compare such NSs of the same age on different tasks: one NS with gadgets from the age of 2-3, the other without gadgets.

That is, to understand which kind of NS development positively or negatively affects the ability to solve certain tasks.


Perhaps thoughtful invention of TSs (trading systems) is less successful than superficial clip thinking.

 
Andrey Dik #:

And what do you think features are, then? Stay calm.
And what is knowledge?

You said today that optimisation has nothing to do with ML, and then you admitted that it does.
Wait, we're about to get to the point where features are knowledge.
Where did I say that optimisation has nothing to do with ML?

Carry on without me, please.
 
Maxim Dmitrievsky #:
Where did I say that optimisation has nothing to do with ML?

Carry on without me, please.

I knew it.
Re-read what you said.
 
Andrey Dik #:

I knew it.
Re-read what you said.

Go and find where I said that, since you're claiming it.

Or stop talking shit.