Machine learning in trading: theory, models, practice and algo-trading - page 3295

 
Andrey Dik #:

1. Which "here" do you mean?

2. Do you still not know that a neural network is an approximator?

You don't know, apparently. Otherwise you wouldn't wonder what approximation has to do with it.

For what purpose did you bring your optimisation into the discussion?

Can you understand that these are different things?
 
Andrey Dik #:

No, that's not what we're talking about.
The accumulation of heterogeneous information only gets in the way of finding connections, and many contradictions arise.
But only up to a point: at a certain level there is so much information that it is enough to form a monolith, and the missing puzzle pieces start to restore themselves.

An analogy can be given as an example: if you polish the flat surfaces of two bars made of different metals, the less roughness there is, the better the bars slide over each other. But if you keep polishing the surfaces, the bars will stick together: molecules from the two bars start penetrating each other. That is, instead of a further reduction in friction forces there is, on the contrary, a jump-like growth!

Most likely it is contradictory, but the idea is clear.

I agree

But there will be no sudden growth, because:

- not everyone is stubborn enough not to stop at the level of intelligence already achieved;

- trade secrets and high prices start to appear, which in turn reduces supply and demand.

Your idea is on your own graph, at the very beginning: low quantity at high quality.

For high quality, the excess information is discarded.
 
Maxim Dmitrievsky #:
You don't know, apparently. Otherwise you wouldn't wonder what approximation has to do with it.

For what purpose did you bring your optimisation into the discussion?

Can you understand that these are different things?

I answered your question; why are you repeating yourself?

My post was a reply to Sanych's post, where he mentioned a crooked FF (fitness function).

Can you understand that?

And you also fail to understand that learning of any kind is impossible without optimisation; they are inseparable things.

 
Andrey Dik #:

I answered your question; why are you repeating yourself?

My post was a reply to Sanych's post, where he mentioned a crooked FF (fitness function).

Can you understand that?

And you also fail to understand that learning of any kind is impossible without optimisation; they are inseparable things.

He wrote correctly that we don't have the concept of extrema. We have approximation and stability criteria on new data, which are the components of model error.
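
One way to read "approximation and stability criteria on new data as components of model error" is the usual in-sample / out-of-sample split. A minimal sketch, assuming synthetic data and a scikit-learn regressor (both are illustrative assumptions, not anything from the thread):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Hypothetical feature matrix and target; stands in for price-based features.
X = rng.normal(size=(1000, 5))
y = X[:, 0] - 0.5 * X[:, 1] + rng.normal(0, 0.3, size=1000)

X_tr, X_new, y_tr, y_new = train_test_split(X, y, test_size=0.5, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# "Approximation": how well the model fits the data it was trained on.
approx_error = mean_squared_error(y_tr, model.predict(X_tr))
# "Stability": how well that fit holds up on new data.
new_data_error = mean_squared_error(y_new, model.predict(X_new))
# The gap between the two is the degradation when moving to new data.
stability_gap = new_data_error - approx_error

print(f"in-sample (approximation) MSE: {approx_error:.4f}")
print(f"new-data (stability) MSE:      {new_data_error:.4f}")
print(f"gap:                           {stability_gap:.4f}")
```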
 
Renat Akhtyamov #:

Most likely it is contradictory, but the idea is clear.

I agree

But there will be no sudden growth, because:

- not everyone is stubborn enough not to stop at the level of intelligence already achieved;

- trade secrets and high prices start to appear, which in turn reduces supply and demand.

Your idea is on your own graph, at the very beginning.

I gave the example of the polished bars; there the friction forces jump.

With information, of course, there will be no jump, but a smooth transition.

 
Maxim Dmitrievsky #:
He correctly wrote that we do not have the concept of extrema. We have approximation and stability criteria on new data, which are the components of the model error.

Do you improve the approximation and stability criteria iteratively or not?

Or is it like in the fairy tale, where the bogatyr lay on the stove for 30 years and then suddenly got up, went out and kicked everyone's backside? After 10 days the lubricant in immobile joints disappears, so the bogatyr would not be able to kick anyone; he would not even be able to get up after 10 days.

Or are you, like in the fairy tale, doing it all in one go? No, you do it iteratively, improving the estimates; that is an optimisation process.

 
Andrey Dik #:

Do you improve the approximation and stability criteria iteratively or not?

No, you do it iteratively, improving the estimates; that is an optimisation process.

What does that mean? When you increase the degree of the polynomial, what happens?
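
For what it's worth, the rhetorical question is easy to illustrate with a small sketch (the noisy sine target and the degrees chosen are purely illustrative assumptions): as the degree of the polynomial increases, the approximation error on the training points keeps falling, while the error on new points eventually starts to grow.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D task: a noisy sine on training points and on "new" points.
x_train = np.sort(rng.uniform(-3, 3, 30))
y_train = np.sin(x_train) + rng.normal(0, 0.2, x_train.size)
x_new = np.sort(rng.uniform(-3, 3, 200))
y_new = np.sin(x_new) + rng.normal(0, 0.2, x_new.size)

for degree in (1, 3, 7, 12):
    coeffs = np.polyfit(x_train, y_train, degree)        # better and better approximation...
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    new_mse = np.mean((np.polyval(coeffs, x_new) - y_new) ** 2)  # ...until stability on new data suffers
    print(f"degree {degree:2d}: train MSE {train_mse:.3f}, new-data MSE {new_mse:.3f}")
```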
 
Maxim Dmitrievsky #:
What does that mean?

Have you already forgotten your own question?
It follows that you always do optimisation, even when you think you don't.
Your criteria are FFs that you improve with optimisation methods.
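
To make the claim concrete: if the approximation/stability criterion is treated as the fitness function, then any loop that tries candidates and keeps the one with the better criterion value is an optimisation. A minimal sketch (the decision tree, the max_depth range, and the synthetic data are assumptions for illustration only):

```python
import numpy as np
from sklearn.metrics import mean_squared_error
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)

# Hypothetical data split into a training part and a validation ("new data") part.
X = rng.normal(size=(600, 4))
y = X[:, 0] * X[:, 1] + rng.normal(0, 0.2, size=600)
X_tr, y_tr, X_val, y_val = X[:400], y[:400], X[400:], y[400:]

best_depth, best_fitness = None, np.inf
for depth in range(1, 16):                      # iterative search over candidates
    model = DecisionTreeRegressor(max_depth=depth, random_state=0).fit(X_tr, y_tr)
    fitness = mean_squared_error(y_val, model.predict(X_val))   # the criterion acts as the FF
    if fitness < best_fitness:                  # keep whatever improves the criterion
        best_depth, best_fitness = depth, fitness

print(f"selected max_depth={best_depth}, validation MSE={best_fitness:.4f}")
```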
 
Andrey Dik #:

Have you already forgotten your own question?
It follows that you always do optimisation, even when you think you don't.
Your criteria are FFs that you improve with optimisation methods.
I don't have a question. I wrote why a large number of features gives poor results in causal inference.

You are writing about things that are detached from that.
 
Maxim Dmitrievsky #:
I don't have a question. I wrote why a large number of features gives poor results in causal inference.

And I told you that this is just your hypothesis: "as the number of features increases, the results will get worse".

And I stated my own hypothesis. No one here in ML has tried to test it yet because of the cost of the experiments. But I remind you that some people did test it for GPT: there was a jump in the quality of the connections between heterogeneous pieces of information, to the point where it became possible to create new connections and inferences.