Machine learning in trading: theory, models, practice and algo-trading - page 3398

 
mytarmailS #:
I listened with pleasure
This one's good too https://youtu.be/EutX1Knl6v4?si=bBicqPoh474EiRIc 
 
There's a part on causal inference at the end, after the basics; I'd draw attention to that.
 
Where exactly? I'd say it's the other way round: it's at the beginning.
 
mytarmailS #:
Where exactly? I'd say it's the other way round: it's at the beginning.
I meant Alexei's video, I haven't seen yours yet. Yours is about feature selection. I don't like it very much, because I don't have many features.)
 
Maxim Dmitrievsky #:
because I don't have many features.)

That's how it works: from "many" different ones you get "not many" but good ones.

And the more "many" you have at the beginning, the richer and better the "not many" good ones are at the end.
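
A minimal sketch of that "many in, few good ones out" idea, assuming Python with scikit-learn on a synthetic dataset (the numbers and names here are illustrative, not anyone's actual pipeline):

```python
# Feature selection sketch: start with many candidate features,
# keep only the few the model finds informative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel

# 500 candidate features, only 20 of which actually carry signal
X, y = make_classification(n_samples=2000, n_features=500,
                           n_informative=20, random_state=0)

# Rank all candidates by impurity-based importance, keep the top 20
selector = SelectFromModel(
    RandomForestClassifier(n_estimators=200, random_state=0),
    threshold=-np.inf,   # disable the importance cutoff, rely on the cap below
    max_features=20,     # "not many, but good ones"
)
X_small = selector.fit_transform(X, y)
print(X_small.shape)     # (2000, 20)
```

The point being: the richer the initial candidate pool, the more the selector has to choose from at the end.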

 
mytarmailS #:

That's how it works: from "many" different ones you get "not many" but good ones.

And the more "many" you have at the beginning, the richer and better the "not many" good ones are at the end.

It's been done through GMDH, or whatever it's called.
Causal inference seems promising (though it's very hard to come up with algorithms based on it; you need a wild imagination). And language models are very difficult to train. Anyway, Google has a small one with 2 billion parameters; that one you could still try to train. One-shot methodology.
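
For the causal part, a minimal sketch of one common recipe (a T-learner estimating a treatment effect on synthetic data; this is a generic illustration, not the specific algorithm discussed here):

```python
# T-learner sketch: fit one model per "treatment" arm, then read the
# causal effect off as the difference of their predictions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 5))
t = rng.integers(0, 2, size=n)                    # binary treatment flag
# true effect of t is 1 + X[:, 1], so the average effect is 1.0
y = X[:, 0] + t * (1.0 + X[:, 1]) + rng.normal(scale=0.5, size=n)

m0 = GradientBoostingRegressor().fit(X[t == 0], y[t == 0])
m1 = GradientBoostingRegressor().fit(X[t == 1], y[t == 1])

cate = m1.predict(X) - m0.predict(X)              # per-sample effect estimate
print(cate.mean())                                # close to 1.0
```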
 
Maxim Dmitrievsky #:
It's been done through GMDH, or whatever it's called.
Causal inference seems promising (though it's very hard to come up with algorithms based on it; you need a wild imagination). And language models are very difficult to train. Anyway, Google has a small one with 2 billion parameters; that one you could still try to train. One-shot methodology.

What do LLMs have to do with it?

 
mytarmailS #:

What do LLMs have to do with it?

Because they generalise well, in theory.

The larger the training sample, the better the statistics (in general).

 
Maxim Dmitrievsky #:
Because they generalise well, in theory.

They generalise well because they're trained on datasets of billions of words, and all we have is prices.

What are you going to train a network to do when it's already been trained to talk?

And you can't train your own on prices, because you'd need a lot of GPUs.


So either I'm missing something, or, again: what do LLMs have to do with it?

 
mytarmailS #:

They generalise well because they're trained on datasets of billions of words, and all we have is prices.

What are you going to train a network to do when it's already been trained to talk?

And you can't train your own on prices, because you'd need a lot of GPUs.


So either I'm missing something, or, again: what do LLMs have to do with it?

Vorontsov talks about this in the video you watched: the concept of foundation models, starting around the one-hour mark.

I asked mine
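
For reference, "asking" a small local model one-shot style might look roughly like this, assuming Hugging Face transformers; the model name and the prompt are my assumptions, not what was actually used:

```python
# One-shot prompting sketch: a single worked example in the prompt,
# then the real query, answered by a ~2B parameter model.
from transformers import pipeline

generator = pipeline("text-generation", model="google/gemma-2b")

prompt = (
    "Classify the market regime.\n"
    "Example: returns trending up, low volatility -> regime: trend\n"
    "Query: returns flat, volatility spiking -> regime:"
)
out = generator(prompt, max_new_tokens=5, do_sample=False)
print(out[0]["generated_text"])
```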

