Machine learning in trading: theory, models, practice and algo-trading - page 2206

 
Maxim Dmitrievsky:

I've got bots running on live accounts. Do you want even more cocky armchair traders jumping in here? It's essentially density estimation, the same thing (a GMM and an autoencoder are, in essence, the same thing). It's just that you can build the encoder out of anything, including recurrent layers, i.e. it's a more flexible model. If you stretch the analogy, yes, the essence is similar.

spoiler alert for the next article. I found it later when I tried to find the reason why it works this way. And originally kinda came up with it myself :)
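To make the GMM/autoencoder analogy above concrete: both can be used as density estimators, where a low density (or a high reconstruction error) flags data unlike what the model saw in training. A minimal sketch with scikit-learn's GMM — the two-regime toy data and component count are assumptions for illustration:

```python
# Sketch: a GMM used as a density estimator, the analogy drawn above.
# The toy "two regime" data and 2 components are illustrative assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0.0, 1.0, 500),
                       rng.normal(4.0, 0.5, 500)]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(data)

# score_samples returns per-point log-density: a low value means "unlike
# the training distribution" -- the same role a high reconstruction error
# plays for an autoencoder.
log_density = gmm.score_samples(np.array([[0.0], [4.0], [20.0]]))
print(log_density)  # the point at 20.0 gets a far lower log-density
```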

Look, aren't there any ready-made packages for semi-supervised learning?

Everything there should be ready to use.
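There are indeed ready-made tools: scikit-learn, for one, ships semi-supervised estimators out of the box. A minimal sketch of the API (the synthetic data and the 10% labeled fraction are just assumptions to show the convention that `-1` marks an unlabeled sample):

```python
# Sketch: scikit-learn's built-in semi-supervised learning.
# Unlabeled samples are marked with -1 by convention.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.semi_supervised import LabelSpreading

X, y = make_classification(n_samples=300, random_state=0)
y_partial = y.copy()

# Pretend we hand-labeled only ~10% of the data.
rng = np.random.default_rng(0)
y_partial[rng.random(len(y)) > 0.1] = -1

model = LabelSpreading(kernel="knn", n_neighbors=7).fit(X, y_partial)

# transduction_ holds the labels the model assigned to every sample
recovered = (model.transduction_ == y).mean()
print(f"fraction of true labels recovered: {recovered:.2f}")
```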

 
mytarmailS:

Look, aren't there any ready-made semi-supervised learning packages?

Everything there should be ready to use.

There are. But not everything cooked is always edible.

 
Maxim Dmitrievsky:

There are. But not everything cooked is always edible.

Have you tried it?

After all, they do what you do, only automatically; plus you can choose different approaches, not just modeling the data by distributions.

 
mytarmailS:

Have you tried it?

After all, they do what you do, only automatically; plus you can choose different approaches, not just modeling the data by distributions.

I'm getting there... or rather, I'm in the process.

Why do you think I'm writing these articles? To figure it out for myself, not just for bragging. As you write, you work it out for yourself.

 
Maxim Dmitrievsky:

I'm on my way... or in the process.

Why do you think I write articles? To figure it out for myself, not for bragging. While you're writing, you work it out.

The fact is, until you explain something to someone else, you won't fully get it yourself. I try to explain to a friend of mine, a doctor, what I do, and honestly, the positive effect is that you end up understanding it yourself. And the fact that the interlocutor can't follow is all nonsense))))

 
Valeriy Yastremskiy:

The fact is, until you explain something to someone else, you won't fully get it yourself. I try to explain to a friend of mine, a doctor, what I do, and honestly, the positive effect is that you end up understanding it yourself. And the fact that the interlocutor can't follow is all nonsense))))

it is true ))

 
Maxim Dmitrievsky:

Did I understand your article correctly:

1) you take a small piece of real data and mark up the labels

2) you train a semi... model

3) you test the semi... model on a large stretch of real data

4) and so on in a circle, until you find a good semi... model that responds adequately to the large stretch of real data

 
mytarmailS:

Did I understand your article correctly:

1) you take a small piece of real data and mark up the labels

2) you train a semi... model

3) you test the semi... model on a large stretch of real data

4) and so on, until a good semi... model responds adequately to the large stretch of real data

Then I also run checks on a control section and count the number of good models across all passes. If there are a lot of them, that's a plus.
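The loop described above (label a small piece, train a semi-supervised model on it plus unlabeled data, test on a large stretch, repeat, then count good models on a control section) can be sketched roughly like this. Everything concrete here — the data, the base classifier, the 0.8 "adequate" threshold — is an assumption for illustration, not the article's actual setup:

```python
# Illustrative sketch of the iterate-and-count-good-models loop.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.semi_supervised import SelfTrainingClassifier

X, y = make_classification(n_samples=2000, random_state=1)
X_small, y_small = X[:200], y[:200]          # small hand-labeled piece
X_large, y_large = X[200:1500], y[200:1500]  # large "real data" stretch
X_ctrl, y_ctrl = X[1500:], y[1500:]          # control section

good = 0
for seed in range(5):  # "and so on in a circle"
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X_small), size=100, replace=False)  # vary the labeled piece
    X_train = np.vstack([X_small[idx], X_large])
    y_train = np.concatenate([y_small[idx], np.full(len(y_large), -1)])  # -1 = unlabeled

    model = SelfTrainingClassifier(LogisticRegression(max_iter=1000)).fit(X_train, y_train)
    if accuracy_score(y_large, model.predict(X_large)) > 0.8:    # responds adequately?
        good += accuracy_score(y_ctrl, model.predict(X_ctrl)) > 0.8  # verify on control
print("good models across passes:", good)
```

The point of the control-section check is exactly what's described above: a pass only counts as "good" if the model holds up on data it never touched during the loop.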

 
Maxim Dmitrievsky:

Then I also look at the control section and count the number of good models across all passes. If there are a lot of them, that's a plus.

Look, it seems to me this isn't really the merit of the semi... models; the real problem is bad labeling.

Our "supervised markup" is too inadequate for the market, and semi... just makes it a little more adequate, and that's it...

And if you make adequate markup, you can get even better results ...


What I mean is: train not as a classification problem, but as an optimization problem... Treat training the model as a search for the minimum/maximum of a function,

e.g. maximize profit with commission taken into account; that would be the most adequate labeling

Think about it.
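One way to read the suggestion above: instead of fixing labeling rules by hand, search for the labeling parameters that maximize a profit objective. A toy sketch, where the synthetic price series, the commission, the holding horizon, and the single threshold parameter are all made-up assumptions:

```python
# Toy sketch: labeling as an optimization problem. Search for the labeling
# parameter that maximizes profit net of commission, rather than fixing it.
import numpy as np

rng = np.random.default_rng(0)
price = 100 + np.cumsum(rng.normal(0, 1, 1000))  # synthetic random-walk price
COMMISSION = 0.5   # cost per round-trip trade (assumed)
HORIZON = 10       # bars each hypothetical trade is held (assumed)

def net_profit(threshold):
    """Label bar t as 'buy' when the move over the next HORIZON bars
    exceeds `threshold`; score the labeling by total profit net of commission."""
    fwd = price[HORIZON:] - price[:-HORIZON]   # forward move per bar
    labels = fwd > threshold
    return (fwd[labels] - COMMISSION).sum(), labels

# Crude grid search for the threshold that maximizes the objective;
# the resulting labels could then be fed to any classifier.
best_thr = max(np.linspace(0.0, 3.0, 31), key=lambda t: net_profit(t)[0])
profit, labels = net_profit(best_thr)
print(f"threshold={best_thr:.1f}, net profit={profit:.1f}, buy labels={labels.sum()}")
```

Note that labels legitimately use future data here — that is what labeling is — but the objective now includes trading costs instead of an arbitrary classification rule.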

 
mytarmailS:

Listen! It seems to me this isn't really the merit of the semi... models; the real problem is bad labeling.

Our "manual markup" is too inadequate for the market, and it just makes it a little more adequate, and that's it...

And if you make adequate markup, you can get even better results ...


What I mean is: train not as a classification problem, but as an optimization problem... Treat training the model as a search for the minimum/maximum of a function,

e.g. maximize profit with commission taken into account; that would be the most adequate labeling...

Think about it...

That's exactly the point: making adequate markup is expensive, and often you don't even know how... so semi-supervised learning may work better in many cases.

It's been tested on cat pictures and plenty of other toy examples, and it's been shown to work well. The same article from DeepMind...
