Machine learning in trading: theory, models, practice and algo-trading - page 3027

 
Maxim Dmitrievsky #:
I think it's simpler to generate a verification bot right away and check the necessary rules through a tester/optimiser.

It is better to do leaf selection in Python; the final model can be built in the terminal. But you will have to transfer the rules to the terminal, which is not so easy either. That's why it's better to do everything in Python: even if it's less accurate in some sense, the process can be seen from beginning to end. This is just an experiment for now.
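A minimal sketch of what "leaf selection in Python" might look like. The thread does not name a library, so scikit-learn's `DecisionTreeClassifier` and the selection thresholds below are assumptions for illustration:

```python
# Hypothetical sketch: select "good" leaves of a decision tree by their
# training-set precision and size (thresholds are arbitrary choices).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)

leaf_ids = tree.apply(X)                      # leaf index for every sample
selected = []
for leaf in np.unique(leaf_ids):
    mask = leaf_ids == leaf
    precision = y[mask].mean()                # share of class-1 samples in the leaf
    if precision > 0.8 and mask.sum() >= 10:  # keep only confident, populated leaves
        selected.append(int(leaf))

print("selected leaves:", selected)
```

The same per-leaf loop is where one would plug in any other selection metric before transferring the surviving rules anywhere else.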

 
mytarmailS #:
Why would you put rules in a booster???

I wrote it: it's one way to aggregate them. It essentially assigns weights, removes inconsistencies, and identifies the best instances.

You could use a simple tree. Or you could aggregate them and assign the weights yourself. I've tried all of these methods.

Do you have some other idea?

 
Aleksey Vyazmikin #:

It is better to do leaf selection in Python; the final model can be built in the terminal. But you will have to transfer the rules to the terminal, which is not so easy either. That's why it's better to do everything in Python: even if it's less accurate in some sense, the process can be seen from beginning to end. This is just an experiment for now.

Well, transferring the rules costs nothing.

Selection in Python by metrics; maybe I'll make a tester for them.

You can do a lot of things: through tree-based models, through linear models, through boostings.

Plus a feature generator through convolutions, one of the most effective. But it will take a long time to compute. It is an automatic analogue of your quantisation.

 
Aleksey Vyazmikin #:

I wrote it: it's one way to aggregate them. It essentially assigns weights, removes inconsistencies, and identifies the best instances.

You could use a simple tree. Or you could aggregate them and assign the weights yourself. I've tried applying all of these methods.

Do you have some other idea?

So statistics per rule solve this problem:
frequency, accuracy, size, probability, etc.

I don't understand why a booster is needed there.
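The per-rule statistics named above (frequency, accuracy, size) are cheap to compute if each rule is represented as a boolean mask over the training set. A toy sketch, with randomly generated rules standing in for real ones:

```python
# Hypothetical sketch: per-rule statistics over a training set.
# Each "rule" is just a boolean mask saying where it fires.
import numpy as np

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=1000)                 # binary target
rules = rng.random((5, 1000)) < 0.2               # 5 toy rules as boolean masks

stats = []
for fires in rules:
    size = int(fires.sum())                       # how many samples the rule covers
    frequency = size / len(y)                     # coverage share of the dataset
    accuracy = float(y[fires].mean()) if size else 0.0  # precision toward class 1
    stats.append((size, frequency, accuracy))

for i, (size, freq, acc) in enumerate(stats):
    print(f"rule {i}: size={size}, freq={freq:.3f}, acc={acc:.3f}")
```

Probability estimates per rule would come from the same masks, e.g. with a smoothed or cross-validated version of the accuracy above.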
 
Maxim Dmitrievsky #:

Well, transferring the rules costs nothing.

Selection in Python by metrics; maybe I'll make a tester for them.

You can do a lot of things: through tree-based models, through linear models, through boostings.

Plus a feature generator through convolutions, one of the most effective. But it will take a long time to compute. It is an automatic analogue of your quantisation.

What are "convolutions"?

 
mytarmailS #:
This is how statistics per rule solve this problem:
frequency, accuracy, size, probability, etc.

Why a booster is needed there, I don't understand.

A tree model removes inconsistencies and reveals mutual non-linear dependencies. It is not about using leaves from one tree, but from many different trees.

 
Aleksey Vyazmikin #:

A tree model removes inconsistencies and reveals mutual non-linear dependencies. It is not about using leaves from one tree, but from many different trees.

Are those rules just binary features for the model?
 
mytarmailS #:
Are those rules just binary features for the model?

That's right.
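The scheme the last few messages converge on (leaf rules from many different trees, turned into binary features and fed to a booster) could be sketched like this. The library choices and parameters are assumptions, not from the thread:

```python
# Hypothetical sketch: leaves from many trees as 0/1 rule features for a booster.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.preprocessing import OneHotEncoder

X, y = make_classification(n_samples=600, n_features=10, random_state=0)

# Many different trees supply the leaf rules.
forest = RandomForestClassifier(n_estimators=20, max_depth=3, random_state=0).fit(X, y)
leaves = forest.apply(X)                      # (n_samples, n_trees) leaf indices

# One-hot leaf membership: each column is a binary "this rule fired" feature.
binary = OneHotEncoder().fit_transform(leaves).toarray()

# The booster then weighs the rules and resolves their contradictions.
booster = GradientBoostingClassifier(random_state=0).fit(binary, y)
print("train accuracy:", booster.score(binary, y))
```

The booster's feature importances over `binary` then act as the rule weights discussed above.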

 
Aleksey Vyazmikin #:

What are "convolutions"?

Convolutions, convolutional kernels.

A convolutional kernel transform.
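A convolutional kernel transform as a feature generator for series data is likely something in the spirit of ROCKET (random convolutional kernels plus simple pooling); the thread doesn't name a method, so this NumPy sketch and all its parameters are illustrative assumptions:

```python
# Hypothetical ROCKET-style sketch: random 1-D kernels convolved with a series,
# each summarised by its max response and proportion of positive values.
import numpy as np

def kernel_features(series, n_kernels=100, kernel_len=9, seed=0):
    rng = np.random.default_rng(seed)
    feats = []
    for _ in range(n_kernels):
        kernel = rng.normal(size=kernel_len)
        conv = np.convolve(series, kernel, mode="valid")
        feats.append(conv.max())               # strongest response of this kernel
        feats.append((conv > 0).mean())        # proportion of positive values
    return np.array(feats)

# Toy price-like series: a random walk.
series = np.cumsum(np.random.default_rng(1).normal(size=256))
features = kernel_features(series)            # 2 features per kernel
print(features.shape)
```

Computing this over many windows and many kernels is why it "takes a long time to count", but each kernel's features feed straight into the rule/leaf selection discussed above.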

 
Aleksey Vyazmikin #:

That's right.

That's what you should have said in the first place.

Instead of three understandable words, a hundred incomprehensible concepts.

You sit there wondering what on earth he's talking about.