Machine learning in trading: theory, models, practice and algo-trading - page 3018

 
Aleksey Vyazmikin #:

To the question "Why?"

I don't have time to deal with it...

And anyway, I'm sure that if you think about it, this parallelization is probably unnecessary.
 
mytarmailS #:
I don't have time to deal with it... And anyway, I'm sure that if you think about it, this parallelization is probably unnecessary.

Why not? It would be convenient for me...

 
Aleksey Vyazmikin #:

Why not? It would be convenient for me...

It is better to learn how to work with data instead of shoving gigabytes of correlated rubbish across servers
 
Vladimir Perervenko #:

Folk wisdom says you can't see the forest for the trees. I wonder if you can see a tree by picking off its leaves. I'm not asking about the forest.

Is this the only algorithm you know? Or is it the most efficient? Why are you fixated on it?

It's a passing thought.

Good luck

Aren't you and Sanych from the same shop?
The frequency of meaningless comments has reached a critical level - has R run out of packages?
 
Aleksey Vyazmikin #:

It differs in that it tries not only the best predictor split but different variants near the best. Splits are made sequentially, and success is estimated at the leaf, if I understand the algorithm correctly. In a successful generation, the predictors closest to the leaf are cut off and construction is retried. I can't analyse the algorithm itself in detail - I'm not its author. But in principle this approach should, in theory, be better than randomisation.

Not a greedy splitting algorithm, but a genetic one. Well, DeepMind was also looking into this, pulling rules out of neural networks. But not much information turned up. There is an article and a ready-made model, but no inspiration to try it all. There are other implementations of extracting rules from neural networks; you can probably learn something from those.
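To make the non-greedy idea above concrete, here is a minimal sketch under assumptions of my own (the thread names no library, and every identifier below is hypothetical): each split is sampled from the top-k candidates instead of always taking the single best, a finished branch is scored by the purity of its leaf, and after an improvement the splits nearest the leaf are cut off and regrown while the root-side splits are kept.

```python
import numpy as np

rng = np.random.default_rng(0)

def gini(y):
    """Gini impurity of a 0/1 label vector."""
    if len(y) == 0:
        return 0.0
    p = y.mean()
    return 2.0 * p * (1.0 - p)

def sample_split(X, y, idx, top_k):
    """Rank candidate (feature, threshold) splits by impurity drop and
    sample one of the top_k instead of greedily taking the single best."""
    cands = []
    for j in range(X.shape[1]):
        thr = np.median(X[idx, j])
        mask = X[idx, j] <= thr
        if 0 < mask.sum() < len(idx):
            w = mask.mean()
            gain = gini(y[idx]) - w * gini(y[idx][mask]) - (1 - w) * gini(y[idx][~mask])
            cands.append((gain, j, thr))
    if not cands:
        return None
    cands.sort(key=lambda c: -c[0])
    _, j, thr = cands[rng.integers(min(top_k, len(cands)))]
    return j, thr

def grow_branch(X, y, prefix, depth, top_k=5):
    """Grow one root-to-leaf rule chain; `prefix` pins the splits near
    the root so the part near the leaf can be cut off and regrown."""
    idx = np.arange(len(y))
    branch = list(prefix)
    for j, thr in branch:
        idx = idx[X[idx, j] <= thr]
    while len(branch) < depth and len(idx) > 20:
        s = sample_split(X, y, idx, top_k)
        if s is None:
            break
        j, thr = s
        branch.append((j, thr))
        idx = idx[X[idx, j] <= thr]
    score = y[idx].mean() if len(idx) else 0.0  # success is estimated at the leaf
    return branch, score

def evolve(X, y, depth=4, generations=30, keep=2):
    """After each improvement, keep the root-side splits of the best
    branch and regrow the leaf-side ones; start from scratch otherwise."""
    best_branch, best_score = [], 0.0
    for _ in range(generations):
        branch, score = grow_branch(X, y, best_branch[:keep], depth)
        if score > best_score:
            best_branch, best_score = branch, score
    return best_branch, best_score
```

Usage: `rule, purity = evolve(X, y)` with X an (n_samples, n_features) float array and y a 0/1 label array; the result is one chain of (feature, threshold) conditions and the class-1 purity of its leaf.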
 
mytarmailS #:
It is better to learn how to work with data instead of shoving gigabytes of correlated rubbish across servers

You could have simply said that you didn't understand the topic and drew wrong conclusions; instead you now act as if you understood the essence of the problem all along and are beating a retreat.

This marginal idea - that everyone is a fool except you - repels people. Think about it.

 
Maxim Dmitrievsky #:
Not a greedy splitting algorithm, but a genetic one. Well, DeepMind was also looking into this, pulling rules out of neural networks. But not much information turned up. There is an article and a ready-made model, but no inspiration to try it all. There are other implementations of extracting rules from neural networks; you can probably learn something from those.

That's exactly what I wrote: how greedy search differs from genetic search for a tree - maybe I didn't understand the question.

I haven't heard about extracting rules from a neural network. Can you give me a link? So far I can only picture something cumbersome.

But I think neural networks here will obviously be slower than trees in terms of the speed of generating new rules.
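For reference, the most common way to "extract rules from a neural network" is a surrogate tree: fit a shallow decision tree to the network's own predictions and read the rules off its paths. A minimal scikit-learn sketch on synthetic stand-in data (my illustration; the article and model mentioned above are not identified here):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))            # stand-in feature matrix
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # stand-in labels

# Any trained network would do; a small MLP stands in for it here.
net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                    random_state=0).fit(X, y)

# The surrogate tree is fitted to the NETWORK's predictions, not to y,
# so its root-to-leaf paths read off as approximate rules of the net.
surrogate = DecisionTreeClassifier(max_depth=3).fit(X, net.predict(X))
print(export_text(surrogate))
```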

 
Aleksey Vyazmikin #:

That's exactly what I wrote: how greedy search differs from genetic search for a tree - maybe I didn't understand the question.

I haven't heard about extracting rules from a neural network. Can you give me a link? So far I can only picture something cumbersome.

But I think neural networks here will obviously be slower than trees in terms of the speed of generating new rules.

I was just summarising about the tree ) Google works, I use it myself. DeepMind usually does things very close to how I myself perceive reality.


 
Aleksey Vyazmikin #:

You could have simply said that you didn't understand the topic and drew wrong conclusions; instead you now act as if you understood the essence of the problem all along and are beating a retreat.

This marginal idea - that everyone is a fool except you - repels people. Think about it.

It's you who should understand your own topics, not someone else...
Once that gets into your head, it's already progress.

Think about it.
 
Aleksey Vyazmikin #:

That's exactly what I wrote: how greedy search differs from genetic search for a tree - maybe I didn't understand the question.

I haven't heard about extracting rules from a neural network. Can you give me a link? So far I can only picture something cumbersome.

But I think neural networks here will obviously be slower than trees in terms of the speed of generating new rules.

Your whole idea of separating "good" rules from bad ones is a complete dead end - a methodological dead end.

For some reason you think that "good" rules (trees) really are "good".

The problem is not just the uncertainty of their future behaviour: there are no rules at all that can be selected once and for all by some criterion. There are rules whose "goodness" is VARIABLE - it changes as the window moves, and a rule may well go from "good" to "bad" as the window moves. This variability is governed by the value that divides the prediction probability into classes.

By default, ML algorithms divide into classes by splitting the class prediction probability in half, but this is completely wrong. I compute the class-dividing value myself - it is never 0.5: it varies and depends on the particular predictor.
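A minimal sketch of that threshold search (my own illustration, not the poster's code; `p` holds a model's predicted class-1 probabilities): scan candidate dividing values and keep the one that maximises balanced accuracy rather than assuming 0.5.

```python
import numpy as np

def best_threshold(p, y, grid=np.linspace(0.05, 0.95, 91)):
    """Pick the class-dividing threshold that maximises balanced
    accuracy instead of assuming the default 0.5."""
    best_thr, best_score = 0.5, -1.0
    for thr in grid:
        pred = p >= thr
        tpr = pred[y == 1].mean() if (y == 1).any() else 0.0
        tnr = (~pred)[y == 0].mean() if (y == 0).any() else 0.0
        score = 0.5 * (tpr + tnr)
        if score > best_score:
            best_thr, best_score = thr, score
    return best_thr, best_score
```

On imbalanced classes - the usual case for trade signals - the returned threshold is typically far from 0.5.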

Now back to your "good" trees.

You have selected trees whose "goodness" lies close to a threshold that itself moves. That is why I argued above that the "good" trees you selected can easily become bad trees.
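A sketch of that moving-window effect, assuming "goodness" is a rule's precision over the last `window` observations (names and the 0.55 cut-off are hypothetical):

```python
import numpy as np

def rolling_goodness(fires, hits, window=500):
    """Precision of one rule over a moving window.

    fires: boolean array, True where the rule triggered
    hits:  boolean array, True where the triggered signal paid off
    """
    n = len(fires)
    out = np.full(n, np.nan)
    for t in range(window, n):
        f = fires[t - window:t]
        out[t] = hits[t - window:t][f].mean() if f.any() else np.nan
    return out

# The same rule flips between "good" and "bad" as the window moves:
# goodness = rolling_goodness(fires, hits)
# is_good_now = goodness[-1] > 0.55
```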


It's a dead end.
