Machine learning in trading: theory, models, practice and algo-trading - page 3018

To the question "Why?"
I don't have time to mess with it...
Why not? It would be convenient for me...
Folk wisdom says you can't see the forest for the trees. I wonder whether you can see a tree by picking off its leaves - I'm not even asking about the forest.
Is this the only algorithm you know? Or is it the most efficient? Why are you fixated on it?
It's a passing thought.
Good luck
It differs by trying not only the best predictor split but different variants close to the best. Splits are made sequentially, and success is evaluated at the leaf, if I understand the algorithm correctly. From a successful generation, the predictors closest to the leaf are cut off and the construction is retried. I can't analyse the algorithm itself in detail - I'm not its author. But in principle this approach should be better than pure randomisation.
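A minimal sketch of the non-greedy idea described above, assuming a binary-classification node and Gini impurity. The function names, the top_k parameter and the toy data are illustrative assumptions, not the actual algorithm being discussed:

```python
# Sketch: instead of always taking the single best split (greedy CART),
# sample one of the top-k candidate splits, so repeated runs explore
# different trees whose quality is then judged at the leaves.
import numpy as np

def gini(y):
    """Gini impurity of a binary label vector."""
    if len(y) == 0:
        return 0.0
    p = y.mean()
    return 2.0 * p * (1.0 - p)

def candidate_splits(X, y):
    """Score every (feature, threshold) split by impurity decrease."""
    n, m = X.shape
    parent = gini(y)
    scored = []
    for j in range(m):
        for t in np.unique(X[:, j])[:-1]:  # exclude max so right side is non-empty
            left = y[X[:, j] <= t]
            right = y[X[:, j] > t]
            child = (len(left) * gini(left) + len(right) * gini(right)) / n
            scored.append((parent - child, j, t))
    return sorted(scored, reverse=True)

def pick_split(X, y, top_k=5, rng=None):
    """Non-greedy choice: sample uniformly among the top_k splits."""
    rng = rng or np.random.default_rng()
    best = candidate_splits(X, y)[:top_k]
    gain, feature, threshold = best[rng.integers(len(best))]
    return feature, threshold, gain

# Toy usage: two runs may pick different splits on the same data.
X = np.random.default_rng(0).normal(size=(200, 4))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)
print(pick_split(X, y, top_k=5))
```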
It is better to learn how to work with data, rather than shoving gigabytes of correlated rubbish across servers
You could have simply said that you didn't understand the topic and drew wrong conclusions; instead you act as if you grasped the essence of the problem all along and are now beating a retreat.
This marginal idea - that everyone is a fool except you - repels people. Think about it.
Not a greedy splitting algorithm, but a genetic one. The DeepMind people also looked into this - extracting rules from neural networks - but not much information was published. There is an article and a ready-made model, but I have no inspiration to try it all. There are other implementations of extracting rules from neural networks; you can probably learn something from them.
That's what I wrote about - the difference between greedy and genetic construction for a tree; maybe I didn't understand the question.
I haven't heard about extracting rules from a neural network. Can you give me a link? So far I picture something cumbersome.
But I think neural networks here will clearly be slower than trees in terms of the speed of producing new rules.
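For contrast, here is a hedged sketch of how cheaply rules fall out of a tree model: every root-to-leaf path of a fitted decision tree is already an IF-THEN rule. The data is synthetic and the printing helper is illustrative; the neural-network rule-extraction work mentioned above is a separate, heavier line of research:

```python
# Sketch: enumerate the rules of a fitted sklearn decision tree by walking
# its internal structure; each leaf becomes one human-readable rule.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = ((X[:, 0] > 0) & (X[:, 1] < 0.5)).astype(int)  # synthetic target

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
t = clf.tree_

def print_rules(node=0, conditions=()):
    """Recursively walk the tree and print each leaf as an IF-THEN rule."""
    if t.children_left[node] == -1:  # -1 marks a leaf in sklearn trees
        cls = int(np.argmax(t.value[node][0]))
        print("IF " + (" AND ".join(conditions) or "TRUE") +
              f" THEN class={cls} (n={t.n_node_samples[node]})")
        return
    f, thr = t.feature[node], t.threshold[node]
    print_rules(t.children_left[node], conditions + (f"x{f} <= {thr:.3f}",))
    print_rules(t.children_right[node], conditions + (f"x{f} > {thr:.3f}",))

print_rules()
```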
Your whole idea of separating "good" rules from bad ones is a complete dead end, methodologically.
For some reason you think that "good" rules (trees) really are "good".
And it's not just that their future is vague - there are no rules that can be selected once and for all by some criterion. There are rules whose "goodness" VARIES as the window moves, and a rule may well go from "good" to "bad" as the window moves. That variability is governed by the value that divides the predicted probability into classes.
In ML algorithms the division into classes is standardly done by splitting the class prediction probability in half, but this is completely wrong. I compute the class-division value myself - it is never 0.5: it varies and depends on the particular predictor.
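A minimal sketch of that last point, assuming a held-out validation set and F1 as the selection metric (the model, data and metric are illustrative assumptions, not the poster's setup): tune the probability cut-off per model instead of taking 0.5.

```python
# Sketch: the probability threshold separating the classes need not be 0.5;
# here it is chosen per model by maximising F1 on validation data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 5))
y = (X[:, 0] + 0.3 * rng.normal(size=2000) > 0.7).astype(int)  # imbalanced classes

X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
proba = model.predict_proba(X_val)[:, 1]

thresholds = np.linspace(0.05, 0.95, 91)
scores = [f1_score(y_val, proba >= t) for t in thresholds]
best_t = thresholds[int(np.argmax(scores))]
print(f"best threshold = {best_t:.2f}, F1 = {max(scores):.3f} "
      f"(vs F1 at 0.5 = {f1_score(y_val, proba >= 0.5):.3f})")
```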
Now back to your "good" trees.
You have selected trees whose "goodness" lies close to a threshold that itself moves. That is why I argued above that the "good" trees you selected can easily become bad ones.
It's a dead end.