Machine learning in trading: theory, models, practice and algo-trading - page 2649
No, it works for any number of predictors. At each step the algorithm chooses the predictor and the side (left or right) from which the optimal slice is cut. Conventional decision trees do the same thing: at each step both the predictor and its cut point are chosen optimally, producing two new boxes. The only difference with PRIM is that at each step only a small, bounded slice is peeled off, which makes the process gradual, hence the word "patient" in the name.
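To make the peeling idea concrete, here is a minimal Python sketch (the function name, the peeling fraction alpha and the simple "mean target inside the box" quality measure are my own illustration, not taken from any particular package): at every step a thin quantile slice is tried on the left and on the right of each predictor, and the single slice whose removal leaves the best box is peeled off.

```python
import numpy as np

def peel_step(X, y, alpha=0.05):
    """One 'patient' peeling step: try removing a small (alpha) quantile slice
    from the left or the right of every predictor and keep the cut whose
    remaining box has the highest mean target.
    Returns (score, column, side, threshold, mask_of_kept_rows) or None."""
    best = None
    for j in range(X.shape[1]):
        lo, hi = np.quantile(X[:, j], [alpha, 1 - alpha])
        for side, mask in (("left", X[:, j] >= lo), ("right", X[:, j] <= hi)):
            if mask.sum() < 10:              # do not peel the box down to nothing
                continue
            score = y[mask].mean()           # box quality = mean target inside it
            if best is None or score > best[0]:
                best = (score, j, side, lo if side == "left" else hi, mask)
    return best

# toy usage: peel repeatedly; the box shrinks by a small slice each step
rng = np.random.default_rng(0)
X, y = rng.normal(size=(500, 3)), rng.normal(size=500)
box = np.ones(len(y), dtype=bool)            # rows currently inside the box
for _ in range(20):
    step = peel_step(X[box], y[box])
    if step is None:
        break
    idx = np.where(box)[0]
    box[idx[~step[4]]] = False               # drop the peeled slice from the box
```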
Personally, I find another modification of the standard approach interesting, where each box is cut into not two but three new boxes. I'll provide some thoughts on this sometime.
No, it works for any number of predictors. At each step the algorithm chooses the predictor and the side (left or right) from which the optimal slice is cut. Conventional decision trees do the same thing: at each step both the predictor and its cut point are chosen optimally, producing two new boxes.
I wasn't claiming otherwise: it does work. It's just a question of implementation: if you arbitrarily combine two predictors that each have good bounds on their own, the box won't come out, and that's the whole point. That's why I assumed the search is done over pairs at once.
PRIM differs only in that at each step a small, bounded piece is peeled off, which makes the process gradual, hence the word "patient" in the name.
Peeled off: what does that mean? A small remainder left after a split, roughly speaking, close to the root of the tree?
Personally, I find another modification of the standard approach interesting, where each box is cut not into two but into three new boxes. I'll share some thoughts on this sometime.
Why not 5? :) I'm all for experimentation!
Maybe an implementation and a test would be better
I don't think that will happen. So far it is only a vague, rambling conjecture.
Suppose we know for certain from somewhere that the important rule is A<x1<B, but at the moment we only have the rule a<x1<b, where a<A and B<b. The good rule can be reached in no fewer than two steps, for example 1) a<x1<B and then 2) A<x1<B. In practice this may mean that a split on some other predictor accidentally wedges itself between these two steps, and the important rule simply never appears. Therefore the number of pieces a box is cut into at each step need not be fixed; it could be determined from optimality considerations. Then in special cases (much like the sine, which in wartime can reach four) their number could even be five) The tree, of course, stops being binary.
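A rough sketch of that guess, purely as my own illustration (the SSE criterion, the coarse threshold grid and the complexity penalty are all arbitrary choices, not an existing algorithm): the best binary cut of a predictor is compared with the best ternary cut, and the ternary cut is kept only when its gain outweighs the penalty for the extra piece.

```python
import numpy as np
from itertools import combinations

def sse(y):
    """Sum of squared errors around the mean; 0 for an empty piece."""
    return ((y - y.mean()) ** 2).sum() if len(y) else 0.0

def best_split(x, y, n_parts, penalty=0.0):
    """Best split of predictor x into n_parts contiguous pieces by total SSE,
    searching thresholds on a coarse quantile grid. Returns (score, thresholds)."""
    grid = np.unique(np.quantile(x, np.linspace(0.1, 0.9, 9)))
    best = (np.inf, None)
    for ths in combinations(grid, n_parts - 1):
        edges = (-np.inf, *ths, np.inf)
        score = sum(sse(y[(x > lo) & (x <= hi)])
                    for lo, hi in zip(edges[:-1], edges[1:]))
        score += penalty * (n_parts - 2)      # charge each extra piece
        if score < best[0]:
            best = (score, ths)
    return best

# toy data where the true rule is the middle interval -1 < x < 1
rng = np.random.default_rng(1)
x = rng.uniform(-3, 3, 1000)
y = np.where((x > -1) & (x < 1), 1.0, 0.0) + rng.normal(0, 0.1, 1000)
two = best_split(x, y, 2)
three = best_split(x, y, 3, penalty=5.0)
chosen = three if three[0] < two[0] else two  # keep the ternary cut only if it pays off
print(chosen)
```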
Peeled off: what does that mean? A small remainder left after a split, roughly speaking, close to the root of the tree?
There is probably no intention of building a nice tree at all; the goal is just to cut out a "good piece") I am close to the idea that one should cut out pieces suitable for trading rather than play the tiler who has to pave the whole space without gaps) This is quite in line with the old saying "don't try to be in the market all the time". Cases where the predictors don't fall into the "good pieces" are simply ignored, so in the end trees aren't really needed.
Yes, abandoning the tree means fiddling with possible intersections between boxes, but if it works, the trees won't be missed)
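To illustrate "cut out a good piece and ignore the rest", a tiny sketch (the column indices, bounds and toy data are invented): a single box rule is scored by its coverage and by the mean outcome inside it, and everything outside the box is simply left untraded.

```python
import numpy as np

def box_report(X, y, bounds):
    """bounds: {column_index: (low, high)}. Returns the coverage of the box
    and the mean outcome inside it; samples outside the box stay untraded."""
    inside = np.ones(len(y), dtype=bool)
    for j, (lo, hi) in bounds.items():
        inside &= (X[:, j] > lo) & (X[:, j] <= hi)
    coverage = inside.mean()
    return coverage, (y[inside].mean() if inside.any() else np.nan)

# toy example with two predictors and a hypothetical "good" region
rng = np.random.default_rng(2)
X = rng.normal(size=(2000, 2))
y = np.where((X[:, 0] > 0.5) & (X[:, 1] < 0.0), 0.3, -0.01) + rng.normal(0, 0.2, 2000)
cov, mean_in = box_report(X, y, {0: (0.5, np.inf), 1: (-np.inf, 0.0)})
print(f"trade {cov:.0%} of the time, mean outcome inside the box {mean_in:.3f}")
```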
https://xgboost.readthedocs.io/en/stable/tutorials/feature_interaction_constraint.html
It seems to be something more serious than my small experiments) It apparently has to do with taking into account a dependency structure between predictors that is known in advance.
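The linked page describes the interaction_constraints parameter. A minimal sketch of passing it through the sklearn wrapper (the feature groups and toy data below are made up):

```python
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 5))
y = X[:, 0] * X[:, 1] + X[:, 2] + rng.normal(0, 0.1, 1000)

# features 0 and 1 may interact only with each other; 2, 3, 4 only among themselves
model = xgb.XGBRegressor(
    n_estimators=200,
    max_depth=4,
    tree_method="hist",
    interaction_constraints="[[0, 1], [2, 3, 4]]",
)
model.fit(X, y)
```

As I read the tutorial, this does not build separate trees per group: every tree still sees all features, but the features used along any single branch of a tree must come from one of the listed groups.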
Use symbolic regression, design exactly what you want, not what other algorithms offer.
You know R: there is a package, there are examples, everything has already been done before us and for us.
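I don't know which package is meant here; as one possible Python illustration (gplearn, with arbitrary settings), symbolic regression evolves an explicit formula instead of a tree:

```python
import numpy as np
from gplearn.genetic import SymbolicRegressor

rng = np.random.default_rng(4)
X = rng.uniform(-1, 1, size=(500, 2))
y = X[:, 0] ** 2 - X[:, 1] + rng.normal(0, 0.05, 500)

# evolve an explicit expression; the function set and sizes are arbitrary choices
est = SymbolicRegressor(
    population_size=1000,
    generations=20,
    function_set=("add", "sub", "mul", "div"),
    parsimony_coefficient=0.001,
    random_state=0,
)
est.fit(X, y)
print(est._program)   # best evolved formula, e.g. sub(mul(X0, X0), X1)
```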
I assume the interaction constraints simply build 2 trees, one from features 1 and 2 and one from features 3, 4 and 5. If there are 10 sets, then there are 10 trees, and so on.
I'm already working with symbolic regression. No grail yet)