as you wish :)
but this is trivial stuff. Importance depends on the variance of the feature (almost always, except for very simple models). A forest doesn't perform any transformations on the features, doesn't multiply or divide them by each other, etc.; it just scatters their values across nodes, so there are no interactions, only separation.
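A small sketch of that point (my own illustration, not from the thread, assuming scikit-learn): because a tree only compares feature values against thresholds, a strictly monotonic transform of a feature changes the thresholds but not the resulting partition of the training data.

```python
# Illustration: a forest only splits on thresholds, it never transforms features,
# so a monotonic transform (here exp) leaves the learned partition unchanged.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Strictly increasing transform of one feature: same ordering of values,
# so every candidate split induces the same left/right partition.
X2 = X.copy()
X2[:, 0] = np.exp(X2[:, 0])
rf2 = RandomForestClassifier(n_estimators=100, random_state=0).fit(X2, y)

print(rf.score(X, y), rf2.score(X2, y))  # training scores should match
```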
The order of splitting (how deep a predictor ends up in the tree) matters, and it depends on the other predictors, including those that split better...
If it's a tree with three splits, maybe; if it's a forest, forget it.
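To make the contrast concrete (my own sketch, assuming scikit-learn): a depth-limited tree can literally be printed as a handful of if/else rules, while a forest of hundreds of such trees cannot be read that way.

```python
# A shallow tree is human-readable; a forest of hundreds of trees is not.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
rules = export_text(tree)
print(rules)  # a few if/else rules with thresholds and leaf classes
```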
or find confirmation of your thoughts in the literature, in research by professionals. Otherwise, why do something the model is not designed for at all and invent some nonsense?
You are better with the literature, especially foreign literature. I have heard the thought that greedy tree construction can go wrong from various speakers on YouTube. I don't have authoritative studies, only the results of my own experiments, and it is possible that I am misinterpreting them. But then what is the point of this article, if you can feed in all predictors at once and get an almost perfect model?
To remove the noise from the unnecessary ones, to lighten the model.
But the point of the article is actually different; it's in the title.
So you make a random selection of predictors and choose the best model from it, but it is not known how close that is to the absolute best. I propose a more reasoned selection which, hypothetically, can come closer to the absolute model. The goals are the same, but the tools are slightly different.
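The two approaches being contrasted can be sketched side by side (a toy illustration of mine, assuming scikit-learn; the data and sizes are made up): random subsets of predictors scored blindly, versus a greedy forward selection that adds whichever predictor improves the score most.

```python
# Contrast: random predictor subsets vs. greedy, score-guided forward selection.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 10))
y = (X[:, 0] - X[:, 1] > 0).astype(int)  # only features 0 and 1 are informative

def score(cols):
    rf = RandomForestClassifier(n_estimators=50, random_state=0)
    return cross_val_score(rf, X[:, cols], y, cv=3).mean()

# "Random selection": try a few random 3-feature subsets, keep the best.
subsets = [sorted(rng.choice(10, 3, replace=False).tolist()) for _ in range(10)]
best_random = max(subsets, key=score)

# "More reasoned selection": greedily add the feature that helps the score most.
selected = []
for _ in range(3):
    candidates = [c for c in range(10) if c not in selected]
    selected.append(max(candidates, key=lambda c: score(selected + [c])))

print(best_random, sorted(selected))
```

The greedy pass tends to pick up the informative features quickly, while a small random search may miss them entirely.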
There is no random selection of predictors.
What do you mean by "a more reasoned selection"?
I realise that I misread one word in the article and misunderstood it. It says "the Monte Carlo method, or random sampling of target variables, should be considered separately", and I somehow took that to be about predictors. I apologise.
A more valid resampling would be one based on past results, with reinforcement, so to speak.
I don't get the point, probably not relevant to the topic of the article.
I think I've already written about importance.
New article Applying Monte Carlo method in reinforcement learning has been published:
Author: Maxim Dmitrievsky
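For readers new to the term in the title, here is a minimal first-visit Monte Carlo value-estimation sketch of my own (the toy chain environment is illustrative, not the setup from the article): states are visited by rolling out whole episodes, and each state's value is the average of the discounted returns observed after its first visit.

```python
# First-visit Monte Carlo value estimation on a toy random-walk chain:
# states 0..2, terminal state 3 gives reward 1; moves are +1/-1, reflecting at 0.
import random

random.seed(0)
GAMMA = 0.9

def episode():
    s, traj = 0, []
    while s != 3:
        s_next = max(0, s + random.choice([-1, 1]))
        traj.append((s, 1.0 if s_next == 3 else 0.0))  # (state, reward)
        s = s_next
    return traj

returns = {0: [], 1: [], 2: []}
for _ in range(2000):
    traj = episode()
    # index of the first visit to each state
    first = {}
    for i, (s, _) in enumerate(traj):
        first.setdefault(s, i)
    # discounted return from every step, computed backwards
    G, Gs = 0.0, [0.0] * len(traj)
    for i in range(len(traj) - 1, -1, -1):
        G = traj[i][1] + GAMMA * G
        Gs[i] = G
    for s, i in first.items():
        returns[s].append(Gs[i])

V = {s: sum(g) / len(g) for s, g in returns.items()}
print(V)  # states closer to the terminal reward get higher estimated values
```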
Thank you. Is it possible to run the training on the GPU instead of the CPU?
Yes, if you rewrite all the logic (including the random forest) as OpenCL kernels :) Also, a random forest has poor GPU feasibility and parallelism.