
Yes, a virtual tester is also in the plans, but for now other things need to be refined first: automatic predictor selection and predictor reduction are the most important right now, so that the model does not overfit on history so much.
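The "predictor reduction" idea above could be sketched roughly like this (a hypothetical illustration, not the author's actual method; scikit-learn and the importance-threshold rule are my assumptions, since the thread's own code is MQL5/AlgLib):

```python
# Hedged sketch: reduce predictors by dropping features whose forest
# importance is below the mean importance. Synthetic data for illustration.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=30,
                           n_informative=5, random_state=3)
rf = RandomForestClassifier(n_estimators=200, random_state=3).fit(X, y)

# Keep only predictors with above-average importance.
keep = rf.feature_importances_ > rf.feature_importances_.mean()
X_reduced = X[:, keep]
print(f"kept {keep.sum()} of {X.shape[1]} predictors")
```

With fewer, more informative predictors, a forest trained on `X_reduced` has less room to memorise history, which is the overfitting concern raised above.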
Don't be cheeky: if you are writing a new class anyway, use something like this. The number of indicators and the number of trees are set by variables - it makes life much easier later when experimenting with different values.
This is not the main thing, just a small indulgence to my favourite so there is less to finish, but I will definitely try it.
That is roughly how it is done; this part is already available.
If it's not too much trouble, one more question.
Just thinking out loud: the first set of passes trains forest 1, the second set trains forest 2, and so on. We get an ensemble of forests with different degrees of training: the first possibly overfitted, the last undertrained. In your opinion, is it worth pursuing this direction?
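The ensemble-of-differently-trained-forests idea could be sketched as follows (a minimal illustration under my own assumptions: scikit-learn, synthetic data, and tree depth used as a stand-in for "degree of training" - deep trees tend to overfit, shallow ones to underfit):

```python
# Sketch: a committee of forests with different "degrees of training",
# approximated here by different maximum tree depths.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# None = fully grown (possibly overfitted), 3 = shallow (possibly underfitted).
depths = [None, 8, 3]
forests = [RandomForestClassifier(n_estimators=100, max_depth=d,
                                  random_state=0).fit(X_tr, y_tr)
           for d in depths]

# Committee decision: average the predicted class probabilities.
avg_proba = np.mean([f.predict_proba(X_te) for f in forests], axis=0)
committee_acc = (avg_proba.argmax(axis=1) == y_te).mean()
print(f"committee accuracy: {committee_acc:.3f}")
```

Comparing `committee_acc` against each individual forest's test accuracy is one way to check empirically whether the mixture of over- and undertrained models actually helps.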
To do this, you need to understand the mechanics of why it should work. Without such an idea, you are just shooting in the dark.
For example, if all the forests are overfitted on the training examples and do not work on the OOB data, I see no point: many bad solutions will not combine into a good one, except by chance or unless it is boosting. Committees are built mainly to compensate for the infrequent errors of a particular model by averaging the results of several, but all the models should be of reasonably good quality.
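The overfitting symptom described here - good on training examples, bad on OOB - can be checked directly, since a random forest provides an out-of-bag estimate for free. A hedged sketch, again using scikit-learn rather than the AlgLib implementation discussed in the thread:

```python
# Sketch: compare training accuracy against the out-of-bag (OOB) estimate.
# A large gap between the two is the overfitting symptom described above.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=1)
rf = RandomForestClassifier(n_estimators=200, oob_score=True,
                            bootstrap=True, random_state=1).fit(X, y)

train_acc = rf.score(X, y)       # accuracy on the examples the forest saw
oob_acc = rf.oob_score_          # accuracy on samples left out of each tree's bootstrap
print(f"train accuracy: {train_acc:.3f}, OOB accuracy: {oob_acc:.3f}")
```

If every candidate model in the committee shows a near-perfect training score and a poor OOB score, averaging them is unlikely to help, which is the point made above.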
Good afternoon (or night), sorry for my intrusiveness, but the topic of the forest of trees was brought up by you, no judgement...
I tried to look at the implementation of the forest in AlgLib. The training function initialises arrays and passes control to a special function that actually performs the training. It is relatively easy to organise real-time retraining, but this implementation does not (as far as I can judge - parsing someone else's code that is poorly documented in terms of the algorithm is a real pleasure) prune branches the way the CART tree-building algorithm does. So there will be an overfitting problem. That is, we train the model, retrain it in real time as if prolonging the period of its efficiency, then run a new optimisation. Implementing pruning to allow full retraining is time-consuming, and would at best interest the MT5 developers at some point in the future (they have included the library in the delivery set). And the traditional question: is there any sense in such time-limited additional training?
Pruning is interesting in itself, but it is not here, yes. As for retraining the forest, I don't know how useful an idea it is, because retraining it completely from scratch does not take long either. Unless retraining is planned very often - but then it turns out the algorithm cannot work for a long period... a two-way street.
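For reference, the CART-style pruning that is missing from the AlgLib forest looks roughly like this in libraries that do implement it. A sketch using scikit-learn's cost-complexity pruning (my choice of library and of a mid-range pruning strength, purely for illustration):

```python
# Sketch: CART cost-complexity pruning on a single decision tree.
# A larger ccp_alpha prunes away more branches, trading training fit
# for a simpler, less overfitted tree.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=2)

# Effective alphas at which subtrees would be pruned away.
path = DecisionTreeClassifier(random_state=2).cost_complexity_pruning_path(X, y)

full = DecisionTreeClassifier(random_state=2).fit(X, y)
alpha = path.ccp_alphas[len(path.ccp_alphas) // 2]   # a mid-range alpha
pruned = DecisionTreeClassifier(random_state=2, ccp_alpha=alpha).fit(X, y)

print(f"nodes: full={full.tree_.node_count}, pruned={pruned.tree_.node_count}")
```

In practice the alpha would be chosen by cross-validation rather than taken from the middle of the path; the point is only to show what the pruning step the thread discusses actually removes.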
I think 1-2 weeks for the Russian version, and then they will translate it.
Dear Maxim Dmitrievsky,
Could you please let us know whether you have published your next article about the implementation of a Random Decision Forest with different agents and without fuzzy logic, which you mentioned previously?
Thank you very much
Hi Maxim Dmitrievsky,
How soon will you finish the next article about the randomised decision forest?
Or have you already published it? If yes, could you provide a link?
Thank you very much
Hi, it is not ready yet. When it is finalised, I will email you.