Probably, the larger the training result file, the higher the probability of overfitting.
If the sample is unrepresentative, which is always the case in forex, then yes. It can be reduced a little by lowering the regularisation parameter.
Also, the problem of searching over targets has not been solved yet; combined with the search over inputs, that should give more interesting results.
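To illustrate the regularisation point, here is a minimal sketch. sklearn's forest stands in for the Alglib forest used in the article, and `max_depth`/`min_samples_leaf` are sklearn's knobs, not Alglib's; the idea is just that constraining the trees shrinks the train/test gap on a noisy, weakly representative sample.

```python
# Sketch: how constraining a random forest reduces overfitting.
# sklearn stands in for the Alglib forest from the article; the
# parameter names here are sklearn's, not Alglib's.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
y = X[:, 0] + 0.5 * rng.normal(size=300)   # weak signal, plenty of noise

X_tr, y_tr, X_te, y_te = X[:200], y[:200], X[200:], y[200:]

# Unconstrained forest: deep trees memorise the training noise.
loose = RandomForestRegressor(n_estimators=50, random_state=0).fit(X_tr, y_tr)
# Constrained ("regularised") forest: shallow trees, large leaves.
tight = RandomForestRegressor(n_estimators=50, max_depth=3,
                              min_samples_leaf=20, random_state=0).fit(X_tr, y_tr)

gap_loose = loose.score(X_tr, y_tr) - loose.score(X_te, y_te)
gap_tight = tight.score(X_tr, y_tr) - tight.score(X_te, y_te)
print(gap_loose, gap_tight)   # the train/test R^2 gap shrinks when constrained
```

The constrained forest also serialises to a much smaller file, which connects to the file-size discussion below.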
The closer the size of the training result file is to the size of the price history, the less the model is actually doing.
For example, if the training file is equal in size to the OHLC file, the model is simply the history itself. So it is good when the training file (the part of it actually used) is orders of magnitude smaller than the history.
That is the problem with random forest: the files are always large and cannot be reduced. You could use a linear model instead; then the file would contain only the regression coefficients, and there would be less overfitting.
It was RF specifically that interested me from the start. But it is too fond of overfitting under any circumstances.
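The file-size contrast above can be made concrete. A hedged sketch (pickled sklearn models as a stand-in for the library's own save format, which this code does not reproduce): a linear model stores only its coefficients, while a forest must store every node of every tree, so its serialised size grows with the data.

```python
# Sketch: serialised size of a forest vs. a linear model.
# pickle + sklearn are stand-ins; the article's library saves its
# own format, which is not reproduced here.
import pickle
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 4))
y = X @ np.array([1.0, -2.0, 0.5, 0.0]) + 0.1 * rng.normal(size=2000)

forest = RandomForestRegressor(n_estimators=100, random_state=1).fit(X, y)
linear = LinearRegression().fit(X, y)

forest_bytes = len(pickle.dumps(forest))
linear_bytes = len(pickle.dumps(linear))
print(forest_bytes, linear_bytes)   # forest file dwarfs the coefficient file
```

The forest's size here scales with the number of training rows (deeper trees, more leaves), which is exactly why it cannot be shrunk without constraining the trees.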
The point is that it would be good to save only the information actually used when learn=false. If there is a lot of it, the work is almost wasted.
As an analogy: saving the BestInterval data. If there are few entries, you can still inspect them without any tricks; if there are many, they are only good for pictures.
Well, yes, especially if there is more explanatory information than there is information to explain (that's a joke :)
I will offer other variants of the library later.
I tested it; my impression is mixed. I tested it on a custom chart generated from the Weierstrass function by formula.
In theory, on this custom chart RandomForest should have found entry points very close to the ZigZag's, or at least have no losing orders at all. On the H1 timeframe the periodicity is clearly visible, and RF did sort of find this pattern, but losing orders are still present.
Earlier I tested the old GoldWarrior Expert Advisor (found on the English forum) on the same data in MT4. It is a ZigZag-based advisor, and in the MT4 optimiser, on all timeframes up to M15, it clearly finds the patterns, with all orders exclusively in profit.
I also tested an indicator-based Expert Advisor on the crossing of regression lines (alas, it was made to order, so I cannot provide the code), and in the optimiser this Expert Advisor quickly found the regularities in the Weierstrass function.
Why these examples? If primitive methods can find the regularities, then machine learning is all the more obliged to find them.
With all due respect to the author, the result is doubtful; or rather, the example of working with RandomForest is excellent, but there is still room for effort ;)
PS: trained from 2000.01.01 to 2001.01.01, tested from 2001.01.01 to 2002.01.01.
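For anyone wanting to reproduce this kind of test, here is a minimal sketch of a Weierstrass-function synthetic series. The tester's exact parameters are unknown; `a=0.5`, `b=3` and the 100.0 base level are illustrative choices (for `a*b >= 1` the function is continuous but nowhere differentiable by Hardy's condition).

```python
# Sketch: a synthetic "price" series from the Weierstrass function.
# Parameters a=0.5, b=3 and the base level 100.0 are assumptions,
# not the tester's actual settings.
import math

def weierstrass(x, a=0.5, b=3, n_terms=30):
    """Truncated Weierstrass sum: sum of a^n * cos(b^n * pi * x)."""
    return sum(a ** n * math.cos(b ** n * math.pi * x) for n in range(n_terms))

# Build a close-price-like series on a fine grid around a base level.
closes = [100.0 + weierstrass(t / 500.0) for t in range(1000)]
print(min(closes), max(closes))   # bounded within 100 +/- 2 since sum(a^n) < 2
```

Such a series is deterministic and strongly periodic, which is what makes it a fair "the model must find something here" benchmark.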
aha, thank you!
I spent 20 minutes reading your library but still didn't figure it out; I'm not attentive today, and the goal was to test the author's code to see what RandomForest sees.
I updated my script and attached it above again.
No, it doesn't work like that right now. It does not look for any harmonic patterns. The outputs are sampled randomly and very often, and then it tries to approximate that policy as best it can. For it to approximate any patterns, the outputs would have to be sampled according to some other logic, perhaps that same zigzag.
This is all easily changed in the reward function. I just don't have enough time right now to run hundreds of experiments. Maybe you can work something out through the Optimiser.
In any case, you need to define some range of conditions within which to search for the best variant, because the number of variants is infinite.
RL algorithms:
Libraries based on the article "Random decision forest in reinforcement learning"
Author: Maxim Dmitrievsky