There will be more experiments with different variants...
I am trying these algorithms with a set of indicators and a mixture (indicators + closing prices). If possible, and in line with your research, implement an initial rejection of correlated data, i.e. a correlation check. With your method, two closely correlated indicators will both be selected (if they influence the result enough), although one of them should give way to an independent indicator.
It will be possible to try that. At the moment, the recursive and logistic-recursive versions (logit regression instead of RDF) implement selection of price increments: given an array of predictors of, say, 1000 closing prices, the zero bar is divided by each subsequent price value, yielding a large set of increments with different lags, and the best of them are selected. So feeding in different oscillators like momentum or RSI with various periods does not make much sense in this case.
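To make the increment idea concrete, here is a minimal sketch - not the author's code: the function name, the lag range, the toy random-walk data, and scoring by simple correlation with the next-bar increment are all my assumptions about one plausible way to "select the best of them".

```python
import numpy as np

def score_lags(close, max_lag=20):
    """For each lag, build the increment close[t] / close[t - lag] and score it
    by |correlation| with the next-bar increment (one plausible selection rule)."""
    scores = {}
    for lag in range(1, max_lag + 1):
        x = close[lag:-1] / close[:-lag - 1]   # increment over `lag` bars, known at bar t
        y = close[lag + 1:] / close[lag:-1]    # next-bar increment (the target)
        scores[lag] = abs(np.corrcoef(x, y)[0, 1])
    return scores

# toy usage on a synthetic, strictly positive price series
rng = np.random.default_rng(0)
close = 1000.0 + np.cumsum(rng.normal(size=1000))
scores = score_lags(close)
print(sorted(scores, key=scores.get, reverse=True)[:5])   # best lags first
```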
Might as well give it a try...
Correct me if I am wrong in my reasoning (on prices):
Several posted methods use something like:
With selection by minimum error. This can be seen as a test of correlation with the result.
Let's say there is a wave process (corridor, wedge, etc.). 100 values on the H1 timeframe is ~4 days, so there is a high probability of more than one wave in the interval. Then we have 2 (or more) correlated points: if one of them strongly influences the result, the second will pass this test as well. There are also half-waves with negative correlation. As a result, of ten readings selected for further construction, only 2-3 are independent (the rest are strongly correlated with them and add little to recognition). We first need to select readings that correlate little with each other (computing this for all pairs is not a problem, but how to cut a piece of ham out of this pig I have little idea yet - a possible sketch follows below).
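One possible way to make that first cut - a sketch, not from the thread: rank readings by relevance to the result, then greedily keep only those whose pairwise correlation with everything already kept stays below a threshold (0.7 here is an arbitrary choice).

```python
import numpy as np

def select_low_correlated(X, target, max_pair_corr=0.7):
    """Greedy pre-selection: walk features in order of |corr with target|,
    keep one only if its |corr| with every already-kept feature stays
    below max_pair_corr."""
    n_feat = X.shape[1]
    relevance = np.array([abs(np.corrcoef(X[:, j], target)[0, 1]) for j in range(n_feat)])
    order = np.argsort(relevance)[::-1]        # most relevant first
    kept = []
    for j in order:
        if all(abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) < max_pair_corr for k in kept):
            kept.append(j)
    return kept

# toy check: feature 1 is a near-copy of feature 0 and should be dropped
rng = np.random.default_rng(1)
f0 = rng.normal(size=500)
X = np.column_stack([f0, f0 + 0.05 * rng.normal(size=500), rng.normal(size=500)])
y = f0 + 0.5 * X[:, 2]
print(select_low_correlated(X, y))   # keeps one of the twin features plus the independent one
```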
As indirect confirmation, serious textbooks state that tree boosting gives a result a couple of percent worse than a random forest (bagging). However, this depends strongly on the implementation and the internal tree-building algorithms (in the result cited, both methods used CART).
As a result, the efficiency of a single method depends strongly on the choice of the test period. By the way, the channel slope will not completely solve this problem.
Follow-up: maybe we should use the same logit regression, or some other method, instead of averaging the agents' readings? In theory, that is how it should work.
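What replacing the averaging with a logit meta-model might look like - a toy stacking sketch with synthetic "agents" of my own invention; in practice the meta-model should be fit on out-of-fold predictions, which this sketch skips for brevity.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Agents' class-1 probabilities (rows = samples, columns = agents);
# the agents here are synthetic, with deliberately different skill levels.
rng = np.random.default_rng(2)
y = rng.integers(0, 2, size=400)
agent_preds = np.column_stack([
    np.clip(y + rng.normal(0, s, size=400), 0, 1)
    for s in (0.3, 0.5, 0.8)
])

# Plain averaging of agent readings (the baseline being questioned above)
avg_vote = agent_preds.mean(axis=1) > 0.5

# Stacking: logit regression learns how much to trust each agent.
# NB: fit on out-of-fold predictions in real use, not in-sample as here.
meta = LogisticRegression().fit(agent_preds, y)
stacked = meta.predict(agent_preds)

print("averaging accuracy:     ", (avg_vote == y).mean())
print("logit-stacking accuracy:", (stacked == y).mean())
```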
You can use anything, but in theory some things are not obvious at all, and it is not clear whether they are worth spending time on (as is the whole approach in general). At the very least it will be a manual on how not to do it :)
On multicollinearity of predictors - I agree. It is necessary to build a covariance matrix and make the selection based on it; I will think later about how best to do that.
I tested it, and my impression is mixed. I tested on a custom chart generated by the Weierstrass function formula.
In theory, on this custom chart RandomForest should have found entry points very close to ZigZag's, or at least have no losing orders at all - on the H1 timeframe the periodicity is clearly visible. RF did, sort of, find this pattern, but losing orders are present as well.
Earlier I tested the old GoldWarrior Expert Advisor (found on the English forum) in MT4 on the same data - a ZigZag-based advisor; in the MT4 optimiser, on all timeframes up to M15, it clearly finds the patterns, and all orders are exclusively in profit.
I also tested an indicator Expert Advisor based on the crossing of regression lines (alas, it was made to order, so I cannot provide the code), and in the optimiser this Expert Advisor quickly found the regularities in the Weierstrass function.
Why these examples? Because if primitive methods can find the regularities, then machine learning is all the more obliged to find them.
With all due respect to the author, the result is doubtful - or rather, the example of working with RandomForest is excellent, but there is still room to apply your efforts ;).
PS: trained from 2000.01.01 to 2001.01.01, tested from 2001.01.01 to 2002.01.01.
PS: the script for the custom chart is attached; the Symbol library is in the KB.
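The attached script itself is not reproduced here. For reference, the classic Weierstrass sum looks like this - the attachment's exact formula, parameters, and price scaling are unknown to me, so the values below are guesses:

```python
import numpy as np

# Classic Weierstrass sum W(x) = sum over n of a^n * cos(b^n * pi * x),
# with 0 < a < 1; parameters here are illustrative, not the attachment's.
def weierstrass(x, a=0.5, b=7, n_terms=12):
    return sum(a**n * np.cos(b**n * np.pi * x) for n in range(n_terms))

# Turn it into a price-like series for a custom chart (arbitrary scale/offset)
x = np.linspace(0.0, 2.0, 5000)
prices = 1.1000 + 0.01 * weierstrass(x)
print(prices[:5])
```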
Cracked Weierstrass in no time with a slightly adapted version; the whole plot is out of sample. I didn't see the point in going any further - it's already obvious. But it amused me :)
It won't work with the market, of course. Thanks for the script, by the way - the test was very useful, because at first I could not tell whether there were errors in the logic; on real quotes the results are much more modest.
A linear model copes just as easily, i.e. even a neural network is not needed to predict this function (a quick numerical check is sketched below).
Reverse the trades, to check.
As for the presence or absence of losing trades, the question is purely rhetorical - a trade-off between accuracy and overfitting.
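A quick way to check the "linear model copes" claim - a sketch reusing the Weierstrass sum from above. The AR order of 24 is chosen because a sum of 12 uniformly sampled cosines satisfies an order-24 linear recurrence exactly, so an ordinary least-squares autoregression should fit almost perfectly.

```python
import numpy as np

# Same illustrative Weierstrass sum as in the earlier sketch
def weierstrass(x, a=0.5, b=7, n_terms=12):
    return sum(a**n * np.cos(b**n * np.pi * x) for n in range(n_terms))

x = np.linspace(0.0, 2.0, 4000)
w = weierstrass(x)

p = 24                                               # AR order: 2 per cosine term
rows = np.array([w[i:i + p] for i in range(len(w) - p)])
targets = w[p:]
coef, *_ = np.linalg.lstsq(rows, targets, rcond=None)  # plain linear model
pred = rows @ coef
print("in-sample R^2:", 1 - np.var(targets - pred) / np.var(targets))
```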
;) ... I've been saying this for about half a year.
I am "chewing" on the maths; I have some non-standard ideas - non-smooth analysis - and, to distract myself, I was surprised to find quite an interesting read in "When Genius Failed" by R. Lowenstein.
I am very interested in your code; if you can, give me a sneak peek in PM.
Non-smooth and non-fluffy? :) This evening I will collect it into one file and share the common project... maybe we will do something together.
Regarding building a covariance matrix for the multicollinearity of predictors (above): I may be knocking on an open door, but since there was no...
They recommend the following preprocessing before converting to the [0,1] range:
1. Removal of periodicity - done in periodicityArrays(): a profile composed by day of week and hour over a month is subtracted from the series (though I only did it for the H1 timeframe, to try it out).
2. Removal of the linear trend in linearTrendArrays(), as recommended: a[i] = a[i] - a[i+1].
3. And here is where the autocorrelation check should go (not implemented yet).
Predictability after steps 1 and 2 has improved significantly. The code is attached; a rough Python restatement of steps 1 and 2 follows below.
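The attached code is not shown in the thread; here is a rough Python restatement of steps 1 and 2 plus the final [0,1] scaling, with function names of my own (not the original periodicityArrays()/linearTrendArrays()), and the profile computed over the whole array rather than one month.

```python
import numpy as np

def remove_periodicity(values, hours, weekdays):
    """Step 1: subtract the mean profile by (weekday, hour); the post uses a
    one-month profile, this sketch averages over the whole sample."""
    out = values.astype(float)
    for wd in range(7):
        for h in range(24):
            mask = (weekdays == wd) & (hours == h)
            if mask.any():
                out[mask] -= values[mask].mean()
    return out

def remove_linear_trend(a):
    """Step 2: first difference a[i] - a[i+1], as in the post."""
    return a[:-1] - a[1:]

def to_unit_range(a):
    """Final scaling to [0, 1]."""
    lo, hi = a.min(), a.max()
    return (a - lo) / (hi - lo) if hi > lo else np.zeros_like(a)

# toy usage on a synthetic hourly series with a daily cycle and a trend
n = 24 * 30
t = np.arange(n)
series = 0.01 * t + np.sin(2 * np.pi * (t % 24) / 24) \
         + np.random.default_rng(3).normal(0, 0.1, n)
hours, weekdays = t % 24, (t // 24) % 7
clean = to_unit_range(remove_linear_trend(remove_periodicity(series, hours, weekdays)))
print(clean[:5])
```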