Don't offer opium to the people who love the market ))
BestInterval with five bad intervals thrown out can be thought of as adding ten inputs to the Expert Advisor (each takes an integer value from 0 to 2500, and each one is larger than the previous).
It turns out that just 10 additional input parameters can perfectly fit (and instantly) almost any TS on any time frame.
I have such a TS with > 1000 positions on half a year of history. That is, only 10 parameters produce results like that. And what about a NS, where there can be orders of magnitude more parameters, as well as wider ranges of values?
My point is: if 10 parameters are enough for fitting, isn't it self-deception to talk about more parameters (this is about NS)?
To continue the thought: a rough estimate of how many combinations (number of vectors) these 10 parameters can produce gives ~10^30. That is, out of this not-so-large number there will always be one combination (actually many, many more) that shows excellent results on any data of arbitrary length. I find this somewhat discouraging.
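An order-of-magnitude check of the estimate above (a sketch; the exact figure depends on how the intervals are counted): choosing 10 strictly increasing integers from 0..2500 is the same as choosing 10 distinct values out of 2501, which `math.comb` counts directly. This lands a few orders below ~10^30, but the conclusion is the same: the search space is astronomically large.

```python
from math import comb

# Ten inputs, each an integer in 0..2500 and each strictly larger than the
# previous one: that is choosing 10 distinct values out of 2501, since the
# increasing order is then fixed automatically.
n = comb(2501, 10)
print(f"{n:.2e}")  # on the order of 10^27
```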
Self-deception: it is like increasing the degree of an approximating polynomial; as the order increases, the fit gets better and better, up to a perfect one. So you can take many parameters at first and then prune them down to a minimum, keeping the best ones.
It is a pity that the generalisation ability is very bad... although I have seen in your tester that the EA works very well on OOS. By the way, I did not manage to fit your EA well, but I did not try hard. There is also a division-by-zero error in the EMA library.
You should not use zero periods. I added it above.
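For reference, a common way such a division-by-zero appears is a non-positive period; some EMA implementations seed with an SMA of the first `period` values, i.e. they divide by the period itself. A minimal guarded sketch (assumed recursive form, not the actual library code):

```python
def ema(values, period):
    # A sketch of a plain EMA, not the actual library code. The SMA seed
    # below divides by `period`, which is exactly where period = 0 explodes,
    # so guard against it up front.
    if period < 1:
        raise ValueError("EMA period must be >= 1")
    seed = sum(values[:period]) / period      # SMA seed: divides by period
    alpha = 2.0 / (period + 1)                # standard smoothing factor
    out = [seed]
    for v in values[period:]:
        out.append(alpha * v + (1.0 - alpha) * out[-1])
    return out
```

With period = 1 the EMA reduces to the series itself; with period = 0 the guard raises immediately instead of failing deep inside the calculation.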
With a polynomial the story is somewhat different, because the range of coefficient values is almost unlimited.
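The polynomial analogy is easy to demonstrate: with as many coefficients as data points, a polynomial fits pure noise exactly, while saying nothing about unseen data. A self-contained illustration (pure Python, made-up noise data):

```python
import random

random.seed(1)
xs = list(range(8))
ys = [random.gauss(0, 1) for _ in xs]        # pure noise, nothing to learn

def interpolate(x, xs, ys):
    # Lagrange polynomial of degree len(xs) - 1: one coefficient per point.
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Enough parameters => the in-sample fit on arbitrary noise is perfect...
max_err = max(abs(interpolate(x, xs, ys) - y) for x, y in zip(xs, ys))
# ...which says nothing about out-of-sample behaviour.
```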
Unfortunately, I have not yet managed to finish implementing the "combat" (live-trading) version, because the scheme has to be as follows.
Hardly anyone will use such a complicated scheme, so I added this for ease of use.
It effectively says: "don't bother, put this range into your TS, it won't be any worse".
Ah, there it is, I get it ) I'll run it later; right now I'm tinkering with my own neural nets.
You need to use highly regularised models, because without that they simply fit any noise, even insignificant noise. I switched to linear models altogether (I added one more library to RL). They learn very quickly and the number of coefficients is small, unlike a forest.
How many is that?
It is equal to the number of features + 1, that is, the regression coefficients. Only it is a logit regression with buy/sell classification.
The generalisation ability is small, though: it does not fit well over long intervals, but short intervals are fine. I am thinking of adding a couple of regressions to it, combining them into a mini NS, plus a good feature selector (like MGUA, i.e. GMDH).
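A minimal sketch of the kind of model described: a logit (logistic) regression for buy/sell classification with an L2 penalty, where the parameter count is exactly the number of features plus one intercept. All names and data below are made up for illustration; this is not the poster's actual library.

```python
import math
import random

random.seed(0)

# Toy buy/sell dataset: 3 features per sample, label 1 = buy, 0 = sell.
n_features = 3
X = [[random.gauss(0, 1) for _ in range(n_features)] for _ in range(200)]
y = [1 if x[0] + 0.5 * x[1] > 0 else 0 for x in X]

# Coefficients: one per feature plus an intercept, i.e. n_features + 1 total.
w = [0.0] * (n_features + 1)
lam, lr = 0.1, 0.1        # L2 regularisation strength and learning rate

def predict(x, w):
    z = w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
    return 1.0 / (1.0 + math.exp(-z))

for _ in range(300):      # plain gradient descent on the regularised log-loss
    grad = [0.0] * len(w)
    for x, t in zip(X, y):
        err = predict(x, w) - t
        grad[0] += err
        for k in range(n_features):
            grad[k + 1] += err * x[k]
    for k in range(len(w)):
        # The L2 penalty shrinks coefficients toward zero (intercept excluded).
        penalty = lam * w[k] if k > 0 else 0.0
        w[k] -= lr * (grad[k] / len(X) + penalty)

accuracy = sum((predict(x, w) > 0.5) == bool(t) for x, t in zip(X, y)) / len(X)
```

The point of the heavy penalty is exactly what the post says: it keeps the handful of coefficients small, so the model cannot contort itself around noise the way a deep forest can.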
That's quite a lot. One more remark: ML and BestInterval are different concepts. ML searches for a TS, while BestInterval searches for nothing.
I wonder how an example like this would play out. Suppose ML has 100 parameters and finds a TS. Which will be better in the end: ML100 + BestInterval10, or ML110?
Yes, the scheme is really complicated; not everyone will be able to use it :)