Machine learning in trading: theory, models, practice and algo-trading - page 1151
Oho, I just dropped into the topic. Well, I won't disturb the feverish minds, I suppose they're hard at work...
And then, on the contrary, I see the thread has taken off. I came in, and there are bourgeois here... ugh... Eh... they've sold out Mother Russia... Shame on you!!!
Of course it's a joke!!!
How do I correctly write an iCustom call for ZigZag so that it returns the extremum values?
Post your question here:
https://www.mql5.com/ru/forum/160683
Despite the generality of your approach to calculating Sharpe ratios for assets and portfolios, I'm not ready to transfer it to individual trading systems (TS). I believe a TS is in no way a portfolio, only a possible part of one.
It's not even about the Sharpe ratio itself, but about the imposed approach, where instead of my TS I have to consider something unknown, where many trades can be artificially glued into one and non-existent null trades can be added, and all only because "it has to be that way".
To me, the Sharpe ratio is a characteristic of the distribution of trade profits, showing the statistical significance of the difference between the average profit and zero. In the case where the number of trades in a TS can vary so much, the Sharpe ratio has to be modified: subtract from it a term of the form k/sqrt(n), where n is the number of trades. The point is that as the number of trades grows, the confidence interval for the expectation narrows, and this can partly compensate for the decrease of the ordinary Sharpe ratio with a growing number of trades. If the number of trades does not jump around that much, this correction does not affect optimization, so the standard Sharpe ratio can be used.
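The correction described above can be sketched as follows; `k` here is a tuning constant chosen by the user, not a value fixed by the post:

```python
import math

def sharpe(profits):
    """Per-trade Sharpe: mean trade profit over its sample standard deviation."""
    n = len(profits)
    mean = sum(profits) / n
    var = sum((p - mean) ** 2 for p in profits) / (n - 1)
    return mean / math.sqrt(var)

def adjusted_sharpe(profits, k=1.0):
    """Sharpe penalized by k / sqrt(n), where n is the number of trades.
    k is a hypothetical tuning constant (an assumption, not from the post)."""
    return sharpe(profits) - k / math.sqrt(len(profits))
```

As n grows, the penalty k/sqrt(n) shrinks together with the confidence interval for the mean profit, which is the compensation effect the post describes.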
Fine, the Sharpe ratio is more for portfolios than for a separate TS, and nobody forces you to use it as a metric or not; there are plenty of other metrics, and you can invent your own that are no worse.
Well, try it...
By the way, be sure to add more predictors; I think candlestick patterns will be good for filtering. You can also enter not immediately on the signal, but one candle later, with some kind of confirmation, for example.
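That enter-through-a-confirming-candle idea can be sketched minimally; the field names and the one-candle bullish check are assumptions for illustration, not anything the post specifies:

```python
def is_bullish(candle):
    """Hypothetical one-candle confirmation: the candle closed above its open."""
    return candle["close"] > candle["open"]

def confirmed_long_entry(signal, next_candle):
    """Take a long signal only if the following candle confirms it (a sketch)."""
    return signal == "buy" and is_bullish(next_candle)
```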
I don't think it's much good so far... I didn't find any predictive power in your series themselves, but if you mix them with other features you get something like "axe soup" (the stone-soup folk tale))
I looked at the data I sent you, and it looks like something is wrong...
It really is mush; I probably made a mistake somewhere when writing the code.
It would be interesting to state the following problem in more general terms: suppose there is some data matrix and a vector series, split into a train part and a test part; someone fit something on the train part and produced a certain series on the test part, and I want to evaluate the trading value of that series for my system.
This may be an anonymous datafeed, scalar or vector. They say there is even a "weak prediction market": some serious institutions (banks, hedge funds) buy "sub-spread" forecasts in bulk (forecasts not profitable to trade directly) and use them as features in their models, while they remain useful. So I'm interested in coming up with a sensible algorithm to check the usefulness of some series for trading: a clear formal procedure and a set of metrics.
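One crude first pass at such a check, as a sketch under assumptions (trade the sign of each forecast against the next realized return and charge a fixed per-trade spread; the function and field names are illustrative, not a formal procedure from the post):

```python
def forecast_trading_value(forecasts, returns, spread=0.0):
    """Rough usefulness check for an external forecast series:
    go long when the forecast is positive, short when negative,
    and book the realized return net of a per-trade spread."""
    assert len(forecasts) == len(returns)
    pnl = [(1 if f > 0 else -1) * r - spread for f, r in zip(forecasts, returns)]
    hit_rate = sum(p > 0 for p in pnl) / len(pnl)
    return {"hit_rate": hit_rate, "total_pnl": sum(pnl)}
```

A "sub-spread" forecast in this picture is one whose total PnL is positive at spread=0 but turns negative once the real spread is charged, which matches the buy-it-as-a-feature story above.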
I'm not quite sure why... There are "feature selection" algorithms that solve the problem of separating useful predictors from noise.
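One standard member of that family is permutation importance; a minimal dependency-free sketch, where the model interface (`predict` taking one row) and the metric are assumptions for illustration:

```python
import random

def accuracy(y_true, y_pred):
    """Fraction of exactly matching predictions."""
    return sum(a == b for a, b in zip(y_true, y_pred)) / len(y_true)

def permutation_importance(predict, X, y, col, metric=accuracy, n_rounds=20, seed=0):
    """How much the metric degrades when column `col` is shuffled:
    a value near zero means the model does not rely on that feature."""
    rng = random.Random(seed)
    base = metric(y, [predict(row) for row in X])
    drop = 0.0
    for _ in range(n_rounds):
        perm = [row[col] for row in X]
        rng.shuffle(perm)
        X_perm = [row[:col] + [v] + row[col + 1:] for row, v in zip(X, perm)]
        drop += base - metric(y, [predict(row) for row in X_perm])
    return drop / n_rounds
```

A feature the model ignores scores exactly zero, since shuffling it leaves every prediction unchanged; noise predictors mixed in with useful ones get sorted out the same way.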