
Vladislav, one more question. I apologize in advance: I am not a mathematician by training, so perhaps I am asking an ill-posed question that belongs in the FAQ section of a theory course. You say that you take several samples to calculate the Hurst exponent, then choose the sample whose exponent deviates most from 0.5, and build a regression channel from it. My questions are:
How exactly is the sample of values for calculating the Hurst exponent prepared? I can imagine the following options. For example, on the H1 timeframe over 3-5-7 days you take every bar separated by some fixed number of bars and put it into the array for the Hurst calculation; then you shift the whole sample by one bar, calculate the exponent again, and so on until the fixed number of bars is exhausted. By the way, which price do you work with: Close, Open, High or Low? Or do you use some additional method, "a la ZigZag", to identify the significant highs and lows over the period, then build a regression channel through those highs and lows with some approximation of the price function between them, and calculate the Hurst exponent from that function approximated, say, by straight lines? Or do you do something simpler, without any approximation: just fill the array with the maxima and minima and calculate the Hurst value assuming the points are equally spaced? Is that even possible?
No. Once again: the sample for the Hurst exponent is the same set of bars that makes up the channel sample, and the exponent is calculated for that channel itself - at this stage we are estimating whether the channel is suitable for building a projection (or is it enough to draw the history nicely?). I have already mentioned that the procedure is iterative, which means the following process: find a sample, build a channel, evaluate it, refine it, build a projection, evaluate that, and correct the sample if it is not satisfactory - and so on, on every bar. The process either converges or runs into certain limitations - in that case it is pipsing ;). The first variants took me about 40 minutes :) - now I manage it within 30-40 seconds per instrument. I use a slow computer (a Celeron 600), but it is great for finding bottlenecks :). On a P4 I would not even notice that the algorithm is not smooth :).
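As an illustration only: below is a minimal sketch of estimating the Hurst exponent by classical rescaled-range (R/S) analysis over the bars of a candidate channel. The thread does not specify the estimation method, window sizes, or price type, so the function name, the doubling of window sizes, and the use of a single price series here are my own assumptions, not Vladislav's actual algorithm.

```python
import numpy as np

def hurst_rs(series, min_window=8):
    """Estimate the Hurst exponent of a 1-D series by R/S analysis (sketch)."""
    series = np.asarray(series, dtype=float)
    n = len(series)
    window_sizes, rs_values = [], []
    size = min_window
    while size <= n // 2:
        rs_per_window = []
        for start in range(0, n - size + 1, size):
            chunk = series[start:start + size]
            cumulative = np.cumsum(chunk - chunk.mean())
            r = cumulative.max() - cumulative.min()   # range of cumulative deviations
            s = chunk.std(ddof=0)                     # dispersion within the chunk
            if s > 0:
                rs_per_window.append(r / s)
        if rs_per_window:
            window_sizes.append(size)
            rs_values.append(np.mean(rs_per_window))
        size *= 2
    # The Hurst exponent is the slope of log(R/S) versus log(window size).
    slope, _ = np.polyfit(np.log(window_sizes), np.log(rs_values), 1)
    return slope

# H near 0.5 suggests a random walk; in the spirit of the discussion above,
# a channel whose bars give H far from 0.5 is the more interesting one.
prices = np.cumsum(np.random.randn(512)) + 100.0
print(hurst_rs(prices))
```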
Example: you build a channel that points downwards - and then what? In the same place you can also find a channel that points upwards ;).
And then comes a series of questions about suitability for forecasting (noise or not, stable structure or not, and quite a few more, but those are the main ones - suitability and the way the channel is used depend on them).
I do not see the point in "nicely drawing the left part of the chart": no search is needed for that - just take any sample, build the regression channel and enjoy the view. Try using the standard MT4 tools to build two or three "off-channel" regression channels - it gets very pretty, and the "truth" (let us call it that) is hiding somewhere among them, but you still have to find it ;). The forecast has to rely on the best approximation at the current moment. Hence a host of related subtasks and quality criteria (a simple regression channel with standard-deviation boundaries is sketched below). All of this is, of course, IMHO - it is part of what I started from when I formulated the problem statement for the channel search.
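For readers without MT4 at hand, here is a minimal sketch of the kind of regression channel being discussed: a least-squares line through a sample of bars with boundaries a fixed number of standard deviations away. The 2-sigma width and the function name are illustrative assumptions; they are not taken from the thread.

```python
import numpy as np

def regression_channel(prices, n_std=2.0):
    """Fit price = a*t + b and return the midline and upper/lower boundaries."""
    prices = np.asarray(prices, dtype=float)
    t = np.arange(len(prices))
    a, b = np.polyfit(t, prices, 1)            # least-squares slope and intercept
    midline = a * t + b
    residual_std = np.std(prices - midline)    # dispersion of price around the line
    return midline, midline + n_std * residual_std, midline - n_std * residual_std

prices = np.cumsum(np.random.randn(200)) + 100.0
mid, upper, lower = regression_channel(prices)
print(mid[-1], upper[-1], lower[-1])
```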
Good luck and good trends.
In general, everything is clear! I have to go back and study all the materials from the very beginning, because "the game is definitely worth the candle" ;o). Well, I did not learn it at the institute (since I had only a vague idea where it might come in handy later in life) - I will learn it now. I have already downloaded the books you recommended and am reading them. And I also downloaded the unbeatable Wentzel book, just to shake off the rust ;o)!
Not too quickly, I think, but I will be able to implement everything you described for myself, because the essence of your strategy has been set out in full.
The methodology has been laid out.
Good luck and good trends.
In the case of one-dimensional sequences, wavelets are good for analysis and for finding singular points on charts (cusps), which are poorly detected by Fourier methods, of which the DCT is a variant. My goal was to smooth a real function without lag by transforming the DCT spectrum so as to keep the cutlets (the mini-trend) and throw out the flies (the noise). I did not see much benefit in using wavelets. Maybe I am wrong about that.
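Here is a minimal sketch of that kind of DCT smoothing: transform the series, zero out the high-frequency coefficients (the "flies", i.e. the noise), keep the low-frequency ones (the mini-trend), and invert. The cutoff fraction and the function name are illustrative assumptions; the poster does not describe the exact spectrum transformation used.

```python
import numpy as np
from scipy.fft import dct, idct

def dct_smooth(series, keep_fraction=0.1):
    """Low-pass filter a series by truncating its DCT spectrum (sketch)."""
    coeffs = dct(np.asarray(series, dtype=float), type=2, norm='ortho')
    cutoff = max(1, int(len(coeffs) * keep_fraction))
    coeffs[cutoff:] = 0.0                      # discard the high-frequency "flies"
    return idct(coeffs, type=2, norm='ortho')  # invert to get the smoothed series

prices = np.cumsum(np.random.randn(256)) + 100.0
smoothed = dct_smooth(prices)
print(prices[:3], smoothed[:3])
```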
I just have not studied it deeply... Can an already computed DCT point change after the next bar appears? That is, are previously computed values recalculated?
If yes, then wavelets are better ... imho, all other things being equal.
Yes, they are. The DCT is an integral transform, i.e. it is applied to the whole array at once.
The point is that the wavelet transform also traces its origins back to integral transforms. I will not dwell on the mathematical details; I will only mention that in the implementations known to me the entire array is also recalculated. Perhaps there are modifications, used specifically for analyzing financial sequences, that require recalculating only part of the array; unfortunately, I am not aware of any. I used the familiar classical methodology.
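A small illustration of that point, reusing the same illustrative dct_smooth sketch as above (repeated here so the snippet is self-contained): because the transform acts on the whole array, appending one new bar changes the smoothed values of earlier bars.

```python
import numpy as np
from scipy.fft import dct, idct

def dct_smooth(series, keep_fraction=0.1):
    coeffs = dct(np.asarray(series, dtype=float), type=2, norm='ortho')
    coeffs[max(1, int(len(coeffs) * keep_fraction)):] = 0.0
    return idct(coeffs, type=2, norm='ortho')

prices = np.cumsum(np.random.randn(256)) + 100.0
before = dct_smooth(prices[:-1])   # smoothed history without the newest bar
after = dct_smooth(prices)         # smoothed history once the newest bar arrives
# Non-zero result: the earlier points were recomputed when the bar was added.
print(np.max(np.abs(after[:-1] - before)))
```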