Discussion of article "Forecasting market movements using the Bayesian classification and indicators based on Singular Spectrum Analysis" - page 2
Given the rather high level of the article, I want to plant a doubt in the author's mind, namely about the need to write an Expert Advisor at this stage. There is no evidence in the article that the results of testing the Expert Advisor can be trusted. If we get 2, 3, or even 10 values of the profit factor, or of any other tester metric, is that a statistic? What guarantees that the Expert Advisor will behave the same way in the FUTURE?
At the heart of these doubts is the author's assertion that SSA is capable of working on non-stationary markets. Where is the proof? I do not recall any.
Suppose I have missed something here. But the article does not specify at all which kinds of non-stationarity SSA resolves and with what result. Can it isolate the trend? Perhaps, but the residual left after subtracting the trend is not necessarily stationary. This issue is treated in great detail within the framework of ARCH models; it is precisely the variety of possible residuals that gave rise to the very large variety of ARCH models.
This analysis is absent from the article, and therefore there is no evidence that trading decisions are made on a stationary time series. It follows that the future behaviour of a trading system built on these ideas is NOT predictable.
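To make the point concrete, here is a minimal Python sketch (my illustration, not the article's code; the toy series is arbitrary) showing how a series can stay non-stationary even after a fitted trend is subtracted, using the augmented Dickey-Fuller test from statsmodels:

```python
# Toy demonstration: detrending does not guarantee stationarity.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(42)
n = 500
t = np.arange(n)
# "Price": linear trend plus a random-walk component (non-stationary noise)
price = 0.05 * t + np.cumsum(rng.normal(0.0, 1.0, n))

# Subtract a least-squares linear trend
coeffs = np.polyfit(t, price, deg=1)
residual = price - np.polyval(coeffs, t)

# ADF null hypothesis: the series has a unit root (is non-stationary)
stat, pvalue = adfuller(residual)[:2]
print(f"ADF statistic = {stat:.3f}, p-value = {pvalue:.3f}")
# A large p-value means non-stationarity cannot be rejected:
# subtracting the trend alone did not make the residual stationary.
```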
PS.
About 10 years ago I used the "Caterpillar" (FATL-SATL). The Expert Advisors lived for 3 to 6 months and then started losing money. The main problem is not only non-stationarity in the classical sense (the mean and variance change) but also the changing periodicity, which is clearly visible on a ZigZag (ZZ).
That SSA is applied to the analysis of non-stationary time series is not an "author's assertion" but a property of the method, which is based on constructing its own orthogonal basis, optimal for the given fragment of the time series. In fact, even a trend or a periodicity is already non-stationary. Trading decisions are therefore not made on a stationary series: we have no such series and no approximation to one (unlike ARMA models). The model is based on representing the time series as a sum of a trend, periodic components (with changing period) and noise. The noise in the model is not controlled; it is filtered out, while for the stable components there is a forecast for the very near future. The method assumes local stability, not stationarity, of the processes forming prices.
Secondly, nobody gives guarantees. The point is that training the model on an arbitrarily chosen set of historical data has shown stable results when applied to other series and time scales. How long these results last is a separate question, but in my opinion it is controllable, for example by running the model over "recent history" before trading. Re-training also takes little time. More important, again from my point of view, is to filter possible "false entries" more reliably and to reduce risks, and that will require extending the Expert Advisor with control methods, for example: limiting the bot's trades by schedule, or expanding the set of indicators for the Bayesian analysis, possibly with the help of neural networks. The current testing will show whether the filters already in place work or need further development. The only proof of suitability will be, as always, practice.
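For readers who want to see what such a decomposition looks like, here is a minimal NumPy sketch of basic SSA ("Caterpillar"); it is my own illustration, not the article's code, and the window length and number of components are arbitrary choices:

```python
# Basic SSA: embed, SVD, reconstruct selected components by diagonal averaging.
import numpy as np

def ssa_reconstruct(x, L, components):
    """Reconstruct the part of series x carried by the selected SVD components."""
    N = len(x)
    K = N - L + 1
    # Trajectory (Hankel) matrix: column j is the window x[j : j+L]
    X = np.column_stack([x[j:j + L] for j in range(K)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    # Sum of the selected rank-1 elementary matrices
    Xr = sum(s[i] * np.outer(U[:, i], Vt[i]) for i in components)
    # Diagonal averaging (Hankelization) maps the matrix back to a series
    rec = np.zeros(N)
    counts = np.zeros(N)
    for i in range(L):
        for j in range(K):
            rec[i + j] += Xr[i, j]
            counts[i + j] += 1
    return rec / counts

# Toy series: trend + amplitude-modulated periodic part + noise
rng = np.random.default_rng(0)
n = 300
t = np.arange(n)
x = 0.02 * t + (1 + 0.003 * t) * np.sin(2 * np.pi * t / 25) + rng.normal(0, 0.3, n)

signal = ssa_reconstruct(x, L=60, components=range(4))  # leading 4 components
noise = x - signal  # the residual referred to above as "noise"
```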
The noise in the model is not controlled; it is filtered out...
The key phrase in your post, for me, is the one above.
Noise can and should be filtered if it is stationary, or better still, normally distributed. If the noise is not, and from the article we do not know what kind of noise remains after SSA filtering, then it cannot be filtered under any circumstances. This is what all ARCH models are built on, because the crux of the matter always lies in the heavy tails, which are not always present in the noise distribution; but when they do occur, they are guaranteed to drain the deposit.
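That check is straightforward to run. A minimal SciPy sketch (mine; `noise` is assumed to be the SSA residual from the sketch above) looks at the excess kurtosis and a Jarque-Bera normality test of the residual:

```python
# Inspect the SSA residual for normality and heavy tails.
from scipy import stats

excess_kurtosis = stats.kurtosis(noise)        # 0 for a normal distribution
jb_stat, jb_pvalue = stats.jarque_bera(noise)  # H0: the residual is normal

print(f"excess kurtosis = {excess_kurtosis:.2f}")
print(f"Jarque-Bera p-value = {jb_pvalue:.3f}")
# Positive excess kurtosis and a small p-value indicate heavy tails,
# i.e. exactly the case where, as argued above, the residual noise
# cannot be treated as harmless and ARCH-type modelling becomes relevant.
```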
Your idea can be approached from the other side.
The point is that you are using a classifier whose inputs are predictors produced by SSA. In that case the importance of the stationarity of the predictors is, to me personally, not very clear, but there are well-proven requirements for the predictors fed into a classifier: they must be cleaned of noise predictors, i.e. those that have a weak relation to the target variable (not to be confused with the noise discussed above). The most interesting thing is that in this approach any kind of smoothing (trends) is classed as noise. All of this is covered in the Machine Learning branch; a sketch of such screening follows.
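As an illustration of that screening step (my own sketch, not from the thread; the features, target and cut-off are made up), scikit-learn's mutual information estimator can rank predictors by their relation to the target:

```python
# Drop predictors whose relation to the target variable is weak.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(1)
n = 1000
y = rng.integers(0, 2, n)                # binary target, e.g. up/down
informative = y + rng.normal(0, 0.5, n)  # genuinely related to the target
noise_pred = rng.normal(0, 1, n)         # pure "noise" predictor
trend_pred = np.linspace(0, 1, n)        # smooth trend, unrelated to the target
X = np.column_stack([informative, noise_pred, trend_pred])

mi = mutual_info_classif(X, y, random_state=0)
keep = mi > 0.01                         # illustrative cut-off
print("mutual information:", np.round(mi, 3), "-> keep:", keep)
```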
Any way you slice it....
PS.
Practice is the criterion of truth, but only if this practice is based on theory.
In other words, nobody has repealed the butterfly effect :)
SSA is a principal component method, right?
More like FFT.
Similar to it, though the algorithms differ somewhat. A general description can be found in Loskutov A.Yu., "Time Series Analysis", MSU lecture course. It can be used for forecasting either with the "Caterpillar" itself or with autoregression.
In contrast to the Fourier method, SSA extracts components with modulated amplitude and frequency, as well as non-periodic components.
As for investigating the noise: at the next step SSA is combined with Box-Jenkins models (ARIMA and the like), and those models work with the "residual" left after the trend obtained by SSA is removed.
It is argued that a model combining a GARCH process with a model describing the behaviour of the mean is promising from a forecasting point of view. As an option, GARCH+SSA could later be implemented in the Expert Advisor.
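A minimal sketch of that combination (my own, using the third-party `arch` package, which the article does not use; `noise` is again assumed to be the SSA residual from the earlier sketch): SSA describes the behaviour of the mean, and GARCH(1,1) models the variance of what is left.

```python
# Fit GARCH(1,1) to the SSA residual and forecast its variance one step ahead.
from arch import arch_model

# Rescaling helps the optimiser; GARCH fits expect percent-like magnitudes
am = arch_model(100 * noise, mean="Zero", vol="GARCH", p=1, q=1)
res = am.fit(disp="off")
print(res.summary())

forecast = res.forecast(horizon=1)
print(forecast.variance.iloc[-1])  # one-step-ahead variance of the residual
```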
One should not overlook the "level of randomness" of the series on the time horizon used. If it is "off the scale", a forecast by even the most perfect model will give nothing good, and the process is not characterised by long memory. So in the future it is logical to add an estimate of the series' fractality (e.g. the Hurst exponent) in order to find out the "colour" of the noise and classify the current price process. This suggests that a reliable Expert Advisor should first of all monitor and optimise the level of risk to the capital it manages.
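For reference, a rough estimate of the Hurst exponent takes only a few lines of NumPy (a sketch of one common estimator, not the article's code): for a series with Hurst exponent H, the standard deviation of lagged differences grows roughly like lag**H.

```python
# Estimate H from the scaling of lagged differences: std(x[t+lag] - x[t]) ~ lag**H.
import numpy as np

def hurst(x, max_lag=50):
    lags = np.arange(2, max_lag)
    tau = [np.std(x[lag:] - x[:-lag]) for lag in lags]
    H, _ = np.polyfit(np.log(lags), np.log(tau), 1)  # slope of the log-log fit
    return H

rng = np.random.default_rng(3)
walk = np.cumsum(rng.normal(0, 1, 2000))  # random walk: H should be near 0.5
print(f"H = {hurst(walk):.2f}")
# H > 0.5 suggests a persistent (trending) series, H < 0.5 an anti-persistent
# one, H near 0.5 a random walk: the "colour" of the noise mentioned above.
```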
So the "level of randomness" is the main point that makes your model will never work, so I wouldn't call it great, more like delusional. It's a 50/50 guessing game. Who told you at all that the market processes are more likely to be persistent than antipersistent, for example.... and that there are any periodic cycles at all. The screens show typical predictions of this kind of systems, which are about nothing.....
But, I admire your level of understanding and experience in mathematics and in building models, for this, of course, 5++++... so that it does not seem that I criticise everything and everyone :)
If you think that all market movements are random, you are very much mistaken. All modern models try to take into account the effect of prehistory, the "heavy tails", because they contain "precursors" of further behaviour. The main task of a model is to extract the precursor signals from the noise; the problem of a model is its adequacy to the situation and its ability to adapt.
One precursor appeared, we made a forecast; three bars later another precursor appeared, and so on ad infinitum. There is no single source influencing prices in the market, so the initial conditions for the development of any given situation arise spontaneously and overlap one another. Suppose we have found some initial conditions that continue to influence the situation: how can we be sure that on the next bar new influencing information will not appear and destroy everything again? Where are the criteria for assessing the reliability of the forecast?