Econometrics: bibliography - page 7

 
faa1947:

TS and DS [trend-stationary and difference-stationary] are a Russian dissertation invention.

The problem is different. My view: extract the deterministic component from the quote and look at the residual. If the residual is stationary, we can extrapolate the deterministic component. If not, extract a deterministic component from the residual in turn, and so on. Is it possible to get a working system that way? Not in the general case; I have no proof of it. But the attachments argue that everything works fine as long as there are no kinks in the trend, and a suggestion is made for overcoming that nuisance as well.
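The extract-and-test loop described above can be sketched in a few lines. Everything here is my own illustration: the polynomial trend, the function names, and the crude split-the-sample stationarity heuristic (a real check would be an ADF or KPSS test) are all assumptions, not the poster's actual method.

```python
import numpy as np

def crude_stationarity_check(x, n_parts=4, tol=2.0):
    # Crude heuristic: split the residual into segments and see whether
    # any segment mean drifts far from the overall mean. A proper test
    # would be ADF or KPSS; this is only an illustration.
    parts = np.array_split(x, n_parts)
    means = np.array([p.mean() for p in parts])
    scale = x.std() / np.sqrt(len(x) // n_parts)
    return np.all(np.abs(means - x.mean()) < tol * scale)

def detrend_until_stationary(y, max_passes=3, degree=2):
    # Repeatedly fit a polynomial "deterministic component" and keep
    # working on the residual until it looks stationary.
    t = np.arange(len(y), dtype=float)
    trend_total = np.zeros_like(y)
    resid = y.copy()
    for _ in range(max_passes):
        coefs = np.polyfit(t, resid, degree)
        fitted = np.polyval(coefs, t)
        trend_total += fitted
        resid = resid - fitted
        if crude_stationarity_check(resid):
            break
    return trend_total, resid

# Hypothetical non-stationary "quote": quadratic trend plus noise
rng = np.random.default_rng(0)
t = np.arange(300, dtype=float)
y = 0.01 * t**2 - t + rng.normal(0, 5, 300)
trend, resid = detrend_until_stationary(y)
```

If the returned residual passes the stationarity check, the extracted trend is the part one would try to extrapolate.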

100% agree. I wrote an EA as coursework on the same subject: an adaptive model for predicting time series with unstable fluctuations. It rides the trend right to the end, but it loses on reversals; in a flat market it behaves acceptably.
 
faa1947:

TS and DS [trend-stationary and difference-stationary] are a Russian dissertation invention.

The problem is different. My view: extract the deterministic component from the quote and look at the residual. If the residual is stationary, we can extrapolate the deterministic component. If not, extract a deterministic component from the residual in turn, and so on.

If you have extracted the deterministic component and removed it, then obviously you will need a different extraction method to get anything out of the residual (unless you want to obtain identically zero at the output). And so on at each step.

Judging by the posts, the collective does not understand what a "breakpoint" is, in simple terms. We have fitted a model. At each new bar we re-fit, and the new model matches the previous one. Then at some point the new model's parameters no longer coincide with the previous one's: the quote inside the sample has changed in such a way that the model parameters changed. If the parameters are still good, we can adjust them and hope that everything will be fine on the next bar. But sometimes the quote changes so much that the functional form itself has to be changed. Besides, a break is most likely not diagnosed on the arrival of one bar; it takes several bars, i.e. we have already moved into a loss, and here begins the song about the SL (stop loss).
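The re-fit-and-compare idea can be illustrated with a crude Chow-style check. This is my own sketch, not the method from the attachment: fit a line on each half of a window around a candidate bar; a large slope difference suggests the trend kinked there. Note that it needs bars on both sides of the break, which is exactly the lag complained about below.

```python
import numpy as np

def chow_like_stat(y, i, half=40):
    # Crude Chow-style comparison (illustration only): fit a line on
    # each half of a window around bar i; a large slope difference
    # suggests a kink in the trend near that bar.
    t = np.arange(half, dtype=float)
    s_left = np.polyfit(t, y[i - half:i], 1)[0]
    s_right = np.polyfit(t, y[i:i + half], 1)[0]
    return abs(s_left - s_right)

# Synthetic quote with a kink in the trend at bar 200
rng = np.random.default_rng(1)
y = np.concatenate([0.1 * np.arange(200), 20.0 - 0.3 * np.arange(100)])
y = y + rng.normal(0, 0.5, 300)

stat_at_kink = chow_like_stat(y, 200)   # straddles the break: large
stat_on_trend = chow_like_stat(y, 100)  # clean trend: near zero
```

The statistic only becomes large once the window straddles the break, which is why several bars (and possibly a stop loss) pass before the break is diagnosed.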

Here is an attachment about this problem. The way I see it, this is the main problem in trading: the break.

The problem, as applied to trading, is to detect a breakpoint (or whatever you want to call it; I prefer the expression "stationarity disruption") as early as possible, before the SL fires. But with the method in your attachment we can hardly do that, if only because within a linear model we have absolutely no means of determining at what precise point within the sample the break occurred. And, as you correctly point out, in this model a break is almost never diagnosed on the arrival of a single bar, unless that bar immediately puts you into a loss.

 
orb:
100% agree. I wrote an Expert Advisor as coursework on the same subject: an adaptive model for predicting time series with unstable fluctuations. It rides the trend right to the end, but it loses on reversals; in a flat market it behaves acceptably.
"Don't read Soviet newspapers".
 
alsu:

If you have extracted a deterministic component and removed it, then obviously, in order to extract anything from the residual, you will need a different extraction method (unless you want to obtain identically zero at the output). And so on at each step.

Why a different one? I don't get it. In the prediction thread I showed repeated application of the Hodrick-Prescott filter and demonstrated that it did no good. I could not squeeze anything out of the collective. But now I can say that there were two problems: (1) the complaint about the filter, which I suspect is an edge effect on the right, and (2) just as much of a problem as the model itself is how to use the resulting prediction. Until that issue is resolved, there is no point in speculating about a smoothing method. That said, I reject MM (money management) as a way of solving the problems of using the prediction.
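For reference, the Hodrick-Prescott filter mentioned here fits in a few lines of numpy: the trend solves (I + λD'D)·trend = y, where D is the second-difference operator. The λ value and the toy signal below are my own choices for illustration; picking λ for intraday quotes is an open question, and the right edge of the trend is precisely where the filter is least reliable, which matches complaint (1).

```python
import numpy as np

def hp_filter(y, lamb=1600.0):
    # Hodrick-Prescott trend: solve (I + lamb * D'D) * trend = y,
    # where D is the (n-2) x n second-difference operator.
    n = len(y)
    I = np.eye(n)
    D = np.diff(I, n=2, axis=0)
    trend = np.linalg.solve(I + lamb * (D.T @ D), y)
    return trend, y - trend  # trend and cycle (residual)

# Toy signal: a smooth wave plus noise (stand-in for a quote)
rng = np.random.default_rng(2)
t = np.linspace(0, 4 * np.pi, 200)
y = np.sin(t) + rng.normal(0, 0.1, 200)
trend, cycle = hp_filter(y, lamb=100.0)
```

The dense solve is O(n³); production implementations exploit the banded structure of D'D, but the decomposition is the same.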

On this basis I posted an attachment and a link to the handbook. Both of these items are new to the forum and, to my mind, very promising.

The problem, as applied to trading, is to detect a kink (or whatever you want to call it; I prefer the expression "stationarity disruption") as early as possible, before the SL fires. But with the method in your attachment we can hardly do that, if only because within a linear model we have absolutely no means of determining at what particular point within the sample the break occurred. And, as you correctly point out, in this model a kink is almost never diagnosed on the arrival of a single bar, unless that bar has immediately driven you into a loss.

The idea of the attachment is to forecast with several models at once. Different models will show the kink at different points, and the forecast will be refined thanks to that. That is how I see it.

 
faa1947:


Why a different one? I don't understand.

Simple logic. Suppose we have a signal and want to extract the deterministic component from it. Naturally, we want to do it optimally, i.e. so that no other setting of the method we use could give a better result on this signal. Note that here we must introduce an optimality criterion, through which we impose constraints on the algorithm's parameters. But then it follows that if we really did achieve this and squeezed the optimum out of the method, applying the same method with the same optimality criterion to the residual must return zero, because otherwise we contradict the fact that the parameters of the previous step were computed from that very optimality criterion...
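This argument can be checked directly. Take least squares (RMSE as the optimality criterion), fit a line, and then fit the same model to the residual: the second fit comes out as numerically zero, because OLS residuals are orthogonal to the regressors. The signal below is hypothetical, for illustration only.

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(100, dtype=float)
y = 2.0 + 0.5 * t + rng.normal(0, 1, 100)   # toy signal: line + noise

# First pass: least-squares line, optimal under the RMSE criterion
c1 = np.polyfit(t, y, 1)
resid = y - np.polyval(c1, t)

# Second pass: the same method, same criterion, applied to the residual.
# As argued above, it must (and does) return zero coefficients.
c2 = np.polyfit(t, resid, 1)
```

Hence extracting anything further from the residual requires either a different model class or a different criterion, exactly as the post says.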

And here is the catch: if we do not apply an optimality criterion and simply, for example, filter the signal with a fixed filter, then in theory we have no right to call the result a deterministic component. In what sense is it deterministic? After all, you can apply a bunch of filters of the same structure but with different parameters, and they will all give different results. Which of them should then be considered the deterministic component? All parameter sets are equally valid until we introduce an optimality criterion.

(1) the complaint about the filter, which I suspect is an edge effect on the right

Edge effects are inevitable in any method; they are a consequence of the principle of causality, and we can never get rid of them completely. But we can try to counteract them by smoothing out their influence. This requires a priori knowledge about the sample, which means some preliminary research.


and (2) just as much of a problem as the model itself is how to use the resulting prediction.

Well, that's an old song altogether))
 
alsu:

Simple logic. Suppose we have a signal and want to extract the deterministic component from it. Naturally, we want to do it optimally.

We know the criterion: RMSE (so as not to get bogged down with standard errors). This criterion lets us select the smoothing parameters for a particular sample. When the window shifts, we recalculate.

We get a deterministic component in the sense that part of the quote is approximated by a formula, often a smooth, differentiable one, and so on. There is not a whiff of randomness about it. But there is always an approximation error. And there is one more consideration.

The original quote is non-stationary. We subtract this smooth approximation from it. Question: where did the non-stationarity go? Has it disappeared? Is the residual stationary? If the residual is stationary, we can make a forecast. If it is not, we cannot forecast and have to continue smoothing, taking bites out of the non-stationarity. The absolute value of the residual decreases, and after the third smoothing the spread is usually a fraction of a pip, so it can finally be forgotten about.
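The "select the smoothing parameters on the sample, recalculate on shift" step can be sketched as a grid search. The moving-average smoother and one-step-ahead RMSE scoring below are my own stand-in for whatever smoother faa1947 actually used; the point is only that the criterion picks the parameter, and the search is re-run whenever the window shifts.

```python
import numpy as np

def best_ma_window(y, windows=range(2, 30)):
    # Pick the moving-average length whose one-step-ahead forecast has
    # the smallest RMSE on this sample. Re-run when the sample shifts.
    best_w, best_rmse = None, np.inf
    for w in windows:
        preds = np.array([y[i - w:i].mean() for i in range(w, len(y))])
        rmse = np.sqrt(np.mean((y[w:] - preds) ** 2))
        if rmse < best_rmse:
            best_w, best_rmse = w, rmse
    return best_w, best_rmse

# Hypothetical non-stationary quote: a random walk
rng = np.random.default_rng(6)
y = np.cumsum(rng.normal(0, 1, 400))
w, rmse = best_ma_window(y)
```

Scoring by one-step-ahead error (rather than in-sample fit) matters: in-sample fit alone would always favour the least smoothing.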

 
faa1947:

Simple logic. Suppose we have a signal and want to extract the deterministic component from it. Naturally, we want to do it optimally.

We know the criterion: RMSE (so as not to get bogged down with standard errors). This criterion lets us select the smoothing parameters for a particular sample. When the window shifts, we recalculate.

We get a deterministic component in the sense that part of the quote is approximated by a formula, often a smooth, differentiable one, and so on. There is not a whiff of randomness about it. But there is always an approximation error. And there is one more consideration.

The original quote is non-stationary. We subtract this smooth approximation from it. Question: where did the non-stationarity go? Has it disappeared? Is the residual stationary? If the residual is stationary, we can make a forecast. If it is not, we cannot forecast and have to continue smoothing, taking bites out of the non-stationarity. The absolute value of the residual decreases, and after the third smoothing the spread is usually a fraction of a pip, so it can finally be forgotten about.

In the end, the iterative procedure itself can be regarded as the optimal method of determining the deterministic component. The main thing is that it must lead to stationary white noise at the output, i.e. not only must the non-stationarity be removed, but the autocorrelation of the residuals as well, otherwise the forecast will be worthless. In short, the problem has long been known in this formulation, but I have not seen a solution for Forex in open access. But even then, who can say that the shape of the deterministic component in the analysis window is itself stationary, i.e. that it will not change when the window shifts? If it does change, the forecast is worthless.
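The whiteness requirement on the residual (no remaining autocorrelation) is exactly what a Ljung-Box-type statistic checks. A minimal numpy version, with a toy autocorrelated series of my own for contrast; the formal rejection threshold comes from the χ² distribution and is omitted here:

```python
import numpy as np

def ljung_box_stat(resid, max_lag=10):
    # Ljung-Box statistic: Q = n(n+2) * sum_k r_k^2 / (n-k).
    # Large values mean the residual is NOT white noise, i.e. the
    # extracted "deterministic component" missed some structure.
    n = len(resid)
    x = resid - resid.mean()
    denom = np.sum(x * x)
    q = 0.0
    for k in range(1, max_lag + 1):
        r_k = np.sum(x[k:] * x[:-k]) / denom
        q += r_k * r_k / (n - k)
    return n * (n + 2) * q

rng = np.random.default_rng(4)
white = rng.normal(0, 1, 500)
# Smoothing white noise leaves strong autocorrelation behind
colored = np.convolve(white, np.ones(5) / 5, mode="valid")
q_white = ljung_box_stat(white)      # small: consistent with white noise
q_colored = ljung_box_stat(colored)  # large: residual structure remains
```

A residual that fails this check would, per the post, make the forecast worthless even if the non-stationarity itself were removed.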

 
alsu:

If it does change, the forecast is worthless.

The ideal is not achievable.

Let's take an example.

Let's take a moving average with period T=10. For me that is 10 independent variables entering the calculation with a constant coefficient of 0.1.

Once computed, the fitting error is over 100 pips on H1.

What's the problem? Obviously the constant coefficient.

We take a regression on the 10 lagged values and compute the coefficients. They are not equal to 0.1. The error is smaller, but still about 100 pips.
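This MA-versus-regression comparison is easy to reproduce on synthetic data. A random walk stands in for the H1 quote (so the pip figures obviously will not match the post's), and the construction is my own illustration: an SMA(10) is the lag regression with all coefficients pinned at 0.1, and freeing them reduces the error.

```python
import numpy as np

rng = np.random.default_rng(5)
# Hypothetical "quote": a random walk standing in for an H1 close series
y = np.cumsum(rng.normal(0, 1, 1000))

T = 10
# Design matrix of the last T values; row j predicts y[j+T]
X = np.column_stack([y[i:len(y) - T + i] for i in range(T)])
target = y[T:]

# SMA(T): all 10 coefficients fixed at 1/T = 0.1
sma_pred = X.mean(axis=1)
sma_rmse = np.sqrt(np.mean((target - sma_pred) ** 2))

# Free regression on the same 10 lags
coefs, *_ = np.linalg.lstsq(X, target, rcond=None)
ols_pred = X @ coefs
ols_rmse = np.sqrt(np.mean((target - ols_pred) ** 2))
```

On a random walk the fitted coefficients pile onto the most recent lag rather than staying near 0.1, and the regression error drops below the SMA error, in line with the post's observation.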

Next question. Why 10 independent variables?

Next, why a linear combination of these variables?

Here is what I am getting at at this point in the reasoning.

We have to adapt: the coefficients, the number of independent variables, the functional form.

Is that all?

No, it's not.

We have put forward the concept of a model that adapts to the market, but the question arises: what exactly do we see in the market, or what do we take from the market?

If you take EViews, there is a set of tests that allows you to identify a wider set of parameters to approximate than the one above. I showed almost this complete set of parameters in the prediction thread.

 

That's right. That's all that's left:

Adapt [...] the coefficients, the number of independent variables, the functional form

A mere trifle ))
 
orb:
=) go on, go on) I have not heard much, I do not know much.
Read Ilya Prigogine. You will learn a lot. There is chaos in every dynamical system.