Econometrics: one step ahead forecast - page 47

 
faa1947:

A week ago, I suggested a plan of action:

2. I suggest that everyone who is interested:

a) discuss these results

b) modernise this model

c) propose their models

3. I am ready to implement the results of discussion and modernisation in code and post the results.

Let me remind you what the models look like:

a) For EURUSD on lags: EURUSD = hp(-1 to -4) + hp_d(-1 to -2)

b) For DX:

DXM = 1/DX, i.e. we use the reciprocal of the quote

EURUSD = DXM_HP(-1 to -4) + DXM_HP_D(-1 to -2)

In these formulas HP is the Hodrick-Prescott indicator, and HP_D is the residual = quote - indicator. The numbers in brackets are lags relative to the current bar; (-1 to -4) means the last four bars.

After the coefficients are estimated, the actual equation is as follows:

EURUSD = -1552.7613734*DXM_HP(-1) + 4731.89082764*DXM_HP(-2) - 4360.68995095*DXM_HP(-3) + 1287.82064375*DXM_HP(-4) - 98.9244837504*DXM_HP_D(-1) - 131.011472103*DXM_HP_D(-2)
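For anyone who wants to reproduce model (b) outside EViews, here is a minimal Python sketch under assumptions not stated in the thread: statsmodels' hpfilter stands in for the Hodrick-Prescott indicator, the smoothing parameter is a placeholder, and the regression is plain OLS on the lags listed above.

```python
# Minimal sketch of model (b): OLS of EURUSD on lags of the HP trend and HP
# residual of DXM = 1/DX. statsmodels' hpfilter stands in for the HP indicator;
# the smoothing parameter `lam` is an assumption, not a value from the thread.
import pandas as pd
import statsmodels.api as sm
from statsmodels.tsa.filters.hp_filter import hpfilter

def fit_dxm_model(eurusd: pd.Series, dx: pd.Series, lam: float = 1600.0):
    dxm = 1.0 / dx                           # DXM = 1/DX, reciprocal of the dollar index quote
    hp_d, hp = hpfilter(dxm, lamb=lam)       # returns (cycle, trend): trend = HP, cycle = quote - HP
    lags = {f"DXM_HP(-{k})": hp.shift(k) for k in range(1, 5)}            # trend lags -1..-4
    lags.update({f"DXM_HP_D(-{k})": hp_d.shift(k) for k in range(1, 3)})  # residual lags -1..-2
    data = pd.concat([eurusd.rename("EURUSD"), pd.DataFrame(lags)], axis=1).dropna()
    return sm.OLS(data["EURUSD"], data.drop(columns="EURUSD")).fit()      # no constant, as in the equation above

# res = fit_dxm_model(eurusd, dx)   # `eurusd` and `dx` are series the reader supplies
# print(res.params)                 # compare with the coefficients quoted above
```

Note that filtering the whole sample with HP uses future bars, so for an honest one-step-ahead test the filter and coefficients have to be re-estimated bar by bar, which is exactly what this thread is about.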

Anyone interested - take part in the econometrics exercise!

Of course, some progress has been made. In any case, the discussion of forecast error was clear progress, something unthinkable for TA apologists.

But it is only the beginning of a journey.

I await manna from avtomat with his state space.

My suggestion to yosuf to run his model still stands.

I also intend to clarify the importance of the prediction error.


I read the thread... thought it would be interesting... my mistake...

apart from the TA and NS shit, there's nothing new... the forecasting ideology is still at the same lame-ass, meaningless level...

the road to nowhere in general...

 
Vizard:


read the thread... thought it would be interesting... my mistake...

nothing new, apart from the TA and NS... the forecasting ideology is still at the same lame-duck, meaningless level...

the road to nowhere...

Unfortunately, I have to agree. Very rarely is there anything of substance: general words and a road to nowhere.
 
faa1947: In any case, the discussion of prediction error was clear progress, something unthinkable for TA apologists.

I want to be clear: I was almost never a TA apologist (well, except at the very beginning, after the first blow-up of a real deposit, when I played with various gadgets). I no longer use any of the standard indicators, including moving averages, regressions and the other usual tools.

All these tools are just a way of bringing the data into a form a human can perceive. He does not like the fractals and the jumps; he wants something smooth and easily predictable. So he substitutes a smoothed fake for the real world, refusing to look at it directly and explore it.

He thinks moving averages/regressions are easier to predict. Well, yes, easier, since they are smoother. But the crux of the matter is that when you convert a forecast of the moving average back into a forecast of the price movement, its error is multiplied, roughly speaking, by the period of the average. And the whole point of the predicted smoothness disappears. This is exactly what you are running into.

In short, I don't believe in any visible smoothness of the market. That said, I don't deny that it may have some stationary characteristics. But they are not that simple; you won't see them on a chart no matter how hard you look.

 
faa1947:
Unfortunately, I have to agree. Very rarely is there anything of substance: general words and a road to nowhere.


On the merits, I suggested this long ago: start by doing everything automatically...

I can tell you how I did it a few years ago (I can't code, though ))))

Take any forecasting packages... load quotes or whatever you want to put into the model... build several models (different models, to test different ideas)... some packages have macros and batch modes: use samples for training (take as many as the model needs), do the training, then dump the result to a file (you can also screenshot the model output and save it)... Then repeat everything, adding new quotes (if you predict a day ahead, add a day; if the sample is fixed, slide it) so the whole thing runs in a loop... Leave a couple of computers grinding for a couple of months... then open the screenshots or the model output files and look at the forecasts on history... much will become clear... no need to wait for forecasts day by day...
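The procedure described here is essentially a walk-forward loop. A generic sketch, with `fit_model` as a stand-in for whichever forecasting package or model is being tested (nothing here refers to a specific package):

```python
# Generic walk-forward loop: re-train on a fixed-length sample, forecast one
# step ahead, record it, slide the window forward by one bar, repeat.
# `fit_model` is a placeholder callable returning an object with .forecast(h).
import pandas as pd

def walk_forward(quotes: pd.Series, window: int, fit_model, horizon: int = 1) -> pd.DataFrame:
    rows = []
    for t in range(window, len(quotes) - horizon + 1):
        train = quotes.iloc[t - window:t]              # training sample ends at bar t-1
        model = fit_model(train)                       # re-train on every step
        fc = float(pd.Series(model.forecast(horizon)).iloc[-1])
        rows.append({"time": quotes.index[t + horizon - 1],
                     "forecast": fc,
                     "actual": float(quotes.iloc[t + horizon - 1])})
    return pd.DataFrame(rows)                          # the whole forecast history, reviewable at once
```

Reviewing the resulting forecast/actual table over history is the "much will become clear" step.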

 
Mathemat: And the whole point of the predicted smoothness disappears. That's exactly what you are running into.

In short, I don't believe in any visible smoothness of the market. That said, I don't deny that it may have some stationary characteristics. But they are not that simple; you won't see them on a chart no matter how hard you look.

And the whole point of the predicted smoothness disappears. That's exactly what you are running into.

I have no smoothness at all. The model I'm investigating contains the quote in full: a trend is extracted with HP, and the residual between this trend and the quote is added back to it. Not a pip of information in the original quote is lost, unlike in any TA approach.

In short, I don't believe in any visible smoothness of the market.

I totally agree.

That said, I don't deny that it may have some stationary characteristics. But they are not that simple; you won't see them on a chart no matter how hard you look.

Generally speaking, they are not inherent in it. If stationarity turns up anywhere, it is an accident. The idea is different:

1. We sequentially extract from the quote whatever can be expressed by a formula.

2. We stop the process when either a) the residual's range is too small (e.g. less than a pip), or b) the residual is stationary, which means it can be replaced by its variance (a sketch follows below).
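A rough sketch of that stopping rule, with assumed thresholds: the one-pip cutoff and the ADF p-value are illustrative, and re-applying the HP filter at each pass is only one way of "extracting what a formula can express".

```python
# Rough sketch of the decomposition loop with the two stop conditions above.
# Repeatedly applying the HP filter is only one possible "formula" extraction;
# the pip size and the ADF p-value cutoff are illustrative assumptions.
import pandas as pd
from statsmodels.tsa.filters.hp_filter import hpfilter
from statsmodels.tsa.stattools import adfuller

def decompose_quote(quote: pd.Series, lam: float = 1600.0,
                    pip: float = 0.0001, max_passes: int = 5):
    residual, components = quote.copy(), []
    for _ in range(max_passes):
        cycle, trend = hpfilter(residual, lamb=lam)
        components.append(trend)                       # the part expressed "by a formula"
        residual = cycle
        if residual.max() - residual.min() < pip:      # (a) range smaller than a pip
            break
        if adfuller(residual.dropna())[1] < 0.05:      # (b) residual looks stationary (ADF p-value)
            break
    return components, residual
```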

But now the most important thing: does such a model actually predict? The calculated error is fine, but it is the error of the forecast value, not the error (probability) of that forecast being correct!

It was in an attempt to solve this problem that I started this topic and backed the two articles. And it is a general problem that does not depend on the theory: TA, parametric or non-parametric econometrics.
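To make the distinction concrete, a small numpy sketch (the arrays are whatever forecast history the reader has; nothing is taken from the thread): the first function measures the error of the forecast value, the second estimates the probability that the forecast is directionally correct, and the two can disagree.

```python
# Value error vs. probability of being right: two different questions.
import numpy as np

def value_error(forecast: np.ndarray, actual: np.ndarray) -> float:
    """RMSE of the forecast *value*, in price units."""
    return float(np.sqrt(np.mean((forecast - actual) ** 2)))

def direction_hit_rate(forecast: np.ndarray, actual: np.ndarray, previous: np.ndarray) -> float:
    """Empirical probability that the forecast gets the *direction* of the move right."""
    return float(np.mean(np.sign(forecast - previous) == np.sign(actual - previous)))
```

A model can have a small RMSE and still call the direction no better than a coin flip, which is exactly the gap being pointed at.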

 
Vizard:


I suggested this long ago: start by doing everything automatically...

I can tell you how I did it a few years ago (I can't code, though ))))

Take any forecasting packages... load quotes or whatever you want to put into the model... build several models (different models, to test different ideas)... some packages have macros and batch modes: use samples for training (take as many as the model needs), do the training, then dump the result to a file (you can also screenshot the model output and save it)... Then repeat everything, adding new quotes (if you predict a day ahead, add a day; if the sample is fixed, slide it) so the whole thing runs in a loop... Leave a couple of computers grinding for a couple of months... then open the screenshots or the output files and look at the forecasts on history... much will become clear... no need to wait for forecasts day by day...

I don't need any of that. I know how to make EAs with a profit factor over 5. So what? All of them degrade to the point where I have to throw them away. The worst part is that I cannot tell the beginning of that degradation from just another drawdown.
 
faa1947: But it is the error of the forecast value, not the error (probability) of that forecast being correct!

Hehe, that deserves top marks! So you have to look elsewhere, somewhere with at least a hint of an assessment of correctness.

Here is an article about the Bayesian criterion (the one where p is a probability). Have a look and see if you like it. I'm hooked (especially the interpretation of the Bayesian criterion as a measure of the evidence carried by new data). I'm looking for more material on the Bayesian approach. Interestingly, it is widely used in very practical radio engineering (radars and other military kit).

My point is that we should not stop at a single approach. Even probability theory has several different interpretations.

P.S. And here is the first article in the series by the same author. That is probably where you should start, not with the second article.

Don't be put off by the fact that it is about biostatistics. The approach can be applied anywhere if you use your head.
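As a toy illustration of the Bayesian reading of "evidence carried by new data" (all numbers below are made up, not taken from the articles): how a run of correct directional calls shifts the probability that a model beats a coin flip.

```python
# Toy Bayes update: prior belief that the model is "good" (hits with probability
# p_good) vs. a fair coin, updated by an observed hit count. Illustrative only.
from math import comb

def posterior_better_than_coin(hits: int, n: int, p_good: float = 0.6, prior: float = 0.5) -> float:
    like_good = comb(n, hits) * p_good**hits * (1 - p_good)**(n - hits)
    like_coin = comb(n, hits) * 0.5**n
    return prior * like_good / (prior * like_good + (1 - prior) * like_coin)

print(round(posterior_better_than_coin(60, 100), 2))   # 60/100 correct lifts a 50% prior to ~0.88
```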

 
Mathemat:

Hehe, that deserves top marks! So you have to look elsewhere, somewhere with at least a hint of an assessment of correctness.

Here is an article about the Bayesian criterion (the one where p is a probability). Have a look and see if you like it. I'm hooked (especially the interpretation of the Bayesian criterion as a measure of the evidence carried by new data). I'm looking for more material on the Bayesian approach. Interestingly, it is widely used in very practical radio engineering (radars and other military kit).

My point is that we should not stop at a single approach. Even probability theory has several different interpretations.

P.S. And here is the first article in the series by the same author. That is probably where you should start, not with the second article.

Don't be put off by the fact that it is about biostatistics. The approach can be applied anywhere if you use your head.

The question of the predictive capability of the approach, and of the particular model within that approach, has to be answered before anything else.

The most serious approaches are in automatic control theory. There you get concrete estimates. As an echo of control theory in econometrics, there are models in state space.

Supposedly something similar is possible with smoothing by cubic splines and wavelets, but that is not available in EViews.
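For what a state-space alternative might look like outside EViews, here is a hedged sketch using statsmodels' UnobservedComponents (a local linear trend estimated by the Kalman filter); the choice of component is an assumption, not something prescribed in the thread. Its attraction for this discussion is that the one-step-ahead forecast comes with its own error variance.

```python
# Hedged sketch: a local linear trend in state space, fit by Kalman filtering,
# returning the one-step-ahead forecast together with its confidence interval.
import pandas as pd
import statsmodels.api as sm

def state_space_one_step(quote: pd.Series, alpha: float = 0.05):
    model = sm.tsa.UnobservedComponents(quote, level="local linear trend")
    result = model.fit(disp=False)
    fc = result.get_forecast(steps=1)
    return fc.predicted_mean.iloc[0], fc.conf_int(alpha=alpha).iloc[0]
```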

 
Mathemat:

Hehe, that deserves top marks! So you have to look elsewhere, somewhere with at least a hint of an assessment of correctness.

Here is an article about the Bayesian criterion (the one where p is a probability). Have a look and see if you like it. I'm hooked (especially the interpretation of the Bayesian criterion as a measure of the evidence carried by new data). I'm looking for more material on the Bayesian approach. Interestingly, it is widely used in very practical radio engineering (radars and other military kit).

My point is that we should not stop at a single approach. Even probability theory has several different interpretations.

P.S. And here is the first article in the series by the same author. That is probably where you should start, not with the second article.

Don't be put off by the fact that it is about biostatistics. The approach can be applied anywhere if you use your head.

)))) Well, to him all non-econometricians are amateurs!!! -- uh... how shall I put this delicately... -- incompetent!!! there!!! :)))))))))))


thank god it's not a techie you're offering him...

 
faa1947:
I don't need any of that.


If those tests had been carried out... then the questions about the error, about building the model this way, and indeed the whole topic, would not have arisen...

Well, don't do it then, don't... our job is just to suggest...
