Market prediction based on macroeconomic indicators - page 14

 

It seems to me that it makes sense to model on shorter periods, at least on daily data

If you take a year and a half of history, you can catch the long-term correlations and buy on corrections with a horizon of several days to several weeks.

If we take 4h, we need several months of history and a position horizon of a few days

 

Based on monthly opening prices from 01.10.2011 to the present, using the method from https://www.mql5.com/ru/articles/250


Universal Regression Model for Market Price Prediction
  • 2011.02.07
  • Yousufkhodja Sultonov
  • www.mql5.com
The market price forms as a result of a stable equilibrium between supply and demand, which in turn depend on a multitude of economic, political, and psychological factors. Directly accounting for all of these components is complicated both by the differing nature of the factors and by the way they exert their influence. Based on the regression model developed by the author, the article attempts to forecast the market price.
 

This is how the rating started immediately after the crisis (based on 7 data points from 01.03.2009 to 01.08.2009 - bold blue line), and the following could have been predicted (red-green line of calculated values), which is approximately what happened (blue line of actual data):


 
yosuf:

This is how the rating started immediately after the crisis (based on 7 data points from 01.03.2009 to 01.08.2009 - bold blue line), and the following could have been predicted (red-green line of calculated values), which is approximately what happened (blue line of actual data):


A prediction curve built so that each prediction is one step ahead of the last known price will, on average, have a much smaller prediction error than your prediction curve. There must also have been stretches of history where your trend prediction was exactly the opposite of what happened. I recommend that you run your method through history, calculate the RMS error of all predictions made without looking ahead, and normalize it by the RMS error of predictions made on a "future value equals last known value" basis. If your normalized RMS error is less than 1, I will take a huge interest in your method.
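The normalization described here is essentially Theil's U statistic. A minimal sketch in Python (the function name and array layout are illustrative, not from the thread):

```python
import numpy as np

def normalized_rmse(prices, predictions):
    """Compare a predictor's one-step-ahead errors with the naive
    'future value equals last known value' benchmark.

    prices      : array of actual prices p[0..T], oldest first
    predictions : predictions[t] is the forecast of prices[t] made
                  using only data up to prices[t-1] (no look-ahead)

    Returns RMSE(model) / RMSE(naive); a value below 1 means the model
    beats the naive last-value forecast.
    """
    prices = np.asarray(prices, dtype=float)
    predictions = np.asarray(predictions, dtype=float)

    model_err = prices[1:] - predictions[1:]   # model forecast errors
    naive_err = prices[1:] - prices[:-1]       # naive forecast errors

    rmse_model = np.sqrt(np.mean(model_err ** 2))
    rmse_naive = np.sqrt(np.mean(naive_err ** 2))
    return rmse_model / rmse_naive
```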
 

So, last week the figure for US GDP growth in the first quarter of 2015 was released. I ran my predictor for the next two quarters and this is what it produced.

GDP:

S&P500:

My prediction of Q1 GDP growth was not as low as the released figure. But the latter is only an initial estimate that will be revised several times. The S&P500 continues to rise. No recession is in sight yet.

 

Hello.

A long time ago, when I was a kid - in the sense of just beginning to understand the market - I came up with ideas similar to yours.

I showed that technical analysis (looking for all sorts of chart patterns, etc.) cannot work and has no predictive power.

Mathematically I did roughly what you did, but guided by the physical sense of it. Namely:

1. Save a long enough history, for example on M5: a hundred thousand bars or more, though several hundred thousand bars is better.

2. Select a segment of, e.g., n = 144 bars (12 hours) at the end of the chart. These are the most recent bars, the current state of the market.

3. In steps of 1 bar, slide a window of width n into the past and compute Pearson's linear correlation coefficient against the segment at the end. It gradually drops from 1 to some value, then rises again, and in these fluctuations it naturally approaches unity again and again...

4. Introduce a threshold, for example Level = 0.9 (it can be lower or higher, it does not matter much), and record the coordinates of all pieces of history whose correlation coefficient with the piece of interest at the end of the chart exceeds the threshold.

5. An important step: for the horizon of interest (the forecast length, for example also 12 hours) we take the "after-pieces", that is, the stretches of the chart that follow the pieces found to match the segment at the end. We preprocess them by sign (if the correlation of a matched piece is below minus Level, we invert the sign of its after-piece) and by scale (standard deviation or something similar), and then average the found and preprocessed after-pieces. The idea is simple: if there is some repeatability, some regularity of the form "after a feint like this, it usually goes like that", it will show up as statistically significant.

6. The result of the averaging is practically a horizontal straight line... Not perfect, of course, not exactly straight and not exactly horizontal, but clearly tending toward one: a 50/50 chance of going up or down. Some "forecast"...

7. Conclusion: this averaging result shows (and I can back it up with other arguments) that all technical-analysis ideas in the spirit of "once such a pattern has formed, it usually goes this way" are nonsense for suckers. (A code sketch of steps 2-6 follows below.)
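A minimal sketch of steps 2-6 in Python, assuming the history is available as a plain price array (function and parameter names are my own, not from the post): slide a window over the past, keep stretches whose Pearson correlation with the end-of-chart template exceeds the threshold in absolute value, sign- and scale-adjust their continuations, and average them.

```python
import numpy as np

def averaged_continuation(history, n=144, horizon=144, level=0.9):
    """Average the 'after-pieces' that followed past windows matching
    the current end-of-chart segment.

    history : 1-D array of prices (e.g. M5 closes), oldest first
    n       : template length at the end of the chart (144 bars = 12 h on M5)
    horizon : forecast length, i.e. length of each after-piece
    level   : correlation threshold (|r| >= level counts as a match)
    """
    history = np.asarray(history, dtype=float)
    template = history[-n:]
    t_std = template.std()

    continuations = []
    # slide a window of width n through the past, one bar at a time,
    # leaving room for its after-piece and for the template itself
    last_start = len(history) - 2 * n - horizon
    for start in range(last_start + 1):
        window = history[start:start + n]
        w_std = window.std()
        if w_std == 0.0:
            continue                          # flat window, correlation undefined
        r = np.corrcoef(template, window)[0, 1]
        if abs(r) < level:
            continue
        after = history[start + n:start + n + horizon]
        # re-anchor to the window's last price and rescale to the template's volatility
        scaled = (after - window[-1]) * (t_std / w_std)
        if r < 0:
            scaled = -scaled                  # invert anti-correlated matches
        continuations.append(scaled)

    if not continuations:
        return None
    # on real data this average tends toward a nearly flat line (step 6)
    return np.mean(continuations, axis=0)
```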

 
Dr.Fx:

Hello.

A long time ago, when I was a kid - in the sense of just beginning to understand the market - I came up with ideas similar to yours.

I showed that technical analysis (looking for all sorts of chart patterns, etc.) cannot work and has no predictive power.

Mathematically I did roughly what you did, but guided by the physical sense of it. Namely:

1. Save a long enough history, for example on M5: a hundred thousand bars or more, though several hundred thousand bars is better.

2. Select a segment of, e.g., n = 144 bars (12 hours) at the end of the chart. These are the most recent bars, the current state of the market.

3. In steps of 1 bar, slide a window of width n into the past and compute Pearson's linear correlation coefficient against the segment at the end. It gradually drops from 1 to some value, then rises again, and in these fluctuations it naturally approaches unity again and again...

4. Introduce a threshold, for example Level = 0.9 (it can be lower or higher, it does not matter much), and record the coordinates of all pieces of history whose correlation coefficient with the piece of interest at the end of the chart exceeds the threshold.

5. An important step: for the horizon of interest (the forecast length, for example also 12 hours) we take the "after-pieces", that is, the stretches of the chart that follow the pieces found to match the segment at the end. We preprocess them by sign (if the correlation of a matched piece is below minus Level, we invert the sign of its after-piece) and by scale (standard deviation or something similar), and then average the found and preprocessed after-pieces. The idea is simple: if there is some repeatability, some regularity of the form "after a feint like this, it usually goes like that", it will show up as statistically significant.

6. The result of the averaging is practically a horizontal straight line... Not perfect, of course, not exactly straight and not exactly horizontal, but clearly tending toward one: a 50/50 chance of going up or down. Some "forecast"...

7. The conclusion: this averaging result shows (and I can back it up with other arguments) that all technical-analysis ideas in the spirit of "once such a pattern has formed, it usually goes this way" are nonsense for suckers.

At what number of matches, and for what segment sizes, can that result be called statistically significant?

What is the criterion for selecting the threshold?

Finally, it would be good to understand whether small differences that the correlation coefficient does not pick up are nevertheless significant enough to change the direction of the prediction (in other words, aren't we throwing the baby out with the bathwater)?

Nobody has repealed the butterfly effect, especially at moments of consolidation, when nothing seems to be happening and a future move is being prepared.
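One rough way to answer the significance question above is a sign test on the matched continuations: under the null hypothesis of no repeatability, each after-piece ends up or down with probability 0.5. A hedged sketch (the function name and the |z| > 2 cutoff are my own choices, not from the thread):

```python
import numpy as np

def direction_significance(continuations):
    """Rough significance check for the averaged 'after-pieces'.

    continuations : array of shape (k, horizon) with the k matched,
                    sign- and scale-adjusted continuations.

    Sign test on the final value of each continuation: under the null
    of no repeatability the number of upward endings is Binomial(k, 0.5).
    Returns the z-score of the observed up-count (normal approximation);
    |z| > 2 is a reasonable first cut for 'statistically significant'.
    """
    c = np.asarray(continuations, dtype=float)
    k = c.shape[0]
    ups = np.sum(c[:, -1] > 0)                # continuations that ended higher
    z = (ups - 0.5 * k) / np.sqrt(0.25 * k)   # (x - np) / sqrt(n*p*(1-p))
    return z

# Example: with Level = 0.9 one might find k = 40 matches; even a
# 26-vs-14 split of up vs down endings gives z ~ 1.9, i.e. barely
# significant, which is why a handful of matches proves nothing.
```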

 
Vladimir:

So, last week the figure for US GDP growth in the first quarter of 2015 was released. I ran my predictor for the next two quarters and this is what it produced.

GDP:

S&P500:

If you don't mind )))) could you put what is shown on the charts into txt or csv? Two files:


Date;GDP;GDP_N;GDP_F

N = Blue line
F = Red line

------------------------ and

Date;S&P500;S&P500_Q;S&P500_F

Q = Blue line
F = Red line
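A minimal sketch of how the requested GDP file might be produced in Python with pandas; every value below is a placeholder, not data read from the charts in this thread (the S&P500 file would follow the same pattern with the Date;S&P500;S&P500_Q;S&P500_F header):

```python
import pandas as pd

# Placeholder sketch of the requested layout; none of these numbers
# come from the charts in this thread.
gdp = pd.DataFrame({
    "Date":  ["2015-03-31", "2015-06-30", "2015-09-30"],
    "GDP":   [1.0, None, None],    # released figure (placeholder)
    "GDP_N": [1.0, None, None],    # blue line (placeholder)
    "GDP_F": [1.5, 2.0, 2.2],      # red line, forecast (placeholder)
})
# Semicolon-separated, matching the "Date;GDP;GDP_N;GDP_F" header
gdp.to_csv("GDP.csv", sep=";", index=False)
```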
 

New predictions:

S&P500:

GDP:

No change in the trend so far. The economy will grow at a slow pace. There is no inflation. Prices are rising moderately, wages are rising slowly. No reason for interest rates to rise. My model predicts that the Fed will not change interest rates this year. However, since interest rates are decided by people who make mistakes, you cannot trust the mechanical predictions of these rates.

 
Vladimir:

New predictions:

S&P500:

GDP:

No change in the trend so far. The economy will grow at a slow pace. There is no inflation. Prices are rising moderately, wages are rising slowly. No reason to raise interest rates. My model predicts that the Fed will not change interest rates this year. But since interest rates are decided by people who make mistakes, you can't trust mechanical predictions about interest rates.

The Federal Reserve absolutely WANTS to raise interest rates THIS YEAR - IN THE AUTUMN. It has announced this itself. Solemnly.

Reason: