Market prediction based on macroeconomic indicators

 
Sergiy Podolyak:

What do you mean? I predicted the global crisis, back in 2007.


All "predictions" in this life are worth NOTHING - only money is worth something.

Did you make a million on that prediction?

There are millions of people around the world who have "predicted" something - only a few have made money.

If you were sure, you should have mortgaged your flat and gone all in.

 
Дмитрий:

All "predictions" in this life are worth NOTHING - only money is worth something.

Did you make a million on this prediction?

There are millions of people around the world who have "predicted" something - only a few have made money.

If you were sure, you should have mortgaged your flat and gone all in.

Cossack, your greed reeks of Goldman Sachs. You should at least disguise yourself somehow, you know.....

Not everything is measured in money. The only way you'll understand that is if you become an economist or a trader.

 
Sergiy Podolyak:

Cossack, your greed reeks of Goldman Sachs. You should at least disguise yourself somehow, you know.....

Not everything is measured in money. The only way you'll understand it is if you become an economist or a trader.

)))

You can't be a TRADER without measuring everything in terms of money!

You can be an economist, but not a trader.

If you had gone all in in 2007, you would have been carried out of the market by the fall of 2008.

 
)))) I need to drop by Lomonosov and box Chernyak's ears - who did he teach......
 
Vladimir: The other columns are different economic predictors. GDP is in column 1166.......
The data is not great to say the least. + Too much stuff.))
Throw out -
GDP = 144.1876*COL83
GDP = COL226
GDP = COL739
GDP = 62 + COL1128
GDP = 0.001*COL1168
etc...
(COL83 = column 83)
Didn't do much digging (less than matlab searched)))
Export from matlab with ';' as separator: dlmwrite('myfile',Data,';')
Off the cuff, I threw together two models 1 observation ahead (red and green).

Right hand side of the vertical line = application of the models to new data...


 
Vizard_:
The data is not great to say the least. + Too much stuff.))
Discard -
GDP = 144.1876*COL83
GDP = COL226
GDP = COL739
GDP = 62 + COL1128
GDP = 0.001*COL1168
etc...
(COL83 = column 83)
Didn't do much digging (less than matlab searched)))
Export from matlab with ';' as separator: dlmwrite('myfile',Data,';')
Off the cuff, I threw together two models 1 observation ahead (red and green).

Right hand side of the vertical line = application of the models to the new data...


Not bad for a start. What year does the vertical line correspond to? And how many predictors are in the models?
 
Vladimir:

Well, it's spring, the bear is awake, hungry ...

Shall I show you how the S&P500 fell in 2008 during negative GDP growth, or do you want to see it for yourself?

Go to the beginning of the thread and read about my goals - predicting crashes and avoiding long positions before they occur. I'm not interested in trading, which is why I use quarterly data. The main thing is capital preservation. And how the S&P500 fluctuates around its trends and flats is of no interest to me.

>> Get rid of that radio-technical arrogance of thinking all economists are idiots... What makes you think you are the only one who reads these thousands of economic indicators? They read them too.

I don't think they are all idiots, but most of them are: they either postulate what is to come, or they keep predicting a recession until it happens and then declare themselves geniuses. Not a single economist predicted the 2008 crash, so yes, idiots, except for a couple. The Fed uses its DSGE model with 16 generally accepted indicators to predict the economy, and that model is dumb. And you probably haven't even heard of such a model.

It has nothing to do with radio engineering and correlation. I don't use radio-engineering methods, but I don't see any harm in them either. You are arrogant in the sense that you refuse to admit the existence of methods of analysis and model building different from those generally accepted in economics. Ignoring the progress of modelling and machine learning in other branches of science is arrogance, or even stupidity - it leaves no room for new discoveries.

That's right, there are no authorities at the cutting edge. Respect the authorities, but think with your own head and doubt everything; even when you are sure, leave a small percentage of doubt - that small percentage may suddenly lead to a breakthrough.

The meme "during negative GDP growth" is a delight :))))

 
Vladimir:
Not bad for a start. What year does the vertical line correspond to? And how many predictors are in the models?
Not good. I looked out of curiosity and immediately deleted them all. There are no decent predictors in the data, in my opinion. I think I trimmed the first four lines; there were a lot of empty values.
I built the models on the first 100 observations. In fact it comes out to 104, and from 105 it is already OOS. The green one is very simple, 6-7 predictors without transformation. The red one has twice as many.
+ I also took absolute values, sines, etc., and that shows on the LSS - it starts to storm)))) In both (for stability) there are no coefficients, just simple formulas between the predictors used. Not normalized; I then took first differences (increments) to make it easier to see where the flaws are. There are not enough observations; there should be more.
For a model to make more or less "physical sense" - like unemployment and/or blah-blah-blah and their relationship, etc. - I tried it and failed))) If you retrain on every observation (quarter), as you do, it doesn't matter much what you feed in, as long as there is a cut-off. As far as I understand, the net picks the predictors itself; nevertheless, it's better to clean the data beforehand...
 

My Matlab code first removes the predictors that have NaN anywhere in the simulated history, then transforms all the data by the same method, then walks through the history testing each of the roughly 2,000 predictors and their lagged versions for its ability to predict the next value, accumulates each predictor's prediction error, and finally outputs a list of predictors sorted by that error. If this is done at every past moment in history, taking the best predictors at that moment and predicting the future, the result is pretty decent for a few years until a recession happens. At such times the past best predictors stop predicting the fall of GDP well and are replaced by new ones. And so it goes until the next recession. Whether there is a universal formula for the dependence of GDP on some key predictors, I do not know. If we add another hundred years of history, then at the end of those hundred years we have a list of predictors that predicted all past recessions more or less well, but when the next recession comes they may again be replaced by new predictors.
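A rough Matlab sketch of that walk-forward ranking, as I read the description - the variable layout (X as observations-by-predictors, y as GDP growth), the single lag and the plain least-squares fit are my illustration, not the code from the thread:

% X: T-by-P matrix of quarterly indicators, y: T-by-1 GDP growth (assumed layout)
X = X(:, ~any(isnan(X), 1));              % drop predictors with NaN anywhere in the history
[T, P] = size(X);
err = zeros(1, P);                        % accumulated prediction error per predictor
for t = 20:T-1                            % walk forward once there is some history to fit on
    for p = 1:P
        b = [ones(t-1,1) X(1:t-1,p)] \ y(2:t);   % fit y(i+1) ~ x(i) on the past only
        yhat = [1 X(t,p)] * b;                   % one-step-ahead forecast
        err(p) = err(p) + (y(t+1) - yhat)^2;     % accumulate squared prediction error
    end
end
[~, order] = sort(err);                   % predictors ranked by accumulated error
best = order(1:10);                       % e.g. keep the ten best for the next forecast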

Intuitively picking predictors is also wrong. For example, is the unemployment rate a leading or a lagging predictor? Does high unemployment cause a recession, or does a recession cause high unemployment? It seems to me that recession causes high unemployment, so using unemployment to predict recessions is not an option. But the decision to use any predictor in the model is made by my code based on accumulated prediction errors. So far the leading role in my model is taken by predictors based on private investment in house building and on domestic consumption. This is probably logical, because houses and appliances are a big part of GDP. If people don't buy houses, refrigerators and televisions, production goes down, GDP goes down, factories fire workers, unemployment goes up, and consumption drops even further.

Republicans and Democrats get the country out of recession in different ways. The Democrats give money to the low-wage population (vouchers) to increase their consumption, or encourage immigration to create new consumers. Republicans argue that a one-time $500-700 allowance for poor families will not let them buy a new house or car and move the economy forward. Instead of giving money to the poor, they prefer to lower taxes, especially on investments. Their theory is that the rich, keeping more money thanks to lower taxes, will buy more expensive things (houses, cars, etc.), which increases consumption where it matters, or will invest the money in businesses, which reduces unemployment, raises purchasing power and increases consumption. Reaganomics was based on this.

 
Vladimir:

1. Calculate the relative velocities: r[i] = x[i]/x[i-1] - 1. This transformation automatically normalises the data, there is no looking into the future, and nothing else needs to be done. But there is a big problem with zero values (x[i-1] = 0) and negative values, and there are many of these in economic indicators.

2. Calculate increments d[i] = x[i] - x[i-1]. This transformation does not care about zero or negative data, but the increments grow over time for exponentially growing series such as GDP. I.e. the variance is not constant. For example, it is not possible to plot the dependence of GDP increments on the unemployment rate, because the unemployment rate fluctuates within a range with constant variance, while GDP grows exponentially, with exponentially growing variance. So the increments must be normalized by the time-varying variance. But calculating the latter is not easy.

3. Remove from the data the trend calculated, for example, by the Hodrick-Prescott filter, normalize the high-frequency residual by the time-varying variance and use it as a model input. The problem here is that the Hodrick-Prescott filter and other filters based on polynomial fitting (Savitzky-Golay filter, lowess, etc.) look into the future. A moving average lags the data and is unsuitable for trend removal, especially on exponentially growing data.
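For illustration, a rough Matlab sketch of the three variants on a single series x (a column vector of quarterly values); hpfilter needs the Econometrics Toolbox, and the trailing-window variance estimate is just one possible choice:

r = x(2:end) ./ x(1:end-1) - 1;          % 1. relative velocity; breaks when x(i-1) = 0 or the sign flips
d = diff(x);                             % 2. plain increments; their variance grows with an exponential series
[trend, resid] = hpfilter(x, 1600);      % 3. Hodrick-Prescott trend, lambda = 1600 for quarterly data (looks ahead!)
sigma = sqrt(movvar(resid, [19 0]));     %    time-varying variance over the trailing 20 quarters only
z = resid(20:end) ./ sigma(20:end);      %    normalized residual; drop the warm-up where the window is too short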

Any other ideas?

I use (x[i] - x[i-1]) / (x[i] + x[i-1]). Negative data is as good as positive data. Normalization in [-1, +1] is imho better than in [0, 1].
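For what it's worth, a quick Matlab check of this symmetric increment on an exponentially growing series (the series and the growth rate are made up for illustration):

x = 100 * 1.01 .^ (0:200)';                               % 1% growth per step
s = (x(2:end) - x(1:end-1)) ./ (x(2:end) + x(1:end-1));   % bounded in [-1, +1]
% every element equals (1.01 - 1)/(1.01 + 1) ~ 0.004975, so the variance stays
% constant even though x grows exponentially; the only failure case is x(i) + x(i-1) = 0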