Market prediction based on macroeconomic indicators
Very simple - do not open at the time of important news
I don't trade on the news. There is enough movement without it.
The Expert Advisor is fine-tuned, but the broker can throw a fit.
Continuing the theme. A reminder that my model predicts the market based on macroeconomic indicators. Out of 2,000 economic indicators, a few are selected based on their ability to predict the future. There is no looking ahead. Every quarter, when GDP growth and the other indicators come in, the model automatically runs through the history including the fresh data, selects the indicators that predicted both the old and the new data well, and makes new predictions 2 quarters ahead based on them. After my last prediction I found some bugs in the code. I also found a new transformation of the economic indicators that makes the predictions more accurate. A short piece of advice to other forecasters: differencing the input data is not very good for prediction, because it loses structure (signal) and turns the data into noise.
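To make the scheme concrete, here is a minimal sketch of a walk-forward selection of this kind, assuming the indicators sit in a pandas DataFrame X (one column per indicator, one row per quarter) and the target y is GDP growth. The scoring rule (absolute correlation), the linear model and all names are my own illustrative assumptions, not the author's actual code.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

HORIZON = 2   # predict 2 quarters ahead
N_SELECT = 5  # keep only the few indicators that predicted well so far

def walk_forward_forecast(X: pd.DataFrame, y: pd.Series) -> pd.Series:
    preds = {}
    for t in range(20, len(y) - HORIZON):
        # Only data up to and including quarter t is visible: no look-ahead.
        X_hist, y_hist = X.iloc[:t + 1], y.iloc[:t + 1]
        # Score each indicator by how well its value HORIZON quarters back
        # explained the growth that followed (absolute correlation here).
        scores = {
            col: abs(float(np.nan_to_num(
                np.corrcoef(X_hist[col].iloc[:-HORIZON],
                            y_hist.iloc[HORIZON:])[0, 1])))
            for col in X.columns
        }
        best = sorted(scores, key=scores.get, reverse=True)[:N_SELECT]
        # Refit on the selected indicators and predict 2 quarters ahead.
        model = LinearRegression().fit(X_hist[best].iloc[:-HORIZON],
                                       y_hist.iloc[HORIZON:])
        preds[y.index[t + HORIZON]] = float(
            model.predict(X_hist[best].iloc[[-1]])[0])
    return pd.Series(preds)
```

Each prediction in the series is made only from data that was already published at the time, which is the point of the walk-forward scheme.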
Here is a prediction of GDP growth in the US. The blue line is the actual data. The red line is the predictions. For each past prediction, only the data that was available two quarters earlier was used. The economy will grow moderately for now. Although, judging by the slowing of the smoothed GDP growth, a recession next year is quite possible. In the next post I will show the accuracy of GDP predictions by various banks and economists.
The S&P500 prediction is not ready yet. But it is much harder to predict the market than the economic indicators themselves. There is a lot of noise in prices.
And where can one look at the raw actual GDP data - is there a simple table somewhere?
Here: https://research.stlouisfed.org/fred2/series/GDPC96#
The table and growth calculations are attached.
The last value is not 0.7% but 1%.
That's for sure. I see that the Fed revised the GDP data last Friday. My predictions do not change due to revisions, as they use past, established data. Revisions go on for months and can change the advance estimate quite significantly. My Q4 prediction is 2.1% growth; the revision changed the advance estimate from 0.7% to 1%. It is not advisable to use the advance data as a measure of predictive accuracy. Here are examples of past revisions:
If you are interested, the economists' predictions can be found here: http://projects.wsj.com/econforecast/#ind=gdp&r=20
Below is a table of past predictions by the most accurate forecasters (Standard & Poor's, Bank of America, Moody's, Goldman Sachs, Northern Trust, Combinatorics Capital, UBS). There are about 50 forecasters in total. Most interesting is the first quarter of 2008, when GDP fell by 2.7%. Not a single economist predicted this 2 quarters ahead, though the economists in the table below were able to predict it 1 quarter ahead. The other 40 or so economists, including the biggest banks, continued to predict growth into the 4th quarter of 2008. To see the predictions of all the economists, use the link above; in the menu on the left, at the very bottom, go to Edition and then to the Download link.
http://library.hse.ru/e-resources/HSE_economic_journal/articles/18_01_07.pdf - an interesting article on the subject.
Thank you. I'll do some reading.
The hardest part of creating economic models is transforming the input data. If you look at the economic indicators (there are about 10,000 of them), they differ from each other in many ways. Some grow exponentially, others fluctuate within some range, others oscillate around zero with increasing magnitude, others change abruptly in the middle of their history, and so on. To build a model, all of these data must be transformed so that they have similar statistical characteristics that do not change over time. There are these options (a code sketch follows the list):
1. Calculate the relative rates of change: r[i] = x[i]/x[i-1] - 1. This transformation automatically normalises the data, there is no looking into the future, and nothing else needs to be done. But there is a big problem with zero values (x[i-1] = 0) and negative values, and there are many of these in economic indicators.
2. Calculate the increments d[i] = x[i] - x[i-1]. This transformation does not care about zero and negative values, but the increments grow over time for exponentially growing data such as annual GDP, i.e. the variance is not constant. For example, it is not possible to plot the dependence of GDP increments on the unemployment rate, because the unemployment rate fluctuates within a range with constant variance, while GDP grows exponentially, with exponentially growing variance. So the increments must be normalized by the time-varying variance. But estimating the latter is not easy.
3. Remove from the data a trend calculated, for example, by the Hodrick-Prescott filter, normalize the high-frequency residual by the time-varying variance, and use that as a model input. The problem here is that the Hodrick-Prescott filter and other filters based on polynomial fitting (Savitzky-Golay, lowess, etc.) look ahead. A moving average lags the data and is unsuitable for trend removal, especially for exponentially growing data.
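Here is a small sketch of options 1 and 2, plus one possible workaround for the look-ahead problem in option 3: re-running the Hodrick-Prescott filter on an expanding window and keeping only the endpoint residual, so every value is computed from past data alone. The use of pandas/statsmodels, the window length, and lamb=1600 (the usual value for quarterly data) are my assumptions for illustration, not the author's code.

```python
import pandas as pd
from statsmodels.tsa.filters.hp_filter import hpfilter

def rate_of_change(x: pd.Series) -> pd.Series:
    # 1. r[i] = x[i]/x[i-1] - 1: self-normalizing, no look-ahead, but
    #    undefined when x[i-1] = 0 and misleading when the series changes sign.
    return x / x.shift(1) - 1.0

def normalized_increments(x: pd.Series, window: int = 20) -> pd.Series:
    # 2. d[i] = x[i] - x[i-1], divided by an estimate of the time-varying
    #    standard deviation built only from increments already observed.
    d = x.diff()
    past_std = d.rolling(window, min_periods=4).std().shift(1)
    return d / past_std

def causal_hp_residual(x: pd.Series, lamb: float = 1600) -> pd.Series:
    # 3. The usual Hodrick-Prescott filter is two-sided and looks ahead.
    #    A workaround is to re-fit it on an expanding window ending at each
    #    date and keep only the last residual, so each value uses past data
    #    alone (slow, and the endpoint estimate is noisy).
    out = pd.Series(index=x.index, dtype=float)
    for t in range(8, len(x)):
        cycle, _trend = hpfilter(x.iloc[:t + 1], lamb=lamb)
        out.iloc[t] = cycle.iloc[-1]
    return out
```

The expanding-window residual still needs the same variance normalization as the increments before it can be compared across indicators.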
Any other ideas?
There is a peek into the future in my last GDP growth prediction. I only discovered it after publication. That's why the model predicted past events so well. I keep struggling.