Econometrics: why co-integration is needed

 
faa1947:
The shaman of technical analysis again. Is there no getting away from you?
You can't hide from us) Keep counting your numbers)
 
faa1947:
In TA you look for patterns with unknown statistical characteristics. It is very close to reading coffee grounds. I look for the statistical characteristics of the series and on that basis predict future behaviour. For example, back to the subject at hand: within this approach, the sacred cow "you cannot sit out a loss" is dead. Sitting it out is possible, because we will come back to zero anyway, and the loss on the way to zero does not grow without bound.
Of course it doesn't grow without bound, if the deposit is a million and the lot is 0.01)) Patterns are also sought with the fundamentals in mind, my friend:) And you're trying to describe the market with figures and cointegrations of all kinds - it's all a joke, just trust me)
 
alsu:

The simplest thing that follows from how both tests are constructed is that the residuals of the regression equations in the tests must be stationary and uncorrelated with the series itself; otherwise the method loses its meaning. For Granger - all of the above, but for any number of lags in the equations (which in practice is generally hard to achieve - hence this test is suited above all to macroeconomic data, where the series - annual, quarterly, monthly - are usually at most dozens of samples long, not millions)

And a lot of other subtleties... Normality of the residuals' distribution, for example... (also rarely satisfied in practice)

Plus, as far as causality is concerned, Granger introduced an excellent definition of it, but like any ideal, his formulation has proved unverifiable in practice. So the test of the same name, even when all the prerequisites are met, can only reliably show you the absence of causality when it really is absent - not its presence when it really is present.
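
(For readers who want to try this: a minimal sketch of a Granger test in Python with statsmodels, on synthetic data. The series, lag count and effect size here are illustrative, not taken from this thread.)

import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

# Synthetic stationary pair: x "Granger-causes" y with a one-bar delay.
rng = np.random.default_rng(0)
x = rng.standard_normal(500)
y = np.empty(500)
y[0] = rng.standard_normal()
y[1:] = 0.5 * x[:-1] + 0.5 * rng.standard_normal(499)

# The test asks whether lags of the SECOND column improve prediction of
# the FIRST; it reports F and chi-square variants for each lag order.
results = grangercausalitytests(np.column_stack([y, x]), maxlag=4)

Note that the caveat above still applies: a rejection argues against "no causality" on this sample; it does not certify causality.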

I like the very idea of getting rid of non-stationarity and making trading decisions based on a stationary series. The causality test is part of it; so are the lags. Normality is not needed, stationarity is enough.

But the problems remain. It is not clear to me which causes of non-stationarity are removed when the two series are combined? Let's set shifts aside as an unsolvable problem.

Although we could just say to hell with it, run the trading system over a long interval and look at the result.

 
faa1947:

It is not clear to me which causes of non-stationarity are removed when the two series are combined?

The existence of a stationary linear combination suggests a similar nature of the series - their origin, so to speak, from the same piece of reality). But these are rather general words.

If I were you, since cointegration interests you so much, I would try to determine how stable it is: if we increase the length of the sample, at what point does the cointegration equation cease to have solutions? And how do the cointegration coefficients change as a function of the series length? This may or may not yield a lot of useful information.

 
alsu:

The existence of a stationary linear combination suggests a similar nature of the series - their origin, so to speak, from the same piece of reality). But these are rather general words.

If I were you, since cointegration interests you so much, I would try to determine how stable it is: if we increase the length of the sample, at what point does the cointegration equation cease to have solutions? And how do the cointegration coefficients change as a function of the series length? This may or may not yield a lot of useful information.

Here is the cointegration equation

EURUSD = C(1)*GBPUSD + C(2) + C(3)*@TREND

We take a sample of 6,700 H1 bars and move a 118-bar (one-week) window across it. Shown are the changing coefficients (the third one is not shown) and the result of the unit root test.

I cannot draw any conclusions. It is clear that we have to fight the unit root, but the weapon for that fight is not clear.
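
(For anyone reproducing this outside EViews: a sketch of the same moving-window experiment in Python with statsmodels. The 118-bar window matches the post; the series names and the data source are placeholders.)

import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.tsa.stattools import coint

def rolling_coint(eur: pd.Series, gbp: pd.Series, window: int = 118) -> pd.DataFrame:
    """In each window, estimate EURUSD = C(1)*GBPUSD + C(2) + C(3)*@TREND
    by OLS and run the Engle-Granger test (H0: no cointegration), which
    uses the proper residual-based critical values rather than the plain
    ADF table."""
    rows = []
    for start in range(len(eur) - window + 1):
        y = eur.iloc[start:start + window].to_numpy(dtype=float)
        g = gbp.iloc[start:start + window].to_numpy(dtype=float)
        X = sm.add_constant(np.column_stack([g, np.arange(window)]))
        b = sm.OLS(y, X).fit().params         # [C(2), C(1), C(3)]
        _, pval, _ = coint(y, g, trend="ct")  # constant + trend, as in the equation
        rows.append((start, b[1], b[0], b[2], pval))
    return pd.DataFrame(rows, columns=["start", "C1", "C2", "C3", "eg_pvalue"])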

 
faa1947:

Here is the cointegration equation

EURUSD = C(1)*GBPUSD + C(2) + C(3)*@TREND

We take a sample of 6,700 H1 bars and move a 118-bar (one-week) window across it. Shown are the changing coefficients (the third one is not shown) and the result of the unit root test.

I cannot draw any conclusions. It is clear that we have to fight the unit root, but the weapon for that fight is not clear.

My point is this:

We take a sample from a given starting point of, say, 24 bars and increase its length: 25, 26, ... until we get bored. We watch the coefficients and record the moment when the equation stops being solvable. Ideally, this procedure should be repeated for different starting points.

If the dynamics of the coefficients are clear (not noise), we can draw conclusions about the general characteristics of the cointegration. For the second point, estimate the time constant of the cointegration.
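
(A sketch of that procedure, under the same illustrative assumptions as above. "Ceases to have solutions" is read here as "the Engle-Granger test stops rejecting no-cointegration", since the OLS fit itself always has a solution.)

import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.tsa.stattools import coint

def expanding_coint(eur, gbp, start: int, min_len: int = 24, alpha: float = 0.05):
    """Grow the sample 24, 25, 26, ... bars from a fixed starting point,
    recording the coefficient estimates and the Engle-Granger p-value."""
    y_all = np.asarray(eur, dtype=float)
    g_all = np.asarray(gbp, dtype=float)
    rows = []
    for n in range(min_len, len(y_all) - start + 1):
        y, g = y_all[start:start + n], g_all[start:start + n]
        X = sm.add_constant(np.column_stack([g, np.arange(n)]))
        b = sm.OLS(y, X).fit().params
        _, pval, _ = coint(y, g, trend="ct")
        rows.append((n, b[1], b[0], b[2], pval))
    df = pd.DataFrame(rows, columns=["length", "C1", "C2", "C3", "eg_pvalue"])
    lost = df.loc[df["eg_pvalue"] > alpha, "length"]  # first length at which
    return df, (int(lost.iloc[0]) if len(lost) else None)  # cointegration is lost

Repeating this for several starting points and plotting C1 against sample length would show whether the estimates settle down or remain noise.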

 
alsu:

My point is this:

If the dynamics of the coefficients are clear (not noise), we can draw conclusions about the general characteristics of the cointegration. For the second point, estimate the time constant of the cointegration.

Above are the graphs of the coefficients as the window is shifted bar by bar. There is no stability to speak of. Is the cointegrating equation mis-specified? Usually the trend specification is where the problem lies. The residual after detrending should be stationary. It is not. So instead of a coefficient we have noise.
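
(That diagnosis is easy to check mechanically. A sketch, assuming the quadratic trend specification that appears later in the thread:)

import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

def detrended_residual_pvalue(series) -> float:
    """Fit C + @TREND + @TREND^2 by OLS and ADF-test the residual;
    a small p-value is evidence the detrended series is stationary."""
    y = np.asarray(series, dtype=float)
    t = np.arange(len(y), dtype=float)
    X = sm.add_constant(np.column_stack([t, t ** 2]))
    resid = sm.OLS(y, X).fit().resid
    return adfuller(resid)[1]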
 
faa1947:
Above are the graphs of the coefficients when the window is shifted by one bar. There is no stability to speak of. Is the cointegration level incorrectly specified? Usually the trend specification is where the problem lies. The residual after detrending should be stationary. It is not. So instead of a coefficient, it's noise.

I don't know how to explain... I'll try.

What you/we/they calculate are not coefficients; they are estimates. We will never know the coefficients themselves, we can only estimate them with some degree of confidence. Since the series is random, the estimates are naturally noisy. Otherwise we would have to admit that our series is not random but completely deterministic. So noise is normal; but across different sample sizes we should still see some dependence, albeit a noisy one. That would indicate that the cointegration calculations make practical sense.
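
(That claim can be checked on synthetic data, where the true coefficient is known by construction. A sketch - all numbers illustrative: the spread of the estimates should shrink with sample size, and for cointegrating regressions it shrinks unusually fast, at rate 1/n rather than 1/sqrt(n).)

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
TRUE_C1 = 1.13  # known only because we simulate the data ourselves

def estimate_spread(n: int, trials: int = 200) -> float:
    """Std. dev. of the OLS estimate of C1 over many simulated samples of length n."""
    est = []
    for _ in range(trials):
        g = np.cumsum(rng.standard_normal(n))     # I(1) regressor: a random walk
        y = TRUE_C1 * g + rng.standard_normal(n)  # cointegrated: stationary error
        est.append(sm.OLS(y, sm.add_constant(g)).fit().params[1])
    return float(np.std(est))

for n in (50, 200, 800):
    print(n, estimate_spread(n))  # the spread of the estimates falls as n grows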

 
alsu:

I don't know how to explain... I'll try.

What you/we/they calculate are not coefficients; they are estimates. We will never know the coefficients themselves, we can only estimate them with some degree of confidence. Since the series is random, the estimates are naturally noisy. Otherwise we would have to admit that our series is not random but completely deterministic. So noise is normal; but across different sample sizes we should still see some dependence, albeit a noisy one. That would indicate that the cointegration calculations make practical sense.

Here is the cointegrating regression coefficient estimate:

Dependent Variable: EURUSD
Method: Dynamic Least Squares (DOLS)
Date: 04/26/12   Time: 10:29
Sample: 6619 6736
Included observations: 118
Cointegrating equation deterministics: C @TREND @TREND^2
Automatic leads and lags specification (lead=12 and lag=12 based on AIC criterion, max=12)
Long-run variance estimate (Bartlett kernel, Newey-West fixed bandwidth = 5.0000)
No d.f. adjustment for standard errors & covariance

Variable     Coefficient   Std. Error   t-Statistic   Prob.
GBPUSD          1.129724     0.137650      8.207248   0.0000
C              35.58951     22.84113       1.558133   0.1228
@TREND         -0.011004     0.006888     -1.597440   0.1137
@TREND^2        8.39E-07     5.16E-07      1.626326   0.1074

Look at the t-Statistic column. If you divide 100% by the value in this column, you get the relative error of the coefficient estimate. It is huge. Could this serve as the yardstick?
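
(For those without EViews, DOLS can be reproduced by hand: regress the dependent variable on the regressor and the deterministics, plus leads and lags of the regressor's first difference, with Newey-West standard errors. A sketch - not a line-for-line port of the EViews routine:)

import numpy as np
import pandas as pd
import statsmodels.api as sm

def dols(eur, gbp, leads_lags: int = 12):
    """EURUSD on GBPUSD, C, @TREND, @TREND^2 plus leads/lags of d(GBPUSD)
    (Stock-Watson DOLS); HAC errors with a Bartlett kernel stand in for
    EViews' long-run variance correction."""
    df = pd.DataFrame({"eur": np.asarray(eur, float), "gbp": np.asarray(gbp, float)})
    t = np.arange(len(df), dtype=float)
    df["trend"], df["trend2"] = t, t ** 2
    dg = df["gbp"].diff()
    for k in range(-leads_lags, leads_lags + 1):
        df[f"dg_{k}"] = dg.shift(-k)  # negative k = lead, positive k = lag
    df = df.dropna()
    X = sm.add_constant(df.drop(columns=["eur"]))
    return sm.OLS(df["eur"], X).fit(cov_type="HAC", cov_kwds={"maxlags": 5})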


 
faa1947:

Here is the cointegrating regression coefficient estimate:

Dependent Variable: EURUSD
Method: Dynamic Least Squares (DOLS)
Date: 04/26/12   Time: 10:29
Sample: 6619 6736
Included observations: 118
Cointegrating equation deterministics: C @TREND @TREND^2
Automatic leads and lags specification (lead=12 and lag=12 based on AIC criterion, max=12)
Long-run variance estimate (Bartlett kernel, Newey-West fixed bandwidth = 5.0000)
No d.f. adjustment for standard errors & covariance

Variable     Coefficient   Std. Error   t-Statistic   Prob.
GBPUSD          1.129724     0.137650      8.207248   0.0000
C              35.58951     22.84113       1.558133   0.1228
@TREND         -0.011004     0.006888     -1.597440   0.1137
@TREND^2        8.39E-07     5.16E-07      1.626326   0.1074

Look at the t-Statistic column. If you divide 100% by the value in this column, you get the relative error of the coefficient estimate. It is huge. Could this serve as the yardstick?

a) The t-statistic assumes that the data are normally distributed and is valid only for such data; otherwise it distorts the result.

b) What new branch of mathematical statistics says to divide 100% by the value of the t-criterion? Please enlighten me.
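
(For what it's worth, the arithmetic behind b) is not new: since t = Coefficient / Std. Error, dividing 100% by |t| is just the relative standard error of the estimate in percent. For the GBPUSD row above, 100% / 8.207248 ≈ 12.2%, the same as 100% × 0.137650 / 1.129724 ≈ 12.2%; for the trend terms, with |t| ≈ 1.6, it is about 60%.)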