Matstat Econometrics Matan

 
denis.eremin:

I don't quite understand the question - why use white noise?

If you need such a series, you can generate a random walk series in Excel or another program and take its first differences - that would be white noise (a rough sketch follows below).

If a rough approximation is enough, the first differences of a price series are also quasi-white noise.
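A minimal sketch of the recipe described in the quote above, assuming Python with numpy (the names and numbers are illustrative, not from the thread):

import numpy as np

rng = np.random.default_rng(42)
eps = rng.normal(loc=0.0, scale=1.0, size=1000)  # i.i.d. Gaussian increments
walk = np.cumsum(eps)                            # random walk built from them
noise = np.diff(walk)                            # first differences recover the increments

# Up to the first dropped point this is the original i.i.d. series, i.e. white noise:
print(np.allclose(noise, eps[1:]))               # True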

The point is that in practice the noise in formulas is not always a randomly generated series.
It may be a series obtained by some calculations on the original data.
That is, there is some internal information in the noise that contributes to the accuracy of the overall model calculations.
So I'm confused by these interpretations of noise )) and wanted to check how each of you understands this noise:
randomly generated, or your own calculated noise.

 
Roman:

Aleksey, a question has come up.
I have been digging into econometric formulas, and many of them contain a variable that is white noise.
By definition, white noise has ideal characteristics: normality with constant unit variance.
Obviously such white noise is probably not to be found in reality. So the question is:
in practice, what is used as white noise?
Does this white noise have anything to do with the input data? For example, one could take the residuals as noise, but then the conditions of normality and constant variance might be violated.
Or should it really be extraneous noise, which can simply be randomly generated with the specified characteristics?
Or is that exactly the point: to get white-noise characteristics from the residuals, that is, normality, constant variance and no autocorrelation?

You just have to look at econometrics textbooks (Magnus, Verbeek, etc.). All the necessary points are usually spelled out there.

The point is that a model always takes into account an incomplete set of factors and justification is needed as to why the others are discarded. It is usually assumed that all the other factors simply add up to white noise, so that you don't have to scrutinise them. But this is just an assumption, a hypothesis that needs to be confirmed, which is usually done by examining the model residuals. If the model residuals don't look like white noise, then it's a bad model and needs to be changed to another.
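As a rough illustration of that residual check (a sketch assuming Python with numpy and statsmodels; the model and data are made up for the example, not taken from the thread):

import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(0)
t = np.arange(200)
y = 0.5 * t + rng.normal(scale=2.0, size=t.size)  # simulated data: trend plus noise

X = sm.add_constant(t)
fit = sm.OLS(y, X).fit()                          # candidate model

# Ljung-Box test: H0 = residuals are uncorrelated (consistent with white noise).
lb = acorr_ljungbox(fit.resid, lags=[10])
print(lb)  # a small p-value suggests leftover structure, i.e. the model should be changed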

White noise does not have to be Gaussian, but it is precisely its Gaussianity that allows OLS (least squares) to be applied to find the model parameters. If, for example, the noise is Laplace-distributed, then you will have to minimise the sum of absolute values rather than the sum of squares. This is not difficult to work out via the maximum likelihood principle.
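A sketch of that maximum-likelihood argument, for reference (the notation is mine, not from the post): for i.i.d. Laplace errors $\varepsilon_i = y_i - f(x_i, \theta)$ with density $p(\varepsilon) = \frac{1}{2b} e^{-|\varepsilon|/b}$, the log-likelihood is

$$\ell(\theta) = -n \log(2b) - \frac{1}{b} \sum_{i=1}^{n} \lvert y_i - f(x_i, \theta) \rvert ,$$

so maximising $\ell$ over $\theta$ amounts to minimising $\sum_i \lvert y_i - f(x_i, \theta) \rvert$ (least absolute deviations). With Gaussian errors the same argument gives $\sum_i (y_i - f(x_i, \theta))^2$, i.e. ordinary least squares.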

Thus, the last line of your post is correct.

 
Aleksey Nikolayev:

Just have a look at econometrics textbooks (Magnus, Verbeek, etc.). All the necessary points are usually spelled out there.

The point is that a model always accounts for an incomplete set of factors, and you need a justification for why the rest are discarded. It is usually assumed that all the other factors just add up to white noise, so that you don't have to scrutinise them. But this is just an assumption, a hypothesis that needs to be confirmed, which is usually done by examining the model residuals. If the model residuals don't look like white noise, then it's a bad model and needs to be changed to another.

White noise does not have to be Gaussian, but it is precisely its Gaussianity that allows us to apply OLS (least squares) to find the model parameters. If, for example, the noise is Laplace-distributed, then it will be necessary to minimise the sum of absolute values rather than the sum of squares. This is not difficult to work out via the maximum likelihood principle.

So the last line of your post is correct.

Exactly. I've got Magnus lying around somewhere, I'll have to look it up. Thank you. (chuckles)
Thanks for the clarification too, got it.

 
Aleksey Nikolayev:

If, for example, the noise is Laplace-distributed, then it is no longer the sum of squares that has to be minimised, but the sum of absolute values. This is not hard to figure out if you calculate via the maximum likelihood principle.

It's hard for me to figure it out via the maximum likelihood principle ) Can you help?
 
denis.eremin:

All numerical series are divided into three types - deterministic, random and stochastic.

Aren't "stochastic" and "random" the same thing?

 
PapaYozh:

Aren't "stochastic" and "random" the same thing?

No

 
PapaYozh:

Aren't "stochastic" and "random" the same thing?

In econometrics everything is turned on its head. What is commonly called random is there called stochastic, while a random process is a mixture of stochastic and deterministic.

 
PapaYozh:

Aren't "stochastic" and "random" the same thing?

Roughly speaking, the task is prediction or classification.

A deterministic process is 100% predictable.

A stochastic one is not predictable at all. Well, the whole world is unpredictable; only the automaton and Alejandro beat the coin...

The object of research is random processes, in which various methods and models try to isolate the deterministic component and the unpredictable residual.

 
denis.eremin:

Roughly speaking, the task is prediction or classification.

A deterministic process is 100% predictable.

A stochastic one is not predictable at all. Well, the whole world is unpredictable, only the automaton and Alejandro beat the coin....

The object of research is random processes in which various methods and models try to isolate the deterministic component and the residual, which is not predictable.

Yes...

A deterministic process does not need to be predicted as it is predetermined, i.e. known in advance.

A random process is random because there is no deterministic component.

 
PapaYozh:

Yes...

A deterministic process does not need to be predicted because it is predetermined, i.e. known in advance.

A random process is random because there is no deterministic component.

))) And if a random process has no deterministic component, how can it be predicted?

Can you give an example of a non-deterministic series which is nevertheless predictable?