Dependency statistics in price quotes (information theory, correlation and other feature-selection methods)

 
alexeymosc:

I myself spent many months mastering neural networks ...

I keep being amazed that people who have mastered some pretty complicated things (to NS add DSP as well) don't bother to master the science directly relevant to their business.

And in general, I can't understand why there is no econometrics on this site. I started searching about six months ago and got a couple of links, about anything but econometrics.

...tried different kinds of transformations of the original time series to improve its stationarity, because NS are sensitive to this and otherwise train inadequately

From the point of view of econometrics, NS are nothing more than a smoothing function, and in my opinion not the most successful one. There are other smoothing methods that are more efficient, manageable, better developed and more transparent. More importantly, these methods usually come with ways of evaluating the smoothing results.

... but stationarity tests have not been conducted.

A stationarity test is just the beginning. The Dickey-Fuller test tells you nothing in the vast majority of cases, yet it always gives numbers. You have to go step by step, and at almost no step do you get a result that lets you strictly reject or accept the corresponding null hypothesis. For example, when comparing two models with a test for one type of heteroscedasticity you get two figures, 30% and 40%, for the probability of no heteroscedasticity; with a test for another type of heteroscedasticity, 50% and 20%. The question is: which model is better?
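
To make that concrete, here is a minimal sketch in Python with statsmodels (the random-walk "price" and both residual series are purely illustrative assumptions): the tests dutifully return numbers, but weighing two models against each other remains a judgment call.

import numpy as np
from statsmodels.tsa.stattools import adfuller
from statsmodels.stats.diagnostic import het_arch

rng = np.random.default_rng(0)
price = np.cumsum(rng.normal(size=2000))   # a random walk stands in for a quote series

# Augmented Dickey-Fuller test: the null hypothesis is that the series has a unit root.
stat, pvalue, usedlag, nobs, crit, icbest = adfuller(price)
print(f"ADF statistic: {stat:.3f}, p-value: {pvalue:.3f}")
# A large p-value means the unit-root null cannot be rejected: the series looks non-stationary.

# Comparing two models by an ARCH test for heteroscedasticity in their residuals.
resid_a = np.diff(price)                   # residuals of "model A" (illustrative)
resid_b = rng.normal(size=resid_a.size)    # residuals of "model B" (illustrative)
for name, resid in (("A", resid_a), ("B", resid_b)):
    lm_stat, lm_pvalue, f_stat, f_pvalue = het_arch(resid)
    print(f"model {name}: ARCH LM p-value = {lm_pvalue:.2f}")
# Both calls return clean-looking numbers either way; which model is "better"
# is exactly the question the tests leave open.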

 
faa1947:

I keep being amazed that people who have mastered some pretty complicated things (to NS add DSP as well) don't bother to master the science directly relevant to their business.

And what is DSP?

For me it is not business at all; the word "hobby" is more appropriate. I work in a different field and have neither an economics nor a mathematics background, but rather a liberal-arts one that uses statistical methods. Because of this background I study what interests me, and when I realise that I am lacking some knowledge, I dig in to accumulate it. It works like a snowball. But that is just a lyrical digression.

"In terms of econometrics, NS is nothing more than a smoothing function, and in my opinion, not the most successful. There are other ways of smoothing that are more efficient, manageable, elaborate and illustrative. More importantly, these methods usually involve estimating the results of smoothing."

Not always. You named one particular application of neural nets. You can also use them for classification, hard or soft (probabilistic), and for cluster analysis. I have tried all of that, along with the task of approximating a function over a time series.

"The Dickey-Fuller test overwhelmingly tells you nothing, but the numbers always do."

This is the problem with many statistical hypothesis tests.

In general, my understanding of econometric research, at least yours, is that you take some approximating curve or line and study the series of residuals, aiming to end up with uncorrelated, normally distributed noise. Right?

 
alexeymosc:

And what is DSP?

Digital signal processing.

Not always. You named one particular application of neural networks...

Yes, of course, and in econometrics it's broader than that.

You take some approximating curve or line and study the series of residuals, aiming to end up with uncorrelated, normally distributed noise. Right?

Yes, that is the goal. Although it has not been fully achieved, it has been possible to build much more robust models, with the variance of the prediction error fluctuating within 5%.

 
faa1947:


Yes, that is the goal. Although it has not been fully achieved, it has been possible to build much more robust models, with the variance of the prediction error fluctuating within 5%.


I do the same thing when forecasting sales with statistical models. To assess the quality of a model I analyse the residuals for autocorrelation and for the shape of their probability density function. Also, of course, R^2. These are, in principle, generally accepted time-series forecasting techniques.
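
For what it's worth, a minimal sketch of exactly these three checks in Python (statsmodels/scipy; the toy sine series and its "model" are assumptions for the demo):

import numpy as np
from scipy import stats
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(1)
t = np.linspace(0, 20, 300)
y = np.sin(t) + rng.normal(scale=0.3, size=t.size)   # toy series
fitted = np.sin(t)                                   # toy "model" of the series
resid = y - fitted

# 1. Autocorrelation of the residuals (Ljung-Box test up to lag 10).
lb = acorr_ljungbox(resid, lags=[10])
print("Ljung-Box p-value:", float(lb["lb_pvalue"].iloc[0]))

# 2. Shape of the residual distribution (Jarque-Bera normality test).
jb_stat, jb_pvalue = stats.jarque_bera(resid)
print(f"Jarque-Bera p-value: {jb_pvalue:.3f}")

# 3. R^2 of the fit.
r2 = 1.0 - np.sum(resid**2) / np.sum((y - y.mean())**2)
print(f"R^2: {r2:.3f}")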

As for neural networks, I have never done this; perhaps I should analyse the residuals there too. You are right about that (when I am fitting an approximating curve to a series).

As for information theory, its role is to find meaningful lags or other independent variables. And in fact, de facto, an analysis of the residual series should be carried out on the resulting model as well.

 
faa1947:

I don't quite get it.

Econometrics works with non-stationary processes; a rough outline of the algorithm is described in the post. We should understand that non-stationarity means we cannot just take the best indicator, or a set of indicators, obtain a TS and trade stably: because of non-stationarity any estimates of the TS (profit factor, drawdown and others) are fictitious, and in the future there will appear stretches of the price series where the TS will drain the deposit.

Econometrics, the science of measuring economic data, differs from other very respectable sciences, but it is a separate, independent science. It proposes to act consistently, fixing each intermediate result as a model and aiming to obtain a stationary residual; this gives estimates of the stability of the future TS when working in a non-stationary market.

This is shown with an example for EURUSD and three indicators (a straight line, exponential smoothing, the Hodrick-Prescott filter) here.
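
As a rough illustration of the last of those three smoothers, here is a sketch in Python with statsmodels (the random walk standing in for EURUSD and the lamb=1600 smoothing parameter are assumptions of the demo): fix the Hodrick-Prescott filter as the model, then test that its residual is stationary.

import numpy as np
from statsmodels.tsa.filters.hp_filter import hpfilter
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(2)
price = np.cumsum(rng.normal(size=1500))    # random walk standing in for a quote series

cycle, trend = hpfilter(price, lamb=1600)   # smooth trend plus a residual "cycle"
print(f"ADF p-value, raw series:  {adfuller(price)[1]:.3f}")   # typically large: non-stationary
print(f"ADF p-value, HP residual: {adfuller(cycle)[1]:.3f}")   # typically small: stationary
# Fixing the smoother as a model and then testing the residual for stationarity
# is the step-by-step procedure described above.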

Guys, let's use the separate science made for measuring economic data, rather than try to pull something out of the neighbouring sciences just because we are too lazy to read an econometrics textbook. In our country such textbooks have existed since 2000, i.e. for more than 10 years universities have been producing specialists who measure economic data scientifically and do not mess around with the nonsense called "information dependence".

And anyway, let's get on with it.


Your argument is clear, but the problem with econometrics is that wrong theories and tools are created on initially wrong beliefs and assumptions; then, for the sake of those tools, the real market is replaced as the object of study by something else, and a conclusion is drawn about how similar the two different objects are.

Where is the certainty that econometrics will not repeat the fate of Ptolemy's theory in 20-30 years? That theory, too, once explained the surrounding world very well.

Here is what E. Peters writes about this (a mathematician, and one of the few who understand the matter this well):

And here is what he writes about econometrics:

There are many more examples in his book of where theories detached from practice can lead.

 
alexeymosc:

analyse the residuals for autocorrelation and for the shape of their probability density function. Also, of course, R^2. These are, in principle, generally accepted time-series forecasting techniques.

This is just the start and, in the sense of being generally accepted, incomplete. A complete analysis is shown here, although it is only one example of use. Three groups of checks: coefficients, residuals and stability. If you can reconcile the contradictions among them, you may get an estimate, which is always the prediction error, since the target is the prediction and everything else is an intermediate result.
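
A minimal sketch of those three groups of checks for a plain OLS model, in Python with statsmodels (the synthetic data and the choice of these particular tests are assumptions; the example behind the link may use different ones):

import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import acorr_ljungbox, breaks_cusumolsresid

rng = np.random.default_rng(3)
x = rng.normal(size=500)
y = 1.0 + 0.5 * x + rng.normal(scale=0.2, size=500)

model = sm.OLS(y, sm.add_constant(x)).fit()

# 1. Coefficients: are they statistically significant?
print("coefficient p-values:", model.pvalues)

# 2. Residuals: are they free of autocorrelation?
lb = acorr_ljungbox(model.resid, lags=[10])
print("Ljung-Box p-value:", float(lb["lb_pvalue"].iloc[0]))

# 3. Stability: do the coefficients stay constant over the sample (CUSUM test)?
cusum_stat, cusum_pvalue, crit = breaks_cusumolsresid(model.resid, ddof=int(model.df_model))
print(f"CUSUM p-value: {cusum_pvalue:.3f}")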

 
faa1947:

I am always amazed that people who have mastered some pretty complicated things (to NS add DSP as well) don't bother to master the science directly relevant to their business.

Econometrics is only relevant to business if that business is gambling. Look for its adherents on casino owners' websites.
 
C-4:
Econometrics is only relevant to business if that business is gambling. Look for its adherents on casino owners' websites.
Bullshit, read books, start with the alphabet.
 
C-4:

There are many more examples in his book of where theories detached from practice can lead.


Efficient market theory is not part of econometrics; its assumptions are all based on the market not being efficient. Econometrics does not include Markowitz and his apologists for efficient portfolios. Econometrics has existed for over 100 years, and it has never been disproved by Peters, Mandelbrot and the others, because from the start it has been based on the assumption that the market is non-stationary.

It is econometrics that justifies a one-step-ahead forecast and shows the reasons for the fatal deterioration of a forecast several steps ahead.
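
A back-of-the-envelope sketch of this in Python, assuming an AR(1) model with coefficient phi = 0.9 (an assumption for illustration; nothing in the thread fixes the model): the h-step forecast error variance is sigma^2 * (1 + phi^2 + ... + phi^(2(h-1))), so uncertainty accumulates with the horizon.

# h-step forecast error variance of an AR(1): sigma^2 * sum(phi^(2*i), i = 0..h-1)
phi, sigma2 = 0.9, 1.0
for h in (1, 5, 20):
    var_h = sigma2 * sum(phi ** (2 * i) for i in range(h))
    print(f"horizon {h:2d}: forecast error variance = {var_h:.2f}")
# Prints 1.00, 3.43 and 5.19: the one-step error is bounded by sigma^2, while the
# multi-step error grows toward the unconditional variance sigma^2/(1-phi^2), about 5.26.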

It is good that you have Ptolemy for a friend, but that is no reason to reject econometrics by attributing to it things it does not contain.

 
faa1947:
Bullshit, read books, start with the alphabet.

Same to you.