Econometrics: one step ahead forecast - page 74

 
C-4:

Says the strategy tester, which the topicstarter stubbornly refuses to acknowledge

I did, and I will. He implemented it in EViews and tabulated the results.

And at the same time he wonders why his model doesn't work. Why bother with all this R^2 business and the rest, when straightforward testing is much more objective and tells us what is what?

Before a car is tested on a track, the bolts and nuts are calculated. Without those calculations no one will test anything. Testing is necessary, but of a properly designed car.

My model differs from a TA vehicle in that it has a set of properties with their own numerical characteristics.

My goal: to infer the predictive capability of the model from its measurable properties.

I invite everyone to discuss this problem.

I do not aim to leak working models to the community. Those who wish to catch me out or profit at my expense are welcome to try.

 
avtomat:
there's a rule of thumb in statistics -- there should be at least 300 points -- that's the lower limit.

It's just a matter of opinion. It all depends on what we're counting and what the distribution is.
 
Avals:

It's just a matter of opinion. It all depends on what we're counting and what the distribution is.
Of course. This is only a first guide, for initial orientation.
 
Avals:



The same goes for all other statistical values and numerical criteria: accurate estimates are needed, and a confidence interval is one way to get them. 116 observations are not enough to trust a verdict on whether a distribution is normal or not, whichever criterion is applied.

How can you not analyse it? Your article says so in section 1.3.

I cited that analysis at the very beginning, for people who look for stationarity in quotes. If non-stationarity is an axiom for you, then you don't need the normality check at all.
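The sample-size concern can be illustrated with a quick simulation (a Python sketch, not from the thread; the choice of Student-t(5) data, the Jarque-Bera criterion and the trial counts are my assumptions): at 116 observations a normality test often fails to flag even clearly heavy-tailed data, while at 1000 observations it almost always does.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def rejection_rate(sample_size, n_trials=500):
    """Fraction of trials in which the Jarque-Bera test rejects
    normality (p < 0.05) for Student-t(5) data, which is
    heavy-tailed but visually close to normal."""
    rejections = 0
    for _ in range(n_trials):
        x = rng.standard_t(df=5, size=sample_size)
        _, p = stats.jarque_bera(x)
        if p < 0.05:
            rejections += 1
    return rejections / n_trials

rate_116 = rejection_rate(116)    # the sample size debated above
rate_1000 = rejection_rate(1000)
print(rate_116, rate_1000)
```

At n = 116 the test misses the non-normality in a large share of trials, which is exactly the point about not trusting a normality verdict on so few observations.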

Is the profit factor on 400 trades the same as on 40?

Of course 400 is better. I can run it on a history of 400 trades, and I will get a more reasonable answer to my question about the model's suitability. I am trying to infer the predictive ability of the model from the numerical characteristics of its properties. In your terms: you have drawn a conclusion about trending ability from historical data. Can that conclusion be extrapolated beyond the sample? That is a very interesting question. Any information within a sample is worthless if it does not survive at least one step outside the sample.

 
faa1947:

Of course 400 is better. I can run it on a history of 400 trades, and I will get a more reasonable answer to my question about the model's suitability. I am trying to infer the predictive ability of the model from the numerical characteristics of its properties. In your terms: you have drawn a conclusion about trending ability from historical data. Can that conclusion be extrapolated beyond the sample? That is a very interesting question. Any information within a sample is worthless if it does not survive at least one step outside the sample.

This is robustness estimation. Formally, some statistical characteristics are retained, including outside the test sample. But the formal approach means the system is either detected too late or not detected at all. So more flexibility is needed, but that seems to be off-topic for this thread.
 
Reshetov:

Keep going. The residual is non-stationary, because if the model fitted to a single sample is tested on any other, independent sample, the residual no longer behaves the same way. One can refit the model to other samples, but after those fits we get a different model for each individual sample.

Once again, I repeat for the especially gifted: stationarity can be revealed only by coincidence of statistical data on different, independent samples. And there is no such coincidence.

The trick of the econometric manipulations is that a method was found to fit a model to a sample in such a way that all the residuals in that sample are approximately equal. But since the trick works only for that single sample, and on other samples the model gives different results, the residuals are not stationary - they are merely fitted to a single sample. Econometric models cannot extrapolate into the future, because the historical data needed for the fit (which will only appear in the future) does not yet exist.

This is the same as a repainting indicator: it adjusts its readings to specific data, changing them retroactively.


I do not aim to isolate a residual that will be stationary jointly with future residuals. I don't know the future, and I am interested in it exactly one step ahead: the next bar outside the sample.

The idea is as follows. We build a model for the available sample. The end point of model construction is a stationary residual on that sample. I draw no conclusions about the stationarity of future residuals, and I don't need them. I am trying to build a model whose characteristics are sufficient for exactly one bar ahead. That is all, no more. I forecast that bar. When it arrives, I start building the model again - the entire algorithm from the beginning. If you look at the table, you can see that shifting by one bar changes the number of lags. It works like an adaptation algorithm.
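The re-estimation loop described here can be sketched roughly as follows (a Python sketch, not the author's EViews code; the AR-by-least-squares model, the 118-bar window and the fixed lag count are my stand-ins, whereas in the original the number of lags is re-selected on every shift):

```python
import numpy as np

def one_step_forecasts(y, window=118, n_lags=2):
    """Walking-forward scheme: at each position the model is
    re-estimated from scratch on the trailing window, the next bar
    is forecast, and the window then slides forward by one bar.
    An AR(n_lags) model fitted by ordinary least squares stands in
    for the real model."""
    forecasts = []
    for t in range(window, len(y)):
        sample = y[t - window:t]
        # design matrix of lagged values, plus an intercept column
        X = np.column_stack(
            [sample[n_lags - 1 - k:-1 - k] for k in range(n_lags)]
        )
        X = np.column_stack([np.ones(len(X)), X])
        target = sample[n_lags:]
        coef, *_ = np.linalg.lstsq(X, target, rcond=None)
        # one-step-ahead forecast from the most recent lags
        last = np.concatenate(([1.0], sample[-1:-1 - n_lags:-1]))
        forecasts.append(last @ coef)
    return np.array(forecasts)
```

Each forecast uses only data strictly before the bar being predicted, which is the "strictly next bar out of sample" discipline described above.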

I am not doing anything retrospectively. I deliberately included in the summary table the data on the model's extraordinary qualities when it is allowed to look ahead, and alongside them the results when the prediction is strictly the next bar out of sample.

 
Avals:

I am not suggesting increasing the window for calculating the regression coefficients, and the window size is not determined by the coefficients converging to some number. I am talking about the number of observations and how it affects the accuracy of the criteria and statistical estimates you apply.

I made estimates on H1 samples from 40 to 300 bars. From 118 bars (one week) onwards the profit factor is almost unchanged and the coefficients stabilise.

One thing is clear: the model with ideal properties does not work, and I do not understand why.
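The kind of stabilisation reported here can be checked in the same spirit (a sketch on synthetic trade results, not real data; the 0.05 per-trade drift and the sample sizes are arbitrary assumptions): the profit factor computed on a short prefix of the trade history jumps around and settles as the prefix grows.

```python
import numpy as np

rng = np.random.default_rng(7)

def profit_factor(trades):
    """Gross profit divided by gross loss."""
    gains = trades[trades > 0].sum()
    losses = -trades[trades < 0].sum()
    return gains / losses if losses > 0 else float("inf")

# hypothetical trade results with a slight positive edge
trades = rng.normal(loc=0.05, scale=1.0, size=300)

# profit factor on a growing prefix of the trade history
for n in (40, 118, 200, 300):
    print(n, round(profit_factor(trades[:n]), 3))
```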

 

Sorry, topicstarter, for a bit of off-topic, but since my question is related to statistics it is not entirely off-topic.

I remember seeing somewhere a script that collects statistics for instruments - can someone please point me to it? I am interested in the instrument with the maximum ratio of return to spread. Roughly speaking, I am looking for the instrument with the largest number of candlesticks with long upper and lower shadows.

 
joo:

Sorry, topicstarter, for a bit of off-topic, but since my question is related to statistics it is not entirely off-topic.

I remember seeing somewhere a script that collects statistics for instruments - can someone please point me to it? I am interested in the instrument with the maximum ratio of return to spread. Roughly speaking, I am looking for the instrument with the largest number of candlesticks with long upper and lower shadows.

I don't know.
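For what it's worth, the statistic joo describes could be collected with a few lines like these (a Python sketch over hypothetical OHLC arrays; `shadow_stats` and its inputs are my invention, and a real MQL script would read the data from the terminal instead):

```python
import numpy as np

def shadow_stats(o, h, l, c, spread):
    """For one instrument: average candle range measured in units of
    spread, and the share of candles whose combined upper and lower
    shadows exceed the body. o, h, l, c are hypothetical OHLC series."""
    candle_range = h - l
    body = np.abs(c - o)
    upper = h - np.maximum(o, c)
    lower = np.minimum(o, c) - l
    avg_range_to_spread = candle_range.mean() / spread
    share_shadow_heavy = np.mean(upper + lower > body)
    return avg_range_to_spread, share_shadow_heavy
```

Running it per instrument and ranking by the first number would find the instrument with the largest range-to-spread ratio; the second number ranks instruments by shadow-heavy candles.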
 
faa1947:

I do not aim to isolate a residual that will be stationary jointly with future residuals.

You, as an adherent of the econometric sect, cannot have such a goal, since the future exposes the fit and therefore compromises the religious beliefs. But the mathematical definition of stationarity always implies that the variance and expectation are independent of the sample - future, past, or any other. Anything that depends on the sample is, by mathematical definition, non-stationary.
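For reference, the weak (covariance) stationarity being invoked here is usually written as:

```latex
\mathbb{E}[y_t] = \mu, \qquad
\operatorname{Var}(y_t) = \sigma^2, \qquad
\operatorname{Cov}(y_t,\, y_{t+h}) = \gamma(h) \quad \text{for all } t,
```

with \mu, \sigma^2 and \gamma(h) constants that do not depend on t, and therefore not on which sample the observations are drawn from.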

faa1947:

I am trying to build a model whose characteristics are sufficient for exactly one bar ahead. That is all, no more. I forecast that bar. When it arrives, I start building the model again.

This is refitting, i.e. adjustment in hindsight. It is exactly the same trick as repainting indicators. A model without refitting should produce stationary residuals irrespective of the sample; only then can we talk about the stationarity of the residuals the model produces.