Every econometrics course (are you really an econometrician? :)) tells us about the variance of model parameter estimates and the speed at which estimates converge to the true values: the smaller the sample, and provided there are no structural changes in the series, the larger the variance of the parameter estimates. As the sample size grows, the standard error of the estimates (most often :)) shrinks on the order of eps/sqrt(n), eps > 0, where n is the number of observations.
Parameter estimation errors contribute to the error of any model, so the lower the accuracy of parameter estimation, the larger the model error.
On the other hand, a small window allows adaptation to parameter changes. In practice this trade-off is handled much better by introducing decay (gradual forgetting) of the model parameters than by shrinking the window.
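The point can be illustrated with a minimal numpy sketch (my own toy example, not any particular package's implementation): estimating a mean that jumps halfway through a series, once with a short rolling window and once with exponential forgetting at a comparable effective memory (1/(1-lam) ≈ 33 observations for lam = 0.97).

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated series whose true mean jumps from 0 to 1 at t = 1000
# (a structural change the estimator must adapt to).
true_mean = np.concatenate([np.zeros(1000), np.full(2000, 1.0)])
x = true_mean + rng.normal(0.0, 1.0, size=3000)

def rolling_mean(x, window):
    """Estimate the mean from the last `window` observations only."""
    out = np.empty_like(x)
    for t in range(len(x)):
        lo = max(0, t - window + 1)
        out[t] = x[lo:t + 1].mean()
    return out

def decayed_mean(x, lam):
    """Exponentially weighted mean: old data decays, nothing is discarded."""
    out = np.empty_like(x)
    m = x[0]
    for t, xt in enumerate(x):
        m = lam * m + (1.0 - lam) * xt
        out[t] = m
    return out

short_win = rolling_mean(x, 30)   # small window: adaptive but noisy
decayed = decayed_mean(x, 0.97)   # decay: similar adaptivity, smoother

# Compare estimation error well after the structural break
err_win = np.mean((short_win[1100:] - 1.0) ** 2)
err_dec = np.mean((decayed[1100:] - 1.0) ** 2)
print(f"rolling-window MSE: {err_win:.4f}, decayed MSE: {err_dec:.4f}")
```

Both estimators track the jump, but for the same effective memory the decayed estimate has roughly half the variance of the hard-window one, which is the sense in which decay "solves" the small-window problem.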
In every econometrics course (are you really an econometrician? :))
I am not an econometrician - I have a degree in econometrics. As they say, feel the difference.
In my work I am limited to a specific package and cannot go beyond it. The package selects the optimal model by the following criteria:
I read in the book that accompanies the package that different criteria give better results for different underlying random processes, but on average the log-likelihood gives the best result.
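For intuition about how such criteria compare, here is a hedged numpy sketch (my own illustration, not the package's actual procedure): fitting AR(p) models of several orders by OLS to a simulated AR(2) series and reporting the Gaussian log-likelihood alongside the penalized criteria AIC and BIC.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate an AR(2) process; the "true" order is 2.
n = 600
phi = (0.5, -0.3)
x = np.zeros(n)
for t in range(2, n):
    x[t] = phi[0] * x[t - 1] + phi[1] * x[t - 2] + rng.normal()

def fit_ar(x, p):
    """OLS fit of an AR(p); returns Gaussian log-likelihood, AIC, BIC."""
    n = len(x)
    Y = x[p:]
    X = np.column_stack([x[p - k:n - k] for k in range(1, p + 1)])
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ beta
    m = len(Y)
    sigma2 = resid @ resid / m               # MLE of noise variance
    llf = -0.5 * m * (np.log(2 * np.pi * sigma2) + 1.0)
    k = p + 1                                # AR coefficients + variance
    aic = 2 * k - 2 * llf
    bic = k * np.log(m) - 2 * llf
    return llf, aic, bic

for p in (1, 2, 3, 4):
    llf, aic, bic = fit_ar(x, p)
    print(f"AR({p}): loglik={llf:.1f}  AIC={aic:.1f}  BIC={bic:.1f}")
```

The raw log-likelihood keeps improving as the order grows, which is why bare likelihood comparisons favour overfitting; the penalized criteria are what make the orders comparable.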
With the tester I found that for my sample the window ranges from 20 to 40 bars — the result is about the same inside that range but deteriorates sharply outside it. But that is on my particular sample. I would like some other basis — I don't trust the tester: it gives a particular result and no grounds for generalising it.
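A tester-independent way to probe window size is a plain one-step-ahead forecast experiment. A sketch on a hypothetical slowly drifting series (my own synthetic data, not the poster's sample): sweep the window and measure out-of-sample MSE, which typically traces the U-shape that makes mid-sized windows win.

```python
import numpy as np

rng = np.random.default_rng(2)

# Slowly drifting mean: too-small windows are noisy, too-large ones lag.
n = 3000
drift = np.cumsum(rng.normal(0.0, 0.05, size=n))
x = drift + rng.normal(0.0, 1.0, size=n)

def one_step_mse(x, window):
    """Forecast x[t] by the mean of the previous `window` points."""
    preds = np.array([x[t - window:t].mean() for t in range(window, len(x))])
    return np.mean((x[window:] - preds) ** 2)

for w in (5, 10, 20, 40, 80, 160):
    print(f"window={w:4d}  one-step MSE={one_step_mse(x, w):.3f}")
```

The optimal window here depends entirely on how fast the drift moves relative to the noise, which is exactly why a window tuned on one sample has no claim to generality.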
I was interested in the form of the function w from there. The balance is of little to no interest; analyse the funds (equity) instead.