In other words, as long as the series is decomposed using the approximating functions it remains stationary, and once the white noise is gone the cycle is over. Did I understand you correctly?
Approximation is a fit. That's why I propose to obtain a stationary time series not by approximation but by extrapolation.
Please forgive me for my thoughts. Perhaps my understanding is not yet up to your heights. Let me make a very tactful suggestion.
Do you think there is circular reasoning in the first post?
A few definitions (in free form), so that there is no debate about definitions:
On an intuitive level, we associate stationarity of a time series with the requirement that it has a constant mean and oscillates around this mean with a constant variance.
A series x(t) is called strictly stationary (or stationary in the narrow sense) if the joint probability distribution of m observations x(t1), x(t2), ..., x(tm) is the same as that of the m observations x(t1 + tau), x(t2 + tau), ..., x(tm + tau), for any m and any shift tau.
In other words, the properties of a strictly stationary time series do not change with a change in the origin.
In particular, the assumption on the strict stationarity of the time series x(t) implies that the probability distribution law of the random variable x(t) does not depend on t, so all its basic numerical characteristics, including
Mathematical expectation: M x(t) = a
Variance: D x(t) = M(x(t) - a)^2 = sigma^2
A series x(t) is called weakly stationary (or stationary in the broad sense) if its mean and variance are independent of t (and its autocovariance depends only on the lag, not on t).
Obviously, all strictly stationary (or stationary in the narrow sense) time series are also stationary in the broad sense, but not vice versa.
A non-stationary series is a series which differs from a stationary series by a non-random component.
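The definitions above can be illustrated with a crude check. This is a toy sketch, not a formal test (use ADF/KPSS for real work): split the series into chunks and see whether the chunk means and variances stay close to the global ones. The chunk count and tolerance are arbitrary illustration choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def rough_stationarity_check(x, n_chunks=4, tol=0.5):
    """Crude weak-stationarity check: split the series into chunks and see
    whether chunk means/variances stay close to the global ones.
    Purely illustrative, not a substitute for a formal test."""
    x = np.asarray(x, dtype=float)
    chunks = np.array_split(x, n_chunks)
    g_mean, g_std = np.mean(x), np.std(x)
    for c in chunks:
        if abs(np.mean(c) - g_mean) > tol * g_std:
            return False                     # the mean drifts
        if not (0.5 < np.std(c) / g_std < 2.0):
            return False                     # the variance drifts
    return True

white_noise = rng.normal(0.0, 1.0, 2000)             # stationary by construction
random_walk = np.cumsum(rng.normal(0.0, 1.0, 2000))  # classic non-stationary series

print(rough_stationarity_check(white_noise))
print(rough_stationarity_check(random_walk))
```

A price series usually behaves like the second case, which is exactly why the thread is about transforming it first.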
Do you think there is circular reasoning in the first post?
No.
1. We first approximate the price series. We get the approximation formula for the price series: price_appr(time)
2. extrapolate price_appr(time + i)
3. Get synthetic delta(time + i) = Open[time + i] - price_appr(time + i)
4. Check delta(x) for white noise. If it is white noise, it's a bust. If it is not, continue.
5. Approximate the synthetic series and get the formula: delta_appr(time)
6. Forecast: forecast(time + i + j) = price_appr(time + i + j) + delta_appr(time + i + j)
where i and j are out-of-sample intervals from the previous steps; time, i and j are non-overlapping time sets.
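The six steps above can be sketched in a few lines. Everything here is an assumption for illustration: the "price" is synthetic, and `np.polyfit` stands in for whatever approximation the author has in mind for price_appr and delta_appr; the lag-1 autocorrelation is a stand-in for a proper white-noise test.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "price" series: trend + noise (stand-in for real Open[] data).
t = np.arange(300, dtype=float)
price = 1.3 + 0.0004 * t + rng.normal(0.0, 0.01, t.size)

fit_end, oos_i, oos_j = 200, 50, 50   # in-sample window, then the i- and j-intervals

# 1. Approximate the price series in-sample (linear fit as a placeholder)...
coeffs = np.polyfit(t[:fit_end], price[:fit_end], deg=1)
# 2. ...which extrapolates past fit_end by construction.
price_appr = np.polyval(coeffs, t)

# 3. Synthetic residual series on the i-interval.
delta = price[fit_end:fit_end + oos_i] - price_appr[fit_end:fit_end + oos_i]

# 4. Crude white-noise check: lag-1 autocorrelation of delta.
d = delta - delta.mean()
rho1 = np.dot(d[:-1], d[1:]) / np.dot(d, d)

if abs(rho1) > 2 / np.sqrt(delta.size):
    # 5. Approximate the synthetic series, 6. sum the two extrapolations.
    dc = np.polyfit(t[fit_end:fit_end + oos_i], delta, deg=1)
    forecast = np.polyval(coeffs, t[-oos_j:]) + np.polyval(dc, t[-oos_j:])
else:
    # delta looks like noise: the "bust" case from step 4.
    forecast = np.polyval(coeffs, t[-oos_j:])

print(forecast[:3])
```

With synthetic data like this, delta is noise by construction, so the sketch usually ends in the "bust" branch, which is precisely the problem discussed below.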
Sounds tempting. But.
We can only check for noise on the extrapolation interval.
This means that at each step we must reserve, in advance, an interval on which to check for noise.
Doesn't it break the whole idea?
Yes, by the way, how long does a series have to be for us to determine reliably enough whether it is noise?
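One standard answer to this question is the Ljung-Box statistic: under the white-noise hypothesis it follows a chi-square distribution, and you can watch how decisively it separates noise from structure as the series grows. A minimal numpy-only sketch (the 18.307 critical value is the standard chi-square table constant for 10 degrees of freedom at 95%):

```python
import numpy as np

def ljung_box_stat(x, lags=10):
    """Ljung-Box Q statistic for 'is this series white noise?'.
    Under H0 (white noise), Q is approximately chi-square with `lags` df."""
    x = np.asarray(x, dtype=float)
    n = x.size
    d = x - x.mean()
    denom = np.dot(d, d)
    q = 0.0
    for k in range(1, lags + 1):
        rk = np.dot(d[:-k], d[k:]) / denom   # sample autocorrelation at lag k
        q += rk * rk / (n - k)
    return n * (n + 2) * q

CHI2_95_DF10 = 18.307  # 95% critical value, chi-square, 10 df (table constant)

rng = np.random.default_rng(2)
for n in (50, 200, 1000):
    noise = rng.normal(size=n)
    q = ljung_box_stat(noise, lags=10)
    print(n, round(q, 2), q < CHI2_95_DF10)   # True = "cannot reject white noise"
```

The test has roughly a 5% false-alarm rate at any length, so the practical question is less "how long until the test works" and more how much autocorrelation you need it to detect: weak structure in a short series simply drowns in the noise band.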
The stationarity of the residuals means that the extrapolation model is adequate. The residuals should be normally distributed, have zero mean (MO = 0), contain no autocorrelation, etc. In general, they should be independent.
"
......
But a qualitative model must not only give a sufficiently accurate forecast but be economical and have independent residuals containing only noise with no systematic components (in particular, the ACF of the residuals must not have any periodicity). Therefore a comprehensive analysis of the residuals is needed. Good checks on the model are: (a) graphing the residuals and examining their trends, (b) checking the ACF of the residuals (the ACF graph usually clearly shows periodicity).
Residuals analysis. If residuals are systematically distributed (e.g. negative in the first part of the series and approximately zero in the second part) or include some periodic component, this indicates inadequacy of the model. Residuals analysis is extremely important and necessary in time series analysis. The estimation procedure assumes that the residuals are uncorrelated and normally distributed. "
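Point (b) from the quote, checking the ACF of the residuals, is easy to sketch. This toy example (all names and numbers are illustration choices) compares residuals that are pure noise against residuals with a leftover periodic component, counting how many ACF lags fall outside the rough 95% noise band of 2/sqrt(n):

```python
import numpy as np

def acf(x, max_lag=20):
    """Sample autocorrelation function at lags 1..max_lag."""
    x = np.asarray(x, dtype=float)
    d = x - x.mean()
    denom = np.dot(d, d)
    return np.array([np.dot(d[:-k], d[k:]) / denom for k in range(1, max_lag + 1)])

rng = np.random.default_rng(4)
n = 500
good = rng.normal(size=n)                        # residuals that really are noise
t = np.arange(n)
bad = rng.normal(size=n) + 0.8 * np.sin(2 * np.pi * t / 12)  # periodic leftover

band = 2.0 / np.sqrt(n)                          # rough 95% white-noise band
print("good: lags outside band:", int(np.sum(np.abs(acf(good)) > band)))
print("bad : lags outside band:", int(np.sum(np.abs(acf(bad)) > band)))
```

For the "bad" residuals the ACF oscillates with the period of the leftover component, so many lags leave the band, exactly the periodicity the quoted text says the ACF graph usually shows clearly.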
Nerd bluster. Aren't your own brains enough to realise that everything in the link you cited is nonsense?
Read on, and I quote: "Limitations. Recall that the ARIMA (ARPSS) model is only suitable for series that are stationary (mean, variance and autocorrelation are approximately constant over time); for non-stationary series, take differences. It is recommended to have at least 50 observations in the raw data file. It is also assumed that the model parameters are constant, i.e. do not change over time." (I don't even want to discuss the figure of 50 observations, because even a fool on this forum knows that 50 trades is not a result.)
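For what it's worth, the quoted advice "for non-stationary series, take differences" is a one-liner to demonstrate. A random walk here stands in for a price series; its first differences recover the i.i.d. increments, so they are stationary by construction:

```python
import numpy as np

rng = np.random.default_rng(7)
walk = np.cumsum(rng.normal(0.0, 1.0, 1000))  # non-stationary random walk
diff = np.diff(walk)                          # first differences: back to increments

# The differenced series has a mean near 0 and a variance that no longer
# grows with t, unlike the walk itself.
print(round(float(diff.mean()), 2), round(float(diff.std()), 2))
```

The catch, of course, is the same one raised throughout this thread: differencing a real price series removes the trend but does not hand you anything predictable, only something closer to noise.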
Suppose we have a non-stationary series and we have taken the residuals, delta(x). The residuals themselves, as this nerdy "work" suggests, must meet the requirements, quote: "containing only noise without systematic components".
Fuck it. Let there be noise. The noise itself cannot be predicted in any way, so approximating it is useless. But it does have one property, and I quote: "The residuals shall be normally distributed and have MO = 0."
Hence, instead of the noise we take its mean: MO = 0.
Substitute it into the forecast: forecast(time + i + j) = price_appr(time + i + j) + delta_appr(time + i + j) = price_appr(time + i + j) + 0 = price_appr(time + i + j)
So, the forecast in the noise case reduces to the first approximation: price_appr(x). And the first approximation, as I said in the third post of this thread, is a pure fit. The result is:
Nerd forecasting = curve fitting
The most primitive version. We approximate the price series. Extrapolate. The difference between the extrapolated series and the real series is also a time series, but a stationary one. Let's call this new series a synthetic one.
For example, forecasting by means of an EMA (second-order, say) does not give a stationary residual series. So the choice of extrapolation method is also quite a tough question. I think gpwr published an indicator in which various linear extrapolation methods were implemented. Would you like to analyse the residual distributions?
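The claim that EMA forecast residuals are not well-behaved is easy to check on a toy series. Sketch below (first-order EMA for simplicity, random walk as a stand-in for price; the smoothing constant is an arbitrary choice): one can show that the 1-step-ahead residual follows r(t+1) = (1 - alpha) * r(t) + increment, i.e. it is strongly autocorrelated rather than white.

```python
import numpy as np

def ema(x, alpha=0.1):
    """Exponential moving average, simple first-order recursion."""
    out = np.empty_like(x)
    out[0] = x[0]
    for i in range(1, x.size):
        out[i] = alpha * x[i] + (1 - alpha) * out[i - 1]
    return out

rng = np.random.default_rng(5)
price = np.cumsum(rng.normal(0.0, 1.0, 3000))   # random-walk stand-in for price

resid = price[1:] - ema(price)[:-1]             # 1-step-ahead EMA "forecast" errors
d = resid - resid.mean()
rho1 = np.dot(d[:-1], d[1:]) / np.dot(d, d)
print(f"lag-1 autocorrelation of EMA residuals: {rho1:.3f}")
```

With alpha = 0.1 the lag-1 autocorrelation comes out near 0.9, nowhere near white noise, which supports the point being made about EMA-based extrapolation.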
As we know, stationary time series are predictable if they are not white noise.
I wonder if anyone has ever actually obtained white noise from price transformations?
Do you have any real ideas on how to account for non-stationarity in the tester?
So it is not all that complicated. It requires some work, but by and large the problem is solvable. Yet for some reason it is not discussed.
As you know, stationary time series are predictable if they are not white noise.
Therefore, there is an urgent need to convert a non-stationary price series into a stationary one, but with the possibility of inverse conversion.
The most primitive variant. Approximate the price series. Extrapolate. The difference between the extrapolated series and the real series is also a time series, but a stationary one. Let us call this new series a synthetic one.
Extrapolate the synthetic series. Add it to the extrapolation of the price series. If the synthetic series is not white noise, the output is the sum of the two extrapolations.
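The "possibility of inverse conversion" demanded above holds trivially for this transform: since synthetic = model - price, we always have price = model - synthetic. A tiny sketch (linear fit as a placeholder for the unspecified model, synthetic data throughout):

```python
import numpy as np

rng = np.random.default_rng(6)
t = np.arange(200, dtype=float)
price = 100 + 0.05 * t + np.sin(t / 7) + rng.normal(0.0, 0.1, t.size)

# "Extrapolated" model series: straight-line fit on the first 150 points,
# evaluated over the whole range (a placeholder for any real model).
model = np.polyval(np.polyfit(t[:150], price[:150], 1), t)

synthetic = model - price      # forward transform: non-stationary -> residual series
recovered = model - synthetic  # inverse transform promised in the post

print(np.allclose(recovered, price))
```

So the inverse conversion is exact by construction; the hard part of the scheme is entirely in whether the synthetic series carries anything beyond white noise.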