So ..... ? So what? Spectral power density is of no use to traders, because it does not allow you to predict (synthesize) the future shape of the signal.
Not necessarily - a reversal prediction is enough for trend trading
This is just your *assumption* from the realm of myths. You can't even show HOW you would predict a reversal using only the spectral power density.
As we know, stationary time series are predictable unless they are white noise.
Hence the pressing need to convert a non-stationary price series into a stationary one, while retaining the possibility of an inverse conversion.
The most primitive variant: approximate the price series, then extrapolate. The difference between the extrapolated series and the real one is also a time series, but a stationary one. Let us call this new series synthetic.
Extrapolate the synthetic series and add it to the extrapolation of the price series. If the synthetic series is not white noise, the output of this summation is a forecast.
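The simplest stationarising transform with an exact inverse is first differencing: returns are (close to) stationary, and cumulative summation restores the prices. A minimal sketch with numpy on a toy random-walk series (the data here is synthetic, not from the thread):

```python
import numpy as np

rng = np.random.default_rng(0)
prices = 100 + np.cumsum(rng.normal(0, 0.5, size=500))  # toy random-walk "prices"

# First differences are the simplest non-stationary -> stationary conversion...
returns = np.diff(prices)

# ...and it is exactly invertible: cumulative sums restore the price series.
restored = prices[0] + np.concatenate(([0.0], np.cumsum(returns)))
print(np.allclose(restored, prices))  # True
```

Any invertible transform (differencing, detrending with a stored trend formula) works the same way: model the stationary series, then map the forecast back through the inverse.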
I was too lazy to read all the posts here. So I'll ask a question which may have already been asked before me. What is the criterion for approximation? What is the minimum standard deviation on the interval being tested? And how do we choose the length of the model?
To see where I'm going with my questions, take the AR model as an example. To fit it, partition the past data into training samples, i.e. inputs and outputs. Fit the model to that data by linear regression, Burg's method, or another linear prediction method. You suggest that after fitting this AR model, we should calculate prediction errors on past data (i.e. the same errors we tried to reduce during fitting), fit another AR model to the series of errors, and so on. There is little point in doing this, as the length of the AR model should be chosen so that the approximation error has the properties of white noise. Otherwise you have a short model whose errors do not behave like white noise but like something predictable. But fitting a second model to the series of errors, then a third and so on, gives the same result as increasing the order (length) of the first AR model.
It is more correct to build the first model in steps, increasing the length of the model until the approximation error behaves like noise. Much has been written about this in books and articles.
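The stepwise procedure above can be sketched as follows. This is only an illustration: `fit_ar` uses plain least squares in place of Burg's method, and `looks_like_white_noise` is a rough check (all sample autocorrelations within ±2/√n) rather than a formal test; the helper names are mine, not the poster's.

```python
import numpy as np

def fit_ar(x, p):
    """Least-squares fit of an AR(p) model; returns coefficients and residuals."""
    X = np.column_stack([x[p - k - 1 : len(x) - k - 1] for k in range(p)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef, y - X @ coef

def looks_like_white_noise(resid, max_lag=10):
    """Rough check: all sample autocorrelations within +/- 2/sqrt(n)."""
    d = resid - resid.mean()
    denom = np.dot(d, d)
    n = len(d)
    acf = np.array([np.dot(d[:-k], d[k:]) / denom for k in range(1, max_lag + 1)])
    return bool(np.all(np.abs(acf) < 2 / np.sqrt(n)))

# Simulate an AR(2) process, then grow the model order step by step
# until the residuals behave like noise.
rng = np.random.default_rng(1)
x = np.zeros(1000)
for t in range(2, 1000):
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.normal()

found = None
for p in range(1, 8):
    _, resid = fit_ar(x, p)
    if looks_like_white_noise(resid):
        found = p
        print("residuals look like white noise at order", p)
        break
```

On this toy data the search should stop around the true order 2, which is exactly the point of the post: one sufficiently long model absorbs what a cascade of error-models would.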
Oh, man!
Oh, come on, colleague! Hardly anyone here even knows such clever words as you write ("criterion", "approximation") - maybe 3-4 people do. And they are obviously tired of explaining the plain truths to everyone, so they keep silent.
Converting a non-stationary series into a stationary series is some kind of exercise that has nothing to do with profit.
>> Nothing of the kind. What is being discussed, in the broadest sense, is obtaining a stationary series of profits from a non-stationary series of prices.
At grasn's instigation (for which I thank him), I began to develop the following idea.
3. We predict the ZZ (ZigZag) two steps ahead - the completion of the current wave and the next one. It may be possible to use a tricky regression model; for now I limit myself to ordinary statistics. This point is the most important one, because it involves the largest amplitudes. Could you elaborate on that? ;-) Obviously it is not just statistics, i.e. not simply the average ZZ step size, or even the distribution of the next step size given the previous one.
I don't get it. The TS (trading system) is what makes a profit, and I haven't seen a word about the TS here.
profit=f(price series)
This point is the most important because it operates with the largest amplitudes. Can you be more specific? ;-) Evidently it is not ordinary statistics, i.e. not simply the average ZZ step size, nor even the distribution of the next step size given the previous one.
Alas, for now it is just the distribution. I plan to classify the distributions by prediction error; perhaps a pattern will emerge there.
No.
1. First we approximate the price series, obtaining a formula for the approximation: price_appr(time)
2. Extrapolate: price_appr(time + i).
3. Obtain the synthetic series: delta(time + i) = Open[time + i] - price_appr(time + i)
4. Check delta(x) for white noise. If it is white noise, tough luck. If not, continue.
5. Approximate the synthetic series, obtaining the formula delta_appr(time)
6. Forecast: forecast(time + i + j) = price_appr(time + i + j) + delta_appr(time + i + j)
where i and j are out-of-sample (OOS) relative to the previous steps; time, i and j are non-overlapping time sets.
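The six steps above can be sketched on synthetic data. This is a minimal illustration under stated assumptions: the cubic polynomial, the lag-1 autocorrelation gate, and the AR(1) model for delta are placeholder choices of mine, not the poster's actual approximation, white-noise test, or synthetic-series model.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(300)

# Toy "price": smooth component plus an AR(1)-style (non-white) residual.
resid = np.zeros(300)
for k in range(1, 300):
    resid[k] = 0.7 * resid[k - 1] + rng.normal(0, 0.3)
price = 50 + 0.05 * t + np.sin(t / 20.0) + resid

# Steps 1-2: approximate the price series and extrapolate price_appr.
coeffs = np.polyfit(t, price, deg=3)
t_new = np.arange(300, 310)
price_appr_new = np.polyval(coeffs, t_new)

# Step 3: the synthetic series delta = price - price_appr.
delta = price - np.polyval(coeffs, t)

# Step 4 (crude white-noise gate): is the lag-1 autocorrelation
# large enough for delta to be worth modelling?
d = delta - delta.mean()
rho1 = np.dot(d[:-1], d[1:]) / np.dot(d, d)
predictable = abs(rho1) > 2 / np.sqrt(len(d))

# Steps 5-6: model delta (here a simple AR(1) with phi = rho1) and add
# its extrapolation to the extrapolated approximation.
delta_fc = delta[-1] * rho1 ** np.arange(1, 11)
forecast = price_appr_new + (delta_fc if predictable else 0.0)
print(forecast.shape)  # (10,)
```

Note the caveat the step list itself makes: the fitting window, the horizon i, and the horizon j must be non-overlapping, otherwise the "forecast" is partly in-sample.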
This is an interesting suggestion.
Although the prediction methodology is not quite clear. What exactly is being predicted?
But first we have to solve quite a different problem: how do we check whether a series is white noise or not?
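One standard answer is a portmanteau test on the sample autocorrelations. A minimal numpy sketch of the Ljung-Box Q statistic (my illustration, not something proposed in the thread); 18.31 is the 95% chi-square critical value for 10 degrees of freedom:

```python
import numpy as np

def ljung_box_q(x, max_lag=10):
    """Ljung-Box Q statistic over the first max_lag sample autocorrelations."""
    n = len(x)
    d = x - x.mean()
    denom = np.dot(d, d)
    acf = np.array([np.dot(d[:-k], d[k:]) / denom for k in range(1, max_lag + 1)])
    return n * (n + 2) * np.sum(acf**2 / (n - np.arange(1, max_lag + 1)))

rng = np.random.default_rng(3)
noise = rng.normal(size=1000)       # white noise
walk = np.cumsum(noise)             # random walk: strongly autocorrelated

# Q below the critical value is consistent with white noise;
# Q far above it is not.
print(ljung_box_q(noise) < 18.31)   # usually True for white noise
print(ljung_box_q(walk) > 18.31)    # True
```

In practice one would take the critical value from a chi-square table (or scipy) for the chosen lag count and significance level rather than hard-coding it.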
profit=f(price series)
Conversions of a time series into something more decent are plentiful - all (or almost all) indicators - yet no profit is visible. Whenever an indicator is developed, the idea always comes first and then the implementation. Here they say "it is good if the series is stationary instead of non-stationary". What is good about that? Every indicator is developed so that it reflects some characteristic of the original series. Here no such task is set at all; the task is to obtain given characteristics of the result, and what that result will reflect of the original series is unknown.
By the way, I saw a graph here on the forum showing that candle length depends on the time of day.