a trading strategy based on Elliott Wave Theory - page 192
In the general case, centering a random variable is the procedure X(t) - m(t), where X(t) is the random variable and m(t) is its expectation (the average over an interval). Thus, by estimating the expectation as an average over a fixed sliding window, we remove the constant component from the initial time series. This makes the spectrogram easier to read: compare the spectrum of the original series with that of the centered one. The original series shows strong clutter in the low-frequency region. But there is some arbitrariness in the choice of the averaging window, since the low-frequency boundary of the spectrogram depends on it. Roughly speaking, the spectrum will contain no harmonics with a period longer than the averaging time.
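As an illustration only (not code from the thread), sliding-window centering can be sketched in Python; the window length here is an arbitrary assumption, and it is exactly what sets the low-frequency cutoff described above:

```python
import numpy as np

def center_sliding(x, window):
    """Subtract a sliding-window mean m(t) from x(t) (a sketch; the
    window length is a free choice and sets the low-frequency cutoff:
    harmonics with periods longer than the window are suppressed)."""
    kernel = np.ones(window) / window
    m = np.convolve(x, kernel, mode="same")  # local expectation m(t)
    return x - m

rng = np.random.default_rng(0)
x = 100.0 + np.cumsum(rng.normal(size=1000))  # random walk with an offset
c = center_sliding(x, 50)
# away from the edges, the constant component is gone
print(x.mean(), c[100:-100].mean())
```

Note that `mode="same"` distorts the first and last half-window of values (zero padding), which is why the interior of the centered series is the part worth inspecting.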
For my own purposes I center the series using the formula X[i] = Open[i-1] - Open[i]. It is not hard to see the analogy with numerical differentiation here (given that dt = 1). Recall that if we apply the differentiation operator to a series containing harmonic components, the output is a series containing the same harmonics with amplitudes increased in proportion to frequency. That is, differentiating the original series:
1. does not lead to the loss of useful information (we are talking about spectral analysis);
2. allows us to represent the spectral density in a digestible form;
3. allows us to minimize the inevitable phase lag associated with the averaging procedure.
Remember that the dimensionality of spectral density, A^2/Hz, is power (amplitude squared) per unit frequency, whereas the dimensionality of the quantity computed after the differentiation procedure is Hz*A^2; to restore the spectral density, the resulting vector must therefore be divided by the square of the frequency. In addition, we are primarily interested in the amplitude of a particular harmonic. To find it, divide the resulting spectral density by the period and take the square root of the result.
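A quick numerical check of the amplitude claim (illustrative Python, not from the post): for a pure harmonic sampled with dt = 1, the first difference scales the amplitude by 2·sin(πf), which for small f is ≈ 2πf, i.e. proportional to frequency. The bin number and series length below are arbitrary choices:

```python
import numpy as np

N = 4096
k = 40                       # an arbitrary FFT bin for the test harmonic
f = k / N                    # its frequency in cycles per sample (dt = 1)
t = np.arange(N)
x = np.sin(2 * np.pi * f * t)

d = x - np.roll(x, 1)        # circular first difference (keeps length N)
A_x = 2 * np.abs(np.fft.rfft(x)) / N
A_d = 2 * np.abs(np.fft.rfft(d)) / N

gain = A_d[k] / A_x[k]       # measured amplitude amplification at bin k
print(gain, 2 * np.pi * f)   # gain ≈ 2*pi*f for small f
```

This is why dividing the power spectrum of the differenced series by the frequency squared restores the original spectral density.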
And lastly, I must have made a mistake somewhere... Yurixx will tell you where:-)
to Candid
Candid, good to see you!
No, it doesn't.
On the contrary, differencing the series leads to an "over-differenced" series which, although stationary, has some undesirable properties related to the non-invertibility of its MA component: there is a parasitic autocorrelation between neighbouring values of the over-differenced series (short cycles dominate the spectrum). Moreover, it becomes impossible to use the usual algorithms for parameter estimation and series prediction (see, for instance, [Hamilton (1994), chapters 4 and 5]).
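The parasitic autocorrelation is easy to demonstrate (an illustrative sketch, not code from Hamilton): differencing a series that is already stationary produces an MA(1) with a unit root in the MA polynomial, whose lag-1 autocorrelation is exactly -0.5:

```python
import numpy as np

rng = np.random.default_rng(1)
e = rng.normal(size=200_000)  # an already-stationary white-noise series
d = np.diff(e)                # over-differenced: d[i] = e[i+1] - e[i]

# d is MA(1) with theta = -1 (non-invertible); its lag-1 autocorrelation
# is theta / (1 + theta^2) = -0.5 -- the dominant "short cycle"
r1 = np.corrcoef(d[:-1], d[1:])[0, 1]
print(r1)  # ≈ -0.5
```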
However, this is a different story. We are talking about peculiarities of autoregressive models.
Thanks, I appreciate the humour. :-)) However, to take the low-frequency component out of context, I want to clarify.
Your posts are always informative, and that makes me want to think through and understand what is stated in them.
So I'm not looking for errors, I'm looking for understanding. And for that I have to clarify details. :-)
The fact that the operation X[i]=Open[i-1]-Open[i] is in fact a series differentiation, occurred to me from the very beginning.
And I kept trying to understand why you were using it for centering; there seemed to be no connection. Now I understand, and thank you again.
The only thing I still do not understand is the mathematical expectation of the series X[i]=Open[i-1]-Open[i]. As far as I can tell, the expectation of this series over the intervals you took is non-zero. Therefore you cannot apply to it statements that concern stationary series with zero expectation.
It is rigorously proven mathematically that no TS can beat, in the long run, a time series constructed by integrating a stationary series with zero expectation (such a series is, with some reservations, an analogue of the price series of currency instruments and resembles the Brownian motion of a particle).
We were told many interesting things about game theory at the institute. That was long ago, so I was quoting from memory...
Perhaps it is correct:
...it is impossible to beat in the long run with any kind of TS a time series constructed by integration of a stationary series with zero correlogram...
Let's construct a series in which each successive term equals the previous one multiplied by a coefficient, for example a = -0.5:
X[i+1] = -0.5*X[i] + sigma, where sigma is a normally distributed random variable with zero expectation.
This is a first-order autoregressive model AR(1) with strong negative autocorrelation (an analogue of a bouncing, mean-reverting market). Sequences satisfying the relation X[i+1] = a*X[i] + sigma are often also called Markov processes. The expectation of this series equals zero over any sufficiently long interval, and yet it is easy to make money on such a market.
This, indeed, contradicts my first statement.
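The "bounce market" AR(1) can be simulated directly (an illustrative sketch; the seed and sample size are arbitrary choices) to confirm both properties at once: zero expectation over a long interval, and strong negative correlation between neighbours:

```python
import numpy as np

rng = np.random.default_rng(2)
a, s, n = -0.5, 1.0, 200_000
x = np.zeros(n)
for i in range(n - 1):
    x[i + 1] = a * x[i] + s * rng.normal()  # X[i+1] = a*X[i] + sigma

r1 = np.corrcoef(x[:-1], x[1:])[0, 1]
print(x.mean(), r1)  # mean ≈ 0, lag-1 autocorrelation ≈ a = -0.5
```

The negative lag-1 correlation is the exploitable structure: the sign of the next increment is predictable better than chance, despite the zero mean.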
Interestingly, for Markov processes with a negative autocorrelation coefficient (an analogue of almost all Forex price series) we can easily obtain a formula for estimating the expected profitability of a TS. What matters is that the following condition holds on the selected timeframe:
|a(t)|*s(t) > Spread, where s is the standard deviation of sigma.
If |a| is close to one, the volatility of the instrument will be much higher than s. This means that if neighbouring values of the series X[i] are strongly correlated, then a series of rather weak perturbations will generate wide price swings. In this sense it is more correct to substitute into the profitability estimate the volatility of the instrument, rather than the standard deviation that characterises the random component of the pricing process.
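The claim that volatility far exceeds s when |a| approaches one follows from the stationary variance of an AR(1) process, Var(x) = s^2/(1 - a^2). A quick check (illustrative; a = 0.9 is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(3)
a, s, n = 0.9, 1.0, 500_000
x = np.zeros(n)
for i in range(n - 1):
    x[i + 1] = a * x[i] + s * rng.normal()

# stationary std of AR(1) is s / sqrt(1 - a^2): weak shocks, wide swings
print(x.std(), s / np.sqrt(1 - a**2))
```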
You are right, grasn, trading without stops is very dangerous! While I was on a business trip, I made one trade without a stop loss and my demo account went to zero :( I opened a new one. I'm now trying to develop my trading strategy with stops.
I will see in a month what will be the result :)
Thank you, the clarification was quite sufficient. "Quite" - in the mathematical sense of the word. :-)
Along the way I have learned many interesting things. And most importantly, the hope of earning on Forex does not contradict mathematical theory!
By the way, I recently had a discussion with grasn about how volatility is measured in Forex. My view was that the range (High-Low) of an instrument is used for this. As far as I know this is not entirely correct, but it is more or less adequate. In relation to your statement, I would like to ask how it is actually calculated. Maybe you can enlighten me? Just to make us happy. :-))
Vol[T]=SQRT[SUM{(High[i-k]-Low[i-k])^2}/(n-1)], where the summation is performed over k=0...n.
What is the connection between T and n? If there is one, of course.
On the right-hand side of the equation, the values High[i] and Low[i] depend on the TimeFrame (T). To a first approximation, Vol[T] is proportional to the square root of the TimeFrame expressed in minutes, multiplied by Vol[1 min]:
Vol[T] = Vol[1 min]*SQRT(T).
n is chosen for reasons of statistical validity, e.g. at least 100 bars.
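The sqrt(T) scaling can be checked on a toy random walk (an illustrative sketch; close-to-close returns stand in for the range-based Vol, and T = 16 minutes is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(4)
r1 = rng.normal(size=60_000)   # toy 1-minute returns
price = np.cumsum(r1)          # the "instrument" as a random walk

T = 16                         # higher timeframe, in minutes
closes = price[T - 1::T]       # every T-th close
rT = np.diff(closes)           # T-minute returns

# Vol[T] / Vol[1 min] ≈ sqrt(T) for an uncorrelated series
print(rT.std() / r1.std(), np.sqrt(T))
```

The sqrt(T) law holds exactly only for uncorrelated increments; for a real instrument with autocorrelated returns, the scaling exponent deviates from 0.5.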
"Forewarned is forearmed :o)". Once I realized the same thing, he who takes the risk, doesn't always drink champagne sometimes, he has to drink plain water. Only consolation in this case is the advice of doctors that water is much healthier than champagne. :о)
Alex, best of luck in the new trading period. We are waiting for your amazing results.
Neutron
The volatility of an instrument on the selected TimeFrame can be calculated using the formula:
Vol[T]=SQRT[SUM{(High[i-k]-Low[i-k])^2}/(n-1)] where summation is performed on k=0...n.
If I am not mistaken, this is the third or fourth definition of volatility I have come across, and they all differ significantly from each other. In our discussion with Yurixx we devoted, if memory serves, considerable space to the very philosophy of this concept as a measure of risk. As I understand it, none of the calculations I am familiar with reflects that essence. More often than not, volatility loosely tracks "large" price movements: if the market is rising, volatility rises too, and it would seem this should be interpreted as increased risk, i.e. a reason not to trade. But then where is the point? Unfortunately, I cannot find a decent place for volatility. Maybe someone can tell me how it can be used.