a trading strategy based on Elliott Wave Theory - page 185

 
Grasn, thank you for the link and your interest in the problem raised.
In science, there are standard approaches and methods for solving a given problem. The beauty of this approach is its certainty, the availability of proven tools, and consistent success (assuming, of course, that the problem has a solution in principle). Such an approach saves time and guarantees results; it is attractive. By the way, the link you cited can serve as a clear demonstration of how not to solve the problem. Indeed, it is enough to use the well-developed apparatus of spectral analysis of time series, or to analyse the spectral density of a stationary time series determined through its autocorrelation function, in order not to rack our brains over empirical observations about the quantity and quality of Elliott waves. The market is volatile, and relying on a stationary model consisting of, say, five waves is suicidal. It worked once, but six months later the eleven-wave model would be more appropriate. So... shall we empirically re-fit the model to a volatile market every time? That is not an example of rational behaviour.
Yurixx in his posts above seemed to share my point of view, and it would be interesting to see his work in this area.
Regarding your comment about the arbitrariness in the choice of autoregressive model coefficients (if I understood correctly), I must disagree: these coefficients are uniquely determined by the autocorrelation coefficients of the time series under study, by solving the Yule-Walker equations [Yule (1927)], [Walker (1931)].
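As an aside, the Yule-Walker estimation mentioned here can be sketched in a few lines: the AR coefficients are recovered from the sample autocovariances by solving a small linear system. This is a minimal illustrative sketch (not from the original posts); the series, the order, and the random seed are my own assumptions.

```python
# Sketch: estimating AR(p) coefficients from sample autocovariances
# via the Yule-Walker equations. Series and order are illustrative.
import numpy as np

def yule_walker(x, p):
    """Return AR(p) coefficients a[0..p-1] for a (de-meaned) series x."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = len(x)
    # Biased sample autocovariances r[0..p]
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(p + 1)])
    # Toeplitz system R a = r[1..p]
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    return np.linalg.solve(R, r[1:])

# Usage: recover the known coefficient of a synthetic AR(1) process
rng = np.random.default_rng(0)
x = np.zeros(10000)
for t in range(1, len(x)):
    x[t] = 0.7 * x[t - 1] + rng.standard_normal()
a = yule_walker(x, 1)
print(a)  # close to [0.7]
```

The point of the exercise: given enough data, the coefficients are fixed by the autocorrelations, so there is no arbitrariness in choosing them.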
Grasn, could you please tell us more about your research in this area?

Regards.
 
The beauty of this approach is its certainty, availability of proven tools, and consistent success (assuming, of course, there is a solution to the problem in principle). Such an approach saves time and guarantees results.

Please explain what approach you are talking about. What approach ensures unfailing success and guarantees results?

By the way, the link you cited can serve as a clear demonstration of how not to solve the problem. Indeed, it is enough to use the well-developed apparatus of spectral analysis of time series, or to analyse the spectral density of a stationary time series determined through its autocorrelation function, in order not to rack our brains over empirical observations about the quantity and quality of Elliott waves. The market is volatile, and relying on a stationary model consisting of, say, five waves is suicidal. It worked once, but six months later the eleven-wave model would be more appropriate. So... shall we empirically re-fit the model to a volatile market every time? That is not an example of rational behaviour.

Completely agree with your assessment. Any deterministic model is doomed to a short life, and the more deterministic it is, the shorter that life will be. It is only unclear how you manage to combine such a view with what you wrote in the first post:
I'm interested in the possibility of a deterministic description of the pricing mechanism.

By the way, as someone far removed from DSP, I would like you to explain the details of analysing the spectral density of a stationary time series defined via its autocorrelation function. Especially how a time series can be defined via its autocorrelation function.
 
Grasn, could you please elaborate on your research in this area?


OK, I'll try to describe it briefly.


Rosh
grasn, what, in your view (your reasoning), is the difference between Extreme 1 and Extreme 2, and how can one recognise them (distinguish between them) online, at the right edge of the history?

In a way, an alternative approach to choosing a reliable channel.

I'll start with the lyrics. One day I went to see an old friend of mine. From the look in my eyes he immediately understood the reason for my visit and, without asking anything, said: "If you have an idea, first of all sit down, calm down, pour yourself a glass of good cognac and ask the question: why didn't it work for your predecessors?" I was probably not the only one to come up with this idea, but at least I have not come across any analogues in the sources available to me. Never mind; perhaps I simply do not read enough, and I do not claim authorship at all (though I thought it up honestly, while working with neural networks). Its full implementation seems very difficult to me, in places even impossible. But that is no reason not to want to "talk about it :o)". There is still a lot of vagueness in it. If we discuss it, we may find the right path and take it all the way to the end. And if we imagine that together we form one big superbrain, we can probably solve the problem completely. :о)

I am talking about an idea that may become the basis for an alternative forecast of price movement and eventually take its rightful place in developed systems (this is not self-importance, just my personal estimate). In my system I regard the implementation as an auxiliary module, and I see the direct application as an additional criterion for selecting a reliable channel (of course not the only one; having read to the end, you will understand why I want to add "and thank God it's not the only one"). But closer to the point, as old Maupassant used to say.

The main idea
So, I set myself the following goal: to model (predict) price movement based on representing news as signals (sorry, that's the influence of digital signal processing). That's the whole idea. No trends, no cyclicality, nothing else. There is incoming news, and a signal correlated with it.

Assumptions (in brief and not all of them)
At any moment the market is in a single state, which splits into two parallel and related sub-states: waiting for news and reacting to news already received. The market is in this state now; it will be in a minute, in an hour, in a month, always.

The news merely packs information (data, or otherwise knowledge) into a shell and delivers it via various communication channels. And of course it is not news that "moves" the market, but information. It does not matter whether "Uncle Sam" or a trader with a $200 deposit receives and processes this information, directly or indirectly. Make no mistake: news also affects, indirectly, a trader who performs no formal news analysis. Any indicator built on a price series already contains transformed information (the first postulate of TA), and therefore the news. For some people even a received quote can be "news"; ahem, just kidding.

By information, I mean any meaningful data affecting a quote (rumors, reports, forecasts of fundamental data, arrival of fundamental data, elections, etc., etc.) contained in the news.

Limitations (briefly and far from all)
Do we receive all the news? This question cannot be answered in the affirmative. For example, we know nothing about an extreme deal. We may simply not receive all the news because we have chosen a bad provider, and in any case none of us could possibly process everything.

Does news have an impact? I wrote two paragraphs here, mainly for Alex (remember, Alex wrote that news has no influence by the time professionals are already earning on it), but I erased them. Putting philosophy aside, I will state my opinion at once: yes, it does.

The idea is not to build an economic model, and not to slip into that area. The essence is to classify incoming information, match it with a certain signal, and provide "feedback" in the form of signal parameters based on a qualitative analysis of the incoming information.

Model (briefly and far from everything)
If we don't get all the news, what then? The answer probably lies in the fact that we don't need all of it. A player is unlikely to react to every piece of news in a row; most likely he waits, depending on his targets (there are not many), for some specific news. Consequently, based on statistical principles, we need to identify the really important information that the vast majority is waiting for, and then work only with it. In general, such work seems to have been done for us already, and to start the research it can be used and trusted, which is exactly what I did.

Structuring information, and much more ...... are separate interesting topics.

Mathematically, each news item (taken into consideration) is modelled by a certain class of signal (signal in the digital-signal-processing sense) with its own characteristics. A conditional convolution of such signals (an impulse is also a signal) gives the complete forecast signal. Consequently, each significant news item taken into consideration must be matched with the parameters and type of signal used. The parameters of all impulses should be normalised and calculated from the current price level. The forecast should be made on a weekly basis, and the strategic long-term forecast should be based on Friday's forecast value.
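To make the description above concrete, here is a hypothetical sketch of the "news as signals" idea: each significant news item is mapped to a parameterised impulse, and the forecast path is the superposition of all impulses measured from the current price level. The impulse shape (exponential decay) and every parameter here are my own illustrative assumptions, not part of the original description.

```python
# Hypothetical sketch: news items as parameterised impulse signals,
# forecast = superposition of impulses from the current price level.
import math

def impulse(amplitude, decay, t):
    """Exponentially decaying response to one news item, t bars after release."""
    return amplitude * math.exp(-decay * t) if t >= 0 else 0.0

def forecast(current_price, news, horizon):
    """news: list of (release_bar, amplitude, decay). Returns the forecast path."""
    path = []
    for t in range(horizon):
        shift = sum(impulse(a, d, t - t0) for t0, a, d in news)
        path.append(current_price + shift)
    return path

# Usage: one bullish and one delayed bearish news item over 5 bars
print(forecast(1.2000, [(0, 0.0050, 0.5), (2, -0.0030, 0.5)], 5))
```

The real problem, as the post says, is the "feedback": assigning amplitude and decay from a qualitative analysis of the information, which this sketch deliberately leaves as free parameters.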

Application
The applications are varied. You can nail it to the wall, print it out and take it to the bathroom, or you can, for example, refine a reliable channel by simply "fitting" the forecast signal into the channels.

PS: this is not an idea that will give very accurate price predictions, not at all. And solving the Yule-Walker equations, I'm afraid, won't help here, though if Neutron shares his results, I'd appreciate it.

About arbitrariness and research
That is where the real arbitrariness lies, where you can turn around and enjoy yourself to the fullest. And I did: from stunning results (I was running along the ceiling, tripping over the chandelier, overwhelmed with emotion) to the philosophical "yeah... do they even read the news?". :о))

Development of the idea
It is quite possible that the results of a proper implementation will correlate with EWT. To put it crudely but gently: why not approach this theory from the opposite end, i.e. from the "crowd" and its mood?

So, shall we create a "mood formula", dear forum members? :о)
 
analysis of the spectral density of a stationary time series defined through its autocorrelation function


Using an autocorrelation function is one way to calculate the spectrum


Especially about how a time series is defined through its autocorrelation function


that part I did not fully understand myself; perhaps the wording was not exact
 
Hi Serguei!
I will start with the lyrics.

If you now add those charts and half a dozen or so not too complicated rules of market behaviour, you could publish a book on forex trading. No worse than Williams and Elliott. :-)))
 
Hi Serguei!
I'll start with the lyrics.

If you now add those charts and half a dozen or so not too complicated rules of market behaviour, you could publish a book on forex trading. No worse than Williams and Elliott. :-)))


Hello Yuri!
You and I have decided when we will start publishing books. I haven't said goodbye to the idea yet.
:о)))
 
By the way, here is a very good article on visualization of news (events): "MQL4: Working with files. Example of visualization of important market events".

You just need to take a few more steps forward...
 
You and I have decided when we will start publishing books. I haven't given up the idea yet :o)))

But you have to live on something! Forex is good, but it's purely scientific. :-)
 
Grasn, thank you for your thorough response. Very interesting.
In my strategy I use an approach that allows it to be implemented as a Mechanical Trading System (MTS). Even a cursory analysis of possible trading algorithms shows that only an approach based on the analysis of already available historical data satisfies this requirement. In other words, I hypothesised that history repeats itself, and that it is possible to build a strategy exploiting the predictability of an instrument's time series a few steps ahead.
Naturally, this hypothesis required confirmation and the creation of an adequate model of the price-formation process. As a model, it seemed logical to assume that the price additively includes a random component and a deterministic one. This assumption rests on a conjecture about the stabilising role of the central bank (the central bank benefits from keeping the price within a limited band, i.e. there must be a stabilising effect due to a negative feedback loop between price movements and the central bank's actions) and, at the same time, the destabilising role of market players (the crowd tends to herd behaviour, i.e. players benefit from trending price movements). We also do not exclude the presence of a seasonal or cyclical component and possible deterministic trends (directed actions of large players).
Let us introduce some basic concepts:
1. A series is called strictly stationary (or stationary in the narrow sense) if the joint probability distribution of any m observations is the same as that of the same m observations shifted by an arbitrary time interval.
In other words, the properties of a strictly stationary time series do not change when the time origin is shifted. In particular, it follows from strict stationarity that the probability distribution law of the random variable is independent of time, and hence all its main numerical characteristics, including the mean and variance, are also independent of time.
Obviously, the mean defines the constant level around which the analysed time series fluctuates, while the variance (D) characterises the range of these fluctuations. Since the probability distribution law of the random variable is the same at all t, both the law itself and its main numerical characteristics can be estimated from observations.
2. A deterministic linear trend is a directed price movement caused by specific events in the market. The criterion is a non-zero expectation of the forcibly stationarised time series; it can be detected by means of low-pass digital filters.
3. A non-deterministic linear trend is a directed price movement generated by the random pricing process itself. The criterion is a zero expectation of the forcibly stationarised time series; it cannot be detected by low-pass digital filters because of the inevitable phase lag of causal filtering schemes.
4. A real time series in the currency market can be considered an integrated stationary series, and the expectation of the generating stationary series can be assumed to be zero.

The last point follows from the study of the stationary time series obtained by differencing the available real data from quote archives. For convenience, we will speak below about the stationary series, keeping in mind that the real time series is reconstructed from the stationary one by simply integrating the latter. Besides, it follows from point 4 that there are no deterministic directed movements in the Forex market; any such directed movement is random in nature and therefore of no practical interest (trend is not your friend!). Differencing the initial series rids us of stochastic trends, which further simplifies the model.
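The differencing/integration relationship described above can be shown in a couple of lines (a minimal sketch; the price values are arbitrary):

```python
# First differences give the (assumed) stationary increment series;
# the price series is recovered by cumulative summation (integration).
import numpy as np

price = np.array([1.2000, 1.2012, 1.2005, 1.2021, 1.2018])
diff = np.diff(price)                   # stationary increments S(i)
restored = price[0] + np.cumsum(diff)   # integrate back from the first price
print(np.allclose(restored, price[1:])) # True: differencing loses nothing
```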
Thus, we assume that the pricing process can be described by a model that includes a cyclic component and a stationary time series with zero expectation. Whether a cyclical component is present in the price series can be checked by applying Fourier analysis or a narrow-band digital filter to the series. In my practice I have used both methods. The results imply that cycles in the currency market do exist, but they are stochastic, i.e. there are no cycles with a stationary or nearly stationary period. This property, unfortunately, makes it fundamentally impossible to exploit strategies based on the cyclicality of the pricing process. Let me repeat that this conclusion applies only to the foreign exchange market! The stock market does have a stationary seasonal component and deterministic trends, which gives hope that these properties can be exploited in a TS. In light of the above, my opinion is that Elliott theory is applicable only to the stock and futures markets, but not to the foreign exchange market.
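One simple way to run the Fourier-analysis check mentioned above is a periodogram: a cycle with a stable period shows up as a sharp spectral peak, while purely stochastic cycles leave no stable peak. A sketch on synthetic data (the series here is artificial, with a known period of 64 bars, not market data):

```python
# Periodogram check for a cyclic component: synthetic series with a
# known 64-bar cycle plus white noise; the peak frequency recovers it.
import numpy as np

rng = np.random.default_rng(1)
n = 1024
t = np.arange(n)
series = np.sin(2 * np.pi * t / 64) + rng.standard_normal(n)

spectrum = np.abs(np.fft.rfft(series - series.mean())) ** 2
freqs = np.fft.rfftfreq(n)                 # cycles per sample, 0..0.5
peak = freqs[np.argmax(spectrum[1:]) + 1]  # skip the zero frequency
print(1 / peak)                            # ≈ 64 samples per cycle
```

On a real price series one would look at whether such a peak stays at the same frequency across different segments of history; the post's claim is precisely that on Forex it does not.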
As a result, our model contains only two components: a deterministic one and a random one. The price-formation process can be described as the market's memory of previous price jumps, each with its own decreasing weight, plus a random component. In the general case we need to reasonably limit the number of terms involved in price formation and find a way to calculate the weights from available and computable parameters characterising the stationary process of interest, and also to determine the parameters of the random component, which is not a difficult task. The next, (i+1)-th, price increment is then determined by the sum of the n previous increments S(i-k), each multiplied by its weight a(k), monotonically decreasing with distance from the leading edge of history, plus a random variable sigma with a known distribution law, zero expectation and known standard deviation:
S(i+1) = SUM(a(k)*S(i-k)) + sigma, where the summation runs over all k from 0 to n.
Thus, we are dealing with an autoregressive model of n-th order.
In principle, we would need the exact form of the random variable only if we wanted to obtain a time series S(i) completely identical (in its characteristics) to the generating one, but that task seems superfluous to me. Indeed, we are interested only in the predictive capability of the model, which inevitably suffers from the uncertainty introduced by the random term; but given the random sign of the introduced error, we may safely say that over a large number of runs the prediction error associated with the random term averages out to zero! So, finally, our model looks quite simple:
S(i+1) = SUM(a(k)*S(i-k)), where the summation runs over all k from 0 to n.
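The final model amounts to a one-step-ahead linear prediction over the last n+1 increments. A minimal sketch (the history and the coefficients here are illustrative numbers, not values computed from Yule-Walker):

```python
# One-step-ahead prediction with the model S(i+1) = sum_k a(k)*S(i-k),
# the random term dropped as argued above. All numbers are illustrative.
def predict_next(s, a):
    """s: history of increments (most recent last); a: coefficients a[0..n]."""
    return sum(a[k] * s[-1 - k] for k in range(len(a)))

# Usage: an AR(2)-style prediction with weights decreasing with lag
history = [0.4, -0.1, 0.3]
coeffs = [0.5, 0.2]          # a(0) weights the most recent increment
print(predict_next(history, coeffs))  # 0.5*0.3 + 0.2*(-0.1) = 0.13
```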

The spectral density of an n-th order autoregressive process is given by the formula:
p(omega) = 2D / |1 - SUM(a(k)*exp{-i*2*pi*k*omega})|^2, where the summation runs over all k from 1 to n,
i = SQRT(-1) and 0 <= omega <= 1/2.
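The formula can be evaluated directly. A small sketch for an illustrative AR(1) with a(1) = 0.7 and D = 1 (these values are my own example): for a positive first coefficient the density is largest at omega = 0, i.e. most of the power sits at low frequencies.

```python
# Direct evaluation of the AR spectral density
# p(omega) = 2D / |1 - sum_{k=1..n} a(k)*exp(-i*2*pi*k*omega)|^2.
import numpy as np

def ar_spectral_density(a, D, omega):
    """a: coefficients a(1)..a(n); D: variance; 0 <= omega <= 1/2."""
    k = np.arange(1, len(a) + 1)
    denom = np.abs(1 - np.sum(a * np.exp(-1j * 2 * np.pi * k * omega))) ** 2
    return 2 * D / denom

a = np.array([0.7])                       # illustrative AR(1)
D = 1.0
low = ar_spectral_density(a, D, 0.0)      # 2/(1-0.7)^2 ≈ 22.22
high = ar_spectral_density(a, D, 0.5)     # 2/(1+0.7)^2 ≈ 0.69
print(low, high)
```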
 
Neutron thanks, very interesting approach. I'll take a short time-out to think about it.

Preliminarily, I note that
... I hypothesized that history repeats itself and it is possible to build a strategy exploiting the property of predictability several steps ahead of the time series of an instrument... As a model it seemed logical to assume that price additively includes a random component and a deterministic one


Reflected in my research as well:

History does repeat itself, and this is demonstrated by the Hurst exponent; but it estimates the probability that the established structure will repeat/continue (as I wrote earlier), which somewhat changes the approach to the TS.
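For reference, a rough rescaled-range (R/S) estimate of the Hurst exponent mentioned here can be sketched as follows. This is a simplified sketch without the usual small-sample corrections, and the window sizes are illustrative; H above 0.5 suggests persistence (the established structure tends to continue), H below 0.5 anti-persistence.

```python
# Simplified rescaled-range (R/S) estimate of the Hurst exponent:
# slope of log(R/S) versus log(window size) over several window sizes.
import numpy as np

def hurst_rs(x, window_sizes):
    x = np.asarray(x, dtype=float)
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_vals = []
        for start in range(0, len(x) - n + 1, n):
            w = x[start:start + n]
            dev = np.cumsum(w - w.mean())   # cumulative deviation from the mean
            r = dev.max() - dev.min()       # range R
            s = w.std()                     # standard deviation S
            if s > 0:
                rs_vals.append(r / s)
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean(rs_vals)))
    return np.polyfit(log_n, log_rs, 1)[0]  # fitted slope = H

# White noise has no memory, so H should come out near 0.5
rng = np.random.default_rng(2)
h = hurst_rs(rng.standard_normal(4096), [16, 32, 64, 128, 256])
print(h)
```

Note that the uncorrected R/S statistic is known to be biased upward on short windows, so values slightly above 0.5 on random data are expected.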

Correctly performed "feedback" normalisation, i.e. matching signal parameters to the quality of the information, yields on the whole a locally deterministic component. There really are no cycles (they don't sell galoshes :o), but the basic information (M0, M1, rates, etc.) does have cyclicities. The baseline ("near-deterministic") forecast is built on that cyclical information.

The only problem is that the degree of influence of specific information changes over time, and once you finish the normalisation over history, you can start it all over again :o(. So far this approach is really no more than a scientific hobby.