Hodrick-Prescott filter - page 4

 
Neutron wrote >>

+5

No point.

Well, that's unfair... In my opinion, the difference of moving averages is a great indicator that really shows where the price is. And it introduces minimal distortion into the original signal... ))))))

 

The difference of moving averages is nothing but the first derivative of the faster moving average, and it shows the extrema of that average, not of the quote. This raises several reasonable questions:

First, why go about it in such a roundabout way and estimate the derivative like this when there is a classical method?

Second, using the first derivative in the analysis of a time series such as prices presupposes that the approach is valid here, and it is not! A price series is not smooth (the autocorrelation coefficient of its first differences is negative on all timeframes), so the method simply cannot and does not work here. The consequence of smoothing, in our case, is an inevitable phase lag, which negates any attempt to detect extrema on the quote in time (a small sketch of this lag follows below the list).

Third, I still do not see the point of using a slightly repainting moving average if trading on it is identical to working with a non-repainting one. Why these "tricks"? Is it some kind of game with yourself?
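To illustrate the phase-lag point from the second item, here is a minimal sketch, not from the thread, using a synthetic sine wave as the "price": the extrema of a simple moving average of period N trail the extrema of the series it smooths by roughly (N - 1) / 2 bars.

```python
# Minimal sketch (synthetic data): the peak of a causal SMA lags the peak of
# the underlying series by about (N - 1) / 2 bars.
import numpy as np

N = 15                                   # SMA period (illustrative choice)
t = np.arange(300)
price = np.sin(2 * np.pi * t / 100)      # synthetic smooth "price" wave

sma = np.convolve(price, np.ones(N) / N, mode="valid")  # simple moving average

price_peak = np.argmax(price[:150])              # first peak of the price
sma_peak = np.argmax(sma[:150]) + (N - 1)        # re-align 'valid' output to bar index
print("price peak at bar", price_peak)
print("SMA peak at bar  ", sma_peak)
print("lag in bars      ", sma_peak - price_peak)   # close to (N - 1) / 2 = 7
```

The seven-bar delay printed here is the group delay of the SMA; that is the lag the post above refers to when it says smoothing makes timely detection of extrema impossible.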

 
The autocorrelation coefficient of the price series lies in the range of +0.6 to +0.9.
It is this feature that allows trading to be called a profession,
and allows the use of moving averages, empirical graphical analysis, neural networks,
and even, surprisingly, semi-empirical statistical methods.
 
Korey >> :
The autocorrelation coefficient of the price series lies in the range of +0.6 to +0.9.
It is this feature that allows trading to be called a profession,
and allows the use of moving averages, empirical graphical analysis, neural networks,
and even, surprisingly, semi-empirical statistical methods.

Agreed!

 
Korey wrote >>
The autocorrelation coefficient of the price series lies in the range of +0.6 to +0.9.

If you take a bird's-eye view of the trading problem, what ultimately interests us is price increments, not absolute price values; money is made on price changes.

Therefore, in this case we are talking about the series of first price differences, not the original price series. For the first-difference series (for example, Open[i]-Open[i+1]) the correlation coefficient between neighbouring samples is small (<<1) and always negative.

To apply differential calculus to an arbitrary time series (for example, a Taylor series expansion and a forecasting model built on it, which is essentially what we all try to get out of a moving average), the series of its first differences must be positively autocorrelated: that is what provides the smoothness of the original series. Unfortunately, price series do not satisfy this condition. That is exactly what I meant when I said that moving averages are unpromising in our case: they show history.

By the way, 20 years ago the first differences of price series were weakly but positively correlated, which made it possible to earn with simple models of classical TA. Now the picture is different, and non-trivial approaches to the problem of effective trading are needed.
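A small sketch of the contrast being discussed, on assumed data (a synthetic random walk, not real quotes): the lag-1 autocorrelation of price levels comes out close to +1, as Korey says, while that of the first differences Open[i]-Open[i+1] comes out near zero; on real bar data it is typically small and negative, as Neutron says.

```python
# Contrast lag-1 autocorrelation of LEVELS vs FIRST DIFFERENCES on a
# synthetic random-walk "Open" series (stand-in for real quotes).
import numpy as np

rng = np.random.default_rng(0)
steps = rng.normal(size=10_000)                  # i.i.d. increments
open_ = 1.3000 + 0.0001 * np.cumsum(steps)       # synthetic "Open" series

def lag1_autocorr(x):
    """Correlation coefficient between x[t] and x[t+1]."""
    return np.corrcoef(x[:-1], x[1:])[0, 1]

diff = open_[:-1] - open_[1:]                    # first differences Open[i] - Open[i+1]

print("lag-1 autocorr of levels:     ", round(lag1_autocorr(open_), 3))  # ~ +1
print("lag-1 autocorr of differences:", round(lag1_autocorr(diff), 3))   # ~ 0 here
```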

Constantin wrote >>

Agreed!

Ridiculous.
 
Neutron >> :

Now the picture is different, and non-trivial approaches to the problem of effective trading are needed.

What do you mean by "non-trivial" approaches to the task of effective trading?

 

Good question.

For example, there is an alternative to the Taylor series expansion that works for a time series whose first differences are negatively autocorrelated. It can be obtained explicitly as a consequence of solving the problem for a single-layer neural network with several inputs. For instance, here is the first term of such a decomposition, obtained as the solution for a two-input NN:

where d[i+1] is the predicted (i+1)-th increment of the price series.

Of course, it is not a panacea, but it is at least something non-trivial. Or so it seems to me.
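The expression itself is not reproduced in the thread (it was apparently attached as an image), so the following is only an assumed illustration of the general construction being described: a single-layer linear unit with two inputs, fitted by least squares, that maps the last two increments to a forecast d[i+1] of the next one. The data and fitting method here are placeholders, not the author's formula.

```python
# Assumed sketch: two-input single-layer linear predictor of the next increment.
import numpy as np

rng = np.random.default_rng(1)
price = 1.3000 + 0.0001 * np.cumsum(rng.normal(size=5_000))  # synthetic quotes
d = np.diff(price)                       # increments d[i] = price[i+1] - price[i]

# Inputs: the two most recent increments; target: the next increment.
X = np.column_stack([d[1:-1], d[:-2]])   # (d[i], d[i-1])
y = d[2:]                                # d[i+1]

w, *_ = np.linalg.lstsq(X, y, rcond=None)    # weights of the two-input linear unit
forecast = X @ w                             # d_hat[i+1] = w1*d[i] + w2*d[i-1]

print("fitted weights:", w)
print("sign hit rate: ", np.mean(np.sign(forecast) == np.sign(y)).round(3))
```

On this synthetic random walk the hit rate is of course around 0.5; the sketch only shows the structure of the predictor, not its profitability.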

 
Neutron wrote >> they show history.

And what shows the future, then?

 
Neutron wrote >>

Good question.

For example, there is an alternative to the Taylor series expansion that works for a time series whose first differences are negatively autocorrelated. It can be obtained explicitly as a consequence of solving the problem for a single-layer neural network with several inputs. For instance, here is the first term of such a decomposition, obtained as the solution for a two-input NN:

where d[i+1] is the predicted (i+1)-th increment of the price series.

Of course, it is not a panacea, but it is at least something non-trivial. Or so it seems to me.

In practical terms, it is better not to talk about a single-layer neural network at all: it is just a linear filter with constant weights, nothing more. Oddly enough, "trivial" approaches work quite well when combined with non-trivial thinking. Look at the championship winners: the beauty is in the simplicity; everyone knows these strategies, but not everyone knows how to use them. You can describe price movement with millions of formulas and still lack the main thing, profit.
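A small sketch of the point just made: a single-layer unit with fixed weights and a linear activation, applied to a sliding window of the price, is nothing but an FIR filter, i.e. a weighted moving average. The weights and series here are illustrative only.

```python
# Show that one linear "neuron" over a sliding window equals an FIR filter.
import numpy as np

rng = np.random.default_rng(2)
price = np.cumsum(rng.normal(size=1_000))    # synthetic series

weights = np.array([0.5, 0.3, 0.2])          # constant "neuron" weights (arbitrary)

def single_layer_output(x, w):
    """One linear neuron applied to each sliding window of len(w) samples
    (w[0] weights the newest sample, w[-1] the oldest)."""
    n = len(w)
    return np.array([x[i - n + 1:i + 1][::-1] @ w for i in range(n - 1, len(x))])

nn_out  = single_layer_output(price, weights)
fir_out = np.convolve(price, weights, mode="valid")   # the same thing, as a filter

print("max difference:", np.max(np.abs(nn_out - fir_out)))   # ~ 0: they coincide
```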

 

Anything is possible; the problem is that we do not know everything.

Which is better: a trivial method with a non-trivial approach, or a trivial approach with non-trivial thinking? I don't know... what criterion of "better" to use is a separate topic. You can spend your whole life wandering in the dark in search of something special, or you can use what has long been known to everyone... It's a matter of taste.

I hold the view that there are optimal methods for solving a problem, and they are certainly attainable within the scientific paradigm, without digressions such as "it seems to me" or "everyone does it that way".