Building a trading system using digital low-pass filters - page 19

 
Mathemat, a gift for you - J. Bendat, A. Piersol, "Applied Random Data Analysis" (http://dsp-book.narod.ru/bendat.djv). The authors describe, with examples, the use of the reverse arrangements (inversions) test to check the stationarity of a random process. I haven't gone into the details and rigour of the method itself, but superficially it looks credible. I think this is the direction to dig in.

For example, here: http://edu.secna.ru/main/review/2001/n3/MONA2001/Morozova.pdf - this work uses it to justify the stationarity of the results of certain wavelet transforms of price series. In fact, it's exactly what you needed.
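For readers who want to try this, here is a minimal sketch of the reverse arrangements (inversions) test in the spirit of Bendat and Piersol: split the series into segments, take each segment's mean square, and count reverse arrangements among those values. The segment count and the normal-approximation threshold are my own illustrative choices, not anything from the posts above.

```python
import numpy as np

def reverse_arrangements(x):
    """Count reverse arrangements: pairs (i, j) with i < j and x[i] > x[j]."""
    x = np.asarray(x)
    n = len(x)
    return sum(int(x[i] > x[j]) for i in range(n) for j in range(i + 1, n))

def stationarity_check(series, n_segments=20):
    """Rough stationarity check: count reverse arrangements among the
    segment mean squares.  Under stationarity the count A has mean
    n(n-1)/4 and variance n(2n+5)(n-1)/72 (normal approximation)."""
    segs = np.array_split(np.asarray(series, float), n_segments)
    ms = [np.mean(s ** 2) for s in segs]
    a = reverse_arrangements(ms)
    n = len(ms)
    mean_a = n * (n - 1) / 4.0
    var_a = n * (2 * n + 5) * (n - 1) / 72.0
    z = (a - mean_a) / np.sqrt(var_a)
    return a, z  # |z| > 1.96 suggests non-stationarity at the ~5% level

rng = np.random.default_rng(0)
print(stationarity_check(rng.normal(size=2000)))  # i.i.d. noise: small |z|
print(stationarity_check(np.arange(2000.0)))      # trend: z ~ -6.2, non-stationary
```

On a trending series the segment mean squares grow monotonically, so the reverse-arrangement count collapses to zero and the statistic rejects stationarity decisively.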
 
grasn:
to Neutron

Can you briefly explain what the filter predicts for you and Prival? Thanks in advance. Did you really build the AF???

Glad to help. Do you happen to have a detailed algorithm for its implementation? :о)


I don't know what Prival's filter predicts, but mine predicts nothing :-(

I don't understand what AF is... Judge for yourself: I run the Predict function on a smoothed time series with phase lag and get a less smooth series with a smaller phase lag, but in smoothing quality it is no better than the same LPF with a smaller averaging window, and at large horizons it is noticeably weaker than the latter (see the attached AVI). I.e. the predictor works outward from the smoothed series and "crumbles" as the horizon approaches the original series, while the LPF, on the contrary, starts from the original series and gradually moves away from it, becoming smoother... This result is to be expected: you cannot extract more information from a series than it contains, even by smoothing it beforehand - you can't cheat nature! Although there was a picture on the forum demonstrating a neural-network-based LPF with (almost) no phase lag and excellent smoothing quality! If that isn't nonsense, we have something to work on.

P.S. I don't have an algorithm for the Predict function.


Yurixx:

But what happens if you don't increase the forecast horizon, but instead run the predictor on the obtained result, i.e. on the thin black line?

So you're suggesting running the predictor on the result of its own prediction? After all, the thin black line is the forecast of the moving average (the thick blue line) with an ever-increasing horizon...

Explain, please.

 
bstone wrote: Mathemat, a gift for you - J. Bendat, A. Piersol, "Applied Random Data Analysis"
Thank you very much, bstone. Already downloaded it. Let's see what these authors say about stationarity...
 
Neutron:
Yurixx:

But what happens if you don't increase the forecast horizon, but instead run the predictor on the obtained result, i.e. on the thin black line?

So you're suggesting running the predictor on the result of its own prediction? After all, the thin black line is the forecast of the moving average (the thick blue line) with an ever-increasing horizon...



Exactly. And why not? I realise, of course, that the result of these actions, performed perhaps several times, cannot ultimately yield the price series itself - there are no miracles. But it's interesting to see how this algorithm works. :-) It does work on past data, it doesn't look into the future?
 
bstone:
Mathemat, a gift for you - J. Bendat, A. Piersol, "Applied Random Data Analysis" (http://dsp-book.narod.ru/bendat.djv). The authors describe, with examples, the use of the reverse arrangements (inversions) test to check the stationarity of a random process. I haven't gone into the details and rigour of the method itself, but superficially it looks credible. I think this is the direction to dig in.

For example, here: http://edu.secna.ru/main/review/2001/n3/MONA2001/Morozova.pdf - this work uses it to justify the stationarity of the results of certain wavelet transforms of price series. In fact, it's exactly what you needed.


Thank you very much!!!
 
bstone:
Mathemat, a gift for you - J. Bendat, A. Piersol, "Applied Random Data Analysis" (http://dsp-book.narod.ru/bendat.djv). The authors describe, with examples, the use of the reverse arrangements (inversions) test to check the stationarity of a random process. I haven't gone into the details and rigour of the method itself, but superficially it looks credible. I think this is the direction to dig in.

For example, here: http://edu.secna.ru/main/review/2001/n3/MONA2001/Morozova.pdf - this work uses it to justify the stationarity of the results of certain wavelet transforms of price series. In fact, it's exactly what you needed.

Thanks for the valuable site - and this one is great too: http://dsp-book.narod.ru/KM.djvu
 
Prival:
bstone:

Mathemat, a gift for you - J. Bendat, A. Piersol, "Applied Random Data Analysis" (http://dsp-book.narod.ru/bendat.djv). The authors describe, with examples, the use of the reverse arrangements (inversions) test to check the stationarity of a random process. I haven't gone into the details and rigour of the method itself, but superficially it looks credible. I think this is the direction to dig in.

For example, here: http://edu.secna.ru/main/review/2001/n3/MONA2001/Morozova.pdf - this work uses it to justify the stationarity of the results of certain wavelet transforms of price series. In fact, it's exactly what you needed.



Thanks for the valuable site - and this one is great too: http://dsp-book.narod.ru/KM.djvu

Wow! It turns out everything is already there. It remains only to apply it...
 
Yurixx:

I understand, of course, that the result of these actions, performed perhaps several times, cannot ultimately yield the price series itself - miracles don't happen. But it's interesting to see how this algorithm works. :-) It works on past data, it doesn't look into the future.

Yes, it only works on past data.

The interesting thing is that a similar prediction result, using the Predict function, can be obtained without any tricks - simply expand the LPF-smoothed series in a Taylor series in the left neighborhood of each point (so as not to look into the "future") and then extrapolate the required number of steps ahead. You might find it interesting, grasn - instead of digging into the algorithm of the function built into Mathcad, take the Taylor series and play with it, trim it, see where it leads...
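Neutron's idea can be sketched as follows. This is my own minimal illustration, not his actual Mathcad code: estimate the derivatives of the smoothed series with one-sided backward finite differences (so no future samples are used), then extrapolate with a truncated Taylor polynomial.

```python
import numpy as np
from math import comb, factorial

def sma(x, window):
    """Causal simple moving average (uses only past values)."""
    x = np.asarray(x, float)
    out = np.full(len(x), np.nan)
    for t in range(window - 1, len(x)):
        out[t] = x[t - window + 1:t + 1].mean()
    return out

def taylor_extrapolate(y, t, horizon, order=2):
    """Forecast y[t + horizon] from a truncated Taylor series at t.
    The k-th derivative is estimated by the k-th backward finite
    difference, so only points at and to the left of t are used."""
    pred = 0.0
    for k in range(order + 1):
        dk = sum((-1) ** j * comb(k, j) * y[t - j] for j in range(k + 1))
        pred += dk / factorial(k) * horizon ** k
    return pred

# demo: on an exactly linear series the forecast is exact
y = 3.0 * np.arange(50) + 1.0
print(taylor_extrapolate(y, 30, 5))  # 106.0 = 3 * 35 + 1
```

On real prices one would first smooth (e.g. `smooth = sma(prices, 13)`, window size is an arbitrary example) and extrapolate the smooth series; as Neutron notes below, the forecast degrades quickly once the horizon approaches the phase lag of the average.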

In the figure the red dots are the price series, the red line is the moving average, the blue line is the Taylor-series extrapolation and the black line is Predict. The forecast horizon is the same in both cases and equals 5 samples. We can see that the two forecasts behave almost identically; their behaviour as the horizon grows up to the phase lag of the moving average can be seen in the attached animation. Unfortunately, both tools "fall apart" as they approach the prediction limit, which always coincides with the phase lag of the moving average used! There seem to be two mutually inverse mappings - smoothing by integration, and recovering the raw data from the smooth by extrapolation of one kind or another. But we cannot, in principle, anticipate (predict) the behaviour of price-type series this way, because the smoothed series contains no (or very little of) the information needed for that. By the way, these predictors lead a synthetically generated test series perfectly, which lets us hope for the potential possibility of a leading indicator - but only as long as the amplitude of the noise component does not exceed the useful signal.

Files:
2.zip  910 kb
 

to Neutron

I don't understand what AF is...

AF is short for "adaptive filter". My point is that the goals discussed, both in this thread and in others, can only be reached with adaptive filtering. There is no other way, and, roughly speaking, this:

Judge for yourself: I run the Predict function on a smoothed time series with phase lag and get a less smooth series with a smaller phase lag, but in smoothing quality it is no better than the same LPF with a smaller averaging window, and at large horizons it is noticeably weaker than the latter (see the AVI). I.e. the predictor works outward from the smoothed series and "crumbles" as the horizon approaches the original series, while the LPF, on the contrary, starts from the original series and gradually moves away from it, becoming smoother... This result is expected: it is impossible to extract more information from the series, even having smoothed it beforehand - you can't cheat nature!

makes no practical sense and is, by and large, "self-deception".

Although there was a picture on the forum demonstrating a neural-network-based LPF with (almost) no phase lag and excellent smoothing quality! If that isn't nonsense, then we have some work to do.

I've been working with the NeuroSolutions package; if you install it, you'll find a detailed example of a neural-network-based LPF.
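Since grasn only names adaptive filtering without giving an algorithm, here is a generic textbook example of the idea: a one-step-ahead LMS (least-mean-squares) predictor whose FIR weights adapt to the running prediction error. The tap count, step size and test signal are my own illustrative choices.

```python
import numpy as np

def lms_predict(x, n_taps=8, mu=0.01):
    """One-step-ahead prediction with an LMS adaptive FIR filter.
    Weights are updated from the prediction error at each step, so
    the filter tracks slow changes in the signal's statistics."""
    x = np.asarray(x, float)
    w = np.zeros(n_taps)
    preds = np.full(len(x), np.nan)
    for t in range(n_taps, len(x)):
        u = x[t - n_taps:t][::-1]   # most recent sample first
        preds[t] = w @ u            # predict x[t] from the past
        e = x[t] - preds[t]         # prediction error
        w += mu * e * u             # LMS weight update
    return preds, w

# a noisy sinusoid: after a burn-in the predictor locks on
rng = np.random.default_rng(1)
n = np.arange(4000)
x = np.sin(2 * np.pi * n / 50) + 0.1 * rng.normal(size=len(n))
preds, w = lms_predict(x)
err = np.mean((x[2000:] - preds[2000:]) ** 2)
print("late-sample one-step MSE:", err)
```

The late-sample MSE comes out far below the signal variance (~0.5), i.e. the filter has adapted; on price series the gain is of course much more modest, which is grasn's whole point about needing to do this competently.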

The interesting thing is that a similar prediction result, using the Predict function, can be obtained without any tricks - simply expand the LPF-smoothed series in a Taylor series in the left neighborhood of each point (so as not to look into the "future") and then extrapolate the required number of steps ahead. You might find it interesting - instead of digging into the algorithm of the Mathcad built-in function, take the Taylor series and play with it, trim it, see where it leads...

Prediction by Taylor-series extrapolation doesn't interest me at all and won't give comparable forecasts - maybe one success per 100 tries :o) But thanks for the advice.

Neutron, you misunderstand a bit - I'm not digging into the "prediction" algorithm. The simple thoughts I posted are about two years old. If I had really needed it, I would have found the sources and done it - it's not that difficult. As I wrote, Predict, like any other such algorithm, DOES NOT WORK: predicting a series from its statistics gives very poor results. The only way to apply it is to move to generalized characteristics of the forecast series, and that has to be done competently. Systems on this basis are profitable - but not interesting to me.

to mql4-coding
Wow! It turns out everything is already there. It remains only to apply it...

I think I've already read this many times on various forums... but still - good luck :o)))

Oh man, I've lost all my links. Anyway, there was a forum thread, quite a long one, where guys seriously set about two things:

  • writing an open-source filter package (as I understand it, people got fed up with the authors of those FATLs and SATLs - I don't remember what they're properly called);
  • developing a strategy based on those filters.

There seemed to be a lot of useful stuff there. I'm disappointed in this approach, though - I don't think it's quite the right one.

 

How do different time-series averaging algorithms compare with each other? How does one choose the optimal averaging window?

Indeed, if you choose a large window, the signal will lag significantly due to the inevitable phase delay; on the other hand, if you choose a small window, the averaging quality will be unsatisfactory. It seems the optimum lies somewhere in the middle - but what should we compare the resulting average against?

Suppose we had a hypothetical LPF with zero phase delay; then we could compare against it. Such a "magic" filter can be realized by running an ordinary (non-future-looking) LPF forward and then backward along the analyzed series and taking the middle part of the graph, thus excluding from the analysis the inevitable edge effects at the right and left ends of the series (for this reason such an LPF cannot be used in a TS).
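This "magic" filter is essentially forward-backward (zero-phase, filtfilt-style) filtering. A minimal sketch with a first-order recursive LPF; the edge-trim heuristic is my own choice:

```python
import numpy as np

def ema(x, alpha):
    """First-order recursive low-pass (EMA), causal: lags the input."""
    y = np.empty_like(x)
    y[0] = x[0]
    for t in range(1, len(x)):
        y[t] = alpha * x[t] + (1 - alpha) * y[t - 1]
    return y

def zero_phase_lpf(x, alpha, trim=None):
    """Run the same LPF forward and then backward: the phase lags
    cancel, leaving a zero-lag smooth.  The price is that each output
    depends on future samples, so only the middle part is valid -
    trim the edges, and never use this in a live trading system."""
    x = np.asarray(x, float)
    y = ema(ema(x, alpha)[::-1], alpha)[::-1]
    if trim is None:
        trim = int(5 / alpha)  # heuristic: several filter time constants
    y[:trim] = np.nan
    y[-trim:] = np.nan
    return y

n = np.arange(1000)
x = np.sin(2 * np.pi * n / 200)
causal = ema(x, 0.1)
smooth = zero_phase_lpf(x, 0.1)

# away from the edges the zero-phase version peaks with the input,
# while the causal EMA peaks roughly (1 - alpha) / alpha samples late
lag0 = np.argmax(smooth[100:400]) - np.argmax(x[100:400])
lag1 = np.argmax(causal[100:400]) - np.argmax(x[100:400])
print(lag0, lag1)
```

SciPy users can get the same effect (with better edge handling) from `scipy.signal.filtfilt`; the hand-rolled version above just makes the mechanism explicit.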

In the left figure the dots show the series, the red line the symmetric (zero-lag) LPF, and the blue and black lines the conventional moving average with different averaging windows. For each window we compute the standard deviation, over the whole set of points, between the ideal filter and the filter under study, and normalize it by the standard deviation between the series points and the ideal LPF. This rids us of the arbitrariness associated with the choice of the reference LPF's averaging window. The choice of the standard deviation here is not accidental: this quantity equally well reflects both the smooth curve's drift to the right due to phase delay and the growth of its oscillation range at a narrow averaging window.

To analyze smoothing quality, take the standard moving average with a rectangular window (blue line in Fig. 2), with a triangular window (black line), and a 1st-order Butterworth filter (red line). With a small window the filters do not smooth the series: the large "chatter" tends toward the volatility of the original series. As the window grows, each filter shows an optimum, after which the smoothing properties deteriorate again due to the increasing phase delay. The best result among the three algorithms is shown by the trivial moving average with a rectangular window of 7-8 bars! This is the optimum for this type of LPF: it suppresses the noise component by about 15%, while at a window wider than 17-18 bars it loses its smoothing advantage, giving no benefit over the original series. Recall that if we computed the same statistic for the symmetric zero-lag LPF itself, we would get zero, i.e. 100% smoothing - the ideal case. So far we have a 15% approach to the ideal. I wonder whether we can get more?
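Neutron's comparison can be reproduced in outline like this. The "ideal" reference here is a centered moving average (it looks into the future, so it is offline-only), and the metric is the normalized standard deviation he describes. The test signal, window sizes and kernels are my own illustrative choices; in particular I use a linearly weighted ramp for the "triangular" window.

```python
import numpy as np

def causal_ma(x, w, kernel):
    """Causal weighted moving average over the last w bars."""
    k = kernel(w).astype(float)
    k = k / k.sum()
    out = np.full(len(x), np.nan)
    for t in range(w - 1, len(x)):
        out[t] = k @ x[t - w + 1:t + 1]
    return out

def centered_ma(x, w):
    """Zero-lag reference: symmetric window, uses future samples."""
    out = np.full(len(x), np.nan)
    h = w // 2
    for t in range(h, len(x) - h):
        out[t] = x[t - h:t + h + 1].mean()
    return out

def quality(x, ideal, filt):
    """std(ideal - filt) / std(x - ideal): 0 is a perfect zero-lag
    smooth; values near 1 mean no better than the raw series."""
    m = ~np.isnan(filt) & ~np.isnan(ideal)
    return np.std(ideal[m] - filt[m]) / np.std(x[m] - ideal[m])

rect = lambda w: np.ones(w)
ramp = lambda w: np.arange(1, w + 1)  # more weight on recent bars

rng = np.random.default_rng(2)
n = np.arange(3000)
x = np.sin(2 * np.pi * n / 200) + 0.3 * rng.normal(size=len(n))
ideal = centered_ma(x, 11)

for w in (2, 4, 8, 16, 32):
    print(w,
          round(quality(x, ideal, causal_ma(x, w, rect)), 3),
          round(quality(x, ideal, causal_ma(x, w, ramp)), 3))
```

The sweep shows the U-shape Neutron describes: the metric first falls as the window suppresses noise, then rises again as phase delay dominates. The exact optimum depends on the signal, so the 7-8 bar figure above should not be expected to transfer to this toy example.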

Thus we have a tool that allows us to estimate the smoothing properties of an LPF objectively. If Prival gives us the code of his ACF-based adaptive Kalman filter, we will immediately put it (the filter) in its place of honour, and North Wind will get an answer to his by-now rhetorical question...