Adaptive digital filters - page 4

 

Prival

I apologise, I was not careful in my remarks. Personally, I never meant to express any doubts about your competence in DSP matters. I think the same is true for grasn (I hope grasn will forgive me for "signing off" on his behalf). It is about the very idea and the research methodology. Nothing personal. I regard all authors who actively "dig into mathematical methods" on this forum strictly positively, in view of the uniqueness of this community and of the prospects it has. But I cannot agree with your suggestion, because I completely disbelieve in polynomials as a tool with any kind of "predictive power". We are talking about time intervals of less than a day, and you want to exploit your idea precisely on small intervals. I simply do not see what would force a polynomial - in essence a function fully adapted to the signal (according to some criterion) - to produce a forecast of price behaviour. The prediction will always be about 50/50. This is due both to the processes occurring in the "market" and to the form in which the signal is represented, which in its essence completely distorts the picture. If you want to use DSP in trading - you are welcome, but first prepare adequate data for DSP. The "signal" itself is certainly present in the price, but. But the level of that signal (as Mathemat seems to have correctly pointed out) is many times lower than the "noise" (although there is no "noise" in the "market"). On top of this comes the non-stationarity of the signal itself. As a result, almost none of the traditional methods work. I do not think this is because DSP theory is wrong - of course it is not; it is just that the signal here is completely different. A signal in which a huge part of the information is simply lost, and in which, paradoxically, a great deal of the information is simply unnecessary. You said that you are a military man, so imagine that all your instruments are clogged with interference behind which you cannot see the signal from the enemy aircraft, and that the interference is of very high quality. But if you go outside and look at the sky, you will see everything at once. Still, aiming and shooting from the hip is not the best solution. :)

And thank you for the present, I will definitely find time to get acquainted with it.

 
I suggest we switch to another task: we need to convert the usual chart representation (based on bars with equal astronomical time intervals) into one with as few catastrophes as possible (preferably with a stationary p.d.f. of returns). This, by the way, may be one of the steps of adequate data preparation for DSP that NorthernWind mentioned. Catastrophes (especially micro-catastrophes within flats) break practically all known traditional indicators. The conversion, by the way, does not have to be one-to-one at all (many indicators are not one-to-one operations on the data, and that does not bother us).

If there are valid arguments against it, please join the critics.
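
For illustration only, a minimal Python sketch of one conversion of this kind: resampling the price series into constant-range "bricks", so that a new bar appears only after the price has moved by a fixed amount H rather than after a fixed astronomical interval. The function name and the brick size H are assumptions made for the example, not anything prescribed above.

```python
# Sketch: convert an equal-time price series into constant-range "bricks".
# A brick closes only after price has moved by at least H from the previous
# brick close, so the X axis becomes event-driven rather than astronomical.
# H and the function name are illustrative assumptions.

def to_range_bricks(prices, H):
    """Return the sequence of brick-close prices for a fixed range H."""
    if not prices:
        return []
    bricks = [prices[0]]                 # the first brick opens at the first price
    for p in prices[1:]:
        # emit bricks while the move from the last close is at least H
        while abs(p - bricks[-1]) >= H:
            step = H if p > bricks[-1] else -H
            bricks.append(bricks[-1] + step)
    return bricks

if __name__ == "__main__":
    quotes = [1.2000, 1.2003, 1.2011, 1.2008, 1.1995, 1.2020]
    print(to_range_bricks(quotes, H=0.0010))
```

Whether such a series really has a more stationary p.d.f. of returns is exactly the kind of thing that would have to be checked on data.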
 
NorthernWind:

Prival

I apologise, I was not careful in my remarks. Personally, I never meant to express any doubts about your competence in DSP matters. I think the same is true for grasn (I hope grasn will forgive me for "signing off" on his behalf). It is about the very idea and the research methodology. Nothing personal. I regard all authors who actively "dig into mathematical methods" on this forum strictly positively, in view of the uniqueness of this community and of the prospects it has. But I cannot agree with your suggestion, because I completely disbelieve in polynomials as a tool with any kind of "predictive power". We are talking about time intervals of less than a day, and you want to exploit your idea precisely on small intervals. I simply do not see what would force a polynomial - in essence a function fully adapted to the signal (according to some criterion) - to produce a forecast of price behaviour. The prediction will always be about 50/50. This is due both to the processes occurring in the "market" and to the form in which the signal is represented, which in its essence completely distorts the picture. If you want to use DSP in trading - you are welcome, but first prepare adequate data for DSP. The "signal" itself is certainly present in the price, but. But the level of that signal (as Mathemat seems to have correctly pointed out) is many times lower than the "noise" (although there is no "noise" in the "market"). On top of this comes the non-stationarity of the signal itself. As a result, almost none of the traditional methods work. I do not think this is because DSP theory is wrong - of course it is not; it is just that the signal here is completely different. A signal in which a huge part of the information is simply lost, and in which, paradoxically, a great deal of the information is simply unnecessary. You said that you are a military man, so imagine that all your instruments are clogged with interference behind which you cannot see the signal from the enemy aircraft, and that the interference is of very high quality. But if you go outside and look at the sky, you will see everything at once. Still, aiming and shooting from the hip is not the best solution. :)

Thanks for the present, I will definitely find time to look it over.

Have you ever thought that support (resistance) lines are first-degree polynomials (the straight-line equation) y(x)=a*x+b? When the price rebounds off some strong resistance level, or a correction begins, the curve at that point can be described by a second-degree polynomial y(x)=c*x^2+a*x+b. For some stable oscillations in a channel you can use least squares to find a polynomial of a higher degree. That is, the degree of the polynomial has to be chosen according to some criterion (teach the computer to do it the way a human does). Besides, many people have already come to the conclusion that if they knew when to start forming, say, a PriceChannel (the sampling depth), they could build a pretty good TS.
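
As a rough illustration of the degree-selection idea (not Prival's actual algorithm), here is a small numpy sketch that fits polynomials of several degrees by least squares and picks one with a penalized-residual criterion; the criterion, the degree range and the toy data are assumptions of the example.

```python
import numpy as np

def fit_best_polynomial(y, max_degree=3):
    """Fit polynomials of degree 1..max_degree by least squares and pick the
    degree with a simple AIC-like penalized-residual score (illustrative)."""
    x = np.arange(len(y), dtype=float)
    best = None
    for deg in range(1, max_degree + 1):
        coeffs = np.polyfit(x, y, deg)                 # least-squares fit
        resid = y - np.polyval(coeffs, x)
        n, k = len(y), deg + 1
        # extra coefficients must buy a real reduction in residual variance;
        # the small epsilon guards against log(0) on an exact fit
        score = n * np.log(np.mean(resid ** 2) + 1e-12) + 2 * k
        if best is None or score < best[0]:
            best = (score, deg, coeffs)
    return best[1], best[2]

if __name__ == "__main__":
    y = np.array([1.30, 1.31, 1.33, 1.36, 1.40, 1.45])  # toy series, exactly a parabola
    degree, coeffs = fit_best_polynomial(y)
    print("chosen degree:", degree)                     # 2 for this toy series
```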

But the proposed idea covers this: we can set the polynomial degree plus the depth N, say a week, and the algorithm itself will choose whether to use the whole sample N or only part of it; it can take any number of samples for analysis, from the last 2 up to N, and select a polynomial. And say there is a gap - the variance there is at its maximum - so the algorithm should take the last 2 points and simply draw a straight line through those 2 points, lag = 0. Something like that.
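
A sketch of the depth-selection half of that idea. The criterion here - the unbiased residual variance of a least-squares line over the last k samples - is my own illustrative choice, and taking just the last 2 points is the degenerate fallback described above for a gap.

```python
import numpy as np

def adaptive_depth_line(y, max_depth=None):
    """Pick how many of the most recent samples to fit with a least-squares line.
    Depths from 3 up to N are compared by unbiased residual variance; falling
    back to just the last 2 points (a line with zero lag) is the degenerate
    case mentioned above. Everything here is an illustrative assumption."""
    y = np.asarray(y, dtype=float)
    N = len(y) if max_depth is None else min(max_depth, len(y))
    best = None                                   # (variance, depth, (slope, intercept))
    for depth in range(3, N + 1):
        window = y[-depth:]
        x = np.arange(depth, dtype=float)
        slope, intercept = np.polyfit(x, window, 1)
        resid = window - (slope * x + intercept)
        var = np.sum(resid ** 2) / (depth - 2)    # unbiased residual variance
        if best is None or var < best[0]:
            best = (var, depth, (slope, intercept))
    return best[1], best[2]

if __name__ == "__main__":
    series = [1.20, 1.21, 1.22, 1.23, 1.35, 1.36, 1.37, 1.38]  # a "gap" after index 3
    depth, line = adaptive_depth_line(series)
    print(depth)   # a short depth: only the post-gap samples fit one line well
```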

It is just a simple idea, one of many; maybe something good will come of it. I tried to show, as clearly as possible, to those who want to build something adaptive how to do it without resorting to complex terms. And as for "first prepare adequate data for DSP" - yes, I do exactly that, because I believe the X axis is also a random variable, and this should not be forgotten. On the Y axis we have already invented a lot, but on the X axis we still build bars the way they did 100 years ago (with a constant interval). I have written about it here ('Tick builders. Optimization. DDE in VB (VBA)'); Renat even gave me a present. I am now thinking about how to prepare the data correctly for DSP :-), and I also keep wondering what the signal is here (the useful component, what moves this curve), and whether there is any noise at all.
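
Since the remark about the X axis is concrete, here is one simple way to make the time axis event-driven: bars built from a fixed number of ticks instead of a fixed number of minutes. The (timestamp, price) tick format and the bar layout are assumptions of the sketch.

```python
# Sketch: build bars from a fixed number of ticks instead of a fixed number of
# minutes, so the X axis counts market events rather than astronomical time.
# The (timestamp, price) tick format and ticks_per_bar are illustrative assumptions.

def ticks_to_count_bars(ticks, ticks_per_bar=100):
    """ticks: iterable of (timestamp, price) pairs; returns a list of OHLC dicts."""
    bars, chunk = [], []
    for timestamp, price in ticks:
        chunk.append((timestamp, price))
        if len(chunk) == ticks_per_bar:
            prices = [p for _, p in chunk]
            bars.append({"start": chunk[0][0], "end": chunk[-1][0],
                         "open": prices[0], "high": max(prices),
                         "low": min(prices), "close": prices[-1]})
            chunk = []        # leftover ticks simply wait for the next full bar
    return bars
```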

P.S. The fact that people argue and disagree is a good thing. In (proper) disputes the truth is sometimes born. And I am ready to argue with you, and not only that - to pass along presents in the form of books as well. I would pour you some cognac too, but I cannot attach it :-).

 
Mathemat:
I suggest we switch to another task: we need to convert the usual chart representation (based on bars with equal astronomical time intervals) into one with as few catastrophes as possible (preferably with a stationary p.d.f. of returns). This, by the way, may be one of the steps of adequate data preparation for DSP that NorthernWind mentioned. Catastrophes (especially micro-catastrophes within flats) break practically all known traditional indicators. The conversion, by the way, does not have to be one-to-one at all (many indicators are not one-to-one operations on the data, and that does not bother us).

If there are valid arguments against it, please join the critics.

While I was writing, Mathemat had already turned into a telepath :-). I am in favour. I suggest we start by minimizing the "catastrophes" and then go on from there. And it is not only indicators that break: good old Fourier falls apart too, and how are you supposed to approach it when you do not even know the sampling rate? It is too difficult.
 
Mathemat:
I suggest we switch to another task: we need to convert the usual chart representation (based on bars with equal astronomical time intervals) into one with as few catastrophes as possible (preferably with a stationary p.d.f. of returns). This, by the way, may be one of the steps of adequate data preparation for DSP that NorthernWind mentioned. Catastrophes (especially micro-catastrophes within flats) break practically all known traditional indicators. The conversion, by the way, does not have to be one-to-one at all (many indicators are not one-to-one operations on the data, and that does not bother us).

If there are valid arguments against it, please join the critics.

Not too long ago, in one of the neighbouring threads, there was a raid by impatient m0thematist-maximalists. Although the conversation with them did not go very well, I would still like to say that they speak very much to the point - they practically "think my thoughts". The most important thing that was said, and which I urge all interested parties to pay attention to: you have to work on a real exchange and not at a forex brokerage. Then you will have information not only about the price, but also about volumes, interest, stakes, the flow of quotes and so on. No doubt the exchange is more complicated than guessing the direction of the numbers on the screen in the DC's sweepstakes, but it is also much closer to the notion of a live market, where the price is dictated by the balance of supply and demand and the mood of the participants. Of course QUIK and the like are rubbish compared to MetaTrader, but people work with them too. By the way, if MetaQuotes does not think about taking over that wretched segment, its fate will remain what it is now.

What I am saying is that you can dig very deeply into DC quotes, and you may even find something, but it will be rather shaky. Please understand me correctly: this is not criticism of the DCs or of the way they form their quotes. They do everything right and for the most part (the more respectable ones) do not cheat anyone. But if you look at the data that reaches the client from the "market" through the brokerage companies, you cannot help thinking that these companies are filtering and decimating machines. I have already described what happens when a random process is superimposed on some meaningful process - the result is just as random.

To pre-empt the question "do I believe it is possible to make money at a forex brokerage", I will say at once - yes, I think you can. But not much, not for long, and not for everyone. In fact, with not very high risk you can play on a horizon of a day or more, but the income, if you play correctly, will be of the order of the price change over the day, which is not very much.

That is my opinion. Returning to the matter under discussion - converting the traditional chart into a different representation - I can say that for me personally this conversion has helped in my search, but not by much. In practice there is little choice: of everything we have at hand there is essentially only the price, and even that is not entirely clear. There is also the notion of tick rate, which corresponds in some way to the activity of the world markets, but the degree of that correspondence is illusory and ephemeral. What can be done with it? Well, first, take the history only back to 2003-2004; do not look further back, for "the market was different then". Consider every week/day separately. Consider the transition from one week to another, i.e. the weekends, separately. At the very least take into account the activity hours of the major exchanges. At the very least take into account those few news events a year to which the market has really reacted. Replace OHLC as a relic of the dark past. Do not treat local extrema (zigzag, kagi, renko) as the final truth - those extrema are highly questionable. And so on. In practice it all comes down to having to form your own data, in your own representation, from the initial flow of ticks (or minutes).
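
A small pandas sketch of the kind of segmentation listed above (weeks treated separately, weekends dropped, only active hours kept); the column layout and the chosen activity window are assumptions of the example, not NorthernWind's actual procedure.

```python
import pandas as pd

def weekly_sessions(ticks: pd.DataFrame) -> dict:
    """ticks: DataFrame with a DatetimeIndex and a 'price' column (assumed layout).
    Drops weekends, keeps only an (arbitrary) activity window, and returns one
    DataFrame per (ISO year, ISO week) so each week can be studied on its own."""
    work = ticks[ticks.index.dayofweek < 5]          # drop Saturday and Sunday
    work = work.between_time("07:00", "17:00")       # illustrative activity hours
    iso = work.index.isocalendar()
    return {key: frame for key, frame in work.groupby([iso.year, iso.week])}
```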

One last thing. Note, I never say that I always know everything correctly. So here too, I may be wrong.

 

Prival 13.01.2008 03:08

Have you ever thought that support (resistance) lines are first-degree polynomials (the straight-line equation) y(x)=a*x+b? And this can, and seems to, work not only within the day. When the price rebounds as it approaches a strong resistance level, or, for example, a correction is taking place, the curve at that point can be described by a second-degree polynomial (a parabola) y(x)=c*x^2+a*x+b. For some stable oscillations in a channel you can use least squares to find a polynomial of a higher degree. That is, the degree of the polynomial has to be chosen according to some criterion (teach the computer to do it the way a human does). Besides, many people have already come to the conclusion that if they knew when to start building, say, a PriceChannel (the sampling depth), they could build a pretty good TS.

If we are talking about so-called "channel" strategies, the essence of which is to identify certain trends - even if they are described not by simple linear regression but by polynomials - then I generally believe in them. Moreover, I think the market has only one thing: trends in ascending and descending channels. But the traditional support/resistance lines have little to do with them. The problem with these channels is that they traditionally do not show much of a result on traditional data. At least that is how it has worked out for me. There is a lot of data that "hinders" the construction, by least squares, of a trend characterizing the main price tendencies. More often than not what we get is a trend of changes in the character of the "noise".

But the proposed idea covers this: we can set the polynomial degree plus the depth N, say a week, and the algorithm itself will choose whether to use the whole sample N or only part of it; it can take any number of samples for analysis, from the last 2 up to N, and select a polynomial. And say there is a gap - the variance there is at its maximum - so the algorithm should take the last 2 points and simply draw a straight line through those 2 points, lag = 0. Something like that.

Yes, this has been discussed for a long time both on this forum and in the "parallel thread". It's practically where the local community started. But it seems to me that it doesn't really apply to traditional adaptive filters.

It is just a simple idea, one of many; maybe something good will come of it. I tried to show, as clearly as possible, to those who want to build something adaptive how to do it without resorting to complex terms. And as for "first prepare adequate data for DSP" - yes, I do exactly that, because I believe the X axis is also a random variable, and this should not be forgotten. On the Y axis we have already invented a lot, but on the X axis we still build bars the way they did 100 years ago (with a constant interval). I have written about it here ('Tick builders. Optimization. DDE in VB (VBA)'); Renat even gave me a present. I am now thinking about how to prepare the data correctly for DSP :-), and I also keep wondering what the signal is here (the useful component, what moves this curve), and whether there is any noise at all.

Bars are a tradition rooted in what used to be technically feasible. The technology that lets you get rid of bars and move to other forms of representing the information appeared not so long ago - a matter of years - so let us not be too hard on them.

P.S. The fact that people argue and disagree is a good thing. In (proper) disputes the truth is sometimes born. And I am ready to argue with you, and not only that - to pass along presents in the form of books as well. I would pour you some cognac too, but I cannot attach it :-).

:)

PS: The forum engine is terribly inconvenient.

 

to Prival

Prival, I will not apologize for my post - everything in it is said correctly. There is not a word there claiming that you do not understand anything about DSP; there is only my attitude to the particular "adaptive filtering" that was proposed.

It seems to me that grasn is wrong when, writing about my proposal, he claims that there is no adaptive filtering in it.

Not wrong on any point.

And you will not be able to answer where, when and for what reason one should apply, say, a Hamming window, and when its application only does harm. Or how the Wiener adaptive filter differs from the Widrow-Hoff filter when you analyse their frequency responses, or a Butterworth filter from a Chebyshev one - when you can and should apply the first, and when the second.

Did you enjoy writing that? In general it is all put correctly, of course, but what was the point? Besides several gigabytes of electronic junk on DSP, I have five more books on my shelf which, can you imagine, I have actually read (although they have not become my reference books). And your questions, Mr Professor, I can of course answer. Prival, I wrote that I am self-taught, but that does not mean I am a complete idiot.
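
As a small illustration of the last distinction in that challenge, here is a scipy sketch that designs a Butterworth and a Chebyshev type I low-pass filter of the same order and compares their magnitude responses; the order, ripple and cutoff are arbitrary choices, not taken from the thread.

```python
import numpy as np
from scipy import signal

# Two 4th-order low-pass filters with the same cutoff (0.2 of Nyquist):
# Butterworth is maximally flat in the passband, Chebyshev type I accepts
# passband ripple (1 dB here) in exchange for a steeper roll-off.
# Order, ripple and cutoff are arbitrary illustrative values.
b_butter, a_butter = signal.butter(4, 0.2)
b_cheby, a_cheby = signal.cheby1(4, rp=1, Wn=0.2)

w, h_butter = signal.freqz(b_butter, a_butter)
_, h_cheby = signal.freqz(b_cheby, a_cheby)

idx = np.searchsorted(w, 0.3 * np.pi)          # a point just past the cutoff
for name, h in (("Butterworth", h_butter), ("Chebyshev I", h_cheby)):
    print(name, "response at 0.3*Nyquist:",
          round(20 * np.log10(abs(h[idx])), 1), "dB")
```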

PS1: I served precisely in the Air Defence Forces, so I can read out my position from item 27 of my military ID (to sound more solid :o): "commander of an anti-aircraft missile radio control equipment section". And I know only too well (as they usually write - not by hearsay :o) that our famed systems (and not only ours) not only fail to shoot the targets down - they do not even really SEE them. Prival, be careful with forex radars... and especially with the Nyquist frequency that rules the world. :o)

PS2: Prival, you are not very lucky with me - the fact that you taught DSP earns nothing from me but respect. But if I see outright rubbish, I say so, regardless of positions and ranks :o)

to NorthernWind

Good to see you!!!! I have also practically stopped participating in the forum, though I occasionally quarrel with Prival. I have a question for you. I remember you did some research using a zigzag and described it as a very simple one, invented by someone back in the century before last, or even later. I want to use it for a little experiment in continuation of 'Stochastic Resonance' (post grasn 28.10.2007 13:26).
Could you describe it in more detail or give a link? I need to test an idea.

 
NorthernWind:

Prival 13.01.2008 03:08

Yes, it's been discussed for a long time both on this forum and in the "parallel thread". It's practically where the local community started. But it seems to me that it's not very relevant to traditional adaptive filters.


So that you do not have to dig too much (although a link to a DSP lecture course has already been given in this thread), here is a piece of it. It shows that there are filters constructed by least squares, and that the sampling depth matters for almost all types of filter, and especially for least-squares ones.

To grasn: you yourself gave the link to this material, so what about this (Topic #3)? As for "regardless of ranks" etc. - that is good. And as for the radar, I have already had to explain myself in writing for what I wrote here :-( It is a pity I was left without my 13th-month bonus; those fools punish first and sort things out later.

Files:
filtr_mnk.zip  87 kb
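
The attachment itself is not reproduced here; for readers without it, one standard least-squares smoothing filter is the Savitzky-Golay filter (a low-order polynomial fitted by least squares inside a sliding window), shown below as a sketch. The window length, the polynomial order and the toy data are arbitrary choices of the example.

```python
import numpy as np
from scipy.signal import savgol_filter

# A least-squares smoothing filter: inside each sliding window a low-order
# polynomial is fitted by least squares and evaluated at the window centre.
# Window length, polynomial order and the toy series are illustrative choices.
rng = np.random.default_rng(0)
price = np.cumsum(rng.normal(0.0, 1.0, 500))            # toy random-walk "price"
smoothed = savgol_filter(price, window_length=21, polyorder=2)
print(round(price[-1], 3), round(smoothed[-1], 3))
```

As discussed further down the thread, least-squares smoothing of this kind is most reliable in the middle of the window; estimates at the most recent edge are far less certain.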
 
Prival:
NorthernWind:

Prival 13.01.2008 03:08

Yes, it's been discussed for a long time both on this forum and in the "parallel thread". It's practically where the local community started. But it seems to me that it's not very relevant to traditional adaptive filters.


So that you do not have to dig too much (although a link to a DSP lecture course has already been given in this thread), here is a piece of it. It shows that there are filters constructed by least squares, and that the sampling depth matters for almost all types of filter, and especially for least-squares ones.

To grasn: you yourself gave the link to this material, so what about this (Topic #3)? As for "regardless of ranks" etc. - that is good. And as for the radar, I have already had to explain myself in writing for what I wrote here :-( It is a pity I was left without my 13th-month bonus; those fools punish first and sort things out later.

Prival, I had a slightly different least squares and a slightly different filter in mind: the section "Widrow-Hoff Adaptive Least Squares Algorithm", Topic 11, not Topic 3 at all.
I managed to attach it.

PS: Prival, I am a bit confused - were you really stripped of your bonus for writing the word "radar" on a traders' forum?

Files:
dsp11.zip  124 kb
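
For readers who do not open the attachment, here is a minimal numpy sketch of the Widrow-Hoff (LMS) adaptive filter being referred to: the tap weights are nudged at every step in proportion to the instantaneous error. The step size, filter length and the toy identification task are assumptions of the example.

```python
import numpy as np

def lms_filter(x, d, n_taps=4, mu=0.02):
    """Widrow-Hoff (LMS) adaptive filter sketch.
    x: input signal, d: desired signal; returns (output, final tap weights)."""
    x = np.asarray(x, dtype=float)
    d = np.asarray(d, dtype=float)
    w = np.zeros(n_taps)                      # tap weights, adapted as we go
    y = np.zeros(len(x))
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]     # x[n], x[n-1], ..., x[n-n_taps+1]
        y[n] = w @ u                          # current filter output
        e = d[n] - y[n]                       # instantaneous error
        w += 2.0 * mu * e * u                 # Widrow-Hoff weight update
    return y, w

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x = rng.normal(size=2000)
    # desired signal: output of an unknown FIR system the LMS filter should learn
    d = np.convolve(x, [0.5, -0.3, 0.2, 0.1], mode="full")[:len(x)]
    _, w = lms_filter(x, d)
    print(np.round(w, 2))                     # should approach [0.5, -0.3, 0.2, 0.1]
```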
 

grasn 13.01.2008 14:14

Good to see you!!!! I have also practically stopped participating in the forum, though I occasionally quarrel with Prival. I have a question for you. I remember you did some research using a zigzag and described it as a very simple one, invented by someone back in the century before last, or even later. I want to use it for a little experiment in continuation of 'Stochastic Resonance' (post grasn 28.10.2007 13:26).
Could you describe it in more detail or give a link? I need to test an idea.

Good to see you too and happy to try and answer questions.

About the zigzag. In the book where I first saw its definition it was certainly not called a zigzag (I think it was Kendall and Stuart). It was about finding first-order local extremum points. Nothing special was said about them: a local extremum is simply a point whose neighbours on the left and on the right are both smaller or both larger. That is all. Second-order local extrema were also mentioned: these are first-order extrema whose neighbouring first-order extrema on the left and right are both smaller or both larger. Only in this case you should compare not the adjacent extrema but every other one, since the adjacent extrema will in any case be smaller or larger. The third order is analogous to the second, but built on the extrema of the second order, and so on. The kagi construction, which is no stranger to you, exploits the same idea, only "smaller/larger by at least H" is used instead of simply "smaller/larger". That is all there is to it. The algorithm works on points and is poorly applicable to OHLC, because there are 4 values per point. There is one tricky moment: when a point is equal to its neighbour you have to make a decision, and the wise elders do not say which. Personally, I simply collapsed all neighbouring points with equal values into one. I have looked at how people implement zigzags - it is very complicated, with temporary arrays and so on. For me a few variables and a single pass were enough. All you need is to store the last identified extremum (which can still change) and the previous identified extremum for comparison. That is, three points are analysed: the current point, the last identified extremum, which is not yet final, and the penultimate extremum. This is for the kagi case.
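
A minimal one-pass sketch of the construction described above (kagi-style: keep a provisional extremum and confirm it once the price retraces by at least H); the function name, the threshold H and the toy series are assumptions of the example.

```python
def kagi_extrema(prices, H):
    """One-pass first-order extrema with an H reversal threshold (kagi-style).
    Only the provisional extremum and the current direction are stored,
    as in the single-pass approach described above."""
    extrema = []
    ext = prices[0]          # provisional (not yet confirmed) extremum
    direction = 0            # +1 looking for a maximum, -1 for a minimum, 0 unknown
    for p in prices[1:]:
        if direction >= 0:
            if p > ext:
                ext, direction = p, +1          # provisional maximum grows
            elif ext - p >= H:
                extrema.append(ext)             # maximum confirmed by an H retrace
                ext, direction = p, -1
        if direction < 0:
            if p < ext:
                ext = p                         # provisional minimum deepens
            elif p - ext >= H:
                extrema.append(ext)             # minimum confirmed
                ext, direction = p, +1
    return extrema

if __name__ == "__main__":
    series = [1.0, 1.2, 1.5, 1.4, 1.1, 1.15, 1.6, 1.3]
    print(kagi_extrema(series, H=0.2))          # prints [1.5, 1.1, 1.6]
```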

Regarding the link. I share this phrase: "The signal, however, takes into account the 'energy structure' of the input data and is in some sense better than a classical zigzag, whose validity of extrema is very doubtful to me". I dabbled in such constructions myself - I looked for minima and maxima using various kinds of averaging (functions). Unfortunately, the problem turned out to be somewhat more complicated, at least for me. The idea that these extrema somehow characterize the buying/selling mood of changers and traders is exciting in itself. And on a simple mathematical model of crowds buying and selling it worked exactly like that. But reality is different, and many extrema marked in that way showed nothing useful at all. So I set the idea aside, but I have not forgotten it - it is too simple and clear. But that is me; others may be luckier.

Prival 13.01.2008 14:24

So that you do not have to dig too much (although a link to a DSP lecture course has already been given in this thread), here is a piece of it. It shows that there are filters constructed by least squares, and that the sampling depth matters for almost all types of filter, and especially for least-squares ones.

Ahh! So that is what we are talking about! I will let you in on a secret: I use exactly the filters described in that link. Only I treat them not as a "filter" but as a function giving a best estimate, in the probabilistic sense, for a given law. Although of course you can also look at them as filters. Unfortunately those formulas give an estimate for the middle point, so I use them as a research tool rather than a prediction tool. Frankly, I should of course have derived a formula for estimating the end point, but I am too lazy to do it.

By the way, in addition to what you already have on this subject, you can look at T. Anderson's "The Statistical Analysis of Time Series", section 3.3.1, "Smoothing Procedures".

And by the way, there is a special kind of spline with similar properties, where the curve is constructed from the simplest functions so as to give a best estimate in the least-squares sense.
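
For that spline remark, a small scipy sketch: UnivariateSpline builds a curve from piecewise cubic pieces whose coefficients are chosen by least squares, with a smoothing factor controlling how closely it follows the data. The smoothing factor and the toy series are arbitrary choices of the example.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# A smoothing spline: a curve built from simple piecewise polynomials whose
# coefficients are chosen by least squares, with `s` controlling how closely
# it follows the data. The value of s here is an arbitrary illustrative choice.
rng = np.random.default_rng(2)
x = np.arange(200, dtype=float)
price = np.cumsum(rng.normal(0.0, 1.0, 200))     # toy random-walk "price"
spline = UnivariateSpline(x, price, k=3, s=len(x) * 2.0)
print(float(spline(x[-1])))                      # smoothed estimate at the last point
```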