Stochastic resonance - page 29

 
Yurixx:
When it comes to standard TA indicators, not much. But the topic deserves attention too. I have already posted the pictures: RSI charts for two different periods, with my comments. If the RSI is normalized so that its magnitude does not depend on the smoothing period, it can be used more effectively. The same applies to some other indicators.


It matters how the indicator is "used". Perhaps there are no such problems at all.

There have been articles on the subject of indicator adaptivity. The simplest approach is to overlay Bollinger Bands on the RSI: a simple statistical method based on the RMS deviation, without building theoretical distributions.
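A minimal sketch of the "Bollinger Bands on RSI" idea mentioned above, assuming the classic Wilder-style RSI and plain rolling mean/RMS bands (all function and parameter names here are mine, not from the thread):

```python
import numpy as np

def rsi(closes, period=14):
    """RSI with simple averaging of gains/losses (a common simplification)."""
    deltas = np.diff(closes)
    gains = np.where(deltas > 0, deltas, 0.0)
    losses = np.where(deltas < 0, -deltas, 0.0)
    out = np.full(len(closes), np.nan)
    for i in range(period, len(closes)):
        avg_gain = gains[i - period:i].mean()
        avg_loss = losses[i - period:i].mean()
        rs = avg_gain / avg_loss if avg_loss > 0 else np.inf
        out[i] = 100.0 - 100.0 / (1.0 + rs)
    return out

def bollinger_on_rsi(rsi_vals, window=20, k=2.0):
    """Adaptive overbought/oversold levels: rolling mean +/- k * RMS deviation
    of the RSI itself, instead of the fixed 70/30 thresholds."""
    mid = np.full(len(rsi_vals), np.nan)
    upper = np.full(len(rsi_vals), np.nan)
    lower = np.full(len(rsi_vals), np.nan)
    for i in range(window, len(rsi_vals)):
        chunk = rsi_vals[i - window:i]
        if np.isnan(chunk).any():
            continue
        m, s = chunk.mean(), chunk.std()
        mid[i], upper[i], lower[i] = m, m + k * s, m - k * s
    return mid, upper, lower
```

The bands adapt to the RSI's own recent statistics, so no theoretical distribution of the indicator is needed.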

 
Mathemat:
Yurixx wrote: And who said that this integral is taken by the gradient descent method?

That's OK, Yurixx, I'm not attributing that phrase to you. As for a skeptical attitude to contrivances... I have Maple installed at home; sometimes it really helps me, including with symbolic computations. However, I haven't used it for a long time.

I used to have Mathcad, then I upgraded to Matlab. Recently I installed NeuroShell 2. Where would I find the time to get to grips with it all? And I'd like to... There are some things I'd really like to get to grips with.

So, joking aside, my skepticism is limited to skepticism about my own ability to grasp everything I want. All these tools are a wonderful kit for applying methods that are already developed and perfected, for those who do not need to dig deep and just need results in numbers. But all of us here are trying to create something new, and that is hardly possible without deep insight. Then again... digging deep is what the grandfathers are for.

 
Avals:
Yurixx:
When it comes to standard TA indicators, not much. But the topic deserves attention too. I have already posted the pictures: RSI charts for two different periods, with my comments. If the RSI is normalized so that its magnitude does not depend on the smoothing period, it can be used more effectively. The same applies to some other indicators.


It matters how the indicator is "used". There may be no such problem at all.

There were some articles about indicator adaptivity. The simplest approach is to overlay Bollinger Bands on the RSI: a simple statistical method based on the RMS deviation, without building theoretical distributions.


No doubt there are plenty of different possibilities and methods. Does this mean we should give up trying anything new, in particular "theoretical distributions"?
 
grasn:

to Yurixx


:-))
 
Yurixx:

I have an interesting question along the way. Can someone enlighten me as to why such a simple and convenient distribution function with good properties is not used in statistics? And if it is used, why is nothing written about it? I have never seen anyone try to approximate the distribution of increments with anything other than the lognormal.

I can only assume that theory uses distributions derived from first principles, while this function is just one of many possible approximating functions; that is the domain of phenomenology.

On the substance of the work I have the following note: it should be made clear that we are really talking about the expectations of Ymin and Ymax. The "killer" condition of computing the minimal average over the minimal values of the series smooths out this drawback, but creates another: it effectively concerns the probability of M minimal (maximal) values of the series occurring in a row (which is why I call it "killer"). As N tends to infinity, the probability of such an event tends to 0. I haven't analyzed the calculations in detail, but we should expect X1 to run to 0 and X2 to run to infinity, with Ymin and Ymax following them: the first is clearly seen in the second picture, while the second does not fit into any chart. This makes their value as normalization coefficients questionable, even if the divergence is slow.
I have been practicing normalization for quite a long time, including for prices. IMHO, the most natural approach is to use a confidence interval: F(Ymax) = 1 - Delta. In practice you build the empirical distribution of Y on the maximum available N and, for a chosen Delta, find Ymax by sorting. I didn't time it, but for a simple Y it won't take long.
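The confidence-interval normalization described here (find Ymax from F(Ymax) = 1 - Delta by sorting the empirical distribution) can be sketched as follows; the function names and the symmetric treatment of Ymin are my own choices:

```python
import numpy as np

def quantile_bounds(y, delta=0.01):
    """Estimate Ymin, Ymax from the empirical distribution of Y by sorting:
    F(Ymin) = delta, F(Ymax) = 1 - delta."""
    ys = np.sort(np.asarray(y, dtype=float))
    n = len(ys)
    lo = ys[int(np.floor(delta * (n - 1)))]
    hi = ys[int(np.ceil((1.0 - delta) * (n - 1)))]
    return lo, hi

def normalize(y, delta=0.01):
    """Map Y into [0, 1] using its confidence-interval bounds; the rare
    values outside the interval are clipped to the range edges."""
    lo, hi = quantile_bounds(y, delta)
    return np.clip((np.asarray(y, dtype=float) - lo) / (hi - lo), 0.0, 1.0)
```

Sorting once over the maximum available N is cheap, which matches the remark that "for a simple Y it won't take long".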
 
Yurixx:
grasn:

to Yurixx


:-))

Brief, but to the point. Forgive my unhealthy natural curiosity: one always wants to understand even what one personally does not need. :о)

 
grasn:
Yurixx:
grasn:

to Yurixx


:-))

Brief, but to the point. Forgive my unhealthy natural curiosity: one always wants to understand even what one personally does not need. :о)


That's why I love you all, people! :-)
 
Yurixx:

...I have an interesting question along the way. Can someone enlighten me as to why such a simple and convenient distribution function with good properties is not used in statistics? And if it is used, why is nothing written about it? I have never seen anyone try to approximate the distribution of increments with anything other than the lognormal.

Yura, I don't know the answer to this question.

I can only assume that your proposed distribution p(X) = A*(X^a)*exp(-B*(X^b)) is a particular case of a known family (e.g. the generalized exponential distribution p(X) = a/(2*Γ[1/a]*λ*σ) * exp{-[|x-m|/(λ*σ)]^a}, Bulashev, p. 41), or that the few who also managed to get to the bottom of it decided to keep quiet and quietly rake in the cash on the vast fields of Forex :)
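Up to notation, p(X) = A*(X^a)*exp(-B*(X^b)) on X > 0 is a member of the generalized gamma family, and the normalizing constant A follows in closed form from the substitution t = B*x^b, which gives A = b * B^((a+1)/b) / Γ((a+1)/b). A small sketch checking this numerically (parameter values are arbitrary examples):

```python
import math

def gen_gamma_norm(a, b, B):
    """Normalizing constant A for p(x) = A * x**a * exp(-B * x**b), x > 0.
    Substituting t = B*x**b reduces the integral to a Gamma function:
    A = b * B**((a+1)/b) / Gamma((a+1)/b)."""
    return b * B ** ((a + 1) / b) / math.gamma((a + 1) / b)

def pdf(x, a, b, B):
    return gen_gamma_norm(a, b, B) * x ** a * math.exp(-B * x ** b)

def integral(a, b, B, hi=50.0, n=100000):
    """Crude trapezoid check that the density integrates to ~1."""
    h = hi / n
    s = 0.5 * (pdf(1e-12, a, b, B) + pdf(hi, a, b, B))
    for i in range(1, n):
        s += pdf(i * h, a, b, B)
    return s * h
```

The closed-form constant is what makes the family convenient for fitting the increment distribution by hand.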

But I have a counter-question!

Some time ago I was studying autoregressive models of arbitrary order (where we look for the dependence of the current bar's amplitude and sign on the combined influence of an arbitrary number of previous bars). I solved this problem well enough that I could not tell by eye whether a series was model-generated or real, with one exception: the probability density function (PDF) of the model series was far from reality. I never could find the reason for the discrepancy. Intuitively I felt that matching the autocorrelation functions would be sufficient to match the PDF of the first differences. It turns out it isn't... There is something I am not taking into account when modelling the behaviour of the residuals series.

What do you think about this issue?

 

I'm going to step in here, Neutron. I am not a statistician, so I had to ask the question on mexmat.ru. It is here: http://lib.mexmat.ru/forum/viewtopic.php?t=9102

Question: what information about a stationary process is enough to reproduce it correctly? The answer was: one must know the covariance function and the expectation (mean) of the process. I do not yet know how to build a process with a given covariance function, but the idea is that the resulting process could be considered a proper realization of the original simulated process. Maybe your process was not stationary?
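One standard way to build a stationary Gaussian process with a given covariance function is to form the Toeplitz covariance matrix and multiply its Cholesky factor by white noise; this is a generic textbook construction, not something proposed in the thread:

```python
import numpy as np

def simulate_gaussian(cov_fn, n, mean=0.0, seed=None):
    """Simulate n samples of a stationary Gaussian process with the given
    autocovariance cov_fn(lag) and mean, via the Cholesky factor of the
    Toeplitz covariance matrix. O(n^3): fine for a few thousand points."""
    rng = np.random.default_rng(seed)
    lags = np.arange(n)
    C = cov_fn(np.abs(lags[:, None] - lags[None, :]))  # Toeplitz covariance
    L = np.linalg.cholesky(C + 1e-10 * np.eye(n))      # jitter for stability
    return mean + L @ rng.standard_normal(n)
```

For example, `cov_fn = lambda k: 0.9 ** k` reproduces an AR(1)-like autocorrelation. Note this only pins down second-order structure: the marginal PDF of a Gaussian simulation stays Gaussian, which is exactly the kind of mismatch with a fat-tailed real series that grasn describes.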

P.S. I want a plausible simulation of the residuals (returns) process. According to Peters, the distribution of the residuals is fractal to acceptable accuracy, and the process is stationary. Although other models are not excluded...

 
lna01:
Yurixx:

I have an interesting question along the way. Can someone enlighten me as to why such a simple and convenient distribution function with good properties is not used in statistics? And if it is used, why is nothing written about it? I have never seen anyone try to approximate the distribution of increments with anything other than the lognormal.

I can only assume that theory uses distributions derived from first principles, while this function is just one of many possible approximating functions; that is the domain of phenomenology.

On the substance of the work I have the following note: it should be made clear that we are really talking about the expectations of Ymin and Ymax. The "killer" condition of computing the minimal average over the minimal values of the series smooths out this drawback, but creates another: it effectively concerns the probability of M minimal (maximal) values of the series occurring in a row (which is why I call it "killer"). As N tends to infinity, the probability of such an event tends to 0. I haven't analyzed the calculations in detail, but we should expect X1 to run to 0 and X2 to run to infinity, with Ymin and Ymax following them: the first is clearly seen in the second picture, while the second does not fit into any chart. This makes their value as normalization coefficients questionable, even if the divergence is slow.
I have been practicing normalization for quite a long time, including for prices. IMHO, the most natural approach is to use a confidence interval: F(Ymax) = 1 - Delta. In practice you build the empirical distribution of Y on the maximum available N and, for a chosen Delta, find Ymax by sorting. I didn't time it, but for a simple Y it won't take long.



I agree with all the comments. And the picture of the behaviour of the limits as N -> ∞ is perfectly correct. But.

This is not a calculation of the limits Ymin and Ymax, only their statistical estimation. The goal, range normalization, does not impose very strict accuracy requirements. With that in mind, I think such assumptions (strictly speaking incorrect) are quite acceptable. But if it were necessary to determine the time of exit beyond the boundary, it would have to be determined much more accurately.

I really did limit myself to the case of finite N, which I said explicitly. If even you use the maximum available but finite N in your calculations, then I am entitled to do the same. :-)) What happens as N goes to infinity is unknown. One consolation: by then neither you nor I will exist. Nor will forex.

I want to draw your attention to the main purpose of the problem. It is not about calculating Ymin and Ymax per se; it is about recalculating the data of a derived series from the data of the original series. Besides, your normalization method is also arbitrary: it is tied to the historical set on which you perform it. When you switch timeframes, that set can change from 2000 bars to, say, 500000 bars. Reaching the range boundary says nothing in the first case, but a lot in the second. You can accuse my method of arbitrariness only with a model distribution function in mind. But if the real distribution, plotted experimentally on the "maximum available" amount of data, is well approximated by the model distribution, then where is the arbitrariness?