Our Masha! - page 4

 
Prival >> :

The perfect MA kind of exists. See 'Author's dialogue. Alexander Smirnov.'

see post ANG3110 06.02.2008 20:48

How perfect is it if it redraws?

 
Neutron >> :

we maximize the profit "without knowing" the history, analyzing only the latest quote reading and its single previous value, X[i]-X[i-1], and that's it. Something like that.

That is, you are creating the most profitable system possible. And the method you're developing uses all regularities that can be traced on the history available to you.

>> Bold!

 
TheXpert wrote >>

How perfect is it if it redraws?

We take history and build the perfect MA on it. This is what to strive for. Then we look for the MA that does not redraw and has the minimum deviation from this curve. It goes like this.
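A minimal sketch of that search in Python (the names, and the choice of a centered SMA as the stand-in for the 'perfect' curve, are my own illustrative assumptions; the thread does not pin the reference down):

import numpy as np

def sma(x, n):
    # causal simple moving average; defined from bar n-1 onwards
    return np.convolve(x, np.ones(n) / n, mode="valid")

def centered_sma(x, n):
    # non-causal "perfect" reference: averages a window centered on each bar
    return np.convolve(x, np.ones(n) / n, mode="same")

def best_causal_period(x, ref, periods):
    # grid-search the causal SMA period with the minimum RMSE to the reference
    best_n, best_err = None, np.inf
    for n in periods:
        y = sma(x, n)  # length len(x)-n+1, aligned to bars n-1 .. len(x)-1
        err = np.sqrt(np.mean((y - ref[n - 1:]) ** 2))
        if err < best_err:
            best_n, best_err = n, err
    return best_n, best_err

x = np.cumsum(np.random.randn(5000))  # a toy random-walk "history"
print(best_causal_period(x, centered_sma(x, 21), range(2, 100)))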

 
I keep meaning to get back to Bulashov. The formula for the perfect MA there was about the same. But it ended up being a DEMA.
 
mql4com wrote >>

In other words, you are creating the most profitable system possible. And the method you are developing uses all the regularities that you can trace on the history available to you.

Bold!

Well, well, well. Do I look like patient number six?

Obviously, if you use this method, or any other, to predict the dynamics of an integrated random variable (truly random, not quasi-random), you will get zero! By definition: you cannot beat a random process, it is a law of nature. On the other hand, time series (BPs) such as price series are not completely random; there are explicit and implicit regularities in them, and exploiting these allows you to obtain a statistical income in the Forex market. So we need methods and tools for detecting these weak patterns. One such method is the application of moving averages to quotes. This method has a well-defined area of applicability, where its use is justified and mathematically correct.

In essence, all moving averages are a form of integration of the initial BP. In the most general sense, integration is the prediction of the future from trends, and differentiation is the detection of the process's trends. But which trends exactly? Looking closely at the BP-MA-prediction chain, it is not difficult to derive the applicability requirement for the MA method: a positive correlation coefficient between neighboring readings in the series of the first difference of the BP. It is exactly in this case that MAs yield profitable strategies, and our perfect MA will yield the best profit of all possible! This is what we are fighting for.
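To see why this condition matters, here is a toy Python simulation (my own illustration, not from the thread): the first differences are generated as an AR(1) process with a chosen lag-1 coefficient, and a naive price-above-MA following rule is scored on the resulting series. With a positive coefficient the rule earns on average; flip the sign and it loses.

import numpy as np

def ma_following_pnl(rho, n=20000, ma_len=10, seed=0):
    # first differences follow AR(1): d[i] = rho*d[i-1] + noise
    rng = np.random.default_rng(seed)
    d = np.empty(n)
    d[0] = rng.standard_normal()
    for i in range(1, n):
        d[i] = rho * d[i - 1] + rng.standard_normal()
    price = np.cumsum(d)
    ma = np.convolve(price, np.ones(ma_len) / ma_len, mode="valid")
    # hold +1 when price is above its MA, -1 below; collect the next increment
    pos = np.sign(price[ma_len - 1:] - ma)
    return np.sum(pos[:-1] * np.diff(price[ma_len - 1:]))

print(ma_following_pnl(+0.3))  # positive coefficient: MA following earns
print(ma_following_pnl(-0.3))  # negative coefficient: the same rule loses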

However, if we analyze price BPs for compliance with the above requirement, the result is rather discouraging. In reality, price series on all TFs usually have a small negative autocorrelation coefficient in the series of the first difference, and only sometimes, on trends, is this coefficient positive.

The applicability of the proposed method, and its efficiency, can only be judged from experimental results.

Vinin wrote >>
I keep meaning to get back to Bulashov. The formula for the perfect MA there was about the same. But it ended up being a DEMA.
Not a DEMA but a MEMA, and its functional had no term responsible for maximizing the trading system's profitability; instead it had a term minimizing the second derivative. That allowed constructing a very smooth MA, and that is all. Here is Bulashov's article:
Files:
mema_3.zip  279 kb
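For illustration only, a minimal numpy sketch of a smoother of that general type: fidelity to the quotes traded off against a penalty on the second difference. This is a generic Whittaker-style smoother of my own choosing, not necessarily the exact functional from Bulashov's article.

import numpy as np

def second_derivative_smoother(x, lam=100.0):
    # minimize SUM((x[i]-m[i])^2) + lam*SUM((m[i-1]-2*m[i]+m[i+1])^2) over m;
    # the minimizer solves the linear system (I + lam*D'D) m = x
    # (dense solve: fine for a sketch, not for long series)
    n = len(x)
    D = np.zeros((n - 2, n))  # second-difference operator
    for i in range(n - 2):
        D[i, i], D[i, i + 1], D[i, i + 2] = 1.0, -2.0, 1.0
    return np.linalg.solve(np.eye(n) + lam * (D.T @ D), x)

Note that such a smoother is non-causal: every point of the result uses the whole sample, which is exactly why curves of this kind look perfect on history but redraw at the right edge.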
 
Neutron >> :


Instead of MAs, I use interpolation by power polynomials (a least-squares fit) over some window. Clearly, extrapolating the interpolation curve even into a small future neighborhood is almost meaningless, but it does let us describe the current state at the most interesting place: the right edge of the BP. By varying the window size and the degree of the curve, you can get a more general or a more detailed view of what is going on, emphasizing the current process and its phases.
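A minimal Python sketch of that windowed fit (the window size and polynomial degree are arbitrary illustrative choices):

import numpy as np

def right_edge_state(prices, window=60, degree=3):
    # least-squares power-polynomial fit over the last `window` bars
    t = np.arange(window, dtype=float)
    p = np.poly1d(np.polyfit(t, prices[-window:], degree))
    # describe the current state at the right edge: level and slope
    edge = window - 1.0
    return p(edge), p.deriv()(edge)

level, slope = right_edge_state(np.cumsum(np.random.randn(500)))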


In my opinion, the only way to predict the future of a BP is to analyze the evolution of its processes: say, there was a strong downward process, in the current state it has been replaced by a sideways process -> so a growth process is possible next.


This approach, in my opinion, is especially useful for the neural-network people, because an NS can be fed some typical characteristics of the interpolation curve, e.g. the trend component (direction and magnitude), the deviation from the trend component, a formalized curve shape, etc. - as far as your imagination goes - teaching the network to identify the current process and predict future ones, and building a trading strategy on that basis.


You may also smooth the distant and the near past in different ways - something similar to the EMA. It is also possible to implement a synthetic approach: use a very strongly smoothed moving average, with a correspondingly long delay, for the distant past, and analyze the near past, where the moving averages are not working yet, with the interpolation curve.
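A rough sketch of that synthetic blend, reusing the windowed polynomial fit from above; the EMA span and the stitching rule are my own assumptions:

import numpy as np

def ema(x, span):
    # standard exponential moving average, alpha = 2/(span+1)
    alpha = 2.0 / (span + 1)
    out = np.empty(len(x))
    out[0] = x[0]
    for i in range(1, len(x)):
        out[i] = alpha * x[i] + (1 - alpha) * out[i - 1]
    return out

def synthetic_curve(prices, span=200, window=60, degree=3):
    # distant past: a heavily smoothed (and correspondingly lagging) EMA
    out = ema(prices, span)
    # near past, where the moving average has not caught up yet:
    # replace the last `window` points with the polynomial fit
    t = np.arange(window, dtype=float)
    p = np.poly1d(np.polyfit(t, prices[-window:], degree))
    out[-window:] = p(t)
    return out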

 
Aleku >> :

This approach, in my opinion, is especially useful for the neural-network people, because an NS can be fed some typical characteristics of the interpolation curve, e.g. the trend component (direction and magnitude), the deviation from the trend component, a formalized curve shape, etc. - as far as your imagination goes - teaching the network to identify the current process and predict future ones, and building a trading strategy on that basis.

In my opinion, this kind of preprocessing of the input data for an NS is a kind of crutch. By integrating the initial BP with MAs we first of all make the price picture clear to ourselves (a smooth curve, visible trends), while the smoothing itself adds no new information to the input data (there is none to add) and therefore does not make the NS's job any easier. From this point of view, the NS input should be fed specially dissected data that focuses the network's attention as much as possible on a quasi-stationary process. A candidate for such a process is the negative correlation coefficient in the first-difference series, which, by the way, cannot be extracted by integrating (smoothing) the quotes. Other methods and approaches are needed here. That seems promising.


You may also smooth the distant and the near past in different ways - something similar to the EMA. It is also possible to implement a synthetic approach: use a very strongly smoothed moving average, with a correspondingly long delay, for the distant past, and analyze the near past, where the moving averages are not working yet, with the interpolation curve.


All this is complicated and requires good justification, but it's almost certainly a waste of effort and time.

 
Neutron wrote >>

... price series on all TFs usually have a small negative autocorrelation coefficient in the series of the first difference, and only sometimes, on trends, is this coefficient positive.

How did you calculate the autocorrelation coefficient? I am aware of the 'autocorrelation function'. But that is a function, not a number.
 

Suppose there is some sample of the original BP, e.g. on M1. We construct the series of the first difference d1[i] = Open[i] - Open[i-1]; then the correlation coefficient between neighboring samples for TF=1m is calculated as: f1 = SUM(d1[i]*d1[i-1]) / SUM(d1[i]^2), where the index runs over all values of the BP. For TF=2m we do the same, first constructing the 2m BP and taking its first difference d2[i], and so on up to the desired TF. I limited myself to TF=1500 minutes (about a day). The question may arise how to construct other TFs from minutes, for example M2, but everything seems transparent here. It is exactly this data (the value of the correlation coefficient in the first-difference series for different TFs) that I plotted on the chart in the previous post.
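A minimal Python sketch of this calculation, assuming M1 Open prices in a numpy array; building the k-minute BP by taking every k-th Open is one natural reading of the post:

import numpy as np

def f1(open_m1, k):
    bp = open_m1[::k]  # k-minute BP built from M1 opens
    d = np.diff(bp)    # series of the first difference
    # the estimator from the post: SUM(d[i]*d[i-1]) / SUM(d[i]^2)
    return np.sum(d[1:] * d[:-1]) / np.sum(d ** 2)

# the coefficient of the first-difference series for TFs from 1m to 1500m:
# curve = [f1(open_m1, k) for k in range(1, 1501)]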

 
Neutron wrote >>

Suppose there is some sample of the original BP, e.g. on M1. We construct the series of the first difference d1[i] = Open[i] - Open[i-1]; then the correlation coefficient between neighboring samples for TF=1m is calculated as: f1 = SUM(d1[i]*d1[i-1]) / SUM(d1[i]^2), where the index runs over all values of the BP. For TF=2m we do the same, first constructing the 2m BP and taking its first difference d2[i], and so on up to the desired TF. I limited myself to TF=1500 minutes (about a day). The question may arise how to construct other TFs from minutes, for example M2, but everything seems transparent here. It is exactly this data (the value of the correlation coefficient in the first-difference series for different TFs) that I plotted on the chart in the previous post.

Even better :) What are these formulas, and where do you get them from?

Here, look at how the correlation coefficient is calculated: https://ru.wikipedia.org/wiki/%D0%9A%D0%BE%D1%8D%D1%84%D1%84%D0%B8%D1%86%D0%B8%D0%B5%D0%BD%D1%82_%D0%BA%D0%BE%D1%80%D1%80%D0%B5%D0%BB%D1%8F%D1%86%D0%B8%D0%B8

The correlation coefficient is calculated between arrays, not between individual readings. Please be precise in your wording so that others can understand what you are saying, asserting, and calculating.
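For what it is worth, the two views can be reconciled: the number being computed is (up to mean-centering) the Pearson correlation between two arrays, namely the first-difference series and its one-step-shifted copy. A minimal sketch:

import numpy as np

def lag1_pearson(open_prices):
    d = np.diff(open_prices)
    # Pearson correlation between the array d[1:] and its shifted copy d[:-1]
    return np.corrcoef(d[1:], d[:-1])[0, 1]

The SUM(d1[i]*d1[i-1])/SUM(d1[i]^2) form simply drops the mean-centering, which changes little when the mean of the first differences is close to zero.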