Author's dialogue. Alexander Smirnov. - page 41

 
lna01:
The regression-line coefficients A and B are calculated in these lines:
A = (SumXY - N3*SumY)*N4;
B = (N1*SumY - SumXY)*N2;
For illustration I am attaching the MovingLR_2 version, which simply draws the current linear regression. Especially since there was a mistake in the previous one when calculating N4 :)

MovingLR_2 gives a pure linear regression, and it is pretty easy to verify that. In at_LR0 there is an inaccuracy in the conversion from a period in hours to a period in bars. If you change Close in at_LR0 to (High+Low)/2 and take a period of 1, change the period in MovingLR_2 to 61 instead of 60, and display both on a one-minute chart, the results will match exactly.
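The constants N1..N4 never appear on this page, so here is a minimal Python sketch (a reconstruction, not the author's code) of one choice of period-only constants under which the two quoted lines reproduce ordinary least squares with X = 0..N-1:

```python
# Hypothetical reconstruction: with X = 0..N-1, the least-squares slope A and
# intercept B can be written exactly in the quoted form
#   A = (SumXY - N3*SumY)*N4;   B = (N1*SumY - SumXY)*N2;
# if the period-only constants are taken as below. N1..N4 are my guess,
# since they are not shown in the post.

N = 60
xs = range(N)
sum_x = sum(xs)                       # Sum(X),   depends only on the period
sum_xx = sum(x * x for x in xs)       # Sum(X*X), depends only on the period

N1 = sum_xx / sum_x
N2 = sum_x / (N * sum_xx - sum_x ** 2)
N3 = sum_x / N
N4 = 1.0 / (sum_xx - sum_x ** 2 / N)

# a noiseless line y = 2x + 1 must come back exactly
ys = [2.0 * x + 1.0 for x in xs]
sum_y = sum(ys)
sum_xy = sum(x * y for x, y in zip(xs, ys))

A = (sum_xy - N3 * sum_y) * N4        # slope
B = (N1 * sum_y - sum_xy) * N2        # intercept

assert abs(A - 2.0) < 1e-9
assert abs(B - 1.0) < 1e-9
```

Any other fixed choice of X origin gives the same line, only with different constants.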


Well then, MovingLR_2 is a good algorithm; just tidy up the code layout and it's OK!

In at_LR0 the one-bar offset is done to match the linear regression from the standard MT4 toolkit. Maybe this didn't need to be done...

 
Mathemat:

2 zigan:

For linear regression the formula is: LRMA = 3*LWMA - 2*SMA

For quadratic regression:

Quadratic Regression MA = 3 * SMA + QWMA * ( 10 - 15/( N + 2 ) ) - LWMA * ( 12 - 15/( N + 2 ) )

Here N is the period of the averages,

QWMA( i; N ) = 6/( N*(N+1)*(2*N+1) ) * sum( Close[i] * (N-i)^2; i = 0...N-1 ) (the square-weighted MA).
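Both identities are easy to check numerically. A pure-Python sketch (the helper `polyfit_at_zero` is mine, not from the thread) comparing the weighted-MA combinations with the endpoint of a direct least-squares fit:

```python
# Numerical check of both formulas (pure Python). The index i is the bar "age":
# i = 0 is the current bar, i = N-1 the oldest in the window, as in MQL4 series.

def sma(y):
    return sum(y) / len(y)

def lwma(y):  # linearly weighted MA, heaviest weight on the newest bar
    n = len(y)
    return sum((n - i) * y[i] for i in range(n)) * 2 / (n * (n + 1))

def qwma(y):  # quadratically weighted MA, as defined above
    n = len(y)
    return sum((n - i) ** 2 * y[i] for i in range(n)) * 6 / (n * (n + 1) * (2 * n + 1))

def polyfit_at_zero(y, deg):
    """Least-squares polynomial of degree deg in x = 0..N-1, evaluated at x = 0."""
    n = len(y)
    # normal equations M a = v with M[j][k] = sum x^(j+k), v[j] = sum y*x^j
    m = [[float(sum(x ** (j + k) for x in range(n))) for k in range(deg + 1)]
         for j in range(deg + 1)]
    v = [sum(y[i] * i ** j for i in range(n)) for j in range(deg + 1)]
    for col in range(deg + 1):          # Gaussian elimination, tiny systems only
        piv = max(range(col, deg + 1), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        v[col], v[piv] = v[piv], v[col]
        for r in range(col + 1, deg + 1):
            f = m[r][col] / m[col][col]
            for c in range(col, deg + 1):
                m[r][c] -= f * m[col][c]
            v[r] -= f * v[col]
    a = [0.0] * (deg + 1)
    for r in range(deg, -1, -1):
        a[r] = (v[r] - sum(m[r][c] * a[c] for c in range(r + 1, deg + 1))) / m[r][r]
    return a[0]                          # polynomial value at x = 0 (current bar)

N = 14
y = [1.3 + 0.01 * i + 0.002 * i * i + 0.05 * ((i * 7919) % 13 - 6) for i in range(N)]

lrma = 3 * lwma(y) - 2 * sma(y)
qrma = 3 * sma(y) + qwma(y) * (10 - 15 / (N + 2)) - lwma(y) * (12 - 15 / (N + 2))

assert abs(lrma - polyfit_at_zero(y, 1)) < 1e-9
assert abs(qrma - polyfit_at_zero(y, 2)) < 1e-9
```

The identities hold for any data, because the combined weights satisfy the same moment conditions as the regression endpoint smoother.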

As for the cubic one: oops, I still can't get it out of Trading Solutions; my formula there is too unwieldy.

2 Candid: you're really paranoid, I wouldn't have thought of it...


I got different formulas.

where

 
Hmm, for the RMS the result is even tastier than Yurixx's:

RMS^2 = (Sum(Y*Y) - A*Sum(X*Y) - B*Sum(Y))/(N-2)

As with Yurixx's result, the simplicity of the expression is due to the choice of the origin and direction of the X-axis. If nobody finds errors, this completes the work on the algorithm. So that the calculated A and RMS do not go to waste, I left the LR line drawing in the indicator and added an RMS channel :)
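A quick sketch to verify the one-line RMS formula against the direct definition, assuming A is the slope and B the intercept of the fitted line:

```python
# Check: RMS^2 = (Sum(Y*Y) - A*Sum(X*Y) - B*Sum(Y))/(N-2) equals the RMS of the
# residuals computed directly. A = slope, B = intercept (an assumption; the
# post does not spell out which is which).

from math import sqrt

N = 30
xs = range(N)
ys = [1.1 + 0.004 * x + 0.01 * ((x * 17) % 5 - 2) for x in xs]

sx, sy = sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))
syy = sum(y * y for y in ys)

d = N * sxx - sx * sx
A = (N * sxy - sx * sy) / d          # slope
B = (sxx * sy - sx * sxy) / d        # intercept

rms_one_line = sqrt((syy - A * sxy - B * sy) / (N - 2))
rms_direct = sqrt(sum((y - (A * x + B)) ** 2 for x, y in zip(xs, ys)) / (N - 2))

assert abs(rms_one_line - rms_direct) < 1e-9
```

The identity follows from the normal equations: the residuals are orthogonal to the fitted values, so the cross terms drop out of the sum of squares.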

ANG3110:

a little tweaking of the code layout, and everything is OK!

I don't think I will use this exact indicator; I will use the algorithm.

2 Yurixx:
It seems that the difference in RMS values for small N is not due to the RMS correction, but to the choice of the origin and direction of the X-axis. When you change them, my former formula also starts to give different results, and it is accurate without any reservations.
 
lna01:

2 Yurixx:
It seems that the difference in RMS values for small N is not due to the RMS correction, but to the choice of the origin and direction of the X-axis. When you change them, my former formula also starts to give different results, and it is accurate without any reservations.

You are certainly right that the correct choice of coordinate system is a powerful technique for simplifying the calculations and the appearance of the final formulas. I did not use it for linear regression; everything turned out nicely enough anyway. But for parabolic regression, with a certain choice of origin, the final expressions come out half as long, while the efficiency of the algorithm increases by an order of magnitude. In addition, the problem of limited calculation accuracy is completely removed.

I cannot agree with you on one point, however. The RMS values, like the regression values themselves, cannot depend on the choice of origin of the X-axis. Perhaps it is not the formula itself that starts giving different results; rather, the issue of calculation accuracy becomes apparent. Since a double stores only about 15 significant digits (to say nothing of int), error accumulates rather quickly in the course of the calculations. This is especially true when X and Y have different orders of magnitude: for example, X is a bar number of the order of hundreds of thousands, Y is a price of the order of 1, and the price change is of the order of 0.0001.
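The effect is easy to demonstrate. A Python sketch (doubles throughout, numbers chosen to match the example above) comparing the same slope formula with X taken as raw bar numbers around 300000 versus X counted from inside the window; the `fractions` module provides an exact reference:

```python
# The slope formula is mathematically shift-invariant in X, but in double
# precision a large X offset eats significant digits through cancellation.
from fractions import Fraction

def slope(xs, ys):
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    return (n * sxy - sx * sy) / (n * sxx - sx * sx)

N = 60
x0 = 300_000   # absolute bar number, of the order mentioned above
ys = [1.2345 + 0.0001 * i + 1e-4 * ((i * 13) % 7 - 3) for i in range(N)]

a_shifted = slope([x0 + i for i in range(N)], ys)   # X = raw bar numbers
a_local = slope(list(range(N)), ys)                 # X = 0..N-1 inside the window
# exact reference: the same arithmetic done in rational numbers
a_exact = float(slope(list(range(N)), [Fraction(y) for y in ys]))

# identical formula, but shifting X by 300000 costs several significant digits
assert abs(a_local - a_exact) < abs(a_shifted - a_exact)
```

This is why fixing the origin of X inside the window, as discussed below in the thread, is not just a stylistic choice.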


PS

I wanted to understand what makes this formula "tasty". Obviously, it is much simpler: one line. Although I do not understand why you divide by (N-2) and not by (N-1). I should still note that, aiming for maximum speed, you would have to use a different formula. If you fix the origin of X relative to the current bar, it is more advantageous to use formulas without Sum(X*Y); then you do not have to recalculate the convolution on every bar, while updating Sum(Y*Y) or Sum(X*X) on every bar is a single statement.

 

If you know the current values of the coefficients A and B of a linear regression, can you calculate the RMS?

here are the formulas

coefficient A:

coefficient B:

 
Prival:

If you know the current values of the coefficients A and B of a linear regression, can you calculate the RMS?

With a quadratic term like QWMA it is probably possible. But an algorithm built on moving averages is inherently non-optimal. Its one chance was the opportunity to use the native code of the built-in MAs, but that seems to have failed.
P.S. I just remembered that QWMA is quadratic in X, while I would need a term quadratic in Y. So QWMA will not help.
 
Prival:

If you know the current values of the coefficients A and B of a linear regression, can you calculate the RMS?


I don't think you can. The regression line is defined by the two constants A and B, but with the same A and B there can be any spread of values around that line. To calculate the RMS you also need the variances of X and Y. QWMA will probably not be sufficient either, as it does not contain the squares of Y and therefore does not determine the variance of Y.
 
Yurixx:
QWMA will probably not be sufficient either, as it does not contain the squares of Y and therefore does not determine the variance of Y.
Yes, I simply forgot that QWMA does not involve the squares of Y at all. When I remembered, I added the postscript.
 
Yurixx:

One thing, however, I cannot agree with you on. The RMS values, like the regression values themselves, cannot depend on the choice of origin of the X-axis. Perhaps it is not the formula itself that starts giving different results, but precisely this problem of calculation accuracy.

That is exactly what I meant by the calculation results.

Although I don't understand why you divide by (N-2) and not by (N-1).

Because the regression takes up an additional degree of freedom: two parameters, A and B, are estimated rather than one. Yandex can help with the details, e.g. http://cmacfm.mazoo.net/archives/000936.html
I should still note that, aiming for maximum speed, you would have to use a different formula. If you fix the origin of X relative to the current bar, it is more advantageous to use formulas without Sum(X*Y); then you do not have to recalculate the convolution on every bar, while updating Sum(Y*Y) or Sum(X*X) on every bar is a single statement.
We already have Sum(X*Y): you cannot calculate either A or B without it. It is calculated recurrently, in three operations. Look more carefully at at_LR0 or MovingLRv3.

P.S. For Sum(Y*Y) I also need three operations; for Sum(X*X), none at all.
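For readers following along, here is a minimal Python sketch (my reconstruction, not at_LR0 or MovingLRv3 themselves) of the recurrent scheme: Sum(X*Y) is updated with the three operations mentioned above, Sum(X) and Sum(X*X) are period-only constants, and A, B and the RMS come out of the accumulated sums:

```python
from collections import deque
from math import sqrt

class MovingLR:
    """Sliding-window linear regression with O(1) work per new bar.

    X is the bar age: x = 0 is the newest bar in the window, so the
    regression value on the current bar is simply the intercept B.
    """

    def __init__(self, n):
        self.n = n
        self.buf = deque()
        self.sy = 0.0                              # Sum(Y)
        self.syy = 0.0                             # Sum(Y*Y)
        self.sxy = 0.0                             # Sum(X*Y)
        self.sx = n * (n - 1) / 2                  # Sum(X):   period-only constant
        self.sxx = n * (n - 1) * (2 * n - 1) / 6   # Sum(X*X): period-only constant

    def update(self, y):
        self.buf.appendleft(y)
        if len(self.buf) > self.n:
            y_drop = self.buf.pop()
            # three operations for Sum(X*Y): every retained bar ages by one,
            # so it grows by the old Sum(Y), minus the dropped bar's n*y_drop
            self.sxy += self.sy - self.n * y_drop
            self.sy += y - y_drop
            self.syy += y * y - y_drop * y_drop
        else:                                      # window still filling
            self.sxy += self.sy
            self.sy += y
            self.syy += y * y

    def coeffs(self):
        """Return (A, B, RMS) once the window is full: y ~ A*x + B, x = age."""
        n = self.n
        d = n * self.sxx - self.sx ** 2
        a = (n * self.sxy - self.sx * self.sy) / d
        b = (self.sxx * self.sy - self.sx * self.sxy) / d
        sse = self.syy - a * self.sxy - b * self.sy
        return a, b, sqrt(max(sse, 0.0) / (n - 2))

m = MovingLR(20)
data = [1.2 + 0.01 * i + 0.03 * ((i * 31) % 7 - 3) for i in range(60)]
for v in data:
    m.update(v)
a, b, rms = m.coeffs()
```

Note the update order: Sum(X*Y) must be advanced while Sum(Y) still holds its previous value, otherwise the recurrence breaks.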
 
Prival:
Mathemat:

Quadratic Regression MA = 3 * SMA + QWMA * ( 10 - 15/( N + 2 ) ) - LWMA * ( 12 - 15/( N + 2 ) )

QWMA( i; N ) = 6/( N*(N+1)*(2*N+1) ) * sum( Close[i] * (N-i)^2; i = 0...N-1 ) (the square-weighted MA).

I got other formulas.

where

Exactly the same formulas; thank you, Prival. Now give me similar ones expressed in terms of moving averages.