Author's dialogue. Alexander Smirnov.

 

It's been a long time since March, but I have to say that I'm not done with moving averages (MAs) yet. True, I use them in a very different way than simple crossovers...

 

Has anyone tried the Expert Advisors posted there (registration is required on Spider)?

 
Mathemat wrote >>

It's been a long time since March, but I have to say that I'm not done with moving averages yet. True, I use them in a very different way than simple crossovers...

Yes, Alexei! MAs are the power!

I'm looking at the 2008 Championship.

Sometimes it's enough to plot two long-period ("heavy") MAs and trade on them!

Or at least not to trade against them!

Look at the 2008 Championship: the trend-followers are ahead there!

And the authors there probably mostly use MAs for direction!

--

It wasn't for nothing that I argued in the divergences thread!

I argued plainly that divergences and convergences DON'T MATTER; it's the direction that counts!

(As a rule, a divergence or convergence only provides a painless entry with a fast enough move into profit,

but it does not guarantee that the entry direction is correct.)

It is the direction that is decisive, yet none of the authors of graphical, divergence-based and other entry methods

explains how to choose a direction!

 
Sceptic Philozoff:
Well, if you've made the claim, you have to back it up. OK, Sergey, here is the proof (I need it anyway, for my own peace of mind):

Suppose we have time samples t = 1, 2, ..., N. The numbering is the reverse of MQL4's, i.e. N is the current bar (bar "zero" in MQL4). These samples correspond to the close prices Close(1), Close(2), ..., Close(N). Let us fit a straight line y = A*t + B through the closes by least squares. Then we calculate A*N + B, i.e. the LRMA at the current bar.

We compute the sum of squared errors:

Delta^2 = Sum( ( y(i) - Close(i) )^2; i = 1..N ) = Sum( ( A*i + B - Close(i) )^2; i = 1..N )

We differentiate this with respect to A and B and get a system of equations for the optimal coefficients A and B:

Sum( ( A*i + B - Close(i) ) * i; i = 1..N ) = 0
Sum( ( A*i + B - Close(i) ); i = 1..N ) = 0
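
A quick way to verify this differentiation is a minimal Python sketch with sympy (the small N = 5 and the symbols c1..c5 standing in for Close(1)..Close(5) are assumptions for illustration only):

import sympy as sp

N = 5
A, B = sp.symbols('A B')
c = sp.symbols('c1:6')  # c1..c5 stand in for Close(1)..Close(5)

# Delta^2 = Sum( (A*i + B - Close(i))^2; i = 1..N )
delta2 = sum((A*i + B - c[i - 1])**2 for i in range(1, N + 1))

# Setting the partial derivatives to zero reproduces the two equations
# above (each derivative carries a factor of 2, which drops out).
print(sp.expand(sp.diff(delta2, A) / 2))  # Sum( (A*i + B - Close(i)) * i )
print(sp.expand(sp.diff(delta2, B) / 2))  # Sum( A*i + B - Close(i) )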

Expanding the sums, we get (I omit index ranges to simplify the notation)

A*Sum( i^2 ) + B*Sum( i ) = Sum( i*Close(i) )
A*Sum( i ) + B*Sum( 1 ) = Sum( Close(i) )

Prival, now look at the right-hand sides. The sum on the right in the first equation is almost the LWMA, only without the normalizing coefficient. In the second, it is the SMA, also without it. Here are the exact formulas for these moving averages:

LWMA = 2/(N*(N+1)) * Sum( i*Close(i) )
SMA = 1/N * Sum( Close(i) )
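
In code, those exact formulas are one-liners; a minimal Python sketch, assuming the list closes is ordered oldest-first so that closes[i-1] corresponds to Close(i):

# LWMA and SMA exactly as in the formulas above.
def lwma(closes):
    N = len(closes)
    return 2.0 / (N * (N + 1)) * sum(i * c for i, c in enumerate(closes, start=1))

def sma(closes):
    return sum(closes) / len(closes)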

Now recall what the sum of the squares of the naturals from 1 to N equals (it's N*(N+1)*(2*N+1)/6); together with Sum(i) = N*(N+1)/2 and Sum(1) = N, substitute into our system and we get:

A * N*(N+1)*(2*N+1)/6 + B * N*(N+1)/2 = LWMA * N*(N+1)/2
A * N*(N+1)/2 + B * N = SMA * N

Simplifying:

A * (2*N+1)/3 + B = LWMA
A * (N+1)/2 + B = SMA

I'm not going to solve the system in full, I'm too lazy (everything is already clear at this point). I'll just multiply the first equation by 3 and the second by 2, and then subtract the second from the first:

A * (2*N+1) + 3*B - A * (N+1) - 2*B = 3 * LWMA - 2 * SMA

On the left, after simplification, A*N + B remains, i.e. exactly our regression at point N. So LRMA = 3*LWMA - 2*SMA.
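
The result LRMA = 3*LWMA - 2*SMA is easy to check numerically; a minimal sketch with numpy, using a synthetic random-walk series as stand-in data:

import numpy as np

N = 20
closes = np.cumsum(np.random.randn(N)) + 100.0  # synthetic stand-in prices
t = np.arange(1, N + 1)                         # t = 1..N, N = current bar

A, B = np.polyfit(t, closes, 1)   # least-squares line y = A*t + B
lrma = A * N + B                  # regression value at the current bar

lwma = 2.0 / (N * (N + 1)) * np.sum(t * closes)
sma = closes.mean()

print(lrma, 3 * lwma - 2 * sma)   # the two values coincide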

What a blast! Especially starting with this post.