Fourier-based hypothesis - page 9

 
grasn wrote >>

I'm not an expert in linear algebra, but I've seen descriptions of faster algorithms. By the way, if anyone has one, pass it to Urain; it would make the library even more useful in terms of calculation speed.

A faster algorithm is the Gaussian method (with appropriate modifications).

I started writing a linear algebra library yesterday (I didn't base it on Urain's library). My library has more features. Wait for it.

 
lea wrote >>

A faster algorithm is the Gaussian method (with appropriate modifications).

I started writing a linear algebra library yesterday (I didn't base it on Urain's library). My library has more features. Wait for it.

So that my words don't seem empty, I'll post the header file of my library. The library itself is still being extended and tested (I check the calculations in Maple).

Files:
libmatrix.mqh  18 kb
 
grasn >> :

I'm not an expert in linear algebra, but I've seen descriptions of faster algorithms. By the way, if anyone has one, pass it to Urain; it would make the library even more useful in terms of calculation speed.

You reduce the matrix to triangular form, for example by Gauss-Jordan elimination. The product of the triangular matrix's diagonal elements is the determinant of the initial matrix (here you must account for the sign changes caused by row swaps during elimination). After that, you can invert the whole matrix using its partial minors and partial determinants. This works ten times faster than the canonical methods, and you can check the program's correctness against the short canonical algorithms.
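The elimination idea above can be sketched in a few lines (a hypothetical stand-alone helper, not part of any library mentioned in this thread): reduce the matrix to upper triangular form with partial pivoting, flip the sign on every row swap, and multiply the diagonal.

```python
def det_by_elimination(a):
    """Determinant of a square matrix (list of lists) in O(N^3)."""
    n = len(a)
    m = [row[:] for row in a]   # work on a copy
    sign = 1.0
    for col in range(n):
        # pick the largest pivot in this column for numerical stability
        pivot = max(range(col, n), key=lambda r: abs(m[r][col]))
        if abs(m[pivot][col]) < 1e-12:
            return 0.0          # singular matrix
        if pivot != col:
            m[col], m[pivot] = m[pivot], m[col]
            sign = -sign        # a row swap flips the determinant's sign
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n):
                m[r][c] -= f * m[col][c]
    d = sign
    for i in range(n):
        d *= m[i][i]            # product of the triangular diagonal
    return d
```

For example, `det_by_elimination([[1, 2], [3, 4]])` gives -2, matching the cofactor expansion by hand.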

Numerical Recipes in C, Second Edition (1992).

Solution of Linear Algebraic Equations

http://www.nrbook.com/a/bookcpdf.php


By the way, there is also a good free book (although most of it is about Fourier):

2. The Scientist and Engineer's Guide to Digital Signal Processing
By Steven W. Smith, Ph.D.

http://www.dspguide.com/pdfbook.htm

 
AlexEro >> :

You reduce the matrix to triangular form, for example by Gauss-Jordan elimination. The product of the triangular matrix's diagonal elements is the determinant of the initial matrix (here you must account for the sign changes caused by row swaps during elimination). After that, you can invert the whole matrix using its partial minors and partial determinants. This works ten times faster than the canonical methods, and you can check the program's correctness against the short canonical algorithms.

Numerical Recipes in C, Second Edition (1992).

Solution of Linear Algebraic Equations

http://www.nrbook.com/a/bookcpdf.php


By the way, there is also a good free book (although most of it is about Fourier):

2. The Scientist and Engineer's Guide to Digital Signal Processing
By Steven W. Smith, Ph.D.

http://www.dspguide.com/pdfbook.htm


Actually, this method is already implemented for finding the determinant, but is there anything faster for inversion?

I find a minor for each cell and divide by the determinant. (That comes out to N^2 minors to find, and each minor is itself a determinant of one rank lower.)
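A literal sketch of the minors-and-determinant approach described above (function names are made up for illustration): the inverse via the adjugate, computing N^2 cofactor minors, each through a recursive Laplace-expansion determinant. This is correct, but the recursion is what makes it slow.

```python
def det_recursive(a):
    """Determinant by Laplace expansion along the first row (exponential cost)."""
    n = len(a)
    if n == 1:
        return a[0][0]
    total = 0.0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in a[1:]]
        total += ((-1) ** j) * a[0][j] * det_recursive(minor)
    return total

def inverse_by_minors(a):
    """Inverse = transposed cofactor matrix divided by the determinant."""
    n = len(a)
    d = det_recursive(a)
    inv = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            minor = [row[:j] + row[j + 1:]
                     for k, row in enumerate(a) if k != i]
            # note the transpose: cofactor (i, j) lands at position (j, i)
            inv[j][i] = ((-1) ** (i + j)) * det_recursive(minor) / d
    return inv
```

As a check, `inverse_by_minors([[4, 7], [2, 6]])` reproduces the textbook result (1/10) * [[6, -7], [-2, 4]].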

 
Urain wrote >>

Actually, this method is already implemented for finding the determinant, but is there anything faster for inversion?

I find a minor for each cell and divide by the determinant. (That comes out to N^2 minors to find, and each minor is itself a determinant of one rank lower.)

The Gauss method can be adapted to this as well: O(N^3). Look up "inverse matrix" on Wikipedia.
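The adaptation pointed to here can be sketched as follows (a hypothetical stand-alone function, not tied to any library in this thread): augment A with the identity, run Gauss-Jordan elimination, and read the inverse off the right half once the left half becomes the identity. The whole thing is O(N^3).

```python
def invert_gauss_jordan(a):
    """Invert a square matrix via Gauss-Jordan on the augmented block [A | I]."""
    n = len(a)
    # build the augmented matrix [A | I]
    m = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(a)]
    for col in range(n):
        # partial pivoting for numerical stability
        pivot = max(range(col, n), key=lambda r: abs(m[r][col]))
        if abs(m[pivot][col]) < 1e-12:
            raise ValueError("matrix is singular")
        m[col], m[pivot] = m[pivot], m[col]
        p = m[col][col]
        m[col] = [x / p for x in m[col]]       # normalise the pivot row
        for r in range(n):
            if r != col and m[r][col] != 0.0:
                f = m[r][col]
                m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    return [row[n:] for row in m]              # right half is now A^-1
```

On the same test matrix [[4, 7], [2, 6]] this returns [[0.6, -0.7], [-0.2, 0.4]], matching the adjugate method, but it scales to the 100x100 case discussed below.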

 
Urain >> :

Actually, this method is already implemented for finding the determinant, but is there anything faster for inversion?

I find a minor for each cell and divide by the determinant. (That comes out to N^2 minors to find, and each minor is itself a determinant of one rank lower.)

That loop takes only a little time. The problem is that you compute each minor recursively, isn't it? You can speed it up by computing each minor not through recursion, but by reducing each sub-matrix (minor) to triangular form.
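To put rough numbers on the speed-up suggested here (a back-of-the-envelope sketch; the function names are made up for illustration): Laplace recursion expands into roughly N! terms, computing each of the N^2 minors by an O(N^3) elimination costs about N^5 operations, and a single Gauss-Jordan pass on the whole matrix costs about N^3.

```python
import math

def cost_laplace(n):
    # recursive expansion generates ~n! terms
    return math.factorial(n)

def cost_minors_by_elimination(n):
    # N^2 minors, each reduced to triangular form in O(N^3)
    return n * n * n ** 3

def cost_gauss_jordan(n):
    # one elimination pass on the augmented block [A | I]
    return n ** 3
```

Already at N = 20, the factorial term dwarfs both polynomial costs, which is why replacing the recursion makes the minors approach usable at all.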

 
grasn >> :

A row of such a matrix is essentially the dynamics of the KP coefficient over the given history. And such series, oddly enough, are stationary and have a lot of advantages. Here are a few samples:

Frequency 0:

Thanks for the program in Mathcad. I tried to replicate it, but found that it behaves a bit differently from your example. For forecasting I took a section of last week's EURUSD data on M15, 1500 bars long. It looks about the same as your test section.

But after applying CreateModeMatrix() I get quite a different picture at frequency 0:


The picture is roughly the same at other frequencies; that is, there are no large periods like in your example. If you don't mind, please give your opinion on which of these options is correct:

a) different data set - different characteristics;

b) misinterpretation of DW matrix results;

c) typing errors in the program.

 
equantis >> :

Thanks for the program in Mathcad. I tried to replicate it, but found that it behaves a bit differently from your example. For forecasting I took a section of last week's EURUSD data on M15, 1500 bars long. It looks about the same as your test section.

But after applying CreateModeMatrix() I get quite a different picture at frequency 0:


The picture is roughly the same at other frequencies; that is, there are no large periods like in your example. If you don't mind, please give your opinion on which of these options is correct:

a) different data set - different characteristics;

b) misinterpretation of DW matrix results;

c) typing errors in the program.


1:1 implementation?


PS: A correction. If it's a 1:1 implementation and the input series is the quotes, it's pretty strange. If the picture is stable, then it's really strange.

 
Urain >> :

I find a minor for each cell and divide by the determinant. (That comes out to N^2 minors to find, and each minor is itself a determinant of one rank lower.)

Of course, that's a slow method. I'm surprised you got a result at all for a 100 by 100 matrix.

 

Just in case, to clear my conscience :o)

Warning

Looking at the topic of applying the Fourier transform, I remembered what I used to amuse myself with, and wrote it up thinking it would be "a store of models that shouldn't be there". To be honest, I gave up on this model at the time, having fully realised the complexity and practical impossibility of implementing this approach. It is simple only in concept: we break the complex down into the simple. In practice it turns out to be impossible, to put it mildly, to make 50, 100 or more forecasts with sufficient accuracy. Nature is rather difficult to cheat; more precisely, impossible. What makes things worse is that we don't need the first results (they are the most accurate) but the last ones in the forecast series, and those are the least accurate. And the series itself is not so simple. As a consequence, it is practically impossible to use the forecast for trading (no need to pay attention to a single lucky picture).


I'm not sure it's worth spending time in this direction... a solution may well exist, but it is very, very difficult to find, given all the specifics of market quotes.