Fourier-based hypothesis - page 11

 
equantis >> :

That's right, it's worth keeping...

In order to close the issue I would like to show a picture with typical results:

Blue - price

Red - forecast made by the program with the cosine transform, calculated from 0

Purple - the same curve, but calculated from the forecast's starting point (100)

Green - simple forecast based on the price curve (I used the built-in predict function)



As I've written before, the trick is to identify the model. The problem is really complicated; it cannot be solved just by eye. And conceptually, if I may say so, "prediction in the frequency domain" is used in some tasks, but very specific ones.

 
grasn wrote >>

As I've written before, the trick is to identify the model. The problem is really complicated; it cannot be solved just by eye. And conceptually, if I may say so, "prediction in the frequency domain" is used in some tasks, but very specific ones.

I have spent almost a month picking apart the PF, and all in vain. I tried to predict everything: time series, moving averages, indicators, but all the results were 50-50 (or so).

Tried to implement your idea with the discrete cosine transform. Unfortunately, after the inverse cosine transform I got the following picture: in the restored signal, the last bar (for the sake of which the prediction was done in the first place) just repeated the penultimate bar (the last one in the training set), with a certain small error.

Just in case, I'll describe a brief algorithm of what I was doing:

  1. Took test datasets starting at START = 1:FRAME
  2. Selected a window of width WIND for each dataset (i.e. sampling was performed over the range START:START+WIND)
  3. For each window, performed a cosine transform.
  4. Combined all the results into a FRAME x WIND matrix, where each column contained the coefficients of one frequency across the test datasets, and each row contained the coefficients of all frequencies for one test dataset
  5. For each column of coefficients, trained a small neural network that, from the four previous values, predicted the coming 1-bar change of the sinusoid very well. I turned to a neural network because AR prediction was giving very poor results.

  6. This produced a set of predicted coefficients for the forecast dataset, in which the last bar was the desired value. An inverse cosine transform was performed on the predicted coefficients.

Now a pause... alas, there are no miracles: the predicted bar repeated, with a small error, the penultimate bar of the test sequence. Analyzing the result, I found that the first bar had changed a little while all the others remained unchanged, merely shifted by one position (as they should be). But the last bar simply repeated the penultimate one instead of being a prediction. (See the picture above.)
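The six steps above can be sketched roughly as follows. This is my minimal reconstruction, not the original code: the sizes FRAME and WIND are made up, the data is synthetic, and a plain least-squares fit stands in for the small neural network.

```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(size=300)) + 100.0   # synthetic "price" series

FRAME, WIND = 200, 32                              # assumed sizes, not the originals

# Steps 1-4: sliding windows, DCT of each -> FRAME x WIND coefficient matrix
coeffs = np.array([dct(prices[s:s + WIND], norm='ortho') for s in range(FRAME)])

# Step 5: per-frequency one-step predictor from the 4 previous coefficient values
# (least squares here, where the post used a small neural network)
def predict_next(track, lags=4):
    X = np.array([track[i - lags:i] for i in range(lags, len(track))])
    y = track[lags:]
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return track[-lags:] @ w

# Step 6: predict every coefficient column, then invert the transform
pred_coeffs = np.array([predict_next(coeffs[:, k]) for k in range(WIND)])
pred_window = idct(pred_coeffs, norm='ortho')
forecast_bar = pred_window[-1]                     # the bar the whole exercise is about
```

Comparing `pred_window` with the last real window, `prices[FRAME - 1:FRAME - 1 + WIND]`, is a quick way to inspect how much of the output is just a shifted copy of the input.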

Maybe this result is useful to some mathematicians, but for the task of EURUSD prediction it turned out to be useless. Alas. For now. )))

 
equantis >> :

I spent almost a month trying to sort things out with the PF, and it was all for nothing. I tried to predict everything: time series, moving averages, indicators, and all the prediction results were 50-50 (or so).

Tried to implement your idea with the discrete cosine transform. Unfortunately, after the inverse cosine transform I got the following picture: in the restored signal, the last bar (for the sake of which the prediction was done in the first place) just repeated the penultimate bar (the last one in the training set), with a certain small error.

Just in case, I'll describe a brief algorithm of what I was doing:

  1. Took test datasets starting at START = 1:FRAME
  2. Selected a window of width WIND for each dataset (i.e. sampling was performed over the range START:START+WIND)
  3. For each window, performed a cosine transform.
  4. Combined all the results into a FRAME x WIND matrix, where each column contained the coefficients of one frequency across the test datasets, and each row contained the coefficients of all frequencies for one test dataset
  5. For each column of coefficients, trained a small neural network that, from the four previous values, predicted the coming 1-bar change of the sinusoid very well. I turned to a neural network because AR prediction was giving very poor results.

  6. This produced a set of predicted coefficients for the forecast dataset, in which the last bar was the desired value. An inverse cosine transform was performed on the predicted coefficients.

Now a pause... alas, there are no miracles: the predicted bar repeated, with a small error, the penultimate bar of the test sequence. Analyzing the result, I found that the first bar had changed a little while all the others remained unchanged, merely shifted by one position (as they should be). But the last bar simply repeated the penultimate one instead of being a prediction. (See the picture above.)

Maybe this result is useful to some mathematicians, but for the task of EURUSD prediction it turned out to be useless. Alas. For now. )))

And I never said there would be a miracle. There are some uncertainties:

For each column of coefficients, a small neural network was trained that, from the four previous values, predicted the coming 1-bar change of the sinusoid very well. I turned to a neural network because AR prediction was giving very poor results.

I'm not sure that a neural net fed 4 numbers of such a complex series can predict the future very well. Very doubtful. And if it does predict, why is there such a big discrepancy? And what does the "sine wave" have to do with it? As for the AR model, each such curve is in fact an AR process, or very close to one in its properties. Identifying such a model is complicated, and a lot of methods are used (strangely enough, more complicated than neural nets): backward and forward prediction, the Akaike criterion, the "portmanteau" test, autoregressive transfer functions, goodness-of-fit and maximum-likelihood criteria (and their variants), cross-correlation, stochastic approximation, and filtering (it is used for model identification as well).
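To make one of these names concrete: here is a toy sketch of choosing the AR model order with the Akaike criterion (AIC). The least-squares fit and the formula n·ln(RSS/n) + 2p are one common textbook variant, the data is synthetic; nothing here is the poster's actual code.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic AR(2) process: x_t = 0.6*x_{t-1} - 0.3*x_{t-2} + noise
x = np.zeros(500)
for t in range(2, 500):
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.normal()

def ar_aic(x, p):
    # Fit AR(p) by least squares and return Akaike's criterion
    X = np.column_stack([x[p - i - 1:len(x) - i - 1] for i in range(p)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ coef) ** 2)
    n = len(y)
    return n * np.log(rss / n) + 2 * p

best = min(range(1, 8), key=lambda p: ar_aic(x, p))  # order chosen by AIC; true order is 2
```

The penalty term 2p is what keeps the criterion from always preferring the largest model: RSS can only fall as p grows, so the fit improvement has to be worth the extra parameters.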


At least the idea is as good as yours. :о) If it didn't work, it didn't work; that happens. I'm sincerely sorry about your time, and I can't help return it. But, by the way, here https://forum.mql4.com/ru/24888/page9 I did warn you, just in case. There are a lot of subtleties in this model, some of which I modestly omitted. One of those subtleties is that it's pointless to predict a single sample with such a model, just pointless. You simply won't get the accuracy you want, and never will. You have to predict in a "statistical" sense. That's how it is, to put it in literary terms.


 

Grasn, anyway, thanks a lot for the idea! The process was enjoyable and no time was wasted)) And the result is still to come!

grasn wrote >>

I'm not sure that a neural net fed 4 numbers of such a complex series can predict the future well. I doubt it very much. And if it does, why is there such a large discrepancy? And what does the "sine wave" have to do with it?

1. If we look at how each DCT coefficient changes over time, then, as you wrote, it looks very much like a sinusoid-like curve (with a frequency corresponding to the coefficient's index, especially for the high-frequency oscillations) whose amplitude changes with time. I tried using AR to predict it head-on, roughly as in your Mathcad example.

If we consider predicting such a curve 1 bar ahead, then AR (at least with all the formulas I have in Matlab) gives very inaccurate results, especially for sinusoid-like curves with odd periods (although I may not have tried them all). In this case a simple neural network (in Matlab it's implemented by the newlind function; I think it's not even a neural network, but just a solver for a set of linear equations), when predicting 1 point ahead, gives very good (visually) results.

2. It looks pretty good: out of 50 bars it predicts 48 exactly (almost exactly), shifting them 1 position to the left, and gets only the first bar wrong (I don't know why) and the last one (which, sadly, was the one I was doing all this for). Apparently the "micro errors" of the prediction on each curve add up this way in the inverse transform.

I tried to "cheat" the algorithm by hiding the last bar inside the test segment (a simple circular shift), but it was still the last and first bars that came out wrong.

3. By the way, I tried predicting not only Close (as the most poorly predicted series) but also High/Low/Open, their differences, and even the minima and maxima of a zigzag (as an example of a series with a distorted time axis). Since the result is the same, the conclusion is obvious: "head-on", this method is only good at shifting N-2 bars left by 1 bar; it does not predict forex series.
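On point 1 above: there is a simple reason a linear least-squares solver (which is essentially what newlind computes) predicts a pure sinusoid so well. Any sinusoid satisfies the exact two-lag recurrence x_t = 2·cos(w)·x_{t-1} - x_{t-2}, so four lags are more than enough for an exact one-step forecast. A toy check of my own, not the original Matlab code:

```python
import numpy as np

w = 2 * np.pi / 7                       # a period-7 sinusoid (an "odd period" case)
x = np.sin(w * np.arange(100))

lags = 4
# Fit a linear one-step predictor on all points except the last one
X = np.array([x[t - lags:t] for t in range(lags, len(x) - 1)])
y = x[lags:len(x) - 1]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

pred = x[-1 - lags:-1] @ coef           # one-step forecast of the held-out last point
err = abs(pred - x[-1])                 # error is down at round-off level
```

This also hints at why the same trick fails on a real coefficient track: the exact recurrence only holds while the amplitude and frequency stay fixed.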

As for the AR model, each such curve is in fact an AR process, or very close to one in its properties. Identifying such a model is complicated, and a lot of methods are used (strangely enough, more complicated than neural nets): backward and forward prediction, the Akaike criterion, the "portmanteau" test, autoregressive transfer functions, goodness-of-fit and maximum-likelihood criteria (and their variants), cross-correlation, stochastic approximation, and filtering (which is also used for model identification).

Thanks a second time - lots of new names, more things to try!

There are a lot of subtleties in this model, some of which I modestly omitted. One of them is that it's pointless to predict a single sample with such a model, just pointless. You simply won't get the accuracy you want, and never will. You have to predict in a "statistical" sense. That's how it is, to put it in literary terms.

Thanks a third time, let's try it in a "statistical" sense))

 

Glad to be of help. Good luck :o)


By the way, I'll try to post a forecast in the 'Real Time Prediction System Testing' thread here nearby. If I manage, by Monday; if not, I'll post it later. As they say, 'join in'.

 
2 grasn:

1. An idea occurred to me: if I apply the cosine transform twice (first to the test section and then to each of the curves that look so much like sine waves), won't that "improve" the predictive properties of the process? I'll try to report the results tomorrow.

2. Of course, for long-term prediction of certain types of processes AR will be better, although a superposition of two sinusoids may well be interpolated by a neural network.

3. Did I understand correctly (I read it in one of your posts) that for this method it's better to predict ln(Xi/Xi-1) rather than Close itself?

 
equantis >> :

1. An idea occurred to me: if I apply the cosine transform twice (first to the test section and then to each of the curves that look so much like sine waves), won't that "improve" the predictive properties of the process? I'll try to report the results tomorrow.

2. Of course, for long-term prediction of certain types of processes AR will be better, although a superposition of two sinusoids may well be interpolated by a neural network.

3. Have I got it right (I read it in one of your posts) that for this method it's better to predict ln(Xi/Xi-1) rather than Close itself?

1. It has to be tried.

2. The point is that the frequency dynamics have no periods; it's a complicated process, and not a superposition of sine waves at all.

3. Yes, that is one of the options for reducing the series to stationarity.
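A quick sketch of point 3 with synthetic data: the log returns ln(Xi/Xi-1) are the (roughly stationary) series the model would see, and the original Close path is recovered from them exactly by a cumulative sum.

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic Close series following a random walk in log-price
close = 100.0 * np.exp(np.cumsum(rng.normal(0.0, 0.01, size=250)))

log_ret = np.diff(np.log(close))                 # ln(X_i / X_{i-1}), the series to model
rebuilt = close[0] * np.exp(np.cumsum(log_ret))  # exact reconstruction of close[1:]
```

So nothing is lost by modelling returns instead of prices: any forecast of the next log return converts back into a price forecast by one multiplication.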