Dependency statistics in quotes (information theory, correlation and other feature selection methods)
And more often than not the system should say: "let's sit on the fence; my model of the universe is in crisis." I suppose this is a useful quality for any intelligent trading system, reflecting the chaotic reality of the market: it permits only a slight glimpse into the future, and only at certain moments.
Candid: Generally, judging by how my posts keep hanging in the air unanswered, my time in this thread has either passed or not yet come :). It's probably time to let the fountain rest :).
Probably not yet :).
To be honest, I wasn't going to stir the topic up again yet, but once it came up, I expected it would develop along roughly these lines. I don't regret it, though, as the discussion has clarified some things.
Candid: I assumed from the beginning that the methodology picks up dependencies of any kind, both useful for forecasting and useless. Regarding volatility, there is definite evidence here to support that assumption.
Volatility is a serious player in this Information Game, but I think it is still not the king and god.
I'll continue the topic, mostly for the aesthetes. Maybe this will be the end of it, or maybe it will open up another one.
I'll post the results of my experiments.
A chart of the amount of mutual information at lags from 1 to 250 for the zero bar (more precisely, for the price increments p[0] - p[1]) on EURUSD D1.
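Roughly, the computation behind this chart can be sketched as follows (a minimal reconstruction, not the code actually used; the bin count, function names, and chronological indexing are illustrative assumptions):

```python
# Sketch: plug-in (histogram) estimate of mutual information between the
# current price increment and the increment k bars back, for k = 1..250.
import numpy as np

def mutual_information(x, y, bins=8):
    """Histogram estimate of I(X; Y) in bits."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()                            # joint probabilities
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)   # marginals
    mask = pxy > 0
    return np.sum(pxy[mask] * np.log2(pxy[mask] / np.outer(px, py)[mask]))

def mi_profile(close, max_lag=250, bins=8):
    """I(r[t]; r[t-k]) for k = 1..max_lag, where r = diff of closes."""
    r = np.diff(np.asarray(close, dtype=float))
    return np.array([mutual_information(r[k:], r[:-k], bins)
                     for k in range(1, max_lag + 1)])

# Usage: with EURUSD D1 closes in `close`,
#   mi = mi_profile(close); print(mi.sum())  # the "sum of mutual information"
```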
Next, let's try keeping the original volatility of the series (preserving the increment moduli) while shuffling the signs of the increments. Here is what we get.
The chart is similar, and the sum of mutual information is very close. This means that destroying the signs of the increments did not affect the mutual information. To confirm the insignificance of the sign, let's try the opposite: keep the sequence of increment signs as in the original, but shuffle the increment moduli, breaking the volatility structure. Now we have.
The chart looks different, and the sum has decreased significantly. So, with volatility removed but the original sequence of increment signs intact, we have much less information about the zero bar.
Now let's shuffle both the increment signs and the sequence of increment moduli, i.e. get rid of both the volatility and the sign sequence present in the original series.
We obtain approximately the same result; the sum is even slightly higher. We may assume that the series stripped of its volatility structure behaves almost like a completely random series (which, however, preserves the distribution law).
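The three surrogate series used above can be reconstructed along these lines (a sketch under stated assumptions; the helper names and the RNG seed are mine, not the poster's):

```python
# Sketch of the three surrogates: each destroys one or both of the
# volatility structure (the moduli sequence) and the sign sequence,
# while keeping the set of increment values involved.
import numpy as np

rng = np.random.default_rng(0)  # assumed seed, for reproducibility

def shuffle_signs(r):
    """Keep |r| in place (volatility preserved), shuffle the signs."""
    return np.abs(r) * rng.permutation(np.sign(r))

def shuffle_moduli(r):
    """Keep the sign sequence, shuffle |r| (volatility destroyed)."""
    return np.sign(r) * rng.permutation(np.abs(r))

def shuffle_both(r):
    """Shuffle signs and moduli independently: both structures destroyed,
    the marginal distribution approximately preserved."""
    return rng.permutation(np.sign(r)) * rng.permutation(np.abs(r))

# Each surrogate is then passed through the same lag-wise MI computation
# as the original increments, e.g. the mi_profile() sketch above, adapted
# to take increments directly instead of closes.
```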
Rather than generate multiple realizations for each experiment, let's statistically test the hypothesis that the mutual information values obtained for the different series actually differ.
Kolmogorov-Smirnov test for the mutual information of the original series versus the series with preserved volatility: p > 0.1. The hypothesis of a difference is rejected.
For the original series versus the series with the sign sequence retained: p < 0.01. The hypothesis of a difference is confirmed.
Test for the series with the signs retained versus the fully random series: p < 0.1. An ambiguous result, but the sum of mutual information for the random series is even larger, so I am inclined to accept the hypothesis of a difference, or at least of no superiority over the random series.
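For the testing step, a sketch of the comparison (scipy's ks_2samp is an assumed stand-in for whatever KS implementation was actually used; the wrapper name is mine):

```python
# Sketch: two-sample Kolmogorov-Smirnov test comparing the 250 lag-wise
# MI values of one series against those of another.
from scipy.stats import ks_2samp

def compare_mi(mi_a, mi_b, alpha=0.01):
    """Return the KS statistic, p-value, and a crude verdict."""
    stat, p = ks_2samp(mi_a, mi_b)
    verdict = "distributions differ" if p < alpha else "no detectable difference"
    return stat, p, verdict

# e.g. compare_mi(mi_original, mi_sign_shuffled)
```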
Conclusion: working with closing-price increments, this methodology detects dependencies in price volatility, while dependencies in the signs of the increments, if they exist at all, go undetected. The direction of price movement cannot be predicted with it.
I've been off the subject for the last month: I've been very busy with other things, so I haven't had time for it.
I agree with the verdict in principle, but only for the daily timeframe. I suspected and said before (and not only me) that there is much more chaos in the dailies than on shorter timeframes.
It should also be taken into account that bars carrying redundant information have not been screened out. I suspect this strongly affects the result.
In short, selecting the data that may be fed to the neural network's input should be approached much more seriously. It turns out that to get any benefit from a neural network, you have to feed it only the cleanest delicacies. And what we have now is not a delicacy yet, but an uncaught stellate sturgeon.
Alexei, first of all, I'm glad to see you in the thread. I agree with your opinion: I have also heard, and thought myself, that there is a great deal of chaos in the dailies. My view is this: on large TFs the time series is not as smooth as on 1-minute and 5-minute ones, and even less so than on ticks. If one learns to predict several bars ahead on small TFs, that would be real power. Of course, I can calculate the mutual information for minutes as well; that will be even more interesting. I may do it for ticks too; I will take them from Gain Capital's site. But the problem of using information from an ensemble of bars has not been worked out, and I'm stuck on it. Sorry.
I completely agree that the "sturgeon" hasn't been caught yet. And the problem of redundant information is important in this respect: by taking information on specific bars, we are fundamentally raising the question of the importance of each individual lag.
Anyway, good to see you on the air again.
For minutes, let alone ticks, it's probably too wasteful in terms of time and PC resources. I'm planning to take the hourly bars and run the calculations on them. We'll see.
The most serious problem here is not on the surface but inside: past history is not constant at the dealing center. Bars appear and disappear all the time, and local changes in past history may seriously affect the result (or rather, the Matrix). This makes me extremely uncomfortable. I am looking for a way to solve the problem of history constancy and, at the same time, reduce the number of calculations by an order of magnitude.
Maybe on large timeframes the time series is not as smooth as on 1-minute and 5-minute ones, and even less so than on ticks, but it is more predictable. On smaller timeframes, especially 1-minute ones, the time series shows regularity over several hundred or even thousands of bars, while over tens of bars the share of random components in any overall pattern is very high.
I agree, Yusuf; that opinion exists too. That's why I took daily bars, by the way. But, interestingly, the sum of mutual information over the same number of lags is greater for hourly bars than for daily ones. Even if it is mostly volatility, a fact is a fact. So maybe smaller timeframes really are better suited for a particular prediction model.