Market phenomena

 

Gradually approaching the models and the phenomenon. Stochastic models with a random structure presuppose, of course, both the models themselves and a description of the transitions between them, i.e. some probabilistic logic by which one process generated by these models is intercepted by another. Say the time series is described by 100 Ito stochastic differential equations; then there is the question of identifying the models: what drift functions, what diffusion coefficients for each equation, what the initial probability vector of the system's states is. In general, it is not a trivial task.
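To make the setup concrete, here is a minimal sketch (mine, not the author's actual system of 100 equations) of a process with random structure: two Ito equations with different drifts and diffusions, and a Markov chain deciding which one is active. Every number here is an illustrative assumption.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical two-regime model: dX = mu[s]*dt + sigma[s]*dW, where the
    # active regime s follows a Markov chain. All numbers are illustrative
    # assumptions, not identified values.
    mu    = np.array([0.05, -0.08])    # drift of each Ito equation
    sigma = np.array([0.10,  0.25])    # diffusion coefficient of each
    P     = np.array([[0.99, 0.01],    # probabilistic logic of interception:
                      [0.02, 0.98]])   # chance of switching between processes
    p0    = np.array([0.5, 0.5])       # initial probability vector of states

    dt, n = 1.0, 5000
    s = rng.choice(2, p=p0)            # draw the initial state
    x = np.zeros(n)
    for t in range(1, n):
        # Euler-Maruyama step of the currently active equation
        x[t] = x[t-1] + mu[s]*dt + sigma[s]*np.sqrt(dt)*rng.standard_normal()
        s = rng.choice(2, p=P[s])      # one process intercepts the other

Identification runs in the hard direction: given only x, recover the drifts, the diffusion coefficients, P and p0 (and for 100 equations rather than two), which is exactly the non-trivial task described above.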

So I have in fact invented a transformation that decomposes any initial time series into two subprocesses. I haven't seen anything similar before, though it may be a particular case of the canonical representation of random functions. Who knows, I'm not a professional mathematician. The gist of it is "sifting" the series through a grid. Never mind, I won't lay out the mathematics yet; first I have to deal with the ideas and the concept. What's important is that after the transformation we get only two processes; they are linear in form but have a more complicated structure.

Figure: a random process, matched in its characteristics to the M15 increments.

After the transformation we obtain:

For a random process, the coefficients b(alpha) and b(omega) in the models will be equal in absolute value; the difference in the lengths of the two subprocesses shows the predominance of one dynamic or the other; and for a random process the internal structure of the separated processes will be close to a straight line. There are still some theoretical issues and better algorithms to develop, but that is a separate story.
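The author withholds the mathematics of the transformation, so the following is only my guess at a minimal version of the "sifting" idea, based on the RMS cutoff mentioned later in the thread: increments within the threshold go to one pile, the rest to the other, each pile is cumulated into a subprocess, and a line a + b*t is fitted to each so that b(alpha) and b(omega) can be compared. The function names and the threshold rule are assumptions.

    import numpy as np

    def sift(prices, k=1.0):
        # Assumed reconstruction of the "sifting" transformation:
        # split increments by a k*RMS threshold into two subprocesses.
        r = np.diff(prices)
        cut = k * np.sqrt(np.mean(r**2))          # RMS-based grid/cutoff
        alpha = np.cumsum(r[np.abs(r) <= cut])    # "quiet" subprocess
        omega = np.cumsum(r[np.abs(r) >  cut])    # "loud" subprocess
        return alpha, omega

    def slope(x):
        # OLS slope b of the linear model a + b*t fitted to a subprocess
        return np.polyfit(np.arange(len(x)), x, 1)[0]

    # For a pure random walk, |b(alpha)| and |b(omega)| should come out
    # close, and the relative lengths of the piles show which dynamic
    # dominates; real quotes are claimed to behave differently.
    rng = np.random.default_rng(1)
    walk = np.cumsum(rng.standard_normal(10_000))
    a, w = sift(walk)
    print(len(a), len(w), slope(a), slope(w))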

By the way, another indirect (in the sense of not yet strictly proven) assertion: the quote process is not random, since its decomposition characteristics differ from those of random time series (well... not everything there is strict yet).

So, there remains the question of the transition probabilities between states (processes). If these transitions can be considered Markovian, then by the Chapman-Kolmogorov equation it will be possible to obtain the probabilities of the system's states at a given horizon in the future.
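For the Markov case this is one line of linear algebra: the state probabilities n steps ahead are p(n) = p(0) * P^n. A minimal sketch with an assumed two-state transition matrix:

    import numpy as np

    P  = np.array([[0.95, 0.05],    # assumed one-step transition matrix
                   [0.10, 0.90]])   # (illustrative numbers only)
    p0 = np.array([1.0, 0.0])       # system starts in process 0

    # Chapman-Kolmogorov: probabilities of the states n steps ahead
    n  = 50
    pn = p0 @ np.linalg.matrix_power(P, n)
    print(pn)                       # approaches the stationary distribution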

Now about the coolest phenomenon (this isn't a branch of ready-made phenomena; whether the research pans out is still an open question). Here I am certain there are "stochastic patterns" (TA has nothing to do with it), a very strong certainty, and I hope they will be confirmed. It is possible that I am wrong, and then, I can already feel it, it is scary to imagine: paukas will present an invoice for the lost profit for payment.

 
IgorM:

I do not understand one thing: why do you need to analyse Opens?

Well, Opens and Closes are artificially lowered or raised at the close of a bar; sometimes it seems that at the close of a bar "there is a game to redraw the colour of the candle".

My theory is that everything we get from the DC (dealing centre), the whole of OHLC, is artificial. But seriously, we need to measure the process at equal time intervals (from an engineering point of view, DSP is simply more familiar and correct). The OHLC properties don't seem much different to me.
 
Farnsworth:
But seriously, we need to measure the process at equal time intervals (from an engineering point of view, DSP is simply more familiar and correct).
NN, you are looking for some kind of momentum
 
IgorM:
NN, you are looking for some kind of momentum
No, momentum (and all its derivatives) has nothing to do with it.
 
Farnsworth:
My theory is that everything we get from the DC, the whole of OHLC, is artificial. But seriously, we need to measure the process at equal time intervals (from an engineering point of view, DSP is simply more familiar and correct). The OHLC properties don't seem much different to me.
Well, for the sake of interest, you could try building alpha from the Highs and omega from the Lows.
 
marketeer:
Well, for the sake of interest, you could try building alpha from the Highs and omega from the Lows.

No problem, but only next Sunday, when I get to the lab. I don't think they will change fundamentally; even the transformation characteristics will all remain approximately the same.

to All

Once again I would like to draw attention to this. I don't know about my colleagues, but when the increments taken within the RMS cutoff yielded a trend, I was very surprised. I confess I was expecting something like a wandering trend, but it really is a trend. I showed it using the coefficient of determination as an example, but that is a very bad indicator, because it characterises the quality of the model's fit to the original series: a linear regression can fit almost any series, even a random one, and it will show that, for example, 95% of a time series over a 10-year history is explained by a + b*x within the average error. So I then used fractal characteristics, in particular estimates of the Hurst exponent obtained in several ways (R/S analysis, variance-based methods, ...). It takes a long time to compute, but it confidently shows a trend for a given LAMBDA, unlike random walks.
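For reference, a crude R/S sketch of the Hurst estimate (a simplification of full R/S analysis, and slow on long series, as noted): H near 0.5 indicates the increments of a random walk, H above 0.5 a persistent, trending series.

    import numpy as np

    def hurst_rs(increments, min_win=10):
        # Crude Hurst estimate via rescaled range (R/S): slope of
        # log(R/S) against log(window size). A sketch, not production code.
        x = np.asarray(increments, float)
        n = len(x)
        wins = np.unique(np.floor(np.logspace(np.log10(min_win),
                                              np.log10(n // 2),
                                              10)).astype(int))
        log_w, log_rs = [], []
        for w in wins:
            rs = []
            for i in range(0, n - w + 1, w):       # non-overlapping windows
                seg = x[i:i + w]
                dev = np.cumsum(seg - seg.mean())  # cumulative deviations
                if seg.std() > 0:
                    rs.append((dev.max() - dev.min()) / seg.std())
            if rs:
                log_w.append(np.log(w))
                log_rs.append(np.log(np.mean(rs)))
        return np.polyfit(log_w, log_rs, 1)[0]

    rng = np.random.default_rng(2)
    print(hurst_rs(rng.standard_normal(4000)))     # white noise: H ~ 0.5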

There are also interesting subtleties, but that's much later.

 
Farnsworth:
We will hopefully get to more serious "fractal" mathematics in the study of "fat tails". It will take some more time, but for now I am posting a near-scientific study that has given me some thoughts.

Assumptions about the model.

...

Well now, this is much more interesting than it was before.

Here's the question: is there any meaningful dependence between the samples in both processes (a linear trend is not really it)? The thought is simple: if such dependence appears after all the transformations, then both processes really do have properties that distinguish them from a random walk.

By the way, this could also be used to divide the series into processes, i.e. divide not by the RMS cutoff but by... I don't know, by autocorrelation, for example.

 
HideYourRichess:

Well now, this is much more interesting than it was before.

Here's the question: is there any meaningful dependence between the samples in both processes (a linear trend is not really it)? The thought is simple: if such dependence appears after all the transformations, then both processes really do have properties that distinguish them from a random walk.

By the way, this could also be used to divide the series into processes, i.e. divide not by the RMS cutoff but by... I don't know, by autocorrelation, for example.


Where would the patterns/dependencies come from? You take a timeframe and put some of the increments in one pile and some in another, depending on their value. A few points, or a shift of the reference point, can change the composition of these "processes". Where does the trading logic come in with such a breakdown? We assign a 20-point move on M15 to omega, but had it been 21 points it would be different: alpha :) And where did such a matrix for dividing the returns come from in the first place? How could it have turned out otherwise, even for a random walk, when the division rule itself guarantees that one "process" gets more of the negative returns and the other more of the positive ones?
 
HideYourRichess:

Well now, this is much more interesting than it was before.

Here's the question: is there any meaningful dependence between the samples in both processes? The thought is simple: if such dependence appears after all the transformations, then both processes really do have properties that distinguish them from a random walk.

I have not looked at the correlation for these processes yet; moreover, I have not looked at it on purpose. The main reason is that I "picked out" of the series only those samples that fell under the classification, and the resulting holes were simply ignored. That is, according to the original conception, there is a deterministic trend, with a more complex structure than just a line, but deterministic nonetheless. And this "trend" process is interrupted (exactly interrupted, or rather destroyed) by another, more complex "killer process" (tails, ears, whatever sticks out). It is important to note that this is not a trend mixed with noise; rather, two very complex processes compete, one creative, the other destructive.

Use? Almost easy :o) You can predict the "carrying process" accurately enough (within reasonable limits) and then, for example, use the Monte Carlo method to estimate the future destruction, as well as the most probable levels of price accumulation after the "crash".
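A hedged sketch of that Monte Carlo step, with every input assumed for illustration: take a point forecast of the carrying process, bootstrap "destructive" increments (say, from the omega pile), and histogram where the price settles after the crash.

    import numpy as np

    rng = np.random.default_rng(3)

    # Assumed inputs: a forecast level of the carrying process and an
    # empirical pile of destructive increments (stand-in sample here).
    carry_forecast = 1.3500
    omega_increments = rng.laplace(0.0, 0.0015, 500)

    # Monte Carlo: bootstrap destruction paths of random length and
    # record the level where the price settles after the "crash".
    levels = np.empty(10_000)
    for j in range(levels.size):
        k = rng.integers(5, 50)                    # assumed crash duration
        path = rng.choice(omega_increments, size=k)
        levels[j] = carry_forecast + path.sum()

    hist, edges = np.histogram(levels, bins=50)
    m = hist.argmax()
    print("most probable accumulation zone:", edges[m], "-", edges[m + 1])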

And I think that in this endless process of trend creation and destruction there must be those very "stochastic patterns". I keep coming at them from different angles, and here is another approach. But the philosophy changes a bit: it turns out that there is a trend, predetermined by the very nature of the company, society, country, whatever. It is one, i.e. there are no bulls and bears. But there are environmental conditions under which this trend cannot exist in its ideal form, and society itself can destroy it (the trend). But this is all lyricism, don't pay attention.

By the way, this could also be used to divide the series into processes, i.e. divide not by the RMS cutoff but by... I don't know, by autocorrelation, for example.

In principle that is correct; it is not the only possible filtering criterion.

PS IMPORTANT: I could not filter this process in DSP terms, I could not filter it at all!!! But this primitive method gave results. I think anything with the prefix "multi" should work well here.

Next Sunday I'll try to evaluate the different characteristics of these particular processes.

 
Avals:

Where would the patterns/dependencies come from? You take a timeframe and put some of the increments in one pile and some in another, depending on their value. A few points, or a shift of the reference point, can change the composition of these "processes". Where does the trading logic come in with such a breakdown? We assign a 20-point move on M15 to omega, but had it been 21 points it would be different: alpha :) And where did such a matrix for dividing the returns come from in the first place? How could it have turned out otherwise, even for a random walk, when the division rule itself guarantees that one "process" gets more of the negative returns and the other more of the positive ones?

It's not that simple. Recall Alexey's post:

Another phenomenon is long-term memory.

Most of us (of those who bother with this at all, of course) are used to measuring market memory by Pearson correlation, more precisely autocorrelation. It is well known that such correlation is quite short-lived, significant only at lags of up to 5-10 bars at most. Hence it is usually concluded that if the market has a memory at all, it is very short.

However, Pearson correlation is only able to measure linear relationships between bars - and virtually ignores non-linear relationships between them. The correlation theory of random processes is not called linear for nothing.

However, there are statistical tests that allow us to establish the fact of an arbitrary relationship between random variables: for example, the chi-square test, or the mutual information criterion. I haven't really bothered with the second, but I have with the first. I will not explain how to apply it: there are plenty of manuals on the Internet.

The main question was this: is there a statistical relationship between bars that are far apart (for example, with a thousand bars between them)? How to use it in trading was not the question.

The answer is yes, it does exist, and it is very significant.

For example, if we take the EURUSD history since 1999 on H1 and run the chi-square test on pairs of returns, we find that at "distances" between bars from 10 to 6000, in about 90% of cases the current bar depends on bars from the past. 90%! At distances of more than 6000 bars such dependencies occur less often, but they still occur!

Frankly, I was stunned by this "discovery", as it directly shows that the euro has a very long memory. On EURUSD H1, 6000 bars is about a year. This means that among the hourly bars of a year ago there are still bars that the current, zeroth bar "remembers".

On H4, significant dependence is found up to about 1000-1500 bars, i.e. the duration of the "market memory" is the same: about a year.

Recall Peters, who says that the market memory is about 4 years. A contradiction, however... I do not yet know how to resolve it.

Not yet satisfied, I decided to check whether my chi-square would show such dependencies if fed synthetic returns generated independently. I chose two possible distributions for the synthetic returns, normal and Laplace, and ran the test. Yes, it shows some, but within the significance level of the test (I used 0.01)! In other words, the synthetic data showed about 1% of dependent bars in the past, exactly at the level of the test's error probability.
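A reconstruction of this kind of check (my sketch, not Alexey's actual code): bin the returns by quantiles, build the contingency table of (return[t], return[t-lag]) pairs, and apply the chi-square independence test. The usage example repeats the synthetic Laplace run just described, where rejections should appear only at about the 0.01 error rate.

    import numpy as np
    from scipy.stats import chi2_contingency

    def lag_dependence(returns, lag, bins=5, alpha=0.01):
        # Chi-square independence test between returns t and t-lag.
        # A reconstruction of the described procedure, not the original code.
        x, y = returns[lag:], returns[:-lag]
        # bin by quantiles so every cell of the table is populated
        qx = np.quantile(x, np.linspace(0, 1, bins + 1))
        qy = np.quantile(y, np.linspace(0, 1, bins + 1))
        ix = np.clip(np.searchsorted(qx, x) - 1, 0, bins - 1)
        iy = np.clip(np.searchsorted(qy, y) - 1, 0, bins - 1)
        table = np.zeros((bins, bins))
        np.add.at(table, (ix, iy), 1)
        return chi2_contingency(table)[1] < alpha  # True = dependence found

    # Null calibration on independent Laplace returns: the share of
    # "dependent" lags should stay near the 0.01 significance level.
    rng = np.random.default_rng(4)
    synth = rng.laplace(0.0, 1.0, 20_000)
    lags = range(10, 1000, 10)
    hits = sum(lag_dependence(synth, lag) for lag in lags)
    print(hits / len(lags))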

What are the conclusions?

1. Euro quotes are definitely not a Markov process. In a Markov process the current value depends only on the previous one; in our case there are numerous bars in the very distant past on which the current bar depends.

2. The so-called "fundamentals" certainly play a role, say, as a pretext for moving the quotes. But they are certainly not the only factor. We need to look at the technicals!

3. This result is still purely theoretical and of no practical importance yet. Nevertheless, it clearly shows that not all is lost for those who keep looking.

Avals, don't jump to conclusions...

PS: Moreover, what Alexey wrote - I confirm it completely!!!