Market model: constant throughput

 
gip:

...

Compression is conventionally a function of the distribution, but how do you think you can predict the price from all of this?

The method may not help with prediction, but it will help to select the instrument with the most patterns available for trading. So this thread is, I think, part of that one. At least it looks as if these two threads are interconnected. Right, author?

Mathemat:

So far I see a hint that Candid, together with hrenfx, is moving towards proving that market BPs (time series) are not SBs (random walks). Well, that is worth at least a Fields Medal (they don't give mathematicians a Nobel).

The fact that market BPs are not SBs has already been proven by many on this forum, directly or indirectly. The author's method could at least be used as an estimate of the degree of non-randomness of a particular BP.
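As a minimal sketch of that "non-randomness estimate" idea (not the author's method; the archiver, the quantisation step and the shuffle baseline are my own assumptions), one could compare how well a series compresses against a shuffled copy of itself, which keeps the distribution but destroys the temporal order:

import zlib
import numpy as np

def compressed_size(returns, step=1e-5):
    # Quantize returns to signed bytes and measure the zlib-compressed size.
    q = np.clip(np.round(returns / step), -127, 127).astype(np.int8)
    return len(zlib.compress(q.tobytes(), 9))

def nonrandomness_score(prices, trials=20, seed=0):
    # Ratio of the mean compressed size of shuffled returns to that of the real returns.
    # Under the conventional intuition, a value noticeably above 1 hints that the real
    # ordering carries structure a compressor can exploit.
    rng = np.random.default_rng(seed)
    r = np.diff(np.log(prices))
    real = compressed_size(r)
    shuffled = np.mean([compressed_size(rng.permutation(r)) for _ in range(trials)])
    return shuffled / real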

 
joo:

The method may not help with prediction, but it will help to select the instrument with the most patterns available for trading. So this thread is, I think, part of that one. At least it looks as if these two threads are interconnected. Right, author?

The fact that one series compresses better than another should not be taken at face value. Sometimes redundant information is easy for the coder to identify and sometimes it is difficult. But this has nothing to do with market patterns.
 
gip:


Well, if you (please don't take the informal "you" as rude) are so much on top of the topic, why didn't you suggest anything to hrenfx?

All the comments that I think are important have already been set out in this thread and I see no point in repeating myself.

hrenfx:

Would you mind pointing out where those articles can be found?


I don't remember. I have to look it up.
 
joo:

The method may not help with prediction, but it will help to select the instrument with the most patterns available for trading. So this thread is, I think, part of that one. At least it looks as if these two threads are interconnected. Right, author?

Yes, they are interconnected. My "nerdy" threads all bend in the same direction. And I will not be ashamed to admit my mistakes and misconceptions once I realise them.

There is just one piece of work in CodeBase, which I substantiated. But it did not interest anyone. For some reason the "nerdy" threads (which would contain nothing but words if the results of even minimal research attempts were not posted) attract educated people more.

 
hrenfx:

Random BPs compress better. Compressibility appears to be asymptotically bounded from below, and the asymptote of the price BPs lies above the asymptote of the random BPs.

I just don't understand. The compression ratio is the ratio of the size before compression to the size after it. At the left edge of the chart this ratio is around 330 for the price series, while for the random series it is only 250.

In other words, under the traditional interpretation of the term compression ratio it is exactly the opposite: the random series compress worse.
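Just to pin the term down as it is used here (a trivial sketch, not tied to the author's data):

import zlib

def compression_ratio(data: bytes) -> float:
    # Conventional definition: size before compression divided by size after.
    # A larger ratio means the data compresses better.
    return len(data) / len(zlib.compress(data, 9))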

 
Candid:

I just don't understand. The compression ratio is the ratio of the size before compression to the size after it. At the left edge of the chart this ratio is around 330 for the price series and 250 for the random series.

That is, under the traditional interpretation of the term compression ratio it is exactly the opposite: the random series compress worse.


I did not explain the graph right away. In that graph, the number of financial instruments in the sample is shown on the abscissa. That is, the size of the sliding sample window changes linearly with the number of financial instruments.

And the ordinate shows the expectation of the window compression size divided by the number of financial instruments.

I made it this way so that I could see how compression improves as the number of financial instruments grows. Where the expectation is higher, compression is worse. It is higher for the price series; the random series compress better.
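If that description is read literally, the plotted quantity could be reproduced roughly as follows (a sketch under my own assumptions; the archiver, window length, step and the way the instruments are stacked are guesses, not the author's code):

import zlib
import numpy as np

def window_metric(series_list, window=10_000, step=1_000):
    # series_list: one quantized int8 return series per financial instrument;
    # the abscissa of the chart would be len(series_list).
    n = len(series_list)
    length = min(len(s) for s in series_list)
    data = np.vstack([np.asarray(s[:length], dtype=np.int8) for s in series_list])
    sizes = []
    for start in range(0, length - window + 1, step):
        block = data[:, start:start + window].tobytes()  # raw window size grows linearly with n
        sizes.append(len(zlib.compress(block, 9)))
    # Ordinate: expectation of the compressed window size per instrument (lower = better compression).
    return np.mean(sizes) / n

Plotting window_metric(series_list[:n]) against n would then mimic the chart: the curve falls where adding instruments lets the archiver exploit redundancy between them.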

 
hrenfx:


And the ordinate shows the expectation of the window compression size divided by the number of financial instruments.

Still not clear :). If the underlined part is read as the size of the compressed window, then the SBs compress better. And if it is read as the window compression ratio, then it is the other way round. So I personally cannot interpret the term "window compression size" unambiguously.

This is not nit-picking. You cannot argue for conclusions without giving unambiguous definitions of the quantities used, especially when you are defending a rather paradoxical result.

 
Candid:

Still not clear :). If the underlined part is read as the size of the compressed window, then the SBs compress better. And if it is read as the window compression ratio, then it is the other way round. So I personally cannot interpret the term "window compression size" unambiguously.

Yes, I got the ending of the word wrong. It should read: the compressed window size.
 
I see, thank you.
 

1. If we take the initial idea as an axiom, then first of all we need an ideal archiver, and we are as far from that as from the Moon: http://unseal.narod.ru/molekula_dnk.html

2. To test the idea, I think we should model the market with tick-flow modelling in the tester, but over a decent time interval, build minute bars on the basis of that flow, compress them, and compare the results with the real ones (a rough sketch of this is given after the list). Somewhere there may be a flaw in the purity of the experiment.
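A very rough sketch of point 2, with everything about the tick model assumed (a plain random-walk tick stream and close-only minute bars, nothing like the tester's real tick modelling):

import zlib
import numpy as np

rng = np.random.default_rng(42)

# 1. Model the tick flow as a random walk (a crude stand-in for tick modelling in the tester).
ticks_per_minute = 60
minutes = 100_000
ticks = 1.3000 + np.cumsum(rng.normal(0.0, 1e-5, minutes * ticks_per_minute))

# 2. Build minute bars (close prices only, for brevity) on the basis of that flow.
closes = ticks.reshape(minutes, ticks_per_minute)[:, -1]

# 3. Compress the quantized minute returns; running the same code on real M1 closes
#    gives the figure to compare against.
q = np.clip(np.round(np.diff(closes) / 1e-5), -127, 127).astype(np.int8)
print("compressed size of synthetic M1 returns:", len(zlib.compress(q.tobytes(), 9)))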

The main thing I am interested in here is the minimum number of pairs needed for market analysis. If hrenfx finds these pairs I will be very grateful to him. Referring to the thread https://www.mql5.com/ru/forum/114579, I want to say that this question has been raised more than once: is it enough to use the majors, or is the maximum possible number of instruments in the cluster necessary?