Market model: constant throughput

 
sanyooooook:
more often, less often, but what if time is not taken into account?
The definition uses probability; there is no time reference, so there is complete freedom with that. Or, if you prefer, the unknown :).
 

It is assumed that the market, as a relatively closed system, generates a constant (or slowly changing) amount of information per unit of time.

I might agree to some extent: the market will not change by a certain amount (say, in points) until a certain amount of information arrives.

 
sanyooooook:

I might agree to some extent: the market will not change by a certain amount (say, in points) until a certain amount of information arrives.


New people come in with new ideas about old information and the hope of making money on the fallacy of other people's assumptions, and they change the market :) They have heard nothing about efficient market theory :)
 
What is this all about?
 
Avals:

New people come in with new ideas about old information and the hope of making money on the fallacy of other people's assumptions, and they change the market :) They have heard nothing about efficient market theory :)


They came in, and 40 pips later they left? Realised the fallacy of their assumptions about the novelty of their ideas (about old information) and went off to read about the efficiency of markets?

--

It's not Friday.

 
gip:
The moped isn't his; he just posted a link, and it leads somewhere else quite far away.

Okay. As one of the co-authors of the moped, I'll give you a little insight:

"Standard" archiving methods (never mind LZ, Huffman, more exotic ones) work with a byte stream at the input.

A stream of quotes is usually encoded as a stream of real numbers, i.e. in chunks of several bytes each.

As long as this fact is ignored, compression by popular general-purpose archivers cannot come even close to a meaningful estimate of the amount of information in a quote stream.

Moreover, Lempel-Ziv will consistently outperform Huffman by a large factor, which is quite logical (but completely meaningless), because the input contains easily predictable "technical" patterns related to the data representation rather than to the behavior of the quotes.

On the other hand, it seems premature to write a specialized archiver for compressing a stream of doubles, since the prospects of this approach are unclear and vague.

So, what can meaningfully be done, with minimum effort and maximum soundness of the results, to advance the research?

The way I see it: first of all, transform the quote stream

1) into a byte stream

2) of first differences

3) taken logarithmically.

In this case, maybe this information research will become a little more informative.

// pardon the pun.
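
To make this concrete, here is a minimal sketch of the pipeline in Python (my sketch, not the topic starter's code): the random-walk quotes and zlib standing in for the "universal archiver" are assumptions for illustration.

import math
import random
import struct
import zlib

random.seed(1)
quotes = [1.3000]
for _ in range(100_000):
    quotes.append(quotes[-1] * math.exp(random.gauss(0.0, 1e-4)))

# Naive encoding: each quote packed as an 8-byte double.
raw = struct.pack(f"{len(quotes)}d", *quotes)

# Proposed transform: logarithmic first differences, then a byte stream.
# (Lossy 1-byte quantization; enough for a rough information estimate.)
rets = [math.log(b / a) for a, b in zip(quotes, quotes[1:])]
scale = max(abs(r) for r in rets) or 1.0
stream = bytes(max(0, min(255, int(128 + 127 * r / scale))) for r in rets)

print("raw doubles ratio:   ", round(len(zlib.compress(raw, 9)) / len(raw), 3))
print("log-diff bytes ratio:", round(len(zlib.compress(stream, 9)) / len(stream), 3))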

 
MetaDriver:

As long as this fact is ignored, compression by popular general-purpose archivers cannot come even close to a meaningful estimate of the amount of information in a quote stream.

Moreover, Lempel-Ziv will consistently outperform Huffman by a large factor, which is quite logical (but completely meaningless), because the input contains easily predictable "technical" patterns related to the data representation rather than to the behavior of the quotes.

And what do you think about the result obtained by the topic starter? I mean the higher compression ratio for the random series.

Could it mean that the archiver was not able to "decipher" the information contained in the price series, but was also not able to discard it?

The way I see it: first of all, transform the price series stream

1) into a byte stream

2) of first differences

3) taken logarithmically.

This is just to the question of what to apply the information meter to.

There is, it seems to me, a rather general problem here: it is precisely the rare events that carry the most information.

However, it is precisely because they are rare that we cannot reliably reconstruct for them the probability density function needed to quantify the information they contain.
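
A small numerical illustration of this point (assumed Gaussian returns, binned to whole sigmas): the tail bins show the highest self-information, -log2(p), while resting on the fewest samples, so their probability estimates are the least reliable.

import math
import random
from collections import Counter

random.seed(2)
returns = [random.gauss(0.0, 1.0) for _ in range(10_000)]

counts = Counter(round(r) for r in returns)  # bin returns to whole sigmas
n = sum(counts.values())
for k in sorted(counts):
    p = counts[k] / n
    print(f"bin {k:+d}: count {counts[k]:5d}  p={p:.4f}  "
          f"self-information {-math.log2(p):5.2f} bits")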

 
Candid:

What do you think about the result obtained by the topic starter? I mean the higher compression ratio for the random series.

Could it mean that the archiver was not able to "decipher" the information contained in the price series, but was also not able to discard it?

No, I think it is more primitive than that. As far as I understood while looking through the topic, it was candlesticks that were compressed, and when the random signal was generated, candlesticks were simulated as well. Note that the normal distribution was used. That is the trick (imho): the normal distribution creates a higher probability density of candlesticks with equal amplitude and shift values, i.e. "less informativity" in the sense of information theory, and hence greater compressibility. The result obtained, by the way, shows that the method works in spite of the instrumental primitiveness of the approach.
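
A quick sketch of that mechanism (the two synthetic byte streams are assumptions for illustration): zlib's compressed size tracks the Shannon entropy of the quantized stream, so probability mass concentrated on a few symbol values means fewer bits per symbol and better compression.

import math
import random
import zlib
from collections import Counter

def entropy_bits_per_byte(data: bytes) -> float:
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

def clip(x: float) -> int:
    return max(0, min(255, int(x)))

random.seed(3)
n = 200_000
peaked = bytes(clip(random.gauss(128, 2)) for _ in range(n))   # concentrated
spread = bytes(clip(random.gauss(128, 40)) for _ in range(n))  # dispersed

for name, data in (("peaked", peaked), ("spread", spread)):
    bits = 8 * len(zlib.compress(data, 9)) / n
    print(f"{name}: entropy {entropy_bits_per_byte(data):.2f} bits/byte, "
          f"zlib {bits:.2f} bits/byte")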

It is just to the question of what to apply the information meter to.

There is, it seems to me, a rather general problem here: it is precisely the rare events that carry the most information.

However, it is because of their rarity that we cannot reliably reconstruct for them the probability density function needed to quantify the information they contain.

This is where ticks can be of help. In the sense that

1) there is plenty of input information, a lot to turn over and squeeze.

2) tick analysis will be much more correct than analysis "castrated" down to bars.

And on top of that you can build all sorts of fine threshold ZigZags, Renko, Kagi and other such modifications.

And one more thing: the rarity of an event is relative. The very rare ones can be ignored (cut off), and for the rest you can collect decent statistics.
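
For illustration, a minimal threshold filter over ticks in the spirit of the Renko / threshold ZigZag idea (the synthetic ticks and the 5-point threshold are assumptions): an event is emitted only when price has moved at least the threshold away from the last recorded pivot.

import random

def threshold_events(ticks, threshold):
    # Keep a price as a new pivot only once it is at least `threshold`
    # away from the previous pivot; everything in between is noise.
    pivots = [ticks[0]]
    for p in ticks[1:]:
        if abs(p - pivots[-1]) >= threshold:
            pivots.append(p)
    return pivots

random.seed(4)
price, ticks = 1.3000, []
for _ in range(50_000):
    price += random.gauss(0.0, 0.00005)
    ticks.append(price)

events = threshold_events(ticks, 0.0005)  # assumed 5-point threshold
print(f"{len(ticks)} ticks -> {len(events)} threshold events")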

--

Overall impression of the thread: somewhat muddled. The level of discussion deserves an upgrade, and the topic is worth it. A meaningful topic.

 

Candid:

Could it mean that the archiver was not able to "decipher" the information contained in the price series, but was also not able to discard it?

Actually, I should not have opened with a flat "no". One may well put it that way: my interpretation does not contradict it in any way, rather it is quite consistent with it.

It's the fat tails again. Assholes. They're the cause of everything.

 

MetaDriver:

... the fat tails. Assholes. They're the cause of everything.

Don't write, "You just don't know how to cook them..."

I know.

:)