From theory to practice - page 698

 
Aleksey Nikolayev:

1) We are talking about a very specific concept of an event from Kolmogorov's axiomatics.

2) There are no algorithms in this axiomatics.

I haven't violated Kolmogorov's axiomatics anywhere in my statements, and I certainly haven't denied it. But did you see that somewhere? Where? Give me a link.

You're confusing apples and oranges.

What are we talking about here? We're talking about an event that's the result of an algorithm:

In this algorithm there is a fixed condition: assign the event x the value 1 if the value of p is greater than the value of 1-p. Otherwise, the value -1 must be assigned to the event x.

When the algorithm works, this condition will always be satisfied.
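The algorithm as described can be sketched literally. This is a minimal illustration, not anyone's actual code; rnd(1) is taken, as stated later in the thread, to be a uniform draw from the interval (0, 1):

```python
import random

def step():
    """One step of the algorithm as described: draw p = rnd(1),
    assign x = 1 if p > 1 - p, otherwise x = -1."""
    p = random.random()              # rnd(1): uniform on (0, 1)
    return 1 if p > 1 - p else -1

# Simulate many steps: both outcomes occur, each about half the time.
n = 100_000
steps = [step() for _ in range(n)]
share_up = steps.count(1) / n
print(round(share_up, 2))            # close to 0.5
```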


You're saying that sometimes this event may or may not happen:

Forum on trading, automated trading systems and trading strategy testing

Random Walk:

Aleksey Nikolayev, 2018.10.28 11:17

This is not correct. The outcome x1=-1 is also possible, though less likely. As they say in mathematical statistics, with a large number of trials it will happen about 10% of the time. This is really basic axiomatics of probability theory. If you don't agree with me even on this, then I should stop discussing it with you.


Your statement is completely out of place. And it contradicts Kolmogorov's axiomatics.

Try to look at the whole thing soberly.

 
Олег avtomat:

I have not violated Kolmogorov's axiomatics anywhere in my statements, and I certainly have not denied it. But did you see that somewhere? Where? Give me a link.

You're confusing apples and oranges.

What are we talking about? We're talking about an event that's the result of an algorithm:

In this algorithm there is a fixed condition: assign the event x the value 1 if the value of p is greater than the value of 1-p. Otherwise, the value -1 must be assigned to the event x.

When the algorithm works, this condition will always be satisfied.


But you declare that sometimes this event may come out one way or the other:


Your statement doesn't fit at all. It contradicts Kolmogorov's axiomatics.

Try to look at all this soberly.

In the initial definition (the picture on the first page of your thread, taken from the wiki), the pi are probabilities. In your algorithm they are not probabilities.

 
Aleksey Nikolayev:

In the initial definition (the picture on the first page of your thread, taken from the wiki), the pi are probabilities. In your algorithm they are not probabilities.

My algorithm is fully consistent with the initial definition.

In my algorithm, the probabilities pi are given by a random number generator on the interval (0, 1) with a uniform distribution, namely the function rnd(1).

At each step, the probability pi is given by a fresh value of the function rnd(1).

The function rnd(1) is recalculated at each step. Don't you know that?

 
Олег avtomat:

My algorithm is fully consistent with the initial definition.

In my algorithm, the probability pi is given by a random number generator on the interval (0, 1) with a uniform distribution. This is the function rnd(1).

At each step the probability pi is given by a fresh value of the function rnd(1). The function rnd(1) is recalculated at each step.

Don't you know it?

You are mistaken. In your algorithm, p is just a redundant variable. The condition on p, p>1-p, is equivalent to the condition p>1/2. Since p=rnd(1), the direction-selection condition can be rewritten as: if (rnd(1)>1/2) x[i]=1, doing without any p at all. Within the initial definition you only generate a special case, the one where all pi=1/2 - a "fair coin".

To meet the initial definition, your algorithm should take an array p[n] as input, and for each i=1,...,n the direction-selection condition would be: if (rnd(1)<p[i]) x[i]=1.
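Both points can be checked directly. A minimal sketch, again taking rnd(1) to be a uniform draw from (0, 1):

```python
import random

# Point 1: the condition p > 1 - p is algebraically p > 1/2,
# so with p = rnd(1) the variable p is redundant:
def step_original():
    p = random.random()
    return 1 if p > 1 - p else -1

def step_simplified():
    return 1 if random.random() > 0.5 else -1   # same distribution

# Point 2: to match the general definition, take an array p[n]
# of per-step probabilities as input:
def walk(p):
    """x[i] = 1 with probability p[i], otherwise -1."""
    return [1 if random.random() < pi else -1 for pi in p]

p = [0.5] * 1000     # all p[i] = 1/2 -- the "fair coin" special case
x = walk(p)
```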

 
Олег avtomat:

My algorithm is fully consistent with the initial definition.

In my algorithm, the probability pi is given by a random number generator on the interval (0, 1) with a uniform distribution. This is the function rnd(1).

At each step, the probability pi is given by a fresh value of the function rnd(1).

The function rnd(1) is recalculated at each step. Don't you know that?

To improve the quality, you first generate sequences (e.g. 1000 of them), then use the statistics of these sequences to select the more suitable ones. Then at each step you read sequentially from the already prepared sequence. In a conditionally fair game of chance the sequence is generated at the start, and then the player receives (successive) values from this sequence, i.e. any feedback from win/loss conditions or the player's actions is completely excluded.
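The scheme described here - generate first, then only read - might look like the following sketch. The pool size matches the post's example of 1000; the "near-zero final sum" selection statistic is purely illustrative, not something the post specifies:

```python
import random

def make_sequence(length):
    """A pre-generated sequence of fair +1/-1 outcomes."""
    return [1 if random.random() > 0.5 else -1 for _ in range(length)]

# 1) Generate a pool of sequences up front (e.g. 1000 of them).
pool = [make_sequence(500) for _ in range(1000)]

# 2) Select "more suitable" ones by some statistic -- here, as an
#    illustrative criterion only, those ending near zero.
selected = [s for s in pool if abs(sum(s)) <= 10]

# 3) During play, values are read sequentially from a prepared
#    sequence; nothing the player does can feed back into it.
seq = selected[0]
reader = iter(seq)
first_three = [next(reader) for _ in range(3)]
```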

 
Aleksey Nikolayev:

You are mistaken. In your algorithm, p is just a redundant variable. The condition on p, p>1-p, is equivalent to the condition p>1/2. Since p=rnd(1), the direction-selection condition can be rewritten as: if (rnd(1)>1/2) x[i]=1, doing without any p at all. Within the initial definition you only generate a special case, the one where all pi=1/2 - a "fair coin".

To fit the initial definition, your algorithm should take an array p[n] as input, and for each i=1,...,n the direction-selection condition would look like this: if (rnd(1)<p[i]) x[i]=1.

1) You are mistaken. The algorithm can be modified, simplified and optimized. Believe me, I can restructure it in many different ways. But that doesn't change the essence of the matter. The result is a random walk process.

2) This array would have to be filled with the same rnd(1). And nothing would change in principle. See point 1.

You are arguing for the sake of arguing. That's how it seems to me, for some reason... IMHO, so to speak...

Just build your own version of a random walk - it will take you five minutes and you won't have to invent anything. Though, judging by your statements, I think you have never modelled a random walk.
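A "five-minute" random-walk model of the kind being asked for could look like this minimal sketch (it is not anyone's actual code from the thread; the function name and parameters are illustrative):

```python
import random

def random_walk(n, p_up=0.5, start=0.0, step=1.0):
    """Cumulative path of n steps, each +step with probability p_up,
    otherwise -step."""
    path = [start]
    for _ in range(n):
        move = step if random.random() < p_up else -step
        path.append(path[-1] + move)
    return path

path = random_walk(1000)
print(len(path), path[0], path[-1])
```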
 
Unicornis:

To improve quality, sequences are generated at the start (for example, 1000 of them), then by the statistics of these sequences the more suitable ones are chosen, and after that each step reads sequentially from the already prepared sequence. In a conditionally fair game of chance the sequence is generated first, and then the player receives (successive) values from this sequence, i.e. any feedback from win/loss conditions or the player's actions is completely excluded.

So now gambling has been dragged into it...

Just build your own version of a random walk - it takes five minutes.
 
Олег avtomat:

2) This array would have to be filled with the same rnd(1). And nothing would change in principle. See point 1.

Not necessarily randomly - there are a huge number of possible variants with very different results. For example, at the beginning of the array the probability is less than 1/2, and at the end it is greater (averaging about 1/2 over the array). You get the pattern of a downtrend changing into an uptrend.
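This example - probabilities below 1/2 at the start of the array, above 1/2 at the end, averaging about 1/2 - can be sketched as follows. The specific values 0.4 and 0.6 are illustrative, not from the post:

```python
import random

n = 1000
# First half: p[i] = 0.4 (down-biased); second half: p[i] = 0.6
# (up-biased). The average over the array is 0.5.
p = [0.4] * (n // 2) + [0.6] * (n // 2)

# x[i] = 1 with probability p[i], otherwise -1.
x = [1 if random.random() < pi else -1 for pi in p]

# Cumulative path: tends to drift down, then back up.
path = [0]
for move in x:
    path.append(path[-1] + move)

mid, last = path[n // 2], path[-1]
```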

 
Aleksey Nikolayev:

Not necessarily randomly - there are a huge number of possible variants with very different results. For example, at the beginning of the array the probability is less than 1/2, and at the end it is greater (averaging about 1/2 over the array). You get the pattern of a downtrend changing into an uptrend.

I see you've already started trolling...

Out of this "huge number of possible variants with very different results" you have to settle on some one variant. I settled on the variant I demonstrated.

You can choose your variant.

Just build your own variant of a random walk - it takes five minutes and you won't have to invent anything. Though, judging by your statements, I think you have never modelled a random walk.

Frankly, I am sick and tired of this empty rumination.

 

Just to keep the conversation going - for those who confuse a random walk with white noise, or expectation with probability.