What is it? - page 19

 
Avals wrote >>

You have calculated probabilities for event 1200/800 i.e. P(A1 && A2)

But you were talking about event A2|A1 (conditional probability of event A2 if event A1 has already occurred)

Where did I talk about conditional probabilities?

I'm not trying to needle anyone. I just think that if I'm misunderstood, I'm partly to blame for that too.

 

Thank you.

Székely, in his Paradoxes in Probability Theory and Mathematical Statistics, describes the de Moivre paradox. Avatara must have been hinting at it for you...

And arguing with Candid is pointless, he didn't get the hints and went straight to flaming.

I have measured myself.

 
lasso >>: The quote given is not a definition of the MO. The actual definition of the mathematical expectation is just below.

I think this definition will do for our discussion: the MO is the average of all possible realizations of a random variable.

I am too lazy to find references to books where it is written that integration means averaging (in the general case, up to a normalizing factor), but many people on this forum can confirm that. The same people will tell you that for discrete quantities integration is replaced by summation.

The MO is the expected value. In other words, it is what we expect, what value we expect a random variable to take, given its ideal behaviour (its distribution).

You did not calculate the Mat.Expectation, but some mixture of the Mat.Happened (600) + the MO of a second series of 1000 events (500).

You missed an important point: when talking about the MO you should always specify which quantity you are talking about. Well, in your problem we are talking about exactly what I wrote: the MO of the number of red falls in a series of 2000 spins, given that after the first thousand there are 600 of them. You are trying to replace it with simply the MO of the number of red falls in a series of 2000 spins. These are different quantities, by Bayes :)

What else can I tell you? With MO=1100, the variants A1 && B2 and A1 && A2 sit symmetrically around the MO, which removes the question of why their probabilities are equal. That's it, I'm tired; if that's not enough for you, I'll have to exclude you from my reference group :) .
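For the curious, the symmetry claim can be checked directly. Assuming A2 and B2 mean "600 reds" and "400 reds" in the second thousand (my reading of the notation), and the second series is Binomial(1000, 0.5) and independent of the first, the two joint probabilities share the factor P(A1) and differ only in a binomial coefficient:

```python
from math import comb

# Second series of 1000 spins on a fair wheel: X2 ~ Binomial(1000, 0.5).
# Assumed reading: A2 = "600 reds in the second thousand" (total 1200),
#                  B2 = "400 reds in the second thousand" (total 1000).
p_600 = comb(1000, 600) * 0.5**1000   # P(X2 = 600)
p_400 = comb(1000, 400) * 0.5**1000   # P(X2 = 400)

# The totals 1000 and 1200 sit symmetrically around the conditional MO 1100,
# and the probabilities coincide because C(1000, 600) == C(1000, 400).
print(p_600 == p_400)  # True
```

Since both joint events contain the same first series, multiplying each by P(A1) changes nothing about the comparison.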


P.S. I forgot to say, there is another useful trick for understanding - thoughtfully reread everything again.

 
lasso wrote >>
Colleagues, calm down. Let's settle this. Only, please, let us defend our positions with arguments and calculations, without dragging in "Michurinists" and "young naturalists".

The quote above is not the definition of the ME. The actual definition of the mathematical expectation is just below.

The ME is the expected value. In other words, it is what we expect, what value we expect a random variable to take, given its ideal behaviour (its distribution).

And it does not depend on the results of specific (local) series of events.

The MO is either assumed: a) from the physical properties of the object, e.g. for a fair die p=1/6, MO=n*p;

or it is determined: b) empirically. For example, we run 50 series of 1000 trials each, and from the values obtained in each series we find the average.

That is not the case. What you call the MO is a probability; the MO of a discrete distribution equals the sum of the possible values multiplied by their probabilities. If the probability of heads/tails = 0.5/0.5, and heads = +1, tails = -1, then MO = 1*0.5 - 1*0.5 = 0.

But if we do not have the probabilities (and in practice we never have them), we must estimate P(heads) = number of heads / total number of tosses. That is, the estimated probability is equal to the frequency of the event.

MO = (1*number of heads - 1*number of tails)/number of tosses. This is for a random variable with two values.

For more values the formula becomes: MO = (x1*N1 + x2*N2 + ... + xi*Ni)/N, where x1...xi are the values of the random variable, N1...Ni the number of times each of them occurred, and N = N1 + ... + Ni the total number of trials.
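That weighted-average formula can be checked with a quick sketch (the data here are hypothetical, just to exercise the formula):

```python
# Empirical MO estimate: MO = (x1*N1 + ... + xi*Ni) / N, N = N1 + ... + Ni.
# Hypothetical data: a coin paying heads = +1 (seen 550 times),
# tails = -1 (seen 450 times).
values = [+1, -1]        # x1 ... xi
counts = [550, 450]      # N1 ... Ni
n_total = sum(counts)    # N = 1000

mo_estimate = sum(x * n for x, n in zip(values, counts)) / n_total
print(mo_estimate)  # (550 - 450) / 1000 = 0.1
```

For a fair coin the estimate would drift toward the true MO of 0 as N grows.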

Why does the probability estimate return to 0.5/0.5 after a 600/400 outcome? It is not because the series remembers something and compensates; it is the law of large numbers. The deviation gets diluted because, as N increases, it grows more slowly than N itself. If the first 1000 trials gave 600/400, the probability estimate is 0.6/0.4. If we run 1000 more trials and get, say, 500/500, the estimate becomes 0.55/0.45. Roughly speaking, the deviation washes out as the number of trials increases. The probability estimate (the event frequency) converges to the probability only in the limit of infinity (and, by the way, the more trials, the smaller the chance that it will be exactly equal).
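Avals' dilution argument can be sketched deterministically: fix the first thousand at 600 reds and let every later trial come up red at exactly the ideal rate 0.5; the frequency estimate then equals 0.5 + 100/N and drifts back to 0.5. A minimal sketch (the function name is made up):

```python
# Dilution of a 600/400 start under the law of large numbers.
# Simplifying assumption: trials after the first 1000 hit red at exactly
# the ideal rate 0.5, so only the initial deviation of +100 remains.
def red_frequency(n_trials, first_thousand_reds=600):
    reds = first_thousand_reds + 0.5 * (n_trials - 1000)
    return reds / n_trials   # = 0.5 + 100 / n_trials

for n in (1000, 2000, 10000, 1000000):
    print(n, red_frequency(n))
# 1000 -> 0.6, 2000 -> 0.55, 10000 -> 0.51, 1000000 -> 0.5001
```

The absolute deviation (100 reds) never shrinks; it is simply swamped by the growing denominator.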

lasso wrote >>

Where did I say about conditional probabilities???

I'm not trying to needle anyone. I just think that if I'm misunderstood, I'm partly to blame.

So if you didn't mean that, your task is formulated simply: run 2000 trials and get 1200 red, 800 black. Without the hassle of splitting them into series of 1000 and taking intermediate results.

 
Candid wrote >>

(1) You see, when you try to assess your opponent's level, you're assessing either his level or your ceiling.

(2) And don't confuse one with the other.

(1) This is true.

(2) And that is not feasible. Although, maybe it's only my ceiling... ;) Will you share the technique? Unless it's self-deception.

 
Candid wrote >> You see, if you say you're only interested in the variants where there were 600 after the first 1000, you make the variants that do not pass through that point impossible. The MO changes accordingly. As for exactly where it ends up, I don't remember, it was long ago :)

Candid wrote >> I'm too lazy to find references to books where it is written that integration means averaging (in the general case, up to a constant), but many people on this forum can confirm it to you. The same people will tell you that for discrete quantities integration is replaced by summation.

Please be so kind as to provide the sources of such interesting information after all. Where do they hand out such knowledge?

Forum members! Please do not keep silent, make your arguments. What is wrong in my first post on this page?

Candid wrote >> You missed an important point: when talking about the ME you should always state which quantity you are talking about.

I'm tired of writing about it, but I'll say it again: ........ I am relying on centuries and millennia of roulette observations, and on the assumption that the roulette table and wheel are perfectly manufactured and balanced. There are no zeros on my roulette wheel (so that we don't get even more lost). 36 pockets. 18 red. 18 black. That is 0.5 to 0.5.

Candid wrote >>

So, your problem is about exactly what I wrote: the MO of the number of red falls in a series of 2000 spins, given that there are 600 after the first thousand. You are trying to replace it with simply the MO of the number of red falls in a series of 2000 spins. These are different quantities, by Bayes :)

Well, there are no such conditions in the definition of the MO (... given that there are 600 after the first thousand...). There are NOT!!! Otherwise, a link to the source is mandatory!

Candid wrote >>

That's it, I'm tired, if that's not enough for you, I'll have to exclude you from my reference group :) .

No. No. Don't even think about it... In the middle of a round you can only go down yourself, if you're really tired... )) And you can't quit. No one would understand.

If you've been boxing, of course. ))

 

Avals, thank you. Our views are almost identical. I was about to put you in the "enemy" camp )))) But still....

Avals wrote >>

If the first time it was 600/400, then the probability estimate is 0.6/0.4. If another 1000 trials are run and you get, say, 500/500, then the probability estimate is already 0.55/0.45.

Once again: we do NOT estimate the probability of Red coming up at roulette from some particular discrete series of events; that was already done BEFORE US by our predecessors (Laplace, Bernoulli, Bayes), by our history, the history of Red/Black outcomes. That's it!!! p=q=0.5, or, if you like, #define p 0.5. THAT IS THE POINT.

Avals wrote >>

So if you didn't mean it, then your problem is formulated simply: you ran 2000 trials - 1200 red, 800 black. Without all the trouble of breaking it down into series of 1000 and getting intermediate results.

No, it's not. I'm stumped. How do I get my point across? Please read the original problem again https://www.mql5.com/ru/forum/122871/page14#254008

and its interpretation for roulette https://www.mql5.com/ru/forum/122871/page16#255508

 
lasso >>:

The actual definition of the mathematical expectation is just below.

The MO is the expected value. In other words, it is what we expect, what value we expect a random variable to take, given its ideal behaviour (its distribution).

This is an interpretation in everyday terms, not a definition. You know the definition: it is the average over ideal realisations; there is nothing in it about expectations or the future. The prediction of a random process at some point in the future is defined in the same way: it is the m.o. and nothing else.

One can reason at length about the nature and meaning of probability, but it still contains something that frequency lacks: probability implicitly carries a model of the phenomenon's behaviour, which we assume applies to it in the past, the present and the future. Frequency, on the other hand, has only the past.

Well, there are no conditions in the definition of ME (... assuming there are 600 after the first thousand...)

All right, so be it. But what now, must we give up accounting for the certain event that you so stubbornly refuse to see? We have a certain event: the first series of trials brought us 600 hits on Red. We have to calculate what to expect on average from the full experiment (2000 trials), but on the assumption that the first thousand trials have already produced 600 Reds.

That is not hard at all. We know that the expectation of the number of Reds in the second series of 1000 trials is exactly 500. Our process is a Bernoulli process, so we know that the past does not affect this expectation: it is 500 regardless. Now, knowing that there were already 600 in the first series, we add another 500.

Whatever you call it, expectation, prediction or any other name, 600+500=1100 will be at the centre of what you get as the result of series of 2000 trials.
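The 600 + 500 arithmetic can be checked against the definition of the MO directly. A sketch assuming a fair wheel and independent spins:

```python
from math import comb

# MO of the number of reds in the second 1000 spins, from the definition:
# MO = sum over k of k * P(X = k), where X ~ Binomial(1000, 0.5).
mo_second_series = sum(k * comb(1000, k) * 0.5**1000 for k in range(1001))

# Given the certain event "600 reds in the first 1000" and independent spins,
# the conditional MO of the total over 2000 spins is simply 600 + 500.
mo_total_given_600 = 600 + mo_second_series
print(round(mo_second_series), round(mo_total_given_600))  # 500 1100
```

Without the conditioning, the same computation over all 2000 spins gives the unconditional MO of 1000, which is exactly the quantity the two posters keep confusing.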

 
lasso >>:

Please be so kind as to provide the sources of such interesting information after all. Where do they hand out such knowledge?

Well, at a decent university, I suppose. Maybe you really should go and study?

Well, there are no conditions in the definition of MO (... provided that after the first thousand there will be 600...)

Once again, and now definitely for the last time: it cannot be in the definition of the MO; it is in the definition of the quantity whose MO you want to know. And you gave that definition yourself; nobody forced you to.


Since you have got me writing this post anyway, I will suggest one more approach.

So, take your fair roulette wheel and spin it (remembering to throw the ball) many, many times. Divide ALL the results into series of 2000 spins. Calculate the average over the series and, if you've worked diligently, you'll get a result close to 1000. This is the estimate of the MO of the number of red falls in a series of 2000 spins. If you keep spinning to infinity, you will get a value infinitely close to 1000.

But don't relax! :) The next task is more complicated. You will have to estimate the number of red hits in series of 2000 with the condition that after the first thousand there are 600 of them. From all the series of 2000 spins you will have to keep only those with 600 red hits after the first thousand. And there are far fewer of them, so for a good estimate of the MO you will have to spin the roulette wheel not just many, many times, but many, many times more. That's your own fault. But when you finally accumulate a reasonably large number of such series, calculate the average and... I bet it will be much closer to 1100 than to 1000. I'm willing to let you spin the roulette wheel until you get 1000. Or until you agree with me.

You can practise on a simpler task first. Let it be not 2000, 1000 and 600, but 4, 2 and 2. That is, divide the results of the draws into series of 4 and select those in which there were 2 reds after the first two draws. You won't need a huge number of draws for a first decent estimate, so you can take a coin (if you don't have a roulette wheel) and start right away. Again, keep going either until the MO estimate is close to 2, or until you agree that the MO of that quantity is 3.

Agreed?

Should a series of 4 rolls tend towards your (or rather your) expectation after two red falls?
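Candid's simpler 4/2/2 exercise takes only a few lines to run. A Monte Carlo sketch (seed and sample size are arbitrary), keeping only the series with 2 reds after the first two draws:

```python
import random

random.seed(42)  # any seed; the average should land near 3 regardless

kept_totals = []
for _ in range(100_000):
    series = [random.randint(0, 1) for _ in range(4)]  # 1 = red, 0 = black
    if series[0] + series[1] == 2:        # keep "2 reds after 2 draws" only
        kept_totals.append(sum(series))   # total reds in the series of 4

average = sum(kept_totals) / len(kept_totals)
print(average)  # close to 3, not 2: MO = 2 (certain) + 2 * 0.5 (remaining)
```

Roughly a quarter of the series survive the filter, and their average total sits near 3, matching the conditional-MO argument rather than the unconditional MO of 2.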

 
Avals wrote >>

Are you wondering why the probability will revert to 0.5/0.5 when you hit 600/400?

That question doesn't bother me at all. What bothers me is that I cannot mathematically explain my roulette winnings (in money terms), although with the number of games played, such a negative expectation (the zero, 1/37) and such a starting capital (deposit), we should have gone bankrupt at least 6-7 times over. But that didn't happen.

.......

I'm plagued by the same thing as the topic starter. Only with a slight difference: he shows someone else's charts and asks, "What is this?"

I "show" my own charts (from roulette, admittedly, but that's not the point) and also ask, "What is it?". But, unlike the topic starter, I can explain at least something. Yet no one seems to be interested!

So why are we here, gentlemen?