# Random variable reflecting its probability

If we have a series of, say, twenty coin tosses, then each discernible specific series of outcomes has equal probability of occurring. However, there is only one discernible series consisting of twenty 1's, while there are many more discernible series consisting of ten 1's and ten 0's.

So is that the reason why it is more probable for a 50/50 random variable to produce a series with as many 1's as 0's than a series with more 1's than 0's (or vice versa), so that the variable's probability is reflected in the series it produces?

Thanks.

Update: I realized that I did not take the probability value of the random variable into account, for example 70/30 or something like that.


You have one possibility for twenty 1s (or 0s), but for other combinations you have more than one possibility. This is governed by the binomial distribution as

$$\text{Pr}[\text{getting N heads}]={K\choose N}p_h^N(1-p_h)^{K-N}$$

Since heads and tails have the same probability, the binomial coefficient determines which combination has the maximum probability.
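To make this concrete, here is a small Python sketch (standard library only; the function name is just illustrative) that counts the sequences and evaluates the formula above:

```python
from math import comb

K = 20   # number of tosses
p = 0.5  # P(heads) for a fair coin

# The number of distinct sequences with N heads is the binomial coefficient.
assert comb(K, 20) == 1        # exactly one all-heads sequence
assert comb(K, 10) == 184756   # many sequences with ten heads, ten tails

def pr_n_heads(N, K=K, p=p):
    """Binomial probability of exactly N heads in K tosses."""
    return comb(K, N) * p**N * (1 - p)**(K - N)

# For a fair coin, N = 10 (half the tosses) maximizes the probability.
most_likely_N = max(range(K + 1), key=pr_n_heads)
```

Even though ten heads is the single most likely count, its probability is only about 0.176, so any particular count is still fairly unlikely.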

entropy1
There seems to be a preference for having a certain number of heads in a certain number of tosses. For example, if the probabilities are 50/50, the preference is for as many heads as tails, right? Which is not obvious, for you could have 20 heads in a row and it wouldn't infringe on the legitimacy of calling the probability 50/50!

What do you mean by a "preference"? It is more likely to get 10 heads and 10 tails than 20 heads or 20 tails, but this doesn't mean that this is what will happen.

Having 20 heads in a row is not an indication of probability. You need to toss the coin a large number of times to calculate the probability.

Stephen Tashi
You need to toss the coin a large number of times to calculate the probability.

It's better to say that by tossing a coin a large number of times, you can estimate a probability.

When you are given certain probabilities as facts then you can calculate other probabilities.
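As a rough sketch of that estimation idea (the function name and parameters here are just illustrative), you can simulate tosses and take the fraction of heads as the estimate:

```python
import random

def estimate_p_heads(n_tosses, p_heads=0.5, seed=0):
    """Estimate P(heads) as the fraction of heads in n_tosses simulated flips."""
    rng = random.Random(seed)
    heads = sum(rng.random() < p_heads for _ in range(n_tosses))
    return heads / n_tosses
```

With only 20 tosses the estimate can be far from the true value (even 20 heads in a row is possible); with very many tosses it will, by the law of large numbers, almost surely be close to it.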

WWGD
I think an issue here is that of distinguishing outcomes from events. Each outcome is equally probable but not so for each event. An event is defined as a collection of outcomes.

StoneTemplePython
WWGD said:
I think an issue here is that of distinguishing outcomes from events. Each outcome is equally probable but not so for each event. An event is defined as a collection of outcomes.

A better way to put it, I think, is to use some terminology from Feller (volume 1 with discrete sample space).

Feller volume 1 said:
the results of experiments or observations will be called events
(note: I would suggest we could insert the word outcome instead of 'result' there if we were so inclined)

Feller volume 1 said:
We shall distinguish between compound (or decomposable) and simple (or indecomposable) events... in this way every compound event can be decomposed into simple events, that is to say, a compound event is an aggregate of certain simple events... the simple events will be called sample points or points for short. By definition, every indecomposable result of the (idealized) experiment is represented by one, and only one, sample point. The aggregate of all sample points will be called the sample space.

Every now and then the terminology in this seems old-fashioned (e.g. he doesn't like the term CDF), but overall it's hard to go wrong with Feller.
- - - -

So maybe it is better to say an issue comes in distinguishing between "compound events" and "simple events". Each "simple event" is equally likely in the OP's original problem, but when you look at "compound events" (read: sums/convolutions of Bernoulli trials) the OP needs to take some care with the math.
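A tiny Python illustration of that distinction, using a 3-toss sample space small enough to enumerate (the variable names are just for illustration):

```python
from itertools import product

# Sample space for 3 fair tosses: every simple event (sample point) is one
# specific sequence, and all 2**3 = 8 of them are equally likely.
sample_space = list(product("HT", repeat=3))

# A compound event is an aggregate of sample points, e.g. "exactly two heads".
exactly_two_heads = [s for s in sample_space if s.count("H") == 2]

# Its probability is the sum over its sample points: 3 * (1/8) = 3/8.
prob_two_heads = len(exactly_two_heads) / len(sample_space)
```

The sample points are equally likely, but the compound events "exactly two heads" (3 points) and "three heads" (1 point) are not.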

- - - -
In general I quite like decompositions, so this kind of jargon has a certain charm to it for me.
