# Random variable reflecting its probability

In summary: the preference for having equal numbers of 1's and 0's is reflected in the probabilities of the different series of outcomes.
entropy1
If we have a series of, say, twenty coin tosses, then each discernable specific series of outcomes has equal probability to occur. However, there is only one discernable specific series consisting of twenty 1's, while there are many more discernable series consisting of ten 1's and ten 0's.

So is that the reason why it is more probable for a 50/50 random variable to produce a series consisting of as many 1's as 0's than a series with more 1's than 0's (or vice versa), thus reflecting its probability in the series it produces?

Thanks.

Update: I realized that I did not take the probability value of the random variable into account, such as for example 70/30 or something like that.

You have one possibility for twenty 1's (or 0's), but for other combinations you have more than one possibility. This is governed by the binomial distribution:

$$\text{Pr}[\text{getting N heads}]={K\choose N}p_h^N(1-p_h)^{K-N}$$

Since heads and tails have the same probability, the binomial coefficient determines which number of heads has the maximum probability.
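The formula above can be checked numerically. A minimal sketch (the function name `pr_n_heads` is my own, not from the thread):

```python
from math import comb

def pr_n_heads(N, K, p_h):
    """Probability of getting exactly N heads in K tosses,
    where each toss lands heads with probability p_h."""
    return comb(K, N) * p_h**N * (1 - p_h)**(K - N)

# For a fair coin (p_h = 0.5) and K = 20 tosses:
probs = [pr_n_heads(N, 20, 0.5) for N in range(21)]

# The maximum sits at N = 10, driven entirely by the binomial coefficient,
# since p_h^N (1-p_h)^(K-N) = (1/2)^20 is the same for every N.
print(max(range(21), key=lambda N: probs[N]))  # 10
print(pr_n_heads(20, 20, 0.5))  # probability of the single all-heads series
```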

entropy1
There seems to be a preference for having a certain number of heads in a certain number of tosses. For example, if the probabilities are 50/50, the preference is for as many heads as tails, right? Which is not obvious, for you could have 20 heads in a row and it wouldn't infringe on the legitimacy of calling the probability 50/50!

What do you mean by a "preference"? It is more likely to get 10 heads and 10 tails than 20 heads or 20 tails. But this doesn't mean that this is what will happen.

Having 20 heads in a row is not an indication of probability. You need to toss the coin a large number of times to calculate the probability.

EngWiPy said:
You need to toss the coin a large number of times to calculate the probability.

It's better to say that by tossing a coin a large number of times, you can estimate a probability.

When you are given certain probabilities as facts then you can calculate other probabilities.

I think an issue here is that of distinguishing outcomes from events. Each outcome is equally probable but not so for each event. An event is defined as a collection of outcomes.

WWGD said:
I think an issue here is that of distinguishing outcomes from events. Each outcome is equally probable but not so for each event. An event is defined as a collection of outcomes.

A better way to put it, I think, is to use some terminology from Feller (volume 1 with discrete sample space).

Feller volume 1 said:
the results of experiments or observations will be called events
(note: I would suggest we could insert the word 'outcome' instead of 'result' there if we were so inclined)

Feller volume 1 said:
We shall distinguish between compound (or decomposable) and simple (or indecomposable) events... in this way every compound event can be decomposed into simple events, that is to say, a compound event is an aggregate of certain simple events... the simple events will be called sample points or points for short. By definition, every indecomposable result of the (idealized) experiment is represented by one, and only one, sample point. The aggregate of all sample points will be called the sample space.

Every now and then the terminology in this seems old fashioned (e.g. he doesn't like the term CDF), but overall it's hard to go wrong with Feller.
- - - -

So maybe it is better to say an issue comes in distinguishing between "compound events" and "simple events". Each "simple event" is equally likely in OP's original problem, but when you look at "compound events" (read: sums/convolutions of Bernoulli trials) OP needs to take some care with the math.

- - - -
in general I quite like decompositions, so this kind of jargon has a certain charm to it for me.

entropy1 said:
If we have a series of, say, twenty coin tosses, then each discernable specific series of outcomes has equal probability to occur. However, there is only one discernable specific series consisting of twenty 1's, while there are many more discernable series consisting of ten 1's and ten 0's.

So is that the cause that it is more probable for a 50/50 probability random variable to score a series consisting of as much 1's as 0's, than of a series of more 1's than 0's or vice versa, thus reflecting its probability in the series it produces?
Yes, that is exactly right. And if you count up the number, N, of ways to get 10 1's and 10 0's, you will see that the probability of 10 1's and 10 0's is exactly N multiplied by the probability of getting 20 1's.
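That count N can be verified directly. A minimal check (variable names are my own):

```python
from math import comb

p_single_series = 0.5**20  # every specific 20-toss series is equally likely
N = comb(20, 10)           # number of distinct series with exactly ten 1's
p_ten_ones = N * p_single_series

print(N)                              # 184756 ways to get ten 1's and ten 0's
print(p_ten_ones / p_single_series)   # exactly N times the all-1's probability
```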

## 1. What is a random variable?

A random variable is a numerical value that is associated with a random experiment and can take on different values based on the outcome of the experiment. It is usually denoted by the letter X and can be discrete or continuous.

## 2. What does it mean for a random variable to reflect its probability?

A random variable reflects its probability when the likelihood of it taking on a certain value is directly related to the probability of that value occurring in the experiment. This is often represented graphically through a probability distribution.

## 3. How is a probability distribution calculated for a random variable?

A probability distribution for a random variable is calculated by assigning a probability to each possible value of the variable, with the probabilities over all values summing to 1. The distribution can be represented as a table, graph, or formula depending on the type of random variable.
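As a concrete sketch of such a table, here is the distribution of the number of heads in two fair coin tosses (using exact fractions to make the sum-to-1 check exact):

```python
from fractions import Fraction

# Distribution of X = number of heads in two fair coin tosses:
# {TT} -> 0, {HT, TH} -> 1, {HH} -> 2, each of the 4 outcomes equally likely.
dist = {0: Fraction(1, 4), 1: Fraction(1, 2), 2: Fraction(1, 4)}

assert sum(dist.values()) == 1  # probabilities over all values sum to 1

# The same table also gives the expected value of X.
expected = sum(x * p for x, p in dist.items())
print(expected)  # 1
```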

## 4. What is the difference between a discrete and continuous random variable?

A discrete random variable can only take on a finite or countably infinite number of values, while a continuous random variable can take on any value within a given range. For example, the number of heads in 10 coin flips is a discrete random variable, while the weight of a person is a continuous random variable.

## 5. How are random variables used in scientific research?

Random variables are used in scientific research to model and analyze data, make predictions, and test hypotheses. They can also help researchers understand the relationship between different variables and make inferences about populations based on samples.
