Variance of binomial distribution

  • #1
backtoschool93

Homework Statement


Random variable Y has a binomial distribution with n trials and success probability X, where n is a given constant and X is a random variable with uniform (0,1) distribution. What is Var[Y]?

Homework Equations


E[Y] = np
Var(Y) = np(1-p) for variance of a binomial distribution
Var(Y|X) = E(Y^2|X) − (E(Y|X))^2 for the conditional variance of Y given X
Var(Y) = E[Var(Y|X)] + Var[E(Y|X)] for the law of total variance

The Attempt at a Solution


Knowing that X is a uniform (0,1) random variable, we can calculate E(Y). From there, we should be able to calculate Var(Y) using the relevant equations, I think. Using the equation for the variance of a binomial distribution and simply plugging in the value of p obtained from the uniform (0,1) distribution of X seems too easy and doesn't appear to be correct. My inclination is to use the law of total variance to solve for Var(Y), but that requires calculating Var(Y|X), and hence E(Y^2|X). This is where I get stuck: how do I calculate E(Y^2|X) given what I know about Y and X? Using the law of total variance, I also struggle to see how the equation for the variance of a binomial distribution comes into play. Am I on the right track? Any advice?
 
  • #2
backtoschool93 said:

Homework Statement


Random variable Y has a binomial distribution with n trials and success probability X, where n is a given constant and X is a random variable with uniform (0,1) distribution. What is Var[Y]?

Homework Equations


E[Y] = np
Var(Y) = np(1-p) for variance of a binomial distribution
Var(Y|X) = E(Y^2|X) − (E(Y|X))^2 for the conditional variance of Y given X
Var(Y) = E[Var(Y|X)] + Var[E(Y|X)] for the law of total variance

The Attempt at a Solution


Knowing that X is a uniform (0,1) random variable, we can calculate E(Y). From there, we should be able to calculate Var(Y) using the relevant equations, I think. Using the equation for the variance of a binomial distribution and simply plugging in the value of p obtained from the uniform (0,1) distribution of X seems too easy and doesn't appear to be correct. My inclination is to use the law of total variance to solve for Var(Y), but that requires calculating Var(Y|X), and hence E(Y^2|X). This is where I get stuck: how do I calculate E(Y^2|X) given what I know about Y and X? Using the law of total variance, I also struggle to see how the equation for the variance of a binomial distribution comes into play. Am I on the right track? Any advice?

I could not quite understand why you are unsure about your proposed methods. You are claiming that to compute ##\text{Var}(Y|X)## you can just plug ##p=X## into the binomial variance formula. And, of course, ##E(Y^2|X) = \text{Var}(Y|X) + (E(Y|X))^2##. All of that is perfectly true; you can be 100% sure.

So why bring in the variance formula at all? Knowing the variance is useful because we usually do not commit to memory a formula for ##E(B^2)## for a binomial r.v. ##B##; rather, we remember the formula for ##\text{Var}(B)## and can get ##E(B^2)## from that.
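To sanity-check the plug-in ##p = X## route numerically, here is a minimal Monte Carlo sketch (not from the thread; Python, with ##n = 10##, the seed, and the sample size as arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 10, 1_000_000

# Draw X ~ Uniform(0,1), then Y | X ~ Binomial(n, X).
x = rng.uniform(0.0, 1.0, size=trials)
y = rng.binomial(n, x)

# Law of total variance: Var(Y) = E[Var(Y|X)] + Var(E(Y|X))
#                                = E[n X (1-X)] + Var(n X).
print(y.var())                                   # Var(Y), simulated
print((n * x * (1 - x)).mean() + (n * x).var())  # the two pieces, summed
```

The two printed numbers should agree up to simulation noise.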
 
  • #3
If it were me, I'd solve for ##E\Big[\mathbb I \big \vert X = p \Big]## first... That is the indicator random variable / Bernoulli trial that makes up your binomial. Each trial is independent (but see the linguistic note at the end), so the variances should add.

For interest, also note that Bernoullis are idempotent, so

##\mathbb I^2 = \mathbb I##

- - - -
Then carefully tie in the machinery from your(?) last problem here:
https://www.physicsforums.com/threads/expected-value-of-binomial-distribution.957052/
- - - -
Equivalently, your problem reduces to very carefully considering the variance of a Bernoulli trial.

Strictly speaking, I cannot tell from the way you've worded your problem whether you get independent experiments of ##X\big(\omega\big)##...
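A two-line numerical check of the idempotence point above (a sketch; ##p = 0.3## and the sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
i = rng.binomial(1, 0.3, size=100_000)  # Bernoulli(0.3) draws

# Squaring a 0/1 variable changes nothing, so E[I^2] = E[I] = p.
assert np.array_equal(i**2, i)
print(i.mean(), (i**2).mean())          # both estimate p = 0.3
```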
 
  • #4
StoneTemplePython said:
If it were me, I'd solve for ##E\Big[\mathbb I \big \vert X = p \Big]## first... That is the indicator random variable / Bernoulli trial that makes up your binomial. Each trial is independent (but see the linguistic note at the end), so the variances should add.

For interest, also note that Bernoullis are idempotent, so

##\mathbb I^2 = \mathbb I##

- - - -
Then carefully tie in the machinery from your(?) last problem here:
https://www.physicsforums.com/threads/expected-value-of-binomial-distribution.957052/
- - - -
Equivalently, your problem reduces to very carefully considering the variance of a Bernoulli trial.

Strictly speaking, I cannot tell from the way you've worded your problem whether you get independent experiments of ##X\big(\omega\big)##...

I'm not sure how useful indicators are in this problem: they are conditionally independent, yes, but not (unconditionally) independent. The distribution of ##Y## -- namely, ##P(Y=k) = \int_0^1 P(Y=k|X=p) f_X(p) \; dp = \int_0^1 C(n,k) p^k (1-p)^{n-k} \; dp## -- is not the same as ##P(I_1 + I_2 + \cdots + I_n = k)## for independent ##I_j## with the distribution ##P(I_j = 1) = \int_0^1 P(I_j=1|X=p) f_X(p) \; dp.## In other words, the assumption of independent ##I_j## gives the wrong sum-distribution.
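A quick simulation of this point (a sketch, not from the thread; the seed and sample size are arbitrary): two trials that share the same draw of ##X## are conditionally independent given ##X##, yet their unconditional covariance is ##E(X^2) - (E X)^2 = 1/3 - 1/4 = 1/12 \approx 0.083##, not zero.

```python
import numpy as np

rng = np.random.default_rng(2)
trials = 1_000_000

# One X per experiment; both indicator trials share that same X.
x = rng.uniform(0.0, 1.0, size=trials)
i1 = rng.binomial(1, x)
i2 = rng.binomial(1, x)

# Conditionally independent given X, but unconditionally correlated:
# Cov(I1, I2) = E[X^2] - (E[X])^2 = 1/3 - 1/4 = 1/12.
print(np.cov(i1, i2)[0, 1])  # ~ 0.083, not 0
```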
 
  • #5
Thank you! However, I am still not sure that I follow...

I am starting with Var(Y) = E[Var(Y|X)] + Var[E(Y|X)], and all I know is that E(Y|X) = nX (with X having a uniform (0,1) distribution).
Therefore, Var(Y) = E[E(Y^2|X) − (E(Y|X))^2] + Var[n/2]
= E[E(Y^2|X) − (n^2/4)] + Var(n/2)

Once I get here, I fail to see how to calculate E[Y^2|X], or how to find Var[n/2] if that is even what I am supposed to do. And you're saying that from the formula Var(Y) = np(1-p), I should be able to calculate E[Y^2|X]?

Alternatively, if I start with Var(Y|X) = np(1-p), then I get Var(Y|X) = n/4, which doesn't seem right either, because plugging that into Var(Y) = E[Var(Y|X)] + Var[E(Y|X)] means I would be trying to calculate E[n/4] + Var[n/2].
 
  • #6
backtoschool93 said:
Thank you! However, I am still not sure that I follow...

I am starting with Var(Y) = E[Var(Y|X)] + Var[E(Y|X)], and all I know is that E(Y|X) = nX (with X having a uniform (0,1) distribution).
Therefore, Var(Y) = E[E(Y^2|X) − (E(Y|X))^2] + Var[n/2]
= E[E(Y^2|X) − (n^2/4)] + Var(n/2)

Once I get here, I fail to see how to calculate E[Y^2|X], or how to find Var[n/2] if that is even what I am supposed to do. And you're saying that from the formula Var(Y) = np(1-p), I should be able to calculate E[Y^2|X]?

Alternatively, if I start with Var(Y|X) = np(1-p), then I get Var(Y|X) = n/4, which doesn't seem right either, because plugging that into Var(Y) = E[Var(Y|X)] + Var[E(Y|X)] means I would be trying to calculate E[n/4] + Var[n/2].

The parameter ##n## is just a number, not a random variable.

I am saying that if you know the formula for ##\text{Var}(B)## when ##B## has distribution ##\text{Binomial}(n,p)##, then you can substitute in ##p = X##. After all, nobody forces us to use the symbol ##p## to denote the success probability per trial; we can equally well call it ##X## or ##\Lambda## or anything else if we prefer. When we speak of ##Y|X##, the problem spells out for us that we are dealing with ##\text{Binomial}(n , X).##
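For concreteness, here is one way the computation then finishes, using the standard Uniform(0,1) moments ##E(X) = 1/2##, ##E(X^2) = 1/3##, and ##\text{Var}(X) = 1/12##:

##\text{Var}(Y) = E[\text{Var}(Y|X)] + \text{Var}[E(Y|X)] = E[nX(1-X)] + \text{Var}(nX)##
##= n\left(\tfrac{1}{2} - \tfrac{1}{3}\right) + n^2\cdot\tfrac{1}{12} = \tfrac{n}{6} + \tfrac{n^2}{12} = \tfrac{n(n+2)}{12}.##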
 
  • #7
Okay, that is helpful--I think I understand it enough to figure out the rest of the problem. Thank you very much!
 
  • #8
Ray Vickson said:
I'm not sure how useful indicators are in this problem: they are conditionally independent, yes, but not (unconditionally) independent. The distribution of ##Y## -- namely, ##P(Y=k) = \int_0^1 P(Y=k|X=p) f_X(p) \; dp = \int_0^1 C(n,k) p^k (1-p)^{n-k} \; dp## -- is not the same as ##P(I_1 + I_2 + \cdots + I_n = k)## for independent ##I_j## with the distribution ##P(I_j = 1) = \int_0^1 P(I_j=1|X=p) f_X(p) \; dp.## In other words, the assumption of independent ##I_j## gives the wrong sum-distribution.

The thing is, I'm not that interested in the distribution -- just pairwise comparisons. Keeping it simple, I'd just flag that the OP computed the associated mean in a prior post, so we just need ##E\Big[Y^2\Big]## to get the variance.

In general we can decompose a 'counting variable' as a sum of possibly dependent indicator random variables
##Y = \mathbb I_1 + \mathbb I_2 +... + \mathbb I_n = \sum_{k=1}^n \mathbb I_k##

so when we look to the second moment for this problem, we get
##E\Big[Y^2\Big]##
##= E\Big[E\big[Y^2 \,\big\vert\, X \big]\Big]##
##= E\Big[E\big[\big(\sum_{k=1}^n \mathbb I_k\big)^2 \,\big\vert\, X \big]\Big]##
##= E\Big[E\big[\sum_{k=1}^n \mathbb I_k^2 + \sum_{k=1}^n\sum_{j\neq k} \mathbb I_k \mathbb I_j \,\big\vert\, X \big]\Big]##
##= E\Big[E\big[\sum_{k=1}^n \mathbb I_k + \sum_{k=1}^n\sum_{j\neq k} \mathbb I_k \mathbb I_j \,\big\vert\, X \big]\Big]##
##= E\Big[\sum_{k=1}^n E\big[\mathbb I_k \,\big\vert\, X \big]\Big] + E\Big[\sum_{k=1}^n\sum_{j\neq k} E\big[\mathbb I_k \mathbb I_j \,\big\vert\, X \big]\Big]##
##= n\cdot E\Big[E\big[\mathbb I_k \,\big\vert\, X \big]\Big] + n(n-1)\cdot E\Big[E\big[\mathbb I_k \mathbb I_j \,\big\vert\, X \big]\Big]##

(where it is understood that ##j \neq k##)
- - - - -
The first term should look familiar. The second term takes a little bit of insight and thinking which I leave for the OP. Then some simplification is needed to get the final answer.

Since they are Bernoullis, I sometimes find it helpful to draw a tree with 4 leaves, e.g. for ##\mathbb I_k \mathbb I_j##, to sketch the probabilities and payoffs. Of course, what we actually need to do here is a slight refinement of this, since we would sketch this tree conditioned on ##X=x##.

The above technique is quite useful for computing the variance of much nastier distributions, by the way.
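As a numerical check of the decomposition above (a sketch, not from the thread; ##n = 10##, the seed, and the sample size are arbitrary): since ##E[\mathbb I_k \vert X] = X## and ##E[\mathbb I_k \mathbb I_j \vert X] = X^2## for ##j \neq k##, the last line reduces to ##E[Y^2] = n\,E(X) + n(n-1)\,E(X^2)##.

```python
import numpy as np

rng = np.random.default_rng(3)
n, trials = 10, 1_000_000

# Y | X ~ Binomial(n, X) with X ~ Uniform(0,1).
x = rng.uniform(0.0, 1.0, size=trials)
y = rng.binomial(n, x)

# E[Y^2] = n*E[X] + n*(n-1)*E[X^2] from the indicator decomposition.
print((y**2).mean())                               # simulated E[Y^2]
print(n * x.mean() + n * (n - 1) * (x**2).mean())  # indicator formula
```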
 
  • #9
Thank you all for the responses, they are very helpful!
 

What is the variance of a binomial distribution?

The variance of a binomial distribution measures how spread out the outcomes are around the mean. It tells us how much the individual data points differ from the average value; in simple terms, it is a measure of the variability of the data.

How is the variance of a binomial distribution calculated?

The variance of a binomial distribution is calculated by multiplying the number of trials (n) by the probability of success (p) and by the probability of failure (q), where q = 1 - p. The formula is Var(X) = n * p * q.
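For instance, a sketch in Python with arbitrary illustrative values:

```python
# Variance of a Binomial(n, p) distribution: Var = n * p * q, with q = 1 - p.
n, p = 20, 0.3
q = 1 - p
print(n * p * q)  # 4.2
```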

What does a high variance in a binomial distribution indicate?

A high variance in a binomial distribution indicates that the data is widely spread out from the mean: individual data points differ substantially from the average value, and there is a large amount of variability in the data. For a fixed number of trials n, the variance np(1 - p) is largest when p = 1/2.

What is the relationship between the mean and variance of a binomial distribution?

The mean and variance of a binomial distribution are linked through the same parameters: the mean is np and the variance is np(1 - p), so Var(X) = (1 - p) * E(X). For a fixed success probability p, both grow proportionally with the number of trials n.

Can the variance of a binomial distribution be negative?

No, the variance of a binomial distribution cannot be negative. Variance is an expected squared deviation from the mean, so it is always nonnegative. It equals zero only in the degenerate cases p = 0 or p = 1, where every trial gives the same outcome.
