"Let X be a Bernoulli random variable. That is, P(X = 1) = p and P(X = 0) = 1 − p. Then E(X) = 1 × p + 0 × (1 − p) = p. Why does this definition make sense? By the law of large numbers, in n independent Bernoulli trials where n is very large, the fraction of 1’s is very close to p, and the fraction of 0’s is very close to 1 − p. So, the average of the outcomes of n independent Bernoulli trials is very close to 1 × p + 0 × (1 − p)."

I don't understand why it gives the average as 1 × p + 0 × (1 − p).

So, we are given a total of n independent trials. Then, let's say we have k successes and n − k failures.

Then 1 × p × k would be our success contribution, and 0 × (1 − p) × (n − k) the failure contribution. If we find the average over the n trials, it must be pk/n.

How do we get 1 × p + 0 × (1 − p) as our average?
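As a sanity check on the law-of-large-numbers claim in the quoted passage, here is a minimal simulation sketch (the function name and the choice p = 0.3 are my own, not from the text): the sample average of n Bernoulli(p) trials is (1 × k + 0 × (n − k))/n = k/n, and for large n this should land close to p = E(X).

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

def bernoulli_sample_mean(p, n):
    """Average outcome of n independent Bernoulli(p) trials.

    Each trial is 1 with probability p and 0 otherwise, so the
    average is (number of 1's) / n, i.e. k/n.
    """
    return sum(1 if random.random() < p else 0 for _ in range(n)) / n

# With p = 0.3 and large n, the sample mean should be close to 0.3.
print(bernoulli_sample_mean(0.3, 100_000))
```

The point of the sketch: the average contains no extra factor of p, because each success already contributes exactly 1; it is the fraction k/n itself that converges to p.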

**Physics Forums | Science Articles, Homework Help, Discussion**


# Expected value of bernoulli random variable.
