"Let X be a Bernoulli random variable. That is, P(X = 1) = p and P(X = 0) = 1 − p. Then E(X) = 1 × p + 0 × (1 − p) = p. Why does this definition make sense? By the law of large numbers, in n independent Bernoulli trials where n is very large, the fraction of 1’s is very close to p, and the fraction of 0’s is very close to 1 − p. So, the average of the outcomes of n independent Bernoulli trials is very close to 1 × p + 0 × (1 − p)."

I don't understand why it gives the average of 1 × p + 0 × (1 − p).

So, we are given n independent trials in total. Then, let's say we have k successes and n − k failures.

Then, 1 · p · k will be our success fraction, and (1 − p)(n − k) · 0 will be the failure fraction. If we find the average over the n trials, it must be pk/n.

How do we have 1 × p + 0 × (1 − p) as our average?
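A quick simulation may make the quoted law-of-large-numbers argument concrete. The sketch below (with an illustrative choice of p = 0.3 and n = 100,000, neither of which appears in the original post) computes the sample average of n Bernoulli trials. Note that the fraction of 1's is k/n, and the weighted sum 1 · (k/n) + 0 · ((n − k)/n) is just k/n itself, which the law of large numbers says is close to 1 · p + 0 · (1 − p) = p:

```python
import random

random.seed(0)
p = 0.3          # success probability (illustrative choice, not from the post)
n = 100_000      # number of independent Bernoulli trials

# Simulate n Bernoulli(p) trials: each outcome is 1 with probability p, else 0.
outcomes = [1 if random.random() < p else 0 for _ in range(n)]

k = sum(outcomes)        # number of 1's (successes)
average = k / n          # sample mean = fraction of 1's

# By the law of large numbers, k/n is close to p for large n, so the
# sample average is close to 1*p + 0*(1-p) = p.
print(f"sample mean = {average:.4f}, theoretical E(X) = {p}")
```

The point of the sketch is that the average of the raw outcomes is k/n (not pk/n): each success contributes 1, not 1 · p, and the probability p only enters through the fact that k/n itself concentrates near p.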

**Physics Forums - The Fusion of Science and Community**


# Expected value of a Bernoulli random variable


