Expected value of a Bernoulli random variable

Discussion Overview

The discussion revolves around the expected value of a Bernoulli random variable, specifically examining the calculation of its expected value and the implications of the law of large numbers in the context of independent trials. Participants explore the relationship between the number of successes and failures in a series of trials and how this affects the average outcome.

Discussion Character

  • Technical explanation
  • Conceptual clarification
  • Debate/contested

Main Points Raised

  • One participant states that for a Bernoulli random variable, the expected value is calculated as E(X) = 1 × p + 0 × (1 − p) = p, questioning why this definition is valid.
  • Another participant argues that the average of outcomes in n independent Bernoulli trials approaches p as n increases, based on the law of large numbers.
  • A participant expresses confusion about the calculation of the average, suggesting that the average should be pk/n, where k is the number of successes.
  • Buzz challenges the initial reasoning, asserting that the success fraction should be defined as the number of successes divided by the number of trials, k/n, which approaches p for large n.
  • Further clarification is sought regarding the independence of trials and the distribution of successes and failures, with an example of coin tossing introduced to illustrate the point.
  • Another participant acknowledges a misunderstanding regarding the calculation of the success fraction and recognizes that larger n minimizes deviation in the fraction.
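The expectation in the first bullet can be checked directly from the probability mass function; a minimal Python sketch (the value p = 0.3 is an arbitrary example):

```python
def bernoulli_expectation(p):
    # E(X) = sum over outcomes x of x * P(X = x)
    pmf = {1: p, 0: 1 - p}
    return sum(x * prob for x, prob in pmf.items())

print(bernoulli_expectation(0.3))  # 0.3
```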

Areas of Agreement / Disagreement

Participants exhibit disagreement regarding the interpretation of the average outcome in relation to the expected value of a Bernoulli random variable. While some clarify the definition of success fraction, others maintain differing views on how to approach the calculation.

Contextual Notes

The discussion highlights potential misunderstandings about the definitions and calculations involved in determining expected values and success fractions in Bernoulli trials. There are unresolved aspects regarding the interpretation of averages in small versus large sample sizes.

kidsasd987
"Let X be a Bernoulli random variable. That is, P(X = 1) = p and P(X = 0) = 1 − p. Then E(X) = 1 × p + 0 × (1 − p) = p. Why does this definition make sense? By the law of large numbers, in n independent Bernoulli trials where n is very large, the fraction of 1’s is very close to p, and the fraction of 0’s is very close to 1 − p. So, the average of the outcomes of n independent Bernoulli trials is very close to 1 × p + 0 × (1 − p)."
I don't understand why it gives the average of 1 × p + 0 × (1 − p).
So, we are given a total of n independent trials. Then, let's say we have k successes and n − k failures.

Then 1×p×k will be our success fraction, and (1 − p)(n − k)×0 will be the failure fraction. If we find the average over n trials, it must be pk/n.

How do we get 1 × p + 0 × (1 − p) as our average?
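The question above can be explored empirically. A short Python sketch (p = 0.3 and the trial counts are arbitrary choices) shows that the average of n Bernoulli outcomes is k/n, and that k/n settles near p as n grows — which is what the law of large numbers guarantees:

```python
import random

def bernoulli_average(p, n, seed=0):
    """Average of n independent Bernoulli(p) outcomes, i.e. k/n."""
    rng = random.Random(seed)
    successes = sum(1 for _ in range(n) if rng.random() < p)
    return successes / n

p = 0.3
for n in (100, 10_000, 1_000_000):
    # As n grows, the average drifts toward p = 0.3.
    print(n, bernoulli_average(p, n))
```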
 
kidsasd987 said:
Then 1×p×k will be our success fraction, and (1 − p)(n − k)×0 will be the failure fraction.
Hi kidsasd:

This is where you went astray. Your success fraction is the number of successes divided by the number of trials, that is, k/n, which for large n is close to pn/n = p.

Regards,
Buzz
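Buzz's correction can be sketched numerically: each success contributes 1 to the sum and each failure contributes 0, so the sum of the outcomes is just k and the average is k/n — no extra factor of p appears (a Python sketch; the values k = 30, n = 100 are arbitrary):

```python
def average_of_outcomes(k, n):
    # Sum of outcomes: k ones plus (n - k) zeros, so the sum is k.
    outcomes = [1] * k + [0] * (n - k)
    return sum(outcomes) / n

# With k = 30 successes in n = 100 trials, the average is k/n = 0.3.
print(average_of_outcomes(30, 100))  # 0.3
```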
 
Buzz Bloom said:
Hi kidsasd:

This is where you went astray. Your success fraction is the number of successes divided by the number of trials, that is, k/n, which for large n is close to pn/n = p.

Regards,
Buzz

Thanks. But I guess the n independent trials have to consist of k successes and n − k failures. Since each trial is independent, we cannot have successes only.

For example, if I toss a fair coin, I'd observe two possible outcomes: heads and tails. If I toss a coin n times, it would not give all heads or all tails. That's what I thought, and why I introduced k. Please correct me where I got this wrong.
 
kidsasd987 said:
Thanks. But I guess it (the n independent trials) has to be divided into k successes and n − k failures. Since each trial is independent, we cannot have successes only.
Hi kidsasd:

You said:
kidsasd987 said:
So, we are given a total of n independent trials. Then, let's say we have k successes and n − k failures.

Do you agree that the "success fraction" is the number of successes divided by the number of trials? If so, then what is the number of successes, and what is the number of trials?

Regards,
Buzz
 
Buzz Bloom said:
Hi kidsasd:

Do you agree that the "success fraction" is the number of successes divided by the number of trials? If so, then what is the number of successes, and what is the number of trials?

Regards,
Buzz
Isn't the success fraction 1×P(X=1)×k, where the number of trials is n and the number of successes is k?

avg = {1×P(X=1)×k + P(X=0)×(n − k)×0} / n

Oh, now I see where I got it wrong.

So if n is small, there is a greater possibility of the fraction deviating from p, but that deviation shrinks as we make n large.

Thanks!
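The realization in the post above — that k/n fluctuates more for small n — can be checked with a simulation (a Python sketch; p = 0.5 and the repeat count are arbitrary). The spread of k/n shrinks roughly like sqrt(p(1 − p)/n):

```python
import random
import statistics

def spread_of_fraction(p, n, repeats=2000, seed=1):
    """Standard deviation of the success fraction k/n over many experiments."""
    rng = random.Random(seed)
    fractions = []
    for _ in range(repeats):
        k = sum(1 for _ in range(n) if rng.random() < p)
        fractions.append(k / n)
    return statistics.stdev(fractions)

# The spread is much larger for n = 10 than for n = 1000.
print(spread_of_fraction(0.5, 10))
print(spread_of_fraction(0.5, 1000))
```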
 
Hi kidsasd:

Glad to have been of help.

Regards,
Buzz
 
