Sample distribution and expected value.

Summary:
In a discussion about sample distribution and expected value, participants explore why the expected values of individual samples (Xi) equal the population mean (µ). It is clarified that each sample drawn from a population with a defined probability distribution maintains the same mean and variance as the population. The expectation value of a random variable Xi is derived from the identical expressions for the population mean. Questions arise regarding the nature of samples and how individual sample means relate to the overall population mean, with explanations emphasizing that Xi represents random variables rather than specific samples. The conversation concludes with a focus on understanding the concept of expectation in relation to random variables.
kidsasd987
Consider a scenario where samples are randomly selected with replacement. Suppose that the population has a probability distribution with mean ##\mu## and variance ##\sigma^2##. Each sample ##X_i##, ##i = 1, 2, \ldots, n##, will then have the same probability distribution, with mean ##\mu## and variance ##\sigma^2##. Now, let us calculate the mean of ##\bar{X}##:
##E(\bar{X}) = \frac{1}{n}\bigl(E(X_1) + E(X_2) + \cdots + E(X_n)\bigr) = \frac{1}{n}(\mu + \mu + \cdots + \mu) = \mu##
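(An illustrative aside, not part of the original post.) The linearity computation above can be checked numerically. Below is a minimal Python sketch, assuming a hypothetical uniform population on {1, ..., 10}; the sample size ##n = 4## and the number of trials are arbitrary choices for the demonstration.

```python
import random
from statistics import mean

random.seed(0)
population = list(range(1, 11))   # hypothetical population, mu = 5.5
mu = mean(population)

n = 4             # size of each sample
trials = 100_000  # number of repeated samples

# Each X_i is one draw (with replacement) from the population, so a sample
# of size n is n independent draws; X_bar is their average.
xbars = [mean(random.choices(population, k=n)) for _ in range(trials)]

# The long-run average of X_bar approximates E(X_bar) = mu.
print(f"mu = {mu}, average of sample means = {mean(xbars):.3f}")
```

With many trials, the average of the sample means settles close to ##\mu = 5.5##, as the derivation predicts.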

*Each ##X_i## is an independent random variable.

Hello. I wonder why the expected values of the ##X_i## are the same as the population average ##\mu##.
 
Hi,

Not sure what you mean by the probability distribution of a single sample. What's that?
 
BvU said:
Hi,

Not sure what you mean by the probability distribution of a single sample. What's that?

I guess it means that the random variable has a defined probability ##P(X = x)## for each value, like a Bernoulli random variable.


Please refer to the link above.
 
It's probably more like a short form of saying that the set of all possible individual ##x_i## has the same probability distribution as ... (because it's the same population).

kidsasd987 said:
why the expected values of Xi are the same as population average µ
Well, that is because the expression in the definition of ##\mu## and the expression for the expectation value are identical.
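(An illustrative aside, not from the original reply.) To spell out that identity: for a finite population ##\{a_1, \ldots, a_N\}## sampled uniformly with replacement, so that ##P(X_i = a_k) = 1/N## for every ##k##,

##\mu = \frac{1}{N}\sum_{k=1}^{N} a_k##

##E(X_i) = \sum_{k=1}^{N} a_k\,P(X_i = a_k) = \sum_{k=1}^{N} a_k \cdot \frac{1}{N} = \mu##

Both lines are the same sum, which is exactly why ##E(X_i) = \mu##.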
 
BvU said:
It's probably more like a short form of saying that the set of all possible individual ##x_i## has the same probability distribution as ... (because it's the same population).

Well, that is because the expression in the definition of ##\mu## and the expression for the expectation value are identical.

I am sorry. Maybe I am too dumb to understand at once. Can you help me to figure out the questions below?
(*They are not homework questions, but I wrote them in statement form because it'd be easier to answer.)

1. The ##X_i## are samples of size ##n##.
Does that mean ##X_1## can have ##n## data points within it? For example, let's say our population is the data set {1,2,3,4,5,6,7,8,9,10}, and ##X_1## has a size of 2; then {1,2}, {1,4}, and so on can be the sample ##X_1##.

2. (If 1 is correct) I understand why ##E(X) = \mu##, but how do ##E(X_1), E(X_2), \ldots## equal ##\mu##?
##E(X) = \sum_i P(X = x_i)\,x_i##
##E(X_1) = \sum_j P(X_1 = x_j)\,x_j##, but won't the sum be significantly smaller than ##E(X)##?

Thanks.
 
kidsasd987 said:
1. The ##X_i## are samples of size ##n##.
Does that mean ##X_1## can have ##n## data points within it? For example, let's say our population is the data set {1,2,3,4,5,6,7,8,9,10}, and ##X_1## has a size of 2; then {1,2}, {1,4}, and so on can be the sample ##X_1##.

2. (If 1 is correct) I understand why ##E(X) = \mu##, but how do ##E(X_1), E(X_2), \ldots## equal ##\mu##?
##E(X) = \sum_i P(X = x_i)\,x_i##
##E(X_1) = \sum_j P(X_1 = x_j)\,x_j##, but won't the sum be significantly smaller than ##E(X)##?
Thanks.

##X_i## is not a sample. It is a random variable. We find the expectation value of that random variable, defined as
##E(X_i) = \sum_i x_i\,P(x_i) = \mu##
Hope this helps!
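(An illustrative aside, not part of the original reply.) The point that each single draw ##X_i## carries the full population distribution can be checked exactly for a small population; the data set {1, ..., 10} below is borrowed from the example in the question above.

```python
from fractions import Fraction

# Hypothetical finite population; a single draw X_i is equally likely to be
# any element, so P(X_i = x) = 1/N for every x -- not a probability over
# some smaller sub-sample.
population = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
N = len(population)

# E(X_i) = sum over all values x of x * P(X_i = x), which is the very same
# expression that defines the population mean mu.
e_xi = sum(Fraction(x, N) for x in population)
mu = Fraction(sum(population), N)

print(e_xi, mu)  # both are 11/2
```

The sum in ##E(X_i)## runs over the whole population, not over one drawn sample, so it is not "significantly smaller" than ##E(X)##; the two are identical.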
 