Mean of a sum of random variables

In summary: the identity ##\langle Y \rangle = \langle X_1 \rangle + \ldots + \langle X_N \rangle## (linearity of expectation) holds whether or not the ##X_i## are independent; the proof only requires integrating out the other variables to obtain the marginal distributions.
  • #1
albega

Homework Statement


If ##Y = X_1 + X_2 + \ldots + X_N##, prove that ##\langle Y \rangle = \langle X_1 \rangle + \langle X_2 \rangle + \ldots + \langle X_N \rangle##.

Homework Equations


##\langle Y \rangle = \int Y P(Y)\, dY## over all ##Y##.

The Attempt at a Solution


I only seem to be able to show this if the ##X_i## are independent, and I also think my proof may be very wrong. I have basically said that we can write the probability of finding the variables in the intervals ##[X_1, X_1+dX_1], [X_2, X_2+dX_2], \ldots, [X_N, X_N+dX_N]## as
##\prod_{j=1}^{N} P_{X_j}(X_j)\, dX_j## (I really doubt this is right).
Then
##\langle Y \rangle = \int \Big( \sum_{i=1}^{N} X_i \Big) \prod_{j=1}^{N} P_{X_j}(X_j)\, dX_j##
##= \sum_{i=1}^{N} \int X_i \prod_{j=1}^{N} P_{X_j}(X_j)\, dX_j,##
and then all the integrals apart from the ##i##th one go to one, because the various probability densities are normalised, so this becomes
##= \sum_{i=1}^{N} \int X_i P_{X_i}(X_i)\, dX_i##
##= \sum_{i=1}^{N} \langle X_i \rangle.##
However, in saying that all the integrals go to one, I have assumed that I could separate all the integrals, i.e. that the variables were independent.

Also, is there not a really easy way to prove this? I can't seem to find any books/websites proving it, which makes me think it's just trivial...
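A quick way to see that the independence assumption should not be needed is to test the identity numerically. The sketch below is an illustration added here, not part of the original post; it assumes NumPy, and the distributions and variable names are made up. It samples deliberately dependent variables and compares the sample mean of ##Y## with the sum of the individual sample means.

[code]
import numpy as np

rng = np.random.default_rng(0)
n_samples = 1_000_000

# Deliberately dependent variables: x2 and x3 are built from x1.
x1 = rng.normal(loc=1.0, scale=2.0, size=n_samples)
x2 = 3.0 * x1 + rng.normal(loc=-0.5, scale=1.0, size=n_samples)
x3 = np.sign(x1) * rng.exponential(scale=2.0, size=n_samples)

y = x1 + x2 + x3

print(y.mean())                           # sample estimate of <Y>
print(x1.mean() + x2.mean() + x3.mean())  # <X1> + <X2> + <X3>; agrees up to rounding
[/code]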
 
  • #2
Your proof seems sound at first glance. It should be easy to fix for general (not necessarily independent) distributions using P(A|B) P(B) = P(AB).
 
  • #3
For example, if the joint density of [itex] x_1, x_2 [/itex] is [itex] f(x_1,x_2) [/itex]

[itex]\int \int { (x_1 + x_2) f(x_1,x_2) } dx_1 dx_2 = \int \int {x_1 f(x_1,x_2)} dx_1 dx_2 + \int \int {x_2 f(x_1,x_2) } dx_1 dx_2 [/itex]

Then an individual integral like [itex] \int \int x_2 f(x_1,x_2) dx_1 dx_2 [/itex] has the general pattern (expressed in different variables) of

[itex] \int \int h(r) f(r,s) ds\ dr = \int\ h(r)\ ( \int f(r,s) ds )\ dr [/itex]

The integration [itex] \int {f(r,s)} ds [/itex] produces the density function for [itex] r [/itex]. (It's integration of a joint density to produce a marginal density.)
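The marginalisation step can also be checked numerically. The following is a rough sketch added here (not from the post), assuming NumPy: it discretises a correlated bivariate normal as an example joint density ##f(r,s)##, integrates out ##s## to get the marginal, and confirms that integrating ##h(r)## against the joint equals integrating it against the marginal.

[code]
import numpy as np

r = np.linspace(-6, 6, 601)
s = np.linspace(-6, 6, 601)
R, S = np.meshgrid(r, s, indexing="ij")

# Example joint density f(r, s): correlated bivariate normal, unit variances.
rho = 0.7
f = np.exp(-(R**2 - 2 * rho * R * S + S**2) / (2 * (1 - rho**2)))
f /= 2 * np.pi * np.sqrt(1 - rho**2)

h = lambda x: x**2                                # any function h(r)

f_r = np.trapz(f, s, axis=1)                      # marginal density of r
lhs = np.trapz(np.trapz(h(R) * f, s, axis=1), r)  # double integral of h(r) f(r, s)
rhs = np.trapz(h(r) * f_r, r)                     # single integral of h(r) f_r(r)
print(lhs, rhs)                                   # agree to quadrature accuracy
[/code]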
 
  • #4
albega said:

If ##Y = X_1 + X_2 + \ldots + X_N##, prove that ##\langle Y \rangle = \langle X_1 \rangle + \langle X_2 \rangle + \ldots + \langle X_N \rangle##.

I only seem to be able to show this if the ##X_i## are independent... in saying all the integrals go to one, I have assumed that I could separate all the integrals, i.e. that the variables were independent.

Also, is there not a really easy way to prove this? I can't seem to find any books/websites proving it, which makes me think it's just trivial...

The result is true in general, even if the variables ##X_1, X_2, \ldots, X_n## are dependent. Look at the case ##n=2##, and take sums instead of integrals (because the result is also true for discrete random variables). Using the standard notation ##E## for expectation (instead of your Physics-oriented notation ##\langle \cdot \rangle##) we have, for a joint probability mass function ##P\{X_1 = k_1, X_2 =k_2 \} = p_{12}(k_1,k_2)##:
[tex] E(X_1 + X_2) = \sum_{k_1,k_2} p_{12}(k_1,k_2) (k_1 + k_2)
= \sum_{k_1,k_2} p_{12}(k_1,k_2) k_1 + \sum_{k_1,k_2} p_{12}(k_1,k_2) k_2 \\
= \sum_{k_1} k_1\underbrace{ \left( \sum_{k_2} p_{12} (k_1,k_2) \right) }_{=p_1(k_1)}
+ \sum_{k_2} k_2 \underbrace{\left( \sum_{k_1} p_{12} (k_1,k_2) \right)}_{=p_2(k_2)} \\
= E X_1 + E X_2 [/tex]
Here, ##p_1(k_1) = P\{ X_1 = k_1 \}## and ##p_2(k_2) = P \{ X_2 = k_2 \}## are the marginal probability mass functions of ##X_1## and ##X_2## separately.
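To make the discrete argument concrete, here is a small numerical check (a sketch added here, assuming NumPy; the pmf values are arbitrary but the variables are clearly dependent): it computes ##E(X_1+X_2)## from the joint pmf and compares it with ##E X_1 + E X_2## computed from the marginals.

[code]
import numpy as np

k1 = np.array([0, 1, 2])
k2 = np.array([-1, 0, 3])

# Arbitrary joint pmf p12(k1, k2); rows index k1, columns index k2; entries sum to 1.
p12 = np.array([[0.05, 0.10, 0.15],
                [0.20, 0.05, 0.05],
                [0.10, 0.25, 0.05]])

K1, K2 = np.meshgrid(k1, k2, indexing="ij")

e_sum = np.sum(p12 * (K1 + K2))                # E(X1 + X2) from the joint pmf
p1 = p12.sum(axis=1)                           # marginal pmf of X1
p2 = p12.sum(axis=0)                           # marginal pmf of X2
print(e_sum, np.dot(k1, p1) + np.dot(k2, p2))  # identical values
[/code]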
 
  • #5

Thanks for the replies. I have another issue related to the same setup... If we have ##n## independent random variables ##X_i##, ##i = 1, \ldots, n##, each with the same mean ##\langle X \rangle## and the same variance, how do we know that ##\langle X_i^2 \rangle = \langle X^2 \rangle##? I can't see this, although I'm guessing it's obvious...
 
  • #6
Start from the definition of variance V(X) = <X^2> - <X>^2 (or, equivalently, V(X) = <(X - <X>)^2>).
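For what it's worth, here is a one-off numerical sanity check (added here, assuming NumPy; the distribution is arbitrary) that the two quoted forms of the variance agree.

[code]
import numpy as np

rng = np.random.default_rng(1)
x = rng.gamma(shape=2.0, scale=1.5, size=1_000_000)

m = x.mean()
print((x**2).mean() - m**2)   # <X^2> - <X>^2
print(((x - m)**2).mean())    # <(X - <X>)^2>, the same value
[/code]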
 
  • #7
Orodruin said:
Start from the definition of variance V(X) = <X^2> - <X>^2 (or, equivalently, V(X) = <(X - <X>)^2>).
##\langle X_i^2 \rangle = \langle X_i \rangle^2 + V(X) = \langle X \rangle^2 + V(X) = \langle X^2 \rangle##?
 
  • #8
What is your definition of X?
 
  • #9
Orodruin said:
What is your definition of X?

Some random variable that has a probability distribution with mean ##\langle X \rangle## and variance ##V(X) = \langle X^2 \rangle - \langle X \rangle^2##?
 
  • #10
I am just saying, because it is not clear if it is one of the Xi or not.
 
  • #11
Orodruin said:
I am just saying, because it is not clear if it is one of the Xi or not.
No it isn't...
 
  • #12
Does it have the same variance as the Xi? If it does, you can just as well include it among them and otherwise the statement is not really correct.
 
  • #13
Orodruin said:
Does it have the same variance as the Xi? If it does, you can just as well include it among them and otherwise the statement is not really correct.

Well, no, I guess I have just assumed that when I shouldn't have, which means that proof doesn't work...
 
  • #14
What I am curious about is if this is the actual problem statement:
albega said:
each with the same mean <X> and the same variance
Which would mean that <Xi> = <X>, but not necessarily V(Xi) = V(X), unless you also add a "V(X)" after "same variance".

If V(Xi) = V(X), then the problem is trivial as you noticed. If it is not, then the statement is false.
 
  • #15
Orodruin said:
What I am curious about is if this is the actual problem statement:

Which would mean that <Xi> = <X>, but not necessarily V(Xi) = V(X), unless you also add a "V(X)" after "same variance".

If V(Xi) = V(X), then the problem is trivial as you noticed. If it is not, then the statement is false.

Oh I see... The actual statement is
'each with the same mean ##\langle X \rangle## and the same variance ##\sigma_X^2##'
but I didn't think defining what it actually was mattered, although it clearly does. So I guess that makes it fine then.
Thank you.
 
  • #16

albega said:
If we have ##n## independent random variables ##X_i##, ##i = 1, \ldots, n##, each with the same mean ##\langle X \rangle## and the same variance, how do we know that ##\langle X_i^2 \rangle = \langle X^2 \rangle##?

It isn't clear what the notation [itex] <X^2> [/itex] signifies. If each of [itex] X_i [/itex] has mean [itex] \mu [/itex] then it is not true that the mean value of [itex] X_i^2 [/itex] must equal [itex] \mu^2 [/itex]. If [itex]R [/itex] and [itex] W [/itex] are independent random variables that have the same mean value then it is not true that [itex] R^2 [/itex] and [itex] W^2 [/itex] must have the same mean value.
If [itex] R [/itex] and [itex] W [/itex] are independent, identically distributed random variables, you could get that result.
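A small numerical illustration of this point (a sketch added here, assuming NumPy; the distributions are arbitrary): R and W below share the same mean but have different variances, so their second moments differ; matching the variances makes the second moments agree as well.

[code]
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000

r = rng.normal(loc=2.0, scale=1.0, size=n)   # mean 2, variance 1
w = rng.normal(loc=2.0, scale=3.0, size=n)   # mean 2, variance 9

print((r**2).mean(), (w**2).mean())          # about 5 vs about 13: equal means are not enough

w2 = rng.normal(loc=2.0, scale=1.0, size=n)  # same mean *and* same variance as r
print((r**2).mean(), (w2**2).mean())         # both about 5 = <X>^2 + V(X)
[/code]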
 
  • #17
Stephen Tashi said:
It isn't clear what the notation [itex] <X^2> [/itex] signifies. If each of [itex] X_i [/itex] has mean [itex] \mu [/itex] then it is not true that the mean value of [itex] X_i^2 [/itex] must equal [itex] \mu^2 [/itex]. If [itex]R [/itex] and [itex] W [/itex] are independent random variables that have the same mean value then it is not true that [itex] R^2 [/itex] and [itex] W^2 [/itex] must have the same mean value.
If [itex] R [/itex] and [itex] W [/itex] are independent, identically distributed random variables, you could get that result.

I disagree: <X^2> is standard notation for the expectation value of X^2, not for the square of the expectation value, which is normally written <X>^2. If the mean and variance of all of the stochastic variables are the same, then so are the expectation values of their squares.
 

1. What is the mean of a sum of random variables?

The mean of a sum of random variables is the expected value of the sum of all the individual random variables. It is calculated by adding the means of each of the random variables together.

2. Can the mean of a sum of random variables be negative?

Yes. Since the mean of the sum equals the sum of the individual means, it is negative whenever the individual means add up to a negative number, i.e. when the negative means outweigh the positive ones.

3. How is the mean of a sum of random variables affected by the number of variables?

If all the variables share the same mean, the mean of the sum grows in proportion to how many there are: summing n variables each with mean μ gives a sum with mean nμ. In general, adding one more variable changes the mean of the sum by that variable's mean, which may be positive, negative, or zero.

4. What is the difference between the mean of a sum of independent random variables and dependent random variables?

There is no difference in how the mean is computed: by linearity of expectation, the mean of a sum equals the sum of the individual means whether the variables are independent or dependent. Dependence (covariance) between the variables affects the variance of the sum, not its mean.
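A brief numerical sketch of this (an illustration added here, assuming NumPy; the particular distributions are made up): whether or not X1 and X2 are correlated, the mean of the sum equals the sum of the means, while only the variance of the sum is affected by the correlation.

[code]
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000

x1 = rng.normal(loc=1.0, scale=2.0, size=n)

for correlated in (False, True):
    if correlated:
        x2 = 0.8 * x1 + rng.normal(loc=0.0, scale=2.0, size=n)  # depends on x1
    else:
        x2 = rng.normal(loc=0.0, scale=2.0, size=n)              # independent of x1
    y = x1 + x2
    # The mean of the sum matches <X1> + <X2> in both cases;
    # the variance only matches V(X1) + V(X2) in the uncorrelated case.
    print(correlated, y.mean(), x1.mean() + x2.mean(), y.var(), x1.var() + x2.var())
[/code]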

5. Why is the mean of a sum of random variables an important concept in statistics?

The mean of a sum of random variables is an important concept in statistics because it helps us understand the overall distribution of a set of variables. It is also used in various statistical models to make predictions and in hypothesis testing to determine the significance of results.
