Expectation operator - linearity

Pietair

Homework Statement


Show that the expectation operator E() is a linear operator, that is, show that:
$$E(a\bar{x}+b\bar{y})=aE(\bar{x})+bE(\bar{y})$$

Homework Equations


$$E(\bar{x})=\int_{-\infty}^{+\infty}xf_{\bar{x}}(x)\,dx$$

where ##f_{\bar{x}}## is the probability density function of the random variable ##\bar{x}##.

The Attempt at a Solution


$$aE(\bar{x})=a\int_{-\infty}^{+\infty}xf_{\bar{x}}(x)\,dx$$
and:
$$bE(\bar{y})=b\int_{-\infty}^{+\infty}yf_{\bar{y}}(y)\,dy$$

Introducing a new random variable:
$$\bar{v}=a\bar{x}+b\bar{y}$$

Then:

$$E(\bar{v})=E(a\bar{x}+b\bar{y})=\int_{-\infty}^{+\infty}vf_{\bar{v}}(v)\,dv=\int_{-\infty}^{+\infty}(ax+by)f_{\bar{v}}(v)\,dv$$

And accordingly:
$$E(a\bar{x}+b\bar{y})=\int_{-\infty}^{+\infty}(ax+by)f_{\bar{v}}(v)\,dv=a\int_{-\infty}^{+\infty}xf_{\bar{v}}(v)\,dv+b\int_{-\infty}^{+\infty}yf_{\bar{v}}(v)\,dv$$

So what remains to prove is that:

$$a\int_{-\infty}^{+\infty}xf_{\bar{v}}(v)\,dv+b\int_{-\infty}^{+\infty}yf_{\bar{v}}(v)\,dv=a\int_{-\infty}^{+\infty}xf_{\bar{x}}(x)\,dx+b\int_{-\infty}^{+\infty}yf_{\bar{y}}(y)\,dy$$

And now I am stuck... I don't know how I can relate the p.d.f. of the random variable ##\bar{v}## to the p.d.f.s of the random variables ##\bar{x}## and ##\bar{y}##.

Thank you in advance!
 
Pietair said:
And now I am stuck... I don't know how I can relate the p.d.f. of the random variable ##\bar{v}## to the p.d.f.s of the random variables ##\bar{x}## and ##\bar{y}##.
Haven't you seen some kind of result that gives the pdf of X+Y in terms of the pdfs of X and Y? Hint: it has to do with convolution.
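For reference, the standard result the hint points to: if ##X## and ##Y## are independent with densities ##f_X## and ##f_Y##, then ##X+Y## has density
$$f_{X+Y}(v)=\int_{-\infty}^{+\infty} f_X(x)\,f_Y(v-x)\,dx.$$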
 
Thank you for your reply.

I know that the probability distribution of the sum of two or more random variables is the convolution of their individual pdfs, but as far as I know this is only valid for independent random variables, while ##E(aX+bY)=aE(X)+bE(Y)## is true in general, right?
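As a quick numerical sanity check of that claim (a minimal sketch, not a proof; the distribution, the dependence ##Y=X^2##, and the coefficients are arbitrary choices), linearity survives even when ##Y## is a deterministic function of ##X##:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = 2.0, -3.0
n = 1_000_000

# Dependent pair: X ~ Exponential(1), Y = X**2 (a deterministic function of X).
x = rng.exponential(scale=1.0, size=n)
y = x**2
lhs = np.mean(a * x + b * y)  # Monte Carlo estimate of E(aX + bY)

# Estimate aE(X) + bE(Y) from fresh, independent draws of the marginals.
x2 = rng.exponential(scale=1.0, size=n)
y2 = rng.exponential(scale=1.0, size=n) ** 2
rhs = a * np.mean(x2) + b * np.mean(y2)

# For Exponential(1): E(X) = 1 and E(X^2) = 2, so both estimates
# should be near 2*1 - 3*2 = -4, up to sampling noise.
print(f"E(aX+bY) ~ {lhs:.4f},  aE(X)+bE(Y) ~ {rhs:.4f}")
```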
 
Pietair said:
I know that the probability distribution of the sum of two or more random variables is the convolution of their individual pdfs, but as far as I know this is only valid for independent random variables, while ##E(aX+bY)=aE(X)+bE(Y)## is true in general, right?
Yes to both.
 
Pietair said:
... as far as I know this is only valid for independent random variables, while ##E(aX+bY)=aE(X)+bE(Y)## is true in general, right?

Depending on the level of rigor required, you may have to qualify things a bit. For example, if Y = -X, then E(X+Y) = EX + EY is false if EX = EY = ∞, because we would be trying to equate 0 to ∞ - ∞, which is not allowed.

So, the easiest way is to assume that both EX and EY are finite. Now you really have a 2-stage task:
(1) Prove the "theorem of the unconscious statistician", which says that if Z = g(X,Y), then
$$EZ \equiv \int z \: dF_Z(z)$$
can be written as
$$\int g(x,y) \: d^2F_{XY}(x,y),$$
which becomes
$$\sum_{x,y} g(x,y)\: P_{XY}(x,y)$$
in the discrete case where there are no densities, and becomes
$$\int g(x,y)\, f_{XY}(x,y) \, dx \, dy$$
in the continuous case where there are densities. Of course, the result is also true in a mixed continuous-discrete case where there are densities and point-mass probabilities, but then we need to write Stieltjes integrals, etc.
(2) Then, you need to do the much easier task of proving that
$$\int (ax + by)\, f_{XY}(x,y) \, dx \, dy = a \int x\, f_X(x) \, dx + b \int y\, f_Y(y) \, dy = a\, EX + b\, EY .$$
The last form holds in general, whether the random variables are continuous, discrete or mixed. As you say, it holds even when X and Y are dependent.
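To spell out step (2) in the continuous case, a short sketch using the standard marginalization identities ##f_X(x)=\int f_{XY}(x,y)\,dy## and ##f_Y(y)=\int f_{XY}(x,y)\,dx##:
$$\int\!\!\int (ax+by)\,f_{XY}(x,y)\,dx\,dy = a\int x\left[\int f_{XY}(x,y)\,dy\right]dx + b\int y\left[\int f_{XY}(x,y)\,dx\right]dy = a\,EX + b\,EY.$$
Swapping the order of integration in the second term is justified by Fubini's theorem, which applies because EX and EY were assumed finite.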
 
Ray Vickson said:
So, the easiest way is to assume that both EX and EY are finite. Now you really have a 2-stage task: (1) prove the "theorem of the unconscious statistician" ... (2) then, prove the much easier fact that ##\int (ax + by)\, f_{XY}(x,y) \, dx \, dy = a\, EX + b\, EY##.

Hmm, so there really is no elementary proof? (I guess it depends on what you call elementary, though.) This kind of makes me happy that I know measure theory; it makes the proof of this result so much simpler.
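For comparison, a sketch of the measure-theoretic argument alluded to here, assuming ##X,Y\in L^1(\Omega,\mathcal{F},P)##: expectation is just the Lebesgue integral against ##P##, and linearity of the Lebesgue integral (proved first for simple functions, then extended to integrable ones) gives
$$E(aX+bY)=\int_\Omega (aX+bY)\,dP = a\int_\Omega X\,dP + b\int_\Omega Y\,dP = a\,E(X)+b\,E(Y).$$
No densities or convolutions are needed, and any dependence between ##X## and ##Y## never enters.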
 