Deriving Expectations (i.e. means)

Thread starter: kurvmax

I'm looking at my Introduction to Econometrics book and trying to figure out the derivations in Chapter 2.

First, E(Y^{2}) = \sigma^{2}_{Y}+\mu^{2}_{Y}

The derivation goes like this:

E(Y^{2}) = E{[(Y - \mu_{Y})+ \mu_{Y}]^{2}} = E[(Y- \mu_{Y})^2] + 2\mu_{Y}E(Y-\mu_{Y})+ \mu^{2}_{Y} = \sigma^{2}_{Y} + \mu^{2}_{Y} because E(Y - \mu_{Y}) = 0

If this last thing equals zero, then why doesn't everything but \mu^{2}_{Y} drop out?
 
kurvmax said:
E(Y^{2}) = E[(Y - \mu_{Y})^{2}] = E[(Y- \mu_{Y})^2] + 2\mu_{Y}E(Y-\mu_{Y})+ \mu^{2}_{Y} = \sigma^{2}_{Y} + \mu^{2}_{Y} because E(Y - \mu_{Y}) = 0

Already the first "equality" is certainly not true.
 
Err, yeah. Sorry -- fixed.
 
kurvmax said:
I'm looking at my Introduction to Econometrics book and trying to figure out the derivations in the 2nd Chapter.

First, E(Y^{2}) = \sigma^{2}_{Y}+\mu^{2}_{Y}

The derivation goes like this:

E(Y^{2}) = E{[(Y - \mu_{Y})+ \mu_{Y}]^{2}} = E[(Y- \mu_{Y})^2] + 2\mu_{Y}E(Y-\mu_{Y})+ \mu^{2}_{Y} = \sigma^{2}_{Y} + \mu^{2}_{Y} because E(Y - \mu_{Y}) = 0

If this last thing equals zero, then why doesn't everything but \mu^{2}_{Y} drop out?

Because E(Y-\mu_Y)=0 does not imply that E\left[(Y-\mu_Y)^2\right]=0. Otherwise, every random variable would be degenerate. Note that the square is *inside* the expectation.
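The distinction can be checked numerically. Below is a quick Monte Carlo sanity check (a sketch using NumPy; the particular values of mu and sigma are arbitrary choices, not from the thread): the sample mean of Y - mu is near zero, while the sample mean of (Y - mu)^2 is near sigma^2, and E(Y^2) comes out near sigma^2 + mu^2.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 2.0, 3.0
y = rng.normal(mu, sigma, size=1_000_000)  # samples of Y

# E(Y - mu) is approximately 0 ...
print(np.mean(y - mu))           # close to 0
# ... but E[(Y - mu)^2] is the variance, not 0
print(np.mean((y - mu) ** 2))    # close to sigma^2 = 9
# and E(Y^2) = sigma^2 + mu^2
print(np.mean(y ** 2))           # close to 9 + 4 = 13
```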
 
Ah, yeah. E\left[(Y-\mu_Y)^2\right]=\sigma^{2}_{Y}, right?

So now I'm wondering how E(Y^{2}) = E{[(Y - \mu_{Y})+ \mu_{Y}]^{2}}
 
Are you sure it is not E[((Y-\mu_Y)+ \mu_Y)^2]? That would then be E[(Y-\mu_Y)^2 + 2\mu_Y(Y- \mu_Y)+ \mu_Y^2], which gives the rest.
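The algebra inside the expectation is just a binomial expansion, and it can be checked symbolically (a sketch using SymPy, not part of the original thread). In particular, the middle term carries a plus sign:

```python
import sympy as sp

Y, mu = sp.symbols('Y mu')

# Expand ((Y - mu) + mu)^2 and compare with the claimed expansion
lhs = ((Y - mu) + mu) ** 2
rhs = (Y - mu) ** 2 + 2 * mu * (Y - mu) + mu ** 2
print(sp.simplify(lhs - rhs))  # 0: the two expressions are identical
```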
 
Err, sorry. That's what it is. But I still don't see how you go from E(Y^2) to E[((Y-\mu_Y)+\mu_Y)^2]. How'd they come up with the latter expression? Did they just add and subtract \mu_{Y} and then group the terms using associativity?

If I didn't add and subtract \mu_Y, it seems to me that I would get E(Y^2) = \mu_Y^2...

How do you do the proof of E(Y) = \mu_Y?

Never mind, I see. You have to add and subtract \mu_Y to even get it. The proof of E(Y) = \mu_Y is, I guess,

E(Y) = E[(Y -\mu_Y) + \mu_Y] = E(Y - \mu_Y) + E(\mu_Y)

Was what I did above legal? Can I distribute the E, and would E(\mu_Y) = \mu_Y?
 
kurvmax said:
it seems to me that I would get E(Y^2) = \mu_Y^2
No, because

E(Y^2)\neq \left[E(Y)\right]^2 = \mu_Y^2


kurvmax said:
How do you do the proof of E(Y) = \mu_Y?

That's a definition.

kurvmax said:
Nevermind, I see. You have to add and subtract \mu_Y to even get it. The proof of E(Y) = \mu_Y is, I guess

E(Y) = E[(Y -\mu_Y) + \mu_Y] = E(Y - \mu_Y) + E(\mu_Y)

Was what I did above legal? Can I distribute the E, and would E(\mu_Y) = \mu_Y?

It was "legal", but to see that the first term vanishes you use E(Y-\mu_Y)=0, which is equivalent to E(Y)=\mu_Y, which is what you want to show. \mu_Y is just shorthand for E(Y).
 
Pere Callahan said:
No, because

E(Y^2)\neq E(Y)^2 = \mu_Y^2

The notation could be a source of confusion. Some people interpret the square in the second term as acting on Y rather than on the expectation as a whole. (To the original poster:) perhaps it's better if you write

E^{2}(Y) = (E(Y))^2 = (\mu_Y)^{2} = \mu_{Y}^2

(PS -- I wrote this because some books define \mathrm{Var}(X) as E(X-\mu_{X})^2. They actually mean E[(X-\mu_{X})^2], that is, the expectation of the square of the difference between X and its mean.)
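The two readings of that ambiguous notation give very different numbers, which is easy to see numerically (a sketch using NumPy; the values of mu_X and sigma_X are arbitrary choices for illustration): reading E(X-\mu_X)^2 as [E(X-\mu_X)]^2 gives zero, while E[(X-\mu_X)^2] gives the variance.

```python
import numpy as np

rng = np.random.default_rng(1)
mu_X, sigma_X = 5.0, 2.0
x = rng.normal(mu_X, sigma_X, size=1_000_000)

# Reading 1: square the expectation -> [E(X - mu_X)]^2, which is ~0
print(np.mean(x - mu_X) ** 2)    # close to 0
# Reading 2: expectation of the square -> E[(X - mu_X)^2], the variance
print(np.mean((x - mu_X) ** 2))  # close to sigma_X^2 = 4
```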
 