Deriving Expectations (i.e. means)

In summary: the 2nd chapter derives E(Y^{2}) = \sigma^{2}_{Y} + \mu^{2}_{Y} by writing Y as (Y - \mu_{Y}) + \mu_{Y} and expanding the square. The cross term drops out because E(Y - \mu_{Y}) = 0, but E[(Y - \mu_{Y})^{2}] does not vanish: it is the variance \sigma^{2}_{Y}, which is zero only for a degenerate random variable. The thread also clarifies that adding and subtracting \mu_{Y} inside the expectation is a legal step, justified by the linearity of expectation.
  • #1
kurvmax
Deriving Expectations (i.e. means)

I'm looking at my Introduction to Econometrics book and trying to figure out the derivations in the 2nd Chapter.

First, [tex]E(Y^{2}) = \sigma^{2}_{Y}+\mu^{2}_{Y}[/tex]

The derivation goes like this:

[tex]E(Y^{2}) = E{[(Y - \mu_{Y})+ \mu_{Y}]^{2}} = E[(Y- \mu_{Y})^2] + 2\mu_{Y}E(Y-\mu_{Y})+ \mu^{2}_{Y} = \sigma^{2}_{Y} + \mu^{2}_{Y}[/tex] because [tex]E(Y - \mu_{Y}) = 0[/tex]

If this last thing equals zero, then why doesn't everything but [tex]\mu^{2}_{Y}[/tex] drop out?
 
  • #2
kurvmax said:
[tex]E(Y^{2}) = E[(Y - \mu_{Y})^{2}] = E[(Y- \mu_{Y})^2] + 2\mu_{Y}E(Y-\mu_{Y})+ \mu^{2}_{Y} = \sigma^{2}_{Y} + \mu^{2}_{Y}[/tex] because [tex]E(Y - \mu_{Y}) = 0[/tex]

Already the first "equality" is certainly not true.
 
  • #3
Err, yeah. Sorry -- fixed.
 
  • #4
kurvmax said:
I'm looking at my Introduction to Econometrics book and trying to figure out the derivations in the 2nd Chapter.

First, [tex]E(Y^{2}) = \sigma^{2}_{Y}+\mu^{2}_{Y}[/tex]

The derivation goes like this:

[tex]E(Y^{2}) = E{[(Y - \mu_{Y})+ \mu_{Y}]^{2}} = E[(Y- \mu_{Y})^2] + 2\mu_{Y}E(Y-\mu_{Y})+ \mu^{2}_{Y} = \sigma^{2}_{Y} + \mu^{2}_{Y}[/tex] because [tex]E(Y - \mu_{Y}) = 0[/tex]

If this last thing equals zero, then why doesn't everything but [tex]\mu^{2}_{Y}[/tex] drop out?

Because [itex]E(Y-\mu_Y)=0[/itex] does not imply that [itex]E\left[(Y-\mu_Y)^2\right]=0[/itex]. Otherwise, every random variable would be degenerate. Note that the square is *inside* the expectation.
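This can be checked numerically. Below is a minimal sketch (with a made-up discrete distribution, not from the thread) showing that E(Y - \mu_Y) comes out exactly zero while E[(Y - \mu_Y)^2] does not:

```python
# Toy discrete distribution for Y (values and probabilities are
# hypothetical, chosen only for illustration).
values = [1.0, 2.0, 6.0]
probs = [0.5, 0.3, 0.2]

# mu_Y = E(Y)
mu = sum(y * p for y, p in zip(values, probs))

# E(Y - mu_Y): zero by definition of the mean (up to float rounding).
e_dev = sum((y - mu) * p for y, p in zip(values, probs))

# E[(Y - mu_Y)^2]: the variance, strictly positive unless Y is constant.
e_dev2 = sum((y - mu) ** 2 * p for y, p in zip(values, probs))

print(e_dev, e_dev2)
```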
 
  • #5
Ah, yeah. [tex]E\left[(Y-\mu_Y)^2\right]=\sigma^{2}_{Y}[/tex], right?

So now I'm wondering how [tex]E(Y^{2}) = E{[(Y - \mu_{Y})+ \mu_{Y}]^{2}} [/tex]
 
  • #6
Are you sure it is not [itex]E[((Y-\mu_Y)+ \mu_Y)^2][/itex]? That would then be [itex]E[(Y-\mu_Y)^2 + 2\mu_Y(Y- \mu_Y)+ \mu_Y^2][/itex], which gives the rest.
 
  • #7
Err, sorry. That's what it is. But I still don't see how you go from [tex]E(Y^2)[/tex] to [tex]E[((Y-\mu_Y)+\mu_Y)^2][/tex]. How'd they come up with the latter expression? Did they just add and subtract [tex]\mu_{Y}[/tex] inside the square and then regroup the terms?

If I didn't add and subtract [tex]\mu_{Y}[/tex], it seems to me that I would get [tex]E(Y^2) = \mu_Y^2[/tex]...

How do you do the proof of [tex]E(Y) = \mu_Y[/tex]?

Nevermind, I see. You have to add and subtract [tex]\mu_Y[/tex] to even get it. The proof of [tex]E(Y) = \mu_Y[/tex] is, I guess,

[tex]E(Y) = E[(Y -\mu_Y) + \mu_Y] = E(Y - \mu_Y) + E(\mu_Y)[/tex]

Was what I did above legal? Can I distribute the E over the sum, and would [tex]E(\mu_Y) = \mu_Y[/tex]?
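Both steps can be sanity-checked numerically. Here is a small sketch (the distribution values are made up) confirming that E distributes over a sum and that the expectation of the constant \mu_Y is \mu_Y itself:

```python
# Toy discrete distribution (hypothetical values, for illustration only).
values = [0.0, 1.0, 4.0]
probs = [0.25, 0.5, 0.25]

def E(f):
    """Expectation of f(Y) under the toy distribution above."""
    return sum(f(y) * p for y, p in zip(values, probs))

mu = E(lambda y: y)               # mu_Y = E(Y)
lhs = E(lambda y: (y - mu) + mu)  # E[(Y - mu_Y) + mu_Y]
rhs = E(lambda y: y - mu) + mu    # E(Y - mu_Y) + E(mu_Y), with E(mu_Y) = mu_Y

print(lhs, rhs)
```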
 
  • #8
kurvmax said:
it seems to me that I would get [tex]E(Y^2) = \mu_Y^2[/tex]
No, because

[tex]
E(Y^2)\neq \left[E(Y)\right]^2 = \mu_Y^2
[/tex]


kurvmax said:
How do you do the proof of [tex]E(Y) = \mu_Y[/tex]?

That's a definition.

kurvmax said:
Nevermind, I see. You have to add and subtract [tex]\mu_Y[/tex] to even get it. The proof of [tex]E(Y) = \mu_Y[/tex] is, I guess

[tex]E(Y) = E[(Y -\mu_Y) + \mu_Y] = E(Y - \mu_Y) + E(\mu_Y)[/tex]

Was what I did above legal? Can I distribute the E over the sum, and would [tex]E(\mu_Y) = \mu_Y[/tex]?

It was "legal", but note that to conclude the first term vanishes you use [itex]E(Y-\mu_Y)=0[/itex], which is equivalent to [itex]E(Y)=\mu_Y[/itex], the very thing you want to show. There is nothing to prove here: [itex]\mu_Y[/itex] is just shorthand notation for [itex]E(Y)[/itex].
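For what it's worth, the identity the thread set out to derive can also be verified numerically. A quick sketch with a made-up discrete distribution:

```python
# Toy discrete distribution (hypothetical numbers, illustration only).
values = [-1.0, 0.0, 2.0]
probs = [0.2, 0.5, 0.3]

mu = sum(y * p for y, p in zip(values, probs))               # mu_Y
e_y2 = sum(y * y * p for y, p in zip(values, probs))         # E(Y^2)
var = sum((y - mu) ** 2 * p for y, p in zip(values, probs))  # sigma_Y^2

# E(Y^2) should equal sigma_Y^2 + mu_Y^2.
print(e_y2, var + mu ** 2)
```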
 
  • #9
Pere Callahan said:
No, because

[tex]
E(Y^2)\neq E(Y)^2 = \mu_Y^2
[/tex]

The notation could be a source of confusion. Some people interpret the square in the second term as acting on Y rather than on the expectation as a whole. (To the original poster): perhaps it's better if you write

[tex]E^{2}(Y) = (E(Y))^2 = (\mu_Y)^{2} = \mu_{Y}^2[/tex]

(PS: I wrote this because some books define [itex]\mathrm{var}(X)[/itex] as [itex]E(X-\mu_{X})^2[/itex]. They actually mean [itex]E[(X-\mu_{X})^2][/itex], that is, the expectation of the squared difference between X and its mean.)
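The distinction between E(Y^2) and [E(Y)]^2 is easy to see with numbers. In this sketch (a toy two-point distribution), the gap between the two quantities is exactly Var(Y):

```python
# Fair two-point distribution: Y is 1 or 3 with equal probability.
values = [1.0, 3.0]
probs = [0.5, 0.5]

e_y = sum(y * p for y, p in zip(values, probs))       # E(Y) = 2
e_y2 = sum(y * y * p for y, p in zip(values, probs))  # E(Y^2) = 5

# E(Y^2) - [E(Y)]^2 = Var(Y) = 1 here, so the two are not interchangeable.
print(e_y2, e_y ** 2)
```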
 

What is the definition of deriving expectations?

Deriving an expectation, also known as finding the mean, measures the central tendency of a random variable or a data set. For a discrete random variable Y taking values y_i with probabilities p_i, the expectation is E(Y) = \sum_i y_i p_i; for a finite data set, the analogous sample mean is calculated by adding all of the values and dividing by the total number of values.

Why is deriving expectations important in scientific research?

Deriving expectations is important in scientific research because it allows researchers to summarize and analyze large amounts of data. It also provides a single value that can represent the entire data set, making it easier to draw conclusions and make predictions.

What are some common misconceptions about deriving expectations?

One common misconception about deriving expectations is that it is the same as the median or mode. While all three measures are used to describe the central tendency of a data set, they are calculated differently and can result in different values.

How is deriving expectations used in hypothesis testing?

In hypothesis testing, deriving expectations is used to compare the mean of a sample data set to a known population mean. This allows researchers to determine if there is a significant difference between the two, providing evidence for or against the research hypothesis.

What are some limitations of deriving expectations?

One limitation of deriving expectations is that it can be greatly affected by outliers or extreme values in the data set. It is also not always the best measure of central tendency for skewed data sets. Additionally, it does not provide information about the variability or distribution of the data.
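The outlier sensitivity mentioned above is easy to demonstrate. A short sketch with made-up data:

```python
# Made-up data set with one extreme value.
data = [2, 3, 3, 4, 100]

mean = sum(data) / len(data)           # dragged upward by the outlier
median = sorted(data)[len(data) // 2]  # unaffected by the outlier

print(mean, median)
```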
