
E(X) < ∞ ? What does it mean?

  1. Jul 14, 2009 #1
    Let X and Y be random variables, and let F be a function.

    I have seen "E(X) < ∞" written in a variety of contexts. What does it mean? My guess is that it means E(X) is finite, but if that is the case, shouldn't they write -∞ < E(X) < ∞ instead?


    Also, I have seen notation with the letter "d" written above an equal sign. What does it mean? e.g.

    [tex]
    X \overset{d}{=} F(Y)
    [/tex]

    Has anyone seen this notation, or have any idea what it means?


    Thanks for any help!
     
  3. Jul 15, 2009 #2
    Probably the r.v. [tex]X[/tex] is known to be nonnegative. Then writing [tex]E(X) < \infty[/tex] means what you thought. Otherwise, it is probably an error on the part of the writer. Rarely, I suppose, the writer may wish to allow [tex]E(X) = -\infty[/tex] but not [tex]+\infty[/tex].
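    One way to make this precise, using the standard convention of splitting X into its positive and negative parts:

    [tex]
    X = X^+ - X^-, \qquad E(X) = E(X^+) - E(X^-),
    [/tex]

    which is defined as long as at least one of [tex]E(X^+)[/tex], [tex]E(X^-)[/tex] is finite. Under that convention, [tex]E(X) < \infty[/tex] just says [tex]E(X^+) < \infty[/tex], which still allows [tex]E(X) = -\infty[/tex].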
     
  4. Jul 15, 2009 #3

    statdad

    Homework Helper

    Yes, this means the expectation is finite (real). It is usually left unspoken that it is also greater than negative infinity (I don't know the historical reasons for this).
    The 'd', or 'D', means "has the same distribution as", and is used to show that two random quantities have the same probability distribution. Examples:

    [tex]
    X \overset{d}{=} F(Y)
    [/tex]

    tells you that the r.v. X has the same distribution as F(Y).

    [tex]
    \sqrt{n} \left(\hat{\beta} - \beta_0 \right) \xrightarrow{D} N\left(0, \Sigma^{-1}\right)
    [/tex]

    means that as [tex] n \to \infty [/tex] the distribution of the LHS converges to the normal distribution given on the right.

    As an alert: some writers will use [tex] \mathcal{L} [/tex] rather than d. This

    [tex]
    X \overset{\mathcal{L}}{=} F(Y)
    [/tex]

    means the same as the first example above. The [tex] \mathcal{L} [/tex] stands for the probability law, i.e. the distribution.
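    As a rough numerical illustration of the [tex] \xrightarrow{D} [/tex] arrow, here is a toy sketch of my own in Python/numpy, using the CLT for uniform draws rather than any particular estimator [tex]\hat{\beta}[/tex]:

[code]
import numpy as np

# Toy illustration of convergence in distribution: if X_1, ..., X_n are
# iid Uniform(0,1), then by the CLT
#     sqrt(n) * (Xbar_n - 1/2) / sqrt(1/12)  ->_D  N(0, 1),
# so tail probabilities of the standardized statistic approach N(0,1) values.

rng = np.random.default_rng(0)
sigma = np.sqrt(1 / 12)  # standard deviation of a Uniform(0,1) variable

for n in (5, 50, 500):
    reps = 100_000
    samples = rng.uniform(size=(reps, n))
    stat = np.sqrt(n) * (samples.mean(axis=1) - 0.5) / sigma
    # For Z ~ N(0,1), P(Z > 1) is about 0.1587
    print(n, (stat > 1.0).mean())
[/code]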
     
  5. Jul 15, 2009 #4
    1) So for nonnegative random variables, E(X) < ∞ means E(X) is finite. That makes sense.
    But what about a general random variable X that may take on negative values? What is the standard notation for saying that E(X) is finite?
     
  6. Jul 15, 2009 #5
    2) Then why can't we simply write X=F(Y) [without the "d" above the equal sign] to mean that the random variables X and F(Y) have the same distribution? I don't understand this.

    Thanks for explaining!
     
  7. Jul 15, 2009 #6

    CRGreathouse

    Science Advisor
    Homework Helper

    Two things can have the same distribution without being equal.
     
  8. Jul 15, 2009 #7
    This doesn't seem obvious to me (the subject is still pretty new to me).

    e.g. if X and Y are independent standard normal random variables, does that mean X(ω) = Y(ω) for ALL ω ∈ Ω, and hence X = Y? If not, why not?

    How can two random variables have the same distribution without being equal? Can someone please provide a specific example?

    Thank you!
     
  9. Jul 15, 2009 #8
    Standard notation: [tex]E(|X|) < \infty[/tex]; on an abstract probability space there is no notion of conditional convergence.
     
  10. Jul 18, 2009 #9
    |E(X)| ≤ E(|X|) < ∞
    i.e. -∞ < E(X) < ∞
    So I think it makes sense. Thanks!
     
  11. Jul 18, 2009 #10
    Can someone please help me with this? I would really appreciate it!
     
  12. Jul 18, 2009 #11
    If that were the case, they wouldn't be independent. If you want an example, take two quarters, give one to a friend, and start flipping them. Define a random variable for each flip, one for you and one for your friend (say, Y and F). Their distributions will be the same, but they're unlikely to be equal.
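    A quick simulation makes the point concrete (a rough Python sketch of the two-quarters setup, coding heads as 1 and tails as 0):

[code]
import numpy as np

# Rough sketch of the two-quarters example: Y is your flip, F is your friend's.
# Both are Bernoulli(1/2) (1 = heads, 0 = tails), so they have the same
# distribution, but on any given flip they agree only about half the time.

rng = np.random.default_rng(1)
trials = 100_000
Y = rng.integers(0, 2, size=trials)   # your flips
F = rng.integers(0, 2, size=trials)   # friend's flips, independent of yours

print("P(Y = heads) ~", Y.mean())         # ~0.5
print("P(F = heads) ~", F.mean())         # ~0.5, same distribution as Y
print("P(Y = F)     ~", (Y == F).mean())  # ~0.5, so Y = F fails about half the time
[/code]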
     
  13. Jul 20, 2009 #12
    um...I don't quite understand how your example would demonstrate that X and Y would have the same distribution, but not X=Y.
    For your example, the sample space would be {H,T}.

    Let ω1=H (for head), ω2=T (for tail)
    X(ω1)=0 [the sample outcome of head would be mapped to the real number 0]
    X(ω2)=1 [the sample outcome of tail would be mapped to the real number 1]
    Y(ω1)=0
    Y(ω2)=1

    Then X(ω) = Y(ω) for ALL ω ∈ Ω, right???

    Can you please explain a little bit more about why it isn't the case that X=Y?

    Thank you!
     
    Last edited: Jul 20, 2009
  14. Jul 20, 2009 #13

    statdad

    Homework Helper

    Probability distributions describe the behavior of a random variable, but they do not guarantee that the values will always be the same. To expand on Tibarn's example:

    You and a friend each have a coin, and each of you flips your coin, keeping track of the total number of heads. Suppose you each agree to perform 100 flips. Your random variable (number of heads) has a Binomial(100, 0.5) distribution, and the same can be said for your friend's r.v. This means that (as a particular instance) the probability of seeing at least 57 heads is the same for both of you, but it does not mean that you are guaranteed to obtain exactly the same number of heads.

    Similar comments, a little more technical due to continuity, hold for cases where both random variables are continuous.
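    To put numbers on the 100-flip example above, here is a small sketch using exact binomial arithmetic in Python (via math.comb): the two players share the same P(at least 57 heads), yet the chance they end up with exactly the same count is small.

[code]
from math import comb

n, p = 100, 0.5

def pmf(k):
    """P(Binomial(100, 0.5) = k)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Same distribution: the probability of at least 57 heads is identical for both players.
p_at_least_57 = sum(pmf(k) for k in range(57, n + 1))
print("P(X >= 57) =", p_at_least_57)   # about 0.097, the same for you and your friend

# But "same distribution" does not mean equal counts:
# P(both get exactly the same number of heads) = sum over k of P(X = k)^2.
p_equal = sum(pmf(k) ** 2 for k in range(n + 1))
print("P(X = Y)   =", p_equal)         # about 0.056, nowhere near 1
[/code]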
     
  15. Jul 20, 2009 #14
    I begin to see intuitively why X and Y can have the same distribution with X≠Y from your example, but how can this be justified in terms of ω and Ω?
    X = Y means X(ω) = Y(ω) for ALL ω ∈ Ω. Now I am a little bit confused about what the ω and Ω actually are...
    For your example, what is the sample space Ω? And what is an example of a sample point ω?

    Thanks for your help!
     
  16. Jul 20, 2009 #15

    statdad

    Homework Helper

    From first principles, an outcome to any sequence of 100 flips consists of a sequence of H (for Heads) and T (for Tails) of length 100, so [tex] \Omega [/tex] would be the set of all [tex] 2^{100} [/tex] sequences, from all Ts through all Hs.
    One particular [tex] \omega [/tex] would be this one:

    [tex]
    \omega = \underbrace{HH \cdots H}_{\text{length 50}} \overbrace{TT \cdots T}^{\text{length 50}}
    [/tex]

    for the r.v.s I defined, and for this [tex] \omega [/tex],

    [tex]
    X(\omega) = Y(\omega) = 50
    [/tex]


    Notice the incredible amount of savings we have in the move from the original sample space [tex] \Omega [/tex], which has [tex] 2^{100} [/tex] elements, to the set of values of [tex] X [/tex] (and of course [tex] Y [/tex]) - there are only 101 different values to "keep track of".

    I hope this helps.
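    The same bookkeeping can be written out in full for a tiny version of the experiment, say 3 flips instead of 100 (a small illustrative Python sketch):

[code]
from itertools import product

# Enumerate the sample space for 3 flips: Omega has 2^3 = 8 sequences.
Omega = [''.join(seq) for seq in product('HT', repeat=3)]

def X(omega):
    """X(omega) = number of heads in the sequence omega."""
    return omega.count('H')

for omega in Omega:
    print(omega, X(omega))

# Omega has 8 points, but X takes only 4 distinct values: 0, 1, 2, 3.
print(sorted(set(X(w) for w in Omega)))
[/code]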
     
  17. Jul 20, 2009 #16
    Well now kingwinner has every right to be confused, since if we define [tex]X, Y, \Omega[/tex] the way statdad is suggesting, then indeed [tex]X = Y[/tex]. An example where [tex]X[/tex] and [tex]Y[/tex] are distributed the same, but are not necessarily the same would be, for example, if [tex]X[/tex] counts the number of tails in a sequence of 100 flips, and [tex]Y[/tex] counts the number of heads. The distributions of [tex]X[/tex] and [tex]Y[/tex] are the same, but the random variables themselves are different.
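    Here is a small Python check of that example on 4 flips of a fair coin (a sketch; the choice of 4 flips is just for illustration): the distributions of X (number of tails) and Y (number of heads) agree, yet X and Y are different functions on [tex]\Omega[/tex].

[code]
from itertools import product
from collections import Counter

# Moo Of Doom's example on a small sample space: 4 flips of a fair coin,
# with every sequence in Omega equally likely.
Omega = [''.join(seq) for seq in product('HT', repeat=4)]

X = {w: w.count('T') for w in Omega}   # X = number of tails
Y = {w: w.count('H') for w in Omega}   # Y = number of heads

# With equally likely outcomes, the distribution of each variable is determined
# by how many sequences map to each value.
print(Counter(X.values()) == Counter(Y.values()))   # True: same distribution
print(any(X[w] != Y[w] for w in Omega))             # True: X and Y are different functions
[/code]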
     
  18. Jul 21, 2009 #17

    statdad

    Homework Helper

    Moo, the random variables are defined the same way, but I took Kingwinner's question to be how two variables could have the same distribution but not always be equal in observed value. My example works for that.
     
  19. Jul 21, 2009 #18
    Thanks, it helps and now I understand the meaning of ω and Ω better.

    But by definition, X = Y means X(ω) = Y(ω) for ALL ω ∈ Ω.
    And for your coin example, X and Y have the same distribution and also X(ω) = Y(ω) for ALL ω ∈ Ω, i.e. X = Y.


    But still, I can't think of an example of this happening...how can two random variables have the same distribution without being equal??

    EDIT: I actually missed Moo Of Doom's post before...now I actually see an example of this happening. Thank you!
     
    Last edited: Jul 22, 2009
  20. Aug 8, 2009 #19
    Try to think of random variables as functions from the sample space to another space (usually the n-dimensional reals). So [itex] X:\Omega \rightarrow \mathbb{R}^n [/itex], and the same for Y. When you say [itex]X=Y[/itex], you mean equality in terms of functions.

    A probability measure on a probability space is also a map [itex] \mathbb{P}:F \rightarrow [0,1] [/itex], where F is a sigma algebra on [itex]\Omega[/itex] (basically a collection of subsets). A condition for being a random variable is that the inverse image of each open set lands in F, though usually people impose the condition that the inverse image of each Borel set should land in F.

    The law (or distribution) of a random variable is [itex]\mathbb{P} \circ X^{-1}[/itex]. So the law maps sets (usually Borel sets) to [0,1]. Equality in distribution means that X and Y have the same law. Can you deduce from that that they are indeed the same function?
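    To make those definitions concrete on a tiny space, here is an illustrative Python sketch; the die, the sigma algebra of all subsets, and the particular X and Y are my own hypothetical choices:

[code]
from fractions import Fraction

# A tiny probability space: Omega = faces of a fair die, F = all subsets of Omega,
# and P gives each face probability 1/6.
Omega = [1, 2, 3, 4, 5, 6]
P_point = {w: Fraction(1, 6) for w in Omega}

# Two different functions on Omega (hypothetical choices, just for illustration):
X = {w: w % 2 for w in Omega}          # indicator of "odd face"
Y = {w: (w + 1) % 2 for w in Omega}    # indicator of "even face"

def law(Z):
    """The law P o Z^{-1}: for each value v, the total probability of {w : Z(w) = v}."""
    out = {}
    for w in Omega:
        out[Z[w]] = out.get(Z[w], Fraction(0)) + P_point[w]
    return out

print(law(X) == law(Y))                    # True: same law, each value has probability 1/2
print(all(X[w] != Y[w] for w in Omega))    # True: yet X and Y disagree at every face
[/code]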
     
  21. Aug 8, 2009 #20
    Toss a fair coin 10 times. The random variable X = "number of heads" and the random variable Y = "number of tails" are certainly not equal, but have the same distribution.
    That is ... P(X=k) = P(Y=k) for all k.
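    Explicitly, since Y = 10 − X and the binomial coefficients are symmetric,

    [tex]
    P(Y = k) = P(X = 10 - k) = \binom{10}{10-k}\left(\tfrac{1}{2}\right)^{10} = \binom{10}{k}\left(\tfrac{1}{2}\right)^{10} = P(X = k), \qquad k = 0, 1, \dots, 10,
    [/tex]

    even though, for example, the all-heads outcome gives X = 10 and Y = 0, so X and Y are not equal as random variables.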
     