Statistics: E(X) = Integral(0 to infinity) of (1-F(x))dx

  1. "If X is non-negative, then E(X) = Integral(0 to infinity) of (1-F(x))dx, where F(x) is the cumulative distribution function of X."


    First of all, does X have to be a continuous random variable here, or does the above result hold for both continuous and discrete X?

    Secondly, the source that states this result gives no proof of it, and I searched the internet but was unable to find one. I know that by definition, since X is non-negative (and assuming it has a density), E(X) = Integral(0 to infinity) of x f(x) dx, where f(x) is the density function of X. What's next?

    Thanks for any help!
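    A quick numerical sanity check of the identity (a sketch with my own choice of distribution and tools — an exponential random variable via SciPy; none of this is from the original question):

    [code]
    # Check E(X) = Integral(0 to infinity) of (1 - F(x)) dx numerically.
    import numpy as np
    from scipy import integrate, stats

    X = stats.expon(scale=0.5)  # exponential distribution with E(X) = 0.5
    tail_integral, _ = integrate.quad(lambda x: 1.0 - X.cdf(x), 0, np.inf)

    print(X.mean())       # 0.5
    print(tail_integral)  # ~0.5
    [/code]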
  2. Hurkyl

    Well, the thing you know has an x in it, but the thing you're trying to get to doesn't... and the thing you're trying to get to has an F in it, but the thing you know has the derivative of F in it....
  3. This works for discrete, continuous, and mixed random variables, though the statement involving the derivative of F applies to the continuous case only. Use integration by parts for the continuous case.
  4. But for a discrete random variable X, would it still make sense to talk about "integration" (i.e. Integral(0 to infinity) of (1-F(x)) dx)? Or should it be replaced by a (sigma) sum?

    Do you mean using integration by parts on the expression in the definition of E(X)? What should I let u and dv be?

  5. No, in the discrete case you would use sums instead of integrals, since the expectation is then defined in terms of a sum, not an integral. For the continuous case: if u = 1 - F(x), then du = -f(x) dx, and if dv = dx, then v = x. So you have x*S(x) (evaluated between your limits) + Integral(x*f(x) dx), where S(x) = 1 - F(x) is the survival function. The first part vanishes: at "infinity", S(x) -> 0 fast enough that x*S(x) -> 0, and at 0, x*S(x) = 0. So you are left with the usual definition of E(X).

    Note: this is a very handwavy proof, as you would really want to be rigorous about the limit that makes the first term vanish.
  6. Hurkyl

    It does when you learn measure theory. Until then, just replace it with a sum without thinking about it.
  7. Hurkyl

    Did you think about that question at all? I practically told you what u and dv should be in post #2....
  8. Just one point I am having trouble with:

    [tex]\lim_{x\rightarrow\infty} x\,(1-F(x))[/tex]

    This gives "infinity times 0", which is an indeterminate form and suggests L'Hopital's Rule. I tried many different ways but was still unable to figure out what the limit is going to be. How can we prove that the limit is equal to 0?
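    (A sketch of the standard argument, under the assumption — taken for granted in the thread — that E(X) is finite: for nonnegative X,

    [tex]0 \le x\,(1-F(x)) = x\int_x^{\infty}f(t)\,dt \le \int_x^{\infty}t\,f(t)\,dt \rightarrow 0 \quad \text{as } x\rightarrow\infty,[/tex]

    because the tail of the convergent integral [tex]E(X)=\int_0^{\infty}t\,f(t)\,dt[/tex] must vanish. No L'Hopital's Rule is needed.)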

  9. HallsofIvy

    Hurkyl, you don't "need" measure theory to write a sum as an integral. The Riemann-Stieltjes integral will do that.
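    For instance (a quick illustration in my own notation, not from the original post): if X is discrete, taking values [tex]x_k \ge 0[/tex] with probabilities [tex]p_k[/tex], then F jumps by [tex]p_k[/tex] at each [tex]x_k[/tex], and the Stieltjes integral reduces to the familiar sum:

    [tex]E(X)=\int_0^{\infty}x\,dF(x)=\sum_k x_k\,p_k.[/tex]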
  10. Another way to do it is to write the expected value as

    [tex]E[X]=\int_{0}^{\infty}sf(s)ds = \int_{s=0}^{\infty}\int_{x=0}^{s}f(s)dxds[/tex]

    and then change the order of the integrals to get your formula. To see what the new bounds on the integrals would be, draw a picture of the region of integration. You can use this same approach to find that

    [tex]E[X^2] = \int_{0}^{\infty}s^2 f(s)\,ds = \int_{s=0}^{\infty}\int_{x=0}^{s}2x\,f(s)\,dx\,ds = \int_{0}^{\infty}2x\,(1-F(x))\,dx,[/tex]

    which is also valid for X nonnegative.
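    Carrying out the swap explicitly for the first formula (the picture described above is the region [tex]0 \le x \le s < \infty[/tex]):

    [tex]E[X]=\int_{s=0}^{\infty}\int_{x=0}^{s}f(s)\,dx\,ds=\int_{x=0}^{\infty}\int_{s=x}^{\infty}f(s)\,ds\,dx=\int_{0}^{\infty}(1-F(x))\,dx.[/tex]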
  11. Or, equivalently, write the probability distribution as a sum of delta functions.
  12. Integration by parts:

    [tex]m_X=\int^{\infty}_{0} x\,f_X(x)\,dx = -\int^{\infty}_{0} x\,(-f_X(x))\,dx[/tex]   (1)

    Let [tex] u=x [/tex] and [tex]dv = -f_X(x) dx [/tex]

    Thus [tex] du=dx [/tex] and [tex] v = 1-F_X(x) [/tex]

    Check that [tex] dv/dx = d/dx\,(1-F_X(x)) = -f_X(x) [/tex], o.k.

    Then substitute in (1)

    [tex]m_X=-[uv|^{\infty}_{0}-\int^{\infty}_{0}vdu] [/tex]

    [tex]m_X=-[x[1-F_X(x)]|^{\infty}_{0}]+\int^{\infty}_{0}[1-F_X(x)]dx [/tex]

    The first term is zero at x = 0. As [tex] x\rightarrow\infty[/tex], [tex]1-F_X(x)[/tex] tends to zero faster than [tex] x [/tex] grows (provided E(X) is finite; see the tail bound sketched under post 8 above), and thus [tex] x[1-F_X(x)]\rightarrow0 [/tex].


    [tex]m_X=\int^{\infty}_{0}[1-F_X(x)]dx [/tex]


  13. [tex]\begin{aligned}
    E[X] &= E\bigg[\int_0^X 1\,dx\bigg]\\
    &= E\bigg[\int_0^\infty 1_{\{X>x\}}\,dx\bigg]\\
    &= \int_0^\infty E[1_{\{X>x\}}]\,dx\\
    &= \int_0^\infty P(X > x)\,dx\\
    &= \int_0^\infty (1 - F(x))\,dx.
    \end{aligned}[/tex]

    By the way, this formula is true no matter what kind of random variable X is, and we do not need anything more than freshman calculus to understand the integral on the right-hand side. (We need neither measure theory nor Stieltjes integrals.) Even when X is discrete, the function 1 - F(x) is still at least piecewise continuous, so the integral makes perfectly good sense, even when understood as a good old-fashioned Riemann integral.
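    To illustrate the discrete case numerically (a sketch with my own choice of distribution, X ~ Poisson(3); for integer-valued X, 1 - F(x) is constant on each interval [k, k+1), so the Riemann integral collapses to a sum of tail probabilities):

    [code]
    # For integer-valued X >= 0:
    #   Integral(0 to infinity) of (1 - F(x)) dx = sum over k >= 0 of P(X > k).
    import numpy as np
    from scipy import stats

    X = stats.poisson(3)
    ks = np.arange(0, 200)       # truncate the sum far out in the tail
    tail_sum = np.sum(X.sf(ks))  # sf(k) = P(X > k) = 1 - F(k)

    print(X.mean())  # 3.0
    print(tail_sum)  # ~3.0
    [/code]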