Troubling Lemma

  1. Feb 3, 2005 #1

    cepheid


    I'm trying to figure out what is going on with the proof of the following lemma as presented by my prof. There seem to be some notational inconsistencies. I also found myself questioning my understanding of integration variables:

    Lemma

    Suppose X is a continuous random variable with probability density function f.
    f(x) = 0 whenever x < 0. Then:

    [tex] E[X] = \int_0^{\infty}{P(X>x)dx} [/tex]

    Proof

    [tex] \int_0^{\infty}{P(X>x)dx} = \int_0^{\infty}{\left[\int_x^{\infty}{f(y)dy}\right]dx} [/tex]

    So he substituted an expression for the probability (the integrand) which is *itself* an integral of the density function. Now, before I continue, I'd like to make sure I understand the introduction of the dummy variable y. Basically, we want the probability in question, but x is unspecified. The limit of integration is a variable...so the resulting probability will be expressed as a function of x, which is what we want. The question arises...what are we integrating the density function with respect to? As I understand it: we are STILL integrating the density function wrt to the values (between x and infinity) that X can take on. So the independent variable represents the same quantity in principle. We are just denoting it by 'y' to distinguish it from the lower limit of those values, x. P is only a function of that lower limit, not of all of those values over which we integrated. They are gone because we integrated over them. Am I right?
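
    Put another way, the name of the dummy variable shouldn't matter at all:

    [tex] P(X>x) = \int_x^{\infty}{f(y)\,dy} = \int_x^{\infty}{f(u)\,du} [/tex]

    Whatever we call it, the result is a number that depends only on the lower limit x.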

    Next, the prof noted that rather than integrating the density fcn. from x to infinity, we could integrate across the whole domain of the density function, provided we introduced something to sift out the undesirable values of f(y) for which y <= x. So he introduced an indicator variable:


    [tex] \text{Let} \ \ I_{y>x} = \left\{\begin{array}{cc}1 & \text{if} \ \ y > x \\ 0 & \text{if} \ \ y \leq x \end{array}\right. [/tex]
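
    So for each fixed x, the indicator just switches off the part of the integrand we don't want:

    [tex] \int_x^{\infty}{f(y)dy} = \int_0^{\infty}{I_{y>x}\,f(y)dy} [/tex]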

    Substituting this into the double integral and swapping the order of integration, the preceding integral becomes:

    [tex] \int_0^{\infty}{\int_0^{\infty}{I_{y>x} f(y) dy}dx} = \int_0^{\infty}{dy\,f(y)\int_0^{\infty}{dx\,I_{y>x}}} [/tex]

    I was going to ask a question about this step, but based on my previous thoughts, I think I now understand the separation of the two integrals w.r.t. x and y. y is independent of x: f(y) has some value regardless of the value of x, the "marker point" for the integration of the density fcn.
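
    Writing that rightmost integral out for a fixed y > 0:

    [tex] \int_0^{\infty}{I_{y>x}\,dx} = \int_0^{y}{dx} = y [/tex]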

    That's why the rightmost integral becomes y: the integrand is non-zero only when x is less than y, so we are really just integrating dx from 0 to y, which gives y - 0 = y. Substituting this back in, the integral becomes:

    [tex] \int_0^{\infty} {yf(y)dy} = E[Y] [/tex]

    [tex] \text{Q.E.D} [/tex]

    What's up with that last line!? There is NO random variable called 'Y' anywhere in this lemma. We were trying to prove that it was E[X]! Furthermore, I have no problem believing that that integral in the last line *IS* E[X]. After all, f is the probability density function of X. It doesn't matter what symbol we use...y still represents the possible values that X can take on i.e. y is an argument of f. I think it should say E[X], and that the prof just absentmindedly put E[Y] when he got to the end of his proof, because he saw 'integral of f of y dy " on the board. However, I would like that, and all of my other inferences, confirmed independently.
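
    Just to convince myself numerically (this isn't from the notes; I'm picking the exponential distribution with rate 1 purely as an example, since it's supported on [0, infinity) and has E[X] = 1), both sides of the lemma come out the same:

    [code]
# Sanity check of E[X] = integral from 0 to infinity of P(X > x) dx,
# using a rate-1 exponential distribution as an illustrative example.
import numpy as np
from scipy import integrate, stats

dist = stats.expon()  # density f(y) = exp(-y) for y >= 0, so E[X] = 1

# Left-hand side: E[X] computed directly from the density.
lhs, _ = integrate.quad(lambda y: y * dist.pdf(y), 0, np.inf)

# Right-hand side: integral of the tail probability P(X > x) = 1 - F(x).
rhs, _ = integrate.quad(lambda x: dist.sf(x), 0, np.inf)

print(lhs, rhs)  # both print 1.0 (up to quadrature error)
    [/code]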

    Thanks.
     
  3. Feb 3, 2005 #2

    matt grime


    I really am not sure I can see what the problem is. You seem to be creating more issues than you need to.

    P(X>x) is a function of x, perhaps more usually seen as 1-F(x) where F is the (cumulative) probability distribution function. It is itself an integral int_x^inf f(y)dy.
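
    In one line:

    [tex] P(X>x) = 1 - F(x) = \int_x^{\infty}{f(y)\,dy} [/tex]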

    You do not integrate things "wrt to values between"; that makes no sense. You are integrating f(y) with respect to y.

    This seems to be about you not being happy with integration, rather than probability.
    The last line is indeed a slip; it should be E(X). Lecturers make slips all the time. We don't mean to, but we do. And I promise you we don't notice half the time. I've several times said "minus one" and written "one". Actually I think I must have said "negative one", cos the students had a hard time understanding the "Brit".
     
  4. Feb 3, 2005 #3

    mathwonk


    the reason i never say "negative 1", is my students say also "negative x", for -x, and assume that it must indeed always be negative.

    i.e. for me, "negative" can only be properly used in the sense "x is negative". or "x is not negative". the word "negative" does not modify an arbitrary number.

    i.e. it does not to me mean "opposite sign", only "less than zero".
     
  5. Feb 3, 2005 #4

    cepheid


    I'll try rephrasing it:

    "we are still integrating the density function between x and infinity w.r.t. some variable (called y) that represents the values that X can take on."

    Is that more accurate?

    Yeah, I'm not disputing that. That's what I tried to convey before... that this particular example made it clear I needed to brush up on some concepts. I just posted my thoughts to make sure I was on the right track with all my interpretations.

    Also, I have no problem with the prof making a mistake. I do realise that lecturers are human and slip up sometimes. That's not what I was getting all excited about. I was just worried: what if I was wrong? What if it wasn't an error on the prof's part? Then I would have been totally lost. It's mildly distressing to be presented with something that makes no sense, and wonder whether it's just a mistake, or whether it's ME. :biggrin:

    Thanks for all your help
     
    Last edited: Feb 3, 2005
  6. Feb 4, 2005 #5

    matt grime


    I guess so, though I'm still a little confused as to the confusion.

    Let's try this.

    Given a cts r.v. X with p.d.f. f, what is the probability that X>x for x in R in terms of f?
     
  7. Feb 4, 2005 #6

    cepheid


    I think I get it now. To answer your question, P(X>x) in terms of f is:

    [tex] \int_x^{\infty}{f(y)dy} [/tex]

    I see your point. What's the big deal? y is the argument of f; f is a function of y, and f(y) is just f evaluated at y. The stated probability, on the other hand, is a function of x. Case closed.

    I figured out how the confusion arose. Here is the def'n of a cts rv given in our notes:

    X is a continuous r.v. if [itex] \exists [/itex] a function f(x), ([itex] x \in \mathbb{R} [/itex]), with f(x) [itex] \geq [/itex] 0, such that:

    [tex] P(X \in B) = \int_B{f(x)dx} [/tex]

    Later on he gave the definition of the cumulative distribution function (cdf):

    [tex] F_{X}(a) = P(X \leq a) = P(X \in (-\infty, a]) [/tex]

    [tex] = \int_{-\infty}^a{f(x)dx} [/tex]

    So I was confused because initially he used x for the argument of f, and it was only later on (when he needed to make the upper limit of that integral for the cdf a variable) that he started using x for that. But that's not his fault (I actually like this prof, he's very good). I was just being *stupid*. It didn't occur to me immediately that when referring to f and F in the same context, their arguments would be given by two different variables, the latter being the upper limit of integration in the former.
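
    i.e. it's clearer (to me, at least) if the dummy variable in the cdf gets a name of its own, say t:

    [tex] F_X(x) = P(X \leq x) = \int_{-\infty}^{x}{f(t)\,dt} [/tex]

    Then there's no clash between the argument of F and the integration variable.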
     
    Last edited: Feb 4, 2005
  8. Feb 4, 2005 #7

    matt grime


    Good notation is the secret of good exposition. If anyone ever figures out how to do it maths'll be easy to learn. But where's the fun in that?
     