
Proof involving means in continuous distributions

  1. Apr 25, 2012 #1
    I recall reading somewhere that the mean value of a continuous variable is situated at a point that acts as a fulcrum about which all other values are considered "weights".

    In other words, if we define the mean as

    [tex] μ = \int^{∞}_{-∞} x ρ(x) dx [/tex] (where rho is the probability density)

    then can we prove that

    [tex] \int^{∞}_{μ} |x-μ| ρ(x) dx = \int^{μ}_{-∞} |x-μ| ρ(x) dx [/tex]

    I am not sure my question is very clear, since I don't understand this too well myself, but perhaps someone understands what I mean?
    I'm also not sure the equation is even correct, but my memory tells me I did this a while ago, and my gut tells me my memory is not wrong. :D
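    For what it's worth, the identity above is easy to test numerically. A quick Python sketch (the exponential PDF with rate a = 2 is just an assumed example, as is the hand-rolled midpoint rule):

    ```python
    import math

    # Numeric check of ∫_μ^∞ |x-μ| ρ(x) dx = ∫_{-∞}^μ |x-μ| ρ(x) dx
    # for an assumed example: the exponential PDF ρ(x) = a e^{-ax}, a = 2, mean μ = 1/a.
    a = 2.0
    mu = 1.0 / a

    def rho(x):
        return a * math.exp(-a * x) if x >= 0.0 else 0.0

    def integrate(f, lo, hi, n=200_000):
        # plain midpoint rule; plenty accurate for these smooth integrands
        h = (hi - lo) / n
        return h * sum(f(lo + (i + 0.5) * h) for i in range(n))

    left = integrate(lambda x: abs(x - mu) * rho(x), 0.0, mu)    # weight left of the mean
    right = integrate(lambda x: abs(x - mu) * rho(x), mu, 50.0)  # weight right of the mean
    print(left, right)  # both ≈ e^{-1}/2 ≈ 0.1839
    ```

    Both sides come out equal, which is exactly the "balanced fulcrum" picture.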

    EDIT: Made a big error with the LaTeX. Just fixed it.

    BiP
     
    Last edited: Apr 25, 2012
  3. Apr 26, 2012 #2

    chiro

    Science Advisor

    Hey BiPolarity.

    Basically, the fulcrum argument says that if you look at the mean equation, the contributions from the two sides of the mean balance: [tex]\int^{∞}_{μ} xp(x)dx = \int^{μ}_{-∞}xp(x)dx[/tex]

    Now in terms of proving this, we can use the property that:

    [tex]\int^{∞}_{-∞}xp(x)dx = μ[/tex]. Now we know that [tex]\int^{∞}_{μ} xp(x)dx + \int^{μ}_{-∞}xp(x)dx = \int^{∞}_{-∞}xp(x)dx = μ[/tex]. So if both are equal, then both must equal [tex]\frac{μ}{2}[/tex], which is the intuitive explanation of the fulcrum interpretation.

    Now in terms of showing this as a general property, I'm afraid that, as far as I know, you will need to do it for each PDF individually, since p(x) is a general probability density function.

    What you can do is, for certain types of PDFs like symmetric and even functions, show that this property holds by using the properties of these general classes; but other than that, to prove it for an arbitrary PDF, you will need to plug in the function and prove it directly.

    One approach that I think would be optimal for the general case is to standardize your random variable to have mean 0 and a symmetric distribution about that mean, which would prove it for general symmetric distributions. The trouble is that many distributions are not symmetric under a normal standardization.
     
    Last edited: Apr 26, 2012
  4. Apr 26, 2012 #3

    chiro

    Science Advisor

    After reading your post, it would make sense to standardize your random variable to have mean 0 using the simple transformation [tex]Y = X - μ[/tex], so that you have a zero mean.

    You can then move on to prove the identity [tex]\int^{0}_{-∞}yp(y)dy = \int^{∞}_{0}yp(y)dy[/tex] which removes μ completely from the proof.

    So with regard to your original expression, you need to lose the absolute value signs and you need to use a random variable transformation to get the above result as opposed to a normal integral substitution.
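    The shift itself is easy to sanity-check by simulation. A sketch, assuming X is exponential with rate a = 2 (so μ = 1/a):

    ```python
    import random

    random.seed(0)

    # Standardizing by a shift: if X ~ Exp(a) (assumed example, a = 2, mean 1/a),
    # then Y = X - mu has mean 0, so the fulcrum sits at the origin.
    a = 2.0
    mu = 1.0 / a
    ys = [random.expovariate(a) - mu for _ in range(200_000)]
    mean_y = sum(ys) / len(ys)
    print(mean_y)  # ≈ 0
    ```

    (`random.expovariate` takes the rate, not the mean, so the sample mean of X is 1/a.)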
     
    Last edited: Apr 26, 2012
  5. Apr 26, 2012 #4
    I see. Thanks chiro!

    By the way, I think you mean [tex]\int^{0}_{-∞}yp(y)dy = -\int^{∞}_{0}yp(y)dy[/tex].
    There needs to be a negative sign because if the PDF is standardized, then one side of the fulcrum must have negative values. Or you could put absolute values in front of the y?
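    A numeric sketch of the sign flip, again assuming the exponential example with a = 2 shifted by its mean:

    ```python
    import math

    # After shifting the exponential (assumed example, a = 2) by its mean μ = 1/a,
    # the density is p(y) = a e^{-a(y+μ)} for y > -μ. The two moment integrals
    # about 0 are equal in magnitude but opposite in sign.
    a = 2.0
    mu = 1.0 / a

    def p(y):
        return a * math.exp(-a * (y + mu)) if y >= -mu else 0.0

    def integrate(f, lo, hi, n=200_000):
        # plain midpoint rule
        h = (hi - lo) / n
        return h * sum(f(lo + (i + 0.5) * h) for i in range(n))

    neg = integrate(lambda y: y * p(y), -mu, 0.0)   # ∫_{-∞}^{0} y p(y) dy
    pos = integrate(lambda y: y * p(y), 0.0, 50.0)  # ∫_{0}^{∞} y p(y) dy
    print(neg, pos)  # neg ≈ -0.1839, pos ≈ +0.1839
    ```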

    Oh and another thing, suppose that the probability distribution is not symmetric. Then does the property still hold?


    BiP
     
  6. Apr 26, 2012 #5

    chiro

    Science Advisor

    No, that's not correct.

    Remember that both are the same value: you get the same quantity on the left as on the right, which means they are equal, not negatives of each other.

    As an example, if you have a fulcrum and two equal weights, they are both, say, 2, not 2 and -2. They are balanced just like the fulcrum, and this is expressed with an equality.
     
  7. Apr 26, 2012 #6
    Actually chiro, I would have to disagree. If the PDF is [itex] ρ(x) = e^{-x^{2}} [/itex] (ignoring the normalizing constant), then the left wing of the fulcrum will be the negative of the right wing, so you need to negate one of them to get the other.

    Also, I think even if the probability distribution is not symmetric, the equality in my original post should hold, as long as the mean is defined. I should like to prove this for non-symmetric probability distributions, but I don't know how to.
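    The sign claim is easy to verify numerically for this density. A sketch for the (unnormalized) ρ(x) = e^{-x²}, whose mean is 0:

    ```python
    import math

    # Wings of the (unnormalized) density ρ(x) = e^{-x²} about its mean 0:
    # ∫_{-∞}^{0} x e^{-x²} dx = -1/2 and ∫_{0}^{∞} x e^{-x²} dx = +1/2,
    # so one wing is indeed the negative of the other.
    def integrate(f, lo, hi, n=200_000):
        # plain midpoint rule; the tails beyond |x| = 10 are negligible
        h = (hi - lo) / n
        return h * sum(f(lo + (i + 0.5) * h) for i in range(n))

    left = integrate(lambda x: x * math.exp(-x * x), -10.0, 0.0)
    right = integrate(lambda x: x * math.exp(-x * x), 0.0, 10.0)
    print(left, right)  # ≈ -0.5, +0.5
    ```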

    BiP
     
  8. Apr 26, 2012 #7

    chiro

    Science Advisor

    Yeah, you are right: even if the PDF is symmetric, multiplying by x gives an odd (antisymmetric) function, since x can be both positive and negative. This also means I might be wrong above about the relationship with [tex]\frac{\mu}{2}[/tex]; that will only hold if x is always positive. If x can be negative, you will need to deal with a more general case.

    In terms of non-symmetric distributions, I'm not sure how you prove the general case either: unless the function has a property like symmetry, it's really unconstrained and too general to work with.

    The only way I can think of is if you transform the PDF to something symmetric by introducing a transformation of the random variable and then showing some kind of argument relating the transformed variable and transformed mean back to the original variable and original mean.

    I don't know if the above would work though, but it's the only thing I can think of currently.
     
  9. Apr 26, 2012 #8
    Actually, I think it is true even if x is negative. I have been doing some calculations in Maple with various probability distributions, and the equation I posted in the OP appears to hold no matter the probability distribution, as long as the mean for that distribution is defined, i.e. as long as
    [tex] \int^{∞}_{-∞}xρ(x)dx[/tex] is finite. Then the "idea" of the mean being the fulcrum must be true. I can only say this out of intuition, however.
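    In fact the intuition can be made rigorous in one line: since [tex]\int^{∞}_{-∞}(x-μ)ρ(x)dx = μ - μ = 0[/tex], splitting the integral at μ and moving the (non-positive) left piece across the equals sign gives exactly the identity in the OP, for any distribution with a finite mean. A sketch checking it on a distinctly non-symmetric density, ρ(x) = 3x² on [0, 1] (an assumed example, mean μ = 3/4):

    ```python
    # Fulcrum identity for an asymmetric density with no symmetry to exploit:
    # ρ(x) = 3x² on [0, 1] (assumed example), whose mean is μ = 3/4.
    def rho(x):
        return 3 * x * x if 0.0 <= x <= 1.0 else 0.0

    def integrate(f, lo, hi, n=200_000):
        # plain midpoint rule
        h = (hi - lo) / n
        return h * sum(f(lo + (i + 0.5) * h) for i in range(n))

    mu = integrate(lambda x: x * rho(x), 0.0, 1.0)           # = 3/4
    left = integrate(lambda x: (mu - x) * rho(x), 0.0, mu)   # weight left of the fulcrum
    right = integrate(lambda x: (x - mu) * rho(x), mu, 1.0)  # weight right of the fulcrum
    print(mu, left, right)  # left ≈ right ≈ 0.0791
    ```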

    However, I myself do not know that much about probability distributions. Could you recommend some PDFs that I can perform calculations on, just to see whether or not it's correct?

    EDIT: If you look at the equation in my OP, the absolute value is designed to account for the negative values of x - μ left of the mean.

    BiP
     
    Last edited: Apr 26, 2012
  10. Apr 26, 2012 #9

    chiro

    Science Advisor

    No, the negative property doesn't arise when x is always positive: p(x) is always positive, so xp(x) is always positive, which means both the LHS and the RHS integrals are positive. Again, think about having 2 and dividing it into 1 on the LHS and 1 on the RHS, because the integral for both parts is positive. It's when x can be negative that you need to consider the negative case you mentioned (like the normal distribution).

    In terms of distributions, I'd try the exponential, normal, and uniform for starters: basically anything where you can integrate out the mean expression to get an analytic answer, so that you can check it symbolically. Set up the integral and see what you get.
     
  11. Apr 26, 2012 #10
    I actually do not know the PDFs for those distributions myself; could you list the actual functions?

    BiP
     
  12. Apr 26, 2012 #11

    chiro

    Science Advisor

    Google is your friend ;), but for these distributions you have:

    [itex]U(x) = \frac{1}{b-a}[/itex] for b > a and a < x < b, which is the uniform distribution.
    [itex]E(x) = ae^{-ax}[/itex] for x > 0 and a > 0, which is the exponential distribution.
    [itex]N(x) = \frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}}[/itex], which is the standard normal N(0,1).
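    A sketch checking the fulcrum identity numerically for each of these three PDFs, with assumed parameters (a, b) = (0, 1) for the uniform and a = 1 for the exponential:

    ```python
    import math

    # Check ∫_{-∞}^{μ}(μ-x)ρ dx = ∫_{μ}^{∞}(x-μ)ρ dx for the three PDFs above,
    # with assumed parameters: uniform on (0,1), exponential a = 1, standard normal.
    # Each tuple is (pdf, effective lower limit, effective upper limit).
    pdfs = {
        "uniform": (lambda x: 1.0 if 0.0 < x < 1.0 else 0.0, 0.0, 1.0),
        "exponential": (lambda x: math.exp(-x) if x > 0.0 else 0.0, 0.0, 40.0),
        "normal": (lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi), -12.0, 12.0),
    }

    def integrate(f, lo, hi, n=100_000):
        # plain midpoint rule
        h = (hi - lo) / n
        return h * sum(f(lo + (i + 0.5) * h) for i in range(n))

    results = {}
    for name, (p, lo, hi) in pdfs.items():
        mu = integrate(lambda x: x * p(x), lo, hi)
        left = integrate(lambda x: (mu - x) * p(x), lo, mu)
        right = integrate(lambda x: (x - mu) * p(x), mu, hi)
        results[name] = (left, right)
        print(name, left, right)  # left ≈ right in every case
    ```

    The two sides agree for all three, symmetric or not, consistent with the claim earlier in the thread.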
     
    Last edited: Apr 26, 2012