
Conditional PDF with multiple random variables

  1. Sep 26, 2011 #1
    1. The problem statement, all variables and given/known data

    D = (L + E) / S
    Where L, E, and S are mutually independent random variables that are each normally distributed.

    I need to find (symbolically), the conditional PDF f(d|s).

    2. Relevant equations

    3. The attempt at a solution

    Not sure what to do with so many variables... I'm guessing that I can treat "s" as a constant since it's "given" for the conditional PDF. I also know that adding L + E will result in a normally distributed random variable. So D is also a random variable, right?

    I tried to use Bayes' Rule and also the definition of conditional probability - didn't help.

    I would be willing to bet that I need to integrate something...

    THANK YOU for any guidance you can provide!
  3. Sep 26, 2011 #2

    You seem to be generally confused here; let me go through your post:

    You shouldn't guess, of course

    Why? Is anything at all given about L and E? What if they were both 0, then the sum is 0 and that's hardly a normal distribution.

    I don't see how this follows from your previous point, but it is true; it should also be more obvious to you why it is true.

    Of course, I don't mean to come across as belittling: my point is that you seem generally confused, and I want to help. When I'm confused, I find it always helps to go back to the definitions. For example, go back to the definition of a random variable, and verify and understand why D is indeed a random variable.
    Then look up the definition of something like f(d|s). Most probably your book will have defined it as
    [tex]f(d|s) = \frac{f(d,s)}{f(s)}[/tex]
    so by definition we need f(d,s) and f(s). [You could also try something like Bayes' rule, as you did, but that also involves quantities of the form f(x|y), so you can guess that it won't simplify matters.]

    Now the question becomes: how do I get f(d,s) and f(s)? Your integration radar is indeed correct: it is the clue to seeing that we only need to get f(d,s). Is it obvious how f(s) follows from f(d,s)?
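    (For reference: the marginal of S comes from integrating the joint density over d,
    [tex]f(s) = \int_{-\infty}^{\infty} f(d,s)\,dd[/tex]
    which is the standard way to recover a marginal from a joint PDF.)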

    Now the problem has been reduced to "what is the joint probability distribution of D and S?".

    You have most probably seen a method for getting the probability distribution of functions of random variables. If this doesn't ring a bell, reread your course notes and give it some thought :)

    I'll be here if you have more questions.
  4. Sep 26, 2011 #3
    I appreciate your attempt to help, but I did indeed find your comments a bit condescending. You seemed to be aware of your tone, but I wanted to confirm.

    I know the definition of conditional probability, of course, AND attempted to apply it - mentioned in my original post.

    If this problem had two random variables, I would be good to go. I am confused because there are FOUR variables, D being dependent upon L, E, and S (recall - L, E, S given as having a normal distribution).

    To answer your question - what I meant was that if I add the normally distributed RVs L and E, I will get another normally distributed RV. Sorry for being unclear.

    If I use the standard definition of conditional probability for PDFs, then yes, I need to find the joint PDF f(d,s). I don't know how to do that from D = (L + E)/S. I already have f(s), since S is given as normally distributed, so I do not need to derive it by integrating f(d,s) over d.

    I welcome any other advice you have.
  5. Sep 27, 2011 #4
    Good point, I erred there.

    Also, I need to apologize for my line: "Why? Is anything at all given about L and E? What if they were both 0, then the sum is 0 and that's hardly a normal distribution."
    I had overlooked you stating "Where L, E, and S are mutually independent random variables that are each normally distributed."!

    Anyway, back to the problem:

    Have you seen something of the following in your course: given two random variables X & Y with density f(X,Y), then if g and h are functions such that g(X,Y) and h(X,Y) are again random variables, we can express the joint probability distribution of g(X,Y) and h(X,Y) in terms of f (the formula uses the Jacobian)?
    If this sounds very unfamiliar, then probably a less general approach will suffice, but I wanted to check this first.
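    To make this concrete, here is a sketch under the assumption that your course covers that bivariate change-of-variables formula (the symbol T is mine, not from the problem). Write T = L + E, which is normal because L and E are independent normals, and transform (T, S) into (D, S) with D = T/S. The inverse map is (T, S) = (DS, S), whose Jacobian determinant is s, so by independence of T and S
    [tex]f_{D,S}(d,s) = f_T(ds)\, f_S(s)\, |s|[/tex]
    Dividing by f_S(s) then gives f(d|s) = f_T(ds)|s|, which is exactly the density of (L + E)/s with s treated as a constant.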
  6. Sep 27, 2011 #5

    Ray Vickson (Science Advisor, Homework Helper)

    Your "guess" is correct, and it is easy to justify, depending on how you define conditional densities. Take the case of a trivariate density f(x,y,z), and suppose we *define* the conditional density f(x,y|z) via
    [tex]f(x,y|z)\,dx\,dy = \lim_{h \to 0} \Pr\{x < X < x+dx,\; y < Y < y+dy \mid z < Z < z+h\}[/tex]
    Then, in the limit as h goes to zero, we get f(x,y|z) = C*f(x,y,z), where C depends only on z (it is a constant as far as x and y are concerned). Basically, C is a normalization constant that ensures the conditional PDF integrates to 1 over x and y. Of course, C = 1/f_Z(z), where f_Z is the marginal density of Z.
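    This is easy to sanity-check numerically. The sketch below (in Python; the parameter values are invented for illustration and are not from the original problem) conditions on S lying in a narrow band around a point s0 by rejection sampling, then compares the sample mean and standard deviation of D = (L + E)/S against the values predicted by the conditional density, i.e. those of (L + E)/s0.

```python
import math
import random

random.seed(0)

# Invented parameters, for illustration only (not from the original problem).
mu_L, sd_L = 1.0, 0.5
mu_E, sd_E = 2.0, 0.3
mu_S, sd_S = 4.0, 0.2   # S kept well away from 0 so D is well behaved

s0, h = 4.0, 0.01       # condition on S in a narrow band around s0

# Rejection sampling: keep a draw of D only when S falls close to s0.
samples = []
while len(samples) < 5000:
    s = random.gauss(mu_S, sd_S)
    if abs(s - s0) < h:
        d = (random.gauss(mu_L, sd_L) + random.gauss(mu_E, sd_E)) / s
        samples.append(d)

mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / (len(samples) - 1)

# Theory: given S = s0, D is normal with mean (mu_L + mu_E)/s0
# and standard deviation sqrt(sd_L**2 + sd_E**2)/s0.
print("mean:", mean, "theory:", (mu_L + mu_E) / s0)
print("sd:  ", math.sqrt(var), "theory:", math.sqrt(sd_L**2 + sd_E**2) / s0)
```

    Each printed pair should agree to roughly two decimal places; shrinking h and increasing the sample count tightens the agreement.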

    Last edited: Sep 27, 2011