
Mixed Conditional PDFs

  1. Sep 7, 2008 #1
    I ran across this identity for a conditional PDF where the dependent random variable X is continuous and the independent variable N is discrete:

    [tex]\frac{P(x < X < x + dx \mid N = n)}{dx} =\frac{P(N=n \mid x < X < x + dx)}{P(N=n)}\,\frac{P(x < X < x + dx)}{dx}[/tex]

    In the limit as dx approaches 0 this yields:

    [tex]f_{X|N}(x|n) =\frac{P(N=n|X=x)}{P(N=n)}f(x)[/tex]

    I think I understand the 2nd step, but not the initial identity. The reversing of the conditions (from X dependent on N to N dependent on X) reminds me of Bayes Law, but if he is using Bayes Law here, it is not clear to me exactly how. Could someone help me understand this identity?
  3. Sep 8, 2008 #2
    The dx is probably misleading here, the first identity is a Newton quotient,
    [tex]\frac{Pr(x<X<x+y)}{y}=\frac{Pr(X<x+y)-Pr(X<x)}{y}[/tex]
    As y tends to 0, this gives the derivative. You can condition this on N, but I still don't see why. The second identity comes from Bayes' theorem, as you said, i.e.
    [tex]Pr(A|B)=\frac{Pr(B|A) Pr(A)}{Pr(B)}[/tex]
    If you let
    [tex] A=\{x<X<x+y\} \quad B=\{N=n\}[/tex]
    you get exactly the right-hand side of the equation.
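    As a numerical sketch of the identity (my own illustration, not from the thread; the prior, parameters, and midpoint-rule integration are my choices), take X uniform on (0,1) and N | X=x binomial with n+m trials:

```python
import math

# Sketch (my own, not from the thread) of the identity
#   f_{X|N}(x|n) = P(N=n | X=x) * f(x) / P(N=n)
# for X ~ Uniform(0,1) and N | X=x ~ Binomial(n+m, x).

n, m = 3, 2
trials = n + m

def p_n_given_x(x):
    # P(N = n | X = x): binomial pmf evaluated at n successes
    return math.comb(trials, n) * x**n * (1 - x)**m

# Marginal P(N = n) = integral over (0,1) of P(N=n | X=x) f(x) dx,
# with f(x) = 1; approximated by a midpoint rule.
K = 100_000
p_n = sum(p_n_given_x((i + 0.5) / K) for i in range(K)) / K

def posterior(x):
    # f_{X|N}(x|n) via the identity; f(x) = 1 on (0,1)
    return p_n_given_x(x) / p_n

# Sanity checks: for the uniform prior P(N=n) = 1/(n+m+1) exactly,
# and the conditional PDF must integrate to 1.
total = sum(posterior((i + 0.5) / K) for i in range(K)) / K
print(round(p_n, 4), round(total, 4))  # prints: 0.1667 1.0
```

    The marginal in the denominator is just the conditional pmf averaged over the prior, which is how Bayes' theorem produces a genuine density on the left-hand side.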
  4. Sep 8, 2008 #3
    Thanks for this reply!!

    How about the right hand term? I refer to this:

    [tex]\frac{Pr(x < X < x + dx)}{dx}[/tex]

    This term gives the non-conditional marginal PDF f(x), and the point of the identity seems to be to show that you can represent a conditional PDF as a product of a non-conditional PDF and the RHS of Bayes Theorem. But where does that righthand term come from?
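    That righthand term is just the definition of the density as a limit of the Newton quotient. A quick check (my own example, with an exponential X chosen purely for illustration):

```python
import math

# Check (my own example) that Pr(x < X < x+dx)/dx approaches f(x) as dx -> 0.
# Take X exponential with rate 1: F(x) = 1 - exp(-x), f(x) = exp(-x).

def F(x):
    return 1 - math.exp(-x)

x = 0.7
f_exact = math.exp(-x)
for dx in (0.1, 0.01, 0.001):
    # Newton quotient of the CDF over a shrinking interval
    quotient = (F(x + dx) - F(x)) / dx
    print(dx, round(quotient, 4), round(f_exact, 4))
```

    The quotient visibly converges to f(x) as dx shrinks, which is all the righthand term is doing inside the identity.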

    Here's a problem this identity is reputed to solve. Consider n + m trials having a common probability of success. Suppose, however, that this success probability is not fixed in advance but is chosen from a uniform (0,1) population. We want to determine the conditional PDF of the success probability given that the n + m trials result in n successes.

    Using the identity established above:

    [tex]f_{X|N}(x|n) =\frac{P(N=n|X=x)}{P(N=n)}f(x)[/tex]

    we have

    [tex]f_{X|N}(x|n) = \frac{\binom{n+m}{n}x^n (1-x)^m }{P(N=n)}f(x)=cx^n (1-x)^m [/tex]

    (here f(x) = 1 on (0,1), since X is uniform)

    which is the PDF of a beta random variable with parameters n + 1 and m + 1.

    This is supposed to show that when the success probability of a trial is uniformly distributed over (0,1) prior to the collection of data, then given n successes in n + m trials the posterior or conditional PDF is a beta distribution with parameters n + 1 and m + 1.
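    One can check this numerically (a sketch of my own; the parameter values and midpoint-rule integration are my choices): the constant c that normalizes x^n (1-x)^m is the Beta(n+1, m+1) constant (n+m+1)!/(n! m!), and it coincides with the binomial coefficient divided by P(N=n) = 1/(n+m+1).

```python
import math

# Sketch (my own check): c * x^n * (1-x)^m with
#   c = (n+m+1)! / (n! * m!)
# is the Beta(n+1, m+1) density, and this c equals
# binom(n+m, n) / P(N=n) with P(N=n) = 1/(n+m+1).

n, m = 4, 3
c = math.factorial(n + m + 1) / (math.factorial(n) * math.factorial(m))

# Same constant as the binomial coefficient over the marginal P(N=n):
assert abs(c - math.comb(n + m, n) * (n + m + 1)) < 1e-9

def posterior(x):
    return c * x**n * (1 - x)**m

# The density integrates to 1 over (0,1) (midpoint rule).
K = 100_000
total = sum(posterior((i + 0.5) / K) for i in range(K)) / K
print(round(total, 4))  # prints: 1.0
```

    Since c is forced by the normalization requirement, everything independent of x (the binomial coefficient and P(N=n)) can be folded into it.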
  5. Sep 8, 2008 #4
    Please disregard my question about the righthand term. I'm used to seeing Bayes' theorem with an AND term in the numerator, so my left brain took over and I did not *see* the conditional term, which, of course, explains why you need the righthand term.


    However, I am also not sure I understand how he applies the identity to obtain the solution of the example problem. Once x is established, the number of successes is modeled by the binomial distribution. O.k. But he feels free to absorb the binomial coefficient and the denominator into a constant c, as if it were irrelevant to the main point of showing that this is a beta distribution.


    Why is the author so dismissive of the binomial coefficient and the denominator?