Understanding Mixed Conditional PDFs in Continuous and Discrete Variables

  • Context: Graduate
  • Thread starter: pluviosilla
  • Tags: Conditional, Mixed

Discussion Overview

The discussion centers on mixed conditional probability density functions (PDFs) arising when a continuous random variable is paired with a discrete one. Participants explore the derivation and implications of a specific identity involving conditional PDFs and Bayes' theorem, and its application to a problem involving trials with a variable success probability.

Discussion Character

  • Technical explanation
  • Mathematical reasoning
  • Debate/contested

Main Points Raised

  • One participant presents an identity for a conditional PDF involving a continuous variable X and a discrete variable N, expressing uncertainty about the initial identity and its relation to Bayes' theorem.
  • Another participant suggests that the initial identity resembles a Newton quotient and questions the conditioning on N, while confirming that the second identity aligns with Bayes' theorem.
  • A third participant seeks clarification on the right-hand term of the identity, noting its connection to the non-conditional marginal PDF and its role in representing a conditional PDF as a product involving Bayes' theorem.
  • This participant also introduces a specific problem regarding trials with a common probability of success, aiming to derive the conditional PDF for success given a certain number of successes.
  • A later reply retracts a previous question about the right-hand term, acknowledging a misunderstanding related to Bayes' theorem and the conditional term.
  • The same participant expresses confusion about the treatment of the binomial coefficient and the denominator in the example problem, questioning why they are considered irrelevant to the main point of demonstrating a beta distribution.

Areas of Agreement / Disagreement

Participants express varying levels of understanding and confusion regarding the application of the identity and the treatment of terms in the example problem. No consensus is reached on the clarity of the derivation or the significance of certain components.

Contextual Notes

Participants highlight potential ambiguities in the application of Bayes' theorem and the interpretation of terms in the identity. There is also uncertainty regarding the relevance of certain mathematical components in the context of the example problem.

pluviosilla
I ran across this identity for a conditional PDF where the dependent random variable X is continuous and the independent variable N is discrete:

[tex]\frac{P(x<X<x+dx|N=n)}{dx}=\frac{P(N=n|x<X<x+dx)}{P(N=n)}\frac{P(x<X<x+dx)}{dx}[/tex]

In the limit as dx approaches 0 this yields:

[tex]f_{X|N}(x|n) =\frac{P(N=n|X=x)}{P(N=n)}f(x)[/tex]

I think I understand the second step, but not the initial identity. The reversing of the conditions (from X dependent on N to N dependent on X) reminds me of Bayes' law, but if he is using Bayes' law here, it is not clear to me exactly how. Could someone help me understand this identity?
 
The dx is probably misleading here; the first identity is a Newton quotient:
[tex] \frac{Pr(x<X<x+y)}{y}=\frac{Pr(X<x+y)-Pr(X<x)}{y}[/tex]
When y tends to 0, this gives the derivative. You can condition this on N, but I still don't see why. The second identity comes from Bayes' theorem, as you said, i.e.
[tex]Pr(A|B)=\frac{Pr(B|A) Pr(A)}{Pr(B)}[/tex]
If you let
[tex]A=\{x<X<x+y\}, \quad B=\{N=n\}[/tex]
you get exactly the right-hand side of the equation.
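
Written out: dividing both sides of Bayes' theorem by y gives

[tex]\frac{Pr(A|B)}{y}=\frac{Pr(B|A)}{Pr(B)}\,\frac{Pr(A)}{y}[/tex]

With this choice of A and B (and y playing the role of dx), this is exactly the first identity, and letting y tend to 0 turns both Newton quotients into the densities in the second identity.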
 
Thanks for this reply!

How about the right-hand term? I refer to this:


[tex]\frac{Pr(x < X < x + dx)}{dx}[/tex]

This term gives the non-conditional marginal PDF f(x), and the point of the identity seems to be to show that you can represent a conditional PDF as a product of a non-conditional PDF and the RHS of Bayes' theorem. But where does that right-hand term come from?

Here's a problem this identity is reputed to solve. Consider n + m trials having a common probability of success. Suppose, however, that this success probability is not fixed in advance but is chosen from a uniform (0,1) population. We want to determine the conditional PDF of the success probability given that the n + m trials result in n successes.

Using the identity established above:

[tex]f_{X|N}(x|n) =\frac{P(N=n|X=x)}{P(N=n)}f(x)[/tex]

we have, since f(x) = 1 for the uniform prior on (0,1),

[tex]f_{X|N}(x|n)=\frac{\binom{n+m}{n}x^n (1-x)^m}{Pr(N=n)}=cx^n (1-x)^m[/tex]

which is the PDF of a beta random variable with parameters n + 1 and m + 1.

This is supposed to show that when the success probability is uniformly distributed over (0,1) prior to the collection of data, then given n successes in n + m trials, the posterior (conditional) PDF is a beta distribution with parameters n + 1 and m + 1.
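
As a sanity check (my own sketch, not from the text), a quick NumPy simulation agrees: draw the success probability uniformly, draw the number of successes binomially, keep only the runs with exactly n successes, and compare the conditional sample mean of X with the Beta(n + 1, m + 1) mean of (n + 1)/(n + m + 2).

[code]
# Sanity check: X ~ Uniform(0,1), N | X=x ~ Binomial(n+m, x).
# Conditioning on N == n should make X look like Beta(n+1, m+1).
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 5
trials = 1_000_000

x = rng.uniform(0.0, 1.0, size=trials)   # uniform prior on the success probability
successes = rng.binomial(n + m, x)       # number of successes in n + m trials
conditional_x = x[successes == n]        # keep the draws with exactly n successes

print("empirical mean of X | N=n :", conditional_x.mean())
print("Beta(n+1, m+1) mean       :", (n + 1) / (n + m + 2))
[/code]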
 
Please disregard my question about the right-hand term. I'm used to seeing Bayes' theorem with an AND term in the numerator, so my left brain took over and I did not *see* the conditional term, which, of course, explains why you need the right-hand term.

APOLOGIES!

However, I am also not sure I understand how he applies the identity to obtain the solution of the example problem. Once x is established, the number of successes is modeled by the binomial distribution. OK. But he feels free to absorb the binomial coefficient and the denominator into a constant c, as if they were irrelevant to the main point of showing that this is a beta distribution:

[tex]\frac{\binom{n+m}{n}}{Pr(N=n)}[/tex]

Why is the author so dismissive of the binomial coefficient and the denominator?
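
(For reference, a computation of my own, not from the text: neither the coefficient nor the denominator depends on x, so normalization pins the constant down. By the law of total probability,

[tex]Pr(N=n)=\int_0^1 \binom{n+m}{n}x^n (1-x)^m\,dx=\binom{n+m}{n}\frac{n!\,m!}{(n+m+1)!}=\frac{1}{n+m+1}[/tex]

so c = (n+m+1)!/(n! m!), exactly the normalizing constant of a Beta(n + 1, m + 1) density. The bookkeeping checks out; what I don't see is why one is entitled to wave these factors aside before computing them.)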
 
