Conditional Probability Distribution

Homework Help Overview

The discussion revolves around conditional probability distributions, specifically focusing on the joint probability mass function and how to derive conditional distributions from it. Participants are exploring the implications of the condition that the sum of certain random variables equals a specific value.

Discussion Character

  • Exploratory, Assumption checking, Conceptual clarification

Approaches and Questions Raised

  • Participants are attempting to understand how the condition \(\sum_{i=1}^k N_i = n\) affects the joint distribution. There are questions about the role of distinct \(\lambda_i\) values and how they relate to the \(n_i\) values in the context of the joint PMF.

Discussion Status

There is an ongoing exploration of the relationship between the joint PMF and the conditional distribution. Some participants are questioning the assumptions made about the independence of the \(\lambda_i\) values and their impact on the joint distribution. Guidance has been offered to consider simpler cases with fewer variables to identify patterns.

Contextual Notes

Participants are grappling with the implications of the condition on the joint distribution and the potential complexity introduced by distinct \(\lambda_i\) values. There is a recognition that the joint PMF's structure may not straightforwardly accommodate the condition imposed by the sum of the random variables.

SP90

Homework Statement

(Problem statement provided as an attached image.)
The Attempt at a Solution



I have that the joint probability mass function would be

\[\prod_{i=1}^{k} \frac{\lambda_{i}^{n_{i}}}{n_{i}!} e^{-\lambda_{i}}\]

How would I go about applying the conditional to get the conditional distribution?
 

SP90 said:

I have that the joint probability mass function would be

\[\prod_{i=1}^{k} \frac{\lambda_{i}^{n_{i}}}{n_{i}!} e^{-\lambda_{i}}\]

How would I go about applying the conditional to get the conditional distribution?

You need to compute

\[
P\Big(N_1 = n_1, N_2 = n_2, \ldots, N_k = n_k \,\Big|\, \sum_{i=1}^k N_i = n\Big)
= \frac{P\big(N_1 = n_1, N_2 = n_2, \ldots, N_k = n_k \ \text{and}\ \sum_{i=1}^k N_i = n\big)}{P\big(\sum_{i=1}^k N_i = n\big)}
\]

for any \(k\)-tuple \((n_1, n_2, \ldots, n_k)\) that sums to \(n\). Can you simplify the event in the numerator probability? Can you compute the denominator probability?

RGV
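(Editorial note, not part of the original exchange: the denominator hinges on one standard fact, sketched here, namely that a sum of independent Poisson variables is again Poisson with rate equal to the sum of the rates.)

```latex
% If N_1, ..., N_k are independent with N_i ~ Poisson(lambda_i), then
% the sum is Poisson with rate lambda_1 + ... + lambda_k:
P\Big(\sum_{i=1}^{k} N_i = n\Big)
  = \frac{\big(\lambda_1 + \cdots + \lambda_k\big)^{n}}{n!}
    \, e^{-(\lambda_1 + \cdots + \lambda_k)}
```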
 
I don't understand how the condition \(\sum_{i=1}^k N_i = n\) constrains the joint distribution, given that each \(\lambda_{i}\) can be distinct.

Also, to evaluate \(P(\sum_{i=1}^k N_i = n)\), wouldn't this be the joint distribution evaluated at \(N_{1}=n-\sum_{i=2}^{k}N_{i}\), then \(N_{2}=n-\sum_{i=3}^{k}N_{i}\), and so on? I feel that's wrong because it doesn't seem like it would collapse down nicely.
 
SP90 said:
I don't understand how the condition \(\sum_{i=1}^k N_i = n\) constrains the joint distribution, given that each \(\lambda_{i}\) can be distinct.

Also, to evaluate \(P(\sum_{i=1}^k N_i = n)\), wouldn't this be the joint distribution evaluated at \(N_{1}=n-\sum_{i=2}^{k}N_{i}\), then \(N_{2}=n-\sum_{i=3}^{k}N_{i}\), and so on? I feel that's wrong because it doesn't seem like it would collapse down nicely.

The \(\lambda_i\) have absolutely nothing to do with the \(n_i\); if you think they do, then you seriously misunderstand the material. The formula you wrote for the joint pmf is \(P(N_1 = n_1, \ldots, N_k = n_k)\). The only role of the \(\lambda_i\) is to control the size of this probability, but that has nothing to do with the \(n_i\) values themselves.

RGV
 
But because the joint PMF is a product of the individual PMFs, the term becomes \(\lambda_{1}^{n_{1}}\lambda_{2}^{n_{2}}\lambda_{3}^{n_{3}}\cdots\), which means the condition \(\sum_{i=1}^k N_i = n\) doesn't appear in the joint PMF (although it would if the \(\lambda_{i}\) were equal). So I don't understand how the condition \(\sum_{i=1}^k N_i = n\) is applied to give \(P(N_1 = n_1, N_2 = n_2, \ldots, N_k = n_k \ \text{and}\ \sum_{i=1}^k N_i = n)\).
 
SP90 said:
But because the joint PMF is a product of the individual PMFs, the term becomes \(\lambda_{1}^{n_{1}}\lambda_{2}^{n_{2}}\lambda_{3}^{n_{3}}\cdots\), which means the condition \(\sum_{i=1}^k N_i = n\) doesn't appear in the joint PMF (although it would if the \(\lambda_{i}\) were equal). So I don't understand how the condition \(\sum_{i=1}^k N_i = n\) is applied to give \(P(N_1 = n_1, N_2 = n_2, \ldots, N_k = n_k \ \text{and}\ \sum_{i=1}^k N_i = n)\).

Try first to figure out what happens when k = 2, then maybe k = 3. You should see a pattern emerging.

RGV
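(Editorial note: the \(k = 2\) case suggested above can be explored numerically. The sketch below uses arbitrary illustrative values \(\lambda_1 = 2\), \(\lambda_2 = 3\), \(n = 5\); it simulates two independent Poisson variables, conditions on their sum, and compares the empirical conditional pmf of \(N_1\) against a Binomial\((n, \lambda_1/(\lambda_1+\lambda_2))\) candidate.)

```python
import numpy as np
from math import comb

rng = np.random.default_rng(0)
lam1, lam2, n = 2.0, 3.0, 5  # arbitrary illustrative values

# Draw many independent Poisson pairs; keep only those with N1 + N2 = n.
N1 = rng.poisson(lam1, size=1_000_000)
N2 = rng.poisson(lam2, size=1_000_000)
mask = (N1 + N2) == n

# Empirical conditional pmf of N1 given N1 + N2 = n (values 0..n).
emp = np.bincount(N1[mask], minlength=n + 1) / mask.sum()

# Candidate closed form: Binomial(n, lam1 / (lam1 + lam2)).
p = lam1 / (lam1 + lam2)
binom = np.array([comb(n, j) * p**j * (1 - p) ** (n - j) for j in range(n + 1)])

max_err = np.max(np.abs(emp - binom))
print(max_err)  # small for large sample sizes
```

Working the \(k = 2\) algebra by hand and checking it against this simulation should make the general-\(k\) pattern easier to spot.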
 
