What Is the Probability Function of Z When X~Bernoulli(θ) and Y~Geometric(θ)?

AI Thread Summary
The discussion revolves around finding the probability function of Z, where Z = X + Y, with X following a Bernoulli distribution and Y a Geometric distribution, both defined by parameter θ. The participants clarify the probability distributions for X and Y, noting that Z's distribution is derived from the convolution of these two distributions. There is confusion regarding the application of summation and dummy variables, specifically how to correctly express the probabilities in relation to Z. The conversation emphasizes the importance of consistent notation and understanding the roles of dummy variables in probability summations. Ultimately, the goal is to accurately compute P(Z=z) using the established distributions for X and Y.
sneaky666

Homework Statement


Let X~Bernoulli(θ) and Y~Geometric(θ), with X and Y independent. Let Z=X+Y. What is the probability function of Z?


Homework Equations





The Attempt at a Solution



I am getting
PX(1) = θ
PX(0) = 1-θ
PX(x) = 0 otherwise
pY(y) = θ(1-θ)^y for y >= 0
pY(y) = 0 otherwise


Not sure where to go from here...
 
Do you know this result? The probability distribution of Z is the convolution of the probability distributions of X and Y.
 
No, I am not sure, which is why I need help.
 
There's a quick proof of the convolution result at the top of this PDF file:

http://www.dartmouth.edu/~chance/teaching_aids/books_articles/probability_book/Chapter7.pdf

That should get you started. If you get stuck, post what you have and I'll try to help.
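A quick way to see the convolution result in action is to compute it numerically. The sketch below is not from the thread; the names (`convolve_pmf`, `px`, `py`), the value `theta = 0.4`, and the 60-term truncation of the geometric support are illustrative assumptions:

```python
theta = 0.4  # illustrative choice, not from the thread

# pmfs as dicts over their (truncated) supports
px = {0: 1 - theta, 1: theta}                        # Bernoulli(theta)
py = {y: theta * (1 - theta)**y for y in range(60)}  # Geometric(theta), counting failures

def convolve_pmf(p, q):
    """Convolve two pmfs: P(Z = z) = sum_k p(k) q(z - k)."""
    out = {}
    for a, pa in p.items():
        for c, qc in q.items():
            out[a + c] = out.get(a + c, 0.0) + pa * qc
    return out

pz = convolve_pmf(px, py)
# pz[0] should equal P(X=0)P(Y=0) = (1-theta)*theta,
# and the total mass should be very close to 1
```

Truncating the geometric support at 60 terms discards only (1-θ)^60 of the probability mass, which is negligible here.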
 
I did that research, and the only thing I could come up with is

P(X=1) = θ
P(X=0) = 1-θ
P(X=x) = 0 otherwise
P(Y=y) = θ(1-θ)^y for y >= 0
P(Y=y) = 0 otherwise

so (X=k) and (Y=z-k) since Z = X+Y

PZ(Z=z)=

summation from -inf to inf
θ^2 * (1-θ)^(z-1)
if x=1,y=1

summation from -inf to inf
θ * (1-θ)^(z+1)
if x=0,y=0

0
otherwise


Is this right?
 
I don't think that's quite right. I am going to introduce some notation to make it easier to express the functions:

Kronecker delta function

\delta(k) = \begin{cases} 1, &amp; k = 0 \\ 0, &amp; \textrm{otherwise} \end{cases}

Unit step function

u(k) = \begin{cases} 1, &amp; k \geq 0 \\ 0, &amp; \textrm{otherwise} \end{cases}

Bernoulli distribution

b(k) = (1-\theta)\delta(k) + \theta \delta(k-1)

Geometric distribution

g(k) = \theta (1-\theta)^k u(k)

Distribution of sum of independent bernoulli and geometric

\begin{align*} s(k) &amp;= \sum_{m=-\infty}^{\infty} g(m) b(k-m) \\ &amp;= \sum_{m = 0}^{\infty} \theta(1 - \theta)^m [(1-\theta)\delta(k-m) + \theta \delta(k-m-1)] \end{align*}

You can now consider three cases:

1) k < 0: the sum is zero
2) k = 0: only one of the two \delta functions is nonzero for some m \geq 0
3) k > 0: both \delta functions are nonzero for some m \geq 0
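The piecewise definitions above translate directly into code, which makes it easy to sanity-check the three cases before doing the algebra. This is a sketch, not part of the original exchange; `theta = 0.3` and the 200-term truncation are arbitrary choices:

```python
theta = 0.3  # arbitrary parameter value for the check

def delta(k):
    """Kronecker delta."""
    return 1.0 if k == 0 else 0.0

def u(k):
    """Unit step."""
    return 1.0 if k >= 0 else 0.0

def b(k):
    """Bernoulli pmf: (1-theta)*delta(k) + theta*delta(k-1)."""
    return (1 - theta) * delta(k) + theta * delta(k - 1)

def g(k):
    """Geometric pmf: theta*(1-theta)^k for k >= 0."""
    return theta * (1 - theta)**k * u(k)

def s(k, terms=200):
    """Truncated convolution sum: s(k) = sum_m g(m) b(k-m)."""
    return sum(g(m) * b(k - m) for m in range(terms))

# Case 1 (k < 0): the sum is zero
assert s(-1) == 0.0
# Case 2 (k = 0): only delta(k - m) fires, at m = 0
assert abs(s(0) - theta * (1 - theta)) < 1e-12
# Case 3 (k > 0): both deltas fire, at m = k and m = k - 1
assert abs(s(3) - (theta * (1 - theta)**3 * (1 - theta)
                   + theta * (1 - theta)**2 * theta)) < 1e-12
```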
 
OK, I see, but I looked on Wikipedia, and for the Bernoulli dist. and geometric dist. I thought it was

P(X=1) = θ
P(X=0) = 1-θ
P(X=x) = 0 otherwise
P(Y=y) = θ(1-θ)^y for y >= 0
P(Y=y) = 0 otherwise

Or is this basically what you have? And why did you also add those extra functions? I don't understand how you're getting those functions from what I have...
 
sneaky666 said:
OK, I see, but I looked on Wikipedia, and for the Bernoulli dist. and geometric dist. I thought it was

P(X=1) = θ
P(X=0) = 1-θ
P(X=x) = 0 otherwise
P(Y=y) = θ(1-θ)^y for y >= 0
P(Y=y) = 0 otherwise

Or is this basically what you have? And why did you also add those extra functions? I don't understand how you're getting those functions from what I have...

Yes, your functions and mine are equivalent. I added the extra functions so I don't have to express them as individual cases (x = 1, x = 0, otherwise) as you did.

To see that mine are the same as yours, just plug in various values of k.

For example, if

b(k) = (1 - \theta)\delta(k) + \theta\delta(k-1)

then notice that \delta(k) is zero except when k = 0, and \delta(k-1) is zero except when k = 1.

Thus b(0) = (1 - \theta)(1) + 0 = (1 - \theta) and b(1) = 0 + \theta(1) = \theta and b(k) = 0 if k is neither 0 nor 1.

Similarly with the geometric distribution.

The point is that it makes it possible to write a one-line expression that is valid for all k, which in turn makes it easier to express the convolution sum.

By the way, this isn't some weird invention of mine - it's a standard thing to do when working with functions defined in pieces, and the notation (\delta(k) and u(k)) is quite standard as well.
 
I have one last concern about why my answer is wrong:

Since the main equation is

\sum_{k=-\infty}^{\infty} P(X=k)P(Y=z-k)

How I got my answers is from

P(X=1) = θ
P(X=0) = 1-θ
P(X=x) = 0 otherwise
P(Y=y) = θ(1-θ)^y for y >= 0
P(Y=y) = 0 otherwise

I have 3 cases: if X=k is 0, 1, or something else.
so if k = 0 then you would have
P(X=k)P(Y=z-k)
P(X=0)P(Y=z-0)
(1-θ)P(Y=z)
(1-θ)θ(1-θ)^z
θ(1-θ)^(z+1)

so if k = 1 then you would have
P(X=k)P(Y=z-k)
P(X=1)P(Y=z-1)
θθ(1-θ)^(z-1)
θ^2(1-θ)^(z-1)

so if k != 0,1 then you would have
P(X=k)P(Y=z-k)
0*P(Y=z-k)
0

(and of course the summation beside them; I didn't add it here)

So I don't understand what is wrong here?
 
You are mixing up your k and z. One of them is a dummy variable used in the summation, and the other one is the letter that you use to fill in the blank:

P(X + Y = ____)

So let's pick which one is which and stick with it.

If you want to fill in the blank with z,

P(X + Y = z) = \sum_{k=-\infty}^{\infty} P(X = k) P(Y = z - k)

then k is the dummy variable in the sum (it doesn't appear on the left side at all). So your three cases apply to z, not k:

Case 1: z < 0
Case 2: z = 0
Case 3: z > 0

Try that and see if it helps.
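Once the three cases are worked out, they can be checked numerically against the convolution sum. The sketch below is not from the thread (`theta = 0.25` and the function names are illustrative); `p_z_cases` encodes the closed form that the three cases yield, so treat it as a spoiler and derive it yourself first:

```python
theta = 0.25  # illustrative parameter value

def p_x(k):
    """Bernoulli(theta) pmf."""
    return {0: 1 - theta, 1: theta}.get(k, 0.0)

def p_y(k):
    """Geometric(theta) pmf on k = 0, 1, 2, ..."""
    return theta * (1 - theta)**k if k >= 0 else 0.0

def p_z(z):
    # Only k = 0 and k = 1 contribute, since P(X = k) = 0 elsewhere.
    return sum(p_x(k) * p_y(z - k) for k in (0, 1))

def p_z_cases(z):
    """Closed form obtained by working the three cases z < 0, z = 0, z > 0."""
    if z < 0:
        return 0.0
    if z == 0:
        return theta * (1 - theta)
    return theta * (1 - theta)**(z - 1) * (1 - theta + theta**2)

assert all(abs(p_z(z) - p_z_cases(z)) < 1e-12 for z in range(-2, 20))
```

As a further consistency check, the closed form sums to 1 over z = 0, 1, 2, ..., since the geometric series in (1-θ) telescopes the factor (1-θ+θ²) away.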
 