Calculating Conditional Probability of Y with Exponential RVs

In summary, the problem is to find the conditional density f_Y(y|\gamma_f\leq \gamma_0), where Y=\gamma_f+\text{min}(\gamma_h,\gamma_g) and the gammas with subscripts f, g, h are i.i.d. exponential random variables. The suggested approach is to work with cumulative distribution functions and then take the derivative to get the density function. The authors of the paper split the density into two intervals, y\leq \gamma_0 and y>\gamma_0, and their answer at first differed from the one obtained in the thread; the discrepancy is resolved below. The discussion uses X for the random variables and the rate \lambda = 1/\overline{\gamma}, the inverse mean.
  • #1
EngWiPy
Dear all,

How can I find the following conditional probability: [tex]f_Y(y|\gamma_f\leq \gamma_0)[/tex], where [tex]Y=\gamma_f+\text{min}(\gamma_h,\gamma_g)[/tex] and the gammas with subscripts f, g, h are i.i.d. exponential random variables?

Thanks in advance
 
  • #2
I would start out by working with cumulative distribution functions, and then take the derivative to get the density function. For instance, I would first find the cumulative distribution function for the random variable M = min(X_g, X_h), and then get the density from that. Then I would use the density for M together with the densities for X_f and X_0 to get the cumulative distribution function for Y given X_f <= X_0.
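
For the exponential case, that first step is easy to check numerically. Here is a minimal sketch in Python (NumPy and the specific rate value are my own assumptions, not from the thread): the minimum of two i.i.d. exponentials with rate [tex]\lambda[/tex] should itself be exponential with rate [tex]2\lambda[/tex].

[code]
# Sanity check: min of two i.i.d. Exp(lam) variables is Exp(2*lam).
# lam and the test points are arbitrary illustrative values.
import numpy as np

rng = np.random.default_rng(0)
lam = 1.5
n = 1_000_000
m = np.minimum(rng.exponential(1/lam, n), rng.exponential(1/lam, n))

for x in (0.2, 0.5, 1.0):
    print(np.mean(m <= x), 1 - np.exp(-2*lam*x))  # empirical vs 1 - e^{-2*lam*x}
[/code]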

I'm guessing you are also doing this problem with the exponential distribution first in anticipation of doing it with more complicated distributions later. As for your other question about moment generating functions, I don't know if there is a general way to relate MGF of order statistics to the MGF of the i.i.d random variables. It sounds difficult.
 
  • #3
techmologist said:
...

I'm guessing you are also doing this problem with the exponential distribution first in anticipation of doing it with more complicated distributions later.

...


Exactly. The problem is that I have worked it out and got a result that is different from the paper I am reading now. The authors divided the density into two intervals: [tex]y\leq \gamma_0[/tex] and [tex]y>\gamma_0[/tex].
 
  • #4
S_David said:
Exactly. The problem is that I have worked it out and got a result that is different from the paper I am reading now. The authors divided the density into two intervals: [tex]y\leq \gamma_0[/tex] and [tex]y>\gamma_0[/tex].

Hmm. Did you work out an answer for the case of exponentially distributed RVs? For what it is worth, I got

[tex]f_Y(y|\gamma_f \leq \gamma_0) = \frac{4y}{\overline{\gamma}^2}e^{-\frac{2y}{\overline{\gamma}}}[/tex]
 
  • #5
techmologist said:
Hmm. Did you work out an answer for the case of exponentially distributed RVs? For what it is worth, I got

[tex]f_Y(y|\gamma_f \leq \gamma_0) = \frac{4y}{\overline{\gamma}^2}e^{-\frac{2y}{\overline{\gamma}}}[/tex]

Actually, the answer in the paper is:

[tex]f_Y\left(y|\gamma_f\leq\gamma_0\right)=
\begin{cases}
\frac{\text{e}^{-y/\overline{\gamma}}-\text{e}^{-y/\overline{\gamma}_f}}{(\overline{\gamma}-\overline{\gamma}_f)\,\left(1-\text{e}^{-\gamma_0/\overline{\gamma}_f}\right)} & \text{if}\,\,\, y\leq\gamma_0\\
\frac{1-\text{e}^{-\gamma_0(1/\overline{\gamma}_f-1/\overline{\gamma})}}{(\overline{\gamma}-\overline{\gamma}_f)\,\left(1-\text{e}^{-\gamma_0/\overline{\gamma}_f}\right)}\,\text{e}^{-y/\overline{\gamma}} & \text{if}\,\,\,y>\gamma_0
\end{cases}[/tex]

where [tex]\overline{\gamma}[/tex] is the mean of [tex]\text{min}(\gamma_h,\,\gamma_g)[/tex]

Regards
 
  • #6
***I just thought of this... does gamma_0 have the same distribution as the other gammas? I had assumed so in my answer above and in what follows below.***

Is that answer supposed to be for the exponential distribution? I don't know how they got that, but I'm not saying it is incorrect. Also, if these are identically distributed random variables, then [tex]\overline{\gamma}_f = \overline{\gamma}_0 = 2\overline{\gamma} [/tex], which would simplify those expressions. I also don't see why it needs to be separated into two intervals of y. Do the authors explain any of this or do they present the answer out of thin air?

Here's how I got my answer. I am going to use X for the random variable so I don't have to keep typing out "gamma". Also, I will use the rate [tex]\lambda = 1/\overline{\gamma}[/tex], the inverse mean. So the density function for the [tex]X_i[/tex] is


[tex]f_{X_i}(x) = \lambda e^{-\lambda x}[/tex]

First of all, try to find the cumulative distribution function for Y given X_f <= X_0. This is

[tex]P\{Y \leq y | X_f \leq X_0\} = \frac{P\{Y \leq y , X_f \leq X_0\}}{P\{X_f \leq X_0\}}[/tex]

I assumed that X_f and X_0 are identically distributed, in which case the probability in the denominator is just 1/2. But if that assumption was wrong, I don't think it is too hard to modify. So basically I'm interested in finding [tex]P\{Y \leq y , X_f \leq X_0\}[/tex]. First I need to find the cumulative distribution function for the random variable M = min(X_h, X_g):

[tex]P\{M \leq x\} = 1 - P\{M > x\} = 1 - P\{X_h>x\}P\{X_g>x\}[/tex]
[tex]P\{M \leq x\} = 1 - e^{-2\lambda x}[/tex] using i.i.d.

So the density for M is

[tex]f_M(x) = 2\lambda e^{-2\lambda x}[/tex], exponential with mean half that of the X's. Now to find

[tex]P\{X_f + M \leq y , X_f \leq X_0\}[/tex], I use the joint density for M, X_f, and X_0, which by independence is

[tex]f_{X_0, X_f, M}(r,s,t) = f_{X_0}(r)f_{X_f}(s) f_{M}(t) = \lambda e^{-\lambda r} \lambda e^{-\lambda s} 2\lambda e^{-2\lambda t}[/tex]

Finally, compute the integral

[tex]P\{X_f + M \leq y , X_f \leq X_0\} = \int_{t=0}^y\int_{s=0}^{y-t}\int_{r=s}^{\infty}f_{X_0, X_f, M}(r,s,t)drdsdt = -\lambda y e^{-2\lambda y} + \frac{1}{2}(1-e^{-2\lambda y})[/tex]
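
This triple integral can be verified symbolically. A sketch with SymPy (my assumption; not part of the original post), declaring the symbols positive so the improper integral converges:

[code]
# Symbolic check of the triple integral above.
import sympy as sp

lam, y, r, s, t = sp.symbols('lam y r s t', positive=True)
joint = lam*sp.exp(-lam*r) * lam*sp.exp(-lam*s) * 2*lam*sp.exp(-2*lam*t)

inner  = sp.integrate(joint, (r, s, sp.oo))   # integrate out X_0
middle = sp.integrate(inner, (s, 0, y - t))   # then X_f
prob   = sp.integrate(middle, (t, 0, y))      # then M

closed_form = -lam*y*sp.exp(-2*lam*y) + sp.Rational(1, 2)*(1 - sp.exp(-2*lam*y))
print(sp.simplify(prob - closed_form))        # should print 0
[/code]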

Differentiating this and multiplying by 2 (from my assumption that P{X_f <= X_0} = 1/2) gives the answer

[tex]f_Y(y| X_f \leq X_0) = 4\lambda ^2 y e^{-2 \lambda y} [/tex]
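
A quick Monte Carlo check of this density, under the same assumption that X_0 is exponential like the others (a Python/NumPy sketch with an arbitrary rate):

[code]
# Simulate Y = X_f + M conditioned on X_f <= X_0 and compare the
# empirical CDF with the integral of 4*lam^2*y*exp(-2*lam*y),
# which is 1 - exp(-2*lam*t)*(1 + 2*lam*t).
import numpy as np

rng = np.random.default_rng(1)
lam = 1.0
n = 2_000_000
xf = rng.exponential(1/lam, n)
x0 = rng.exponential(1/lam, n)
m  = np.minimum(rng.exponential(1/lam, n), rng.exponential(1/lam, n))

y = (xf + m)[xf <= x0]   # samples of Y given X_f <= X_0

for t in (0.5, 1.0, 2.0):
    print(np.mean(y <= t), 1 - np.exp(-2*lam*t)*(1 + 2*lam*t))
[/code]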
 
  • #7
Excuse me, but [tex]\gamma_0[/tex] is just a constant, and the random variables [tex]\gamma_f,\,\gamma_h,\,\text{and}\,\gamma_g[/tex] are i.i.d., so [tex]\overline{\gamma}_f=\overline{\gamma}_h=\overline{\gamma}_g[/tex].

I am thinking in another way:

[tex]\text{Pr}\left[X_f+M\leq x,\,X_f\leq x_0\right]=\text{Pr}\left[2\,X_f+M\leq x+x_0\right][/tex]

given that x and x_0 must be >= 0.

I don't know if it is a valid approach or not.

Regards
 
  • #8
S_David said:
Excuse me, but [tex]\gamma_0[/tex] is just a constant, and the random variables [tex]\gamma_f,\,\gamma_h,\,\text{and}\,\gamma_g[/tex] are i.i.d., so [tex]\overline{\gamma}_f=\overline{\gamma}_h=\overline{\gamma}_g[/tex].

Ah, okay. I was confused about that. But this makes the problem easier, I think.

I am thinking in another way:

[tex]\text{Pr}\left[X_f+M\leq x,\,X_f\leq x_0\right]=\text{Pr}\left[2\,X_f+M\leq x+x_0\right][/tex]

given that x and x_0 must be >= 0.

I don't know if it is a valid approach or not.

That was actually my first thought, too. But it is incorrect for the following reason: although

[tex]X_f + M \leq y \text{ and } X_f \leq x_0[/tex] implies [tex]2X_f + M \leq y+x_0[/tex] ,

the converse is not true. For example, you could have [tex]X_f + M = y + \epsilon[/tex], where [tex]\epsilon[/tex] is some positive number less than [tex]x_0[/tex]. Then for values of X_f satisfying

[tex]X_f \leq x_0 - \epsilon[/tex] ,

it would be true that [tex]2X_f + M \leq y+x_0[/tex] .

Now that I know x_0 is a constant, I see why the answer is broken up into two intervals. You get the probability [tex]P\{ X_f + M \leq y , X_f \leq x_0 \}[/tex] by integrating the joint density [tex]f_{X_f, M}(r,s)[/tex] over a certain region in the r-s plane. The shape of that region depends on whether y is less than or greater than x_0. For the first case, the region is an isosceles right triangle. For the latter case, it is a quadrilateral (the same triangle, but with one corner cut off).
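
To make the earlier point concrete, here is a small numerical illustration (a Python/NumPy sketch; the values of [tex]\lambda[/tex], y, and x_0 are arbitrary assumptions) that the joint event and the proposed single-inequality event have different probabilities:

[code]
# P{X_f + M <= y, X_f <= x_0} versus P{2*X_f + M <= y + x_0}.
import numpy as np

rng = np.random.default_rng(2)
lam, y, x0 = 1.0, 1.0, 0.8
n = 2_000_000
xf = rng.exponential(1/lam, n)
m  = np.minimum(rng.exponential(1/lam, n), rng.exponential(1/lam, n))

p_joint = np.mean((xf + m <= y) & (xf <= x0))  # the event we actually want
p_sum   = np.mean(2*xf + m <= y + x0)          # the proposed replacement
print(p_joint, p_sum)                          # noticeably different
[/code]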


EDIT: "equilateral right triangle" should be "isoscoles right triangle": two sides equal, not all three. haha :redface:
 
  • #9
Were you able to get the author's answer?
 
  • #10
techmologist said:
Were you able to get the author's answer?

Not really. Have you?
 
  • #11
Yes. The joint density function for X_f and M is [tex]f_{X_f, M}(r,s) = \lambda_f e^{-\lambda_f r} \lambda_M e^{-\lambda_M s}[/tex], and

[tex]P\{X_f+M \leq y, X_f \leq x_0\} = \int_{r=0}^{y}\int_{s=0}^{y-r}f_{X_f, M}(r,s)dsdr[/tex]

for y <= x_0, and

[tex]P\{X_f+M \leq y, X_f \leq x_0\} = \int_{r=0}^{x_0}\int_{s=0}^{y-r}f_{X_f, M}(r,s)dsdr[/tex]

for y > x_0.

M is also an exponential random variable, independent of X_f. When I converted back to the notation of the authors, [tex]\lambda_f = 1/\overline{\gamma}_f[/tex] and [tex]\lambda_M = 1/\overline{\gamma}[/tex], I got the same answers as you posted in post #5. EDIT: That is, of course, after taking the derivative w.r.t. y and dividing by [tex]1-e^{-\lambda_f x_0}[/tex].
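
Here is a numerical check that this two-region integration reproduces the paper's piecewise density from post #5 (a Python/NumPy sketch; the means and [tex]\gamma_0[/tex] are arbitrary assumed values, with [tex]\overline{\gamma}=\overline{\gamma}_f/2[/tex] as in the i.i.d. case):

[code]
# Compare a histogram estimate of f_Y(y | gamma_f <= gamma_0)
# with the paper's piecewise formula.
import numpy as np

rng = np.random.default_rng(3)
gbar_f = 2.0             # mean of gamma_f
gbar   = gbar_f / 2      # mean of M = min(gamma_h, gamma_g)
x0     = 1.5             # the constant gamma_0
n = 4_000_000

xf = rng.exponential(gbar_f, n)
m  = np.minimum(rng.exponential(gbar_f, n), rng.exponential(gbar_f, n))
y  = (xf + m)[xf <= x0]  # samples of Y given gamma_f <= gamma_0

def paper_density(t):
    norm = (gbar - gbar_f) * (1 - np.exp(-x0/gbar_f))
    low  = (np.exp(-t/gbar) - np.exp(-t/gbar_f)) / norm
    high = (1 - np.exp(-x0*(1/gbar_f - 1/gbar))) * np.exp(-t/gbar) / norm
    return np.where(t <= x0, low, high)

h = 0.01                 # half-width of the density-estimation window
for t in (0.5, 1.0, 2.0, 3.0):
    est = np.mean((y > t - h) & (y <= t + h)) / (2*h)
    print(t, est, paper_density(t))
[/code]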

 
  • #12
It makes sense. That is great. Thanks for your help.

Regards
 
  • #13
You're welcome. Also, if you ever discover the general approach to that problem about moment generating functions, please post it. I'm curious now...
 
  • #14
techmologist said:
... if you ever discover the general approach to that problem about moment generating functions, please post it ...

Ok, I will. Thanks
 

1. What is conditional probability?

Conditional probability is the likelihood of an event occurring given that another event has already occurred. It is calculated by dividing the probability of the two events occurring together by the probability of the first event occurring.

2. How is conditional probability calculated?

To calculate conditional probability, you need to first determine the probability of the two events occurring together, then divide it by the probability of the first event occurring. This can be represented mathematically as P(Y|X) = P(X∩Y) / P(X), where P(X∩Y) is the probability of X and Y occurring together and P(X) is the probability of X occurring on its own.
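
A tiny worked example of this definition (the dice setup is my own illustration, not from the thread):

[code]
# Roll a fair die: X = "even", Y = "greater than 3".
p_x       = 3/6   # P(even) = P({2, 4, 6})
p_x_and_y = 2/6   # P(even and > 3) = P({4, 6})
print(p_x_and_y / p_x)   # P(Y|X) = 2/3
[/code]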

3. What are exponential random variables?

Exponential random variables are a type of continuous random variable that models the time between events in a Poisson process. They have a probability density function that follows an exponential distribution and are commonly used in statistics, physics, and engineering.

4. How do I calculate the conditional probability of Y with exponential random variables?

One useful fact is the memoryless property of the exponential distribution: if T is exponential with rate parameter λ, then P(T ≤ x + y | T > x) = 1 − e^(−λy), exactly as if no time had already elapsed. For compound quantities like the Y in this thread, the general recipe is the one used above: integrate the joint density of the independent exponentials over the region defined by both events, then divide by the probability of the conditioning event.
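
A short numeric illustration of the memoryless property (a Python/NumPy sketch; λ, x, and y are arbitrary assumed values):

[code]
# P(T <= x + y | T > x) should match the unconditional P(T <= y).
import numpy as np

rng = np.random.default_rng(4)
lam, x, y = 1.0, 0.7, 0.4
t = rng.exponential(1/lam, 1_000_000)

lhs = np.mean(t[t > x] <= x + y)   # conditional probability, estimated
print(lhs, 1 - np.exp(-lam*y))     # both approximately equal
[/code]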

5. What are some real-world applications of calculating conditional probability with exponential random variables?

Calculating conditional probability with exponential random variables can be useful in a variety of fields, such as finance, insurance, and reliability engineering. For example, it can be used to determine the likelihood of a loan defaulting given the borrower's credit score and income, or the probability of a car breaking down within a certain time period based on its age and mileage.
