MHB Finding the Density of the Minimum of Independent Variables: $\min(A,B)$ Formula

  • Thread starter: Jason4
  • Tags: Density, Joint

Summary:
To find the density of the minimum of two independent random variables, \( C = \min(A, B) \), the correct formula follows from the properties of independent distributions. The density can be written as \( f_C(c) = f_A(c)(1 - F_B(c)) + f_B(c)(1 - F_A(c)) \), where \( f_A \) and \( f_B \) are the densities of \( A \) and \( B \), and \( F_A \) and \( F_B \) are their cumulative distribution functions. This accounts for the two cases \( A < B \) and \( A \geq B \). For exponential densities with rates \( \lambda \) and \( \mu \), the result simplifies to \( f_C(c) = (\lambda + \mu)e^{-(\lambda + \mu)c} \), confirming the derivation in the thread.
Jason4
I have:

$f_A=\lambda e^{-\lambda a}$

$f_B=\mu e^{-\mu b}$

I need to find the density for $C=\min(A,B)$

($A$ and $B$ are independent).

Is this correct or utterly wrong?

$f_C(c)=f_A(c)+f_B(c)-f_A(c)F_B(c)-F_A(c)f_B(c)$

$=\lambda e^{-\lambda c}+\mu e^{-\mu c}-\lambda e^{-\lambda c}(1-e^{-\mu c})-(1-e^{-\lambda c})\mu e^{-\mu c}$

$=\lambda e^{-\lambda c}e^{-\mu c}+\mu e^{-\lambda c}e^{-\mu c}$

$=(\lambda+\mu)e^{-c(\lambda+\mu)}$
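As a quick numerical sanity check, here is a minimal Monte Carlo sketch (assuming NumPy; the rates $\lambda=2$, $\mu=3$ and the sample size are illustrative choices, not from the thread) comparing $\min(A,B)$ against an exponential with rate $\lambda+\mu$:

```python
import numpy as np

# Monte Carlo sanity check: the minimum of two independent
# exponentials should itself be exponential with rate lam + mu.
# lam, mu, and the sample size are illustrative choices.
rng = np.random.default_rng(0)
lam, mu = 2.0, 3.0
n = 1_000_000

a = rng.exponential(scale=1.0 / lam, size=n)  # A ~ Exp(lam)
b = rng.exponential(scale=1.0 / mu, size=n)   # B ~ Exp(mu)
c = np.minimum(a, b)                          # C = min(A, B)

# An Exp(lam + mu) variable has mean 1 / (lam + mu) = 0.2.
print(c.mean())  # ~0.2

# Empirical survival P(C > x) vs. the claimed exp(-(lam + mu) x).
for x in (0.1, 0.3, 0.5):
    print(x, (c > x).mean(), np.exp(-(lam + mu) * x))
```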
 
Jason said:
Is this correct or utterly wrong?

$f_C(c)=f_A(c)+f_B(c)-f_A(c)F_B(c)-F_A(c)f_B(c)$

You need to explain where this comes from.

Because we have two cases, $A<B$ and $A\ge B$, I would start with:

$f_C(c)=f_A(c)\Pr(B>c\mid A=c)+f_B(c)\Pr(A>c\mid B=c)$

then independence reduces this to:

$f_C(c)=f_A(c)\Pr(B>c)+f_B(c)\Pr(A>c)$

so:

$f_C(c)=f_A(c)(1-F_B(c))+f_B(c)(1-F_A(c))$
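As a check, substituting the exponential forms $f_A(c)=\lambda e^{-\lambda c}$, $F_A(c)=1-e^{-\lambda c}$ (and similarly for $B$) into this expression gives

$f_C(c)=\lambda e^{-\lambda c}e^{-\mu c}+\mu e^{-\mu c}e^{-\lambda c}=(\lambda+\mu)e^{-(\lambda+\mu)c}$

The same result comes directly from the survival function: by independence,

$\Pr(C>c)=\Pr(A>c)\Pr(B>c)=e^{-\lambda c}e^{-\mu c}=e^{-(\lambda+\mu)c}$

so $F_C(c)=1-e^{-(\lambda+\mu)c}$, and differentiating gives $f_C(c)=(\lambda+\mu)e^{-(\lambda+\mu)c}$, matching the formula in the summary.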

CB
 