Distribution of Maximum of Two Random Variables

AI Thread Summary
The discussion focuses on finding the distribution of the random variable X defined as the sum of the two largest of N i.i.d. random variables. Participants suggest using order statistics and the convolution theorem to derive the distribution, emphasizing the complexity of the individual components, which are products of chi-squared random variables. A transformation technique is recommended to handle the norm-squared terms, and the joint probability density function (pdf) of the two largest order statistics is also discussed. Additionally, Extreme Value Theory is proposed as a potential simplification for large N. The conversation highlights the intricate nature of the calculations involved in determining the distribution of X.
EngWiPy
Hi all,

I have a random variable (RV):

X = \max_{i \neq j}(X_i + X_j)

where X_i and X_j are two different RVs from a set of N i.i.d. RVs. I need to find the distribution of X. What is the most efficient way?

Thanks in advance
 
S_David said:
Hi all,

I have a random variable (RV):

X = \max_{i \neq j}(X_i + X_j)

where X_i and X_j are two different RVs from a set of N i.i.d. RVs. I need to find the distribution of X. What is the most efficient way?

Thanks in advance

Hey S_David.

For this problem you can use order statistics and the convolution theorem to get what you want.

The order statistics are used for getting the \max(X) term, and the convolution is used to calculate the distribution of a sum of independent (but not necessarily identically distributed) random variables.

Are you familiar with these?
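The two building blocks chiro mentions can be sketched numerically. This is illustrative only: the standard normal base distribution, N = 5, and the grid are assumptions made purely for demonstration, and the convolution step applies to *independent* summands, which the two largest order statistics are not.

```python
import numpy as np
from scipy import stats

# Illustrative sketch: order statistics for the max, numerical
# convolution for the sum. Standard normal base distribution and
# N = 5 are assumptions made purely for illustration.
N = 5
x = np.linspace(-8, 8, 4001)
dx = x[1] - x[0]

f = stats.norm.pdf(x)   # base pdf f(x)
F = stats.norm.cdf(x)   # base cdf F(x)

# Order statistics: P(max <= x) = F(x)^N, so pdf_max = N F^(N-1) f.
pdf_max = N * F**(N - 1) * f

# Convolution: the pdf of a sum of two *independent* RVs is the
# convolution of their pdfs; the result lives on a grid from
# 2*x[0] to 2*x[-1] with the same spacing dx.
pdf_sum = np.convolve(pdf_max, f) * dx
```

Both arrays should integrate to 1 on their grids, which makes a quick sanity check before moving to the harder dependent case.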
 
chiro said:
Hey S_David.

For this problem you can use order statistics and the convolution theorem to get what you want.

The order statistics are used for getting the \max(X) term, and the convolution is used to calculate the distribution of a sum of independent (but not necessarily identically distributed) random variables.

Are you familiar with these?

I worked with order statistics when I chose one random variable, but in this case I need to pick the two largest random variables. The problem is that the distribution of each component is very complicated, and I need a way to handle this. Actually:

X_i=\|h_{1i}\|^2\|h_{2i}\|^2

where each of the components in the product is a chi-square RV with 2L degrees of freedom.

Thanks
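Since a closed form is hard here, one hedged sanity check for any candidate result is straight Monte Carlo. In the sketch below, L = 2, N = 6, and the trial count are arbitrary illustrative choices, not values from the thread:

```python
import numpy as np

# Monte Carlo sketch of the target distribution. L = 2, N = 6 and the
# trial count are arbitrary illustrative choices.
rng = np.random.default_rng(0)
L, N, trials = 2, 6, 200_000

# Each X_i = ||h_1i||^2 * ||h_2i||^2 is a product of two independent
# chi-square(2L) variables.
Xi = (rng.chisquare(2 * L, size=(trials, N))
      * rng.chisquare(2 * L, size=(trials, N)))

# X = sum of the two largest of the N samples in each trial.
X = np.sort(Xi, axis=1)[:, -2:].sum(axis=1)
```

The empirical CDF of `X` can then be compared against any analytic expression derived later in the thread.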
 
S_David said:
I worked with order statistics when I chose one random variable, but in this case I need to pick the two largest random variables. The problem is that the distribution of each component is very complicated, and I need a way to handle this. Actually:

X_i=\|h_{1i}\|^2\|h_{2i}\|^2

where each of the components in the product is a chi-square RV with 2L degrees of freedom.

Thanks

One idea that comes to mind is to use a transformation technique.

One suggestion I have is to find the pdf/cdf of the square of a chi-squared distribution. Is this what you mean when you use a norm-squared term?

After this you could use other techniques to get the pdf/cdf of your X_i, which is the product of two such norm-squared terms.

After that you can use other techniques like order statistics and convolution to do the rest.
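The product step can be written down directly: for independent nonnegative U and V, the pdf of Z = UV is f_Z(z) = \int_0^\infty f_U(u) f_V(z/u)\,du/u. A hedged numerical sketch of that integral, with L = 2 as an illustrative assumption:

```python
import numpy as np
from scipy import stats, integrate

# pdf of Z = U * V for independent U, V ~ chi-square(2L), via the
# standard product-density integral f_Z(z) = \int f_U(u) f_V(z/u) du/u.
# L = 2 is an illustrative assumption.
L = 2

def chi2_pdf(t):
    return stats.chi2.pdf(t, df=2 * L)

def product_pdf(z):
    integrand = lambda u: chi2_pdf(u) * chi2_pdf(z / u) / u
    val, _ = integrate.quad(integrand, 0, np.inf, limit=200)
    return val
```

The resulting f_Z can then feed the order-statistics and convolution steps; as a check, it should integrate to 1 over the positive axis.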
 
S_David said:
Hi all,

I have a random variable (RV):

X = \max_{i \neq j}(X_i + X_j)

where X_i and X_j are two different RVs from a set of N i.i.d. RVs. I need to find the distribution of X. What is the most efficient way?

Thanks in advance

These can get fairly complicated:

f_{X_{(i)},X_{(j)}}(u, v) = \frac{n!}{(i-1)!\,(j-1-i)!\,(n-j)!}\, f_X(u)\, f_X(v)\, [F_X(u)]^{i-1}\, [F_X(v)-F_X(u)]^{j-1-i}\, [1-F_X(v)]^{n-j}

for -\infty < u < v < \infty

Maybe you can work backwards from this.
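The joint order-statistics pdf quoted above can be sanity-checked numerically. Below it is specialized, purely as a self-check assumption, to a Uniform(0,1) base distribution (so f = 1 and F(u) = u) and to the two largest order statistics i = n-1, j = n, where the density must integrate to 1 over the region u < v:

```python
import math
from scipy import integrate

# Joint pdf of the i-th and j-th order statistics (i < j) out of n,
# specialized to a Uniform(0,1) base distribution purely as a check.
def joint_pdf(u, v, n, i, j):
    if not (0.0 <= u < v <= 1.0):
        return 0.0
    c = (math.factorial(n)
         / (math.factorial(i - 1) * math.factorial(j - 1 - i) * math.factorial(n - j)))
    return c * u**(i - 1) * (v - u)**(j - 1 - i) * (1.0 - v)**(n - j)

n = 5
# For the two largest (i = n-1, j = n) the density must integrate to 1
# over 0 < u < v < 1.
total, _ = integrate.dblquad(lambda v, u: joint_pdf(u, v, n, n - 1, n),
                             0, 1, lambda u: u, lambda u: 1)
```

For this uniform case the joint pdf reduces to n(n-1) u^{n-2} on 0 < u < v < 1, which makes the unit-integral check easy to verify by hand as well.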
 
chiro said:
One idea that comes to mind is to use a transformation technique.

One suggestion I have is to find the pdf/cdf of the square of a chi-squared distribution. Is this what you mean when you use a norm-squared term?

After this you could use other techniques to get the pdf/cdf of your X_i, which is the product of two such norm-squared terms.

After that you can use other techniques like order statistics and convolution to do the rest.

The steps are clear in my mind; however, the details are very complicated and the results are very involved.

I hoped there was an easier way.

Thanks anyway
 
SW VandeCarr said:
These can get fairly complicated:

f_{X_{(i)},X_{(j)}}(u, v) = \frac{n!}{(i-1)!\,(j-1-i)!\,(n-j)!}\, f_X(u)\, f_X(v)\, [F_X(u)]^{i-1}\, [F_X(v)-F_X(u)]^{j-1-i}\, [1-F_X(v)]^{n-j}

for -\infty < u < v < \infty

Maybe you can work backwards from this.

What are these? I am sorry, but I did not get it.

Thanks
 
S_David said:
What are these? I am sorry, but I did not get it.

Thanks

Sorry. Bad LaTeX. It should be the joint pdf of X_{(i)}, X_{(j)}. This is what you asked for, isn't it?

f_{X_{(i)},X_{(j)}}(u, v) = \frac{n!}{(i-1)!\,(j-1-i)!\,(n-j)!}\, f_X(u)\, f_X(v)\, [F_X(u)]^{i-1}\, [F_X(v)-F_X(u)]^{j-1-i}\, [1-F_X(v)]^{n-j}

for -\infty < u < v < \infty
 
SW VandeCarr said:
Sorry. Bad LaTeX. It should be the joint pdf of X_{(i)}, X_{(j)}. This is what you asked for, isn't it?

f_{X_{(i)},X_{(j)}}(u, v) = \frac{n!}{(i-1)!\,(j-1-i)!\,(n-j)!}\, f_X(u)\, f_X(v)\, [F_X(u)]^{i-1}\, [F_X(v)-F_X(u)]^{j-1-i}\, [1-F_X(v)]^{n-j}

for -\infty < u < v < \infty

Yeah, I need the p.d.f. of the sum of the two largest RVs.
 
S_David said:
Yeah, I need the p.d.f. of the sum of the two largest RVs.

You quoted it before I could correct another mistake. I think it's OK now.
 
S_David said:
Hi all,

I have a random variable (RV):

X = \max_{i \neq j}(X_i + X_j)

where X_i and X_j are two different RVs from a set of N i.i.d. RVs. I need to find the distribution of X. What is the most efficient way?

Thanks in advance

If N is large enough, you can use the Extreme Value Theory limiting distribution for \max_i X_i instead of order statistics, which I think would simplify the calculations... Good luck!
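For intuition on the EVT suggestion, here is a hedged sketch with Exp(1) as a stand-in base distribution (an assumption; the thread's product-of-chi-squares variables would need their own norming constants). For Exp(1), the max of N samples minus log N converges to the standard Gumbel law with CDF exp(-exp(-x)):

```python
import numpy as np

# EVT sketch: for X_i ~ Exp(1), max_i X_i - log(N) is approximately
# standard Gumbel for large N. Exp(1), N = 200, and the trial count
# are illustrative assumptions only.
rng = np.random.default_rng(1)
N, trials = 200, 50_000

m = rng.exponential(size=(trials, N)).max(axis=1) - np.log(N)

# Compare the empirical CDF with the Gumbel CDF exp(-exp(-x))
# at a few test points.
xs = np.array([-1.0, 0.0, 1.0, 2.0])
emp = (m[:, None] <= xs).mean(axis=0)
gumbel = np.exp(-np.exp(-xs))
```

The close agreement even at moderate N is why the EVT route can replace the exact N-fold order-statistics bookkeeping when N is large.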
 