Distribution of Maximum of Two Random Variables


Discussion Overview

The discussion revolves around finding the distribution of a random variable defined as the sum of the two largest of a set of N independent and identically distributed (i.i.d.) random variables. Participants explore various mathematical techniques applicable to this problem, including order statistics, convolution, transformation techniques, and Extreme Value Theory.

Discussion Character

  • Exploratory
  • Technical explanation
  • Mathematical reasoning
  • Debate/contested

Main Points Raised

  • Some participants suggest using order statistics and the convolution theorem to find the distribution of the maximum of two random variables.
  • Others mention the complexity of the distribution of the components involved, particularly when dealing with products of chi-squared random variables.
  • A participant proposes using transformation techniques to derive the probability density function (pdf) or cumulative distribution function (cdf) of the square of a chi-squared distribution.
  • Another participant provides a joint pdf formula for the maximum of two random variables, indicating that it may be useful for the problem at hand.
  • Some express uncertainty about the mathematical expressions shared, seeking clarification on their meaning and application.
  • One participant suggests that using Extreme Value Theory might simplify the calculations if the number of random variables is large enough.

Areas of Agreement / Disagreement

Participants do not reach a consensus on the best approach to solve the problem, as multiple competing views and techniques are presented. The discussion remains unresolved regarding the most efficient method to find the distribution of the random variable.

Contextual Notes

Participants note the complexity of the distributions involved and the potential difficulties in applying the proposed techniques. There are also indications of unresolved mathematical steps and the need for further clarification on certain expressions.

EngWiPy
Hi all,

I have a random variable (RV):

[tex]X=\text{max}X_i+X_j[/tex]

where Xi and Xj are two different RVs from a set of N i.i.d. RVs. I need to find the distribution of X. What is the most efficient way?

Thanks in advance
 
S_David said:
Hi all,

I have a random variable (RV):

[tex]X=\text{max}X_i+X_j[/tex]

where Xi and Xj are two different RVs from a set of N i.i.d. RVs. I need to find the distribution of X. What is the most efficient way?

Thanks in advance

Hey S_David.

For this problem you can use order statistics and the convolution theorem to get what you want.

Order statistics are used to get the Max(X) term, and the convolution is used to calculate the distribution of a sum of independent (but not necessarily identically distributed) random variables.

Are you familiar with these?
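Both tools can be illustrated numerically. A minimal Python sketch, using standard normals as a stand-in for the actual component distribution (the `max_cdf`/`sum_pdf` names are just for illustration):

```python
import numpy as np
from scipy import stats
from scipy.integrate import trapezoid

# CDF of the maximum of n i.i.d. variables: P(max <= x) = F(x)^n
def max_cdf(x, cdf, n):
    return cdf(x) ** n

# pdf of the sum of two independent variables by numerical convolution:
# f_{A+B}(z) = integral of f_A(t) * f_B(z - t) dt
def sum_pdf(z, pdf_a, pdf_b, grid):
    return trapezoid(pdf_a(grid) * pdf_b(z - grid), grid)

# Example with standard normals (illustration only)
print(max_cdf(1.0, stats.norm.cdf, 5))           # P(max of 5 N(0,1) <= 1)

grid = np.linspace(-10.0, 10.0, 4001)
approx = sum_pdf(0.0, stats.norm.pdf, stats.norm.pdf, grid)
exact = stats.norm.pdf(0.0, scale=np.sqrt(2.0))  # sum of two N(0,1) is N(0, 2)
print(approx, exact)
```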
 
chiro said:
Hey S_David.

For this problem you can use order statistics and the convolution theorem to get what you want.

Order statistics are used to get the Max(X) term, and the convolution is used to calculate the distribution of a sum of independent (but not necessarily identically distributed) random variables.

Are you familiar with these?

I worked with order statistics when I chose one random variable, but in this case I need to pick the two largest random variables. The problem is that the distribution of each component is very complicated, and I need a way to handle this. Actually:

[tex]X_i=\|h_{1i}\|^2\|h_{2i}\|^2[/tex]

where each of the components in the multiplication is Chi-square RV with 2L degrees of freedom.

Thanks
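As a quick sanity check on this setup, a Monte Carlo sketch of one X_i as a product of two independent chi-square(2L) variables (L = 2 is an arbitrary choice for illustration); by independence, E[X_i] = (2L)^2:

```python
import numpy as np

rng = np.random.default_rng(0)
L = 2                      # assumed value for illustration; each factor has 2L dof
n_samples = 200_000

# X_i = ||h_1i||^2 * ||h_2i||^2, each factor chi-square with 2L degrees of freedom
a = rng.chisquare(2 * L, n_samples)
b = rng.chisquare(2 * L, n_samples)
x = a * b

# Independence gives E[X_i] = E[a] * E[b] = (2L)^2
print(x.mean())            # close to (2*L)**2 = 16
```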
 
S_David said:
I worked with order statistics when I chose one random variable, but in this case I need to pick the two largest random variables. The problem is that the distribution of each component is very complicated, and I need a way to handle this. Actually:

[tex]X_i=\|h_{1i}\|^2\|h_{2i}\|^2[/tex]

where each of the components in the multiplication is Chi-square RV with 2L degrees of freedom.

Thanks

One idea that comes to mind is to use a transformation technique.

One suggestion I have is to find the pdf/cdf of the square of a chi-squared distribution. Is this what you mean when you use a norm-squared term?

After this you could use other techniques to find your expression where you multiply two norm-squared terms to get the pdf/cdf of your X_i.

After that you can use other techniques like order statistics and convolution to do the rest.
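Following the transformation route, the product of two independent chi-square variables in fact has a known closed form involving the modified Bessel function K_0. A sketch checking that this density integrates to one (k = 4, i.e. L = 2, is an illustrative choice):

```python
import numpy as np
from scipy.special import gamma, kv
from scipy.integrate import trapezoid

# pdf of Z = X * Y for independent X, Y ~ chi-square(k), obtainable by the
# transformation approach: f_Z(z) = z^(k/2-1) K_0(sqrt(z)) / (2^(k-1) Gamma(k/2)^2)
def product_chi2_pdf(z, k):
    return z ** (k / 2 - 1) * kv(0, np.sqrt(z)) / (2 ** (k - 1) * gamma(k / 2) ** 2)

k = 4                                   # e.g. 2L with L = 2 (illustrative choice)
z = np.linspace(1e-6, 400.0, 200_001)
total = trapezoid(product_chi2_pdf(z, k), z)
print(total)                            # close to 1
```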
 
S_David said:
Hi all,

I have a random variable (RV):

[tex]X=\text{max}X_i+X_j[/tex]

where Xi and Xj are two different RVs from a set of N i.i.d. RVs. I need to find the distribution of X. What is the most efficient way?

Thanks in advance

These can get fairly complicated:

[itex]f_{X_{(i)},X_{(j)}}(u, v) = \frac{n!}{(i - 1)!(j - 1 - i)!(n - j)!}\, f_X(u) f_X(v) [F_X(u)]^{i-1} [F_X(v) - F_X(u)]^{j-1-i} [1 - F_X(v)]^{n-j}[/itex]

for [itex]−\infty < u < v < \infty[/itex]

Maybe you can work backwards from this.
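One way to sanity-check the formula is to code the joint density and verify it integrates to one. A sketch for the top two order statistics (i = n-1, j = n) of n = 4 standard normals (the normal choice and the function names are illustrative):

```python
import numpy as np
from math import factorial
from scipy import stats

# Joint pdf of order statistics X_(i), X_(j), i < j, from n i.i.d. draws,
# as in the formula above (here with standard normal f and F)
def joint_order_pdf(u, v, i, j, n, f=stats.norm.pdf, F=stats.norm.cdf):
    c = factorial(n) / (factorial(i - 1) * factorial(j - 1 - i) * factorial(n - j))
    val = (c * f(u) * f(v) * F(u) ** (i - 1)
           * (F(v) - F(u)) ** (j - 1 - i)
           * (1.0 - F(v)) ** (n - j))
    return np.where(u < v, val, 0.0)   # density lives on u < v

# It should integrate to 1 over u < v; check for the top two of n = 4
n, i, j = 4, 3, 4
g = np.linspace(-8.0, 8.0, 1601)
du = g[1] - g[0]
U, V = np.meshgrid(g, g, indexing="ij")
total = joint_order_pdf(U, V, i, j, n).sum() * du * du
print(total)    # close to 1
```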
 
chiro said:
One idea that comes to mind is to use a transformation technique.

One suggestion I have is to find the pdf/cdf of the square of a chi-squared distribution. Is this what you mean when you use a norm-squared term?

After this you could use other techniques to find your expression where you multiply two norm-squared terms to get the pdf/cdf of your X_i.

After that you can use other techniques like order statistics and convolution to do the rest.

The steps are clear in my mind; however, the details are very complicated and the results very involved.

I hoped there was an easier way.

Thanks anyway
 
SW VandeCarr said:
These can get fairly complicated:

[itex]f_{X_{(i)},X_{(j)}}(u, v) = \frac{n!}{(i - 1)!(j - 1 - i)!(n - j)!}\, f_X(u) f_X(v) [F_X(u)]^{i-1} [F_X(v) - F_X(u)]^{j-1-i} [1 - F_X(v)]^{n-j}[/itex]

for [itex]−\infty < u < v < \infty[/itex]

Maybe you can work backwards from this.

What are these? I am sorry, but I did not get it.

Thanks
 
S_David said:
What are these? I am sorry, but I did not get it.

Thanks

Sorry. Bad LaTeX. It should be the joint pdf of X_(i), X_(j). This is what you asked for, isn't it?

[itex]f_{X_{(i)},X_{(j)}}(u, v) = \frac{n!}{(i - 1)!(j - 1 - i)!(n - j)!}\, f_X(u) f_X(v) [F_X(u)]^{i-1} [F_X(v) - F_X(u)]^{j-1-i} [1 - F_X(v)]^{n-j}[/itex]
for [itex]-\infty< u < v < \infty[/itex]
 
SW VandeCarr said:
Sorry. Bad LaTeX. It should be the joint pdf of X_(i), X_(j). This is what you asked for, isn't it?

[itex]f_{X_{(i)},X_{(j)}}(u, v) = \frac{n!}{(i - 1)!(j - 1 - i)!(n - j)!}\, f_X(u) f_X(v) [F_X(u)]^{i-1} [F_X(v) - F_X(u)]^{j-1-i} [1 - F_X(v)]^{n-j}[/itex]
for [itex]-\infty< u < v < \infty[/itex]

Yeah, I need the p.d.f. of the sum of the two largest RVs.
 
S_David said:
Yeah, I need the p.d.f. of the sum of the two largest RVs.

You quoted it before I could correct another mistake. I think it's OK now.
 
S_David said:
Hi all,

I have a random variable (RV):

[itex]X=\text{max}X_i+X_j[/itex]

where Xi and Xj are two different RVs from a set of N i.i.d. RVs. I need to find the distribution of X. What is the most efficient way?

Thanks in advance

If N is large enough, you can use the Extreme Value Theory distribution for [tex]\text{max}X_i[/tex] instead of order statistics, which I think would simplify the calculations... Good luck!
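That approximation can be checked numerically. A sketch comparing the exact CDF of the max of N standard normals against the Gumbel limit with the classical normalizing constants (the normal choice is illustrative; convergence in N is known to be slow):

```python
import numpy as np
from scipy import stats

# Extreme Value Theory: for large N, P(max of N i.i.d. N(0,1) <= x) is
# approximately exp(-exp(-(x - b_N) / a_N)) with the classical constants
N = 1000
aN = 1.0 / np.sqrt(2.0 * np.log(N))
bN = np.sqrt(2.0 * np.log(N)) - (np.log(np.log(N)) + np.log(4.0 * np.pi)) / (
    2.0 * np.sqrt(2.0 * np.log(N)))

def gumbel_max_cdf(x):
    return np.exp(-np.exp(-(x - bN) / aN))

x = bN                                  # at x = b_N the Gumbel gives exp(-1)
exact = stats.norm.cdf(x) ** N          # exact CDF of the maximum
print(exact, gumbel_max_cdf(x))         # close, though convergence is slow
```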
 
