Need Some Mathematical Guidance Regarding Random Variables

Summary
The discussion focuses on establishing a mathematical relationship for the probability that one random variable, x1, drawn from an exponential distribution, is less than another variable, x2, drawn from a power law distribution. The joint probability distribution function is key, defined as p_{X,Y}(x,y) = p_X(x)p_Y(y) due to the independence of the variables. An example is provided using exponential distributions, leading to the conclusion that P(X < Y) can be calculated through integration, resulting in the formula P(X < Y) = α / (α + β). The conversation emphasizes the importance of integrating over the appropriate region to derive the desired probability. Overall, the guidance provided offers a clear method for calculating probabilities involving independent random variables.
joshthekid
This is not a homework question but a project I am working on, and I need someone with more mathematical prowess than myself. I am using a computer program to draw random numbers x1 and x2 from two independent distributions, and I want to establish a theoretical mathematical relationship for the probability that x1 will be less than x2. In the first case both distributions are exponential, i.e. exp(-αx); in the second case the draw comes from a power law, x^(-β), over a limited range a to b. I have been working on this for about a week, so any help or guidance would be much appreciated. Thanks

Josh
 
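For reference, the setup described in the question can be simulated directly. Here is a minimal sketch; the values of α, β, a, and b are illustrative assumptions (none are stated in the thread), and the truncated power law is drawn by inverting its CDF, which requires β ≠ 1.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Illustrative parameter values, not taken from the thread
alpha = 1.0                   # exponential: p(x) = alpha * exp(-alpha * x)
beta, a, b = 2.5, 1.0, 10.0   # power law p(x) ∝ x^(-beta) on [a, b], beta != 1

# x1 ~ exponential; numpy parameterizes by the scale (mean) 1/alpha
x1 = rng.exponential(1.0 / alpha, n)

# x2 ~ truncated power law via inverse-CDF sampling:
# F(x) = (x^(1-beta) - a^(1-beta)) / (b^(1-beta) - a^(1-beta))
u = rng.random(n)
x2 = (a**(1 - beta) + u * (b**(1 - beta) - a**(1 - beta)))**(1.0 / (1 - beta))

print("estimated P(x1 < x2):", np.mean(x1 < x2))
```

The same estimate can then be compared against whatever closed-form expression the theory produces.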
What you need to do is write down the joint probability distribution function. This is a function ##p_{X,Y}## in variables x and y such that for any region A, ##\iint_A p_{X,Y} = P((X, Y) \in A)##.

Because your random variables are independent, ##p_{X,Y}(x,y) = p_X(x)p_Y(y)##, where ##p_X## and ##p_Y## are the pdfs of your individual random variables.

Then you just integrate over the region A that is the set of points where X < Y, and you have P(X < Y).
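That region integral can also be evaluated numerically for whatever pdfs are involved. A minimal sketch for two exponentials with assumed example rates, truncating the infinite range where the tail is negligible:

```python
import numpy as np

alpha, beta = 1.5, 0.5   # example rates, chosen arbitrarily

# P(X < Y) = ∫_0^∞ p_Y(y) [∫_0^y p_X(x) dx] dy; for an exponential the
# inner integral has the closed form 1 - exp(-alpha * y)
y = np.linspace(0.0, 60.0, 60001)   # truncated upper limit
f = beta * np.exp(-beta * y) * (1 - np.exp(-alpha * y))

# trapezoidal rule on a uniform grid
p = np.sum((f[:-1] + f[1:]) / 2) * (y[1] - y[0])
print(p)  # close to alpha / (alpha + beta) = 0.75
```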

I'll do the exponential distribution as an example. ##p_X(x) = \alpha e^{-\alpha x}##, and ##p_Y(y) = \beta e^{-\beta y}##. So the joint pdf is ##p_{X,Y}(x,y) = \alpha \beta e^{-\alpha x - \beta y}##. Now you integrate:

##P(X < Y) = \int_0^\infty \int_0^y \alpha \beta e^{-\alpha x - \beta y}\,dx\,dy = \beta \int_0^\infty \left(e^{-\beta y} - e^{-(\alpha + \beta) y}\right)dy = 1 - {\beta \over \alpha + \beta} = {\alpha \over \alpha + \beta}.##

So that's the answer.
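A quick simulation agrees with this closed form; a sketch with arbitrary example rates:

```python
import numpy as np

rng = np.random.default_rng(42)
alpha, beta = 1.5, 0.5   # arbitrary example rates
n = 1_000_000

# numpy's exponential takes the scale (mean), i.e. 1/rate
x = rng.exponential(1.0 / alpha, n)
y = rng.exponential(1.0 / beta, n)

estimate = np.mean(x < y)
exact = alpha / (alpha + beta)
print(estimate, exact)  # the two agree to roughly three decimal places
```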
 
Thanks, Eigenperson!

Josh
 
