Distribution Function: Computing P(X-Y > a) with f(m,v)

SUMMARY

The discussion focuses on computing the distribution function P(X-Y > a) using the joint density function f(m,v). The user seeks clarification on whether they can derive this new distribution function by taking partial derivatives of the existing density function. Two approaches are outlined: calculating the double integral for a specific value of a or deriving the distribution of the random variable Z = X - Y through integration or variable transformation. The user expresses confusion regarding the setup of limits and the dependence on variables in their calculations.

PREREQUISITES
  • Understanding of joint probability density functions
  • Familiarity with double integrals in probability theory
  • Knowledge of transformation of variables in statistics
  • Experience with partial derivatives in the context of probability distributions
NEXT STEPS
  • Study the computation of double integrals for joint distributions
  • Learn about transformations of random variables in probability
  • Explore the concept of conditional probability density functions
  • Review examples of deriving distributions from joint density functions
USEFUL FOR

Statisticians, data analysts, and anyone involved in probability theory or statistical modeling who seeks to understand the computation of distribution functions and joint density functions.

Rane3
I've computed a distribution function f(m,v) by taking partials of P(X<m, Y<v) with respect to m, v. Suppose I wanted the distribution function for P(X-Y > a). Since I know f(m,v), can I use that to help me compute my new distribution function by taking partials? If so, how? I'm a little confused about this. Any good resources/references?
 
You mean you have a density function [tex]f(m,v)[/tex] - the (joint) distribution function would be [tex]F(m,v) = P(X < m, Y < v)[/tex].

I'm not sure which of the following two items you want for your second question:

i) You want a specific calculation of [tex]P(X - Y > a)[/tex] for a given value of [tex]a[/tex]. In this case you calculate this double integral

[tex] \iint_{\{X-Y > a\}} f(x,y) \, dx dy[/tex]
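As a numeric illustration of this double integral (assuming, purely for illustration, that X and Y are independent standard normals, so f(x,y) is the product of two normal densities - this is not the thread's f), a sketch:

```python
# Sketch: evaluating P(X - Y > a) as the double integral over {x - y > a},
# assuming for illustration that X and Y are independent standard normals,
# i.e. f(x, y) = phi(x) * phi(y).
import numpy as np
from scipy import integrate
from scipy.stats import norm

a = 1.0

# Region {x - y > a}: for each y, x ranges over (y + a, inf).
p, _err = integrate.dblquad(
    lambda x, y: norm.pdf(x) * norm.pdf(y),  # inner variable (x) first
    -np.inf, np.inf,   # outer limits (y)
    lambda y: y + a,   # inner lower limit: x = y + a
    lambda y: np.inf,  # inner upper limit
)

# Check against the closed form: X - Y ~ N(0, 2),
# so P(X - Y > a) = 1 - Phi(a / sqrt(2)).
exact = 1 - norm.cdf(a / np.sqrt(2))
print(p, exact)
```

The same setup works for any joint density: only the integrand and the region's limits change.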

ii) You want an expression for the distribution of the random variable [tex]Z = X - Y[/tex].
You can either work it out as an integral:

[tex] P(Z \le z) = \iint_{\{X-Y \le z\}} f(x,y) \, dx dy[/tex]

or you can do a transformation of variables approach.
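On the "partials" question specifically: since [tex]z[/tex] enters only through the upper limit of the inner integral, a single derivative in [tex]z[/tex] (not partials in both variables) gives the density of [tex]Z = X - Y[/tex]. Writing the region [tex]\{x - y \le z\}[/tex] as [tex]x \le y + z[/tex],

[tex] F_Z(z) = P(X - Y \le z) = \int_{-\infty}^{\infty} \int_{-\infty}^{y+z} f(x,y) \, dx \, dy[/tex]

and differentiating under the outer integral sign with respect to [tex]z[/tex] gives

[tex] f_Z(z) = \int_{-\infty}^{\infty} f(y+z, y) \, dy[/tex]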
 
I am looking for the second description, although I just want the probability density. If I know that:
X>0
X-Y>Z
and I know f(x,y), how can I find the density for X-Y>Z by taking partial derivatives of the integral? I'm getting myself confused. Should it again be partials with respect to Y and X, like I used to find f(x,y) in the first place? It seems that when I set up my limits, there is no dependence on Y, and that throws me off.
[tex]\int_{-\infty}^{X}\int_0^{X-z}f(x,y)dydx[/tex] Is this even the correct integral?
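One way to sanity-check a candidate integral like this is to compare its value against a Monte Carlo estimate for a density with a known answer; a minimal sketch, assuming for illustration that X and Y are independent Exp(1) variables (so X > 0 holds automatically, as in the question):

```python
# Sketch: sanity-checking a formula for P(X - Y > z) against a Monte
# Carlo estimate, assuming for illustration X, Y independent Exp(1).
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
x = rng.exponential(size=n)
y = rng.exponential(size=n)

z = 0.5
mc = (x - y > z).mean()

# For iid Exp(1), X - Y is Laplace(0, 1), so P(X - Y > z) = exp(-z) / 2
# for z >= 0; a correct integral setup should reproduce this value.
exact = np.exp(-z) / 2
print(mc, exact)
```

If a proposed set of limits gives a different number, the region of integration is wrong.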
 
