Joint Density of (X,Y) from R_PDF(r),V_PDF(v)

  • Context: Undergrad
  • Thread starter: rabbed
  • Tags: Distributions, Joint
SUMMARY

The joint density function XY_PDF(x,y) for random variables (X,Y) = (R*cos(V), R*sin(V)) can be derived from the PDFs R_PDF(r) = 2*r/K^2 and V_PDF(v) = 1/(2*pi) by the change-of-variables (Jacobian) method. Without those PDFs in hand, computing XY_PDF(x,y) requires first finding independent variables that describe X and Y, deriving their marginal distributions, and then combining them to form the joint distribution. The Kolmogorov-Arnold representation theorem is mentioned as a general statement about decomposing multivariate functions in terms of univariate ones, in contrast with the special cases where a joint density factors into a product of marginals.

PREREQUISITES
  • Understanding of joint probability distributions
  • Familiarity with Jacobians in coordinate transformations
  • Knowledge of marginal and conditional distributions
  • Awareness of the Kolmogorov-Arnold representation theorem
NEXT STEPS
  • Study the application of Jacobians in probability density functions
  • Learn about the Kolmogorov-Arnold representation theorem in depth
  • Explore independent component analysis techniques
  • Investigate methods for simulating joint distributions from independent random variables
USEFUL FOR

Statisticians, data scientists, and mathematicians working with multivariate distributions, particularly those interested in transforming and simulating joint probability distributions.

rabbed:
For random variables (X,Y) = (R*cos(V),R*sin(V))
I have R_PDF(r) = 2*r/K^2
and V_PDF(v) = 1/(2*pi)
where (0 < r < K) and (0 < v < 2*pi)

Is XY_PDF(x,y) the joint density of X and Y that I get by using the PDF method with Jacobians from the distribution R_PDF(r)*V_PDF(v)?
So without having R_PDF(r) and V_PDF(v), just knowing that X^2+Y^2 = R^2: if I want to get XY_PDF(x,y), would I first need to find two independent variables describing both X and Y, then those independent variables' marginal distributions in order to create their joint distribution, and only then calculate XY_PDF(x,y)?
Because only the marginals of independent variables can be multiplied to form a joint density?
 
rabbed said:
For random variables (X,Y) = (R*cos(V),R*sin(V))
I have R_PDF(r) = 2*r/K^2
and V_PDF(v) = 1/(2*pi)
where (0 < r < K) and (0 < v < 2*pi)

Is XY_PDF(x,y) the joint density of X and Y that I get by using the PDF method with Jacobians from the distribution R_PDF(r)*V_PDF(v)?

Yes. You would use a Jacobian to change coordinates when doing an integration, and a "joint density" in a particular coordinate system is the integrand in that coordinate system. In your example, the joint density is a function of two variables. If you think of a distribution as analogous to a physical object, then its mass and mass density don't change physically just because you change the coordinate system you use to describe the object.
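As a sketch of how that Jacobian computation plays out for this example (assuming K = 1, a value not fixed in the thread): the joint density in polar coordinates is R_PDF(r)*V_PDF(v) = r/(pi*K^2), the Jacobian of (x,y) = (r*cos v, r*sin v) with respect to (r,v) is r, and dividing by it gives the constant XY_PDF(x,y) = 1/(pi*K^2) on the disk x^2 + y^2 < K^2. A quick Monte Carlo check:

```python
import numpy as np

rng = np.random.default_rng(0)
K = 1.0
n = 200_000

# R_PDF(r) = 2r/K^2 on (0, K): inverse-transform sampling gives R = K*sqrt(U)
r = K * np.sqrt(rng.uniform(size=n))
# V_PDF(v) = 1/(2*pi) on (0, 2*pi)
v = rng.uniform(0.0, 2.0 * np.pi, size=n)

x, y = r * np.cos(v), r * np.sin(v)

# The Jacobian method predicts XY_PDF(x,y) = (r/(pi*K^2)) / r = 1/(pi*K^2),
# i.e. (X, Y) is uniform on the disk x^2 + y^2 < K^2.
predicted = 1.0 / (np.pi * K**2)

# Empirical density: fraction of samples in a small box around (0.3, 0.2),
# divided by the box area.
box = 0.1
inside = (np.abs(x - 0.3) < box / 2) & (np.abs(y - 0.2) < box / 2)
empirical = inside.mean() / box**2
print(predicted, empirical)
```

Sampling R as K*sqrt(U) is inverse-transform sampling: the CDF of R_PDF(r) = 2r/K^2 is (r/K)^2, whose inverse is K*sqrt(u).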
rabbed said:
So without having R_PDF(r) and V_PDF(v), just knowing that X^2+Y^2=R^2

What do you mean by "just knowing"? If you know the relation between (R,V) and (X,Y) but don't know the joint distribution of either (R,V) or (X,Y), then the relation between (R,V) and (X,Y) by itself doesn't give you the distribution of either vector.

rabbed said:
- if I want to get XY_PDF(x,y), would I first need to find two independent variables describing both X and Y, then those independent variables' marginal distributions in order to create their joint distribution, and only then calculate XY_PDF(x,y)?
Because only the marginals of independent variables can be multiplied to form a joint density?

If you have random variables (P,Q) with joint distribution f(P,Q), then it's very handy if you can find a change of coordinates under which the distribution factors as g(S,T) = h(S) m(T). However, this is not always possible.
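The thread's own example shows both sides of this: the density factors in polar coordinates, f(r,v) = R_PDF(r)*V_PDF(v), but not in Cartesian coordinates, because the circular support ties X and Y together. A small numerical sketch (again assuming K = 1): X and Y come out uncorrelated by symmetry, yet they are not independent, since X^2 and Y^2 are negatively correlated.

```python
import numpy as np

rng = np.random.default_rng(2)
K, n = 1.0, 500_000

r = K * np.sqrt(rng.uniform(size=n))          # inverse transform for R_PDF(r) = 2r/K^2
v = rng.uniform(0.0, 2.0 * np.pi, size=n)     # V_PDF(v) = 1/(2*pi)
x, y = r * np.cos(v), r * np.sin(v)

# X and Y are uncorrelated by the rotational symmetry of the disk...
cov_xy = np.cov(x, y)[0, 1]
# ...but not independent: if x^2 is large, y^2 must be small to stay inside
# the disk, so X^2 and Y^2 are negatively correlated.
cov_x2y2 = np.cov(x**2, y**2)[0, 1]
print(cov_xy, cov_x2y2)
```

So XY_PDF(x,y) here cannot be written as h(x) m(y), even though the same distribution factors perfectly in (r,v) coordinates.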

A statement about what is always possible is the Kolmogorov-Arnold representation theorem: https://en.wikipedia.org/wiki/Kolmogorov–Arnold_representation_theorem

The cases where g(S,T) can be written in some convenient way, such as g(S,T) = h(S) m(T) or g(S,T) = h(S) + m(T), are remarkable, and topics in statistics such as principal component analysis and independent component analysis center on finding empirical ways to decompose joint densities into such special forms.

I think what you have in mind is the fact that it's inconvenient to simulate a joint distribution in a computer program unless one can find a way to simulate it by simulating independent real-valued random variables. However, the fact that a given function f(P,Q) cannot be written as a product doesn't mean that f(P,Q) is an "unknown" function.
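One standard way to realize that idea in a simulation (a sketch, not something from the thread) is rejection sampling, which turns independent uniform variables into samples from any bounded joint density, even one that does not factor. Here the target is f(x,y) = x + y on the unit square, which integrates to 1 but is not a product h(x) m(y):

```python
import numpy as np

rng = np.random.default_rng(1)

# Target joint density on the unit square: f(x, y) = x + y.
# It integrates to 1 but cannot be written as h(x) * m(y).
def f(x, y):
    return x + y

f_max = 2.0          # rejection bound: f(x, y) <= 2 on [0, 1]^2
n_keep = 100_000
xs, ys = [], []
while len(xs) < n_keep:
    # three independent uniforms per candidate: a point (x, y) and a test value u
    x, y, u = rng.uniform(size=(3, n_keep))
    accept = u * f_max < f(x, y)       # keep (x, y) with probability f(x, y) / f_max
    xs.extend(x[accept])
    ys.extend(y[accept])
xs, ys = np.array(xs[:n_keep]), np.array(ys[:n_keep])

# Exact marginal mean for comparison: E[X] = ∫∫ x (x + y) dx dy = 1/3 + 1/4 = 7/12
print(xs.mean())
```

The acceptance rate is E[f]/f_max = 1/2 here, so the loop typically finishes in two or three passes.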
 
Thanks for the good answer, Stephen
 
