Joint distributions

  1. Mar 25, 2016 #1
    For random variables (X,Y) = (R*cos(V),R*sin(V))
    I have R_PDF(r) = 2*r/K^2
    and V_PDF(v) = 1/(2*pi)
    where (0 < r < K) and (0 < v < 2*pi)

    Is XY_PDF(x,y) the joint density of X and Y that I get by using the PDF method with Jacobians from the distribution R_PDF(r)*V_PDF(v)?
    And without having R_PDF(r) and V_PDF(v), just knowing that X^2+Y^2=R^2: if I want to get XY_PDF(x,y), would I first need to find two independent variables describing both X and Y, then those variables' marginal distributions in order to create their joint distribution, and only then calculate XY_PDF(x,y)?
    Because only the marginal distributions of independent variables can be multiplied to form a joint density?
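    To spell out what I mean by the "PDF method with Jacobians" (writing RV_PDF for the joint density of (R,V), and assuming R and V are independent so that RV_PDF(r,v) = R_PDF(r)*V_PDF(v)):

    XY_PDF(x,y) = RV_PDF(r(x,y), v(x,y)) * |det d(r,v)/d(x,y)|
                = RV_PDF(r(x,y), v(x,y)) / |det d(x,y)/d(r,v)|

    where (r(x,y), v(x,y)) is the inverse of the polar transformation and the two Jacobian determinants are reciprocals of each other.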
     
  2. Mar 26, 2016 #2

    Stephen Tashi
    Science Advisor
    Yes, you would use a Jacobian to change coordinates when doing an integration, and a "joint density" in a particular coordinate system is the integrand in that coordinate system. In your example, the joint distribution is a function of two variables. If you think of a distribution as analogous to a physical object, then its mass and mass density don't change physically just because you change the coordinate system you use to describe the object.
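    As a concrete illustration with the densities from post #1 (a sketch, taking R and V to be independent so that their joint density is R_PDF(r)*V_PDF(v)): for x = r*cos(v), y = r*sin(v) the Jacobian determinant is

    det d(x,y)/d(r,v) = cos(v)*(r*cos(v)) - (-r*sin(v))*sin(v) = r

    so the density in Cartesian coordinates is

    XY_PDF(x,y) = R_PDF(r)*V_PDF(v) / r = (2*r/K^2)*(1/(2*pi)) / r = 1/(pi*K^2),  for x^2 + y^2 < K^2,

    i.e. (X,Y) is uniform on the disk of radius K. The "mass" is the same in either coordinate system: integrating (2*r/K^2)*(1/(2*pi)) over 0 < r < K, 0 < v < 2*pi gives 1, and so does integrating 1/(pi*K^2) over the disk.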


    What do you mean by "just knowing"? If you know the relation between (R,V) and (X,Y) but don't know the joint distribution of either (R,V) or (X,Y), then the relation by itself doesn't give you the distribution of either vector.

    If you have random variables (P,Q) with joint distribution f(P,Q) then it's very handy if you can find a way to change coordinates and describe the distribution as g(S,T) = h(S) m(T). However, this is not always possible.
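    For instance, taking R and V in post #1 to be independent, the same distribution shows both situations at once: in polar coordinates the density (2*r/K^2)*(1/(2*pi)) factors into a function of r times a function of v, but in Cartesian coordinates the density 1/(pi*K^2) on the disk x^2 + y^2 < K^2 cannot be written as h(x)*m(y), because the support is not a rectangle. X and Y are not independent even though the density is constant.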

    A statement about what is always possible is the Kolmogorov–Arnold representation theorem: https://en.wikipedia.org/wiki/Kolmogorov–Arnold_representation_theorem

    The cases where g(S,T) can be written in some convenient way as g(S,T) = h(S) m(T) or g(S,T) = h(S) + m(T), etc., are remarkable, and topics in statistics such as principal component analysis or independent component analysis focus on finding empirical ways to decompose joint densities in special ways.

    I think what you have in mind is the fact that it's inconvenient to simulate a joint distribution in a computer program unless one can find a way to simulate it by simulating independent real-valued random variables. However, the fact that it is not possible to write a given function f(P,Q) as a product doesn't mean that f(P,Q) is an "unknown" function.
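    For example, here is a minimal simulation sketch in Python for the distribution in post #1 (the function name sample_xy is just for illustration), sampling the independent pair (R,V) by inverting the CDF of R and then transforming to (X,Y):

    import numpy as np

    def sample_xy(n, K, rng=None):
        # R has density 2*r/K**2 on (0, K), so its CDF is r**2/K**2 and the
        # inverse-CDF method gives R = K*sqrt(U) for U uniform on (0, 1).
        # V is uniform on (0, 2*pi), independent of R.
        rng = np.random.default_rng() if rng is None else rng
        r = K * np.sqrt(rng.random(n))
        v = 2 * np.pi * rng.random(n)
        return r * np.cos(v), r * np.sin(v)

    # quick check: (X,Y) should be uniform on the disk of radius K, so
    # P(X**2 + Y**2 < (K/2)**2) should come out near 1/4
    x, y = sample_xy(100_000, K=2.0)
    print((x**2 + y**2 < 1.0).mean())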
     
  3. Mar 27, 2016 #3
    Thanks for the good answer, Stephen
     