Why do Jacobian transformations in probability densities require a reciprocal?

Summary: Jacobian transformations in probability densities pick up a reciprocal because the density must absorb the change in the volume element, so that total probability remains normalized under the change of variables. The apparent contrast with calculus is a matter of bookkeeping: the Jacobian determinant of a transformation and that of its inverse are reciprocals of each other, so whether the factor appears directly or inverted depends on which map the Jacobian is taken of. The discussion illustrates this with a simple one-dimensional substitution, showing that the same reciprocal also appears in an ordinary calculus substitution once the notation is tracked carefully.
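In one dimension the bookkeeping can be written out explicitly. As a sketch in the thread's notation, with ##u = g(x)## and ##x = h(u)## inverse to each other, conservation of probability mass gives

$$f_U(u)\,|du| = f_X(x)\,|dx| \quad\Longrightarrow\quad f_U(u) = f_X\big(h(u)\big)\left|\frac{dx}{du}\right| = f_X\big(h(u)\big)\left|\frac{du}{dx}\right|^{-1},$$

so the same factor can be quoted either as ##|J(h)|## or as ##|J(g)|^{-1}##, depending on which of the two maps the Jacobian is taken of.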
IniquiTrance
Why is it that if you have:

$$U = g_1(x, y), \quad V = g_2(x, y)$$
$$X = h_1(u, v), \quad Y = h_2(u, v)$$

Then:

$$f_{U,V}(u,v)\,du\,dv = f_{X,Y}\big(h_1(u,v), h_2(u,v)\big)\,\left|J\big(h_1(u,v),h_2(u,v)\big)\right|^{-1}\,dx\,dy$$

While for variable transformations in calculus, you have:

$$du\,dv = \left|J\big(h_1(u,v),h_2(u,v)\big)\right|\,dx\,dy$$

without the reciprocal. Why do you take the reciprocal with probability densities, when the calculus version is typically written without it?

Thanks!
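For a concrete check of the density formula in the question, here is a minimal Monte Carlo sketch (the exponential distributions and the particular linear map are illustrative choices of mine, not from the thread), reading ##J## as the Jacobian determinant ##\partial(u,v)/\partial(x,y)## of the forward map, evaluated at ##\big(h_1(u,v), h_2(u,v)\big)##:

```python
import numpy as np

# Illustrative setup (my own choice): X, Y independent Exp(1), and the
# linear map  U = g1(X, Y) = 2X,  V = g2(X, Y) = X + Y,  whose inverse is
#   X = h1(U, V) = U/2,   Y = h2(U, V) = V - U/2.
# Forward Jacobian: J = d(u,v)/d(x,y) = det [[2, 0], [1, 1]] = 2, so the
# claimed density is  f_{U,V}(u, v) = f_{X,Y}(h1, h2) * |J|**(-1).

rng = np.random.default_rng(0)
n = 1_000_000
x = rng.exponential(size=n)
y = rng.exponential(size=n)
u, v = 2 * x, x + y

def f_xy(x, y):
    # Joint density of two independent Exp(1) variables.
    return np.exp(-x) * np.exp(-y) * (x >= 0) * (y >= 0)

def f_uv(u, v):
    # Transformed density, with the reciprocal of |J| = 2.
    return f_xy(u / 2, v - u / 2) / 2

# Compare the predicted density with an empirical estimate on a small box.
u0, v0, eps = 1.0, 1.5, 0.05
inside = (np.abs(u - u0) < eps) & (np.abs(v - v0) < eps)
empirical = inside.mean() / (2 * eps) ** 2
print(f"predicted {f_uv(u0, v0):.4f}  vs  empirical {empirical:.4f}")
```

Dropping the reciprocal (multiplying by ##|J|## instead of dividing) makes the predicted value disagree with the empirical estimate and breaks the normalization of ##f_{U,V}##.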
 
IniquiTrance said:
While for variable transformations in calculus, you have:
$$du\,dv = \left|J\big(h_1(u,v),h_2(u,v)\big)\right|\,dx\,dy$$

We should check that.

We could begin by looking at a simpler case.

If we consider the integral ##\int_{0}^{1} 1 \, du## and use the substitution ##x = 2u##, we have ##du = (1/2)\,dx## and the range of ##x## in the integration is ##[0,2]##.

Relating this to the notation in your question, ##x = h_1(u) = 2u## and ##|J(h_1(u))| = 2##, yet ##du = (1/2)\,dx = |J(h_1(u))|^{-1}\,dx##: the reciprocal appears in the ordinary calculus substitution too, so the calculus formula you quoted has the Jacobian on the wrong side.
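A quick numerical companion to that example (a sketch of my own, not from the post): if ##U \sim \mathrm{Uniform}(0,1)## and ##X = h_1(U) = 2U##, the density of ##X## should be ##f_U(x/2)\,|J(h_1)|^{-1} = 1/2## on ##[0, 2]##.

```python
import numpy as np

# Check that the density of X = h1(U) = 2U picks up the reciprocal of
# the Jacobian dx/du = 2, just as du = (1/2) dx did in the substitution.
rng = np.random.default_rng(0)
u = rng.uniform(0.0, 1.0, size=1_000_000)
x = 2 * u  # x = h1(u)

# Estimate the density of X near x0 from the fraction of samples that
# land in a window of width 2*eps around x0.
x0, eps = 0.7, 0.01
empirical = np.mean(np.abs(x - x0) < eps) / (2 * eps)
print(f"empirical f_X({x0}) = {empirical:.3f}   (predicted 1/2)")
```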
 
Thank you.
 