Change of variables in double integrals

Boorglar
I know the formula for a change of variables in a double integral using Jacobians. $$ \iint_{S}\,dx\,dy = \iint_{S'}\left\lvert J(u,v) \right\rvert\,du\,dv $$ where ## S' ## is the preimage of ## S ## under the mapping $$ x = f(u,v),~ y = g(u,v) $$ and ## J(u,v) ## is the Jacobian of the mapping in terms of ## u, v ##.
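A standard concrete instance, just to fix ideas: for polar coordinates the formula reduces to the familiar area element,
$$ x = r\cos\theta,\quad y = r\sin\theta,\qquad J(r,\theta) = \det\begin{pmatrix} \cos\theta & -r\sin\theta \\ \sin\theta & r\cos\theta \end{pmatrix} = r, $$
so ## \iint_{S}\,dx\,dy = \iint_{S'} r\,dr\,d\theta ##.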

What bothers me is that the only proof I know of this fact involves Green's Theorem, then a chain-rule substitution, then Green's Theorem in reverse, and somehow the Jacobian magically pops up. The three-variable case seems even worse.

Is there a more straightforward proof which does not use Green's Theorem? I feel like it is overkill and too indirect; there should be a proof using only Riemann sums or manipulations of the differentials (chain rule).

I tried directly replacing ## dx ## by ##\frac{\partial f}{\partial u}\,du + \frac{\partial f}{\partial v}\,dv ## and similarly for ## dy ## but then the resulting integral doesn't seem to make sense: $$ \iint_{S'}\left(\frac{\partial f}{\partial u}\,du + \frac{\partial f}{\partial v}\,dv\right)\left(\frac{\partial g}{\partial u}\,du + \frac{\partial g}{\partial v}\,dv\right) $$

So I guess my question has a few parts:
(1) Is it valid to just replace dx and dy like that?
(2) Is there a proof of the formula not based on Green's Theorem (or on another FTC analog)?
 
Essentially, we are taking a flat region of the x,y plane and mapping it to the u,v plane, which distorts it: some parts get compressed and others stretched. The integrand then picks up an extra factor describing how much the region was compressed or stretched at each point. The Jacobian (in two dimensions) is basically a weight function, the area density of the mapped region.
 
Hello verty!

I understand how the Jacobian can be viewed as a measure of the area distortion of the mapping. This makes it even more desirable to find a rigorous proof based on that idea. It acts a bit like the determinant of a linear transformation, which measures the area of the image of the unit square.
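To spell out that analogy (a standard fact, stated here just for reference): a linear change of variables
$$ \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} a & b \\ c & d \end{pmatrix}\begin{pmatrix} u \\ v \end{pmatrix} $$
sends the unit square of the u,v plane to a parallelogram spanned by ## (a,c) ## and ## (b,d) ##, whose area is ## \lvert ad - bc\rvert ##, i.e. exactly ## \lvert J\rvert ## for this map (here constant). The change-of-variables formula says the same thing locally, with the constant matrix replaced by the derivative at each point.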
 
I think you're on your own for that part. I've done what I could.
 
It's alright, thanks anyway! I'll keep looking for a way to use the distortion property of Jacobians, but I guess I can live with the proof using Green's Theorem too. :)
 
There is no need to use Green's Theorem.
I have just checked Schaum's Advanced Calculus by Wrede & Spiegel.
They just use the area of a parallelogram with vectors and derive the entire thing simply, geometrically and intuitively. The Jacobian just pops out naturally.
(ref: page 216, second edition; page 230, third edition)
A nice diagram is provided.
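If I understand it correctly, the gist of the argument (sketched here, not quoted from the book) is: write ## \vec{r}(u,v) = f(u,v)\,\vec{i} + g(u,v)\,\vec{j} ##. The image of a small rectangle ## [u, u+\Delta u]\times[v, v+\Delta v] ## is, to first order, the parallelogram spanned by ## \vec{r}_u\,\Delta u ## and ## \vec{r}_v\,\Delta v ##, so its area is approximately
$$ \left\lvert \frac{\partial f}{\partial u}\frac{\partial g}{\partial v} - \frac{\partial f}{\partial v}\frac{\partial g}{\partial u} \right\rvert \Delta u\,\Delta v = \lvert J(u,v)\rvert\,\Delta u\,\Delta v, $$
and summing these areas over a partition of ## S' ## is exactly a Riemann sum for ## \iint_{S'} \lvert J(u,v)\rvert\,du\,dv ##.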
 
Boorglar said:
I know the formula for a change of variables in a double integral using Jacobians. $$ \iint_{S}\,dx\,dy = \iint_{S'}\left\lvert J(u,v) \right\rvert\,du\,dv $$ where ## S' ## is the preimage of ## S ## under the mapping $$ x = f(u,v),~ y = g(u,v) $$ and ## J(u,v) ## is the Jacobian of the mapping in terms of ## u, v ##.

What bothers me is that the only proof I know of this fact involves Green's Theorem, then a chain-rule substitution, then Green's Theorem in reverse, and somehow the Jacobian magically pops up. The three-variable case seems even worse.

Is there a more straightforward proof which does not use Green's Theorem? I feel like it is overkill and too indirect; there should be a proof using only Riemann sums or manipulations of the differentials (chain rule).

I tried directly replacing ## dx ## by ##\frac{\partial f}{\partial u}\,du + \frac{\partial f}{\partial v}\,dv ## and similarly for ## dy ## but then the resulting integral doesn't seem to make sense: $$ \iint_{S'}\left(\frac{\partial f}{\partial u}\,du + \frac{\partial f}{\partial v}\,dv\right)\left(\frac{\partial g}{\partial u}\,du + \frac{\partial g}{\partial v}\,dv\right) $$
Didn't I respond to this on a different forum? :-p

You can write the integral like that, but you have to realize that the "arithmetic of differentials" is NOT ordinary arithmetic. In particular, ## dx\,dx ## would be ## 0 ##: you cannot integrate twice with respect to the same variable. In fact, multiplication of differentials is "anti-commutative": ## dx\,dy = -dy\,dx ##.

So
$$ \left(\frac{\partial f}{\partial u}\,du+ \frac{\partial f}{\partial v}\,dv\right)\left(\frac{\partial g}{\partial u}\,du+ \frac{\partial g}{\partial v}\,dv\right) = \frac{\partial f}{\partial u}\frac{\partial g}{\partial u}\,du\,du+ \frac{\partial f}{\partial u}\frac{\partial g}{\partial v}\,du\,dv+ \frac{\partial f}{\partial v}\frac{\partial g}{\partial u}\,dv\,du+ \frac{\partial f}{\partial v}\frac{\partial g}{\partial v}\,dv\,dv. $$

But ## du\,du = 0 ##, ## dv\,dv = 0 ##, and ## dv\,du = -du\,dv ##, so that becomes
$$ \frac{\partial f}{\partial u}\frac{\partial g}{\partial v}\,du\,dv- \frac{\partial f}{\partial v}\frac{\partial g}{\partial u}\,du\,dv = \left(\frac{\partial f}{\partial u}\frac{\partial g}{\partial v}- \frac{\partial f}{\partial v}\frac{\partial g}{\partial u}\right)du\,dv. $$
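In the language of differential forms this product is usually written with wedges, and the computation above is exactly the pullback of the area form:
$$ dx\wedge dy = \left(\frac{\partial f}{\partial u}\frac{\partial g}{\partial v} - \frac{\partial f}{\partial v}\frac{\partial g}{\partial u}\right) du\wedge dv = J(u,v)\,du\wedge dv. $$
The absolute value in the usual Riemann-integral statement of the theorem appears because ordinary double integrals ignore orientation, while forms keep track of it.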


 
Yet another way of looking at it: a surface in three dimensions is a two-dimensional object, so it can be written in terms of two parameters:
$$ x= f(u,v),\quad y= g(u,v),\quad z= h(u,v). $$

We can write each point on the surface as a vector equation:
$$ \vec{r}(u,v)= f(u,v)\vec{i}+ g(u,v)\vec{j}+ h(u,v)\vec{k}. $$

The two derivatives
$$ \vec{r}_u= f_u\vec{i}+ g_u\vec{j}+ h_u\vec{k}, \qquad \vec{r}_v= f_v\vec{i}+ g_v\vec{j}+ h_v\vec{k} $$
are tangent vectors to the surface along the ## u ## and ## v ## coordinate curves.

The cross product of those,
$$ \vec{r}_u\times\vec{r}_v = \left|\begin{array}{ccc}\vec{i} & \vec{j} & \vec{k} \\ f_u & g_u & h_u \\ f_v & g_v & h_v\end{array}\right| = (g_uh_v- g_vh_u)\vec{i}+ (f_vh_u- f_uh_v)\vec{j}+ (f_ug_v- f_vg_u)\vec{k}, $$
is perpendicular to the surface, and its length gives the area of the parallelogram spanned by ## \vec{r}_u ## and ## \vec{r}_v ##, i.e. the local area scaling.

Here, because we are in the xy-plane, ## h(u,v) = 0 ##, so that vector is ## (f_ug_v- f_vg_u)\vec{k} ##, its length is ## \lvert f_ug_v- f_vg_u\rvert ##, and the area element is ## \lvert f_ug_v- f_vg_u\rvert\,du\,dv = \lvert J(u,v)\rvert\,du\,dv ##.
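For whatever it's worth, here is a quick numerical sanity check of that area-element picture; this is just a throwaway NumPy sketch of mine, not taken from any reference. It approximates the area of the unit disk two ways: by counting points on an x,y grid, and as a Riemann sum of ## \lvert J\rvert = r ## over the preimage ## S' = [0,1]\times[0,2\pi] ## under the polar map. Both estimates approach ## \pi ##.

```python
import numpy as np

# Direct estimate on an x,y grid: count grid points inside x^2 + y^2 <= 1
# and weight each one by the area of a single grid cell.
n = 2000
xs = np.linspace(-1.0, 1.0, n)
ys = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(xs, ys)
cell = (2.0 / n) ** 2
area_xy = np.count_nonzero(X**2 + Y**2 <= 1.0) * cell

# Riemann sum over the preimage S' = [0,1] x [0,2*pi] with the Jacobian factor r.
m = 2000
rs = np.linspace(0.0, 1.0, m, endpoint=False) + 0.5 / m   # midpoints in r
ts = np.linspace(0.0, 2.0 * np.pi, m, endpoint=False)     # left endpoints in theta
R, _T = np.meshgrid(rs, ts)          # integrand |J| = r does not depend on theta
area_rt = R.sum() * (1.0 / m) * (2.0 * np.pi / m)

print(area_xy, area_rt, np.pi)       # both estimates are close to pi
```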
 
HallsofIvy said:
Didn't I respond to this on a different forum? :-p

You can write the integral like that, but you have to realize that the "arithmetic of differentials" is NOT ordinary arithmetic. In particular, ## dx\,dx ## would be ## 0 ##: you cannot integrate twice with respect to the same variable. In fact, multiplication of differentials is "anti-commutative": ## dx\,dy = -dy\,dx ##.

Interesting! I did not know about the algebra of differentials. This looks a lot like how cross products behave. Is it related to differential forms? I don't know much about that topic, but from what I understand it allows one to consider higher-dimensional differentials.
 
Yes, of course it is "related to differential forms": ## dx ## and ## dy ## are differential forms.
 