How does a change of variables affect a double integral?


Homework Help Overview

The discussion revolves around the effects of changing variables in double integrals, particularly focusing on the transformation from Cartesian to polar coordinates. Participants explore the implications of this transformation on the differential area elements and the Jacobian determinant.

Discussion Character

  • Exploratory, Mathematical reasoning, Assumption checking

Approaches and Questions Raised

  • Participants examine the differentiation of Cartesian coordinates expressed in polar coordinates and question the relationship between the differential area elements. Some express confusion regarding the application of partial derivatives in this context. Others discuss the geometric interpretation of the Jacobian and its role in transforming integrals.

Discussion Status

The discussion is ongoing, with various interpretations being explored. Some participants provide detailed reasoning and examples related to the transformation of integrals, while others seek clarification on specific aspects of the differentiation process and the use of the Jacobian.

Contextual Notes

There is a focus on the need for clarity regarding the assumptions made in the transformation process, particularly concerning the dependence of variables and the implications for integration. The discussion also highlights the importance of understanding the geometric interpretation of the Jacobian in the context of double integrals.

RingNebula57
If we express Cartesian coordinates in polar coordinates, we get:

$$x = r\cos\theta$$
$$y = r\sin\theta$$

Let's differentiate these two equations:

$$dx = \cos\theta \, dr - r\sin\theta \, d\theta$$
$$dy = \sin\theta \, dr + r\cos\theta \, d\theta$$

Why isn't ##dx\,dy = r\,dr\,d\theta## (like when taking the Jacobian, or when doing the geometric interpretation)?
 
##x## and ##y## may depend on ##r## and ##\theta##. Shouldn't those be partial derivatives?
 
RingNebula57 said:
If we express Cartesian coordinates in polar coordinates, we get:

$$x = r\cos\theta$$
$$y = r\sin\theta$$

Let's differentiate these two equations:

$$dx = \cos\theta \, dr - r\sin\theta \, d\theta$$
$$dy = \sin\theta \, dr + r\cos\theta \, d\theta$$

Why isn't ##dx\,dy = r\,dr\,d\theta## (like when taking the Jacobian, or when doing the geometric interpretation)?

When you say "differentiate", what are you differentiating with respect to?
 
RingNebula57 said:
If we express Cartesian coordinates in polar coordinates, we get:

$$x = r\cos\theta$$
$$y = r\sin\theta$$

Let's differentiate these two equations:

$$dx = \cos\theta \, dr - r\sin\theta \, d\theta$$
$$dy = \sin\theta \, dr + r\cos\theta \, d\theta$$

Why isn't ##dx\,dy = r\,dr\,d\theta## (like when taking the Jacobian, or when doing the geometric interpretation)?

I assume you are speaking in the context of integration. When transforming to polar co-ordinates, it can be shown:

$$\iint_D f(x,y) \space dA = \iint_{D'} f(r \cos(\theta), r \sin(\theta)) \space dA'$$

Where ##dA' = J_{r , \theta} (x, y) \space dr d \theta##. We have to multiply the function by the Jacobian ##J## whenever an invertible transformation is used to transform an integral. It turns out:

$$J_{r , \theta} (x, y) = x_r y_{\theta} - x_{\theta} y_r = r$$

You can work this result out yourself by taking the partials of the transformation. Hence we can write:

$$\iint_D f(x,y) \space dA = \iint_{D'} f(r \cos(\theta), r \sin(\theta)) \space dA' = \iint_{D'} f(r \cos(\theta), r \sin(\theta)) \space J_{r , \theta} (x, y) \space dr d \theta = \iint_{D'} f(r \cos(\theta), r \sin(\theta)) \space r \space dr d \theta$$

Where the order of integration is still to be decided. It is very important to include the Jacobian ##J## in the area element because you want to preserve the value of the integral (the transformation is one-to-one and onto).
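As a quick numeric sanity check (not from the thread, just an illustrative sketch in Python), we can compare a Cartesian Riemann sum over the unit disk with a polar Riemann sum that includes the Jacobian factor ##r##. The integrand ##f(x, y) = x^2## is an arbitrary choice; its exact integral over the unit disk is ##\pi/4## either way:

```python
import math

def f(x, y):
    return x * x  # test integrand; exact integral over the unit disk is pi/4

# Cartesian midpoint Riemann sum over the unit disk: keep only grid
# cells whose centre lies inside the disk.
n = 500
h = 2.0 / n
cart = 0.0
for i in range(n):
    for j in range(n):
        xc = -1.0 + (i + 0.5) * h
        yc = -1.0 + (j + 0.5) * h
        if xc * xc + yc * yc <= 1.0:
            cart += f(xc, yc) * h * h

# Polar midpoint Riemann sum WITH the Jacobian factor r:
# each term is f(r cos t, r sin t) * r * dr * dt.
nr, nt = 500, 500
dr, dt = 1.0 / nr, 2.0 * math.pi / nt
polar = 0.0
for i in range(nr):
    r = (i + 0.5) * dr
    for j in range(nt):
        t = (j + 0.5) * dt
        polar += f(r * math.cos(t), r * math.sin(t)) * r * dr * dt

print(cart, polar, math.pi / 4)  # the three values agree closely
```

Dropping the factor `r` from the polar sum gives a visibly wrong answer, which is exactly the point of multiplying by the Jacobian.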
 
tommyxu3 said:
##x## and ##y## may depend on ##r## and ##\theta##. Shouldn't those be partial derivatives?
Imagine a third variable ##t##, and suppose ##x, y, r, \theta## are all functions of ##t##. Writing the total derivative of ##x## with respect to ##t##, we get:

$$\frac{dx}{dt} = \frac{\partial x}{\partial r}\frac{dr}{dt} + \frac{\partial x}{\partial \theta}\frac{d\theta}{dt},$$

and if you multiply this equation by ##dt##, you get the one above.
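The chain-rule expansion above can be checked symbolically; here is a small sketch (assuming sympy is available) that treats ##r## and ##\theta## as unspecified functions of ##t##:

```python
import sympy as sp

t = sp.symbols('t')
r = sp.Function('r')(t)
theta = sp.Function('theta')(t)

x = r * sp.cos(theta)  # x expressed in polar coordinates

# Total derivative dx/dt computed by sympy...
lhs = sp.diff(x, t)

# ...and the chain-rule expansion written out by hand:
# dx/dt = (dx/dr) dr/dt + (dx/dtheta) dtheta/dt
rhs = sp.cos(theta) * sp.diff(r, t) - r * sp.sin(theta) * sp.diff(theta, t)

print(sp.simplify(lhs - rhs))  # 0
```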
 
I'm not sure what a total derivative has to do with this. Why don't we see how a change of variables is going to affect a double integral?

Suppose there is a rectangle ##R## in the arbitrary ##uv##-plane. Suppose further the lower left corner of the rectangle is at the point ##(u_0, v_0)##, and the dimensions of the rectangle are ##\Delta u## for the width and ##\Delta v## for the height respectively.

Now let's define an invertible transformation ##T: (u, v) \rightarrow (x, y)## such that the rectangle ##R## in the ##uv##-plane can be mapped to a region ##R'## in the ##xy##-plane. This transformation from the ##uv##-plane to the ##xy##-plane will be given by some ##x = g(u, v)## and ##y = h(u, v)##. For example, the lower left corner of ##R## can be mapped to the boundary of ##R'## by using ##T## like so:

$$T(u_0, v_0) = (x_0, y_0)$$
$$x_0 = g(u_0, v_0), \space y_0 = h(u_0, v_0)$$

Now, define a vector function ##\vec r(u, v)## to be the position vector of the image of the point ##(u, v)##:

$$\vec r(u, v) = g(u, v) \hat i + h(u, v) \hat j$$

Note the equation for the bottom side of the rectangle ##R## in the ##uv##-plane is given by ##v = v_0##, and the equation for the left side of the rectangle ##R## in the ##uv##-plane is given by ##u = u_0##. The image of the bottom side of ##R## in the ##xy##-plane is given by ##\vec r(u , v_0)##, and the image of the left side of ##R## in the ##xy##-plane is given by ##\vec r(u_0 , v)##.

The tangent vector at ##(x_0, y_0)## to the image ##\vec r(u , v_0)## is given by:

$$\vec r_u(u_0, v_0) = g_u(u_0, v_0) \hat i + h_u(u_0, v_0) \hat j = x_u \hat i + y_u \hat j$$

Similarly, the tangent vector at ##(x_0, y_0)## to the image ##\vec r(u_0 , v)## is given by:

$$\vec r_v(u_0, v_0) = g_v(u_0, v_0) \hat i + h_v(u_0, v_0) \hat j = x_v \hat i + y_v \hat j$$

We can approximate the region ##R'## in the ##xy##-plane by a parallelogram determined by the secant vectors:

$$\vec a = \vec r(u_0 + \Delta u, v_0) - \vec r(u_0, v_0) ≈ \Delta u \vec r_u$$
$$\vec b = \vec r(u_0, v_0 + \Delta v) - \vec r(u_0, v_0) ≈ \Delta v \vec r_v$$

So to determine the area of ##R'##, we must determine the area of the parallelogram formed by the secant vectors. So we compute:

$$\Delta A_{R'} ≈ \left| (\Delta u \vec r_u) \times (\Delta v \vec r_v) \right| = \left| \vec r_u \times \vec r_v \right| (\Delta u \Delta v)$$

Where we have pulled out ##\Delta u \Delta v## because it is constant. Computing the magnitude of the cross product (taking the transformation to be orientation-preserving, so the quantity below is nonnegative; in general one uses its absolute value) we obtain:

$$\left| \vec r_u \times \vec r_v \right| = x_u y_v - x_v y_u$$

So we may write:

$$\Delta A_{R'} ≈ [x_u y_v - x_v y_u] (\Delta u \Delta v)$$

Where ##x_u y_v - x_v y_u## can be determined by evaluating ##g_u(u_0, v_0), h_v(u_0, v_0), g_v(u_0, v_0)##, and ##h_u(u_0, v_0)## respectively.
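For the polar map this parallelogram approximation is easy to test against an exact area: the image of a small cell ##[r_0, r_0 + \Delta r] \times [\theta_0, \theta_0 + \Delta\theta]## is an annular sector of exact area ##\tfrac{(r_0+\Delta r)^2 - r_0^2}{2}\,\Delta\theta##, while the parallelogram estimate gives ##r_0\,\Delta r\,\Delta\theta##. A quick numeric sketch (the values are arbitrary choices):

```python
import math

# A small "rectangle" [r0, r0+dr] x [t0, t0+dt] in the (r, theta)-plane...
r0, dr, dt = 2.0, 1e-4, 1e-4

# ...maps under (r, t) -> (r cos t, r sin t) to an annular sector
# whose exact area is ((r0+dr)^2 - r0^2)/2 * dt.
exact = ((r0 + dr) ** 2 - r0 ** 2) / 2.0 * dt

# Parallelogram estimate |r_u x r_v| * dr * dt, with Jacobian r:
approx = r0 * dr * dt

print(exact / approx)  # ratio -> 1 as dr, dt -> 0
```

The ratio is exactly ##1 + \Delta r / (2 r_0)##, so the relative error vanishes with the cell size, which is why the approximation becomes exact in the Riemann-sum limit.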

Now that we have formalized all of that, divide the region ##R## in the ##uv##-plane into small rectangles ##R_{ij}##. The images of the ##R_{ij}## in the ##xy##-plane are the regions ##R_{ij}'##. Applying the approximation ##\Delta A_{R'}## to each ##R_{ij}'##, we can approximate the double integral of a function ##f## over ##R'## like so:

$$\iint_{R'} f(x, y) dA ≈ \sum_{i = 1}^m \sum_{j = 1}^n f(x_i, y_j) \Delta A_{R'} ≈ \sum_{i = 1}^m \sum_{j = 1}^n f(g(u_i, v_j), h(u_i, v_j)) [x_u y_v - x_v y_u] (\Delta u \Delta v)$$

Notice this looks like a typical Riemann sum. Now as ##m \to \infty## and ##n \to \infty##, the double sum converges to a double integral over ##R##:

$$\displaystyle \lim_{m \to \infty} \displaystyle \lim_{n \to \infty} \sum_{i = 1}^m \sum_{j = 1}^n f(g(u_i, v_j), h(u_i, v_j)) [x_u y_v - x_v y_u] (\Delta u \Delta v) = \iint_R f(g(u, v), h(u, v)) [x_u y_v - x_v y_u] \space dudv$$

Where we usually write the Jacobian ##J = [x_u y_v - x_v y_u]##. So we can finally conclude:

$$\iint_{R'} f(x,y) \space dA = \iint_{R} f(g(u, v), h(u, v)) \space J \space dudv$$

This argument for an arbitrary ##(u, v)## space applies to polar ##(r, \theta)## space as well. In fact, this argument will apply for any kind of other invertible transformation ##T##.
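To illustrate the general statement with a transformation other than polar, here is a symbolic sketch (assuming sympy is available; the map ##x = u^2,\ y = v^3## and the integrand ##f = xy## are arbitrary choices, not from the thread). The map is invertible on ##(0,1) \times (0,1)## and sends that square onto itself:

```python
import sympy as sp

u, v = sp.symbols('u v', positive=True)
x, y = sp.symbols('x y', positive=True)

# An invertible map T: (u, v) -> (x, y) on (0,1) x (0,1)
g = u ** 2   # x = g(u, v)
h = v ** 3   # y = h(u, v)

# Jacobian J = x_u * y_v - x_v * y_u
J = sp.diff(g, u) * sp.diff(h, v) - sp.diff(g, v) * sp.diff(h, u)

f = x * y  # test integrand

# Direct integral over R' = (0,1) x (0,1) in the xy-plane
direct = sp.integrate(f, (x, 0, 1), (y, 0, 1))

# Transformed integral over R = (0,1) x (0,1) in the uv-plane,
# with the integrand multiplied by the Jacobian
transformed = sp.integrate(f.subs({x: g, y: h}) * J, (u, 0, 1), (v, 0, 1))

print(direct, transformed)  # 1/4 1/4
```

Both integrals come out equal, as the argument above predicts for any invertible (orientation-preserving) transformation.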
 