# I Change of variable and Jacobians

1. Feb 25, 2017

### dyn

Hi. I realise that Jacobians are not normally used in 1-D, but I'm confused by the following.
If I have the integral $\int_{-a}^{a} f(x)\, dx$ and make the change of variable $y = -x$, then $dy = -dx$ and the limits of integration reverse, so I end up with
$\int_{a}^{-a} f(-y)(-dy)$,
but if I use the Jacobian to perform the change of variable, it takes the modulus of the determinant, so I end up with $dy = dx$ while the limits still reverse, giving
$\int_{a}^{-a} f(-y)\, dy$.
So my two integrals, which should be the same, differ by a minus sign! What am I doing wrong?
Thanks
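For what it's worth, the discrepancy can be checked numerically; a minimal sketch (my own illustration, not part of the thread), using an asymmetric test function so that the sign actually matters:

```python
from scipy.integrate import quad

f = lambda x: x + x**2  # asymmetric test function
a = 1.0

# Original integral: ∫_{-a}^{a} f(x) dx
orig, _ = quad(f, -a, a)

# Careful substitution y = -x: dy = -dx and the limits reverse,
# giving ∫_{a}^{-a} f(-y)(-dy) = ∫_{-a}^{a} f(-y) dy
careful, _ = quad(lambda y: f(-y), -a, a)

# "Modulus" version: dy = dx but the limits are still reversed,
# giving ∫_{a}^{-a} f(-y) dy
modulus, _ = quad(lambda y: f(-y), a, -a)

print(orig, careful, modulus)  # ≈ 0.667, 0.667, -0.667
```

The careful substitution reproduces the original value; keeping the reversed limits while also taking the modulus flips the sign, which is exactly the puzzle posed above.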

2. Feb 25, 2017

### BvU

I think you are mistaking the vertical bars for an indication that you need to take the absolute value.
That's not the idea. They indicate that you should take the determinant of this 1x1 matrix, i.e. simply the value (including the sign).

3. Feb 25, 2017

### dyn

In the books I have used, the Jacobian is the determinant and then its modulus is taken, but in the examples I have seen the change of variables is in 2-D or 3-D, so it refers to area or volume elements, which I presume would always have to be positive. Maybe this version does not apply in 1-D?

4. Feb 25, 2017

### Staff: Mentor

5. Feb 26, 2017

### Stephen Tashi

This thread is worth pursuing, because one can find examples in instructional material where the absolute value of the Jacobian is taken (e.g. example 3 in http://tutorial.math.lamar.edu/Classes/CalcIII/ChangeOfVariables.aspx).

Sometimes when computing things "known" to be positive, such as areas, people adjust signs in an ad hoc manner to make things work out. It would be interesting to know whether there is a technical error in example 3 that forced the author to take the absolute value of $-12$, or whether the change of variable actually makes the area negative, as if a small rectangle had one side of positive length and one side of negative length.

6. Feb 26, 2017

### dyn

Every book I have ever seen takes the modulus of the determinant, but that doesn't seem to work in the 1-D case!

7. Feb 26, 2017

### StoneTemplePython

The issue of taking the absolute value of the Jacobian determinant in the multivariable case (vs. the single-variable case) is addressed on page 532 of Strang's Calculus. It is freely available here:

https://ocw.mit.edu/ans7870/resources/Strang/Edited/Calculus/Calculus.pdf

(If you are looking to go directly to this page, you want the pdf to say page 551 of 671).

In a lot of cases of interest (e.g. symmetric positive definite covariance matrices), the determinant must be positive so I don't think about it too much.

I think the OP may need to flip the limits of integration from $[-a,a]$ to $[a,-a]$ in one of the integrals (i.e. make it $\int_{a}^{-a} f(-y)(-dy)$), though I haven't looked closely enough to be certain.

8. Feb 26, 2017

### Stephen Tashi

Strang "addresses" the issue in the sense of stating a convention, but he doesn't explain whether the convention is a definition, theorem, or tradition.

9. Feb 26, 2017

### StoneTemplePython

Not sure I understand what you are getting at.

When he says "Double integrals could too, but normally they go left to right and down to up", it seemed clear that he is indicating this is a convention that is accurate and traditionally used, but other approaches could be used too.

Earlier you stated (emphasis in bold is mine):

It seems to me that your underlying question perhaps has more to do with determinants and linear algebra than anything else. There are a handful of conditions required for inner products and length norms, one of which is that lengths must be non-negative, so your suggestion that the length or volume is "actually" negative just does not make much sense to me.

Determinants have multiple interpretations and are never "known" to be positive. For a real-valued non-singular matrix, the determinant tells you the signed volume of the underlying parallelepiped, which is to say the volume together with some orientation information. The sign can also be interpreted as a necessary consequence of wanting to preserve linearity in each argument of the determinant. The sign can further be interpreted as a necessary consequence of the determinant being the product of the eigenvalues of the matrix.

If the sign does not have any useful / meaningful information, you may choose to suppress it. That is what the absolute value does.

Here is Treil's take in "Linear Algebra Done Wrong", made freely available by the author here: https://www.math.brown.edu/~treil/papers/LADW/book.pdf. Emphasis in bold is mine.

Maybe your question relates to something else. But it seems fairly obvious to me that if the signed component of the signed volume is not useful to you, then you may make it go away via absolute value. If it is useful to you, well then use it.

10. Feb 27, 2017

### Stephen Tashi

My question is: if we look at the theorems and definitions that establish the technique of integration by change of variables, do we find any mention of using the absolute value of the Jacobian?

I suspect we do not. The procedure of using the absolute value of the Jacobian appears to be something people do to adjust the result of an integration so it produces a positive answer, because they wish to interpret the answer as an (unsigned) area. For example, $\int_0^{2\pi} \sin(x)\, dx$ has an unambiguous mathematical definition. By contrast, "Find the area between the graph of $\sin(x)$ and the $x$-axis between $x = 0$ and $x = 2\pi$" might traditionally be interpreted as demanding that we count the area below the $x$-axis as positive.

Does omitting the absolute value correctly compute signed area?

In the example I cited (example 3 in http://tutorial.math.lamar.edu/Classes/CalcIII/ChangeOfVariables.aspx), the total signed area of the figure in the xy-plane is zero. If we use $-12$ for the value of the Jacobian, the computation for the signed area in the uv-plane (following the work in the example) would not be zero. So, if the example is worked correctly, then preserving the signed area is more complicated than simply using the Jacobian determinant instead of its absolute value.

11. Feb 27, 2017

### blue_leaf77

It seems this topic is rather tricky. I don't know if someone has pointed this out here, but the way we define which variable corresponds to which column (which is completely arbitrary) also determines the determinant's sign, since interchanging two columns introduces a negative sign. I think the Jacobian method for a change of variable in an integral has an inherent ambiguity only in the overall sign. If I were working a problem that involves this method, I would just use either the modulus or the original Jacobian, not worry too much about the limit exchange, and adjust the sign of my final integrated value to match that of the original integral. Of course this means that knowledge of the sign of the original integral is necessary, but I would say this is the price we pay for using this quick method.
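The column-interchange point can be illustrated with a small numerical sketch (my own example, not from the thread), using the 2x2 Jacobian of the substitution $x = -u, \ y = v$:

```python
import numpy as np

# Jacobian of the substitution x = -u, y = v, with columns ordered (u, v)
J = np.array([[-1.0, 0.0],
              [ 0.0, 1.0]])

# The same transformation with the columns listed in the order (v, u)
J_swapped = J[:, ::-1]

# Interchanging two columns flips the sign of the determinant
print(np.linalg.det(J))          # -1.0
print(np.linalg.det(J_swapped))  #  1.0
```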

12. Feb 27, 2017

### Stephen Tashi

Wow! Another issue. One can argue that the order of variables in the Jacobian is not arbitrary, because its row order is defined by the order of the variables in the argument of the function being integrated, and its column order is determined by the order of the variables in the new coordinate system; in particular, changing the order of variables in the new coordinate system may change the "handedness" of the coordinate system.

Having stated that argument, we can probe its assumptions.

First, is there a definite order to the variables in the function of several variables? I think so. The functions $F(a,b) = a^2 + b$ and the function $G(p,q) = q^2 + p$ are different functions.

Second, is there a definite order to the variables used in a change of variables? I think so. The change of variables involves a particular transformation, which is a vector valued function of a vector. Even if we don't interpret the new coordinates as cartesian coordinates, there is still a definite order to them when we use a particular transformation - just because a particular transformation is a particular function.

Last edited: Feb 27, 2017
13. Mar 2, 2017

### Stephen Tashi

Let's consider how Strang's advice "We use the absolute value |J| and run forward" could be applied in one dimension.

Let $I = \int_a^b f(x) dx$.
Change variable by $x = -u$. We know the correct answer is:
$I = \int_{-a}^{-b} f(-u) (-du) = - \int_{-a}^{-b} f(-u)du = \int_{-b}^{-a} f(-u)du$

Now do the change of variable using $|-du|$ instead and let limits of integration "run forward". We obtain $I = \int_c^d f(-u) |-du|$ where $c = min( -a, -b)$, $d = max(-a,-b)$.

If $a \le b$ then $c = -b, \ d = -a$. So we get $I = \int_{-b}^{-a} f(-u) du$ which agrees with the correct answer.

However if $b \lt a$ then $c = -a,\ d = -b$ and we get $I = \int_{-a}^{-b} f(-u) du$, which is not correct.

So Strang's advice won't work for cases like $\int_2^1 f(x) dx$.

That leads to the following conjecture about the use of the absolute value of the Jacobian determinant. The problems in texts that use the absolute value of the Jacobian deal with integrating a function "over an area". For example, $\int_{x_0}^{x_1} \int_{y_0}^{y_1} f(x,y)\ dy\ dx$ can be regarded as integrating the function over the area $A = \int_{x_0}^{x_1} \int_{y_0}^{y_1} 1 \ dy\ dx$ defined by integrating the constant function 1. In order to get the usual interpretation of an area as a non-negative quantity, we need $x_0 \le x_1, \ y_0 \le y_1$.

So the situation where using Strang's advice might not work is when we are doing a multiple integral that does not have the interpretation of integrating a function "over an area".

In the case of one dimension, if textbooks only dealt with problems involving integrating a function "over a length", and "over a length" was defined to mean that $\int_{x_0}^{x_1} 1\ dx \ge 0$, then they would always be in the situation where Strang's advice works, since it would be a case where $x_0 \le x_1$.
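The failure mode described above can be checked numerically. A sketch (my own illustration, with an arbitrary asymmetric test function): the "run forward" recipe reproduces the exact value when $a \le b$ but flips the sign when $b < a$.

```python
from scipy.integrate import quad

f = lambda x: x + x**2  # arbitrary test function

def strang_1d(f, a, b):
    """Change variable x = -u, use |J| = |-1| = 1, and let the
    new limits of integration 'run forward' (min to max)."""
    c, d = min(-a, -b), max(-a, -b)
    val, _ = quad(lambda u: f(-u), c, d)
    return val

exact_fwd, _ = quad(f, 1, 2)   # a <= b: the recipe agrees
exact_rev, _ = quad(f, 2, 1)   # b < a: the recipe gives the wrong sign
print(strang_1d(f, 1, 2), exact_fwd)
print(strang_1d(f, 2, 1), exact_rev)
```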

14. Mar 2, 2017

### dyn

Hi. Just to let you know, I started this thread as a precursor to a thread I started in the Quantum Physics forum regarding the parity operator being Hermitian and the proof in 3-D. In that case the limits of integration run from $+\infty$ to $-\infty$.

15. Mar 3, 2017

### Stephen Tashi

Let's try to do a simple multivariate integral with "reversed limits" and see if using the absolute value of the Jacobian works.

Let $I = \int_2^1 \int_1^2 (x+y) \ dy\ dx = \int_2^1 \Big[ xy + \frac{y^2}{2} \Big]_{y=1}^{y=2} dx$
$= \int_2^1 \Big( \big(2x + \tfrac{4}{2}\big) - \big(x + \tfrac{1}{2}\big) \Big) \ dx = \int_2^1 \big(x + \tfrac{3}{2}\big) \ dx$
$= \Big[ \frac{x^2}{2} + \frac{3x}{2} \Big]_{x=2}^{x=1} = \big( \tfrac{1}{2} + \tfrac{3}{2} \big) - \big( \tfrac{4}{2} + \tfrac{6}{2} \big)$
$= 2 - 5 = -3$

Using the change of variables $x = -u, \ y = v$, we have $J = \begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix}$, so $\det(J) = -1$ and $|\det(J)| = 1$.

$I = \int_{-2}^{-1} \int_1^2 (-u + v)(1) \ dv\ du = \int_{-2}^{-1} \Big[ -uv + \frac{v^2}{2} \Big]_{v=1}^{v=2} du$
$= \int_{-2}^{-1} \Big( \big({-2u} + \tfrac{4}{2}\big) - \big({-u} + \tfrac{1}{2}\big) \Big) \ du = \int_{-2}^{-1} \big({-u} + \tfrac{3}{2}\big)\ du$
$= \Big[ -\frac{u^2}{2} + \frac{3u}{2} \Big]_{u=-2}^{u=-1} = \big( -\tfrac{1}{2} - \tfrac{3}{2} \big) - \big( -\tfrac{4}{2} - \tfrac{6}{2} \big)$
$= (-2)-(-5) = -2 + 5 = 3$.

So, if I did that correctly, using the absolute value of the Jacobian produced the wrong result. Check my work!
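The hand computation above is easy to verify with nested 1-D quadrature; a sketch (my own check, not part of the thread):

```python
from scipy.integrate import quad

# Original: I = ∫_2^1 ∫_1^2 (x + y) dy dx  (note the reversed outer limits)
inner_xy = lambda x: quad(lambda y: x + y, 1, 2)[0]
I, _ = quad(inner_xy, 2, 1)

# After x = -u, y = v with |det J| = 1:
# I' = ∫_{-2}^{-1} ∫_1^2 (-u + v) dv du
inner_uv = lambda u: quad(lambda v: -u + v, 1, 2)[0]
I_abs, _ = quad(inner_uv, -2, -1)

print(I, I_abs)  # -3.0 and 3.0: using |det J| flipped the sign
```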

16. Mar 3, 2017

### JonnyG

I haven't thought about it too much, but you might consider the fact that a change of variables reverses the orientation of your manifold (an open interval in this case) if the determinant of its Jacobian is negative. In the 1-D case this essentially means that if the derivative of your substitution is negative, then the orientation of the open interval is reversed, resulting in that negative sign.

If you don't take the absolute value of your determinant in the multi-dimensional case, then you will also get the negative sign. If we treat the integrand as a differential form being integrated over an oriented manifold, then we do NOT take the absolute value of the determinant of the Jacobian when doing a change of variables; this gives you the "correct" result.

Last edited: Mar 3, 2017
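Following this observation, keeping the signed determinant together with the directly transformed limits (no absolute value, no reordering) does reproduce the earlier two-variable example; a sketch (my own check, not from the thread):

```python
from scipy.integrate import quad

# I = ∫_2^1 ∫_1^2 (x + y) dy dx  computed directly
inner_xy = lambda x: quad(lambda y: x + y, 1, 2)[0]
I, _ = quad(inner_xy, 2, 1)

# Substitute x = -u, y = v and keep det J = -1 (no absolute value);
# x: 2 -> 1 maps to u: -2 -> -1, so the limits transform directly.
inner_uv = lambda u: quad(lambda v: (-u + v) * (-1), 1, 2)[0]
I_signed, _ = quad(inner_uv, -2, -1)

print(I, I_signed)  # both -3.0: the signed Jacobian preserves the value
```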