Integration and the Jacobian

In summary: the thread starts from a definite integral that resists every Calc 2 technique (u-substitution, integration by parts, trig substitution, etc.) in Cartesian coordinates, but becomes easy after switching to polar coordinates, a change of variables justified by the Jacobian matrix. The question posed: since many integrals that look impossible in one coordinate system become possible in another, does a suitable Jacobian-based change of coordinates make every integral possible? The replies explain why not, using the Gaussian [itex]e^{-x^2}[/itex] and Liouville's theorem as a counterexample.
  • #1
hover
Before I ask my question, I'll lead up to it through an example. Just for reference, I have only taken up to Calc 3 and haven't taken Vector Calc. Let's look at this definite integral:

[tex]\int\int\cos(x^2+y^2)\,dx\,dy[/tex]

The bounds on the outer integral are from 0 to 1, while the bounds on the inner integral are from 0 to [itex]\sqrt{1-y^2}[/itex]. I don't know how to include that in the LaTeX. If you have taken Calc 3, or at least Calc 2, you will notice that this integral is impossible to take in Cartesian coordinates. Everything learned in Calc 2 (u-substitution, integration by parts, trig substitution, etc.) fails on this integral. However, if you switch to polar coordinates, taking the integral becomes possible. You can see the full problem done out here in example 5. The polar form of integration can be derived from the Jacobian matrix, and it is simple to show this (in example 2).
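For concreteness, here is how the polar computation goes (worked out here, not quoted from the linked example): with [itex]x=r\cos\theta,\ y=r\sin\theta[/itex], the Jacobian gives [itex]dx\,dy = r\,dr\,d\theta[/itex], and the quarter-disk region becomes [itex]0\le r\le 1,\ 0\le\theta\le\pi/2[/itex]:

[tex]\int_0^1\int_0^{\sqrt{1-y^2}}\cos(x^2+y^2)\,dx\,dy=\int_0^{\pi/2}\int_0^1 \cos(r^2)\,r\,dr\,d\theta=\frac{\pi}{2}\cdot\frac{\sin 1}{2}=\frac{\pi\sin 1}{4}.[/tex]

The inner integral is elementary precisely because the Jacobian factor [itex]r[/itex] supplies the derivative of [itex]r^2[/itex] needed for a u-substitution.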

Here is my question. There are a ton of integrals that look impossible to do in certain coordinate systems but if we switch coordinate systems, they become possible like in this example. Does this mean that all integrals are potentially possible to do if we switch coordinate systems by using Jacobian Matrices?
 
  • #2
hover said:
Before I ask my question, I'll lead up to it through an example. Just for reference, I have only taken up to Calc 3 and haven't taken Vector Calc. Let's look at this definite integral:

[tex]\int\int\cos(x^2+y^2)\,dx\,dy[/tex]

The bounds on the outer integral are from 0 to 1, while the bounds on the inner integral are from 0 to [itex]\sqrt{1-y^2}[/itex]. I don't know how to include that in the LaTeX.
[tex]\int_0^1\int_0^{\sqrt{1-y^2}}\cos(x^2+y^2)dxdy[/tex]

DonAntonio



 
  • #3
That depends on what you mean by "possible". Let's take a step back here and look simply at a function [itex] f:A\rightarrow \mathbb{R} [/itex], where A is some open subset of [itex]\mathbb{R}[/itex]. By the fundamental theorem of calculus, if f is continuous, then it has an antiderivative.

But let's consider the function [itex] f(x)=e^{-x^2}[/itex]. This function is continuous everywhere, so it admits an antiderivative. However, there is a rather famous theorem of Liouville (see http://en.wikipedia.org/wiki/Liouville%27s_theorem_(differential_algebra)) which asserts that no closed-form expression (a finite combination of functions like x, e^x, trig functions, powers of x, and their compositions and products, etc.) for said antiderivative exists.

This is why we have this thing called the "error function". Up to a normalizing constant, it is defined as a particular antiderivative of [itex]e^{-x^2}[/itex]. Specifically,

[tex] \operatorname{erf}(x)=\frac{2}{\sqrt{\pi}}\int_{0}^{x}e^{-t^2}\,dt .[/tex]

That isn't to say it doesn't have a series representation. We have the following representation,

[tex]\operatorname{erf}(x)=\frac{2}{\sqrt{\pi}}\sum_{n=0}^{\infty} \frac{(-1)^n x^{2n+1}}{n!\,(2n+1)},[/tex]

which is valid for all x. However, since this is an infinite sum, it is not a closed-form expression.
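A quick numerical sanity check of this series (my addition, not part of the original post, using only Python's standard library): the partial sums converge rapidly and agree with the built-in `math.erf`:

```python
import math

def erf_series(x, terms=30):
    """Partial sum of the Maclaurin series for erf(x):
    (2/sqrt(pi)) * sum_{n>=0} (-1)^n x^(2n+1) / (n! (2n+1))."""
    total = 0.0
    for n in range(terms):
        total += (-1) ** n * x ** (2 * n + 1) / (math.factorial(n) * (2 * n + 1))
    return 2.0 / math.sqrt(math.pi) * total

# The series agrees with the library implementation to high precision:
print(erf_series(1.0), math.erf(1.0))
```

The factorial in the denominator makes the terms shrink extremely fast, so 30 terms is far more than enough for moderate x.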

So back to your question... If you could find a coordinate transformation (given by closed-form expressions) which would allow you to find a closed-form antiderivative of [itex]e^{-x^2}[/itex], then in theory, after you were finished, you could transform back to the original coordinates. But then you would have a closed-form expression for the antiderivative, which contradicts Liouville's theorem.

In conclusion, it is not always possible to find a coordinate transformation which will allow you to integrate a continuous function using traditional methods (integration by parts, etc), because these methods yield closed form expressions, and closed form expressions don't always exist.

[EDIT] It is worth mentioning that the integral

[tex]\int_{-\infty}^{\infty}e^{-t^2}dt [/tex]

can be calculated, and is found to be [itex]\sqrt{\pi}[/itex]. This is found by a little bit of trickery and, you guessed it, a coordinate transformation (see the Gaussian integral). However, for arbitrary limits of integration, you're out of luck.
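As a quick numerical illustration (mine, not from the thread), even a plain trapezoidal sum over a wide interval reproduces [itex]\sqrt{\pi}[/itex], since the tails of [itex]e^{-t^2}[/itex] beyond |t| = 10 are negligible:

```python
import math

def gaussian_integral(a=-10.0, b=10.0, n=100_000):
    """Trapezoidal approximation of the integral of e^{-t^2} over [a, b].
    Beyond |t| = 10 the integrand is about e^{-100}, so the truncation
    error is far below the quadrature error."""
    h = (b - a) / n
    total = 0.5 * (math.exp(-a * a) + math.exp(-b * b))
    for i in range(1, n):
        t = a + i * h
        total += math.exp(-t * t)
    return total * h

print(gaussian_integral(), math.sqrt(math.pi))
```

The point of the contrast: the definite integral over the whole line has a clean closed form, while no closed-form antiderivative exists for intermediate limits.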
 
  • #4

Just to add a bit to christoff's post:

I guess this is the transform you're referring to:

[tex]\int_{-\infty}^{\infty}e^{-x^2}dx [/tex]

Multiply it by

[tex]\int_{-\infty}^{\infty}e^{-y^2}dy [/tex]

to end up with a double integral of [tex] e^{-x^2-y^2} [/tex] over the whole plane, and then switch to polar coordinates.
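Spelled out (my addition), the squaring trick is:

[tex]\left(\int_{-\infty}^{\infty}e^{-x^2}\,dx\right)^2=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}e^{-(x^2+y^2)}\,dx\,dy=\int_0^{2\pi}\int_0^{\infty}e^{-r^2}\,r\,dr\,d\theta=2\pi\cdot\frac{1}{2}=\pi,[/tex]

so the original one-dimensional integral equals [itex]\sqrt{\pi}[/itex]. Again the Jacobian factor [itex]r[/itex] is what makes the inner integral elementary.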

Also, after rescaling, [itex]e^{-x^2}[/itex] becomes the probability density function of a normally-distributed random variable with mean 0 and σ = 1, and erf (suitably rescaled) gives its cumulative distribution function:

http://en.wikipedia.org/wiki/Normal_distribution

So the rescaled version must integrate to 1 over (-∞, ∞).
 

1. What is integration and why is it important in science?

Integration is a mathematical process that involves finding the area under a curve or, more generally, accumulating a quantity over a region, such as the volume of a three-dimensional object. It is important in science because it lets us recover quantities such as displacement from velocity, velocity from acceleration, and work from force. It also helps us understand the behavior of physical systems and make predictions based on mathematical models.

2. How is integration related to the Jacobian?

The Jacobian is a matrix of partial derivatives that describes how one coordinate system transforms into another. Integration and the Jacobian are closely related because the absolute value of the Jacobian determinant is the scaling factor that appears when changing the variables in an integral. In other words, the Jacobian tells us how area or volume elements stretch or shrink under the change of coordinates, which often simplifies the integration process.
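As a small illustration (mine, not from the FAQ), here is the Jacobian determinant of the polar-coordinate map (r, θ) ↦ (r cos θ, r sin θ), estimated by central finite differences; it comes out to r, exactly the factor that appears in dx dy = r dr dθ:

```python
import math

def jacobian_det_polar(r, theta, h=1e-6):
    """Finite-difference estimate of the determinant of the Jacobian of
    (r, theta) -> (r*cos(theta), r*sin(theta)). Analytically this is r."""
    def f(r, t):
        return (r * math.cos(t), r * math.sin(t))

    # Central differences for the four partial derivatives:
    x_r = (f(r + h, theta)[0] - f(r - h, theta)[0]) / (2 * h)
    y_r = (f(r + h, theta)[1] - f(r - h, theta)[1]) / (2 * h)
    x_t = (f(r, theta + h)[0] - f(r, theta - h)[0]) / (2 * h)
    y_t = (f(r, theta + h)[1] - f(r, theta - h)[1]) / (2 * h)
    return x_r * y_t - x_t * y_r  # det [[x_r, x_t], [y_r, y_t]]

print(jacobian_det_polar(2.0, 0.7))  # close to r = 2
```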

3. What is the role of the Jacobian in multivariable calculus?

The Jacobian plays a crucial role in multivariable calculus as it is used to calculate the change of variables in multiple dimensions. It helps us transform integrals from one coordinate system to another, making it easier to solve problems in higher dimensions. The Jacobian is also used in vector calculus to calculate line integrals, surface integrals, and volume integrals.

4. How is the Jacobian used in physics and engineering?

In physics and engineering, the Jacobian is used to solve problems involving multiple variables, such as motion in three-dimensional space or heat transfer in different directions. It is also used in control systems and optimization problems to find the most efficient solutions. In addition, the Jacobian is used in fluid mechanics to change coordinates when analyzing fluid flow.

5. Can the Jacobian be negative and what does it represent?

Yes, the Jacobian determinant can be negative. Its sign is determined by the orientation of the coordinate transformation: a negative determinant means the transformation flips orientation, for example by swapping two axes or reflecting one of them. In integration, only the absolute value of the determinant is used, so the sign does not affect areas or volumes; it matters wherever orientation itself matters, such as in oriented surface integrals.
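A minimal example (my illustration): the map (x, y) ↦ (y, x) swaps the axes, and its Jacobian determinant is -1, signaling the orientation flip while leaving areas unchanged, since integration uses |det J|:

```python
def det2(a, b, c, d):
    """Determinant of the 2x2 matrix [[a, b], [c, d]]."""
    return a * d - b * c

# Jacobian of (x, y) -> (y, x) is [[0, 1], [1, 0]]: orientation-reversing.
print(det2(0, 1, 1, 0))       # -1
# Area scaling uses the absolute value:
print(abs(det2(0, 1, 1, 0)))  # 1
```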
