Are All Integrals Solvable by Changing Coordinate Systems?

  • Context: Undergrad
  • Thread starter: hover
  • Tags: Integration, Jacobian
Discussion Overview

The discussion centers around the question of whether all integrals can be solved by changing coordinate systems, particularly through the use of Jacobian matrices. Participants explore examples of integrals that are difficult or impossible to solve in Cartesian coordinates but may become solvable in polar coordinates. The conversation touches on theoretical aspects of calculus, including the existence of antiderivatives and the implications of Liouville's theorem.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • One participant presents a definite integral that is challenging to solve in Cartesian coordinates but becomes solvable in polar coordinates, questioning if this applies to all integrals.
  • Another participant argues that while many integrals can be transformed to be solvable, there are exceptions, citing the function f(x)=e^{-x^2} which has no closed-form antiderivative despite being continuous.
  • This participant references Liouville's theorem, which states that not all continuous functions have closed-form antiderivatives, thus complicating the idea of solvability through coordinate transformation.
  • Further, they mention the error function as an example of an antiderivative that cannot be expressed in closed form, and discuss the implications of this for coordinate transformations.
  • A later reply expands on the Gaussian integral, noting that while certain integrals can be solved using coordinate transformations, arbitrary limits of integration may still pose challenges.
  • Another participant adds to the discussion by explaining a method of transforming the Gaussian integral using polar coordinates, emphasizing the complexity of integrating functions with arbitrary limits.

Areas of Agreement / Disagreement

Participants express differing views on the extent to which coordinate transformations can solve integrals. While some suggest that many integrals can be made solvable through such transformations, others highlight significant exceptions and limitations, indicating that the discussion remains unresolved.

Contextual Notes

Limitations include the dependence on definitions of solvability, the nature of closed-form expressions, and the specific conditions under which certain integrals can be evaluated. The discussion also reflects the complexity of integrating functions with arbitrary limits.

hover
Before I ask my question, I'll lead up to it through an example. Just for reference, I have only taken up to Calc 3 and haven't taken Vector Calc. Let's look at this definite integral:

[tex]\int\int \cos(x^2+y^2)\,dx\,dy[/tex]

The bounds on the outer integral are from 0 to 1, while the bounds on the inner integral are from 0 to [itex]\sqrt{1-y^2}[/itex]. I don't know how to include that in the LaTeX. If you have taken Calc 3, or at least Calc 2, you will notice that this is an impossible integral to take in Cartesian coordinates. Everything learned in Calc 2 (u-substitution, integration by parts, trig substitution, etc.) fails on this integral. However, if you switch to polar coordinates, it becomes possible to take this integral. You can see the full problem done out here in example 5. The polar form of integration can be derived from the Jacobian matrix, and it is simple to show this (in example 2).

Here is my question. There are a ton of integrals that look impossible to do in certain coordinate systems, but if we switch coordinate systems they become possible, as in this example. Does this mean that all integrals are potentially possible to do if we switch coordinate systems using Jacobian matrices?
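The polar-coordinate answer for this particular integral works out to (π/4)·sin(1), and that can be checked numerically against the Cartesian form. A minimal sketch in Python (the function name and the midpoint-rule discretization are my own choices, not part of the thread):

```python
import math

def quarter_disk_integral(n=800):
    """Midpoint rule for the Cartesian double integral
    ∫_0^1 ∫_0^{sqrt(1-y^2)} cos(x^2 + y^2) dx dy over the quarter disk."""
    total = 0.0
    h = 1.0 / n
    for i in range(n):
        y = (i + 0.5) * h
        xmax = math.sqrt(1.0 - y * y)
        m = max(1, int(n * xmax))  # x-resolution proportional to strip width
        hx = xmax / m
        for j in range(m):
            x = (j + 0.5) * hx
            total += math.cos(x * x + y * y) * hx * h
    return total

# Polar coordinates give ∫_0^{π/2} ∫_0^1 cos(r^2) · r dr dθ = (π/4)·sin(1),
# where the extra factor r is the Jacobian of the transformation.
exact = (math.pi / 4) * math.sin(1.0)
approx = quarter_disk_integral()
print(approx, exact)  # the two values agree to several decimal places
```

The two numbers agreeing is, of course, only a sanity check on this one integral, not an answer to the general question.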
 
hover said:
The bounds on the outer integral is from 0 to 1 while the bounds on the inner integral is from 0 to [itex]\sqrt{1-y^2}[/itex]. I don't know how to include that in the Latex.

With the bounds written in, the integral is
[tex]\int_0^1\int_0^{\sqrt{1-y^2}}\cos(x^2+y^2)dxdy[/tex]

DonAntonio



hover said:
Here is my question. There are a ton of integrals that look impossible to do in certain coordinate systems but if we switch coordinate systems, they become possible like in this example. Does this mean that all integrals are potentially possible to do if we switch coordinate systems by using Jacobian Matrices?
 
That depends on what you mean by "possible". Let's take a step back here and look simply at a function [itex]f:A\rightarrow \mathbb{R}[/itex] for A, some open subset of [itex]\mathbb{R}[/itex]. By the fundamental theorem of calculus, if f is continuous, then it has an antiderivative.

But let's consider the function [itex]f(x)=e^{-x^2}[/itex]. This function is continuous everywhere, so it admits an antiderivative. However, there is a rather famous theorem by Liouville (see http://en.wikipedia.org/wiki/Liouville%27s_theorem_(differential_algebra)) which asserts that no closed-form expression (a finite combination of functions like x, e^x, trig functions, powers of x, and their compositions and products, etc.) for said antiderivative exists.

This is why we have this thing called the "error function". It is defined, up to the normalizing constant [itex]2/\sqrt{\pi}[/itex], as a particular antiderivative of [itex]e^{-x^2}[/itex]. Specifically,

[tex]\operatorname{erf}(x)=\frac{2}{\sqrt{\pi}}\int_{0}^{x}e^{-t^2}\,dt .[/tex]

That isn't to say it doesn't have a series representation. We have the following representation,

[tex]erf(x)=\frac{2}{\sqrt{\pi}}\sum_{n=0}^{\infty} \frac{(-1)^n x^{2n+1}}{n! (2n+1)},[/tex]

which is valid for all x. However, since this is an infinite sum, it is not a closed-form expression.

So back to your question... If you could find a coordinate transform (in closed form expression) which would allow you to find an antiderivative (in closed form) for e^{-x^2}, then in theory, after you were finished, you should be able to transform back to the original coordinates. But then you would have a closed form expression for the antiderivative, which contradicts Liouville's theorem.

In conclusion, it is not always possible to find a coordinate transformation which will allow you to integrate a continuous function using traditional methods (integration by parts, etc), because these methods yield closed form expressions, and closed form expressions don't always exist.

[EDIT] It is worth mentioning that the integral

[tex]\int_{-\infty}^{\infty}e^{-t^2}dt[/tex]

can be calculated, and is found to be [itex]\sqrt{\pi}[/itex]. This is found by a little bit of trickery and, you guessed it, a coordinate transformation (see the Gaussian integral). However, for arbitrary limits of integration, you're out of luck.
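The series representation above is easy to verify numerically; a minimal Python sketch (the function name is mine, and `math.erf` from the standard library is used only as the comparison value):

```python
import math

def erf_series(x, terms=30):
    """Maclaurin series for the error function:
    erf(x) = (2/sqrt(pi)) * Σ_{n≥0} (-1)^n x^(2n+1) / (n! (2n+1))."""
    s = 0.0
    for n in range(terms):
        s += (-1) ** n * x ** (2 * n + 1) / (math.factorial(n) * (2 * n + 1))
    return 2.0 / math.sqrt(math.pi) * s

print(erf_series(1.0), math.erf(1.0))  # the two values agree closely
```

Thirty terms are far more than needed for moderate x, since the factorial in the denominator makes the series converge very quickly; the point is just that the infinite sum, truncated, approximates a function that provably has no closed form.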
 

Just to add a bit to christoff's post:

I guess this is the transform you're referring to. Take

[tex]\int_{-\infty}^{\infty}e^{-x^2}dx[/tex]

and multiply it by

[tex]\int_{-\infty}^{\infty}e^{-y^2}dy[/tex]

to end up with the double integral of [tex]e^{-x^2-y^2}\,dx\,dy[/tex] over the whole plane, and then switch to polar coordinates.

Also, after rescaling, [itex]e^{-x^2}[/itex] becomes the probability density function of a normally-distributed random variable with mean 0 and σ=1 (and erf, suitably rescaled, is the corresponding cumulative distribution function):

http://en.wikipedia.org/wiki/Normal_distribution

So the rescaled version must integrate to 1 on (-∞,∞).
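The polar-coordinate trick described above implies (∫ e^{-x²} dx)² = π, which can be checked numerically. A minimal sketch in Python (the function name, the ±8 truncation, and the midpoint rule are choices of this sketch, not part of the thread):

```python
import math

def gaussian_integral(a=8.0, n=100000):
    """Midpoint rule for ∫_{-a}^{a} e^{-x^2} dx.
    The tails beyond ±8 contribute on the order of e^{-64}, i.e. negligibly."""
    h = 2 * a / n
    return sum(math.exp(-(-a + (k + 0.5) * h) ** 2) for k in range(n)) * h

I = gaussian_integral()
print(I ** 2, math.pi)  # the square of the integral is very close to π
```

This only confirms the one special value √π over (-∞, ∞); as christoff notes, for arbitrary finite limits the antiderivative still has no closed form and one falls back on erf or numerics.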
 
