Using Laplace transforms to solve integrals

In summary, the student tries to solve a homework problem using a method other than substitution but gets the wrong answer. The discussion shows that the delta function makes the calculation subtle and that the order of integration cannot simply be exchanged.
  • #1
JBrandonS

Homework Statement



##\int_0^\infty \frac{a}{a^2+x^2} dx##

Homework Equations



All the basic integration techniques.

The Attempt at a Solution



So, I saw this problem and wanted to try it using a different method than substitution, which can obviously solve it pretty easily. Since the integrand is a very clear Laplace transform, I figured I could easily use that, but I am getting the wrong answer. Here is what I do:

##
\int_0^\infty \frac{a}{a^2+x^2} dx = \int_0^\infty dx \int_0^\infty e^{-a t}\cos(x t) dt
= \int_0^\infty e^{-a t} \int_0^\infty \cos(x t) dx \, dt = \int_0^\infty \pi e^{-a t} \delta(t) dt
= \pi (1 - \theta(x))
##

The correct answer is:
##\frac{\pi}{2}## if a > 0
0 if a = 0
##-\frac{\pi}{2}## if a < 0
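As a quick numerical sanity check of the stated answer (the substitution ##x = \tan u## and the grid size below are my own choices, not part of any of the methods discussed), a simple midpoint rule reproduces ##\mathrm{sign}(a)\,\pi/2##:

```python
import math

def integral(a, n=10000):
    # Compute ∫_0^∞ a/(a² + x²) dx via the substitution x = tan(u),
    # which maps [0, ∞) onto [0, π/2); midpoint rule on the u-grid.
    h = (math.pi / 2) / n
    total = 0.0
    for i in range(n):
        u = (i + 0.5) * h
        x = math.tan(u)
        total += a / (a * a + x * x) / math.cos(u) ** 2
    return total * h

print(integral(1.0))   # close to pi/2
print(integral(-2.0))  # close to -pi/2
print(integral(0.0))   # 0
```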
 
  • #2
I don't think
[itex]\int_0^\infty e^{-a t} \int_0^\infty \cos(x t) dx dt[/itex]
has a well-defined value once the x-integral is taken, since that integral doesn't really converge. Depending on how you take it, you could get a positive, negative, or zero result.

It's an interesting idea, though.
 
  • #3
jfizzix said:
I don't think
[itex]\int_0^\infty e^{-a t} \int_0^\infty \cos(x t) dx dt[/itex]
has a well-defined value once the x-integral is taken, since that integral doesn't really converge. Depending on how you take it, you could get a positive, negative, or zero result.

It's an interesting idea, though.

Well, the Dirac delta function is defined as ##\delta(a) = \frac{1}{\pi} \int_0^\infty \cos(a x) dx##, so you use that to say ##\int_0^\infty \cos(a x) dx = \pi \delta(a)##, and as you said this turns it into a piecewise function for a < 0, a = 0, and a > 0.

Having talked to some other people, it seems that the delta function mucks this equation up and makes it so I cannot switch the order of integration (the dx dt), which is something I didn't know about.
 
  • #4
JBrandonS said:
Well, the Dirac delta function is defined as ##\delta(a) = \frac{1}{\pi} \int_0^\infty \cos(a x) dx##, so you use that to say ##\int_0^\infty \cos(a x) dx = \pi \delta(a)##, and as you said this turns it into a piecewise function for a < 0, a = 0, and a > 0.

Having talked to some other people, it seems that the delta function mucks this equation up and makes it so I cannot switch the order of integration (the dx dt), which is something I didn't know about.
I don't know how you got that definition of the Dirac delta distribution. I get the feeling it's wrong, mostly because the integral doesn't converge.
 
  • #5
Mandelbroth said:
I don't know how you got that definition of the Dirac delta distribution. I get the feeling it's wrong, mostly because the integral doesn't converge.

Initially it was brought up in the notes of Feynman's numerical methods class that ##\int_0^\infty \cos(b x) dx = \pi \delta(b)##. I was unable to locate the original link I used to download the file, so I uploaded it here. That equality is first given on page 16 of the pdf (labeled as page 14 in the pdf).

The definition of the delta was found on some site; I cannot confirm it, but I am still very inclined to believe that Feynman knew what he was talking about. :)
 
  • #6
The Fourier representation of the [itex]\delta[/itex] distribution (not function!) is correct, because
[tex]\int_0^{\infty} \mathrm{d} x \cos(a x)=\int_0^{\infty} \mathrm{d} x \frac{\exp(\mathrm{i} a x)+\exp(-\mathrm{i} a x)}{2}.[/tex]
Now substitution of [itex]x'=-x[/itex] in the second term gives
[tex]\int_0^{\infty} \mathrm{d} x \cos(a x)=\frac{1}{2} \int_{\mathbb{R}} \mathrm{d} x \exp(\mathrm{i} a x)=\pi \delta(a).[/tex]
What's wrong is the final integral, because you integrate only over the positive [itex]t[/itex] axis. Correct is
[tex]\int_0^{\infty} \mathrm{d} t \exp(-a t) \delta(t)=\int_{\mathbb{R}} \mathrm{d} t \exp(-a t) \Theta(t) \delta(t)=\Theta(0)=\frac{1}{2}.[/tex]

To prove all this, one has to regularize the integrals. Let's start with the [itex]\delta[/itex] distribution in terms of a cosine Fourier transform. A regularization is
[tex]I_{\epsilon}(a)=\int_0^{\infty} \mathrm{d} t \exp(-\epsilon t) \cos(a t)=\frac{\epsilon}{a^2+\epsilon^2}.[/tex]
To show that the weak limit for [itex]\epsilon \rightarrow 0^+[/itex] is indeed [itex]\pi \delta(a)[/itex], we have to integrate an arbitrary test function (e.g., from the space of quickly decaying functions, the Schwartz space) and then take the limit
[tex]l=\lim_{\epsilon \rightarrow 0^+} \int_{\mathbb{R}} \mathrm{d} a I_{\epsilon}(a) \phi(a).[/tex]
To evaluate the integral we can use the theorem of residues by closing the integration path by a semicircle at infinity in the upper [itex]a[/itex]-half plane:
[tex]l=\lim_{\epsilon \rightarrow 0^+} 2 \pi \mathrm{i} \frac{\epsilon}{2 \mathrm{i} \epsilon} \phi(\mathrm{i} \epsilon)=\pi \phi(0).[/tex]
This means that indeed the weak limit is [itex]\pi \delta(a)[/itex].

The final integral can also be evaluated with the help of our approximation of the [itex]\delta[/itex] distribution. We have
[tex]\int_0^{\infty} \mathrm{d} t \frac{\epsilon}{\epsilon^2+t^2} \exp(-a t)=\mathrm{Ci}(\epsilon a) \sin(\epsilon a)+\cos(a \epsilon) \left[\frac{\pi}{2}-\mathrm{Si}(a \epsilon)\right]\rightarrow \frac{\pi}{2}[/tex]
for [itex]\epsilon \rightarrow 0^+[/itex].
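Both steps of this regularization argument can be checked numerically. The sketch below uses my own quadrature choices: a plain midpoint rule for ##I_\epsilon(a)##, and the substitution ##t = \epsilon\tan u##, which turns ##\int_0^\infty \frac{\epsilon}{\epsilon^2+t^2}e^{-at}\,dt## into ##\int_0^{\pi/2} e^{-a\epsilon\tan u}\,du##, for the ##\pi/2## limit:

```python
import math

def I_eps(a, eps, upper=60.0, n=60000):
    # I_eps(a) = ∫_0^upper e^(-eps*t) cos(a*t) dt, midpoint rule;
    # the neglected tail is O(e^(-eps*upper)).
    h = upper / n
    return sum(math.exp(-eps * (i + 0.5) * h) * math.cos(a * (i + 0.5) * h)
               for i in range(n)) * h

def delta_integral(a, eps, n=200000):
    # ∫_0^∞ eps/(eps² + t²) e^(-a t) dt via t = eps*tan(u):
    # the integrand collapses to exp(-a*eps*tan(u)) on [0, π/2).
    h = (math.pi / 2) / n
    return sum(math.exp(-a * eps * math.tan((i + 0.5) * h))
               for i in range(n)) * h

# Closed form of the regularized cosine integral: eps/(a² + eps²).
print(I_eps(1.3, 0.5), 0.5 / (1.3**2 + 0.5**2))
# The eps -> 0+ limit approaches pi/2 from below.
print(delta_integral(1.0, 1e-2), delta_integral(1.0, 1e-3))
```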
 
  • #7
JBrandonS said:
Initially it was brought up in the notes of Feynman's numerical methods class that ##\int_0^\infty \cos(b x) dx = \pi \delta(b)##. I was unable to locate the original link I used to download the file, so I uploaded it here. That equality is first given on page 16 of the pdf (labeled as page 14 in the pdf).

The definition of the delta was found on some site; I cannot confirm it, but I am still very inclined to believe that Feynman knew what he was talking about. :)
Feynman is pretty awesome, but he was definitely talking about physics. What he said was mathematically incorrect. He later justifies his equation by saying "for all physical systems, ##\sin\beta L## will eventually dampen out." This is not a mathematical justification.
 
  • #8
Let me first say that I am self-taught; I have only taken one semester's worth of calc classes, but I have taught myself several courses. Getting on this site, however, makes me feel like I know so very little. So, thank you for your help, but a lot of it I didn't understand. I will try to outline everything that is unclear to me.

vanhees71 said:
The Fourier representation of the [itex]\delta[/itex] distribution (not function!) is correct, because
[tex]\int_0^{\infty} \mathrm{d} x \cos(a x)=\int_0^{\infty} \mathrm{d} x \frac{\exp(\mathrm{i} a x)+\exp(-\mathrm{i} a x)}{2}.[/tex]
Now substitution of [itex]x'=-x[/itex] in the second term gives
[tex]\int_0^{\infty} \mathrm{d} x \cos(a x)=\frac{1}{2} \int_{\mathbb{R}} \mathrm{d} x \exp(\mathrm{i} a x)=\pi \delta(a).[/tex]

Two things here. First: how are you able to make that substitution? If I assume the substitution is possible, I understand the outcome; I just don't see how it is possible.
Second: am I correct in thinking that ##\int_{\mathbb{R}}## is shorthand for ##\int_{-\infty}^\infty##?


vanhees71 said:
What's wrong is the final integral, because you integrate only over the positive [itex]t[/itex] axis. Correct is
[tex]\int_0^{\infty} \mathrm{d} t \exp(-a t) \delta(t)=\int_{\mathbb{R}} \mathrm{d} t \exp(-a t) \Theta(t) \delta(t)=\Theta(0)=\frac{1}{2}.[/tex]

I believe the ##\Theta(t)## here is the Heaviside theta, correct? If so, why do you say that ##\Theta(0)=\frac{1}{2}##? I have never seen it defined at 0. Also, where did the ##\pi## go?

vanhees71 said:
To show that the weak limit for [itex]\epsilon \rightarrow 0^+[/itex] is indeed [itex]\pi \delta(a)[/itex], we have to integrate an arbitrary test function (e.g., from the space of quickly decaying functions, the Schwartz space) and then take the limit
[tex]l=\lim_{\epsilon \rightarrow 0^+} \int_{\mathbb{R}} \mathrm{d} a I_{\epsilon}(a) \phi(a).[/tex]
To evaluate the integral we can use the theorem of residues by closing the integration path by a semicircle at infinity in the upper [itex]a[/itex]-half plane:
[tex]l=\lim_{\epsilon \rightarrow 0^+} 2 \pi \mathrm{i} \frac{\epsilon}{2 \mathrm{i} \epsilon} \phi(\mathrm{i} \epsilon)=\pi \phi(0).[/tex]
This means that indeed the weak limit is [itex]\pi \delta(a)[/itex].

The final integral can also be evaluated with the help of our approximation of the [itex]\delta[/itex] distribution. We have
[tex]\int_0^{\infty} \mathrm{d} t \frac{\epsilon}{\epsilon^2+t^2} \exp(-a t)=\mathrm{Ci}(\epsilon a) \sin(\epsilon a)+\cos(a \epsilon) \left[\frac{\pi}{2}-\mathrm{Si}(a \epsilon)\right]\rightarrow \frac{\pi}{2}[/tex]
for [itex]\epsilon \rightarrow 0^+[/itex].

Pretty much all the above went above my head. If someone wants to try and dumb that down for me it would be great but otherwise I can look into it a lot more when I get home. But any recommendations on books / articles on the topics discussed would be great.
 
  • #9
The natural thing to consider is
$$\int_0^\infty \frac{a}{a^2+x^2} \mathrm{d}x=\mathcal{L} \{ \sin(t)/t \} |_{s=0}=\int_0^\infty \frac{\sin(t)}{t} \mathrm{d}t=\pi/2$$
Usually it is done in reverse, as the ##\sin(t)/t## integral is considered harder than the ##a/(a^2+x^2)## integral.
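This Laplace-transform identity can be spot-checked at positive ##s##, where the integral converges comfortably: ##\mathcal{L}\{\sin(t)/t\}(s) = \arctan(1/s)##, which tends to ##\pi/2## as ##s \to 0^+##. A rough midpoint-rule sketch (cutoff and grid are my own choices):

```python
import math

def laplace_sinc(s, upper=60.0, n=300000):
    # ∫_0^upper e^(-s t) sin(t)/t dt, midpoint rule.
    # sin(t)/t is smooth at t = 0 (limit 1); the midpoint grid never hits t = 0.
    h = upper / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        total += math.exp(-s * t) * math.sin(t) / t
    return total * h

print(laplace_sinc(1.0), math.atan(1.0))  # both near pi/4
print(laplace_sinc(0.5), math.atan(2.0))  # both near arctan(2)
```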
 
  • #10
Mandelbroth said:
Feynman is pretty awesome, but he was definitely talking about physics. What he said was mathematically incorrect. He later justifies his equation by saying "for all physical systems, ##\sin\beta L## will eventually dampen out." This is not a mathematical justification.


Well, after vanhees71's explanation do you still believe this to be the case? And if so, under what conditions do you think this would be false or true?

lurflurf said:
The natural thing to consider is
$$\int_0^\infty \frac{a}{a^2+x^2} \mathrm{d}x=\mathcal{L} \{ \sin(t)/t \} |_{s=0}=\int_0^\infty \frac{\sin(t)}{t} \mathrm{d}t=\pi/2$$
Usually it is done in reverse as the sin(t)/t integral is considered harder than the a/(a^2+x^2) integral.

That is quite interesting. The main concern I have with that method is how to handle the cases a < 0 and a = 0, both of which give different answers.
 
  • #11
JBrandonS said:
Well, after vanhees71's explanation do you still believe this to be the case? And if so, under what conditions do you think this would be false or true?
vanhees71's explanation comes from an older, less agreeable argument for the definition of the delta distribution. Again, it's a physics argument. The math is wrong. The integral still doesn't converge to anything.
 
  • #12
JBrandonS said:
Well, after vanhees71's explanation do you still believe this to be the case? And if so, under what conditions do you think this would be false or true?



That is quite interesting. The main concern I have with that method is how to handle the cases a < 0 and a = 0, both of which give different answers.
oops that should have been

$$\int_0^\infty \frac{a}{a^2+x^2} \mathrm{d}x=\mathcal{L} \{ \sin(a \, t)/t \} |_{s=0}=\int_0^\infty \frac{\sin(a \, t)}{t} \mathrm{d}t=\mathrm{sign}(a) \, \pi/2$$
 
  • #13
Mandelbroth said:
vanhees71's explanation comes from an older, less agreeable argument for the definition of the delta distribution. Again, it's a physics argument. The math is wrong. The integral still doesn't converge to anything.

I would not say the math is wrong and the integral does not converge. A different definition of convergence is being used.
 
  • #14
lurflurf said:
I would not say the math is wrong and the integral does not converge. A different definition of convergence is being used.
I don't know what to say to that. The definition is not one I consider canonical, and I don't think it advisable to define convergence that way in math. It's really just physics.
 
  • #15
^Using different definitions is not a problem except when it becomes unclear which one is being used. One should mention the definition being used. Often an integral sign is used without comment and who knows what integral is intended.
 
  • #16
Mandelbroth said:
vanhees71's explanation comes from an older, less agreeable argument for the definition of the delta distribution. Again, it's a physics argument. The math is wrong. The integral still doesn't converge to anything.

The math isn't wrong. This is all very standard and canonical mathematics. And this is certainly not physics, although it was originally invented in physics. But it can certainly be made rigorous in mathematics.
 
  • #17
Mandelbroth said:
vanhees71's explanation comes from an older, less agreeable argument for the definition of the delta distribution. Again, it's a physics argument. The math is wrong. The integral still doesn't converge to anything.

What is wrong, and why is this an "older, less agreeable argument"? Of course, the first part of my posting was a quick-and-dirty physicist's evaluation of the final integral, particularly because I multiplied two distributions, namely the Heaviside unit-step and the Dirac [itex]\delta[/itex] distribution. Then I used the rule of thumb [itex]\Theta(0)=1/2[/itex], which is correct in Fourier analysis, because Fourier series or integrals converge to the mean of a function where this function has jumps.

The second part, where I try to prove that these assumptions are correct in the given case, should be ok. I defined the [itex]\delta[/itex] distribution over the test-function space with the domain [itex]\mathbb{R}_{\geq 0}[/itex] as a weak limit of a standard "[itex]\delta[/itex] series" of test functions. What's old-fashioned about this and (more importantly) what's wrong with it? What would be the modern way to evaluate the final integral in the OP with the correct result?

I think we all agree that this method of evaluating the very first integral is not very elegant, because you can do much better directly by simply evaluating the integral, since
[tex]\int \mathrm{d} x \frac{a}{a^2+x^2}=\arctan(x/a).[/tex]
Then you get everything correct for both signs of [itex]a[/itex]. For [itex]a=0[/itex] the integrand vanishes identically for [itex]x>0[/itex], so the integral is 0 (the antiderivative formula itself breaks down at [itex]a=0[/itex]).
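The antiderivative can be double-checked with a central finite difference; the particular values of ##a##, ##x##, and the step ##h## below are arbitrary choices of mine:

```python
import math

# d/dx arctan(x/a) should equal a/(a² + x²); central-difference check.
a, x, h = 2.0, 1.5, 1e-6
numeric = (math.atan((x + h) / a) - math.atan((x - h) / a)) / (2 * h)
exact = a / (a * a + x * x)
print(numeric, exact)  # agree to roughly 1e-10
```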
 
  • #18
vanhees71 said:
Now substitution of [itex]x'=-x[/itex] in the second term gives
[tex]\int_0^{\infty} \mathrm{d} x \cos(a x)=\frac{1}{2} \int_{\mathbb{R}} \mathrm{d} x \exp(\mathrm{i} a x)=\pi \delta(a).[/tex]


Can anyone explain how this substitution works and how you are able to use it to combine the two exponentials? I'm still not seeing how that is possible.
 
  • #19
JBrandonS said:
Can anyone explain how this substitution works and how you are able to use it to combine the two exponentials? I'm still not seeing how that is possible.
It's a linear combination of complex exponentials. The integral is a linear operator.
 
  • #20
Mandelbroth said:
It's a linear combination of complex exponentials. The integral is a linear operator.

I am still failing to see where the x' goes when we combine them.
 
  • #21
Substitution of [itex]x'=-x[/itex] gives for the second part of the integral
[tex]\int_{0}^{\infty} \mathrm{d} x \exp(-\mathrm{i} a x)=\int_{-\infty}^{0} \mathrm{d} x' \exp(\mathrm{i} x' a).[/tex]
Now you rename the integration variable back to [itex]x[/itex] and write
[tex]I(a)=\frac{1}{2} \int_0^{\infty} \mathrm{d} x \exp(\mathrm{i} x a) + \frac{1}{2} \int_{-\infty}^{0} \mathrm{d} x \exp(\mathrm{i} x a)=\frac{1}{2}\int_{-\infty}^{\infty} \mathrm{d} x \exp(\mathrm{i} x a) =\pi \delta(a).[/tex]
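The finite-cutoff version of this folding identity is easy to verify numerically: for any cutoff ##A##, ##\int_0^A \cos(ax)\,dx = \frac{1}{2}\int_{-A}^{A} e^{\mathrm{i}ax}\,dx = \sin(aA)/a##. A small sketch with my own cutoff and grid choices:

```python
import cmath
import math

a, A, n = 1.7, 5.0, 200000

# Left side: ∫_0^A cos(a x) dx, midpoint rule.
h = A / n
left = sum(math.cos(a * (i + 0.5) * h) for i in range(n)) * h

# Right side: (1/2) ∫_{-A}^{A} exp(i a x) dx, same rule on the doubled interval;
# the imaginary (odd) part cancels by symmetry of the grid about 0.
h2 = 2 * A / n
right = 0.5 * sum(cmath.exp(1j * a * (-A + (i + 0.5) * h2)) for i in range(n)) * h2

print(left, right.real, math.sin(a * A) / a)  # all three agree
```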
 
  • #22
vanhees71 said:
Substitution of [itex]x'=-x[/itex] gives for the second part of the integral
[tex]\int_{0}^{\infty} \mathrm{d} x \exp(-\mathrm{i} a x)=\int_{-\infty}^{0} \mathrm{d} x' \exp(\mathrm{i} x' a).[/tex]
Now you rename the integration variable back to [itex]x[/itex] and write
[tex]I(a)=\frac{1}{2} \int_0^{\infty} \mathrm{d} x \exp(\mathrm{i} x a) + \frac{1}{2} \int_{-\infty}^{0} \mathrm{d} x \exp(\mathrm{i} x a)=\frac{1}{2}\int_{-\infty}^{\infty} \mathrm{d} x \exp(\mathrm{i} x a) =\pi \delta(a).[/tex]

This helps clear it up. I was forgetting to modify the bounds, and it was not obvious that the minus sign from dx' = -dx would go away, because you then have to flip the bounds.

Thanks!
 

1. How do Laplace transforms help in solving integrals?

Laplace transforms are a mathematical tool that maps a function from the time domain to the frequency domain. Many integrals and differential equations become easier to manipulate and solve in the transformed domain.
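As a concrete illustration, standard transform pairs such as ##\mathcal{L}\{e^{-t}\}(s) = 1/(s+1)## and ##\mathcal{L}\{\sin t\}(s) = 1/(s^2+1)## can be confirmed by direct quadrature (the cutoff and grid below are illustrative choices, not canonical values):

```python
import math

def laplace(f, s, upper=50.0, n=100000):
    # Approximate L{f}(s) = ∫_0^upper e^(-s t) f(t) dt by the midpoint rule;
    # the tail beyond `upper` is negligible for s > 0 and bounded f.
    h = upper / n
    return sum(math.exp(-s * (i + 0.5) * h) * f((i + 0.5) * h)
               for i in range(n)) * h

print(laplace(lambda t: math.exp(-t), 2.0))  # near 1/(2+1) = 1/3
print(laplace(math.sin, 1.0))                # near 1/(1²+1) = 1/2
```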

2. What types of integrals can be solved using Laplace transforms?

Laplace transforms can be used to solve a wide range of integrals, including those with polynomial, exponential, and trigonometric functions.

3. Are there any limitations to using Laplace transforms for solving integrals?

While Laplace transforms are a powerful tool, they may not always be the most efficient method for solving integrals. In some cases, other techniques such as substitution or integration by parts may be more suitable.

4. Can Laplace transforms be used to solve integrals with multiple variables?

Yes, Laplace transforms can be extended to functions with multiple variables. However, the process becomes more complex and may require additional techniques such as partial fraction decomposition.

5. Are there any real-world applications for using Laplace transforms to solve integrals?

Yes, Laplace transforms have numerous applications in science and engineering, including solving differential equations in physics, modeling electrical circuits, and analyzing signals in communication systems.
