Integral of a delta function from -infinity to 0 or 0 to +infinity

In summary, there is an ongoing debate about how the delta function should be defined and how it behaves when an integration limit falls at the origin. Some books define the Heaviside step function to be 1/2 at x = 0, while others regard the value there as ambiguous because of the singularity of the integrand. The same ambiguity arises when different sequences of functions converging to the delta function are considered. There is no consensus on the correct definition and integration limits, leading to an ongoing discussion among engineers and mathematicians.
  • #1
maverick280857
Hello everyone

Today in my QM class, a discussion arose on the definition of the delta function using the Heaviside step function [itex]\Theta(x)[/itex] (= 0 for x < 0 and 1 for x > 0). Specifically,

[tex]\Theta(x) = \int_{-\infty}^{x}\delta(t) dt[/tex]

which of course gives

[tex]\frac{d\Theta(x)}{dx} = \delta(x)[/tex]

Some books (especially those on communications and signal analysis) define [itex]\Theta(0) = 1/2[/itex]. However, if I set [itex]x = 0[/itex] in the above integral, I get

[tex]\int_{-\infty}^{0}\delta(t) dt = \Theta(0) = \frac{1}{2}[/tex]

To me, this is an ambiguous result. It would follow if [itex]\delta(x)[/itex] were a "normal function", by virtue of its evenness, but the point [itex]x = 0[/itex] is a singular point of the integrand, and besides, the ordinary Riemann integral would implicitly treat the limits of integration as the open interval [itex](-\infty,0)[/itex] rather than the closed interval [itex](-\infty,0][/itex].

Now, I have the following question:

Is the expression [itex]\int_{-\infty}^{0}\delta(t) dt = \frac{1}{2}[/itex] correct?

If I construct a sequence of well behaved functions (rectangular, gaussian, or something else) [itex]\{\delta_{n}(x)\}[/itex] which converge to [itex]\delta(x)[/itex], if the elements of this sequence are even then indeed

[tex]\int_{-\infty}^{0}\delta_{n}(t) dt = \frac{1}{2}[/tex]

But can one infer

[tex]\int_{-\infty}^{0}\delta(t) dt = \lim_{n \rightarrow \infty}\int_{-\infty}^{0}\delta_{n}(t) dt = \frac{1}{2}[/tex]

from this in every case? I think the answer should depend on how the sequence is defined, and that in general such a result does not make sense, because there is an ambiguity when one writes 0 as the upper limit: does it mean [itex]0^-[/itex] or [itex]0^+[/itex]? (For [itex]0^-[/itex] the integral is zero, and for [itex]0^+[/itex] it is 1.)
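
A quick numerical sketch of this point (illustrative only, not part of the original post; plain Python using closed-form Gaussian-tail expressions, and the function names are just labels):

[code]
# Three nascent-delta families, all converging to delta(x) as distributions,
# whose half-line integrals int_{-inf}^{0} delta_n(x) dx have different limits.
from math import erfc, sqrt

def half_line_symmetric(n):
    # delta_n(x) = sqrt(n/pi) exp(-n x^2): even, so the half-line integral is 1/2 for every n
    return 0.5

def half_line_right_shifted(n):
    # delta_n(x) = sqrt(n^3/pi) exp(-n^3 (x - 1/n)^2), peaked at x = +1/n
    # int_{-inf}^{0} = (1/2) erfc(sqrt(n))  ->  0
    return 0.5 * erfc(sqrt(n))

def half_line_left_shifted(n):
    # same Gaussian, but peaked at x = -1/n:  int_{-inf}^{0} = 1 - (1/2) erfc(sqrt(n))  ->  1
    return 1.0 - 0.5 * erfc(sqrt(n))

for n in (1, 4, 16, 64):
    print(n, half_line_symmetric(n), half_line_right_shifted(n), half_line_left_shifted(n))
[/code]

The three columns tend to 1/2, 0 and 1 respectively, so the limit really does depend on which sequence is used.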

Can a rigorous justification and answer be given for this?

Thanks in advance.
 
  • #2
maverick280857 said:
Is the expression [itex]\int_{-\infty}^{0}\delta(t) dt = \frac{1}{2}[/itex] correct? ... Can a rigorous justification and answer be given for this?

I think you are right; defined in that way, without specifying the symmetry of the delta function, H(0) could have any value between 0 and 1.
The ambiguity is also indicated here:
http://en.wikipedia.org/wiki/Heaviside_step_function
 
  • #3
lightarrow said:
I think you are right; defined in that way, without specifing the symmetry of the delta function, H(0) could have any value between 0 and 1.
The ambiguity is also indicated here:
http://en.wikipedia.org/wiki/Heaviside_step_function

Thanks for your reply lightarrow, but can you point me to a source where this problem is discussed precisely with regard to the integration limits and different sequence definitions for the delta function, so that I could show it to my instructor.

So far, since we've only worked with even functions converging to the Dirac delta in the limit, it is not obvious to anyone that the integral does not "have" to be half; rather, it is ambiguous, since I can always construct a delta function from, say, a rectangular function of height 1/A and width A extending from x = 0 to x = A, and not necessarily from x = -A/2 to x = +A/2. For such a definition, the integral would be 0 and not 1/2.
 
  • #4
maverick280857 said:
...can you point me to a source where this problem is discussed precisely with regard to the integration limits and different sequence definitions for the delta function...?

being an EE who works in signal processing, i have had many conversations (some disputed) with others regarding the meaning of the Dirac delta function (what we EEs like to call the "unit impulse function" - we also call the Heaviside function the "unit step function").

from a strict mathematical POV, the Dirac delta "function" is not really a function, but something they call a distribution, and there is supposedly some whole theory behind this. but, as far as engineers are concerned, we treat it as a function that is the limit of those "nascent" delta functions that you call [itex]\delta_n(t)[/itex]. the problem (or one of them) that the mathematicians have with this is that if two functions [itex]f(t)[/itex] and [itex]g(t)[/itex] are equal almost everywhere (everywhere except a set of isolated, infinitely thin points on the t-axis), then their integrals over the same limits must also be equal. if you set [itex]f(t)[/itex] to [itex]\delta(t)[/itex] and [itex]g(t)[/itex] to 0, you will see that they agree almost everywhere, yet the integral (from some negative t to some other positive t) of [itex]\delta(t)[/itex] is 1 while the integral of [itex]g(t)[/itex] is 0. so there is, from a pure mathematical POV, a problem.
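
A compact way to state that objection (a standard fact about the Lebesgue integral, added here only for reference): if two integrable functions agree almost everywhere, their integrals agree,

[tex]f = g \ \text{a.e.} \quad\Longrightarrow\quad \int_{a}^{b} f(t)\,dt = \int_{a}^{b} g(t)\,dt ,[/tex]

whereas [itex]\delta(t) = 0[/itex] for every [itex]t \neq 0[/itex] and yet [itex]\int_{-\epsilon}^{\epsilon}\delta(t)\,dt = 1 \neq 0[/itex]; so no ordinary (Lebesgue-integrable) function can have all the properties demanded of [itex]\delta[/itex].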

from my POV (not as anal-retentive as this distribution or generalized function theory is), i resolve the problem by simply letting [itex]\delta(t)[/itex] be one of those nascent [itex]\delta_n(t)[/itex], say the rectangular function whose width is one Planck time. that's a legit function for the mathematicians, and it's close enough to the zero-width [itex]\delta(t)[/itex] that it would make no physical difference in any physical situation. if the Planck time is not narrow enough, make it a half or a tenth of a Planck time.
 
  • #5
one of the reasons that i prefer the symmetric definition of the Dirac delta is so that we can equate the integral of it to the step function:

[tex] \int_{-\infty}^{t} \delta(u) du = H(t) [/tex]

and also define the step function in terms of the sign or signum function:

[tex] \frac{1}{2}(1 + \mathrm{sgn}(t)) = H(t) [/tex]

these simple definitions sometimes make our lives easier in signal processing.
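
A minimal check of these conventions (illustrative only; assumes NumPy, whose heaviside takes the value at 0 as its second argument):

[code]
import numpy as np

t = np.array([-2.0, -1e-9, 0.0, 1e-9, 2.0])
H_from_sgn = 0.5 * (1.0 + np.sign(t))    # sgn(0) = 0, so H(0) = 1/2
H_numpy    = np.heaviside(t, 0.5)        # second argument = value assigned at 0

print(H_from_sgn)                        # -> [0. 0. 0.5 1. 1.]
print(np.allclose(H_from_sgn, H_numpy))  # -> True
[/code]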
 
  • #6
rbj said:
...if you set [itex]f(t)[/itex] to [itex]\delta(t)[/itex] and [itex]g(t)[/itex] to 0, you will see that they agree almost everywhere...


does anyone know why my [itex]g(t)[/itex] is rendered "g(i)" rather than g(t) by LaTeX? i clearly put a "t" in the equation.
 
  • #7
I agree with you rbj, but when you talk of [itex]\delta(x)[/itex], the Dirac delta distribution itself, the ambiguity at x = 0 cannot be avoided. Working with the sequence of functions [itex]\delta_{n}(x)[/itex], each member of which is well behaved, there is no such problem. Somehow, I've always had a problem reconciling the definitions...I've always believed in the distribution-theory version and not the 'practical' way out of defining the integral to be half just because the Dirac delta is even. As you rightly pointed out, it's not a function anyway.

PS--I am aware of the EE definitions, and it was in a course on signals and systems that I first came across this conceptual difficulty, when a discussion with some mathematics and EE professors led to the conclusion that the said integral has no meaning whatsoever. So while one can define the Heaviside step function to be equal to 1/2 at x = 0, the corresponding integral of [itex]\delta(x)[/itex] from [itex]x = -\infty[/itex] to [itex]x = 0[/itex] cannot be unambiguously defined. I am looking for a rigorous reference for this. The distribution theory textbooks I have looked at proceed step by step to derive the standard properties of such distributions, but do not discuss such weird situations.
 
  • #8
rbj said:
one of the reasons that i prefer the symmetric definition of the Dirac delta is so that we can equate the integral of it to the step function:

[tex] \int_{-\infty}^{t} \delta(u) du = H(t) [/tex]

and also define the step function in terms of the sign or signum function:

[tex] \frac{1}{2}(1 + \mathrm{sgn}(t)) = H(t) [/tex]

these simple definitions sometimes makes our lives easier in signal processing.

Agreed, but then again the extreme limits of a Riemann integral from a to b are really x = a+ to x = b-, irrespective of how you partition the set. So that's why I said it creates a problem...perhaps no computational issues will arise, but conceptually this doesn't seem rigorous enough to me. To cite an example, what would you compute

[tex]\int_{-\infty}^{0}dx (x^2 -4) \delta(x)[/tex]

as? I would write it as zero, because the point x = 0 is excluded in [itex](-\infty,0)[/itex]. But if you assume 0 here means 0+, then the answer would be -4. This is too simple an example and so perhaps I need to cook up something better :-p
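
For reference, the three readings of the upper limit give three different answers for this example (the midpoint value -2 is the one rbj gives below):

[tex]\int_{-\infty}^{0^{-}}(x^2-4)\,\delta(x)\,dx = 0 ,\qquad \int_{-\infty}^{0^{+}}(x^2-4)\,\delta(x)\,dx = -4 ,\qquad \tfrac{1}{2}\bigl[\,0+(-4)\,\bigr] = -2 \ \text{(midpoint convention)} .[/tex]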
 
  • #9
I believe the problem at hand deals with the fact that the engineers aren't always as rigorous as the physicists (who aren't always as rigorous as the mathematicians). At the fundamental level, I believe in the ambiguity of H(0). However, in engineering practice, it may sometimes be convenient to say H(0)=1/2 without questioning it further.

I believe this "definition" comes from the Dirichlet theorem of Fourier analysis (which forms the basis of signals and systems). According to the Dirichlet theorem, the Fourier series of a signal converges to the midpoint at jump discontinuities. Therefore, if H'(x) is the Fourier expansion of H(x), then H'(0)=1/2. Conveniently (see: sloppily), the engineers just say that this implies H(0)=1/2. But hey, if it makes my cell phone work, who am I to complain? :tongue2:
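
A small numerical illustration of that Dirichlet-theorem origin of H(0) = 1/2 (illustrative only, assuming NumPy; the square wave sgn(t) on (-pi, pi) stands in for the jump of H):

[code]
import numpy as np

def sgn_partial_sum(t, n_terms):
    # Fourier series of sgn(t), period 2*pi: (4/pi) * sum_k sin((2k+1)t)/(2k+1)
    k = np.arange(n_terms)
    return (4.0 / np.pi) * np.sum(np.sin((2 * k + 1) * t) / (2 * k + 1))

for n_terms in (10, 100, 1000):
    at_jump = sgn_partial_sum(0.0, n_terms)   # every term vanishes, so the sum is exactly 0
    nearby  = sgn_partial_sum(0.3, n_terms)   # converges to +1 away from the jump
    print(n_terms, 0.5 * (1 + at_jump), 0.5 * (1 + nearby))
[/code]

The partial sums at the jump always give the midpoint, so the Fourier-reconstructed step takes the value 1/2 at t = 0 no matter how H(0) itself was defined.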

Note, a while back I ran across a journal article that dealt with some subtleties of the integral definition of the delta function. Sadly I didn't really read it in-depth, but it may be worth it to look into again. It was by David Griffiths in the American Journal of Physics.
 
  • #10
maverick280857 said:
...To cite an example, what would you compute [itex]\int_{-\infty}^{0}(x^2 - 4)\,\delta(x)\,dx[/itex] as? I would write it as zero... But if you assume 0 here means 0+, then the answer would be -4.

and if we used the midpoint definition, then the answer is -2 and we can say that, in general


[tex]\int_{-\infty}^{0} f(x) \delta(x) dx \ + \ \int_{0}^{+\infty} f(x) \delta(x) dx = \int_{-\infty}^{+\infty} f(x) \delta(x) dx = f(0) [/tex]

at least in the electrical engineering and signal processing context (can't say diddley about QM), life is much easier with the (even) symmetrical [itex]\delta(x)[/itex] definition.


cmos said:
I believe the problem at hand deals with the fact that the engineers aren't always as rigorous as the physicists (who aren't always as rigorous as the mathematicians).

it's true, regarding the Dirac delta function (and maybe the subtle differences between Riemann and Lebesgue integration). don't know if it's true about other stuff. we try to be rigorous.

At the fundamental level, I believe in the ambiguity of H(0). However, in engineering practice, it may be sometimes convenient to say H(0)=1/2 without questioning it further.

I believe this "definition" comes from the Dirichlet theorem of Fourier analysis (which forms the basis of signals and systems). According to the Dirichlet theorem, the Fourier series of a signal converges to the midpoint at jump discontinuities. Therefore, if H'(x) is the Fourier expansion of H(x), then H'(0)=1/2. Conveniently (see: sloppily), the engineers just say that this implies H(0)=1/2.

sure, you're right. but, at least this engineer says that real Dirac delta functions don't really exist in physical reality and these functions are useful to deal with impulsive-like physical events (like elastic collisions of really hard objects, or what happens when you connect an uncharged capacitor to a well-regulated voltage source).

i, personally, have not found the strict definition and treatment of the Dirac delta to be useful, and this has had practical implications. i have no trouble with certain expressions where the Dirac delta lives outside of an integral, although i recognize that, eventually, it needs to find itself inside an integral in order to really do something with it. e.g., in the Nyquist-Shannon Sampling and Reconstruction Theorem:

[tex] \sum_{k=-\infty}^{+\infty} \delta(t-k) = \sum_{n=-\infty}^{+\infty} e^{i 2 \pi n t} [/tex]

i use that in our (perhaps sloppy) derivation of the results of the Sampling Theorem and I've had anal-retentive mathematicians tell me the above equation is meaningless and cannot be used in any derivation. i beg to differ.
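
A quick numerical sketch of what the truncated right-hand side of that identity looks like (illustrative only, assuming NumPy; N is an arbitrary truncation order):

[code]
import numpy as np

N = 200                          # illustrative truncation order
n = np.arange(-N, N + 1)

def partial_sum(t):
    """Truncated right-hand side: sum over |n| <= N of exp(i 2 pi n t)."""
    return np.exp(1j * 2 * np.pi * n * t).sum()

print(partial_sum(0.0).real)     # 2N+1 = 401: a tall, narrow spike at each integer
print(abs(partial_sum(0.37)))    # O(1) away from the integers

# Averaged over one period only the n = 0 term survives, so each spike carries
# unit area, just as the impulse train sum_k delta(t - k) should.
t = np.linspace(0.0, 1.0, 4096, endpoint=False)
vals = np.exp(1j * 2 * np.pi * np.outer(t, n)).sum(axis=1)
print(vals.mean().real)          # -> 1.0
[/code]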
 
  • #11
rbj said:
and if we used the midpoint definition, then the answer is -2 and we can say that, in general


[tex]\int_{-\infty}^{0} f(x) \delta(x) dx \ + \ \int_{0}^{+\infty} f(x) \delta(x) dx = \int_{-\infty}^{+\infty} f(x) \delta(x) dx = f(0) [/tex]

I would write this as

[tex]\int_{-\infty}^{0^-} f(x) \delta(x)\, dx \ + \int_{0^-}^{0^+} f(x)\delta(x)\, dx + \int_{0^+}^{+\infty} f(x) \delta(x)\, dx = \int_{-\infty}^{+\infty} f(x) \delta(x)\, dx = f(0) [/tex]

The first and third terms in the leftmost expression would then be zero, and the contribution would come only from the integral over [itex](0^-,0^+)[/itex].
 
  • #12
maverick280857 said:
...The first and third terms in the leftmost expression would then be zero, and the contribution would come only from the integral over [itex](0^-,0^+)[/itex].

yeah, but what does it gain you? you introduce another extraneous notation. i know we engineers see it in our first introduction to the Laplace Transform ("0-", which is another way of writing [itex]\lim_{\epsilon \to 0^+}(0 - \epsilon)[/itex]), and it's for this very same reason: so we make sure we include all of the Dirac impulse, no matter how it's defined. but it's not necessary if the Laplace Transform is defined as the double-sided L.T.:

[tex] X(s) \ \equiv \ \mathcal{L}\{x(t)\} \ = \int_{-\infty}^{+\infty} x(t) e^{-st} dt [/tex]

as is the Fourier Transform.

not having to restrict the limits to 0- and 0+ can be convenient at times when setting up a problem.
 
  • #13
Ok, let me restate my question, since so far all our discussions have centered around convenience as a key idea in these definitions.

What is the value of the following integrals

[tex]\int_{0}^{\infty}\delta(x)\,dx[/tex]

[tex]\int_{-\infty}^{0}\delta(x)\,dx[/tex]

I am looking for a mathematically rigorous argument that can justify whether the integrals are 0 or 1/2. This should preferably be from distribution theory.

If someone can point me to a source on the internet or a book where precisely these issues have been dealt with and such integrals are explicitly listed with a sufficiently rigorous and non-handwaived justification, I would be very grateful. Thanks!
 
  • #14
maverick280857 said:
What is the value of the following integrals

[tex]\int_{0}^{\infty}\delta(x)\,dx \qquad\text{and}\qquad \int_{-\infty}^{0}\delta(x)\,dx\;?[/tex]

I am looking for a mathematically rigorous argument that can justify whether the integrals are 0 or 1/2.

I could be totally wrong, but I don't see how it is possible to justify that those integrals are 0 or 1/2.

The definition of the delta function, at least for what I know, is:

[tex]1.\ \int_{-\infty}^{\infty}\delta(x)dx\ =\ 1[/tex]

[tex]2.\ \int_{-\infty}^{\infty}\delta(x) f(x)dx\ =\ f(0)[/tex]

So a delta function defined, as in your previous example, as the limit of a rectangle lying entirely at positive x, or entirely at negative x, or constructed in some other non-symmetric way, still satisfies that definition, and so

[tex]\int_{0}^{\infty}\delta(x)dx[/tex]

or

[tex]\int_{-\infty}^{0}\delta(x)dx[/tex]

cannot have a unique value but depends on how you constructed the delta function.
 
  • #15
maverick280857 said:
What is the value of the following integrals

[tex]\int_{0}^{\infty}\delta(x)\,dx \qquad\text{and}\qquad \int_{-\infty}^{0}\delta(x)\,dx\;?[/tex]

I am looking for a mathematically rigorous argument that can justify whether the integrals are 0 or 1/2. This should preferably be from distribution theory.
The proof goes like:
We have previously made the definition
[tex]\int_{-\infty}^{0}\delta(x)dx = 1/2[/tex]
Therefore, the value of
[tex]\int_{-\infty}^{0}\delta(x)dx[/tex]
is 1/2.​

The clearest approach to this is probably purely algebraic. We have a linear functional [itex]\int_{-\infty}^0[/itex] which has already been defined on the set of test functions. We have simply built a new functional (which we denote by the same symbol) that extends this one to (some) distributions, by specifying its value at a particular point (i.e. at [itex]\delta[/itex]). Really, the only thing there is to check is that this new functional has the properties we desire.
 
  • #16
Hurkyl said:
The proof goes like:
We have previously made the definition
[tex]\int_{-\infty}^{0}\delta(x)dx = 1/2[/tex]​


So we can't prove that equality; we have to define it. Is that what you are saying?
 
  • #17
I don't think you can give a rigorous meaning to the integral. The reason is that in the rigorous approach you have to work with the delta functional defined as:

[tex]\delta[f]= f(0)[/tex]

where f is an arbitrary infinitely differentiable function that is equal to zero outside some compact set (if I remember correctly).

The integral would correspond to applying the delta distribution to a test function which is equal to 1 for negative x and equal to zero for x>=0. But such a test function is not infinitely differentiable so it is not a legal test function.

If I remember correctly, the fact that you can define distributions, which are, sort of, very wildly behaved functions, is due to the fact that the set of test functions is so well behaved. There is a duality here, as you can view the test functions as functionals on the set of distributions. The more well behaved one set is, the less well behaved the other can be.
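
A toy sketch of this functional viewpoint (illustrative only, plain Python; the helper names are made up for the example):

[code]
import math

def delta(test_function):
    """The delta distribution as a functional: delta[f] = f(0)."""
    return test_function(0.0)

def bump(x, a=1.0):
    # a smooth, compactly supported test function on (-a, a); the value of a is arbitrary
    if abs(x) >= a:
        return 0.0
    return math.exp(-1.0 / (1.0 - (x / a) ** 2))

print(delta(bump))   # bump(0) = exp(-1) ~ 0.368: a legitimate pairing

# The "integral up to 0" would mean pairing delta with the indicator of
# (-inf, 0), which is discontinuous at 0 and hence not a valid test function.
# The evaluation below is formally computable but has no distributional
# meaning -- and flips from 0 to 1 if "x < 0" is changed to "x <= 0".
indicator = lambda x: 1.0 if x < 0 else 0.0
print(delta(indicator))
[/code]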
 
  • #18
maverick280857 said:
What is the value of the following integrals

[tex]\int_{0}^{\infty}\delta(x)\,dx \qquad\text{and}\qquad \int_{-\infty}^{0}\delta(x)\,dx\;?[/tex]

I am looking for a mathematically rigorous argument that can justify whether the integrals are 0 or 1/2.


Both integrals, because of the evenness or parity invariance of [itex]\delta(x)[/itex], are equal to 1/2 -- [itex]\delta(x) = \delta(-x)[/itex]. And, of course, the sum of the two is 1.

To make this a bit more rigorous, let's recall that the delta function is a distribution or, equivalently, a generalized function -- see Lighthill's classic Fourier Analysis and Generalized Functions, which is elegant, understandable and rigorous, and can be read (easily) by undergraduates. The basic idea can be explained as follows:

Let G(x,s) be a normalized Gaussian with zero mean and standard deviation s, with x any real number. To maintain normalization as s becomes smaller and smaller, the value of G(0,s) becomes bigger and bigger. Clearly, as s -> 0, G becomes infinite, and we have a function that is impossible -- it is non-zero only on a set of measure 0 -- so best to go some place else.

On the other hand, [itex]\lim_{s \to 0} \int_{-\infty}^{\infty} G(x,s)\, f(x)\, dx = f(0)[/itex].

The same prescription applies to

[tex]\int_{0}^{\infty}\delta(x)\,dx \qquad\text{and}\qquad \int_{-\infty}^{0}\delta(x)\,dx .[/tex]

That is, the sequence of operations is: integrate, then take the limit as s -> 0. But taking the limit and then integrating does not work well. The use of a proper sequence for the delta function guarantees that the integral of a delta function over the half line is 1/2. The use of the Gaussian provides all the rigor you will ever need -- see Lighthill for more details.
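
Spelling this prescription out for the normalized Gaussian (a short check, using only the symmetry of G(x,s) about x = 0):

[tex]\int_{-\infty}^{0} G(x,s)\,dx = \int_{-\infty}^{0}\frac{1}{s\sqrt{2\pi}}\;e^{-x^{2}/2s^{2}}\,dx = \frac{1}{2}\qquad\text{for every } s>0 ,[/tex]

so integrating first and then letting [itex]s \to 0[/itex] gives 1/2 for this particular (even) sequence.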

There's a related issue that often pops up in QFT, and that is the integral

[tex]\int_{0}^{\infty}e^{ikx}\,dx[/tex]

-- like "half" a delta function. In fact this integral is

[tex]\int_{0}^{\infty}e^{ikx}\,dx \;=\; \pi\,\delta(k)\;+\;i\,\mathrm{P}\,\frac{1}{k},[/tex]

where P stands for Principal Part (note that [itex]\pi\delta(k)[/itex] is half of the [itex]2\pi\delta(k)[/itex] one gets from the full-line integral). You can find this discussed in many QFT or Quantum Optics books, when they deal with (anti)commutators of quantum fields.

Note also that much of this argument belongs to the domain of Hilbert transforms, dispersion theory, and work with causal signals in EE filters -- often done with complex integration.

So, integrate first, and then take the limits.

Regards,
Reilly Atkinson
 
  • #19
For the sake of precision, I feel the urge to point out that none of what reilly says can be derived from the definition of [itex]\delta[/itex] as a distribution on R -- it is just an example of how one can go about defining more general extensions. (Though it is surely a useful extension)

In particular, there are many sequences [itex]\delta_n[/itex] that converge to [itex]\delta[/itex], and only a few of them have the property that [itex]\int_{-\infty}^0 \delta_n \rightarrow 1/2[/itex].





And for the sake of mentioning other directions, if you take the approach of measure theory, you would find that

[tex]\int_{(-\infty, 0)} \delta = 0[/tex]

[tex]\int_{(-\infty, 0]} \delta = 1[/tex]
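
Here [itex]\delta[/itex] is read as the Dirac point mass at 0, [itex]\delta_0(A) = 1[/itex] if [itex]0 \in A[/itex] and [itex]\delta_0(A) = 0[/itex] otherwise, so the two values simply record whether the point 0 belongs to the set being integrated over:

[tex]\int_{(-\infty, 0)} d\delta_0 = \delta_0\bigl((-\infty,0)\bigr) = 0 ,\qquad \int_{(-\infty, 0]} d\delta_0 = \delta_0\bigl((-\infty,0]\bigr) = 1 .[/tex]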
 
  • #20
Since this thread seems to have gone inactive in the last several days, I would like to pose a related question. Let f(x) be a well-behaved function (everywhere continuous and differentiable); furthermore, let a<0<b. What should we then make of:
[tex]\int^{b}_{a}\frac{df}{dx}\Theta(x)dx[/tex]
At first glance, I was tempted to say f(b)-f(0). This, however, is subject to the ambiguity at x=0.
 
  • #21
Hurkyl said:
...there are many sequences [itex]\delta_n[/itex] that converge to [itex]\delta[/itex], and only a few of them have the property that [itex]\int_{-\infty}^0 \delta_n \rightarrow 1/2[/itex]. ... if you take the approach of measure theory, you would find that [itex]\int_{(-\infty, 0)} \delta = 0[/itex] and [itex]\int_{(-\infty, 0]} \delta = 1[/itex].



But the [itex]\delta[/itex] function is even, so both half-infinite integrals must be 1/2. Lighthill's book defines a generalized function in terms of equivalence classes of sequences of so-called good functions, like [itex]\sqrt{n/\pi}\;e^{-nx^{2}}[/itex], which as [itex]n \to \infty[/itex] inside an integral clearly becomes a [itex]\delta[/itex] function. That being said, Lighthill's equivalence-class approach says all sequences that converge generally to a delta function, say at x=0, must, in the limit, be even. And so many standard operations with delta functions -- changes of variables, for example -- require the [itex]\delta[/itex] to be even about its argument.

Clearly the integrals of [itex]\sqrt{n/\pi}\;e^{-nx^{2}}[/itex] over half-infinite intervals equal 1/2, so the generalized limit is 1/2 for the integrals in question.

Lighthill does what you say can't be done -- further, his book is in many ways Laurent Schwartz for the practical man.

In the middle, for the physicist who worries a bit about mathematical rigor, is Zemanian's Distribution Theory and Transform Analysis, which works with Lebesgue integration and linear functionals, and gives a more detailed discussion of function classes, distribution classes and convergence. He ends up with practical material like Fourier and Laplace transforms, solutions of DEs, and gives an interesting view of causality -- with an approach usually discussed by EEs. And, by the way, his delta function, which he calls a delta functional, is even.

Lighthill's book was published by Cambridge University Press in 1960. It's a very superior book. Zemanian's book is published by Dover.


If possible, could you please give an example of a sequence for a delta function that does not yield half-infinite integrals equal to 1/2?

Thanks and regards,
Reilly
 
  • #22
What about the sequence

[tex]\delta_n(x) = \sqrt{\frac{n^{3}}{\pi}}\; e^{-n^{3}\,(x - 1/n)^{2}}\;?[/tex]
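
Carrying out the substitution [itex]u = n^{3/2}(x - 1/n)[/itex] shows that this sequence does the job:

[tex]\int_{-\infty}^{0}\sqrt{\frac{n^{3}}{\pi}}\;e^{-n^{3}(x-1/n)^{2}}\,dx = \frac{1}{\sqrt{\pi}}\int_{-\infty}^{-\sqrt{n}}e^{-u^{2}}\,du = \tfrac{1}{2}\,\mathrm{erfc}\bigl(\sqrt{n}\bigr)\;\longrightarrow\;0 ,[/tex]

while the integral over the whole line is 1 for every n, so the sequence converges to [itex]\delta[/itex] even though its half-line integrals do not tend to 1/2.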
 
  • #23
reilly said:
If possible, could you please give an example of a sequence for a delta function that does not yield the half infinite integrals = 1/2.
The simplest example I can think of is

[tex]\delta_n(x) =
\begin{cases} n & x \in [0, 1/n] \\ 0 & x \notin [0, 1/n] \end{cases}[/tex]

For any test function f, we have:

[tex]\lim_{n \rightarrow +\infty} \int_{-\infty}^{+\infty} \delta_n(x) f(x) \, dx = f(0)[/tex]

[tex]\lim_{n \rightarrow +\infty} \int_0^{+\infty} \delta_n(x) f(x) \, dx = f(0)[/tex]

[tex]\lim_{n \rightarrow +\infty} \int_{-\infty}^0 \delta_n(x) f(x) \, dx = 0[/tex]
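
A one-line justification of these three limits, for any continuous test function f, using the mean value theorem for integrals:

[tex]\int_{0}^{+\infty}\delta_n(x) f(x)\,dx = n\int_{0}^{1/n} f(x)\,dx = f(\xi_n)\qquad\text{for some } \xi_n\in[0,\,1/n] ,[/tex]

which tends to f(0) as [itex]n \rightarrow +\infty[/itex]; the integrand vanishes identically on [itex](-\infty,0)[/itex], so that piece contributes 0 for every n, and the full-line integral is the sum of the two.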
 
  • #24
The point is, starting from a theory of distributions on R, you can't derive a theory of distributions on half-infinite intervals. While you could attempt to pin down the sources of ambiguity and make choices to resolve them, the effort would be so great that it would surely not be worthwhile -- it would be better either to start with a theory that is more appropriate (e.g. to begin with some sort of theory of local distributions, or jump ship and use nonstandard analysis), or simply to make ad-hoc definitions along the way to meet your needs.
 
  • #25
Thank you everyone, this has turned out to be quite an interesting discussion. Please keep it going. I logged into PF after a long gap today, and was pleasantly surprised to see this post right on top :approve:
 
  • #26
Typically you want to link the delta function and its derivatives to the differential operators,

[tex]
\begin{aligned}
&~~\delta(t)&\ast~~& f(t) &= ~~&~~f(t) \\ \\
&~\frac{\partial \delta(t)}{\partial t}&\ast~~ &f(t) &= ~~&~\frac{\partial f(t)}{\partial t} \\ \\
&\frac{\partial^2 \delta(t)}{\partial t^2}&\ast~~& f(t) &= ~~&\frac{\partial^2 f(t)}{\partial t^2} \\ \\
\end{aligned}
[/tex]

for consistent operations, where [itex]\ast[/itex] denotes convolution. This is a symmetric definition of [itex]\delta(x)[/itex],
and corresponds with 1/2 for the half-space integral.
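
For completeness, the second identity follows from the defining property of [itex]\delta'[/itex] (a standard computation, sketched here; the first and third work the same way):

[tex]\left(\frac{\partial \delta(t)}{\partial t}\ast f\right)\!(t) = \int_{-\infty}^{+\infty}\delta'(u)\,f(t-u)\,du = -\left.\frac{\partial}{\partial u}f(t-u)\right|_{u=0} = \frac{\partial f(t)}{\partial t} .[/tex]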

Note that higher order derivatives of the delta function occur in physical propagators
as soon as you work in more than three dimensions. See for instance:
http://physics-quest.org/Higher_dimensional_EM_radiation.pdf
Regards, Hans
 
  • #27
Hurkyl said:
The point is, starting from a theory of distributions on R, you can't derive a theory of distributions on half-infinite intervals. ...

Sorry that it's taken so long for me to reply.

I might be missing something in your discussion of half-infinite distributions. I say that because in the Zemanian book -- mentioned above -- there's a great deal of material on distributions defined on a half interval, some of which involves Laplace transforms.

The standard delta function of physics -- like the integral over all space of a plane wave -- is an even function, as pointed out by Hans. The sequence you proposed gives a function that agrees with the physics delta function for x>0, and is zero for x<0. It is not the physics delta function.

Just a note that the Cauchy Integral Theorem provides another approach, one used very often in EE and in the dispersion relations of QM.
Regards,
Reilly
 
  • #28
reilly said:
Both integrals, because of the evenness or parity invariance of [itex]\delta(x)[/itex], are equal to 1/2 -- [itex]\delta(x) = \delta(-x)[/itex].
Anyway, the fact that [itex]\delta(x) = \delta(-x)[/itex] does not imply that the delta function was constructed using a sequence of even functions [itex]\delta_n(x)[/itex]; it can be proved without using that hypothesis:

[tex]\int_{-\infty}^{+\infty}\delta(-x)f(x)\,dx\ \overset{u\,=\,-x}{=}\ -\int_{+\infty}^{-\infty}\delta(u)f(-u)\,du\ =\ \int_{-\infty}^{+\infty}\delta(u)f(-u)\,du\ =\ f(0)[/tex]

So it would seem that we could also use non-symmetric [itex]\delta_n(x)[/itex].
 
  • #29
lightarrow said:
Anyway, the fact that [itex]\delta(x) = \delta(-x)[/itex] does not imply that the delta function was constructed using a sequence of even functions...So it would seem that we could also use non-symmetric [itex]\delta_n(x)[/itex].

Spot on!

That was precisely my point when I tried to demonstrate that you can use an asymmetric (neither even nor odd) aperiodic rectangular pulse to get to the delta 'function', in which case you can't use the even symmetry property of the sequence.
 
  • #30
reilly said:
I might be missing something in your discussion of half-infinite distributions. I say that because in the Zemanium book -- mentioned above -- there's a great deal of material on distributions defined on a half interval, some of which involves Laplace transforms.
I'm not trying to say you cannot have distributions on the half-line. Indeed, the essential details of the construction of distributions work for any infinite-dimensional vector space. The point is that, unlike functions, distributions do not have a restriction map: if you're given a distribution on the real line, there is no 'good' way to turn it into a distribution on a half-line.

In particular, using just the property that
[tex]\int_{-\infty}^{+\infty} \delta(x) f(x) \, dx = f(0)[/tex]
for any test function f, you cannot derive the equation
[tex]\int_{-\infty}^{0} \delta(x) f(x) \, dx = \frac{1}{2}f(0)[/tex]
If you want to (rigorously) discuss such notions, you have to acknowledge that they are not a consequence of facts about distributions on the entire real line.


Incidentally, the fact that distributions are dual to test functions means that distributions do have a canonical extension map that lets you turn a distribution on the half-line into a distribution on the entire real line. However, test functions do not have this feature.



 
  • #31
i think i am missing a point about the delta function

if
[tex]\delta_n(x) = \frac{n}{\pi}\,\frac{1}{1+n^{2}x^{2}}[/tex]

so how can we show that
[tex]\int_{-\infty}^{\infty}\delta_n(x)\, dx = 1[/tex]
 
  • #32
Assuming you tried to say
[tex]\delta_n(x) = \frac{n}{\pi} \frac{1}{1 + n^2 x^2}[/tex]
you can easily work out that
[tex]\int_{-a}^{a} \delta_n(x) \, \mathrm{d}x = \frac{2}{\pi} \operatorname{arctan}(a n)[/tex].
This is no coincidence of course, it is the reason the ugly factor of 1/pi was added in the first place.
Of course, for [itex]a \to \infty[/itex] this converges to 1. So if we define
[tex]\int_{-\infty}^\infty \delta_n(x) \, \mathrm dx = \lim_{a \to \infty} \int_{-a}^{a} \delta_n(x) \, \mathrm{d}x = \lim_{a \to \infty} \frac{2}{\pi} \operatorname{arctan}(a n) = 1 ,[/tex]
the result follows.

One has to be careful in applying limits on integrals though, in particular
[tex]0 = \int_{-\infty}^{\infty} \lim_{n \to \infty} \delta_n(x) \, \mathrm dx \neq \lim_{n \to \infty}\left( \int_{-\infty}^\infty \delta_n(x) \, \mathrm dx \right) = 1[/tex]
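
A quick numerical cross-check of these formulas (illustrative only; assumes NumPy and SciPy):

[code]
import numpy as np
from scipy.integrate import quad

def delta_n(x, n):
    # the Lorentzian nascent delta discussed above
    return (n / np.pi) / (1.0 + (n * x) ** 2)

n = 5
for a in (0.1, 1.0, 10.0, 50.0):
    numeric, _ = quad(delta_n, -a, a, args=(n,), points=[0.0])
    exact = 2.0 / np.pi * np.arctan(a * n)
    print(a, numeric, exact)          # the two columns agree

# This particular family is even, so its half-line integral is exactly 1/2
# for every n -- unlike the one-sided examples earlier in the thread.
half, _ = quad(delta_n, -np.inf, 0.0, args=(n,))
print(half)                           # -> ~0.5
[/code]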
 
