# Integral of a delta function from -infinity to 0 or 0 to +infinity

#### maverick280857

Hello everyone

Today in my QM class, a discussion arose on the definition of the delta function using the Heaviside step function $\Theta(x)$ (= 0 for x < 0 and 1 for x > 0). Specifically,

$$\Theta(x) = \int_{-\infty}^{x}\delta(t) dt$$

which of course gives

$$\frac{d\Theta(x)}{dx} = \delta(x)$$

Some books (especially those on communications and signal analysis) define $\Theta(0) = 1/2$. However, if I set $x = 0$ in the above integral, I get

$$\int_{-\infty}^{0}\delta(t) dt = \Theta(0) = \frac{1}{2}$$

To me, this is an ambiguous result: even though it would follow if $\delta(x)$ were a "normal function", by virtue of its evenness, the point $x = 0$ is a singular point of the integrand. Besides, the usual Riemann integral implicitly assumes an open interval formed by the limits of integration, $(-\infty,0)$, and not the closed interval $(-\infty,0]$.

Now, I have the following question:

Is the expression $\int_{-\infty}^{0}\delta(t) dt = \frac{1}{2}$ correct?

If I construct a sequence of well-behaved functions (rectangular, Gaussian, or something else) $\{\delta_{n}(x)\}$ which converges to $\delta(x)$, and if the elements of this sequence are even, then indeed

$$\int_{-\infty}^{0}\delta_{n}(t) dt = \frac{1}{2}$$

But can one infer

$$\int_{-\infty}^{0}\delta(t) dt = \lim_{n \rightarrow \infty}\int_{-\infty}^{0}\delta_{n}(t) dt = \frac{1}{2}$$

from this in general? I think this depends on how the sequence is defined, and that such a result in general does not make sense, because writing 0 as the upper limit is ambiguous: does it mean 0- or 0+? (For 0- the integral is 0; for 0+ it is 1.)
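To make the ambiguity concrete, here is a small numerical sketch (my own illustration, not from any reference): an even Gaussian regularization versus a rectangle supported on $[0, 1/n]$ give different half-line integrals even though both are legitimate nascent deltas.

```python
import numpy as np

def halfline_integral(delta_n, n, x_lo=-20.0, pts=400_001):
    """Numerically approximate the integral of delta_n(.; n) over (-inf, 0],
    truncated to [x_lo, 0]."""
    x = np.linspace(x_lo, 0.0, pts)
    return np.trapz(delta_n(x, n), x)

def gauss(x, n):
    """Even nascent delta: normalized Gaussian of width ~1/n."""
    return n / np.sqrt(np.pi) * np.exp(-(n * x) ** 2)

def rect_right(x, n):
    """One-sided nascent delta: rectangle of height n on [0, 1/n]."""
    return np.where((x >= 0) & (x <= 1.0 / n), float(n), 0.0)

for n in (10, 100, 1000):
    print(n, halfline_integral(gauss, n), halfline_integral(rect_right, n))
# The Gaussian column stays at ~0.5 for every n, while the one-sided
# rectangle column is ~0: its support meets (-inf, 0] only at the point 0.
```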

Can a rigorous justification and answer be given for this?


#### lightarrow


I think you are right; defined in that way, without specifying the symmetry of the delta function, H(0) could have any value between 0 and 1.
The ambiguity is also indicated here:
http://en.wikipedia.org/wiki/Heaviside_step_function

#### maverick280857

Thanks for your reply lightarrow, but can you point me to a source where this problem is discussed precisely with regard to the integration limits and different sequence definitions for the delta function, so that I can show it to my instructor?

So far, since we've only worked with even functions converging to the Dirac delta in the limit, it is not obvious to anyone that the integral does not "have" to be half. Rather, it is ambiguous: I can always construct a delta function from, say, a rectangular function of height 1/A and width A extending from x = 0 to x = A, and not necessarily from x = -A/2 to x = +A/2. For such a definition, the integral would be 0 and not 1/2.

#### rbj

Being an EE who works in signal processing, I have had many conversations (some disputed) with others regarding the meaning of the Dirac delta function (what we EEs like to call the "unit impulse function" -- we also call the Heaviside function the "unit step function").

From a strict mathematical POV, the Dirac delta "function" is not really a function but something called a distribution, and there is a whole theory behind this. But as far as engineers are concerned, we treat it as a function that is the limit of those "nascent" delta functions that you call $\delta_n(t)$. The problem (or one of them) that mathematicians have with this is that if two functions $f(t)$ and $g(t)$ are equal almost everywhere (everywhere except a countable number of infinitely thin points on the t-axis), then their integrals over the same limits must also be equal. If you set $f(t) = \delta(t)$ and $g(t) = 0$, they agree almost everywhere, yet the integral of $\delta(t)$ (from some negative t to some positive t) is 1 while the integral of $g(t)$ is 0. So, from a pure mathematical POV, there is a problem.

From my POV (not as anal-retentive as this distribution or generalized-function theory), I resolve the problem by simply letting $\delta(t)$ be one of those nascent $\delta_n(t)$ -- say the rectangular function, one Planck time in width. That's a legit function for the mathematicians, and it's close enough to the zero-width $\delta(t)$ that it would make no physical difference in any physical situation. If the Planck time is not narrow enough, make it a half or a tenth of a Planck time.

#### rbj

One of the reasons I prefer the symmetric definition of the Dirac delta is that we can equate its integral to the step function:

$$\int_{-\infty}^{t} \delta(u) du = H(t)$$

and also define the step function in terms of the sign or signum function:

$$\frac{1}{2}(1 + \mathrm{sgn}(t)) = H(t)$$

These simple definitions sometimes make our lives easier in signal processing.

#### rbj

Does anyone know why my $g(t)$ is rendered "g(i)" rather than g(t) by LaTeX? I clearly put a "t" in the equation.

#### maverick280857

I agree with you rbj, but when you talk of $\delta(x)$, the Dirac delta distribution itself, the ambiguity at x = 0 cannot be avoided. Working with the sequence of functions $\delta_{n}(x)$, each member of which is well behaved, there is no such problem. Somehow, I've always had a problem reconciling the definitions... I've always believed in the distribution-theory version and not the 'practical' way out of defining the integral to be half just because the Dirac delta is even. As you rightly pointed out, it's not a function anyway.

PS -- I am aware of the EE definitions; it was in a course on signals and systems that I first came across this conceptual difficulty, when a discussion with some mathematics and EE professors led to the conclusion that the said integral has no meaning whatsoever. So while one can define the Heaviside step function to equal 1/2 at x = 0, the corresponding integral of $\delta(x)$ from $-\infty$ to 0 cannot be unambiguously defined. I am looking for a rigorous reference for the same. The distribution-theory textbooks I have looked at proceed step by step to derive the standard properties of such distributions but do not discuss such weird situations.


#### maverick280857

Agreed, but then again the extreme limits of a Riemann integral from a to b are really x = a+ to x = b-, irrespective of how you partition the set. That's why I said it creates a problem... perhaps no computational issues will arise, but conceptually this doesn't seem rigorous enough to me. To cite an example, what would you compute

$$\int_{-\infty}^{0}dx (x^2 -4) \delta(x)$$

as? I would write it as zero, because the point x = 0 is excluded in $(-\infty,0)$. But if you assume 0 here means 0+, then the answer would be -4. This is too simple an example, so perhaps I need to cook up something better.
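This example can be checked numerically (my own sketch, with assumed regularizations): an even Gaussian gives about -2, a rectangle supported on $[-1/n, 0]$ gives about -4, and a rectangle supported on $[0, 1/n]$ would give 0.

```python
import numpy as np

f = lambda x: x**2 - 4  # the test function from the example; f(0) = -4

def halfline_weighted(delta_n, n, x_lo=-20.0, pts=400_001):
    """Approximate the integral of delta_n(x; n) * f(x) over (-inf, 0]."""
    x = np.linspace(x_lo, 0.0, pts)
    return np.trapz(delta_n(x, n) * f(x), x)

def gauss(x, n):
    """Even regularization: half the delta's weight lies in x <= 0."""
    return n / np.sqrt(np.pi) * np.exp(-(n * x) ** 2)

def rect_left(x, n):
    """Rectangle supported on [-1/n, 0]: all the weight lies in x <= 0."""
    return np.where((x >= -1.0 / n) & (x <= 0), float(n), 0.0)

for n in (10, 100):
    print(n, halfline_weighted(gauss, n), halfline_weighted(rect_left, n))
# The even regularization tends to f(0)/2 = -2; the left-sided one to f(0) = -4.
```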

#### cmos

I believe the problem at hand deals with the fact that the engineers aren't always as rigorous as the physicists (who aren't always as rigorous as the mathematicians). At the fundamental level, I believe in the ambiguity of H(0). However, in engineering practice, it may be sometimes convenient to say H(0)=1/2 without questioning it further.

I believe this "definition" comes from the Dirichlet theorem of Fourier analysis (which forms the basis of signals and systems). According to the Dirichlet theorem, the Fourier series of a signal converges to the midpoint at jump discontinuities. Therefore, if H'(x) is the Fourier expansion of H(x), then H'(0)=1/2. Conveniently (see: sloppily), the engineers just say that this implies H(0)=1/2. But hey, if it makes my cell phone work, who am I to complain?
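The midpoint-convergence claim can be checked numerically (a quick sketch of my own, using the standard Fourier series of the 0/1 square wave, which plays the role of a periodized step):

```python
import numpy as np

def square_wave_partial_sum(x, n_terms):
    """Partial Fourier sum of the 2*pi-periodic square wave that is 0 on
    (-pi, 0) and 1 on (0, pi): 1/2 + (2/pi) * sum over odd k of sin(k x)/k."""
    s = 0.5
    for k in range(1, 2 * n_terms, 2):
        s = s + (2 / np.pi) * np.sin(k * x) / k
    return s

# Away from the jump, the sum converges to the signal's value...
print(square_wave_partial_sum(1.0, 500))
print(square_wave_partial_sum(-1.0, 500))
# ...but at the jump x = 0 every sine term vanishes, so the sum is exactly 1/2.
print(square_wave_partial_sum(0.0, 500))
```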

Note: a while back I ran across a journal article that dealt with some subtleties of the integral definition of the delta function. Sadly, I didn't read it in depth, but it may be worth looking into again. It was by David Griffiths in the American Journal of Physics.

#### rbj

> To cite an example, what would you compute
>
> $$\int_{-\infty}^{0}dx (x^2 - 4) \delta(x)$$
>
> as?
And if we used the midpoint definition, then the answer is -2, and we can say that, in general,

$$\int_{-\infty}^{0} f(x) \delta(x) dx \ + \ \int_{0}^{+\infty} f(x) \delta(x) dx = \int_{-\infty}^{+\infty} f(x) \delta(x) dx = f(0)$$

At least in the electrical engineering and signal processing context (can't say diddley about QM), life is much easier with the (even) symmetric $\delta(x)$ definition.

> I believe the problem at hand deals with the fact that the engineers aren't always as rigorous as the physicists (who aren't always as rigorous as the mathematicians).

It's true regarding the Dirac delta function (and maybe the subtle differences between Riemann and Lebesgue integration). Dunno if it's true about other stuff. We try to be rigorous.

> At the fundamental level, I believe in the ambiguity of H(0). However, in engineering practice, it may be sometimes convenient to say H(0)=1/2 without questioning it further.
>
> I believe this "definition" comes from the Dirichlet theorem of Fourier analysis (which forms the basis of signals and systems). According to the Dirichlet theorem, the Fourier series of a signal converges to the midpoint at jump discontinuities. Therefore, if H'(x) is the Fourier expansion of H(x), then H'(0)=1/2. Conveniently (see: sloppily), the engineers just say that this implies H(0)=1/2.

Sure, you're right. But at least this engineer says that real Dirac delta functions don't really exist in physical reality; these functions are useful for dealing with impulsive-like physical events (like elastic collisions of really hard objects, or what happens when you connect an uncharged capacitor to a well-regulated voltage source).

I, personally, have not found the strict definition and treatment of the Dirac delta to be useful, and this has had practical implications. I have no trouble with certain expressions where the Dirac delta lives outside an integral, although I recognize that, eventually, it needs to find itself inside an integral in order to really do something with it. E.g., in the Nyquist-Shannon sampling and reconstruction theorem:

$$\sum_{k=-\infty}^{+\infty} \delta(t-k) = \sum_{n=-\infty}^{+\infty} e^{i 2 \pi n t}$$

I use that in our (perhaps sloppy) derivation of the results of the sampling theorem, and I've had anal-retentive mathematicians tell me the above equation is meaningless and cannot be used in any derivation. I beg to differ.
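Pairing both sides of that identity with a Gaussian test function $f(t) = e^{-\pi a t^2}$ turns it into the Poisson summation formula, which can be checked numerically. This is my own sketch; the transform convention $\hat{f}(n) = \int f(t)\, e^{i 2\pi n t}\, dt$ matches the exponentials above.

```python
import numpy as np

a = 0.1                  # Gaussian test function f(t) = exp(-pi * a * t**2)
k = np.arange(-100, 101)

# Left side: pairing the impulse train with f picks out f at the integers.
lhs = np.sum(np.exp(-np.pi * a * k**2))

# Right side: pairing exp(i 2 pi n t) with f gives the Gaussian's transform,
# fhat(n) = a**-0.5 * exp(-pi * n**2 / a), summed over n.
rhs = np.sum(np.exp(-np.pi * k**2 / a) / np.sqrt(a))

print(lhs, rhs)  # the two sums agree to near machine precision
```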

#### maverick280857

> and if we used the midpoint definition, then the answer is -2 and we can say that, in general
>
> $$\int_{-\infty}^{0} f(x) \delta(x) dx \ + \ \int_{0}^{+\infty} f(x) \delta(x) dx = \int_{-\infty}^{+\infty} f(x) \delta(x) dx = f(0)$$
I would write this as

$$\int_{-\infty}^{0^-} f(x) \delta(x) dx \ + \int_{0^-}^{0^+} f(x)\delta(x) dx + \int_{0^+}^{+\infty} f(x) \delta(x) dx = \int_{-\infty}^{+\infty} f(x) \delta(x) dx = f(0)$$

The first and third terms in the leftmost expression would then be zero, and the contribution would come only from the integral over $(0^-,0^+)$.

#### rbj

Yeah, but what does it gain you? You introduce another extraneous notation. I know we engineers see it in our first introduction to the Laplace transform ("0-", which is shorthand for $\lim_{\epsilon \to 0^+}(0 - \epsilon)$), and it's for this very same reason: so we make sure we include all of the Dirac impulse, no matter how it's defined. But it's not necessary if the Laplace transform is defined as the two-sided L.T.:

$$X(s) \ \equiv \ \mathcal{L}\{x(t)\} \ = \int_{-\infty}^{+\infty} x(t) e^{-st} dt$$

as is the Fourier Transform.

Not having to restrict the limits to 0- and 0+ can be convenient at times when setting up a problem.

#### maverick280857

Ok, let me restate my question, since so far all our discussions have centered around convenience as a key idea in these definitions.

What is the value of the following integrals?

$$\int_{0}^{\infty}\delta(x)dx \qquad \int_{-\infty}^{0}\delta(x)dx$$

I am looking for a mathematically rigorous argument that can justify whether the integrals are 0 or 1/2. This should preferably be from distribution theory.

If someone can point me to a source on the internet or a book where precisely these issues are dealt with and such integrals are explicitly listed with a sufficiently rigorous, non-handwaved justification, I would be very grateful. Thanks!

#### lightarrow
I could be totally wrong, but I don't see how it is possible to justify that those integrals are 0 or 1/2.

The definition of the delta function, at least as far as I know, is:

$$1.\ \int_{-\infty}^{\infty}\delta(x)dx\ =\ 1$$

$$2.\ \int_{-\infty}^{\infty}\delta(x) f(x)dx\ =\ f(0)$$

So a delta function defined, as in your previous example, as the limit of a rectangle lying entirely in x > 0, or entirely in x < 0, or anything else non-symmetric, still satisfies that definition, and so

$$\int_{0}^{\infty}\delta(x)dx$$

or

$$\int_{-\infty}^{0}\delta(x)dx$$

cannot have a unique value; the value depends on how the delta function was constructed.

#### Hurkyl

Staff Emeritus
Gold Member
The proof goes like:

We have previously made the definition
$$\int_{-\infty}^{0}\delta(x)dx = \frac{1}{2}.$$
Therefore, the value of
$$\int_{-\infty}^{0}\delta(x)dx$$
is 1/2.

The clearest approach to this is probably purely algebraic. We have a linear functional $\int_{-\infty}^0$ which has already been defined on the set of test functions. We have simply built a new functional (which we denote by the same symbol) that extends this one to (some) distributions, by specifying its value at a particular point (i.e. $\delta$). Really, the only thing there is to check is that this new functional has the properties we desire.

#### lightarrow

> The proof goes like:
> We have previously made the definition
> $$\int_{-\infty}^{0}\delta(x)dx = 1/2$$

So we can't prove that equality; we have to define it. Is that what you are saying?

#### Count Iblis

I don't think you can give a rigorous meaning to the integral. The reason is that in the rigorous approach you have to work with the delta functional defined as:

$$\delta[f]= f(0)$$

where f is an arbitrary infinitely differentiable function that is equal to zero outside some compact set (if I remember correctly).

The integral would correspond to applying the delta distribution to a test function which is equal to 1 for negative x and equal to zero for x>=0. But such a test function is not infinitely differentiable so it is not a legal test function.

If I remember correctly, the fact that you can define distributions, which are, in a sense, very wildly behaved functions, is due to the fact that the set of test functions is so well behaved. There is a duality here: you can view the test functions as functionals on the set of distributions. The more well behaved one set is, the less well behaved the other can be.
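This point can be sketched concretely (my own illustration; note that a logistic ramp is not compactly supported, so these are only stand-ins for genuine test functions). The delta functional happily eats any smooth approximation to the indicator of $(-\infty, 0)$, and returns whatever value that approximation happens to take at 0:

```python
import math

def delta(f):
    """The delta distribution as a linear functional: delta[f] = f(0)."""
    return f(0.0)

def smooth_step(c, eps=1e-3):
    """A smooth ramp that is ~1 for x << 0 and ~0 for x >> 0, taking the
    value c at x = 0 (a logistic shifted so its value at 0 is exactly c)."""
    shift = eps * math.log(c / (1.0 - c))  # 1/(1 + exp(-shift/eps)) == c
    return lambda x: 1.0 / (1.0 + math.exp((x - shift) / eps))

# Each of these smooth approximations to the sharp indicator is an equally
# good candidate, yet delta assigns each a different value:
for c in (0.1, 0.25, 0.5, 0.9):
    print(delta(smooth_step(c)))
```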

#### reilly

Both integrals, because of the evenness or parity invariance of $\delta(x)$ (that is, $\delta(x) = \delta(-x)$), are equal to 1/2, and of course the sum of the two is 1.

To make this a bit more rigorous, let's recall that the delta function is a distribution or, equivalently, a generalized function -- see Lighthill's classic Fourier Analysis and Generalised Functions, which is elegant, understandable, rigorous, and can be read easily by undergraduates. The basic idea can be explained as follows. Let $G(x,s)$ be a normalized Gaussian with zero mean and standard deviation $s$, with $x$ any real number. To maintain normalization as $s$ becomes smaller and smaller, the value of $G(0,s)$ becomes bigger and bigger. Clearly, as $s \to 0$, $G$ becomes infinite, and we have a function that is impossible: it is non-zero only on a set of measure 0, so best to go some place else. On the other hand,

$$\lim_{s \to 0}\int_{-\infty}^{\infty} G(x,s) f(x) dx = f(0).$$

That is, the sequence of operations is: integrate, then take the limit as $s \to 0$. But taking the limit and then integrating does not work well. The use of the proper sequence for a delta function guarantees that the integral of a delta function over the half line is 1/2. The use of the Gaussian provides all the rigor you will ever need -- see Lighthill for more details.

There's a related issue that often pops up in QFT, and that is $\int_{0}^{\infty} e^{ikx} dk$, which is like half a delta function. In fact this integral is $\pi\delta(x) + i\,\mathrm{P}\,\frac{1}{x}$, where P stands for principal part. You can find this discussed in many QFT or quantum optics books when they deal with (anti)commutators of quantum fields. Note also that much of this argument belongs in the domain of Hilbert transforms, dispersion theory, and work with causal signals in EE filters, often done with complex integration.

So, integrate first, and then take the limits.

Regards,
Reilly Atkinson

#### Hurkyl

Staff Emeritus
Science Advisor
Gold Member

For the sake of precision, I feel the urge to point out that none of what reilly says can be derived from the definition of $\delta$ as a distribution on R -- it is just an example of how one can go about defining more general extensions. (Though it is surely a useful extension.)

In particular, there are many sequences $\delta_n$ that converge to $\delta$, and only a few of them have the property that $\int_{-\infty}^0 \delta_n \rightarrow 1/2$.

And for the sake of mentioning other directions, if you take the approach of measure theory, you would find that

$$\int_{(-\infty, 0)} \delta = 0$$

$$\int_{(-\infty, 0]} \delta = 1$$

#### cmos

Since this thread seems to have gone inactive in the last several days, I would like to pose a related question. Let f(x) be a well-behaved function (everywhere continuous and differentiable); furthermore, let a<0<b. What should we then make of:
$$\int^{b}_{a}\frac{df}{dx}\Theta(x)dx$$
At first glance, I was tempted to say f(b)-f(0). This, however, is subject to the ambiguity at x=0.
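One observation (my own sketch, with a hypothetical choice f(x) = sin x): for an ordinary integral of genuine functions, the value of $\Theta$ at the single point x = 0 carries no weight, so the result is f(b) - f(0) regardless of the choice of $\Theta(0)$.

```python
import numpy as np

f, df = np.sin, np.cos           # hypothetical smooth f, with f(0) = 0
a, b = -1.0, 2.0                 # a < 0 < b, as in the question

def integral_with_theta0(theta0):
    """Integrate df/dx * Theta(x) over [a, b] with Theta(0) set to theta0."""
    x = np.arange(-100_000, 200_001) / 1e5   # grid over [a, b] hitting 0 exactly
    theta = np.where(x > 0, 1.0, np.where(x < 0, 0.0, theta0))
    return np.trapz(df(x) * theta, x)

print("target:", f(b) - f(0.0))
for theta0 in (0.0, 0.5, 1.0):
    print(integral_with_theta0(theta0))
# All three agree with f(b) - f(0) = sin(2) up to discretization error:
# the single point x = 0 has measure zero and cannot affect the integral.
```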

#### reilly


But the $\delta$ function is even, so both half-infinite integrals must be 1/2. Lighthill's book defines a generalized function in terms of equivalence classes of sequences of so-called good functions, like $\sqrt{n/\pi}\,e^{-nx^2}$, which as $n \to \infty$ inside an integral clearly becomes a $\delta$ function. That being said, Lighthill's equivalence-class approach says that all sequences that converge generally to a delta function, say at x = 0, must, in the limit, be even. And so many standard operations with delta functions -- changes of variables, for example -- require $\delta$ to be even in its argument.

Clearly the integrals of $\sqrt{n/\pi}\,e^{-nx^2}$ over half-infinite intervals equal 1/2, so the generalized limit is 1/2 for the integrals in question.

Lighthill does what you say can't be done -- further, his book is in many ways Laurent Schwartz for the practical man.

In the middle, for the physicist who worries a bit about mathematical rigor, is Zemanian's Distribution Theory and Transform Analysis, which works with Lebesgue integration and linear functionals, and more detailed discussion of function classes and distribution classes and convergence. He ends up with practical stuff like Fourier and Laplace Transforms, solutions of DEs, and gives an interesting view of causality -- with an approach usually discussed by EEs. And, by the way, his delta function, which he calls a delta functional, is even.

If possible, could you please give an example of a sequence for a delta function that does not yield half-infinite integrals equal to 1/2?

Thanks and regards,
Reilly

#### Count Iblis

$\sqrt{n^3/\pi}\; e^{-n^3 (x - 1/n)^2}$ ?

#### Hurkyl

Staff Emeritus
Gold Member
> If possible, could you please give an example of a sequence for a delta function that does not yield the half infinite integrals = 1/2.
The simplest example I can think of is

$$\delta_n(x) = \begin{cases} n & x \in [0, 1/n] \\ 0 & x \notin [0, 1/n] \end{cases}$$

For any test function f, we have:

$$\lim_{n \rightarrow +\infty} \int_{-\infty}^{+\infty} \delta_n(x) f(x) \, dx = f(0)$$

$$\lim_{n \rightarrow +\infty} \int_0^{+\infty} \delta_n(x) f(x) \, dx = f(0)$$

$$\lim_{n \rightarrow +\infty} \int_{-\infty}^0 \delta_n(x) f(x) \, dx = 0$$
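As a numerical cross-check of this example (my own sketch), pair $\delta_n$ with a smooth test function over the whole line, the right half-line, and the left half-line:

```python
import numpy as np

def delta_n(x, n):
    """The one-sided sequence above: height n on [0, 1/n], zero elsewhere."""
    return np.where((x >= 0) & (x <= 1.0 / n), float(n), 0.0)

f = lambda x: np.cos(x) + x  # a smooth test function with f(0) = 1

def paired(lo, hi, n, pts=400_001):
    """Approximate the integral of delta_n(x; n) * f(x) over [lo, hi]."""
    x = np.linspace(lo, hi, pts)
    return np.trapz(delta_n(x, n) * f(x), x)

for n in (10, 100, 1000):
    print(n, paired(-2.0, 2.0, n), paired(0.0, 2.0, n), paired(-2.0, 0.0, n))
# The whole-line and right-half pairings both approach f(0) = 1,
# while the left-half pairing approaches 0.
```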

#### Hurkyl

Staff Emeritus
Gold Member
The point is, starting from a theory of distributions on R, you can't derive a theory of distributions on half-infinite intervals. While you could attempt to pin down the sources of ambiguity and make choices to resolve them, the effort would be so great that it would surely not be worthwhile. It would be better either to start with a theory that is more appropriate (e.g. to begin with some sort of theory of local distributions, or jump ship and use nonstandard analysis), or simply to make ad-hoc definitions along the way to meet your needs.

#### maverick280857

Thank you everyone, this has turned out to be quite an interesting discussion. Please keep it going. I logged into PF after a long gap today and was pleasantly surprised to see this thread right on top.
