# Dirac Delta function

1. Jun 20, 2013

### dreamLord

I know this probably belongs in one of the math sections, but I did not quite know where to put it, so I put it in here since I am studying Electrodynamics from Griffiths, and in the first chapter he talks about Dirac Delta function.

From what I've gathered, Dirac Delta function is 0 for x$\neq$0, and ∞ for x = 0.

Now he assumes any function f(x), and says that the product f(x)*$\delta$(x) = 0 for x$\neq$0. Fine, got that.

Now he goes on to say that the above statement can also be written as f(0)*$\delta$(x) = 0. My question is - we could also have written it as f(29.5)*$\delta$(x) = 0 for x$\neq$0, right? So then why did we choose f(0)?

2. Jun 20, 2013

### vanhees71

First of all it is very important to understand that $\delta$ is not a function but a distribution. It is defined as a linear form on an appropriate space of test functions, e.g., the infinitely differentiable functions with compact support, or the rapidly falling smooth functions (Schwartz space). Its action is defined by
$$\int_{\mathbb{R}} \mathrm{d} x f(x) \delta(x)=f(0).$$
This is not 0.

Sometimes you can simplify equations by the formal rule $f(x) \delta(x)=f(0) \delta(x)$. Strictly speaking that's not correct, because the constant function $f(0)$ does not belong to the test-function space on which the $\delta$ distribution is defined, so you cannot integrate the $\delta$ distribution against it.

3. Jun 21, 2013

### dreamLord

I'm afraid your first few lines were completely lost on me! Is there any way you can dumb it down a bit?

Also, the equation that you wrote: is this the definition of the Dirac Delta function? Or the fact that it is 0 when x is not zero, and infinity when x is 0? Which one defines it? Or are they the same thing?

4. Jun 21, 2013

### DimReg

Griffiths focuses on f(0)δ(x) because f(0) is the only value of f that matters with the Dirac delta. So basically f(0)δ(x) will behave the same way as f(x)δ(x).

This can best be understood under an integral sign, which is the only place the Dirac delta function is precisely defined. You have (edit: you can take these two properties as the definition, but the exact mathematical definition is a bit more complicated):

$\int \delta (x) dx = 1$ and $\int f(x) \delta (x) dx = f(0)$

So you can write $\int f(0) \delta (x) dx = f(0) \int \delta(x)dx = f(0) \cdot 1 = f(0)$, which is the same result as for f(x)δ(x).

On the other hand, $\int f(1) \delta(x) dx = f(1)\int \delta(x)dx = f(1)$, which is not f(0), so replacing f(x) by anything other than f(0) changes the result.
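If you want to see this numerically, here is a rough sketch (not from Griffiths: I'm using a narrow box approximation to the delta, and the test function f and the width are arbitrary choices):

```python
import numpy as np

def box_delta(x, eps):
    # Narrow box approximation to delta(x): height 1/(2*eps) on (-eps, eps)
    return np.where(np.abs(x) < eps, 1.0 / (2.0 * eps), 0.0)

def f(x):
    return np.cos(x) + x**2          # arbitrary smooth test function; f(0) = 1

x = np.linspace(-2.0, 2.0, 2_000_001)
dx = x[1] - x[0]
d = box_delta(x, 1e-3)

int_fx = np.sum(f(x) * d) * dx       # integral of f(x) * delta(x) -> close to f(0)
int_f0 = f(0) * np.sum(d) * dx       # integral of f(0) * delta(x) -> close to f(0)
int_f1 = f(1) * np.sum(d) * dx       # integral of f(1) * delta(x) -> close to f(1), NOT f(0)

print(int_fx, int_f0, int_f1)
```

The first two integrals agree, while the one built with f(1) does not, which is why f(0) is the only consistent choice.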

Last edited: Jun 21, 2013
5. Jun 21, 2013

### Jano L.

vanhees71's definition is right. The property "δ = 0 for x ≠ 0, δ = ∞ for x = 0" is just an intuitive description of a sharply peaked function, which is a valid picture of δ only in some situations. For example, it is valid for the charge density distribution of a point-like charged particle. However, when solving for the Green function of the Schroedinger equation, such a description of δ is incorrect, while the integral property above remains valid.

6. Jun 21, 2013

### DimReg

I don't get the impression that the OP is comfortable talking about distributions or Green's functions. If he were, I doubt he would be having trouble with the dirac delta function.

7. Jun 21, 2013

### vanhees71

The point is that many introductory physics books confuse their readers with imprecise definitions of what a distribution is. Griffiths seems to be another example. I don't know his E&M book very well apart from discussions here in the forum.

Objects like the Dirac $\delta$ are so-called distributions. They are defined as mappings from a function space (containing a certain set of functions, called test functions) to the (real or complex) numbers. They can only be defined in a manner that makes sense under an integral, where they are multiplied with a test function, and for the Dirac $\delta$ distribution this definition reads
$$\int_{\mathbb{R}} \mathrm{d} x \delta(x) f(x)=f(0).$$
It's the value of the test function at the argument 0.

It is quite obvious that $\delta(x)$ cannot be a function in the usual sense, because you won't find any function with the above given property. However you can define the $\delta$ distribution as a kind of limit, the so-called weak limit. The idea is to define functions which are sharply peaked around 0 with the integral normalized to 1. The simplest example is the "box function",
$$\delta_{\epsilon}(x)=\begin{cases} 1/(2 \epsilon) & \text{for} \quad x \in (-\epsilon,\epsilon) \\ 0 & \text{elsewhere}. \end{cases}$$
The test functions should have "nice properties" to make things convenient. They should still form a vector space of functions, i.e., for any two functions in the space, their sum and any constant multiple should also belong to it. A very convenient choice is Schwartz's space of rapidly falling smooth functions, i.e., functions that are arbitrarily many times differentiable and fall off at infinity faster than any polynomial.

Now we check the integral
$$I_{\epsilon}=\int_{\mathbb{R}} \mathrm{d} x \delta_{\epsilon}(x) f(x) = \frac{1}{2 \epsilon} \int_{-\epsilon}^{\epsilon} \mathrm{d} x f(x).$$
According to the mean-value theorem for integrals of continuous functions, there is a value $\xi \in [-\epsilon,\epsilon]$ such that
$$I_{\epsilon}=f(\xi).$$
Since $f$ is continuous, letting $\epsilon \rightarrow 0^+$ gives
$$\lim_{\epsilon \rightarrow 0^+} I_{\epsilon}=f(0).$$
This means that in the sense of a weak limit you may write
$$\lim_{\epsilon \rightarrow 0^+} \delta_{\epsilon}(x)=\delta(x).$$
"Weak limit" means that you have to follow the above given procedure: You first have to take an integral with a test function and then take the limit. This is the crucial point.

If you now look at what happens in this example, Griffiths' sloppy definition makes some sense, but one has to keep in mind the proper meaning in the above given sense. Obviously our functions $\delta_{\epsilon}$ are concentrated around 0 and become larger as $\epsilon$ gets smaller in the limit $\epsilon \rightarrow 0^+$. At the same time the interval where our function is different from 0 shrinks, and the construction is such that the total area under the graph (which here is a rectangle) stays constant at 1 for all $\epsilon$. In this sense you may characterize the $\delta$ distribution as Griffiths does. To avoid confusion, however, it's mandatory to learn the proper definition of distributions (also known as "generalized functions").
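To see the weak limit concretely, here is a small sketch taking $f(x)=\cos x$ as an example test function, for which the integral $I_{\epsilon}$ can be done in closed form:

```python
import math

def I_eps(eps):
    # For f(x) = cos x (an example test function), the box-function integral is exact:
    # I_eps = (1/(2*eps)) * integral_{-eps}^{eps} cos(x) dx = sin(eps)/eps
    return math.sin(eps) / eps

for eps in (1.0, 0.1, 0.01, 0.001):
    print(eps, I_eps(eps))
# the values approach f(0) = cos(0) = 1 as eps -> 0+
```

Note the order of operations: integrate against the test function first, then shrink $\epsilon$; that is exactly what "weak limit" means.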

The best book for the physicist I know of is

M. J. Lighthill, Introduction to Fourier Analysis and Generalised Functions, Cambridge University Press (1959)

That it treats distributions together with Fourier series and Fourier integrals is no disadvantage, since you'll need those anyway when studying electrodynamics.

8. Jun 21, 2013

### Fredrik

Staff Emeritus
Topology & analysis is the right place for it. I'm moving it there. Edit: I also changed "Diract" to "Dirac" in the title.

Last edited: Jun 21, 2013
9. Jun 21, 2013

### lurflurf

Yes, Griffiths' explanation is horrible; here is the idea without the technicalities.

In finite calculus we define the (Kronecker) delta so that

$$a_0=\sum_{k=-\infty}^\infty \delta_k a_k$$

That's a handy thing to do; it lets us write function evaluation as a sum.
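In code, the finite-calculus version is trivial (a sketch; the sequence $a_k$ here is an arbitrary example):

```python
def kron(k):
    # Kronecker delta: 1 when k == 0, else 0
    return 1 if k == 0 else 0

a = {-2: 7.0, -1: 3.0, 0: 42.0, 1: -1.0, 2: 9.0}   # arbitrary example sequence
a0 = sum(kron(k) * ak for k, ak in a.items())       # evaluation at 0 written as a sum
print(a0)   # 42.0, i.e. a_0
```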
We would like to do the same thing in infinitesimal calculus:

$$\mathop{f}(0)=\int_{-\infty}^\infty \! \mathop{\delta}(x) \, \mathop{f}(x) \,\mathop{dx}$$
Here we ignore that the delta function does not exist as a function.

Now we adopt as equality f = g if
$$\int_{-\infty}^\infty \! (\mathop{f}(x) - \mathop{g}(x)) \,\mathop{dx}=0$$

In this sense
$$\mathop{\delta}(x) \, \mathop{f}(x)=\mathop{\delta}(x) \, \mathop{f}(0)$$
since clearly
$$\int_{-\infty}^\infty \! (\mathop{\delta}(x) \, \mathop{f}(x) - \mathop{\delta}(x) \, \mathop{f}(0)) \,\mathop{dx}=\int_{-\infty}^\infty \! ( \mathop{\delta}(x) \, ( \mathop{f}(x) - \mathop{f}(0))) \,\mathop{dx}=( \mathop{f}(0) -\mathop{f}(0) )=0$$

For some purposes we would instead adopt as equality f = g if
$$\int_{a}^b \! (\mathop{f}(x) - \mathop{g}(x)) \,\mathop{dx}=0$$
for all a and b.

Last edited: Jun 21, 2013
10. Jun 21, 2013

### dreamLord

Things are becoming a little clearer now, though I am still fairly lost. Thank you for the amazing posts, vanhees, DimReg, Jano and lurflurf. I will need to read this thread a couple more times before I am ready to frame my doubts regarding your posts.

11. Jun 21, 2013

### lurflurf

Do you know about the Riemann–Stieltjes integral?
By convention the spike is at x = 0. Since δ(x)'s purpose is to evaluate f(x) near x = 0, it does not care what f does away from zero, much like

$$\lim_{x \rightarrow 0} \mathop{f}(x)$$

12. Jun 21, 2013

### dreamLord

No lurflurf, I do not know what that integral is.

By the way, an immediate question regarding your post (#9): how did you proceed in the second-to-last step? That is:
∫(δ(x)(f(x)−f(0)))dx=(f(0)−f(0))=0

Thanks for telling me the purpose of the delta function - I did not understand why Griffiths brought it up in the first place!

13. Jun 21, 2013

### DimReg

I showed the algebraic steps required in my first reply. Basically, f(0) is a constant, and integrals are linear, so:

$\int(\delta(x)(f(x) - f(0)))dx = \int \delta(x) f(x)dx - \int \delta(x) f(0) dx = \int \delta(x) f(x) dx - f(0) \int \delta(x) dx = f(0) - f(0) = 0$

Where in the last step I used ∫f(x)δ(x)dx = f(0) for the first term and ∫δ(x)dx = 1 for the second term.

Last edited: Jun 21, 2013
14. Jun 21, 2013

### Fredrik

Staff Emeritus
One thing that I think should be mentioned is that when $\delta$ is defined as a function that takes test functions to numbers, the definition can be written as $\delta(f)=f(0)$ for all test functions f. The notation $\delta(f)$ is far more natural than $\int \delta(x)f(x)dx$. The reason that the latter is used must be that distributions were invented to make sense of expressions like $\int \delta(x)f(x)dx$, which were already used in non-rigorous calculations.

So $\int\delta(x)f(x)dx$ isn't an integral of the product of a distribution and a function. It's just a notation that means $\delta(f)$.

For each real number x, define $\delta_x$ by $\delta_x(f)=f(x)$ for all test functions f. Define the notation $\int f(x)\delta(x-y)dx$ to mean $\delta_y(f)$. This ensures that $\int f(x)\delta(x-y)dx=f(y)$.

15. Jun 21, 2013

### dreamLord

Thanks DimReg, I understand the step now.

Fredrik ; so does that mean that if I take f(x) = 2x - 5, then δ(f) = f(0) = -5 ?
Also, in your last 2 lines, why did you change your definition from δ(f) = f(0) to δ(f) = f(x)?

By the way, thanks for moving the thread to the correct section and also for fixing the typo!

16. Jun 21, 2013

### dreamLord

You lost me in this specific paragraph. Why is epsilon approaching 0 from the + side? And if it is, how does the next equation follow?

17. Jun 21, 2013

### Fredrik

Staff Emeritus
Yes.

I didn't, I defined infinitely many new distributions, one for each real number. Only one of them ($\delta_0$) is equal to $\delta$.

18. Jun 21, 2013

### dreamLord

So it is also true that δ(f) = f(1) = -3 ? If I take f(x) = 2x - 5.

19. Jun 21, 2013

### Fredrik

Staff Emeritus
No, by my definitions $\delta(f)=\delta_0(f)=f(0)=-5$, but $\delta_1(f)=f(1)=-3$.

I don't know if anyone else uses this notation by the way. I just think it's a good way to make sense of expressions of the form $\int f(x)\delta(x-y)dx$ where y is a real number.

20. Jun 21, 2013

### Jolb

I have never seen Fredrik's notation, and I can't really make any sense of it. The Dirac delta is never equal to anything besides 0 or infinity. In fact I often use the identity
$$\delta(f(x))=\sum_{\{\tilde{x}|f(\tilde{x})=0\}}\frac{\delta(x-\tilde{x})}{\left | \frac{df}{dx}|_\tilde{x}\right|}$$

If you're having trouble reading that, it just says: to evaluate a Dirac delta whose argument is a function, find all the zeros of that function, then form a sum of Dirac deltas, one located at each zero, each divided by the absolute value of the function's derivative at that zero.

That's actually a rigorous statement that follows from the most common definition of the Dirac delta function:
$$\delta(x):=\lim_{\alpha\rightarrow\infty}\sqrt{\frac{\alpha}{\pi}}e^{-\alpha x^2}$$

This definition works better than the limit of rectangular functions, since this one is differentiable.
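Here is a rough numerical check of the composition identity using that Gaussian definition at large but finite alpha (the choices f(x) = x² − 1 and the test function g are my own arbitrary examples):

```python
import numpy as np

def gauss_delta(x, alpha):
    # Gaussian nascent delta: sqrt(alpha/pi) * exp(-alpha * x^2)
    return np.sqrt(alpha / np.pi) * np.exp(-alpha * x * x)

g = lambda x: x + 3.0            # arbitrary smooth test function
f = lambda x: x * x - 1.0        # zeros at x = +/-1, with |f'(+/-1)| = 2

x = np.linspace(-2.0, 2.0, 2_000_001)
dx = x[1] - x[0]

lhs = np.sum(g(x) * gauss_delta(f(x), 1e6)) * dx   # integral of g(x) * delta(f(x))
rhs = (g(1.0) + g(-1.0)) / 2.0                     # sum over zeros of g(x0)/|f'(x0)|
print(lhs, rhs)   # both close to 3.0
```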

Last edited: Jun 21, 2013
21. Jun 21, 2013

### dreamLord

I can't quite understand what delta-naught and delta-one are, Fredrik (apologies, I can't use LaTeX currently). Can you explain what they stand for?

Jolb ; why do we need to find the zeroes of the function? I thought the delta function was valid for all x?

I have never encountered such a vague and confusing topic in maths so far - which probably means I haven't done much, but either way, I am thoroughly confused. I'm not even sure I know why we need the delta function.

22. Jun 21, 2013

### Jolb

The reason you need to find the zeros of the function in the argument of the Dirac delta is because the Dirac delta only "fires" when its argument is zero. Whenever the Dirac delta's argument is nonzero, the Dirac delta is equal to zero, and does nothing interesting. When its argument is zero, the Dirac delta does interesting things.

To explain this, and the OP's question, in a dumbed-down way: there's a great mnemonic. Sometimes people call the Dirac delta the "sampling function." If you have any function f(x) and you want to somehow pull out its value at a point x', you can get it by "sampling" it with the Dirac delta:

f(x') = ∫δ(x-x') f(x) dx
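For instance, numerically (a sketch with a narrow box standing in for the delta; the function and the sample point are arbitrary choices of mine):

```python
import numpy as np

def box_delta(x, eps):
    # Narrow box approximation to delta(x)
    return np.where(np.abs(x) < eps, 1.0 / (2.0 * eps), 0.0)

f = np.sin           # arbitrary function to sample
x_prime = 0.7        # arbitrary sample point

x = np.linspace(-2.0, 2.0, 2_000_001)
dx = x[1] - x[0]

# f(x') ~ integral of delta(x - x') * f(x) dx
sampled = np.sum(box_delta(x - x_prime, 1e-3) * f(x)) * dx
print(sampled, np.sin(x_prime))   # both ~ 0.644
```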

23. Jun 21, 2013

### dreamLord

By argument equaling zero, you mean the function that is multiplied with it - like f(x), should be 0 right? If that is the case, then why do we have expressions like the one in post #2 by vanhees? How are they relevant? Under the integral, we don't have f(x) = 0, which it ought to be for the Dirac function to be 'interesting' as you put it.

24. Jun 21, 2013

### WannabeNewton

Most physics books at the undergrad level will thoroughly butcher the definition, and unfortunately the rigorous formulation requires some advanced mathematics (distribution theory). For now, can you at least see the physical motivation for it? Recall Griffiths' motivation: the apparently vanishing divergence of the Coulomb field at all points in space, even when there is a localized point charge which should contribute to the divergence via Gauss's law.

25. Jun 21, 2013

### dreamLord

Yes, I understood how the divergence was vanishing everywhere except at r = 0. Does that mean that the divergence of Electric Field is a Dirac Delta function?

Also, Wannabe, aren't you an undergrad ? How are you so goddamn knowledgeable!