# Taylor series expansion of Dirac delta

I'm trying to understand how the algebraic properties of the Dirac delta function might be passed on to the argument of the delta function.

One way to relate a function to its argument is to derive the Taylor series expansion of the function; then you are dealing with the argument and its powers.

Normally, the Taylor series expansion for f(x) is:
$$f(x) = \sum\limits_{n = 0}^\infty {\frac{{{f^{(n)}}({x_0})}}{{n!}} \cdot {{(x - {x_0})}^n}}$$
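As a concrete sanity check of this formula (my own illustration, not part of the thread), here it is applied to $f(x) = e^x$, whose derivatives at $x_0$ are all $e^{x_0}$:

```python
import math

# Partial Taylor sum of f(x) = e^x about x0.
# Every derivative of e^x at x0 equals e^(x0), so the general
# formula specializes to the sum of e^(x0)/n! * (x - x0)^n.
def taylor_exp(x, x0, terms):
    fx0 = math.exp(x0)
    return sum(fx0 / math.factorial(n) * (x - x0) ** n
               for n in range(terms))

approx = taylor_exp(1.0, 0.0, 20)   # partial sum with 20 terms
exact = math.exp(1.0)               # approx agrees with e to ~1e-13
```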

But how does this change for a composite function? How do you expand $f(g(x))$? Do you just replace $x$ in the sum with $g(x)$ and $x_0$ with $g(x_0)$? Or do you differentiate $f(g(x))$ with respect to $x$ in each term before setting $x = x_0$?

But I suspect the even harder question is how to take the derivatives of the Dirac delta function for each term in the sum. And if you have to evaluate them at $x_0$, won't that make each term infinite? Is there a simple expression for ${\delta ^{(n)}}({x_0})$? Or better yet, how do we evaluate ${\delta ^{(n)}}(g(x))$? Thank you.

CompuChip
Homework Helper
The mistake you are making is that the Dirac delta is not a function. It has some properties of a function and is sometimes called a generalized function (better known as a distribution). One of these properties is that you can define an n-th order derivative. But one of the things you cannot do is write down a Taylor series; see for example this old PF thread.

There are different ways to define the Dirac delta. Normally, in distribution theory, it is defined by the way it acts as an integration kernel, i.e. by the property $\int \delta(x) f(x) \, dx = f(0)$. However, you can also write it as the limit of a family of smooth functions, my favourite one being
$$\delta_a(x) = \frac{1}{a \sqrt{\pi}} e^{-x^2 / a^2}$$
such that $\delta(x) = \lim_{a \to 0} \delta_a(x)$ (with the limit understood in the distributional sense),
and of course you are free to expand $\delta_a(x)$ because it's as smooth as you could possibly wish.
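A quick numerical illustration (my own sketch, using NumPy, not from the thread) of the sifting property for this nascent delta: pairing $\delta_a$ with a smooth test function and shrinking $a$ recovers the value of that function at the origin.

```python
import numpy as np

# Nascent delta: delta_a(x) = exp(-x^2/a^2) / (a * sqrt(pi)).
def delta_a(x, a):
    return np.exp(-(x / a) ** 2) / (a * np.sqrt(np.pi))

f = np.cos                          # smooth test function, f(0) = 1
x = np.linspace(-1.0, 1.0, 200001)
dx = x[1] - x[0]

for a in (0.1, 0.01):
    # Riemann sum approximating the pairing integral
    val = np.sum(delta_a(x, a) * f(x)) * dx
    # val tends to f(0) = 1 as a -> 0
```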


I was thinking more in terms like this for the kth derivative of the delta,
$${\delta ^{(k)}}[\varphi ] = {( - 1)^k}{\varphi ^{(k)}}(0)$$
that I found in the Wikipedia article on the Dirac delta. Could this be used to construct an expansion? I don't know if the $\varphi$ here can be used as the $g(x)$ in my composition.

CompuChip
Homework Helper
Yes, this is the definition as you would get it from distribution theory, where the Dirac delta would be defined by the way it "acts on" other functions $\phi$.
This definition is what I said earlier:
$$\int \delta(x) \phi(x) dx = \phi(0)$$

To find the k-th derivative, apply integration by parts k times to
$$\int \delta^{(k)}(x) f(x) dx$$
to move the derivatives over to the test function f(x) (the boundary terms vanish because f is a test function). This gives
$$\int \delta^{(k)}(x) f(x) dx = (-1)^k \int \delta(x) f^{(k)}(x) \, dx$$
which is $(-1)^k f^{(k)}(0)$ by the definition applied to $\phi = f^{(k)}$.
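To see the $k = 1$ case numerically, one can differentiate the Gaussian nascent delta from earlier in the thread and pair it with a test function; the result approaches $(-1)^1 f'(0)$ as $a$ shrinks. Again an illustrative sketch of my own, not part of the original posts:

```python
import numpy as np

a = 0.01
x = np.linspace(-1.0, 1.0, 400001)
dx = x[1] - x[0]

# Derivative of the nascent delta delta_a(x) = exp(-x^2/a^2)/(a*sqrt(pi)),
# computed by central differences via np.gradient.
nascent = np.exp(-(x / a) ** 2) / (a * np.sqrt(np.pi))
ddelta = np.gradient(nascent, dx)

f = np.sin                          # f'(0) = 1
val = np.sum(ddelta * f(x)) * dx    # approaches (-1)^1 * f'(0) = -1
```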