# Dirac delta function. Integral

1. Jul 4, 2014

### LagrangeEuler

How to calculate
$\int^{\infty}_{-\infty}\frac{\delta(x-x')}{x-x'}dx'$
What is the value of this integral? In some YouTube video I found that it is equal to zero, the reasoning being that it is an odd function over symmetric boundaries.

2. Jul 4, 2014

### jostpuur

One possible way to make the expression precise is that you first denote as $C(\mathbb{R})$ the set of all continuous functions $f:\mathbb{R}\to\mathbb{R}$. Then you define a mapping $\Phi_x:C(\mathbb{R})\to\mathbb{R}$ by setting $\Phi_x(f)=f(x)$, and also denote

$$\int\limits_{-\infty}^{\infty} \delta(x-x')f(x')dx' := \Phi_x(f)$$

If you define $f$ with the formula $f(x') = \frac{1}{x-x'}$, then $f\notin C(\mathbb{R})$, and it makes no sense to ask what $\Phi_x(f)$ is.

It's like defining some function $\phi:\{0,1,2,3\}\to\mathbb{R}$ and then asking what $\phi(4)$ is.

Do you know yourself what definition you are using for the Dirac delta?

3. Jul 4, 2014

### pwsnafu

The Fisher product of the generalised function $\mathrm{PV}\,x^{-1}$ and the Dirac $\delta$ is equal to $-\frac{1}{2}\delta'$. This is a Schwartz distribution of compact support, so it can be applied to the test function $\phi(x)=1$, which gives the answer 0.

4. Jul 5, 2014

### LagrangeEuler

Thanks for the answer, but I really don't understand it. Can you explain it in detail, or point me to some text I can read?

5. Jul 5, 2014

### LagrangeEuler

If I understand you correctly,
$\delta \cdot x^{-1}=-\frac{1}{2}\delta'$
where $\cdot$ is some kind of product. So in my case
$\int^{\infty}_{-\infty}\frac{\delta(x)}{x}dx=-\frac{1}{2}\int^{\infty}_{-\infty}\delta'(x)dx$
and we know that the right-hand side is zero.
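For what it's worth, this conclusion can be sanity-checked symbolically. SymPy represents the Dirac delta and its distributional derivative as `DiracDelta(x)` and `DiracDelta(x, 1)`; the sketch below is just an illustration, not a substitute for the distribution-theoretic argument in this thread.

```python
# Symbolic sanity check with SymPy's DiracDelta.
# DiracDelta(x, 1) denotes the first distributional derivative delta'(x).
from sympy import DiracDelta, integrate, oo, symbols, cos

x = symbols('x')

# Sifting property: integrating delta(x) f(x) gives f(0)
sift = integrate(DiracDelta(x) * cos(x), (x, -oo, oo))
print(sift)

# The integral of delta'(x) over the whole line, matching the
# right-hand side above, is zero
deriv = integrate(DiracDelta(x, 1), (x, -oo, oo))
print(deriv)
```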

6. Jul 5, 2014

### jostpuur

It seems obvious that LagrangeEuler himself does not know what definition he is using for the Dirac delta, so the pedagogical answer should be to point out this fact and request clarification on the definition.

Nevertheless, I too would be interested to learn what the Fisher product is, so I wouldn't mind if pwsnafu showed the definition here.

7. Jul 5, 2014

### LagrangeEuler

I use Dirac delta in the form
$\int^{\infty}_{-\infty}\delta(x)f(x)dx=f(0)$
However, I cannot solve the previous integral in this way, so I am confused. Sometimes I use the fact that the Dirac delta is an even function, so $\delta(-x)=\delta(x)$. And of course sometimes I use the integral representation
$\delta(x)=\frac{1}{2\pi}\int^{\infty}_{-\infty}e^{ikx}dk$
or the differential representation
$\theta'(x)=\delta(x)$
where $\theta(x)$ is the Heaviside step function. I know several definitions, and I have stated precisely what my problem is!
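The first definition can be illustrated numerically: replace $\delta$ with a narrow Gaussian (one of many "nascent delta" families, chosen here purely for illustration) and watch the integral approach $f(0)$.

```python
# Numerical sketch: approximate delta by a narrow Gaussian
#   delta_eps(x) = exp(-x^2 / (2 eps^2)) / (eps * sqrt(2 pi))
# and check the sifting property: the integral of delta_eps(x) f(x)
# tends to f(0) as eps shrinks.
import numpy as np
from scipy.integrate import quad

def delta_eps(x, eps):
    return np.exp(-x**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

f = np.cos  # any continuous test function, with f(0) = 1

for eps in (0.5, 0.1, 0.01):
    # points=[0.0] hints the adaptive quadrature at the narrow spike
    val, _ = quad(lambda t: delta_eps(t, eps) * f(t), -10, 10, points=[0.0])
    print(eps, val)  # approaches f(0) = 1 as eps shrinks
```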

8. Jul 5, 2014

### pwsnafu

The text I use is the one by R.F. Hoskins and J. Sousa Pinto, Distributions, Ultradistributions and Other Generalised Functions, because it covers a very large number of topics in a very short space.

I'm going to give a proof sketch of Fisher's result
$$x^{-r} \cdot \delta^{(r-1)} = (-1)^{r} \frac{(r-1)!}{2(2r-1)!}\delta^{(2r-1)}(x).$$
The result is published in Proc. Camb. Phil. Soc., 72, pp. 201-204. I'm going to assume you know the basics, namely that the space of test functions $\mathcal{D}$ is a dense subset of $\mathcal{D}'$.

The (classical) Fisher product is an example of what we call sequential products. The idea is simple: for any two Schwartz distributions $\mu$ and $\nu$, we find sequences of smooth functions $\mu_n \to \mu$ and $\nu_n \to \nu$ and define $\mu \cdot \nu = \lim_{n\to\infty} \mu_n \nu_n$. This is ultimately a losing battle: the more constraints you place on your sequences, the more functions you can multiply, but the properties of your product get steadily worse (for example, no distributive law).

We start by choosing a smooth function $\rho(x)$ satisfying
1. $\rho$ is non-negative,
2. the area under the curve over the reals is 1,
3. $\rho(x) = 0$ for all $|x| \geq 1$,
4. $\rho(-x) = \rho(x)$ for all x,
5. $\rho^{(r)}(x)$ has only $r$ changes of sign for $r=1,2,3,\ldots$.
The sequence $\rho_n(x) = n\rho(nx)$ is called a symmetric model sequence, and we set $\mu_n = \mu * \rho_n$ (where $*$ is the convolution of distributions). Such a $\rho$ exists, namely the bump function
$$\rho(x) = A \exp\!\left(\frac{1}{x^2-1}\right)$$
for $|x|\leq 1$ and zero elsewhere. Here $A$ is just a normalization constant. It should be clear that $\rho_n \to \delta$.
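A quick numerical illustration of this construction (the constant $A$ is computed numerically here; this is a sketch, not part of the proof):

```python
# Numerical sketch of the symmetric model sequence rho_n(x) = n rho(nx),
# with rho the bump function A exp(1/(x^2 - 1)) supported on [-1, 1].
import numpy as np
from scipy.integrate import quad

def bump_unnorm(x):
    # un-normalized bump: smooth, non-negative, supported on [-1, 1]
    return np.exp(1.0 / (x * x - 1.0)) if abs(x) < 1 else 0.0

area, _ = quad(bump_unnorm, -1, 1)
A = 1.0 / area  # normalization so the area under rho is 1

def rho_n(x, n):
    return n * A * bump_unnorm(n * x)

# rho_n -> delta: integrating rho_n against a test function tends to f(0)
f = np.cos
for n in (1, 10, 100):
    val, _ = quad(lambda t: rho_n(t, n) * f(t), -1.0 / n, 1.0 / n)
    print(n, val)  # tends to f(0) = 1
```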

The singular distribution $x^{-r}$ is defined as
$$x^{-r} = \frac{(-1)^r}{(r-1)!}D^r \log|x|$$
and our sequence
$$(x^{-r})_n = \frac{(-1)^{r-1}}{(r-1)!} \int_{-1/n}^{1/n} \rho^{(r)}_{n}(t) \log|x-t|\, dt.$$
Now define $\mathcal{I}f(x) := \int_{-\infty}^x f(t)\, dt$; then
$$F_n(x) := \mathcal{I}^{2r-1}[(x^{-r})_n\rho^{(r-1)}_n(x)] = \frac{1}{(2r-2)!}\int_{-1/n}^{x}(t^{-r})_n\, \rho^{(r-1)}_n(t)\, (x-t)^{2r-2}\, dt.$$
It can be shown that
$$\int_{-1/n}^{1/n}(x^{-r})_n \rho_n^{(r-1)}(x)x^{m}dx = \frac{(-1)^{r+1}}{2}(r-1)!$$
when $m=2r-1$ and zero for $m=0,1,\ldots, 2r-2$.

Putting these results together, you get
$$\int_{-1/n}^{1/n}(t^{-r})_n\, \rho_n^{(r-1)}(t)\, (1/n-t)^{2r-2}\,dt = 0,$$
and using $\rho_n^{(r-1)}(x) = 0$ for $|x| \geq 1/n$, you get $\mathcal{I}^{2r-1}[(x^{-r})_n\rho_n^{(r-1)}(x)] = 0$ for $|x| \geq 1/n$ as well. This basically means that the support is converging to $\{0\}$.

Moving on: $\rho^{(r)}$ has only $r$ changes of sign, therefore $(x^{-r})_n$ has only $r$ changes of sign, and therefore the $(2r-1)$th primitive is either always non-negative or always non-positive. Finally,
$$\int_{-1/n}^{1/n}\mathcal{I}^{2r-1}[(x^{-r})_n\rho_n^{(r-1)}(x)]\,dx = (-1)^r\frac{(r-1)!}{2(2r-1)!}.$$
Hence $F_n$ converges distributionally to $(-1)^r\frac{(r-1)!}{2(2r-1)!}\,\delta(x)$. Differentiate $(2r-1)$ times to get the result. Setting $r=1$, we get $x^{-1}\cdot\delta(x) = -\frac{1}{2}\delta'(x)$ as required.

Last edited: Jul 6, 2014
9. Jul 5, 2014

### pwsnafu

It's worth pointing out that there is a shorter proof of the $r=1$ case due to Mikusinski. Let
$$u_n (x) = \rho_n(x) (x^{-1}*\rho_n(x)).$$
We define
$$I_n = \int_{-\infty}^\infty u_n(x) \, dx = \int_{-\infty}^\infty \int_{-\infty}^\infty \frac{\rho_n(x)\rho_n(t)}{x-t}dxdt,$$
$$K_n = \int_{-\infty}^\infty x\,u_n(x) \, dx = \int_{-\infty}^\infty \int_{-\infty}^\infty \frac{x \rho_n(x)\rho_n(t)}{x-t}dxdt,$$
these being understood as principal value integrals. If we swap $x$ and $t$, then $I_n$ changes sign, hence $I_n = 0$.

To find $K_n$, write $\frac{x}{x-t} = 1 + \frac{t}{x-t}$, so that
$$K_n = 1 + \int_{-\infty}^\infty \int_{-\infty}^\infty \frac{t \rho_n(x)\rho_n(t)}{x-t}dxdt,$$
and swap $x$ and $t$ in the second term (which turns it into $-K_n$) to obtain the identity $K_n = 1 - K_n$, which means $K_n = 1/2$.
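These two symmetry arguments can be checked numerically. The sketch below builds $u_n = \rho_n \cdot (\mathrm{PV}\,x^{-1} * \rho_n)$ with the bump function from the previous post as $\rho$, using SciPy's Cauchy-weight quadrature for the principal value; it is purely illustrative.

```python
# Numerical check of Mikusinski's r = 1 computation: I_n = 0, K_n = 1/2.
import numpy as np
from scipy.integrate import quad

def bump_unnorm(x):
    return np.exp(1.0 / (x * x - 1.0)) if abs(x) < 1 else 0.0

area, _ = quad(bump_unnorm, -1, 1)
A = 1.0 / area

def rho_n(x, n):
    return n * A * bump_unnorm(n * x)

def pv_conv(x, n):
    """Principal value of the integral of rho_n(t)/(x - t) over (-1/n, 1/n)."""
    a, b = -1.0 / n, 1.0 / n
    if a < x < b:
        # quad's 'cauchy' weight computes PV of f(t)/(t - wvar), hence the sign flip
        val, _ = quad(lambda t: rho_n(t, n), a, b, weight='cauchy', wvar=x)
        return -val
    val, _ = quad(lambda t: rho_n(t, n) / (x - t), a, b)
    return val

n = 2
xs = np.linspace(-1.0 / n, 1.0 / n, 801)
# u_n = rho_n * (PV(1/x) convolved with rho_n); skip points where rho_n vanishes
u = np.array([rho_n(x, n) * pv_conv(x, n) if rho_n(x, n) != 0.0 else 0.0
              for x in xs])
I_n = np.trapz(u, xs)        # should vanish: the integrand is odd
K_n = np.trapz(xs * u, xs)   # should equal 1/2
print(I_n, K_n)
```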

Lastly, define
$$F_n(x) = \int_{-\infty}^x (x-t)u_n(t) \, dt = x\int_{-\infty}^x u_n(t)\,dt - \int_{-\infty}^x t\, u_n(t)\, dt.$$
If $x<0$ then $F_n(x)$ converges to zero, while for $x>0$ it converges to $-1/2$. The work above shows that $F_n$ is bounded by constants independent of $n$.

Hence $F_n$ converges to $-\frac{1}{2}H$ where $H$ is the Heaviside step function. But $u_n = F_{n}''$ so $u_n$ converges to $-\frac{1}{2}\delta'$ as required.

As an aside, Mikusinski used this result to prove
$$\delta^2(x) - \frac{1}{\pi^2}\left(\frac{1}{x}\right)^2 = -\frac{1}{\pi^2 x^2}$$
from quantum physics.
Mikusinski's insight was to observe that the terms on the left cannot be defined individually, but the expression as a whole can be given meaning.

Last edited: Jul 5, 2014
10. Jul 6, 2014

### jostpuur

It seems that LagrangeEuler still does not know what definition he is using.

This is artificial, because there exist sequences of functions which converge to $\delta_0$ (under many possible definitions) but which are not symmetric.

11. Jul 6, 2014

### pwsnafu

Correct, but it is necessary in order to define this product. At one point the proof uses
$$f(x,t):=\rho^{(r)}_n(x)\rho^{(r)}_n(t)(x^{m+1}-t^{m+1})\log|x-t|=-f(t,x),$$
which comes from the symmetry of $\rho$.
Just because there are non-symmetric $\rho$ doesn't mean we care about them. Again: the stronger the constraints placed on $\rho$, the fewer functions $\rho$ can be, and the more distributions the product can multiply; the cost is that desirable properties of the product become harder to obtain.

NB: I'm not sure if it is necessary for this specific multiplication. I'll see what I can dig up.

Update: It appears the requirements (4) and (5) are necessary for $x^{-r}\cdot\delta^{(r-1)}$ with $r=2,3,4,\ldots$, but not for $r=1$. My copy of Hoskins and Pinto states the following:

Define a sequence of smooth functions (chosen from $\mathcal{D}$) such that
1. $\rho_n(x) \geq 0$ for all $x$,
2. the area under each curve is 1,
3. supp$(\rho_n)\to\{0\}$ as $n\to\infty$.
We then call $(\rho_n)$ a strict delta sequence.
Note that we are not choosing a single $\rho$ and then setting $\rho_n(x)=n\rho(nx)$.
The product SP1 is defined as $\lim_{n\to\infty}(\mu*\rho_n)\nu$ and SP4 is defined as $\lim_{n\to\infty}(\mu*\rho_n)(\nu*\rho_n)$.

Now apparently in Theory of Distributions: The Sequential Approach, Antosik et al. prove that $x^{-1}\cdot\delta$ is undefined as an SP1 product, but that $x^{-1}\cdot\delta = -\frac{1}{2}\delta'$ exists as an SP4 product. I say apparently because I don't own their book, so I can't verify their proof right now. But it's still a moot point: the Fisher product can multiply together anything SP4 can multiply together. Removing symmetry to obtain something weaker is counterproductive.

Last edited: Jul 6, 2014
12. Jul 6, 2014

### jostpuur

Here comes my attempt to succeed in pedagogy:

Problem one: First I defined a function $\phi$ by setting

$$\phi:\{0,1,2,3\}\to\mathbb{R},\quad\quad\left\{\begin{array}{l} \phi(0) = 5 \\ \phi(1) = -4 \\ \phi(2) = 30 \\ \phi(3) = 14 \end{array}\right.$$

Then I got stuck trying to prove what $\phi(4)$ is. LagrangeEuler, do you have any idea what $\phi(4)$ is? Can you prove it?

Problem two: First I defined a function $\Phi$ by setting

$$\Phi:C(\mathbb{R})\to\mathbb{R},\quad\quad \Phi(f) = f(0)$$

Here $C(\mathbb{R})$ is the set of all continuous functions $f:\mathbb{R}\to\mathbb{R}$.

Then I defined a function $f$ by setting

$$f:\;]-\infty,0[\;\cup\;]0,\infty[\;\to\mathbb{R},\quad\quad f(x) = \frac{1}{x}$$

and I got stuck trying to prove what $\Phi(f)$ is. Can anyone here evaluate $\Phi(f)$? Can anyone here prove that $\Phi(f)$ equals anything at all?
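For what it's worth, problem one has a direct analogue in programming (an illustration only): a function given only on a finite domain is like a dictionary, and evaluating it outside that domain is not zero or infinity, it simply fails.

```python
# phi from problem one, given only on the domain {0, 1, 2, 3}
phi = {0: 5, 1: -4, 2: 30, 3: 14}

print(phi[2])  # 30: well defined inside the domain

try:
    phi[4]     # outside the domain: the lookup simply fails
except KeyError:
    print("phi(4) is undefined")
```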

LagrangeEuler, do you see how the problems one and two are related?

13. Jul 6, 2014

### HallsofIvy

Staff Emeritus
That's nonsense. If you defined $\phi$ like that, then $\phi(4)$ does not exist; it is literally "undefined". (Or was that your point?)

14. Jul 6, 2014

### jostpuur

Yes, that was my point.

I'm trying to explain that the $\Phi(f)$ mentioned next is undefined too. That is why the undefined quantity $\Phi(f)$ is being compared to another undefined quantity, $\phi(4)$.

The final step in answering the original question would be to convince LagrangeEuler that the most obvious way to interpret the formal integral expression he mentioned is as something like $\Phi(f)$. Other interpretations would not be reasonable unless clearly explained alongside the formal integral expression.

15. Jul 6, 2014

### pwsnafu

This problem really illustrates why the integral notation needs to be avoided.