# Change of the order of integration including Dirac delta

I'd be content just having this passage explained to me:

$$\frac{\delta(x-x')}{\sqrt{\delta(0)}}=\sqrt{\delta(x-x')}$$
Thanks

Demystifier
> Oh... my... Gawd!
>
> I'd love to see you try and make that rigorous.
It is actually quite simple to make it more rigorous. One can always replace the ##\delta## "function" with a true function, such as a very narrow Gaussian parameterized by a small width ##\epsilon##. An even better choice than a Gaussian is the narrow "wall" function ##\delta_{\epsilon}(x)## defined by
$$\delta_{\epsilon}(x)=1/\epsilon \;\;{\rm for}\;\; |x|<\epsilon/2$$
$$\delta_{\epsilon}(x)=0 \;\;{\rm for}\;\; |x|>\epsilon/2$$
$$\delta_{\epsilon}(x)=1/2\epsilon \;\;{\rm for}\;\; |x|=\epsilon/2$$
It satisfies
$$\int_{-\infty}^{\infty} dx \, \delta_{\epsilon}(x)=1$$
By Taylor expansion ##f(x)=f(0)+xf'(0)+x^2f''(0)/2+...## one obtains
$$\int_{-\infty}^{\infty} dx \, f(x)\delta_{\epsilon}(x)=f(0) +\epsilon^2\frac{f''(0)}{24}+...=f(0) +{\cal O}(\epsilon^2)$$
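As a quick numerical sanity check of the ##{\cal O}(\epsilon^2)## term, here is a minimal Python sketch (the test function ##f=\cos## and the value of ##\epsilon## are chosen for illustration):

```python
import math

def avg_over_wall(f, eps, n=200_001):
    """Integral of f(x) * delta_eps(x) dx, i.e. the average of f over [-eps/2, eps/2]."""
    h = eps / n  # midpoint Riemann sum over the support of the wall
    return sum(f(-eps / 2 + (k + 0.5) * h) for k in range(n)) * h / eps

eps = 0.1
val = avg_over_wall(math.cos, eps)   # f = cos: f(0) = 1, f''(0) = -1
pred = 1 - eps**2 / 24               # f(0) + eps^2 * f''(0) / 24
print(val, pred)                     # the two agree up to O(eps^4)
```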
Repeating my previous renormalization procedure by a replacement ##\delta\rightarrow\delta_{\epsilon}## at the right places, one obtains
$$\psi_{ren}(x)=\frac{\delta_{\epsilon}(x-x')}{\sqrt{\delta_{\epsilon}(0)}} =\sqrt{\delta_{\epsilon}(x-x')}$$
which is well defined for an arbitrarily small positive ##\epsilon##. Taking the limit ##\epsilon\rightarrow 0## in the right places then covers the ##\delta## "function" case as well. But such pedantry makes all the equations more cumbersome, so for the sake of practical calculus I prefer not to spell out all these details. For me, it is sufficient to know that I can do it rigorously as sketched above, if I really want to.
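Here is a minimal numerical sketch of this regularized expression (the values of ##\epsilon## and ##x'## are illustrative): ##\psi_{ren}## agrees pointwise with ##\sqrt{\delta_{\epsilon}(x-x')}##, and since ##|\psi_{ren}|^2=\delta_{\epsilon}## integrates to 1, it is a normalized wavefunction:

```python
import math

eps = 1e-3   # illustrative width
xp = 0.3     # illustrative x'

def delta_eps(x):
    # narrow "wall": 1/eps for |x| < eps/2, 0 outside (the edge value is irrelevant here)
    return 1.0 / eps if abs(x) < eps / 2 else 0.0

def psi_ren(x):
    return delta_eps(x - xp) / math.sqrt(delta_eps(0.0))

# pointwise identity: psi_ren(x) = sqrt(delta_eps(x - x')), both on and off the wall
for x in [xp, xp + eps / 4, xp - eps / 4, xp + eps, xp - 5 * eps]:
    assert math.isclose(psi_ren(x), math.sqrt(delta_eps(x - xp)))

# |psi_ren|^2 = delta_eps integrates to 1, so psi_ren is a normalized wavefunction
h = eps / 1000
norm = sum(psi_ren(xp + (k + 0.5) * h) ** 2 for k in range(-2000, 2000)) * h
print(norm)  # -> 1.0 (up to floating point)
```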

Demystifier
> I'd be content just having this passage explained to me:
Here is an explanation at the level of practical calculus. When ##x\neq x'##, both sides are zero, so there is a match. When ##x=x'##, the left-hand side is
$$\frac{\delta(0)}{\sqrt{\delta(0)}}=\sqrt{\delta(0)}$$
which equals the right-hand side.

If you want a more rigorous argument, see the hint in the post above.

Demystifier
Note also that physics textbooks often perform similar "illegitimate" manipulations. For example, for the Dirac ##\delta## in momentum space, ##\delta^4(k)##, physicists often write
$$\delta^4(0)=\frac{TV}{(2\pi)^4}$$
where ##V## is the volume of the "laboratory" and ##T## is the time duration of the experiment. Without caring about rigor, they obtain in this way results that agree with experiment.
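A sketch of where such formulas come from, in one dimension with an illustrative value of ##T##: regularizing the Fourier representation of ##\delta(k)## by a finite time window gives ##\delta_T(0)=T/2\pi##, the one-dimensional analogue of ##\delta^4(0)=TV/(2\pi)^4##:

```python
import math, cmath

T = 50.0  # illustrative "duration of the experiment"

def delta_T(k, n=100_000):
    """Finite-time regularization of delta(k): (1/2pi) * integral_{-T/2}^{T/2} e^{ikt} dt."""
    h = T / n  # midpoint rule over the finite time window
    s = sum(cmath.exp(1j * k * (-T / 2 + (j + 0.5) * h)) for j in range(n))
    return (s * h).real / (2 * math.pi)

print(delta_T(0.0), T / (2 * math.pi))                        # regularized delta(0) = T/2pi
print(delta_T(0.7), math.sin(0.7 * T / 2) / (math.pi * 0.7))  # sin(kT/2)/(pi k) for k != 0
```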

vanhees71
Come on, it doesn't make sense to take the square or square root of a ##\delta## distribution (NOT function). In QT you must have
$$\langle x | x' \rangle=\delta(x-x'),$$
because otherwise the entire Dirac formalism of bras and kets breaks down, and it's well defined in the sense of distributions. You can take various weak limits to define this properly.
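As an illustration of such a weak limit (a numerical sketch with a Gaussian family of nascent deltas, ##\delta_\sigma(x)=e^{-x^2/2\sigma^2}/\sigma\sqrt{2\pi}##): smearing a test function ##f## against ##\delta_\sigma(x-x')## tends to ##f(x')## as ##\sigma\to 0##:

```python
import math

def gauss_delta(u, sigma):
    # nascent delta: normalized Gaussian of width sigma
    return math.exp(-u * u / (2 * sigma * sigma)) / (sigma * math.sqrt(2 * math.pi))

def smear(f, xp, sigma, L=1.0, n=200_000):
    # midpoint rule for  integral of delta_sigma(x - xp) f(x) dx  over [xp - L, xp + L]
    h = 2 * L / n
    return sum(gauss_delta(-L + (k + 0.5) * h, sigma) * f(xp - L + (k + 0.5) * h)
               for k in range(n)) * h

xp = 0.7
for sigma in [0.1, 0.01, 0.001]:
    print(sigma, smear(math.sin, xp, sigma))  # tends to sin(0.7) as sigma -> 0
```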

vanhees71
> Note also that physics textbooks often perform similar "illegitimate" manipulations. For example, for the Dirac ##\delta## in momentum space, ##\delta^4(k)##, physicists often write
> $$\delta^4(0)=\frac{TV}{(2\pi)^4}$$
> where ##V## is the volume of the "laboratory" and ##T## is the time duration of the experiment. Without caring about rigor, they obtain in this way results that agree with experiment.
That's the usual shortcut often used to define the square of S-matrix elements, but it's only a hand-waving shortcut. The correct way is to use true states rather than distributions (plane waves are distributions, namely momentum eigenfunctions in the position representation) for the asymptotic free states. Then there's no problem in squaring the S-matrix element and taking the weak limit to generalized momentum eigenfunctions for the asymptotic free states. See Peskin/Schroeder for the details.

Demystifier
> Come on, it doesn't make sense to take the square or square root of a ##\delta## distribution (NOT function). In QT you must have
> $$\langle x | x' \rangle=\delta(x-x'),$$
> because otherwise the entire Dirac formalism of bras and kets breaks down, and it's well defined in the sense of distributions. You can take various weak limits to define this properly.
The square root in #27 certainly does make sense.

Demystifier
> That's the usual shortcut often used to define the square of S-matrix elements, but it's only a hand-waving shortcut. The correct way is to use true states rather than distributions (plane waves are distributions, namely momentum eigenfunctions in the position representation) for the asymptotic free states. Then there's no problem in squaring the S-matrix element and taking the weak limit to generalized momentum eigenfunctions for the asymptotic free states. See Peskin/Schroeder for the details.
Yes, just like #23 is a hand-waving shortcut that can be justified by the more rigorous procedure sketched in #27.

samalkhaiat
> Can somebody help me? I am studying the Faddeev-Popov trick, following Peskin and Schroeder's QFT book, but I can't understand one thing. After they insert the Faddeev-Popov identity,
> $$I = \int {{\cal D}\alpha \left( x \right)\delta \left( {G\left( {{A^\alpha }} \right)} \right)\det \left( {\frac{{\delta G\left( {{A^\alpha }} \right)}}{{\delta \alpha }}} \right)}$$
> they exchange the order of integration, but to my knowledge, since the integrand includes a delta function, there is no guarantee that the order can be exchanged. Where can I find the reasoning for this?

[Mentor's note: This post has been edited to fix the LaTeX formatting. Everyone is reminded that there is a section explaining how to make LaTeX work with the Physics Forums software on our help page: https://www.physicsforums.com/help/]
There is no problem with such formal manipulations at all. The whole purpose of the Faddeev-Popov procedure is to introduce the correct integration measure,
$$\Delta[A] = \det \left| \frac{\delta G}{\delta \theta}\right|_{G = 0} ,$$
into the path integral so that we can factor out the infinite group volume ##\int \mathcal{D} g(\theta)## which causes the infinite over-counting, i.e., summing over equivalent gauge field configurations. Moreover, in actual calculations you never need to interchange the order of integrations, because once you choose the gauge-fixing surface ##G^{a}[A] = 0## so that it intersects every group orbit exactly once, all pieces of the integrand become independent of the group coordinates ##\theta^{a}(x)##, and you can safely pull out the group volume ##\int \mathcal{D} g(\theta)##. For example, in the covariant gauge ##G^{a} = \partial^{\mu}A^{a}_{\mu}## the path integral becomes
$$\int \mathcal{D}A_{\mu} \ e^{i S[A]} = \int \mathcal{D}A_{\mu} \int \mathcal{D}g(\theta) \ \prod_{x} \delta [\partial^{\mu}A_{\mu}] \det \left| \partial^{\nu}D^{ab}_{\nu}\right| \ e^{i S[A]} .$$
Clearly, nothing prevents you from factoring out the infinite volume of the gauge group ##\mbox{Vol}(\mathcal{G}) = \int \mathcal{D} g##. The only important point to observe is this: your choice of gauge fixing ##G^{a}[A]##, which defines a local cross section on the principal fibre bundle (a gauge slice transverse to every ##\mathcal{G}##-orbit), must not come with a vanishing Jacobian, i.e., the matrix
$$\Delta[A] = \left.\frac{\delta G^{a}}{\delta \theta_{b}}\right|_{\vec{\theta} = 0}$$
must have an inverse.
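A standard finite-dimensional toy version of this factorization (the specific integrand and gauge choice are chosen for illustration): take ##I=\int d^2x\, e^{-|x|^2}=\pi## with SO(2) rotations as the gauge group and the gauge condition ##G(x)=x_2=0##. Each orbit of radius ##r## crosses the slice twice, and ##\int_0^{2\pi} d\theta\, \delta(x_1\sin\theta + x_2\cos\theta) = 2/r##, so the Faddeev-Popov factor on the slice is ##|x_1|/2##; multiplying the gauge-fixed slice integral by ##\mathrm{Vol}(SO(2))=2\pi## reproduces ##I##:

```python
import math

def direct(n=400, L=6.0):
    # brute-force 2D midpoint rule for I = integral d^2x e^{-|x|^2} = pi
    h = 2 * L / n
    total = 0.0
    for i in range(n):
        x1 = -L + (i + 0.5) * h
        for j in range(n):
            x2 = -L + (j + 0.5) * h
            total += math.exp(-(x1 * x1 + x2 * x2))
    return total * h * h

def gauge_fixed(n=400_000, L=6.0):
    # Vol(SO(2)) = 2pi times the integral over the gauge slice x2 = 0,
    # weighted by the Faddeev-Popov factor |x1| / 2
    h = 2 * L / n
    slice_int = sum(abs(-L + (k + 0.5) * h) / 2 * math.exp(-(-L + (k + 0.5) * h) ** 2)
                    for k in range(n)) * h
    return 2 * math.pi * slice_int

print(direct(), gauge_fixed(), math.pi)  # all three agree
```

The factor 1/2 in the determinant accounts for the slice intersecting each orbit twice; with a half-line slice the factor would be ##|x_1|## instead.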

Thank you, @samalkhaiat! I think this is the correct answer, even though I don't fully understand it yet. I will try to work through it along these lines. Thank you again.

samalkhaiat
> Oh... my... Gawd!
>
> I'd love to see you try and make that rigorous.
Schwartz said it is impossible. In fact, he showed that it is impossible to define an (associative and commutative) multiplication on the class of generalized functions (distributions). Indeed, it is easy to show that such a multiplication leads to contradictions. Consider the two most common distributions, the Dirac delta function ##\delta(x)## and the principal value ##\mathscr{P}(1/x)##. We can rigorously prove the following relations
$$x \ \delta (x) = \delta (x) \ x = 0, \ \ \ \ x \ \mathscr{P}(\frac{1}{x}) = \mathscr{P}(\frac{1}{x}) \ x = 1 .$$
If a product existed then, using these relations, we would have the following contradictory chain of equalities
$$0 = 0 \ \mathscr{P}(\frac{1}{x}) = \left( \delta (x) \ x \right) \mathscr{P}(\frac{1}{x}) = \delta (x) \left( x \mathscr{P}(\frac{1}{x}) \right) = \delta (x) .$$
So, in order to define a product of two distributions ##f## and ##g##, it is necessary that they have the following property: ##f## must be just as irregular in the neighbourhood of an arbitrary point as ##g## is regular in that neighbourhood, and vice versa.
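The contradiction can also be watched numerically with the regularized wall function ##\delta_{\epsilon}## from earlier in the thread (a sketch; the test function is illustrative): ##x\,\delta_{\epsilon}(x)## tends weakly to zero, yet grouping the factors as ##(\delta_{\epsilon}\cdot x)\cdot(1/x)## reproduces ##\delta_{\epsilon}##, so no associative product survives the ##\epsilon\to 0## limit:

```python
import math

def delta_eps(x, eps):
    # regularized wall: 1/eps for |x| < eps/2, 0 outside
    return 1.0 / eps if abs(x) < eps / 2 else 0.0

def integrate(g, a=-1.0, b=1.0, n=100_000):
    # midpoint rule; n even, so no sample point lands exactly at x = 0
    h = (b - a) / n
    return sum(g(a + (k + 0.5) * h) for k in range(n)) * h

eps = 1e-3
f = math.cos
# x * delta_eps(x) tested against f: essentially zero, matching x * delta = 0
weak_zero = integrate(lambda x: x * delta_eps(x, eps) * f(x))
# multiplying the same product by 1/x first restores delta_eps, so the grouping
# (delta * x) * (1/x) behaves like delta, not like 0
back = integrate(lambda x: (x * delta_eps(x, eps)) * (1 / x) * f(x))
print(weak_zero, back)  # -> ~0 and ~f(0) = 1
```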
