Change of the order of integration including Dirac delta

  • Thread starter GIM
I'd be content just having this passage explained to me:

$$\frac{\delta(x-x')}{\sqrt{\delta(0)}}=\sqrt{\delta(x-x')}$$
Thanks!
 

Demystifier

Oh... my... Gawd!

I'd love to see you try and make that rigorous.
It's very simple indeed to make it more rigorous. One can always replace the ##\delta## "function" with a true function, such as a very narrow Gaussian parameterized with a small width ##\epsilon##. Or instead of a Gaussian, an even better choice is the narrow "wall" function ##\delta_{\epsilon}(x)## defined by
$$\delta_{\epsilon}(x)=\begin{cases}1/\epsilon & {\rm for}\;\; |x|<\epsilon/2\\ 1/(2\epsilon) & {\rm for}\;\; |x|=\epsilon/2\\ 0 & {\rm for}\;\; |x|>\epsilon/2\end{cases}$$
It satisfies
$$\int_{-\infty}^{\infty} dx \, \delta_{\epsilon}(x)=1$$
By Taylor expansion ##f(x)=f(0)+xf'(0)+x^2f''(0)/2+...## one obtains
$$\int_{-\infty}^{\infty} dx \, f(x)\delta_{\epsilon}(x)=f(0) +\epsilon^2\frac{f''(0)}{24}+...=f(0) +{\cal O}(\epsilon^2)$$
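In case the ##\epsilon^2## coefficient looks mysterious: it comes from integrating the quadratic term of the Taylor expansion over the support of ##\delta_{\epsilon}## (the odd terms vanish by symmetry),
$$\frac{1}{\epsilon}\int_{-\epsilon/2}^{\epsilon/2} dx\,\frac{x^{2}}{2}f''(0)
=\frac{f''(0)}{2\epsilon}\cdot\frac{2}{3}\left(\frac{\epsilon}{2}\right)^{3}
=\frac{\epsilon^{2}f''(0)}{24}.$$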
Repeating my previous renormalization procedure with the replacement ##\delta\rightarrow\delta_{\epsilon}## in the right places, one obtains
$$\psi_{ren}(x)=\frac{\delta_{\epsilon}(x-x')}{\sqrt{\delta_{\epsilon}(0)}}
=\sqrt{\delta_{\epsilon}(x-x')}$$
which is well defined for arbitrarily small positive ##\epsilon##. Taking the limit ##\epsilon\rightarrow 0## in the right places then also covers the ##\delta## "function" case. But such pedantry makes all the equations more cumbersome, so for the sake of practical calculus I prefer not to spell out all these details. For me it's sufficient to know that I can do it rigorously, as sketched above, if I really want to.
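If anyone wants to see this concretely, here is a minimal numerical sketch (the value of ##x'##, the width ##\epsilon## and the grid are arbitrary choices, not part of the argument). It checks that ##\delta_{\epsilon}(x-x')/\sqrt{\delta_{\epsilon}(0)}## and ##\sqrt{\delta_{\epsilon}(x-x')}## agree pointwise and that ##\int dx\,|\psi_{ren}(x)|^{2}=1##:
[code]
import numpy as np

eps = 1e-3       # width of the wall function (arbitrary small value)
xp  = 0.3        # x' (arbitrary)

def delta_eps(y, eps):
    # narrow "wall" function: 1/eps for |y| < eps/2, zero otherwise
    # (the measure-zero boundary value 1/(2*eps) is ignored here)
    return np.where(np.abs(y) < eps / 2, 1.0 / eps, 0.0)

x = np.linspace(-1.0, 1.0, 200_001)        # uniform grid with spacing 1e-5

lhs = delta_eps(x - xp, eps) / np.sqrt(delta_eps(0.0, eps))
rhs = np.sqrt(delta_eps(x - xp, eps))
print(np.max(np.abs(lhs - rhs)))           # essentially zero: the two expressions agree

psi_ren = rhs
print(np.sum(psi_ren**2) * (x[1] - x[0]))  # ~1: psi_ren has unit norm for this eps
[/code]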
 

Demystifier

I'd be content just having this passage explained to me:
Here is an explanation at the level of practical calculus. When ##x\neq x'##, both sides are zero, so they match. When ##x=x'##, the left-hand side is
$$\frac{\delta(0)}{\sqrt{\delta(0)}}=\sqrt{\delta(0)}$$
which equals the right-hand side.

If you want a more rigorous argument, see the hint in the post above.
 

Demystifier

Note also that physics textbooks often make similar "illegitimate" manipulations. For example, for the Dirac ##\delta## in momentum space, ##\delta^4(k)##, physicists often write
$$\delta^4(0)=\frac{TV}{(2\pi)^4}$$
where ##V## is the volume of the "laboratory" and ##T## is the time duration of the experiment. Without caring about rigor, they obtain in this way results that agree with experiment.
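For reference, the usual hand-waving behind that formula (a sketch, not a rigorous statement) is to write the delta as a Fourier integral and cut the spacetime integration off at a box of spatial volume ##V## and temporal extent ##T##:
$$\delta^4(k)=\int\frac{d^4x}{(2\pi)^4}\,e^{ik\cdot x}
\quad\Rightarrow\quad
\delta^4(0)\;\rightarrow\;\int_{V,\,T}\frac{d^4x}{(2\pi)^4}=\frac{TV}{(2\pi)^4}.$$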
 

vanhees71

Come on, it doesn't make sense to take a square or a square root of a ##\delta## distribution (NOT FUNCTION). In QT you must have
$$\langle x | x' \rangle=\delta(x-x'),$$
because otherwise the entire Dirac formalism of bras and kets breaks down, and it's well defined in the sense of distributions. You can take various weak limits to define this properly.
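For concreteness, one standard weak limit (added here only as an illustration, in units with ##\hbar=1##) is obtained by damping the momentum integral for ##\langle x|x'\rangle##:
$$\langle x | x' \rangle=\lim_{\epsilon\to 0^{+}}\int_{-\infty}^{\infty}\frac{dp}{2\pi}\,e^{ip(x-x')-\epsilon p^{2}}
=\lim_{\epsilon\to 0^{+}}\frac{1}{\sqrt{4\pi\epsilon}}\,e^{-(x-x')^{2}/4\epsilon}=\delta(x-x'),$$
where the limit is understood weakly, i.e. under an integral against a smooth test function.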
 

vanhees71

Note also that physics textbooks often make similar "illegitimate" manipulations. For example, for the Dirac ##\delta## in momentum space, ##\delta^4(k)##, physicists often write
$$\delta^4(0)=\frac{TV}{(2\pi)^4}$$
where ##V## is the volume of the "laboratory" and ##T## is the time duration of the experiment. Without caring about rigor, they obtain in this way results that agree with experiment.
That's the usual shortcut used to define the square of S-matrix elements, but it's only a hand-waving shortcut. The correct way is to use true states rather than distributions (plane waves are distributions, namely momentum eigenfunctions in the position representation) for the asymptotic free states. Then there's no problem in squaring the S-matrix element and taking the weak limit to generalized momentum eigenfunctions for the asymptotic free states afterwards. See Peskin/Schroeder for the details.
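To spell out the shortcut in question (schematically, for ##f\neq i## and with a generic invariant amplitude ##\mathcal{M}##, not the careful wave-packet treatment): writing ##S_{fi}=i\mathcal{M}\,(2\pi)^4\delta^4(p_f-p_i)##, one squares the delta as
$$|S_{fi}|^{2}=|\mathcal{M}|^{2}\left[(2\pi)^{4}\delta^{4}(p_{f}-p_{i})\right]^{2}
=|\mathcal{M}|^{2}\,(2\pi)^{4}\delta^{4}(0)\,(2\pi)^{4}\delta^{4}(p_{f}-p_{i})
\;\rightarrow\;|\mathcal{M}|^{2}\,TV\,(2\pi)^{4}\delta^{4}(p_{f}-p_{i}),$$
and then divides by ##T## (and ##V##) to extract a transition rate; the wave-packet derivation reproduces the same result without ever squaring a distribution.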
 

Demystifier

Come on, it doesn't make sense to take a square or a square root of a ##\delta## distribution (NOT FUNCTION). In QT you must have
$$\langle x | x' \rangle=\delta(x-x'),$$
because otherwise the entire Dirac formalism of bras and kets breaks down, and it's well defined in the sense of distributions. You can take various weak limits to define this properly.
The square root in #27 certainly does make sense.
 

Demystifier

That's the usual shortcut used to define the square of S-matrix elements, but it's only a hand-waving shortcut. The correct way is to use true states rather than distributions (plane waves are distributions, namely momentum eigenfunctions in the position representation) for the asymptotic free states. Then there's no problem in squaring the S-matrix element and taking the weak limit to generalized momentum eigenfunctions for the asymptotic free states afterwards. See Peskin/Schroeder for the details.
Yes, just as #23 is a hand-waving shortcut that can be justified by the more rigorous procedure sketched in #27.
 

samalkhaiat

Can somebody help me? I am studying the Faddeev-Popov trick, following Peskin and Schroeder's QFT book, but there is one thing I can't understand. After they insert the Faddeev-Popov identity,
$$I = \int {{\cal D}\alpha \left( x \right)\delta \left( {G\left( {{A^\alpha }} \right)} \right)\det \left( {\frac{{\delta G\left( {{A^\alpha }} \right)}}{{\delta \alpha }}} \right)}$$
they exchange the order of integration, but to my knowledge, since the integrand involves a delta function, there is no guarantee that the order can be exchanged. Where can I find the reasoning for this?

[Mentor's note: This text has been edited to fix the LaTeX formatting. Everyone is reminded that there is a section explaining how to make LaTeX work with the Physics Forums software on our help page: https://www.physicsforums.com/help/]
There is no problem with such formal manipulations at all. The whole purpose of the Faddeev-Popov procedure is to introduce the correct integration measure,
[tex]\Delta[A] = \det \left| \frac{\delta G}{\delta \theta}\right|_{G = 0} ,[/tex]
into the path integral so that we can factor out the infinite group volume [itex]\int \mathcal{D} g(\theta)[/itex] which causes the infinite over-counting, i.e. the summing over equivalent gauge field configurations.

Plus, in actual calculations you never need to interchange the order of integration: once you choose the gauge-fixing surface [itex]G^{a}[A] = 0[/itex] so that it intersects every group orbit exactly once, all pieces of the integrand become independent of the group coordinates [itex]\theta^{a}(x)[/itex], and you can safely pull out the group volume [itex]\int \mathcal{D} g(\theta)[/itex]. For example, in the covariant gauge [itex]G^{a} = \partial^{\mu}A^{a}_{\mu}[/itex] the path integral becomes
[tex]\int \mathcal{D}A_{\mu} \ e^{i S[A]} = \int \mathcal{D}A_{\mu} \int \mathcal{D}g(\theta) \ \prod_{x} \delta [\partial^{\mu}A_{\mu}] \det \left| \partial^{\nu}D^{ab}_{\nu}\right| \ e^{i S[A]} .[/tex]
Clearly, nothing prevents you from factoring out the infinite volume of the gauge group [itex]\mbox{Vol}(\mathcal{G}) = \int \mathcal{D} g[/itex].

The only important point to observe is this: your choice of gauge fixing [itex]G^{a}[A][/itex], which defines the local cross section of the principal fibre bundle (a gauge slice transverse to every [itex]\mathcal{G}[/itex]-orbit), must not come with a vanishing Jacobian, i.e. the matrix
[tex]\frac{\delta G^{a}}{\delta \theta_{b}}\Big|_{\vec{\theta} = 0}[/tex]
must have an inverse.
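If it helps, here is a finite-dimensional toy version of the same factorization (an illustration only, not part of the QFT argument). Take a two-dimensional integral with a rotationally invariant ##S(r)##, ##r=|\vec x|##, use the polar angle ##\phi## of ##\vec x## as the "group" coordinate and ##g=\phi## as the gauge condition, so that the Jacobian ##|\partial(\phi-\theta)/\partial\theta|=1## is trivial (all angles understood mod ##2\pi##). Then
$$\int d^{2}x\, e^{iS(r)}
=\int d^{2}x\, e^{iS(r)}\int_{0}^{2\pi}\!d\theta\,\delta(\phi-\theta)
=\left(\int_{0}^{2\pi}\!d\theta\right)\int d^{2}x\, e^{iS(r)}\,\delta(\phi)
=2\pi\int_{0}^{\infty}\!dr\, r\, e^{iS(r)},$$
where the middle step shifts ##\phi\rightarrow\phi+\theta## using the rotational invariance of ##S## and of the measure, after which the ##\theta## integration (the "group volume" ##2\pi##) factors out, exactly as ##\int\mathcal{D}g(\theta)## does above. With a less convenient gauge condition ##g(\vec x^{\,\theta})## the nontrivial Jacobian ##|\partial g/\partial\theta|## reappears; that is the analogue of the Faddeev-Popov determinant.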
 

GIM

Thank you, @samalkhaiat! I think this is the correct answer, even though I don't yet understand it well. I will try to work through it along these lines. Thank you again.
 

samalkhaiat

Oh... my... Gawd!

I'd love to see you try and make that rigorous.
Schwartz said it is impossible. In fact he showed that it is impossible to define (associative and commutative) multiplication in the class of generalized functions (distributions). Indeed, it is easy to show that such multiplication leads to contradictions: Consider the two most common distributions, the Dirac delta function [itex]\delta (x)[/itex] and the Principal value function [itex]\mathscr{P}(1/x)[/itex]. We can rigorously prove the following relations [tex]x \ \delta (x) = \delta (x) \ x = 0, \ \ \ \ x \ \mathscr{P}(\frac{1}{x}) = \mathscr{P}(\frac{1}{x}) \ x = 1 .[/tex] If a product existed, then, using these relations, we would have the following contradictory chain of equalities [tex]0 = 0 \ \mathscr{P}(\frac{1}{x}) = \left( \delta (x) \ x \right) \mathscr{P}(\frac{1}{x}) = \delta (x) \left( x \mathscr{P}(\frac{1}{x}) \right) = \delta (x) .[/tex]
So, in order to define a product of two distributions [itex]f[/itex] and [itex]g[/itex], it is necessary that they have the following properties: [itex]f[/itex] must be just as irregular in the neighbourhood of an arbitrary point as [itex]g[/itex] is regular in that neighbourhood, and vice versa.
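The connection to the regularization in #27 is direct: the would-be square of ##\delta## has no weak limit, since for the wall function ##\delta_{\epsilon}##
$$\int_{-\infty}^{\infty}dx\,\delta_{\epsilon}(x)^{2}=\frac{1}{\epsilon}\;\rightarrow\;\infty \quad {\rm as}\;\; \epsilon\rightarrow 0,$$
so ##\delta^{2}## cannot be assigned a finite value against any test function that is nonzero at the origin; by contrast, the renormalized combination of #27 at least has a finite norm, ##\int dx\,|\psi_{ren}|^{2}=1##, for every ##\epsilon>0##.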
 

samalkhaiat

Thank you, @samalkhaiat! I think this is the correct answer, even though I don't yet understand it well. I will try to work through it along these lines. Thank you again.
If I knew how much you already know about the subject, I might be able to explain it better for you. Feel free to ask.
 
