Change of the order of integration when a Dirac delta is involved

Summary
The discussion revolves around the Faddeev-Popov trick in quantum field theory and the complexities of integrating with the Dirac delta function. Participants express concerns about the validity of exchanging the order of integration when a delta function is involved, emphasizing the need for rigorous mathematical justification. Colombeau generalized functions are introduced as a framework that allows for the multiplication of distributions, which could potentially resolve issues related to the singular nature of the delta function. The conversation also touches on the normalization of states in quantum mechanics, highlighting the challenges posed by infinities in inner products involving delta functions. Overall, the thread illustrates the intersection of physics and mathematics in handling generalized functions and integration techniques.
  • #31
Demystifier said:
Note also that physics textbooks often do some similar "illegitimate" manipulations. For example, for the Dirac ##\delta## in the momentum space ##\delta^4(k)## physicists often write
$$\delta^4(0)=\frac{TV}{(2\pi)^4}$$
where ##V## is the volume of the "laboratory" and ##T## is the time duration of the experiment. Without caring about rigor, in this way they obtain results which agree with experiments.
That's the usual shortcut used to define the square of S-matrix elements, but it's only a hand-waving argument. The correct way is to use true states rather than distributions (plane waves are distributions, namely momentum eigenfunctions in the position representation) for the asymptotic free states. Then there's no problem in squaring the S-matrix element and taking the weak limit to generalized momentum eigenfunctions for the asymptotic free states. See Peskin/Schroeder for the details.
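For reference, the identification quoted above is the standard finite-box heuristic: writing the delta through its Fourier representation and restricting the space-time integral to a box of spatial volume ##V## and time extent ##T##,
$$(2\pi)^4\,\delta^4(k)=\int d^4x\;e^{ik\cdot x}
\;\;\Longrightarrow\;\;
(2\pi)^4\,\delta^4(0)\;\to\;\int_{T\times V} d^4x = TV ,$$
i.e. ##\delta^4(0)=TV/(2\pi)^4##, with the limit ##V,T\to\infty## taken only at the end, after transition rates per unit time and volume have been formed. This is the hand-waving version of the wave-packet treatment mentioned above.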
 
  • #32
vanhees71 said:
Come on, it doesn't make sense to take a square or a square root of a ##\delta## distribution (NOT a FUNCTION). In QT you must have
$$\langle x | x' \rangle=\delta(x-x'),$$
because otherwise the entire Dirac formalism of bras and kets breaks down, and it's well defined in the sense of distributions. You can take various weak limits to define this properly.
The square root in #27 certainly does make sense.
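For concreteness, the "weak limits" mentioned in the quote can be realized with a nascent delta sequence, e.g. a Gaussian (a minimal illustration, not the only choice):
$$\delta_\epsilon(x)=\frac{1}{\sqrt{2\pi\epsilon}}\,e^{-x^2/2\epsilon},
\qquad
\lim_{\epsilon\to 0^+}\int_{-\infty}^{\infty} dx\;\delta_\epsilon(x)\,\varphi(x)=\varphi(0)$$
for every test function ##\varphi##, so that ##\langle x|x'\rangle=\delta(x-x')## is to be read as the ##\epsilon\to 0^+## limit of inner products of smeared, normalizable states.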
 
  • #33
vanhees71 said:
That's the usual shortcut used to define the square of S-matrix elements, but it's only a hand-waving argument. The correct way is to use true states rather than distributions (plane waves are distributions, namely momentum eigenfunctions in the position representation) for the asymptotic free states. Then there's no problem in squaring the S-matrix element and taking the weak limit to generalized momentum eigenfunctions for the asymptotic free states. See Peskin/Schroeder for the details.
Yes, just like #23 is a hand-waving shortcut which can be justified by more rigorous procedure sketched in #27.
 
  • #34
GIM said:
Can somebody help me? I am studying the Faddeev-Popov trick, following Peskin and Schroeder's QFT book, but I can't understand one thing. After they insert the Faddeev-Popov identity,
$$I = \int \mathcal{D}\alpha(x)\,\delta\big(G(A^{\alpha})\big)\,\det\left(\frac{\delta G(A^{\alpha})}{\delta \alpha}\right)$$
they exchange the order of integration, but to my knowledge, since a delta function is involved, there is no guarantee that the order can be exchanged. Where can I find the reasoning for this?

[Mentor's note: This text has been edited to fix the LaTeX formatting. Everyone is reminded that there's a section explaining how to make LaTeX work with the Physics Forums software on our help page: https://www.physicsforums.com/help/]

There is no problem with such formal manipulations at all. The whole purpose of the Faddeev-Popov procedure is to introduce the correct integration measure,
$$\Delta[A] = \det \left| \frac{\delta G}{\delta \theta}\right|_{G = 0} ,$$
into the path integral so that we can factor out the infinite group volume ##\int \mathcal{D} g(\theta)## which causes the infinite over-counting, i.e. summing over equivalent gauge field configurations. Plus, in actual calculations you never need to interchange the order of integrations, because once you choose the gauge-fixing surface ##G^{a}[A] = 0## so that it intersects every group orbit exactly once, all pieces of the integrand become independent of the group coordinates ##\theta^{a}(x)##, and you can safely pull out the group volume ##\int \mathcal{D} g(\theta)##. For example, in the covariant gauge ##G^{a} = \partial^{\mu}A^{a}_{\mu}## the path integral becomes
$$\int \mathcal{D}A_{\mu} \ e^{i S[A]} = \int \mathcal{D}A_{\mu} \int \mathcal{D}g(\theta) \ \prod_{x} \delta [\partial^{\mu}A_{\mu}] \det \left| \partial^{\nu}D^{ab}_{\nu}\right| \ e^{i S[A]} .$$
Clearly, nothing is there to prevent you from factoring out the infinite volume of the gauge group ##\mbox{Vol}(\mathcal{G}) = \int \mathcal{D} g##. The only important point to observe is this: your choice of gauge fixing ##G^{a}[A]##, which defines the local cross section on the principal fibre bundle (a gauge slice transverse to every ##\mathcal{G}##-orbit), must not come with a vanishing Jacobian, i.e., the matrix ##\Delta[A] = \frac{\delta G^{a}}{\delta \theta_{b}}\big|_{\vec{\theta} = 0}## must have an inverse.
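A finite-dimensional caricature may help to see why the manipulation is harmless (a sketch in the spirit of the toy example often used to motivate the trick, not taken from the posts above): for a rotation-invariant function ##f(|\vec{x}|)## on the plane, with "gauge group" ##SO(2)## and "gauge fixing" ##G(\vec{x})=x_2##, the analogue of the Faddeev-Popov identity is
$$1=\Delta(\vec{x})\int_0^{2\pi} d\alpha\;\delta\big(G(\vec{x}_\alpha)\big),
\qquad \Delta(\vec{x})=\frac{|\vec{x}|}{2},$$
where ##\vec{x}_\alpha## is ##\vec{x}## rotated by ##\alpha## (the factor ##1/2## appears because the slice ##x_2=0## meets each orbit twice). Inserting this, exchanging the ##d\alpha## and ##d^2x## integrations, and using the rotation invariance of ##f## and of the measure to shift ##\vec{x}_\alpha\to\vec{x}## gives
$$\int d^2x\; f(|\vec{x}|)
=\int_0^{2\pi} d\alpha \int d^2x\;\Delta(\vec{x})\,\delta(x_2)\,f(|\vec{x}|)
=2\pi\int_0^{\infty} dr\; r\,f(r),$$
which is just the passage to polar coordinates with the group volume ##2\pi## pulled out front. The interchange is unproblematic here because the delta function always acts on a smooth integrand, exactly as in the gauge-fixed path integral above.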
 
  • #35
Thank you, @samalkhaiat! I think this is the correct answer, even though I didn't fully understand it. I will try to understand it along these lines. Thank you again.
 
  • #36
strangerep said:
Oh... my... Gawd!

I'd love to see you try and make that rigorous.

Schwartz said it is impossible. In fact he showed that it is impossible to define an (associative and commutative) multiplication in the class of generalized functions (distributions). Indeed, it is easy to show that such a multiplication leads to contradictions: consider the two most common distributions, the Dirac delta function ##\delta (x)## and the principal value function ##\mathscr{P}(1/x)##. We can rigorously prove the following relations
$$x \, \delta (x) = \delta (x) \, x = 0, \qquad x \, \mathscr{P}\!\left(\frac{1}{x}\right) = \mathscr{P}\!\left(\frac{1}{x}\right) x = 1 .$$
If a product existed, then, using these relations, we would have the following contradictory chain of equalities
$$0 = 0 \cdot \mathscr{P}\!\left(\frac{1}{x}\right) = \left( \delta (x) \, x \right) \mathscr{P}\!\left(\frac{1}{x}\right) = \delta (x) \left( x \, \mathscr{P}\!\left(\frac{1}{x}\right) \right) = \delta (x) .$$
So, in order to define a product of two distributions ##f## and ##g##, it is necessary that they have the following properties: ##f## must be just as irregular in the neighbourhood of an arbitrary point as ##g## is regular in that neighbourhood, and vice versa.
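For completeness, the second of the relations used above can be checked in one line by pairing with a test function ##\varphi## (a sketch; the first relation is the familiar ##\langle x\,\delta,\varphi\rangle=0\cdot\varphi(0)=0##):
$$\Big\langle x\,\mathscr{P}\frac{1}{x},\,\varphi\Big\rangle
=\lim_{\epsilon\to 0^+}\int_{|x|>\epsilon} dx\;\frac{x\,\varphi(x)}{x}
=\int_{-\infty}^{\infty} dx\;\varphi(x)
=\langle 1,\varphi\rangle ,$$
so ##x\,\mathscr{P}(1/x)=1## as distributions, and the contradictory chain then follows exactly as written.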
 
  • #37
GIM said:
Thank you, @samalkhaiat! I think this is the correct answer, even though I didn't fully understand it. I will try to understand it along these lines. Thank you again.
If I knew how much you know about the subject, I might be able to explain it better for you. Feel free to ask.
 
