Quantum Field Theory: An Introduction to Physics Beyond the Standard Model

quantumdude
Hello everyone!

This is the rebirth of my thread in PF v2.0 entitled "Do you know QM and SR?" Since I started that thread, a 2nd edition of the book (Warren Siegel's Fields) has been released. The URL is:

http://xxx.lanl.gov/pdf/hep-th/9912205

I'll post some of the more useful comments from the old thread shortly.
 
OK, first off my notes on the first subsection of Chapter 1 are on the web, here:

[Removed Broken Link]

Second, once it came to light that group theory is a central mathematical theme in QFT, the following links were provided:

http://members.tripod.com/~dogschool
http://www.wspc.com/books/physics/0097.html
http://www.wspc.com/books/physics/1279.html

Third, the subject of the physical meaning of the commutator came up. Here was my attempt to explain it:

In classical mechanics, a quantity A that does not depend explicitly on time and whose Poisson bracket with the Hamiltonian vanishes, {A,H} = 0, is a conserved quantity, and for every conserved quantity there is an associated symmetry. The result carries over to quantum mechanics if one takes the commutator [A,H] = AH − HA instead of the Poisson bracket: [A,H] = 0 implies that A is conserved (again, provided that A does not depend explicitly on time).

Commutators are also involved in the uncertainty principle. When [A,B] = 0, A and B are simultaneously diagonalizable. That means there are simultaneous eigenfunctions of both operators, so both observables can be measured simultaneously. If [A,B] ≠ 0, then there is an uncertainty relation. An example is [x,p] = iℏ.
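Here is a quick symbolic check of that last example (a sketch; the test function and helper are mine, not from the original discussion):

[code]
# Check [x,p] = i*hbar with p = -i*hbar d/dx acting on a test function f(x).
import sympy as sp

x, hbar = sp.symbols('x hbar')
f = sp.Function('f')(x)
p = lambda g: -sp.I * hbar * sp.diff(g, x)   # momentum operator in position space

commutator_on_f = x * p(f) - p(x * f)        # [x,p] applied to f
print(sp.simplify(commutator_on_f))          # -> I*hbar*f(x), i.e. [x,p] = i*hbar
[/code]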


Fourth, the subject of Hermiticity of operators came up. Again, here is what I had to say on the subject:

The dagger signifies the Hermitian conjugate: for a matrix A, A† is the complex conjugate of the transpose. An operator is Hermitian if A = A†. The Hermitian conjugate is important for (at least) two reasons:
1. A = A† implies that A has real eigenvalues, which is why all physical observables correspond to Hermitian operators.
2. An operator is unitary if A† = A⁻¹. (Someone correct me if this is wrong): all operators that form a group with Hermitian generators are unitary. Unitarity is necessary to guarantee probability conservation.


A(+) is "A-dagger". It seems that superscripts are not done in the same way here as they were in the last version of PF.

The following link on brackets was supplied:

[Removed Broken Link]

I'll post some more of the relevant discussion when I get a chance.
 
All operators that form a group with Hermitian generators are unitary. Unitarity is necessary to guarantee probability conservation.

Hrm, did you mean to say "unitary generators" instead of "hermitian generators"?


BTW that last link is a nice one! The section on generators makes more sense now; the way it was presented by Siegel seemed fairly arbitrary: it worked, but there was no motivation for it.

Hurkyl
 
Originally posted by Hurkyl
Hrm, did you mean to say "unitary generators" instead of "hermitian generators"?

Nope.

Take the time evolution operator, for instance:

U(t,t₀) = exp[−iH(t−t₀)/ℏ]

U is unitary, and it is generated by H, a Hermitian operator.
 
Peskin

As an aside, let me note that at superstringtheory.com people are doing (for free) a reading of Peskin & Schroeder. This week they are starting chapter nine.
 
Oh right, Lie generator. (bonks self) OK, I withdraw my objection. :smile:

Hurkyl
 
OK, back to my recap:

Fifth, the subject of symmetry in the context of indices came up. My remarks:

He's talking about symmetry/antisymmetry under an exchange of indices.
Take S_ij = x_i y_j + x_j y_i.
That combination is symmetric under the exchange i <--> j, because S_ij = S_ji.
Now take A_ij = x_i y_j − x_j y_i.
That combination is antisymmetric because A_ij = −A_ji.
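Numerically, the same split (a sketch with made-up vectors):

[code]
# S built from x and y is symmetric, A is antisymmetric, under index exchange.
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])
S = np.outer(x, y) + np.outer(y, x)   # S_ij = x_i y_j + x_j y_i
A = np.outer(x, y) - np.outer(y, x)   # A_ij = x_i y_j - x_j y_i

print(np.allclose(S, S.T))            # True:  S_ij =  S_ji
print(np.allclose(A, -A.T))           # True:  A_ij = -A_ji
[/code]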


Sixth, there was the ever-so-troublesome issue of correspondence. Specifically, what the heck are those terms of order ℏ² in the correspondence between the commutator and the Poisson bracket?

A guy (lethe) at another forum (sciforums) helped me out here:

Tom, you have written:

in my QM courses, I learned that the correspondence principle was simply (i/ℏ)[A,B]_QM → {a,b}_classical

the correspondence principle that you learned in QM does not always work like this.

a reminder:
the correspondence principle says: turn your classical Poisson bracket into your quantum commutator, and then write your classical function with the classical canonical variables replaced by the corresponding quantum operators.

for example, if the classical function is qp, the quantum operator cannot be QP, but rather must be (1/2)(QP + PQ). for functions of degree 3 or higher it gets more complicated: you also have to include some quantum anomaly terms when taking the Poisson bracket to the commutator.

for example, one can show that the correspondence principle takes {pq², p²q} = 3p²q² → (i/3ℏ)[P³,Q³] + O(ℏ²)

that term on the end means that we have to add a quantum anomaly term when we quantize the system, in order to be consistent with the assumptions of the correspondence principle. but notice that the anomaly term is of order ℏ².

you can also get quantum anomaly terms in your Hamiltonian when you, for example, try to quantize a non-Cartesian system. here again, the anomaly term is of order ℏ².

so in the classical limit, ℏ → 0, and all variables commute.

in the semiclassical limit, divide by ℏ, then take ℏ → 0: the anomaly terms disappear, the commutators simply become the Poisson brackets, and the Hamiltonian becomes the classical Hamiltonian.

i believe there is no self-consistent axiomatic way to specify the rules for quantization.

tom, i haven't looked at any of those documents you listed as references; do they treat this issue?

is it clear what's going on here? i'm getting this mostly from Ticciati; Shankar also has a bit about this.

i think this is what's going on with Siegel, but i ain't positive, so i would appreciate any feedback.

let me explain that in a little more detail:

let's say that i have a classical system that i want to quantize. this means that i want to write down a mapping z that, at the very least, satisfies the following conditions:

z(p) = P
z(q) = Q

z({f(p,q), g(p,q)}) = (1/iℏ)[z(f(p,q)), z(g(p,q))]

where f and g are any functions.

naively, you might want the mapping to be a little stronger. you might want it to take

z({f(p,q), g(p,q)}) = (1/iℏ)[f(P,Q), g(P,Q)]

but we will see that even the weaker condition above is not possible. using just these three assumptions, you can discover, by considering all the commutators, that z(p²) = P² + k for some constant number k, and similarly for q². then taking the poisson bracket of those two functions immediately shows that

z(pq) = (1/2)(PQ + QP)

so my second, stronger condition above is violated. if i wanted to preserve the functional form of the observable, i would have to require z(pq) = PQ.

this fact is not so distressing, however: because the classical variables commute, there is some ordering ambiguity, and furthermore this symmetric sum is required to make the quantum observable hermitian.

however, the problem gets worse when you go to third-degree functions.

if you calculate the poisson bracket for those third-degree terms i mentioned above, then plug in the mapping for that poisson bracket and compare it with the commutator, you will see that the mapping no longer preserves the bracket-to-commutator isomorphism, except in the ℏ → 0 limit.

according to Ticciati, though, this is only of academic interest, since in QFT we never have to quantize systems with cross terms like that.
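As a quick symbolic check of the classical side of lethe's example (a sketch; the helper function is mine):

[code]
# Verify {p q^2, p^2 q} = 3 p^2 q^2, with {f,g} = df/dq dg/dp - df/dp dg/dq.
import sympy as sp

p, q = sp.symbols('p q')

def poisson(f, g):
    return sp.diff(f, q) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, q)

print(sp.expand(poisson(p * q**2, p**2 * q)))   # -> 3*p**2*q**2
[/code]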


Pellman and Alis also had some helpful comments, but unfortunately I did not copy them in time. :frown:
 
On to Fermions...

This section is written in anticipation of later studying Supersymmetry (SUSY). Looking at the superstringtheory.com site, I see that the two main mathematical topics unique to SUSY are *drumroll*

Anticommuting variables and Graded Lie Algebras

So, it's not surprising that we find both of those in Chapter 1 of a book written by a string theorist.

Hurkyl's thoughts:

So, I imagine then that the vector space involved in the problems is either a Grassmann algebra, or is implicitly extended to one, which leaves the |x> notation for a vector kinda moot because the vector is identified with the operator x, but I suppose kets are still useful for clarity's sake.
I imagine then that |0> is not the zero vector, but is instead the multiplicative identity for the algebra? That would explain how the one homework problem could possibly work:
|x> = e^(x a†)|0>

I also imagine then that each "anticommuting variable" is simply a generator of the algebra? So then any vector v in the algebra of a single anticommuting variable x can be written
v = a 1 + b x = a|0> + b|x> ?
And then for two anticommuting variables x and y:
v = a 1 + b x + c y + d xy = a|0> + b|x> + c|y> + d|xy> ?
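Hurkyl's two-variable case can be made concrete with a toy multiplication table (a sketch; the coefficient encoding is mine, not Siegel's):

[code]
# Elements a*1 + b*x + c*y + d*xy, stored as 4-tuples in the basis (1, x, y, xy),
# with x^2 = y^2 = 0 and y*x = -x*y.
def gmul(u, v):
    a1, b1, c1, d1 = u
    a2, b2, c2, d2 = v
    return (a1 * a2,                                # 1  component
            a1 * b2 + b1 * a2,                      # x  component
            a1 * c2 + c1 * a2,                      # y  component
            a1 * d2 + d1 * a2 + b1 * c2 - c1 * b2)  # xy component (y*x = -xy)

x = (0, 1, 0, 0)
y = (0, 0, 1, 0)
print(gmul(x, y))   # (0, 0, 0, 1):  x*y = xy
print(gmul(y, x))   # (0, 0, 0, -1): y*x = -xy
print(gmul(x, x))   # (0, 0, 0, 0):  x^2 = 0
[/code]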


And rutwig's invaluable advice:

The notation of Grassmann numbers comes from the old days, when the notation of c-numbers and q-numbers for complex numbers and operators in a Hilbert space was common. These numbers are in fact elements of the exterior algebra, and the anticommutation rule is nothing but the statement of the elementary property of the wedge product of linear forms.


A little addendum. All these commutation and anticommutation relations indeed show that we will have to enlarge the concept of Poisson bracket or Lie bracket to a more general structure that covers them. If we write the anticommutation as the ordinary bracket, the operators will be symmetric, and therefore do not belong to ordinary (Lie) structures. However, since the bosons behave like ordinary algebras and the boson-fermion action gives fermions, the fermions will have a module structure with respect to the bosons. It follows that these are exactly the ingredients needed to define a supersymmetry algebra.

Second addendum:
Since Tom made me curious with the question, I have had a look at the book and seen that brackets are formally introduced after the question of commutation/anticommutation of bosons and fermions. This is bad, as I see it, since in the end everything reduces to these or similar brackets, and the isolated presentation leads to confusion.


And Heumpje was quite helpful here, too:

For instance:
∫dc 1 = 0
∫dc c = 1
or
(d/dc)(c*c) = −c*
while
(d/dc)(cc*) = c*
where c is a Grassmann number and c* its complex conjugate.
When calculating a path integral you have the problem that, because of the minus signs introduced in the exponent, your integral becomes indefinite. To prevent this, Grassmann numbers are introduced. It doesn't solve anything in the end, since you're now stuck with a set of numbers that are nice from a mathematical viewpoint but hard to understand.
There is a nice introduction to the use of Grassmann numbers in:
"Quantum Many-Particle Systems" by Negele & Orland, Frontiers in Physics series from Addison-Wesley, 1988


That's all I want to bring over from PF v2.0 right now. I have a little bit on Lie groups (the 3rd subsection), but I want to try to pick it up here with Fermions next time, if for no other reason than to figure out what I am doing wrong on those exercises.
 
did you guys figure out what the half-bracket, half-brace notation [A,B} means? that was kind of confusing to me...
 
  • #10
Originally posted by lethe
did you guys figure out what the half bracket half brace: [A,B} notation means? that was kind of confusing to me...

This notation is meant to point out that you have two types of commutators: the ordinary Lie bracket [ , ] and the symmetric bracket { , } corresponding to the fermion action. But this is the worst possible notation, and it leads to confusion. It can all be expressed by using only the bracket [ , ] and requiring the Jacobi superidentity.
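For concreteness, the rule usually meant by the mixed notation is the graded bracket (a standard convention, written out here for reference):

[tex]
[A,B\} \;=\; AB \;-\; (-1)^{|A||B|}\,BA\,,
[/tex]

where |A| = 0 for bosonic (even) and |A| = 1 for fermionic (odd) operators: it is the anticommutator when both entries are fermionic, and the commutator otherwise. The "Jacobi superidentity" is the graded Jacobi identity this bracket satisfies.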

regards,
 
  • #11
Anybody get exercise IA2.2 on pg 45? Here's how I see it. (I'm going to use P instead of psi).

The delta function is P' − P, the most general function of P is a + bP, so the integrand is

(P' − P)(a − bP') = aP' − aP − bP'² + bPP'   (the P'² term vanishes)
= (a + bP)P' − aP

The anti-derivative of this is (a + bP)P'²/2 − aPP' = aPP', but what are the limits of integration for P'? All "P-space"? What space is that, anyway? And does it even have the necessary characteristics (e.g., the correct topology) to allow integration to be done with anti-derivatives?

I don't see how we can get a + bP out of this.

- Todd
 
  • #12
Originally posted by pellman
Anybody get exercise IA2.2 on pg 45? Here's how I see it. (I'm going to use P instead of psi).


I, too, am stumped here. Oh, Ruuuutwiiiig!

The delta function is P '- P, the most general function of P is a + bP, so the integrand is

Whoa, are you saying that the delta function is the simple difference of P and P'? I thought it was a distribution defined similarly to the delta function for "normal" (commuting) variables?

The anti-derivative of this is (a + bP)P'^2/2 - aPP' = aPP', but what are the limits of integration for P'?

Siegel explicitly states that the integration is indefinite.

I found a paper at the LANL arXiv that gives a rundown of this Grassmann calculus. I will dig up the link and post it here, as well as take another crack at this one.
 
  • #13
Originally posted by Tom
I, too, am stumped here. Oh, Ruuuutwiiiig!

OK, here is another assertion of Siegel's which is not very clear. The intention here is to generalize the known analysis to "superanalytic" functions, that is, functions having commuting and anticommuting variables. Let v be an anticommuting variable and let it be given an infinitesimal displacement by an (infinitesimal) odd supernumber dv. If f: F → G (F the odd supernumbers, G a Grassmann algebra) is a function, then f(v) must also have a displacement, with:

df(v) = dv[(d/dv)f(v)] = [f(v)(d/dv)]dv   (**)

(d/dv) acting from the left is called the left derivative operator, and acting from the right, the right derivative.

Now, f(v) = av + b is the most general solution to (**). In fact, expand v as a series in the Grassmann algebra, and similarly f(v); regard the coefficients of the latter as functions of the coefficients of v, and vary them infinitesimally, so that the expression for dv is obtained. Now look at the conditions which the coefficients of f(v) must satisfy in order to agree with (**).
Since f is linear, it suffices to give sense to ∫dv and ∫v dv. Now, if the equation (known for distributions)

∫[(d/dx)f(x)]dx = 0

shall also hold for anticommuting beasts, then

∫dv = 0 and ∫v dv = y (y a supernumber). Thus

∫f(v+a)dv = ∫f(v)dv and

∫f(v)[(d/dv)g(v)]dv = ∫[f(v)(d/dv)]g(v)dv   (left derivative on the left-hand side, right derivative on the right)
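With the normalization y = 1, these rules do give back the most general function in IA2.2. A sketch (using ψ'² = 0 and ψψ' = −ψ'ψ, with a and b ordinary numbers):

[tex]
\int d\psi'\,(\psi'-\psi)(a+b\psi') \;=\; \int d\psi'\,(a\psi' - a\psi + b\psi'\psi) \;=\; a + b\psi\,,
[/tex]

since ∫dψ' ψ'X = X for X independent of ψ', while ∫dψ' X = 0. So δ(ψ' − ψ) = ψ' − ψ really does act as a delta function under the Berezin rules.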
 
  • #14
Originally posted by Tom

Whoa, are you saying that the delta function is the simple difference of P and P'? I thought it was a distribution defined similarly to the delta function for "normal" (commuting) variables?
That's the problem. We're trying to show that δ(ψ) = ψ. (I've got this math-symbol thing figured out now.)


Originally posted by rutwig
Now, if the equation (known for distributions)

∫[(d/dx)f(x)]dx = 0

shall also hold for anticommuting beasts,
Why should it hold at all? For a plain old commuting scalar variable, for instance,

∫[(d/dx)f(x)]dx = f(b) − f(a)

if b and a are the endpoints. In any case, it would only be special cases that vanish, right?
 
  • #15
Easier(?) question

I'm really more interested in Ex IA2.3 on page 47. (Note: there is a correction to this exercise on the errata page.)

Can someone please show me how

{a,a*} = {ζ, −∂/∂ζ}
= −ζ ∂/∂ζ − (∂/∂ζ)ζ
= 1 ?

If ζ were a usual commuting variable, then [ζ, −∂/∂ζ] = −ζ ∂/∂ζ + (∂/∂ζ)ζ = 1 is easy to show. How does the anticommuting nature of ζ change the behavior of the derivative here?

(Edited for more)

1.
{a,a*}ψ(ζ) = {ζ, −∂/∂ζ}ψ(ζ)
= −ζ(∂/∂ζ)ψ(ζ) − (∂/∂ζ)[ζψ(ζ)]
= −ζ(∂/∂ζ)ψ(ζ) + ζ(∂/∂ζ)ψ(ζ) − 1 × ψ(ζ)
(the plus sign arises because I guess the zeta and the derivative anticommute, although I don't really see it.)
= −ψ(ζ)
=> {a,a*} = −1

2. Another approach
Let ψ(ζ) = A + Bζ. Then...

{a,a*}ψ(ζ) = −ζ(∂/∂ζ)(A + Bζ) − (∂/∂ζ)[ζ(A + Bζ)]
= −Bζ − (∂/∂ζ)(Aζ)   (ζ² = 0)
= −Bζ − A
= −ψ(ζ)
=> {a,a*} = −1

(even more)

Okay. The reason 1 and 2 give {a,a*} = −1 is that I am working in the <ζ|ψ> representation, and the operators are acting on the bra instead of the ket, and the negative derivative becomes a positive derivative in that case. That is,

<ζ|a = (a*|ζ>)* = (−∂/∂ζ |ζ>)*
= +∂/∂ζ <ζ|

and

<ζ|a* = <ζ|ζ

Why we get the plus sign is still a mystery to me.
 
  • #16


Originally posted by pellman
I'm really more interested in Ex IA2.3 on page 47. (Note: there is a correction to this exercise on the errata page.)


Again there is a mistake of signs in the book, due to an unconventional choice. The definition of the fermionic oscillator is fixed, and set for convenience (see the realization of the Heisenberg algebra). Now, if we set a|θ> = θ|θ>, then |θ> = exp(θa*)|0> and we obtain a*|θ> = (∂/∂θ)|θ>.
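These relations can be checked in the two-dimensional matrix representation (a sketch, not from the book):

[code]
# a = [[0,1],[0,0]], a* = transpose; then {a,a*} = 1 and a^2 = 0.
import numpy as np

a = np.array([[0.0, 1.0], [0.0, 0.0]])   # annihilation: a|1> = |0>, a|0> = 0
ad = a.T                                  # creation operator a*

print(np.allclose(a @ ad + ad @ a, np.eye(2)))   # True: {a, a*} = 1
print(np.allclose(a @ a, 0))                     # True: a^2 = 0 (exclusion)
[/code]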

Strong recommendation: take another book, Siegel has not been revised carefully. The text I used to recommend can be found at:

http://arturo.fi.infn.it/casalbuoni/lezioni99.pdf
 
  • #17
Originally posted by pellman
Why should it hold at all? For a plain old commuting scalar variable, for instance,

For anticommuting variables, the comparison with ordinary ones leads inevitably to confusion. Conventional prejudices must be given up, since here no measure theory plays a role. And it is required to hold.
 
  • #18


Originally posted by rutwig

Strong recommendation: take another book, Siegel has not been revised carefully. The text I used to recommend can be found at:

http://arturo.fi.infn.it/casalbuoni/lezioni99.pdf
Thanks, rutwig!
 
  • #19
I have corresponded with Siegel about ExIA2.3 and he has added another comment to the errata page. However, his comment is brief and may still be confusing.

If any of you are interested, just let me know and I will type up the exercise as it should correctly appear and post it here.

- Todd
 
  • #20
Originally posted by pellman
If any of you are interested, just let me know and I will type up the exercise as it should correctly appear and post it here.

Could ya?

Thanks.

Rutwig--I am printing out that other book. It looks much more conventional, which is good, but I want to ask you something. What would it take for us to be able to get through Siegel's book? I ask because he has a trilogy of books (QFT, SUSY, and String Theory) online, and I was hoping to get through them all.
 
  • #21
Originally posted by Tom
Rutwig--I am printing out that other book. It looks much more conventional, which is good, but I want to ask you something. What would it take for us to be able to get through Siegel's book? I ask because he has a trilogy of books (QFT, SUSY, and String Theory) online, and I was hoping to get through them all.

I fear that I do not fully understand what you mean. The problem I see with Siegel's books is that they have grown out of his lectures at Stony Brook, and the texts have been conceived in that sense; that is, I do not have the impression that they were written for people not attending the lectures. This would explain the occasional imprecise points, or at least places where it is not transparent what exactly is meant. They are excellent books, but my reading is that the topics should not be unknown to the reader.
On the question of SUSY, a knowledge of the representation theory of the Lorentz (alas, Poincaré) group is recommended, specifically to see what the Dirac, Weyl, and Majorana spinors are (this follows at once from the point of view of Clifford algebras, which is the underlying formalism used to define the Dirac matrices, and corresponds to the natural generalization of space reflections for the covering group of SO(3)).
Also some Yang-Mills formalism, etc. Physically, it is supposed that the reader has followed or is following regular physics lectures. Any graduate or undergraduate (I don't know well the equivalence of the European/American educational subdivisions) should not have problems with the physical content. In any case, since I have not yet seen the two other books, I cannot comment with full clarity. I will comment on this later.
 
  • #22
Ex IA2.3(b) with corrections (part (a) is okay as is)

Define eigenstates of the annihilation operator ("coherent states") by

a|ζ> = ζ|ζ>

where ζ is anticommuting. Show that this implies

a†|ζ> = (−∂/∂ζ)|ζ>
|ζ> = exp(−ζa†)|0>
|ζ′ + ζ> = exp(−ζ′a†)|ζ>
x^(a†a)|ζ> = |xζ>
<ζ|ζ′> = e^(−ζ*ζ′)
∫dζ*dζ e^(+ζ*ζ) |ζ><ζ| = const
(I haven't gotten this last one yet and am not sure of the normalization. It's probably equal to π or 2π.)

Define wave functions in this space, ψ(ζ*) = <ζ|ψ>. Taylor expand them in ζ*, and compare this to the usual two-component representation using |0> and a†|0> as a basis.

Note:

You can't really show that the anticommutator and a|ζ> = ζ|ζ> alone necessarily imply a†|ζ> = (−∂/∂ζ)|ζ>. You also need the expression for <ζ|ζ′>. Or instead you can derive <ζ|ζ′> by assuming a†|ζ> = (−∂/∂ζ)|ζ>. See this thread for more: https://www.physicsforums.com/showthread.php?s=&threadid=791

Also, keep in mind that <ζ|a = (a†|ζ>)† = (−∂/∂ζ*)<ζ|.

I haven't attempted part c yet.
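As a quick consistency check of the first two relations in the two-component basis (a sketch, using ζ² = 0, aζ = −ζa, and a a†|0> = |0>):

[tex]
|\zeta\rangle = e^{-\zeta a^\dagger}|0\rangle = |0\rangle - \zeta a^\dagger|0\rangle\,,\qquad
a|\zeta\rangle = -a\,\zeta\,a^\dagger|0\rangle = \zeta\,a a^\dagger|0\rangle = \zeta|0\rangle\,,
[/tex]

and on the other side ζ|ζ> = ζ|0> − ζζa†|0> = ζ|0>, so a|ζ> = ζ|ζ> as claimed.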
 
  • #23
Looks like this thread is temporarily dead. I'm pausing with Fields too so that I can learn some perturbation theory, which I have until now neglected. To anyone reading, I for one am definitely returning to this book -- in about a month probably -- so don't be put off by the lack of recent posts.

- Todd
 
  • #24
I bailed on Siegel a while ago too, and switched back to Peskin & Schroeder (sometimes I feel like I change QFT books more often than I change clothes), at least until I can refresh myself on Hamilton-Jacobi theory and Poisson brackets, which I never really learned in the first place.

BTW, rutwig, can you recommend a good book/resource to learn about Clifford algebras?
 
  • #25
Originally posted by pellman
To anyone reading, I for one am definitely returning to this book -- in about a month probably -- so don't be put off by the lack of recent posts.

Same here. The big thing holding me up is that damn section on Fermions. I have never seen these "anticommuting numbers" before (operators yes, but numbers no). I have the book by Berezin, The Method of Second Quantization, to which Siegel refers, but I think I would have to read a book on Functional Analysis before I could get through that one.

Since that kind of heavy, rigorous treatment is clearly not needed to solve the exercises in Siegel, I am trying just to learn the Grassmann calculus. Unfortunately, there is no one consolidated source for it that I can find. So, I am trying to put together a comprehensive tutorial on it from the following documents:

http://www.physics.rutgers.edu/~coleman/mbody/pdf
Density Operators for Fermions
Quasiclassical and Statistical Properties of Fermion Systems
On Generalized Super-Coherent States
Fermions, Bosons, Anyons, Bolzmanions and Lie-Hopf Algebras

Maybe if my review is good enough, I'll send it to Am. J. Phys.

Who knows?
 
  • #26
BTW, rutwig, can you recommend a good book/resource to learn about Clifford algebras?

People have recommended to me the text by Pertti Lounesto. IIRC the title is Introduction to Clifford Algebras. Look up his name and Clifford algebras on Amazon or your favorite book site.
 
  • #27
Originally posted by damgo
BTW, rutwig, can you recommend a good book/resource to learn about Clifford algebras?

Well, in addition to the source given by selfAdjoint, I enumerate the following:

P. Freund, Introduction to Supersymmetry, Cambridge Univ. Press, if your interest in Clifford algebras concerns only their relation to supersymmetry. (Hey, Tom, this is a magnificent book; add it to your list.)

W. Greub, Multilinear Algebra, Springer. This book deals in its 10th chapter with Clifford algebras, in relation to inner product spaces (complex, real, etc.). A good source, but highly technical.

J. E. Gilbert, M. Murray, Clifford Algebras and Dirac Operators in Harmonic Analysis, Cambridge University Press. The title is quite descriptive.

J. M. Charlier, Tensors & the Clifford Algebra: Applications to the Physics of Bosons & Fermions, Dekker, 1993. A very pedagogical, excellent book.
 
  • #28
I was really enthused about this thread, and hoped I could contribute. But alas, I find Siegel incomprehensible overall. I have had to go back to Ryder and Peskin & Schroeder. If anyone wanted to go through those books (maybe starting with a later chapter so as not to be too pedestrian), I would really enjoy it (and benefit from it), I think - I have questions...
 
  • #29
New Workshop on Weinberg's Quantum Theory of Fields

Hi, I am trying to start a new workshop on Weinberg's Quantum Theory of Fields, on a new thread. At first glance I thought it was the worst book to learn QFT from, because it's deeper than all other books on the subject, but now I realize that QFT just is that deep! And at this level everything has the potential to be made clear. However, most people find Weinberg's book tough, including me, and without help are deterred from using it, which is why I thought of this workshop. Many of the bits I spent days going over and thought were just too hard for me I ended up finding simple, and so I for one would have benefited time-wise from a workshop.

Of course while I suggest following Weinberg's path, I would also suggest using other books for help, such as Peskin and Schroeder which contains some very well explained calculations.

Looking forward to your participation.
 
  • #30
I want to get back to rutwig's point, in his criticism of Siegel's text:
The problem I see with Siegel's books is that they have grown out of his lectures at Stony Brook, and the texts have been conceived in that sense; that is, I do not have the impression that they were written for people not attending the lectures. This would explain the occasional imprecise points, or at least places where it is not transparent what exactly is meant.

In my opinion, all textbooks have this defect. They all contain explanatory gaps that are intended to be filled in in the classroom, or perhaps in a questions/problems session. I have seen professional QFT physicists puzzled as to just what Peskin & Schroeder meant on a given page.

I think for many of us a short course on integral manipulations would be constructive, beginning with Calculus 102 integration by parts and covering all the delta-function tricks and such, with "dumb"-type problems that are not intended to extend the theory but just to build up a skill set (one of the early exercises in P&S is to define linear sigma theory!). If that worked, then on to Clifford algebras.
 
  • #31
Thanks for the quick reply!

However, I'm still confused. I understand that W(ℛ,p) must be an SO(3) element like ℛ, but why does that mean W(ℛ,p) = ℛ? Is it implied by

W(ℛ₁,p) W(ℛ₂,p) = W(ℛ₁ℛ₂,p),

and if so, why?
 
  • #32
W(Λ,p) = L⁻¹(Λp)ΛL(p) (2.5.10) is always an ordinary rotation, since L(p) boosts k to p, Λ transforms p to Λp, and L⁻¹(Λp) boosts Λp back to k. Now, L(p) is the identity for non-relativistic p, while L⁻¹(Λp) is the identity if in addition Λ is an ordinary 3-rotation ℛ. Thus in this case we clearly have W(Λ=ℛ,p) = ℛ. But this must also hold for relativistic p reached by a pure boost, since boosts are spatio-temporal rotations, not ordinary spatial rotations, and therefore cannot change ℛ.
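Here is a small numerical sanity check of that claim (my own sketch, using the standard boost L(p) that takes k = (m,0,0,0) to p):

[code]
# For a pure rotation R, check W(R,p) = L^{-1}(Rp) R L(p) = R numerically.
import numpy as np

def standard_boost(p, m):
    """Boost L(p) taking the rest-frame momentum k = (m,0,0,0) to (E, p)."""
    E = np.sqrt(m**2 + p @ p)
    L = np.eye(4)
    L[0, 0] = E / m
    L[0, 1:] = L[1:, 0] = p / m
    L[1:, 1:] = np.eye(3) + np.outer(p, p) * (E / m - 1) / (p @ p)
    return L

def rotation_z(theta):
    """Spatial rotation about z, embedded as a 4x4 Lorentz matrix."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.eye(4)
    R[1:3, 1:3] = [[c, -s], [s, c]]
    return R

m = 1.0
p = np.array([2.0, -0.5, 3.0])       # relativistic momentum, |p| >> m
R = rotation_z(0.7)
W = np.linalg.inv(standard_boost(R[1:, 1:] @ p, m)) @ R @ standard_boost(p, m)
print(np.allclose(W, R))             # True: the Wigner rotation is just R
[/code]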
 
  • #33


Originally posted by Si
Hi, I am trying to start a new workshop on Weinberg's Quantum Theory of Fields, on a new thread.

Hi,

That was my idea at first too. However, not everyone has Weinberg's books. Siegel's book is online and free, so I thought it would be good. The problem is, it has the aforementioned gaps. I understand what SelfAdjoint is saying (about all books having gaps), but it seems that the gaps in Siegel's book require a whole lot of digging. I am trying my best to fill in the gaps in Ch. 1 and get them posted ASAP.

As for a Weinberg workshop, be my guest. The only problem is that the number of people who can participate is limited by the number of people who have the book.
 
  • #34
It would be good to have a head count of how many are interested and what texts they have or prefer (is there an easy way to do that here?)

I have not looked at Weinberg in more than a cursory way, and it's big and expensive (3 volumes), so I would have to take the time to convince myself it's to my liking. I do like Ryder and P&S. But I'm interested enough in the topic to tag along with whatever text most others want to refer to, as long as there is a commitment to stick with it.

I agree this thread is nice since it stays visible.
 
  • #35
My reason for liking Weinberg: His approach feels much more "pure" than other books, particularly in that all axioms are simple and physically intuitive, and are introduced only when needed. Thus your knowledge doesn't get entangled, the various theorems become more powerful since they can then be extended to other theories, and theorems can be obtained more completely and generally yet made simpler. His brief yet comprehensive style helps one avoid getting confused, although sometimes he is a bit too brief!

Examples: He doesn't say that the causality condition follows from something to do with measurements at spacelike separation not affecting each other, but because it is necessary to make the S-matrix Lorentz invariant. He doesn't quantize fields of classical field theories such as electromagnetism, as there is no real physical reason to do so. Rather, he starts with "particles" (defined to be the eigenstates of the generators of the Poincaré group), then shows how fields arise from the need to satisfy the very obvious cluster decomposition principle. He gives complete proofs (e.g., he completes the theorem of Wigner, and shows that the physically unintuitive Dirac equation is not an axiom, but turns out to be the only possibility for spin-1/2 fields) and derives results in a very general manner (e.g., derives LSZ reduction for fields of arbitrary spin and proves the spin-statistics theorem). You can obtain a lot of well-known results in scattering theory (Ch. 3) without using fields.

By the way, don't buy all three volumes in one go! Volume one will teach you a lot about the basics, and can be read without the other two, which cover more advanced topics which no-one here (including myself) is interested in yet.
 
  • #36
Originally posted by Si
...He doesn't say that the causality condition follows from something to do with measurements at spacelike separation not affecting each other...

He does mention this in the second-to-last full paragraph on page 198.

Originally posted by Si
He doesn't quantize fields of classical field theories such as electromagnetism, as there is no real physical reason to do so. Rather, he starts with "particles" (defined to be eigenstates of the generators of the Poincaré group), then shows how fields arise from the need to satisfy the very obvious cluster decomposition principle.

Particle states arise since it's their masses and spins that label the irreducible representations of SO(3,1) under which they transform. The cluster decomposition principle is invoked to explain why and how the Hamiltonian must be constructed from creation and annihilation operators acting on these states. But it's Lorentz invariance that requires these operators be grouped together to form quantum fields that satisfy causality.

You're right that Weinberg doesn't construct QED by quantizing Maxwell, but he deduces it from the gauge-invariance principle that, as he shows, any quantum theory of massless particles with spin must satisfy.

Here's a question for you. Can you verify the expression on page 548 in section 13.4?

Originally posted by Si
By the way, don't buy all three volumes in one go! Volume one will teach you a lot about the basics, and can be read without the other two, which cover more advanced topics which no-one here (including myself) is interested in yet.

Actually, I have read all three.
 
  • #37
Originally posted by jeff
He does mention this in the second-to-last full paragraph on page 198.

Although this is not really his reason for making it so. He gives a more formal argument: that it's needed for Lorentz invariance.

Particle states arise since it's their masses and spins that label the irreducible representations of SO(3,1) under which they transform. The cluster decomposition principle is invoked to explain why and how the Hamiltonian must be constructed from creation and annihilation operators acting on these states. But it's Lorentz invariance that requires these operators be grouped together to form quantum fields that satisfy causality.

Yes, I didn't mention LI + causality for brevity. CDP + LI + causality (+ anything else?) leads to fields.

You're right that Weinberg doesn't construct QED by quantizing Maxwell, but he deduces it from the gauge-invariance principle that, as he shows, any quantum theory of massless particles with spin must satisfy.

Actually, here I felt was one of Weinberg's weaker points. I arrive at the same question I do with other QFT books when the author tries to derive the QED Lagrangian from the gauge-invariance principle: is the QED Lagrangian the only possibility (for Abelian fields)? Perhaps I missed something in Weinberg's argument.

Here's a question for you. Can you verify the expression on page 548 in section 13.4?

Probably not, as I am only on Chapter 12! However, I will try to look at it tonight.

Actually, I have read all three.

Sorry, I was referring only to those people who have never read Weinberg, and who want to learn / re-learn the basics. I think you're the first person I've met who has! What did you think? Was it the best approach for you, or is there another author you prefer?

By the way, Weinberg discusses his approach in Volume 1 in hep-th/9702027 (http://xxx.soton.ac.uk/abs/hep-th/9702027), with some nice caveats added.
 
  • #38
Originally posted by Si
What did you think? Was it the best approach for you, or is there another author you prefer?

Weinberg's are the only QFT texts I studied systematically, and they were used in the course I took in my final year as an undergraduate. Since string theory is my primary interest, the effective field theory approach was useful.

Originally posted by Si
Although this is not really his reason for making it so. He gives a more formal argument: that it's needed for Lorentz invariance.

You probably noticed this, but just in case: in the same paragraph on p. 198, Weinberg explains that it's because of the difficulty of defining measurability for Dirac fields that he avoided invoking causality.

Originally posted by Si
...here I felt was one of Weinberg's weaker points...: Is the QED Lagrangian the only possibility (for Abelian fields)?

Weinberg argues that QED is the most general possible Lorentz-invariant QFT coupling a massless particle of helicity ±1.

Originally posted by Si
By the way, Weinberg discusses his approach in Volume 1 in hep-th/9702027 (http://xxx.soton.ac.uk/abs/hep-th/9702027), with some nice caveats added.

This appears as one article in a collection entitled "Conceptual Foundations of Quantum Field Theory", based on a symposium on foundational aspects of QFT. The book includes responses to each lecture (including Weinberg's). The link to amazon.com is

 
  • #39
Well, I would certainly be impressed if Weinberg has a good explanation of how to arrive at the ground-state propagator. That is one thing that all authors seem to deal with in the most annoyingly fast and loose terms, never bothering to justify the Wick rotation, and sliding from initial and final position states to the ground state by a complete sleight of hand. Jeez! Even if they would just acknowledge the fact that they are cheating with the math, at least the poor students would not be left wondering what the hell we missed.
 
  • #40
I'm not sure I understand, but I will have a go.

If your problem is the apparently arbitrary insertion of the iε in the denominator: this correctly reproduces the position-space propagator.

If your problem is a deeper one concerning scattering theory, I had the same problem when learning QFT. Indeed, Weinberg's section 3.1 was for me the clearest physical and mathematical justification of the relationship between interacting and free states, but I still feel I've missed something and would like to discuss this more.
 
  • #41
Originally posted by planetology
I would certainly be impressed if Weinberg has a good explanation of how to arrive at the ground state propagator.

All results in Weinberg are explained in the sense that they're carefully presented in the context of his view of QFT as the unique consequence of reconciling quantum mechanics with special relativity, in which any quantum theory, even if it isn't a field theory (like string theory, for example), will at sufficiently low energies look like one.

Originally posted by planetology
...authors seem to deal with in the most annoyingly fast and loose terms

Can you be more specific about authors or methods?
 
  • #42
Originally posted by Si

If your problem is the apparently arbitrary insertion of the iε in the denominator: this correctly reproduces the position-space propagator.

Are you saying the key is knowing the right answer ahead of time? My complaint is with authors who claim to be doing a derivation, meaning the Wick rotation should stand convincingly on its own logic. I don't necessarily doubt that it does; I just want to see it spelled out so I can understand it, too. (I've seen one other justification for simplifying the integral, which is that the oscillatory terms cancel out; I know of a theorem to that effect, but not because any QFT author bothered to cite it.)

The other issue I mentioned is the derivation of the propagator from path integrals. What one gets directly is the integral representation for the position-state transition <q_f|exp(−iHt)|q_i>. But what is really of interest is the transition of energy states, not position states. The ground-state transition amplitude <0|exp(−iHt)|0> invariably magically appears, using the exact same integral representation that was derived for the position-state transition, with no mention of the fact that energy states are superpositions of position states. The Wick rotation to simplify the integral is sometimes there in the mix. Which authors? Can't recall them all off the top of my head... Ryder and Mandl & Shaw come to mind; but I've never seen it done in significantly more detail, really.
 
  • #43
Originally posted by planetology
...the Wick rotation should stand convincingly on its own logic.

The Wick rotation is an example of "analytic continuation", in which functions analytic on some domain are extended to functions analytic on some larger domain. The physical justification of Wick rotations lies in a theorem due to Riemann saying that analytic continuations are unique. So the result of "repackaging" amplitudes by Wick rotating them to a domain on which they converge is uniquely determined by the original oscillatory expression, i.e. no information has been added or removed; it's just been re-expressed in a form congenial to explicit calculation.
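The simplest instance of this (a toy sketch, not from the discussion above) is the Fresnel integral: the oscillatory integral agrees with the Gaussian formula √(π/a) analytically continued to a = −i.

[code]
# Compare int exp(i x^2) dx over the real line with sqrt(pi/a) at a = -i.
import sympy as sp

x = sp.symbols('x', real=True)
re = sp.integrate(sp.cos(x**2), (x, -sp.oo, sp.oo))   # Fresnel: sqrt(pi/2)
im = sp.integrate(sp.sin(x**2), (x, -sp.oo, sp.oo))   # Fresnel: sqrt(pi/2)
direct = re + sp.I * im
continued = sp.sqrt(sp.pi / (-sp.I))                  # Gaussian formula at a = -i

print(sp.N(direct), sp.N(continued))   # both ~ 1.2533 + 1.2533*I
[/code]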
 
  • #44
Originally posted by planetology
Are you saying the key is knowing the right answer ahead of time?

If I understand you correctly, yes. If you have Weinberg(?), look at equation 6.2.1 for the propagator. This definition is unique, and comes from the commutation relations of the creation and annihilation operators. The iε and the choice of sign of ε are introduced in the definition of the step function used for time ordering, 6.2.15. That it is correct follows from Cauchy's theorem, closing the contour at ±∞ and taking the residue. The opposite sign would simply give the wrong result. Thus one gets the usual definition of the propagator, 6.2.18. In the S-matrix, this is integrated over x to produce a momentum-space delta function. The momentum integral(s) are done by rotating in the p⁰ plane, as allowed by Cauchy's theorem, and this is called Wick rotation. The direction of rotation is determined by the sign of ε, since we cannot rotate across the pole.

Following section 9.2, the PI applies to any QM system with 'coordinates' q_i and 'momenta' p_j satisfying the canonical commutation relations. In old QM, q_i is the three spatial position coordinates of a particle. In QFT, q_i is the field at each point in space. The PI gives the matrix element of operators sandwiched between two q-eigenstates (in QFT, eigenstates of the field at a given spatial point). To get the same matrix element with the field eigenstates replaced by the vacuum requires calculating the scalar product of the vacuum with the field eigenstates. If the magnitude of the time is large, this change is equivalent to subtracting an iε from the Hamiltonian in the PI, as well as an irrelevant normalization of the matrix element. This is shown to be equivalent to subtracting an iε in the denominator of the propagators.

The S-matrix may then be calculated from the vacuum-vacuum matrix element of operators using 6.4.3 and the discussion before it.

This all rests on the axioms for how states behave when the magnitude of time is large. These are developed in chapter 3, although I still have problems with it.
 
  • #45
Originally posted by Si
The iε and the choice of sign of ε are introduced in the definition of the step function used for time ordering...

To get the same matrix element with the field eigenstates replaced by the vacuum requires calculating the scalar product of the vacuum with the field eigenstates. If the magnitude of the time is large, this change is equivalent to subtracting an iε from the Hamiltonian in the PI...

Just so there's no confusion: in both of these examples, the iε implements boundary conditions by closing the contour of integration in the upper or lower half-plane. Wick rotations, on the other hand, change oscillatory amplitudes into convergent ones by rotating contours of integration along the real axis so that they lie along the imaginary axis; an example is given in Weinberg I, p. 475.

Originally posted by Si
This all rests on the axioms for how states behave when the magnitude of time is large. These are developed in chapter 3, although I still have problems with it.

What is it that's bugging you about this?
 
  • #46
Originally posted by jeff
What is it that's bugging you about this?

One of my problems is that I'm not sure! Let me just state what I understand: We introduce the principle that there exist "free" states, where "free" means that acting on them with exp[−iHt] gives the same result as acting on them with exp[−iH₀t], where H₀ is called the "free" particle Hamiltonian. An example of a physical free state: Consider a large box with a proton and an electron in it, a two-particle state, with both particles in near momentum eigenstates. However, at any point x, if the probability to find one particle is finite, the probability to find the other at x will be small. So they must not be in perfect momentum eigenstates. This is why Weinberg gives a little "spread" (using g(α)) to these states, so they don't overlap. Here we assume that the Fourier transform of the p-states gives the spacetime locations. This free state must be a (near) eigenstate of H. Thus, such a state will change by a small amount after a finite time, but will change by a finite amount after a large time. 3.1.11 refers (in the Schrödinger picture) to states in the process of interaction at finite τ, which means the state on the LHS of 3.1.12 must be (for finite τ) heavily overlapping, and changing by a finite amount for a finite change in τ. Correct? I would like to regularize Weinberg's argument by making everything finite, and then let the size of the box and the magnitude of the time go to infinity, the spread go to zero (more slowly), etc., but I couldn't find an obvious way to do it.

Do we define H₀ to have the same spectrum as H because it is the only way to get 3.1.12? Or is it an axiom? And is there some physical reason for it? What is the physical interpretation of the eigenstates of H₀?

The fact that H has two sets of eigenstates even though a Hermitian operator should only have one means that 3.1.11 is not quite correct: there is a discontinuity in the theory (which I guess shows up in 3.1.17), so 3.1.11 should be spread with g(α)?
 
  • #47
I just realized I said "propagator," when what I really meant was vacuum-vacuum transition amplitude. Sorry for the confusion.

Originally posted by Si

Following section 9.2, the PI applies to any QM system with 'coordinates' q_i and 'momenta' p_j satisfying the canonical commutation relations. In old QM, q_i is the three spatial position coordinates of a particle. In QFT, q_i is the field at each point in space. The PI gives the matrix element of operators sandwiched between two q-eigenstates (in QFT, eigenstates of the field at a given spatial point). To get the same matrix element with the field eigenstates replaced by the vacuum requires calculating the scalar product of the vacuum with the field eigenstates. If the magnitude of the time is large, this change is equivalent to subtracting an iε from the Hamiltonian in the PI, as well as an irrelevant normalization of the matrix element. This is shown to be equivalent to subtracting an iε in the denominator of the propagators.

It makes sense to me, what you are saying, but looking at Weinberg (I don't own it), his notation is odd to me, and his mathematical treatment is so abbreviated that it's a bit difficult for me to see his full justification of the equivalence between the inner product and the subtraction of iε. I will study that section more carefully to see what more I can get out of it.

Originally posted by Jeff

The physical justification of Wick rotations lies in a theorem due to Riemann saying that analytic continuations are unique. So the result of "repackaging" amplitudes by Wick rotating them to a domain on which they converge is uniquely determined by the original oscillatory expression, i.e. no information has been added or removed; it's just been re-expressed in a form congenial to explicit calculation.

That seems like it would be helpful to see. Know a good reference?
 
  • #48
Originally posted by planetology
That [principle of analytic continuation] seems like it would be helpful to see. Know a good reference?

It's a basic result covered in every introductory course in complex analysis. Just look under analytic continuation in any complex analysis text.
 
  • #49
Originally posted by Si
Do we define H₀ to have the same spectrum as H because it is the only way to get 3.1.12? Or is it an axiom? And is there some physical reason for it? What is the physical interpretation of the eigenstates of H₀?

Intuitively, since the Ψ_α± are states of non-interacting particles, we should be able to define them in terms of some corresponding set of free-particle states Φ_α of a free-particle Hamiltonian H₀, in such a way that they have the same appearance as the Ψ_α±. This means that if we write H = H₀ + V, then since the Ψ_α± are eigenstates of the full physical Hamiltonian H, V must be chosen so that the masses appearing in H₀ are the physical masses, etc.

Originally posted by Si
This is why Weinberg gives a little "spread" (using g(α)) to these states...

In exp(−iHτ)Ψ on p. 109, Ψ describes a state seen by an observer at some point during a collision process. Now, the whole idea of defining scattering amplitudes in terms of in and out states depends on the assumption that the collision process occurs over some finite interval of time. If Ψ is an energy eigenstate Ψ_α, so that we know its exact energy, then by the time-energy uncertainty principle the collision process is spread out across all time, in which case the whole idea of in and out states goes down the toilet. We see this mathematically by noting that in that case exp(−iHτ)Ψ = exp(−iE_α τ)Ψ_α, so that taking τ → ±∞ achieves nothing, since exp(−iE_α τ) is purely oscillatory and so has no limit. Thus, since the Ψ_α± are effectively states of non-interacting particles, so that 3.1.1 requires they be energy eigenstates of the Hamiltonian H, we must consider ∫dα exp(−iE_α τ)g(α)Ψ_α± rather than just individual energy eigenstates Ψ_α±. Therefore, the correspondence between the Ψ_α± and the Φ_α must be given in terms of wave packets: ∫dα exp(−iE_α τ)g(α)Ψ_α± → ∫dα exp(−iE_α τ)g(α)Φ_α for τ → −∞ or τ → +∞, respectively.

Originally posted by Si
The fact that H has two sets of eigenstates even though a Hermitian operator should only have one means that 3.1.11 is not quite correct...

The Ψ_α± are states in the same Hilbert space (see the first full paragraph after 3.2.1), and by energy conservation their energy eigenvalues must be equal.
 
  • #50
Originally posted by planetology
It makes sense to me, what you are saying, but looking at Weinberg (I don't own it), his notation is odd to me, and his mathematical treatment is so abbreviated that it's a bit difficult for me to see his full justification of the equivalence between the inner product and the subtraction of iε. I will study that section more carefully to see what more I can get out of it.

I was put off Weinberg for a long time because his notation was different from other authors'. But once I got familiar with it, I found that it was in fact simpler and more general.
 