What are the key concepts in Srednicki's explanation of Feynman diagrams in QFT?

In summary, Srednicki is discussing how Feynman diagrams organize the combinatorics of the double Taylor expansion. Rearranging the vertices and propagators of a diagram produces many algebraically identical terms, and each class of identical terms is represented by one diagram. He argues that the number of terms corresponding to a single diagram can be found by multiplying counting factors, with symmetry factors accounting for the leftover overcounting.
  • #1
kexue
I'm trying to learn Feynman diagrams from Srednicki's QFT book!

Could someone retell and summarize in his or her own words what Srednicki is saying in the three paragraphs after equation 9.11?

(I don't understand what the (2P)!/(2P-3V)! different combinations are, how Feynman diagrams are connected to 9.11, what counting factors are and where to put them... in short, I'm totally confused by what Srednicki is saying here)

thank you

ps: the book is online, but as a PF newbie I'm not yet allowed to post links, maybe someone else could do so
 
  • #2
9.11 is an exact expression for [tex]\phi^3[/tex] theory. A particular term in this expression has 2P sources (from the free-field part) and 3V derivatives (from the interaction). The 3V derivatives can munch on the 2P sources in (2P)!/(2P-3V)! different ways. That's like choosing 3V people out of 2P people: for the first person you have 2P choices, for the second person 2P-1 choices, and for the 3V-th person 2P-3V+1 choices. Basically what the author is saying is that Feynman diagrams can simplify all this math you have to do, taking care of results that are algebraically the same. But the bookkeeping is not perfect, so you have to watch for symmetry factors.
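This falling-factorial count is easy to check numerically. A small Python sketch (the function name is mine):

```python
from math import factorial

def num_combinations(P, V):
    """Ways the 3V functional derivatives can act on the 2P sources:
    (2P)!/(2P-3V)!, which is zero when there are more derivatives
    than sources (2P - 3V < 0)."""
    if 2 * P - 3 * V < 0:
        return 0
    return factorial(2 * P) // factorial(2 * P - 3 * V)

print(num_combinations(3, 2))  # 720 for the fig. 9.1 term (V=2, P=3)
print(num_combinations(6, 4))  # 479001600 for the fig. 9.2 term (V=4, P=6)
```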
 
  • #3
Thanks Redx for answering!

Could we, for concreteness, take a look at the case for E=0, V=4, P=6, for which the Feynman diagrams are given in figure 9.2?

For that we have (2x6)!/(2x6-3x4)! = 479001600 combinations in which the 3x4 derivatives can act on the 2x6 sources. These 479001600 combinations break down into six classes; combinations belonging to one class are algebraically identical, and each class is represented by a Feynman diagram.

Now my first problem.

How do we determine the number of combinations for one Feynman diagram? Srednicki says by using counting factors. A counting factor of 3! for each V, a counting factor of V!, a counting factor of 2! for each P, a counting factor of P!.

I assume we multiply all these counting factors. That makes 3732480 combinations for each individual Feynman diagram. But how does that add up to the 479001600 combinations? Where and how do they cancel the numbers from the dual Taylor expansions?
 
  • #4
My second problem, certainly related to my first one.

Given a Feynman diagram, how do we write down the equation resulting from it? For concreteness again the case given in figure 9.2.

Do we just multiply the vertex and propagator terms? How, and in which order? It would be great if Srednicki had shown that for some cases. Could someone here at PF do it for one or two diagrams in figure 9.2?

thank you
 
  • #5
The Feynman diagrams that are shown are all "connected". Every point on the diagram can be traced to every other point without lifting your pencil. There are also disconnected diagrams that the author did not show that contribute to the 479001600 figure. The author didn't draw the disconnected diagrams for a reason. Later on in the chapter the author shows that you can ignore the disconnected diagrams - it's actually quite nice how that works out.

So in short, there are more diagrams than the author shows, and these diagrams contribute to the remainder of the 479001600.
 
  • #6
OK, I understand. Thank you, RedX.

But what about finding out the number of combinations for one Feynman diagram? As I understand Srednicki, it is done by multiplying all 'counting factors'. But why then is the number of combinations for each (connected) Feynman diagram in figure 9.2 the same? How do they cancel the factors coming from the double Taylor expansions? Why do the different Feynman diagrams in figure 9.2 have different symmetry factors though the number of combinations is the same?

thank you
 
  • #7
kexue said:
How do we determine the number of combinations for one Feynman diagram? Srednicki says by using counting factors. A counting factor of 3! for each V, a counting factor of V!, a counting factor of 2! for each P, a counting factor of P!.


That is correct. And notice that all the diagrams in 9.2 have 4 vertices and 6 propagators, so the counting factor is the same for all the diagrams shown in 9.2.

If you look at 9.11, for V=4, there is a factor of 1/4! in front of the V-terms. So the 4! combinations from rearranging the vertices in a Feynman diagram cancels this factor: 1/4!*4!=1. Similarly, there is a factor of 1/6! in the P-terms when P=6. Rearranging the propagators in a Feynman diagram 6! ways cancels this 1/6! factor. Note the 1/2 in the P-terms that is exponentiated to the P-power. For P=6 this gives a factor of (1/2)^6, which cancels the 2^6 rearrangements that result from switching the two sources or vertices that each propagator attaches to. Something similar happens for the 1/3! term exponentiated to the V-power (that's why there is a 1/6 in front of [tex]\phi^3[/tex] in [tex]\phi^3[/tex] theory).
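The cancellation described here can be checked numerically; a small sketch (function names are mine) confirming that the counting factor (3!)^V x V! x (2!)^P x P! equals the denominator of the V,P term in eq. 9.11:

```python
from math import factorial

def counting_factor(P, V):
    # (3!)^V for the three derivatives at each vertex, V! for permuting
    # the vertices, (2!)^P for swapping the two ends of each propagator,
    # and P! for permuting the propagators
    return 6**V * factorial(V) * 2**P * factorial(P)

def eq_9_11_denominator(P, V):
    # the 1/(V! 6^V P! 2^P) prefactor of the V,P term in eq. 9.11, inverted
    return factorial(V) * 6**V * factorial(P) * 2**P

for P, V in [(3, 2), (6, 4)]:
    assert counting_factor(P, V) == eq_9_11_denominator(P, V)
print(counting_factor(3, 2))  # 3456 for the fig. 9.1 term
```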

But this method of just counting the Feynman diagrams and not the actual combinations is not perfect, so a symmetry factor is needed, which is different for each diagram. Don't try to verify the symmetry factor of every diagram. What's important is that you know how to do one of them. Later on when you have nothing else to do you can try to verify all the symmetry factors, but it's not critical for the rest of the book that you can calculate every single one, so long as you understand why it happens.
 
  • #8
RedX, so you are saying that symmetry factors do not enter equation 9.11 anywhere?
 
  • #9
Let's take the diagrams shown in fig.(9.1). These correspond to the term in eq.(9.11) with V=2 and P=3.
The numerical coefficient (not including the i's, which I won't keep track of) is 1/(2^P P! 6^V V!) = 1/3456.

There are 6 functional derivatives d/dJ(x); call the 6 arguments x1, x2 ,x3, w1, w2, and w3; later we will set x1=x2=x3=x and w1=w2=w3=w, and integrate over x and w. There are also 6 sources J(x); call the 6 arguments y1, z1, y2, z2, y3, z3. There are also propagators Delta(y1-z1), Delta(y2-z2), Delta(y3-z3).

Let d(x) be short for d/dJ(x). Then we have to compute

d(x1)d(x2)d(x3)d(w1)d(w2)d(w3) J(y1)J(z1)J(y2)J(z2)J(y3)J(z3)

The result is 6! terms; each term is a product of 6 delta functions whose arguments are an x or a w minus a y or a z.

Write down all 6!=720 terms. Multiply each by Delta(y1-z1) Delta(y2-z2) Delta(y3-z3). Now set x1=x2=x3=x and w1=w2=w3=w, and integrate over all the arguments (x, w, y's, and z's). The result is the integral over x and w of

432 Delta(0)^2 Delta(x-w) + 288 Delta(x-w)^3

These two expressions correspond to the two Feynman diagrams in fig.(9.1). Note that 432+288=720, the original number of terms. Multiplying 432 by 1/3456 (the numerical factor from eq.9.11) yields 1/8, and multiplying 288 by 1/3456 yields 1/12. Thus the contribution of the P=3, V=2 term in eq.(9.11) is 1/8 times the first Feynman diagram of fig.(9.1), and 1/12 times the second.

The symmetry factors (8 and 12 in this case) can always be worked out by brute force in this manner.
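The brute-force count above can be reproduced by machine: enumerate all 6! = 720 assignments of the derivative arguments x1, x2, x3, w1, w2, w3 to the six source slots and classify the resulting product of propagators. A sketch (the dictionary keys are just labels of mine):

```python
from itertools import permutations

# Each permutation assigns the six derivative arguments to the six sources
# J(y1) J(z1) J(y2) J(z2) J(y3) J(z3); propagator k ties slots 2k and 2k+1.
counts = {'Delta(x-w)^3': 0, 'Delta(0)^2 Delta(x-w)': 0}
for perm in permutations(['x1', 'x2', 'x3', 'w1', 'w2', 'w3']):
    # After setting x1=x2=x3=x and w1=w2=w3=w, only the letter matters.
    mixed = sum(perm[2 * k][0] != perm[2 * k + 1][0] for k in range(3))
    if mixed == 3:                 # every propagator joins an x to a w
        counts['Delta(x-w)^3'] += 1
    else:                          # one x-x pair, one w-w pair, one x-w pair
        counts['Delta(0)^2 Delta(x-w)'] += 1

print(counts)  # {'Delta(x-w)^3': 288, 'Delta(0)^2 Delta(x-w)': 432}
```

The two counts are exactly the 288 and 432 quoted above, and they sum to 720.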
 
  • #10
Thanks Avodyne!

(number of delta functions in 9.11 for some P, V)! = number of terms on the right-hand side of 9.11 for that P, V

That is a nice insight you gave here. (If I have understood it correctly.)

But I can't see how the 720 breaks down to 432 and 288. How do we compute those two numbers?

But first and foremost I'm not clear how what you have shown here relates to what Srednicki is saying.

What about the third paragraph after equation 9.11 in the book. He claims (3!)^V x V! x (2!)^P x P! terms for a particular diagram. Which means the same number of terms for each diagram. Also, for V=2, P=3, it gives neither 432 nor 288.
 
  • #11
kexue said:
He claims (3!)^V x V! x (2!)^P x P! terms for a particular diagram.
He says there is a factor of 2! for each propagator and a factor of 3! for each vertex; factors are multiplied. So he is claiming an overall factor of (3!)^V V! (2!)^P P! for each diagram (before consideration of symmetry factors), which would cancel the same factor in the denominator in eq.(9.11). This number is 3456 for V=2 and P=3.
kexue said:
But I can't see how the 720 breaks down to 432 and 288. How do we compute those two numbers?
Start with Delta(y1-z1) Delta(y2-z2) Delta(y3-z3). Now replace each y or z with x1,x2,x3,w1,w2,w3. There are 720 ways to do this. But a lot of them result in the same expression. Since Delta(a-b)=Delta(b-a), two terms that differ only in the sign of the argument of one of the Delta's yield the same expression. This reduces the 720 terms to 720/(2!)^3 = 90 different expressions, each with a coefficient of (2!)^3 = 8. Also, the order in which the 3 Delta's are written also doesn't matter, so this reduces the 90 expressions to 90/3! = 15 expressions, each now with a factor of (2!)^3 x 3! = 48. This is the factor of (2!)^P P!.

The 15 terms come in two categories: those in which each Delta is a function of an x minus a w, and those in which one Delta is a function of an x minus a w, one Delta is a function of a w minus a w, and one Delta is a function of an x minus an x.

Take the first category. The expression is Delta(x1-wi) Delta(x2-wj) Delta(x3-wk). There are 3! = 6 ways to assign the w's that pair with the x's. (The naive factor is (3!)^V V! = 72, and 6 is smaller than that by the "symmetry factor" of 12.) After setting all x's equal to x and all w's equal to w, we get the expression Delta(x-w)^3 with a coefficient of 48 x 6 = 288.

Take the second category. Consider the factor of Delta(xi-wj). We have to choose which x and which w are in this factor; once this choice is made, the other Delta's are determined. We have 3 x 3 = 9 ways to make this choice. (The naive factor is (3!)^V V! = 72, and 9 is smaller than that by the "symmetry factor" of 8.) After setting all x's equal to x and all w's equal to w, we get the expression Delta(x-w) Delta(0)^2 with a coefficient of 48 x 9 = 432.
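The 15 distinct expressions in this breakdown are just the perfect pairings of the six arguments, and the 6/9 split into the two categories can be enumerated directly. A sketch (the helper and labels are mine):

```python
def pairings(items):
    # recursively generate all ways to split items into unordered pairs
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for i in range(len(rest)):
        for tail in pairings(rest[:i] + rest[i + 1:]):
            yield [(first, rest[i])] + tail

cats = {'Delta(x-w)^3': 0, 'Delta(0)^2 Delta(x-w)': 0}
for m in pairings(['x1', 'x2', 'x3', 'w1', 'w2', 'w3']):
    mixed = sum(a[0] != b[0] for a, b in m)   # pairs joining an x to a w
    cats['Delta(x-w)^3' if mixed == 3 else 'Delta(0)^2 Delta(x-w)'] += 1

print(cats)  # 6 pairings of the first category, 9 of the second; 15 total
print(48 * cats['Delta(x-w)^3'], 48 * cats['Delta(0)^2 Delta(x-w)'])  # 288 432
```

Multiplying each category by the factor of (2!)^P P! = 48 recovers the 288 and 432 term counts.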
 
  • #12
Thank you! Very appreciated. Things getting clearer now.

But I have one another thing concerning equation 9.12. Srednicki says C_I stands for a particular diagram. Then he goes on saying n_I is an integer that counts the number of C_I.
Particular diagrams can show up many times? In figures 9.1-9.11 no diagram shows up even twice, and I don't see how it could. What am I misunderstanding here?

Also, in paragraph two after equation 9.12 he says exchanges are made among different but identical connected diagrams. What should I make of that?

thanks
 
  • #13
kexue said:
Particular diagrams can show up many times? In figures 9.1-9.11 no diagram shows up even twice, and I don't see how it could.
He only drew the connected diagrams. For example, with E=0 and V=4, in addition to the connected diagrams shown in fig 9.2, there are three "disconnected diagrams". One is the first diagram in fig 9.1 multiplied by itself, one is the second diagram of fig 9.1 multiplied by itself, and one is the product of the two diagrams. By "product", I mean you multiply the mathematical expression corresponding to one diagram by the mathematical expression corresponding to the other diagram.
kexue said:
Also, in paragraph two after equation 9.12 he says exchanges are made among different but identical connected diagrams.
In the case of the square of a diagram (for example), we consider exchanging all propagators and all vertices in one of the two identical diagrams for those in the other. This gives back the same mathematical expression.
 
  • #14
While on the subject of [tex]\phi^3[/tex] theory, is figure 17.1 a 2-particle irreducible diagram? Because if you remove two adjacent sides of the square, then the resulting diagram is still connected. But if you remove two opposite sides, then things are disconnected.
 
  • #15
Alright, I see. thanks a million guys... I'm probably coming back soon with more

It would be great if PF had a sticky thread for each standard physics text, where over time questions and answers for particular paragraphs, equations, etc. in these books accumulate. An accompanying thread for Srednicki's Quantum Field Theory, or for Peskin and Schroeder, or Jackson.

Posts would be categorized by chapters of the respective text. Also, people could retell certain sections or add a different viewpoint.
 
  • #16
Avodyne said:
There are 6 functional derivatives d/dJ(x); call the 6 arguments x1, x2 ,x3, w1, w2, and w3; later we will set x1=x2=x3=x and w1=w2=w3=w, and integrate over x and w. There are also 6 sources J(x); call the 6 arguments y1, z1, y2, z2, y3, z3. There are also propagators Delta(y1-z1), Delta(y2-z2), Delta(y3-z3).

Let d(x) be short for d/dJ(x). Then we have to compute

d(x1)d(x2)d(x3)d(w1)d(w2)d(w3) J(y1)J(z1)J(y2)J(z2)J(y3)J(z3)

...

What is the rationale behind switching from one argument x in J(x) to the six arguments x1, x2, x3, w1, w2, w3?

thanks
 
  • #17
By the way, the book is online: http://www.physics.ucsb.edu/~mark/qft.html

My continuing confusion is at equation 9.11.

Why switch the argument x in J(x) to the six arguments x1, x2, x3, w1, w2, w3, as Avodyne explained in post 9?
 
  • #18
kexue said:
What is the rationale behind switching from one argument x in J(x) to the six arguments x1, x2, x3, w1, w2, w3?
We have V=2, so we have

[tex]\left[\int d^4x\left({\delta\over\delta J(x)}\right)^3\right]^2[/tex]

The square of the whole expression can be written using two different dummy integration variables as

[tex]\int d^4x\,d^4w\left({\delta\over\delta J(x)}\right)^3\left({\delta\over\delta J(w)}\right)^3[/tex]

I find it easier, for keeping track of the counting of terms, to write [d(x)]^3 = d(x1)d(x2)d(x3) and then set x1=x2=x3=x at the end. But this is not necessary.
 
  • #19
thanks, Avodyne!
 
  • #20
One last, big, final question!

We have a couple of operators ( the functional derivatives) acting on a couple of terms (sources and delta functions) on the left in 9.11.

Why doesn't the first operator on the left act first, then the second operator on the left act second, and so forth? Why are there many combinations, and why do we have to take account of all of those combinations?

I suspect it has to do with the integrals and dummy variables, but I can't see clearly why.

again, thank you
 
  • #21
The derivatives commute, so you can have them act in any order. That they are functional derivatives doesn't play any particular role. Instead of a function J(x), suppose we had a vector J with n components J_i, i=1,...,n, and a constant matrix Delta_ij. Then we are trying to compute

[tex]{1\over V! 6^V P! 2^P}\Biggl(\sum_i {d^3\over dJ^3_i}\Biggr)^{\!\!V}\Biggl(\sum_{j,k}J_j\Delta_{jk}J_k\Biggr)^{\!\!P}[/tex]

where

[tex]{d\over dJ_i}\,J_j = \delta_{ij}.[/tex]

For V=2 and P=3, the answer is

[tex]{1\over8}\sum_{i,j}\Delta_{ii}\Delta_{ij}\Delta_{jj}+{1\over12}\sum_{i,j}\Delta_{ij}^3[/tex]

corresponding to the two diagrams in fig.9.1.
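This discrete version is small enough to verify by machine. Here is a pure-Python sketch (the polynomial representation and helper names are mine) that builds (J.Delta.J)^3 for n=2 with an arbitrary symmetric Delta, applies (sum_i d^3/dJ_i^3) twice, divides by the V! 6^V P! 2^P prefactor, and compares against the stated answer:

```python
from collections import defaultdict
from fractions import Fraction
from math import factorial

n = 2                                    # two "spacetime points"
D = [[Fraction(2), Fraction(3)],         # an arbitrary symmetric Delta_jk
     [Fraction(3), Fraction(5)]]

# Polynomials in J_0..J_{n-1}: dict mapping exponent tuples to coefficients.
def pmul(p, q):
    r = defaultdict(Fraction)
    for ea, ca in p.items():
        for eb, cb in q.items():
            r[tuple(a + b for a, b in zip(ea, eb))] += ca * cb
    return dict(r)

def pdiff(p, i):                         # d/dJ_i
    r = defaultdict(Fraction)
    for e, c in p.items():
        if e[i] > 0:
            e2 = list(e); e2[i] -= 1
            r[tuple(e2)] += c * e[i]
    return dict(r)

# Q = sum_{j,k} J_j Delta_jk J_k
Q = defaultdict(Fraction)
for j in range(n):
    for k in range(n):
        e = [0] * n; e[j] += 1; e[k] += 1
        Q[tuple(e)] += D[j][k]
Q = dict(Q)

P, V = 3, 2
poly = {(0,) * n: Fraction(1)}
for _ in range(P):                       # build (J.D.J)^P
    poly = pmul(poly, Q)
for _ in range(V):                       # apply (sum_i d^3/dJ_i^3) V times
    acc = defaultdict(Fraction)
    for i in range(n):
        t = poly
        for _ in range(3):
            t = pdiff(t, i)
        for e, c in t.items():
            acc[e] += c
    poly = dict(acc)

lhs = poly.get((0,) * n, Fraction(0)) / (factorial(V) * 6**V * factorial(P) * 2**P)
rhs = (Fraction(1, 8) * sum(D[i][i] * D[i][j] * D[j][j]
                            for i in range(n) for j in range(n))
       + Fraction(1, 12) * sum(D[i][j]**3
                               for i in range(n) for j in range(n)))
print(lhs == rhs)  # True
```

The 1/8 and 1/12 coefficients come out exactly as stated, for any symmetric choice of Delta.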
 
  • #22
Ahh, cool stuff, man!

Many thanks, once again.
 
  • #23
What happens when 2P-3V<0? Srednicki says there are 2P-3V surviving sources and (2P)!/(2P-3V)! different combinations, but what happens when 2P-3V is negative?

thanks
 
  • #24
If 2P-3V<0, there are more derivatives with respect to J than there are J's, and so the result is zero. Just like (d/dx)^n x^m = 0 if n>m.
 
  • #25
But why then can we take as many derivatives with respect to J as we like in equation 8.14?

EDIT: Alright, I think I can see now why.

Thanks, Avodyne
 
  • #26
But can we please go back to post 21. I believe I understand how things play out in the discrete case for 9.11 and how the Feynman rules emerge from that. But if we had to compute 9.11 as it is, with integrals and functional derivatives, how does that go? As Avodyne explained in post 9, many different combinations of delta functions will result. But how? I computed 8.15, which I found quite intricate, with the chain rule, changing of dummy variables and so forth.

Just for the simplest case: 9.11 with V=1, P=1, and just one functional derivative instead of three. How does it work? How do I compute it?
 
  • #27
It's exactly the same in the continuous case as it is in the discrete case. You just have integrals instead of sums, Dirac delta functions instead of Kronecker deltas, etc.
 
  • #28
Exactly the same? I don't know; integrals with functional derivatives and sums over vectors or matrices do not look the same to me.

Can anybody give me a hint?

Just for the simplest case: 9.11 with V=1, P=1, and just one functional derivative instead of three. How does it work? How do I compute it?

Never mind, I have figured it out. Thank you.
 
  • #29
Hello Avodyne and everybody else, still there?

When Srednicki says the 3V functional derivatives can act on the 2P sources in (2P)!/(2P-3V)! different combinations, why do we have to take account of all of them? Because they possibly correspond to different physical processes, and these processes all contribute to the transition amplitude?

Why is one combination, one order of functional derivatives, not enough?
 
  • #30
This is a math problem, no physical reasoning is involved.

Note that (d/dx)^m x^n = n!/(n-m)! x^(n-m). We could say that the m derivatives can act on the n x's in n!/(n-m)! different combinations.
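This one-variable analogy can be made concrete with a tiny sketch (the function name is mine):

```python
from math import factorial

def diff_power(n, m):
    """Apply (d/dx)^m to x^n: the result is n!/(n-m)! * x^(n-m),
    or zero when there are more derivatives than powers of x (m > n)."""
    if m > n:
        return (0, 0)
    return (factorial(n) // factorial(n - m), n - m)

print(diff_power(6, 6))  # (720, 0): six derivatives on six x's, 6! terms
print(diff_power(2, 3))  # (0, 0): more derivatives than x's gives zero
```

The second case is exactly the 2P-3V<0 situation from post 23.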
 
  • #31
I see.

d(x1)d(x2)d(x3)d(w1)d(w2)d(w3) J(y1)J(z1)J(y2)J(z2)J(y3)J(z3)=
d(x1)d(x2)d(x3)d(w1)d(w2)[J(y1)J(z1)J(y2)J(z2)J(y3)Delta(w3-z3) +J(y1)J(z1)J(y2)J(z2)Delta(w3-y3)J(z3)+...]
= d(x1)d(x2)d(x3)d(w1)[J(y1)J(z1)J(y2)J(z2)Delta(w2-y3)Delta(w3-z3) + J(y1)J(z1)J(y2)Delta(w2-z2)J(y3)Delta(w3-z3) + ... + J(y1)J(z1)J(y2)J(z2)Delta(w3-y3)Delta(w2-z3) + ...] = and so on

makes 6! terms

thank you!
 
  • #32
Yes. But just to be absolutely clear on the notation, the delta's on the right-hand side are Dirac delta functions [tex]\delta^4(x-y)[/tex] and not Feynman propagators [tex]\Delta(x-y)[/tex].
 
  • #33
For the last few days I was celebrating that I had finally understood 9.11.

But as I read on, just two pages later Srednicki hit me with it. Out of the blue he claims that what we have computed so far, which happened to be connected diagrams, is not the only contribution to Z(J). We also have to take account of products of several connected diagrams.

How can that be? When I look at 9.11, from where in the world should the need arise to form products of several connected diagrams?

thanks
 
  • #34
After some thought I see now what he means.

Silly question!
 
  • #35
I found a website that discusses the chapter you're on, though I'm not sure if it'll be helpful:

http://www.physics.indiana.edu/~dermisek/QFT/qft-II-1-4p.pdf
 
