
Feynman diagrams

  1. Jan 18, 2009 #1
    I'm trying to learn Feynman diagrams from Srednicki's QFT book!

    Could someone retell and summarize in his or her own words what Srednicki is saying in the three paragraphs after equation 9.11?

    (I don't understand what the (2P)!/(2P-3V)! different combinations are, how Feynman diagrams are connected to 9.11, what counting factors are and where to put them... in short, I'm totally confused by what Srednicki is saying here.)

    thank you

    ps: the book is online, but as a PF newbie I'm not yet allowed to post links; maybe someone else could do it
  3. Jan 18, 2009 #2
    9.11 is an exact expression for [tex]\phi^3[/tex] theory. A particular term in this expression has 2P sources (from the free-field part) and 3V derivatives (from the interaction). The 3V derivatives can munch on the 2P sources in (2P)!/(2P-3V)! different ways. That's like choosing 3V people out of 2P people: for the first person you have 2P choices, for the second person 2P-1 choices, and for the 3V-th person 2P-3V+1 choices. Basically what the author is saying is that Feynman diagrams simplify all this math by collecting results that are algebraically the same. But the collecting isn't perfect, so you have to watch for symmetry factors.
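    If it helps, the falling-factorial count can be checked by brute force. Here's a small Python sketch (an illustration, not from the book) for the P=3, V=2 term, enumerating the ordered selections of 3V sources out of 2P directly:

```python
from itertools import permutations
from math import factorial

# Falling factorial (2P)!/(2P-3V)!: the number of ways 3V distinct
# derivatives can each act on a distinct one of 2P sources.
P, V = 3, 2
n_sources, n_derivs = 2 * P, 3 * V

closed_form = factorial(n_sources) // factorial(n_sources - n_derivs)

# Brute force: ordered selections of 3V sources out of 2P,
# i.e. injective assignments of derivatives to sources.
brute = sum(1 for _ in permutations(range(n_sources), n_derivs))

assert closed_form == brute == 720
```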
  4. Jan 19, 2009 #3
    Thanks Redx for answering!

    Could we, for concreteness, take a look at the case for E=0, V=4, P=6, for which the Feynman diagrams are given in figure 9.2?

    For that we have (2x6)!/(2x6-3x4)! = 479001600 combinations in which the 3x4 derivatives can act on the 2x6 sources. These 479001600 combinations break down into six classes; combinations belonging to one class are algebraically identical, and each class is represented by a Feynman diagram.
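    Just as a sanity check (a throwaway snippet, not from the book), the closed-form count for this case:

```python
from math import factorial

# (2P)!/(2P-3V)! for P=6, V=4: 12 sources, 12 derivatives
P, V = 6, 4
count = factorial(2 * P) // factorial(2 * P - 3 * V)
assert count == 479001600  # = 12!
```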

    Now my first problem.

    How do we determine the number of combinations for one Feynman diagram? Srednicki says by using counting factors: a counting factor of 3! for each vertex, a counting factor of V!, a counting factor of 2! for each propagator, and a counting factor of P!.

    I assume we multiply all these counting factors. That makes 3732480 combinations, meaning 3732480 combinations for each individual Feynman diagram. But how does that add up to the 479001600 combinations? Where and how do they cancel the numbers from the double Taylor expansions?
    Last edited: Jan 19, 2009
  5. Jan 19, 2009 #4
    My second problem, certainly related to my first one.

    Given a Feynman diagram, how do we write down the equation resulting from it? For concreteness again the case given in figure 9.2.

    Do we just multiply the vertex and propagator terms? How, and in which order? It would be great if Srednicki had shown that for some cases. Could someone here at PF do it for one or two diagrams in figure 9.2?

    thank you
  6. Jan 19, 2009 #5
    The Feynman diagrams that are shown are all "connected". Every point on the diagram can be traced to every other point without lifting your pencil. There are also disconnected diagrams that the author did not show that contribute to the 479001600 figure. The author didn't draw the disconnected diagrams for a reason. Later on in the chapter the author shows that you can ignore the disconnected diagrams - it's actually quite nice how that works out.

    So in short, there are more diagrams than the author shows, and these diagrams contribute to the remainder of the 479001600.
  7. Jan 19, 2009 #6
    OK, I understand. Thank you, RedX.

    But what about finding out the number of combinations for one Feynman diagram? As I understand Srednicki, it is done by multiplying all 'counting factors'. But why then is the number of combinations for each (connected) Feynman diagram in figure 9.2 the same? How do they cancel the factors coming from the double Taylor expansions? Why do the different Feynman diagrams in figure 9.2 have different symmetry factors though the number of combinations is the same?

    thank you
  8. Jan 19, 2009 #7
  9. Jan 20, 2009 #8
    RedX, so you are saying that symmetry factors do not enter equation 9.11 anywhere?
  10. Jan 20, 2009 #9



    Let's take the diagrams shown in fig.(9.1). These correspond to the term in eq.(9.11) with V=2 and P=3.
    The numerical coefficient (not including the i's, which I won't keep track of) is 1/(2^P P! 6^V V!) = 1/3456.

    There are 6 functional derivatives d/dJ(x); call the 6 arguments x1, x2 ,x3, w1, w2, and w3; later we will set x1=x2=x3=x and w1=w2=w3=w, and integrate over x and w. There are also 6 sources J(x); call the 6 arguments y1, z1, y2, z2, y3, z3. There are also propagators Delta(y1-z1), Delta(y2-z2), Delta(y3-z3).

    Let d(x) be short for d/dJ(x). Then we have to compute

    d(x1)d(x2)d(x3)d(w1)d(w2)d(w3) J(y1)J(z1)J(y2)J(z2)J(y3)J(z3)

    The result is 6! terms; each term is a product of 6 delta functions whose arguments are an x or a w minus a y or a z.

    Write down all 6!=720 terms. Multiply each by Delta(y1-z1) Delta(y2-z2) Delta(y3-z3). Now set x1=x2=x3=x and w1=w2=w3=w, and integrate over all the arguments (x, w, y's, and z's). The result is the integral over x and w of

    432 Delta(0)^2 Delta(x-w) + 288 Delta(x-w)^3

    These two expressions correspond to the two Feynman diagrams in fig.(9.1). Note that 432+288=720, the original number of terms. Multiplying 432 by 1/3456 (the numerical factor from eq.9.11) yields 1/8, and multiplying 288 by 1/3456 yields 1/12. Thus the contribution of the P=3, V=2 term in eq.(9.11) is 1/8 times the first Feynman diagram of fig.(9.1), and 1/12 times the second.

    The symmetry factors (8 and 12 in this case) can always be worked out by brute force in this manner.
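    This brute-force count is easy to automate. A Python sketch (an illustration, not from the book, with letter labels standing in for the derivative arguments) that reproduces the 432/288 split:

```python
from itertools import permutations
from collections import Counter

# Six derivative arguments hitting the six source slots
# (y1, z1, y2, z2, y3, z3) of Delta(y1-z1) Delta(y2-z2) Delta(y3-z3).
labels = ['x1', 'x2', 'x3', 'w1', 'w2', 'w3']

counts = Counter()
for perm in permutations(labels):
    # After setting x1=x2=x3=x and w1=w2=w3=w, each propagator is
    # Delta(0) (both ends x, or both ends w) or Delta(x-w) (mixed ends).
    pairs = [perm[0:2], perm[2:4], perm[4:6]]
    kind = sorted(''.join(sorted(a[0] + b[0])) for a, b in pairs)
    counts[tuple(kind)] += 1

# 432 terms give Delta(0)^2 Delta(x-w), 288 give Delta(x-w)^3
assert counts[('ww', 'wx', 'xx')] == 432
assert counts[('wx', 'wx', 'wx')] == 288
assert sum(counts.values()) == 720
```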
    Last edited: Jan 20, 2009
  11. Jan 21, 2009 #10
    Thanks Avodyne!

    (number of delta functions in 9.11 for some P, V)! = number of terms on the right-hand side of 9.11 for some P, V

    That is a nice insight you gave here. (If I have understood it correctly.)

    But I can't see how the 720 breaks down to 432 and 288. How do we compute those two numbers?

    But first and foremost I'm not clear how what you have shown here relates to what Srednicki is saying.

    What about the third paragraph after equation 9.11 in the book? He claims (3!)^V x V! x (2!)^P x P! terms for a particular diagram, which means the same number of terms for each diagram. Also, for V=2, P=3, it gives neither 432 nor 288.
  12. Jan 21, 2009 #11



    He says there is a factor of 2! for each propagator and a factor of 3! for each vertex; factors are multiplied. So he is claiming an overall factor of (3!)^V V! (2!)^P P! for each diagram (before consideration of symmetry factors), which would cancel the same factor in the denominator in eq.(9.11). This number is 3456 for V=2 and P=3.
    Start with Delta(y1-z1) Delta(y2-z2) Delta(y3-z3). Now replace each y or z with x1, x2, x3, w1, w2, w3. There are 720 ways to do this. But a lot of them result in the same expression. Since Delta(a-b) = Delta(b-a), two terms that differ only in the sign of the argument of one of the Delta's yield the same expression. This reduces the 720 terms to 720/(2!)^3 = 90 different expressions, each with a coefficient of (2!)^3 = 8. The order in which the 3 Delta's are written doesn't matter either, so this reduces the 90 expressions to 90/3! = 15 expressions, each now with a factor of (2!)^3 x 3! = 48. This is the factor of (2!)^P P!.

    The 15 terms come in two categories: those in which each Delta is a function of an x minus a w, and those in which one Delta is a function of an x minus a w, one Delta is a function of a w minus a w, and one Delta is a function of an x minus an x.

    Take the first category. The expression is Delta(x1-wi) Delta(x2-wj) Delta(x3-wk). There are 3! = 6 ways to assign the w's that pair with the x's. (The naive factor is (3!)^V V! = 72, and 6 is smaller than that by the "symmetry factor" of 12.) After setting all x's equal to x and all w's equal to w, we get the expression Delta(x-w)^3 with a coefficient of 48 x 6 = 288.

    Take the second category. Consider the factor of Delta(xi-wj). We have to choose which x and which w are in this factor; once this choice is made, the other Delta's are determined. We have 3 x 3 = 9 ways to make this choice. (The naive factor is (3!)^V V! = 72, and 9 is smaller than that by the "symmetry factor" of 8.) After setting all x's equal to x and all w's equal to w, we get the expression Delta(x-w) Delta(0)^2 with a coefficient of 48 x 9 = 432.
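    The same enumeration can confirm this grouping. A Python sketch (illustrative only, not from the book) that collapses the 720 terms modulo Delta(a-b)=Delta(b-a) and the ordering of the three Delta's:

```python
from itertools import permutations
from collections import Counter

labels = ['x1', 'x2', 'x3', 'w1', 'w2', 'w3']

exprs = Counter()
for perm in permutations(labels):
    # Delta(a-b) = Delta(b-a): sort within each pair; the order of the
    # three Delta factors in the product doesn't matter: sort the pairs.
    pairs = tuple(sorted(tuple(sorted(perm[i:i + 2])) for i in (0, 2, 4)))
    exprs[pairs] += 1

# 720 terms collapse to 15 distinct expressions, each with
# coefficient (2!)^3 * 3! = 48.
assert len(exprs) == 15
assert all(coeff == 48 for coeff in exprs.values())

# 6 expressions have all three Delta's mixed (an x paired with a w);
# the other 9 contain a Delta(x-x) and a Delta(w-w) factor.
mixed = sum(1 for pairs in exprs
            if all(a[0] != b[0] for a, b in pairs))
assert mixed == 6 and len(exprs) - mixed == 9
```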
    Last edited: Jan 21, 2009
  13. Jan 21, 2009 #12
    Thank you! Very appreciated. Things getting clearer now.

    But I have one other thing, concerning equation 9.12. Srednicki says C_I stands for a particular diagram. Then he goes on to say n_I is an integer that counts the number of C_I.
    Can particular diagrams show up many times? In figures 9.1-9.11 no diagram shows up even twice, and I don't see how one could. What am I misunderstanding here?

    Also, in paragraph two after equation 9.12 he says exchanges are made among different but identical connected diagrams. What should I make of that?

  14. Jan 21, 2009 #13



    He only drew the connected diagrams. For example, with E=0 and V=4, in addition to the connected diagrams shown in fig 9.2, there are three "disconnected diagrams". One is the first diagram in fig 9.1 multiplied by itself, one is the second diagram of fig 9.1 multiplied by itself, and one is the product of the two diagrams. By "product", I mean you multiply the mathematical expression corresponding to one diagram by the mathematical expression corresponding to the other diagram.
    In the case of the square of a diagram (for example), we consider exchanging all propagators and all vertices in one of the two identical diagrams for those in the other. This gives back the same mathematical expression.
  15. Jan 21, 2009 #14
    While on the subject of [tex]\phi^3[/tex] theory, is figure 17.1 a 2-particle irreducible diagram? Because if you remove two adjacent sides of the square, then the resulting diagram is still connected. But if you remove two opposite sides, then things are disconnected.
  16. Jan 22, 2009 #15
    Alright, I see. thanks a million guys... I'm probably coming back soon with more

    It would be great if PF had a sticky thread for each standard physics text, where over time questions and answers for particular paragraphs, equations, etc. in these books are accumulated. An accompanying thread to Srednicki's Quantum Field Theory, or to Peskin and Schroeder, or to Jackson.

    Posts would be categorized by chapters of the respective text. Also, people could retell certain sections or add a different viewpoint.
    Last edited: Jan 22, 2009
  17. Jan 28, 2009 #16
    What is the rationale behind switching from one argument x in J(x) to the six arguments x1, x2, x3, w1, w2, w3?

  18. Jan 29, 2009 #17
    By the way, the book is online: http://www.physics.ucsb.edu/~mark/qft.html

    My continuing confusion is at equation 9.11.

    Why switch from the argument x in J(x) to the six arguments x1, x2, x3, w1, w2, w3, as Avodyne explained in post 9?
  19. Jan 29, 2009 #18



    We have V=2, so we have

    [tex]\left[\int d^4x\left({\delta\over\delta J(x)}\right)^3\right]^2[/tex]

    The square of the whole expression can be written using two different dummy integration variables as

    [tex]\int d^4x\,d^4w\left({\delta\over\delta J(x)}\right)^3\left({\delta\over\delta J(w)}\right)^3[/tex]

    To keep track of the counting of terms, I then find it easier to write [d(x)]^3 = d(x1)d(x2)d(x3), and set x1=x2=x3=x at the end. But this is not necessary.
  20. Jan 29, 2009 #19
    thanks, Avodyne!
  21. Jan 30, 2009 #20
    One last, big, final question!

    We have a couple of operators ( the functional derivatives) acting on a couple of terms (sources and delta functions) on the left in 9.11.

    Why doesn't the first operator on the left act first, then the second operator on the left act second, and so forth? Why are there many combinations, and why do we have to take account of all of those combinations?

    I suspect it has to do with the integrals and dummy variables, but I can't see clearly why.

    again, thank you
  22. Jan 30, 2009 #21



    The derivatives commute, so you can have them act in any order. That they are functional derivatives doesn't play any particular role. Instead of a function J(x), suppose we had a vector J with n components J_i, i=1,...,n, and a constant matrix Delta_ij. Then we are trying to compute

    [tex]{1\over V! 6^V P! 2^P}\Biggl(\sum_i {d^3\over dJ^3_i}\Biggr)^{\!\!V}\Biggl(\sum_{j,k}J_j\Delta_{jk}J_k\Biggr)^{\!\!P}[/tex]

    where

    [tex]{d\over dJ_i}\,J_j = \delta_{ij}.[/tex]

    For V=2 and P=3, the answer is

    [tex]{1\over 8}\sum_{i,j}\Delta_{ii}\,\Delta_{ij}\,\Delta_{jj}+{1\over 12}\sum_{i,j}\Delta_{ij}^{3}[/tex]

    corresponding to the two diagrams in fig.9.1.
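    This finite-dimensional version is nice because it can be checked end to end with a few lines of code. A Python sketch (an illustration, not from the book; the symmetric matrix D is an arbitrary made-up stand-in for Delta_jk) that applies the derivatives to the polynomial and compares against the diagrammatic answer (1/8) sum_ij D_ii D_ij D_jj + (1/12) sum_ij D_ij^3:

```python
from collections import defaultdict
from fractions import Fraction
from math import factorial

# Polynomials in J_0..J_{n-1}, stored as {exponent tuple: coefficient}.
def poly_mul(a, b):
    out = defaultdict(Fraction)
    for ea, ca in a.items():
        for eb, cb in b.items():
            out[tuple(x + y for x, y in zip(ea, eb))] += ca * cb
    return dict(out)

def poly_diff(p, i):
    out = defaultdict(Fraction)
    for e, c in p.items():
        if e[i] > 0:
            e2 = list(e)
            e2[i] -= 1
            out[tuple(e2)] += c * e[i]
    return dict(out)

n, P, V = 2, 3, 2
D = [[2, 1], [1, 3]]  # hypothetical symmetric stand-in for Delta_jk

# quad = sum_{j,k} J_j D_jk J_k
quad = defaultdict(Fraction)
for j in range(n):
    for k in range(n):
        e = [0] * n
        e[j] += 1
        e[k] += 1
        quad[tuple(e)] += Fraction(D[j][k])
quad = dict(quad)

# (sum_i d^3/dJ_i^3)^V acting on (J.D.J)^P / (P! 2^P), then / (V! 6^V)
expr = {(0,) * n: Fraction(1, factorial(P) * 2**P)}
for _ in range(P):
    expr = poly_mul(expr, quad)
for _ in range(V):
    acted = defaultdict(Fraction)
    for i in range(n):
        d = expr
        for _ in range(3):
            d = poly_diff(d, i)
        for e, c in d.items():
            acted[e] += c
    expr = dict(acted)
lhs = expr.get((0,) * n, Fraction(0)) / (factorial(V) * 6**V)

# Diagrammatic answer: 1/8 sum D_ii D_ij D_jj + 1/12 sum D_ij^3
rhs = (Fraction(1, 8) * sum(D[i][i] * D[i][j] * D[j][j]
                            for i in range(n) for j in range(n))
       + Fraction(1, 12) * sum(D[i][j]**3
                               for i in range(n) for j in range(n)))
assert lhs == rhs
```

For this particular D both sides come out to 215/24; the check goes through the same way for any symmetric matrix you plug in.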
  23. Jan 30, 2009 #22
    Ahh, cool stuff, man!

    Many thanks, once again.
  24. Feb 1, 2009 #23
    What happens when 2P-3V<0? Srednicki says there are 2P-3V surviving sources and (2P)!/(2P-3V)! different combinations, but what happens when 2P-3V is negative?

  25. Feb 2, 2009 #24



    If 2P-3V<0, there are more derivatives with respect to J than there are J's, and so the result is zero. Just like (d/dx)^n x^m = 0 if n>m.
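    The vanishing shows up in the counting factor too; a quick Python check (illustrative only), using math.perm for the falling factorial, which is defined to be zero when you ask for more items than are available:

```python
from math import perm

# perm(n, k) = n!/(n-k)! when k <= n, and 0 when k > n.
P, V = 3, 2
assert perm(2 * P, 3 * V) == 720  # 2P = 3V: every source gets eaten

P, V = 3, 3                        # now 3V > 2P
assert perm(2 * P, 3 * V) == 0     # more d/dJ's than J's: the term vanishes
```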
  26. Feb 3, 2009 #25
    But why then can we take as many derivatives with respect to J as we like in equation 8.14?

    EDIT: Alright, I think I can see now why.

    Thanks, Avodyne
    Last edited: Feb 3, 2009