
Derivation of the Noether current

  1. Jul 1, 2014 #1

    CAF123

    Gold Member

    (The problem I have is really at the end; however, I have provided my whole argument in detail, for clarity and completeness, at the cost of perhaps making the thread unappealing to read.)
    1. The problem statement, all variables and given/known data
    (c.f. Di Francesco's book, p. 41) We are given that the transformed action under an infinitesimal transformation is $$S' = \int d^d x \left(1 + \partial_{\mu}\left(\omega_a \frac{\delta x^{\mu}}{\delta \omega_a}\right)\right) L\left(\Phi + \omega_a \frac{\delta F}{\delta \omega_a}, [\delta^{\nu}_{\mu} - \partial_{\mu}\left(\omega_a\frac{\delta x^{\nu}}{\delta \omega_a}\right)](\partial_{\nu}\Phi + \partial_{\nu}[\omega_a\frac{\delta F}{\delta \omega_a}])\right)$$

    To consider $\delta S = S' - S$, I am looking to expand the above result to first order.
    I can multiply the brackets above to obtain $$S' = \int d^dx\, L(\Phi + \omega_a \frac{\delta F}{\delta \omega_a}, \partial_{\mu} \Phi + \partial_{\mu} (\omega_a \frac{\delta F}{\delta \omega_a}) - \partial_{\mu} (\omega_a \frac{\delta x^{\nu}}{\delta \omega_a})\partial_{\nu} \Phi) +$$ $$ \int d^d x\, \partial_{\mu} (\omega_a \frac{\delta x^{\mu}}{\delta \omega_a})L(\Phi + \omega_a \frac{\delta F}{\delta \omega_a}, \partial_{\mu}\Phi + \partial_{\mu}(\omega_a \frac{\delta F}{\delta \omega_a}) - \partial_{\mu} (\omega_a \frac{\delta x^{\nu}}{\delta \omega_a})\partial_{\nu} \Phi)$$

    Then I can Taylor expand the above to first order in the parameters. This gives $$S' = \int d^d x \left[L(\Phi, \partial_{\mu}\Phi) + \omega_{a} \frac{\delta F}{\delta \omega_a} \frac{\partial L}{\partial \Phi} + \partial_{\mu} (\omega_{a} \frac{\delta F}{\delta \omega_a}) \frac{\partial L}{\partial(\partial_{\mu}\Phi)} - \partial_{\mu}(\omega_a \frac{\delta x^{\nu}}{\delta \omega_a})\partial_{\nu}\Phi \frac{\partial L}{\partial(\partial_{\mu}\Phi)}\right] + \int d^d x \partial_{\mu}(\omega_a \frac{\delta x^{\mu}}{\delta \omega_a})[..]$$ where [..] denotes the terms in the bracket of the preceding integral. All but the first of these terms will be dropped, since they are of higher order in the parameters. The variation is then $$\delta S = S'-S = \int d^d x \left[\omega_a \frac{\delta F}{\delta \omega_a}\frac{\partial L}{\partial \Phi} + \partial_{\mu} (\omega_a \frac{\delta F}{\delta \omega_a})\frac{\partial L}{\partial(\partial_{\mu}\Phi)} - \partial_{\mu}(\omega_a \frac{\delta x^{\nu}}{\delta \omega_a}) \partial_{\nu}\Phi\,\frac{\partial L}{\partial(\partial_{\mu}\Phi)}\right]+\int d^d x\, \partial_{\mu}(\omega_a \frac{\delta x^{\mu}}{\delta \omega_a})L(\Phi, \partial_{\mu}\Phi)$$

    Now perform the derivatives explicitly, grouping together terms in ##\partial_{\mu}\omega_a## and ##\omega_a## and imposing invariance of action: $$0 = \delta S = \int d^d x\,\partial_{\mu} \omega_a \left[\frac{\delta F}{\delta \omega_a} \frac{\partial L}{\partial(\partial_{\mu}\Phi)} - \frac{\delta x^{\nu}}{\delta \omega_a}\partial_{\nu}\Phi \frac{\partial L}{\partial(\partial_{\mu}\Phi)} + \frac{\delta x^{\mu}}{\delta \omega_a}L\right] +$$ $$ \omega_a\left[ \frac{\delta F}{\delta \omega_a}\frac{\partial L}{\partial \Phi} + (\partial_{\mu}\frac{\delta F}{\delta \omega_a})\frac{\partial L}{\partial(\partial_{\mu}\Phi)} - \partial_{\mu} (\frac{\delta x^{\nu}}{\delta \omega_a})\partial_{\nu}\Phi \frac{\partial L}{\partial(\partial_{\mu}\Phi)} + \partial_{\mu} (\frac{\delta x^{\mu}}{\delta \omega_a})L\right]$$

    The answer in the book is that ##\int d^d x\, j^{\mu} \partial_{\mu}\omega_a= 0##. The terms multiplying ##\partial_{\mu}\omega_a## are exactly ##j^{\mu}## in the book. So I would have the right answer, provided all the terms multiplying ##\omega_a## vanish. Indeed, the first two do, as a result of applying the classical equations of motion for the field ##\Phi##. But I do not see how the final two terms vanish (or indeed whether they do). I have tried using the E.O.M.s and integration by parts, to no success. I then thought I might be able to ignore these terms, but I had no justification. Any thoughts would be great.
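    For reference, writing out the current explicitly: the bracket multiplying ##\partial_{\mu}\omega_a## above is, up to an overall sign, the book's (2.141),
    $$j^{\mu}_a ~=~ \left\{\frac{\partial L}{\partial(\partial_{\mu}\Phi)}\,\partial_{\nu}\Phi - \delta^{\mu}_{\nu} L\right\}\frac{\delta x^{\nu}}{\delta \omega_a} ~-~ \frac{\partial L}{\partial(\partial_{\mu}\Phi)}\frac{\delta F}{\delta \omega_a} ~,$$
    i.e. my bracket equals ##-j^{\mu}_a## (the overall sign being a matter of the book's convention).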
    Many thanks.
     
    Last edited: Jul 1, 2014
  3. Jul 3, 2014 #2
    I'm not sure but "These sum up to zero if the action is symmetric under rigid transformations" means ##\delta S=0## if ##\omega_a## is independent of position, in which case the first term vanishes and the result is trivial.
     
  4. Jul 3, 2014 #3

    CAF123

    Gold Member

    Hi bloby,
    I did not really understand that paragraph in the book. It seems to me to be incorrect and contradictory in terms. The sentence 'The variation ##\delta S = S'-S## of the action contains terms with no derivatives of ##\omega_a##' is a contradiction of eqn (2.140) and of the result I obtained. Furthermore, it also seems to be a contradiction of the last sentence in the paragraph '##\delta S## involves only the first derivatives of ##\omega_a##'. Or perhaps I misunderstood something, however, I have seen this book be incorrect before.

    So, if ##\omega## is indeed independent of position, then ##\partial_{\mu}\omega_a = 0## identically. In that case, we are left with ##\int d^dx\, \omega_a [..] = 0##. If ##\omega \neq 0## always, then [..]=0. Is this what you mean? But this would give a different conserved current than the one in the book?

    If ##\omega## is dependent on position, then ##\partial_{\mu}\omega_a \neq 0##. So my thinking was to obtain (2.140), the terms multiplying ##\partial_{\mu}\omega## in my expression in the OP had to be ##j^{\mu}##. My expression agrees with (2.141). Therefore, the terms multiplying ##\omega## then have to vanish so as to still have (2.140). And that is my problem. They don't seem to.

    Thanks.
     
  5. Jul 3, 2014 #4
    From what I understand, your last equation in the OP gives the variation of the action for a general transformation, without assuming it is a symmetry or a rigid transformation, and with ##\omega_a## dependent on position. (This is the 'elegant trick': assume a general ##\omega## and set it constant at the end.)
    Then assuming that for a rigid transformation(first term vanishes) it is a symmetry, the second term vanishes.
    The variation that remains is 'only due to the varying part'/ non rigid part of the transformation.
    Then by an integration by parts they obtain (2.142), and since "##\delta S## should vanish for any position-dependent parameters ##\omega_a(x)##", the result follows.
    Perhaps someone more expert should help.
     
  6. Jul 3, 2014 #5

    CAF123

    Gold Member

    Yes, I did not impose the symmetry transformation yet.
    For a rigid transformation, ##\omega## is not a function of position, so the first term vanishes. This means that ##\omega \int d^d x [..] = 0## for a symmetry transformation, where [..] are the terms multiplying ##\omega## in the OP. Correct?
    I do not understand this part - what variation remains? Are you now considering the case where ##\omega## is a function of position?

    Thanks.
     
  7. Jul 3, 2014 #6
    You are right, it looks strange to me too.
    I would rather perform the integration by parts first; then we have ##0=\delta S =\delta S^1 - \delta S^2##, with ##\delta S^1=\int d^d x\, \omega_a(x) \partial_{\mu} j^{\mu}##, for arbitrary small changes in ##\Phi##, in particular those given by the transformation with ##\omega_a(x)##. This gives ##\delta S^1=\delta S^2##. If the infinitesimal change is then taken to be 'along the symmetry' (to be even more of a symmetry), we have ##\delta S^2=0##, and ##\omega = \text{constant}## can be pulled out of the integral.
    Someone else?
     
    Last edited: Jul 3, 2014
  8. Jul 3, 2014 #7

    strangerep

    Science Advisor

    Here's my $0.02 worth...

    1) If you really want to understand a derivation of Noether's theorem, go study Greiner & Reinhardt -- who do not skip steps, nor do they use self-indulgent "elegant" techniques that may well impress their peers, but leave students bewildered. And the authors claim the book is "pedagogical". Yeah, right. :grumpy:

    2) My take on the FMS derivation is as follows:

    First, the notation in eq(2.125) could be written more clearly as
    $$x'^\mu(\omega_a) ~=~ x'^\mu(\omega_a=0)
    ~+~ \omega_a \left( \frac{\delta x'^\mu}{\delta \omega_a} \right)_{\omega_a = 0}
    ~=~ x^\mu
    ~+~ \omega_a \left( \frac{\delta x^\mu}{\delta \omega_a} \right)_{\omega_a = 0} ~.$$
    IIUC, they can lose the prime on the last ##x^\mu## because only terms to ##O(\omega_a)## are retained. And ##x'^\mu## to 0'th order in ##\omega_a## is ##x^\mu##.

    A similar thing applies to their 2nd eqn in (2.125) -- it could be written more explicitly like this:
    $$\Phi'(x') ~=~ \Phi(x) ~+~ \omega_a \left( \frac{\delta F}{\delta \omega_a}(x) \right)_{\omega_a = 0} ~.$$

    The reason I emphasize the above is that$$\left( \frac{\delta x^\mu}{\delta \omega_a} \right)_{\omega_a = 0}$$and$$ \left( \frac{\delta F}{\delta \omega_a}(x) \right)_{\omega_a = 0}$$no longer have any dependence on ##\omega_a##, and we'll use this fact below.

    The next thing to realize is that we're considering arbitrary variations ##\omega_a##; in general they're arbitrary functions of ##x##.

    Think about this equation: $$ a f(x) + b g(x) ~=~ 0 ~,$$ where ##f,g## are arbitrary independent functions and ##a,b## are quantities (independent of ##x##) to be determined. Since ##f,g## can be anything, the only possibility is ##a=0=b##.

    That's essentially what's going on when FMS appear to consider rigid transformations separately.
    More explicitly, specialize my equation above to $$ a f(x) + b \frac{df(x)}{dx} ~=~ 0 ~.$$Since ##f(x)## is an arbitrary function, we may legitimately conclude that the coefficients of ##f## and ##df/dx## must vanish separately.
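    As a quick sanity check of that claim (my own example, not from the book): take ##f(x) = e^{\lambda x}## with ##\lambda## arbitrary. Then
    $$a f + b \frac{df}{dx} ~=~ (a + b\lambda)\, e^{\lambda x} ~=~ 0
    \quad\Longrightarrow\quad a + b\lambda ~=~ 0 \quad \text{for every } \lambda ~,$$
    which forces ##a = 0 = b##.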

    Now let's look at the terms in your integral equation:

    There should be an explicit domain on the integral sign, e.g., $$\int_\Omega \cdots ~~,$$ since the action can legitimately be minimized over an arbitrary domain. (Again, see Greiner & Reinhardt for a more careful treatment.) Since the domain is arbitrary, vanishing of the integral is only possible if the integrand vanishes. That leaves you with$$0 ~=~ \partial_{\mu} \omega_a \Big[ \cdots \Big] ~+~ \omega_a \Big[ \cdots \Big] ~.$$Applying my earlier remarks, and remembering that the ##\omega_a## are arbitrary, the coefficients in the big brackets may validly be set to 0 separately. But this is only true if those coefficients are indeed independent of ##\omega_a## -- and this is seen to be the case if my earlier more-explicit notation is used.

    HTH (sigh).
     
  9. Jul 4, 2014 #8

    CAF123

    Gold Member

    Hi strangerep,

    I am wondering if there is a mistake in eqn (2.126). Shouldn't it read ##\Phi'(x') - \Phi(x) = -i\omega_a G_a \Phi(x)##? (The generator ##G_a## is the full generator of the transformation so it should change both the field and the coordinate?)

    Could you explain what you mean by the statement '...consider rigid transformations separately'? Do you mean that in the equation ##0 = \partial_{\mu}\omega_a[...]_1 + \omega_a[...]_2##, if ##\omega \neq \omega(x)## then ##\partial_{\mu}\omega_a = 0##, so the first term drops (##[...]_1##, which constitutes ##j^{\mu}##, need not itself vanish), and then ##[...]_2 = 0## separately for the equation to hold?

    If ##\omega = \omega(x)##, then ##[...]_1 = 0## and likewise ##[...]_2=0##. I did not see this case in FMS.
    Thanks.
     
    Last edited: Jul 4, 2014
  10. Jul 4, 2014 #9

    strangerep

    Science Advisor

    The sentence preceding eqn (2.126) says:

    If you follow through to eqn (2.133), you end up with the correct generator for total angular momentum.

    That doesn't look right. Let's try again -- since I just realized my previous explanation was too simplistic... :blushing:
    [Edit: Actually, I should be more upfront about this: I'm having difficulty making sense of the FMS treatment, so take anything I say below with a grain of salt... :frown:]

    The relevant equation is $$\delta S ~=~ \int \partial_{\mu}\omega_a[...]_1 ~+~ \int\omega_a[...]_2 ~.$$ We specialize to the case ##\omega = const##, and require the action to be invariant under such transformations. The equation then becomes: $$0 ~=~ \int \omega_a[...]_2 ~=~ \omega_a \int [...]_2 ~.$$Since ##\omega_a\ne 0## in general, this implies ##\int [...]_2 = 0##.

    Then we need to deduce that ##\int \omega_a(x) [...]_2 = 0## is true generally. Afaict, this requires the (reasonable) assumption that global (rigid) symmetries hold for arbitrary regions of integration. In that case, ##\int [...]_2 = 0## implies ##[...]_2 = 0##.

    That leaves $$\delta S ~=~ \int \partial_{\mu}\omega_a[...]_1 ~.$$ Note that if we now perform integration-by-parts with an arbitrary domain of integration, there will in general be a boundary term. But if we impose the additional conditions that (1) the variations vanish on that boundary, and (2) that ##\delta S = 0## still, then an integration by parts gets the result: $$0 ~=~ \int \partial_{\mu}j^\mu_a \, \omega_a(x) ~,$$(where I've reinstated ##\omega_a##'s explicit ##x##-dependence).

    Now, ##\omega_a(x)## is still a very arbitrary function -- the only constraint is that it must vanish on the integration boundary. But it can be anything inside the boundary. This arbitrariness allows us to conclude that the integrand must vanish, and since ##\omega_a(x) \ne 0## in general, we get ##\partial_{\mu}j^\mu_a = 0##.
    [Edit: more rigorously, this follows from the fundamental lemma of calculus of variations, provided ##\omega_a(x)## satisfies conditions of continuity and differentiability outlined in that Wiki page.]
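    For completeness, the lemma in question says: if ##h## is continuous on the domain ##\Omega## and
    $$\int_\Omega h(x)\, \omega(x)\, d^dx ~=~ 0$$
    for every smooth ##\omega## vanishing on ##\partial\Omega##, then ##h \equiv 0## throughout ##\Omega##. Applied for each index ##a## with ##h = \partial_\mu j^\mu_a##, that's exactly the step used above.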

    This also explains why we couldn't do the integration-by-parts thing first. To get ##[...]_2 = 0##, we needed an extra condition. The 2nd step requires a more specific boundary on which ##\omega_a(x)=0##. But if ##\omega_a(x)## is a constant, it could only be trivially 0.

    [Late Edit:] Since I'm not confident about what I've said above, I'm now trying to do the problem from scratch. That might take a while...
     
    Last edited: Jul 5, 2014
  11. Jul 5, 2014 #10

    CAF123

    Gold Member

    It was the conceptual point of view I was concerned with: G is defined to be the generator that transforms both the coordinates and the field, and yet there seem to be no instances of the transformed coordinate system (the primed system) on the LHS of the equation. I realize this is off-topic from the main discussion in this thread, but I was wondering how Di Francesco obtained eqn (2.127). Here are my thoughts: expand ##\Phi(x')##, keeping the 'shape' of the field (as Greiner puts it) the same, so ##\Phi(x') \approx \Phi(x) + \omega_a \frac{\delta \Phi(x')}{\delta \omega_a}##. Inserting into (2.125) gives $$\Phi'(x') = \Phi(x') - \omega_a \frac{\delta x^{\mu}}{\delta \omega_a}\frac{\partial \Phi(x')}{\partial x^{\mu}} + \omega_a \frac{\delta F}{\delta \omega_a}(x)$$ which is nearly the right result, except Di Francesco has a prime on the x in the final term. My thinking was to define an equivalent function F in the primed system, i.e. ##F(\Phi(x)) \equiv F'(\Phi(x'))##. Are these arguments correct?

    If we want ##\int \omega_a(x)[...]_2 = 0## to hold generally and for arbitrary regions of integration, then can we not say that the integrand has to vanish, ##\omega_a(x) [...]_2 = 0##? Since ##\omega_a(x)## too is arbitrary (under no constraints), then ##[...]_2 = 0##. But then this does not use the rigid transformation anywhere. Or are you using the fact that, by implementing a rigid transformation (thereby constraining omega to be position independent), we have ##\int [...]_2 = 0##? Then for this to hold for all regions, we must have ##[...]_2 = 0##, which is used in the equation ##\int \omega_a(x)[...]_2##, making it vanish?
    I didn't quite understand this paragraph - could you possibly elaborate?

    Thanks.
     
  12. Jul 5, 2014 #11

    strangerep

    Science Advisor

    I'm still working through the problem from scratch. I want to check carefully whether the expression you obtained at the end of your OP is correct. If it's not, then none of the subsequent arguments matter... :frown:

    I haven't got there yet, but... I can't do any more tonight.

    So try to hang loose for a day or two (or three) until I can complete my detailed checks.
     
  13. Jul 6, 2014 #12

    strangerep

    Science Advisor

    Hmm. I just reached the same expression as you got at the end of your opening post.

    But something doesn't look right. From your original post (my emboldening)...

    I don't see how the emboldened statement is true. I'm guessing you intend integration by parts, but that introduces another ##\partial_\mu \omega_a## term, doesn't it? If not, please show explicitly...
     
  14. Jul 6, 2014 #13

    CAF123

    Gold Member

    Thanks for the check.

    You're right. What I did initially was to pass the integral through the ##\omega_a## at the front and write $$\int d^d x (\partial_{\mu}\frac{\delta F}{\delta \omega_a})\frac{\partial L}{ \partial(\partial_{\mu}\Phi)} = -\int d^d x\frac{\delta F}{\delta \omega_a} \partial_{\mu} \frac{\partial L}{\partial (\partial_{\mu}\Phi)}$$ but in the general case, ##\omega## is not constant, so what I did was incorrect. Does it help the problem at all?
     
  15. Jul 6, 2014 #14

    strangerep

    Science Advisor

    Oh, don't thank me yet. I have a feeling that things are deeply wrong about all of this.

    No, at least, not that I can see.

    In fact, I'm starting to think that FMS's whole treatment is RUBBISH. :mad:
    I probably shouldn't have tried (earlier in this thread) to fabricate justifications for it.

    Here's why I think it's rubbish. FMS express the coordinate transformation as
    $$\def\Vdrv#1#2{\frac{\delta #1}{\delta #2}}
    \def\Pdrv#1#2{\frac{\partial #1}{\partial #2}}
    x'^\mu ~=~ x^\mu + \omega_a \Vdrv{x^\mu}{\omega_a} ~,$$
    BUT... it's just meant to be an infinitesimal version of a coordinate transformation dependent on some parameters ##\omega_a##. Writing it out more carefully, it should be like this:
    $$x'^\mu(x,\omega) ~\approx~ x'^\mu \Big|_{\omega=0}
    ~+~ \omega_a \Pdrv{x'^\mu}{\omega_a} \Big|_{\omega=0}
    ~=~ x^\mu ~+~ \omega_a X^\mu_a(x) ~,$$where I've introduced new symbols ##X^\mu_a## which are independent of ##\omega_a## but are in general functions of ##x##.

    The point is that the ##\omega_a## are not functions of ##x## under any circumstances. So Sam's characterization (in your other thread) of FMS's stuff as being a "mess" seems accurate.

    Further, if you study (and re-study) Greiner's derivation carefully, he never makes use of anything like this. The ##\omega_a X^\mu_a(x)## correspond to his ##\delta x^\mu## but he never needs to split it up by introducing ##\omega##'s. And he completes the whole Noether derivation in just under 3 pages -- which is about the same as my attempt to reproduce FMS's steps. And Greiner never invokes integration by parts.

    Oh well, at least I've learned something out of this whole exercise. :cry:

    If you want to actually learn something over your summer, forget about the FMS crap and study Greiner's derivation in fine detail, and maybe also study Ballentine cover-to-cover. Maybe also one of Greiner's other books: "QM -- Symmetries". At least you'll know that these are reliable.
     
    Last edited: Jul 7, 2014
  16. Jul 7, 2014 #15

    vanhees71

    Science Advisor
    2016 Award

    There are different ways to study the conservation laws originating from global symmetries. I don't know the textbook by Di Francesco. So I cannot say anything about his treatment. In my QFT notes, you find the standard derivation for this Noether theorem for classical field theories in Sect. 3.3. If I remember right, it's equivalent to the one given in Walter Greiner's and Joachim Reinhardt's book on field quantization (part of the theory-textbook series by W. Greiner).

    It's helpful to keep the following in mind:

    (a) Usually you have a transformation in both space-time coordinates and simultaneously of the fields. E.g., take a vector field and proper orthochronous Lorentz transformations as an example. The transformation reads
    [tex]A'{}^{\mu}(x')={\Lambda^{\mu}}_{\nu} A^{\nu}(x), \quad x'{}^{\mu}={\Lambda^{\mu}}_{\nu} x^{\nu}.[/tex]
    For Noether's theorem, you don't need the full Lie group but only the tangent space at unity, i.e., the Lie algebra or, in physicist's words, the infinitesimal transformations, taking into account only variations to first order in the parameters, e.g., for the Lorentz transformation
    [tex]{\Lambda^{\mu}}_{\nu}=\delta_{\nu}^{\mu} + \eta^{\mu \rho} \delta \omega_{\rho \nu} \quad \text{with} \quad \delta \omega_{\rho \nu}=-\delta \omega_{\nu \rho}.[/tex]
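    As a quick numerical illustration of why the antisymmetry of ##\delta \omega_{\rho \nu}## is exactly the infinitesimal Lorentz condition (my own sketch, not from any of the books mentioned; the variable names are mine), one can check that ##{\Lambda^{\mu}}_{\nu} = \delta^{\mu}_{\nu} + \eta^{\mu \rho} \delta \omega_{\rho \nu}## preserves the Minkowski metric up to second order in the parameters:

    ```python
    import numpy as np

    # Minkowski metric eta = diag(+1, -1, -1, -1)
    eta = np.diag([1.0, -1.0, -1.0, -1.0])

    # a random antisymmetric delta-omega_{rho nu}, scaled by a small parameter
    rng = np.random.default_rng(42)
    eps = 1e-6
    a = rng.normal(size=(4, 4))
    dw = eps * (a - a.T) / 2                 # dw_{rho nu} = -dw_{nu rho}

    # Lambda^mu_nu = delta^mu_nu + eta^{mu rho} dw_{rho nu}
    Lam = np.eye(4) + eta @ dw

    # Lorentz condition Lambda^T eta Lambda = eta, satisfied here up to O(eps^2)
    residual = Lam.T @ eta @ Lam - eta
    print(np.max(np.abs(residual)))          # of order eps^2
    ```

    (A symmetric part in ##\delta\omega## would instead show up as a first-order residual, which is how the antisymmetry condition can be read off.)
    
    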

    (b) The transformation by definition is a symmetry if the action functional is invariant under the transformation, i.e., for all fields (and not only the solution of the field equations, given by the Euler-Lagrange equations of the Hamilton least-action principle) you have
    [tex]S[\phi',x'] \equiv S[\phi,x].[/tex]
    This leads to constraints on the action, if you demand that a given transformation is a symmetry. Then Noether's theorem tells you that there is a conserved quantity for each one-parameter symmetry (sub) group.

    There are alternative treatments of the special case of global internal symmetries, i.e., symmetries not related to the Poincare symmetry of space-time, like invariance under the choice of the phase of a complex wave function, leading to the conservation of a charge. One goes back to Gell-Mann and Levy, who studied the axial current in the context of weak pion decay in the early 1960s. The idea there is to study the case of a Lagrangian that is invariant under a global symmetry (in this case the abelian axial symmetry) by evaluating the variation of the corresponding action under the corresponding local symmetry, i.e., you make the infinitesimal parameter [itex]\delta \epsilon=\delta \epsilon(x)[/itex]. Then you can identify the Noether current [itex]j^{\mu}[/itex] easily as the coefficient of [itex]\partial_{\mu} \delta \epsilon[/itex] (modulo factors). The coefficient of [itex]\delta \epsilon[/itex] can be written as [itex]\partial_{\mu} j^{\mu}[/itex] when making use of the equations of motion, i.e., the Euler-Lagrange equations. Since the action is invariant under global symmetries only, after having identified the Noether current with this trick of making the parameter [itex]x[/itex]-dependent, you set [itex]\delta \epsilon=\text{const}[/itex] again, and you find that the Noether current obeys the continuity equation. This is an elegant derivation of Noether's theorem and makes it easier to identify the expression of the Noether current in terms of the fields.
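    To make the continuity equation concrete, here is a small symbolic check (my own illustration in 1+1 dimensions, not taken from any of the books mentioned): for a free complex scalar with [itex]L = \partial_{\mu}\phi^* \partial^{\mu}\phi - m^2 \phi^* \phi[/itex], the phase-symmetry Noether current [itex]j^{\mu} = \mathrm{i}(\phi^* \partial^{\mu}\phi - \phi\, \partial^{\mu}\phi^*)[/itex] is divergence-free once the Klein-Gordon equations of motion are imposed:

    ```python
    import sympy as sp

    t, x, m = sp.symbols('t x m', real=True)
    u = sp.Function('u')(t, x)   # stands for phi
    v = sp.Function('v')(t, x)   # stands for phi^*, treated as independent

    # current components in signature (+,-): d^0 = d_t, d^1 = -d_x
    j0 = sp.I * (v * sp.diff(u, t) - u * sp.diff(v, t))
    j1 = -sp.I * (v * sp.diff(u, x) - u * sp.diff(v, x))

    div = sp.diff(j0, t) + sp.diff(j1, x)   # d_mu j^mu

    # impose the Klein-Gordon equations: box u = -m^2 u  =>  u_tt = u_xx - m^2 u
    div_onshell = div.subs({
        sp.diff(u, t, 2): sp.diff(u, x, 2) - m**2 * u,
        sp.diff(v, t, 2): sp.diff(v, x, 2) - m**2 * v,
    })

    print(sp.simplify(div_onshell))   # 0: the current is conserved on-shell
    ```

    Off-shell the divergence does not vanish, which is precisely the statement that current conservation holds only on the equations of motion.
    
    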

    Another way, particularly useful for the derivation of Ward-Takahashi identities of global symmetries within the path-integral formulation, using the methods of generating functionals for the different kinds of Green's functions and proper vertex functions, is to introduce auxiliary vector fields as if you wanted to gauge the global symmetry to make it local, but in the quantized theory to treat these auxiliary gauge fields just as external c-number fields. This gives an elegant way to derive the Ward-Takahashi identities ("sum rules") of the global symmetries, particularly in the path-integral formulation. The generating functional then depends on the auxiliary fields on top of its dependence on the usual external currents (or, for the effective action, their conjugate field-expectation values).
     
  17. Jul 7, 2014 #16

    CAF123

    Gold Member

    Thanks vanhees71. I am going onto the path integral formulation next semester, so I have not seen the Ward identities yet. Are your QFT notes online at all?

    Thanks strangerep. I saw one of the professors today and his argument on the FMS Noether current derivation was more or less exactly what you wrote in #9. What made you think you were incorrect? I also asked him about the possible x dependence in omega in the top equation of (2.125) (that you pointed out in your last post) and, I can't remember his exact words, but he said something about the x dependence in omega being small, then it is okay. To summarise what he said:

    The equation is ##\int d^d x \partial_{\mu} \omega_a [...]_1 + \omega_a [...]_2 = 0##. If ##\omega = \omega(x)## then the first term does not vanish, so use integration by parts to get ##-\int d^d x \omega_a \partial_{\mu}[...]_1 + \int d^d x \omega_a [...]_2 = 0##.
    The ##[...]_1## is exactly the Noether current, and for arbitrary regions of integration we must have ##[...]_2 = 0##.

    For ##\omega \neq \omega(x)## then the first term vanishes and ##\omega \int d^d x [...]_2 = 0## again implying ##[...]_2 = 0##. In each case, I still need to prove that ##[...]_2 = 0## though using the terms in the OP.

    My professor suggested that I use Greiner in conjunction with FMS to try to see how they obtained their result, although it looks a bit difficult to match the notation. He said to try the formula I have in the OP on special cases of a Lorentz symmetry transformation to see if it goes to zero, so that I can gain trust in what I wrote. But you already confirmed it, so that is enough for me ;) even though I will still try what he said.
     
  18. Jul 7, 2014 #17

    strangerep

    Science Advisor

    Because the starting equations (2.125) are (a) wrong (as I explained in my previous post), and (b) they refer to ##\omega_a## as "infinitesimal parameters", not functions of ##x##.

    Therefore, anything flowing from them might be mathematically correct, but the overall result is worthless since it depends on an incorrect starting point.

    Well, I think that either FMS's eq(2.125) is wrong, or they're totally sloppy and misleading. I hereby issue a (friendly) challenge to your professor to discuss and justify FMS's eq(2.125) here on PF. :biggrin:

    No, that only proves ##-\partial_\mu[...]_1 + [...]_2 = 0##. You've got the argument the wrong way around -- unless one regards ##[...]_2## as some kind of source term, in which case we're dealing with a conservation equation in the presence of sources.
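    Spelling out the integration by parts (assuming the boundary term vanishes):
    $$\int d^dx\, \partial_\mu\omega_a\, [...]_1 ~=~ -\int d^dx\, \omega_a\, \partial_\mu [...]_1 ~,$$
    so the full equation becomes ##\int d^dx\, \omega_a \big(-\partial_\mu [...]_1 + [...]_2\big) = 0##, and arbitrariness of ##\omega_a## then gives ##-\partial_\mu [...]_1 + [...]_2 = 0## pointwise -- not ##[...]_2 = 0## by itself.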

    If you really want to pursue this, study my earlier argument more carefully, else I'll just be repeating myself.

    But I reiterate: all that stuff about the integrals is irrelevant fluff if there's a problem with eq(2.125). That's where my objections are aimed.

    I have already pursued that advice in fine detail. You won't be able to match notation, at least not sensibly. (How do you think I became so convinced that FMS is rubbish? :frown:)

    I can only say (for the last time): study the Greiner derivation in and of itself, without trying to relate it to FMS. When you feel you understand Greiner thoroughly, then maybe come back and try to relate to FMS (but I'm pretty sure that such relating is likely to fail).

    FMS merely write down (2.125) as an unjustified sweeping statement. I say those equations are either wrong or deeply misleading, and I challenge anyone to show how/whether I'm mistaken.
     
  19. Jul 8, 2014 #18

    vanhees71

    Science Advisor
    2016 Award

    I had a brief look at this section in Di Francesco et al's textbook on Conformal Field Theory, and I must say, it's at least very misleading notation. I've no clue what the precise meaning of the symbols in Eq. (2.125) might be.

    Have a look at my notes; I hope it becomes a bit clearer there. The recommended book by Greiner and Reinhardt, "Field Quantization", is also very good for learning QFT; it shows most steps of the calculations in detail.
     
  20. Jul 8, 2014 #19

    CAF123

    Gold Member

    Thanks vanhees71. I asked another question on Physics Stack Exchange, and in one of the answers the notation from perhaps a more common treatment is mapped to the notation in Di Francesco. I have yet to comprehend exactly what it means, but here is the link http://physics.stackexchange.com/qu...nsformation?noredirect=1#comment251403_123316. Does it make more sense?

    Could you put a link to your notes? Thanks.
     
  21. Jul 8, 2014 #20

    CAF123

    Gold Member

    I asked another question on Physics stack exchange and one of the responses I got seemed to give a match to the notation. As I said above in response to vanhees, I am still to make sense of it myself but I provided the link above to see if it helps at all.

    I asked for further clarification and yes, what I wrote was in the opposite order. What he said was exactly what you wrote in #9.

    Ok, I will study Greiner by itself and understand his argument.
     