Derivation of the Noether current

  • #1
CAF123
(The problem I have is really at the end; however, I have provided my whole argument in detail for clarity and completeness, at the cost of perhaps making the thread unappealing to read.)

Homework Statement


(cf. Di Francesco's book, p. 41) We are given that the transformed action under an infinitesimal transformation is $$S' = \int d^d x \left(1 + \partial_{\mu}\left(\omega_a \frac{\delta x^{\mu}}{\delta \omega_a}\right)\right) L\left(\Phi + \omega_a \frac{\delta F}{\delta \omega_a}, [\delta^{\nu}_{\mu} - \partial_{\mu}\left(\omega_a\left(\frac{\delta x^{\nu}}{\delta \omega_a}\right)\right)](\partial_{\nu}\Phi + \partial_{\nu}[\omega_a\left(\frac{\delta F}{\delta \omega_a}\right)])\right)$$

To consider ##\delta S = S' - S##, I am looking to expand the above result to first order.
I can multiply out the brackets above to obtain $$S' = \int d^dx\, L(\Phi + \omega_a \frac{\delta F}{\delta \omega_a}, \partial_{\mu} \Phi + \partial_{\mu} (\omega_a \frac{\delta F}{\delta \omega_a}) - \partial_{\mu} (\omega_a \frac{\delta x^{\nu}}{\delta \omega_a})\partial_{\nu} \Phi) +$$ $$ \int d^d x\, \partial_{\mu} (\omega_a \frac{\delta x^{\mu}}{\delta \omega_a})L(\Phi + \omega_a \frac{\delta F}{\delta \omega_a}, \partial_{\mu}\Phi + \partial_{\mu}(\omega_a \frac{\delta F}{\delta \omega_a}) - \partial_{\mu} (\omega_a \frac{\delta x^{\nu}}{\delta \omega_a})\partial_{\nu} \Phi)$$

Then I can Taylor expand the above to first order in the parameters. This gives $$S' = \int d^d x \left[L(\Phi, \partial_{\mu}\Phi) + \omega_{a} \frac{\delta F}{\delta \omega_a} \frac{\partial L}{\partial \Phi} + \partial_{\mu} (\omega_{a} \frac{\delta F}{\delta \omega_a}) \frac{\partial L}{\partial(\partial_{\mu}\Phi)} - \partial_{\mu}(\omega_a \frac{\delta x^{\nu}}{\delta \omega_a})\partial_{\nu}\Phi \frac{\partial L}{\partial(\partial_{\mu}\Phi)}\right] + \int d^d x \partial_{\mu}(\omega_a \frac{\delta x^{\mu}}{\delta \omega_a})[..]$$ where [..] denotes the terms in the bracket in the preceding integral. Most of these terms will be ignored (in fact all but the first) since they will be of higher order in the parameters. The variation is then $$\delta S = S'-S = \int d^d x\, \omega_a \frac{\delta F}{\delta \omega_a}\frac{\partial L}{\partial \Phi} + \partial_{\mu} (\omega_a \frac{\delta F}{\delta \omega_a})\frac{\partial L}{\partial(\partial_{\mu}\Phi)} - \partial_{\mu}(\omega_a \frac{\delta x^{\nu}}{\delta \omega_a}) \partial_{\nu}\Phi\,\frac{\partial L}{\partial(\partial_{\mu}\Phi)}+\int d^d x\, \partial_{\mu}(\omega_a \frac{\delta x^{\mu}}{\delta \omega_a})L(\Phi, \partial_{\mu}\Phi)$$

Now perform the derivatives explicitly, grouping together terms in ##\partial_{\mu}\omega_a## and ##\omega_a## and imposing invariance of the action: $$0 = \delta S = \int d^d x\,\partial_{\mu} \omega_a \left[\frac{\delta F}{\delta \omega_a} \frac{\partial L}{\partial(\partial_{\mu}\Phi)} - \frac{\delta x^{\nu}}{\delta \omega_a}\partial_{\nu}\Phi \frac{\partial L}{\partial(\partial_{\mu}\Phi)} + \frac{\delta x^{\mu}}{\delta \omega_a}L\right] +$$ $$ \omega_a\left[ \frac{\delta F}{\delta \omega_a}\frac{\partial L}{\partial \Phi} + (\partial_{\mu}\frac{\delta F}{\delta \omega_a})\frac{\partial L}{\partial(\partial_{\mu}\Phi)} - \partial_{\mu} (\frac{\delta x^{\nu}}{\delta \omega_a})\partial_{\nu}\Phi \frac{\partial L}{\partial(\partial_{\mu}\Phi)} + \partial_{\mu} (\frac{\delta x^{\mu}}{\delta \omega_a})L\right]$$
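As a quick sanity check on the bracket multiplying ##\partial_{\mu}\omega_a##: for spacetime translations one can take ##\omega_a \to \epsilon^{\nu}## with ##\frac{\delta x^{\mu}}{\delta \epsilon^{\nu}} = \delta^{\mu}_{\nu}## and ##\frac{\delta F}{\delta \epsilon^{\nu}} = 0##, and the bracket reduces to $$-\left(\frac{\partial L}{\partial(\partial_{\mu}\Phi)}\,\partial_{\nu}\Phi - \delta^{\mu}_{\nu} L\right) = -T^{\mu}{}_{\nu},$$ i.e. (up to the overall sign convention chosen for ##j^{\mu}##) the canonical energy-momentum tensor, as one would expect.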

The answer in the book is that ##\int d^d x\, j^{\mu} \partial_{\mu}\omega_a= 0 ##. The terms multiplying ##\partial_{\mu}\omega_a## are exactly ##j^{\mu}## in the book. So I would have the right answer, provided all the terms multiplying ##\omega_a## were to vanish. Indeed, the first two do as a result of applying the classical equations of motion for the field ##\Phi##. But I do not see how the final two terms vanish (or indeed whether they do). I have tried again using the E.O.M.s and integration by parts, to no avail. I then thought I might be able to ignore these terms, but I had no justification. Any thoughts would be great.
Many thanks.
 
  • #2
I'm not sure, but "These sum up to zero if the action is symmetric under rigid transformations" means ##\delta S=0## if ##\omega_a## is independent of position, in which case the first term vanishes and the result is trivial.
 
  • #3
Hi bloby,
bloby said:
I'm not sure, but "These sum up to zero if the action is symmetric under rigid transformations" means ##\delta S=0## if ##\omega_a## is independent of position, in which case the first term vanishes and the result is trivial.
I did not really understand that paragraph in the book. It seems to me to be incorrect and self-contradictory. The sentence 'The variation ##\delta S = S'-S## of the action contains terms with no derivatives of ##\omega_a##' contradicts eqn (2.140) and the result I obtained. Furthermore, it also seems to contradict the last sentence in the paragraph, '##\delta S## involves only the first derivatives of ##\omega_a##'. Or perhaps I misunderstood something; however, I have seen this book be incorrect before.

So, if ##\omega## is indeed independent of position, then ##\partial_{\mu}\omega_a = 0## identically. In which case, we are left with ##\int d^dx\, \omega_a [..] = 0##. If ##\omega \neq 0## always, then [..]=0. Is this what you mean? But this would give a different conserved current than the one in the book?

If ##\omega## is dependent on position, then ##\partial_{\mu}\omega_a \neq 0##. So my thinking was: to obtain (2.140), the terms multiplying ##\partial_{\mu}\omega## in my expression in the OP had to be ##j^{\mu}##. My expression agrees with (2.141). Therefore, the terms multiplying ##\omega## then have to vanish so as to still have (2.140). And that is my problem. They don't seem to.

Thanks.
 
  • #4
From what I understand, your last equation in the OP gives the general variation of the action, no matter if the transformation is a symmetry and with ##\omega_a## dependent on position. (This is the 'elegant trick': assume a general ##\omega## and set it constant at the end.)
Your last equation in the OP gives the variation of the action for a general transformation (without assuming it's a symmetry or a rigid transformation).
Then, assuming that for a rigid transformation (the first term vanishes) it is a symmetry, the second term vanishes.
The variation that remains is 'only due to the varying part', i.e. the non-rigid part of the transformation.
Then by an integration by parts they obtain (2.142), and since ##\delta S## "should vanish for any position-dependent parameters ##\omega_a(x)##", the result follows.
Perhaps someone more expert should help.
 
  • #5
bloby said:
From what I understand, your last equation in the OP gives the general variation of the action, no matter if the transformation is a symmetry and with ##\omega_a## dependent on position
Yes, I have not imposed the symmetry yet.
Your last equation in the OP gives the variation of the action for a general transformation (without assuming it's a symmetry or a rigid transformation).
Then, assuming that for a rigid transformation (the first term vanishes) it is a symmetry, the second term vanishes.
For a rigid transformation, ##\omega## is not a function of position, so the first term vanishes. This then means that ##\omega \int d^d x [..] = 0## for a symmetry transformation, where [..] are the terms multiplying ##\omega## in the OP. Correct?
The variation that remains is 'only due to the varying part', i.e. the non-rigid part of the transformation.
I do not understand this part - what variation remains? Are you now considering the case where ##\omega## is a function of position?

Thanks.
 
  • #6
You are right, it looks strange to me too.
I would rather perform the integration by parts first; then we have ##0=\delta S =\delta S^1 - \delta S^2##, with ##\delta S^1=\int d^d x\, \omega_a(x) \partial_{\mu} j^{\mu}##, for arbitrary small changes in ##\Phi##, in particular those given by the transformation with ##\omega_a(x)##. This gives ##\delta S^1=\delta S^2##. If the infinitesimal change is then taken to be 'along the symmetry' (to be even more of a symmetry), we have ##\delta S^2=0##, and ##\omega = \text{constant}## can be pulled out of the integral.
Someone else?
 
  • #7
Here's my $0.02 worth...

1) If you really want to understand a derivation of Noether's theorem, go study Greiner & Reinhardt -- who do not skip steps, nor do they use self-indulgent "elegant" techniques that may well impress their peers, but leave students bewildered. And the authors claim the book is "pedagogical". Yeah, right. :grumpy:

2) My take on the FMS derivation is as follows:

First, the notation in eq(2.125) could be written more clearly as
$$x'^\mu(\omega_a) ~=~ x'^\mu(\omega_a=0)
~+~ \omega_a \left( \frac{\delta x'^\mu}{\delta \omega_a} \right)_{\omega_a = 0}
~=~ x^\mu
~+~ \omega_a \left( \frac{\delta x^\mu}{\delta \omega_a} \right)_{\omega_a = 0} ~.$$
IIUC, they can lose the prime on the last ##x^\mu## because only terms to ##O(\omega_a)## are retained, and ##x'^\mu## to 0th order in ##\omega_a## is ##x^\mu##.

A similar thing applies to their 2nd eqn in (2.125) -- it could be written more explicitly like this:
$$\Phi'(x') ~=~ \Phi(x) ~+~ \omega_a \left( \frac{\delta F}{\delta \omega_a}(x) \right)_{\omega_a = 0} ~.$$

The reason I emphasize the above is that$$\left( \frac{\delta x^\mu}{\delta \omega_a} \right)_{\omega_a = 0}$$and$$ \left( \frac{\delta F}{\delta \omega_a}(x) \right)_{\omega_a = 0}$$no longer have any dependence on ##\omega_a##, and we'll use this fact below.

The next thing to realize is that we're considering arbitrary variations ##\omega_a##; in general they're arbitrary functions of ##x##.

Think about this equation: $$ a f(x) + b g(x) ~=~ 0 ~,$$ where ##f,g## are arbitrary independent functions and ##a,b## are quantities (independent of ##x##) to be determined. Since ##f,g## can be anything, the only possibility is ##a=0=b##.

That's essentially what's going on when FMS appear to consider rigid transformations separately.
More explicitly, specialize my equation above to $$ a f(x) + b \frac{df(x)}{dx} ~=~ 0 ~.$$Since ##f(x)## is an arbitrary function, we may legitimately conclude that the coefficients of ##f## and ##df/dx## must vanish separately.
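For instance, choosing ##f \equiv 1## forces ##a = 0## (since ##df/dx = 0##), and then choosing ##f(x) = x## forces ##b = 0##.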

Now let's look at the terms in your integral equation:

CAF123 said:
$$0 = \delta S = \int d^d x\,\partial_{\mu} \omega_a \left[\frac{\delta F}{\delta \omega_a} \frac{\partial L}{\partial(\partial_{\mu}\Phi)} - \frac{\delta x^{\nu}}{\delta \omega_a}\partial_{\nu}\Phi \frac{\partial L}{\partial(\partial_{\mu}\Phi)} + \frac{\delta x^{\mu}}{\delta \omega_a}L\right] +$$ $$ \omega_a\left[ \frac{\delta F}{\delta \omega_a}\frac{\partial L}{\partial \Phi} + (\partial_{\mu}\frac{\delta F}{\delta \omega_a})\frac{\partial L}{\partial(\partial_{\mu}\Phi)} - \partial_{\mu} (\frac{\delta x^{\nu}}{\delta \omega_a})\partial_{\nu}\Phi \frac{\partial L}{\partial(\partial_{\mu}\Phi)} + \partial_{\mu} (\frac{\delta x^{\mu}}{\delta \omega_a})L\right]$$
There should be an explicit domain on the integral sign, e.g., $$\int_\Omega \cdots ~~,$$ since the action can legitimately be minimized over an arbitrary domain. (Again, see Greiner & Reinhardt for a more careful treatment.) Since the domain is arbitrary, vanishing of the integral is only possible if the integrand vanishes. That leaves you with$$0 ~=~ \partial_{\mu} \omega_a \Big[ \cdots \Big] ~+~ \omega_a \Big[ \cdots \Big] ~.$$Applying my earlier remarks, remembering that the ##\omega_a## are arbitrary, the coefficients in the big brackets may validly be set to 0 separately. But this is only true if those coefficients are indeed independent of ##\omega_a## -- and this is seen to be the case if my earlier more-explicit notation is used.

HTH (sigh).
 
  • #8
Hi strangerep,

I am wondering if there is a mistake in eqn (2.126). Shouldn't it read ##\Phi'(x') - \Phi(x) = -i\omega_a G_a \Phi(x)##? (The generator ##G_a## is the full generator of the transformation so it should change both the field and the coordinate?)

strangerep said:
That's essentially what's going on when FMS appear to consider rigid transformations separately.
More explicitly, specialize my equation above to $$ a f(x) + b \frac{df(x)}{dx} ~=~ 0 ~.$$Since ##f(x)## is an arbitrary function, we may legitimately conclude that the coefficients of ##f## and ##df/dx## must vanish separately.
Could you explain what you mean by the statement '...consider rigid transformations separately'? Do you mean that in the equation ##0 = \partial_{\mu}\omega_a[...]_1 + \omega_a[...]_2##, if ##\omega \neq \omega(x)## then ##[...]_1 \neq 0## since ##\partial_{\mu}\omega_a = 0## and ##[...]_1## constitutes ##j^{\mu}## and then ##[...]_2 = 0## separately for the equation to hold.

If ##\omega = \omega(x)##, then ##[...]_1 = 0## and likewise ##[...]_2=0##. I did not see this case in FMS.
Thanks.
 
  • #9
CAF123 said:
I am wondering if there is a mistake in eqn (2.126). Shouldn't it read ##\Phi'(x') - \Phi(x) = -i\omega_a G_a \Phi(x)##? (The generator ##G_a## is the full generator of the transformation so it should change both the field and the coordinate?)
The sentence preceding eqn (2.126) says:

FMS said:
It is customary to define the generator ##G_a## of a symmetry transformation by the following expression for the infinitesimal transformation at a[sic] same point: [...]
If you follow through to eqn (2.133), one ends up with the correct generator for total angular momentum.

Could you explain what you mean by the statement '...consider rigid transformations separately'? Do you mean that in the equation ##0 = \partial_{\mu}\omega_a[...]_1 + \omega_a[...]_2##, if ##\omega \neq \omega(x)## then ##[...]_1 \neq 0## since ##\partial_{\mu}\omega_a = 0## and ##[...]_1## constitutes ##j^{\mu}## and then ##[...]_2 = 0## separately for the equation to hold.
That doesn't look right. Let's try again -- since I just realized my previous explanation was too simplistic... :blushing:
[Edit: Actually, I should be more upfront about this: I'm having difficulty making sense of the FMS treatment, so take anything I say below with a grain of salt... :frown:]

The relevant equation is $$\delta S ~=~ \int \partial_{\mu}\omega_a[...]_1 ~+~ \int\omega_a[...]_2 ~.$$ We specialize to the case ##\omega = const##, and require the action to be invariant under such transformations. The equation then becomes: $$0 ~=~ \int \omega_a[...]_2 ~=~ \omega_a \int [...]_2 ~.$$Since ##\omega_a\ne 0## in general, this implies ##\int [...]_2 = 0##.

Then we need to deduce that ##\int \omega_a(x) [...]_2 = 0## is true generally. Afaict, this requires the (reasonable) assumption that global (rigid) symmetries hold for arbitrary regions of integration. In that case, ##\int [...]_2 = 0## implies ##[...]_2 = 0##.

That leaves $$\delta S ~=~ \int \partial_{\mu}\omega_a[...]_1 ~.$$ Note that if we now perform integration-by-parts with an arbitrary domain of integration, there will in general be a boundary term. But if we impose the additional conditions that (1) the variations vanish on that boundary, and (2) that ##\delta S = 0## still, then an integration by parts gets the result: $$0 ~=~ \int \partial_{\mu}j^\mu_a \, \omega_a(x) ~,$$(where I've reinstated ##\omega_a##'s explicit ##x##-dependence).

Now, ##\omega_a(x)## is still a very arbitrary function -- the only constraint is that it must vanish on the integration boundary. But it can be anything inside the boundary. This arbitrariness allows us to conclude that the integrand must vanish, and since ##\omega_a(x) \ne 0## in general, we get ##\partial_{\mu}j^\mu_a = 0##.
[Edit: more rigorously, this follows from the fundamental lemma of the calculus of variations, provided ##\omega_a(x)## satisfies the conditions of continuity and differentiability outlined on that Wiki page.]
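To see that lemma in action numerically, here is a throwaway sketch (the integrand ##g## and the bump standing in for ##\omega_a## are arbitrary choices of mine):

```python
import numpy as np

# Contrapositive of the lemma: if g(x0) != 0, a smooth bump omega supported
# near x0 (and vanishing at the boundary) makes the integral nonzero, so
# demanding the integral vanish for ALL admissible omega forces g = 0.
x = np.linspace(0.0, 1.0, 20001)
g = np.sin(3.0 * x)                  # stand-in integrand, g(0.5) != 0
x0, eps = 0.5, 0.01                  # bump centre and half-width
u = np.clip((x - x0) / eps, -0.999, 0.999)
omega = np.where(np.abs(x - x0) < eps, np.exp(-1.0 / (1.0 - u**2)), 0.0)
print(np.sum(g * omega) * (x[1] - x[0]))   # ~ g(x0) * (bump area), nonzero
```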

This also explains why we couldn't do the integration-by-parts thing first. To get ##[...]_2 = 0##, we needed an extra condition. The 2nd step requires a more specific boundary on which ##\omega_a(x)=0##. But if ##\omega_a(x)## is a constant, it could only be trivially 0.

[Late Edit:] Since I'm not confident about what I've said above, I'm now trying to do the problem from scratch. That might take a while...
 
  • #10
strangerep said:
If you follow through to eqn (2.133), one ends up with the correct generator for total angular momentum.
It was the conceptual point of view I was concerned with: G is defined to be the generator that transforms both the coordinates and the field, and yet it seems there are no instances of the transformed coordinate system (the primed system) present on the LHS of the equation. I realize this is off-topic from the main discussion in this thread, but I was wondering how Di Francesco obtained eqn (2.127). Here are my thoughts: Expand ##\Phi(x')##, keeping the 'shape' of the field (as Greiner puts it) the same, so ##\Phi(x') \approx \Phi(x) + \omega_a \frac{\delta \Phi(x')}{\delta \omega_a}##. Now inserting into (2.125) gives $$\Phi'(x') = \Phi(x') - \omega_a \frac{\delta x^{\mu}}{\delta \omega_a}\frac{\partial \Phi(x')}{\partial x^{\mu}} + \omega_a \frac{\delta F}{\delta \omega_a}(x)$$ which is nearly the right result, except Di Francesco has a prime on the x in the final term. My thinking was to define an equivalent function F in the primed system, i.e. ##F(\Phi(x)) \equiv F'(\Phi(x'))##. Are these arguments correct?

Then we need to deduce that ##\int \omega_a(x) [...]_2 = 0## is true generally. Afaict, this requires the (reasonable) assumption that global (rigid) symmetries hold for arbitrary regions of integration. In that case, ##\int [...]_2 = 0## implies ##[...]_2 = 0##.
If we want ##\int \omega_a(x)[...]_2 = 0## to hold generally and for arbitrary regions of integration, then can we not say that the integrand has to vanish, ##\omega_a(x) [...]_2 = 0##? Since ##\omega_a(x)## too is arbitrary (under no constraints), then ##[...]_2 = 0##. But then this does not use the rigid transformation anywhere. Or are you using the fact that by implementing a rigid transformation (thereby constraining omega to be position independent) we have that ##\int [...]_2 = 0##? Then for this to hold for all regions, we must have ##[...]_2 = 0##, which is used in the equation ##\int \omega_a(x)[...]_2 ##, making it vanish?
This also explains why we couldn't do the integration-by-parts thing first. To get ##[...]_2 = 0##, we needed an extra condition. The 2nd step requires a more specific boundary on which ##\omega_a(x)=0##. But if ##\omega_a(x)## is a constant, it could only be trivially 0.
I didn't quite understand this paragraph - could you possibly elaborate?

Thanks.
 
  • #11
I'm still working through the problem from scratch. I want to check carefully whether the expression you obtained at the end of your OP is correct. If it's not, then none of the subsequent arguments matter... :frown:

I haven't got there yet, but... I can't do any more tonight.

So try to hang loose for a day or two (or three) until I can complete my detailed checks.
 
  • #12
Hmm. I just reached the same expression as you got at the end of your opening post.

But something doesn't look right. From your original post (my emboldening)...

CAF123 said:
Now perform the derivatives explicitly, grouping together terms in ##\partial_{\mu}\omega_a## and ##\omega_a## and imposing invariance of the action: $$0 = \delta S = \int d^d x\,\partial_{\mu} \omega_a \left[\frac{\delta F}{\delta \omega_a} \frac{\partial L}{\partial(\partial_{\mu}\Phi)} - \frac{\delta x^{\nu}}{\delta \omega_a}\partial_{\nu}\Phi \frac{\partial L}{\partial(\partial_{\mu}\Phi)} + \frac{\delta x^{\mu}}{\delta \omega_a}L\right] +$$ $$ \omega_a\left[ \frac{\delta F}{\delta \omega_a}\frac{\partial L}{\partial \Phi} + (\partial_{\mu}\frac{\delta F}{\delta \omega_a})\frac{\partial L}{\partial(\partial_{\mu}\Phi)} - \partial_{\mu} (\frac{\delta x^{\nu}}{\delta \omega_a})\partial_{\nu}\Phi \frac{\partial L}{\partial(\partial_{\mu}\Phi)} + \partial_{\mu} (\frac{\delta x^{\mu}}{\delta \omega_a})L\right]$$

The answer in the book is that ##\int d^d x\, j^{\mu} \partial_{\mu}\omega_a= 0 ##. The terms multiplying ##\partial_{\mu}\omega_a## are exactly ##j^{\mu}## in the book. So I would have the right answer, provided all the terms multiplying ##\omega_a## were to vanish. Indeed, the first two do as a result of applying the classical equations of motion for the field ##\Phi##. [...]
I don't see how the emboldened statement is true. I'm guessing you intend integration by parts, but that introduces another ##\partial_\mu \omega_a## term, doesn't it? If not, please show explicitly...
 
  • #13
strangerep said:
Hmm. I just reached the same expression as you got at the end of your opening post.
Thanks for the check.

I don't see how the emboldened statement is true. I'm guessing you intend integration by parts, but that introduces another ##\partial_\mu \omega_a## term, doesn't it? If not, please show explicitly...
You're right. What I did initially was to pull the ##\omega_a## at the front through the integration by parts and write $$\int d^d x (\partial_{\mu}\frac{\delta F}{\delta \omega_a})\frac{\partial L}{ \partial(\partial_{\mu}\Phi)} = -\int d^d x\frac{\delta F}{\delta \omega_a} \partial_{\mu} \frac{\partial L}{\partial (\partial_{\mu}\Phi)}$$ but in the general case ##\omega## is not constant, so what I did was incorrect. Does it help the problem at all?
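(Explicitly, keeping ##\omega_a(x)## inside and writing ##X \equiv \frac{\delta F}{\delta \omega_a}##, ##Y^{\mu} \equiv \frac{\partial L}{\partial(\partial_{\mu}\Phi)}## for brevity, the integration by parts reads $$\int d^d x\, \omega_a (\partial_{\mu} X) Y^{\mu} = -\int d^d x\, X\, \partial_{\mu}(\omega_a Y^{\mu}) = -\int d^d x\, X\left(\omega_a \partial_{\mu} Y^{\mu} + (\partial_{\mu}\omega_a) Y^{\mu}\right),$$ assuming the boundary term vanishes; the piece discarded by treating ##\omega_a## as constant is precisely another ##\partial_{\mu}\omega_a## term.)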
 
  • #14
CAF123 said:
Thanks for the check.
Oh, don't thank me yet. I have a feeling that things are deeply wrong about all of this.

You're right. What I did initially was to pull the ##\omega_a## at the front through the integration by parts and write $$\int d^d x (\partial_{\mu}\frac{\delta F}{\delta \omega_a})\frac{\partial L}{ \partial(\partial_{\mu}\Phi)} = -\int d^d x\frac{\delta F}{\delta \omega_a} \partial_{\mu} \frac{\partial L}{\partial (\partial_{\mu}\Phi)}$$ but in the general case ##\omega## is not constant, so what I did was incorrect. Does it help the problem at all?
No, at least, not that I can see.

In fact, I'm starting to think that FMS's whole treatment is RUBBISH. :mad:
I probably shouldn't have tried (earlier in this thread) to fabricate justifications for it.

Here's why I think it's rubbish. FMS express the coordinate transformation as
$$\def\Vdrv#1#2{\frac{\delta #1}{\delta #2}}
\def\Pdrv#1#2{\frac{\partial #1}{\partial #2}}
x'^\mu ~=~ x^\mu + \omega_a \Vdrv{x^\mu}{\omega_a} ~,$$
BUT... it's just meant to be an infinitesimal version of a coordinate transformation dependent on some parameters ##\omega_a##. Writing it out more carefully, it should be like this:
$$x'^\mu(x,\omega) ~\approx~ x'^\mu \Big|_{\omega=0}
~+~ \omega_a \Pdrv{x'^\mu}{\omega_a} \Big|_{\omega=0}
~=~ x^\mu ~+~ \omega_a X^\mu_a(x) ~,$$where I've introduced new symbols ##X^\mu_a## which are independent of ##\omega_a## but are in general functions of ##x##.

The point is that the ##\omega_a## are not functions of ##x## under any circumstances. So Sam's characterization (in your other thread) of FMS's stuff as being a "mess" seems accurate.

Further, if you study (and re-study) Greiner's derivation carefully, he never makes use of anything like this. The ##\omega_a X^\mu_a(x)## correspond to his ##\delta x^\mu## but he never needs to split it up by introducing ##\omega##'s. And he completes the whole Noether derivation in just under 3 pages -- which is about the same as my attempt to reproduce FMS's steps. And Greiner never invokes integration by parts.

Oh well, at least I've learned something out of this whole exercise. :cry:

If you want to actually learn something over your summer, forget about the FMS crap and study Greiner's derivation in fine detail, and maybe also study Ballentine cover-to-cover. Maybe also one of Greiner's other books: "QM -- Symmetries". At least you'll know that these are reliable.
 
  • #15
There are different ways to study the conservation laws originating from global symmetries. I don't know the textbook by Di Francesco. So I cannot say anything about his treatment. In my QFT notes, you find the standard derivation for this Noether theorem for classical field theories in Sect. 3.3. If I remember right, it's equivalent to the one given in Walter Greiner's and Joachim Reinhardt's book on field quantization (part of the theory-textbook series by W. Greiner).

It's helpful to keep the following in mind:

(a) Usually you have a transformation in both space-time coordinates and simultaneously of the fields. E.g., take a vector field and proper orthochronous Lorentz transformations as an example. The transformation reads
[tex]A'{}^{\mu}(x')={\Lambda^{\mu}}_{\nu} A^{\nu}(x), \quad x'{}^{\mu}={\Lambda^{\mu}}_{\nu} x^{\nu}.[/tex]
For Noether's theorem, you don't need the full Lie group but only the tangent space at unity, i.e., the Lie algebra or, in physicist's words, the infinitesimal transformations, taking into account only variations to first order in the parameters, e.g., for the Lorentz transformation
[tex]{\Lambda^{\mu}}_{\nu}=\delta_{\nu}^{\mu} + \eta^{\mu \rho} \delta \omega_{\rho \nu} \quad \text{with} \quad \delta \omega_{\rho \nu}=-\delta \omega_{\nu \rho}.[/tex]
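As a quick symbolic check of this statement (a sketch; the bookkeeping parameter ##a## and the symbols standing in for ##\delta\omega_{\rho\nu}## are my own choices), one can verify that ##\Lambda = 1 + \eta\,\delta\omega## preserves the metric to first order precisely because ##\delta\omega## is antisymmetric:

```python
import sympy as sp

a = sp.Symbol('a')                        # bookkeeping infinitesimal
eta = sp.diag(1, -1, -1, -1)              # Minkowski metric
dw = sp.Matrix(4, 4, lambda i, j: sp.Symbol(f'w{i}{j}'))
dw = (dw - dw.T) / 2                      # impose dw_{rho nu} = -dw_{nu rho}
Lam = sp.eye(4) + a * eta * dw            # Lambda = 1 + eta^{mu rho} dw_{rho nu}
res = (Lam.T * eta * Lam - eta).expand()  # invariance condition, minus eta
print(res.applyfunc(lambda e: e.coeff(a, 1)))  # O(a) part -> zero matrix
```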

(b) The transformation by definition is a symmetry if the action functional is invariant under the transformation, i.e., for all fields (and not only the solution of the field equations, given by the Euler-Lagrange equations of the Hamilton least-action principle) you have
[tex]S[\phi',x'] \equiv S[\phi,x].[/tex]
This leads to constraints on the action, if you demand that a given transformation is a symmetry. Then Noether's theorem tells you that there is a conserved quantity for each one-parameter symmetry (sub) group.

There are alternative treatments of the special case of global internal symmetries, i.e., symmetries not related to Poincare symmetry of space-time, like invariance under the choice of the phase of a complex wave function, leading to the conservation of a charge. One goes back to Gell-Mann and Levy, studying the axial current in the context of weak pion decay in the early 1960s. The idea there is to simply study the case of a Lagrangian that is invariant under a global symmetry (in this case the abelian axial symmetry) by evaluating the variation of the corresponding action under the corresponding local symmetry, i.e., you make the infinitesimal parameter [itex]\delta \epsilon=\delta \epsilon(x)[/itex]. Then you can identify the Noether current [itex]j^{\mu}[/itex] easily as the coefficient of [itex]\partial_{\mu} \delta \epsilon[/itex] (modulo factors). The coefficient of [itex]\delta \epsilon[/itex] can be written as [itex]\partial_{\mu} j^{\mu}[/itex], when making use of the equations of motion, i.e., the Euler-Lagrange equations. Since the action is invariant under the global symmetry only, after having identified the Noether current with this trick of making the parameter [itex]x[/itex]-dependent, you set [itex]\delta \epsilon=\text{const}[/itex] again, and you find that the Noether current obeys the continuity equation. This is an elegant derivation of Noether's theorem and makes it easier to identify the expression for the Noether current in terms of the fields.
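To make this concrete, here is a small symbolic sketch of that trick for a toy case of my own choosing (a free complex scalar in 1+1 dimensions with the abelian phase symmetry; all names in the script are mine). The first-order variation of the Lagrangian comes out proportional to the derivatives of the local parameter alone, with the familiar U(1) current as coefficient; for this Lagrangian the coefficient of the undifferentiated parameter even vanishes identically, off-shell:

```python
import sympy as sp

t, x, a, m = sp.symbols('t x a m', real=True)
eps = sp.Function('eps')(t, x)     # local parameter delta-epsilon(x)
phi = sp.Function('phi')(t, x)     # complex scalar field
phic = sp.conjugate(phi)

def L(f, fc):
    # free complex scalar Lagrangian, signature (+,-)
    return (sp.diff(f, t)*sp.diff(fc, t) - sp.diff(f, x)*sp.diff(fc, x)
            - m**2 * f * fc)

# infinitesimal local U(1): phi -> (1 + i*a*eps)*phi, with a as a
# bookkeeping parameter; keep only the O(a) piece of delta-L
dL = sp.expand(L((1 + sp.I*a*eps)*phi, (1 - sp.I*a*eps)*phic) - L(phi, phic))
dL1 = dL.coeff(a, 1)

# candidate Noether current = coefficient of d_mu(eps)
jt = sp.I*(phi*sp.diff(phic, t) - phic*sp.diff(phi, t))
jx = -sp.I*(phi*sp.diff(phic, x) - phic*sp.diff(phi, x))
print(sp.simplify(dL1 - sp.diff(eps, t)*jt - sp.diff(eps, x)*jx))  # -> 0
```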

Another way, particularly useful for the derivation of Ward-Takahashi identities of global symmetries within the path-integral formulation, using the methods of generating functionals for the different kinds of Green's functions and proper vertex functions, is to introduce auxiliary vector fields as if you wanted to gauge the global symmetry to make it local, but in the quantized theory to just treat these auxiliary gauge fields as external c-number fields. This gives an elegant way to derive the Ward-Takahashi identities ("sum rules") of the global symmetries, particularly in the path-integral formulation. The functional becomes dependent on the auxiliary fields on top of the dependence on the usual external currents (or, for the action, their conjugate field-expectation values).
 
  • #16
Thanks vanhees71. I am moving on to the path integral formulation next semester, so I have not seen the Ward identities yet. Are your QFT notes online at all?

Thanks strangerep. I saw one of the professors today and his argument on the FMS Noether current derivation was more or less exactly what you wrote in #9. What made you think you were incorrect? I also asked him about the possible x dependence in omega in the top equation of (2.125) (that you pointed out in your last post) and, I can't remember his exact words, but he said something about it being okay provided the x dependence in omega is small. To summarise what he said:

The equation is ##\int d^d x \partial_{\mu} \omega_a [...]_1 + \omega_a [...]_2 = 0##. If ##\omega = \omega(x)## then the first term does not vanish, so use integration by parts to get ##-\int d^d x \omega_a \partial_{\mu}[...]_1 + \int d^d x \omega_a [...]_2 = 0##.
The ##[...]_1## is exactly the Noether current, and for arbitrary regions of integration we must have ##[...]_2 = 0##.

For ##\omega \neq \omega(x)##, the first term vanishes and ##\omega \int d^d x [...]_2 = 0##, again implying ##[...]_2 = 0##. In each case, I still need to prove that ##[...]_2 = 0##, though, using the terms in the OP.

My professor suggested that I use Greiner in conjunction with FMS to try to see how they obtained their result, although it looks a bit difficult to match the notation. He said to try the formula that I have in the OP for special cases of a Lorentz symmetry transformation to see if it goes to zero, so that I can gain trust in what I wrote. But you already confirmed it, so that is enough for me ;) even though I will try to do what he said.
 
  • #17
CAF123 said:
I saw one of the professors today and his argument on the FMS Noether current derivation was more or less exactly what you wrote in #9. What made you think you were incorrect?
Because the starting equations (2.125) are (a) wrong (as I explained in my previous post), and (b) they refer to ##\omega_a## as "infinitesimal parameters", not functions of ##x##.

Therefore, anything flowing from them might be mathematically correct, but the overall result is worthless since it depends on an incorrect starting point.

I also asked him about the possible x dependence in omega in the top equation of (2.125) (that you pointed out in your last post) and, I can't remember his exact words, but he said something about it being okay provided the x dependence in omega is small.
Well, I think that either FMS's eq(2.125) is wrong, or they're totally sloppy and misleading. I hereby issue a (friendly) challenge to your professor to discuss and justify FMS's eq(2.125) here on PF. :biggrin:

To summarise what he said:

The equation is ##\int d^d x \partial_{\mu} \omega_a [...]_1 + \omega_a [...]_2 = 0##. If ##\omega = \omega(x)## then the first term does not vanish, so use integration by parts to get ##-\int d^d x \omega_a \partial_{\mu}[...]_1 + \int d^d x \omega_a [...]_2 = 0##.
The ##[...]_1## is exactly the Noether current, and for arbitrary regions of integration we must have ##[...]_2 = 0##.
No, that only proves ##-\partial_\mu[...]_1 + [...]_2 = 0##. You've got the argument the wrong way around -- unless one regards ##[...]_2## as some kind of source term, in which case we're dealing with a conservation equation in the presence of sources.

For ##\omega \neq \omega(x)##, the first term vanishes and ##\omega \int d^d x [...]_2 = 0##, again implying ##[...]_2 = 0##. In each case, I still need to prove that ##[...]_2 = 0##, though, using the terms in the OP.
If you really want to pursue this, study my earlier argument more carefully, else I'll just be repeating myself.

But I reiterate: all that stuff about the integrals is irrelevant fluff if there's a problem with eq(2.125). That's where my objections are aimed.

My professor suggested that I use Greiner in conjunction with FMS to try to see how they obtained their result, although it looks a bit difficult to match the notation.
I have already pursued that advice in fine detail. You won't be able to match notation, at least not sensibly. (How do you think I became so convinced that FMS is rubbish? :frown:)

I can only say (for the last time): study the Greiner derivation in and of itself, without trying to relate it to FMS. When you feel you understand Greiner thoroughly, then maybe come back and try to relate to FMS (but I'm pretty sure that such relating is likely to fail).

He said to try the formula that I have in the OP for special cases of a Lorentz symmetry transformation to see if it goes to zero, so that I can gain trust in what I wrote. But you already confirmed it, so that is enough for me ;) even though I will try to do what he said.
FMS merely write down (2.125) as an unjustified sweeping statement. I say those equations are either wrong or deeply misleading, and I challenge anyone to show how/whether I'm mistaken.
 
  • #18
I had a brief look at this section in Di Francesco et al's textbook on Conformal Field Theory, and I must say, the notation is at the very least very misleading. I've no clue what the precise meaning of the symbols in Eq. (2.125) might be.

Have a look at my notes. I hope it becomes a bit clearer there. Also the recommended book by Greiner and Reinhardt, "Field Quantization", is very good for learning QFT. It is very detailed, showing most steps of the calculations.
 
  • #19
vanhees71 said:
I had a brief look at this section in Di Francesco et al's textbook on Conformal Field Theory, and I must say, the notation is at the very least very misleading. I've no clue what the precise meaning of the symbols in Eq. (2.125) might be.
Thanks vanhees71, I asked another question on Physics Stack Exchange, and in one of the answers notation from perhaps a more common treatment is mapped to notation in Di Francesco. I have yet to comprehend exactly what it means, but here is the link: http://physics.stackexchange.com/qu...nsformation?noredirect=1#comment251403_123316. Does it make more sense?

Have a look at my notes. I hope it becomes a bit clearer there. Also the recommended book by Greiner and Reinhardt, "Field Quantization", is very good for learning QFT. It is very detailed, showing most steps of the calculations.
Could you put a link to your notes? Thanks.
 
  • #20
strangerep said:
Because the starting equations (2.125) are (a) wrong (as I explained in my previous post), and (b) they refer to ##\omega_a## as "infinitesimal parameters", not functions of ##x##.
I asked another question on Physics Stack Exchange and one of the responses I got seemed to give a match to the notation. As I said above in response to vanhees, I have yet to make sense of it myself, but I provided the link above to see if it helps at all.

No, that only proves ##-\partial_\mu[...]_1 + [...]_2 = 0##. You've got the argument the wrong way around -- unless one regards ##[...]_2## as some kind of source term, in which case we're dealing with a conservation equation in the presence of sources.
I asked for further clarification and yes, what I wrote was in the opposite order. What he said was exactly what you wrote in #9.

I can only say (for the last time): study the Greiner derivation in and of itself, without trying to relate it to FMS. When you feel you understand Greiner thoroughly, then maybe come back and try to relate to FMS (but I'm pretty sure that such relating is likely to fail).
Ok, I will study Greiner by itself and understand his argument.
 
  • #22
CAF123 said:
I asked another question on Physics Stack Exchange and one of the responses I got seemed to give a match to the notation.
I have no problem with the answer given there by "joshphysics", but... in the end the notation "match" is so bad it's almost funny. It's like saying: let's refer to those large grey animals with trunks by the term "apples".

He also did not address the subsequent puzzle about FMS's use of ##\omega_a## as a nontrivial function of ##x## (probably because you didn't specifically ask about that).

(BTW, Hendrik, thanks for taking a closer look at FMS. I had begun to wonder if I was losing my marbles... :frown:)
 
  • #23
Hi strangerep, I have read through Greiner's argument and I agree that he follows a much simpler and cleaner approach, both in the notation and the method (as you said, no integration by parts; and seeing the classical equations of motion just fall into place in the last step was very nice).

I do have a few questions about some of the points he brings up. On p. 40, footnote (3), he defines the quantity ##\tilde \delta \phi_r (x) = \phi_r' (x) - \phi_r(x)## and says '...keeps the value of the coordinate x fixed and only takes into account the change of 'shape' of the field'.
Geometrically, what does this mean?
I understand that the field is not formally to be represented/drawn in the same space as the coordinate space, but if we fix the coordinate then how would the field change at that same value of x?

I made sense of the quantity ##\phi'(x') - \phi'(x)## by viewing the second term as the value of ##\phi'(x')## at ##x' = x##. The coordinate representations ##x'## in S' and ##x## in S serve to locate the same point in Minkowski space, but ##x'## in S does not; it locates a different point in Minkowski space, and hence we have this quantity infinitesimally equal to the orbital generator. At least that was the result of my thinking process.

But for the term above that I mentioned, I cannot come up with a geometric analogy.
 
  • #24
[continued from above..]
My only other question with regard to Greiner's derivation is to do with eqn (2.44) on p. 41. He seems to have obtained the equation $$\frac{\partial \phi_r'(x')}{\partial x'_{\mu}} - \frac{\partial \phi_r(x)}{\partial x_{\mu}} = \delta \left(\frac{\partial \phi_r(x)}{\partial x_{\mu}}\right)$$ but this step is not clear to me.

As you said, it was difficult to match Di Francesco's and Greiner's derivations. Greiner does not introduce the ##\omega_a##, so when I tried to split my result up into a piece multiplying ##\omega_a## and ##\partial_{\mu}\omega_a## in the OP, this step is of course absent from Greiner's treatment, so I cannot really match. I think, though, that in terms of (2.125) of Di Francesco we have the notation mapping
$$\begin{cases}
\delta \phi_r(x) \rightarrow \omega_a \frac{\delta F}{\delta \omega_a}\\
\delta x^{\nu} \rightarrow \omega_a \frac{\delta x^{\nu}}{\delta \omega_a}
\end{cases}$$

My final question (apologies for an extended reply) is to do with what you said in post #9. Assuming for a moment that the starting point is justified, by imposing a rigid transformation, we obtained ##[...]_2 = 0##. We then use this result when we consider the case of ##\omega = \omega(x)## to be left with only (2.140) in Di Francesco. My question is: What permits us to use this result?

Many thanks.
 
  • #25
CAF123 said:
On p. 40, footnote (3), he defines the quantity ##\tilde \delta \phi_r (x) = \phi_r' (x) - \phi_r(x)## and says '...keeps the value of the coordinate x fixed and only takes into account the change of 'shape' of the field'.
Geometrically, what does this mean?
Greiner's term "shape of the field" relates to what I had previously called the "vectorness" (or whatever) of the field. For a vector-valued field, Greiner's ##r## index is a vector index, for a spinor field it would be a spinor index, and so on. So it's like (e.g.,) rotating the field components without any motion in spacetime.

I regard the "modified variation" ##\tilde\delta\phi_r(x)## as simply a convenient technical device, and there's probably no need to worry too much about a direct geometric interpretation. Its main purpose is to have a variation which commutes with ordinary differentiation; that's useful in the step from eqn (2.45) to (2.46), where all the integrals get converted into integrals over ##d^4x##, with no ##d^4x'## remaining.

I understand that the field is not formally to be represented/drawn in the same space as the coordinate space, but if we fix the coordinate then how would the field change at that same value of x?
Have you encountered the concept of tangent spaces yet? If not, then it's a bit hard to explain. For the vector field case, you might think of the field like the flow lines for wind on a weather map, or fluid moving over some surface under the action of a force. For each point ##x## on the surface (the "base" manifold here) there is an associated direction representing the flow at that point. One models this by imagining a (flat) tangent space anchored to each point ##x##. The "shape" of the field (i.e., the field components) lives in this tangent space. So to answer your question about how the field changes at "that same value of x", just think about changing its direction in the tangent space at that point.

The union of all those tangent spaces over every point of the base manifold is called a "tangent bundle". You might also hear the term "vector bundle". When you study gauge field theory, you might also hear the related term "principal bundle". The underlying idea is the same: one imposes extra structure anchored at each point ##x##, to represent extra properties of the field.
 
  • #26
CAF123 said:
[continued from above..]
My only other question with regard to Greiner's derivation is to do with eqn (2.44) on p. 41. He seems to have obtained the equation $$\frac{\partial \phi_r'(x')}{\partial x'_{\mu}} - \frac{\partial \phi_r(x)}{\partial x_{\mu}} = \delta \left(\frac{\partial \phi_r(x)}{\partial x_{\mu}}\right)$$ but this step is not clear to me.
I presume you're talking about the step from the 2nd to the 3rd line of (2.44). If so, try thinking of the quantity
$$ \def\Pdrv#1#2{\frac{\partial #1}{\partial #2}}
\def\pdrv#1{\frac{\partial}{\partial #1}}
\pi_{r\mu}(x) ~:=~ \Pdrv{\phi_r(x)}{x^\mu}
$$as just another field. Then we ask: what is the (ordinary) variation of the ##\pi## field applicable in the current problem? We know that, under these variations, we have ##x \to x'## and ##\phi_r(x) \to \phi'_r(x)##. Hence we also have
$$\pdrv{x^\mu} ~\to~ \pdrv{x'^\mu} ~~.$$Hence, we can write out the explicit expression for ##\pi'_{r\mu}(x')##, and hence ##\delta\pi_{r\mu}(x)## -- which is what Greiner has done here.

If such manipulations seem a bit magical, like they've been pulled out of a hat, one can sometimes get more insight by working the overall proof backwards. E.g., start near (2.45) and see how he gets into the mess of (2.49), then figure out what manipulations could be helpful to get to the end of (2.49). Greiner is simply giving those auxiliary manipulations in advance of where they're needed, rather than waiting until they're needed before working them out.

As you said, it was difficult to match Di Francesco's and Greiner's derivations. Greiner does not introduce the ##\omega_a##, so when I tried to split my result up into a piece multiplying ##\omega_a## and ##\partial_{\mu}\omega_a## in the OP, this step is of course absent from Greiner's treatment, so I cannot really match. I think, though, that in terms of (2.125) of Di Francesco we have the notation mapping
$$\begin{cases}
\delta \phi_r(x) \rightarrow \omega_a \frac{\delta F}{\delta \omega_a}\\
\delta x^{\nu} \rightarrow \omega_a \frac{\delta x^{\nu}}{\delta \omega_a}
\end{cases}$$
But that matching doesn't hold water when one examines the detail. A Taylor expansion in a parameter ##\omega## doesn't give the FMS expressions that you've shown on the right hand sides above.

My final question (apologies for an extended reply) is to do with what you said in post #9. Assuming for a moment that the starting point is justified, by imposing a rigid transformation, we obtained ##[...]_2 = 0##. We then use this result when we consider the case of ##\omega = \omega(x)## to be left with only (2.140) in Di Francesco. My question is: What permits us to use this result?
It's an application of the stuff I wrote near the middle of my post #7 concerning functions ##f(x)## and ##g(x)##, and what we can deduce about polynomial equations involving them. So first tell me whether you've understood that part of post #7.
 
  • #27
strangerep said:
So to answer your question about how the field changes at "that same value of x", just think about changing its direction in the tangent space at that point.
That makes sense - since the coordinates and the field 'live' in two different spaces, you can rotate the field at the same position x. However, the quantity ##\phi'(x) - \phi(x)## is (infinitesimally) equal to the full generator of the transformation, which means the variation is due to an orbital piece (coordinate variation) and an internal piece (field variation). How does this make sense with the above way of thinking about the quantity ##\phi'(x) - \phi(x)##?

The union of all those tangent spaces over every point of the base manifold is called a "tangent bundle". You might also hear the term "vector bundle". When you study gauge field theory, you might also hear the related term "principal bundle". The underlying idea is the same: one imposes extra structure anchored at each point ##x##, to represent extra properties of the field.
I had heard the term 'tangent space' before in the context of Lie algebras; which is to say, the Lie algebra corresponds to a special type of vector space called the tangent space. Geometrically, in 2D space, this set comprises all vectors that are tangent to the group manifold, formed from the action of said group on space-time. I thought that was correct, but that seems to make the use of the term tangent bundle defunct, so perhaps there is a small subtlety above.
strangerep said:
I presume you're talking about the step from the 2nd to the 3rd line of (2.44). If so, try thinking of the quantity
$$ \def\Pdrv#1#2{\frac{\partial #1}{\partial #2}}
\def\pdrv#1{\frac{\partial}{\partial #1}}
\pi_{r\mu}(x) ~:=~ \Pdrv{\phi_r(x)}{x^\mu}
$$as just another field. Then we ask: what is the (ordinary) variation of the ##\pi## field applicable in the current problem? We know that, under these variations, we have ##x \to x'## and ##\phi_r(x) \to \phi'_r(x)##. Hence we also have
$$\pdrv{x^\mu} ~\to~ \pdrv{x'^\mu} ~~.$$Hence, we can write out the explicit expression for ##\pi'_{r\mu}(x')##, and hence ##\delta\pi_{r\mu}(x)## -- which is what Greiner has done here.
Thanks, it now makes sense.

It's an application of the stuff I wrote near the middle of my post #7 concerning functions ##f(x)## and ##g(x)##, and what we can deduce about polynomial equations involving them. So first tell me whether you've understood that part of post #7.
Yes, it made sense. In the first case, upon imposing the rigid transformation (##\omega## a constant), we have that ##[...]_2 = 0##. I was wondering how we can then use this result subsequently, when ##\omega## is no longer a constant.
 
  • #28
[continued from above]
Hmmm, I have managed to make contact between the derivations of Greiner and Di Francesco. When I subbed the notation mappings from my last post into Greiner's derivation, I worked it all through and ended up with the same expression for the Noether current as Di Francesco, but with two more terms multiplying omega. In addition to the terms already multiplying omega in the OP, I have these two added on as well: $$-\left(\partial_{\mu} \frac{\partial L}{\partial(\partial_{\mu}\phi)}\right) \frac{\partial \phi}{\partial x^{\nu}} \frac{\delta x^{\nu}}{\delta \omega_a} - \frac{\partial L}{\partial(\partial_{\mu}\phi)} \left(\partial_{\mu} \frac{\partial \phi}{\partial x^{\nu}}\right) \frac{\delta x^{\nu}}{\delta \omega_a}$$
I would agree that this might not make sense to you, not having done the exercise yourself, but I believe those two additional terms are correct. My reason for saying so is that when I applied the Lorentz transformation of the coordinates and the field to the expression multiplying omega previously, I was left with a term antisymmetric in two indices. (So this meant ##[...]_2## did not vanish as we wish it to.) When I apply the result of those newly found terms, out pops exactly the same term but with the signs reversed, so that when I add it to my previous result, everything cancels.
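(By the product rule, these two extra terms combine into the single total-derivative piece $$-\partial_{\mu}\left(\frac{\partial L}{\partial(\partial_{\mu}\phi)}\,\frac{\partial \phi}{\partial x^{\nu}}\right)\frac{\delta x^{\nu}}{\delta \omega_a}\,.$$)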

I realize there is still some thinking to be done as to how Di Francesco can even start to think about making omega a function of x in the first place, given his eqn (2.125) and the fact that this assumes omega is manifestly constant. But doesn't the fact that I managed to recover most of the terms via Greiner's approach give it some reliability? That said, I looked over my argument again and I can't find any way to get those additional two terms. :frown:
 
  • #29
CAF123 said:
[...] which means the variation is due to an orbital piece (coordinate variation) and an internal piece (field variation). How does this make sense with the above way of thinking about the quantity ##\phi'(x) - \phi(x)##?
Sounds ok to me, except that I wouldn't call the 2nd piece "internal". A better name might be "intrinsic", since it's relevant to the intrinsic spin indices. Then the term "internal piece" could be used later for gauge tuplet indices.

I had heard the term 'tangent space' before in the context of Lie algebras; which is to say, the Lie algebra corresponds to a special type of vector space called the tangent space. Geometrically, in 2D space, this set comprises all vectors that are tangent to the group manifold, formed from the action of said group on space-time. I thought that was correct, but that seems to make the use of the term tangent bundle defunct, so perhaps there is a small subtlety above.
Consider an abstract Lie group (without reference to any action on spacetime or whatever). The abstract group is a manifold by definition. Its Lie algebra is the particular tangent space of the group manifold that is anchored at the identity element.

Another way to think about tangent spaces is to imagine the set of all possible paths through a base manifold (let's say it's n-dimensional). At each point ##x## of the manifold, there is an additional n-dimensional space of (infinitesimal) directions that paths through ##x## could take. Therefore, to properly describe all paths (and not just the base manifold itself), we need at least a 2n-dimensional space, being the product of the n-dimensional base manifold and the n-dimensional direction spaces at each point. Those "direction spaces" are what is usually called the tangent spaces.

The whole tangent bundle (2n-dimensional in the case above) is usually hard to visualize. Even if we start with a simple 2D base manifold, the tangent bundle is then 4D, hence difficult to visualize.

Yes, it made sense. In the first case, upon imposing the rigid transformation (##\omega## a constant), we have that ##[...]_2 = 0##. I was wondering how we can then use this result subsequently, when ##\omega## is no longer a constant.
The key is that ##[...]_2## is independent of ##\omega##. So if you can show it to be 0 for some values of ##\omega## then it must still be 0 for all of them -- since changing ##\omega## has no effect on ##[...]_2##.

Unfortunately, this crucial ##\omega##--independence is totally non-obvious in FMS's notation.
 
  • #30
CAF123 said:
I realize there is still some thinking to be done as to how Di Francesco can even start to think about making omega a function of x in the first place, given his eqn (2.125) and the fact that this assumes omega is manifestly constant. But doesn't the fact that I managed to recover most of the terms via Greiner's approach give it some reliability?
No! Just because someone can fudge a correct answer does not mean their method is good.

Suppose someone showed you a piece of math in which they used the equation
$$ \frac{d}{dx}\,x^3 ~=~ \sin x $$and managed to deduce somehow that ##1 + 1 = 2##. You wouldn't say that obtaining that correct result means that their calculus is right. On the contrary, we should probably just laugh at that elementary error, and not waste any time following what was or wasn't "derived" from that error.

Similarly, I think you are investing too much valuable time following the consequent details of FMS's derivation. If you can't give all the symbols in eqn (2.125) a rigorously valid meaning, and justify FMS's "can be written" sweeping statement, then you're just wasting your time trying to proceed beyond that point. There are lots of other, more profitable, ways you could be spending your time. E.g., by studying that entire chapter of Greiner thoroughly, and trying to do his exercise 2.2 without looking at his solution. Or studying Ballentine and doing his exercises. :biggrin:
 
  • #31
strangerep said:
Sounds ok to me, except that I wouldn't call the 2nd piece "internal". A better name might be "intrinsic", since it's relevant to the intrinsic spin indices. Then the term "internal piece" could be used later for gauge tuplet indices.
The reason I was unclear about why ##\phi'(x) - \phi(x)## should be equal to the full generator is that the coordinates in ##\phi'(x) - \phi(x)## are not shifted at all, so it appears there is no orbital piece having any effect.

Thanks for the suggestion about Ballentine - would you recommend his book for introducing the path integral formalism/Feynman path integrals and time-dependent perturbation theory? I also have Sakurai and Griffiths.

I have a few more questions with regard to material from earlier on in the book by FMS, if that is ok.
- P.40 eqn (2.133). I was trying to understand the factor of 1/2 appearing on the LHS of that equation. Intuitively, I think the 1/2 is there to compensate for exactly half of the entries in the ##\omega_{\rho \nu}## matrix not being independent, so the number of independent generators is also halved. However, I was looking to try to get the half via a more mathematical analysis: Given $$i \omega_{\rho \nu}L^{\rho \nu}\Phi = \omega_{\rho \nu} \left(\frac{\delta x^{\mu}}{\delta \omega_{\rho \nu}} \frac{\partial \Phi}{\partial x^{\mu}} - \frac{\delta F}{\delta \omega_{\rho \nu}}\right) \Rightarrow -i \omega_{\nu \rho}L^{\rho \nu}\Phi = -\omega_{\nu \rho} \left(-\frac{\delta x^{\mu}}{\delta \omega_{\nu \rho}} \frac{\partial \Phi}{\partial x^{\mu}} + \frac{\delta F}{\delta \omega_{\nu \rho}}\right) $$ and then relabelling gives $$-i \omega_{\rho \nu}L^{\nu \rho}\Phi = \omega_{\rho \nu} \left(\frac{\delta x^{\mu}}{\delta \omega_{\rho \nu}} \frac{\partial \Phi}{\partial x^{\mu}} - \frac{\delta F}{\delta \omega_{\rho \nu}}\right)$$ then I tried to add this to the first equation, but it didn't give (2.133). Do you have any ideas?

-In another P.SE thread here: http://physics.stackexchange.com/questions/119381/spin-matrix-for-various-spacetime-fields, I obtain the generator of rotations of the SO(2) rotation group for an infinitesimal rotation of 2D vectors, collectively comprising a vector field. I then tried to relate this to the spin-1/2 electron system, but it appears vectors representing states for that system transform under the Pauli matrices instead. Is there an underlying reason for this? I also noticed that $$\text{Id} + \omega \begin{pmatrix} 0&1\\-1&0 \end{pmatrix} = \text{Id} + i \omega \begin{pmatrix} 0&-i \\i&0 \end{pmatrix},$$ so I seemed to have made contact with one of the Pauli matrices. What is so special about this particular Pauli matrix showing up here?

Thank you, strangerep.
 
  • #32
CAF123 said:
Thanks for the suggestion about Ballentine - would you recommend his book for introducing the path integral formalism/Feynman path integrals and time-dependent perturbation theory?
Ballentine is a modern development of QM, not QFT. The reason I keep pushing Ballentine in your direction is that I sense your proficiency in ordinary QM needs improvement.

For path-integral stuff,... well,... Greiner & Reinhardt develop QFT by both the canonical method, and by path integrals.

I have a few more questions with regard to material from earlier on in the book by FMS, if that is ok. - P.40 eqn (2.133). I was trying to understand the factor of 1/2 appearing on the LHS of that equation. [...] Do you have any ideas?
Not really. It's a double-counting thing. There is some freedom in how one defines generators and parameters (up to a scale factor), so one chooses the factor to make subsequent calculations more convenient.
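Concretely: for antisymmetric ##\omega_{\rho\nu}## and any coefficient array ##A^{\rho\nu}##, $$\omega_{\rho\nu}A^{\rho\nu} = \tfrac{1}{2}\,\omega_{\rho\nu}\left(A^{\rho\nu} - A^{\nu\rho}\right),$$ so each independent pair ##(\rho,\nu)## is counted twice in the sum, and the explicit 1/2 compensates for that double counting.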
[...] so I seem to have made contact with one of the Pauli matrices. What is so special about this particular Pauli matrix showing up here?
Any 2x2 matrix can be expressed as a linear combination of the Pauli matrices and the unit matrix, so that much is unremarkable. To get more insight, you could try working out the effect of the other Pauli matrices on an arbitrary 2D vector. What transformation of the 2D vectors do they generate? IOW, what matrices do you get when you exponentiate ##a \sigma_x## and ##b \sigma_z## (where ##a,b## are real parameters) ?

And what is the most general linear transformation of a 2D vector space?
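For concreteness, here's a minimal numerical sketch of that exercise (assuming numpy and scipy are available; the variable names below are mine):

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices sigma_x, sigma_z and the real generator J of 2D rotations
sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
J  = np.array([[0., 1.], [-1., 0.]])

a, b, w = 0.3, 0.5, 0.7

print(expm(a * sx))  # cosh(a)*I + sinh(a)*sigma_x : a symmetric squeeze, not a rotation
print(expm(b * sz))  # diag(e^b, e^-b)             : a pure scaling, not a rotation
print(expm(w * J))   # [[cos(w), sin(w)], [-sin(w), cos(w)]] : a genuine rotation
```

Only the ##\sigma_y## direction (via ##J = i\sigma_y##) exponentiates, with a real parameter, to an orthogonal rotation; the other two generate non-compact transformations -- a hint toward the last question above.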
 
Last edited:
  • #33
CAF123 said:
The reason I was unclear about why ##\phi'(x) - \phi(x)## should equal the full generator is that the coordinates in ##\phi'(x) - \phi(x)## are not shifted at all, so it appears there is no orbital piece having any effect.
Did you have any comments with regard to what I wrote above? There is a sketch on p.67 of Ballentine that I thought might help, but I couldn't get much from it in terms of answering my question.

strangerep said:
Ballentine is a modern development of QM, not QFT. The reason I keep pushing Ballentine in your direction is that I sense your proficiency in ordinary QM needs improvement.
You are right, I have only done a single Griffiths-level QM course. More comes after the summer break, before the following year in which I do the QFT courses. I should also mention that the reason I am inclined to pursue FMS (although I accepted Greiner's derivation of the Noether current) is that it is the book the professor I am working with assigned to aid me in my project. Despite my not having done any QFT, the professor is keen for me to do a little on correlation functions and Ward identities. I believe these have connections to the quantum field theory counterparts of the conserved currents arising from the classical field theory analysis. Would you be able to tell me if the coverage of this in FMS looks reliable? (pp.42-45, 104-109.) Thanks very much. After the project, I will study the more general treatment from other texts, e.g. some of those you mentioned.


Any 2x2 matrix can be expressed as a linear combination of the Pauli matrices and the unit matrix, so that much is unremarkable.
Is it correct to say that the spin-1/2 electron spin states transform under the quantity $$1 + i\omega \begin{pmatrix} 0&-i\\i&0 \end{pmatrix}?$$ I just wondered because, from what I have read, the spin-1/2 electron system transforms under the fundamental rep of SU(2) (the rescaled Pauli matrices). I suppose those states can be mapped to vectors in a 2D Euclidean plane (since j=1/2 implies 2 values for m), in which case they would transform. Or, if I understand Ballentine, p.172 eqn (7.50) correctly, to make contact with what I wrote I would need ##a \cdot \sigma = i \omega \sigma_y##, with vanishing coefficients for the other Pauli matrices in the linear combination. However, this would involve making one of the coefficients complex, equal to ##i\omega##.
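As a sanity check (my own step, not from Ballentine): since ##\sigma_y^2 = 1##, exponentiating this guess gives $$e^{i\omega\sigma_y} = \cos\omega\,\mathbb{1} + i\sin\omega\,\sigma_y = \begin{pmatrix} \cos\omega & \sin\omega \\ -\sin\omega & \cos\omega \end{pmatrix},$$ i.e. this particular SU(2) element is just the real SO(2) rotation matrix, which presumably is why ##\sigma_y## is the one that shows up.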
 
  • #34
CAF123 said:
Did you have any comments with regard to what I wrote above?
Well,... (deep breath...), this involves some rather advanced concepts in field theory, but I'll try to give a sketch...

Have you ever heard the saying that "elementary particle types can be classified according to the unitary irreducible representations of the Poincare group"?

A related, but perhaps easier, concept is that the values of total spin and spin projection are determined by finding the unitary irreducible representations of the rotation group. The latter is exactly what Ballentine performs in section 7.1. I get the feeling many people kinda gloss over that section, eager to move onward, but it contains incredibly important foundational material that should be studied -- and then meditated upon. :biggrin:

In the case of the full Lorentz group (and hence the Poincare group), one finds that there are no finite-dimensional unitary irreducible representations, but only infinite-dimensional reps. Hence they are necessarily field representations.

Now comes the big insight: the Lagrangians and the fields they're built from work in concert to yield a representation of the Poincare algebra(!). This is a deep and crucial insight, essentially responsible for "why field theory is the way it is" -- to quote Weinberg. It means that we can find certain expressions built from the fields which satisfy the Poincare commutation relations. In the classical case, this is implemented via Poisson brackets and functional derivatives. You can study this further in Greiner, section 2.5. The quantities corresponding to each continuous symmetry of the Lagrangian also generate that symmetry transformation -- in the sense of Poisson brackets.
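For reference (conventions and signs vary from book to book; this is one common choice), the commutation relations in question are $$[P^{\mu}, P^{\nu}] = 0\,, \qquad [M^{\mu\nu}, P^{\rho}] = i\left(\eta^{\mu\rho}P^{\nu} - \eta^{\nu\rho}P^{\mu}\right),$$ $$[M^{\mu\nu}, M^{\rho\sigma}] = i\left(\eta^{\mu\rho}M^{\nu\sigma} - \eta^{\mu\sigma}M^{\nu\rho} - \eta^{\nu\rho}M^{\mu\sigma} + \eta^{\nu\sigma}M^{\mu\rho}\right),$$ and the claim is that one can construct ##P^{\mu}## and ##M^{\mu\nu}## out of the fields such that these relations hold with commutators replaced by Poisson brackets (and the factors of ##i## adjusted accordingly).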

This principle -- of building a field representation of the Poincare group -- then guides the choice of possible interactions between the free fields. The fields in the interacting theory must still give a representation of the Poincare group, though it is a different representation from that given by the free fields. Weinberg shows how this usefully restricts the possible choice of interaction terms in the Lagrangian.

Ballentine does a similar thing for the non-relativistic Galilean case in his section 3.4, case (iii). It's only the relatively easy case of a spinless particle interacting with an external field, but the guiding principle is that he's still trying to ensure that the net result gives a representation of the Galilean algebra. This criterion severely restricts the possible forms of the interaction, but it turns out that this covers a vast number of cases.

Anyway,... getting back to the classical field case... the orbital part of the generator is still in there, though slightly disguised. See, e.g., Greiner's eq. (2.70). (BTW, did you ever look at the MTW reference I mentioned earlier? It's relevant here.)
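Schematically (and modulo each book's sign conventions), for a field with spin the total Lorentz generator splits as $$L^{\mu\nu} = \underbrace{i\,(x^{\mu}\partial^{\nu} - x^{\nu}\partial^{\mu})}_{\text{orbital}} \;+\; \underbrace{S^{\mu\nu}}_{\text{spin}}\,,$$ and it is the orbital piece, hiding in the derivative terms, that still acts even when you compare ##\phi'(x)## with ##\phi(x)## at the same argument ##x##.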

All this stuff is essentially why I suggested you study that whole chapter of Greiner carefully, right to the end (rather than just stopping at Noether's thm). A physicist needs a deep understanding of the field representations of symmetry groups.


You are right, I have only done a single Griffiths-level QM course. More comes after the summer break, before the following year in which I do the QFT courses. I should also mention that the reason I am inclined to pursue FMS (although I accepted Greiner's derivation of the Noether current) is that it is the book the professor I am working with assigned to aid me in my project. Despite my not having done any QFT, the professor is keen for me to do a little on correlation functions and Ward identities. I believe these have connections to the quantum field theory counterparts of the conserved currents arising from the classical field theory analysis.
That's quite an advanced topic. Have you studied Green's functions yet? (They're related to the simplest 2-point correlation functions.) I'm not sure about the wisdom of trying to study these before a basic course in QFT, but heh, maybe I'm wrong.

Would you be able to tell me if the coverage of this in FMS looks reliable? (pp.42-45, 104-109.)
I don't think I can give you reliable advice about that, since I don't know what was in your professor's mind. (Did he give you a written statement of the project, or just some vague waffle?)

Tbh, I think it's all a bit advanced for where you are right now, and you're kinda being thrown in the deep end. But you might be able to get a more intuitive understanding of correlation functions (and path integrals) from Zee's QFT book.


Is it correct to say that the spin-1/2 electron spin states transform under the quantity $$1 + i\omega \begin{pmatrix} 0&-i\\i&0 \end{pmatrix}?$$ I just wondered because, from what I have read, the spin-1/2 electron system transforms under the fundamental rep of SU(2) (the rescaled Pauli matrices).
The usual rotation group is represented in the case of spin-1/2 particles as ##SU(2,C)##, i.e., 2x2 complex unitary matrices with unit determinant.

I suppose those states can be mapped to vectors in a 2D Euclidean plane (since j=1/2 implies 2 values for m), in which case they would transform.
Except that the space is 2-complex-dimensional, not 2-real-dimensional.
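To make the complex dimensionality explicit (this parametrization is standard, though not FMS's or Ballentine's notation): a general SU(2) element can be written $$U = \begin{pmatrix} \alpha & \beta \\ -\bar{\beta} & \bar{\alpha} \end{pmatrix}, \qquad |\alpha|^2 + |\beta|^2 = 1\,, \quad \alpha, \beta \in \mathbb{C}\,,$$ and it acts on 2-component complex spinors, not on real 2D vectors.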

Or, if I understand Ballentine, p.172 eqn (7.50) correctly, to make contact with what I wrote I would need ##a \cdot \sigma = i \omega \sigma_y##, with vanishing coefficients for the other Pauli matrices in the linear combination. However, this would involve making one of the coefficients complex, equal to ##i\omega##.
Can you work out this problem: what are the (matrix) generators and Lie algebras for the groups ##SU(2,C)##, and ##SL(2,R)## ?
 
Last edited:
  • #35
strangerep said:
The usual rotation group is represented in the case of spin-1/2 particles as ##SU(2,C)##, i.e., 2x2 complex unitary matrices with unit determinant.

Except that 2-complex-dimensional, not 2-real-dimensional.
Is there a reason why the state vectors in two-dimensional space do not transform under the group of 2D real matrices SL(2,R)? (..or is that what you are getting me to see below?)
Can you work out this problem: what are the (matrix) generators and Lie algebras for the groups ##SU(2,C)##, and ##SL(2,R)## ?
SU(2) is locally isomorphic to SO(3), which means it shares the same Lie algebra as SO(3), satisfying the commutation relations ##[T_a, T_b] = i\epsilon_{abc}T_c##. In two dimensions, suitable representations of the generators are ##T_a = \frac{1}{2}\sigma_a##, where ##\sigma_a## are the Pauli matrices.

For ##SO(2,R)##, the generator would be the antisymmetric matrix whose exponential gives the 2x2 rotation matrices. For SL(2,R), from this document (http://infohost.nmt.edu/~iavramid/notes/sl2c.pdf), it appears the Lie algebra is the same up to a sign in the last commutation relation.
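Here is a quick numerical check of these relations (a minimal sketch assuming numpy is available; the real basis chosen for sl(2,R) below is mine, one of several possible choices):

```python
import numpy as np

# Pauli matrices; T_a = sigma_a / 2 generate su(2) in the fundamental rep
sig = [np.array(m, dtype=complex) for m in
       ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]])]
T = [s / 2 for s in sig]

# Levi-Civita symbol in 3 dimensions
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[j, i, k] = 1, -1

# Verify [T_a, T_b] = i * eps_abc * T_c for all pairs
for a in range(3):
    for b in range(3):
        comm = T[a] @ T[b] - T[b] @ T[a]
        rhs = 1j * sum(eps[a, b, c] * T[c] for c in range(3))
        assert np.allclose(comm, rhs)
print("su(2) commutation relations verified")

# A real basis for sl(2,R): two symmetric generators and one rotation generator
K1, K2 = sig[0].real / 2, sig[2].real / 2   # sigma_x/2, sigma_z/2 (real)
J3 = (1j * sig[1]).real / 2                 # i*sigma_y/2 = [[0,1],[-1,0]]/2 (real)
assert np.allclose(K1 @ K2 - K2 @ K1, -J3)  # the flipped sign relative to su(2)
print("sl(2,R): [K1, K2] = -J3, the sign flip relative to su(2)")
```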
 
Last edited:
