Transformation of generator under translations

In summary, the subgroup of the Poincare group that leaves the point ##x=0## invariant is known as the Lorentz group. The action of an infinitesimal Lorentz transformation on a field ##\Phi(0)## is given by ##L_{\mu \nu}\Phi(0) = S_{\mu \nu}\Phi(0)##. Using the commutation relations of the Poincare group, we can translate the generator ##L_{\mu \nu}## to a nonzero value of ##x##. This is given by the equation $$e^{ix^{\rho}P_{\rho}} L_{\mu \nu} e^{-ix^{\sigma}P_{\sigma}} = S_{\mu \nu} - x_{\mu}P_{\nu} + x_{\nu}P_{\mu}.$$
  • #1
CAF123

Homework Statement


Let us study the subgroup of the Poincare group that leaves the point ##x=0## invariant, that is the Lorentz group. The action of an infinitesimal Lorentz transformation on a field ##\Phi(0)## is ##L_{\mu \nu}\Phi(0) = S_{\mu \nu}\Phi(0)##. By use of the commutation relations of the Poincare group, we translate the generator ##L_{\mu \nu}## to a nonzero value of ##x##: $$e^{ix^{\rho}P_{\rho}} L_{\mu \nu} e^{-ix^{\sigma}P_{\sigma}} = S_{\mu \nu} - x_{\mu}P_{\nu} + x_{\nu}P_{\mu}\,\,\,\,\,\,\,\,\,(1),$$ where the RHS is computed using the Baker-Campbell-Hausdorff formula up to the second term. Then we can write the action of the generators $$P_{\mu} \Phi(x) = -i\partial_{\mu}\Phi(x) \,\,\,\,\text{and}\,\,\,\,L_{\mu \nu}\Phi(x) = i(x_{\mu}\partial_{\nu} - x_{\nu}\partial_{\mu})\Phi(x) + S_{\mu \nu}\Phi(x)\,\,\,\,\,(2)$$

I understand that the LHS of eqn (1) describes the transformation of ##L_{\mu \nu}## under a spacetime translation. What I want to understand is how they obtained eqn (2).

Homework Equations


##\Phi'(x') = (\text{Id} - i\omega_{g}T_{g})\Phi(x)##

The Attempt at a Solution


I am trying to understand how we are able to write the action of the generators like that using eqn (1). If we consider the transformation of the fields under a Lorentz transformation, then $$\Phi'(x') = (\text{Id} - iL_{\mu \nu}\omega^{\mu \nu})\Phi(0+x)$$ using the equation listed under Homework Equations. Now expand ##\Phi(0+x) \approx \Phi(0) + x^{\mu}\partial_{\mu}\Phi(0)##.

For a scalar field, ##\Phi'(x') = \Phi(x)## and so we have ##\Phi(x) = \Phi(0) + x^{\mu}\partial_{\mu}\Phi(0) + iL_{\mu \nu}\omega^{\mu \nu}\Phi(0) + iL_{\mu \nu}\omega^{\mu \nu}x^{\sigma}\partial_{\sigma}\Phi(0)##. I can't see a way to connect this to eqn (2) exactly.

If I now use the fact that the coordinates change ##x^{\rho} = \omega^{\rho}_{\,\,\,\nu}x^{\nu}## then the above becomes, ignoring the last term quadratic in the parameter, $$\Phi(x) = \Phi(0) - \omega^{\rho}_{\,\,\,\nu}x^{\nu}\partial_{\rho}\Phi(0) + iL_{\mu \nu}\omega^{\mu \nu}\Phi(0)\,\,\,\,(3)$$ Now antisymmetrise over the components of ##\omega##, add the result to (3) gives $$2\Phi(x) = 2\Phi(0) + \omega^{\rho \nu}(x_{\rho}\partial_{\nu} - x_{\nu}\partial_{\rho})\Phi(0) + 2i\omega^{\mu \nu}S_{\mu \nu}\Phi(0)$$

So I am making contact with the orbital and intrinsic generators, but I can't quite get an exact result to match with eqn (1) in order to read off the explicit forms for the generators ##P_{\mu}## and ##L_{\mu \nu}##

Many thanks.
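[Not part of the original post.] As an independent sanity check, the explicit forms in eqn (2) (with the spin part ##S_{\mu\nu}## set to zero) do satisfy the Poincare commutator ##[P_{\rho}, L_{\mu \nu}] = i(\eta_{\rho \mu}P_{\nu} - \eta_{\rho \nu}P_{\mu})## quoted later in the thread. A quick symbolic verification sketch, assuming metric signature ##(+,-,-,-)##:

```python
import sympy as sp

# Coordinates x^0..x^3 and a generic scalar test function f(x).
x = sp.symbols('x0:4')
f = sp.Function('f')(*x)

eta = sp.diag(1, -1, -1, -1)  # Minkowski metric, signature (+,-,-,-) assumed

def P(rho, g):
    """P_rho g = -i d_rho g  (d_rho = d/dx^rho)."""
    return -sp.I * sp.diff(g, x[rho])

def L(mu, nu, g):
    """Orbital part of eqn (2): L_{mu nu} g = i (x_mu d_nu - x_nu d_mu) g."""
    x_mu = sum(eta[mu, a] * x[a] for a in range(4))  # lowered index
    x_nu = sum(eta[nu, a] * x[a] for a in range(4))
    return sp.I * (x_mu * sp.diff(g, x[nu]) - x_nu * sp.diff(g, x[mu]))

# Check [P_rho, L_{mu nu}] f = i (eta_{rho mu} P_nu - eta_{rho nu} P_mu) f
ok = all(
    sp.expand(
        P(rho, L(mu, nu, f)) - L(mu, nu, P(rho, f))
        - sp.I * (eta[rho, mu] * P(nu, f) - eta[rho, nu] * P(mu, f))
    ) == 0
    for rho in range(4) for mu in range(4) for nu in range(4)
)
print(ok)  # True
```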
 
  • #2
Let's focus first just on a simple piece of this problem, i.e., $$P_{\mu} \Phi(x) = -i\partial_{\mu}\Phi(x) ~. $$Do you understand how to obtain this? (If so, give a sketch.)
 
  • #3
Hi strangerep, sorry for the late reply.
Yes I think I get the overall idea but, as I point out below, there are a few mathematical intricacies that I do not understand.

Consider the expansion of the new field at the new position, ##\Phi'(x')##. Then $$\Phi'(x') \approx \Phi(x) + \omega_{a}\frac{\delta F}{\delta \omega_a}(x)\,\,\,\,(1)$$ But also $$\Phi'(x') \approx \Phi(x') - \omega_a \frac{\delta x^{\mu}}{\delta \omega_a} \frac{\delta \Phi(x')}{\delta x^{\mu}} + \omega_{a} \frac{\delta F}{\delta \omega_a}(x')\,\,\,\,(2)$$ F was defined to be a function of the field: ##F=F(\Phi(x)) := \Phi'(x')##
Eqn (2) is not so clear right now, but here are my thoughts:

Translations form an abelian group and so admit only 1D irreps with which the fields transform. Therefore, ##\Phi'(x') = \Phi(x)##. So under a translation ##x'^{\mu} = x^{\mu} + a^{\mu}##, $$\Phi'(x') = \Phi(x) = \Phi(x' - a) \approx \Phi(x') - \omega_{a}\frac{\delta \Phi(x')}{\delta\omega_a}$$ Sub into (1). Then I was wondering why there is a prime on the x at the end of (1). Again, my thoughts being that we have simply defined a function numerically equivalent to one in the unprimed system: ##F' = F'(\Phi(x')) \equiv F(\Phi(x))##. If all that is correct, then I can move on.

It is now just a case of subbing the above into the generic transformation of fields: $$\Phi'(x) - \Phi(x) = -i\omega_a G_a \Phi(x) \Rightarrow \Phi'(x') - \Phi(x') = -i\omega_a G_a \Phi(x')$$ by sending x → x'. Sub in the results; F is trivial as noted above and we get the result.
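[My addition, not from the thread.] The claim that ##P = -i\partial_x## generates translations, i.e. ##e^{iaP}\Phi(x) = e^{a\partial_x}\Phi(x) = \Phi(x+a)##, can be checked symbolically in one dimension with a polynomial test function (chosen so the exponential series terminates):

```python
import sympy as sp

x, a = sp.symbols('x a')
f = x**3 - 2*x + 5  # illustrative test function; cubic, so the series stops at n=3

# exp(a d/dx) f = sum_n (a^n / n!) f^(n)(x), the Taylor shift operator
shifted = sum(a**n / sp.factorial(n) * sp.diff(f, x, n) for n in range(4))

# Compare with f evaluated at the translated point x + a
print(sp.expand(shifted - f.subs(x, x + a)))  # 0
```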

My question is how to obtain the same result using what was written in the OP.
Many thanks.
 
  • #4
So let's review... ##\Phi(x)## is (I presume) a quantum field. You are given these equations:
$$L_{\mu \nu}\Phi(0) ~=~ S_{\mu \nu}\Phi(0) ~~~~~~~~~~ (0)$$ $$e^{ix^{\rho}P_{\rho}} L_{\mu \nu} e^{-ix^{\sigma}P_{\sigma}} ~=~ S_{\mu \nu} - x_{\mu}P_{\nu} + x_{\nu}P_{\mu} ~~~~~(1) $$and you want to understand how to obtain:
$$L_{\mu \nu}\Phi(x) ~=~ i(x_{\mu}\partial_{\nu} - x_{\nu}\partial_{\mu})\Phi(x) + S_{\mu \nu}\Phi(x) ~~~~~ (2)$$

So... do you understand that
$$\Phi(x+a) ~=~ e^{ia^\rho P_\rho} \Phi(x) e^{-ia^\sigma P_\sigma} ~~~~~~ (4)$$ ?

If so, you should be able to apply [an adaptation of] eq(4) to [both sides of] eq(0), then use eq(1) to arrive at eq(2).
 
  • #5
I can write ##\Phi(0+x) = e^{ix^{\rho} P_{\rho}}\Phi(0)e^{-ix^{\sigma}P_{\sigma}}## And then also ##\Phi(0+x) \approx \Phi(0) +x^{\mu}\partial_{\mu}\Phi(0) = \Phi(0) + \omega^{\mu}_{\,\,\,\nu}x^{\nu}\partial_{\mu}\Phi(0) = \Phi(0) + \omega^{\mu \nu}x_{\nu}\partial_{\mu}\Phi(0)##.
I could antisymmetrise over omega now to obtain a term ##x_{\nu}\partial_{\mu} - x_{\mu}\partial_{\nu}##. I think doing some sort of taylor expansion is the only way to bring in the partial derivative terms and I think the above way is the correct way?

Then I could multiply ##\Phi(0+x) = \Phi(x)## above by ##L_{\mu \nu}## but that would yield ##\Phi(0)## terms on the RHS and not ##\Phi(x)## as I require.

What is the adaptation you alluded to? I can write the transformation of the generator itself as ##L'_{\mu \nu} = e^{ix^{\rho}P_{\rho}}L_{\mu \nu}e^{-ix^{\sigma}P_{\sigma}}## but I am not sure what to do with this.

Thanks.
 
  • #6
It's a lot simpler than that. Just work out the following expression:
$$e^{ix^{\rho}P_{\rho}} L_{\mu \nu} \Phi(0) e^{-ix^{\sigma}P_{\sigma}} ~=~ ?$$It involves no more than 1 or 2 lines. Don't overthink it. And ignore what I called eq(0) in my post #4.

Hint: insert a ##1## between ##L_{\mu \nu}## and ##\Phi(0)##.
 
  • #7
strangerep said:
Hint: insert a ##1## between ##L_{\mu \nu}## and ##\Phi(0)##.
I see, many thanks. I thought they actually derived ##P_{\mu} = -i\partial_{\mu}## as part of the derivation, which is why I was so keen to always do a Taylor expansion.

I have a question about the RHS of your eqn(1). Here is my working:
From Hausdorff formula, ##e^{ix^{\rho}P_{\rho}}L_{\mu \nu}e^{-ix^{\sigma}P_{\sigma}} = L_{\mu \nu} - [L_{\mu \nu}, x^{\sigma}P_{\sigma}] + ...## I can rewrite the second term like ##[L_{\mu \nu}, x^{\sigma}P_{\sigma}] = x^{\sigma}[L_{\mu \nu}, P_{\sigma}] + [L_{\mu \nu}, x^{\sigma}]P_{\sigma} = ix^{\sigma}(\eta_{\sigma \nu}P_{\mu} - \eta_{\sigma \mu}P_{\nu}) + [L_{\mu \nu}, x^{\sigma}]P_{\sigma}## using the commutation relations of the Poincare group.

Then I can write it like ##ix_{\nu}P_{\mu} - ix_{\mu}P_{\nu} + (L_{\mu \nu}x^{\sigma} - x^{\sigma}L_{\mu \nu})P_{\sigma}##

Now what I did in my previous attempt at this was to now set ##L_{\mu \nu} \rightarrow S_{\mu \nu}## and so the second term vanishes since ##S_{\mu \nu}## only acts on the fields.
But I no longer like what I did here because, besides the fact that the resulting term is incorrect by a factor of i, I have no justification of why the single ##L_{\mu \nu}## term on the RHS should be solely ##S_{\mu \nu}##.

Thanks.
 
  • #8
CAF123 said:
I have a question about the RHS of your eqn(1). Here is my working:
From Hausdorff formula, ##e^{ix^{\rho}P_{\rho}}L_{\mu \nu}e^{-ix^{\sigma}P_{\sigma}} = L_{\mu \nu} - [L_{\mu \nu}, x^{\sigma}P_{\sigma}] + ...##
Shouldn't that be ##\cdots - [L_{\mu \nu}, i x^{\sigma}P_{\sigma}]## ?

I can rewrite the second term like ##[L_{\mu \nu}, x^{\sigma}P_{\sigma}] = x^{\sigma}[L_{\mu \nu}, P_{\sigma}] + [L_{\mu \nu}, x^{\sigma}]P_{\sigma} = ix^{\sigma}(\eta_{\sigma \nu}P_{\mu} - \eta_{\sigma \mu}P_{\nu}) + [L_{\mu \nu}, x^{\sigma}]P_{\sigma}## using the commutation relations of the Poincare group.

Then I can write it like ##ix_{\nu}P_{\mu} - ix_{\mu}P_{\nu} + (L_{\mu \nu}x^{\sigma} - x^{\sigma}L_{\mu \nu})P_{\sigma}##

Now what I did in my previous attempt at this was to now set ##L_{\mu \nu} \rightarrow S_{\mu \nu}## and so the second term vanishes since ##S_{\mu \nu}## only acts on the fields.
##S_{\mu \nu}## acts on spin indices (which you haven't shown explicitly on ##\Phi##). Is that what you meant?

But I no longer like what I did here because, besides the fact that the resulting term is incorrect by a factor of i,
See 1st comment above.

I have no justification of why the single ##L_{\mu \nu}## term on the RHS should be solely ##S_{\mu \nu}##.
I must say that I don't much like the way the original question is formulated. It would be nicer if ##L## were replaced by ##J##, with ##L## reserved for the orbital part. But at the same time, I don't want to confuse you (too much).

It might be better to take a step back and review section 7.2 of Ballentine on pp164-166. Do you have (or can you access) a copy of Ballentine?
 
  • #9
strangerep said:
Shouldn't that be ##\cdots - [L_{\mu \nu}, i x^{\sigma}P_{\sigma}]## ?
Yes of course, thanks. That corrects the error later on with the stray i.
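Writing out the corrected series in one place (a sketch, treating ##x^{\sigma}## as a c-number parameter so that ##[L_{\mu\nu}, x^{\sigma}] = 0##, which is the point still under discussion):

```latex
% BCH: e^{A} B e^{-A} = B + [A, B] + \tfrac{1}{2}[A,[A,B]] + \dots,
% here with A = ix^{\rho}P_{\rho},  B = L_{\mu\nu}, and
% [P_{\sigma}, L_{\mu\nu}] = i(\eta_{\sigma\mu}P_{\nu} - \eta_{\sigma\nu}P_{\mu}):
\begin{aligned}
[ix^{\sigma}P_{\sigma},\, L_{\mu\nu}]
  &= ix^{\sigma}\, i(\eta_{\sigma\mu}P_{\nu} - \eta_{\sigma\nu}P_{\mu})
   = -x_{\mu}P_{\nu} + x_{\nu}P_{\mu}, \\
[ix^{\rho}P_{\rho},\, -x_{\mu}P_{\nu} + x_{\nu}P_{\mu}] &= 0
  \qquad (\text{since } [P_{\rho}, P_{\sigma}] = 0),
\end{aligned}
```

so the series terminates after the first commutator and reproduces the ##-x_{\mu}P_{\nu} + x_{\nu}P_{\mu}## part of eqn (1) with no stray factor of ##i##.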

##S_{\mu \nu}## acts on spin indices (which you haven't shown explicitly on ##\Phi##). Is that what you meant?
Yes, and if I let ##L_{\mu \nu} \rightarrow S_{\mu \nu}## in that commutator, I get that ##S_{\mu \nu}x^{\sigma} - x^{\sigma}S_{\mu \nu} = 0## since S here does not act on an entity with any spin indices. My trouble is why can I simply send ##L_{\mu \nu} \rightarrow S_{\mu \nu}## on the RHS without also doing it on the LHS.

Just so that you know, later on in the book I am following, the same derivation is done for the dilation operator. The result is ##e^{ix^{\rho}P_{\rho}}De^{-ix^{\sigma}P_{\sigma}} = D + x^{\sigma}P_{\sigma}##. There the 'intrinsic' term (D) that appears on the RHS is also on the LHS. Then the book writes ##D\Phi(x) = (\Delta' - ix^{\sigma}\partial_{\sigma})\Phi(x)##, where ##\Delta'## has the analogous meaning to ##S_{\mu \nu}## in the other case. So it seems there is a lack of care with notation here, unless I am misunderstanding something.
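The dilation result quoted above follows from the same terminating BCH series, assuming the conformal-algebra commutator ##[D, P_{\mu}] = iP_{\mu}## (sign conventions differ between texts):

```latex
% BCH with A = ix^{\rho}P_{\rho}, B = D, assuming [D, P_{\mu}] = iP_{\mu}:
\begin{aligned}
[ix^{\rho}P_{\rho},\, D] &= -ix^{\rho}\,[D, P_{\rho}]
  = -ix^{\rho}\,(iP_{\rho}) = x^{\rho}P_{\rho}, \\
[ix^{\rho}P_{\rho},\, x^{\sigma}P_{\sigma}] &= 0
  \qquad (\text{since } [P_{\rho}, P_{\sigma}] = 0), \\
\Rightarrow\quad e^{ix^{\rho}P_{\rho}}\, D\, e^{-ix^{\sigma}P_{\sigma}}
  &= D + x^{\sigma}P_{\sigma}.
\end{aligned}
```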

Also, I read in one of the books I have that each 'full' generator of a symmetry transformation can be regarded as being composed of a spacetime part and an internal part. I believe that the spacetime part transforms the coordinates in the space and the internal part transforms the field. Is that correct? It just seems strange because, in the case at hand, we have the spacetime part acting on ##\Phi## too (not just ##S_{\mu \nu}##) in eqn (2), so it made me wonder if I understand the book right. Or perhaps it is the case that since the field is a function of the coordinates, ##\Phi = \Phi(x)##, it transforms under the spacetime part too.

I must say that I don't much like the way the original question is formulated. It would be nicer if ##L## were replaced by ##J##, with ##L## reserved for the orbital part. But at the same time, I don't want to confuse you (too much).
You did not confuse me and in fact I was thinking the same at some point last week, but just tried to deal with the notation. If we send L to J then in your eqn(2), it is easy to see that J=L+S, L being the orbital angular momentum.

It might be better to take a step back and review section 7.2 of Ballentine on pp164-166. Do you have (or can you access) a copy of Ballentine?
I took a trip this morning to the uni library and found an older edition of Ballentine, I think. Section 7.2 of that edition does not include anything on the field theory discussion, but instead more or less describes the components L and S of the total angular momentum. Is that what you wanted me to look at?
 
  • #10
CAF123 said:
Just so that you know, later on in the book I am following, the same derivation is done for the dilation operator.[...]
This is all from a book?? Geez, you should have given a specific reference in your opening post. Might have saved a lot of time and guesswork.

I'll respond to the dilation stuff after you give me the reference (assuming I can access the book, of course).

Also, I read in one of the books I have that each 'full' generator of a symmetry transformation can be regarded as being composed of a spacetime part and an internal part. I believe that the space time part transforms the coordinates in the space and the internal part transforms the field. Is that correct? It just seems strange because, in the case at hand, we have the space time part acting on ##\Phi## too (not just ##S_{\mu \nu}##) in eqn (2), so it made me wonder if I understand the book right. Or perhaps it is a case that since the field is a function of the coordinates,##\Phi = \Phi(x)##, it transforms under the space time part too.
There are a few different cases here. E.g., one could have (say) a 3-vector valued field ##f^i(x)## (aka a "spin-1" field) on spacetime. A spatial rotation must then act (somehow) on the coordinates, but also on the 3-vector index ##i##. Another example is that one could have a field which behaves as a scalar under Lorentz transformations, but is a doublet under an internal (gauge) symmetry group (e.g., the ##SU(2)_L## isospin group). People don't always write out the indices explicitly -- they expect you to keep track of it all in your head (which is difficult when learning, and can still be difficult later -- I often need to drop back to fully explicit index notation to make sure I understand something properly).

More on this below...

I took a trip this morning to the uni library and found an older version of Ballentine, I think. 7.2 of that edition does not include anything on the field theory discussion but instead more or less describing the components L and S of the total angular momentum. Is that what you have me look at?
Yes. I just wanted you to see a clearer treatment of how a spatial rotation is decomposed into 2 parts when one is not dealing with a scalar valued field. E.g., in Ballentine's case (ii) on p165, he shows in eqn(7.19) how the generic rotation operator ##\mathbf R## is decomposed into a matrix ##D## which acts on the components of the wave function (my ##f^i## above is essentially the same idea), and a differential operator ##R^{-1}## which acts on the coordinates ##x##.

His ##D## is the exponentiated spin operator -- see eqn(7.21).

In the general case there might not be 3 components -- that's specific to the spin-1 case. For other spin values, there's a different number of components (corresponding to the range of the ##m## quantum number -- see the list at the bottom of p162).
 
  • #11
strangerep said:
This is all from a book?? Geez, you should have given a specific reference in your opening post. Might have saved a lot of time and guesswork.
Sorry about that. I am concentrating on two chapters of the big yellow book on Conformal Field Theory by Di Francesco, Mathieu and Senechal that develop the symmetry transformations/action invariance under transformations and generators of the conformal group. To be precise these are:
Starting 2.4 P36 - 42.
And for what we are talking about at the moment,
Chapter 4, 4.2 P.100.

And I think I understand the notation, but would appreciate any further comments you have.
$$e^{ix^{\rho}P_{\rho}}L_{\mu \nu}\Phi(0)e^{-ix^{\sigma}P_{\sigma}} = e^{ix^{\rho}P_{\rho}}S_{\mu \nu}\Phi(0)e^{-ix^{\sigma}P_{\sigma}} = e^{ix^{\rho}P_{\rho}}S_{\mu \nu}e^{-ix^{\alpha}P_{\alpha}}e^{ix^{\gamma}P_{\gamma}}\Phi(0)e^{-ix^{\sigma}P_{\sigma}}$$ The left-hand side is also equal to ##L_{\mu \nu}\Phi(x)##, so the result follows provided that ##S_{\mu \nu} x^{\sigma} - x^{\sigma}S_{\mu \nu}=0##, which it should be since S does not act on x, so that term vanishes overall.
 
  • #12
CAF123 said:
I am concentrating on two chapters of the big yellow book on Conformal Field Theory by Di Francesco, Mathieu and Senechal that develop the symmetry transformations/action invariance under transformations and generators of the conformal group.
Hmm. Have you already mastered ordinary QFT?? This book looks like a difficult read if you haven't. More like a graduate text.

To be precise these are:
Starting 2.4 P36 - 42.
And for what we are talking about at the moment,
Chapter 4, 4.2 P.100.
Not the clearest exposition I've ever seen.

And I think I understand the notation, but would appreciate any further comments you have.
$$e^{ix^{\rho}P_{\rho}}L_{\mu \nu}\Phi(0)e^{-ix^{\sigma}P_{\sigma}} = e^{ix^{\rho}P_{\rho}}S_{\mu \nu}\Phi(0)e^{-ix^{\sigma}P_{\sigma}} = e^{ix^{\rho}P_{\rho}}S_{\mu \nu}e^{-ix^{\alpha}P_{\alpha}}e^{ix^{\gamma}P_{\gamma}}\Phi(0)e^{-ix^{\sigma}P_{\sigma}}$$ The first equality is also equal to ##L_{\mu \nu}\Phi(x)## so the result follows provided that ##S_{\mu \nu} x^{\sigma} - x^{\sigma}S_{\mu \nu}=0## which it should be since S does not act on x, so the term overall vanishes.
I would have said the result follows because ##S_{\mu\nu}## commutes with the translation generators ##P_\mu## (but also because it does indeed commute with ##x##, which is a parameter here).

So I guess we're done here?
 
  • #13
strangerep said:
Hmm. Have you already mastered ordinary QFT??
Actually, not at all. I am doing a project right now and I am only focusing on a very small subset of the book (as I said in the last post, only from 2.4 to the end of chap 2 and the beginning of chap 4). These sections deal mostly with the conformal group in classical field theory. So that I may get the bigger picture for future courses on QFT when I take them, can you recommend any good books that you yourself perhaps worked through/found useful that I can use alongside this book?
Some others here have told me that it is not the book I should be using for field theory because it is specialized. I have sought other online resources, but a book would be ideal. Thanks.

I would have said the result follows because ##S_{\mu\nu}## commutes with the translation generators ##P_\mu## (but also because it does indeed commute with ##x##, which is a parameter here).

Could you explain why your first statement is true? Using the Hausdorff formula, I arrive at ##e^{ix^{\sigma}P_{\sigma}}S_{\mu \nu}e^{-ix^{\rho}P_{\rho}} = S_{\mu \nu} - [S_{\mu \nu}, ix^{\sigma}P_{\sigma}]##
The latter term is equal to ##i(x^{\sigma}[S_{\mu \nu},P_{\sigma}] + [S_{\mu \nu}, x^{\sigma}]P_{\sigma})##. Now, if what you say is true both terms here are zero. But that would eliminate one of the terms in the final expression.

One of the commutation relations of the Poincare group is $$[P_{\rho}, L_{\mu \nu}] = i(\eta_{\rho \mu}P_{\nu} - \eta_{\rho \nu}P_{\mu})$$ I thought we would obtain the correct commutation relation for ##S_{\mu \nu}## by simply sending ##L_{\mu \nu} \rightarrow S_{\mu \nu}## in that formula.

So I guess we're done here?
Yes, just a quick query above left over. Do you have a copy of the book? Could you help with the paragraph on P.101 at the top? I was wondering what it means to say '##\Phi(x)## belongs to an irreducible representation of the Lorentz group..'. and how they obtained that '##\tilde{\Delta}## is simply a number, manifestly equal to ##-i\Delta##'.

Thanks.
 
  • #14
CAF123 said:
Actually, not at all. I am doing a project right now and I am only focusing on a very small subset of the book (as I said in the last post, only from 2.4 to end of chap 2 and beginning of chap 4.) These sections deal mostly with the conformal group in the classical field theory.
What is the project? I presume you were given some kind of summary or abstract to work from?

So that I may get the bigger picture for future courses on QFT when I take them, can you recommend any good books that you yourself perhaps worked through/ found useful that I can use alongside this book?
I can't really recommend anything for conformal field theory, since (afaict) it doesn't have much practical use in fundamental physics. I tend to group it with string theory and SUSY. I've heard that it has uses in condensed matter, but that's not my area.

Some others here have told me that it is not the book I should be using for field theory because it is specialized. I have sought other online resources, but a book would be ideal.
"Field theory" covers a lot of ground. (And have you noticed my signature line?)

It depends what level of knowledge you've reached right now. For classical EM, Jackson is my go-to book. For more advanced optics and coherent states, Mandel & Wolf is the bible. For ordinary QFT,... well,... the series of textbooks by Greiner (and various co-authors) got me started. Then Peskin & Schroeder for more advanced treatment and 1-loop calculations. Weinberg is considered the QFT "bible", but it's very difficult. Maggiore is also worth a look. Hendrik van Hees (aka "vanhees71" on PF) has an extensive script on QFT, so if you study from that you can always get help here from the author. Zee gives a useful overview of path integral methods in QFT, but is not likely to teach you how to perform the difficult integrals needed for cross-sections.

TBH, I wouldn't advise trying anything more advanced than Greiner or Peskin & Schroeder until you've mastered the way that Ballentine develops ordinary QM (which involves representation theory at a more introductory level).

[Edit:]You might also find some early sections of Greiner's "Field Quantization" helpful for stuff about classical fields in Lagrangian/Hamiltonian formulation.

Could you explain why your first statement is true?
Ah, well,... this is a bit subtle and you're not the first person to be perplexed about that. OK... (deep breath...)

Look again at that section of Ballentine I mentioned earlier: i.e., eqn(7.19) in case(ii) on p165. The bold R on the lhs is an operator on Hilbert space. But the ##D## and ##R^{-1}## on the rhs are specific to the current representation. Urk -- now I have to explain what "representation" means...
For now, I'll just point out that, under a rotation, a vector transforms differently from a scalar. (The scalar remains unchanged, whereas the components of the vector get mixed around.) One says that scalars and vectors correspond to different representations of the rotation group: the same (active) rotation in physical 3D does different things to scalars than it does to vectors. The intrinsic spin part of angular momentum captures this distinction: scalars are called spin-0, vectors are spin-1, etc.

Now suppose we perform a translation in 3-space. The coordinate origin changes of course, but the distinction between "scalar" and "vector" remains. (Imagine translating a vector anchored "here" to become a vector anchored "there". It doesn't stop being a vector -- its "vectorness" is an intrinsic property.)

Using the Hausdorff formula, I arrive at ##e^{ix^{\sigma}P_{\sigma}}S_{\mu \nu}e^{-ix^{\rho}P_{\rho}} = S_{\mu \nu} - [S_{\mu \nu}, ix^{\sigma}P_{\sigma}]##
The latter term is equal to ##i(x^{\sigma}[S_{\mu \nu},P_{\sigma}] + [S_{\mu \nu}, x^{\sigma}]P_{\sigma})##. Now, if what you say is true both terms here are zero. But that would eliminate one of the terms in the final expression.

One of the commutation relations of the Poincare group is $$[P_{\rho}, L_{\mu \nu}] = i(\eta_{\rho \mu}P_{\nu} - \eta_{\rho \nu}P_{\mu})$$ I thought we would obtain the correct commutation relation for ##S_{\mu \nu}## by simply sending ##L_{\mu \nu} \rightarrow S_{\mu \nu}## in that formula.

Let's take a step back. And I'll use ##J_{\mu\nu}## for total angular momentum. The commutation relations for the Poincare algebra, e.g., ##[P_\lambda, J_{\mu\nu}] = \cdots## are true regardless of which representation we're acting on. One says that these are the abstract elements of the Poincare algebra. But when you write something like ##P_\lambda = -i\partial_\lambda##, you're implicitly specializing the abstract ##P_\lambda## to the form it takes when acting on a representation made of wave functions. One says that the Hilbert space of square-integrable wave functions "carries a representation of ##P_\lambda## in the form of a differential operator". This distinction between an abstract element of a Lie algebra, and its specific form in a certain representation, is one of the most important things to grasp in all of modern physics.

Now comes a more difficult part: a vector-valued wave function is actually constructed as a tensor product of a representation of the rotation group (this is the spin part) and a representation of the translation group (the x part). In tensor products like this, operators acting on one part of the product are blind to the other -- that's the bottom-line reason why the spin operator commutes with the translation operator. But... have you encountered tensor products yet in your QM studies? If not, I suppose the above will seem quite obscure, and you'll have to be satisfied (for now) with thinking about how the intrinsic "vectorness" or "scalarness" of a physical entity doesn't change if you move from "here" to "there".

When we write ##J_{\mu\nu} = L_{\mu\nu} + S_{\mu\nu}##, what's really happening is this: we want a representation of the rotation group that acts on vector-valued functions of ##x##. But this involves a tensor product space, so we must find representations that act on each part independently, and then form the product. Look at Ballentine's eqns (7.17), (7.20) and (7.21). Putting them together, we have
$$e^{i\theta \hat n \cdot J/\hbar}
~=~ e^{i\theta \hat n \cdot L/\hbar} \, e^{i\theta \hat n \cdot S/\hbar} ~.$$But L and S act on different parts of the tensor product (by construction), hence they commute. So we can put them back inside a single exponent, leading to the formula ##J_{\mu\nu} = L_{\mu\nu} + S_{\mu\nu}##.

Strictly speaking, we should be writing something like
$$e^{i\theta \hat n \cdot J/\hbar}
~=~ e^{i\theta \hat n \cdot L/\hbar} \, \otimes \, e^{i\theta \hat n \cdot S/\hbar} ~,$$to emphasize that we're working with a tensor-product representation here. But physicists rarely do that. They just keep in mind that the different parts operate on different parts of the overall wave function.
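[Not in the original exchange.] The claim that operators acting on different tensor factors are blind to each other is easy to verify numerically. A sketch with stand-in matrices (an arbitrary 3x3 "orbital" operator ##A## and the spin-1/2 matrix ##S_z##; both choices are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))           # arbitrary operator on the orbital factor
Sz = np.array([[1, 0], [0, -1]]) / 2  # spin-1/2 S_z (in units of hbar)

L_full = np.kron(A, np.eye(2))    # A acting as A (x) 1 on the product space
S_full = np.kron(np.eye(3), Sz)   # Sz acting as 1 (x) Sz

comm = L_full @ S_full - S_full @ L_full
print(np.allclose(comm, 0))  # True: they commute by construction
```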

So, when you said that you simply put ##J_{\mu\nu} \to S_{\mu\nu}##, you're ignoring the spatial dependence. I.e., you're ignoring one term of the tensor product. Certainly, for the specific case ##x=0##, the orbital term is zero, but you can't ignore it in general.

That's why I didn't like the notation in that book: it thoroughly glosses over most of the really important foundational stuff I sketched above.

[Edit #2] The situation is different depending on whether we're dealing with the non-relativistic (Galilean) or relativistic (Poincare) case. In the former, total angular momentum decomposes covariantly into orbital and spin parts (meaning that each part retains its identity under Galilean transformations). But for Poincare, Lorentz boosts can mix the orbital and spin parts, hence the decomposition is not really meaningful. (Are you familiar with the Pauli-Lubanski spin vector?)

In the conformal case, we deal only with massless fields, and things are different again. There are various subtleties even in ordinary (Poincare) QFT when trying to construct massless quantum fields.

So some of what I said above might not be quite right for the CFT case. I must look at more of that CFT book to check -- but right now it's my bedtime.
[End Edit #2]

Could you help with the paragraph on P.101 at the top? I was wondering what it means to say '##\Phi(x)## belongs to an irreducible representation of the Lorentz group..'.
I hope you start to see from my explanation above that the question "what is a representation" is a very big one. For current purposes, you can probably translate it to something like ##\Phi## is a scalar, or spinor, or vector, or (etc), under Lorentz transformations.

and how they obtained that '##\tilde{\Delta}## is simply a number, manifestly equal to ##-i\Delta##'.
Their explanation is poor. They should have said that the 1st commutator in (4.29), together with Schur's lemma, means that ##\tilde\Delta## must be a multiple of the identity. (The "multiple" is denoted ##-i\Delta##.) Hence the lhs of the 2nd commutator in (4.29) vanishes, and hence the rhs of that commutator (i.e., ##-i\kappa_\mu##) vanishes also.

BTW, you should probably move some of these larger non-HW questions to the quantum forum where others can (probably) give a wider perspective than I can. CFT is not really my area, but there's other people on PF who know a lot about it.
 
  • #15
strangerep said:
What is the project? I presume you were given some kind of summary or abstract to work from?
Yes, I have a list of aims that more or less get me looking into the basics of the theory before looking into an application. I am focusing on the conformal group in the classical field theory realm since I am yet to take any course on QFT.

TBH, I wouldn't advise trying anything more advanced than Greiner or Peskin & Schroeder until you've mastered the way that Ballentine develops ordinary QM (which involves representation theory at a more introductory level).
[Edit:]You might also find some early sections of Greiner's "Field Quantization" helpful for stuff about classical fields in Lagrangian/Hamiltonian formulation.
Many thanks. I will keep them all in mind and perhaps look into Greiner because your edit is also relevant to what I have studied in the project.

Look again at that section of Ballentine I mentioned earlier: i.e., eqn(7.19) in case(ii) on p165. The bold R on the lhs is an operator on Hilbert space. But the ##D## and ##R^{-1}## on the rhs specific to the current representation. Urk -- now I have to explain what "representation" means...
For now, I'll just point out that, under a rotation, a vector transforms differently from a scalar. (The scalar remains unchanged, whereas the components of the vector get mixed around.) One says that scalars and vectors correspond to different representations of the rotation group: the same (active) rotation in physical 3D does different things to scalars than it does to vectors. The intrinsic spin part of angular momentum captures this distinction: scalars are called spin-0, vectors are spin-1, etc.
A scalar transforms trivially under the Lorentz group, so we can map the scalar to the trivial representation (map to 1), which encapsulates its trivial transformation nicely. Any scalar representation can presumably be mapped to 1, e.g. a particle possessing spin-0 transforms like a scalar.

Now comes a more difficult part: a vector-valued wave function is actually constructed as a tensor product of a representation of the rotation group (this is the spin part) and a representation of the translation group (the x part).
So, the translation part encodes the orbital dependence of the overall wave function and the spin part encodes the spin value of the wave function. I.e., if we consider the wave function of an electron possessing a total spin s=1/2 and some arbitrary orbital dependence ##\Phi(x)##, then the full wavefunction is a tensor product ##\Phi'(x) = |s=1/2 \rangle \otimes \Phi(x)##. The other use for tensor products I know of is that it naturally leads to the Clebsch-Gordan series decomposition for the tensor product of two spin representations.
In tensor products like this, operators acting on one part of the product are blind to the other -- that's the bottom-line reason why the spin operator commutes with the translation operator.
It makes sense conceptually.

When we write ##J_{\mu\nu} = L_{\mu\nu} + S_{\mu\nu}##, what's really happening is this: we want a representation of the rotation group that acts on vector-valued functions of ##x##. But this involves a tensor product space, so we must find representations that act on each part independently, and then form the product. Look at Ballentine's eqns (7.17), (7.20) and (7.21). Putting them together, we have
$$e^{i\theta \hat n \cdot J/\hbar}
~=~ e^{i\theta \hat n \cdot L/\hbar} \, e^{i\theta \hat n \cdot S/\hbar} ~.$$But L and S act on different parts of the tensor product (by construction), hence they commute. So we can put them back inside a single exponent, leading to the formula ##J_{\mu\nu} = L_{\mu\nu} + S_{\mu\nu}##.

Strictly speaking, we should be writing something like
$$e^{i\theta \hat n \cdot J/\hbar}
~=~ e^{i\theta \hat n \cdot L/\hbar} \, \otimes \, e^{i\theta \hat n \cdot S/\hbar} ~,$$to emphasize that we're working with a tensor-product representation here. But physicists rarely do that. They just keep in mind that the different parts operate on different parts of the overall wave function.
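Both claims -- that operators acting on different tensor factors commute, and that the single exponential of the sum therefore factorizes into a tensor product of exponentials -- can be checked in a finite-dimensional sketch. This assumes numpy, with a random Hermitian matrix standing in for the orbital generator (which is really a differential operator):

```python
import numpy as np

def expm_h(H):
    """Matrix exponential of a Hermitian matrix via its eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(w)) @ V.conj().T

# Spin factor: S_z for spin-1/2. Orbital factor: a random Hermitian 3x3
# stand-in for the (infinite-dimensional) orbital generator.
Sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
Lz = 0.5 * (A + A.conj().T)

I2, I3 = np.eye(2), np.eye(3)
L_full = np.kron(Lz, I2)   # acts only on the "orbital" factor
S_full = np.kron(I3, Sz)   # acts only on the spin factor

# By construction they commute...
comm = L_full @ S_full - S_full @ L_full
assert np.allclose(comm, 0)

# ...so putting them back inside a single exponent is legitimate:
# e^{L+S} equals the tensor product of the separate exponentials.
lhs = expm_h(L_full + S_full)
rhs = np.kron(expm_h(Lz), expm_h(Sz))
assert np.allclose(lhs, rhs)
```

The `np.kron` calls are the finite-dimensional version of "tensoring with the identity on the other factor", which is why the commutator vanishes identically rather than approximately.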
To make contact with the example I posted above, are you saying that the ##e^{i\theta \hat n \cdot L/\hbar}## term is the same as my ##\Phi(x)## and the ##e^{i\theta \hat n \cdot S/\hbar}## is the same as my ##|s=1/2\rangle##?

So, when you said that you simply put ##J_{\mu\nu} \to S_{\mu\nu}##, you're ignoring the spatial dependence. I.e., you're ignoring one term of the tensor product. Certainly, for the specific case ##x=0##, the orbital term is zero, but you can't ignore it in general.
Intuitively it at least makes sense but if I use this knowledge in my commutator it simply vanishes: $$e^{ix^{\rho}P_{\rho}}L_{\mu \nu}\Phi(0) e^{-ix^{\sigma}P_{\sigma}} = e^{ix^{\rho}P_{\rho}}S_{\mu \nu}\Phi(0)e^{-ix^{\sigma}P_{\sigma}} = e^{ix^{\rho}P_{\rho}}S_{\mu \nu}e^{-ix^{\gamma}P_{\gamma}}e^{ix^{\alpha}P_{\alpha}}\Phi(0)e^{-ix^{\sigma}P_{\sigma}}$$
Then ##e^{ix^{\rho}P_{\rho}}S_{\mu \nu}e^{-ix^{\gamma}P_{\gamma}} = S_{\mu \nu} - [S_{\mu \nu}, ix^{\sigma}P_{\sigma}] = S_{\mu \nu} - i(x^{\sigma}[S_{\mu \nu}, P_{\sigma}] + [S_{\mu \nu}, x^{\sigma}]P_{\sigma})##

So the first term in the reexpressed commutator, given the above discussion, will vanish and so will the second since x is just a parameter of the translation.
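The operator identity being manipulated in these commutators, ##e^{X} Y e^{-X} = Y + [X,Y] + \tfrac{1}{2!}[X,[X,Y]] + \dots## (the Hadamard form of BCH used to derive eqn (1)), can itself be verified numerically with small matrices standing in for the abstract generators -- a sketch assuming numpy:

```python
import numpy as np

def expm_series(M, terms=60):
    """Matrix exponential via its power series (fine for small matrices)."""
    out = np.eye(M.shape[0], dtype=complex)
    term = np.eye(M.shape[0], dtype=complex)
    for n in range(1, terms):
        term = term @ M / n
        out = out + term
    return out

rng = np.random.default_rng(1)
X = 0.3 * rng.standard_normal((4, 4))   # stand-in for i x.P
Y = rng.standard_normal((4, 4))         # stand-in for the generator L_{mu nu}

lhs = expm_series(X) @ Y @ expm_series(-X)

# Sum the nested-commutator series Y + [X,Y] + (1/2!)[X,[X,Y]] + ...
rhs = np.zeros((4, 4), dtype=complex)
nested, fact = Y.astype(complex), 1.0
for n in range(20):
    rhs += nested / fact
    nested = X @ nested - nested @ X   # next nested commutator [X, .]
    fact *= n + 1

assert np.allclose(lhs, rhs)
```

In eqn (1) the series truncates after the single-commutator term only because ##[P_\sigma, [P_\rho, L_{\mu\nu}]]## is proportional to ##[P,P] = 0##; for generic matrices all terms contribute, as the sketch shows.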
I hope you start to see from my explanation above that the question "what is a representation" is a very big one. For current purposes, you can probably translate it to something like ##\Phi## is a scalar, or spinor, or vector, or (etc), under Lorentz transformations.
Ok, would 'irreducible' be used just so that we may use Schur's lemma?

Their explanation is poor. They should have said that the 1st commutator in (4.29), together with Schur's lemma, means that ##\tilde\Delta## must be a multiple of the identity. (The "multiple" is denoted ##-i\Delta##.) Hence the lhs of the 2nd commutator in (4.29) vanishes. Hence the rhs of that commutator (i.e., ##-i\kappa_\mu##) vanishes also.
Yes, that makes more sense to me.

BTW, you should probably move some of these larger non-HW questions to the quantum forum where others can (probably) give a wider perspective than I can. CFT is not really my area, but there are other people on PF who know a lot about it.
Thanks, and thanks for taking the time to write a long and insightful post.
 
Last edited:
  • #16
As I was almost ready to go beddy-bye, I had a horrible feeling that some of the things I said might not be quite right in the context of the conformal group. See my "Edit #2" in previous post.

I'll respond to the other stuff tomorrow (my time).

In the meantime, list your project aims in more detail. I can't help feeling that jumping into the conformal group (for anything) is unwise if you haven't yet achieved proficiency in the Poincare case.
 
  • #17
strangerep said:
In the meantime, list your project aims in more detail. I can't help feeling that jumping into the conformal group (for anything) is unwise if you haven't yet achieved proficiency in the Poincare case.
Studying the conformal group in d>2 dimensions, writing explicit representations of the generators and studying the algebra, noting the isometries in d+2 dimensional space and the isomorphism with SO(d+1,1), and applications in electrodynamics.
I am studying the Poincare group as I go along as well; the Lorentz group together with translations forms a subgroup of the conformal group, so I suppose it is natural that I will touch upon it.
Thanks.
 
  • #18
I'll respond to the pieces of your earlier post separately (since some require more thought than others).

CAF123 said:
A scalar transforms trivially under the Lorentz group, so we can map the scalar to the trivial representation (map to 1) which would encapsulate its trivial transformation nicely. Any scalar representation can likely be mapped to 1, e.g a particle possessing spin-0 transforms like a scalar.
That's the right idea, but not quite the correct way to express it. I need to explain a little more about representations...

Mathematically, a representation (of a group, say) is a mapping from the abstract group elements to operators on a linear space. So, e.g., in the spin-1 representation, we have a mapping from the abstract elements of ##SO(3)## to 3x3 matrices acting on the usual 3D vector space. Those matrices are said to represent the abstract group elements, and the entire 3D vector space is said to be the "carrier space" for the representation -- meaning that there are operators (matrices) on the vector space that represent the group elements faithfully, including the multiplicative group properties, etc, etc.

In the scalar (i.e., spin-0) case, the linear carrier space is 1-dimensional, hence all group elements map to 1x1 matrices. A 1x1 orthogonal matrix must have determinant ##\pm 1##, so those matrices are just the numbers 1 and -1. For the proper rotations of ##SO(3)## only the value 1 occurs; the -1 case arises once we include parity inversion (the full group ##O(3)##), and then we're actually dealing with "pseudo-scalars" -- meaning that they change sign under parity inversion, unlike ordinary scalars.

So your last sentence above should be changed to something like: In any scalar representation, the group elements are all represented trivially by the identity.
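To make the spin-1 case concrete, here is a numpy sketch of the vector representation, using the standard generators ##(J_i)_{jk} = -i\epsilon_{ijk}##. It checks the so(3) commutation relations and that exponentiating ##J_z## reproduces the familiar 3x3 rotation matrix:

```python
import numpy as np

# Levi-Civita symbol and the spin-1 generators (J_i)_{jk} = -i eps_{ijk}.
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0
J = [-1j * eps[i] for i in range(3)]

# so(3) commutation relations: [J_i, J_j] = i eps_{ijk} J_k.
for i in range(3):
    for j in range(3):
        comm = J[i] @ J[j] - J[j] @ J[i]
        assert np.allclose(comm, sum(1j * eps[i, j, k] * J[k] for k in range(3)))

# A finite rotation about z: e^{-i theta J_z} acts on 3D vectors as the
# ordinary rotation matrix (computed here via the eigendecomposition of J_z).
theta = 0.7
w, V = np.linalg.eigh(J[2])                     # J_z is Hermitian
Rz = (V * np.exp(-1j * theta * w)) @ V.conj().T
assert np.allclose(Rz.real[:2, :2],
                   [[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
```

These 3x3 matrices are exactly the operators that "represent the abstract group elements" on the 3D carrier space, in the sense described above.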
 
  • #19
CAF123 said:
So, the translation part encodes the orbital dependence of the overall wave function and the spin part encodes the spin value of the wave function. I.e if we consider the wave function of an electron possessing a total spin s=1/2 and some arbritary orbital dependence ##\Phi(x)## then the full wavefunction is a tensor product ##\Phi'(x) = |s=1/2 \rangle \otimes \Phi(x)##.
Again, that's heading in the right direction, but not quite correct in the detail. The trouble with your formulation is that the component ##\Phi^{+1/2}(x)## could have different x-dependence from ##\Phi^{-1/2}(x)##.

It's better to think of it as a tensor product of spaces. Something like
$$\{ |1/2\rangle \,,\, |-1/2\rangle\} ~\otimes~ L^2(R^3)$$I.e., the 2D Hilbert space ##C^2## tensored with the space of square-integrable functions on position space.

To make contact with the example I posted above, are you saying that the ##e^{i\theta \hat n \cdot L/\hbar}## term is the same as my ##\Phi(x)## and the ##e^{i\theta \hat n \cdot S/\hbar}## is the same as my ##|s=1/2\rangle##?
No. The exponentials are operators, but the ket is a vector.

As for ##\Phi## -- here there is a big difference between a classical field (where it's just a function), a QM function (which is a vector in a Hilbert space), and a quantum field (where it's an operator on a Fock space).

The stuff that I referenced in Ballentine corresponds to the 2nd case (QM) where the ##\Phi## wavefunction is a vector, so we write the generic rotation transformation as ##\Phi \to R \Phi##.

But for the 3rd case (QFT), ##\Phi## is an operator so it transforms under a generic rotation as ##\Phi \to R \Phi R^{-1}##. So in section 4.2.1 of FMS, they're really dealing with the quantum field case, even though the untrained eye might think it's still the classical case.
 
  • #20
CAF123 said:
Intuitively it at least makes sense but if I use this knowledge in my commutator it simply vanishes: [...]
I begin to dislike FMS more and more. Back on p98, in eq(4.18) they use the symbol ##L_{\mu\nu}## for orbital angular momentum. But here on p100 they silently change it to mean the total angular momentum. No word about the distinction between the abstract generator and specific representations of it. From what they've written, I don't think it is possible to go from their eq(4.25) to eq(4.26) in the way they sketch. I think they are just using their knowledge of the correct result to fudge an explanation. My advice is to put that book aside for now.

The correct underlying idea is as follows. The difference between "total angular momentum of a field about a point ##a_0##" and "total angular momentum of a field about a point ##a_1##" does indeed equal an orbital-like term ##-b_\mu P_\nu + b_\nu P_\mu##, where ##b = a_1 - a_0##. To see this, can you access a copy of the "Gravitation" bible by Misner, Thorne & Wheeler? See Box 5.6 on pp157-159. Then they show how to go from the spin part to the total expression. This is the best compact (classical) treatment of the relationships between total, spin and orbital angular momenta that I've found. (It's also good for debunking the oft-quoted phrase that "spin is a quantum concept". It's not, of course, as MTW shows. Only the quantized values for angular momentum arise from the quantum framework.)
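The classical 3D version of this statement is easy to check directly: shifting the reference point by ##b## changes ##\vec L = \vec r \times \vec p## by exactly the orbital term ##-\vec b \times \vec p##, the 3D analogue of ##-b_\mu P_\nu + b_\nu P_\mu##. A quick numpy sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
r, p, b = (rng.standard_normal(3) for _ in range(3))

L0 = np.cross(r, p)        # angular momentum about the point a0 = 0
L1 = np.cross(r - b, p)    # angular momentum about the shifted point a1 = b

# The difference is the purely orbital piece -b x p.
assert np.allclose(L1, L0 - np.cross(b, p))
```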

I also think you should (as a high priority) get a copy of Greiner's "Field Quantization" (if you haven't already) and study ch2 thoroughly -- which really does deal with classical field theory. They use Poisson brackets to represent abstract group commutators, which is the correct way to do things in the classical case.
 
  • #21
CAF123 said:
Ok, would 'irreducible' be used just so that we may use Schur's lemma?
I would guess so.

There's actually a lot of variations of Schur's Lemma, applicable to different situations. If you haven't already noticed, Ballentine's appendices A and B contain useful (and compact) discussions of Schur's Lemma, and the irreducibility of Q and P, which often perplexes QM students.
 
  • #22
strangerep said:
Again, that's heading in the right direction, but not quite correct in the detail. The trouble with your formulation is that the component ##\Phi^{+1/2}(x)## could have different x-dependence from ##\Phi^{-1/2}(x)##.
Can I just check the notation here? ##\Phi^{1/2}(x)## and ##\Phi^{-1/2}(x)## are components of the full orbital part of the wavefunction (or vector in Hilbert space)? And this orbital part may be written as a linear combination of suitable basis kets, for example position or ##L^2, L_z## eigenkets.
It's better to think of it as a tensor product of spaces. Something like
$$\{ |1/2\rangle \,,\, |-1/2\rangle\} ~\otimes~ L^2(R^3)$$I.e., the 2D Hilbert space ##C^2## tensored with the space of square-integrable functions on position space.
So the generic spin dependence is spanned by (via our choice) the eigenkets of Sz. And ##L^2(R^3)## is the set of all possible orbital dependencies of the system (we have no further knowledge about the orbital dependence of the spin 1/2 electron so this space is infinite dimensional without further constraints) spanned by some eigenkets of an observable, again of our choice, appropriate for the problem at hand.

No. The exponentials are operators, but the ket is a vector.
Woops, yes of course. We have the general rotation representation written as a tensor product of the spin part and the orbital part: $$\rho = \rho^{\text{orbital}} \otimes \rho^{\text{spin}} = \exp\left(-iL\cdot n/\hbar\right) \otimes \exp\left(-iS \cdot n/\hbar\right)$$ and this acts on the total wavefunction, where the orbital (spin) operator acts only on the orbital (spin) dependence of the wavefunction. ##\rho## is a label for the full rotation representation acting on kets in the space.

The ##\rho## are typically rotation matrices. What is the physical reasoning for the necessity of a tensor product of the two reps of the orbital and spin parts? In this link, subsection 'Tensor Product of Linear Maps' http://en.wikipedia.org/wiki/Tensor_product the tensor product of two matrices involves all possible multiplications between components in one matrix and in the other. Is this the reason why we have a tensor product of reps here?

The other motivation I had was that the tensor product leads correctly to the Clebsch-Gordan series for the total angular momentum decomposition for spin states, but going back to the actual meaning above is probably more beneficial.
The correct underlying idea is as follows. The difference between "total angular momentum of a field about a point ##a_0##" and "total angular momentum of a field about a point ##a_1##" does indeed equal an orbital-like term ##-b_\mu P_\nu + b_\nu P_\mu##, where ##b = a_1 - a_0##. To see this, can you access a copy of the "Gravitation" bible by Misner, Thorne & Wheeler? See Box 5.6 on pp157-159. Then they show how to go from the spin part to the total expression. This is the best compact (classical) treatment of the relationships between total, spin and orbital angular momenta that I've found. (It's also good for debunking the oft-quoted phrase that "spin is a quantum concept". It's not, of course, as MTW shows. Only the quantized values for angular momentum arise from the quantum framework.)
The library is closed today, but I will go in tomorrow. I checked the catalogue already and there is a copy there so I will check MTW out and take Greiner out. Then I will come back to you if I have further queries. Thanks.
 
Last edited:
  • #23
CAF123 said:
Can I just check the notation here? ##\Phi^{1/2}(x)## and ##\Phi^{-1/2}(x)## are components of the full orbital part of the wavefunction (or vector in Hilbert space)?
They are the spinor components of the full wavefunction (I'm restricting here to the spin-1/2 case). I don't know what you mean by "orbital part" of the wavefunction. The wavefunction has spinor indices (the ##\pm 1/2##), and also a spacetime dependence denoted via ##x##.

Sometimes one denotes spinor indices by A,B,C,... and gives these indices the values 1,2. But I chose to give them values ##\pm 1/2##.
Alternatively, you could write the full wavefunction as $$\Phi(x) ~=~ \begin{pmatrix}f(x) \\ g(x) \end{pmatrix} ~,$$ where ##f,g## correspond to my earlier ##\Phi^{\pm 1/2}(x)##.

And this orbital part may be written as a linear combination of suitable basis kets, for example position or ##L^2, L_z## eigenkets.
No. The spacetime part would typically involve complex exponentials, or some other set of functions that form a solution set of whatever differential wave equation is applicable to the current system. By writing ##\Phi^{\pm 1/2}(x)##, I've already expressed the spinor part in terms of basis kets for (spin-1/2) angular momentum, projected along an arbitrary direction, ##z## say.

So the generic spin dependence is spanned by (via our choice) the eigenkets of Sz. And ##L^2(R^3)## is the set of all possible [STRIKE]orbital [/STRIKE] spacetime dependencies of the system (we have no further knowledge about the [STRIKE]orbital [/STRIKE] spacetime dependence of the spin 1/2 electron so this space is infinite dimensional without further constraints) spanned by some eigenkets of an observable, again of our choice, appropriate for the problem at hand.
With my strike-outs of orbital, replaced by "spacetime", the above is almost correct -- except that I shouldn't have said ##L^2(R^3)##, because the normalization condition here involves not ordinary square-integrability of each component, but rather
$$\int dx \; \Phi^\dagger(x) \, \Phi(x)$$(where the dagger means conjugate transpose, as usual).

Woops, yes of course. We have the general rotation representation written as a tensor product of the spin part and the orbital part: $$\rho = \rho^{\text{orbital}} \otimes \rho^{\text{spin}} = \exp\left(-iL\cdot n/\hbar\right) \otimes \exp\left(-iS \cdot n/\hbar\right)$$ and this acts on the total wavefunction, where the orbital (spin) operator acts only on the [STRIKE]orbital[/STRIKE] spacetime (spin) dependence of the wavefunction. ##\rho## is a label for the full rotation representation acting on kets in the space.
With my strike-out correction, the above becomes correct.

The ##\rho## are typically rotation matrices.
That's only true for ##\rho^{\text{spin}}##. But ##L## is typically a differential operator, e.g., ##i\hbar(x_\mu \partial_\nu - x_\nu \partial_\mu)##.

It's important to get used to thinking of a differential operator as just another linear operator. An ordinary matrix ##M## acts on finite-dimensional Hilbert spaces and (e.g.,) maps a vector ##u## to another vector ##v = Mu##. Similarly, a differential operator ##\partial## acts on an infinite-dimensional Hilbert space (whose vectors are functions) and (e.g.,) maps a function ##F## to another function ##G = \partial F##. The only difference is that in the latter case we work with infinite-dimensional vectors: one could denote ##f_x \equiv f(x)## to make the parallel with finite-dim vectors more explicit.
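One way to make this parallel vivid is to sample a function on a grid, so that ##f_x = f(x)## literally becomes a finite vector and ##\partial## becomes an ordinary matrix. A sketch assuming numpy, using a periodic central-difference stencil:

```python
import numpy as np

# Sample f(x) = sin(x) on a periodic grid: the vector f_n = f(x_n)
# is a finite-dimensional stand-in for the "vector" f_x = f(x).
N = 400
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
f = np.sin(x)

# The derivative as an ordinary N x N matrix (periodic central differences):
# D[i, i+1] = 1/(2h), D[i, i-1] = -1/(2h).
h = x[1] - x[0]
D = (np.roll(np.eye(N), 1, axis=1) - np.roll(np.eye(N), -1, axis=1)) / (2 * h)

g = D @ f   # "G = dF": a matrix acting on a vector, approximating cos(x)
assert np.max(np.abs(g - np.cos(x))) < 1e-3
```

As the grid is refined, the matrix `D` converges to the derivative operator, which is the sense in which a differential operator is "just" an infinite-dimensional matrix.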

In this link, subsection 'Tensor Product of Linear Maps' http://en.wikipedia.org/wiki/Tensor_product the tensor product of two matrices involves all possible multiplications between components in one matrix and in the other.
In the present case, we have a tensor product between a finite-dim space and an inf-dim space -- but the underlying idea is the same.

What is the physical reasoning for the necessity of a tensor product of the two reps of the [STRIKE]orbital[/STRIKE]spacetime and spin parts? [...] Is this the reason why we have a tensor product of reps here?
Ah, this is a question of deep importance. Do you know the spectral theorem yet? If we demand that a dynamical variable be represented by a self-adjoint operator ##O## on some Hilbert space, its eigenvalues are automatically all real and the corresponding eigenvectors form a basis for the space. Physically, the eigenvalues correspond to all possible values measurable for the dynamical variable represented by ##O##. That's why we use Hilbert spaces in QM -- it captures all the possible measurement outcomes for a particular dynamical variable relevant to a given system. But then, how do we allow for the possibility of several dynamical variables whose respective operators commute? We want a Hilbert space for the composite system, and the tensor product construction is a framework for achieving that. Thus, we build larger composite Hilbert spaces from smaller elementary Hilbert spaces.
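The finite-dimensional content of the spectral theorem is easy to exhibit numerically: for a self-adjoint (Hermitian) matrix the eigenvalues come out real and the eigenvectors form an orthonormal basis, so the operator decomposes into a weighted sum of projectors onto its possible measurement outcomes. A numpy sketch:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
O = A + A.conj().T                 # a self-adjoint (Hermitian) operator

w, V = np.linalg.eigh(O)           # w: real eigenvalues, V: eigenvectors

# The eigenvectors form an orthonormal basis of the space...
assert np.allclose(V.conj().T @ V, np.eye(5))

# ...and O decomposes spectrally as O = sum_k w_k |k><k|.
assert np.allclose((V * w) @ V.conj().T, O)
```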

In his textbook, Dirac develops the entire QM framework in this way: by building up the Hilbert spaces by considering eigenvalues/vectors of self-adjoint operators. (But don't try and learn QM from Dirac's book -- it's too difficult.)


The other motivation I had was that the tensor product leads correctly to the Clebsch-Gordan series for the total angular momentum decomposition for spin states, but going back to the actual meaning above is probably more beneficial.
The Clebsch-Gordan stuff is just an application of what I described above. In a simple case, we have 2 elementary systems, whose respective Hilbert spaces carry an irreducible representation of SO(3). We want the 2-part composite space to continue to carry an SO(3) representation (though no longer irreducible of course). The so(3) generators for each part add vectorially, and from that one derives the relationship between the larger (composite) SO(3) and its 2 smaller elementary reps -- expressed as Clebsch-Gordan coefficients. Similar techniques are involved when modelling composite systems in particle physics, e.g., hadrons composed of quarks.
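The simplest instance -- two spin-1/2 systems combining into ##0 \oplus 1## -- can be sketched in numpy: build the total spin on ##\mathbb{C}^2 \otimes \mathbb{C}^2## by adding the generators vectorially and read off ##s(s+1)## from the spectrum of ##S^2##:

```python
import numpy as np

# Pauli matrices; single-particle spin operators S_i = sigma_i / 2 (hbar = 1).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
S = [0.5 * s for s in (sx, sy, sz)]

# Generators add vectorially on the composite space C^2 (x) C^2.
I2 = np.eye(2)
S_tot = [np.kron(Si, I2) + np.kron(I2, Si) for Si in S]
S2 = sum(Si @ Si for Si in S_tot)

# Spectrum of S^2 is s(s+1): 0 once (the singlet) and 2 three times
# (the triplet) -- i.e., 1/2 (x) 1/2 = 0 (+) 1.
w = np.linalg.eigvalsh(S2)
assert np.allclose(np.sort(w), [0, 2, 2, 2])
```

The change of basis from product states to the singlet/triplet eigenvectors is exactly the table of Clebsch-Gordan coefficients for this case.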
 
Last edited:
  • #24
Many thanks for all your answers!
strangerep said:
The Clebsch-Gordan stuff is just an application of what I described above. In a simple case, we have 2 elementary systems, whose respective Hilbert spaces carry an irreducible representation of SO(3).
If we consider a three dimensional Hilbert space spanned by suitable basis vectors for a j=1 spin system then this corresponds to the SO(3) fundamental representation. Is that what you mean by a Hilbert space carrying an irreducible representation? The vectors in that basis transform under this representation (e.g. acted upon by the SO(3) fundamental rep matrices).


We want the 2-part composite space to continue to carry an SO(3) representation (though no longer irreducible of course).
Since the dimension, D, of the tensor product space is the product of the dimensions ##d_{1,2}## of the constituent spaces, ##D > d_{1,2}##. Is it reducible in the sense that, in a similar vein to the fact that a 3D space spanned by 4 basis vectors means that 1 basis vector is redundant, likewise the resulting space carrying an SO(3) rep can also be reduced in some way?

In the finite group case, it is easier to see because we can use the dimensionality theorem for irreducible representations.

The so(3) generators for each part add vectorially, and from that one derives the relationship between the larger (composite) SO(3) and its 2 smaller elementary reps -- expressed as Clebsch-Gordon coefficients. Similar techniques are involved when modelling composite systems in particle physics, e.g., hadrons composed of quarks.
I am doing a course in particle physics next semester, so that is enlightening.

I was also studying parts of chapter 2 and in particular, I was trying to derive (2.141), the expression for the Noether current. I nearly have it, except two terms in my expansion won't vanish. I was wondering if you can see any way to make the following term go to zero: $$\int d^d x\,\, \omega_a \left[\partial_{\mu} \left(\frac{\delta x^{\mu}}{\delta \omega_a}\right)L - \left(\partial_{\mu} \left(\frac{\delta x^{\nu}}{\delta \omega_a}\right)\right)(\partial_{\nu}\Phi) \frac{\partial L}{\partial (\partial_{\mu} \Phi)}\right]$$

L is the lagrangian density. I have tried integration by parts, using the Euler Lagrange E.O.M's but I can't get it to vanish. I spoke to one of the professors about this and he said try again. (We did the expansion together on the board and when we saw that we had obtained the correct expression for the current given in the book, we stopped and just assumed all other terms vanished). But I can't see how it would. Two other terms vanished nicely by applying the E.O.Ms, but the terms above I don't see.

Thanks.
 
  • #25
CAF123 said:
If we consider a three dimensional Hilbert space spanned by suitable basis vectors for a j=1 spin system then this corresponds to the SO(3) fundamental representation. Is that what you mean by a Hilbert space carrying an irreducible representation? The vectors in that basis transform under this representation (e.g acted on upon by the SO(3) fundamental rep matrices).
That's 1 example, but it doesn't have to be the fundamental representation. E.g., for the spin-1/2 case we work in 2-complex-dimensional space.

Since the dimension, D, of the tensor product space is the product of the dimensions ##d_{1,2}## of the constituent spaces, ##D > d_{1,2}##. Is it reducible in the sense that, in a similar vein to the fact that a 3D space spanned by 4 basis vectors means that 1 basis vector is redundant, likewise the resulting space carrying an SO(3) rep can also be reduced in some way?
You're talking about an over-complete basis there, but that's a different thing.

Indeed, the concept of (ir)reducibility is a bit tricky. Have a (careful) read of this Wiki page. The first 2 paragraphs of the overview section are crucial: for a linear space ##V## to carry a reducible rep of a group ##G## it must be possible to find a nontrivial subspace ##W \subset V## which is preserved under the action of the group. I.e., ##G## always maps elements of ##W## to other elements of ##W##. One says that ##W## is "##G##-invariant".

Also important is the distinction between "irreducibility" and "indecomposability". Decomposability refers to the ability to make the matrices block-diagonal by a similarity transformation (an indecomposable rep admits no such splitting), and I've seen more than one textbook conflate this property with irreducibility. That Wiki page clarifies the distinction.
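A minimal concrete example of a ##G##-invariant subspace, as a numpy sketch: the defining 2D representation of rotations about a fixed axis has no invariant line over the reals, but over ##\mathbb{C}## the line spanned by ##(1,-i)## is mapped to itself by every rotation, so the rep is reducible over ##\mathbb{C}##:

```python
import numpy as np

def R(theta):
    """Defining (real, 2D) representation of rotations about a fixed axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

# Over C the line spanned by v = (1, -i) is G-invariant:
# R(theta) v = e^{i theta} v for EVERY theta, so each group element maps
# span{v} into itself -- a nontrivial invariant subspace W.
v = np.array([1.0, -1.0j])
for theta in (0.3, 1.1, 2.5):
    assert np.allclose(R(theta) @ v, np.exp(1j * theta) * v)
```

Over ##\mathbb{C}## the rep splits into the two 1-dim pieces ##e^{i\theta}## and ##e^{-i\theta}##; over ##\mathbb{R}## no invariant line exists, which shows the field matters when discussing (ir)reducibility.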

In the finite group case, it is easier to see because we can use the dimensionality theorem for irreducible representations.
If you work with the correctly formulated concept of irreducibility, there's not much difference in the inf-dim case. It's still all about finding a (nontrivial) G-invariant subspace.

Re the Noether question: I had a quick look, but it's hard to see easily what's going on, and I don't know what steps you've taken to reach this point. So I suggest you post this as a separate question, with more background and detail on your attempted solution. You might also get some clues if you look at a different derivation of Noether's thm in Greiner. It uses different notation from FMS, but may be clearer in its exposition.
 
Last edited:
  • #26
strangerep said:
If you work with the correctly formulated concept of irreducibility, there's not much difference in the inf-dim case. It's still all about finding a (nontrivial) G-invariant subspace.
Thanks, but how does it follow that by taking a tensor product of two spaces carrying an SO(3) rep that the resulting space carries a reducible SO(3) rep? So, that is to say how do we know in that space there exists a non trivial invariant subspace?

Isn't another condition for irreducibility (perhaps equivalent) that if matrix representations of different elements of the abstract group have the same eigenvectors then that representation is reducible? (It seems to be equivalent because by acting with the reps on some vector ##\vec w \in W \subset V##, since ##\vec w## is an eigenvector then it is unchanged up to some multiple ##k \vec w##, which by properties of subspaces means that ##k \vec w \in W##. This is a nontrivial subspace so the matrices are reducible)

Just going back to the main point in the OP:
strangerep said:
So... do you understand that
$$\Phi(x+a) ~=~ e^{ia^\rho P_\rho} \Phi(x) e^{-ia^\sigma P_\sigma} ~~~~~~ (4)$$ ?
One of the professors that I was meeting with said that this statement is not true and that the correct expression should not be a sandwiching of the operator but like ##\Phi'(x') = e^{-ix'\cdot P}\Phi(x)##. So which is correct?

If we go to P.40, there it is written that ##L^{\rho \nu} = i(x^{\rho}\partial^{\nu} - x^{\nu}\partial^{\rho}) + S^{\rho \nu}##(which is correct and makes sense). This means that eqn (4.26) on P.100 must be incorrect. Unless the author means that ##L^{\rho \nu}## is actually ##J^{\rho \nu}## (total angular momentum) and so we can write ##e^{ix^{\rho}P_{\rho}}L_{\mu \nu}e^{-ix^{\sigma}P_{\sigma}} = J_{\mu \nu}## perhaps?

Re the Noether question: I had a quick look, but it's hard to see easily what's going on, and I don't know what steps you've taken to reach this point. So I suggest you post this as a separate question, with more background and detailed on your attempted solution. You might also get some clues if you look at a different derivation of Noether's thm in Greiner. It uses different notation from FMS, but may be clearer in its exposition.
Thank you for your efforts. I have put another thread in the Advanced Physics H/W subforum if you would be so kind to have a look. I apologize in advance for the amount of TeX on that thread, however, I have shown all the main steps to make the argument as clear as possible.
 
Last edited:
  • #27
CAF123 said:
Thanks, but how does it follow that by taking a tensor product of two spaces carrying an SO(3) rep that the resulting space carries a reducible SO(3) rep? So, that is to say how do we know in that space there exists a non trivial invariant subspace?
Under rotations the composite space carries the diagonal action ##R \otimes R##, and this action cannot mix, e.g., the symmetric and antisymmetric combinations of the two factors with each other. (You cannot make the state of one particle somehow become the state of another independent particle.) Those combinations are therefore non-trivial invariant subspaces of the larger composite space -- for two spin-1/2 particles they are just the familiar triplet and singlet.

Isn't another condition for irreducibility (perhaps equivalent) that if matrix representations of different elements of the abstract group have the same eigenvectors then that representation is reducible?
Not sure about that, tbh. Maybe that's a question you should ask in the Linear Algebra forum.

Just going back to the main point in the OP:

One of the professors that I was meeting with said that this statement is not true and that the correct expression should not be a sandwiching of the operator but like ##\Phi'(x') = e^{-ix'\cdot P}\Phi(x)##. So which is correct?
If ##\Phi(x)## is a classical (scalar) field, or an ordinary QM wave function (state vector) of a scalar particle, then $$\Phi'(x) ~=~ e^{-ia\cdot P} \Phi(x) ~=~ \Phi(x+a) ~.$$Note: I had the wrong sign of ##a## in what I wrote earlier.

(What you wrote seems to contain a typo, -- there's a ##x'## in the exponent.)

But if ##\Phi(x)## is a quantum field (i.e., an operator), then (I believe) it should be $$\Phi'(x) ~=~ e^{-ia\cdot P} \Phi(x) e^{ia\cdot P} ~=~ \Phi(x+a) ~.$$For a better explanation, see Ballentine pp66-68.
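Whatever the sign conventions, the identity doing the work in both cases is that the exponential of the derivative operator translates the argument: ##e^{a\,d/dx} f(x) = \sum_n \frac{a^n}{n!} f^{(n)}(x) = f(x+a)##. For a polynomial the series terminates, so this can be checked exactly -- a sketch assuming numpy:

```python
import numpy as np
from math import factorial

# f(x) = 2 - x + 0.5 x^2 + 3 x^3 (coefficients in increasing order).
coeffs = [2.0, -1.0, 0.5, 3.0]
f = np.polynomial.Polynomial(coeffs)

a, x0 = 0.8, 1.3
# Taylor/translation series: f(x0) + a f'(x0) + (a^2/2!) f''(x0) + ...
shifted = f(x0) + sum((a**n / factorial(n)) * f.deriv(n)(x0)
                      for n in range(1, len(coeffs)))

# The series reproduces the translated function exactly.
assert np.isclose(shifted, f(x0 + a))
```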

(Herein you see some of the confusion that arose because you were talking about classical fields, but FMS seemed to be using QFT language.)

For more extensive clarification of this, see also Greiner's section 4.3 "Symmetry Transformations". (But note: my edition is from 1996, and newer editions might have different section numbering. It's good that Greiner keeps updating his books, but it makes precise page references likely to go out of date. In my edition, the precise page reference is pp95-96.)

If we go to P.40, there it is written that ##L^{\rho \nu} = i(x^{\rho}\partial^{\nu} - x^{\nu}\partial^{\rho}) + S^{\rho \nu}##(which is correct and makes sense). This means that eqn (4.26) on P.100 must be incorrect. Unless the author means that ##L^{\rho \nu}## is actually ##J^{\rho \nu}## (total angular momentum) and so we can write ##e^{ix^{\rho}P_{\rho}}L_{\mu \nu}e^{-ix^{\sigma}P_{\sigma}} = J_{\mu \nu}## perhaps?
That's exactly the ambiguity that I've been struggling with in this whole thread. I'm reluctant to waste any more time trying to guess what FMS meant.

I apologize in advance for the amount of TeX on that thread, however, I have shown all the main steps to make the argument as clear as possible.
No need to apologize for showing your work.

Unfortunately, I probably can't look at it in detail today. Maybe tomorrow. And maybe someone else will answer better than I can...
 
Last edited:
  • #28
Hi strangerep, just have a quick question from a previous post:
strangerep said:
Their explanation is poor. They should have said that the 1st commutator in (4.29), together with Schur's lemma, means that ##\tilde \Delta## must be a multiple of the identity. (The "multiple" is denoted ##-i\Delta##.) Hence the lhs of the 2nd commutator in (4.29) vanishes. Hence the rhs of that commutator (i.e., ##-i\kappa_\mu##) vanishes also.
Do you know where this ##-i \Delta## comes from? I have used another method and obtained this result up to a minus sign, but how is it obtained in the treatment on P.101? Does it follow from the commutation relations (4.29) somehow?

I am now writing up the project report.
 
  • #29
CAF123 said:
Do you know where this ##-i \Delta## comes from? I have used another method and obtained this result up to a minus sign, but how is it obtained in the treatment on P.101? Does it follow from the commutation relations (4.29) somehow?
Schur's lemma tells you that ##\tilde\Delta## is a multiple of the identity. They obviously intend the sign to be compatible with the definition of ##\Delta## (no tilde) in eq(2.121). But I don't have the spare time to work through it to check whether their sign conventions are consistent, sorry. :frown:
 

1. What is the transformation of a generator under translations?

It describes how a symmetry generator, here the Lorentz generator ##L_{\mu \nu}##, changes when conjugated by a translation operator ##e^{ix^{\rho}P_{\rho}}##. Since the generator is defined by its action at the point ##x=0##, conjugation expresses the equivalent generator at a nonzero value of ##x##.

2. How does a translation affect the generator?

Conjugation by a translation mixes the Lorentz generator with the momentum operators: ##e^{ix^{\rho}P_{\rho}} L_{\mu \nu} e^{-ix^{\sigma}P_{\sigma}} = S_{\mu \nu} - x_{\mu}P_{\nu} + x_{\nu}P_{\mu}##. The spin part ##S_{\mu \nu}## is unchanged, while the orbital part picks up the ##x##-dependent momentum terms.

3. What is the mathematical representation of the transformation?

It is the adjoint action of the translation subgroup on the Lorentz generators, computed with the Baker-Campbell-Hausdorff formula. Because ##[P_{\rho}, P_{\sigma}] = 0##, the BCH series terminates after the first commutator, leaving only the terms linear in ##x##.

4. Can a generator undergo multiple translations?

Yes. Translations commute, so successive conjugations compose additively: conjugating first by ##e^{ix^{\rho}P_{\rho}}## and then by ##e^{iy^{\rho}P_{\rho}}## is the same as a single conjugation by ##e^{i(x+y)^{\rho}P_{\rho}}##.

5. How is the transformation of generators under translations useful?

It lets one write down the action of the Poincare generators on a field at an arbitrary point, ##P_{\mu}\Phi(x) = -i\partial_{\mu}\Phi(x)## and ##L_{\mu \nu}\Phi(x) = i(x_{\mu}\partial_{\nu} - x_{\nu}\partial_{\mu})\Phi(x) + S_{\mu \nu}\Phi(x)##, starting only from the action at the origin. The same technique is used in conformal field theory to obtain the position-dependent form of the dilatation and special conformal generators.
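To see concretely how the differential-operator (orbital) part of ##L_{\mu \nu}## arises, here is a small symbolic check in a Euclidean 2D analogue: rotating the argument of a scalar field reproduces, at first order in the angle, the action of ##-(x\partial_y - y\partial_x)##. The sample field `phi` is arbitrary; this is only an illustrative sketch, not the book's derivation.

```python
import sympy as sp

x, y, t = sp.symbols('x y theta', real=True)
phi = sp.sin(x) * y + x**2      # an arbitrary sample scalar field

# A scalar field transforms as phi'(x) = phi(R^{-1} x) under a rotation R(theta).
phi_rot = phi.subs([(x, x * sp.cos(t) + y * sp.sin(t)),
                    (y, -x * sp.sin(t) + y * sp.cos(t))], simultaneous=True)

# First-order variation in theta, evaluated at theta = 0 ...
delta = sp.diff(phi_rot, t).subs(t, 0)

# ... matches the action of the orbital generator -(x d_y - y d_x).
generator = -(x * sp.diff(phi, y) - y * sp.diff(phi, x))
assert sp.simplify(delta - generator) == 0
print("orbital generator reproduced")
```

With the conventions of eqn (2), this is the Euclidean counterpart of the statement that ##L_{\mu \nu}\Phi(x)## contains the piece ##i(x_{\mu}\partial_{\nu} - x_{\nu}\partial_{\mu})\Phi(x)## once the field's argument is moved away from the origin.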
