4-vector upper and lower indices

SUMMARY

The discussion focuses on notation and index manipulation in quantum field theory (QFT), particularly upper and lower indices on derivatives and in Lagrangian densities. The participants clarify how the metric raises and lowers indices, e.g. $g^{\mu\nu}\partial_\mu = \partial^\nu$, and why keeping this straight is essential for consistent calculations. They emphasize avoiding conflicting repeated indices, which break the summation convention, particularly when differentiating the Lagrangian density $\mathcal{L} = \frac{1}{2}\partial^\mu\phi\,\partial_\mu\phi$. Careful bookkeeping of indices is essential for accurate results in QFT.

PREREQUISITES
  • Understanding of quantum field theory (QFT) concepts
  • Familiarity with tensor notation and index manipulation
  • Knowledge of Lagrangian mechanics in physics
  • Proficiency in calculus, particularly partial derivatives
NEXT STEPS
  • Study the implications of index notation in general relativity
  • Learn about the summation convention and its applications in tensor calculus
  • Explore the derivation of equations of motion from the Lagrangian density
  • Investigate common pitfalls in tensor calculations and how to avoid them
USEFUL FOR

Students and researchers in theoretical physics, particularly those studying quantum field theory, as well as anyone involved in advanced mathematical physics requiring a solid grasp of tensor notation and index manipulation.

Plaetean
I'm working through some intro QFT using Peskin accompanied by David Tong's notes, and have a question over notation. From Peskin I have:

$$x^\mu = (x^0, x^1, x^2, x^3) = (t, \mathbf{x})$$
and
$$x_\mu = g_{\mu\nu}x^\nu = (x^0, -x^1, -x^2, -x^3) = (t, -\mathbf{x}),$$
so
$$p_\mu p^\mu = g_{\mu\nu}p^\mu p^\nu = E^2 - |\mathbf{p}|^2,$$
with
$$\partial_\mu = \frac{\partial}{\partial x^\mu} = \left(\frac{\partial}{\partial x^0}, \nabla\right).$$
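As a note on the convention assumed here: Peskin uses the mostly-minus metric $g_{\mu\nu} = \mathrm{diag}(+1,-1,-1,-1)$, so lowering an index simply flips the sign of the spatial components,
$$x_0 = x^0, \qquad x_i = -x^i \quad (i = 1, 2, 3),$$
which is where the $E^2 - |\mathbf{p}|^2$ above comes from.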

Does this mean that:

$$\partial^\mu = \frac{\partial}{\partial x_\mu} = \left(\frac{\partial}{\partial x^0}, -\nabla\right)$$

and if so, is there any reason why the upper/lower index flips when expressing a derivative, compared with writing an ordinary vector? It's a bit of a pain when you're starting out, so I'm guessing there must be a good reason for it that emerges later.
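A quick chain-rule sketch of why the derivative naturally carries a lower index (assuming the usual convention that coordinates transform as $x'^\mu = \Lambda^{\mu}{}_{\nu}\, x^\nu$):
$$\frac{\partial}{\partial x'^\mu} = \frac{\partial x^\nu}{\partial x'^\mu}\,\frac{\partial}{\partial x^\nu} = (\Lambda^{-1})^{\nu}{}_{\mu}\,\frac{\partial}{\partial x^\nu},$$
which is exactly how a lower-index (covariant) vector transforms, so the derivative is written $\partial_\mu = \partial/\partial x^\mu$. Raising it with the metric then flips the sign of the spatial part, $\partial^\mu = g^{\mu\nu}\partial_\nu = (\partial/\partial x^0, -\nabla)$, matching the guess above.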

I'd also like someone to just confirm that I've taken this derivative properly (might seem a bit laboured but I want to make triple sure I've got the notation correct right away):

If we have a Lagrangian density of
$$\mathcal{L} = \frac{1}{2}\partial^\mu\phi\,\partial_\mu\phi,$$
the derivative with respect to $\partial_\mu\phi$ is:
$$\frac{\partial \mathcal{L}}{\partial(\partial_\mu\phi)} = \frac{\partial}{\partial(\partial_\mu\phi)}\left(\frac{1}{2}\partial^\mu\phi\,\partial_\mu\phi\right) = \frac{\partial}{\partial(\partial_\mu\phi)}\left(\frac{1}{2}g^{\mu\nu}\partial_\mu\phi\,\partial_\mu\phi\right) = \frac{1}{2}g^{\mu\nu}\frac{\partial}{\partial(\partial_\mu\phi)}\left(\partial_\mu\phi\,\partial_\mu\phi\right) = \frac{1}{2}g^{\mu\nu}\frac{\partial}{\partial(\partial_\mu\phi)}(\partial_\mu\phi)^2$$

$$= g^{\mu\nu}\partial_\mu\phi = \partial^\mu\phi.$$

Thanks as always to you good folk.
 
The answer is correct - but your working needs some work: you are introducing too many repeated indices, which messes up the summation convention and confuses you. It is probably a fortunate coincidence that your sloppy working produces the correct result in this case!

In particular,
$$g^{\mu\nu}\partial_\mu = \partial^\nu \neq \partial^\mu.$$

It is good bookkeeping practice to avoid conflicts between indices that are not related - i.e. you should rename dummy indices so they do not coincide. As an illustration, here is the correct way to do this calculation:
$$\begin{aligned}
\frac{\partial \mathcal{L}}{\partial (\partial_\mu \phi)} &= \frac{\partial}{\partial (\partial_\mu \phi)} \left[\frac{1}{2}\, \partial^{\alpha} \phi\,\partial_{\alpha} \phi\right] = \frac{1}{2}\, \frac{\partial}{\partial (\partial_\mu \phi)} \left[g^{\alpha \beta}\, \partial_{\beta} \phi\,\partial_{\alpha} \phi \right] \\
&= \frac{1}{2}\, g^{\alpha \beta} \left[\delta^{\mu}_{\ \beta}\, \partial_{\alpha} \phi + \partial_{\beta} \phi\, \delta^{\mu}_{\ \alpha}\right] = \frac{1}{2} \left[ g^{\alpha \mu}\, \partial_{\alpha} \phi + g^{\mu \beta}\, \partial_{\beta} \phi \right] = \partial^{\mu} \phi.
\end{aligned}$$
This may seem awfully tedious, and as you get more familiar, there is a tendency to simply skip steps, but for more complicated scenarios, it is extremely important that we keep track of the indices very carefully.
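As a concrete cross-check of the result above, here is a minimal symbolic sketch (assuming SymPy is available; the symbols d0..d3 are just stand-ins for the four lower-index components $\partial_\mu\phi$, treated as independent variables):

```python
# Minimal SymPy sketch: check that dL/d(d_mu phi) = d^mu phi for L = (1/2) d^a phi d_a phi.
# d_lower[mu] stands for the lower-index component "partial_mu phi".
import sympy as sp

d_lower = sp.symbols('d0:4')                       # d_lower[mu] = partial_mu phi
g = sp.diag(1, -1, -1, -1)                         # Minkowski metric, signature (+,-,-,-)

# Raise the index: partial^mu phi = g^{mu nu} partial_nu phi (this metric is its own inverse)
d_upper = [sum(g[mu, nu] * d_lower[nu] for nu in range(4)) for mu in range(4)]

# Lagrangian density L = (1/2) partial^mu phi partial_mu phi
L = sp.Rational(1, 2) * sum(d_upper[mu] * d_lower[mu] for mu in range(4))

# Differentiating with respect to each partial_mu phi should return partial^mu phi
for mu in range(4):
    assert sp.simplify(sp.diff(L, d_lower[mu]) - d_upper[mu]) == 0

print("dL/d(partial_mu phi) = partial^mu phi for all mu")
```

The assertions just restate, component by component, that $\partial\mathcal{L}/\partial(\partial_\mu\phi) = \partial^\mu\phi$.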
 
Fightfish said:

Thanks for this - it's kind of bizarre that I have never come across this kind of thing explicitly in my courses, and I'm finding it really hard to find clear online material on it as well.
 
