Index notation for inverse Lorentz transform

  • #1

Main Question or Discussion Point

Hi all, just had a question about tensor/matrix notation for the inverse Lorentz transform. The topic was covered well here, but I'm still having trouble relating it to an equation in Schutz's Intro to GR...

So I can use the following to get an equation for the inverse:
[tex]x^{\overline{\mu}}x_{\overline{\mu}}=\Lambda^{\overline{\mu}}_{\;\alpha}x^{\alpha}\Lambda^{\beta}_{\;\overline{\mu}}x_{\beta}[/tex]
And therefore
[tex]\Lambda^{\beta}_{\;\overline{\mu}}\Lambda^{\overline{\mu}}_{\;\alpha}=\delta^{\beta}_{\;\alpha}[/tex]
This equation is just the one in Ch. 2 of Schutz. But I can just as well reason as follows:
[tex]x^{\overline{\mu}}x_{\overline{\mu}}=\eta_{\overline{\mu}\overline{\nu}}x^{\overline{\mu}}x^{\overline{\nu}}=\eta_{\overline{\mu}\overline{\nu}}\Lambda^{\overline{\mu}}_{\;\alpha}x^{\alpha}\Lambda^{\overline{\nu}}_{\;\beta}x^{\beta}=\Lambda^{\overline{\mu}}_{\;\alpha}x^{\alpha}\Lambda_{\overline{\mu}\beta}x^{\beta}[/tex]
And therefore
[tex]\Lambda_{\overline{\mu}\beta}\Lambda^{\overline{\mu}}_{\;\alpha}=\eta_{\beta\alpha}[/tex]
Or
[tex]\Lambda_{\overline{\mu}}^{\;\ \beta}\Lambda^{\overline{\mu}}_{\;\alpha}=\delta^{\beta}_{\;\alpha}[/tex]
Taken together, we seem to have
[tex]\Lambda_{\overline{\mu}}^{\;\ \beta}=\Lambda^{\beta}_{\;\overline{\mu}}[/tex]
Is this correct? It seems wrong to me, and it seems that I might've confused my tensor and matrix indices; I'm just not sure how...
 

Answers and Replies

  • #2
TeethWhitener
Science Advisor
Gold Member
Hi all, just had a question about tensor/matrix notation for the inverse Lorentz transform. The topic was covered well here, but I'm still having trouble relating it to an equation in Schutz's Intro to GR...

So I can use the following to get an equation for the inverse:
[tex]x^{\overline{\mu}}x_{\overline{\mu}}=\Lambda^{\overline{\mu}}_{\;\alpha}x^{\alpha}\Lambda^{\beta}_{\;\overline{\mu}}x_{\beta}[/tex]
And therefore
[tex]\Lambda^{\beta}_{\;\overline{\mu}}\Lambda^{\overline{\mu}}_{\;\alpha}=\delta^{\beta}_{\;\alpha}[/tex]
This equation is just the one in Ch. 2 of Schutz.
Everything after this is fine. It's this part that's incorrect. In particular:
$$x^{\overline{\mu}}x_{\overline{\mu}}\neq \Lambda^{\overline{\mu}}_{\;\alpha}x^{\alpha}\Lambda^{\beta}_{\;\overline{\mu}}x_{\beta}$$
It might help to write out the sums on each side explicitly to see where you've gone wrong.
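For instance, here is a minimal numeric sketch (my own toy example, not from Schutz: 1+1 dimensions, signature (-,+), NumPy) that carries out the contractions with a concrete boost and shows the mismatch:
[code]
import numpy as np

# Toy 1+1-dimensional example, signature (-,+); beta = 0.6 is arbitrary.
beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)
eta = np.diag([-1.0, 1.0])                  # Minkowski metric

Lam = np.array([[gamma, -gamma * beta],     # Lambda(v): x^mubar = Lam @ x^alpha
                [-gamma * beta, gamma]])
Lam_inv = np.array([[gamma, gamma * beta],  # Lambda(-v), the inverse boost
                    [gamma * beta, gamma]])

x_up = np.array([2.0, 1.0])                 # contravariant components x^alpha
x_dn = eta @ x_up                           # covariant components x_alpha

x_up_bar = Lam @ x_up                       # x^mubar
lhs = x_up_bar @ eta @ x_up_bar             # x^mubar x_mubar = -3 here

# Reading Lambda^beta_mubar as the SAME matrix Lambda(v) spoils the invariant:
rhs_same = x_up_bar @ (Lam.T @ x_dn)        # -6.375, not equal to lhs
# Reading it as the inverse boost Lambda(-v) restores it:
rhs_inv = x_up_bar @ (Lam_inv.T @ x_dn)     # -3.0, equal to lhs

print(lhs, rhs_same, rhs_inv)
[/code]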
 
  • #3
Everything after this is fine. It's this part that's incorrect. In particular:
$$x^{\overline{\mu}}x_{\overline{\mu}}\neq \Lambda^{\overline{\mu}}_{\;\alpha}x^{\alpha}\Lambda^{\beta}_{\;\overline{\mu}}x_{\beta}$$
It might help to write out the sums on each side explicitly to see where you've gone wrong.
Thanks very much for your reply; this issue has been bothering me for days now! Is the problem that I should've written this as

$$x^{\overline{\mu}}x_{\overline{\mu}}= \Lambda^{\overline{\mu}}_{\;\alpha}(\vec{v})x^{\alpha}\Lambda^{\beta}_{\;\overline{\mu}}(-\vec{v})x_{\beta}$$

I wasn't sure whether putting the bars over the indices already implied this (and indeed, without the -v things go wrong). Propagating that forward, we would then have

[tex]\Lambda_{\overline{\mu}}^{\;\ \beta}(\vec{v})=\Lambda^{\beta}_{\;\overline{\mu}}(-\vec{v})[/tex]

Which still looks off to me (do the sides not differ by a transpose?)
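Trying this numerically (a toy 1+1-dimensional NumPy sketch of my own, signature (-,+)), the two sides do differ by a matrix transpose, and that seems to be exactly what the staggered index positions encode:
[code]
import numpy as np

# Toy check of Lambda_mubar^beta(v) = Lambda^beta_mubar(-v); values are mine.
beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)
eta = np.diag([-1.0, 1.0])

Lam_v = np.array([[gamma, -gamma * beta],   # entries Lambda^mu_nu of Lambda(v)
                  [-gamma * beta, gamma]])
Lam_mv = np.array([[gamma, gamma * beta],   # entries Lambda^beta_mubar of Lambda(-v)
                   [gamma * beta, gamma]])

# Lower the first index and raise the second with eta; rows now label the
# lower index mubar, columns the upper index beta.
Lam_v_mixed = eta @ Lam_v @ eta

# The relation equates entry [mubar, beta] on the left with entry
# [beta, mubar] on the right, i.e. the arrays agree up to a transpose:
print(np.allclose(Lam_v_mixed, Lam_mv.T))   # True
# For a pure boost the matrix happens to be symmetric, so the transpose
# is invisible in this example:
print(np.allclose(Lam_mv, Lam_mv.T))        # True
[/code]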
 
  • #4
vanhees71
Science Advisor
Insights Author
Gold Member
2019 Award
It's a bad habit to indicate components with respect to different bases by different indices rather than by different symbols. Strictly speaking it's wrong. That said, let's do a more careful analysis. By definition, the Lorentz transformation between the contravariant components of the same vector ##x## in two frames is given by
$$\bar{x}^{\mu}={\Lambda^{\mu}}_{\rho} x^{\rho},$$
and the matrix ##\Lambda## is by definition a Lorentz transformation iff
$$\eta_{\mu \nu} {\Lambda^{\mu}}_{\rho} {\Lambda^{\nu}}_{\sigma}=\eta_{\rho \sigma}.$$
The transformation rule for the covariant components follows most easily from the rules for raising and lowering indices:
$$\bar{x}_{\mu} = \eta_{\mu \rho} \bar{x}^{\rho} = \eta_{\mu \rho} {\Lambda^{\rho}}_{\sigma} x^{\sigma} = \eta_{\mu \rho} {\Lambda^{\rho}}_{\sigma} \eta^{\sigma \nu} x_{\nu}={\Lambda_{\mu}}^{\nu} x_{\nu}.$$
Then from the Lorentz property of the matrix ##\Lambda## you easily derive
$${\Lambda^{\mu}}_{\rho} {\Lambda_{\mu}}^{\sigma}=\delta_{\rho}^{\sigma}, \quad(*)$$
i.e., the covariant components transform contragrediently to the contravariant components, as they must, since covariant components are the components of a linear form (dual vector) while contravariant components are the components of a vector.

In matrix notation (*) implies
$$\Lambda^{-1} = \eta \Lambda^t \eta.$$
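For concreteness, here is a quick numerical check of (*) and of the matrix identity (an illustrative NumPy sketch; the boost-times-rotation example is my own, chosen so that ##\Lambda## is not symmetric and the transpose actually matters):
[code]
import numpy as np

# Signature (-,+,+,+); beta and theta are arbitrary illustrative values.
beta, theta = 0.6, 0.3
gamma = 1.0 / np.sqrt(1.0 - beta**2)
eta = np.diag([-1.0, 1.0, 1.0, 1.0])

boost = np.array([[gamma, -gamma * beta, 0, 0],   # boost along x
                  [-gamma * beta, gamma, 0, 0],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]])
c, s = np.cos(theta), np.sin(theta)
rot = np.array([[1, 0, 0, 0],                     # rotation about z
                [0, c, -s, 0],
                [0, s, c, 0],
                [0, 0, 0, 1]])

Lam = boost @ rot          # entries Lambda^mu_rho; not a symmetric matrix
Lam_low = eta @ Lam @ eta  # entries Lambda_mu^nu (both indices moved with eta)

# (*): Lambda^mu_rho Lambda_mu^sigma = delta_rho^sigma (contract the first indices)
print(np.allclose(np.einsum('mr,ms->rs', Lam, Lam_low), np.eye(4)))  # True

# Matrix form: Lambda^{-1} = eta Lambda^t eta
print(np.allclose(np.linalg.inv(Lam), eta @ Lam.T @ eta))            # True
[/code]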
 
  • #5
Orodruin
Staff Emeritus
Science Advisor
Homework Helper
Insights Author
Gold Member
It's a bad habit to indicate components with respect to different bases by different indices rather than by different symbols. Strictly speaking it's wrong.
I know we disagree on this, so I do not think you can say that it is strictly wrong. It is a matter of notation and a question of what one considers to be coordinate dependent and what not. I don't think notation can be "wrong", but it can be more or less convenient and lucid.

My preference is to use the symbol to denote a particular tensor, which is coordinate independent and therefore should be represented by the same symbol regardless of the coordinates used. The coordinate dependent quantities are the components, and therefore I prefer to mark these with primes or other decorations on the indices. Of course, when you do not have tensors (or mean different tensors depending on the coordinate system, such as the basis vectors) and your quantities do depend on the coordinate system chosen, then this also needs to be indicated; examples are the coordinates themselves and the basis vectors. Anyway, I think we have had this discussion before and we are probably not going to agree this time either ... :rolleyes:
 
  • #6
vanhees71
Science Advisor
Insights Author
Gold Member
2019 Award
My argument is that ##x^{\mu}## and ##x^{\mu'}## are the same four real numbers for both ##\mu## and ##\mu'## running from 0 to 3, while ##x^{\prime \mu}## are (in general) numbers different from ##x^{\mu}## for ##\mu## running from 0 to 3. When I first learnt relativity, I remember using a textbook from the library that I failed to understand purely because of this strange notation of marking the changed components by varying the index labels rather than the symbol.

Of course a tensor is coordinate independent, and this shows again how important it is to label the symbols, not the indices. For a vector we have e.g.
$$\underline{x}=x^{\mu} \underline{e}_{\mu} = x^{\prime \mu} \underline{e}_{\mu}'.$$
On the other hand, if one gets used to the "labeling-indices convention", maybe it's not such a big issue. I've always avoided it so as not to confuse myself ;-).
 
  • #7
Orodruin
Staff Emeritus
Science Advisor
Homework Helper
Insights Author
Gold Member
Instead of just having us repeat the same conversation, let me just link it:
https://www.physicsforums.com/threads/tensor-invariance-and-coordinate-variance.914642/

On the other hand, if one gets used to the "labeling-indices convention", maybe it's not such a big issue. I've always avoided it so as not to confuse myself ;-).
That is interesting. I converted to it after giving it a significant amount of thought. I avoid priming the symbol precisely so as not to confuse myself or have to write the actual tensor in boldface or in a different font.
 
  • #8
Thanks vanhees71 and Orodruin! Your posts have been a great help, and I think I'm well on my way to understanding the differences between these two notations now. I am reading Schutz's GR, which uses one notation, and Srednicki's QFT, which uses the other...
 
