Index placement -- Lorentz transformation matrix

  • #1
dyn

Main Question or Discussion Point

Hi.
I came across the following statement, which seems wrong to me:
##\Lambda^\mu{}_\rho = (\Lambda^T)_\rho{}^\mu##
I have it on good authority (a previous post on this forum) that ##(\Lambda^T)^\mu{}_\nu = \Lambda^\nu{}_\mu##, so I am hoping that the first equation is wrong? It looks like the inverse, not the transpose.

The equation ##\Lambda^\mu{}_\rho \, \eta_{\mu\nu} \, \Lambda^\nu{}_\sigma = \eta_{\rho\sigma}## is written in matrix form as ##\Lambda^T \eta \Lambda = \eta##.
Why is one of the Lorentz matrices transposed? It looks like it should be ##\Lambda \eta \Lambda = \eta##.

Thanks
 

Answers and Replies

  • #2
Orodruin
a previous post on this forum
Please do not make such statements without linking to that previous post.
 
  • #3
pervect
dyn said:
> Hi.
> I came across the following statement, which seems wrong to me:
> ##\Lambda^\mu{}_\rho = (\Lambda^T)_\rho{}^\mu##
> I have it on good authority (a previous post on this forum) that ##(\Lambda^T)^\mu{}_\nu = \Lambda^\nu{}_\mu##, so I am hoping that the first equation is wrong? It looks like the inverse, not the transpose.
>
> The equation ##\Lambda^\mu{}_\rho \, \eta_{\mu\nu} \, \Lambda^\nu{}_\sigma = \eta_{\rho\sigma}## is written in matrix form as ##\Lambda^T \eta \Lambda = \eta##.
> Why is one of the Lorentz matrices transposed? It looks like it should be ##\Lambda \eta \Lambda = \eta##.
>
> Thanks
There are apparently multiple conventions in use, but I use tensor notation and follow the approach in "Gravitation" (MTW) where the authors state the indices of a transformation matrix always go "from northwest to southeast".

My best guess is that you aren't even using tensor notation but some sort of matrix notation, and that ##\Lambda^T## means the transpose of the matrix. There is no need for a transpose operator in tensor notation, so that's why I suspect you aren't using it.

Besides apparently not using tensor notation, you are using a different notation than my textbook does when you write ##(...)_\rho{}^\mu##, because the indices move from southwest to northeast, rather than northwest to southeast.
 
  • #4
dyn
If you agree with the equation ##\Lambda^\mu{}_\rho \, \eta_{\mu\nu} \, \Lambda^\nu{}_\sigma = \eta_{\rho\sigma}##,
can you tell me why in matrix form it's written as ##\Lambda^T \eta \Lambda = \eta## and not just ##\Lambda \eta \Lambda = \eta##?
I don't understand why one of the Lorentz transformation matrices is transposed.
Thanks
 
  • #5
nrqed
dyn said:
> If you agree with the equation ##\Lambda^\mu{}_\rho \, \eta_{\mu\nu} \, \Lambda^\nu{}_\sigma = \eta_{\rho\sigma}##,
> can you tell me why in matrix form it's written as ##\Lambda^T \eta \Lambda = \eta## and not just ##\Lambda \eta \Lambda = \eta##?
> I don't understand why one of the Lorentz transformation matrices is transposed.
> Thanks
Let's go back to ordinary matrix multiplication. Writing out the indices, we have
## (M A B)_{ad} = M_{ab} A_{bc} B_{cd}. ##
Note the order of the indices: the summed index ##b## appears as the second index of ##M## and the first index of ##A##. In the expression you wrote, however, the summed ##\mu## is the first index of the first ##\Lambda## and the first index of ##\eta##. Since ##(M^T)_{ba} = M_{ab}##, the same product can be written
## (M A B)_{ad} = (M^T)_{ba} A_{bc} B_{cd}, ##
which has exactly that structure, and is what you wanted: the contraction you wrote down is ##\Lambda^T \eta \Lambda## in matrix form.
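This index bookkeeping is easy to check numerically. The following is a minimal NumPy sketch (the matrices are arbitrary, not Lorentz transformations; names are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
M, A, B = rng.normal(size=(3, 4, 4))  # three arbitrary 4x4 matrices

# (M A B)_{ad} = M_{ab} A_{bc} B_{cd}: each summed index is adjacent
prod = np.einsum("ab,bc,cd->ad", M, A, B)
assert np.allclose(prod, M @ A @ B)

# The same contraction with the FIRST index of M^T summed:
# (M A B)_{ad} = (M^T)_{ba} A_{bc} B_{cd}
prod_t = np.einsum("ba,bc,cd->ad", M.T, A, B)
assert np.allclose(prod_t, M @ A @ B)
```

Both `einsum` calls give the same matrix, which is the point: contracting the first index of a matrix with its neighbour is the same as a plain matrix product with the transpose.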
 
  • #6
Ibix
It's important to make a distinction between this tensor index notation and matrix notation, no matter how similar they are. Matrix notation tells you how to sum by ordering the matrices - so ##\vec u^T\mathbf{g}\vec v## and ##\mathbf{g}\vec v\vec u^T## are different things. But tensor notation assigns meaning to the index placement but not the order, so ##g_{ij}u^iv^j## and ##u^ig_{ij}v^j## are the same thing. This means that "transposing a tensor" is something you only have to do if you choose to represent it as a matrix and apply the rules of matrix multiplication instead of the index summation rules.

Second thing to note is that the Lorentz boost matrix is symmetric. So transposing it doesn't do anything anyway.

Final thing to note is that there seem to be multiple conventions for index placement on ##\Lambda##. Carroll uses northwest/southeast for Lorentz transforms and southeast/northwest for inverse Lorentz transforms. But he notes that Schutz, for example, prefers northwest/southeast always. He also points out that it doesn't matter, since both forward and reverse transforms are Lorentz transforms, and all that matters is that your indices are correctly associated with the upper and lower positions.

So assuming that whatever unreferenced (there's a reason for PF rules about references!) source @dyn saw is using Carroll's notation, then I'd agree that ##\Lambda^\mu{}_\rho=(\Lambda^T)_\rho{}^\mu## is incorrect, since Carroll would use that index placement for the inverse, not the transpose. I can't think offhand of a notation where it would make sense, since transposing a Lorentz transform in tensor notation is doubly pointless (don't take that as any kind of authority on notation, though). I'd say that the rest of the OP is a good example of why you should be very careful mixing tensor and matrix notation.
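As a concrete check of the matrix-form relation ##\Lambda^T \eta \Lambda = \eta##, here is a short NumPy sketch (assuming the diag(1,-1,-1,-1) signature used elsewhere in this thread) verifying it for a standard x-boost, and confirming that a pure boost matrix is symmetric:

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])  # Minkowski metric, (+,-,-,-)

def boost_x(beta):
    """Standard boost along x as a 4x4 matrix of components Lambda^mu_rho."""
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    L = np.eye(4)
    L[0, 0] = L[1, 1] = gamma
    L[0, 1] = L[1, 0] = -gamma * beta
    return L

L = boost_x(0.6)
# The defining relation in matrix form: Lambda^T eta Lambda = eta
assert np.allclose(L.T @ eta @ L, eta)
# A pure boost matrix is symmetric, so here the transpose changes nothing
assert np.allclose(L, L.T)
```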
 
  • #7
dyn
Ibix said:
> Second thing to note is that the Lorentz transform matrix is symmetric. So transposing it doesn't do anything anyway.

I know the LT matrix is symmetric for a standard boost in the x-direction, but is every LT matrix symmetric for any combination of boosts?
If yes, is this a direct consequence of the definition ##\Lambda^T \eta \Lambda = \eta##?
 
  • #8
Orodruin
dyn said:
> I know the LT matrix is symmetric for a standard boost in the x-direction, but is every LT matrix symmetric for any combination of boosts?
No, it is not. For example, a standard spatial rotation is also in the Lorentz group and is not symmetric.
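This counterexample is easy to verify numerically: a spatial rotation embedded in a 4x4 matrix satisfies the Lorentz condition but is not symmetric. A sketch, again with the diag(1,-1,-1,-1) signature:

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])  # Minkowski metric, (+,-,-,-)

theta = 0.3  # arbitrary rotation angle about the z-axis
R = np.eye(4)
R[1, 1] = R[2, 2] = np.cos(theta)
R[1, 2] = -np.sin(theta)
R[2, 1] = np.sin(theta)

# A spatial rotation satisfies the Lorentz condition...
assert np.allclose(R.T @ eta @ R, eta)
# ...but it is not a symmetric matrix
assert not np.allclose(R, R.T)
```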
 
  • #9
Ibix
dyn said:
> I know the LT matrix is symmetric for a standard boost in the x-direction, but is every LT matrix symmetric for any combination of boosts?
A single boost in an arbitrary direction, yes. But for a combination of boosts, no: the product of two non-collinear boosts includes a Thomas-Wigner rotation and is generally not symmetric, and if you throw explicit rotations into the mix then of course it isn't either.
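Both halves of this can be checked numerically: a single boost along an arbitrary direction is symmetric, while the product of two non-collinear boosts is not. A sketch (the general boost formula below is the standard one, with c = 1):

```python
import numpy as np

def boost(beta_vec):
    """General boost matrix for velocity vector beta_vec (units with c = 1)."""
    b = np.asarray(beta_vec, dtype=float)
    beta2 = b @ b
    gamma = 1.0 / np.sqrt(1.0 - beta2)
    L = np.eye(4)
    L[0, 0] = gamma
    L[0, 1:] = L[1:, 0] = -gamma * b
    L[1:, 1:] += (gamma - 1.0) * np.outer(b, b) / beta2
    return L

# A single boost in an arbitrary direction is symmetric
L1 = boost([0.3, 0.4, 0.1])
assert np.allclose(L1, L1.T)

# The product of two non-collinear boosts is NOT symmetric:
# it equals a boost composed with a Thomas-Wigner rotation
Lx, Ly = boost([0.5, 0.0, 0.0]), boost([0.0, 0.5, 0.0])
assert not np.allclose(Lx @ Ly, (Lx @ Ly).T)
```

The asymmetry of `Lx @ Ly` reflects the fact that two symmetric matrices have a symmetric product only if they commute, and non-collinear boosts do not.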
 
  • #10
vanhees71
Usually the notation is as follows. The Lorentz transformation matrix describes the transformation of contravariant tensor components. It's sufficient to consider the space-time four-vector:
$$x^{\prime \mu} = {\Lambda^{\mu}}_{\rho} x^{\rho}.$$
In order to be a Lorentz transformation you must have
$$\eta_{\mu \nu} x^{\prime \mu} x^{\prime \nu}=\eta_{\rho \sigma} x^{\rho} x^{\sigma},$$
where ##\eta_{\rho \sigma}## are the (purely covariant!) tensor components of the Minkowski fundamental form, i.e., in Minkowski orthonormal frames ##(\eta_{\rho \sigma})=\mathrm{diag}(1,-1,-1,-1)##. Since this should hold for any vector ##x##, the Lorentz-transformation matrix must fulfill
$$\eta_{\mu \nu} {\Lambda^{\mu}}_{\rho} {\Lambda^{\nu}}_{\sigma}=\eta_{\rho \sigma}.$$
Sometimes it's convenient to work in matrix-vector notation (to be a bit evil, you could say that in order to maximally confuse the student one switches between the self-explanatory Ricci calculus and the notationally less precise matrix-vector notation all the time ;-)). Then one defines the matrices
$$\hat{\Lambda}=({\Lambda^{\mu}}_{\rho}), \quad \hat{\eta}=(\eta_{\rho \sigma}).$$
As you see, the Ricci calculus's most important property, the clear distinction between co- and contravariant components of tensors and transformation matrices, is then lost; but that's no problem as long as you keep in mind what kind of components you are dealing with. Just write down the matrices and column and row vectors and then translate the Ricci-calculus formulae into the matrix-vector calculus.

Obviously, using column vectors ##\overline{x}=(x^{\mu})## for the contravariant vector components, the Lorentz transformation reads
$$\overline{x}'=\hat{\Lambda} \overline{x}.$$
The Lorentz-transformation property of the matrix ##\hat{\Lambda}## translates to
$$\hat{\Lambda}^{\text{T}} \hat{\eta} \hat{\Lambda}=\hat{\eta},$$
or, since ##\hat{\eta}^2=1##, after some simple matrix manipulation,
$$\hat{\Lambda}^{-1} = \hat{\eta} \hat{\Lambda}^{\text{T}} \hat{\eta}.$$
In the Ricci calculus this translates into
$${(\Lambda^{-1})^{\mu}}_{\nu}=\eta^{\mu \rho} \eta_{\nu \sigma} {\Lambda^{\sigma}}_{\rho}={\Lambda_{\nu}}^{\mu}.$$
This can be verified by the Ricci calculus easily, using the original Lorentz-transformation property:
$${\Lambda_{\nu}}^{\mu} {\Lambda^{\nu}}_{\rho}=\eta_{\nu \sigma} \eta^{\mu \alpha} {\Lambda^{\sigma}}_{\alpha} {\Lambda^{\nu}}_{\rho} = \eta^{\mu \alpha} \eta_{\alpha \rho} = {\delta^{\mu}}_{\rho}.$$
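The closed form for the inverse, ##\hat{\Lambda}^{-1} = \hat{\eta} \hat{\Lambda}^{\text{T}} \hat{\eta}##, can also be checked numerically. A sketch using a generic Lorentz matrix built as a boost composed with a rotation:

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])  # Minkowski metric, (+,-,-,-)

# Generic Lorentz matrix: x-boost (beta = 0.6, gamma = 1.25) times a z-rotation
beta, gamma = 0.6, 1.25
B = np.eye(4)
B[0, 0] = B[1, 1] = gamma
B[0, 1] = B[1, 0] = -gamma * beta
c, s = np.cos(0.4), np.sin(0.4)
R = np.eye(4)
R[1, 1] = R[2, 2] = c
R[1, 2], R[2, 1] = -s, s
L = B @ R

# Lambda^{-1} = eta Lambda^T eta (uses eta^2 = 1)
assert np.allclose(eta @ L.T @ eta, np.linalg.inv(L))
```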
 
