fayled said:
The Lorentz transformation matrix may be written in index form as \Lambda^{\mu}{}_{\nu}. The transpose may be written (\Lambda^{T})^{\mu}{}_{\nu} = \Lambda^{\nu}{}_{\mu}.
I want to apply this to convert the defining relation for a Lorentz transformation, \eta = \Lambda^{T} \eta \Lambda, into index form. We have
\eta_{\rho\sigma} = (\Lambda^{T})_{\rho}{}^{\mu} \ \eta_{\mu\nu} \ \Lambda^{\nu}{}_{\sigma} .
The next step to obtain the correct result seems to be to write (\Lambda^{T})_{\rho}{}^{\mu} = \Lambda^{\mu}{}_{\rho}.
However, I don't see how this agrees with the way I have defined the transpose above, which would instead give (\Lambda^{T})^{\rho}{}_{\mu} = \Lambda^{\mu}{}_{\rho}, with the opposite index heights. Why can we not use that expression, which seems to follow from the definition of the transpose above?
Thanks for any help!
The usual confusion again!
1) The non-covariant expression (\Lambda^{T})^{\mu}{}_{\nu} = \Lambda^{\nu}{}_{\mu} can only be used for the concrete elements of \Lambda, for example: \Lambda^{1}{}_{2} = (\Lambda^{T})^{2}{}_{1}.
2) When \Lambda^{T} comes multiplied by other matrices, don’t use any up-down index structure for \Lambda^{T}. See (3).
3) To obtain the index structure of the matrix equation \eta = \Lambda^{T} \ \eta \ \Lambda, do the following. Since \eta^{T} = \eta, i.e., \eta_{\alpha\beta} = \eta_{\beta\alpha}, we have
\eta_{\alpha\beta} = \left( ( \eta \ \Lambda)^{T} \ \Lambda \right)_{\beta\alpha} .
Now, if we choose the matrix elements of \Lambda to be of the form \Lambda^{\rho}{}_{\sigma}, with the upper index labelling rows and the lower index labelling columns, then the above equation becomes
\eta_{\alpha\beta} = ( \eta \ \Lambda)^{T}_{\beta\mu} \ \Lambda^{\mu}{}_{\alpha} . \ \ \ \ (3)
For any matrix M, the transpose M^{T} is defined by M^{T}_{ab} = M_{ba}. Thus (\eta \ \Lambda)^{T}_{\beta\mu} = (\eta \ \Lambda)_{\mu\beta}, and Eq(3) becomes the one we know:
\eta_{\alpha\beta} = (\eta \ \Lambda)_{\mu\beta} \ \Lambda^{\mu}{}_{\alpha} = \eta_{\mu\nu} \ \Lambda^{\nu}{}_{\beta} \ \Lambda^{\mu}{}_{\alpha} .
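The contracted form \eta_{\alpha\beta} = \eta_{\mu\nu} \ \Lambda^{\nu}{}_{\beta} \ \Lambda^{\mu}{}_{\alpha} can be checked numerically. A minimal NumPy sketch, assuming a sample boost along x with \beta = 0.6 (my own choice of example, not from the post); rows of the array correspond to the upper index of \Lambda^{\mu}{}_{\nu}:

```python
import numpy as np

# Sample boost along x with beta = 0.6 (an assumed example value).
beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)
L = np.array([[gamma, -gamma * beta, 0, 0],
              [-gamma * beta, gamma, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1]])          # L[mu, nu] = Lambda^mu_nu (row = upper index)
eta = np.diag([1.0, -1.0, -1.0, -1.0])

# eta_{alpha beta} = eta_{mu nu} Lambda^nu_beta Lambda^mu_alpha
lhs = np.einsum('mn,nb,ma->ab', eta, L, L)
print(np.allclose(lhs, eta))  # True
```

The einsum contraction is exactly \eta_{\mu\nu} \ \Lambda^{\nu}{}_{\beta} \ \Lambda^{\mu}{}_{\alpha}, i.e. the matrix product \Lambda^{T} \eta \Lambda.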
4) The expression
(\Lambda^{-1})^{\sigma}{}_{\rho} = \Lambda_{\rho}{}^{\sigma} , \ \ \ \ (4')
is an allowed covariant expression, because you can lower and raise the indices on \Lambda by \eta and \eta^{-1} respectively:
\Lambda_{\rho}{}^{\sigma} = \eta^{\sigma \nu} \ \eta_{\rho\mu} \ \Lambda^{\mu}{}_{\nu} . \ \ \ \ (4)
Substituting (4') in the LHS of (4), we find
(\Lambda^{-1})^{\sigma}{}_{\rho} = \eta^{\sigma \nu} \ \left( \eta_{\rho\mu} \ \Lambda^{\mu}{}_{\nu} \right) = \eta^{\sigma \nu} \ \left( \eta \ \Lambda \right)_{\rho \nu} . \ \ \ \ (5)
This can be rewritten as
(\Lambda^{-1})^{\sigma}{}_{\rho} = \eta^{\sigma\nu} \ \left(\eta \ \Lambda \right)^{T}_{\nu \rho} = \left( \eta \ ( \eta \ \Lambda)^{T} \right)^{\sigma}{}_{\rho} .
And, therefore, we have the correct matrix equations
\Lambda^{-1} = \eta \ \Lambda^{T} \ \eta = \left( \eta \ \Lambda \ \eta \right)^{T} , \ \ \ \ (6)
and
\Lambda^{T} = \eta \ \Lambda^{-1} \ \eta = \left( \eta \ \Lambda \ \eta \right)^{-1} . \ \ \ \ (7)
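Both matrix equations are easy to verify numerically. A hedged sketch, again assuming a sample boost with \beta = 0.6 (my choice, not from the post):

```python
import numpy as np

# Sample boost along x with beta = 0.6 (an assumed example value).
beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)
L = np.array([[gamma, -gamma * beta, 0, 0],
              [-gamma * beta, gamma, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1]])
eta = np.diag([1.0, -1.0, -1.0, -1.0])

Linv = np.linalg.inv(L)
print(np.allclose(Linv, eta @ L.T @ eta))    # Eq(6): Lambda^{-1} = eta Lambda^T eta
print(np.allclose(Linv, (eta @ L @ eta).T))  # Eq(6): ... = (eta Lambda eta)^T
print(np.allclose(L.T, eta @ Linv @ eta))    # Eq(7): Lambda^T = eta Lambda^{-1} eta
```

All three checks print True, matching Eqs (6) and (7).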
5) The matrix \Lambda^{-1} can be constructed either from its elements as given by Eq(5), or directly by simple matrix multiplication as given by Eq(6). To check that I did not make a mistake, let us do both. We write (5) as
(\Lambda^{-1})^{\sigma}{}_{\rho} = \eta_{\mu\rho} \ \Lambda^{\mu}{}_{\nu} \ \eta^{\nu\sigma} ,
and evaluate the relevant matrix elements using the fact that \eta = \eta^{-1} = \mbox{diag} (1, -1, -1, -1). For the time-space elements,
(\Lambda^{-1})^{0}{}_{k} = (-1) \ \Lambda^{k}{}_{0} \ (+1) = - \Lambda^{k}{}_{0} .
In other words, we have the following equalities:
(\Lambda^{-1})^{0}{}_{k} = \Lambda_{k}{}^{0} = - \Lambda^{k}{}_{0} = - (\Lambda^{T})^{0}{}_{k} .
Next, we calculate the space-space matrix elements:
(\Lambda^{-1})^{i}{}_{k} = \eta_{\mu k} \ \Lambda^{\mu}{}_{\nu} \ \eta^{\nu i} = (-1) \ \Lambda^{k}{}_{i} \ (-1) .
Thus
(\Lambda^{-1})^{i}{}_{k} = \Lambda_{k}{}^{i} = \Lambda^{k}{}_{i} = (\Lambda^{T})^{i}{}_{k} .
And similarly, we find
(\Lambda^{-1})^{k}{}_{0} = \Lambda_{0}{}^{k} = - \Lambda^{0}{}_{k} = - (\Lambda^{T})^{k}{}_{0} .
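These element-wise sign relations can also be spot-checked. For the test to be non-trivial, I use an assumed generic \Lambda built as a boost composed with a spatial rotation (both sample parameter values are my own choices):

```python
import numpy as np

# Generic Lambda = boost @ rotation (assumed sample values beta = 0.6, theta = 0.3),
# so the time-space and space-space sign patterns are tested non-trivially.
beta, th = 0.6, 0.3
g = 1.0 / np.sqrt(1.0 - beta**2)
B = np.array([[g, -g * beta, 0, 0],
              [-g * beta, g, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1]])
R = np.array([[1, 0, 0, 0],
              [0, np.cos(th), -np.sin(th), 0],
              [0, np.sin(th),  np.cos(th), 0],
              [0, 0, 0, 1]])
L = B @ R                      # still satisfies L.T @ eta @ L = eta
Linv = np.linalg.inv(L)

ok = all(np.isclose(Linv[0, k], -L[k, 0]) for k in (1, 2, 3))    # (L^-1)^0_k = -L^k_0
ok &= all(np.isclose(Linv[k, 0], -L[0, k]) for k in (1, 2, 3))   # (L^-1)^k_0 = -L^0_k
ok &= all(np.isclose(Linv[i, k], L[k, i])                        # (L^-1)^i_k =  L^k_i
          for i in (1, 2, 3) for k in (1, 2, 3))
print(ok)  # True
```

A pure boost would hide part of the pattern because it is symmetric; composing with a rotation makes \Lambda^{0}{}_{k} \neq \Lambda^{k}{}_{0} in general.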
From these matrix elements, we can write
\Lambda^{-1} = \begin{pmatrix}
\Lambda^{0}{}_{0} & -\Lambda^{1}{}_{0} & -\Lambda^{2}{}_{0} & -\Lambda^{3}{}_{0} \\
-\Lambda^{0}{}_{1} & \Lambda^{1}{}_{1} & \Lambda^{2}{}_{1} & \Lambda^{3}{}_{1} \\
-\Lambda^{0}{}_{2} & \Lambda^{1}{}_{2} & \Lambda^{2}{}_{2} & \Lambda^{3}{}_{2} \\
-\Lambda^{0}{}_{3} & \Lambda^{1}{}_{3} & \Lambda^{2}{}_{3} & \Lambda^{3}{}_{3}
\end{pmatrix} . \ \ \ (8)
Indeed, the same matrix can be obtained from Eq(6) as follows. From
\Lambda = \begin{pmatrix}
\Lambda^{0}{}_{0} & \Lambda^{0}{}_{1} & \Lambda^{0}{}_{2} & \Lambda^{0}{}_{3} \\
\Lambda^{1}{}_{0} & \Lambda^{1}{}_{1} & \Lambda^{1}{}_{2} & \Lambda^{1}{}_{3} \\
\Lambda^{2}{}_{0} & \Lambda^{2}{}_{1} & \Lambda^{2}{}_{2} & \Lambda^{2}{}_{3} \\
\Lambda^{3}{}_{0} & \Lambda^{3}{}_{1} & \Lambda^{3}{}_{2} & \Lambda^{3}{}_{3} \end{pmatrix} ,
and
\eta = \begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & -1 & 0 & 0 \\
0 & 0 & -1 & 0 \\
0 & 0 & 0 & -1 \end{pmatrix} ,
we find
\eta \ \Lambda \ \eta = \begin{pmatrix}
\Lambda^{0}{}_{0} & -\Lambda^{0}{}_{1} & -\Lambda^{0}{}_{2} & -\Lambda^{0}{}_{3} \\
-\Lambda^{1}{}_{0} & \Lambda^{1}{}_{1} & \Lambda^{1}{}_{2} & \Lambda^{1}{}_{3} \\
-\Lambda^{2}{}_{0} & \Lambda^{2}{}_{1} & \Lambda^{2}{}_{2} & \Lambda^{2}{}_{3} \\
-\Lambda^{3}{}_{0} & \Lambda^{3}{}_{1} & \Lambda^{3}{}_{2} & \Lambda^{3}{}_{3}
\end{pmatrix} . \ \ \ (9)
Clearly, you obtain Eq(8) by taking the transpose of Eq(9).
6) Here is an exercise for you (probably the most important thing you should do). Use the matrix \eta \ \Lambda \ \eta, as given in Eq(9), to show that (\eta \ \Lambda \ \eta) \Lambda^{T} = I_{4 \times 4} .
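As a numerical sanity check (not a substitute for doing the algebra yourself), the identity in the exercise can be spot-checked for a sample boost, again with the assumed value \beta = 0.6:

```python
import numpy as np

# Sample boost along x with beta = 0.6 (an assumed example value).
beta = 0.6
g = 1.0 / np.sqrt(1.0 - beta**2)
L = np.array([[g, -g * beta, 0, 0],
              [-g * beta, g, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1]])
eta = np.diag([1.0, -1.0, -1.0, -1.0])

# Exercise: (eta Lambda eta) Lambda^T should be the 4x4 identity.
print(np.allclose(eta @ L @ eta @ L.T, np.eye(4)))  # True
```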