
I Index Notation for Lorentz Transformation

  1. Mar 30, 2017 #1
    The Lorentz transformation matrix may be written in index form as ##\Lambda^\mu{}_\nu##. The transpose may be written ##(\Lambda^T)^\mu{}_\nu = \Lambda^\nu{}_\mu##.

    I want to apply this to convert the defining relation for a Lorentz transformation ##\eta = \Lambda^T \eta \Lambda## into index form. We have
    $$\eta_{\rho\sigma} = (\Lambda^T)_\rho{}^\mu \, \eta_{\mu\nu} \, \Lambda^\nu{}_\sigma$$

    The next step to obtain the correct result seems to be to write ##(\Lambda^T)_\rho{}^\mu = \Lambda^\mu{}_\rho##.

    However I don't see how this agrees with the way I have defined the transpose above. Why can we not use ##(\Lambda^T)^\rho{}_\mu = \Lambda^\mu{}_\rho##, which seems to follow from the definition of the transpose above?

    Thanks for any help!
     
  3. Mar 30, 2017 #2

    stevendaryl

    Staff Emeritus
    Science Advisor

    The word "transpose" is appropriate for matrices, not tensors. For matrices, the distinction between raised indices and lowered indices is more confusing than helpful. So let's write it all in terms of raised indices. Then the Lorentz transform is:

    [itex]\Lambda^{\alpha \beta} x^\beta = (x')^\alpha[/itex]

    [itex](\Lambda^T)^{\alpha \beta} = \Lambda^{\beta \alpha}[/itex]

    So the defining relation for [itex]\eta[/itex] is:

    [itex](\Lambda^T) \eta \Lambda = \eta[/itex]

    or in terms of components:

    [itex]\Lambda^{\beta \alpha} \eta^{\beta \mu} \Lambda^{\mu \nu} = \eta^{\alpha \nu}[/itex]
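
    Read with the first index as the row and the second as the column, this component equation is just the matrix statement ##\Lambda^T \eta \Lambda = \eta##. Here is a minimal Python/numpy sketch of that check (the 1+1-dimensional boost and the value ##v = 0.6c## are assumptions chosen for illustration):
[code]
import numpy as np

v = 0.6                           # assumed v/c, for illustration only
g = 1.0 / np.sqrt(1.0 - v**2)     # gamma

L = np.array([[g, -g*v],          # L[a, b] plays the role of Lambda^{alpha beta}
              [-g*v, g]])
eta = np.diag([1.0, -1.0])        # eta^{alpha beta} in 1+1 dimensions

# Lambda^{beta alpha} eta^{beta mu} Lambda^{mu nu}, summed over beta and mu
lhs = np.einsum('ba,bm,mn->an', L, eta, L)
print(np.allclose(lhs, eta))      # True: the defining relation holds
[/code]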
     
  4. Mar 30, 2017 #3
    Thanks for your reply.

    Unfortunately I think this just sidetracks the problem. I really need to be able to understand how to deal with the raising and lowering of indices.

    For example, I want to be able to understand why the condition ##\Lambda_\nu{}^\mu = (\Lambda^{-1})^\mu{}_\nu## doesn't imply that ##\Lambda\Lambda^T = I##. If I just use all indices as raised, then that doesn't seem to help.
     
  5. Mar 30, 2017 #4

    Orodruin

    Staff Emeritus
    Science Advisor
    Homework Helper
    Gold Member

    To be honest, things such as ##(\Lambda^T)^\alpha{}_{\beta}## do not really make sense - the indices are in the wrong place in relation to the transpose and the parenthesis. You can define a matrix
    $$
    \Lambda = (\Lambda^\alpha{}_{\beta})
    $$
    which is the matrix whose entry in row ##\alpha## and column ##\beta## is ##\Lambda^\alpha{}_\beta##. By definition of matrix transposition, the transpose ##\Lambda^T## would then be the matrix
    $$
    (\Lambda^\alpha{}_\beta)^T
    $$
    with ##\Lambda^\alpha{}_\beta## being the entry in column ##\alpha## and row ##\beta##. If you nevertheless insist on writing things such as ##(\Lambda^T)_\beta{}^\alpha##, the only meaningful relation involving indices up and down would be ##(\Lambda^T)_\beta{}^\alpha = \Lambda^\alpha{}_\beta##. You cannot change where the indices are located (i.e., up/down).

    Edit: That being said, I strongly discourage thinking of the Lorentz transformation as a matrix. You can represent it by a matrix, but really the most intuitive way of thinking about it is as ##\Lambda^\alpha{}_\beta = \partial x'^\alpha/\partial x^\beta## (this will also help in making the transition to general coordinate transforms).
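
    As a small illustration of that last point, here is a sympy sketch in Python (the 1+1-dimensional boost along x and the symbol names are assumptions for illustration) that recovers ##\Lambda^\alpha{}_\beta## as the Jacobian ##\partial x'^\alpha/\partial x^\beta##:
[code]
import sympy as sp

# coordinates x^mu = (x0, x1) = (ct, x) and boost parameter beta = v/c
x0, x1, beta = sp.symbols('x0 x1 beta', real=True)
gamma = 1 / sp.sqrt(1 - beta**2)

# boosted coordinates x'^mu for a boost along the x-axis
x0p = gamma * (x0 - beta * x1)
x1p = gamma * (x1 - beta * x0)

X  = sp.Matrix([x0, x1])
Xp = sp.Matrix([x0p, x1p])

# Lambda^alpha_beta = d x'^alpha / d x^beta: row = alpha, column = beta
Lam = sp.simplify(Xp.jacobian(X))
sp.pprint(Lam)   # expect [[gamma, -beta*gamma], [-beta*gamma, gamma]]
[/code]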
     
  6. Mar 30, 2017 #5
    Ok, well in some places I seem to be reading that transposing involves swapping the indices left-right, whereas in other places it seems to imply the indices are completely swapped left-right and top-bottom.

    So, assuming we swap left-right only, would you be able to help identify why the following is incorrect (which has led me to all this confusion)?

    We know that
    $$\Lambda_\nu{}^\mu = (\Lambda^{-1})^\mu{}_\nu$$
    So, why can't I just swap the indices left-right on the left hand side term, to give
    $$(\Lambda^T)^\mu{}_\nu = (\Lambda^{-1})^\mu{}_\nu$$
    which would then imply in matrix form
    $$\Lambda^T = \Lambda^{-1}$$
    This is obviously not correct, because the Lorentz group actually satisfies ##\eta = \Lambda^T \eta \Lambda##...
     
  7. Mar 30, 2017 #6

    Orodruin

    Staff Emeritus
    Science Advisor
    Homework Helper
    Gold Member

    This is incorrect. Here you need to swap the indices from up to down - you do that with the metric tensor. Another reason to avoid matrix notation completely.
     
  8. Mar 30, 2017 #7
    See for example Equation 1.34 here:
    http://epx.phys.tohoku.ac.jp/~yhitoshi/particleweb/ptest-1.pdf [Broken]
     
    Last edited by a moderator: May 8, 2017
  9. Mar 30, 2017 #8

    stevendaryl

    Staff Emeritus
    Science Advisor

    Well, it isn't true that [itex](\Lambda^{-1})^\mu_\nu = \Lambda^\nu_\mu[/itex]. Let's just look at 2-D spacetime. Then

    [itex]\Lambda = \begin{pmatrix} \gamma & -\gamma v/c \\ -\gamma v/c & \gamma \end{pmatrix}[/itex]

    where [itex]\gamma = \frac{1}{\sqrt{1 - \frac{v^2}{c^2}}}[/itex]

    [itex]\Lambda^{-1} = \begin{pmatrix} \gamma & +\gamma v/c \\ +\gamma v/c & \gamma \end{pmatrix}[/itex]

    [itex]\Lambda^T = \begin{pmatrix} \gamma & -\gamma v/c \\ -\gamma v/c & \gamma \end{pmatrix}[/itex]

    [itex]\eta = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}[/itex]

    So it's not true that [itex]\Lambda \Lambda^T = I[/itex].
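
    A quick numerical sanity check of these matrices in Python/numpy (the value ##v = 0.6c## is an assumption for illustration):
[code]
import numpy as np

v = 0.6                                   # assumed v/c, for illustration only
gamma = 1.0 / np.sqrt(1.0 - v**2)

Lam = np.array([[gamma, -gamma*v],        # the 1+1-dimensional boost above (with c = 1)
                [-gamma*v, gamma]])
eta = np.diag([1.0, -1.0])                # metric diag(1, -1)

print(np.allclose(Lam @ Lam.T, np.eye(2)))    # False: Lambda Lambda^T is not the identity
print(np.allclose(Lam.T @ eta @ Lam, eta))    # True:  Lambda^T eta Lambda = eta
print(np.allclose(np.linalg.inv(Lam),
                  [[gamma, gamma*v],
                   [gamma*v, gamma]]))        # True: matches the Lambda^{-1} written above
[/code]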
     
  10. Mar 30, 2017 #9

    strangerep

    Science Advisor

    @fayled :

    1) Have a look at this recent thread in which I learned the error of my ways regarding transposition and index placement. See especially post #14 in that thread.

    2) tl;dr: In taking a transpose, one must indeed flip upstairs##\leftrightarrow##downstairs as well as swapping the index symbols. See the end of post #14 in the above thread.

    Re-summarizing, if ##V,W## are vector spaces, and ##L## is a linear map: $$L : V \to W $$then the transpose ##L^T## is defined as $$L^T : W^* \to V^* ~,$$whereas the inverse is $$L^{-1} : W \to V ~.$$ That's why it's correct to do an upstairs##\leftrightarrow##downstairs flip when taking the transpose -- because you're now working with the dual spaces.

    (##V^*## denotes the dual space of ##V##.)
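
    Spelled out in components (a short addendum, writing ##L^\alpha{}_\beta## for the components of ##L## in line with the earlier posts), this dual-space definition is exactly what forces the up/down flip: for ##\varphi \in W^*## and ##v \in V##, requiring ##(L^T\varphi)(v) = \varphi(Lv)## for all ##v## gives
    $$(L^T\varphi)_\beta \, v^\beta = \varphi_\alpha \, L^\alpha{}_\beta \, v^\beta \quad\Longrightarrow\quad (L^T)_\beta{}^\alpha = L^\alpha{}_\beta ,$$
    so the transpose naturally carries a lower-upper index pair, as stated in post #4 above.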
     
  11. Mar 31, 2017 #10

    vanhees71

    Science Advisor
    2016 Award

    By definition a Lorentz transformation matrix obeys
    $$\eta_{\mu \nu} {\Lambda^{\mu}}_{\rho} {\Lambda^{\nu}}_{\sigma}=\eta_{\rho \sigma},$$
    where (in the "west-coast convention") ##(\eta_{\mu \nu})=\text{diag}(1,-1,-1,-1)##. Note that it is crucial to keep the indices in clear order, both horizontally and vertically.

    Sometimes the matrix-vector calculus shortens some calculations, but there is a lot of confusion about it in the literature and of course also here in this forum. We had a recent thread about it somewhere in the relativity subforum. The upshot is that by definition
    $${(\Lambda^T)^{\mu}}_{\nu}={\Lambda^{\nu}}_{\mu}.$$
    With this notation, putting ##(\eta_{\mu \nu})=\hat{\eta}## and ##({\Lambda^{\mu}}_{\nu})=\hat{\Lambda}## (which, by the way, makes the shortcoming of the matrix-vector notation apparent: the "automatic syntax check" is lost, because co- and contravariant indices are no longer clearly distinguished as in the Ricci calculus, where the former are lower and the latter are upper indices), you can write the above "pseudo-orthogonality condition" as
    $$\hat{\Lambda}^T \hat{\eta} \hat{\Lambda}=\hat{\eta},$$
    which, because of ##\hat{\eta}^2=\hat{1}##, implies
    $$\hat{\Lambda}^{-1}=\hat{\eta} \hat{\Lambda}^T \hat{\eta}.$$

    This is easily translated back into index notation
    $${(\Lambda^{-1})^{\mu}}_{\nu} = \eta^{\mu \rho} \eta_{\nu \sigma} {\Lambda^{\sigma}}_{\rho}={\Lambda_{\nu}}^{\mu},$$
    i.e., the inverse Lorentz transformation is not simply given by the transposed matrix; the indices must also be dragged with the pseudo-metric matrix.
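
    Here is a Python/numpy sketch of that last formula (the specific ##\Lambda##, a boost along x composed with a rotation about z, and the numerical values are assumptions for illustration):
[code]
import numpy as np

v, th = 0.6, 0.3                      # assumed boost speed (units of c) and rotation angle
g = 1.0 / np.sqrt(1.0 - v**2)

B = np.array([[g, -g*v, 0, 0],        # boost along x
              [-g*v, g, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1.0]])
R = np.array([[1, 0, 0, 0],           # rotation about z
              [0, np.cos(th), -np.sin(th), 0],
              [0, np.sin(th),  np.cos(th), 0],
              [0, 0, 0, 1.0]])
L = B @ R                             # a generic (non-symmetric) Lorentz transformation
eta = np.diag([1.0, -1, -1, -1])      # numerically equal to its own inverse

# (Lambda^{-1})^mu_nu = eta^{mu rho} eta_{nu sigma} Lambda^sigma_rho
Linv = np.einsum('mr,ns,sr->mn', eta, eta, L)
print(np.allclose(Linv, np.linalg.inv(L)))   # True: index dragging gives the inverse
print(np.allclose(L.T, np.linalg.inv(L)))    # False: the bare transpose does not
[/code]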
     
  12. Apr 1, 2017 #11

    samalkhaiat

    Science Advisor

    The usual confusion again!

    1) The non-covariant expression [itex](\Lambda^{T})^{\mu}{}_{\nu} = \Lambda^{\nu}{}_{\mu}[/itex] can only be used for the concrete elements of [itex]\Lambda[/itex], for example: [itex]\Lambda^{1}{}_{2} = (\Lambda^{T})^{2}{}_{1}[/itex].

    2) When [itex]\Lambda^{T}[/itex] is multiplied by other matrices, don’t use any up-down index structure for [itex]\Lambda^{T}[/itex]. See (3).

    3) To obtain the index structure of the matrix equation [itex]\eta = \Lambda^{T} \ \eta \ \Lambda[/itex], do the following: Since [itex]\eta^{T} = \eta[/itex], i.e., [itex]\eta_{\alpha\beta} = \eta_{\beta\alpha}[/itex], then [tex]\eta_{\alpha\beta} = \left( ( \eta \ \Lambda)^{T} \ \Lambda \right)_{\beta\alpha} .[/tex] Now, if we choose the matrix element of [itex]\Lambda[/itex] to be of the form [itex]\Lambda^{\rho}{}_{\sigma}[/itex] with upper index represents the rows and the lower one for columns, then the above equation becomes [tex]\eta_{\alpha\beta} = ( \eta \ \Lambda)^{T}_{\beta\mu} \ \Lambda^{\mu}{}_{\alpha} . \ \ \ \ \ \ \ \ (3)[/tex] For any matrix [itex]M[/itex], the transpose [itex]M^{T}[/itex] is defined by [itex]M^{T}_{ab} = M_{ba}[/itex]. Thus, [itex](\eta \ \Lambda)^{T}_{\beta\mu} = (\eta \ \Lambda)_{\mu\beta}[/itex], and Eq(3) becomes the one we know [tex]\eta_{\alpha\beta} = (\eta \ \Lambda)_{\mu\beta} \ \Lambda^{\mu}{}_{\alpha} = \eta_{\mu\nu} \ \Lambda^{\nu}{}_{\beta} \ \Lambda^{\mu}{}_{\alpha} .[/tex]

    4) The expression [tex](\Lambda^{-1})^{\sigma}{}_{\rho} = \Lambda_{\rho}{}^{\sigma} , \ \ \ \ \ \ \ \ \ \ \ (4’)[/tex] is an allowed covariant expression because you can lower and raise the indices on [itex]\Lambda[/itex] by [itex]\eta[/itex] and [itex]\eta^{-1}[/itex] respectively: [tex]\Lambda_{\rho}{}^{\sigma} = \eta^{\sigma \nu} \ \eta_{\rho\mu} \ \Lambda^{\mu}{}_{\nu} . \ \ \ \ \ \ \ \ \ (4)[/tex] Substituting (4’) in the LHS of (4), we find [tex](\Lambda^{-1})^{\sigma}{}_{\rho} = \eta^{\sigma \nu} \ \left( \eta_{\rho\mu} \ \Lambda^{\mu}{}_{\nu} \right) = \eta^{\sigma \nu} \ \left( \eta \ \Lambda \right)_{\rho \nu} . \ \ \ \ (5)[/tex] This can be rewritten as [tex](\Lambda^{-1})^{\sigma}{}_{\rho} = \eta^{\sigma\nu} \ \left(\eta \ \Lambda \right)^{T}_{\nu \rho} = \left( \eta \ ( \eta \ \Lambda)^{T} \ \right)^{\sigma}{}_{\rho} .[/tex]

    And, therefore, we have the correct matrix equations [tex]\Lambda^{-1} = \eta \ \Lambda^{T} \ \eta = \left( \eta \ \Lambda \ \eta \right)^{T} , \ \ \ \ \ \ (6)[/tex] and [tex]\Lambda^{T} = \eta \ \Lambda^{-1} \ \eta = \left( \eta \ \Lambda \ \eta \right)^{-1} . \ \ \ \ \ \ (7)[/tex]

    5) The matrix [itex]\Lambda^{-1}[/itex] can be constructed either from its elements as given by Eq(5), or directly by simple matrix multiplication as given by Eq(6). To check that I did not make a mistake, let us do them both. We write (5) as [tex](\Lambda^{-1})^{\sigma}{}_{\rho} = \eta_{\mu\rho} \ \Lambda^{\mu}{}_{\nu} \ \eta^{\nu\sigma} ,[/tex] and evaluate the relevant matrix elements using the fact that [itex]\eta = \eta^{-1} = \mbox{diag} (1, -1, -1, -1)[/itex]:

    [tex](\Lambda^{-1})^{0}{}_{k} = (-1) \ \Lambda^{k}{}_{0} \ (+1) = - \Lambda^{k}{}_{0} .[/tex] In other words, we have the following equalities [tex](\Lambda^{-1})^{0}{}_{k} = \Lambda_{k}{}^{0} = - \Lambda^{k}{}_{0} = - (\Lambda^{T})^{0}{}_{k} .[/tex]

    Next, we calculate the space-space matrix elements [tex](\Lambda^{-1})^{i}{}_{k} = \eta_{\mu k} \ \Lambda^{\mu}{}_{\nu} \ \eta^{\nu i} = (-1) \ \Lambda^{k}{}_{i} \ (-1) .[/tex] Thus [tex](\Lambda^{-1})^{i}{}_{k} = \Lambda_{k}{}^{i} = \Lambda^{k}{}_{i} = (\Lambda^{T})^{i}{}_{k} .[/tex] And similarly, we find [tex](\Lambda^{-1})^{k}{}_{0} = \Lambda_{0}{}^{k} = - \Lambda^{0}{}_{k} = - (\Lambda^{T})^{k}{}_{0} .[/tex]

    From these matrix elements, we can write

    [tex]\Lambda^{-1} = \begin{pmatrix}
    \Lambda^{0}{}_{0} & -\Lambda^{1}{}_{0} & -\Lambda^{2}{}_{0} & -\Lambda^{3}{}_{0} \\
    -\Lambda^{0}{}_{1} & \Lambda^{1}{}_{1} & \Lambda^{2}{}_{1} & \Lambda^{3}{}_{1} \\
    -\Lambda^{0}{}_{2} & \Lambda^{1}{}_{2} & \Lambda^{2}{}_{2} & \Lambda^{3}{}_{2} \\
    -\Lambda^{0}{}_{3} & \Lambda^{1}{}_{3} & \Lambda^{2}{}_{3} & \Lambda^{3}{}_{3}
    \end{pmatrix} . \ \ \ (8)[/tex]

    Indeed, the same matrix can be obtained from Eq(6) as follows: From

    [tex]\Lambda = \begin{pmatrix}
    \Lambda^{0}{}_{0} & \Lambda^{0}{}_{1} & \Lambda^{0}{}_{2} & \Lambda^{0}{}_{3} \\
    \Lambda^{1}{}_{0} & \Lambda^{1}{}_{1} & \Lambda^{1}{}_{2} & \Lambda^{1}{}_{3} \\
    \Lambda^{2}{}_{0} & \Lambda^{2}{}_{1} & \Lambda^{2}{}_{2} & \Lambda^{2}{}_{3} \\
    \Lambda^{3}{}_{0} & \Lambda^{3}{}_{1} & \Lambda^{3}{}_{2} & \Lambda^{3}{}_{3} \end{pmatrix} ,[/tex]

    and
    [tex]\eta = \begin{pmatrix}
    1 & 0 & 0 & 0 \\
    0 & -1 & 0 & 0 \\
    0 & 0 & -1& 0 \\
    0 & 0 & 0 & -1 \end{pmatrix} ,[/tex]

    We find

    [tex]\eta \ \Lambda \ \eta = \begin{pmatrix}
    \Lambda^{0}{}_{0} & -\Lambda^{0}{}_{1} & -\Lambda^{0}{}_{2} & -\Lambda^{0}{}_{3} \\
    -\Lambda^{1}{}_{0} & \Lambda^{1}{}_{1} & \Lambda^{1}{}_{2} & \Lambda^{1}{}_{3} \\
    -\Lambda^{2}{}_{0} & \Lambda^{2}{}_{1} & \Lambda^{2}{}_{2} & \Lambda^{2}{}_{3} \\
    -\Lambda^{3}{}_{0} & \Lambda^{3}{}_{1} & \Lambda^{3}{}_{2} & \Lambda^{3}{}_{3}
    \end{pmatrix} . \ \ \ (9)[/tex]

    Clearly, you obtain Eq(8) by taking the transpose of Eq(9).

    6) Here is an exercise for you (probably the most important thing you should do). Use the matrix [itex]\eta \ \Lambda \ \eta[/itex], as given in Eq(9), to show that [tex](\eta \ \Lambda \ \eta) \Lambda^{T} = I_{4 \times 4} .[/tex]
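
    For anyone who wants to see the result of that exercise confirmed numerically before doing the algebra, here is a Python/numpy sketch (the specific ##\Lambda##, a boost along x followed by a rotation about z, is an assumption for illustration):
[code]
import numpy as np

v, th = 0.5, 0.7                           # assumed boost speed (in c) and rotation angle
g = 1.0 / np.sqrt(1.0 - v**2)

B = np.array([[g, -g*v, 0, 0],
              [-g*v, g, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1.0]])             # boost along x
R = np.array([[1, 0, 0, 0],
              [0, np.cos(th), -np.sin(th), 0],
              [0, np.sin(th),  np.cos(th), 0],
              [0, 0, 0, 1.0]])             # rotation about z
L = B @ R                                  # a non-symmetric Lorentz transformation
eta = np.diag([1.0, -1, -1, -1])

print(np.allclose(eta @ L @ eta @ L.T, np.eye(4)))      # True: (eta Lambda eta) Lambda^T = I
print(np.allclose(eta @ L @ eta, np.linalg.inv(L).T))   # True: Eq(9) is the transpose of Eq(8)
[/code]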
     
  13. Apr 1, 2017 #12

    Dale

    Staff: Mentor

    If this is your goal, then I would recommend not trying to mix tensor and matrix notation. Focus on raising and lowering tensor indices, and drop the matrix and matrix-transposition machinery entirely.

    IMO, the best approach is to take 2D tensors and expand them out, writing the sums explicitly.
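
    In that spirit, here is a minimal Python sketch with the sums written out explicitly in 1+1 dimensions (the boost speed ##v = 0.6c## is an assumption for illustration):
[code]
import numpy as np

v = 0.6                                   # assumed v/c
g = 1.0 / np.sqrt(1.0 - v**2)
L = [[g, -g*v], [-g*v, g]]                # Lambda^mu_nu: row = mu, column = nu
eta = [[1.0, 0.0], [0.0, -1.0]]           # eta_{mu nu}

# eta_{rho sigma} = Lambda^mu_rho eta_{mu nu} Lambda^nu_sigma, with the sums written out
for rho in range(2):
    for sigma in range(2):
        total = 0.0
        for mu in range(2):
            for nu in range(2):
                total += L[mu][rho] * eta[mu][nu] * L[nu][sigma]
        print(rho, sigma, round(total, 12))   # reproduces eta[rho][sigma]
[/code]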
     
    Last edited: Apr 1, 2017