
Transpose and Inverse of Lorentz Transform Matrix

  1. Mar 13, 2017 #1
    Let ##\Lambda## be a Lorentz transformation. The matrix representing the Lorentz transformation is written as ##\Lambda^\mu{}_\nu##, the first index referring to the rows and the second index referring to columns.

    The defining relation (necessary and sufficient) for Lorentz transforms is $$g_{\mu\nu}=g_{\alpha\beta}\Lambda^\alpha{}_\mu \Lambda^\beta{}_\nu.$$
    In matrix form this reads ##g=\Lambda^Tg\Lambda##, where we have used ##(\Lambda^T)^\nu{}_\beta=\Lambda^\beta{}_\nu##. From this we can see that ##(\Lambda^{-1})^\mu{}_\nu=\Lambda_\nu{}^\mu##.

    Up to this point, do I have everything right?

    In Wu-Ki Tung's "Group Theory in Physics" (Appendix I.3, equation I.3-1), Tung states that,
    $$(\Lambda^T)_\mu{}^\nu=\Lambda^\nu{}_\mu.$$ Is this consistent with my definitions/conventions?
    For instance, formally taking ##\Lambda\rightarrow \Lambda^T,\mu\rightarrow \nu,\nu\rightarrow\mu##, Tung's equation reads,
    $$(\Lambda^T)^\mu{}_\nu=\Lambda_\nu{}^\mu.$$
    whereas, according to our definition/convention, ##(\Lambda^{-1})^\mu{}_\nu=\Lambda_\nu{}^\mu##, and ##(\Lambda^T)^\mu{}_\nu=\Lambda^\nu{}_\mu.## What gives?
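    For concreteness, the defining relation ##g=\Lambda^Tg\Lambda## can be checked numerically for a pure boost along ##x## (a quick NumPy sketch; the signature convention and variable names here are my own illustrative choices, not from any of the texts discussed):

    ```python
    import numpy as np

    # Minkowski metric, signature (+,-,-,-)
    g = np.diag([1.0, -1.0, -1.0, -1.0])

    # Pure boost along x with beta = 0.6 (arbitrary illustrative value)
    beta = 0.6
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    L = np.array([
        [gamma,       -gamma*beta, 0.0, 0.0],
        [-gamma*beta,  gamma,      0.0, 0.0],
        [0.0,          0.0,        1.0, 0.0],
        [0.0,          0.0,        0.0, 1.0],
    ])

    # Defining relation in matrix form: g = Lambda^T g Lambda
    print(np.allclose(g, L.T @ g @ L))  # True
    ```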
     
  3. Mar 13, 2017 #2

    vanhees71

    Science Advisor
    2016 Award

    Your convention seems right to me. Indeed
    $$g_{\mu \nu} = g_{\alpha \beta} {\Lambda^{\alpha}}_{\mu} {\Lambda^{\beta}}_{\nu}$$
    translates in matrix notation to
    $$\hat{g}=\hat{\Lambda}^{T} \hat{g} \hat{\Lambda}.$$
    Multiplying this equation from the left with ##\hat{g}## and using ##\hat{g}^2=1## leads to
    $$1=\hat{g} \hat{\Lambda}^{T} \hat{g} \hat{\Lambda},$$
    which means that ##\hat{\Lambda}## is invertible and that
    $$\hat{\Lambda}^{-1}=\hat{g} \hat{\Lambda}^{T} \hat{g}.$$
    In index notation that means
    $${(\Lambda^{-1})^{\mu}}_{\nu} = g^{\mu \alpha} g_{\nu \beta} {\Lambda^{\beta}}_{\alpha} ={\Lambda_{\nu}}^{\mu}.$$
    Of course, you can get this also within the index calculus itself from the very first equation. Contracting it with ##g^{\mu \gamma}## gives
    $$\delta_{\nu}^{\gamma} = g_{\alpha \beta} g^{\mu \gamma} {\Lambda^{\alpha}}_{\mu} {\Lambda^{\beta}}_{\nu}.$$
    Now contracting this equation with ##{(\Lambda^{-1})^{\nu}}_{\delta}## gives
    $${(\Lambda^{-1})^{\gamma}}_{\delta}=g_{\alpha \beta} g^{\mu \gamma} {\Lambda^{\alpha}}_{\mu} \delta_{\delta}^{\beta} ={\Lambda_{\delta}}^{\gamma}. $$
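    The result ##\hat{\Lambda}^{-1}=\hat{g} \hat{\Lambda}^{T} \hat{g}## is also easy to verify numerically (a NumPy sketch using a pure ##x##-boost; the setup and names are my own illustration):

    ```python
    import numpy as np

    g = np.diag([1.0, -1.0, -1.0, -1.0])  # Minkowski metric, (+,-,-,-)

    beta = 0.6
    gamma = 1.0 / np.sqrt(1.0 - beta**2)  # = 1.25 for beta = 0.6
    L = np.array([
        [gamma,       -gamma*beta, 0.0, 0.0],
        [-gamma*beta,  gamma,      0.0, 0.0],
        [0.0,          0.0,        1.0, 0.0],
        [0.0,          0.0,        0.0, 1.0],
    ])

    # Claimed inverse: Lambda^{-1} = g Lambda^T g
    L_inv = g @ L.T @ g
    print(np.allclose(L_inv @ L, np.eye(4)))     # True
    print(np.allclose(L_inv, np.linalg.inv(L)))  # True
    ```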
     
  4. Mar 13, 2017 #3
    So, the way Tung defines his transpose is indeed inconsistent with my convention, right?

    Also, I was wondering where this freedom to "define" transpose and inverse comes about. Why is there a need for a convention, at all? Given a matrix, its transpose and inverse are uniquely defined. Is it about 2nd-rank tensors not being in exact correspondence with matrices?
     
  5. Mar 13, 2017 #4

    vanhees71


    I don't think that there is anything inconsistent except the book you are citing. I don't know it, but it seems confusing at best, if not just wrong. The author seems to be confused concerning upper and lower indices. Note that ##{\Lambda^{\mu}}_{\nu}## are not tensor components but matrix elements of a basis transformation.
     
  6. Mar 13, 2017 #5

    strangerep

    Science Advisor

    Well, I'll venture to say that I think your convention is wrong and Tung's is right. One of the reasons for having upper/lower indices on ##\Lambda## here is to be consistent with the summation convention (implicit summation on paired upper+lower indices).

    So I think your "where we have used ##(\Lambda^T)^\nu{}_\beta=\Lambda^\beta{}_\nu##"
    should be: where we have used ##(\Lambda^T)_\mu{}^\alpha = \Lambda^\alpha{}_\mu##.
     
  7. Mar 13, 2017 #6
    I see what you are saying and that makes sense. But, if ##(\Lambda^T)_\mu {}^\nu = \Lambda^\nu{}_\mu## is true, then ##(\Lambda^T)^\mu {}_\nu = \Lambda_\nu{}^\mu## is also true, which can be seen by raising and lowering indices with the metric (treating ##\Lambda## as a tensor). But we know that ##(\Lambda^{-1})^\mu {}_\nu=\Lambda_\nu {}^\mu##. This seems to imply that the ##\mu\nu##-th elements of ##\Lambda^{-1}## and ##\Lambda^T## are the same, which certainly isn't true. How do I reconcile this contradiction?
     
  8. Mar 14, 2017 #7

    strangerep


    That's true for an ordinary orthogonal group. But the Lorentz group is an indefinite orthogonal group, so one must include the metric in the relationship between transpose and inverse. Have a read of the Wikipedia page on the indefinite orthogonal group, under the section titled "Matrix definition".
     
  9. Mar 14, 2017 #8
    Yes, I'm aware of that and therein lies the contradiction. Starting from
    $$g_{\mu\nu}=g_{\alpha\beta}\Lambda^\alpha{}_\mu\Lambda^\beta{}_\nu\\
    \implies\delta^\alpha{}_\nu=g^{\alpha\mu}g_{\mu\nu}=g^{\alpha\mu}g_{\alpha\beta}\Lambda^\alpha{}_\mu\Lambda^\beta{}_\nu\\
    \implies \delta^\alpha{}_\nu=\Lambda_\beta{}^\alpha \Lambda^\beta{}_\nu$$
    Now, to write this in Matrix notation, we write ##(\Lambda^{-1})^\alpha{}_\beta=\Lambda_\beta{}^\alpha##, and then the previous equation becomes
    $$\delta^\alpha{}_\nu=\Lambda_\beta{}^\alpha \Lambda^\beta{}_\nu\\
    \delta^\alpha{}_\nu=(\Lambda^{-1})^\alpha{}_\beta \Lambda^\beta{}_\nu,$$
    which in matrix notation reads, ##I=\Lambda^{-1}\Lambda##. Hence, to be consistent with both Einstein summation convention and matrix multiplication, we are forced to write the ##\alpha \beta##-th element of ##\Lambda^{-1}## as ##\Lambda_\beta{}^\alpha##. Is there an error that I'm inadvertently committing?

    If what i wrote is true, i.e., ##(\Lambda^{-1})^\alpha{}_\beta=\Lambda_\beta{}^\alpha##, then, if the transpose is defined as in Tung's book, i.e.,##(\Lambda^T)_\mu {}^\nu = \Lambda^\nu{}_\mu##, which is equivalent to ##(\Lambda^T)^\mu {}_\nu = \Lambda_\nu{}^\mu##, then this would imply ##\Lambda^{-1}=\Lambda^T## rather than ##\Lambda^{-1}=g\Lambda^T g##, as it should be. This is the contradiction, at least apparent, that I'm talking about. What's the way out of this?

    By the way, thank you for having the patience for reading my questions and replying!
     
  10. Mar 14, 2017 #9

    strangerep


    Your 2nd line is nonsense. On the rhs you have ##\alpha## as both a free index, and as a summation index.
     
  11. Mar 14, 2017 #10
    Oops, sorry! I meant,

    $$g_{\mu\nu}=g_{\alpha\beta}\Lambda^\alpha{}_\mu\Lambda^\beta{}_\nu\\
    \implies\delta^\rho{}_\nu=g^{\rho\mu}g_{\mu\nu}=g^{\rho\mu}g_{\alpha\beta}\Lambda^\alpha{}_\mu\Lambda^\beta{}_\nu\\
    \implies \delta^\rho{}_\nu=\Lambda_\beta{}^\rho \Lambda^\beta{}_\nu$$
    Now we write ##(\Lambda^{-1})^\rho{}_\beta=\Lambda_\beta{}^\rho##. Hence the previous equation becomes,
    $$\delta^\rho{}_\nu=\Lambda_\beta{}^\rho \Lambda^\beta{}_\nu\\
    \delta^\rho{}_\nu=(\Lambda^{-1})^\rho{}_\beta \Lambda^\beta{}_\nu,$$
    which in matrix notation reads, ##I=\Lambda^{-1}\Lambda##.

    The rest of the argument proceeds the same way. Thanks for being patient!
     
  12. Mar 14, 2017 #11

    strangerep


    I just noticed another error in your earlier post...
    I believe the correct formula is ##\Lambda^{-1}=g^{-1}\Lambda^T g ~##.

    But I'm getting lost about what your perceived contradiction is (or might have become). You may have to re-write it, just to be clear.
     
  13. Mar 14, 2017 #12
    Indeed, ##\Lambda^{-1}=g^{-1}\Lambda^T g ~##. But, for the Minkowski metric tensor, ##g^{-1}=g##, since ##g^2=1.##

    Ok, so my basic confusion, distilled to its essence, is this:

    Using the earlier derivation, we see that ##(\Lambda^{-1})^\rho{}_\beta=\Lambda_\beta{}^\rho##. Tung's definition of transpose tells us ##(\Lambda^T)^\rho{}_\beta=\Lambda_\beta{}^\rho##. If both are to be consistent, it would imply that ##\Lambda^{-1}=\Lambda^T##, which is not true for Minkowski space. The actual relation for Minkowski space should be ##\Lambda^{-1}=g\Lambda^Tg##. So, is there an inconsistency between Tung's convention/definition and mine, which is the one generally followed in relativity texts?

    Just to get a clarification, how would you write the ##\mu\nu##-th element of the ##\Lambda^{-1}## matrix, given the matrix ##\Lambda##? As I stated previously, I would write it as ##(\Lambda^{-1})^\mu{}_\nu=\Lambda_\nu{}^\mu##.
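    One can also check numerically that ##\Lambda_\nu{}^\mu## (both indices moved with the metric) reproduces ##\Lambda^{-1}## entrywise, while the plain matrix transpose does not. A NumPy sketch with a pure ##x##-boost (my own illustrative setup; note that a pure boost matrix is symmetric, so for it ##\Lambda^T=\Lambda\neq\Lambda^{-1}##):

    ```python
    import numpy as np

    g = np.diag([1.0, -1.0, -1.0, -1.0])  # Minkowski metric, (+,-,-,-)

    beta = 0.6
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    L = np.array([
        [gamma,       -gamma*beta, 0.0, 0.0],
        [-gamma*beta,  gamma,      0.0, 0.0],
        [0.0,          0.0,        1.0, 0.0],
        [0.0,          0.0,        0.0, 1.0],
    ])

    # (Lambda^{-1})^mu_nu = g^{mu alpha} g_{nu beta} Lambda^beta_alpha,
    # i.e. entry [mu, nu] of the array below is Lambda_nu^mu
    L_lowered = np.einsum('ma,nb,ba->mn', g, g, L)

    print(np.allclose(L_lowered, np.linalg.inv(L)))  # True: it IS the inverse
    print(np.allclose(L_lowered, L.T))               # False: it is NOT the transpose
    ```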
     
  14. Mar 14, 2017 #13

    vanhees71


    I disagree, since taking the transpose of a matrix means just switching rows and columns, no matter whether the indices are upper or lower. Indeed, I think it's easier to stay within the Ricci calculus.

    In other words, I think that
    $${(\Lambda^T)^{\mu}}_{\nu}={\Lambda^{\nu}}_{\mu}$$
    while by definition (although the ##{\Lambda^{\mu}}_{\nu}## are not tensor components)
    $${\Lambda_{\mu}}^{\nu}=g_{\alpha \mu} g^{\beta \nu} {\Lambda^{\alpha}}_{\beta}={(\Lambda^{-1})^{\nu}}_{\mu}.$$
    At least you come to that conclusion also within the Ricci index calculus (as shown in my previous posting above).
     
  15. Mar 14, 2017 #14

    strangerep


    After sleeping on it, and doing some Googling, I am persuaded. :biggrin:

    I.e., I now agree that $$ (\Lambda^T)^\mu{}_\nu ~=~ \Lambda^\nu{}_\mu$$ is correct.

    In my Googling, I noticed that quite a few authors (not just Tung) also make the same mistake. The Wiki entry for Ricci calculus doesn't even mention the subtleties involved in taking a transpose. Wald's GR book doesn't mention it, MTW only talk about transpose as interchanging index positions (without mentioning raising/lowering), and Weinberg's GR book confuses things more by his eq. (3.6.2): $$D_{\alpha\mu} ~\equiv~ \frac{\partial \xi^\alpha}{\partial x^\mu}$$ which has the ##\alpha## downstairs on the left, but upstairs on the right. Then he writes $$D^T_{\mu\alpha} ~\equiv~ D_{\alpha\mu}$$ which is $$\frac{\partial \xi^\mu}{\partial x^\alpha} ~,$$ so he gets it right in the end, but only after a confusing detour.

    The source of this error seems to lie in not properly understanding the abstract meaning of the transpose operation. If ##V,W## are vector spaces, and ##L## is a linear map ##L: V \to W##, then the transposed map is ##L^T : W^* \to V^*##, where the asterisk denotes the dual space. (In Ricci calculus, taking the dual corresponds to raising/lowering indices.)
     
    Last edited: Mar 14, 2017
  16. Mar 15, 2017 #15

    vanhees71


    The problem is that taking the transpose of a matrix does not really capture the difference between co- and contravariant components, which is so elegantly encoded in the Ricci index-calculus notation. For me the greatest obstacle of coordinate-free notations is keeping in mind which kind of object one is dealing with, because it's difficult to find different symbols for the different kinds of objects, e.g., to distinguish a one-form clearly from a vector, etc.
     
  17. Mar 15, 2017 #16

    strangerep


    I'm not 100% sure what you mean by this sentence. If you mean that taking the transpose of a matrix does not explicitly display the swap between primal and dual spaces, then yes, I agree.
     
  18. Mar 15, 2017 #17

    vanhees71


    Then we agree.
     
  19. Mar 15, 2017 #18

    DrGreg

    Science Advisor
    Gold Member

    Would it cause less confusion to call it an "adjoint" ##L^*## instead of a "transpose" ##L^T##?
     
  20. Mar 15, 2017 #19

    strangerep


    According to Wikipedia's entry on linear maps, there's a notable distinction: the transpose ##L^T: W^* \to V^*## is defined between the dual spaces without any extra structure, whereas the adjoint requires an inner product (and, in the complex case, involves conjugation).
     
  21. Mar 16, 2017 #20

    Orodruin

    Staff Emeritus
    Science Advisor
    Homework Helper
    Gold Member

    I assume you mean index-free? Coordinate free notations should not include transformation coefficients at all as they are related to coordinate transformations.
     