
Lorentz transformation matrix and its inverse

  1. Aug 30, 2015 #1

    dyn


    Given the Lorentz matrix ##\Lambda^u{}_v##, its transpose is ##\Lambda^v{}_u##, but what is its transpose? I have seen ##\Lambda^u{}_a \Lambda_u{}^b = \delta_a^b##, which implies an inverse. This seems to be some sort of swapping of rows and columns, but to get the inverse don't you also need to replace ##v## with ##-v##? Also, in the LT matrix, is it the 1st slot or the top index that represents the rows?
     
  3. Aug 30, 2015 #2

    andrewkirk

    Science Advisor
    Homework Helper
    Gold Member

    I presume you meant to ask what is its inverse (not transpose).
    Where did you see that formula? In general it is not correct. As you say, the sign of ##v## also needs to be changed. The Lorentz matrix ##\Lambda## in a basis can be expressed as ##A^{-1}L(v)A## where ##A## is the matrix for changing coordinates from the given basis to one in which the ##x## axis points in the direction of motion, and

    $$L(v)=
    \gamma \left( \begin{array}{cccc}
    1 & -\beta & 0 & 0 \\
    -\beta & 1 & 0 & 0 \\
    0 & 0 & 1 & 0\\
    0 & 0 & 0 & 1\end{array} \right)$$

    where ##\beta\equiv\frac{v}{c}##.
    Then the inverse of this is ##A^{-1}L(v)^{-1}A## and

    $$L(v)^{-1}=
    \gamma \left( \begin{array}{cccc}
    1 & \beta & 0 & 0 \\
    \beta & 1 & 0 & 0 \\
    0 & 0 & 1 & 0\\
    0 & 0 & 0 & 1\end{array} \right)$$
     
    Last edited: Aug 30, 2015
  4. Aug 31, 2015 #3

    vanhees71

    Science Advisor
    2016 Award

    A Lorentz-transformation matrix is defined as a ##\mathbb{R}^{4 \times 4}## matrix that keeps the Minkowski pseudometric ##\eta_{\mu \nu}=\mathrm{diag}(1,-1,-1,-1)## invariant, which means
    $${L^{\mu}}_{\rho} {L^{\nu}}_{\sigma} \eta_{\mu \nu} = \eta_{\rho \sigma}.$$
    Written in matrix notation this reads
    $$\hat{L}^T \hat{\eta} \hat{L}=\hat{\eta}.$$
    Since ##\hat{\eta}=\hat{\eta}^{-1}##, multiplication with ##\hat{\eta}## from the left and with ##\hat{L}^{-1}## from the right, gives
    $$\hat{L}^{-1}=\hat{\eta} \hat{L}^{T} \hat{\eta}.$$
    For a rotation-free boost with three-velocity ##\vec{v}##, you have
    $$\hat{L}_B(\vec{v})=\begin{pmatrix}
    \gamma & -\gamma \vec{v}^T \\
    -\gamma \vec{v} & (\gamma-1) \vec{v} \otimes \vec{v}/v^2+\mathbb{1}_3
    \end{pmatrix}$$
    (in units with ##c=1##).
    Then you indeed get
    $$\hat{L}_B^{-1}(\vec{v})=\hat{L}_B(-\vec{v}),$$
    as it should be.
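
    As a quick numerical sanity check, here is a sketch in numpy (assuming units with ##c = 1##; `boost` is a hypothetical helper name, not from the thread) of the identities ##\hat{L}^{-1}=\hat{\eta}\hat{L}^T\hat{\eta}## and ##\hat{L}_B^{-1}(\vec{v})=\hat{L}_B(-\vec{v})##, using the rotation-free boost with spatial block ##\mathbb{1}_3+(\gamma-1)\,\vec{v}\otimes\vec{v}/v^2##:

```python
import numpy as np

def boost(v):
    """Rotation-free boost for a three-velocity v (c = 1)."""
    v = np.asarray(v, dtype=float)
    v2 = v @ v
    gamma = 1.0 / np.sqrt(1.0 - v2)
    L = np.empty((4, 4))
    L[0, 0] = gamma
    L[0, 1:] = -gamma * v          # -gamma v^T in the top row
    L[1:, 0] = -gamma * v          # -gamma v in the left column
    L[1:, 1:] = np.eye(3) + (gamma - 1.0) * np.outer(v, v) / v2
    return L

eta = np.diag([1.0, -1.0, -1.0, -1.0])
v = np.array([0.3, -0.4, 0.2])
L = boost(v)

# L^T eta L = eta: L is indeed a Lorentz matrix
assert np.allclose(L.T @ eta @ L, eta)
# Lambda^{-1} = eta Lambda^T eta (eta is its own inverse)
assert np.allclose(eta @ L.T @ eta, np.linalg.inv(L))
# ...and the inverse boost is the boost with -v
assert np.allclose(np.linalg.inv(L), boost(-v))
```
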
     
  5. Sep 2, 2015 #4

    dyn


    Thanks for your replies. Yes, my original question was about the inverse; thanks for realising that. According to what you have said, the following, which I found in some notes, seems wrong, as it is just a transpose and involves no sign change: "the inverse of ##\Lambda^a{}_b## is ##\Lambda_b{}^a##". Am I right?
    I have also looked at a solution to a question involving the Faraday tensor. It involves calculating ##F'^{uv}## from the equation ##F'^{uv} = \Lambda^u{}_b \Lambda^v{}_b F^{ab}##. So I have three 4x4 matrices that I need to multiply together. The bit I don't understand is that the solution multiplies them together with the ##F^{ab}## matrix in the middle. Why has the order been changed?
     
  6. Sep 2, 2015 #5

    andrewkirk

    Science Advisor
    Homework Helper
    Gold Member

    Probably. But it depends on the context and what they meant by the symbols in the notation they were using.
    First, I think you meant to write ##\Lambda^u_a \Lambda^v_b F^{ab}##, not ##\Lambda^u_b \Lambda^v_b F^{ab}##, as the latter doesn't have any ##a## in it.

    Secondly, one nice thing about Einstein notation is that, unlike matrix notation, it doesn't matter what order you write the factors in. What does matter is what indices you use and whether they are up or down. The choice of indices and position determines the order of matrix multiplication, not the order of presentation of the factors. So in Einstein notation

    $$\Lambda^u_a \Lambda^v_b F^{ab}=\Lambda^u_a F^{ab} \Lambda^v_b =F^{ab}\Lambda^u_a \Lambda^v_b $$
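
    A small numerical illustration of that point (a sketch in numpy; `Lam` and `F` are arbitrary example arrays, not physical data): in component form each term is just a product of real numbers, so the writing order of the factors is irrelevant, and the index pattern alone fixes the matrix product.

```python
import numpy as np

rng = np.random.default_rng(0)
Lam = rng.normal(size=(4, 4))   # arbitrary 4x4 "transformation"
F = rng.normal(size=(4, 4))     # arbitrary rank-2 components

# Same contraction written with the factors in different orders:
F1 = np.einsum('ua,vb,ab->uv', Lam, Lam, F)   # Lambda Lambda F
F2 = np.einsum('ab,ua,vb->uv', F, Lam, Lam)   # F Lambda Lambda
assert np.allclose(F1, F2)

# As a matrix product the same sum reads Lambda F Lambda^T:
assert np.allclose(F1, Lam @ F @ Lam.T)
```
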
     
  7. Sep 3, 2015 #6

    vanhees71

    Science Advisor
    2016 Award

    It is very important to keep the order of the indices in the Lorentz-transformation matrix; also, the natural index pattern is that one index is upper and the other lower. As detailed in #3, in matrix-vector notation you have
    $$\hat{\Lambda}^{-1} = \hat{\eta} \hat{\Lambda}^T \hat{\eta}.$$
    Let's translate this into the index notation
    $${(\hat{\Lambda}^{-1})^{\mu}}_{\nu} = \eta^{\mu \rho} \eta_{\nu \sigma} {\Lambda^{\sigma}}_{\rho}={\Lambda_{\nu}}^{\mu}.$$
    Here, one defines the index lowering and raising operation as if the Lorentz matrix was a tensor (which it is of course not). So your formula is correct.

    I don't understand the question concerning the transformation law of the Faraday tensor. As a tensor of 2nd rank it transforms like a Kronecker product of two vectors, i.e., like ##x^{\mu} x^{\nu}##, i.e.,
    $$\overline{F}^{\mu \nu}(\overline{x}) = {\Lambda^{\mu}}_{\rho} {\Lambda^{\nu}}_{\sigma} F^{\rho \sigma}(\hat{\Lambda}^{-1} \overline{x}).$$
    Here it is important to realize that the Faraday tensor is in fact a tensor field, and on the right-hand side it depends on the old coordinates, which I have expressed in terms of the new ones. One must not forget to also transform the argument of fields in the proper way!
     
  8. Sep 3, 2015 #7

    dyn


    Thanks again for taking the time to reply. Sorry to be a pain here, but I'm still confused.
    I'm confused because this seems to imply that matrices can be multiplied in any order and give the same answer, which as far as I know is not true in general for matrices. Also, I thought the indices of tensors should not be placed one above another in a vertical line?

    This seems to say that the inverse of ##{\Lambda^{\mu}}_{\nu}## is ##{\Lambda_{\nu}}^{\mu}##,
    but this is just the transpose. It doesn't contain the necessary sign change.
     
  9. Sep 3, 2015 #8

    Fredrik

    Staff Emeritus
    Science Advisor
    Gold Member

    ##\Lambda## is a matrix. ##\Lambda^\mu{}_\nu## is a real number. Matrix multiplication isn't commutative, but the multiplication operation on the set of real numbers is.

    Yes, it should be avoided if you intend to use the metric to raise and lower indices.

    If ##\Lambda## denotes a matrix, and ##\Lambda^\mu{}_\nu## denotes the number on row ##\mu##, column ##\nu## of that matrix, then the number on row ##\mu##, column ##\nu## of ##\Lambda^T## is ##\Lambda^{\nu}{}_\mu##, not ##\Lambda_\nu{}^\mu##.
     
  10. Sep 3, 2015 #9

    Fredrik

    Staff Emeritus
    Science Advisor
    Gold Member

    A Lorentz transformation matrix is a 4×4 matrix ##\Lambda## such that ##\Lambda^T\eta\Lambda=\eta##. Multiply this equation by ##\eta^{-1}## from the left, and you see that ##\Lambda^{-1}=\eta^{-1}\Lambda^T\eta##.

    There's a bunch of things that we need to understand to relate this to the index notation:

    ##\Lambda## is the matrix of components of a type (1,1) tensor. This means that the number on row ##\mu##, column ##\nu##, is the ##{}^\mu{}_\nu## component of that tensor. That tensor is also denoted by ##\Lambda##, so its ##{}^\mu{}_\nu## component is denoted by ##\Lambda^\mu{}_\nu##.

    Similarly, ##\eta## is the matrix of components of the Minkowski metric tensor ##\eta##, so the number on row ##\mu##, column ##\nu##, is the ##{}_{\mu\nu}## component of the Minkowski metric tensor, which is written as ##\eta_{\mu\nu}##.

    ##\eta^{-1}## is however not defined as a matrix of components of a tensor. It's simply the inverse of ##\eta##, which happens to be equal to ##\eta##. But it's still convenient to write the number on row ##\mu##, column ##\nu## of ##\eta^{-1}## as ##\eta^{\mu\nu}##, because this ensures that the summation convention works the way it's supposed to: ##\eta^{\mu\nu}\eta_{\nu\rho} =\delta^\mu_\rho##

    So what are the numbers on row ##\mu##, column ##\nu## of ##\Lambda^{-1}## and ##\Lambda^T##? If we denote them by ##(\Lambda^{-1})^\mu{}_\nu## and ##(\Lambda^T)^\mu{}_\nu=\Lambda^\nu{}_\mu## respectively, and use the definition of matrix multiplication and the convention that the etas raise and lower indices, we get
    $$(\Lambda^{-1})^\mu{}_\nu =(\eta^{-1}(\Lambda^T)\eta)^\mu{}_{\nu} =\eta^{\mu\rho}\Lambda^\sigma{}_\rho\eta_{\sigma\nu} =\Lambda_\nu{}^\mu.$$
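
    That last chain of equalities can be checked numerically; the following is a sketch in numpy for a concrete ##x##-boost (assuming ##c = 1##), contracting with the two etas exactly as in the formula:

```python
import numpy as np

beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)
# x-boost: number on row mu, column nu is Lambda^mu_nu
Lam = np.array([[gamma, -beta * gamma, 0, 0],
                [-beta * gamma, gamma, 0, 0],
                [0, 0, 1, 0],
                [0, 0, 0, 1]])
eta = np.diag([1.0, -1.0, -1.0, -1.0])

# (Lambda^{-1})^mu_nu = eta^{mu rho} Lambda^sigma_rho eta_{sigma nu}
inv_from_indices = np.einsum('mr,sr,sn->mn', eta, Lam, eta)
assert np.allclose(inv_from_indices, np.linalg.inv(Lam))
assert np.allclose(inv_from_indices @ Lam, np.eye(4))
```
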
     
    Last edited: Sep 3, 2015
  11. Sep 3, 2015 #10

    vanhees71

    Science Advisor
    2016 Award

    And that's why it is so important to also keep the horizontal order of the indices right, as stressed above. Note that
    $${\Lambda_{\nu}}^{\mu}=\eta_{\nu \rho} \eta^{\mu \sigma} {\Lambda^{\rho}}_{\sigma} \neq {\Lambda^{\nu}}_{\mu}.$$
    (See also the previous posting #9 by Fredrik).
     
  12. Sep 3, 2015 #11

    andrewkirk

    Science Advisor
    Homework Helper
    Gold Member

    My two main texts that use differential geometry - Schutz's Intro to GR and Lee's Riemannian Manifolds - take different approaches on keeping upper and lower indices in order.

    Schutz, which I read first, follows the principles advocated here of always keeping them in order, hence writing things like ##{F^{ab}}_{cd}##. Lee on the other hand generally writes ##F^{ab}_{cd}##. The advantages of Lee's notation are that (1) it's faster to write, as one doesn't have to put extra braces around ##F^{ab}## before doing the ##{}_{cd}## part and (2) it takes up less horizontal space, so you have to break equation lines less often, which is a major issue in tensor operations.

    The point of vanhees and Fredrik above that one can get confused if one doesn't preserve order between upper and lower indices is a good one. It prompted me to review Lee's book to see if he says anything about his choice of notation. I didn't find an explanation, but I did find that he sometimes does adopt the Schutz approach. For instance he always writes Riemann tensors, when not all indices are on the same level, in order, eg as ##{R_{abc}}^d## rather than ##R_{abc}^d##. No doubt this is in order to avoid the confusion that is warned against above. Indeed he emphasises the point that ##{R_{abc}}^d,{{R_{ab}}^c}_d,{{R_a}^b}_{cd},{R^a}_{bcd}## are all different. If you look at the latex code for the last line, you can see the mess of braces one has to write to give those symbolisations, which gives the strong temptation to ditch the ordering - but in this case it would definitely be a bad idea.

    On the other hand he always writes Christoffel symbols without ordering, ie ##\Gamma^a_{bc}##. I suppose that's because nobody ever raises or lowers indices of Christoffel symbols (and yes I know that's because they're not actually tensors, but nevertheless they are written in equations mixed up with tensors).

    From this I infer that his approach is a pragmatic one under which he preserves order when it matters because the order is not made obvious from the context - eg because there is raising or lowering going on. But he puts the upper indices above the lower ones when the order is not in question.

    I think when one is first learning (and when one is writing for learners) it is best to use Schutz's approach, because otherwise it's easy to get confused, and there's lots of raising and lowering going on. Further, when one is doing relativity, rather than general differential geometry, one never has more than four indices, so the problem of running out of horizontal space on the page rarely arises.

    Edit: I just noticed Fredrik's comment above about placing indices above one another: 'Yes, it should be avoided if you intend to use the metric to raise and lower indices.' [emphasis added by me] I guess that sounds rather like Lee's approach.
     
    Last edited: Sep 3, 2015
  13. Sep 4, 2015 #12

    vanhees71

    Science Advisor
    2016 Award

    I'd never buy a book that doesn't take good care of the order of indices. This may work sometimes, if the tensors are symmetric, but if you have a GR book that doesn't keep track of the index order, you get confused at the latest when the curvature tensor is introduced.

    Already for the Faraday tensor in electrodynamics it's a disaster, because it's antisymmetric. What should ##F_{\mu}^{\nu}## mean? Note that ##{F_{\mu}}^{\nu}=-{F^{\nu}}_{\mu}##!
     
  14. Sep 4, 2015 #13

    vanhees71

    Science Advisor
    2016 Award

    In addition one should say that the point of proper notation is not to minimize the author's typing work but to maximize readability and convenience for the reader. If you are lucky you type a text once and have thousands of readers!
     
  15. Sep 4, 2015 #14

    PeterDonis

    2016 Award

    Staff: Mentor

    This is not correct. The factor ##\gamma## does not multiply the entire matrix; it only multiplies the upper left 2x2 portion. The correct matrix for a Lorentz boost in the ##x## direction is:

    $$
    L(v)= \left( \begin{array}{cccc} \gamma & -\beta \gamma & 0 & 0 \\ -\beta \gamma & \gamma & 0 & 0 \\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\end{array} \right)
    $$
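
    This corrected matrix is easy to check numerically; here is a sketch in numpy (assuming ##c = 1##; `boost_x` is a hypothetical helper name) verifying that it preserves the Minkowski metric and that ##L(v)L(-v)=\mathbb{1}##:

```python
import numpy as np

def boost_x(beta):
    """Lorentz boost along x with speed beta (c = 1)."""
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    return np.array([[gamma, -beta * gamma, 0, 0],
                     [-beta * gamma, gamma, 0, 0],
                     [0, 0, 1, 0],
                     [0, 0, 0, 1]])

eta = np.diag([1.0, -1.0, -1.0, -1.0])
L = boost_x(0.8)

assert np.allclose(L.T @ eta @ L, eta)            # L^T eta L = eta
assert np.allclose(L @ boost_x(-0.8), np.eye(4))  # L(v) L(-v) = 1
```
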
     
  16. Sep 4, 2015 #15

    andrewkirk

    Science Advisor
    Homework Helper
    Gold Member

    Good pickup. I started off with the answer being a 2 x 2 matrix (1 spatial dimension) to keep it simple, but then decided to include the 2 suppressed dimensions and forgot that in that case one could no longer put the ##\gamma## outside the matrix.

    Indeed, but when one is dealing with arbitrarily many dimensions rather than only four, minimising the line breaks in the middle of equations improves readability (in my opinion).
     
  17. Sep 4, 2015 #16

    dyn


    Thanks for all your replies; I am slowly getting there. I follow the argument in #9 that the inverse of ##\Lambda^u{}_v## is ##\Lambda_v{}^u##, but if I am just given the element ##\Lambda^u{}_v##, how do I find the corresponding element of the inverse? I see two Minkowski metric elements multiplied together, which wouldn't produce the overall sign change.

    As regards the Faraday tensor equation ##F'^{uv} = \Lambda^u{}_a \Lambda^v{}_b F^{ab}##, I realise these are just elements multiplied together, so the order doesn't matter, but if I want to do the matrix multiplication, how do I decide the order of the matrix multiplication?
     
  18. Sep 5, 2015 #17

    samalkhaiat

    Science Advisor

    See below.
    Follow the rule of matrix multiplication: the second index of the first matrix is summed (or contracted) with the first index of the second matrix: [tex]\bar{F}^{ab} = \left( \Lambda^{a}{}_{c} \ F^{cd} \right) \ \Lambda^{b}{}_{d} = \Lambda^{b}{}_{d} \left( \Lambda \ F \right)^{ad}.[/tex] Now, let [itex]\Lambda F = B[/itex], [tex]\bar{F}^{ab} = \Lambda^{b}{}_{d} \ B^{ad} = \Lambda^{b}{}_{d} \ ( B^{T})^{da} ,[/tex] or [tex]\bar{F}^{ab} = \left( \Lambda \ B^{T}\right)^{ba} = \left( B \ \Lambda^{T}\right)^{ab} .[/tex] Therefore [tex]\bar{F} = B \ \Lambda^{T} = \Lambda \ F \ \Lambda^{T} .[/tex] But why do you want it in matrix form? The world isn't made of only rank-2 tensors. Anyway, here are some rules and conventions you need to follow when you treat rank-2 Lorentz tensors as matrices:

    i) I have already given you the first rule which is the matrix multiplication rule above.

    ii) [itex]\eta[/itex] is a (0,2) tensor: [itex]\eta_{\mu\nu}[/itex]. It is also the matrix element of the diagonal matrix [itex]\eta[/itex].

    iii) [itex]\eta^{\mu\nu}[/itex] is a (2,0) tensor and can be regarded as the matrix element of the inverse matrix [itex]\eta^{-1}[/itex].

    iv) [itex]\Lambda[/itex] is a Lorentz group element. The Lorentz group is a matrix Lie group and [itex]\Lambda[/itex], therefore, has a matrix representation. The convention for its matrix element is [itex]\Lambda^{\mu}{}_{\nu}[/itex], where [itex]\mu[/itex] labels the rows (i.e. the first index on a matrix) and [itex]\nu[/itex] numerates the columns (i.e. the second index on a matrix). This convention, though, makes it mandatory to represent [itex]\Lambda^{-1}[/itex], [itex]\Lambda^{T}[/itex] and all other MATRIX OPERATIONS by the same index structure for their matrix elements. So, like [itex]\Lambda^{\mu}{}_{\nu}[/itex], we must write [itex](\Lambda^{-1})^{\mu}{}_{\nu}[/itex], [itex](\Lambda^{T})^{\mu}{}_{\nu}[/itex] and so on.

    v) Even though [itex]\Lambda^{\mu}{}_{\nu}[/itex] is NOT a tensor, we can raise and lower its indices by the metric tensor [itex]\eta[/itex]. This becomes important when dealing with the infinitesimal part of [itex]\Lambda[/itex]. Examples:
    (1) The infinitesimal group parameters satisfy the following MATRIX equation, [tex](\eta \ \omega)^{T} = - (\eta \ \omega) . \ \ \ \ (1)[/tex] The [itex]\alpha \beta[/itex]-matrix element is [tex]\left( (\eta \ \omega)^{T}\right)_{\alpha \beta} = - \left( \eta \ \omega \right)_{\alpha \beta} , [/tex] or, by doing the transpose on the LHS, [tex]\left( \eta \ \omega \right)_{\beta \alpha} = - \left( \eta \ \omega \right)_{\alpha \beta} .[/tex] Following the above-mentioned rule for matrix multiplication, we get [tex]\eta_{\beta \mu} \ \omega^{\mu}{}_{\alpha} = - \eta_{\alpha \rho} \ \omega^{\rho}{}_{\beta} .[/tex] Thus [tex]\omega_{\beta \alpha} = - \omega_{\alpha \beta} . \ \ \ \ \ (2)[/tex] You can also start from (2) and go backward to (1).

    (2) The defining relation of Lorentz group is given by [tex]\eta_{\mu \nu} \ \Lambda^{\mu}{}_{\alpha} \ \Lambda^{\nu}{}_{\beta} = \eta_{\alpha \beta} . \ \ \ (3)[/tex] Before we carry on with raising and lowering indices, I would like to make two important side notes on Eq(3): A) equations (1) or (2) are the infinitesimal version of Eq(3), and B) since the [itex]\Lambda[/itex]’s form a group, Eq(3) is also satisfied by inverse element, [tex]\eta_{\mu \nu} \left( \Lambda^{-1}\right)^{\mu}{}_{\alpha} \left( \Lambda^{-1}\right)^{\nu}{}_{\beta} = \eta_{\alpha \beta} . \ \ (4)[/tex]
    Okay, lowering the index on the first [itex]\Lambda[/itex] in Eq(3), we obtain [tex]\Lambda_{\nu \alpha} \ \Lambda^{\nu}{}_{\beta} = \eta_{\alpha \beta} .[/tex] Now, raising the index [itex]\alpha[/itex] on both sides (or, which is the same thing, contracting with [itex]\eta^{\alpha \tau}[/itex]), we obtain [tex]\Lambda_{\nu}{}^{\tau} \ \Lambda^{\nu}{}_{\beta} = \delta^{\tau}{}_{\beta} . \ \ \ \ \ (5)[/tex] Notice that Eq(5) does not follow the rule of matrix multiplication. This is because of the funny index structure of [itex]\Lambda_{\nu}{}^{\tau}[/itex] which does not agree with our convention in (iv) above. However, we know the following matrix equation [tex]\left( \Lambda^{-1} \ \Lambda \right)^{\tau}{}_{\beta} = \delta^{\tau}{}_{\beta} .[/tex] So, using the rule for matrix multiplication, we find [tex]\left( \Lambda^{-1}\right)^{\tau}{}_{\nu} \ \Lambda^{\nu}{}_{\beta} = \delta^{\tau}{}_{\beta} . \ \ \ \ \ (6)[/tex] Comparing (5) with (6), we find [tex]\left( \Lambda^{-1}\right)^{\tau}{}_{\nu} = \Lambda_{\nu}{}^{\tau} . \ \ \ \ \ \ (7)[/tex] We will come to the (matrix) meaning of this in a minute, let us first substitute (7) in (4) to obtain: [tex]\eta_{\mu \nu} \ \Lambda_{\alpha}{}^{\mu} \ \Lambda_{\beta}{}^{\nu} = \eta_{\alpha \beta} .[/tex] This shows that we could have started with the convention [itex]\Lambda_{\mu}{}^{\nu}[/itex] for the matrix element of [itex]\Lambda[/itex]. The lesson is this, once you choose a convention you must stick with it.

    vi) Finally, Eq(7) means the following: given the matrix
    [tex]
    \Lambda = \begin{pmatrix}
    \Lambda^{0}{}_{0} & \Lambda^{0}{}_{1} & \Lambda^{0}{}_{2} & \Lambda^{0}{}_{3} \\
    \Lambda^{1}{}_{0} & \Lambda^{1}{}_{1} & \Lambda^{1}{}_{2} & \Lambda^{1}{}_{3} \\
    \Lambda^{2}{}_{0} & \Lambda^{2}{}_{1} & \Lambda^{2}{}_{2} & \Lambda^{2}{}_{3} \\
    \Lambda^{3}{}_{0} & \Lambda^{3}{}_{1} & \Lambda^{3}{}_{2} & \Lambda^{3}{}_{3}
    \end{pmatrix} ,
    [/tex]

    the inverse is obtained by changing the sign of [itex]\Lambda^{0}{}_{i}[/itex] and [itex]\Lambda^{i}{}_{0}[/itex] components ONLY, and then transposing ALL indices:

    [tex]
    \Lambda^{-1} = \begin{pmatrix}
    \Lambda^{0}{}_{0} & -\Lambda^{1}{}_{0} & -\Lambda^{2}{}_{0} & -\Lambda^{3}{}_{0} \\
    -\Lambda^{0}{}_{1} & \Lambda^{1}{}_{1} & \Lambda^{2}{}_{1} & \Lambda^{3}{}_{1} \\
    -\Lambda^{0}{}_{2} & \Lambda^{1}{}_{2} & \Lambda^{2}{}_{2} & \Lambda^{3}{}_{2} \\
    -\Lambda^{0}{}_{3} & \Lambda^{1}{}_{3} & \Lambda^{2}{}_{3} & \Lambda^{3}{}_{3}
    \end{pmatrix} .
    [/tex]
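
    That recipe (flip the signs of the time-space components ##\Lambda^0{}_i## and ##\Lambda^i{}_0##, then transpose) can be checked against a direct matrix inverse; the following is a sketch in numpy (assuming ##c = 1##; `inverse_by_recipe` is a hypothetical helper name), tried both on a pure boost and on a boost composed with a rotation:

```python
import numpy as np

def inverse_by_recipe(Lam):
    """Flip signs of Lambda^0_i and Lambda^i_0, then transpose."""
    S = Lam.copy()
    S[0, 1:] *= -1.0   # flip Lambda^0_i
    S[1:, 0] *= -1.0   # flip Lambda^i_0
    return S.T

beta = 0.5
gamma = 1.0 / np.sqrt(1.0 - beta**2)
Lam = np.array([[gamma, -beta * gamma, 0, 0],
                [-beta * gamma, gamma, 0, 0],
                [0, 0, 1, 0],
                [0, 0, 0, 1]])
assert np.allclose(inverse_by_recipe(Lam), np.linalg.inv(Lam))

# Also works for a boost composed with a spatial rotation:
theta = 0.7
R = np.eye(4)
R[1:3, 1:3] = [[np.cos(theta), -np.sin(theta)],
               [np.sin(theta),  np.cos(theta)]]
assert np.allclose(inverse_by_recipe(Lam @ R), np.linalg.inv(Lam @ R))
```
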
     
    Last edited: Sep 5, 2015
  19. Sep 8, 2015 #18

    dyn


    Thanks for that reply. I understand the order of the matrix multiplication now. There is one last thing that is puzzling me: where does the negative sign come from when taking the inverse Lorentz matrix? The key equation seems to be ##(\Lambda^{-1})^u{}_v = \Lambda_v{}^u##, but what exactly does this mean, and how does it introduce the sign change? If the number on row ##u##, column ##v## of the Lorentz matrix is denoted by ##\Lambda^u{}_v##, what does ##\Lambda_v{}^u## denote, and where is the negative sign coming from?
     
  20. Sep 9, 2015 #19

    vanhees71

    Science Advisor
    2016 Award

    The sign change comes from the index lowering and raising rule:
    $${(\Lambda^{-1})^{\mu}}_{\nu}={\Lambda_{\nu}}^{\mu}=\eta_{\nu \sigma} \eta^{\mu \rho} {\Lambda^{\sigma}}_{\rho}.$$
    Note that in matrix notation this reads
    $$\hat{\Lambda}^{-1} = (\hat{\eta} \hat{\Lambda} \hat{\eta})^{\mathrm{T}}=\hat{\eta} \hat{\Lambda}^{\mathrm{T}} \hat{\eta},$$
    where I have used that ##\hat{\eta}=\hat{\eta}^{-1} = \hat{\eta}^{\mathrm{T}}=\mathrm{diag}(1,-1,-1,-1)##.
     
  21. Sep 9, 2015 #20

    dyn


    I would like to thank everyone for their patience with me on this thread and for their help. I finally see where the sign change comes from, but it is so much easier just to remember to change every ##v## to a ##-v##! I will be starting another thread soon as I have some more questions; I'm hoping you can all help me again. Many thanks.
     