
Lorentz transformations and Minkowski metric

  1. Sep 13, 2014 #1
    I am attempting to read my first book in QFT, and got stuck.

    A Lorentz transformation [itex]x^{\mu} \rightarrow {x'}^{\mu} = {\Lambda}^\mu_\nu x^\nu [/itex] preserves the Minkowski metric [itex]\eta_{\mu \nu}[/itex]. This means [itex] \eta_{\mu \nu} x^\mu x^\nu = \eta_{\mu \nu}x'^\mu x'^\nu [/itex] for all [itex]x[/itex], which implies that [itex] \eta_{\mu \nu} = \eta_{\sigma \tau} {\Lambda}^\sigma_\mu {\Lambda}^\tau_\nu [/itex].

    I am wondering if this is the right direction to arrive at the implication:

    [itex] x^\sigma \rightarrow x'^\sigma = \Lambda^\sigma_\mu x^\mu, x^\tau \rightarrow x'^\tau = \Lambda^\tau_\nu x^\nu [/itex]

    Now, I am not sure how to go beyond this point. And I assume that [itex] \eta_{\mu \nu} = \eta_{\sigma \tau} {\Lambda}^\sigma_\mu {\Lambda}^\tau_\nu [/itex] is an equation between operators, so it should not matter whether the [itex]x^\mu[/itex] are included or not.
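
    For concreteness, here is a quick numerical check I put together (just a sketch; the boost along x and the (+,-,-,-) signature are my own choices, since the book may use other conventions):

    [code]
    import numpy as np

    # Minkowski metric, assuming signature (+, -, -, -)
    eta = np.diag([1.0, -1.0, -1.0, -1.0])

    # Example Lorentz transformation: a boost along x with speed beta (c = 1)
    beta = 0.6
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    Lam = np.array([[ gamma,      -gamma*beta, 0.0, 0.0],
                    [-gamma*beta,  gamma,      0.0, 0.0],
                    [ 0.0,         0.0,        1.0, 0.0],
                    [ 0.0,         0.0,        0.0, 1.0]])

    # eta_{mu nu} = eta_{sigma tau} Lambda^sigma_mu Lambda^tau_nu
    # is the matrix equation Lam^T eta Lam = eta:
    print(np.allclose(Lam.T @ eta @ Lam, eta))   # True

    # and the interval is preserved for any x:
    x = np.array([1.0, 2.0, 3.0, 4.0])
    print(np.isclose(x @ eta @ x, (Lam @ x) @ eta @ (Lam @ x)))  # True
    [/code]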


    Thanks.
     
  3. Sep 13, 2014 #2

    WannabeNewton

    Science Advisor

    Use an infinitesimal Lorentz transformation and work to first order in the expansion parameter.
     
  4. Sep 13, 2014 #3
    Thank you for the quick reply.

    Can you give me an example in this context or further explanation of what an infinitesimal Lorentz transformation is? Also, the unprimed to primed transformation is given. How do you go the other way?

    Is it correct to say that since [itex] \eta_{\mu \nu} x ^\mu x^\nu = \eta_{\mu \nu} x'^\mu x'^\nu [/itex] then [itex] \eta_{\mu \nu} x ^\mu x^\nu = \eta_{\mu \nu} \Lambda^\mu_\sigma x^\sigma \Lambda^\nu_\tau x^\tau [/itex]? And since repeated indices are "dummy" indices, then they can be relabeled and we have that [itex] \eta_{\mu \nu} \Lambda^\mu_\sigma x^\sigma \Lambda^\nu_\tau x^\tau = \eta_{\sigma \tau} \Lambda^\sigma_\mu x^\mu \Lambda^\tau_\nu x^\nu [/itex], therefore showing the original implication?
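
    To convince myself about the relabeling, I also tried it numerically with einsum (a sketch; the boost matrix is just an example, and the summed index names are arbitrary):

    [code]
    import numpy as np

    eta = np.diag([1.0, -1.0, -1.0, -1.0])
    beta, gamma = 0.6, 1.25
    Lam = np.array([[ gamma,      -gamma*beta, 0.0, 0.0],
                    [-gamma*beta,  gamma,      0.0, 0.0],
                    [ 0.0,         0.0,        1.0, 0.0],
                    [ 0.0,         0.0,        0.0, 1.0]])
    x = np.array([1.0, 2.0, 3.0, 4.0])

    # eta_{mu nu} Lam^mu_sigma x^sigma Lam^nu_tau x^tau, with two different
    # namings of the summed (dummy) indices -- the result is the same number:
    a = np.einsum('mn,ms,s,nt,t->', eta, Lam, x, Lam, x)
    b = np.einsum('st,sm,m,tn,n->', eta, Lam, x, Lam, x)
    print(np.isclose(a, b))  # True
    [/code]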
     
    Last edited: Sep 13, 2014
  5. Sep 13, 2014 #4

    Fredrik

    Staff Emeritus
    Science Advisor
    Gold Member

    I prefer to work with the matrices instead of their components. The Minkowski bilinear form g is defined by ##g(x,y)=x^T\eta y## for all 4×1 matrices x,y; in particular ##g(x,x)=x^T\eta x##. A Lorentz transformation is a 4×4 matrix ##\Lambda## such that ##g(\Lambda x,\Lambda x)=g(x,x)## for all 4×1 matrices x. So if ##\Lambda## is a Lorentz transformation, we have
    $$x^T\eta x=g(x,x)=g(\Lambda x,\Lambda x)=(\Lambda x)^T\eta(\Lambda x) =x^T\Lambda^T \eta\Lambda x,$$ for all 4×1 matrices x. The words "for all" are the key to the answer. You can make a clever choice of x to show that one component of the matrix equation ##\eta=\Lambda^T\eta\Lambda## holds. Different choices of x will help you prove different components of that matrix equation.

    It may be easier to first prove that this statement holds: For all 4×1 matrices x and y, we have ##x^T\eta y=x^T\Lambda^T\eta\Lambda y##. You can prove this by noting that the seemingly weaker statement above (with two x's and no y) implies that for all x,y, we have ##(x+y)^T\eta (x+y)=(x+y)^T\Lambda^T\eta\Lambda (x+y)## and ##(x-y)^T\eta (x-y)=(x-y)^T\Lambda^T\eta\Lambda (x-y)##.

    The reason why this is worth doing first is that it's now very easy to single out any component you want from the matrix equation with a clever choice of x and y.
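
    Here is a minimal numerical sketch of that polarization step (M below stands for any symmetric matrix, e.g. ##\eta## or ##\Lambda^T\eta\Lambda##; the random numbers are just for illustration):

    [code]
    import numpy as np

    # For a symmetric M, the values of the quadratic form x^T M x
    # determine the bilinear form y^T M z via polarization.
    rng = np.random.default_rng(0)
    A = rng.normal(size=(4, 4))
    M = A + A.T                      # an arbitrary symmetric 4x4 matrix
    y = rng.normal(size=4)
    z = rng.normal(size=4)

    q = lambda v: v @ M @ v          # the quadratic form v^T M v
    # (y+z)^T M (y+z) - (y-z)^T M (y-z) = 4 y^T M z   when M = M^T
    print(np.isclose(q(y + z) - q(y - z), 4 * (y @ M @ z)))  # True
    [/code]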

    When physicists use the term "infinitesimal", they're always talking about some kind of Taylor expansion, where we pretend that the higher order terms don't exist. A Lorentz transformation can be viewed as a function of six parameters (three velocity components and three Euler angles that identify a rotation). If we denote the 6-tuple of parameters by ##\theta##, we can write
    $$\Lambda(\theta)=I+\theta^a\frac{\partial}{\partial \theta^a}\bigg|_0\Lambda(\theta)+\cdots$$ The sum of the first order terms is often denoted by ##\omega##. The sum ##I+\omega## can be called an infinitesimal Lorentz transformation. But this doesn't have anything to do with what you asked about. It is relevant to QFT though. A standard exercise is to show that if ##\Lambda(\theta)## is a Lorentz transformation, then ##\omega^T=-\omega##.
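
    A quick numerical illustration of the infinitesimal statement (a sketch; I'm taking ω proportional to the boost generator along x, and reading ##\omega^T=-\omega## as a statement about ω with both indices lowered, i.e. ##\eta\omega## is antisymmetric as a matrix):

    [code]
    import numpy as np

    eta = np.diag([1.0, -1.0, -1.0, -1.0])

    # First-order Lorentz transformation I + omega, with omega a small
    # multiple of the boost generator along x (mixed indices omega^mu_nu).
    eps = 1e-6
    K = np.zeros((4, 4))
    K[0, 1] = K[1, 0] = 1.0
    omega = eps * K

    # The Lorentz condition to first order: eta @ omega is antisymmetric,
    # i.e. omega with both indices down satisfies omega^T = -omega.
    omega_low = eta @ omega
    print(np.allclose(omega_low.T, -omega_low))      # True

    # (I + omega) preserves eta up to second-order terms:
    L1 = np.eye(4) + omega
    print(np.max(np.abs(L1.T @ eta @ L1 - eta)))     # ~ eps**2
    [/code]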

    This is so much easier when we're working with matrices instead of their components. ##x'=\Lambda x## implies ##x=\Lambda^{-1}x'##. What can be confusing is that the component version of the former is ##x'^\mu=\Lambda^\mu{}_\nu x^\nu##, while the component version of the latter is ##x^\mu =\Lambda_\nu{}^\mu x'^\nu##. I wrote a post that explains this (and how to translate between component notation and matrix notation) a couple of days ago: https://www.physicsforums.com/showthread.php?p=4847943#post4847943
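
    A sketch of the inverse in matrix form (this uses ##\Lambda^{-1}=\eta\Lambda^T\eta##, which follows from ##\Lambda^T\eta\Lambda=\eta## together with ##\eta^2=I##; the boost is just an example):

    [code]
    import numpy as np

    eta = np.diag([1.0, -1.0, -1.0, -1.0])
    beta, gamma = 0.6, 1.25
    Lam = np.array([[ gamma,      -gamma*beta, 0.0, 0.0],
                    [-gamma*beta,  gamma,      0.0, 0.0],
                    [ 0.0,         0.0,        1.0, 0.0],
                    [ 0.0,         0.0,        0.0, 1.0]])

    # Lam^T eta Lam = eta  and  eta @ eta = I  give
    # Lam^{-1} = eta @ Lam.T @ eta, whose components are Lambda_nu^mu.
    Lam_inv = eta @ Lam.T @ eta
    print(np.allclose(Lam_inv @ Lam, np.eye(4)))  # True

    # x' = Lam x  and  x = Lam_inv x' are the same relation:
    x = np.array([1.0, 2.0, 3.0, 4.0])
    xp = Lam @ x
    print(np.allclose(Lam_inv @ xp, x))           # True
    [/code]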

    The calculation is correct, but you need an argument like the one described above (try several clever choices of x) to eliminate the x's.
     
    Last edited: Sep 13, 2014
  6. Sep 15, 2014 #5
    Would a "clever" choice be to multiply both sides on the right by [itex]x_\nu x_\mu[/itex]? That is, multiplying both sides on the left by the inverses of [itex]x^\mu[/itex] and [itex]x^\nu[/itex]?
     
    Last edited: Sep 15, 2014
  7. Sep 15, 2014 #6

    Fredrik

    Staff Emeritus
    Science Advisor
    Gold Member

    Those aren't inverses, and to multiply both sides by something isn't to make a choice of x. What you need to do is to plug in specific numbers as the components of x.

    If M is a 4×4 matrix whose component on row ##\mu##, column ##\nu## is denoted by ##M^\mu{}_\nu##, then what is
    $$\begin{pmatrix}1& 0& 0& 0\end{pmatrix}M\begin{pmatrix}1\\ 0\\ 0\\ 0\end{pmatrix}?$$
     
  8. Sep 15, 2014 #7

    nrqed

    Science Advisor
    Homework Helper
    Gold Member


    I am a bit confused about what you are trying to prove and what you are starting with. You seem to start with [itex]x^{\mu} \rightarrow {x'}^{\mu} = {\Lambda}^\mu_\nu x^\nu [/itex] and you seem to want to prove


    [itex] x^\sigma \rightarrow x'^\sigma = \Lambda^\sigma_\mu x^\mu, x^\tau \rightarrow x'^\tau = \Lambda^\tau_\nu x^\nu [/itex]

    But these three expressions are all the same! So can you clarify what you are trying to prove?
     
  9. Sep 15, 2014 #8

    Fredrik

    Staff Emeritus
    Science Advisor
    Gold Member

    I'm pretty sure that he wants to prove that if ##\eta_{\mu\nu}x'^\mu x'^\nu =\eta_{\mu\nu} x^\mu x^\nu## for all x, then ##\Lambda^\mu{}_\rho\Lambda^\nu{}_\sigma \eta_{\mu\nu} =\eta_{\rho\sigma}##, or equivalently, that if ##x'^T\eta x'=x^T\eta x## for all x, then ##\Lambda^T\eta\Lambda=\eta##.
     
  10. Sep 16, 2014 #9

    ChrisVer

    Gold Member

    Why do you need to use different choices for x?
    I mean, the form you reached, which holds for all x, is:
    [itex] x^T \eta x = x^T \Lambda^T \eta \Lambda x[/itex]
    and I think it is enough to say that:
    [itex]\eta= \Lambda^T \eta \Lambda [/itex]
    (No need for choices of x or for inserting other vectors y, etc.)
     
  11. Sep 16, 2014 #10

    Fredrik

    Staff Emeritus
    Science Advisor
    Gold Member

    This conclusion is definitely not obvious, but if you have previously proved a theorem that says e.g. that if <x,Ax>=0 for all x then A=0 (over the reals this requires A to be symmetric, which it is here, since ##\Lambda^T\eta\Lambda-\eta## is symmetric), then you can use that theorem.

    If you do what I did in #4 to replace one of the x's with an arbitrary y, then there's another option to "clever choices of x" (or "clever choices of x and y"): If ##x^TAy=0## for all x,y, then ##x^TA## is the matrix representation of the functional that takes every vector to 0. So we have ##0=x^TA=(A^Tx)^T## for all x, and now we can argue that ##A^T=0## in the same way.

    But I still think that "clever choices of x and y" is the easiest and best way to do this. It gives us a simple and elementary proof that doesn't rely on any other theorems.
     
    Last edited: Sep 16, 2014
  12. Sep 16, 2014 #11

    [itex]{M^1}_1[/itex]

    Then for basis vectors, [itex]x_\mu {M^\mu}_\nu x^\nu = {M^\mu}_\nu [/itex]?
     
  13. Sep 17, 2014 #12

    Fredrik

    Staff Emeritus
    Science Advisor
    Gold Member

    Yes, it's the top left component, but the rows and columns are usually numbered from 0 to 3 in relativity, so I would denote this component by ##M^0{}_0##. So if ##x^TMx=0## for all x, then this choice of x tells us that ##M^0{}_0=0##.

    Since you have x on both sides of M, and there are only four vectors in the standard basis, you can only single out four components of M by plugging in standard basis vectors. These are the ones on the diagonal. To get information about the other components of M, you would have to plug in other vectors. For example, if you plug in (1,1,0,0), you get ##M^0{}_0+M^0{}_1+M^1{}_0+M^1{}_1=0##. If you plug in (1,-1,0,0), you get an equation with the same components, but two of the signs flipped. If you keep plugging in vectors like these, you get more and more information, and will eventually find all the symmetric combinations ##M^\mu{}_\nu+M^\nu{}_\mu##. That's all we need here, since the M we care about, ##\Lambda^T\eta\Lambda-\eta##, is symmetric.

    It's much easier to first do what I did, to convert the result ##x^TMx=0## for all x to ##x^TMy=0## for all x,y. Now there's no need to plug in anything but standard basis vectors. By plugging in different standard basis vectors on the left and right, you can single out any component of M you want.
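
    In code, the "plug in standard basis vectors on both sides" step looks like this (a sketch with an arbitrary matrix, just to show which component gets singled out):

    [code]
    import numpy as np

    # e_mu^T M e_nu picks out the component on row mu, column nu.
    M = np.arange(16.0).reshape(4, 4)   # any 4x4 matrix, for illustration
    e = np.eye(4)                       # rows are the standard basis vectors
    mu, nu = 0, 2
    print(e[mu] @ M @ e[nu] == M[mu, nu])  # True
    [/code]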
     
  14. Sep 17, 2014 #13

    ChrisVer

    Gold Member

    Another, easier way I saw here
    https://www.physicsforums.com/showthread.php?t=91974
    [itex] Ax = Bx \Rightarrow B^{-1}A x = B^{-1}Bx = x [/itex]
    so [itex]B^{-1}A = I [/itex].
    Then you can also write [itex]x \rightarrow B^{-1} y[/itex] (since B is a given matrix, "for all x" implies "for all y"), from which you get [itex]A B^{-1} = I [/itex].

    These results, [itex] B^{-1}A = A B^{-1} =I [/itex], are the definition of [itex]A[/itex] being the inverse of [itex]B^{-1}[/itex].
    So [itex]A= (B^{-1})^{-1}=B[/itex].

    This just needs the assumption that [itex]B[/itex] is inversible, which is true for the Minkowski metric matrix. In your case:
    [itex] x^{T} \eta x = x^T \Lambda^T \eta \Lambda x \Rightarrow x^{T} (\eta x) = x^T (\Lambda^T \eta \Lambda x) \Rightarrow \eta x = \Lambda^T \eta \Lambda x[/itex]
    So [itex]A= \Lambda^T \eta \Lambda[/itex] and [itex]B= \eta[/itex], and since [itex]B[/itex] is inversible, then by the above, [itex]A=B \Rightarrow \Lambda^T \eta \Lambda = \eta[/itex]

    (do I make it desperately obvious how much working with specific components confuses me?)
     
    Last edited: Sep 17, 2014
  15. Sep 17, 2014 #14

    Fredrik

    Staff Emeritus
    Science Advisor
    Gold Member

    The step from ##x^T(\eta x)=x^T(\Lambda^T\eta\Lambda x)## for all x, to ##\eta x=\Lambda^T\eta\Lambda x## for all x, is non-trivial, since the things in parentheses depend on x. Even the step from ##B^{-1}Ax=x## for all x to ##B^{-1}A=I## requires an explanation. If you're familiar with the bijective correspondence between linear operators and matrices, then you can argue that the linear operators corresponding to ##B^{-1}A## and ##I## have the same domain and take each element of that domain to the same thing. That means that the linear operators are equal (i.e. that the linear operator corresponding to ##B^{-1}A## is the identity map), and that means that their matrices are equal. So we can conclude that ##B^{-1}A=I##.
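
    If it helps, here is the column-by-column version of that argument as a sketch (the point is that ##(B^{-1}A)e_j## is the j-th column of ##B^{-1}A##):

    [code]
    import numpy as np

    # If (B^{-1}A) x = x for all x, then in particular (B^{-1}A) e_j = e_j
    # for every standard basis vector e_j. But (B^{-1}A) e_j is the j-th
    # column of B^{-1}A, so all its columns match the identity: B^{-1}A = I.
    rng = np.random.default_rng(1)
    B = rng.normal(size=(4, 4))          # invertible with probability 1
    A = B.copy()                         # then Ax = Bx for all x
    C = np.linalg.solve(B, A)            # B^{-1} A, without forming B^{-1}
    print(np.allclose(C, np.eye(4)))     # True
    [/code]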

    But the easiest way is still..."clever choices of x". This doesn't even require you to understand the connection between linear operators and matrices.

    The word is "invertible" by the way.
     
  16. Sep 17, 2014 #15

    ChrisVer

    Gold Member

    are you implying that the associative property does not hold?
    http://en.wikipedia.org/wiki/Matrix_multiplication#Row_vector.2C_square_matrix.2C_and_column_vector
    I don't understand how it can be non-trivial. If I take two vectors and take their normal (dot) product:
    [itex] x^T y = x^T z[/itex], don't I have that [itex]y=z[/itex]? And how is x destroying that? If you change x, then you are changing the results (y, z), but they still remain equal (or the equality [itex] x^T y = x^T z[/itex] won't hold).
    As for the identity: just ask, what is the thing that, when it acts on every vector [itex]x[/itex], gives you back the same vector? By definition it's the identity "transformation" (or better, the mapping [itex]V \rightarrow V[/itex]).

    The problem with taking several x's is that you are "doomed" to checking too many elements and doing exactly the same thing in many steps. I'm trying to find the fastest, most general and self-explanatory way (not saying that the other is wrong, but because you need to start making "choices" I find it not delicate).
     
    Last edited: Sep 17, 2014
  17. Sep 17, 2014 #16
    OK - I think I have it.

    A Lorentz transformation [itex]x^\mu \rightarrow {x'}^\mu = {\Lambda^\mu}_\nu x^\nu [/itex] preserves the Minkowski metric [itex] \eta_{\mu \nu} [/itex], which means that [itex] \eta_{\mu \nu} x^\mu x^\nu = \eta_{\mu \nu} {x'}^\mu {x'}^\nu \hspace{3pt} \forall \hspace{3pt} x[/itex]. Therefore,

    [itex]
    \eta_{\mu \nu} x^\mu x^\nu = \eta_{\mu \nu} {x'}^\mu {x'}^\nu \\
    = \eta_{\mu \nu} {\Lambda^\mu}_\sigma x^\sigma {\Lambda^\nu}_\tau x^\tau \\
    = \eta_{\sigma \tau} {\Lambda^\sigma}_\mu x^\mu {\Lambda^\tau}_\nu x^\nu \\
    \Rightarrow \eta_{\mu \nu} x^\mu x^\nu = \eta_{\sigma \tau} {\Lambda^\sigma}_\mu {\Lambda^\tau}_\nu x^\mu x^\nu
    [/itex]

    by relabeling the repeated/dummy indices. Since this holds *for all [itex]x[/itex]*,

    [itex]

    \eta_{\mu \nu} = \eta_{\sigma \tau} {\Lambda^\sigma}_\mu {\Lambda^\tau}_\nu

    [/itex].
     
  18. Sep 17, 2014 #17

    ChrisVer

    Gold Member

    didn't you already do that in post #3?
     
  19. Sep 17, 2014 #18

    Fredrik

    Staff Emeritus
    Science Advisor
    Gold Member

    Of course not.

    The following statement is true:
    For all 4×1 matrices ##y,z##, if ##x^Ty=x^Tz## for all 4×1 matrices x, then y=z.​
    It's true, but not trivial. The easiest way to prove it is this: Let y,z be 4×1 matrices. Suppose that ##x^Ty=x^Tz## for all 4×1 matrices x. Then for each ##\mu\in\{0,1,2,3\}##, we have ##y^\mu=(e_\mu)^Ty=(e_\mu)^Tz=z^\mu##. This implies that ##y=z##.

    (Each ##e_\mu## denotes a standard basis vector).

    You keep suggesting that the theorem I stated and proved above doesn't require proof, and that it's equally obvious that the following statement is also true:
    For all linear operators A,B on the set of 4×1 matrices, if ##x^TA(x)=x^TB(x)## for all 4×1 matrices x, then ##A=B##.​
    This statement is true as well, but it's certainly not a trivial consequence of the first theorem. This should be obvious from the fact that the first theorem is a statement about 4×1 matrices, and the second is a statement about linear operators on the vector space of 4×1 matrices. That's how x is "destroying that".

    Yes, I told you that this is a way to prove that ##B^{-1}A=I##, but I also told you that it requires you to understand the relationship between linear operators and matrices. The argument you're making is not about the matrices ##B^{-1}A## and ##I##. It's about the corresponding linear operators. Here's that part of my post again:
    "If you're familiar with the bijective correspondence between linear operators and matrices, then you can argue that the linear operators corresponding to ##B^{-1}A## and ##I## have the same domain and take each element of that domain to the same thing. That means that the linear operators are equal (i.e. that the linear operator corresponding to ##B^{-1}A## is the identity map), and that means that their matrices are equal."​

    Yes, the first time you do it, you will make some useless choices of x. That's why I prefer to first prove that ##y^T\Lambda^T\eta\Lambda z=y^T\eta z## for all 4×1 matrices y,z. It's very easy to avoid making useless choices of y and z. In fact, you will probably get it exactly right with your first guess.

    I think the fastest and simplest way to eliminate all doubt is the proof I have sketched:

    1. We know that ##x^T\Lambda^T\eta\Lambda x=x^T\eta x## for all x.
    2. Let y,z be arbitrary 4×1 matrices. Since the equality in step 1 holds for all x, it holds when we substitute y+z for x, and it holds when we substitute y-z for x.
    3. When we simplify the equalities obtained in 2, we find that ##y^T\Lambda^T\eta\Lambda z=y^T\eta z## for all 4×1 matrices y,z.
    4. This result implies that for all ##\mu,\nu##, we have
    $$(\Lambda^T\eta\Lambda)^\mu{}_\nu = (e_\mu)^T(\Lambda^T\eta\Lambda)e_\nu =(e_\mu)^T\eta e_\nu =\eta_{\mu\nu}.$$
    5. The matrix equation ##\Lambda^T\eta\Lambda=\eta## holds, because each component of it holds.
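
    The whole five-step argument fits in a few lines (a sketch; the boost is again just an example of a ##\Lambda## satisfying step 1):

    [code]
    import numpy as np

    eta = np.diag([1.0, -1.0, -1.0, -1.0])
    beta, gamma = 0.6, 1.25
    Lam = np.array([[ gamma,      -gamma*beta, 0.0, 0.0],
                    [-gamma*beta,  gamma,      0.0, 0.0],
                    [ 0.0,         0.0,        1.0, 0.0],
                    [ 0.0,         0.0,        0.0, 1.0]])

    e = np.eye(4)  # rows are the standard basis vectors e_0, ..., e_3
    # Step 4 for every (mu, nu): e_mu^T (Lam^T eta Lam) e_nu = eta_{mu nu}
    ok = all(np.isclose(e[m] @ Lam.T @ eta @ Lam @ e[n], eta[m, n])
             for m in range(4) for n in range(4))
    print(ok)  # True, so Lam^T eta Lam = eta component by component
    [/code]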

    I'm not sure why you find it "not delicate". The standard way to use the fact that every x in a set has some property P is to pick a specific x in that set and use that it has property P.
     
    Last edited: Sep 17, 2014
  20. Sep 17, 2014 #19

    Fredrik

    Staff Emeritus
    Science Advisor
    Gold Member

    This is a much better way to state what you did in posts #1 and #3, but what you wanted to know was why the final step is valid, right? Do you understand why it's valid now?
     
  21. Sep 17, 2014 #20

    ChrisVer

    Gold Member

    Because you don't need one choice, but several... If you wanted one choice, then to cover an arbitrary x (and hence all x's) you would have to choose [itex] x = (a,b,c,d)^T[/itex] with [itex]a,b,c,d \in \mathbb{R}[/itex] arbitrary numbers... which of course won't be too insightful after working out the matrix multiplications with it.
     