General Lorentz Matrix in Terms of Rapidities


Discussion Overview

The discussion revolves around the formulation of the general Lorentz matrix in terms of rapidities, particularly focusing on transformations that are not limited to boosts in a single direction. Participants explore the mathematical representation and implications of such transformations, referencing literature and engaging in technical reasoning.

Discussion Character

  • Technical explanation
  • Mathematical reasoning
  • Debate/contested

Main Points Raised

  • Some participants seek references or derivations for the general Lorentz matrix in terms of rapidities, questioning whether it pertains to arbitrary boosts or more general Lorentz transformations.
  • One participant provides a specific form of the Lorentz matrix involving hyperbolic functions and rotation matrices, indicating its utility for further exploration.
  • There is a discussion about the nature of the rotation matrix R, with one participant suggesting it can be replaced by the identity matrix for boosts only.
  • Another participant references Jackson's text, discussing tensors corresponding to roll, pitch, and yaw, and raises questions about the absence of imaginary numbers in the context of matrix exponentiation.
  • Participants engage in calculations involving matrix exponentials, exploring how to derive sine and cosine functions from matrix representations.
  • One participant expresses confusion regarding the treatment of the k=0 term in series expansions and seeks clarification on the identity matrix in this context.
  • There are mentions of specific matrices representing rapidity changes and rotations, with participants sharing their findings and results from computations.

Areas of Agreement / Disagreement

The discussion contains multiple competing views and remains unresolved regarding the precise formulation and interpretation of the Lorentz matrix in terms of rapidities. Participants express differing opinions on the mathematical details and implications of their findings.

Contextual Notes

Some participants note limitations in their understanding of matrix exponentiation and the treatment of specific terms in series expansions. There is also mention of software limitations affecting matrix multiplication, which could influence the results discussed.

Passionflower
Does anybody have a reference or can write out the general (so not just a boost in only one direction) Lorentz matrix in terms of rapidities?
 
Passionflower said:
Does anybody have a reference or can write out the general (so not just a boost in only one direction) Lorentz matrix in terms of rapidities?

Do you mean a boost in an arbitrary direction, or do you mean an arbitrary (restricted) Lorentz transformation (which is not necessarily a boost)? If you mean the former, look at page 541 of the second edition of Jackson.
 
Yep, here it is:

\Lambda = \left( \begin{smallmatrix} \pm 1 & 0 \\ 0 & \pm \textbf{I} \end{smallmatrix} \right) \left( \begin{smallmatrix} \cosh\eta & -\textbf{n}\sinh\eta \\ -\textbf{n}\sinh\eta & \textbf{I}+\textbf{n}\circ\textbf{n}\,(\cosh\eta -1) \end{smallmatrix} \right) \left( \begin{smallmatrix} 1 & 0 \\ 0 & \textbf{R} \end{smallmatrix} \right)
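Not part of the thread, but the middle (pure-boost) factor above is easy to check numerically: its time-time entry is cosh η, its time-space block is -n sinh η, and its space-space block is I + (cosh η − 1) n⊗n. A minimal sketch in Python/NumPy (the thread itself works in Mathematica; the function name and sample values here are made up for illustration):

```python
import numpy as np

def general_boost(n, eta):
    """4x4 boost with rapidity eta along the 3-direction n (middle factor above)."""
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)            # ensure n is a unit vector
    L = np.eye(4)
    L[0, 0] = np.cosh(eta)
    L[0, 1:] = L[1:, 0] = -n * np.sinh(eta)
    L[1:, 1:] = np.eye(3) + (np.cosh(eta) - 1.0) * np.outer(n, n)
    return L

# Sanity check: any Lorentz matrix preserves the Minkowski metric,
# L^T diag(1,-1,-1,-1) L = diag(1,-1,-1,-1).
metric = np.diag([1.0, -1.0, -1.0, -1.0])
L = general_boost([1.0, 2.0, 2.0], 0.7)
assert np.allclose(L.T @ metric @ L, metric)
```

Multiplying on the right by a rotation block (the third factor) keeps this property, since rotations preserve the spatial part of the metric.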
 
Thaakisfox said:
Yep, here it is:

\Lambda = \left( \begin{smallmatrix} \pm 1 & 0 \\ 0 & \pm \textbf{I} \end{smallmatrix} \right) \left( \begin{smallmatrix} \cosh\eta & -\textbf{n}\sinh\eta \\ -\textbf{n}\sinh\eta & \textbf{I}+\textbf{n}\circ\textbf{n}\,(\cosh\eta -1) \end{smallmatrix} \right) \left( \begin{smallmatrix} 1 & 0 \\ 0 & \textbf{R} \end{smallmatrix} \right)

That looks useful. I think I can work out what n is. What is R?
 
Mentz114 said:
That looks useful. I think I can work out what n is. What is R?

An arbitrary 3x3 (spatial) rotation matrix.
 
George Jones said:
An arbitrary 3x3 (spatial) rotation matrix.

Thanks. So for boosts only we can replace R with I, presumably?
 
George Jones said:
Do you mean a boost in an arbitrary direction, or do you mean an arbitrary (restricted) Lorentz transformation (which is not necessarily a boost)? If you mean the former, look at page 541 of the second edition of Jackson.

Hmmmmm. I have the third edition. Perhaps I don't read enough, but that's about the first time I've seen any discussion of this topic that's had any potential for making sense.

Jackson describes S1, S2, and S3, as tensors somehow corresponding to roll, pitch, and yaw, and K1, K2, K3, as tensors corresponding to rapidity change in the x, y, and z directions. (okay, he doesn't use those words, but that's what the math looks like in the end.)

He defines
L = \omega \cdot S - \zeta \cdot K, \qquad A = e^L

so that if \omega=(w,0,0) you get a rotation matrix, and if \zeta = (z,0,0) you get a boost matrix, except for one little issue. There's a mysterious lack of imaginary numbers anywhere, but he's still getting cosines and sines when he takes e^L for the \omega part.

It takes a little getting used to, that taking a constant to the power of a matrix gives you a matrix.

Should L be, instead

L = i \omega \cdot S - \zeta \cdot K

or is this hidden in the notation somewhere?
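Jackson's construction can be checked numerically without inserting any factor of i: the generators are real matrices, and the matrix exponential of a real antisymmetric block already produces cosines and sines. A Python/SciPy sketch (not from the thread; the sample values for \omega and \zeta are made up):

```python
import numpy as np
from scipy.linalg import expm

# Real 4x4 generators: S1 rotates about x, K1 boosts along x -- no i needed.
S1 = np.zeros((4, 4))
S1[2, 3], S1[3, 2] = -1.0, 1.0
K1 = np.zeros((4, 4))
K1[0, 1] = K1[1, 0] = 1.0

w, z = 0.8, 0.3                  # sample omega_1, zeta_1 (assumed values)

R = expm(w * S1)                 # rotation part: real cos/sin appear
assert np.allclose(R[2:, 2:], [[np.cos(w), -np.sin(w)],
                               [np.sin(w),  np.cos(w)]])

B = expm(-z * K1)                # boost part of A = exp(omega.S - zeta.K)
assert np.isclose(B[0, 0], np.cosh(z))
assert np.isclose(B[0, 1], -np.sinh(z))
```

The sign difference between the two cases comes from the generators themselves: S1 squares to minus a projector (giving cos/sin), while K1 squares to plus a projector (giving cosh/sinh).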
 
JDoolin said:
He defines
L = \omega \cdot S - \zeta \cdot K, \qquad A = e^L

so that if \omega=(w,0,0) you get a rotation matrix, and if \zeta = (z,0,0) you get a boost matrix, except for one little issue. There's a mysterious lack of imaginary numbers anywhere, but he's still getting cosines and sines when he takes e^L for the \omega part.

It takes a little getting used to, that taking a constant to the power of a matrix gives you a matrix.

Should L be, instead

L = i \omega \cdot S - \zeta \cdot K

or is this hidden in the notation somewhere?

The cosines and sines come from taking the matrix exponential. Try computing

\exp \begin{pmatrix} 0 & -\theta \\ \theta & 0 \end{pmatrix}

and see what happens.
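For concreteness, that computation can be done with SciPy's expm (a Python stand-in, not from the thread; the sample angle is made up):

```python
import numpy as np
from scipy.linalg import expm

theta = 0.7                                   # sample angle (assumed value)
M = np.array([[0.0, -theta], [theta, 0.0]])   # real antisymmetric generator

E = expm(M)  # the matrix exponential produces cos/sin with no imaginary unit
expected = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
assert np.allclose(E, expected)               # exp(M) is a 2D rotation matrix
```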
 
Ben Niehoff said:
The cosines and sines come from taking the matrix exponential. Try computing

\exp \begin{pmatrix} 0 & -\theta \\ \theta & 0 \end{pmatrix}

and see what happens.


You gave me the hint I needed by using the words "matrix exponential" (http://en.wikipedia.org/wiki/Matrix_exponential).

I'm starting to have some luck using

Code:
S1 = {{0,-Theta}, {Theta, 0}}
A = IdentityMatrix[2]  (*When I tried to do the k=0 term, I got a 0^0 error.*)
For[n = 1, n < 6, n = n + 1,  (*Technically this should go to n->infinity*)
 A = A + S1^n/n!;  (*This is apparently the key to doing the matrix exponential*)
 Print[MatrixForm[A]]]

The software I am using is not doing matrix multiplication correctly when I take S1^n. It is just multiplying term by term. So with the first five terms it came out as follows:

\left( \begin{array}{cc} 1 & -\frac{\theta^5}{120}+\frac{\theta^4}{24}-\frac{\theta^3}{6}+\frac{\theta^2}{2}-\theta \\ \frac{\theta^5}{120}+\frac{\theta^4}{24}+\frac{\theta^3}{6}+\frac{\theta^2}{2}+\theta & 1 \end{array} \right)

If I do proper Matrix Multiplication instead of what Mathematica did here, I can see I'd get the sines and cosines.

Question: How is the k=0 term determined in these cases? It wouldn't be the four-by-four identity matrix, apparently.

Answer: Here is a working version of the code for the first four terms in the series. I think the Wikipedia article on the matrix exponential (http://en.wikipedia.org/wiki/Matrix_exponential) is RIGHT, and the sum should go from 0 to infinity. ~~But this version of the identity matrix is peculiar to the 2 by 2 case.~~
Code:
Clear["Global`*"]
S = {{0, -T}, {T, 0}};
S0 = {{1, 0}, {0, 1}};
S1 = S
S2 = S.S
S3 = S.S.S
S4 = S.S.S.S
MatrixForm[Expand[S0 + S1 + S2/2 + S3/6 + S4/24]]
TeXForm[MatrixForm[Expand[S0 + S1 + S2/2 + S3/6 + S4/24]]]

The output is

\left( \begin{array}{cc} \frac{T^4}{24}-\frac{T^2}{2}+1 & \frac{T^3}{6}-T \\ T-\frac{T^3}{6} & \frac{T^4}{24}-\frac{T^2}{2}+1 \end{array} \right)

These are the first terms in the Maclaurin series expansions for sine and cosine.

I've also found that Mathematica has a "MatrixExp" command which gives the final value, without all this fiddling with infinite series.
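The same series can be reproduced outside Mathematica. A Python sketch (NumPy/SciPy assumed, sample angle made up) using proper matrix powers, so the k = 0 term is simply the identity matrix and there is no 0^0 problem:

```python
import numpy as np
from math import factorial
from numpy.linalg import matrix_power
from scipy.linalg import expm

theta = 0.7                                     # sample angle (assumed value)
S = np.array([[0.0, -theta], [theta, 0.0]])

# Partial sum of exp(S) = sum_k S^k / k!.  The k = 0 term is S^0 = I,
# the 2x2 identity matrix -- not a componentwise 0^0.
series = sum(matrix_power(S, k) / factorial(k) for k in range(20))

assert np.allclose(series, expm(S))             # matches the closed form
assert np.isclose(series[0, 0], np.cos(theta))  # Maclaurin series of cos
assert np.isclose(series[1, 0], np.sin(theta))  # ...and of sin
```

Twenty terms is far more than needed for this angle; the truncated sum already agrees with expm to machine precision.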
 
JDoolin said:
The software I am using is not doing matrix multiplication correctly when I take S1^n. It is just multiplying term by term.

Try MatrixPower[S1, n] instead of S1^n.
 
Thanks, Rasalhague.

Just for completion,


{K1,K2,K3} = \left( \begin{array}{cccc} 0 & \beta_x & 0 & 0 \\ \beta_x & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array} \right),\ \left( \begin{array}{cccc} 0 & 0 & \beta_y & 0 \\ 0 & 0 & 0 & 0 \\ \beta_y & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array} \right),\ \left( \begin{array}{cccc} 0 & 0 & 0 & \beta_z \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ \beta_z & 0 & 0 & 0 \end{array} \right)

{S1,S2,S3} = \left( \begin{array}{cccc} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & -\theta_1 \\ 0 & 0 & \theta_1 & 0 \end{array} \right),\ \left( \begin{array}{cccc} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & \theta_2 \\ 0 & 0 & 0 & 0 \\ 0 & -\theta_2 & 0 & 0 \end{array} \right),\ \left( \begin{array}{cccc} 0 & 0 & 0 & 0 \\ 0 & 0 & -\theta_3 & 0 \\ 0 & \theta_3 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array} \right)

MatrixExp[S1] = \left( \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & \cos(\theta_1) & -\sin(\theta_1) \\ 0 & 0 & \sin(\theta_1) & \cos(\theta_1) \end{array} \right)
MatrixExp[K1] = \left( \begin{array}{cccc} \frac{1}{2} e^{-\beta_x} \left(e^{2 \beta_x}+1\right) & \frac{1}{2} e^{-\beta_x} \left(e^{2 \beta_x}-1\right) & 0 & 0 \\ \frac{1}{2} e^{-\beta_x} \left(e^{2 \beta_x}-1\right) & \frac{1}{2} e^{-\beta_x} \left(e^{2 \beta_x}+1\right) & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array} \right)

The terms in the last matrix are hyperbolic sines and cosines.
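Indeed, \frac{1}{2} e^{-\beta_x}(e^{2\beta_x}+1) = \cosh\beta_x and \frac{1}{2} e^{-\beta_x}(e^{2\beta_x}-1) = \sinh\beta_x. A quick numerical cross-check of that last exponential (a Python/SciPy sketch, not from the thread; the sample rapidity is made up):

```python
import numpy as np
from scipy.linalg import expm

bx = 0.4                          # sample rapidity beta_x (assumed value)
K1 = np.zeros((4, 4))
K1[0, 1] = K1[1, 0] = bx          # boost generator along x, as above

B = expm(K1)
# Mathematica's half-exponential forms simplify to cosh and sinh:
assert np.isclose(B[0, 0], 0.5 * np.exp(-bx) * (np.exp(2 * bx) + 1))
assert np.isclose(B[0, 0], np.cosh(bx))
assert np.isclose(B[0, 1], np.sinh(bx))
```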

One concern I have about the rotation matrices: the method prescribed seems to be commutative, whereas rotations around different axes are usually considered non-commutative.

(Ah, I see. If you do all your rotations from the body in the original direction, it comes out commutative. If you turn with the body, the rotations are not commutative.)
 
Sorry to post in this thread after such a long time, but I found an error in my thinking. I cannot now see how rotations (yaw, pitch, roll) can be commutative: whether you turn along with the object or not, a change in yaw followed by a change in pitch is different from a change in pitch followed by a change in yaw.

Perhaps the "method prescribed" is not commutative, after all. Quite a relief, actually, since that means the rotations have to be treated individually. But I have to ask, if these aren't commutative, is it even philosophically possible for an object to turn on three axes simultaneously? It seems like it should be, since you could power three motors. But the position of those three motors would each be changing over time.

I would say, sure, it is both physically and philosophically possible to have an object rotating on three axes, but there is no unambiguous single way for that to happen. You would have to choose how to put one axis inside the other.

But on second thought, even then, if the engines were turning inside one another, they would line up on one occasion or another. So perhaps, it may actually be both physically and philosophically impossible for an object to rotate around three axes at once.


Also the equation for K1, K2, K3 got messed up:

{K1,K2,K3} = \left( \begin{array}{cccc} 0 & \beta_x & 0 & 0 \\ \beta_x & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array} \right),\ \left( \begin{array}{cccc} 0 & 0 & \beta_y & 0 \\ 0 & 0 & 0 & 0 \\ \beta_y & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array} \right),\ \left( \begin{array}{cccc} 0 & 0 & 0 & \beta_z \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ \beta_z & 0 & 0 & 0 \end{array} \right)
 
Ahh, I think I have it. A combination of two rotations, one after another, creates a final effect as though the object were rotated around a different axis. The "method prescribed" doesn't imply commutativity.

Two simultaneous rotations, one inside the other, is not the same as two simultaneous rotations, the other inside the one.
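That conclusion is easy to verify numerically: exponentials of the S generators given earlier in the thread, applied in different orders, give different rotations, and a single combined exponential differs from both. A Python/SciPy sketch (not from the thread; the angle values are made up):

```python
import numpy as np
from scipy.linalg import expm

def S1(t):   # rotation generator about x, angle folded in as in the thread
    M = np.zeros((4, 4)); M[2, 3], M[3, 2] = -t, t; return M

def S2(t):   # rotation generator about y
    M = np.zeros((4, 4)); M[1, 3], M[3, 1] = t, -t; return M

a = expm(S1(0.5)) @ expm(S2(0.3))   # one rotation, then the other
b = expm(S2(0.3)) @ expm(S1(0.5))   # the opposite order
assert not np.allclose(a, b)        # finite rotations do not commute

c = expm(S1(0.5) + S2(0.3))         # one combined exponential: a third result
assert not np.allclose(a, c)
```

So summing the generators before exponentiating (rotating "simultaneously" about an intermediate axis) is itself a third, distinct rotation.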
 
