Transpose and Inverse of Lorentz Transform Matrix

In summary: Well, what you wrote is incorrect. You can't just "move" the indices on ##\Lambda## around like that because it's not a tensor. I'm not sure what you are doing with this equation:$$\delta^\alpha{}_\nu=g^{\alpha\mu}g_{\mu\nu}$$Are you trying to use the metric to raise indices? That's not how that works. The metric doesn't have matrix indices, i.e., one "up" and one "down" index; it has two "up" or two "down" indices, e.g. ##g^{\mu \nu}## or ##g_{\mu \nu}##. If you want to raise an index on a tensor, you have to contract the metric with that index.
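For reference, the standard index-raising rule the summary alludes to (textbook convention, spelled out here for a (0,2) tensor) is
$$T^{\mu}{}_{\nu} = g^{\mu\alpha} T_{\alpha\nu},$$
while the identity ##\delta^\alpha{}_\nu = g^{\alpha\mu} g_{\mu\nu}## merely says that ##g^{\alpha\mu}## and ##g_{\mu\nu}## are matrix inverses of each other; it does not let you move indices on ##\Lambda##, which is not a tensor.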
  • #71
vanhees71 said:
The Lorentz group is a subgroup of the symmetry group of Minkowski space as an affine space. The Lorentz transformations themselves don't form a vector space.
So is that the reason why we cannot establish a canonical isomorphism between Lorentz transformations (hence Lorentz matrices in an orthonormal basis) and the vector space of (1,1) tensors built on ##V## (i.e. the translation vector space)?
 
  • #72
I don't know how you came to this idea in the first place. What would it be good for? Just because you can represent 2nd-rank tensors as well as Lorentz transformations by ##4\times 4## matrices doesn't mean that there is a "canonical" (i.e., basis-independent) isomorphism between them.
 
  • #73
Sorry, I'm really having difficulty catching your point.
If the set of Lorentz transformations were itself a vector space, why couldn't we apply the 'argument' in the aforementioned link (first answer) to build such a canonical isomorphism?

I don't understand whether it really makes no sense, or whether, as agreed with you, there is simply no reason to do it.
 
  • #74
cianfa72 said:
To me it seems we are basically 'turning' the affine space into a vector space by selecting a given point in it.
That's one way of viewing what you are doing when you pick a particular coordinate chart on Minkowski spacetime, yes. But note that the choice of coordinates does not just fix an origin, it fixes the directions of the axes (four of them). So it does more than just turn the affine space into a vector space; it specifies a particular basis of the vector space.

cianfa72 said:
Hence the coordinates ##x^{\mu}## and ##x^{\prime \mu}## entering in ##x^{\prime \mu}={\Lambda^{\mu}}_{\rho} x^{\rho} + a^{\mu}## are really the coordinates for the 'translation' vector space part of the affine space definition
No, they aren't. The specification of coordinates, as above, specifies a set of basis directions as well as an origin.
 
  • #75
PeterDonis said:
So it does more than just turn the affine space into a vector space; it specifies a particular basis of the vector space.
And to follow on with this, Lorentz transformations (the ##\Lambda## part of the transformation equation you wrote) are changes of basis; they switch from one basis of the vector space to another, without changing the vector space itself (i.e., they rotate the coordinate axes without changing the origin).
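To spell out the change-of-basis statement (standard linear algebra, added here for reference): if the components change as ##x^{\prime\mu} = \Lambda^\mu{}_\nu x^\nu##, the basis vectors must change with the inverse matrix,
$$\vec{e}_\mu' = \vec{e}_\nu \, (\Lambda^{-1})^\nu{}_\mu,$$
so that the expansion ##x^{\prime\mu} \vec{e}_\mu' = x^\nu \vec{e}_\nu## describes the same geometric object in both bases.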
 
  • #76
PeterDonis said:
And to follow on with this, Lorentz transformations (the ##\Lambda## part of the transformation equation you wrote) are changes of basis; they switch from one basis of the vector space to another, without changing the vector space itself (i.e., they rotate the coordinate axes without changing the origin).
Sorry to bother you: is the operator ##+## inside
$$x^{\prime \mu}=a^{\prime \mu}+{\Lambda^{\mu}}_{\rho} x^{\rho}$$ actually a sum of two vectors, or is it a sum of a point in the affine space and the vector ##{\Lambda^{\mu}}_{\rho} x^{\rho}##? I believe the former.
 
  • #77
cianfa72 said:
Sorry to bother you: is the operator ##+## inside
$$x^{\prime \mu}=a^{\prime \mu}+{\Lambda^{\mu}}_{\rho} x^{\rho}$$ actually a sum of two vectors, or is it a sum of a point in the affine space and the vector ##{\Lambda^{\mu}}_{\rho} x^{\rho}##? I believe the former.
Strictly speaking, it's a sum of real numbers. The equation you write is actually four equations, one for each value of ##\mu##. Each equation is an equation in real numbers. The ##\Lambda## term on the RHS expands to a summation of four terms in each equation, one for each value of ##\rho##.

If you want to interpret the four equations together as one equation, then it would appear to be a vector equation, but the vector space is not well defined, since the equation includes a change of origin. One could interpret it as a combination of a change of basis in the vector space of vectors from the original (unprimed) origin, plus a function from the first vector space (unprimed) to a second vector space (of vectors from the primed origin) that preserves the basis, but that still doesn't match either of the things you described, and the operator ##+## in this case would have to be interpreted as a sloppy shorthand for combining two different and incommensurable operations.
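Written out for ##\mu = 0## (the other three components are analogous), the equation from post #76 reads
$$x^{\prime 0} = a^{\prime 0} + \Lambda^0{}_0 x^0 + \Lambda^0{}_1 x^1 + \Lambda^0{}_2 x^2 + \Lambda^0{}_3 x^3,$$
which is manifestly a single equation among real numbers.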
 
  • #78
PeterDonis said:
One could interpret it as a combination of a change of basis in the vector space of vectors from the original (unprimed) origin, plus a function from the first vector space (unprimed) to a second vector space (of vectors from the primed origin) that preserves the basis, but that still doesn't match either of the things you described, and the operator ##+## in this case would have to be interpreted as a sloppy shorthand for combining two different and incommensurable operations.
That was actually my point: to 'reduce' it to a sum of vectors belonging to the translation vector space part of the definition of an affine space.
 
  • #79
cianfa72 said:
That was actually my point: to 'reduce' it to a sum of vectors belonging to the translation vector space part of the definition of an affine space.
There is no such thing as "the translation vector space part of the affine space" that I'm aware of. An affine translation can be viewed as a map between vector spaces; that's what I was describing. But a map between vector spaces is not the same thing as a vector space.
 
  • #80
See for instance here https://en.m.wikipedia.org/wiki/Affine_space in the section Definition.
 
  • #81
cianfa72 said:
See for instance here https://en.m.wikipedia.org/wiki/Affine_space in the section Definition.
Ok, so if we consider the translations as a vector space acting on a set, then we have:

The set is a set of points, which we are viewing as points in spacetime.

The vector space is the space of translations, which are maps from the set of points into itself. Note that this vector space is not Minkowski spacetime! It's just ##\mathbb{R}^4##, or more precisely its additive component, considered as an additive vector space.

Now let's unpack the full transformation equation in that light:

$$
\left( x^\prime \right)^\mu = \Lambda^\mu{}_\rho x^\rho + a^\mu
$$

We can view ##a^\mu## as an element of the above vector space (a translation). However, the elements of that vector space, as above, are maps between points. So the ##+## operator in the above is shorthand for "take the point referred to by ##\Lambda^\mu{}_\rho x^\rho## and apply the translation ##a^\mu## to it". So it's still not an addition of vectors. It's a shorthand for the translation operation.
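To make this two-step reading concrete, here is a minimal numerical sketch (Python/NumPy; the boost and the numbers are made-up illustrations, not taken from the thread): the matrix part produces the coordinates of a point, and the translation then shifts that point.

```python
import numpy as np

# A sample boost (beta = 0.6 along x, units with c = 1) standing in for Lambda.
beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)
Lam = np.array([
    [gamma, -beta * gamma, 0.0, 0.0],
    [-beta * gamma, gamma, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])

x = np.array([5.0, 3.0, 0.0, 0.0])  # coordinates of a point P
a = np.array([1.0, 2.0, 0.0, 0.0])  # the translation a^mu

# Step 1: Lam @ x gives the coordinates of a point in the new basis.
# Step 2: "+ a" applies the translation to that point.
# Numerically both steps are plain array arithmetic, which is exactly why
# the notation tempts one to read "+" as an addition of vectors.
x_prime = Lam @ x + a
print(x_prime)
```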
 
  • #82
PeterDonis said:
The set is a set of points, which we are viewing as points in spacetime.

The vector space is the space of translations, which are maps from the set of points into itself. Note that this vector space is not Minkowski spacetime! It's just ##\mathbb{R}^4##, or more precisely its additive component, considered as an additive vector space.
Yes, surely.

PeterDonis said:
So the ##+## operator in the above is shorthand for "take the point referred to by ##\Lambda^\mu{}_\rho x^\rho## and apply the translation ##a^\mu## to it". So it's still not an addition of vectors. It's a shorthand for the translation operation.
Actually I prefer to think of that ##+## operator as the 'add' operation of the translation vector space (see post #68 by @vanhees71).

To me that sum actually 'implements'
$$x^{\prime \mu} \vec{e}_{\mu}'=\overrightarrow{O'P}=\overrightarrow{O'O}+\overrightarrow{OP}=\vec{a}+x^{\rho} \vec{e}_{\rho}=a^{\prime \mu} \vec{e}_{\mu}' + \vec{e}_{\mu}' {\Lambda^{\mu}}_{\rho} x^{\rho}.$$ This is really a sum of elements (vectors) belonging to the translation vector space.

For that reason I said that ##x^{\mu}## and ##x^{\prime \mu}## are in practice the coefficients (i.e. the components) of vectors from the translation vector space in the two given bases, respectively.
 
  • #83
cianfa72 said:
Sorry to bother you: is the operator ##+## inside
$$x^{\prime \mu}=a^{\prime \mu}+{\Lambda^{\mu}}_{\rho} x^{\rho}$$ actually a sum of two vectors, or is it a sum of a point in the affine space and the vector ##{\Lambda^{\mu}}_{\rho} x^{\rho}##? I believe the former.
It's a sum of vector components. It's good to distinguish between vectors (tensors) and their components. The vectors and tensors don't depend on the choice of any basis, while the components always do since to introduce the components you need a basis first to define them.
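To spell out the distinction being drawn here: the same vector expands in two bases as
$$\vec{v} = v^\mu \vec{e}_\mu = v^{\prime\mu} \vec{e}_\mu',$$
where the vector ##\vec{v}## is basis-independent, while the component sets ##v^\mu## and ##v^{\prime\mu}## are not.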
 
  • #84
vanhees71 said:
It's a sum of vector components. It's good to distinguish between vectors (tensors) and their components. The vectors and tensors don't depend on the choice of any basis, while the components always do since to introduce the components you need a basis first to define them.
Yes, definitely. In fact, basis vectors do not appear in that equation at all.
 
  • #85
cianfa72 said:
This is really a sum of elements (vectors) belonging to the translation vector space.
No, it's not. As I said in post #81, it's a shorthand for the action of a single vector in the translation vector space, namely ##a^\mu##, on the underlying set of points. The other terms in your equation are not vectors in the translation vector space; they are coordinate vectors, i.e., points in the underlying set (the spacetime) viewed as vectors (by choosing a particular origin). The translation ##a^\mu## shifts the origin from the unprimed one to the primed one.

A sum of elements in the translation vector space would be adding two different translations, say ##a^\mu## and ##b^\mu##, to get a new translation. That's not what the equation you wrote is saying.
 
  • #86
PeterDonis said:
The other terms in your equation are not vectors in the translation vector space; they are coordinate vectors, i.e., points in the underlying set (the spacetime) viewed as vectors (by choosing a particular origin).
Maybe I'm wrong, but doesn't choosing a particular point (i.e. the origin) in the affine space get you a vector space, namely the same 'translation' vector space?
 
  • #87
I'd rather say that by choosing an arbitrary point ##O## there's a one-to-one mapping between the points of the affine manifold and its vectors. This is part of the axioms defining what an affine manifold is: For any point ##P## there's a unique vector ##\vec{x}=\overrightarrow{OP}##, and also the other way around: for any vector ##\vec{x}## there's a unique point ##P## such that ##\overrightarrow{OP}=\vec{x}##. So you have a one-to-one mapping between points of the affine manifold and its vectors by choosing an arbitrary point ##O## as "the origin".

You can of course choose another origin ##O'## and you get another mapping between the points of the affine manifold and the vectors via ##\vec{x}'=\overrightarrow{O'P}##.

Since both mappings between points and vectors are bijective, there's a bijective map between the corresponding vectors, and these maps form a group, the group of translations:
$$\vec{x}'=\overrightarrow{O'P}=\overrightarrow{O'O}+\overrightarrow{OP}=\vec{x}+\vec{a}.$$
The translations form an Abelian group.
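Spelled out, composing two translations by ##\vec{a}## and ##\vec{b}## gives
$$\vec{x} \mapsto (\vec{x} + \vec{a}) + \vec{b} = \vec{x} + (\vec{a} + \vec{b}),$$
which is again a translation, and since ##\vec{a} + \vec{b} = \vec{b} + \vec{a}## the group is indeed Abelian.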
 
  • #88
cianfa72 said:
doesn't choosing a particular point (i.e. the origin) in the affine space get you a vector space, namely the same 'translation' vector space?
No. Go read the last paragraph of my post #85 again, carefully.
 
  • #89
cianfa72 said:
doesn't choosing a particular point (i.e. the origin) in the affine space get you a vector space, namely the same 'translation' vector space?
To complement my post #85, here's another way to look at it.

Suppose we leave out the translation ##a^\mu## and just consider the Lorentz transformation ##\left( x^\prime \right)^\mu = \Lambda^\mu{}_\rho x^\rho##. As I have already said, this is a change of basis in a vector space. What vector space?

If the answer to your question quoted above were "yes", the change of basis would be in the translation vector space. But that won't work. As GR makes clear, the vector space in which this change of basis is being made is the vector space of tangent vectors at a point. A tangent vector is not a translation; roughly speaking, its magnitude is a rate of change in the direction in which the vector points. The Lorentz transformation ##\Lambda## preserves magnitudes of vectors, so it preserves the rate of change while rotating the directions (which is what a change of basis does).

Note also that, strictly speaking, what I said above for Lorentz transformations only works for timelike and spacelike vectors. The action of a Lorentz transformation on null vectors is fundamentally different: it doesn't rotate them, it dilates them. (Physically, this corresponds to the relativistic Doppler shift.) But such an action would make no sense if it were interpreted as acting on the translation vector space.
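To see the dilation explicitly (a standard computation, added here for illustration): an ##x##-boost with speed ##\beta## acting on the null vector ##k^\mu = (1, 1, 0, 0)## gives
$$\Lambda^\mu{}_\nu k^\nu = \big( \gamma(1-\beta),\, \gamma(1-\beta),\, 0,\, 0 \big) = \sqrt{\frac{1-\beta}{1+\beta}}\, k^\mu,$$
i.e., the same null direction rescaled by the relativistic Doppler factor.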

In other words, translation vectors, such as ##a^\mu## in your equation, are not Minkowski spacetime vectors; there is no such thing as timelike, spacelike, or null with translation vectors. A translation vector is just a "bare" element of ##\mathbb{R}^4##, and its magnitude, i.e., the "distance" by which it shifts the origin of coordinates, is its "Euclidean" magnitude, not its "Minkowski" magnitude. (Trying to assign a Minkowski magnitude to a translation vector like ##a^\mu## would mean that, for example, the translation ##a^\mu = (1, 1, 0, 0)## would have a magnitude of zero, i.e., it would shift the origin by zero distance, which is obviously false.)
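Checking the arithmetic of that example: with signature ##(-,+,+,+)##, the Minkowski "magnitude" of ##a^\mu = (1, 1, 0, 0)## would be ##\eta_{\mu\nu} a^\mu a^\nu = -(1)^2 + (1)^2 = 0##, while its Euclidean magnitude is ##\sqrt{1^2 + 1^2} = \sqrt{2}##, a manifestly nonzero shift of the origin.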

However, the vectors ##x^\mu## and ##\left( x^\prime \right)^\mu## are Minkowski spacetime vectors, since the Lorentz transformation acts on them, whereas it can't act on the translation vector ##a^\mu##. You can't Lorentz transform a translation of the origin.
 
  • #90
Conventions differ. MTW, for instance, writes all transformation matrices (including but not limited to the Lorentz transformation) with the "northwest, southeast" index placement.

Other textbooks do not necessarily follow MTW's convention; while I haven't read Tung, I assume you're correct that he doesn't follow it. I wouldn't say MTW's way is the only way, but I like the way they do it: it's simple and convenient. MTW's convention is to always write transformation matrices as ##\Lambda^a{}_b##, with the first index up and the second down (the "northwest-southeast" placement). If you have a paper or book that doesn't follow this convention, you have to figure out what its author means. As I always use MTW's approach, I don't know how to help with other approaches.

What I think is important is knowing how to transform vectors, how to transform co-vectors (one-forms), and how to find the inverse of a transform. I can't think of anything else one needs to know. So if you know how to do these three things with whatever convention your source uses, you're set.

With MTW's approach, vectors transform as ##u^{\prime a} = \Lambda^a{}_b u^b##, one-forms/covectors transform as ##\omega_b = \Lambda^a{}_b \omega^{\prime}_a## (note that the same matrix carries the primed covector components back to the unprimed ones, so covector components effectively transform with the inverse matrix), and for ##\Lambda^{-1}## to be the inverse of ##\Lambda## we must have ##\Lambda^a{}_b (\Lambda^{-1})^b{}_c = \delta^a{}_c##, i.e. the composition of a transform and its inverse must be the identity transform. It also follows that ##(\Lambda^{-1})^a{}_b \Lambda^b{}_c = \delta^a{}_c##, i.e. the order of the composition of a transform and its inverse doesn't matter, since the left and right inverses of a square matrix coincide.
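Those composition identities are easy to check numerically. Here is a minimal sketch (Python/NumPy; the sample boost and speed are illustrative assumptions, not from the post):

```python
import numpy as np

def boost_x(beta):
    """Lorentz boost along x with speed beta, in units with c = 1."""
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    return np.array([
        [gamma, -beta * gamma, 0.0, 0.0],
        [-beta * gamma, gamma, 0.0, 0.0],
        [0.0, 0.0, 1.0, 0.0],
        [0.0, 0.0, 0.0, 1.0],
    ])

Lam = boost_x(0.6)
Lam_inv = boost_x(-0.6)  # the inverse boost just reverses the velocity

# Both orders of composition give the identity matrix.
assert np.allclose(Lam @ Lam_inv, np.eye(4))
assert np.allclose(Lam_inv @ Lam, np.eye(4))

# The metric-based relation Lambda^{-1} = eta Lambda^T eta also holds,
# with eta = diag(-1, 1, 1, 1) the Minkowski metric.
eta = np.diag([-1.0, 1.0, 1.0, 1.0])
assert np.allclose(eta @ Lam.T @ eta, Lam_inv)
print("All composition checks passed.")
```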
 

1. What is the Lorentz Transform Matrix?

The Lorentz Transform Matrix is a mathematical representation of the Lorentz transformation, which is a fundamental concept in special relativity. It describes how the measurements of space and time change for an observer moving at a constant velocity relative to another observer.

2. What is the purpose of transposing and inverting the Lorentz Transform Matrix?

The inverse of the Lorentz Transform Matrix converts measurements back from the "primed" frame to the "unprimed" frame, i.e. it undoes the original transformation. The transpose enters through the defining condition of a Lorentz transformation, ##\Lambda^T \eta \Lambda = \eta##, which ties the transpose to the inverse via the Minkowski metric ##\eta##.

3. How do you transpose the Lorentz Transform Matrix?

To transpose the Lorentz Transform Matrix, interchange its rows and columns: the entry in row ##i##, column ##j## of the transpose is the entry in row ##j##, column ##i## of the original matrix. Note that a pure boost matrix is symmetric, so it equals its own transpose.

4. How do you find the inverse of the Lorentz Transform Matrix?

For a boost along the ##x##-axis, the inverse is the boost with the velocity reversed (##\beta \to -\beta##):
$$\Lambda^{-1} = \begin{pmatrix} \gamma & \beta\gamma & 0 & 0 \\ \beta\gamma & \gamma & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix},$$
where ##\gamma = 1/\sqrt{1 - \beta^2}## is the Lorentz factor and ##\beta## is the velocity of the moving frame relative to the stationary frame (in units with ##c = 1##).
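More generally (a standard identity, added for completeness), the defining condition ##\Lambda^T \eta \Lambda = \eta## gives the inverse for any Lorentz transformation, not just an ##x##-boost:
$$\Lambda^{-1} = \eta^{-1} \Lambda^T \eta = \eta\, \Lambda^T \eta, \qquad \eta = \mathrm{diag}(-1, 1, 1, 1),$$
which is also the cleanest way to see how the transpose and the inverse are related.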

5. What are the applications of the Lorentz Transform Matrix?

The Lorentz Transform Matrix is used across physics, engineering, and astronomy. It is essential for understanding the effects of relativity on measurements of time and space, and it appears in calculations involving high-energy particles and electromagnetic fields.
