Confused by proof of Lorentz properties from invariance of interval

In summary, the author's confusion concerns the claim that two matrices must be equal because they give the same value when contracted with any vector via the dot product. As the counterexamples in the thread show, this is not true for arbitrary matrices; the later posts work out why it does hold for [itex]\Lambda^\top g \Lambda[/itex] and [itex]g[/itex].
  • #1
superbro
I've seen a few short proofs that if some transformation [itex]\Lambda[/itex] preserves the spacetime interval, then

[itex]\Lambda^\top g \Lambda = g[/itex]

where g is the spacetime metric.

They have all relied on an argument using some simple algebra to show that

[itex](\Lambda^\top g \Lambda) x \cdot x = g x \cdot x[/itex]

and since this is true for *any* x, it must be true that

[itex]\Lambda^\top g \Lambda = g[/itex]

This confuses me. I don't see how this "since it's true for any x" step is justified.

For example,

[itex]\left( \begin{array}{ccc} 0 & -1 \\ 1 & 0 \end{array} \right) x \cdot x = \left( \begin{array}{ccc} 0 & 0 \\ 0 & 0 \end{array} \right) x \cdot x[/itex]

for any x, and by the same argument, since it is true for *any* x, it must be true that the arrays are equal and -1 = 0 = 1. Did I just break math?
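
Writing [itex]x = (x^1, x^2)[/itex] and expanding the left-hand side shows why both sides vanish for every x:

[itex]\left( \begin{array}{cc} 0 & -1 \\ 1 & 0 \end{array} \right) x \cdot x = \left( \begin{matrix} -x^2 \\ x^1 \end{matrix} \right) \cdot \left( \begin{matrix} x^1 \\ x^2 \end{matrix} \right) = -x^2 x^1 + x^1 x^2 = 0[/itex]

so the two quadratic forms agree everywhere even though the matrices are different.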

Intuitively, I don't think I could do the same trick to produce a counterexample in 3+ dimensions, but this seems like kind of a subtle point to sweep under the rug in a proof.

Does anyone have a more detailed version of this argument that would (hopefully) make more sense to me?
 
  • #2
You cannot cancel a factor of zero from an equation, because doing so can give contradictions:

0 x = 0 y

therefore

x = y

for all x, y.
 
  • #3
If it's the zero matrix that is making you think of division by zero, it's also true for a 90 degree rotation in the opposite direction.

[itex]\left( \begin{array}{ccc} 0 & -1 \\ 1 & 0 \end{array} \right) x \cdot x = \left( \begin{array}{ccc} 0 & 1 \\ -1 & 0 \end{array} \right) x \cdot x[/itex]

Those matrices aren't equal, either.

To be clear, my confusion is that in several proofs the claim is made that two matrices are equal because of their relation via the dot product of any vector. It's clearly not true for all matrices as the above counterexample shows, so what properties of these particular matrices are exploited to make it true for them?
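
A quick numerical check of this counterexample (a minimal sketch, assuming numpy; the random test vectors are just for illustration):

[code]
import numpy as np

# Two matrices that are clearly not equal: a 90 degree rotation
# and the 90 degree rotation in the opposite direction.
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])
B = np.array([[0.0,  1.0],
              [-1.0, 0.0]])

# Yet (A x) . x equals (B x) . x for every x: both expressions
# are identically zero.
rng = np.random.default_rng(0)
for _ in range(5):
    x = rng.normal(size=2)
    print((A @ x) @ x, (B @ x) @ x)   # both print 0.0 for every x
[/code]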
 
  • #4
superbro said:
If it's the zero matrix that is making you think of division by zero, it's also true for a 90 degree rotation in the opposite direction.

[itex]\left( \begin{array}{ccc} 0 & -1 \\ 1 & 0 \end{array} \right) x \cdot x = \left( \begin{array}{ccc} 0 & 1 \\ -1 & 0 \end{array} \right) x \cdot x[/itex]

Those matrices aren't equal, either.

The equation you've written is an example of the length of vectors being invariant under spatial rotations, which form a group. The rotation matrices are unimodular with orthogonal rows and columns.

superbro said:
To be clear, my confusion is that in several proofs the claim is made that two matrices are equal because of their relation via the dot product of any vector.
That is confusing.
 
  • #5
Here's an example of this from "Problem Book in Quantum Field Theory" by Voja Radovanovic...

The square of the length of a four-vector, [itex]x[/itex] is [itex]x^2 = g_{\mu \nu} x^{\mu} x^{\nu}[/itex]. By substituting [itex]x'^{\mu} = \Lambda^{\mu}_{\rho}x^{\rho}[/itex] into the condition [itex]x'^2 = x^2[/itex] one obtains:

[itex]g_{\mu\nu}\Lambda^{\mu}_{\rho}\Lambda^{\nu} _{\sigma} x^{\rho}x^{\sigma} = g_{\rho\sigma}x^{\rho}x^{\sigma}[/itex]

Since (1.1) is valid for any vector [itex]x \in M_4[/itex], we get [itex]\Lambda^{\mu}_{\rho}g_{\mu\nu}\Lambda^{\nu} _{\sigma} = g_{\rho\sigma}[/itex].

He does specifically mention that x is in M4, which rules out my counterexamples, since they only work in 2D. I presume this is a well-known and obvious argument, given that it's the solution to the very first problem of the book, but it seems pretty subtle to me if I want to prove the result in more detail...
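
For reference, the quoted index equation is just the matrix statement from post #1 written out in components: the free indices ρ and σ label matrix entries, so

[itex]\Lambda^{\mu}_{\rho}\, g_{\mu\nu}\, \Lambda^{\nu}_{\sigma} = (\Lambda^\top g \Lambda)_{\rho\sigma}[/itex]

and the condition [itex]\Lambda^{\mu}_{\rho} g_{\mu\nu} \Lambda^{\nu}_{\sigma} = g_{\rho\sigma}[/itex] is exactly [itex]\Lambda^\top g \Lambda = g[/itex].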
 
  • #6
That seems logical. If [itex]\Lambda[/itex] is length preserving, then g, which defines the interval, should be invariant under it.
 
  • #7
Mentz114 said:
The equation you've written is an example of the length of vectors being invariant under spatial rotations, which form a group. The rotation matrices are unimodular with orthogonal rows and columns.

Actually, thinking about this a bit, I think it's different from a statement that vector length or the dot product is preserved.

For example, for any orthogonal matrices A and B, [itex](Ax) \cdot (Ax) = x \cdot x = (Bx) \cdot (Bx)[/itex], which is directly related to the length being preserved.

But [itex] (Ax) \cdot x = (Bx) \cdot x[/itex] fails for most pairs of orthogonal matrices. (As an example, check the identity against a 90 degree rotation: you end up with x * x = 0, which clearly can't be true for all x.)

In the 2D rotational case, for example, A and B would have to rotate x by the same absolute value of theta for their projections onto the original x to be equal. I can't think of any examples of nonequal rotations A and B in higher dimensions where this would hold, since any rotation A in 3D or more has an eigenvector along its axis (with A x * x = x * x) which is not an eigenvector of any other rotation B (so B x * x < x * x). I think this must ultimately be related to why the argument works, but there's a fair amount of linear algebra behind it.

Mentz114 said:
That seems logical. If [itex]\Lambda[/itex] is length preserving then g, which defines the interval should be invariant under it.

I agree it's an intuitive result, and I have no doubt it's true. I just think the argument used to get there (If A x * x = B x * x for all x in M4, then A = B) is not so obvious, and in fact it's not even true in M2.

Maybe it's just part of me getting used to physics to accept the more hand-wavy "it seems reasonable" kind of argument even if I don't fully understand the mathematics behind it. It tends to leave me terribly confused, though.
 
  • #8
If Ax=Bx then (Ax).x = (Bx).x. I don't know if it's a sufficient condition though.
 
  • #9
Well, I managed to figure this out after a bit more pondering. It's necessary to use the properties of g on both sides of the equation.

(My thought that maybe it worked because of some nutty property of rotations in dimensions greater than 2 was crap, since e.g. [itex]\left( \begin{array}{cccc} 0 & -1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & -1 \\ 0 & 0 & 1 & 0 \end{array} \right) x \cdot x = 0[/itex] for all x. [Stupid even dimensions.])
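
(Written out, calling that block matrix R, [itex]Rx \cdot x = -x^1 x^0 + x^0 x^1 - x^3 x^2 + x^2 x^3 = 0[/itex] for every x; the terms cancel in pairs.)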

I did it in more detail this way...

For arbitrary matrices A and B, where [itex](Ax) \cdot x = (Bx) \cdot x[/itex] for all x, you can show that their entries on the main diagonal must be equal by picking each unit vector in the basis and substituting it for x.

For example, picking [itex]x = \left( \begin{matrix} 1 \\ 0 \\ 0 \\ 0 \end{matrix} \right)[/itex] in [itex]A_{ij} x_i x_j = B_{ij} x_i x_j[/itex] yields [itex]A_{00} = B_{00}[/itex]

Now that we know the diagonals are equal, you can show, by picking each vector with exactly two 1's in it (e.g. (0, 1, 1, 0)), that [itex]A_{ij} + A_{ji} = B_{ij} + B_{ji}[/itex] for [itex]i \neq j[/itex].

For example, picking [itex]x = \left( \begin{matrix} 1 \\ 0 \\ 1 \\ 0 \end{matrix} \right)[/itex] in [itex]A_{ij} x_i x_j = B_{ij} x_i x_j[/itex] yields [itex]A_{00} + A_{02} + A_{20} + A_{22} = B_{00} + B_{02} + B_{20} + B_{22}[/itex] which implies [itex]A_{02} + A_{20} = B_{02} + B_{20}[/itex] since [itex]A_{nn} = B_{nn}[/itex] for all indices n.
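
In other words, only the symmetric part of a matrix is visible to the quadratic form: [itex]A_{ij} x_i x_j = \tfrac{1}{2}(A_{ij} + A_{ji}) x_i x_j[/itex], so the condition [itex]A_{ij} x_i x_j = B_{ij} x_i x_j[/itex] for all x pins down [itex]\tfrac{1}{2}(A + A^\top) = \tfrac{1}{2}(B + B^\top)[/itex] and nothing more.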

Now going from the general case to the special case where B = g: for [itex]i \neq j[/itex] we get [itex]A_{ij} + A_{ji} = g_{ij} + g_{ji} = 0[/itex], so the off-diagonal part of A is antisymmetric, [itex]A_{ij} = -A_{ji}[/itex].

Finally, we know that [itex]\Lambda^{\top} g \Lambda[/itex] is symmetric, since g is symmetric and [itex] (\Lambda^{\top} g \Lambda)^{\top} = \Lambda^{\top} g^{\top} \Lambda = \Lambda^{\top} g \Lambda[/itex].

The only way the off-diagonal entries can be both symmetric and antisymmetric is if they are zero, which matches the (zero) off-diagonal entries of g, and the diagonal entries were already shown to agree, so [itex]\Lambda^{\top} g \Lambda = g[/itex] as expected.
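
As a concrete numerical sanity check of the end result (a minimal numpy sketch; the boost with β = 0.6 along x is just an illustrative choice of [itex]\Lambda[/itex]):

[code]
import numpy as np

# Minkowski metric with signature (+, -, -, -)
g = np.diag([1.0, -1.0, -1.0, -1.0])

# A boost along x with beta = 0.6 (so gamma = 1.25), as an example Lambda
beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)
Lam = np.array([
    [ gamma,        -gamma * beta, 0.0, 0.0],
    [-gamma * beta,  gamma,        0.0, 0.0],
    [ 0.0,           0.0,          1.0, 0.0],
    [ 0.0,           0.0,          0.0, 1.0],
])

# Lambda^T g Lambda reproduces g (up to floating point rounding)...
print(np.allclose(Lam.T @ g @ Lam, g))        # True

# ...and therefore the interval of any x is preserved
x = np.array([2.0, 1.0, -3.0, 0.5])
print(x @ g @ x, (Lam @ x) @ g @ (Lam @ x))   # same number twice
[/code]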

The same proof seems like it would work for any diagonal matrix substituted for g, so there's probably some slightly more sophisticated linear algebra that would make short work of this through some decomposition or something.
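
One standard shortcut along those lines is polarization, which works for any symmetric matrix in place of g (diagonal or not). If [itex]A_{ij} x_i x_j = B_{ij} x_i x_j[/itex] for all x, apply the same identity to x + y and cancel the pure-x and pure-y terms:

[itex](A_{ij} + A_{ji})\, x_i y_j = (B_{ij} + B_{ji})\, x_i y_j[/itex]

for all x and y, so (taking x and y to be basis vectors) the symmetric parts of A and B agree entry by entry. When A and B are both symmetric, as [itex]\Lambda^\top g \Lambda[/itex] and [itex]g[/itex] are, that already forces A = B.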
 

1. What is the Lorentz invariance of interval?

The Lorentz invariance of the interval is a fundamental property of spacetime in special relativity. It states that the spacetime interval between two events is the same for all inertial observers, regardless of their relative motion, even though the individual time and space separations generally are not.
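
With the common (+, −, −, −) sign convention, the invariant quantity for two events separated by [itex](\Delta t, \Delta x, \Delta y, \Delta z)[/itex] is [itex]\Delta s^2 = c^2 \Delta t^2 - \Delta x^2 - \Delta y^2 - \Delta z^2[/itex]; every inertial observer computes the same value of [itex]\Delta s^2[/itex], even though the individual [itex]\Delta t[/itex] and [itex]\Delta x[/itex] differ between frames.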

2. How is the Lorentz invariance of interval related to the proof of Lorentz properties?

The proof of the Lorentz properties is based on the invariance of the interval, which in turn reflects the principle that the laws of physics are the same for all inertial observers. Requiring that the interval between any two events is the same in every inertial frame leads to the condition [itex]\Lambda^\top g \Lambda = g[/itex] and hence to the Lorentz transformations, which describe how space and time coordinates change between reference frames.

3. What are the Lorentz transformations?

The Lorentz transformations are a set of equations that describe how space and time coordinates are related between two observers in different reference frames. They are based on the principle of Lorentz invariance and are a fundamental component of special relativity.
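
For example, for relative motion with speed [itex]v[/itex] along the x-axis they take the familiar form [itex]t' = \gamma\,(t - v x/c^2)[/itex], [itex]x' = \gamma\,(x - v t)[/itex], [itex]y' = y[/itex], [itex]z' = z[/itex], with [itex]\gamma = 1/\sqrt{1 - v^2/c^2}[/itex]; written as a matrix [itex]\Lambda[/itex], this is exactly a transformation satisfying [itex]\Lambda^\top g \Lambda = g[/itex].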

4. How does the Lorentz invariance of interval impact our understanding of spacetime?

The Lorentz invariance of the interval is a crucial element in our understanding of spacetime. It unifies space and time into a single four-dimensional spacetime and describes how measurements of distance and duration change for objects moving at high speeds. It also forms the basis for special relativity and, in its local form, for the theory of general relativity.

5. What are some applications of the Lorentz invariance of interval?

The Lorentz invariance of interval has many practical applications in modern science and technology. It is used in the design and functioning of particle accelerators, GPS systems, and other technologies that rely on precise measurements of time and space. It also plays a crucial role in our understanding of the behavior of particles at high energies and speeds.
