Solving SR Invariance: Minkowski Metric, Poincaré Transformation, Index Notation

  • #1
binbagsss
I am following some lecture notes looking at the invariance of the Poincaré transformation acting on flat spacetime with the Minkowski metric:

##x'^{\mu} = \Lambda^{\mu}{}_{\alpha} x^{\alpha} + a^{\mu}## [1], where ##a^{\mu}## is a constant vector and ##\Lambda^{\mu}{}_{\nu}## is such that it leaves the Minkowski metric ##g_{\alpha\beta}## invariant.

By invariance I have:

##ds^{2}=g_{\mu\nu}dx^{\mu}dx^{\nu}=ds'^{2}=g_{\mu'\nu'}dx'^{\mu}dx'^{\nu}## [2]

from [1], ##dx'^{\mu} = \Lambda^{\mu}{}_{\alpha}\,dx^{\alpha}##.

Plugging this into [2] I have:

##ds^{2}=g_{\mu\nu}dx^{\mu}dx^{\nu}=g_{\mu\nu}\Lambda^{\mu}{}_{m}\,dx^{m}\,\Lambda^{\nu}{}_{n}\,dx^{n}##

and then we can simply cancel ##g_{\mu\nu}##. This was my working, and I can't see where it is flawed.

I am also able to follow the lecture notes, where the only difference is the choice of indices, and get a different expression, as follows:

##g_{\mu\nu}dx^{\mu}dx^{\nu}=g_{\alpha\beta}dx'^{\alpha}dx'^{\beta}##

##=g_{\alpha\beta}\Lambda^{\alpha}{}_{\theta}\,dx^{\theta}\,\Lambda^{\beta}{}_{\phi}\,dx^{\phi}##

Now I rename ##\theta=\mu##, ##\phi=\nu## and get

##g_{\mu\nu}=g_{\alpha\beta}\Lambda^{\alpha}{}_{\mu}\Lambda^{\beta}{}_{\nu}## as in my lecture notes.

Have I broken any index-notation rules in the first version, such that my metrics cancel when they should not? I'm struggling to see where I've gone wrong. I know there is a rule that an index shouldn't appear more than twice, but I thought this applied only within a single term on one side of an equation, i.e. within a 'multiplicative' term, not across separately summed terms... Many thanks in advance.
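(For concreteness, here is the same manipulation in matrix notation, as a compact sketch: treating ##dx## as a column vector, and assuming the metric components are the same in both frames, $$ds^2 ~=~ dx^T g\,dx ~=~ dx'^T g\,dx' ~=~ dx^T \left(\Lambda^T g\,\Lambda\right) dx ~,$$ so what the invariance actually asserts is ##g = \Lambda^T g\,\Lambda##; the factors common to both sides are the ##dx##'s, not ##g##.)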
 
  • #2
binbagsss said:
Plugging this into [2] I have:

##ds^{2}=g_{\mu\nu}dx^{\mu}dx^{\nu}=g_{\mu\nu}\Lambda^{\mu}{}_{m}\,dx^{m}\,\Lambda^{\nu}{}_{n}\,dx^{n}##
Huh? How did you get that? Did you forget some primes somewhere?

BTW, in your eqn[2], it may be better to write ##g'_{\mu\nu}## rather than ##g_{\mu'\nu'}##.

and then we can simply cancel ##g_{\mu\nu}##. This was my working, and I can't see where it is flawed.
What do you mean by "we can simply cancel ##g_{\mu\nu}##"? You can't "cancel" ##g_{\mu\nu}## here.
 
  • #3
strangerep said:
Huh? How did you get that? Did you forget some primes somewhere?

BTW, in your eqn[2], it may be better to write ##g'_{\mu\nu}## rather than ##g_{\mu'\nu'}##.

I had ##g_{\mu\nu}## twice, without any primes, as I changed from one frame to the other, consistent with my comment in the original post that the Minkowski metric is unchanged. (I'm pretty sure this is what my lecturer said...)

strangerep said:
What do you mean by "we can simply cancel ##g_{\mu\nu}##"? You can't "cancel" ##g_{\mu\nu}## here.

Because it appears on both sides of the equation?
Oh, thinking of the metric tensor as a matrix, you can't just do this, can you...?
 
  • #4
binbagsss said:
I had ##g_{\mu\nu}## twice, without any primes, as I changed from one frame to the other, consistent with my comment in the original post that the Minkowski metric is unchanged. (I'm pretty sure this is what my lecturer said...)
I'm pretty sure what your lecturer meant is something like: $$\Lambda^T g' \Lambda ~=~ g ~,$$(where I've switched to matrix notation instead of indices).

I.e., you're supposed to apply the ##\Lambda## transformation to the metric, and then find out that the components of the metric are in fact unchanged if ##\Lambda## is a Lorentz transformation, i.e., if ##\Lambda## satisfies the pseudo-orthogonality condition ##\Lambda^T g \Lambda = g##.

Because it appears on both sides of the equation?
Oh, thinking of the metric tensor as a matrix, you can't just do this, can you...?
That's not the reason. Rather, it's because you're performing a double summation over the indices. (Write out the equation using explicit ##\Sigma## summation notation if this isn't clear.)
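For instance, spelling out the expression above with explicit sums in 3+1 dimensions (a spelled-out sketch of the same equation), both sides are single numbers, with no free index left over: $$\sum_{\mu=0}^{3}\sum_{\nu=0}^{3} g_{\mu\nu}\,dx^\mu dx^\nu ~=~ \sum_{\mu=0}^{3}\sum_{\nu=0}^{3}\sum_{m=0}^{3}\sum_{n=0}^{3} g_{\mu\nu}\,\Lambda^{\mu}{}_{m}\,\Lambda^{\nu}{}_{n}\,dx^{m} dx^{n} ~,$$ so there is no common factor ##g_{\mu\nu}## multiplying both sides that could be divided out.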
 
  • #5
strangerep said:
I'm pretty sure what your lecturer meant is something like: $$\Lambda^T g' \Lambda ~=~ g ~,$$(where I've switched to matrix notation instead of indices).

I.e., you're supposed to apply the ##\Lambda## transformation to the metric, and then find out that the components of the metric are in fact unchanged if ##\Lambda## is a Lorentz transformation, i.e., if ##\Lambda## satisfies the pseudo-orthogonality condition ##\Lambda^T g \Lambda = g##.

My lecturer just stated, as I did above, that the Minkowski metric is unchanged.

However, I do see that, starting from my incorrect expression in the OP and renaming the indices ##\mu \leftrightarrow m## and ##\nu \leftrightarrow n##, I obtain the result in the lectures. So it was just a matter of renaming dummy indices, and not 'cancelling' the ##g_{\mu\nu}##...?

strangerep said:
That's not the reason. Rather, it's because you're performing a double summation over the indices. (Write out the equation using explicit ##\Sigma## summation notation if this isn't clear.)

Isn't matrix multiplication a summation? Hence, representing the tensors as matrices, isn't it the same argument as what you've just said?
 
  • #6
binbagsss said:
However, I do see that, starting from my incorrect expression in the OP and renaming the indices ##\mu \leftrightarrow m## and ##\nu \leftrightarrow n##, I obtain the result in the lectures. So it was just a matter of renaming dummy indices, and not 'cancelling' the ##g_{\mu\nu}##...?
You're making me try to guess what's in your mind, which is rarely reliable. Perhaps you should write it out again here (as equations) properly so I can be sure.

Isn't matrix multiplication a summation? Hence, representing the tensors as matrices, isn't it the same argument as what you've just said?
To "cancel" a matrix from both sides of an equation, you must multiply both sides by the inverse matrix. To do that, you need a free index. E.g., suppose you have an equation like:$$ A^\mu_{~\nu} v^\nu ~=~ z^\mu ~,$$where ##A## is an invertible matrix and ##v,z## are (column) vectors. Then you can multiply both sides by the inverse matrix ##A^{-1}##, as follows:$$(A^{-1})^\lambda_{~\mu} A^\mu_{~\nu} v^\nu ~=~ (A^{-1})^\lambda_{~\mu} z^\mu ~,$$$$\delta^\lambda_{~\nu} v^\nu ~=~ (A^{-1})^\lambda_{~\mu} z^\mu ~,$$$$ v^\lambda ~=~ (A^{-1})^\lambda_{~\mu} z^\mu ~.$$ But to do that, we needed a free index on both sides (in this case "##\mu##"). But in the expressions for ##ds^2## there are no free indices, only dummy ones.
 
  • #7
strangerep said:
You're making me try to guess what's in your mind, which is rarely reliable. Perhaps you should write it out again here (as equations) properly so I can be sure.

Okay, so literally ##\mu \leftrightarrow m## and ##\nu \leftrightarrow n##.
So instead of ##g_{\mu\nu}dx^{\mu}dx^{\nu}=g_{\mu\nu}\Lambda^{\mu}{}_{m}dx^{m}\Lambda^{\nu}{}_{n}dx^{n}##
I have ##g_{\mu\nu}dx^{\mu}dx^{\nu}=g_{mn}\Lambda^{m}{}_{\mu}dx^{\mu}\Lambda^{n}{}_{\nu}dx^{\nu}##
Then, as my lecture notes did, the ##dx^{\mu}dx^{\nu}## cancel from each side of the equation to give the result I was after.

HOWEVER, I've just realized I have no idea why/how you can cancel the ##dx^{\mu}dx^{\nu}##. Since, as you said, the expression consists only of dummy indices (that is, the ##dx^{\mu}## and ##dx^{\nu}## are four-vectors being summed over together with ##\Lambda## and ##g## on the RHS, whereas they're summed over with just ##g## on the LHS), surely you can't just cancel...? I would have thought the exact same reason just discussed above for why ##g## doesn't cancel holds here...
 
  • #8
You don't cancel them. But the expression must be true for arbitrary ##dx^\mu## and ##dx^\nu##. The only way that can be true is if the coefficients of each term are the same, which gives you a result that looks as though you cancelled the infinitesimal vectors.
 
  • #9
Ibix said:
You don't cancel them. But the expression must be true for arbitrary ##dx^\mu## and ##dx^\nu##. The only way that can be true is if the coefficients of each term are the same, which gives you a result that looks as though you cancelled the infinitesimal vectors.

Ahh, of course! Thank you!
 
  • #10
@binbagsss: I sense that you're still not getting it properly, so I guess I'll have to lay it out for you...

##ds## is a scalar, hence independent of the coordinate system you're using, hence ##ds^2 ~=~ ds'^2##. Writing out both sides of this equation in full, we have:$$ g_{\mu\nu} dx^\mu dx^\nu ~=~ g'_{\mu\nu} dx'^\mu dx'^\nu ~~~~~~ [1]~,$$where the primes denote components of quantities referred to the primed coordinate system, etc.

The coordinate transformation between the dx's is ##dx'^\beta = \Lambda^\beta_{~\alpha} dx^\alpha##. Applying this to the RHS of eqn[1] gives $$g_{\mu\nu} dx^\mu dx^\nu ~=~ g'_{\mu\nu} \Lambda^\mu_{~\alpha} dx^\alpha \Lambda^\nu_{~\beta}dx^\beta ~~~~~~ [2] ~.$$ Rearranging the RHS, and renaming dummy indices on the LHS, this becomes $$g_{\alpha\beta} dx^\alpha dx^\beta ~=~ \Lambda^\mu_{~\alpha} g'_{\mu\nu} \Lambda^\nu_{~\beta}dx^\alpha dx^\beta ~~~~~~ [3] ~,$$ which implies $$\left( g_{\alpha\beta} - \Lambda^\mu_{~\alpha} g'_{\mu\nu} \Lambda^\nu_{~\beta}\right) dx^\alpha dx^\beta ~=~ 0 ~~~~~~ [4] ~.$$ Now, Ibix's point in post #8 kicks in, allowing us to deduce: $$g_{\alpha\beta} - \Lambda^\mu_{~\alpha} g'_{\mu\nu} \Lambda^\nu_{~\beta} ~=~ 0 ~~~~~~ [5] ~.$$Notice that at this stage we still have a primed ##g'_{\mu\nu}##, since we've only considered an arbitrary coordinate transformation so far.
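One small step in going from eqn[4] to eqn[5] is worth making explicit (a brief aside): since ##dx^\alpha dx^\beta## is symmetric under ##\alpha \leftrightarrow \beta##, eqn[4] holding for arbitrary ##dx## strictly forces only the symmetric part of the bracket to vanish. But both ##g_{\alpha\beta}## and ##\Lambda^\mu_{~\alpha} g'_{\mu\nu} \Lambda^\nu_{~\beta}## are already symmetric in ##\alpha,\beta## (because the metric tensors are symmetric), so the bracket equals its own symmetric part, and eqn[5] follows as stated.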

If we now also demand that the transformation be such that the metric components are left unchanged, i.e., if we require ##g_{\alpha\beta} = g'_{\alpha\beta}##, then eqn[5] implies a condition on the matrix ##\Lambda##. I'll leave it to you to deduce that, in matrix notation, this condition is ##\Lambda^T g \Lambda = g## (the defining "pseudo-orthogonality" condition, which is satisfied by the usual Lorentz transformations).
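As a concrete check (a minimal 1+1 dimensional sketch, assuming the ##g = \mathrm{diag}(-1,1)## convention and units with ##c = 1##), take a boost with velocity ##\beta## and ##\gamma = 1/\sqrt{1-\beta^2}##: $$\Lambda ~=~ \begin{pmatrix} \gamma & -\gamma\beta \\ -\gamma\beta & \gamma \end{pmatrix} ~, \qquad \Lambda^T g\,\Lambda ~=~ \begin{pmatrix} -\gamma^2(1-\beta^2) & 0 \\ 0 & \gamma^2(1-\beta^2) \end{pmatrix} ~=~ g ~,$$ using ##\gamma^2(1-\beta^2) = 1##. Note that this ##\Lambda## is symmetric but not orthogonal (##\Lambda^T \neq \Lambda^{-1}## unless ##\beta = 0##), which is why the condition involves ##g## rather than reading ##\Lambda^T = \Lambda^{-1}##.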
 

1. What is SR Invariance?

SR Invariance, or Special Relativity Invariance, is a fundamental principle in physics that states that the laws of physics should be the same for all observers in uniform motion. This means that the laws of physics should not depend on the observer's frame of reference, as long as the observer is moving at a constant velocity.

2. What is the Minkowski Metric?

The Minkowski Metric, also known as the spacetime metric, is a mathematical tool used in special relativity to measure distances and intervals in 4-dimensional spacetime. It combines the concepts of space and time into a single entity, and is represented by a 4x4 matrix with specific coefficients.
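Explicitly, in one common sign convention (and in units with ##c = 1##): $$(g_{\mu\nu}) ~=~ \mathrm{diag}(-1,\,1,\,1,\,1) ~, \qquad ds^2 ~=~ g_{\mu\nu}\,dx^\mu dx^\nu ~=~ -dt^2 + dx^2 + dy^2 + dz^2 ~.$$ (The opposite overall sign, ##\mathrm{diag}(1,-1,-1,-1)##, is equally common.)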

3. What is a Poincaré Transformation?

A Poincaré Transformation is a mathematical transformation that describes how coordinates and measurements in one inertial frame of reference relate to another inertial frame of reference. It includes translations, rotations, and boosts in spacetime. In special relativity, these transformations are used to maintain the invariance of physical laws.
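In the notation used in the thread above, a Poincaré transformation acts on coordinates as $$x'^\mu ~=~ \Lambda^\mu_{~\nu}\, x^\nu + a^\mu ~,$$ where ##\Lambda## is a Lorentz transformation (a spacetime rotation and/or boost satisfying ##\Lambda^T g \Lambda = g##) and ##a^\mu## is a constant translation.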

4. What is Index Notation?

Index notation, also known as Einstein notation, is a mathematical notation used to represent tensors (multi-dimensional arrays) in a compact and convenient way. It involves using indices to represent the components of a tensor, and the Einstein summation convention where repeated indices imply summation.
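For example, under the summation convention a repeated index (one upper, one lower) is summed over, so $$A^\mu B_\mu ~\equiv~ \sum_{\mu=0}^{3} A^\mu B_\mu ~=~ A^0 B_0 + A^1 B_1 + A^2 B_2 + A^3 B_3 ~.$$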

5. How is SR Invariance related to Minkowski Metric, Poincare Transformation, and Index Notation?

SR Invariance is the fundamental principle that guides the use of the Minkowski Metric, Poincare Transformation, and Index Notation in special relativity. These mathematical tools are used to describe and maintain the invariance of physical laws under different frames of reference, and play a crucial role in understanding and solving problems in special relativity.
