Transforming Contravariant & Covariant Vectors


Discussion Overview

The discussion revolves around the transformation of contravariant and covariant vectors under Lorentz transformations. Participants explore the mathematical relationships and manipulations involved in deriving one transformation equation from another, focusing on the implications of lowering and raising indices using the metric tensor.

Discussion Character

  • Technical explanation
  • Mathematical reasoning
  • Debate/contested

Main Points Raised

  • One participant presents the Lorentz transformation equations for contravariant and covariant vectors and seeks clarification on deriving one from the other using the metric tensor.
  • Another participant states the relation between contravariant and covariant components, suggesting that this relation is crucial for the transformation process.
  • There is a discussion about the manipulation of indices and the application of the metric tensor, with participants expressing confusion over the correct steps to take.
  • Some participants assert that the transformation matrix for lowered components can be expressed in terms of the metric tensor and the Lorentz transformation matrix, but note that the properties of the Lorentz matrix complicate this relationship.
  • One participant questions the nature of the Lorentz transformation matrix, discussing its representation as a matrix and its relation to rank-2 tensors, while others clarify that not all matrices represent tensors due to differences in transformation properties.

Areas of Agreement / Disagreement

Participants show varying levels of comfort with index manipulation and with the properties of the Lorentz transformation matrix. The individual steps of the derivation are not initially clear to everyone, and they need to be spelled out explicitly before the confusion is resolved.

Contextual Notes

Participants stress the importance of applying the metric tensor correctly and caution against treating the Lorentz transformation matrix as an ordinary tensor, since its two indices refer to different coordinate systems. The discussion reflects lingering uncertainty about the handling of indices and the relationships between the different components.

dyn
Hi.
The book I am using gives the following equations for the Lorentz transformations of contravariant and covariant vectors:

##x'^\mu = \Lambda^\mu_\nu x^\nu## (1)

##x'_\mu = \Lambda_\mu^\nu x_\nu## (2)

where the two Lorentz transformation matrices are inverses of each other. I am trying to get equation (2) from equation (1), but if I lower the index on the LHS of (1) using the metric ##g_{\rho\mu}## and apply it to both sides of (1), I get

##x'_\rho = \Lambda_{\rho\nu} x^\nu##

Then I'm stuck, because how can I lower the ##\nu## on ##x^\nu## when ##\nu## already appears twice on the RHS, so I can't contract with a metric carrying ##\nu##?
Thanks
 
The relation between the contravariant and covariant components is ##x^\nu = g^{\nu\mu}x_\mu##.
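For example (assuming the Minkowski metric ##g^{\nu\mu} = \mathrm{diag}(1,-1,-1,-1)##, as is standard in this context), raising an index just flips the sign of the spatial components: ##x^0 = x_0##, ##x^1 = -x_1##, ##x^2 = -x_2##, ##x^3 = -x_3##.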
 
dyn said:
Hi.
The book I am using gives the following equations for the Lorentz transformations of contravariant and covariant vectors:

##x'^\mu = \Lambda^\mu_\nu x^\nu## (1)

##x'_\mu = \Lambda_\mu^\nu x_\nu## (2)

where the two Lorentz transformation matrices are inverses of each other. I am trying to get equation (2) from equation (1), but if I lower the index on the LHS of (1) using the metric ##g_{\rho\mu}## and apply it to both sides of (1), I get

##x'_\rho = \Lambda_{\rho\nu} x^\nu##

Then I'm stuck, because how can I lower the ##\nu## on ##x^\nu## when ##\nu## already appears twice on the RHS, so I can't contract with a metric carrying ##\nu##?
Thanks

You have

##x'^\mu = \Lambda^\mu_\nu x^\nu \Rightarrow g^{\mu \rho} x'_\rho = \Lambda^\mu_\nu g^{\nu \lambda} x_\lambda##

Now, you operate on both sides with the ##g## to get:

##x'_\rho = g_{\mu \rho} \Lambda^\mu_\nu g^{\nu \lambda} x_\lambda##

So the transformation matrix for the lowered components is ##g_{\mu \rho} \Lambda^\mu_\nu g^{\nu \lambda}##

The final step is to realize that ##g_{\mu \rho} \Lambda^\mu_\nu g^{\nu \lambda} = \Lambda_\rho^\lambda##, the matrix appearing in your equation (2). That might seem obvious, but it's actually not, because ##\Lambda## is not a tensor; its two indices refer to different coordinate systems. So it's not immediately obvious that you can raise and lower its indices the way you can with a tensor.
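If it helps, this can be checked numerically. Here is a minimal NumPy sketch (the boost speed ##v = 0.6## is an assumed example value) verifying that ##g \Lambda g^{-1}## equals the inverse transpose of ##\Lambda##, i.e. that covariant components transform with the inverse matrix:

```python
import numpy as np

# Minkowski metric, signature (+,-,-,-)
eta = np.diag([1.0, -1.0, -1.0, -1.0])

# Boost along x with v = 0.6 (an assumed example value), so gamma = 1.25
v = 0.6
gamma = 1.0 / np.sqrt(1.0 - v**2)
Lam = np.array([[ gamma,   -gamma*v, 0.0, 0.0],
                [-gamma*v,  gamma,   0.0, 0.0],
                [ 0.0,      0.0,     1.0, 0.0],
                [ 0.0,      0.0,     0.0, 1.0]])

# Defining property of a Lorentz transformation: Lam^T eta Lam = eta
assert np.allclose(Lam.T @ eta @ Lam, eta)

# Transformation matrix for the lowered components: g Lam g^{-1}
M = eta @ Lam @ np.linalg.inv(eta)

# It equals the inverse transpose of Lam: covariant components
# transform with the inverse Lorentz matrix
assert np.allclose(M, np.linalg.inv(Lam).T)
```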
 
stevendaryl said:
You have

##x'^\mu = \Lambda^\mu_\nu x^\nu \Rightarrow g^{\mu \rho} x'_\rho = \Lambda^\mu_\nu g^{\nu \lambda} x_\lambda##

Now, you operate on both sides with the ##g## to get:

##x'_\rho = g_{\mu \rho} \Lambda^\mu_\nu g^{\nu \lambda} x_\lambda##
Thanks for your reply. Can you explain this step explicitly for me? What do I multiply each side of the equation by, in terms of indices?
 
You are not multiplying each side by anything. You are inserting the relation given in #2, which @stevendaryl did for you explicitly in the first line of #3.
 
I don't understand that step
 
It is just an insertion of a known relation.
 
Orodruin said:
It is just an insertion of a known relation.

I presume you mean the relation ##g^{\alpha\beta} g_{\alpha\gamma} = \delta^\beta_\gamma##?
So that means I multiply each side of the equation by ##g_{\mu\alpha}##?
 
dyn said:
Thanks for your reply. Can you explain this step explicitly for me? What do I multiply each side of the equation by, in terms of indices?

Let me write it without any indices. I think you will find that there is only one way to put in indices so that it makes sense.

  1. ##x' = \Lambda x##
  2. Let's write: ##x = g^{-1} g x##
  3. Substituting this expression for ##x## into equation 1: ##x' = \Lambda g^{-1} g x##
  4. Operate on 3 using ##g##: ##g x' = g \Lambda g^{-1} g x##
  5. Now, let's define the combination: ##\widetilde{x} \equiv g x##
  6. Also, ##\widetilde{x'} \equiv g x'##
  7. And ##\widetilde{\Lambda} \equiv g \Lambda g^{-1}##
  8. So the covariant transformation law is: ##\widetilde{x'} = \widetilde{\Lambda} \widetilde{x}##
There is really only one way that you can insert indices into the equations 1-8 that makes sense.
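For instance, one consistent way to restore the indices in steps 1-4 (the letter choices are arbitrary; only the up/down placement and the pairing of summed indices are forced):

  1. ##x'^\mu = \Lambda^\mu_\nu x^\nu##
  2. ##x^\nu = g^{\nu\lambda} g_{\lambda\sigma} x^\sigma##
  3. ##x'^\mu = \Lambda^\mu_\nu g^{\nu\lambda} g_{\lambda\sigma} x^\sigma = \Lambda^\mu_\nu g^{\nu\lambda} x_\lambda##
  4. ##g_{\rho\mu} x'^\mu = g_{\rho\mu} \Lambda^\mu_\nu g^{\nu\lambda} x_\lambda##, i.e. ##x'_\rho = \left(g_{\rho\mu} \Lambda^\mu_\nu g^{\nu\lambda}\right) x_\lambda##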
 
  • #10
Thanks for that, but it's the placement of indices in each step that confuses me. Was the following statement correct?
dyn said:
I presume you mean the relation ##g^{\alpha\beta} g_{\alpha\gamma} = \delta^\beta_\gamma##?
So that means I multiply each side of the equation by ##g_{\mu\alpha}##?
 
  • #11
dyn said:
Thanks for that, but it's the placement of indices in each step that confuses me. Was the following statement correct?

As I said, there really is only one way to do the indices so that it makes sense.

But yes: you start with ##x'^\mu = \Lambda^\mu_\nu x^\nu##, multiply both sides by ##g_{\mu \alpha}## (and sum over ##\mu##), and then rewrite ##x^\nu## as ##g^{\nu \lambda} x_\lambda##.
 
  • #12
stevendaryl said:
The final step is to realize that ##g_{\mu \rho} \Lambda^\mu_\nu g^{\nu \lambda} = \Lambda^\lambda_\rho##. That might seem obvious, but it's actually not, because ##\Lambda## is not a tensor; the two indices refer to different coordinate systems. So it's not immediately obvious that you can raise and lower indices the way you could with a tensor.
I always see ##\Lambda## as a 4x4 matrix, and I always thought a matrix was a tensor of rank 2. I see ##\Lambda^\mu_\nu## as the entry in row ##\mu## and column ##\nu## of that matrix, but I'm unsure how the rows and columns relate to the inverse ##\Lambda_\mu^\nu##.
 
  • #13
dyn said:
I always thought a matrix was a tensor of rank 2
The individual components of a rank-2 tensor can be represented as a matrix, but not all matrices are representations of rank-2 tensors.

The components of a tensor transform in a particular way when you change coordinate systems; not all matrices have that property. Also when you represent a tensor as a matrix, you lose the distinction between contravariant and covariant components.
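To make the "same array, different transformation law" point concrete, here is a small NumPy sketch (the boost and the random tensor components are assumed example values): the same 4x4 array can hold ##T^{\mu\nu}## or ##T^\mu_\nu##, but the two transform by different rules.

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski metric
v = 0.6                                   # assumed boost speed
gamma = 1.0 / np.sqrt(1.0 - v**2)
Lam = np.array([[ gamma,   -gamma*v, 0.0, 0.0],
                [-gamma*v,  gamma,   0.0, 0.0],
                [ 0.0,      0.0,     1.0, 0.0],
                [ 0.0,      0.0,     0.0, 1.0]])

T_upup = np.random.rand(4, 4)    # components T^{mu nu}
T_updown = T_upup @ eta          # components T^mu_nu (second index lowered)

# Both sit in identical 4x4 arrays, but they transform differently:
T_upup_new = Lam @ T_upup @ Lam.T                    # T'^{mu nu}
T_updown_new = Lam @ T_updown @ np.linalg.inv(Lam)   # T'^mu_nu

# Consistency check: lowering an index after transforming agrees
# with transforming the already-lowered components
assert np.allclose(T_upup_new @ eta, T_updown_new)
```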
 
