Matrix notation for Lorentz transformations

SUMMARY

This discussion focuses on the transformation of contravariant and covariant vectors using matrix notation in the context of Lorentz transformations. The key equations presented include the transformation rules for vectors, specifically ##(v_{new})^i## and ##(v_{new})_i##, along with the use of the metric tensor ##g## for raising and lowering indices. The author proposes a set of rules for interpreting index positions in matrix equations, emphasizing the importance of horizontal index placement and the implications for tensor transformations. The discussion concludes that while multiple interpretations yield correct results, clarity in notation is crucial for understanding the underlying concepts.

PREREQUISITES
  • Understanding of contravariant and covariant vectors
  • Familiarity with Lorentz transformations and their properties
  • Knowledge of metric tensors and their role in raising/lowering indices
  • Basic proficiency in matrix algebra and notation
NEXT STEPS
  • Study the properties of Lorentz transformations in detail
  • Learn about the implications of metric tensors in tensor calculus
  • Explore the concept of index notation and its applications in physics
  • Investigate the relationship between matrix transposition and tensor transformations
USEFUL FOR

Physicists, mathematicians, and students studying relativity, tensor calculus, or advanced linear algebra who seek to deepen their understanding of vector transformations and matrix notation.

decerto
I'm having some confusion with index notation and how it works with contravariance/covariance.

$$(v_{new})^i=\frac{\partial (x_{new})^i}{\partial (x_{old})^j}(v_{old})^j$$

$$(v_{new})^i=J^i{}_j(v_{old})^j$$

$$(v_{new})_i=\frac{\partial (x_{old})^j}{\partial (x_{new})^i}(v_{old})_j$$

$$(v_{new})_i=(J^{-1})^j{}_i(v_{old})_j$$

So these are the standard rules for transforming contravariant and covariant vectors.
Now, if we want to convert this into a matrix equation, is there an exact set of rules regarding index position?

For example, for the covariant transformation I can transpose the matrix, which swaps the index order (I'm not sure how this makes sense), and this gives the right answer if we treat covariant vectors as columns.

Or I can move the ##(v_{old})_j## to the left of the ##J^{-1}## and treat it as a row vector; this also gives the right answer, and in this interpretation I never need to consider what a transpose is.

Both of these interpretations give the correct answers, but they seem to assign different meanings to upper vs. lower indices and to horizontal index order.

Is there a best way to think about this? Which way makes the most sense for raising/lowering indices with metric tensors and for transforming higher-order tensors?
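As a quick numerical illustration of the two interpretations described above (not from the original post; the boost speed ##\beta = 0.6## and the sample components are just assumed values), a short numpy sketch shows that "transpose ##J^{-1}## and use a column vector" and "keep ##J^{-1}## as-is and multiply by a row vector from the left" produce the same components:

```python
import numpy as np

beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)

# Jacobian J^i_j of the new coordinates w.r.t. the old ones
# (a boost along x, used here as an assumed example).
J = np.array([[ gamma,        -gamma * beta, 0, 0],
              [-gamma * beta,  gamma,        0, 0],
              [ 0,             0,            1, 0],
              [ 0,             0,            0, 1]])
J_inv = np.linalg.inv(J)

v_old = np.array([1.0, 2.0, 3.0, 4.0])  # covariant components (v_old)_j

# Interpretation 1: transpose J^{-1} and treat v as a column vector.
v_new_1 = J_inv.T @ v_old

# Interpretation 2: treat v as a row vector multiplying J^{-1} from the left.
v_new_2 = v_old @ J_inv

print(np.allclose(v_new_1, v_new_2))  # True: same components either way
```

Both expressions compute the same sum ##\sum_j (J^{-1})^j{}_i (v_{old})_j##; the difference is purely in how the bookkeeping is drawn on paper.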
 
I suggest the following rules:
1. If the metric tensor is denoted by ##g##, the matrix with ##g_{ij}## on row i, column j is denoted by ##g## as well.

2. The component on row i, column j of the matrix ##g^{-1}## is denoted by ##g^{ij}##.

3. For most other matrices X, the element on row i, column j is denoted by ##X^i{}_j##.

4. For n×1 matrices v, you don't write the column index. In other words, you write ##v^i## instead of ##v^i{}_1##.

5. For 1×n matrices v, if you use them at all (it's probably best if you don't), you can't drop the row index from the notation. The notation ##v_i## is already reserved for ##g_{ij}v^j=g_{ij}v^j{}_1##, so it can't be used for ##v^1{}_i##.

6. In products, you transpose matrices if you have to, to ensure that each index that's summed over appears once upstairs and once downstairs.​
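Rules 1–4 can be checked numerically. The sketch below (my own illustration, assuming the Minkowski metric with signature ##+---##) lowers an index with ##g## and raises it back with ##g^{-1}##:

```python
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])   # g_{ij} on row i, column j (rule 1)
g_inv = np.linalg.inv(g)               # components g^{ij} (rule 2)

v_up = np.array([1.0, 2.0, 3.0, 4.0])  # column vector with components v^i (rule 4)
v_down = g @ v_up                      # v_i = g_{ij} v^j

print(np.allclose(g_inv @ v_down, v_up))  # True: raising undoes lowering
```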
Example: A Lorentz transformation is a linear function ##\Lambda:\mathbb R^4\to\mathbb R^4## such that ##\Lambda^T\eta\Lambda=\eta##. The component on row ##\mu##, column ##\nu## of this equation is
$$\eta_{\mu\nu}=(\Lambda^T)^\mu{}_\rho \eta_{\rho\sigma}\Lambda^\sigma{}_\nu =\Lambda^\rho{}_\mu\eta_{\rho\sigma}\Lambda^\sigma{}_\nu.$$ The intermediate step is usually not written out, because ##\rho## appears twice downstairs.
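The defining condition can be verified numerically for a concrete ##\Lambda## (an assumed boost along x with ##\beta = 0.6##; any Lorentz transformation would do):

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])

beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)
L = np.array([[ gamma,        -gamma * beta, 0, 0],
              [-gamma * beta,  gamma,        0, 0],
              [ 0,             0,            1, 0],
              [ 0,             0,            0, 1]])

# Lambda^T eta Lambda = eta, i.e. the component equation
# eta_{mu nu} = Lambda^rho_mu eta_{rho sigma} Lambda^sigma_nu.
print(np.allclose(L.T @ eta @ L, eta))  # True
```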

Note that the horizontal positions of the indices are important, because of weird things like this:
$$(\Lambda^{-1})^\mu{}_\nu = (\eta^{-1}\Lambda^T\eta)^\mu{}_\nu =\eta^{\mu\rho} \Lambda^\sigma{}_\rho \eta_{\sigma\nu} =\Lambda_\nu{}^\mu.$$
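The identity ##\Lambda^{-1} = \eta^{-1}\Lambda^T\eta## follows directly from ##\Lambda^T\eta\Lambda=\eta## (multiply both sides by ##\eta^{-1}## on the left), and can be checked with the same assumed boost:

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])

beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)
L = np.array([[ gamma,        -gamma * beta, 0, 0],
              [-gamma * beta,  gamma,        0, 0],
              [ 0,             0,            1, 0],
              [ 0,             0,            0, 1]])

# eta^{-1} Lambda^T eta reproduces the matrix inverse of Lambda.
print(np.allclose(np.linalg.inv(L), np.linalg.inv(eta) @ L.T @ eta))  # True
```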
 
