From Einstein Summation to Matrix Notation: Why?


Discussion Overview

The discussion revolves around the transformation properties of the Minkowski metric under Lorentz transformations, specifically the transition from Einstein summation notation to matrix notation. Participants explore the implications of index placement, the correctness of equations, and the significance of maintaining clarity in tensor notation.

Discussion Character

  • Technical explanation
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • One participant asserts the equation ##\eta_{\alpha'\beta'}=\Lambda^\mu_{\alpha'} \Lambda^\nu_{\beta'} \eta_{\alpha\beta}## is valid but questions the reasoning behind it and seeks further discussion on the topic.
  • Another participant corrects the initial equation, stating that it should include repeated indices on the right-hand side, suggesting it should be ##\eta_{\alpha'\beta'}=\Lambda^\mu_{\alpha'} \Lambda^\nu_{\beta'} \eta_{\mu\nu}##.
  • Some participants emphasize the importance of proper index placement in Ricci notation, arguing that it clarifies the roles of indices in matrix formalism.
  • One participant highlights the significance of index ordering in the context of the Riemann tensor and its applications, illustrating how incorrect placements can lead to misinterpretations of results.
  • The original poster expresses confusion about index placement as presented in the textbook (Schutz), noting that while the author does care about it, no explanation is given.
  • A later reply suggests that the clarity of index notation is crucial for understanding transformations and computations involving tensors.

Areas of Agreement / Disagreement

The initial disagreement over the correctness of the equation is resolved quickly: the original poster accepts that the right-hand side needs repeated (summed) indices. Participants broadly agree that careful vertical and horizontal index placement matters, while noting that many textbooks use it without explaining why.

Contextual Notes

Limitations include potential misunderstandings stemming from different interpretations of index notation and the lack of comprehensive explanations in some textbooks. The discussion also reflects varying levels of familiarity with tensor calculus and matrix representations.

epovo
TL;DR
How does the change from Einstein summation convention to matrix multiplication work?
I know that if ##\eta_{\alpha'\beta'}=\Lambda^\mu_{\alpha'} \Lambda^\nu_{\beta'} \eta_{\alpha\beta}##
then the matrix equation is
$$ (\eta) = (\Lambda)^T\eta\Lambda $$
I have painstakingly verified that this is indeed true, but I am not sure why, or what the rules are (e.g. why does the ##(\eta)## end up in the middle?). My textbook uses this without proof.
Where can I find a discussion of these topics?
 
Sorry that should have been ##\eta^{\alpha\beta}## at the end
 
Recall the definition of matrix multiplication: $$(AB)_{ij} = \sum_k A_{ik}B_{kj}.$$ Thus you need the index contracted over to be the second index of the first factor and the first index of the second factor. Swapping the indices is the same as taking the transpose, since ##(A^T)_{ij} = A_{ji}##.
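Spelled out for a contraction over the first slot of the first factor (a small worked step, using nothing beyond the definitions above):
$$\sum_k A_{ki}B_{kj} = \sum_k (A^{\text{T}})_{ik}B_{kj} = (A^{\text{T}}B)_{ij},$$
which is exactly the pattern that puts a transpose on the left-hand factor.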
 
That's what I missed! Thank you very much
 
epovo said:
I know that if ##\eta_{\alpha'\beta'}=\Lambda^\mu_{\alpha'} \Lambda^\nu_{\beta'} \eta_{\alpha\beta}##
This is not correct. You should have repeated (summation) indices on the RHS. It should be:
$$\eta_{\alpha'\beta'}=\Lambda^\mu_{\alpha'} \Lambda^\nu_{\beta'} \eta_{\mu\nu}$$Although, in fact, there is no need to have primes on the new indices. You could equally have:
$$\eta_{\alpha\beta}=\Lambda^\mu_{\alpha} \Lambda^\nu_{\beta} \eta_{\mu\nu}$$
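To see the summation convention at work, write out one component explicitly. In the ##(-,+,+,+)## convention used by Schutz (this worked line is just an illustration), the ##\alpha=\beta=0## component reads
$$\eta_{00}=\sum_{\mu=0}^{3}\sum_{\nu=0}^{3}\Lambda^{\mu}{}_{0}\,\Lambda^{\nu}{}_{0}\,\eta_{\mu\nu}=-\left(\Lambda^{0}{}_{0}\right)^{2}+\left(\Lambda^{1}{}_{0}\right)^{2}+\left(\Lambda^{2}{}_{0}\right)^{2}+\left(\Lambda^{3}{}_{0}\right)^{2}=-1,$$
with ##\mu## and ##\nu## summed away and only the free indices surviving.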
 
PeroK said:
This is not correct. You should have repeated (summation) indices on the RHS. It should be:
$$\eta_{\alpha'\beta'}=\Lambda^\mu_{\alpha'} \Lambda^\nu_{\beta'} \eta_{\mu\nu}$$Although, in fact, there is no need to have primes on the new indices. You could equally have:
$$\eta_{\alpha\beta}=\Lambda^\mu_{\alpha} \Lambda^\nu_{\beta} \eta_{\mu\nu}$$
You are right. My mistake
 
epovo said:
You are right. My mistake
In fact, this is quite important. One definition of a Lorentz transformation is that it leaves the Minkowski metric invariant. Your equation is not the general transformation rule, under which the components of the metric tensor ##\eta## would be transformed into different components in a new coordinate system. Instead, it's a statement that the Lorentz transformation leaves the components unchanged.
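For contrast, the general rule for a (0,2) tensor under an arbitrary change of coordinates ##x \to x'## is
$$g_{\alpha'\beta'}=\frac{\partial x^{\mu}}{\partial x^{\alpha'}}\frac{\partial x^{\nu}}{\partial x^{\beta'}}\,g_{\mu\nu},$$
and in general the primed components differ from the unprimed ones. The equation above makes the stronger statement that, when the Jacobian is a Lorentz transformation ##\Lambda##, the metric components come out numerically identical to what they were before.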
 
Another very important point is to be precise with the Ricci (index) notation. It is important to keep both the vertical (co- versus contravariant tensor components) AND the horizontal position of the indices clean, i.e., you should write ##{\Lambda^{\mu}}_{\nu}## rather than ##\Lambda_{\nu}^{\mu}##. The latter notation does not uniquely define the object (in this case a Lorentz-transformation matrix).
In the former notation it's clear that in the matrix formalism ##\mu## (1st index) labels the rows and ##\nu## (2nd index) the columns. Then your equation reads
$$\eta_{\rho \sigma} {\Lambda^{\rho}}_{\mu} {\Lambda^{\sigma}}_{\nu}=\eta_{\mu \nu},$$
and it's clear that, by the definition of the matrix product, which follows the rule "row ##\times## column", it must read
$$\hat{\Lambda}^{\text{T}} \hat{\eta} \hat{\Lambda}=\hat{\eta}.$$
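As a quick numerical sanity check of that matrix equation (a minimal sketch, not from the thread; it assumes numpy, a boost along x with ##v=0.6## in units ##c=1##, and the ##(-,+,+,+)## signature):

import numpy as np

# Minkowski metric with signature (-, +, +, +)
eta = np.diag([-1.0, 1.0, 1.0, 1.0])

# Lorentz boost along x with v = 0.6 (c = 1)
v = 0.6
gamma = 1.0 / np.sqrt(1.0 - v**2)
L = np.array([[ gamma,   -gamma*v, 0.0, 0.0],
              [-gamma*v,  gamma,   0.0, 0.0],
              [ 0.0,      0.0,     1.0, 0.0],
              [ 0.0,      0.0,     0.0, 1.0]])

# Matrix form of eta_{rho sigma} Lam^rho_mu Lam^sigma_nu = eta_{mu nu}
print(np.allclose(L.T @ eta @ L, eta))                          # True

# The same contraction written index-by-index with einsum
print(np.allclose(np.einsum('rs,rm,sn->mn', eta, L, L), eta))   # True

The einsum line is literally the index equation; the line above it is its matrix repackaging.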
 
vanhees71 said:
Another very important point is to be precise with the Ricci (index) notation. It is important to keep both the vertical (co- versus contravariant tensor components) AND the horizontal position of the indices clean, i.e., you should write ##{\Lambda^{\mu}}_{\nu}## rather than ##\Lambda_{\nu}^{\mu}##. The latter notation does not uniquely define the object (in this case a Lorentz-transformation matrix).
In the former notation it's clear that in the matrix formalism ##\mu## (1st index) labels the rows and ##\nu## (2nd index) the columns. Then your equation reads
$$\eta_{\rho \sigma} {\Lambda^{\rho}}_{\mu} {\Lambda^{\sigma}}_{\nu}=\eta_{\mu \nu},$$
and it's clear that, by the definition of the matrix product, which follows the rule "row ##\times## column", it must read
$$\hat{\Lambda}^{\text{T}} \hat{\eta} \hat{\Lambda}=\hat{\eta}.$$
This question of keeping the horizontal position of the indices has been baffling me for some time. My textbook (Schutz) does it, but it never explains why. So I figured this was what those spaces were for, but since Schutz makes the occasional error, I was confused. This is the first time I've seen it spelled out. Thanks for this.
 
Indeed, there are tons of textbooks which don't care about the horizontal placement of indices. Obviously the authors are too lazy to put the brackets in the LaTeX or, even better, to use the tensor package ;-).
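For anyone typing this up, the staggering costs almost nothing (a sketch; the \tensor macro assumes \usepackage{tensor} in the preamble):

% Staggering by hand, using braces or an empty group to reserve the slot:
${\Lambda^{\mu}}_{\nu}$    % or equivalently $\Lambda^{\mu}{}_{\nu}$ and $R^{a}{}_{bcd}$

% With the tensor package:
\usepackage{tensor}    % in the preamble
$\tensor{\Lambda}{^{\mu}_{\nu}}$    $\tensor{R}{^{a}_{bcd}}$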
 
To be fair to Schutz, he does care about index placement. But as @epovo says, I don't think he explains why. This is part of why I feel he needed a better editor.

Carroll's lecture notes do explain, in chapter 2 I think, if you want to download and have a read.

An example of the importance of index ordering is something like the Riemann tensor, ##R^a{}_{bcd}##. This completely captures the curvature of spacetime, and one of the things it does is give you the change in a vector if you transport it around a small loop. So if you have a vector ##V^b## and two small displacement vectors ##dx^c## and ##dy^d##, then the change in ##V^b## from parallel transporting around the parallelogram defined by the small displacements is ##R^a{}_{bcd}V^b\,dx^c\,dy^d##. So you can see that each index has a particular function in this application: the first is the output vector, the second matches to the input vector, and the third and fourth match to the loop-defining vectors. Mixing up the inputs (e.g. ##R^a{}_{bcd}V^d\,dx^b\,dy^c##) will give you an answer, but not to the question you think you are asking.

The problem with index placement arises when you start raising and lowering indices. I mean, ##R^a_{bcd}## would be unambiguous if it were only ever that same index that was raised. But I can compute ##g_{ea}g^{fc}R^a{}_{bcd}=R_{eb}{}^f{}_d##. And if I ignore index placement and write ##R^f_{ebd}##, which tensor slot did I have raised again...?
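To make that ambiguity concrete: without the horizontal staggering, a symbol like ##R^f_{ebd}## could be read as either
$$g^{fa}R_{aebd}\qquad\text{or}\qquad g^{fc}R_{ebcd},$$
which are in general different tensors. The staggered forms ##R^f{}_{ebd}## and ##R_{eb}{}^f{}_d## distinguish them at a glance.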
 