Gold Member
Summary:: Does the textbook Misner, Thorne and Wheeler have all I need to understand tensors in order to learn GR?

Does the textbook Misner, Thorne and Wheeler have all I need to understand tensors in order to learn GR? I have that textbook but never went through it. Tensors greatly intimidate me with all the indices and symbols and summing this way and that and all the terminology. Is there a better source? Thanks.

P.S. I always liked that the book MTW is its own pun....

PeterDonis
Mentor
2020 Award
Does the textbook Misner, Thorne and Wheeler have all I need to understand tensors in order to learn GR?
Yes. However:

Tensors greatly intimidate me with all the indices and symbols and summing this way and that and all the terminology.
As you seem to realize, MTW is probably not the source you want to learn tensor calculus from if you want to avoid this problem.

I would suggest Sean Carroll's online lecture notes on GR as a gentler introduction to the subject.

Gold Member
Yes. However:

As you seem to realize, MTW is probably not the source you want to learn tensor calculus from if you want to avoid this problem.

I would suggest Sean Carroll's online lecture notes on GR as a gentler introduction to the subject.
Ok, thanks.

Orodruin
Staff Emeritus
Homework Helper
Gold Member
Summary:: Does the textbook Misner, Thorne and Wheeler have all I need to understand tensors in order to learn GR?

Tensors greatly intimidate me with all the indices and symbols and summing this way and that and all the terminology.
Do not break the first commandment: https://www.physicsforums.com/insights/the-10-commandments-of-index-expressions-and-tensor-calculus/ ;)

Do not despair. It does look daunting in the beginning, but once you get the hang of it there are only a few basic things to keep in mind.

vanhees71
Gold Member
The greatest obstacle for me is obeying Commandment 2 ;-)). Another important commandment: carefully obey the horizontal order of indices, not only the vertical one! I've seen many documents that offer in principle a good approach to the topic but are rendered useless by failing to write the indices in a well-defined horizontal order where it is needed.

Orodruin
Staff Emeritus
Homework Helper
Gold Member
The greatest obstacle for me is obeying Commandment 2 ;-))
Yet it has been the bane of many a student's calculations.

Demystifier
Gold Member
The greatest obstacle for me is obeying Commandment 2 ;-)). Another important commandment: carefully obey the horizontal order of indices, not only the vertical one! I've seen many documents that offer in principle a good approach to the topic but are rendered useless by failing to write the indices in a well-defined horizontal order where it is needed.
Especially when one needs to be careful about both at the same time. For instance, the Lorentz-transformation tensor obeys ##(\Lambda^{-1})^{\alpha}_{\;\,\beta}=\Lambda^{\;\,\alpha}_{\beta}##.

Especially when one needs to be careful about both at the same time. For instance, the Lorentz-transformation tensor obeys ##(\Lambda^{-1})^{\alpha}_{\;\,\beta}=\Lambda^{\;\,\alpha}_{\beta}##.
Can you recommend a textbook that covers this clearly and explains what the α and β refer to in terms of rows and columns on either side of the equation? I have always found this confusing.

Demystifier
Gold Member
Can you recommend a textbook that covers this clearly and explains what the α and β refer to in terms of rows and columns on either side of the equation? I have always found this confusing.
I don't know a textbook, but it's easy. The Lorentz group is the orthogonal group SO(1,3). An orthogonal matrix, by definition, obeys ##\Lambda^{-1}=\Lambda^T##, where ##T## denotes the transpose. The rest should be easy.

Orodruin
Staff Emeritus
Homework Helper
Gold Member
I don't know a textbook, but it's easy. The Lorentz group is the orthogonal group SO(1,3). An orthogonal matrix, by definition, obeys ##\Lambda^{-1}=\Lambda^T##, where ##T## denotes the transpose. The rest should be easy.
This is true for SO(n). It is not true for SO(1,n). In particular, it fails for the standard Lorentz boost in the x-direction, since there ##\Lambda = \Lambda^T## but ##\Lambda \neq \Lambda^{-1}##.
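Explicitly, in 1+1 dimensions (the 3+1 case just pads this with an identity block):
$$\Lambda = \begin{pmatrix} \gamma & -\gamma\beta \\ -\gamma\beta & \gamma \end{pmatrix} = \Lambda^{\text{T}}, \qquad \Lambda^{-1} = \begin{pmatrix} \gamma & \gamma\beta \\ \gamma\beta & \gamma \end{pmatrix} \neq \Lambda^{\text{T}},$$
so for any ##\beta \neq 0## the boost is symmetric but is certainly not its own inverse.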

vanhees71
Gold Member
I don't know a textbook, but it's easy. The Lorentz group is the orthogonal group SO(1,3). An orthogonal matrix, by definition, obeys ##\Lambda^{-1}=\Lambda^T##, where ##T## denotes the transpose. The rest should be easy.
It's not that easy! You have ##(\eta^{\mu \nu})=\mathrm{diag}(1,-1,-1,-1)## (I'm in the west-coast camp, but there's no big difference when using the east-coast convention). An ##\mathbb{R}^{4 \times 4}## matrix is called a Lorentz-transformation matrix if
$${\Lambda^{\mu}}_{\rho} {\Lambda^{\nu}}_{\sigma} \eta_{\mu \nu}=\eta_{\rho \sigma}.$$
In matrix notation (note that here the index positioning gets lost, so you have to keep in mind that the matrix ##\hat{\Lambda}## has a first upper and a second lower index while the matrix ##\hat{\eta}## has two lower indices) this reads
$$\hat{\Lambda}^{\text{T}} \hat{\eta} \hat{\Lambda}=\hat{\eta}.$$
Since ##\hat{\eta}^2=\hat{1}## we have
$$\hat{\eta} \hat{\Lambda}^{\text{T}} \hat{\eta}=\hat{\Lambda}^{-1}.$$
In index notation, restoring the correct index placement (and noting that ##\hat{\eta}^{-1}=(\eta^{\mu \nu})=\hat{\eta}=(\eta_{\mu \nu})##), this reads
$${(\hat{\Lambda}^{-1})^{\mu}}_{\nu} = \eta_{\nu \sigma} {\Lambda^{\sigma}}_{\rho} \eta^{\rho \mu}={\Lambda_{\nu}}^{\mu}.$$
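For readers who want to check the chain above concretely, here is a minimal numerical sketch in Python/numpy (the boost speed ##\beta = 0.6## is an arbitrary illustrative choice):

```python
import numpy as np

# Minkowski metric, west-coast convention: diag(1, -1, -1, -1)
eta = np.diag([1.0, -1.0, -1.0, -1.0])

# Standard Lorentz boost along x; beta = 0.6 is an illustrative choice
beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)
Lam = np.array([
    [gamma,         -gamma * beta, 0.0, 0.0],
    [-gamma * beta,  gamma,        0.0, 0.0],
    [0.0,            0.0,          1.0, 0.0],
    [0.0,            0.0,          0.0, 1.0],
])

# Defining property: Lambda^T eta Lambda = eta
assert np.allclose(Lam.T @ eta @ Lam, eta)

# Derived inverse: eta Lambda^T eta = Lambda^{-1}
assert np.allclose(eta @ Lam.T @ eta, np.linalg.inv(Lam))

# ...whereas the naive orthogonality relation Lambda^{-1} = Lambda^T fails
assert not np.allclose(Lam.T, np.linalg.inv(Lam))

print("All identities verified.")
```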

My confusion is over the following:
If ##{\Lambda^{\mu}}_{\nu}## represents the entry in the ##\Lambda## matrix in row ##\mu## and column ##\nu##, what does ##{\Lambda^{\nu}}_{\mu}## represent in terms of rows and columns?
In other words, in ##{\Lambda^{\mu}}_{\nu}## the row is indicated by ##\mu##; but is that because it is the top index or the first index going from left to right?

Demystifier
Gold Member
In other words, in ##{\Lambda^{\mu}}_{\nu}## the row is indicated by ##\mu##; but is that because it is the top index or the first index going from left to right?
It is because it's first from the left. For the entries of a matrix, the difference between upper and lower indices doesn't make sense. For example, for the metric tensor ##g## with matrix entries ##g_{\mu\nu}##, the quantities called ##g^{\mu\nu}## are really the matrix entries of ##g^{-1}##. So in matrix language, ##g_{\mu\nu}## and ##g^{\mu\nu}## are entries of different matrices, one being the inverse of the other.

vanhees71
Gold Member
The difference between upper and lower indices makes a lot of sense. It's a mnemonic notation for knowing whether the corresponding tensor components have to be transformed covariantly (lower indices) or contravariantly (upper indices).

The drawback of the matrix notation is that this information gets hidden, i.e., you always have to keep in mind which of the indices in the matrix is an upper or a lower index. Another drawback is that you can't handle tensors of rank 3 and higher. The advantage is a somewhat shorter notation.

Orodruin
Staff Emeritus
Homework Helper
Gold Member
It is because it's first from the left. For the entries of a matrix, the difference between upper and lower indices doesn't make sense. For example, for the metric tensor ##g## with matrix entries ##g_{\mu\nu}##, the quantities called ##g^{\mu\nu}## are really the matrix entries of ##g^{-1}##. So in matrix language, ##g_{\mu\nu}## and ##g^{\mu\nu}## are entries of different matrices, one being the inverse of the other.
It should be pointed out that this is true specifically for the metric tensor. It is not generally true that ##T^{\mu\nu}## are the components of the inverse of ##T_{\mu\nu}##. However, for the metric tensor, it holds that
$$g^{\mu\nu}g_{\nu\sigma} v^\sigma = g^{\mu\nu} v_\nu = v^\mu$$
for all ##v## and so ##g^{\mu\nu}g_{\nu\sigma} = \delta^\mu_\sigma##.
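As a simple illustration, for a diagonal metric ##(g_{\mu\nu}) = \mathrm{diag}(1, -a^2)## one gets ##(g^{\mu\nu}) = \mathrm{diag}(1, -a^{-2})##: genuinely different entries. The Minkowski metric, for which ##(\eta_{\mu\nu})## and ##(\eta^{\mu\nu})## happen to coincide numerically, is a special case.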

If ##{\Lambda^{\mu}}_{\nu}## represents the entry in the ##\Lambda## matrix in row ##\mu## and column ##\nu##, what does ##{\Lambda^{\nu}}_{\mu}## represent in terms of rows and columns?
In other words, in ##{\Lambda^{\mu}}_{\nu}## the row is indicated by ##\mu##; but is that because it is the top index or the first index going from left to right?
If ##\Lambda## is a 4x4 matrix and ##{\Lambda^{\mu}}_{\nu}## represents the entry in row ##\mu## and column ##\nu##, what does ##{\Lambda^{\nu}}_{\mu}## represent?

I am looking for a textbook that explains this kind of tensor/index notation, and what it means, as clearly as possible.

ohwilleke
Gold Member
I have a copy of MTW, and it is not at all my favorite for independently studying something like tensor calculus from scratch.

PeterDonis
Mentor
2020 Award
If ##\Lambda## is a 4x4 matrix and ##{\Lambda^{\mu}}_{\nu}## represents the entry in row ##\mu## and column ##\nu##, what does ##{\Lambda^{\nu}}_{\mu}## represent?
The entry in row ##\nu## and column ##\mu##.

However, ##\Lambda## is not a tensor; it's a coordinate transformation represented as a matrix. They're not the same thing.

I am looking for a textbook that explains this kind of tensor/index notation, and what it means, as clearly as possible.
First you need to understand the distinction between tensors and coordinate transformations. You have been talking about ##\Lambda##, meaning the Lorentz transformation, as though it were a tensor; but as noted above, it isn't.

PeterDonis
Mentor
2020 Award
the Lorentz-transformation tensor
A coordinate transformation isn't a tensor. They're different kinds of objects, even though similar notation is used for both.

Demystifier
Gold Member
A coordinate transformation isn't a tensor.
A Lorentz transformation is a Lorentz tensor. Under Lorentz coordinate transformations, the components ##\Lambda^{\mu}_{\;\nu}## transform as components of a tensor. But of course, it's not a tensor under general coordinate transformations.
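Spelled out, "transforms as a (1,1) tensor" means that under a Lorentz coordinate transformation with matrix ##L## one has
$$\Lambda'^{\mu}_{\;\,\nu} = L^{\mu}_{\;\,\alpha}\, \Lambda^{\alpha}_{\;\,\beta}\, (L^{-1})^{\beta}_{\;\,\nu},$$
i.e., one factor of ##L## for the upper index and one factor of ##L^{-1}## for the lower index.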

Orodruin
Staff Emeritus
Homework Helper
Gold Member
A coordinate transformation isn't a tensor. They're different kinds of objects, even though similar notation is used for both.
I was about to write something similar, but this depends on whether you consider an active or passive Lorentz transformation. For a passive transformation, the coordinate transformation certainly is not a tensor as it is just a relabelling of components. However, for an active transformation, a Lorentz transformation is indeed a linear map from vectors in Minkowski space to vectors in Minkowski space, which is the very definition of a (1,1) tensor. However ...

A Lorentz transformation is a Lorentz tensor. Under Lorentz coordinate transformations, the components ##\Lambda^{\mu}_{\;\nu}## transform as components of a tensor. But of course, it's not a tensor under general coordinate transformations.
... I generally dislike the very common introduction of tensors as being "components transforming in a particular way" as I have seen it lead to several misunderstandings. In the case of a passive Lorentz transformation (which is what people will generally think about), it certainly does not satisfy the requirements for being a tensor in the sense of being a map from tangent vectors to tangent vectors as it is just a relabelling of the coordinates used to describe the vector.

Demystifier
Gold Member
Then can we at least agree that there are several inequivalent definitions of a "tensor"?

dextercioby
Homework Helper
Then can we at least agree that there are several inequivalent definitions of a "tensor"?
In mathematics there is only one definition: a tensor is a multilinear map. There is no such thing as "inequivalent definitions"; that would mean different notions.
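For reference, the standard definition: a type-##(p,q)## tensor on a vector space ##V## is a multilinear map
$$T : \underbrace{V^{*} \times \cdots \times V^{*}}_{p\ \text{copies}} \times \underbrace{V \times \cdots \times V}_{q\ \text{copies}} \to \mathbb{R},$$
and the "components transforming in a particular way" picture is what this becomes when ##T## is evaluated on basis and dual-basis vectors.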
