General Relativity: Advice please about the textbook by Misner, Thorne and Wheeler

  • Thread starter bob012345
  • #1
bob012345
Gold Member
Summary:: Does the textbook by Misner, Thorne and Wheeler have all I need to understand tensors in order to learn GR?

Does the textbook by Misner, Thorne and Wheeler have all I need to understand tensors in order to learn GR? I have that textbook but never went through it. Tensors greatly intimidate me, with all the indices and symbols, the summing this way and that, and all the terminology. Is there a better source? Thanks.

P.S. I always liked that the book MTW is its own pun...
 

Answers and Replies

  • #2
PeterDonis
Mentor
Insights Author
2020 Award
Does the textbook by Misner, Thorne and Wheeler have all I need to understand tensors in order to learn GR?
Yes. However:

Tensors greatly intimidate me, with all the indices and symbols, the summing this way and that, and all the terminology.
As you seem to realize, MTW is probably not the source you want to learn tensor calculus from if you want to avoid this problem. :wink:

I would suggest Sean Carroll's online lecture notes on GR as a gentler introduction to the subject.
 
  • Like
Likes ohwilleke, vanhees71, Demystifier and 1 other person
  • #3
bob012345
Gold Member
Yes. However:


As you seem to realize, MTW is probably not the source you want to learn tensor calculus from if you want to avoid this problem. :wink:

I would suggest Sean Carroll's online lecture notes on GR as a gentler introduction to the subject.
Ok, thanks.
 
  • #4
Orodruin
Staff Emeritus
Science Advisor
Homework Helper
Insights Author
Gold Member
Summary:: Does the textbook by Misner, Thorne and Wheeler have all I need to understand tensors in order to learn GR?

Tensors greatly intimidate me, with all the indices and symbols, the summing this way and that, and all the terminology.
Do not break the first commandment: https://www.physicsforums.com/insights/the-10-commandments-of-index-expressions-and-tensor-calculus/ ;)

Do not despair. It does look daunting in the beginning, but once you get the hang of it there are really only a few basic things to keep in mind.
 
  • Like
Likes ohwilleke, madscientist_93, Demystifier and 1 other person
  • #5
vanhees71
Science Advisor
Insights Author
Gold Member
The greatest obstacle for me is obeying Commandment 2 ;-)). Another important commandment is to carefully respect the horizontal order of the indices, not only the vertical one! I've seen many documents that take what is in principle a good approach to the topic but are rendered useless by not writing the indices in a well-defined horizontal order where it is needed.
 
  • Like
Likes ohwilleke, dextercioby and Demystifier
  • #6
Orodruin
Staff Emeritus
Science Advisor
Homework Helper
Insights Author
Gold Member
The greatest obstacle for me is obeying Commandment 2 ;-))
Yet it has been the bane of many a student's calculations :oldeyes:
 
  • #7
Demystifier
Science Advisor
Insights Author
Gold Member
The greatest obstacle for me is obeying Commandment 2 ;-)). Another important commandment is to carefully respect the horizontal order of the indices, not only the vertical one! I've seen many documents that take what is in principle a good approach to the topic but are rendered useless by not writing the indices in a well-defined horizontal order where it is needed.
Especially when one needs to be careful about both at the same time. For instance, the Lorentz-transformation tensor obeys ##(\Lambda^{-1})^{\alpha}{}_{\beta}=\Lambda_{\beta}{}^{\alpha}##.
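(A quick numerical sanity check of this identity, not from the original post: a minimal sketch assuming numpy and, as an arbitrary choice, a boost along x with ##\beta = 0.6##.)
```python
# Minimal sketch (assumed numpy; beta = 0.6 chosen arbitrarily): check that
# inv(Lambda) equals Lambda with both indices swapped and raised/lowered by
# eta, i.e. inv(Lambda) = eta @ Lambda.T @ eta in matrix form.
import numpy as np

beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)

# Standard boost along x, components Lambda^mu_nu (row mu, column nu)
Lam = np.array([
    [gamma,         -gamma * beta, 0.0, 0.0],
    [-gamma * beta,  gamma,        0.0, 0.0],
    [0.0,            0.0,          1.0, 0.0],
    [0.0,            0.0,          0.0, 1.0],
])
eta = np.diag([1.0, -1.0, -1.0, -1.0])  # west-coast metric (+,-,-,-)

assert np.allclose(np.linalg.inv(Lam), eta @ Lam.T @ eta)
```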
 
  • #8
dyn
Especially when one needs to be careful about both at the same time. For instance, the Lorentz-transformation tensor obeys ##(\Lambda^{-1})^{\alpha}{}_{\beta}=\Lambda_{\beta}{}^{\alpha}##.
Can you recommend a textbook that covers this clearly and explains what the ##\alpha## and ##\beta## refer to in terms of rows and columns on either side of the equation? I have always found this confusing.
 
  • #9
Demystifier
Science Advisor
Insights Author
Gold Member
Can you recommend a textbook that covers this clearly and explains what the ##\alpha## and ##\beta## refer to in terms of rows and columns on either side of the equation? I have always found this confusing.
I don't know a textbook, but it's easy. The Lorentz group is an orthogonal group, SO(1,3). An orthogonal matrix, by definition, obeys ##\Lambda^{-1}=\Lambda^T##, where ##T## denotes the transpose. The rest should be easy.
 
  • #10
Orodruin
Staff Emeritus
Science Advisor
Homework Helper
Insights Author
Gold Member
I don't know a textbook, but it's easy. The Lorentz group is an orthogonal group, SO(1,3). An orthogonal matrix, by definition, obeys ##\Lambda^{-1}=\Lambda^T##, where ##T## denotes the transpose. The rest should be easy.
This is true for SO(n). It is not true for SO(1,n). In particular, it fails for the standard Lorentz boost in the x-direction, since there ##\Lambda = \Lambda^T## but ##\Lambda \neq \Lambda^{-1}##.
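(A small numerical illustration of this point, not from the original post: a sketch assuming numpy, with ##\beta = 0.6## so that ##\gamma = 1.25## exactly. The boost matrix is symmetric, yet it is not its own inverse.)
```python
# Sketch: a boost is symmetric (Lambda = Lambda^T), so if Lambda^{-1} were
# equal to Lambda^T, the boost would be its own inverse. It is not.
import numpy as np

beta, gamma = 0.6, 1.25  # gamma = 1/sqrt(1 - 0.6**2) = 1.25 exactly
Lam = np.array([
    [gamma,         -gamma * beta, 0.0, 0.0],
    [-gamma * beta,  gamma,        0.0, 0.0],
    [0.0,            0.0,          1.0, 0.0],
    [0.0,            0.0,          0.0, 1.0],
])

print(np.allclose(Lam, Lam.T))            # True: symmetric
print(np.allclose(Lam @ Lam, np.eye(4)))  # False: Lambda^T is not Lambda^{-1}
```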
 
  • Like
Likes Demystifier
  • #11
vanhees71
Science Advisor
Insights Author
Gold Member
I don't know a textbook, but it's easy. The Lorentz group is an orthogonal group, SO(1,3). An orthogonal matrix, by definition, obeys ##\Lambda^{-1}=\Lambda^T##, where ##T## denotes the transpose. The rest should be easy.
It's not that easy! You have ##(\eta^{\mu \nu})=\mathrm{diag}(1,-1,-1,-1)## (I'm in the west-coast camp, but there's no big difference when using the east-coast convention). An ##\mathbb{R}^{4 \times 4}## matrix is called a Lorentz-transformation matrix if
$${\Lambda^{\mu}}_{\rho} {\Lambda^{\nu}}_{\sigma} \eta_{\mu \nu}=\eta_{\rho \sigma}.$$
In matrix notation (note that here the index positioning gets lost, so you have to keep in mind that the matrix ##\hat{\Lambda}## has a first upper and a second lower index while the matrix ##\hat{\eta}## has two lower indices) this reads
$$\hat{\Lambda}^{\text{T}} \hat{\eta} \hat{\Lambda}=\hat{\eta}.$$
Since ##\hat{\eta}^2=\hat{1}## we have
$$\hat{\eta} \hat{\Lambda}^{\text{T}} \hat{\eta}=\hat{\Lambda}^{-1}.$$
In index notation, restoring the correct index placement, this reads (note also that ##\hat{\eta}^{-1}=(\eta^{\mu \nu})## coincides numerically with ##\hat{\eta}=(\eta_{\mu \nu})##)
$${(\hat{\Lambda}^{-1})^{\mu}}_{\nu} = \eta_{\nu \sigma} {\Lambda^{\sigma}}_{\rho} \eta^{\rho \mu}={\Lambda_{\nu}}^{\mu}.$$
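(One can check this chain of identities numerically; the following is a sketch, not from the original post, assuming numpy and the ##\beta = 0.6## boost from earlier. The einsum line is the index expression, the matrix lines are the matrix form.)
```python
# Sketch verifying the defining relation and the resulting inverse formula
# for a concrete boost (beta = 0.6, gamma = 1.25).
import numpy as np

beta, gamma = 0.6, 1.25
Lam = np.array([
    [gamma,         -gamma * beta, 0.0, 0.0],
    [-gamma * beta,  gamma,        0.0, 0.0],
    [0.0,            0.0,          1.0, 0.0],
    [0.0,            0.0,          0.0, 1.0],
])
eta = np.diag([1.0, -1.0, -1.0, -1.0])

# Index form: Lambda^mu_rho Lambda^nu_sigma eta_{mu nu} = eta_{rho sigma}
assert np.allclose(np.einsum('mr,ns,mn->rs', Lam, Lam, eta), eta)

# Matrix form: Lambda^T eta Lambda = eta, hence eta Lambda^T eta = Lambda^{-1}
assert np.allclose(Lam.T @ eta @ Lam, eta)
assert np.allclose(eta @ Lam.T @ eta, np.linalg.inv(Lam))
```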
 
  • Like
Likes dextercioby and Demystifier
  • #13
dyn
My confusion is over the following:
If ##\Lambda^{\mu}{}_{\nu}## represents the entry in the ##\Lambda## matrix in row ##\mu## and column ##\nu##, what does ##\Lambda_{\nu}{}^{\mu}## represent in terms of rows and columns?
In other words, in ##\Lambda^{\mu}{}_{\nu}## the row is indicated by ##\mu##; but is that because it is the top index or the first index going from left to right?
 
  • #14
Demystifier
Science Advisor
Insights Author
Gold Member
In other words, in ##\Lambda^{\mu}{}_{\nu}## the row is indicated by ##\mu##; but is that because it is the top index or the first index going from left to right?
It is because it's first from the left. In terms of a matrix, the difference between upper and lower indices doesn't make sense. For example, for the metric tensor ##g## with matrix entries ##g_{\mu\nu}##, the quantities called ##g^{\mu\nu}## are really the matrix entries of ##g^{-1}##. So in matrix language, ##g_{\mu\nu}## and ##g^{\mu\nu}## are entries of different matrices, one being the inverse of another.
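(A concrete illustration, not from the original post: a sketch assuming numpy, using the flat 2D metric in polar coordinates at ##r = 2## as an arbitrary example.)
```python
# Sketch: the components g^{mu nu} are literally the entries of the matrix
# inverse of (g_{mu nu}). Example: flat 2D metric in polar coordinates,
# g_{mu nu} = diag(1, r^2), evaluated at r = 2.
import numpy as np

r = 2.0
g_lower = np.diag([1.0, r**2])    # matrix of g_{mu nu}
g_upper = np.linalg.inv(g_lower)  # matrix of g^{mu nu}

print(g_upper)  # diag(1, 1/r^2) = diag(1, 0.25): a different matrix
```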
 
  • #15
vanhees71
Science Advisor
Insights Author
Gold Member
The difference between upper and lower indices makes a lot of sense. It's a mnemonic notation for knowing whether the corresponding tensor components have to be transformed covariantly (lower indices) or contravariantly (upper indices).

The drawback of the matrix notation is that this information gets hidden, i.e., you always have to keep in mind which of the indices in the matrix is an upper or a lower index. Another drawback is that you can't handle tensors of rank 3 and higher. The advantage is a somewhat shorter notation.
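(To make the rank-3 point concrete, here is a sketch, not from the original post, assuming numpy; the components are random placeholders. The contraction below is immediate in index notation but has no direct matrix-product form.)
```python
# Sketch: contracting a rank-3 object T^{mu nu rho} with a covector v_rho.
# Index notation handles this directly; matrix notation cannot.
import numpy as np

rng = np.random.default_rng(0)
T = rng.random((4, 4, 4))  # placeholder components T^{mu nu rho}
v = rng.random(4)          # placeholder components v_rho

S = np.einsum('mnr,r->mn', T, v)  # S^{mu nu} = T^{mu nu rho} v_rho
print(S.shape)                    # (4, 4)
```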
 
  • Like
Likes Demystifier
  • #16
Orodruin
Staff Emeritus
Science Advisor
Homework Helper
Insights Author
Gold Member
It is because it's first from the left. In terms of a matrix, the difference between upper and lower indices doesn't make sense. For example, for the metric tensor ##g## with matrix entries ##g_{\mu\nu}##, the quantities called ##g^{\mu\nu}## are really the matrix entries of ##g^{-1}##. So in matrix language, ##g_{\mu\nu}## and ##g^{\mu\nu}## are entries of different matrices, one being the inverse of another.
It should be pointed out that this is particular to the metric tensor. It is not generally true that ##T^{\mu\nu}## are the components of the inverse of ##T_{\mu\nu}##. However, for the metric tensor, it holds that
$$
g^{\mu\nu}g_{\nu\sigma} v^\sigma = g^{\mu\nu} v_\nu = v^\mu
$$
for all ##v## and so ##g^{\mu\nu}g_{\nu\sigma} = \delta^\mu_\sigma##.
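(A quick check of the last line, not from the original post: a sketch assuming numpy and, for concreteness, the Minkowski metric.)
```python
# Sketch: g^{mu nu} g_{nu sigma} = delta^mu_sigma, checked with einsum.
import numpy as np

g_lower = np.diag([1.0, -1.0, -1.0, -1.0])  # g_{mu nu}
g_upper = np.linalg.inv(g_lower)            # g^{mu nu}

delta = np.einsum('mn,ns->ms', g_upper, g_lower)
assert np.allclose(delta, np.eye(4))
```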
 
  • #17
dyn
If ##\Lambda^{\mu}{}_{\nu}## represents the entry in the ##\Lambda## matrix in row ##\mu## and column ##\nu##, what does ##\Lambda_{\nu}{}^{\mu}## represent in terms of rows and columns?
In other words, in ##\Lambda^{\mu}{}_{\nu}## the row is indicated by ##\mu##; but is that because it is the top index or the first index going from left to right?
If ##\Lambda## is a 4×4 matrix and ##\Lambda^{\mu}{}_{\nu}## represents the entry in row ##\mu## and column ##\nu##, what does ##\Lambda^{\nu}{}_{\mu}## represent?

I am looking for a textbook that explains this kind of tensor/index notation, and what it means, as clearly as possible.
 
  • #18
ohwilleke
Gold Member
I have a copy of MTW, and it is not at all my favorite book for independently studying something like tensor calculus from scratch.
 
  • #19
PeterDonis
Mentor
Insights Author
2020 Award
If ##\Lambda## is a 4×4 matrix and ##\Lambda^{\mu}{}_{\nu}## represents the entry in row ##\mu## and column ##\nu##, what does ##\Lambda^{\nu}{}_{\mu}## represent?
The entry in row ##\nu## and column ##\mu##.

However, ##\Lambda## is not a tensor; it's a coordinate transformation represented as a matrix. They're not the same thing.

I am looking for a textbook that explains this kind of tensor/index notation, and what it means, as clearly as possible.
First you need to understand the distinction between tensors and coordinate transformations. You have been talking about ##\Lambda##, meaning the Lorentz transformation, as though it were a tensor; but as noted above, it isn't.
 
  • #20
PeterDonis
Mentor
Insights Author
2020 Award
the Lorentz-transformation tensor
A coordinate transformation isn't a tensor. They're different kinds of objects, even though similar notation is used for both.
 
  • #21
Demystifier
Science Advisor
Insights Author
Gold Member
A coordinate transformation isn't a tensor.
A Lorentz transformation is a Lorentz tensor. Under Lorentz coordinate transformations, the components ##\Lambda^{\mu}{}_{\nu}## transform as the components of a tensor. But of course, it's not a tensor under general coordinate transformations.
 
  • #22
Orodruin
Staff Emeritus
Science Advisor
Homework Helper
Insights Author
Gold Member
A coordinate transformation isn't a tensor. They're different kinds of objects, even though similar notation is used for both.
I was about to write something similar, but this depends on whether you consider an active or passive Lorentz transformation. For a passive transformation, the coordinate transformation certainly is not a tensor as it is just a relabelling of components. However, for an active transformation, a Lorentz transformation is indeed a linear map from vectors in Minkowski space to vectors in Minkowski space - which is the very definition of a (1,1) tensor. However ...

A Lorentz transformation is a Lorentz tensor. Under Lorentz coordinate transformations, the components ##\Lambda^{\mu}{}_{\nu}## transform as the components of a tensor. But of course, it's not a tensor under general coordinate transformations.
... I generally dislike the very common introduction of tensors as being "components transforming in a particular way" as I have seen it lead to several misunderstandings. In the case of a passive Lorentz transformation (which is what people will generally think about), it certainly does not satisfy the requirements for being a tensor in the sense of being a map from tangent vectors to tangent vectors as it is just a relabelling of the coordinates used to describe the vector.
 
  • #23
Demystifier
Science Advisor
Insights Author
Gold Member
Then can we at least agree that there are several inequivalent definitions of a "tensor"?
 
  • #24
dextercioby
Science Advisor
Homework Helper
Insights Author
Then can we at least agree that there are several inequivalent definitions of a "tensor"?
In mathematics there is only one definition: a multilinear map. There is no such thing as "inequivalent definitions"; that would mean different notions.
 
  • #25
vanhees71
Science Advisor
Insights Author
Gold Member
If ##\Lambda## is a 4×4 matrix and ##\Lambda^{\mu}{}_{\nu}## represents the entry in row ##\mu## and column ##\nu##, what does ##\Lambda^{\nu}{}_{\mu}## represent?

I am looking for a textbook that explains this kind of tensor/index notation, and what it means, as clearly as possible.
The horizontal position of the indices always tells you what is counted: the left index counts the row, the right index the column. In matrix notation the vertical positioning of the indices is simply lost. You always have to remember what a given matrix represents, including the vertical positioning of the indices. For this reason I tend to avoid matrix notation when doing calculations in relativity.
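(A one-line illustration of the convention, not from the original post: a sketch assuming numpy and the boost matrix from earlier in the thread.)
```python
# Sketch: in a numpy array the first index is the row and the second the
# column, matching "left index counts the row, right index the column".
import numpy as np

beta, gamma = 0.6, 1.25
Lam = np.array([
    [gamma,         -gamma * beta, 0.0, 0.0],
    [-gamma * beta,  gamma,        0.0, 0.0],
    [0.0,            0.0,          1.0, 0.0],
    [0.0,            0.0,          0.0, 1.0],
])

mu, nu = 0, 1
print(Lam[mu, nu])  # row mu, column nu: the entry Lambda^0_1 = -gamma*beta
```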
 
