Understanding Tensors: Is Misner Thorne and Wheeler Enough?

  • Thread starter: bob012345
  • Tags: Tensors, Wheeler

Summary
The discussion centers on whether the textbook Gravitation by Misner, Thorne, and Wheeler (MTW) is sufficient for understanding the tensors necessary for general relativity (GR). Participants note that MTW may not be the best starting point due to its complexity, suggesting alternatives like Sean Carroll's online lecture notes for a gentler introduction. There is a focus on the importance of understanding tensor notation, particularly the distinction between upper and lower indices and how they relate to matrix representations. The conversation also touches on the nature of Lorentz transformations and their classification as tensors, highlighting the nuances in terminology and definitions. Overall, clarity in tensor notation and a solid foundational understanding are emphasized as crucial for mastering the subject.
bob012345
Summary:: Does the textbook Misner Thorne and Wheeler have all I need to understand tensors in order to learn GR?

Does the textbook Misner Thorne and Wheeler have all I need to understand tensors in order to learn GR? I have that textbook but never went through it. Tensors greatly intimidate me with all the indexes and symbols and summing this way and that and all the terminology. Is there a better source? Thanks.

P.S. I always liked that the book MTW is its own pun...
 
bob012345 said:
Does the textbook Misner Thorne and Wheeler have all I need to understand tensors in order to learn GR?
Yes. However:

bob012345 said:
Tensors greatly intimidate me with all the indexes and symbols and summing this way and that and all the terminology.
As you seem to realize, MTW is probably not the source you want to learn tensor calculus from if you want to avoid this problem. :wink:

I would suggest Sean Carroll's online lecture notes on GR as a gentler introduction to the subject.
 
PeterDonis said:
Yes. However: As you seem to realize, MTW is probably not the source you want to learn tensor calculus from if you want to avoid this problem. :wink:

I would suggest Sean Carroll's online lecture notes on GR as a gentler introduction to the subject.
Ok, thanks.
 
bob012345 said:
Summary:: Does the textbook Misner Thorne and Wheeler have all I need to understand tensors in order to learn GR?

Tensors greatly intimidate me with all the indexes and symbols and summing this way and that and all the terminology.
Do not break the first commandment: https://www.physicsforums.com/insights/the-10-commandments-of-index-expressions-and-tensor-calculus/ ;)

Do not despair. It does look daunting in the beginning, but once you get the hang of it, there are mainly a few basic things to keep in mind.
 
The greatest obstacle for me is to obey Commandment 2 ;-)). Another important commandment is to also carefully obey the horizontal order of the indices, not only the vertical one! I've seen many documents that in principle provide a good approach to the topic but are completely useless because they do not write the indices in a well-defined horizontal order where it is needed.
 
vanhees71 said:
The greatest obstacle for me is to obey Commandment 2 ;-))
Yet it has been the bane of many a student's calculations :oldeyes:
 
vanhees71 said:
The greatest obstacle for me is to obey Commandment 2 ;-)). Another important commandment is to also carefully obey the horizontal order of the indices, not only the vertical one! I've seen many documents that in principle provide a good approach to the topic but are completely useless because they do not write the indices in a well-defined horizontal order where it is needed.
Especially when one needs to be careful about both at the same time. For instance, the Lorentz-transformation tensor obeys ##(\Lambda^{-1})^{\alpha}{}_{\beta}=\Lambda_{\beta}{}^{\alpha}##.
 
Demystifier said:
Especially when one needs to be careful about both at the same time. For instance, the Lorentz-transformation tensor obeys ##(\Lambda^{-1})^{\alpha}{}_{\beta}=\Lambda_{\beta}{}^{\alpha}##.
Can you recommend a textbook that covers this clearly and explains what the ##\alpha## and ##\beta## refer to in terms of rows and columns on either side of the equation? I have always found this confusing.
 
dyn said:
Can you recommend a textbook that covers this clearly and explains what the ##\alpha## and ##\beta## refer to in terms of rows and columns on either side of the equation? I have always found this confusing.
I don't know a textbook, but it's easy. The Lorentz group is the orthogonal group SO(1,3). An orthogonal matrix, by definition, obeys ##\Lambda^{-1}=\Lambda^T##, where ##T## denotes the transpose. The rest should be easy.
 
  • #10
Demystifier said:
I don't know a textbook, but it's easy. The Lorentz group is the orthogonal group SO(1,3). An orthogonal matrix, by definition, obeys ##\Lambda^{-1}=\Lambda^T##, where ##T## denotes the transpose. The rest should be easy.
This is true for SO(n). It is not true for SO(1,n). In particular, it fails for the standard Lorentz boost in the x-direction: there ##\Lambda = \Lambda^T##, but the boost is not its own inverse, so ##\Lambda^{-1} \neq \Lambda^T##.
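
For concreteness, here is a quick numerical check of this point, a minimal Python/numpy sketch (the rapidity value 0.5 is an arbitrary choice):

```python
import numpy as np

# Standard Lorentz boost along x, written with rapidity phi (arbitrary value).
phi = 0.5
ch, sh = np.cosh(phi), np.sinh(phi)
L = np.array([[ch, -sh, 0.0, 0.0],
              [-sh, ch, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])
eta = np.diag([1.0, -1.0, -1.0, -1.0])  # west-coast Minkowski metric

print(np.allclose(L, L.T))              # True: the boost matrix is symmetric
print(np.allclose(L.T @ L, np.eye(4)))  # False: Lambda^T is NOT Lambda^{-1}
print(np.allclose(L.T @ eta @ L, eta))  # True: the Lorentz condition holds
```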
 
  • #11
Demystifier said:
I don't know a textbook, but it's easy. Lorentz group is an orthogonal group SO(1,3). Orthogonal matrix, by definition, obeys ##\Lambda^{-1}=\Lambda^T##, where ##T## denotes the transpose. The rest should be easy.
It's not that easy! You have ##(\eta^{\mu \nu})=\mathrm{diag}(1,-1,-1,-1)## (I'm in the west-coast camp, but there's no big difference when using the east-coast convention). An ##\mathbb{R}^{4 \times 4}##-matrix is called a Lorentz-transformation matrix if
$${\Lambda^{\mu}}_{\rho} {\Lambda^{\nu}}_{\sigma} \eta_{\mu \nu}=\eta_{\rho \sigma}.$$
In matrix notation (note that here the index positioning gets lost, so you have to keep in mind that the matrix ##\hat{\Lambda}## has a first upper and a second lower index while the matrix ##\hat{\eta}## has two lower indices) this reads
$$\hat{\Lambda}^{\text{T}} \hat{\eta} \hat{\Lambda}=\hat{\eta}.$$
Since ##\hat{\eta}^2=\hat{1}## we have
$$\hat{\eta} \hat{\Lambda}^{\text{T}} \hat{\eta}=\hat{\Lambda}^{-1}.$$
Restoring the correct index placement (and noting that ##\hat{\eta}^{-1}=(\eta^{\mu \nu})=\hat{\eta}=(\eta_{\mu \nu})##), in index notation this reads
$${(\hat{\Lambda}^{-1})^{\mu}}_{\nu} = \eta_{\nu \sigma} {\Lambda^{\sigma}}_{\rho} \eta^{\rho \mu}={\Lambda_{\nu}}^{\mu}.$$
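
The final relation can be checked numerically; here is a minimal numpy sketch using the same boost convention as above (the rapidity value is again an arbitrary assumption):

```python
import numpy as np

phi = 0.5  # arbitrary rapidity
ch, sh = np.cosh(phi), np.sinh(phi)
L = np.array([[ch, -sh, 0.0, 0.0],
              [-sh, ch, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])
eta = np.diag([1.0, -1.0, -1.0, -1.0])

# Lambda^{-1} = eta Lambda^T eta: raising/lowering the indices of the transpose.
print(np.allclose(np.linalg.inv(L), eta @ L.T @ eta))  # True
```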
 
  • #13
My confusion is over the following:
If ##\Lambda^{\mu}{}_{\nu}## represents the entry in the ##\Lambda## matrix in row ##\mu## and column ##\nu##, what does ##\Lambda_{\nu}{}^{\mu}## represent in terms of rows and columns?
In other words, in ##\Lambda^{\mu}{}_{\nu}## the row is indicated by ##\mu##; but is that because it is the top index or the first index going from left to right?
 
  • #14
dyn said:
In other words, in ##\Lambda^{\mu}{}_{\nu}## the row is indicated by ##\mu##; but is that because it is the top index or the first index going from left to right?
It is because it's first from the left. In terms of a matrix, the difference between upper and lower indices doesn't make sense. For example, for the metric tensor ##g## with matrix entries ##g_{\mu\nu}##, the quantities called ##g^{\mu\nu}## are really the matrix entries of ##g^{-1}##. So in matrix language, ##g_{\mu\nu}## and ##g^{\mu\nu}## are entries of different matrices, one being the inverse of another.
 
  • #15
The difference between upper and lower indices makes a lot of sense. It's a mnemonic notation for knowing whether the corresponding tensor components have to be transformed covariantly (lower indices) or contravariantly (upper indices).

The drawback of the matrix notation is that this information gets hidden, i.e., you always have to keep in mind which of the indices in the matrix is an upper or a lower index. Another drawback is that you can't handle tensors of rank 3 and higher. The advantage is a somewhat shorter notation.
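
Index notation (or its computational cousin, einsum) has no such rank limit; a small illustrative sketch with made-up random components:

```python
import numpy as np

# Matrix notation stops at rank 2, but index notation does not. Contract a
# rank-3 tensor T^{mu nu rho} with a covector w_rho (random illustrative data).
T = np.random.rand(4, 4, 4)
w = np.random.rand(4)
S = np.einsum('abc,c->ab', T, w)  # S^{mu nu} = T^{mu nu rho} w_rho
print(S.shape)                    # (4, 4)
```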
 
  • #16
Demystifier said:
It is because it's first from the left. In terms of a matrix, the difference between upper and lower indices doesn't make sense. For example, for the metric tensor ##g## with matrix entries ##g_{\mu\nu}##, the quantities called ##g^{\mu\nu}## are really the matrix entries of ##g^{-1}##. So in matrix language, ##g_{\mu\nu}## and ##g^{\mu\nu}## are entries of different matrices, one being the inverse of another.
It should be pointed out that this holds specifically for the metric tensor. It is not generally true that ##T^{\mu\nu}## are the components of the inverse of ##T_{\mu\nu}##. However, for the metric tensor, it holds that
$$
g^{\mu\nu}g_{\nu\sigma} v^\sigma = g^{\mu\nu} v_\nu = v^\mu
$$
for all ##v## and so ##g^{\mu\nu}g_{\nu\sigma} = \delta^\mu_\sigma##.
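
As a concrete check, one can treat ##g_{\mu\nu}## as a matrix and verify that the components ##g^{\mu\nu}## are just its matrix inverse; a numpy sketch with an arbitrarily chosen diagonal metric:

```python
import numpy as np

# An arbitrary diagonal metric g_{mu nu} (values chosen for illustration only).
g_lo = np.diag([0.8, -1.25, -4.0, -4.0])
g_up = np.linalg.inv(g_lo)  # the components g^{mu nu}

# g^{mu nu} g_{nu sigma} = delta^mu_sigma
print(np.allclose(g_up @ g_lo, np.eye(4)))  # True

# Raising an index, v^mu = g^{mu nu} v_nu, and lowering it again recovers v_nu.
v_lo = np.array([1.0, 2.0, 3.0, 4.0])
v_up = g_up @ v_lo
print(np.allclose(g_lo @ v_up, v_lo))       # True
```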
 
  • #17
dyn said:
If ##\Lambda^{\mu}{}_{\nu}## represents the entry in the ##\Lambda## matrix in row ##\mu## and column ##\nu##, what does ##\Lambda_{\nu}{}^{\mu}## represent in terms of rows and columns?
In other words, in ##\Lambda^{\mu}{}_{\nu}## the row is indicated by ##\mu##; but is that because it is the top index or the first index going from left to right?
If ##\Lambda## is a 4×4 matrix and ##\Lambda^{\mu}{}_{\nu}## represents the entry in row ##\mu## and column ##\nu##, what does ##\Lambda_{\nu}{}^{\mu}## represent?

I am looking for a textbook that explains this kind of tensor/index notation, and what it means, as clearly as possible.
 
  • #18
I have a copy of MTW and it is not at all my favorite one to use for independent studying of something like tensor calculus from scratch.
 
  • #19
dyn said:
If ##\Lambda## is a 4×4 matrix and ##\Lambda^{\mu}{}_{\nu}## represents the entry in row ##\mu## and column ##\nu##, what does ##\Lambda_{\nu}{}^{\mu}## represent?
The entry in row ##\nu## and column ##\mu##.

However, ##\Lambda## is not a tensor, it's a coordinate transformation represented as a matrix. They're not the same thing.

dyn said:
I am looking for a textbook that explains this kind of tensor/index notation, and what it means, as clearly as possible.
First you need to understand the distinction between tensors and coordinate transformations. You have been talking about ##\Lambda##, meaning the Lorentz transformation, as though it were a tensor; but as noted above, it isn't.
 
  • #20
Demystifier said:
the Lorentz-transformation tensor
A coordinate transformation isn't a tensor. They're different kinds of objects, even though similar notation is used for both.
 
  • #21
PeterDonis said:
A coordinate transformation isn't a tensor.
The Lorentz transformation is a Lorentz tensor. Under Lorentz coordinate transformations, the components ##\Lambda^{\mu}{}_{\nu}## transform as components of a tensor. But of course, it's not a tensor under general coordinate transformations.
 
  • #22
PeterDonis said:
A coordinate transformation isn't a tensor. They're different kinds of objects, even though similar notation is used for both.
I was about to write something similar, but this depends on whether you consider an active or passive Lorentz transformation. For a passive transformation, the coordinate transformation certainly is not a tensor as it is just a relabelling of components. However, for an active transformation, a Lorentz transformation is indeed a linear map from vectors in Minkowski space to vectors in Minkowski space - which is the very definition of a (1,1) tensor. However ...

Demystifier said:
The Lorentz transformation is a Lorentz tensor. Under Lorentz coordinate transformations, the components ##\Lambda^{\mu}{}_{\nu}## transform as components of a tensor. But of course, it's not a tensor under general coordinate transformations.
... I generally dislike the very common introduction of tensors as being "components transforming in a particular way" as I have seen it lead to several misunderstandings. In the case of a passive Lorentz transformation (which is what people will generally think about), it certainly does not satisfy the requirements for being a tensor in the sense of being a map from tangent vectors to tangent vectors as it is just a relabelling of the coordinates used to describe the vector.
 
  • #23
Then can we at least agree that there are several inequivalent definitions of a "tensor"?
 
  • #24
Demystifier said:
Then can we at least agree that there are several inequivalent definitions of a "tensor"?
In mathematics, there is only one definition: a multilinear map. There is no such thing as "inequivalent definitions"; that would mean different notions.
 
  • #25
dyn said:
If Λ is a 4x4 matrix and Λμν represents the entry in row μ and column ν what does Λvu represent ?

I am looking for a textbook that explains in the clearest sense this kind of tensor/index notation and what it means
The horizontal position of the indices always tells you what's counted: the left index counts the row, the right index the column. In matrix notation the vertical positioning of the indices is simply lost. You always have to remember what a given matrix represents, including the vertical positioning of the indices. For this reason I tend to avoid matrix notation when doing calculations in relativity.
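
This matches how arrays are stored in code, which can serve as a mental model; a tiny numpy sketch (purely illustrative):

```python
import numpy as np

# In numpy, A[mu, nu]: the left index labels the row, the right the column.
# Only the horizontal order is stored; up/down index placement must be
# remembered separately, exactly as described above.
A = np.arange(16.0).reshape(4, 4)
print(A[1, 2])    # the entry in row 1, column 2
print(A.T[2, 1])  # transposing swaps the horizontal order of the indices
```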
 
  • #26
It seems hopeless as highly educated intelligent people can't even agree on the math. Maybe GR is best left to professional mathematicians.
 
  • #27
bob012345 said:
highly educated intelligent people can't even agree on the math
I don't think there's any disagreement on the math. The disagreement in this thread has been over terminology: whether, for example, the Lorentz transformation can properly be described as a "tensor". Nobody is disagreeing on how to use the Lorentz transformation mathematically, or any other mathematical object.
 
  • #28
bob012345 said:
Maybe GR is best left to professional mathematicians.
You might want to reconsider that since very few prominent specialists in GR have been professional mathematicians. The only one I can think of off the top of my head is Roger Penrose.
 
  • #29
PeterDonis said:
You might want to reconsider that since very few prominent specialists in GR have been professional mathematicians.
Maybe that’s why the Earth is still blocking your view of Venus … :-p
 
  • #30
caz said:
Maybe that’s why the Earth is still blocking your view of Venus … :-p
Yes, since de-modulation isn't working, perhaps I need to consider finding a black hole somewhere and using that to swallow the Earth...
 
