Question about tensor notation convention as used in SR/GR

Discussion Overview

The discussion revolves around tensor notation conventions as used in special relativity (SR) and general relativity (GR). Participants explore the implications of index placement, transposition, and the relationship between tensor components and matrix representations.

Discussion Character

  • Technical explanation
  • Debate/contested
  • Conceptual clarification

Main Points Raised

  • One participant seeks confirmation on whether ##A_{a}\text{ }^{b}## represents the element in the a-th row and b-th column of the transpose of A, suggesting that ##A_{a}\text{ }^{b}= A^{b}\text{ }_{a}##.
  • Another participant argues that this equality is not certain and depends on the symmetry properties of the tensor A, cautioning against assuming A is merely a matrix.
  • A different participant introduces the summation convention, stating that ##A_{a}{ }^{b}= g_{ac} \, g^{bd}A^{c}{ }_{d}##, linking it to matrix notation.
  • Concerns are raised about when it is appropriate to treat A as a matrix, with a participant questioning the conditions under which ##A^{a} \text{ } _{b}=A_{b}\text{ } ^{a}## holds true.
  • One participant emphasizes the importance of verifying conventions used in texts, noting that some indices may not represent components to be summed over.
  • Another participant notes that, unlike in matrix multiplication, the order of factors in a tensor expression does not matter: reordering the factors still represents the same vector.
  • There is a suggestion that the notation ##(A^{T})_{i} \text{ }^{j}## may not be used in relativity, as the transposition can be represented by moving indices without explicitly writing T.
  • One participant expresses confusion over a specific text's notation, noting inconsistencies in how transposition is represented.
  • A later reply clarifies that the convention used in the instructor's equations makes sense, associating transposition with reversing indices, but acknowledges the ambiguity in matrix notation regarding which space a matrix belongs to.
  • Another participant reflects on their initial intuition that for fixed indices i and j, the equality ##A_{i} \text{ }^{j}=A^{j} \text{ }_{i}## should hold, emphasizing that while the tensors/matrices are not equal, the elements should be equal for specific indices.

Areas of Agreement / Disagreement

Participants express differing views on the conventions of tensor notation and the implications of transposition. There is no consensus on the correctness of specific notations or the conditions under which certain equalities hold.

Contextual Notes

Participants highlight the need for careful definitions and the potential for ambiguity in matrix notation, particularly regarding the association of indices with specific tensor spaces.

When writing

##A_{a}\text{ }^{b}## one means "the element in the a-th row and b-th column of the TRANSPOSE of A", right?

Such that ##A_{a}\text{ } ^{b}= A^{b}\text{ } _{a}## ?

I would just like a confirmation so I'm not learning the convention in a wrong manner.
 
Coffee_ said:
Such that ##A_{a}\text{ } ^{b}= A^{b}\text{ } _{a}## ?

This is not at all certain and depends on the symmetry properties of ##A##. You seem to be assuming that ##A## is a matrix, but generally it is simply a tensor or a set of numbers with indices attached. Now, this may be represented with a matrix (if it only has two indices), but you generally need to be careful with your definitions when doing so.
 
If you understand the summation convention,[tex] A_{a}{ } ^{b}= g_{ac} \, g^{bd}A^{c}{ } _{d}[/tex]In matrix notation, the right-hand side is [itex]\textbf{gAg}^{-1}[/itex], where [itex]\textbf{A}[/itex] denotes the matrix whose (a,b) component is [itex]A^{a}{ } _{b}[/itex].
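As a quick numerical check (my own example, not from the thread), take the metric to be diag(1, -1), so that ##g^{-1} = g##; then the component formula above agrees with the matrix product ##\textbf{gAg}^{-1}##:

```python
# Sketch: verify A_a^b = g_{ac} g^{bd} A^c_d against the matrix product
# g A g^{-1}, for the metric g = diag(1, -1) and an arbitrary example A.

def matmul(X, Y):
    """Plain matrix multiplication on nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

g = [[1, 0], [0, -1]]      # metric g_{ab} = diag(1, -1)
ginv = [[1, 0], [0, -1]]   # inverse metric g^{ab}; equals g in this case
A = [[1, 2], [3, 4]]       # components A^c_d (row = upper, column = lower)

# Component-by-component sum: A_a^b = g_{ac} g^{bd} A^c_d
lowered = [[sum(g[a][c] * ginv[b][d] * A[c][d]
                for c in range(2) for d in range(2))
            for b in range(2)] for a in range(2)]

assert lowered == matmul(matmul(g, A), ginv)  # matches g A g^{-1}
```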
 
That definition seems to imply that it's not really always equal to the transposed matrix. So in what cases is it alright to treat A as a matrix and just say that ##A^{a} \text{ } _{b}=A_{b}\text{ } ^{a}##?
 
You must verify the conventions used in the text or paper.
In some cases, those are abstract indices (labels for slots) -- not components and not something to be "summed over".
Also, note that in tensor notation, unlike in a sequence of compatible matrix multiplications,
there is no specific ordering of factors (without changing the index structure of each factor).
That is to say, ##A^aB_{ab}C^{bc}D^{q}{}_{c}=C^{bc}A^aD^{q}{}_{c}B_{ab}## represent the same vector (call it ##V^q##).
 
Coffee_ said:
That definition seems to imply that it's not really always equal to the transposed matrix. So in what cases is it alright to treat A as a matrix and just say that ##A^{a} \text{ } _{b}=A_{b}\text{ } ^{a}##?

If you lower the index a on both sides you get [itex]A_{ab}=A_{ba}[/itex], which is a statement that A is symmetric. So your statement is true if and only if A is symmetric.

You might find it helpful to write down an asymmetric 2x2 matrix ##A^{a} \text{ } _{b}##, let the metric be diag(1,-1), and work out how ##A_{b}\text{ } ^{a}## differs from it.
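Here is a sketch of that exercise (the asymmetric matrix is my own arbitrary choice):

```python
# Compare A^a_b with A_b^a = g_{bc} g^{ad} A^c_d, element by element,
# for the metric g = diag(1, -1) and an asymmetric example matrix A.

g = [[1, 0], [0, -1]]   # g_{ab}; for diag(1, -1) the inverse g^{ab} is the same
A = [[1, 2], [3, 4]]    # components A^a_b, asymmetric on purpose

# A_b^a = g_{bc} g^{ad} A^c_d  (stored with row index b, column index a)
A_lu = [[sum(g[b][c] * g[a][d] * A[c][d]
             for c in range(2) for d in range(2))
         for a in range(2)] for b in range(2)]

# Pair up A^a_b with A_b^a for each fixed (a, b):
pairs = [(A[a][b], A_lu[b][a]) for a in range(2) for b in range(2)]
# The off-diagonal components differ, consistent with the criterion
# above: A_{ab} (here the matrix product gA) is not symmetric.
```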
 
I see. According to all of your answers, the convention we saw in class is not entirely correct, since we used ##A_{i} \text{ } ^{j}## to represent the element in the i-th row and j-th column of the transposed matrix. What if I wrote this as ##(A^{T})_{i} \text{ }^{j}##? Is it correct then, or does it have to be ##(A^{T})^{i} \text{ }_{j}##? That is, to represent the element in the i-th row and j-th column of the transpose of A.
 
Index notation is a complete and self-contained alternative to matrix notation, not an alternative representation of it. There is seldom any reason to translate back and forth between the two, and in many cases (e.g., a tensor with more than 2 indices) no such translation is even possible. The most common examples I've seen where relativists do want to revert to matrix notation is to represent either a metric tensor or a stress-energy tensor compactly by writing it in matrix form. Since these are symmetric, it doesn't matter what convention you use for matching up the indices with rows and columns. I think your instructor is just trying to connect tensors to familiar ideas about matrices.

Notations like ##(A ^{T}) _{i} \text{ } ^{j}## are not used in relativity, in my experience. The reason is that the T is not needed, because we can represent the same idea more efficiently just by moving indices around.

If you really want to get fluent at converting back and forth between the two notations, then first off there is a convention that contravariant (upper-index) vectors are column vectors, while covectors are row vectors. The grammar of index notation says that when indices are being summed over, one must be an upper index and one a lower index. This means that:

1. A linear transformation that takes a contravariant vector as an input and gives a contravariant vector as an output must have one lower index and one upper index.
2. A transformation from contravariant vectors to covariant ones must have two lower indices.
3. A transformation from covariant vectors to contravariant ones must have two upper indices.
4. A transformation from covariant vectors to covariant ones must have one upper and one lower index.
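As a small illustration of item 1 (with a made-up mixed tensor and vector), contracting one upper index against one lower index is exactly "matrix times column vector":

```python
# Sketch: a mixed tensor M^a_b acting on a contravariant vector v^b.
# The summed pair of indices (one upper, one lower) is ordinary
# matrix-on-column-vector multiplication.

M = [[2, 0], [1, 3]]   # components M^a_b (row = upper index, column = lower)
v = [5, 7]             # contravariant vector v^b, i.e. a column vector

# w^a = M^a_b v^b : sum over the repeated index b
w = [sum(M[a][b] * v[b] for b in range(2)) for a in range(2)]
```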

Let's consider a case where our matrix isn't square, say [itex]A_{ab}[/itex], where a is an index in a three-dimensional space and b is an index in a two-dimensional space. In a situation like this, when A is translated into matrix language it shouldn't even make sense to add A to its own transpose: you can't add a 3x2 matrix and a 2x3 matrix. This all comes out properly if we interpret the transpose operation as flipping the order of the indices to [itex]A_{ba}[/itex]. Then, as expected, we can't have [itex]A_{ab}+A_{ba}[/itex], because the two terms belong to different spaces.
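A sketch of that shape bookkeeping (the dimensions and entries are my own example):

```python
# A_{ab} with a running over 3 values and b over 2: flipping the index
# order gives a 2x3 array, so A_{ab} + A_{ba} is not even well-formed.

A = [[1, 2], [3, 4], [5, 6]]   # A_{ab}: 3 rows (index a), 2 columns (index b)
A_flipped = [[A[a][b] for a in range(3)] for b in range(2)]  # A_{ba}: 2x3

def shape(M):
    return (len(M), len(M[0]))

# The shapes differ, so entry-wise addition of A and A_flipped is undefined:
assert shape(A) == (3, 2) and shape(A_flipped) == (2, 3)
```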

The convention used in your instructor's eqs. 2.19-20, where they have a mixed (upper-lower) rank-2 tensor, also makes sense according to this logic. They're associating transposition with reversing the two indices. This will in general produce a tensor that belongs to a different space. In their example, they have a tensor that belongs to the upper-lower space (first index is upper, second lower), and when they transpose they get one that belongs to the lower-upper space.

Coffee_ said:
I see. According to all of your answers, the convention we saw in class is not entirely correct, since we used ##A_{i} \text{ } ^{j}## to represent the element in the i-th row and j-th column of the transposed matrix.

I'm not perceiving the same problem that you are. As far as I can tell, everything is consistent.

Maybe what's bothering you is that you want there to be a convention that will tell you whether a given NxN matrix should be associated with an upper-lower tensor like ##A^{a} \text{ } _{b} ##, or a lower-upper one like ##A_{b} \text{ } ^{a} ##. There can't be any such rule. When we write something that looks like a square matrix, and then write its transpose, it still looks like a square matrix. Looking at the two matrices, there would be no way to tell which should be ##A^{a} \text{ } _{b} ## and which should be ##A_{b} \text{ } ^{a} ##. This is just an ambiguity in matrix notation, and it can't be resolved without adding some external information. If I just write you an NxN matrix, you can't tell what space it belongs to. It could belong to any of the following spaces: upper-upper, upper-lower, lower-upper, or lower-lower.
 
  • #10
First of all, thanks for the very elaborate answer.

Secondly, I'd like to focus on one paragraph.

bcrowell said:
The convention used in your instructor's eqs. 2.19-20, where they have a mixed (upper-lower) rank-2 tensor, also makes sense according to this logic. They're associating transposition with reversing the two indices. This will in general produce a tensor that belongs to a different space. In their example, they have a tensor that belongs to the upper-lower space (first index is upper, second lower), and when they transpose they get one that belongs to the lower-upper space.

If the convention used here is the correct way to represent the transpose, then clearly my intuition in my original post was correct that ##A_{i} \text{ } ^{j}=A^{j} \text{ } _{i}##. Element-wise, those two should be equal: the first term is the element in the i-th row and j-th column of the matrix A, which should equal the element in the j-th row and i-th column of the transposed matrix, which is exactly the second term, no? The "tensors/matrices" aren't equal, but for fixed i and j, for example 2 and 3, this equality must hold, it seems. At least, that is if I understood the convention implied by the author here.

If the above is correct, the author seems to contradict his own convention in eq. 2.22 on page 13.
 
  • #11
Coffee_ said:
If the convention used here is the correct way to represent the transpose, then clearly my intuition in my original post was correct that ##A_{i} \text{ } ^{j}=A^{j} \text{ } _{i}##.
Not true, as proved in #6.
Coffee_ said:
Element-wise, those two should be equal: the first term is the element in the i-th row and j-th column of the matrix A, which should equal the element in the j-th row and i-th column of the transposed matrix, which is exactly the second term, no?

No. As explained in #9, there cannot be any fixed rule for translating back and forth between matrix notation and tensor notation. The fact that you've reached a contradiction here is an indication that you were incorrect in assuming the existence of such a rule.
 
  • #12
Coffee_ said:
If the convention used here is the correct way to represent the transpose, then clearly my intuition in my original post was correct that ##A_{i} \text{ } ^{j}=A^{j} \text{ } _{i}##.
Using your author's notation, (2.20) should be interpreted to mean$$(A^T)_{i} {}^{j}=A^{j} {}_{i} \, .$$What you wrote would be true only if A were symmetric.
 
  • #13
DrGreg said:
Using your author's notation, (2.20) should be interpreted to mean$$(A^T)_{i} {}^{j}=A^{j} {}_{i} \, .$$What you wrote would be true only if A were symmetric.

Oh thanks this really clears up a lot. Can't believe how long it took me to get this simple fact. I think I understand it now.
 
