
Question about tensor notation convention as used in SR/GR

  1. May 3, 2015 #1
    When writing

    ##A_{a}{}^{b}## one means "the element in the a-th row and b-th column of the TRANSPOSE of A", right?

    Such that ##A_{a}{}^{b} = A^{b}{}_{a}##?

    I would just like confirmation so that I'm not learning the convention the wrong way.
     
  3. May 3, 2015 #2

    Orodruin

    Staff Emeritus
    Science Advisor
    Homework Helper
    Gold Member

    This is not at all certain and depends on the symmetry properties of ##A##. You seem to be assuming that ##A## is a matrix, but generally it is simply a tensor or a set of numbers with indices attached. Now, this may be represented with a matrix (if it only has two indices), but you generally need to be careful with your definitions when doing so.
     
  4. May 3, 2015 #3

    DrGreg

    Science Advisor
    Gold Member

    If you understand the summation convention,[tex]
    A_{a}{}^{b} = g_{ac} \, g^{bd} A^{c}{}_{d}
    [/tex]In matrix notation, the right-hand side is [itex]\textbf{gAg}^{-1}[/itex], where [itex]\textbf{A}[/itex] denotes the matrix whose (a,b) component is [itex]A^{a}{}_{b}[/itex].
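    As a concrete check, here is a minimal numpy sketch, assuming a two-dimensional metric diag(1, -1) and an arbitrarily chosen set of components ##A^{a}{}_{b}##:
    [code]
    import numpy as np

    g = np.diag([1.0, -1.0])        # metric g_{ab}
    g_inv = np.linalg.inv(g)        # inverse metric g^{ab} (equal to g here)
    A = np.array([[1.0, 2.0],
                  [3.0, 4.0]])      # components A^a_b (row = a, column = b)

    # A_a{}^b = g_{ac} g^{bd} A^c{}_d; with the first index as the row index,
    # this is exactly the matrix product g A g^{-1}.
    A_low_up = g @ A @ g_inv
    print(A_low_up)                 # [[ 1. -2.]
                                    #  [-3.  4.]]
    [/code]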
     
  5. May 3, 2015 #4
    That definition seems to imply that it's not always equal to the transposed matrix. So in what cases is it all right to treat A as a matrix and just say that ##A^{a}{}_{b} = A_{b}{}^{a}##?
     
  6. May 3, 2015 #5

    robphy

    Science Advisor
    Homework Helper
    Gold Member

    You must verify the conventions used in the text or paper.
    In some cases, those are abstract indices (labels for slots) -- not components and not something to be "summed over".
    Also, note that in tensor notation, unlike in a sequence of compatible matrix multiplications,
    the factors can be written in any order without changing the meaning, as long as the index structure of each factor is kept unchanged.
    That is to say, ##A^aB_{ab}C^{bc}D^{q}{}_{c}=C^{bc}A^aD^{q}{}_{c}B_{ab}##; both sides represent the same vector (call it ##V^q##).
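    One way to see this concretely is a quick numpy einsum sketch (random components, purely illustrative):
    [code]
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.normal(size=4)          # A^a
    B = rng.normal(size=(4, 4))     # B_{ab}
    C = rng.normal(size=(4, 4))     # C^{bc}
    D = rng.normal(size=(4, 4))     # D^q{}_c

    # The index structure alone fixes the contraction; the order in which
    # the factors are listed is irrelevant.
    V1 = np.einsum('a,ab,bc,qc->q', A, B, C, D)
    V2 = np.einsum('bc,a,qc,ab->q', C, A, D, B)
    print(np.allclose(V1, V2))      # True
    [/code]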
     
  7. May 3, 2015 #6

    bcrowell

    Staff Emeritus
    Science Advisor
    Gold Member

    If you lower the index a on both sides of ##A^{a}{}_{b} = A_{b}{}^{a}##, you get [itex]A_{ab}=A_{ba}[/itex], which is the statement that A is symmetric. So your statement is true if and only if A is symmetric.

    You might find it helpful to write down an asymmetric 2x2 matrix ##A^{a}{}_{b}##, let the metric be diag(1,-1), and work out how ##A_{b}{}^{a}## differs from it.
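    Worked through numerically, as a sketch (the asymmetric matrix is chosen arbitrarily):
    [code]
    import numpy as np

    g = np.diag([1.0, -1.0])        # metric diag(1,-1), equal to its own inverse
    A = np.array([[1.0, 2.0],
                  [3.0, 4.0]])      # components A^a_b (row = a, column = b)

    # A_b{}^a = g_{bc} A^c{}_d g^{da}; stored with b as the row index,
    # this is the matrix product g A g.
    M = g @ A @ g
    print(M)                        # [[ 1. -2.]
                                    #  [-3.  4.]]
    print(A.T)                      # [[ 1.  3.]
                                    #  [ 2.  4.]]  -- not the same matrix

    # The claim A^a_b = A_b{}^a amounts to M == A.T, which holds exactly
    # when A_{ab} = (g A) is symmetric; here it is not.
    A_lower = g @ A
    print(np.allclose(M, A.T), np.allclose(A_lower, A_lower.T))   # False False
    [/code]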
     
  8. May 4, 2015 #7
    I see. According to your answers, the convention we saw in class is not entirely correct, since we used ##A_{i}{}^{j}## to represent the element in the i-th row and j-th column of the transposed matrix. What if I wrote this as ##(A^{T})_{i}{}^{j}## instead, would it be correct then, or does it have to be ##(A^{T})^{i}{}_{j}##? That is, to represent the i-th row and j-th column of the transpose of A.
     
  10. May 4, 2015 #9

    bcrowell

    Staff Emeritus
    Science Advisor
    Gold Member

    Index notation is a complete and self-contained alternative to matrix notation, not an alternative representation of it. There is seldom any reason to translate back and forth between the two, and in many cases (e.g., a tensor with more than 2 indices) no such translation is even possible. The most common case I've seen where relativists do want to revert to matrix notation is representing either a metric tensor or a stress-energy tensor compactly by writing it in matrix form. Since these are symmetric, it doesn't matter what convention you use for matching up the indices with rows and columns. I think your instructor is just trying to connect tensors to familiar ideas about matrices.

    Notations like ##(A^{T})_{i}{}^{j}## are not used in relativity, in my experience. The reason is that the T is not needed, because we can represent the same idea more efficiently just by moving indices around.

    If you really want to get fluent at converting back and forth between the two notations, then first of all there is a convention that contravariant (upper-index) vectors are column vectors, while covectors are row vectors. The grammar of index notation says that when indices are summed over, one must be an upper index and one a lower index. This means the following (a short numerical sketch follows the list):

    1. A linear transformation that takes a contravariant vector as an input and gives a contravariant vector as an output must have one lower index and one upper index.
    2. A transformation from contravariant vectors to covariant ones must have two lower indices.
    3. A transformation from covariant vectors to contravariant ones must have two upper indices.
    4. A transformation from covariant vectors to covariant ones must have one upper and one lower index.
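    A minimal numpy sketch of rules 1-3, assuming a flat diag(1,-1,-1,-1) metric and arbitrary components:
    [code]
    import numpy as np

    g = np.diag([1.0, -1.0, -1.0, -1.0])    # g_{ab}: lowers an index (rule 2)
    g_inv = np.linalg.inv(g)                # g^{ab}: raises an index (rule 3)

    v = np.array([1.0, 2.0, 0.0, -1.0])     # contravariant components v^a (column vector)
    w = g @ v                               # covector components w_a = g_{ab} v^b (row vector)

    T = np.eye(4)                           # T^a{}_b: takes vectors to vectors (rule 1)
    print(T @ v)                            # T^a{}_b v^b is again contravariant
    print(g_inv @ w)                        # raising the index of w_a recovers v^a
    [/code]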

    Let's consider a case where our matrix isn't square, say [itex]A_{ab}[/itex], where a is an index in a three-dimensional space and b is an index in a two-dimensional space. In a situation like this, we want the translation into matrix language to preserve the fact that it doesn't make sense to add A to its own transpose -- you can't add a 3x2 matrix and a 2x3 matrix. This all comes out properly if we interpret the transpose operation as flipping the order of the indices to [itex]A_{ba}[/itex]. Then, as expected, we can't form [itex]A_{ab}+A_{ba}[/itex], because the two terms belong to different spaces.
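    In matrix terms the mismatch shows up immediately; a tiny numpy sketch:
    [code]
    import numpy as np

    A = np.ones((3, 2))     # A_{ab}: a runs over 3 values, b over 2
    try:
        A + A.T             # A_{ab} + A_{ba}: shapes (3, 2) and (2, 3)
    except ValueError as err:
        print(err)          # operands could not be broadcast together ...
    [/code]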

    The convention used in your instructor's eqs. 2.19-20, where they have a mixed (upper-lower) rank-2 tensor, also makes sense according to this logic. They're associating transposition with reversing the two indices. This will in general produce a tensor that belongs to a different space. In their example, they have a tensor that belongs to the upper-lower space (first index is upper, second lower), and when they transpose they get one that belongs to the lower-upper space.

    I'm not perceiving the same problem that you are. As far as I can tell, everything is consistent.

    Maybe what's bothering you is that you want there to be a convention that will tell you whether a given NxN matrix should be associated with an upper-lower tensor like ##A^{a}{}_{b}##, or a lower-upper one like ##A_{b}{}^{a}##. There can't be any such rule. When we write something that looks like a square matrix, and then write its transpose, it still looks like a square matrix. Looking at the two matrices, there would be no way to tell which should be ##A^{a}{}_{b}## and which should be ##A_{b}{}^{a}##. This is just an ambiguity in matrix notation, and it can't be resolved without adding some external information. If I just write you an NxN matrix, you can't tell what space it belongs to. It could belong to any of the following spaces: upper-upper, upper-lower, lower-upper, or lower-lower.
     
    Last edited: May 4, 2015
  11. May 5, 2015 #10
    First of all, thanks for the very elaborate answer.

    Secondly I'd like to go in on one paragraph.

    If the convention used here is the correct way to represent the transpose, then clearly my intuition in my original post was correct that ##A_{i}{}^{j} = A^{j}{}_{i}##. Element-wise, those two should be equal, because the first term is the element in the i-th row and j-th column of the matrix A, which should equal the element in the j-th row and i-th column of the transposed matrix, which is exactly the second term, no? The "tensors/matrices" aren't equal, but for a fixed i and j, for example 2 and 3, this equality must hold, it seems. At least, that is, if I understood the convention implied by the author here.

    If the above is correct, the author seems to contradict his own convention in eq. 2.22 on page 13.
     
  12. May 5, 2015 #11

    bcrowell

    Staff Emeritus
    Science Advisor
    Gold Member

    Not true, as proved in #6.


    No. As explained in #9, there cannot be any fixed rule for translating back and forth between matrix notation and tensor notation. The fact that you've reached a contradiction here is an indication that you were incorrect in assuming the existence of such a rule.
     
  13. May 5, 2015 #12

    DrGreg

    Science Advisor
    Gold Member

    Using your author's notation, (2.20) should be interpreted to mean$$(A^T)_{i}{}^{j} = A^{j}{}_{i} \, .$$What you wrote would be true only if A were symmetric.
     
  14. May 5, 2015 #13
    Oh, thanks, this really clears up a lot. I can't believe how long it took me to get this simple fact. I think I understand it now.
     