Index notation is a complete and self-contained alternative to matrix notation, not an alternative representation of it. There is seldom any reason to translate back and forth between the two, and in many cases (e.g., a tensor with more than 2 indices) no such translation is even possible. The most common example I've seen where relativists do want to revert to matrix notation is to represent either a metric tensor or a stress-energy tensor compactly by writing it in matrix form. Since these are symmetric, it doesn't matter what convention you use for matching up the indices with rows and columns. I think your instructor is just trying to connect tensors to familiar ideas about matrices.
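For example, the flat-spacetime metric is often displayed this way (I'm assuming the ##-+++## signature here; your course may use the opposite one):
$$\eta_{\mu\nu} \;\leftrightarrow\; \begin{pmatrix} -1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$$
Because ##\eta_{\mu\nu} = \eta_{\nu\mu}##, it makes no difference whether ##\mu## labels the rows or the columns.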
Notations like ##(A ^{T}) _{i} \text{ } ^{j}## are not used in relativity, in my experience. The reason is that the T is not needed, because we can represent the same idea more efficiently just by moving indices around.
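For instance, assuming the usual convention that the first index labels rows and the second labels columns, the transpose is already expressed by writing the two indices in the opposite order:
$$(A^{T})_i{}^j = A^j{}_i ,$$
so the ##T## never has to appear explicitly.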
If you really want to get fluent at converting back and forth between the two notations, then the first thing to know is the convention that contravariant (upper-index) vectors are column vectors, while covectors are row vectors. The grammar of index notation says that when indices are summed over, one must be an upper index and one a lower index. This means that (there's a concrete sketch after the list):
1. A linear transformation that takes a contravariant vector as an input and gives a contravariant vector as an output must have one lower index and one upper index.
2. A transformation from contravariant vectors to covariant ones must have two lower indices.
3. A transformation from covariant vectors to contravariant ones must have two upper indices.
4. A transformation from covariant vectors to covariant ones must have one upper and one lower index.
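Here's a concrete sketch of the dictionary in three dimensions (my own illustration, not anything from your notes):
$$v^a \;\leftrightarrow\; \begin{pmatrix} v^1 \\ v^2 \\ v^3 \end{pmatrix}, \qquad \omega_a \;\leftrightarrow\; \begin{pmatrix} \omega_1 & \omega_2 & \omega_3 \end{pmatrix}, \qquad \omega_a v^a \;\leftrightarrow\; \begin{pmatrix} \omega_1 & \omega_2 & \omega_3 \end{pmatrix} \begin{pmatrix} v^1 \\ v^2 \\ v^3 \end{pmatrix}.$$
Case 1 is then just matrix-times-column multiplication, ##w^a = A^a{}_b v^b##, and case 2 is what the metric does when it lowers an index, ##v_a = g_{ab} v^b##.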
Let's consider a case where our matrix isn't square, say ##A_{ab}##, where ##a## is an index in a three-dimensional space and ##b## is an index in a two-dimensional space. In a situation like this, we want the index notation to reproduce the fact that, when ##A## is translated into matrix language, it makes no sense to add ##A## to its own transpose -- you can't add a 3×2 matrix and a 2×3 one. This all comes out properly if we interpret the transpose operation as flipping the order of the indices to ##A_{ba}##. Then, as expected, we can't have ##A_{ab}+A_{ba}##, because the two terms belong to different spaces.
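Concretely, if ##a## runs from 1 to 3 and ##b## from 1 to 2, and we let the first index label the rows (again, just the usual convention), then
$$A_{ab} \;\leftrightarrow\; \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \\ A_{31} & A_{32} \end{pmatrix}, \qquad A_{ba} \;\leftrightarrow\; \begin{pmatrix} A_{11} & A_{21} & A_{31} \\ A_{12} & A_{22} & A_{32} \end{pmatrix},$$
and there is no way to add the 3×2 array to the 2×3 one.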
The convention used in your instructor's eqs. 2.19-20, where they have a mixed (upper-lower) rank-2 tensor, also makes sense according to this logic. They're associating transposition with reversing the two indices. This will in general produce a tensor that belongs to a different space. In their example, they have a tensor that belongs to the upper-lower space (first index is upper, second lower), and when they transpose they get one that belongs to the lower-upper space.
Coffee_ said:
I see, according to all of your answers the convention we saw in class is not entirely correct since we used ##A_{i} \text{ } ^{j} ## to represent the i-th row and the j-th column of the transposed matrix.
I'm not perceiving the same problem that you are. As far as I can tell, everything is consistent.
Maybe what's bothering you is that you want there to be a convention that will tell you whether a given ##N \times N## matrix should be associated with an upper-lower tensor like ##A^{a} \text{ } _{b} ##, or a lower-upper one like ##A_{b} \text{ } ^{a} ##. There can't be any such rule. When we write something that looks like a square matrix, and then write its transpose, it still looks like a square matrix. Looking at the two matrices, there would be no way to tell which should be ##A^{a} \text{ } _{b} ## and which should be ##A_{b} \text{ } ^{a} ##. This is just an ambiguity in matrix notation, and it can't be resolved without adding some external information. If I just write you an ##N \times N## matrix, you can't tell what space it belongs to. It could belong to any of the following spaces: upper-upper, upper-lower, lower-upper, or lower-lower.
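For instance, the same square array of numbers could serve as the components of any of ##A^{ab}##, ##A^{a} \text{ } _{b}##, ##A_{a} \text{ } ^{b}##, or ##A_{ab}##, and the difference only shows up once you contract with something:
$$A^a{}_b v^b \quad\text{(a vector)}, \qquad A_{ab} v^b \quad\text{(a covector)}.$$
Both contractions are grammatical, but their outputs live in different spaces -- and that's exactly the information a bare matrix doesn't carry.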