# Orthogonal Transformations

1. Nov 10, 2009

### Rasalhague

In Chapter 1 of Blandford & Thorne, Applications of Classical Physics (Version 0801.1.K), section 1.7.1, "Euclidean 3-space: Orthogonal Transformations", do equations (1.43) at the beginning of the section, which give respectively the expansion of the old basis vectors in the new basis and the expansion of the new basis vectors in the old basis,

$$\mathbf{e}_{i} = \mathbf{e}_{\overline{p}}R_{\overline{p}i}, \quad \mathbf{e}_{\overline{p}} = \mathbf{e}_{i}R_{i\overline{p}}$$

anticipate the result to be shown, namely that the inverse of an orthogonal transformation matrix is its transpose, or should I read $$\left[ R_{\overline{p}i}\right], \left[ R_{i\overline{p}}\right]$$ as denoting two different matrices, the one not necessarily the transpose of the other (although in the instance of orthogonal transformations, they happen to be)?

http://www.pma.caltech.edu/Courses/ph136/yr2008/

That's to say: in general, if I find two matrices denoted $$\left[ R_{\overline{p}i}\right], \left[ R_{i\overline{p}}\right]$$, should I interpret the reversal of the indices as indicating that each matrix is the transpose of the other? Or is the reversal of indices equivalent to just using a completely different symbol to denote two matrices, e.g. $$S$$ and $$T$$?

2. Nov 10, 2009

### haushofer

If I have a specific matrix T which I represent as

$$T_{ab}$$
then the matrix

$$T_{ba}$$

represents its transpose. So you shouldn't suddenly assign a completely different matrix to this transpose. That would be like saying that we have a column vector $X$, and that $X^T$ represents a completely different vector instead of its row version.

These rotation matrices of yours preserve the Euclidean line element

$$dl^2 = \delta_{ij}dx^i dx^j$$

So you know that for $x \to x' = Rx$ we have $dl^2 = dl'^2$.

$$dl'^2 = \delta_{ij} R_{ik}dx^k R_{jl}dx^l$$

So

$$\delta_{ij} R_{ik} R_{jl} = R_{ik}R_{il} = \delta_{kl}$$

which gives the result.
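For anyone who wants to see this concretely, here is a quick numerical sketch (my own illustration, not from the book or the post; the rotation angle is arbitrary) checking $\delta_{ij} R_{ik} R_{jl} = \delta_{kl}$ for a rotation about the z-axis:

```python
import numpy as np

# My own numerical sketch: a rotation about the z-axis by an
# arbitrary angle, checked against delta_ij R_ik R_jl = delta_kl.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

delta = np.eye(3)
lhs = np.einsum('ij,ik,jl->kl', delta, R, R)  # delta_ij R_ik R_jl
assert np.allclose(lhs, np.eye(3))            # = delta_kl, i.e. R^T R = I

# Equivalently, the line element dl^2 is preserved:
dx = np.array([1.0, 2.0, 3.0])
assert np.isclose((R @ dx) @ (R @ dx), dx @ dx)
```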

3. Nov 10, 2009

### Rasalhague

Okay, but since the fact that the inverse is the transpose was the very thing they're about to establish, it seemed rather like saying "now we're going to prove that A = A" (using the same symbol for two quantities whose identity has yet to be shown). That made me wonder whether there was something I was missing, or whether they really were just anticipating the result, using the notation $$\left[R_{i \overline{p}} \right]$$ and $$\left[R_{\overline{p} i} \right]$$ to state, before proving it, that $$\left[R_{i \overline{p}} \right]^{T} = \left[R_{i \overline{p}} \right]^{-1}$$.

Later they write, "This says that the transpose of $$\left[R_{\overline{p} i} \right]$$ is its inverse--which we have already denoted by $$\left[R_{i \overline{p}} \right]$$". I wondered whether by "which" they meant the fact that they'd already indicated the conclusion in the question in this way (if that's what they did), or whether they just meant that $$\left[R_{i \overline{p}} \right]$$ happened to be the symbol they'd used.

Anyway, thanks for your help, haushofer, and for the alternative proof. Blandford and Thorne use all-subscript indices in the sections of their book dealing with Cartesian frames for Euclidean space, but if we were to write this with the full summation convention (indices on different levels summed over), would the following be a correct way to rewrite it? (I've used letters with indices in square brackets, and letters with no indices, to represent matrices, treating objects with a single up index as columns by default.)

$$\left(dl \right)^{2} = \delta_{ij} dx^{i} dx^{j} = \delta_{ij} R^{i}_{k} dx^{k} R^{j}_{l} dx^{l} = \delta_{ij} R^{i}_{k} R^{j}_{l} dx^{k} dx^{l}$$

$$= dx_{i} dx^{i} = R_{ik} R^{i}_{l} dx^{k} dx^{l}$$

$$= \left[R^{T} R \right]_{kl} dx^{k} dx^{l}$$

$$\Rightarrow \left[dx^{i} \right]^{T} \left[dx^{i} \right] = \left[dx^{i} \right]^{T} \left[R^{T} R \right] \left[dx^{i} \right]$$

$$\Rightarrow I = R^{T} R$$

$$\Rightarrow R^{T} = R^{-1}$$

Last edited: Nov 10, 2009
4. Nov 11, 2009

### Rasalhague

In the next section, 1.7.2, they present equations 1.46

$$\mathbf{e}_{\alpha} = \mathbf{e}_{\overline \mu} L^{\overline \mu}_{\; \alpha}, \quad \mathbf{e}_{\overline \mu} = \mathbf{e}_{\alpha} L^{\alpha}_{\; \overline \mu}$$

for a Lorentz transformation. (Can a Lorentz transformation be defined as any transformation in Minkowski space that preserves orthonormality of basis; and can it be defined equivalently as a transformation that preserves spacetime intervals?) Here too they denote the direct and inverse transformations both with the same letter, L, and reversed indices. But I don't think the inverse of a Lorentz transformation is necessarily equal to its transpose, is it? For example, if

$$L = \left[ \begin{matrix} \gamma & -\gamma \beta & 0 & 0 \\ -\gamma \beta & \gamma & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{matrix} \right]$$

then

$$L^{-1} = \left[ \begin{matrix} \gamma & \gamma \beta & 0 & 0 \\ \gamma \beta & \gamma & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{matrix} \right]$$

which is not the transpose of L.
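A quick numerical check of this (my own sketch; the value of $\beta$ is arbitrary) confirms that the inverse of this boost just flips the sign of $\beta$, and is not the transpose:

```python
import numpy as np

# Sketch: the boost L from the post, with an assumed beta, showing
# that its inverse flips the sign of beta and is not its transpose.
beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)
L = np.array([[ gamma,      -gamma*beta, 0.0, 0.0],
              [-gamma*beta,  gamma,      0.0, 0.0],
              [ 0.0, 0.0, 1.0, 0.0],
              [ 0.0, 0.0, 0.0, 1.0]])

Linv = np.linalg.inv(L)
L_flip = L.copy()
L_flip[0, 1] = -L_flip[0, 1]   # reverse the sign of the off-diagonal
L_flip[1, 0] = -L_flip[1, 0]   # boost entries, i.e. beta -> -beta
assert np.allclose(Linv, L_flip)

# This particular boost is symmetric, so L^T = L, which is not L^-1:
assert np.allclose(L.T, L)
assert not np.allclose(Linv, L.T)
```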

5. Nov 11, 2009

### Rasalhague

Come to think of it, the transformations labelled R in 1.43 appear in different equations, and one of the indices is summed over in each case, so I'm thinking we can't conclude, just from the way they've written them there, that $$\left[R_{\overline{p} i}\right] = \left[R_{i \overline{p}}\right]^{T}$$. The only givens at that stage are that the R in the first equation is the inverse of the R in the second equation, and that they preserve the orthonormality of the basis. Unless I'm mistaken, it's only later in the derivation, when the same R appears twice in a single equation with indices needing to be reversed to make a matrix multiplication, that we can confidently represent one R as the transpose of the other.

Similarly with Lorentz transformations, they use the same symbol L in two different equations, 1.46, to represent two different transformations. Here also, the two transformations denoted L are inverses of each other. But in this case, the two transformations denoted L aren't transposes of each other, even though the only difference in how they're written in 1.46 is that the indices are reversed.

Last edited: Nov 12, 2009
6. Nov 12, 2009

### Rasalhague

If matrix multiplication would normally be represented in index notation as

$$C^{i}_{j} = A^{i}_{\;k} B^{k}_{\;j}$$

I'd expect

$$\delta^{i}_{j} = T^{i}_{\;k} T^{k}_{\;j}$$

to indicate that T = T-1. So it puzzles me to see in each of equations 1.44b and 1.47a here http://www.pma.caltech.edu/Courses/ph136/yr2008/0801.1.K.pdf an identical symbol, R and L respectively, apparently standing for two different functions in the same equation, e.g.

$$\delta_{\overline{p} \: \overline{q}} = R_{\overline{p}i} R_{i \overline{q}}$$

$$\delta^{\overline{\mu}}_{\; \overline{\nu}} = L^{\overline{\mu}}_{\; \alpha} L^{\alpha}_{\; \overline{\nu}}$$

In this context, the inverse of R (an arbitrary rotation) isn't necessarily equal to R, and the inverse of L (an arbitrary Lorentz transformation) is neither necessarily equal to L nor to its transpose. Does this use of symbols strike anyone else as odd or ambiguous? If not, can anyone see what it is about the convention that I haven't understood yet?

Equation 1.44c

$$R_{i \overline{p}} = R_{\overline{p} i}$$

is supposed to express the idea that the transpose of $$R_{i \overline{p}}$$ is equal to its inverse; but to me it seems to be saying that R is equal to its transpose (not the intended meaning).

7. Nov 15, 2009

### Rasalhague

In section 1.7.2, they say that "$$L^{\overline{\mu}}_{\; \alpha}$$ and $$L^{\alpha}_{\; \overline{\mu}}$$ are elements of two different transformation matrices" which "must be the inverse of each other":

$$L^{\overline{\mu}}_{\; \alpha} L^{\alpha}_{\; \overline{\nu}} = \delta^{\overline{\mu}}_{\; \overline{\nu}}$$

Is this use of the same symbol for different matrices ambiguous? If not, how can we tell when one L is not equal to another L?

8. Nov 21, 2009

### Rasalhague

Thanks to CompuChip for posting the link to Sean Carroll's Lecture Notes on General Relativity on another thread.

http://preposterousuniverse.com/grnotes/

I'm really enjoying the first chapter! On page 10 it has something on this question:

"But the inverse of a Lorentz transformation from the unprimed to the primed coordinates is also a Lorentz transformation, this time from the primed to the unprimed systems. We will therefore introduce a somewhat subtle notation by using the same symbol for both matrices, just with primed and unprimed indices adjusted."

$$\left(\Lambda^{-1} \right)^{\nu'}_{\quad \mu} = \Lambda_{\nu'}^{\quad \mu}$$

So apparently it's the relative height of the primed versus unprimed indices that indicates which matrix is being used: one generalised orthogonal transformation or its inverse. I wonder if that's a general rule (something essential to the whole index gymnastics), or if it's a convention limited to a particular kind of matrix.

The way I was able to see that a rotation matrix was the inverse of its transpose -- the way I wrote down the proof -- relied on treating the swapping of indices as equivalent to transposing a matrix, but this convention seems to complicate that a bit...

9. Nov 21, 2009

### haushofer

Hi, I'm sorry that I haven't replied for so long :) For Lorentz transformations you know that

$$\Lambda^{\mu}_{\ \alpha} \Lambda^{\nu}_{\ \beta}\eta_{\mu\nu} = \eta_{\alpha\beta}$$

The Lorentz transformations keep the Minkowski metric invariant, just as the SO(3) rotations keep the Kronecker delta (the metric of $\mathbb{R}^3$) invariant. But you also know that

$$\eta_{\mu}^{\sigma} = \eta_{\mu\nu}\eta^{\nu\sigma} =\delta_{\mu}^{\sigma}$$

This makes sense: a metric provides you with an isomorphism between the components of vectors and the components of dual vectors. So dualizing a component twice should give you the same component again, which is the statement of the above. If we apply this to the first formula we get

$$\Lambda^{\mu}_{\ \alpha} \Lambda^{\nu\beta}\eta_{\mu\nu} = \eta_{\alpha}^{\beta}=\delta_{\alpha}^{\beta} \rightarrow \Lambda^{\mu}_{\ \alpha}\Lambda_{\mu}^{\ \beta} = \eta_{\alpha}^{\beta} = \delta_{\alpha}^{\beta}$$

So these kinds of considerations warn you that you should be careful about placing indices on LTs; the above shows that

$$\Lambda^{\mu}_{\ \alpha} = [\Lambda_{\alpha}^{\ \mu}]^{-1}$$
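As a sanity check, here is a small numerical sketch of that conclusion (my own illustration; the boost and the signature convention $\eta = \mathrm{diag}(-1,1,1,1)$ are assumptions): contracting $\Lambda^{\mu}_{\ \alpha}$ with the index-shuffled $\Lambda_{\mu}^{\ \beta}$ does give the Kronecker delta.

```python
import numpy as np

# My numerical sketch of haushofer's conclusion; the boost and the
# signature eta = diag(-1, 1, 1, 1) are assumed.
eta = np.diag([-1.0, 1.0, 1.0, 1.0])
beta = 0.3
gamma = 1.0 / np.sqrt(1.0 - beta**2)
Lam = np.eye(4)
Lam[:2, :2] = [[gamma, -gamma*beta], [-gamma*beta, gamma]]

# Lambda preserves the metric: Lambda^mu_a Lambda^nu_b eta_munu = eta_ab
assert np.allclose(np.einsum('ma,nb,mn->ab', Lam, Lam, eta), eta)

# Move the indices of one factor: Lambda_m^b = eta_mn eta^br Lambda^n_r
Lam_mixed = np.einsum('mn,br,nr->mb', eta, np.linalg.inv(eta), Lam)

# Contracting the two gives the Kronecker delta, i.e. they are inverses:
assert np.allclose(np.einsum('ma,mb->ab', Lam, Lam_mixed), np.eye(4))
```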

10. Nov 21, 2009

### Rasalhague

Thanks for your help, Haushofer. Much appreciated, although I'm still struggling to understand... I think part of the difficulty is that no one source explains everything, so I'm having to compare what different authors say, but they often use slightly different systems with slightly different rules, and I don't always know how much of one author's system carries over to another's. For example, Sean Carroll seems to use a system in which it's significant whether an upper index appears to the left or to the right of a lower index (I'm still not quite sure what it signifies...), whereas others consistently put upper indices left of lower indices, or else say that for them it makes no difference whether an upper index appears left or right of a lower index. Or maybe Carroll doesn't intend the left-right order of upper and lower indices to have any significance either; perhaps that's what he means by "the important thing is where the primes go". But then your equations don't use any primes, so maybe you're using a different convention in which something else denotes what Carroll uses primes for.

One writer gives the following rules for indices:

1) Free indices are written on the same level (upper or lower) on both sides of the equation. Each free index has only one entry on each side of the equation.

2) Each summation index should have exactly two entries: one upper entry and one lower entry.

3) For any double indexed array with indices on the same level (both upper or lower), the first index is the row number, while the second denotes the column. If indices are on different levels (one upper and one lower), then the upper index is a row number, while the lower one denotes the column.

Given rule 3, your final equation

$$\Lambda^{\mu}_{\;\alpha} = \left[\Lambda_{\alpha}^{\;\mu} \right]^{-1}$$

seems to be saying that lambda equals its own inverse, which I don't think was your intended meaning as it isn't true of all Lorentz transformations. I wondered if you, and Sean Carroll, might be following a different rule whereby the leftmost index indicates the row, regardless of level (upper or lower); but then the equation seems to be saying that a Lorentz transformation would be the inverse of its transpose, but that isn't the case in general, is it? (Only for rotations.) The other alternative I thought of was that the above equation relies on a convention where lambda can represent two different transformations in the same equation, as with Blandford & Thorne and Carroll, but I'm still deeply confused about how that works in practice.

Carroll offers the equation:

$$\left(\Lambda^{-1} \right)^{\nu'}_{\; \mu} = \Lambda_{\nu'}^{\; \mu}$$

which breaks rule 1 from the list above. Does rule 1 not apply at all in Carroll's system, or is this equation an exception? And if the latter, is it the only exception or are there others; what would be a general statement of rule 1?

In Euclidean space with a Cartesian frame,

$$\left[A_{ij} \right]^{T} = \left[A_{ji} \right].$$

In Minkowski space with a Lorentz frame,

$$\left[A_{\alpha\beta} \right]^{T} = \left[A_{\beta\alpha} \right]$$ and $$\left[A^{\alpha\beta} \right]^{T} = \left[A^{\beta\alpha} \right],$$

Right? But

$$A^{\alpha}_{\;\beta} = \eta^{\alpha\mu} A_{\mu\beta}.$$

Therefore $$\left[A^{\alpha}_{\;\beta} \right]^{T} = \left[ A_{\mu\beta} \right]^{T} \left[\eta^{\alpha\mu} \right]^{T} = \left[A_{\beta\mu} \right] \left[\eta^{\mu\alpha} \right] = \left[A_{\beta}^{\;\alpha} \right].$$

So by rule three every transformation matrix is symmetric! But that's not the case, is it? Here, by A, I just mean any transformation matrix.

Here's how far I've got with exploring the relationship of a Lorentz transformation to its inverse and transpose. I'm also trying to understand how to convert back and forth between matrix notation and index notation, so any guidance or criticism on that score is welcome too! It seems to begin okay, but obviously I've got mixed up somewhere.

$$\mathbf{e}_{\alpha} \cdot \mathbf{e}_{\beta} = \mathbf{e}_{\mu'} \Lambda^{\mu'}_{\;\alpha} \cdot \mathbf{e}_{\nu'} \Lambda^{\nu'}_{\;\beta}$$

$$\eta_{\alpha\beta} = \Lambda^{\mu'}_{\;\alpha} \Lambda^{\nu'}_{\;\beta} \eta_{\mu'\nu'}$$

Matrix notation: $$\eta = \Lambda^{T} \eta \Lambda$$

$$\eta_{\alpha\beta} \left(\Lambda^{-1} \right)^{\beta}_{\;\rho} = \Lambda^{\mu'}_{\;\alpha} \eta_{\mu'\nu'} \Lambda^{\nu'}_{\;\beta} \left(\Lambda^{-1} \right)^{\beta}_{\;\rho}$$

$$\eta_{\alpha\beta} \left(\Lambda^{-1} \right)^{\beta}_{\;\rho} = \Lambda^{\mu'}_{\;\alpha} \eta_{\mu'\nu'} \delta^{\nu'}_{\;\rho}$$

$$\eta_{\alpha\beta} \left(\Lambda^{-1} \right)^{\beta}_{\;\nu'} = \Lambda^{\mu'}_{\;\alpha} \eta_{\mu'\nu'}$$

Matrix notation: $$\eta \Lambda^{-1} = \Lambda^{T} \eta$$

$$\eta^{\alpha\beta} \eta_{\alpha\beta} \left(\Lambda^{-1} \right)^{\beta}_{\;\nu'} = \eta^{\alpha\beta} \Lambda^{\mu'}_{\;\alpha} \eta_{\mu'\nu'}$$

(That step breaks rule 2 that each summation index should have exactly two entries. What would be the correct way to write this equation, corresponding to the matrix equation below?)

$$\left(\Lambda^{-1} \right)^{\beta}_{\;\nu'} = \eta^{\alpha\beta} \Lambda^{\mu'}_{\;\alpha} \eta_{\mu'\nu'}$$

Matrix notation: $$\Lambda^{-1} = \eta \Lambda^{T} \eta$$
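Incidentally, that matrix identity does check out numerically. A sketch (my own; the boost and the signature $\mathrm{diag}(-1,1,1,1)$, for which $\eta^{-1} = \eta$ as a matrix, are assumptions):

```python
import numpy as np

# Numerical sketch of Lambda^{-1} = eta Lambda^T eta, with an assumed
# boost and signature eta = diag(-1, 1, 1, 1). Note this eta equals
# its own inverse as a matrix.
eta = np.diag([-1.0, 1.0, 1.0, 1.0])
beta = 0.5
gamma = 1.0 / np.sqrt(1.0 - beta**2)
Lam = np.eye(4)
Lam[:2, :2] = [[gamma, -gamma*beta], [-gamma*beta, gamma]]

assert np.allclose(eta @ Lam.T @ eta, np.linalg.inv(Lam))  # Lambda^{-1} = eta Lambda^T eta
assert np.allclose(Lam.T @ eta @ Lam, eta)                 # eta = Lambda^T eta Lambda
```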

From this, if I mechanically follow the index manipulation rules, unless I'm mistaken:

$$\left(\Lambda^{-1} \right)^{\beta}_{\;\nu'} = \eta^{\alpha\beta} \Lambda_{\nu' \alpha} = \eta^{\beta\alpha} \Lambda_{\nu' \alpha} = \Lambda_{\nu'}^{\;\beta}.$$

which according to rule 3 would mean:

$$\Lambda^{-1} = \Lambda$$

But it isn't so in general. Or if there's another convention whereby the leftmost index denotes the row, regardless of height:

$$\Lambda^{-1} = \Lambda^{T}$$

which is true for rotations but not for Lorentz transformations in general. So I think I must be making some mistake in how I convert between matrix notation and index notation. Could it be that I'm mixing up two different conventions? In that last step, I ignored the right-left order of the upper and lower indices because I don't know what rule to apply there, but if that is significant in Carroll's system, I wonder if that's part of the problem. On the other hand, I don't think I used any method that relies on what I read in Carroll, and I purposefully tried to keep distinct symbols for lambda and its inverse. So where did it all go wrong?

Last edited: Nov 21, 2009
11. Nov 21, 2009

### Rasalhague

From Sean Carroll's equations 1.29

$$\Lambda_{\nu'}^{\;\mu} \Lambda^{\sigma'}_{\;\mu} = \delta^{\sigma'}_{\nu'}, \qquad \Lambda_{\nu'}^{\;\mu} \Lambda^{\nu'}_{\;\rho} = \delta^{\mu}_{\rho}$$

http://preposterousuniverse.com/grnotes/

it looks as though the left-right ordering of upper and lower indices might not be important here after all. These seem to be saying that if you see the two lambdas, one having a prime on its lower index, the other on its upper index, you should regard each as representing the inverse of the other. Is that right? On the other hand, supposing the upper index indicates the row, and the lower index the column, equation 1.28,

$$\left(\Lambda^{-1} \right)^{\nu'}_{\;\mu} = \Lambda_{\nu'}^{\;\mu}$$

seems to be saying, on the contrary, that switching the prime from a lower index to an upper index means the transpose of the inverse.

12. Nov 21, 2009

### haushofer

Hey, I must say that I'll stick to my own conventions and don't feel like going through others :)

No. The indices don't have to appear in the same order, but they do have to be at the same height. So

$$T^{\mu\nu} = S^{\mu}_{\nu}$$

is nonsense, while

$$A_{\mu\nu} = -A_{\nu\mu}$$

is perfectly allowable; it's the definition of an antisymmetric 2-tensor.

Consider a vector x with components $x^{\mu}$. We write it as a column vector. If we contract it with a tensor represented by a matrix (so a 2-tensor), that tensor has to have a mu index down, wherever that index stands! So we can first look at

$$\Lambda^{\rho}_{\ \mu}x^{\mu}$$

Here we have for lambda a rho and a mu index. Because x is a column vector by definition, we know that the mu of lambda has to indicate the row of lambda; this is how we define matrix multiplication with vectors in linear algebra, right? Now consider

$$\Lambda^{\ \rho}_{\mu} x^{\mu}$$

Again, x is a column vector, so we know that the mu index of lambda has to indicate a row! Now you could wonder if the two lambda's above are the same, but my previous post shows that they are not; the reversed index order indicates taking the inverse.

So, what does an expression like

$$\Lambda_{\mu\nu} = \eta_{\mu\rho}\Lambda^{\rho}_{\ \nu}$$

mean? Well, you know that $\eta_{\mu\nu}$ is the metric, represented by a matrix. And $\Lambda^{\rho}_{\ \nu}$ can also be represented by a matrix. So this is merely a matrix multiplication, giving another 4×4 matrix. The mu index of eta represents the row of the eta-matrix, and the rho of lambda indicates the column of the lambda-matrix. So the resulting object,

$$\Lambda_{\mu\nu}$$

can be represented by a matrix of which mu indicates the row and nu the column. This shows that in general it doesn't make sense to say that "upper indices indicate rows" and "lower indices indicate columns" or something like that.

I think you should reason as follows: you know that x with components $x^{\mu}$ can be written as a column vector. You also know that it can be transformed by a Lorentz transformation Lambda. Acting with Lambda on x, we have to get another column vector, which requires that Lambda has an upper index. But this action goes via contraction, so it also requires a lower index on Lambda. Then you DEFINE a Lorentz transformation acting on $x^{\mu}$ as

$$\Lambda^{\rho}_{\ \mu}$$

You could also define it as

$$\Lambda^{\ \rho}_{\mu}$$

but the above reasoning shows that this is merely the inverse of the first. It's up to you how you would like to represent the components of the Lorentz transformation; the other would then indicate the inverse :)

13. Nov 21, 2009

### haushofer

Yes it does, but whatever you call an inverse is up to you.

Maybe it's good to know that the Lorentz group is (surprise!) a group. This means that every element in it has an inverse. If I have a group of matrices, I can call $T$ a group element and $T^{-1}$ its inverse. But nothing stops me from labeling $T$ as $T^{-1}$.

Ah, ok, this can be confusing

Consider (I'm writing primes now to indicate the transformed vector)

$$x^{'\mu} = \Lambda^{'\mu}_{\ \nu}x^{\nu}$$

Then you would like to write something like

$$x^{\nu} = ( \Lambda^{'\mu}_{\ \nu})^{-1}x^{'\mu}$$

This only makes sense if I write the nu index on the RHS up and the 'mu index down. So you could write

$$x^{\nu} = ( \Lambda^{-1})^{\nu}_{\ '\mu}x^{'\mu}$$

From the earlier reasoning you know that you can express this inverse of Lambda via Lambda itself by reversing the order of the indices:

$$( \Lambda^{-1})^{\nu}_{\ '\mu} = \Lambda^{\ \ \nu}_{'\mu}$$

So to avoid confusion (I'm sorry I overlooked this possible confusion to you!): If someone writes

$$(\Lambda^{\mu}_{\ \nu})^{-1} = \Lambda^{\ \mu}_{\nu}$$

he/she means that reversing the order of the indices gives the inverse. But from Carroll's point of view it would be better to write

$$(\Lambda^{\mu}_{\ \nu})^{-1} = \Lambda^{\ \nu}_{\mu}$$

because the contraction changes if you switch to the inverse. For instance, this implies that

$$tr(\Lambda^{-1}\Lambda) = \Lambda^{\ \nu}_{\mu}\Lambda^{\mu}_{\ \nu}= 4$$

which makes sense :) Maybe I'm just too used to these kinds of conventions to notice where they might cause confusion.
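That trace identity is easy to check numerically; a one-line sketch with an assumed boost:

```python
import numpy as np

# Check of tr(Lambda^{-1} Lambda) = 4 with an assumed boost.
beta = 0.8
gamma = 1.0 / np.sqrt(1.0 - beta**2)
Lam = np.eye(4)
Lam[:2, :2] = [[gamma, -gamma*beta], [-gamma*beta, gamma]]

assert np.isclose(np.trace(np.linalg.inv(Lam) @ Lam), 4.0)
```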

14. Nov 21, 2009

### Rasalhague

Okay, so here it looks like reversing the indices does simply correspond to transposing a matrix used to represent the tensor. I relied on this idea when I was working through the proof of the fact that the inverse of a rotation matrix is its transpose. But I couldn't have used that method of proof if I'd been following Blandford and Thorne's notation in which reversing the indices had already been defined as indicating the inverse. In fact, I'm not sure how I would have shown it in their notation.

Do you mean a column of lambda?

Again, I'd have thought a column (a set of numerals lined up from top to bottom) rather than a row (a set of numerals lined up from left to right) of lambda. Unless I've become even more baffled than I thought, the muth entry on the rhoth row of lambda is multiplied by the muth entry of x and the sum of these muth multiplications gives the rhoth entry of the resulting vector.

In general, or only for a Lorentz transformation? And would swapping the indices from top to bottom indicate the transpose?

This is what I understand by matrix multiplication:

$$C_{ij} = \sum_{k=1}^{n} A_{ik} B_{kj}$$

where the left index, in each case, denotes a row (a set of numerals arranged from left to right), and the right index a column (a set of numerals arranged from top to bottom). So in your example, given rule 3 of the index rules that I quoted, I'd have expected the rho of lambda to stand for the row of lambda.
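Just to pin down that convention (left index = row, right index = column), here's how I'd write the summation as explicit loops and check it against an ordinary matrix product; the function name is my own:

```python
import numpy as np

# C_ij = sum_k A_ik B_kj written out as explicit loops, to pin down
# the convention: the left index labels the row, the right the column.
def matmul_by_indices(A, B):
    n, p, m = len(A), len(B), len(B[0])
    C = [[0.0] * m for _ in range(n)]
    for i in range(n):            # i: row of A and of C
        for j in range(m):        # j: column of B and of C
            for k in range(p):    # k: the summed index (column of A, row of B)
                C[i][j] += A[i][k] * B[k][j]
    return C

A = [[1.0, 2.0], [3.0, 4.0]]
B = [[5.0, 6.0], [7.0, 8.0]]
assert np.allclose(matmul_by_indices(A, B), np.array(A) @ np.array(B))
```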

Rule 3 that I quoted said that where both indices are on the same level, the convention is to treat the first (the one on the left) as the row and the second (the one on the right) as the column; and where the indices appear at different heights (one up, one down), the upper index is taken to stand for the row, and the lower index for the column. Is that the usual convention?

15. Nov 21, 2009

### Rasalhague

It confused me a bit at first, but by now I'm reasonably happy about that!

Ah, so the indices of the inverse, mu and nu, outside of the brackets here aren't really making any comment about the indices of the original lambda; they're just labels for the components of lambda inverse, whatever relationship these may have to the rows and columns of the original lambda?

Thanks again for your patience with all my endless questions ;-)

16. Nov 22, 2009

### Rasalhague

If, as he says, "the important thing is where the primes go", when Sean Carroll writes

$$\left(\Lambda^{-1} \right)^{\nu'}_{\;\mu} = \Lambda_{\nu'}^{\;\mu}$$

could this have been written as follows, without violating rule 1 or accidentally implying a transpose:

$$\left(\Lambda^{-1} \right)^{\nu'}_{\;\mu} = \Lambda_{\mu'}^{\;\nu}$$

giving

$$\mathbf{e}_{\mu} = \Lambda_{\mu}^{\;\nu'} \mathbf{e}_{\nu'}, \qquad \mathbf{e}_{\mu'} = \Lambda^{\nu}_{\;\mu'} \mathbf{e}_{\nu}.$$

(Or using any other letters for indices, so long as they're consistent within each equation.) In other words, couldn't we just choose letters for indices in such a way that there's no need to switch letters too, given that the location of the primes is enough to show whether the direct or inverse transformation in intended? (And direct or inverse is also indicated by the left-right-order of upper and lower indices. And any letter can be used for summed over indices providing it isn't the same as a free index in the same equation.)

Time to wrack my brains some more over Thorne & Blandford's equation 1.44c:

$$R_{i \overline{p}} = R_{\overline{p} i}$$

They say, "Note: Eq. (1.44c) does not say that $$R_{i \overline{p}}$$ is a symmetric matrix; in fact, it typically is not. Rather, (1.44c) says that $$R_{i \overline{p}}$$ is the transpose of $$R_{\overline{p} i}$$".

http://www.pma.caltech.edu/Courses/ph136/yr2008/

So perhaps the rule is that switching the bar means inversion. Then:

$$R_{i \overline{p}} = R_{\overline{p} i} \rightarrow R = \left(R^{-1} \right)^{T}$$

And counterfactually, I'm very tentatively guessing, perhaps their notation could be used to indicate:

$$R_{i \overline{p}} = R_{\overline{i} p} \rightarrow R = R^{-1}$$

$$R_{i \overline{p}} = R_{p \overline{i}} \rightarrow R = R^{-T}$$

The latter being what would have indicated that both matrices written R were symmetrical, if that had been the case.

In section 1.7.2, where they introduce Lorentz transformations with indices on different levels, Thorne and Blandford write, "Notice the up/down placement of indices on the elements of the transformation matrices: the first index is always up, and the second is always down." Presumably this is the same convention Carroll attributes to Schutz on p. 10. Would I be right in thinking that this is made possible because switching a pair of indices horizontally, when one is up and the other down, is superfluous if inversion is also indicated by switching the height of the prime (or bar, as the case may be)? But if we use a system such as the one you did in #9, where there are no primes or bars and inversion is indicated only by a switch in horizontal order, then I suppose the horizontal switch would be essential, unless we use different letters for the direct and inverse transformation matrices, as Ruslan Sharipov does in his Quick Introduction to Tensor Analysis, pp. 13-16.

http://arxiv.org/abs/math.HO/0403252

(He denotes one with S, the other T.) Even more transparently, D.H. Griffel, in Linear Algebra and its Applications, uses the letter P for a change of basis matrix and simply writes its inverse as $P^{-1}$:

$$b_{k} = \sum_{i=1}^{n} P_{ik} b'_{i}, \qquad b'_{j} = \sum_{k=1}^{n} \left(P^{-1} \right)_{kj} b_{k}$$
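Griffel's pair of formulas can be checked numerically like this (my own sketch, with an assumed random invertible P; I store each basis vector as a row of an array):

```python
import numpy as np

# Sketch of Griffel's change-of-basis formulas with an assumed P:
# b_k = sum_i P_ik b'_i  and  b'_j = sum_k (P^-1)_kj b_k.
rng = np.random.default_rng(0)
P = rng.normal(size=(3, 3))   # assumed change-of-basis matrix (invertible with prob. 1)
Pinv = np.linalg.inv(P)
b_prime = np.eye(3)           # primed basis vectors b'_i, stored as rows

# Old basis from the primed one: b_k = sum_i P_ik b'_i
b = np.einsum('ik,id->kd', P, b_prime)

# The round trip recovers the primed basis: b'_j = sum_k (P^-1)_kj b_k
assert np.allclose(np.einsum('kj,kd->jd', Pinv, b), b_prime)
```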

In post #10, I got to

$$\eta = \Lambda^{T}\eta\Lambda$$

only by assuming that all my lambdas referred to the same matrix, and that exchanging indices represented a transposition. I suppose, in the later step where I said I "mechanically followed" the index-manipulation rules, perhaps the device for representing inversion by a horizontal switch is encoded into these rules, leading me to:

$$\left( \Lambda^{-1} \right)^{\beta}_{\;\nu'} = \Lambda_{\nu'}^{\;\beta}$$

except that wouldn't be right according to Thorne & Blandford's and Schutz's convention, whereby inversion is represented only by the position of a bar or prime, and it wouldn't be right according to Carroll's rule that inversion is represented mainly by a switch in the position of the prime as well as, supplementarily, by a horizontal switch of indices. I didn't even know about this horizontal-switching convention before, and as far as I knew the rules I was following were those generally used, e.g. by Blandford and Thorne, so I'm thinking my derivation shouldn't depend on this horizontal-switching rule. Of course, it's entirely possible I've misunderstood or misapplied the rules.

So, in conclusion, I'm still baffled! What went wrong in #10?

Last edited: Nov 22, 2009
17. Nov 22, 2009

### Rasalhague

P. 17: "Notice that [...] 'free' indices must be the same on both sides of an equation." So I guess it is an exception.

18. Nov 22, 2009

### haushofer

Ok :)

I think you could say that.

Note that if I have a tensor like Maxwell's tensor,

$$F_{\mu\nu}$$

I can say that $F_{0\nu}$ indicates the first row, $F_{1\nu}$ the second row, and $F_{\mu 0}$ the first column etc. So mu indicates the row, and nu the column. If I now write down

$$F_{\nu\mu}$$

this really is a transpose: I interchanged rows and columns. This is easily done because the indices are at the same height. And because F is a 2-form, you know that

$$F_{\nu\mu} = - F_{\mu\nu}$$

So it's antisymmetric and has $4 \cdot 3 / 2 = 6$ independent components, just the right number to hold the electric and magnetic fields. If you would do something with a tensor

$$T^{\mu}_{\ \nu}$$

I would first bring down the mu index with the metric:

$$T_{\rho\nu} = \eta_{\rho\mu} T^{\mu}_{\ \nu}$$

Again you can say that rho indicates a row and nu a column, and again

$$T_{\nu\rho}$$

is represented by the transpose; you again switch rows and columns. Note that switching indices at the same height doesn't require a metric, but switching indices at different heights does! So be careful with this. If you want to compare the tensors (note that the index names are just labels; we only care about their positions!)

$$T^{\mu}_{\ \nu}$$

and

$$T_{\alpha}^{\ \beta}$$

you should write

$$\eta_{\beta\nu}\eta^{\mu\alpha}T_{\alpha}^{\ \beta} = T^{\mu}_{\ \nu}$$

This is not ordinary "transposing", but multiplication by two metric tensors. I believe this is also where your confusion about these Lorentz transformations came from.
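That last identity can be verified numerically; in this sketch the components of T and the signature $\eta = \mathrm{diag}(-1,1,1,1)$ are assumptions of mine, and the final line shows that a plain transpose of the mixed-index array is generally a different thing:

```python
import numpy as np

# Sketch: relating T^mu_nu to T_alpha^beta takes two metric factors,
# not a plain transpose. T's components and the signature are assumed.
eta = np.diag([-1.0, 1.0, 1.0, 1.0])
eta_inv = np.linalg.inv(eta)
rng = np.random.default_rng(1)
T = rng.normal(size=(4, 4))   # components T^mu_nu (mu = row, nu = column)

# T_alpha^beta = eta_{alpha mu} T^mu_nu eta^{nu beta}
T_mixed = np.einsum('am,nb,mn->ab', eta, eta_inv, T)

# eta_{beta nu} eta^{mu alpha} T_alpha^beta = T^mu_nu
assert np.allclose(np.einsum('bn,ma,ab->mn', eta, eta_inv, T_mixed), T)

# ...whereas a plain transpose of the mixed array is generally different:
assert not np.allclose(T_mixed.T, T)
```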