Orthogonal Transformations

  1. Nov 10, 2009 #1
    In Chapter 1 of Blandford & Thorne: Applications of Classical Physics, section 1.7.1, "Euclidean 3-space: Orthogonal Transformations" (Version 0801.1.K), do equations 1.43 at the beginning of the section, representing respectively the expansion of the old basis vectors in the new basis, and the expansion of the new basis vectors in the old basis:

    [tex]\mathbf{e}_{i} = \mathbf{e}_{\overline{p}}R_{\overline{p}i}, \quad \mathbf{e}_{\overline{p}} = \mathbf{e}_{i}R_{i\overline{p}}[/tex]

    anticipate the result to be shown, namely that the inverse of an orthogonal transformation matrix is its transpose, or should I read [tex]\left[ R_{\overline{p}i}\right], \left[ R_{i\overline{p}}\right][/tex] as denoting two different matrices, the one not necessarily the transpose of the other (although in the instance of orthogonal transformations, they happen to be)?

    http://www.pma.caltech.edu/Courses/ph136/yr2008/

    That's to say: in general, if I find two matrices denoted [tex]\left[ R_{\overline{p}i}\right], \left[ R_{i\overline{p}}\right][/tex], should I interpret the reversal of the indices as indicating that each matrix is the transpose of the other? Or is the reversal of indices equivalent to just using a completely different symbol to denote two matrices, e.g. [tex]S[/tex] and [tex]T[/tex]?
     
  3. Nov 10, 2009 #2

    haushofer


    If I have a specific matrix T which I represent as

    [tex]
    T_{ab}
    [/tex]
    then the matrix

    [tex]
    T_{ba}
    [/tex]

    represents its transpose. So you shouldn't suddenly assign to this transpose a completely different matrix. That would be the same as saying that we have a column vector X, and that the vector X^T represents a completely different vector instead of its row-version.

    These rotation matrices of yours preserve the Euclidean line element

    [tex]
    dl^2 = \delta_{ij}dx^i dx^j
    [/tex]

    So you know that for x --> x'=Rx we have dl^2 = dl'^2.

    [tex]
    dl'^2 = \delta_{ij} R_{ik}dx^k R_{jl}dx^l
    [/tex]

    So

    [tex]
    \delta_{ij} R_{ik} R_{jl} = R_{ik}R_{il} = \delta_{kl}
    [/tex]

    which gives the result.
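    As a concrete sanity check (a small Python/NumPy sketch; the rotation is just an arbitrary example, not anything from the text):

[code]
# A quick check that an orthogonal (rotation) matrix satisfies
# delta_ij R_ik R_jl = delta_kl, i.e. R^T R = I.  (Sample rotation only.)
import numpy as np

theta = 0.7                                    # arbitrary angle about the z-axis
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

delta = np.eye(3)
lhs = np.einsum('ij,ik,jl->kl', delta, R, R)   # delta_ij R_ik R_jl
print(np.allclose(lhs, delta))                 # True
print(np.allclose(R.T @ R, np.eye(3)))         # True: the transpose is the inverse
[/code]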
     
  4. Nov 10, 2009 #3
    Okay, but since the fact that the inverse is the transpose was the very thing they were about to establish, it seemed rather like saying "now we're going to prove that A = A" (using the same symbol for the two quantities whose identity has yet to be shown), which made me wonder if there was something I was missing there, or whether they really were just anticipating the result, using the notation [tex]\left[R_{i \overline{p}} \right][/tex] and [tex]\left[R_{\overline{p} i} \right][/tex] to state, before proving it, that [tex]\left[R_{i \overline{p}} \right]^{T} = \left[R_{i \overline{p}} \right]^{-1}[/tex].

    Later they write, "This says that the transpose of [tex]\left[R_{\overline{p} i} \right][/tex] is its inverse--which we have already denoted by [tex]\left[R_{i \overline{p}} \right][/tex]". I wondered whether by "which" they meant the fact that they'd already indicated the conclusion in the question in this way (if that's what they did), or whether they just meant that [tex]\left[R_{i \overline{p}} \right][/tex] happened to be the symbol they'd used.

    Anyway, thanks for your help, haushofer, and for the alternative proof. Blandford and Thorne use all subscript indices in the sections of their book dealing with Cartesian frames for Euclidean space, but if we were to write this with the full summation convention (indices on different levels summed over), would the following be a correct way to rewrite it? (I've used letters with indices in square brackets and letters with no indices to represent matrices, treating objects with a single up index as columns by default.)

    [tex]\left(dl \right)^{2} = \delta_{ij} dx^{i} dx^{j} = \delta_{ij} R^{i}_{k} dx^{k} R^{j}_{l} dx^{l} = \delta_{ij} R^{i}_{k} R^{j}_{l} dx^{k} dx^{l}[/tex]

    [tex]= dx_{i} dx^{i} = R_{ik} R^{i}_{l} dx^{k} dx^{l}[/tex]

    [tex]= \left[R^{T} R \right]_{kl} dx^{k} dx^{l}[/tex]

    [tex]\Rightarrow \left[dx^{i} \right]^{T} \left[dx^{i} \right] = \left[dx^{i} \right]^{T} \left[R^{T} R \right] \left[dx^{i} \right][/tex]

    [tex]\Rightarrow I = R^{T} R[/tex]

    [tex]\Rightarrow R^{T} = R^{-1}[/tex]
     
    Last edited: Nov 10, 2009
  5. Nov 11, 2009 #4
    In the next section, 1.7.2, they present equations 1.46

    [tex]\mathbf{e}_{\alpha} = \mathbf{e}_{\overline \mu} L^{\overline \mu}_{\; \alpha}, \quad \mathbf{e}_{\overline \mu} = \mathbf{e}_{\alpha} L^{\alpha}_{\; \overline \mu}[/tex]

    for a Lorentz transformation. (Can a Lorentz transformation be defined as any transformation in Minkowski space that preserves orthonormality of basis; and can it be defined equivalently as a transformation that preserves spacetime intervals?) Here too they denote the direct and inverse transformations both with the same letter, L, and reversed indices. But I don't think the inverse of a Lorentz transformation is necessarily equal to its transpose, is it? For example, if

    [tex]L = \left[ \begin{matrix} \gamma & -\gamma \beta & 0 & 0 \\ -\gamma \beta & \gamma & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{matrix} \right][/tex]

    then

    [tex]L^{-1} = \left[ \begin{matrix} \gamma & \gamma \beta & 0 & 0 \\ \gamma \beta & \gamma & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{matrix} \right][/tex]

    which is not the transpose of L.
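    A quick numerical check of this (a NumPy sketch; beta = 0.6 is just an illustrative value):

[code]
# The boost L above, its inverse (beta -> -beta), and the fact that the
# transpose is NOT the inverse here (L is symmetric, so L^T = L != L^-1).
import numpy as np

beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)
L = np.array([[ gamma,      -gamma*beta, 0, 0],
              [-gamma*beta,  gamma,      0, 0],
              [ 0,           0,          1, 0],
              [ 0,           0,          0, 1]])

L_inv = np.linalg.inv(L)
L_minus = np.array([[ gamma,      gamma*beta, 0, 0],
                    [ gamma*beta, gamma,      0, 0],
                    [ 0,          0,          1, 0],
                    [ 0,          0,          0, 1]])

print(np.allclose(L_inv, L_minus))   # True: inverse = same boost with -beta
print(np.allclose(L_inv, L.T))       # False: transpose is not the inverse
[/code]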
     
  6. Nov 11, 2009 #5
    Come to think of it, the transformations labelled R in 1.43 are in different equations, and one of the indices is summed over in each case, so I'm thinking we can't conclude just from the way they've written them there that [itex]\left[R_{\overline{p} i}\right] = \left[R_{i \overline{p}}\right]^{T}[/itex]. The only givens, at that stage, are that the R in the first equation is the inverse of the R in the second equation, and that they preserve the orthonormality of the basis. Unless I'm mistaken, it's only later in the derivation, when the same R appears twice in a single equation with indices needing to be reversed to make a matrix multiplication, that we can confidently represent one R as the transpose of the other.

    Similarly with Lorentz transformations, they use the same symbol L in two different equations, 1.46, to represent two different transformations. Here also, the two transformations denoted L are inverses of each other. But in this case, the two transformations denoted L aren't transposes of each other, even though the only difference in how they're written in 1.46 is that the indices are reversed.
     
    Last edited: Nov 12, 2009
  7. Nov 12, 2009 #6
    If matrix multiplication is normally represented in index notation as

    [tex]C^{i}_{j} = A^{i}_{\;k} B^{k}_{\;j}[/tex]

    I'd expect

    [tex]\delta^{i}_{j} = T^{i}_{\;k} T^{k}_{\;j}[/tex]

    to indicate that [itex]T = T^{-1}[/itex]. So it puzzles me to see, in each of equations 1.44b and 1.47a here http://www.pma.caltech.edu/Courses/ph136/yr2008/0801.1.K.pdf, an identical symbol, R and L respectively, apparently standing for two different functions in the same equation, e.g.

    [tex]\delta_{\overline{p} \: \overline{q}} = R_{\overline{p}i} R_{i \overline{q}}[/tex]

    [tex]\delta^{\overline{\mu}}_{\; \overline{\nu}} = L^{\overline{\mu}}_{\; \alpha} L^{\alpha}_{\; \overline{\nu}}[/tex]

    In this context, the inverse of R (an arbitrary rotation) isn't necessarily equal to R, and the inverse of L (an arbitrary Lorentz transformation) is neither necessarily equal to L nor to its transpose. Does this use of symbols strike anyone else as odd or ambiguous? If not, can anyone see what it is about the convention that I haven't understood yet?

    Equation 1.44c

    [tex]R_{i \overline{p}} = R_{\overline{p} i}[/tex]

    is supposed to express the idea that the transpose of [tex]R_{i \overline{p}}[/tex] is equal to its inverse; but to me it seems to be saying that R is equal to its transpose (not the intended meaning).
     
  8. Nov 15, 2009 #7
    In section 1.7.2, they say that "[tex]L^{\overline{\mu}}_{\; \alpha}[/tex] and [tex]L^{\alpha}_{\; \overline{\mu}}[/tex] are elements of two different transformation matrices" which "must be the inverse of each other":

    [tex]L^{\overline{\mu}}_{\; \alpha} L^{\alpha}_{\; \overline{\nu}} = \delta^{\overline{\mu}}_{\; \overline{\nu}}[/tex]

    Is this use of the same symbol for different matrices ambiguous? If not, how can we tell when one L is not equal to another L?
     
  9. Nov 21, 2009 #8
    Thanks to CompuChip for posting the link to Sean Carroll's Lecture Notes on General Relativity on another thread.

    http://preposterousuniverse.com/grnotes/

    I'm really enjoying the first chapter! On page 10 it has something on this question:

    "But the inverse of a Lorentz transformation from the unprimed to the primed coordinates is also a Lorentz transformation, this time from the primed to the unprimed systems. We will therefore introduce a somewhat subtle notation by using the same symbol for both matrices, just with primed an unprimed indices adjusted."

    [tex]\left(\Lambda^{-1} \right)^{\nu'}_{\quad \mu} = \Lambda_{\nu'}^{\quad \mu}[/tex]

    So apparently it's the relative height of the primed versus unprimed indices that indicates which matrix is being used: one generalised orthogonal transformation or its inverse. I wonder if that's a general rule (something essential to the whole index gymnastics), or if it's a convention limited to a particular kind of matrix.

    The way I was able to see that a rotation matrix was the inverse of its transpose -- the way I wrote down the proof -- relied on treating the swapping of indices as equivalent to transposing a matrix, but this convention seems to complicate that a bit...
     
  10. Nov 21, 2009 #9

    haushofer


    Hi, I'm sorry that I haven't replied for so long :) For Lorentz transformations you know that

    [tex]
    \Lambda^{\mu}_{\ \alpha} \Lambda^{\nu}_{\ \beta}\eta_{\mu\nu} = \eta_{\alpha\beta}
    [/tex]

    The Lorentz transformations keep the Minkowski metric invariant, just as the SO(3) rotations keep the Kronecker delta (which is the metric of R^3) invariant. But you also know that

    [tex]
    \eta_{\mu}^{\sigma} = \eta_{\mu\nu}\eta^{\nu\sigma} =\delta_{\mu}^{\sigma}
    [/tex]

    This makes sense: a metric provides you with an isomorphism between the components of vectors and the components of dual vectors. So dualizing a component twice should give you the same component again, which is the statement of the above. If we apply this to the first formula we get

    [tex]
    \Lambda^{\mu}_{\ \alpha} \Lambda^{\nu\beta}\eta_{\mu\nu} = \eta_{\alpha}^{\beta}=\delta_{\alpha}^{\beta} \rightarrow \Lambda^{\mu}_{\ \alpha}\Lambda_{\mu}^{\ \beta} = \eta_{\alpha}^{\beta} = \delta_{\alpha}^{\beta}
    [/tex]

    So these kinds of considerations warn you that you should be careful about placing indices on LT's; the above shows that

    [tex]
    \Lambda^{\mu}_{\ \alpha} = [\Lambda_{\alpha}^{\ \mu}]^{-1}
    [/tex]
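
    For what it's worth, here is a small numerical sketch of these two statements (NumPy, an illustrative boost, and the (-,+,+,+) signature assumed):

[code]
# A numerical sketch of the two statements above (illustrative boost only;
# (-,+,+,+) signature assumed, so eta is its own inverse).
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])
beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)
Lam = np.array([[ gamma,      -gamma*beta, 0, 0],
                [-gamma*beta,  gamma,      0, 0],
                [ 0,           0,          1, 0],
                [ 0,           0,          0, 1]])

# Lambda^mu_a Lambda^nu_b eta_{mu nu} = eta_{ab}:
print(np.allclose(np.einsum('ma,nb,mn->ab', Lam, Lam, eta), eta))        # True

# Lambda_mu^b = eta_{mu nu} Lambda^nu_a eta^{ab}  (index order reversed):
Lam_rev = np.einsum('mn,na,ab->mb', eta, Lam, eta)

# Lambda^mu_a Lambda_mu^b = delta_a^b (contraction over the first index):
print(np.allclose(np.einsum('ma,mb->ab', Lam, Lam_rev), np.eye(4)))      # True
# Read as an array with its indices in the written order, Lam_rev is the
# transpose of the inverse of Lam:
print(np.allclose(Lam_rev, np.linalg.inv(Lam).T))                        # True
[/code]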
     
  11. Nov 21, 2009 #10
    Thanks for your help, Haushofer. Much appreciated, although I'm still struggling to understand... I think part of the difficulty is that no one source explains everything, so I'm having to compare what different authors say, but often they use slightly different systems with slightly different rules, and I don't always know how much of one author's system can be applied to another's. For example, Sean Carroll seems to be using a system in which it's significant whether an upper index appears to the left of a lower index or to the right (I'm still not quite sure what it signifies...), whereas others consistently put upper indices left of lower indices, or else say it makes no difference for them whether an upper index appears left or right of a lower index. Or maybe Carroll doesn't intend the left-right order of upper and lower indices to have any significance either. Maybe that's what he means by "the important thing is where the primes go". But then in your equations, you didn't use any primes, so maybe you're using a different convention in which something else denotes what Carroll uses primes for.

    One writer gives the following rules for indices:

    1) Free indices are written on the same level (upper or lower) on both sides of the equation. Each free index has only one entry on each side of the equation.

    2) Each summation index should have exactly two entries: one upper entry and one lower entry.

    3) For any double indexed array with indices on the same level (both upper or lower), the first index is the row number, while the second denotes the column. If indices are on different levels (one upper and one lower), then the upper index is a row number, while the lower one denotes the column.


    Given rule 3, your final equation

    [tex]\Lambda^{\mu}_{\;\alpha} = \left[\Lambda_{\alpha}^{\;\mu} \right]^{-1}[/tex]

    seems to be saying that lambda equals its own inverse, which I don't think was your intended meaning as it isn't true of all Lorentz transformations. I wondered if you, and Sean Carroll, might be following a different rule whereby the leftmost index indicates the row, regardless of level (upper or lower); but then the equation seems to be saying that a Lorentz transformation would be the inverse of its transpose, but that isn't the case in general, is it? (Only for rotations.) The other alternative I thought of was that the above equation relies on a convention where lambda can represent two different transformations in the same equation, as with Blandford & Thorne and Carroll, but I'm still deeply confused about how that works in practice.

    Carroll offers the equation:

    [tex]\left(\Lambda^{-1} \right)^{\nu'}_{\; \mu} = \Lambda_{\nu'}^{\; \mu}[/tex]

    which breaks rule 1 from the list above. Does rule 1 not apply at all in Carroll's system, or is this equation an exception? And if the latter, is it the only exception or are there others; what would be a general statement of rule 1?

    In Euclidean space with a Cartesian frame,

    [tex]\left[A_{ij} \right]^{T} = \left[A_{ji} \right].[/tex]

    In Minkowski space with a Lorentz frame,

    [tex]\left[A_{\alpha\beta} \right]^{T} = \left[A_{\beta\alpha} \right][/tex] and [tex]\left[A^{\alpha\beta} \right]^{T} = \left[A^{\beta\alpha} \right],[/tex]

    Right? But

    [tex]A^{\alpha}_{\;\beta} = \eta^{\alpha\mu} A_{\mu\beta}.[/tex]

    Therefore [tex]\left[A^{\alpha}_{\;\beta} \right]^{T} = \left[ A_{\mu\beta} \right]^{T} \left[\eta^{\alpha\mu} \right]^{T} = \left[A_{\beta\mu} \right] \left[\eta^{\mu\alpha} \right] = \left[A_{\beta}^{\;\alpha} \right].[/tex]

    So by rule three every transformation matrix is symmetric! But that's not the case, is it? Here, by A, I just mean any transformation matrix.

    Here's how far I've got with exploring the relationship of a Lorentz transformation to its inverse and transpose. I'm also trying to understand how to convert back and forth between matrix notation and index notation, so any guidance or criticism on that score is welcome too! It seems to begin okay, but obviously I've got mixed up somewhere.

    [tex]\mathbf{e}_{\alpha} \cdot \mathbf{e}_{\beta} = \mathbf{e}_{\mu'} \Lambda^{\mu'}_{\;\alpha} \cdot \mathbf{e}_{\nu'} \Lambda^{\nu'}_{\;\beta}[/tex]

    [tex]\eta_{\alpha\beta} = \Lambda^{\mu'}_{\;\alpha} \Lambda^{\nu'}_{\;\beta} \eta_{\mu'\nu'} [/tex]

    Matrix notation: [tex]\eta = \Lambda^{T} \eta \Lambda[/tex]

    [tex]\eta_{\alpha\beta} \left(\Lambda^{-1} \right)^{\beta}_{\;\rho} = \Lambda^{\mu'}_{\;\alpha} \eta_{\mu'\nu'} \Lambda^{\nu'}_{\;\beta} \left(\Lambda^{-1} \right)^{\beta}_{\;\rho}[/tex]

    [tex]\eta_{\alpha\beta} \left(\Lambda^{-1} \right)^{\beta}_{\;\rho} = \Lambda^{\mu'}_{\;\alpha} \eta_{\mu'\nu'} \delta^{\nu'}_{\;\rho}[/tex]

    [tex]\eta_{\alpha\beta} \left(\Lambda^{-1} \right)^{\beta}_{\;\nu'} = \Lambda^{\mu'}_{\;\alpha} \eta_{\mu'\nu'}[/tex]

    Matrix notation: [tex]\eta \Lambda^{-1} = \Lambda^{T} \eta[/tex]

    [tex]\eta^{\alpha\beta} \eta_{\alpha\beta} \left(\Lambda^{-1} \right)^{\beta}_{\;\nu'} = \eta^{\alpha\beta} \Lambda^{\mu'}_{\;\alpha} \eta_{\mu'\nu'}[/tex]

    (That step breaks rule 2 that each summation index should have exactly two entries. What would be the correct way to write this equation, corresponding to the matrix equation below?)

    [tex]\left(\Lambda^{-1} \right)^{\beta}_{\;\nu'} = \eta^{\alpha\beta} \Lambda^{\mu'}_{\;\alpha} \eta_{\mu'\nu'}[/tex]

    Matrix notation: [tex]\Lambda^{-1} = \eta \Lambda^{T} \eta[/tex]

    From this, if I mechanically follow the index manipulation rules, unless I'm mistaken:

    [tex]\left(\Lambda^{-1} \right)^{\beta}_{\;\nu'} = \eta^{\alpha\beta} \Lambda_{\nu' \alpha} = \eta^{\beta\alpha} \Lambda_{\nu' \alpha} = \Lambda_{\nu'}^{\;\beta}.[/tex]

    which according to rule 3 would mean:

    [tex]\Lambda^{-1} = \Lambda[/tex]

    But it isn't so in general. Or if there's another convention whereby the leftmost index denotes the row, regardless of height:

    [tex]\Lambda^{-1} = \Lambda^{T}[/tex]

    which is true for rotations but not for Lorentz transformations in general. So I think I must be making some mistake in how I convert between matrix notation and index notation. Could it be that I'm mixing up two different conventions? In that last step, I ignored the right-left order of the upper and lower indices because I don't know what rule to apply there, but if that is significant in Carroll's system, I wonder if that's part of the problem. On the other hand, I don't think I used any method that relies on what I read in Carroll, and I purposefully tried to keep distinct symbols for lambda and its inverse. So where did it all go wrong?
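
    One thing I can at least check numerically is that final index equation itself (a NumPy sketch; the matrices are my own illustrative example, a boost composed with a rotation so that lambda, its transpose and its inverse are all visibly different):

[code]
# Checking the last index equation componentwise,
#   (Lambda^{-1})^b_n = eta^{ab} Lambda^m_a eta_{mn},
# for a Lambda built from a boost followed by a rotation, so that Lambda,
# its transpose and its inverse are all different arrays.  ((-,+,+,+)
# signature assumed, so eta is its own inverse; values are illustrative.)
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])
beta, theta = 0.6, 0.5
gamma = 1.0 / np.sqrt(1.0 - beta**2)
boost = np.array([[ gamma,      -gamma*beta, 0, 0],
                  [-gamma*beta,  gamma,      0, 0],
                  [ 0,           0,          1, 0],
                  [ 0,           0,          0, 1]])
rot = np.eye(4)
rot[1:3, 1:3] = [[np.cos(theta), -np.sin(theta)],
                 [np.sin(theta),  np.cos(theta)]]
Lam = rot @ boost

rhs = np.einsum('ab,ma,mn->bn', eta, Lam, eta)    # eta^{ab} Lambda^m_a eta_{mn}
print(np.allclose(rhs, np.linalg.inv(Lam)))       # True: the index algebra holds
print(np.allclose(rhs, Lam), np.allclose(rhs, Lam.T))   # False False
[/code]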
     
    Last edited: Nov 21, 2009
  12. Nov 21, 2009 #11
    From Sean Carroll's equations 1.29

    [tex]\Lambda_{\nu'}^{\;\mu} \Lambda^{\sigma'}_{\;\mu} = \delta^{\sigma'}_{\nu'}, \qquad \Lambda_{\nu'}^{\;\mu} \Lambda^{\nu'}_{\;\rho} = \delta^{\mu}_{\rho}[/tex]

    http://preposterousuniverse.com/grnotes/

    it looks as though the left-right ordering of upper and lower indices might not be important here after all. These seem to be saying that if you see the two lambdas, one having a prime on its lower index, the other on its upper index, you should regard each as representing the inverse of the other. Is that right? On the other hand, supposing the upper index indicates the row, and the lower index the column, equation 1.28,

    [tex]\left(\Lambda^{-1} \right)^{\nu'}_{\;\mu} = \Lambda_{\nu'}^{\;\mu}[/tex]

    seems to be saying, on the contrary, that switching the prime from a lower index to an upper index means the transpose of the inverse.
     
  13. Nov 21, 2009 #12

    haushofer


    Hey, I must say that I'll stick to my own conventions and don't feel like going through others :)

    No. The order of the indices doesn't have to be the same on both sides, but they do have to be at the same height. So

    [tex]
    T^{\mu\nu} = S^{\mu}_{\nu}
    [/tex]

    is nonsense, while

    [tex]
    A_{\mu\nu} = -A_{\nu\mu}
    [/tex]
    is perfectly allowable; it's the definition of an antisymmetric 2-tensor.

    Consider a vector x with components [itex]x^{\mu}[/itex]. We write it as a column vector. If we contract it with a tensor which is represented by a matrix (so a 2-tensor), that tensor has to have a mu index down, wherever it stands! So we can first look at

    [tex]
    \Lambda^{\rho}_{\ \mu}x^{\mu}
    [/tex]

    Here we have for lambda a rho and a mu index. Because x is a column vector by definition, we know that the mu of lambda has to indicate the row of lambda; this is how we define matrix multiplication with vectors in linear algebra, right? Now consider

    [tex]
    \Lambda^{\ \rho}_{\mu} x^{\mu}
    [/tex]

    Again, x is a column vector, so we know that the mu index of lambda has to indicate a row! Now you could wonder if the two lambdas above are the same, but my previous post shows that they are not; the reversed index order indicates taking the inverse.

    So, what does an expression like

    [tex]
    \Lambda_{\mu\nu} = \eta_{\mu\rho}\Lambda^{\rho}_{\ \nu}
    [/tex]

    mean? Well, you know that [itex]\eta_{\mu\nu}[/itex] is the metric, represented by a matrix. And [itex]\Lambda^{\rho}_{\ \nu}[/itex] can also be represented by a matrix. So this is merely a matrix multiplication, giving another 4x4 matrix. The mu index of eta represents the row of the eta-matrix, and the rho of lambda indicates the column of the lambda-matrix. So the resulting object,

    [tex]
    \Lambda_{\mu\nu}
    [/tex]

    can be represented by a matrix of which mu indicates the row and nu the column. This shows that in general it doesn't make sense to say that "upper indices indicate rows" and "lower indices indicate columns" or something like that.

    I think you should reason as follows: you know that x with components [itex]x^{\mu}[/itex] can be written as a column vector. You also know that it can be transformed by a Lorentz transformation Lambda. Acting with Lambda on x we have to get another column vector, which requires that Lambda has an upper index. But this action goes via contraction, so it also requires a lower index on Lambda. Then you DEFINE a Lorentz transformation acting on [itex]x^{\mu}[/itex] as

    [tex]
    \Lambda^{\rho}_{\ \mu}
    [/tex]

    You could also define it as

    [tex]
    \Lambda^{\ \rho}_{\mu}
    [/tex]

    but the above reasoning shows that this is merely the inverse of the first. It's up to you how you would like to represent the components of the Lorentz transformation; the other would then indicate the inverse :)
     
  14. Nov 21, 2009 #13

    haushofer


    Yes it does, but whatever you call an inverse is up to you.

    Maybe it's good to know that the Lorentz group is (surprise!) a group. This means that every element in it has an inverse. If I have a group of matrices, I can call [itex]T[/itex] a group element and [itex]T^{-1}[/itex] its inverse. But nothing stops me from labeling [itex]T[/itex] as [itex]T^{-1}[/itex].


    Ah, ok, this can be confusing

    Consider (I'm writing primes now to indicate the transformed vector)

    [tex]
    x^{'\mu} = \Lambda^{'\mu}_{\ \nu}x^{\nu}
    [/tex]

    Then you would like to write something like

    [tex]
    x^{\nu} = ( \Lambda^{'\mu}_{\ \nu})^{-1}x^{'\mu}
    [/tex]
    This only makes sense if I write the nu index on the RHS up and the 'mu index down. So you could write

    [tex]
    x^{\nu} = ( \Lambda^{-1})^{\nu}_{\ '\mu}x^{'\mu}
    [/tex]

    From the earlier reasoning you know that you can express this inverse of Lambda via Lambda itself by reversing the order of the indices:

    [tex]
    ( \Lambda^{-1})^{\nu}_{\ '\mu} = \Lambda^{\ \ \nu}_{'\mu}
    [/tex]

    So to avoid confusion (I'm sorry I overlooked this possible source of confusion!): If someone writes

    [tex]
    (\Lambda^{\mu}_{\ \nu})^{-1} = \Lambda^{\ \mu}_{\nu}
    [/tex]
    she/he means that reversing the order of indices gives an inverse. But from Carrol's point of view it would be better to write

    [tex]
    (\Lambda^{\mu}_{\ \nu})^{-1} = \Lambda^{\ \nu}_{\mu}
    [/tex]

    because the contraction changes if you switch to the inverse. For instance, this implies that

    [tex]
    tr(\Lambda^{-1}\Lambda) = \Lambda^{\ \nu}_{\mu}\Lambda^{\mu}_{\ \nu}= 4
    [/tex]

    which makes sense :) Maybe I'm so used to these conventions that I overlook the possible confusion.
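
    (A one-line numerical check of that trace identity, again a NumPy sketch with an illustrative boost and the (-,+,+,+) signature assumed:)

[code]
# Checking tr(Lambda^{-1} Lambda) = Lambda_mu^nu Lambda^mu_nu = 4
# (illustrative boost; eta is its own inverse for this signature).
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])
beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)
Lam = np.array([[ gamma,      -gamma*beta, 0, 0],
                [-gamma*beta,  gamma,      0, 0],
                [ 0,           0,          1, 0],
                [ 0,           0,          0, 1]])

Lam_rev = np.einsum('mn,na,ab->mb', eta, Lam, eta)   # Lambda with index order reversed
print(np.einsum('mn,mn->', Lam_rev, Lam))            # 4.0 (up to rounding)
[/code]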
     
  15. Nov 21, 2009 #14
    Okay, so here it looks like reversing the indices does simply correspond to transposing a matrix used to represent the tensor. I relied on this idea when I was working through the proof of the fact that the inverse of a rotation matrix is its transpose. But I couldn't have used that method of proof if I'd been following Blandford and Thorne's notation in which reversing the indices had already been defined as indicating the inverse. In fact, I'm not sure how I would have shown it in their notation.

    Do you mean a column of lambda?

    Again, I'd have thought a column (a set of numerals lined up from top to bottom) rather than a row (a set of numerals lined up from left to right) of lambda. Unless I've become even more baffled than I thought, the mu-th entry on the rho-th row of lambda is multiplied by the mu-th entry of x, and the sum of these products gives the rho-th entry of the resulting vector.

    In general, or only for a Lorentz transformation? And would swapping the indices from top to bottom indicate the transpose?

    This is what I understand by matrix multiplication:

    [tex]C_{ij} = \sum_{k=1}^{n} A_{ik} B_{kj}[/tex]

    where the left index, in each case, denotes a row (a set of numerals arranged from left to right), and the right index a column (a set of numerals arranged from top to bottom). So in your example, given rule 3 of the index rules that I quoted, I'd have expected the rho of lambda to stand for the row of lambda.

    Rule 3 that I quoted said that where both indices are on the same level, the convention is to treat the first (the one on the left) as the row and the second (the one on the right) as the column; and where the indices appear at different heights (one up, one down), the upper index is taken to stand for the row, and the lower index for the column. Is that the usual convention?
     
  16. Nov 21, 2009 #15
    It confused me a bit at first, but by now I'm reasonably happy about that!

    Ah, so the indices of the inverse, mu and nu, outside of the brackets here aren't really making any comment about the indices of the original lambda; they're just labels for the components of lambda inverse, whatever relationship these may have to the rows and columns of the original lambda?

    Thanks again for your patience with all my endless questions ;-)
     
  17. Nov 22, 2009 #16
    If, as he says, "the important thing is where the primes go", when Sean Carroll writes

    [tex]\left(\Lambda^{-1} \right)^{\nu'}_{\;\mu} = \Lambda_{\nu'}^{\;\mu}[/tex]

    could this have been written as follows, without violating rule 1 or accidentally implying a transpose:

    [tex]\left(\Lambda^{-1} \right)^{\nu'}_{\;\mu} = \Lambda_{\mu'}^{\;\nu}[/tex]

    giving

    [tex]\mathbf{e}_{\mu} = \Lambda_{\mu}^{\;\nu'} \mathbf{e}_{\nu'}, \qquad \mathbf{e}_{\mu'} = \Lambda^{\nu}_{\;\mu'} \mathbf{e}_{\nu}.[/tex]

    (Or using any other letters for indices, so long as they're consistent within each equation.) In other words, couldn't we just choose letters for indices in such a way that there's no need to switch letters too, given that the location of the primes is enough to show whether the direct or inverse transformation is intended? (And direct or inverse is also indicated by the left-right order of upper and lower indices. And any letter can be used for summed-over indices providing it isn't the same as a free index in the same equation.)

    Time to wrack my brains some more over Thorne & Blandford's equation 1.44c:

    [tex]R_{i \overline{p}} = R_{\overline{p} i}[/tex]

    They say, "Note: Eq. (1.44c) does not say that [tex]R_{i \overline{p}}[/tex] is a symmetric matrix; in fact, it typically is not. Rather, (1.44c) says that [tex]R_{i \overline{p}}[/tex] is the transpose of [tex]R_{\overline{p} i}[/tex]".

    http://www.pma.caltech.edu/Courses/ph136/yr2008/

    So perhaps the rule is that switching the bar means inversion. Then:

    [tex]R_{i \overline{p}} = R_{\overline{p} i} \rightarrow R = \left(R^{-1} \right)^{T}[/tex]

    And counterfactually, I'm very tentatively guessing, perhaps their notation could be used to indicate:

    [tex]R_{i \overline{p}} = R_{\overline{i} p} \rightarrow R = R^{-1}[/tex]

    [tex]R_{i \overline{p}} = R_{p \overline{i}} \rightarrow R = R^{-T}[/tex]

    The latter is what would have indicated that both matrices written R were symmetric, if that had been the case.

    In section 1.7.2, where they introduce Lorentz transformations with indices on different levels, Thorne and Blandford write, "Notice the up/down placement of indices on the elements of the transformation matrices: the first index is always up, and the second is always down." Presumably this is the same convention Carroll attributes to Schutz on p. 10. Would I be right in thinking that this is made possible because switching a pair of indices horizontally, when one is up and the other down, is superfluous if inversion is also indicated by switching the height of the prime (or bar, as the case may be)? But if we use a system such as the one you did in #9, where there are no primes or bars and inversion is indicated only by a switch in horizontal order, then I suppose it would be essential, unless we use different letters for the direct and inverse transformation matrices, as Ruslan Sharipov does in his Quick Introduction to Tensor Analysis, pp. 13-16.

    http://arxiv.org/abs/math.HO/0403252

    (He denotes one with S, the other T.) Even more transparently, D.H. Griffel, in Linear Algebra and its Applications, uses the letter P for a change-of-basis matrix and simply writes its inverse as [itex]P^{-1}[/itex]:

    [tex]b_{k} = \sum_{i=1}^{n} P_{ik} b'_{i}, \qquad b'_{j} = \sum_{k=1}^{n} \left(P^{-1} \right)_{kj} b_{k}[/tex]
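
    A small NumPy sketch of Griffel's scheme (the example basis is my own, chosen only for illustration), keeping the direct and inverse change-of-basis matrices as two explicitly distinct arrays:

[code]
# Griffel-style notation: P and P_inv are kept as two different arrays,
# with no index gymnastics at all.  (Arbitrary example basis.)
import numpy as np

# Columns of B are the basis vectors b_k, columns of B_prime are the b'_j,
# both written out in some fixed background frame.
B       = np.eye(3)
B_prime = np.array([[1.0, 1.0, 0.0],
                    [0.0, 1.0, 1.0],
                    [0.0, 0.0, 1.0]])

P     = np.linalg.inv(B_prime) @ B     # b_k = sum_i P_ik b'_i
P_inv = np.linalg.inv(P)               # b'_j = sum_k (P^-1)_kj b_k

print(np.allclose(B,       B_prime @ P))   # True
print(np.allclose(B_prime, B @ P_inv))     # True
[/code]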

    In post #10, I got to

    [tex]\eta = \Lambda^{T}\eta\Lambda[/tex]

    only by assuming that all my lambdas referred to the same matrix, and that exchanging indices represented a transposition. I suppose, in the later step where I said I "mechanically followed" the index-manipulation rules, perhaps the device for representing inversion by a horizontal switch is encoded into these rules, leading me to:

    [tex]\left( \Lambda^{-1} \right)^{\beta}_{\;\nu'} = \Lambda_{\nu'}^{\;\beta}[/tex]

    except that wouldn't be right according to Thorne & Blandford and Schutz's convention, whereby inversion is represented only by the position of a bar or prime, and it wouldn't be right according to Carroll's rule that inversion is represented mainly by a switch in the position of the prime as well as, supplementarily, by a horizontal switch of indices. I didn't even know about this horizontal switching convention before, and as far as I knew the rules I was following were those generally used, e.g. by Blandford and Thorne, so I'm thinking it shouldn't depend on this horizontal switching rule. Of course, it's entirely possible I've misunderstood or misapplied the rules.

    So, in conclusion, I'm still baffled! What went wrong in #10?
     
    Last edited: Nov 22, 2009
  18. Nov 22, 2009 #17
    P. 17: "Notice that [...] 'free' indices must be the same on both sides of an equation." So I guess it is an exception.
     
  19. Nov 22, 2009 #18

    haushofer


    Ok :)

    I think you could say that.

    Note that if I have a tensor like Maxwell's tensor,

    [tex]
    F_{\mu\nu}
    [/tex]

    I can say that [itex]F_{0\nu}[/itex] indicates the first row, [itex]F_{1\nu}[/itex] the second row, and [itex]F_{\mu 0}[/itex] the first column etc. So mu indicates the row, and nu the column. If I now write down

    [tex]
    F_{\nu\mu}
    [/tex]

    this really is a transpose: I interchanged rows and columns. This is easily done, because the indices are at the same height. And because F is a 2-form you know that

    [tex]
    F_{\nu\mu} = - F_{\mu\nu}
    [/tex]

    So it's antisymmetric and has 4*3/2 = 6 components, just the right number to include the electric and magnetic fields. If you want to do the same with a tensor

    [tex]
    T^{\mu}_{\ \nu}
    [/tex]

    I would first bring down the mu index with the metric:

    [tex]
    T_{\rho\nu} = \eta_{\rho\mu} T^{\mu}_{\ \nu}
    [/tex]

    Again you can say that rho indicates a row and nu a column, and again

    [tex]
    T_{\nu\rho}
    [/tex]

    is represented by the transpose; you again switch rows and columns. Note that switching indices at the same height doesn't require a metric, but switching indices which are at different heights does! So be careful with this. If you want to compare the tensors (note that the index names are just labels; we only care about their positions!)

    [tex]
    T^{\mu}_{\ \nu}
    [/tex]

    and

    [tex]
    T_{\alpha}^{\ \beta}
    [/tex]

    you should write

    [tex]
    \eta_{\beta\nu}\eta^{\mu\alpha}T_{\alpha}^{\ \beta} = T^{\mu}_{\ \nu}
    [/tex]

    This is not ordinary "transposing", but multiplication by two metric tensors. I believe this is also where your confusion about these Lorentz transformations came from.
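
    As a numerical sketch of this last point (NumPy, with a random example array for T and the (-,+,+,+) signature assumed):

[code]
# Relating the two mixed forms of a tensor, T^m_n and T_a^b, needs two
# factors of the metric, and in Minkowski signature the result is generally
# not the plain transpose.  (T is a random example array; eta is its own
# inverse for this signature.)
import numpy as np

rng = np.random.default_rng(0)
eta = np.diag([-1.0, 1.0, 1.0, 1.0])
T_ud = rng.normal(size=(4, 4))                     # components T^m_n (m = row, n = column)

T_du = np.einsum('am,mn,nb->ab', eta, T_ud, eta)   # T_a^b = eta_{am} T^m_n eta^{nb}
print(np.allclose(T_du, T_ud.T))                   # False: not an ordinary transpose

# Two more metrics take you back to the original components:
print(np.allclose(np.einsum('bn,ma,ab->mn', eta, eta, T_du), T_ud))   # True
[/code]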
     