
Tensor confusion

  1. Dec 8, 2009 #1
    If [itex]\Lambda^{\mu}_{\hspace{3 mm}\nu} = \partial_{\nu}x'^{\mu} = \frac{\partial x'^{\mu}}{\partial x^{\nu}}[/itex]

    does that mean [itex]\Lambda_{\mu}^{\hspace{3 mm}\nu} = \partial^{\nu}x'_{\mu} = \frac{\partial x'_{\mu}}{\partial x_{\nu}}[/itex] ?
     
  3. Dec 8, 2009 #2

    George Jones


    Doesn't

    [tex]\Lambda^{\mu}_{\hspace{3 mm}\nu} = \partial_{\nu}x'^{\mu} = \frac{\partial x'^{\mu}}{\partial x^{\nu}}[/tex]

    mean that

    [tex]\Lambda_{\mu}^{\hspace{3 mm}\nu} = \partial^{\nu}x'_{\mu} = \frac{\partial x'_{\mu}}{\partial x_{\nu}}?[/tex]
     
  4. Dec 8, 2009 #3
    Yes George that's what I meant to write, sorry about that. Is that correct?

    Does that also mean that [tex]\Lambda_{\mu}^{\hspace{3 mm}\nu} = \Lambda^{\nu}_{\hspace{3 mm} \mu} [/tex] ?
     
  5. Dec 8, 2009 #4

    George Jones


    I think so (to the first question).
    No (to the second):

    [tex]\Lambda_{\mu}^{\hspace{3 mm}\nu} = \eta_{\mu \alpha} \Lambda^{\alpha \nu} = \eta_{\mu \alpha} \eta^{\beta \nu} \Lambda^\alpha{}_\beta[/tex]
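
    For a concrete example (assuming signature (-,+,+,+) and a boost along x with speed [itex]\beta[/itex], writing only the t-x block):

    [tex]\Lambda^{\mu}{}_{\nu} = \begin{pmatrix} \gamma & -\gamma\beta \\ -\gamma\beta & \gamma \end{pmatrix} \quad\Rightarrow\quad \Lambda_{\mu}{}^{\nu} = \eta_{\mu \alpha} \eta^{\beta \nu} \Lambda^\alpha{}_\beta = \begin{pmatrix} \gamma & \gamma\beta \\ \gamma\beta & \gamma \end{pmatrix} \neq \Lambda^{\nu}{}_{\mu}[/tex]

    Lowering the first index and raising the second flips the sign of the off-diagonal entries, so you get the components of the inverse boost, not the original matrix with its indices merely relabelled.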
     
  6. Dec 8, 2009 #5
    So [itex]\Lambda_{\mu}^{\hspace{3 mm}\nu} = (\Lambda^{-1})^\nu{}_\mu[/itex]? I.e. [itex]\Lambda_{\mu}^{\hspace{3 mm}\nu}[/itex] is the inverse of [itex]\Lambda^{\nu}_{\hspace{3 mm} \mu}[/itex]?


    So if I wanted to multiply two Lambdas together, it's only in certain cases that we get the Kronecker delta?

    For instance: [tex] \Lambda^{\mu}_{\hspace{3 mm} \alpha} \Lambda_{\nu}^{\hspace{3 mm}\alpha}= \delta^\mu_\nu[/tex]

    and [tex] \Lambda^{\mu}_{\hspace{3 mm} \alpha} \Lambda_{\mu}^{\hspace{3 mm}\nu}= \delta^\nu_\alpha[/tex]

    Is that right?
     
  7. Dec 8, 2009 #6

    Fredrik


    That's right. See this post for a little bit more.
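
    If you want to check relations like these numerically, here is a minimal numpy sketch (the particular boost, the value of beta, and the variable names are just illustrative choices):

[code]
import numpy as np

# Minkowski metric with signature (-,+,+,+)
eta = np.diag([-1.0, 1.0, 1.0, 1.0])
eta_inv = np.linalg.inv(eta)  # numerically equal to eta

# A sample boost along x with beta = 0.6 (so gamma = 1.25)
beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)
L = np.array([[gamma, -gamma*beta, 0, 0],   # components Lambda^mu_nu,
              [-gamma*beta, gamma, 0, 0],   # row = mu, column = nu
              [0, 0, 1, 0],
              [0, 0, 0, 1]])

# Components Lambda_mu^nu = eta_{mu alpha} Lambda^alpha_beta eta^{beta nu}
L_low_up = eta @ L @ eta_inv

# Lambda^mu_alpha Lambda_nu^alpha = delta^mu_nu  (sum over the second index of each factor)
print(np.allclose(np.einsum('ma,na->mn', L, L_low_up), np.eye(4)))  # True

# Lambda^mu_alpha Lambda_mu^nu = delta^nu_alpha  (sum over the first index of each factor)
print(np.allclose(np.einsum('ma,mn->na', L, L_low_up), np.eye(4)))  # True
[/code]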
     
  8. Dec 9, 2009 #7
    OK some more conceptual problems I'm having:

    [tex]\Lambda_{\mu}^{\hspace{3 mm}\nu} = \eta_{\mu \alpha} \Lambda^{\alpha \nu} = \eta_{\mu \alpha} \eta^{\beta \nu} \Lambda^\alpha{}_\beta[/tex]

    But matrix multiplication is associative, so [tex] (\eta_{\mu \alpha} \eta^{\beta \nu}) \Lambda^\alpha{}_\beta = \eta_{\mu \alpha} (\eta^{\beta \nu} \Lambda^\alpha{}_\beta)[/tex], but surely [tex] (\eta_{\mu \alpha} \eta^{\beta \nu})[/tex] is equal to the identity matrix?
     
  9. Dec 9, 2009 #8

    haushofer


    In

    [tex]
    \eta_{\mu \alpha} \eta^{\beta \nu}
    [/tex]

    you basically take the tensor product of an n-dimensional covariant metric and a contravariant metric, and you end up with a type (2,2) tensor. Normally this is not represented by an n×n matrix (unless you adopt a convention in which you build a 16×16 matrix by multiplying every element of the first matrix by the second matrix, but that's not what is useful here).

    If you took a contraction, then of course you could write things down in terms of simple matrix multiplication. But you don't.
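
    A quick way to see the two situations side by side, sketched in numpy (the names here are purely illustrative):

[code]
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])   # covariant metric, signature (-,+,+,+)
eta_inv = np.linalg.inv(eta)           # contravariant metric

# Tensor (outer) product: a type (2,2) object with 4**4 = 256 components,
# which you could flatten into a 16x16 matrix if you adopted that convention
outer = np.einsum('ma,bn->mabn', eta, eta_inv)
print(outer.shape)  # (4, 4, 4, 4)

# Contraction over a shared index: this IS ordinary matrix multiplication
print(np.allclose(np.einsum('ma,an->mn', eta, eta_inv), np.eye(4)))  # True
[/code]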
     
  10. Dec 9, 2009 #9

    Fredrik


    It's not. It's the product of one component of [itex]\eta[/itex] and one component of [itex]\eta^{-1}[/itex]. Recall that the definition of matrix multiplication is [itex](AB)^i_j=A^i_k B^k_j[/itex] and that the right-hand side actually means [itex]\sum_k A^i_k B^k_j[/itex]. There is no summation in [tex]\eta_{\mu \alpha} \eta^{\beta \nu}[/tex].
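
    For example, with signature (-,+,+,+), a single product of components like

    [tex]\eta_{0 0}\,\eta^{1 1} = (-1)(1) = -1[/tex]

    is just one number times another, whereas the contraction [itex]\eta_{\mu \alpha}\eta^{\alpha \nu}=\delta_\mu{}^\nu[/itex] does involve a sum over [itex]\alpha[/itex] and is ordinary matrix multiplication.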

    Did you understand my calculation of the components of [itex]\Lambda^{-1}[/itex] in the thread I linked to?
     
  11. Dec 9, 2009 #10
    And the other reason why it's not an identity matrix is that we're working in Minkowski space with signature (-,+,+,+), so the [itex]\eta[/itex]'s are not themselves identity matrices.
     
  12. Dec 9, 2009 #11

    Fredrik


    You're right that they're not, but the result would still be (the components of) an identity matrix if the indices had matched. See the post I linked to.
     
  13. Dec 9, 2009 #12
    Could we write this in matrix notation as

    [tex]\left [ \Lambda_{\mu}^{\enspace \nu} \right ] = \eta \left [ \Lambda^{\mu}_{\enspace \nu} \right ] \eta^{-1} = \eta \Lambda \eta = \left [ \Lambda^{\mu}_{\enspace \nu} \right ]^{-1}[/tex]

    And am I right in thinking this equation only applies to boosts? The more general equation including boosts and rotations being:

    [tex]\eta \Lambda^{T} \eta = \Lambda^{-1}[/tex]
     
  14. Dec 9, 2009 #13

    Fredrik


    George's equations (one for each value of the indices) are just the components of a matrix equation that holds for all Lorentz transformations. See the post I linked to in #6.
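
    A quick numerical check that [itex]\eta^{-1}\Lambda^T\eta=\Lambda^{-1}[/itex] holds for rotations as well as boosts, sketched in numpy (the particular boost, rotation angle, and names are just illustrative):

[code]
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])   # signature (-,+,+,+)

def is_inverse_via_eta(L):
    """Check that eta^{-1} L^T eta equals L^{-1}."""
    return np.allclose(np.linalg.inv(eta) @ L.T @ eta, np.linalg.inv(L))

beta, gamma = 0.6, 1.25                # gamma = 1/sqrt(1 - beta^2)
boost = np.array([[gamma, -gamma*beta, 0, 0],
                  [-gamma*beta, gamma, 0, 0],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]])

th = 0.3                               # an arbitrary rotation angle about z
rot = np.array([[1, 0, 0, 0],
                [0, np.cos(th), -np.sin(th), 0],
                [0, np.sin(th),  np.cos(th), 0],
                [0, 0, 0, 1]])

# Holds for a pure boost, a pure rotation, and their product
print(is_inverse_via_eta(boost), is_inverse_via_eta(rot), is_inverse_via_eta(rot @ boost))
[/code]

    All three checks print True, so the relation is not special to boosts.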
     
  15. Dec 10, 2009 #14
    How's this for index juggling?

    [tex]\Lambda^{\mu}_{\enspace\rho} \left ( \Lambda^{-1} \right )^{\rho}_{\enspace\nu} = \delta^{\mu}_{\nu}[/tex]

    And substituting your equation for the components of [tex]\Lambda^{-1}[/tex], from post #2 of the thread you linked to:

    [tex]\Lambda^{\mu}_{\enspace\rho} \eta^{\thinspace \rho\tau} \Lambda^{\sigma}_{\enspace\tau} \eta_{\sigma\nu} = \delta^{\mu}_{\nu}[/tex]

    [tex]\Lambda^{\mu}_{\enspace\rho} \Lambda_{\nu}^{\enspace\rho} = \delta^{\mu}_{\nu}[/tex]

    Or in matrix format:

    [tex]\Lambda \eta^{-1} \Lambda^{T} \eta = I \Leftrightarrow \Lambda^{-1} = \eta^{-1} \Lambda^{T} \eta[/tex]

    I suppose what this shows is that the rules of index manipulation imply the convention that when there's a pair of indices--one up, and one down--then swapping their horizontal order (moving the leftmost index to the right, and the rightmost index to the left) inverts a Lorentz transformation. Does swapping the horizontal order of indices indicate inversion in general, or does this only work for a Lorentz transformation?

    [tex]\left [ T^{\mu}_{\enspace\nu} \right ]^{-1} = \left [ T_{\nu}^{\enspace\mu} \right ][/tex]

    And if so, since the indices are arbitrary:

    [tex]\left [ T^{\mu}_{\enspace\nu} \right ]^{-1} = \left [ T_{\alpha}^{\enspace\beta} \right ][/tex] ?

    In #18 of this thread https://www.physicsforums.com/showthread.php?t=353536&page=2 Haushofer concludes with a formula similar to George's. In fact, I think it's equivalent to George's, except that [tex]T[/tex] is used instead of [tex]\Lambda[/tex]. If I was going to try to write this in matrix notation, I'd write:

    [tex]\left [ T^{\mu}_{\enspace\nu} \right ] = \eta^{-1} \left [ T_{\alpha}^{\enspace\beta} \right ]^{T} \eta[/tex]

    Is that correct? Then if [tex]T[/tex] was a Lorentz transformation, I guess we'd know that [tex]\left [ T^{\mu}_{\enspace\nu} \right ][/tex] is the inverse of [tex]\left [ T_{\alpha}^{\enspace\beta} \right ][/tex]. But since not everything is a Lorentz transformation, I'm guessing maybe it's not true in general that

    [tex]\left [ T^{\mu}_{\enspace\nu} \right ]^{-1} = \left [ T_{\alpha}^{\enspace\beta} \right ][/tex]
     
  16. Dec 10, 2009 #15

    Fredrik


    It's all good.

    That's right. I like to take [itex]\Lambda^T\eta\Lambda=\eta[/itex] as the definition of a Lorentz transformation. If we multiply this with [itex]\eta^{-1}[/itex] from the left, we get [itex]\eta^{-1}\Lambda^T\eta\Lambda=I[/itex], which implies that [itex]\Lambda^{-1}[/itex] is what you said.

    To go from any of the nice and simple matrix equations to the corresponding result with lots of mostly pointless and annoying indices, you simply use the definition of matrix multiplication stated above, the summation convention, and the notational convention described in the other thread.

    Only for Lorentz transformations, because it follows from the formula for [itex]\Lambda^{-1}[/itex] that you found (which only holds when [itex]\Lambda[/itex] is a Lorentz transformation), and those other things I just mentioned.

    What you need to understand is that while [itex]T^\alpha{}_\beta[/itex] is defined as the component of T on row [itex]\alpha[/itex], column [itex]\beta[/itex], [itex]T_\alpha{}^\beta[/itex] is defined as the component on row [itex]\alpha[/itex], column [itex]\beta[/itex] of [itex]\eta T\eta^{-1}[/itex]. (This is just the convention to use the metric to raise and lower indices). So your equation says that [itex]T^{-1}=\eta T\eta^{-1}[/itex], or maybe that [itex]T^{-1}=(\eta T\eta^{-1})^T=\eta^{-1}T^T\eta[/itex]. The second alternative makes more sense (since it would be true for Lorentz transformations), so that suggests that if we use that bracket notation to indicate "the matrix with these components", we should actually interpret it as "the transpose of the matrix with these components" when the indices are "first one downstairs, second one upstairs". (A better option is probably to avoid that notation when you can).

    His formula is just an equivalent way to define what we mean by [itex]T_\alpha{}^\beta[/itex].
     
  17. Dec 10, 2009 #16
    Phew! Thanks, that's a relief to know.

    Ah, another source of confusion... This differs from the convention explained by Ruslan Sharipov in his Quick Introduction to Tensor Analysis, which I'd assumed was the rule everyone followed:

    "For any double indexed array with indices on the same level (both upper or both lower) the first index is a row number, while the second index is a column number. If indices are on different levels (one upper and one lower), then the upper index is a row number, while lower one is a column number."

    I gather that some people follow a convention whereby upper indices are always written first, [itex]T^{\alpha}_{\enspace\beta}[/itex], or where an arbitrary type-(1,1) tensor is written [itex]T^{\alpha}_{\beta}[/itex] (this is what Sharipov does), so that only the order of indices on the same level as each other is significant. Others use a convention whereby changing the horizontal order of a pair of indices on a type-(1,1) tensor does make a difference (indicating inversion of a Lorentz transformation, and I don't know what--if anything--it indicates more generally). So maybe Sharipov's rule shouldn't be applied to the system of index manipulation in which [itex]T^{\alpha}_{\enspace\beta}[/itex] doesn't necessarily equal [itex]T_{\beta}^{\enspace\alpha}[/itex].
     
  18. Dec 10, 2009 #17

    atyy


    In GR, if you raise and lower indices, then the ordering matters, including leaving appropriate spaces for upstairs and downstairs indices. In SR, there are some tricks where you don't have to keep track of this, because of the fixed background and the restriction to Lorentz inertial frames, but I don't remember the rules off the top of my head.
     
  19. Dec 10, 2009 #18

    Fredrik


    I don't know what "everyone" is using, but his convention does make sense. It seems that if we use his convention, we can use the [] notation consistently regardless of whether the left or the right index is upstairs. Either way we have [tex]T_\alpha{}^\beta=\eta_{\alpha\gamma} T^\gamma{}_\delta\,\eta^{\delta\beta}[/tex]. The question is, do we want to interpret that as the components of [itex]\eta T\eta^{-1}[/itex] or as the components of the transpose of that?

    Adding to what atyy said, [itex]T_\alpha{}^\beta[/itex] would be the result of having a tensor [itex]T:V\times V^*\rightarrow\mathbb R[/itex] act on basis vectors, and [itex]S^\alpha{}_\beta[/itex] would be the result of having a tensor [itex]S:V^*\times V\rightarrow\mathbb R[/itex] act on basis vectors. Here V is a vector space (usually the tangent space at some point of a manifold) and V* its dual space. So the positions of the indices determine what type of tensor we're dealing with.

    In SR, there's no reason to even think about tensors (at least not in a situation I can think of right now), so I would really prefer to just write components of matrices as [itex]A^\mu_\nu[/itex] or [itex]A_{\mu\nu}[/itex]. The notational convention we've been discussing in this thread just makes everything more complicated without having any significant benefits. It ensures that we never have to write [itex]^{-1}[/itex] or [itex]^T[/itex] on a Lorentz transformation matrix, but I think that's it.

    I really don't get why so many (all?) authors choose to write equations like [tex]\Lambda^T\eta\Lambda=\eta[/tex] in component form.
     
  20. Dec 10, 2009 #19
    Sean Carroll, in his GR lecture notes, ch. 1, p. 10, writes, breaking another of Sharipov's rules (that indices should match at the same height on opposite sides of an equation),

    We will [...] introduce a somewhat subtle notation by using the same symbol for both matrices [a Lorentz transformation and its inverse], just with primed and unprimed indices adjusted. That is,

    [tex]\left(\Lambda^{-1} \right)^{\nu'}_{\enspace \mu} = \Lambda_{\nu'}^{\enspace \mu}[/tex]

    or

    [tex]\Lambda_{\nu'}^{\enspace\mu} \Lambda^{\sigma'}_{\enspace\mu} = \delta^{\sigma'}_{\nu'} \qquad \Lambda_{\nu'}^{\enspace\mu} \Lambda^{\nu'}_{\enspace\rho} = \delta^{\mu}_{\rho}[/tex]

    (Note that Schutz uses a different convention, always arranging the two indices northwest/southeast; the important thing is where the primes go.)

    http://preposterousuniverse.com/grnotes/

    I haven't seen Schutz's First Course in General Relativity, so I don't know any more about that, but in Blandford and Thorne's Applications of Classical Physics, 1.7.2, where they introduce Lorentz transformations, they write

    [tex]L^{\overline{\mu}}_{\enspace \alpha} L^{\alpha}_{\enspace \overline{\nu}} = \delta^{\overline{\mu}}_{\enspace \overline{\nu}} \qquad L^{\alpha}_{\enspace \overline{\mu}} L^{\overline{\mu}}_{\enspace \beta} = \delta^{\alpha}_{\enspace \beta} [/tex]

    Notice the up/down placement of indices on the elements of the transformation matrices: the first index is always up, and the second is always down.

    Perhaps this is similar to Schutz's notation. Is the role of the left-right ordering, in other people's notation, fulfilled here in Blandford and Thorne's notation by the position of the bar, or would left-right ordering still be needed for a more general treatment?

    In other sources I've looked at, such as Bowen and Wang's Introduction to Vectors and Tensors, tensors are just said to be of type, or valency, (p,q), p-times contravariant and q-times covariant, requiring p up indices and q down indices:

    [tex]T : V^{*}_{1} \times ... \times V^{*}_{p} \times V_{1} \times ... \times V_{q} \to \mathbb{R}[/tex]

    ...with up and down indices ordered separately. So apparently it's more complicated than I realised. Is there any way of explaining or hinting at why leaving spaces for up and down indices becomes important in GR to someone just starting out and battling with the basics of terminology, definitions and notational conventions?
     
  21. Dec 10, 2009 #20

    atyy


    A tensor eats a bunch of one forms and tangent vectors and spits out a number. As you can see from the above definition, it matters which one forms and vectors go into which mouth of a tensor, so the order of the up and down indices matters. It is a good idea to keep in mind that one forms and vectors are separate objects.

    However, when there is a metric tensor, each one form can be associated with a vector as follows. A one form eats a vector and spits out a number. The metric tensor eats two vectors and spits out a number. So if a metric tensor eats one vector, it's still hungry and can eat another vector, so the half-full metric tensor is a one form. This is defined as the one form associated with the vector that the half-full metric tensor has just eaten, and is denoted by the same symbol as that vector, but with its index lowered or raised (I don't remember which one). So the combined requirement of keeping track of which mouth of a tensor eats what, and the ability to raise or lower indices means that we have to keep track of the combined order of up and down indices.
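
    Written out in index notation, the standard formula (it is the lowered index, for the record) is

    [tex]v_{\mu} = g_{\mu \nu}\, v^{\nu}[/tex]

    i.e. [itex]v_{\mu}[/itex] denotes the one form produced by the half-full metric tensor after it has eaten the vector [itex]v^{\nu}[/itex].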
     