
Index Notation interpretation

  1. Apr 20, 2014 #1
    I'm having some confusion with index notation and how it works with contravariance/covariance.

    [itex](v_{new})^i=\frac{\partial (x_{new})^i}{\partial (x_{old})^j}(v_{old})^j[/itex]

    [itex](v_{new})^i=J^i_{\ j}(v_{old})^j[/itex]


    [itex](v_{new})_i=\frac{\partial (x_{old})^j}{\partial (x_{new})^i}(v_{old})_j[/itex]

    [itex](v_{new})_i=(J^{-1})^j_{\ i}(v_{old})_j[/itex]

    So these are the standard rules for transforming contra and covariant vectors.
    Now if we want to convert these into matrix equations, is there an exact set of rules regarding index position?

    For example, for the covariant transformation I can transpose the matrix, which swaps the index order (I'm not sure how that makes sense), and this gives the right answer if we treat covariant vectors as columns.

    Or I can treat [itex](v_{old})_j[/itex] as a row vector and move it to the left of the [itex]J^{-1}[/itex]; this also gives the right answer, and I never need to invoke a transpose in this interpretation.

    Both of these interpretations give the correct answer, but they seem to assign different meanings to upper versus lower indices and to horizontal order.

    Is there a best way to think about this? Which interpretation makes the most sense for raising and lowering indices with the metric tensor and for transforming higher-order tensors?
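    As a numerical sanity check of the two interpretations, here is a small sketch with a made-up 2×2 Jacobian (the matrix and vectors are my own toy example, not anything specific from the thread):

    ```python
    import numpy as np

    # Hypothetical fixed Jacobian J^i_j of a linear change of coordinates
    J = np.array([[2.0, 1.0],
                  [0.0, 3.0]])
    J_inv = np.linalg.inv(J)

    v_old = np.array([1.0, 2.0])   # contravariant components (v_old)^i
    w_old = np.array([4.0, -1.0])  # covariant components (w_old)_i

    # Contravariant rule: (v_new)^i = J^i_j (v_old)^j, an ordinary
    # matrix-times-column product
    v_new = J @ v_old

    # Covariant rule: (w_new)_i = (J^-1)^j_i (w_old)_j
    # Interpretation 1: transpose J^-1 and keep w as a column vector
    w_new_col = J_inv.T @ w_old

    # Interpretation 2: treat w as a row vector on the left of J^-1
    w_new_row = w_old @ J_inv

    # Both interpretations produce the same components
    assert np.allclose(w_new_col, w_new_row)

    # Consistency check: the contraction w_i v^i is coordinate-independent
    assert np.isclose(w_old @ v_old, w_new_col @ v_new)
    ```

    The last assertion is the reason the two rules fit together: the transpose on ##J^{-1}## in one picture and the row-vector placement in the other are both just bookkeeping for the same index contraction.
    
    
    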
     
  3. Apr 21, 2014 #2

    Fredrik


    I suggest the following rules:
    1. If the metric tensor is denoted by ##g##, the matrix with ##g_{ij}## on row i, column j is denoted by ##g## as well.

    2. The component on row i, column j of the matrix ##g^{-1}## is denoted by ##g^{ij}##.

    3. For most other matrices X, the element on row i, column j is denoted by ##X^i{}_j##.

    4. For n×1 matrices v, you don't write the column index. In other words, you write ##v^i## instead of ##v^i{}_1##.

    5. For 1×n matrices v, if you use them at all (it's probably best if you don't), you can't drop the row index from the notation. The notation ##v_i## is already reserved for ##g_{ij}v^j=g_{ij}v^j{}_1##, so it can't be used for ##v^1{}_i##.

    6. In products, you transpose matrices if you have to, to ensure that each index that's summed over appears once upstairs and once downstairs.​
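    A quick numerical illustration of rules 1, 2 and 4–6 (the metric and vector below are my own toy example):

    ```python
    import numpy as np

    # Rule 1: the matrix with g_{ij} on row i, column j is also called g
    g = np.array([[1.0, 0.5],
                  [0.5, 2.0]])
    g_inv = np.linalg.inv(g)   # rule 2: its components are g^{ij}

    v = np.array([1.0, -3.0])  # rule 4: an n x 1 matrix, written v^i

    # Rule 5: v_i is reserved for the lowered components g_{ij} v^j
    v_lower = g @ v

    # Raising the index back with g^{ij} recovers the original components
    assert np.allclose(g_inv @ v_lower, v)

    # Rule 6: the scalar v_i v^i is v^T g v; the transpose is there so the
    # summed index appears once upstairs and once downstairs
    assert np.isclose(v_lower @ v, v @ g @ v)
    ```
    
    
    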


    Example: A Lorentz transformation is a linear function ##\Lambda:\mathbb R^4\to\mathbb R^4## such that ##\Lambda^T\eta\Lambda=\eta##. The component on row ##\mu##, column ##\nu## of this equation is
    $$\eta_{\mu\nu}=(\Lambda^T)^\mu{}_\rho \eta_{\rho\sigma}\Lambda^\sigma{}_\nu =\Lambda^\rho{}_\mu\eta_{\rho\sigma}\Lambda^\sigma{}_\nu.$$ The intermediate step is usually not written out, because ##\rho## appears twice downstairs.
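    To see ##\Lambda^T\eta\Lambda=\eta## hold numerically, here is a check with a boost along x (the signature ##(-,+,+,+)## and ##\beta=0.6## are arbitrary choices of mine):

    ```python
    import numpy as np

    beta = 0.6
    gamma = 1.0 / np.sqrt(1.0 - beta**2)

    eta = np.diag([-1.0, 1.0, 1.0, 1.0])  # Minkowski metric, signature (-,+,+,+)

    # Boost along the x axis with velocity beta (units with c = 1)
    Lam = np.array([[gamma,       -gamma*beta, 0.0, 0.0],
                    [-gamma*beta,  gamma,      0.0, 0.0],
                    [0.0,          0.0,        1.0, 0.0],
                    [0.0,          0.0,        0.0, 1.0]])

    # The defining condition of a Lorentz transformation
    assert np.allclose(Lam.T @ eta @ Lam, eta)
    ```
    
    
    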

    Note that the horizontal positions of the indices are important, because of weird things like this:
    $$(\Lambda^{-1})^\mu{}_\nu = (\eta^{-1}\Lambda^T\eta)^\mu{}_\nu =\eta^{\mu\rho} \Lambda^\sigma{}_\rho \eta_{\sigma\nu} =\Lambda_\nu{}^\mu.$$
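    The identity ##\Lambda^{-1}=\eta^{-1}\Lambda^T\eta## used in that last line follows from ##\Lambda^T\eta\Lambda=\eta## and can be verified numerically with a sample boost (my own example):

    ```python
    import numpy as np

    beta = 0.6
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    eta = np.diag([-1.0, 1.0, 1.0, 1.0])
    Lam = np.array([[gamma,       -gamma*beta, 0.0, 0.0],
                    [-gamma*beta,  gamma,      0.0, 0.0],
                    [0.0,          0.0,        1.0, 0.0],
                    [0.0,          0.0,        0.0, 1.0]])

    # Lambda^{-1} = eta^{-1} Lambda^T eta, which in index notation reads
    # (Lambda^{-1})^mu_nu = eta^{mu rho} Lambda^sigma_rho eta_{sigma nu}
    lhs = np.linalg.inv(Lam)
    rhs = np.linalg.inv(eta) @ Lam.T @ eta
    assert np.allclose(lhs, rhs)
    ```
    
    
    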
     