
Index notation, covector transform (matrix representation)

  Oct 27, 2016 #1
    Just a couple of quick questions on index notation; my confusion may come from thinking of everything in terms of matrix representations:

    1) ##V^{u}B_{kl}=B_{kl}V^{u}##, i.e. you are free to switch the order of the objects. I had no idea you could do this, and I don't really understand it, for two reasons:

    i) In the above, say I am in 4-d space and ##V^{u}## is a 4-vector, which can be represented as a column vector, and ##B_{kl}## can be written as a 4x4 matrix; then the LHS doesn't make sense, since you can't multiply a column vector by a matrix in that order, but the right side does.

    ii) Or say I have ##A_{mn}B_{kl}## and both of these can be represented as a 4x4 matrix, and matrix multiplication is in general not commutative....

    2) I am looking at how a covector transforms and I have:

    ##w_{u}=\Lambda^{v}{}_{u} w'_{v}##, where ##w'_{v}## are the components of the covector in some other coordinates ##x'^{u}## rather than ##x^{u}##, and ##\Lambda## is the Jacobian matrix of the coordinate transformation, ##\Lambda = \frac{\partial x'}{\partial x}##.

    Now I want to invert this. I wanted to multiply both sides by ##(\Lambda^{v}{}_{u})^{-1}##, but I get ##w'_{u}= (\Lambda^{u}{}_{v})^{-1} w_{v}##, which I can see straight away is wrong from the inconsistent placement of the indices.

    (I am able to derive the correct expression quite simply using the chain rule, but I'd like to know what is wrong with what I am doing.)

    Many thanks in advance.
     
  Oct 27, 2016 #2

    andrewkirk

    Science Advisor
    Homework Helper
    Gold Member

    Your notation is non-standard. ##V^uB_{kl}## is not an operation on vectors and matrices. It is just a multiplication of two scalars. To regard it as an operation involving vectors, some of the indices need to match so that Einstein summation notation can come into play. Ideally, the matching indices should have opposite heights - one up and one down - although not all authors follow that commendable practice.

    For instance ##V_uB^u{}_l## denotes the ##l##th component of the row vector obtained by post-multiplying the row vector with components ##V_u## by the matrix with components ##B^u{}_l##. Or, in some notation conventions, it is considered as denoting the resulting vector in its entirety rather than just its ##l##th component.

    The matching of indices, and which indices are matched, is crucial. If we change the above multiplication to ##V^lB^k{}_l## we have denoted the ##k##th component of the column vector obtained by pre-multiplying the column vector with components ##V^l## by the aforesaid matrix. It doesn't matter whether we write it as ##V^lB^k{}_l## or ##B^k{}_lV^l##, since scalar multiplication is commutative. What matters is whether the index of ##V## matches the first or second index of ##B##.
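    For anyone who wants to see this numerically, here is a minimal NumPy sketch (the arrays ##V## and ##B## are random placeholders, not anything from the thread). It checks that matching the index of ##V## against the first or second index of ##B## reproduces the row-vector and column-vector matrix products, and that the written order of the factors does not matter:

```python
import numpy as np

rng = np.random.default_rng(0)
V = rng.normal(size=4)       # components V^u (NumPy has no notion of index height)
B = rng.normal(size=(4, 4))  # components B^u_l: first index = row, second index = column

# V_u B^u_l : V's index matches the FIRST index of B -> row vector times matrix, (V^T B)_l
row_result = np.einsum('u,ul->l', V, B)
assert np.allclose(row_result, V @ B)

# V^l B^k_l : V's index matches the SECOND index of B -> matrix times column vector, (B V)_k
col_result = np.einsum('l,kl->k', V, B)
assert np.allclose(col_result, B @ V)

# Writing the factors in the other order changes nothing: the components are just numbers.
assert np.allclose(np.einsum('kl,l->k', B, V), col_result)
```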
     
  Oct 27, 2016 #3

    dextercioby

    Science Advisor
    Homework Helper

    I am afraid you got it all wrong and I'd blame the book you are reading.
    The moment you hear about tensors, you should make a mental leap and let go of the (albeit sometimes useful) concept of a rectangular/square matrix. You should never judge tensor algebra/analysis in terms of matrices, because the connection between them is not immediate. If you choose a basis in the vector space (or in a tensor product of vector spaces), then you can arrange the components of an at-most-2nd-rank tensor into a matrix: a single column or a single row for vectors and covectors, respectively, and a square matrix for 2nd-rank tensors.

    i) Matrix multiplication makes sense in terms of tensor algebra only if the tensor product of the tensors is contracted. Component-wise, this involves a summation, which you don't have under your point i). (See the sketch at the end of this post.)
    ii) Yes, the component-wise formula you wrote corresponds to a tensor product of two tensors. Do you really understand what this product means? I would say you don't (I blame the book, again).
    iii) This is just manipulation of components. Pay attention to the order of the indices and to the so-called "balancing" of the free indices within each of the LHS and RHS and between the two sides of the equals sign.

    [note] With some notable exceptions, like R. Wald's book on GR, you should never attempt to properly study the mathematics of relativity from physics books. You need linear algebra and differential geometry (i.e. mathematics) books. Read my signature :)
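    To make point i) concrete, here is a minimal NumPy sketch (with made-up random arrays): without a contraction, ##A_{mn}B_{kl}## is the tensor (outer) product, a rank-4 array, and only after contracting an index pair do you recover an ordinary matrix product.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))  # components A_mn
B = rng.normal(size=(4, 4))  # components B_kl

# A_mn B_kl with all four indices free: the tensor (outer) product, a rank-4 array.
T = np.einsum('mn,kl->mnkl', A, B)
assert T.shape == (4, 4, 4, 4)   # not a matrix

# Contracting the inner index pair, A_ml B_ln, reproduces ordinary matrix multiplication.
C = np.einsum('ml,ln->mn', A, B)
assert np.allclose(C, A @ B)
```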
     
  Oct 28, 2016 #4

    vanhees71

    Science Advisor
    2016 Award

    It's indeed important to stress that an expression like ##V^{\mu}## is not a vector but the components of a vector with respect to some basis. A vector is an invariant expression. If ##\boldsymbol{b}_{\mu}## denotes the basis, then the vector is given in terms of its components by (Einstein summation implied)
    $$\boldsymbol{V}=V^{\mu} \boldsymbol{b}_{\mu}.$$
    The same holds for tensors. Here you need the dual basis, ##\boldsymbol{b}^{\mu}##, which is a linear form defined by mapping the basis vectors as
    $$\boldsymbol{b}^{\mu}(\boldsymbol{b}_{\nu})=\delta_{\nu}^{\mu}$$
    with the Kronecker symbol ##\delta_{\nu}^{\mu}##, which is ##1## if ##\mu=\nu## and ##0## if ##\mu \neq \nu##.

    Tensors are multilinear forms with entries being vectors or dual vectors (linear forms). The invariant object is given in terms of its components for, e.g., a tensor of rank (2,1)
    $$\boldsymbol{T}={T_{\mu \nu}}^{\rho} \boldsymbol{b}^{\mu} \otimes \boldsymbol{b}^{\nu} \otimes \boldsymbol{b}_{\rho}.$$
    This implies that for two vectors ##\boldsymbol{U}##, ##\boldsymbol{V}## and a co-vector ##\boldsymbol{W}## in components you have
    $$\boldsymbol{T}(\boldsymbol{U},\boldsymbol{V};\boldsymbol{W})={T_{\mu \nu}}^{\rho} U^{\mu} V^{\nu} W_{\rho},$$
    where the Einstein summation convention is implied again.

    Now it's also clear how the components transform under a change of basis. Let the old basis be expressed in terms of the new one via the matrix ##{T^{\nu}}_{\mu}##:
    $$\boldsymbol{b}_{\mu}=\boldsymbol{b}_{\nu}' {T^{\nu}}_{\mu}.$$
    Expanding an arbitrary vector in both bases then tells you how its components transform:
    $$\boldsymbol{V}=V^{\mu} \boldsymbol{b}_{\mu}=V^{\mu} {T^{\nu}}_{\mu} \boldsymbol{b}_{\nu}'=V^{\prime \nu} \boldsymbol{b}_{\nu}'.$$
    Since the decomposition of a vector wrt. to a basis is unique (since the basis vectors are a linearly independent set of vectors), this implies that
    $$V^{\prime \nu}={T^{\nu}}_{\mu} V^{\mu}.$$
    Now we can also figure out how the dual bases transform into each other. You have
    $$\delta_{\nu}^{\mu}=\boldsymbol{b}^{\mu}(\boldsymbol{b}_{\nu}) = \boldsymbol{b}^{\mu}(\boldsymbol{b}_{\rho}') {T^{\rho}}_{\nu}.$$
    Now there should be a transformation
    $$\boldsymbol{b}^{\mu}={U^{\mu}}_{\sigma} \boldsymbol{b}^{\prime \sigma}.$$
    To find it we use the previous equation to write
    $$\delta_{\nu}^{\mu}={T^{\rho}}_{\nu} {U^{\mu}}_{\sigma} \boldsymbol{b}^{\prime \sigma}(\boldsymbol{b}_{\rho}') = {T^{\rho}}_{\nu} {U^{\mu}}_{\sigma} \delta_{\rho}^{\sigma} = {T^{\rho}}_{\nu} {U^{\mu}}_{\rho} \; \Rightarrow \; {U^{\mu}}_{\rho}= {(T^{-1})^{\mu}}_{\rho}.$$
    The dual basis vectors transform contragrediently (i.e., with the inverse matrix) to the basis vectors. Now it's also clear how the components of one-forms (dual vectors) transform:
    $$\boldsymbol{S}=S_{\mu} \boldsymbol{b}^{\mu} = S_{\mu} {U^{\mu}}_{\nu} \boldsymbol{b}^{\prime \nu} \; \Rightarrow \; S_{\nu}'={U^{\mu}}_{\nu} S_{\mu}.$$
    One says the components of a co-vector (lower index) transform covariantly (as the basis vectors) and the components of a vector (upper index) contra-variantly, i.e., like the co-bases. Indeed we have
    $$\boldsymbol{b}_{\nu} = {T^{\mu}}_{\nu} \boldsymbol{b}_{\mu}' \; \Rightarrow \; \boldsymbol{b}_{\mu}'={U^{\nu}}_{\mu} \boldsymbol{b}_{\nu}.$$
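    A quick numerical sanity check of these transformation rules (a NumPy sketch with a randomly chosen matrix ##T##, assumed invertible): transforming vector components with ##T## and covector components with ##U=T^{-1}## leaves the contraction ##S_{\mu} V^{\mu}## unchanged, as it must.

```python
import numpy as np

rng = np.random.default_rng(2)
T = rng.normal(size=(4, 4))   # change-of-basis matrix T^nu_mu (generic, hence invertible)
U = np.linalg.inv(T)          # U^mu_rho = (T^{-1})^mu_rho

V = rng.normal(size=4)        # vector components V^mu
S = rng.normal(size=4)        # covector components S_mu

V_prime = np.einsum('nm,m->n', T, V)  # V'^nu = T^nu_mu V^mu  (contravariant)
S_prime = np.einsum('mn,m->n', U, S)  # S'_nu = U^mu_nu S_mu  (covariant)

# The pairing S_mu V^mu is basis independent.
assert np.isclose(S @ V, S_prime @ V_prime)
```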
     
  Nov 8, 2016 #5
    Scalar multiplication is commutative, yes, but matrix multiplication is not. So in a sense it's a matter of notation, since reading ##V^lB^k{}_l## in this 'pre-multiplying' way is not how it would be read as an ordinary matrix product?
     
  Nov 8, 2016 #6

    vanhees71

    Science Advisor
    2016 Award

    Of course, if you want to do the above calculation in matrix-vector notation, you must be careful with the order. Usually one arranges the ##V^l## in a column vector and the ##{B^k}_l## in a matrix:
    $$V=\begin{pmatrix} V^0 \\ V^1 \\ V^2 \\ V^3 \end{pmatrix}, \quad \hat{B}=\begin{pmatrix}
    {B^{0}}_0 & {B^0}_1&{B^0}_2 &{B^0}_3 \\
    {B^1}_0 & {B^1}_1 &{B^1}_2 & {B^1}_3\\
    {B^2}_0 & {B^2}_1 &{B^2}_2 & {B^2}_3\\
    {B^3}_0 & {B^3}_1 &{B^3}_2 & {B^3}_3\end{pmatrix}.$$
    Then the expression above reads ##\hat{B} V## (in this order), while ##V \hat{B}## doesn't make sense. In index notation, of course, you deal with numbers, which you simply multiply and add according to the summation convention. There the order plays no role, since addition and multiplication of numbers are commutative operations.
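    As a small NumPy sketch of the same point (random placeholder arrays): the index expression fixes which index of ##\hat{B}## is summed, so both written orders agree and correspond to the matrix product ##\hat{B}V##, while swapping the factors of a matrix product means contracting a different index.

```python
import numpy as np

rng = np.random.default_rng(3)
V = rng.normal(size=4)       # column vector of components V^l
B = rng.normal(size=(4, 4))  # B[k, l] = B^k_l: first index labels the row, second the column

# B^k_l V^l and V^l B^k_l are the same numbers: the order of the written factors is irrelevant.
assert np.allclose(np.einsum('kl,l->k', B, V), np.einsum('l,kl->k', V, B))

# In matrix language this contraction is the product B V, with B on the left.
assert np.allclose(np.einsum('kl,l->k', B, V), B @ V)

# "V B" as a matrix product means something else: it contracts V with the FIRST index of B.
assert np.allclose(V @ B, np.einsum('k,kl->l', V, B))
```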
     
  Nov 8, 2016 #7

    andrewkirk

    Science Advisor
    Homework Helper
    Gold Member

    If the subscripts and superscripts are shown, as they are here, it is scalar multiplication. To write matrix multiplication, one uses symbols that represent matrices and vectors as whole objects, using devices like bold fonts, overhead arrows and sometimes underlines or double overlines. So one can write the product in matrix form as
    $$\vec U=\overline{\overline B}\vec V$$
    in which case the order of the elements on the RHS cannot be changed without changing the meaning of the RHS (and maybe even rendering it meaningless).
    Or one can write it componentwise as
    $$U^k=B^k{}_l V^l$$
    which is the same as (although not quite as intuitive as)
    $$U^k=V^lB^k{}_l $$
     