Physics Forums

Finding tensor components via matrix manipulations

  1. Apr 29, 2016 #1
    1. The problem statement, all variables and given/known data

    Imagine we have a tensor ##X^{\mu\nu}## and a vector ##V^{\mu}##, with components

    ##
    X^{\mu\nu}=\left( \begin{array}{cccc}
    2 & 0 & 1 & -1 \\
    -1 & 0 & 3 & 2 \\
    -1 & 1 & 0 & 0 \\
    -2 & 1 & 1 & -2 \end{array} \right), \qquad V^{\mu} = (-1,2,0,-2).
    ##

    Find the components of:

    (a) ##{X^{\mu}}_{\nu}##
    (b) ##{X_{\mu}}^{\nu}##
    (c) ##X^{(\mu\nu)}##
    (d) ##X_{[\mu\nu]}##
    (e) ##{X^{\lambda}}_{\lambda}##
    (f) ##V^{\mu}V_{\mu}##
    (g) ##V_{\mu}X^{\mu\nu}##

    2. Relevant equations

    3. The attempt at a solution

    (a) ##{X^{\mu}}_{\nu}=X^{\mu\rho}\eta_{\rho\nu}=\left( \begin{array}{cccc}
    2 & 0 & 1 & -1 \\
    -1 & 0 & 3 & 2 \\
    -1 & 1 & 0 & 0 \\
    -2 & 1 & 1 & -2 \end{array} \right)
    \left( \begin{array}{cccc}
    -1 & 0 & 0 & 0 \\
    0 & 1 & 0 & 0 \\
    0 & 0 & 1 & 0 \\
    0 & 0 & 0 & 1 \end{array} \right)=\left( \begin{array}{cccc}
    -2 & 0 & 1 & -1 \\
    1 & 0 & 3 & 2 \\
    1 & 1 & 0 & 0 \\
    2 & 1 & 1 & -2 \end{array} \right)
    ##,

    where the rows of the left matrix are multiplied by the columns of the right matrix because the summation is over the second index of ##X^{\mu\rho}## and the first index of ##\eta_{\rho\nu}##.

    (b) ##{X_{\mu}}^{\nu}=\eta_{\mu\rho}X^{\rho\nu}=
    \left( \begin{array}{cccc}
    -1 & 0 & 0 & 0 \\
    0 & 1 & 0 & 0 \\
    0 & 0 & 1 & 0 \\
    0 & 0 & 0 & 1 \end{array} \right)
    \left( \begin{array}{cccc}
    2 & 0 & 1 & -1 \\
    -1 & 0 & 3 & 2 \\
    -1 & 1 & 0 & 0 \\
    -2 & 1 & 1 & -2 \end{array} \right)
    =\left( \begin{array}{cccc}
    -2 & 0 & -1 & 1 \\
    -1 & 0 & 3 & 2 \\
    -1 & 1 & 0 & 0 \\
    -2 & 1 & 1 & -2 \end{array} \right)
    ##,

    where the rows of the left matrix are multiplied by the columns of the right matrix because the summation is over the second index of ##\eta_{\mu\rho}## and the first index of ##X^{\rho\nu}##.
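Both index placements can be checked numerically. A minimal sketch with numpy, assuming the metric ##\eta=\mathrm{diag}(-1,1,1,1)## used above: lowering the second index multiplies by ##\eta## on the right, lowering the first multiplies on the left.

```python
import numpy as np

# Metric with signature (-, +, +, +), as in the thread
eta = np.diag([-1, 1, 1, 1])

# Components of X^{mu nu} from the problem statement
X = np.array([[ 2, 0, 1, -1],
              [-1, 0, 3,  2],
              [-1, 1, 0,  0],
              [-2, 1, 1, -2]])

# (a) X^mu_nu = X^{mu rho} eta_{rho nu}: eta acts on the right,
# flipping the sign of the first *column*
X_a = X @ eta

# (b) X_mu^nu = eta_{mu rho} X^{rho nu}: eta acts on the left,
# flipping the sign of the first *row*
X_b = eta @ X

print(X_a)
print(X_b)
```

With this diagonal metric the two operations differ only in which slice of ##X## picks up a sign, which is exactly what the two matrices above show.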

    (c) ##X^{(\mu\nu)}=\frac{1}{2}(X^{\mu\nu}+X^{\nu\mu})=\frac{1}{2}\Bigg[\left( \begin{array}{cccc}
    2 & 0 & 1 & -1 \\
    -1 & 0 & 3 & 2 \\
    -1 & 1 & 0 & 0 \\
    -2 & 1 & 1 & -2 \end{array} \right)+\left( \begin{array}{cccc}
    2 & -1 & -1 & -2 \\
    0 & 0 & 1 & 1 \\
    1 & 3 & 0 & 1 \\
    -1 & 2 & 0 & -2 \end{array} \right)
    \Bigg]=\left( \begin{array}{cccc}
    2 & -0.5 & 0 & -1.5 \\
    -0.5 & 0 & 2 & 1.5 \\
    0 & 2 & 0 & 0.5 \\
    -1.5 & 1.5 & 0.5 & -2 \end{array} \right)
    ##
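As a quick check, the symmetrised components can be reproduced with numpy (a sketch assuming the components given in the problem):

```python
import numpy as np

# Components of X^{mu nu}
X = np.array([[ 2, 0, 1, -1],
              [-1, 0, 3,  2],
              [-1, 1, 0,  0],
              [-2, 1, 1, -2]])

# Symmetric part: X^{(mu nu)} = (X^{mu nu} + X^{nu mu}) / 2
X_sym = 0.5 * (X + X.T)
print(X_sym)
```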

    (d) ##X_{[\mu\nu]}=\frac{1}{2}(X_{\mu\nu}-X_{\nu\mu})=\frac{1}{2}(\eta_{\mu\rho}X^{\rho\sigma}\eta_{\sigma\nu}-\eta_{\nu\sigma}X^{\sigma\rho}\eta_{\rho\mu})##

    Are my answers to (a), (b) and (c) correct?

    With part (d), I'm not sure whether ##X^{\rho\sigma}## should be represented by the original matrix or by its transpose. Does it make a difference anyway?
     
  3. Apr 30, 2016 #2

    andrewkirk (Science Advisor, Homework Helper, Gold Member)

    What you have written looks correct to me, including (d). If you instead used the transposed matrix in (d) you would change just the sign of the answer.
     
  4. Apr 30, 2016 #3
    Ok!

    (d) ##X_{[\mu\nu]}=\frac{1}{2}(X_{\mu\nu}-X_{\nu\mu})=\frac{1}{2}(\eta_{\mu\rho}X^{\rho\sigma}\eta_{\sigma\nu}-\eta_{\nu\sigma}X^{\sigma\rho}\eta_{\rho\mu})=
    \frac{1}{2}\left( \begin{array}{cccc}
    -1 & 0 & 0 & 0 \\
    0 & 1 & 0 & 0 \\
    0 & 0 & 1 & 0 \\
    0 & 0 & 0 & 1 \end{array} \right)
    \left( \begin{array}{cccc}
    2 & 0 & 1 & -1 \\
    -1 & 0 & 3 & 2 \\
    -1 & 1 & 0 & 0 \\
    -2 & 1 & 1 & -2 \end{array} \right)
    \left( \begin{array}{cccc}
    -1 & 0 & 0 & 0 \\
    0 & 1 & 0 & 0 \\
    0 & 0 & 1 & 0 \\
    0 & 0 & 0 & 1 \end{array} \right) -
    \frac{1}{2}\left( \begin{array}{cccc}
    -1 & 0 & 0 & 0 \\
    0 & 1 & 0 & 0 \\
    0 & 0 & 1 & 0 \\
    0 & 0 & 0 & 1 \end{array} \right)
    \left( \begin{array}{cccc}
    2 & -1 & -1 & -2 \\
    0 & 0 & 1 & 1 \\
    1 & 3 & 0 & 1 \\
    -1 & 2 & 0 & -2 \end{array} \right)
    \left( \begin{array}{cccc}
    -1 & 0 & 0 & 0 \\
    0 & 1 & 0 & 0 \\
    0 & 0 & 1 & 0 \\
    0 & 0 & 0 & 1 \end{array} \right)##
    ##=\frac{1}{2}
    \left( \begin{array}{cccc}
    2 & 0 & -1 & 1 \\
    1 & 0 & 3 & 2 \\
    1 & 1 & 0 & 0 \\
    2 & 1 & 1 & -2 \end{array} \right)-\frac{1}{2}
    \left( \begin{array}{cccc}
    2 & 1 & 1 & 2 \\
    0 & 0 & 1 & 1 \\
    -1 & 3 & 0 & 1 \\
    1 & 2 & 0 & -2 \end{array} \right)
    =\left( \begin{array}{cccc}
    0 & -0.5 & -1 & -0.5 \\
    0.5 & 0 & 1 & 0.5 \\
    1 & -1 & 0 & -0.5 \\
    0.5 & -0.5 & 0.5 & 0 \end{array} \right)
    ##
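This chain of matrix products can be verified in a few lines of numpy (a sketch, assuming ##\eta=\mathrm{diag}(-1,1,1,1)## as above):

```python
import numpy as np

eta = np.diag([-1, 1, 1, 1])
X = np.array([[ 2, 0, 1, -1],
              [-1, 0, 3,  2],
              [-1, 1, 0,  0],
              [-2, 1, 1, -2]])

# Lower both indices: X_{mu nu} = eta_{mu rho} X^{rho sigma} eta_{sigma nu}
X_low = eta @ X @ eta

# Antisymmetric part: X_{[mu nu]} = (X_{mu nu} - X_{nu mu}) / 2
X_asym = 0.5 * (X_low - X_low.T)
print(X_asym)
```

Note that antisymmetrising commutes with the (symmetric) metric contractions, so lowering first and then antisymmetrising gives the same result as the order written in the equation.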

    (e) ##{X^{\lambda}}_{\lambda}=X^{\lambda\rho}\eta_{\rho\lambda}##

    Is my answer to (d) correct?

    Am I on the right track with (e)? How do I sum over ##\lambda##?
     
  5. Apr 30, 2016 #4

    andrewkirk (Science Advisor, Homework Helper, Gold Member)

    (d) looks OK
    (e) is just the trace of a matrix you have already calculated (where?), so you don't need to do any new matrix multiplications.
     
  6. May 1, 2016 #5
    Thanks!

    (e) ##{X^{\lambda}}_{\lambda}={X^0}_{0}+{X^1}_{1}+{X^2}_{2}+{X^3}_{3}=-2+0+0-2=-4##, from part (a).
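Numerically, this is just the trace of the mixed-index matrix from (a) (a sketch with numpy, assuming the same metric):

```python
import numpy as np

eta = np.diag([-1, 1, 1, 1])
X = np.array([[ 2, 0, 1, -1],
              [-1, 0, 3,  2],
              [-1, 1, 0,  0],
              [-2, 1, 1, -2]])

# X^lambda_lambda = trace of the matrix X^{mu rho} eta_{rho nu}
trace = np.trace(X @ eta)
print(trace)  # -4
```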

    (f) ##V^{\mu}V_{\mu}=
    \left( \begin{array}{cccc}
    -1 & 2 & 0 & -2 \end{array} \right)
    \left( \begin{array}{c}
    -1 \\
    2 \\
    0 \\
    -2 \end{array} \right)=9
    ##

    (g) ##V^{\mu}X^{\mu\nu}=
    \left( \begin{array}{cccc}
    -1 & 2 & 0 & -2 \end{array} \right)
    \left( \begin{array}{cccc}
    2 & 0 & 1 & -1 \\
    -1 & 0 & 3 & 2 \\
    -1 & 1 & 0 & 0 \\
    -2 & 1 & 1 & -2 \end{array} \right)=
    \left( \begin{array}{cccc}
    0 & -2 & 3 & 9 \end{array} \right)
    ##

    What do you think?
     
  7. May 1, 2016 #6
    [itex]V_{\mu}[/itex] and [itex]V^{\mu}[/itex] have different components - don't forget that you need to apply the metric tensor to raise and lower indices!
     
  8. May 1, 2016 #7
    Ok!

    (f) ##V_{\nu}=V^{\rho}\eta_{\rho\nu}=
    \left( \begin{array}{cccc}
    -1 & 2 & 0 & -2 \end{array} \right)
    \left( \begin{array}{cccc}
    -1 & 0 & 0 & 0 \\
    0 & 1 & 0 & 0 \\
    0 & 0 & 1 & 0 \\
    0 & 0 & 0 & 1 \end{array} \right)
    =
    \left( \begin{array}{cccc}
    1 & 2 & 0 & -2 \end{array} \right)
    ##

    Therefore, ##V^{\mu}V_{\mu}=V^{0}V_{0}+V^{1}V_{1}+V^{2}V_{2}+V^{3}V_{3}=(-1)(1)+(2)(2)+(0)(0)+(-2)(-2)=7##.
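This can be confirmed with a quick numpy computation (a sketch, using the metric ##\mathrm{diag}(-1,1,1,1)## as above):

```python
import numpy as np

eta = np.diag([-1, 1, 1, 1])
V_up = np.array([-1, 2, 0, -2])   # V^mu

# Lower the index: V_mu = eta_{mu nu} V^nu
V_down = eta @ V_up

# Scalar V^mu V_mu
norm = V_up @ V_down
print(norm)  # 7
```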

    Is it correct now?
     
  9. May 1, 2016 #8
    Yup, looks correct now. Same for part (g) - based on your original post, it seems you used [itex]V^{\mu}[/itex] where it should be [itex]V_{\mu}[/itex]. It helps to remember that in the Einstein summation convention, a repeated (summed) index appears once as a superscript and once as a subscript.
     
  10. May 1, 2016 #9
    Ok, so using ##V_{\mu}## from part (f),

    (g) ##V_{\mu}X^{\mu\nu}=
    \left( \begin{array}{cccc} 1 & 2 & 0 & -2 \end{array} \right)
    \left( \begin{array}{cccc}
    2 & 0 & 1 & -1 \\
    -1 & 0 & 3 & 2 \\
    -1 & 1 & 0 & 0 \\
    -2 & 1 & 1 & -2 \end{array} \right)= \left( \begin{array}{cccc} 4 & -2 & 5 & 7 \end{array} \right).##
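The same contraction in numpy (a sketch under the same metric assumption as before):

```python
import numpy as np

eta = np.diag([-1, 1, 1, 1])
X = np.array([[ 2, 0, 1, -1],
              [-1, 0, 3,  2],
              [-1, 1, 0,  0],
              [-2, 1, 1, -2]])
V_up = np.array([-1, 2, 0, -2])

# V_mu X^{mu nu}: lower the index on V, then contract with the
# first index of X (row vector times matrix)
V_down = eta @ V_up
result = V_down @ X
print(result)  # [ 4 -2  5  7]
```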

    Is it all right?
     
  11. May 1, 2016 #10
    Yup, seems alright to me.
     
  12. May 1, 2016 #11
    Thanks to both andrewkirk and Fightfish for helping me to solve the problem!
     