
Finding tensor components via matrix manipulations

  • #1

Homework Statement



Imagine we have a tensor ##X^{\mu\nu}## and a vector ##V^{\mu}##, with components

##
X^{\mu\nu}=\left( \begin{array}{cccc}
2 & 0 & 1 & -1 \\
-1 & 0 & 3 & 2 \\
-1 & 1 & 0 & 0 \\
-2 & 1 & 1 & -2 \end{array} \right), \qquad V^{\mu} = (-1,2,0,-2).
##

Find the components of:

(a) ##{X^{\mu}}_{\nu}##
(b) ##{X_{\mu}}^{\nu}##
(c) ##X^{(\mu\nu)}##
(d) ##X_{[\mu\nu]}##
(e) ##{X^{\lambda}}_{\lambda}##
(f) ##V^{\mu}V_{\mu}##
(g) ##V_{\mu}X^{\mu\nu}##

Homework Equations



The Attempt at a Solution



(a) ##{X^{\mu}}_{\nu}=X^{\mu\rho}\eta_{\rho\nu}=\left( \begin{array}{cccc}
2 & 0 & 1 & -1 \\
-1 & 0 & 3 & 2 \\
-1 & 1 & 0 & 0 \\
-2 & 1 & 1 & -2 \end{array} \right)
\left( \begin{array}{cccc}
-1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 \end{array} \right)=\left( \begin{array}{cccc}
-2 & 0 & 1 & -1 \\
1 & 0 & 3 & 2 \\
1 & 1 & 0 & 0 \\
2 & 1 & 1 & -2 \end{array} \right)
##,

where the rows of the left matrix are multiplied by the columns of the right matrix because the summation is over the second index of ##X^{\mu\rho}## and the first index of ##\eta_{\rho\nu}##.
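This step can be sanity-checked numerically. A minimal sketch using numpy (an aid not in the original post), assuming the metric ##\eta_{\mu\nu}=\mathrm{diag}(-1,1,1,1)## used above:

```python
import numpy as np

# Minkowski metric with signature (-,+,+,+), as used above
eta = np.diag([-1, 1, 1, 1])

# components of X^{mu nu} from the problem statement
X = np.array([[ 2, 0, 1, -1],
              [-1, 0, 3,  2],
              [-1, 1, 0,  0],
              [-2, 1, 1, -2]])

# X^mu_nu = X^{mu rho} eta_{rho nu}: the contraction is over X's second
# index and eta's first index, i.e. the ordinary matrix product X @ eta
X_mixed = X @ eta
print(X_mixed)  # only the first column flips sign
```

Because the metric is diagonal, right-multiplying by ##\eta## simply negates the first column, matching the matrix above.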

(b) ##{X_{\mu}}^{\nu}=\eta_{\mu\rho}X^{\rho\nu}=
\left( \begin{array}{cccc}
-1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 \end{array} \right)
\left( \begin{array}{cccc}
2 & 0 & 1 & -1 \\
-1 & 0 & 3 & 2 \\
-1 & 1 & 0 & 0 \\
-2 & 1 & 1 & -2 \end{array} \right)
=\left( \begin{array}{cccc}
-2 & 0 & -1 & 1 \\
-1 & 0 & 3 & 2 \\
-1 & 1 & 0 & 0 \\
-2 & 1 & 1 & -2 \end{array} \right)
##,

where the rows of the left matrix are multiplied by the columns of the right matrix because the summation is over the second index of ##\eta_{\mu\rho}## and the first index of ##X^{\rho\nu}##.

(c) ##X^{(\mu\nu)}=\frac{1}{2}(X^{\mu\nu}+X^{\nu\mu})=\frac{1}{2}\Bigg[\left( \begin{array}{cccc}
2 & 0 & 1 & -1 \\
-1 & 0 & 3 & 2 \\
-1 & 1 & 0 & 0 \\
-2 & 1 & 1 & -2 \end{array} \right)+\left( \begin{array}{cccc}
2 & -1 & -1 & -2 \\
0 & 0 & 1 & 1 \\
1 & 3 & 0 & 1 \\
-1 & 2 & 0 & -2 \end{array} \right)
\Bigg]=\left( \begin{array}{cccc}
2 & -0.5 & 0 & -1.5 \\
-0.5 & 0 & 2 & 1.5 \\
0 & 2 & 0 & 0.5 \\
-1.5 & 1.5 & 0.5 & -2 \end{array} \right)
##
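The symmetrization can likewise be checked by machine. A sketch in numpy (my own check, not part of the original working); swapping the two indices of a rank-2 tensor is just the matrix transpose:

```python
import numpy as np

# components of X^{mu nu} from the problem statement
X = np.array([[ 2, 0, 1, -1],
              [-1, 0, 3,  2],
              [-1, 1, 0,  0],
              [-2, 1, 1, -2]])

# X^{(mu nu)} = (X^{mu nu} + X^{nu mu}) / 2
X_sym = 0.5 * (X + X.T)
print(X_sym)
```

The result is symmetric by construction, which is a quick way to catch a slip in the hand computation.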

(d) ##X_{[\mu\nu]}=\frac{1}{2}(X_{\mu\nu}-X_{\nu\mu})=\frac{1}{2}(\eta_{\mu\rho}X^{\rho\sigma}\eta_{\sigma\nu}-\eta_{\nu\sigma}X^{\sigma\rho}\eta_{\rho\mu})##

Are my answers to (a), (b) and (c) correct?

With part (d), I'm not sure whether ##X^{\rho\sigma}## should stand for the original matrix or its transpose. Does it make a difference?
 

Answers and Replies

  • #2
andrewkirk
What you have written looks correct to me, including (d). If you instead used the transposed matrix in (d) you would change just the sign of the answer.
 
  • #3
Ok!

(d) ##X_{[\mu\nu]}=\frac{1}{2}(X_{\mu\nu}-X_{\nu\mu})=\frac{1}{2}(\eta_{\mu\rho}X^{\rho\sigma}\eta_{\sigma\nu}-\eta_{\nu\sigma}X^{\sigma\rho}\eta_{\rho\mu})=
\frac{1}{2}\left( \begin{array}{cccc}
-1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 \end{array} \right)
\left( \begin{array}{cccc}
2 & 0 & 1 & -1 \\
-1 & 0 & 3 & 2 \\
-1 & 1 & 0 & 0 \\
-2 & 1 & 1 & -2 \end{array} \right)
\left( \begin{array}{cccc}
-1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 \end{array} \right) -
\frac{1}{2}\left( \begin{array}{cccc}
-1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 \end{array} \right)
\left( \begin{array}{cccc}
2 & -1 & -1 & -2 \\
0 & 0 & 1 & 1 \\
1 & 3 & 0 & 1 \\
-1 & 2 & 0 & -2 \end{array} \right)
\left( \begin{array}{cccc}
-1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 \end{array} \right)##
##=\frac{1}{2}
\left( \begin{array}{cccc}
2 & 0 & -1 & 1 \\
1 & 0 & 3 & 2 \\
1 & 1 & 0 & 0 \\
2 & 1 & 1 & -2 \end{array} \right)-\frac{1}{2}
\left( \begin{array}{cccc}
2 & 1 & 1 & 2 \\
0 & 0 & 1 & 1 \\
-1 & 3 & 0 & 1 \\
1 & 2 & 0 & -2 \end{array} \right)
=\left( \begin{array}{cccc}
0 & -0.5 & -1 & -0.5 \\
0.5 & 0 & 1 & 0.5 \\
1 & -1 & 0 & -0.5 \\
0.5 & -0.5 & 0.5 & 0 \end{array} \right)
##
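The whole chain of products for (d) can be reproduced in a few lines of numpy (a sketch added as a check, assuming ##\eta=\mathrm{diag}(-1,1,1,1)## as above):

```python
import numpy as np

eta = np.diag([-1, 1, 1, 1])
X = np.array([[ 2, 0, 1, -1],
              [-1, 0, 3,  2],
              [-1, 1, 0,  0],
              [-2, 1, 1, -2]])

# lower both indices: X_{mu nu} = eta_{mu rho} X^{rho sigma} eta_{sigma nu}
X_low = eta @ X @ eta

# antisymmetrize: X_{[mu nu]} = (X_{mu nu} - X_{nu mu}) / 2
X_antisym = 0.5 * (X_low - X_low.T)
print(X_antisym)
```

Note that ##\eta X^{T}\eta = (\eta X\eta)^{T}## since ##\eta## is symmetric, so transposing after lowering gives the same second term as above. The output is antisymmetric by construction.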

(e) ##{X^{\lambda}}_{\lambda} =\eta_{\lambda\rho}X^{\rho\sigma}\eta_{\sigma\lambda}##

Is my answer to (d) correct?

Am I on the right track with (e)? How do I sum over ##\lambda##?
 
  • #4
andrewkirk
(d) looks OK
(e) is just the trace of a matrix you have already calculated (where?), so you don't need to do any new matrix multiplications.
 
  • #5
(d) looks OK
Thanks!

(e) is just the trace of a matrix you have already calculated (where?), so you don't need to do any new matrix multiplications.
(e) ##{X^{\lambda}}_{\lambda}={X^0}_{0}+{X^1}_{1}+{X^2}_{2}+{X^3}_{3}=-2+0+0-2=-4##, using the matrix from part (a).
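In numpy this is a one-liner (again a sketch I'm adding as a check, with the same metric convention):

```python
import numpy as np

eta = np.diag([-1, 1, 1, 1])
X = np.array([[ 2, 0, 1, -1],
              [-1, 0, 3,  2],
              [-1, 1, 0,  0],
              [-2, 1, 1, -2]])

# X^lambda_lambda is the trace of the mixed tensor X^{mu rho} eta_{rho nu}
trace = np.trace(X @ eta)
print(trace)  # prints -4
```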

(f) ##V^{\mu}V_{\mu}=
\left( \begin{array}{cccc}
-1 & 2 & 0 & -2 \end{array} \right)
\left( \begin{array}{c}
-1 \\
2 \\
0 \\
-2 \end{array} \right)=9
##

(g) ##V^{\mu}X^{\mu\nu}=
\left( \begin{array}{cccc}
-1 & 2 & 0 & -2 \end{array} \right)
\left( \begin{array}{cccc}
2 & 0 & 1 & -1 \\
-1 & 0 & 3 & 2 \\
-1 & 1 & 0 & 0 \\
-2 & 1 & 1 & -2 \end{array} \right)=
\left( \begin{array}{cccc}
0 & -2 & 3 & 9 \end{array} \right)
##

What do you think?
 
  • #6
[itex]V_{\mu}[/itex] and [itex]V^{\mu}[/itex] have different components - don't forget that you need to apply the metric tensor to raise and lower indices!
 
  • #7
[itex]V_{\mu}[/itex] and [itex]V^{\mu}[/itex] have different components - don't forget that you need to apply the metric tensor to raise and lower indices!
Ok!

(f) ##V_{\nu}=V^{\rho}\eta_{\rho\nu}=
\left( \begin{array}{cccc}
-1 & 2 & 0 & -2 \end{array} \right)
\left( \begin{array}{cccc}
-1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 \end{array} \right)
=
\left( \begin{array}{cccc}
1 & 2 & 0 & -2 \end{array} \right)
##

Therefore, ##V^{\mu}V_{\mu}=V^{0}V_{0}+V^{1}V_{1}+V^{2}V_{2}+V^{3}V_{3}=(-1)(1)+(2)(2)+(0)(0)+(-2)(-2)=7##.
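The lowering and the inner product can be verified together in numpy (a check of my own, assuming ##\eta=\mathrm{diag}(-1,1,1,1)##):

```python
import numpy as np

eta = np.diag([-1, 1, 1, 1])
V = np.array([-1, 2, 0, -2])   # components of V^mu

V_low = eta @ V                # V_mu = eta_{mu nu} V^nu
inner = V @ V_low              # V^mu V_mu
print(V_low, inner)            # [ 1  2  0 -2] 7
```

Only the time component flips sign under lowering, which is why the first attempt (using ##V^{\mu}## twice) gave 9 instead of 7.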

Is it correct now?
 
  • #8
Yup, looks correct now. Same for part (g) - based on your original post, it seems to be [itex]V_{\mu}[/itex] instead of [itex]V^{\mu}[/itex]. It helps to remember that in the Einstein summation convention, one index should be superscripted and the other subscripted.
 
  • #9
Yup, looks correct now. Same for part (g) - based on your original post, it seems to be [itex]V_{\mu}[/itex] instead of [itex]V^{\mu}[/itex]. It helps to remember that in the Einstein summation convention, one index should be superscripted and the other subscripted.
Ok, so using ##V_{\mu}## from part (f),

(g) ##V_{\mu}X^{\mu\nu}=
\left( \begin{array}{cccc} 1 & 2 & 0 & -2 \end{array} \right)
\left( \begin{array}{cccc}
2 & 0 & 1 & -1 \\
-1 & 0 & 3 & 2 \\
-1 & 1 & 0 & 0 \\
-2 & 1 & 1 & -2 \end{array} \right)= \left( \begin{array}{cccc} 4 & -2 & 5 & 7 \end{array} \right).##
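One last numpy check of this contraction (a sketch with the same assumed metric):

```python
import numpy as np

eta = np.diag([-1, 1, 1, 1])
X = np.array([[ 2, 0, 1, -1],
              [-1, 0, 3,  2],
              [-1, 1, 0,  0],
              [-2, 1, 1, -2]])
V = np.array([-1, 2, 0, -2])   # components of V^mu

# V_mu X^{mu nu}: lower V first, then contract over the first index of X,
# i.e. a row vector times the matrix
result = (eta @ V) @ X
print(result)  # [ 4 -2  5  7]
```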

Is it all right?
 
  • #10
Yup, seems alright to me.
 
  • #11
Thanks to both andrewkirk and Fightfish for helping me to solve the problem!:smile:
 
