Finding tensor components via matrix manipulations

In summary, the components of a tensor ##X^{\mu\nu}## and a vector ##V^{\mu}## were found using matrix multiplication and the Einstein summation convention. The components of ##{X^{\mu}}_{\nu}## and ##{X_{\mu}}^{\nu}## were calculated, as well as the symmetric and anti-symmetric parts of the tensor, represented by ##X^{(\mu\nu)}## and ##X_{[\mu\nu]}## respectively. The trace ##{X^{\lambda}}_{\lambda}## was also calculated. Finally, the components of ##V^{\mu}V_{\mu}## and ##V_{\mu}X^{\mu\nu}## were found.
  • #1
spaghetti3451

Homework Statement



Imagine we have a tensor ##X^{\mu\nu}## and a vector ##V^{\mu}##, with components

##
X^{\mu\nu}=\left( \begin{array}{cccc}
2 & 0 & 1 & -1 \\
-1 & 0 & 3 & 2 \\
-1 & 1 & 0 & 0 \\
-2 & 1 & 1 & -2 \end{array} \right), \qquad V^{\mu} = (-1,2,0,-2).
##

Find the components of:

(a) ##{X^{\mu}}_{\nu}##
(b) ##{X_{\mu}}^{\nu}##
(c) ##X^{(\mu\nu)}##
(d) ##X_{[\mu\nu]}##
(e) ##{X^{\lambda}}_{\lambda}##
(f) ##V^{\mu}V_{\mu}##
(g) ##V_{\mu}X^{\mu\nu}##

Homework Equations



The Attempt at a Solution



(a) ##{X^{\mu}}_{\nu}=X^{\mu\rho}\eta_{\rho\nu}=\left( \begin{array}{cccc}
2 & 0 & 1 & -1 \\
-1 & 0 & 3 & 2 \\
-1 & 1 & 0 & 0 \\
-2 & 1 & 1 & -2 \end{array} \right)
\left( \begin{array}{cccc}
-1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 \end{array} \right)=\left( \begin{array}{cccc}
-2 & 0 & 1 & -1 \\
1 & 0 & 3 & 2 \\
1 & 1 & 0 & 0 \\
2 & 1 & 1 & -2 \end{array} \right)
##,

where the rows of the left matrix are multiplied by the columns of the right matrix because the summation is over the second index of ##X^{\mu\rho}## and the first index of ##\eta_{\rho\nu}##.

(b) ##{X_{\mu}}^{\nu}=\eta_{\mu\rho}X^{\rho\nu}=
\left( \begin{array}{cccc}
-1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 \end{array} \right)
\left( \begin{array}{cccc}
2 & 0 & 1 & -1 \\
-1 & 0 & 3 & 2 \\
-1 & 1 & 0 & 0 \\
-2 & 1 & 1 & -2 \end{array} \right)
=\left( \begin{array}{cccc}
-2 & 0 & -1 & 1 \\
-1 & 0 & 3 & 2 \\
-1 & 1 & 0 & 0 \\
-2 & 1 & 1 & -2 \end{array} \right)
##,

where the rows of the left matrix are multiplied by the columns of the right matrix because the summation is over the second index of ##\eta_{\mu\rho}## and the first index of ##X^{\rho\nu}##.
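A minimal NumPy sketch of these two contractions, assuming the metric ##\eta = \mathrm{diag}(-1,1,1,1)## and the matrix of ##X^{\mu\nu}## given above (the variable names are illustrative):

```python
import numpy as np

# Metric with signature (-,+,+,+), as used in the thread.
eta = np.diag([-1, 1, 1, 1])

# Components of X^{mu nu}: first index labels rows, second labels columns.
X = np.array([[ 2, 0, 1, -1],
              [-1, 0, 3,  2],
              [-1, 1, 0,  0],
              [-2, 1, 1, -2]])

# (a) X^{mu}_{nu} = X^{mu rho} eta_{rho nu}: the sum runs over the column
#     index of X and the row index of eta, so multiply by eta on the right.
X_up_down = X @ eta      # flips the sign of the first column

# (b) X_{mu}^{nu} = eta_{mu rho} X^{rho nu}: multiply by eta on the left.
X_down_up = eta @ X      # flips the sign of the first row

print(X_up_down)
print(X_down_up)
```

Because ##\eta## is diagonal here, the only effect is a sign flip of the first column or first row, matching the matrices written out above.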

(c) ##X^{(\mu\nu)}=\frac{1}{2}(X^{\mu\nu}+X^{\nu\mu})=\frac{1}{2}\Bigg[\left( \begin{array}{cccc}
2 & 0 & 1 & -1 \\
-1 & 0 & 3 & 2 \\
-1 & 1 & 0 & 0 \\
-2 & 1 & 1 & -2 \end{array} \right)+\left( \begin{array}{cccc}
2 & -1 & -1 & -2 \\
0 & 0 & 1 & 1 \\
1 & 3 & 0 & 1 \\
-1 & 2 & 0 & -2 \end{array} \right)
\Bigg]=\left( \begin{array}{cccc}
2 & -0.5 & 0 & -1.5 \\
-0.5 & 0 & 2 & 1.5 \\
0 & 2 & 0 & 0.5 \\
-1.5 & 1.5 & 0.5 & -2 \end{array} \right)
##
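A minimal NumPy check of this symmetrization, assuming the same matrix ##X## as in the sketch above:

```python
import numpy as np

X = np.array([[ 2, 0, 1, -1],
              [-1, 0, 3,  2],
              [-1, 1, 0,  0],
              [-2, 1, 1, -2]])

# X^{(mu nu)} = (X^{mu nu} + X^{nu mu}) / 2: at the matrix level this is just
# the average of X with its transpose; no metric is needed because both
# indices stay upstairs.
X_sym = (X + X.T) / 2
print(X_sym)                            # matches the matrix quoted above
assert np.allclose(X_sym, X_sym.T)      # the symmetric part is indeed symmetric
```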

(d) ##X_{[\mu\nu]}=\frac{1}{2}(X_{\mu\nu}-X_{\nu\mu})=\frac{1}{2}(\eta_{\mu\rho}X^{\rho\sigma}\eta_{\sigma\nu}-\eta_{\nu\sigma}X^{\sigma\rho}\eta_{\rho\mu})##

Are my answers to (a), (b) and (c) correct?

For part (d), I'm not sure whether I should use the original matrix or its transpose for ##X^{\rho\sigma}##. Does it make a difference anyway?
 
  • #2
What you have written looks correct to me, including (d). If you instead used the transposed matrix in (d) you would change just the sign of the answer.
 
  • #3
Ok!

(d) ##X_{[\mu\nu]}=\frac{1}{2}(X_{\mu\nu}-X_{\nu\mu})=\frac{1}{2}(\eta_{\mu\rho}X^{\rho\sigma}\eta_{\sigma\nu}-\eta_{\nu\sigma}X^{\sigma\rho}\eta_{\rho\mu})=
\frac{1}{2}\left( \begin{array}{cccc}
-1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 \end{array} \right)
\left( \begin{array}{cccc}
2 & 0 & 1 & -1 \\
-1 & 0 & 3 & 2 \\
-1 & 1 & 0 & 0 \\
-2 & 1 & 1 & -2 \end{array} \right)
\left( \begin{array}{cccc}
-1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 \end{array} \right) -
\frac{1}{2}\left( \begin{array}{cccc}
-1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 \end{array} \right)
\left( \begin{array}{cccc}
2 & -1 & -1 & -2 \\
0 & 0 & 1 & 1 \\
1 & 3 & 0 & 1 \\
-1 & 2 & 0 & -2 \end{array} \right)
\left( \begin{array}{cccc}
-1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 \end{array} \right)##
##=\frac{1}{2}
\left( \begin{array}{cccc}
2 & 0 & -1 & 1 \\
1 & 0 & 3 & 2 \\
1 & 1 & 0 & 0 \\
2 & 1 & 1 & -2 \end{array} \right)-\frac{1}{2}
\left( \begin{array}{cccc}
2 & 1 & 1 & 2 \\
0 & 0 & 1 & 1 \\
-1 & 3 & 0 & 1 \\
1 & 2 & 0 & -2 \end{array} \right)
=\left( \begin{array}{cccc}
0 & -0.5 & -1 & -0.5 \\
0.5 & 0 & 1 & 0.5 \\
1 & -1 & 0 & -0.5 \\
0.5 & -0.5 & 0.5 & 0 \end{array} \right)
##
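A minimal NumPy check of this calculation, assuming the same ##\eta## and ##X## as before:

```python
import numpy as np

eta = np.diag([-1, 1, 1, 1])
X = np.array([[ 2, 0, 1, -1],
              [-1, 0, 3,  2],
              [-1, 1, 0,  0],
              [-2, 1, 1, -2]])

# X_{mu nu} = eta_{mu rho} X^{rho sigma} eta_{sigma nu}: lower both indices.
X_low = eta @ X @ eta

# X_{[mu nu]} = (X_{mu nu} - X_{nu mu}) / 2: antisymmetric part of the lowered tensor.
X_antisym = (X_low - X_low.T) / 2
print(X_antisym)                        # matches the matrix quoted above

# Starting from the transposed matrix only flips the overall sign, as noted in post #2.
X_low_T = eta @ X.T @ eta
assert np.allclose((X_low_T - X_low_T.T) / 2, -X_antisym)
```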

(e) ##{X^{\lambda}}_{\lambda} =\eta_{\lambda\rho}X^{\rho\sigma}\eta_{\sigma\lambda}##

Is my answer to (d) correct?

Am I on the right track with (e)? How do I sum over ##\lambda##?
 
  • #4
(d) looks OK
(e) is just the trace of a matrix you have already calculated (where?), so you don't need to do any new matrix multiplications.
 
  • #5
andrewkirk said:
(d) looks OK

Thanks!

andrewkirk said:
(e) is just the trace of a matrix you have already calculated (where?), so you don't need to do any new matrix multiplications.

(e) ##{X^{\lambda}}_{\lambda}={X^0}_{0}+{X^1}_{1}+{X^2}_{2}+{X^3}_{3}=-2+0+0-2=-4##, from part (a).
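A one-line NumPy check of this trace, using the mixed matrix ##{X^{\mu}}_{\nu}=X\eta## from part (a):

```python
import numpy as np

eta = np.diag([-1, 1, 1, 1])
X = np.array([[ 2, 0, 1, -1],
              [-1, 0, 3,  2],
              [-1, 1, 0,  0],
              [-2, 1, 1, -2]])

# X^{lambda}_{lambda} is the trace of the mixed matrix X^{mu}_{nu} = X eta,
# not the trace of the all-upstairs matrix X^{mu nu}.
print(np.trace(X @ eta))   # -4
```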

(f) ##V^{\mu}V_{\mu}=
\left( \begin{array}{cccc}
-1 & 2 & 0 & -2 \end{array} \right)
\left( \begin{array}{c}
-1 \\
2 \\
0 \\
-2 \end{array} \right)=9
##

(g) ##V^{\mu}X^{\mu\nu}=
\left( \begin{array}{cccc}
-1 & 2 & 0 & -2 \end{array} \right)
\left( \begin{array}{cccc}
2 & 0 & 1 & -1 \\
-1 & 0 & 3 & 2 \\
-1 & 1 & 0 & 0 \\
-2 & 1 & 1 & -2 \end{array} \right)=
\left( \begin{array}{cccc}
0 & -2 & 3 & 9 \end{array} \right)
##

What do you think?
 
  • #6
[itex]V_{\mu}[/itex] and [itex]V^{\mu}[/itex] have different components - don't forget that you need to apply the metric tensor to raise and lower indices!
 
  • #7
Fightfish said:
[itex]V_{\mu}[/itex] and [itex]V^{\mu}[/itex] have different components - don't forget that you need to apply the metric tensor to raise and lower indices!

Ok!

(f) ##V_{\nu}=V^{\rho}\eta_{\rho\nu}=
\left( \begin{array}{cccc}
-1 & 2 & 0 & -2 \end{array} \right)
\left( \begin{array}{cccc}
-1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 \end{array} \right)
=
\left( \begin{array}{cccc}
1 & 2 & 0 & -2 \end{array} \right)
##

Therefore, ##V^{\mu}V_{\mu}=V^{0}V_{0}+V^{1}V_{1}+V^{2}V_{2}+V^{3}V_{3}=(-1)(1)+(2)(2)+(0)(0)+(-2)(-2)=7##.
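A minimal NumPy check of this corrected calculation, with the same ##\eta## as above and the components of ##V^{\mu}## stored as `V_up`:

```python
import numpy as np

eta = np.diag([-1, 1, 1, 1])
V_up = np.array([-1, 2, 0, -2])   # components of V^{mu}

# Lower the index: V_{nu} = V^{rho} eta_{rho nu}.
V_down = V_up @ eta               # [1, 2, 0, -2]

# V^{mu} V_{mu}: contract the upper-index and lower-index components.
print(V_up @ V_down)              # 7
```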

Is it correct now?
 
  • #8
Yup, looks correct now. Same for part (g) - based on your original post, it seems to be [itex]V_{\mu}[/itex] instead of [itex]V^{\mu}[/itex]. It helps to remember that in the Einstein summation convention, one index should be superscripted and the other subscripted.
 
  • #9
Fightfish said:
Yup, looks correct now. Same for part (g) - based on your original post, it seems to be [itex]V_{\mu}[/itex] instead of [itex]V^{\mu}[/itex]. It helps to remember that in the Einstein summation convention, one index should be superscripted and the other subscripted.

Ok, so using ##V_{\mu}## from part (f),

(g) ##V_{\mu}X^{\mu\nu}=
\left( \begin{array}{cccc} 1 & 2 & 0 & -2 \end{array} \right)
\left( \begin{array}{cccc}
2 & 0 & 1 & -1 \\
-1 & 0 & 3 & 2 \\
-1 & 1 & 0 & 0 \\
-2 & 1 & 1 & -2 \end{array} \right)= \left( \begin{array}{cccc} 4 & -2 & 5 & 7 \end{array} \right).##
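A matching NumPy check for this final contraction, with `eta`, `X`, and `V_up` repeated from the sketches above so the snippet runs on its own:

```python
import numpy as np

eta = np.diag([-1, 1, 1, 1])
X = np.array([[ 2, 0, 1, -1],
              [-1, 0, 3,  2],
              [-1, 1, 0,  0],
              [-2, 1, 1, -2]])
V_up = np.array([-1, 2, 0, -2])

# V_{mu} = V^{rho} eta_{rho mu}
V_down = V_up @ eta               # [1, 2, 0, -2]

# V_{mu} X^{mu nu}: the lowered row vector contracts with the first (row) index of X.
print(V_down @ X)                 # [ 4 -2  5  7]
```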

Is it all right?
 
  • #10
Yup, seems alright to me.
 
  • #11
Thanks to both andrewkirk and Fightfish for helping me to solve the problem!
 

1. What is a tensor?

A tensor is a mathematical object that describes multilinear relationships between vectors and covectors. Its components transform in a definite way under a change of coordinates, which is what allows it to represent physical quantities independently of any particular coordinate system.

2. How do I find the components of a tensor using matrix manipulations?

For a rank-2 tensor, arrange the components in a matrix with the first index labelling rows and the second labelling columns. Contractions written in index notation, such as ##X^{\mu\rho}\eta_{\rho\nu}##, then become ordinary matrix products, provided the summed index of one factor sits next to the summed index of the other (here the column index of ##X## against the row index of ##\eta##). Raising and lowering indices amounts to multiplying by the metric matrix on the appropriate side.
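As a rough illustration (assuming NumPy and its `einsum` routine, together with the Minkowski metric used in the thread above), the index sum can be written out explicitly and compared with the matrix product:

```python
import numpy as np

eta = np.diag([-1, 1, 1, 1])
X = np.array([[ 2, 0, 1, -1],
              [-1, 0, 3,  2],
              [-1, 1, 0,  0],
              [-2, 1, 1, -2]])

# The einsum string mirrors the index notation directly:
# X^{mu}_{nu} = X^{mu rho} eta_{rho nu}
X_mixed = np.einsum('mr,rn->mn', X, eta)

# The same contraction as an ordinary matrix product.
assert np.allclose(X_mixed, X @ eta)
```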

3. What is index notation?

Index notation is a shorthand way of writing tensors using indices to represent the different components of the tensor. It is commonly used in mathematics and physics to simplify calculations and manipulations of tensors.

4. Can I use matrix manipulations to find tensor components in any dimension?

Yes, matrix manipulations can be used to find tensor components in any dimension. However, the range of each index, and hence the size of the matrices, depends on the dimension of the underlying space; the number of indices is the rank of the tensor and does not depend on the dimension.

5. Are there any limitations to using matrix manipulations to find tensor components?

The main limitation is that ordinary matrix multiplication only expresses contractions between objects with one or two indices; tensors of higher rank require writing out the index sums explicitly or using software routines that handle general index contractions. Also, the order and side of multiplication matter: the summed index of one factor must line up with the summed index of the other, so lowering the second index of a rank-2 tensor means multiplying by the metric on the right, while lowering the first index means multiplying on the left.
