# Tensor and vector notation


## Main Question or Discussion Point

Hello. I am confused about the notation for tensors and vectors. From what I have seen, a 4-vector is written with an upper index, but a second-rank tensor (the electromagnetic tensor, for example) is also written with upper indices. I attached a screenshot of this. Initially I thought that vectors take an upper index while tensors take a lower index, but now I am really confused (see the second image). What is the actual notation? Thank you!


DrGreg
Gold Member
"Tensor" is a general term covering objects of many types, with upper indices, lower indices, or both; they're all called "tensors". A type-$\binom n m$ tensor has $n$ upper indices and $m$ lower indices. A vector is a type-$\binom 1 0$ tensor. A scalar is a type-$\binom 0 0$ tensor.

The electromagnetic tensor in your example is a type-$\binom 2 0$ tensor. There are also type-$\binom 1 1$ and type-$\binom 0 2$ tensors.
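To make the type bookkeeping concrete, here are common examples of each type (the symbols are conventional choices, not taken from this thread):

```latex
V^\mu \ \text{(4-vector, type } \binom{1}{0}\text{)}, \qquad
\omega_\mu \ \text{(covector, type } \binom{0}{1}\text{)}, \qquad
F^{\mu\nu} \ \text{(EM field tensor, type } \binom{2}{0}\text{)}, \qquad
g_{\mu\nu} \ \text{(metric, type } \binom{0}{2}\text{)}, \qquad
{T^\mu}_{\nu} \ \text{(type } \binom{1}{1}\text{)}.
```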

So, if a (1 0) tensor is a vector, does this mean that a (2 0) tensor is just a matrix, or is there a difference between a (2 0) tensor and a matrix? (sorry for the horizontal notation)

PeterDonis
Mentor
Whether the index is upper or lower is a separate question from how many indexes there are (i.e., the rank of the tensor). As DrGreg says, you can have "vectors" (objects with one index) with either an upper or a lower index, and you can have tensors (objects with two--or more--indexes) with two upper indexes, two lower indexes, or one upper and one lower index.

If you know the metric tensor, you can use it to raise or lower indexes, so in many applications in physics the indexes are simply placed wherever is most convenient. The key thing is to make sure that free indexes on both sides of an equation match, and that contractions are done using one upper and one lower index.

Strictly speaking, however, upper and lower indexes mean different things. For example, a vector (one upper index) is a set of 4 numbers (in 4-d spacetime) that change in a particular way when you change coordinates. A "covector" (one lower index) is a linear map from vectors to numbers, i.e., it is an object which, when given a vector, gives back a number (a scalar). The standard notation for this is a contraction: the number $s$ obtained when you take a vector $V^a$ and apply to it a covector (linear map) $C_a$ is $s = V^a C_a = V^0 C_0 + V^1 C_1 + V^2 C_2 + V^3 C_3$.

Thank you! So, a (2 0) tensor is a normal matrix. And if we have the metric, we can turn this into a (0 2) tensor, which might not be a matrix anymore. Is this right?

fresh_42
Mentor
Are $\binom n m$ tensors usually only those of the pure type $v_1 \otimes v_2 \otimes \dots \otimes v_n \otimes v^*_1 \otimes v^*_2 \otimes \dots \otimes v^*_m$, or linear combinations of them as well, i.e., all tensors of rank $(n,m)$?
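As a concrete aside on this distinction (a sketch not taken from the thread, with made-up vectors): a sum of pure tensors is generally not itself a pure tensor, which is easy to see from component matrices, since a single outer product $a \otimes b$ always yields a matrix of rank 1.

```python
import numpy as np

# Two pure (simple) tensors v1 (x) w1 and v2 (x) w2, represented
# by the outer products of their component vectors.
v1, w1 = np.array([1.0, 0.0]), np.array([1.0, 0.0])
v2, w2 = np.array([0.0, 1.0]), np.array([0.0, 1.0])

pure1 = np.outer(v1, w1)
pure2 = np.outer(v2, w2)

# Their sum is a perfectly good (2,0) tensor ...
T = pure1 + pure2  # the 2x2 identity matrix here

# ... but it is not pure: a single outer product has matrix
# rank 1, while T has rank 2.
print(np.linalg.matrix_rank(pure1))  # 1
print(np.linalg.matrix_rank(T))      # 2
```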

Nugatory
Mentor
Neither a (2,0) tensor nor a (0,2) tensor is a matrix, but the components of both in a given coordinate system ($A^{ij}$ and $A_{ij}$) can be arranged in a two-dimensional matrix when it is convenient - and it often is, so you see this done all the time.
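A small sketch of this point, with made-up components and the $(+,-,-,-)$ metric convention assumed: both $A^{ij}$ and $A_{ij}$ fit in a 4x4 array, but they are generally different arrays representing related yet distinct objects.

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])  # Minkowski metric eta_{ab}

# Made-up components A^{ij} of a (2,0) tensor, arranged as a 4x4 array.
A_upup = np.arange(16.0).reshape(4, 4)

# Lower both indices: A_{ij} = eta_{ia} eta_{jb} A^{ab}.
A_downdown = np.einsum('ia,jb,ab->ij', eta, eta, A_upup)

# Signs flip wherever exactly one index is spatial; the time-time
# and space-space components keep their signs.
print(A_downdown)
```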

If you want to get into GR as quickly as possible, https://preposterousuniverse.com/wp-content/uploads/2015/08/grtinypdf.pdf is a good practical summary. Just be warned that in this context "practical" means "ignores all mathematical niceties not required to get results out of the machinery".

vanhees71
Even more careful is to stress that here we deal with tensor components with respect to a given Minkowski-orthonormal basis, i.e., you have $\binom{n}{m}$ tensor components, written as a symbol with $n$ upper and $m$ lower indices. You can raise or lower indices with the pseudo-metric components $\eta^{\mu \nu}$ and $\eta_{\mu \nu}$, with $(\eta_{\mu \nu})=(\eta^{\mu \nu})=\mathrm{diag}(1,-1,-1,-1)$ (in the usual matrix notation for 2nd-rank tensor components).
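A quick numeric sanity check of this (a sketch, assuming the $\mathrm{diag}(1,-1,-1,-1)$ convention quoted above): contracting $\eta^{\mu\nu}$ with $\eta_{\nu\rho}$ gives the Kronecker delta, so raising an index and then lowering it gets you back where you started.

```python
import numpy as np

eta_down = np.diag([1.0, -1.0, -1.0, -1.0])  # eta_{mu nu}
eta_up = np.diag([1.0, -1.0, -1.0, -1.0])    # eta^{mu nu}, same array here

# eta^{mu nu} eta_{nu rho} = delta^mu_rho.
delta = np.einsum('mn,nr->mr', eta_up, eta_down)
print(np.allclose(delta, np.eye(4)))  # True
```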