# Interpreting The Definition of Tensors

VuIcan
Hello, I've just been slightly unsure of something and would like to get secondary confirmation, as I've just begun a book on tensor analysis. I should also preface this by saying my linear algebra is somewhat rusty. Suppose you have the inertia tensor ##\mathbf{\widetilde{I}}## in some unprimed coordinate system. Then we know definitionally that this second-rank tensor transforms into some primed coordinate system as follows (where ##\Lambda## is the transformation matrix from the unprimed to the primed coordinate system):

$$\widetilde{I}' = \Lambda \widetilde{I} \Lambda ^{\dagger}$$

Now, if one were to apply some vector stimulus in the primed coordinate system from the right, would it be correct to think of this vector as first being transformed into the unprimed coordinate system (since the adjoint is equivalent to the inverse in this context), then being directionally altered by the inertia tensor in the unprimed coordinate system, and finally being transformed back into the primed coordinate system by the leftmost matrix? I feel like I'm misunderstanding something fundamental, however.
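If it helps to make my question concrete, here is a quick NumPy sketch of the sequence I have in mind, with a made-up rotation standing in for ##\Lambda## and made-up symmetric components standing in for ##\widetilde{I}##:

```python
import numpy as np

# Made-up example: a rotation about z as the transformation Lambda,
# and an arbitrary symmetric matrix standing in for the inertia tensor I.
theta = 0.3
L = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
I = np.array([[2.0, 0.5, 0.0],
              [0.5, 3.0, 0.1],
              [0.0, 0.1, 1.5]])

I_prime = L @ I @ L.T                 # transformed tensor (L is real, so dagger = transpose)
v_prime = np.array([1.0, -2.0, 0.5])  # some vector given in the primed system

# The step-by-step reading: back to unprimed, apply I there, forward to primed.
step_by_step = L @ (I @ (L.T @ v_prime))

assert np.allclose(I_prime @ v_prime, step_by_step)
```

By associativity of matrix multiplication the two readings agree, which is the interpretation I was describing.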

-Vulcan

Staff Emeritus
Homework Helper
Gold Member
Personally, I think it is better not to think about vectors and tensors as transforming objects. A vector (or any other tensor) does not depend on the coordinate system. The direction to the Moon is physically the same regardless of what coordinates you choose. What does depend on the basis you choose is the components of the vector, and the vector (and tensor) components therefore change between systems. A rank two tensor is (among other things) a linear map from a vector to a vector. This mapping does not happen "in a coordinate system". However, you can express that linear map in a given basis by giving its components relative to that basis, which you can represent using a matrix.
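To make the distinction concrete, here is a small NumPy sketch (rotation and components made up) showing that the components change under a change of basis while a basis-independent quantity like the length does not:

```python
import numpy as np

# A made-up rotation about z, serving as the change of basis.
theta = 0.7
L = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

v = np.array([1.0, 2.0, 3.0])   # components of a vector in the unprimed basis
v_prime = L @ v                 # components of the SAME vector in the primed basis

# The components differ...
assert not np.allclose(v, v_prime)
# ...but basis-independent quantities, such as the length, do not:
assert np.isclose(np.linalg.norm(v), np.linalg.norm(v_prime))
```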

VuIcan
Personally, I think it is better not to think about vectors and tensors as transforming objects.

But one needs to understand the mathematical operations being performed to be able to understand how said mathematical entity remains unchanged through a sequence of operations. Yes, I'm fully aware that the magnitude/direction of a vector doesn't change under these types of linear transformations, but I just can't seem to follow the math in this case (or at least I'm paranoid that I'm not following it correctly). Do you think my interpretation of the sequence of operations is correct?

Also, aren't tensors defined by the way they transform? So wouldn't understanding tensors require one to think of them as transforming objects (component-wise) under some imposed coordinate system?

That said, I do have an additional question as someone who is new to the idea of tensors. I've only been able to "understand" second-rank tensors when they've been defined operationally, like the inertia tensor or the Maxwell tensor. Do you have any helpful resources that might help me attain some geometric understanding of second-rank tensors, in the same manner that everybody has the geometric picture of a vector as something with a "magnitude and direction", an arrow in 3D space?

Thanks again.

Staff Emeritus
Homework Helper
Gold Member
Also, aren't tensors defined by the way they transform?
No. It is a common way to introduce them, but I find it misleading at best, and students never really seem to grasp things when introduced this way. (Also, again note that the components transform, not the tensor itself.) A type (m,n) tensor is a linear map from ##n## copies of a vector space to ##m## copies of the vector space; the transformation properties of the components follow directly from this when you change the basis on the vector space.

That said, I do have an additional question as someone who is new to the idea of tensors. I've only been able to "understand" second-rank tensors when they've been defined operationally, like the inertia tensor or the Maxwell tensor. Do you have any helpful resources that might help me attain some geometric understanding of second-rank tensors, in the same manner that everybody has the geometric picture of a vector as something with a "magnitude and direction", an arrow in 3D space?
A vector is a linear combination of the basis vectors. A rank 2 tensor is a linear combination of pairs of basis vectors ##\vec e_i \otimes \vec e_j##. In order to know how a rank 2 tensor acts, you would need 9 numbers (in 3D space). One possibility is drawing the three vectors that the basis vectors are mapped to by the tensor.
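Both statements are easy to verify numerically. A quick NumPy sketch (with arbitrary made-up components): the 9 numbers are the coefficients in the ##\vec e_i \otimes \vec e_j## expansion, and the vectors the basis vectors map to are the columns of the component matrix.

```python
import numpy as np

T = np.arange(9.0).reshape(3, 3)  # the 9 components of some rank-2 tensor (made up)
e = np.eye(3)                     # rows e[0], e[1], e[2] are the standard basis vectors

# T as a linear combination of outer products e_i (x) e_j:
rebuilt = sum(T[i, j] * np.outer(e[i], e[j])
              for i in range(3) for j in range(3))
assert np.allclose(rebuilt, T)

# The vectors the basis vectors are mapped to are exactly the columns of T:
for j in range(3):
    assert np.allclose(T @ e[j], T[:, j])
```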

VuIcan
A type (m,n) tensor is a linear map from ##n## copies of a vector space to ##m## copies of the vector space; the transformation properties of the components follow directly from this when you change the basis on the vector space.

I apologize for my ignorance, but I'm not sure what you mean by copies of a vector space?

A vector is a linear combination of the basis vectors. A rank 2 tensor is a linear combination of pairs of basis vectors ##\vec e_i \otimes \vec e_j##. In order to know how a rank 2 tensor acts, you would need 9 numbers (in 3D space). One possibility is drawing the three vectors that the basis vectors are mapped to by the tensor.

So taking the inertia tensor as an example:

$$\mathbf{\widetilde{I}} = \int dm \left[ \langle r|r\rangle \mathbf{1} - |r\rangle\langle r| \right]$$

How does the idea of it being a linear combination of outer products of basis vectors apply? Sorry if I'm being a bit clueless.
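To check my own understanding, here is a NumPy sketch of the discrete analogue of that integral, a sum over point masses (masses and positions made up purely for illustration), and of expanding the result in the ##\vec e_i \otimes \vec e_j## basis:

```python
import numpy as np

# Discrete analogue of the integral: I = sum_k m_k ( <r_k|r_k> 1 - |r_k><r_k| ).
masses = np.array([1.0, 2.0, 0.5])
positions = np.array([[1.0, 0.0, 0.0],
                      [0.0, 1.0, 1.0],
                      [1.0, 1.0, 0.0]])

I = sum(m * (r @ r * np.eye(3) - np.outer(r, r))
        for m, r in zip(masses, positions))

# Its components I[i, j] are the coefficients of the expansion in e_i (x) e_j:
e = np.eye(3)
rebuilt = sum(I[i, j] * np.outer(e[i], e[j])
              for i in range(3) for j in range(3))
assert np.allclose(rebuilt, I)
```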

The three vectors that the basis vectors are mapped to in this scenario would be the column vectors, correct?

Thanks for your patience, I think I'm almost there : )

Gold Member
I apologize for my ignorance , but I'm not sure what you mean by copies of a vector space?
You have a k-linear map taking inputs in the product ##V \times V \times \dots \times V \times V^{*} \times \dots \times V^{*}## (though the factors can appear in mixed order), so that the tensor is linear in each factor separately.

Homework Helper
Gold Member
In preparation of @WWGD's comment, this might help...
• A real function of two real variables $f(u,v)$ can be regarded as a map from
ordered pairs of real values $(u,v)\in \cal R\times R$ to the reals $\cal R$.
In $\cal R\times R$, each "$\cal R$" can be thought of as an independent copy of $\cal R$.
• A dot-product of two vectors $\vec u \cdot \vec v= g(\vec u,\vec v)=g_{ab} u^a v^b$ can be regarded as a map from
ordered pairs of vectors $(\vec u,\vec v)\in V\times V$ to the reals $\cal R$.
In $V\times V$, each "$V$" can be thought of as an independent copy of $V$.

In standard notation, a vector has 1 up-index (hence, $u^a$) and corresponds to a column-vector in matrix component notation.
In "bra-ket" notation, this is like the ket $\left| u \right\rangle$.
(The lower-indices on $g_{ab}$ means that it accepts two vectors to produce a scalar.)
(** I provide alternate notations to help with intuition... be careful not to mix notations. **)

This dot-product is actually a "bi"-linear map since it is linear in each of the 2 arguments:
$g(\vec u+\vec p,\vec v)=g(\vec u,\vec v)+g(\vec p,\vec v)$ and $g(A\vec u,\vec v)=Ag(\vec u,\vec v)$
$g(\vec u,\vec v+\vec q)=g(\vec u,\vec v)+g(\vec u,\vec q)$ and $g(\vec u,B\vec v)=Bg(\vec u,\vec v)$
or generally by "FOIL"ing
\begin{align*} g(A\vec u+\vec p,B\vec v+\vec q) &=g(A\vec u,B\vec v)+g(A\vec u,\vec q)+g(\vec p, B\vec v)+g(\vec p,\vec q)\\ &=ABg(\vec u,\vec v)+Ag(\vec u,\vec q)+Bg(\vec p, \vec v)+g(\vec p,\vec q) \end{align*}
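This "FOIL" identity is easy to check numerically for the ordinary dot product, a sketch with random made-up vectors and scalars:

```python
import numpy as np

rng = np.random.default_rng(0)
g = lambda u, v: u @ v                  # the Euclidean dot product as a bilinear map
u, v, p, q = rng.standard_normal((4, 3))  # four made-up vectors in 3D
A, B = 2.0, -3.0                          # made-up scalars

# "FOIL": g(Au + p, Bv + q) = AB g(u,v) + A g(u,q) + B g(p,v) + g(p,q)
lhs = g(A * u + p, B * v + q)
rhs = A * B * g(u, v) + A * g(u, q) + B * g(p, v) + g(p, q)
assert np.isclose(lhs, rhs)
```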

In @Orodruin's terms,
• the $g_{ab}$ is a type (m=0,n=2)-tensor [with 0 up-indices and 2 down-indices]
because it takes
"n=2 copies of the vector space $V$" (that is, $V\times V$)
to
"m=0 copies of the vector space $V$" (a.k.a. the scalars (usu.) $\cal R$).
(that is, input an ordered pair of vectors and output a scalar: $V\times V \longrightarrow \cal R$).

• An example of a type (m=0,n=1)-tensor [with 0 up-indices and 1 down-index]
is the "x-component operation in Euclidean space" $\color{red}{(\hat x\cdot \color{black}{\square})}=\color{red}{\hat x_b}$
(where: $\color{red}{\hat x_b}=\color{red}{g_{ab} \hat x^a}$ and $\color{red}{\hat x_b} \hat x^b =\color{red}{g_{ab} \hat x^a} \hat x^b =1$).
Thus,$$u_{\color{red}{x}}=\color{red}{(\hat x\cdot \color{black}{\vec u})}=\color{red}{\hat x_b} u^b$$ because it takes
"n=1 copy of the vector space $V$" to "m=0 copies of the vector space $V$" (the scalars $\cal R$).
(that is, input a vector and output a scalar: $V \longrightarrow \cal R$)
• An example of a type (m=1,n=1)-tensor [with 1 up-index and 1 down-index]
is the "vector x-component operation in Euclidean space" $\color{red}{\hat x (\hat x\cdot \color{black}{\square})}=\color{red}{\hat x^a \hat x_b}$
Thus, $$\color{red}{\hat x (\color{black}{u}_x)}=\color{red}{\hat x (\hat x\cdot \color{black}{\vec u})}=\color{red}{\hat x^a \hat x_b} u^b$$ because it takes
"n=1 copy of the vector space $V$" to "m=1 copy of the vector space $V$"
(that is, input a vector and output a vector: $V \longrightarrow V$).
In matrix notation, this is a square-matrix.
(This map could be called a transformation on the vector space
[specifically, a projection operator since $\color{red}{\hat x^a \hat x_b} \ \color{green}{\hat x^b \hat x_c}=\color{blue}{\hat x^a \hat x_c}$]).
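A small NumPy sketch of this (1,1)-tensor as a square matrix (using the standard basis, so $\hat x$ has components $(1,0,0)$): it inputs a vector, outputs a vector, and is idempotent as a projection operator should be.

```python
import numpy as np

xhat = np.array([1.0, 0.0, 0.0])
P = np.outer(xhat, xhat)        # the (1,1)-tensor  xhat^a xhat_b  as a matrix

u = np.array([3.0, -1.0, 2.0])  # a made-up input vector
# Input a vector, output a vector: the x-component of u times xhat.
assert np.allclose(P @ u, np.array([3.0, 0.0, 0.0]))
# Projection operators are idempotent: P P = P.
assert np.allclose(P @ P, P)
```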

In @WWGD's terms, $V^*$ is called a dual vector space and its elements have 1 down-index (like $z_a$)
and correspond to row-vectors in matrix component notation.
In "bra-ket" notation, this is like the bra $\left\langle z \right |$.
• The type (m=1,n=1) tensor $\color{red}{\hat x^a \hat x_b}$ can also be interpreted as a bilinear map $\color{red}{h(\color{blue}{\square} ,\color{black}{\square} )}$
that takes
an ordered pair: first from the "m=1 copy of the dual-vector space $V^*$" and second from the "n=1 copy of the vector space $V$"
to the reals: $V^* \times V \longrightarrow \cal R$,
as in $\color{red}{h(\color{blue}{\hat x_a} ,\color{black}{u^b} )}=\color{blue}{\hat x_a}\color{red}{\hat x^a \hat x_b} u^b=u_x$
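The same object read as a bilinear map on $V^* \times V$ can be sketched in NumPy (row vector in, column vector in, scalar out; components made up):

```python
import numpy as np

xhat = np.array([1.0, 0.0, 0.0])
P = np.outer(xhat, xhat)            # components of  xhat^a xhat_b

# h(covector z, vector u) -> scalar: contract both slots.
h = lambda z, u: z @ (P @ u)

u = np.array([3.0, -1.0, 2.0])      # a made-up vector
# Feeding in the covector xhat_a and the vector u^b returns u_x:
assert np.isclose(h(xhat, u), u[0])
```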

Hopefully, this is a useful starting point.
