Interpreting The Definition of Tensors

  • #1
VuIcan
Hello, I've just been slightly unsure of something and would like to get secondary confirmation, as I've just begun a book on tensor analysis. I should also preface this by saying my linear algebra is somewhat rusty. Suppose you have the inertia tensor ##\mathbf{\widetilde{I}}## in some unprimed coordinate system. Then we know, by definition, that this second-rank tensor transforms into some primed coordinate system as follows (where ##\Lambda## is the transformation matrix from the unprimed to the primed coordinate system):

$$ \widetilde{I}' = \Lambda \widetilde{I} \Lambda ^{\dagger}$$

Now, if one were to apply some vector stimulus in the primed coordinate system from the right, would it be correct to think of this vector as first being transformed into the unprimed coordinate system (since the adjoint is equivalent to the inverse in this context), then being directionally altered by the inertia tensor in the unprimed coordinate system, and finally being transformed back into the primed coordinate system by the final matrix? I feel like I'm misunderstanding something fundamental, however.
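
For what it's worth, this reading can be checked numerically. A minimal numpy sketch (the rotation standing in for ##\Lambda## and the symmetric matrix standing in for ##\widetilde{I}## are arbitrary made-up examples, not from any particular rigid body; for a real rotation the adjoint is just the transpose):

```python
import numpy as np

# An arbitrary example rotation (30 degrees about z) standing in for Lambda.
t = np.pi / 6
L = np.array([[np.cos(t), -np.sin(t), 0.0],
              [np.sin(t),  np.cos(t), 0.0],
              [0.0,        0.0,       1.0]])

# An arbitrary symmetric matrix standing in for the inertia tensor I.
I = np.array([[2.0, 0.3, 0.0],
              [0.3, 1.5, 0.1],
              [0.0, 0.1, 1.0]])

v_primed = np.array([1.0, 2.0, 3.0])  # a vector given in the primed system

# Apply I' = Lambda I Lambda^dagger to v' all at once ...
lhs = L @ I @ L.T @ v_primed

# ... versus the step-by-step reading: transform v' back to the unprimed
# system, act with I there, then transform the result to the primed system.
rhs = L @ (I @ (L.T @ v_primed))

print(np.allclose(lhs, rhs))  # True: the two readings agree
```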

Thanks in advance,

-Vulcan
 

Answers and Replies

  • #2
Orodruin
Personally, I think it is better not to think about vectors and tensors as transforming objects. A vector (or any other tensor) does not depend on the coordinate system. The direction to the Moon is physically the same regardless of what coordinates you choose. What does depend on the basis you choose is the components of the vector, and the vector (and tensor) components therefore change between systems. A rank two tensor is (among other things) a linear map from a vector to a vector. This mapping does not happen "in a coordinate system". However, you can express that linear map in a given basis by giving its components relative to that basis, which you can represent using a matrix.
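
As a concrete illustration of this point, here is a minimal numpy sketch (the map, vector, and second basis are arbitrary examples): the components of both the vector and the map change with the basis, but the output vector they describe is the same geometric object.

```python
import numpy as np

# Components of a linear map A and a vector v in the standard basis.
A = np.array([[2.0, 1.0], [0.0, 3.0]])
v = np.array([1.0, 1.0])

# A second basis, given as the columns of B (an arbitrary invertible matrix).
B = np.array([[1.0, 1.0], [0.0, 1.0]])
B_inv = np.linalg.inv(B)

# Components of the same vector and the same map relative to the new basis.
v_new = B_inv @ v       # vector components transform with B^-1
A_new = B_inv @ A @ B   # (1,1)-tensor components transform as B^-1 A B

# Act in each basis, then express both results in the standard basis.
w_standard = A @ v
w_from_new = B @ (A_new @ v_new)

print(np.allclose(w_standard, w_from_new))  # True: same geometric vector
```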
 
  • #3
VuIcan
Personally, I think it is better not to think about vectors and tensors as transforming objects.
But one needs to understand the mathematical operations being performed to be able to understand how said mathematical entity remains unchanged through a sequence of operations. Yes, I'm fully aware that the magnitude/direction of a vector doesn't change under these types of linear transformations, but I just can't seem to follow the math in this case (or at least I'm paranoid that I'm not following it correctly). Do you think my interpretation of the sequence of operations is correct?

Also, aren't tensors defined by the way they transform? So wouldn't understanding tensors require one to think of them as transforming objects (component-wise) under some imposed coordinate system?

That said, I do have an additional question as someone who is new to the idea of tensors. I've only been able to "understand" 2nd-rank tensors when they've been defined operationally, like the inertia tensor or Maxwell tensor. Do you have any helpful resources that might help me attain some geometric understanding of 2nd-rank tensors, in the same manner that everybody has the geometric picture of a vector as something with a "magnitude and direction"/an arrow in 3D space?

Thanks again.
 
  • #4
Orodruin
Also, aren't tensors defined by the way they transform?
No. It is a common way to introduce them, but I find it misleading at best, and students never really seem to grasp things when introduced this way. (Also, again note that the components transform, not the tensor itself.) A type (m,n) tensor is a multilinear map from ##n## copies of a vector space to the tensor product of ##m## copies of the vector space; the transformation properties of the components follow directly from this when you change basis on the vector space.
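
For instance, here is a minimal numpy sketch for a type (0,2) tensor (a bilinear form; the components and basis change below are arbitrary examples): once ##g## is defined as a bilinear map, its components are forced to transform as ##B^T g B## under a change of basis ##B##, with no extra postulate needed.

```python
import numpy as np

# Components of a bilinear map g (a (0,2)-tensor) in the standard basis.
g = np.array([[1.0, 0.2], [0.2, 2.0]])

# An arbitrary change of basis: the new basis vectors are the columns of B.
B = np.array([[2.0, 1.0], [1.0, 1.0]])

# g's new components are just its values on pairs of new basis vectors:
# g'_{ij} = g(B e_i, B e_j), i.e. the matrix B^T g B.
g_new = B.T @ g @ B

u, v = np.array([1.0, 3.0]), np.array([2.0, -1.0])  # components, old basis
u_new, v_new = np.linalg.solve(B, u), np.linalg.solve(B, v)

# The scalar g(u, v) does not care which basis we compute it in.
print(np.isclose(u @ g @ v, u_new @ g_new @ v_new))  # True
```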

That said, I do have an additional question as someone who is new to the idea of tensors. I've only been able to "understand" 2nd-rank tensors when they've been defined operationally, like the inertia tensor or Maxwell tensor. Do you have any helpful resources that might help me attain some geometric understanding of 2nd-rank tensors, in the same manner that everybody has the geometric picture of a vector as something with a "magnitude and direction"/an arrow in 3D space?
A vector is a linear combination of the basis vectors. A rank 2 tensor is a linear combination of pairs of basis vectors ##\vec e_i \otimes \vec e_j##. To specify a rank 2 tensor you need 9 numbers (in 3D space). One possibility is drawing the three vectors that the basis vectors are mapped to by the tensor.
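
To make that concrete, a minimal numpy sketch (the nine components ##T_{ij}## are arbitrary): expanding ##\sum_{ij} T_{ij}\, \vec e_i \otimes \vec e_j## reproduces the usual matrix, and the images of the basis vectors are exactly its columns.

```python
import numpy as np

T_components = np.array([[1.0, 2.0, 0.0],
                         [0.0, 3.0, 1.0],
                         [2.0, 0.0, 1.0]])  # arbitrary T_ij: 9 numbers in 3D

e = np.eye(3)  # standard basis vectors e_0, e_1, e_2 (as rows)

# Build the tensor as a linear combination of outer products e_i (x) e_j.
T = sum(T_components[i, j] * np.outer(e[i], e[j])
        for i in range(3) for j in range(3))

print(np.allclose(T, T_components))  # True: the same matrix comes back

# The image of each basis vector is the corresponding column of the matrix.
for j in range(3):
    print(np.allclose(T @ e[j], T[:, j]))  # True, True, True
```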
 
  • #5
VuIcan
A type (m,n) tensor is a multilinear map from ##n## copies of a vector space to the tensor product of ##m## copies of the vector space; the transformation properties of the components follow directly from this when you change basis on the vector space.
I apologize for my ignorance, but I'm not sure what you mean by copies of a vector space?

A vector is a linear combination of the basis vectors. A rank 2 tensor is a linear combination of pairs of basis vectors ##\vec e_i \otimes \vec e_j##. In order to know how a rank 2 tensor you would need 9 numbers (in 3D space). One possibility is drawing the three vectors that the basis vectors are mapped to by the tensor.
So taking the inertia tensor as an example:

$$\mathbf{\widetilde{I}} = \int dm \left[ \langle r|r\rangle \, \mathbf{1} - |r\rangle\langle r| \right]$$

How does the idea of it being a linear combination of outer products of basis vectors apply? Sorry if I'm being a bit clueless.

The three vectors that the basis vectors are mapped to in this scenario would be the column vectors of the matrix, correct?
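
In case it helps, the integrand can be explored numerically by replacing ##\int dm## with a sum over point masses. A minimal numpy sketch (the masses and positions below are arbitrary examples):

```python
import numpy as np

# Arbitrary point masses and positions, replacing the integral by a sum.
masses = np.array([1.0, 2.0, 0.5])
positions = np.array([[1.0, 0.0, 0.0],
                      [0.0, 1.0, 1.0],
                      [1.0, 1.0, 0.0]])

# I = sum_k m_k [ <r_k|r_k> 1 - |r_k><r_k| ]
I = sum(m * ((r @ r) * np.eye(3) - np.outer(r, r))
        for m, r in zip(masses, positions))

# The columns of I are indeed the images of the basis vectors under the map.
e = np.eye(3)
print(all(np.allclose(I @ e[j], I[:, j]) for j in range(3)))  # True
```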


Thanks for your patience, I think I'm almost there : )
 
  • #6
WWGD
I apologize for my ignorance , but I'm not sure what you mean by copies of a vector space?
You have a k-linear map taking inputs in the product ##V \times V \times \dots \times V \times V^{*} \times \dots \times V^{*}## (though the factors can appear in mixed order), so that the tensor is linear in each factor separately.
 
  • #7
robphy
To expand on @WWGD's comment, this might help...
  • A real function of two real variables [itex]f(u,v)[/itex] can be regarded as a map from
    ordered pairs of real values [itex](u,v)\in \cal R\times R[/itex] to the reals [itex]\cal R[/itex].
    In [itex]\cal R\times R[/itex], each "[itex]\cal R[/itex]" can be thought of as an independent copy of [itex]\cal R[/itex].
  • A dot-product of two vectors [itex]\vec u \cdot \vec v= g(\vec u,\vec v)=g_{ab} u^a v^b[/itex] can be regarded as a map from
    ordered pairs of vectors [itex](\vec u,\vec v)\in V\times V[/itex] to the reals [itex]\cal R[/itex].
    In [itex]V\times V[/itex], each "[itex]V[/itex]" can be thought of as an independent copy of [itex]V[/itex].

    In standard notation, a vector has 1 up-index (hence, [itex] u^a [/itex]) and corresponds to a column-vector in matrix component notation.
    In "bra-ket" notation, this is like the ket [itex]\left| u \right\rangle [/itex].
    (The lower indices on [itex]g_{ab}[/itex] mean that it accepts two vectors to produce a scalar.)
    (** I provide alternate notations to help with intuition... be careful not to mix notations. **)

    This dot-product is actually a "bi"-linear map since it is linear in each of the 2 arguments:
    [itex]g(\vec u+\vec p,\vec v)=g(\vec u,\vec v)+g(\vec p,\vec v)[/itex] and [itex]g(A\vec u,\vec v)=Ag(\vec u,\vec v)[/itex]
    [itex]g(\vec u,\vec v+\vec q)=g(\vec u,\vec v)+g(\vec u,\vec q)[/itex] and [itex]g(\vec u,B\vec v)=Bg(\vec u,\vec v)[/itex]
    or generally by "FOIL"ing
    [itex]\begin{align*}
    g(A\vec u+\vec p,B\vec v+\vec q)
    &=g(A\vec u,B\vec v)+g(A\vec u,\vec q)+g(\vec p, B\vec v)+g(\vec p,\vec q)\\
    &=ABg(\vec u,\vec v)+Ag(\vec u,\vec q)+Bg(\vec p, \vec v)+g(\vec p,\vec q)
    \end{align*}[/itex]
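
    (A quick numerical check of this expansion, as a minimal numpy sketch with an arbitrary symmetric [itex]g_{ab}[/itex] and random vectors:)

```python
import numpy as np

g = np.array([[1.0, 0.1, 0.0],
              [0.1, 2.0, 0.3],
              [0.0, 0.3, 1.5]])  # an arbitrary symmetric g_ab

def dot(u, v):
    return u @ g @ v             # g(u, v) = g_ab u^a v^b

rng = np.random.default_rng(0)
u, p, v, q = (rng.normal(size=3) for _ in range(4))
A, B = 2.0, -3.0

lhs = dot(A * u + p, B * v + q)
rhs = A * B * dot(u, v) + A * dot(u, q) + B * dot(p, v) + dot(p, q)
print(np.isclose(lhs, rhs))      # True: bilinearity ("FOIL"ing) holds
```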

In @Orodruin's terms,
  • the [itex] g_{ab} [/itex] is a type (m=0,n=2)-tensor [with 0 up-indices and 2 down-indices]
    because it takes
    "n=2 copies of the vector space [itex]V[/itex]" (that is, [itex]V\times V[/itex])
    to
    "m=0 copies of the vector space [itex]V[/itex]" (a.k.a. the scalars (usu.) [itex]\cal R[/itex]).
    (that is, input an ordered pair of vectors and output a scalar: [itex]V\times V \longrightarrow \cal R[/itex]).

  • An example of a type (m=0,n=1)-tensor [with 0 up-indices and 1 down-index]
    is the "x-component operation in Euclidean space" [itex]\color{red}{(\hat x\cdot \color{black}{\square})}=\color{red}{\hat x_b} [/itex]
    (where: [itex]\color{red}{\hat x_b}=\color{red}{g_{ab} \hat x^a} [/itex] and [itex]\color{red}{\hat x_b} \hat x^b =\color{red}{g_{ab} \hat x^a} \hat x^b =1 [/itex]).
    Thus,[tex]u_{\color{red}{x}}=\color{red}{(\hat x\cdot \color{black}{\vec u})}=\color{red}{\hat x_b} u^b [/tex] because it takes
    "n=1 copy of the vector space [itex]V[/itex]" to "m=0 copies of the vector space [itex]V[/itex]" (the scalars [itex]\cal R[/itex]).
    (that is, input a vector and output a scalar: [itex]V \longrightarrow \cal R [/itex])
  • An example of a type (m=1,n=1)-tensor [with 1 up-index and 1 down-index]
    is the "vector x-component operation in Euclidean space" [itex]\color{red}{\hat x (\hat x\cdot \color{black}{\square})}=\color{red}{\hat x^a \hat x_b} [/itex]
    Thus, [tex]\color{red}{\hat x (\color{black}{u}_x)}=\color{red}{\hat x (\hat x\cdot \color{black}{\vec u})}=\color{red}{\hat x^a \hat x_b} u^b [/tex] because it takes
    "n=1 copy of the vector space [itex]V[/itex]" to "m=1 copy of the vector space [itex]V[/itex]"
    (that is, input a vector and output a vector: [itex]V \longrightarrow V [/itex]).
    In matrix notation, this is a square-matrix.
    (This map could be called a transformation on the vector space
    [specifically, a projection operator since [itex]\color{red}{\hat x^a \hat x_b} \ \color{green}{\hat x^b \hat x_c}=\color{blue}{\hat x^a \hat x_c} [/itex]]).
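
    (Numerically, a minimal sketch of this projection property in Euclidean 3-space:)

```python
import numpy as np

xhat = np.array([1.0, 0.0, 0.0])
P = np.outer(xhat, xhat)       # the (1,1)-tensor  xhat^a xhat_b

u = np.array([2.0, 3.0, 5.0])
print(P @ u)                   # [2. 0. 0.]  =  xhat * (xhat . u)
print(np.allclose(P @ P, P))   # True: P is a projection operator
```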

In @WWGD's terms, [itex]V^*[/itex] is called a dual vector space and its elements have 1 down-index (like [itex] z_a [/itex])
and correspond to row-vectors in matrix component notation.
In "bra-ket" notation, this is like the bra [itex]\left\langle z \right | [/itex].
  • The type (m=1,n=1) tensor [itex] \color{red}{\hat x^a \hat x_b} [/itex] can also be interpreted as a bilinear map [itex]\color{red}{h(\color{blue}{\square} ,\color{black}{\square} )} [/itex]
    that takes
    an ordered pair: first from the "m=1 copy of the dual-vector space [itex]V^*[/itex]" and second from the "n=1 copy of the vector space [itex]V[/itex]"
    to the reals: [itex]V^* \times V \longrightarrow \cal R[/itex],
    as in [itex]\color{red}{h(\color{blue}{\hat x_a} ,\color{black}{u^b} )}=\color{blue}{\hat x_a}\color{red}{\hat x^a \hat x_b} u^b=u_x [/itex]
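
    (In matrix notation, a minimal sketch of this contraction: the dual vector [itex]\hat x_a[/itex] enters as a row vector on the left, the vector [itex]u^b[/itex] as a column on the right, and the result is the scalar [itex]u_x[/itex]. In Euclidean space the components of [itex]\hat x_a[/itex] and [itex]\hat x^a[/itex] coincide numerically:)

```python
import numpy as np

xhat = np.array([1.0, 0.0, 0.0])
u = np.array([2.0, 3.0, 5.0])

h = np.outer(xhat, xhat)  # components xhat^a xhat_b of the (1,1)-tensor
print(xhat @ h @ u)       # 2.0 = u_x, a scalar: V* x V -> R
```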

Hopefully, this is a useful starting point.
 
