
What Is a Tensor?


Let me start with a counter-question. What is a number? Before you laugh, there is more to this question than one might think. A number can be something we use to count, or, more advanced, an element of a field like the real numbers. Students might answer that a number is a scalar. This is the appropriate answer when vector spaces are around. But what is a scalar? A scalar can be viewed as the coordinate of a one dimensional vector space, the coefficient of a basis vector. It means we stretch or compress a vector. But this manipulation is a transformation, a homomorphism, a linear mapping. So the number represents a linear mapping of a one dimensional vector space. It also transports other numbers to new ones. Thus it is an element of ##\mathbb{R}^*##, the dual space: ##c \mapsto (a \mapsto \langle c,a \rangle)##. Wait! Linear mappings? Aren’t those represented by matrices? Yes: it is a ##1 \times 1## matrix, and even a vector itself.
So without any trouble we have already found that a number is

  • An element of a field, e.g. ##\mathbb{R}##.
  • A scalar.
  • A coordinate.
  • A component.
  • A transformation of other numbers, an element of a dual vector space, e.g. ##\mathbb{R}^*##
  • A matrix.
  • A vector.

As you might have noticed, we can easily generalize these properties to higher dimensions, i.e. arrays of numbers, which we usually call vectors and matrices. I could as well have asked: What is a vector, what is a matrix? We would have found even more answers, as matrices can be used to solve systems of linear equations, some form matrix groups, others play an important role in calculus as Jacobi matrices, and yet others are number schemes in stochastics. In the end they are only two dimensional arrays of numbers in rectangular shape. A number is even a special case of such an array: a ##1 \times 1## matrix.

 

This sums up the difficulties when we ask: What is a tensor? Depending on whom you ask, how much room and time there is for an answer, where the emphases lie, or what you want to use them for, the answers may vary significantly. In the end they are only multi-dimensional arrays of numbers in rectangular shape. 1) 2)

\begin{equation*}
\begin{aligned}
\begin{bmatrix}
2
\end{bmatrix}
&\;
&\begin{bmatrix}
1\\1
\end{bmatrix}
&\;
&\begin{bmatrix}
0& -1\\1&0
\end{bmatrix}
&\;
\\
\text{scalar} &\;  & \text{vector} &\;  & \text{matrix}
\\
- &\;  & (1,0)\text{ tensor} &\;  & (2,0)\text{ tensor}
\end{aligned}
\end{equation*}

 

[Figure: a cube of numbers, continuing the sequence as a ##(3,0)## tensor]
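
For readers who want to experiment, this "multi-dimensional array" point of view can be made concrete in a few lines of code. Here is a minimal sketch in Python with NumPy, where the number of axes (`ndim`) plays the role of the tensor's rank:

```python
import numpy as np

scalar = np.array(2)                     # shape ()        -> rank 0
vector = np.array([1, 1])                # shape (2,)      -> rank 1
matrix = np.array([[0, -1], [1, 0]])     # shape (2, 2)    -> rank 2
cube   = np.zeros((2, 2, 2))             # shape (2, 2, 2) -> rank 3
print(scalar.ndim, vector.ndim, matrix.ndim, cube.ndim)     # 0 1 2 3
```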

Many students are used to dealing with scalars (numbers, mass), vectors (arrows, force) and matrices (linear equations, Jacobi matrix, linear transformations, covariances). The concept of tensors, however, is often new to them at the beginning of their study of physics. Unfortunately they are as important in physics as scalars, vectors and matrices are. The good news is, they aren’t any more difficult than the former. They only have more coordinates. This might seem to come at the expense of clarity, but there are methods to deal with it. E.g. a vector also has more coordinates than a scalar. The only difference is that we can sketch an arrow, whereas sketching an object defined by a cube of numbers is impossible. And as we can do more with matrices than we can do with scalars, we can do even more with tensors, because a cube of numbers, or even higher dimensional arrays of numbers, can represent a lot more than simple scalars and matrices can. Furthermore, scalars, vectors and matrices are also tensors. This is already the entire secret about tensors. Everything beyond this point is methods, examples and language, in order to prepare for how tensors can be used to investigate certain objects.

Definitions

As variable as the concept of tensors is, so variable are its possible definitions. In coordinates a tensor is a multi-dimensional, rectangular scheme of numbers: a single number as a scalar, an array as a vector, a matrix as a linear function, a cube as a bilinear algorithm, and so on. All of them are tensors: as a scalar is a special case of a matrix, all of these are special cases of a tensor. The most abstract formulation is: a tensor product ##\otimes_\mathbb{F}## is a binary covariant functor that represents a solution of a co-universal mapping problem on the category of vector spaces over a field ##\mathbb{F}## [3]. It is a long way from a scheme of numbers to this categorical definition. To be of practical use, the truth lies, as so often, in between. Numbers don’t mean anything without a basis, and categorical terms are useless in the everyday business where coordinates are dominant.

 

Definition: A tensor product of vector spaces ##U \otimes V## is the vector space freely generated by the pairs ##(u,v) \in U \times V##, written ##u \otimes v##, subject to the relations

\begin{equation}\label{Tensor Product}
\begin{aligned}
(u+u')\otimes v &= u \otimes v + u' \otimes v\\
u \otimes (v + v') &= u \otimes v + u \otimes v'\\
\lambda (u\otimes v) &= (\lambda u) \otimes v = u \otimes (\lambda v)
\end{aligned}
\end{equation}

This means a tensor product is the vector space freely generated by all pairs ##(u,v)##, subject to relations that enforce linearity in each argument, i.e. bilinearity, which justifies the name product. Tensors form a vector space, as matrices do. The tensor product, however, must not be confused with the direct sum ##U \oplus V##, which has dimension ##\operatorname{dim} U +\operatorname{dim} V## with basis ##\{(u_i,0)\, , \,(0 , v_j)\}##, whereas in a tensor product ##U \otimes V## all products of basis vectors ##u_i \otimes v_j## are linearly independent, and we get the dimension ##\operatorname{dim} U \cdot \operatorname{dim} V##. Tensors can be added and multiplied by scalars. A tensor product is not commutative, even if both vector spaces are the same. Obviously the construction can be iterated, and the vector spaces could as well be dual spaces or algebras. In physics, tensors are often a mixture of several vector spaces and several dual spaces. It also makes sense to sort both kinds, as the tensor product isn’t commutative.
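
The defining relations (1) and the dimension count can be checked numerically. A minimal sketch in Python with NumPy, taking the outer product of coordinate vectors as the coordinate form of ##u \otimes v## (this anticipates the dyad construction further below):

```python
import numpy as np

rng = np.random.default_rng(42)
u, u2 = rng.random(3), rng.random(3)   # u, u' in U  (dim U = 3)
v, v2 = rng.random(4), rng.random(4)   # v, v' in V  (dim V = 4)
lam = 2.5

outer = np.outer                       # coordinate form of u (x) v
assert np.allclose(outer(u + u2, v), outer(u, v) + outer(u2, v))
assert np.allclose(outer(u, v + v2), outer(u, v) + outer(u, v2))
assert np.allclose(lam * outer(u, v), outer(lam * u, v))
assert np.allclose(lam * outer(u, v), outer(u, lam * v))

# dim(U (x) V) = dim U * dim V = 12, whereas dim(U (+) V) = 3 + 4 = 7
assert outer(u, v).size == 12
```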

 

Definition: A tensor ##T_q^p## of type ##(p,q)## over ##V##, with ##p## contravariant and ##q## covariant components, is an element (vector) of
$$
\mathcal{T}(V^p;V^{*}_q) = \underbrace{ V\otimes\ldots\otimes V}_{p\text{ times}}\otimes\underbrace{ V^*\otimes\ldots\otimes V^*}_{q\text{ times}}
$$
By (1) such a tensor is linear in each of its components.
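
For example, a ##(1,1)## tensor, an element of ##V \otimes V^*##, acts as a linear map ##V \to V## by contracting its covariant slot with a vector. A small numerical sketch (with hypothetical components, Python/NumPy):

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.random((3, 3))           # components T^i_j of a (1,1) tensor on R^3
x = rng.random(3)                # a vector with components x^j

y = np.einsum('ij,j->i', T, x)   # contraction y^i = T^i_j x^j
assert np.allclose(y, T @ x)     # ... which is ordinary matrix action
```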

Examples

From a mathematical point of view it doesn’t matter whether a vector space ##V## or its dual ##V^*## of linear functionals is considered. Both are vector spaces and a tensor product in this context is defined for vector spaces. So we can simply say

  • A tensor of rank ##0## is a scalar: ##T^0 \in \mathbb{R}##.
  • A tensor of rank ##1## is a vector: ##T^1 = \sum u_i##.
  • A tensor of rank ##2## is a matrix: ##T^2 = \sum u_i \otimes v_i##.
  • A tensor of rank ##3## is a cube: ##T^3 = \sum u_i \otimes v_i \otimes w_i##.
  • A tensor of rank ##4## is a ##4##-cube and we run out of terms for them: ##T^4 = \sum u_i \otimes v_i \otimes w_i \otimes z_i##.
  • ##\ldots## etc.

If we build ##u \otimes v## in coordinates we get a matrix. Say ##u = (u_1,\ldots ,u_m)^\tau## and ##v = (v_1,\ldots , v_n)^\tau##. Then
$$
u \otimes v = u \cdot v^\tau = \begin{bmatrix} u_1v_1& u_1v_2 & u_1v_3 & \ldots & u_1v_n\\
u_2v_1& u_2v_2 & u_2v_3 & \ldots & u_2v_n\\
u_3v_1& u_3v_2 & u_3v_3 & \ldots & u_3v_n\\
\vdots & \vdots & \vdots & \ddots & \vdots \\
u_mv_1& u_mv_2 & u_mv_3 & \ldots & u_mv_n
\end{bmatrix}
$$

Note that this is the usual matrix multiplication, row times column. But here the first factor consists of ##m## rows of length ##1## (a column vector) and the second factor of ##n## columns of length ##1## (a row vector). It also means that this matrix has rank one, since all its rows are multiples of the single row vector ##v^\tau##. To write an arbitrary ##n \times n## matrix ##A## as a tensor, we need at most ##n## of those dyadic tensors, i.e.
$$
A = \sum_{i=1}^n u_i \otimes v_i
$$
A generic “cube” ##u \otimes v \otimes w## gives us a “rank ##1## cube”: multiples of a rank ##1## matrix stacked into a cube. An arbitrary “cube” would be a sum of these. And this procedure isn’t bounded by dimension; we can go on and on. The only catch is that “cube” was already a bit of a crutch to describe a three dimensional array of numbers, and beyond that we run out of words other than tensor. A four dimensional version (tensor) could be viewed as the tensor product of two matrices, which are themselves sums of tensor products of two vectors.
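
In code the dyad ##u \otimes v## is exactly the outer product, and both the rank-one property and the decomposition ##A = \sum_i u_i \otimes v_i## are easy to verify. A sketch (choosing the standard basis vectors ##e_i## and the rows of ##A## is just one possible decomposition):

```python
import numpy as np

rng = np.random.default_rng(1)
u, v, w = rng.random(3), rng.random(4), rng.random(5)

dyad = np.outer(u, v)                      # u (x) v, an m x n matrix
assert np.linalg.matrix_rank(dyad) == 1    # rank one, as argued above

triad = np.einsum('i,j,k->ijk', u, v, w)   # u (x) v (x) w, a 3 x 4 x 5 "cube"

# An arbitrary n x n matrix A as a sum of n dyads: A = sum_i e_i (x) (row i of A)
A = rng.random((4, 4))
E = np.eye(4)
assert np.allclose(sum(np.outer(E[i], A[i]) for i in range(4)), A)
```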

 

Let us now consider arbitrary ##2 \times 2## matrices ##M## and order their entries so that we can treat them as vectors, because ##\mathbb{M}(2,2)## is a vector space. Say ##(M_{11},M_{12},M_{21},M_{22})##. Then we obtain in

\begin{equation*} \begin{aligned}
M \cdot N &= \begin{bmatrix} M_{11}&M_{12} \\ M_{21}&M_{22}
\end{bmatrix} \cdot \begin{bmatrix} N_{11}&N_{12} \\ N_{21}&N_{22} \end{bmatrix} \\ & \\ &= \begin{bmatrix}M_{11}\cdot N_{11}+M_{12} \cdot N_{21} &M_{11} \cdot N_{12}+M_{12} \cdot N_{22} \\ M_{21}\cdot N_{11}+M_{22} \cdot N_{21} & M_{21} \cdot N_{12}+M_{22} \cdot N_{22} \end{bmatrix}\\ & \\
&= (\sum_{\mu =1}^{7} u_{\mu }^* \otimes v_{\mu }^* \otimes W_{\mu })(M,N) = \sum_{\mu =1}^{7} u_{\mu}^*(M) \cdot v_{\mu}^*(N) \cdot W_{\mu}\\ & \\ &= ( \begin{bmatrix} 1\\0 \\0 \\1
\end{bmatrix}\otimes \begin{bmatrix} 1\\0 \\0 \\1
\end{bmatrix} \otimes \begin{bmatrix} 1\\0 \\0 \\1
\end{bmatrix} + \begin{bmatrix} 0\\0 \\1 \\1
\end{bmatrix}\otimes \begin{bmatrix} 1\\0 \\0 \\0
\end{bmatrix} \otimes \begin{bmatrix} 0\\0 \\1 \\-1
\end{bmatrix}\\ & \\ & +  \begin{bmatrix} 1\\0 \\0 \\0
\end{bmatrix}\otimes \begin{bmatrix} 0\\1 \\0 \\-1
\end{bmatrix} \otimes \begin{bmatrix} 0\\1 \\0 \\1
\end{bmatrix} + \begin{bmatrix} 0\\0 \\0 \\1
\end{bmatrix}\otimes \begin{bmatrix}-1\\0 \\1 \\0
\end{bmatrix} \otimes \begin{bmatrix} 1\\0 \\1 \\0
\end{bmatrix}\\ & \\ &+ \begin{bmatrix} 1\\1 \\0 \\0
\end{bmatrix}\otimes \begin{bmatrix} 0\\0 \\0 \\1
\end{bmatrix} \otimes \begin{bmatrix}-1\\1 \\0 \\0
\end{bmatrix} + \begin{bmatrix}-1\\0 \\1 \\0
\end{bmatrix}\otimes \begin{bmatrix} 1\\1 \\0 \\0
\end{bmatrix} \otimes \begin{bmatrix} 0\\0 \\0 \\1
\end{bmatrix}\\ & \\ &+ \begin{bmatrix} 0\\1 \\0 \\-1
\end{bmatrix}\otimes \begin{bmatrix} 0\\0 \\1 \\1
\end{bmatrix} \otimes \begin{bmatrix} 1\\0 \\0 \\0
\end{bmatrix} )(M,N) \end{aligned} \end{equation*}

 

a matrix multiplication of ##2 \times 2## matrices which only needs seven generic multiplications ##u_{\mu}^*(M) \cdot v_{\mu}^*(N)##, at the expense of more additions. This (bilinear) algorithm is due to Volker Strassen ##[1]##. It reduced the “matrix exponent” from ##3## to ##\log_2 7 \approx 2.807##, which means matrix multiplication can be done with ##n^{2.807}## essential multiplications instead of the obvious ##n^3## obtained by simply multiplying rows and columns. The current record holder is an algorithm from François Le Gall (2014) with an upper bound of ##O(n^{2.3728639})## ##[5]##. For the sake of completeness let me add that these bounds hold for large ##n##, and they kick in at different values of ##n##. For ##n=2## Strassen’s algorithm is already optimal: one cannot use fewer than seven multiplications to calculate the product of two ##2 \times 2## matrices. For larger matrices, however, there are algorithms with fewer multiplications. Whether these algorithms can be called efficient or useful is a different discussion. I was once told that Strassen’s algorithm had been used in cockpit software, but I’m not sure whether this is true.
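
For concreteness, here is a sketch of the seven-multiplication scheme in Python. The seven products ##p_1, \ldots, p_7## correspond to the seven terms ##u_{\mu}^*(M) \cdot v_{\mu}^*(N)## above, although the signs and ordering below follow the common textbook form of Strassen's algorithm rather than the exact dyads listed in the display:

```python
import numpy as np

def strassen_2x2(M, N):
    """Multiply two 2x2 matrices with only seven multiplications (Strassen [1])."""
    (a, b), (c, d) = M
    (e, f), (g, h) = N
    p1 = (a + d) * (e + h)
    p2 = (c + d) * e
    p3 = a * (f - h)
    p4 = d * (g - e)
    p5 = (a + b) * h
    p6 = (c - a) * (e + f)
    p7 = (b - d) * (g + h)
    return np.array([[p1 + p4 - p5 + p7, p3 + p5],
                     [p2 + p4,           p1 - p2 + p3 + p6]])

M = np.array([[1, 2], [3, 4]])
N = np.array([[5, 6], [7, 8]])
assert np.array_equal(strassen_2x2(M, N), M @ N)   # [[19, 22], [43, 50]]
```

Applied recursively to block matrices, this is what yields the ##O(n^{\log_2 7})## bound.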

This example is meant to demonstrate the following points:

  • The actual presentation of tensors depends on the choice of basis, just as it does for vectors and matrices.
  • Strassen’s algorithm is an easy example of how tensors can be used as mappings to describe certain objects. The set of all such algorithms for this matrix multiplication in fact forms an algebraic variety, i.e. a geometrical object.
  • Tensors can be used for various applications, not necessarily only in mathematics and physics; here it is computer science.
  • The obvious, here “Matrix multiplication of ##2 \times 2## matrices needs ##8## generic multiplications.”, isn’t necessarily the truth. Strassen saved one.
  • A tensor itself is a linear combination of, let us say, generic tensors of the form ##v_1 \otimes \ldots \otimes v_m##. In case ##m=1## one doesn’t actually speak of tensors, but of vectors instead, although strictly speaking they would be called monads. In case ##m=2## these generic tensors are called dyads and in case ##m=3## triads.
  • One cannot simplify the addition of generic tensors; it remains a formal sum. The only exception, as always for multilinear objects, is
    $$u_1 \otimes v \otimes w + u_2 \otimes v \otimes w = (u_1 +u_2) \otimes v \otimes w $$
    where all but one factor are identical, in which case we know it as the distributive property.
  • Matrix multiplication isn’t commutative, so we are not allowed to swap the ##u_\mu^*## and ##v_\mu^*## in the above example, i.e. a tensor product isn’t commutative either; see the sketch below.
  • Tensors in a given basis are number schemes. Which meaning we attach to them depends on our purpose.
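
The non-commutativity is visible immediately in coordinates: ##u \otimes v## and ##v \otimes u## are each other's transposes, not each other. A one-line check:

```python
import numpy as np

a = np.array([1.0, 2.0])
b = np.array([3.0, 4.0])
assert not np.array_equal(np.outer(a, b), np.outer(b, a))   # a (x) b != b (x) a
assert np.array_equal(np.outer(a, b), np.outer(b, a).T)     # a (x) b = (b (x) a)^T
```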

 

However, these schemes of numbers called tensors can stand for a lot of things: transformations, algorithms, tensor algebras or tensor fields. They can be used as a construction template for Graßmann algebras, Clifford algebras or Lie algebras, because of their (co-)universal property. They occur in a great many places in physics, e.g. the stress-energy tensor, the Cauchy stress tensor, the metric tensor, or curvature tensors such as the Ricci tensor, just to name a few. Not bad for some numbers ordered in multidimensional cubes. This only reflects what we’ve already experienced with matrices: as a single object, they are only some numbers in rectangular form, but we use them for everything from solving linear equations to describing the fundamental forces in our universe.

 

Sources

  1. Strassen, V., Gaussian elimination is not optimal, Numer. Math. 13 (1969): 354. doi:10.1007/BF02165411
  2. Werner Greub, Linear Algebra, Springer Verlag New York Inc., 1981, GTM 23
    https://www.amazon.com/Linear-Algebra-Werner-Greub/dp/8184896336
  3. Götz Brunner, Homologische Algebra, Bibliografisches Institut AG Zürich, 1973
  4. Maximillian Ganster, Graz University of Technology, Lecture Notes Summer 2010, Vektoranalysis,
    https://www.math.tugraz.at/~ganster/lv_vektoranalysis_ss_10/20_ko-_und_kontravariante_darstellung.pdf
  5. François Le Gall, 2014, Powers of Tensors and Fast Matrix Multiplication, https://arxiv.org/abs/1401.7714

 

##\underline{Footnotes:}##

 1) Tensors don’t need to be of the same size in every dimension, i.e. they don’t have to be built from vectors of the same dimension, so the examples below are already a special case, albeit the standard case in the sense that square matrices appear more often than rectangular ones. ##\uparrow##

 2) One might call a scalar a (0,0) tensor, but I will leave this up to the logicians. ##\uparrow##

 

 

  1. burakumin
    burakumin says:
    fresh_42

    Definition: A tensor product of vector spaces U⊗V is a vector space structure on the Cartesian product U×V that satisfies …

    If I interpret correctly this sentence says the underlying set of U⊗V is U×V. This is not correct. This works for the direct sum U⊕V but the tensor product is a "bigger" set than U×V. One usual way to encode it is to quotient ##\mathbb{R}^{(U \times V)}## (the set of finitely-supported functions from U×V to ##\mathbb{R}##) by the appropriate equivalence relation.

  2. lavinia
    lavinia says:

    – Students of General Relativity learn about tensors from their transformation properties. Tensors are arrays of numbers assigned to each coordinate system that transform according to certain rules. Arrays that do not transform according to these rules are not tensors. I think that it would be helpful to connect this General Relativity approach to the mathematical approach that you have explained in this Insight.

    Also these students need to understand how a metric allows one to pass back and forth between covariant and contra-variant tensors. One might show how this is the same as passing between a vector space and its dual.

    – Students of Quantum Mechanics learn about tensors to describe the states of several particles e.g. two entangled electrons. In this case, the mathematical definition is more like the Quantum Mechanics definition but for the Quantum Mechanics student it is also important to understand how linear operators act on tensor products of vector spaces.

    – If one wants to discuss tensor products purely mathematically, then one might show how they are defined when the scalars are not in a field but in a commutative ring – or even a non-commutative ring. It is potentially misleading to say that mathematical tensors require a field.

    BTW: The tensor product of two three dimensional vector spaces is nine dimensional. The tensor product of two 1 dimensional vector spaces is 1 dimensional.

  3. lavinia
    lavinia says:
    burakumin

    If I interpret correctly this sentence says the underlying set of U⊗V is U×V. This is not correct. This works for the direct sum U⊕V but the tensor product is a "bigger" set than U×V. One usual way to encode it is to quotient ##\mathbb{R}^{(U \times V)}## (the set of finitely-supported functions from U×V to ##\mathbb{R}##) by the appropriate equivalence relation.

    The tensor product of two 1 dimensional vector spaces is 1 dimensional, so it is smaller, not bigger, than the direct sum. The tensor product of two 2 dimensional vector spaces is 4 dimensional, so this is the same size as the direct sum, not bigger.

  4. burakumin
    burakumin says:
    lavinia

    The tensor product of two 1 dimensional vector spaces is 1 dimensional so it is smaller not bigger than the direct sum. The tensor product tof two 2 dimensional vector spaces is 4 dimensional so this is the the same size as the direct sum not bigger.

    This is correct but missing the relevant point: that the presentation contains a false statement. The fact that you can indeed find counter examples where the direct sum is bigger than the tensor product does not make the insight presentation any more correct.

  5. fresh_42
    fresh_42 says:
    burakumin

    If I interpret correctly this sentence says the underlying set of U⊗V is U×V. This is not correct. This works for the direct sum U⊕V but the tensor product is a "bigger" set than U×V. One usual way to encode it is to quotient ##\mathbb{R}^{(U \times V)}## (the set of finitely-supported functions from U×V to ##\mathbb{R}##) by the appropriate equivalence relation.

    No, this interpretation was of course not intended, rather a quotient of the free linear span of the set ##U \times V##.
    I added an explanation to close this trapdoor. Thank you.

  6. burakumin
    burakumin says:
    fresh_42

    No, this interpretation was of course not intended, rather a quotient of the free linear span of the set ##U times V##.
    I added an explanation to close this trapdoor. Thank you.

    Thank you

  7. fresh_42
    fresh_42 says:
    lavinia

    – Students of General Relativity learn about tensors …

    – Students of Quantum Mechanics learn about tensors …

    – If one wants to discuss tensor products purely mathematically …

    I know, or at least assumed all this. And I was tempted to explain a lot of these aspects. However, as I recognized that this would lead to at least three or four parts, I concentrated on my initial purpose again, which was to explain what kind of object tensors are, rather than to cover all aspects of their applications. It was meant to answer this basic question which occasionally comes up on PF and I got bored retyping the same stuff over and over again. That's why I've chosen Strassen's algorithm as an example, because it uses linear functionals as well as vectors to form a tensor product on a very basic level, which could easily be followed.

  8. Orodruin
    Orodruin says:

    Let me first say that I think that the Insight is well written in general. However, I must say that I have had a lot of experience with students not grasping what tensors are based on them being introduced as multidimensional arrays. Sure, you can represent a tensor by a multidimensional array, but this does not mean that a tensor is a multidimensional array or that a multidimensional array is a tensor. Let us take the case of tensors in ##V\otimes V## for definiteness. A basis change in ##V## can be described by a matrix that will tell you how the tensor components transform, but in itself this matrix is not a tensor.

    Furthermore, you can represent a tensor of any rank with a row or column vector – or (in the case of rank > 1) a matrix for that matter (just choose suitable bases). This may even be more natural if you consider tensors as multilinear maps. An example of a rank 4 tensor being used in solid mechanics is the compliance/stiffness tensors that give a linear relation between the stress tensor and the strain tensor (both symmetric rank 2 tensors). This is often represented as a 6×6 matrix using the basis ##\vec e_1 \otimes \vec e_1##, ##\vec e_2 \otimes \vec e_2##, ##\vec e_3 \otimes \vec e_3##, ##\vec e_1 \otimes \vec e_2##, ##\vec e_1 \otimes \vec e_3##, ##\vec e_2 \otimes \vec e_3## for the symmetric rank 2 tensors. In the same language, the stress and strain tensors are described as column matrices with 6 elements.

  9. fresh_42
    fresh_42 says:
    Orodruin

    Sure, you can represent a tensor by a multidimensional array, but this does not mean that a tensor is a multidimensional array or that a multidimensional array is a tensor. Let us take the case of tensors in ##V\otimes V## for definiteness. A basis change in ##V## can be described by a matrix that will tell you how the tensor components transform, but in itself this matrix is not a tensor.

    You can consider every matrix as a tensor (defining the matrix rank by tensors) or write a tensor in columns, as it is a vector (element of a vector space) in the end. Personally I like to view a tensor product as the solution of a couniversal mapping problem. As I said, I was tempted to write more about the aspect of "How to use a tensor" instead of "What is a tensor" but this would have led to several chapters, and the problem "Where to draw the line" would still have been an open one. Therefore I simply wanted to take away the fears of the term and answer what it is, as I did before in a few threads, where the basic question was about multilinearity and linear algebra and the constituents of tensors. The intro with the numbers should show that the degree of complexity depends on the complexity of purpose. I simply wanted to shortcut future answers to threads rather than write a book about tensor calculus. That was the main reason for the examples, which can be understood on a very basic level. Otherwise I would have written about the Ricci tensor and tensor fields which I find far more exciting. And I would have started with rings and modules and not with vector spaces. Thus I only mentioned them, because I wanted to keep it short and to keep it easy: an answer for a thread. Nobody on an "A" and probably as well on an "I" level reads a text about what a tensor is.

  10. Orodruin
    Orodruin says:

    Perhaps I was misled regarding the intended audience from the beginning. I am pretty sure most engineering students will not remember what a homomorphism is without looking it up. Certainly a person at B-level cannot be expected to know this?

    In the end, I suspect we would give different answers to the question in the title based on our backgrounds and the expected audience. My students would (generally) not prefer me to give them the mathematical explanation, but instead the physical application and interpretation, more to the effect of how I think you would interpret "how can you use tensors in physics?" or "how do I interpret the meaning of a tensor?"

    fresh_42

    Nobody on an "A" and probably as well on an "I" level reads a text about what a tensor is.

    This must mean I am B-level. :oops::eek::frown:

  11. fresh_42
    fresh_42 says:
    Orodruin

    Perhaps I was mislead regarding the intended audience from the beginning. I am pretty sure most engineering students will not remember what a homomorphism is without looking it up. Certainly a person at B-level cannot be expected to know this?

    In the end, I suspect we would give different answers to the question in the title based on our backgrounds and the expected audience. My students would (generally) not prefer me to give them the mathematical explanation, but instead the physical application and interpretation, more to the effect of how I think you would interpret "how can you use tensors in physics?" or "how do I interpret the meaning of a tensor?"

    Yes, you are right. My goal was really to say "Hey look, a tensor is nothing to be afraid of." and that's why I wrote

    Depending on whom you ask, how many room and time there is for an answer, where the emphases lie or what you want to use them for, the answers may vary significantly.

    And to be honest, I'm bad at basis changes, i.e. frame changes, and this whole raising and lowering of indices is mathematically completely boring stuff. I first wanted to touch all these questions but I saw that it would need a lot more space. So I decided to write a simple answer and leave the "several parts" article about tensors for the future. Do you want to know where I gave it up? I tried to get my head around the covariant and contravariant parts. Of course I know what this means in general, but what does it mean here? How is it related? Is there a natural way how the ##V##'s come up contravariant and the ##V^*##'s covariant? Without coordinate transformations? In a categorical sense, it is again a different situation. And as I've found a source where it was just the other way around, I labeled it "deliberate". Which makes sense, as you can always switch between a vector space and its dual – mathematically. I guess it depends on whether one considers ##\operatorname{Hom}(V,V^*)## or ##\operatorname{Hom}(V^*,V)##. But if you know a good answer, I'd really like to hear it.

    This must mean I am B-level. :oops::eek::frown:

    Well, your motivation can't have been to learn what a tensor is. That's for sure. :cool: Maybe you have been curious about another point of view. As I started, I found there are so many of them, that it would be carrying me away more and more (and thus couldn't be used as a short answer anymore). It is as if you start an article "What is a matrix?" by the sentence: "The Killing form is used to classify all simple Lie Groups, which are classical matrix groups. There is nothing special about it, all we need is the natural representation and traces … etc." Could be done this way, why not.

    This is the skeleton I originally planned:

    \subsection*{Covariance and Contravariance}
    \subsection*{To Raise and to Lower Indices}
    \subsection*{Natural Isomorphisms and Representations}
    \subsection*{Tensor Algebra}
    \section*{Stress Energy Tensor}
    \section*{Cauchy Stress Tensor}
    \section*{Metric Tensor}
    \section*{Curvature Tensor}
    \section*{The Co-Universal Property}
    \subsection*{Graßmann Algebras}
    \subsection*{Clifford Algebras}
    \subsection*{Lie Algebras}
    \section*{Tensor Fields}

  12. WWGD
    WWGD says:

    I thought it would be nice to have a good understanding of what a singleton ##a \otimes b## represents in a tensor product. It is one of these things that I have understood and then forgotten many times over.

  13. WWGD
    WWGD says:
    burakumin

    If I interpret correctly this sentence says the underlying set of U⊗V is U×V. This is not correct. This works for the direct sum U⊕V but the tensor product is a "bigger" set than U×V. One usual way to encode it is to quotient ##\mathbb{R}^{(U \times V)}## (the set of finitely-supported functions from U×V to ##\mathbb{R}##) by the appropriate equivalence relation.

    I think this is done before the modding out and arranging into equivalence classes is done.

  14. fresh_42
    fresh_42 says:
    WWGD

    I think this is done before the moding out and arranging into equivalence classes is done.

    It's the freely generated vector space (module) on the set ##U \times V##. The factorization indeed guarantees the multilinearity and the finiteness of sums, which could as well be formulated as conditions to hold.

  15. WWGD
    WWGD says:
    fresh_42

    It's the freely generated vector space (module) on the set ##U \times V##. The factorization indeed guarantees the multilinearity and the finiteness of sums, which could as well be formulated as conditions to hold.

    I was replying to someone else's post.

  16. stevendaryl
    stevendaryl says:

    The way that tensors are manipulated implicitly assumes isomorphisms between certain spaces.

    If [itex]A[/itex] is a vector space, then [itex]A^*[/itex] is the set of linear functions of type [itex]A \rightarrow S[/itex] (where [itex]S[/itex] means "scalar", which can mean real numbers or complex numbers or maybe something else depending on the setting).

    The first isomorphism: [itex]A^{**}[/itex] is isomorphic to [itex]A[/itex].

    The second isomorphism: [itex]A^* \otimes B^*[/itex] is isomorphic to those functions of type [itex](A \times B) \rightarrow S[/itex] that are linear in both arguments.

    So this means that a tensor of type [itex]T^p_q[/itex] can be thought of as a linear function that takes [itex]q[/itex] vectors and [itex]p[/itex] covectors and returns a scalar, or as a function that takes [itex]q[/itex] vectors and returns an element of [itex]V \otimes V \otimes \ldots \otimes V[/itex] ([itex]p[/itex] of them), or as a function that takes [itex]p[/itex] covectors and returns an element of [itex]V^* \otimes \ldots \otimes V^*[/itex] ([itex]q[/itex] of them), etc.

