What topics from linear algebra do I need to study tensors?

  1. Hi
    What topics in linear algebra do I need to know to start learning tensors?
    I know the following topics from linear algebra: (1) systems of equations, (2) vector spaces (linear independence, span, basis, important subspaces of a vector space), (3) linear transformations (kernel, image, isomorphic vector spaces), (4) eigenvalues and eigenvectors, (5) polynomials, (6) determinants.
    What other tools from linear algebra do I need to know to understand tensors? Where should I begin in order to understand tensors little by little? My final goal is to understand the applications of tensors in physics and then in pure mathematics.
     
  3. lavinia (Science Advisor)

    you know enough. feel free to ask questions
     
  4. What kind of objects are you talking about? Are they standard finite-dimensional vector spaces, or something like a Hilbert space?
     
  5. Fredrik (Staff Emeritus, Science Advisor, Gold Member)

    You really just need to understand linear functions, matrices, linear independence and bases. Make sure that you understand the relationship between linear functions and matrices described e.g. in post #3 in this thread.
     
  6. mathwonk (Science Advisor, Homework Helper)

    the basic concept in linear algebra is linear functions. tensors are one step up, and concern multilinear functions. i.e. tensors are a topic in multilinear algebra.

    a function is multilinear if it is linear in each variable separately, so indeed linearity is the only prerequisite.


    the basic example of a multilinear function is multiplication. so tensors are a generalization of multiplication, except the multiplication may not be commutative.
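
    for instance (spelling out the bilinearity claim), ordinary multiplication m(x,y) = xy on the real numbers satisfies

    [tex]m(ax+bx',y) = a\,m(x,y)+b\,m(x',y),\qquad m(x,ay+by') = a\,m(x,y)+b\,m(x,y'),[/tex]

    so it is linear in each slot separately, even though it is not linear as a function of the pair (x,y).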
     
  7. well, here are the questions that I'm struggling with for now: What is a dual space? (And why do we need it?) What is a co-variant/contra-variant component of a tensor?

    This is how I understand tensors: A tensor is a mapping from the Cartesian product of m vector spaces and n dual vector spaces to a field, such as the real numbers, which is linear in each of its arguments. Such a tensor would be of rank m+n and would have m contravariant/covariant components and n covariant/contravariant components. This is one of the many definitions of tensors that I've seen.
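
    In symbols, the way I picture it: a multilinear map

    [tex]T:\underbrace{V\times\cdots\times V}_{m\ \text{copies}}\times\underbrace{V^*\times\cdots\times V^*}_{n\ \text{copies}}\to\mathbb R.[/tex]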

    I meant finite-dimensional vector spaces over an arbitrary field.

    I fully understand linear independence, spans and bases. I'm also confident doing calculations with matrices.

    well, I guess a linear function is just another term for a linear transformation from one vector space to another. Is that right?

    for now my real confusion arises from the meaning of the dual of a vector space. then I need to understand what co/contravariant vectors are and why we're interested in them.
     
  8. Fredrik (Staff Emeritus, Science Advisor, Gold Member)

    Excellent. You should make sure that you also understand the relationship between linear operators and matrices. The fastest way to do that, if you don't know it already, is to read the post I linked to in my previous post. You need this to understand how the concept of "components of a tensor" generalizes the concept of "components of a linear operator".

    Yes, the terms linear function, linear map, linear operator and linear transformation, all mean the same thing in most books. Some authors only use the term operator when the domain and codomain are the same vector space. A linear functional is a linear map from a vector space over a field F into F.

    This post should be useful. You should also check out the three posts I link to at the end of it, and also this one.
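
    In brief: if [itex]\{e_j\}[/itex] is a basis for the domain and [itex]\{f_i\}[/itex] is a basis for the codomain, the matrix of a linear map T is defined by [itex]T(e_j)=\sum_i A_{ij}f_i[/itex], i.e. the j-th column of A holds the components of [itex]T(e_j)[/itex]. The components of a tensor are defined the same way, with one index for each argument.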
     
  9. Fredrik

    Fredrik 10,301
    Staff Emeritus
    Science Advisor
    Gold Member

    Instead of posting that last link, I should have just quoted the relevant section.
    I just wanted to include a definition of the components of a tensor.
     
  10. This is how I understand the relationship between linear transformations and matrices:
    Suppose we have two vector spaces V and W and a mapping between them, T: V → W, that keeps linearity intact, which means that for all v, w [itex]\in[/itex] V and scalars α and β we have: T(αv+βw) = αT(v) + βT(w).
    If we apply this linear transformation to a vector in V, written in terms of a basis [itex]\{e_i\}[/itex], we get:
    [itex]T(\alpha_1 e_1+\alpha_2 e_2+\dots+\alpha_n e_n) = \alpha_1 T(e_1) + \alpha_2 T(e_2) +\dots+ \alpha_n T(e_n)[/itex]
    Now we can associate a matrix to this linear transformation in the following way:
    Let's represent a vector v in V by an n×1 matrix (assuming V has n dimensions) and a vector w in W by an m×1 matrix (assuming W has m dimensions). Then we can define the matrix A such that w = Av. The matrix A will be of size m×n. The columns of A will be the T(e_i)'s (the e_i's are a basis for V, and the corresponding T(e_i)'s will form a basis for W). Hence we can write a vector w in W as a linear combination of the components of a vector v in V using the matrix multiplication Av. Conversely, if you give me an equation like w = Av, I can write it in the form [itex]T(\alpha_1 e_1+\dots+\alpha_n e_n) = \alpha_1 T(e_1) +\dots+ \alpha_n T(e_n)[/itex].
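
    For a concrete (made-up) example: take T: [itex]\mathbb R^2\to\mathbb R^2[/itex] defined by T(x,y) = (x+y, 2y). Then T(e_1) = T(1,0) = (1,0) and T(e_2) = T(0,1) = (1,2), so stacking these as columns gives

    [tex]A=\begin{pmatrix}1&1\\0&2\end{pmatrix},\qquad A\begin{pmatrix}x\\y\end{pmatrix}=\begin{pmatrix}x+y\\2y\end{pmatrix}=T(x,y).[/tex]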

    That's how I understand the relationship between linear transformations and matrices. correct me if I'm wrong.


    Thanks.

    I checked the thread, but the OP's question is not very understandable to me.

    Doesn't that confirm the way I defined a tensor?
    I'm still struggling with the concept of a dual space and dual bases. I don't understand what they are and why they are defined. (In fact, I've seen very different definitions of a dual space, which only makes it more confusing for me.)
     
    Last edited: Oct 26, 2011
  11. Deveno (Science Advisor)

    the dual space V* for a vector space V sounds imposing, but it's really a pretty simple idea.

    suppose we use the standard basis:

    {e1,e2,....,en} for V, where

    ej = (0,0,0,....,1,.....,0,0) <---1 in the j-th place.

    the dual basis is the basis:

    {φ1, φ2, ..., φn} where

    φi(ej) = δij, the Kronecker delta.

    you already know this basis, the dual basis element:

    φi(x1,x2,....,xn) = xi

    is just the i-th projection function.

    (note: in tensor notation the coordinate indices of the vector x are usually written "up",

    x = Σ xj ej, so i'm breaking with tradition just for this post).

    now, how do we normally calculate a linear functional f: V→F?

    we use linearity: f(v) = f(a1e1+a2e2+...+anen) = a1f(e1)+a2f(e2)+...+anf(en).

    now the f(ej) are just field elements, which we can regard as the "coordinates" of f in V*, and each aj can be regarded as φj(v).

    so f(v) = f(e1)φ1(v) + f(e2)φ2(v) + ... + f(en)φn(v)

    if we abbreviate f(ei) by fi, using the linearity of the projection functions we get:

    f(v) = (f1φ1+f2φ2+....+fnφn)(v)

    (again, i note that it is traditional to write the "coordinates" of a linear functional "down", i am just writing it this way for clarity's sake in this one post)

    in other words, we can write f = f1φ1+f2φ2+...+fnφn, a linear functional in V* is just a linear combination of the projection functionals dual to our original basis for V.

    that is, given a basis (a special subset of V, sort of the "instant coffee" description of V...just "add coordinates"), the projection functions that pick out the j-th coordinate in that basis are the corresponding "special subset" of V* (the dual basis).

    so, in the real plane, for example, we can describe any linear function from RxR→R, as a linear combination of "projection onto the x-axis" and "projection onto the y-axis".
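
    concretely (unpacking that last sentence): every linear f: RxR→R has the form f(x,y) = ax + by for fixed scalars a and b, which is exactly f = aφ1 + bφ2, since aφ1(x,y) + bφ2(x,y) = ax + by.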

    you have seen these dual vectors before in another setting, although you may not have realized it. if we regard "x" as meaning the FUNCTION:

    f(x,y) = x, then f' is the linear function with matrix [1 0], that is:

    f'(a,b) = 1(a) + 0(b) = a

    so df(a,b) (the differential of f at (a,b)) is just φ1, dual to the tangent vector (1,0) based at (a,b) (the unit vector in the x-direction with its start point at (a,b)). this is the actual meaning of the symbol "dx" in such expressions as:

    [tex]\int P dy + Q dx[/tex]

    (hopefully this explains why "dx" isn't a number....it's a functional, you need to "hit it with a vector" to get a number out of it)

    *********

    tensors can be understood in terms of the tensor product. a good introduction to that is here:

    http://www.dpmms.cam.ac.uk/~wtg10/tensors3.html
     
  12. well, I don't know what the notation φi(ej) = δij means. Does it mean that the inner product of φi and ej should be equal to the Kronecker delta, or that if φi acts on ej it'll produce a scalar? I guess the latter is true. if that's true, then can we define the dual space of a vector space V as the space spanned by the basis
    {φ1, φ2, ..., φn} such that if any of them acts on a basis vector from V it produces a scalar number (either 0 or 1)?

    the f(ej)'s are just field elements? I thought they were vectors. I start getting confused at this point.
     
  13. Fredrik (Staff Emeritus, Science Advisor, Gold Member)

    This is about half the story, but it also contains a mistake. If dim W>dim V, {T(ei)} doesn't have enough members to be a basis for W. Compare it to what I said:
    This is the whole story. Note that I'm not just saying that there's a matrix equation corresponding to y=Ax. I'm also saying exactly what the components are, given a basis for U and a basis for V. In particular, this means that the components of a linear functional T in a basis {e_i} are just T(e_i). Compare that to the definition of the components of a tensor, and you will see that it's the same idea applied to multilinear functionals.

    If you read the first paragraph of my post, you will see that I didn't understand it either, and wrote a reply that ignores everything he said. You can skip the first two paragraphs of my reply and start reading at "A manifold is a set with some other stuff defined on it, the most important being coordinate systems". The post gives you an overview of the basics of differential geometry, and the posts it links to provide some of the details.

    Yes. I didn't include that to show you the definition of a tensor, but to show you the definition of the components of a tensor. (You said that tensors have components, but you didn't say what they are). I wanted to include such a definition because I quickly skimmed through the old posts that I suggested that you read, and didn't see one in there.

    What other definitions have you seen? I don't think I've seen another one. The word "continuous" or "bounded" is sometimes thrown in there, as in "V* is the set of all bounded linear functions from V into ℝ". This addition to the definition is only relevant when V is infinite-dimensional. A linear function is continuous if and only if it's bounded, and as long as we're dealing with finite-dimensional spaces, all linear functions are continuous (if we use the isomorphism between V and [itex]\mathbb R^n[/itex], and the standard norm on [itex]\mathbb R^n[/itex], to define a norm on V).
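
    For reference, here is a minimal version of that definition (just a sketch of the same idea, since the linked post isn't quoted here): for a bilinear [itex]T:V\times V\to\mathbb R[/itex] and a basis [itex]\{e_i\}[/itex], the components are [itex]T_{ij}=T(e_i,e_j)[/itex], and bilinearity gives [itex]T(u,v)=T(u^ie_i,v^je_j)=u^iv^jT_{ij}[/itex]. Compare this with the components T(e_i) of a linear functional: it's the same construction with one index per argument.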
     
    Last edited: Oct 26, 2011
  14. Fredrik (Staff Emeritus, Science Advisor, Gold Member)

    Yes (to everything except the first guess about inner products).

    He used the word "functional" to indicate that the codomain of f is ℝ (the field that was used in the definition of the vector space). ℝ is a 1-dimensional vector space, so you could call its members "vectors" if you want to, but as you know, that would be unconventional.
     
  15. Deveno (Science Advisor)

    φi(ej) = δij means that:

    φi(ej) = 0 if i ≠ j,

    φi(ei) = 1.

    this is pretty obvious: the i-th coordinate of ej is 0 unless i = j (that's how we define the standard basis).

    it's not an "inner product"; φi is a function that, when given a vector, returns its i-th coordinate (which is a field element). so, in R3 for example:

    φ1(1,2,4) = 1
    φ2(1,2,4) = 2
    φ3(1,2,4) = 4

    so, yeah, φi(ej) is a scalar, and φi is defined so that it kills every basis vector except ei, and for ei, it gives the i-th coordinate, which is....1 (it's the only non-zero coordinate ei has).

    in R3, the dual basis to {(1,0,0),(0,1,0),(0,0,1)} is the 3 linear functionals:

    φ1(x,y,z) = x
    φ2(x,y,z) = y
    φ3(x,y,z) = z.

    any linear functional on R3 can be written as a linear combination of these 3.

    for example, suppose f(x,y,z) = 3x + 2(y-z). that's a perfectly good linear functional.

    we can write f = 3φ1 + 2φ2 - 2φ3, so in the dual space, with this particular dual basis, we have f = (3,2,-2) (just remember, this isn't really a "vector" but a way to specify WHICH linear functional we mean: take 3 times the first coordinate of a vector, then add 2 times the 2nd coordinate, then subtract 2 times the third coordinate, which is a perfectly good way to get a number from a vector).

    and of course the f(ej) are scalars! what does f do? it spits out a number, when you input a vector. well, a basis vector is still a vector, so when we apply a linear functional to a vector, we get a scalar.
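
    if it helps to see this numerically, here is a quick sanity check of the example above (a small Python/NumPy sketch, with all names made up for illustration):

    [code]
    import numpy as np

    # standard basis of R^3: e[j] is the j-th basis vector
    e = np.eye(3)

    # the dual basis: phi(i) picks out the i-th coordinate of a vector
    def phi(i):
        return lambda v: v[i]

    # check phi_i(e_j) = delta_ij (the Kronecker delta)
    for i in range(3):
        for j in range(3):
            assert phi(i)(e[j]) == (1.0 if i == j else 0.0)

    # the linear functional f(x,y,z) = 3x + 2(y - z)
    def f(v):
        x, y, z = v
        return 3*x + 2*(y - z)

    # its coordinates in the dual basis are the f(e_i)
    coords = [f(e[i]) for i in range(3)]   # [3.0, 2.0, -2.0]

    # f agrees with 3*phi_1 + 2*phi_2 - 2*phi_3 on a sample vector
    v = np.array([1.0, 2.0, 4.0])
    assert f(v) == sum(c * phi(i)(v) for i, c in enumerate(coords))
    print(coords, f(v))   # [3.0, 2.0, -2.0] -1.0
    [/code]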
     
  16. Fredrik (Staff Emeritus, Science Advisor, Gold Member)

    I just realized that what you said suggests that you may have misunderstood the term "scalar". It doesn't mean "either 0 or 1". Also, regarding your suggested definition of the dual space, I don't understand the "such that" part of it. If that's supposed to be a definition of the φi, it's incomplete.
     
  17. so, if I've understood it correctly, the dual space is nothing but a concept that arises from the linear 'functionals' on a vector space? (I didn't know what a functional meant before this thread.) Can I say that any vector space V has a corresponding dual space consisting of all linear functionals on V? So the key to understanding dual spaces is to understand what linear functionals are, right?

    I meant any scalars, not necessarily 0 and 1. my confusion arises from this part:

    We say that φi(ej) = δij. From that I understand that the only possible values of φi are 0 and 1. So when I see the example, I see something totally different: it says φ3(1,2,4) = 4. The idea is pretty obvious, but I don't understand how φi(ej) = δij explains this.
    So, if φi(ej) = δij, then δij is either 0 or 1. How can φ3(1,2,4) be equal to 4?
     
  18. Deveno (Science Advisor)

    the standard basis vectors ej are special, they only have 0 or 1 coordinates.

    so the dual basis functionals, acting on them, will only return a 0 or 1 value. this doesn't mean that {0,1} is the range of the dual basis functionals; in general a vector can have any field element as a coordinate value.

    we just define the dual basis to the standard basis by:

    1) defining the values on a basis set
    2) extending by linearity.

    for example, earlier i wrote φ2(1,2,4) = 2. let's expand this a bit:

    φ2(1,2,4) = φ2((1,0,0) + (0,2,0) + (0,0,4))

    = φ2(1(1,0,0) + 2(0,1,0) + 4(0,0,1))

    = φ2(1(1,0,0)) + φ2(2(0,1,0)) + φ2(4(0,0,1)) (by linearity!)

    = 1(φ2(1,0,0)) + 2(φ2(0,1,0)) + 4(φ2(0,0,1)) (by linearity again!)

    = (1)(0) + 2(1) + 4(0) (using the dual basis Kronecker delta; it's only 1 when the indices match)

    = 0 + 2 + 0 = 2 (using...um, addition of real numbers).
     
  19. Yaayy! I guess I've understood what a dual basis is. Now, is the dual space the space spanned by that basis? If so, then I've understood what a dual space is.

    now, what are co-variant and contra-variant vectors? How do they arise in math/physics?
     
  20. Fredrik (Staff Emeritus, Science Advisor, Gold Member)

    In the simplest non-trivial case [itex]V=\mathbb R^2[/itex], the condition [itex]\phi^i(e_j)=\delta^i_j[/itex] consists of the four equalities [tex]\begin{align}
    \phi^1(e_1) &=1\\
    \phi^1(e_2) &=0\\
    \phi^2(e_1) &=0\\
    \phi^2(e_2) &=1
    \end{align}[/tex] Since the [itex]\phi^i[/itex] are assumed to be linear, we have [itex]\phi^i(v)=\phi^i(v^je_j)=v^j\phi^i(e_j)=v^j\delta^i_j=v^i[/itex], and in particular [itex]\phi^1(ae_1+be_2)=a\phi^1(e_1)+b\phi^1(e_2)=a[/itex].

    I explained that in the post I linked to earlier. The one that made you say that you didn't understand the OP's question.

    Yes, but that would be a weird way to define it. V* is the set of linear functionals from V into ℝ. Addition and scalar multiplication are defined in the obvious ways:

    (f+g)(v)=f(v)+g(v)
    (af)(v)=a(f(v))

    These definitions give V* the structure of a vector space. Let f in V* and v in V be arbitrary. We have [itex]f(v)=f(v^i e_i)=v^if(e_i)=\phi^i(v)f(e_i)[/itex]. Since v was arbitrary, this means that [itex]f=f(e_i)\phi^i[/itex]. Since f was arbitrary, this means that [itex]\{\phi^i\}[/itex] spans V*.
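
    One step is left implicit above: linear independence. If [itex]c_i\phi^i=0[/itex], then applying both sides to [itex]e_j[/itex] gives [itex]c_i\delta^i_j=c_j=0[/itex]. So [itex]\{\phi^i\}[/itex] is not just a spanning set but a basis for V*, and in particular dim V* = dim V when V is finite-dimensional.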
     
    Last edited: Oct 27, 2011
  21. great. now I have almost understood the idea behind defining dual spaces and also the idea behind covariant and contravariant vectors. your post about co/contra-variant vectors was a great one.

    now the only thing that remains unclear to me is that I want to see some mathematical/physical examples of tensors, and I want to understand how we can interpret different physical situations with the new language I've learned. I have almost zero knowledge of differential geometry, so please don't go into very advanced topics from physics (like GR).

    Is there any other thing that I need to know about tensors?
     