
What is a tensor

  1. Aug 7, 2004 #1
    Can someone give me a dummies definition of what a tensor basically is and what its applications are? Thanks
     
  2. jcsd
  3. Aug 7, 2004 #2

    Janitor


  4. Aug 8, 2004 #3
  5. Aug 8, 2004 #4
    Can someone give me an example of an applied problem where it would be necessary to use a tensor to find a solution? I still really don't see why tensors would be useful.
     
  6. Aug 9, 2004 #5
  7. Aug 9, 2004 #6

    mathwonk


    I am going to go out on a limb here and try a very elementary explanation. Fortunately there are plenty of knowledgeable people here to correct my mistakes.

    A first order tensor is a vector, i.e. it is something linear, like a linear polynomial

    say ax+by + cz.

    A second order tensor is something that is bilinear, like a degree two polynomial

    say ax^2 + by^2 + cz^2 + dxy + exz + fyz.

    And so on.


    Given a function like f(x,y,z), we can re-expand it as a Taylor series about each point, and pick off the linear term, thus assigning to each point a linear polynomial, i.e. a vector field.

    On the other hand we could assign the second derivative, i.e. the second order term of the Taylor series at each point, thus obtaining a second degree tensor field.

    and so on.
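
    (A minimal numpy sketch of this Taylor-term picture; the sample function f and the finite-difference helpers are purely illustrative, not anything from the thread:)

    [code]
    import numpy as np

    def f(p):
        # an arbitrary smooth function R^3 -> R, just for illustration
        x, y, z = p
        return x**2 * y + np.sin(z)

    def gradient(f, p, h=1e-5):
        # first-order Taylor term at p: a rank 1 tensor (a vector)
        p = np.asarray(p, dtype=float)
        return np.array([(f(p + h*e) - f(p - h*e)) / (2*h) for e in np.eye(len(p))])

    def hessian(f, p, h=1e-4):
        # second-order Taylor term at p: a rank 2 (bilinear) tensor
        p = np.asarray(p, dtype=float)
        n, I = len(p), np.eye(len(p))
        return np.array([[(f(p + h*I[j] + h*I[k]) - f(p + h*I[j] - h*I[k])
                           - f(p - h*I[j] + h*I[k]) + f(p - h*I[j] - h*I[k])) / (4*h*h)
                          for k in range(n)] for j in range(n)])

    p = np.array([1.0, 2.0, 0.5])
    v = np.array([0.1, -0.3, 0.2])
    print(gradient(f, p) @ v)        # the linear term acting on v
    print(v @ hessian(f, p) @ v)     # the bilinear term acting on (v, v)
    [/code]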


    Hence any problem that requires second derivatives (or higher) for its solution is essentially one that uses tensors, i.e. multilinear as opposed to linear objects.

    Curvature is such a concept, and curvature is an intrinsic part of the theory of relativity, since mass produces curvature in space time.

    Ok, I drop out here and ask for help from the experts.
     
  8. Aug 10, 2004 #7
  9. Aug 10, 2004 #8

    mathwonk


    pmb phy, with all respect, I think you may be misunderstanding what you have seen.

    For instance on the first page of the site you just referred us to, equation (1) displays the "metric tensor" in exactly the form I gave for a second order (or rank) tensor, namely it has an expression as a homogeneous polynomial of degree 2.

    Later on, this same site describes a zero rank tensor as a scalar valued function that keeps the same value when coordinates are changed, and a first order tensor as one which transforms by the usual first order chain rule, when coordinates are changed, and a second order tensor as one which transforms by the chain rule for second derivatives when coordinates are changed, etc...
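
    (An illustrative numpy sketch of that rank 2 transformation rule, in the usual form g' = J^T g J with J the Jacobian of the coordinate change; the example, pulling the Euclidean metric back to polar coordinates, is my own, not the site's:)

    [code]
    import numpy as np

    # Chain-rule transformation of a second rank covariant tensor:
    # g'_ab = sum over j,k of g_jk (dx^j/dx'^a)(dx^k/dx'^b),  i.e.  g' = J^T g J.
    r, theta = 2.0, np.pi / 3

    # Jacobian of the map (r, theta) -> (x, y) = (r cos theta, r sin theta)
    J = np.array([[np.cos(theta), -r * np.sin(theta)],
                  [np.sin(theta),  r * np.cos(theta)]])

    g_cartesian = np.eye(2)          # g_jk = delta_jk in Cartesian coordinates
    g_polar = J.T @ g_cartesian @ J

    print(g_polar)                   # ~ [[1, 0], [0, r**2]]: dr^2 + r^2 dtheta^2
    [/code]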

    Perhaps the confusion is that I was referring to the appearance of a (symmetric) tensor in a given coordinate system, and your sources emphasize the way these representations change, under change of coordinates.

    Unfortunately many sources emphasize the appearance or representation of tensors rather than their conceptual meaning. The essential content of a tensor (at a point) is its multilinearity.

    Globalizing them leads to a family or "field" of such objects at different points, and then to the necessity of changing coordinates, at which point the way in which they change appearance under coordinate change becomes of interest.

    It seems odd to me at least to define them this way however. On the other hand, you are right that some features of tensors, or even vectors, are invisible except when one changes coordinates.

    E.g. a vector and a covector at a point both look like an n tuple of numbers, but when you change coordinates one changes by the transpose of the matrix changing the other.

    Of course conceptually they differ even at a point, as one is a tangent vector and one is a linear form acting on tangent vectors.

    Do you buy any of this?
     
  10. Aug 10, 2004 #9
    Equation (1) contains the components of the metric tensor. The components in that particular case were all zero except for g11 and g22, which equal 1.

    I think you're confusing the tensor with the expression its components appear in. A general tensor is a geometric object which is a linear function of its variables and which maps them into scalars. For example: let g be the metric tensor. It's a function of two vectors. The boldface notation represents the tensor itself and not components in a particular coordinate system. An example of this would be the magnitude of a vector.

    [tex]A^2 = \mathbf{g}(\mathbf{A}, \mathbf{A})[/tex]

    When you represent the vectors in a basis and use the linearity of the tensor, you get the usual expression in terms of components.
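
    (A small numpy sketch of this point of view; the Euclidean dot product here is just an illustrative stand-in for g:)

    [code]
    import numpy as np

    # The tensor g is the bilinear function itself; the numbers g_jk are just
    # its values on a chosen basis, g_jk = g(e_j, e_k).
    def g(A, B):
        return float(np.dot(A, B))          # illustrative metric: the dot product

    basis = np.eye(3)
    g_jk = np.array([[g(basis[j], basis[k]) for k in range(3)] for j in range(3)])

    A = np.array([1.0, 2.0, 3.0])
    B = np.array([-1.0, 0.5, 2.0])

    # bilinearity recovers the usual component expression g_jk A^j B^k
    print(g(A, B), float(np.einsum('jk,j,k->', g_jk, A, B)))   # the same number
    [/code]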

    There are two ways of looking at tensors. I've been meaning to make a new web page to emphasize the geometric meaning but am unable to do so at this time. Plus I'm still thinking of the best way to do that.

    The terms "covariant" and "contravariant" can have different meanings depending on how they are used. For example: a little-mentioned notion is that a single vector can have both covariant and contravariant components. For details please see

    http://www.geocities.com/physics_world/co_vs_contra.htm

    Some of it.

    Pete
     
  11. Aug 10, 2004 #10

    mathwonk


    Pete, let me try again. We may be getting closer together here.

    An expression like summation gjk dx^j dx^k, as on the site you referenced, is a covariant tensor of second rank, because it is a second degree homogeneous polynomial in the expressions dx^j, dx^k, which are themselves covariant tensors of first rank. I.e. it is of rank 2 because there are two of them multiplied together. This results in a bilinear operator, which is linear in each variable separately.

    Now WHICH second rank tensor it is, is determined by what the coefficients are, or the "components" if you like, namely the gjk.

    The usual metric tensor on the euclidean plane is dx^1 dx^1 + dx^2 dx^2, so the only non-zero components are g11 = 1, g22 = 1.

    But there are many other riemannian metrics given by other choices of the gjk.
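
    (For instance, here is an illustrative numpy sketch of one such choice, the hyperbolic upper-half-plane metric with gjk = (delta jk)/y^2; the example is mine, not from the referenced site:)

    [code]
    import numpy as np

    # A different choice of g_jk gives a different Riemannian metric:
    # the hyperbolic metric (dx^2 + dy^2) / y^2 at the point (x, y).
    def hyperbolic_g(point):
        x, y = point
        return np.eye(2) / y**2

    u = np.array([1.0, 0.0])

    # the same tangent vector has different squared lengths at different points
    print(u @ hyperbolic_g((0.0, 1.0)) @ u)   # 1.0  at height y = 1
    print(u @ hyperbolic_g((0.0, 2.0)) @ u)   # 0.25 at height y = 2
    [/code]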

    I have tried to explain the conceptual idea of tensors at greater length in some other threads in this forum, also with the word "tensor" in them. Let me know what you think of them.

    Brutally, if T is the tangent bundle and T^ the dual bundle of a manifold, then sections of T are contravariant tensors of rank one, and sections of T^ are covariant tensors of rank 1.

    A section of (T^ tensor T^), where this is the bundle of tensor products of the dual spaces, is a covariant tensor of second rank, such as a metric.

    If we consider only one tangent space isomorphic to R^n, its dual has basis dx^1, ..., dx^n, and the second tensor product of the duals has basis dx^j dx^k, for all j, k (where the order matters).

    Any second rank tensor can be expressed in terms of this basis, and the coordinates or coefficients or components are called gjk.

    When we change coordinates, we get a new standard basis for the second tensor product and hence the coefficients of our basis expansion change. I.e. the gjk change into some other matrix-valued function g'jk, in the way specified.

    To define a tensor by saying how the components transform instead of what bundle it is a section of, is like defining a duck by the way it walks. Of course we all know the old saying: if it transforms like a tensor, then it is a tensor.

    peace,

    roy
     
    Last edited: Aug 10, 2004
  12. Aug 10, 2004 #11

    mathwonk


    Dear competing tensor advocates.

    I just checked out the site recommended above by Tom McCurdy in post 3.

    http://en.wikipedia.org/wiki/Tensor


    It tries to discuss both viewpoints on tensors and even relate them. I recommend it also. It seems from that discussion that my viewpoint is the so-called modern one.


    "The modern (component-free) approach views tensors initially as abstract objects, expressing some definite type of multi-linear concept. Their well-known properties can be derived from their definitions, as linear maps or more generally; and the rules for manipulations of tensors arise as an extension of linear algebra to multilinear algebra."

    I still like my duckwalk joke, though.
     
  13. Aug 11, 2004 #12
    Why do you call the summation a second rank tensor? It is not. gjk is a tensor of rank two. The differentials dxk are tensors of rank one. The summation is a contraction of a tensor of rank two with two tensors of rank one giving a tensor of rank zero.
    No. They are not multiplied together. They are summed. That is a huge difference.

    I recommend learning how to use subscripts and superscripts on this forum. That way I can tell if you're using them or not. I don't see why you're referring to the differentials as a basis. A basis consists of vectors, not components like dx^j.

    Pete
     
  14. Aug 11, 2004 #13

    mathwonk


    Pete,

    Let me try once again. I could be wrong of course, but I am not convinced by what you are saying. I am confident however that the miscommunication is due either to my ignorance, which hopefully is curable, or to a difference in language about tensors.


    Now first of all you say that gjk is a tensor (subscript). To me this is like saying an n tuple of numbers is a vector. Just as an n tuple of numbers represents a vector, in terms of some basis, so also a matrix like (gjk) (subscripts) represents a tensor in terms of a basis.

    Now if I denote by {..., ej, ...} (sub) a basis of tangent vectors, then an n tuple of numbers like (..., aj, ...) represents

    the vector: summation ajej.

    Here it does not matter whether the indices are up or down, because we know what the objects are, namely, the ej are tangent vectors, so we know how they transform, whether I indicate it by sub or super scripts. Also the aj are numbers, and I have written a summation. Einstein's convention as I recall it, is that one can save on writing summation notation, if one uses oppositely placed scripts to signal summation automatically. I am not doing this.

    Another convention is that subscripts are used, as for tangent vectors ej, to denote that they are classical contravariant vectors (that is, covariant in category theory), whereas superscripts are used, as in your elegantly written dx^j above, to denote classically covariant vectors, i.e. covectors.

    In modern terminology I believe this is achieved by saying that the ej are basic sections of the tangent bundle, whose variance is known to transform by the jacobian matrix, and saying the dx^j are (basic) sections of the cotangent bundle, or dual tangent bundle, whose sections are known to transform by the transpose of the jacobian matrix.

    (These opposite transformation laws are used on the second site you referred me to, to define contravariant and covariant vectors.)

    As to the word "basis", it is used by me in the sense of vector space theory. I.e. any space of objects closed under addition and scalar multiplication is called a vector space. E.g. a tensor space is also an example of a vector space, but I have refrained from using that term in that way in this post since it is not used that way by physicists, it seems.

    A basis for an abstract vector space is any collection of elements of the vector space, (e.g. if it is a basis of a tensor space, they will be tensors), such that every element of the space has a unique expression as a sum of scalar multiples of the given basis.


    E.g. in R^n, if we denote the standard basis vectors (0, ..., 0, 1, 0, ..., 0), with a 1 in the jth place, by the symbol ej (sub), then the collection e1,...,en is a basis for the tangent space to R^n, simply because every tangent vector (a1,...,an) can be written uniquely as

    summation: ajej.

    (Here I have committed the apparent contradiction of referring to an n tuple of numbers (a1,...,an) as a tangent vector. But that is because R^n is the one vector space in the whole world, whose vectors really are n tuples of numbers. The concept of a basis is a way to represent elements of other vector spaces as elements of R^n, i.e. as n tuples of numbers.)

    If I consider on the other hand the dual space (R^n)^ = linear maps from R^n to R, then this space is isomorphic to R^n, but the elements transform differently. One way to signal this would be to denote their coefficients by super scripts, but this is unnecessary, if we simply choose different symbols for them, such as dx^j.

    These symbols are well chosen, because the basic elements of the dual space are the differentials of functions, and the simplest (coordinate) functions on R^n are the functions x^j.

    (Thank you for your patience in bearing with what is no doubt extremely familiar to you. We may get somewhere yet however.)

    Thus a covector like dx^j acts on a vector like ek by dot product in terms of their coordinate representations, or more intrinsically, by noting that dx^j(ek) = Kronecker delta, delta jk. Now I agree this pairing or contraction is signaled by the fact that the scripts of the dx's are up and those of the e's are down.
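
    (A tiny Python sketch of that pairing, with each dx^j modeled as the map picking off the jth coordinate; the representation is illustrative:)

    [code]
    import numpy as np

    n = 3
    e = np.eye(n)                                   # tangent basis vectors e_k
    dx = [lambda v, j=j: v[j] for j in range(n)]    # dual basis covectors dx^j

    delta = np.array([[dx[j](e[k]) for k in range(n)] for j in range(n)])
    print(delta)          # the identity matrix: dx^j(e_k) = delta_jk
    [/code]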

    Now here is the source of the confusion for me in the equation (1) we were discussing.
    But I will postpone it a little longer to clarify further my use of notation, and "basis".

    In addition to the dual space T^ = (R^n)^ of R^n, whose elements are apparently rank 1 covariant (old terminology) tensors, there is another space formed by taking the tensor product of this space with itself,

    called T^ tensor T^,

    whose elements are rank 2 covariant tensors. This space may be defined as the space of all bilinear maps from TxT to R. As such it is an abstract vector space, although it would be a sin to call its elements "vectors" in a physics forum, because to a physicist that word is reserved for rank 1 tensors.

    Nonetheless T^ tensor T^ is a linear space, (that is a better word), and it has a basis, i.e. a set of elements such that all other elements can be written in terms of these.

    E.g. such a basis is given by the tensor products dx^j dx^k of the two rank 1 tensors dx^j and dx^k. In general, if f, g are rank 1 covariant tensors, and if v, w is a pair of contravariant vectors, then the value of (f tensor g) on (v, w) is f(v)g(w).

    Thus there is a tensor multiplication taking pairs of elements of T^ to one element of T^ tensor T^. Then it is a theorem, easily checked, that the set of products (dx^j)tensor(dx^k), for all pairs j,k, is a basis for the space of second rank covariant tensors.

    In particular every such tensor can be written in terms of these. Thus a general second rank covariant tensor would be written as

    summation gjk(sub if you like) dx^j dx^k, where I have omitted the tensor sign.

    (Thus you are right there is a summation here of the tensors gjk dx^j dx^k,

    but there is also a tensor product, of dx^j times dx^k.)

    In particular, the standard scalar product on euclidean space would be written as

    summation (kronecker delta jk) dx^j dx^k = dx^1 dx^1 + ... + dx^n dx^n.


    Such an object acts on pairs of tangent vectors and spits out a number. E.g. on the pair (a1,...,an),(b1,...,bn) = (summation ajej, summation bkek),

    it spits out of course summation ajbj.
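
    (Here is an illustrative Python sketch of exactly this: build (f tensor g)(v, w) = f(v)g(w) from covectors, sum up the dx^j dx^j, and recover the dot product:)

    [code]
    import numpy as np

    n = 3
    dx = [lambda v, j=j: v[j] for j in range(n)]    # dual basis covectors dx^j

    def tensor(f, g):
        # the rank 2 tensor (f tensor g), acting on a pair of vectors
        return lambda v, w: f(v) * g(w)

    def euclidean_metric(v, w):
        # summation over j of dx^j tensor dx^j, applied to (v, w)
        return sum(tensor(dx[j], dx[j])(v, w) for j in range(n))

    a = np.array([1.0, 2.0, 3.0])
    b = np.array([4.0, -1.0, 0.5])
    print(euclidean_metric(a, b), float(np.dot(a, b)))   # the same number
    [/code]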

    Thus I am interpreting the object in equation (1) at the referenced site as representing the second rank covariant tensor:

    summation gjk dx^j dx^k

    whose value on the pair of vectors (summation ajej, summation bkek),

    is the number (matrix product):

    (..., aj, ...) (gjk) (..., bk, ...)T = summation (over j,k) gjk aj bk.

    where here the T means transpose.


    Now we can also consider the tensor product of the tangent space with itself and get the space (T tensor T) of second rank contravariant vectors, for which a basis is given by the products {ej tensor ek}.

    Then we can consider that a covariant rank 2 vector like:

    summ gjk dx^j dx^k

    acts not on the pair (summation ajej, summation bkek),

    but rather on the contravariant vector, their product:

    summation ajbk (ej tensor ek).

    Then the value in coordinates is given by:

    summation(over j,k) ajbk gjk, a number.
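
    (In components this is a single contraction; the numbers below are arbitrary illustrations, chosen by me:)

    [code]
    import numpy as np

    g = np.array([[2.0, 1.0, 0.0],
                  [1.0, 3.0, 0.5],
                  [0.0, 0.5, 1.0]])            # components g_jk, chosen arbitrarily
    a = np.array([1.0, 0.0, 2.0])
    b = np.array([0.5, 1.0, -1.0])

    ab = np.einsum('j,k->jk', a, b)            # components a^j b^k of (a tensor b)
    print(float(np.einsum('jk,jk->', g, ab)))  # summation over j,k of g_jk a^j b^k
    print(float(a @ g @ b))                    # the same number, as a matrix product
    [/code]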


    Now I am trying to see how to possibly interpret equation (1) as you have done.

    E.g. if I represent a contravariant rank 2 vector:

    summation gjk ej tensor ek,

    simply by the matrix gjk,

    and represent the covariant tensor:

    summation dx^j tensor dx^j,

    by the same expression:

    summation dx^j tensor dx^j,


    then I suppose I could believe that the equation (1) represents the number

    obtained by evaluating the covariant 2 tensor:

    summation dx^j tensor dx^j,

    on the contravariant 2 tensor:

    summation gjk ej tensor ek.

    This would violate two principles I hold sacred: first the symbols gjk are never used for contravariant tensors, but always for covariant tensors.

    second and more important: the object being represented there is not a number as you say, but a tensor, the metric tensor. They say so right on the site. (let me check that and get back to you.)

    Anyway I appreciate your sincere and patient attempt to communicate with me.

    We are struggling since it seems you apparently speak primarily "indices" and I speak only "index free", so if someone with dual language capabilities would jump in, it might help, but maybe we are doing as well as can be expected.

    OK, I have been back to your site, and to my mind it confirms what I have been saying.

    E.g. it says there that the expression G (bold) = summation gjk dx^j dx^k is a tensor, whose components are the numbers gjk. (The matrix gjk is therefore not itself a tensor.)

    From the basis point of view, this means that this tensor G is written as a linear combination of the basic tensors dx^j dx^k, using the components or coefficients gjk.

    He does not say it there, but those basic tensors themselves are products of the rank 1 tensors dx^j and dx^k.

    This is fun, and I hope we are helping each other. I know I appreciate your patience with my ignorance of common longstanding notation and practice in physics.

    best regards,

    roy
     
    Last edited: Aug 11, 2004
  15. Aug 11, 2004 #14
    I'm sorry, but due to major back problems it is impossible for me to sit down and read such a long post. I'll have to spend some time reading and absorbing what you wrote bit by bit, due to the short amounts of time I can sit in front of the computer.

    In the meantime can you please find and post a reference to where you've seen/learned the definition(s) which you hold to be true.
    Thanks

    Pete
     
    Last edited: Aug 11, 2004
  16. Aug 11, 2004 #15

    mathwonk


    These are the definitions I believe to have been standard in mathematical treatments of differential geometry since the 1960's, for example in Michael Spivak's little book Calculus on Manifolds, or his large treatise Differential Geometry. I will look for an internet source, but I suspect the one on wikipedia would suffice.

    http://en.wikipedia.org/wiki/Tensor

    I will check it out more carefully.

    I sympathize with the back problems as I also have them. Mine are helped by sitting only in an old captain's chair I inherited from my grandfather, but there must be others out there.
     
    Last edited: Aug 11, 2004
  17. Aug 11, 2004 #16

    mathwonk


    Pete,

    Here is a quote from wikipedia:

    "There are equivalent approaches to visualizing and working with tensors; that the content is actually the same may only become apparent with some familiarity with the material.

    * The classical approach
    The classical approach views tensors as multidimensional arrays that are n-dimensional generalizations of scalars, 1-dimensional vectors and 2-dimensional matrices. The "components" of the tensor are the indices of the array. ......


    * The modern approach
    The modern (component-free) approach views tensors initially as abstract objects, expressing some definite type of multi-linear concept. Their well-known properties can be derived from their definitions, as linear maps or more generally; and the rules for manipulations of tensors arise as an extension of linear algebra to multilinear algebra.

    This treatment has largely replaced the component-based treatment for advanced study, in the way that the more modern component-free treatment of vectors replaces the traditional component-based treatment after the component-based treatment has been used to provide an elementary motivation for the concept of a vector. You could say that the slogan is 'tensors are elements of some tensor space'.

    * The intermediate treatment of tensors article attempts to bridge the two extremes, and to show their relationships."


    Here is the link for the intermediate article, but it is pretty sketchy.

    http://en.wikipedia.org/wiki/Intermediate_treatment_of_tensors

    One of the specific examples wikipedia cites of a tensor, is a homogeneous polynomial of degree two. There is also a short tutorial on this forum, in the thread "Math Newb wants to know whats a tensor", by chroot, where the scalar product is cited as an example of a rank 2 covariant tensor, not rank zero, (actually he calls it type (0,2), or rank 2 covariant and rank zero contravariant).

    I also gave an explicit calculation in the thread "tensor product" that shows how to use the modern definition of a tensor to compute the tensor product as matrices.

    If we get together on this, it will be a major success for both of us, but not worth a backache.

    best regards,

    roy
     
    Last edited: Aug 11, 2004
  18. Aug 11, 2004 #17
    Hi again

    Yep. Back problems suck big time. Due to sitting and typing too long I had to be rushed to the hospital in an ambulance. I've experienced levels of pain (from sitting here typing too long) that I never knew even existed, and I know pain, since I've had 8 bone marrow biopsies. I have no intention of letting that happen again.

    Meanwhile, if you happen to have A Short Course in General Relativity, J. Foster, J.D. Nightingale, then see section 1.10. It will clarify the difference between contravariant vectors and 1-forms (aka covariant vectors). In Euclidean space you can get away with ignoring the difference, but not in general. A 1-form maps vectors to real numbers. They are distinct objects from vectors (but they are related). That is why I have been emphasizing the placement of the indices. In some areas, such as mechanics, one uses only Cartesian tensors and this distinction never arises. A Cartesian tensor is an example of an affine tensor, which is different from, but somewhat similar to, a tensor.
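
    (A small numpy sketch of that covariant-vs-contravariant components point, in a deliberately skewed basis of my own choosing, not Foster and Nightingale's: the contravariant components are the expansion coefficients, the covariant ones are the dot products with the basis vectors, and the metric gjk = ej . ek converts between them:)

    [code]
    import numpy as np

    e = np.array([[1.0, 0.0],
                  [1.0, 1.0]])            # a non-orthonormal basis e_1, e_2 (rows)
    v = np.array([3.0, 2.0])              # a vector, in ordinary Cartesian terms

    g = e @ e.T                           # metric components g_jk = e_j . e_k
    v_contra = np.linalg.solve(e.T, v)    # v^j, defined by v = v^j e_j
    v_co = e @ v                          # v_j = v . e_j

    print(v_contra, v_co)                 # two different component lists
    print(g @ v_contra, v_co)             # v_j = g_jk v^k: they agree
    [/code]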

    Note that it is common practice to use what Foster and Nightingale phrase as follows on page 45
     
  19. Aug 11, 2004 #18

    mathwonk


    Hi again Pete,

    I admire your tenacity in this forum! It inspires me after my own last week's "procedure" near the area I use for sitting.

    I discovered that this whole hash about competing languages has already been discussed at unbelievable length in the thread "intro to differential forms", started by lethe, and subsequently largely deleted by him, over a flap about use of more informal language.

    Check out posts #50 and following in that thread to see some of the same discussions we have been having about up or down indices. Of course they were talking about the case of anticommutative covariant tensors, or differential p-forms, rather than general tensors.

    It appears the original tutorial posted by lethe still exists at another site, namely

    http://www.sciforums.com/showthread.php?t=20843&page=1&pp=20


    He gives the full monty discussion there of the language I was advocating above. In particular, he does define general tensors on the way to defining skew commutative ones.

    And thanks for the reference to Foster and Nightingale.

    Maybe I can still learn some relativity!

    best wishes,

    roy
     
  20. Aug 11, 2004 #19

    mathwonk


    PS

    It seems one confusion is that although (the components of) a covariant tensor are written with subscripts, the basic co-tensors themselves are apparently written with superscripts.

    i.e. gjk for the components, as opposed to dx^j dx^k for the basic guys.

    E.g. the basic tensor dx^1dx^1 written in components,

    would just be the matrix (gjk),

    where all the gjk equal zero except g11 = 1.

    Similarly, the coordinates (components) of a contravariant tensor are written with superscripts, and the indices on the actual basic tensors are written as subscripts.

    i.e. a^j as opposed to ej.

    That way, when we conjoin the components of a contra-tensor with the components of a co-tensor, it does mean to contract and get a number,

    e.g. summ gjk h^(jk)



    But when we conjoin the components gjk, say, of a co-tensor with the symbols for the basic co-tensors, it only means to sum them up as numbers times co-tensors, so it is still a co-tensor.

    e.g. summ gjk dx^j dx^k is a rank 2 co-tensor.


    So here are two versions of the same object

    classical covariant 2 tensor: gjk

    modern version of same covariant 2 tensor: summation gjk dx^j dx^k.


    classical version of contravariant 2 tensor h^(jk)

    modern version of same contravariant 2 tensor: summation h^(jk) ej ek.


    Then (summation gjk dx^j dx^k) acts on: (summation h^(jk) ej ek),

    by contracting their components: summation gjk h^(jk). this is a number.

    How does this seem?
     
  21. Aug 11, 2004 #20

    mathwonk


    PPS:
    In some sense the modern point of view has made the wonderful contribution of doubling the number of indices!!

    I.e. the modern way of writing that last contraction would be:

    (summation gjk dx^j dx^k) (summation h^(rs) er es)

    = big summation (gjk dx^j dx^k) (h^(rs) er es)

    = big summation (gjk h^(rs)) (dx^j dx^k)(er es)

    = big summation (gjk h^(rs)) (dx^j(er))(dx^k(es))

    = big summation (gjk h^(rs)) kronecker^(jk),(rs)

    [this last because dx^jdx^k (ej ek) = 1, and all other pairings

    dx^jdx^k (er es) are zero]

    = summation gjk h^(jk).

    But no one would often do this by hand, I hope.
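
    (Except perhaps numerically, as a sanity check; here is an illustrative einsum verification, with arbitrary components of my own, that the term-by-term pairing above collapses to the direct contraction:)

    [code]
    import numpy as np

    n = 3
    g = np.random.rand(n, n)     # components g_jk of a covariant 2 tensor
    h = np.random.rand(n, n)     # components h^rs of a contravariant 2 tensor
    delta = np.eye(n)            # dx^j(e_r) = delta_jr

    # big summation over j,k,r,s of g_jk h^rs dx^j(e_r) dx^k(e_s):
    term_by_term = np.einsum('jk,rs,jr,ks->', g, h, delta, delta)
    direct = np.einsum('jk,jk->', g, h)   # summation of g_jk h^jk

    print(np.allclose(term_by_term, direct))   # True
    [/code]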



    I admit that if one understands the indices on the components, there is never any need for the basic tensors, but to an index-free guy like me it seems almost to throw out the baby and keep the bathwater.

    I admit too the indices are too complicated for me. I even wrote an algebra book once, including a treatment of tensors in a coordinate-free way, and actually worked out the tensor product of matrices as a consequence of these definitions, but it was a terrifying experience.

    Just for laughs, I confess that to me the tensor product, A tensor (blank), is actually "the unique right exact functor on R-modules that commutes with direct sums and takes value A on the field R of real numbers", but I would not readily say that here if we weren't good friends by now!

    Perhaps that reveals why I am having such a hard time understanding classical tensors though.

    Peace,

    roy
     