# What is a tensor?

1. Mar 19, 2005

### Imo

I take Vector Calculus, where we simply deal with up to three dimensional Euclidean space, but I've been reading various physics books and they often talk about tensors. So far, I haven't been provided with a satisfactory explanation. Well quite frankly, I've got absolutely no clue what they are.

Anyone care to spend the time to explain (in the simplest possible terms) what a tensor is?

Thank you.

2. Mar 19, 2005

### physmurf

3. Mar 20, 2005

### Rev Prez

As far as you're concerned, a tensor unifies and generalizes the concepts of scalar, vector and matrix--that is, scalars (zero indices), vectors (one index) and matrices (two indices) are tensors of rank 0, 1, and 2 respectively.

Rev Prez

4. Mar 20, 2005

### chroot

Staff Emeritus
5. Mar 20, 2005

### jcsd

The simplest explanation I can think of is that vectors are rank 1 tensors and have x, y and z components; rank 2 tensors have xx, xy, xz, yx, yy, yz, zx, zy, and zz components; rank 3 tensors have xxx, xxy, xxz, xyx, etc., components; and so on. Additionally, to be a tensor the components must transform in a certain way, just as a mathematical object with an x, y and z component isn't a vector unless it transforms like a vector.

Now I'll add that this is a gross simplification: it ignores the covariant/contravariant issue, uses specific coordinate systems (though the whole point of tensors is that they describe everything independently of any coordinate system), talks about 3-dimensional spaces only (though the generalization to higher-dimensional spaces should be obvious), and defines tensors in terms of their components, which is not the best way to define them.
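As an aside, the transformation rule jcsd mentions can be checked numerically. A minimal NumPy sketch (the tensor `T`, the rotation angle, and the vector `v` are made-up examples, not from the thread): a vector picks up one copy of the rotation matrix, a rank 2 tensor picks up two, and any scalar built from them, such as the trace, comes out unchanged.

```python
import numpy as np

# A made-up rank-2 tensor in 3-D: 9 components T[i][j] (xx, xy, ..., zz).
T = np.array([[1.0, 2.0, 0.0],
              [2.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])

# A rotation about the z-axis by an arbitrary angle.
theta = 0.3
c, s = np.cos(theta), np.sin(theta)
R = np.array([[c, -s, 0.0],
              [s,  c, 0.0],
              [0.0, 0.0, 1.0]])

v = np.array([1.0, 0.0, 0.0])

# A vector transforms with one copy of R; a rank-2 tensor with two:
v_rot = R @ v        # v'_i = R_ij v_j
T_rot = R @ T @ R.T  # T'_ij = R_ik R_jl T_kl

# Scalars built from tensors are coordinate-independent, e.g. the trace:
print(np.isclose(np.trace(T), np.trace(T_rot)))  # True
```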

6. Mar 20, 2005

### mathwonk

vectors are to tensors as linear polynomials are to higher degree polynomials, except that with tensors multiplication is also allowed to be non-commutative.

for example the dot product, a second degree polynomial in the entries of the two vector arguments, is a symmetric (i.e. commutative) tensor.

7. Mar 21, 2005

### rdt2

Rev Prez probably gets the right level for you at the moment. This is from my own lecture notes. I've sacrificed some rigour to allow the idea to sink in more easily:

· a scalar is a magnitude with no direction in space

· a vector has a component scalar in each direction in space

· a (2nd order) tensor has a component vector in each direction in space.

We might then ask whether there are physical quantities which have a component tensor in each direction. These exist and are called higher-order tensors. Indeed there is a complete set of such quantities in which scalars, vectors and tensors such as stress are merely the first three elements. All of the previous definitions might then be reduced to the single statement:

· a tensor of order or rank N has a component tensor of order (N-1) in each direction in space.

We might then expect the bottom of the hierarchy, i.e. scalars, to be called First Order, while in fact they are called Zero’th Order. To appreciate why, consider each quantity in subscript notation:

· a scalar s would be written s with no subscript

· a vector v would be written v_i with one subscript

· a second-order tensor t would be written t_ij with two subscripts

and, in our notation, with a corresponding number of underlines. So, the order of the tensor matches the number of subscripts and the number of underlines. Furthermore, in 2-D:

· a scalar has 2^0 = 1 scalar component

· a vector has 2^1 = 2 scalar components

· a second order tensor has 2^2 = 4 scalar components.

An Nth-order tensor T_ijk... with N subscripts would have n^N scalar components in n-D space.
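The counting rule above is just exponentiation, which a two-line sketch makes concrete (the helper name `num_components` is mine, for illustration):

```python
# Number of scalar components of an order-N tensor in n-D space: n**N.
def num_components(n, N):
    return n ** N

print(num_components(2, 0))  # 1: a scalar
print(num_components(2, 1))  # 2: a vector in 2-D
print(num_components(2, 2))  # 4: a second-order tensor in 2-D
print(num_components(3, 2))  # 9: e.g. the stress tensor in 3-D
```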

8. Mar 21, 2005

### marlon

Last edited by a moderator: Apr 21, 2017
9. Mar 21, 2005

### mathwonk

short course on tensors:

let elements of R^2 be called vectors, and write them as length 2 column vectors.

then we can define "covectors" as row vectors of length 2. then row vectors may be considered as linear functions on vectors, i.e. as linear functions f:R^2-->R.

some people call vectors, tensors of rank 1 and type (1,0), and call covectors, tensors of rank one and type (0,1).

now consider two covectors f and g. they yield a bilinear function f(tensor)g of two vector variables, by multiplication, i.e. f(tensor)g (v,w) = f(v)g(w), a number.

this product is not commutative since g(tensor)f (v,w) = g(v)f(w).

for the same reason it gives a different answer when applied to (w,v), as compared to when applied to (v,w).

some people call f(tensor)g a rank 2 tensor of type (0,2).

if we add up several such products, e.g. f(tensor)g + h(tensor)k, we still have a bilinear function of two vector variables, hence another rank 2 tensor of type (0,2).
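mathwonk's f(tensor)g construction can be played with directly in NumPy; here is a small sketch (the particular covectors and vectors are invented for the example). A covector is a row vector, `f @ v` is the number f(v), and the tensor product is just the product of two such numbers, which makes the non-commutativity obvious:

```python
import numpy as np

# Covectors as row vectors, i.e. linear functions on R^2.
f = np.array([2.0, 0.0])   # f(v) = 2*v1
g = np.array([0.0, 3.0])   # g(w) = 3*w2

def tensor(f, g):
    """Return the bilinear function (f tensor g)(v, w) = f(v) * g(w)."""
    return lambda v, w: (f @ v) * (g @ w)

v = np.array([1.0, 2.0])
w = np.array([4.0, 5.0])

fg = tensor(f, g)
gf = tensor(g, f)
print(fg(v, w))   # f(v)g(w) = 2 * 15 = 30
print(gf(v, w))   # g(v)f(w) = 6 * 8  = 48 -- the product is not commutative
```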

now we could consider also a product v(tensor)f, of a vector and a covector. if we apply this to a vector w we get a vector: namely v times the scalar f(w).

again a sum of such things is another: v(tensor)f + u (tensor)g, applied to w is

f(w) times v + g(w) times u.

some people call such a thing a rank 2 tensor of type (1,1).

since as a function on the vector w, this object is linear, it could be represented as a 2 by 2 matrix, whose columns were the vector values taken by this function at the standard basis vectors (1,0) and (0,1).
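The matrix representation of a type (1,1) tensor v(tensor)f is exactly the outer product of the column v with the row f; a quick NumPy check (numbers invented for the example):

```python
import numpy as np

v = np.array([1.0, 2.0])   # a vector (column)
f = np.array([3.0, 4.0])   # a covector (row)

# Matrix of the type-(1,1) tensor v (tensor) f: columns are the values
# the linear map takes at the standard basis vectors.
M = np.outer(v, f)

w = np.array([5.0, 6.0])
print(M @ w)               # equals f(w) * v = 39 * [1, 2] = [39, 78]
print((f @ w) * v)         # the same vector computed directly
```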

the ordinary dot product is a tensor of rank 2 and type (0,2), since it takes two vectors and gives out a number, and is bilinear.

i.e. if f is the linear function taking the vector (x,y) to x, and g is the linear function taking the vector (x,y) to y, then the dot product equals f(tensor)f + g(tensor)g.

I.e. applied to (v,w), where v = (v1,v2) and w = (w1,w2) are vectors, it gives us v1w1 + v2w2. thus it is symmetric.
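The claim that the dot product equals f(tensor)f + g(tensor)g can be verified componentwise: as a matrix of components the sum is just the identity, so sandwiching it between two vectors reproduces v1w1 + v2w2. A NumPy sketch (sample vectors invented):

```python
import numpy as np

# f takes (x, y) to x; g takes (x, y) to y.
f = np.array([1.0, 0.0])
g = np.array([0.0, 1.0])

# Components of the type-(0,2) tensor f(tensor)f + g(tensor)g.
G = np.outer(f, f) + np.outer(g, g)   # works out to the 2x2 identity

v = np.array([1.0, 2.0])
w = np.array([3.0, 4.0])
print(v @ G @ w)      # 1*3 + 2*4 = 11, the ordinary dot product
print(np.dot(v, w))   # 11 again; G is symmetric, so (w, v) gives the same
```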

the reason for distinguishing the vector variables from the covector variables, is that under a linear transformation T:R^2-->R^2, the vectors transform by v goes to Tv, and the covectors transform by f goes to fT, i.e. the multiplication occurs on the other side.

or if you insist on writing a row vector, i.e. a covector as a column vector, then you must multiply it (from the left) by T* = transpose of T.
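That "multiplication on the other side" rule is easy to see numerically; a sketch with an arbitrary invented transformation T:

```python
import numpy as np

T = np.array([[2.0, 1.0],
              [0.0, 3.0]])   # some linear transformation of R^2

v = np.array([1.0, 2.0])     # a vector
f = np.array([4.0, 5.0])     # a covector

# Vectors transform as v -> T v; covectors as f -> f T (the other side).
v_new = T @ v
f_new = f @ T

# Written as a column, the covector is instead multiplied by the transpose:
print(np.allclose(f_new, T.T @ f))  # True
```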

multiplying more vectors and covectors together, and adding up, gives higher rank tensors.

Last edited: Mar 21, 2005
10. Mar 21, 2005

### mathwonk

what i have described are tensor spaces "at a point". just as the tangent space to a sphere, say at one point, is a vector space, so also there are tensor spaces at every point of the sphere.

then just as we can consider families of tangent vectors, or tangent fields, we can consider tensor fields, one tensor at each point.

was that simple enough for you? that's about as simple as i can make it and still be correct.

Last edited: Mar 21, 2005
11. Mar 21, 2005

### dextercioby

The geometric approach to tensors is very familiar & really nice. I would like to mention an approach that us physicists love: group theory & representations. SO(3) and SO(3,1), to give examples.

Daniel.

12. Mar 22, 2005

### Imo

Thanks for everyone's reply, especially those that went down to simplest terms. It's gonna take me a while to go through it all, so I might have comments later (about a week...I've got a hell of a lot of work to do). Thank you again all.

13. Mar 27, 2005

### pmb_phy

Pete

14. Apr 5, 2005

### Starship

Ok so a metric tensor can be written this way:

$$OP^2 = (V_1)^2 + (V_2)^2$$

Hidden in this equation is the metric tensor. It is hidden because here it consists of 0's and 1's that are not written in.

If the equation is rewritten in the form

$$OP^2 = 1(V_1)^2 + 0V_{1}V_{2} + 0V_{2}V_{1} + 1(V_2)^2$$

the full set of components (1,0,0,1) of the metric tensor is apparent.

If oblique coordinates are used, the formula for $$OP^2$$ takes the more general form

$$OP^2 = g_{11}(V_1)^2 + g_{12}V_{1}V_{2} + g_{21}V_{2}V_{1} + g_{22}(V_2)^2$$

the quantities $$g_{11}, g_{12}, g_{21}, g_{22}$$ being the new components of the metric tensor.
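Starship's oblique-coordinate formula can be checked numerically. In an oblique basis the metric components are the dot products of the basis vectors, g_ij = e_i . e_j, and the quadratic form g_ij V_i V_j reproduces the squared length computed directly. A sketch (the 60-degree basis and the components V are invented for illustration):

```python
import numpy as np

# An oblique basis in the plane: e1 along x, e2 at 60 degrees to it.
e1 = np.array([1.0, 0.0])
e2 = np.array([np.cos(np.pi / 3), np.sin(np.pi / 3)])

# Metric components g_ij = e_i . e_j
E = np.array([e1, e2])
g = E @ E.T                # [[1, 0.5], [0.5, 1]]

# A point P with oblique components (V1, V2):
V = np.array([2.0, 3.0])

# g_11 V1^2 + g_12 V1 V2 + g_21 V2 V1 + g_22 V2^2 ...
OP_sq_metric = V @ g @ V
# ... versus the squared length of V1*e1 + V2*e2 computed in Cartesian form:
P = V[0] * e1 + V[1] * e2
OP_sq_direct = np.dot(P, P)

print(np.isclose(OP_sq_metric, OP_sq_direct))  # True
```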

Last edited: Apr 5, 2005