Linear transformations as tensors

1. Feb 26, 2012

The1337gamer

I was looking at this table here: http://en.wikipedia.org/wiki/Tensor#Examples

I didn't understand why a (1,1) tensor is a linear transformation. I was wondering if someone could explain this.

A (1,1) tensor takes a vector and a one-form to a scalar.
But a linear transformation takes a vector to a vector.

Thanks.

2. Feb 26, 2012

dx

A mixed (1,1) tensor can also take a vector to a vector, or a covector to a covector, by contracting just one of its slots:

$$A^{u}{}_{v}\,\zeta^{v} = \gamma^{u}$$
$$A^{u}{}_{v}\,\omega_{u} = \kappa_{v}$$
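As a quick numerical sketch of these two contractions (the matrix and the vectors below are made-up example data, not from the thread), a (1,1) tensor on R³ is just a matrix, and `numpy.einsum` performs each contraction explicitly:

```python
import numpy as np

# A (1,1) tensor A^u_v on R^3, stored as a matrix (hypothetical example data).
A = np.array([[2.0, 0.0, 1.0],
              [0.0, 3.0, 0.0],
              [1.0, 0.0, 4.0]])

zeta = np.array([1.0, 2.0, 3.0])    # a vector zeta^v
omega = np.array([1.0, 0.0, -1.0])  # a covector omega_u

# Contract the lower index with a vector: A^u_v zeta^v = gamma^u (a vector)
gamma = np.einsum('uv,v->u', A, zeta)

# Contract the upper index with a covector: A^u_v omega_u = kappa_v (a covector)
kappa = np.einsum('uv,u->v', A, omega)

print(gamma)  # same as A @ zeta
print(kappa)  # same as omega @ A
```

So the same array of components acts on vectors from one side and on covectors from the other, which is exactly the point of the two equations above.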

3. Feb 26, 2012

The1337gamer

I thought a bilinear form was the tensor product of 2 one-forms/linear functionals, so it would take two vectors to a scalar.

4. Feb 26, 2012

Office_Shredder

Staff Emeritus
If your form is T(u,v), where u is a vector and v is a covector, consider the function f(u) = T(u,−). Then f(u) is a function which, when you plug in a covector, gives you a scalar. Those are exactly what vectors are (functionals on covectors, i.e. elements of the double dual).

5. Feb 28, 2012

mathwonk

Assume all spaces are finite dimensional.

By plugging into one variable at a time, you see that a bilinear map from V×W to U is the same as a linear map from V to the space of linear maps from W to U.

I.e. Bil(V×W, U) ≈ Lin(V, Lin(W, U)).

Write V* for the space of linear maps from V to k, where k is the scalar field. For finite dimensional spaces, Lin(Lin(V,k), k) = V** ≈ V.

Essentially by definition of tensors, V*⊗W* ≈ (V⊗W)* ≈ Bil(V×W, k).

Thus V*⊗W ≈ Bil(V×W*, k) ≈ Lin(V, W**) ≈ Lin(V, W).

I.e. if f⊗w is an elementary (1,1) tensor in V*⊗W, then it maps the vector v in V to the vector f(v)·w in W. A general (1,1) tensor is a sum of such elementary ones, and maps v to the corresponding sum of vectors in W.
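The last step is easy to check numerically (the covectors f_i and vectors w_i below are made-up example data): a sum of elementary tensors f⊗w acts on v as Σ f_i(v)·w_i, and this agrees with the matrix built from the corresponding outer products.

```python
import numpy as np

# Elementary (1,1) tensors f_i (tensor) w_i: each f_i a covector, each w_i a
# vector. Values are hypothetical, chosen for illustration.
f1, w1 = np.array([1.0, 0.0, 2.0]), np.array([1.0, 1.0, 0.0])
f2, w2 = np.array([0.0, 3.0, 0.0]), np.array([0.0, 2.0, 1.0])

v = np.array([1.0, 2.0, 3.0])

# Action as a sum of elementary tensors: v -> sum_i f_i(v) * w_i
Tv = (f1 @ v) * w1 + (f2 @ v) * w2

# The same (1,1) tensor assembled as a single matrix of outer products w_i f_i:
Amat = np.outer(w1, f1) + np.outer(w2, f2)

print(Tv, Amat @ v)  # the two actions agree
```

This is the concrete content of the isomorphism V*⊗W ≈ Lin(V, W): every matrix is a sum of rank-one pieces, and each rank-one piece is an elementary tensor.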