# Homework Help: I need help with tensors

1. Apr 4, 2010

### Storm Butler

1. The problem statement, all variables and given/known data
I just started trying to get into relativity using Schutz's book, but I've been getting a little confused by some of the tensor material. For starters, what does it mean to say a one-form maps vectors to real numbers? Is it an operation or a transformation? And when it maps to a real number, does that mean it loses information about direction?

Also, I don't understand some of the notation. I get that A with a superscript alpha is the alpha component of the vector, but then why are the basis vectors written with a subscript (is this the whole contravariant/covariant thing)? Two subscripts (alpha beta) next to one another I understand as just a matrix entry at row alpha, column beta. But I don't get it when something has an alpha and a beta with one on top and one on the bottom, and they aren't even in a vertical line; one is shifted from the other.

I tried looking up some of these questions in Penrose's The Road to Reality, but I got confused at the part where he gives a coordinate change X = x, Y = y + x, and then the partial with respect to X equals the partial with respect to x minus the partial with respect to y, while the partial with respect to Y equals the partial with respect to y (page 189). Sorry for the lack of math symbols; I don't really know how to add them in.
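[For reference, the Penrose example works out by the chain rule. Inverting the coordinate change X = x, Y = y + x gives x = X and y = Y − X, so

$$\frac{\partial}{\partial X}=\frac{\partial x}{\partial X}\frac{\partial}{\partial x}+\frac{\partial y}{\partial X}\frac{\partial}{\partial y}=\frac{\partial}{\partial x}-\frac{\partial}{\partial y},\qquad \frac{\partial}{\partial Y}=\frac{\partial x}{\partial Y}\frac{\partial}{\partial x}+\frac{\partial y}{\partial Y}\frac{\partial}{\partial y}=\frac{\partial}{\partial y}$$

since $\partial y/\partial X = -1$ and $\partial x/\partial Y = 0$.]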
Thanks for all the help anyone can offer.

This isn't homework, just something I'm doing on my own, but I figured this was the most appropriate place to put the question.

2. Apr 4, 2010

### tiny-tim

Hi Storm Butler!

(have an alpha: α and a beta: β and for indices, use the X² and X₂ tags just above the Reply box )
I don't see any difference between an operation and a transformation … a one-form is simply a linear function from the vector space to the real numbers.

And yes, it loses information about direction.
Components and vectors are completely different animals, and live in different spaces.

One lives upstairs, the other lives downstairs.
It's still a matrix entry … you just have to be more careful than usual about how you use it.

3. Apr 4, 2010

### Storm Butler

But what do you mean when you say that they are completely different animals? Does the notion of a vector's components being the projections of the vector onto the basis still apply, or is there a new definition? And if there is a matrix $M_{ab}$ and another matrix $M^{ab}$, what is the difference? And if a one-form is not a vector, then how is there a concept of components? What do you mean by "more careful", or rather, why do you have to be careful?

4. Apr 4, 2010

### Fredrik

Staff Emeritus
If V is a vector space over $\mathbb R$, its dual space V* is the set of linear functions from V into $\mathbb R$. The definitions

$$(x^*+y^*)(z)=x^*(z)+y^*(z)\quad\text{for all }z\in V$$
$$(ax^*)(z)=a\,(x^*(z))\quad\text{for all }z\in V\text{, all }a\in\mathbb R$$

give V* the structure of a vector space. So dual vectors are vectors too, because they're members of a vector space. I prefer not to call them "covectors" until we have defined V to be the tangent space at a point p of a manifold M. Then the members of V are called tangent vectors at p and the members of V* are called cotangent vectors at p, or covectors at p.
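As a minimal sketch of those definitions (not from Schutz; the vectors and the helper names `covector`, `add`, `scale` are made up for illustration), dual vectors on $\mathbb R^3$ can be modeled as plain functions from vectors to reals, with the vector-space operations defined pointwise exactly as above:

```python
def covector(row):
    """Build the linear functional v -> row . v (a member of V*)."""
    return lambda v: sum(r * x for r, x in zip(row, v))

def add(xstar, ystar):
    # (x* + y*)(z) = x*(z) + y*(z)
    return lambda z: xstar(z) + ystar(z)

def scale(a, xstar):
    # (a x*)(z) = a (x*(z))
    return lambda z: a * xstar(z)

xstar = covector([1.0, 2.0, 0.0])
ystar = covector([0.0, -1.0, 3.0])
z = [1.0, 1.0, 1.0]

print(add(xstar, ystar)(z))   # x*(z) + y*(z) = 3.0 + 2.0 = 5.0
print(scale(2.0, xstar)(z))   # 2 * 3.0 = 6.0
```

The point is that the sum and scalar multiple of two linear functionals are again linear functionals, which is what makes V* a vector space.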

I don't know what you mean when you ask "is it an operation or a transformation?". Aren't those terms synonymous to you? The word "operator" is sometimes reserved for linear functions from a vector space into the same vector space, so you might want to avoid that term. "Operation" is not a term I use. Linear functions from a vector space over a field F into F are often called "functionals", so you can use that one if you want.

Yes.

It's just a notational convention. It's a convenient way to see what type of tensor you're working with just by looking at the notation for its components.

Not really. It's a component of a tensor of type (0,2). That's a bilinear function from V×V into $\mathbb R$.

(Note that what some authors call a tensor of type (i,j), others call a tensor of type (j,i). I don't remember what Schutz calls it).

For example, $M_{ij}{}^k$ means

$$M(\vec e_i,\vec e_j,\tilde e^k)$$

where $$\tilde e^k$$ is the kth member of the dual basis to $$\{\vec e_i\}$$. The dual basis of a basis for V is a basis for V* defined by

$$\tilde e^i(\vec e_j)=\delta^i_j$$

So the positions of the indices on a component of this particular M lets you know that it's a multilinear function

$$M:V\times V\times V^*\rightarrow\mathbb R$$
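A concrete sketch of this bookkeeping, assuming NumPy (the component values and basis here are invented for illustration): store the components $M_{ij}{}^k$ in a 3-index array, evaluate M on two vectors and a covector by contracting each slot, and build a dual basis as a matrix inverse so that $\tilde e^i(\vec e_j)=\delta^i_j$:

```python
import numpy as np

# Hypothetical components, just to illustrate the index bookkeeping.
M = np.arange(8.0).reshape(2, 2, 2)   # M[i, j, k] plays M_ij^k

u = np.array([1.0, 2.0])      # components u^i of a vector
v = np.array([0.0, 1.0])      # components v^j of a vector
w = np.array([3.0, -1.0])     # components w_k of a covector

# M(u, v, w) = M_ij^k u^i v^j w_k : contract each slot with its argument.
value = np.einsum('ijk,i,j,k->', M, u, v, w)
print(value)

# Dual basis: if the columns of E are the basis vectors e_j, the rows of
# E^{-1} are the dual-basis covectors, since (E^{-1} E)^i_j = delta^i_j.
E = np.array([[1.0, 1.0],
              [0.0, 1.0]])            # columns are e_1, e_2
dual = np.linalg.inv(E)               # rows are the dual basis
print(dual @ E)                       # identity matrix = Kronecker delta
```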

In my opinion, questions like this belong in the topology & geometry section, but they usually get posted in the relativity section.

5. Apr 4, 2010

### Storm Butler

To start off, I think I should clear up my "operation or transformation" statement: I wasn't asking if it was one or the other, I was just asking if it is something of that nature. I probably shouldn't have said both at once; I see where the confusion came from.

Also, just out of curiosity: in Schutz's book (at least so far) the concepts of manifolds and dual vector spaces aren't really explicitly talked about (I'm only in the tensor analysis section). Would you consider that a more geometric interpretation of a tensor?

I know you said that it is just a notational convention, but I think it is the notation that is really getting me stuck, because I don't understand (for example) what the difference would be between $M_{ij}$, written out explicitly as a matrix (with the rows, columns, and elements), and $M^{ij}$. I just don't understand the difference. Also, there is another part where he writes a basis vector as (e0)B, but how does that make sense? A basis vector is a vector, so how can it have multiple elements? (Also, can anyone answer my earlier question about whether the concept of components being projections onto the basis vectors still applies?)

Are you saying that you think I should move my question to a different section?

6. Apr 4, 2010

### Fredrik

Staff Emeritus
Not really. A tensor is a multilinear map from a cartesian product involving a finite number of factors of V and V* into the real numbers. For example

$$T:V\times V^*\times V\times V\rightarrow\mathbb R$$.

A manifold M has a vector space TpM associated with each point p in M. So the choice V=TpM isn't "a more geometric interpretation of a tensor". It's just a special case.

They're just the components of two different tensors, regardless of whether you put them into a matrix or not.

Another convention is to use the metric to "raise and lower indices". For example, if T is a tensor of type (0,2), you can use T and the metric to define two different tensors of type (1,1) by saying that the first new tensor has components $T^i{}_j=g^{ik}T_{kj}$, and the second one has components $T_i{}^j=T_{ik}g^{kj}$. You can't call the new tensors "T" when you're using the index-free notation, but their components would be written as a "T" with a bunch of indices, as I just did.
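A sketch of this in NumPy (the Minkowski metric and the random components are my choice, not Schutz's example), showing that raising the first index and raising the second index really do give different (1,1) tensors:

```python
import numpy as np

# Inverse metric g^{ik}: for Minkowski diag(-1,1,1,1) it equals the metric.
g = np.diag([-1.0, 1.0, 1.0, 1.0])
g_inv = np.linalg.inv(g)

rng = np.random.default_rng(0)
T = rng.standard_normal((4, 4))       # components T_{ij} of a (0,2) tensor

# T^i_j = g^{ik} T_{kj}  (first index raised)
T_up_down = np.einsum('ik,kj->ij', g_inv, T)

# T_i^j = T_{ik} g^{kj}  (second index raised)
T_down_up = np.einsum('ik,kj->ij', T, g_inv)

# Different tensors for a generic (non-symmetric) T:
print(np.allclose(T_up_down, T_down_up))
```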

What if you write a basis vector as a linear combination of the members of another basis? Wouldn't a notation like that be appropriate then? For example

$$\vec e_i=(\vec e_i)^j \vec f_j$$
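To make that notation concrete (with two invented bases, assuming NumPy): the components $(\vec e_i)^j$ are just the solution of the linear system $\vec e_i=(\vec e_i)^j \vec f_j$:

```python
import numpy as np

F = np.array([[1.0, 1.0],
              [0.0, 1.0]])            # columns are the basis vectors f_1, f_2
E = np.array([[2.0, 0.0],
              [1.0, 3.0]])            # columns are the basis vectors e_1, e_2

# Solve F @ C = E column by column; C[:, i] holds the components (e_i)^j.
C = np.linalg.solve(F, E)
print(C)
print(np.allclose(F @ C, E))          # each e_i is recovered as (e_i)^j f_j
```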

If an inner product is defined, then yes. We are just talking about vector spaces here, so everything you learned in your linear algebra class still applies. The first thing that's going to be different is that you're going to be using a bilinear form that isn't positive definite (a Lorentzian metric tensor) instead of an inner product.
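A quick numerical illustration of that last point (my own example vectors, assuming NumPy): the Minkowski bilinear form $\eta=\mathrm{diag}(-1,1,1,1)$ fails positive definiteness, since $\eta(u,u)$ can be negative or zero for nonzero u:

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])  # Lorentzian metric, not an inner product

def eta_product(u, v):
    return u @ eta @ v

timelike = np.array([1.0, 0.0, 0.0, 0.0])
null     = np.array([1.0, 1.0, 0.0, 0.0])

print(eta_product(timelike, timelike))  # -1.0: negative "squared length"
print(eta_product(null, null))          #  0.0 for a nonzero vector
```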

It doesn't matter to me. I was just answering the comment you made about "the most appropriate place to put the question". If you want to move it, you can hit the report button and request a move.