# Indices in differential geometry

1. Jul 21, 2008

### Cincinnatus

So I've taken two differential topology/geometry classes both from a mathematics department. I see all over this forum a whole lot of talk about indices being up or down and raising/lowering etc.

My professors barely ever mentioned these things though I did notice that when they worked in local coordinates they always wrote the indices on certain objects up and other objects down. For example, inner products of vectors always seem to have repeated indices that are up on one object and down on the other like:
$\langle x, y \rangle = \sum_i x_i y^i$

I've never really understood what we gain from this notation. At least for my example of an inner product no information seems to be gained by writing the indices this way. When talking about more complicated things than inner products I'm at a loss as to how I should arrange the indices and what benefit there is from doing things this way.

I initially thought that the convention was to write indices up on objects that transform covariantly and down on objects that transform contravariantly (or vice versa), but I'm at a loss as to what this means for objects with mixed indices. It's also not at all clear to me what it means to "raise" or "lower" the indices on some object.

If anyone is willing, I'd like clarification both on how the notation actually works as well as on the "philosophy" behind why this notation was chosen.

2. Jul 21, 2008

### HallsofIvy

First, it helps to distinguish between "covariant vectors" and "contravariant vectors". Do you understand the difference? Second, it allows you to use Einstein's "summation convention" — whenever the same index appears twice, once as a subscript and once as a superscript, sum over that index — without having to write "$\sum$".
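The summation convention described above can be sketched in numpy, whose `einsum` function mirrors the index notation directly (the names `x` and `w` here are just illustrative):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])   # components x^i (index up)
w = np.array([4.0, 5.0, 6.0])   # components w_i (index down)

# w_i x^i: the repeated index i, once up and once down, is summed implicitly.
pairing = np.einsum('i,i->', w, x)

# Equivalent to writing out the sum explicitly:
assert pairing == sum(w[i] * x[i] for i in range(3))
print(pairing)  # 32.0
```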

3. Jul 21, 2008

### Cincinnatus

Perhaps I don't understand the difference. I think of vectors as columns and covectors as rows. So a change of coordinates acts on vectors by left multiplication by some matrix, whereas for covectors the equivalent transformation would have to be a right multiplication. So at least when we are talking about linear transformations, I think of covariant transformations as right multiplication and contravariant transformations as left multiplication.

I also used to know the distinction in terms of pushforwards and pullbacks... but I seem to have forgotten this today.

4. Jul 21, 2008

### HallsofIvy

Okay, "columns" and "rows" is a good start. More precisely, "contravariant vectors" are what we might normally think of as "vectors" (tangent vectors to a manifold), while "covariant vectors" are linear functionals defined on those vectors. Given a basis, there exists a one-to-one identification of linear functionals with vectors, so we can think of $\omega(v)$ as the dot product of the vector identified with $\omega$ and $v$ — which we can then think of as a matrix product of a row vector and a column vector.

But notice that this depends on a specific choice of basis in the tangent space at a specific point on the manifold. If you want to work "coordinate free" or at different points on the manifold you have to be more careful.
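The identification of functionals with vectors described above is exactly what "lowering an index" with a metric does: a vector $v^i$ corresponds to the covector $v_i = g_{ij} v^j$. A minimal numpy sketch, using the Minkowski metric purely as an example (the names here are illustrative):

```python
import numpy as np

g = np.diag([-1.0, 1.0, 1.0, 1.0])       # metric components g_{ij}
g_inv = np.linalg.inv(g)                 # inverse metric g^{ij}

v_up = np.array([2.0, 1.0, 0.0, 0.0])    # vector components v^i
v_down = g @ v_up                        # lowered: v_i = g_{ij} v^j

# Raising the index with g^{ij} recovers the original components:
assert np.allclose(g_inv @ v_down, v_up)
print(v_down)  # [-2.  1.  0.  0.]
```

A different choice of metric gives a different identification, which is why the pairing of a vector with a covector is basis-free but the vector/covector identification itself is not.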

5. Jul 21, 2008

### Cincinnatus

This is just saying that the vector spaces of vectors and covectors are dual to each other. This description (and the one I gave) breaks down when we talk about more complicated objects with mixed indices, though, right?

Is there any sense in which a tensor with mixed indices is dual to another tensor with "flipped" indices?