Indices in differential geometry


Discussion Overview

The discussion revolves around the notation of indices in differential geometry, particularly the distinction between covariant and contravariant vectors, and the implications of raising and lowering indices. Participants explore the philosophical and practical aspects of this notation, including its utility in expressing inner products and the transformation properties of different types of vectors and tensors.

Discussion Character

  • Exploratory
  • Technical explanation
  • Conceptual clarification
  • Debate/contested

Main Points Raised

  • One participant expresses confusion about the significance of indices being up or down in notation, particularly in the context of inner products.
  • Another participant highlights the importance of distinguishing between covariant and contravariant vectors, suggesting that this distinction aids in applying Einstein's summation convention.
  • There is a discussion about the transformation properties of vectors and covectors, with one participant describing them in terms of left and right multiplication by matrices.
  • A later reply clarifies that contravariant vectors are tangent vectors while covariant vectors are linear functionals, emphasizing the duality between these spaces.
  • Participants question how the concept of duality applies to tensors with mixed indices and whether such tensors can be considered dual to others with "flipped" indices.

Areas of Agreement / Disagreement

Participants demonstrate a mix of understanding and confusion regarding the notation and concepts, with no clear consensus on the implications of mixed indices or the duality of tensors. Multiple competing views and interpretations remain present throughout the discussion.

Contextual Notes

Participants note that the understanding of these concepts may depend on specific choices of basis and the context of the manifold being discussed, indicating potential limitations in generalizing the discussion.

Cincinnatus
So I've taken two differential topology/geometry classes both from a mathematics department. I see all over this forum a whole lot of talk about indices being up or down and raising/lowering etc.

My professors barely ever mentioned these things though I did notice that when they worked in local coordinates they always wrote the indices on certain objects up and other objects down. For example, inner products of vectors always seem to have repeated indices that are up on one object and down on the other like:
\langle x, y \rangle = \sum_i x_i y^i

I've never really understood what we gain from this notation. At least for my example of an inner product no information seems to be gained by writing the indices this way. When talking about more complicated things than inner products I'm at a loss as to how I should arrange the indices and what benefit there is from doing things this way.

I initially thought that the index was written up on objects that transform covariantly and down on objects that transform contravariantly (or vice versa), but I'm at a loss as to what this means for objects with mixed indices. It's also not at all clear to me what it means to "raise" or "lower" the indices on some object.

If anyone is willing, I'd like clarification both on how the notation actually works as well as on the "philosophy" behind why this notation was chosen.
 
First, it helps to distinguish between "covariant vectors" and "contravariant vectors". Do you understand the difference? Second, it allows you to use Einstein's summation convention, "whenever you have the same index twice, once as a subscript and once as a superscript, sum over that index", without having to write "\sum".
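The summation convention can be made concrete in code: numpy's einsum sums over any repeated index label automatically, which is exactly the role a repeated up/down index pair plays on paper. A minimal sketch with made-up components:

```python
import numpy as np

# A vector with an upper index, v^i, and a covector with a lower index, w_i.
v = np.array([1.0, 2.0, 3.0])
w = np.array([4.0, 5.0, 6.0])

# Einstein convention: a repeated index (once up, once down) is summed over.
# In einsum, the repeated label 'i' plays the same role.
pairing = np.einsum('i,i->', w, v)

# Equivalent to writing the sum explicitly.
explicit = sum(w[i] * v[i] for i in range(3))
assert pairing == explicit  # both equal 32.0
```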
 
HallsofIvy said:
First, it helps to distinguish between "covariant vectors" and "contravariant vectors". Do you understand the difference? Second, it allows you to use Einstein's summation convention, "whenever you have the same index twice, once as a subscript and once as a superscript, sum over that index", without having to write "\sum".

Perhaps I don't understand the difference. I think of vectors as columns and covectors as rows. So a change-of-coordinates transformation on vectors must be left multiplication by some matrix, whereas for covectors the equivalent transformation would have to be a right multiplication. So, at least when we are talking about linear transformations, I think of covariant transformations as right multiplication and contravariant transformations as left multiplication.

I also used to know the distinction in terms of pushforwards and pullbacks... but I seem to have forgotten this today.
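The left/right multiplication picture can be checked numerically: if the basis changes by an (assumed invertible) matrix A, column-vector components pick up the inverse of A on the left, while row-covector components pick up A on the right, and the pairing between them is unchanged. A sketch with made-up random matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))        # change-of-basis matrix (assumed invertible)
v = np.array([1.0, 2.0, 3.0])      # column vector: contravariant components
w = np.array([4.0, 5.0, 6.0])      # row covector: covariant components

# Vector components transform with the inverse matrix (left multiplication),
v_new = np.linalg.inv(A) @ v
# while covector components transform with A itself (right multiplication, as a row).
w_new = w @ A

# The pairing w(v) is coordinate-independent: (wA)(A^-1 v) = wv.
assert np.isclose(w_new @ v_new, w @ v)
```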
 
Okay, "columns" and "rows" is a good start. More precisely, "contravariant vectors" are what we might normally think of as "vectors" (tangent vectors to a manifold), while "covariant vectors" are linear functionals defined on those vectors. Given a basis, there exists a one-to-one identification of linear functionals with vectors, so we can think of \omega(v) as the dot product of the vector identified with \omega and v, which we can then think of as a matrix product of row and column vectors.

But notice that this depends on a specific choice of basis in the tangent space at a specific point on the manifold. If you want to work "coordinate free" or at different points on the manifold you have to be more careful.
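This basis-dependent identification is exactly what a metric makes explicit: lowering an index, v_i = g_{ij} v^j, turns a vector into the functional g(v, -), and the inverse metric g^{ij} raises the index back. A sketch with a made-up symmetric positive-definite metric:

```python
import numpy as np

# A symmetric positive-definite metric g_{ij} (hypothetical example values).
g = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.0, 0.0],
              [0.0, 0.0, 3.0]])
g_inv = np.linalg.inv(g)           # the inverse metric g^{ij}

v = np.array([1.0, 2.0, 3.0])      # contravariant components v^i

# Lowering: v_i = g_{ij} v^j identifies v with the covector g(v, -).
v_lower = np.einsum('ij,j->i', g, v)

# Raising with g^{ij} recovers the original components.
v_back = np.einsum('ij,j->i', g_inv, v_lower)
assert np.allclose(v_back, v)

# The inner product g_{ij} v^i v^j equals the contraction v_i v^i.
assert np.isclose(np.einsum('ij,i,j->', g, v, v), v_lower @ v)
```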
 
HallsofIvy said:
Okay, "columns" and "rows" is a good start. More precisely, "contravariant vectors" are what we might normally think of as "vectors" (tangent vectors to a manifold), while "covariant vectors" are linear functionals defined on those vectors. Given a basis, there exists a one-to-one identification of linear functionals with vectors, so we can think of \omega(v) as the dot product of the vector identified with \omega and v, which we can then think of as a matrix product of row and column vectors.

This is just saying that the vector spaces consisting of vectors and covectors are dual to each other. This description (and the one I gave) breaks down when we talk about more complicated objects with mixed indices, though, right?

Is there any sense in which a tensor with mixed indices is dual to another tensor with "flipped" indices?
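There is at least one concrete sense in which this works: a (1,1)-tensor T^i_j pairs with an S^j_i by contracting each upper index against a lower one, T^i_j S^j_i = tr(TS), and the resulting scalar is basis-independent. A sketch with made-up components, checking that invariance under a change of basis:

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.normal(size=(3, 3))        # components T^i_j of a (1,1)-tensor
S = rng.normal(size=(3, 3))        # components S^i_j of another (1,1)-tensor
A = rng.normal(size=(3, 3))        # change-of-basis matrix (assumed invertible)

# Full contraction T^i_j S^j_i: every upper index meets a lower one.
pairing = np.einsum('ij,ji->', T, S)     # equals tr(T @ S)

# (1,1)-tensor components transform like matrices: T -> A^-1 T A.
A_inv = np.linalg.inv(A)
T_new = A_inv @ T @ A
S_new = A_inv @ S @ A

# The contraction is a scalar, hence independent of the basis chosen.
assert np.isclose(np.einsum('ij,ji->', T_new, S_new), pairing)
```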
 
