
Covariant and contravariant basis vectors /Euclidean space

  1. Jan 30, 2016 #1
    I want to ask another basic question related to this paper: http://www.tandfonline.com/doi/pdf/10.1080/16742834.2011.11446922

    If I have basis vectors for a curvilinear coordinate system (Euclidean space) that are completely orthogonal to each other (the basis vectors will change from point to point), I presume I can have a set of covariant basis vectors and a set of contravariant basis vectors.

    Now assume that one of those three axes is non-orthogonal to the other two (the paper calls it the sigma coordinate). Will the following statement be correct?

    The basis vectors that are orthogonal to each other will transform covariantly, and the basis vectors that describe the non-orthogonal coordinate surface will transform contravariantly.

    If the above statement is incorrect, can someone explain what this text from the paper means:

    "In a σ-coordinate, the horizontal covariant basis vectors and the vertical contravariant basis vectors vary in the horizontal and vertical, respectively, while the covariant and contravariant basis vectors are non-orthogonal when the height and slope of terrain do not equal zero "
     
  3. Jan 30, 2016 #2

    Fredrik (Staff Emeritus, Science Advisor, Gold Member)

    In this context, a "vector" at a point in an n-dimensional manifold can be defined, not as an n-tuple of numbers, but as a function that associates an n-tuple of real numbers with each coordinate system. This "vector" is then said to be "contravariant" if the relationship between the n-tuples associated with different coordinate systems is given by a certain formula (that I'm not going to explain in this post, but you may be familiar with it already), and "covariant" if it's given by a similar but different formula.
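    For concreteness, here is a sketch of the two standard formulas being alluded to, in the index notation used later in this thread (nothing here is specific to the paper): if a vector associates the n-tuple ##(u^1,\dots,u^n)## with a coordinate system ##x## and the n-tuple ##(\bar u^1,\dots,\bar u^n)## with another coordinate system ##y##, it is called contravariant when

    $$\bar u^i=\frac{\partial y^i}{\partial x^j}\,u^j$$

    and covariant (components then usually written with lower indices) when

    $$\bar u_i=\frac{\partial x^j}{\partial y^i}\,u_j,$$

    with the partial derivatives evaluated at the point in question and summation over the repeated index j implied.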

    Given a point p, a coordinate system x (with p in its domain) and an n-tuple r, you can always define a "contravariant vector" at p and a "covariant vector" at p by saying "let u be the contravariant vector at p that associates r with x, and let v be the covariant vector at p that associates r with x". I only took a glance at the paper, but I think that this is what formulas (4) and (5) are doing.

    Not in general, no. Maybe that "sigma coordinate" is defined in some special way that makes the statement correct?
     
  4. Jan 30, 2016 #3
    Fredrik - very interesting. So I need to figure out what it is about that sigma coordinate that leads to that conclusion.
     
  5. Jan 30, 2016 #4
    For me it is confusing to read that whether a vector is covariant or contravariant depends on a condition of orthogonality or lack thereof.

    Although it is true that a covariant vector transforms one way, and a contravariant vector transforms another way, I find that to be an extremely unsatisfactory way to define which vectors are co- or contravariant, since it fails to explain why one vector should be one and another vector the other.

    I think of covariant and contravariant vectors the way mathematicians do. A contravariant vector describes a tangent direction at a point p of a space: it is the velocity vector of a certain curve (as well as of many others), thought of as the trajectory of a moving point. The set of all tangent vectors at p is called the tangent space (of the space in question) at p. This tangent space, often denoted by ##T_p##, is a vector space: its vectors can be added and multiplied by scalars.
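    As a sketch of that "velocity vector" picture (assuming a coordinate system ##x## defined around p and a smooth curve ##\gamma## with ##\gamma(0)=p##), the components of the tangent vector determined by ##\gamma## are

    $$v^i=\frac{d}{dt}\Big(x^i\circ\gamma\Big)(t)\bigg|_{t=0},$$

    and two curves through p define the same tangent vector exactly when these n numbers agree.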

    A covariant vector (a dual vector) eats tangent vectors for breakfast . . . and spits out a scalar.

    Suppose that, at some point p, the vectors ##e_1, e_2, \dots, e_n## form a basis for the tangent space. Then there is a natural basis for the "cotangent" space, the space of all covariant vectors at p (which also forms a vector space in its own right):

    The covectors of the dual basis are sometimes denoted by ##e_1^*, e_2^*, \dots, e_n^*##. They are also sometimes denoted by ##e^1, \dots, e^n## instead.

    In any case, they are defined by the condition

    $$e^i(e_j) = \delta^i_{\,j}$$

    for all i, j (where the Kronecker delta ##\delta^i_{\,j}## equals 1 if i = j and 0 if i ≠ j).
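    One consequence worth spelling out (a sketch, using the summation convention): if ##\omega=\omega_i e^i## is a covector and ##v=v^j e_j## is a tangent vector, then

    $$\omega(v)=\omega_i v^j\,e^i(e_j)=\omega_i v^j\,\delta^i_{\,j}=\omega_i v^i,$$

    so evaluating a covector on a vector is just a sum of products of components; no metric or notion of angle is involved.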

    The fact that, for instance, ##e^3(e_2) = 0## is sometimes interpreted (as in the paper linked in the original post) to mean that this is a "dot product" between ##e^3## and ##e_2##, and that its value of 0 means that ##e^3## and ##e_2## are "orthogonal".

    Maybe it's just me, but I find this attempt at a geometric interpretation of angles between vectors and dual vectors to be merely confusing and not helpful.
     
  6. Jan 30, 2016 #5
    Can you explain why it is confusing? That would really help.
     
  7. Jan 30, 2016 #6
    It's just that vectors and dual vectors are things — tensors, in fact — of different type, so that when *they* interact with each other, I prefer to keep that interaction in a special category other than two vectors in the same vector space having geometric relationships to each other.
     
  8. Jan 30, 2016 #7
    Now I totally understand your concern. You are saying they belong to different vector spaces (contravariant and covariant). Hence they should not be used to derive geometric relationships. Is that correct?
     
  9. Jan 30, 2016 #8
    I don't want to exclude any possibility, since my ignorance is vast, so I won't say "should not".

    But unless there is some reason to think of vectors and dual vectors in the same geometric space, my own personal preference is to not think of them as related geometrically.

    But rather algebraically: the fundamental fact is that a dual vector takes a vector as input and outputs a real number. So we just evaluate the dual vector on the vector; I don't necessarily see why we should think of the result as the product of two absolute values and a cosine.
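    A toy example of that evaluation (made-up numbers, in the dual-basis notation from post #4): if ##\omega=2e^1+3e^2## and ##v=4e_1+5e_2##, then

    $$\omega(v)=2\cdot 4+3\cdot 5=23,$$

    a real number obtained without ever mentioning lengths or angles.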
     
  10. Jan 30, 2016 #9
    Is there a case for thinking of them geometrically in the self-dual space? I mean, if I have an orthonormal basis then there is no distinction between the contravariant and covariant components of a vector.
     
  11. Jan 31, 2016 #10

    Fredrik (Staff Emeritus, Science Advisor, Gold Member)

    There's no such thing as contravariant and covariant components of a vector. In the terminology used in differential geometry, there are two vector spaces associated with each point p of a manifold M: the tangent space ##T_pM## and the cotangent space ##T_pM^*##. The latter is the dual space of the former. Given a coordinate system x with p in its domain, there's a straightforward way to define an ordered basis ##(e_1,\dots,e_n)## for the tangent space at p. Its dual ##(e^1,\dots,e^n)## is an ordered basis for the cotangent space at p. So a change of coordinate system induces a change of ordered basis for both the tangent space and the cotangent space. The n-tuple of components of a tangent vector (i.e. an element of ##T_pM##) transforms contravariantly under such a change. The n-tuple of components of a cotangent vector (i.e. an element of ##T_pM^*##) transforms covariantly under such a change.
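    For reference, a sketch of that "straightforward way" (the standard coordinate-basis construction, not spelled out in the post above): given the coordinate system x, one takes

    $$e_i=\frac{\partial}{\partial x^i}\bigg|_p,\qquad e^i=\mathrm{d}x^i\big|_p,$$

    where ##\partial/\partial x^i|_p## acts on a smooth function f as the ith partial derivative of ##f\circ x^{-1}## evaluated at x(p), and the dual basis satisfies ##e^i(e_j)=\delta^i_{\,j}##.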

    Here's the corresponding explanation using the obsolete and horrible terminology: A contravariant vector is an n-tuple of real numbers that transforms contravariantly, and a covariant vector is an n-tuple of real numbers that transforms covariantly. This is supposed to mean that vectors are functions that associate n-tuples with coordinate systems. To say that the n-tuple "transforms" in a certain way is to say that the n-tuples associated with two given coordinate systems are related in a certain way.

    What you're thinking about is probably that the metric can be used to "raise and lower indices". What this means is that if ##v^i## is the ith component of the n-tuple that a contravariant vector associates with a certain coordinate system, then ##v_i=g_{ij}v^j## is the ith component of the n-tuple that a covariant vector associates with that same coordinate system. If the matrix of components of the metric is the identity matrix, i.e. if ##g_{ij}=\delta_{ij}## for all i,j, then the covariant vector associates the same n-tuple with every coordinate system as the corresponding contravariant vector.
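    A made-up numerical illustration (the metric components below are chosen only for the example): in a 2-dimensional coordinate system with

    $$g_{ij}=\begin{pmatrix}1&\tfrac12\\ \tfrac12&1\end{pmatrix},\qquad v^i=(1,\,0),$$

    the lowered components are ##v_i=g_{ij}v^j=(1,\,\tfrac12)##, which differ from ##v^i##; if instead ##g_{ij}=\delta_{ij}##, then ##v_i=(1,\,0)##, the same n-tuple as ##v^i##, which is the situation described in the last sentence above.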
     
  12. Jan 31, 2016 #11
    Very good on guessing what my line of thinking is :)
     
  13. Feb 2, 2016 #12
    "Is there a case for thinking them geometrically in the self dual space ? I mean if I have an orthonormal basis then there is no distinction between the contravariant and covariant components of a vector."

    It is true that, given an inner product

    $$\langle\ ,\ \rangle : V \times V \to \mathbb{R}$$

    on a finite-dimensional vector space V, that inner product provides a certain isomorphism between the vector space V and its dual V* that is given by

    $$v \mapsto \langle v,\ \cdot\ \rangle.$$

    But the existence of an isomorphism is not the same as saying that two isomorphic vector spaces V and V* are identical. In this case, they are not.
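    In components, this is exactly the "index lowering" from post #10 (a sketch): if ##(e_1,\dots,e_n)## is a basis of V with ##g_{ij}=\langle e_i,e_j\rangle##, and ##(e^1,\dots,e^n)## is the dual basis of V*, then the isomorphism sends

    $$v=v^i e_i\ \longmapsto\ \langle v,\,\cdot\,\rangle=g_{ij}v^j\,e^i=v_i\,e^i,$$

    so the image of v has precisely the lowered components ##v_i=g_{ij}v^j##. Only for an orthonormal basis (##g_{ij}=\delta_{ij}##) do the numbers ##v_i## and ##v^i## coincide.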
     