# Covariant and contravariant basis vectors in Euclidean space

In summary, the thread discusses how covariant and contravariant vectors are defined in terms of an equation relating the vectors' angles; the equation is not always true, and does not always make sense when applied to vectors and dual vectors.

#### meteo student

I want to ask another basic question related to this paper: http://www.tandfonline.com/doi/pdf/10.1080/16742834.2011.11446922

Suppose I have basis vectors for a curvilinear coordinate system (Euclidean space) that are completely orthogonal to each other (the basis vectors will change from point to point). I presume I can have a set of covariant basis vectors and a set of contravariant basis vectors.

Now assume that one of those three axes is non-orthogonal to the other two (the paper calls it the sigma coordinate). Will the following statement be correct?

The basis vectors that are orthogonal to each other will transform covariantly, and the basis vectors that describe the non-orthogonal coordinate surface will transform contravariantly.

If the above statement is incorrect, can someone explain what this text from the paper means:

"In a σ-coordinate, the horizontal covariant basis vectors and the vertical contravariant basis vectors vary in the horizontal and vertical, respectively, while the covariant and contravariant basis vectors are non-orthogonal when the height and slope of terrain do not equal zero "

meteo student said:
I presume I can have a set of covariant basis vectors and contravariant basis vectors.
In this context, a "vector" at a point in an n-dimensional manifold can be defined, not as an n-tuple of numbers, but as a function that associates an n-tuple of real numbers with each coordinate system. This "vector" is then said to be "contravariant" if the relationship between the n-tuples associated with different coordinate systems is given by a certain formula (that I'm not going to explain in this post, but you may be familiar with it already), and "covariant" if it's given by a similar but different formula.

Given a point p, a coordinate system x (with p in its domain) and an n-tuple r, you can always define a "contravariant vector" at p and a "covariant vector" at p by saying "let u be the contravariant vector at p that associates r with x, and let v be the covariant vector at p that associates r with x". I only took a glance at the paper, but I think that this is what formulas (4) and (5) are doing.

meteo student said:
Now assume that one of those three axes is non-orthogonal to the other two (the paper calls it the sigma coordinate). Will the following statement be correct?
Not in general, no. Maybe that "sigma coordinate" is defined in some special way that makes the statement correct?

Fredrik - very interesting. So I need to figure out what it is about that sigma coordinate that leads to that conclusion.

For me it is confusing to read that whether a vector is covariant or contravariant depends on a condition of orthogonality or lack thereof.

Although it is true that a covariant vector transforms one way, and a contravariant vector transforms another way, I find that to be an extremely unsatisfactory way to define which vectors are co- or contravariant, since it fails to explain why one vector should be one and another vector the other.

I think of covariant and contravariant vectors the way mathematicians do: A contravariant vector describes a tangent direction at a point p of a space: it is the velocity vector of a certain curve (as well as of many others), thought of as the trajectory of a moving point. The set of all tangent vectors at the point p is called the tangent space (of the space in question) at the point p. This tangent space, often denoted by Tp, is a vector space: its vectors can be added and multiplied by scalars.

A covariant vector eats tangent vectors for breakfast, . . . and spits out a scalar.

Suppose that, at some point p, the vectors e1, e2, ..., en form a basis for the tangent space. Then there is a natural basis for the "cotangent" space, the space of all covariant vectors at p, which is a vector space in its own right:

The covariant vectors of the dual basis are sometimes denoted by ##e_1^*, e_2^*, \dots, e_n^*##. They are also sometimes denoted by ##e^1, \dots, e^n## instead.

In any case, they are defined by the condition

##e^i(e_j) = \delta^i_j##

for all i, j (where the Kronecker delta ##\delta^i_j## equals 1 if i = j and 0 if i ≠ j).
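This defining condition is easy to check numerically. Here is a minimal sketch in numpy, where the particular basis matrix is an arbitrary (non-orthogonal) choice, not anything from the paper:

```python
import numpy as np

# Columns of E are three linearly independent (non-orthogonal) basis vectors e_j.
E = np.array([[1.0, 1.0, 0.5],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])

# Rows of D are the dual-basis covectors e^i: the pairing e^i(e_j) is (D @ E)[i, j],
# so requiring e^i(e_j) = delta^i_j means D must be the inverse of E.
D = np.linalg.inv(E)

print(np.allclose(D @ E, np.eye(3)))  # True
```

Note that the duality condition holds regardless of whether the basis itself is orthonormal.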

The fact that, for instance, ##e^3(e_2) = 0## is sometimes interpreted (as in the paper linked in the original post) to mean that this is a "dot product" between ##e^3## and ##e_2##, and that its value of 0 means that ##e^3## and ##e_2## are "orthogonal".

Maybe it's just me, but I find this attempt at a geometric interpretation of angles between vectors and dual vectors to be merely confusing and not helpful.

meteo student and Fredrik
Can you explain why it is confusing? That would really help.

It's just that vectors and dual vectors are things — tensors, in fact — of different type, so that when *they* interact with each other, I prefer to keep that interaction in a special category other than two vectors in the same vector space having geometric relationships to each other.

meteo student
Now I totally understand your concern. You are saying they belong to different vector spaces (contravariant and covariant), and hence they should not be used to derive geometric relationships. Is that correct?

I don't want to exclude any possibility, since my ignorance is vast, so I won't say "should not".

But unless there is some reason to think of vectors and dual vectors in the same geometric space, my own personal preference is to not think of them as related geometrically.

But rather, algebraically: the fundamental fact is that a dual vector takes a vector as input and outputs a real number. So we just evaluate the dual vector on the vector; I don't see why the result should be thought of as the product of two absolute values and a cosine.

Is there a case for thinking of them geometrically in the self-dual space? I mean, if I have an orthonormal basis, then there is no distinction between the contravariant and covariant components of a vector.

meteo student said:
Is there a case for thinking of them geometrically in the self-dual space? I mean, if I have an orthonormal basis, then there is no distinction between the contravariant and covariant components of a vector.
There's no such thing as contravariant and covariant components of a vector. In the terminology used in differential geometry, there are two vector spaces associated with each point p of a manifold M: the tangent space ##T_pM## and the cotangent space ##T_pM^*##. The latter is the dual space of the former. Given a coordinate system x with p in its domain, there's a straightforward way to define an ordered basis ##(e_1,\dots,e_n)## for the tangent space at p. Its dual ##(e^1,\dots,e^n)## is an ordered basis for the cotangent space at p. So a change of coordinate system induces a change of ordered basis for both the tangent space and the cotangent space. The n-tuple of components of a tangent vector (i.e. an element of ##T_pM##) transforms contravariantly under such a change. The n-tuple of components of a cotangent vector (i.e. an element of ##T_pM^*##) transforms covariantly under such a change.
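These two transformation laws can be illustrated numerically. The following is a minimal sketch for a linear change of coordinates, where the Jacobian is constant (just the matrix itself); the matrix and the component n-tuples are arbitrary choices:

```python
import numpy as np

# A linear change of coordinates y = A @ x; its Jacobian is simply A.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

v = np.array([1.0, 2.0])   # components of a tangent vector in the x-coordinates
w = np.array([4.0, -1.0])  # components of a cotangent vector in the x-coordinates

v_new = A @ v                 # contravariant transformation of tangent components
w_new = w @ np.linalg.inv(A)  # covariant transformation of cotangent components

# The pairing w(v) is coordinate-independent:
print(w @ v, w_new @ v_new)  # both equal 2 up to floating-point rounding
```

The opposite transformation rules are exactly what keeps the scalar w(v) invariant, which is the point of the contravariant/covariant distinction.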

Here's the corresponding explanation using the obsolete and horrible terminology: A contravariant vector is an n-tuple of real numbers that transforms contravariantly, and a covariant vector is an n-tuple of real numbers that transforms covariantly. This is supposed to mean that vectors are functions that associate n-tuples with coordinate systems. To say that the n-tuple "transforms" in a certain way is to say that the n-tuples associated with two given coordinate systems are related in a certain way.

What you're thinking about is probably that the metric can be used to "raise and lower indices". What this means is that if ##v^i## is the ith component of the n-tuple that a contravariant vector associates with a certain coordinate system, then ##v_i=g_{ij}v^j## is the ith component of the n-tuple that a covariant vector associates with that same coordinate system. If the matrix of components of the metric is the identity matrix, i.e. if ##g_{ij}=\delta_{ij}## for all i,j, then the covariant vector associates the same n-tuple with every coordinate system as the corresponding contravariant vector.
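Index lowering can be sketched in a few lines of numpy; the basis below is an arbitrary illustrative choice, not taken from the paper:

```python
import numpy as np

# Columns of E are two basis vectors; the metric components are g_ij = e_i . e_j.
E = np.array([[1.0, 0.5],
              [0.0, 1.0]])
g = E.T @ E

v_upper = np.array([2.0, 3.0])  # contravariant components v^i
v_lower = g @ v_upper           # covariant components v_i = g_ij v^j

# With an orthonormal basis the metric is the identity and the two n-tuples coincide:
print(np.allclose(np.eye(2) @ v_upper, v_upper))  # True
# Here the basis is skewed, so they differ:
print(np.allclose(v_lower, v_upper))              # False
```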

meteo student
Fredrik said:
What you're thinking about is probably that the metric can be used to "raise and lower indices". What this means is that if ##v^i## is the ith component of the n-tuple that a contravariant vector associates with a certain coordinate system, then ##v_i=g_{ij}v^j## is the ith component of the n-tuple that a covariant vector associates with that same coordinate system. If the matrix of components of the metric is the identity matrix, i.e. if ##g_{ij}=\delta_{ij}## for all i,j, then the covariant vector associates the same n-tuple with every coordinate system as the corresponding contravariant vector.

Very good on guessing what my line of thinking is :)

"Is there a case for thinking them geometrically in the self dual space ? I mean if I have an orthonormal basis then there is no distinction between the contravariant and covariant components of a vector."

It is true that, given an inner product

##\langle \cdot, \cdot \rangle : V \times V \to \mathbb{R}##

on a finite-dimensional vector space V, that inner product provides a certain isomorphism between the vector space V and its dual V* that is given by

##v \mapsto \langle v, \cdot \rangle##.

But the existence of an isomorphism is not the same as saying that two isomorphic vector spaces V and V* are identical. In this case, they are not.
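That isomorphism can be made concrete in a few lines of Python; the function name `flat` is a hypothetical choice (the operation is often called the "musical" flat map), and the vectors are arbitrary:

```python
import numpy as np

def flat(v):
    """The isomorphism V -> V* induced by the standard inner product:
    v is mapped to the linear functional <v, . >."""
    return lambda u: float(np.dot(v, u))

v = np.array([1.0, 2.0, 3.0])
phi = flat(v)  # phi lives in V*: it is a function on V, not an element of V

print(phi(np.array([1.0, 0.0, 0.0])))  # 1.0, the first component of v
```

The point of the post stands in the code itself: `v` is an array while `phi` is a function, so the two isomorphic objects are plainly not identical.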

## 1. What is the difference between covariant and contravariant basis vectors?

Covariant and contravariant basis vectors are two different bases associated with the same coordinate system. The covariant basis vectors are tangent to the coordinate curves, while the contravariant basis vectors are normal to the coordinate surfaces. Both sets change from point to point in a curvilinear coordinate system; the names refer to the fact that components expressed in them transform oppositely (covariantly and contravariantly) under a change of coordinates.
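As a concrete sketch, here is a toy two-dimensional terrain-following coordinate, loosely modeled on the σ-coordinate discussed in the thread; the slope value is an arbitrary assumption:

```python
import numpy as np

# Toy sigma-like coordinates: x' = x, sigma = z - h(x), with terrain height h(x)
# and slope hp = h'(x) at the evaluation point (hypothetical value).
hp = 0.3

# Covariant basis vectors: columns of the Jacobian d(x, z)/d(x', sigma),
# i.e. tangents to the coordinate curves.
J = np.array([[1.0, 0.0],
              [hp,  1.0]])
e1, e2 = J[:, 0], J[:, 1]

# Contravariant basis vectors: rows of the inverse Jacobian,
# i.e. gradients (normals) of the coordinate surfaces x' = const, sigma = const.
Jinv = np.linalg.inv(J)
E1, E2 = Jinv[0], Jinv[1]

print(e1 @ e2)  # 0.3: the covariant basis is non-orthogonal when the slope != 0
print(np.allclose(Jinv @ J, np.eye(2)))  # True: duality e^i(e_j) = delta holds anyway
```

This matches the quoted statement from the paper: the bases become non-orthogonal exactly when the terrain slope is nonzero, while the duality relation is unaffected.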

## 2. How are covariant and contravariant basis vectors used in Euclidean space?

In Euclidean space, covariant and contravariant basis vectors are used to express a vector's components in a curvilinear coordinate system. In an orthonormal Cartesian basis the two coincide, but in a skewed or curvilinear system they differ, and both are needed to compute dot products and other invariant quantities correctly.

## 3. What is the significance of Euclidean space in mathematics and physics?

Euclidean space is a fundamental concept in mathematics and physics, representing a flat, continuous, and infinite space in which we can perform geometric and physical calculations. It is the most commonly used mathematical model for describing the physical world and is the basis for many theories and equations in physics.

## 4. How do covariant and contravariant basis vectors relate to tensors?

Tensors are mathematical objects that represent geometric and physical quantities in a coordinate-independent manner. Covariant and contravariant basis vectors are the building blocks of tensors and are used to define the tensor components, which then determine how the tensor behaves under coordinate transformations.

## 5. Can you give an example of how covariant and contravariant basis vectors are used in practical applications?

One practical example of using covariant and contravariant components is in physics, in the description of electromagnetic fields. In relativity the electric and magnetic fields are combined into a single antisymmetric electromagnetic field tensor, whose covariant components ##F_{\mu\nu}## and contravariant components ##F^{\mu\nu}## are related by the metric; keeping track of the two index positions is essential whenever the coordinates are not orthonormal.