# What is a covariant derivative?

1. Jul 10, 2012

### codeman_nz

Hi everyone,

What is a covariant derivative? I have looked online but the explanations are not clear.

I just want a simple explanation with a simple example, more specifically the covariant derivative of a 3-metric.

Thanks.

2. Jul 10, 2012

### Muphrid

Let me give an explanation from a physicist's perspective.

Say you have a vector field with values at every point in space. You can apply a local rotation--rotating the vectors in an arbitrary direction and by an arbitrary angle. More importantly, the rotation angles and directions can be different at every point in space.

The covariant derivative is then defined such that the new covariant derivative of the rotated field is the rotation of the covariant derivative of the field.

That may be a little hard to parse, so let's write it in mathematical language. You have a rotation operator (matrix) $\underline L$ that acts on a vector field $A(x)$ such that $A \mapsto A' = \underline L(A)$. The covariant derivative $D$ is defined so that, for any vector $a$, $D$ transforms to $D'$ obeying

$$a \cdot D' A' = \underline L(a \cdot DA)$$

I think this conceptual idea--that the covariant derivative is the derivative operator that obeys this sensible transformation law--helps a bit in understanding why it is convenient. This way, you can calculate the covariant derivative in a particular frame from the covariant derivative in another frame and the rotation that relates the two frames. You can't do this for the usual derivative operator $\nabla$. (If you're curious, I can prove why this is.)
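A small numerical sketch of this point (my own illustration, not from the post; plain Python with finite differences): take a constant field $A$ and a rotation whose angle depends on position. The ordinary derivative of $A$ vanishes, so if the plain derivative obeyed the transformation law above, the derivative of the rotated field would vanish too. It does not:

```python
import math

def rot(theta):
    """2x2 rotation matrix for angle theta."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def matvec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

# A constant vector field: its ordinary derivative is zero everywhere.
def A(x):
    return [1.0, 0.0]

# The same field after a *local* rotation, with angle theta(x) = x.
def A_rotated(x):
    return matvec(rot(x), A(x))

# Central finite-difference x-derivative of the rotated field at x0.
x0, h = 0.3, 1e-6
dA_rotated = [(p - m) / (2 * h)
              for p, m in zip(A_rotated(x0 + h), A_rotated(x0 - h))]

# The rotation of the (zero) derivative of A is zero, yet the ordinary
# derivative of the rotated field is nonzero: the plain partial
# derivative does not satisfy the transformation law described above.
print(dA_rotated)  # roughly [-sin(0.3), cos(0.3)], i.e. nonzero
```

The extra Christoffel-type terms in the covariant derivative are exactly what is needed to cancel this spurious contribution from the position-dependent rotation.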

I know the idea of the covariant derivative likely has a more general driving principle when considered for embedded manifolds or more general spaces. I've just tried to talk about what I know. The covariant derivative is really just a modified derivative operator with some extra terms that talk about the rotation between two frames.

3. Jul 11, 2012

### dydxforsn

Well, lots of texts show the covariant derivative of a rank-1 tensor (a vector) first, to establish the idea before developing the concept for higher-rank tensors. So here is a very simple case of the covariant derivative for vectors, from an article I'm writing. (This might be too simple for you; I'm not really sure what you're looking for, and it's too simple to show all of the essential features of the covariant derivative clearly. It's just a very simple specific case.)

Consider the derivative of a vector in general curvilinear coordinates such that the derivative of the basis vectors may change:
$$\frac{\partial \vec{A}}{\partial x^{\alpha}} = \frac{\partial (A^{i} \vec{e}_{i})}{\partial x^{\alpha}} = \frac{\partial A^{i}}{\partial x^{\alpha}} \vec{e}_{i} + A^{i} \frac{\partial \vec{e}_{i}}{\partial x^{\alpha}}$$
Now, the next step anticipates that we can factor the basis vector $\vec{e}_{i}$ out of the equation, so that our derivative becomes an operator acting on this basis vector. Thus we need to do something about the derivative on the far right. We know that the result of this derivative is some weighted combination of the basis vectors, so we represent it as a sum of the basis vectors, each multiplied by a weighting factor:
$$\frac{\partial \vec{e}_{i}}{\partial x^{\alpha}} = \sum_{j} \Gamma^{j}_{i\alpha} \vec{e}_{j}$$
These coefficients don't look odd by accident; they appear frequently and are called "Christoffel symbols of the $2^{nd}$ kind". The symbols carry 3 indices: the first bottom index (here $i$) indicates the basis vector being differentiated, the second bottom index (here $\alpha$) indicates the direction in which the basis vector is being differentiated, and finally the top index (here $j$) indicates the direction in which this derivative points. Thus we can write:
$$= \frac{\partial A^{i}}{\partial x^{\alpha}} \vec{e}_{i} + A^{i} \left( \Gamma^{j}_{i\alpha} \vec{e}_{j} \right)$$
Now comes the trick, enabled by our Christoffel symbol notation, that allows us to factor out the basis vector $\vec{e}_{i}$. We notice that $i$ and $j$ are just dummy summation indices, so we can swap them in the second term without changing a thing:
$$= \frac{\partial A^{i}}{\partial x^{\alpha}} \vec{e}_{i} + A^{j} \left( \Gamma^{i}_{j\alpha} \vec{e}_{i} \right)$$
Thus we can now factor out our basis vector '$\vec{e}_{i}$' and define an operation that we call the "covariant derivative":
$$= \left( \frac{\partial A^{i}}{\partial x^{\alpha}} + A^{j} \Gamma^{i}_{j\alpha} \right) \vec{e}_{i}$$
The notation for this derivative is a semicolon placed before the index of differentiation:
$$A^{i}_{;\alpha} = \frac{\partial A^{i}}{\partial x^{\alpha}} + \Gamma^{i}_{j\alpha} A^{j}$$
$$\frac{\partial \vec{A}}{\partial x^{\alpha}} = A^{i}_{;\alpha} \vec{e}_{i}$$
And thus the components of the covariant derivative act, much like the Christoffel symbols themselves, as weighting factors that link the derivative to each of the basis vectors of the space.
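To make the formula above concrete, here is a small Python sketch (my own, not from the post), using the standard Christoffel symbols of plane polar coordinates: the only nonzero ones are $\Gamma^{r}_{\theta\theta} = -r$ and $\Gamma^{\theta}_{r\theta} = \Gamma^{\theta}_{\theta r} = 1/r$.

```python
# Christoffel symbols of 2D polar coordinates, coordinate basis (r, theta).
# Index 0 = r, index 1 = theta; G[i][j][a] = Gamma^i_{j a}.
def christoffel(r):
    G = [[[0.0] * 2 for _ in range(2)] for _ in range(2)]
    G[0][1][1] = -r        # Gamma^r_{theta theta}
    G[1][0][1] = 1.0 / r   # Gamma^theta_{r theta}
    G[1][1][0] = 1.0 / r   # Gamma^theta_{theta r}
    return G

def covariant_derivative(A, dA, r):
    """A^i_{;a} = dA^i/dx^a + Gamma^i_{j a} A^j, with dA[i][a] the partials."""
    G = christoffel(r)
    return [[dA[i][a] + sum(G[i][j][a] * A[j] for j in range(2))
             for a in range(2)] for i in range(2)]

# Example field: A = e_r, i.e. components (A^r, A^theta) = (1, 0).
# The components are constant, so every partial derivative dA vanishes.
A = [1.0, 0.0]
dA = [[0.0, 0.0], [0.0, 0.0]]
r = 2.0
D = covariant_derivative(A, dA, r)

# Even with constant components, the covariant derivative is not zero:
# A^theta_{;theta} = 1/r, reproducing d(e_r)/d(theta) = (1/r) e_theta.
print(D)  # [[0.0, 0.0], [0.0, 0.5]]
```

The nonzero entry is exactly the Christoffel term: the field's components don't change, but the basis vectors do, and the covariant derivative records that.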

4. Jul 12, 2012

### lavinia

Covariant differentiation is a generalization of the idea of differentiating a vector field along a curve.

In Euclidean space it is just differentiating the coordinate functions of the vector field. On a surface in Euclidean space it is the tangential component of the derivative of the coordinate functions. One can think of this as the part of the derivative of the vector field that an observer living on the surface can see. On a general manifold, it can also be thought of in this way since any manifold can be isometrically embedded in some Euclidean space.

Once you know how to differentiate a vector field along a curve, you can use a metric to extend this to 1-forms and from there to tensors.
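In the semicolon notation used earlier in the thread, this extension is the standard rule that each lower index picks up a Christoffel term with a minus sign; for a 1-form $A_{i}$,

$$A_{i;\alpha} = \frac{\partial A_{i}}{\partial x^{\alpha}} - \Gamma^{j}_{i\alpha} A_{j}$$

and for a general tensor, each upper index contributes a Christoffel term with a plus sign and each lower index one with a minus sign.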

Last edited: Jul 12, 2012
5. Jul 14, 2012

### quasar987

In a vector bundle, the notion of a covariant derivative operator is equivalent to that of a connection. And there are many apparently different but equivalent ways to think about a connection, each useful in its own way.

For instance, you are interested in understanding the meaning of the covariant derivative ∇g of a metric g on a manifold M.

One way to view a connection on M is as a collection of parallel transport operators. That is, for every curve c(t) on M, the data of an isomorphism P between the tangent spaces to M at the points of the curve. If you adopt this point of view, then ∇g measures the failure of this operator to be an isometry (namely, ∇g = 0 iff the parallel transport operator P satisfies g(Pv, Pw) = g(v, w)).

Another way to view a connection on M is as an operator ∇ that sends a pair of vector fields X, Y to a third one $\nabla_X Y$, which we interpret as a kind of derivative of Y with respect to X, because it satisfies a kind of Leibniz rule. This can be extended in a natural way to an operator acting on any type of tensor field. In particular, for covariant 2-tensor fields such as our metric g, ∇g is then a measure of the failure of the nice "product rule"-looking identity $\nabla_X g(Y,Z) = g(\nabla_X Y, Z) + g(Y, \nabla_X Z)$. In other words, ∇g = 0 iff this identity holds.
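For completeness, and to connect back to the original question about a 3-metric: in components, the standard expression for ∇g is

$$g_{ij;k} = \frac{\partial g_{ij}}{\partial x^{k}} - \Gamma^{l}_{ik}\, g_{lj} - \Gamma^{l}_{jk}\, g_{il}$$

and the Levi-Civita connection of g is precisely the torsion-free connection for which this vanishes, which is why in general relativity one usually has $g_{ij;k} = 0$ by construction.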