1. Jul 7, 2012

GarageDweller

I've been reading Schutz's book on GR, and I'm finding his section on tensors and one-forms very confusing.
Schutz describes the gradient of a function as a one-form, and I cannot quite grasp why. In calculus I was taught that the gradient was a vector pointing in the direction of fastest increase. Could someone shed some light on this?

2. Jul 7, 2012

Bill_K

They transform differently when you make a coordinate change.

Say r' = 2r. Then the circles r' = const are twice as close together as the r circles were. If v is a contravariant vector, visualized as an arrow, it now spans twice as many circles, implying that its radial component is now twice as great. And in fact, $v^{r'} = (\partial r'/\partial r)\, v^r = 2 v^r$.

On the other hand, if v is a covariant vector ("one form"), such as the gradient of a function V(r), then its radial component says how much V changes from one circle to the next, which is now clearly half as much as it was. And in fact, $v_{r'} = (\partial r/\partial r')\, v_r = \tfrac{1}{2} v_r$.
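This scaling can be checked symbolically with SymPy. A minimal sketch, where the symbol $v_r$ stands in for an arbitrary contravariant component and $V(r)$ for an arbitrary scalar field:

```python
import sympy as sp

r = sp.symbols('r', positive=True)
rp = 2 * r                      # the new coordinate r' = 2r
drp_dr = sp.diff(rp, r)         # dr'/dr = 2
dr_drp = 1 / drp_dr             # dr/dr' = 1/2

# contravariant component: transforms with dr'/dr
v_r = sp.symbols('v_r')
v_rp = drp_dr * v_r
print(v_rp)                     # 2*v_r

# covariant component (the gradient of V): transforms with dr/dr'
V = sp.Function('V')(r)
w_r = sp.diff(V, r)
w_rp = dr_drp * w_r
print(sp.simplify(w_rp / w_r))  # 1/2
```

The arrow-like component doubles, the gradient-like component halves, exactly as described above.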

3. Jul 7, 2012

HallsofIvy

Staff Emeritus
Roughly speaking, the set of all one-forms is the "dual space" to the set of vectors. Given any vector space, its dual space is the set of all linear functions whose domain is the set of vectors and whose range is the set of real numbers. That is, a one-form maps every vector to a number: writing the one-form as $\omega$ and the vector as $v$, that number is simply $\omega(v)$. (One-forms can also be integrated along curves in the manifold, which is where expressions like $\int_C \omega$ come from.)

Some authors refer to the vectors as being in the "tangent space" at a point on the manifold and one-forms as being in the "cotangent space". Strictly speaking, when a one-form field acts on a vector field over the entire manifold, the result is a function on the manifold, so we are really talking about vector fields rather than individual vectors. But if we have a metric or "connection" defined on the manifold, we can associate a vector field over the entire manifold with a vector at a specific point in much the same way we talk about "moving" vectors in classical physics.

You might get a better response if this were posted in "Mathematics- Topology and Differential Geometry".

4. Jul 7, 2012

lavinia

The gradient is a vector, but it determines a 1-form when you take its inner product with other vectors.

The 1-form is $v \mapsto \langle \operatorname{grad} f, v \rangle$.

In regular calculus, the inner product is assumed to be the dot product. But on a manifold this is replaced with a Riemannian metric, i.e. an inner product on each tangent space. A different Riemannian metric will give you a different gradient.

Last edited: Jul 7, 2012
5. Jul 8, 2012

Rasalhague

Given a scalar field $f$, some people use the word gradient for the cotangent vector field $\mathrm{d}f$ which tells you how fast the scalar field is changing in each direction. It specifies a cotangent vector $\mathrm{d}f |_p$ at each point $p$ which maps a tangent vector $v$ at that point to a number $\mathrm{d}f |_p (v)$, namely the rate of change of f in the direction of that tangent vector, multiplied by the magnitude of the tangent vector: $||v|| \; \mathrm{d}f |_p (u)$, where $||v||$ is the magnitude of $v$, and $u$ a unit tangent vector in the same direction as $v$.

Given a metric tensor field $g$, which specifies a metric tensor $g |_p$ (i.e. an inner product) at each point $p$, we have a mapping $b$ between tangent and cotangent vectors: $b(v) := g |_p (v,\cdot )$, where the dot indicates a "slot waiting to be filled." Other people, including most introductory texts and courses on multivariable calculus and vector analysis, use the word gradient to refer to the tangent vector field $V$ whose value $V|_p$ at each point $p$ is such that $\mathrm{d}f|_p = b(V|_p) = g|_p (V|_p,\cdot )$.

When working with a fixed coordinate system, the two definitions are as good as each other, but in switching coordinate systems and their corresponding coordinate bases, the component functions of the vector field $V$ must be derived afresh from the cotangent vector field $\mathrm{d}f$, whose components transform in the usual "covariant" way. (I.e. the components of $V$ with respect to one coordinate system can't necessarily be derived from those of the other by the usual "contravariant" transformation.) This makes $\mathrm{d}f$ the more natural and fundamental entity representing the rate of change of the scalar field $f$.
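The index-raising map $b^{-1}$ can be made concrete at a single point with NumPy. In this sketch the component values of $\mathrm{d}f|_p$ and the second metric are made-up examples; given the metric $g|_p$ as a matrix, the components of $V|_p$ solve $g|_p V = \mathrm{d}f|_p$:

```python
import numpy as np

# hypothetical components of df at a point p, in some coordinate basis
df = np.array([3.0, 1.0])

# Euclidean metric: V ends up with the same components as df
g_euclid = np.eye(2)
V_euclid = np.linalg.solve(g_euclid, df)   # V^i = (g^{-1})^{ij} (df)_j

# a different metric at p, e.g. the polar-coordinate metric diag(1, r^2) at r = 2
g_polar = np.diag([1.0, 4.0])
V_polar = np.linalg.solve(g_polar, df)

print(V_euclid)   # [3. 1.]
print(V_polar)    # [3.   0.25]
```

The components of $\mathrm{d}f$ never changed; only their translation into a tangent vector depends on $g$, which is the sense in which $\mathrm{d}f$ is the more fundamental object.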

6. Jul 8, 2012

Hurkyl

Staff Emeritus
By applying the metric in various ways, you can convert a geometric object into other objects.

When you learned multivariable calculus, you only knew about scalars and vectors, and the course did not intend to introduce tensors, so everything got converted into a scalar or a vector. (3 is the largest dimension in which this trick can be pulled)

So, when you learned about the gradient -- the function that tells you how fast a function varies in various directions -- you weren't taught the gradient directly. Instead, you were taught about the direction of steepest ascent and the dot-product formula that relates it to directional derivatives.

Notions such as "axial vectors", "pseudoscalars" or "densities" are other examples of this. An axial vector is really a bivector -- but one can delay learning about tensors by using the metric to convert it into a vector, so long as one remembers it's not really a vector but an "axial vector".

This works pretty well to some extent -- if you never bother with coordinate transformations, or stick only to orthogonal ones, you might not even notice that anything is weird. The first clue that something's up comes when you think about reflections, but everything gets really screwy when you start considering more general transformations; e.g. the notion of density is notoriously tricky when you rescale things.

It turns out the notion of covector is rather trivial to treat, though. I even figured it out when I took multivariable calculus -- long before I had any inkling of notions like "tensor" or "dual space". Geometric vectors are "column vectors": 3x1 matrices. The gradient, however, is best thought of as a "row vector": a 1x3 matrix.

It turns out this is really equivalent to the distinction between vector and 1-forms: relative to a basis, the coordinate representation of a vector really is a column vector, and the coordinate representation of a 1-form really is a row vector.
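The row/column picture can be verified numerically. In this sketch the change-of-basis matrix A is an arbitrary invertible example: column (vector) components transform with $A^{-1}$, row (1-form) components with $A$, and the pairing of the two is basis-independent:

```python
import numpy as np

# columns of A are the new basis vectors written in the old basis (arbitrary example)
A = np.array([[2.0, 1.0],
              [0.0, 1.0]])

v_old = np.array([[3.0], [4.0]])   # a vector: 2x1 column
w_old = np.array([[5.0, 6.0]])     # a 1-form: 1x2 row

v_new = np.linalg.inv(A) @ v_old   # vector components transform with A^{-1}
w_new = w_old @ A                  # 1-form components transform with A

# the number w(v) does not depend on the basis
print(float(w_new @ v_new), float(w_old @ v_old))   # 39.0 39.0
```

The opposite transformation laws are exactly what make $w A A^{-1} v = w v$ come out invariant.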

Now, to make things even more annoying, some people argue that because there are already two standard notations for the 1-form version -- $\nabla f$ and $df$ -- one should reserve $\mathop{\mathrm{grad}} f$ for the vector version.

7. Jul 8, 2012

lavinia

The differential of a function, df, is a 1-form, but it is not the gradient of the function. The gradient is a vector. The equation relating the two is

$df(v) = \langle \operatorname{grad} f, v \rangle$, where $\langle , \rangle$ is the inner product. But if you change the inner product, the gradient also changes; it is a different vector. But df is always the same.
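A minimal numeric sketch of this relation (the scalar field, the point, and the second metric are made-up examples): $df(v)$ is computed with no metric at all, grad f is solved from the metric, and $\langle \operatorname{grad} f, v\rangle$ comes out equal to $df(v)$ for every choice of inner product:

```python
import numpy as np

# f(x, y) = x^2 + y at the point p = (1, 2); df has components (2x, 1) = (2, 1)
df = np.array([2.0, 1.0])
v = np.array([0.5, -1.0])                    # an arbitrary tangent vector at p

df_v = df @ v                                # df(v): metric-free

for g in (np.eye(2), np.diag([1.0, 3.0])):   # two different inner products
    grad = np.linalg.solve(g, df)            # the gradient w.r.t. this metric
    print(grad, grad @ g @ v)                # grad changes; <grad f, v> = df(v) does not
```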

At each point of a manifold, a 1 form is a linear map from the tangent space at that point into the base field - although this idea can be generalized to linear maps into vector bundles.

The 1-form is smooth if the value that it takes on an arbitrary smooth vector field is a smooth function.

If there is a Riemannian metric on the manifold, then each vector field, v, determines a 1-form by the rule $w \mapsto \langle v, w \rangle$. Conversely, each 1-form may be expressed as an inner product with a vector field.

When one has a basis for a vector space, one can immediately define the dual basis for the linear maps of the vector space into the base field. These are the so-called co-vectors, I think.
So the choice of a basis defines an isomorphic correspondence between the vector space and its dual space. An inner product also defines an isomorphic correspondence; with an inner product, no basis is required. Without a choice of basis or an inner product there is no natural way to define this isomorphism. A choice of basis for the vector fields in a region, e.g. with a coordinate system, similarly gives you a dual basis of 1-forms.

Vector fields and 1 forms transform in opposite ways - one is called covariant, the other contravariant. The transformations can be defined without the use of a basis as follows.

Given a function f from one region to another, say U into V (e.g. a change of coordinates), df(u) is a vector in V when u is a vector in U. If one thinks of vectors as operators on functions, then df(u).g = u.(g$\circ$f).

Given a 1-form $\omega$ on V (not on U), f determines a 1-form on U by the transformation $(df^{*}\omega)(v) = \omega(df(v))$.

In terms of bases, these transformations appear as the usual covariant and contravariant transformation rules that you find in books on the theory of relativity.

In mathematics there is a general concept of covariant and contravariant functors. A functor associates objects in one category with objects in another, and maps in the first category with maps in the second. Depending on the way maps are associated, the functor is called covariant or contravariant. Vectors and 1-forms are an example of this general idea.

To each smooth manifold (the objects of the first category) associate its tangent bundle (the objects of the second category), and to each smooth map between manifolds associate its differential, which is a map between the tangent bundles of the two manifolds. This is a covariant functor.

If, on the other hand, one associates the cotangent bundle to each manifold and the pull-back map, $df^{*}$, to each smooth map, one gets a contravariant functor.

Last edited: Jul 8, 2012
8. Jul 11, 2012

dydxforsn

Well, for the gradient to be a one-form it has to obey the definition of a one-form, which is simply a rule that one-forms must obey under a coordinate transformation: each component of the one-form in the new coordinate system is a weighted combination of all of the components in the previous coordinate system. Specifically for the gradient, we need to prove the following equation true:
$$(\nabla f)_{i'} = \sum_{i}{\frac{{\partial}{x_{i}}}{{\partial}{x_{i'}}} (\nabla f)_{i}}$$ or
$$\frac{{\partial}{f}}{{\partial}{x_{i'}}} = \sum_{i}{\frac{{\partial}{x_{i}}}{{\partial}{x_{i'}}} \frac{{\partial}{f}}{{\partial}{x_{i}}}}$$
Here the primed variables are in the new coordinate system and the un-primed variables are in the coordinate system we've transformed from.

We prove this equation is true by simply noticing that the left side of the equation can be expanded using the chain rule, seeing as how each of the original variables $x_{i}$ is a function of the new variables $x_{i'}$:
$$\sum_{j}{\frac{{\partial}{f}}{{\partial}{x_{j}}} \frac{{\partial}{x_{j}}}{{\partial}{x_{i'}}}} = \sum_{i}{\frac{{\partial}{x_{i}}}{{\partial}{x_{i'}}} \frac{{\partial}{f}}{{\partial}{x_{i}}}}$$
It doesn't matter what dummy index we sum over, so we'll just make them both over 'i', and swapping the order of multiplication of one side (it obviously doesn't matter), the two sides are equal:
$$\sum_{i}{\frac{{\partial}{x_{i}}}{{\partial}{x_{i'}}} \frac{{\partial}{f}}{{\partial}{x_{i}}}} = \sum_{i}{\frac{{\partial}{x_{i}}}{{\partial}{x_{i'}}} \frac{{\partial}{f}}{{\partial}{x_{i}}}}$$
So it's been proven that the components of the gradient vector transform covariantly, and thus it is a one form.
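The same chain-rule check can be run symbolically for a concrete coordinate change. This SymPy sketch uses the Cartesian-to-polar change of coordinates and a made-up scalar field $f = x^2 + xy$:

```python
import sympy as sp

X, Y, r, th = sp.symbols('X Y r theta', positive=True)
f = X**2 + X*Y                               # a concrete scalar field in Cartesian coords
sub = {X: r*sp.cos(th), Y: r*sp.sin(th)}     # old coordinates as functions of the new

for xp in (r, th):                           # each new (primed) coordinate x_{i'}
    lhs = sp.diff(f.subs(sub), xp)           # df/dx_{i'}, computed directly
    rhs = sum(sp.diff(sub[old], xp) * sp.diff(f, old).subs(sub) for old in (X, Y))
    print(sp.simplify(lhs - rhs))            # 0 for both coordinates
```

The direct derivative and the weighted combination $\sum_i (\partial x_i/\partial x_{i'})\, \partial f/\partial x_i$ agree identically, which is the covariant transformation law in action.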

As a quick note one forms are often called "dual vectors". All dual vectors have to transform between coordinate systems via a covariant transformation (well, their components do.) Thus we started the proof from that requirement, and it is THE definition of the types of vectors we call "one forms".

So vectors in your calculus class like the gradient could easily have been one-forms, because they're just special types of vectors. Note that dual vectors are indistinguishable from their counterparts (vectors whose components transform "contravariantly") in Cartesian coordinates, and so the distinction is usually only brought up when one discusses generalized curvilinear coordinates.

Last edited: Jul 11, 2012
9. Jul 11, 2012

lavinia

The gradient is not a 1-form. It is a vector. The gradient of a function, f, is the vector dual to the 1-form df, the differential of the function. This vector is only defined when there is a Riemannian metric, i.e. an inner product on each tangent space.

10. Jul 11, 2012

Muphrid

Would it not be correct to say that the gradient of a scalar field yields a cotangent vector field, or does the cotangent space only exist under, say, the more specialized case of relativity and not in general?

11. Jul 11, 2012

lavinia

The gradient is a tangent vector. The differential is a cotangent vector. There may be some terminology confusion here, but in mathematics gradients are always vectors, never cotangent vectors.

There is a profound point here. The differential of a function can be defined without an inner product. One does not need a metric to take a derivative. But without a metric you only have a cotangent vector. If you want to get a vector you need a metric to find the dual of the differential. This dual to the differential is the gradient.

12. Jul 11, 2012

Muphrid

Fair enough; it was precisely that subtle distinction that I suspected was being invoked and hoped to be made clear. I suspect the physics literature GarageDweller will be browsing may not be so observant of the distinction, but it seems a reasonable one to maintain for the purposes of clarity.

Overall, then, GarageDweller, you should take this away from the discussion: typical vector calculus courses use the metric of flat space to convert differentials (which are cotangent vector fields, or higher-rank analogues of such) into the tangent vector fields that the course had already introduced you to. In doing so, these courses avoid a lot of extra complications that aren't really relevant to the flat-space case--these conversions are largely trivial. Nevertheless, the differential of a scalar field is a field in the cotangent space, and it only simplifies to the gradient and such through relatively trivial manipulations when you're talking about flat space.

13. Jul 11, 2012

lavinia

Well said

14. Jul 11, 2012

dydxforsn

Hmmm, that's interesting. I think we might have an issue of semantics here because I have a source that says otherwise:

"As in the traditional approach, vectors (which utilize contravariant components) expand using original basis vectors, while one-forms (which utilize covariant components) expand using basis one-forms, which are equivalent to dual basis vectors in the traditional approach."

Fleisch, Daniel. A Student's Guide to Vectors and Tensors. New York: Cambridge University Press, 2012. 156-157. Print.

Based upon this definition of one-forms I did indeed prove that the gradient vector is a one-form. I come from a pure physics background and have never taken a formal course in Differential Geometry, and you clearly have much more mathematical sophistication than I, could this be a mathematics versus physics terminology dispute or am I doing something wrong? I would authentically like to get to the bottom of this, the source could obviously be wrong, but wolfram seems to agree as well.

15. Jul 11, 2012

lavinia

A good application of these concepts is the definition of the Laplacian of a function.
The Laplacian is the divergence of the gradient; since the gradient depends on the metric, the Laplacian changes if the shape of the surface changes.

16. Jul 11, 2012

lavinia

Not sure. What I said is definitely right, though. But I will get out a physics book, see how these transformations are defined, and then get back to you.

But again, the important point is that the differential of a function is defined without a metric, while the gradient requires a metric, because the metric allows you to express the 1-form as an inner product with a fixed vector. This is the key idea: one needs a metric, the other does not. This means that the gradient changes if the metric changes.

BTW: In vector calculus the differential of a function applied to a vector is described as the dot product of the gradient with the vector. The dot product is just the inner product on flat Euclidean space. Without the dot product you cannot express the value of the differential in this way. Instead you must compute it by taking the directional derivative of the function with respect to the vector.
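A small numerical check of this last point (the scalar field, the point, and the direction are made-up examples): the directional derivative, computed as a plain difference quotient with no metric in sight, agrees with the dot product grad f · v in flat Euclidean space:

```python
import numpy as np

def f(p):
    x, y = p
    return x**2 + np.sin(y)

p = np.array([1.0, 0.5])
v = np.array([2.0, -1.0])
grad = np.array([2*p[0], np.cos(p[1])])    # components of df at p (= Euclidean gradient)

h = 1e-6
directional = (f(p + h*v) - f(p)) / h      # df(v) as a directional derivative
print(directional, grad @ v)               # the two agree to roughly 1e-5
```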

Last edited: Jul 11, 2012