# Visualization of cotangent vector?

1. Oct 28, 2004

### arivero

I'd like to hear about how people visualize a cotangent vector, i.e. a differential form. I like to use the surfaces f(x) = const.

2. Oct 28, 2004

### robphy

3. Oct 28, 2004

### quetzalcoatl9

another good way is the physical example that Wald gives in his book "General Relativity":

imagine that you have an antenna (i.e. a vector) in a magnetic field. you can use this antenna to measure the strength of the magnetic field in a given direction: you point it one way, and you get a reading (a real number). so we have a mapping from a vector to a real number (i.e. a linear functional).

the problem seems hopeless, since there are an unlimited number of possible directions that this antenna can be pointed, and yet we want to represent the entire field. but if i told you that the magnetic field varies in a linear fashion, then all we would have to do is pick 3 linearly independent directions, point the antenna in those directions, and get those 3 readings.

those 3 readings form the components of a covector. since any vector is a linear combination of its basis, by knowing what values the basis vectors map into we can evaluate a linear functional (covector) on ANY vector. therefore our covector represents the magnetic field in its entirety (well, at least at the fixed point where we measure it), whereas our vectors do not.

i am not a physicist, but it is my understanding that this is why maxwell's equations can be expressed elegantly in terms of covectors/forms.

4. Oct 29, 2004

### arivero

Another way I use, generalizing the parallel-planes idea to a full covector field, is to imagine a set of level surfaces. Generically, these are (n-1)-dimensional surfaces.

5. Oct 29, 2004

### shankarvn

Does this have anything to do with the contravariant and covariant representations of a vector? Can you explain the antenna example a bit more? That seems to be a nice way to understand it. How are these differential forms related to the concept of an infinitesimal, i.e. the way we understand things like dx, dy, dz? Since we integrate differential forms, I believe there has to be a relation to the way we understand dx, dy, etc. as something very close to zero but not zero. Can someone explain?
Shankar

6. Oct 30, 2004

### quetzalcoatl9

Shankar,

As far as what you are calling infinitesimals goes, there is an area called Non-standard Analysis which treats these things much as you say (as elements of the hyperreals: quantities greater than zero but smaller than every positive real).

But it sounds like what you are getting at is that you sense some deeper connection between calculus and differential forms. and indeed there is. the deeper reason for it is the exterior derivative and the fact that it commutes with something called the pullback map.

this is all summarized in a pivotal equation of differential forms, the Generalized Stokes' Theorem:

$$\int _{\partial M} \alpha = \int _{M} d\alpha$$

this equation encapsulates everything about diff. forms, exterior differentiation, vector analysis, the fundamental theorem of calculus, and more. what it says is that the integral of the form over the boundary of a manifold equals the integral of the exterior derivative of the form over the manifold itself.

for example, let $$d\alpha = 2x dx$$ (so $$\alpha = x^2$$) and let the integration be over a 1-dimensional manifold, defined to be an interval of the real line. this interval has endpoints a and b, so $$M = [a,b] \subseteq R$$. this means that the boundary of the manifold $$M$$ is: $$\partial M = \{ a, b\}$$

then:

$$\int _{M} d\alpha = \int _{[a,b]} 2x dx$$

and then applying the GST:

$$\int _{M} d\alpha = \int _{\partial M} \alpha = \int _{\{a, b\}} x^2 = x^2 \Big| _a ^b = b^2 - a^2$$

which is the fundamental theorem of calculus. notice that the boundary integral has no $$dx$$ in it. this is because integrating a function (a 0-form) over the boundary is essentially just taking the difference of the function's values at those two points. this is why in calculus we always "put a $$dx$$ at the end," even though this was probably never explained when we learned calculus. however, remember that in calculus the derivative was defined as a limit, so there is no need to go through defining covectors and such in order to do calculus.
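as a sanity check, the example can be verified numerically: a midpoint-rule sum approximates $$\int_M 2x \, dx$$, which should match evaluating $$x^2$$ on the boundary. (a minimal sketch; the interval endpoints below are made up for illustration.)

```python
# numerical check of the example: integrating d(alpha) = 2x dx over
# M = [a, b] matches evaluating alpha = x^2 on the boundary {a, b}
a, b = 1.0, 3.0                      # arbitrary interval
n = 100_000
dx = (b - a) / n

# left-hand side: midpoint-rule approximation of int_M 2x dx
lhs = sum(2 * (a + (i + 0.5) * dx) * dx for i in range(n))

# right-hand side: alpha = x^2 evaluated on the oriented boundary
rhs = b**2 - a**2

print(lhs, rhs)                      # both 8.0 (up to float error)
```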

the reason for this has to do with the fact that, given a mapping $$k$$ between manifolds, the pullback commutes with the exterior derivative: $$k^{*}(d\alpha) = d(k^{*}(\alpha))$$

there is a book on differential forms by Weintraub which explains this well and is fairly easy to follow.

7. Oct 30, 2004

### quetzalcoatl9

for a more concrete example from the antenna scenario, consider that you point your antenna in 3 different (linearly independent) directions with respect to a particular coordinate system and get 3 different readings. let these 3 linearly independent vectors form a vector basis $$\{ \partial_1, \partial_2, \partial_3 \}$$ and let's say, for example, that the mapping goes like this for our covector $$\alpha : TM_p \rightarrow R$$

$$\alpha(\vec{\partial _1}) = 3$$
$$\alpha(\vec{\partial _2}) = 4$$
$$\alpha(\vec{\partial _3}) = 5$$

so these are the readings our instrument gives when pointed in one of those 3 directions.

remember that:

$$dx^i(\partial _j) = \delta ^i _j = \left\{\begin{array}{cc}0,&\mbox{ if }i \neq j\\1, & \mbox{ if }i = j\end{array}\right.$$

so our covector is:

$$\alpha = 3dx^1 + 4 dx^2 + 5 dx^3$$

and now let's choose an arbitrary vector in $$TM_p$$, for example $$\vec {v} = 7\partial_1 + 8\partial_2 + 9\partial_3$$.

then by linearity:

$$\alpha(\vec{v}) = \alpha(7\partial_1 + 8\partial_2 + 9\partial_3) = \alpha(7\partial_1) + \alpha(8\partial_2) + \alpha(9\partial_3) = 7(\alpha(\partial_1)) + 8(\alpha(\partial_2)) + 9(\alpha(\partial_3)) = (7)(3) + (8)(4) + (9)(5) = 98$$
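the computation above collapses to a dot product between the two component arrays, which is easy to check by machine (a minimal sketch, just restating the numbers from the example):

```python
import numpy as np

# components of alpha in the dual basis {dx^1, dx^2, dx^3}: the three readings
alpha = np.array([3.0, 4.0, 5.0])
# components of v in the basis {d_1, d_2, d_3}
v = np.array([7.0, 8.0, 9.0])

# by linearity, alpha(v) reduces to the pairing alpha_j v^j
print(alpha @ v)  # 98.0
```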

you see the idea here? you can visualize a covector as simply being a linear mapping from a vector to a real number. the key is that the covector space has its own basis $$dx^i$$ dual to $$\partial_j$$

hope this helps

8. Oct 30, 2004

### shankarvn

Hi quetzalcoatl9

Thanks a lot, that really helped. I just have one last question: in this antenna example, are we not assuming that the field varies linearly? Isn't that a sweeping assumption? I don't know much about EM theory and Maxwell's equations, but I was just wondering whether we know a priori that the field varies linearly.

Thanks again
Shankar

9. Oct 30, 2004

### shankarvn

Hi quetzalcoatl9
This might not make sense but I still wanted to ask. Can we say that at a point $$p$$ in the manifold, the tangent vector in $$TM_p$$ has a covariant representation $$dx_1$$ and a contravariant representation $$dx^1$$?
I was taught that we can think of two representations (sets of components) of a vector: with respect to a basis, and with respect to its dual (reciprocal) basis. So are we looking at the tangent vector in terms of its components with respect to its local basis (call it covariant) and with respect to its reciprocal/dual basis (call it its contravariant components)? This question comes from the fact that I do not understand the concept of a differential map between tangent spaces. They write out the differential map (the Jacobian) between tangent spaces $$R^M_{p}\rightarrow R^N_{f(p)}$$, and that happens to be the map between the differential forms (cotangent vectors), from what we know as "the way differentials transform under a coordinate transformation". This kind of forces me to think that cotangent vectors and tangent vectors are representations of the same thing. If you feel I am talking absolute nonsense please ignore this question (but do tell me you are ignoring it).

Thanks
Shankar

10. Oct 30, 2004

### quetzalcoatl9

i made the assumption that the field varies linearly because i have been told that it does.

i don't know enough about physics to know how someone could figure that out, but it was just a convenient example of how such a thing can be modelled.

if the field were not linear, then none of this would apply, since we require (by definition) that our covectors be linear functionals:

$$\alpha(\vec{v}) = \alpha(v^i \partial_i) = v^i \alpha(\partial_i)$$

and

$$\alpha(\vec{v} + \vec{w}) = \alpha(\vec{v}) + \alpha(\vec{w})$$

11. Oct 30, 2004

### quetzalcoatl9

Shankar,

let me try to help clarify some stuff:

a) $$TM_p$$ is the tangent space at point $$p$$ of the manifold $$M$$. so some tangent vector $$\vec{v} = v^i \partial_i$$ is an element of this (linear) vector space $$TM_p$$. we may also write this as $$\vec{v_p}$$, or some people use capitals like $$X_p$$, to be clear that this tangent vector is only defined in the tangent space based at that point.

b) we consider tangent vectors to be linear operators. so for some function $$f$$ defined in a coordinate chart of $$M$$, then $$\vec{v}(f) = v^i \frac {\partial}{\partial x^i} f$$
note that our tangent vector takes a function and maps it to a real number. this all makes perfect sense because a tangent vector is just the directional derivative of some function. however, given some tangent vector (pick one), our choice of function through that point $$p$$ really doesn't matter, since new components can simply be chosen such that we still have the same vector. this gives us our coordinate-independent definition: instead we just write
$$\vec{v_p} = v^i \frac {\partial}{\partial x^i} = v^i \partial_i$$

how is this coord. independent, you ask? let $$r^i$$ be a coordinate system and let $$s^j = s^j(r^i)$$ be another. in the $$r$$ coordinates, the vector $$\vec{v}$$ is:

$$\vec{v} = v^i_r \frac {\partial}{\partial r^i}$$

in terms of the other coordinates, the same vector is:

$$\vec{v} = v^j_s \frac {\partial}{\partial s^j} = \left( \frac {\partial s^j}{\partial r^i} v^i_r \right) \left( \frac {\partial r^k}{\partial s^j} \frac {\partial}{\partial r^k} \right) = v^i_r \, \delta ^k _i \, \frac {\partial}{\partial r^k} = v^i_r \frac {\partial}{\partial r^i}$$ right back where we started
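this invariance can be sketched numerically for a linear coordinate change (the Jacobian below is made up for illustration, not from the thread):

```python
import numpy as np

# hypothetical Jacobian of a linear coordinate change s = s(r):
# J[j, i] = ds^j / dr^i  (constant, since the change is linear)
J = np.array([[2.0, 1.0],
              [0.0, 3.0]])
Jinv = np.linalg.inv(J)          # Jinv[i, j] = dr^i / ds^j

v_r = np.array([5.0, 7.0])       # components v^i in the r coordinates

# the components transform with ds^j/dr^i ...
v_s = J @ v_r
# ... while the basis vectors transform the opposite way,
# d/ds^j = (dr^i/ds^j) d/dr^i, so re-expanding in the r basis
# recovers the original components: the vector itself never changed
v_back = Jinv @ v_s
print(v_back)                    # close to [5., 7.]
```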

i dont like the whole "contravariant/covariant" terminology because it is so confusing. when i think "contravariant" i think of the components of a tangent vector (or the basis of a covector) and when i think "covariant" i think of the basis of a tangent vector and the components of a covector. it sounds to me that you are using them in the exact opposite way, which is something that physicists do. i would just rather not use those words.

c) the differential map is defined as follows:

$$df(\vec{v}) = \vec{v}(f)$$

so we define some linear functional $$df: TM_p \rightarrow R$$
all this means is that there is some linear functional we call $$df$$ which, when acting on the vector $$\vec{v}$$, gives us the same value as $$\vec{v}$$ acting on $$f$$

so to answer your question, yes, there is a covector representation of a tangent vector. the differential map takes our vector basis $$\{ \partial_i \}$$ to a covector basis $$\{ dx^j \}$$

$$dx^j(\vec{v}) = \vec{v}(x^j) = v^i \frac {\partial}{\partial x^i} x^j = v^i \delta ^j _i = v^j$$

so our $$dx^j$$'s strip off the $$j$$th component of our vectors. the antenna example shows this clearly.

the components of our covectors are the linear functionals acting on the vector, so:

$$\alpha = \alpha_j dx^j = \alpha(\partial_j) dx^j$$
$$\alpha(\vec{v}) = (\alpha_j dx^j)(\vec{v}) = (\alpha_j dx^j)(v^i \partial_i) = \alpha_j(dx^j(v^i \partial_i)) = \alpha_j v^i dx^j(\partial_i) = \alpha_j v^j = \alpha(\partial_j) v^j$$

in the antenna example, the components of the covector $$\alpha$$ were $$\alpha_1 = \alpha(\partial_1) = 3, \alpha_2 = \alpha(\partial_2) = 4, \alpha_3 = \alpha(\partial_3) = 5$$

to relate components in one to components in the other, we need the metric tensor (inner product):

$$v_j = v^i g_{ij}$$
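with a hypothetical metric at a point (the numbers below are made up for illustration), lowering an index is just a matrix-vector product:

```python
import numpy as np

# hypothetical metric g_ij at a point: symmetric and positive definite
g = np.array([[1.0, 0.5],
              [0.5, 2.0]])

v_up = np.array([3.0, 4.0])   # contravariant components v^i
v_down = g @ v_up             # covariant components v_j = g_ij v^i

print(v_down)                 # components [5.0, 9.5]
```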