How do I see a dual basis?

  Aug 12, 2017 #1

    JTC

    Please help.

    I do understand the representation of a vector as ##v^i\,\partial_{x^i}##.
    I also understand the representation of a dual vector (one-form) as ##v_i\,dx^i##.

    So far, so good.

    I do understand that when the basis transforms covariantly, the components transform contravariantly, and vice versa.

    Then I study this thing called the gradient.
    If I work it out in index notation, I get
    $$\nabla f = \frac{\partial f}{\partial x^i}\, dx^i$$
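
    For concreteness, with a simple example function of my own choosing, say ##f(x,y) = x^2 y## in Cartesian coordinates, this would read
    $$\nabla f = 2xy\,dx + x^2\,dy .$$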

    Now comes my trouble.

    I can "envision" in my mind, that ∂xi are tangent to the coordinate curves and are, essentially, directions.

    But I cannot see the "directions" of the ##dx^i##.
    I cannot see them as "basis vectors" as easily as I can see the ##\partial_{x^i}##.

    I do understand that ##dx^i(\partial_{x^j}) = \delta^i_j##.
    And I understand how, in 3D space say, the direction of ##dx^1## is perpendicular to the plane formed by ##\partial_{x^2}## and ##\partial_{x^3}##.

    But I cannot see the "directions" of the dual basis vectors as easily as I can see the basis of the original vectors (as tangents to the coordinate curves).

    I cannot make the leap and replace the ##dx^i## by ##e^1, e^2, e^3## as easily as I can replace the ##\partial_{x^i}## with ##e_1, e_2, e_3##.

    I can begin with the definition of how to construct a dual basis... But I cannot easily make the leap to seeing the basis in the form ##dx^i##. I just don't see "directions" here.
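
    Just to make explicit the algebra I mean, here is a throwaway numerical sketch (a made-up non-orthogonal basis, nothing more): if I put the basis vectors into the columns of a matrix, the dual basis covectors are the rows of its inverse, and pairing them produces ##\delta^i_j##.

    ```python
    import numpy as np

    # Made-up non-orthogonal basis of R^3: the basis vectors are the COLUMNS of E.
    E = np.array([[1.0, 1.0, 0.0],
                  [0.0, 1.0, 1.0],
                  [0.0, 0.0, 1.0]])

    # The dual basis covectors are the ROWS of the inverse matrix.
    E_dual = np.linalg.inv(E)

    # Entry (i, j) of this product is "dual covector i applied to basis vector j".
    print(np.allclose(E_dual @ E, np.eye(3)))   # True: the Kronecker delta
    ```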

    Can someone provide some insight?

    Also, given a metric, I can convert the basis of the gradient to a covariant form and the components to a contravariant one. So why is the gradient called contravariant, when it can go either way with a metric?
     
  Aug 12, 2017 #2

    JTC

    In other words: whenever I see a representation of a gradient, I always see a basis that looks just like a basis of the original vector space (not the dual space).

    And I KNOW the components transform contravariantly--but what are the directions of the bases?

    I can only distinguish these animals algebraically (the ##dx^i##), not geometrically (as directions).

    I am not sure I am asking this correctly.
     
    Last edited: Aug 12, 2017
  Aug 12, 2017 #3

    fresh_42

    Staff: Mentor

    I think the simplest way to think about it is as follows. A derivative, as the result of differentiation, is a linear approximation in some direction. This means we have ##f(x_0+v) = f(x_0) + L_{x_0}(v) + r(v)## with the directional derivative ##L## (in the direction of ##v##) and the error ##r##. So you can consider the vector ##L(v)## or the linear function ##L\, : \, v \mapsto L(v)\,.## They are two sides of the same coin. An easy example would be ##(7+v)^2= 7^2 + \left(\left.\frac{d}{dx}\right|_{x=7}x^2\right)\cdot v + v^2 = 7^2 + 2 \cdot x|_{x=7} \cdot v + v^2##. If you ask a high school student for the differential of ##x^2## you'll get the answer ##2x##. But here the ##2x## represents the slope of the tangent, the direction if you like, as well as the linear map ##v \mapsto 2x\cdot v\,.##
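
    As a quick numerical sanity check of that expansion (just a throwaway script, nothing deep): the linear part ##49 + 14v## approximates ##(7+v)^2##, and the leftover error is the quadratic remainder ##r(v)=v^2##.

    ```python
    # Check (7 + v)^2 = 49 + 14*v + v^2 for a small displacement v.
    x0 = 7.0
    v = 0.01

    exact = (x0 + v) ** 2
    linear_approx = x0**2 + 2 * x0 * v   # f(x0) + L_{x0}(v), with L_{x0}(v) = 2*x0*v
    error = exact - linear_approx

    print(linear_approx)   # 49.14
    print(error)           # ~1e-4, i.e. v**2, the remainder r(v)
    ```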

    Not sure whether this helps here, but I've tried to shed some light on it with this insight article:
    https://www.physicsforums.com/insights/the-pantheon-of-derivatives-i/

    And here's "point of view of a derivative put to the extreme": The derivative
    $$
    D_{x_0}L_g(v)= \left.\frac{d}{d\,x}\right|_{x=x_0}\,L_g(x).v = J_{x_0}(L_g)(v)=J(L_g)(x_0;v)
    $$
    can be viewed as
    1. first derivative ##L'_g : x \longmapsto \alpha(x)##
    2. differential ##dL_g = \alpha_x \cdot d x##
    3. linear approximation of ##L_g## by ##L_g(x_0+\varepsilon)=L_g(x_0)+J_{x_0}(L_g)\cdot \varepsilon +O(\varepsilon^2) ##
    4. linear mapping (Jacobi matrix) ##J_{x}(L_g) : v \longmapsto \alpha_{x} \cdot v##
    5. vector (tangent) bundle ##(p,\alpha_{p}\;d x) \in (D\times \mathbb{R},\mathbb{R},\pi)##
    6. ##1-##form (Pfaffian form) ##\omega_{p} : v \longmapsto \langle \alpha_{p} , v \rangle ##
    7. cotangent bundle ##(p,\omega_p) \in (D,T^*D,\pi^*)##
    8. section of ##(D\times \mathbb{R},\mathbb{R},\pi)\, : \,\sigma \in \Gamma(D,TD)=\Gamma(D) : p \longmapsto \alpha_{p}##
    9. If ##f,g : D \mapsto \mathbb{R}## are smooth functions, then $$D_xL_y (f\cdot g) = \alpha_x (f\cdot g)' = \alpha_x (f'\cdot g + f \cdot g') = D_xL_y(f)\cdot g + f \cdot D_xL_y(g)$$ and ##D_xL_y## is a derivation on ##C^\infty(\mathbb{R})##.
    10. ##L_x^*(\alpha_y)=\alpha_{xy}## is the pullback section of ##\sigma: p \longmapsto \alpha_p## by ##L_x##.
    And all of these, only to describe ##(x^2)'=2x\,.##
     
  Aug 12, 2017 #4

    JTC

    I am afraid this has made things worse for me. My question is really simple.

    I have a vector. I have a coordinate system. A vector is an operator on functions: I can see how a function changes in each coordinate direction, and I can interpret those directional operators as a basis. I can see that basis.

    I now have the gradient, and I can formulate it. But I cannot see the "directions" of the gradient. I cannot see the ##dx^i## as directions.
     
  Aug 12, 2017 #5

    fresh_42

    Staff: Mentor

    A matrix consists of a bunch of vectors. Can you "see" them if you look at it? And if so, what do you see? Vectors ##v^i##, or linear forms ##x \mapsto \langle v^i , x \rangle##, or do you take the entire thing as one transformation? Maybe it helps to consider ##dx^i## as the projection that cuts out the ##i##-th component of a vector. A vector space ##V## and its dual ##V^*## may be isomorphic, but that doesn't mean one "sees" it.
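
    A minimal sketch of that "projection" reading, with a basis made up purely for illustration: expand a vector in the basis, and the ##i##-th dual covector simply returns the ##i##-th coefficient.

    ```python
    import numpy as np

    # Made-up non-orthogonal basis of R^3 (columns of E), and some vector w
    # built from known coefficients (1.5, -2.0, 0.5) in that basis.
    E = np.array([[2.0, 1.0, 0.0],
                  [0.0, 1.0, 1.0],
                  [1.0, 0.0, 3.0]])
    w = 1.5 * E[:, 0] - 2.0 * E[:, 1] + 0.5 * E[:, 2]

    # Dual basis covectors = rows of E^{-1}.  Applying row i to w "cuts out"
    # the i-th component of w with respect to the basis, exactly like dx^i.
    coeffs = np.linalg.inv(E) @ w
    print(coeffs)   # [ 1.5 -2.   0.5]
    ```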
     
    Last edited: Aug 12, 2017
  Aug 12, 2017 #6

    Orodruin

    Staff Emeritus
    Science Advisor
    Homework Helper
    Gold Member

    The dual vectors are not directions in the same sense as the tangent vectors; they belong to a completely different vector space, namely the vector space of linear maps from the tangent space to the scalars.
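
    (A trivial way to hold that in code, purely as an illustration with made-up numbers: a covector is literally just a function that eats a vector and returns a number, linearly.)

    ```python
    import numpy as np

    # A covector is a linear map from vectors to scalars.  Here, one built from
    # a fixed row of coefficients; applying it is just a dot product.
    alpha = np.array([2.0, -1.0, 0.5])
    covector = lambda v: float(alpha @ v)

    print(covector(np.array([1.0, 1.0, 2.0])))   # 2 - 1 + 1 = 2.0
    ```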
     
  Aug 12, 2017 #7

    robphy

    Science Advisor
    Homework Helper
    Gold Member

  Aug 13, 2017 #8

    JTC

    Thank you everyone. This has been a great help.

    I am grateful for the attempts, but I think Orodruin saw what was troubling me.
     
  Aug 14, 2017 #9

    Ben Niehoff

    Science Advisor
    Gold Member

    Say I give you a basis of vectors at a point. Forget about differential geometry for a moment, and let's just think about a vector space. So, you have an n-dimensional vector space, and I give you a set of n linearly-independent little arrows at the origin.

    The little arrows don't have to be at right angles to each other; they only have to be linearly independent. So, say there is a point A in the vector space, and you want to give its location in terms of the little arrows I've given you. Well, there are two ways you can do this:

    1. Interpret the little arrows as directions along which you make displacements, and take the appropriate linear combination of directed displacements to land at the point A.

    This is probably the most straightforward thing you can do. But there is another option:

    2. Interpret each subset of (n-1) arrows as specifying an oriented hyperplane. The meaning of the n-th coordinate is the amount by which you should displace the hyperplane of the remaining (n-1) arrows, parallel to itself. So, to reach the point A, we make parallel displacements of each of the n such hyperplanes, such that their new point of mutual intersection is the point A.

    Now, if the basis I gave you was orthonormal to begin with, then the definitions 1 and 2 are actually the same (this should be easy to imagine in 3d space with three orthogonal unit vectors). But if the basis I gave you was *not* orthonormal, then definitions 1 and 2 describe different procedures (imagine 3d space with non-orthogonal vectors; you should be able to see that parallel displacements of, say, the XY plane do not correspond to displacements along the Z direction, since Z is not perpendicular to the XY plane).

    Definition 2 is essentially what the dual basis means. There is the remaining question of the "weight" to give to these parallel displacements of hyperplanes (i.e., how much displacement corresponds to a given value of a coordinate), and this "weight" is determined by the defining formula

    $$\omega^i(e_j) = \delta^i_j,$$

    where the ##e_j## are the basis vectors (the little arrows) and the ##\omega^i## are the dual basis covectors.

    In any case, the basic idea is not hard to visualize and is actually helpful when thinking about differential geometry: Vectors measure displacements along a direction; dual vectors measure displacements from a hypersurface. This is why, e.g., vectors are appropriate for describing velocities, whereas dual vectors are appropriate for describing gradients.
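
    To make that picture concrete, here is a small numerical sketch (my own throwaway example with a made-up non-orthogonal basis): the dual covector ##\omega^3##, read off as a row of the inverse basis matrix, is normal to the plane of ##e_1## and ##e_2##, while ##e_3## itself is not.

    ```python
    import numpy as np

    # Made-up non-orthogonal basis of R^3 (columns of E).
    E = np.array([[1.0, 0.0, 1.0],
                  [0.0, 1.0, 1.0],
                  [0.0, 0.0, 1.0]])
    e1, e2, e3 = E[:, 0], E[:, 1], E[:, 2]

    # Dual basis covectors = rows of E^{-1}, so that omega^i(e_j) = delta^i_j.
    omega = np.linalg.inv(E)
    w3 = omega[2]                             # the covector dual to e3

    print(np.dot(w3, e1), np.dot(w3, e2))     # 0.0 0.0 -> normal to the e1-e2 plane
    print(np.dot(w3, e3))                     # 1.0     -> the "weight" normalization
    print(np.dot(e3, e1), np.dot(e3, e2))     # 1.0 1.0 -> e3 is NOT perpendicular to that plane
    ```

    The rows of the inverse matrix are exactly the "displaced hyperplane" description in option 2: each is normal to the span of the other two basis vectors, with the weight fixed by the formula above.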
     
  Aug 14, 2017 #10

    JTC

    Thank you, Ben.
     