
Vectors As Geometric Objects And Reciprocal Basis?

  1. Feb 4, 2009 #1
    I'm trying to build up enough understanding to work through some GR on my own, but I'm horribly confused by some of the math concepts. So much so that I'm not even sure how to ask my questions. Please bear with me.

    Let's work in a 2D plane.

    Assume I have a vector u which I can write out in terms of orthonormal basis vectors: u = 3e1 + 4e2. This gives it a length of 5.

    What if the basis isn't orthogonal? My vector acquires a different length and my notion of a dot product has to change, right?
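    To make this concrete, here is a minimal numerical sketch (using numpy; the particular non-orthogonal basis below is made up purely for illustration). The Gram matrix g_ij = e_i . e_j is what carries the changed notion of length:

```python
import numpy as np

# Hypothetical non-orthogonal basis for the plane, written in
# ordinary Cartesian components just to have concrete numbers.
e1 = np.array([1.0, 0.0])
e2 = np.array([1.0, 1.0])   # not orthogonal to e1

# Gram matrix g_ij = e_i . e_j encodes the geometry of the basis.
g = np.array([[e1 @ e1, e1 @ e2],
              [e2 @ e1, e2 @ e2]])

# Components of u in this basis: u = 3 e1 + 4 e2
a = np.array([3.0, 4.0])

# |u|^2 = a^i a^j g_ij -- no longer simply 3^2 + 4^2 once the
# basis is non-orthogonal.
length = np.sqrt(a @ g @ a)
print(length)   # ~8.0623 = sqrt(65), agrees with np.linalg.norm(3*e1 + 4*e2)
```

    The point is that 25 = 3^2 + 4^2 is no longer the squared length once the cross term g_12 is nonzero.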

    Now take away the coordinate system. What's left?
    My vector u is still there and can still be expressed by the expansion above, but it now makes no sense at all to talk about its length or the normality of the basis vectors, correct?

    If it's true that we cannot talk about lengths without a coordinate system, then doesn't that also mean that we cannot construct a reciprocal basis?

    Two bases [tex]e_1, e_2, e_3[/tex] and [tex]e^1, e^2, e^3[/tex] are said to be reciprocal if they satisfy the following condition: [tex]e_i \cdot e^k = \delta_i^k[/tex]
    --Vector and Tensor Analysis with Applications by Borisenko and Tarapov

    But how can we satisfy that condition if we don't have a unit length? Does the presence of a reciprocal basis indicate that a coordinate system has been chosen (though the location of the origin may not be fixed)? Something else?
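    For what it's worth, here is a sketch of how a reciprocal basis is actually computed in a plain 2D example (numpy; the basis vectors are made up). Note that the construction uses the dot product through the Gram matrix, which supports the suspicion that a reciprocal basis presupposes an inner product:

```python
import numpy as np

# Hypothetical non-orthogonal basis; rows are the basis vectors.
e = np.array([[1.0, 0.0],    # e_1
              [1.0, 1.0]])   # e_2

g = e @ e.T                  # Gram matrix g_ij = e_i . e_j
g_inv = np.linalg.inv(g)

# Reciprocal basis: e^i = sum_j (g^-1)^ij e_j
e_recip = g_inv @ e

# Check the defining condition  e^i . e_k = delta^i_k
print(e_recip @ e.T)         # identity matrix
```

    Inverting the Gram matrix here is exactly the index-raising operation with g^{ij} that GR texts use.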

    If a column vector exists in the original basis, does the transpose of the vector exist in the reciprocal basis?

    I'm of the impression that the reciprocal basis is a map that takes the vector space into the real numbers. Thus, I assume it is absolutely required for dot products and any notion of the length of a vector in a non-orthogonal basis. Is that correct?

    If so, then all dot products should be of the form [tex]e_i \cdot e^k[/tex] or [tex]e^i \cdot e_k[/tex], but I also read about dot products with both indices up or both down. What do those mean?

    Is the reciprocal basis the same place where all the covariant vectors 'live'? If not, how is the concept of a reciprocal basis related to the idea of covariance?

    I struggle with the idea of vectors being coordinate system independent because I equate the cotangent space with the reciprocal space and don't see how it can be defined without a coordinate representation.

    Well, I'm gonna cut myself off right here. Any help in answering any of these questions is greatly appreciated. Even just helping me to perhaps better formulate my questions would be a big plus. I have a BS in Physics but not really any background in the more abstract areas of math other than set theory, so it would be appreciated if answers didn't assume too much prior knowledge. Thanks a million!
  3. Feb 4, 2009 #2



    As long as we have a vector space, we can always talk about vectors, like u and e1.

    If we have a metric space, then we have a metric || . || which maps the vector u to a number ||u|| which we call the length of u. Note that this is an abstract operation and 'length' is just terminology.

    If we have an inner product space, we can talk about the inner product between two vectors, like u . e1. Then we can also define the length of a vector, i.e. the square of the length of u is u . u. In other words, if we have an inner product space we can make it into a metric space, by using the inner product to define a 'canonical' metric.

    In any vector space, we can choose a basis. That means that if our vector space is n-dimensional, we can choose vectors e1, e2, ..., en such that we can express any vector as a linear combination: u = a1 e1 + a2 e2 + ... + an en.
    We can make the basis normalized, by taking the length of ei equal to one.
    If the vector space happens to have an inner product, we can speak of an orthogonal basis by requiring that ei . ej = 0 if i and j are not equal. If we also normalize the vectors (i.e. that expression gives 1 when i = j) then we have an orthonormal basis, and in that case we can easily find the coefficients ai as u . ei.
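    As a quick sanity check of that last claim (numpy; the orthonormal basis and the vector are made up for illustration):

```python
import numpy as np

# Orthonormal basis of the plane.
e1 = np.array([1.0, 0.0])
e2 = np.array([0.0, 1.0])

u = 3*e1 + 4*e2

# For an orthonormal basis the coefficients are just projections.
a1, a2 = u @ e1, u @ e2
print(a1, a2)   # 3.0 4.0

# With a non-orthogonal basis this shortcut fails; one would need
# the reciprocal (dual) basis instead: a^i = u . e^i.
```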

    Of course, we are used to working in Euclidean space, which is all of the above. Therefore, when thinking about these things in an abstract way, it might be difficult at first to separate the concepts of vector, metric and inner product spaces.

    Finally, if we have a vector space V with vectors u, we can define a dual space V* whose elements are linear functions from V to the real numbers. So an element of V* might be f, and then f(u) is a number. It turns out that V* is again a vector space (i.e. the functions satisfy the vector space structure: if f and g are in V*, then we can also define (f + g) consistently). Indeed, a dual basis is just a basis in this vector space. I once wrote a long post on this; I will see if I can find it.
  4. Feb 5, 2009 #3
    Thanks CompuChip. I'm still confused, and most of what you told me is stuff that I already knew; however, some of your terms were helpful. The notions of a 'metric space' and an 'inner product space', for instance. I see now that I had nearly equated the concepts of coordinate system and metric.

    With these new words: I don't understand why we need the inner product space if we have a metric defined. You said with a metric we can talk about the 'length' of u. Well then why not talk about the length of another vector v and the angle between them as well? What else do we need to take inner products? Does the metric somehow not give us a concept of an angle? I don't see why it wouldn't or rather, how the metric space differs from the inner product space. Wikipedia informs me that they're not the same, but I can't quite follow how.

    What does it mean to have one space without the other? I can't separate them in my head.

    And to be clear, saying 'space' is a little misleading, right? It's all the same space, just different representations therein. I remember my quantum professor discouraged people from talking about momentum 'space' or coordinate 'space'.

    I'd say from wikipedia that the 'dual basis' is certainly my 'reciprocal basis'. But what's the difference between the dual space and the inner product space? Is the latter a subset of the former? Is that the difference?

    So an overarching question is: Which of the aforementioned 'spaces' are equal, how so or how not, and in which of them do the covariant and contravariant vectors 'live'?

    Is the purpose of the dual / reciprocal basis to give us a way to fake having orthonormal coordinates so that our calculations in these non-orthonormal representations make sense to us, since we experience the world orthonormally?

    Thanks again!
  5. Feb 5, 2009 #4



    You can have a metric space that cannot have an inner product defined such that the metric is induced by the inner product. One example is R2 with the sup norm. In fact, given an inner product, you can fairly easily prove the metric induced by it satisfies the parallelogram law, and less easily prove that if a metric satisfies the parallelogram law, then there exists an inner product that induces the metric.
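    A small numerical illustration of this point (numpy; the vectors are arbitrary): the Euclidean norm satisfies the parallelogram law ||u+v||^2 + ||u-v||^2 = 2||u||^2 + 2||v||^2, while the sup norm on R2 does not, so no inner product can induce it:

```python
import numpy as np

def parallelogram_defect(norm, u, v):
    """||u+v||^2 + ||u-v||^2 - 2||u||^2 - 2||v||^2; zero iff the law holds."""
    return norm(u + v)**2 + norm(u - v)**2 - 2*norm(u)**2 - 2*norm(v)**2

u = np.array([1.0, 0.0])
v = np.array([0.0, 1.0])

euclid = lambda x: np.linalg.norm(x)    # comes from an inner product
sup    = lambda x: np.max(np.abs(x))    # sup norm on R^2

print(parallelogram_defect(euclid, u, v))   # ~0 (up to rounding)
print(parallelogram_defect(sup, u, v))      # -2.0, nonzero -> no inner product
```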

    Dual space: As a matter of fact, the dual space is a vector space of functions f: V -> R (or C if you're working over complex numbers). An inner product space is just a vector space with an inner product. Every vector space has a dual space; not every vector space is an inner product space. What exactly is your confusion between the two?
  6. Feb 5, 2009 #5



    Actually there is also something like a normed vector space, which has a norm || . ||. A norm operates on one element and gives its "length", a metric operates on two and gives the "distance", and the inner product operates on two and gives their "overlap" or "parallelism". The difference between metric and inner product is rather subtle; you should really look at the mathematical axioms, I suppose.
    If you have an inner product space, it is always possible to make it into a metric space by defining a metric ("distance function") through the inner product. However, it is also possible to have some different metric, and it is also possible to define a metric without having an inner product at all. On the other hand, I don't think you can in general use the metric to define an inner product (or maybe you can, I can't recall seeing such a result before though). If you have a norm you can make it into a metric by d(u, v) := || u - v ||, however the opposite is not always true (knowing the length of two vectors doesn't mean you can say something about a "distance" between them).

    For example, you can think about a space of functions of x - say on an interval [a, b]. This is vector space (the function f(x) = 0 for all x acts as the zero, (f + g) can be defined by (f + g)(x) = f(x) + g(x), etc.).
    You can define a metric on them, for example, let the distance between two "vectors" f and g be given by
    [tex]d(f, g) = \inf_{x \in [a, b]} |f(x) - g(x)|[/tex]
    (I think, didn't check it's actually a metric) or
    [tex]d(f, g) = \int_a^b |f(x) - g(x)| \, dx[/tex].
    which would tell you something about the "distance" of the functions to each other (in whatever way you want to define that).

    You can also define an inner product, like the one you have in QM:
    [tex]f \cdot g = \int_a^b f(x) g(x) \, dx[/tex]
    which would tell you something about the "overlap" between the functions (e.g. if they're sharply peaked the inner product will be very small or large depending on whether the peaks are close by or far apart) in much the same way as the inner product between vectors in the plane tells you something about the "overlap" of the vectors ("how much they point in one another's direction").

    It is perfectly possible that you just need one or the other. You can use them both. Or you can only use the inner product and then define the metric as
    [tex]d_\mathrm{induced}(f, g) = \sqrt{(f - g) \cdot (f - g)} = \sqrt{\int_a^b (f(x) - g(x))^2 \, dx}[/tex]
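    Numerically, this induced distance can be sketched like so (numpy, with a crude Riemann sum; the interval and functions below are made up for illustration). Note that the square root is what makes it behave like a distance:

```python
import numpy as np

# Toy example on [a, b] = [0, 1].
x = np.linspace(0.0, 1.0, 10001)
dx = x[1] - x[0]

f = np.sin(np.pi * x)
g = np.zeros_like(x)

inner = lambda p, q: np.sum(p * q) * dx   # f . g = int_a^b f(x) g(x) dx

# Metric induced by the inner product: d(f, g) = sqrt((f-g).(f-g))
d_induced = np.sqrt(inner(f - g, f - g))
print(d_induced)   # ~0.7071 = sqrt(1/2), since int_0^1 sin^2(pi x) dx = 1/2
```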

    When you are talking about "real space" and "momentum space" you can argue that the language is misleading. However, in the present context "space" is just a shorthand for "vector space" and there is no ambiguity. The dual to a vector space is again a vector space. They're different things (in general, though in special cases like Rn they are isomorphic - again your intuition is making a clear distinction blurry :smile:) but they are both vector spaces.

    The dual space is something different. Generally, a vector space V can be anything: a space of points in Rn, a space of functions, or of more weird things. Now mathematicians have found out, that you can make a new vector space out of V, whose elements are linear functions [itex]f: V \to \mathbb{R}[/itex]. In this context, linear means for example that f(v + u) = f(v) + f(u), for any two vectors v and u in V. The vector space of these functions on V is called the dual space of V, denoted V*. It needn't be an inner product space or a metric space a priori, even if V is - see the example above.

    In quantum mechanics, we have a very nice notation. The Hilbert space we're working with there is an inner product space, with the inner product between two vectors |u> and |v> (i.e. elements from V) is denoted by <u|v>.
    Now we can make the vector |u> from V into a vector <u| from the dual space V*, which means that <u| is a function which acts on vectors |v> in V and gives a number <u|(|v>)
    (in "ordinary" notation, we make the vector u into a function fu which acting on v gives a number fu(v)). We define this function such that it gives the inner product:
    fu: V -> R, fu(v) = <u|v>
    or in bra-ket notation,
    <u|(|v>) = <u|v>
    (which is why the notation is so handy).
    If you have a basis, then to define the elements of the dual it suffices to say how they act on the basis (because you can expand any vector as [tex]v = a_1 e_1 + \cdots + a_n e_n[/tex], so if you know what [tex]f(e_i)[/tex] is, then [tex]f(v) = a_1 f(e_1) + \cdots + a_n f(e_n)[/tex]). In a QM-like context, where you want the dual of a vector to give the inner product and the basis is orthonormal, you can easily check for yourself that you need to define fei by [tex]f_{e_i}(e_j) := \delta_i^j[/tex]... in the weird but convenient notation, that just says that [tex](\langle e_i |) (|e_j\rangle) = \langle e_i | e_j \rangle = \delta_i^j[/tex].
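    To see the dual basis elements as literal functions, here is a toy sketch (plain Python with numpy; the orthonormal basis of R^2 is made up for illustration):

```python
import numpy as np

# Orthonormal basis of R^2; each dual basis element is represented
# literally as a Python function V -> R.
basis = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]

def dual_of(ei):
    """The linear functional f_{e_i}: v |-> e_i . v (orthonormal case)."""
    return lambda v: float(ei @ v)

dual_basis = [dual_of(ei) for ei in basis]

# Defining condition: f_{e_i}(e_j) = delta_ij
print([[dual_basis[i](basis[j]) for j in range(2)] for i in range(2)])
# [[1.0, 0.0], [0.0, 1.0]]

# Linearity lets a dual element act on any expansion v = a1 e1 + a2 e2
# and pick out a coefficient:
v = 3*basis[0] + 4*basis[1]
print(dual_basis[0](v), dual_basis[1](v))   # 3.0 4.0
```

    This is the "ordinary" notation version of <e_i|(|e_j>) = delta_i^j from the bra-ket discussion above.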

    However handy, for QM students the notation often adds to the confusion, because it is not clear to them how <u| and |u> are different things and how they are obtained from one another, though actually they are just elements in completely different vector spaces (which happen to be related, in the sense that the elements of one are functions which we can plug elements of the other into).
    Last edited: Feb 5, 2009
  7. Feb 5, 2009 #6



    Whoops... my previous post was assuming the metric is a norm metric

    In general, if you use inf for a metric you have to be careful because a lot of times it'll give you 0 for two non-equal elements (as is the case here)
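    A quick check of that failure (numpy; the functions f(x) = x and g(x) = 1/2 on [0, 1] are made up): their pointwise difference vanishes at x = 1/2, so the inf "metric" reports distance zero between unequal functions, while the sup version does not:

```python
import numpy as np

# Two clearly different functions on [0, 1].
x = np.linspace(0.0, 1.0, 1001)
f = x                       # f(x) = x
g = np.full_like(x, 0.5)    # g(x) = 1/2

# inf_x |f(x) - g(x)| vanishes at x = 1/2 although f != g,
# so d(f, g) = 0 fails to force f = g: not a metric.
d_inf = np.min(np.abs(f - g))
print(d_inf)   # (numerically) zero

# sup_x |f(x) - g(x)| behaves properly for this pair:
d_sup = np.max(np.abs(f - g))
print(d_sup)   # 0.5
```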
  8. Feb 5, 2009 #7



    Yeah, so that was a non-example, as a metric has the property that d(x, y) = 0 iff x = y.
    I think with sup it works though (if the triangle inequality doesn't give problems)