
Covariance and Contravariance

  1. Jun 14, 2008 #1
    Hey, I posted earlier about the tensor covariant derivative, and the help was great, that makes sense to me now.

    However, I am getting really, really stuck on the concept of covariant vectors vs. contravariant vectors. I've looked through as many resources as I can - Wikipedia, MathWorld, the NASA tensor PDF (which is otherwise great), Schaum's tensor outline - and I'm getting nowhere. My next step is MTW, though I am sorta intimidated by it so I haven't looked yet. I just can't understand, maybe I'm stupid.

    Everyone seems to be concerned with how the components of the vector transform under a change of coordinates, but to me this feels like sophistry. The vector and manifold exist independent of coordinates - a change of coordinates is just redrawing lines over the manifold, nothing actually changes. Similarly, the change of coordinates does absolutely nothing to the vector itself, it just affects how it is written.

    So then how are covariant and contravariant vectors actually different? It seems to me like they are just vectors, forget any of this contravariant/covariant business.

    Now another point a lot of these books bring up is how you can convert a covariant vector to a contravariant vector and vice versa by taking the inner product with the metric tensor. However, this seems to me like a hackish abuse of the inner product. Instead of the metric eating 2 vectors like a good inner product, it eats 1 and then chills out.

    Anyways, I would appreciate any help with this!
  3. Jun 14, 2008 #2


    Science Advisor
    Homework Helper

    Technically speaking, covariant vectors are not vectors. Suppose we have some vector space V. Then a vector is an element v of V. In quantum mechanics we would write |v> and in the tensor formalism we can write [itex]v^\mu[/itex] to indicate that we can write it out in components, in some basis. Of course, as you say, the choice of coordinates is arbitrary and for any coordinate system the vector is really the same thing. But the components (the numbers we write down to specify the vector, relative to the coordinate system) do change. So if we have some matrix [itex]\Lambda[/itex] which gives the change of coordinates (as you are used to in linear algebra), with components [itex]\Lambda_\nu{}^\mu[/itex] relative to these bases, it turns out that the components of the vector are related by a special formula: [itex]v^\mu[/itex] in the new basis is given by [itex]\sum_\nu \Lambda_\nu{}^\mu v^\nu[/itex].
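
    A trivial but concrete example: if the new basis vectors are simply the old ones doubled, the components have to halve,
    [tex]\bar{\textbf{e}}_\mu = 2\,\textbf{e}_\mu \quad\Longrightarrow\quad \bar{v}^\mu = \tfrac{1}{2}\,v^\mu,[/tex]
    because [itex]v^\mu \textbf{e}_\mu = \bar{v}^\mu \bar{\textbf{e}}_\mu[/itex] must describe one and the same vector. The components change "against" the basis, which is where the name contravariant comes from.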

    A covariant vector is, however, not an element of the vector space V, but an element of the dual space V*. In other words, it is a linear form on V: you stick in a vector and get out a number, and if you stick in a linear combination of vectors you can calculate the number by looking at the separate parts of that linear combination. Now if we are in a special vector space, in which there is an inner product, it turns out that there exists for every vector v in V, an element v* of V* with the property that applying v* to any w in V gives the inner product between v and w. In quantum mechanics, we would write <v| instead of v* and in the tensor formalism we write [itex]v_\mu[/itex] for the components relative to some basis.

    Note that, though the same letter is used for both, they are actually different things which live in different spaces. Again, the [itex]v_\mu[/itex] are just numbers which depend on the coordinate system that you (arbitrarily) choose. Of course, we can just as well choose some other coordinate system by a change of basis [itex]\Lambda_\mu{}^\nu[/itex]. But because [itex]v_\mu w^\mu[/itex] (v*(w), in the old notation) is a number which does not have anything at all to do with the coordinates, the way in which it must transform is well-defined. In fact, an easy calculation shows that it must transform with the inverse of the change-of-coordinate matrix, which we - somewhat confusingly but very conveniently in practice - denote by [itex]\Lambda^\mu{}_\nu[/itex].
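
    If you like to see that invariance numerically, here is a small Python/numpy sketch (the matrices and components are just made-up illustrations): upper-index components get multiplied by the change-of-basis matrix, lower-index components by its inverse, and the contraction comes out the same either way.
[code]
import numpy as np

L = np.random.randn(3, 3)          # a (generically invertible) change-of-basis matrix
v = np.random.randn(3)             # contravariant components v^mu
w = np.random.randn(3)             # covariant components w_mu

v_new = L @ v                      # upper-index components transform with L
w_new = w @ np.linalg.inv(L)       # lower-index components transform with the inverse

print(np.dot(w, v), np.dot(w_new, v_new))   # w_mu v^mu: the same number in both bases
[/code]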

    So you have really two different objects in two different spaces, and an arbitrary set of coordinates. But indeed, as you say, certain things are unchanged when we choose another arbitrary set of coordinates. For example, the actual vector must stay the same; therefore, changing coordinates means that the components of the vector w.r.t. the coordinates must change as well. But also, these things from different spaces "interact" with each other (you can apply one to the other and get a number, which is independent of the coordinates); therefore the way that we can write down the components of both when going to another coordinate system is related.

    Finally, since the two spaces V and V* are isomorphic, we can go from elements v of the former to unique elements v* of the latter and vice versa. In quantum mechanics, this amounts to going from bras to kets (& v.v.) and in the tensor formalism this amounts to going from contravariant to covariant "vectors" (& v.v.). It turns out that the way to do it is by the metric (which has to do with the inner product <.|.>, and that has to satisfy certain rules such as <v|w> = <w|v>*, or, equivalently, [itex]v_\mu w^\mu = w^\mu v_\mu = v^\mu w_\mu[/itex]) and this can be shown (e.g. in a linear algebra setting, or with bra-ket notation, or in tensor notation, whichever you prefer). If you want you can view it this way: the transformation is fixed and well-defined; the index notation just turns out to have a very nice property, where this transformation just amounts to raising and lowering indices, or - equivalently - writing the metric in between and summing over all repeated indices. You can view this purely as a formal trick however, with a very convenient notation; the underlying principles can all be derived without this notation from linear algebra and functional analysis.
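
    For a concrete example of this raising and lowering, take the Minkowski metric of special relativity (in the +--- convention):
    [tex]\eta_{\mu\nu} = \mathrm{diag}(1,-1,-1,-1), \qquad v_\mu = \eta_{\mu\nu}v^\nu = (v^0,\,-v^1,\,-v^2,\,-v^3),[/tex]
    so that [itex]v_\mu w^\mu = v^0 w^0 - v^1 w^1 - v^2 w^2 - v^3 w^3[/itex], the usual Lorentz-invariant product.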

    I hope that makes things a bit clearer.
  4. Jun 14, 2008 #3


    Staff Emeritus
    Science Advisor
    Gold Member

    Very good post. Just a minor correction. They are vectors, but not tangent vectors. (It's obvious from the rest of your post that you are aware of this. I'm just nitpicking a bit, because I think I would find that first sentence a bit confusing if I didn't know this stuff already).
  5. Jun 14, 2008 #4
    Hello maze.

    You have already received excellent responses. But perhaps I can help a little, because I think you are having the same problems as I did when I first began to look at the subject.

    Bearing in mind that I am also still learning about tensors and so am no expert, I will attempt an answer to your confusion about the inner product. This is a bit rambling and rather non-rigorous, but I think it gives the general idea; any corrections from those better versed are of course welcome.

    The inner product is an operation between two vectors, and those vectors live in the same space. A covector lives in a different space, so the metric is needed to relate the two: applying the metric to a vector gives the corresponding covector, and the inverse metric takes you back. The inner product of two vectors can then be computed by letting the covector corresponding to one of them act on the other.
    To do this calculation in matrix form you write the vector as a column and the covector as a row; the row times the column gives a single number, the inner product.

    In Euclidean space, with Cartesian coordinates, the metric is a diagonal matrix with unit entries, so the covector is, in a sense, "equal" to the vector and the calculation can be done without ever mentioning the metric, although strictly we should still think of the metric as converting one of the vectors into a covector before the inner product is taken. When we first learn about inner products we learn in Euclidean space, and at that level the distinction is unimportant.
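
    For example (if I have this right): even in the flat plane, if we use a skewed basis [itex]\textbf{e}_1 = (1,0)[/itex], [itex]\textbf{e}_2 = (1,1)[/itex] instead of an orthonormal one, the metric is no longer the identity,
    [tex]G = \left[\begin{matrix}\left<\textbf{e}_1,\textbf{e}_1\right> & \left<\textbf{e}_1,\textbf{e}_2\right> \\ \left<\textbf{e}_2,\textbf{e}_1\right> & \left<\textbf{e}_2,\textbf{e}_2\right>\end{matrix}\right] = \left[\begin{matrix}1 & 1 \\ 1 & 2\end{matrix}\right],[/tex]
    and then the covector components really are different from the vector components, so the distinction starts to matter.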
  6. Jun 14, 2008 #5
    I too had a lot of trouble grappling with the distinction between covariant and contravariant vectors. I didn't understand it fully until I found an explanation in terms of the linear-algebraic aspects of tensor analysis, exactly like CompuChip's post.

    In my opinion, any introduction to tensors that doesn't talk about dual spaces and linear functionals is bound to fail to adequately explain the distinction between tangent and cotangent vectors. All these explanations that attempt to simplify things and leave out such notions in the end only caused me a whole lot of pain that could have been avoided if they had just presented the subject for what it is in the first place.
  7. Jun 14, 2008 #6


    Staff Emeritus
    Science Advisor
    Gold Member

    I advise completely avoiding the terms "covariant" and "contravariant" when possible -- there are actually two conventions regarding those terms, and they are exact opposites!

    Spivak's Differential Geometry is a good source for the subject.

    When a particular vector space is most interesting (e.g. the tangent space to a point on your manifold), it is customary to reserve the word "vector" specifically for elements of that vector space, and to use other names for elements of related vector spaces. For example, elements of the dual space (e.g. the cotangent space) are called "covectors", and arbitrary elements of the tensor algebra are called "tensors".

    As for the metric "eating one vector and then chilling out": this is something you'll just have to get used to -- the ability to do this is, in one deep sense, the very reason why things like sets and vector spaces are very useful notions.

    You, incidentally, use this property very often -- it's exactly what you do when you define a function via something like
    f(x) := 3x.
    Normally, multiplication is a function of two variables, the multiplier and multiplicand. But if you choose a specific value for the multiplier, you can plug it in, yielding a function of one variable.

    In general, if you have any function [itex]f:A \times B \to C[/itex] and you choose a specific element a of A, you can produce a new function [itex]g : B \to C[/itex] defined by [itex]g(b) = f(a, b)[/itex]. By letting [itex]C^B[/itex] denote the set of all functions from B to C, this process can be described as a function [itex]h: A \to C^B[/itex]. h is defined so that h(a) is the function defined by [itex]h(a)(b) = f(a, b)[/itex].
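
    If it helps to see this "plug in one argument and get back a function of the rest" idea explicitly, here is a small Python sketch (the names are just made up for illustration):
[code]
def f(a, b):
    # an ordinary function of two variables, e.g. multiplication
    return a * b

def h(a):
    # h(a) is itself a function: it eats b and returns f(a, b)
    def g(b):
        return f(a, b)
    return g

triple = h(3)       # the "f(x) := 3x" example from above
print(triple(5))    # prints 15, i.e. f(3, 5)
[/code]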

    For vector spaces, you have the same property relating linear functions [itex]A \otimes B \to C[/itex] to linear functions [itex]A \to L(B, C)[/itex], where [itex]L(B, C)[/itex] is the vector space of all linear transformations from B to C. (Note that the dual space [itex]V^*[/itex] is, by definition, given by [itex]V^* = L(V, \mathbb{R})[/itex], for real vector spaces)
  8. Jun 14, 2008 #7
    Ok, first of all thank you to everyone who has posted in this thread. The explanations are really helping, especially CompuChip's, but everyone else's as well. I'm going to try to write out how I understand things now, both for my own benefit, and so that you can correct me if I make mistakes.

    If you have a surface M, the space of all tangent vectors to M at a point p forms a vector space, [itex]V_p[/itex]. If the surface is expressed in a particular coordinate system, [itex]x_1,x_2,...[/itex], then the tangent vectors to the coordinate curves at p form one basis for [itex]V_p[/itex]. However, if you changed the coordinate system of M to [itex]\bar{x}_1,\bar{x}_2,...[/itex], the tangent vectors to the new coordinate curves would form a different basis of [itex]V_p[/itex]. You can convert back and forth between these bases by using the chain rule, and in general the transformation of the vector components [itex]v^i[/itex] of v looks like a matrix multiplication, [itex]\bar{v}^i = \Lambda^i{}_k v^k[/itex].
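
    For instance, in the plane with polar coordinates [itex]r,\theta[/itex], the tangent vectors to the coordinate curves at a point are, written out in Cartesian components,
    [tex]\textbf{e}_r = (\cos\theta,\ \sin\theta), \qquad \textbf{e}_\theta = (-r\sin\theta,\ r\cos\theta),[/tex]
    and the matrix of coefficients here is just the Jacobian [itex]\partial(x,y)/\partial(r,\theta)[/itex] from the chain rule.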

    The vector space [itex]V_p[/itex] admits an inner product, [itex]\left<\textbf{v},\textbf{w}\right>[/itex]. In order for the inner product to correspond properly to length and angles in euclidean space, the inner product is given in particular coordinates by
    [tex]\left<\textbf{v},\textbf{w}\right>=\left[v^1, v^2, ...\right] \left(J J^T\right)^{-1} \left[w^1, w^2, ...\right]^T[/tex]

    where J is the Jacobian for the transformation from euclidean coordinates to the coordinate system v and w are written in.
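
    As a sanity check of this formula in a familiar case: for polar coordinates in the plane, working out the Jacobian gives
    [tex]\left(J J^T\right)^{-1}=\left[\begin{matrix}1 & 0 \\ 0 & r^2\end{matrix}\right], \qquad \left<\textbf{v},\textbf{w}\right> = v^r w^r + r^2\, v^\theta w^\theta,[/tex]
    which matches the usual arc length rule [itex]ds^2 = dr^2 + r^2 d\theta^2[/itex].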

    To generalize from a surface to a more general manifold that is not necessarily embedded in euclidean space, you have to be a little more careful about how you build the tangent vector space. One way to do this is to use equivalence classes of curves as a substitute for tangent vectors when building [itex]V_p[/itex]. You also don't have a mapping from euclidean coordinates to your coordinates or the Jacobian, but that's OK because the manifold will come equipped with an inner product already in the form of the metric, and you can just replace [itex]\left(J J^T\right)^{-1}[/itex] with the metric matrix G.
    [tex]\left<\textbf{v},\textbf{w}\right>=v^i G_{i j} w^j[/tex]

    If I were inventing differential geometry, I would probably have just stopped right here, seeing as how you can calculate pretty much anything you want about tangent vectors with what we have so far. However, apparently it is interesting and useful to also consider the space of all linear functionals on [itex]V_p[/itex] as well (why is this?).

    Now, let's actually think about linear functionals for a minute. If f is a linear functional, that means f(v) eats a vector and spits out a number and does so linearly. That's all fine and good, but suppose we have 2 linear functionals, f and g. Is there a sense in which we could "add" the functionals? Yes! If you define (f + g)(v) = f(v)+g(v), then "f+g" is another linear functional. More broadly, with addition between linear functionals defined this way, the space of all linear functionals actually forms a vector space, call it [itex]V^*_p[/itex] (one could check all the vector space axioms - it's all good).

    To actually calculate f(v), first express v in terms of a basis, [itex]\textbf{v}=v^1 \textbf{e}_1 + v^2 \textbf{e}_2 + ...[/itex]. Then we can figure out how f acts on v if we know how f acts on each element of the basis.
    [tex]f\left(\textbf{v}\right) = f\left(v^1 \textbf{e}_1 + v^2 \textbf{e}_2 + ...\right)=v^1 f\left(\textbf{e}_1\right) + v^2 f\left(\textbf{e}_2\right) + ...=\left<\textbf{v},\textbf{e}_1\right> f\left(\textbf{e}_1\right) + \left<\textbf{v},\textbf{e}_2\right> f\left(\textbf{e}_2\right) + ...[/tex].

    Remembering that linear functionals are actually vectors in [itex]V^*_p[/itex], the values [itex]f_i = f\left(\textbf{e}_i\right)[/itex] can be thought of as the "components" of f in the basis of functionals [itex]\textbf{h}_1\left(\textbf{v}\right)=\left<\textbf{v},\textbf{e}_1\right>[/itex], [itex]\textbf{h}_2\left(\textbf{v}\right)=\left<\textbf{v},\textbf{e}_2\right>[/itex], ... Thus [itex]f=f_1 \textbf{h}_1 + f_2 \textbf{h}_2 + ...[/itex] This makes a particularly nice way to write f(v) if we know the components of f and v in their corresponding bases: [itex]f\left(\textbf{v}\right)=f_i v^i[/itex].

    Above we wrote v in a certain basis [itex]\textbf{e}_1,\textbf{e}_2, ...[/itex], and that corresponded to the following basis for f: [itex]h_1\left(\textbf{v}\right)=\left<\textbf{v},\textbf{e}_1\right>[/itex], [itex]h_2\left(\textbf{v}\right)=\left<\textbf{v},\textbf{e}_2\right>[/itex], ... If the coordinate system is changed, the bases for v and f are both changed, and so the components of v and f will change, but how? We already saw that under a coordinate transform, the components of v change according to a matrix multiplication [itex]\bar{v}^i = \Lambda^i{}_k v^k[/itex]. What about the components of f? If you work through it, you will find that the components of f transform inversely: [itex]\bar{f}_i = \left(\Lambda^{-1}\right)^k{}_i f_k[/itex] (is there an easier way to see this besides crunching the matrix algebra?).

    Finally, since
    [tex]f\left(\textbf{v}\right) =\left<\textbf{v},\textbf{e}_1\right> f\left(\textbf{e}_1\right) + \left<\textbf{v},\textbf{e}_2\right> f\left(\textbf{e}_2\right) + ...[/tex],

    and since
    [tex]\left<\textbf{v},\textbf{w}\right>=v^i G_{i j} w^j[/tex],

    we can think of [itex]v_j = v^i G_{i j}[/itex] as the coordinate representation of the linear functional that computes [itex]\left<\textbf{v},\textbf{w}\right>[/itex].
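
    A quick numerical example (with made-up numbers, just to see the machinery work): take
    [tex]G = \left[\begin{matrix}2 & 1 \\ 1 & 3\end{matrix}\right], \qquad \left(v^1, v^2\right) = (1,\ 2).[/tex]
    Then [itex]v_j = v^i G_{i j} = (1\cdot 2 + 2\cdot 1,\ 1\cdot 1 + 2\cdot 3) = (4,\ 7)[/itex], and indeed [itex]f\left(\textbf{w}\right) = 4\,w^1 + 7\,w^2 = \left<\textbf{v},\textbf{w}\right>[/itex].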
  9. Jun 15, 2008 #8


    Staff Emeritus
    Science Advisor
    Gold Member

    Bleh, I'm quite conditioned to use the "(coordinate representations of) tangent vectors are columns" and "cotangent vectors are rows" convention. Hopefully I won't make any errors using your convention, but if you see something weird, that might be the reason. :smile:

    Since we're building things from scratch, we want to be careful to actually define what kind of object G is!

    We've defined the tangent space V so that it's a vector space, so it seems clear that we could simply define a "tangent linear transformation", which is a linear transformation from V to V. So we can let G be such a thing, and we're happy.

    Now, we can define the inner product: the inner product G represents ought to be [itex]\langle v, w \rangle := v G w^T[/itex]. But now we see a problem: what is the transpose of a tangent vector?!?! Okay, we can fall back on coordinates: we might simply try defining [itex]\langle v, w \rangle := \sum_{ij} v^i G_i^j w^j[/itex].

    So let's check: does this all make sense? We need to check coherence with respect to a change of basis. Let [itex][\mathbf{v}]_B[/itex] denote the coordinate representation of v with respect to the basis B. Let [itex]\Lambda[/itex] be the change-of-basis matrix between B and B' (so [itex][\mathbf{v}]_{B'} = [\mathbf{v}]_{B} \Lambda [/itex])

    (Or, if you prefer index notation, [itex]\bar{v}^i = v^j \Lambda_j^i[/itex])

    Exercise: Determine how the coordinate representation of G must transform...
    1. ... based on the fact G is a linear transformation
    2. ... based on the fact [itex]\langle \cdot, \cdot \rangle[/itex] is an inner product

    (Hrm. I have more to say, but I think I'll give you time to reflect upon this first)
  10. Jun 15, 2008 #9
    Ok, just thinking of everything as boxes of numbers being summed over, you could say that
    [tex]\left<\textbf{v},\textbf{w}\right>=v^i G^j_i w^j = \bar{v}^k \left(\Lambda^{-1}\right)^i_k G_i^j \left(\Lambda^{-1}\right)_l^j \bar{w}^l[/tex]

    Since [itex]\left<\textbf{v},\textbf{w}\right> = \bar{v}^i \bar{G}^j_i \bar{w}^j[/itex] as well, then [itex]\bar{G}_k^l=\left(\Lambda^{-1}\right)^i_k G_i^j \left(\Lambda^{-1}\right)^j_l[/itex]

    I'm pretty sure all the steps I took are correct (just substituting things in), and yet something screwy is going on with the indices (they don't match).

    Also, I didn't use any of the properties of the inner product in these steps. The inner product would force G to be symmetric, positive definite, and possibly other nice things.
  11. Jun 15, 2008 #10


    Staff Emeritus
    Science Advisor
    Gold Member

    Now, how does that compare with the result of exercise 1?

    That's a hint of things to come. :smile:
  12. Jun 15, 2008 #11


    Science Advisor
    Homework Helper

    I still had a few small remarks about the earlier post.

    That has to do with Hurkyl's post about converting multilinear forms to lower "degree" linear forms. We know that the inner product is linear in - in particular - the second slot. So if we momentarily fix v, we can consider a function [itex]f: V \to \mathbb{R}: \vec w \mapsto \langle \vec v, \vec w \rangle[/itex] and this turns out to be a linear form. Hence, we can apply the full machinery of functional analysis here, and we can compare to the quantum mechanical formalism of bras and kets which we by now (hopefully) already understand very well. It is common in physics to look at the same thing in a different way, and try to discover parallels with what we already know. You could even turn it around and say that if we understand this view correctly we might better understand the QM formalism.

    No, it means that f eats a vector and spits out a number f(v). I suppose you meant: let's fix a vector v in the vector space and consider the unique linear functional we can build out of that by the construction given earlier, which - in a sense - still depends on v, so let's denote it by f(v). Then f(v) is a linear functional, it eats a vector (e.g. w) and spits out a number (f(v))(w) which happens to be equal to <v, w> by construction.

    In some cases the dual space turns out to be isomorphic to V itself. And very generally it turns out that, repeating this construction, the dual of the dual V** is always isomorphic to V. In physics, I think we use this in the fact that we go back and forth between bras and kets (e.g. the bra corresponding to the ket corresponding to a bra, is just itself).

    Note that now you are switching back to f being the linear functional, so f(v) is already a number.

    You are messing things up a bit. The hi(v) you define are just numbers, they aren't a basis for anything. I suppose you meant to explain that one can define a basis { f^i } of linear functionals satisfying [tex]f^i \in V^*, \qquad f^i(e_j) = \delta^i_j[/tex] (extended linearly to all of V) for the chosen basis { e_i } of V. This basis is called the basis dual to { e_i } and indeed we can write [itex]f = c_i f^i[/itex] (note that the c are just numbers, and the f^i are linear functionals). If you continue this through the rest of your post, you will find the result you stated (so you messed up the notation, but the idea was fine).

    Finally, I'd like to add that you can see the metric itself as the inner product. So the metric [itex]g_{\mu\nu}[/itex] is a tensor with two covariant indices (which just means: it is a bilinear map [itex]g: V \times V \to \mathbb{R}[/itex]). You plug in two vectors and get a number: [itex]g(\vec v, \vec w) = \langle \vec v, \vec w \rangle[/itex] in functional notation, in tensor notation: [itex]g_{\mu\nu} v^\mu w^\nu = v^\mu w_\mu[/itex] (the metric "lowers the index"; if you want just view [itex]v^\mu w_\mu[/itex] as the notation for the inner product, forgetting about linear forms) and because it is symmetric, we can also view this as not g(v, w) but g(w, v) and get [itex]v_\mu w^\mu[/itex]. But we can also plug in just one vector, so we get an object with just one slot for a vector which - when filled - produces a number. But that's exactly what we called a linear functional:
    [tex]g(\vec v, \cdot) \in V^*: \vec w \mapsto \langle \vec v, \vec w\rangle \in \mathbb{R}[/tex].
    In index notation, this is written as [itex]g_{\mu\nu} v^\mu[/itex]. This is an object with one lower index, and objects with one lower index are linear maps ("co-vectors"), so indeed the notation is consistent.

    One can extend this formalism to objects with k upper indices and l lower indices: if you plug in k linear forms and l vectors you get a number (and if you plug in less than that, you get a new map of "lower rank").
    Perhaps it would be very instructive if you read chapter 1 of Sean Carroll's lecture notes on General Relativity, as he explains this quite rigorously (for a physicist) with clear physical applications in mind.
  13. Jun 15, 2008 #12
    Ahh, I didn't realize those were supposed to be separate exercises. I'll go over it again at the next opportunity, which will be in a day or so.

    There is a lot of material in your post that will take a while to go through. I will have to give some serious thought to the issues of isomorphism and other points you mention. However a couple of things came up that I can address right away.

    First, my understanding of quantum mechanics is not at this high of a level, so a lot of the connections you are making to bra and ket vectors are going over my head.

    Second, I think a couple of the problems you've identified are not actually problems but rather misunderstandings due to my poor choice of wording.

    When I said f(v) in that section, I meant the general function, for any vector, not the value of f for a particular vector v. This is similar to how, if you want to talk about a function of a single variable, you might say "f(x)", even though technically f(x) is a particular value, not the function as a whole. I probably should have said [itex]f\left(\cdot\right)[/itex]. Again, the same terminology thing is going on when I talked about the (dual) basis vectors [itex]h_i\left(\cdot\right)[/itex] for the linear function space [itex]V^*[/itex].

    If there is a real problem or subtlety I'm missing here, please let me know, but I don't think there is. I do think that my reasoning is correct, despite being poorly worded.
  14. Jun 15, 2008 #13


    Science Advisor
    Homework Helper

    I hope it's not too much new material, just the same things already discussed in this thread, maybe seen from another point of view. If they help you understand it better, good; if they confuse you more, forget about it.

    OK, that's too bad, I was hoping it would clear things up a bit. Anyway, whenever you decide to do QM you will already understand bras and kets :)

    As I already said, I think you got the idea very well. I don't even know if your notation was wrong or whether it was correct but just confusing for me. But indeed, you got the point :smile:

    Good luck reading on.
  15. Jun 15, 2008 #14
    So, based on linearity,
    [tex]G\left(\left[\textbf{v}\right]_B\right) = G\left[\textbf{v}\right]_B = G\Lambda^{-1}\left[\textbf{v}\right]_{B'}[/tex]

    Therefore [itex]\bar{G} = G \Lambda^{-1}[/itex].
    So [itex]\bar{G}=\Lambda^{-1}G\Lambda^{-1}[/itex] AND [itex]\bar{G}=G \Lambda^{-1}[/itex], so [itex]G \Lambda^{-1} = \Lambda^{-1} G \Lambda^{-1}[/itex], which would imply [itex]G = \Lambda^{-1} G[/itex]. I don't see how this could possibly hold for any possible coordinate transform matrix.
  16. Jun 15, 2008 #15
    Scratch that. The manipulations there don't actually determine [itex]\bar{G}[/itex].
  17. Jun 15, 2008 #16


    Staff Emeritus
    Science Advisor
    Gold Member

    Recall the basic computational formula for the coordinate representation [T] of a linear transformation T:
    [T(v)] = [v] [T]​

    (Incidentally, this is the main reason I prefer the convention where vectors are column vectors; so that this identity isn't 'backwards')

    Or, with indices, if w = T(v), then [itex]w^i = T^i_j v^j[/itex]

    If you want to try working it out again, then don't read below this point.

    Okay, the calculations work out to:
    [tex][v]_{B'} [G]_{B'} = [G(v)]_{B'} = [G(v)]_B \Lambda = [v]_B [G]_B \Lambda = [v]_{B'} \Lambda^{-1} [G]_B \Lambda[/tex]
    and so
    [tex][v]_{B'} \left( [G]_{B'} - \Lambda^{-1} [G]_B \Lambda \right) = 0[/tex]
    which implies, because this is true for all coordinate tuples [itex][v]_{B'}[/itex],
    [tex][G]_{B'} = \Lambda^{-1} [G]_B \Lambda[/tex]

    Your observation was correct -- this is different than the constraint we needed on G in order for our naive attempt at defining an inner product to work!

    The notion of an inner product clearly makes sense -- but it is not so clear that inner products bear a nice relation to linear transformations of tangent vectors. Now that we've uncovered a contradiction, how would you proceed in developing differential geometry?
  18. Jun 16, 2008 #17
    I think there is a problem with your derivation. How do we know that the following necessarily holds?
    [tex][G(v)]_{B'} = [G(v)]_B \Lambda [/tex]

    I think that step is in error, because the contradiction it would raise isn't just a surface issue that you can avoid and develop differential geometry around. If you follow the contradiction back to the assumptions, it would literally mean that there exists no function of tangent vectors satisfying all the properties of an inner product! (Except the Euclidean inner product, for which G would be I and there would be no contradiction.)

    Also, on the previous page there was the issue of indices acting screwy and not matching when changing coordinates. I'm pretty sure that is because the indices on G were wrong to start out with. If you use [itex]\left<\textbf{v},\textbf{w}\right>=v^i G_{i j} w^j[/itex] instead of [itex]v^i G^i_j w^j[/itex], then everything works out fine. Also, having both indices lower like this is proper, since G is a linear functional on tangent vectors in V for both inputs.

    Here is a summary of the situation as I understand it (I'm really starting to warm up to your notation, and will use it here. Also, I'm using column vectors instead of row vectors)

    Coordinate representation of the inner product:
    Let B = [itex]\{\textbf{e}_1, \textbf{e}_2, ...\}[/itex] be a basis for the tangent space V. Then
    [tex]\left<\textbf{v},\textbf{w}\right> = \left<v^1\textbf{e}_1+v^2\textbf{e}_2+...,\;w^1\textbf{e}_1+w^2\textbf{e}_2+...\right> = \sum_{i j} v^i w^j \left<\textbf{e}_i,\textbf{e}_j\right>[/tex]

    [tex]\left[G\right]_B=\left[\begin{matrix}\left<\textbf{e}_1,\textbf{e}_1\right> & \left<\textbf{e}_1,\textbf{e}_2\right> & ... \\ \left<\textbf{e}_2,\textbf{e}_1\right> & \left<\textbf{e}_2,\textbf{e}_2\right> & ... \\ \vdots & \vdots & \ddots \end{matrix}\right] [/tex]

    [tex]\left<\textbf{v},\textbf{w}\right> =\left(\left[\textbf{v}\right]_B\right)^T \left[G\right]_B \left[\textbf{w}\right]_B[/tex].

    Change of coordinates:
    Let B' be another basis for V. Then the coordinate vector representation in one basis can be transformed into the coordinate vector representation in another basis via a linear transformation:
    [tex][\mathbf{v}]_{B'} = \Lambda [\mathbf{v}]_{B}[/tex]

    Transformation of G based on inner product properties:
    [tex]\left<\textbf{v},\textbf{w}\right>= \left(\left[\textbf{v}\right]_B\right)^T \left[G\right]_B \left[\textbf{w}\right]_B = \left(\Lambda^{-1}\left[\textbf{v}\right]_{B'}\right)^T \left[G\right]_B \Lambda^{-1} \left[\textbf{w}\right]_{B'} = \left(\left[\textbf{v}\right]_{B'}\right)^T \left(\Lambda^{-1}\right)^T \left[G\right]_B \Lambda^{-1} \left[\textbf{w}\right]_{B'} [/tex]
    [tex]\left[G\right]_{B'} = \left(\Lambda^{-1}\right)^T \left[G\right]_B \Lambda^{-1}[/tex]

    (Questionable) Transformation of G based on linearity:
    [tex] [G]_{B'}[v]_{B'} = [G(v)]_{B'} = \Lambda [G(v)]_B = [G]_B \Lambda [v]_B = \Lambda [G]_B \Lambda^{-1} [v]_{B'}[/tex]
    [tex] \left( [G]_{B'} - \Lambda [G]_B \Lambda^{-1} \right)[v]_{B'} = 0[/tex]
    for all [itex][v]_{B'}[/itex]
    [tex][G]_{B'} = \Lambda [G]_B \Lambda^{-1}[/tex]

    Incidentally, this coincides with the result from inner product properties if [itex]\Lambda[/itex] is orthogonal. Perhaps [itex][G(v)]_{B'} = [G(v)]_B \Lambda [/itex] only if [itex]\Lambda[/itex] is orthogonal?
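
    To check that last guess numerically, here is a quick numpy sketch (made-up numbers, purely illustrative):
[code]
import numpy as np

G = np.array([[2.0, 0.5],
              [0.5, 1.0]])              # a symmetric, positive-definite "metric"
L = np.array([[1.0, 2.0],
              [0.0, 3.0]])              # a non-orthogonal change of basis
Li = np.linalg.inv(L)

inner_rule  = Li.T @ G @ Li             # transformation forced by the inner product
linear_rule = L @ G @ Li                # transformation of a linear map V -> V
print(np.allclose(inner_rule, linear_rule))    # False: the two rules disagree

Q = np.array([[0.0, -1.0],
              [1.0,  0.0]])             # an orthogonal (rotation) matrix
Qi = np.linalg.inv(Q)
print(np.allclose(Qi.T @ G @ Qi, Q @ G @ Qi))  # True: the rules agree when the change of basis is orthogonal
[/code]
    So the two transformation laws really are different, and they only coincide for orthogonal changes of basis.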
  19. Jun 17, 2008 #18


    Staff Emeritus
    Science Advisor
    Gold Member

    If we assume G is a linear transformation (i.e. that its coordinate representation is an ordinary matrix), then G(v) is a tangent vector!

    I've been exhausted lately, I dunno if I can spend as much time on this trying to ask leading questions, so I'll go straight for the point.

    Because we see interesting objects transforming differently than simple things (like tangent vectors or linear transformations of tangent vectors), that shows that we must be interested in a greater variety of objects.

    The correspondence between indices and matrix algebra is that superscripts are row indices, and subscripts are column indices. You are aware that G should really have two lower indices (of course, this is a little circular, because the index convention was developed, I assume, precisely because it respects this distinction), and so it shouldn't be a matrix. Or at least, it shouldn't be an n by n matrix -- in one of the standard ways of representing such things, the coordinate representation of G would, in fact, be a 1 by n^2 matrix (actually, it would be partitioned; n partitions of n columns each), and the inner product would be computed by

    [tex]\langle x, y \rangle = [G]_B ([x]_B \otimes [y]_B)[/tex]

    where, for example, we use [itex]\otimes[/itex] to denote the Kronecker product.
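
    In two dimensions, for instance, this works out to
    [tex][x]_B \otimes [y]_B = \left[\begin{matrix}x^1 y^1 \\ x^1 y^2 \\ x^2 y^1 \\ x^2 y^2\end{matrix}\right], \qquad [G]_B = \left[\begin{matrix}G_{11} & G_{12} & G_{21} & G_{22}\end{matrix}\right], \qquad [G]_B\left([x]_B \otimes [y]_B\right) = \sum_{ij} G_{ij}\, x^i y^j.[/tex]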

    Aside: can you find a good product [G]_B [x]_B that's based on viewing [G] not as a 1xn^2 matrix, but instead as a 1xn matrix of 1xn matrices? Can you find a second one? (There are only two "good" ones; the two choices are obvious in index notation.) For either of those products, what kind of object is [G]_B [x]_B? Can you define a reasonable product Gx? What kind of object would it be?

    You kept wanting to write [itex][v]^T_B[/itex], the (matrix) transpose of the coordinate representation of v. Such a thing, of course, is not a (column, coordinate) vector. What sort of role does it play in the matrix algebra? What kind of object could have such a coordinate representation?

    Coming up with an interpretation of [itex]v^T[/itex] is harder -- defining a 'transpose' operation on vectors is equivalent to choosing a metric! For example, in special relativity, the 'right' transpose to use is

    [tex]\left[ \begin{array}{c} c t \\ x \\ y \\ z \end{array} \right]^T = \left[ \begin{array}{cccc} ct & -x & -y & -z \end{array} \right][/tex]

    which gives the Minkowski inner product as [itex]\langle v, w \rangle = v^T w[/itex].
  20. Jun 17, 2008 #19
    Ahhh, I see! You are requiring strict enforcement of the idea that functionals correspond to row vectors and tangent vectors must correspond to column vectors respectively in the matrix representation. I was just using the matrix notation as a shorthand for summation with no deeper meaning.

    [tex]\left[\begin{matrix}\left[\begin{matrix}G_{1 1} \\ G_{1 2} \\ \vdots \end{matrix}\right] \\ \left[\begin{matrix}G_{2 1} \\ G_{2 2} \\ \vdots \end{matrix}\right] \\ \vdots \end{matrix}\right]\left[\begin{matrix}x_1 \\ x_2 \\ \vdots \end{matrix}\right]=\left[\begin{matrix}x_1\left[\begin{matrix}G_{1 1} \\ G_{1 2} \\ \vdots \end{matrix}\right] + x_2\left[\begin{matrix}G_{2 1} \\ G_{2 2} \\ \vdots \end{matrix}\right] + ...\end{matrix}\right][/tex]


    [tex]=\left[\begin{matrix}\left[\begin{matrix}G_{1 1} \\ G_{1 2} \\ \vdots\end{matrix}\right]\cdot\left[\begin{matrix}x_1 \\ x_2 \\ \vdots\end{matrix}\right] \\ \left[\begin{matrix}G_{2 1} \\ G_{2 2} \\ \vdots\end{matrix}\right]\cdot\left[\begin{matrix}x_1 \\ x_2 \\ \vdots\end{matrix}\right] \\ \vdots \end{matrix}\right][/tex]

    In this new system where rows and columns actually have meaning, I'm not sure what it would represent. If you want a "real" row vector, you can't just transpose a box of numbers; you need to do something like the Minkowski transpose, or whatever is appropriate for the space you're in.
  21. Jun 17, 2008 #20


    Staff Emeritus
    Science Advisor
    Gold Member

    Almost right, but G (with lower indices) is a row vector of row vectors, not a column vector of column vectors.