Can a geometric vector become a covector when referred to a dual basis?

In summary: a finite-dimensional vector space and its dual are isomorphic, but the two spaces are not equal. They may, as vector spaces, be 'indistinguishable', but there is more information there than 'just a vector space'.
  • #1
matheinste
Hello all.

I was beginning to feel at home with vectors and covectors, but while trying to fully understand the concepts this query came up.

Excuse the lack of rigorous definition but I think you will realize what I am aiming at.

Take a geometrical vector in a finite dimensional space and give it an oblique linear coordinate basis. Take this vector as not representing any physical quantity. Construct the dual basis. If we call the vector referred to the original basis a VECTOR, does it become a COVECTOR when referred to the dual basis?

If this is the case, and as the first choice of basis was arbitrary, then the dual basis could have been chosen as the original basis ( ? ), and the vector would be a VECTOR when referred to this basis and a COVECTOR when referred to the new dual basis.

I am not sure if my reasoning is correct. It seems too simple and I have a feeling that a metric will in some way be involved and invalidate this reasoning.

Matheinste
 
  • #2
I'm not sure I really understand your question, but I'll have a go. At each point on a manifold, there exists a tangent vector space, consisting of the tangent vectors at that point. To each of these tangent vector spaces, there exists a dual space called the cotangent vector space, consisting of the cotangent vectors at each point. Now, clearly, the dual of the cotangent space is the tangent space. Thus, we could call elements in the tangent space covectors, and elements of the cotangent space vectors. Is this your point?
 
  • #3
Thanks Cristo.

Yes that is exactly what I was asking.

So really, as long as one lives in the dual space of the other, the names are interchangeable, and this brings me to a point I don't fully understand. Vectors and covectors are often defined by the way they transform, and merely changing names cannot alter this, so there is obviously something basic which I have not grasped about the very fundamentals of these transformations.

Thanks Matheinste.
 
  • #4
Well, if you swap the names for the object, then you would of course swap their definitions. There's no problem with this, since they are just names for things, but there is no advantage to doing it!
 
  • #5
You have met your first case of 'isomorphic but not equal.' A vector space is isomorphic to its dual, but the isomorphism is not canonical - it requires a choice of basis. Anyway, the two spaces are not equal. They may, as vector spaces, be 'indistinguishable', but there is more information there than 'just a vector space'.
 
  • #6
Thanks Matt.

I was expecting something along those lines, and I will explore further the definition of isomorphism (it sounds much more like real mathematics) and see how I get on. When (not if) I have other questions I will get back.

For the moment thanks for your input.

Matheinste.
 
  • #7
Here's one example of why they are 'different' spaces. Suppose that you have two vector spaces and a linear map f:X-->Y. Then f induces a linear map on the dual spaces _but in the other direction_, i.e. there is a map f*:Y*-->X*. So the isomorphism of a space with its dual is not a natural one.
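Here is a small numerical sketch of that reversal (my own illustration, not from the post; the matrix and covector are invented). If f:X-->Y acts on coordinate columns as a matrix A, the induced map f*:Y*-->X*, defined by f*(phi) = phi o f, acts on a covector's coefficient vector as A-transpose:

[code=python]
import numpy as np

# a linear map f: R^3 -> R^2 given by an (invented) matrix A
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])

def f(x):
    return A @ x

def f_star(phi):
    """f*(phi) = phi o f: a covector on Y pulls back to a covector on X."""
    return lambda x: phi(f(x))

phi = lambda y: np.array([4.0, -1.0]) @ y   # a covector on Y = R^2
x = np.array([1.0, 1.0, 1.0])

# the pullback agrees with A-transpose acting on phi's coefficient vector
print(f_star(phi)(x), (A.T @ np.array([4.0, -1.0])) @ x)   # 8.0 8.0
[/code]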
 
  • #8
here's another take. let M be a manifold. then consider curves in M and functions on M. i.e. curves in M are maps R-->M, and functions on M are maps M-->R. do they look the same to you?

but they can be paired with each other, in various ways, around any point p. i.e. given a point p, a function f defined near p, and a curve passing through p, i.e. a function s:R-->M with s(0) = p, we can compose, getting (f o s), a real valued function R-->R, which we can differentiate at t=0, getting a number, called <s,f>(p).

now we make this pairing non degenerate as follows: consider the equivalence relation on functions near p, where two functions f,g are equivalent provided <s,f>(p) = <s,g>(p) for all curves s. an equivalence class of functions f is called a covector at p.

on the other hand, if we set two curves r,s equivalent provided we have <r,f>(p) = <s,f>(p) for all functions f near p, then the equivalence class of a curve is called a (tangent) vector at p.

these objects are, I hope, obviously not the same, no matter how carelessly we name them.

the analogy with Matt's example is that his dual vectors are linear maps L:V-->R, and his vectors are linear maps m:R-->V, which can be identified with their values at 1, i.e. linear maps R-->V are the same as vectors in V, whereas linear maps V-->R are not the same as vectors in V, unless you have a dot product given.

on a manifold, vectors are the linear part of curves, while dual vectors are the linear part of functions (and never the twain shall be the same).
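a quick numerical sketch of this pairing (my own illustration, with an invented function and curve, taking M = R^2): <s,f>(p) is just d/dt f(s(t)) at t=0, and two curves with the same velocity at p give the same number, which is the point of the equivalence classes.

[code=python]
import numpy as np

def f(x):                        # an invented smooth function on M = R^2
    return x[0]**2 + 3.0 * x[0] * x[1]

def pairing(s, f, h=1e-6):
    """<s, f>(p) = d/dt f(s(t)) at t = 0, via a central difference."""
    return (f(s(h)) - f(s(-h))) / (2.0 * h)

p = np.array([1.0, 2.0])
v = np.array([0.5, -1.0])
s1 = lambda t: p + t * v                                # a curve with s1(0) = p
s2 = lambda t: p + t * v + t**2 * np.array([7.0, 7.0])  # same velocity at p

grad_f = np.array([2 * p[0] + 3 * p[1], 3 * p[0]])      # gradient of f at p
print(pairing(s1, f), pairing(s2, f), grad_f @ v)       # all ~ 1.0
[/code]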
 
Last edited:
  • #9
Thanks Mathwonk and Matt.

I am happy with isomorphisms for now. Are there maps from a vector space to its dual? As far as I know I haven't come across any examples. Although the two spaces are isomorphic, their constituents seem to be of a very different form.

Hopefully, as usual, there is a simple answer which I have overlooked.

Thanks Matheinste.
 
  • #10
By definition, since they are isomorphic, there are not only maps between them but invertible linear maps. The maps are not canonical - they require a choice of basis. Any two vector spaces have maps between them.
 
  • #11
of course you were talking about finite dimensional vector spaces, but for infinite dimensional ones the duals do not seem to be isomorphic to the original space.

suppose for instance we have a space with a countable basis. this basis defines an isomorphism of the space with the countable direct sum of copies of the field, i.e. every vector is represented by a function from the basis to the field, which has value zero except finitely often.

but an element of the dual space is any linear map from the original space, say V, to the field, and hence can have any value on the basis. so the dual space is the direct product of countably many copies of the field, i.e. a dual vector is represented by an arbitrary map from the basis to the field.

in general this seems to make the dual space much larger than the original space, and it is harder to find a basis for the dual space. i.e. there is a pairing between the two spaces defined by the original basis, wherein we "dot" a sequence with finitely many non zero entries against one with infinitely many, getting a finite answer. but not every function can be obtained by dotting elements of V against vectors in V, i.e. against sequences with only finitely many non zero entries.

so the covectors dual to the basis for V do not give a basis for V*. i.e. the pairing is not perfect.
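here is a tiny sketch of this (my own, not from the post): represent V as dicts with finite support; then any coefficient function at all defines a covector, because the pairing is always a finite sum.

[code=python]
# V = sequences over the field with finitely many nonzero entries,
# stored as dicts {index: coefficient}; a covector is ANY function of
# the index, since the pairing below only sums over the finite support.

def pair(phi, v):
    """<phi, v>: sum phi(n) * v_n over the finite support of v."""
    return sum(phi(n) * c for n, c in v.items())

v = {0: 2.0, 5: -1.0}        # the vector 2*e_0 - e_5
phi = lambda n: 1.0          # the covector (1, 1, 1, ...): a fine element of
                             # V*, but not "dotting with an element of V",
                             # since (1, 1, 1, ...) has infinite support
print(pair(phi, v))          # 1.0
[/code]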

this same phenomenon occurs in analysis when we look e.g. at the space of compactly supported smooth functions, analogous to sequences with finitely many non zero entries. we can also dot these against any locally integrable functions, by integrating their product, since the product will be compactly supported.

that means all locally integrable functions define covectors for the space of compactly supported smooth ones. sometimes the covectors for this space are called distributions, and one tries to prove that all distributions have some nice form, like being represented by some functions more general than we started with.
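in the usual notation, the pairing described here is (a standard formula, not verbatim from the post)

[tex] \langle f, \varphi \rangle = \int f(x)\,\varphi(x)\,dx [/tex]

for phi compactly supported and smooth, and f locally integrable; the integral is finite precisely because the product is compactly supported.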
 
  • #12
Thank you both for your replies. I am not yet ready for infinite dimensional spaces but that time will come.

What I'm really after is an example of a map from a space to its dual. When I see one I should have no trouble making others up.

When we map from a vector space to another vector space we map a vector in one space to a vector in the other. So I assume that a map from a space to its dual takes a vector in the first space and produces a covector or linear functional in the dual. For some reason I cannot think of a transform that will do this.

Matheinste.
 
  • #13
Well, to define the linear map, you need only tell me where a basis goes. Also, in order to define a linear functional one need only define its value on a basis.

So let e1, ...en be a basis of V. Then if we define the value of n different functions on the ei, that will define a linear map. One obvious choice would be to define

[tex]f_i(e_j)=\delta_{ij}[/tex]

This gives us what is called the dual basis to e1,...en
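A numerical sketch of this construction (my own, assuming the e_i are written as coordinate columns; the basis is invented): the coefficient rows of the dual-basis functionals are exactly the rows of the inverse of the basis matrix.

[code=python]
import numpy as np

# columns of E are a deliberately oblique (invented) basis e_1, e_2 of R^2
E = np.array([[1.0, 1.0],
              [0.0, 2.0]])

F = np.linalg.inv(E)   # row i of F is the coefficient vector of f_i

# check f_i(e_j) = delta_ij: F @ E should be the identity matrix
print(F @ E)
[/code]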
 
  • #14
a basis defines an isomorphism of V with k^n, so it suffices to show a map from k^n to its dual.

a vector in k^n has a representation as an n tuple of elements of k, i.e. as a = (a1,...,an). looking at this as the coefficients of a linear form, i.e. as a1X1+...+anXn, gives a corresponding linear function on k^n.

more simply, if a is a vector in k^n, then the action of dotting with a is a covector. so sending a to a.( ) is a map from k^n to its dual.

this is the same map deadwolfe gave above, since dotting with ej gives delta(ij) on ei.
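a quick sketch of this map (my own illustration): send a to the functional a.( ), and check that dotting the standard basis vectors against each other gives the deltas.

[code=python]
import numpy as np

def to_covector(a):
    """the map k^n -> (k^n)*: send a to the functional a . ( )."""
    return lambda x: a @ x

e = np.eye(3)   # rows are the standard basis of R^3
# dotting with e_j gives delta(ij) on e_i, i.e. exactly the dual basis
print([[to_covector(e[j])(e[i]) for i in range(3)] for j in range(3)])
[/code]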
 
  • #15
Thanks to both of you.

I think I am getting there. It is much as I thought it would be but I have not yet grasped the proper notation required to explain exactly what I mean.

So mapping ax1, bx2, ... in V to a(.), b(.), ... in V* is such a map.

So a(.), and not the result when a vector is plugged in, is the covector.

Any corrections welcome.

Matheinste.
 
  • #16
Another point to add to all of these. A linear space V (without any metric) has as its symmetry group the full group GL(V), the group of arbitrary invertible linear transformations (expressible as arbitrary invertible matrices once you choose a basis). Indeed, by definition V transforms as the vector representation of this group. You can then define the dual space V* of linear functionals on V, which transforms under the distinct dual representation. Note in particular that doubling the magnitude of all elements of V would correspond to halving all elements of V*. We call elements of V vectors and elements of V* co-vectors.

Now we don't need to invoke a basis to see these properties. But you can take an arbitrary basis of V,
[tex]e_1, e_2, \cdots e_n[/tex]
and from it define a dual basis:
[tex] f^1,f^2,\cdots f^n[/tex]
which has the property that (remembering dual elements are functions on V):
[tex] f^k(e_j) = \delta^k_j[/tex]

You can then in this choice of basis and dual basis speak of the dual of a specific vector but it is very very basis dependent since changing just one element of the original basis will change every element of the dual basis.

But wait! That's not all! OK, now take this space V and define a symmetric bilinear form M, or in simple terms a metric or "dot product".
[tex] x\cdot y = M(x,y)=M(y,x)[/tex]
This allows you to compare vectors, talk about unit vectors and orthogonality.
For simplicity let's assume the metric is non-singular and positive definite so that the space is then Euclidean. All of what follows generalizes to indefinite spaces (such as Minkowski space-time) as well.

Once you do this then you can describe (in the finite dimensional case) each linear functional as the dot product with some vector (this is the Riesz representation theorem).

You then also can define an ortho-normal basis:
[tex] u_1, u_2, \cdots , u_n[/tex]
and the dual basis will necessarily be:
[tex] v^k : v^k(x) = u_k\cdot x[/tex]
So the co-vectors can be expressed as vectors with an implied dot product.
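A small numerical sketch of this identification (my own, with an invented metric): with a metric M, the covector x -> M(v,x) has coefficient column M v, so "lowering" a vector to its dual is a single matrix multiplication.

[code=python]
import numpy as np

M = np.array([[2.0, 1.0],    # an invented symmetric positive-definite metric
              [1.0, 3.0]])

def lower(v):
    """v -> M(v, . ): the metric-induced covector, as a coefficient vector."""
    return M @ v

v = np.array([1.0, -1.0])
x = np.array([4.0, 5.0])
print(lower(v) @ x, v @ M @ x)   # the two agree: -6.0 -6.0
[/code]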


You will then find that the dual of a vector is now uniquely defined because of the metric structure we've added. What's more the isometry group now that we've added this metric structure is the orthogonal subgroup O(V;M) of GL(V).

What you will then see is that when we restrict to non-inverting transformations SO(V;M), vectors and covectors transform identically. (They may differ in the general orthogonal group, depending on the parity of the dimension.)

In summary: If you always stick to orthogonal transformations and orthonormal bases then you can consider vectors and covectors equivalent entities. It is when you consider arbitrary linear transformations then the distinction between vector and covector really becomes apparent.

One especially important place this distinction comes up is when you try to determine the relationship between differentials in different coordinate systems versus the relationship between partial derivatives. The key point is that the contraction of differentials with partial derivatives is the contraction of a vector and a co-vector, which yields a scalar (and is thus independent of changes of coordinates).

[tex] dx^k\frac{\partial}{\partial x^k} = dy^k \frac{\partial}{\partial y^k}[/tex]
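A numeric check of this invariance (my own sketch, with an invented linear change of coordinates): if vector components transform by the Jacobian J, and covector components by the inverse transpose of J, the contraction is unchanged.

[code=python]
import numpy as np

# an invented linear "change of coordinates" y = J x, with Jacobian J
J = np.array([[1.0, 2.0],
              [0.0, 1.0]])

v = np.array([3.0, 4.0])        # vector components in x-coordinates
w = np.array([5.0, -2.0])       # covector components in x-coordinates

v_new = J @ v                   # vector components: transform by J
w_new = np.linalg.inv(J).T @ w  # covector components: inverse transpose

# the contraction is the same scalar in both coordinate systems
print(w @ v, w_new @ v_new)     # 7.0 7.0
[/code]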

I hope this clarified more than it confused. I find that if you look at the groups and their representations, it is there that you find the fundamental meaning of vectors, tensors, and such.

Regards,
James Baugh
 
  • #17
nice, but i do not see how covectors and vectors are ever the same, since they transform differently. i.e. functions are never to me the same as vectors, i.e. the functor Hom(k,.), which is covariant, is never the same as Hom(.,k), which is contravariant. but we have had this discussion infinitely many times before, and people who prefer indices to what they represent never seem to agree with me. i.e. even in the orthogonal case the matrices [aij] and [aji] are different.

I.e. just because you have chosen a metric, and hence arranged that your transformation matrices have transpose equal to their inverse, does not make the transpose equal to the matrix itself. i.e. transforming by M or by M inverse is not the same. covectors still transform by M transpose, and vectors by the original matrix M.
 
  • #18
Hello mathwonk.

This also confuses me at a basic level. A vector seems to me a sort of passive object and a covector as you say seems to be a function or active entity. My understanding is improving but I am still not at ease with the subject and I will probably need more help when I have digested the last two posts.

Thanks. Matheinste.
 
  • #19
Pardon then for any confusion I may have added.

I see a "vector" as any element of a set of objects which can be added and multiplied by scalars. (Actually in my mind a "vector space" is simply an abelian Lie group.)

What's more you can think of a vector [tex]\mathbf{v}[/tex] just as easily as a linear functional on the dual vector [tex] \psi[/tex] by defining:
[tex] \mathbf{v}(\psi)\equiv \psi(\mathbf{v})[/tex]

mathwonk, your point is well taken, but you are thinking in terms of matrix notation and not the actual group action. You can take the transpose of the right action of the matrix on the co-vector expressed as a row, and you get the identical left action of the transposed matrix on the column. Since the transpose is the inverse for orthogonal transformations, you get the same linear combinations of prior basis co-vectors for the new co-basis as you get when transforming the prior vector basis to the new basis.

As a rule you write them differently so that, if you don't use orthogonal transformations, you will still get the correct transformation; however, you can always write both vector and covector in terms of column matrices of components, and then vectors are transformed by some matrix and co-vectors are transformed by the transposed inverse matrix.
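A quick numerical check of this claim (my own sketch, with invented matrices): for an orthogonal matrix the transposed inverse is the matrix itself, so the two transformation rules coincide; for a general invertible matrix they do not.

[code=python]
import numpy as np

theta = 0.3                        # an invented rotation: an orthogonal map
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# orthogonal case: the covector rule (inverse transpose) equals the vector rule
print(np.allclose(np.linalg.inv(R).T, R))    # True

A = np.array([[2.0, 1.0],          # a non-orthogonal map: the rules differ
              [0.0, 1.0]])
print(np.allclose(np.linalg.inv(A).T, A))    # False
[/code]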

Again look at the group representations. Orthogonal and symplectic groups have a single vector representation, linear and unitary groups have two conjugate vector irreps. Specifically when you look at the orthogonal subgroup of a linear or unitary group you get that both of these conjugate vector irreps become isomorphic irreps of the orthogonal subgroup.

Regards,
James Baugh

PS Remember "isomorphic" is a loaded term. You must be exact about the category in which you are using the term.
R.
J.B.
 
  • #20
James, you are quite right there is no such thing as a vector as opposed to a covector in the absolute sense. I was using the terms in the following way. One begins with a fixed vector space V. Then one speaks of vectors in V as opposed to covectors on V. Thus there is a clear distinction between vectors and covectors with respect to V.

This arises naturally when one has a given manifold M to work on. then the "vectors" in that context are the elements of the tangent spaces to M, and the covectors are elements of the dual tangent spaces.

Instead of talking about vectors and covectors, we can look at the functors Hom(k, .) and Hom(. ,k). To me these are very different objects (acting either on the category of k vector spaces or the category of vector spaces with inner product), because (.) = Hom(k,.), and (.)* = Hom(.,k) are functors with different variance.

Looking at a vector as a linear function on its dual space is saying that V is equivalent to V**, not to V*. the fact that even this natural equivalence fails in infinite dimensions should suggest that V is not really the same as V**. There is a map from V to V** which is an isomorphism only when V is finite dimensional.

and do you mean you think of a vector space as a lie algebra? clearly a product of circles is an abelian lie group, but not a vector space.
 
  • #21
Ok James, I think I see the sense in which you are considering V and V* to be equivalent. If we look at the category of finite dimensional vector spaces with an inner product, in which as you say isomorphisms are orthogonal transformations, AND we consider only those maps which are orthogonal isomorphisms, and not all linear maps, then there is apparently an equivalence between the functors

V-->V and V-->V*, acting on these restricted categories.

i.e. for each V, assign the isomorphism V-->V* taking x to x.( ), and for each orthogonal isomorphism f:V-->W, send f to the map (f^(-1))* = ( ) o f^(-1), from V*-->W*.

To me there are two somewhat unnatural restrictions here, to finite dimensions, and to only considering maps which are orthogonal isomorphisms. But in some restricted situations this could be useful, as apparently it is since you use it.

Whenever you need more flexible assumptions, i.e. infinite dimensions, or maps which do not preserve length, you will have to treat V and V* differently. For instance in the setup above one cannot even consider non trivial "orthogonal" projections, nor scalings.

e.g. in a physical environment it seems all particles would have to be moving at unit speed, a rather restrictive hypothesis.
 
  • #22
mathwonk said:
...
and do you mean you think of a vector space as a lie algebra? clearly a product of circles is an abelian lie group, but not a vector space.

A vector space is an abelian Lie group, with group product + and with the scalar multiples arising by exponentiation (scalar multiplication extends iterated addition).
I think of the Lie algebra (being also a vector space) as the Abelianization of the corresponding group with the Lie product being the secondary structure allowing recovery of the (local) group product.

One also gets e.g. the Killing form, and a symplectic form via Killing norm of Lie products, so I then like to think of real vector spaces with auxiliary structures such as a metric or symplectic form as Lie algebras, possibly Lie subalgebras.

This gives me in most cases the Lie groups as a base category. The reason I like this base category is that one needn't go off the reservation to speak of (special) automorphism groups. It is, I think, a natural base language for formulating physics. It is the relativity groups and dynamic groups which define the physics of a system (representations then being simply group homomorphisms).

Regards,
James Baugh
 
  • #23
mathwonk said:
Ok James, I think I see the sense in which you are considering V and V* to be equivalent. ...

To me there are two somewhat unnatural restrictions here, to finite dimensions, and to only considering maps which are orthogonal isomorphisms. But in some restricted situations this could be useful, as apparently it is since you use it.
Yes I like the finite dimension restriction. I think infinite dimensional structures are essentially non-operational. They should arise only as singular limits e.g. the horizon when we assume Earth is flat. Too many essential problems are swept under the infinite dimensional rug rather than addressed IMNSHO.

But as you say, the identification of which I speak is easily broken once you extend the group. I see it as especially important to recognize this specific point, so that you don't accidentally generalize incorrectly from examples where you've failed to think outside an orthogonal group paradigm, or fail to recognize the equivalence when you must, for other reasons, remain in an orthogonal group paradigm.
mathwonk said:
Whenever you need more flexible assumptions, i.e. infinite dimensions, or maps which do not preserve length, you will have to treat V and V* differently. For instance in the setup above one cannot even consider non trivial "orthogonal" projections, nor scalings.

e.g. in a physical environment it seems all particles would have to be moving at unit speed, a rather restrictive hypothesis.
But in the relativistic treatment of massive particles they do "travel at unit speed". That is precisely the point: the inertial frame relativity group is the orthogonal group SO(3,1). (And the Poincare group as well is the contraction of another orthogonal group, deSitter or anti-deSitter.)

(And so this is one of those cases where one needs to pay attention to the orthogonal vs non-orthogonal cases to pick a better example!)

But you are correct. For general work it is best to understand the distinction between V and V* while understanding that there is an isomorphism (in the finite dimensional case) in the restricted case I mentioned.

Regards,
James
 
  • #24
jambaugh said:
Yes I like the finite dimension restriction. I think infinite dimensional structures are essentially non-operational. They should arise only as singular limits e.g. the horizon when we assume Earth is flat. Too many essential problems are swept under the infinite dimensional rug rather than addressed IMNSHO.

Really? So functional analysis, Hilbert spaces, quantum mechanics, algebraic topology, stable homotopy theory, etc. don't matter and are sweeping the essential problems away? There is a lovely (hard) book by Neeman showing the benefits of accepting infinite direct sums. I don't think any mathematician would want to be without such things these days. Cf. the modern way of using Stmod(kG) and not stmod(kG). Passing to infinite dimensional objects is now seen to be a help, not a hindrance, to understanding finite dimensional things. Indeed, we'd not have the famous BCR paper on classifying thick subcategories of stmod(kG) without this idea. (For mathwonk: the C of BCR is Jon Carlson, and the B is Dave Benson.)
 
  • #25
matt grime said:
Really? So functional analysis, Hilbert spaces, quantum mechanics, algebraic topology, stable homotopy theory, etc. don't matter and are sweeping the essential problems away?
Let's take them one at a time.
Hilbert spaces are not necessarily infinite dimensional... and when used in quantum mechanics, yes, certain essential difficulties are ignored rather than addressed by taking the infinite dimensional cases. This was one of the specific cases to which I was referring. I'm specifically thinking of the lack of finite dimensional unitary reps of non-compact groups. By insisting on unitarity one puts off addressing the need for a pseudo-unitary formulation of QM. Such need keeps popping up when you generalize from the scalar field to field theory of quanta with non-trivial intrinsic spin.

Algebraic topology? I don't see that it necessarily applies to infinite dimensional objects. Indeed, it would seem most if not all physical applications apply algebraic topology to finite dimensional structures. The theory does not rely on manifestly infinite dimensions, though it is applicable to those cases.

Functional analysis, etc, are essential mathematical tools. They are of course necessary when you formulate physical theories in manifestly infinite dimensional representations.

As to the rest of your comments... you stretch my jaws wide to put so many words in my mouth. I said I like the finite dimensionality restriction, not that I thought it should be engraved into the bylaws of all mathematics. The domain to which my preference was directed was physics, which was implied by my reference to operational meaning. When we idealize a measurement of one of a continuum of values we are in fact measuring inclusion within one of a finite number of ranges for those values, as an inescapable pragmatic restriction. We adopt e.g. infinite spectrum observables when we are in fact working at a scale where the consequences of bounded spectrum may be ignored.

In e.g. formulating quantum field theory it is the infinite dimensional base mathematical structures which give rise to the divergences. Indeed the very assertion that the number of photons of a given frequency within a given box is unbounded is contrary to the gravitational considerations. I suggest that by ignoring pseudo-bosonic theories where there are upper bounds on e.g. photon number one may be missing those issues which shed light on a quantum theory of gravity.

I speculate of course, but not without some heuristic evidence. The open-endedness of the bosonic fields is a linearizing approximation not valid for non-linear fields e.g. gravitation. You will note the first attempts at quantum gravity start with the linearizing approximation. Why is anyone shocked (in hindsight) that the theory is not renormalizable?

Regards,
James Baugh
 
  • #26
Algebraic topologists long ago accepted the benefit of using spectra with infinitely many cells (Bousfield, Brown, Dold, Kan et al.)
 

1. What is a geometric vector dual space?

A geometric vector dual space is the set of all linear functionals on a vector space. It allows the study of the geometric properties of the space by considering the behavior of linear functions on it.

2. How is a geometric vector dual space related to a vector space?

A geometric vector dual space is the dual space of a vector space, meaning it is the set of all linear functionals that map the vector space to the real numbers. It is closely related to the vector space in that it provides a way to understand the geometric and algebraic properties of the space through linear functions.

3. What is the dimension of a geometric vector dual space?

For a finite-dimensional vector space, the dimension of the dual space equals the dimension of the original space. This is because the dual basis, constructed from a basis of the original space, contains exactly one linear functional for each basis vector. (For infinite-dimensional spaces the dual is generally larger, as discussed in the thread above.)

4. How is a basis for a geometric vector dual space chosen?

A basis for a geometric vector dual space is chosen by taking the dual basis of a basis for the original vector space. This means that for each vector in the original basis, there is a corresponding linear functional in the dual basis that maps that vector to 1 and all other vectors in the basis to 0.

5. What is the significance of the geometric vector dual space in physics?

In physics, the geometric vector dual space is used to represent physical quantities, such as forces and velocities, through linear functions. This allows for a better understanding and analysis of physical systems and their properties. Additionally, the idea of duality is fundamental in many areas of physics, such as quantum mechanics and relativity.
