What is a Vector? Geometric & Algebraic Definitions

  • Thread starter Rasalhague
In summary, the algebraic definition of a vector is more general than the geometric definition: a vector is an element of a vector space, and the vector itself is invariant under a change of basis.
  • #1
Rasalhague
Elementary definitions seem to come in two flavours: (1) geometric,

"A vector [...] is a combination of direction and magnitude. Vectors have no other properties: the magnitude and direction specify a vector completely" (D.H. Griffel: Linear Algebra and its Applications, Volume 1, p. 3),

and (2) algebraic, e.g.

A vector is an element of a set V associated with a field, [itex]\mathbb{K}[/itex], and operations called vector addition, [itex]V \times V \to V[/itex], and scalar multiplication, [itex]\mathbb{K} \times V \to V[/itex], such that they comprise a vector space, according to the vector space axioms.
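For instance, a few of the vector space axioms can be spot-checked numerically. A minimal sketch (my own, not from any of the texts above), taking V to be [itex]\mathbb{R}^3[/itex] over the field [itex]\mathbb{R}[/itex] with componentwise operations:

```python
# Model vectors in R^3 as tuples with componentwise addition and
# scalar multiplication, then spot-check a few vector space axioms.

def add(u, v):
    return tuple(a + b for a, b in zip(u, v))

def scale(c, v):
    return tuple(c * a for a in v)

u, v, w = (1.0, 2.0, 3.0), (4.0, 5.0, 6.0), (7.0, 8.0, 9.0)

assert add(u, v) == add(v, u)                  # commutativity of addition
assert add(add(u, v), w) == add(u, add(v, w))  # associativity of addition
assert add(u, (0.0, 0.0, 0.0)) == u            # additive identity
assert scale(2.0, add(u, v)) == add(scale(2.0, u), scale(2.0, v))  # distributivity
```

This is of course only a check on one instance, not a proof that the axioms hold.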

Is the algebraic definition more general, and the geometric definition a special case of the algebraic, or are they in some sense equivalent (perhaps given some generalised, abstract definition of direction)?

My second question is about the relationship of vectors to their components and the effects of a change of basis. A lot of the texts I've been reading have emphasised that a vector (at least when considered as a type-(1,0) tensor) is invariant under a change of basis. I take this to mean that the vector itself is thought of, in this sense, as existing independently of its coordinate representation, and that there are infinitely many ordered sets of n numbers which could represent the same n-vector (there being just one such set for each basis), and infinitely many n-vectors which can be represented by a given ordered set of n numbers (there being just one such n-vector for each basis). I was thinking of "n-vector" as meaning an element of an n-dimensional vector space, a vector which requires n components, regardless of what those components happen to be in any particular basis.

And yet I also read statements like, "For any positive integer, n, an n-vector is an ordered set of n numbers" (Griffel, p. 22). And in Chapter 3, e.g. p. 77, Griffel uses the term n-vector for an element of a vector space [itex]\mathbb{K}^{n}[/itex] over a field [itex]\mathbb{K}[/itex]. These n-vectors are presumably ordered sets of n elements, each of which is an element of [itex]\mathbb{K}[/itex].

So, if "every n-dimensional [vector] space over [itex]\mathbb{K}[/itex] is isomorphic to [itex]\mathbb{K}^{n}[/itex]" (Griffel, p. 77), an isomorphism being given by each basis, does this imply that [itex]\mathbb{K}^{n}[/itex] can only have one basis? The name "standard basis" suggests not, but if we allow the possibility of a change of basis in [itex]J : \mathbb{K}^{n} \to \mathbb{K}^{n}[/itex], and if elements of [itex]\mathbb{K}^{n}[/itex] are not defined geometrically, by magnitude and direction, but rather defined (as Griffel defines them) as ordered sets of n particular numbers, then the change of basis may transform one n-vector into a completely different n-vector, rather than simply relabelling it with a different set of components, and would therefore not be a change of basis in the traditional sense, or would it?

Or should I be thinking of a change of basis for [itex]\mathbb{K}^{n}[/itex] as involving multiple copies of [itex]\mathbb{K}^{n}[/itex], one in which the vector itself exists in some sort of absolute sense, and say two other copies of [itex]\mathbb{K}^{n}[/itex] playing the role of coordinate systems in which the vector itself is merely represented (by the same or a different arbitrary element of [itex]\mathbb{K}^{n}[/itex])? I'm thinking here of the definitions of a manifold that I've read and am still digesting. If I've got this right, the vector space [itex]\mathbb{K}^{n}[/itex] where the vector itself lives, in this absolute sense, would be the manifold, and the two other copies, the domain and range of the change of basis [itex]J : \mathbb{K}^{n} \to \mathbb{K}^{n}[/itex], would be mere placeholders or arbitrary ways of representing/incarnating the vector as a set of components. (This viewpoint presumably coming into its own when the manifold is something more complicated than [itex]\mathbb{K}^{n}[/itex].)
 
  • #2
This is my understanding:

Vectors are elements of vector spaces, nothing more, and it is wrong to define a vector to be an ordered tuple of elements from a field. First, this is obviously wrong in the case of infinite-dimensional spaces. Second, while the familiar classification of finite-dimensional vector spaces via dimension shows that this definition would in fact exhaust all the finite-dimensional spaces up to isomorphism, it would place unnecessary restrictions on how particular vector spaces must be constructed. Under this definition, we could not, for example, speak of the space of [tex]m \times n[/tex] matrices because these are not tuples of real numbers. Of course, there is a procedure for identifying such matrices with tuples; there is always such a procedure for any finite-dimensional vector space due to the aforementioned classification result, but the point is that this identification shouldn't intuitively be necessary.

Another example which would be clumsy if one were forced to always carry out this identification is the dual space of one-forms (linear functionals) on a finite-dimensional space.

If it helps, given that a vector is defined as an element of a vector space, nothing more, here is the formalism for introducing and working with coordinates in finite-dimensional spaces. Assume V is an n-dimensional vector space over the field F.

Select a basis [tex]B = (v_1, v_2, \dots, v_n)[/tex] of V. There is no doubt that this is possible; the axiom of choice is not needed to guarantee the existence of a basis in the finite-dimensional case. Now consider the map sending an n-tuple [tex](c_1, c_2, \dots, c_n)^t[/tex] in [tex]F^n[/tex] to [tex]c_1 v_1 + c_2 v_2 + \dots + c_n v_n[/tex] in V. This map is surjective because B spans V. It is injective because B is linearly independent. And it is linear. So it is an isomorphism of vector spaces. The procedure is complete: the tuples in [tex]F^n[/tex] are called the coordinate vectors of their images in V.
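This construction is easy to carry out concretely. A sketch in Python/numpy (the basis and coordinates here are arbitrary examples of mine, with V = R^3 and F = R): writing the basis vectors as the columns of a matrix, the map sending a tuple to c_1 v_1 + ... + c_n v_n is matrix multiplication, and its inverse, which sends a vector to its coordinate tuple, amounts to solving a linear system:

```python
import numpy as np

# Basis vectors of V = R^3 as the columns of a matrix (any invertible
# matrix gives a basis; this one is a hypothetical example).
B = np.column_stack([(1.0, 0.0, 0.0),
                     (1.0, 1.0, 0.0),
                     (1.0, 1.0, 1.0)])

c = np.array([2.0, -1.0, 3.0])   # an n-tuple in F^n
v = B @ c                        # its image c1*v1 + c2*v2 + c3*v3 in V

# The inverse map (vector -> coordinates) solves B x = v; injectivity
# and surjectivity of the coordinate map correspond to B being invertible.
c_back = np.linalg.solve(B, v)
assert np.allclose(c_back, c)
```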
 
  • #3
Rasalhague said:
Is the algebraic definition more general, and the geometric definition a special case of the algebraic, or are they in some sense equivalent (perhaps given some generalised, abstract definition of direction)?
I think I have only seen the "geometric definition" in texts written for people who haven't yet studied linear algebra. I don't think it even qualifies as a definition unless it also states how to specify a "direction".

Rasalhague said:
A lot of the texts I've been reading have emphasised that a vector (at least when considered as a type-(1,0) tensor) is invariant under a change of basis. I take this to mean that the vector itself is thought of, in this sense, as existing independently of its coordinate representation,
That's right.

Rasalhague said:
So, if "every n-dimensional [vector] space over [itex]\mathbb{K}[/itex] is isomorphic to [itex]\mathbb{K}^{n}[/itex]" (Griffel, p. 77), an isomorphism being given by each basis, does this imply that [itex]\mathbb{K}^{n}[/itex] can only have one basis?
No.

Rasalhague said:
...the change of basis may transform one n-vector into a completely different n-vector, rather than simply relabelling it with a different set of components, and would therefore not be a change of basis in the traditional sense, or would it?
Consider (x,y)=x(1,0)+y(0,1)=y(0,1)+(-x)(-1,0). The vector (x,y) has components x and y in the standard basis, and components y and -x in the other basis. Nothing weird about that. Just don't write it as (y,-x), because that would be confusing.
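This can be checked numerically; a quick sketch of mine, with arbitrary sample values of x and y:

```python
import numpy as np

x, y = 3.0, 5.0
B = np.column_stack([(0.0, 1.0), (-1.0, 0.0)])  # the non-standard basis vectors as columns

coords = np.linalg.solve(B, np.array([x, y]))   # components of (x, y) in this basis
assert np.allclose(coords, [y, -x])             # components are y and -x, as claimed

assert np.allclose(B @ coords, [x, y])          # the vector itself is unchanged
```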

Rasalhague said:
Or should I be thinking of a change of basis for [itex]\mathbb{K}^{n}[/itex] as involving multiple copies of [itex]\mathbb{K}^{n}[/itex], one in which the vector itself exists in some sort of absolute sense, and say two other copies of [itex]\mathbb{K}^{n}[/itex] playing the role of coordinate systems in which the vector itself is merely represented (by the same or a different arbitrary element of [itex]\mathbb{K}^{n}[/itex])? I'm thinking here of the definitions of a manifold that I've read and am still digesting. If I've got this right, the vector space [itex]\mathbb{K}^{n}[/itex] where the vector itself lives, in this absolute sense, would be the manifold, and the two other copies, the domain and range of the change of basis [itex]J : \mathbb{K}^{n} \to \mathbb{K}^{n}[/itex], would be mere placeholders or arbitrary ways of representing/incarnating the vector as a set of components. (This viewpoint presumably coming into its own when the manifold is something more complicated than [itex]\mathbb{K}^{n}[/itex].)
It's sometimes useful to think that way (or at least in a similar way, I'm not sure I understood you exactly right). For example, I like to think of Minkowski spacetime as just a vector space, or just a manifold, with points that are assigned coordinates by functions from Minkowski spacetime into [itex]\mathbb R^4[/itex]. What you should be calling "coordinate systems" are those functions, not their codomains (the "copies" of [itex]\mathbb R^4[/itex]). This post shows how each (ordered) basis defines a coordinate system.
 
  • #4
I think the main thing you might be missing is to separate the idea of "vector" from the idea of "coordinate representation of a vector".


We are really good at matrix algebra. It is often fruitful to convert problems to matrix algebra when possible, because we are good at computing with it. (alas, sometimes this conversion is done more often than it really should be, but anyways...)


Imagine for a moment that we have a vector space V that is a black box, something we can, for the most part, only access with the operations of "scalar multiplication" and "vector addition".

In light of my previous comment, to do calculations, we might strive to find an isomorphism S : V --> K^n, so that we can translate questions about the linear algebra of V into questions of matrix algebra over K. This might be done by finding a basis for V.

(V doesn't have to be a black box -- this can be a useful thing to do even when V is something more accessible, or even with K^n)


A change of basis just means choosing a different isomorphism T : V --> K^n. If we change bases, we often have questions like the following:
I have an n-vector that was representing some element of V via S. I want to represent that element via T now. How can I compute this new n-vector?​
This question can be answered by applying the inverse of S then applying T. It gives an isomorphism K^n --> K^n, which we might call a change of basis transformation.
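In coordinates this is easy to sketch (an illustrative example of mine with V = K^n = R^2, where each basis is stored as a matrix whose columns are the basis vectors, so "apply S" means solving a linear system):

```python
import numpy as np

basis_S = np.column_stack([(1.0, 0.0), (1.0, 1.0)])   # hypothetical basis behind S
basis_T = np.column_stack([(0.0, 1.0), (-1.0, 0.0)])  # hypothetical basis behind T

v = np.array([2.0, 3.0])                 # an element of V
c_S = np.linalg.solve(basis_S, v)        # n-vector representing v via S
c_T = np.linalg.solve(basis_T, v)        # n-vector representing v via T

# Applying the inverse of S and then T gives the change of basis
# transformation K^n --> K^n; as a matrix it is basis_T^{-1} basis_S.
change = np.linalg.solve(basis_T, basis_S)
assert np.allclose(change @ c_S, c_T)
```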



You may find it useful to put the "coordinate form of" operation explicitly in your calculations. If you've chosen a basis for your vector space and are writing things in coordinate form relative to that basis, then you could write things like the following:
  • If v is a vector and B is a basis, then [itex][v]_B[/itex] is the n-tuple of the coordinates of v relative to B
  • If T is a linear map V-->V, then [itex][T]_B[/itex] is the n×n matrix of the coordinates of T relative to B. That is, [itex][Tv]_B = [T]_B [v]_B[/itex]
  • Same T as before, but with two bases B and B', we could write [itex][T]_{B',B}[/itex] for the coordinates of T relative to B on input and B' on output. That is, [itex][Tv]_{B'} = [T]_{B',B} [v]_B[/itex]

Note that the change of basis matrix is easy to express with this last one: it is simply [itex][1]_{B',B}[/itex], the matrix of the identity map relative to B on input and B' on output.
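The bracket notation is also straightforward to compute with. A sketch (T and the bases are arbitrary examples of mine; each basis is a matrix whose columns are its vectors, so [v]_B = solve(B, v) and [T]_{B',B} = B'^{-1} T B):

```python
import numpy as np

T  = np.array([[2.0, 1.0],
               [0.0, 3.0]])                        # a linear map in standard coordinates
B  = np.column_stack([(1.0, 0.0), (1.0, 1.0)])
Bp = np.column_stack([(0.0, 1.0), (-1.0, 0.0)])    # the basis B'

v = np.array([1.0, 4.0])
T_BpB = np.linalg.solve(Bp, T @ B)                 # [T]_{B',B}

# [Tv]_{B'} = [T]_{B',B} [v]_B
assert np.allclose(np.linalg.solve(Bp, T @ v),
                   T_BpB @ np.linalg.solve(B, v))

# The change of basis matrix is the special case T = identity:
id_BpB = np.linalg.solve(Bp, B)                    # [1]_{B',B}
assert np.allclose(id_BpB @ np.linalg.solve(B, v),
                   np.linalg.solve(Bp, v))
```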
 
  • #5
The geometric definition of a vector you have there is a bit wishy-washy. For geometric interpretations of vectors, read about inner product spaces.
 
  • #6
Fredrik said:
Consider (x,y)=x(1,0)+y(0,1)=y(0,1)+(-x)(-1,0). The vector (x,y) has components x and y in the standard basis, and components y and -x in the other basis. Nothing weird about that. Just don't write it as (y,-x), because that would be confusing.

So here, if I've got this right, the nonstandard basis is the same as the standard basis except that the first basis vector has been reflected about the y-axis. And {(-1,0), (0,1)} are the component forms of these nonstandard basis vectors expressed in the standard basis, which I suppose are the same as the component forms of the standard basis vectors expressed in this particular nonstandard basis. And a vector which had been called (x,y) in the standard basis would become (-x,y) in the nonstandard basis, where x and y stand for the same pair of numbers. But any set of basis vectors for [itex]\mathbb{R}^2[/itex], when expressed in their own basis, take the form {(1,0), (0,1)}.

So I guess I should ditch the definition "For any positive integer, n, an n-vector is an ordered set of n numbers" (Griffel, p. 22), and continue to think of an n-vector as an element of an n-dimensional vector space, which requires n components if it's to be expressed in component form, but isn't identical to any particular set of n components?

Fredrik said:
It's sometimes useful to think that way (or at least in a similar way, I'm not sure I understood you exactly right). For example, I like to think of Minkowski spacetime as just a vector space, or just a manifold, with points that are assigned coordinates by functions from Minkowski spacetime into [itex]\mathbb R^4[/itex]. What you should be calling "coordinate systems" are those functions, not their codomains (the "copies" of [itex]\mathbb R^4[/itex]). This post shows how each (ordered) basis defines a coordinate system.

In a different thread, you pointed me in the direction of Domenico Giulini's paper (http://arxiv.org/abs/0802.4345), which characterises Minkowski space as an affine space, rather than a vector space. Could we use the same idea but with Minkowski space, and/or the copies of [itex]\mathbb R^4[/itex], regarded as affine spaces, or is it necessary that they be vector spaces? If they were treated as affine spaces, would the difference be that the coordinate functions would supply the origin as well as the basis?
 
  • #7
Hurkyl said:
(V doesn't have to be a black box -- this can be a useful thing to do even when V is something more accessible, or even with K^n)

A change of basis just means choosing a different isomorphism T : V --> K^n.

I like the black box idea. (It sounds less craaazy than my seasonal metaphor of V as heaven, and putting vectors in component form as incarnation!) The importance of distinguishing the concept of a vector from a particular component representation is what all these physics books have been drilling into me. What confuses me is that it seems to clash with Griffel's "an n-vector is an ordered set of n numbers". Perhaps I should mentally amend that to "an n-vector is an element of an n-dimensional vector space; that is, a vector which requires n components when expressed in component form."

I'm still a bit puzzled about the case where we can see inside the black box and know that this particular V actually does happen to be K^n. Are you saying that even then, an element of the vector space K^n is not to be identified with its components or thought of as a particular n-tuple? But then what does it mean to define a change of basis as a map K^n --> K^n? Should I be making a distinction between K^n, a vector space (however defined), and K^n, the set of n-tuples of elements in the field K? Are "coordinate vector" and "coordinate space" useful concepts, and what do they mean if vectors are not to be identified with their coordinates? Do you agree with the following definitions, and if not, how would you modify them?

Wolfram MathWorld: "Euclidean n-space, sometimes called Cartesian space or simply n-space, is the space of all n-tuples of real numbers, (x1, x2, ..., xn)."

Wolfram MathWorld: "A vector is formally defined as an element of a vector space. In the commonly encountered vector space R^n (i.e., Euclidean n-space), a vector is given by n coordinates and can be specified as (A1,A2,...,An)."

Surely any vector in an n-dimensional real vector space can be specified as (A1,A2,...,An), given that this space is isomorphic to R^n. How does this property distinguish Euclidean n-space from any other n-dimensional vector space? How should an element of Euclidean n-space be defined if not as an n-tuple of real numbers?
 
  • #8
Rasalhague said:
So here, if I've got this right, the nonstandard basis is the same as the standard basis except that the first basis vector has been reflected about the y-axis.
What I had in mind was actually a 90-degree counterclockwise rotation of the basis vectors, but since the order of the basis vectors isn't important here, it can be interpreted the way you described it as well.

Rasalhague said:
So I guess I should ditch the definition "For any positive integer, n, an n-vector is an ordered set of n numbers" (Griffel, p. 22), and continue to think of an n-vector as an element of an n-dimensional vectors space, which requires n components if it's to be expressed in component form, but isn't identical to any particular set of n components?
It doesn't really matter if you do or don't, since all n-dimensional vector spaces are isomorphic. The definition of an "n-vector" as an ordered n-tuple of members of some field seems appropriate to me.

Rasalhague said:
In a different thread, you pointed me in the direction of Domenico Giulini's http://arxiv.org/abs/0802.4345" which characterises Minkowski space as an affine space, rather than a vector space.
It doesn't matter if we define it as a vector space, an affine space, or a manifold. The physics is the same either way. Things get easier if we use a vector space, but I suspect that he wanted to avoid that because a vector space isn't manifestly translation invariant (because the 0 vector appears to define a "special" point in spacetime).

Note that Giulini is actually defining Minkowski spacetime to be an affine space and a manifold, in a (for me) slightly cryptic way, by saying that we assume the usual "differential structure".
 
  • #9
Fredrik said:
The definition of an "n-vector" as an ordered n-tuple of members of some field seems appropriate to me.

In your example, x(1,0)+y(0,1)=y(0,1)+(-x)(-1,0), by this definition, (x,y) is a 2-vector, but so is (y,-x). And (y,-x) is not the same pair of numbers as (x,y), so if an n-vector is defined as an n-tuple of members of some field, this seems to be saying that (x,y) is not the same vector as (y,-x). But this seems to contradict the idea that a vector is invariant under a change of basis and independent of its coordinate representation (not to be identified with any particular representation). In the latter way of thinking, isn't (x,y) in the standard basis equal to (y,-x) in the other basis; aren't they the same vector, even though they're different n-tuples?

And doesn't the same pair of numbers (x,y) represent a different vector in the different bases? Hmm, and the bases differ only by a rotation; they're both orthonormal, so does that mean that either of them, viewed in isolation, would qualify as a standard basis (and that therefore there are infinitely many standard bases)??
 
  • #10
Rasalhague said:
In your example, x(1,0)+y(0,1)=y(0,1)+(-x)(-1,0), by this definition, (x,y) is a 2-vector, but so is (y,-x). And (y,-x) is not the same pair of numbers as (x,y), so if an n-vector is defined as an n-tuple of members of some field, this seems to be saying that (x,y) is not the same vector as (y,-x).
That's why I said you shouldn't write (y,-x), "because that would be confusing". We clearly do not have (x,y)=(y,-x) unless the symbols mean different things on the two sides of the equality.

Rasalhague said:
But this seems to contradict the idea that a vector is invariant under a change of basis and independent of its coordinate representation (not to be identified with any particular representation). In the latter way of thinking, isn't (x,y) in the standard basis equal to (y,-x) in the other basis; aren't they the same vector, even though they're different n-tuples?
No, we have x(1,0)+y(0,1)=y(0,1)+(-x)(-1,0), but these things are both =(x,y). Neither of them is (y,-x).

Rasalhague said:
And doesn't the same pair of numbers (x,y) represent a different vector in the different bases? Hmm, and the bases differ only by a rotation; they're both orthonormal, so does that mean that either of them, viewed in isolation, would qualify as a standard basis (and that therefore there are infinitely many standard bases)??
There's only one standard basis on [itex]\mathbb R^n[/itex], and it's (1,0,...,0), (0,1,0,...,0),...,(0,...,0,1). But there are infinitely many orthonormal bases.
 
  • #11
Is "n-vector" synonymous with "n-tuple of elements of a field", "coordinate vector", and "element of a vector space [itex]\mathbb{K}^n[/itex], where [itex]\mathbb{K}[/itex] is a field"? And I take it these are not synonymous with "n-dimensional vector", i.e. an element of a vector space with dimension n, although there will be infinitely many isomorphisms between an n-dimensional vector space over [itex]\mathbb{K}[/itex] and the vector space [itex]\mathbb{K}^n[/itex].
 
  • #12
Fredrik said:
That's why I said you shouldn't write (y,-x), "because that would be confusing". We clearly do not have (x,y)=(y,-x) unless the symbols mean different things on the two sides of the equality.

Are you using (x,y) to stand for a particular ordered set of real numbers, or is (x,y) more like what Blandford and Thorne call "slot-naming index notation" for tensors, where the index only indicates the number (and, by their height, the type) of the tensor's arguments, without referring to a particular ordered set of numerical components in a particular basis? Or should I think of (x,y) as, by definition, the components of an n-vector (ordered n-tuple of elements of the field) wrt the standard basis?

Fredrik said:
No, we have x(1,0)+y(0,1)=y(0,1)+(-x)(-1,0), but these things are both =(x,y). Neither of them is (y,-x).

If we express the vectors (0,1) and (-1,0) in the basis ((0,1), (-1,0)), then these vectors will be 1(0,1) + 0(-1,0) and 0(0,1) + 1(-1,0) wrt this basis. Their components are 1 and 0, and 0 and 1, respectively, in this coordinate system. Would we say that the change of basis gives an isomorphism between these two vectors and the standard basis (or does that ambiguity over whether it's a reflection or a rotation complicate things)?

Fredrik said:
There's only one standard basis on [itex]\mathbb R^n[/itex], and it's (1,0,...,0), (0,1,0,...,0),...,(0,...,0,1). But there are infinitely many orthonormal bases.

Is it that, although not every kind of vector is to be identified with its components, n-vectors (n-tuples of elements of the field over which the vector space is defined), are? So when we write (x,y), where x is some element of [itex]\mathbb{R}[/itex] and y is another, this defines a unique 2-vector. And (y,-x) is a different 2-vector. And if we rotate the basis, now we must use a different 2-vector to represent the same physical quantity, whichever orthonormal basis we started with being identified as "the standard basis" and all other orthonormal bases being defined in terms of this arbitrary choice?

Is it that we can arbitrarily identify any of those orthonormal bases as the standard base, and let the others be defined with respect to that one?
 
  • #13
Rasalhague said:
Wolfram Mathworld: "A vector is formally defined as an element of a vector space. In the commonly encountered vector space Rn (i.e., Euclidean n-space), a vector is given by n coordinates and can be specified as (A1,A2,...,An)."

Surely any vector in an n-dimensional real vector space can be specified as (A1,A2,...,An), given that this space is isomorphic to Rn. How does this property distinguish Euclidean n-space from any other n-dimensional vector space? How should an element of Euclidean n-space be defined if not as an n-tuple of real numbers?

The elements of [tex]F^n[/tex] for a field F are defined to be n-tuples of elements from F. Remember that [tex]F^n[/tex] is the n-fold product of the set F with itself. It happens that there is a natural F-vector space structure on these n-tuples.

As you mentioned, every n-dimensional F-vector space is isomorphic to [tex]F^n[/tex], so in some sense the theory of finite-dimensional spaces is exactly the theory of the spaces [tex]F^n[/tex] for various values of n. [In fact, the category of F-vector spaces is equivalent to the subcategory whose objects are [tex]F^n[/tex] for cardinal numbers n.]

However, just because every n-dimensional space over F is isomorphic to [tex]F^n[/tex] doesn't mean that a vector should be defined as an element of [tex]F^n[/tex] for some n. In fact, this would basically throw away the isomorphism because then the spaces [tex]F^n[/tex] would be the only finite-dimensional spaces over F, so the claim that any n-dimensional F-vector space is isomorphic to [tex]F^n[/tex] would be trivial. It is the benefit of the abstract definition that an n-dimensional space over F whose elements are not n-tuples will still behave exactly as if it were just [tex]F^n[/tex] and, moreover, an explicit isomorphism can be found if one can find a basis for the space.

The important point to remember is that vectors are coordinate-free objects, but coordinate systems can be introduced by selecting a basis and thereby constructing an isomorphism to the appropriate [tex]F^n[/tex]. In linear algebra there is, for example, an interesting and somewhat pretty interplay between the theory of linear operators on a finite-dimensional space and the theory of square matrices over the relevant field. Most results can be stated in two forms: a coordinate-free form in terms of linear operators and a matrix form depending on a selection of a basis (or some other choice of coordinates).

As for the term "n-vector", I don't think it's actually a very standardized term. I personally would not use it unless I gave an explicit definition ahead of time.
 
  • #14
Rasalhague said:
Are you using (x,y) to stand for a particular ordered set of real numbers,
Yes.

Rasalhague said:
Or should I think of (x,y) as, by definition, the components of an n-vector (ordered n-tuple of elements of the field) wrt the standard basis?
This option is equivalent to the first (the one I just said "yes" to).

Rasalhague said:
If we express the vectors (0,1) and (-1,0) in the basis ((0,1), (-1,0)), then these vectors will be 1(0,1) + 0(-1,0) and 0(0,1) + 1(-1,0) wrt this basis. Their components are 1 and 0, and 0 and 1, respectively, in this coordinate system.
Agreed. (But I'd say "in this basis", not "in this coordinate system").

Rasalhague said:
Would we say that the change of basis gives an isomorphism between these two vectors and the standard basis (or does that ambiguity over whether it's a refletion or a rotation complicate things)?
Vector spaces can be isomorphic to each other, vectors can't.

A change of basis is defined by an expression of the form

[tex]\vec e_j{}'=\sum_i A_{ij}\vec e_i[/tex]

where the [itex]A_{ij}[/itex] are the components of a non-singular matrix, and such a matrix defines a linear bijection (i.e. an isomorphism) from the vector space into itself.
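Numerically (a sketch of mine, reusing the rotated basis from the earlier example): with the old basis vectors as the columns of a matrix E, the new basis vectors are the columns of EA, and the components of a fixed vector transform with the inverse of A:

```python
import numpy as np

E = np.eye(2)                     # old basis: the standard one
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])       # non-singular: e'_1 = e_2, e'_2 = -e_1
Ep = E @ A                        # new basis vectors as columns

c = np.array([3.0, 5.0])          # components of v in the old basis
v = E @ c
cp = np.linalg.solve(Ep, v)       # components of the same v in the new basis

assert np.allclose(E @ c, Ep @ cp)              # same vector either way
assert np.allclose(cp, np.linalg.solve(A, c))   # components transform with A^{-1}
# here cp == (5.0, -3.0): the (y, -x) of the earlier example with (x, y) = (3.0, 5.0)
```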

Rasalhague said:
Is it that, although not every kind of vector is to be identified with its components, n-vectors (n-tuples of elements of the field over which the vector space is defined), are?
Yes. (See zpconn's post for a more complete answer).

Rasalhague said:
So when we write (x,y), where x is some element of [itex]\mathbb{R}[/itex] and y is another, this defines a unique 2-vector. And (y,-x) is a different 2-vector.
Yes.

Rasalhague said:
And if we rotate the basis, now we must use a different 2-vector to represent the same physical quantity, whichever orthonormal basis we started with being identified as "the standard basis" and all other orthonormal bases being defined in terms of this arbitrary choice?
Things get really confusing if we use this terminology. Why not just say things like "the components of x in the basis E are x1,...,xn"?

I agree with zpconn about "n-vector" not being a standard term.
 
  • #15
zpconn said:
If it helps, given that a vector is defined as an element of a vector space, nothing more, here is the formalism for introducing and working with coordinates in finite-dimensional spaces. Assume V is an n-dimensional vector space over the field F.

Select a basis [tex]B = (v_1, v_2, \dots, v_n)[/tex] of V. There is no doubt that this is possible; the axiom of choice is not needed to guarantee the existence of a basis in the finite-dimensional case. Now consider the map sending an n-tuple [tex](c_1, c_2, \dots, c_n)^t[/tex] in [tex]F^n[/tex] to [tex]c_1 v_1 + c_2 v_2 + \dots + c_n v_n[/tex] in V. This map is surjective because B spans V. It is injective because B is linearly independent. And it is linear. So it is an isomorphism of vector spaces. The procedure is complete: the tuples in [tex]F^n[/tex] are called the coordinate vectors of their images in V.

Okay, so in the case where V = F^n, this map is an isomorphism from F^n to F^n, an automorphism of F^n. Do we need to distinguish between the copy of F^n that's playing the role of V and the other F^n where the coordinate vectors live? I mean, are elements of the F^n that is V to be regarded as type (1,0)-tensors, while their coordinate vectors are not tensors with respect to the F^n that is V, since they're not invariant under a change of basis (not "coordinate-free objects", being defined as coordinates)?
 
  • #16
I like to think of [itex]v=c_1 v_1 + c_2 v_2 + \dots + c_n v_n[/itex] as the vector and [itex](c_1\ c_2\ \dots\ c_n)^T[/itex] as the matrix of components of v in the basis [itex]\{v_i\}[/itex]. You can of course define a vector space structure on the set of such matrices in the usual way, and then this vector space is "another copy of Fn".
 
  • #17
Fredrik said:
I like to think of [itex]v=c_1 v_1 + c_2 v_2 + \dots + c_n v_n[/itex] as the vector and [itex](c_1\ c_2\ \dots\ c_n)^T[/itex] as the matrix of components of v in the basis [itex]\{v_i\}[/itex]. You can of course define a vector space structure on the set of such matrices in the usual way, and then this vector space is "another copy of Fn".

Here, I suppose, these basis vectors, if they belong to [itex]\mathbb{K}^n[/itex] (where [itex]\mathbb{K}[/itex] is the field over which the vector space is defined), are n-tuples always corresponding to their components in the standard basis, rather than n-tuples of components that depend on the basis, which basis might depend on another basis, and so on in infinite regression!

So how does this sound? If we were making a rudimentary, fairly trivial kind of a manifold, as a model of physical space, we could have one copy of [itex]\mathbb{R}^n[/itex], which we think of as the underlying set of the manifold, and a coordinate chart consisting of a map sending elements of this set to an open set in a second copy of [itex]\mathbb{R}^n[/itex], for example the improper subset consisting of the whole of [itex]\mathbb{R}^n[/itex]. And this map associates each n-tuple in [itex]\mathbb{R}^n[/itex], the underlying set of the manifold, with another, possibly different n-tuple in the underlying set of the chart; and the elements of the latter set are each an n-tuple representing the components of a vector in the first [itex]\mathbb{R}^n[/itex] (the underlying set of the manifold), and these n-tuples are called the coordinate vectors of the vectors in the first [itex]\mathbb{R}^n[/itex]. And a change of basis is a map from the second copy of [itex]\mathbb{R}^n[/itex] to a third, associating each coordinate vector with a different coordinate vector. (And in this model, all three copies of [itex]\mathbb{R}^n[/itex] can be defined as vector spaces or affine spaces, according to taste.)
 

