I found this thread while googling "vector vs point".
In the context of vector spaces:
If a vector space V is defined over a field of scalars F (which *might not* be the real or complex numbers we're used to dealing with; see http://en.wikipedia.org/wiki/Field_(mathematics)#Examples ), then a "point" is an ordered tuple *of scalars* representing the expansion of a vector in a given basis (i.e. its coordinates in that basis).
Any vector |v> in the vector space can be represented as a linear combination of the basis vectors. In general, in a two-dimensional vector space (and in the most abstract form you can probably think of):
\vec{v}=(\alpha_1 \hat{X} \vec{e}_1) \hat{O} (\alpha_2 \hat{X} \vec{e}_2)
Note that the scalars (the alphas), the operators (\hat{X}, playing the role of scalar multiplication, and \hat{O}, playing the role of vector addition) and the vectors can be defined in *any way at all*. So long as they obey the axioms of a vector space, the entirety of vector analysis can be applied to them. I used two dimensions just to simplify the notation, but the same can be said for any number of dimensions.
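If it helps to see just how loose "any way at all" really is, here's a little Python sketch (my own illustration, nothing standard): take the positive reals as the "vectors", ordinary multiplication as the \hat{O} operation and exponentiation as the \hat{X} operation. The axioms check out even though nothing here looks like arrows or tuples.

    # My own toy example: the positive reals form a vector space over R if
    # "vector addition" is ordinary multiplication and "scalar multiplication"
    # is exponentiation. The |0> vector is 1.0.

    def vec_add(u, v):            # plays the role of \hat{O}
        return u * v

    def scalar_mul(alpha, u):     # plays the role of \hat{X}
        return u ** alpha

    ZERO_VEC = 1.0                # vec_add(ZERO_VEC, v) == v for every v > 0

    u, v, a, b = 2.0, 5.0, 3.0, -1.5

    assert vec_add(ZERO_VEC, v) == v
    # a X (u O v) == (a X u) O (a X v)
    assert abs(scalar_mul(a, vec_add(u, v)) -
               vec_add(scalar_mul(a, u), scalar_mul(a, v))) < 1e-9
    # (a + b) X u == (a X u) O (b X u)
    assert abs(scalar_mul(a + b, u) -
               vec_add(scalar_mul(a, u), scalar_mul(b, u))) < 1e-9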
For example, given the two-dimensional vector space R^2 (defined over the field of real numbers, no less), and a basis {|e>} which contains the vectors:
|e1> = (2,0)
|e2> = (0,3),
and a very important vector (not for any good reason, just because I had to pick one):
|v> = (8, 15),
then |v> is expanded in this basis as |v> = 4*|e1> + 5*|e2> and its coordinates
in this basis are represented by the n-tuple (4, 5). In our particular basis, given our particular vector, the idea of expansion is natural. This is not always the case.
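If you'd rather let a computer do the expansion, here's a quick sketch (assuming numpy; the variable names are just mine) that recovers those coordinates by solving the linear system B c = v, with the basis vectors as the columns of B:

    import numpy as np

    e1 = np.array([2.0, 0.0])
    e2 = np.array([0.0, 3.0])
    v  = np.array([8.0, 15.0])

    B = np.column_stack([e1, e2])    # basis vectors as columns
    coords = np.linalg.solve(B, v)   # coordinates of |v> in the basis {|e1>, |e2>}
    print(coords)                    # [4. 5.]  i.e.  |v> = 4|e1> + 5|e2>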
What is "the origin" in this vector space? It's not the 0-vector: the 0-vector |0> is the vector that, when added to any other vector |v>, results in |v>, which is not related to the idea of an origin. This property would be true even if the notion of "origin" didn't exist.
An origin is represented as the coordinates (0, 0, ..., 0), where 0 is the scalar additive identity. In a vector space, these coordinates result in the |0> vector regardless of the basis used, since multiplying any vector by the 0-scalar gives the |0> vector. Hence, the idea of "shifting the origin" cannot be thought of as a change of the coordinates (0, 0, ..., 0) to some other coordinates, nor as a change in the property that these coordinates result in the |0> vector.
A shift in the origin is *no shift in the origin at all*. It is represented by a translation ( http://en.wikipedia.org/wiki/Translation_(mathematics) ) applied to all vectors in the vector space, so that the coordinates of each vector are shifted by some amount *relative to the origin*.
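As a sketch of what I mean (my own illustration, in Python), the "shift" acts on every coordinate tuple by the same fixed offset, and nothing special happens to the tuple (0, 0):

    def translate(coords, offset):
        # shift a coordinate tuple by a fixed offset, componentwise
        return tuple(c + o for c, o in zip(coords, offset))

    coords_list = [(0.0, 0.0), (4.0, 5.0), (1.0, -2.0)]
    offset = (-1.0, 3.0)   # "moving the origin" to (1, -3) shifts everything by (-1, 3)

    shifted = [translate(c, offset) for c in coords_list]
    # (0, 0) is treated exactly like every other tuple: it just becomes (-1, 3).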
What does this have to do with "points"? Well, a point is an ugly notion in a vector space, but I suppose it can be thought of as an ordered tuple of coordinates in a basis. What is the relation between the points (1,1) and (2,2), then? Without a notion of basis, it means nothing. In a basis, each of these points represents exactly one vector and hence can be treated as such.
Just as the notion of points in a vector space is ugly, so is the notion of vectors in Euclidean space. And just as the notion of a vector |v> in a vector space is independent of any basis (and hence independent of any coordinates or "origin"), so is the notion of a point in Euclidean space independent of any basis (and of coordinates and origin).
Adding and subtracting are *always* defined in a vector space, since the vector addition operation is a requirement for the vector space, as is the |0> vector. Moreover, when a basis is defined, it is possible to add vectors by adding their coordinates in this basis. However, *no* such constraint exists for the existence of a (0) point (the origin) in Euclidean space. So how do you add points in E-space? It's not always possible (to see more, check out the idea of an affine space: http://en.wikipedia.org/wiki/Affine_space ).
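To make the affine idea concrete, here's a small sketch (my own types, not any standard library) in which points and vectors are different things: subtracting two points gives a vector, adding a vector to a point gives a point, but adding two points simply isn't defined:

    from dataclasses import dataclass

    @dataclass
    class Vec:
        x: float
        y: float
        def __add__(self, other):
            return Vec(self.x + other.x, self.y + other.y)

    @dataclass
    class Point:
        x: float
        y: float
        def __sub__(self, other):           # Point - Point -> Vec (a displacement)
            return Vec(self.x - other.x, self.y - other.y)
        def __add__(self, disp):            # Point + Vec -> Point
            if not isinstance(disp, Vec):
                raise TypeError("two points can't be added; only a displacement")
            return Point(self.x + disp.x, self.y + disp.y)

    p, q = Point(1.0, 1.0), Point(2.0, 2.0)
    d = q - p       # Vec(1.0, 1.0): perfectly meaningful
    r = p + d       # Point(2.0, 2.0): also fine
    # p + q         # TypeError: "adding points" needs extra structure (an origin)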
So that's the difference. Euclidean space is an n-tuple space (or a coordinate space), whereas a vector space, while it *can* be an n-tuple space (as in our example above), doesn't have to be, and coordinates are not the fundamental component of a vector space as they are of Euclidean space. Why do we always talk about vectors in Euclidean space? Note, first, that vectors in Euclidean space are universally pictured as "arrows" or some other representation of displacement. This is the first clue, and it is reinforced by assigning a "head" and "tail" to the vectors. Any time you consider the relationship between two points as a vector in Euclidean space (and any relationship between vectors in Euclidean space), you construct a transient vector space in which the axioms are satisfied by your arbitrary assignment of a |0> vector (whichever point you picked as your reference point).
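Here's a rough sketch of that "transient vector space" (again my own illustration): pick any point as the reference, and every point becomes an arrow relative to it; change the reference and you get a different set of arrows for the same points.

    def as_arrow(point, reference):
        # the displacement of `point` relative to the chosen reference point
        return tuple(p - r for p, r in zip(point, reference))

    points = [(3.0, 4.0), (1.0, 1.0), (5.0, 2.0)]

    arrows_from_origin = [as_arrow(p, (0.0, 0.0)) for p in points]  # (3,4), (1,1), (5,2)
    arrows_from_11     = [as_arrow(p, (1.0, 1.0)) for p in points]  # (2,3), (0,0), (4,1)
    # The reference point itself always becomes the |0> vector of its own
    # transient vector space; every other point becomes some other arrow.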
The confusion compounds because the notions of inner product, orthogonality, and all that other fancy stuff exist in Euclidean space. This can be considered either intrinsic to Euclidean space, or just the result of the transient vector space. In the former case, the Euclidean space has absolutely *no* relation to any vector space at all, and all its properties are built from the ground up, including the difference between the notion of vectors and points.
It gets even messier when dealing with vector-valued functions defined over Euclidean space, that is, the typical "vector fields". Are the vectors defined in the Euclidean space? Not really. They belong to a vector space, and those neat arrow-diagrams represent a "graph" of the function (consisting of pairs ((x,y), V(x,y)), where V(x,y) is a vector). Transferring the results from the vector space to Euclidean space is just the reverse of transferring points from coordinate space to vector space. Note that the function V(x,y) can also be considered to produce a *point relative to (x,y)*, in which case the function maps the Euclidean space into itself, and the set of points and their associated outputs from V can be interpreted as vectors (or not) as you please. A counter-example is simply a function that accepts coordinates and returns other coordinates which are *not* intended to be considered relative to the input coordinates.
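As a last sketch (my own, with an arbitrarily chosen field), here's that distinction in code: V maps a location to a vector, the arrow-diagram plots the pairs ((x,y), V(x,y)), and reinterpreting the output as a point relative to the input turns V into a map of the plane into itself:

    def V(x, y):
        # an arbitrary vector field; the output lives in a vector space,
        # it is not itself a location in the plane
        return (-y, x)

    locations = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]

    # the "graph" the arrow-diagrams depict: pairs of (location, vector)
    graph = [((x, y), V(x, y)) for (x, y) in locations]

    # reading V(x, y) as a point *relative to* (x, y) instead: the arrow heads,
    # i.e. a map from the plane into itself
    heads = [(x + V(x, y)[0], y + V(x, y)[1]) for (x, y) in locations]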