# Understanding the abstract notion of a vector space

1. Jan 3, 2014

### "Don't panic!"

Hi,
I'm trying to justify to myself the abstract notion of a vector space and I would really appreciate if people wouldn't mind taking a look at my description and letting me know if it's correct, and if not, what is the correct explanation? :

"Vectors are most often introduced as ordered arrays of numbers (ordered "n-tuples") in $\mathbb{R}^{n}$ (or $\mathbb{C}^{n}$), specifying their components along each coordinate line in Euclidean space. However, this viewpoint is very restrictive, as it requires one to introduce a specific coordinate system in order to describe a vector; an often non-trivial process, as it is not necessarily obvious which is the best coordinate system to choose. Vectors are intrinsically geometric entities, possessing both magnitude \& direction; they are mathematical objects that exist independently of any given coordinate system. Thus, we require a coordinate-free definition of a vector, and the 'space' that it lives in. We do this by abstracting the definition of a Euclidean vector in $\mathbb{R}^{n}$ (or $\mathbb{C}^{n}$), retaining only the properties that are characteristic to the vectors themselves, and not a particular coordinate system. The characteristic properties that we refer to are the binary operations of vector addition and scalar multiplication."

Apologies in advance for the lack of mathematical rigour, I'm a physics student, but I am very interested in the mathematical formalism.

2. Jan 3, 2014

### AlephZero

That's a good first step, but later on you will find there are vectors and vector spaces that are not "intrinsically geometric entities," and that are not limited to a finite number of dimensions. For example, a set of mathematical functions (e.g. all polynomials) can be considered as vectors that form a vector space. As an extreme example, you can have vector spaces whose "vectors" are themselves linear maps between other vector spaces.

You probably won't meet infinite dimensional vector spaces in a first course on abstract algebra or linear algebra, but they turn up in the maths of things like Fourier analysis, and in quantum mechanics.

3. Jan 3, 2014

### "Don't panic!"

Thanks for the speedy response!
Is it more correct then to think of a "vector" as a mathematical object that satisfies a certain set of 'intrinsic' properties (the so-called vector-space axioms)?
I suppose I've always struggled to fully understand the abstraction from Euclidean vectors, visualised as arrows with magnitude & direction, to the more general notion of a vector. Is it purely that this restricts us to describing a vector as an ordered n-tuple in $\mathbb{R}^{n}$, dependent on a specific coordinate system, and we require a coordinate-independent definition, as there are many mathematical objects that satisfy the properties of vectors (such as addition and multiplication), all of which exist independently of any coordinate basis, thus requiring an abstracted definition?
Would you be able to explain this to me in a bit more depth? Thanks.

Last edited: Jan 3, 2014
4. Jan 3, 2014

### AlephZero

Exactly. If it walks like a duck and quacks like a duck, it's a duck. The same principle applies to vector spaces, and almost everything else in "advanced" math.

Suppose you have some physical system that you want to model using vectors - e.g. the forces, accelerations, momentum, etc, in a dynamics problem. The point is that the physics is independent of the coordinate system you use to model it. To solve a particular problem, you can often pick a coordinate system that makes the algebra simple, but the answer would have the same physical meaning whatever coordinate system you used. The physical system doesn't "know" anything about your choice of coordinate system, it just does what it does.

The basic equations of physics (Newton's second law, Maxwell's equations, etc) often involve vectors, but they don't involve coordinate systems. The same is true of the basic theorems in vector calculus, like the divergence theorem. That theorem connects what happens inside a region in space, with what happens on the boundary of the region. It doesn't matter what shape the region is, nor does it matter how you choose to define the shape using coordinates or anything else.

The idea of representing a vector as "an arrow on a diagram" instead of "its x,y,z coordinates" goes some way towards the idea that the physics is independent of the coordinate system, but of course you can't do much with an arrow on a diagram, until you make it into a mathematical concept.

If you have some time for self-study, it might help to start where most pure math courses start on "abstract algebra", which is with groups rather than vector spaces. A group is a set of objects with only one mathematical operation (and only 4 axioms). An example is the set of integers (positive and negative) and addition. But there are many "real world" examples of different types of things which have the properties of a group, apart from "numbers" and "arithmetic".

Reading a bit of group theory might help to get used to the idea of why more general abstract definitions are useful. It's harder to find "simple" examples of vector spaces, where Cartesian coordinates are not the "obvious" way to represent them.
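To make that suggestion concrete, here is a minimal sketch (not from the original post) that spot-checks the four group axioms for the integers under addition on a small sample; the variable names are just for illustration:

```python
# Spot-check the four group axioms for (Z, +) on a small sample of integers.
sample = list(range(-5, 6))

# Closure: the sum of two integers is again an integer.
assert all(isinstance(a + b, int) for a in sample for b in sample)

# Associativity: (a + b) + c == a + (b + c).
assert all((a + b) + c == a + (b + c)
           for a in sample for b in sample for c in sample)

# Identity: 0 leaves every element unchanged on both sides.
assert all(a + 0 == a and 0 + a == a for a in sample)

# Inverses: every a has an inverse -a with a + (-a) == 0.
assert all(a + (-a) == 0 for a in sample)

print("all four group axioms hold on the sample")
```

Of course, a finite sample is evidence rather than proof; the point is only to see the axioms as checkable properties rather than abstract decoration.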

5. Jan 3, 2014

### "Don't panic!"

One more quick question, possibly a little stupid (so apologies for that). Do vectors always possess magnitude and direction in their abstracted definition?

Last edited by a moderator: Jan 4, 2014
6. Jan 3, 2014

### Number Nine

There's an extra structure called an inner product that can be attached to a vector space to recover some of these properties, but it's not necessary for a vector space to have them.

7. Jan 3, 2014

### "Don't panic!"

Ok, so in general vectors do not have magnitude and direction, but if they form an 'inner product space', then these characteristics can be recovered? Is the notion of the direction of a given vector only introduced once one specifies a particular coordinate system?

8. Jan 3, 2014

### johnqwertyful

The best way I can explain vectors is linear combinations. If you can take linear combinations of something, it's a vector space, i.e. $\alpha x + \beta y$ makes sense. Commutativity, associativity, distributivity, etc. are all axioms, meaning they must be satisfied for something to be called a "vector space", BUT in general these things are trivial. The only time you'll really worry about proving something satisfies these axioms is in your linear algebra class.

Polynomials, continuous functions, matrices, Euclidean vectors, etc. are all vector spaces because you can take linear combinations of them.
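As a small illustrative sketch of that "linear combinations" test (my own example, not from the thread), polynomials can be represented by coefficient lists, and a linear combination of two polynomials is again a coefficient list, i.e. again a polynomial:

```python
from itertools import zip_longest

def lin_comb(alpha, p, beta, q):
    """Coefficient list of alpha*p + beta*q, where [a0, a1, a2]
    represents the polynomial a0 + a1*x + a2*x**2."""
    return [alpha * a + beta * b for a, b in zip_longest(p, q, fillvalue=0)]

p = [1, 0, 2]              # 1 + 2x^2
q = [0, 3]                 # 3x
r = lin_comb(2, p, -1, q)  # 2p - q = 2 - 3x + 4x^2, i.e. [2, -3, 4]
```

The result is always another coefficient list, which is the "closure under linear combinations" that makes polynomials a vector space.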

In general though, this is just an example of an algebraic structure: a set with one or more operations on it. If you get to abstract algebra, you'll learn about all sorts of structures: groups, rings, fields, algebras, modules, vector spaces, lattices.

Inner product spaces define a notion of angle or orthogonality. Euclidean space with the dot product is an inner product space: if $x \cdot y = 0$, the vectors are orthogonal. This might not be a big deal to you now, but it will be as you move on.

This will probably be beyond you right now, but in functional analysis (used in quantum mechanics a lot; practically invented for it), you discuss metric spaces, normed vector spaces, and inner product spaces. A metric space is ANY set (it doesn't have to be a vector space) with a notion of "distance". A normed vector space is a vector space with a notion of "length". An inner product space is a vector space with a notion of "angle".

The cool thing is that if you have an inner product space, you get a normed vector space, which in turn gives you a metric space. The other way doesn't hold: if you have a metric space, you might not have a normed vector space (you might not even have a vector space), and if you have a normed vector space, you might not have an inner product space.
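That hierarchy can be sketched in a few lines (a hypothetical illustration using the standard dot product on $\mathbb{R}^n$): each structure is built directly from the previous one.

```python
import math

def inner(x, y):
    """Standard dot product (inner product) on R^n."""
    return sum(a * b for a, b in zip(x, y))

def norm(x):
    """Norm induced by the inner product: ||x|| = sqrt(<x, x>)."""
    return math.sqrt(inner(x, x))

def dist(x, y):
    """Metric induced by the norm: d(x, y) = ||x - y||."""
    return norm([a - b for a, b in zip(x, y)])

x, y = [3.0, 0.0], [0.0, 4.0]
# inner(x, y) == 0.0 (orthogonal), norm(x) == 3.0, dist(x, y) == 5.0
```

Going the other way is what fails: an arbitrary metric need not come from a norm, and an arbitrary norm need not come from an inner product.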

With general metric spaces and normed vector spaces, you get a lot of weird examples: distances and lengths that don't match up with reality. But requiring an inner product space really restricts what you can have, and makes things match up with reality much better.

I guess what I'm trying to say is that an inner product on a space really tames it, making things more "normal".

Edit: for your question of "magnitude", a normed vector space gives a notion of magnitude (or length).

Last edited by a moderator: Jan 4, 2014
9. Jan 3, 2014

### "Don't panic!"

Thanks johnqwertyful! A very insightful description, this has given me a lot more understanding than I had when it was first introduced to me. Much appreciated!
Is there any intrinsic property though, that gives a sense of "direction" to a vector ?

Last edited: Jan 3, 2014
10. Jan 3, 2014

### johnqwertyful

Sorry, I did not respond to that.

Really, what is direction? If you were to say that a force is acting in the "x-direction", what does that mean? Well, it means that the "angle" between the force vector, $F$, and the x-direction, $i$, is 0. If you have an inner product space, you have a notion of "angle". You can define the angle between two nonzero vectors by $\cos\theta = \dfrac{\langle x, y\rangle}{\|x\|\,\|y\|}$.
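That definition of angle can be checked numerically; here is a small sketch (my own example) using the dot product on $\mathbb{R}^2$:

```python
import math

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def angle(x, y):
    """Angle between nonzero vectors: cos(theta) = <x,y> / (||x|| ||y||)."""
    cos_theta = dot(x, y) / (math.sqrt(dot(x, x)) * math.sqrt(dot(y, y)))
    # Clamp to [-1, 1] to guard against floating-point rounding.
    return math.acos(max(-1.0, min(1.0, cos_theta)))

F = [5.0, 0.0]  # a force acting along the x-direction
i = [1.0, 0.0]  # unit vector in the x-direction
# angle(F, i) == 0.0: "acting in the x-direction" means zero angle with i
```

Perpendicular vectors, by contrast, give an angle of $\pi/2$, matching the "orthogonal means $x \cdot y = 0$" picture.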

To answer your question, in the general case vector spaces don't have a notion of magnitude or length. But if you add a norm structure, you get a magnitude. If you add an inner product, you get a notion of direction. But an inner product implies a norm.

Also, an inner product really restricts which norms are available: only certain norms (those satisfying the parallelogram law) arise from an inner product. So really everything boils down to the inner product. Inner product spaces are really well behaved.

11. Jan 3, 2014

### "Don't panic!"

Great. Thanks, this has really helped.

In this abstracted framework, is it then correct to distinguish between vectors and scalars by their behavior under coordinate transformations, i.e. a scalar quantity is invariant under coordinate transformations, whereas a vector transforms in such a way as to preserve its overall form?

Last edited: Jan 3, 2014
12. Jan 3, 2014

### Xiwi

Also, just as a side note, you can think of the angle definition as a result of the Cauchy-Schwarz inequality in inner product spaces.

13. Jan 3, 2014

### D H

Staff Emeritus
In the most abstract? No. The concepts of magnitude and direction are not a part of the concept of an abstract vector space.

Adding the concept of a norm (and not all vector spaces have a norm) gives the concept of magnitude. Some function spaces don't even admit the concept of a norm, but they are nonetheless vector spaces. It's only when the norm is induced by an inner product that it makes sense to talk about both "magnitude" and "direction". The Manhattan norm (e.g., "drive two blocks north then three blocks east") yields an example of a normed vector space that doesn't admit the concept of direction.
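A concrete way to see this (a sketch of my own, relying on the standard fact that a norm comes from an inner product exactly when it satisfies the parallelogram law $\|x+y\|^2 + \|x-y\|^2 = 2\|x\|^2 + 2\|y\|^2$) is to check that the Manhattan norm violates that law:

```python
def manhattan(x):
    """Manhattan (taxicab) norm: sum of absolute coordinates."""
    return sum(abs(a) for a in x)

def parallelogram_defect(norm, x, y):
    """||x+y||^2 + ||x-y||^2 - 2||x||^2 - 2||y||^2.
    Zero for all x, y exactly when the norm is induced by an inner product."""
    s = [a + b for a, b in zip(x, y)]
    d = [a - b for a, b in zip(x, y)]
    return norm(s) ** 2 + norm(d) ** 2 - 2 * norm(x) ** 2 - 2 * norm(y) ** 2

x, y = [1, 0], [0, 1]
# parallelogram_defect(manhattan, x, y) == 4, not 0, so no inner product
# (and hence no notion of angle) induces the Manhattan norm.
```

So a Manhattan-normed space has lengths but no angles, which is exactly the "magnitude without direction" situation described above.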

14. Jan 3, 2014

### johnqwertyful

In the language of vector spaces, "coordinates" give way to a basis. Formally, a basis is a "linearly independent" set of vectors that "spans" the space. In other words, a basis is a set of vectors such that every vector in the space can be UNIQUELY written as a linear combination of the basis vectors.

A basis for $\mathbb{R}^3$ is $\{i, j, k\}$, meaning every vector in $\mathbb{R}^3$ can be written uniquely as $ai+bj+ck$. Coordinates may make sense in the abstract setting, but they're mainly a geometric notion.

There ARE changes of basis, though. I'm not quite sure what you mean by "a scalar quantity is invariant".

15. Jan 3, 2014

### jbunniii

What do you mean by "a vector transforms in such a way as to preserve its overall form?"

Consider the set of all real numbers of the form $p + \sqrt{2} q$, where $p$ and $q$ are rational numbers. It's straightforward to show that this is a 2-dimensional vector space over the rationals. The obvious basis for this vector space is $\{1, \sqrt{2}\}$. Another basis is $\{1 - \sqrt{2}, 1 + \sqrt{2}\}$. All that a coordinate transformation does is to express $p + \sqrt{2} q$ in the alternate form $r (1 - \sqrt{2}) + s(1 + \sqrt{2})$, where $r = (p-q)/2$ and $s = (p+q)/2$. But the vector (real number) hasn't changed in any way.

By the way, the above example also gives you a concrete case of a vector space where there's no useful notion of direction/angle. All the vectors lie on the same real line, so they are all parallel with each other.

16. Jan 3, 2014

### "Don't panic!"

Sorry, that was not well worded - I meant that the vector itself exists independently of any coordinate system, so its "shape" should not change under coordinate transformations (its components should vary in such a way that they compensate for any change in the basis it is represented in), right?

Sorry, I haven't explained my problem very well re. scalars. I guess the crux of my issue is that I am struggling to place what a 'scalar' quantity is within the framework of abstract vector spaces.

17. Jan 3, 2014

### D H

Staff Emeritus
It's an element of a "field". A field in mathematics is (informally) a mathematical structure that has addition, subtraction, multiplication, and division (by nonzero elements).

18. Jan 3, 2014

### "Don't panic!"

Can one view elements of a field as 0-dimensional 'numbers'? What really distinguishes them from vectors, as they seem to obey the vector space axioms as well? Sorry, all this must seem very trivial; I'm just struggling a bit to transition my understanding from the specific definition of a scalar in the physical sense (i.e. a quantity that is completely defined by its magnitude, such that it is invariant under rotations, translations, etc. in a given coordinate system).

...is it that in a given coordinate system, performing an 'active transformation' on a given vector will, in general, change that vector in some way, whereas doing the same procedure to a scalar will leave it unchanged?

Last edited by a moderator: Jan 4, 2014
19. Jan 3, 2014

### jbunniii

Correct.
I think you have the right idea, but your wording is a bit unconventional. The vector is some object which may not have a "shape" in the geometric sense of the word. But the object, whatever it is, does not change regardless of what basis you use to represent it.

You can add vectors together to get new vectors, but in general you cannot multiply vectors together. However, you can multiply them by scalars. Geometrically in $\mathbb{R}^n$, this has the effect of changing the length of the vector but not its direction (except possibly a sign change). Thus if we fix a specific nonzero vector $v$, then the set $\{\alpha v : \alpha \in \mathbb{R}\}$ consisting of all scalar multiples of $v$ is a line through the origin, parallel with $v$.

This notion carries over to a general vector space $V$ over a field of scalars $F$: given a nonzero vector $v \in V$, the set $\{\alpha v : \alpha \in F\}$ is a one-dimensional subspace of $V$, which we call the subspace spanned by $v$. This subspace includes the origin (zero vector) but it doesn't necessarily look like a line. For example, in the vector space I indicated previously, $\{p + \sqrt{2}q : p,q \in \mathbb{Q}\}$, the subspace generated by $\sqrt{2}$ is the set of all rational multiples of $\sqrt{2}$. Another example: if the vector space is $\mathbb{C}^2$ and the scalars are $\mathbb{C}$, then the subspace generated by $(1,0)$ looks like the entire complex plane.
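For the familiar $\mathbb{R}^2$ case, the "scalar multiples form a line" picture can be checked directly (a small sketch of my own): every multiple of a fixed $v$ keeps the same component ratio, so all of them satisfy the equation of the line through the origin parallel to $v$.

```python
# Scalar multiples of a fixed nonzero v in R^2 all lie on the line through
# the origin parallel to v. For v = (2, 3), that line is 3x - 2y = 0.
v = (2.0, 3.0)
multiples = [(a * v[0], a * v[1]) for a in (-1.5, 0.0, 0.5, 2.0)]
# every multiple (x, y) satisfies 3*x - 2*y == 0
```

In a general vector space the same construction gives the one-dimensional subspace spanned by $v$, even when there is no line to draw.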

Last edited: Jan 3, 2014
20. Jan 3, 2014

### johnqwertyful

The real numbers are a 1-dimensional vector space over themselves. Dimension has a very precise definition: the dimension of a vector space is how many vectors it takes to form a basis.
The only vector space with dimension 0 is the trivial space $\{0\}$, containing just the zero vector.

Also, as I said, coordinates aren't really used in the abstract setting; bases are.