Usefulness of Basis for a Vector Space, General?

In summary, a basis for a vector space is a collection of vectors that lets us pin down any linear map on the space by its values on the basis, describe every vector by coordinates, and count dimensions. It is important for applications in physics, engineering, etc. because it encodes the information in the space efficiently.
  • #1
Bacle
"Usefulness" of Basis for a Vector Space, General?

Hi, Everyone:

I am teaching an intro class in Linear Algebra. During the section on "Basis and Dimension,"
a student asked me what the use or purpose of a basis for a vector space V is.

All I could think of is that bases allow us to define a linear map L for all vectors, once
we know the value of L at the basis vectors for V, i.e., vector spaces are free in their
bases and so on. I mumbled something about identifying all vector spaces over the
same field by their dimension, i.e., if V and V' are both vector spaces over F with the same
dimension, then they are isomorphic.
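
For concreteness, here is a quick NumPy sketch of the "free on its basis" idea (the values of L at e1 and e2 are just made up): once the values at the basis vectors are chosen, L of every other vector follows by linearity.

```python
import numpy as np

# hypothetical values of L at the basis vectors e1, e2
L_e1 = np.array([2.0, 1.0])
L_e2 = np.array([0.0, 3.0])
A = np.column_stack([L_e1, L_e2])   # matrix of L in the standard basis

v = np.array([4.0, -1.0])           # v = 4*e1 - 1*e2
print(A @ v)                        # L(v) = 4*L(e1) - 1*L(e2) = [8. 1.]
```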

Are there other aspects where bases are equally important or more?

Thanks.
 
  • #2


Make sure students appreciate that "linear transformations" can include motions like rotations about the origin, reflections, shears, etc. They can even include translations if you use a specialized coordinate system ("projective coordinates"). Students are liable to confuse linear transformations with linear equations. The representation of linear transformations as matrices depends on using a basis - OK, maybe your students aren't sold on matrices either.

A "dramatic" use of the use of a basis in a vector space is expressing a function as a sum of other functions. For example, Fourier series, Chebeshev polynomials. Dont' forget , the wider meaning of "vector space" goes beyond operations on n-tuples of numbers. (See the thread: "a question on orthogonality relating to Fourier analysis and also solutions of PDES" )
 
  • #3


Stephen Tashi said:
A "dramatic" use of the use of a basis in a vector space is expressing a function as a sum of other functions. For example, Fourier series, Chebeshev polynomials.

Careful, you need a Schauder basis (http://en.wikipedia.org/wiki/Schauder_basis) for that. On the topic of Fourier, each piano key corresponds to a sine function. The collection of finite sums gives the set of all piano chords.
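
A toy version of the piano picture in NumPy (the note frequencies are the standard equal-tempered values; everything else is made up): a chord is a finite linear combination of sine "basis" functions.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 44100)              # one second of "sound"
freqs = [261.63, 329.63, 392.00]              # C4, E4, G4 in Hz
chord = sum(np.sin(2 * np.pi * f * t) for f in freqs)

print(chord.shape, chord.min(), chord.max())  # three "keys" pressed at once
```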

There is also lattice-based cryptography (http://en.wikipedia.org/wiki/Lattice-based_cryptography), but that is discrete, so ymmv.
 
  • #4


The most important thing for applications in physics, engineering, etc. is being able to choose a basis in which the matrices involved become diagonal.

But you won't have good examples of that until you have studied eigenvalues and eigenvectors.
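
For a quick preview in NumPy (the matrix here is just a made-up symmetric example): changing to the basis of eigenvectors turns A into a diagonal matrix.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])        # made-up symmetric (hence diagonalizable) matrix

eigvals, P = np.linalg.eigh(A)    # columns of P: the eigenvector basis
D = np.linalg.inv(P) @ A @ P      # change of basis: D = P^{-1} A P
print(np.round(D, 10))            # diagonal, with the eigenvalues 1 and 3 on it
```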
 
  • #5


AlephZero said:
The most important thing for applications in physics, engineering, etc. is being able to choose a basis in which the matrices involved become diagonal.

But you won't have good examples of that until you have studied eigenvalues and eigenvectors.

Perhaps if students were interested in the idea of polynomials in matrices or a geometric series of matrices, they would understand the significance of diagonalization in making those functions easy to compute. But would learning those topics be interesting to them or would they take it as more pedagogical sadism?
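
As a sketch of why that pays off (the matrix is made up so that both eigenvalues are less than 1 in absolute value): the geometric series I + A + A^2 + ... reduces to the scalar series applied to each eigenvalue.

```python
import numpy as np

A = np.array([[0.5, 0.2],
              [0.1, 0.3]])                     # made up; both eigenvalues are < 1
lam, P = np.linalg.eig(A)

series = P @ np.diag(1.0 / (1.0 - lam)) @ np.linalg.inv(P)   # sum of I + A + A^2 + ...
print(np.allclose(series, np.linalg.inv(np.eye(2) - A)))     # True
```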
 
  • #6


vector spaces are, in general, fairly large things (those in the back row, stop heckling about galois fields! those are next year!).

if one is doing vector spaces over a field such as Q, R or C (which normally you are in a first-time exposure to linear algebra), these are infinite sets.

so what is interesting about a basis is that it tells us everything we need to know about the entire vector space, and it's much smaller. if the space has finite dimension, the basis is finite. so if you're dealing with a plane, for example, all you need to do is specify TWO vectors. that's a fair amount of bang for the buck.

another reason why bases are important, is that they allow us to describe elements of a vector space in coordinate terms. this turns the "algebra" of linear algebra, into an arithmetic, the arithmetic of matrices, which is, in truth, not much harder than the arithmetic of grade school (although it does use up a lot more space on the paper).

furthermore, with respect to a given field, all vector spaces of a fixed dimension, are isomorphic as vector spaces. this means, for example, when you are trying to identify a solution set for the equation Ax = 0, and you determine it has dimension 2, you already have a picture of what the solution set is: it is a plane. which plane? find two basis vectors. the "solution space" will behave just like any other plane does.

bases let us count things: if we row-reduce a matrix and get 2 zero rows, this tells us we lose 2 dimensions (2 basis elements). that gives us some concrete information that makes solving a problem easier (if i have a 3x3 matrix that row-reduces to 2 zero rows, then once i find one nonzero vector in the column space (range), i don't have to look for any more).
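
a quick sympy check of that counting (the matrix is a made-up rank-1 example): two zero rows after row reduction go hand in hand with two basis vectors for the solution space of Ax = 0.

```python
from sympy import Matrix

A = Matrix([[1, 2, 3],
            [2, 4, 6],
            [3, 6, 9]])          # rank 1: every row is a multiple of the first

rref, pivots = A.rref()
print(rref)                      # one nonzero row, two zero rows
print(A.nullspace())             # two basis vectors: the solutions of Ax = 0 form a plane
```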

so how big the basis is, tells us directly how complicated the vector space is. it's a very efficient way of encoding the information, it makes it manageable.

and, for some purposes, some bases are better than others: it's more pithy to say (1,0) than: "the unit-vector along the x-axis". the standard basis should get props for being so unobtrusive, and user-friendly (and for having so many 0's in its vectors' coordinates. every 0 that pops up in linear algebra means less work for somebody).

large matrices can be unwieldy to compute with. diagonal matrices are easier. by changing the basis, if we can turn an ugly matrix into a "nice" one, who's going to complain? that is cause for celebration, in my opinion.

just like atoms are the building blocks of molecules, a basis forms the building blocks of a vector space. no vector space should leave home without one.
 
  • #7


^ that's what I was thinking, any vector can be written as a linear combination of basis elements
 
  • #8


Only after choosing a basis can you "name" a vector by a set of coordinates.

The difficulty I think many students would have is that they treat coordinates as if they were the vectors themselves.

It might help to do a little demonstration. Cut a circle or a funny blob shape out of a piece of paper. Draw a point at the center and label it the origin. Then, draw a few other points on the surface and label them u, v, w, or whatever.

Emphasize that the paper can be rotated on the table. There is no sense of "x or y axes". So how would we write down these vectors using (x, y) coordinates?

The answer is, of course, you have to pick a basis to do so. The basis might be {u, v} or {v, w} or {w, u}. Show them how you can always name the third point in terms of the other two. And the scalars involved become the coordinates of the point.

Dimensionality then relates to how many vectors you need to come up with coordinates, and thus, is the number of coordinates required to "name" all the points on the plane.
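
Here is the same demonstration in NumPy (u, v, w are made-up points): the coordinates of w in the basis {u, v} are just the solution of a small linear system.

```python
import numpy as np

u = np.array([1.0, 2.0])
v = np.array([3.0, 1.0])
w = np.array([5.0, 5.0])

coords = np.linalg.solve(np.column_stack([u, v]), w)   # solve [u v] c = w
print(coords)                                          # [2. 1.]: the "name" of w in basis {u, v}
print(np.allclose(coords[0] * u + coords[1] * v, w))   # True
```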
 
  • #9


indeed, the trouble most students have with bases is that the standard basis is too natural: we forget we're using it.

bases take the geometry of vectors (which has nothing to do with which coordinate system you use), and turn it into an algebra (which has everything to do with which coordinate system you use).

an interesting example is to use the basis {-e1,e2}. everything looks "backwards". our sense of what is "positive" has to do with what basis we choose, it's not "built-in" to the geometry. in some alternate universe, math students are encouraged to use "the left-hand rule".
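
a tiny numpy check of that (the point is arbitrary): in the basis {-e1, e2}, the first coordinate of every vector simply flips sign.

```python
import numpy as np

B = np.column_stack([[-1.0, 0.0], [0.0, 1.0]])   # basis vectors -e1 and e2 as columns
p = np.array([3.0, 4.0])
print(np.linalg.solve(B, p))                     # [-3.  4.]
```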

isn't it a cruel joke that, after struggling to learn what the xy coordinate plane is and how to use it, we later ask these same students to realize that it "really isn't how the plane actually is, we just made it up"? but it would be undoubtedly unfair to teach 8th graders abstract algebra first, so that much, much later, they could see how eminently practical, yet arbitrary, analytic geometry was.
 

What is a basis for a vector space?

A basis for a vector space is a set of linearly independent vectors that span the entire vector space. This means that any vector within the space can be written as a unique linear combination of the basis vectors.

Why is a basis important in linear algebra?

A basis is important in linear algebra because it allows us to represent and manipulate vectors in a more efficient and organized manner. It also helps us understand the structure of a vector space and perform operations such as finding coordinates and projections.
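
As a sketch of the "coordinates and projections" point (the orthonormal basis and the vector below are made up): with an orthonormal basis, coordinates are plain dot products, and they give the projection directly.

```python
import numpy as np

q1 = np.array([1.0, 0.0, 0.0])
q2 = np.array([0.0, 1.0, 1.0]) / np.sqrt(2)      # orthonormal basis of a plane in R^3
x = np.array([2.0, 3.0, 5.0])

coords = np.array([q1 @ x, q2 @ x])              # coordinates of x along q1 and q2
projection = coords[0] * q1 + coords[1] * q2     # projection of x onto span{q1, q2}
print(coords, projection)                        # [2. 5.65...] [2. 4. 4.]
```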

How do you determine if a set of vectors is a basis for a vector space?

To determine if a set of vectors is a basis for a vector space, you can check if they are linearly independent and if they span the entire vector space. This can be done through various methods such as row reduction or calculating determinants.
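
For example, one way to run that check in NumPy (the three vectors are made up): a nonzero determinant, equivalently full rank, means the vectors are linearly independent and span R^3, hence form a basis.

```python
import numpy as np

vecs = np.column_stack([[1.0, 0.0, 2.0],
                        [0.0, 1.0, 1.0],
                        [1.0, 1.0, 0.0]])    # candidate basis vectors as columns

print(np.linalg.det(vecs))                   # nonzero (about -3) -> independent and spanning
print(np.linalg.matrix_rank(vecs))           # 3, the dimension of R^3
```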

Can a vector space have more than one basis?

Yes, a vector space can have more than one basis. In fact, any nonzero vector space over an infinite field such as R has infinitely many bases. This is because there are many different ways to choose a set of linearly independent vectors that span the space.

What is the relationship between a basis and a dimension of a vector space?

The dimension of a vector space is equal to the number of vectors in any basis for that space. This is well defined because every basis of a given vector space has the same number of vectors.
