Basis of a Subspace of a Vector Space

In summary: Among all the possible bases, there is at least one orthonormal basis, correct? Correct, but it is not necessarily unique. There can be many orthonormal bases, depending on the dimension and field of the vector space. For example, a 2-dimensional real inner product space has infinitely many orthonormal bases, since you can rotate the basis vectors by any angle and still have an orthonormal basis.
  • #1
fog37
Hello Forum and happy new year,

Setting aside rigorous definitions, a linear vector space contains infinitely many elements called vectors that must obey certain rules. Based on the dimension ##N## (finite or infinite) of the vector space, we can always find a set of ##n=N## linearly independent vectors that form a basis. For each vector space, there are infinitely many possible bases to choose from. The basis vectors inside a particular basis don't need to be orthogonal or of unit length. Among the many bases there is a special kind of basis that is orthonormal: it is composed of unit vectors that are pairwise orthogonal to each other. Among all the possible bases, is the orthonormal basis unique, or are there multiple orthonormal bases?

A subspace of a vector space is also a set that contains an infinite number of vectors but it contains "less" vectors than the original host vector space. What can we say about the bases of a subspace ##B## of a vector space ##A##? For example, if the host vector space ##A## has ##N=4##, it means that:
  • Each vector in ##A## has four components: ##a = (a_{1}, a_{2}, a_{3}, a_{4})##
  • Each possible basis contains 4 linearly independent vectors
For example, if the subspace ##B## has ##N=2##, it means that each vector in the subspace has only two components while the other two components are either constant or zero since the vectors of ##B## are just the projections of the vectors of ##A## on a certain plane, correct?

Subspace ##B##, being a vector space on its own, also has an infinity of possible bases but only one specific orthonormal basis, right?

Thanks!
 
  • #2
fog37 said:
Hello Forum and happy new year,

Setting aside rigorous definitions, a linear vector space contains infinitely many elements called vectors that must obey certain rules. Based on the dimension ##N## (finite or infinite) of the vector space, we can always find a set of ##n=N## linearly independent vectors that form a basis. For each vector space, there are infinitely many possible bases to choose from. The basis vectors inside a particular basis don't need to be orthogonal or of unit length. Among the many bases there is a special kind of basis that is orthonormal: it is composed of unit vectors that are pairwise orthogonal to each other. Among all the possible bases, is the orthonormal basis unique, or are there multiple orthonormal bases?
Multiple. You can always rotate an orthonormal basis by an arbitrary angle and still have an orthonormal basis. Or change the numbering. Or mirror single basis vectors ##\vec{b} \mapsto -\vec{b}##. Also, your entire argument doesn't consider the case of finite fields.
A subspace of a vector space is also a set that contains an infinite number of vectors but it contains "less" vectors than the original host vector space. What can we say about the bases of a subspace ##B## of a vector space ##A##? For example, if the host vector space ##A## has ##N=4##, it means that:
  • Each vector in ##A## has four components: ##a = (a_{1}, a_{2}, a_{3}, a_{4})##
  • Each possible basis contains 4 linearly independent vectors
For example, if the subspace ##B## has ##N=2##, it means that each vector in the subspace has only two components while the other two components are either constant or zero...
Zero in this context, for otherwise ##\vec{0}## wouldn't be part of your subspace.
... since the vectors of ##B## are just the projections of the vectors of ##A## on a certain plane, correct?
You may look at it this way, but the inclusion is easier than the projection.
Subspace ##B##, being a vector space on its own, also has an infinity of possible bases but only one specific orthonormal basis, right?

Thanks!
No. See above: finite fields, rotations etc.
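The rotation argument above is easy to check numerically: rotating the standard orthonormal basis of ##\mathbb{R}^2## by any angle produces another orthonormal basis. A minimal pure-Python sketch (the angle 0.7 is an arbitrary choice):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def rotate(v, theta):
    # Rotate a 2D vector counterclockwise by theta radians.
    c, s = math.cos(theta), math.sin(theta)
    return (c * v[0] - s * v[1], s * v[0] + c * v[1])

theta = 0.7  # any angle works
b1 = rotate((1.0, 0.0), theta)
b2 = rotate((0.0, 1.0), theta)

# Unit lengths and orthogonality survive the rotation.
print(abs(dot(b1, b1) - 1) < 1e-12)  # True
print(abs(dot(b2, b2) - 1) < 1e-12)  # True
print(abs(dot(b1, b2)) < 1e-12)      # True
```

Since this works for every angle, ##\mathbb{R}^2## already has a continuum of orthonormal bases.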
 
  • #3
fog37 said:
Hello Forum and happy new year,

Setting aside rigorous definitions, a linear vector space contains infinitely many elements called vectors that must obey certain rules. Based on the dimension ##N## (finite or infinite) of the vector space, we can always find a set of ##n=N## linearly independent vectors that form a basis. For each vector space, there are infinitely many possible bases to choose from. The basis vectors inside a particular basis don't need to be orthogonal or of unit length. Among the many bases there is a special kind of basis that is orthonormal: it is composed of unit vectors that are pairwise orthogonal to each other. Among all the possible bases, is the orthonormal basis unique, or are there multiple orthonormal bases?

A subspace of a vector space is also a set that contains an infinite number of vectors but it contains "less" vectors than the original host vector space. What can we say about the bases of a subspace ##B## of a vector space ##A##? For example, if the host vector space ##A## has ##N=4##, it means that:
  • Each vector in ##A## has four components: ##a = (a_{1}, a_{2}, a_{3}, a_{4})##
  • Each possible basis contains 4 linearly independent vectors
For example, if the subspace ##B## has ##N=2##, it means that each vector in the subspace has only two components while the other two components are either constant or zero since the vectors of ##B## are just the projections of the vectors of ##A## on a certain plane, correct?

Subspace ##B##, being a vector space on its own, also has an infinity of possible bases but only one specific orthonormal basis, right?

Thanks!

I will give a few remarks here:

1) It doesn't make sense to consider a vector space without specifying a field. For example, the vector space of the complex numbers with real scalars is a very different vector space than the complex numbers with complex scalars. They don't even have the same dimension! (The former has dimension 2, the latter dimension 1).

2) You say a subspace has 'less' elements. This is not necessarily true: every vector space is a subspace of itself.
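The first remark can be made concrete in Python, where complex scalars are built in. Over ##\mathbb{C}##, the single vector 1 spans everything; over ##\mathbb{R}##, a second basis vector (e.g. ##i##) is needed. A small illustrative sketch:

```python
# Over C, every complex number is a complex scalar multiple of 1,
# so {1} is a basis and dim_C(C) = 1.
z = 3 + 4j
scalar = z / 1          # the complex coordinate of z in the basis {1}
print(scalar * 1 == z)  # True

# Over R, real scalar multiples of 1 only cover the real axis, so a
# second basis vector is needed; {1, i} works and dim_R(C) = 2.
x, y = z.real, z.imag   # real coordinates of z in the basis {1, i}
print(x * 1 + y * 1j == z)  # True
```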
 
  • #4
Here is one consequence of @fresh_42's correction (that the orthonormal basis is not unique) that may help you. Suppose you have the vector space ##\mathbb{R}^2## with orthonormal basis (1,0) and (0,1). The set of vectors of the form (x, x) is a subspace with orthonormal basis (1/√2, 1/√2). So the basis you are using for the subspace does not have to be part of the basis you are using for the larger space. But you can extend the subspace basis to get another basis for the whole space: in this example, (1/√2, 1/√2) and (-1/√2, 1/√2) form an orthonormal basis for ##\mathbb{R}^2##.
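The arithmetic in this example is easy to verify; a minimal pure-Python check using the standard dot product:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

s = 1 / math.sqrt(2)
b1 = (s, s)    # orthonormal basis vector of the subspace {(x, x)}
b2 = (-s, s)   # a second vector extending it to a basis of R^2

# Both have unit length and they are orthogonal to each other.
print(abs(dot(b1, b1) - 1) < 1e-12)  # True
print(abs(dot(b2, b2) - 1) < 1e-12)  # True
print(abs(dot(b1, b2)) < 1e-12)      # True
```

Note that neither (1, 0) nor (0, 1) appears here, illustrating the point: a subspace's orthonormal basis need not be part of the larger space's.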
 
  • #5
First of all, a vector space may not come with a notion of length and angle, in which case it makes no sense to talk of orthonormal bases. The idea of a vector space is independent of the idea of length and angle measurement. Angle and length come from an inner product. A vector space can have many inner products, and for each the notion of orthonormal is different. For instance, in the plane ##R^2## the usual inner product tells you that ##(0,1)## and ##(1,0)## form an orthonormal basis. But with the inner product defined by ##(x,y).(z,w) = 4xz+4yw## they do not, since they each then have length 2.

For a fixed inner product there is more than one orthonormal basis. For instance for the real line with the usual dot product, 1 and -1 are two orthonormal bases. In the plane, ##R^{2}## for each unit vector ##(cos θ, sin θ)## there are two vectors ##(cos θ+π/2,sin θ + π/2)## and ##(cos θ-π/2,sin θ -π/2)## that extend it to an orthonormal basis. Therefore the set of orthonormal bases is infinite and forms a continuum.
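The modified inner product above can be probed numerically. Under ##(x,y).(z,w) = 4xz+4yw## the standard basis vectors have length 2, while the rescaled vectors ##(1/2, 0)## and ##(0, 1/2)## (my choice for illustration, not from the post) turn out to be orthonormal. A minimal sketch:

```python
import math

def ip(u, v):
    # the modified inner product (x,y).(z,w) = 4xz + 4yw
    return 4 * u[0] * v[0] + 4 * u[1] * v[1]

e1, e2 = (1.0, 0.0), (0.0, 1.0)
print(math.sqrt(ip(e1, e1)))  # 2.0 -- no longer unit length
print(math.sqrt(ip(e2, e2)))  # 2.0

f1, f2 = (0.5, 0.0), (0.0, 0.5)
print(ip(f1, f1), ip(f2, f2), ip(f1, f2))  # 1.0 1.0 0.0 -- orthonormal
```

Same vectors, different inner product, different verdict on "orthonormal".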
 
  • #6
lavinia said:
For a fixed inner product there is more than one orthonormal basis. For instance for the real line with the usual dot product, 1 and -1 are two orthonormal bases. In the plane, ##R^{2}## for each unit vector ##(cos θ, sin θ)## there are two vectors ##(cos θ+π/2,sin θ + π/2)## and ##(cos -θ-π/2,sin -θ -π/2)## that extend it to an orthonormal basis. Therefore the set of orthonormal bases is infinite and forms a continuum.
The last two pairs of vectors would be clearer with parentheses. I think this is what you meant, @lavinia:
##(\cos(\theta + \pi/2), \sin(\theta + \pi/2))## and ##(\cos(\theta - \pi/2), \sin(\theta - \pi/2))##
 
  • #7
fog37 said:
a linear vector space contains an infinity of elements called vectors that must obey certain rules.
Just to point out, the most important application of abstract vector spaces is Hamming codes, which live in the spaces ##\mathbb{Z}_2^{2^n-1}## (or ##\mathbb{Z}_2^{2^n}## for SECDED). These are finite vector spaces.
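Finite vector spaces like these can be enumerated outright; for example ##\mathbb{Z}_2^3## (the ##n = 2## case above) contains exactly ##2^3 = 8## vectors. A quick pure-Python illustration:

```python
from itertools import product

# All vectors of the finite vector space Z_2^3: 2**3 = 8 of them.
vectors = list(product((0, 1), repeat=3))
print(len(vectors))  # 8

# Addition is componentwise mod 2, so every vector is its own inverse.
def add(u, v):
    return tuple((a + b) % 2 for a, b in zip(u, v))

v = (1, 0, 1)
print(add(v, v))  # (0, 0, 0)
```

This is why the "infinitely many vectors" and "infinitely many orthonormal bases" intuitions from real vector spaces break down over finite fields.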
 
  • #8
pwsnafu said:
Just to point out, the most important application of abstract vector spaces is Hamming codes, which live in the spaces ##\mathbb{Z}_2^{2^n-1}## (or ##\mathbb{Z}_2^{2^n}## for SECDED). These are finite vector spaces.
I would say that the application of vector spaces is too universal to single out one as "most important".
 
  • #9
FactChecker said:
I would say that the application of vector spaces is too universal to single out one as "most important".

I couldn't agree more.
 

1. What is a subspace of a vector space?

A subspace of a vector space is a subset of the original vector space that follows the same rules and properties as the original vector space. This means that it is also a vector space itself.

2. How is a subspace determined?

A subspace is determined by a set of vectors that satisfy three conditions: closure under addition, closure under scalar multiplication, and containing the zero vector. If these conditions are met, then the set of vectors is a subspace of the original vector space.

3. What is the basis of a subspace?

The basis of a subspace is a set of linearly independent vectors that span the subspace. This means that any vector in the subspace can be written as a linear combination of the basis vectors.

4. How do you find the basis of a subspace?

To find a basis of a subspace from a spanning set, form a matrix whose rows are the spanning vectors and apply Gaussian elimination. The nonzero rows of the row-reduced matrix are linearly independent and still span the subspace, so they form a basis.
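This procedure can be sketched in a few lines of Python. The helper below (an illustrative implementation, using exact rational arithmetic to avoid rounding issues) row-reduces a list of spanning vectors and returns the nonzero rows as a basis:

```python
from fractions import Fraction

def row_basis(vectors):
    """Row-reduce the spanning vectors (Gaussian elimination);
    the nonzero reduced rows form a basis of their span."""
    rows = [[Fraction(x) for x in v] for v in vectors]
    ncols = len(rows[0])
    r, col = 0, 0
    while r < len(rows) and col < ncols:
        # Find a row at or below r with a nonzero entry in this column.
        pivot = next((i for i in range(r, len(rows)) if rows[i][col]), None)
        if pivot is None:
            col += 1
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        piv = rows[r][col]
        rows[r] = [x / piv for x in rows[r]]          # normalize pivot row
        for i in range(len(rows)):                    # clear the column
            if i != r and rows[i][col]:
                f = rows[i][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
        col += 1
    return [tuple(row) for row in rows if any(row)]

# The third vector is the sum of the first two, so the span is a plane.
spanning = [(1, 2, 3), (2, 0, 1), (3, 2, 4)]
print(len(row_basis(spanning)))  # 2
```

The dependent vector is eliminated to a zero row, and the two surviving rows are a basis of the 2-dimensional subspace.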

5. Why is the basis of a subspace important?

The basis of a subspace is important because it allows us to represent any vector in that subspace in a simplified form. It also helps us understand the structure of the subspace and its relationship to the original vector space.
