Basis of a Subspace of a Vector Space

Discussion Overview

The discussion revolves around the concept of bases in vector spaces and subspaces, including the nature of orthonormal bases, the uniqueness of such bases, and the implications of different fields and inner products on these concepts. Participants explore theoretical aspects, definitions, and examples related to linear algebra.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested

Main Points Raised

  • Some participants assert that a vector space can have multiple orthonormal bases due to the ability to rotate or reflect basis vectors.
  • Others argue that the uniqueness of an orthonormal basis is not guaranteed, especially when considering different fields or inner products.
  • One participant emphasizes that the concept of a subspace having "less" vectors than the original vector space is not necessarily true, as every vector space is a subspace of itself.
  • It is noted that the definition of orthonormality depends on the inner product used, and different inner products can yield different orthonormal bases.
  • Examples are provided to illustrate that the basis for a subspace does not need to be part of the basis for the larger space, highlighting the flexibility in choosing bases.

Areas of Agreement / Disagreement

Participants express disagreement regarding the uniqueness of orthonormal bases and the implications of different fields and inner products. The discussion remains unresolved on several points, particularly concerning the nature of subspaces and the definitions involved.

Contextual Notes

Some participants highlight the importance of specifying a field when discussing vector spaces, as this can affect dimensionality and the nature of bases. Additionally, the discussion touches on the role of inner products in defining orthonormality, indicating that assumptions about length and angle may not apply universally.

fog37
Hello Forum and happy new year,

Setting aside rigorous definitions, a linear vector space contains an infinity of elements called vectors that must obey certain rules. Based on the dimension ##N## (finite or infinite) of the vector space, we can always find a set of ##n=N## linearly independent vectors that form a basis. For each vector space, there is an infinity of possible bases to choose from. The basis vectors inside a particular basis don't need to be orthogonal or of unit length. Among the many bases there is one specific basis that is orthonormal: it is composed of unit vectors that are pairwise orthogonal to each other. Among all the possible bases, only one is orthogonal and orthonormal, correct? Or are there multiple orthonormal bases?

A subspace of a vector space is also a set that contains an infinite number of vectors but it contains "less" vectors than the original host vector space. What can we say about the bases of a subspace ##B## of a vector space ##A##? For example, if the host vector space ##A## has ##N=4##, it means that:
  • Each vector in ##A## has four components: ##a = (a_{1}, a_{2}, a_{3}, a_{4})##
  • Each possible basis contains 4 linearly independent vectors
For example, if the subspace ##B## has ##N=2##, it means that each vector in the subspace has only two components while the other two components are either constant or zero since the vectors of ##B## are just the projections of the vectors of ##A## on a certain plane, correct?

Subspace ##B##, being a vector space on its own, also has an infinity of possible bases but only one specific orthonormal basis, right?

Thanks!
 
fog37 said:
Hello Forum and happy new year,

Setting aside rigorous definitions, a linear vector space contains an infinity of elements called vectors that must obey certain rules. Based on the dimension ##N## (finite or infinite) of the vector space, we can always find a set of ##n=N## linearly independent vectors that form a basis. For each vector space, there is an infinity of possible bases to choose from. The basis vectors inside a particular basis don't need to be orthogonal or of unit length. Among the many bases there is one specific basis that is orthonormal: it is composed of unit vectors that are pairwise orthogonal to each other. Among all the possible bases, only one is orthogonal and orthonormal, correct? Or are there multiple orthonormal bases?
Multiple. You can always rotate an orthonormal basis by an arbitrary angle and still have an orthonormal basis. Or change the numbering. Or mirror single basis vectors ##\vec{b} \mapsto -\vec{b}##. Also, your entire argument doesn't consider the case of finite fields.
A subspace of a vector space is also a set that contains an infinite number of vectors but it contains "less" vectors than the original host vector space. What can we say about the bases of a subspace ##B## of a vector space ##A##? For example, if the host vector space ##A## has ##N=4##, it means that:
  • Each vector in ##A## has four components: ##a = (a_{1}, a_{2}, a_{3}, a_{4})##
  • Each possible basis contains 4 linearly independent vectors
For example, if the subspace ##B## has ##N=2##, it means that each vector in the subspace has only two components while the other two components are either constant or zero...
Zero in this context, for otherwise ##\vec{0}## wouldn't be part of your subspace.
... since the vectors of ##B## are just the projections of the vectors of ##A## on a certain plane, correct?
You may look at it this way, but the inclusion is easier than the projection.
Subspace ##B##, being a vector space on its own, also has an infinity of possible bases but only one specific orthonormal basis, right?

Thanks!
No. See above: finite fields, rotations etc.
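The rotation and mirroring arguments above can be checked numerically. The sketch below (plain Python, not from the thread; the angle is chosen arbitrarily) verifies that rotating the standard basis of ##\mathbb{R}^2##, or flipping the sign of one basis vector, still yields an orthonormal pair under the usual dot product:

```python
import math

# Rotating an orthonormal basis of R^2 by any angle theta yields
# another orthonormal basis; theta is arbitrary here.
theta = 0.7

def rotate(v, t):
    """Rotate a 2D vector v by angle t (usual rotation matrix)."""
    x, y = v
    return (math.cos(t) * x - math.sin(t) * y,
            math.sin(t) * x + math.cos(t) * y)

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

e1 = rotate((1.0, 0.0), theta)
e2 = rotate((0.0, 1.0), theta)

assert abs(dot(e1, e1) - 1.0) < 1e-12   # unit length
assert abs(dot(e2, e2) - 1.0) < 1e-12   # unit length
assert abs(dot(e1, e2)) < 1e-12         # orthogonal

# Mirroring one basis vector (b -> -b) also preserves orthonormality.
m2 = (-e2[0], -e2[1])
assert abs(dot(m2, m2) - 1.0) < 1e-12
assert abs(dot(e1, m2)) < 1e-12
```

Since this works for every angle, the orthonormal bases of the plane form a continuum rather than a single distinguished basis.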
 
fog37 said:
Hello Forum and happy new year,

[...] Among all the possible bases, only one is orthogonal and orthonormal, correct? Or are there multiple orthonormal bases?

A subspace of a vector space is also a set that contains an infinite number of vectors but it contains "less" vectors than the original host vector space. [...] Subspace ##B##, being a vector space on its own, also has an infinity of possible bases but only one specific orthonormal basis, right?

I will give a few remarks here:

1) It doesn't make sense to consider a vector space without specifying a field. For example, the vector space of the complex numbers with real scalars is a very different vector space than the complex numbers with complex scalars. They don't even have the same dimension! (The former has dimension 2, the latter dimension 1).

2) You say a subspace has 'less' elements. This is not necessarily true: every vector space is a subspace of itself.
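Remark 1 can be illustrated with a short check. The snippet below (a minimal sketch using Python's built-in complex type, not from the thread) decomposes a complex number over the real basis ##\{1, i\}##, showing that two real scalars are needed, whereas over ##\mathbb{C}## the single basis vector ##1## suffices:

```python
# C as a vector space over R has dimension 2: basis {1, i}.
z = 3.0 - 4.0j

# Over R, the coordinates of z in the basis {1, i} are two real scalars.
a, b = z.real, z.imag
assert a * 1 + b * 1j == z   # z = a*1 + b*i with a, b real

# Over C itself, {1} alone spans: z = z * 1 with a single complex scalar.
assert z * (1 + 0j) == z
```

The same set of vectors thus has dimension 2 or 1 depending on which field supplies the scalars.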
 
Here is one consequence of @fresh_42's correction (that the orthonormal basis is not unique) that may help you. Suppose you have the vector space ##\mathbb{R}^2## with orthonormal basis ##(1,0)## and ##(0,1)##. The set of vectors ##(x,x)## is a subspace with an orthonormal basis ##(1/\sqrt{2}, 1/\sqrt{2})##. So the basis you are using for the subspace does not have to be part of the basis you are using for the larger space. But you can extend the subspace basis to get another basis for the whole space. In this example, ##(1/\sqrt{2}, 1/\sqrt{2})## and ##(-1/\sqrt{2}, 1/\sqrt{2})## would be an orthonormal basis for the space.
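The diagonal-subspace example above can be verified directly. This sketch (plain Python, not from the thread) checks that the two vectors form an orthonormal basis of the plane and that an arbitrary vector decomposes correctly in it:

```python
import math

# The diagonal subspace {(x, x)} of R^2 has orthonormal basis (s, s)
# with s = 1/sqrt(2); adding (-s, s) extends it to a basis of the plane.
s = 1.0 / math.sqrt(2.0)
b1 = (s, s)     # orthonormal basis vector of the subspace {(x, x)}
b2 = (-s, s)    # extension vector for the whole plane

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

assert abs(dot(b1, b1) - 1.0) < 1e-12  # b1 is a unit vector
assert abs(dot(b2, b2) - 1.0) < 1e-12  # b2 is a unit vector
assert abs(dot(b1, b2)) < 1e-12        # b1 and b2 are orthogonal

# Any v = (x, y) decomposes as dot(v, b1)*b1 + dot(v, b2)*b2.
v = (3.0, -1.0)
c1, c2 = dot(v, b1), dot(v, b2)
rec = (c1 * b1[0] + c2 * b2[0], c1 * b1[1] + c2 * b2[1])
assert abs(rec[0] - v[0]) < 1e-12 and abs(rec[1] - v[1]) < 1e-12
```

Note that neither basis vector of the extended basis coincides with ##(1,0)## or ##(0,1)##, which is exactly the point being made.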
 
First of all, a vector space may not have an idea of length and angle, in which case it makes no sense to talk of orthonormal bases. The idea of a vector space is independent of the idea of length and angle measurement. Angle and length come from an inner product. A vector space can have many inner products and for each, the idea of orthonormal is different. For instance, in the plane ##\mathbb{R}^2## the usual inner product tells you that ##(0,1)## and ##(1,0)## form an orthonormal basis. But with the inner product defined by ##(x,y)\cdot(z,w) = 4xz+4yw## they do not, since they each then have length 2.

For a fixed inner product there is more than one orthonormal basis. For instance for the real line with the usual dot product, 1 and -1 are two orthonormal bases. In the plane, ##R^{2}## for each unit vector ##(cos θ, sin θ)## there are two vectors ##(cos θ+π/2,sin θ + π/2)## and ##(cos θ-π/2,sin θ -π/2)## that extend it to an orthonormal basis. Therefore the set of orthonormal bases is infinite and forms a continuum.
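The dependence on the inner product is easy to see in code. The sketch below (not from the thread) compares the usual dot product on ##\mathbb{R}^2## with the scaled product ##(x,y)\cdot(z,w) = 4xz+4yw## mentioned above; the rescaled vectors that restore orthonormality are an illustrative guess consistent with the lengths involved:

```python
# Orthonormality depends on the inner product chosen.
def usual(u, v):
    return u[0] * v[0] + u[1] * v[1]

def scaled(u, v):   # the inner product (x,y).(z,w) = 4xz + 4yw
    return 4 * u[0] * v[0] + 4 * u[1] * v[1]

e1, e2 = (1.0, 0.0), (0.0, 1.0)

# Orthonormal under the usual dot product...
assert usual(e1, e1) == 1.0 and usual(e2, e2) == 1.0 and usual(e1, e2) == 0.0

# ...but under the scaled product each has squared length 4, i.e. length 2,
# so the pair is still orthogonal yet no longer orthonormal.
assert scaled(e1, e1) == 4.0 and scaled(e1, e2) == 0.0

# Halving each vector gives an orthonormal basis for the scaled product.
f1, f2 = (0.5, 0.0), (0.0, 0.5)
assert scaled(f1, f1) == 1.0 and scaled(f2, f2) == 1.0 and scaled(f1, f2) == 0.0
```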
 
lavinia said:
For a fixed inner product there is more than one orthonormal basis. For instance for the real line with the usual dot product, 1 and -1 are two orthonormal bases. In the plane, ##R^{2}## for each unit vector ##(cos θ, sin θ)## there are two vectors ##(cos θ+π/2,sin θ + π/2)## and ##(cos θ-π/2,sin θ -π/2)## that extend it to an orthonormal basis. Therefore the set of orthonormal bases is infinite and forms a continuum.
The last two pairs of vectors would be clearer with parentheses. I think this is what you meant, @lavinia:
##(\cos(\theta + \pi/2), \sin(\theta + \pi/2))## and ##(\cos(\theta - \pi/2), \sin(\theta - \pi/2))##
 
fog37 said:
a linear vector space contains an infinity of elements called vectors that must obey certain rules.
Just to point out, the most important application of abstract vector spaces is Hamming codes, which live in the spaces ##\mathbb{Z}_2^{2^n-1}## (or ##\mathbb{Z}_2^{2^n}## for SECDED). These are finite vector spaces.
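To make the finite-vector-space point concrete, here is a minimal sketch (not from the thread) of the standard Hamming(7,4) parity-check construction over ##\mathbb{Z}_2##: the codewords form a 4-dimensional subspace of the 128-element space ##\mathbb{Z}_2^7##, and a single flipped bit is located by the syndrome:

```python
# Parity-check matrix H for Hamming(7,4): column j (1-indexed) is the
# binary expansion of j, so a single-bit error at position j has syndrome j.
H = [[(j >> i) & 1 for j in range(1, 8)] for i in range(3)]

def syndrome(word):
    """Mod-2 product of H with the word; nonzero means an error occurred."""
    return [sum(h * w for h, w in zip(row, word)) % 2 for row in H]

def correct(word):
    """Flip the single bit (if any) indicated by the syndrome."""
    s = syndrome(word)
    pos = s[0] + 2 * s[1] + 4 * s[2]   # 1-indexed error position, 0 = clean
    if pos:
        word = word[:]
        word[pos - 1] ^= 1
    return word

codeword = [0, 0, 0, 0, 0, 0, 0]       # the zero vector is always a codeword
received = codeword[:]
received[4] ^= 1                       # flip bit 5 in transit
assert syndrome(received) != [0, 0, 0]
assert correct(received) == codeword
```

Everything here is linear algebra over the two-element field, so notions like "an infinity of vectors" from the opening post simply do not apply.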
 
pwsnafu said:
Just to point out, the most important application of abstract vector spaces is Hamming codes, which live in the spaces ##\mathbb{Z}_2^{2^n-1}## (or ##\mathbb{Z}_2^{2^n}## for SECDED). These are finite vector spaces.
I would say that the application of vector spaces is too universal to single out one as "most important".
 
FactChecker said:
I would say that the application of vector spaces is too universal to single out one as "most important".

I couldn't agree more.
 
