What Happens When Gram-Schmidt Is Applied to Linearly Dependent Vectors?

  • Thread starter: mpm
  • Tags: Process
SUMMARY

When the Gram-Schmidt process is applied to a set of vectors {v1, v2, v3} where v1 and v2 are linearly independent and v3 belongs to Span(v1, v2), the first two steps proceed normally and yield an orthonormal pair spanning Span(v1, v2). At the third step the residual of v3 is the zero vector, so the normalization requires division by zero; if the dependent vector is skipped instead, the process produces an orthonormal set that spans only Span(v1, v2), a proper subspace of the ambient space V whenever dim(V) > 2, and therefore not a basis for V. In general, Gram-Schmidt builds a basis for V only from vectors that lie outside the span of their predecessors; any vector that falls inside that span either forces a division by zero or must be discarded.

PREREQUISITES
  • Understanding of linear independence and dependence
  • Familiarity with vector spaces and spans
  • Knowledge of the Gram-Schmidt orthogonalization process
  • Concept of inner product spaces
NEXT STEPS
  • Study the properties of linear independence in vector spaces
  • Explore the implications of the Gram-Schmidt process on finite-dimensional inner product spaces
  • Investigate examples of Gram-Schmidt applied to linearly dependent sets
  • Learn about alternative orthogonalization methods, such as Modified Gram-Schmidt
USEFUL FOR

Students and professionals in mathematics, particularly those studying linear algebra, as well as educators teaching concepts of vector spaces and orthogonalization techniques.

mpm
I have a question about this process.

What will happen if this process is applied to a set of vectors {v1, v2, v3} where v1 and v2 are linearly independent, but v3 belongs to Span(v1, v2)? Will the process fail? If it fails, why does it fail?
 
Doesn't anyone possibly know the answer to this question? Or can anyone even give me some direction on it?
 
Have you tried it out on a concrete example? What happens?
 
The goal of Gram-Schmidt (GS) is to take a nonorthogonal set of linearly independent vectors and construct an orthonormal set whose span equals the span of the original set. This is a key ingredient in proving its most natural corollary: every finite-dimensional inner product space admits an orthonormal basis. Note that when (v1, ..., vj) is linearly independent, vj is not contained in span(v1, v2, ..., vj-1), and therefore vj is not in span(e1, ..., ej-1) either, so the residual at step j is nonzero and can be normalized. If instead vj is in span(v1, ..., vj-1) for some j, then the projection of vj onto span(e1, ..., ej-1) recovers vj itself, the residual is zero, and that step produces nothing new. This does nothing to further the purpose of GS, but it does not destroy it either, so long as the dependent vectors are discarded and the original set still contains vectors lying outside the span of their companions. However, if the set has only dim(V) vectors and some vk lies in span(v1, ..., vk-1), then the process yields fewer than dim(V) orthonormal vectors, so the result cannot form a basis for V; it merely forms a basis for a proper subspace of V.
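As a concrete illustration of this, here is a minimal numpy sketch (the example vectors, the function name gram_schmidt, and the tolerance are made up for illustration, not taken from this thread): it runs Gram-Schmidt on a set with v3 = v1 + v2 and skips any vector whose residual is numerically zero.

import numpy as np

def gram_schmidt(vectors, tol=1e-12):
    # Gram-Schmidt that skips vectors whose residual is (numerically) zero,
    # i.e. vectors lying in the span of the previously processed ones.
    basis = []
    for v in vectors:
        u = np.array(v, dtype=float)
        for e in basis:
            u = u - np.dot(u, e) * e      # subtract the projection onto e
        norm = np.linalg.norm(u)
        if norm > tol:
            basis.append(u / norm)        # residual is nonzero: keep it
        # else: v lies in the span of the earlier vectors; normalizing u
        # would mean dividing by zero, so the vector is skipped
    return basis

v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([1.0, 1.0, 0.0])
v3 = v1 + v2                              # v3 is in Span(v1, v2)

basis = gram_schmidt([v1, v2, v3])
print(len(basis))  # prints 2: an orthonormal basis for Span(v1, v2), not for R^3

Dropping the "if norm > tol" guard and normalizing every residual would attempt a division by zero at the third vector, which is exactly the failure described in the next reply.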
 
mpm said:
I have a question about this process.
What will happen if this process is applied to a set of vectors {v1, v2, v3} where v1 and v2 are linearly independent, but v3 belongs to Span(v1, v2)? Will the process fail? If it fails, why does it fail?

Eventually, you will wind up having to divide by 0.
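To spell that out for the set in the question (writing <.,.> for the inner product): since v3 is in Span(v1, v2), we have v3 = a*v1 + b*v2 for some scalars a and b, and Span(v1, v2) = span(e1, e2), where e1 and e2 are the orthonormal vectors produced from v1 and v2. The third Gram-Schmidt step is then

u3 = v3 - <v3, e1> e1 - <v3, e2> e2 = 0,

because subtracting the projection of v3 onto span(e1, e2) removes all of v3. The normalization step e3 = u3 / ||u3|| therefore requires dividing by ||u3|| = 0.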
 
