Gram-Schmidt Orthonormalization .... Garling Theorem 11.4.1 ....

  • Context: MHB
  • Thread starter: Math Amateur
  • Tags: Theorem
SUMMARY

The discussion centers on the proof of Theorem 11.4.1 from D. J. H. Garling's "A Course in Mathematical Analysis: Volume II: Metric and Topological Spaces, Functions of a Vector Variable." The theorem establishes that the orthonormal vectors \( e_j \) generated by the Gram-Schmidt process span the same subspaces as the original linearly independent vectors \( x_j \) in an inner product space. Specifically, setting \( f_j = x_j - \sum_{i=1}^{j-1} \langle x_j, e_i \rangle e_i \) and \( e_j = f_j / \| f_j \| \) makes \( e_1, \ldots, e_j \) orthonormal and yields \( \text{span}(e_1, \ldots, e_j) = W_j \), where \( W_j = \text{span}(x_1, \ldots, x_j) \).
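As a concrete numerical illustration of this construction (my own sketch, not part of Garling's text), the following Python/NumPy snippet implements the formulas above with the standard Euclidean inner product; the function name gram_schmidt and the test vectors are ad hoc.

```python
import numpy as np

def gram_schmidt(xs, tol=1e-12):
    """Classical Gram-Schmidt: given linearly independent vectors xs[0], ..., xs[n-1],
    return orthonormal e_1, ..., e_n with span(e_1, ..., e_j) = span(x_1, ..., x_j)."""
    es = []
    for x in xs:
        # f_j = x_j - sum_{i<j} <x_j, e_i> e_i  (remove the component lying in W_{j-1})
        f = x - sum(np.dot(x, e) * e for e in es)
        norm = np.linalg.norm(f)
        if norm < tol:
            raise ValueError("input vectors appear to be linearly dependent")
        es.append(f / norm)  # e_j = f_j / ||f_j||
    return np.array(es)

# Example: three linearly independent vectors in R^3
xs = np.array([[1.0, 1.0, 0.0],
               [1.0, 0.0, 1.0],
               [0.0, 1.0, 1.0]])
es = gram_schmidt(xs)
print(np.round(es @ es.T, 10))  # Gram matrix of the e's: the 3x3 identity, as orthonormality requires
```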

PREREQUISITES
  • Understanding of Gram-Schmidt orthonormalization process
  • Familiarity with inner product spaces and linear independence
  • Knowledge of vector norms and their properties
  • Basic concepts of metric spaces as outlined in Garling's work
NEXT STEPS
  • Study the Gram-Schmidt process in detail, focusing on its application in vector spaces
  • Explore the properties of inner product spaces and their implications for orthonormal sets
  • Investigate the concept of linear independence in the context of vector spans
  • Review the definitions and properties of metric spaces as presented in Garling's "A Course in Mathematical Analysis"
USEFUL FOR

Mathematicians, students of advanced calculus, and anyone studying linear algebra or functional analysis will benefit from this discussion, particularly those interested in the Gram-Schmidt process and its applications in vector spaces.

Math Amateur
I am reading D. J. H. Garling's book: "A Course in Mathematical Analysis: Volume II: Metric and Topological Spaces, Functions of a Vector Variable" ... ...

I am focused on Chapter 11: Metric Spaces and Normed Spaces ... ...

I need some help with an aspect of the proof of Theorem 11.4.1 ...

Garling's statement and proof of Theorem 11.4.1 reads as follows:
[Attachment 7921: scan of Garling's statement and proof of Theorem 11.4.1]

In the above proof by Garling we read the following:

" ... ... Let $$f_j = x_j - \sum_{ i = 1 }^{ j-1 } \langle x_j , e_i \rangle e_i$$. Since$$ x_j \notin W_{ j-1 }, f_j \neq 0$$.

Let $$e_j = \frac{ f_j }{ \| f_j \| } $$. Then $$\| e_j \| = 1$$ and

$$\text{span}(e_1, \ldots, e_j) = \text{span}(W_{j-1}, e_j) = \text{span}(W_{j-1}, x_j) = W_j$$

... ... "
Can someone please demonstrate rigorously how/why $$f_j = x_j - \sum_{ i = 1 }^{ j-1 } \langle x_j , e_i \rangle e_i$$

and

$$e_j = \frac{ f_j }{ \| f_j \| }$$

imply that

$$\text{span}(e_1, \ldots, e_j) = \text{span}(W_{j-1}, e_j) = \text{span}(W_{j-1}, x_j) = W_j$$

Help will be much appreciated ...

Peter
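As a purely numerical sanity check of the span identity in question (my own addition, not the rigorous argument asked for above), one can use NumPy's QR factorization: for a matrix $$X$$ of full column rank, the first $$j$$ columns of $$Q$$ form an orthonormal set spanning the same subspace $$W_j$$ as the first $$j$$ columns of $$X$$, so equality of spans shows up as equality of ranks.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 4))   # columns x_1, ..., x_4 in R^6 (almost surely independent)
Q, R = np.linalg.qr(X)            # reduced QR: columns of Q are orthonormal, and since R is
                                  # upper triangular and invertible, span(Q[:, :j]) = span(X[:, :j])

for j in range(1, 5):
    rank_x  = np.linalg.matrix_rank(X[:, :j])
    rank_q  = np.linalg.matrix_rank(Q[:, :j])
    rank_xq = np.linalg.matrix_rank(np.hstack([X[:, :j], Q[:, :j]]))
    # equal spans  <=>  all three ranks equal j, so stacking the two blocks adds nothing
    print(j, rank_x, rank_q, rank_xq)
```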
 
Reflecting on my post above, I have formulated the following proof of Garling's statement ... ...

$$\text{span}(e_1, \ldots, e_j) = \text{span}(W_{j-1}, e_j) = \text{span}(W_{j-1}, x_j) = W_j$$

We have $$e_1 = \frac{ f_1 }{ \| f_1 \| }$$ (where $$f_1 = x_1$$, the sum being empty for $$j = 1$$), and we suppose that we have constructed $$e_1, \ldots, e_{j-1}$$ satisfying the conclusions of the theorem ...

Let $$f_j = x_j - \sum_{ i = 1 }^{ j-1 } \langle x_j , e_i \rangle e_i$$

Then

$$e_j = \frac{ f_j }{ \| f_j \| } = \frac{ x_j - \sum_{ i = 1 }^{ j-1 } \langle x_j , e_i \rangle e_i }{ \left\| x_j - \sum_{ i = 1 }^{ j-1 } \langle x_j , e_i \rangle e_i \right\| }$$

So ...

$$e_j = \frac{ x_j - \langle x_j , e_1 \rangle e_1 - \langle x_j , e_2 \rangle e_2 - \ldots - \langle x_j , e_{ j-1 } \rangle e_{ j-1 } }{ \left\| x_j - \sum_{ i = 1 }^{ j-1 } \langle x_j , e_i \rangle e_i \right\| }$$

Therefore ...

$$x_j = \left\| x_j - \sum_{ i = 1 }^{ j-1 } \langle x_j , e_i \rangle e_i \right\| e_j + \langle x_j , e_1 \rangle e_1 + \langle x_j , e_2 \rangle e_2 + \ldots + \langle x_j , e_{ j-1 } \rangle e_{ j-1 }$$

Therefore $$x_j \in \text{span}(e_1, e_2, \ldots, e_j)$$ ... ... ... ... ... (1)

But $$W_{j-1} = \text{span}(x_1, x_2, \ldots, x_{j-1}) = \text{span}(e_1, e_2, \ldots, e_{j-1})$$ ... ... ... ... ... (2)

Now (1) and (2) give $$\text{span}(x_1, x_2, \ldots, x_j) \subseteq \text{span}(e_1, e_2, \ldots, e_j)$$, since by (2) each $$x_i$$ with $$i \le j-1$$ already lies in $$\text{span}(e_1, \ldots, e_{j-1}) \subseteq \text{span}(e_1, \ldots, e_j)$$.

But ... both lists are linearly independent (the $$x$$'s by hypothesis and the $$e$$'s by orthonormality ...)

Thus both spans have dimension $$j$$, and hence they must be equal ... That is

$$\text{span}(x_1, x_2, \ldots, x_j) = \text{span}(e_1, e_2, \ldots, e_j)$$
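As an aside (a sketch of my own, not Garling's wording), the middle equality in Garling's chain can also be read off directly from the definition of $$f_j$$, without the dimension count. Writing $$w_j = \sum_{ i = 1 }^{ j-1 } \langle x_j, e_i \rangle e_i \in W_{j-1}$$, we have

$$x_j = \| f_j \| e_j + w_j \ \text{ and } \ e_j = \frac{1}{\| f_j \|} ( x_j - w_j )$$

The first identity shows $$x_j \in \text{span}(W_{j-1}, e_j)$$, so $$\text{span}(W_{j-1}, x_j) \subseteq \text{span}(W_{j-1}, e_j)$$; the second shows $$e_j \in \text{span}(W_{j-1}, x_j)$$, giving the reverse inclusion. Hence $$\text{span}(W_{j-1}, e_j) = \text{span}(W_{j-1}, x_j) = W_j$$.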

Is that correct ...?

Can someone please critique the above proof, pointing out any errors and/or shortcomings ...

Peter

*** EDIT ***

Above I claimed that the list of vectors $$e_1, e_2, \ldots, e_j$$ was orthonormal ... and hence linearly independent ... but I still needed to show that the list $$e_1, e_2, \ldots, e_j$$ actually is orthonormal ... To show this, let $$1 \le k \lt j$$ and calculate $$\langle e_j, e_k \rangle$$ ... indeed it readily turns out that $$\langle e_j, e_k \rangle = 0$$ for all $$k$$ such that $$1 \le k \lt j$$, and so the list of vectors $$e_1, e_2, \ldots, e_j$$ is orthonormal ...

Peter
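Spelling out that computation (assuming the inner product is linear in its first variable, as the formula for $$f_j$$ suggests, and using that $$e_1, \ldots, e_{j-1}$$ are already orthonormal by the induction hypothesis): for $$1 \le k \lt j$$,

$$\langle f_j, e_k \rangle = \langle x_j, e_k \rangle - \sum_{ i = 1 }^{ j-1 } \langle x_j, e_i \rangle \langle e_i, e_k \rangle = \langle x_j, e_k \rangle - \langle x_j, e_k \rangle = 0$$

since $$\langle e_i, e_k \rangle = \delta_{ik}$$; dividing by $$\| f_j \| \neq 0$$ then gives $$\langle e_j, e_k \rangle = 0$$, and $$\| e_j \| = 1$$ by construction.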
 
