Gram-Schmidt Orthonormalization ... Garling Theorem 11.4.1

  • #1
Math Amateur
I am reading D. J. H. Garling's book: "A Course in Mathematical Analysis: Volume II: Metric and Topological Spaces, Functions of a Vector Variable" ... ...

I am focused on Chapter 11: Metric Spaces and Normed Spaces ... ...

I need some help with an aspect of the proof of Theorem 11.4.1 ...

Garling's statement and proof of Theorem 11.4.1 reads as follows:
[Image: Garling's statement and proof of Theorem 11.4.1]


In the above proof by Garling we read the following:

" ... ... Let ##f_j = x_j - \sum_{ i = 1 }^{ j-1 } \langle x_j , e_i \rangle e_i##. Since ##x_j \notin W_{ j-1 }, f_j \neq 0##.

Let ##e_j = \frac{ f_j }{ \| f_j \| }## . Then ##\| e_j \| = 1## and

##\text{ span } ( e_1, \ ... \ ... \ e_j ) = \text{ span } ( W_{ j - 1 } , e_j ) = \text{ span }( W_{ j - 1 } , x_j ) = W_j##

... ... "
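(For concreteness ... an illustrative worked instance of this construction, not from Garling's text: in ##\mathbb{R}^2## with the usual inner product, take ##x_1 = (1, 0)## and ##x_2 = (1, 1)##. Then ##e_1 = (1, 0)##, ##f_2 = x_2 - \langle x_2 , e_1 \rangle e_1 = (1, 1) - (1, 0) = (0, 1)##, so ##e_2 = (0, 1)##, and indeed ##\text{ span } ( e_1, e_2 ) = \text{ span } ( x_1, x_2 ) = \mathbb{R}^2##.)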
Can someone please demonstrate rigorously how/why

##f_j = x_j - \sum_{ i = 1 }^{ j-1 } \langle x_j , e_i \rangle e_i##

and

##e_j = \frac{ f_j }{ \| f_j \| }##

imply that

##\text{ span } ( e_1, \ ... \ ... \ e_j ) = \text{ span } ( W_{ j - 1 } , e_j ) = \text{ span }( W_{ j - 1 } , x_j ) = W_j## ... ?
Help will be much appreciated ...

Peter
 

  • #2
WWGD
Have you tried induction on the size of the basis?
 
  • #3
Hi WWGD ... thanks for the hint ...

I've seen and followed a proof by induction in Axler: Linear Algebra Done Right ...

But ... I still do not follow Garling's logic ... particularly the part I quoted ...

Can you help further ...?

Peter
 
  • #4
Hi again WWGD ...

Reflecting on your advice and on Axler's proof ... I now believe that I understand Garling's statement that I quoted ...

I will post the proof of Garling's statement later ...

Thank you for your help ...

Peter
 
  • #5
Reflecting on my post above I have formulated the following proof of Garling's statement ...

##\text{ span } ( e_1, \ ... \ ... \ e_j ) = \text{ span } ( W_{ j - 1 } , e_j ) = \text{ span }( W_{ j - 1 } , x_j ) = W_j##

We have ##e_1 = \frac{ f_1 }{ \| f_1 \| }## and we suppose that we have constructed ##e_1, \ ... \ ... \ e_{ j - 1 }##, satisfying the conclusions of the theorem ...

Let ##f_j = x_j - \sum_{ i = 1 }^{ j-1 } \langle x_j , e_i \rangle e_i##

Then ##e_j = \frac{ f_j }{ \| f_j \| } = \frac{ x_j - \sum_{ i = 1 }^{ j-1 } \langle x_j , e_i \rangle e_i }{ \| x_j - \sum_{ i = 1 }^{ j-1 } \langle x_j , e_i \rangle e_i \| }##

So ...

##e_j = \frac{ x_j - \langle x_j , e_1 \rangle e_1 - \langle x_j , e_2 \rangle e_2 - \ ... \ ... \ ... \ - \langle x_j , e_{ j - 1 } \rangle e_{ j - 1 } }{ \| x_j - \sum_{ i = 1 }^{ j-1 } \langle x_j , e_i \rangle e_i \| }##

Therefore ...

##x_j = \| x_j - \sum_{ i = 1 }^{ j-1 } \langle x_j , e_i \rangle e_i \| \ e_j + \langle x_j , e_1 \rangle e_1 + \langle x_j , e_2 \rangle e_2 + \ ... \ ... \ ... \ + \langle x_j , e_{ j - 1 } \rangle e_{ j - 1 }##

Therefore ##x_j \in \text{ span } ( e_1, e_2, \ ... \ ... \ , e_j )## ... ... ... (1)

But ##W_{ j - 1 } = \text{ span } ( x_1, x_2, \ ... \ ... \ , x_{ j - 1 } ) = \text{ span } ( e_1, e_2, \ ... \ ... \ , e_{ j - 1 } )## ... ... ... (2)

Now (1) and (2) ##\Longrightarrow \text{ span } ( x_1, x_2, \ ... \ ... \ , x_j ) \subseteq \text{ span } ( e_1, e_2, \ ... \ ... \ , e_j )##

But ... both lists are linearly independent (the ##x##'s by hypothesis and the ##e##'s by orthonormality ...)

Thus both spans have dimension ##j##, and since one is contained in the other, they must be equal ...

That is ##\text{ span } ( x_1, x_2, \ ... \ ... \ , x_j ) = \text{ span } ( e_1, e_2, \ ... \ ... \ , e_j )##
Is that correct ...?

Can someone please critique the above proof, pointing out errors and/or shortcomings ...

Peter

*** EDIT ***

Above I claimed that the list of vectors ##e_1, e_2, \ ... \ ... \ , e_j## was orthonormal ... and hence linearly independent ... but I needed to show that the list ##e_1, e_2, \ ... \ ... \ , e_j## was orthonormal ...

To show this let ##1 \le k \lt j## and calculate ##\langle f_j , e_k \rangle## ... indeed

##\langle f_j , e_k \rangle = \langle x_j , e_k \rangle - \sum_{ i = 1 }^{ j-1 } \langle x_j , e_i \rangle \langle e_i , e_k \rangle = \langle x_j , e_k \rangle - \langle x_j , e_k \rangle = 0##

since ##\langle e_i , e_k \rangle = 0## for ##i \neq k## and ##\langle e_k , e_k \rangle = 1## ... so ##\langle e_j , e_k \rangle = \frac{ 1 }{ \| f_j \| } \langle f_j , e_k \rangle = 0## for all ##k## such that ##1 \le k \lt j##, and so the list of vectors ##e_1, e_2, \ ... \ ... \ , e_j## is orthonormal ...

Peter
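(A quick numerical sanity check of the construction above ... a minimal sketch assuming NumPy is available; the function name gram_schmidt and the test vectors are illustrative choices, not Garling's notation.)

```python
import numpy as np

def gram_schmidt(xs):
    """Garling's construction: f_j = x_j - sum_i <x_j, e_i> e_i, then e_j = f_j / ||f_j||."""
    es = []
    for x in xs:
        f = x - sum(np.dot(x, e) * e for e in es)  # subtract projections onto e_1, ..., e_{j-1}
        norm = np.linalg.norm(f)
        assert norm > 1e-12  # x_j not in W_{j-1} guarantees f_j != 0
        es.append(f / norm)  # normalize to get e_j
    return np.array(es)

# three linearly independent vectors in R^3 (an illustrative choice)
xs = np.array([[1.0, 1.0, 0.0],
               [1.0, 0.0, 1.0],
               [0.0, 1.0, 1.0]])
es = gram_schmidt(xs)

# orthonormality of e_1, ..., e_j: the Gram matrix should be the identity
print(np.allclose(es @ es.T, np.eye(3)))  # True

# span(x_1, ..., x_j) = span(e_1, ..., e_j): the coefficient matrix
# C[j, i] = <x_j, e_i> is lower triangular, i.e. each x_j uses only e_1, ..., e_j
C = xs @ es.T
print(np.allclose(C, np.tril(C)))  # True
```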
 
  • #6
Math Amateur said:
##f_j = x_j - \sum_{ i = 1 }^{ j-1 } \langle x_j , e_i \rangle e_i##
You just have to note that ##f_j## is orthogonal to ##e_{ j - 1 }, \ ... \ , e_0## (here ##e_0## is defined to be ##0##), so ##f_j## lies in the orthogonal complement of ##\text{ span } ( e_1, \ ... \ , e_{ j - 1 } ) = W_{ j - 1 }##.
 

Related to Gram-Schmidt Orthonormalization ... Garling Theorem 11.4.1

1. What is Gram-Schmidt Orthonormalization?

Gram-Schmidt Orthonormalization is a mathematical process used to transform a set of linearly independent vectors into a set of orthonormal vectors, that is, vectors that are mutually perpendicular and have length 1. This process is commonly used in linear algebra and is also known as the Gram-Schmidt process.

2. Why is Gram-Schmidt Orthonormalization important?

Gram-Schmidt Orthonormalization is important because it allows us to find an orthonormal basis for a vector space, which can simplify many mathematical calculations. It is also used in applications such as signal processing, data compression, and computer graphics.

3. How does Gram-Schmidt Orthonormalization work?

The Gram-Schmidt process takes a set of linearly independent vectors and transforms them into a set of orthonormal vectors. This is done by subtracting from each vector its projections onto the previously constructed vectors, ensuring that each new vector is perpendicular to all of the previous ones. The resulting vectors are then normalized to have length 1, creating an orthonormal basis.
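In symbols, using the notation of the thread above: given linearly independent vectors ##x_1, x_2, \ ... \ , x_n##, one sets

##f_j = x_j - \sum_{ i = 1 }^{ j-1 } \langle x_j , e_i \rangle e_i \ \ \text{ and } \ \ e_j = \frac{ f_j }{ \| f_j \| }##

where each term ##\langle x_j , e_i \rangle e_i## is the projection of ##x_j## onto ##e_i##.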

4. What is Garling Theorem 11.4.1?

Theorem 11.4.1 in Garling's A Course in Mathematical Analysis (Volume II) is that book's statement of this fact: applying the Gram-Schmidt process to a linearly independent sequence ##x_1, x_2, \ ... ## produces an orthonormal sequence ##e_1, e_2, \ ... ## with ##\text{ span } ( e_1, \ ... \ , e_j ) = \text{ span } ( x_1, \ ... \ , x_j ) = W_j## for each ##j##, and hence an orthonormal basis.

5. What are some applications of Gram-Schmidt Orthonormalization?

Gram-Schmidt Orthonormalization has many applications in mathematics and other fields. It is used for solving systems of linear equations, finding the least-squares solution to a system, and in the QR decomposition of matrices. It is also used in signal processing, data compression, and computer graphics to simplify calculations and improve efficiency.
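(For instance ... a minimal sketch of the QR connection, assuming NumPy is available; the matrices here are illustrative. The ##Q## factor has orthonormal columns spanning the same subspaces that Gram-Schmidt produces from the columns of ##A##, and QR gives the least-squares solution of ##Ax = b##.)

```python
import numpy as np

# QR decomposition: Q has orthonormal columns, R is upper triangular, and A = Q R,
# the matrix form of orthonormalizing the columns of A.
A = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])
Q, R = np.linalg.qr(A)

print(np.allclose(Q.T @ Q, np.eye(2)))  # True: columns of Q are orthonormal
print(np.allclose(Q @ R, A))            # True: A = Q R

# least-squares solution of A x = b via QR: solve the triangular system R x = Q^T b
b = np.array([1.0, 2.0, 3.0])
x = np.linalg.solve(R, Q.T @ b)
print(np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0]))  # True
```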
