Math Amateur

I am reading D. J. H. Garling's book: "A Course in Mathematical Analysis: Volume II: Metric and Topological Spaces, Functions of a Vector Variable" ...

I am focused on Chapter 11: Metric Spaces and Normed Spaces ...

I need some help in order to understand the meaning of, and the reason for, some remarks made by Garling after Theorem 11.4.1, Gram-Schmidt Orthonormalization ...

Theorem 11.4.1 and its proof, followed by Garling's remarks, read as follows:

View attachment 8968

View attachment 8969

In his remarks just after the proof of Theorem 11.4.1, Garling writes the following:

" ... Note that if \(\displaystyle ( e_1, \ldots, e_k)\) is an orthonormal sequence and \(\displaystyle \sum_{ j = 1 }^k x_j e_j = 0\), then \(\displaystyle x_i = \left\langle \sum_{ j = 1 }^k x_j e_j , e_i \right\rangle = 0 \) for \(\displaystyle 1 \le i \le k\) ... "

Can someone please explain how/why \(\displaystyle x_i = \left\langle \sum_{ j = 1 }^k x_j e_j , e_i \right\rangle\) ... where does this expression come from?

Peter

NOTE: There are some typos on this page concerning the dimension of the space, but they are pretty obvious and harmless, I think.

EDIT (on reflection): It may be worth expanding the term \(\displaystyle \left\langle \sum_{ j = 1 }^k x_j e_j , e_i \right\rangle\) ... but then how do we deal with the \(\displaystyle x_j e_j\) terms? We of course look to exploit \(\displaystyle \langle e_j, e_i \rangle = 0\) when \(\displaystyle j \neq i\), and \(\displaystyle \langle e_i, e_i \rangle = 1\) ...
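For reference, the expansion hinted at in the EDIT can be written out; this is a standard step (linearity of the inner product in its first argument, then orthonormality), sketched here rather than taken from Garling's text:

```latex
\left\langle \sum_{j=1}^{k} x_j e_j ,\; e_i \right\rangle
  = \sum_{j=1}^{k} x_j \langle e_j , e_i \rangle  % linearity in the first argument
  = x_i \langle e_i , e_i \rangle                 % every term with j \neq i vanishes
  = x_i                                           % since \langle e_i , e_i \rangle = 1
```

Since \(\displaystyle \sum_{ j = 1 }^k x_j e_j = 0\) by hypothesis, the left-hand side is \(\displaystyle \langle 0, e_i \rangle = 0\), giving \(\displaystyle x_i = 0\).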
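As a quick numerical sanity check (a sketch of my own, not from the book), one can run Gram-Schmidt on a few random vectors and verify that each coefficient \(x_i\) is recovered by taking the inner product with \(e_i\):

```python
import numpy as np

rng = np.random.default_rng(0)

def gram_schmidt(A):
    """Orthonormalize the columns of A (assumed linearly independent)."""
    Q = []
    for a in A.T:
        v = a - sum(np.dot(a, q) * q for q in Q)  # subtract projections onto earlier e_j
        Q.append(v / np.linalg.norm(v))           # normalize so <e_i, e_i> = 1
    return np.array(Q).T

A = rng.standard_normal((5, 3))
E = gram_schmidt(A)                 # columns e_1, e_2, e_3 are orthonormal

x = np.array([2.0, -1.0, 0.5])      # arbitrary coefficients
v = E @ x                           # v = sum_j x_j e_j

# <v, e_i> recovers x_i, by linearity and orthonormality
recovered = E.T @ v
print(np.allclose(recovered, x))    # True
```

If the sum \(\sum_j x_j e_j\) were the zero vector, the recovered coefficients would all be zero, which is exactly Garling's point about linear independence.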
