Gram-Schmidt Orthonormalization: Remarks by Garling, Section 11.4

  • Context: MHB
  • Thread starter: Math Amateur
  • Tags: Section
SUMMARY

The discussion revolves around understanding the Gram-Schmidt Orthonormalization process as presented in D. J. H. Garling's "A Course in Mathematical Analysis: Volume II." Specifically, participants seek clarification on the expression $$x_i = \left\langle \sum_{ j = 1 }^k x_j e_j , e_i \right\rangle$$ following Theorem 11.4.1. The linearity and scalar multiplication properties of the inner product are highlighted, confirming that $$x_j$$ represents scalar components while $$e_j$$ denotes basis vectors. Additionally, participants note typographical errors in the text regarding the dimensions of spaces, which may cause confusion.

PREREQUISITES
  • Understanding of inner product spaces and their properties
  • Familiarity with the Gram-Schmidt process for orthonormalization
  • Knowledge of vector representation and scalar multiplication
  • Basic comprehension of metric and normed spaces as discussed in Garling's work
NEXT STEPS
  • Study the properties of inner products in vector spaces
  • Explore the Gram-Schmidt process in detail, focusing on its applications
  • Review common notation conventions in linear algebra to avoid confusion
  • Investigate the implications of dimension in metric and normed spaces
USEFUL FOR

Mathematics students, educators, and researchers interested in advanced topics in linear algebra, particularly those focusing on orthonormalization and inner product spaces.

Math Amateur
I am reading D. J. H. Garling's book: "A Course in Mathematical Analysis: Volume II: Metric and Topological Spaces, Functions of a Vector Variable" ... ...

I am focused on Chapter 11: Metric Spaces and Normed Spaces ... ...

I need some help in order to understand the meaning of, and the reason for, some remarks made by Garling after Theorem 11.4.1, Gram-Schmidt Orthonormalization ... ...

Theorem 11.4.1 and its proof, followed by remarks by Garling, read as follows:

View attachment 8968
View attachment 8969

In his remarks just after the proof of Theorem 11.4.1, Garling writes the following:

" ... ... Note that if $$( e_1, \ ... \ ... , e_k)$$ is an orthonormal sequence and $$\sum_{ j = 1 }^k x_j e_j = 0 \text{ then } x_i = \left\langle \sum_{ j = 1 }^k x_j e_j , e_i \right\rangle = 0 $$ for $$1 \le i \le k$$; ... ... "Can someone please explain how/why $$x_i = \left\langle \sum_{ j = 1 }^k x_j e_j , e_i \right\rangle$$ ...

... where did this expression come from ... ?

Peter

NOTE: There are some typos on this page concerning the dimension of the space ... but they are pretty obvious and harmless, I think ...

EDIT ... on reflection ... it may be worth expanding the term $$\left\langle \sum_{ j = 1 }^k x_j e_j , e_i \right\rangle$$ ... but then how do we deal with the $$x_j e_j$$ terms ... ? We would of course look to exploit $$\langle e_j, e_i \rangle = 0$$ when $$j \neq i$$ ... and $$\langle e_i, e_i \rangle = 1$$ ... ...
 

Attachments

  • Garling - 1 - Theorem 11.4.1 ... G_S Orthonormalisation plus Remarks ... PART 1.png
  • Garling - 2 - Theorem 11.4.1 ... G_S Orthonormalisation plus Remarks ... PART 2.png
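
Since the theorem and its proof are only in the attached images, here is a minimal sketch of the Gram-Schmidt process that Theorem 11.4.1 concerns, assuming the standard inner product on $\mathbb{R}^n$; the function name and the example vectors are illustrative choices, not Garling's.

```python
import numpy as np

def gram_schmidt(vectors):
    # Turn a linearly independent sequence (x_1, ..., x_k), given as rows,
    # into an orthonormal sequence (e_1, ..., e_k).
    basis = []
    for x in vectors:
        # Subtract the component of x along each e_j already constructed ...
        for e in basis:
            x = x - np.dot(x, e) * e
        # ... then normalise what is left.
        basis.append(x / np.linalg.norm(x))
    return np.array(basis)

x = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
e = gram_schmidt(x)
# <e_j, e_i> should be 0 for j != i and 1 for j = i:
print(np.round(e @ e.T, 10))
```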
Peter said:
Can someone please explain how/why $$x_i = \left\langle \sum_{ j = 1 }^k x_j e_j , e_i \right\rangle$$ ...

... where did this expression come from ... ?
The linearity and scalar multiplication properties of the inner product show that $ \bigl\langle \sum_{ j = 1 }^k x_j e_j , e_i \bigr\rangle = \sum_{ j = 1 }^k\langle x_j e_j , e_i \rangle = \sum_{ j = 1 }^k x_j\langle e_j , e_i \rangle.$ The orthonormal sequence has the property that $ \langle e_j , e_i \rangle $ is zero if $j\ne i$, and is $1$ when $j=i$. So that last sum reduces to the single term $x_i$.
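
To see this reduction numerically, here is a small check of the identity (my own construction; it uses NumPy's QR factorisation merely as a convenient source of an orthonormal sequence):

```python
import numpy as np

rng = np.random.default_rng(0)
k, n = 4, 6

# Columns of e form an orthonormal sequence (e_1, ..., e_k) in R^n.
e, _ = np.linalg.qr(rng.standard_normal((n, k)))

coeffs = rng.standard_normal(k)   # the scalars x_1, ..., x_k
v = e @ coeffs                    # v = sum_j x_j e_j

# <sum_j x_j e_j, e_i> collapses to the single term x_i for each i:
recovered = np.array([np.dot(v, e[:, i]) for i in range(k)])
print(np.allclose(recovered, coeffs))   # True
```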

Peter said:
NOTE: There are some typos on this page concerning the dimension of the space ... but they are pretty obvious and harmless I think ...
Garling starts the section by announcing that his spaces will all have dimension $n$. But then he immediately states a theorem in which the space has dimension $d$. He then sticks with $d$ consistently as the dimension of the space. But at one point he writes "Thus $(e_1,\ldots,e_n)$ is a basis for $W_j$." In that sentence, the $n$ should be $j$.

I did not spot any other mistakes.
 
Opalg said:
The linearity and scalar multiplication properties of the inner product show that $ \bigl\langle \sum_{ j = 1 }^k x_j e_j , e_i \bigr\rangle = \sum_{ j = 1 }^k\langle x_j e_j , e_i \rangle = \sum_{ j = 1 }^k x_j\langle e_j , e_i \rangle.$ ...
Thanks for the help, Opalg ...

But ... just a clarification ...

You write:

" ... ... The linearity and scalar multiplication properties of the inner product show that $ \bigl\langle \sum_{ j = 1 }^k x_j e_j , e_i \bigr\rangle = \sum_{ j = 1 }^k\langle x_j e_j , e_i \rangle = \sum_{ j = 1 }^k x_j\langle e_j , e_i \rangle.$ ... ..."

However, you seem to be treating $$x_j$$ as a scalar ... but isn't $$x_j$$ a vector ... ?

By the way, I agree with you on the typos ... I find that they can be slightly disconcerting ...

Thanks again ...

Peter
 
Peter said:
you seem to be treating $$x_j$$ as a scalar ... but isn't $$x_j$$ a vector ...
No. In the expression $x_je_j$, $x_j$ is a scalar and $e_j$ is a vector. Garling typically writes $x = \sum_{ j = 1 }^k x_j e_j$ to write a vector $x$ as a sum of components $x_je_j$, where $x_j$ is the scalar component (or coordinate) of $x$ in the direction of the basis vector $e_j$.
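
A concrete instance, with numbers chosen for illustration, in $\mathbb{R}^2$ with the standard orthonormal basis $(e_1, e_2)$:

$$x = 3e_1 + 4e_2 \quad\Longrightarrow\quad \langle x, e_1 \rangle = 3\langle e_1, e_1 \rangle + 4\langle e_2, e_1 \rangle = 3 \cdot 1 + 4 \cdot 0 = 3 = x_1.$$

Here $3$ and $4$ play the role of the scalars $x_j$, and $e_1, e_2$ that of the basis vectors.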
 
Opalg said:
No. In the expression $x_je_j$, $x_j$ is a scalar and $e_j$ is a vector. Garling typically writes $x = \sum_{ j = 1 }^k x_j e_j$ to write a vector $x$ as a sum of components $x_je_j$, where $x_j$ is the scalar component (or coordinate) of $x$ in the direction of the basis vector $e_j$.
Oh Lord ... how are we supposed to tell what's a scalar and what's a vector when, just above in the proof, $$x_1, \ ... \ , x_d$$ are basis vectors ... ! ... how confusing ...

So ... the $$x_j$$ in $$\left\langle \sum_{ j = 1 }^k x_j e_j , e_i \right\rangle$$ have nothing to do with the $$ x_j$$ in the basis $$(x_1, \ ... \ ... , x_d)$$ ... ?

... ... why not use $$\lambda_j$$ instead of $$x_j$$ in $$\left\langle \sum_{ j = 1 }^k x_j e_j , e_i \right\rangle$$ ... ... ?

Just another clarification ... I can see that $$x_i = \left\langle \sum_{ j = 1 }^k x_j e_j , e_i \right\rangle$$ ... but why is the expression equal to $$0$$ ... ?

Peter
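
As to the final question: the expression equals $0$ because Garling's hypothesis is precisely that $$\sum_{ j = 1 }^k x_j e_j = 0$$, so the inner product is being taken with the zero vector:

$$x_i = \left\langle \sum_{ j = 1 }^k x_j e_j , e_i \right\rangle = \langle 0 , e_i \rangle = 0 \quad \text{for } 1 \le i \le k,$$

which is exactly the statement that an orthonormal sequence is linearly independent.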
 
Peter said:
Oh Lord ... how are we supposed to tell what's a scalar and what's a vector when just above in the proof $$x_1 \ ... \ , x_d$$ are basis vectors ... ! ... how confusing ...
I agree, it's shockingly bad notation. Next time I see Ben Garling I'll have a word with him.

One thing you can be sure of is that in a vector space, the only multiplication that can occur is scalar multiplication. There is no vector multiplication. In a product of the form $xe$ the scalar will (almost?) invariably come before the vector. So if it hasn't already been made clear, you can expect that $x$ must be the scalar and $e$ the vector.
 
Opalg said:
I agree, it's shockingly bad notation. ...
Thanks for all your help, Opalg ...

It has helped me no end ... !

Wonderful that you know Ben Garling ... his three volumes on mathematical analysis are comprehensive and inspiring ...

Thanks again ...

Peter
 
