Why Does the Operator Norm Expression in Lemma 8.4 Hold?


Discussion Overview

The discussion revolves around understanding Lemma 8.4 from Andrew Browder's book "Mathematical Analysis: An Introduction," specifically focusing on the expression for the operator norm of linear transformations. Participants are exploring the mathematical details and implications of the lemma, which involves concepts from linear algebra and Euclidean space.

Discussion Character

  • Technical explanation
  • Conceptual clarification

Main Points Raised

  • Peter seeks clarification on why the expression for the operator norm in Lemma 8.4 holds, specifically the equality involving the squared norm of the transformation.
  • GJA explains that when using orthonormal bases for the domain and codomain, the linear operator can be represented by a matrix, and the vector can be expressed as a column vector in the domain basis.
  • GJA notes that the notation used by Browder can be confusing as the same symbol represents different bases in different contexts, which may lead to misunderstandings.
  • GJA emphasizes that the computation of the squared length of a vector follows the Pythagorean theorem, which is a standard approach in Euclidean space.
  • Peter expresses gratitude for the clarification regarding the first equality but indicates difficulty with the second equality and requests further assistance.
  • GJA resolves the second equality by identifying the coefficients $x^{j}=\sum_{k=1}^{n}a^{j}_{k}v^{k}$ and applying the generalized Pythagorean theorem, which Peter confirms settles the question.

Areas of Agreement / Disagreement

Participants agree that the notation in Lemma 8.4 needs unpacking. The first equality is settled early in the thread; the second equality, initially a sticking point, is resolved once GJA spells out the generalized Pythagorean theorem and Peter confirms he understands.

Contextual Notes

The discussion highlights potential confusion arising from the notation used in the lemma, particularly the simultaneous use of the same symbols for different bases. There is also an acknowledgment of the subtleties involved in transitioning between domain and codomain representations.

Math Amateur
The "Operator Norm" for Linear Transformations ... Browder, Lemma 8.4, Section 8.1, Ch. 8 ... ...

I am reading Andrew Browder's book: "Mathematical Analysis: An Introduction" ... ...

I am currently reading Chapter 8: Differentiable Maps and am specifically focused on Section 8.1 Linear Algebra ...

I need some help in fully understanding Lemma 8.4 ...

Lemma 8.4 reads as follows:

[Image: statement of Lemma 8.4, attachment 7452]
[Image: proof of Lemma 8.4, attachment 7453]

In the proof of the above Lemma we read the following:

" ... ... $$\lvert Tv \rvert^2 = \left\lvert \sum_{j=1}^m \sum_{k=1}^n a_k^j v^k e_j \right\rvert^2 = \sum_{j=1}^m \left( \sum_{k=1}^n a_k^j v^k \right)^2$$ ... ... "

Can someone please demonstrate why/how

$$\lvert Tv \rvert^2 = \left\lvert \sum_{j=1}^m \sum_{k=1}^n a_k^j v^k e_j \right\rvert^2 = \sum_{j=1}^m \left( \sum_{k=1}^n a_k^j v^k \right)^2 \ ?$$

Help will be much appreciated ... ...

Peter
 
Re: The "Operator Norm" for Linear Transformations ... Browder, Lemma 8.4, Section 8.1, Ch. 8 ... ..

Hi Peter,

When orthonormal bases, say $\{e_{1},\ldots, e_{n}\}$ and $\{u_{1},\ldots, u_{m}\}$, are selected for $\mathbb{R}^{n}$ and $\mathbb{R}^{m}$, the linear operator $T$ can be represented by a matrix, which, in this case, the author denotes by $A$. Moreover, a vector $v$ in $\mathbb{R}^{n}$ can be expressed as a column vector whose component $v^{k}$ is the coefficient of $e_{k}$; i.e.,

$$v = \begin{bmatrix}v^{1}\\ \vdots \\ v^{n} \end{bmatrix}=v^{1}e_{1}+\cdots + v^{n}e_{n} = \sum_{k=1}^{n}v^{k}e_{k}.$$

Thus the function/linear operator $T:\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}$ can be computed as a matrix vector product:

$$Tv = A\begin{bmatrix}v^{1}\\ \vdots \\ v^{n} \end{bmatrix}\qquad (*).$$
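To make $(*)$ concrete, here is a minimal NumPy sketch; the sizes and entries below are arbitrary illustrations, not anything from the lemma:

```python
import numpy as np

# Illustrative sizes only; the lemma is stated for general m and n.
A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])   # the 3x2 matrix representing T in the chosen bases
v = np.array([1.0, -2.0])    # the coefficients v^k of v in the domain basis

Tv = A @ v                   # the matrix-vector product (*)
print(Tv)                    # [-3. -5. -7.]
```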

Now here's the rub, and the part I imagine is a bit confusing: The author is using the same symbol $e_{k}$ to simultaneously mean the standard basis for the domain of $T$, $\mathbb{R}^{n}$ - indicated by writing $v=\sum_{k}v^{k}e_{k}$ - as well as for the codomain of $T$, $\mathbb{R}^{m}$ - indicated in the double-sum over $j.$ Now, eventually, you will adjust and get used to this as it's common and not considered bad notation. But for someone trying to work out all the details for the first time, it can be a sticking point. Be that as it may, I will proceed from here assuming that everything up to and including the starred equation made sense.

In the equation $(*)$, $v$ is being expressed in the domain basis $\{e_{1},\ldots, e_{n}\}.$ Once we work out the matrix vector product, however, the column vector we obtain is tacitly being written in the codomain basis, $\{e_{1},\ldots, e_{m}\}$:

$$ Tv=A\begin{bmatrix}v^{1}\\ \vdots \\ v^{n} \end{bmatrix}=\begin{bmatrix}\sum_{k=1}^{n}a^{1}_{k}v^{k}\\ \vdots \\ \sum_{k=1}^{n}a^{m}_{k}v^{k} \end{bmatrix}$$

The transition is subtle, often unstated, and often overlooked. To stress what I am saying, the column vector

$$\begin{bmatrix}v^{1}\\ \vdots \\ v^{n} \end{bmatrix}$$

has height $n$ and the column vector

$$\begin{bmatrix}\sum_{k=1}^{n}a^{1}_{k}v^{k}\\ \vdots \\ \sum_{k=1}^{n}a^{m}_{k}v^{k} \end{bmatrix}$$

has height $m$. Now, if all that made sense, since the column vector of height $m$ is really a set of coefficients for the codomain basis $\{e_{1},\ldots, e_{m}\}$, we can write

$$Tv=\begin{bmatrix}\sum_{k=1}^{n}a^{1}_{k}v^{k}\\ \vdots \\ \sum_{k=1}^{n}a^{m}_{k}v^{k} \end{bmatrix}=\left( \sum_{k=1}^{n}a^{1}_{k}v^{k}\right)e_{1}+\cdots +\left(\sum_{k=1}^{n}a^{m}_{k}v^{k} \right)e_{m}=\sum_{j=1}^{m}\sum_{k=1}^{n}a^{j}_{k}v^{k}e_{j},$$

which is where the first equality you asked about comes from. The second equality says that the squared length of a vector in Euclidean space, written with respect to the standard orthonormal basis $\{e_{1},\ldots, e_{m}\}$, is given by the Pythagorean theorem (i.e., sum the squares of the coefficients of the basis vectors). Note: this is exactly what you would do if asked to compute the squared distance from the origin to the point $(x,y)=xe_{1}+ye_{2}$ in the plane: $x^{2}+y^{2}.$
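To tie the two equalities together numerically, here is a quick sanity check with arbitrary random data (a NumPy sketch, nothing specific to Browder):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 2                      # arbitrary sizes for illustration
A = rng.standard_normal((m, n))  # the matrix of T
v = rng.standard_normal(n)       # the coefficients of v in the domain basis

lhs = np.linalg.norm(A @ v) ** 2              # |Tv|^2
rhs = sum((A[j] @ v) ** 2 for j in range(m))  # sum_j (sum_k a^j_k v^k)^2
assert np.isclose(lhs, rhs)
```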

Hope this helps.
 
Re: The "Operator Norm" for Linear Transformations ... Browder, Lemma 8.4, Section 8.1, Ch. 8 ... ..

GJA said:
[full explanation quoted above]
GJA ... thanks so much for your help ...

Your post is a major help to me in understanding multivariable calculus/analysis ...

It is much appreciated...

Thanks again,

Peter
 
Re: The "Operator Norm" for Linear Transformations ... Browder, Lemma 8.4, Section 8.1, Ch. 8 ... ..

Peter said:
[thanks quoted above]
Hi GJA

I've now worked through your post ... and I now (thanks to you) understand the first equality ... but am stuck on the details of the second equality (despite your hint) ...

Can you please help further with the second equality ...?

Peter
 
Re: The "Operator Norm" for Linear Transformations ... Browder, Lemma 8.4, Section 8.1, Ch. 8 ... ..

Hi Peter,

The squared length of a vector $x$ written in terms of the standard basis $x=x^{1}e_{1}+\cdots + x^{m}e_{m}$ is given by the generalized Pythagorean theorem

$$|x|^{2}=\left(x^{1}\right)^{2}+\cdots +\left(x^{m}\right)^{2}=\sum_{j=1}^{m}(x^{j})^{2}.$$

The second equality is obtained by noting that, in your case, $x^{j}=\sum_{k=1}^{n}a^{j}_{k}v^{k}.$
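The same point can be checked numerically; in this NumPy sketch the sizes and entries are arbitrary illustrations:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 4, 3                      # arbitrary illustrative sizes
A = rng.standard_normal((m, n))  # the matrix of T
v = rng.standard_normal(n)       # the coefficients of v

x = A @ v                        # so x^j = sum_k a^j_k v^k
# generalized Pythagorean theorem: |x|^2 = sum_j (x^j)^2
assert np.isclose(np.linalg.norm(x) ** 2, np.sum(x ** 2))
```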
 
Re: The "Operator Norm" for Linear Transformations ... Browder, Lemma 8.4, Section 8.1, Ch. 8 ... ..

GJA said:
[second explanation quoted above]
Oh! Indeed ...!

Thanks GJA ...

Peter
 
