MHB Why Does Lemma 8.4 Equal This Expression?

Math Amateur
The "Operator Norm" for Linear Transformations ... Browder, Lemma 8.4, Section 8.1, Ch. 8 ... ...

I am reading Andrew Browder's book: "Mathematical Analysis: An Introduction" ... ...

I am currently reading Chapter 8: Differentiable Maps and am specifically focused on Section 8.1 Linear Algebra ...

I need some help in fully understanding Lemma 8.4 ...

Lemma 8.4 reads as follows:

View attachment 7452
View attachment 7453

In the proof of the above Lemma we read the following:

" ... ... $$\lvert Tv \rvert^2 = \left\lvert \sum_{j=1}^m \sum_{k=1}^n a_k^j v^k e_j \right\rvert^2 = \sum_{j=1}^m \left( \sum_{k=1}^n a_k^j v^k \right)^2$$ ... ... "

Can someone please demonstrate why/how

$$\lvert Tv \rvert^2 = \left\lvert \sum_{j=1}^m \sum_{k=1}^n a_k^j v^k e_j \right\rvert^2 = \sum_{j=1}^m \left( \sum_{k=1}^n a_k^j v^k \right)^2 \ ?$$

Help will be much appreciated ... ...

Peter
 
Re: The "Operator Norm" for Linear Transformations ... Browder, Lemma 8.4, Section 8.1, Ch. 8 ... ..

Hi Peter,

When orthonormal bases, say $\{e_{1},\ldots, e_{n}\}$ and $\{u_{1},\ldots, u_{m}\}$, are selected for $\mathbb{R}^{n}$ and $\mathbb{R}^{m}$, the linear operator $T$ can be represented by a matrix, which, in this case, the author denotes by $A$. Moreover, a vector $v$ in $\mathbb{R}^{n}$ can be expressed as a column vector whose component $v^{k}$ is the coefficient of $e_{k}$; i.e.,

$$v = \begin{bmatrix}v^{1}\\ \vdots \\ v^{n} \end{bmatrix}=v^{1}e_{1}+\cdots + v^{n}e_{n} = \sum_{k=1}^{n}v^{k}e_{k}.$$

Thus the function/linear operator $T:\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}$ can be computed as a matrix vector product:

$$Tv = A\begin{bmatrix}v^{1}\\ \vdots \\ v^{n} \end{bmatrix}\qquad (*).$$

Now here's the rub, and the part I imagine is a bit confusing: The author is using the same symbol $e_{k}$ to simultaneously mean the standard basis for the domain of $T$, $\mathbb{R}^{n}$ - indicated by writing $v=\sum_{k}v^{k}e_{k}$ - as well as for the codomain of $T$, $\mathbb{R}^{m}$ - indicated in the double-sum over $j.$ Now, eventually, you will adjust and get used to this as it's common and not considered bad notation. But for someone trying to work out all the details for the first time, it can be a sticking point. Be that as it may, I will proceed from here assuming that everything up to and including the starred equation made sense.

In the equation $(*)$, $v$ is being expressed in the domain basis $\{e_{1},\ldots, e_{n}\}.$ Once we work out the matrix vector product, however, the column vector we obtain is tacitly being written in the codomain basis, $\{e_{1},\ldots, e_{m}\}$:

$$ Tv=A\begin{bmatrix}v^{1}\\ \vdots \\ v^{n} \end{bmatrix}=\begin{bmatrix}\sum_{k=1}^{n}a^{1}_{k}v^{k}\\ \vdots \\ \sum_{k=1}^{n}a^{m}_{k}v^{k} \end{bmatrix}$$
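To see this height change concretely, here is a small numerical sketch; the matrix and vector entries below are arbitrary values chosen purely for illustration (they are not from Browder's text):

```python
import numpy as np

# Hypothetical small example: m = 2, n = 3, so A is 2x3 and v has height n = 3.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
v = np.array([1.0, -1.0, 2.0])

# The matrix-vector product A @ v produces a vector of height m = 2.
Tv = A @ v

# Component j of Tv is the sum over k of a^j_k * v^k, i.e. exactly
# the height-m column vector written in the display above.
by_hand = np.array([sum(A[j, k] * v[k] for k in range(3)) for j in range(2)])
print(Tv)
print(np.allclose(Tv, by_hand))
```

So the input column has height $n$ while the output column has height $m$, matching the discussion below.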

The transition is subtle, often unstated, and often overlooked. To stress what I am saying, the column vector

$$\begin{bmatrix}v^{1}\\ \vdots \\ v^{n} \end{bmatrix}$$

has height $n$ and the column vector

$$\begin{bmatrix}\sum_{k=1}^{n}a^{1}_{k}v^{k}\\ \vdots \\ \sum_{k=1}^{n}a^{m}_{k}v^{k} \end{bmatrix}$$

has height $m$. Now, if all that made sense, since the column vector of height $m$ is really a set of coefficients for the codomain basis $\{e_{1},\ldots, e_{m}\}$, we can write

$$Tv=\begin{bmatrix}\sum_{k=1}^{n}a^{1}_{k}v^{k}\\ \vdots \\ \sum_{k=1}^{n}a^{m}_{k}v^{k} \end{bmatrix}=\left( \sum_{k=1}^{n}a^{1}_{k}v^{k}\right)e_{1}+\cdots +\left(\sum_{k=1}^{n}a^{m}_{k}v^{k} \right)e_{m}=\sum_{j=1}^{m}\sum_{k=1}^{n}a^{j}_{k}v^{k}e_{j},$$

which is where the first equality you asked about comes from. The second equality says that the (square of the) length of a vector in Euclidean space, with respect to the standard orthonormal basis $\{e_{1},\ldots, e_{m}\}$, is given by the Pythagorean theorem (i.e., sum the squares of the coefficients of the basis vectors). Note: this is exactly what you would do if asked to compute the (square of the) distance from the origin to the point $(x,y)=xe_{1}+ye_{2}$ in the plane: $x^{2}+y^{2}.$
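Both equalities can also be sanity-checked numerically. The sketch below uses an arbitrary $3 \times 2$ matrix and vector (hypothetical values, for illustration only) and compares $\lvert Tv \rvert^2$ against $\sum_{j}\left(\sum_{k} a^j_k v^k\right)^2$:

```python
import numpy as np

# Hypothetical example with m = 3, n = 2.
A = np.array([[1.0, 2.0],
              [0.0, -1.0],
              [3.0, 1.0]])
v = np.array([2.0, 1.0])

Tv = A @ v                                     # Tv = sum_j (sum_k a^j_k v^k) e_j
lhs = np.dot(Tv, Tv)                           # |Tv|^2
rhs = sum((A[j] @ v) ** 2 for j in range(3))   # sum_j (sum_k a^j_k v^k)^2
print(lhs, rhs)
```

For any choice of $A$ and $v$, `lhs` and `rhs` agree, which is the content of the two equalities in the lemma's proof.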

Hope this helps.
 

GJA ... thanks so much for your help ...

Your post is a major help to me in understanding multivariable calculus/analysis ...

It is much appreciated...

Thanks again,

Peter
 

Hi GJA

I've now worked through your post ... and I now (thanks to you) understand the first equality ... but am stuck on the details of the second equality (despite your hint) ...

Can you please help further with the second equality ...?

Peter
 

Hi Peter,

The squared length of a vector $x$ written in terms of the standard basis $x=x^{1}e_{1}+\cdots + x^{m}e_{m}$ is given by the generalized Pythagorean theorem

$$|x|^{2}=\left(x^{1}\right)^{2}+\cdots +\left(x^{m}\right)^{2}=\sum_{j=1}^{m}(x^{j})^{2}.$$

The second equality is obtained by noting that, in your case, $x^{j}=\sum_{k=1}^{n}a^{j}_{k}v^{k}.$
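As a quick numerical sketch of the generalized Pythagorean theorem (the coefficients below are made up, purely for illustration):

```python
import math

# x = x^1 e_1 + x^2 e_2 + x^3 e_3 in R^3, standard orthonormal basis.
x = [3.0, 4.0, 12.0]                 # hypothetical coefficients x^1, x^2, x^3

norm_sq = sum(xj ** 2 for xj in x)   # |x|^2 = sum of squared coefficients
print(norm_sq)                       # 9 + 16 + 144 = 169
print(math.sqrt(norm_sq))            # |x| = 13
```

Substituting $x^{j}=\sum_{k=1}^{n}a^{j}_{k}v^{k}$ into this sum of squares gives exactly the right-hand side of the second equality.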
 

Oh! Indeed ...!

Thanks GJA ...

Peter
 