Operator norm: Remarks by Browder after Lemma 8.4

  • Context: MHB
  • Thread starter: Math Amateur
  • Tags: Norm, Operator
SUMMARY

The discussion centers on Andrew Browder's "Mathematical Analysis: An Introduction," specifically Chapter 8, Section 8.1, focusing on the operator norm as discussed in Lemma 8.4. Participants seek to rigorously demonstrate that the supremum and infimum in equations (8.2) and (8.3) are indeed maximum and minimum values, respectively. The conversation highlights the application of continuity and compactness theorems to establish the attainment of these bounds, alongside an algebraic approach involving singular values and eigenvalues of the matrix involved.

PREREQUISITES
  • Understanding of operator norms in functional analysis
  • Familiarity with continuity and compactness theorems in real analysis
  • Knowledge of singular value decomposition and eigenvalues
  • Basic concepts of quadratic forms and their properties
NEXT STEPS
  • Study the continuity theorem for functions on closed and bounded sets
  • Learn about singular value decomposition and its applications in linear algebra
  • Explore the properties of eigenvalues of the matrix product $\mathbf{A}^* \mathbf{A}$
  • Investigate the relationship between the Frobenius norm and the Schatten norms
USEFUL FOR

Students and professionals in mathematics, particularly those studying functional analysis, linear algebra, and optimization techniques in mathematical analysis.

Math Amateur (Gold Member, MHB)
I am reading Andrew Browder's book: "Mathematical Analysis: An Introduction" ...

I am currently reading Chapter 8: Differentiable Maps, and am specifically focused on Section 8.1: Linear Algebra ...

I need some help in fully understanding some remarks by Browder after Lemma 8.4 pertaining to the operator norm. The relevant text, including Lemma 8.4, reads as follows:

View attachment 9373
View attachment 9374
Near the end of the above text we read the following:

" ... ... We leave it to the reader to show that the sup and inf in (8.2) and (8.3) are actually max and min, i.e. that they are actually attained ... ... "
Could someone please demonstrate rigorously that the sup and inf in (8.2) and (8.3) are actually max and min, i.e. that they are actually attained?

Help will be much appreciated ...

Peter
 

Attachments

  • Browder - 1 - Lemma 8.4 ... PART 1 .. .png (39.8 KB)
  • Browder - 2 - Lemma 8.4 ... PART 2 ... .png (12.3 KB)
Peter said:
Could someone please demonstrate rigorously that the sup and inf in (8.2) and (8.3) are actually max and min, i.e. that they are actually attained?
I think that you are going to have to quote a theorem from earlier in the Analysis course in order to prove these results.

The function given by $f(\mathbf{v}) = |T\mathbf{v}|$ is a continuous function from $\Bbb{R}^n$ to $\Bbb{R}$. The set $\{\mathbf{v}\in \Bbb{R}^n:|\mathbf{v}|\leqslant1\}$ is a closed, bounded subset of $\Bbb{R}^n$. There is a theorem that a continuous function on a closed, bounded set is bounded and attains its bounds. That explains why the sup in (8.2) is attained.

As for (8.3), you can check that the inf in (8.3) is equal to the sup in (8.2). So if the sup in (8.2) is attained at some vector $\mathbf{v}$, then the inf in (8.3) is attained at that same vector.
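This is not a replacement for the compactness argument, but here is a minimal numerical sketch in Python/NumPy (assuming, as above, that (8.2) is the sup of $|T\mathbf{v}|$ over the closed unit ball) illustrating that the sup is actually attained, namely at the top right singular vector of the matrix of $T$:

```python
# Numerical sketch only, not a proof: for a concrete matrix A, the sup of |Av|
# over the closed unit ball is never exceeded by sampled unit vectors, and it
# is attained at the right singular vector belonging to the largest singular value.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))          # matrix of some linear map T : R^3 -> R^4

op_norm = np.linalg.norm(A, 2)           # operator norm = largest singular value

# Crude search over random unit vectors: approaches the sup from below
samples = rng.standard_normal((100_000, 3))
samples /= np.linalg.norm(samples, axis=1, keepdims=True)
sup_estimate = np.max(np.linalg.norm(samples @ A.T, axis=1))

# The sup is attained at v1, the right singular vector for sigma_1
_, s, Vt = np.linalg.svd(A)
v1 = Vt[0]
attained_value = np.linalg.norm(A @ v1)

print(op_norm, sup_estimate, attained_value)   # attained_value equals op_norm (up to rounding)
```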
 
The above approach using compactness is probably the right answer for an analysis text.

- - - - -
However, there is an alternative algebraic approach to (8.2) that I hope is of considerable interest to the OP. First focus on $\big \Vert \mathbf v \big \Vert_2 = 1$ and consider
$0 \leq \sigma_n \leq \sigma_{n-1} \leq \cdots \leq \sigma_2 \leq \sigma_1$
then for $j\in\{1,2,\dots, n\}$ we have the pointwise bound

$\sigma_j^2 \leq \sigma_1^2$
which is preserved under rescaling by real non-negative numbers $w_j$ (in particular for convex combinations, where $\sum_{j=1}^n w_j = 1$)

so we have
$w_j \sigma_j^2 \leq w_j \sigma_1^2$
and summing the bound over $j$ gives
$\sum_{j=1}^n w_j \sigma_j^2 \leq \sum_{j=1}^n w_j \sigma_1^2 =\sigma_1^2 \cdot \sum_{j=1}^n w_j = \sigma_1^2$ (now to accommodate the 'more general' case of $\big \Vert \mathbf v \big \Vert_2 \leq 1$, insert a slack parameter $\sigma_{n+1} = 0 $ and $w_{n+1}\geq 0 $, where again $\sum_{j}w_j = 1$ and re-run the above argument)

- - - - -
What are these sigmas? They are the singular values of $\mathbf A$, and recall that one way of constructing them is to look at the eigenvalues of $\big(\mathbf A^* \mathbf A\big)$ (and then take square roots). Why should the eigenvalues of $\mathbf A^* \mathbf A$ come up? Because $\big \Vert \mathbf A\mathbf x \big \Vert_2^2 = \mathbf x^* \mathbf A^*\mathbf A \mathbf x$, and hopefully one recalls how to examine quadratic form problems...
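As a numerical illustration of that paragraph (a sketch only, not part of Browder's text): the eigenvalues of $\mathbf A^* \mathbf A$ are the squared singular values of $\mathbf A$, and the unit eigenvector belonging to the largest eigenvalue attains the bound $\sigma_1^2$ on $\big\Vert \mathbf A \mathbf x \big\Vert_2^2$:

```python
# Sketch: eigenvalues of A* A are the squared singular values of A, and the
# unit eigenvector for the largest eigenvalue attains the maximum of
# |A x|^2 = x* A* A x over the unit sphere.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3))

gram = A.T @ A                            # A* A (real case, so * is just transpose)
eigvals, eigvecs = np.linalg.eigh(gram)   # ascending eigenvalues, orthonormal eigenvectors

sigma = np.sqrt(eigvals[::-1])            # singular values sigma_1 >= ... >= sigma_n >= 0
x1 = eigvecs[:, -1]                       # unit eigenvector for the largest eigenvalue

print(sigma)                              # matches np.linalg.svd(A, compute_uv=False)
print(np.linalg.norm(A @ x1)**2, sigma[0]**2)   # the bound sigma_1^2 is attained at x1
```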

There are a few more details that need to be filled in here, and I'd like to leave them as an exercise for the reader, Peter.

Incidentally, the trace norm here is better known as the Frobenius norm (or Schatten 2-norm, in which case the operator norm is the Schatten $\infty$-norm).
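A quick numerical check of that naming remark (again a sketch, not from Browder's text): the Frobenius norm is the $\ell_2$ norm of the singular values, while the operator norm is their maximum:

```python
# Frobenius norm == l2 norm of singular values (Schatten 2);
# operator norm == largest singular value (Schatten infinity).
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
s = np.linalg.svd(A, compute_uv=False)    # singular values

print(np.linalg.norm(A, 'fro'), np.sqrt(np.sum(s**2)))   # Frobenius == Schatten-2
print(np.linalg.norm(A, 2), np.max(s))                   # operator norm == Schatten-inf
```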
 
