Maximizing Norms of Matrices: A Scientific Approach

  • #1
Pacopag

Homework Statement


I am trying to show that
(1) [tex]||A||_1 = \max_j \sum_i |a_{ij}|[/tex]
(2) [tex]||A||_2 = \sqrt{\lambda_{\max}(A^* A)}[/tex] where [tex]A^*[/tex] is the conjugate transpose and [tex]\lambda_{\max}[/tex] is the largest eigenvalue
(3) [tex]||A||_\infty = \max_i \sum_j |a_{ij}|[/tex]


Homework Equations


In general,
[tex]||A||_p = \max_{x\neq 0} {{||Ax||_p}\over{||x||_p}}[/tex]
where for a vector we have
[tex]||x||_p = \left( \sum_i |x_i|^p \right)^{1\over p}[/tex]


The Attempt at a Solution


First of all, write
[tex](Ax)_i = \sum_j a_{ij} x_j[/tex].
(1) Here we have
[tex]||A||_1 = \max_{x\neq 0} {{\sum_i \left|\sum_j a_{ij} x_j \right|}\over{\sum_j |x_j|}}[/tex].
I'm pretty sure we could switch the summation order in the numerator too. But I don't know how to proceed from here.
(2) Here we have
[tex]||A||_2^2 = \max_{x\neq 0} {{x^* A^* A x}\over{x^* x}}[/tex] using [tex]||x||_2^2 = x^* x[/tex]
Now, I read somewhere that the x that maximizes this is an eigenvector of [tex]A^* A[/tex]. So we get
[tex]||A||_2^2 = \lambda_{max}[/tex], the largest eigenvalue of [tex]A^* A[/tex].
My only problem with this is: How do we know that an eigenvector will maximize it?
(3) Here we have
[tex]||A||_\infty = \max_{x\neq 0} {{\max_i \left| \sum_j a_{ij} x_j \right|}\over{\max_j |x_j|}}[/tex]
using
[tex]||x||_\infty = \max_j |x_j|[/tex].
Again, I don't know how to proceed from here. To arrive at the answer (which I like to pretend that I don't know), it seems I would need to choose an x whose components are all the same; then the [itex]x_j[/itex]'s would cancel and I would get the right answer.
But I don't see why such an x is guaranteed to maximize [itex]||Ax||_\infty[/itex].
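[Editor's note: before working through the proofs below, the three claimed formulas can at least be checked numerically. This is a sanity check, not a proof; the random matrix and the use of NumPy's `numpy.linalg.norm` are my additions, not part of the original post.]

```python
# Numerical sanity check of the three formulas to be proved, using an
# arbitrary random complex matrix A. numpy.linalg.norm computes induced
# matrix norms for ord = 1, 2, and inf.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))

# (1) induced 1-norm = maximum absolute column sum
one_norm = np.abs(A).sum(axis=0).max()
assert np.isclose(one_norm, np.linalg.norm(A, 1))

# (2) induced 2-norm = sqrt of the largest eigenvalue of A* A
two_norm = np.sqrt(np.linalg.eigvalsh(A.conj().T @ A).max())
assert np.isclose(two_norm, np.linalg.norm(A, 2))

# (3) induced inf-norm = maximum absolute row sum
inf_norm = np.abs(A).sum(axis=1).max()
assert np.isclose(inf_norm, np.linalg.norm(A, np.inf))
```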
 
  • #2
For (1), note that

[tex]\|Ax\|_1 = \sum_i \left| \sum_j a_{ij} x_j \right| \leq \sum_i \sum_j |a_{ij}| |x_j| = \sum_j |x_j| \sum_i |a_{ij}| \leq \left( \max_j \sum_i |a_{ij}| \right) \sum_j |x_j|,[/tex]

where the middle step just swaps the order of summation. Use this to conclude that [itex]\|Ax\|_1 \leq \max_j \sum_i |a_{ij}|[/itex] whenever [itex]\|x\|_1 = 1[/itex]. Can you find unit vectors to help guarantee that equality is achieved?

A similar technique can be applied to (3).

As for (2), note that A*A is Hermitian, so we can unitarily diagonalize it. Let B={v_1, ..., v_n} be an orthonormal basis for C^n consisting of eigenvectors of A*A. Writing [itex]x=k_1 v_1+\cdots+k_n v_n[/itex], we see that (with respect to the basis B)

[tex]x^*A^*Ax = (k_1^*, \ldots, k_n^*) \left(\begin{array}{cccc} \lambda_1 \\ & \lambda_2 \\ & & \ddots \\ & & & \lambda_n \end{array}\right) \left(\begin{array}{c} k_1 \\ \vdots \\ k_n \end{array}\right) = \sum_j \lambda_j |k_j|^2.[/tex]

Can you take it from here?
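[Editor's note: the expansion above can be illustrated numerically. The sketch below is my addition (not part of the original reply): it expands a vector x in an orthonormal eigenbasis of A*A and checks that [itex]x^* A^* A x = \sum_j \lambda_j |k_j|^2[/itex], and that the Rayleigh quotient never exceeds [itex]\lambda_{max}[/itex].]

```python
# Illustration (not a proof): expand x in an orthonormal eigenbasis of the
# Hermitian matrix H = A* A and compare x* H x with sum_j lambda_j |k_j|^2.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
H = A.conj().T @ A                  # Hermitian, hence unitarily diagonalizable
lam, V = np.linalg.eigh(H)          # columns of V: orthonormal eigenvectors

x = rng.standard_normal(3) + 1j * rng.standard_normal(3)
k = V.conj().T @ x                  # coordinates of x in the eigenbasis

lhs = (x.conj() @ H @ x).real
rhs = (lam * np.abs(k) ** 2).sum()
assert np.isclose(lhs, rhs)

# The Rayleigh quotient is a weighted average of the lambda_j, so it is
# bounded by lambda_max and attains it when x is a top eigenvector.
assert lhs / (np.abs(x) ** 2).sum() <= lam.max() + 1e-12
```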
 
  • #3
Yes! That is a tremendous help. Thank you very much.
Is the answer to your question "Can you find unit vectors to help guarantee that equality is achieved?" a vector
[tex]x=\left(\begin{array}{c} 0 \\ 0 \\ \vdots \\ 0 \\ 1 \\ 0 \\ \vdots \\ 0 \end{array}\right)[/tex]
where the 1 is in the row j that attains [itex]\max_j \sum_i |a_{ij}|[/itex]?
 
  • #4
Yup, that'll get you that [itex]\sum_i |a_{ij}| = \|Ae_j\|_1 \leq \|A\|_1[/itex] for each j.
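[Editor's note: a small numeric illustration of this last step, added by the editor. With x the standard basis vector [itex]e_j[/itex], [itex]\|Ae_j\|_1[/itex] is exactly the j-th absolute column sum, so choosing the maximizing column attains the bound.]

```python
# With x = e_j, ||A e_j||_1 equals the j-th absolute column sum, so the
# upper bound from post #2 is attained at the maximizing column.
import numpy as np

A = np.array([[1.0, -2.0],
              [3.0,  4.0]])
col_sums = np.abs(A).sum(axis=0)    # absolute column sums: [4., 6.]
j = col_sums.argmax()               # column with the largest sum
e_j = np.zeros(2)
e_j[j] = 1.0

assert np.isclose(np.abs(A @ e_j).sum(), col_sums[j])
assert np.isclose(col_sums[j], np.linalg.norm(A, 1))
```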
 

FAQ: Maximizing Norms of Matrices: A Scientific Approach

1. What is the norm of a matrix?

The norm of a matrix is a measure of its size or magnitude. There are several matrix norms; the Frobenius norm, for example, is the square root of the sum of the squared magnitudes of the matrix entries.

2. How is the norm of a matrix calculated?

How a norm is calculated depends on which norm is meant. The Frobenius norm of an n×m matrix A is [tex]||A||_F = \sqrt{\sum_{i=1}^{n}\sum_{j=1}^{m} |a_{ij}|^2}[/tex]. Induced norms such as [tex]||A||_1[/tex], [tex]||A||_2[/tex], and [tex]||A||_\infty[/tex] are instead defined by maximizing [tex]||Ax||_p / ||x||_p[/tex] over nonzero vectors x.

3. What is the purpose of calculating the norm of a matrix?

The norm of a matrix is used to measure its size or magnitude. It can be used to compare the sizes of different matrices, to bound how much a matrix can stretch a vector, and to quantify errors in numerical computations.

4. What are the different types of matrix norms?

There are several types of matrix norms, including the induced L1, L2, and L∞ norms and the Frobenius norm. The induced L1 norm is the maximum absolute column sum, and the induced L∞ norm is the maximum absolute row sum. The induced L2 norm is the largest singular value of the matrix, i.e. the square root of the largest eigenvalue of A*A. The Frobenius norm is the square root of the sum of the squared magnitudes of the entries; despite a superficial resemblance, it is not the same as the induced L2 norm.
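[Editor's note: a short sketch, added by the editor, showing that the induced L2 norm and the Frobenius norm generally differ. The example matrix is arbitrary.]

```python
# For A = diag(1, 2): the induced 2-norm is the largest singular value (2),
# while the Frobenius norm is sqrt(1^2 + 2^2) = sqrt(5).
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 2.0]])
print(np.linalg.norm(A, 1))      # 2.0, max absolute column sum
print(np.linalg.norm(A, 2))      # 2.0, largest singular value
print(np.linalg.norm(A, 'fro'))  # sqrt(5) ~ 2.236, entrywise norm
```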

5. How is the norm of a matrix used in real-world applications?

The norm of a matrix is used in various real-world applications, such as in data analysis, machine learning, and image processing. It can be used to identify patterns in data, classify data points, and calculate distances between data points. It is also used in optimization problems, such as in minimizing errors in a regression model.
