Norm induced by a matrix with eigenvalues bigger than 1

In summary, the norm associated with the matrix ##A## is defined as ##|||x|||_A=\|Sx\|##, where ##S## is an invertible transformation.
  • #1
Diffie Heltrix
Suppose we pick a matrix ##A\in M_n(\mathbb{R})## such that all its eigenvalues are strictly bigger than 1.
In the question here, the user said it induces some norm ##|||\cdot|||## which "expands" vectors, in the sense that there exists a constant ##c\in\mathbb{R}## such that ##\forall x\in\mathbb{R}^n,\ |||Ax||| \ge |||x|||##.

I still cannot understand why it's correct. How can one pick this norm explicitly? The comments suggested splitting into the diagonalizable case and the Jordan normal form case, but in both cases I cannot see how to define this norm.
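To see why this is not obvious, note that the standard Euclidean norm does not work: a matrix whose eigenvalues are all bigger than 1 can still shrink some vectors in the ##2##-norm. Here is a minimal numerical sketch (assuming NumPy; the particular matrix is just an illustration):

```python
# Minimal illustration (assuming NumPy): a matrix with both eigenvalues > 1
# can still shrink a vector in the ordinary Euclidean norm when it is far
# from normal, so an "expanding" norm has to be constructed specially.
import numpy as np

A = np.array([[1.1, 100.0],
              [0.0,   1.1]])          # upper triangular, eigenvalues 1.1 and 1.1

x = np.array([1.0, -0.01])            # nearly aligned with the "bad" direction

print(np.linalg.eigvals(A))           # [1.1, 1.1]  -- both strictly bigger than 1
print(np.linalg.norm(x))              # about 1.00005
print(np.linalg.norm(A @ x))          # about 0.10  -- A shrinks x in the 2-norm
```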
 
  • #2
Doesn't it say there that the smallest of the eigenvalues is a candidate for that norm?
 
  • #3
So you map every ##x=Id\cdot x## to 1? That's not a norm.
 
  • #4
Diffie Heltrix said:
in the sense that there exists a constant ##c\in\mathbb{R}## such that ##\forall x\in\mathbb{R}^n,\ |||Ax||| \ge |||x|||##.
There's a ##c## missing in your version: $$
\forall A\in E_n(\mathbb{R}),\ \exists c>1: \forall x\in\mathbb{R}^n,\ |||Ax|||\ge c|||x|||$$ The original at our colleague's site looks better.

I don't think I wanted to "map every ##x=\mathbb I \cdot x## to 1". What gave you that impression? (What is the ##Id## in your post?)

And I don't think you can do much better than the smallest eigenvalue ##\lambda_i##. After all, if you pick the corresponding eigenvector ##y_i## as ##x##, then ##||| Ax ||| = \lambda_i |||x|||##.
 
  • #5
So how is the norm defined precisely? Maybe ##|||x|||_A=\lambda_i \|x\|_1##? How is ##\lambda_i## connected here?
 
  • #6
Diffie Heltrix said:
So how is the norm defined precisely? Maybe ##|||x|||_A=\lambda_i \|x\|_1##? How is ##\lambda_i## connected here?
Can't read that. What is ##|||x|||=\|Ax\|_1##?

http://www.math.cuhk.edu.hk/course_builder/1415/math3230b/matrix%20norm.pdf doesn't take the smallest, but the biggest ##K##.
 
  • #7
Again, given a vector ##x\in\mathbb{R}^n##, how can I define the norm associated with ##A## (denoted by ##|||\cdot|||_A##) such that ##|||Ax|||_A \ge c |||x|||_A## (where ##c>1##)? What is the connection to the minimal eigenvalue exactly?
 
  • #8
To me it seems they are one and the same thing...
 
  • #9
You want to prove the following statement: if ##A## is a matrix with eigenvalues ##\lambda_k##, and ##r=\min_k |\lambda_k|##, then for any ##a<r## you can find a norm ##||| \,\cdot\,|||## such that ##||| Ax|||\ge a|||x|||## for all vectors ##x##.

We will be looking for a norm of the form ##|||x|||=\|Sx\|##, where ##\|x\|## is the standard norm in ##\mathbb R^n## or ##\mathbb C^n## and ##S## is an invertible transformation. Then the required estimate can be rewritten as $$\|SAx\|\ge a\|Sx\| \qquad \forall x, $$ or equivalently $$\|SAS^{-1} x\|\ge a\|x\| \qquad \forall x.$$
By the theorem about upper triangular representation (the Schur decomposition) there exists an orthonormal basis such that the matrix of ##A## in this basis is upper triangular, so we can fix this basis and assume without loss of generality that ##A## is upper triangular. We can write ##A=D+T##, where ##D## is a diagonal matrix and ##T## is strictly upper triangular, i.e. all its non-zero entries are strictly above the main diagonal. Note that the diagonal entries of ##D## are exactly the eigenvalues of ##A##.

Now, let ##\epsilon>0## be a small number, and let our ##S## be a diagonal matrix with entries ##1, \epsilon^{-1}, \epsilon^{-2}, \ldots, \epsilon^{-(n-1)}## on the diagonal (exactly in this order). You can see that the matrix ##SAS^{-1}## is obtained from ##A## as follows: since ##(SAS^{-1})_{ij}=\epsilon^{j-i}a_{ij}##, the main diagonal remains the same as the main diagonal of ##A##, the first diagonal above the main one is multiplied by ##\epsilon##, the second diagonal above the main one is multiplied by ##\epsilon^2##, etc.

So, ## SAS^{-1} = D+ T_\epsilon##, and by picking sufficiently small ##\epsilon## we can make the norm of ##T_\epsilon## as small as we want. For our purposes it is sufficient to get ##\|T_\epsilon\|\le r-a##; can you see how to proceed from here?
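To make the recipe concrete, here is a minimal numerical sketch of this construction, assuming NumPy and SciPy (the helper name `expanding_norm_factory` is only illustrative): Schur-triangularise ##A##, rescale with ##S=\operatorname{diag}(1,\epsilon^{-1},\ldots,\epsilon^{-(n-1)})##, and take ##|||x|||=\|SU^{*}x\|_2##.

```python
# A minimal sketch of the construction above, assuming NumPy and SciPy.
# The helper name `expanding_norm_factory` is only illustrative.
import numpy as np
from scipy.linalg import schur

def expanding_norm_factory(A, eps=1e-3):
    """Build M such that |||x||| := ||M x||_2 is an 'expanding' norm for A."""
    T, U = schur(A, output='complex')            # A = U T U^*, T upper triangular
    n = A.shape[0]
    # S = diag(1, eps^{-1}, ..., eps^{-(n-1)}) scales the k-th superdiagonal by eps^k
    S = np.diag(eps ** (-np.arange(n, dtype=float)))
    return S @ U.conj().T                        # |||x||| = ||S U^* x||_2

# Example: both eigenvalues of A have modulus 2, so r = 2; pick a = 1.5 < r.
A = np.array([[0.0, -2.0],
              [2.0,  3.0]])
a = 1.5
M = expanding_norm_factory(A)
tnorm = lambda x: np.linalg.norm(M @ x)

rng = np.random.default_rng(0)
for _ in range(5):
    x = rng.standard_normal(2)
    assert tnorm(A @ x) >= a * tnorm(x)          # the expanding property holds
```

Making ##\epsilon## smaller only strengthens the estimate, at the cost of a norm that weighs the coordinates very unevenly.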
 

FAQ: Norm induced by a matrix with eigenvalues bigger than 1

1. What is the norm induced by a matrix with eigenvalues bigger than 1?

The norm induced by a matrix with eigenvalues bigger than 1 is a measure of the largest possible stretching or scaling factor of a vector when multiplied by the matrix. It is also known as the spectral norm or operator norm.

2. How is the norm induced by a matrix with eigenvalues bigger than 1 calculated?

The norm is calculated as the square root of the largest eigenvalue of ##A^TA## (the matrix multiplied by its transpose). Equivalently, it is the largest singular value of the matrix.
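A minimal sketch (assuming NumPy) checking that these descriptions agree on a small example:

```python
# A small sketch, assuming NumPy, of the calculation described above.
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

via_svd  = np.linalg.svd(A, compute_uv=False).max()      # largest singular value
via_eig  = np.sqrt(np.linalg.eigvalsh(A.T @ A).max())    # sqrt of largest eigenvalue of A^T A
built_in = np.linalg.norm(A, 2)                          # NumPy's spectral (operator 2-) norm

print(via_svd, via_eig, built_in)                        # all three values agree
```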

3. What does a norm induced by a matrix with eigenvalues bigger than 1 tell us about the matrix?

A norm induced by a matrix with eigenvalues bigger than 1 provides information about the size and scaling properties of the matrix. It also gives insight into the stability and convergence properties of algorithms that use the matrix.

4. Why is it important to consider the eigenvalues of a matrix when calculating its norm?

The eigenvalues of a matrix describe how it scales vectors along its eigenvector directions, and for symmetric (more generally, normal) matrices their magnitudes coincide with the singular values, so they directly give the norm. For a general matrix the maximum stretching factor is the largest singular value rather than the largest eigenvalue, but the eigenvalues still control the scaling behaviour under repeated multiplication by the matrix.

5. How is the norm induced by a matrix with eigenvalues bigger than 1 used in practical applications?

The norm induced by a matrix with eigenvalues bigger than 1 is commonly used in applications such as control theory, optimization, and numerical analysis. It can also be used to analyze the stability and convergence properties of algorithms that use the matrix. Additionally, it is used in machine learning and data analysis to measure the similarity between vectors and matrices.
