Norm induced by a matrix with eigenvalues bigger than 1

SUMMARY

The discussion focuses on defining a norm induced by a matrix \( A \) with eigenvalues greater than 1. It establishes that for any vector \( x \in \mathbb{R}^n \), there exists a constant \( c > 1 \) such that \( |||Ax||| \geq c |||x||| \). The participants explore various approaches to explicitly define this norm, suggesting forms like \( |||x|||_A = \lambda_i ||x||_1 \) and discussing the implications of using the smallest eigenvalue \( \lambda_i \). They conclude that the norm can be expressed as \( |||x||| = \|Sx\| \), where \( S \) is an invertible transformation, leading to the formulation \( |||Ax||| \geq a |||x||| \) for appropriate choices of \( S \).

PREREQUISITES
  • Understanding of matrix eigenvalues and eigenvectors
  • Familiarity with matrix norms and their properties
  • Knowledge of linear transformations and invertible matrices
  • Basic concepts of upper triangular matrices and diagonalization
NEXT STEPS
  • Research the properties of matrix norms in linear algebra
  • Study the implications of eigenvalues on matrix stability and transformations
  • Explore the concept of Jordan forms and their applications in defining norms
  • Learn about theorems related to upper triangular matrices and their significance in norm definitions
USEFUL FOR

Mathematicians, linear algebra students, and researchers interested in matrix theory, particularly those focusing on eigenvalue analysis and norm definitions in vector spaces.

Diffie Heltrix
Suppose we pick a matrix ##A\in M_n(\mathbb{R})## s.t. all its eigenvalues are strictly bigger than 1.
In the question here the user said it induces some norm (|||⋅|||) which "expands" vector in sense that exists constant c∈ℝ s.t. ∀x∈ℝ^n |||Ax||| ≥ |||x||| .

I still cannot understand why it's correct. How can one pick this norm explicitly? The comments suggested splitting into the diagonalizable case and the Jordan normal form, but in both cases I cannot see how to define this norm.
 
Doesn't it say there that the smallest of the eigenvalues is a candidate for that norm?
 
So you map every ##x=\mathrm{Id}\cdot x## to 1? That's not a norm.
 
Diffie Heltrix said:
in sense that exists constant c∈ℝ s.t. ∀x∈ℝ^n |||Ax||| ≥ |||x||| .
There's a c missing in your version:$$
\forall A\in E_n(\mathbb{R}),\exists c>1: \forall x\in\mathbb{R}^n, |||Ax|||\ge c|||x|||$$the original at our colleagues site looks better.

I don't think I wanted to "map every ##x=\mathbb I \cdot x## to 1"? What gave you that impression? (What is the Id in your post?)

And I don't think you can do much better than the smallest eigenvalue ##\lambda_i##. After all, if you pick the corresponding eigenvector ##y_i## as ##x##, then ##||| Ax ||| = \lambda_i |||x||| ##.
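A quick numerical illustration of that point (a numpy sketch; the 2×2 matrix is a made-up example with eigenvalues 2 and 3, not from the thread): since ##Ay_i = \lambda_i y_i##, *every* norm satisfies ##|||Ay_i||| = |\lambda_i|\,|||y_i|||##, so no norm can achieve ##|||Ax||| \ge c|||x|||## with ##c > |\lambda_i|##.

```python
import numpy as np

# Made-up example matrix with eigenvalues 2 and 3 (both > 1).
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

eigvals, eigvecs = np.linalg.eig(A)
i = np.argmin(np.abs(eigvals))
lam_min, y = eigvals[i], eigvecs[:, i]   # smallest eigenvalue and its eigenvector

# Since A @ y = lam_min * y, ANY norm gives ||A y|| = |lam_min| * ||y||,
# so the expansion factor on this vector is exactly |lam_min|.
print(np.linalg.norm(A @ y), abs(lam_min) * np.linalg.norm(y))  # both equal
```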
 
So how is the norm defined precisely? Maybe ##|||x|||_A=\lambda_i \|x\|_1##? How is ##\lambda_i## connected here?
 
Diffie Heltrix said:
So how is the norm defined precisely? Maybe ##|||x|||_A=\lambda_i \|x\|_1##? How is ##\lambda_i## connected here?
Can't read that. What is ##|||x|||=\|Ax\|_1##?

http://www.math.cuhk.edu.hk/course_builder/1415/math3230b/matrix%20norm.pdf doesn't take the smallest, but the biggest ##K##.
 
Again, given a vector ##x\in\mathbb{R}^n##, how can I define the norm associated with ##A## (denoted by ##|||\cdot|||_A##) s.t. ##|||Ax|||_A \ge c |||x|||_A## (where ##c>1##)? What is the connection to the minimal eigenvalue exactly?
 
To me it seems they are one and the same thing...
 
You want to prove the following statement: if ##A## is a matrix with eigenvalues ##\lambda_k##, and ##r=\min_k |\lambda_k|##, then for any ##a<r## you can find a norm ##||| \,\cdot\,|||## such that ##||| Ax|||\ge a|||x|||## for all vectors ##x##.

We will be looking for a norm of form ##|||x|||=\|Sx\|##, where ##\|x\|## is the standard norm in ##\mathbb R^n## or ##\mathbb C^n## and ##S## is an invertible transformation. Then the required estimate can be rewritten as $$\|SAx\|\ge a\|Sx\| \qquad \forall x, $$ or equivalently $$\|SAS^{-1} x\|\ge a\|x\| \qquad \forall x.$$
By the theorem about upper triangular representation (the Schur decomposition) there exists an orthonormal basis in which the matrix of ##A## is upper triangular, so we can fix this basis and assume without loss of generality that ##A## is upper triangular. We can write ##A=D+T##, where ##D## is a diagonal matrix and ##T## is strictly upper triangular, i.e. all its non-zero entries are strictly above the main diagonal. Note that the diagonal entries of ##D## are exactly the eigenvalues of ##A##.
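If it helps to see this step concretely, here is a small scipy sketch (the matrix is an arbitrary example of mine, not from the thread): the complex Schur form produces exactly such an orthonormal basis.

```python
import numpy as np
from scipy.linalg import schur

# Arbitrary example matrix.
A = np.array([[2.0, 5.0],
              [0.5, 3.0]])

# Complex Schur form: A = Q T Q*, with Q unitary and T upper triangular.
T, Q = schur(A, output='complex')

assert np.allclose(np.tril(T, -1), 0)       # T is upper triangular
assert np.allclose(A, Q @ T @ Q.conj().T)   # same operator, orthonormal basis change
# The diagonal of T carries the eigenvalues of A.
print(np.sort(np.diag(T).real))
```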

Now, let ##\epsilon>0## be a small number, and let our ##S## be a diagonal matrix with entries ##1, \epsilon^{-1}, \epsilon^{-2}, \ldots, \epsilon^{-(n-1)}## on the diagonal (exactly in this order). You can see that the matrix ##SAS^{-1}## is obtained from ##A## as follows: the main diagonal remains the same as the main diagonal of ##A##, the first diagonal above the main one is multiplied by ##\epsilon##, the second diagonal above is multiplied by ##\epsilon^2##, etc.
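The scaling trick is easy to check numerically (a numpy sketch; the upper triangular entries below are placeholders I made up). Since ##(SAS^{-1})_{ij} = \epsilon^{j-i} A_{ij}##, the ##k##-th superdiagonal picks up a factor ##\epsilon^k##:

```python
import numpy as np

n, eps = 4, 0.01
# Made-up upper triangular A = D + T.
A = np.triu(np.arange(1.0, n * n + 1).reshape(n, n))
A[np.diag_indices(n)] = [2.0, 3.0, 4.0, 5.0]    # eigenvalues on the diagonal

# S = diag(1, eps^-1, ..., eps^-(n-1))
S = np.diag(eps ** -np.arange(n, dtype=float))
B = S @ A @ np.linalg.inv(S)

# Conjugation leaves the diagonal alone and multiplies the k-th
# superdiagonal by eps^k.
print(np.allclose(np.diag(B), np.diag(A)))                  # True
print(np.allclose(np.diag(B, 1), eps * np.diag(A, 1)))      # True
print(np.allclose(np.diag(B, 2), eps**2 * np.diag(A, 2)))   # True
```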

So, ## SAS^{-1} = D+ T_\epsilon##, and by picking sufficiently small ##\epsilon## we can make the norm of ##T_\epsilon## as small as we want. For our purposes it is sufficient to get ##\|T_\epsilon\|\le r-a##; can you see how to proceed from here?
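To make the whole construction concrete, here is a numerical sanity check (the 2×2 example, the value of ##a##, and the choice of ##\epsilon## are my own illustration, not from the thread): with ##\|T_\epsilon\| \le r-a## one gets ##\|(D+T_\epsilon)x\| \ge \|Dx\| - \|T_\epsilon x\| \ge a\|x\|##.

```python
import numpy as np

# Illustrative upper triangular A with eigenvalues 2 and 3, so r = 2.
A = np.array([[2.0, 10.0],
              [0.0, 3.0]])
r, a = 2.0, 1.9                 # any a < r should work

# T here has the single entry 10, so ||T_eps|| = 10*eps;
# pick eps so that ||T_eps|| = r - a.
eps = (r - a) / 10.0
S = np.diag([1.0, 1.0 / eps])

# The norm is |||x||| = ||S x||; check |||A x||| >= a |||x||| on random vectors.
rng = np.random.default_rng(0)
X = rng.standard_normal((2, 1000))
ratios = np.linalg.norm(S @ A @ X, axis=0) / np.linalg.norm(S @ X, axis=0)
print(ratios.min() >= a)        # expected: True
```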
 
