Norm induced by a matrix with eigenvalues bigger than 1

Diffie Heltrix
Messages
4
Reaction score
0
Suppose we pick a matrix ##M\in M_n(\mathbb{R})## such that all its eigenvalues are strictly bigger than 1.
In the question here the user said it induces some norm ##|||\cdot|||## which "expands" vectors, in the sense that there exists a constant ##c\in\mathbb{R}## such that ##\forall x\in\mathbb{R}^n,\ |||Ax||| \ge |||x|||##.

I still cannot understand why this is correct. How can one pick this norm explicitly? The comments suggested splitting into the diagonal case and the Jordan normal form case, but in both cases I cannot see how to define this norm.
 
Doesn't it say there that the smallest of the eigenvalues is a candidate for that norm?
 
So you map every ##x=Id\cdot x## to 1? That's not a norm.
 
Diffie Heltrix said:
in the sense that there exists a constant ##c\in\mathbb{R}## such that ##\forall x\in\mathbb{R}^n,\ |||Ax||| \ge |||x|||##.
There's a ##c## missing in your version: $$
\forall A\in E_n(\mathbb{R}),\ \exists c>1: \forall x\in\mathbb{R}^n,\ |||Ax|||\ge c|||x|||$$ The original at our colleague's site looks better.

I don't think I wanted to "map every ##x=\mathbb I \cdot x## to 1". What gave you that impression? (What is the Id in your post?)

And I don't think you can do much better than the smallest eigenvalue ##\lambda_i##. After all, if you pick the corresponding eigenvector ##y_i## as ##x##, then ##|||Ax||| = \lambda_i |||x|||##.
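That computation is just homogeneity of the norm: if ##Ay_i=\lambda_i y_i##, then ##|||Ay_i|||=|||\lambda_i y_i|||=|\lambda_i|\,|||y_i|||## in any norm whatsoever. A quick numerical illustration (the matrix below is our own toy example, not from the thread):

```python
import numpy as np

# Toy upper triangular matrix with eigenvalues 2 and 1.5 (our own example).
A = np.array([[2.0, 1.0],
              [0.0, 1.5]])

# Eigenpair for the smallest eigenvalue.
lam, V = np.linalg.eig(A)
i = np.argmin(np.abs(lam))
v = V[:, i]

# Since A v = lambda v, we get ||A v|| = |lambda| ||v|| in any norm.
print(np.linalg.norm(A @ v), abs(lam[i]) * np.linalg.norm(v))
```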
 
So how is the norm defined precisely? Maybe ##|||x|||_A=\lambda_i \|x\|_1##? How is ##\lambda_i## connected here?
 
Diffie Heltrix said:
So how is the norm defined precisely? Maybe ##|||x|||_A=\lambda_i \|x\|_1##? How is ##\lambda_i## connected here?
Can't read that. What is ##|||x|||=\|Ax\|_1##?

http://www.math.cuhk.edu.hk/course_builder/1415/math3230b/matrix%20norm.pdf doesn't take the smallest, but the biggest ##K##
 
Again, given a vector ##x\in\mathbb{R}^n##, how can I define the norm associated with ##A## (denoted by ##|||\cdot|||_A##) s.t. ##|||Ax|||_A \ge c |||x|||_A## (where ##c>1##)? What is the connection to the minimal eigenvalue exactly?
 
To me it seems they are one and the same thing...
 
You want to prove the following statement: if ##A## is a matrix with eigenvalues ##\lambda_k##, and ##r=\min_k |\lambda_k|##, then for any ##a<r## you can find a norm ##||| \,\cdot\,|||## such that ##||| Ax|||\ge a|||x|||## for all vectors ##x##.

We will be looking for a norm of the form ##|||x|||=\|Sx\|##, where ##\|x\|## is the standard norm in ##\mathbb R^n## or ##\mathbb C^n## and ##S## is an invertible transformation. Then the required estimate can be rewritten as $$\|SAx\|\ge a\|Sx\| \qquad \forall x, $$ or equivalently $$\|SAS^{-1} x\|\ge a\|x\| \qquad \forall x.$$
By the theorem about upper triangular representation (Schur's theorem) there exists an orthonormal basis such that the matrix of ##A## in this basis is upper triangular, so we can fix this basis and assume without loss of generality that ##A## is upper triangular. We can write ##A=D+T##, where ##D## is a diagonal matrix and ##T## is strictly upper triangular, i.e. all its non-zero entries are strictly above the main diagonal. Note that the diagonal entries are exactly the eigenvalues of ##A##.

Now, let ##\epsilon>0## be a small number, and let our ##S## be a diagonal matrix with entries ##1, \epsilon^{-1}, \epsilon^{-2}, \ldots, \epsilon^{-(n-1)}## on the diagonal (exactly in this order). You can see that the matrix ##SAS^{-1}## is obtained from ##A## as follows: the main diagonal remains the same as the main diagonal of ##A##, the first diagonal above the main one is multiplied by ##\epsilon##, the second diagonal above it is multiplied by ##\epsilon^2##, etc.
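To see the scaling concretely, here is a small numerical sketch (the 3×3 matrix is our own example, not from the thread): conjugating an upper triangular ##A## by ##S=\operatorname{diag}(1,\epsilon^{-1},\epsilon^{-2})## multiplies each entry ##A_{ij}## above the diagonal by ##\epsilon^{\,j-i}## and leaves the diagonal unchanged.

```python
import numpy as np

# Our own 3x3 upper triangular example; eps is the small parameter.
eps = 0.01
A = np.array([[2.0, 5.0, 7.0],
              [0.0, 3.0, 4.0],
              [0.0, 0.0, 1.5]])
S = np.diag([1.0, eps**-1, eps**-2])
B = S @ A @ np.linalg.inv(S)   # B = S A S^{-1}

# Diagonal of B equals diagonal of A; the k-th superdiagonal is scaled by eps^k.
print(np.round(B, 6))
```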

So, ## SAS^{-1} = D+ T_\epsilon##, and by picking sufficiently small ##\epsilon## we can make the norm of ##T_\epsilon## as small as we want. For our purposes it is sufficient to get ##\|T_\epsilon\|\le r-a##; can you see how to proceed from here?
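As a numerical sanity check of the whole construction (the matrix, the value of ##a##, and all variable names below are our own toy choices; SciPy's Schur decomposition is used for the triangularization step): pick ##a<r##, choose ##\epsilon## so that ##\|T_\epsilon\|\le r-a##, set ##|||x|||=\|Sx\|##, and test the estimate on random vectors.

```python
import numpy as np
from scipy.linalg import schur

# Toy example (ours): eigenvalues 2 and 3, so r = 2; take a = 1.9 < r.
A = np.array([[2.0, 10.0],
              [0.0, 3.0]])
a = 1.9

# Schur: Q unitary, U = Q^* A Q upper triangular, eigenvalues on the diagonal.
U, Q = schur(A, output='complex')
D = np.diag(np.diag(U))
T = U - D                                   # strictly upper triangular part
r = np.abs(np.diag(U)).min()

# For this 2x2 case T_eps = eps * T, so eps * ||T|| <= r - a suffices.
eps = min(1.0, (r - a) / max(np.linalg.norm(T, 2), 1e-12))
S = np.diag(eps ** -np.arange(A.shape[0]).astype(float)) @ Q.conj().T

# The candidate norm is |||x||| = ||S x||; check |||Ax||| >= a |||x|||.
rng = np.random.default_rng(0)
for _ in range(200):
    x = rng.standard_normal(A.shape[0])
    assert np.linalg.norm(S @ A @ x) >= a * np.linalg.norm(S @ x) - 1e-9
print("estimate holds on all samples")
```

The design choice mirrors the post: conjugating by the diagonal part of ##S## shrinks the off-diagonal term ##T## until ##D+T_\epsilon## has smallest singular value at least ##a##.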
 