Norm induced by a matrix with eigenvalues bigger than 1


Discussion Overview

The discussion revolves around the properties of a matrix \( M \) with eigenvalues greater than 1 and the norms it induces. Participants explore how to define such a norm explicitly and its implications for vector expansion, focusing on theoretical aspects and mathematical reasoning.

Discussion Character

  • Exploratory
  • Technical explanation
  • Mathematical reasoning
  • Debate/contested

Main Points Raised

  • One participant questions how to explicitly define a norm induced by a matrix \( M \) with eigenvalues greater than 1, referencing the need for a constant \( c \) that satisfies \( |||Ax||| \ge |||x||| \).
  • Another participant suggests that the smallest eigenvalue might be a candidate for defining the norm.
  • A different participant challenges the idea that mapping every vector to 1 constitutes a valid norm.
  • Clarifications are made regarding the relationship between the smallest eigenvalue and the norm, with one participant asserting that using the smallest eigenvalue \( \lambda_i \) could provide a valid norm definition.
  • Some participants propose defining the norm as \( |||x|||_A = \lambda_i ||x||_1 \) and question how \( \lambda_i \) connects to the norm.
  • There is a discussion about the form of the norm associated with matrix \( A \) and the conditions under which \( |||Ax|||_A \ge c |||x|||_A \) holds, where \( c > 1 \).
  • One participant outlines a proof structure involving an invertible transformation \( S \) and the properties of upper triangular matrices, suggesting a method to establish the norm's validity.

Areas of Agreement / Disagreement

Participants express differing views on how to define the norm and its relationship to the eigenvalues of the matrix. There is no consensus on a single definition or approach, and multiple competing ideas are presented throughout the discussion.

Contextual Notes

Some participants reference specific mathematical properties and theorems related to matrix norms and eigenvalues, but the discussion remains open-ended regarding the precise definition and implications of the norm in question.

Diffie Heltrix
Suppose we pick a matrix M∈M_n(ℝ) s.t. all its eigenvalues are strictly bigger than 1.
In the question here the user said it induces some norm (|||⋅|||) which "expands" vectors, in the sense that there exists a constant c∈ℝ s.t. ∀x∈ℝ^n, |||Ax||| ≥ |||x|||.

I still cannot understand why this is correct. How can one pick such a norm explicitly? The comments suggested splitting into the diagonal case and the Jordan normal form case, but in neither case can I see how to define this norm.
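For what it's worth, the subtlety can be seen numerically. A minimal NumPy sketch with a made-up matrix (not one from the question): a non-normal matrix whose eigenvalues all exceed 1 can still shrink vectors in the ordinary Euclidean norm, which is why a specially constructed norm is needed at all.

```python
import numpy as np

# A made-up example (not from the question): both eigenvalues equal 2 > 1,
# but A is not normal, so the plain Euclidean norm can still SHRINK a vector.
A = np.array([[2.0, -10.0],
              [0.0,   2.0]])
assert np.allclose(np.linalg.eigvals(A), [2.0, 2.0])

x = np.array([1.0, 0.25])
print(np.linalg.norm(A @ x))   # ~0.707
print(np.linalg.norm(x))       # ~1.031, so ||Ax|| < ||x|| in the 2-norm
```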
 
Doesn't it say there that the smallest of the eigenvalues is a candidate for that norm?
 
So you map every $x=Id\cdot x$ to 1? That's not a norm.
 
Diffie Heltrix said:
in the sense that there exists a constant c∈ℝ s.t. ∀x∈ℝ^n, |||Ax||| ≥ |||x|||.
There's a c missing in your version:$$
\forall A\in E_n(\mathbb{R}),\exists c>1: \forall x\in\mathbb{R}^n, |||Ax|||\ge c|||x|||$$The original at our colleague's site looks better.

I don't think I wanted to "map every ##x=\mathbb I \cdot x## to 1"? What gave you that impression? (What is the Id in your post?)

And I don't think you can do much better than the smallest eigenvalue ##\lambda_i##. After all, if you pick the corresponding eigenvector ##y_i## as ##x##, then ##||| Ax ||| = \lambda_i |||x||| ##.
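A quick sanity check of this point, with an assumed diagonal matrix (not one from the thread): if ##y## is an eigenvector with eigenvalue ##\lambda##, then ##Ay=\lambda y## forces ##|||Ay|||=\lambda|||y|||## in *any* norm, so no constant bigger than the smallest eigenvalue can work.

```python
import numpy as np

# Assumed example: eigenvalues 3 and 1.5, both > 1.
A = np.array([[3.0, 0.0],
              [0.0, 1.5]])
y = np.array([0.0, 1.0])          # eigenvector for the smallest eigenvalue 1.5

for p in (1, 2, np.inf):          # A y = 1.5 * y, so the ratio is 1.5 in any norm
    ratio = np.linalg.norm(A @ y, p) / np.linalg.norm(y, p)
    assert np.isclose(ratio, 1.5)
```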
 
So how is the norm defined precisely? Maybe $|||x|||_A=\lambda_i ||x||_1$? How is $\lambda_i$ connected here?
 
Diffie Heltrix said:
So how is the norm defined precisely? Maybe $|||x|||_A=\lambda_i ||x||_1$? How is $\lambda_i$ connected here?
Can't read that. What is $|||x|||=||Ax||_1$?

http://www.math.cuhk.edu.hk/course_builder/1415/math3230b/matrix%20norm.pdf doesn't take the smallest, but the biggest ##K##.
 
Again, given a vector ##x\in\mathbb{R}^n##, how can I define the norm associated with ##A## (denoted by ##|||\cdot|||_A##) s.t. ##|||Ax|||_A \ge c |||x|||_A## (where ##c>1##)? What is the connection to the minimal eigenvalue exactly?
 
To me it seems they are one and the same thing...
 
You want to prove the following statement: if ##A## is a matrix with eigenvalues ##\lambda_k##, and ##r=\min_k |\lambda_k|##, then for any ##a<r## you can find a norm ##||| \,\cdot\,|||## such that ##||| Ax|||\ge a|||x|||## for all vectors ##x##.

We will be looking for a norm of form ##|||x|||=\|Sx\|##, where ##\|x\|## is the standard norm in ##\mathbb R^n## or ##\mathbb C^n## and ##S## is an invertible transformation. Then the required estimate can be rewritten as $$\|SAx\|\ge a\|Sx\| \qquad \forall x, $$ or equivalently $$\|SAS^{-1} x\|\ge a\|x\| \qquad \forall x.$$
By the theorem about upper triangular representation there exists an orthonormal basis in which the matrix of ##A## is upper triangular, so we can fix this basis and assume without loss of generality that ##A## is upper triangular. We can write ##A=D+T##, where ##D## is a diagonal matrix and ##T## is strictly upper triangular, i.e. all its non-zero entries lie strictly above the main diagonal. Note that the diagonal entries are exactly the eigenvalues of ##A##.

Now, let ##\epsilon>0## be a small number, and let our ##S## be the diagonal matrix with entries ##1, \epsilon^{-1}, \epsilon^{-2}, \ldots, \epsilon^{-(n-1)}## on the diagonal (exactly in this order). You can see that the matrix ##SAS^{-1}## is obtained from ##A## as follows: the main diagonal remains the same as the main diagonal of ##A##, the first diagonal above the main one is multiplied by ##\epsilon##, the second diagonal above the main one is multiplied by ##\epsilon^2##, etc.

So, ## SAS^{-1} = D+ T_\epsilon##, and by picking sufficiently small ##\epsilon## we can make the norm of ##T_\epsilon## as small as we want. For our purposes it is sufficient to get ##\|T_\epsilon\|\le r-a##, can you see how to proceed from here?
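The construction above can be sketched numerically. A toy 2×2 example with assumed values (##A## is already upper triangular here, so the orthonormal-basis step is skipped): with ##r=2##, ##\epsilon=0.01## gives ##\|T_\epsilon\|=0.1##, so ##|||Ax|||\ge(r-\|T_\epsilon\|)|||x|||=1.9\,|||x|||##, comfortably above ##a=1.5##.

```python
import numpy as np

# Numerical sketch of the construction above (assumed 2x2 example).
A = np.array([[2.0, -10.0],
              [0.0,   2.0]])      # D = 2*I, T has the single entry -10
eps = 0.01                        # small parameter from the argument
S = np.diag([1.0, 1.0 / eps])     # S = diag(1, eps^{-1})

# SAS^{-1} = D + T_eps: the off-diagonal entry shrinks from -10 to -10*eps
B = S @ A @ np.linalg.inv(S)      # [[2, -0.1], [0, 2]]

def triple_norm(x):
    """The candidate norm |||x||| = ||S x|| from the argument above."""
    return np.linalg.norm(S @ x)

# Here r = 2 and ||T_eps|| = 0.1, so |||Ax||| >= 1.9 |||x||| >= a |||x|||.
a = 1.5
rng = np.random.default_rng(0)
for _ in range(1000):
    x = rng.standard_normal(2)
    assert triple_norm(A @ x) >= a * triple_norm(x)
```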
 
