Estimating eigenvalue of perturbed matrix

  • Context: Graduate
  • Thread starter: julian
  • Tags: Eigenvalue, Matrix
Discussion Overview

The discussion revolves around estimating the eigenvalue of a perturbed matrix defined as M_{ij} = A_{ij} + s B_{ij}/2, where A_{ij} is symmetric and the smallest eigenvalue of A is given. Participants explore the implications of perturbations on eigenvalues, continuity of eigenvalues with respect to matrix entries, and the application of mathematical theorems in this context.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • One participant suggests that since the minimum eigenvalue of A is less than or equal to -(1/2), and the deviation of M from A can be made small by choosing s_0 appropriately, the minimum eigenvalue of M can be guaranteed to be less than or equal to -(1/4).
  • Another participant questions whether higher-order polynomials have analytic solutions and if the roots depend continuously on the matrix elements.
  • A different participant proposes using the implicit function theorem to establish the continuity of eigenvalues with respect to matrix elements, expressing uncertainty about the theorem's applicability.
  • One participant notes that the minimum eigenvalue is continuous over the compact space defined by |A_{ij}| <= 1, raising a question about uniform convergence of continuous functions in this context.
  • Another participant expresses confusion about the goal of the discussion, suggesting that the continuity of eigenvalues with respect to matrix entries does not necessarily require the implicit function theorem.
  • It is mentioned that for the continuity argument to hold, the discussion must be framed over complex numbers or an algebraically closed field.

Areas of Agreement / Disagreement

Participants express differing views on the necessity of the implicit function theorem and the conditions under which eigenvalues vary continuously with matrix entries. The discussion remains unresolved regarding the best approach to establish the continuity of eigenvalues and the implications of the perturbation.

Contextual Notes

There are limitations regarding the assumptions about the continuity of eigenvalues, the dependence on the specific form of the matrices, and the scope of the implicit function theorem's applicability. Some participants also note the need for clarity on the mathematical framework being used.

julian (Science Advisor, Gold Member):
Say M_{ij} = A_{ij} + s B_{ij}/2, where the matrices are 3 by 3 and A_{ij} is symmetric, s \in [0, s^*], and the smallest eigenvalue of A is \lambda \leq -(1/2). Given that |M_{ij} - A_{ij}| \leq C_{s^*} s/2 and |A_{ij}| \leq 1, plus that the cubic equation determining the eigenvalues can be solved by an explicit formula, how do you show that there is some s_0 \in [0, s^*] such that M_{ij} has an eigenvalue \leq -(1/4)?
 
Actually it's quite easy, isn't it? The min eigenvalue of A is <= -(1/2). The deviation of M from A can be made arbitrarily small by choice of s_0, and the min eigenvalue is a continuous function of the entries of the matrix, hence by choosing s_0 small enough we can guarantee the min eigenvalue is <=, say, -(1/4).
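The continuity argument above can be made quantitative for symmetric matrices via Weyl's inequality, |lam_min(M) - lam_min(A)| <= ||M - A||_2, which is a standard tool not named in the thread. A numerical sketch, with hypothetical example matrices A and B chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: A symmetric with |A_ij| <= 1 and smallest
# eigenvalue <= -1/2 (Gershgorin: first row gives an eigenvalue
# in [-1.3, -0.7]); B is an arbitrary bounded perturbation.
A = np.array([[-1.0, 0.2, 0.1],
              [ 0.2, 0.3, 0.0],
              [ 0.1, 0.0, 0.5]])
B = rng.uniform(-1.0, 1.0, size=(3, 3))
B = (B + B.T) / 2          # symmetrize so M stays symmetric (real eigenvalues)

lam_A = np.linalg.eigvalsh(A).min()
assert lam_A <= -0.5

for s in [0.5, 0.1, 0.01]:
    M = A + s * B / 2
    lam_M = np.linalg.eigvalsh(M).min()
    # Weyl's inequality: |lam_min(M) - lam_min(A)| <= ||s B / 2||_2
    bound = np.linalg.norm(s * B / 2, 2)
    assert abs(lam_M - lam_A) <= bound + 1e-12
    print(s, lam_M, bound)
```

As s shrinks, the bound ||s B / 2||_2 shrinks linearly, so once it drops below 1/4 the minimum eigenvalue of M is forced below -(1/4).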
 
In our case, we know for 3 by 3 matrices that the eigenvalues are continuous functions of the matrix entries because we have an explicit formula for the roots of a cubic polynomial. Remind me: do higher-order polynomials have analytic solutions? Could we still say the roots depend continuously on the matrix elements?
 
Is this where we would use the implicit function theorem or something to establish continuity of the eigenvalues on the matrix elements? I'm a physicist, not completely familiar with this stuff.
 
Plus a part of the original problem was to show s_0 can be chosen in a way independent of A_{ij}. I forgot to mention that; it's what makes the problem a bit more difficult.
 
So the min eigenvalue is a continuous function over the space given by |A_{ij}| <= 1. Because this space is compact, does this mean that the continuous functions l_s(A_{ij}) converge uniformly to l_0(A_{ij}) in the parameter s? This sounds familiar to me.
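One way to see the uniformity asked about here: for symmetric matrices, Weyl's inequality gives |l_s(A) - l_0(A)| <= ||s B / 2||_2, a bound depending only on s and B, not on A, so the convergence is uniform over the whole set |A_{ij}| <= 1. A numerical spot check of that claim (the sampled matrices are hypothetical, not from the thread):

```python
import numpy as np

rng = np.random.default_rng(1)
s = 0.1
B = rng.uniform(-1.0, 1.0, size=(3, 3))
B = (B + B.T) / 2
bound = np.linalg.norm(s * B / 2, 2)   # depends on s and B only, not on A

worst = 0.0
for _ in range(1000):
    A = rng.uniform(-1.0, 1.0, size=(3, 3))
    A = (A + A.T) / 2                  # symmetric, entries still in [-1, 1]
    shift = abs(np.linalg.eigvalsh(A + s * B / 2).min()
                - np.linalg.eigvalsh(A).min())
    worst = max(worst, shift)

print(worst <= bound + 1e-12)          # eigenvalue shift bounded uniformly in A
```

This is only a numerical illustration, not a proof, but the A-independence of the bound is exactly what lets s_0 be chosen independently of A_{ij}.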
 
If I had to use the implicit function theorem, would it work only locally? Would this cause me any problems? If not, why not? If a maths person could help fill in details for a poor physicist, that would be great.
 
I'm having a hard time understanding what you want to accomplish. Are you just looking for the result that the eigenvalues of a matrix vary continuously with the entries of the matrix? For this you don't need the implicit function theorem. You just need to observe that the eigenvalues vary continuously with the coefficients of the characteristic polynomial, which in turn varies continuously with the entries of the matrix. Note that for this to make sense, you need to be working over the complex numbers (or some algebraically closed field), to guarantee that your matrix and perturbations of it have eigenvalues.
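The chain of continuity described above (entries → characteristic polynomial coefficients → roots) can be checked numerically. A sketch using a hypothetical example matrix, chosen only because its characteristic polynomial and eigenvalues are easy to verify by hand:

```python
import numpy as np

# Entries -> characteristic polynomial coefficients -> roots.
# This cyclic permutation matrix has char. polynomial x^3 - 1,
# so its eigenvalues are the three cube roots of unity.
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])

coeffs = np.poly(A)                    # char. polynomial coefficients
eigs_from_roots = np.sort_complex(np.roots(coeffs))
eigs_direct = np.sort_complex(np.linalg.eigvals(A))
print(np.allclose(eigs_from_roots, eigs_direct))

# A small entry-wise perturbation produces a small eigenvalue shift
# (this matrix has three distinct eigenvalues, so the shift is O(eps)).
E = 1e-6 * np.ones((3, 3))
shift = np.abs(np.sort_complex(np.linalg.eigvals(A + E))
               - eigs_direct).max()
print(shift < 1e-3)
```

Note the example deliberately has complex eigenvalues: as the reply says, the continuity statement is naturally phrased over the complex numbers, where roots of the characteristic polynomial always exist.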
 