Differentiability of eigenvalues of a positive matrix

Leo321
I have a matrix A, which contains only positive real elements. A is a differentiable function of t.
Are the eigenvalues of A differentiable by t?
 
Let's work out some special cases!

If A is 0x0, the problem is vacuous (there are no eigenvalues).

If A is 1x1, the problem is too easy.

So let's consider the case of A being 2x2. I think this is still small enough we can brute force our way through it, even in the general case. What do you think? Still it might be worth considering special cases of 2x2 matrices.


By the way, how precisely are you defining "eigenvalues of A"? The eigenvalues form a set, and I don't think the question you have in mind is whether a set-valued function is differentiable! (Though, it may make sense to hold off on answering this question until after some more analysis)
 
Speaking as an engineer it seems fairly obvious that you are asking "is each eigenvalue considered separately a differentiable function".

Follow Hurkyl's advice, and find out what happens when there are repeated eigenvalues. (You may not like what you find.)
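Here is a minimal numerical sketch of what goes wrong at a repeated top eigenvalue. Note that this family has zero or negative off-diagonal entries, so it deliberately falls outside the all-positive assumption of the opening post:

```python
import numpy as np

# A(t) = [[1, t], [t, 1]] has eigenvalues 1 + t and 1 - t, so the
# largest eigenvalue is 1 + |t|: two smooth branches cross at t = 0,
# where the top eigenvalue is repeated and not differentiable in t.
def largest_eig(t):
    return np.linalg.eigvalsh(np.array([[1.0, t], [t, 1.0]])).max()

h = 1e-6
right = (largest_eig(h) - largest_eig(0.0)) / h    # one-sided slope ≈ +1
left = (largest_eig(0.0) - largest_eig(-h)) / h    # one-sided slope ≈ -1
print(right, left)
```

The two one-sided derivatives disagree at t = 0, so the largest eigenvalue, as a function of t, has a corner there.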
 
AlephZero said:
Speaking as an engineer it seems fairly obvious that you are asking "is each eigenvalue considered separately a differentiable function".
What is less obvious is that this definition is lacking...
 
AlephZero said:
Speaking as an engineer it seems fairly obvious that you are asking "is each eigenvalue considered separately a differentiable function".

I tried to make it easier, but maybe that had the opposite effect. I am interested in the largest eigenvalue only. I do know that if the largest eigenvalue occurs more than once, the derivative might not exist. But...

Follow Hurkyl's advice, and find out what happens when there are repeated eigenvalues. (You may not like what you find.)

In my case all the elements of A are positive. According to my understanding, Perron's theorem states that for such matrices, the eigenvalue with the largest absolute value is unique and is real and positive.
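A quick numerical sanity check of this reading of Perron's theorem (the matrix size and the random positive entries are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.uniform(0.1, 1.0, size=(4, 4))   # strictly positive entries
vals = np.linalg.eigvals(A)
k = np.argmax(np.abs(vals))
perron = vals[k]                          # eigenvalue of largest modulus
others = np.delete(vals, k)
# Perron's theorem: perron is real, positive, simple, and strictly
# larger in absolute value than every other eigenvalue.
print(perron)
```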
 
Leo321 said:
I am interested in the largest eigenvalue only. ... According to my understanding, ... is unique and is real and positive.
Ah, now that is a well-defined real-valued function. It sounds like you already know the things I was hoping you'd find out by working through my exercise. But that said, I still think my exercise should allow us to settle the 2x2 case definitively...
 
For the 2x2 case, writing A=\begin{pmatrix}a & b\\ c & d\end{pmatrix}, we get

\lambda=\frac{a+d}{2}+\frac{\sqrt{4bc+(a-d)^{2}}}{2}
The value inside the square root is always positive, and this function seems to be always differentiable.
Right?
Any ideas about a general nxn matrix?
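A quick check of the closed form against a general-purpose eigensolver (a sketch; the entries are random positive numbers):

```python
import numpy as np

def top_eig_closed_form(a, b, c, d):
    # λ = (a + d)/2 + sqrt(4bc + (a - d)^2)/2; with b, c > 0 the
    # radicand is strictly positive, so λ is smooth in the entries.
    return (a + d) / 2 + np.sqrt(4 * b * c + (a - d) ** 2) / 2

rng = np.random.default_rng(1)
a, b, c, d = rng.uniform(0.1, 2.0, size=4)
numeric = max(np.linalg.eigvals(np.array([[a, b], [c, d]])).real)
diff = abs(top_eig_closed_form(a, b, c, d) - numeric)
print(diff)  # should be at round-off level
```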
 
Leo321 said:
The value inside the square root is always positive, and this function seems to be always differentiable.
Right?
Any ideas about a general nxn matrix?
That's the result I got. We probably don't even want to treat 3x3 with brute force like this! Instead, we need some way to work implicitly with the eigenvalue...
 
Did you solve it? Not see what to do with my hint? Not notice I was hinting?

A key piece of information is that if g is a polynomial of one variable, then the only solutions to g(x)=g'(x)=0 are double roots of g.
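In terms of the characteristic polynomial p, the hint says that at a simple eigenvalue we have p(\lambda)=0 but p'(\lambda)\neq 0. A numerical sketch for the Perron root of a positive matrix:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.uniform(0.1, 1.0, size=(3, 3))   # strictly positive entries
p = np.poly(A)                            # characteristic polynomial coefficients
dp = np.polyder(p)                        # its derivative
lam = max(np.linalg.eigvals(A).real)      # Perron eigenvalue (real, simple)
# p(lam) ≈ 0 while p'(lam) ≠ 0: lam is a simple root, which is
# exactly the non-degeneracy the implicit function theorem needs.
print(np.polyval(p, lam), np.polyval(dp, lam))
```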
 
  • #10
I've always found it annoying how the differentiability of eigenvalues is treated as trivial in the context of perturbation theory... So this thread caught my attention. I guess this is what Hurkyl is talking about:

If \mathbb{R}\to\mathbb{R}^{n\times n},\; t\mapsto A(t) is some continuously differentiable function, then we can define a function

f:\mathbb{R}^{1+n}\to\mathbb{R}^n,\quad f_i(t,\lambda_1,\ldots,\lambda_n) = \det\big(A(t) - \lambda_i\,\textrm{id}_{n\times n}\big)

The implicit function theorem says that if f(0,\lambda_1,\ldots,\lambda_n)=0 for some lambdas, and if the Jacobian

\left(\begin{array}{ccc} \frac{\partial f_1}{\partial\lambda_1} & \cdots & \frac{\partial f_1}{\partial\lambda_n} \\ \vdots & & \vdots \\ \frac{\partial f_n}{\partial\lambda_1} & \cdots & \frac{\partial f_n}{\partial\lambda_n} \end{array}\right)

is invertible at this location (0,\lambda_1,\ldots,\lambda_n), then there exists some continuously differentiable mapping t\mapsto(\lambda_1(t),\ldots,\lambda_n(t)) such that these values are eigenvalues of A(t). Since each f_i depends only on \lambda_i among the lambdas,

i\neq j\quad\implies\quad \frac{\partial f_i}{\partial\lambda_j} = 0,

so the Jacobian is diagonal, and actually we only need to prove that

\frac{\partial f_i}{\partial\lambda_i} \neq 0

for all 1\leq i\leq n.
 
  • #11
The implicit function theorem is certainly what I had in mind... but you only need one \lambda variable. Also, in my mind, I was expressing things as functions of the components of the matrix, rather than in terms of the parameter t of the opening post.

(Maybe having n \lambda variables lets you solve a more subtle problem than the one I've solved... I haven't thought it through)
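With a single \lambda variable, the implicit function theorem gives d\lambda/dt = -(\partial f/\partial t)/(\partial f/\partial\lambda) for f(t,\lambda)=\det(A(t)-\lambda I). A finite-difference sketch (the particular family A(t) below is a made-up positive-entry example, not from the thread):

```python
import numpy as np

def A(t):
    # a concrete symmetric positive-entry matrix depending smoothly on t
    return np.array([[2.0 + t, 1.0], [1.0, 3.0]])

def f(t, lam):
    return np.linalg.det(A(t) - lam * np.eye(2))

t0 = 0.0
lam0 = max(np.linalg.eigvals(A(t0)).real)   # top eigenvalue at t0
h = 1e-6
# central finite differences for the partial derivatives of f
df_dt = (f(t0 + h, lam0) - f(t0 - h, lam0)) / (2 * h)
df_dlam = (f(t0, lam0 + h) - f(t0, lam0 - h)) / (2 * h)
implicit = -df_dt / df_dlam                 # dλ/dt via the implicit function theorem
# compare with directly differentiating the top eigenvalue in t
direct = (max(np.linalg.eigvals(A(t0 + h)).real)
          - max(np.linalg.eigvals(A(t0 - h)).real)) / (2 * h)
print(implicit, direct)  # the two estimates should agree
```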
 
  • #12
Thanks a lot for all the answers. In the end it seems that we found a way around this issue, so we don't rely on the derivative of eigenvalues at all.
 