
Function scales eigenvalues, but what happens to eigenvectors?

  1. Oct 5, 2014 #1
    Statement: I can prove that if I apply a function to my matrix (let's call it "A"), then whatever that function does to A, it does the same thing to the eigenvalues (I can prove this with a similarity transformation, I think), so long as the function is essentially a linear combination of powers of A, i.e. a polynomial or something like that.

    Question: How do I prove what this function does to the eigenvectors, though? Do they remain the same? Do they change? Thanks!!
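    Here is a quick numeric check of what I mean (just a sketch; the matrix and polynomial below are made up for illustration, using NumPy):

```python
import numpy as np

# Made-up symmetric matrix, so the eigenvectors are easy to compare.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

def p(M):
    # p(x) = x^2 + 2x + 5, with the constant term times the identity
    return M @ M + 2.0 * M + 5.0 * np.eye(M.shape[0])

evals_A, evecs_A = np.linalg.eigh(A)
evals_pA, evecs_pA = np.linalg.eigh(p(A))

print(evals_A**2 + 2.0 * evals_A + 5.0)  # p applied to the eigenvalues of A
print(evals_pA)                          # eigenvalues of p(A): they match
print(evecs_A)                           # eigenvectors of A
print(evecs_pA)                          # eigenvectors of p(A): same columns up to sign
```

    Numerically the eigenvalues of p(A) come out as ##p(\lambda_i)## and the eigenvectors look unchanged, but I don't know how to prove the eigenvector part.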
     
  3. Oct 6, 2014 #2

    Simon Bridge

    Science Advisor
    Homework Helper
    Gold Member
    2016 Award

    You have a matrix A and you have eigenvalues so that you know ##Av_i = a_iv_i## where ##a_i## is the ith eigenvalue with ##v_i## as the corresponding eigenvector. (I take it A is nxn and there are n different eigenvalues?)

    You have some function f(A) ... which is a matrix equation ... so B=f(A) - is that correct?
    Can you represent the effect of f on A by a matrix so that B=FA?

    Then when you find the eigenvalues ... you get ##Bu_i = b_iu_i## (##b_i## and ##u_i## are the eigenvalues and eigenvectors of B) ... and you find that, when they are ordered a certain way, ##b_i=sa_i##, where s is a constant scale factor between "corresponding" eigenvalues?

    So, to show the relationship between the eigenvectors, if any, you need to be explicit about how you proved the relationship for the eigenvalues.
     
  4. Oct 6, 2014 #3
    Thank you for the response. A is indeed an nxn matrix, but we cannot assume n distinct eigenvalues. Also, f(A) is more like a polynomial or power series in A, so it cannot be assumed to have the form B = FA that you describe. I incorrectly used the word "scales" in the question title, which is probably why you assumed those things (my fault!). The similarity transformation I mention involves the Jordan normal form, which lets us prove that whatever the function does to A, it also does to the eigenvalues of A.

    I was trying to think about how to use the Av=av equation you show above to prove that the eigenvectors do not change, but I'm not sure how....
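    Concretely, the eigenvalue step I am using, writing ##A = PJP^{-1}## with ##J## the Jordan form, is $$f(A) = \sum_k c_k \left(PJP^{-1}\right)^k = P\left(\sum_k c_k J^k\right)P^{-1} = P\,f(J)\,P^{-1},$$ since the inner ##P^{-1}P## factors cancel in each power. The diagonal of ##J## holds the eigenvalues of ##A##, and the diagonal of ##f(J)## is ##f## applied to them.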
     
  5. Oct 6, 2014 #4

    lavinia

    Science Advisor
    Gold Member

    Conjugation-invariant polynomials will work; for instance, the trace and the determinant.
     
  6. Oct 6, 2014 #5
    Hi lavinia - can you please elaborate for me regarding your response? Thank you!
     
  7. Oct 6, 2014 #6

    lavinia

    Science Advisor
    Gold Member

    Hi johnpjust
    If I understood you right, a function like the trace depends only on the eigenvalues of the matrix. While it is the sum of the diagonal entries of the matrix, it is also the sum of the eigenvalues. By conjugation, any matrix can be put in Jordan canonical form, and then the diagonal entries are the eigenvalues with multiplicity.

    I think - but correct me if I am wrong - that any conjugation-invariant function will depend only on the eigenvalues, because in Jordan canonical form the only entries are the eigenvalues on the diagonal and 1's on the superdiagonal. This is certainly true of symmetric invariant polynomials such as the trace and determinant.

    Your statement of your question confused me a little, so I interpreted it to mean functions that are determined by the eigenvalues.
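    As a sanity check, here is a small numeric sketch of that invariance (random made-up matrices, nothing special about them):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
P = rng.standard_normal((4, 4))   # generically invertible

B = P @ A @ np.linalg.inv(P)      # a conjugate of A

print(np.trace(A), np.trace(B))   # equal up to round-off
print(np.trace(A), np.linalg.eigvals(A).sum().real)  # trace = sum of eigenvalues
# .real is safe here: complex eigenvalues of a real matrix come in conjugate pairs
```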
     
  8. Oct 7, 2014 #7

    Fredrik

    Staff Emeritus
    Science Advisor
    Gold Member

    This is a special case of a theorem called "the spectral mapping theorem" in books on functional analysis. Note that if there's a non-zero vector x such that ##Ax=\lambda x##, then ##(A-\lambda I)x=0##, and ##A-\lambda I## isn't invertible (because if it were, x would have to be 0). Because of this, the spectrum of A is defined as the set of all ##\lambda\in\mathbb C## such that ##A-\lambda I## is not invertible. The spectrum of A is denoted by ##\sigma(A)##. One version of the spectral mapping theorem says that if f is a polynomial, then ##\sigma(f(A))=f(\sigma(A))##. The right-hand side is (by definition of the notation) equal to ##\{f(\lambda)\mid\lambda\in\sigma(A)\}##.

    I don't know if there's a simple proof for the case where A is a linear operator on a finite-dimensional vector space.
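    In the finite-dimensional case the polynomial version is at least easy to check numerically; a minimal sketch (matrix and polynomial chosen arbitrarily):

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])      # arbitrary non-symmetric example

def f(M):
    # f(x) = x^3 - x + 2, constant term times the identity
    return M @ M @ M - M + 2.0 * np.eye(M.shape[0])

spec_A = np.linalg.eigvals(A)                    # sigma(A)
lhs = np.sort_complex(np.linalg.eigvals(f(A)))   # sigma(f(A))
rhs = np.sort_complex(spec_A**3 - spec_A + 2.0)  # f(sigma(A)), elementwise
print(lhs)
print(rhs)  # the two sets agree up to round-off
```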
     
    Last edited: Oct 7, 2014
  9. Oct 7, 2014 #8
    If ##A## is a square matrix and ##\lambda## is an eigenvalue of ##A## with corresponding eigenvector ##v##, and ##p(x)= \sum_{k=0}^n a_k x^k## is a polynomial, then $$p(A) v = p(\lambda) v, $$ i.e. the eigenvalues change by the rule ##\lambda\to p(\lambda)## and the eigenvectors remain the same.
    That is a very easy statement: it is almost trivial for ##p(x)=x^k##, and then you just take a linear combination of the identities ##A^k v= \lambda^k v##.
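    Spelled out, the linear combination step is $$p(A)v = \sum_{k=0}^n a_k A^k v = \sum_{k=0}^n a_k \lambda^k v = \left(\sum_{k=0}^n a_k \lambda^k\right) v = p(\lambda)\, v.$$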

    A more non-trivial statement is the so-called spectral mapping theorem, which says that if ##\mu## is an eigenvalue of ##p(A)##, then there exists ##\lambda\in\sigma(A)## (i.e. ##\lambda## is an eigenvalue of ##A##) such that ##\mu=p(\lambda)##. You can find a proof in any more or less advanced linear algebra text; see, for example, "Linear Algebra Done Wrong", Ch. 9, s. 2.

    As for the eigenvectors: if $$p(A) v = \mu v,$$ then $$v= \sum_{k=1}^r \alpha_k v_k,$$ where the ##\alpha_k## are scalars, ##Av_k = \lambda_k v_k##, and ##\lambda_k##, ##k=1, 2, \ldots, r##, are all the eigenvalues of ##A## such that ##p(\lambda_k)=\mu##. This statement can be seen fairly easily from the Jordan decomposition of the matrix. I am not aware of any "elementary" proof.
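    A minimal example of that merging: take ##A = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}## and ##p(x) = x^2##. Then ##p(A) = I##, so every nonzero ##v## satisfies ##p(A)v = 1\cdot v##; and indeed every such ##v## is a combination ##\alpha_1 v_1 + \alpha_2 v_2## of the eigenvectors of ##A## for ##\lambda_{1,2} = \pm 1##, both of which satisfy ##p(\lambda_k) = 1##.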
     