
Characteristic Polynomials/Eigenvalues

  1. May 27, 2009 #1
    Theorem 5.3: Let A be in [tex]M_{n \times n}(F)[/tex].
    (a) The characteristic polynomial of A is a polynomial of degree n with leading coefficient [tex](-1)^n[/tex].
    (b) A has at most n distinct eigenvalues.
    Note: The theorem can be proved by a straightforward induction argument.

    Question: Can someone help with the proofs? For part (b), I understand there can be at most n distinct eigenvalues, since an n x n matrix has only n entries along its diagonal. But does the proof of (b) also require induction, or is the text simply encouraging induction for part (a)?

    One last easy question: Let T be the linear operator on [tex]P_2(R)[/tex] defined by [tex]T(f(x)) = f(x) + (x + 1)f'(x)[/tex], let B be the standard ordered basis for [tex]P_2(R)[/tex], and let A = [tex][T]_B[/tex]. Then,

    A = [tex]\begin{pmatrix} 1 & 1 & 0 \\ 0 & 2 & 2 \\ 0 & 0 & 3 \end{pmatrix}[/tex], i.e. the columns are (1, 0, 0), (1, 2, 0), and (0, 2, 3).

    In this example, I know B = { [tex]1, x, x^2[/tex] } is an ordered basis for [tex]P_2(R)[/tex]. So we plug the first element into the formula [tex]T(f(x)) = f(x) + (x + 1)f'(x)[/tex], then plug in x, and finally x^2. But for some reason I don't know how to evaluate each expression to get the respective column vectors above. In particular, what is f(1), or what is [tex]f(x^2)[/tex]?


    Thanks so much,


    JL
     
  3. May 27, 2009 #2

    jbunniii

    Science Advisor
    Homework Helper
    Gold Member

    (b) follows directly from (a) and the fundamental theorem of algebra (a polynomial of degree n has at most n complex roots). This assumes that you already have the fact that the eigenvalues are precisely the roots of the characteristic polynomial.

    (a) should be immediate if you express the linear map [tex]A - \lambda I[/tex] as an upper-triangular matrix with respect to an appropriate basis. It's always possible to find such a basis. Then use what you know about the determinant of an upper triangular matrix.

    Of course, you can equally well define the characteristic polynomial as [tex]\det(\lambda I - A)[/tex], in which case the leading coefficient will be 1, not [tex](-1)^n[/tex].
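
    If it helps to see both parts on a concrete example, here is a quick sympy sanity check (the 3x3 matrix below is made up just for illustration):

    [code]
    import sympy as sp

    lam = sp.symbols('lambda')
    # made-up 3x3 example matrix, just to check (a) and (b) concretely
    A = sp.Matrix([[2, 1, 0],
                   [1, 2, 0],
                   [0, 0, 3]])
    n = A.shape[0]

    # characteristic polynomial in the det(A - lambda*I) convention
    p = sp.Poly(sp.expand((A - lam * sp.eye(n)).det()), lam)
    print(p.degree())                  # 3 = n
    print(p.LC())                      # -1 = (-1)**n
    print(sp.solve(p.as_expr(), lam))  # roots 1 and 3 (3 is a double root)
    print(sorted(A.eigenvals()))       # [1, 3]: the eigenvalues are the roots; only 2 distinct, at most n
    [/code]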
     
  4. May 27, 2009 #3

    I know that the determinant of an upper-triangular matrix is the product of its diagonal entries. But how do you express the linear map [tex]A - \lambda I[/tex] as an upper-triangular matrix with respect to an appropriate basis? (Wouldn't that be a linear transformation whose domain and codomain carry different bases, since the basis has changed?)

    Thanks,

    J
     
    Last edited: May 27, 2009
  5. May 28, 2009 #4

    jbunniii

    Science Advisor
    Homework Helper
    Gold Member

    THEOREM: If V is a complex finite-dimensional vector space and A is a linear map from V to V, then there exists a basis for V with respect to which the matrix of A is upper-triangular. (See, e.g., Axler's "Linear Algebra Done Right," Theorem 5.13.)

    Equivalently, if M is the matrix of A with respect to the standard basis, then there exists an invertible matrix S such that

    [tex]M = S U S^{-1}[/tex]

    where U is upper-triangular. ("M is similar to U.")

    M and U have the same eigenvalues, since they represent the same linear map A.

    You can use this theorem in your case as follows:

    Let B be a basis for V with respect to which the linear map A has an upper triangular matrix U. Then the matrix of

    [tex]A - \lambda I[/tex]

    with respect to the basis B is simply

    [tex]U - \lambda I[/tex]

    Then

    [tex]det(A - \lambda I) = det(U - \lambda I)[/tex]

    which equals the product of the diagonal elements. From here, your result follows easily.
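
    Here is a small sympy illustration of that similarity argument (the 2x2 matrix is made up; jordan_form is just one convenient way to obtain an upper-triangular U similar to M):

    [code]
    import sympy as sp

    lam = sp.symbols('lambda')
    # made-up example matrix with a repeated eigenvalue
    M = sp.Matrix([[3, 1],
                   [-1, 1]])
    n = M.shape[0]

    # Jordan form gives M = S * U * S**(-1) with U upper-triangular (over C)
    S, U = M.jordan_form()
    print(U)                                 # Matrix([[2, 1], [0, 2]])
    print(sp.simplify(S * U * S.inv() - M))  # zero matrix, so M = S U S^(-1)

    pM = sp.expand((M - lam * sp.eye(n)).det())
    pU = sp.expand((U - lam * sp.eye(n)).det())
    diag = sp.expand(sp.Mul(*[U[i, i] - lam for i in range(n)]))

    print(sp.simplify(pM - pU))    # 0: same characteristic polynomial
    print(sp.simplify(pU - diag))  # 0: det(U - lambda I) is the product of the diagonal entries
    [/code]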
     
    Last edited: May 28, 2009
  6. May 29, 2009 #5
    How about a proof by induction?

    Proof: Consider the matrix A as a 2x2 matrix (base case).
    Thus [tex]\det(A - tI) = (a_{11} - t)(a_{22} - t) - a_{21}a_{12} = t^2 - a_{11}t - a_{22}t + a_{11}a_{22} - a_{21}a_{12} = (-1)^2\left(t^2 - a_{11}t - a_{22}t + a_{11}a_{22} - a_{21}a_{12}\right)[/tex].

    So the base case has degree 2 with leading coefficient [tex](-1)^2[/tex], just as expected. Now assume the result holds for (n-1) x (n-1) matrices (the induction hypothesis) and consider an n x n matrix A. Expanding along the first column,
    [tex]\det(A - tI) = (-1)^{1+1}(a_{11} - t)\det(\widetilde{A}_{11}) + \sum_{i=2}^{n} (-1)^{i+1} a_{i1} \det(\widetilde{A}_{i1})[/tex] (#1), where [tex]\widetilde{A}_{i1}[/tex] denotes [tex]A - tI[/tex] with row i and column 1 deleted,
    = [tex](-1)^{1+1}(a_{11} - t)\left((-1)^{n-1}t^{n-1} + \dots + c_0\right)[/tex] (where [tex]c_0[/tex] is a constant) (#2)
    = ................. how do we conclude a polynomial of degree n with leading coefficient [tex](-1)^n[/tex]?


    Questions:
    1. In general, how does one proceed from (#1) to conclude that we get a polynomial of degree n with leading coefficient [tex](-1)^n[/tex]? (See also the quick symbolic check below.)
    2. Cofactor expansion of [tex]\widetilde{A}_{11}[/tex] (and even of its minors) may produce leading signs of the form [tex](-1)^{(n-1)+(n-1)}[/tex]. Doesn't that mean the leading coefficients need not all be of the form [tex](-1)^n[/tex], since different choices of rows and columns might produce, say, [tex](-1)^5[/tex], which does not equal [tex](-1)^n[/tex] if n = 10?
    3. The person who helped me outline the proof of this theorem never included a leading factor of [tex](-1)^{(n-1)+(n-1)}[/tex] on line (#2). Shouldn't there be such a factor, since we crossed out the first row and first column? Also, in line (#2), why isn't there some term of the form [tex](a_{22} - t)[/tex] inside the parentheses, but instead simply the term [tex]t^{n-1}[/tex]?
    4. If we keep taking determinants of the minors repeatedly, we eventually end up with a 2x2 matrix. Is this why line (#2) includes the constant [tex]c_0[/tex]?
    5. Lastly, is line (#2) only the cofactor expansion of the [tex]\widetilde{A}_{11}[/tex] term, meaning that this line should also include the cofactor expansion of all the [tex]a_{i1}[/tex] entries along the first column? And was the induction hypothesis applied in line (#2)?
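
    For what it's worth, here is a quick symbolic check (sympy, with a generic 3x3 matrix of symbols, purely for illustration) that at least confirms the degree and the sign of the leading coefficient:

    [code]
    import sympy as sp

    t = sp.symbols('t')
    n = 3
    # generic symbolic n x n matrix, just to inspect the leading term of det(A - tI)
    A = sp.Matrix(n, n, lambda i, j: sp.Symbol('a%d%d' % (i + 1, j + 1)))

    p = sp.Poly(sp.expand((A - t * sp.eye(n)).det()), t)
    print(p.degree())  # 3
    print(p.LC())      # -1 == (-1)**3, whatever the entries a_ij are
    [/code]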
     
    Last edited: May 29, 2009
  7. May 30, 2009 #6

    matt grime

    Science Advisor
    Homework Helper

    You're getting there. Expand along the first row as normal

    (a_11 - t) det ( ) + ... sum of stuff involving dets

    What are the things you're taking determinants of...? (That's the hint for why this is induction.)
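
    In symbols (a sketch of where this leads, written with the first-column expansion (#1) above; expanding along the first row works the same way): the minor multiplying [tex](a_{11} - t)[/tex] is [tex]A - tI[/tex] with its first row and first column deleted, which is again of the form "(n-1) x (n-1) matrix minus [tex]tI_{n-1}[/tex]", so the induction hypothesis applies to it, while every other minor in the sum is missing one row and the first column and therefore contains at most n - 2 of the entries [tex]a_{jj} - t[/tex]:

    [tex]\det(A - tI) = (a_{11} - t)\underbrace{\left((-1)^{n-1}t^{n-1} + \dots\right)}_{\text{induction hypothesis}} + \underbrace{\sum_{i=2}^{n} (-1)^{i+1} a_{i1}\det(\widetilde{A}_{i1})}_{\text{degree at most } n-2}[/tex]

    Only the first product reaches degree n, and its leading term is [tex](-t)\cdot(-1)^{n-1}t^{n-1} = (-1)^n t^n[/tex].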



    As for the other question: f(x) represents a generic element of P_2. The map sends

    f(x) to f(x) + (x+1)f'(x)

    So you need to substitute the basis elements for f. I don't know why you're trying to put them into f as variables.
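
    Concretely, substituting each basis element of B = { [tex]1, x, x^2[/tex] } for f should give

    [tex]\begin{align*} T(1) &= 1 + (x+1)\cdot 0 = 1,\\ T(x) &= x + (x+1)\cdot 1 = 1 + 2x,\\ T(x^2) &= x^2 + (x+1)\cdot 2x = 2x + 3x^2, \end{align*}[/tex]

    whose coordinate vectors relative to B are (1, 0, 0), (1, 2, 0) and (0, 2, 3), which are exactly the columns of A above.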
     
  8. Jun 1, 2009 #7
    I'm still not sure about the proof; maybe I will seek help tomorrow. As for the second part, involving f(x) to f(x) + (x+1)f'(x), you are correct: it works out when I substitute the basis elements for f.

    Thanks,

    Jeffrey
     