
Cayley-Hamilton

  1. Jun 29, 2006 #1

    mathwonk

    Science Advisor
    Homework Helper
    2015 Award

    If A is an n by n matrix of constants from the commutative ring k, and I is the identity n by n matrix, then Laplace's expansion formula for determinants implies that

    adj[XI-A].[XI-A] = f(X).I, where f(X) is the characteristic polynomial of A, and adj denotes the classical adjoint (adjugate), whose entries are + or - the (n-1) by (n-1) minors of XI-A.

    Since setting X=A makes the left side equal to zero, we also have f(A) = 0. QED for Cayley-Hamilton. (i.e. any square matrix A satisfies its own characteristic polynomial f(X) = det[XI-A].)


    Q: is this a correct argument?

    [hint: how could anything so simple and natural not be correct?]:biggrin:
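The conclusion of the theorem, at least, is easy to check by hand. As a small sanity sketch (the 2x2 matrix and helper functions below are my own illustrative choices, not from the thread): for A = [[a, b], [c, d]] the characteristic polynomial is f(X) = X^2 - (a+d)X + (ad-bc), and f(A) should come out as the zero matrix.

```python
# Numerically verify Cayley-Hamilton for one 2x2 matrix,
# using only plain Python lists (no external libraries).

def mat_mul(P, Q):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(P, Q):
    """Entrywise sum of two 2x2 matrices."""
    return [[P[i][j] + Q[i][j] for j in range(2)] for i in range(2)]

def mat_scale(s, P):
    """Scalar multiple of a 2x2 matrix."""
    return [[s * P[i][j] for j in range(2)] for i in range(2)]

A = [[2, 1],
     [3, 4]]
I = [[1, 0], [0, 1]]
trace = A[0][0] + A[1][1]                    # a + d = 6
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]  # ad - bc = 5

# f(A) = A^2 - trace*A + det*I
fA = mat_add(mat_mul(A, A),
             mat_add(mat_scale(-trace, A), mat_scale(det, I)))
print(fA)  # [[0, 0], [0, 0]]
```

Of course this checks only the statement, not the argument above, and the argument is what the rest of the thread is about.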
     
    Last edited: Jun 29, 2006
  3. Jun 30, 2006 #2
    I do not really understand what you're doing. How can you multiply the (n-1)x(n-1) adj[XI-A] matrix with the nxn (XI-A) matrix?
     
  4. Jun 30, 2006 #3
    Never mind, adj[XI-A] is an nxn matrix. Now I understand your proof and I think it's correct, yes.
     
  5. Jun 30, 2006 #4

    mathwonk


    if this simple proof, using only cramer's rule and the remainder theorem, is correct, why do standard books including Artin, Bourbaki, Lang, Hungerford, Jacobson, Van der Waerden, Rotman, Sah, Birkhoff-Mac Lane, and my own grad algebra notes, not give it?

    these sources instead appeal to deeper results like decomposition of modules over pids, or jordan form, or rational canonical form, or diagonalization, or existence of a partial decomposition into cyclic subspaces, or tedious unenlightening computations.:confused:
     
  6. Jun 30, 2006 #5
    Actually, I do object to your argument. For square matrices we have adj(A).A = det(A).I. So indeed adj(xI-A).(xI-A) = det(xI-A).I, but how do you conclude det(xI-A) = f(x) for x a matrix, instead of an element of k? I am pretty sure that's not true in general.

    My shorter "proof": det(xI-A) = f(x), and putting x = A gives f(A) = det(0) = 0. That can't be correct.
     
  7. Jun 30, 2006 #6

    mathwonk


    puzzling, isn't it? hint: one is working in two different rings as one pleases here: namely polynomials with matrix coefficients, and matrices with polynomial coefficients. the calculation adj[XI-A].[XI-A] = f(X).I is usually justified by cramer's rule, in the ring of matrices with polynomial coefficients. But the substitution f(A) is done in the ring of polynomials with matrix coefficients. Are these rings isomorphic?


    [actually the short proof above is essentially the one in hefferon's free web notes, but he gives it as an unenlightening computation, without explaining what is going on. word for word the same proof appears in an old book by evar nering, so maybe he just copied it? at any rate hefferon works, as does nering, at times in one of the rings above, at times in the other, never saying why the arguments do not depend on the ring, i.e. never making clear the isomorphism mentioned above.]

    (your argument which as you say is wrong, operates in the ring of matrices with matrix coefficients!):rolleyes:
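The two rings in question can be made concrete. The small sketch below (an illustration of my own, with polynomials represented as coefficient lists, lowest degree first) regroups a matrix whose entries are polynomials into a list of matrix coefficients, i.e. a polynomial with matrix coefficients; this regrouping is the bijection underlying the isomorphism asked about.

```python
# Regroup an n x n matrix of polynomials in X into a polynomial
# whose coefficients are n x n matrices of scalars.

def to_matrix_poly(M):
    """Return coeffs with coeffs[d][i][j] = coefficient of X^d in M[i][j]."""
    n = len(M)
    deg = max(len(M[i][j]) for i in range(n) for j in range(n))
    return [[[M[i][j][d] if d < len(M[i][j]) else 0 for j in range(n)]
             for i in range(n)] for d in range(deg)]

# XI - A for A = [[2, 1], [3, 4]]: each entry is a degree-1 polynomial,
# e.g. the (0, 0) entry [-2, 1] means -2 + X.
XI_minus_A = [[[-2, 1], [-1, 0]],
              [[-3, 0], [-4, 1]]]

coeffs = to_matrix_poly(XI_minus_A)
print(coeffs[0])  # constant coefficient: -A = [[-2, -1], [-3, -4]]
print(coeffs[1])  # X coefficient: I = [[1, 0], [0, 1]]
```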
     
  8. Jun 30, 2006 #7

    mathwonk


    remark: you also have to deal with non commutativity issues. i.e. suppose f(x) is a polynomial and A is an element that does not commute with the coefficients of f. what does f(A) mean? is this a problem above?
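Here is one concrete instance of that ambiguity (the matrices are arbitrary illustrative choices of mine): for f(X) = B.X with a matrix coefficient B that does not commute with A, "plugging in A" can mean B.A or A.B, and these differ.

```python
# Left vs right evaluation of f(X) = B*X at a matrix A,
# when A does not commute with the coefficient B.

def mat_mul(P, Q):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 1],
     [0, 1]]
B = [[1, 0],
     [1, 1]]

right_eval = mat_mul(B, A)  # B A : substitute A on the right of B
left_eval = mat_mul(A, B)   # A B : substitute A on the left of B
print(right_eval)  # [[1, 1], [1, 2]]
print(left_eval)   # [[2, 1], [1, 1]]
```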
     
  9. Jul 1, 2006 #8

    mathwonk


    here is the relevant non commutative algebra: if f(X) is a polynomial with coefficients in a non commutative ring, but X commutes with every element, then there are two meanings of f(A) for A an element of the ring of coefficients: one can evaluate f at A from the right or from the left.

    I.e. one can take a0 + a1 A + a2 A^2 +....+ an A^n, the right evaluation, or one can take

    a0 + A a1 + A^2 a2 +...+A^n an, the left evaluation.

    then the basic high school root-factor theorem has two versions: namely if (X-A) divides f(X) from the left, then the left evaluation of f at A is zero, and similarly for right division.

    thus since one has by cramer's rule adj(XI-A)(XI-A) = f(X).I, where f is the characteristic polynomial of A, and XI-A divides f(X).I from the right, we have f(A) = 0, where f(A) is the evaluation of f at A from the right. now one also knows that (XI-A)adj(XI-A) = f(X).I, so also f(A) = 0 where f(A) is evaluation from the left, and of course since f has scalar coefficients, these are the same.


    check this out. this is to me the simplest, most elementary proof of cayley hamilton, and is very hard to find in books.
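The one-sided factor theorem at the heart of this proof can be sanity-checked by brute force. The sketch below (all matrices are arbitrary illustrative choices of mine) expands g(X) = Q(X)(X - A) for Q(X) = Q1.X + Q0 in the ring of polynomials with matrix coefficients (X central), and confirms that the right evaluation of g at A vanishes, whatever Q is: g(X) = Q1 X^2 + (Q0 - Q1 A) X - Q0 A, so the right evaluation Q1 A^2 + (Q0 - Q1 A) A - Q0 A telescopes to 0.

```python
# Right evaluation of g(X) = Q(X)(X - A) at A is zero,
# for Q(X) = Q1*X + Q0 with arbitrary 2x2 matrix coefficients.

def mat_mul(P, Q):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(P, Q):
    """Entrywise sum of two 2x2 matrices."""
    return [[P[i][j] + Q[i][j] for j in range(2)] for i in range(2)]

def mat_neg(P):
    """Entrywise negation of a 2x2 matrix."""
    return [[-P[i][j] for j in range(2)] for i in range(2)]

A  = [[2, 1], [3, 4]]
Q1 = [[1, 2], [3, 4]]
Q0 = [[0, 1], [1, 0]]

# coefficients of g(X) = Q1 X^2 + (Q0 - Q1 A) X + (-Q0 A)
c2 = Q1
c1 = mat_add(Q0, mat_neg(mat_mul(Q1, A)))
c0 = mat_neg(mat_mul(Q0, A))

# right evaluation: powers of A substituted on the right of each coefficient
A2 = mat_mul(A, A)
g_right_at_A = mat_add(mat_mul(c2, A2), mat_add(mat_mul(c1, A), c0))
print(g_right_at_A)  # [[0, 0], [0, 0]]
```

With Q(X) = adj(XI-A), this is exactly the step that finishes the proof in the post above.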
     
    Last edited: Jul 2, 2006