
Some eigenstuff

  1. Oct 30, 2007 #1
    Hey guys,

    A couple of questions here I'd like some help with. Firstly, an unnamed theorem in my textbook says: suppose s and t are distinct eigenvalues of a 2x2 matrix A; then (A - sI)(A - tI) = 0.

    The proof goes something like this. We know there is an eigenvector u for the value s such that Au = su (can we assume that? I assume so :P), which means (A - sI)u = 0, which is fine, I can accept that. We can also find a vector v corresponding to the value t such that the same thing applies, and we can show that u and v are linearly independent in a number of ways, but let's assume they are for now. We can also show that (A - sI) and (A - tI) commute.

    Now here's the part I'm not sure about. Let's take a vector w = au + bv. Then
    (A - sI)(A - tI)(au + bv) = (A - sI)(A - tI)au + (A - sI)(A - tI)bv
    = (A - tI)(A - sI)au + (A - sI)(A - tI)bv
    = (A - tI)0 + (A - sI)0
    = 0
    Now this is all cool, I get that, but I don't see how we can deduce from this that (A - sI)(A - tI) = 0, given that they are matrices. I mean, for real numbers, if we have ab = 0 and b isn't zero, then a must be zero, right? But for matrices that doesn't have to be so: a nilpotent matrix N, for instance, satisfies N·N = 0 with N ≠ 0. So what am I missing here? What reason do we have for saying (A - sI)(A - tI) = 0 given the above result?
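    (For what it's worth, I did convince myself numerically that the identity holds; here's the quick Python/numpy check I ran, with a made-up upper-triangular matrix so the eigenvalues can be read off the diagonal. It's the deduction, not the fact, that I'm stuck on.)

        import numpy as np

        # Made-up example: upper triangular, so the eigenvalues are the
        # diagonal entries s = 2 and t = 3 (distinct, as the theorem requires)
        A = np.array([[2.0, 1.0],
                      [0.0, 3.0]])
        s, t = 2.0, 3.0
        I = np.eye(2)

        product = (A - s * I) @ (A - t * I)
        print(product)                   # [[0. 0.]
                                         #  [0. 0.]]
        print(np.allclose(product, 0))   # True: the product really is the zero matrix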

    Next question =D
    Primary Decomposition Theorem. Well, it's called that in the lecture notes, but when I look it up I find something I consider a little different. Anyway, it states:

    The dimension of the generalised eigenspace for an eigenvalue s, in the case where all eigenvalues of an n×n matrix A are real, is the multiplicity of the eigenvalue s. Moreover, R^n is the direct sum of the generalised eigenspaces. The proof that follows in my book confuses the hell out of me, so I was wondering if someone here could elaborate.
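    To check I'm at least reading the statement correctly, here's a small numpy experiment (the 3×3 matrix is my own made-up example): the eigenvalue 2 has multiplicity 2, its ordinary eigenspace comes out only 1-dimensional, but the generalised eigenspace, i.e. the null space of (A - 2I)^2, has dimension 2, matching the multiplicity, and the generalised eigenspace dimensions add up to 3 = dim R^3.

        import numpy as np

        # Made-up 3x3 example with all eigenvalues real:
        # eigenvalue 2 with multiplicity 2, eigenvalue 5 with multiplicity 1
        A = np.array([[2.0, 1.0, 0.0],
                      [0.0, 2.0, 0.0],
                      [0.0, 0.0, 5.0]])
        I = np.eye(3)

        def null_dim(M):
            # dimension of the null space = number of columns minus rank
            return M.shape[1] - np.linalg.matrix_rank(M)

        print(null_dim(A - 2 * I))                  # 1: ordinary eigenspace of 2
        print(null_dim((A - 2 * I) @ (A - 2 * I)))  # 2: generalised eigenspace of 2
        print(null_dim(A - 5 * I))                  # 1: eigenspace of 5; 2 + 1 = 3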

    The way I read it was as follows, and bear in mind I'm quite sure this is wrong and that some of my preconceptions of the ideas here are false, but I need you to correct those in the same fell swoop if possible :)

    Anyway, I find a good way to think about this eigenstuff is that the values are a deformation factor, stretching or what have you, and the vectors are the directions in which this occurs. Crude, bound to low-dimensional spaces, but useful, I find. So if we have an eigenvalue s for a matrix A (assumed all real etc.), we have a set of eigenvectors to go with it, which should be (I think) linearly independent and so span a space, which is the eigenspace of the value s. Given that all the vectors in question are linearly independent, the space they span has dimension equal to the number of vectors used to span it. Moreover, the eigenspaces for different eigenvalues intersect only trivially, so their direct sum spans the whole space. Out of this comes the idea that if you have an eigenvalue occurring several times, you will have more vectors in that space and thus a higher-dimensional eigenspace.

    Is that about right? I doubt it, but hey, that's why I'm here =D

  3. Oct 30, 2007 #2



    A 2x2 matrix is a linear map from R^2 to R^2. So if you found two distinct linearly independent eigenvectors u and v for A (why do eigenvectors exist? read the definition of an eigenvalue), that means they form a basis for R^2. So any vector in R^2 is of the form au+bv. And consequently, because (A-sI)(A-tI) sends all these vectors to zero, that means it sends all of R^2 to zero, and so must be the zero map. (In general, if you have a basis {v_1, ..., v_n} and a linear map T such that Tv_i = 0 for all i, then T=0, because a linear map is completely determined by its action on the basis.)
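    If a numerical illustration helps, here is that argument end to end in numpy (the eigenvectors and eigenvalues are an arbitrary made-up example): build A to have eigenvectors u and v with distinct eigenvalues s and t, and watch (A - sI)(A - tI) kill both basis vectors, and hence every vector.

        import numpy as np

        # Prescribe eigenvectors u, v and distinct eigenvalues s = 2, t = 3
        # by conjugating a diagonal matrix: A = P diag(s, t) P^(-1)
        u = np.array([1.0, 1.0])
        v = np.array([1.0, -1.0])
        P = np.column_stack([u, v])    # columns form a basis of R^2
        s, t = 2.0, 3.0
        A = P @ np.diag([s, t]) @ np.linalg.inv(P)

        T = (A - s * np.eye(2)) @ (A - t * np.eye(2))

        print(np.allclose(T @ u, 0))   # True: T sends u to 0
        print(np.allclose(T @ v, 0))   # True: T sends v to 0
        print(np.allclose(T, 0))       # True: killing a basis forces T = 0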

    And what kind of elaboration are you looking for with respect to the primary decomposition theorem?
  4. Oct 30, 2007 #3
    Hmmm, OK, I accept that it's the zero map, but writing that quantity as equal to zero still seems a bit strange to me.

    For the second part, I just wanted to know if what I was saying was correct, basically.

  5. Oct 30, 2007 #4



    The proof is trivial: if s is an eigenvalue, there is a vector v with Tv = sv, and the same equation holds for any multiple of v.

    Hence if t is a different eigenvalue, there is a vector w such that Tw = tw, and w is not a multiple of v, so v, w is a basis of the space. Hence (T - sI)(T - tI)(av + bw) = 0.
  6. Oct 31, 2007 #5
    I understand how we get to (T - sI)(T - tI)(av + bw) = 0. What I don't get is how we can deduce from this that (T - sI)(T - tI) = 0. I mean, morphism sort of convinced me, but I will need to ponder it a bit more :)
  7. Oct 31, 2007 #6



    What can you say about a matrix A if Av = 0 for every vector v (in a space of dimension > 1, so that v is not always a multiple of an eigenvector for the eigenvalue zero :smile:)?
  8. Nov 12, 2007 #7
    I think he means something like: for matrices A and B, AB = 0 does not imply that A = 0 or B = 0. For example, take

    A = [1 1]    B = [-10 0]
        [0 0]        [ 10 0]

    Then AB = 0, but obviously neither A = 0 nor B = 0 holds. Something like that.
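    Checking that example in numpy, just to confirm the product really vanishes:

        import numpy as np

        A = np.array([[1.0, 1.0],
                      [0.0, 0.0]])
        B = np.array([[-10.0, 0.0],
                      [ 10.0, 0.0]])

        print(A @ B)   # [[0. 0.]
                       #  [0. 0.]]  -- AB = 0 although A != 0 and B != 0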
  9. Nov 12, 2007 #8

    matt grime


    This is just repetition, but there is nothing else to say.

    1) A has two eigenvectors u and v with two different eigenvalues.

    2) Therefore they are linearly independent.

    3) Therefore they span, and any vector can be written as a linear combination of them.

    4) (A - sI)(A - tI) annihilates any linear combination of the eigenvectors.

    5) (A - sI)(A - tI) must therefore send every vector to zero.

    6) Therefore (A - sI)(A - tI) is the 0 matrix.
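    And if the step from 5) to 6) is the sticking point, here it is made concrete with sympy (the basis vectors are an arbitrary made-up example): take a completely unknown 2x2 matrix T, demand that it kill both vectors of a basis, and the only solution is the zero matrix.

        import sympy as sp

        # A completely unknown 2x2 matrix T
        a, b, c, d = sp.symbols('a b c d')
        T = sp.Matrix([[a, b], [c, d]])

        # A made-up basis of R^2 (p1 and p2 are linearly independent)
        p1 = sp.Matrix([1, 3])
        p2 = sp.Matrix([2, 4])

        # Demand T p1 = 0 and T p2 = 0, then solve for the entries of T
        eqs = list(T * p1) + list(T * p2)
        print(sp.solve(eqs, [a, b, c, d]))  # {a: 0, b: 0, c: 0, d: 0}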