Hey guys,

Couple of questions here I'd like some help with. First, an unnamed theorem in my textbook: suppose s and t are distinct eigenvalues of a 2x2 matrix A; then (A - sI)(A - tI) = 0.

The proof goes something like this. We know there is an eigenvector u for the value s, i.e. Au = su (can we assume such a vector exists? I assume so :P), which means (A - sI)u = 0, which is fine, I can accept that. We can also find an eigenvector v corresponding to the value t, so that (A - tI)v = 0, and we can show that u and v are linearly independent in a number of ways, but let's just assume they are for now. We can also show that (A - sI) and (A - tI) commute (both products expand to A^2 - (s+t)A + stI).

Now here's the part I'm not sure about. Let's take a vector w = au + bv; then

(A - sI)(A - tI)(au + bv) = (A - sI)(A - tI)au + (A - sI)(A - tI)bv

= (A - tI)(A - sI)au + (A - sI)(A - tI)bv

= (A - tI)0 + (A - sI)0

= 0

Now this is all cool, I get that, but I don't see how we can deduce from this that (A - sI)(A - tI) = 0, given that these are matrices. I mean, for real numbers, if we have ab = 0 and b isn't zero then a must be, right? But for matrices that doesn't have to hold (nilpotent matrices show a product can vanish without either factor being zero), so what am I missing here? What reason do we have for saying (A - sI)(A - tI) = 0 given the above result?
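For concreteness, here's a quick numeric sanity check that the theorem itself holds (plain Python; the matrix A = [[2, 1], [0, 3]] is just an example I picked, not one from the book):

```python
# Sanity check (made-up example): pick a 2x2 matrix with distinct
# eigenvalues and verify that (A - sI)(A - tI) is the zero matrix.

def matmul(X, Y):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def shift(X, c):
    """X - c*I for a 2x2 matrix X."""
    return [[X[0][0] - c, X[0][1]],
            [X[1][0], X[1][1] - c]]

A = [[2, 1],
     [0, 3]]   # upper triangular, so its eigenvalues are 2 and 3
s, t = 2, 3

product = matmul(shift(A, s), shift(A, t))
print(product)   # [[0, 0], [0, 0]]
```

Of course a single example proves nothing; it just shows the claim isn't obviously broken, which is why I'm asking about the last step of the proof.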

Next question. =D

Primary Decomposition Theorem. Well, it's called that in the lecture notes, but when I look it up I find something I consider a little different. Anyway, it states:

The dimension of the generalised eigenspace for an eigenvalue s of an nxn matrix A, in the case where all eigenvalues of A are real, is the multiplicity of s. Moreover, R^n is the direct sum of the generalised eigenspaces. The proof that follows in my book confuses the hell out of me, so I was wondering if someone here could elaborate.

The way I read it is as follows, and bear in mind I'm quite sure this is wrong and that some of my preconceptions here are false, but I need you to correct those in the same fell swoop if possible :)

Anyway, I find a good way to think about this eigenstuff is that the values are a deformation factor (stretching or what have you) and the vectors are the directions in which this occurs. Crude, and bound to low-dimensional spaces, but useful, I find. So if we have an eigenvalue s for a matrix A (all eigenvalues real, etc.), we have a set of eigenvectors to go with it, which should be (I think) linearly independent and so span a space, which is the eigenspace of the value s. Given that all the vectors in question are linearly independent, the space they span has dimension equal to the number of LI vectors used to span it. Moreover, the eigenspaces for different eigenvalues intersect only in the zero vector, so their direct sum spans the whole space. Out of this comes the idea that if you have an eigenvalue occurring several times, you will have more vectors in its space and thus a higher-dimensional eigenspace.
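One thing that makes me doubt that picture is this little check (plain Python again; A = [[2, 1], [0, 2]] is another made-up example). Here the eigenvalue 2 has multiplicity 2 but, as far as I can tell, only one independent eigenvector, so it seems to be the *generalised* eigenspace, not the ordinary one, whose dimension matches the multiplicity:

```python
# Made-up example: A = [[2, 1], [0, 2]] has the single eigenvalue 2 with
# multiplicity 2, but only one independent eigenvector, (1, 0). So the
# ordinary eigenspace is 1-dimensional, while ker((A - 2I)^2) is all of R^2.

def matmul(X, Y):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

N = [[0, 1],
     [0, 0]]   # N = A - 2I; nonzero, so ker(N) is only 1-dimensional

N_squared = matmul(N, N)
print(N_squared)   # [[0, 0], [0, 0]]: every vector of R^2 is killed by N^2,
                   # i.e. every vector is a generalised eigenvector for 2
```

So if my reading above were right, where does the "extra" dimension come from when there aren't enough honest eigenvectors?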

Is that about right? I doubt it, but hey, that's why I'm here =D

Cheers

-Graeme

**Physics Forums - The Fusion of Science and Community**


# Some eigenstuff
