Understanding an Unnamed Theorem and the Primary Decomposition Theorem

  • Context: Graduate
  • Thread starter: FunkyDwarf
  • Tags: Decomposition Theorem
SUMMARY

The discussion centers on the proof of an unnamed theorem regarding distinct eigenvalues of a 2x2 matrix A, specifically that if s and t are distinct eigenvalues, then (A-sI)(A-tI) = 0. The participants clarify that if A has two linearly independent eigenvectors corresponding to s and t, any vector in R² can be expressed as a linear combination of these eigenvectors, leading to the conclusion that the linear transformation represented by (A-sI)(A-tI) must be the zero map. Additionally, the Primary Decomposition Theorem is discussed, asserting that the dimension of the generalized eigenspace for an eigenvalue equals its multiplicity, and that R^n is the direct sum of these generalized eigenspaces.

PREREQUISITES
  • Understanding of eigenvalues and eigenvectors in linear algebra
  • Familiarity with matrix operations and linear transformations
  • Knowledge of the concept of linear independence
  • Basic grasp of the Primary Decomposition Theorem
NEXT STEPS
  • Study the proof of the Primary Decomposition Theorem in detail
  • Explore the implications of nilpotent matrices in linear algebra
  • Learn about the relationship between eigenvalues and the characteristic polynomial
  • Investigate the concept of generalized eigenspaces and their applications
USEFUL FOR

Students and professionals in mathematics, particularly those focusing on linear algebra, eigenvalue problems, and matrix theory. This discussion is beneficial for anyone seeking to deepen their understanding of eigenvalue decomposition and its implications in higher-dimensional spaces.

FunkyDwarf
Hey guys,

Couple of questions here I'd like some help with. Firstly, an unnamed theorem in my textbook that says: suppose s and t are distinct eigenvalues of a 2x2 matrix A; then (A-sI)(A-tI) = 0.

The proof goes something like this. We know there is an eigenvector u for the value s such that Au = su (can we assume that? I assume so :P), which means (A-sI)u = 0, which is fine, I can accept that. We can also find a vector v corresponding to the value t such that the same thing applies, and we can show that u and v are linearly independent in a number of ways, but let's assume they are for now. We can also show that (A-sI) and (A-tI) commute.

Now here's the part I am not sure about. Let's take a vector w = au + bv; then
(A-sI)(A-tI)(au+bv) = (A-sI)(A-tI)au + (A-sI)(A-tI)bv
                    = (A-tI)(A-sI)au + (A-sI)(A-tI)bv
                    = (A-tI)0 + (A-sI)0
                    = 0
(using (A-sI)u = 0 and (A-tI)v = 0).
Now this is all cool, I get that, but I don't see how we can deduce from this that (A-sI)(A-tI) = 0, given that they are matrices. I mean, for real numbers, if we have ab = 0 and b isn't zero, then a must be, right? But for matrices that doesn't have to be so (nonzero matrices can multiply to zero, as with nilpotent matrices), so what am I missing here? What reason do we have for saying (A-sI)(A-tI) = 0 given the above result?
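As a quick sanity check of the theorem itself (a sketch, not from the textbook; the matrix below is just an illustrative choice with two distinct eigenvalues), the product can be verified numerically:

```python
# Sanity check (illustrative matrix): a 2x2 matrix with distinct
# eigenvalues s, t should satisfy (A - sI)(A - tI) = 0.
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])          # upper triangular: eigenvalues 2 and 3
s, t = np.linalg.eigvals(A)
I = np.eye(2)

M = (A - s * I) @ (A - t * I)
print(M)                            # zero matrix (up to floating-point error)
assert np.allclose(M, 0)
```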

Next question =D
Primary Decomposition Theorem. Well, it's called that in the lecture notes, but when I look it up I find something I consider a little different, but anyway. It states:

The dimension of the generalised eigenspace for an eigenvalue s, in the case where all eigenvalues of an nxn matrix A are real, is the multiplicity of the eigenvalue s. Moreover, R^n is the direct sum of the generalised eigenspaces. The proof that follows in my book confuses the hell out of me, so I was wondering if someone here could elaborate.

The way I read it is as follows, and bear in mind I am quite sure this is wrong and that some of my preconceptions here are false, but I need you to correct those in the same fell swoop if possible :)

Anyway, I find a good way to think about this eigenstuff is that the values are a deformation factor, stretching or what have you, and the vectors are the direction in which this occurs. Crude, bound to low-dimensional spaces, but useful, I find. So if we have an eigenvalue s for a matrix A (assumed all real etc.), we have a set of eigenvectors to go with it which should be (I think) linearly independent, and so they span a space, which is the eigenspace of the value s. Given that all the vectors in question are LI, the space they span has dimension equal to the number of LI vectors used to span it. Moreover, the eigenspaces for different eigenvalues intersect only in the zero vector, so their sum is direct and spans the whole space. Out of this comes the idea that if you have an eigenvalue occurring several times, you will have more vectors in that space and thus a higher-dimensional eigenspace.
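One place this intuition needs care is a repeated eigenvalue: the plain eigenspace can be smaller than the multiplicity, which is exactly why the theorem is stated for generalised eigenspaces (null spaces of powers of A - sI). A minimal numpy/scipy sketch with an illustrative defective matrix:

```python
# Illustrative defective matrix: eigenvalue 2 has multiplicity 2, but only
# one independent eigenvector.  The generalised eigenspace ker((A - 2I)^2)
# still has dimension 2, matching the multiplicity as the theorem claims.
import numpy as np
from scipy.linalg import null_space

A = np.array([[2.0, 1.0],
              [0.0, 2.0]])
N = A - 2 * np.eye(2)

eigenspace = null_space(N)          # ker(A - 2I)
generalised = null_space(N @ N)     # ker((A - 2I)^2)

print(eigenspace.shape[1])          # 1: eigenspace is one-dimensional
print(generalised.shape[1])         # 2: generalised eigenspace fills R^2
```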

Is that about right? I doubt it, but hey, that's why I am here =D

Cheers
-Graeme
 
A 2x2 matrix is a linear map from R^2 to R^2. So if you found two distinct linearly independent eigenvectors u and v for A (why do eigenvectors exist? read the definition of an eigenvalue), that means they form a basis for R^2. So any vector in R^2 is of the form au+bv. And consequently, because (A-sI)(A-tI) sends all these vectors to zero, that means it sends all of R^2 to zero, and so must be the zero map. (In general, if you have a basis {v_1, ..., v_n} and a linear map T such that Tv_i = 0 for all i, then T=0, because a linear map is completely determined by its action on the basis.)
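To make that last step concrete (a sketch along the lines of the reply above, with an illustrative matrix): stack the eigenvectors into an invertible matrix P; then M = (A-sI)(A-tI) satisfies MP = 0, and multiplying on the right by the inverse of P forces M = 0.

```python
# Concrete version of the basis argument (illustrative matrix): M kills the
# eigenvector basis, so M @ P = 0 for the invertible matrix P whose columns
# are the eigenvectors, and hence M = (M @ P) @ inv(P) = 0.
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
eigvals, P = np.linalg.eig(A)       # columns of P are the eigenvectors u, v
s, t = eigvals
I = np.eye(2)

M = (A - s * I) @ (A - t * I)
print(M @ P)                        # zero: M annihilates the whole basis
print(M @ P @ np.linalg.inv(P))     # recovers M, which is the zero matrix
assert np.allclose(M, 0)
```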

And what kind of elaboration are you looking for with respect to the primary decomposition theorem?
 
Hmmm, OK, I accept that it's the zero map, but writing it as that quantity equals zero still seems a bit strange to me.

For the second part I just wanted to know if what I was saying was correct, basically.

Thanks
-G
 
The proof is trivial: if s is an eigenvalue there is a vector v with Tv = sv, and the same equation holds for any multiple of v.

Hence if t is a different eigenvalue there is a vector w such that Tw = tw and w is not a multiple of v, so v, w is a basis of the space. Hence (T-sI)(T-tI)(av+bw) = 0.
 
I understand how we get to (T-sI)(T-tI)(av+bw) = 0. What I don't get is how we can deduce from this that (T-sI)(T-tI) = 0. I mean, morphism sort of convinced me, but I will need to ponder it a bit more :)
 
What can you say about a matrix A if Av = 0 for every vector v (in a space of dimension > 1, so that v is not always a multiple of the eigenvector for zero :))?
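One concrete way to see where this hint leads (a sketch with illustrative values): applying A to the i-th standard basis vector reads off the i-th column of A, so if Av = 0 for every v, then every column of A is zero.

```python
# Applying A to the i-th standard basis vector e_i returns the i-th column
# of A.  So A v = 0 for all v forces every column, and hence A itself, to
# be zero.  Illustrative values below.
import numpy as np

A = np.array([[4.0, 7.0],
              [1.0, 3.0]])
e0 = np.array([1.0, 0.0])
e1 = np.array([0.0, 1.0])

print(A @ e0)                       # [4. 1.]: the first column of A
print(A @ e1)                       # [7. 3.]: the second column of A
```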
 
I think he means something like: for matrices A, B we can have

A.B = 0

but the problem is that for, say,

A = [1 1]    B = [-10 0]
    [0 0]        [ 10 0]

we get AB = 0, but obviously A = 0 or B = 0 is not necessarily true. Something like that.
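Restating that counterexample in code (same matrices as above) confirms the point:

```python
# The counterexample in code: AB = 0 although neither A nor B is zero.
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 0.0]])
B = np.array([[-10.0, 0.0],
              [ 10.0, 0.0]])

print(A @ B)                        # zero matrix
assert np.allclose(A @ B, 0)
assert not np.allclose(A, 0) and not np.allclose(B, 0)
```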
 
This is just repetition, but there is nothing else to say.

1) A has two eigenvectors with two different eigenvalues.

2) Therefore they are linearly independent.

3) Therefore they span, and any vector can be written as a linear combination of them.

4) (A-sI)(A-tI) annihilates any linear combination of the eigenvectors.

5) (A-sI)(A-tI) must therefore send every vector to zero.

6) Therefore (A-sI)(A-tI) is the 0 matrix.
 
