# Diagonalizability of a matrix

1. Oct 15, 2011

### Lily@pie

1. The problem statement, all variables and given/known data
A is a 2 by 2 real matrix which cannot be diagonalized. Prove that there is an invertible matrix P such that P⁻¹AP = [[Ω, 1], [0, Ω]].

2. The attempt at a solution
I didn't know how to do this so I tried the following.

Since we need to prove there is an invertible P such that P⁻¹AP = [[Ω, 1], [0, Ω]], this means we need to prove that A and L := [[Ω, 1], [0, Ω]] are similar matrices.

So we need to show that A and L have the same eigenvalues whenever A is not diagonalizable.

det(xI - L) = (x - Ω)²
so the only eigenvalue of L is Ω.

Since A is not diagonalizable, it will not have 2 distinct eigenvalues. This implies that (a+d)² - 4(ad - bc) = 0. Hence the characteristic polynomial of A is (x - (a+d)/2)², so the only eigenvalue of A is (a+d)/2.

Since Ω can be any number, the eigenvalues of A and L will be the same when A is not diagonalizable. This implies that A and L are similar matrices. (This is the main part that I am not sure about; it seems wrong.)

Therefore, there exists an invertible P such that P⁻¹AP = [[Ω, 1], [0, Ω]].
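As a concrete sanity check of the statement being proved (not of the argument above), here is a small pure-Python sketch; the matrix A, the vectors v and w, and the helpers matmul and inv2 are all illustrative choices, not part of the problem:

```python
# A = [[2, 1], [-1, 0]] has characteristic polynomial (x - 1)^2 but only
# one independent eigenvector, so it is not diagonalizable.  We check that
# a suitable P nevertheless brings it to the claimed form [[1, 1], [0, 1]].

def matmul(X, Y):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(X):
    """Invert a 2x2 matrix via the adjugate formula."""
    det = X[0][0] * X[1][1] - X[0][1] * X[1][0]
    return [[X[1][1] / det, -X[0][1] / det],
            [-X[1][0] / det, X[0][0] / det]]

A = [[2, 1], [-1, 0]]               # repeated eigenvalue omega = 1
v = [1, -1]                         # eigenvector: (A - I)v = 0
w = [1, 0]                          # generalized eigenvector: (A - I)w = v
P = [[v[0], w[0]], [v[1], w[1]]]    # columns of P are v and w

J = matmul(inv2(P), matmul(A, P))
print(J)                            # [[1.0, 1.0], [0.0, 1.0]] -- the claimed form
```

Here the repeated eigenvalue is Ω = 1, and the columns of P are an eigenvector v together with a vector w satisfying (A - I)w = v, which is exactly the construction discussed later in the thread.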

2. Oct 15, 2011

### vela

Staff Emeritus
3. Oct 15, 2011

### Lily@pie

But I wasn't taught the Jordan normal form, and my lecturer stated that it can be done without it. Will my proof be valid?

4. Oct 15, 2011

### vela

Staff Emeritus
I think your instructor wants you to go through the construction of the Jordan normal form matrix.

If you already knew about the Jordan normal form, the problem would be trivial. You would recognize that L is in that form, so there exists a matrix P, blah blah blah.

On the other hand, if you go through the logic of where these generalized eigenvectors come from, you can show they're independent and therefore form a basis for R². Then you can show, for example, what (A - ΩI) has to look like in that basis, and so on. You're not so much using the Jordan normal form; you're deriving it.

This way you'll understand why the matrix has the form it does, which is probably what your instructor wants you to learn, rather than just knowing how to write down the matrix without understanding where it came from.

EDIT: Of course, I could be wrong. Use your own judgment. :)

Last edited: Oct 15, 2011
5. Oct 15, 2011

### Lily@pie

I will look up the Jordan normal form. Just one question: does having the same eigenvalues imply that two matrices are similar? I know that similar matrices have the same eigenvalues.

6. Oct 15, 2011

### vela

Staff Emeritus
No, the matrices
$$\begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix} \quad\text{and}\quad \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$$
have the same eigenvalue, but obviously they can't be similar.
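This counterexample can be illustrated numerically, using the fact that rank is preserved under similarity; the helper functions below are ad-hoc sketches, not from any library:

```python
# Both matrices have 0 as their only eigenvalue, yet they cannot be
# similar: similar matrices have the same rank, and these ranks differ.

Z = [[0, 0], [0, 0]]   # zero matrix
N = [[0, 1], [0, 0]]   # nilpotent Jordan block

def char_poly_coeffs(X):
    """(trace, det) of X: the char. poly of a 2x2 is x^2 - tr(X) x + det(X)."""
    tr = X[0][0] + X[1][1]
    det = X[0][0] * X[1][1] - X[0][1] * X[1][0]
    return tr, det

def rank2(X):
    """Rank of a 2x2 matrix: 0, 1, or 2."""
    det = X[0][0] * X[1][1] - X[0][1] * X[1][0]
    if det != 0:
        return 2
    return 1 if any(X[i][j] != 0 for i in range(2) for j in range(2)) else 0

print(char_poly_coeffs(Z), char_poly_coeffs(N))  # equal, so same eigenvalues
print(rank2(Z), rank2(N))                        # 0 vs 1, so not similar
```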

7. Oct 16, 2011

### Lily@pie

Oh... ok...
I still don't really understand the Jordan normal form... urgh!!!

By the way, is there any characterization we can use to prove that two matrices are similar?

8. Oct 16, 2011

### vela

Staff Emeritus
The only way I know of is to find the matrix P.

9. Oct 16, 2011

### I like Serena

Aren't 2 matrices similar if they have the same Jordan normal form (up to some ordering of the blocks)?
That is, they are similar if they have the same eigenvalues combined with the same dimensions of the corresponding eigenspaces?

Btw, is Ω supposed to be a real number?
Because if so, I don't think it's always possible.
What if there are no real eigenvalues?
Like in:
$$A=\begin{pmatrix}0 & -1 \\ 1 & 0 \end{pmatrix}$$
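A short check that this matrix has no real eigenvalues (the quadratic-formula computation below is just an illustration of the point, assuming Ω would have to be real):

```python
# The rotation matrix A = [[0, -1], [1, 0]] has characteristic polynomial
# x^2 + 1, whose roots are purely imaginary.

import cmath

A = [[0, -1], [1, 0]]
tr = A[0][0] + A[1][1]                        # trace = 0
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]   # determinant = 1
# Roots of x^2 - tr*x + det via the quadratic formula.
disc = cmath.sqrt(tr * tr - 4 * det)
roots = ((tr + disc) / 2, (tr - disc) / 2)
print(roots)   # (1j, -1j): no real Omega can appear on the diagonal
```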

10. Oct 16, 2011

### Lily@pie

Ω isn't specified to be anything... I presume it can be complex or real...

So will I be able to say that since both of them have the same eigenvalues and the same dimensions of the corresponding eigenspaces, they are similar?

11. Oct 16, 2011

### I like Serena

I believe you can indeed say that, but... you would still need to prove it...
I dare say it because of the theorem on the Jordan normal form, but if you're not supposed to use that...

12. Oct 16, 2011

### Lily@pie

Oh my... how am I supposed to prove this? I've read so much on the Jordan normal form. All I know is that a non-diagonalizable matrix can be written in Jordan normal form. And...

Any hints??

13. Oct 16, 2011

### I like Serena

I'm a bit fuzzy on what you can and cannot use.
Which theorems do you have available?

For starters, let me review your step:
"Since A is not diagonalizable, it will not have 2 distinct eigenvalues."

It is true, but how do you know this?
Which theorem are you using here?
Otherwise you should still prove it.

14. Oct 16, 2011

### Lily@pie

Because if A is not diagonalizable, this means it has at most 1 linearly independent eigenvector. This makes 2 distinct eigenvalues impossible, since 2 distinct eigenvalues would give 2 independent eigenvectors...

That's what I used to deduce that...

15. Oct 16, 2011

### I like Serena

Apparently you are using a theorem on the diagonalizability of a matrix...

Let me rephrase.

If a 2x2 matrix A has 2 distinct eigenvalues a and b, it also has 2 independent corresponding eigenvectors v and w.
(Why are they independent?)

So Av = av and Aw = bw.
This means that $A (\boldsymbol v ~ \boldsymbol w) = (a\boldsymbol v ~ b\boldsymbol w) = (\boldsymbol v ~ \boldsymbol w) \begin{pmatrix}a & 0 \\ 0 & b \end{pmatrix}$.

There!
With P=(v w) we have the (sub)proof that 2 distinct eigenvalues imply diagonalizability.
(Why is P invertible? And why does this imply similarity?)
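This subproof can be sketched numerically; the matrix A and the eigenvectors v, w below are illustrative choices with two distinct eigenvalues, and matmul/inv2 are ad-hoc helpers:

```python
# With two distinct eigenvalues a and b, the matrix P = (v w) built from
# the corresponding eigenvectors diagonalizes A: P^{-1} A P = diag(a, b).

def matmul(X, Y):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(X):
    """Invert a 2x2 matrix via the adjugate formula."""
    det = X[0][0] * X[1][1] - X[0][1] * X[1][0]
    return [[X[1][1] / det, -X[0][1] / det],
            [-X[1][0] / det, X[0][0] / det]]

A = [[2, 1], [0, 3]]                # eigenvalues a = 2 and b = 3
v = [1, 0]                          # eigenvector for a = 2
w = [1, 1]                          # eigenvector for b = 3
P = [[v[0], w[0]], [v[1], w[1]]]    # P = (v w); invertible since v, w are independent

D = matmul(inv2(P), matmul(A, P))
print(D)                            # [[2.0, 0.0], [0.0, 3.0]] = diag(a, b)
```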

TBH, I haven't worked your problem out (completely) myself yet.
But I think your proof should be something similar to what we did just now.

Last edited: Oct 16, 2011
16. Oct 16, 2011

### Lily@pie

So we have therefore proven that a matrix that is not diagonalizable cannot have 2 distinct eigenvalues.

P will be invertible because v and w are linearly independent. So this implies similarity between A and [[a, 0], [0, b]], since P⁻¹AP = [[a, 0], [0, b]].

What if we now change [[a, 0], [0, b]] to [[Ω, 1], [0, Ω]]? Does this tell us anything...
So A[v w] = [v w][[Ω, 1], [0, Ω]]
Av = Ωv, which implies that v is an eigenvector of A with eigenvalue Ω.
Aw = v + Ωw, which... erm, I don't really know

17. Oct 16, 2011

### I like Serena

Yep.
So if we can prove that we can always find a w, linearly independent of v, such that Aw=v+Ωw, we're basically done.

Btw, we can already say that for any vector w independent of v, (A-ΩI)w≠0.
(Why?)

18. Oct 16, 2011

### Lily@pie

Because v is an eigenvector, which is by definition a non-zero vector. So (A-ΩI)w≠0, since (A-ΩI)w=v.

But how do we show that v and w with (A-ΩI)w=v are always linearly independent?

19. Oct 16, 2011

### I like Serena

I just went over the wiki article on Jordan normal forms again:
http://en.wikipedia.org/wiki/Jordan_normal_form

It gives a method to find w (a generalized eigenvector):

From
$$A\boldsymbol w = \boldsymbol v + \Omega \boldsymbol w$$
we get:
$$(A - \Omega I)\boldsymbol w = \boldsymbol v$$
$$(A - \Omega I)^2 \boldsymbol w = (A - \Omega I)\boldsymbol v = 0$$

So w is a vector in the kernel of (A-ΩI)².

The article also gives a proof why this always works.
Perhaps it can be simplified for a 2x2 matrix.
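For a 2x2 matrix the recipe can be checked directly; the example below reuses an illustrative non-diagonalizable matrix with repeated eigenvalue Ω = 1 (A, w, and the helper matvec are my own choices, not from the article):

```python
# For a non-diagonalizable A with repeated eigenvalue omega, pick any w
# outside Ker(A - omega*I) and set v = (A - omega*I)w.  Then v is an
# eigenvector, Aw = v + omega*w, and (A - omega*I)^2 w = 0.

def matvec(X, u):
    """Apply a 2x2 matrix to a 2-vector."""
    return [X[0][0] * u[0] + X[0][1] * u[1],
            X[1][0] * u[0] + X[1][1] * u[1]]

A = [[2, 1], [-1, 0]]      # char. poly (x - 1)^2, not diagonalizable
omega = 1
AmI = [[A[0][0] - omega, A[0][1]],
       [A[1][0], A[1][1] - omega]]   # A - omega*I

w = [1, 0]                 # any vector not in Ker(A - I)
v = matvec(AmI, w)         # v = (A - I)w is an eigenvector
print(v)                   # [1, -1]
print(matvec(AmI, v))      # [0, 0]: so (A - I)^2 w = 0
print(matvec(A, w) == [v[0] + omega * w[0],
                       v[1] + omega * w[1]])   # True: Aw = v + omega*w
```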

20. Oct 16, 2011

### Lily@pie

I could understand up to the fact that v is in the intersection of Range(A-ΩI) and Ker(A-ΩI), and that w is a vector in the kernel of (A-ΩI)².

How does this relate to w?? Hmm... since w is not in Ker(A-ΩI), v ≠ w...