# Linear Algebra - Diagonalisation

1. Oct 6, 2008

### kehler

1. The problem statement, all variables and given/known data
a) Consider a real matrix, A =
[a b]
[c d]
Give simple algebraic criteria in terms of a,b,c,d for when the following are true:
i)there exists a real matrix P that diagonalises A,
ii)there exists a complex matrix P that diagonalises A, but no real matrix P that does

b) Let A be a 2 x 2 matrix which cannot be diagonalized by any matrix P (real or not). Prove there is an invertible real 2 x 2 matrix P such that
P^(-1)AP =
[ lambda 1 ]
[ 0 lambda]

3. The attempt at a solution
Here's what I have so far (the 'x's are meant to be lambdas):

a) To find the eigenvalues, let det(A - xI) = 0
So, (a-x)(d-x) - bc = 0
x^2 - (a+d)x + ad - bc = 0
x = ((a+d) +- sqrt(a^2 + d^2 - 2ad + 4bc)) / 2

i) When there are two real and linearly independent eigenvectors, A is diagonalisable by a real matrix P. This occurs when A has two distinct real eigenvalues.
So a^2 + d^2 - 2ad + 4bc > 0

ii) When there are two complex and linearly independent eigenvectors, A is diagonalisable by a complex matrix P. This occurs when A has two distinct complex eigenvalues.
So a^2 + d^2 - 2ad + 4bc < 0

When a^2 + d^2 - 2ad + 4bc = 0, A has one eigenvalue ((a+d)/2) of multiplicity 2. Row reducing (A - xI) for this eigenvalue produces at most one free variable, and so there will not be two linearly independent eigenvectors. Thus, A is not diagonalisable when a^2 + d^2 - 2ad + 4bc = 0.
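[Editor's note: the discriminant criterion above can be sanity-checked numerically. This is a hypothetical sketch assuming NumPy; the matrices A and B below are illustrative examples, not from the thread.]

```python
import numpy as np

def disc(a, b, c, d):
    # discriminant of the characteristic polynomial:
    # (a+d)^2 - 4(ad - bc) = a^2 + d^2 - 2ad + 4bc
    return (a - d)**2 + 4*b*c

A = np.array([[1.0, 2.0], [3.0, 4.0]])   # disc = 33 > 0: two distinct real eigenvalues
B = np.array([[0.0, -1.0], [1.0, 0.0]])  # disc = -4 < 0: complex conjugate eigenvalues

print(disc(1, 2, 3, 4))        # 33
print(np.linalg.eigvals(A))    # two distinct real values
print(disc(0, -1, 1, 0))       # -4
print(np.linalg.eigvals(B))    # complex conjugate pair
```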

Is this correct?? :S I don't quite know what the question means by 'simple algebraic criteria'..

b) I have no clue how to do this part. If A can't be diagonalised, it will not have two linearly independent eigenvectors. What then will the columns of P be? :S
How should I go about proving this?

Any help would be much appreciated :)

2. Oct 8, 2008

Anyone? :(

3. Oct 8, 2008

### Dick

That seems pretty ok. Except that just because the eigenvalue equation has a double root, that doesn't mean it can't be diagonalized. [[2,0],[0,2]] has a double eigenvalue but it can be pretty easily diagonalized. For b), what P really represents is a choice of basis. Choose a special basis {b1,b2} where b1 is an eigenvalue with eigenvector lambda and b2 is orthogonal to b1.

4. Oct 8, 2008

### kehler

Thanks Dick :).
For (a), is it correct to say that if the eigenvalue equation has a double root, the matrix can be diagonalised only when a=d, b=0, and c=0? I've tried putting in numbers and those seem like the conditions the matrix has to fulfil to be diagonalised when it has a single eigenvalue.

5. Oct 8, 2008

### Dick

It sure is. If you have a diagonal matrix with constant entries (which your diagonalized matrix must be), then it's a multiple of the identity. It's the same diagonal matrix in ALL bases.
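[Editor's note: the two cases above can be sketched numerically, assuming NumPy. The eigenspace dimension distinguishes a diagonalisable repeated eigenvalue (multiple of the identity) from a non-diagonalisable one.]

```python
import numpy as np

def eigenspace_dim(M, lam, tol=1e-9):
    # dimension of the eigenspace for lam = 2 - rank(M - lam*I)
    return 2 - np.linalg.matrix_rank(M - lam * np.eye(2), tol=tol)

I2 = np.array([[2.0, 0.0], [0.0, 2.0]])  # double eigenvalue 2, a multiple of I
J  = np.array([[2.0, 1.0], [0.0, 2.0]])  # double eigenvalue 2, NOT diagonalisable

print(eigenspace_dim(I2, 2.0))  # 2 -> two independent eigenvectors
print(eigenspace_dim(J, 2.0))   # 1 -> only one independent eigenvector
```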

6. Oct 9, 2008

### kehler

Cool thanks :)
I don't quite get what you mean by this. Do you mean b1 =
[lambda]
[ 0 ] ?
Or did you mean to say that b1 is an eigenvector with eigenvalue lambda? But this doesn't seem to work because the eigenvector would be
[0]
[0]
which would mean that b2 could be made up of any two numbers :S

Can you give me a pointer on how I can start the proof? Do I just find P and expand P^(-1)AP?

7. Oct 9, 2008

### Dick

I meant eigenvector with eigenvalue lambda, sure. A(b1)=lambda*b1. Let's take {b1,b2} to be orthonormal as well. Then in that basis the matrix A_{ij}=(bi).A(bj). Does that help to start?

8. Oct 9, 2008

### kehler

Oh ok, thanks :). I'll work on that and see if I get anywhere.

9. Oct 9, 2008

### Dick

You shouldn't have to work that hard. Tell me what A_{11} and A_{21} are.

10. Oct 9, 2008

### kehler

Hmm, do I just put the values in? In that case,
A_{11} is b1.lambda(b1)
A_{21} is b2.lambda(b1) = 0?? because b2 and b1 are orthogonal
A_{12} is b1.lambda(b2) = 0 for the same reason
A_{22} is b2.lambda(b1)

But I should be getting
[ lambda 1 ]
[ 0 lambda]
shouldn't I? :S

11. Oct 9, 2008

### Dick

Not yet. We picked b1 and b2 to be orthonormal. So A_11=lambda*b1.b1=lambda. A_21 is lambda*b1.b2=0. A_12 is b1.A(b2) and A_22 is b2.A(b2). We can't say much about them (since b2 isn't an eigenvector). But at least you have two elements of your matrix right. But we also know A should have a double eigenvalue of lambda. You can tell me what A_22 is based on that information, right?
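[Editor's note: a numeric sketch of the basis argument above, assuming NumPy. Take a non-diagonalisable A with double eigenvalue lam, let b1 be a unit eigenvector and b2 a unit vector orthogonal to it; with Q = [b1 b2], the entries of Q^T A Q are exactly A_{ij} = bi.A(bj). The particular matrix A here is an illustrative choice, not from the thread.]

```python
import numpy as np

A = np.array([[3.0, 1.0], [-1.0, 1.0]])  # char poly (x-2)^2, not diagonalisable
lam = 2.0

b1 = np.array([1.0, -1.0]) / np.sqrt(2)  # unit eigenvector: A b1 = 2 b1
b2 = np.array([1.0, 1.0]) / np.sqrt(2)   # unit vector orthogonal to b1
Q = np.column_stack([b1, b2])            # orthonormal change-of-basis matrix

M = Q.T @ A @ Q  # entries M[i,j] = bi . A(bj)
print(np.round(M, 10))  # upper triangular: [[lam, a], [0, lam]] with a != 0
```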

12. Oct 9, 2008

### kehler

So A_22 is lambda?
Are we trying to find b2 from this? :S

13. Oct 9, 2008

### Dick

Ja, A_22 must be lambda. Now we can forget about the basis. So the matrix looks like A=[[lambda,a],[0,lambda]] with a not equal to zero, correct? It's now pretty easy to find a matrix P so that P^(-1)AP=[[lambda,1],[0,lambda]]. At least that's the way I did it.
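[Editor's note: the final conjugation step above checks out numerically. A quick sketch assuming NumPy, with illustrative values lam = 2, a = 5.]

```python
import numpy as np

lam, a = 2.0, 5.0
A = np.array([[lam, a], [0.0, lam]])        # the form found in the special basis
P = np.array([[1.0, 0.0], [0.0, 1.0 / a]])  # rescale the second basis vector

result = np.linalg.inv(P) @ A @ P
print(result)  # [[2. 1.]
               #  [0. 2.]]
```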

14. Oct 9, 2008

### kehler

Got to run to school now. Thanks for your help so far :)

15. Oct 10, 2008

### kehler

Hey Dick,
I still have a couple of questions about the proof... Hope you don't mind :)

1. How did you know it had to be a basis of the eigenvector and a vector orthogonal to it?
2. How did you get this formula: A_{ij} = (bi).A(bj)?
The only formula I know to change bases is P^(-1)AP :S
3. Why's lambda*b1.b1=lambda?
4. And how is it easy to find P from here???

16. Oct 10, 2008

### Dick

2 first. A_{ij} is the ith component of A acting on the jth basis vector. The column vectors of A are the images of the basis vectors. So A_{ij} = bi.A(bj) if the basis is orthonormal. I picked the vectors the way I did because I wanted to get (lambda, 0) in the first column. I picked b1 and b2 to be orthonormal, so b1.b2 = 0 and b1.b1 = b2.b2 = 1. Do you mean the matrix P that will change [[lambda,a],[0,lambda]] into [[lambda,1],[0,lambda]]? It's just [[1,0],[0,1/a]]; it's pretty easy to see that.

17. Oct 10, 2008

### kehler

Wait... So do you mean P^(-1)A results in [[lambda,a],[0,lambda]] and then multiplying this by P gives us [[lambda,1],[0,lambda]]??

18. Oct 10, 2008

### Dick

I mean if A=[[lambda,a],[0,lambda]] and P=[[1,0],[0,1/a]] then P^(-1)AP=[[lambda,1],[0,lambda]].

19. Oct 10, 2008

### kehler

Ohh ok :). So what you did was change the basis of A to get it into that form? Sorry, I kinda still fail to see the big picture here...

20. Oct 10, 2008

### Dick

I picked a special basis of A and argued what the matrix must look like in that basis. Then I did a final P thing to change the a to 1.