
Linear Algebra - Diagonalisation

  1. Oct 6, 2008 #1
    1. The problem statement, all variables and given/known data
    a) Consider a real matrix, A =
    [a b]
    [c d]
    Give simple algebraic criteria in terms of a,b,c,d for when the following are true:
    i) there exists a real matrix P that diagonalises A,
    ii) there exists a complex matrix P that diagonalises A, but no real matrix P that does

    b) Let A be a 2 x 2 matrix which cannot be diagonalised by any matrix P (real or not). Prove there is an invertible real 2 x 2 matrix P such that
    P^(-1)AP =
    [ lambda 1 ]
    [ 0 lambda]


    3. The attempt at a solution
    Here's what I have so far (the 'x's are meant to be lambdas):

    a)To find the eigenvalues, let det(A-xI) = 0
    So, (a - x)(d - x) - bc = 0
    x^2 - (a + d)x + (ad - bc) = 0
    Using the quadratic formula,
    x = ( (a + d) +- sqrt(a^2 + d^2 - 2ad + 4bc) ) / 2

    i) When there are two real and linearly independent eigenvectors, A is diagonalisable by a real matrix P. This occurs when A has two distinct real eigenvalues.
    So a^2 + d^2 - 2ad + 4bc > 0, i.e. (a - d)^2 + 4bc > 0

    ii) When there are two complex and linearly independent eigenvectors, A is diagonalisable by a complex matrix P. This occurs when A has two distinct complex eigenvalues.
    So a^2 + d^2 - 2ad + 4bc < 0, i.e. (a - d)^2 + 4bc < 0

    When a^2 + d^2 - 2ad + 4bc = 0, A has one eigenvalue, (a + d)/2, of multiplicity 2. Row reducing (A - xI) for this eigenvalue produces at most one free variable, and so there will not be two linearly independent eigenvectors. Thus, A is not diagonalisable when a^2 + d^2 - 2ad + 4bc = 0.

    Is this correct?? :S I don't quite know what the question means by 'simple algebraic criteria'..
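    (Editor's aside: the discriminant criterion above can be sanity-checked numerically with numpy; the matrices below are arbitrary illustrations, not part of the assignment.)

    ```python
    import numpy as np

    def discriminant(a, b, c, d):
        # Discriminant of the characteristic polynomial of [[a, b], [c, d]]:
        # a^2 + d^2 - 2ad + 4bc = (a - d)^2 + 4bc
        return (a - d) ** 2 + 4 * b * c

    # Positive discriminant: two distinct real eigenvalues, so a real P works
    A1 = np.array([[1.0, 2.0], [3.0, 4.0]])
    print(discriminant(1, 2, 3, 4), np.linalg.eigvals(A1))   # 33, real eigenvalues

    # Negative discriminant: complex conjugate eigenvalues, so only a complex P works
    A2 = np.array([[0.0, -1.0], [1.0, 0.0]])
    print(discriminant(0, -1, 1, 0), np.linalg.eigvals(A2))  # -4, eigenvalues +-i
    ```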

    b) I have no clue how to do this part. If A can't be diagonalised, it will not have two linearly independent eigenvectors. What then will the columns of P be? :S
    How should I go about proving this?

    Any help would be much appreciated :)
     
  3. Oct 8, 2008 #2
    Anyone? :(
     
  4. Oct 8, 2008 #3

    Dick

    Science Advisor
    Homework Helper

    That seems pretty OK. Except that just because the eigenvalue equation has a double root, that doesn't mean A can't be diagonalised: [[2,0],[0,2]] has a double eigenvalue but it can be pretty easily diagonalised. For b), what P really represents is a choice of basis. Choose a special basis {b1,b2} where b1 is an eigenvalue with eigenvector lambda and b2 is orthogonal to b1.
     
  5. Oct 8, 2008 #4
    Thanks Dick :).
    For (a), is it correct to say that if the eigenvalue equation has a double root, A can be diagonalised only when a = d, b = 0, and c = 0? I've tried putting in numbers, and those seem to be the conditions the matrix has to fulfil to be diagonalised when it has a single eigenvalue.
     
  6. Oct 8, 2008 #5

    Dick


    It sure is. If you have a diagonal matrix with constant entries (which your diagonalized matrix must be), then it's a multiple of the identity. It's the same diagonal matrix in ALL bases.
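    (Editor's aside: this dichotomy can be checked numerically by comparing the eigenspace dimension of a multiple of the identity with that of a defective matrix; the examples below are illustrative.)

    ```python
    import numpy as np

    lam = 2.0
    A_identity_multiple = np.array([[2.0, 0.0], [0.0, 2.0]])  # a = d, b = c = 0
    A_defective = np.array([[2.0, 1.0], [0.0, 2.0]])          # double root, but b != 0

    def eigenspace_dim(A, lam):
        # Dimension of the eigenspace for lam = 2 - rank(A - lam*I)
        return 2 - np.linalg.matrix_rank(A - lam * np.eye(2))

    print(eigenspace_dim(A_identity_multiple, lam))  # 2 -> diagonalisable
    print(eigenspace_dim(A_defective, lam))          # 1 -> not diagonalisable
    ```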
     
  7. Oct 9, 2008 #6
    Cool thanks :)
    I don't quite get what you mean by this. Do you mean b1=
    [lambda]
    [....0 ...] ?
    Or did you mean to say that b1 is an eigenvector with eigenvalue lambda? But this doesn't seem to work cos the eigenvector would be
    [0]
    [0]
    which would mean that b2 could be made up of any two numbers :S

    Can you give me a pointer on how I can start the proof? Do I just find P and expand P^(-1)AP?
     
    Last edited: Oct 9, 2008
  8. Oct 9, 2008 #7

    Dick


    I meant eigenvector with eigenvalue lambda, sure. A(b1)=lambda*b1. Let's take {b1,b2} to be orthonormal as well. Then in that basis the matrix A_{ij}=(bi).A(bj). Does that help to start?
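    (Editor's aside: the construction in this post can be sketched numerically. The matrix below is an arbitrary defective example with double eigenvalue lam = 2; b1 is its unit eigenvector and b2 a unit vector orthogonal to it.)

    ```python
    import numpy as np

    # A defective matrix with double eigenvalue lam = 2 (values chosen for illustration)
    A = np.array([[3.0, 1.0], [-1.0, 1.0]])

    b1 = np.array([1.0, -1.0]) / np.sqrt(2)  # unit eigenvector: A @ b1 == 2 * b1
    b2 = np.array([1.0, 1.0]) / np.sqrt(2)   # unit vector orthogonal to b1

    Q = np.column_stack([b1, b2])  # orthonormal columns, so Q^(-1) = Q^T
    B = Q.T @ A @ Q                # B[i, j] = bi . A(bj)
    print(B)                       # first column comes out as (lam, 0)
    ```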
     
    Last edited: Oct 9, 2008
  9. Oct 9, 2008 #8
    Oh ok, thanks :). I'll work on that and see if I get anywhere.
     
  10. Oct 9, 2008 #9

    Dick


    You shouldn't have to work that hard. Tell me what A_{11} and A_{21} are.
     
  11. Oct 9, 2008 #10
    Hmm, do I just put the values in? In that case,
    A_{11} is b1.lambda(b1)
    A_{21} is b2.lambda(b1) = 0?? because b2 and b1 are orthogonal
    A_{12} is b1.lambda(b2) = 0 for the same reason
    A_{22} is b2.lambda(b1)

    But I should be getting
    [ lambda 1 ]
    [ 0 lambda]
    shouldn't I? :S
     
  12. Oct 9, 2008 #11

    Dick


    Not yet. We picked b1 and b2 to be orthonormal. So A_11=lambda*b1.b1=lambda. A_21 is lambda*b1.b2=0. A_12 is b1.A(b2) and A_22 is b2.A(b2). We can't say much about them (since b2 isn't an eigenvector). But at least you have two elements of your matrix right. But we also know A should have a double eigenvalue of lambda. You can tell me what A_22 is based on that information, right?
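    (Editor's aside: one way to see why A_22 must come out as lambda, using the basis-independence of the trace rather than the eigenvalue equation directly: the trace equals the sum of the eigenvalues and is unchanged under P^(-1)AP, so A_11 + A_22 = 2*lambda, and since A_11 = lambda we get A_22 = lambda. A quick illustration with arbitrary values:)

    ```python
    import numpy as np

    lam = 2.0
    B = np.array([[lam, 7.0], [0.0, lam]])  # matrix in the special basis; 7.0 arbitrary
    P = np.array([[1.0, 3.0], [2.0, 1.0]])  # arbitrary invertible change of basis

    C = np.linalg.inv(P) @ B @ P
    print(np.trace(B), np.trace(C))  # both 4.0 = 2*lam: trace is basis-independent
    ```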
     
  13. Oct 9, 2008 #12
    So A_22 is lambda?
    Are we trying to find b2 from this? :S
     
  14. Oct 9, 2008 #13

    Dick


    Ja, A_22 must be lambda. Now we can forget about the basis. So the matrix looks like A=[[lambda,a],[0,lambda]] with a not equal to zero, correct? It's now pretty easy to find a matrix P so that P^(-1)AP=[[lambda,1],[0,lambda]]. At least that's the way I did it.
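    (Editor's aside: the final rescaling step is easy to verify numerically; lam and a below are arbitrary illustrative values.)

    ```python
    import numpy as np

    lam, a = 2.0, 5.0  # illustrative values; any a != 0 works
    A = np.array([[lam, a], [0.0, lam]])
    P = np.array([[1.0, 0.0], [0.0, 1.0 / a]])

    result = np.linalg.inv(P) @ A @ P
    print(result)  # [[lam, 1], [0, lam]]
    ```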
     
  15. Oct 9, 2008 #14
    Got to run to school now. Thanks for your help so far :)
     
  16. Oct 10, 2008 #15
    Hey Dick,
    I still have a couple of questions about the proof... Hope you don't mind :)

    1. How did you know it had to be a basis of the eigenvector and a vector orthogonal to it?
    2. How did you get this formula: A_{ij} = (bi).A(bj)?
    The only formula I know to change bases is P^(-1)AP :S
    3. Why's lambda*b1.b1 = lambda?
    4. And how is it easy to find P from here???
     
  17. Oct 10, 2008 #16

    Dick


    2 first. A_{ij} is the ith component of A acting on the jth basis vector. The column vectors of A are the images of the basis vectors. So A_{ij} = bi.A(bj) if the basis is orthonormal. I picked the vectors the way I did because I wanted to get (lambda, 0) in the first column. I picked b1 and b2 to be orthonormal, so b1.b2 = 0 and b1.b1 = b2.b2 = 1. Do you mean the matrix P that will change [[lambda,a],[0,lambda]] into [[lambda,1],[0,lambda]]? It's just [[1,0],[0,1/a]]; it's pretty easy to see that.
     
  18. Oct 10, 2008 #17
    Wait... So do you mean P^(-1)A results in [[lambda,a],[0,lambda]], and then multiplying this by P gives us [[lambda,1],[0,lambda]]??
     
  19. Oct 10, 2008 #18

    Dick


    I mean if A=[[lambda,a],[0,lambda]] and P=[[1,0],[0,1/a]] then P^(-1)AP=[[lambda,1],[0,lambda]].
     
  20. Oct 10, 2008 #19
    Ohh ok :). So what you did was change the basis of A to get it into that form? Sorry, I kinda still fail to see the big picture here...
     
  21. Oct 10, 2008 #20

    Dick


    I picked a special basis of A and argued what the matrix must look like in that basis. Then I did a final P thing to change the a to 1.
     