
Diagonalizability of a matrix

  1. Oct 15, 2011 #1
    1. The problem statement, all variables and given/known data
    A is a 2 by 2 real matrix which cannot be diagonalized by any matrix P. Prove that there is an invertible P such that P⁻¹AP = [[Ω,1],[0,Ω]].

    2. The attempt at a solution
    I didn't know how to do this so I tried the following.

    Since we need to prove there is an invertible P such that P⁻¹AP = [[Ω,1],[0,Ω]], this means that we need to prove A and L := [[Ω,1],[0,Ω]] are similar matrices.

    So we need to show A and L have the same eigenvalue whenever A is not diagonalizable.

    det (xI-L) = (x-Ω)²
    eigenvalue of L = Ω

    det (xI-A) = x²-(a+d)x+(ad-bc)
    eigenvalues of A = 1/2(a+d ± SQRT[(a+d)²-4(ad-bc)])
    Since A is not diagonalizable, it will not have 2 distinct eigenvalues. This implies that (a+d)²-4(ad-bc) = 0. Hence the characteristic polynomial of A is (x-(a+d)/2)², and the only eigenvalue of A is (a+d)/2.

    Since Ω can be any number, the eigenvalues of A and L will be the same when A is not diagonalizable. This implies that A and L are similar matrices. (This is the main part that I am not sure about; it seems wrong.)

    Therefore, there exists an invertible P such that P⁻¹AP = [[Ω,1],[0,Ω]].
     
  3. Oct 15, 2011 #2

    vela


  4. Oct 15, 2011 #3
    But I wasn't taught the Jordan normal form, and my lecturer stated that it can be done without it. Will my proof be valid?
     
  5. Oct 15, 2011 #4

    vela


    I think your instructor wants you to go through the construction of the Jordan normal form matrix.

    If you already knew about the Jordan normal form, the problem would be trivial. You would recognize that L is in that form, so there exists a matrix P, blah blah blah.

    On the other hand, if you go through the logic of where these generalized eigenvectors come from, you can show they're independent and therefore a basis for R². Then you can show, for example, what (A-ΩI) has to look like in that basis, and so on. You're not so much using the Jordan normal form; you're deriving it.

    This way you'll understand why the matrix has the form it does, which is probably what your instructor wants you to learn, rather than just knowing how to write down the matrix without understanding where it came from.
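    (Roughly, the construction being described is this sketch, assuming Ω is the repeated eigenvalue of A: take an eigenvector v and a "generalized eigenvector" w with (A-ΩI)w = v. Once v and w are shown to be independent, they form a basis of R², and

    [tex]A\boldsymbol v = \Omega \boldsymbol v, \quad A\boldsymbol w = \boldsymbol v + \Omega \boldsymbol w \quad\Longrightarrow\quad A(\boldsymbol v ~~ \boldsymbol w) = (\boldsymbol v ~~ \boldsymbol w)\begin{pmatrix}\Omega & 1 \\ 0 & \Omega \end{pmatrix},[/tex]

    so P = (v w) is the matrix you're after.)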


    EDIT: Of course, I could be wrong. Use your own judgment. :)
     
    Last edited: Oct 15, 2011
  6. Oct 15, 2011 #5
    I will look up the Jordan normal form. Just want to ask one question: does having the same eigenvalues imply that the matrices are similar? Because I know that similar matrices have the same eigenvalues.
     
  7. Oct 15, 2011 #6

    vela


    No, the matrices
    [tex]\begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}[/tex]
    and
    [tex]\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}[/tex]
    have the same eigenvalue, but obviously they can't be similar.
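    One way to see this: the zero matrix is similar only to itself, since for every invertible P

    [tex]P^{-1}\begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}P = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix} \neq \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}.[/tex]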
     
  8. Oct 16, 2011 #7
    Oh... ok...
    I still don't really understand the Jordan normal form... urgh!!!

    btw, is there any characteristic that we can use to prove that 2 matrices are similar?
     
  9. Oct 16, 2011 #8

    vela


    The only way I know of is to find the matrix P.
     
  10. Oct 16, 2011 #9

    I like Serena


    Aren't 2 matrices similar if they have the same Jordan normal form (up to some ordering of the blocks)?
    That is, they are similar if they have the same eigenvalues combined with the same ranks of the corresponding eigenspaces?


    Btw, is Ω supposed to be a real number?
    Because if so, I don't think it's always possible.
    What if there are no real eigenvalues?
    Like in:
    [tex]A=\begin{pmatrix}0 & -1 \\ 1 & 0 \end{pmatrix}[/tex]
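    (Just to spell it out, for that matrix the characteristic polynomial has no real roots:

    [tex]\det(xI - A) = \det\begin{pmatrix} x & 1 \\ -1 & x \end{pmatrix} = x^2 + 1,[/tex]

    so its eigenvalues are ±i.)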
     
  11. Oct 16, 2011 #10
    Ω is not specified to be anything... I presume it can be complex or real...

    So will I be able to say that since both of them have the same eigenvalues and the same ranks of the corresponding eigenspaces, they are similar?
     
  12. Oct 16, 2011 #11

    I like Serena


    I believe you can indeed say that, but... you would still need to prove it....
    I dare say it because of the theorem on the Jordan normal form, but if you're not supposed to use that...
     
  13. Oct 16, 2011 #12
    oh my... how am I supposed to prove this? I've read so much on the Jordan normal form. All I know is that a non-diagonalizable matrix can be written in Jordan normal form, and...

    Any hints??
     
  14. Oct 16, 2011 #13

    I like Serena


    I'm a bit fuzzy on what you can and cannot use.
    Which theorems do you have available?

    For starters, let me review your step:
    "Since A is not diagonalizable, it will not have 2 distinct eigenvalues."

    It is true, but how do you know this?
    Which theorem are you using here?
    Otherwise you should still prove it.
     
  15. Oct 16, 2011 #14
    Because if A is not diagonalizable, this means that it has at most 1 linearly independent eigenvector. Which also implies that it is impossible to have 2 different eigenvalues, as 2 different eigenvalues would lead to 2 independent eigenvectors....

    That's what I used to deduce that...
     
  16. Oct 16, 2011 #15

    I like Serena


    Apparently you are using a theorem on the diagonalizability of a matrix...

    Let me rephrase.

    If a 2x2 matrix A has 2 distinct eigenvalues a and b, it also has 2 independent corresponding eigenvectors v and w.
    (Why are they independent?)

    So Av = av and Aw = bw.
    This means that [itex]A (\boldsymbol v ~ \boldsymbol w) = (a\boldsymbol v ~ b\boldsymbol w) = (\boldsymbol v ~ \boldsymbol w) \begin{pmatrix}a & 0 \\ 0 & b \end{pmatrix}[/itex].

    There! :smile:
    With P=(v w) we have the (sub)proof that 2 distinct eigenvalues imply diagonalizability.
    (Why is P invertible? And why does this imply similarity?)


    TBH, I haven't worked your problem out (completely) myself yet.
    But I think your proof should be something similar to what we did just now.
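    (As a side note on the first parenthetical question, one standard argument for the independence, assuming a ≠ b and v, w ≠ 0:

    [tex]\text{if } \boldsymbol w = c\,\boldsymbol v \text{, then } b\boldsymbol w = A\boldsymbol w = c\,A\boldsymbol v = ca\,\boldsymbol v = a\boldsymbol w, \text{ so } (a - b)\boldsymbol w = \boldsymbol 0,[/tex]

    which forces w = 0, a contradiction.)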
     
    Last edited: Oct 16, 2011
  17. Oct 16, 2011 #16
    So we have therefore proven that a matrix that is not diagonalizable cannot have 2 distinct eigenvalues.

    P will be invertible because v and w are linearly independent. So this implies similarity between A and [[a,0],[0,b]] since P⁻¹AP=[[a,0],[0,b]]

    What if we now change [[a,0],[0,b]] to [[Ω,1],[0,Ω]]?? Does this tell us anything...
    So A[v w]=[v w][[Ω,1],[0,Ω]]
    Av=Ωv which implies that v is an eigenvector for A with eigenvalue Ω.
    Aw=v+Ωw which... erm, I don't really know
     
  18. Oct 16, 2011 #17

    I like Serena


    Yep.
    So if we can prove that we can always find a w, linearly independent of v, such that Aw=v+Ωw, we're basically done.

    Btw, we can already say that for any vector w independent of v, (A-ΩI)w≠0.
    (Why?)
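    (A possible reason, in case it helps: since A is not diagonalizable, the eigenspace for Ω must be 1-dimensional,

    [tex]\ker(A - \Omega I) = \operatorname{span}\{\boldsymbol v\},[/tex]

    because if it were 2-dimensional we would have A = ΩI, which is diagonal. So any w outside the span of v satisfies (A-ΩI)w ≠ 0.)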
     
  19. Oct 16, 2011 #18
    because v is an eigenvector, which is by definition a non-zero vector. So (A-ΩI)w≠0 since (A-ΩI)w=v

    but how do we show that v and w are always linearly independent when (A-ΩI)w=v?
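    (One possible argument, using only the relation (A-ΩI)w = v: if w were a multiple of v, say w = cv, then

    [tex]\boldsymbol v = (A - \Omega I)\boldsymbol w = c\,(A - \Omega I)\boldsymbol v = \boldsymbol 0,[/tex]

    contradicting that v is an eigenvector and hence non-zero. So v and w must be linearly independent.)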
     
  20. Oct 16, 2011 #19

    I like Serena


    I just went over the wiki article on Jordan normal forms again:
    http://en.wikipedia.org/wiki/Jordan_normal_form

    It gives a method to find w (a generalized eigenvector):

    From:
    Aw=v+Ωw
    we get:
    (A-ΩI)w=v
    (A-ΩI)²w=(A-ΩI)v=0

    So w is a vector in the kernel of (A-ΩI)².

    The article also gives a proof of why this always works.
    Perhaps it can be simplified for a 2x2 matrix.
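    (One possible simplification for the 2x2 case, assuming the Cayley-Hamilton theorem is available and Ω is allowed to be complex, as discussed above: a non-diagonalizable 2x2 matrix A has a single repeated eigenvalue Ω, so its characteristic polynomial is (x-Ω)² and

    [tex](A - \Omega I)^2 = 0, \qquad A - \Omega I \neq 0,[/tex]

    the second part because A = ΩI would be diagonal. Pick any w with (A-ΩI)w ≠ 0 and set v := (A-ΩI)w. Then (A-ΩI)v = (A-ΩI)²w = 0, so v is an eigenvector with eigenvalue Ω, and Aw = v + Ωw as required.)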
     
  21. Oct 16, 2011 #20
    I could understand up to the fact that v is in the intersection of Range(A-ΩI) & Ker(A-ΩI) and w is a vector in the kernel of (A-ΩI)².

    how to relate it to w?? hmm... since w is not in Ker(A-ΩI), v≠w...
     