
Diagonalizing 5x5 Matrix by Hand

  1. Sep 25, 2012 #1
    I'm required to diagonalize a 5x5 matrix by hand, using 'appropriate similarity transformations.' I'm not asking for an answer to a homework question, I'm asking is there any sort of easier (by hand) way of doing this than how I'm approaching it?

    If it matters, the matrix is as follows:
    A= {
    {1, 2, 1, 0, 0},
    {2, 1, 2, 0, 0},
    {1, 1, 2, 0, 0},
    {0, 0, 0, 0, 2},
    {0, 0, 0, 2, 0}}

    Looking at this makes it seem like a 3x3 matrix with a 2x2 tacked on the bottom right corner, and zeros filling the space created by the two extra dimensions.

    The method I'm planning on using to diagonalize this:
    - Find the eigenvalues; not sure how many there are, but I know there can be 5 at most
    - Find the normalized eigenvectors associated with each eigenvalue
    - Use the rule: A is diagonalizable with P^(-1)AP = D (D being the diagonal matrix)
    --- P is a matrix where each column is an eigenvector of A
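
    As a sanity check, the three steps above can be sketched numerically with numpy (this only verifies the plan; it isn't the by-hand work the assignment wants):

```python
import numpy as np

A = np.array([[1, 2, 1, 0, 0],
              [2, 1, 2, 0, 0],
              [1, 1, 2, 0, 0],
              [0, 0, 0, 0, 2],
              [0, 0, 0, 2, 0]], dtype=float)

# Steps 1-2: eigenvalues and (column) eigenvectors of A
eigvals, P = np.linalg.eig(A)

# Step 3: P^(-1) A P should be diagonal, with the eigenvalues
# on the diagonal in the same order numpy returned them
D = np.linalg.inv(P) @ A @ P
print(np.allclose(D, np.diag(eigvals)))  # True if A is diagonalizable
```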

    This seems WAY too complicated to me. The class this is for is supposed to be a class about the mathematical methods in Physics, and this just seems like Linear Algebra all over again. This has no physical meaning to me and is just manipulating matrices using certain properties of matrices to get other matrices.

    So, any thoughts on how to approach this? I vaguely remember someone else commenting on how the 5x5 looks like a 2x2 tacked on a 3x3, and somehow using that to our advantage to do this. But I seriously don't want to find the eigenvectors of a 5x5 by hand the way I've been doing it for a 3x3.

    If anyone can explain what is going on physically here as well, that would be great. I want to understand this stuff, but right now it's just way too mathy.
     
  3. Sep 25, 2012 #2
    You're on the right track with the realisation that this is just a 3x3 matrix and a 2x2 matrix. Let [itex]V[/itex] be the matrix that brings the 3x3 portion into triangular form, and let [itex]W[/itex] be the matrix that brings the 2x2 matrix into triangular form. Then:
    [tex]
    \begin{pmatrix}
    V & 0 \\
    0 & W
    \end{pmatrix}
    [/tex]

    is the similarity transform.
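
    In numpy, the block idea looks like this (a numerical sketch only; here the blocks are taken all the way to diagonal form using their eigenvector matrices as V and W):

```python
import numpy as np

M = np.array([[1, 2, 1],
              [2, 1, 2],
              [1, 1, 2]], dtype=float)   # the 3x3 block
N = np.array([[0, 2],
              [2, 0]], dtype=float)      # the 2x2 block

# Treat each block on its own; V and W are the eigenvector matrices
_, V = np.linalg.eig(M)
_, W = np.linalg.eig(N)

# Assemble the block-diagonal similarity transform and the full 5x5
Z32, Z23 = np.zeros((3, 2)), np.zeros((2, 3))
T = np.block([[V, Z32], [Z23, W]])
A = np.block([[M, Z32], [Z23, N]])

# D should come out diagonal (up to rounding)
D = np.linalg.inv(T) @ A @ T
print(np.allclose(D, np.diag(np.diag(D))))  # expect True
```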
     
  4. Sep 25, 2012 #3
    So, in reality, as opposed to treating the 5x5 as a whole and using its eigenvectors to satisfy [itex]P^{-1}AP = D[/itex], I should triangularise the 3x3 and the 2x2 and then put the resultant triangularised matrices back into a 5x5, which will then be equivalent to the diagonal form of the original 5x5?

    If that's true, that is indeed very intuitive. How can this be represented physically? I'm sure there has to be some correlation to a physical phenomenon seeing as I'm learning this in a Physics-ish course.
     
  5. Sep 25, 2012 #4
    Yes, that is correct.

    As to the physics connection, I'm not too sure. Something can surely be cooked up, though. For example, think of a matrix as a system of equations in 5 variables. In this case, for the first 3 equations the coefficients on the "last 2" variables are 0, and for the last two equations the coefficients on the "first 3" variables are zero. Then maybe look at this as some sort of physical system modeling something: stresses on a bridge, a spring-mass system, or perhaps several springs coupled together.
     
  6. Sep 25, 2012 #5

    Hurkyl


    You're overthinking it. There isn't anything deep going on here: it's not much different than "you have two things. Do something with one thing. Do something else with the other thing".

    In some sense, the whole point of diagonalization is to transform a problem so you can look at it in this fashion. It's just in this problem, part of that work is already done.
     
    Last edited: Sep 25, 2012
  7. Sep 25, 2012 #6
    So, just to make sure again..

    Let's call the 3x3 matrix M, and the 2x2 matrix N.

    I have to triangularize M and N to get what was previously called V and W, not diagonalize M and N to get V and W?

    So,
    [tex]
    A = \begin{pmatrix}
    M & 0 \\
    0 & N
    \end{pmatrix}
    [/tex]

    and

    [tex]
    D = \begin{pmatrix}
    V & 0 \\
    0 & W
    \end{pmatrix}
    [/tex], where D is the diagonalization of A, V is the triangularization of M, and W is the triangularization of N?

    Does it matter if V and W are upper or lower triangularized?
     
  8. Sep 25, 2012 #7

    AlephZero


    You can probably solve this by inventing some "simple" matrices ##P_i## such that ##P_n^{-1}\cdots P_1^{-1}AP_1\cdots P_n## diagonalizes A one step at a time. For example, think of a P matrix that will swap two rows and columns of A. For the 3x3 part, think about a P matrix that will add or subtract rows or columns. All the entries in the 3x3 part are either 1 or 2, so it shouldn't be too hard to make most of them 0 by doing that.
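
    As an illustration of one such "simple" P (a numpy check, not the by-hand calculation itself), a permutation matrix swaps rows and columns while leaving the eigenvalues alone:

```python
import numpy as np

A = np.array([[1, 2, 1, 0, 0],
              [2, 1, 2, 0, 0],
              [1, 1, 2, 0, 0],
              [0, 0, 0, 0, 2],
              [0, 0, 0, 2, 0]], dtype=float)

# Elementary similarity transform: P swaps rows/columns 1 and 2
# (0-indexed). For a permutation matrix, P^(-1) = P^T.
P = np.eye(5)
P[[1, 2]] = P[[2, 1]]

B = np.linalg.inv(P) @ A @ P

# A similarity transform leaves the eigenvalues unchanged
print(np.allclose(np.sort(np.linalg.eigvals(A)),
                  np.sort(np.linalg.eigvals(B))))  # expect True
```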

    Finding the eigenvalues and vectors will work, but it's not nice to do by hand and not as instructive as thinking out a neater way to solve the problem.
     
  9. Sep 25, 2012 #8
    How can I triangularize a matrix like this? I don't see how it's possible.

    {0, 2}
    {2, 0}

    The diagonal is already zero, and the outside bits are all non-zero. Is it literally impossible to triangularize this by row reductions? Same with the 3x3. Maybe this way isn't the right way :(
     
  10. Sep 25, 2012 #9
    Find the eigenvalues.
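
    For this 2x2 block the characteristic equation gives them directly:
    [tex]
    \det\begin{pmatrix} -\lambda & 2 \\ 2 & -\lambda \end{pmatrix} = \lambda^2 - 4 = 0 \quad\Rightarrow\quad \lambda = \pm 2.
    [/tex]

    So the diagonal (hence triangular) form has ±2 on the diagonal even though the original matrix has zeros there. Note that plain row reduction is not a similarity transform: row operations alone change the eigenvalues, while [itex]P^{-1}AP[/itex] does not.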
     
  11. Sep 25, 2012 #10
    I've actually found that the diagonal of the 5x5 is just its eigenvalues down the main diagonal.

    But, is the order important? I found the eigenvalues of the 2x2 and 3x3's, and they're all the same as the eigenvalues of the 5x5, which would mean they are the elements of the diagonal 5x5.. but in what order?
     
  12. Sep 26, 2012 #11
    Well, yeah, of course.

    Yes, the eigenvalues are the same - that's the reason you can use the trick we have described. You said the problem was to do a similarity transform, right? I think what the professor had in mind was finding a matrix [itex]V[/itex] so that [itex]V^{-1} X V[/itex] is diagonal (where X is the matrix you are starting with). Then the [itex]i^{th}[/itex] column of V is going to be the eigenvector associated to the eigenvalue in the [itex](i,i)[/itex] entry of the diagonal matrix. To find the eigenvalues and eigenvectors, it is sufficient to do it for the 3x3 and 2x2 matrices as described in previous posts.
     
  13. Sep 26, 2012 #12
    It's great that I've worked out what I have done so far, I've come to some realizations that I didn't really notice before.

    But my question was (and I guess I should have worded it better): does the order of the eigenvalues in the diagonal matter? For something like calculating the trace of the diagonal matrix, the order matters about as much as the price of bread in China, since it's all addition.

    I'm also unsure as to whether or not the matrix V with columns equal to the Eigenvectors of X needs to have its columns ordered in any specific way, either.

    Does it matter if any particular eigenvector is the 4th column as opposed to the 2nd? Does it matter if any particular eigenvalue is in the [2,2] element of the diagonal as opposed to the [5,5]?
     
  14. Sep 26, 2012 #13
    (Just so we are on the same page) If a matrix [itex]M[/itex] is diagonalisable then what we mean is that there is some invertible matrix [itex]V[/itex] so that [itex]V^{-1}MV=D[/itex] where [itex]D[/itex] is diagonal and, furthermore, the diagonal entries are the eigenvalues of [itex]M[/itex]. What is [itex]V[/itex]? It is the matrix whose columns are the eigenvectors of [itex]M[/itex]. (Note that a matrix is invertible iff its columns are linearly independent, so this means that a matrix is diagonalisable iff there are n linearly independent eigenvectors, where n is the dimension of the space.) Now, let's say that the [itex]i^{th}[/itex] column of [itex]V[/itex] is an eigenvector corresponding to the eigenvalue [itex]\lambda[/itex]. Then the [itex](i,i)[/itex] entry of [itex]D[/itex] is [itex]\lambda[/itex].

    So, to answer your question, we can order the eigenvalues in whatever order we like, as long as the [itex]i^{th}[/itex] column of [itex]V[/itex] is an eigenvector associated to the eigenvalue in the [itex](i,i)[/itex] entry of [itex]D[/itex].
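
    That freedom is easy to check numerically on the 2x2 block from earlier in the thread (a numpy sketch):

```python
import numpy as np

N = np.array([[0, 2], [2, 0]], dtype=float)
w, V = np.linalg.eig(N)

# Swap the two eigenvalue/eigenvector pairs: the diagonal entries
# of D simply move with the columns of V.
perm = [1, 0]
V2 = V[:, perm]
D2 = np.linalg.inv(V2) @ N @ V2

print(np.allclose(np.diag(D2), w[perm]))  # expect True
```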
     