
Matrix diagonalization

  1. Sep 2, 2005 #1

    oki


    Matrix diagonalization

    Hi, guys!

    I graduated some time ago, and by now I have completely forgotten all those matrix rules. Could you please advise me? The problem is the following:

    I have a large square matrix (around 1000x1000), and I have to raise this matrix to a large power, like 1000. The matrix is quite sparse. It looks like this:

    x0x00000000000000000
    0x000000000000000000
    000x00x0x0x0x0x0x0x0x
    0xx0xx00000000000000
    00x00000000000000000
    000000x0000000000000
    00x00000000000000000
    00000000x00000000000
    00x00000000000000000
    0000000000x000000000
    00x00000000000000000
    000000000000x0000000
    00x00000000000000000
    00000000000000x00000

    All the x's are different; all the zeros are just zeros.
    So my first guess would be that I have to diagonalize the matrix somehow, but I don't remember how to do this. I want to try it by hand first; then, if that fails, numerically (I prefer Fortran).

    Any suggestions on how to diagonalize it? The bottom-right part of the matrix looks as if it is already diagonal (although the elements are not on the main diagonal).

    Any help will be greatly appreciated.
     
    Last edited: Sep 2, 2005
  3. Sep 2, 2005 #2
    It's hard to tell with your example, but usually you'd find the eigenvalues and eigenvectors of the matrix and put it into the form:

    [tex]A=PDP^{-1}[/tex]

    Where A is your original matrix, D is the diagonal eigenvalue matrix, and P is the eigenvector matrix.

    I haven't ever done one this large before though...
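    For what it's worth, here is a minimal sketch of that recipe in Python/NumPy (my own illustration, not anything from this thread; the test matrix is a small random stand-in, and a real 1000x1000 run at power 1000 would also have to worry about overflow and how well-conditioned P is):

    [code]
import numpy as np

# Small random stand-in for the real 1000x1000 matrix; k is kept small
# here only so the sanity check below doesn't overflow.
n, k = 5, 10
A = np.random.rand(n, n)

# Diagonalize: A = P D P^(-1).  Then A^k = P D^k P^(-1), where D^k is
# just an element-wise power of the eigenvalues.
eigvals, P = np.linalg.eig(A)
A_k = (P @ np.diag(eigvals**k) @ np.linalg.inv(P)).real  # imaginary part is roundoff

# Sanity check against direct repeated multiplication.
assert np.allclose(A_k, np.linalg.matrix_power(A, k))
    [/code]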
     
  4. Sep 2, 2005 #3

    Hurkyl


    Are you sure that, for your application, you really need to compute the matrix raised to a high power? Might there be some other way to get the thing you really want?
     
  5. Sep 3, 2005 #4

    oki


    It's a good point. I am now thinking of another approach that avoids matrices, but for now I want to try with matrices first. Wouldn't it be a nice solution to split the matrix into four blocks and operate with these blocks as if they were matrix elements? The blocks might look like the following:

    x0x00000000000000000
    0x000000000000000000
    000x00x0x0x0x0x0x0x0x

    0xx0xx00000000000000
    00x00000000000000000
    000000x0000000000000
    00x00000000000000000
    00000000x00000000000
    00x00000000000000000
    0000000000x000000000
    00x00000000000000000
    000000000000x0000000
    00x00000000000000000
    00000000000000x00000


    I am not sure yet how this can help, but perhaps it is possible first to solve analytically the problem of raising a 2x2 block matrix to a large power, and then return to the blocks: one of them would be a diagonal matrix, and two would consist of only a small number of rows or columns.
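    Just to check that idea numerically, here is a quick sketch (mine, not from the thread): squaring a matrix block-wise agrees with squaring it whole, as long as the blocks are multiplied in the right order, since blocks don't commute.

    [code]
import numpy as np

# Random 6x6 matrix split into four 3x3 blocks [[A11, A12], [A21, A22]].
M = np.random.rand(6, 6)
A11, A12 = M[:3, :3], M[:3, 3:]
A21, A22 = M[3:, :3], M[3:, 3:]

# Block multiplication follows the same rule as 2x2 scalar
# multiplication, with the blocks playing the role of the entries.
M2_blocked = np.block([
    [A11 @ A11 + A12 @ A21, A11 @ A12 + A12 @ A22],
    [A21 @ A11 + A22 @ A21, A21 @ A12 + A22 @ A22],
])

assert np.allclose(M2_blocked, M @ M)
    [/code]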
     
  6. Sep 3, 2005 #5

    Hurkyl


    The nice thing about sparse matrices is that multiplication is very fast. For example, if A and B are sparse matrices, then it is often faster to compute A(Bv) than it is to compute (AB)v.

    (Where I've used parentheses to indicate the order in which I'm computing the multiplications)


    If you're interested in (A^100)v, then it could be faster to multiply by A 100 times than it is to precompute A^100 and multiply by that.

    If precomputing A^100 turns out to be best, then it might be fastest to compute A^100 via repeated multiplication rather than any clever techniques.


    This is all assuming you are representing your sparse matrix in some sort of sparse format.
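    A sketch of the repeated-multiplication idea in plain Python (my own toy dict-of-rows format, purely illustrative; a real run would use a proper sparse library):

    [code]
# Sparse matrix stored as {row: {col: value}} -- only nonzeros kept.
def sparse_matvec(A, v):
    """Compute A*v, touching only the stored (nonzero) entries."""
    out = [0.0] * len(v)
    for i, row in A.items():
        out[i] = sum(val * v[j] for j, val in row.items())
    return out

def power_times_vector(A, v, k):
    """(A^k)v as k successive sparse mat-vecs: A(A(...A(v))).
    The dense matrix A^k is never formed."""
    for _ in range(k):
        v = sparse_matvec(A, v)
    return v

# Tiny 3x3 example with three nonzero entries.
A = {0: {0: 0.5, 2: 1.0}, 2: {1: 2.0}}
print(power_times_vector(A, [1.0, 1.0, 1.0], 100))
    [/code]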
     
  7. Sep 4, 2005 #6

    oki


    Thank you for this info.
    How would you compare the speeds of the calculations you have mentioned with the option of using a PC to diagonalize the whole large matrix numerically and then raising the elements of the main diagonal to the Nth power?
     
  8. Sep 4, 2005 #7

    Hurkyl


    Empirically!
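    (In that spirit, a throwaway timing harness is all it takes; this one is my own illustration, with sizes picked arbitrarily and a dense test matrix for simplicity:)

    [code]
import time
import numpy as np

n, k = 500, 100
A = np.random.rand(n, n) / n   # scaled down so powers don't overflow
v = np.random.rand(n)

def repeated_matvec():
    w = v
    for _ in range(k):
        w = A @ w
    return w

t0 = time.perf_counter()
x = repeated_matvec()
print("k mat-vecs:          ", time.perf_counter() - t0, "s")

t0 = time.perf_counter()
y = np.linalg.matrix_power(A, k) @ v
print("A^k then one mat-vec:", time.perf_counter() - t0, "s")

assert np.allclose(x, y)
    [/code]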
     
  9. Sep 4, 2005 #8

    Hurkyl


    But knowing the theory is important too, I suppose. Unfortunately, I don't know it all, but I can give some of it.


    Normally, if you have an mxn matrix A and you multiply by an nx1 vector v, it takes Θ(mn) operations to compute the product Av.

    However, in a sparse representation, if matrix A has p entries, it takes Θ(p + n) operations to compute the product Av. (Normally you'd leave the "+n" part out, but it's needed when p is much less than n, because it takes time to initialize the output vector.)
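    To make that count concrete, here is a compressed-sparse-row (CSR) mat-vec in plain Python (CSR is my choice of format for illustration; the thread doesn't commit to one):

    [code]
# CSR storage: for p nonzeros in an n x n matrix, the loop below does
# Theta(p) multiply-adds, plus Theta(n) to zero out the result vector.
def csr_matvec(data, cols, row_ptr, v):
    n = len(row_ptr) - 1
    out = [0.0] * n                  # the Theta(n) initialization
    for i in range(n):
        for idx in range(row_ptr[i], row_ptr[i + 1]):
            out[i] += data[idx] * v[cols[idx]]   # one op per stored entry
    return out

# The 3x3 matrix [[2, 0, 0], [0, 0, 1], [0, 3, 0]]:
data, cols, row_ptr = [2.0, 1.0, 3.0], [0, 2, 1], [0, 1, 2, 3]
print(csr_matvec(data, cols, row_ptr, [1.0, 1.0, 1.0]))   # [2.0, 1.0, 3.0]
    [/code]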

    Note that if A is 1000x1000, and has about 1000 nonzero entries, then it takes on the order of C * 1000 * 1000 steps to simply multiply by A 1000 times.

    However, A^1000 will be a dense matrix, and it will take D * 1000 * 1000 operations to simply compute (A^1000)v.

    Of course, you cannot directly compare these running times, because we don't know what C and D are... they should be roughly similar, but there's no reason to think they're actually equal. (And note that we haven't even considered the time to compute A^1000! This is why I suggested repeated multiplication might be the superior option.)


    I don't know the actual amount of time it would take to diagonalize an nxn matrix A. However, it would seem to me that you'd need to solve roughly n systems of nxn equations, and the most straightforward way I know to solve an nxn system of equations runs in Θ(n^3) time, so diagonalizing A in this manner would take Ω(n^4) time.

    For comparison, multiplying two nxn (dense) matrices, using the most straightforward method, takes Θ(n^3) time, so computing A^k directly would take Θ(k n^3) time.
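    (One of the "clever techniques" alluded to above is presumably repeated squaring, which cuts the k dense multiplications down to about 2*log2(k); a sketch of my own:)

    [code]
import numpy as np

def mat_pow(A, k):
    """A^k by repeated squaring: about 2*log2(k) dense multiplies
    instead of k."""
    result = np.eye(A.shape[0])
    while k > 0:
        if k & 1:               # this bit of k is set: fold in A^(2^bit)
            result = result @ A
        A = A @ A               # square up to the next power of two
        k >>= 1
    return result

A = np.random.rand(4, 4)
assert np.allclose(mat_pow(A, 13), np.linalg.matrix_power(A, 13))
    [/code]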

    But, multiplying two nxn matrices where one is sparse with p entries (assuming n is in O(p)) takes Θ(n p). So, computing A^k by repeated multiplication when A is sparse takes Θ(k n p).


    Oh, and there's one thing I've been ignoring in all of this: numerical stability. A seemingly correct algorithm can give you entirely wrong answers, simply through the accumulation and magnification of roundoff error. I don't know much about the numerical stability of these algorithms. :frown: But, if all of your work is done with integers, then there are no stability issues with the repeated multiplication approaches!
     
  10. Sep 4, 2005 #9

    oki


    Ok, I see, thank you!

    My matrix elements are not integers, so the stability issues do exist. Taking this into account, and looking at your speed estimates, it seems that the simplest way would be just to do the straightforward matrix multiplication. I will try to perform this using Fortran packages for sparse matrices. Let's see...
     