Numerical Matrix Exponential

  1. Apr 23, 2012 #1
    I need to compute the matrix exponential of a real matrix that has no special structure (it is not symmetric, positive definite, nilpotent, normal, etc.). Currently this is the most time-consuming step in my program; I suppose I need to transform the matrix into a form whose exponential can be computed cheaply. Thanks in advance for any ideas.
     
  3. Apr 23, 2012 #2
    If you can find the eigenvalues and eigenvectors of the matrix efficiently, then you can write:
    [tex]
    A \, U = U \, \Lambda
    [/tex]
    where [itex]\Lambda[/itex] is a diagonal matrix containing the eigenvalues, and [itex]U[/itex] is a matrix whose columns are the corresponding eigenvectors. If these are linearly independent, then [itex]U^{-1}[/itex] exists, and you can write:
    [tex]
    A = U \, \Lambda \, U^{-1} \Rightarrow \exp(A) = U \, \exp(\Lambda) \, U^{-1}
    [/tex]
    The matrix [itex]\exp(\Lambda)[/itex] is a diagonal matrix with the diagonal elements being the exponential of the corresponding eigenvalue.

    This method works for an arbitrary function of a matrix.
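
    In MATLAB/Octave terms the procedure would look something like the following (just a sketch: the example matrix is an arbitrary placeholder, and it assumes the eigenvector matrix is well conditioned):

    % Sketch: matrix exponential via eigendecomposition.
    % Assumes A is diagonalizable and its eigenvector matrix U is well conditioned.
    A = randn(4);                        % placeholder: any real square matrix
    [U, L] = eig(A);                     % A*U = U*L
    expA = U * diag(exp(diag(L))) / U;   % U * exp(Lambda) * U^-1
    norm(expA - expm(A), 'fro')          % small: agrees with the built-in matrix exponential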
     
  4. Apr 24, 2012 #3
    Sorry, I forgot to mention that the matrix is not invertible, so this method will not work.
     
  5. Apr 24, 2012 #4
    To be more specific... my matrix is such that each diagonal entry equals the negative of the sum of the off-diagonal entries in that row. That is why my matrix will never be invertible; it's singular by construction. But can this construction be manipulated?
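
    For concreteness, here is a minimal MATLAB/Octave sketch of that construction (the off-diagonal entries are just random placeholders):

    % Build a matrix whose diagonal entries are minus the sum of the
    % off-diagonal entries in the same row, so every row sums to zero.
    n = 5;
    A = rand(n);                 % placeholder off-diagonal entries
    A(1:n+1:end) = 0;            % clear the diagonal
    A = A - diag(sum(A, 2));     % A(i,i) = -(sum of off-diagonal entries in row i)
    sum(A, 2)                    % row sums: numerically zero
    rank(A)                      % typically n-1, i.e. singular by construction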
     
  6. Apr 24, 2012 #5
    It doesn't matter if the matrix is singular. That only means that 0 is an eigenvalue. What you need is for the matrix whose columns are the eigenvectors of your matrix to be non-singular. This is quite different.
     
  7. Apr 24, 2012 #6
    So, does your matrix satisfy:
    [tex]
    A_{i i} = -\sum_{k \neq i}{A_{i k}}
    [/tex]
     
  8. Apr 24, 2012 #7
    Yes, that is correct.

    In response to your post above, I think it does matter that an eigenvalue is zero. If an eigenvalue is zero, there is no unique associated eigenvector (any vector will do). This ruins the basis, and therefore the inverse does not exist, so there is no justification for the equation:
    i.e. [itex]A V = V \Lambda \Rightarrow A = V \Lambda V^{-1}[/itex], which requires that [itex]V^{-1}[/itex] exists, which requires that 0 isn't an eigenvalue... right?
     
  9. Apr 24, 2012 #8
    No, not every vector is an eigenvector for the zero eigenvalue. There is a vector [itex]\vert x \rangle[/itex] such that:
    [tex]
    A \, \vert x \rangle \neq 0
    [/tex]
    (otherwise the matrix would be identically zero), and that vector is not an eigenvector for the zero eigenvalue.

    BTW, your matrix satisfies the following:
    [tex]
    \sum_{k}{A_{i k} \cdot 1} = 0
    [/tex]
    so the vector [itex]\vert x_0 \rangle = \frac{1}{\sqrt{N}} \, \left( 1, 1 , \ldots, 1 \right)^{\top}[/itex] is the normalized eigenvector corresponding to the zero eigenvalue.
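
    A quick numerical check of this (MATLAB/Octave sketch, using a random matrix with zero row sums):

    % For any matrix whose rows sum to zero, the normalized all-ones vector
    % is an eigenvector with eigenvalue 0.
    n = 4;
    A = rand(n);  A = A - diag(sum(A, 2));   % force zero row sums
    x0 = ones(n, 1) / sqrt(n);
    norm(A * x0)                             % numerically zero: A*x0 = 0*x0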
     
  10. Apr 24, 2012 #9
    Example of diagonalizing. Take the obviously singular 2x2 matrix:
    [tex]
    A = \left( \begin{array}{cc}
    1 & 2 \\
    3 & 6
    \end{array}\right)
    [/tex]

    The characteristic equation is:
    [tex]
    \left\vert \begin{array}{cc}
    1 - \lambda & 2 \\
    3 & 6 - \lambda
    \end{array}\right\vert = (1 - \lambda)(6 - \lambda) - 6 = \lambda^2 - 7 \lambda = \lambda(\lambda - 7) = 0
    [/tex]
    Thus, one eigenvalue is [itex]\lambda_1 = 0[/itex]. This is why the determinant of the matrix is zero, i.e. it is singular. But, the eigenvector is:
    [tex]
    \vert \lambda_1 \rangle = \left(\begin{array}{c}
    A_1 \\
    B_1
    \end{array}\right), \ A_1 + 2 B_1 = 0 \Rightarrow A_1 = -2 B_1 \Rightarrow \vert \lambda_1 \rangle = \left( \begin{array}{c}
    2 \\
    -1
    \end{array}\right)
    [/tex]

    The eigenvector corresponding to [itex]\lambda_2 = 7[/itex] is:
    [tex]
    \vert \lambda_2 \rangle = \left(\begin{array}{c}
    A_2 \\
    B_2
    \end{array}\right), \ -6 A_2 + 2 B_2 = 0 \Rightarrow B_2 = 3 A_2 \Rightarrow \vert \lambda_2 \rangle = \left( \begin{array}{c}
    1 \\
    3
    \end{array}\right)
    [/tex]
    Thus, the matrix U is:
    [tex]
    U = \left(\begin{array}{cc}
    2 & 1 \\
    -1 & 3
    \end{array}\right)
    [/tex]

    The determinant of this matrix is [itex]2 \cdot 3 - 1 \cdot (-1) = 7 \neq 0[/itex], i.e. this matrix is non-singular! Its inverse is:
    [tex]
    U^{-1} = \frac{1}{7} \left(\begin{array}{cc}
    3 & -1 \\
    1 & 2
    \end{array}\right)
    [/tex]
    Then, you may represent the original matrix as a product (check it!):
    [tex]
    A = \frac{1}{7} \, \left(\begin{array}{cc}
    2 & 1 \\ -1 & 3 \end{array}\right)
    \cdot \left(\begin{array}{cc}
    0 & 0 \\ 0 & 7 \end{array}\right)
    \cdot \left(\begin{array}{cc}
    3 & -1 \\ 1 & 2 \end{array}\right)
    [/tex]

    Then, you can easily write the exponential of the matrix as:
    [tex]
    \exp(A) = \frac{1}{7} \, \left(\begin{array}{cc}
    2 & 1 \\ -1 & 3 \end{array}\right)
    \cdot \left(\begin{array}{cc}
    e^0 & 0 \\ 0 & e^7 \end{array}\right)
    \cdot \left(\begin{array}{cc}
    3 & -1 \\ 1 & 2 \end{array}\right)
    [/tex]
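
    If you want to verify this numerically, here is a quick MATLAB/Octave check (note that MATLAB's exp is the element-wise exponential; the matrix exponential is expm):

    % Numerical check of the 2x2 example above.
    A = [1 2; 3 6];
    U = [2 1; -1 3];
    L = diag([0 7]);                                  % the eigenvalues 0 and 7
    norm(U * L * inv(U) - A)                          % ~0: reproduces A
    norm(U * diag(exp(diag(L))) * inv(U) - expm(A))   % small: matches expm(A) up to rounding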
     
  11. Apr 24, 2012 #10
    Okay, I see what you mean. Although, if zero is an eigenvalue then the rank of A is (N-1), where N is the number of rows/columns. But one of the requirements for doing an eigenvalue decomposition is that the rank is full and that the rows/columns are linearly independent.

    We take a simple example:

    A = [-a,a;b,-b]. In this example, the top row is -(a/b) times the bottom row, so there is linear dependence. I'm sure that this is true for the other cases (more rows/columns). Because of this, I don't see how the decomposition could be reliable.

    ------------------

    Now, to try your example (this is a printout from the MATLAB environment):

    First off, the expressions don't even commute

    >> inv(U)*[1,0;0,exp(7)]*U

    ans =

    157.5190 -469.5571
    -313.0380 940.1141

    >> U*[1,0;0,exp(7)]*inv(U)

    ans =

    157.5190 313.0380
    469.5571 940.1141


    >> U

    U =

    2 1
    -1 3
    >> A=[1,2;3,6]

    A =

    1 2
    3 6
    >> exp(A)

    ans =

    2.7183 7.3891
    20.0855 403.4288

    So unfortunately, it does not work.

    [tex]
    \exp(A) \neq \frac{1}{7} \, \left(\begin{array}{cc}
    2 & 1 \\ -1 & 3 \end{array}\right)
    \cdot \left(\begin{array}{cc}
    e^0 & 0 \\ 0 & e^7 \end{array}\right)
    \cdot \left(\begin{array}{cc}
    3 & -1 \\ 1 & 2 \end{array}\right)
    [/tex]
     
  14. Apr 24, 2012 #13
    If you don't know, you're doin' it wrong!
     
  15. Apr 25, 2012 #14

    Ba dum ching!
     
  16. Apr 26, 2012 #15
    I'm wondering now: if the situation is exp(tA), is it best to re-evaluate exp(tD) at every iteration (i.e. for each given t)?
    I'm assuming exp(tA) = Q exp(tD) Q^-1, where t runs from 0 to infinity.

    We need to evaluate periodically until ∂exp(tA)/∂t → 0.
    I've been reading through: http://www.cs.cornell.edu/cv/researchpdf/19ways+.pdf
    For example, on page 17 there is an evaluation (Lagrange interpolation) which reduces the matrix problem to evaluating a different scalar for each given time (very fast). Yet it's probably not the most stable method, right? Also, later (top of pg 25) they discuss a method like the one we've been talking about... except that they use the Schur decomposition and discuss ways of finding the upper triangle in terms of the diagonal, which is a function of t...

    What is the best course of action for evaluating exp(tA) for various t? Symbolic evaluation?
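
    To make the question concrete, this is roughly what I mean by reusing the decomposition (MATLAB/Octave sketch; it assumes A is diagonalizable with a reasonably well-conditioned eigenvector matrix, otherwise expm(t*A) seems like the safer fallback):

    % Reuse one eigendecomposition of A to evaluate exp(t*A) for many t.
    A = rand(4);  A = A - diag(sum(A, 2));   % example matrix with zero row sums
    [Q, D] = eig(A);
    d = diag(D);                 % eigenvalues
    Qinv = inv(Q);               % factor once, reuse for every t
    for t = 0:0.5:10
        E = Q * diag(exp(t * d)) * Qinv;   % exp(t*A) for this t
        % ... use E; stop once it has (numerically) stopped changing ...
    end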
     