What is the most efficient method for computing exp(tA)?

brydustin
I need to compute the matrix exponential of a real matrix that has no special structure (it is not symmetric, positive definite, nilpotent, normal, etc.). Currently, this is the most time-consuming step in my program; I need, I suppose, to transform this matrix into something whose exponential can be computed easily. Thanks in advance for any ideas.
 
If you can find the eigenvalues and eigenvectors of the matrix efficiently, then you can write:
A \, U = U \, \Lambda
where \Lambda is a diagonal matrix containing the eigenvalues, and U is a matrix whose columns are the corresponding eigenvectors. If these are linearly independent, then U^{-1} exists, and you can write:
A = U \, \Lambda \, U^{-1} \Rightarrow \exp(A) = U \, \exp(\Lambda) \, U^{-1}
The matrix \exp(\Lambda) is a diagonal matrix with the diagonal elements being the exponential of the corresponding eigenvalue.

This method works for an arbitrary function of a matrix.
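
A minimal MATLAB sketch of this recipe (assuming A is diagonalizable; eig, diag, and the elementwise exp are built-in MATLAB functions):

[U, L] = eig(A);                    % columns of U are eigenvectors, L is diagonal
expA = U * diag(exp(diag(L))) / U;  % U * exp(Lambda) * inv(U)

The right division by U computes the product with inv(U) without forming the inverse explicitly.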
 
Sorry, I forgot to mention that the matrix is not invertible; so this method will not work.
 
To be more specific: my matrix is such that each diagonal entry equals the negative of the sum of the off-diagonal elements in that row. That is why my matrix will never be invertible; it's singular by construction. But can this structure be exploited?
 
It doesn't matter if the matrix is singular. That only means that 0 is an eigenvalue. What you need is for the matrix whose columns are the eigenvectors of your matrix to be non-singular. This is quite different.
 
So, does your matrix satisfy:
A_{i i} = -\sum_{k \neq i}{A_{i k}}
 
Dickfore said:
So, does your matrix satisfy:
A_{i i} = -\sum_{k \neq i}{A_{i k}}

Yes, that is correct.

In response to your post above this, I think it does matter that an eigenvalue is zero. If an eigenvalue is zero, there is no unique associated eigenvector (any will do). This will ruin the basis, and therefore the inverse does not exist, so there is no justification for the equation:
i.e. A V = V \Lambda \Rightarrow A = V \Lambda V^{-1} \Leftrightarrow V^{-1} exists \Leftrightarrow 0 isn't an eigenvalue... right?
 
No, not any vector is an eigenvector for the zero eigenvalue. There is a vector \vert x \rangle such that:
A \, \vert x \rangle \neq 0
(otherwise the matrix would be identically zero). But then this vector is not an eigenvector for the zero eigenvalue.

BTW, your matrix satisfies the following:
\sum_{k}{A_{i k} \cdot 1} = 0
so the vector \vert x_0 \rangle = \frac{1}{\sqrt{N}} \, \left( 1, 1 , \ldots, 1 \right)^{\top} is the normalized eigenvector corresponding to the zero eigenvalue.
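
A quick numerical check of this structure in MATLAB (B below is a hypothetical matrix built to have the stated zero-row-sum property):

N = 5;
B = rand(N);                 % hypothetical off-diagonal entries
B(1:N+1:end) = 0;            % clear the diagonal
B = B - diag(sum(B, 2));     % set B_ii = -(sum of the off-diagonal row entries)
disp(norm(B * ones(N, 1)))   % ~0, so (1,...,1)^T spans the null space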
 
Example of diagonalizing. Take the obviously singular 2x2 matrix:
A = \left( \begin{array}{cc} 1 & 2 \\ 3 & 6 \end{array} \right)

The characteristic equation is:
\left\vert \begin{array}{cc} 1 - \lambda & 2 \\ 3 & 6 - \lambda \end{array} \right\vert = (1 - \lambda)(6 - \lambda) - 6 = \lambda^2 - 7 \lambda = \lambda(\lambda - 7) = 0
Thus, one eigenvalue is \lambda_1 = 0. This is why the determinant of the matrix is zero, i.e. it is singular. But the eigenvector is:
\vert \lambda_1 \rangle = \left( \begin{array}{c} A_1 \\ B_1 \end{array} \right), \quad A_1 + 2 B_1 = 0 \Rightarrow A_1 = -2 B_1 \Rightarrow \vert \lambda_1 \rangle = \left( \begin{array}{c} 2 \\ -1 \end{array} \right)

The eigenvector corresponding to \lambda_2 = 7 is:
\vert \lambda_2 \rangle = \left( \begin{array}{c} A_2 \\ B_2 \end{array} \right), \quad -6 A_2 + 2 B_2 = 0 \Rightarrow B_2 = 3 A_2 \Rightarrow \vert \lambda_2 \rangle = \left( \begin{array}{c} 1 \\ 3 \end{array} \right)
Thus, the matrix U is:
U = \left( \begin{array}{cc} 2 & 1 \\ -1 & 3 \end{array} \right)

The determinant of this matrix is 2 \cdot 3 - 1 \cdot (-1) = 7 \neq 0, i.e. this matrix is non-singular! Its inverse is:
U^{-1} = \frac{1}{7} \left( \begin{array}{cc} 3 & -1 \\ 1 & 2 \end{array} \right)
Then, you may represent the original matrix as a product (check it!):
A = \frac{1}{7} \left( \begin{array}{cc} 2 & 1 \\ -1 & 3 \end{array} \right) \left( \begin{array}{cc} 0 & 0 \\ 0 & 7 \end{array} \right) \left( \begin{array}{cc} 3 & -1 \\ 1 & 2 \end{array} \right)

Then, you can easily write the exponential of the matrix as:
\exp(A) = \frac{1}{7} \left( \begin{array}{cc} 2 & 1 \\ -1 & 3 \end{array} \right) \left( \begin{array}{cc} e^0 & 0 \\ 0 & e^7 \end{array} \right) \left( \begin{array}{cc} 3 & -1 \\ 1 & 2 \end{array} \right)
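
This 2x2 example is easy to verify numerically in MATLAB (expm is MATLAB's built-in matrix exponential):

A = [1, 2; 3, 6];
U = [2, 1; -1, 3];
disp(U * diag([0, 7]) / U)        % reproduces A
disp(U * diag([1, exp(7)]) / U)   % equals expm(A)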
 
Okay, I see what you mean. Although, if zero is an eigenvalue, then the rank of A is at most N - 1, where N is the number of rows/columns. But one of the requirements for doing an eigenvalue decomposition is that the rank is full and that the rows/columns are linearly independent.

We take a simple example:

A = [-a, a; b, -b]. In this example, the top row is -(a/b) times the bottom row, so there is linear dependence. I'm sure that this is true for the other cases (more rows/columns). Because of this, I don't see how the decomposition could be reliable.
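
For what it's worth, MATLAB's eig diagonalizes this matrix without trouble despite the rank deficiency; a sketch with hypothetical values a = 2, b = 3:

a = 2; b = 3;
A = [-a, a; b, -b];         % singular: rank(A) is 1
[V, D] = eig(A);            % eigenvalues 0 and -(a+b)
disp(norm(V * D / V - A))   % ~0, so A = V*D*inv(V) still holds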

------------------

Now, to try your example (this is a printout from the MATLAB environment):

First off, the expressions don't even commute

>> inv(U)*[1,0;0,exp(7)]*U

ans =

157.5190 -469.5571
-313.0380 940.1141

>> U*[1,0;0,exp(7)]*inv(U)

ans =

157.5190 313.0380
469.5571 940.1141

>> U

U =

2 1
-1 3
>> A=[1,2;3,6]

A =

1 2
3 6
>> exp(A)

ans =

2.7183 7.3891
20.0855 403.4288

So unfortunately, it does not work.

\exp(A) \neq \frac{1}{7} \left( \begin{array}{cc} 2 & 1 \\ -1 & 3 \end{array} \right) \left( \begin{array}{cc} e^0 & 0 \\ 0 & e^7 \end{array} \right) \left( \begin{array}{cc} 3 & -1 \\ 1 & 2 \end{array} \right)
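
(A likely culprit in the check above: MATLAB's exp operates elementwise on a matrix, while the matrix exponential is the built-in expm. A quick comparison, reusing the A and U from the session:)

% exp(A) above is the elementwise exponential, not the matrix exponential.
% expm(A) should agree with the U*[1,0;0,exp(7)]*inv(U) result printed earlier:
norm(expm(A) - U * [1, 0; 0, exp(7)] * inv(U))   % expect ~0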
 
brydustin said:
What's a dickfore?

If you don't know, you're doin' it wrong!
 
Dickfore said:
If you don't know, you're doin' it wrong!

Ba dum ching!
 
I'm wondering now: if the situation is exp(tA), is it best to re-evaluate exp(tD) at every iteration (for a given t)?
Assuming exp(tA) = Q exp(tD) Q^{-1}, where t runs from 0 to infinity.

We need to evaluate periodically until ∂exp(tA)/∂t → 0.
I've been reading through: http://www.cs.cornell.edu/cv/researchpdf/19ways+.pdf
For example, on page 17 there is an evaluation (Lagrange interpolation) which reduces the matrix problem to evaluating a different scalar for each given time (very fast). Yet it's probably not the most stable method, right? Also, later (top of pg 25) they discuss a method like the one we've been talking about, except that they use the Schur decomposition and discuss ways of finding the upper triangle in terms of the diagonal, which is a function of t...

What is the best course of action for evaluating exp(tA) for various t? Symbolic evaluation?
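
(One natural option, assuming the matrix stays diagonalizable: factor once with eig, then each new t costs only a diagonal exponential and two matrix products. A minimal sketch, with a hypothetical time grid:)

[Q, D] = eig(A);                      % factor once, reuse for every t
lam = diag(D);                        % vector of eigenvalues
for t = 0:0.1:10                      % hypothetical time grid
    E = Q * diag(exp(t * lam)) / Q;   % exp(t*A) = Q * exp(t*D) * inv(Q)
    % ... use E; stop once it has effectively converged ...
end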
 