How can we accurately compute the matrix exponential?

  • Context: Graduate
  • Thread starter: princeton118
  • Tags: Exponential Matrix

Discussion Overview

The discussion focuses on methods for computing the matrix exponential, specifically for matrices like L in the expression exp(-iaL). Various approaches are explored, including power series, diagonalization, Jordan normal form, and numerical methods.

Discussion Character

  • Technical explanation
  • Mathematical reasoning
  • Debate/contested

Main Points Raised

  • Some participants propose using the power series expansion for the matrix exponential, expressed as exp(-iaL) = ∑_{n=0}^{∞}((-ia)^nL^n)/n!.
  • Others suggest diagonalizing the matrix L, leading to a simpler evaluation of the exponential via the diagonal matrix D, where e^D is straightforward to compute.
  • A participant notes that not all matrices are diagonalizable and introduces the concept of Jordan normal form, which complicates the computation.
  • Another participant discusses the use of Taylor series to prove the properties of matrix exponentials and suggests using trigonometric functions for complex matrices.
  • One contributor mentions a method of decomposing matrices into a diagonal and nilpotent part, referencing a source but lacking details.
  • Concerns are raised about the numerical stability of Jordan form, with a participant advocating for the Schur decomposition as a more robust alternative for certain matrices.
  • A method used in MATLAB is described, involving scaling and squaring along with Padé approximation for better accuracy in computing matrix exponentials.

Areas of Agreement / Disagreement

Participants express multiple competing views on the best method for computing matrix exponentials, with no consensus reached on a single approach. Disagreements exist regarding the effectiveness and applicability of different methods, particularly concerning numerical stability and the conditions under which certain methods are superior.

Contextual Notes

Limitations include the dependence on the diagonalizability of matrices, the potential numerical issues with Jordan normal form, and the conditions under which various methods yield accurate results.

princeton118
How do you calculate the exponential of a matrix?

e.g. exp(-iaL), where L is a 4×4 matrix (like a group generator)
 
By its power series:

[tex]\exp(-iaL)=\sum_{n=0}^{\infty}\frac{(-ia)^nL^n}{n!}[/tex]
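A minimal numerical sketch of this power-series approach, assuming a small 2×2 generator for illustration (the matrix and truncation depth are illustrative, not from the thread):

```python
import numpy as np

def expm_series(M, terms=60):
    """Approximate exp(M) by truncating the power series sum_n M^n / n!."""
    result = np.eye(M.shape[0], dtype=complex)
    term = np.eye(M.shape[0], dtype=complex)
    for n in range(1, terms):
        term = term @ M / n          # builds M^n / n! incrementally
        result = result + term
    return result

# Example: exp(-i a L) for a hypothetical 2x2 generator L
a = 0.5
L = np.array([[0.0, 1.0], [1.0, 0.0]])
U = expm_series(-1j * a * L)
```

Truncation works well here because the series converges for every matrix; for matrices with large norm, many more terms (or the scaling tricks discussed later in the thread) are needed.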
 
Another method is to diagonalize L:

[tex]L = P^{-1}DP[/tex]

where

[tex]D = \left[ \begin{array}{cccc}\lambda_1 & & & \\ & \lambda_2 & & \\ & & \ddots & \\ & & & \lambda_n \end{array} \right][/tex]

for the eigenvalues [itex]\lambda_k[/itex]. Then

[tex]e^L = e^{P^{-1}DP} = P^{-1}e^DP[/tex]

(sorry, I forget the proof of this). Then [itex]e^D[/itex] is easy to evaluate:

[tex]e^D = \left[ \begin{array}{cccc}e^{\lambda_1} & & & \\ & e^{\lambda_2} & & \\ & & \ddots & \\ & & & e^{\lambda_n} \end{array} \right][/tex]
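A minimal sketch of the diagonalization route using NumPy's `eig`; the example matrix is illustrative, and the function assumes the input is actually diagonalizable:

```python
import numpy as np

def expm_diag(M):
    """exp(M) via eigendecomposition M = P D P^{-1}, so exp(M) = P exp(D) P^{-1}.
    Assumes M is diagonalizable; inaccurate or invalid otherwise."""
    eigvals, P = np.linalg.eig(M)    # columns of P are eigenvectors
    return P @ np.diag(np.exp(eigvals)) @ np.linalg.inv(P)

# Example: exp(-i a L) for a hypothetical 2x2 generator L
a = 0.5
L = np.array([[0.0, 1.0], [1.0, 0.0]])
U = expm_diag(-1j * a * L)
```

Note the convention here is A = PDP^{-1} (eigenvectors as columns of P), matching the proof in the next post rather than the L = P^{-1}DP form above; the two differ only in which matrix is called P.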
 
Ben Niehoff's suggestion is almost the same as Sangreda's: in order to efficiently evaluate the sum Sangreda gives, you really need a diagonal matrix. Unfortunately, not every matrix is diagonalizable, and then you have to use the "Jordan normal form", which leads to a much more complicated formula.

Also, to prove Ben Niehoff's formula, you can use the Taylor series for [itex]e^x[/itex]. If [itex]A = PDP^{-1}[/itex], where D is diagonal, note that [itex]A^2= (PDP^{-1})^2= (PDP^{-1})(PDP^{-1})= PD(P^{-1}P)DP^{-1}= PD^2P^{-1}[/itex]. Then [itex]A^3= (PDP^{-1})^3= (PDP^{-1})^2(PDP^{-1})= PD^3P^{-1}[/itex], and you can prove generally (by induction) that [itex]A^n= (PDP^{-1})^n= PD^nP^{-1}[/itex].

Then
[tex]e^A = I + A + \frac{1}{2}A^2 + \cdots + \frac{1}{n!}A^n + \cdots[/tex]
[tex]= I + PDP^{-1} + \frac{1}{2}(PDP^{-1})^2 + \cdots + \frac{1}{n!}(PDP^{-1})^n + \cdots[/tex]
[tex]= PP^{-1} + PDP^{-1} + \frac{1}{2}PD^2P^{-1} + \cdots + \frac{1}{n!}PD^nP^{-1} + \cdots[/tex]
[tex]= P\left(I + D + \frac{1}{2}D^2 + \cdots + \frac{1}{n!}D^n + \cdots\right)P^{-1}[/tex]
[tex]= Pe^DP^{-1}[/tex]

and [itex]e^D[/itex] is just the diagonal matrix with [itex]e^a[/itex] on the diagonal, where [itex]a[/itex] is a diagonal element of D.

With that "i" you might find it better to use [itex]e^{iA}= \cos(A)+ i\sin(A)[/itex]. You can find [itex]\cos(A)[/itex] and [itex]\sin(A)[/itex] by using their Taylor series in exactly the same way: if A is diagonalizable, [itex]A= PDP^{-1}[/itex], then [itex]\cos(A)= P\cos(D)P^{-1}[/itex] and [itex]\sin(A)= P\sin(D)P^{-1}[/itex]. Of course, [itex]\cos(D)[/itex] is the diagonal matrix with diagonal elements [itex]\cos(a)[/itex] for every [itex]a[/itex] on the diagonal of D, and similarly for [itex]\sin(D)[/itex].
 
1) Functions of operators

Let [itex]A[/itex] be an operator, [itex]a_k[/itex] its eigenvalues and [itex]|\Psi_k \rangle[/itex] the eigenvectors,
i.e. you have the eigenequation [itex]A |\Psi_k \rangle = a_k |\Psi_k \rangle[/itex].
Functions of an operator [itex]A[/itex] are calculated as

[tex]f(A) = \sum_{k=1}^N f(a_k) |\Psi_k \rangle \langle \Psi_k|[/tex]

Let's call this equation (1).
(see Plenio, lecture notes on quantum mechanics 2002, page 51, eq. (1.86), http://www.lsr.ph.ic.ac.uk/~plenio/teaching.html ).

2) Functions of matrices

Equation (1) can also be used to calculate functions of matrices.
Let A be a matrix with the eigenequation

[tex]A \vec{\Psi}_k = a_k \vec{\Psi}_k[/tex]

Then equation (1) becomes

[tex]f(A) = \sum_{k=1}^N f(a_k) \vec{\Psi}_k (\vec{\Psi}_k)^T[/tex]

[itex]a_k[/itex] is the eigenvalue, [itex]\vec{\Psi}_k[/itex] is the eigenvector (column vector) and [itex]\vec{\Psi}_k^T[/itex] is the transposed eigenvector (row vector).

3) Calculate exp(L)

In your case we have a matrix L.
You first have to calculate the eigenvalues [itex]l_k[/itex] and eigenvectors [itex]\vec{\Psi}_k[/itex] of L, i.e. you have the eigenequation
[itex]L \vec{\Psi}_k = l_k \vec{\Psi}_k[/itex]

And for your case we consider the function [itex]f(x)=\mbox{exp}(x)[/itex] such that
[itex]f(L)=\mbox{exp}(L)[/itex] and [itex]f(l_i) = \mbox{exp}(l_i)[/itex]

Plugging this into equation (1) we get

[tex]f(L) = \sum_{k=1}^N f(l_k) \vec{\Psi}_k \vec{\Psi}_k^T[/tex]

[tex]\mbox{exp}(L) = \sum_{k=1}^N \mbox{exp}(l_k) \vec{\Psi}_k \vec{\Psi}_k^T[/tex]

4) Calculate [itex]\mbox{exp}(-iaL)[/itex]

In order to calculate [itex]\mbox{exp}(-iaL)[/itex] you just multiply your
eigenequation [itex]L \vec{\Psi}_k = l_k \vec{\Psi}_k[/itex] by [itex](-ia)[/itex]
and get the new eigenequation
[itex](-ia)L \vec{\Psi}_k = (-ia)l_k \vec{\Psi}_k[/itex]

You can interpret this as an eigenequation for the matrix
[itex](-ia)L[/itex], whose eigenvalues are [itex](-ia)l_k[/itex].

Thus, you can calculate [itex]\mbox{exp}(-iaL)[/itex] as

[tex]\mbox{exp}(-iaL) = \sum_{k=1}^N \mbox{exp}(-ial_k) \vec{\Psi}_k \vec{\Psi}_k^T[/tex]
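A minimal sketch of this spectral-sum recipe. It assumes L is real symmetric, so `eigh` returns orthonormal eigenvectors and the outer-product sum above is valid as written; the example matrix is illustrative:

```python
import numpy as np

# Spectral sum: exp(-ia L) = sum_k exp(-ia l_k) psi_k psi_k^T,
# valid when the eigenvectors psi_k form an orthonormal set.
a = 0.5
L = np.array([[0.0, 1.0], [1.0, 0.0]])   # hypothetical symmetric generator
l, Psi = np.linalg.eigh(L)               # eigh: orthonormal eigenvectors as columns

U = sum(np.exp(-1j * a * lk) * np.outer(Psi[:, k], Psi[:, k])
        for k, lk in enumerate(l))
```

For a non-symmetric L the outer product [itex]\vec{\Psi}_k \vec{\Psi}_k^T[/itex] must be replaced by the projector built from right and left eigenvectors, which is essentially the diagonalization formula of the earlier posts.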
 
I have seen a method of decomposing the matrix [tex]A[/tex] as [tex]A=D+N[/tex], where [tex]D[/tex] is diagonalizable and [tex]N[/tex] is nilpotent; the method is mentioned on Wikipedia. However, I cannot find its original source or other relevant papers. Can anybody tell me something about the details?
 
Look up Jordan normal form.
 
The Jordan form is problematic numerically, and thus only useful for certain kinds of matrices (e.g. ones we can evaluate symbolically). The problem with the Jordan form is that very slight changes in the eigenvectors destroy the Jordan representation. The Schur decomposition is more numerically robust, but I haven't heard of numeric algorithms that use this decomposition to compute the matrix exponential.

Matlab uses a scaling and squaring operation:

exp(A) ≈ exp(A/t)^t

First the matrix is divided by some number t so that exp(A/t) can be approximated with a Padé approximation. A Padé approximation provides a better fit than a Taylor series. Once the matrix exponential of A/t is approximated, the exponential of A is obtained via the above relation.

The method used by MATLAB is superior for most matrices. The diagonalization method is only superior when the eigenvectors are nearly orthogonal. There are many cases where numeric problems may arise, for instance if the eigenvalues are too close together or too far apart. In such cases the numeric methods may not yield accurate computations of the matrix exponential.
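A toy sketch of scaling and squaring, using only the lowest-order [1/1] Padé approximant (I + X/2)(I − X/2)^{-1} for the scaled exponential; MATLAB's expm uses much higher-order Padé forms and chooses the scaling adaptively, so this is an illustration of the idea, not the production algorithm:

```python
import numpy as np

def expm_scale_square(M, s=10):
    """Sketch of scaling and squaring: exp(M) = (exp(M / 2^s))^(2^s).
    The scaled exponential exp(M / 2^s) is approximated by the [1/1]
    Pade approximant (I + X/2)(I - X/2)^{-1}, then squared s times."""
    n = M.shape[0]
    X = M / 2.0**s
    I = np.eye(n, dtype=M.dtype)
    E = np.linalg.solve(I - X / 2, I + X / 2)   # [1/1] Pade of exp(X)
    for _ in range(s):
        E = E @ E                               # square back up to exp(M)
    return E

A = np.array([[0.0, 1.0], [-1.0, 0.0]])   # illustrative rotation generator
R = expm_scale_square(A)
```

Because the Padé step only needs to be accurate for the small matrix A/2^s, the approach avoids both eigendecomposition and long power series, which is why it is robust for most matrices.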
 
