The exponential of a matrix

  • #1
How do you calculate the exponential of a matrix?

e.g. exp(-iaL), where L is a 4×4 matrix (like a group generator).
 

Answers and Replies

  • #2
By its power series:

[tex]\exp(-iaL)=\sum_{n=0}^{\infty}\frac{(-ia)^nL^n}{n!} [/tex]
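For example, here is a minimal numerical sketch of this power series, truncated after a fixed number of terms (assuming Python with NumPy; the 4×4 matrix L and the value of a below are made-up examples):

[code]
import numpy as np

def expm_series(M, n_terms=30):
    """Approximate exp(M) by summing the first n_terms terms of its power series."""
    result = np.eye(M.shape[0], dtype=complex)
    term = np.eye(M.shape[0], dtype=complex)
    for n in range(1, n_terms):
        term = term @ M / n                      # term is now M^n / n!
        result = result + term
    return result

a = 0.3
L = np.diag([1.0, 2.0, 3.0, 4.0])                # made-up diagonal 4x4 example, easy to check by hand
print(np.round(np.diag(expm_series(-1j * a * L)), 6))
print(np.round(np.exp(-1j * a * np.diag(L)), 6)) # should match the line above
[/code]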
 
  • #3
Ben Niehoff
Science Advisor
Gold Member
Another method is to diagonalize L:

[tex]L = P^{-1}DP[/tex]

where

[tex]D = \left[ \begin{array}{cccc}\lambda_1 & & & \\ & \lambda_2 & & \\ & & \ddots & \\ & & & \lambda_n \end{array} \right][/tex]

for the eigenvalues [itex]\lambda_k[/itex]. Then

[tex]e^L = e^{P^{-1}DP} = P^{-1}e^DP[/tex]

(sorry, I forget the proof of this). Then [itex]e^D[/itex] is easy to evaluate:

[tex]e^D = \left[ \begin{array}{cccc}e^{\lambda_1} & & & \\ & e^{\lambda_2} & & \\ & & \ddots & \\ & & & e^{\lambda_n} \end{array} \right][/tex]
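A minimal numerical sketch of this diagonalization route (assuming Python with NumPy/SciPy and a diagonalizable L; the 2×2 matrix below is a made-up example, and note that numpy's eig returns the eigenvectors as the columns of a matrix V with L = V D V^{-1}):

[code]
import numpy as np
from scipy.linalg import expm            # library routine, used only as an independent check

L = np.array([[0.0, 2.0],
              [-2.0, 0.0]])              # made-up 2x2 example (antisymmetric, complex eigenvalues)

eigvals, V = np.linalg.eig(L)            # L = V @ diag(eigvals) @ inv(V)
eD = np.diag(np.exp(eigvals))            # exponentiate the diagonal entries one by one
eL = V @ eD @ np.linalg.inv(V)           # e^L via the diagonalization

print(np.allclose(eL, expm(L)))          # expected: True (up to tiny imaginary round-off parts)
[/code]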
 
  • #4
HallsofIvy
Science Advisor
Homework Helper
Ben Niehoff's suggestion is almost the same as Sangreda's: in order to evaluate the sum Sangreda gives efficiently, you really need a diagonal matrix. Unfortunately, not every matrix is diagonalizable; in that case you have to use the "Jordan normal form", which leads to a much more complicated formula.

Also, to prove Ben Niehoff's formula, you can use the Taylor series for [itex]e^x[/itex]. If [itex]A = PDP^{-1}[/itex], where D is diagonal, note that [itex]A^2= (PDP^{-1})^2= (PDP^{-1})(PDP^{-1})= PD(P^{-1}P)DP^{-1}= PD^2P^{-1}[/itex]. Then [itex]A^3= (PDP^{-1})^3= (PDP^{-1})^2(PDP^{-1})= PD^3P^{-1}[/itex], and you can prove generally (by induction) that [itex]A^n= (PDP^{-1})^n= PD^nP^{-1}[/itex].

Then
[tex]e^A= I+ A+ \frac{1}{2}A^2+ \cdots+ \frac{1}{n!}A^n+ \cdots[/tex]
[tex]= I+ PDP^{-1}+ \frac{1}{2}(PDP^{-1})^2+ \cdots+ \frac{1}{n!}(PDP^{-1})^n+ \cdots[/tex]
[tex]= PP^{-1}+ PDP^{-1}+ \frac{1}{2}PD^2P^{-1}+ \cdots+ \frac{1}{n!}PD^nP^{-1}+ \cdots[/tex]
[tex]= P\left(I+ D+ \frac{1}{2}D^2+ \cdots+ \frac{1}{n!}D^n+ \cdots\right)P^{-1}[/tex]
[tex]= Pe^DP^{-1}[/tex]
and [itex]e^D[/itex] is just the diagonal matrix with [itex]e^{a}[/itex] on the diagonal, where [itex]a[/itex] runs over the diagonal elements of D.

With that "i" you might find it better to use [itex]e^{iA}= \cos(A)+ i\sin(A)[/itex]. You can find cos(A) and sin(A) by using their Taylor series in exactly the same way: if A is diagonalizable, [itex]A= PDP^{-1}[/itex], then [itex]\cos(A)= P\cos(D)P^{-1}[/itex] and [itex]\sin(A)= P\sin(D)P^{-1}[/itex]. Of course, cos(D) is the diagonal matrix with diagonal elements cos(a) for every a on the diagonal of D, and sin(D) is the diagonal matrix with diagonal elements sin(a) for every a on the diagonal of D.
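As a quick numerical check of the identity [itex]e^{iA} = \cos(A) + i\sin(A)[/itex], here is a sketch assuming Python with SciPy, whose expm/cosm/sinm routines compute these matrix functions (the matrix A below is a made-up example):

[code]
import numpy as np
from scipy.linalg import expm, cosm, sinm

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])               # made-up 2x2 example

lhs = expm(1j * A)                       # e^{iA}
rhs = cosm(A) + 1j * sinm(A)             # cos(A) + i sin(A)
print(np.allclose(lhs, rhs))             # expected: True
[/code]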
 
  • #5
1) Functions of operators

Let [itex]A[/itex] be an operator, [itex]a_k[/itex] its eigenvalues and [itex]|\Psi_k \rangle[/itex] the eigenvectors,
i.e. you have the eigenequation [itex]A |\Psi_k \rangle = a_k |\Psi_k \rangle[/itex].
Functions of an operator [itex]A[/itex] are calculated as

[tex]f(A) = \sum_{k=1}^N f(a_k) |\Psi_k \rangle \langle \Psi_k|[/tex]

Let's call this equation (1).
(see Plenio, Lecture Notes on Quantum Mechanics, 2002, p. 51, eq. (1.86), http://www.lsr.ph.ic.ac.uk/~plenio/teaching.html).


2) Functions of matrices

Equation (1) can also be used to calculate functions of matrices.
Let A be a matrix with the eigenequation

[tex]A \vec{\Psi}_k = a_k \vec{\Psi}_k[/tex]

Then equation (1) becomes

[tex]f(A) = \sum_{k=1}^N f(a_k) \vec{\Psi}_k (\vec{\Psi}_k)^T[/tex]

[itex]a_k[/itex] is the eigenvalue, [itex]\vec{\Psi}_k[/itex] is the eigenvector (column vector) and [itex]\vec{\Psi}_k^T[/itex] is the transposed eigenvector (row vector). (This form assumes the eigenvectors are chosen orthonormal, as is possible e.g. for a real symmetric matrix.)


3) Calculate exp(L)

In your case we have a matrix L.
You first have to calculate the eigenvalues [itex]l_k[/itex] and eigenvectors [itex]\vec{\Psi}_k[/itex] of L, i.e. you have the eigenequation
[itex]L \vec{\Psi}_k = l_k \vec{\Psi}_k[/itex]

And for your case we consider the function [itex]f(x)=\mbox{exp}(x)[/itex] such that
[itex]f(L)=\mbox{exp}(L)[/itex] and [itex]f(l_i) = \mbox{exp}(l_i)[/itex]

Plugging this into equation (1) we get

[tex]f(L) = \sum_{k=1}^N f(l_k) \vec{\Psi}_k \vec{\Psi}_k^T[/tex]

[tex]\mbox{exp}(L) = \sum_{k=1}^N \mbox{exp}(l_k) \vec{\Psi}_k \vec{\Psi}_k^T[/tex]


4) Calculate exp(-iaL)

In order to calculate [itex]\mbox{exp}(-iaL)[/itex] you just multiply your
eigenequation [itex]L \vec{\Psi}_k = l_k \vec{\Psi}_k[/itex] by [itex](-ia)[/itex]
and get the new eigenequation
[itex](-ia)L \vec{\Psi}_k = (-ia)l_k \vec{\Psi}_k[/itex]

You can interpret this as an eigenequation for the matrix
[itex](-ia)L[/itex], whose eigenvalues are [itex](-ia)l_k[/itex].

Thus, you can calculate [itex]\mbox{exp}(-iaL)[/itex] as

[tex]\mbox{exp}(-iaL) = \sum_{k=1}^N \mbox{exp}(-ial_k) \vec{\Psi}_k \vec{\Psi}_k^T[/tex]
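Here is a minimal numerical sketch of this spectral sum (assuming Python with NumPy/SciPy and a symmetric L, so that the eigenvectors can be chosen orthonormal and the outer-product sum is valid; the 4×4 matrix L and the value of a are made-up examples):

[code]
import numpy as np
from scipy.linalg import expm            # general-purpose routine, used only as a cross-check

a = 0.7
L = np.array([[0.0, 1.0, 0.0, 0.0],
              [1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0],
              [0.0, 0.0, 1.0, 0.0]])     # made-up symmetric 4x4 "generator"-like matrix

lvals, Psi = np.linalg.eigh(L)           # eigenvalues l_k and orthonormal eigenvectors (columns)

# exp(-i a L) = sum_k exp(-i a l_k) |Psi_k><Psi_k|
U = sum(np.exp(-1j * a * lk) * np.outer(Psi[:, k], Psi[:, k])
        for k, lk in enumerate(lvals))

print(np.allclose(U, expm(-1j * a * L))) # expected: True
[/code]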
 
  • #6
I have seen a method of decomposing the matrix [itex]A[/itex] as [itex]A=D+N[/itex], where [itex]D[/itex] is diagonalizable and [itex]N[/itex] is nilpotent; the method is mentioned on Wikipedia. However, I cannot find its original source or other relevant papers. Can anybody tell me something about the details?
 
  • #7
morphism
Science Advisor
Homework Helper
Look up Jordan normal form.
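For what it's worth, here is a small numerical sketch of the D + N decomposition asked about in #6 (the Jordan–Chevalley decomposition), assuming Python with NumPy/SciPy. The single Jordan block below is a made-up example in which D is the diagonal part, N the nilpotent part, and the two commute, so exp(D + N) = exp(D) exp(N):

[code]
import numpy as np
from scipy.linalg import expm            # used as a reference result

lam = 2.0
J = np.array([[lam, 1.0, 0.0],
              [0.0, lam, 1.0],
              [0.0, 0.0, lam]])          # made-up 3x3 Jordan block, J = D + N

D = lam * np.eye(3)                      # diagonalizable (here already diagonal) part
N = J - D                                # nilpotent part: N @ N @ N == 0

expN = np.eye(3) + N + N @ N / 2         # the series for exp(N) terminates after finitely many terms
print(np.allclose(expm(J), np.exp(lam) * expN))   # exp(J) = exp(D) exp(N) since D and N commute
[/code]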
 
  • #8
The Jordan form is problematic numerically, and thus only useful for certain kinds of matrices (e.g. ones we can evaluate symbolically). The problem with the Jordan form is that very slight perturbations of the matrix can destroy the Jordan structure. The Schur decomposition is more numerically robust, but I haven't heard of numeric algorithms that use this decomposition to compute the matrix exponential.

MATLAB uses a scaling and squaring approach:

[tex]\exp(A) \approx \left(\exp(A/t)\right)^t[/tex]

First the matrix is divided by some number t so that the scaled matrix can be approximated with a Padé approximation. A Padé approximation generally provides a better fit than a truncated Taylor series. Once the matrix exponential of A/t is approximated, the exponential of A is recovered via the above relation.

The method used by MATLAB is superior for most matrices. The diagonalization method is only superior when the eigenvectors are nearly orthogonal (i.e., the eigenvector matrix is well conditioned). There are many cases where numeric problems may arise, for instance if the eigenvalues are too close together or too far apart. In such cases the numeric methods may not yield accurate computations of the matrix exponential.
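A toy sketch of the scaling-and-squaring idea described above (assuming Python with NumPy/SciPy; a real implementation such as MATLAB's or SciPy's expm uses a Padé approximant for the scaled matrix rather than the crude truncated series used here, and the matrix A is a made-up example):

[code]
import numpy as np
from scipy.linalg import expm            # reference result

A = np.array([[0.0, 5.0],
              [-5.0, 1.0]])              # made-up 2x2 example

s = 8
B = A / 2**s                             # scale so that the norm of B is small
X = np.eye(2) + B + B @ B / 2 + B @ B @ B / 6   # crude low-order approximation of exp(B)
for _ in range(s):
    X = X @ X                            # undo the scaling by repeated squaring: exp(A) = exp(A/2^s)^(2^s)

print(np.allclose(X, expm(A), atol=1e-4))        # expected: True (to modest accuracy)
[/code]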
 
