# The exponential of a matrix

1. Jan 7, 2008

### princeton118

How to calculate a matrix's exponential?

e.g. exp(-iaL), where L is a 4×4 matrix (like a group generator).

2. Jan 7, 2008

### Sagreda

By its power series:

$$\exp(-iaL)=\sum_{n=0}^{\infty}\frac{(-ia)^nL^n}{n!}$$
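The power series can be summed directly on a computer. A minimal sketch (the 2×2 matrix `L` and the parameter `a` here are hypothetical stand-ins for the original 4×4 generator):

```python
import numpy as np

def expm_series(M, terms=50):
    """Approximate exp(M) by summing the power series sum_n M^n / n!."""
    n_dim = M.shape[0]
    result = np.eye(n_dim, dtype=complex)
    term = np.eye(n_dim, dtype=complex)
    for n in range(1, terms):
        term = term @ M / n          # builds M^n / n! incrementally
        result = result + term
    return result

# Hypothetical example: a 2x2 generator (the Pauli matrix sigma_x)
a = 0.5
L = np.array([[0.0, 1.0], [1.0, 0.0]])
approx = expm_series(-1j * a * L)
```

Since this `L` squares to the identity, the exact answer is $\cos(a)\,I - i\sin(a)\,L$, which gives a quick sanity check on the truncated sum.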

3. Jan 7, 2008

### Ben Niehoff

Another method is to diagonalize L:

$$L = P^{-1}DP$$

where

$$D = \left[ \begin{array}{cccc}\lambda_1 & & & \\ & \lambda_2 & & \\ & & \ddots & \\ & & & \lambda_n \end{array} \right]$$

for the eigenvalues $\lambda_k$. Then

$$e^L = e^{P^{-1}DP} = P^{-1}e^DP$$

(sorry, I forget the proof of this). Then $e^D$ is easy to evaluate:

$$e^D = \left[ \begin{array}{cccc}e^{\lambda_1} & & & \\ & e^{\lambda_2} & & \\ & & \ddots & \\ & & & e^{\lambda_n} \end{array} \right]$$
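A quick numerical sketch of the diagonalization approach. Note that NumPy's `eig` returns eigenvectors as the *columns* of P, so the reconstruction reads $L = PDP^{-1}$ (the inverse sits on the right, the opposite placement from the notation above); the test matrix here is a hypothetical example:

```python
import numpy as np

def expm_diag(M):
    """exp(M) via diagonalization M = P D P^{-1} (assumes M is diagonalizable)."""
    eigvals, P = np.linalg.eig(M)   # columns of P are the eigenvectors
    return P @ np.diag(np.exp(eigvals)) @ np.linalg.inv(P)

# Hypothetical symmetric example with eigenvalues 1 and 3
M = np.array([[2.0, 1.0], [1.0, 2.0]])
E = expm_diag(M)
```

Because the eigenvalues of this M are 1 and 3, the trace of E must be $e^1 + e^3$, which makes the result easy to verify.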

4. Jan 7, 2008

### HallsofIvy

Staff Emeritus
Ben Niehoff's suggestion is almost the same as Sangreda's. In order to efficiently evaluate the sum Sangreda gives, you really need to use a diagonal matrix. Unfortunately, not every matrix is diagonalizable and you have to use "Jordan Normal Form" which leads to a much more complicated formula.

Also, to prove Ben Niehoff's formula, you can use the Taylor series for $e^x$. If $A = PDP^{-1}$, where D is diagonal, note that $A^2= (PDP^{-1})^2= (PDP^{-1})(PDP^{-1})= PD(P^{-1}P)DP^{-1}= PD^2P^{-1}$. Then $A^3= (PDP^{-1})^3= (PDP^{-1})^2(PDP^{-1})= PD^3P^{-1}$, and you can prove generally (by induction) that $A^n= (PDP^{-1})^n= PD^nP^{-1}$.

Then
$$e^A = I + A + \frac{1}{2}A^2 + \cdots + \frac{1}{n!}A^n + \cdots$$
$$= I + PDP^{-1} + \frac{1}{2}(PDP^{-1})^2 + \cdots + \frac{1}{n!}(PDP^{-1})^n + \cdots$$
$$= (PP^{-1}) + PDP^{-1} + \frac{1}{2}(PD^2P^{-1}) + \cdots + \frac{1}{n!}(PD^nP^{-1}) + \cdots$$
$$= P\left(I + D + \frac{1}{2}D^2 + \cdots + \frac{1}{n!}D^n + \cdots\right)P^{-1}$$
$$= Pe^DP^{-1}$$
and $e^D$ is just the diagonal matrix with $e^{a}$ on the diagonal, where $a$ is a diagonal element of D.

With that "i" you might find it better to use $e^{iA} = \cos(A) + i\sin(A)$. You can find $\cos(A)$ and $\sin(A)$ from their Taylor series in exactly the same way: if A is diagonalizable, $A = PDP^{-1}$, then $\cos(A) = P\cos(D)P^{-1}$ and $\sin(A) = P\sin(D)P^{-1}$. Of course, $\cos(D)$ is the diagonal matrix with diagonal elements $\cos(a)$ for every $a$ on the diagonal of D, and $\sin(D)$ is the diagonal matrix with diagonal elements $\sin(a)$ for every $a$ on the diagonal of D.
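This Euler-type relation is easy to verify numerically: build $\cos(A)$, $\sin(A)$, and $e^{iA}$ by applying each function to the eigenvalues, then compare. The 2×2 matrix below is a hypothetical example:

```python
import numpy as np

# Hypothetical symmetric example (the Pauli matrix sigma_x)
A = np.array([[0.0, 1.0], [1.0, 0.0]])

# Diagonalize A = P D P^{-1}, then apply each function to the eigenvalues
d, P = np.linalg.eig(A)
Pinv = np.linalg.inv(P)
cosA = P @ np.diag(np.cos(d)) @ Pinv
sinA = P @ np.diag(np.sin(d)) @ Pinv
expiA = P @ np.diag(np.exp(1j * d)) @ Pinv
```

For this particular A (which satisfies $A^2 = I$) the closed form is $e^{iA} = \cos(1)\,I + i\sin(1)\,A$, so the identity can be checked against a known answer.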

5. Jan 8, 2008

### Edgardo

1) Functions of operators

Let $A$ be an operator, $a_k$ its eigenvalues and $|\Psi_k \rangle$ the eigenvectors,
i.e. you have the eigenequation $A |\Psi_k \rangle = a_k |\Psi_k \rangle$.
Functions of an operator $A$ are calculated as

$$f(A) = \sum_{k=1}^N f(a_k) |\Psi_k \rangle \langle \Psi_k|$$

Let's call this equation (1).
(see Plenio, lecture notes on quantum mechanics 2002, page 51, eq. (1.86), http://www.lsr.ph.ic.ac.uk/~plenio/teaching.html).

2) Functions of matrices

Equation (1) can also be used to calculate functions of matrices.
Let A be a matrix with the eigenequation

$$A \vec{\Psi}_k = a_k \vec{\Psi}_k$$

Then equation (1) becomes

$$f(A) = \sum_{k=1}^N f(a_k) \vec{\Psi}_k (\vec{\Psi}_k)^T$$

$a_k$ is the eigenvalue, $\vec{\Psi}_k$ is the eigenvector (column vector) and $\vec{\Psi}_k^T$ is the transposed eigenvector (row vector). (This form assumes the eigenvectors are chosen orthonormal, as they can be for a symmetric or Hermitian matrix; for a general diagonalizable matrix the row vectors must come from the inverse of the eigenvector matrix instead.)

3) Calculate exp(L)

In your case we have a matrix L.
You first have to calculate the eigenvalues $l_k$ and eigenvectors $\vec{\Psi}_k$ of L, i.e. you have the eigenequation
$L \vec{\Psi}_k = l_k \vec{\Psi}_k$

And for your case we consider the function $f(x)=\mbox{exp}(x)$ such that
$f(L)=\mbox{exp}(L)$ and $f(l_i) = \mbox{exp}(l_i)$

Plugging this into equation (1) we get

$$f(L) = \sum_{k=1}^N f(l_k) \vec{\Psi}_k \vec{\Psi}_k^T$$

$$\mbox{exp}(L) = \sum_{k=1}^N \mbox{exp}(l_k) \vec{\Psi}_k \vec{\Psi}_k^T$$

4) Calculate $$\mbox{exp}(-iaL)$$

In order to calculate $\mbox{exp}(-iaL)$ you just multiply your
eigenequation $L \vec{\Psi}_k = l_k \vec{\Psi}_k$ by $(-ia)$
and get the new eigenequation
$(-ai)L \vec{\Psi}_k = (-ai)l_k \vec{\Psi}_k$

You can interpret this as eigenequation with the matrix
$(-ai)L$ whose eigenvalues are $(-ai)l_k$

Thus, you can calculate $\mbox{exp}(-aiL)$ as

$$\mbox{exp}(-aiL) = \sum_{k=1}^N \mbox{exp}(-ail_k) \vec{\Psi}_k \vec{\Psi}_k^T$$
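This spectral sum translates almost line for line into code. A sketch for a hypothetical symmetric 2×2 `L` (NumPy's `eigh` guarantees the orthonormal eigenvectors the formula needs):

```python
import numpy as np

a = 0.3
# Hypothetical symmetric generator with eigenvalues -1 and 3
L = np.array([[1.0, 2.0], [2.0, 1.0]])

# eigh returns orthonormal eigenvectors (as columns) for a symmetric matrix
lvals, vecs = np.linalg.eigh(L)

# exp(-iaL) = sum_k exp(-i a l_k) |psi_k><psi_k|
expL = sum(np.exp(-1j * a * lk) * np.outer(v, v.conj())
           for lk, v in zip(lvals, vecs.T))
```

Since L is Hermitian, $e^{-iaL}$ must be unitary, which gives an independent check beyond simply re-deriving the same sum.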

Last edited by a moderator: May 3, 2017
6. Feb 25, 2008

### bchui

I have seen a method of decomposing the matrix $$A$$ as $$A=D+N$$, where $$D$$ is diagonalizable, $$N$$ is nilpotent, and $$D$$ and $$N$$ commute. The method is mentioned on wiki; however, I cannot find its original source or other relevant papers. Can anybody tell me something about the details?

7. Feb 25, 2008

### morphism

Look up Jordan normal form.

8. Feb 26, 2008

### John Creighto

The Jordan form is problematic numerically, and thus only useful for certain kinds of matrices (e.g. ones we can evaluate symbolically). The problem with the Jordan form is that very slight changes in the matrix can destroy the Jordan representation. The Schur decomposition is more numerically robust, but I haven't heard of numeric algorithms that use this decomposition to compute the matrix exponential.

Matlab uses a scaling and squaring operation:

exp(A) ≈ exp(A/t)^t

First the matrix is divided by some number t so that exp(A/t) can be approximated with a Padé approximation. A Padé approximation provides a better fit than a Taylor series of the same order. Once the matrix exponential of A/t is approximated, the exponential of A is recovered via the above relation; in practice t is a power of 2, so this amounts to repeated squaring.

The method used by MATLAB is superior for most matrices. The diagonalization method is only reliable when the eigenvector matrix is well conditioned (e.g. the eigenvectors are nearly orthogonal). There are many cases where numerical problems may arise, for instance when the eigenvalues are too close together or too far apart. In such cases the numeric methods may not yield accurate computations of the matrix exponential.
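The scaling-and-squaring idea can be sketched in a few lines. Note this is only a toy illustration: MATLAB and SciPy use a Padé approximant for the small-norm core, whereas here a short Taylor sum stands in to keep the sketch self-contained, and the choice of scaling exponent `s` is a simple heuristic, not the production algorithm:

```python
import numpy as np

def expm_scaling_squaring(M, terms=12):
    """Scaling and squaring: exp(M) = (exp(M / 2**s))**(2**s).

    Toy version: a truncated Taylor sum approximates the core instead of
    the Pade approximant used by MATLAB/SciPy.
    """
    # Heuristic: pick s so that the scaled matrix M / 2**s has small norm
    s = max(0, int(np.ceil(np.log2(max(np.linalg.norm(M, 1), 1.0)))))
    X = M / 2**s

    # Short Taylor series for exp(X), accurate because ||X|| < 1
    E = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for n in range(1, terms):
        term = term @ X / n
        E = E + term

    # Undo the scaling by squaring s times
    for _ in range(s):
        E = E @ E
    return E

# Hypothetical example: a rotation generator with a largish norm,
# where a naive 12-term Taylor series on M itself would be poor
M = np.array([[0.0, 6.0], [-6.0, 0.0]])
E = expm_scaling_squaring(M)
```

For this generator the exact exponential is the rotation matrix with angle 6, so the accuracy of the sketch is easy to check.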