# Homework Help: Matrix Exponentials

1. Apr 14, 2012

### R2Zero

1. The problem statement, all variables and given/known data

Let B =

|1 -2|
|1 3 |

Let C =

|1 4 0|
|0 1 0|
|0 0 2|

Find $e^{B}$ and $e^{C}$

2. Relevant equations

$e^{A} = \sum_{n=0}^{\infty} \frac{A^{n}}{n!}$

$e^{A} = P^{-1}e^{D}P$, where $A = P^{-1}DP$ and $D$ is diagonal

3. The attempt at a solution
My professor told me the first step to approaching these types of problems is to find the eigenvalues for both B and C. For B, I get eigenvalues 2+i and 2-i, and I get 1 (double root) and 2 as the eigenvalues for C.

I'm having trouble finding some eigenvectors for B (my teacher did not provide any sufficient or clear examples of eigenstuff involving complex numbers), and I get [1, 0, 0] and [0, 0, 1] as my eigenvectors for C (for e-values 1 and 2, respectively).

I'm sure at this point I'm supposed to construct P, P$^{-1}$ and D for each matrix and somehow use those to find e$^{B}$ and e$^{C}$, but I'm not sure how to go about doing that since my professor did not clearly explain the entire process...
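(For anyone who wants to sanity-check the hand computation for B afterwards, here is a minimal numerical sketch, assuming numpy and scipy are available. One caveat on conventions: numpy's `eig` returns eigenvectors as columns, so the factorization comes out as $B = PDP^{-1}$ and hence $e^B = Pe^DP^{-1}$.)

```python
import numpy as np
from scipy.linalg import expm

# Matrix B from the problem statement.
B = np.array([[1.0, -2.0],
              [1.0,  3.0]])

# numpy returns eigenvalues and eigenvectors-as-columns, so B = P D P^{-1}.
evals_B, P = np.linalg.eig(B)
print(evals_B)  # the complex pair 2+1j and 2-1j

# e^D is diagonal with e^{d_i} down the diagonal, so e^B = P e^D P^{-1}.
eB = P @ np.diag(np.exp(evals_B)) @ np.linalg.inv(P)

# The imaginary parts cancel; compare against scipy's matrix exponential.
print(np.allclose(eB, expm(B)))  # True
```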

2. Apr 14, 2012

### morphism

You should look in a linear algebra book (or online) to see how one typically finds eigenvectors. And while you're at it, look up "diagonalization".

The point of diagonalization here is this: if D is diagonal, say with entries $d_1, \ldots, d_n$ down the diagonal, then $e^D$ will be diagonal with entries $e^{d_1}, \ldots, e^{d_n}$ down the diagonal. This makes the formula you quoted ($A = P^{-1}DP \implies e^A = P^{-1}e^{D}P$) handy.

Just a tip: the matrix C will turn out not to be diagonalizable. But if you look at it, it's clearly "block" diagonal. So by the same token, $e^C$ will turn out to be block diagonal, with $e^{\text{2x2 block of } C}$ as the first block and $e^2$ as the second.

So now you have to compute the exp of $\left(\begin{smallmatrix} 1&4\\0&1 \end{smallmatrix}\right)$. For this, I recommend directly using the Taylor series of exp and trying to spot the pattern.
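A quick numerical sketch of that suggestion (numpy/scipy assumed, purely to confirm the pattern you should find by hand: powers of this block stay upper triangular, with the off-diagonal entry growing linearly in $n$):

```python
import numpy as np
from scipy.linalg import expm

# Upper-left 2x2 block of C.
J = np.array([[1.0, 4.0],
              [0.0, 1.0]])

# Partial sums of the Taylor series e^J = sum_{n>=0} J^n / n!.
S = np.zeros((2, 2))
term = np.eye(2)
for n in range(1, 30):
    S = S + term          # add J^{n-1}/(n-1)!
    term = term @ J / n   # next term, J^n/n!

# The pattern: J^n = [[1, 4n], [0, 1]], so the series sums to
# e^J = e * [[1, 4], [0, 1]].
print(np.allclose(S, np.e * np.array([[1.0, 4.0], [0.0, 1.0]])))  # True
print(np.allclose(S, expm(J)))  # True
```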

3. Apr 14, 2012

### Ray Vickson

You don't need the eigenvectors. It is the case that for any analytic function
$$f(x) = c_0 + c_1 x + c_2 x^2 + \cdots,$$ if we define f(A) for an $n \times n$ matrix A as $$f(A) = c_0 I + c_1 A + c_2 A^2 + \cdots,$$ then: if the eigenvalues of A are $r_1$ with multiplicity $m_1$, $r_2$ with multiplicity $m_2$, ..., there are matrices $E_{11}, \ldots, E_{1m_1}, E_{21}, \ldots, E_{2m_2}, \ldots$ such that
$$f(A) = \sum_{j=1}^{m_1} E_{1j} f^{(j-1)}(r_1) + \sum_{j=1}^{m_2} E_{2j} f^{(j-1)}(r_2) + \cdots,$$
where $f^{(0)}(r) = f(r), \; f^{(1)}(r) = f^{\prime}(r), \; f^{(2)}(r) = f^{\prime \prime}(r),$ etc. Here, the matrices $E_{ij}$ are independent of the choice of the function f.

For the case of C (eigenvalue 1 with multiplicity 2, eigenvalue 2 with multiplicity 1), this says
$f(C) = E_{11}f(1) + E_{12}f'(1) + E_{21} f(2)$ for any f. In particular, if you apply this to f(x) = 1 (f(C) = I), f(x) = x (f(C) = C) and f(x) = x^2 (f(C) = C^2), you can solve for the matrices $E_{11}, E_{12}, E_{21}.$ Then $e^C = E_{11}e + E_{12} e + E_{21} e^2 = (E_{11} + E_{12}) e + E_{21} e^2.$
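For anyone who wants to see the mechanics of this recipe, a minimal numerical sketch (numpy/scipy assumed): plugging in f = 1, x, x² gives three linear equations in the three unknown matrices, which can be solved entrywise with an ordinary 3x3 coefficient matrix.

```python
import numpy as np
from scipy.linalg import expm

C = np.array([[1.0, 4.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 2.0]])
I = np.eye(3)

# f(C) = E11*f(1) + E12*f'(1) + E21*f(2).  Plugging in f = 1, x, x^2:
#   I   = 1*E11 + 0*E12 + 1*E21
#   C   = 1*E11 + 1*E12 + 2*E21
#   C^2 = 1*E11 + 2*E12 + 4*E21
M = np.array([[1.0, 0.0, 1.0],
              [1.0, 1.0, 2.0],
              [1.0, 2.0, 4.0]])
rhs = np.stack([I, C, C @ C])  # shape (3, 3, 3): one RHS matrix per equation
E11, E12, E21 = np.linalg.solve(M, rhs.reshape(3, -1)).reshape(3, 3, 3)

# Now apply the formula with f(x) = e^x: f(1) = f'(1) = e and f(2) = e^2.
eC = (E11 + E12) * np.e + E21 * np.e**2
print(np.allclose(eC, expm(C)))  # True
```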

RGV