Proving a matrix exponential determinant is an exponential trace

1. Dec 10, 2011

Demon117

1. The problem statement, all variables and given/known data
Prove that for any matrix $A$, the following relation is true:

$det(e^{A})=e^{tr(A)}$
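Before attempting a proof, the identity can be sanity-checked numerically. The sketch below (a check, not a proof) uses a hypothetical 2x2 matrix and a truncated Taylor series for $e^{A}$; everything is plain standard-library Python with matrices as lists of rows.

```python
import math

# Sanity check (not a proof) of det(e^A) = e^{tr(A)} on a hypothetical
# 2x2 example, using a truncated Taylor series for the matrix exponential.

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def expm(A, terms=40):
    """Truncated series sum_{k=0}^{terms-1} A^k / k! (fine for small A)."""
    n = len(A)
    result = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    power = [row[:] for row in result]                              # A^0
    for k in range(1, terms):
        power = mat_mul(power, A)
        result = [[result[i][j] + power[i][j] / math.factorial(k)
                   for j in range(n)] for i in range(n)]
    return result

A = [[1.0, 2.0], [0.5, -1.0]]
E = expm(A)
det_E = E[0][0] * E[1][1] - E[0][1] * E[1][0]   # 2x2 determinant
tr_A = A[0][0] + A[1][1]
print(det_E, math.exp(tr_A))   # the two values agree to rounding error
```

Here $tr(A)=0$, so both sides should be very close to 1.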

3. The attempt at a solution
PROOF: Let $A$ be in Jordan Canonical form, then

$A=PDP^{-1}$

where $D$ is the diagonal matrix whose entries are the eigenvalues of $A$. Then,

$e^{A}=Pe^{D}P^{-1}$

By application of the determinant operator we have

$det(e^{A})=det(Pe^{D}P^{-1})=det(P)det(e^{D})det(P^{-1})=det(e^{D})$

Since the diagonal matrix $D$ has the eigenvalues of $A$ along its main diagonal, the determinant of its matrix exponential is given by

$det(e^{D})=e^{\lambda_{1}} \cdot e^{\lambda_{2}} \cdots e^{\lambda_{n}}$

Since a product of exponentials is the exponential of the sum,

$det(e^{D})=e^{\lambda_{1}+\lambda_{2}+\cdots+\lambda_{n}}$

This of course is simply the exponential of the trace of $D$. Therefore,

$det(e^{A})=e^{tr(D)}$

Now, this is where I get messed up. My question is simply this: Because $A$ is similar to $D$, does it follow that the trace of $D$ is the same as the trace of $A$? I heard this somewhere but I cannot verify this statement anywhere in my notes or textbooks. If this is a true statement, then the proof is complete.
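The claim in question can at least be checked numerically. The sketch below builds $A=PDP^{-1}$ from a hypothetical diagonal $D$ and invertible $P$ (2x2, so the inverse is the explicit adjugate formula) and compares the two traces.

```python
# Numerical check of the claim: similar matrices have equal trace.
# Hypothetical 2x2 example, A = P D P^{-1}, pure Python.

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def inv2(M):
    """Inverse of a 2x2 matrix via the adjugate formula."""
    a, b = M[0]
    c, d = M[1]
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

D = [[3.0, 0.0], [0.0, -1.0]]   # diagonal matrix of eigenvalues
P = [[1.0, 2.0], [1.0, 3.0]]    # an invertible change-of-basis matrix
A = mat_mul(mat_mul(P, D), inv2(P))

print(A[0][0] + A[1][1], D[0][0] + D[1][1])   # both traces equal 2.0
```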

2. Dec 10, 2011

Demon117

Well, that question was unnecessary. I found that similar matrices do indeed have the same trace. But does that also hold when one of them is diagonal?

3. Dec 10, 2011

Dick

If two matrices are similar, their traces are equal whether one is diagonal or not. But that's not your biggest worry. Your proof looks fine for diagonalizable matrices, but not every matrix has a Jordan form that is diagonal. What about those?
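The non-diagonalizable case can be probed numerically with the simplest example, a 2x2 Jordan block $A=\begin{pmatrix}\lambda & 1\\0 & \lambda\end{pmatrix}$, for which $e^{A}=e^{\lambda}\begin{pmatrix}1 & 1\\0 & 1\end{pmatrix}$. The sketch below (hypothetical value of $\lambda$, truncated Taylor series) checks that $det(e^{A})=e^{2\lambda}=e^{tr(A)}$ still holds.

```python
import math

# The identity det(e^A) = e^{tr(A)} also holds for a matrix whose Jordan
# form is NOT diagonal: a single 2x2 Jordan block. Sanity check only.

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def expm(A, terms=40):
    """Truncated Taylor series for e^A (adequate for small matrices)."""
    n = len(A)
    result = [[float(i == j) for j in range(n)] for i in range(n)]
    power = [row[:] for row in result]
    for k in range(1, terms):
        power = mat_mul(power, A)
        result = [[result[i][j] + power[i][j] / math.factorial(k)
                   for j in range(n)] for i in range(n)]
    return result

lam = 0.7
A = [[lam, 1.0], [0.0, lam]]            # a single Jordan block
E = expm(A)
det_E = E[0][0] * E[1][1] - E[0][1] * E[1][0]
print(det_E, math.exp(2 * lam))         # agree: both equal e^{2*lam}
```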

4. Dec 10, 2011

Demon117

But every matrix can be put into Jordan normal form, correct? If so (and the problem statement should really say "every $n \times n$ matrix"), then the proof would not be altogether different; I would just write it in terms of the Jordan normal form?

5. Dec 10, 2011

Dick

No, it's not altogether different. Any matrix is similar to $D+N$, where $D$ is your diagonal matrix of eigenvalues and $N$ is strictly upper triangular. You want to show that even with $N$ nonzero, the determinant of $\exp(D+N)$ equals the determinant of $\exp(D)$.

Last edited: Dec 10, 2011
6. Dec 10, 2011

I like Serena

Hi matumich26!

Sorry for being a bit of a nitpicker... but...

Huh? I thought $A$ was just any matrix?

How can you make a Jordan canonical form similar to a diagonal matrix? Not every matrix is diagonalizable.

And how did you get $e^{A}=Pe^{D}P^{-1}$ here?
As stated, it's not true for a general $A$.

7. Dec 10, 2011

Demon117

I'll just ignore this because I realize the error in my assumptions, but thank you. I have a different proof that I'd like to submit for your critique.

PROOF: Let $A$ be any matrix. By the Schur decomposition, $A$ is similar to an upper triangular matrix; since both the determinant and the trace are invariant under similarity, we may assume

$A=D+N$

where $D$ is the diagonal matrix of the eigenvalues of $A$ and $N$ is a strictly upper triangular matrix. Then, by the Taylor series expansion,

$e^{A}=\sum_{k=0}^{\infty} \frac{A^{k}}{k!}$

By properties of diagonal matrices and strictly upper triangular matrices we know that both $ND$ and $DN$ will also be strictly upper triangular matrices and so will their sum.
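This closure claim can be checked exactly on a hypothetical 3x3 integer example: multiplying a diagonal matrix by a strictly upper triangular one (in either order) only rescales the rows or columns of the strictly upper triangular factor, so all entries on and below the diagonal stay zero.

```python
# Check: for diagonal D and strictly upper triangular N, the products DN
# and ND (and their sum) are again strictly upper triangular.
# Integer entries, so the check is exact.

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def is_strictly_upper(M):
    n = len(M)
    return all(M[i][j] == 0 for i in range(n) for j in range(n) if j <= i)

D = [[2, 0, 0], [0, 3, 0], [0, 0, 5]]
N = [[0, 1, 4], [0, 0, 2], [0, 0, 0]]
DN = mat_mul(D, N)
ND = mat_mul(N, D)
S = [[DN[i][j] + ND[i][j] for j in range(3)] for i in range(3)]
print(is_strictly_upper(DN), is_strictly_upper(ND), is_strictly_upper(S))
# prints: True True True
```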

Thus the powers of $A$ are:

$A=(D+N)=D+N_{1}$
$A^{2}=(D+N)^{2}=D^{2}+N_{2}$
$\vdots$
$A^{k}=D^{k}+N_{k}$
$\vdots$

The $N_{i}$ matrices are all strictly upper triangular. The recursion, obtained from $A^{n+1}=(D^{n}+N_{n})(D+N_{1})$, is $N_{n+1}=D^{n}N_{1}+N_{n}D+N_{n}N_{1}$. We can write the exponential of the matrix as follows:

$e^{A}=e^{D}+\widetilde{N}$

where the matrix $\widetilde{N}=\sum_{k=1}^{\infty} \frac{N_{k}}{k!}$ is strictly upper triangular. The matrix exponential $e^{D}=diag(e^{\lambda_{1}},e^{\lambda_{2}},...,e^{\lambda_{n}})$ where $D=diag(\lambda_{1},\lambda_{2},...,\lambda_{n})$.

By all of these arguments it follows that $e^{A}$ is upper triangular, with the entries $e^{\lambda_{i}}$ on its diagonal. Since the determinant of an upper triangular matrix is the product of its diagonal elements, it follows that

$det(e^{A})=\prod_{i=1}^{n} e^{\lambda_{i}}=e^{\sum_{i=1}^{n} \lambda_{i}}=e^{tr(A)}$

Thus, the relation holds.
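The whole argument can be traced numerically on a hypothetical upper triangular $A=D+N$: the series for $e^{A}$ stays upper triangular, its diagonal is $(e^{\lambda_{i}})$, and the product of those diagonal entries matches $e^{tr(A)}$.

```python
import math

# End-to-end check on an upper triangular A = D + N (hypothetical values):
# e^A stays upper triangular and det(e^A) = prod e^{lam_i} = e^{tr(A)}.

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def expm(A, terms=60):
    n = len(A)
    result = [[float(i == j) for j in range(n)] for i in range(n)]
    power = [row[:] for row in result]
    for k in range(1, terms):
        power = mat_mul(power, A)
        result = [[result[i][j] + power[i][j] / math.factorial(k)
                   for j in range(n)] for i in range(n)]
    return result

A = [[1.0, 2.0, 0.5],
     [0.0, -0.5, 3.0],
     [0.0, 0.0, 0.25]]
E = expm(A)
below = max(abs(E[i][j]) for i in range(3) for j in range(3) if j < i)
diag_product = E[0][0] * E[1][1] * E[2][2]
print(below)                                      # 0.0: still upper triangular
print(diag_product, math.exp(1.0 - 0.5 + 0.25))   # both equal e^{0.75}
```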

8. Dec 10, 2011

Dick

Of course it's right. It's a pretty literal rewriting of the proof on the PlanetMath website. There's nothing wrong with looking something up, but you might at least want to digest it and then put it in your own words and notation. That's really too close to the exact form of what you looked up. It might make somebody think you don't really understand it and are just parroting.

Last edited: Dec 10, 2011
9. Dec 10, 2011

Demon117

In the words of a wise friend: even if a solution looks good before you make use of it (i.e., quote it), have it checked by at least two good sources so that you don't look like a fool. Passing it off as my own was never my intention. Now that I have seen the general idea, I can at least use some of the ideas I hadn't understood before. Thank you for helping me keep a level head about this, sir.

10. Dec 10, 2011

Dick

No problem. Everybody looks ideas up when they can't solve something after thinking about it for a while; that's just research. Just use them to make your own version. For instance, if you know the Jordan form, use that; you don't have to use the Schur decomposition just because the model proof did. Use the ideas, not the literal text of the other proof.

11. Dec 11, 2011

I like Serena

Hmm, it was not my intention to shoot you down or anything.
Sorry for that.

As for the trace being the same for similar matrices.
Perhaps you'd like to know that, just as the determinant of any matrix is the product of its eigenvalues, the trace is the sum of its eigenvalues.
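For a 2x2 matrix this is easy to verify by hand: the eigenvalues are the roots of $t^{2}-tr(A)\,t+det(A)=0$, so their sum is the trace and their product is the determinant. A small check on a hypothetical example:

```python
import math

# Eigenvalues of a 2x2 matrix via the characteristic polynomial
# t^2 - tr(A) t + det(A) = 0: their sum is the trace, their product
# the determinant. Hypothetical entries with real eigenvalues.

a, b = 4.0, 1.0
c, d = 2.0, 3.0
tr = a + d
det = a * d - b * c
disc = math.sqrt(tr * tr - 4.0 * det)   # real for this example
lam1 = (tr + disc) / 2.0
lam2 = (tr - disc) / 2.0
print(lam1 + lam2, tr)    # prints: 7.0 7.0
print(lam1 * lam2, det)   # prints: 10.0 10.0
```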

12. Jan 6, 2013

Haikku

I was just wondering why (according to your last claim), if $$A = PDP^{-1}$$ where $D$ is a diagonal matrix, the statement $$e^A = Pe^DP^{-1}$$ is not true? To me it seems to hold, since clearly $$A^k = PD^kP^{-1}$$ holds for any natural number $k$ (by cancellation of all the in-between $P$ and $P^{-1}$ factors). Hence

$$e^A = \sum_{k=0}^{\infty} \frac{A^k}{k!} = \lim_{N \rightarrow \infty} \sum_{k=0}^{N} \frac{A^k}{k!} = P \left(\lim_{N \rightarrow \infty} \sum_{k=0}^{N} \frac{D^k}{k!}\right)P^{-1} = Pe^DP^{-1}.$$

Is there anything wrong with this?
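The step being asked about, $e^{PDP^{-1}}=Pe^{D}P^{-1}$, checks out numerically on a hypothetical 2x2 example (pure Python, truncated Taylor series, explicit 2x2 inverse):

```python
import math

# Check that exp(P D P^{-1}) = P exp(D) P^{-1} for a hypothetical 2x2 case.

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def inv2(M):
    a, b = M[0]
    c, d = M[1]
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def expm(A, terms=40):
    n = len(A)
    result = [[float(i == j) for j in range(n)] for i in range(n)]
    power = [row[:] for row in result]
    for k in range(1, terms):
        power = mat_mul(power, A)
        result = [[result[i][j] + power[i][j] / math.factorial(k)
                   for j in range(n)] for i in range(n)]
    return result

D = [[1.0, 0.0], [0.0, 2.0]]
P = [[2.0, 1.0], [1.0, 1.0]]
A = mat_mul(mat_mul(P, D), inv2(P))

left = expm(A)
right = mat_mul(mat_mul(P, expm(D)), inv2(P))
max_diff = max(abs(left[i][j] - right[i][j]) for i in range(2) for j in range(2))
print(max_diff < 1e-9)   # prints: True
```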

13. Jan 6, 2013

Haikku

Just a remark: there is no reason why $D$ should be a diagonal matrix in my post above (though of course it matters very much for the main discussion in this thread!), which is why I chose to keep the notation. As some of you might well point out, the above calculation goes through for any pair of similar matrices.

14. Jan 6, 2013

Ray Vickson

This is way too complicated. If $J$ is the Jordan form of $A$ and $f$ is any analytic function, the matrix $f(J)$ is upper triangular with elements $f(\lambda_i)$ along the diagonal. The determinant of an upper triangular matrix is the product of its diagonal elements.