Is My Approach to Matrix Exponentiation Valid?

DuckAmuck
If we have two square matrices of the same size, ##P## and ##Q##, we can put one in the exponent of the other by:
$$M = P^Q = e^{\ln(P)\,Q}$$
##\ln(P)## may give multiple results ##R##, which are square matrices the same size as ##P##.
So then we have:
$$M = e^{RQ}$$
which can be Taylor expanded to arrive at a final square matrix (or matrices) ##M##.
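To make this concrete, here's a minimal numerical sketch of the recipe (the matrices are just made-up examples, and ##\ln(P)## is computed by diagonalizing ##P##):

```python
import numpy as np

def taylor_exp(A, terms=30):
    """Truncated Taylor series for e^A: sum of A^k / k! for k < terms."""
    total, term = np.zeros_like(A), np.eye(A.shape[0])
    for k in range(1, terms + 1):
        total = total + term
        term = term @ A / k
    return total

# Illustrative matrices (made up for this sketch); P is invertible and diagonalizable.
P = np.array([[2.0, 1.0],
              [0.0, 3.0]])
Q = np.array([[1.0, 0.0],
              [0.0, 2.0]])

# ln(P) via diagonalization: P = V diag(w) V^{-1}  =>  ln(P) = V diag(ln w) V^{-1}.
w, V = np.linalg.eig(P)
R = V @ np.diag(np.log(w)) @ np.linalg.inv(V)

M = taylor_exp(R @ Q)                  # M = P^Q = e^{ln(P) Q}
print(M)
print(np.allclose(taylor_exp(R), P))   # sanity check: Q = I gives back P
```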

I've been wondering about this, and want to know if my approach is valid. Thank you.

Also, does anyone know any tricks for computing the log of a matrix?
If ##P## is not diagonalizable, it seems you'd have to use the Taylor series expansion, so you'd have an expansion within an expansion for ##M##.
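For concreteness, the Taylor series I have in mind is the usual one about the identity,
$$\ln(P) = \sum_{k=1}^{\infty} \frac{(-1)^{k+1}}{k}\,(P - I)^k,$$
which converges when ##\|P - I\| < 1## and terminates exactly when ##P - I## is nilpotent.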
 
DuckAmuck said:
If we have two square matrices of the same size, ##P## and ##Q##, we can put one in the exponent of the other by:
$$M = P^Q = e^{\ln(P)\,Q}$$
##\ln(P)## exists only when ##P## is invertible; otherwise ##P## does not have a logarithm (and even when it exists, it is generally not unique).

The wikipedia article on matrix logarithms has some discussion on computing the logarithm in the non-diagonalizable case.
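For instance, a non-diagonalizable (Jordan block) matrix still has a logarithm as long as it is invertible; here is a quick check with scipy's logm (the matrix is just an example):

```python
import numpy as np
from scipy.linalg import logm, expm

# A non-diagonalizable (Jordan block) matrix with eigenvalue 1.
J = np.array([[1.0, 1.0],
              [0.0, 1.0]])

L = logm(J)
print(L)                          # approximately [[0, 1], [0, 0]]
print(np.allclose(expm(L), J))    # True: exponentiating the log recovers J
```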
 
MATLAB has a logm function to take matrix logs for numerical calculations.

If you're doing symbolic work, there's one trick I remember from doing exponentials by hand. I don't know the fancy/correct term for it, but oftentimes when evaluating a matrix power series the powers of the matrix will repeat after a certain number of powers. For example, the generator of the 2D special orthogonal group is [0, -1; 1, 0]. Square that and you get [-1, 0; 0, -1]. Cube it and you get [0, 1; -1, 0]. The fourth power gives you [1, 0; 0, 1]. Since that's just the identity, the fifth power is [0, -1; 1, 0] and the cycle repeats: the fifth power equals the first, the sixth equals the second, and so on.

I suppose you could say that in such cases the powers of [0, -1; 1, 0] form a cyclic group isomorphic to ##\mathbb{Z}_4## under matrix multiplication, or I could be just making a fool outta myself. Either way, the power series reduces to a sum over just four distinct matrices, each multiplied by a scalar series. The same trick applies to any convergent Taylor series whose generating matrix has repeating powers.
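Concretely, writing ##J## for [0, -1; 1, 0] and grouping the four repeating classes of powers turns the exponential series into two familiar scalar series:
$$e^{\theta J} = \left(1 - \frac{\theta^2}{2!} + \frac{\theta^4}{4!} - \cdots\right) I + \left(\theta - \frac{\theta^3}{3!} + \frac{\theta^5}{5!} - \cdots\right) J = \cos\theta\, I + \sin\theta\, J.$$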

Can you give us a little more info on what logs you want to take?
 
Twigg said:
MATLAB has a logm function to take matrix logs for numerical calculations. [...] Can you give us a little more info on what logs you want to take?

I just recently discovered logm and expm, which are quite handy for this sort of thing.
I'm not trying to take any particular logs; I'm more curious about what the limitations of this operation are.
Obviously matrices whose powers repeat, or idempotent/nilpotent ones, would be nice here. :)
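For example, here's a quick check (with scipy, using an arbitrary 2×2 nilpotent matrix) that the exponential series terminates exactly in the nilpotent case:

```python
import numpy as np
from scipy.linalg import expm

# N is nilpotent (N @ N = 0), so the Taylor series for e^N
# terminates after the linear term: e^N = I + N exactly.
N = np.array([[0.0, 5.0],
              [0.0, 0.0]])

print(np.allclose(N @ N, np.zeros((2, 2))))   # True
print(np.allclose(expm(N), np.eye(2) + N))    # True
```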
 