How to compute logarithm of orthogonal matrix?

jostpuur
Suppose X\in\mathbb{R}^{n\times n} is orthogonal. How do you compute the series

\log(X) = (X-1) - \frac{1}{2}(X-1)^2 + \frac{1}{3}(X-1)^3 - \cdots

The elements of the individual terms are (summing over the repeated indices l_1,\ldots,l_{k-1}, and writing m for the power, since n already denotes the dimension)

((X-1)^m)_{ij} = (-1)^m\delta_{ij} \;+\; m(-1)^{m-1}X_{ij} \;+\; \sum_{k=2}^{m} (-1)^{m-k} \frac{m!}{k!(m-k)!}\, X_{il_1} X_{l_1 l_2} \cdots X_{l_{k-1}j}

but these do not seem very helpful. I don't see how orthogonality could be used here. By orthogonality we have

X_{ik}X_{jk} = \delta_{ij},\qquad X_{ki}X_{kj} = \delta_{ij}

but

X_{ik}X_{kj}

is nothing special, right?

The special case n=2 would be a nice place to start. If

X = \left(\begin{array}{cc} \cos(\theta) & -\sin(\theta) \\ \sin(\theta) & \cos(\theta) \end{array}\right)

then the result should be

\log(X) = \left(\begin{array}{cc} 0 & -\theta \\ \theta & 0 \end{array}\right)

but how does one arrive at this honestly from the series?
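
For what it's worth, the series can at least be checked numerically for small angles (a minimal Python/NumPy sketch; the variable names are mine). The eigenvalues of X are e^{\pm i\theta}, and |e^{\pm i\theta} - 1| = 2|\sin(\theta/2)| < 1 requires |\theta| < \pi/3; inside that range the partial sums do converge to the expected antisymmetric matrix:

import numpy as np

theta = 0.5  # must satisfy |theta| < pi/3 so the eigenvalues stay in |z - 1| < 1
X = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Partial sums of log(X) = sum_{k>=1} (-1)^(k+1) (X - I)^k / k
A = X - np.eye(2)
term = np.eye(2)
logX = np.zeros((2, 2))
for k in range(1, 51):
    term = term @ A
    logX += (-1) ** (k + 1) * term / k

print(logX)  # approximately [[0, -0.5], [0.5, 0]]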
 
Wouldn't orthogonality simplify powers of the matrix?

X^{2n}=I
X^{2n+1}=X
 
John Creighto said:
Wouldn't orthogonality simplify powers of the matrix?

X^{2n}=I
X^{2n+1}=X

No. For example

\left(\begin{array}{cc} \cos(\theta) & -\sin(\theta) \\ \sin(\theta) & \cos(\theta) \end{array}\right)^2 = \left(\begin{array}{cc} \cos(2\theta) & -\sin(2\theta) \\ \sin(2\theta) & \cos(2\theta) \end{array}\right)

is usually not the identity. Elements of some Lie algebras often have properties that resemble your equations.
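
A quick numerical check of this (Python, with a small helper rot of my own):

import numpy as np

def rot(t):
    # 2x2 rotation by angle t
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

theta = 0.7
X = rot(theta)
print(np.allclose(X @ X, rot(2 * theta)))  # True: X^2 rotates by 2*theta
print(np.allclose(X @ X, np.eye(2)))       # False: X^2 is not the identity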
 
Could it simplify your product in the equation below where you write "Elements of individual terms"? BTW, where does that equation come from? It looks interesting.
 
Are you asking how to prove that the power series is correct, or something else?

If you are, would you be able to answer the question if X were a real number instead of a matrix?

Now, if I ask you "Is an orthogonal matrix diagonalisable?", does that give you a clue?
 
DrGreg said:
Are you asking how to prove that the power series is correct, or something else?

I am interested in a formula that would give a corresponding angle vector \theta\in\mathbb{R}^3 when a rotation matrix X\in SO(3) is given. In fact I have more than one reason for wanting such a formula.

I have written a (programming language) function which returns an angle vector when a rotation matrix is given, but it works by iteration. The function could be made more efficient if there were a direct formula for the angle vector. For comparison, there is a direct formula for the elements of a rotation matrix when the angle vector is given; I wrote it down here: Elements of SO(3)?

If you are, would you be able to answer the question if X were a real number instead of a matrix?

Now, if I ask you "Is an orthogonal matrix diagonalisable?", does that give you a clue?

I can see that if I substitute a diagonalizable matrix V\textrm{diag}(\lambda_1,\ldots,\lambda_n)V^{-1} into the series and use the Taylor series of the single-variable logarithm, assuming that |\lambda_i - 1| < 1 for all i, then I get

\log(V\textrm{diag}(\lambda_1,\ldots,\lambda_n)V^{-1}) = V\textrm{diag}(\log(\lambda_1),\ldots,\log(\lambda_n))V^{-1}.

It is a good thing that you reminded me of this, because it made me recall that the eigenvalues of orthogonal matrices (or unitary matrices) are often not in the ball |z-1| < 1, so the series I wrote down originally is probably not going to converge for all matrices of interest. For example, -1\in SO(2) doesn't have eigenvalues in this domain of convergence.
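
Numerically, though, the diagonalization route still works where the series fails, provided one takes the principal logarithm of the complex unit-modulus eigenvalues. A sketch under that assumption (the function name is mine):

import numpy as np

def logm_via_eig(X):
    # Assumes X is diagonalizable (true for orthogonal/unitary X);
    # the eigenvalues of an orthogonal X lie on the unit circle.
    w, V = np.linalg.eig(X)                        # complex eigendecomposition
    L = V @ np.diag(np.log(w)) @ np.linalg.inv(V)  # principal log of each eigenvalue
    return L.real                                  # imaginary part is roundoff for real X

theta = 3.0  # well outside the series' region of convergence
X = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(logm_via_eig(X))  # approximately [[0, -3], [3, 0]]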

Do you know how to prove that all unitary matrices are diagonalizable?

I don't know. I have read some material about random matrix theory which seems to take the diagonalizability of unitary matrices as obvious, so I believe it is true. I have only one idea for the proof. I know how to prove that Hermitian matrices are diagonalizable, so if I can show that for every unitary U\in\mathbb{C}^{n\times n} there exists a Hermitian H\in\mathbb{C}^{n\times n} such that

U = e^{iH}

then the diagonalizability of U would follow. So how do I show that this H exists? If I had some knowledge about the logarithm

H = -i\log(U)

then that would solve the problem. This is why I would like to know how to deal with logarithms in some way other than assuming from the start that the matrix in question is diagonalizable.
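
One can at least check this claim numerically (a sketch using SciPy's expm and logm; building a random unitary via QR is my own choice, and it assumes no eigenvalue lands exactly at -1, where the principal logarithm breaks down):

import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
Q, _ = np.linalg.qr(M)               # QR of a complex matrix gives a unitary Q

H = -1j * logm(Q)                    # candidate Hermitian generator
print(np.allclose(H, H.conj().T))    # True: H is Hermitian
print(np.allclose(expm(1j * H), Q))  # True: U = e^{iH} recovers Q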
 
For diagonalisation of an orthogonal matrix, have a look at the Wikipedia articles Normal_matrix and Spectral_theorem

I'm a bit rusty on this, but in the n=2 case the family

X(\theta) = \left(\begin{array}{cc} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{array}\right)

forms a one-parameter group, with infinitesimal generator

Z = \frac{dX}{d\theta}(0) = \left(\begin{array}{cc} 0 & -1 \\ 1 & 0 \end{array}\right)

and so

X(\theta) = e^{Z\theta}

which is exactly what you wanted.

I haven't thought this through, but maybe a similar (multiparameter) technique might work in larger dimensions?
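
For what it's worth, this is easy to confirm numerically (a minimal sketch assuming SciPy's expm):

import numpy as np
from scipy.linalg import expm

Z = np.array([[0.0, -1.0],
              [1.0,  0.0]])
theta = 1.2
X = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(np.allclose(expm(Z * theta), X))  # True: X(theta) = e^{Z theta}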
 
Have you considered a scaling and squaring algorithm?

http://eprints.ma.man.ac.uk/634/01/covered/MIMS_ep2006_394.pdf

This is how MATLAB computes the matrix logarithm, and it is usually more accurate than diagonalization, unless the matrix is symmetric. Also, I think there are probably some math packages that have these algorithms implemented; see LINPACK, for instance.
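
In Python, SciPy ships a ready-made matrix logarithm along these lines, so in practice one rarely sums a series by hand. A minimal usage sketch (my understanding is that logm uses an inverse scaling and squaring method, but treat that as an assumption):

import numpy as np
from scipy.linalg import logm

theta = 2.5
X = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(logm(X))  # approximately [[0, -2.5], [2.5, 0]]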
 
The spectral theorem seems to deal with existence questions, so it does not meet my need for a logarithm formula. It does suggest, though, that my idea about how to prove the diagonalizability of unitary matrices was not the best one. Anyway, I'm still interested in logarithm formulas.

It became clear that considering the series for the logarithm was not a good idea, so perhaps I should reformulate the original question. I have not yet thought this through to the end, but in the case n=2 it is easy to give an example of the kind of thing I have been after.

Using commutativity of

\left(\begin{array}{cc} x & 0 \\ 0 & x \end{array}\right)

and

\left(\begin{array}{cc} 0 & -y \\ y & 0 \end{array}\right)

one can calculate

\exp\left(\begin{array}{cc} x & -y \\ y & x \end{array}\right) = \left(\begin{array}{cc} e^x\cos(y) & -e^x\sin(y) \\ e^x\sin(y) & e^x\cos(y) \end{array}\right) \quad\quad\quad\quad (9.1)

Suppose we choose the following branch of the inverse cosine:

\cos^{-1}:[-1,1]\to[0,\pi]

We can then write one possible branch of the logarithm like this:

\log\left(\begin{array}{cc} x & -y \\ y & x \end{array}\right) = \left(\begin{array}{cc} \log(\sqrt{x^2+y^2}) & -\frac{y}{|y|}\cos^{-1}\big(\frac{x}{\sqrt{x^2+y^2}}\big) \\ \frac{y}{|y|}\cos^{-1}\big(\frac{x}{\sqrt{x^2+y^2}}\big) & \log(\sqrt{x^2+y^2}) \end{array}\right) \quad\quad\quad\quad (9.2)

This is an example of a kind of formula that I would consider useful.

If we set x=0 in equation (9.1), the equation shows the exponential of a member of \mathfrak{so}(2), and the result is a member of SO(2). If we set x^2+y^2=1 in equation (9.2), the equation shows the logarithm of a member of SO(2), and the result is a member of \mathfrak{so}(2). It is not difficult to perform these computations without these assumptions either, because these are really just the exponential and logarithm functions of the complex plane.
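
Here is (9.2) spelled out as code, as a sketch with a function name of my own; like the y/|y| factor in the formula, it assumes y \neq 0:

import numpy as np

def log_2x2(x, y):
    # Branch (9.2): logarithm of [[x, -y], [y, x]], assuming y != 0
    r = np.hypot(x, y)                   # sqrt(x^2 + y^2)
    ang = np.sign(y) * np.arccos(x / r)  # (y/|y|) * arccos(x / r)
    return np.array([[np.log(r), -ang],
                     [ang,        np.log(r)]])

# With x^2 + y^2 = 1 this is the log of a member of SO(2):
print(log_2x2(np.cos(2.0), np.sin(2.0)))  # approximately [[0, -2], [2, 0]]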

But then... same stuff with SO(3) next? :cool:
 