How Does Matrix Exponentiation Retrieve Rotation Matrices?

  • Thread starter: gentsagree
  • Tags: Matrix
gentsagree
I need to recover a finite rotation matrix (with cos and sin entries) by exponentiating its infinitesimal version.

Suppose my infinitesimal matrix is ω. I then compute exp(ω).

My guess would be

$$\exp(\omega)=\sum_{k=0}^{\infty}\frac{\omega^{2k}}{(2k)!}+\sum_{k=0}^{\infty}\frac{\omega^{2k+1}}{(2k+1)!}$$

i.e. the even and odd contributions.

The notes I'm reading suggest instead:

$$\exp(\omega)=I+\sum_{k=1}^{\infty}\frac{\omega^{2k}}{(2k)!}+\sum_{k=1}^{\infty}\frac{\omega^{2k+1}}{(2k+1)!}$$

which looks weird to me; if I take the identity matrix I to be the k=0 contribution of the even part (ω^0 = I), then I don't see where the term linear in ω is in the series any more. I think it's not there at all.

Even more: I do need the k=0 contributions later on to retrieve the series expansion expressions for cos and sin.

What do you think? Any comments?
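For concreteness (assuming the 2D case here; higher-dimensional generators reduce to this blockwise), write ##\omega = \theta J## with ##J = \begin{pmatrix}0&-1\\1&0\end{pmatrix}##, so that ##J^2 = -I##. The even and odd sums, both starting at ##k = 0##, then give exactly the cos and sin series:

$$\exp(\theta J)=\sum_{k=0}^{\infty}\frac{(-1)^k\theta^{2k}}{(2k)!}\,I+\sum_{k=0}^{\infty}\frac{(-1)^k\theta^{2k+1}}{(2k+1)!}\,J=\cos\theta\,I+\sin\theta\,J,$$

which is the finite rotation matrix ##\begin{pmatrix}\cos\theta&-\sin\theta\\\sin\theta&\cos\theta\end{pmatrix}##.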
 
I agree. Replacing the ##\omega^0/0!## term by ##I## is fair enough, but the second sum should be from ##k = 0##.

It's probably just a typo.
 
The "linear term in ω" is the term with ##\omega^1##, i.e. the ##k = 0## term of the second sum: ##\frac{\omega^{2(0)+1}}{(2(0)+1)!} = \omega##.

The linear approximation to ##e^\omega## is ##I + \omega##.
 
I agree. It's a typo. If you are doing this, don't forget that you can use the characteristic equation of the matrix ω to eliminate all the high powers.
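As a quick numerical check of the point above (a sketch, assuming the standard 2D generator; variable names are mine), the truncated exponential series reproduces the finite rotation matrix, and ##\omega^2 = -\theta^2 I## is the characteristic-equation relation that collapses all the high powers:

```python
import math
import numpy as np

theta = 0.7
J = np.array([[0.0, -1.0],
              [1.0,  0.0]])   # infinitesimal generator of 2D rotations
omega = theta * J

# Characteristic equation: omega^2 = -theta^2 * I,
# so every power of omega reduces to a multiple of I or omega.
assert np.allclose(omega @ omega, -theta**2 * np.eye(2))

# Truncated exponential series: sum_k omega^k / k!
exp_omega = sum(np.linalg.matrix_power(omega, k) / math.factorial(k)
                for k in range(20))

# Finite rotation matrix with cos and sin
R = np.array([[math.cos(theta), -math.sin(theta)],
              [math.sin(theta),  math.cos(theta)]])
assert np.allclose(exp_omega, R)
```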
 