Matrix Differentiation Problem

leonmate
Simple question, really.

I'm not sure why the constant pulled out of the derivative becomes negative (##-\omega^2##). I've tried looking for answers by Googling but can't come up with anything.
I feel like it's because the first entry, (1,1), is negative, but I want to be sure.

Thanks
 

Attachments

  • Screen Shot 2015-04-16 at 13.39.10.png
My guess is that the minus sign is pulled out so that there are fewer minus signs inside the matrix.
 
You have
$$R(\omega t)= \begin{pmatrix}\cos(\omega t) & -\sin(\omega t) & 0 \\ \sin(\omega t) & \cos(\omega t) & 0 \\ 0 & 0 & 1 \end{pmatrix}$$
(rotation about the z-axis with constant angular velocity ##\omega##).

I presume you know that the derivative of ##\sin(\omega t)## is ##\omega \cos(\omega t)##, that the derivative of ##\cos(\omega t)## is ##-\omega \sin(\omega t)##, and, further, that you differentiate a matrix by differentiating its components separately.

So you should see, easily, that
$$\frac{dR}{dt}= \begin{pmatrix} -\omega \sin(\omega t) & -\omega \cos(\omega t) & 0 \\ \omega \cos(\omega t) & -\omega \sin(\omega t) & 0 \\ 0 & 0 & 0 \end{pmatrix}$$
and, of course, you can factor ##\omega## out of the matrix.
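Written out, factoring ##\omega## from every entry gives
$$\frac{dR}{dt}= \omega \begin{pmatrix} -\sin(\omega t) & -\cos(\omega t) & 0 \\ \cos(\omega t) & -\sin(\omega t) & 0 \\ 0 & 0 & 0 \end{pmatrix}$$
so at the first derivative the factored constant is ##+\omega##; the minus signs stay attached to the individual entries.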

To find the second derivative, differentiate again:
$$\frac{d^2R}{dt^2}= \begin{pmatrix} -\omega^2 \cos(\omega t) & \omega^2 \sin(\omega t) & 0 \\ -\omega^2 \sin(\omega t) & -\omega^2 \cos(\omega t) & 0 \\ 0 & 0 & 0 \end{pmatrix}$$
and factoring ##-\omega^2## out of that gives
$$-\omega^2 \begin{pmatrix} \cos(\omega t) & -\sin(\omega t) & 0 \\ \sin(\omega t) & \cos(\omega t) & 0 \\ 0 & 0 & 0 \end{pmatrix}$$
which agrees with ##-\omega^2 R## in every entry except the (3,3) one, where ##R## has a 1 and the second derivative has a 0. The minus sign on ##\omega^2## appears because differentiating each sine and cosine twice multiplies it by ##-\omega^2##.
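If you want to double-check the componentwise differentiation, here is a minimal SymPy sketch (not part of the original thread); it confirms that ##\frac{d^2R}{dt^2} + \omega^2 R## is zero everywhere except the (3,3) entry.

[CODE=python]
import sympy as sp

t, w = sp.symbols('t omega', real=True)

# Rotation about the z-axis, exactly as written in the thread
R = sp.Matrix([
    [sp.cos(w*t), -sp.sin(w*t), 0],
    [sp.sin(w*t),  sp.cos(w*t), 0],
    [0,            0,           1],
])

# Differentiate every component twice with respect to t
d2R = R.diff(t, 2)

# Prints a matrix that is zero except for omega**2 in the (3,3) slot
print(sp.simplify(d2R + w**2 * R))
[/CODE]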
 
If ##f(t)=\cos(\omega t)## and ##g(t)=\sin(\omega t)##, what are ##f''(t)## and ##g''(t)##?
 