Solving a separable matrix ODE.

Solving a matrix ODE like ##P' = QP## is not as straightforward as solving a scalar ODE, because matrix multiplication is non-commutative. The proposed method of separating variables is invalid because it assumes ##P## and ##Q## commute, which is not guaranteed. Instead, the correct approach uses an integrating factor, ##e^{-Qt}##, to transform the equation into a form that can be integrated. This leads to the solution ##P(t) = e^{Qt}C##, where ##C## is determined by the initial condition ##P(0)##. Care must be taken with matrix calculus, as many rules from scalar calculus do not carry over directly.
docnet
Homework Statement
Solve ##P'=QP## where ##Q## and ##P## are ##n \times n## matrices over the reals.
Relevant Equations
##P'=QP##.
I have never solved a matrix ODE before, and am wondering if solving it is similar to solving ##y'=ay##, where ##a## is a constant and ##y:\mathbb{R} \longrightarrow \mathbb{R}## is a function. The solution agrees with Wikipedia, and I am just looking for your input. Thanks.

$$\begin{align*}
\frac{d}{dt}P(t)&=QP(t)\\
\frac{1}{P(t)}dP(t)&=Qdt\\
\int\frac{1}{P(t)}dP(t)&=\int Qdt\\
\ln{P(t)}&=Qt+C\\
P(t)&=Ce^{Qt}\\
P(0)&=I\\
\Longrightarrow &Ce^0=I\\
\Longrightarrow &C=I\\
P(t)&=e^{Qt}.
\end{align*}$$

Line 2: Separation of variables.
Lines 3 and 4: Integration with respect to ##t##.
Line 5: Rule of exponents.
Lines 6-8: Determination of the matrix ##C##.
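Whatever one makes of the intermediate steps, the final formula can be sanity-checked numerically. Here is a minimal sketch (assuming NumPy and SciPy are available; the matrix ##Q## below is an arbitrary example, not from the problem) comparing ##P(t)=e^{Qt}## against a generic ODE integrator applied to ##P'=QP## with ##P(0)=I##:

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

# Arbitrary example matrix Q (made up for the check, not part of the problem).
rng = np.random.default_rng(0)
n = 3
Q = rng.standard_normal((n, n))
t_final = 2.0

# Candidate solution P(t) = exp(Qt) with P(0) = I.
P_candidate = expm(Q * t_final)

# Integrate P' = QP entry-wise from P(0) = I with a generic solver.
def rhs(t, p_flat):
    return (Q @ p_flat.reshape(n, n)).ravel()

sol = solve_ivp(rhs, (0.0, t_final), np.eye(n).ravel(), rtol=1e-10, atol=1e-12)
P_numeric = sol.y[:, -1].reshape(n, n)

# The two should agree up to the integration tolerance.
print(np.max(np.abs(P_candidate - P_numeric)))
```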
 
Why not leave it here for someone else?
 
docnet said:

$$\begin{align*}
\frac{d}{dt}P(t)&=QP(t)\\
\frac{1}{P(t)}dP(t)&=Qdt\\
\end{align*}$$

This is invalid. On the left-hand side you are multiplying by ##P^{-1}## on the left, but on the right-hand side you are multiplying by ##P^{-1}## on the right. You do not know that ##P## or ##P^{-1}## commutes with ##Q##. Also, the matrix exponential function does not have an inverse: a matrix may not have a logarithm, or it may have more than one.
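To see the obstruction concretely, here is a minimal sketch (assuming NumPy; the matrices are arbitrary examples, not from the problem) showing that for a generic invertible ##P##, ##P^{-1}QP \ne Q##, which is exactly what the separation step would implicitly need:

```python
import numpy as np

# Arbitrary example matrices (made up for illustration); generically they do not commute.
rng = np.random.default_rng(1)
P = rng.standard_normal((3, 3))
Q = rng.standard_normal((3, 3))

# The separation step would implicitly require P^{-1} Q P = Q, i.e. QP = PQ.
similar = np.linalg.inv(P) @ Q @ P
print(np.allclose(similar, Q))        # False for a generic pair
print(np.allclose(Q @ P, P @ Q))      # the commutator is nonzero, too
```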

If ##P' = QP## for constant ##Q##, then multiply on the left by the integrating factor ##e^{-Qt}##, which commutes with ##Q##, to obtain $$0 = e^{-Qt}P' - e^{-Qt}QP = (e^{-Qt}P)'.$$ Now integrate to obtain ##C = e^{-Qt}P(t)##, and finally multiply on the left by ##e^{Qt}## to obtain $$P(t) = e^{Qt}C = e^{Qt}P(0).$$ This is consistent with the case where ##P## is a column vector and ##Q## is square.
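A quick numerical check of the integrating-factor argument, as a sketch (assuming NumPy/SciPy; ##Q## and ##P(0)## below are arbitrary example matrices): along a numerically integrated solution of ##P'=QP##, the product ##e^{-Qt}P(t)## should stay equal to the constant ##C=P(0)##.

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

# Arbitrary example matrices (made up for the check): constant Q and initial value P0.
rng = np.random.default_rng(2)
n = 3
Q = rng.standard_normal((n, n))
P0 = rng.standard_normal((n, n))

def rhs(t, p_flat):
    return (Q @ p_flat.reshape(n, n)).ravel()

ts = np.linspace(0.0, 1.5, 6)
sol = solve_ivp(rhs, (ts[0], ts[-1]), P0.ravel(), t_eval=ts, rtol=1e-10, atol=1e-12)

# e^{-Qt} P(t) should be the same constant matrix C = P(0) at every sampled time.
for k, t in enumerate(ts):
    C = expm(-Q * t) @ sol.y[:, k].reshape(n, n)
    print(f"t = {t:.2f}, max deviation from P(0): {np.max(np.abs(C - P0)):.2e}")
```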
 
The fact that matrix multiplication is not commutative means that certain results from the calculus of real-valued functions may not carry over. For example, from first principles,
$$\begin{split}
\frac{d}{dt}A^2 &= \lim_{\delta t \to 0} \frac{(A + \delta A)^2 - A^2}{\delta t} \\
&= \lim_{\delta t \to 0} \frac{A\,\delta A + (\delta A) A + (\delta A)^2}{\delta t} \\
&= A\frac{dA}{dt} + \frac{dA}{dt} A,
\end{split}$$
and it is not generally the case that this equals either ##2A\frac{dA}{dt}## or ##2\frac{dA}{dt}A##. The product rule does carry over, but one must be careful to preserve the order of factors: ##(AB)' = A'B + AB'##. It follows that in general the derivative of ##\exp(A(t))## is neither ##A'\exp(A)## nor ##\exp(A)A'##.
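As an illustration of the ##\frac{d}{dt}A^2## point, here is a small sketch (assuming NumPy; the matrix function ##A(t)## is a made-up example) comparing a finite-difference derivative of ##A(t)^2## with ##A A' + A' A## and with the naive ##2 A A'##:

```python
import numpy as np

# A made-up matrix-valued function A(t) and its exact derivative A'(t).
def A(t):
    return np.array([[np.cos(t), t],
                     [t**2,      np.sin(t)]])

def Aprime(t):
    return np.array([[-np.sin(t), 1.0],
                     [2.0 * t,    np.cos(t)]])

t, h = 0.7, 1e-6

# Central finite-difference derivative of A(t)^2.
dA2_fd = (A(t + h) @ A(t + h) - A(t - h) @ A(t - h)) / (2.0 * h)

symmetric = A(t) @ Aprime(t) + Aprime(t) @ A(t)   # A A' + A' A
naive = 2.0 * A(t) @ Aprime(t)                    # 2 A A'

print(np.max(np.abs(dA2_fd - symmetric)))  # small: finite-difference error only
print(np.max(np.abs(dA2_fd - naive)))      # not small: the naive rule fails
```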
 