Solutions of first-order matrix differential equations

Summary
The discussion centers on solving the first-order matrix differential equation ##[A(t)+B(t) \partial_t]\left | \psi \right >=0##, where ##A(t)## and ##B(t)## cannot be simultaneously diagonalized. A proposed approach rewrites the equation as ##\partial_t \left | \psi \right >=-B^{-1}(t)A(t)\left | \psi \right >##, leading to a candidate general solution given by the exponential of an integral. However, there are concerns about the method's validity, in particular whether that exponential form actually solves the equation, as well as about the notation used. In general there is no closed-form solution unless the coefficient matrix commutes with its own integral, so the thread also asks for alternative approximation methods and references.
Haorong Wu
Homework Statement
How to solve the following matrix differential equation, ##[A(t)+B(t) \partial_t]\left | \psi \right >=0 ##, where ##A(t)## and ##B(t)## are ##n\times n## matrices and ##\left | \psi \right >## is an ##n##-vector.
Relevant Equations
None
Hello, there. I am trying to solve the differential equation ##[A(t)+B(t) \partial_t]\left | \psi \right >=0 ##. However, ##A(t)## and ##B(t)## cannot be simultaneously diagonalized. I do not know whether there is any method that can approximately solve the equation.

I suppose I could write the equation as ##\partial_t \left | \psi \right >=-B^{-1}(t) A(t)\left | \psi \right > ##, with the general solution ##\left | \psi \right >=\exp \left ( \int_0^t -B^{-1}(t') A(t')dt'\right ) \left | c\right > ##, where ##\left | c\right > ## is a constant vector. Then a first-order approximation may be ##\left | \psi \right >=\left (I+ \int_0^t -B^{-1}(t') A(t')dt'\right ) \left | c\right > ##.
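Below is a minimal numerical sketch of this idea, not from the thread, using hypothetical ##2\times 2## matrices ##A(t)## and ##B(t)## invented for illustration: it integrates ##\partial_t \left | \psi \right >=-B^{-1}(t) A(t)\left | \psi \right > ## directly with SciPy and compares the result against the proposed first-order approximation, so one can see how the truncation error behaves as ##t## grows.

Python:
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical example matrices (not from the thread), chosen so that B(t)
# stays invertible on the interval of interest.
def A(t):
    return np.array([[0.0, 1.0], [-1.0, 0.2 * t]])

def B(t):
    return np.array([[1.0, 0.1 * t], [0.0, 1.0 + t]])

def M(t):
    # Coefficient matrix of the rewritten equation d/dt psi = -B^{-1}(t) A(t) psi
    return -np.linalg.solve(B(t), A(t))

c = np.array([1.0, 0.0])   # constant vector |c> = psi(0)
t_end = 0.1                # keep t small: the proposed expansion is first order

# Reference: direct numerical integration of d/dt psi = M(t) psi
sol = solve_ivp(lambda t, y: M(t) @ y, (0.0, t_end), c, rtol=1e-10, atol=1e-12)
psi_numeric = sol.y[:, -1]

# Proposed first-order approximation: psi ~ (I + integral_0^t M(t') dt') c,
# with the integral evaluated by a simple trapezoid rule.
ts = np.linspace(0.0, t_end, 201)
Ms = np.stack([M(t) for t in ts])
dt = ts[1] - ts[0]
integral = dt * (0.5 * Ms[0] + Ms[1:-1].sum(axis=0) + 0.5 * Ms[-1])
psi_first_order = (np.eye(2) + integral) @ c

print("numeric    :", psi_numeric)
print("first order:", psi_first_order)
print("difference :", np.linalg.norm(psi_numeric - psi_first_order))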

I am not familiar with matrix differential equations. Does this method have any restrictions or problems? Or are there other, better approximation methods? Any references would be greatly appreciated.

Thanks!
 
Haorong Wu said:
Homework Statement: How to solve the following matrix differential equation, ##[A(t)+B(t) \partial_t]\left | \psi \right >=0 ##, where ##A(t)## and ##B(t)## are ##n\times n## matrices and ##\left | \psi \right >## is an ##n##-vector.
Relevant Equations: None

Hello, there. I am trying to solve the differential equation ##[A(t)+B(t) \partial_t]\left | \psi \right >=0 ##. However, ##A(t)## and ##B(t)## cannot be simultaneously diagonalized. I do not know whether there is any method that can approximately solve the equation.

I suppose I could write the equation as ##\partial_t \left | \psi \right >=-B^{-1}(t) A(t)\left | \psi \right > ##, with the general solution ##\left | \psi \right >=\exp \left ( \int_0^t -B^{-1}(t') A(t')dt'\right ) \left | c\right > ##, where ##\left | c\right > ## is a constant vector. Then a first-order approximation may be ##\left | \psi \right >=\left (I+ \int_0^t -B^{-1}(t') A(t')dt'\right ) \left | c\right > ##.

I am not familiar with matrix differential equations. Does this method have any restrictions or problems? Or are there other, better approximation methods? Any references would be greatly appreciated.

Thanks!
I don't understand what you did. Starting from ##\partial_t \left | \psi \right >=-B^{-1}(t) A(t)\left | \psi \right > ##, wouldn't you get ##\left | \psi \right >= \left ( \int_0^t -B^{-1}(t') A(t')dt'\right ) ##? You might be mixing up the concept of an integrating factor with the solution of a differential equation. It's been nearly 25 years since I worked on this stuff, so I could be mistaken.

Nit: Also, you have used the symbol ##\partial_t##. Since the matrices and vector are functions of a single variable t, the ordinary derivative would be more suitable, IMO.
 
Thanks, @Mark44. There is a ##\left | \psi \right >## on the RHS, so it is like the equation ##y'=\alpha y##, whose solution is an exponential function. Also, thanks for the suggestion about ##\partial_t##, but I use ##\partial_t## for both partial and ordinary derivatives when no confusion can occur.
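A short worked note, not part of the original exchange, on where this scalar analogy breaks down: for a scalar equation ##y'(t)=\alpha(t)y(t)## the exponential of the integral always works, because ##\alpha(t)## trivially commutes with ##\int_0^t \alpha(t')dt'##. For the matrix equation ##\partial_t \left | \psi \right > = M(t)\left | \psi \right > ## with ##M(t)=-B^{-1}(t)A(t)##, differentiating the candidate ##\exp\left(\int_0^t M(t')dt'\right)\left | c \right >## term by term in the exponential series gives $$\frac{d}{dt}\exp\left(\int_0^t M(t')\,dt'\right) = M(t)\exp\left(\int_0^t M(t')\,dt'\right)$$ only if ##\left[M(t),\int_0^t M(t')\,dt'\right]=0##; otherwise extra commutator terms appear and the proposed exponential form is not an exact solution. This is the restriction made explicit later in the thread.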
 
Haorong Wu said:
Thanks, @Mark44. There is a ##\left | \psi \right >## on the RHS, so it is like the equation ##y'=\alpha y##, whose solution is an exponential function.
OK, I understand. I'm not so familiar with the notation ##|\psi>##, as that's probably more of a physics notation than one used in mathematics.
 
Mark44 said:
OK, I understand. I'm not so familiar with the notation ##|\psi>##, as that's probably more of a physics notation than one used in mathematics.
Sorry for the confusion. It can be treated as a vector.
 
Assuming ##B## is invertible, we can rewrite the ODE as $$\frac{d}{dt}(B\psi) + \left[ AB^{-1} - \frac{dB}{dt}B^{-1}\right](B\psi) = 0,$$ which is of the standard form $$\frac{du}{dt} + C(t)u = 0.$$ But in general there is no closed-form solution unless ##C(t)## commutes with ##\int_0^t C(s)\,ds##, in which case the solution is $$u(t) = \exp\left(-\int_0^t C(s)\,ds\right)u(0).$$
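As a numerical sanity check on this substitution, here is a minimal sketch reusing the same made-up ##A(t)## and ##B(t)## as in the earlier example (so ##dB/dt## is known exactly): it integrates both the original equation for ##\psi## and the transformed equation ##\frac{du}{dt} + C(t)u = 0##, then compares ##B(t)\psi(t)## with ##u(t)##, which should agree up to integration tolerance.

Python:
import numpy as np
from scipy.integrate import solve_ivp

# Same hypothetical A(t), B(t) as in the earlier sketch; dB is its exact derivative.
def A(t):
    return np.array([[0.0, 1.0], [-1.0, 0.2 * t]])

def B(t):
    return np.array([[1.0, 0.1 * t], [0.0, 1.0 + t]])

def dB(t):
    return np.array([[0.0, 0.1], [0.0, 1.0]])

def C(t):
    # C(t) = A B^{-1} - (dB/dt) B^{-1}, the coefficient of the transformed equation
    Binv = np.linalg.inv(B(t))
    return A(t) @ Binv - dB(t) @ Binv

psi0 = np.array([1.0, 0.0])
u0 = B(0.0) @ psi0
t_end = 1.0

# Original equation: d/dt psi = -B^{-1}(t) A(t) psi
psi_sol = solve_ivp(lambda t, y: -np.linalg.solve(B(t), A(t) @ y),
                    (0.0, t_end), psi0, rtol=1e-10, atol=1e-12)
# Transformed equation: du/dt + C(t) u = 0
u_sol = solve_ivp(lambda t, u: -C(t) @ u,
                  (0.0, t_end), u0, rtol=1e-10, atol=1e-12)

# The substitution predicts u(t) = B(t) psi(t); the two outputs should agree
# to integration tolerance.
print("B(T) psi(T):", B(t_end) @ psi_sol.y[:, -1])
print("u(T)       :", u_sol.y[:, -1])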
 