System of ODEs, nonconstant but periodic coefficients

Marin
Hi all!

I'm trying to solve the following system of ODEs, but so far without success...

\dot{\vec x} = [-i\omega(t)\sigma_z - \nu(t)\sigma_y]\vec x

with \sigma_i the Pauli matrices and \omega(t), \nu(t) well-behaved functions of t (in fact I also have \omega = 1+\nu). Moreover, \nu(t+T) = \nu(t), and the same of course holds for \omega(t).

I know such systems are hardly ever solvable in closed form; however, this one has some nice properties (e.g. the time dependence sits outside the matrices, the matrices themselves have nice properties, the coefficients are periodic) which make me think it's doable.

The problem comes from the fact that the Pauli matrices do not commute, so an exponential ansatz fails (unlike the case where one of the matrices is the identity).
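To spell out the obstruction: if the coefficient matrix commuted with itself at different times, a plain exponential would still work,

[A(t),A(s)] = 0 \text{ for all } t,s \quad\Longrightarrow\quad \vec x(t) = \exp\left(\int_0^t A(s)\,ds\right)\vec x_0

but here A(t) = -i\omega(t)\sigma_z - \nu(t)\sigma_y and [\sigma_z,\sigma_y] = -2i\sigma_x \ne 0, so [A(t),A(s)] is proportional to (\omega(t)\nu(s)-\omega(s)\nu(t))\,\sigma_x, which vanishes only if \omega/\nu is constant in time - and with \omega = 1+\nu that would force \nu itself to be constant.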

The periodicity reminds me of the Floquet theorem/analysis, but so far I haven't found any statement about solvability there. It only says that the solution can be written in the form

\vec x(t) = Q(t)e^{tR}\vec x_0

for some matrix-valued function Q(t) and a constant matrix R. However, when I plug it into the equation above it doesn't help much, mainly because Q and R do not commute a priori (even if I assume they are both invertible for all t).
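A remark on the Floquet form: even when no closed form is in sight, the Floquet data can be computed numerically. Integrating the fundamental matrix over one period gives the monodromy matrix M = X(T); then R = \log(M)/T, and Q(t) = X(t)e^{-tR} is automatically T-periodic. A minimal Python/scipy sketch, assuming a purely illustrative coefficient \nu(t) = 0.3\cos t - that choice, and the tolerances, are placeholders, not part of the problem:

import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import logm

sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)
sigma_y = np.array([[0, -1j], [1j, 0]], dtype=complex)

T = 2 * np.pi
nu = lambda t: 0.3 * np.cos(t)   # illustrative choice with nu(t + T) = nu(t)
omega = lambda t: 1.0 + nu(t)    # the given relation omega = 1 + nu

def rhs(t, x):
    return (-1j * omega(t) * sigma_z - nu(t) * sigma_y) @ x

# Integrate both columns of the fundamental matrix X(t) over one period.
cols = [solve_ivp(rhs, (0.0, T), e, rtol=1e-10, atol=1e-12).y[:, -1]
        for e in np.eye(2, dtype=complex)]
M = np.column_stack(cols)   # monodromy matrix M = X(T)
R = logm(M) / T             # the constant matrix in x(t) = Q(t) e^{tR} x_0
print("Floquet exponents:", np.linalg.eigvals(R))

The eigenvalues of R are the Floquet exponents; whether they can be found analytically is of course the question of this thread.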

I tried rewriting the system in various ways, but that only made things more complicated: matrix-valued functions appeared in which the time dependence can no longer be separated.

I also tried to use the fact that \omega = 1+\nu, but the non-commutativity problem remains, so I doubt it helps.

Any ideas or hints are much appreciated. If someone has had this or a similar problem before and would like to share their experience, that would be perfect :)

Thanks a lot,
marin
 
Since \omega and \nu are periodic, have you tried finding a Fourier series solution for x?
 
Hi, AlephZero!

If we assume that x(t) is periodic and indeed expand it in a Fourier series (with vector-valued coefficients a_n), then when comparing coefficients the rhs still has time-dependent coefficients, so we have to solve an equation of the form

in\,\vec a_n = G(t)\,\vec a_n

for some matrix G(t) - and I don't think I can solve for the a_n here.

If, on the other hand, I try to Fourier transform the whole equation, then again on the rhs I get two convolutions (one from \omega x and one from \nu x - I don't even know whether these have an analogue in the discrete case of Fourier series), because of the time-dependent \omega(t), \nu(t).

What I tried some time ago was to expand \omega and \nu in Fourier series, so that the expansion coefficients combine into matrices A_n = -i\omega_n\sigma_z - \nu_n\sigma_y, to obtain

\dot{\vec x} = \sum_n A_n e^{int}\vec x

but again nothing guarantees that [A_n, A_m] = 0, so it's the same problem as before...

Did I understand you correctly, AlephZero, or did you mean something else?
 
Marin said:
The periodicity reminds me of the Floquet theorem/analysis, but so far I haven't found any statement about solvability there. It only says that the solution can be written in the form

\vec x(t) = Q(t)e^{tR}\vec x_0

for some matrix-valued function Q(t) and a constant matrix R. However, when I plug it into the equation above it doesn't help much, mainly because Q and R do not commute a priori (even if I assume they are both invertible for all t).

Use the fact that the Pauli matrices, combined with the identity, are a complete basis for 2x2 complex matrices. So write

Q(t) = \vec{q}(t) \cdot \vec \sigma + q_0(t) \sigma_0, \qquad R = -i \vec r \cdot \vec \sigma -i r_0 \sigma_0

where \sigma_0 is the 2x2 identity. Then use the fact that for a unit vector \hat{n},

\exp (i \alpha \hat{n} \cdot \vec \sigma) = \cos \alpha \; \sigma_0 + i \sin \alpha \; \hat{n} \cdot \vec \sigma

This formula also holds if \alpha is complex, by analytic continuation. With these facts, you should be able to plug in the ansatz and find the solution by using the Pauli matrix algebra.
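A quick numerical sanity check of that exponential formula - nothing here beyond numpy/scipy; \alpha and \hat n are arbitrary test values, with \alpha deliberately complex:

import numpy as np
from scipy.linalg import expm

# Pauli matrices sigma_x, sigma_y, sigma_z
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

alpha = 0.7 + 0.2j                       # the identity holds for complex alpha
n_hat = np.array([1.0, 2.0, 2.0]) / 3.0  # a unit vector
n_dot_sigma = sum(c * s for c, s in zip(n_hat, sigma))

lhs = expm(1j * alpha * n_dot_sigma)
rhs = np.cos(alpha) * np.eye(2) + 1j * np.sin(alpha) * n_dot_sigma
print(np.allclose(lhs, rhs))             # prints True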
 
I haven't tried it, but I was thinking along the lines of

Express x, \omega and \nu as Fourier series.

Take the DE, multiply by e^{-ipt} where p corresponds to a particular harmonic, and integrate over one period.

With luck most of the terms will be zero, because of the orthogonality of e^{ipt} and e^{iqt} when p and q are different harmonics.

If you want an approximate numerical solution, you could try the Harmonic Balance Method. (But without knowing something about \omega and \nu, there is no way to tell how useful that idea would be.)
 
Ben Niehoff, this is a nice way of rewriting the ansatz. One sees immediately that r_0 has to vanish, since the identity matrix is not present on the rhs of the ODE.

However, there are two major problems with it:

As a solution to the ODE, x(t) has to reproduce itself (modulo factors) somehow, in order to compare the coefficients in front of the matrix basis with those on the rhs of the ODE. The linear combination of the \sigma_i's is easy to differentiate, but I really don't see how it can reproduce itself... Inserting an identity Q^{-1}Q involves computing the determinant of Q, which is a quadratic form in all the q_i's and ends up in the denominator...

A similar problem occurs in the exponential term (after differentiation). There we have to exchange R and Q in order to read off x(t) again. But there is no way (at least I don't see one) to exchange the positions of R and Q while picking up only a factor; even taking into account all the nice properties of the Pauli matrices, you'd be forced to pick up a commutator, which does not occur in the definition of x(t). Indeed, the commutator would involve the Pauli matrices themselves, but it won't have the form of Q(t), and then the only thing that comes to my mind is again inserting a term Q^{-1}Q.

Using the formula for the exponential and various other identities involving Pauli matrices, the ansatz can also be written in the form (with r = |\vec r| and \hat r = \vec r/r):

\vec x = \Big([\vec q\cos(rt)+(\vec q\times\hat r - iq_0\hat r)\sin(rt)]\cdot\vec\sigma+[q_0\cos(rt)-i(\vec q\cdot\hat r)\sin(rt)]\,\sigma_0\Big)\vec x_0

which looks a little confusing, bearing in mind that we seek an expression whose derivative has the form \dot{\vec x} = A(t)\vec x.

But maybe I'm overlooking something?

AlephZero,
Express x, \omega and \nu as Fourier series.

Take the DE, multiply by e^{-ipt} where p corresponds to a particular harmonic, and integrate over one period.

When I express x, \omega, \nu as Fourier series, then on the rhs we get something like a double series, since the \omega and \nu series run over one index, say m, while the series for x runs over a different index, say n:

\sum_n in\,\vec x_n e^{int} = -\sum_{m,n} (i\omega_m\sigma_z+\nu_m\sigma_y)\,\vec x_n e^{i(m+n)t}

Then we can indeed compare coefficients of e^{int}, since \{e^{int}\}_n is a basis of L^2 over one period, resulting in:

in\,\vec x_n = -\sum_{m} (i\omega_m\sigma_z+\nu_m\sigma_y)\,\vec x_{n-m}

but this couples all the \vec x_n to one another, so it does not immediately determine them, does it?
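One thing the coupled system is still good for: it is exactly a discrete convolution, and truncating to |n| \le N turns it into a finite linear-algebra problem. With the more general Floquet ansatz \vec x(t) = e^{\mu t}\sum_n \vec x_n e^{int}, the exponent \mu becomes an eigenvalue of the truncated block "Hill matrix" H_{nk} = A_{n-k} - in\,\delta_{nk}\,I, where A_m = -(i\omega_m\sigma_z + \nu_m\sigma_y). A sketch, again with the purely illustrative \nu(t) = 0.3\cos t (so \nu_{\pm 1} = 0.15, \omega_0 = 1):

import numpy as np

sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)
sigma_y = np.array([[0, -1j], [1j, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def A(m):
    # Fourier coefficients of A(t) for the illustrative nu(t) = 0.3*cos(t)
    nu_m = 0.15 if abs(m) == 1 else 0.0
    omega_m = (1.0 if m == 0 else 0.0) + nu_m   # omega = 1 + nu
    return -(1j * omega_m * sigma_z + nu_m * sigma_y)

N = 20                                          # truncation order
idx = list(range(-N, N + 1))
H = np.zeros((2 * len(idx), 2 * len(idx)), dtype=complex)
for i, n in enumerate(idx):
    for j, k in enumerate(idx):
        H[2*i:2*i+2, 2*j:2*j+2] = A(n - k) - (1j * n * I2 if n == k else 0)

# Eigenvalues approximate the Floquet exponents mu (defined modulo i).
print(np.sort_complex(np.linalg.eigvals(H))[:4])

For N large enough the exponents stabilize; this is essentially the Harmonic Balance Method mentioned above, applied to the first-order system.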

This ODE is indeed very interesting. It looks like it should have an analytical solution because of its many nice properties, but finding one is by no means trivial!
 
Does anyone have any other ideas?
 
I'm not familiar with QM, so I had to translate your equation into a more "elementary" form. I hope I didn't make a mistake doing that, otherwise this solution will be nonsense!

\vec x' = [ -i \omega(t) \sigma_x - \nu(t)\sigma_y] \vec x
where

\omega = 1 + \nu

\nu(t+T) = \nu(t)

\sigma_x = \left(\begin{matrix}0 & 1 \cr 1 & 0 \end{matrix}\right)

\sigma_y = \left(\begin{matrix}0 & -i \cr i & 0 \end{matrix} \right)

So if

\vec x = \left(\begin{matrix} x_1 \cr x_2 \end{matrix}\right)

This is equivalent to

\left(\begin{matrix} x'_1 \cr x'_2 \end{matrix}\right) = \left(\begin{matrix} 0 & -i \cr -i(\omega + \nu) & 0 \end{matrix}\right) \left(\begin{matrix} x_1 \cr x_2 \end{matrix}\right)

We can differentiate this and get independent equations in x_1 and x_2: from the first row, x'_1 = -i x_2, so x''_1 = -i x'_2 = -(\omega+\nu)x_1, i.e.

x''_1 + (\omega + \nu)x_1 = 0

and similarly for x_2.

To simplify the notation, normalize time so that T = 2\pi.

Let

\omega(t)+\nu(t) = \sum_{k=0}^\infty f_k e^{ikt}

Look for a solution

x_1 = \sum_{k=0}^\infty a_k e^{ikt}

Substituting into the equation for x_1 gives

\sum_{k\ge 0} -k^2 a_k e^{ikt} = -\sum_{j,k\ge 0} f_j a_k e^{i(j+k)t}

Multiply by e^{-int} and integrate between 0 and 2\pi; by orthogonality only the terms with j + k = n survive:

n^2 a_n = \sum_{j+k=n} f_j a_k

This is the (infinite) system of equations
0 = f_0 a_0

a_1 = f_0 a_1 + f_1 a_0

4a_2 = f_0a_2 + f_1a_1 + f_2a_0

etc.

In general the only solution is a_n = 0 for all n, but if f_0 = 0 there is a non-trivial solution, where presumably a_0 would be determined by the boundary conditions.
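For what it's worth, the system is triangular, so if f_0 = 0 the a_n can be generated recursively from a_0. A small sketch with made-up harmonics f_1, f_2 - purely illustrative values, not derived from any particular \nu:

import numpy as np

f = np.zeros(30, dtype=complex)   # harmonics of omega + nu, with f_0 = 0 assumed
f[1], f[2] = 0.3, 0.1             # made-up example values

a = np.zeros_like(f)
a[0] = 1.0                        # the free constant, fixed by boundary conditions
for n in range(1, len(f)):
    # n^2 a_n = sum_{j=1}^{n} f_j a_{n-j}   (the f_0 term drops out)
    a[n] = sum(f[j] * a[n - j] for j in range(1, n + 1)) / n**2
print(a[:6])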

Maybe if f_0 \ne 0 there is a different non-periodic solution corresponding to that constant value?

Anyway, even if this attempt at a solution is wrong, it might lead to something better!
 
AlephZero,

I guess you copied the wrong ODE :( - it's \sigma_z and not \sigma_x in the problem.

However, something similar could work with the actual ODE. The second-order equation there takes a slightly more complicated form:

\nu\ddot x-\dot\nu\dot x-[\dot\nu+\nu+2\nu\dot\nu]x = 0

Then, using a Floquet ansatz of the form

x(t) = e^{-t}p(t)

with some periodic function p(t) (so \dot x = e^{-t}(\dot p - p) and \ddot x = e^{-t}(\ddot p - 2\dot p + p)), it reduces to:

\nu\ddot p-(2\nu+\dot\nu)\dot p - 2\nu\dot\nu\,p = 0

By the general theory of linear ODEs we only need one particular solution, whatever it is, to obtain the general one (by reduction of order) - this is what I've been trying these days, but so far without success :(

Or we just need some substitution that kills one of the terms in the ODE for p: then it reduces either to a Hill-type equation if we kill the middle term, or to a first-order linear ODE if we kill the first or last term, respectively.

Your method could give a recursion relation for the coefficients, which indeed might be useful - I'll give it a try as well :)
 
Marin said:
I guess you copied the wrong ODE :( - it's \sigma_z and not \sigma_x in the problem.
Yeah, silly mistakes like that always happen on the first line :rolleyes:
 
