System ODE, nonconstant but periodic coefficients

In summary, the conversation discusses the difficulty of solving a system of ODEs involving Pauli matrices and well-behaved functions, despite some nice properties such as the time dependence sitting outside the matrices and the coefficients being periodic. Various methods, such as the Floquet theorem, have been attempted, but the non-commutativity of the matrices continues to cause problems. Suggestions are made to use a Fourier series solution or the Harmonic Balance Method, but without more information on the functions involved, it is unclear how successful these methods will be.
  • #1
Marin
Hi all!

I'm trying to solve the following system of ODEs, but so far somewhat unsuccessfully...

[tex]\dot{\vec x} = [-i\omega(t)\sigma_z - \nu(t)\sigma_y]\vec x[/tex]

with sigma_i the Pauli matrices and w(t) and v(t) well-behaved functions of t (actually I also have that w = 1+v). Moreover, v(t+T) = v(t), and the same holds of course for w(t).

I know such systems are hardly ever solvable; however, here we have some nice properties (e.g. the time dependence is outside the matrices, the matrices themselves have nice properties, the coefficients are periodic) which make me think it's doable.

The problem comes from the fact that the Pauli matrices do not commute, so an exponential ansatz fails (unlike the case where one of the matrices is the identity).

The periodicity reminds me of the Floquet theorem/analysis, but I haven't found any statement about solvability there so far. It only says that the solution can be written in the form

[tex]\vec x(t) = Q(t)e^{tR}\vec x_0[/tex]

for some matrix-valued function Q(t) and a constant matrix R. However, when I plug it in the equation above it doesn't help much, mainly because of the a priori non-commutativity of Q and R (even if I assume they are both invertible for all t).

I tried rewriting the system in different ways, but that only made it more complicated, since matrix-valued functions appeared whose time dependence cannot be separated.

I also tried to use the fact that w = 1+v, but the non-commutativity problem remains, so I doubt it could be useful.
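In case anyone wants to experiment numerically, here is a minimal sketch of the Floquet structure (the choice v(t) = 0.1 cos t below is made up purely for illustration, as is the period T = 2*pi; nothing else depends on it). It integrates the fundamental matrix over one period and reads off the Floquet exponents, i.e. the eigenvalues of R:

[code]
import numpy as np
from scipy.integrate import solve_ivp

# made-up test coefficients -- only the structure matters here
nu = lambda t: 0.1 * np.cos(t)            # nu(t + 2*pi) = nu(t)
omega = lambda t: 1.0 + nu(t)             # omega = 1 + nu, as above
T = 2 * np.pi                             # the period

sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)
sigma_y = np.array([[0, -1j], [1j, 0]], dtype=complex)

def rhs(t, y):
    # y is the flattened 2x2 fundamental matrix Y(t), with Y' = A(t) Y
    A = -1j * omega(t) * sigma_z - nu(t) * sigma_y
    return (A @ y.reshape(2, 2)).ravel()

# integrate Y' = A(t) Y with Y(0) = I over one period: the monodromy M = Y(T)
y0 = np.eye(2, dtype=complex).ravel()
M = solve_ivp(rhs, (0, T), y0, rtol=1e-10, atol=1e-12).y[:, -1].reshape(2, 2)

# Floquet exponents: eigenvalues of R in x(t) = Q(t) e^{tR} x_0
print("Floquet exponents:", np.log(np.linalg.eigvals(M)) / T)
[/code]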

Any ideas or hints are much appreciated. If someone has had this or a similar problem before and would like to share their experience, that would be perfect :)

Thanks a lot,
marin
 
  • #2
Since w and v are periodic, have you tried finding a Fourier series solution for x?
 
  • #3
Hi, AlephZero!

If we assume that x(t) is periodic and indeed expand it in a Fourier series (with vector-valued coefficients a_n), then when comparing coefficients the rhs still has time-dependent coefficients, so we have to solve an equation of the form

[tex]in\,\vec a_n = G(t)\vec a_n[/tex]

for some matrix G(t), and I don't think I can solve for a_n here.

If, on the other hand, I try to Fourier transform the whole equation, then again on the rhs I get two convolutions (I don't even know whether there is an analogue in the discrete case of Fourier series), because of the time-dependent w(t), v(t).

What I tried some time ago was to Fourier transform w and v, so that the expansion coefficients become matrices A_n, obtaining

[tex]\dot{\vec x} = \sum_n A_n e^{int}\vec x[/tex]

but again no one guarantees that [A_n, A_m] = 0, so it's the same problem as before...
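Just to make the obstruction explicit, here is a quick numerical check with made-up coefficient values:

[code]
import numpy as np

sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)
sigma_y = np.array([[0, -1j], [1j, 0]], dtype=complex)

def A(om_n, nu_n):
    # matrix Fourier coefficient A_n = -i*omega_n*sigma_z - nu_n*sigma_y
    return -1j * om_n * sigma_z - nu_n * sigma_y

A0, A1 = A(1.0, 0.0), A(0.05, 0.05)        # made-up coefficient values
print(np.allclose(A0 @ A1 - A1 @ A0, 0))   # False: the A_n really don't commute
[/code]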

Did I understand you correctly, AlephZero, or did you mean something else?
 
  • #4
Marin said:
The periodicity reminds me of the Floquet theorem/analysis, but I haven't found any statement about solvability there so far. It only says that the solution can be written in the form

[tex]\vec x(t) = Q(t)e^{tR}\vec x_0[/tex]

for some matrix-valued function Q(t) and a constant matrix R. However, when I plug it in the equation above it doesn't help much, mainly because of the a priori non-commutativity of Q and R (even if I assume they are both invertible for all t).

Use the fact that the Pauli matrices, combined with the identity, form a complete basis for 2x2 complex matrices. So write

[tex]Q(t) = \vec{q}(t) \cdot \vec \sigma + q_0(t) \sigma_0, \qquad R = -i \vec r \cdot \vec \sigma -i r_0 \sigma_0[/tex]

where [itex]\sigma_0[/itex] is the 2x2 identity. Then use the fact that, for a unit vector [itex]\hat{n}[/itex],

[tex]\exp (i \alpha \hat{n} \cdot \vec \sigma) = \cos \alpha \; \sigma_0 + i \sin \alpha \; \hat{n} \cdot \vec \sigma[/tex]

This formula also holds if [itex]\alpha[/itex] is complex, by analytic continuation. With these facts, you should be able to plug in the ansatz and find the solution by using the Pauli matrix algebra.
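If you want to convince yourself of the exponential formula numerically, here is a quick sketch (the complex alpha and the unit vector below are arbitrary made-up values):

[code]
import numpy as np
from scipy.linalg import expm

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),      # sigma_x
         np.array([[0, -1j], [1j, 0]], dtype=complex),   # sigma_y
         np.array([[1, 0], [0, -1]], dtype=complex)]     # sigma_z

alpha = 0.7 + 0.3j                                       # complex alpha is fine
n = np.array([1.0, 2.0, 2.0]); n /= np.linalg.norm(n)    # unit vector n-hat
n_sigma = sum(ni * si for ni, si in zip(n, sigma))       # n . sigma

lhs = expm(1j * alpha * n_sigma)
rhs = np.cos(alpha) * np.eye(2) + 1j * np.sin(alpha) * n_sigma
print(np.allclose(lhs, rhs))                             # True
[/code]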
 
  • #5
I haven't tried it, but I was thinking along the lines of

Express x, w and v as Fourier series.

Take the DE, multiply by [itex]e^{-ipt}[/itex] where p corresponds to a particular harmonic, and integrate over one period.

With luck most of the terms will be zero, because of the orthogonality of [itex]e^{ipt}[/itex] and [itex]e^{iqt}[/itex] when p and q are different harmonics.

If you want an approximate numerical solution, you could try the Harmonic Balance Method. (But without knowing something about w and v, there is no way to tell how useful that idea would be ).
 
  • #6
Ben Niehoff, this is a nice way of rewriting the ansatz. One sees immediately that r_0 has to vanish, since the identity matrix is not present on the rhs of the ODE.

However, there are two major problems with it:

As a solution to the ODE, x(t) has to reproduce itself (modulo factors) somehow, in order to compare the coefficients in front of the matrix basis with those on the rhs of the ODE. The linear combination of the sigma_i's is easy to differentiate, but I really don't see how it can reproduce itself. Trying to insert an identity Q^-1*Q involves computing the determinant of Q, which is a quadratic form in all the q_i's and ends up in the denominator.

A similar problem occurs in the exponential term (after differentiation). There we have to exchange R and Q to be able to read off x(t) again. But there is no way (at least I don't see one) to exchange the positions of R and Q while picking up only a factor; even taking into account all the nice properties of the Pauli matrices, you'd be forced to pick up a commutator, which does not occur in the definition of x(t). Indeed, the commutator would involve the Pauli matrices themselves, but it won't have the form of Q(t), and then the only thing that comes to my mind is again inserting a term Q^-1*Q.

Using the formula for the exponential and various other identities involving Pauli matrices, the ansatz can also be written in the form:

[tex]\vec x = \left([\vec q \cos(rt)+(\vec q\times\hat{\vec r} - iq_0\hat{\vec r})\sin(rt)]\cdot\vec\sigma+[q_0\cos(rt)-i\vec q\cdot\hat{\vec r}\sin(rt)]\sigma_0\right)\vec x_0[/tex]

which looks a little confusing, bearing in mind that we seek an expression for the derivative of the form x' = A(t) x.

But maybe I'm overlooking something?
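For what it's worth, the expanded form above does check out symbolically; here is a sketch of the verification in sympy, treating the q_i and the components of r-hat as free symbols (the identity doesn't even need the unit-norm constraint):

[code]
import sympy as sp

t, r, q0, q1, q2, q3 = sp.symbols('t r q0 q1 q2 q3')
n1, n2, n3 = sp.symbols('n1 n2 n3')            # components of r-hat

I2 = sp.eye(2)
sig = [sp.Matrix([[0, 1], [1, 0]]),
       sp.Matrix([[0, -sp.I], [sp.I, 0]]),
       sp.Matrix([[1, 0], [0, -1]])]
q, n = [q1, q2, q3], [n1, n2, n3]

dot = lambda v: sum((vi * si for vi, si in zip(v, sig)), sp.zeros(2, 2))
Q = dot(q) + q0 * I2
expR = sp.cos(r*t) * I2 - sp.I * sp.sin(r*t) * dot(n)   # exp(-i r t n.sigma)

# the expanded form claimed above
cross = [q2*n3 - q3*n2, q3*n1 - q1*n3, q1*n2 - q2*n1]   # q x n
vec = [sp.cos(r*t)*qi + sp.sin(r*t)*(ci - sp.I*q0*ni)
       for qi, ci, ni in zip(q, cross, n)]
scal = q0*sp.cos(r*t) - sp.I*sp.sin(r*t)*sum(qi*ni for qi, ni in zip(q, n))
expanded = dot(vec) + scal * I2

print(sp.simplify(sp.expand(Q * expR - expanded)))      # zero matrix
[/code]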

AlephZero,
Express x, w and v as Fourier series.

Take the DE, multiply by [itex]e^{-ipt}[/itex] where p corresponds to a particular harmonic, and integrate over one period.

When I express x, w, v as Fourier series, then on the rhs we get something like a double series, since the w and v series run over one index, say m, while the x series runs over a different index, say n:

[tex]\sum_n in\,\vec x_n e^{int} = \sum_{m,n} (-i\omega_m\sigma_z-\nu_m\sigma_y)\vec x_n e^{imt}e^{int}[/tex]

Then we could indeed compare coefficients in front of e^{int}, since it's a basis of L^2(0,T), T being the period, resulting in:

[tex]in\,\vec x_n = \sum_{m} (-i\omega_m\sigma_z-\nu_m\sigma_y)\vec x_{n-m}[/tex]

but this is an infinite coupled system and does not really determine the x_n, does it?
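At best one can truncate the harmonics and study the resulting finite system numerically (the standard Hill-determinant idea); here is a sketch with made-up Fourier data for v, where a truncated periodic solution can exist only if the big matrix is singular:

[code]
import numpy as np

# truncated version of  i n x_n = sum_m A_m x_{n-m}  (Hill-determinant idea)
N = 5                                         # keep harmonics n = -N..N
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)
sigma_y = np.array([[0, -1j], [1j, 0]], dtype=complex)

# made-up Fourier data: nu = 0.1 cos t  =>  nu_{1} = nu_{-1} = 0.05
nu_c = {1: 0.05, -1: 0.05}
om_c = {0: 1.0, 1: 0.05, -1: 0.05}            # omega = 1 + nu

dim = 2 * (2 * N + 1)
M = np.zeros((dim, dim), dtype=complex)
for n in range(-N, N + 1):
    row = 2 * (n + N)
    M[row:row+2, row:row+2] -= 1j * n * np.eye(2)   # the  i n x_n  term
    for m, om_m in om_c.items():
        k = n - m
        if -N <= k <= N:                      # A_m multiplies x_{n-m}
            col = 2 * (k + N)
            M[row:row+2, col:col+2] += -1j * om_m * sigma_z - nu_c.get(m, 0) * sigma_y

# a periodic solution needs M to be singular; check the smallest singular value
print(np.linalg.svd(M, compute_uv=False)[-1])
[/code]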

This ODE is indeed very interesting. It looks like it has an analytical solution because of the many nice properties it has, but finding it is by no means trivial!
 
  • #7
Does anyone have any other ideas?
 
  • #8
I'm not familiar with QM, so I had to translate your equation into a more "elementary" form. I hope I didn't make a mistake doing that, otherwise this solution will be nonsense!

[tex]\vec x' = [ -i \omega(t) \sigma_x - \nu(t)\sigma_y] \vec x [/tex]
Where

[tex]\omega = 1 + \nu[/tex]

[tex]\nu(t+T) = \nu(t)[/tex]

[tex]\sigma_x = \left(\begin{matrix}0 & 1 \cr 1 & 0 \end{matrix}\right) [/tex]

[tex]\sigma_y = \left(\begin{matrix}0 & -i \cr i & 0 \end{matrix} \right) [/tex]

So if

[tex]\vec x = \left(\begin{matrix} x_1 \cr x_2 \end{matrix}\right) [/tex]

This is equivalent to

[tex]\left(\begin{matrix} x'_1 \cr x'_2 \end{matrix}\right) =
\left(\begin{matrix} 0 & -i \cr -i(\omega + \nu) & 0 \end{matrix}\right)
\left(\begin{matrix} x_1 \cr x_2 \end{matrix}\right)[/tex]

We can differentiate this and get independent equations in [itex]x_1[/itex] and [itex]x_2[/itex]

[tex]x_1'' + (\omega + \nu)x_1 = 0[/tex]

and similarly for [itex]x_2[/itex].

To simplify the notation, normalize time so that [itex]T = 2 \pi[/itex].

Let

[tex]\omega(t)+\nu(t) = \sum_{k=0}^\infty f_k e^{ikt}[/tex]

Look for a solution

[tex]x_1 = \sum_{k=0}^\infty a_k e^{ikt}[/tex]

[tex]\sum_{k\ge 0} -k^2 a_k e^{ikt} = \sum_{j,k\ge 0} f_j a_k e^{i(j+k)t}[/tex]

Multiply by [itex]e^{-int}[/itex] and integrate between 0 and [itex]2 \pi[/itex].

[tex]-n^2 a_n = \sum_{j+k=n} f_j a_k[/tex]

This is the (infinite) system of equations
[tex]0 = f_0 a_0[/tex]

[tex]-a_1 = f_0 a_1 + f_1 a_0[/tex]

[tex]-4a_2 = f_0a_2 + f_1a_1 + f_2a_0 [/tex]

etc.

In general the only solution is that all the [itex]a_n[/itex] are zero, but if [itex]f_0 = 0[/itex] there is a non-trivial solution, where presumably [itex]a_0[/itex] would be determined by the boundary conditions.

Maybe if [itex]f_0 \ne 0[/itex] there is a different non-periodic solution corresponding to that constant value?

Anyway, even if this attempt at a solution is wrong, it might lead to something better!
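In case it helps, the recursion is easy to iterate numerically once f_0 = 0 is assumed; a sketch with made-up values for the f_j:

[code]
import numpy as np

# iterate  -n^2 a_n = sum_{j+k=n} f_j a_k  assuming f_0 = 0 (made-up f_j)
N = 8
f = np.zeros(N, dtype=complex)
f[1], f[2] = 0.3, 0.1                  # f[0] = 0, so a_0 stays free

a = np.zeros(N, dtype=complex)
a[0] = 1.0                             # fixed by the boundary conditions
for n in range(1, N):
    # with f_0 = 0:  a_n = -(1/n^2) * sum_{j=1}^{n} f_j a_{n-j}
    a[n] = -sum(f[j] * a[n - j] for j in range(1, n + 1)) / n**2
print(a)
[/code]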
 
  • #9
AlephZero,

I guess you copied the wrong ODE :( - it's sigma_z, not sigma_x, in the problem.

However, something similar could work with the actual ODE. The second order equation there takes a slightly more complicated form:

[tex]\nu\ddot x-\dot\nu\dot x-[\dot\nu+\nu+2\nu\dot\nu]x = 0[/tex]

Then, using a Floquet ansatz of the form
[tex]x(t) = e^{-t}p(t)[/tex]
with some periodic function p(t) it reduces to:

[tex]\nu\ddot p-(2\nu+\dot\nu)\dot p - 2\dot\nu\nu p = 0[/tex]
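(A quick symbolic sanity check of this reduction, as a sketch in sympy:)

[code]
import sympy as sp

t = sp.symbols('t')
nu = sp.Function('nu')(t)
p = sp.Function('p')(t)

# plug x = e^{-t} p(t) into  nu x'' - nu' x' - (nu' + nu + 2 nu nu') x = 0
x = sp.exp(-t) * p
lhs = (nu * x.diff(t, 2) - nu.diff(t) * x.diff(t)
       - (nu.diff(t) + nu + 2 * nu * nu.diff(t)) * x)

# the reduced equation for p claimed above
reduced = (nu * p.diff(t, 2) - (2 * nu + nu.diff(t)) * p.diff(t)
           - 2 * nu.diff(t) * nu * p)

print(sp.simplify(sp.exp(t) * lhs - reduced))   # 0
[/code]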

By the general theory of linear ODEs we only need any one particular solution to obtain the general solution (by reduction of order) - this is what I'm trying these days, but so far without success :(

Or we just need some substitution to kill one of the terms in the ODE for p - then it reduces either to a Hill equation if we kill the middle term, or to a first-order linear ODE if we kill the first or last term.

Your method could give a recursion relation for the coefficients, which indeed might be useful - I'll give it a try as well :)
 
  • #10
Marin said:
I guess you copied the wrong ODE :( - it's sigma_z, not sigma_x, in the problem.
Yeah, silly mistakes like that always happen on the first line :rolleyes:
 

1. What is a system of ODE with nonconstant but periodic coefficients?

A system of ODE with nonconstant but periodic coefficients is a set of differential equations describing the behavior of a dynamic system, in which the coefficients are not fixed but vary periodically with time. Such systems are commonly used to model real-world phenomena such as oscillations, cycles, and periodic patterns.

2. How are these coefficients different from constant coefficients?

The main difference is that the values of nonconstant coefficients change over time, while constant coefficients remain fixed. This means that the behavior of a system with nonconstant coefficients varies periodically according to the changes in the coefficients.

3. What are some real-world examples of systems with nonconstant but periodic coefficients?

Some common examples include electrical circuits with time-varying resistances, chemical reactions with periodic reaction rates, and population dynamics with seasonal variations in birth and death rates. Additionally, many physical systems exhibit periodic behavior due to factors such as temperature changes, environmental conditions, or external forces.

4. How do you solve a system of ODE with nonconstant but periodic coefficients?

The most common approach is to use numerical methods, such as Runge-Kutta or Euler's method, to approximate the solutions. However, for some special cases, analytical solutions may be possible using techniques such as Floquet theory or Fourier (harmonic balance) expansions.

5. What are the applications of systems of ODE with nonconstant but periodic coefficients?

These systems have a wide range of applications in various fields, including engineering, physics, biology, and economics. They are commonly used to model and analyze complex systems that exhibit periodic behavior, such as electronic circuits, chemical reactions, population dynamics, and climate patterns.
