From a system of first-order ODEs to a single ODE

rsq_a
Is there an easy way to show that the system:

\begin{align}
x_1' &= p_{11} x_1 + p_{12} x_2 + \ldots + p_{1n} x_n \\
x_2' &= p_{21} x_1 + p_{22} x_2 + \ldots + p_{2n} x_n \\
&\;\;\vdots \\
x_n' &= p_{n1} x_1 + p_{n2} x_2 + \ldots + p_{nn} x_n
\end{align}

must be equivalent to a single nth-order differential equation, like
$$ a_n y^{(n)} + a_{n-1} y^{(n-1)} + \ldots + a_0 y = 0 $$

All the $p_{ij} = p_{ij}(t)$ and $a_i = a_i(t)$. In the case that n = 2, it's easy to show just by manipulation. I assume that it's true in general, but I can't find a slick way to do it.
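For reference, here is the usual n = 2 manipulation (a sketch; it assumes $p_{12}(t)$ is differentiable and never vanishes). Solve the first equation for $x_2$ and substitute into the second:

$$
x_2 = \frac{x_1' - p_{11} x_1}{p_{12}},
\qquad
\left(\frac{x_1' - p_{11} x_1}{p_{12}}\right)' = p_{21} x_1 + p_{22}\,\frac{x_1' - p_{11} x_1}{p_{12}},
$$

which, after expanding the derivative, is a single second-order equation in $x_1$ alone.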
 
Interesting... I never thought about it before; usually you want to reduce the nth-order ODE to the system and not the other way around. An nth-order ODE is always equivalent to a system of n first-order ODEs. But is a system of n first-order ODEs always equivalent to an nth-order ODE?

The eigenvalues of the n×n matrix of first-order equations give you the n roots of the characteristic equation of the equivalent nth-order ODE. And the n roots of the characteristic equation of an nth-order ODE give you the n eigenvalues of the equivalent system.
So if the matrix leads to n eigenvalues, it gives you an equivalent nth-order ODE. Otherwise, the system is not equivalent to an nth-order ODE.
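A small constant-coefficient check of this correspondence (a sketch; the numbers are made up, and the system is just the companion-matrix reduction of $y'' + 3y' + 2y = 0$ with $x_1 = y$, $x_2 = y'$):

```python
import sympy as sp

# Companion-matrix system x' = P x obtained from y'' + 3 y' + 2 y = 0
# with x1 = y, x2 = y' (hypothetical coefficients, chosen for illustration).
P = sp.Matrix([[0, 1],
               [-2, -3]])

lam = sp.symbols('lambda')
print(sp.factor((lam * sp.eye(2) - P).det()))  # (lambda + 1)*(lambda + 2)
print(P.eigenvals())                           # eigenvalues -1 and -2, each simple
```

The eigenvalues of $P$ are exactly the roots of $\lambda^2 + 3\lambda + 2 = 0$, the characteristic equation of the scalar ODE.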
 
bigfooted said:
Interesting... I never thought about it before; usually you want to reduce the nth-order ODE to the system and not the other way around. An nth-order ODE is always equivalent to a system of n first-order ODEs. But is a system of n first-order ODEs always equivalent to an nth-order ODE?

The eigenvalues of the n×n matrix of first-order equations give you the n roots of the characteristic equation of the equivalent nth-order ODE. And the n roots of the characteristic equation of an nth-order ODE give you the n eigenvalues of the equivalent system.
So if the matrix leads to n eigenvalues, it gives you an equivalent nth-order ODE. Otherwise, the system is not equivalent to an nth-order ODE.

I think in the case of a constant-coefficient ODE, it's perhaps a bit easier to reason about (??)
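For the constant-coefficient case there is a clean way to see it: if $x' = P x$ with $P$ constant, then $x^{(k)} = P^k x$, and Cayley–Hamilton says $P$ satisfies its own characteristic polynomial, so every component $x_i$ satisfies the scalar ODE with those coefficients. A quick sympy check on one made-up matrix (the entries are arbitrary, not from the thread):

```python
import sympy as sp

# An arbitrary constant coefficient matrix (hypothetical entries).
P = sp.Matrix([[1, 2, 0],
               [0, 1, 3],
               [1, 0, 2]])

# Characteristic polynomial p(s) = s^3 + c2 s^2 + c1 s + c0.
s = sp.symbols('s')
p = (s * sp.eye(3) - P).det().expand()
c = [p.coeff(s, k) for k in range(3)]

# Cayley-Hamilton: p(P) = 0, so x' = P x (hence x^(k) = P^k x) implies
# x_i''' + c2 x_i'' + c1 x_i' + c0 x_i = 0 for every component x_i.
print(P**3 + c[2] * P**2 + c[1] * P + c[0] * sp.eye(3))  # the zero matrix
```

With time-dependent $p_{ij}(t)$ this argument doesn't apply, which is where the counting below comes in.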

Here is one argument just by counting unknowns. Maybe the first non-trivial example is with 3 dependent variables:
\begin{align}
x_1' &= p_{11} x_1 + p_{12} x_2 + p_{13} x_3 \\
x_2' &= p_{21} x_1 + p_{22} x_2 + p_{23} x_3 \\
x_3' &= p_{31} x_1 + p_{32} x_2 + p_{33} x_3
\end{align}

where $p_{ij} = p_{ij}(t)$. Differentiating the third equation twice, we get
$$ x_3'' = f(x_1, x_1', x_1'', x_2, x_2', x_2'', x_3, x_3', x_3'') $$

There are thus 6 unknowns (from $x_1$ and $x_2$ and their first two derivatives). If you look at the above system, you immediately have two equations for $x_1$ and $x_2$, and you get another 4 by differentiating both equations twice. Thus 6 equations and 6 unknowns. So in some cases, you can solve for $\{x_1, x_1', x_1'', x_2, x_2', x_2''\}$ in terms of $x_3$ and its derivatives, and that gives you an uncoupled equation for $x_3$.

Can someone give some thoughts on where this might fail, and what that means for the eventual higher-order equation in $x_3$?
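A sympy sketch of this elimination for one made-up constant-coefficient system (constants only to keep the substitutions short; the entries are arbitrary, not from the thread):

```python
import sympy as sp

t = sp.symbols('t')
x1, x2, x3 = [sp.Function(f'x{i}')(t) for i in (1, 2, 3)]

# Hypothetical constant p_ij; the thread's general case has p_ij = p_ij(t).
P = sp.Matrix([[1, 2, 0],
               [3, 1, 1],
               [1, 1, 2]])
X = sp.Matrix([x1, x2, x3])
sys = {xi.diff(t): (P * X)[i] for i, xi in enumerate((x1, x2, x3))}

# Rewrite x3' and x3'' as linear combinations of x1, x2, x3 by substituting the system.
d1 = sys[x3.diff(t)]
d2 = sp.expand(d1.diff(t).subs(sys))

# Solve the two relations for x1, x2 in terms of x3 and stand-ins for x3', x3''.
x3p, x3pp = sp.symbols('x3p x3pp')
sol = sp.solve([sp.Eq(x3p, d1), sp.Eq(x3pp, d2)], [x1, x2], dict=True)[0]
print(sol)

# One more differentiation of d2 plus this substitution yields a single third-order
# ODE for x3.  The solve step fails when the 2x2 block is singular -- e.g. if the
# last row of P were (0, 0, p33), then x3' and x3'' carry no information about x1, x2.
```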
 
rsq_a said:
$$ x_3'' = f(x_1, x_1', x_1'', x_2, x_2', x_2'', x_3, x_3', x_3'') $$
Seems to me that if you differentiate twice, it's
$$ x_3''' = f(x_1, x_1', x_1'', x_2, x_2', x_2'', x_3, x_3', x_3'') $$
rsq_a said:
There are thus 6 unknowns (from $x_1$ and $x_2$ and their first two derivatives). If you look at the above system, you immediately have two equations for $x_1$ and $x_2$, and you get another 4 by differentiating both equations twice.
Well, if you differentiate twice here you'll get, for example, an equation for $x_1'''$, so you'll be able to solve for $\{x_1, x_1', x_1'', x_2, x_2', x_2''\}$, but the expressions will involve $x_1'''$ and $x_2'''$. I think I vaguely remember finding out at some point that this inversion can't be done in general, but I can't remember how to construct a counterexample.
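One degenerate case worth keeping in mind (an illustration of where the inversion fails, not a full characterization): if $p_{31} = p_{32} = 0$ identically, the third equation is just

$$ x_3' = p_{33}\, x_3, $$

so $x_3$ and all of its derivatives carry no information about $x_1$ and $x_2$, and no amount of differentiating lets you solve for them. More generally, the elimination goes through when the relations expressing $x_3'$ and $x_3''$ in terms of $x_1, x_2$ (after substituting the system) form a nonsingular $2\times 2$ system on the interval of interest.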
 
Also, why is the 2x2 case trivial?
 
kai_sikorski said:
Also, why is the 2x2 case trivial?

...I've no idea why I thought it was a few days ago. You're right, I was speaking nonsense.
 