An autonomous ODE is simply an ODE

Rasalhague
An autonomous ODE is simply an ODE in which the independent variable does not appear explicitly. (hfshaw, Yahoo Answers)

Okay, good, so y' = 3y is an autonomous ODE, while y'(t) = 3y(t) is not autonomous??

In mathematics, an autonomous system or autonomous differential equation is a system of ordinary differential equations which does not depend on the independent variable. (Wikipedia)

Seems like a contradiction in terms. Differentiation of a function with respect to a variable on which it doesn't depend, i.e. one which doesn't denote its argument, is meaningless. I might as well differentiate f: R --> R, f(x) = x^2 with respect to the colour of my eyes, instead of with respect to x. So in what sense "does not depend"?

Moreover, any system can always be reduced to a first-order system by changing to the new set of dependent variables y = (x, x^(1), x^(2), ..., x^(k-1)). This yields the new first-order system

y' = (y_2, y_3, ..., y_k, f(t, y)).

We can even add t to the dependent variables, z = (t, y), making the right-hand side independent of t:

z' = (1, z_3, z_4, ..., z_{k+1}, f(z)).

Such a system, where f does not depend on t, is called autonomous. (Teschl, p. 7)
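Teschl's augmentation trick can be sanity-checked numerically. The sketch below (my own example, not Teschl's: it takes the non-autonomous scalar equation x' = t·x, x(0) = 1, whose exact solution is exp(t²/2)) solves both the original equation and its autonomized version z = (t, x), z' = (1, z_1·z_2), and checks that they agree, assuming SciPy is available:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Non-autonomous ODE: x' = t*x, x(0) = 1; exact solution x(t) = exp(t^2/2)
def f_nonaut(t, x):
    return [t * x[0]]

# Autonomized version: z = (t, x), z' = (1, z1*z2), z(0) = (0, 1).
# The right-hand side ignores the new independent variable s entirely.
def f_aut(s, z):
    return [1.0, z[0] * z[1]]

sol_nonaut = solve_ivp(f_nonaut, (0.0, 1.0), [1.0], rtol=1e-9, atol=1e-12)
sol_aut = solve_ivp(f_aut, (0.0, 1.0), [0.0, 1.0], rtol=1e-9, atol=1e-12)

print(sol_nonaut.y[0, -1])  # close to exp(0.5) ~ 1.6487
print(sol_aut.y[1, -1])     # x-component of the autonomized system, same value
```

Both runs land on the same trajectory; the autonomized system has simply promoted t to a state variable with t' = 1.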

Hmm, let's see. On p. 9, Teschl gives the "simplest (nontrivial) case of a first-order autonomous equation", (1.20)

x' = f(x), x(0) = x_0, f in C(R).

Since x is a function in C(J), J subset of R (see p. 6), definition (1.20) is a syntax error. I guess what it means is x' = f o x, where o denotes composition. This looks rather like Wikipedia's definition, an autonomous ODE is an equation of the form

d/dt x(t) = f(x(t)),

which it contrasts with a non-autonomous equation, one of the form

d/dt x(t) = g(x(t),t).

But equations are normally given not in terms of explicit functions but in a form such as

d/dt x(t) = sin(x(t)).

If I define a particular g by the rule g(p,q) = sin(p), we have

d/dt x(t) = g(x(t),t) = sin(x(t)),

a non-autonomous ODE, according to Wikipedia. Can someone suggest an unambiguous definition of autonomy, or give me any hint? The idea seems to be that the dependence of the equation on t must "pass through" the unknown function; for the equation to qualify as autonomous, Teschl's outer function f must be "blind to t", and only able to perceive it through the medium of x. To get around the problem of whimsical definitions like g(p,q) = sin(p), I tried expressing the idea as "autonomous iff there exists a function such that it doesn't depend on t", but then realized that won't do, given that any equation can be put into autonomous form. Or maybe such a definition is possible, if only I knew which kinds of algebraic rearrangement/rewriting constitute "changing" the form of an ODE, and which are considered trivial.
 
tiny-tim

Rasalhague said:
d/dt x(t) = g(x(t),t) = sin(x(t)),

a non-autonomous ODE, according to Wikipedia.

But g(x(t),t) = sin(x(t)) is not a function of t, only a function of x(t).

∂g/∂t = 0.

(and ∂g/∂x = cos x, and dg/dt = (∂g/∂x)(dx/dt) + ∂g/∂t = cos x · dx/dt)
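The partial derivatives above can be confirmed symbolically. A small SymPy check (my own illustration, with p and q standing for the two argument slots of g):

```python
import sympy as sp

t = sp.symbols('t')
p, q = sp.symbols('p q')  # the two argument slots of g
x = sp.Function('x')

# The "whimsical" definition g(p, q) = sin(p): the second slot is ignored.
g = sp.sin(p)

print(sp.diff(g, q))  # 0: the partial with respect to the t-slot vanishes
print(sp.diff(g, p))  # cos(p)

# Total derivative of G(t) = g(x(t), t) via the chain rule
G = g.subs(p, x(t))
print(sp.diff(G, t))  # cos(x(t))*Derivative(x(t), t)
```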
 


Rasalhague

tiny-tim said:
But g(x(t),t) = sin(x(t)) is not a function of t, only a function of x(t).

Hi, tiny-tim. Thanks for your answer. I think this may be the clue I was looking for! First, I hope you don't mind if I indulge in a paragraph of pedantry - not for the sake of argument, but for untangling practice, just to revise these concepts in my head, and offer them up for correction if need be.

If we take g(x(t),t) literally it's not a function at all, but a real number, the value, at t, of g composed with (x,I), where I is the identity function. But if we use g(x(t),t), as is often done less formally, to denote the function g o (x,I) itself, then it is a "function of t", in the sense that t represents an arbitrary input of this function. The composite function I've called g o (x,I) is the "g" in your dg/dt. The "g" in your ∂g/∂t is the outer function, the function that I've labelled g. Unfortunately, the Leibniz notation asks us to also make "t" denote two things. In dg/dt, it means differentiate this function, g o (x,I), with respect to its only argument. In ∂g/∂t, it means differentiate this function, g, with respect to its 2nd argument. So it first labels an argument of one function, then it labels an argument of a different function.

∂g/∂t = 0.

(and ∂g/∂x = cos x, and dg/dt = (∂g/∂x)(dx/dt) + ∂g/∂t = cos x · dx/dt)

Aha! I think I see your point. Developing that, maybe we could state autonomy thus:

An ODE, y'(t) = g o y (t), is autonomous iff exactly one of the following holds:

(1) none of the component functions of the inner function, y, is the identity function;

(2) the partial derivative of g with respect to whichever argument slot of the composition g o y is occupied by the identity function is identically equal to zero.

Then autonomy is insensitive to whimsical changes of definition, and x'(t) = sin(x(t)) is autonomous regardless of whether we formalise this relation as x'(t) = g(x(t),t), where g(p,q) = sin(p), or as x'(t) = f(x(t)), where f(w) = sin(w). The expression x'(t) = sin(x(t)) is enough to tell us that it's autonomous. And it makes no difference to its autonomy whether t appears explicitly in the equation, x'(t) = sin(x(t)), or not, x' = sin o x.
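Condition (2) can be made mechanical: replace every occurrence of x(t) in the right-hand side by a fresh symbol and ask whether t survives. A sketch in SymPy (the helper name is_autonomous is my own, not standard terminology):

```python
import sympy as sp

t = sp.symbols('t')
x = sp.Function('x')
u = sp.symbols('u')  # fresh symbol standing in for the value x(t)

def is_autonomous(rhs):
    """True iff t disappears from rhs once every occurrence of x(t)
    is replaced by the independent symbol u, i.e. the only t-dependence
    passes through the unknown function x."""
    return t not in rhs.subs(x(t), u).free_symbols

print(is_autonomous(sp.sin(x(t))))      # True:  x' = sin(x(t))
print(is_autonomous(sp.sin(x(t)) + t))  # False: explicit t remains
```

This sidesteps the whimsical g(p,q) = sin(p): the test looks at the expression itself, not at any particular choice of outer function.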

Or have I misunderstood something about the meaning of terms like "function" and "argument"; is g(p,q) = sin(p), and the like, ungrammatical, thus ruling out the need for condition (2), or avoided by convention?

I'm still rather confused, though, as to what manipulations can be done to an ODE without changing its order, homogeneity and autonomy. Any ODE can be made 1st order by a change of variables, and likewise any ODE can be made autonomous? So if it's such a trivial thing anyway, maybe someone would consider the whimsical rearrangements I proposed sufficient to make an autonomous ODE not autonomous; I'm guessing not, but obviously I'm all at sea still with this subject and will welcome any advice.
 
tiny-tim

Hi Rasalhague! :smile:

(just got up :zzz: …)
Rasalhague said:
If we take g(x(t),t) literally it's not a function at all, but a real number, the value, at t, of g composed with (x,I), where I is the identity function. But if we use g(x(t),t), as is often done less formally, to denote the function g o (x,I) itself, then it is a "function of t", in the sense that t represents an arbitrary input of this function. The composite function I've called g o (x,I) is the "g" in your dg/dt. The "g" in your ∂g/∂t is the outer function, the function that I've labelled g. Unfortunately, the Leibniz notation asks us to also make "t" denote two things. In dg/dt, it means differentiate this function, g o (x,I), with respect to its only argument. In ∂g/∂t, it means differentiate this function, g, with respect to its 2nd argument. So it first labels an argument of one function, then it labels an argument of a different function.

Yes … I think that if you are careful to use ∂/∂t and d/dt where appropriate, there'll be no confusion. :smile:

But I think you're missing the point about functions …

a function is a map, and its definition must include an object and image space. :wink:

g is a map from X x T -> Y.

When we write g(x(t),t), we are restricting consideration to the subset {(x(t), t) : t ∈ T},

but the function g always has two "inputs", and its derivatives have to be ∂s.

There is a closely related function G from T -> Y defined by G(t) = g(x(t),t) … it has only one "input", and its derivative is a d. :smile:
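The g-versus-G distinction is easy to see in code. A minimal sketch (the concrete trajectory x(t) = 2t is my own choice):

```python
import math

def g(xval, tval):
    # g : X x T -> Y; this particular g happens to ignore its t-slot
    return math.sin(xval)

def x(tval):
    # some particular trajectory, here x(t) = 2t
    return 2.0 * tval

def G(tval):
    # G : T -> Y, the one-input function G(t) = g(x(t), t)
    return g(x(tval), tval)

print(G(0.25))  # sin(0.5)
```

g needs two arguments and has partial derivatives; G takes a single argument, and its derivative is the total d/dt.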
 


Rasalhague

tiny-tim said:
Hi Rasalhague! :smile:

(just got up :zzz: …)


Oh, I've been up a while, doggedly working through another PDF intro to differential equations. (Grits teeth.) I will get there one of these days!

Yes … I think that if you are careful to use ∂/∂t and d/dt where appropriate, there'll be no confusion. :smile:

Well, everyone seems to manage. Except... the reason I went into such obsessive detail is that I've been confused over the use of composite functions in this very context of defining types of differential equation, so I'm trying to make sure I can always translate back any shorthand notation into a notation I understand, just in case. And, well, I'm still confused about something, although I'm not sure this is exactly the source of it. But was I thinking along the right lines (or along possible right lines) to define/formalize autonomy in terms of an identically equal partial derivative?

Incidentally, I like Spivak's D, Di notation. Well, actually when I'm doing a calculation and know - or think I know : ) - what I'm doing, I use whatever shorthand is convenient, but when I'm copying out definitions, I may write something like, for example, to express the chain rule

D[g o f, t] = D[g, f[t]] * D[f, t], where D[_,_]: {differentiable functions} x R --> R.

But I think you're missing the point about functions …

a function is a map, and its definition must include an object and image space. :wink:

I could be mistaken, but I sense we're saying the same thing in different words/notation!

I haven't met the terms "object and image space" before. Does object mean the same as domain, and does image space mean the same as codomain? Or does image space mean the same as image?

Anyway, I suppose a differential equation doesn't define a particular function, g, but maybe it would simplify things to say, whatever the equation on the right is, describe it with some g, composed with a function of the form (I, x, x', x'', ..., x^(k)), with, say, the first (or last) input slot of g as the one to be composed with the identity function. Then iff the partial derivative with respect to that slot gives the value 0 for all inputs - or perhaps all inputs in the range of (I, x, x', x'', ..., x^(k)) - we call the equation autonomous.

Hmm, but then, if any ODE can be expressed in this way, we'd need some rule which states what kinds of function we can use without "changing" the equation, so as to allow the existence of non-autonomous ODEs. Otherwise there'd be no distinction at all. ...which possibly brings me back to square one: what is the distinction?

Should I be thinking rather of an ODE as an expression rather than as any kind of function?

g is a map from X x T -> Y.

In other words, in this simple case,

g: U ⊆ R^2 --> R

tiny-tim said:
When we write g(x(t),t), we are restricting consideration to the subset {(x(t), t) : t ∈ T} …

Perhaps I should have added, "Let o denote composition and any necessary restriction." The Wikipedia article on composite functions allows the possibility that the composition operator, o, doesn't also restrict the outer function to the range of the inner function, but it greatly simplifies explanations to assume that it does, and I get the impression it's widely done, implicitly, in practice.

tiny-tim said:
but the function g always has two "inputs", and its derivatives have to be ∂s.

There is a closely related function G from T -> Y defined by G(t) = g(x(t),t) … it has only one "input", and its derivative is a d. :smile:

Yes, this is what I was trying to say : )

Your G is what I meant by g o (x,I), a map R --> R.
 