Rasalhague said:
Arnold defines an ordinary differential equation as an equation of the form
\frac{\partial}{\partial t} g(t,x) \bigg|_{(0,x)} = v(x)
where g : R × M --> M is the evolution function of a state space M = R^n, and v : M --> M is called the phase velocity. Is Schutz's equation (4.25) of this form? If so, how would it be expressed in this way; which part of (4.25) is the first partial of the evolution function (the partial with respect to time), evaluated at (0,x); is it dS? And which part of it is the phase velocity, everything else apart from dS? But then what do d(rho) and dn mean in that context? Are they basis vectors?
I haven't looked too deeply into this. However, I'm pretty sure that an equation with partial derivatives is a partial differential equation, not an ordinary differential equation. In any case, this is not going in the right direction.
You seem to have trouble with the distinction between the independent variable and the dependent variable in an equation of the form y = f(x). In this equation, y has a defining function, namely f. However, x does not have a defining function; it is the independent variable. Independent means something close to "has no defining function".
I mentioned earlier what the infinitesimal is. Here is a more detailed description. It starts with a Taylor series. That is, suppose that
f(x + \Delta x) = f(x) + \Delta x\, f'(x) + \frac{(\Delta x)^2}{2!} f''(x) + \cdots
This is an exact equation. Now consider taking just the first two terms on the right hand side.
f(x + \Delta x) = f(x) + \Delta x\, f'(x)
This is what I call the linearized equation because the right-hand side, unlike the full Taylor series, is linear in \Delta x. It is not an exact equation, but if f is well-behaved and \Delta x is sufficiently small, the error caused by using it is small too. Let's rename \Delta x to dx and call dx 'infinitesimal' to remind ourselves that the following equations are not exact, but are nearly valid if dx is sufficiently small.
f(x + dx) = f(x) + dx\, f'(x)
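To see numerically why dropping the higher-order terms is harmless for small dx, here is a quick sketch (my own example, not from Schutz; I've picked f = sin just for concreteness). The error of the linearized equation shrinks like (dx)^2, exactly as the discarded (dx)^2/2! term predicts.

```python
import math

def f(x):
    return math.sin(x)

def fprime(x):
    return math.cos(x)

x = 1.0
for dx in (0.1, 0.01, 0.001):
    exact = f(x + dx)               # true value f(x + dx)
    linear = f(x) + dx * fprime(x)  # linearized (first two Taylor terms)
    # Each time dx shrinks by 10, the error shrinks by roughly 100.
    print(dx, abs(exact - linear))
```

Note how the error is not zero for any finite dx, which is why the linearized equation is only "nearly valid".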
Also, define
df(x) = f(x + dx) - f(x)
As you say, dx and df are defined differently. dx is just some sufficiently small number, df depends on the choice of dx in a particular way (as well as depending on x and f).
Now we can rearrange the linearized Taylor series as:
df(x) = dx\, f'(x)
One more step, divide both sides by dx.
\frac{df(x)}{dx} = f'(x)
This equation is an approximation, but it becomes an equality if you take the limit as dx goes to zero. Schutz is doing nothing more or less than this, except that his functions are functions of two variables instead of one and he doesn't take the limit.
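The final limit step can also be checked numerically. A minimal sketch (assumptions mine: f = exp, so f' = f, and x = 0.5): the ratio df/dx = (f(x + dx) - f(x))/dx approaches f'(x) as dx shrinks.

```python
import math

f = math.exp
x = 0.5
true_derivative = math.exp(x)  # for exp, f'(x) = f(x)

for dx in (1e-1, 1e-3, 1e-5):
    df = f(x + dx) - f(x)      # df as defined above: f(x + dx) - f(x)
    # The ratio df/dx gets closer to f'(x) as dx goes to zero;
    # the residual error is roughly (dx/2) * f''(x).
    print(dx, df / dx, abs(df / dx - true_derivative))
```

This is just the definition of the derivative as a limit of difference quotients, which is the equality the approximation becomes.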