# ODE with non-exact solution: closed-form, non-iterative approximations

#### FranzS

TL;DR Summary
What is a good approach for approximating a non-polynomial function appearing in an ODE in order to find a closed-form approximate solution?
In case of an integral ##\rightarrow## differential equation of the type:
$$f(t) = \int_0^t g(f(\tau)) d\tau$$
$$\rightarrow \frac{df(t)}{dt} = g(f(t))$$
which turns out not to be solvable in exact form because ##g(f(t))## is a non-polynomial function (but it would if ##g(f(t))## was a polynomial), how would you approximate ##g(f(t))##?
The purpose is to get a closed-form approximate solution with no iterative processes.
Given the dynamic, evolving nature of an ODE, very loosely speaking, I would assume it is better to use a Taylor polynomial centered at ##t=0## (the lower limit of the integral, to be taken as the "starting instant" of the physical evolving system described by the equations above). By contrast, a multilinear ("polynomial") regression would provide wider overall accuracy for the "static" ##g(f(t))## (so to speak), but its worse approximation at ##t=0## would accumulate greater and greater error as time passes.
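As a minimal sketch of the Taylor idea: expanding ##g## to first order about the initial value makes the ODE linear, which has a closed-form solution that can be compared against an accurate numerical reference. The choices ##g(f) = \sin f## and ##f(0) = 1## below are purely illustrative, not from the original post.

```python
import math

# Illustrative choices (not from the post): g(f) = sin(f), X = f(0) = 1.
X = 1.0
g = math.sin
a = math.sin(X)          # g(X)
b = math.cos(X)          # g'(X), the slope of the first-order Taylor polynomial

def f_approx(t):
    # Closed-form solution of the linearized ODE f' = a + b*(f - X), f(0) = X:
    # substituting u = f - X gives u' = a + b*u, so u(t) = (a/b)*(e^{bt} - 1).
    return X + (a / b) * (math.exp(b * t) - 1.0)

def f_numeric(t, n=10000):
    # Accurate reference: classic RK4 on the exact ODE f' = sin(f)
    h = t / n
    f = X
    for _ in range(n):
        k1 = g(f)
        k2 = g(f + 0.5 * h * k1)
        k3 = g(f + 0.5 * h * k2)
        k4 = g(f + h * k3)
        f += (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return f

for t in (0.1, 0.5, 2.0):
    print(t, f_approx(t), f_numeric(t), abs(f_approx(t) - f_numeric(t)))
```

Running this shows exactly the trade-off described above: the linearization is excellent near ##t=0## and its error grows as ##t## increases.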

The problem is autonomous and one-dimensional. Given $X = f(0)$, $f(t)$ must stay between $X$ and the next zero of $g$: the zero to the right if $g(X) > 0$, and to the left if $g(X) < 0$. You can then interpolate $g$ on that interval. If there is no next zero in that direction, you will need to do something else.
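A sketch of this approach, under the illustrative assumptions $g(f) = \sin f$ and $X = 1$ (so $g(X) > 0$ and the next zero to the right is $\pi$): locate the next zero numerically, then interpolate $g$ on $[X, \text{zero}]$ with a low-degree polynomial.

```python
import math

# Illustrative assumptions: g(f) = sin(f), X = f(0) = 1, so g(X) > 0
# and the next zero of g to the right is pi.
g = math.sin
X = 1.0

def next_zero(g, x, step=0.01, tol=1e-12):
    # March right until g changes sign, then bisect; assumes g(x) > 0.
    a = x
    while g(a + step) > 0:
        a += step
    b = a + step
    while b - a > tol:
        m = 0.5 * (a + b)
        if g(m) > 0:
            a = m
        else:
            b = m
    return 0.5 * (a + b)

z = next_zero(g, X)  # should land on pi

def lagrange(nodes, vals, x):
    # Lagrange interpolating polynomial through (nodes[i], vals[i])
    total = 0.0
    for i, xi in enumerate(nodes):
        term = vals[i]
        for j, xj in enumerate(nodes):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Quadratic interpolant of g on [X, z]; a polynomial g makes the
# separable integral of df/g(f) tractable.
nodes = [X, 0.5 * (X + z), z]
vals = [g(x) for x in nodes]
err = max(abs(lagrange(nodes, vals, X + (z - X) * k / 200) - g(X + (z - X) * k / 200))
          for k in range(201))
print(z, err)
```

Because $f(t)$ never leaves $[X, z]$, the interpolation error on that interval bounds how badly the surrogate right-hand side can misrepresent $g$ along the whole trajectory.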

Whatever approximate method you use will have the property that the error gets worse as $t \to \infty$; the question is how fast it grows.

There are also predictor-corrector methods which use higher order expansions to do fancier iteration processes. I always look them up in Abramowitz and Stegun and check that they converge with decreasing step size. After that I assume all is copacetic. Life is short.
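A minimal sketch of a predictor-corrector scheme and of the convergence check described above (halve the step, verify the answer settles down). Heun's method is used here as the simplest predictor-corrector pair; $g(f) = \sin f$ and $f(0) = 1$ are illustrative choices.

```python
import math

# Illustrative right-hand side for f' = g(f)
g = math.sin

def heun(f0, t_end, n):
    # Heun's method: explicit Euler predictor + trapezoidal corrector
    h = t_end / n
    f = f0
    for _ in range(n):
        pred = f + h * g(f)                  # predictor step
        f += 0.5 * h * (g(f) + g(pred))      # corrector step
    return f

# Convergence check: the answer should settle down as the step halves.
coarse = heun(1.0, 1.0, 100)
fine = heun(1.0, 1.0, 200)
finer = heun(1.0, 1.0, 400)
print(coarse, fine, finer, abs(fine - coarse), abs(finer - fine))
```

Since Heun's method is second order, each halving of the step should shrink the successive differences by roughly a factor of four; if they don't shrink, something is wrong.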

There is a method to produce a solution that may be useful for some purposes. It's called "solution generation." The basic idea is, you choose a form of the function g() such that you can solve the system exactly. You choose a form that is as reasonably close to the actual form of g() as you can and still solve the system.

For example, if you had g(f(t)) = f(t), that is, g() is just the identity, then you can clearly solve the system. But maybe that is too drastically different from the g() in your case.

So you start looking at forms of g() that are closer to the real form but still give you solutions. For example, you could look for forms that let you use a Laplace transform to solve the resulting differential equation, or any other form for which the resulting differential equation is solvable.
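A sketch of this "solvable surrogate" idea, under illustrative assumptions: take g(f) = sin(f) with f(0) = 1. On [0, π], sin(f) is roughly the parabola (4/π²) f (π − f), which matches sin at its zeros and at its peak sin(π/2) = 1. The surrogate ODE f' = (4/π²) f (π − f) is logistic and has a well-known closed-form solution.

```python
import math

# Illustrative assumptions: the "real" g is sin(f), the solvable
# surrogate is the logistic form (4/pi^2) f (pi - f), and f(0) = 1.
f0 = 1.0
K = math.pi          # carrying capacity = the surrogate's next zero
r = 4.0 / math.pi    # logistic rate, from (4/pi^2) * K

def f_surrogate(t):
    # Closed-form logistic solution of f' = (4/pi^2) f (pi - f), f(0) = f0
    return K / (1.0 + ((K - f0) / f0) * math.exp(-r * t))

def f_exact(t):
    # sin(f) happens to be separable, so here we can check against the
    # exact solution tan(f/2) = tan(f0/2) * e^t
    return 2.0 * math.atan(math.tan(f0 / 2.0) * math.exp(t))

for t in (0.5, 1.0, 2.0):
    print(t, f_surrogate(t), f_exact(t))
```

In this toy case the surrogate tracks the true solution to within a few percent over the whole interval, because both right-hand sides share the same zeros and peak; in general the quality of the surrogate fit controls the quality of the closed-form answer.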

If you can find a situation that is not drastically far from the real g(), then there are a large variety of perturbative methods to solve the resultant system. Which one you choose will depend on the details of the system and the form of solution that will work for your purposes. And you will always have the concern of whether such a scheme converges to the correct answer.