Variational solutions to non-linear ODE

member 428835
Hi PF!

I have a system of nonlinear ODEs in which the only constant ##C## takes on several values depending on the geometry; once a geometry is specified, ##C## is uniquely determined. Say I want to guess a quadratic trial solution to the ODE, call it ##\phi(x)##. I then want to adjust the coefficients of ##\phi## (since it's a quadratic) so that they minimize the error relative to the ODE's actual solution over some range of ##C##.

I am reading an article on how to do this, and the author seems to define the residual as the ODE itself, written with everything moved to one side so that the exact solution makes it zero. The author then squares the residual and integrates it with respect to ##x## over a certain interval (0 to 1, though I'm not too concerned with the specific interval).

The author calls this integral a functional, and then starts minimizing the functional.

My question, and I can be more specific if it helps: is anyone familiar with this technique? It was originally used by Becker (1964).

Thanks so much!

Josh
 
You can certainly attempt to minimize the error for arbitrary ##C##.

Suppose your ODE is ##y'' = f(y, y')## for ##y : [0,1] \to \mathbb{R}##, subject to given boundary conditions at 0 and 1.

You can then look for an approximation ##\phi : [0,1] \to \mathbb{R}## which doesn't necessarily satisfy ##\phi'' - f(\phi,\phi') = 0## everywhere, but does minimize $$\int_0^1 \left(\phi'' - f(\phi,\phi')\right)^2\,dx.$$

If you are looking for a quadratic approximation then you have ##\phi(x) = ax^2 + bx + c##. Substituting that into the above gives $$\int_0^1 \left(2a - f(ax^2 + bx + c,\, 2ax + b)\right)^2\,dx = G(a,b,c)$$ for some function ##G##. If you require ##\phi## to satisfy the boundary conditions, you can eliminate two of the three unknown coefficients and are left with minimizing a function of one variable. Otherwise you have the problem of minimizing a function of three variables.
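Here is a minimal sketch of that procedure in Python with SciPy, purely for illustration; the right-hand side ##f## and the starting guess below are placeholders, not anything from the thread:

```python
from scipy.integrate import quad
from scipy.optimize import minimize

def f(y, yp):
    """Placeholder right-hand side of y'' = f(y, y'); replace with the actual ODE."""
    return -y * yp

def G(params):
    """Integral over [0, 1] of the squared residual of the quadratic trial function."""
    a, b, c = params
    def residual_sq(x):
        phi = a * x**2 + b * x + c      # trial function
        phip = 2 * a * x + b            # its first derivative
        phipp = 2 * a                   # its second derivative
        return (phipp - f(phi, phip))**2
    value, _ = quad(residual_sq, 0.0, 1.0)
    return value

# Minimize over all three coefficients (no boundary conditions imposed here).
result = minimize(G, x0=[0.0, 0.0, 1.0])
print(result.x)  # best-fit a, b, c
```

Imposing the boundary conditions first, so that two of the coefficients are written in terms of the third, reduces this to the one-variable minimization described above.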
 
Yes! This is exactly what I was doing. But what is the theory behind minimizing ##\int (\phi''-f)^2 \, dx##? Am I missing something?
 
Thanks for your reply too!
 
joshmccraney said:
Yes! This is exactly what I was doing. But what is the theory behind minimizing ##\int (\phi''-f)^2 \, dx##? Am I missing something?

It is possible to define a norm on a suitable space of real-valued functions on ##[0,1]## by $$\|g\| = \left(\int_0^1 g(x)^2\,dx\right)^{1/2}.$$ By restricting ##\phi## to a particular subset of functions¹ and minimizing ##\|\phi'' - f(\phi,\phi')\|## (or, equivalently, minimizing ##\|\phi'' - f(\phi,\phi')\|^2##) we obtain the "closest" approximation to the actual solution, in the sense that it minimizes the distance between ##\phi''## and ##f(\phi,\phi')##. Note that a solution ##\phi_0## of the ODE will always minimize ##\|\phi'' - f(\phi,\phi')\|##, since by definition ##\|\phi_0'' - f(\phi_0,\phi_0')\| = \|0\| = 0##.

¹ Usually one restricts ##\phi## to a finite-dimensional subspace, but it may be that the set of functions satisfying a particular boundary condition is not a subspace.
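A small added sketch in the same notation (my paraphrase, not from the article): if ##\phi## depends on parameters ##a_1, \dots, a_n## and ##R = \phi'' - f(\phi,\phi')## denotes the residual, then at a minimizer of ##\|R\|^2## the derivative with respect to each parameter vanishes,
$$\frac{\partial}{\partial a_i}\int_0^1 R(x)^2\,dx = 2\int_0^1 R(x)\,\frac{\partial R}{\partial a_i}(x)\,dx = 0, \qquad i = 1, \dots, n,$$
i.e. the residual of the approximation ends up ##L^2##-orthogonal to each sensitivity function ##\partial R/\partial a_i##. For the quadratic trial function these are at most three equations in ##a##, ##b##, ##c##, fewer if boundary conditions are imposed first.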
 
pasmith said:
It is possible to define a norm on a suitable space of real-valued functions on ##[0,1]## by $$\|g\| = \left(\int_0^1 g(x)^2\,dx\right)^{1/2}.$$ By restricting ##\phi## to a particular subset of functions¹ and minimizing ##\|\phi'' - f(\phi,\phi')\|## (or, equivalently, minimizing ##\|\phi'' - f(\phi,\phi')\|^2##) we obtain the "closest" approximation to the actual solution, in the sense that it minimizes the distance between ##\phi''## and ##f(\phi,\phi')##. Note that a solution ##\phi_0## of the ODE will always minimize ##\|\phi'' - f(\phi,\phi')\|##, since by definition ##\|\phi_0'' - f(\phi_0,\phi_0')\| = \|0\| = 0##.

¹ Usually one restricts ##\phi## to a finite-dimensional subspace, but it may be that the set of functions satisfying a particular boundary condition is not a subspace.
This looks like a least-squares problem, where the closest element is given by the orthogonal projection, using the fact that ##L^2## is a Hilbert space.
 
Thank you both!
 
pasmith did 99% of the work and I share the credit. I like it ;).
 
hahahahahaha!
 
Sorry to open this thread back up, but there is something I was hoping you could help me with.

I have the following ODE and boundary conditions: ##y y'' + 2 y'^2 + x y' = 0## with ##y(1) = 0.00000001## and ##y'(1) = -1/2##, where ##y## is a function of ##x##. I'm trying to find a good quadratic fit, so I attempted the method described in post 2 above. Doing this I get a reasonable solution, but when I check it against the numerical solution Mathematica gives me, I find that if I change the value of the last coefficient by hand I can get an answer that is much closer to the numerical solution.

Any ideas why this is?

I can describe more if I've left anything out.
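For concreteness, here is a rough sketch in Python with SciPy (not the thread's actual computation, which was done in Mathematica) of how the post-2 procedure could be set up for this ODE, with the two conditions at ##x = 1## used to eliminate ##b## and ##c##; the integration interval ##[0, 1]## and the search bounds are assumptions:

```python
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

y1, yp1 = 0.00000001, -0.5  # boundary data y(1) and y'(1)

def coeffs(a):
    """Use phi(1) = y1 and phi'(1) = yp1 to fix b and c in phi = a x^2 + b x + c."""
    b = yp1 - 2 * a
    c = y1 - a - b
    return a, b, c

def G(a):
    a, b, c = coeffs(a)
    def residual_sq(x):
        phi = a * x**2 + b * x + c
        phip = 2 * a * x + b
        phipp = 2 * a
        # Residual of y y'' + 2 y'^2 + x y' = 0 evaluated on the trial function.
        return (phi * phipp + 2 * phip**2 + x * phip)**2
    value, _ = quad(residual_sq, 0.0, 1.0)
    return value

best = minimize_scalar(G, bounds=(-5.0, 5.0), method='bounded')
print(coeffs(best.x))  # a, b, c of the residual-minimizing quadratic
```

Comparing the resulting ##\phi## against the numerical solution over the same interval then shows how much the single remaining degree of freedom can accomplish.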
 