Variational solutions to non-linear ODE

  • Context: Graduate
  • Thread starter: member 428835
  • Tags: Non-linear ODE

Discussion Overview

The discussion revolves around the application of variational methods to nonlinear ordinary differential equations (ODEs), specifically focusing on minimizing the error of a quadratic approximation to the ODE's solution. Participants explore the theoretical underpinnings of this approach and its implications for finding approximate solutions under varying conditions.

Discussion Character

  • Exploratory
  • Technical explanation
  • Mathematical reasoning
  • Debate/contested

Main Points Raised

  • One participant describes a nonlinear ODE in which a constant ##C## is uniquely determined by the geometry, and asks how to choose the coefficients of a quadratic trial solution ##\phi(x)## so that it minimizes the error over a range of ##C## values.
  • Another participant suggests minimizing the integral of the squared residual, proposing that the approximation does not need to satisfy the ODE everywhere but should minimize the error over a specified interval.
  • There is a question regarding the theoretical justification for minimizing the integral of the squared residual, indicating a desire for deeper understanding of the method's foundation.
  • Participants discuss defining a norm on a space of functions and how minimizing this norm leads to the closest approximation to the actual solution, with a note that a true solution of the ODE minimizes this norm to zero.
  • One participant likens the approach to a least squares problem, referencing the concept of orthogonal projection in Hilbert spaces.
  • A later post introduces a specific ODE and boundary conditions, expressing concern that a quadratic fit yields results that differ from a numerical solution, prompting inquiries about potential reasons for this discrepancy.

Areas of Agreement / Disagreement

Participants generally agree on the approach of minimizing the error in the context of variational methods, but there remains uncertainty regarding the theoretical justification and the effectiveness of the quadratic approximation in specific cases. The discussion about the specific ODE and its approximation reveals differing experiences and outcomes, indicating unresolved questions about the method's reliability.

Contextual Notes

The discussion highlights limitations related to the assumptions made about the function space and boundary conditions, as well as the potential for varying results based on the choice of coefficients in the quadratic approximation.

member 428835
Hi PF!

I have a system of nonlinear ODEs, wherein the only constant ##C## in the ODE takes on several values depending on the geometry; thus once a geometry is defined for the ODE, ##C## is uniquely determined. Let's say I want to guess a quadratic solution to the ODE, call it ##\phi(x)##. However, I want to adjust the coefficients of ##\phi## (since it's a quadratic) so that ##\phi## minimizes the error relative to the ODE's actual solution over some range of ##C##.

I am reading an article on how to do this, and the author states that the residual is the left-hand side of the ODE once the equation is written in the form "expression = 0". The author then squares the residual and integrates it with respect to ##x## over a certain interval (0 to 1, although I'm not too concerned with the specific interval).

The author calls this integral a functional, and then starts minimizing the functional.

My question (and I can be more specific if it helps): is anyone familiar with this technique? The technique was originally used by Becker (1964).

Thanks so much!

Josh
 
You can certainly attempt to minimize the error for arbitrary ##C##.

Suppose your ODE is ##y'' = f(y,y')## for ##y : [0,1] \to \mathbb{R}##, subject to given boundary conditions at 0 and 1.

You can then look for an approximation ##\phi : [0,1] \to \mathbb{R}## which doesn't necessarily satisfy ##\phi'' - f(\phi,\phi') = 0## everywhere, but does minimize $$\int_0^1 (\phi'' - f(\phi,\phi'))^2\,dx.$$

If you are looking for a quadratic approximation then you have ##\phi(x) = ax^2 + bx + c##. Substituting that into the above gives you $$\int_0^1 (2a - f(ax^2 + bx + c,\,2ax + b))^2\,dx = G(a,b,c)$$ for some ##G##. If you require that ##\phi## satisfy the boundary conditions, you can eliminate two of the three unknown coefficients to be left with the problem of minimizing a function of one variable. Otherwise you have the problem of minimizing a function of three variables.
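The three-variable case can be sketched numerically. A minimal sketch in Python, assuming an illustrative right-hand side ##f(y,y') = 1 - y - y'^2## (a made-up example, not the ODE from this thread) and using scipy for the quadrature and minimization:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize

# Illustrative right-hand side for y'' = f(y, y'); this particular f
# is an arbitrary choice, not the ODE discussed in the thread.
def f(y, yp):
    return 1.0 - y - yp**2

# G(a, b, c) = integral_0^1 (phi'' - f(phi, phi'))^2 dx for the
# quadratic ansatz phi(x) = a x^2 + b x + c, so phi'' = 2a, phi' = 2a x + b.
def G(params):
    a, b, c = params
    integrand = lambda x: (2*a - f(a*x**2 + b*x + c, 2*a*x + b))**2
    val, _ = quad(integrand, 0.0, 1.0)
    return val

# Minimize over all three coefficients (no boundary conditions imposed).
res = minimize(G, x0=np.zeros(3))
a, b, c = res.x
```

With boundary conditions, two of the coefficients would be eliminated first, leaving a one-variable minimization as described above.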
 
Yes! This is exactly what I was doing. But what is the theory behind minimizing ##\int (\phi''-f)^2 \, dx##? Am I missing something?
 
thanks for your reply too!
 
joshmccraney said:
Yes! This is exactly what I was doing. But what is the theory behind minimizing ##\int (\phi''-f)^2 \, dx##? Am I missing something?

It is possible to define a norm on a suitable space of real-valued functions on ##[0,1]## by ##\|g\| = \left(\int_0^1 g(x)^2\,dx\right)^{1/2}##. By restricting ##\phi## to a particular subset of functions¹ and minimizing ##\|\phi'' - f(\phi,\phi')\|## (or equivalently minimizing ##\|\phi'' - f(\phi,\phi')\|^2##) we obtain an approximation to the actual solution which is the "closest" approximation, in the sense that it minimizes the distance between ##\phi''## and ##f(\phi,\phi')##. Note that a solution ##\phi_0## of the ODE will always minimize ##\|\phi'' - f(\phi,\phi')\|##, since by definition ##\|\phi_0'' - f(\phi_0,\phi_0')\| = \|0\| = 0##.

¹ Usually one restricts ##\phi## to a finite-dimensional subspace, but it may be that the set of functions satisfying a particular boundary condition is not a subspace.
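The norm in question is easy to evaluate numerically; a small Python check of the definition (the test function ##g(x) = x## is just an illustration):

```python
import numpy as np
from scipy.integrate import quad

# ||g|| = ( integral_0^1 g(x)^2 dx )^(1/2), the L^2[0,1] norm.
def l2_norm(g):
    val, _ = quad(lambda x: g(x)**2, 0.0, 1.0)
    return np.sqrt(val)

# For g(x) = x: integral of x^2 over [0,1] is 1/3, so ||g|| = 1/sqrt(3).
print(l2_norm(lambda x: x))
```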
 
pasmith said:
It is possible to define a norm on a suitable space of real-valued functions on ##[0,1]## by ##\|g\| = \left(\int_0^1 g(x)^2\,dx\right)^{1/2}##. By restricting ##\phi## to a particular subset of functions¹ and minimizing ##\|\phi'' - f(\phi,\phi')\|## (or equivalently minimizing ##\|\phi'' - f(\phi,\phi')\|^2##) we obtain an approximation to the actual solution which is the "closest" approximation, in the sense that it minimizes the distance between ##\phi''## and ##f(\phi,\phi')##. Note that a solution ##\phi_0## of the ODE will always minimize ##\|\phi'' - f(\phi,\phi')\|##, since by definition ##\|\phi_0'' - f(\phi_0,\phi_0')\| = \|0\| = 0##.

¹ Usually one restricts ##\phi## to a finite-dimensional subspace, but it may be that the set of functions satisfying a particular boundary condition is not a subspace.
This looks like a least squares problem, where the closest element is given by the orthogonal projection, using the fact that ##L^2## is a Hilbert space.
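That identification can be made concrete: the orthogonal projection onto a finite-dimensional subspace of ##L^2[0,1]## is obtained from the normal equations with the Gram matrix of the basis. A sketch in Python (the target function ##\sin(\pi x)## and the monomial basis are arbitrary illustrative choices):

```python
import numpy as np
from scipy.integrate import quad

# Orthogonal projection in L^2[0,1]: project g onto span{1, x, x^2}.
basis = [lambda x: 1.0, lambda x: x, lambda x: x**2]
g = lambda x: np.sin(np.pi * x)

def inner(u, v):
    # L^2 inner product <u, v> = integral_0^1 u(x) v(x) dx
    val, _ = quad(lambda x: u(x) * v(x), 0.0, 1.0)
    return val

# Normal equations: Gram matrix M_ij = <b_i, b_j>, right-hand side <b_i, g>.
M = np.array([[inner(bi, bj) for bj in basis] for bi in basis])
rhs = np.array([inner(bi, g) for bi in basis])
coef = np.linalg.solve(M, rhs)

def proj(x):
    return sum(c * b(x) for c, b in zip(coef, basis))

# The residual g - proj is orthogonal to every basis function,
# which is exactly the orthogonal-projection characterization.
errs = [inner(lambda x: g(x) - proj(x), bi) for bi in basis]
```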
 
Thank you both!
 
pasmith did 99% of the work and I share the credit. I like it ;).
 
hahahahahaha!
 
Sorry to open this thread back up, but there is something I was hoping you could help me with.

I have the following ODE and boundary conditions: ##y y'' + 2 y'^2 + x y'= 0## with ##y(1) = 0.00000001## and ##y'(1) = -1/2##, where ##y## is a function of ##x##. I'm trying to find a good quadratic fit, so I attempted the method described above in post 2. When doing this I get a good solution, but when I checked it against the numerical solution Mathematica gave me, I found that by changing the value of the last coefficient I was able to get an answer much closer to the numerical solution.

Any ideas why this is?

I can describe more if I've left anything out.
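For reference, the one-parameter minimization from post 2 can be sketched for this ODE: imposing ##\phi(1) = 10^{-8}## and ##\phi'(1) = -1/2## on ##\phi(x) = ax^2 + bx + c## eliminates ##b## and ##c##, leaving a single coefficient ##a##. The choice of ##[0,1]## as the integration interval and the use of scipy are assumptions here, not fixed by the thread:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

Y1, YP1 = 1e-8, -0.5  # boundary data y(1) and y'(1) from the post

def G(a):
    # Enforce phi(1) = Y1 and phi'(1) = YP1 on phi(x) = a x^2 + b x + c:
    b = YP1 - 2*a        # from phi'(1) = 2a + b
    c = Y1 - a - b       # from phi(1) = a + b + c
    def residual_sq(x):
        phi = a*x**2 + b*x + c
        phip = 2*a*x + b
        # residual of y y'' + 2 y'^2 + x y' = 0 with y = phi, y'' = 2a
        return (phi*2*a + 2*phip**2 + x*phip)**2
    val, _ = quad(residual_sq, 0.0, 1.0)
    return val

res = minimize_scalar(G)  # one-dimensional minimization over a
a_opt = res.x
```

Comparing the resulting quadratic against a numerical solution would then show how much of the discrepancy comes from the interval choice versus the quadratic ansatz itself.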
 
