Variational solutions to non-linear ODE

In summary, the conversation discusses a technique for approximately solving a nonlinear ODE with a quadratic trial function that satisfies the given boundary conditions. The method minimizes the integral of the squared residual of the ODE over a specified interval. This can be viewed as restricting the trial function to a finite-dimensional family and minimizing a suitable norm of the residual. However, it is noted that this may not always give the closest approximation to the true solution, and further adjustments may be necessary.
  • #1
member 428835
Hi PF!

I have a system of nonlinear ODEs in which the only constant ##C## takes on several values depending on the geometry; thus once a geometry is defined for the ODE, ##C## is uniquely determined. Let's say I want to guess a quadratic solution to the ODE, call it ##\phi(x)##. However, I want to adjust the coefficients of ##\phi## (since it's a quadratic) so that they minimize the error relative to the ODE's actual solution over some range of ##C##.

I am reading an article on how to do this, and the author seems to state that the residual is just the ODE itself (once written with one side equal to zero). The author then squares the residual and integrates it with respect to ##x## over a certain interval (0 to 1, although I'm not too concerned with this).

The author calls this integral a functional, and then starts minimizing the functional.

My question (I can be more specific if it helps): is anyone familiar with this technique? Becker (1964) originally used it.

Thanks so much!

Josh
 
  • #2
You can certainly attempt to minimize the error for arbitrary [itex]C[/itex].

Suppose your ODE is [tex]y'' = f(y,y')[/tex] for [itex]y : [0,1] \to \mathbb{R}[/itex] subject to given boundary conditions at 0 and 1.

You can then look for an approximation [itex]\phi : [0,1] \to \mathbb{R}[/itex] which doesn't necessarily satisfy [itex]\phi'' - f(\phi,\phi') = 0[/itex] everywhere, but does minimize [tex]
\int_0^1 (\phi'' - f(\phi,\phi'))^2\,dx.[/tex]

If you are looking for a quadratic approximation then you have [itex]\phi(x) = ax^2 + bx + c[/itex]. Substituting that into the above gives you [tex]
\int_0^1 (2a - f(ax^2 + bx + c,2ax + b))^2\,dx = G(a,b,c)[/tex] for some [itex]G[/itex]. If you require that [itex]\phi[/itex] satisfy the boundary conditions, you can eliminate two of the three unknown coefficients to be left with the problem of minimizing a function of one variable. Otherwise you have the problem of minimizing a function of three variables.
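
Here is a minimal numerical sketch of that recipe (added for illustration, not part of the original posts): it assumes a hypothetical nonlinearity [itex]f(y,y') = y\,y'[/itex] and made-up boundary conditions [itex]\phi(0) = 1[/itex], [itex]\phi(1) = 0[/itex], uses the boundary conditions to eliminate [itex]b[/itex] and [itex]c[/itex], and minimizes the remaining one-variable function [itex]G(a)[/itex] numerically.

[code]
# Python sketch of least-squares fitting of a quadratic trial function to an
# ODE y'' = f(y, y').  The nonlinearity f and the boundary conditions below
# are hypothetical placeholders; substitute the actual problem data.
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

def f(y, yp):
    # hypothetical right-hand side; replace with the ODE's actual f
    return y * yp

def G(a):
    # Enforce phi(0) = 1 and phi(1) = 0, which fixes c and b in terms of a:
    #   phi(0) = c = 1,   phi(1) = a + b + c = 0  =>  b = -1 - a
    c = 1.0
    b = -1.0 - a
    phi  = lambda x: a*x**2 + b*x + c
    dphi = lambda x: 2*a*x + b
    integrand = lambda x: (2*a - f(phi(x), dphi(x)))**2
    value, _ = quad(integrand, 0.0, 1.0)
    return value

# One-dimensional minimization of G(a) after the boundary conditions
# have eliminated b and c.
result = minimize_scalar(G, bounds=(-10, 10), method="bounded")
print("best a:", result.x, "integrated squared residual:", result.fun)
[/code]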
 
  • #3
Yes! This is exactly what I was doing. But what is the theory behind minimizing ##\int (\phi''-f)^2 \, dx##? Am I missing something?
 
  • #4
Thanks for your reply too!
 
  • #5
joshmccraney said:
Yes! This is exactly what I was doing. But what is the theory behind minimizing ##\int (\phi''-f)^2 \, dx##? Am I missing something?

It is possible to define a norm on a suitable space of real-valued functions on [itex][0,1][/itex] whereby [tex]\|g\| = \left(\int_0^1 g(x)^2\,dx\right)^{1/2}.[/tex] By restricting [itex]\phi[/itex] to a particular subset of functions1 and minimizing [itex]\|\phi'' - f(\phi,\phi')\|[/itex] (or equivalently minimizing [itex]\|\phi'' - f(\phi,\phi')\|^2[/itex]) we obtain an approximation to the actual solution which is the "closest" approximation, in the sense that it minimizes the distance between [itex]\phi''[/itex] and [itex]f(\phi,\phi')[/itex]. Note that a solution [itex]\phi_0[/itex] of the ODE will always minimize [itex]\|\phi'' - f(\phi,\phi')\|[/itex] since then by definition [itex]\|\phi_0'' - f(\phi_0,\phi_0')\| = \|0\| = 0[/itex].

1 Usually one restricts [itex]\phi[/itex] to a finite-dimensional subspace, but it may be that the set of functions satisfying a particular boundary condition is not a subspace.
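
As a quick numerical illustration of that last remark (a toy example chosen here, not taken from the thread): for the ODE [itex]y y'' + (y')^2 = 0[/itex], i.e. [itex]f(y,y') = -(y')^2/y[/itex], the exact solution [itex]y(x) = \sqrt{x+1}[/itex] has residual norm essentially zero, while a quadratic guess has a small but nonzero norm.

[code]
# Numerical check: an exact solution of y'' = f(y, y') has residual norm zero,
# while an approximation has a small but nonzero norm.
# Toy problem (chosen for illustration):  y y'' + (y')^2 = 0,
# i.e. f(y, yp) = -yp**2 / y, with exact solution y(x) = sqrt(x + 1).
import numpy as np
from scipy.integrate import quad

def f(y, yp):
    return -yp**2 / y

def residual_norm(phi, dphi, d2phi):
    integrand = lambda x: (d2phi(x) - f(phi(x), dphi(x)))**2
    value, _ = quad(integrand, 0.0, 1.0)
    return np.sqrt(value)

# Exact solution: residual norm ~ 0 (up to quadrature error).
exact = residual_norm(lambda x: np.sqrt(x + 1),
                      lambda x: 0.5 / np.sqrt(x + 1),
                      lambda x: -0.25 * (x + 1)**-1.5)

# Quadratic guess phi(x) = 1 + x/2 - x^2/8 (its second-order Taylor
# polynomial): small but nonzero residual norm.
approx = residual_norm(lambda x: 1 + 0.5*x - 0.125*x**2,
                       lambda x: 0.5 - 0.25*x,
                       lambda x: -0.25)

print("exact solution residual norm :", exact)
print("quadratic guess residual norm:", approx)
[/code]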
 
  • #6
pasmith said:
It is possible to define a norm on a suitable space of real-valued functions on [itex][0,1][/itex] whereby [tex]\|g\| = \left(\int_0^1 g(x)^2\,dx\right)^{1/2}.[/tex] By restricting [itex]\phi[/itex] to a particular subset of functions1 and minimizing [itex]\|\phi'' - f(\phi,\phi')\|[/itex] (or equivalently minimizing [itex]\|\phi'' - f(\phi,\phi')\|^2[/itex]) we obtain an approximation to the actual solution which is the "closest" approximation, in the sense that it minimizes the distance between [itex]\phi''[/itex] and [itex]f(\phi,\phi')[/itex]. Note that a solution [itex]\phi_0[/itex] of the ODE will always minimize [itex]\|\phi'' - f(\phi,\phi')\|[/itex] since then by definition [itex]\|\phi_0'' - f(\phi_0,\phi_0')\| = \|0\| = 0[/itex].

1 Usually one restricts [itex]\phi[/itex] to a finite-dimensional subspace, but it may be that the set of functions satisfying a particular boundary condition is not a subspace.
This looks like a least-squares problem, where the closest element is given by the orthogonal projection, using the fact that [itex] L^2 [/itex] is a Hilbert space.
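
To spell that out in the case where the projection picture is exact, namely a linear operator [itex]L[/itex] (a standard least-squares fact, added here for reference; for the nonlinear residual above it holds only after linearizing): write [itex]\phi = \sum_i a_i \psi_i[/itex] for fixed basis functions [itex]\psi_i[/itex]. Minimizing [itex]\|L\phi - g\|^2[/itex] over the coefficients gives the normal equations [tex]\frac{\partial}{\partial a_j}\Big\|\sum_i a_i L\psi_i - g\Big\|^2 = 0 \quad\Longleftrightarrow\quad \Big\langle \sum_i a_i L\psi_i - g,\; L\psi_j\Big\rangle = 0 \quad \text{for each } j,[/tex] so [itex]L\phi[/itex] is the orthogonal projection of [itex]g[/itex] onto [itex]\operatorname{span}\{L\psi_i\}[/itex] in [itex]L^2[/itex].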
 
  • #7
Thank you both!
 
  • #8
pasmith did 99% of the work and I share the credit. I like it ;).
 
  • #9
hahahahahaha!
 
  • #10
Sorry to open this thread back up, but there is something I was hoping you could help me with.

I have the following ODE and boundary conditions: ##y y'' + 2 y'^2 + x y' = 0## with ##y(1) = 0.00000001## and ##y'(1) = -1/2##, where ##y## is a function of ##x##. I'm trying to find a good quadratic fit, so I attempted the method described in post #2. When doing this, I get a good solution, but when I check this solution against the numerical solution Mathematica gave me, I found that if I changed the value of the last coefficient I was able to get an answer that is much closer to the numerical solution.

Any ideas why this is?

I can describe more if I've left anything out.
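
For concreteness, here is one way to set up the post #2 recipe for this particular ODE (a sketch; the integration interval [itex][0,1][/itex] is an assumption carried over from post #1). Note that the functional measures the equation residual, not the distance to the true solution, which is one reason a hand-adjusted coefficient can land closer to Mathematica's numerical solution.

[code]
# Python sketch: least-squares quadratic fit for  y y'' + 2 y'^2 + x y' = 0
# with y(1) = 1e-8 and y'(1) = -1/2, integrating the squared residual
# over [0, 1] (interval assumed, as in post #1).
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

Y1, DY1 = 1e-8, -0.5   # data at x = 1

def G(a):
    # phi(x) = a x^2 + b x + c with phi(1) = Y1 and phi'(1) = DY1:
    #   phi'(1) = 2a + b = DY1     =>  b = DY1 - 2a
    #   phi(1)  = a + b + c = Y1   =>  c = Y1 - a - b
    b = DY1 - 2*a
    c = Y1 - a - b
    phi   = lambda x: a*x**2 + b*x + c
    dphi  = lambda x: 2*a*x + b
    d2phi = 2*a
    R = lambda x: phi(x)*d2phi + 2*dphi(x)**2 + x*dphi(x)   # ODE residual
    value, _ = quad(lambda x: R(x)**2, 0.0, 1.0)
    return value

result = minimize_scalar(G, bounds=(-5, 5), method="bounded")
a = result.x
b = DY1 - 2*a
c = Y1 - a - b
print(f"phi(x) = {a:.6f} x^2 + {b:.6f} x + {c:.6f}")
print("integrated squared residual:", result.fun)
[/code]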
 

1. What is a variational solution?

A variational solution is a type of solution to a non-linear ordinary differential equation (ODE) that is found using the calculus of variations. This method involves finding the path or function that minimizes a certain functional or expression.

2. How is a variational solution different from other solutions?

Unlike traditional methods of solving ODEs, such as finding exact or numerical solutions, a variational solution is found by minimizing an expression rather than explicitly solving the equation. This approach can often provide more insight into the behavior of the system.

3. What types of non-linear ODEs can be solved using variational methods?

Variational methods can be applied to a wide range of non-linear ODEs, including those that are separable, first-order, second-order, and higher-order. However, the complexity of the problem may affect the difficulty of obtaining a variational solution.

4. What are the advantages of using variational solutions?

Variational solutions can provide a more elegant and intuitive approach to solving non-linear ODEs. They can also provide a deeper understanding of the dynamics of the system and can be useful in finding approximate solutions when exact solutions are difficult to obtain.

5. Are there any limitations to using variational solutions?

Variational solutions may not always be possible or straightforward to obtain for all non-linear ODEs. In some cases, the functional that needs to be minimized may be difficult to formulate or solve. Additionally, the resulting solution may not be as accurate as numerical or exact solutions.
