- #1
member 428835
Hi PF!
I have a system of nonlinear ODEs, in which the only constant ##C## takes on several values depending on the geometry; thus once a geometry is defined for the ODE, ##C## is uniquely determined. Let's say I want to guess a quadratic trial solution to the ODE, call it ##\phi(x)##. I then want to adjust the coefficients of ##\phi## (since it's a quadratic) so that they minimize the error relative to the ODE's actual solution over some range of ##C##.
I am reading an article on how to do this, and the author seems to define the residual as whatever remains when the trial solution is substituted into the ODE (with everything moved to one side, set equal to zero). The author then squares the residual and integrates it with respect to ##x## over a certain interval (0 to 1, though I'm not too concerned with the interval itself).
The author calls this integral a functional, and then starts minimizing the functional.
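For what it's worth, here is a minimal sketch of what that procedure could look like numerically (this is the least-squares method of weighted residuals). The ODE below, ##y' + Cy^2 = 0## with ##y(0)=1##, is a made-up stand-in for illustration, not the system from the article; the trial function, grid size, and coefficient names are all assumptions.

```python
# Least-squares weighted-residual sketch on a hypothetical example ODE:
#   y'(x) + C*y(x)^2 = 0,  y(0) = 1,  exact solution y = 1/(1 + C*x).
# Trial function: phi(x) = 1 + a*x + b*x^2  (satisfies phi(0) = 1).
# Residual:   R(x; a, b) = phi'(x) + C*phi(x)^2.
# Functional: J(a, b) = integral of R^2 over [0, 1], minimized over (a, b).
import numpy as np
from scipy.optimize import minimize

C = 1.0
x = np.linspace(0.0, 1.0, 201)
dx = x[1] - x[0]

def functional(params):
    a, b = params
    phi = 1.0 + a * x + b * x**2   # trial solution
    dphi = a + 2.0 * b * x         # its derivative
    R = dphi + C * phi**2          # residual: how badly phi fails the ODE
    return np.sum(R**2) * dx       # J = integral of R^2 dx (Riemann sum)

result = minimize(functional, x0=[0.0, 0.0])
a, b = result.x
phi = 1.0 + a * x + b * x**2
exact = 1.0 / (1.0 + C * x)
print("coefficients a, b:", a, b)
print("max error vs exact solution:", np.max(np.abs(phi - exact)))
```

Minimizing the functional pushes the residual toward zero in an integrated (L2) sense, which is why the fitted quadratic ends up close to the true solution even though it can't satisfy the ODE pointwise.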
My question is (and I can be more specific if it helps): is anyone familiar with this technique? Becker (1964) apparently originated it.
Thanks so much!
Josh