ODE Fail: Numerical Solution Oscillations - Possible Solutions

In summary, the conversation revolves around solving an ODE using numerical methods and comparing the results to the exact solution. The numerical solutions show oscillatory behavior and deviate from the exact solution at larger times. The accuracy of a numerical method can be gauged from its documentation and by trying different methods. It is also mentioned that numerical algorithms can be tweaked and tuned to give acceptable answers.
  • #1
member 428835
Hi PF

The following ODE
$$\ddot x + x - x^3 = 0\\
x(0)=0,\,\,\,\dot x(0) = \frac {1}{ \sqrt 2}$$
is solved exactly by ##\tanh (t/\sqrt 2)##. However, when I try to solve this with MATLAB's ode45 (ode23t looks identical) or Mathematica's NDSolve, I get an oscillatory numerical solution (see attached).

Any idea what's going wrong?
 

Attachments

  • MMA.pdf
  • MLAB.pdf
  • #3
You should always post your program, because when things go south it's the program or the data.
 
  • Like
Likes member 428835
  • #4
jedishrfu said:
Programmer error?
See, that's what I thought. However, check out this Mathematica snippet, specifically the ODE being solved exactly by ##xE##, and how it's the same as the numerical ODE.
 

Attachments

  • Screen Shot 2019-09-10 at 4.36.54 PM.png
  • #5
jedishrfu said:
You should always post your program, because when things go south it's the program or the data.
Yeah, I realized this a few moments later, but you had already commented... sorry.
 
  • Like
Likes jedishrfu
  • #6
So maybe ode45 is not the right solver for this problem. It works initially but then deviates at t = 10.
 
  • #7
jedishrfu said:
So maybe ode45 is not the right solver for this problem. It works initially but then deviates at t = 10.
That's what I thought too, but some other solvers failed as well, so I switched to Mathematica, and it failed too! Any ideas about the oscillations? They look so unusual.
 
  • #8
It looks like a flattened sine curve. If you'll recall, x'' + x = 0 has sin/cos solutions. Since it's flattened, maybe it's something approaching a square wave.

Try some other solver, like ODE23 or ODE??, from that reference I gave you in another thread for MATLAB.
 
  • #9
joshmccraney said:
Hi PF

The following ODE
$$\ddot x + x - x^3 = 0\\
x(0)=0,\,\,\,\dot x(0) = \frac {1}{ \sqrt 2}$$
is solved exactly by ##\tanh (t/\sqrt 2)##. However, when I try to solve this with MATLAB's ode45 (ode23t looks identical) or Mathematica's NDSolve, I get an oscillatory numerical solution (see attached).

Any idea what's going wrong?
Numerical methods don't necessarily give good solutions over a large interval. Your numerical solution matches the exact solution pretty well when x is close to 0, the value used in the initial conditions. When x is relatively far away from 0, the numerical solution deviates considerably from the exact solution.
 
  • Like
Likes Cryo, Delta2 and jedishrfu
  • #10
joshmccraney said:
See, that's what I thought. However, check out this Mathematica snippet, specifically the ODE being solved exactly by ##xE##, and how it's the same as the numerical ODE.
I don't do Mathematica, but there are a couple of things in your screen shot that puzzle me.
First, when you define the exact solution, xE[t_], you have a different parameter in the definition:
$$xE[t_\ ] := \frac{Tanh[b + \frac t {\sqrt 2}]}{\sqrt{-\alpha \epsilon}}$$
Shouldn't you have t_ on the right side, not t?

Second, why are you using the constants ##\alpha, \epsilon,## and b? Is the idea here that you can vary these parameters and see how the solutions change?
 
  • #11
Mark44 said:
I don't do Mathematica, but there are a couple of things in your screen shot that puzzle me.
First, when you define the exact solution, xE[t_], you have a different parameter in the definition:
$$xE[t_\ ] := \frac{Tanh[b + \frac t {\sqrt 2}]}{\sqrt{-\alpha \epsilon}}$$
Shouldn't you have t_ on the right side, not t?
No, the syntax I've used is correct: the pattern t_ belongs on the LHS, and plain t refers to it on the RHS.
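A trivial definition works the same way:
Code:
f[t_] := t^2   (* t_ names the pattern on the LHS; plain t refers to it on the RHS *)
f[3]           (* returns 9 *)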

Mark44 said:
Second, why are you using the constants ##\alpha, \epsilon,## and b? Is the idea here that you can vary these parameters and see how the solutions change?
I'm using these because they were part of how I found the exact solution, something like similarity-transform variables. I then set them to particular values. I don't really care what they are, but it bothers me that I can't recover a solution that stays close to the analytic one at longer times. The oscillatory behavior is very strange.
 
  • #12
jedishrfu said:
Try some other solver, like ODE23 or ODE??, from that reference I gave you in another thread for MATLAB.
I actually tried this first: I tried every solver I could find in MATLAB. They either oscillated or diverged. None looked accurate for larger times. Isn't this bad? How would I know without an analytic solution whether the numerics are correct?
 
  • #13
joshmccraney said:
None looked accurate for larger times. Isn't this bad?
See post #9.
joshmccraney said:
How would I know without an analytic solution whether the numerics are correct?
From the documentation for the method you're using. It should give you some idea of the accuracy of the method in terms of the interval around the initial conditions.
 
  • #14
Mark44 said:
See post #9.
From the documentation for the method you're using. It should give you some idea of the accuracy of the method in terms of the interval around the initial conditions.
I'll read further into the documentation. And yeah, I saw post #9, but it seems like such a simple curve: I'm shocked it deviates so much from the analytic solution at late times.
 
  • #15
joshmccraney said:
but it seems like such a simple curve: I'm shocked it deviates so much from the analytic solution at late times.
ode45 and NDSolve don't know anything about the analytic solution. All they have to go on are the differential equation and the initial conditions, which they use to step from one computed solution point to the next. The farther away you get from, in this case, t = 0, the less accurate the computed solution is.
 
  • #16
Check what happens to the numerical solution when you start at ##t=10## with the exact value as a starting point.
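In Mathematica that restart could look like this (a minimal sketch; the starting values come from differentiating the exact solution, ##\dot x(t) = \operatorname{sech}^2(t/\sqrt 2)/\sqrt 2##):
Code:
sol10 = x[t] /. NDSolve[{x''[t] + x[t] - x[t]^3 == 0, x[10] == Tanh[10/Sqrt[2]], x'[10] == Sech[10/Sqrt[2]]^2/Sqrt[2]}, x[t], {t, 10, 30}][[1]];
Plot[{Tanh[t/Sqrt[2]], sol10}, {t, 10, 30}]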
 
  • Like
Likes Cryo and member 428835
  • #17
Also, please post your code as text, since that allows others to copy-and-paste it.
 
  • Like
Likes jedishrfu
  • #18
With respect to numerical algorithms, it's a bit of an art to know which one is best. This is common practice in all sorts of computational systems. Machine learning is another area where there are a variety of techniques to choose from, and a variety of parameters to tune within each technique. As an example, the activation function within a node of a neural net is something that can be tweaked, using sigmoids, ReLU, or some other novel function.

Basically the human is training the computational system to give the answers that seem acceptable and then using it on more complex systems to discover something new which may in fact be an artifact of the computational process.

The good news is that you get to play with the computer day in and day out while wrestling with your data and results trying to understand what's going on and maybe you'll discover something novel.

How most people do it today is to find a computational problem similar to the one they are working on, adopt the parameters used there, and change them as needed to get the results they're looking for. Essentially the real cost function is the human deciding on the cost function to use.
 
  • Like
Likes member 428835
  • #19
In Mathematica, if I compare the results of NDSolve with the default MachinePrecision versus WorkingPrecision -> 64, I find the same oscillatory result in the first case and the expected, more accurate behavior in the second.

Code:
sol1 = x[t] /. NDSolve[{x''[t] + x[t] - x[t]^3 == 0, x[0] == 0, x'[0] == 1/Sqrt[2]}, x[t], {t, 0, 30}][[1]];
sol2 = x[t] /. NDSolve[{x''[t] + x[t] - x[t]^3 == 0, x[0] == 0, x'[0] == 1/Sqrt[2]}, x[t], {t, 0, 30}, WorkingPrecision -> 64][[1]];
Plot[{sol1, sol2}, {t, 0, 30}]

If I drop the WorkingPrecision down to 32, the oscillation still appears, but later than when using MachinePrecision.
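That run is the same call with only the option value changed, e.g. (sketch):
Code:
sol3 = x[t] /. NDSolve[{x''[t] + x[t] - x[t]^3 == 0, x[0] == 0, x'[0] == 1/Sqrt[2]}, x[t], {t, 0, 30}, WorkingPrecision -> 32][[1]];
Plot[{sol3, Tanh[t/Sqrt[2]]}, {t, 0, 30}]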

I see that Advanpix has a drop-in multiprecision package for MATLAB with a free trial. Perhaps you could see how your result compares with a quadruple-precision calculation of this same problem. There may be other alternatives that will let you experiment with greater precision.
 
  • Like
Likes member 428835
  • #20
It doesn't matter how much precision you use. The point x = 1, x' = 0 is an unstable point in phase space.
You can easily see this by making it into a physical problem. You have a conservative force here of x^3 - x and a mass of 1, so F = ma gives x'' = x^3 - x. What you are really doing is rolling a ball up a hill (without friction) and trying to get it to come to rest on top of the hill. If your initial speed is bigger than 1/sqrt(2), by no matter how little, the ball will overshoot and roll down the other side. If the speed is too small, the ball won't get to the top and will roll back towards x = 0 and up the hill towards x = -1 (which is what you are seeing now). Since 1/sqrt(2) does not have an exact floating point representation, no computation can start with exactly the right initial speed, so they are all doomed from the start.
The differential equation has no solution where the ball actually comes to rest on top of the hill: in the exact solution you computed, x never reaches 1 while the speed becomes smaller and smaller. At some point the numerical error will make the ball roll back down or overshoot the hill.
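To make the borderline nature of those initial conditions explicit (this is just the standard energy first integral of the equation above, nothing beyond it): the conserved energy is $$E = \tfrac{1}{2}\dot x^2 + \tfrac{1}{2}x^2 - \tfrac{1}{4}x^4,$$ which at ##x(0)=0,\ \dot x(0)=1/\sqrt 2## equals ##\tfrac14##, exactly the energy of the hilltop ##x=1,\ \dot x=0## (where ##E = \tfrac12 - \tfrac14 = \tfrac14##). The trajectory sits precisely on the separatrix, so any rounding of the initial speed puts the computed orbit on one side of it or the other.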
 
  • Like
Likes Cryo, member 428835 and DrClaude
  • #21
willem2 said:
It doesn't matter how much precision you use. The point x = 1, x' = 0 is an unstable point in phase space.
You can easily see this by making it into a physical problem. You have a conservative force here of x^3 - x and a mass of 1, so F = ma gives x'' = x^3 - x. What you are really doing is rolling a ball up a hill (without friction) and trying to get it to come to rest on top of the hill. If your initial speed is bigger than 1/sqrt(2), by no matter how little, the ball will overshoot and roll down the other side. If the speed is too small, the ball won't get to the top and will roll back towards x = 0 and up the hill towards x = -1 (which is what you are seeing now). Since 1/sqrt(2) does not have an exact floating point representation, no computation can start with exactly the right initial speed, so they are all doomed from the start.
The differential equation has no solution where the ball actually comes to rest on top of the hill: in the exact solution you computed, x never reaches 1 while the speed becomes smaller and smaller. At some point the numerical error will make the ball roll back down or overshoot the hill.
Thanks for this, that was a great explanation of what's happening!

So you mentioned phase space: this equation can be decomposed as $$\dot x = y,\qquad \dot y = -x + x^3,$$ which admits equilibrium solutions ##y=0,\ x=0,\pm 1##. The Jacobian of this system (a 2x2 matrix, since there are two state variables) is $$J = \begin{bmatrix} 0 & 1 \\ 3x^2 - 1 & 0 \end{bmatrix},$$ with eigenvalues ##\lambda = \pm\sqrt{3x^2 - 1}##. Evaluated at the equilibria this gives

$$\lambda_0 = \pm i,\qquad \lambda_{\pm 1} = \pm\sqrt 2.$$ Can you remind me how we can determine stability from this? I know from a phase portrait ##(0,0)## is stable and the other two are saddles.
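A quick sanity check of that linearization in Mathematica (a minimal sketch, using the system exactly as written above):
Code:
f = {y, -x + x^3};          (* the first-order system {x', y'} *)
J = D[f, {{x, y}}];         (* Jacobian matrix *)
Eigenvalues[J /. x -> 0]    (* ±I: a center *)
Eigenvalues[J /. x -> 1]    (* ±Sqrt[2]: a saddle *)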
 
  • #22
joshmccraney said:
$$\lambda_0 = \pm i,\qquad \lambda_{\pm 1} = \pm\sqrt 2.$$ Can you remind me how we can determine stability from this? I know from a phase portrait ##(0,0)## is stable and the other two are saddles.
If the real part of every eigenvalue is < 0, the fixed point is asymptotically stable: there is a neighbourhood of the fixed point in phase space from which all initial points end up at the fixed point (in the limit as t -> inf).
If the real part of one of the eigenvalues is 0, the fixed point might be stable. The point x = 0, v = 0 is not asymptotically stable: you get oscillations, and in phase space the solution will circle this point.
You could add a friction term proportional to v^2 to the system. That would keep the eigenvalues the same, but it would make the system asymptotically stable (or unstable, if you use negative friction).

What always seems to happen when you implement the system
$$\dot v = x^3 - x,\qquad \dot x = v$$
in practice is that you use something like the Euler step
$$x_{n+1} = x_n + v_n\,\Delta t,\qquad v_{n+1} = v_n + (x_n^3 - x_n)\,\Delta t.$$
Since x is around 1 and v is around 0, the absolute error of a floating point number is much bigger for x than for v. At some point the next value of x you compute will be equal to the previous value, while x is still < 1. Then ##(x^3 - x)## remains constant and v becomes negative.
Using a better integration method won't help at all if you keep the same time step, because this happens whenever ##v\,\Delta t## becomes smaller than the smallest difference between two floating point numbers near x. Only increasing the precision of the floating point numbers will help, and decreasing the time step will actually make it worse.
(Using a better integration method can help if it allows you to increase the time step.)
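As an illustration, here is a minimal machine-precision sketch of that iteration (the step size, horizon, and stopping test are arbitrary choices, not from the thread): the computed orbit tracks the separatrix for a while and then either falls back or overshoots, exactly as described above.
Code:
dt = 10.^-3;
euler[{x_, v_}] := {x + v dt, v + (x^3 - x) dt};   (* one forward Euler step *)
(* iterate from the initial conditions until |x| clearly leaves the separatrix *)
traj = NestWhileList[euler, {0., N[1/Sqrt[2]]}, Abs[First[#]] <= 1.5 &, 1, 30000];
ListLinePlot[traj[[All, 1]], DataRange -> {0, dt (Length[traj] - 1)}]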
 
  • Like
Likes member 428835
  • #23
Try solving it in two parts: the homogeneous part [itex] \ddot{x} + x = 0[/itex] gives sinusoids, so use those as the basis for handling the [itex] x^{3}[/itex] piece.
 
  • #24
jedishrfu said:
With respect to numerical algorithms, it's a bit of an art to know which one is best. This is common practice in all sorts of computational systems. Machine learning is another area where there are a variety of techniques to choose from, and a variety of parameters to tune within each technique.

I'm sure someone out there is working out how to train an AI/ML system to pick the best numerical ODE solver for a given ODE problem. It would be surprising if no one has tackled something like this as a PhD project, for example.
 
  • #25
I'm sure there are many grad students doing this for their particular projects, and that many others have published similar work.
 

1. What is an ODE?

An ODE, or ordinary differential equation, is a mathematical equation that relates a function to its derivatives with respect to a single independent variable.

2. What is an ODE fail?

An ODE fail occurs when the numerical solution to an ODE produces oscillations, or a repeating pattern of values, instead of a smooth curve. This can happen due to errors in the initial conditions, the numerical method used to solve the ODE, or the nature of the ODE itself.

3. What is a numerical solution?

A numerical solution is an approximation of the true solution to an ODE, obtained by using numerical methods to solve the equation. These methods involve breaking the ODE down into smaller, simpler equations that can be solved using algorithms and computer programs.

4. What are some possible solutions to ODE fail?

Possible solutions to ODE fail include adjusting the initial conditions to be more accurate, using a different numerical method to solve the ODE, or changing the parameters of the ODE to make it less sensitive to errors. It may also be helpful to consult with other experts or conduct further research to gain a better understanding of the problem.

5. How can I prevent ODE fail?

To prevent ODE fail, it is important to carefully choose appropriate initial conditions, use a reliable numerical method, and double-check the parameters and equations used in the ODE. It may also be helpful to test the solution with different values and compare the results to ensure accuracy. Additionally, continuous learning and staying updated on new methods and techniques can also help prevent ODE fail.
