# Numerical ODE Fail

#### joshmccraney

Hi PF

The following ODE
$$\ddot x + x - x^3 = 0,\qquad x(0)=0,\quad \dot x(0) = \frac{1}{\sqrt 2}$$
is solved exactly by $x(t) = \tanh(t/\sqrt 2)$. However, when I try to solve this with MATLAB's ode45 (ode23t looks identical) or Mathematica's NDSolve, I get an oscillatory numerical solution (see attached).

Any idea what's going wrong?
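
For reference, here is a minimal MATLAB sketch of the setup above (illustrative, not my exact script):

```matlab
% Sketch of the problem above: x'' + x - x^3 = 0, x(0) = 0, x'(0) = 1/sqrt(2).
f = @(t, u) [u(2); -u(1) + u(1)^3];        % u(1) = x, u(2) = xdot
[t, u] = ode45(f, [0 30], [0; 1/sqrt(2)]);
plot(t, u(:,1), t, tanh(t/sqrt(2)), '--')  % numerical vs. exact solution
legend('ode45', 'exact tanh(t/sqrt(2))')
```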

#### Attachments

• Plots comparing the numerical and exact solutions (the numerical one oscillates).

#### jedishrfu

Mentor
Programmer error?

#### jedishrfu

Mentor
You should always post your program, because when things go south it's either the program or the data.

#### joshmccraney

> Programmer error?
See, that's what I thought. However, check out this Mathematica snippet: note the ODE being exactly solved by $xE$, and how it's the same as the ODE I solve numerically.

#### Attachments

• Screenshot of the Mathematica notebook.

#### joshmccraney

> You should always post your program, because when things go south it's either the program or the data.
Yeah, I realized this a few moments later, but you had already commented... sorry.

#### jedishrfu

Mentor
So maybe ode45 is not the right solver for this problem. It works initially but then deviates around $t = 10$.

#### joshmccraney

> So maybe ode45 is not the right solver for this problem. It works initially but then deviates around $t = 10$.
That's what I thought too, but other solvers failed as well, so I switched to Mathematica, and it failed too! Any ideas about the oscillations? They look so unusual.

#### jedishrfu

Mentor
It looks like a flattened sine curve. If you'll recall, $\ddot x + x = 0$ has sin/cos solutions. Since it's flattened, maybe it's something approaching a square wave.

Try some other solver, like ode23 or ode?? from that MATLAB reference I gave you in another thread.

#### Mark44

Mentor
> Hi PF
>
> The following ODE
> $$\ddot x + x - x^3 = 0,\qquad x(0)=0,\quad \dot x(0) = \frac{1}{\sqrt 2}$$
> is solved exactly by $x(t) = \tanh(t/\sqrt 2)$. However, when I try to solve this with MATLAB's ode45 (ode23t looks identical) or Mathematica's NDSolve, I get an oscillatory numerical solution (see attached).
>
> Any idea what's going wrong?
Numerical methods don't necessarily give good solutions over a large interval. Your numerical solution matches the exact solution pretty well when $x$ is close to 0, the value used in the initial conditions. When $x$ is relatively far from 0, the numerical solution deviates considerably from the exact solution.

#### Mark44

Mentor
> See, that's what I thought. However, check out this Mathematica snippet: note the ODE being exactly solved by $xE$, and how it's the same as the ODE I solve numerically.
I don't do Mathematica, but there are a couple of things in your screenshot that puzzle me.
First, when you define the exact solution, xE[t_], you have a different parameter in the definition:
$$xE[t\_\,] := \frac{\text{Tanh}\!\left[b + \frac{t}{\sqrt 2}\right]}{\sqrt{-\alpha\,\epsilon}}$$
Shouldn't you have t_ on the right side, not t?

Second, why are you using the constants $\alpha$, $\epsilon$, and $b$? Is the idea here that you can vary these parameters and see how the solutions change?

#### joshmccraney

> I don't do Mathematica, but there are a couple of things in your screenshot that puzzle me.
> First, when you define the exact solution, xE[t_], you have a different parameter in the definition:
> $$xE[t\_\,] := \frac{\text{Tanh}\!\left[b + \frac{t}{\sqrt 2}\right]}{\sqrt{-\alpha\,\epsilon}}$$
> Shouldn't you have t_ on the right side, not t?
No, the syntax I've used is correct: the pattern t_ belongs on the left-hand side, and plain t on the right.

> Second, why are you using the constants $\alpha$, $\epsilon$, and $b$? Is the idea here that you can vary these parameters and see how the solutions change?
I'm using these because they were part of how I found the exact solution, something like similarity-transform variables. I then set them to particular values. I don't really care what they are, but it bothers me that I can't recover a solution that stays close to the analytic one at longer times. The oscillatory behavior is very strange.

#### joshmccraney

> Try some other solver, like ode23 or ode?? from that MATLAB reference I gave you in another thread.
I actually tried this first: I tried every solver I could find in MATLAB. They either oscillated or diverged; none looked accurate at larger times. Isn't this bad? How would I know, without an analytic solution, whether the numerics are correct?
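
Roughly, what I ran looked like this (a sketch, not my exact script):

```matlab
% Sketch: trying several MATLAB solvers on the same problem.
f = @(t, u) [u(2); -u(1) + u(1)^3];
solvers = {@ode45, @ode23, @ode113, @ode15s, @ode23t};
hold on
for k = 1:numel(solvers)
    [t, u] = solvers{k}(f, [0 30], [0; 1/sqrt(2)]);
    plot(t, u(:,1))                         % each solver's x(t)
end
fplot(@(t) tanh(t/sqrt(2)), [0 30], 'k--')  % exact solution for comparison
hold off
```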

#### Mark44

Mentor
> None looked accurate at larger times. Isn't this bad?
See post #9.
> How would I know, without an analytic solution, whether the numerics are correct?
From the documentation for the method you're using. It should give you some idea of the accuracy of the method in terms of the interval around the initial conditions.
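
For example, in MATLAB you can request tighter error tolerances through odeset; a sketch (the tolerance values are illustrative):

```matlab
% Sketch: tightening ode45's error tolerances (values are illustrative).
f = @(t, u) [u(2); -u(1) + u(1)^3];
opts = odeset('RelTol', 1e-12, 'AbsTol', 1e-14);
[t, u] = ode45(f, [0 30], [0; 1/sqrt(2)], opts);
plot(t, u(:,1))   % tighter tolerances may only postpone the deviation
```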

#### joshmccraney

> See post #9.

> From the documentation for the method you're using. It should give you some idea of the accuracy of the method in terms of the interval around the initial conditions.
I'll read further into the documentation. And yeah, I saw post #9, but it seems like such a simple curve: I'm shocked it deviates so much from the analytic solution at late times.

#### Mark44

Mentor
> but it seems like such a simple curve: I'm shocked it deviates so much from the analytic solution at late times.
ode45 and NDSolve don't know anything about the analytic solution. All they have to go on are the differential equation and the initial conditions, which they use to step from one computed solution point to the next. The farther away you get from, in this case, $t = 0$, the less accurate the computed solution is.

#### DrClaude

Mentor
Check what happens to the numerical solution when you start at $t=10$ with the exact value as a starting point.
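
In MATLAB that could look like the sketch below; the exact values come from $x = \tanh(t/\sqrt 2)$ and its derivative:

```matlab
% Sketch: restart the integration at t = 10 from the exact solution's values.
f  = @(t, u) [u(2); -u(1) + u(1)^3];
t0 = 10;
u0 = [tanh(t0/sqrt(2)); sech(t0/sqrt(2))^2/sqrt(2)];  % exact x and xdot at t0
[t, u] = ode45(f, [t0, 30], u0);
plot(t, u(:,1), t, tanh(t/sqrt(2)), '--')             % compare with exact
```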

#### DrClaude

Mentor
Also, please post your code as text, since that allows others to copy and paste it.

#### jedishrfu

Mentor
With respect to numerical algorithms, it's a bit of an art to know which one is best. This is common across all sorts of computational systems. Machine learning is another area with a variety of techniques to choose from, and a variety of parameters to tune within each technique; as an example, the activation function within a node of a neural net can be swapped among sigmoids, ReLU, or some other novel function.

Basically, the human is training the computational system to give answers that seem acceptable, then using it on more complex systems to discover something new, which may in fact be an artifact of the computational process.

The good news is that you get to play with the computer day in and day out, wrestling with your data and results, trying to understand what's going on, and maybe you'll discover something novel.

How most people do it today is to find a computational problem similar to the one they're working on, adopt its parameters as a starting point, and change them as needed to get the results they're looking for. Essentially the real cost function is the human trying to decide on the cost function to use.

#### Bill Simpson

In Mathematica, if I compare the results of NDSolve with the default MachinePrecision versus WorkingPrecision -> 64, I find the same oscillatory result in the first case and the expected, more accurate behavior in the second.

```mathematica
(* Machine precision (default): shows the spurious oscillation. *)
sol1 = x[t] /. NDSolve[{x''[t] + x[t] - x[t]^3 == 0, x[0] == 0, x'[0] == 1/Sqrt[2]}, x[t], {t, 0, 30}][[1]];
(* 64-digit working precision: tracks the exact tanh solution. *)
sol2 = x[t] /. NDSolve[{x''[t] + x[t] - x[t]^3 == 0, x[0] == 0, x'[0] == 1/Sqrt[2]}, x[t], {t, 0, 30}, WorkingPrecision -> 64][[1]];
Plot[{sol1, sol2}, {t, 0, 30}]
```
If I drop the WorkingPrecision down to 32, the oscillation still appears, but later than when using MachinePrecision.

I see that Advanpix has a drop-in multiprecision package for MATLAB with a free trial. Perhaps you could see how your result compares with a quadruple-precision calculation of exactly this problem. There may also be other alternatives that let you experiment with greater precision.


#### willem2

It doesn't matter how much precision you use. The point $x = 1$, $\dot x = 0$ is an unstable point in phase space.

You can easily see this by turning it into a physical problem. You have a conservative force $x^3 - x$ acting on a unit mass; $F = ma$ gives $\ddot x = x^3 - x$. What you are really doing is rolling a ball up a hill (without friction) and trying to get it to come to rest on top of the hill. If the initial speed is bigger than $1/\sqrt 2$, by no matter how little, the ball will overshoot and roll down the other side. If the speed is too small, the ball won't reach the top and will roll back toward $x=0$ and up the hill toward $x=-1$ (which is what you are seeing now). Since $1/\sqrt 2$ has no exact binary representation, no computation can start with exactly the right initial speed, so they are all doomed from the start.

The differential equation has no solution in which the ball actually comes to rest on top of the hill: the exact solution you computed never reaches $x=1$, while the speed becomes smaller and smaller. At some point the numerical error will make the ball roll back down or overshoot the hill.
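
To make this quantitative (a standard energy argument, sketched here): multiply the ODE by $\dot x$ and integrate to get the conserved energy $$E = \tfrac{1}{2}\dot x^2 + \tfrac{1}{2}x^2 - \tfrac{1}{4}x^4.$$ The initial conditions give $E = \tfrac{1}{2}\bigl(\tfrac{1}{\sqrt 2}\bigr)^2 = \tfrac{1}{4}$, which is exactly the energy of the hilltop $x = 1$, $\dot x = 0$, where $E = \tfrac{1}{2} - \tfrac{1}{4} = \tfrac{1}{4}$. The trajectory lies exactly on the separatrix, so any rounding of the initial speed puts the computed trajectory on one side of it or the other.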

#### joshmccraney

> It doesn't matter how much precision you use. The point $x = 1$, $\dot x = 0$ is an unstable point in phase space. […]
Thanks for this, that was a great explanation of what's happening!

So you mentioned phase space: this equation can be written as the first-order system $$\dot x = y,\qquad \dot y = -x + x^3,$$ which admits the equilibria $y=0$, $x=0,\pm 1$. The Jacobian of the right-hand side is $$J(x,y) = \begin{bmatrix} 0 & 1 \\ -1 + 3x^2 & 0 \end{bmatrix},$$ so its eigenvalues satisfy $\lambda^2 = -1 + 3x^2$ (two with each equilibrium, since there are two state variables). Evaluating at the equilibria gives

$$\lambda_0 = \pm i,\qquad \lambda_{\pm 1} = \pm\sqrt 2.$$ Can you remind me how we determine stability from these? I know from a phase portrait that $(0,0)$ is stable and the other two are saddles.

#### willem2

> $$\lambda_0 = \pm i,\qquad \lambda_{\pm 1} = \pm\sqrt 2.$$ Can you remind me how we determine stability from these? I know from a phase portrait that $(0,0)$ is stable and the other two are saddles.
If the real parts of all of the eigenvalues are $< 0$, the fixed point is asymptotically stable: there is a neighbourhood of the fixed point in phase space from which every initial point ends up at the fixed point in the limit $t \to \infty$. If an eigenvalue has a positive real part, as $+\sqrt 2$ does at $x = \pm 1$, the fixed point is unstable; one positive and one negative eigenvalue is exactly a saddle. If the real parts are all 0, as with $\pm i$ at the origin, the linearization is inconclusive and the fixed point might be stable. Here the point $x=0$, $v=0$ is stable but not asymptotically stable: you get oscillations, and in phase space the solution circles the point.

You could add a friction term proportional to $v^2$ to the system. That would keep the eigenvalues the same, but it would make the system asymptotically stable (or unstable, if you use negative friction).
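
A quick numerical check of these eigenvalues (a MATLAB sketch using the Jacobian written above):

```matlab
% Sketch: eigenvalues of the Jacobian of [x' = y; y' = -x + x^3] at each equilibrium.
J = @(x) [0, 1; -1 + 3*x^2, 0];
eig(J(0))   % +/- 1i      -> center: stable, but not asymptotically
eig(J(1))   % +/- sqrt(2) -> saddle: unstable
```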

Here is what always seems to happen when you implement the system
$$\dot v = x^3 - x,\qquad \dot x = v$$
numerically. In practice a step of the integration looks like
$$x_{n+1} = x_n + v_n\,\Delta t,\qquad v_{n+1} = v_n + (x_n^3 - x_n)\,\Delta t.$$
Since $x$ is around 1 and $v$ is around 0, the absolute floating-point error is much bigger for $x$ than for $v$. At some point the next value of $x$ you compute will be equal to the previous value, while $x$ is still $< 1$: the increment $v_n\,\Delta t$ has become smaller than the spacing between adjacent floating-point numbers near $x_n$, so it is absorbed. Then $(x^3 - x)$ remains constant and $v$ eventually becomes negative.

Using a better integration method won't help at all if you keep the same time step, because this happens whenever $v\,\Delta t$ becomes smaller than the smallest representable difference between floating-point numbers near $x$. Only increasing the precision of the floating-point numbers will help; decreasing the time step actually makes it worse. (A better integration method can help indirectly, if it lets you increase the time step.)
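
You can see the absorption directly; a minimal sketch with illustrative numbers:

```matlab
% Sketch: near x = 1, an increment smaller than eps(x) is lost entirely.
x  = 1 - 1e-12;      % position just below the hilltop
dx = 1e-17;          % tiny update, e.g. v*dt late in the integration
eps(x)               % spacing of doubles near x, about 1.1e-16
(x + dx) == x        % true: the update is absorbed and x stops moving
```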

#### Dr Transport

Gold Member
Try solving it in two parts: first the linear part, $\ddot{x} + x = 0$; since that gives sinusoids, use them as the basis for treating the $x^{3}$ piece.

"Numerical ODE Fail"

### Physics Forums Values

We Value Quality
• Topics based on mainstream science
• Proper English grammar and spelling
We Value Civility
• Positive and compassionate attitudes
• Patience while debating
We Value Productivity
• Disciplined to remain on-topic
• Recognition of own weaknesses
• Solo and co-op problem solving