Lorenz ODE solver in 35 lines of pure Python

  • Thread starter m4r35n357
Features:
No external dependencies.
Arbitrary order simulation (accuracy limited by float precision).
No finite difference errors.
Can be extended to arbitrary precision with gmpy2 and two more lines.

Enjoy!

Parameters:
  • order
  • step size
  • number of steps
  • initial conditions (x, y, z)
  • parameters (sigma, rho, beta_numerator, beta_denominator)
Sample invocation:
Code:
./tsm-lorenz.py 10 .01 10001 -15.8 -17.48 35.64 10 28 8 3
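Reading the positional arguments in the order the script below parses them, the sample invocation breaks down as:
Code:
# ./tsm-lorenz.py  n   h    steps  x      y       z      sigma  rho  b_num  b_den
# ./tsm-lorenz.py  10  .01  10001  -15.8  -17.48  35.64  10     28   8      3
# i.e. order 10, step size 0.01, 10001 steps, initial (x, y, z) = (-15.8, -17.48, 35.64),
# sigma = 10, rho = 28, beta = 8/3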
Python:
#!/usr/bin/env python3
from sys import argv

n = int(argv[1])
h = float(argv[2])
steps = int(argv[3])
x = float(argv[4])
y = float(argv[5])
z = float(argv[6])
s = float(argv[7])
r = float(argv[8])
b = float(argv[9]) / float(argv[10])

cx = [0.0 for _ in range(n + 1)]
cy = [0.0 for _ in range(n + 1)]
cz = [0.0 for _ in range(n + 1)]
for t in range(1, steps):
    print("{:.9e} {:.9e} {:.9e} {:.5e}".format(x, y, z, t * h))
    cx[0] = x
    cy[0] = y
    cz[0] = z
    for k in range(n):
        xz = xy = 0.0
        cx[k + 1] = s * (cy[k] - cx[k]) / (k + 1)
        for j in range(k + 1):
            xz += cx[k - j] * cz[j]
        cy[k + 1] = (r * cx[k] - xz - cy[k]) / (k + 1)
        for j in range(k + 1):
            xy += cx[k - j] * cy[j]
        cz[k + 1] = (xy - b * cz[k]) / (k + 1)
    x = cx[n]
    y = cy[n]
    z = cz[n]
    for i in range(n - 1, -1, -1):
        x = x * h + cx[i]
        y = y * h + cy[i]
        z = z * h + cz[i]
 
Nice, thanks for sharing.

Have you checked out the site Rosettacode.org?

You might be able to construct a task and add your code as a solution in Python.

Also, you could set up a Github account and add your code there. Many folks go there looking for software to solve their problems.
 
Nice, thanks for sharing.

Also, you could set up a Github account and add your code there. Many folks go there looking for software to solve their problems.
Cheers! Actually I am currently in the process of pushing a more complete collection, including more ODEs, a c version and basic plotting scripts in Python, to github. I'll post again when it is up.

Incidentally, these solvers use a method based on automatic differentiation of Taylor series, in which the solver and the model are intricately linked. For that reason I think it's too specific for Rosetta, although I like the site a lot.
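To make that concrete, here are the recurrences as read back out of the code above. For the Lorenz system ##\dot x = \sigma(y - x)##, ##\dot y = x(\rho - z) - y##, ##\dot z = xy - \beta z##, writing each coordinate as a truncated Taylor series (a "jet"), e.g. ##x(t_0 + h) = \sum_{k=0}^{n} c_{x,k} h^k##, and using the Cauchy product for the nonlinear terms gives
$$c_{x,k+1} = \frac{\sigma\,(c_{y,k} - c_{x,k})}{k+1},\qquad c_{y,k+1} = \frac{\rho\,c_{x,k} - \sum_{j=0}^{k} c_{x,j}\,c_{z,k-j} - c_{y,k}}{k+1},\qquad c_{z,k+1} = \frac{\sum_{j=0}^{k} c_{x,j}\,c_{y,k-j} - \beta\,c_{z,k}}{k+1},$$
with ##c_{x,0} = x##, ##c_{y,0} = y##, ##c_{z,0} = z## at the start of each step.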

For those interested, my investigations were inspired by this paper (containing a link to their amazing software), which uses a code generation approach to do the same thing.
 
OK, this is a first go; I had to reorganize into a new local repo first, but it works for me (TM). I realize I need to document the intent and primary tasks (primarily solving coupled nonlinear ODEs, but also a bit of root finding), and I will be updating the README file as and when.
 
OK now down to 25 lines of dependency-free Python, or 24 if you run it as a script (without the shebang) ;)

[EDIT] now 23/22 . . . !!!

Python:
#!/usr/bin/env python3
from sys import argv
n, h, steps = int(argv[3]), float(argv[4]), int(argv[5])  # integrator controls
x, y, z = float(argv[6]), float(argv[7]), float(argv[8])  # coordinates
cx, cy, cz = [0.0 for _ in range(n + 1)], [0.0 for _ in range(n + 1)], [0.0 for _ in range(n + 1)]  # jets
s, r, b = float(argv[9]), float(argv[10]), float(argv[11]) / float(argv[12])  # parameters
for step in range(1, steps):
    print("{:.9e} {:.9e} {:.9e} {:.5e}".format(x, y, z, step * h))
    cx[0], cy[0], cz[0] = x, y, z
    for k in range(n):
        xz = xy = 0.0
        cx[k + 1] = s * (cy[k] - cx[k]) / (k + 1)
        for j in range(k + 1):
            xz += cx[j] * cz[k - j]
        cy[k + 1] = (r * cx[k] - xz - cy[k]) / (k + 1)
        for j in range(k + 1):
            xy += cx[j] * cy[k - j]
        cz[k + 1] = (xy - b * cz[k]) / (k + 1)
    x, y, z = cx[n], cy[n], cz[n]
    for i in range(n - 1, -1, -1):
        x = x * h + cx[i]
        y = y * h + cy[i]
        z = z * h + cz[i]
Incidentally, the github stuff (for which this snippet is just a promo) is pretty much feature-complete now; I'm just working on more docs.
 

Ibix

I can save you three lines by replacing the loops that calculate xy and xz with sum() calls:
Python:
#!/usr/bin/env python3
from sys import argv
n, h, steps = int(argv[3]), float(argv[4]), int(argv[5])  # integrator controls
x, y, z = float(argv[6]), float(argv[7]), float(argv[8])  # coordinates
cx, cy, cz = [0.0 for _ in range(n + 1)], [0.0 for _ in range(n + 1)], [0.0 for _ in range(n + 1)]  # jets
s, r, b = float(argv[9]), float(argv[10]), float(argv[11]) / float(argv[12])  # parameters
for step in range(1, steps):
    print("{:.9e} {:.9e} {:.9e} {:.5e}".format(x, y, z, step * h))
    cx[0], cy[0], cz[0] = x, y, z
    for k in range(n):
        cx[k + 1] = s * (cy[k] - cx[k]) / (k + 1)
        xz = sum(cx[j] * cz[k - j] for j in range(k + 1))
        cy[k + 1] = (r * cx[k] - xz - cy[k]) / (k + 1)
        xy = sum(cx[j] * cy[k - j] for j in range(k + 1))
        cz[k + 1] = (xy - b * cz[k]) / (k + 1)
    x, y, z = cx[n], cy[n], cz[n]
    for i in range(n - 1, -1, -1):
        x = x * h + cx[i]
        y = y * h + cy[i]
        z = z * h + cz[i]
Obviously you could save another couple of lines by putting those sums inline in the expressions for cy[k+1] and cz[k+1].

I tried doing similar to the update for x, y and z by defining an array containing powers of h, but rounding error seemed to accumulate and then come and bite me.

Incidentally, your argvs seem to be numbered a bit funny - there's no argv[1] or argv[2].
 
I can save you three lines by replacing the loops that calculate xy and xz with sum() calls:
Nice, I wouldn't have thought of that!
Obviously you could save another couple of lines by putting those sums inline in the expressions for cy[k+1] and cz[k+1].
But I should have noticed that; they are coordinates, so I feel no shame in handling them as a tuple.
I tried doing similar to the update for x, y and z by defining an array containing powers of h, but rounding error seemed to accumulate and then come and bite me.
The Python stuff is regular floats so roundoff will be an issue (BTW that bit is Horner's method, I think you know this though). I have a gmpy2 version but haven't pushed it as yet - I wanted a pure Python implementation for bragging rights ;) The c programs use MPFR.
Incidentally, your argvs seem to be numbered a bit funny - there's no argv[1] or argv[2].
It's a longish and boring story, but those missing parameters are used by the full Python program in arbitrary precision, and I like to re-use command history without too much editing ;)
 
Thanks for all that; here is what I have now: 15 lines!
Python:
from sys import argv
n, h, steps = int(argv[3]), float(argv[4]), int(argv[5])  # integrator controls
x, y, z = float(argv[6]), float(argv[7]), float(argv[8])  # coordinates
cx, cy, cz = [0.0 for _ in range(n + 1)], [0.0 for _ in range(n + 1)], [0.0 for _ in range(n + 1)]  # jets
s, r, b = float(argv[9]), float(argv[10]), float(argv[11]) / float(argv[12])  # parameters
for step in range(1, steps):
    print("{:.9e} {:.9e} {:.9e} {:.5e}".format(x, y, z, step * h))
    cx[0], cy[0], cz[0] = x, y, z
    for k in range(n):  # build up the jets using the recurrence relations and the derivative rule
        cx[k + 1] = s * (cy[k] - cx[k]) / (k + 1)
        cy[k + 1] = (r * cx[k] - sum(cx[j] * cz[k - j] for j in range(k + 1)) - cy[k]) / (k + 1)
        cz[k + 1] = (sum(cx[j] * cy[k - j] for j in range(k + 1)) - b * cz[k]) / (k + 1)
    x, y, z = cx[n], cy[n], cz[n]
    for i in range(n - 1, -1, -1):  # Horner's method
        x, y, z = x * h + cx[i], y * h + cy[i], z * h + cz[i]
I don't like single-use variables ;)
 

Ibix

Glad to be of service. :wink: And if you don't mind one really long line you can write cx[k+1],cy[k+1],cz[k+1]=(etc) as one line...

I did realise what you were doing in the x,y,z calculation, although I didn't know it was called Horner's method. You do seem to get quite different results if you use different approaches to the polynomial (such as pre-calculating the ##h^i## values and summing in order of increasing or decreasing ##i##), but I suppose that's to be expected in a system like this one.
 
Glad to be of service. :wink: And if you don't mind one really long line you can write cx[k+1],cy[k+1],cz[k+1]=(etc) as one line...
That is too much even for me, even though they could be a tuple ;)
I did realise what you were doing in the x,y,z calculation, although I didn't know it was called Horner's method. You do seem to get quite different results if you use different approaches to the polynomial (such as pre-calculating the ##h^i## values and summing in order of increasing or decreasing ##i##), but I suppose that's to be expected in a system like this one.
Ah OK, all the h's should be the same (the time step) - are you sure the code is what you think it is? It isn't a regular "weighted" accumulation.
BTW do you have the facility to plot this data?
 

Ibix

Ah OK, all the h's should be the same (the time step) - are you sure the code is what you think it is? It isn't a regular "weighted" accumulation.
Unwinding your loop, you are writing
Python:
    x=cx[10]
    x=x*h+cx[9]
    x=x*h+cx[8]
    ...
    x=x*h+cx[0]
which means that you are calculating cx[10]*h**10+cx[9]*h**9+cx[8]*h**8+...+cx[0]. What I tried was to pre-calculate
Python:
hpow=[h**i for i in range(n+1)]
and then instead of your Horner's method loop, write
Python:
    x=sum(hi * cxi for (hi,cxi) in zip(hpow,cx))
You can sum in the opposite order with zip(hpow[::-1],cx[::-1]) if you like.
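Both forms evaluate the same truncated Taylor polynomial, of course; the difference is only the order in which the floating-point operations are performed:
$$x(t_0 + h) = \sum_{i=0}^{n} c_{x,i}\,h^{i} = c_{x,0} + h\bigl(c_{x,1} + h\bigl(c_{x,2} + \cdots + h\,c_{x,n}\bigr)\cdots\bigr),$$
where the nested form on the right is what the Horner loop computes, one multiply and one add per coefficient.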
BTW do you have the facility to plot this data?
Yes, in 2d at least (3d also possible, but I'd need some lead time). This is x versus y:
[Attachment: figure_1.png]

Blue line is your code, green is my version. It's not really different in overall form, but the trajectories do differ.
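For anyone wanting to reproduce this kind of plot from the solver output, here is a minimal sketch (not the script used for the figure above; the data file name is hypothetical). It reads the four whitespace-separated columns the solver prints (x, y, z, t) and plots x against y:
Python:
#!/usr/bin/env python3
# Minimal plotting sketch, assuming the solver's stdout (columns: x y z t)
# has been redirected to a file called lorenz.dat (name chosen arbitrarily).
import matplotlib.pyplot as plt

xs, ys = [], []
with open("lorenz.dat") as data:
    for line in data:
        x, y, z, t = map(float, line.split())
        xs.append(x)
        ys.append(y)
plt.plot(xs, ys, lw=0.5)
plt.xlabel("x")
plt.ylabel("y")
plt.show()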
 


Ibix

I did a 3d plot and had a look. At least to my eye, the lines seem to lie in the same subspace using either method - they just seem to have different trajectories through it. Here's an edge-on view, which hopefully shows what I'm getting at.
[Attachment: figure_2.png]
 


Unwinding your loop, you are writing
OK, I see that you do get it; I am beginning to remember writing it in the first place. I think I'll defer that calculation to you ;)
Blue line is your code, green is my version. It's not really different in overall form, but the trajectories do differ.
I've checked my data against the literature (search for Clean Numerical Simulation on arXiv if you are really bored!), and have lots of verification data for those parameters. Time I did so again, I think.

BTW I'm doing complete carnage to my Taylor recurrences using your "sum" trick, so thanks again. I'll probably push the gmpy2 branch once I get it all rebased and tested.
 
I did a 3d plot and had a look. At least to my eye, the lines seem to lie in the same subspace using either method - they just seem to have different trajectories through it. Here's an edge-on view, which hopefully shows what I'm getting at.
Just done a check of the 15-line version against my saved data and it matches up to about 30 time units, which is just what you would expect from a 10th-order simulation with these parameters. When you get the chance, can you just compare the first 30 time units' worth? Hopefully they will match better.
 
And that, ladies & gentlemen, is what we call "sensitive dependence on initial conditions" ;)

In the graph below (made using compare.py from github), I plot the reference data in black, then superimpose the simulation data. I think it makes quite an impressive visualization (error is in red).

Incidentally, it turns out that as long as the precision-to-order ratio is kept at ~1.6 (this can be reduced slightly at very high orders), the time before onset of chaos is approximately ##3 \times \text{order}## (for these parameters, initial conditions and step size). I have simulated (and verified) up to 1500 time units with the c version (the overall process duration is ##O(n^4)##! It took about 13 hours on an i5 NUC).
 

I have just pushed a new branch (gmpy2) to github supporting arbitrary precision floating point in Python as well as in c. It depends on the gmpy2 library, which provides Python bindings to the same MPFR library that is already in use in the c implementation.

I have also incorporated @Ibix's "sum" simplifications, and a few list comprehensions of my own in the Series class.
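For the curious, here is a minimal sketch of what switching the pure Python script over to gmpy2 might look like - my guess at the "two more lines" mentioned in the opening post, not what is in the pushed branch, and the precision value is an arbitrary choice:
Python:
# Sketch only: arbitrary-precision floats via gmpy2 (assumed usage, not the pushed branch).
from gmpy2 import get_context, mpfr

get_context().precision = 240                           # working precision in bits (arbitrary here)
x, y, z = mpfr("-15.8"), mpfr("-17.48"), mpfr("35.64")  # parse state as mpfr instead of float
s, r, b = mpfr(10), mpfr(28), mpfr(8) / 3               # parameters likewise
# The jets would be seeded with mpfr(0) rather than 0.0; the Taylor recurrences and the
# Horner evaluation need no changes, since mpfr values support +, -, *, / directly.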
 
Well, this is a bit of an eye-opener! It would appear that Python with gmpy2 is quicker than c with my "hand-optimized" MPFR calls . . .
Code:
$ time -p ./tsm-mp.py lorenz 3200 2000 .01 1 -15.8 -17.48 35.64 10 28 8 3
taylor module loaded
-1.580000000e+01 -1.748000000e+01 3.564000000e+01 0.00000e+00
-1.588859189e+01 -1.596400586e+01 3.732161107e+01 1.00000e-02
real 57.03
user 57.00
sys 0.03
$ time -p ./tsm-lorenz-static 3200 2000 .01 1 -15.8 -17.48 35.64 10 28 8 3
-1.580000000e+01 -1.748000000e+01 3.564000000e+01 0.00000e+00
-1.588859189e+01 -1.596400586e+01 3.732161107e+01 1.00000e-02
real 64.31
user 64.30
sys 0.01
Yes, those are single steps using a 2000th-order Taylor series (RK what?) and 3200 decimal places of precision. The inner loop of the product recurrence is therefore executed 8000000 times per step (there are two products in the Lorenz system). I now intend to put what little effort I can spare into the Python/gmpy2 version. The pure Python implementation is now frozen, and the c implementation will be kept as a reference.
 
