Create an Orbital Simulator

by bkelly
Tags: orbital, simulator
P: 84 I found a simulator here: http://kornelix.squarespace.com/galaxy/ (it seems I don't have URL privileges, but google "stellar simulation"). It runs under Linux, so I fired up that computer and installed it. It runs well, but doesn't offer much in the way of features. I haven't reviewed the code closely, but it seems to just use the F=MA type of approach. If I can get version 1.0 running, then I can look into better accuracy for close approaches.

There are two primary segments to this project: calculating the positions, and displaying the field. In this forum I want to discuss the calculations.

What type of coordinate system would be best: polar, Cartesian, or something else I don't know about? It seems to me that Cartesian would be better, if for no other reason than that I have never messed with polar coordinates. But if polar would be much better, I would like to know. If Cartesian, I suspect it would be easier to deal with if the origin is in one of the corners rather than in the middle; my first thought would be the lower left corner.

Thoughts about selecting the range: astronomical units, millions of miles, something else?

BTW: This site seems to respond much faster under Linux with Firefox than under XP with IE-7.
P: 4,513
 Quote by Redbelly98 Too bad yours didn't work out. I did something similar (in Visual Basic, on a Windows PC) 3 or 4 years ago and was reasonably happy with the results. I remember modifying the force so that it didn't diverge when the bodies got too close to each other, for the reason you saw: all hell breaks loose otherwise. Of course that meant some inaccuracies in the trajectories, but for example I was able to give a cool demonstration of the gravity-assist technique for boosting satellites. If you're interested I can try to look up exactly how I did the modification to the force. FYI, I only worked with up to 10 or 20 objects at a time, the computations really bogged down beyond that (Remember, this was Windows and Visual Basic).
bkelly's the one you want to talk to about code. I wrote mine a few moons ago, before they had invented rocks. Did you solve, or notice, the walk-away problem where circular orbits are really outward spirals? You can check whether it's happening with a running total-energy value.
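For anyone who wants to try the running total-energy check suggested here, a minimal sketch might look like the following (the function name and 2-D tuple layout are my own illustrative choices, not from any particular simulator):

```python
# Diagnostic sketch: track total energy of an N-body system to detect
# the "walk-away" drift where circular orbits slowly spiral outward.
import math

G = 6.674e-11  # gravitational constant, SI units

def total_energy(masses, positions, velocities):
    """Kinetic plus gravitational potential energy of all bodies.

    positions and velocities are lists of (x, y) tuples; masses is a
    list of floats. If the returned value drifts over many time steps,
    the integrator is gaining or losing energy.
    """
    kinetic = sum(0.5 * m * (vx**2 + vy**2)
                  for m, (vx, vy) in zip(masses, velocities))
    potential = 0.0
    n = len(masses)
    for i in range(n):
        for j in range(i + 1, n):
            dx = positions[j][0] - positions[i][0]
            dy = positions[j][1] - positions[i][1]
            r = math.hypot(dx, dy)
            potential -= G * masses[i] * masses[j] / r
    return kinetic + potential
```

Calling this once per displayed frame (rather than every integration step) keeps the O(n²) pair loop from slowing the simulation down.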

I hope I didn't PO lonedragon excessively...
Mentor P: 11,985 bkelly, in Cartesian coordinates it's easier to convert from the (x,y) calculated positions to displayed pixel locations, as only simple scaling factors are involved. It might also be easier to put the origin at pixel (0,0) in the display, in the upper left, but if you don't it's still pretty straightforward to rescale from (x,y) to pixel location.

I remember an algorithm a former physics teacher gave me. It's better than Euler, not as accurate as Runge-Kutta, but pretty straightforward to understand. In 1-D it works like this. Do a Taylor series expansion to find $$x_{n+1}$$ given $$x_n$$, where $$x_n = x(n \cdot \Delta t)$$:

$$x_{n+1} = x_n + v_n \cdot \Delta t + \frac{1}{2} a_n \cdot \Delta t^2 + \frac{1}{6}x'''_n \cdot \Delta t^3 + \cdots$$

Similarly,

$$x_{n-1} = x_n - v_n \cdot \Delta t + \frac{1}{2} a_n \cdot \Delta t^2 - \frac{1}{6}x'''_n \cdot \Delta t^3 + \cdots$$

where $$v_n, a_n, x'''_n$$ are the velocity, acceleration, and third derivative of x, respectively, at time $$t = n \cdot \Delta t$$. Add these two equations:

$$x_{n+1} + x_{n-1} = 2 \cdot x_n + a_n \cdot \Delta t^2 + \cdots$$

or

$$x_{n+1} = 2 \cdot x_n - x_{n-1} + a_n \cdot \Delta t^2 + \cdots$$

Note the cancellation of the odd-order terms, so that the error is actually of order $$\Delta t^4$$, not $$\Delta t^3$$. All you need to calculate $$x_{n+1}$$ are the previous two values of x, as well as the acceleration at time $$n \cdot \Delta t$$, which you calculate from the forces and masses in the system. In 2-D you'd calculate the x- and y-components of the accelerations, then apply the algorithm to find $$x_{n+1}$$ and $$y_{n+1}$$.

There might be a common name for this algorithm; if anybody knows it, feel free to post the info.
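As a concrete 1-D sketch of that update rule, consider a body falling toward a fixed central mass (illustrative units with G·M = 1; the step size and the Taylor bootstrap for the first step are my own choices):

```python
import math

G_M = 1.0   # G times central mass (illustrative units)
dt = 0.001  # time step

def accel(x):
    # 1-D gravitational acceleration toward the origin, for x > 0
    return -G_M / x**2

def step(x_prev, x_curr):
    """x_{n+1} = 2*x_n - x_{n-1} + a_n * dt^2"""
    return 2.0 * x_curr - x_prev + accel(x_curr) * dt**2

# The recurrence needs the two previous positions, so bootstrap x_1
# from x_0 and v_0 with a second-order Taylor step:
x0, v0 = 1.0, 0.0
x1 = x0 + v0 * dt + 0.5 * accel(x0) * dt**2

for _ in range(100):
    x0, x1 = x1, step(x0, x1)
```

After 100 steps the body has fallen slightly inward, as expected for a mass released from rest.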
Mentor
P: 11,985
 Quote by Phrak bkelly's the one you want to talk to about code. I wrote mine a few moons ago, before they had invented rocks. Did you solve, or notice, the walk-away problem where circular orbits are really outward spirals? You can check whether it's happening with a running total-energy value.
I didn't check circular orbits, that would have been a good test. What I did notice was that if two objects got very close, they could suddenly go flying off the screen at incredibly high speed.

p.s. I just looked over my code, and found that I used the algorithm I described in message #21. Somehow I thought I later changed this to Runge-Kutta, but apparently not.
I can't figure out, from a quick glance at the code, just what I'm doing to the force to make it "behave" at close range. I would have to find some 3-year-old handwritten notes that I have somewhere to figure that out again.
 P: 94 Anyone feel like sharing one of their simulators? Would be interesting.
P: 4,513 The guys at NASA, NSA, and whoever must have devoted quite a few man-years to this, so they may have found a different method than iterating t and updating V_i and X_i. There's the Hamiltonian, if you've heard of it. It's an alternative, equivalent to Newtonian mechanics. What I'm trying to say is that it may be possible to iterate values in some other mathematical space, rather than velocity and position. Angular momentum and kinetic energy, perhaps; I have no idea really. Just offering possibilities.
Mentor
P: 14,436
 Quote by Phrak I wrote a 3D, N-body orbital simulator, with displayed output, in basically the same manner you outline. Variables were updated, for the most part, in parallel. The model suffered from orbits that would gradually increase in radius over the iterations.
This is a direct consequence of using the Euler method. The Euler method displays this problem even with two bodies and even without any eccentricity.

 Quote by Redbelly98 I remember an algorithm a former physics teacher gave me. Better than Euler, not as accurate as Runge-Kutta, but pretty straightforward in terms of understanding it. ... There might be a common name for this algorithm; if anybody knows it, feel free to post the info.
The method you described is the basic Verlet method. As you noted, it is better than Euler. An even better approach, and still very simple to understand, is Heun's method.

 Quote by bkelly I found a simulator here: http://kornelix.squarespace.com/galaxy/. It runs well, but not much in the way of features. I haven't reviewed the code closely, but it seems to just use the F=MA type of approach.
The cited simulator updates the velocity and then updates the position using the updated velocity. This is not a very good integrator. A simple improvement is to update the position using the average of the velocities before and after the velocity update. This simple improvement is Heun's method, a second-order Runge-Kutta integrator. Heun's method is considerably better than the midpoint method, another second-order Runge-Kutta integrator.
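A minimal sketch of that improvement, for a single body orbiting a fixed central mass (illustrative units with G·M = 1; the step size and step count are arbitrary choices):

```python
import math

G_M = 1.0
dt = 0.01

def accel(x, y):
    # acceleration toward a central mass at the origin
    r3 = (x * x + y * y) ** 1.5
    return -G_M * x / r3, -G_M * y / r3

def improved_step(x, y, vx, vy):
    """Advance the velocity first, then advance the position using the
    average of the old and new velocities (the improvement described
    above over plain velocity-then-position updating)."""
    ax, ay = accel(x, y)
    vx_new = vx + ax * dt
    vy_new = vy + ay * dt
    x_new = x + 0.5 * (vx + vx_new) * dt
    y_new = y + 0.5 * (vy + vy_new) * dt
    return x_new, y_new, vx_new, vy_new

# Circular orbit: radius 1, circular speed sqrt(G_M / r) = 1
x, y, vx, vy = 1.0, 0.0, 0.0, 1.0
for _ in range(628):          # roughly one orbital period (2*pi / dt)
    x, y, vx, vy = improved_step(x, y, vx, vy)
r = math.hypot(x, y)
```

The circular-orbit setup doubles as the test suggested later in this thread: after one full period the radius should still be close to 1.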

 Quote by bkelly I haven't reviewed the code closely, but it seems to just use the F=MA type of approach.
*All* of the Runge-Kutta techniques, as well as the Verlet method, "just use the F=MA" type of approach. Some just do it a lot better than others.

 Quote by Redbelly98 I didn't check circular orbits, that would have been a good test.
That is always a good test. A simulator that can't do a very good job reproducing a circular orbit isn't worth very much at all. Highly elliptical orbits and hyperbolic orbits are also good tests because the close approaches will stress even good integrators.
Mentor
P: 14,436
 Quote by Phrak The guys at NASA, NSA, and whoever, should have devoted quit a few man-years at this. So they may have found a different method than interating t, and updating V_i and X_i. There's the Hamiltonian, if you've heard of it. It's an alternative, equivalent to Newtonian mechanics.
NSA, as far as I know, doesn't address this issue. OTOH, NASA certainly does. Hamiltonian-based techniques are called symplectic integrators; google that phrase for more info. The advantage is that these techniques inherently do a better job of conserving energy than non-symplectic integrators (e.g., the Euler method and the standard RK4 method). The basic Verlet method, for example, is a symplectic integrator, and it does a much better job than the Euler method even though both evaluate the state derivatives once per propagation interval. I believe JPL uses symplectic integration techniques for developing ephemerides.

There are many, many other orbital propagation techniques. One approach is to use Keplerian elements (e.g., $a,e,i,\Omega,\omega,M$) rather than position and velocity. In the case of two bodies, all Keplerian elements but the mean anomaly are constant and the mean anomaly is a linear function of time. Propagation is easy and exact. Making sense out of the elements (e.g., plot positions on a computer screen) takes a little work, but not too much. Now consider a stellar system which comprises a single star and several planets. The planets' orbits can still be described in terms of Keplerian elements, but these now exhibit time-varying behavior. The variations are small, so perturbation techniques can be put into play. NASA used such an approach during the Apollo era, and still uses it in some arenas today.
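A minimal sketch of the two-body propagation just described: the mean anomaly advances linearly in time, and recovering a position from it requires solving Kepler's equation, here with Newton's method (function names and the starting guess are illustrative choices):

```python
import math

def propagate_mean_anomaly(M0, n, t):
    """Two-body propagation is exact: M(t) = M0 + n*t, where n is the
    mean motion. All other Keplerian elements stay constant."""
    return (M0 + n * t) % (2.0 * math.pi)

def eccentric_anomaly(M, e, tol=1e-12):
    """Solve Kepler's equation M = E - e*sin(E) for E by Newton's method.
    This is the 'little work' needed to turn elements into positions."""
    E = M if e < 0.8 else math.pi   # common starting guess
    for _ in range(50):
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E
```

From the eccentric anomaly, the in-plane position follows directly: x = a(cos E − e), y = a√(1 − e²) sin E.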
P: 4,513
 Quote by D H ... There are many, many other orbital propagation techniques. One approach is to use Keplerian elements (e.g., $a,e,i,\Omega,\omega,M$) rather than position and velocity. In the case of two bodies, all Keplerian elements but the mean anomaly are constant and the mean anomaly is a linear function of time. Propagation is easy and exact. Making sense out of the elements (e.g., plot positions on a computer screen) takes a little work, but not too much. Now consider a stellar system which comprises a single star and several planets. The planets' orbits can still be described in terms of Keplerian elements, but these now exhibit time-varying behavior. The variations are small, so perturbation techniques can be put into play. NASA used such an approach during the Apollo era, and still uses it in some arenas today.
I hadn't considered perturbations from elliptical orbits, but then again, I haven't considered orbital simulations in a long while.

An interesting possibility for avoiding close-approach divergence for any pair of bodies might be to switch to elliptical orbits + perturbations (for the pair only) based upon some nearness condition, then switch back. It may even be possible to survive a direct hit where the mass centers pass through one another.
P: 4,513 In the above I failed to mention the possibility of close encounters on hyperbolic and parabolic trajectories, where these might also be degenerate (head-on collisions).
P: 277 I'm also trying to make an n-body orbital simulator, and it's going quite well. I've recently upgraded from Euler to fourth-order Runge-Kutta. I've been using Cartesian coordinates for all of my work so far, and for the actual calculations I plan to continue, but I need to be able to use Keplerian orbital elements instead of state vectors for initial conditions. I was wondering if anyone out there had experience or direction on how to convert between orbital elements and state vectors, besides drowning myself in geometry. Thanks!
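One way to sketch that conversion, assuming the true anomaly is already in hand: build the position and velocity in the perifocal (orbit-plane) frame, then rotate into the inertial frame through the node, inclination, and argument of periapsis. The function name and argument order below are my own illustrative choices:

```python
import math

def elements_to_state(a, e, i, raan, argp, nu, mu=1.0):
    """Keplerian elements -> Cartesian state vectors (position, velocity).

    a: semi-major axis, e: eccentricity, i: inclination,
    raan: right ascension of ascending node, argp: argument of periapsis,
    nu: true anomaly (all angles in radians), mu: gravitational parameter.
    """
    p = a * (1.0 - e * e)              # semi-latus rectum
    r = p / (1.0 + e * math.cos(nu))   # orbit radius
    # Position and velocity in the perifocal (PQW) frame; z components are 0
    x_p, y_p = r * math.cos(nu), r * math.sin(nu)
    vx_p = -math.sqrt(mu / p) * math.sin(nu)
    vy_p = math.sqrt(mu / p) * (e + math.cos(nu))
    # Rotation PQW -> inertial: Rz(raan) * Rx(i) * Rz(argp); only the
    # first two columns are needed since the PQW z components vanish.
    cO, sO = math.cos(raan), math.sin(raan)
    ci, si = math.cos(i), math.sin(i)
    cw, sw = math.cos(argp), math.sin(argp)
    rows = ((cO * cw - sO * sw * ci, -cO * sw - sO * cw * ci),
            (sO * cw + cO * sw * ci, -sO * sw + cO * cw * ci),
            (sw * si, cw * si))
    pos = tuple(c1 * x_p + c2 * y_p for c1, c2 in rows)
    vel = tuple(c1 * vx_p + c2 * vy_p for c1, c2 in rows)
    return pos, vel
```

If the initial conditions are given with a mean anomaly instead of a true anomaly, Kepler's equation has to be solved first to get from one to the other.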
P: 288 Regardless of what everyone is telling you, your method will work fine. Just be prepared to use very small time steps if you are using Euler's method. Besides the inaccuracies that can occur due to the approximate nature of the solution, the near-collision "explosion" can be solved by simply defining the force piecewise. Calculate the distance between each pair of bodies (this must be done anyway) and see if it's less than some minimum value (this can be as simple or as complicated as you like). Then decide whether or not you want to "do" the force there, and act accordingly. Otherwise, you could use a similar method with conservation of momentum to have them collide (elastically or inelastically) like billiard balls. Or have them lump together, and run until all masses have lumped together. There's a lot of potential for fun and good programming in what you're talking about.

Good graphics can be done in Java, C, C++, C#, Visual Basic, etc. For 3-D, you may even look into DirectX or OpenGL. The programming won't be hard, but there are some things you can do to make it easy on yourself. Look for libraries with functions/objects you can use. I recommend taking the opportunity to explore OO approaches, if you're not already familiar with them. You can make objects for vectors and massy particles, and this will greatly ease the notation. Have fun with it.
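A sketch of the piecewise-force idea, alongside Plummer softening, a common smooth alternative (the constants R_MIN and EPS are arbitrary illustrative choices, to be tuned per simulation):

```python
import math

G = 1.0      # illustrative units
R_MIN = 0.05 # below this separation, cap the force magnitude
EPS = 0.05   # Plummer softening length

def force_piecewise(m1, m2, dx, dy):
    """Force components on body 1 from body 2, defined piecewise as
    described above: inside R_MIN the magnitude stops growing, so a
    near-collision no longer 'explodes'."""
    r = math.hypot(dx, dy)
    if r == 0.0:
        return 0.0, 0.0          # coincident bodies: no defined direction
    f = G * m1 * m2 / max(r, R_MIN) ** 2
    return f * dx / r, f * dy / r

def force_softened(m1, m2, dx, dy):
    """Alternative: Plummer softening, F = G*m1*m2*r / (r^2 + eps^2)^(3/2),
    which is smooth and stays finite as r -> 0."""
    r2 = dx * dx + dy * dy
    denom = (r2 + EPS * EPS) ** 1.5
    return G * m1 * m2 * dx / denom, G * m1 * m2 * dy / denom
```

At separations much larger than R_MIN or EPS, both reduce to plain Newtonian gravity; the price paid for taming close approaches is an inaccurate force only inside that small radius.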
P: 516 May I ask: for a given accuracy of 10^-5, say, and for a given orbit lasting 1 year, which integration method is the fastest?
P: 288 I would suspect that you can't do much better than a fourth-order adaptive-stepsize Runge-Kutta algorithm.
 P: 516 Does that mean repeating with a smaller time interval until all 3 steps of 4th order Runge-Kutta agree within an error margin?
P: 288 I'm not sure precisely what happens, but I'm pretty sure that a timestep is taken with one RK method and then another is taken with a method of one higher order, and if the difference is within the error bars, it goes with it. Otherwise, it tells you what the timestep should be to get your accuracy. It has the benefit of remaining as accurate as you tell it (within reason) and not being *too* slow when the forces are negligibly small. For instance, why use a timestep of h = 1.0e-10 both when the force is large and changing quickly AND when the force is next to nothing and staying put?
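Besides the embedded-pair approach described above, step doubling is a simpler way to get the same adaptive behavior: compare one full RK4 step against two half steps and use the difference as the error estimate. A minimal sketch on a scalar ODE (the tolerance, safety factor, and growth cap are conventional but arbitrary choices):

```python
def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def adaptive_step(f, t, y, h, tol=1e-8):
    """Step-doubling error control: one full step vs. two half steps.
    Rejects and halves h until the difference is within tol, then
    suggests a (capped) larger step for next time."""
    while True:
        y_big = rk4_step(f, t, y, h)
        y_half = rk4_step(f, t + h / 2, rk4_step(f, t, y, h / 2), h / 2)
        err = abs(y_half - y_big)
        if err <= tol:
            # accept the more accurate two-half-step result
            h_next = h * min(2.0, 0.9 * (tol / max(err, 1e-300)) ** 0.2)
            return t + h, y_half, h_next
        h *= 0.5  # reject and retry with a smaller step
```

This is exactly the tradeoff mentioned above: the step size shrinks on its own where the solution changes quickly and grows where nothing is happening.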
 Sci Advisor PF Gold P: 1,542 You should start with Euler's method because it is simple and you understand it. Then, if you like, you can upgrade your method later. I wrote Gravity Simulator: www.gravitysimulator.com , which sounds like what you want to do. It uses Euler's method. Using Euler's method I can:

*Predict moon phases at least 50 years into the future
*Propagate Apophis 22 years into the future and miss its close approach distance to Earth in 2029 by only 200 km (which, btw, is also the amount the higher-order Picard's method misses by, so the inaccuracy seems to have more to do with forces not modeled than with integrator inaccuracies)
*Correctly predict the Newtonian value for the precession of Mercury's perihelion
*Accurately recreate virtually any point-mass Newtonian simulation done by professional astronomers, except for the ones that last billions of years
*...and the list goes on. My web site has more examples.

I'm not an expert on the higher-order routines, but most of the bad things I've heard about Euler's method seem to be overrated, and they seem to disappear with a small enough time step. In the real universe, there are lots of other forces besides Newtonian gravity from point-mass sources: radiation pressure, relativity, non-spherical objects, etc. I'm guessing that the errors caused by ignoring these overwhelm the errors caused by Euler's method. For example, the toughest thing to compute is the orbits of tight moons such as Phobos and Deimos. After 2 years of simulating, even at a small time step, they're not where they are supposed to be. The size, shape, and orientation of their orbits are still fine, but their position on that orbit is off: by a lot for Phobos, and by a little for the more distant Deimos. But this is more likely due to treating Mars as spherical than to numerical errors, so even a higher-order routine will miss this one too. I recently wrote a 2nd-order routine.
It was more accurate than Euler's at time steps less than 64 seconds, but less accurate than Euler at time steps higher than 64 seconds. In the 2nd-order routine, the planets would spiral out. It was explained to me like this: in Euler (1st order), every time you cut your time step in half, you cut your error in half. In 2nd-order routines, every time you cut your time step in half, you cut your error by 4x. But the inverse is true too: every time you double your time step, your error grows by 4x for the 2nd-order routine and by only 2x for Euler. Since most simulations I like to perform require a time step much higher than 64 seconds, I won't find much use for the 2nd-order routine. Next I'd like to try RK4. It will be interesting to compare it to the current Euler routine. Will I like it better, given that it seems like Euler is very underrated? Don't be discouraged from using Euler if you don't understand the higher-order methods. You'll have lots of fun with it. And don't be scared away from 3-D. It's not that much different from 2-D. The Pythagorean theorem works in 3-D too: d = sqrt(x^2 + y^2 + z^2), a = GM/d^2, then v = v + a*dt and r = r + v*dt.
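That 3-D recipe can be sketched in a few lines (illustrative units with G·M = 1; the velocity-first ordering matches the update sequence described earlier in the thread):

```python
import math

G_M = 1.0   # G times central mass, illustrative units
dt = 0.001

def euler_step(x, y, z, vx, vy, vz):
    """Euler in 3-D: d = sqrt(x^2 + y^2 + z^2), a = G*M/d^2 toward the
    center, then v += a*dt and r += v*dt."""
    d = math.sqrt(x * x + y * y + z * z)
    a = -G_M / d**3   # dividing by d^3 makes a*(x,y,z) an inverse-square pull
    vx += a * x * dt
    vy += a * y * dt
    vz += a * z * dt
    return x + vx * dt, y + vy * dt, z + vz * dt, vx, vy, vz
```

Note that updating the position with the already-updated velocity, as here, is the variant that behaves noticeably better than textbook Euler on orbits.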
Mentor
P: 11,985
 Quote by tony873004 You should start with Euler's method because it is simple and you understand it. Then if you like, you can upgrade your method later.
Definitely good advice here. Get something working so you can start having fun with it. Then worry about what is the "best" method.
