How to simulate planets' orbit curves around the sun?

Summary:
To simulate planetary orbits around the sun, understanding Kepler's laws and Newton's laws of gravity is essential. The simulation requires calculating the gravitational force, velocity changes, and new positions iteratively, while ensuring compliance with conservation laws of energy and momentum to avoid systematic errors. The choice of numerical methods, such as symplectic multi-step integrators or Runge-Kutta methods, significantly impacts the accuracy of the simulation. It's also important to consider the representation of data, using objects for each celestial body to facilitate dynamic additions or removals during the simulation. Perfect simulations are unattainable for n-body problems with more than two bodies, necessitating approximations for realistic modeling.
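As a concrete starting point, here is a minimal 2D sketch (not from the thread) of one planet orbiting the Sun using a symplectic, semi-implicit Euler step; the constants, step size, and initial conditions are illustrative assumptions.

```cpp
#include <cmath>
#include <cstdio>

// Illustrative constants (SI units).
const double G    = 6.674e-11;   // gravitational constant
const double Msun = 1.989e30;    // solar mass, kg

struct Body {
    double x, y;    // position, m
    double vx, vy;  // velocity, m/s
};

int main() {
    // Earth-like starting conditions: 1 AU out, ~29.8 km/s tangential velocity.
    Body planet{1.496e11, 0.0, 0.0, 2.978e4};
    const double dt = 3600.0;  // one-hour step, s

    for (long step = 0; step < 24L * 365; ++step) {   // roughly one year
        double r2 = planet.x * planet.x + planet.y * planet.y;
        double r  = std::sqrt(r2);
        double a  = -G * Msun / r2;   // acceleration magnitude, directed toward the Sun

        // Semi-implicit (symplectic) Euler: update velocity first, then position.
        planet.vx += a * (planet.x / r) * dt;
        planet.vy += a * (planet.y / r) * dt;
        planet.x  += planet.vx * dt;
        planet.y  += planet.vy * dt;
    }
    std::printf("after ~1 year: x = %.3e m, y = %.3e m\n", planet.x, planet.y);
    return 0;
}
```

Swapping the update order (position before velocity) turns this into explicit Euler, whose energy drifts; the velocity-first ordering is what keeps the orbit bounded over long runs.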
  • #31
D H said:
This is only true if you are computing with infinite precision, but computing with infinite precision would require an infinite amount of time to make even the smallest of steps.
I'm wondering whether it is possible to get (almost) infinite precision by adding some sort of correction factor to the result of the finite-precision calculations?
 
  • #32
kjensen said:
Again, thanks a lot for your deep insights. I agree with what you are saying, but it is actually possible, in C/C++ for example, to extend to arbitrary precision, as you can see here:

http://en.wikipedia.org/wiki/Arbitrary-precision_arithmetic
Sure, you can extend to some greater degree of precision than nominal, but you cannot extend to infinite precision. That would require infinite memory and an infinitely fast processor. For any representation of the reals on a digital computer there will always exist a number δ>0 such that 1+ε=1 for all |ε|<δ.
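To make the machine-epsilon point concrete, here is a small C++ sketch (an illustration, not code from the thread) showing where IEEE-754 doubles stop distinguishing 1 + ε from 1:

```cpp
#include <cstdio>
#include <limits>

int main() {
    // Machine epsilon for doubles: the smallest eps with 1.0 + eps != 1.0.
    double eps  = std::numeric_limits<double>::epsilon();  // about 2.22e-16
    double tiny = eps / 4.0;                                // well below epsilon

    std::printf("epsilon         = %.17g\n", eps);
    std::printf("1 + epsilon     = %.17g\n", 1.0 + eps);    // differs from 1
    std::printf("1 + epsilon / 4 = %.17g\n", 1.0 + tiny);   // prints exactly 1
    return 0;
}
```

For IEEE-754 doubles that δ is on the order of 10⁻¹⁶; extended precision only pushes the limit further out rather than removing it.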

My experience is that if you use the newest processors and compile, for example, Linux for your specific processor (or processor cluster), then you can still achieve good performance at very high precision.
My experience is otherwise. Extended precision arithmetic is very expensive computationally, typically 10 times slower (very optimistic) to 1000 times slower (or worse!) than using native floating point. This is an expense that the typical application of orbit propagation programs cannot endure. Besides, there is little need for extended precision arithmetic here. Sophisticated integration techniques can achieve a relative precision of 10⁻¹⁴ for a long span of time using native doubles; simpler but still good techniques can achieve a relative precision of about 10⁻¹² (but only for a shorter span of time). At 60 AU, these levels of precision correspond to 9 centimeters and 9 meters respectively.

Very few applications in physics need extended precision arithmetic for the simple reason that most physical measurements aren't good to anywhere close to 16 decimal digits of accuracy.

Also, if you write the calculation engine in assembler using parallel-processing techniques, then it is possible on new processors to get very good performance.
Assembly? Not usually, especially not for sophisticated numerical integration techniques. A good compiler will typically do a better job. Parallel computing? Numerical solution of the N-body gravitational problem as applied to the solar system is a bit tough to parallelize. Parallel algorithms work quite nicely for modeling a bunch of stars where behavior rather than accuracy is what is important. Those galactic simulations typically use simple techniques such as leapfrog and involve a huge number of interacting bodies. A solar system simulation involves a small number of interacting bodies, and accuracy takes on greater importance. Variable step-size, variable-order Adams methods are a bit tough to parallelize. These factors make it much easier to write a highly parallel solver for a galactic simulation than for the solar system.
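For reference, a minimal kick-drift-kick leapfrog of the kind used in those galactic simulations might look like the sketch below (an illustration in G = 1 units, not code from the thread):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Body { double m, x, y, vx, vy, ax, ay; };

// Pairwise Newtonian point-mass accelerations (G = 1 units for brevity).
void accelerations(std::vector<Body>& b) {
    for (auto& p : b) { p.ax = 0.0; p.ay = 0.0; }
    for (std::size_t i = 0; i < b.size(); ++i)
        for (std::size_t j = i + 1; j < b.size(); ++j) {
            double dx = b[j].x - b[i].x, dy = b[j].y - b[i].y;
            double r2 = dx * dx + dy * dy;
            double inv_r3 = 1.0 / (r2 * std::sqrt(r2));
            b[i].ax += b[j].m * dx * inv_r3;  b[i].ay += b[j].m * dy * inv_r3;
            b[j].ax -= b[i].m * dx * inv_r3;  b[j].ay -= b[i].m * dy * inv_r3;
        }
}

// One kick-drift-kick leapfrog step of size dt.
void leapfrog_step(std::vector<Body>& b, double dt) {
    accelerations(b);
    for (auto& p : b) { p.vx += 0.5 * dt * p.ax;  p.vy += 0.5 * dt * p.ay; }  // half kick
    for (auto& p : b) { p.x  += dt * p.vx;        p.y  += dt * p.vy;       }  // drift
    accelerations(b);
    for (auto& p : b) { p.vx += 0.5 * dt * p.ax;  p.vy += 0.5 * dt * p.ay; }  // half kick
}
```

Calling leapfrog_step repeatedly on a vector of bodies evolves the system; a real code would cache the accelerations from the end of one step for the start of the next instead of recomputing them twice per step.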

The people who do go the parallel computing route inevitably do so using native floating point arithmetic. Using extended precision arithmetic would defeat the purpose. A somewhat recent development is to perform those parallel computations on a computer's graphics processor -- using native floating point arithmetic, of course.
 
  • #33
D H said:
My experience is otherwise. Extended precision arithmetic is very expensive computationally, typically 10 times slower (very optimistic) to 1000 times slower (or worse!) than using native floating point.
But according to the benchmarks, this GNU library can perform arbitrary-precision arithmetic pretty fast:

http://gmplib.org/
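For anyone curious, a minimal sketch of what GMP's C++ interface (gmpxx) looks like in practice is below; the chosen precision and values are just illustrative:

```cpp
#include <gmpxx.h>
#include <iostream>

// Build with something like: g++ gmp_demo.cpp -lgmpxx -lgmp
int main() {
    mpf_set_default_prec(256);          // 256-bit mantissas vs. a double's 53 bits

    mpf_class a = 1, b = 3;
    mpf_class third = a / b;            // 1/3 at 256-bit precision
    mpf_class residual = third * b - a; // round-trip error

    std::cout.precision(60);
    std::cout << "1/3      = " << third    << '\n';
    std::cout << "residual = " << residual << '\n';  // about 1e-77 or smaller, vs ~1e-16 for doubles
    return 0;
}
```

Every mpf_class operation goes through library calls rather than the hardware floating-point unit, which is where the slowdown D H describes comes from.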

Also I think that a really good assembly programmer can beat any compiler by far, or at least write code that performs just as fast, especially when using multiple cores in parallel clusters (but I admit that it would be very tedious to program such algorithms).

Also, to get extreme speed, VHDL hardware-programmed algorithms can be used (extremely fast performance if asynchronous FPGAs are used)!

But I admit that I easily get carried away when it comes to computers, and for my first experiment with one planet orbiting the sun in 2D it is probably not necessary to have multi-core CPUs connected in clusters and backed by asynchronous FPGA calculation engines. But sometimes it is nice to dream :-) Maybe later, if I extend my program to simulate the whole universe, it might come in handy. Anyway, I appreciate your deep and profound replies, and you obviously know a lot more about physics simulations than I do.
 
  • #34
D H said:
Parallel computing? Numerical solution of the N-body gravitational problem as applied to the solar system is a bit tough to parallelize.

Most of the time is spent calculating the accelerations, and this is not that difficult to parallelize. For an n-body system there are n·(n-1)/2 interactions. With k processors, just assign n·(n-1)/(2·k) of them to each processor. Simultaneous memory access can be avoided by mirroring the position and mass data k times. It might also be useful to parallelize parts of the numerical integrator, but everything else does not need to be optimized.
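A rough std::thread sketch of that splitting is below (an illustration in G = 1 units, not from the thread); the per-thread accumulators play the role of the mirrored data:

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <functional>
#include <thread>
#include <vector>

struct Vec2 { double x = 0.0, y = 0.0; };

// Accumulate accelerations for pair indices [first, last) into 'out'.
// Pairs are enumerated as (i, j) with i < j; G = 1 units for brevity.
void accel_range(const std::vector<double>& m, const std::vector<Vec2>& pos,
                 std::size_t first, std::size_t last, std::vector<Vec2>& out) {
    std::size_t n = pos.size(), p = 0;
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = i + 1; j < n; ++j, ++p) {
            if (p < first || p >= last) continue;   // not this thread's pair
            double dx = pos[j].x - pos[i].x, dy = pos[j].y - pos[i].y;
            double r2 = dx * dx + dy * dy;
            double inv_r3 = 1.0 / (r2 * std::sqrt(r2));
            out[i].x += m[j] * dx * inv_r3;  out[i].y += m[j] * dy * inv_r3;
            out[j].x -= m[i] * dx * inv_r3;  out[j].y -= m[i] * dy * inv_r3;
        }
}

// Split the n*(n-1)/2 pair interactions over k worker threads (k >= 1).
std::vector<Vec2> accelerations(const std::vector<double>& m,
                                const std::vector<Vec2>& pos, unsigned k) {
    std::size_t n = pos.size(), pairs = n * (n - 1) / 2;
    std::size_t chunk = (pairs + k - 1) / k;

    // One private accumulator per thread avoids simultaneous writes.
    std::vector<std::vector<Vec2>> partial(k, std::vector<Vec2>(n));
    std::vector<std::thread> workers;
    for (unsigned t = 0; t < k; ++t)
        workers.emplace_back(accel_range, std::cref(m), std::cref(pos),
                             t * chunk, std::min(pairs, (t + 1) * chunk),
                             std::ref(partial[t]));
    for (auto& w : workers) w.join();

    std::vector<Vec2> acc(n);                      // reduce the partial sums
    for (unsigned t = 0; t < k; ++t)
        for (std::size_t i = 0; i < n; ++i) {
            acc[i].x += partial[t][i].x;
            acc[i].y += partial[t][i].y;
        }
    return acc;
}
```

The inner scan over all pair indices keeps the example short; a real implementation would map each thread's index range directly to its (i, j) pairs instead of walking the pairs it skips.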
 
  • #35
kjensen said:
But according to the benchmarks, this GNU library can perform arbitrary-precision arithmetic pretty fast:

http://gmplib.org/
Pretty fast compared to other bignum packages, but still very slow compared to using native floating point. The native floating point calculations are done on a separate processor that is specially architected to perform floating point calculations.

Also I think that a really good assembly programmer can beat any compiler by far, especially when using multiple cores in parallel clusters (but I admit that it would be very tedious to program such algorithms).
Assembly programming is a niche field today, mostly relegated to the area of firmware. To quote John Moore (of Moore's law fame), "He who hasn't hacked assembly language as a youth has no heart. He who does as an adult has no brain."

Assembly is pretty much dead in the field of scientific programming, and has been for a couple of decades. It made a very brief comeback when people first started using GPUs for scientific computing, but even that pretty much ended when compiler writers added those targets to their toolsets. Assembly code isn't portable from one machine to another. Finely-tuned assembly isn't even portable across machines with the same underlying machine language; the fine-tunings that make a function run so fast on machine X make it run slowly on machine Y. Lack of portability is one reason people stay away from assembly nowadays. An even bigger problem: who will maintain the code? Assembly is such a niche skill, and the job market for it is rather small. Finding someone who can do math, physics, and general purpose computing well is hard enough. Finding someone who can do all that and can also write assembly well: it's easier to find one needle randomly buried in one of a very large number of haystacks.
 
  • #36
DrStupid said:
Most of the time is spent calculating the accelerations, and this is not that difficult to parallelize. For an n-body system there are n·(n-1)/2 interactions. With k processors, just assign n·(n-1)/(2·k) of them to each processor. Simultaneous memory access can be avoided by mirroring the position and mass data k times. It might also be useful to parallelize parts of the numerical integrator, but everything else does not need to be optimized.
It is very hard to parallelize. Derivative computations can be expensive (e.g., a plate model for aerodynamic drag, non-spherical gravity, a throttleable rocket with a complex buildup/trailoff, a stiff set of robotic arm links, ...), and in those cases, yes, the derivative calculations vastly overwhelm the integration. Assuming simple Newtonian gravity calculations for spherical mass distributions, the problem at hand has exceedingly simple derivative calculations. The overhead of coarse-grain parallelism (e.g., threads) is going to swamp any gains from using threads -- particularly so if one is using a sophisticated integration technique.

Fine-grain parallelism is a different story. It is possible to farm out the entire integration process to a GPU. It's a pain in the rear and it's not very portable, but it is doable.
 
  • #37
D H said:
...
Assembly programming is a niche field today, mostly relegated to the area of firmware. To quote John Moore (of Moore's law fame), "He who hasn't hacked assembly language as a youth has no heart. He who does as an adult has no brain."

Assembly is pretty much dead in the field of scientific programming, and has been for a couple of decades ...
I agree that assembly language can be very tedious and not very portable, but it can also be a lot of fun. Maybe I have a bizarre way of having fun! And if Moore is right, then I should have a heart but no brain ... That is something to think about :-)
 
