Programming Efficiency: N-body Problem

In summary, the author is writing a VBA program to numerically solve an n-body gravitational problem and keeps wondering whether it is better to optimize for speed or for memory. He has found that exploiting the symmetry of the pairwise forces can roughly double the speed, but at the cost of increased memory: storing each force so it is only calculated once pushes the memory requirement to the order of n^2. The general advice is to take the best performance possible within the memory the machine physically has available. The author has tested the program on a model of the solar system and the early results look promising.
  • #1
kjohnson
I am currently in the process of writing a program in VBA that will numerically "solve" a given n-body gravitational problem. I'm doing this pretty much just for fun and am by no means a programmer (I had a one-semester course in C++). The question I have is one that keeps coming to mind as I write the code.

When attempting to write an efficient program/code it seems as though you can optimize either for speed or memory. Is it a matter of deciding what is more important or trying to find the balance between the two?

For example, in the n-body problem I set up my code so that, given n bodies, I calculate (n/2)(n-1) forces instead of the usual n^2 to find the forces acting on all the bodies. I am simply taking advantage of the fact that you can think of all the pairwise forces as forming an n x n matrix. This matrix is symmetric in the sense that the force (1,2) between bodies 1 and 2 is equal in magnitude to (2,1), and the diagonal entries (the "forces" (1,1), (2,2), ...) are not valid. I figured that this would optimize memory, though I'm not sure about speed. My n-body code seems longer and more complicated (with many nested loops and if statements) than the more brute-force method of simply having the computer recalculate the other, symmetric side of the n x n matrix. Obviously my method still counts as brute force, though it cuts the number of calculated forces in half.
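In rough outline (this is only a sketch with placeholder names, not the actual code), the idea looks like this:

[code]
' Sketch only: the array names (m, x, y, Fx, Fy) and layout are placeholders.
' Builds the n x n force matrix described above, fills only the upper triangle,
' mirrors it with a sign flip, then sums each row to get the net force per body.
Sub NetForces(n As Long, m() As Double, x() As Double, y() As Double, _
              Fx() As Double, Fy() As Double)
    Const G As Double = 6.674E-11
    Dim i As Long, j As Long
    Dim dx As Double, dy As Double, r2 As Double, f As Double
    Dim Fxm() As Double, Fym() As Double
    ReDim Fxm(1 To n, 1 To n): ReDim Fym(1 To n, 1 To n)   ' order n^2 storage

    ' Upper triangle only: (n/2)(n-1) force evaluations.
    For i = 1 To n - 1
        For j = i + 1 To n
            dx = x(j) - x(i): dy = y(j) - y(i)
            r2 = dx * dx + dy * dy
            f = G * m(i) * m(j) / (r2 * Sqr(r2))           ' G*mi*mj / r^3
            Fxm(i, j) = f * dx: Fym(i, j) = f * dy
            Fxm(j, i) = -f * dx: Fym(j, i) = -f * dy       ' Newton's third law
        Next j
    Next i

    ' Net force on each body = sum over its row (diagonal entries stay zero).
    For i = 1 To n
        Fx(i) = 0#: Fy(i) = 0#
        For j = 1 To n
            Fx(i) = Fx(i) + Fxm(i, j)
            Fy(i) = Fy(i) + Fym(i, j)
        Next j
    Next i
End Sub
[/code]

Note that explicitly storing the mirrored lower triangle is what makes the memory cost grow like n^2.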

If you were to simulate a large number of bodies with the brute-force method, then it seems as though memory would be an issue, but speed as well. I mean, I don't want to wait all day for the results of one simulation. Anyway, if someone could shed a little more light on this, that would be great.
 
  • #2
In your example, symmetry lets you increase the speed 2x at the cost of increased memory requirements.

If you do it the usual way, you compute every force twice, but you don't need much memory (your memory requirements are on the order of n).

If you compute every force only once, you have to store computation results somewhere, maybe in a matrix like you described, in which case your memory requirements become on the order of n^2.

In tasks like this one, you'll probably want to get the best performance possible, taking into account your memory constraints. Your computer probably has 2 or 4 gigabytes of memory. Your program should be able to take control of most of that memory. If you come up with an optimization that doubles the speed and manages to keep memory requirements below that number, that's a no-brainer. If your optimization cuts the number of computations, but raises memory requirements beyond what's physically available, that optimization probably won't make your program faster (because the operating system will use the hard disk as an extension of your system memory, and hard disks are slow.) Production applications may have stronger constraints on memory, since, in the first place, you may not know how much physical memory your potential user has, and, in the second place, the program may run concurrently with others and thus not all memory will be available.

Another aspect that you've already noticed is code simplicity. It does not matter how fast your program runs if it does not give you correct answers. A good idea is to make the initial program as simple as possible and then to maintain a set of tests that you run through periodically as you optimize things, in order to make sure that your answers are still correct.
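For example, one cheap check (sketched here with made-up names, just to illustrate the idea; it is not from the program being discussed) is that the total momentum of an isolated system should barely drift over a run:

[code]
' Hypothetical check, not from the program being discussed: for an isolated
' system the total momentum is conserved, so its drift over a run is a cheap
' indicator that the force and integration code is still behaving.
Function MomentumDrift(n As Long, m() As Double, _
                       vx() As Double, vy() As Double, _
                       px0 As Double, py0 As Double) As Double
    Dim i As Long, px As Double, py As Double
    For i = 1 To n
        px = px + m(i) * vx(i)
        py = py + m(i) * vy(i)
    Next i
    ' Magnitude of the difference between the current total momentum and the
    ' initial total momentum (px0, py0).
    MomentumDrift = Sqr((px - px0) ^ 2 + (py - py0) ^ 2)
End Function
[/code]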
 
  • #3
Yeah, I've been trying to test the code, but this is the first n-body code I wrote. I had some code that would work for two bodies if the mass of one was much greater than that of the other; that way you only have to calculate one change in distance each interval. I could test results against this, but even with two bodies my new code will calculate the combined change in distance between the two bodies (though it should be minimal if m1 >> m2). I have also tried to find problems to test it on, but haven't had much luck online. I plugged in a model of the solar system and did a quick check to see, after say 1000 increments, how much the Sun moved from its initial position at (0,0), since right now I have it set to run in 2-D only. It showed promise in that the Sun had moved to a new position something like 250,000 km out. This should be on the order of one solar radius (695,000 km), which corresponds to the center of mass of the solar system. Anyway, thanks for the advice. I did evolve my code from a much simpler code, and I find myself saving the code under a new name every time I think I am going to make a big change. This way I can always open up older versions that I know were working. Great advice though.
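For reference, the barycenter I'm comparing the Sun's drift against is just the mass-weighted mean position; a small helper (again with placeholder names, not my literal code) would be:

[code]
' Center of mass (barycenter) of the system in 2-D. For the solar-system check
' above, the Sun's displacement should be on the order of its distance to this
' point. Array names are placeholders.
Sub CenterOfMass(n As Long, m() As Double, x() As Double, y() As Double, _
                 ByRef cx As Double, ByRef cy As Double)
    Dim i As Long, mTot As Double
    cx = 0#: cy = 0#: mTot = 0#
    For i = 1 To n
        cx = cx + m(i) * x(i)
        cy = cy + m(i) * y(i)
        mTot = mTot + m(i)
    Next i
    cx = cx / mTot
    cy = cy / mTot
End Sub
[/code]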
 
  • #4
The amount of memory is not nearly so much a concern as the efficiency of the algorithm. A gigabyte will let you perform a 7000 body simulation.
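(Roughly: assuming a full n x n array of 3-component double-precision forces, which is the storage pattern implied here,

[tex]7000^2 \times 3 \times 8\ \text{bytes} \approx 1.2 \times 10^{9}\ \text{bytes},[/tex]

or a bit over a gigabyte; storing only the upper triangle roughly halves that.)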

As noted, one can squeeze a factor of two by computing the forces between pairs only once. Even better, compute [itex](\mathbf r_i - \mathbf r_j)/||\mathbf r_i - \mathbf r_j||^3[/itex] for pairs, and compute acceleration rather than force.
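Written out, this works because the body's own mass cancels from F = ma, so each acceleration can be accumulated directly:

[tex]\mathbf{a}_i = \sum_{j \ne i} \frac{G m_j\,(\mathbf{r}_j - \mathbf{r}_i)}{\|\mathbf{r}_j - \mathbf{r}_i\|^3},[/tex]

and the shared pair term [itex](\mathbf r_i - \mathbf r_j)/\|\mathbf r_i - \mathbf r_j\|^3[/itex] is scaled by [itex]-G m_j[/itex] for body i and by [itex]+G m_i[/itex] for body j.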

However, these are small gains compared to the *huge* gains in performance that can be achieved by using a better integration technique.
 
  • #5
There are n log n algorithms for N-body problems, such as Barnes-Hut. By default I would say this is the preferred choice; (n/2)(n-1) is essentially as bad as n^2. With 7000 bodies this will be ~300 times faster. The memory footprint is higher than for a trivial simulation, but it's generally arithmetic throughput rather than memory that is the main issue with these kinds of problems.

The actual computation of the forces on a given body is only log(n); building the quadtree (for 2-D simulations) or the octree (for 3-D simulations) is what contributes the n part.
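(For context, since it isn't spelled out here: Barnes-Hut treats an entire tree node as a single point mass located at the node's center of mass whenever [itex]s/d < \theta[/itex], where s is the node's width, d is the distance from the body to that center of mass, and [itex]\theta[/itex] is an accuracy parameter typically chosen around 0.5 to 1; otherwise it descends into the node's children. That opening test is what brings the per-body cost down to roughly log(n).)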

If you want to perform non-trivial simulations, VBA won't do much for you. With a modern GPU and C/C++ using Barnes-Hut, you can compute over 1,000,000 body interactions per second, which would easily be a million times faster than your implementation. For overnight renders with billions of bodies, your approach would be pretty close to a billion times slower. Not to do injustice to your confidence :smile:, only to put into perspective just what you can do with the right implementation.
 
  • #6
Thanks for the input, guys. Negatron, my current code is pretty much the basics: it uses Euler integration and, yes, it calculates essentially the n^2 number of forces. I pretty much wanted to just get it working before I implemented a new form of numerical integration or an algorithm like a tree code. I would like to keep the brute-force approach as one method and add something like a tree code as well. This way it will be a user option depending on the number of bodies and the overall accuracy desired, as the brute-force method is still more accurate. Either way, could you please explain why two sets of "identical" code in C++ vs. VBA will run at different speeds? I know it's a plain fact and remember hearing something about it. Is it in the compiling...? Excuse me again if I'm way off; I am no computer science major, but I do find it very interesting.
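For reference, the Euler step I mentioned is along these lines (a sketch using the same placeholder names as the earlier NetForces sketch, not the literal code):

[code]
' One explicit Euler step of size dt: forces are evaluated at the current
' positions, positions advance with the current velocities, then velocities
' advance with the current forces. NetForces is the placeholder routine from
' the earlier sketch; all names here are illustrative.
Sub EulerStep(n As Long, dt As Double, m() As Double, _
              x() As Double, y() As Double, vx() As Double, vy() As Double, _
              Fx() As Double, Fy() As Double)
    Dim i As Long
    Call NetForces(n, m, x, y, Fx, Fy)
    For i = 1 To n
        x(i) = x(i) + dt * vx(i)            ' position uses the old velocity
        y(i) = y(i) + dt * vy(i)
        vx(i) = vx(i) + dt * Fx(i) / m(i)   ' velocity uses forces at old positions
        vy(i) = vy(i) + dt * Fy(i) / m(i)
    Next i
End Sub
[/code]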
 
  • #7
Negatron said:
There are n log n algorithms for N-body problems, such as Barnes-Hut. By default I would say this is the preferred choice; (n/2)(n-1) is essentially as bad as n^2.
The last half of the last sentence is correct: N^2/2 - N/2 is still O(N^2).

Whether something like Barnes-Hut is preferable to the simpler O(N^2) algorithms depends on the nature of the simulation. For a solar-system-type simulation, I suspect a Barnes-Hut algorithm would yield worse performance than the dumb O(N^2) algorithm (along with the obvious loss in accuracy).

For a solar-system-type simulation, the major gains in performance come from using a better integration technique. Something that simulates thousands of particles will most likely use a very simplistic technique such as symplectic Euler or a Verlet-based algorithm. Those are incredibly poor choices for a solar system model.
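As one generic example of a higher-order scheme (not necessarily the method meant here), the classical fourth-order Runge-Kutta update for [itex]\dot{\mathbf y} = f(\mathbf y)[/itex], with [itex]\mathbf y[/itex] collecting all positions and velocities, is

[tex]\mathbf k_1 = f(\mathbf y_n),\quad \mathbf k_2 = f\!\left(\mathbf y_n + \tfrac{h}{2}\mathbf k_1\right),\quad \mathbf k_3 = f\!\left(\mathbf y_n + \tfrac{h}{2}\mathbf k_2\right),\quad \mathbf k_4 = f\!\left(\mathbf y_n + h\,\mathbf k_3\right),[/tex]
[tex]\mathbf y_{n+1} = \mathbf y_n + \tfrac{h}{6}\left(\mathbf k_1 + 2\mathbf k_2 + 2\mathbf k_3 + \mathbf k_4\right).[/tex]

That costs four force evaluations per step, but the higher order lets you take far larger time steps for the same accuracy, which is where the real performance gain comes from; serious solar-system integrators go further still, with higher-order and/or adaptive methods.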

If you want to perform non-trivial simulations, VBA won't do much for you. With a modern GPU and C/C++ using Barnes-Hut, you can compute over 1,000,000 body interactions per second, which would easily be a million times faster than your implementation.
I missed the reference to VBA. Eek! (I'm surprised galaxy-type simulations use C/C++ rather than Fortran. Huge numerical simulations are one of the fields where Fortran still rules the day, as far as I know.)
 
  • #8
kjohnson said:
Either way, could you please explain why two sets of "identical" code in C++ vs. VBA will run at different speeds?

http://shootout.alioth.debian.org/u32/benchmark.php?test=nbody&lang=all&box=1

This benchmarks many different language implementations, in this case on n-body simulations, but you can choose another benchmark. It doesn't have VB, unfortunately, but it does show how dramatically performance can differ using the same algorithm. I can't say what the performance will be like across all the versions of VB, including VBA and .NET, but it can safely be said that in the best case it will be significantly slower. Another problem is the lack of availability of mathematics/linear algebra/tree algorithm/data structure libraries for VB that would handle such a problem efficiently. Writing them yourself is VERY hard.

In your case the biggest problem is the choice of algorithm. The choice of programming language is secondary.

D H said:
Whether something like Barnes-Hut is preferable to the simpler O(N^2) algorithms depends on the nature of the simulation. For a solar-system-type simulation, I suspect a Barnes-Hut algorithm would yield worse performance than the dumb O(N^2) algorithm (along with the obvious loss in accuracy).

Depends what you mean by solar system. If only planetary bodies are considered, then indeed BH makes little sense, but in practice the dumb algorithm grows increasingly impractical. In large-scale simulations, such as galaxies, the loss in accuracy is entirely negligible: the BH vector computed from high-level nodes will be next to identical to the n^2 vector. This is because the center of gravity of particles scattered (randomly or "noisily") in a node converges to the center of the node with increasing n. It might seem that the idea doesn't hold up in practice, but no astronomical simulation I've seen in a long time has used n^2 algorithms on account of their accuracy.

D H said:
I'm surprised galaxy-type simulations use C/C++ rather than Fortran
Both are popular. I would say the primary reason Fortran is still used is that it enforces data-oriented thinking, which is essential for efficient computation. C++ can lead a developer astray into using objects where they don't belong, but it's not inherently prone to being slower than Fortran. With GPU programming APIs, I suspect that C++ will ultimately end Fortran's reign: it will allow efficient computation without the developer necessarily being performance-minded, and the object-oriented element will allow expanding to large applications. All of Fortran's benefits, none of the drawbacks.
 

1. What is the "N-body problem" in programming efficiency?

In this context, the N-body problem refers to the challenge of efficiently simulating the motion of a large number of objects interacting with each other through gravitational forces. The problem is frequently encountered in scientific fields such as astrophysics and molecular dynamics.

2. Why is the N-body problem considered a difficult computational task?

The N-body problem is considered a difficult computational task because the number of calculations required to simulate the interactions between all the objects grows rapidly (quadratically, for direct pairwise summation) as the number of objects increases. This makes it challenging to find efficient algorithms that can handle large numbers of objects in a reasonable amount of time.

3. How do scientists approach solving the N-body problem?

Scientists approach solving the N-body problem by using various numerical methods and algorithms that aim to balance accuracy and efficiency. Some common techniques include Barnes-Hut simulation, Fast Multipole Method, and hierarchical tree algorithms.

4. What are some factors that affect the efficiency of N-body simulations?

There are several factors that can affect the efficiency of N-body simulations. These include the number of objects being simulated, the accuracy of the simulation, the choice of algorithm used, and the hardware and software resources available for the simulation.

5. How does parallel computing help improve the efficiency of N-body simulations?

Parallel computing, which involves using multiple processors to perform calculations simultaneously, can significantly improve the efficiency of N-body simulations. By distributing the computational workload among multiple processors, the simulation can be completed much faster than on a single processor, allowing for larger and more complex simulations to be performed in a reasonable amount of time.
