Particle simulation and parallelism

Discussion Overview

The discussion revolves around the challenges and methodologies of simulating a vast number of particles, specifically in the context of real-time particle simulations in software like Unity3D. Participants explore the feasibility of simulating 500 trillion particles, the implications of parallel computing, and the storage and rendering of simulation data.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested
  • Conceptual clarification

Main Points Raised

  • One participant questions whether a simulation can maintain 60 frames per second while calculating the positions of 500 trillion particles, suggesting that parallel processing might help.
  • Another participant describes a method where simulations run on thousands of computers in parallel, storing data at each time step and generating frames separately, which can then be compiled into a movie.
  • A participant expresses concern about the feasibility of simulating such a large number of particles in real-time and asks for strategies to tackle the problem.
  • One participant challenges the idea of simulating 500 trillion particles in 1/60 of a second, suggesting that the required computations would exceed current capabilities.
  • A later reply references the K supercomputer's work on brain synapses, noting the extensive resources required for even a small fraction of a brain's capacity and raises questions about the scalability of such simulations.

Areas of Agreement / Disagreement

Participants express differing views on the feasibility of simulating 500 trillion particles in real-time, with some suggesting it may be possible through parallel computing, while others argue it is beyond current technological capabilities. The discussion remains unresolved regarding the best approach to tackle the problem.

Contextual Notes

Participants highlight limitations related to computational power and the storage requirements for large-scale simulations. The discussion does not resolve the assumptions about the capabilities of current hardware or the efficiency of proposed methods.

fredreload
So when I use software like Unity3D and click the play button to run a simulation, is it updating each particle at every frame of the animation? Say at 60 frames per second, does it calculate the x and y position of where the particle should go within that time frame and then show it?
Now, if I have more particles than the computer can handle, say 500 trillion particles, each needing its next x and y position calculated, can the computer still run at 60 frames per second?
Would it help if I ran the calculations for these particles in parallel with each other? How would such a mechanism be implemented?
How is a particle simulation like this one carried out?
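The frame-by-frame update the question describes can be sketched as a fixed-timestep loop. This is a minimal illustration in Python/NumPy (Unity itself uses C# and its own particle system); the particle count and values are hypothetical:

```python
import numpy as np

def step(positions, velocities, dt):
    """Advance every particle by one frame's worth of motion (Euler step)."""
    return positions + velocities * dt

# Hypothetical scene: 1,000 particles updated at 60 frames per second.
rng = np.random.default_rng(0)
pos = rng.random((1000, 2))        # x, y positions
vel = rng.random((1000, 2)) - 0.5  # x, y velocities
for frame in range(3):             # a few frames of the animation
    pos = step(pos, vel, 1.0 / 60.0)
    # ...here the engine would draw `pos` to the screen...
```

At 60 FPS the engine has about 16.7 ms per frame to update and draw every particle, which is why the particle count matters so much.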
 
The simulation is run and the data at each time step is stored to disk. This simulation might run on thousands of computers in parallel. After the simulation is run, which might take weeks or more, a separate code is used to generate a frame of the animation at each time step. Then another code stitches together the frames into a movie. You can make many different movies from the same simulation. For example, you might make one movie looking at the particle density, another looking at the particle velocities, and so on.
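The store-then-render pipeline described above can be sketched as follows. The file layout, particle counts, and random initial conditions are illustrative assumptions, not anyone's actual setup:

```python
import numpy as np

def simulate_and_store(n_particles, n_steps, dt, out_prefix):
    """Run a toy simulation, saving every time step to disk for later rendering."""
    rng = np.random.default_rng(1)
    pos = rng.random((n_particles, 3))        # x, y, z positions
    vel = rng.random((n_particles, 3)) - 0.5  # x, y, z velocities
    for t in range(n_steps):
        pos = pos + vel * dt
        np.save(f"{out_prefix}_{t:06d}.npy", pos)  # one snapshot file per step

# A separate pass would later load each snapshot and rasterize it into an
# image, and a tool such as ffmpeg would stitch the images into a movie.
```

In a real run, each of the thousands of machines would write only its own slice of the particles, and the rendering code would read the slices back per time step.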

The data from the simulation can be huge: the (x, y, z) locations of 500 trillion particles would take petabytes of storage per snapshot. However, the frames of the animation are much smaller, since you only have to store the color of each pixel on the screen, which is only megabytes of data. So the movie itself can run on a single computer, just like when you play a movie on your laptop.
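The size gap between a simulation snapshot and a rendered frame can be checked with back-of-envelope arithmetic; this sketch assumes 32-bit floats per coordinate and a 1080p RGB frame:

```python
# Back-of-envelope sizes; 32-bit floats and a 1080p frame are assumptions.
n_particles = 500e12
snapshot_bytes = n_particles * 3 * 4  # x, y, z at 4 bytes each
frame_bytes = 1920 * 1080 * 3         # one byte per colour channel per pixel

print(snapshot_bytes / 1e15)  # 6.0 -> about 6 petabytes per time step
print(frame_bytes / 1e6)      # ~6.2 megabytes per frame
```

A full snapshot is roughly a billion times larger than a frame, which is why the movie plays on a laptop even though the simulation cannot.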
 
Yeah, it needs to be rendered in real time. 500 trillion is a lot; what can I do to start tackling this problem?
 
You want to simulate 500 trillion particles in 1/60 of a second? Try calculating the number of FLOPS this requires. I think it is many orders of magnitude beyond current computing capabilities. Good luck, but I think it is out of reach. Why would you try to do such a thing?
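The suggested FLOPS calculation can be roughed out as follows; the per-particle costs (10 FLOPs if particles are independent, an O(N log N) tree-code estimate if they interact) are assumptions for illustration, not measured figures:

```python
import math

# Rough FLOP-rate estimate for updating 500 trillion particles 60 times/second.
N = 500e12
fps = 60

flops_independent = N * fps * 10           # ~10 FLOPs per non-interacting particle
flops_nbody = N * fps * 20 * math.log2(N)  # interacting particles, O(N log N) tree code

print(f"{flops_independent:.1e} FLOPS")    # 3.0e+17 (hundreds of petaFLOPS)
print(f"{flops_nbody:.1e} FLOPS")          # tens of exaFLOPS
```

Even the no-interaction case sits at supercomputer scale, and any pairwise interaction between particles pushes the requirement orders of magnitude higher.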
 
I saw an article on the K supercomputer and its computation of brain synapses using the NEST software. It took 40 minutes to calculate only 1% of the brain's capacity, so I've been thinking about whether there's a way to reduce the work required. It is true that they are designing new simulation software, but it still takes tremendous resources just to simulate one brain. What if we need to simulate tens or thousands of these brains? That is on the software side; if they could reduce the hardware down to a regular computer, that would be cool too.
 