# Particle simulation and parallelism

When I use software like Unity3D and click the Play button to run a simulation, is it updating each particle at every frame of the animation? Say at 60 frames per second, does it calculate the x and y position of where each particle should go within that time step and then display it?
Now, if I have more particles than the computer can handle, say 500 trillion particles, each needing its next x and y position calculated, can the computer still run at 60 frames per second?
Would it help to run the calculations for these particles in parallel with each other? How would such a mechanism be implemented?
How is a particle simulation like this one carried out?
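In broad strokes, a real-time engine advances every particle once per frame by integrating its velocity over the frame's time step. A minimal sketch in Python with NumPy, using semi-implicit Euler integration; the function name `step` is mine, not Unity's API. Because the update is vectorized, the per-particle arithmetic already runs data-parallel across SIMD lanes and cores, which is the basic mechanism behind parallel particle simulation:

```python
import numpy as np

def step(pos, vel, acc, dt):
    """Advance all particles by one frame (semi-implicit Euler).
    pos, vel, acc: (N, 2) arrays of x/y positions, velocities, accelerations.
    dt: frame time in seconds (1/60 for a 60 fps simulation).
    """
    vel = vel + acc * dt          # update velocities from forces
    pos = pos + vel * dt          # update positions from velocities
    return pos, vel

# Example: 1000 particles falling under gravity at 60 fps.
np.random.seed(0)
n, dt = 1000, 1.0 / 60.0
pos = np.zeros((n, 2))
vel = np.random.randn(n, 2)
acc = np.tile([0.0, -9.81], (n, 1))  # constant downward gravity

for frame in range(60):              # simulate one second of animation
    pos, vel = step(pos, vel, acc, dt)
```

The same structure scales up by swapping NumPy for a GPU array library or by sharding the particle arrays across machines, since each particle's update here is independent of the others.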


#### phyzguy

The simulation is run and the data at each time step is stored to disk. This simulation might run on thousands of computers in parallel. After the simulation is run, which might take weeks or more, a separate code is used to generate a frame of the animation at each time step. Then another code stitches together the frames into a movie. You can make many different movies from the same simulation. For example, you might make one movie looking at the particle density, another looking at the particle velocities, and so on.
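That decoupling can be sketched as two separate passes: a simulation pass that writes one snapshot per time step, and a later rendering pass that turns each stored snapshot into an image. A toy Python sketch, where the file layout and function names are illustrative, not from any particular production code:

```python
import os
import numpy as np

def simulate(n_particles, n_steps, dt, outdir="snapshots"):
    """Pass 1: run the simulation, storing every time step to disk."""
    os.makedirs(outdir, exist_ok=True)
    pos = np.random.rand(n_particles, 3)          # start inside a unit box
    vel = np.random.randn(n_particles, 3) * 0.01  # small random velocities
    for t in range(n_steps):
        pos = pos + vel * dt
        np.save(f"{outdir}/step_{t:06d}.npy", pos)  # one snapshot per step

def render_density(snapshot, resolution=256):
    """Pass 2: turn one stored snapshot into a 2D density image (one frame)."""
    pos = np.load(snapshot)
    frame, _, _ = np.histogram2d(pos[:, 0], pos[:, 1],
                                 bins=resolution, range=[[0, 1], [0, 1]])
    return frame  # pixel grid; a third pass stitches such frames into a movie
```

A second rendering function reading the same snapshots could plot velocities instead of density, which is how many different movies come from one simulation run.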

The data from the simulation might be huge; for example, the (x,y,z) locations of 500 trillion particles would take petabytes of storage per time step. However, the frames of the animation are much smaller, since you only have to store the color of each pixel on the screen, which is only megabytes of data. So the movie itself can run on a single computer, just like when you play a movie on your laptop.
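Working through the arithmetic, assuming single-precision floats and only positions stored (the exact figures depend on precision and on what else is saved per particle):

```python
n_particles = 500e12              # 500 trillion particles
bytes_per_particle = 3 * 4        # x, y, z as 4-byte single-precision floats
snapshot_bytes = n_particles * bytes_per_particle
print(snapshot_bytes / 1e15)      # 6.0 -> about 6 petabytes per time step

# One uncompressed 1080p frame of the finished movie, by contrast:
frame_bytes = 1920 * 1080 * 3     # RGB, one byte per channel
print(frame_bytes / 1e6)          # about 6.2 megabytes
```

So a single stored time step outweighs a rendered frame by roughly nine orders of magnitude, which is why the movie plays on a laptop while the simulation needs a cluster.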

Yes, it needs to be rendered in real time. 500 trillion is a lot; what can I do to start tackling this problem?

#### phyzguy

You want to simulate 500 trillion particles in 1/60 of a second? Try calculating the number of FLOPS this requires. I think it is many orders of magnitude beyond current computing capabilities. Good luck, but I think it is out of reach. Why would you try to do such a thing?
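A rough back-of-the-envelope version of that calculation, assuming a very generous minimum of ~10 floating-point operations per particle per update and no particle-particle interactions (pairwise forces scale as N², which would make this far worse):

```python
n_particles = 500e12       # 500 trillion particles
fps = 60                   # one update per particle per frame
flops_per_update = 10      # optimistic: integrate position/velocity only

required = n_particles * fps * flops_per_update
print(required)            # 3e17 FLOPS, i.e. 300 petaFLOPS sustained
```

That is for the most trivial possible per-particle update, sustained every second, before counting memory bandwidth; just streaming the particle state through the processor each frame is itself a petabytes-per-second problem.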

I saw an article on the K supercomputer and its simulation of brain synapses using the NEST software. It took 40 minutes to simulate the activity of only 1% of the brain's network. So I've been wondering whether there's a way to reduce the work required. It's true that they are designing new simulation software, but it still takes tremendous resources just to simulate one brain. What if we need to simulate tens of thousands of these brains? That's the software side; if the hardware could be reduced to a regular computer, that would be great too.
