Is there such a thing as multiple simulation?

AI Thread Summary
Simulating phenomena across multiple computers simultaneously is feasible, allowing for partial simulations of complex objects, such as a statue, by different systems. This approach can enhance the simulation of intricate real-world scenarios, where a single computer may struggle due to complexity. However, challenges arise in coordinating these simulations, especially when intermediate results must be shared, which can slow down processing. Problems that require interdependent calculations, like heat propagation or matrix convolutions, complicate parallelization efforts. The concept of diakoptics has gained renewed interest in this context, as it addresses the need for effective parallel computing strategies.
FallenApple
We know that a computer can simulate phenomena. Is it possible to simulate the same phenomenon partially on different computers simultaneously?

For example, Computer 1 simulates the front of a statue, Computer 2 simulates the back, and Computer 3 simulates the bird's-eye view. Independently, but as a set, the three computers have nearly simulated the entire statue. I suppose the problem is coordinating this well; since the computers are separate, it may not be feasible to produce a coherent result.
So this artificial statue would exist partially in the programs of three computers, and hence would exist in the set of computers, if that set is considered a legitimate object.

Would this have applications for simulating the real world? After all, the real world is too complex for a single computer to simulate, but many computers can each simulate some aspect of it in such a way that the divided work can be put together to give a better view of the whole.
 
Some problems can be separated in this manner, such as fractal computations, where the iteration for each point is independent. The programmer can distribute the points onto different computers and then merge everything back to make the fractal image.

However, many more problems cannot be easily divided and parallelized in this manner. Intermediate results need to be communicated back and forth between the computers to compute the final answer, which creates roadblocks that slow things down, and those partial results then need to be aggregated to get the final result.

Computing the effects of heat propagating through a medium would be one example; doing a convolution over a matrix spread across multiple computers would be another.
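As a rough sketch of the easy, fully independent case (not from the thread itself), here is roughly what distributing a fractal might look like in Python; worker processes stand in for separate computers, and all the names and constants are illustrative:

```python
# Each point of a Mandelbrot image is independent of every other point,
# so rows can be farmed out to separate workers and simply merged at the end.
from multiprocessing import Pool

WIDTH, HEIGHT, MAX_ITER = 200, 200, 100

def mandelbrot_row(y):
    """Compute one row of escape-time counts; needs no data from other rows."""
    row = []
    for x in range(WIDTH):
        c = complex(-2.0 + 3.0 * x / WIDTH, -1.5 + 3.0 * y / HEIGHT)
        z, count = 0j, 0
        while abs(z) <= 2 and count < MAX_ITER:
            z = z * z + c
            count += 1
        row.append(count)
    return row

if __name__ == "__main__":
    with Pool(4) as pool:                                # four workers, no cross-talk
        image = pool.map(mandelbrot_row, range(HEIGHT))  # merge step: rows back in order
    print(image[HEIGHT // 2][:10])                       # a slice of the middle row
```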
 
jedishrfu said:
Some problems can be separated in this manner, such as fractal computations, where the iteration for each point is independent. The programmer can distribute the points onto different computers and then merge everything back to make the fractal image.

However, many more problems cannot be easily divided and parallelized in this manner. Intermediate results need to be communicated back and forth between the computers to compute the final answer, which creates roadblocks that slow things down, and those partial results then need to be aggregated to get the final result.

Computing the effects of heat propagating through a medium would be one example; doing a convolution over a matrix spread across multiple computers would be another.

Is it because many aspects are not independent? That is, the temperature at a certain point would affect the temperature at another point, and surely this would affect magnetic properties, vibrational properties, etc., which are interconnected in a complex manner with themselves and with the environment. So that is why the other computers need to be updated on each other's statuses; otherwise they get something that is highly divergent.

So it works for fractals because the different parts of the fractal are independent, other than the one singular fact that they all iterate. Because we know where the commonality comes from, we can knock it away by using multiple computers running with just that aspect in common?
 
Yes, when the algorithm needs to factor in neighboring points, it gets more difficult, or even impossible, to parallelize effectively. This is the big conundrum of computer science.
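A minimal sketch of that difficulty, assuming a simple 1D heat equation split across two workers: each half of the rod can only advance one time step after receiving the edge temperature of the other half, so communication has to happen every single step. The two Python lists below are illustrative stand-ins for two separate machines:

```python
# 1D explicit heat diffusion on a rod split between two "computers".
# Every time step, each half needs the neighbouring edge cell of the other
# half (a "ghost" or "halo" value) before it can update its own cells.
ALPHA, STEPS = 0.1, 50          # diffusion factor (stable, < 0.5) and step count

left = [100.0] * 10             # hot half of the rod
right = [0.0] * 10              # cold half of the rod

def step(cells, ghost_left, ghost_right):
    padded = [ghost_left] + cells + [ghost_right]
    return [padded[i] + ALPHA * (padded[i - 1] - 2 * padded[i] + padded[i + 1])
            for i in range(1, len(padded) - 1)]

for _ in range(STEPS):
    # the unavoidable communication step: each side sends its edge cell across
    left_edge, right_edge = left[-1], right[0]
    left = step(left, left[0], right_edge)      # outer end insulated (assumption)
    right = step(right, left_edge, right[-1])   # outer end insulated (assumption)

print(round(left[-1], 2), round(right[0], 2))   # temperatures meeting at the join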
 
The rather old mathematical method of diakoptics has new interest because of the wide availability of parallel computers. If you're interested, read the Wikipedia article.

https://en.wikipedia.org/wiki/Diakoptics
 
There are combat flight simulators that simulate several airplanes, one on each of several computers. As long as they can be synchronized and the required information is made available to all of them over a network, it is possible. The required rate of interaction must be low enough for the network communication to keep up. I have also seen several computers accessing shared memory in a simulation that required higher communication rates than a network would allow.
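A toy sketch of that idea (the class, names, and rates below are made up for illustration, not taken from any real simulator): each node integrates its own aircraft at a high physics rate, while the shared state is exchanged at a lower "network" rate.

```python
# Two "computers", each owning one aircraft. Physics runs every tick, but the
# planes only learn about each other every few ticks, as if over a network.
PHYSICS_HZ, NET_EXCHANGE_EVERY = 60, 6     # 60 Hz physics, state sync at 10 Hz

class AircraftNode:
    def __init__(self, name, x, vx):
        self.name, self.x, self.vx = name, x, vx
        self.last_known_other = None       # possibly stale copy of the other plane

    def step_physics(self, dt):
        self.x += self.vx * dt             # local, high-rate integration

    def receive(self, other_state):
        self.last_known_other = other_state

node_a = AircraftNode("A", 0.0, 250.0)     # positions in m, speeds in m/s (arbitrary)
node_b = AircraftNode("B", 10000.0, -250.0)

for tick in range(120):                    # two seconds of simulated time
    dt = 1.0 / PHYSICS_HZ
    node_a.step_physics(dt)
    node_b.step_physics(dt)
    if tick % NET_EXCHANGE_EVERY == 0:     # the lower-rate "network" exchange
        node_a.receive((node_b.name, node_b.x))
        node_b.receive((node_a.name, node_a.x))

print(node_a.last_known_other, node_b.last_known_other)
```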
 
Clustered computer systems do what you describe by default. If a job can be defined as a separate flow, what @FactChecker describes works really well. Clusters were created to accomplish two things well: well-defined parallelism on common data, and seamless failover. @jedishrfu described the cluster failure point nicely.
 
jim mcnamara said:
Clustered computer systems do what you describe by default. If a job can be defined as a separate flow, what @FactChecker describes works really well. Clusters were created to accomplish two things well: well-defined parallelism on common data, and seamless failover. @jedishrfu described the cluster failure point nicely.
The Beowulf computer clusters with Raspberry Pi are interesting.
 
Yes, I do this all the time. It's called cloud baking. Mostly for physics, light mapping, or advanced rendering.
 