Could computer simulations lead to discovering new laws of the universe?

  • Thread starter: bassplayer142
  • Tags: Simulation, Universe
AI Thread Summary
Computer simulations have the potential to reveal new laws of the universe by modeling existing universal constants and interactions. While creating such simulations would require immense computational resources, they could help identify missing components of our current understanding of the universe. Notable projects like the Hubble Volume and work by researchers such as Renate Loll demonstrate the complexity and scale of these simulations, with some achieving significant breakthroughs in modeling dimensionality. However, the challenge remains in determining which details to include, as any simulation will inherently be an approximation of reality. Ultimately, advancements in computational power may enable more accurate models that could mirror our universe closely.
bassplayer142
From all our study of the natural world we have come up with many universal constants and descriptions of how things interact. Who's to say we couldn't make a computer program with these laws in a space and just let it go, disregarding the fact that the memory and computation speed of the computer would be enormous? Wouldn't you be able to learn from this and play around with it until we discover new things out there? And if the program didn't end up like our universe, then we would know we are missing many parts.
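As a rough illustration of the kind of program meant here (a toy sketch only, assuming nothing beyond Newtonian gravity and invented initial conditions, nothing like a real cosmological code):

Code:
# Toy sketch: a few point masses evolved under Newtonian gravity with a
# leapfrog integrator ("put the laws in and let it go").
import numpy as np

G = 6.674e-11        # gravitational constant, SI units
dt = 3600.0          # time step in seconds
n_steps = 10_000

# Invented initial conditions: 3 bodies, positions (m), velocities (m/s), masses (kg)
pos = np.array([[0.0, 0.0], [1.5e11, 0.0], [0.0, 2.0e11]])
vel = np.array([[0.0, 0.0], [0.0, 2.9e4], [-2.4e4, 0.0]])
mass = np.array([2.0e30, 6.0e24, 5.0e24])

def accelerations(pos, mass):
    """Pairwise Newtonian gravitational acceleration on each body."""
    acc = np.zeros_like(pos)
    for i in range(len(mass)):
        for j in range(len(mass)):
            if i != j:
                r = pos[j] - pos[i]
                acc[i] += G * mass[j] * r / np.linalg.norm(r) ** 3
    return acc

# Kick-drift-kick time stepping: start it off and watch what behaviour emerges.
acc = accelerations(pos, mass)
for _ in range(n_steps):
    vel += 0.5 * dt * acc
    pos += dt * vel
    acc = accelerations(pos, mass)
    vel += 0.5 * dt * acc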
 
Look at this
http://www.physics.lsa.umich.edu/hubble-volume/expert.htm
http://www.physics.lsa.umich.edu/hubble-volume/
==quote==
* Each simulation:
  o employs one billion mass elements and 1024^3 Fourier grid cells
  o generates nearly 0.5 terabytes of raw output (later compressed to about 200 GB)
  o requires roughly 70 hours of CPU on 512 processors (four years of a single processor!)

* Some details of the LCDM model:
  o Ω_m = 0.3, Ω_Λ = 0.7, σ_8 = 0.9, power spectrum from CMBFAST
  o simulated cube of comoving length 3/h gigaparsecs (3000/h Mpc)
  o simulation begun at redshift z = 35
  o force resolution is 0.1/h Mpc
==endquote==
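As a back-of-the-envelope check of what those quoted numbers imply (my own arithmetic, assuming the standard critical density of about 2.775e11 h^2 solar masses per cubic megaparsec, not a figure from the project page), each of the billion mass elements works out to roughly 2e12 solar masses/h:

Code:
# Rough check of the quoted Hubble Volume numbers.
rho_crit = 2.775e11     # critical density in h^2 M_sun / Mpc^3 (assumed standard value)
omega_m = 0.3           # matter density parameter from the quote
box = 3000.0            # comoving box side in Mpc/h
n_particles = 1.0e9     # "one billion mass elements"

total_mass = omega_m * rho_crit * box**3     # total matter in the box, in M_sun/h
particle_mass = total_mass / n_particles

print(f"mass per element ~ {particle_mass:.2e} M_sun/h")
# prints about 2.2e12 M_sun/h: each "particle" is a galaxy-to-group-sized lump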

Note that the formidable SIMON WHITE is involved (Cambridge, UC Berkeley, now director of Max Planck Astrophysics at Garching).
Kick-ass astrophysicist IMHO.

But it is just a simulation with mass points corresponding to galaxies.
It is called the Hubble Volume: roughly a cubic chunk of the universe about 13 billion lightyears on a side at present (smaller earlier).

I think that uses effective large-scale modeling.

If you are interested in microscopic modeling of spacetime on the Planck scale using Monte Carlo simulation of a quantum gravity dynamical model, then
Renate Loll did that with her co-worker Jan Ambjorn at Utrecht in the Netherlands, but her computer resources were teensy compared with what she needed.
http://arxiv.org/abs/hep-th/0509010
The Universe from Scratch
R. Loll, J. Ambjorn, J. Jurkiewicz

Her universes were brief little quantum fluctuation burps. But SHE DID NOT EVEN FORCE THEM TO BE THE RIGHT DIMENSION, AND THEY TURNED OUT TO BE 3 + 1 = 4 DIMENSIONAL. That was a triumph, which occurred in 2005. After many years of many people being frustrated, she succeeded in having the simulated spacetime evolve the right dimensionality of its own accord instead of being told what dimension to be. After all, Nature does this.
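To give a concrete sense of how the dimension can "come out of" a simulation: in that work the dimension is read off from a diffusion process (the spectral dimension) rather than assumed. Below is only a toy version of that diagnostic, run on an ordinary 2D square lattice where the answer is known to be 2; it is not the causal dynamical triangulations code itself:

Code:
# Toy spectral-dimension estimate: random walks on a plain 2D square lattice.
# The return probability falls off as P(t) ~ t^(-d_s/2), so fitting the slope
# of log P versus log t recovers d_s, here close to 2.
import numpy as np

rng = np.random.default_rng(0)
n_walkers = 200_000
max_steps = 200
moves = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])

pos = np.zeros((n_walkers, 2), dtype=int)
returns = np.zeros(max_steps + 1)

for t in range(1, max_steps + 1):
    pos += moves[rng.integers(0, 4, size=n_walkers)]
    returns[t] = np.mean(np.all(pos == 0, axis=1))   # fraction of walkers back at the origin

# On this lattice returns only happen at even steps, so fit over even t.
t_even = np.arange(20, max_steps + 1, 2)
slope = np.polyfit(np.log(t_even), np.log(returns[t_even]), 1)[0]
print(f"estimated spectral dimension ~ {-2 * slope:.2f}")   # comes out near 2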

Dan Christensen has the use of a Beowulf cluster (supercomputer) at the University of Western Ontario, and he has been doing quantum gravity simulations, but so far I think this is way too small to be what you are imagining.

Maybe it isn't possible even in 100 years. I don't know what computer resources it would take to do a really satisfying job of simulating the universe.

If all you want to do is simulate GALAXY FORMATION in a fixed, spatially flat, standardized spacetime, then I think that may have been done. Wallace might know.

Other people may know of other computer simulations of universe(s).
 
bassplayer142 said:
Disregarding the fact that the memory and computation speed of the computer would be enormous.

"Aye there's the Rub" as the Prince of Denmark once said :smile:

Simulations of something always ignore some details of the thing being modeled. The trick is to 'cut the fat', so to speak: include the details that make a big difference and ignore the details that make a small difference. The difficulty is that it is often not clear which details are which!

Consider also that the Universe is in fact a giant computer. It has a bunch of information to begin with (where stuff is), and is computing the effect of evolving that forward under a set of physical laws. This is all we do when we do computer simulations. Therefore, in order to exactly simulate the Universe, we would need to make another Universe and set that running! Anything less than this contains less information than the thing we are trying to model (the Universe) and hence will give an incomplete and approximate answer.

Note that I haven't even opened the can o' worms that is the irreducible randomness of quantum mechanics...
 
For simulations that, like the universe itself, are based on simple rules such as cellular automata, the complexity of what can be simulated grows exponentially with computational power, which is why 21st-century physics belongs to computer science and formal mathematics: within decades, even unsophisticated brute-force simulations on quantum and classical computers will not only accurately model all physical laws but actually be universes equivalent to ours (a minimal sketch of such a rule-based simulation follows the links below).

http://arxiv.org/abs/quant-ph/9904050
http://arxiv.org/abs/quant-ph/0011122
http://arxiv.org/abs/quant-ph/0501135
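For concreteness, here is about the smallest possible example of a 'simple rules' simulation of the kind mentioned above: Wolfram's elementary cellular automaton Rule 110, whose entire update rule fits in one byte yet is known to be Turing complete. It is purely an illustration of rule-based evolution, not a model of physics:

Code:
# Elementary cellular automaton Rule 110: one byte of "laws", arbitrarily rich behaviour.
# Each cell looks at (left, centre, right) and looks up its next state in the bits of 110.
import numpy as np

rule_number = 110
rule = np.array([(rule_number >> i) & 1 for i in range(8)], dtype=np.uint8)

width, generations = 80, 40
state = np.zeros(width, dtype=np.uint8)
state[-1] = 1                        # start from a single live cell

for _ in range(generations):
    print("".join("#" if c else "." for c in state))
    left, right = np.roll(state, 1), np.roll(state, -1)
    state = rule[4 * left + 2 * state + right]   # neighbourhood encodes a value 0..7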

This of course leads to the Simulation Argument:

http://www.simulation-argument.com/simulation.html
 