
Universe as Computer Sim. - Practical Considerations

  1. Aug 9, 2008 #1
    I don't think there's a thread directly on point here but if so my apologies in advance.

    I'm an ex-computer scientist and so I thought it'd be interesting to discuss what would go into a computer simulation of the universe.

    More specifically, I'm wondering if the things about quantum mechanics that weird us out - wave/particle duality, entanglement, etc. - might be explained as glitches or bugs in the simulation.

    Let's say we're simulating the emission of a photon and the odds of it interacting with an electron 1 meter away. Are there practical reasons we can think of to represent the path of the photon as a wavefunction rather than a random, pre-determined vector? In other words is it easier to compute a probability of a particular photon and electron interacting by using their respective wavefunctions than by deterministically tracing their paths and continuously calculating whether they'll come close enough to interact?

    I have to imagine the answer is yes but I wonder if anyone else has done a more rigorous analysis.
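    The trade-off being asked about can at least be made concrete in a toy 1-D model (entirely my own construction, not anything from the thread): integrating |psi|^2 over a region gives a detection probability in one pass over a grid, while a sampled-path approach needs many samples, with error shrinking only as 1/sqrt(N).

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy model: a 1-D Gaussian wave packet, and the probability of
    # finding the particle at x > 1.
    x = np.linspace(-5, 5, 10_001)
    dx = x[1] - x[0]
    psi = np.exp(-x**2 / 2) / np.pi**0.25            # normalized Gaussian
    prob_wavefn = np.sum(np.abs(psi[x > 1])**2) * dx  # one integral over the grid

    # "Pre-determined path" approach: sample many positions drawn from the
    # same statistics and count hits. Matching the integral's precision
    # costs far more work, since the error falls only as 1/sqrt(N).
    samples = rng.normal(0.0, 1.0 / np.sqrt(2), size=200_000)  # |psi|^2 is N(0, 1/2)
    prob_paths = np.mean(samples > 1)

    print(prob_wavefn, prob_paths)  # both ≈ 0.0786
    ```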
  3. Aug 9, 2008 #2


    User Avatar

    Staff: Mentor

    Since the theory is exquisitely well proven, it is very unlikely to be a bug.
  4. Aug 9, 2008 #3


    User Avatar
    Gold Member

    It seems to me that the worst the computer can be accused of is merely that it's presenting a sim that does not have internally consistent behaviour.

    The computer that runs the simulation must, by definition, be able to manipulate bits that, within the sim, appear to be spatially isolated - that is to say, the computer could, if it so desired, make two electrons on opposite sides of its sandbox-sim-universe behave as if connected. It has to have this capability if it is to function at all.

    The behaviour of the sim is nothing more than a set of rules that the computer follows (e.g. make sure nothing appears to go faster than speed x). But no one said that computers can't be programmed to ignore rules.

    For some reason the programmers are choosing to simulate a universe where, in most cases, nothing can move faster than c. But in certain circumstances - such as when the computer is simulating entangled electrons - it has been programmed to not enforce this rule in the sim.

    It's not a "bug" any more than programming a computer to count up to 100 while skipping all numbers that have a 3 in them is a "bug". It's more like a logic flaw in the design of the program.
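    The "rules the computer follows, except when told not to" picture can be sketched in a few lines (a hypothetical toy; the function name and the flag are my invention): the speed limit is just a clamp, and nothing stops the program from skipping it for flagged updates.

    ```python
    C = 1.0  # the sim's speed limit

    def step_velocity(v, entangled=False):
        """Return a particle's updated speed under the sim's speed-limit rule.

        The rule 'nothing moves faster than c' is just a clamp the program
        applies; the programmer is free to skip it for flagged updates.
        """
        if entangled:
            return v          # rule deliberately not enforced
        return min(v, C)      # ordinary matter: clamp to c

    print(step_velocity(3.0))                  # 1.0
    print(step_velocity(3.0, entangled=True))  # 3.0
    ```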
    Last edited: Aug 9, 2008
  5. Aug 9, 2008 #4
    The theory is too complicated and rich with structure to be analogous to any kind of computer bug I know of. At best it would be like including the wrong library, so that one highly structured behavior is replaced by another, totally different one.

    For reasons of technical correctness, let's talk about an electron instead of a photon. Either way, we absolutely cannot do away with the wave function; it is a necessary part of the description. A random path alone could never suffice. I will try to explain with another analogy:

    Suppose a sociology survey interviews people and determines that heterosexual males have an average of 7 sexual partners in their lifetime, and that heterosexual females have an average of 3 sexual partners (the gap in the numbers qualitatively agrees with real studies of this kind). Notice that there is no distribution of sexual encounters that could possibly lead to this result: every time a woman gains a new partner, a man gains a new partner too, so the averages should be exactly equal. The only explanation is that the men are exaggerating and the women are under-reporting; i.e. these averages are impossible, and the gap comes from lying.
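    That counting argument can be checked numerically (a toy simulation of my own, assuming equal-sized populations, as real surveys roughly have): however the partnerships are distributed, the two totals, and hence the two averages, come out identical.

    ```python
    import random

    random.seed(1)
    n_men, n_women = 1000, 1000
    partners_m = [0] * n_men
    partners_w = [0] * n_women

    # Each new partnership increments exactly one man's count AND one
    # woman's count, so the two totals are forced to match.
    for _ in range(5000):
        partners_m[random.randrange(n_men)] += 1
        partners_w[random.randrange(n_women)] += 1

    avg_m = sum(partners_m) / n_men
    avg_w = sum(partners_w) / n_women
    print(avg_m, avg_w)  # 5.0 5.0 -- a 7-vs-3 gap is arithmetically impossible
    ```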

    The analogous results in quantum mechanics involve measurements that are incompatible with every possible classical distribution of paths. It started with an idea by Einstein, Podolsky and Rosen, who in the 1930s created the so-called EPR entanglement paradox with the intention of discrediting quantum mechanics, on the grounds that it made predictions about entanglement that contradicted common sense.

    The next development occurred when John Bell published an abstract mathematical argument that showed how in certain experiments the results of QM would differ from any possible 'local hidden variables theory' i.e. any theory that obeys special relativity and assigns properties like random paths to the particles.

    The next development occurred in the 1980s, when Alain Aspect finally carried out an experiment to settle the disagreement, and the results were exactly as predicted by QM, contradicting Einstein's intuition and ruling out local hidden variable theories. Since then there have been numerous other experiments supporting this conclusion, like the GHZ experiment.
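    Bell's argument is simple enough to verify by brute force (a sketch of the standard CHSH combination, not code from any of these experiments): in a local hidden variable theory, each of Alice's two possible answers A, A' and Bob's B, B' is a predetermined value in {+1, -1}, and enumerating every assignment shows the CHSH combination can never exceed 2, while QM predicts 2*sqrt(2).

    ```python
    from itertools import product
    import math

    # For any fixed hidden variable, the four outcomes are just signs.
    # Enumerate every deterministic local assignment and take the best.
    best = max(abs(A*B + A*Bp + Ap*B - Ap*Bp)
               for A, Ap, B, Bp in product((-1, 1), repeat=4))

    print(best)               # 2 -- Bell's bound for any local theory
    print(2 * math.sqrt(2))   # ≈ 2.828 -- what QM predicts (and experiments find)
    ```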

    No, wave functions are not used for the sake of convenience but because they are the only description we have that works; they work fantastically well in all known experiments, and the behaviors of photons and electrons are incompatible with any description in terms of definite paths, as long as special relativity is valid (and it is our best-tested theory).

    I hope you can see that the answer is basically 'no', although I should point out that there is such a thing as Bohmian mechanics, first proposed by de Broglie and later refined by David Bohm, which assigns a deterministic path to quantum particles such as electrons. The trade-off is that the wave function is still necessary: it guides the particle as a so-called 'pilot wave.' The true problem is that this pilot wave can instantaneously affect the particle from any point in space, so the theory is incompatible with special relativity, and so is not given much attention.
  6. Aug 9, 2008 #5
    I think my point was not clear.

    We all know how QM works and how the simulation would "have to" be programmed to some extent in order to replicate QM's results. What I'm trying to get at is *why* would someone simulating *a* universe choose the rules that nature has blessed us with? Could QM weirdness be a by-product of the method that was chosen to program the universe, and, if so, why might that method have been chosen? Can we identify CS-related advantages to doing it this way?
  7. Aug 10, 2008 #6
    Thank you for clarifying. One way to look at this is by comparing the capabilities of classical computers (which are all equivalent to finite-memory Turing machines) to the capabilities of quantum computers. If it turns out that P ≠ NP, then classical machines would not suffice if the universe is intended to handle NP tasks, but quantum computers could provide a solution.
  8. Aug 11, 2008 #7
    Interesting approach to attacking QM phenomena. Identifying something like that would require a very very good understanding of QM though, as well as a proficiency in CS.

    But for someone in the know, I think what you are proposing is a very good approach to "smelling out" underlying things that might not have been considered.

  9. Aug 11, 2008 #8
    Exactly, that was my thinking. :)

    Another area that I think would be extremely difficult to simulate is relativity. "Background independence" is something that is talked about a lot, the Machian idea that there is no space without matter to describe it.

    The computer simulation, it seems, would consist primarily of a "list" of "things", rather than a "grid" of "space." "Space" would be a by-product of the relative distances between the "things", and not an allocated memory set into which "things" are "put".

    Either way it becomes very complicated, though, because of relativity. How can such a simulation possibly run without the notion of absolute time? Would we not need a fifth dimension in order for the universe to evolve?
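    The "list of things" versus "grid of space" distinction can be sketched directly (the names, sizes, and coordinates are my own illustration): in the second representation, "space" is nothing but distances computed between the things on demand.

    ```python
    import math

    # 1. "Grid of space": memory allocated for every point, mostly empty.
    grid = [[None for _ in range(100)] for _ in range(100)]  # O(volume) memory
    grid[10][20] = "electron"

    # 2. "List of things": only the particles exist; "space" is a
    # by-product of the relative distances between them.
    things = [("electron", (10.0, 20.0)), ("proton", (13.0, 24.0))]

    def distance(a, b):
        """Distance between two things, computed on demand."""
        (_, pa), (_, pb) = a, b
        return math.dist(pa, pb)

    print(distance(things[0], things[1]))  # 5.0 -- space as a by-product
    ```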
  10. Aug 11, 2008 #9


    User Avatar
    Gold Member

    Why? The simulation is really more a diorama of an unmoving 4D space-time. It does not evolve, except to certain creatures within who are doomed to sense one of the dimensions sequentially rather than as a whole.
  11. Aug 11, 2008 #10
    So is the function of the simulation to calculate the state of a particle at some point in its proper time? Do we then have but a single dimension, (t), with all other apparent dimensions being a by-product of the length of proper time between events in that particle's worldline?
  12. Aug 11, 2008 #11


    User Avatar
    Science Advisor
    Homework Helper
    Gold Member

    Hi peter. My background isn't in physics, but thought I'd throw this in. Feynman wrote a paper in '82, "Simulating Physics with Computers", which first considers a classical approximation and then discusses various quantum mechanical approximations and problems. I think it might be of interest to you. If you'd like I can send you a copy, just PM me with your email address.

    Also, you might want to do a search on this topic in Google Scholar for additional papers.
  13. Aug 11, 2008 #12


    User Avatar
    Gold Member

    Sorry, I may have confused the issue.

    The idea for fixed 4D space-time has nothing to do with this thread; it exists as a hypothesis in-and-of-itself.

    Simply put, space-time is a 4D structure where time is merely one of the dimensions. Time does not "pass", the universe does not "evolve". The passage of time is actually an illusion of - or more accurately, a limitation on - 3-dimensional creatures who can only see that dimension in slices.

    The computer simulation could model (or "render") that 4D, fixed universe. There's actually no need for the simulation to evolve in the computer's memory at all. The computer does not produce a 3D movie running in time - it outputs a single, 4D, still picture.
  14. Aug 11, 2008 #13
    Yes, but is a single fixed 4D universe consistent with relativity? The fourth dimension is no mere simple, Euclidean orthogonal addition to the matrix. Thus, I have no idea how the 4th dim. would be represented in computer memory so that the laws of SR would hold up.

    Put another way, everything would be "rendered" relative to one particle's worldline, which itself would violate the fundamental tenet of SR.

    In pseudo-code:

    Particle Spacetime[size_x][size_y][size_z][size_t];

    function IsParticleThere(x, y, z, t):
        return Spacetime[x][y][z][t] != null;

    All this is not consistent with Machian background independence or SR.
  15. Aug 11, 2008 #14


    User Avatar
    Gold Member

    I thought that was the beauty of it, that it is.
  16. Aug 11, 2008 #15
    I guess it depends on your perspective, but whenever you represent Minkowski spacetime, you have one observer whose worldline goes straight up, and everyone else's goes in some other direction not greater than 45 degrees from vertical at any time. So yes, you could represent all of the universe that way by choosing one particle to be the reference point. But that seems like cheating to me. I think a simulation more faithful to the tenets of GR and SR would be naught more than a list of particles, each containing its past and future history, using as little information as possible to define itself.

    And since all a particle "knows" is its wavefunction - which incorporates the forces acting upon it - that should be all that's necessary to define it for any given proper time.
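    A minimal sketch of such a particle record, keyed by proper time (every field name here is my own invention, just to make the idea concrete):

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Particle:
        """A 'thing' in the list-of-things sim, carrying its own history."""
        name: str
        # worldline: proper time tau -> event coordinates (t, x, y, z)
        worldline: dict = field(default_factory=dict)

        def event_at(self, tau):
            """Look up where/when this particle is at its own proper time tau."""
            return self.worldline[tau]

    electron = Particle("electron")
    electron.worldline[0.0] = (0.0, 0.0, 0.0, 0.0)
    electron.worldline[1.0] = (1.2, 0.3, 0.0, 0.0)  # coordinate time t > tau

    print(electron.event_at(1.0))  # (1.2, 0.3, 0.0, 0.0)
    ```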