
Space expansion and Universe as computation

  1. Dec 19, 2015 #1
    Good day.

    I do not know much about cosmology -- my background is rather in computer science -- but the following theoretical question bothers me a little. Some scientists, like Tegmark, Wolfram, Zuse or Fredkin, support the idea that the Universe might be just a computation. Computable means that something can be effectively calculated in finite time via a finite algorithm. Let's pretend that our Universe were a giant computation. Within this framework it is typical to assume that spacetime is discrete. Suppose we have a finite set of particles in our Universe. It seems possible in principle to simulate such a toy Universe (as far as I remember, there are already approximate simulations of our Universe).
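
    To make this concrete, here is a minimal sketch of what I mean by a toy Universe as a computation; the grid, the particles and the update rule are all invented for illustration:

    Code (Python):
    # A toy "universe as computation": a finite 1-D grid of cells, a finite
    # set of particles, and a deterministic update rule applied in discrete
    # time steps.
    GRID_SIZE = 1000                   # finite, discrete "space"

    particles = {                      # id -> (cell index, velocity in cells/tick)
        "a": (10, +1),
        "b": (500, -1),
    }

    def step(state):
        """Advance the toy universe by one discrete tick."""
        return {pid: ((pos + vel) % GRID_SIZE, vel)   # wrap around at the edges
                for pid, (pos, vel) in state.items()}

    for _ in range(100):               # finite time, finite algorithm: computable
        particles = step(particles)
    print(particles)                   # {'a': (110, 1), 'b': (400, -1)}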

    So far so good. But what if space is expanding? It seems that you'd need to "create" new space cells (or quanta) indefinitely, which contradicts the idea that the Universe is a computation. That is because you'd need infinite computational resources just to track all the particles' positions, let alone their interactions. Could space expansion be evidence against the idea that the Universe might be a computation?

    I do not pretend that the Universe as computation is an adequate model of the physical reality, so I'd like to avoid philosophical discussions.
     
  3. Dec 19, 2015 #2

    mfb


    Staff: Mentor

    Why infinite? If you start with a finite volume, it will stay finite. Your simulation will need more computing power and memory over time, sure. So what?
     
  4. Dec 20, 2015 #3
    I don't think that's the way computation works. That effectively means the Universe is uncomputable, since it would require infinite "creation" of computational power, which contradicts the very theory of computable functions.
     
  5. Dec 20, 2015 #4

    mfb


    Staff: Mentor

    There is nothing infinite if the universe is finite.
     
  6. Dec 20, 2015 #5
    Are you asking if the universe is actually a simulation and then attempting to prove the universe is not?

    There are a number of untestable theories in that regard, but I can't see how your argument would preclude the possibility.
     
  7. Dec 21, 2015 #6
    I don't fully understand this statement. As far as I remember, the standard model of the Universe states that it's infinite. But even if we restrict ourselves just to the observable Universe, things don't change either -- the observable Universe expands forever, more and more objects eventually enter it, and more computational power is "created" from nothing -- which contradicts the idea that the (observable) Universe is a computation. Also, this last sentence is from Seth Lloyd's book.

    That's actually my question. How can it not preclude that? Infinite expansion = infinite growth of computational effort. That's not how computation works, even though there are different meanings of this term.
     
  8. Dec 21, 2015 #7

    mfb


    Staff: Mentor

    It does not specify the size at all. It can be infinite (and this is the easiest model), but it does not have to be. Experimentally, we just have a lower limit on the size.
    If you think of objects where we'll be able to see their current state in the future, it is the opposite: more and more objects are leaving it. The total number of objects we can interact with is finite, even for an infinite future, due to accelerated expansion. Expansion simplifies (!) the computation of the observable universe. In the very distant future, all we'll have in the observable universe are the remains of our galaxy cluster, plus some very redshifted CMB.
    The future doesn't matter for the original argument, however. A computer simulation does not have to be able to continue the computation indefinitely, it just has to reach the current state.
     
  9. Dec 21, 2015 #8
    But space would just be distance between particles -- a property of the particles, not a stand-alone entity. If the number of particles remains constant, then I don't see why you'd ever need more power to compute it. If particles decay, which they do, then make sure you have enough power to compute for the maximum total number of particles. I also suppose that since entropy never decreases, computing becomes simpler over time, i.e. you need fewer and fewer parameters to compute something like the matter distribution.
     
  10. Dec 21, 2015 #9
    Well, first of all: why are you stuck on the idea that every single data point must be quantized?

    I will cite an example here from Ray Kurzweil. If we take a 1 kg rock, it will have approximately 10^25 atoms. That is about 10^27 bits of information. That is a lot to model, but is it really information?

    The argument for the definition of information is important. If I have a binary number that is 0101010101, you might say that is 10 bits of information, but it really is only 2 bits of useful information repeated 5 times.
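
    You can see this with any off-the-shelf compressor; a quick sketch (the exact byte counts depend on the compressor, the gap between them is the point):

    Code (Python):
    import os
    import zlib

    patterned = b"01" * 1000           # 2000 bytes of pure repetition
    noise = os.urandom(2000)           # 2000 bytes of incompressible randomness

    # The compressor exploits the repetition: the patterned string shrinks
    # to a tiny fraction of its raw length, while the noise barely shrinks
    # at all. That gap is the "useful information" gap.
    print(len(zlib.compress(patterned)))   # a few dozen bytes
    print(len(zlib.compress(noise)))       # about 2000 bytes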

    We don't describe a rock using 10^27 bits of information; we describe it in terms of its abstract properties. We don't need the spin or angular momentum of every electron, but we can create that exact same model based on far less information. That's just one thing.

    As far as we know, there is a finite amount of energy and matter in the universe. The space between that energy and matter is unimportant in the sense we are only concerned about the spatial position of things, not the pixelated space between (assuming everything distills down to Planck units).

    Yes, there are random virtual particles, but the key here is that they are random, so once you model a cubic cm of space you can model any amount of space based on that first model: just randomize each subsequent cubic cm.

    This means that despite the infinite growth of the universe in size, the amount of material and energy to model will always be finite.
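
    In code, "randomize each subsequent cubic cm" amounts to seeding a pseudo-random generator with the cell's coordinates, so no cell ever has to be stored. A sketch, with all names invented:

    Code (Python):
    import random

    def cell_contents(x, y, z, universe_seed="my-universe"):
        """Deterministically generate the "random" contents of one space
        cell on demand from its coordinates. Nothing is stored: the same
        coordinates always reproduce the same contents, so arbitrarily
        much space costs no memory, only computation when looked at."""
        rng = random.Random(f"{universe_seed}:{x}:{y}:{z}")
        return rng.random()            # stand-in for "vacuum fluctuations"

    print(cell_contents(0, 0, 0))        # identical on every run
    print(cell_contents(10**30, 5, -7))  # a far-away cell costs nothing extra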
     
  11. Dec 21, 2015 #10

    phinds


    Uh ... seriously?
     
  12. Dec 21, 2015 #11
    Dear forum members, these last answers are just excellent! A short remark that I want to make so far:


    Sure, it makes no sense to store the whole (quantized) space as an enormous multidimensional matrix -- it suffices to store just the particles' positions, as pointed out by guywithdoubts. However, even if we were to store just the distance between one pair of particles, we'd need an infinitely growing memory, since, theoretically, each particle can still come to occupy any neighboring space cell. That is, we would still have to track the positions up to the maximum precision (the Planck distance, for example, but it doesn't really matter). In the following, I'll try to come up with a workaround.


    So, if I were to simulate our Universe with enormous but finite computational resources, I'd only be concerned with particles that are able to interact in principle. As far as I understand, this is related to the cosmic event horizon. That is, fix a particle as the observer. If another particle is within the event horizon, then it could possibly interact with the fixed particle. Otherwise, they will never interact. Notice that particles can leave the event horizon in finite time (there can be subtleties with the notion of time here, though). It is not so for the particle horizon where, on the contrary, more objects may become "seen" by the observer (in their past state!). But let's not care about the particle horizon; let's account only for actual interactions. It turns out (correct me if I'm wrong) that after finite time, the event horizon will contain no other particles. So there is nothing to interact with. At this moment, computation of interactions of the fixed particle is literally terminated (however blunt that sounds, it wouldn't introduce any violation of physical laws for all the other observers). Alternatively, tracking of all the distances from the fixed particle to other particles may be terminated.


    If there is a finite number of particles, the same "termination" procedure may be executed for all of them as soon as they become completely "isolated". Now, one could argue that there is such a thing as entanglement. I'd suggest stopping the tracking of entangled particles as soon as all of them become isolated. For instance, if both electrons in an entangled two-electron state get isolated, tracking of distances to other particles may be terminated.
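
    A minimal sketch of the bookkeeping I have in mind (everything is invented for illustration; the horizon is just a fixed number, not real cosmology):

    Code (Python):
    def prune_tracked_pairs(positions, tracked_pairs, horizon, dist):
        """Keep only the particle pairs that can still interact. Once a
        pair's separation exceeds the event horizon it is dropped for
        good, so the set of tracked pairs only ever shrinks. `dist` is
        whatever distance function the simulation uses."""
        return {(a, b) for (a, b) in tracked_pairs
                if dist(positions[a], positions[b]) <= horizon}

    positions = {"a": 0.0, "b": 5.0, "c": 40.0}
    pairs = {("a", "b"), ("a", "c"), ("b", "c")}
    pairs = prune_tracked_pairs(positions, pairs, horizon=20.0,
                                dist=lambda p, q: abs(p - q))
    print(pairs)   # {('a', 'b')} -- "c" has become isolated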

    Thoughts and ideas are welcome.

    A side question, which may shed some light on the subject: does the event horizon have a limit in proper units as time goes to infinity?
     
  13. Dec 21, 2015 #12
    I wouldn't say it's sufficient.
     
  14. Dec 21, 2015 #13

    phinds


    You really need to get a grip on the concept of infinity and not use the word casually.
     
  15. Dec 21, 2015 #14
    Yes, I agree, and I have no idea why the amount of memory would need to increase. The positions may change, but the amount of data needed to describe the position doesn't — unless one wants to keep a complete historical record of the two positions, but that isn't how the universe works, so why would a simulation need to?
     
  16. Dec 21, 2015 #15

    mfb


    Staff: Mentor

    It is sufficient to reach the current state, by definition.
    The part of today's universe with a causal connection to us has just 10^182 Planck volumes, and the observable universe is just two orders of magnitude larger. About 600 bits (that is a finite number) are sufficient to describe the position of a classical particle with Planck-scale accuracy. The universe is not classical, of course, but that is a different issue.

    If you limit the simulation to 10^185 Planck volumes, you can account for everything that ever interacted or will ever interact with anything in the current observable universe. That's even better than just keeping track of the observable universe. And hey, who cares about three orders of magnitude?
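
    As a quick order-of-magnitude check (my arithmetic, nothing more):

    Code (Python):
    import math

    planck_volumes = 10**185            # the bound used above
    bits_per_position = math.log2(planck_volumes)
    print(round(bits_per_position))     # 615 -- roughly 600 bits, and finite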
     
  17. Dec 22, 2015 #16
    How? Suppose you have a fixed resolution of space. The only way, as I see it, to simulate the expansion of space is by creating more space cells. It means that the representation of a particle's position needs to grow indefinitely. Suppose you had a distance of 100 meters and the resolution were 1 m. Now your space has expanded and the distance became 1000 m, but the resolution stayed the same. Suppose one of the particles has moved by just one cell, i.e. one meter. Now the number is, say, 999 m. Earlier it would have been just 99 m. How can you argue that 99 requires the same storage as 999?
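
    To put toy numbers on my worry:

    Code (Python):
    d = 100                    # distance in resolution units (cells)
    for _ in range(5):
        print(d, "->", d.bit_length(), "bits")
        d *= 10                # the "universe" grows tenfold
    # 100 -> 7 bits, 1000 -> 10 bits, 10000 -> 14 bits, ...
    # The representation does keep growing, though only like log(distance).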

    This is not exactly what I'm asking. I am asking about simulating the Universe at any state with a uniform bound on computational resources. Also, someone seems to mention history; I don't think it's necessary to store all of the history.

    This is an interesting idea, but I would appreciate a clarification. How does this number of Planck volumes also apply to the future? Do you imply finiteness of the event horizon in proper units? Notice that some cosmologists seem to use the particle horizon to denote the observable Universe, and that one is not convergent, unlike the event horizon. Another subtlety is that the observable Universe is a relative notion, regardless of which horizon we use (event or particle). But there shouldn't be any problem provided that the number of observers is finite.
     
  18. Dec 22, 2015 #17

    mfb


    Staff: Mentor

    It is not about the same size; it is about finite versus infinite. 1000 is finite. Also, you can limit the computation to the particles -- then the distances don't matter, and particle numbers are (quite) constant.
    Yes. The part of the universe that can interact with us in the future has a radius of about 15 billion light years, and that number won't change significantly (in particular, it approaches a constant value).
     
  19. Dec 22, 2015 #18
    First of all, if I were creating a computer program to simulate the universe I wouldn't simulate things that have no value. Empty space is just a coordinate system with random noise in it (if you want to count virtual particles).

    You are just interested in the relative position of matter and energy with regard to each other. It's like a trucking company keeping track of its GPS-equipped trucks in a growing territory. The company only needs to know where the trucks are relative to the dispatch office. The space in between is not important and, in the case of the universe, all the same anyway.

    Just what information were you thinking of assigning every cube of empty Planck space anyway?

    Think of it another way. If you simulate the universe, are you going to assign memory for the value of pi?

    That would be pretty silly, as the resources would need to be infinite, but you can compute the value of pi to any needed precision with a simple formula, which is much more efficient than the brute-force approach you are thinking of.
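
    For instance, here is one of many simple series -- slow, but it makes the point (just a sketch):

    Code (Python):
    def pi_nilakantha(terms):
        """Approximate pi with the Nilakantha series:
        pi = 3 + 4/(2*3*4) - 4/(4*5*6) + 4/(6*7*8) - ...
        More terms give more precision; no giant lookup table needed."""
        approx, sign, k = 3.0, 1.0, 2
        for _ in range(terms):
            approx += sign * 4.0 / (k * (k + 1) * (k + 2))
            sign = -sign
            k += 2
        return approx

    print(pi_nilakantha(100_000))   # 3.141592653589... to float precision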

    Hang around for 40 to 50 years, until our own machines grow in performance to the point where humans start creating their own simulated universes.
     
  20. Dec 22, 2015 #19
    Funny that we haven't mentioned procedural generation yet.
     
  21. Dec 22, 2015 #20
    A bit of an abstract concept here, but if something has no value, does that really mean it carries no information? Can something not have any value in the first place? I think this is at the heart of the OP's question.

    For example, if we have an empty grid, does the grid carry no information at all, or do we only gain information once we put a coordinate on it? Can we even say the grid existed, whether or not the coordinate was there?

    If we think about it literally, there's no way to show that a grid actually exists in space as we know it. But if you look at it through a computational perspective, one can say that the grid is there, because without the grid you cannot place a coordinate that makes sense wherever you want to place one. If quantum mechanics holds true, then any change to this grid, whether expanding or contracting, changes the potentialities of all particles in a system to include or exclude those new coordinates, which sounds like an increase in processing power.
     