Space expansion and Universe as computation

In summary, the idea that the universe might simply be a computation is controversial because of the computational effort a simulation would require, which some argue is effectively infinite. It has been suggested that the expansion of space is evidence against the idea, but this argument does not hold water.
  • #36
Loren said:
All of the data from the SDSS fits on a hard drive. What's the big deal?

The deal is that this data, when analyzed by scientists, needs to be consistent with all other scientific observations and their analysis. Not "mostly consistent", not "looking okay-ish at first glance". You can't have SDSS data on stellar spectra and stellar population statistics contradict your elementary particle theories derived from accelerator experiments.
 
  • #37
Loren said:
If the universe is a set of discrete quantum points, then the Planck units are the smallest scale for any measurement. One Planck length is about 1.6 × 10^-35 meters. That's a discrete quantum. You can't get any more precise than that, and so every position in the universe can be described by a finite set of numbers no matter how big the universe is or gets.

The numbers may appear big, but they still resolve to finite values.

Your argument boils down to two points:
- The Universe needs only a finite amount of data and computation to be simulated
- We rapidly increase our computational resources

From this you conclude that it would be possible to simulate the Universe.

I think you fail to realize that a "finite" amount of data can nevertheless be so vast that there is no chance of tackling it, regardless of how fast our computers evolve.

Here is an example:
https://en.wikipedia.org/wiki/Graham's_number

Easy, eh? It's only a single finite number; it's not even aleph-zero. Now, can we calculate and write down its value in base 10? Can we ever do that? Say, after a billion years of advancement in computers?
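To make the scale concrete, here is a minimal Python sketch (my own illustration, not part of the original post) of the up-arrow notation used to build Graham's number. Even the innocuous-looking tower 3↑↑4 already has trillions of decimal digits, and Graham's number sits 64 levels of iterated arrow-counts above that, so "finite" does not mean "writable":

```python
import math

def up_arrow(a, n, b):
    """Knuth's up-arrow a ^(n) b, defined recursively.
    Only safe to call for tiny inputs -- the values explode immediately."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(3, 1, 3))   # 3^3 = 27
print(up_arrow(3, 2, 3))   # 3^^3 = 3^27 = 7625597484987 (13 digits)

# 3^^4 = 3^7625597484987 is already far too large to evaluate, so we only
# estimate how many decimal digits it would have:
digits = 7625597484987 * math.log10(3)
print(f"3^^4 has about {digits:.2e} decimal digits")   # ~3.6e12 digits

# Graham's number starts from g1 = 3^^^^3 and then iterates the *number of
# arrows* 64 times (g_{k+1} = 3 ^(g_k) 3): finite, yet utterly unwritable.
```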
 
  • #38
nikkkom said:
Your argument boils down to two points:
- The Universe needs only a finite amount of data and computation to be simulated
- We rapidly increase our computational resources

From this you conclude that it would be possible to simulate the Universe.

I think you fail to realize that a "finite" amount of data can nevertheless be so vast that there is no chance of tackling it, regardless of how fast our computers evolve.

Here is an example:
https://en.wikipedia.org/wiki/Graham's_number

Easy, eh? It's only a single finite number; it's not even aleph-zero. Now, can we calculate and write down its value in base 10? Can we ever do that? Say, after a billion years of advancement in computers?

All I can say is remember this day and your posts. I think that you will be surprised, but time will tell.
 
  • #39
Good day again, and apologies for a necrobump (or is it?).

While this was a nice and productive discussion of a matter that seems little addressed in the related literature, I have come up with a new question related to the current thread.

Namely, quantum and thermal fluctuations that theoretically take place even in the case of a heat death. A nasty consequence of these might be a Poincaré recurrence, the creation of a new universe, a Boltzmann brain, or the like. Even if the particles are completely isolated by event horizons, there is still a nonzero probability that something appears out of "nothing".

This calls into question whether any finite computational power can simulate something like this.

I think the problem might lie in needing a sort of "true random number generator" in our imaginary computer. If there is such a thing as TRUE randomness, then the universe probably cannot be simulated. If, however, all randomness is only apparent, the situation might be rescued. Imagine that the simulated quantum mechanics is deterministic (such models exist, e.g., Bohmian mechanics -- even though it is seen as unpleasant due to its nonlocality, it might fit the current discussion). Then every process which appears "random" to an internal observer might actually be the result of some sophisticated algorithm (think, for example, of pseudorandom generators, which have enormous periods sufficient for any practical use).
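As a purely illustrative aside (the generator, seed, and the noisy_history helper below are my own choices, not anything proposed in the thread), here is a small Python sketch of that "apparent randomness" point: a deterministic pseudorandom generator looks random to anyone who only sees the outputs, yet the same seed replays the same "fluctuations" exactly.

```python
import random

def noisy_history(seed, steps=5):
    """A deterministic 'random' history: the same seed always replays exactly."""
    rng = random.Random(seed)              # Mersenne Twister, period 2**19937 - 1
    return [rng.gauss(0.0, 1.0) for _ in range(steps)]

run_a = noisy_history(seed=42)
run_b = noisy_history(seed=42)
print(run_a == run_b)                      # True: the "fluctuations" repeat exactly

# An observer inside the simulation who never sees the seed or the update rule
# would have a very hard time telling these outputs from genuine randomness.
```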

So might a deterministic picture resolve the issue of Boltzmann brain and related phenomena which preclude the possibility of simulating an indefinitely expanding universe with finite computational power?

Comments / thoughts are welcome.
 
  • #40
Who said you have to finish computing the infinite future of our universe with finite computational power? The simulation can be stopped once it gets boring, or just kept running, simulating ever further into our future over the runtime of the simulation.

The many-worlds interpretation is deterministic and local.
 
  • #41
As was said earlier, the horizon is asymptotic to a limit but comoving material continues to expand. The number of particles within the observable universe will therefore fall, freeing up resources to model new entities should these be modeled by your program. Location can be an attribute of each particle with finite resolution, hence finite resources will always be adequate, and in fact we may be past the peak since the expansion is already accelerating.

This line of thinking isn't going to define any limit on the universe other than that it couldn't be infinite, unless of course it runs on an infinite array of distributed processors. That only raises the question of the size of the universe in which the computer is running, so this appears to be heading towards the philosophical dead end of infinite regress.
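For what it's worth, here is a toy Python sketch of the bookkeeping being described; the Particle record, the prune_outside_horizon helper, and all the numbers are invented purely for illustration. Each particle stores its position as integer multiples of the Planck length (finite resolution), and anything carried beyond the horizon is simply dropped from the table, so the storage requirement falls over time.

```python
from dataclasses import dataclass

PLANCK_LENGTH = 1.6e-35   # meters; positions below are integer multiples of this

@dataclass
class Particle:
    x: int   # finite-resolution coordinates, in Planck lengths
    y: int
    z: int

def prune_outside_horizon(particles, horizon_radius_m):
    """Keep only the particles still inside the horizon; the rest are forgotten."""
    r_planck = horizon_radius_m / PLANCK_LENGTH
    return [p for p in particles
            if (p.x ** 2 + p.y ** 2 + p.z ** 2) ** 0.5 <= r_planck]

# toy data: one particle well inside a ~4.4e26 m horizon, one carried beyond it
inside = Particle(10 ** 20, 0, 0)
outside = Particle(10 ** 62, 0, 0)
table = prune_outside_horizon([inside, outside], horizon_radius_m=4.4e26)
print(len(table))   # 1 -- the memory needed to track particles has gone down
```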
 
  • #42
mfb said:
Who said you have to finish computing the infinite future of our universe with finite computational power? The simulation can be stopped once it gets boring, or just kept running, simulating ever further into our future over the runtime of the simulation.

The many-worlds interpretation is deterministic and local.

You are right. But "boring" may be understood differently. Having nothing to interact with can be classified as "boring", but having a Boltzmann brain occur right in front of you may not be!
 
  • #43
GeorgeDishman said:
The number of particles within the observable universe will therefore fall, freeing up resources to model new entities should these be modeled by your program

I get your point. But what might appear is not (to my understanding) limited in any way, at least dimensionally and in terms of complexity. A new universe may, say, appear. Also, as I must have stated somewhere, the simulation time should be finite; the criterion was to stop the simulation when every particle becomes isolated. But the random occurrence of literally anything breaks this idea apart.
 
  • #44
Why would we need an external universe to do the simulation computations for this one?
The numbers are big, but the methods of symbolizing and manipulating them are not.

The description of Graham's number takes a page, showing the construction of G64. All numbers could be described as a Graham-style base plus an exponential residue, like this:
A particular number is G12 + G3 + 8.2×10^246 + 3.290869×10^-29, or some similar way.

How big a number, and how many of them, would be needed to describe the universe as a phase space in Planck units?
 
  • #45
bahamagreen said:
Why would we need an external universe to do the simulation computations for this one?

How could a simulation run on a computer built in this universe be the universe in which it is running? The conversation isn't about us simulating the universe.
 
  • #46
Right. The conversation is about the universe as a (finite) computation.

Meanwhile, I have serious doubts about whether it is possible given the standard interpretation of QM. Even if the universe approaches a heat death, literally anything can still happen, and so on forever. Perhaps Bohmian mechanics is more suitable in this context; at least it addresses the final state better.
 
  • #47
The volume of the observable universe is roughly 9.5×10^184 cubic Planck units. While not infinite, it may as well be for all practical purposes.
 
  • #48
Maybe I'm not understanding what "universe as computation" means.
Is it a model that is fully detailed and complete?

Like this?
The basic Planck units are length, mass, time, charge, and temperature
The volumes are assigned with respect to times as events
The current number of volumes is about 9.5×10^184; the current number of times is about 8×10^60
The current number of events is about 7.6×10^245
That leaves an attribute of mass, charge, and temperature for each volume at each time (event)
Each volume/time event might take three attributes (mass, charge, temperature), or all 16 derived Planck units
These attributes will be "small numbers" up to singularity.
I don't think the computation itself, despite being self-referential, counts as a singularity.
Would these events qualify as indistinguishable micro-states, where we would need N = (7.6×10^245)!, or about 10^10^218?
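For reference, the counts quoted above can be reproduced to order of magnitude with a few lines of Python. The horizon radius, age, and Planck values below are standard round figures assumed for illustration, so only the exponents should be taken seriously:

```python
import math

# standard round figures, assumed here for illustration (not taken from the thread)
PLANCK_LENGTH = 1.616e-35     # m
PLANCK_TIME = 5.391e-44       # s
HORIZON_RADIUS = 4.40e26      # comoving radius of the observable universe, m
AGE = 4.35e17                 # age of the universe, s

planck_volumes = (4 / 3) * math.pi * HORIZON_RADIUS ** 3 / PLANCK_LENGTH ** 3
planck_times = AGE / PLANCK_TIME
events = planck_volumes * planck_times

print(f"Planck volumes in the observable universe: {planck_volumes:.1e}")  # ~8e184
print(f"Planck times since the big bang:           {planck_times:.1e}")    # ~8e60
print(f"volume-time 'events':                      {events:.1e}")          # ~7e245
```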
 
  • #49
Temperature is a macroscopic quantity; it does not exist at the level of individual particles.

Thermodynamics sets limits on the amount of computation you can get done - it is much lower than one operation per Planck (4d) volume. Bremermann's limit is an example: if you scale it up from Earth to the observable universe, you get something like 10^120 operations. A computer simulating our observable universe would not need more than that.
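As a rough cross-check of that figure (the mass and age below are my own round-number assumptions, not mfb's), Bremermann's limit of about 1.36×10^50 operations per second per kilogram, multiplied by a rough mass of ordinary matter in the observable universe and its age, indeed lands near 10^120:

```python
# rough, assumed round numbers -- only the order of magnitude is meaningful
BREMERMANN = 1.36e50   # maximum operations per second per kilogram (roughly c^2 / h)
MASS = 1.5e53          # kg, rough mass of ordinary matter in the observable universe
AGE = 4.35e17          # s, age of the universe

max_ops = BREMERMANN * MASS * AGE
print(f"upper bound on operations performed so far: {max_ops:.1e}")   # ~9e120
```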
 
  • #50
Computing, in the most sensible view, requires three things:
  1. a program
  2. some form of storage
  3. a means to execute the program
So let us look at the universe. It is big, and for all reasonable purposes finite, as light has a finite speed and information cannot (as far as we know) travel faster than that.
This also gives us a finite amount of information at each step the program takes, which means it can be traversed in finite time.

Now let's say that the universe consists of a finite number of programs that mutate at each time step, in such a way that the information each one receives changes the program as well as the stored information (the latter is not required if the program can be altered so that the data becomes the program). Self-mutating programs are well known throughout the computer science community; running them can be quite tricky, but it is doable.
So instead of one giant program, you end up with a finite number of small ones that work together to create what we perceive as the universe. The logic in those programs changes, and the effect of new information also changes the behaviour in the next cycle.
A universe run by one program violates its own built-in limitation, as it would require calculating all elements at each time step regardless of distance. (Adding an extra dimension could resolve this, but then information could travel from one place to the next instantaneously, which is not observed.) That requires a tremendous amount of effort for one entity or program to calculate; for all practical purposes one can mark that as an infinite number of steps. But I don't think the universe is built like that. It is a huge array of parallel-processing entities.
The one question this thought experiment may overlook is what those programs are running on. For now I just assume that each program is both processor and program.
I think using a finite number of programs that can communicate with each other by exchanging information, which travels at a finite speed, and that change their behaviour based on that information, would be doable.
Of course, this is just an idea of how I would build such a simulation using finite resources and a staggering number of configurations.
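To make that picture a bit more concrete, here is a toy Python sketch, invented purely for illustration (the Cell and tick names and the rules are mine): a row of small "cells", each holding its own state and its own tiny rule, where information moves at most one neighbour per tick (a finite "speed of light") and incoming data can rewrite the cell's rule, the self-mutating-program idea.

```python
import random

random.seed(0)

class Cell:
    """A tiny 'program': its own bit of state plus a rule that incoming data can rewrite."""

    def __init__(self, state):
        self.state = state
        self.rule = lambda left, right, s: (left + right + s) % 2   # starting rule

    def step(self, left, right):
        new_state = self.rule(left, right, self.state)
        # "self-mutation": the information just received rewrites the rule itself
        if left == 1 and right == 1:
            self.rule = lambda l, r, s: (l ^ r) | s
        self.state = new_state

def tick(cells):
    """One global time step: each cell reads only its immediate neighbours,
    so information propagates at most one cell per tick (a finite 'speed')."""
    snapshot = [c.state for c in cells]
    for i, cell in enumerate(cells):
        left = snapshot[i - 1] if i > 0 else 0
        right = snapshot[i + 1] if i < len(cells) - 1 else 0
        cell.step(left, right)

cells = [Cell(random.randint(0, 1)) for _ in range(16)]
for _ in range(5):
    tick(cells)
    print("".join(str(c.state) for c in cells))
```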

 
  • #51
Michael27 said:
A universe run by one program violates its own built-in limitation, as it would require calculating all elements at each time step regardless of distance.
Why? It is perfectly possible to consider only the elements next to place X when computing what happens at place X. In fact, every calculation in GR and lattice QFT does this, and programs for tasks like weather prediction do as well, so there are counterexamples to your statement even within this universe.
Michael27 said:
It is a huge array of parallel-processing entities.
That is not in conflict with a single program running that.
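A minimal sketch of the kind of counterexample being pointed at, using a 1D diffusion stencil as a stand-in for a lattice-field or weather code (the example and the diffuse helper are mine, not mfb's): one single ordinary program, yet every update reads only the nearest neighbours, never a distant grid point.

```python
def diffuse(field, alpha=0.1, steps=50):
    """Explicit nearest-neighbour update: u[i] depends only on u[i-1], u[i], u[i+1].
    A single program, yet no step ever reads a distant grid point."""
    u = list(field)
    n = len(u)
    for _ in range(steps):
        u = [u[i] + alpha * (u[(i - 1) % n] - 2 * u[i] + u[(i + 1) % n])
             for i in range(n)]
    return u

# a spike of "heat" in the middle spreads outward one neighbour per step
field = [0.0] * 21
field[10] = 1.0
print([round(x, 3) for x in diffuse(field)])
```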
 
  • #52
mfb said:
Why? It is perfectly possible to consider only the elements next to place X when computing what happens at place X. In fact, every calculation in GR and lattice QFT does this, and programs for tasks like weather prediction do as well, so there are counterexamples to your statement even within this universe. That is not in conflict with a single program running that.

You are right, but I should elaborate, as I was not clear enough. A single program runs on a single entity, which may itself contain multiple processing entities; in the end, however, the governing program runs as one entity and has to communicate with all sub-processes in order to coordinate. That is the distinction I was trying to draw and did not succeed at. For such an entity to coordinate, it would have to have information sent to it from every location. That is why I added the remark about extra dimensions through which the governing entity could retrieve and send information so as not to violate the restriction of the speed of information/light.
As always, it comes down to the old paradox of how a program can delete itself while it is running. There must be some governing entity to do this, which in turn suggests that there should be something outside the universe able to do so.
This paradox does not change the question of how much computing power one would need to simulate the universe.

mfb said:
"That is not in conflict with a single program running that.
Your second remark is correct as well, but I hope I have clarified my position a little better. Without the need for a governing entity, this is correct. I think the terms "program" and "what is running the program" are in conflict here.
 
  • #53
Chronos said:
The volume of the observable universe is roughly 9.5×10^184 cubic Planck units. While not infinite, it may as well be for all practical purposes.

You don't seem to be much of a programmer, do you? Algorithms may themselves be simple yet able to generate indefinitely large arrays. It doesn't matter "how many" entities there are; it matters how they are generated.
 
  • #54
While clever, your misdirection conveniently ignores the fact that the information required to execute such a simulation is staggering. That it may be computable does not alter the fact that computation time grows exponentially with the data. For example, the Illustris project, a childishly simplified [only 12 billion pixels] simulation of the observable universe, required 3 months of run time on one of the world's most advanced supercomputers - re: https://www.nics.tennessee.edu/illustris-project.
 
