I Space expansion and Universe as computation

ErikZorkin
Good day.

I do not know much about cosmology (my background is computer science), but the following theoretical question bothers me a little. Some scientists, like Tegmark, Wolfram, Zuse or Fredkin, support the idea that the Universe might be just a computation. "Computable" means that something can be effectively calculated in finite time via a finite algorithm. Let's pretend that our Universe is a giant computation. In this framework it is typical to assume that space-time is discrete. Suppose we have a finite set of particles in our Universe. It seems possible in principle to simulate such a toy Universe (there are already approximate simulations of our Universe, as far as I remember).

So far so good. But what if the space is expanding? It seems that you'd need to "create" new space cells (or quanta) indefinitely, which contradicts the idea that the Universe is a computation. That is because you'd need infinite computational resources just to track all particles' positions, let alone their interactions. Could it be that space expansion is evidence against the idea that the Universe might be a computation?

I do not pretend that the Universe as computation is an adequate model of the physical reality, so I'd like to avoid philosophical discussions.
 
ErikZorkin said:
That is because you'd need infinite computational resources just to track all particles' positions, let alone their interactions.
Why infinite? If you start with a finite volume, it will stay finite. Your simulation will need more computing power and memory over time, sure. So what?
 
mfb said:
Your simulation will need more computing power and memory over time
I don't think that's the way computation works. This effectively means that the Universe is uncomputable, since it would require infinite "creation" of computational power, which contradicts the very theory of computable functions.
 
There is nothing infinite if the universe is finite.
 
Are you asking if the universe is actually a simulation and then attempting to prove the universe is not?

There are a number of untestable theories in that regard, but I can't see how your argument would preclude the possibility.
 
mfb said:
There is nothing infinite if the universe is finite.
I don't fully understand this statement. As far as I remember, the standard model of the Universe states it's infinite. But even if we restrict ourselves just to the observable Universe, things don't change either -- the observable Universe expands forever and more and more objects enter it eventually; more computational power is "created" from nothing -- it contradicts the idea that the (observable) Universe is a computation. Also, this last sentence is from Seth Lloyd's book.

Loren said:
Are you asking if the universe is actually a simulation and then attempting to prove the universe is not?

There are a number of untestable theories in that regard, but I can't see how your argument would preclude the possibility.

That's actually my question. How can it not preclude that? Infinite expansion = infinite growth of computational effort, and that's not how computation works, even though there are different meanings of this term.
 
ErikZorkin said:
As far as I remember, the standard model of the Universe states it's infinite.
It does not specify the size at all. It can be infinite (and this is the easiest model), but it does not have to be. Experimentally, we just have a lower limit on the size.
ErikZorkin said:
the observable Universe expands forever and more and more objects enter it eventually
If you think of objects where we'll be able to see their current state in the future, it is the opposite: more and more objects are leaving it. The total number of objects we can interact with is finite, even for an infinite future, due to accelerated expansion. Expansion simplifies (!) the computation of the observable universe. In the very distant future, all we'll have in the observable universe are the remains of our galaxy cluster, plus some very redshifted CMB.
The future doesn't matter for the original argument, however. A computer simulation does not have to be able to continue the computation indefinitely, it just has to reach the current state.
 
But space would just be the distance between particles, that is, a property of the particles and not a stand-alone entity. If the number of particles remains constant, then I don't see why you'd ever need more power to compute it. If particles decay, which they do, then make sure you have enough power to compute for the maximum total number of particles. I also suppose that since entropy never decreases, computing becomes simpler over time, i.e., you need fewer and fewer parameters to compute something like the matter distribution.
 
ErikZorkin said:
I don't fully understand this statement. As far as I remember, the standard model of the Universe states it's infinite. But even if we restrict ourselves just to the observable Universe, things don't change either -- the observable Universe expands forever and more and more objects enter it eventually; more computational power is "created" from nothing -- it contradicts the idea that the (observable) Universe is a computation. Also, this last sentence is from Seth Lloyd's book.
That's actually my question. How can it not preclude that? Infinite expansion = infinite growth of computational effort, and that's not how computation works, even though there are different meanings of this term.

Well, first, why are you stuck on the idea that every single data point must be quantized?

I will cite an example here from Ray Kurzweil. If we take a 1 kg rock it will have approximately 10^25 atoms. That is about 10^27 bits of information. That is a lot to model, but is it really information?

The argument for the definition of information is important. If I have a binary number that is 0101010101, you might say that is 10 bits of information, but it really is only 2 bits of useful information repeated 5 times.

We don't describe a rock using 10^27 bits of information, we describe it in terms of its abstract properties. We don't need the spin or angular momentum of every electron, but we can create that exact same model based on far less information. That's just one thing.
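
To make the compression point concrete, here is a rough Python illustration (my own toy example, not anything from Kurzweil): a general-purpose compressor shrinks a highly repetitive string to almost nothing, while genuinely random data of the same length stays essentially full size.

```python
# Toy illustration: "useful" information vs. raw length.
import os
import zlib

repetitive = b"01" * 500_000         # 1,000,000 bytes built from one repeated pattern
random_data = os.urandom(1_000_000)  # 1,000,000 bytes of (pseudo)random noise

print(len(zlib.compress(repetitive)))   # a few kB: the pattern is cheap to describe
print(len(zlib.compress(random_data)))  # ~1,000,000 bytes: essentially incompressible
```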

As far as we know, there is a finite amount of energy and matter in the universe. The space between that energy and matter is unimportant in the sense we are only concerned about the spatial position of things, not the pixelated space between (assuming everything distills down to Planck units).

Yes, there are random virtual particles, but the key here is that they are random, so once you model a cubic cm of space you can model any amount of space based on the first model; just randomize each subsequent cubic cm.

This means that despite the infinite growth of the universe in size, the amount of material and energy to model will always be finite.
 
  • #10
Loren said:
The argument for the definition of information is important. If I have a binary number that is 0101010101, you might say that is 10 bits of information, but it really is only 2 bits of useful information repeated 5 times.
Uh ... seriously?
 
  • #11
Dear forum members, these last answers are just excellent! A short remark which I want to make so far: sure, it makes no sense to store the whole (quantized) space as an enormous multidimensional matrix -- it suffices to store just the particles' positions, as pointed out by guywithdoubts. However, even if we were to store just one distance between a pair of particles, we'd have to have an infinitely growing memory storage since, theoretically, each particle can still occupy any neighboring space cell. That is, we would still have to track the positions up to the maximum precision (the Planck distance, for example, but it doesn't really matter). In the following, I'll try to come up with a workaround.

So, if I were to simulate our Universe with enormous but finite computational resources, I'd only be concerned with particles that are able to interact in principle. As far as I understand, this is related to the cosmic event horizon. That is, fix a particle as the observer. If another particle is within the event horizon, then it could possibly interact with the fixed particle. Otherwise, they will never interact. Notice that particles can leave the event horizon in finite time (there can be subtleties with the notion of time here, though). It is not so for the particle horizon where, on the contrary, more objects may become "seen" by the observer (in their past state!). But let's not care about the particle horizon; let's account only for actual interactions. It turns out (correct me if I'm wrong) that after a finite time, the event horizon will contain no other particles. So there is nothing to interact with. At this moment, computation of the interactions of the fixed particle is literally terminated (regardless of how blunt that sounds, it wouldn't introduce any violation of physical laws for all the other observers). Alternatively, tracking of all the distances from the fixed particle to other particles may be terminated.

If there is a finite number of particles, the same "termination" procedure may be executed for all of them as soon as they become completely "isolated". Now, one could argue that there is such a thing as entanglement. I'd suggest we stop tracking entangled particles as soon as all of them become isolated. For instance, if both electrons in an entangled two-electron state get isolated, tracking of distances to other particles may be terminated.
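
To make this a bit more concrete, here is a minimal Python sketch of the "termination" idea (a toy with made-up numbers and a crude exponential expansion, not a physical simulation): particles sit at fixed comoving coordinates, space expands, and a particle is dropped from the bookkeeping once every other particle is farther away than a fixed event-horizon distance.

```python
# Toy sketch of the "stop tracking isolated particles" idea. All numbers are
# illustrative only: a rough horizon of ~16 billion light years and a made-up
# exponential expansion rate.
import math

HORIZON_LY = 16e9      # event-horizon radius in light years (assumption)
H_PER_GYR = 0.07       # toy exponential expansion rate per Gyr (assumption)

particles = {          # comoving positions in light years (made-up)
    "A": (0.0, 0.0),
    "B": (2e9, 0.0),
    "C": (0.0, 9e9),
}

def proper_distance(p, q, a):
    """Proper distance = scale factor times comoving separation."""
    return a * math.dist(p, q)

active = set(particles)
t, dt = 0.0, 0.5                       # time and step, in Gyr
while active and t < 100.0:
    a = math.exp(H_PER_GYR * t)        # toy scale factor
    for name in list(active):
        separations = [proper_distance(particles[name], particles[other], a)
                       for other in particles if other != name]
        if all(d > HORIZON_LY for d in separations):
            active.remove(name)        # isolated: stop tracking its interactions
    t += dt

if not active:
    print(f"all particles isolated and dropped by t = {t:.1f} Gyr")
else:
    print(f"still tracking {sorted(active)} at t = {t:.1f} Gyr")
```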

Thoughts and ideas are welcome.

A side question, which may shed some light on the subject: does the event horizon have a limit in proper units as time goes to infinity?
 
  • #12
mfb said:
The future doesn't matter for the original argument, however. A computer simulation does not have to be able to continue the computation indefinitely, it just has to reach the current state.

I wouldn't say it's sufficient.
 
  • #13
ErikZorkin said:
... we'd have to have an infinitely growing memory storage ...
You really need to get a grip on the concept of infinity and not use the word casually.
 
  • #14
phinds said:
You really need to get a grip on the concept of infinity and not use the word casually.

Yes, I agree, and I have no idea why the amount of memory would need to increase. The positions may change, but the amount of data needed to describe the position doesn't — unless one wants to keep a complete historical record of the two positions, but that isn't how the universe works, so why would a simulation need to?
 
  • #15
ErikZorkin said:
I wouldn't say it's sufficient.
It is sufficient to reach the current state by definition.
ErikZorkin said:
That is, we would still have to track the positions up to the maximum precision (Planck distance for example, but it doesn't really matter).
The part of today's universe with causal connection to us just has 10^182 Planck volumes, and the observable universe is just two orders of magnitude larger. 500 bits (that is a finite number) are sufficient to describe the position of a classical particle with Planck-scale accuracy. The universe is not classical, of course, but that is a different issue.

If you limit the simulation to 10^185 Planck volumes, you can account for everything that ever interacted or will ever interact with anything in the current observable universe. That's even better than just keeping track of the observable universe. And hey, who cares about three orders of magnitude?
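
A quick order-of-magnitude check of the "finite number of bits" point, taking the 10^185 figure at face value (exponent arithmetic only):

```python
# Bits needed to label one cell out of ~10^185 Planck volumes.
import math

cells = 10.0 ** 185
print(math.ceil(math.log2(cells)))  # 615 -- a few hundred bits per classical position, i.e. finite
```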
 
  • #16
Loren said:
The positions may change, but the amount of data needed to describe the position doesn't

How? Suppose you have a fixed resolution of space. The only way, as I see it, to simulate the space expansion is by creating more space cells. It means that the representation of a particle's position needs to grow indefinitely. Suppose you had a distance of 100 meters and the resolution were 1 m. Now, your space has expanded and the distance became 1000 m. But the resolution stayed the same. Suppose one of the particles has moved by just one cell, i.e., one meter. Now, the number is, say, 999 m. Earlier it would have been just 99 m. How can you argue that 99 requires the same storage as 999?

mfb said:
It is sufficient to reach the current state by definition.
This is not exactly what I'm asking. I am asking about simulating the Universe at any state with a uniform bound on computational resources. Also, someone seems to mention history. I don't think it's necessary to store all the history.

mfb said:
If you limit the simulation to 10^185 Planck volumes, you can account for everything that ever interacted or will ever interact with anything in the current observable universe. That's even better than just keeping track of the observable universe. And hey, who cares about three orders of magnitude?
This is an interesting idea, but I would appreciate a clarification. How does this number of Planck volumes also apply to the future? Do you imply finiteness of the event horizon in proper units? Notice that some cosmologists seem to use the particle horizon to delimit the observable Universe, and that does not converge, unlike the event horizon. Another subtlety is that the observable Universe is a relative notion, regardless of which horizon we use (event or particle). But there shouldn't be any problem provided that the number of observers is finite.
 
  • #17
ErikZorkin said:
How can you argue that 99 requires the same storage as 999?
It is not about the same size, it is about finite and infinite. 1000 is finite. Also, you can limit the computation to particles, where the distances don't matter, and particle numbers are (quite) constant.
ErikZorkin said:
How does this number of Planck volumes also apply to the future? Do you imply finiteness of the event horizon in proper units?
Yes. The part of the universe that can interact with us in the future has a radius of about 15 billion light years, and that number won't change significantly (in particular, it approaches a constant value).
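
To spell out the 99-versus-999 point in code: the storage needed for a distance grows only with the logarithm of the distance, and with a horizon approaching roughly 15 billion light years the bit count is capped outright. A small sketch with rounded constants of my own (order of magnitude only):

```python
# Bits needed to store a distance as an integer count of resolution cells.
import math

PLANCK_LENGTH_M = 1.6e-35   # rounded
LIGHT_YEAR_M = 9.46e15      # rounded

def bits_for_distance(distance_m, resolution_m):
    return math.ceil(math.log2(distance_m / resolution_m))

print(bits_for_distance(100.0, 1.0))                            # 7 bits for 100 m at 1 m resolution
print(bits_for_distance(1000.0, 1.0))                           # 10 bits: 10x the distance, only +3 bits
print(bits_for_distance(15e9 * LIGHT_YEAR_M, PLANCK_LENGTH_M))  # ~200 bits out to the horizon
```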
 
  • #18
ErikZorkin said:
How? Suppose you have a fixed resolution of space. The only way, as I see it, to simulate the space expansion is by creating more space cells. It means that the representation of a particle's position needs to grow indefinitely. Suppose you had a distance of 100 meters and the resolution were 1 m. Now, your space has expanded and the distance became 1000 m. But the resolution stayed the same. Suppose one of the particles has moved by just one cell, i.e., one meter. Now, the number is, say, 999 m. Earlier it would have been just 99 m. How can you argue that 99 requires the same storage as 999?

First of all, if I were creating a computer program to simulate the universe I wouldn't simulate things that have no value. Empty space is just a coordinate system with random noise in it (if you want to count virtual particles).

You are just interested in the relative position of matter and energy with regard to each other. It's like a trucking company keeping track of its GPS equipped trucks in a growing territory. The company only needs to know where the trucks are relative to the dispatch office. The space in between is not important and in the case of the universe all the same anyway.

Just what information were you thinking of assigning every cube of empty Planck space anyway?

Think of it another way. If you simulate the universe are you going to assign memory for the value of PI?

That would be pretty silly as the resources would need to be infinite, but you can compute the value of PI to any needed precision with a simple formula, which is much more efficient than the brute force approach you are thinking of.
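
For the pi point, here is a small sketch (one of many possible formulas; this one uses Machin's identity pi/4 = 4 arctan(1/5) - arctan(1/239) with Python's decimal module): the digits are produced on demand to whatever precision is requested, so nothing infinite ever has to be stored.

```python
# Compute pi to a requested number of significant digits via Machin's formula.
from decimal import Decimal, getcontext

def arctan_inv(x, digits):
    """arctan(1/x) from its Taylor series, summed until further terms stop mattering."""
    getcontext().prec = digits + 10           # working precision with guard digits
    x = Decimal(x)
    total, last = Decimal(0), Decimal(-1)
    term, n, sign = 1 / x, 1, 1
    while total != last:
        last = total
        total += sign * term / n
        term /= x * x
        n += 2
        sign = -sign
    return total

def pi_digits(digits):
    value = 4 * (4 * arctan_inv(5, digits) - arctan_inv(239, digits))
    getcontext().prec = digits
    return +value                             # unary plus rounds to the requested precision

print(pi_digits(50))  # 3.1415926535897932384626433832795028841971693993751
```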

Hang around 40 to 50 years when our own machines grow in performance to the point where humans start creating their own simulated universes.
 
  • #19
Loren said:
That would be pretty silly as the resources would need to be infinite, but you can compute the value of PI to any needed precision with a simple formula

Funny we didn't mention procedural generation yet.
 
  • #20
Loren said:
First of all, if I were creating a computer program to simulate the universe I wouldn't simulate things that have no value.

A bit of an abstract concept here, but if something has no value, does that really mean it carries no information? Can something not have any value in the first place? ... Which I think is at the heart of the OP's question.

For example, if we have an empty grid, does the grid not have any information at all, or do we only gain information once we put a coordinate on that grid? Can we even say that the grid existed whether or not the coordinate was there?

If we think about it literally, there's no way to show that a grid actually exists in space as we know it. But if you look at it from a computational perspective, one can say that the grid is there, because without the grid you cannot place a coordinate that makes sense wherever you want to place one. If quantum mechanics holds true, then any change to this grid, either expanding or contracting, changes the potentialities of all particles in the system to include or exclude those new coordinates, which sounds like an increase in processing power.
 
  • #21
The simplest and most economical way to simulate the universe would only require simulating the people.
 
  • #22
mfb said:
It is not about the same size, it is about finite and infinite. 1000 is finite. Also, you can limit the computation to particles, where the distances don't matter, and particle numbers are (quite) constant.
Didn't get you here. Could you elaborate a bit more precisely?

Loren said:
First of all, if I were creating a computer program to simulate the universe I wouldn't simulate things that have no value. Empty space is just a coordinate system with random noise in it (if you want to count virtual particles).
I do agree. But still, let's go back to my example with just two particles and one distance. Do you mean that you would store it only to a finite precision when the space expands? Then you would lose precision. How would you compute the situation when one particle approaches us by one Planck volume? I do have a feeling that it is solvable, but I can't see exactly how so far.

Justice Hunter, your post is definitely off. Please avoid philosophical discussions here! We are not talking about what "actual existence" means. Commenting on this post might turn the discussion in a false direction.
 
  • #23
Loren said:
The simplest and most economical way to simulate the universe would only require simulating the people.
Please avoid such discussions here.
 
  • #24
ErikZorkin said:
Didn't get you here. Could you elaborate a bit more precisely? I do agree. But still, let's go back to my example with just two particles and one distance. Do you mean that you would store it only to a finite precision when the space expands? Then you would lose precision. How would you compute the situation when one particle approaches us by one Planck volume? I do have a feeling that it is solvable, but I can't see exactly how so far.

Justice Hunter, your post is definitely off. Please avoid philosophical discussions here! We are not talking about what "actual existence" means. Commenting on this post might turn the discussion in a false direction.

If you use your requirement of Planck units then the coordinates must be finite by definition.
 
  • #25
ErikZorkin said:
Please avoid such discussions here.

Why?

You are already talking about a virtual universe; one that is simulated. Define what actual existence means in that context.
 
  • #26
Loren said:
If you use your requirement of Planck units then the coordinates must be finite by definition.
Huh? But what if space expands indefinitely? I still think that my suggestion of no longer tracking particles outside the event horizon is at least technically consistent.

Loren said:
Define what actual existence means in that context.
No. Please do not discuss it here. I am sure there are plenty of other threads that are more appropriate.
 
  • #27
ErikZorkin said:
if space expands indefinitely?

It doesn't, at least not in the relevant sense for this discussion; that is the point mfb has been making (several times now). The distance to the cosmological event horizon does not increase indefinitely; it approaches a constant value because of the effects of dark energy (mfb used the term "accelerated expansion"). The fact that "comoving" objects continue to move apart (which is what "expansion" means in this context) does not change that fact; it just means that over time, more and more objects pass behind the cosmological event horizon and no longer need to be simulated.
 
  • #28
Loren said:
The simplest and most economical way to simulate the universe would only require simulating the people.

I don't think you actually tried to think through how one would realistically do that. Take just one moderately complex work of science, such as one of the SDSS stellar spectroscopic surveys. Your simulation needs to be precise enough that simulated data from a simulated telescope yields hundreds of millions of simulated star spectra which, after rather complicated processing, match stellar evolution models within error bars. It must correlate with theoretical models done by other simulated people on stellar hydrodynamics, stellar fusion, etc. You cannot afford to generate data which is detectably logically inconsistent, or else some of the simulated scientists will one day detect that.
 
  • #29
Consider 2^n, where n is the number of femtoseconds since the big bang. This number grows without bound, but it will always be finite.

Heck, consider n^n^n^n. Same thing.
 
  • #30
ErikZorkin said:
Justice Hunter, your post is definitely off. Please avoid philosophical discussions here! We are not talking about what "actual existence" means. Commenting on this post might turn the discussion in a false direction.

Err, you may have missed the point I was trying to make, but it's okay, I'll elaborate more on your original post.

Let's say you have a 6x5 unit grid. At origin point (0,0) you would have a particle, or rather a coordinate that indicates the existence of a particle at that coordinate.

Particles in the real world obey quantum mechanics. That means that until interacted with, a particle is nothing more than a wave of potentialities across the system.

Now in this 6x5 grid, the particle at the origin point should have another, almost imaginary grid that indicates the potentialities of that particle. That means storing a set of information across all points in that 6x5 grid, with a function that determines the probability that the particle would be located at any of those points.

[Attached image: Grid phys.png, the 6x5 grid of potentialities described above]


Now amplify this example with every particle in the universe, and for every voxel of space (smallest unit, if one actually exists), and you have a near-infinite, but still finite, amount of information being stored on this imaginary grid of potentialities.

But now the real problem is that when space expands, the set of potentialities must also accommodate the new coordinates. That means that in addition to all of the existing potentialities, each particle's potential function must now "adjust" to include all new possible coordinates at which that particle could manifest. This is what I meant: even the cells of the grid of potentialities, although they have no "value", still really do carry values, and these "hidden values" must take up some form of information (processing power).
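
Here is a minimal sketch of what I mean, with toy numbers rather than real quantum mechanics: the particle is stored as a probability over grid cells, and when the grid grows, the distribution has to be extended and renormalized over the new coordinates.

```python
# Toy "grid of potentialities": a probability distribution over grid cells that
# must be extended and renormalized when the grid gains new cells.
import numpy as np

probs = np.zeros((6, 5))                 # 6x5 grid
probs[0, 0] = 1.0                        # particle localized at the origin
probs = 0.7 * probs + 0.3 / probs.size   # smear it out a little (toy "wave")
probs /= probs.sum()

def expand_grid(p, new_shape, leak=0.01):
    """Grow the grid, give the new cells a small total probability, renormalize."""
    grown = np.full(new_shape, leak / np.prod(new_shape))
    grown[:p.shape[0], :p.shape[1]] += (1 - leak) * p
    return grown / grown.sum()

probs = expand_grid(probs, (8, 7))         # "space expanded": more coordinates to cover
print(probs.shape, round(probs.sum(), 6))  # (8, 7) 1.0
```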

If you want a straightforward answer to your post, like many have said before me: if you have a finite smallest size to your grid, then your final answer will be finite. If you have no minimum size... well, that's where it gets bad; renormalization comes in and we start losing control of the physics, but that's a different tale.
 
  • #31
PeterDonis said:
It doesn't, at least not in the relevant sense for this discussion; that is the point mfb has been making (several times now). The distance to the cosmological event horizon does not increase indefinitely; it approaches a constant value because of the effects of dark energy (mfb used the term "accelerated expansion"). The fact that "comoving" objects continue to move apart (which is what "expansion" means in this context) does not change that fact; it just means that over time, more and more objects pass behind the cosmological event horizon and no longer need to be simulated.
Nuff said. That's kinda what I meant.

To Justice Hunter: I see your point now. Good remark. Still, it, and the comments of the others ITT, more or less match my original suggestion.

Thanx for the great discussion folks!
 
  • #32
nikkkom said:
I don't think you actually tried to think through how one would realistically do that. Take just one moderately complex work of science, such as one of the SDSS stellar spectroscopic surveys. Your simulation needs to be precise enough that simulated data from a simulated telescope yields hundreds of millions of simulated star spectra which, after rather complicated processing, match stellar evolution models within error bars. It must correlate with theoretical models done by other simulated people on stellar hydrodynamics, stellar fusion, etc. You cannot afford to generate data which is detectably logically inconsistent, or else some of the simulated scientists will one day detect that.

All of the data from the SDSS fits on a hard drive. What's the big deal?

I don't have the reference handy, but you can easily look it up: we should be able to simulate a human brain by 2020 to 2030. Another couple of decades beyond that, we should be able to simulate every human mind that has ever lived.

With that kind of computing power it shouldn't be hard to suspend disbelief. Ray Kurzweil's The Singularity Is Near, page 148, states that the human brain functions at about 10^16 calculations per second (CPS). Current supercomputers in China produce 3.39×10^16 CPS. The hardware is there, but we haven't figured out the model yet.

Nevertheless, we will, and the exponential rate of technological growth should realistically put simulating a universe within the next half century, at least in a rudimentary way. What will the next thousand years of development bring?

To be clear, I am not saying that I am a believer that we live in a simulated universe, but I am not closed minded enough to think that it can't be done. We are standing at the doorstep of doing those exact things for ourselves and it seems very likely that you will witness the dawn of such things. Perhaps you will be lucky and witness even more.

The problem with the original question and the doubts he had about it are simply tied to the difficulty of stretching one's mind to comprehend what is possible. We live in an age where we feel so secure in our beliefs about what is possible, yet we fail to take the lessons from history seriously. We are so sure of ourselves.

100 years ago most leading scientists did not believe we could get to the Moon, much less do it in the next 50 years. Learned men failed to recognize the possibilities because they were grounded in their ways. That couldn't happen today, could it?

In 200 years Star Trek will not be a reality. In 200 years we won't recognize ourselves using the eyes of today. Yet we dream with those eyes, just like our ancestors and we will be no less stunned to see our future in 100 years than Einstein, Planck, Bohr, Fermi, or Goddard would be to see what we learned today. In fact, it will be worse.

We are at the knee of explosive exponential technological growth. We are already seeing the very beginning of it now, but in the course of the next few decades things are going to change so much faster than you or anyone else can imagine.

I think it is myopic to think that computationally simulating the universe is impossible. We can't do it with today's technology, but tomorrow is another day. Again, I am not saying we live in a simulation. That wasn't the original question anyway.
 
  • #33
ErikZorkin said:
Didn't get you here. Could you elaborate a bit more precisely? I do agree. But still, let's go back to my example with just two particles and one distance. Do you mean that you would store it only to a finite precision when the space expands? Then you would lose precision. How would you compute the situation when one particle approaches us by one Planck volume? I do have a feeling that it is solvable, but I can't see exactly how so far.

Justice Hunter, your post is definitely off. Please avoid philosophical discussions here! We are not talking about what "actual existence" means. Commenting on this post might turn the discussion in a false direction.

If the universe is a set of discrete quantum points, then Planck units are the smallest size for any measurement. One Planck length is about 1.6×10^-35 meters. That's a discrete quantum. You can't get any more precise than that, and so every position in the universe can be described by a finite set of numbers no matter how big the universe is or gets.

The numbers may appear big, but they still resolve to finite values.
 
  • #34
@Loren: While this is certainly a possible future, it is not the only one. Imagine a Roman writing the same things around the year 0, and then see how the world looked 500 or 1000 years later. Learning from the past also means realizing that there was not always progress. Stagnation can happen, sometimes it even goes backwards and knowledge and technology get lost. We get more and more dependent on a society that depends on trust in the society...

Another issue: we don't know how long Moore's law will hold. Certainly not forever, as total computation power in the universe is limited and Moore's law would reach that limit within thousands of years. It is a really hard fundamental limit: to simulate a whole universe in every detail, you need a computer in a more complex universe. Unless you consider the universe itself as "computing itself".
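
A rough consistency check of the "within thousands of years" remark, under loudly stated assumptions: today's worldwide computing rate is taken as very roughly 10^21 operations per second with a steady two-year doubling time (both numbers are made up for the estimate); only the idea of a ~10^120-operation cap comes from the discussion here.

```python
# How many two-year doublings until one year of computing hits a 1e120-operation cap?
import math

limit_ops = 1e120                    # often-quoted cap for the observable universe
todays_rate = 1e21                   # ops per second worldwide (assumption)
seconds_per_year = 3.15e7

ops_per_year_now = todays_rate * seconds_per_year
doublings = math.log2(limit_ops / ops_per_year_now)
print(f"~{doublings * 2:.0f} years of steady doubling")  # ~600 years -- well within "thousands"
```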
 
  • #35
mfb said:
@Loren: While this is certainly a possible future, it is not the only one. Imagine a Roman writing the same things around the year 0, and then see how the world looked 500 or 1000 years later. Learning from the past also means realizing that there was not always progress. Stagnation can happen, sometimes it even goes backwards and knowledge and technology get lost. We get more and more dependent on a society that depends on trust in the society...

Another issue: we don't know how long Moore's law will hold. Certainly not forever, as total computation power in the universe is limited and Moore's law would reach that limit within thousands of years. It is a really hard fundamental limit: to simulate a whole universe in every detail, you need a computer in a more complex universe. Unless you consider the universe itself as "computing itself".

Technology advances in spurts that can be seen as S-shaped steps. However, the overall average of that growth is still exponential.

Of course we could have an extinction level event that stops it.

Your example about Rome is probably not a good one. The problem is that technology's advancement was then at the very beginning of the exponential curve, and it took a millennium for any significant change to occur. It wasn't until almost two millennia later that things got interesting. The sum of human knowledge is now doubling every 12 months!

That rate will continue to increase exponentially once more and more machines are developed to solve more and more problems. We use machines to build even more complex machines, so it has already begun and machines will soon surpass our own abilities to develop technology. That's where the real explosion will happen.

You are right that we really can't see where this will go. It is just too big and too fast to fathom, but as long as that train isn't stopped it will most likely leave us breathless.
 
  • #36
Loren said:
All of the data from the SDSS fits on a hard drive. What's the big deal?

The deal is that this data, when analyzed by scientists, needs to be consistent with all other scientific observations and their analysis. Not "mostly consistent", not "looking okayish at first glance". You can't have SDSS data on stellar spectra and stellar population statistics contradict your elementary particle theories derived from accelerator experiments.
 
  • #37
Loren said:
If the universe is a set of discrete quantum points, then Planck units are the smallest size for any measurement. One Planck length is about 1.6×10^-35 meters. That's a discrete quantum. You can't get any more precise than that, and so every position in the universe can be described by a finite set of numbers no matter how big the universe is or gets.

The numbers may appear big, but they still resolve to finite values.

Your argument boils down to two points:
- Universe needs only a finite amount of data and computations to be simulated
- We rapidly increase our computational resources

From this you assume it means it would be possible to simulate a Universe.

I think you fail to realize that a "finite" amount of data can nevertheless be so vast that there is no chance to tackle it, regardless of how fast we would evolve our computers.

Here is an example:
https://en.wikipedia.org/wiki/Graham's_number

Easy, eh? It's only a single finite number. It's not even aleph-zero. Now, can we calculate and write down its value in base 10? Can we ever do that? Say, after a billion years of advancement in computers?
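
To see why "finite" is cold comfort here, a tiny sketch of the Knuth up-arrow operation that Graham's number is built from (safe only for the very smallest arguments):

```python
# Knuth's up-arrow a ↑^n b for tiny inputs; Graham's number iterates this 64 times
# starting from 3 ↑↑↑↑ 3, which is already hopelessly beyond any physical storage.
def up(a, n, b):
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up(a, n - 1, up(a, n, b - 1))

print(up(3, 1, 3))  # 3^3 = 27
print(up(3, 2, 3))  # 3^3^3 = 7625597484987
# up(3, 3, 3) is a power tower of about 7.6 trillion threes -- do not try to evaluate it.
```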
 
  • #38
nikkkom said:
Your argument boils down to two points:
- Universe needs only a finite amount of data and computations to be simulated
- We rapidly increase our computational resources

From this you assume it means it would be possible to simulate a Universe.

I think you fail to realize that a "finite" amount of data can nevertheless be so vast that there is no chance to tackle it, regardless of how fast we would evolve our computers.

Here is an example:
https://en.wikipedia.org/wiki/Graham's_number

Easy, eh? It's only a single finite number. It's not even aleph-zero. Now, can we calculate and write down its value in base 10? Can we ever do that? Say, after a billion years of advancement in computers?

All I can say is remember this day and your posts. I think that you will be surprised, but time will tell.
 
  • #39
Good day again, and apologies for a necrobump (or is it?).

While this was a nice and productive discussion on a matter that has seemingly been little addressed in the related literature, I came up with a new question that has to do with the current thread.

Namely, quantum and thermal fluctuations, which theoretically take place even in the case of the Heat Death. A nasty consequence of these might be Poincaré recurrence, the creation of a new universe, a Boltzmann brain, or the like. Even if the particles are completely isolated by the event horizons, there are still nonzero probabilities that something appears out of "nothing".

This raises the question of whether any finite computational power can simulate something like this.

I think the problem might be in having a sort of "true random number generator" in our imaginary computer. If there is such a thing as TRUE randomness, then, probably, the universe cannot be simulated. If, however, every randomness is only apparent, the situation might be rescued. Imagine that the simulated quantum mechanics is deterministic (such models exist, e.g., Bohmian mechanics -- even though it is seen as unpleasant due to nonlocality, it might fit the current discussion). Then every process which appears "random" to the internal observer might actually be the result of some sophisticated algorithm (refer, for example, to pseudorandom generators, which have extremely long periods sufficient for any practical use).
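
A small sketch of the "only apparent randomness" point: an ordinary deterministic pseudorandom generator (Python's default Mersenne Twister, period 2^19937 - 1) produces output that looks random from the inside but replays identically from the same seed.

```python
# Deterministic "randomness": the same seed reproduces the same history exactly.
import random

def simulated_history(seed, steps=5):
    rng = random.Random(seed)            # Mersenne Twister, fully determined by the seed
    return [rng.random() for _ in range(steps)]

run_a = simulated_history(seed=12345)
run_b = simulated_history(seed=12345)
print(run_a == run_b)                    # True: reproducible outside, "random-looking" inside
```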

So might a deterministic picture resolve the issue of Boltzmann brain and related phenomena which preclude the possibility of simulating an indefinitely expanding universe with finite computational power?

Comments / thoughts are welcome.
 
  • #40
Who said you have to get done with the computation of the infinite future of our universe with finite computational power? The simulation can be stopped once it gets boring, or just kept running, simulating further into our future over the runtime of the simulation.

The many-worlds interpretation is deterministic and local.
 
  • #41
As was said earlier, the horizon is asymptotic to a limit but comoving material continues to expand. The number of particles within the observable universe will therefore fall, freeing up resources to model new entities should these be modeled by your program. Location can be an attribute of each particle with finite resolution, hence finite resources will always be adequate, and in fact we may be past the peak since the expansion is already accelerating.

This line of thinking isn't going to define any limit on the universe other than that it couldn't be infinite, unless of course it runs on an infinite array of distributed processors. That only raises the question of the size of the universe in which the computer is running, so this appears to be heading towards the philosophical dead end of infinite regress.
 
  • #42
mfb said:
Who said you have to get done with the computation of the infinite future of our universe with finite computational power? The simulation can be stopped once it gets boring, or just kept running, simulating further into our future over the runtime of the simulation.

The many-worlds interpretation is deterministic and local.

You are right. But "boring" may be understood differently. Having nothing to interact with can be classified as "boring". But having a Boltzmann brain occur right in front of you may not be!
 
  • #43
GeorgeDishman said:
The number of particles within the observable universe will therefore fall, freeing up resources to model new entities should these be modeled by your program

I got your point. But what might appear is not (to my understanding) limited in any way, at least dimensionally and in terms of complexity. A new universe may, say, appear. Also, as I must have stated somewhere, the simulation time should be finite. The criterion was to stop the simulation when each particle gets isolated. But the random occurrence of literally anything breaks this idea apart.
 
  • #44
Why would we need an external universe to do the simulation computations for this one?
The numbers are big, but the methods of writing symbols and manipulating them are not.

The description of Graham's number takes a page, showing the construction of G64. All numbers could be described as a base Graham plus Exp residue like this:
A particular number is G12 + G3 + 8.2x10^246 + 3.290869x10^-29 or some similar way.

How big a number and how many of them are needed in a description of the universe as phase space in Planck units?
 
  • #45
bahamagreen said:
Why would we need an external universe to do the simulation computations for this one?

How could a simulation run in a computer built in this universe be the universe in which it is running? The conversation isn't about simulating the universe.
 
  • #46
Right. The conversation is about a universe as a (finite) computation.

Meanwhile, I have serious doubts whether it is possible given the standard interpretation of QM. Even if the universe approaches a heat death, literally anything can still happen and so forever. Perhaps, Bohmian mechanics is more suitable in this context. At least, it addresses the final state better.
 
  • #47
The volume of the observable universe is roughly 9.5×10^184 cubic Planck units. While not infinite, it may as well be for all practical purposes.
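
A quick back-of-the-envelope check of that figure, with rounded constants and a comoving radius of about 46.5 billion light years:

```python
# Observable-universe volume measured in Planck volumes (rounded constants).
import math

PLANCK_LENGTH_M = 1.616e-35
LIGHT_YEAR_M = 9.461e15
radius_m = 46.5e9 * LIGHT_YEAR_M

volume_m3 = (4.0 / 3.0) * math.pi * radius_m ** 3
planck_volumes = volume_m3 / PLANCK_LENGTH_M ** 3
print(f"{planck_volumes:.1e}")  # ~8.5e184 -- the same ballpark as the figure above
```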
 
  • #48
Maybe I'm not understanding what "universe as computation" means.
Is it a model that is fully detailed and complete?

Like this?
The basic Planck units are length, mass, time, charge, and temperature
The volumes are assigned with respect to times as events
Current volumes is about 9.5×10^184, current times is about 8×10^60
Current events is about 7.6×10^245
That leaves an attribute of mass, charge, and temperature for each volume at each time (event)
Each volume/time event might take three attributes (mass, charge, temperature), or all 16 derived Planck units
These attributes will be "small numbers" up to singularity.
I don't think the computation itself, despite being self referential, counts as a singularity.
Would these events qualify as indistinguishable micro-states where we need N = (7.6×10^245)! or about 10^10^218?
 
  • #49
Temperature is a macroscopic quantity; it does not exist at the level of individual particles.

Thermodynamics sets limits on the amount of computation you can get done - it is much lower than one operation per Planck (4d) volume. Bremermann's limit is an example: if you scale it up from the Earth to the observable universe, you get something like 10^120 operations. A computer simulating our observable universe would not need more than that.
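
A rough reconstruction of that scale-up with round numbers (the mass and age values below are my own commonly quoted estimates, not something derived in this thread):

```python
# Bremermann's limit (~1.36e50 bits/s per kg) scaled to the observable universe.
bremermann_bits_per_s_per_kg = 1.36e50
ordinary_matter_mass_kg = 1.5e53       # rough estimate (assumption)
age_s = 4.3e17                         # ~13.8 billion years in seconds

total_ops = bremermann_bits_per_s_per_kg * ordinary_matter_mass_kg * age_s
print(f"{total_ops:.0e}")  # ~9e120, within an order of magnitude of the 10^120 above
```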
 
  • #50
Computing requires three things, in the most sensible framing:
  1. a program
  2. some form of storage
  3. a means to execute the program
So let us look at the universe. It is big and, for all reasonable purposes, finite, as light has a finite speed and information cannot (as far as we know) travel faster than that.
This also gives us a finite amount of information at each step the program takes, which means it can be traversed in finite time.

Now let's state that the universe consists of a finite number of programs that mutate at each time step, in such a way that the information each one receives changes the program as well as the stored information (the latter is not required if the program can be altered so that the data becomes the program). Self-mutating programs are well known throughout the computer science community. Running them can be quite tricky, but it is doable.
So instead of having one giant program, you end up with a finite number of small ones that work together to create what we perceive as the universe. The logic in those programs changes, and the arrival of new information also changes their behaviour in the next cycle.
A universe that is run by one program violates its own limitations, as it would have to calculate all elements at each time step regardless of distance. (Adding an extra dimension could resolve this, but then information could travel from one place to the next instantaneously, which is not observed.) That requires a tremendous amount of effort for a single entity or program; for all practical purposes one could mark it as an infinite number of steps. But I don't think the universe is built like that. It is a huge array of parallel processing entities.
The one argument that this thought experiment may overlook is what those programs are running on. For now I just assume that each program is both processor and program.
I think a finite number of programs that communicate with each other through the exchange of information, which travels at a finite speed, and that change their behaviour based on that information, would be doable.
Of course this is just an idea of how I would build such a simulation using finite resources and a staggering number of configurations.
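
As a toy version of the "huge array of parallel processing entities" picture, here is a plain one-dimensional cellular automaton (nothing more than that): every cell is a tiny program that updates from its own state and its immediate neighbours, so influence can spread at most one cell per step, a built-in finite speed limit.

```python
# A 1-D cellular automaton (Rule 110): purely local updates, finite "speed of light".
def step(cells):
    n = len(cells)
    rule = {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
            (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}
    return [rule[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])] for i in range(n)]

cells = [0] * 30 + [1] + [0] * 30        # one "excited" cell in the middle
for _ in range(10):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)                  # each cell looks only at its neighbours
```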

 