Must every possible thing happen in a Tegmark L1 multiverse?

AI Thread Summary
Max Tegmark's 'Mathematical Universe' hypothesis posits that in a Level 1 multiverse, everything that can happen does happen, relying on the assumption that each Level 1 spacetime is spatially infinite and that a Level 2 multiverse is ergodic. Critics argue that Tegmark's conclusion about events occurring in every Level 1 spacetime does not logically follow from ergodic theory, which suggests that while an event may occur almost surely, it does not guarantee occurrence in every instance. The discussion raises questions about Tegmark's understanding of probability theory and the implications of infinite spacetimes. Additionally, the conversation touches on the nature of our universe's finite age and the underlying dynamics that could explain its orderly state. The complexities of these concepts highlight the ongoing debate in understanding the multiverse and the fundamental laws of physics.
andrewkirk
I've been looking at one of Max Tegmark's articles about his 'Mathematical Universe' hypothesis, here on arXiv.

As a preliminary, note that Tegmark's framework has four 'levels' of multiverses, with each level being an infinite collection of multiverses at the level below it. The third level concerns quantum superpositions and has strong similarities to Everett's Many-Worlds framework. A Level 1 'multiverse' is just a single spacetime, but Tegmark assumes each Level 1 spacetime is spatially infinite.

One of the attention-grabbing aspects of Tegmark's hypothesis is that it says that everything that can happen, does happen.

That is not surprising when we are talking about quantum superpositions, as Everett says more or less the same thing.

What is surprising is that Tegmark claims this to be the case for every single Level 1 spacetime as well. The claim relies on an assumption that a single Level 2 multiverse, which is an infinite collection of Level 1 spacetimes, considered as a probability space, is ergodic, and that each Level 1 spacetime is spatially infinite. He writes:

Max Tegmark said:
The physics description of the world is traditionally split into two parts: initial conditions and laws of physics specifying how the initial conditions evolve. Observers living in parallel universes at Level I observe the exact same laws of physics as we do, but with different initial conditions than those in our Hubble volume. The currently favored theory is that the initial conditions (the densities and motions of different types of matter early on) were created by quantum fluctuations during the inflation epoch (see section 3). This quantum mechanism generates initial conditions that are for all practical purposes random, producing density fluctuations described by what mathematicians call an ergodic random field.
Ergodic means that if you imagine generating an ensemble of universes, each with its own random initial conditions, then the probability distribution of outcomes in a given volume is identical to the distribution that you get by sampling different volumes in a single universe.

In other words, it means that everything that could in principle have happened here did in fact happen somewhere else.

That last paragraph ('In other words...') does not seem to me to follow from what goes before it. Ergodicity is about expected values, and ergodic properties such as the one I linked above are carefully constrained with uses of the technical terms 'almost surely' or 'almost everywhere'. In an ergodic ensemble of infinite spacetimes, for a given spacetime S, the probability is 1 that an event E that occurs anywhere in the ensemble will occur somewhere in S. But that means that E occurs almost surely somewhere in S, which is not the same as saying that it does in fact occur in S.
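To illustrate the distinction with a toy example of my own (not from Tegmark's paper): in an infinite sequence of fair coin flips, the event 'a head eventually appears' has probability 1, yet the all-tails sequence is still a point of the sample space. A short Python sketch:

```python
from fractions import Fraction

def p_all_tails(n):
    """Exact probability that n fair flips are all tails: (1/2)^n."""
    return Fraction(1, 2) ** n

# (1/2)^n -> 0 as n grows, so "a head eventually appears" holds
# almost surely, i.e. with probability 1 -- but the all-tails
# sequence TTTT... remains a legitimate outcome of the experiment;
# probability 1 does not exclude it.
for n in (1, 10, 100):
    print(n, float(p_all_tails(n)))
```

This is exactly the gap between 'almost surely' and 'surely' that the quoted passage glosses over.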

I am trying to work out why Tegmark might have written this. Possibilities that occur to me are:

  1. Tegmark trained as a physicist rather than a mathematician, has never studied formal probability theory, and does not realise that, in an infinite sample space, probability one does not imply certainty.
  2. Tegmark has learned that, but has forgotten it.
  3. Tegmark knows the distinction, but in the course of simplifying his conclusion in order to reach a wider audience, 'simplified' it to the point of making it misleading (which reminds me of Einstein's dictum: 'we should simplify as much as possible, but not more than that').
  4. There is some other reason, beyond mere ergodicity, why every possible thing must happen somewhere in an infinite Level 1 spacetime; or
  5. I have misunderstood ergodicity.
I would greatly appreciate people's thoughts on this.
 
"Everything" happens for a reason... not anything, not just some things, "everything" happens. Ergodic is a new term to me, but my initial sense of it is something like "coming full circle" through all possibilities. In an infinite space with only finitely many possibilities, it seems like common sense that things would have to repeat...
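That intuition can be made concrete with a small sketch (my own illustration, not from the thread): with finitely many outcomes and many independent trials, missing any particular outcome becomes vanishingly unlikely, though never strictly impossible.

```python
import random

# Chance that a fair die never shows a given face in n rolls is
# (5/6)^n.  For n = 1000 that is below 1e-79 -- negligible, but not
# zero: a run that forever avoids one face remains a possible outcome.
random.seed(0)  # arbitrary seed, for reproducibility
rolls = [random.randint(1, 6) for _ in range(1000)]
print(sorted(set(rolls)))   # all six faces show up
print((5 / 6) ** 1000)      # tiny, but not 0
```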
 
andrewkirk said:
That last paragraph ('In other words...') does not seem to me to follow from what goes before it. Ergodicity is about expected values, and ergodic properties such as the one I linked above are carefully constrained with uses of the technical terms 'almost surely' or 'almost everywhere'. In an ergodic ensemble of infinite spacetimes, for a given spacetime S, the probability is 1 that an event E that occurs anywhere in the ensemble will occur somewhere in S. But that means that E occurs almost surely somewhere in S, which is not the same as saying that it does in fact occur in S.

This is a bit off the cuff, but as I recall the way it works is that the system isn't actually random: the fundamental laws are fully deterministic (so far as we know). You get probability out by making simplifying assumptions about how the underlying physics works. So "P > 0 that the system is in state S at time T" is another way of saying that the system spends a nonzero fraction of its time in that state.

In order to have a system where every state occurs at some point, you have to have underlying dynamics that, given an infinite amount of time, fill the entire allowable phase space of the system in question. I can't be confident that this condition is satisfied in every conceivable mathematical structure that might describe the fundamental mathematical laws of a universe.
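The circle rotation x → (x + a) mod 1 is a standard minimal example of that condition (a sketch of the general idea, nothing specific to cosmological models): the same deterministic rule does or does not fill its space depending on the parameter.

```python
import math

def orbit(a, steps, x0=0.0):
    """Iterate the deterministic circle rotation x -> (x + a) mod 1."""
    xs, x = [], x0
    for _ in range(steps):
        xs.append(x)
        x = (x + a) % 1.0
    return xs

# Rational step: the orbit is periodic and visits only 4 of the
# uncountably many points of the circle -- determinism alone does
# not make every state occur.
rational_pts = set(orbit(0.25, 1000))
print(len(rational_pts))        # 4

# Irrational step: every iterate is new and the orbit is dense,
# coming arbitrarily close to every point of the circle.
irrational_pts = set(round(x, 9) for x in orbit(math.sqrt(2) - 1, 1000))
print(len(irrational_pts))      # 1000
```

Whether the "fill the whole phase space" property holds is thus a genuine extra assumption about the dynamics, not a consequence of determinism.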
 
It seems fairly obvious our universe is not of infinite age. Which raises a question that is difficult to ignore: why do we reside in a universe of measurably finite age? It seems very odd that we would happen to arrive on the scene at a measurable age in a temporally unbound universe. Further compounding matters, our universe appears to have been in an orderly state as far back as we can determine. Random chance does not offer an entirely satisfactory explanation for these coincidences.
 
Chronos said:
It seems fairly obvious our universe is not of infinite age. Which raises a question that is difficult to ignore: why do we reside in a universe of measurably finite age? It seems very odd that we would happen to arrive on the scene at a measurable age in a temporally unbound universe. Further compounding matters, our universe appears to have been in an orderly state as far back as we can determine. Random chance does not offer an entirely satisfactory explanation for these coincidences.
That we observe our universe in a state with a measurably finite age doesn't seem odd at all to me, given the dynamics we observe. By the time the age of our universe is no longer observable, there will also be no more observers.

The interesting question here is what caused our universe to start in an extremely low-entropy state early on. That question certainly cannot be answered by the simplistic picture of random thermal fluctuations, with very rare ones producing universes (it leads to the Boltzmann brain problem). There have been many attempts to put forth a model that actually does make sense, but so far we just don't know.
 