Fine-tuning, the multiverse, and a double standard

Discussion Overview

The discussion revolves around the concepts of fine-tuning in the universe and the multiverse hypothesis, particularly focusing on the implications of extrapolating physical constants and the validity of such extrapolations in scientific discourse. Participants explore the relationship between constants of nature and the emergence of life, as well as the philosophical and methodological implications of modeling the universe.

Discussion Character

  • Debate/contested
  • Conceptual clarification
  • Exploratory

Main Points Raised

  • Some participants argue that the assertion that the universe is fine-tuned for life relies on extrapolating physics into unobservable realms, similar to the criticisms levied against the multiverse hypothesis.
  • Others propose that computer simulations can provide insights into how changes in physical constants might affect the emergence of life, although they acknowledge that simulations differ from actual experiments.
  • It is suggested that the requirements for life are complex and may depend on a narrow range of physical constants, with specific examples such as the cosmological constant being discussed as critical for structure formation.
  • Some participants express skepticism about the ability to predict life forms from simulations, emphasizing the limitations of current knowledge regarding the conditions necessary for life.
  • There is a discussion about the philosophical implications of theoretical extrapolation in science, questioning whether such extrapolations should be permitted for both fine-tuning and multiverse discussions.
  • Concerns are raised about the assumptions inherent in simulations, with a focus on how small errors can lead to significant deviations in outcomes, potentially misleading interpretations of fine-tuning.

Areas of Agreement / Disagreement

Participants express multiple competing views regarding the validity of extrapolating physical constants and the implications for both fine-tuning and the multiverse hypothesis. The discussion remains unresolved, with no consensus on the status of these extrapolations in scientific inquiry.

Contextual Notes

Limitations include the dependence on unobservable realms, the challenges of accurately modeling complex systems, and the unresolved nature of the relationships between fundamental constants. The discussion highlights the uncertainty surrounding the requirements for life and the implications of theoretical models.

palmer eldtrich
I just watched a talk from George Ellis about cosmology. I believe there is a serious double standard implied in it.
He says
(A) the universe is fine tuned for life. Change one of the constants of nature like the mass of the electron and you won't have life in the universe.
He also says
(B) the multiverse might explain this fine-tuning, but the multiverse is based on extrapolating physics to unknown realms that we can't observe and is therefore questionable science.

However, doesn't statement (A), that life can't exist unless we have these very finely tuned constants, also depend on extrapolating physics into realms we can't observe? After all, no one can ever do an experiment whereby we change a constant of nature and then see if life still emerges. Perhaps changing one constant leads another to shift to compensate; who knows? The conclusion of fine-tuning seems to rest on exactly the kind of unverifiable extrapolation that Ellis accuses the multiverse proponents of.
It seems to me that if we allow such extrapolations, then both A and B are fair game, but if we don't, then neither A nor B is.
 
You can do computer simulations with tweaked constants.

I'm sure he was referring to life like us, we require atoms and they have to act a very particular way, our chemistry is incredibly advanced. Making a small tweak in a big engine can easily break the engine.
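As a cartoon of what "tweaking a constant" in a simulation could look like (this criterion is invented purely for illustration, not real atomic physics): in a Bohr-model picture, the innermost electron of element Z moves at roughly v/c ≈ Z·α, so one crude proxy for "chemistry as we know it survives" is that even the heaviest natural elements stay non-relativistic.

```python
FINE_STRUCTURE = 0.0072973525693  # measured fine-structure constant (dimensionless)

def atoms_roughly_ok(alpha, z_max=92):
    """Toy criterion: in a Bohr model, the innermost electron of element Z
    moves at v/c ~ Z * alpha. Demand that even uranium (Z = 92) stays
    non-relativistic, i.e. Z * alpha < 1. A cartoon, not real atomic physics."""
    return z_max * alpha < 1.0

print(atoms_roughly_ok(FINE_STRUCTURE))       # True: 92 * 0.0073 ~ 0.67
print(atoms_roughly_ok(10 * FINE_STRUCTURE))  # False: tenfold alpha breaks heavy atoms
```

The serious versions of such scans exist in the fine-tuning literature, but the point here is only the shape of the exercise: pick a viability criterion, vary one constant, see where the criterion fails.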
 
newjerseyrunner said:
You can do computer simulations with tweaked constants.

I'm sure he was referring to life like us, we require atoms and they have to act a very particular way, our chemistry is incredibly advanced. Making a small tweak in a big engine can easily break the engine.

A simulation in a computer is not the same as doing an experiment or collecting data from nature. You can't run a real experiment in which you change the constants of nature.
 
newjerseyrunner said:
I'm sure he was referring to life like us, we require atoms and they have to act a very particular way, our chemistry is incredibly advanced. Making a small tweak in a big engine can easily break the engine.
Doesn't even need to be, "life like us." It's very difficult to estimate the probability of life very unlike ourselves, but you can still approach the problem very broadly by examining the fundamental requirements of life, the most fundamental of which is the formation of structure. Life cannot exist if there is no gravitational collapse: you need matter to come together if there is to be any hope of any complex structures. And it turns out that you need a pretty narrow range of the known physical constants for any structures to form at all. The cosmological constant is particularly relevant: if the cosmological constant has a positive value greater than about ##10^{-120}##, then no structures can ever form because everything is blown away from everything else in an accelerated expansion before anything has a chance to form any kind of structure. If the cosmological constant has a negative value less than about ##-10^{-120}##, then the universe almost instantly recollapses and there's no time for any life to form.

Physicists have, for a long time, tried to get around this conundrum by assuming that there must be some kind of symmetry that sets the cosmological constant to zero. No such symmetry has been found.

You can delve into this problem a bit more carefully by examining other basic requirements. For example, if gravity is too strong, then anything that does enter into gravitational collapse simply becomes a black hole. If the constants that govern nuclear physics are off a bit then no heavy elements can ever form, so there's no possibility of complex chemistry.

The real difficulty is that in order to approach the problem, you also need some kind of theory which describes the possible outcomes for the various physical constants. For example, imagine that we have physical constants A and B. A is measured to take the value 0.5, while B is measured to take the value 0.000000005. Naively the value of A=0.5 doesn't seem too weird. There's no need for a detailed explanation for numbers that are close to 1. But the value of 0.000000005? That's odd. It demands an explanation. One possible explanation might be that B can take any value from 0-1, but only values that are close to 0 permit any life to exist, so living organisms always observe values of B close to 0. Another explanation could be that the constants A and B are related: instead of being independent values, there's some fundamental relationship that sets B = A / 1,000,000,000, while A can take any value from 0-1. Suddenly the value for B no longer seems weird: it's a consequence of the way the fundamental laws behave and the random result of A.
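The cosmological-constant argument above can be sketched with a deliberately crude toy model (all units arbitrary; rho0 = 1 at scale factor a = 1, and the required growth factor of 100 is an invented stand-in for how long perturbations need matter domination). With H² ∝ ρ₀/a³ + Λ, the |Λ| term overtakes the matter term at a_crit = (ρ₀/|Λ|)^(1/3); for Λ > 0 that is when accelerated expansion takes over, and for Λ < 0 it is roughly when expansion turns around.

```python
A_NEEDED = 100.0  # toy stand-in: scale factor must grow ~100x under matter domination

def fate(lam, rho0=1.0):
    """Cartoon fate of a flat toy universe with H^2 = rho0/a^3 + lam
    (arbitrary units, matter density rho0 at scale factor a = 1)."""
    if lam == 0.0:
        return 'structure'
    # scale factor at which the |lam| term overtakes the matter term
    a_crit = (rho0 / abs(lam)) ** (1.0 / 3.0)
    if lam > 0:
        # accelerated expansion begins at a_crit; too early -> nothing clumps
        return 'structure' if a_crit > A_NEEDED else 'blown apart'
    # lam < 0: expansion turns around near a_crit; too early -> recollapse
    return 'structure' if a_crit > A_NEEDED else 'recollapse'

print(fate(1e-7))   # 'structure'  : |lam| small enough, matter era lasts
print(fate(0.01))   # 'blown apart': acceleration kicks in too soon
print(fate(-0.01))  # 'recollapse' : turnaround comes too soon
```

Only |Λ| below about ρ₀/A_NEEDED³ permits structure in this toy, which is the qualitative shape of the real ±10⁻¹²⁰ bound described above, with the actual numbers replaced by invented ones.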
 
So a computer simulation of the universe with the initial conditions one million years after the Big Bang would predict bacteria, earthworms, birds, etc?
 
Thecla said:
So a computer simulation of the universe with the initial conditions one million years after the Big Bang would predict bacteria, earthworms, birds, etc?

Theoretically, given a powerful enough computer, the necessary data, and a long enough time to run, it probably would. Of course, there's no way to gather the necessary data, since we can't go back in time to one million years after the Big Bang. And if you don't have those exact conditions, there's no telling what the computer will predict (within reason).
 
Thecla said:
So a computer simulation of the universe with the initial conditions one million years after the Big Bang would predict bacteria, earthworms, birds, etc?
What? No. Definitely not. Firstly, one million years isn't anywhere close to enough time. Our universe was basically nothing but a hydrogen and helium gas at that time. No galaxies, no stars. Just some parts of the gas were a bit more dense than others.

Second, we don't know anywhere near enough to say definitively what conditions are necessary for life. The best we can do is say that some conditions would make life (any life) impossible. There are probably other ways the universe could be that also make life impossible, but we don't know for sure just how specific the requirements for life are.
 
The issue I'm concerned with is not whether one can realistically model the universe and see if life should arise given different conditions. The issue is the status of the process itself. In particular, if theoretical extrapolation beyond what is observable should not be allowed in science, as Ellis seems to imply with the multiverse, then by the same rules we should not even talk about a universe with different constants: we can't observe the results, so such discussion is not science.
Either extrapolating beyond what we can directly observe is OK or it's not. If it is, then both the fine-tuning problem and the multiverse as a solution are fair moves. But if it's not OK, then neither the question (fine-tuning) nor the answer (a multiverse) should be allowed. I don't see why one is OK and the other is not.
 
Theoretically, if you had a precise equation for the nature of the universe (which we don't), and it's deterministic (which we aren't sure of), and you knew the exact initial conditions (which isn't possible), you could simulate the universe, and after 13.8 billion years that simulation would ponder the same question and build its own simulation.
 
Assumptions are the Achilles heel of simulations. Input one that is just slightly off and the consequences can take on a life of their own, as the error can propagate exponentially. This factor in and of itself can give the illusion of fine-tuning where none is necessary. A more important consideration may be the relationships between the relevant fundamental constants. I see no compelling reason to assume they are set blindly and arbitrarily. I prefer the idea that nature preserves specific, and likely highly complex, relationships between fundamental constants according to rules we would find, assuming we knew and understood them, both logical and mathematically elegant. This would explain why we do not see runaway feedback loops in nature: all known physical processes appear to include a built-in damping mechanism.
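The exponential error-propagation point is standard sensitive dependence, and the textbook minimal illustration is the chaotic logistic map (chosen here only as an illustration; it is not a cosmological model). An input off by one part in a few hundred thousand is indistinguishable for a handful of steps and then the two runs decorrelate completely.

```python
def logistic(x0, r=4.0, steps=40):
    """Iterate the chaotic logistic map x -> r*x*(1-x), returning the trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic(0.300000)   # "true" initial condition
b = logistic(0.300001)   # same model, input off by 1e-6

early_gap = abs(a[5] - b[5])                                # still tiny
late_gap = max(abs(x - y) for x, y in zip(a[30:], b[30:]))  # order one
```

On each step the gap roughly doubles (the Lyapunov exponent of the r = 4 map is ln 2), so after a couple of dozen steps the initial millionth-part error has saturated and the late portions of the two trajectories share essentially nothing.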
 
newjerseyrunner said:
Theoretically, if you had a precise equation for the nature of the universe (which we don't) and it's deterministic (which we aren't sure it is) and you knew the exact initial conditions (which isn't possible) you could simulate the universe and after 13.8 billion years, that simulation would ponder the same question and build it's own simulation.
But maybe, to do that calculation, your computer would produce so much heat that by the time the calculation finished it had influenced the universe in a way whose macroscopic effects can no longer be neglected.
 
