
Fecund universes and physical constants

  1. Dec 19, 2007 #1
    How much water does Lee Smolin's theory of fecund universes hold?

    Also, according to his theory, the basic parameters of nature vary between universes. My question is - how much? Is it a noticeable difference that would make life as WE know it impossible?
     
  3. Dec 19, 2007 #2

    marcus

    Science Advisor, Gold Member

    The first question to ask about a proposed scientific theory is whether or not it is falsifiable.

    Does it make any new prediction (beyond what is already derivable from prior established theory) which would allow it to be checked?

    That is the essence of the method of empirical science outlined some 400 years ago by Sir Francis Bacon.

    Only after seeing whether Smolin's reproductive cosmology picture is TESTABLE would I bother to ask how likely, in your subjective judgement, it is to be true, or whether it appeals to this or that person's imagination.
     
  4. Dec 19, 2007 #3

    marcus


    Having said that what really matters is whether a theory is testable---and ultimately how the new predictions it makes fare when confronted by observation---I'll say that the question of whether it is PLAUSIBLE depends on quantum cosmology and quantum black hole models.

    Probably the leading quantum big bang model is what you get in LQG's applied area, LQC (loop quantum cosmology). If you look at recent published work (post-2002) in quantum cosmology, the LQC papers are by far the most cited---more so than, for example, Steinhardt's clashing branes, Stephen Hawking's work, or Veneziano's stringy pre-big-bang.

    So the LQG way of dealing with the BH and BB singularities has at least achieved PROMINENCE.

    And within the context of that model what Smolin hypothesizes is certainly PLAUSIBLE.
    Because quantum black holes bounce and result in something resembling a big bang (Gambini, Pullin). And the big bang is preceded by a prior contracting phase that looks rather like a black hole collapse (Ashtekar et al.). If you want links, ask.

    In LQG there has been rapid progress since 2005 and the models which show bounce instead of singularity are now more definitive and more thoroughly studied. There still are questions, differences between different models (e.g. see Vandersloot and Boehmer LQG black hole model---different from Gambini Pullin). And this stuff needs to be TESTED.
    But in any case it looks as if Smolin's reproductive cosmology picture has gained technical plausibility since 2005.

    When he first proposed it circa 1994 it was very much a longshot. He didn't know that within a little over 10 years LQG would have evolved to the point of having actual bouncing models of BB and BH that in some sense FIT TOGETHER. He was guessing about some details working out, and in some sense he guessed right.

    But what we should really focus on is the issue of testing Smolin's conjecture, which is, explicitly: that the dimensionless parameters of the standard models of cosmology and particle physics are locally optimized for astrophysical black hole abundance, i.e. that the "genes" of our Universe are optimized for reproductive success.

    Locally optimized means better than the immediate neighbors. Our parameters should be better for producing astrophysical black holes than whatever set you can get by making small variations. It doesn't mean globally optimized which would be the best of all possible sets of numbers. We are at the top of our own local hill, it is conjectured.
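    To make "locally optimized" concrete, here is a toy numerical sketch. The fitness function below is entirely made up (nobody knows the real dependence of black hole abundance on standard model parameters); the point is only to show what "better than all immediate neighbors" means operationally.

    ```python
    import itertools

    # Hypothetical "black hole abundance" over two made-up dimensionless
    # parameters. Purely illustrative: a smooth bump peaked at (1.0, 1.0).
    def abundance(params):
        x, y = params
        return 1.0 / (1.0 + (x - 1.0) ** 2 + (y - 1.0) ** 2)

    def is_local_optimum(params, eps=1e-3):
        """True if every small perturbation of the parameters lowers abundance."""
        base = abundance(params)
        for deltas in itertools.product((-eps, 0.0, eps), repeat=len(params)):
            if all(d == 0.0 for d in deltas):
                continue  # skip the unperturbed point itself
            perturbed = tuple(p + d for p, d in zip(params, deltas))
            if abundance(perturbed) > base:
                return False  # a neighbor does better: not a local optimum
        return True

    print(is_local_optimum((1.0, 1.0)))  # at the peak: True
    print(is_local_optimum((0.5, 1.0)))  # off the peak: False
    ```

    Falsifying Smolin's conjecture corresponds to exhibiting one perturbation for which `abundance(perturbed) > base`---no global search over parameter space is required.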

    To falsify this hypothesis, all you need to do is find SOME SMALL ADJUSTMENT of the standard model parameters (like maybe a quark mass, or a tweak of the fine-structure constant, approximately 1/137) that you can show would make black holes easier to form or their antecedent stars more abundant.

    In his technical papers about it, Smolin lists some astronomical observations that he says would serve to do this.

    In 2006, Vilenkin tried to falsify Smolin's hypothesis but it seems to have fallen flat. For more information see this article
    http://arxiv.org/abs/hep-th/0612185
    The status of cosmological natural selection
    Authors: Lee Smolin
    25 pages
    (Submitted on 18 Dec 2006)

    "The problem of making predictions from theories that have landscapes of possible low energy parameters is reviewed. Conditions for such a theory to yield falsifiable predictions for doable experiments are given. It is shown that the hypothesis of cosmological natural selection satisfies these conditions, thus showing that it is possible to continue to do physics on a landscape without invoking the anthropic principle. In particular, this is true whether or not the ensemble of universes generated by black holes bouncing is a sub-ensemble of a larger ensemble that might be generated by a random process such as eternal inflation.
    A recent criticism of cosmological natural selection made by Vilenkin in hep-th/0610051 is discussed. It is shown to rely on assumptions about both the infrared and ultraviolet behavior of quantum gravity that are very unlikely to be true."
     
    Last edited: Dec 19, 2007
  5. Dec 19, 2007 #4
    Yes, but that still doesn't answer my question about the constants.
     
  6. Dec 19, 2007 #5

    marcus

    User Avatar
    Science Advisor
    Gold Member
    2015 Award
    Dearly Missed

    I take it your question is: by HOW MUCH do parameters vary between mother and daughter spacetime regions?
    This is not specified in the theory. If you mean the change over SEVERAL GENERATIONS, then certainly you can assume that if you go back enough generations you could find conditions hostile to life as we know it.

    But the theory does not specify by how much parameters can vary in one generation. It does not NEED to specify this in order to work as a theory. Darwin didn't need to know exact mutation-rates, or exactly how many generations were needed to make a certain change.

    As it stands the theory IS NOT ABOUT LIFE. It does not involve any definition of life as we know it---or of life as we don't know it. The theory is about astrophysical BLACK HOLES. It conjectures that the dimensionless parameters of physical law may have evolved to maximize black holes.

    And as a footnote to that, this would ACCIDENTALLY have tended to produce conditions for rich chemistry, long-lived stars, rocky planets, etc. That is not part of the theory but rather something incidental to it. I think you are probably well aware of this, but I mention it in case anybody else is reading the thread.
     
  7. Dec 19, 2007 #6
    If cosmological natural selection holds, then does that mean that matter that enters a black hole isn't destroyed? Would it pass into another universe unharmed?
     
  8. Dec 19, 2007 #7

    marcus


    I respect that you are probing into this and asking questions.
    Maybe someone else would give you different answers from mine, which could be more satisfactory for you.

    I will nevertheless tell you my personal take on it, which is that the testable hypothesis is all that matters. The hypothesis is simply that our list of parameters (particle masses, coupling constants etc) is a local optimum for black hole abundance.

    If it turns out not to be true then we forget the whole thing. If after some years of trying nobody can falsify it by showing a variation of constants that would increase production, then that would be very surprising. Why should our parameters be optimal for making black holes??!!

    In that case one can ask about details, like what happens to the matter that undergoes the bounce?

    Now you are already asking "what happens to the matter that undergoes the bounce?"

    That as I see it is not part of Smolin's hypothesis. It is not part of his evolution theory.

    BUT you can begin to find out what LQC says happens to the matter, for whatever that is worth. You can read what Ashtekar and Bojowald and others have to say about the bounce. They say in LQG there is a threshold density at which quantum effects dominate classical ones and cause a different relation between matter and geometry, making gravity effectively repulsive. They say the critical density is around 80 percent of the Planck density.
    That would be around 10^93 times the density of water, I guess.
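    A rough sanity check of that figure (reading it as 10^93, with the exponent taken as a lost superscript), using standard CODATA constants:

    ```python
    # 80% of the Planck density, expressed as a multiple of the density of water.
    c = 2.998e8        # speed of light, m/s
    hbar = 1.055e-34   # reduced Planck constant, J s
    G = 6.674e-11      # Newton's gravitational constant, m^3 kg^-1 s^-2

    planck_density = c**5 / (hbar * G**2)   # ~5.2e96 kg/m^3
    critical = 0.8 * planck_density          # the LQC critical density quoted above
    ratio_to_water = critical / 1000.0       # water: 1000 kg/m^3

    print(f"{ratio_to_water:.1e}")  # ~4e93, i.e. around 10^93 times water
    ```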

    In LQG there is also the idea that matter and geometry arise from the same fundamental elements----the same fundamental degrees of freedom describe both matter and space at a microscopic level.
    Lee Smolin and a half-dozen other researchers are currently working on one possible way to make this mathematically explicit---but it has always been a vague notion in the LQG community which people try various ways to make rigorous.
    The idea is that when a region of spacetime and matter collapses to near-Planck density, you can no longer say what is matter and what is space. They are reduced to the same thing---the primitive microscopic degrees of freedom from which both arise in cooler, more relaxed, less concentrated conditions.


    You should treat the above as just my personal take on a persistent idea that I keep seeing in various forms but which has not been stabilized.

    This is just my personal thought: I don't think people yet know, or have even the beginnings of a good idea about, what happens to matter during the cosmological bounce,
    or during the black hole bounce (if there is one). And ordinary conservation laws, which require a separation of space and matter, do not seem likely to apply without change. Thermodynamic rules that depend on an observer, and various conventions involving temperature equilibrium and heat engines, also seem unlikely to apply without change.
    At a bounce, I mean, where the energy density approaches the Planck level.

    So I would have to conclude that your question about "what happens to matter in a bounce?" is a very good question, but currently unanswerable.

    Also it should be asked of Ashtekar, Singh, Bojowald etc. It should not be asked of Smolin, because his evolution hypothesis is of a general nature that does not depend on details like that. All he does is challenge you to figure out a way to modify standard model numbers that would get us more holes---in other words he challenges you to show that our numbers are not already perfect for making holes. This challenge does not depend on the specifics of any bounce mechanism.
     
    Last edited: Dec 19, 2007
  9. Dec 20, 2007 #8
    If a parent universe produces child universes with slightly different constants, a question
    is posed: where in the structure of the universe lie the genes that determine those constants, and what allows them to vary from one generation to the next?
     
  10. Dec 20, 2007 #9

    Fra


    information/learning/philosophy perspective

    I personally think we should look more closely at the process we call falsification, for the simple reason that making a prediction, collecting the feedback and computing a conclusion is a very composite process.

    If we have an opinion, or expectation, then if this is reasonably constructed we should also have an expected confidence in this expectation (as calculated from our history). Now if new data arrives that fails to meet this expectation, then our expectation will be updated---but not instantly so, because there must be some kind of inertia originating from the relative confidences we have in the prior vs the new contradictory information.

    In this sense I think a falsification process is simply the case when new information arrives that is in contradiction with current expectations/predictions and where the confidence in this contradiction is large enough.

    If we are trying to make predictions and have expectations about processes that are very extended relative to ourselves and our lives, I don't see how we could possibly come to very confident expectations that are EASY to falsify. But I don't think this conflict is a coincidence; I think there is a logic to it, and I think analysing this will help optimise our progress.

    Could it be that we must choose: either our expectations have high confidence and are harder to falsify, because the falsification process requires too much data and data processing---or they are far more speculative but easier to falsify?

    The question is, what is the optimum strategy in the long run? High-risk gaming with a low chance of high growth, or low-risk gaming with a high chance of steady/slow growth? I think it probably depends on the environment, and maybe there is even a definite answer?
    What is being optimised?

    It's easy to feel our own limits and frustration. We simply don't know a lot of things. So what else can we do than try to find the optimum way to let the little we know guide us into what we don't know and can't predict?

    I have always been of the opinion that, at least sometimes, the falsification argument is used too easily to argue against ideas. I am not suggesting that anything goes, but I am suggesting that the falsification process might benefit from a closer analysis.

    /Fredrik
     
  11. Dec 20, 2007 #10
    Agreeing with marcus, but some notes on method

    I'm not questioning the great importance of this question, but I think this is no longer the first question asked by Philosophers, and I think it's not always the first question asked by all Physicists.

    The questions I ask, for what it's worth, are:
    • how close is a model to current experimental data?
    • what suggestions are there for practical experiments that give interesting differences from the predictions of other theories?
    • how mathematically beautiful is a theory?
    • how ad-hoc are the calculations that produce successful models for experiments?
    • how productive of new ideas does a theory appear to be?
    • how good is the correspondence of the new theory to the current conventionally supported theory (think 1920s, when this was emphatically productive as their first question)?
    • how computationally tractable is a theory?
    • how well does a theory translate into an engineering context?
    • etc.

    If a theory does not agree with existing experimental data, we do not immediately drop it, even though it is, prima facie, false. Whether we drop the theory depends on whether we can see a modification of the theory that might make it work. If we can't see any possibility of making it work, we leave it to others who can make it work slightly better; if, once other people have made the theory not quite such an experimental disaster, we can now see how it might be improved, we might work on it ourselves. The mathematical beauty of a theory shortcuts this discussion, making Physicists of a mathematical mind more committed to certain theories.

    When a theory that people have committed a lot of energy to suggests a new experiment and different results, that prediction, if verified, gives a substantial boost to the theory, even if the theory is not in agreement with all existing data. Equally, if the predictions are shown to be false, a committed group of supporters will probably not give up immediately; they may well be able to find a mistake in their calculations or a new way to calculate---then questions of whether the new calculation is ad-hoc or not may become paramount.

    Without a discussion at least at this level of methodological sophistication, without an absolute emphasis on any single question, I find it very hard to understand the history of science.

    In many ways, the robustness of quantum mechanics in Physics is due to the pragmatic effectiveness of QM as an engineering tool. Until a new approach can be better as an engineering tool, we will continue to use QM, just as we happily continue to use classical mechanics and electromagnetism as models whenever it's easier to do so.

    To return to the OP. If a multiverse model gives empirical consequences of there being other universes, then the multiverse is a universe, an interacting set of (very large) objects. There's just one universe, that's what the word means, IMO, "uni" is one. If, as a matter of definition a multiverse model is conceived with the idea that it has no interactions with other universes, so it has no empirical consequences in principle, then it is a metaphysical notion. If interactions between the multiverses are allowed back into the model, we are again back at a universe and the possibility of Physics.

    Long-winded and only a little different from marcus.
     
  12. Dec 20, 2007 #11

    marcus


    That is a really interesting question. I would guess in the microscopic substance of spacetime and matter----from which these larger scale effects emerge.

    Now this is just conjecture: I think there may be planckscale dynamical degrees of freedom from which both geometry and matter arise, and which determine the interactions between matter and geometry (which are different aspects of the same thing).

    And if that is so then I'd speculate that any planckscale piece of the underlying d.o.f. would contain the genes of all physical law.

    But that is just a vague suspicion.

    My point about Smolin's conjecture is (as you may very well recognize yourself) that we don't have to know what the genes are, or where they are, or how they change, in order to test Smolin's optimality hypothesis.

    The hypothesis is simply that the standard model is highly favorable to astrophysical black holes---and challenges you to find a slight variation of some number(s) which would lead to there being more.

    If that hypothesis were vigorously tested and tentatively confirmed it would present a huge puzzle. Why, in heaven's name, should physics be fine-tuned to produce holes??!!
    How could that ever have occurred?

    And that would probably motivate further research into the process of blackhole and cosmological bounce and further investigation into whatever are the fundamental dynamical degrees of freedom underlying what we see happening around us---and attempts to conceptualize what you call the genes.

    However I think it would be sensible to vigorously test the optimality hypothesis before one gets into speculating about mechanisms. Because that is something current science is equipped to do, and the hypothesis could well turn out to be WRONG. Which would save a lot of wasted effort exploring mechanisms.
     
    Last edited: Dec 20, 2007
  13. Dec 20, 2007 #12

    marcus


    Peter Morgan,
    you are right to point out those criteria, some of which help determine whether an embryonic theory (even before it makes testable predictions) is even worth developing. Your list has clear pragmatic validity and is also interesting to think about, together with the examples you give.

    I should have qualified what I said---and not made that flat statement. I was responding to hammertime's question about HOLDING WATER.
    In this special case where one has Smolin's optimality hypothesis (which he states explicitly as such) we don't have issues of engineering application and all that other good stuff. Smolin didn't propose hole-to-bang mechanisms---he just sketched a rough picture to communicate the idea and give it minimal plausibility. The only thing that could hold water (as I understand hammertime's question), or have some truth value, is whether the standard model is tuned to favor black holes.

    So this is not a physical theory like quantum mechanics that is meant to apply to a lot of things and explain phenomena in many contexts---AFAICS it is just this one bare hypothesis: one testable assertion, is it a fact or not? The vague talk surrounding it merely hopes to persuade us that the conjecture is plausible and worth testing.

    So I should have qualified what I said: with this kind of conjecture, which is only one testable assertion and lacks definite detail, the main (almost the only) thing you can do with it is try to falsify it.

    With other kinds of theoretical constructs there are all sorts of other appropriate questions to ask. Some of which you can ask about an infant theory even before any predictions can be derived from it.
     
    Last edited: Dec 20, 2007
  14. Dec 21, 2007 #13
    I address some of these points in

    Self-Replicating Space-Cells and the Cosmological Constant

    http://arxiv.org/abs/0706.3379

    see also

    https://www.physicsforums.com/showthread.php?t=177015


    I argue, from certain assumptions about the properties a fundamental model of physics should have, that the familiar laws of physics are encoded in `Self-Replicating Space-Cells', distributed throughout space, at a roughly constant and near-Planck density. They replicate as space expands. As a corollary, such replicators provide a microscopic basis for Cosmological Natural Selection. This brings CNS in closer correspondence with biological evolution, with universes being viewed as `survival machines' for the Space-Cells.

    Abstract:
    We consider what the implications would be if there were a discrete fundamental model of physics based on locally-finite self-interacting information, in which there is no presumption of the familiar space and laws of physics, but from which such space and laws can nevertheless be shown to be able to emerge stably from such a fundamental model. We argue that if there is such a model, then the familiar laws of physics, including Standard Model constants, etc., must be encodable by a finite quantity C, called the complexity, of self-interacting information I, called a Space-Cell. Copies of Space-Cell I must be distributed throughout space, at a roughly constant and near-Planck density, and copies must be created or destroyed as space expands or contracts. We then argue that each Space-Cell is a self-replicator that can duplicate in times ranging from as fast as near-Planck-times to as slow as Cosmological-Constant-time which is 10^{61} Planck-times. From standard considerations of computation, we argue this slowest duplication rate just requires that 10^{61} is less than about 2^C, the number of length-C binary strings, hence requiring only the modest complexity C at least 203, and at most a few thousand. We claim this provides a reasonable explanation for a dimensionless constant being as large as 10^{61}, and hence for the Cosmological Constant being a tiny positive 10^{-122}. We also discuss a separate conjecture on entropy flow in Hole-Bang Transitions. We then present Cosmological Natural Selection II.
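    The abstract's complexity bound is a one-line arithmetic check: requiring 2^C to exceed 10^{61} distinct values pins down the minimum C.

    ```python
    import math

    # Smallest integer C with 2^C > 10^61, matching the abstract's "C at least 203".
    min_C = math.ceil(math.log2(10**61))
    print(min_C)  # 203
    ```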
     