Inflation is very unlikely: probability less than 10^(-6.6×10^7)

  • Thread starter emanaly
  • Tags
    Inflation
In summary, the proposed solution to the Big Pay-off Lottery Winning Paradox is the Anthropic Principle, which assumes that there is a universe-creating machine that creates an infinite number of universes. Since an infinite number of universes are created, it is possible that in one of them someone wins a big-payoff lottery 52 times consecutively or, equivalently, that Ω happens to be almost exactly equal to 1 at this point in time.
  • #2
Sean fails to point out prior work which draws a qualitatively similar conclusion (and goes deeper, beyond what he is talking about.)

http://arxiv.org/abs/0912.4093 and references therein

Using straight unquantized General Relativity, other cosmologists (more prominent/reputable than Carroll) already argued inflation's extreme unlikelihood.

But if you then use the most widely researched quantum cosmology model, you find the opposite---inflation becomes quite likely. (Estimates of likelihood in any case are model-dependent; they depend on how one treats the space of parameters.)
Ashtekar cites this 2006 paper, in that connection:
http://arxiv.org/abs/hep-th/0609095
G.W. Gibbons and N. Turok, Phys. Rev. D 77, 063516 (2008)

I see Sean DOES cite Gibbons and Turok!
==quote==
A common use of the canonical measure has been to calculate the likelihood of inflation
[5, 6, 7]. Most recently, Gibbons and Turok [13] have argued that the fraction of universes
that inflate is extremely small. However, they threw away all but a set of measure zero of
trajectories, on the grounds that they all had negligibly small spatial curvature and were therefore
physically indistinguishable. Inflation, of course, tends to make the universe spatially flat,
so this procedure is potentially unfair to the likelihood of inflation. We therefore re-examine
this question, following the philosophy suggested by the above analysis, which implies that
almost all universes are spatially flat. We choose to look only at flat universes, and calculate
the fraction that experience more than sixty e-folds of inflation.
==endquote==

That's good. I have to go, get back to this later.
 
  • #3
Here is a link to Ned Wright's cosmology tutorial, which discusses the flatness-oldness problem and the cosmological-parameter fine-tuning problem.

The flatness-oldness problem and the cosmological fine tuning problem are analogous to the Copernican principle problem. A theory that requires humans to live at a very, very, very, very special point in time or place in the universe is likely incorrect.

As the universe expands, according to the fundamental physics used to construct cosmological models such as ΛCDM, Ω changes. The fact that Ω happens to be almost exactly equal to 1 at this point in time has been called the cosmic fine-tuning problem.
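The drift in Ω can be sketched numerically. This is a minimal sketch, assuming matter domination, where the conserved combination (1/Ω − 1)ρa² makes the deviation from Ω = 1 grow linearly with the scale factor a; the function name and starting values are illustrative, not from any cosmology library.

```python
def omega_at(a, omega_init, a_init=1e-9):
    """Omega at scale factor a, given Omega at a_init, for a
    matter-dominated universe where (1/Omega - 1) grows like a."""
    x_init = 1.0 / omega_init - 1.0   # deviation variable x = 1/Omega - 1
    x = x_init * (a / a_init)         # matter domination: x proportional to a
    return 1.0 / (1.0 + x)

# A deviation of one part in 10^10 early on becomes a ~10% deviation by a = 1:
for a in (1e-9, 1e-6, 1e-3, 1.0):
    print(f"a = {a:.0e}  Omega = {omega_at(a, 1.0 + 1e-10):.12f}")
```

This is the sense in which Ω = 1 is an unstable fixed point: any early deviation is amplified by the subsequent expansion.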

Another way of understanding the cosmic fine tuning problem is to look at other very, very, very, very improbable events.

Let’s say one individual bought lottery tickets each week and won a big-payoff lottery every week for a year. Although that is not statistically impossible, we know that in practice the same person will not win a big-payoff lottery every week for a year, any more than a million monkeys typing at random will produce Hamlet with no typing errors in one week. The person winning the lottery every week would require either knowledge of the future or a method to ensure their numbers were selected each week.

The proposed solution to "The 52-Times Big Pay-off Lottery Winning Paradox" is the Anthropic Principle. The anthropic principle assumes there is a universe-creating machine that creates an infinite number of universes. Given infinitely many universes, it is possible that in one of them someone wins a big-payoff lottery 52 times consecutively or, equivalently, that Ω happens to be almost exactly equal to 1 at this point in time.

Flatness-Oldness Problem

However, if Ωo is sufficiently greater than 1, the Universe will eventually stop expanding, and then Ω will become infinite. If Ωo is less than 1, the Universe will expand forever and the density goes down faster than the critical density so Ω gets smaller and smaller. Thus Ω = 1 is an unstable stationary point unless the expansion of the universe is accelerating, and it is quite remarkable that Ω is anywhere close to 1 now.

The figure above shows a(t) for three models with three different densities at a time 1 nanosecond after the Big Bang. The black curve shows a critical density case that matches the WMAP-based concordance model, which has density = 447,225,917,218,507,401,284,016 gm/cc at 1 ns after the Big Bang. Adding only 0.2 gm/cc to this 447 sextillion gm/cc causes the Big Crunch to be right now! Taking away 0.2 gm/cc gives a model with a matter density ΩM that is too low for our observations.

Thus the density 1 ns after the Big Bang was set to an accuracy of better than 1 part in 2235 sextillion. Even earlier it was set to an accuracy better than 1 part in 10^59! Since if the density is slightly high, the Universe will die in an early Big Crunch, this is called the "oldness" problem in cosmology.

And since the critical density Universe has flat spatial geometry, it is also called the "flatness" problem -- or the "flatness-oldness" problem. Whatever the mechanism for setting the density to equal the critical density, it works extremely well, and it would be a remarkable coincidence if Ωo were close to 1 but not exactly 1.
http://www.astro.ucla.edu/~wright/cosmo_03.htm
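As a sanity check on the quoted numbers, here is just the arithmetic in the Wright quote above, not an independent cosmological calculation:

```python
# Arithmetic check of the Ned Wright quote: the stated density at 1 ns
# divided by the stated 0.2 gm/cc tolerance gives the fine-tuning level.
density_1ns = 447225917218507401284016  # gm/cc at 1 ns, from the quote
tolerance = 0.2                          # gm/cc, from the quote

parts = density_1ns / tolerance
print(f"1 part in {parts:.4g}")
```

The result is about 2.24×10^24, i.e. a couple of thousand sextillion, consistent with the quote's "better than 1 part in 2235 sextillion".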
 
  • #4
Saul, your quoting from Ned Wright's site does not seem relevant. We know about the flatness, or flatness-oldness problem. That is one of the arguments for a 60 e-fold inflation having occurred.

What the Gibbons and Turok 2006 paper points out is a seeming paradox involving such an inflation episode.

They suggest what they think is a natural measure on the parameter space and find (using old-fashioned unquantized Gen Rel) that such an inflation episode is very unlikely.

Gibbons and Turok carry the argument one step further than your Ned Wright quotes:
====
The Measure Problem in Cosmology
G.W. Gibbons, Neil Turok
(Submitted on 13 Sep 2006 (v1), last revised 2 Jan 2007 (this version, v2))
The Hamiltonian structure of general relativity provides a natural canonical measure on the space of all classical universes, i.e., the multiverse. We review this construction and show how one can visualize the measure in terms of a "magnetic flux" of solutions through phase space. Previous studies identified a divergence in the measure, which we observe to be due to the dilatation invariance of flat FRW universes. We show that the divergence is removed if we identify universes which are so flat they cannot be observationally distinguished. The resulting measure is independent of time and of the choice of coordinates on the space of fields. We further show that, for some quantities of interest, the measure is very insensitive to the details of how the identification is made. One such quantity is the probability of inflation in simple scalar field models. We find that, according to our implementation of the canonical measure, the probability for N e-folds of inflation in single-field, slow-roll models is suppressed by of order exp(-3N) and we discuss the implications of this result.
====

In other words, Saul, the "flatness problem" you mention is one of the reasons that, back in the 1980s, the inflation idea was invented and became popular. It convinced many people that inflation probably occurred---inflating by a factor of at least some e^60, that is, by "60 e-folds" (anything less would be insufficient to explain the observed flatness).

Then in 2006, Gibbons and Turok argued that if N is the number of e-folds (think of N = 60) then such an inflation is unlikely by a factor of e^(-3N). Think e^(-180).
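For concreteness, here is how small that suppression factor is. This assumes only the exp(−3N) scaling quoted from the paper; nothing else here comes from the papers themselves.

```python
import math

def suppression(n_efolds):
    """Gibbons-Turok-style suppression factor exp(-3N) for N e-folds."""
    return math.exp(-3.0 * n_efolds)

n = 60
log10_p = -3.0 * n / math.log(10.0)  # convert e^-180 to a power of ten
print(f"exp(-3*{n}) = {suppression(n):.2e}  (about 10^{log10_p:.1f})")
```

So e^(-180) is roughly 10^(-78): not the 10^(-6.6×10^7) of the thread title, but still astronomically small.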

At that point, quantum cosmology came to the rescue. If one accepts what is today the most widely studied version of quantum cosmology (LQC) and uses that model instead then, as Ashtekar and Sloan showed, instead of being extremely unlikely that extent of inflation becomes quite probable (in Gibbons Turok terms).

Carroll fails to acknowledge this. No reference to the 2009 Ashtekar Sloan paper:

====
http://arxiv.org/abs/0912.4093
Loop quantum cosmology and slow roll inflation
Abhay Ashtekar, David Sloan
(Submitted on 21 Dec 2009)
In loop quantum cosmology the big bang is replaced by a quantum bounce which is followed by a robust phase of super-inflation. We show that this phase has an unforeseen implication: in presence of suitable inflationary potentials it funnels all dynamical trajectories to conditions which virtually guarantee a slow roll inflation with more than 68 e-foldings, without any input from the pre-big bang regime. This is in striking contrast to the situation in general relativity where it has been argued that the a priori probability of obtaining a slow roll inflation with N e-foldings is suppressed by a factor Exp(-3N).
Comments: 8 pages, 1 table
====
 
  • #5
I am not sure people understand the significance of the cosmological fine-tuning problem, as it has not been framed as a "52 times consecutively winning a major lottery" problem or a "million monkeys typing Hamlet in one week without typos" problem.

One solution to the flatness problem, as noted below, is to say it is not a problem. We like the cosmological model that we have and, what the heck, people should stop being so picky. Our theory is very good in parts, hence let's not talk about the "52 times consecutive winning a major lottery problem."

If you do not talk about a problem it does not exist, as there is no observer.

Another approach to solving the problem is to request the universe creation machine to please create an "inflation field equation mechanism" for us.

The "inflation field" equation mechanism expands the universe in a manner such that our cosmological equation works out the way we would like it to. This "inflation field" mechanism is different from the other fields with which we are familiar, whose densities decrease as space expands. The inflation field's density stays constant, and then at some point in time that perfectly matches observational data the machine turns off the "inflation field" mechanism.

http://en.wikipedia.org/wiki/Flatness_problem

The proposed cause of inflation is a field which permeates space and drives the expansion. The field contains a certain energy density, but unlike the density of the matter or radiation present in the late universe, which decrease over time, the density of the inflationary field remains roughly constant as space expands. Therefore the term ρa² increases extremely rapidly as the scale factor a grows exponentially.

Post inflation
Although inflationary theory is regarded as having had much success, and the evidence for it as compelling, it is not universally accepted: cosmologists recognise that there are still gaps in the theory and are open to the possibility that future observations will disprove it.[17][18] In particular, in the absence of any firm evidence for what the field driving inflation should be, many different versions of the theory have been proposed.[19] Many of these contain parameters or initial conditions which themselves require fine-tuning[19] in much the way that the early density does without inflation.

For these reasons work is still being done on alternative solutions to the flatness problem. These have included non-standard interpretations of the effect of dark energy[20] and gravity,[21] particle production in an oscillating universe,[22] and use of a Bayesian statistical approach to argue that the problem is non-existent. The latter argument, suggested for example by Evrard and Coles,[23] maintains that the idea that Ω being close to 1 is 'unlikely' is based on assumptions about the likely distribution of the parameter which are not necessarily justified.

Despite this ongoing work, inflation remains by far the dominant explanation for the flatness problem.[1][2]
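To see why the constant-density field described in the excerpt above flattens the universe rather than worsening the problem, here is a toy calculation. It assumes only the conservation of (1/Ω − 1)ρa² with ρ held constant during inflation, so that |Ω − 1| shrinks by e^(−2N) over N e-folds; the function name is illustrative.

```python
import math

def omega_after_inflation(omega_before, n_efolds):
    """Omega after N e-folds of constant-density inflation, using the
    conservation of (1/Omega - 1) * rho * a^2 with rho held constant."""
    x_before = 1.0 / omega_before - 1.0
    x_after = x_before * math.exp(-2.0 * n_efolds)  # a^2 grows by e^(2N)
    return 1.0 / (1.0 + x_after)

# Even a wildly non-flat Omega = 10 is driven indistinguishably close
# to 1 by 60 e-folds:
print(omega_after_inflation(10.0, 60))
```

This is the flip side of the instability in the matter-dominated era: during inflation the same conserved quantity drives Ω toward 1 instead of away from it.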
 
  • #6
marcus said:
Saul, your quoting from Ned Wright's site does not seem relevant. We know about the flatness, or flatness-oldness problem. That is one of the arguments for a 60 e-fold inflation having occurred. ... At that point, quantum cosmology came to the rescue. If one accepts what is today the most widely studied version of quantum cosmology (LQC) and uses that model instead then, as Ashtekar showed, instead of being extremely unlikely that extent of inflation becomes quite probable (in Gibbons Turok terms).

Carroll fails to acknowledge this. No reference to the 2009 Ashtekar paper:

I would assume that when you write "quantum cosmology" comes to the rescue, the quantum-cosmological solution to the "52 consecutive times winning a major lottery" problem, or the "million monkeys typing Hamlet in one week without a typo" problem, is this: if we assume the universe-creating machine can create an infinite number of universes, then very, very, very, very improbable events can happen.

I am not sure the "quantum cosmology" solution is a realistic, scientific solution.

Quantum-mechanical observations show uncertainty (fluctuations) at small scales. There is no evidence that other entire universes exist.

The quantum cosmological solution to the fine-tuning problem is equivalent to explaining the improbable event of a million monkeys typing Hamlet in one week without typos and completing that task on Friday, December 31, 2007 at 4:00 pm.

The quantum cosmological solution is: if we could please accept the existence of an infinite number of universes, then it is the consensus position that the coincidence that a million monkeys typed out Hamlet without typos in one week, and that this fairly unlikely event occurred on Friday, December 31, 2007 at 4:00 pm, should not be a concern.

What is missing from the public discussion of this problem is the observation that we live at the time when a million monkeys typed Hamlet in one week without a mistake. There are two improbable events: an event as improbable as a million monkeys typing Hamlet in one week without typos, and, second, that we happen to be living at the time when that improbable event occurred.
 
  • #7
We would not be as likely to find ourselves observing a Universe during other periods of time.

Star formation needs to progress for a certain duration to produce complex elements; second and third generations of stellar formation are needed.

Galaxy formation needs to reach a certain point, and the number of catastrophic GRB-type extinction events needs to fall to a certain low level. The activity of the central black hole needs to quiet down as well, to allow stable nursery-cloud formation.

Then you need the right quantity of rocky material near enough together that a star is formed but doesn't consume all of the planetary disc; the right arrangement for those planetary discs takes some time to reach.

Then after your oven has pre-heated, insert the ingredients for a few billion years, and it will churn out complex self-reinforcing feedback structures of the type we call life.
 
  • #8
I thought the most interesting part here is that contrary to previous analyses, flatness doesn't appear to be a real problem with cosmology (if Sean's choice of assumptions is reasonable). I'm not sure I buy the argument that eternal inflation makes things worse, though. It seems a strange inversion to use the later volume of space to argue that an initial configuration is unlikely.
 
  • #9
Saul said:
I would assume that when you write "quantum cosmology" comes to the rescue that the quantum cosmological solution to the "52 consecutive times winning a major lottery problem or the million monkeys typing Hamlet in one week without a typo problem is if we assume the universe creation machine can create an infinite number of universes then very, very, very, very improbable events can happen.
...

Why would you assume that?

Ashtekar makes no appeal to improbable events.

The problem with Carroll's analysis is he uses an outmoded (vintage 1915) version of geometry---unquantized General Relativity. Widely recognized to fail around the start of expansion.

No one knows for sure how to quantize GR. There are several ways being worked on. This means there are several different possible versions of quantum cosmology (QC) and it hasn't been determined which is best. Observational tests are being developed, but it's still up in the air. These are fairly recent developments.

It happens that currently there is one version of QC where there's been a big increase in research. Whether right or not, it has become the predominant approach that you find if you do a keyword search "quantum cosmology" in the professional literature since, say 2007.

It just happens that if you use the mathematical model of the universe that has grown up in that type of QC then inflation is not improbable at all, it is inevitable. Inflation is generic in LQC. Then the question is how much. How likely is 60 e-folds? Ashtekar showed that under reasonable assumptions something like that amount, namely 60 e-folds, is not unlikely.

The main results in LQC are what is known as robust. They hold over a wide range of assumptions, without fine-tuning. You vary the parameters and you still get the same outcome.
 
  • #10
Maybe one way to say it is that there is a revolution in cosmology in progress. You can see it in the makeup of recent research literature. I'll get a "Spires" link for that in case anyone wants to see for themselves.
http://www.slac.stanford.edu/spires/find/hep/www?rawcmd=dk+quantum+cosmology+and+date+%3E2006&FORMAT=www&SEQUENCE=citecount%28d%29 [Broken]

I'll also reiterate the Ashtekar Sloan link:
====
http://arxiv.org/abs/0912.4093
Loop quantum cosmology and slow roll inflation
Abhay Ashtekar, David Sloan
(Submitted on 21 Dec 2009)
In loop quantum cosmology the big bang is replaced by a quantum bounce which is followed by a robust phase of super-inflation. We show that this phase has an unforeseen implication: in presence of suitable inflationary potentials it funnels all dynamical trajectories to conditions which virtually guarantee a slow roll inflation with more than 68 e-foldings, without any input from the pre-big bang regime. This is in striking contrast to the situation in general relativity where it has been argued that the a priori probability of obtaining a slow roll inflation with N e-foldings is suppressed by a factor Exp(-3N).
Comments: 8 pages, 1 table
====

What you see if you click on the "Spires" link is all the professional research that appeared after 2006 in quantum cosmology, ordered by citation count. Generally speaking the more important papers are the most cited as references, so they come first. There are over 300 papers in all.
If you look, you will see mostly LQC (Loop approach to QC) in the first 100 papers because they are by far the most cited.
Loop has become a 'hot' area and has attracted a lot of new research people. You will also see papers proposing ways of testing the LQC model using spacecraft observatories. It has become able to make predictions.

The conjecture is that LQC is more accurate than classical GR cosmo based on vintage-1915 General Relativity. The conjecture is that LQC fits the data better.

This could be wrong or it could be right. We will just have to see. A spacecraft called "Planck" launched in 2009 and now taking data may help decide, or we may need further missions in the planning stage.

In any case nothing unlikely was put into the LQC models "by hand". The episode of superinflation that occurs near the start of expansion just came out of the math.
LQC has solutions to the "flatness" puzzle and the "horizon" puzzle which also were not put in ahead of time (indeed were not realized for several years.)

If you click on the Spires link, you will probably not see much there about Stephen Hawking, or Andrei Linde, or Sean Carroll, or "anthropic multiverse" or Brian Greene, or Discover magazine, or "eternal inflation"... Give it a try.
 
  • #11
I think it's easy enough to accept that one would get inflation in LQC, provided that there is a bounce. But my question is this: how likely is it to get the bounce in the first place, given inhomogeneous initial conditions?
 
  • #12
Chalnoth said:
I think it's easy enough to accept that one would get inflation in LQC, provided that there is a bounce. But my question is this: how likely is it to get the bounce in the first place, given inhomogeneous initial conditions?

Really good question. They keep working on that. In the past couple of years they have extended LQC to include inhomogeneity. Gradually relaxing isotropy. Just gradually chipping away at it.

I don't think at this point one can make a conceptual leap and guess how the program is going to turn out in that regard.

I think you know, if anything, better than I do that the only respectable attitude is agnostic about these topics. We don't know. There are various different models, different lines of investigation. Thankfully there also appear to be opportunities to test, and eventually distinguish the better fits from the worse. I just mention it in case others are reading.

My impression is there is a sea change: a lot of the people we heard about in the 1980s and 1990s have grey hair now, and some are into self-promotion and writing popular books.
Using classic General Relativity, it turns out that one can prove things that seem ridiculous and far-fetched. Probably all that means is what we already knew: classic unquantized GR is inadequate. Probably that is the simple, unsensational significance of Sean Carroll's sensationally presented result. But I should wait for the dust to settle before saying something like that :biggrin:
 
  • #13
I think this whole probability issue is a non-issue. Consider any event, and then try to think what unimaginable number of other events must have happened for that particular event to have happened at exactly that time and place, and in exactly the way it happened. If you figure that out, the fact that any event at all happens is an amazing coincidence, almost totally impossible to have happened. Still, events happen, despite their improbability.
For example, this post is just the amazing coincidence of me sitting here at this time, while there were trillions of possibilities of me not having been here in the first place, and I could have been doing trillions of other things at this very moment.
And there were also innumerable other ways in which I could have typed this post (just figure out how many other messages I could have typed!).
 
  • #14
robheus said:
I think this whole probability issue is a non-issue. Consider any event, and then try to think what unimaginable number of other events must have happened for that particular event to have happened at exactly that time and place, and in exactly the way it happened. If you figure that out, the fact that any event at all happens is an amazing coincidence, almost totally impossible to have happened. Still, events happen, despite their improbability.
For example, this post is just the amazing coincidence of me sitting here at this time, while there were trillions of possibilities of me not having been here in the first place, and I could have been doing trillions of other things at this very moment.
And there were also innumerable other ways in which I could have typed this post (just figure out how many other messages I could have typed!).
The way that one talks about this sort of thing is as follows:

1. We can be extremely confident that we are real. This is due to the close match of our experiences with our observations.
2. If one is willing to consider any possibility, then it is possible for thermal fluctuations to cause brains to be produced within a thermal bath, think one single thought, then dissipate back into said bath. These will naturally be rare, but will still occur. These are called "Boltzmann Brains". Because there are not any constraints on what this thought is, the vast majority of such Boltzmann Brains will think nonsense, and not observe any sort of ordered existence (rather like how dreams are usually chaotic and nonsensical).
3. Any model of the formation of a universe, therefore, which produces many more Boltzmann Brains than real observers is highly unlikely to be true.

So yes, there is a sensible way to talk about probabilities where the formation of a universe is concerned.
 
  • #15
robheus said:
I think this whole probability issue is a non-issue. Consider any event, and then try to think what unimaginable number of other events must have happened for that particular event to have happened at exactly that time and place, and in exactly the way it happened. If you figure that out, the fact that any event at all happens is an amazing coincidence, almost totally impossible to have happened. Still, events happen, despite their improbability.
For example, this post is just the amazing coincidence of me sitting here at this time, while there were trillions of possibilities of me not having been here in the first place, and I could have been doing trillions of other things at this very moment.
And there were also innumerable other ways in which I could have typed this post (just figure out how many other messages I could have typed!).

The problem with things like that is that we are attributing probabilities to something that is dynamically certain. Consider predicting a classical scattering experiment between two billiard balls given some set of initial conditions (positions and momenta). Theoretically we know how to do this, and indeed we will get some sort of predicted curve and lo and behold it matches what ends up happening.

Stating the above in the analogy is akin to asking why billiard ball A at time t happens to be in position x when in fact it had an infinite phase space of positions and momenta in which it could be. Some people will even say that it has next to zero chance of being anywhere at all. But the point is it's not that hard, or surprising. By talking about probabilities in that way, we have instead thrown out all the information contained in the dynamics.

What we should have said was that billiard ball A being at time t in position x was true with 100% certainty, because that's what is predicted if we had the exact initial conditions and the exact dynamics down pat.

Now, with astrophysics, a lot of people go on tortured monologues about actually devising a theory of initial conditions. It's no longer good enough to say: we have final observed state B and reasonable dynamics PHI, and we wish to discover the specific initial condition A such that the whole thing is internally consistent. Instead we have to find new dynamics PHI that selects amongst an infinite set of all initial conditions in order to find B with a decent probability. The former is what science used to be about; the latter is what people now wish to show, and it is of course considerably more difficult.
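The billiard-ball retrodiction being described can be made concrete with trivial dynamics. This is a sketch only, with free-particle motion standing in for PHI and all names made up for illustration: given the final state B and the dynamics, the "measure zero" initial condition A is recovered exactly.

```python
def evolve(x0, v0, t):
    """Free-particle dynamics (our stand-in for PHI): x(t) = x0 + v0*t."""
    return x0 + v0 * t, v0

def retrodict(x_final, v_final, t):
    """Run the same dynamics backwards to recover the initial condition A."""
    return x_final - v_final * t, v_final

# Forward from A = (3.0, 2.0) to the 'observed' final state B, then back:
xf, vf = evolve(3.0, 2.0, t=5.0)
print(retrodict(xf, vf, t=5.0))  # recovers (3.0, 2.0) exactly
```

The point being illustrated: a deterministic map makes the recovered initial condition certain, regardless of its measure in the space of all conceivable initial conditions.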
 
  • #16
Haelfix said:
The problem with things like that is that we are attributing probabilities to something that is dynamically certain. Consider predicting a classical scattering experiment between two billiard balls given some set of initial conditions (positions and momenta). Theoretically we know how to do this, and indeed we will get some sort of predicted curve and lo and behold it matches what ends up happening.

Stating the above in the analogy is akin to asking why billiard ball A at time t happens to be in position x when in fact it had an infinite phase space of positions and momenta in which it could be. Some people will even say that it has next to zero chance of being anywhere at all. But the point is it's not that hard, or surprising. By talking about probabilities in that way, we have instead thrown out all the information contained in the dynamics.

What we should have said was that billiard ball A being at time t in position x was true with 100% certainty, because that's what is predicted if we had the exact initial conditions and the exact dynamics down pat.

Now, with astrophysics, a lot of people go on tortured monologues about actually devising a theory of initial conditions. It's no longer good enough to say: we have final observed state B and reasonable dynamics PHI, and we wish to discover the specific initial condition A such that the whole thing is internally consistent. Instead we have to find new dynamics PHI that selects amongst an infinite set of all initial conditions in order to find B with a decent probability. The former is what science used to be about; the latter is what people now wish to show, and it is of course considerably more difficult.
The argument is as follows:
1. Model X predicts inflation is highly unlikely.
2. We observe inflation.
3. Therefore, model X is wrong (or at least incomplete), or our understanding of inflation is actually incorrect, and something else that mimics inflation's behavior is the cause of these observations.

These sorts of arguments, in other words, are a guide to uncovering a true model for the birth of a universe. Naturally they can't be the only guide, but at least they can rule out a number of models.
 
  • #17
Invoking inflation "solves" a lot of intractable Big Bang cosmological problems. That attraction should not blind us to questions about the plausibility of inflation.

Cosmological necessity shouldn't trump the observations that diverge from theory, IMO. If we are so wedded to a cosmology that we have to resort to miracles to resolve its problems, perhaps it's time for observational science to take the upper hand for a bit. Observational scientists (and astronomy is overwhelmingly an observational science) don't have the luxury of proposing multiple freely-adjustable parameters to explain their observations like the theorists do. They have to constantly improve and refine observations and (if allowed) challenge the theorists to explain observations that do not fit theoretical models.

Scroll down to Michael Strauss' presentation (almost 5 years old now) to see a glaring example of the disconnect between theory and observation WRT quasars. Strauss is the science spokesperson for SDSS.

http://www.stsci.edu/institute/itsd/information/streaming/archive/STScIScienceColloquiaFall2005/ [Broken]
 
  • #18
turbo-1 said:
Invoking inflation "solves" a lot of intractable Big Bang cosmological problems. That attraction should not blind us to questions about the plausibility of inflation.
This is why the evidence in favor of inflation is primarily from WMAP observations (but also other balloon-borne and ground-based CMB observations). Hopefully Planck will have some interesting things to say further in this regard in a couple of years (First data release should be two years from next month).
 
  • #19
Chalnoth said:
The argument is as follows:
1. Model X predicts inflation is highly unlikely.
2. We observe inflation.
3. Therefore, model X is wrong (or at least incomplete), or our understanding of inflation is actually incorrect, and something else that mimics inflation's behavior is the cause of these observations.

That is certainly true as stated; however, in this context I would disagree with 1, namely I disagree that model X predicts that inflation is highly unlikely.

Penrose's (and others', like Sean's) argument is simple (or at least part of it is), e.g.: the subset of all possible initial conditions that allows for subsequent inflation is of measure zero; ergo, in the standard story, inflation is vanishingly unlikely; ergo it cannot be inflation that accounts for flatness/homogeneity/whatever.

But let's go back to the billiard ball analogy. We have B and PHI, and we use those two to calculate A; thus we can bound these initial conditions to a small epsilon ball (or even a point, if certain mathematical conditions on PHI are satisfied). Manifestly this set is also of measure zero in the space of all initial conditions.

What would be wrong is to claim that, because of this fact, the observed billiard ball scattering event couldn't have happened as stated, or that PHI is somehow incorrect!
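
The measure-zero point can be made concrete with a toy numerical experiment (a sketch added here for illustration; the "billiard" is just deterministic free motion on a unit torus, and all names are hypothetical). We fix one true initial condition A, evolve it to an observed final state B, then sample random initial conditions and count the fraction whose final state lands within epsilon of B. That fraction shrinks toward zero as epsilon shrinks, even though the actual event certainly happened:

```python
import random

def evolve(x, v, t=1.0):
    """Deterministic toy dynamics: free motion on a unit torus (our 'billiard')."""
    return (x + v * t) % 1.0

random.seed(0)

# The "observed" final state B comes from one particular initial condition A.
x_true, v_true = 0.3, 0.45
x_obs = evolve(x_true, v_true)          # = 0.75

n = 200_000
fractions = {}
for eps in (1e-1, 1e-2, 1e-3):
    hits = sum(1 for _ in range(n)
               if abs(evolve(random.random(), random.random()) - x_obs) < eps)
    fractions[eps] = hits / n
    print(eps, fractions[eps])          # roughly 2*eps: shrinks toward zero with eps

# Yet the event certainly happened: the true initial condition reproduces B exactly.
assert evolve(x_true, v_true) == x_obs
```

In the limit epsilon goes to zero, the set of initial conditions reproducing B exactly has measure zero under the uniform measure, yet it contains the trajectory that actually occurred, which is the point of the analogy.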
 
  • #20
Haelfix said:
That is certainly true as stated; however, in this context I would disagree with 1. Namely, I disagree that model X predicts that inflation is highly unlikely.

Penrose's argument (and that of others like Sean) is simple, or at least part of it is: the subset of all possible initial conditions that allows for subsequent inflation is of measure zero; ergo, in the standard story, inflation is vanishingly unlikely; ergo, it cannot be inflation that accounts for flatness/homogeneity/whatever.
A better way of saying it is that the model seems to predict that, in the space of all observers, a vanishingly small fraction will observe a universe that began through inflation. Now, the paper doesn't say this explicitly, but that is the direction in which (in my opinion) it is attempting to point.

Haelfix said:
But let's go back to the billiard ball analogy. We have B and PHI, and we use those two to calculate A; thus we can bound these initial conditions to a small epsilon ball (or even a point, if certain mathematical conditions on PHI are satisfied). Manifestly this set is also of measure zero in the space of all initial conditions.

What would be wrong is to claim that, because of this fact, the observed billiard ball scattering event couldn't have happened as stated, or that PHI is somehow incorrect!
Well, it's not an entirely wrong comparison to make. But if one treats all initial conditions for one ball striking another as equally likely, one is simply wrong. All initial conditions do not occur equally, because in reality a person strikes the cue ball with the intention of hitting it into a certain ball so that it goes at a particular angle. One could, with a bit of work, predict with reasonable accuracy the outcome of such a strike after watching a player play the game for a while.

But if we consider the perfectly randomized case with the pool table, we end up with nonsensical results when compared against reality. This isn't surprising, because the perfectly randomized case is wrong.

In a similar way, the discovery of nonsensical results in our models of the formation of our universe is an indication that we are missing something about how the early universe behaved. For instance, as Marcus pointed out, some calculations that include LQG appear to show that inflation is actually very likely, which would suggest that the "something missing" is the quantum nature of gravity (I'm skeptical of this result, but it illustrates the point).
 
  • #21
"In a similar way, discovery of nonsensical results in our models of the formation of our universe is an indication that we are missing something about how the early universe behaved. "

Again, just because an initial condition seems less than generic doesn't mean that it wasn't the seed that created us physically. You cannot use that particular information (the unlikeliness of the particular initial condition) to discriminate against the dynamics (say, the existence of inflation), so long as the dynamics correctly reproduces the observed final state. That's why it irritates me to hear that 'inflation is very unlikely', when instead one should say inflation is nearly 100% certain, because it's the only set of dynamics possible that logically links A to B. I am exaggerating, of course, but you get the point.
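
This reasoning is essentially Bayesian: a tiny prior probability for the initial condition does not penalize the dynamics, because we condition on the observed final state. A minimal numerical sketch (the probabilities below are hypothetical, chosen purely to illustrate the logic, not taken from any paper in the thread):

```python
# Toy Bayes sketch. Two candidate dynamics: X = "inflation", Y = "no inflation".
prior = {"X": 0.5, "Y": 0.5}

# Hypothetical likelihoods of the observed flat, homogeneous universe:
# tiny under X (the "unlikely initial condition"), but far tinier under Y.
likelihood = {"X": 1e-60, "Y": 1e-120}

evidence = sum(prior[d] * likelihood[d] for d in prior)
posterior = {d: prior[d] * likelihood[d] / evidence for d in prior}
print(posterior)   # X dominates overwhelmingly despite its minuscule likelihood
```

The posterior for X ends up within a factor of 10^-60 of certainty: what matters is not how small the likelihood is in absolute terms, but how it compares to the alternatives given what we actually observe.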

There's another issue that troubles me. While it's true that by far the majority of initial conditions have the inflaton far from the top of its potential, in the context of eternal inflation you can always wait a while (or a very long while, since we are presumably talking about infinities) and simply invoke the anthropic principle after that point, and it wouldn't be all that wrong. There is nothing that automatically implies that inflation has to happen immediately.
 
  • #22
As for the billiard ball analogy: suppose we witness only the static end condition of a billiard shot, and didn't witness any fragment of the dynamical evolution or the initial condition. How in the world are we to figure out, even when all the dynamics that could have occurred is known, what possible initial conditions caused that later condition? And for sure there isn't a unique possible initial condition.

Same with the situation prior to the Big Bang. We might find all kinds of clues about what could have happened, and what not. But I think we'll never find a unique match for it, since all kinds of things could have occurred.

The question is then: what difference does it make? For the sake of what are we wanting to find that out?
 
  • #23
heusdens said:
The question is then: what difference does it make? For the sake of what are we wanting to find that out?
Why wouldn't we want to know where we came from?

Anyway, I don't think things will turn out to be as bleak as you paint them. We have quite a bit more investigation to do. There may be roadblocks we don't yet see, but we don't know that yet. And we also don't need to demonstrate a unique set of initial conditions in order to learn a lot about how our universe began.
 
  • #24
Chalnoth said:
Why wouldn't we want to know where we came from?

Anyway, I don't think things will turn out to be as bleak as you paint them. We have quite a bit more investigation to do. There may be roadblocks we don't yet see, but we don't know that yet. And we also don't need to demonstrate a unique set of initial conditions in order to learn a lot about how our universe began.

Well, my remark was more of a societal sort of remark than a scientific one; as a scientist, one wants to know that. But as a society we could decide that there are better things to explore that have more of a practical purpose. But that is probably a whole new discussion of its own.
 
  • #25
heusdens said:
Well, my remark was more of a societal sort of remark than a scientific one; as a scientist, one wants to know that. But as a society we could decide that there are better things to explore that have more of a practical purpose. But that is probably a whole new discussion of its own.
These things aren't mutually exclusive. It is entirely possible for some scientists to study the early universe, while other scientists work on antibiotics or cures for cancers.

Still, from a purely practical standpoint, pure science investigations tend to have benefits that far exceed their costs. One way in which this occurs is that answering these hard questions often requires the construction of specialized devices that themselves require brand-new technologies to be developed. These new technologies then often make their way into the private sector where they, on average, prove to have economic benefits far in excess of the original research cost to develop them.

Then there's the point to be made that all of science is interconnected, and we never know what we'll learn by investigating one area that will apply in others.

So no, even from a societal point of view, there are good reasons to investigate inflation, even if it isn't personally interesting to everybody (though, from my experience, this sort of investigation actually is interesting to a very large number of people).
 
  • #26
I like the billiard analogy. What would the table look like without rails or pockets? A lot like, IMO, the universe we observe today. If we run all the ball positions backwards, they appear to be packed together at infinite density before being flung apart at near-infinite velocity. Heck, we can even throw in a slight curvature to the tabletop to mimic accelerated expansion without overcomplicating the model.
 
  • #27
heusdens said:
Well, my remark was more of a societal sort of remark than a scientific one; as a scientist, one wants to know that. But as a society we could decide that there are better things to explore that have more of a practical purpose. But that is probably a whole new discussion of its own.
Yeah, I agree. And get rid of literature, art, theater, poetry, and history while you're at it!
 

1. What is inflation?

In this thread, inflation means cosmological inflation: a conjectured period of extremely rapid, accelerated expansion in the very early universe, invoked to explain the observed flatness and homogeneity of the universe and to seed its density perturbations.

2. Why is inflation said to be unlikely at the level of 10^-6.6×10^7?

Applying the canonical measure to the space of cosmological trajectories, Carroll and collaborators estimate that the fraction of universes that undergo sufficient inflation is of this order. Written out as a decimal, such a number would require tens of millions of zeros. Gibbons and Turok had earlier reached a qualitatively similar conclusion using a related measure-based argument.

3. Does this mean inflation did not happen?

Not necessarily. Such likelihood estimates are model-dependent: they hinge on how one treats the space of initial conditions. Several posters above also argue that the unlikeliness of an initial condition cannot be held against dynamics that correctly connect that initial condition to the observed final state.

4. How do quantum-gravity models change the picture?

In loop quantum cosmology, Ashtekar and collaborators argue that the analogous calculation gives the opposite answer: once quantum-gravity effects are included, inflation becomes quite likely.

5. What observations bear on inflation?

The evidence cited in favor of inflation comes primarily from cosmic microwave background measurements, especially WMAP, along with balloon-borne and ground-based experiments; the Planck satellite is expected to sharpen these constraints.
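
For a sense of the scale of the number in the thread title: 10^(-6.6×10^7) is far too small to represent as an ordinary double-precision float, so any computation with it has to work in logarithms. A quick sketch (added for illustration):

```python
import math
import sys

log10_p = -6.6e7                       # log10 of the probability in the thread title
print(10.0 ** log10_p)                 # 0.0: underflows far below the smallest float

# The smallest positive normal double is ~2.2e-308 (log10 of about -307.65),
# so this probability is smaller by tens of millions of orders of magnitude.
print(math.log10(sys.float_info.min))
```

This is why such probabilities are quoted as exponents of exponents rather than written out: the decimal expansion alone would run to tens of millions of digits.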
