Does the Friedmann Model Suggest an Unbounded Universe?

In summary: the metric of the Friedmann model permits both bounded and unbounded universes. If the density is above the critical value, the universe is closed; if it is below the critical value, it is open. Interestingly, the universe appears to be very near the critical value, which is currently explained as a result of inflation. The thread also discusses recent arguments (Clarkson, Cortes & Bassett) that curvature should be included as a free parameter in analyses of dark energy rather than assumed to be exactly zero.
  • #1
da_willem
Does the metric of the Friedmann model yield an unbounded universe? I mean, are the geodesics of this metric closed?
 
  • #2
No. Both bounded and unbounded versions are permitted.
 
  • #3
If you do a web search, you should find mention of a "critical density" for the Friedmann universes. If the density is above the critical value, the universe is closed. Below the critical value, it's open. Interestingly enough, it appears that the universe is right at the critical value. This is currently explained as a result of inflation. Look up "flatness-oldness problem" for more (advanced) detail.
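To put a rough number on that critical density (a quick sketch of my own, not from pervect's post, assuming a Hubble constant of 70 km/s/Mpc):

[code]
# Critical density rho_c = 3 H0^2 / (8 pi G) for an assumed H0 = 70 km/s/Mpc.
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
Mpc = 3.086e22       # metres per megaparsec
H0 = 70e3 / Mpc      # Hubble constant in s^-1

rho_crit = 3 * H0**2 / (8 * math.pi * G)
print(f"critical density ~ {rho_crit:.1e} kg/m^3")   # roughly 9e-27 kg/m^3
[/code]

That works out to about five hydrogen atoms per cubic metre, averaged over the whole universe.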
 
  • #4
pervect said:
... it appears that the universe is right at the critical value. This is currently explained as a result of inflation...

you may want to qualify that. It is currently controversial to claim that Omega is exactly equal to 1.

there are attractive scenarios, involving inflation, that make it plausible that Omega = 1,
but lately when they actually measure it they often get error bars entirely on the positive side of 1.

the practice of assuming Omega = 1 has come under fire from professional cosmologists.

about all one can say, without getting into controversial territory, is that the universe is NEAR the critical value----i.e. spatially NEARLY flat.
(In fact that was the conclusion of David Spergel's team when they presented the results of the WMAP third year data last year)
 
  • #5
As long as people say the universe is at or NEAR critical density, and that it is nearly spatially flat, that seems fine. And the data is certainly CONSISTENT with flat.
But AFAIK some qualification is needed to avoid giving the impression that it is known to be EXACTLY of critical density (even though some inflation scenarios make that plausible).

In case anyone is not aware of the background on this there was a paper
by Bruce Bassett et al that appeared recently, citing recent paper(s) by Ned Wright along similar lines.

Bruce Bassett and Ned Wright are both raising the warning flag and saying that one should NOT automatically assume Omega = 1, especially when studying dark energy, BECAUSE IT INTRODUCES SYSTEMATIC ERROR IN THE ANALYSIS. In other words, it is a bad idea to assume more than we actually know from empirical observation, in this case.

Bruce Bassett et al
http://arxiv.org/abs/astro-ph/0702670
Dynamical Dark Energy or Simply Cosmic Curvature?
Chris Clarkson, Marina Cortes, Bruce A. Bassett
5 pages, 1 figure
"We show that the assumption of a flat universe induces critically large errors in reconstructing the dark energy equation of state at z>~0.9 even if the true cosmic curvature is very small, O(1%) or less. The spuriously reconstructed w(z) shows a range of unusual behaviour, including crossing of the phantom divide and mimicking of standard tracking quintessence models. For 1% curvature and LCDM, the error in w grows rapidly above z~0.9 reaching (50%,100%) by redshifts of (2.5,2.9) respectively, due to the long cosmological lever arm. Interestingly, the w(z) reconstructed from distance data and Hubble rate measurements have opposite trends due to the asymmetric influence of the curved geodesics. These results show that including curvature as a free parameter is imperative in any future analyses attempting to pin down the dynamics of dark energy, especially at moderate or high redshifts."

Bassett cites Dunkley et al.
Joanna Dunkley et al
http://arxiv.org/abs/astro-ph/0507473
Measuring the geometry of the Universe in the presence of isocurvature modes
J. Dunkley, M. Bucher, P. G. Ferreira, K. Moodley, C. Skordis
4 pages, 5 figs.
Phys.Rev.Lett. 95 (2005) 261303

"The Cosmic Microwave Background (CMB) anisotropy constrains the geometry of the Universe because the positions of the acoustic peaks of the angular power spectrum depend strongly on the curvature of underlying three-dimensional space. In this Letter we exploit current observations to determine the spatial geometry of the Universe in the presence of isocurvature modes. Previous analyses have always assumed that the cosmological perturbations were initially adiabatic. A priori one might expect that allowing additional isocurvature modes would substantially degrade the constraints on the curvature of the Universe. We find, however, that if one considers additional data sets, the geometry remains well constrained. When the most general isocurvature perturbation is allowed, the CMB alone can only poorly constrain the geometry to Omega_0=1.6+-0.3. Including large-scale structure (LSS) data one obtains Omega_0=1.07+-0.03, and Omega_0=1.06+-0.02 when supplemented by the Hubble Space Telescope (HST) Key Project determination of H_0 and SNIa data."

The point is not whether you like flat or don't like flat. The point is we don't know and ASSUMING FLAT INTRODUCES ERRORS. Assuming flat encourages circular reasoning (according to Ned Wright) and makes what you do unreliable. This is how Bassett et al argue, and they cite Ned Wright too:
===quote Bassett===
However, we will show that ignoring Omega_k induces errors in the reconstructed dark energy equation of state, w(z), that grow very rapidly with redshift and dominate the w(z) error budget at redshifts (z > 0.9) even if Omega_k is very small. The aim of this paper is to argue that future studies of dark energy, and in particular, of observational data, should include Omega_k as a parameter to be fitted alongside the w(z) parameters.

Looking back, this conclusion should not be unexpected. Firstly the case for flatness at the sub-percent level is not yet compelling: a general CDM analysis [13 the Dunkley paper], allowing for general correlated adiabatic and isocurvature perturbations, found that WMAP, together with largescale structure and HST Hubble constant constraints, yields
Omega_k = −0.06 ± 0.02.
We will show that significantly smaller values of Omega_k lead to large effects at redshifts z ~ 0.9 well within reach of the next generation of surveys.

Secondly, Wright (e.g.[14]) has petitioned hard against the circular logic that one can prove the joint statement (Omega_k = 0,w = −1) by simply proving the two conditional statements (Omega_k = 0 given that w = −1) and (w = −1 given that Omega_k = 0). ...

Given that the constraints on Omega_k evaporate precisely when w deviates most strongly from a cosmological constant, it is clearly inconsistent to assume Omega_k = 0 when deriving constraints on dynamical dark energy...
===endquote===
 
  • #6
The issue is one of model selection though. Inflation is a physical model that makes a prediction that [tex]\Omega = 1[/tex]. Due to observational uncertainties however, we can never say with infinite certainty that it is measured to be precisely unity.

What we can do is note that this, along with other predictions from inflation (such as Gaussianity), has been observed, and therefore it is likely (given current data and theoretical understanding) that inflation occurred and the Universe is flat.

The way cosmology is parametrised makes it easy to forget that there must be physics going on; the point of finding best-fit parameters is not the best fit in a continuous space for its own sake but to use those fits to discriminate between discrete models.

At heart this is really the problem with dark energy. Using w(a) as a catch-all parameter for dark energy lets us find all kinds of constraints, depending on how we parametrise w(a) and what assumptions we try about things like perturbations and sound speed. The root problem, though, is that since we lack a solid set of micro-physical models that predict discrete values for the w(a) parameters, all we are doing is parameter fitting, not physical model selection. Say we set [tex] w(a) = w_0 + w_a(1-a)[/tex], which is a common parametrisation. If some advanced observational campaign tells us that [tex]w_0=-0.98[/tex] and [tex]w_a=-0.02[/tex] or something like that, what, if anything, have we really learned about dark energy or cosmology generally, if we don't have a set of discrete models to judge between based on these findings?
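To make this concrete, here is a rough sketch (mine, not Wallace's or Bassett's code) of how the curvature parameter and the CPL-style w(a) = w0 + wa(1-a) both enter the distances one actually fits to data; the fiducial values (Omega_m = 0.3, H0 = 70 km/s/Mpc, 1% curvature) are purely illustrative assumptions:

[code]
import numpy as np
from scipy.integrate import quad

def E(z, Om, Ok, w0, wa):
    """Dimensionless Hubble rate H(z)/H0 with free curvature and CPL dark energy."""
    a = 1.0 / (1.0 + z)
    Ode = 1.0 - Om - Ok
    # rho_DE(a)/rho_DE(today) for w(a) = w0 + wa*(1 - a)
    de = a ** (-3.0 * (1.0 + w0 + wa)) * np.exp(-3.0 * wa * (1.0 - a))
    return np.sqrt(Om * (1 + z) ** 3 + Ok * (1 + z) ** 2 + Ode * de)

def lum_dist(z, Om=0.3, Ok=0.0, w0=-1.0, wa=0.0, H0=70.0):
    """Luminosity distance in Mpc; the sinh/sin branch handles open/closed geometry."""
    DH = 299792.458 / H0                                   # Hubble distance in Mpc
    chi, _ = quad(lambda zp: 1.0 / E(zp, Om, Ok, w0, wa), 0.0, z)
    if Ok > 0:
        Dm = DH / np.sqrt(Ok) * np.sinh(np.sqrt(Ok) * chi)
    elif Ok < 0:
        Dm = DH / np.sqrt(-Ok) * np.sin(np.sqrt(-Ok) * chi)
    else:
        Dm = DH * chi
    return (1.0 + z) * Dm

# A slightly curved LCDM universe versus a flat one at z = 2.5: the small difference
# is the kind of signal that gets misread as an evolving w(z) if curvature is fixed to zero.
print(lum_dist(2.5, Ok=0.01), lum_dist(2.5, Ok=0.0))
[/code]

At high redshift, small amounts of curvature and small changes in (w0, wa) shift these distances in nearly degenerate ways, which is essentially the point the Clarkson, Cortes and Bassett abstract quoted above is making.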

The one model selection criterion we can apply is to ask whether w=-1 for all time, which is a cosmological constant or vacuum energy (since they are identical as far as GR is concerned). This at least is a definite physical theory (albeit with some problems) that we can test against. We can then ask which model, i.e. w=-1 or w≠-1, is most supported by the data.

Incidentally, on current data the answer is overwhelmingly that w=-1, but the data isn't great. In 5 years things could be very different. The lack of solid micro-physical theories for the infinite range of w(a) values is an issue, though; there's really no getting around that.
 
Last edited:
  • #7
Here is some more stuff on this.
Among working professional cosmologists there is a move to abandon the automatic Omega = 1 assumption and allow positive curvature.


http://arxiv.org/abs/astro-ph/0603449
Wilkinson Microwave Anisotropy Probe (WMAP) Three Year Results: Implications for Cosmology
Authors: D. N. Spergel, R. Bean, O. Doré, M. R. Nolta, C. L. Bennett, J. Dunkley, G. Hinshaw, N. Jarosik, E. Komatsu, L. Page, H. V. Peiris, L. Verde, M. Halpern, R. S. Hill, A. Kogut, M. Limon, S. S. Meyer, N. Odegard, G. S. Tucker, J. L. Weiland, E. Wollack, E. L. Wright
Comments: 91 pgs, 28 figs. Accepted version of the 3-year paper as posted to this http URL in January 2007

NOTE THAT JOANNA DUNKLEY AND NED WRIGHT ARE co-authors with David Spergel of this key WMAP3 results paper. The people involved here are "principal investigator level" WMAP/cosmologist types.

http://arxiv.org/abs/astro-ph/0701584
Constraints on Dark Energy from Supernovae, Gamma Ray Bursts, Acoustic Oscillations, Nucleosynthesis and Large Scale Structure and the Hubble constant
Authors: Edward L. Wright (UCLA)
Comments: 17 pages Latex with 8 Postscript figure files. One new Table, one new Figure, and several new references added. Submitted to the ApJ

This paper by Wright attacks the usual practice of assuming flat, although he happily concedes that the data is consistent with flat LCDM.
That is, flat has not been ruled out yet, and neither has curved.

The WMAP FIRST YEAR data had a confidence interval that included 1 as an endpoint. WMAP1 said 1.02 +/- 0.02

The WMAP3 data was the first time I saw a major paper (Spergel et al) with a confidence interval for Omega that did NOT include 1, at least as an endpoint. There was a confidence interval that was all on the positive side of 1.
 
  • #8
Wallace said:
...Inflation is a physical model that makes a prediction that [tex]\Omega = 1[/tex]...

I'm unsure about a couple of things.
I'm not sure inflation happened, now that we don't need it to solve the horizon problem (QG bounce seems to take care of different parts of the sky having roughly the same CMB temp).

Magueijo and Singh just posted a paper undermining another support for inflation scenarios. So inflation may just be a passing fad.

What counts is what they measure and lately I see errorbars on the positive side of 1 which do not contain 1. So I think things are in flux.

But the main thing I am unsure about, Wallace, is how do you get that inflation (all inflation scenarios) PREDICT that Omega must be EXACTLY equal to 1.

I was under the impression that inflation could provide one possible explanation for why Omega is either one or very very very near one.
How does it work that inflation predicts that Omega has to be exactly 1?
 
  • #9
marcus said:
But the main thing I am unsure about, Wallace, is how do you get that inflation (all inflation scenarios) PREDICT that Omega must be EXACTLY equal to 1.

I was under the impression that inflation could provide one possible explanation for why Omega is either one or very very very near one.
How does it work that inflation predicts that Omega has to be exactly 1?

You can find this in any text on cosmology, but essentially this occurs because the curvature term in the Friedmann equation goes as 1/R^2, whereas the energy density of an 'inflaton' stays roughly constant while inflation is under way. Since inflation increases R by many times, any curvature term is beaten down to an insignificant level.
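A standard way to make this quantitative (my sketch of the textbook argument, not a quote from anyone in the thread): the Friedmann equation gives

[tex]|\Omega(t) - 1| = \frac{|k|}{a^2 H^2}[/tex]

During inflation H is roughly constant while the scale factor grows like [tex]e^{Ht}[/tex], so after N e-folds the deviation from flatness is suppressed by a factor of roughly [tex]e^{-2N}[/tex]; for N of order 60 that is a suppression of order 10^-52. Curvature is driven extremely close to zero, but not to exactly zero.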

The mystery is why the inflaton mechanism turned off at some point.

I would recommend having a read of some better source than me though, John Peacock has a good chapter on this in 'Cosmological Physics'.

BTW, could you give a brief description of how 'QG bounce' solves the horizon problem? I know essentially nothing about this theory, so even a basic description would be nice.
 
  • #10
Wallace said:
...Since inflation increases R by many times, any curvature term is beaten down to an insignificant level.
...

Yes! This is how I remembered it, for example as explained in Lineweaver's 2003 article!
Curvature is beaten down to an "insignificant level" but not forced to be exactly zero.

The mechanism you are talking about does not, I think, produce an EXACT Omega = 1, only approximately equal to 1 (which I'm comfortable with.)

My question: Is inflation compatible with space being topologically S3?

Since you say "insignificant", which can mean a positive quantity near zero, I think you may agree after all.

Recent error bars for Omega are often something like [1.003, 1.021].
Ned Wright's recent paper gave 1.011 as "best fit".
But these results are not at the 95% or 99% confidence level, only about CL 65%, so they are still "consistent" with a spatially flat, infinite universe.

Suppose future more accurate observations increase the CL, so we have, say, [1.003, 1.021] with CL 99%. Would this rule out inflation?

Suppose Omega actually IS, say, 1.01. Can this have arisen even assuming inflation occurred, or does it rule out inflation?
 
Last edited:
  • #11
Wallace said:
...
BTW, could you give a brief description of how 'QG bounce' solves the horizon problem? I know essentially nothing about this theory so a from the basic description would be nice.

Numerous LQC papers since 2001.
LQC reproduces ordinary cosmology soon after BB, but does not experience a singularity and works smoothly back in time past where the classical breakdown occurred. What it finds is a collapsing classical spacetime.

Coherent quantum states evolve deterministically thru the ex-singularity. A semiclassical state, after collapse and bounce, reexpands to a semiclassical state.

the original paper was Bojowald (2001) "The absence of singularity in LQC"

He has been working on removing the assumptions of homog and isotropy, generalizing the result, by perturbing around an exactly solvable theory. His most recent paper on this (2007) raises questions about whether a bounce occurs in all cases.

Ashtekar's group has been doing numerical simulations of the universe passing thru the bounce, in various cases.

Ashtekar et al (2006) find that the bounce happens typically when the density reaches about 0.8 Planck (80 percent of the Planck density).
Quantum corrections to gravity make it repulsive instead of attractive at Planck-level densities and curvatures.

Magueijo and Singh (2007) cite references to the effect that the bounce gets rid of the horizon problem because the universe has plenty of time during the prior contracting phase. Plenty of time for distant sectors to communicate and arrive at equilibrium. (That's my understanding of their brief discussion of the horizon problem, before they moved on to structure formation)

Magueijo and Singh's recent paper was about getting rid of a structure formation problem----finding an alternative solution not requiring inflation.

The point about inflation is that nobody really knows how these scenarios work or what an "inflaton" is. The reason people entertain notions of inflation is that the scenarios address certain problems. If there turn out to be alternative solutions to those problems, this undermines the logical support for inflation scenarios.
 
Last edited:
  • #12
Hi, da willem,

Looks like others gave you some good basic information about FRW models, but for more advanced students there is another issue: the local versus global distinction. All FRW models possess a unique family of spatial hyperslices which are everywhere orthogonal to the world lines of the matter (pressureless perfect fluid or "dust" for "matter dominated" and "radiation fluid" for "radiation dominated"). But it is perfectly possible to write down models in which these slices each have constant negative curvature but are compact Riemannian three-manifolds. The easiest way to see why is to read about tilings of the hyperbolic plane and to identify edges of a tile to form a "compact quotient manifold" of H^2. This point was overlooked in classic textbooks (including MTW!) and was only emphasized rather recently by Jeffrey Weeks and Neil Cornish.
 
  • #14
pervect said:
If you do a web search, you should find mention of a "critical density" for the Friedmann universes. If the density is above the critical value, the universe is closed. Below the critical value, it's open. Interestingly enough, it appears that the universe is right at the critical value. This is currently explained as a result of inflation. Look up "flatness-oldness problem" for more (advanced) detail.

But open and closed are something different than unbounded and bounded, right?! I know that open and closed versions are possible, depending on the energy content. But I was actually wondering whether an unbounded universe is possible, i.e. with closed geodesics that end on themselves.
 
  • #15
da_willem said:
But open and closed are something different than unbounded and bounded, right?! I know that open and closed versions are possible, depending on the energy content. But I was actually wondering whether an unbounded universe is possible, i.e. with closed geodesics that end on themselves.

the words 'open' and 'closed' cause a lot of confusion. I am curious what you mean by them, da_willem.

I also don't understand why you say that "unbounded" means having closed geodesics.
A universe based on a spatial slice which is simply flat infinite R3 (plus a time axis) would have no closed geodesics----and it would be boundaryless.

If all you are asking is whether a universe is possible where the spatial slices are boundaryless, then clearly the answer is YES.

the slices could be infinite flat R3
or they could be three-spheres S3
or they could be TOROIDAL (like a cube with opposite faces glued, analogous to Pacman world: a square with opposite edges glued)
and other things too grim to mention that Chris Hillman referred to.

I only take the first two seriously----infinite flat 3D space and the 3D sphere.
Both of them are boundaryless.

The 3D sphere alternative is spatially closed-----and you could imagine spatial geodesics which come around to the starting point. But a universe with that kind of space could still expand forever.
Just to make sure we understand each other, my idea of a spatially closed universe does not necessarily involve a big crunch. That is a separate issue. Do you agree?
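To make the "cube with opposite faces glued" picture above a bit more concrete, here is a tiny sketch of my own (not from anyone in the thread): in such a toroidal space a geodesic leaving one face re-enters through the opposite face, and the shortest separation between two points may go "the other way around".

[code]
import numpy as np

def torus_separation(p, q, L=1.0):
    """Shortest separation in a flat 3-torus: a cube of side L with opposite faces glued."""
    d = np.abs(np.asarray(p, dtype=float) - np.asarray(q, dtype=float)) % L
    d = np.minimum(d, L - d)          # wrapping around may be shorter
    return np.linalg.norm(d)

# Two points near opposite faces are actually close neighbours:
print(torus_separation([0.05, 0.5, 0.5], [0.95, 0.5, 0.5]))   # 0.1, not 0.9
[/code]

Like the 3-sphere, this space is finite and boundaryless and has closed spatial geodesics; unlike the 3-sphere it is exactly flat, so local curvature measurements alone cannot distinguish it from infinite flat space, only topology searches of the kind discussed in the next post can.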
 
  • #16
Chris Hillman said:
Hi, da willem,

Looks like others gave you some good basic information about FRW models, but for more advanced students there is another issue: the local versus global distinction. All FRW models possess a unique family of spatial hyperslices which are everywhere orthogonal to the world lines of the matter (pressureless perfect fluid or "dust" for "matter dominated" and "radiation fluid" for "radiation dominated"). But it is perfectly possible to write down models in which these slices each have constant negative curvature but are compact Riemannian three-manifolds. The easiest way to see why is to read about tilings of the hyperbolic plane and to identify edges of a tile to form a "compact quotient manifold" of H^2. This point was overlooked in classic textbooks (including MTW!) and was only emphasized rather recently by Jeffrey Weeks and Neil Cornish.

My impression of some fairly recent Neil Cornish papers is that he is not proposing these periodic universes as in any sense likely.
He seems to be presenting them as mathematical constructs so that he can use the data to RULE THEM OUT to whatever extent is possible.

That is, he finds a scale like, say, 70 billion LY, within which he can show periodicity does not occur. If there is periodicity, then it must therefore only happen if you go out MORE than 70 billion LY or whatever scale he finds.

This is ultimately the only way to defeat the periodic universe idea, whether it is toroidal or soccer-ball or some exotic tiling of R3, so I see what Cornish has done as needful. Somebody had to do it.

and as more data comes in he can hopefully keep extending that 70 billion LY distance within which periodicity is proven not to occur.

I have a high opinion of Neil Cornish, in part because when I saw his website in 2004 it had a picture of his pet monkey. I hope he still has a pet monkey and that both of them are in good health.
 
Last edited:
  • #17
marcus said:
Yes! This is how I remembered it, for example as explained in Lineweaver's 2003 article!
Curvature is beaten down to an "insignificant level" but not forced to be exactly zero.

If I remember correctly, large-scale inhomogeneities are expected to yield a non-flat observable universe at about the 0.001% level. A universe that is flat to much higher precision than that would, I should think, be inconsistent with inflation (or result in another fine-tuning problem).

I haven't heard anyone making noise about the flatness assumption of late. It is certainly true that higher-precision measurements could yield a non-flat universe at, say, the 0.1% level and act as evidence against inflation. However, nobody I've spoken to thinks that there is any evidence for that just yet. There are only a few cosmology experiments that are precise enough to even distinguish between a flat and non-flat universe and, to my knowledge, these have all allowed the parameter to float.

I don't think you'll hear much controversy on this topic for a while yet, the measurements simply aren't precise enough.
 
  • #18
SpaceTiger said:
...
I haven't heard anyone making noise about the flatness assumption of late. ...

I have, as recently as February 2007:

Bruce Bassett et al
http://arxiv.org/abs/astro-ph/0702670
Dynamical Dark Energy or Simply Cosmic Curvature?
Chris Clarkson, Marina Cortes, Bruce A. Bassett
5 pages, 1 figure
"We show that the assumption of a flat universe induces critically large errors in reconstructing the dark energy equation of state at z>~0.9 even if the true cosmic curvature is very small, O(1%) or less. The spuriously reconstructed w(z) shows a range of unusual behaviour, including crossing of the phantom divide and mimicking of standard tracking quintessence models. For 1% curvature and LCDM, the error in w grows rapidly above z~0.9 reaching (50%,100%) by redshifts of (2.5,2.9) respectively, due to the long cosmological lever arm. Interestingly, the w(z) reconstructed from distance data and Hubble rate measurements have opposite trends due to the asymmetric influence of the curved geodesics. These results show that including curvature as a free parameter is imperative in any future analyses attempting to pin down the dynamics of dark energy, especially at moderate or high redshifts."

To the extent that more people stop automatically assuming zero curvature and start "including curvature as a free parameter" as Bassett advocates, I would expect the noise about it to die down. One way to resolve the problem would be for everybody to stop making that assumption.
 
Last edited:
  • #19
marcus said:
I'm unsure about a couple of things.
I'm not sure inflation happened, now that we don't need it to solve the horizon problem (QG bounce seems to take care of different parts of the sky having roughly the same CMB temp).

Nor do I.
To me, though I hesitate to say it, it seems that the horizon problem and its solution (inflation) were invented as an attempt to save the idea of a beginning of the BB.
As a non-professional, even I can calculate that our observable universe, compressed to Planck density, had a size of about 10E-12 m (taking into account 10E87 baryons + DM + DE). This distance is far too large to allow communication between opposite points within the Planck time (10E-43 s). The maximum possible distance light could travel in that time would only be 10E-34 m. Indeed, theories that give us plenty of time before the BB don't need a horizon problem, or a solution (inflation) to this (IMO non-existing) problem, since there is plenty of time for communication; see also post #11.
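For what it's worth, here is a quick numerical check of that arithmetic (a sketch only; the 10E87 particle count and the use of the Planck time are taken from the post as given, not endorsed):

[code]
import math

m_baryon   = 1.67e-27     # kg, roughly the proton mass
N_baryons  = 1e87         # particle count quoted above (taken at face value)
rho_planck = 5.2e96       # kg/m^3, Planck density
t_planck   = 5.4e-44      # s, Planck time
c          = 3.0e8        # m/s

mass  = N_baryons * m_baryon           # ~1.7e60 kg
size  = (mass / rho_planck) ** (1/3)   # ~7e-13 m, i.e. of order 10E-12 m as stated
reach = c * t_planck                   # ~1.6e-35 m, of order the Planck length

print(f"size at Planck density ~ {size:.1e} m, light reach in one Planck time ~ {reach:.1e} m")
[/code]

Either way the quoted size is vastly larger than the distance light covers in one Planck time; whether the Planck time is the relevant timescale at all is what pervect questions below.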
I ask myself what is wrong with my reasoning, whereas nowadays it seems to me that inflation is almost accepted by a majority of professional cosmologists as part of a standard model.

Kind regards hurk4
 
  • #20
When man discovered the soap bubbles the "inflaton" committed hari-kari.
jal
 
  • #21
hurk4 said:
Nor do I.
To me, though I hesitate to say it, it seems that the horizon problem and its solution (inflation) were invented as an attempt to save the idea of a beginning of the BB.
As a non-professional, even I can calculate that our observable universe, compressed to Planck density, had a size of about 10E-12 m (taking into account 10E87 baryons + DM + DE). This distance is far too large to allow communication between opposite points within the Planck time (10E-43 s).

You can find an explanation in http://www.astro.ucla.edu/~wright/cosmo_03.htm
http://www.astro.ucla.edu/~wright/cosmo_04.htm

Basically, Planck time has nothing to do with the problem.
 
  • #22
pervect said:
You can find an explanation in http://www.astro.ucla.edu/~wright/cosmo_03.htm
http://www.astro.ucla.edu/~wright/cosmo_04.htm

Basically, Planck time has nothing to do with the problem.

Thank you pervect.
I took Planck time as the shortest time since a presumed beginning of the BB, and indeed this seems not to be the right approach. A. Guth's 10E-32 s, and of course even 380000 years since such a presumed beginning, would not be enough to prevent a communication problem. It is the presumed beginning which is the problem; no beginning at all gives us a condition that avoids the problem. However, Magueijo's solution also gives us sufficient time to avoid the horizon problem, but his proposal to increase the speed of light (still after a presumed beginning) during a short time seems to me like inventing something artificial, just as inflation does.
By the way, another argument I have against the inflation model is that, in that case, the pre-inflation BB universe was understood to have a density far, far beyond the Planck density, which I think is forbidden by QM. I love bounce ideas like Ashtekar's.

kind regards,
hurk4
 
  • #23
Here's the way I look at it:

GR is currently the best tested theory of gravity we have. Thus most of modern cosmology is based on it, and derivations from it, such as the Friedmann equations. And so far it has turned out to be a pretty good theory of gravity, making many important predictions that have been confirmed, which is what science asks of theories.

There's probably a limit as to how far back in time it is "safe" to use GR, though. If you look at it from a particle physics point of view, anything that involves particle energies higher than we have been able to create in a laboratory or study by some other means is suspect.

We don't have any particular way of knowing how far beyond the physics we've studied in accelerators it will be safe to use GR. Will quantum gravity effects really turn up at the Planck scale? Before? After? We can't really say. This is why I'm a bit critical of the Planck scale argument - it assumes that the physics changes at some particular point, but we don't actually know when it's going to change.

So what I'd basically suggest is to use GR as the best available and tested theory of gravity, and note that any predictions beyond a certain time (when the average energy of the universe rises above anything that we've ever been able to create) are based on extrapolation.

One could also try the same sort of analysis with other theories of gravity (though I think GR has been better tested) to find the point where they start to involve extrapolation of currently unmeasured physics.
 
  • #24
I've been asked to expand on my comment.
It seems that I'm agreeing with most of what is being said by others in this thread.

INFLATON
When man discovered the soap bubbles the "inflaton" committed hari-kari.
Observations have determined that the galaxies are spread out as if they are on the surface of soap bubbles.
Let’s look at what gravity would be doing.
1. From the center of one bubble, you would be feeling the force of gravity equally from all directions if the galaxies and dark matter/energy were evenly spread out on the surface of the bubble and if the distance from you to the surface of the bubble was equal (that’s what center means).
2. Are the galaxies and dark matter/energy homogeneously distributed on the sphere? (I would doubt very much that future observations would arrive at such a conclusion.) If not, then gravity will be pulling at you more in one direction than another.
3. Will future observation determine that the bubbles are spherical? ( I would doubt it) If not, then gravity will be pulling at you more in one direction than another.
4. Would observations confirm that all of the bubbles are the same size?? ?( I would doubt it) If not, then gravity will be pulling at you more in one direction than another.

Now, let’s see what the inflaton has got to do.
The “math kids” have rendered their verdict …. They cannot make a model because of the curvature that has been created by gravity. Gravity is not the same everywhere and the inflaton cannot act uniformly everywhere. If the inflaton existed there would be observable differences in the expansion rate which we do not see. I assumed that the papers referred to in this thread are sufficient references.

Now, let’s try to do a “lookback”.
If you remove one unit of inflaton from one bubble you must remove an inflaton from every bubble. Suddenly, you will find that bubbles are collapsing and merging where you never expected it. You can never reach the situation of having gravity evenly distributed (“dust bag”)

Of course, all of this is common sense that has to be supported by mathematical proof. ( OOOPS! I forgot. The “math kids” have rendered their verdict)
I’m sure that other relevant reference papers are available from the cosmology people of this forum.
The inflaton was a mathematical invention/mechanism to explain expansion of the universe. I was willing to accept it for the dust bag model. However, I was told that the galaxies are spread out on “bubbles” and it no longer works.
If you want to keep the inflaton as a mechanism then you will be going against what the observations are telling us.
jal
 
  • #25
jal said:
I've been asked to expand on my comment.
It seems that I'm agreeing with most of what is being said by others in this thread.

INFLATON

Observations have determined that the galaxies are spread out as if they are on the surface of soap bubbles.

OK, I've seen references for that, for instance http://csep10.phys.utk.edu/astr162/lect/gclusters/soap.html

(there may be better ones).

Let’s look at what gravity would be doing.

If we take a large enough sphere, non-uniformities such as the voids should average out if we assume the cosmological principle holds.

Birkhoff's theorem would say that the net effect of the area outside this large sphere should be on the average zero, therefore we should only have to consider nearby "bubble-bubble" attraction to model the perturbing effect of the bubbles.

I'm not quite sure if this agrees with what you wrote, or not, anyway that's the way it looks to me. I'd have to do some more vigorous handwaving to justify ignoring the cosmological constant, I will just appeal to symmetry for the time being.

Now, let’s see what the inflaton has got to do.
The “math kids” have rendered their verdict …. They cannot make a model because of the curvature that has been created by gravity.

Maybe you quoted a reference on this earlier, but I couldn't find it. I have no clue as to who the "math kids" are, or what their verdict is, or why you are saying this.

Gravity is not the same everywhere and the inflaton cannot act uniformly everywhere. If the inflaton existed there would be observable differences in the expansion rate which we do not see. I assumed that the papers referred to in this thread are sufficient references.

You've lost me. I looked through this thread for previous papers, I didn't see them. Possibly this was in some other thread?
 
  • #26
pervect
If we take a large enough sphere, non-uniformities such as the voids should average out if we assume the cosmological principle holds.
jal
I was willing to accept it for the dust bag model. However, I was told that the galaxies are spread out on “bubbles” and it no longer works.
If you want to keep the inflaton as a mechanism then you will be going against what the observations are telling us.
I don't like to argue. I'll just listen and read any new evidence that is presented.
jal
 
  • #27
I would say that from what I've seen so far, saying that "inflation committed hari-kari" is a rather considerable overstatement at this point. Something more restrained, like "XXX has suggested that there may be problems in accounting for large scale structure within an inflationary model" would seem to me to be a better representation of the mainstream view.
 
  • #28
I'll accept that reproach.
jal
 
  • #29
The mainstream model has survived most such tests. Asking for perfect predictability is not a test of that model. The universe is not smooth, that we already knew.
 
  • #30
WOW! wow! What a paper!
Tell Martin Bojowald (and those that he cited), that I’m throwing a party and supplying the refreshments and photo ops. (They can get an expense account from their depts.).
Marcus, you can forget all the other papers …. This is the most influential paper … and it will be for years to come.
I want to tell all the “seekers” about this paper.
I want to tell the whole world!
Contrary to Martin Bojowald, I can take a definite position and say that his paper presents a strong argument as to why the “inflaton” is not needed.
http://arxiv.org/PS_cache/arxiv/pdf/0705/0705.4398v1.pdf
The Dark Side of a Patchwork Universe
Martin Bojowald
30 May 2007
A complete understanding of the universe currently faces several problems, most of which are occasionally expected to be solved by some version of quantum gravity. This also applies to the dark energy problem.
Schematically, one has a picture where space is presented as a discrete structure building up from a small state at the big bang to a highly refined, nearly continuous fabric today. The evolution picture is thus that of an irregular lattice structure which changes in internal time by elementary changes of geometry.

Note: just add one more unit every once in a while

From the point of view of quantum field theory on curved space-times one can effectively view the finiteness of vacuum energy in loop quantum gravity as a cut-off provided by the underlying discrete structure of loop quantum gravity. On the grounds of dimensional arguments one would expect that the cut-off occurs at Planckian values of energy or length, which would certainly result in the well known mismatch between the predicted and observed cosmological constants.

Note: I would like to see arguments why the cutoff cannot be at 10^18 (gluon interaction sizes/length)

It is to be expected that vacuum energy in this formalism does not only depend on the matter state but also on quantum geometry.

In fact, such a quantum geometry epoch of inflation typically does not last long enough to provide all 60 e-foldings required for successful structure formation.
Moreover, such an isotropic model with only inverse volume corrections is not very accurate at large volume because it does not fully take into account the dynamical discreteness of space manifesting itself in lattice refinements determined by the elementary moves of a Hamiltonian constraint.
Rather, during expansion the discrete structure of space subdivides as described in Sec. 2 which can be modeled by adding new small, discrete patches resulting from new vertices of graphs. When the number of patches increases with volume, their size stays nearly constant or could even decrease.

francesca
I did not even look at the other papers that you mentioned. Martin Bojowald’s paper was just tooooo much!
jal
 

1. What is the 'unbounded' Friedmann model?

The unbounded Friedmann model is a cosmological model describing the expansion of the universe. It is based on general relativity and was developed by the physicist Alexander Friedmann in the 1920s.

2. How does the unbounded Friedmann model differ from other cosmological models?

The unbounded Friedmann model assumes that the universe is spatially infinite, with no boundaries or edges. This contrasts with the closed Friedmann model, which describes a finite universe whose spatial sections have the geometry of a three-sphere.

3. What is the significance of the unbounded aspect of the Friedmann model?

An unbounded universe has no centre and no edge, and it expands uniformly in all directions.

4. How does the unbounded aspect of the Friedmann model affect our understanding of the universe?

It challenges the intuitive picture of the universe as a finite, contained object: the universe may be far larger than the observable region and may continue to expand indefinitely.

5. Is there any observational evidence for the unbounded aspect of the Friedmann model?

The data are consistent with it but do not prove it. Measurements of the cosmic microwave background and of large-scale structure constrain the spatial geometry to be very close to flat; a flat (or open) universe with the simplest topology is spatially infinite, but, as the discussion above makes clear, a slightly closed, finite universe has not been ruled out.
