New cosmic model parameters from South Pole Telescope

  • #1
marcus
I just posted about the new parameters here:
https://www.physicsforums.com/showthread.php?p=4152380#post4152380

Basically the new numbers look rather close to what Jorrie already put as the default in his
calculator. zEQ looks familiar.
Hubble rate 70 looks familiar :biggrin:
Here's the A25 calculator:
http://www.einsteins-theory-of-relativity-4engineers.com/CosmoLean_A25.html

Sean Carroll has a blog post about the new SPT report.
http://blogs.discovermagazine.com/c.../05/south-pole-telescope-and-cmb-constraints/
 
  • #2
A curious thing about the SPT report is that it implies, with 95% confidence, that the universe is spatially finite. (Not just the observable portion, the whole thing.) They gave an error bar, or confidence interval, for the overall large-scale curvature which was entirely on the positive side. Precisely flat, zero curvature, seems to be RULED OUT if you take the SPT report seriously. Here is their confidence interval for Omega.

1.0019 < Omega < 1.0099

If you accept their Hubble radius of 14.0 billion LY (which is very close to what other recent studies have found) then we are talking about space, in the standard LCDM model, being a hollow 3-sphere with radius-of-curvature no larger than 14/sqrt(0.0019) = 320 billion LY. In fact this brackets the radius of curvature R:

140 Gly < R < 320 Gly

So we can imagine CIRCUMNAVIGATING the universe, if we could somehow halt its expansion so that its circumference wouldn't be growing while we were making the circuit.

6.28*140 = 880
6.28*320 = 2010

So the South Pole Telescope folks are telling me that if I stop the expansion and shoot a laser flash off in some direction, then eventually it will come back to me from the other direction, having circumnavigated space. And it will take AT LEAST 880 billion years to come back around and NO MORE than 2010 billion years.
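The arithmetic above can be sketched in a few lines (a back-of-envelope check using the Hubble radius of 14.0 Gly and the Omega bounds quoted above; it reproduces the rounded 880 and 2010 Gyr figures):

```python
import math

# Inputs taken from the discussion above (not exact paper values):
R_H = 14.0  # Hubble radius in billions of light years (Gly)
omega_k_bounds = (0.0019, 0.0099)  # Omega_total - 1, the 95% interval quoted

for ok in omega_k_bounds:
    # Radius of curvature of the 3-sphere: R = R_H / sqrt(Omega_total - 1)
    R = R_H / math.sqrt(ok)
    # Circumference a light flash would traverse if expansion were frozen;
    # at one Gly per Gyr, this is also the round-trip time in Gyr
    C = 2 * math.pi * R
    print(f"Omega_k = {ok}: R ~ {R:.0f} Gly, circumnavigation ~ {C:.0f} Gyr")
```

The smaller Omega_k gives the larger radius, which is why the 0.0019 bound corresponds to the ~320 Gly radius and the ~2010 Gyr round trip.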

See equation 21 on page 14, of http://arxiv.org/pdf/1210.7231v1.pdf
 
  • #3
Wait...you're saying that the results say that the universe is finite but unbound?
 
  • #4
Drakkith said:
Wait...you're saying that the results say that the universe is finite but unbound?

Yes spatially finite. They give a confidence interval, I think 95%. Overall curvature has to be in that range. The interval is entirely on the positive curvature side---does not include zero. So it rules the perfectly flat case out. If you believe the report. Rules it out with 95% probability, if you like.

What do you mean by "unbound"? In normal cosmology space is boundaryless so the positive curvature case typically means something like the 3-sphere.
 
  • #5
marcus said:
What do you mean by "unbound"? In normal cosmology space is boundaryless so the positive curvature case typically means something like the 3-sphere.

I think that's what I mean. :biggrin:
 
  • #6
Drakkith said:
I think that's what I mean. :biggrin:

Well yes then, finite 3D volume, no boundary, no "space outside of space"

overall positive curvature

curvature must be measured from inside the space since there is no outside

and it may (or may not) expand indefinitely. If the cosmological constant is what it seems to be from observations, then it will expand indefinitely.
=================

so the picture one gets from the SPT report is very much analogous to the 2D balloon surface world, an expanding hollow 2-sphere, not embedded in any surrounding space, all existence concentrated on that finite area. Like that analogy except a 3D version of it. A boundaryless finite volume 3D space that you can circumnavigate. That has a constant positive curvature experienced (for example) by triangles adding up to more than 180 degrees, how much more depending on size, positive curvature experienced from within the space IOW.

I don't know how much of this is familiar to you already, I guess a lot. But it is possible you have more questions, if so please ask.

The sad thing is that the SPT report may get contradicted by another report that studied much more nearby stuff: Spitzer telescope, Wendy Freedman and Barry Madore et al. I don't know how this will play out. Sean Carroll liked the SPT report; so do I. But some new supernova data may affect the next iteration. I am in suspense as to whether this confidence interval for the curvature gets confirmed and "gels", so to speak. I want it to. It's really neat to have a finite-volume boundaryless universe. But we have to wait and see.
 
  • #7
Yes, this result is freakin cool. Of course, any result that says one way or another what shape and size our universe may be is freakin cool!
 
  • #8
From page 14 of the paper -
"... The tightest constraint on the mean curvature that we consider comes from combining the CMB, H0, and BAO datasets:
Ωk = −0.0059 ± 0.0040. (21)
While the CMB+BAO constraint shows a 2.0 sigma preference for k < 0, the significance of this preference decreases as more data are added. The tightest constraint, coming from CMB+H0+BAO, is consistent with zero mean curvature at 1.5 sigma. ..."
The case for a closed universe looks promising but, not yet compelling, IMO.
 
  • #9
Infinite universe is much more beautiful mathematically.
 
  • #10
Dmitry67 said:
Infinite universe is much more beautiful mathematically.
Speak for yourself, I find that idea mathematically ugly, something of an awkward monstrosity. No "multiverse" fantasies required if you assume S3. Sweet, compact. Nice fit to observation. Bounces coherently. Etc. Unquestionably more beautiful :biggrin:

But we'll see which way Nature herself inclines.
 
  • #11
marcus said:
See equation 21 on page 14, of http://arxiv.org/pdf/1210.7231v1.pdf

But that result is obtained by cherry-picking the combination of data sets which gives you the highest signal. That is just cheating, unless you have a good reason not to trust the local H0 observations.
 
  • #12
clamtrox said:
But that result is obtained by cherry-picking the combination of data sets which gives you the highest signal. That is just cheating, unless you have a good reason not to trust the local H0 observations.

I think we got the opposite sense from reading the same words. You might try reading what they said again and see if you get the same message that I do. What I read is that they got the result in equation 21 (which is what we are talking about) by NOT cherrypicking but by including ALL the types of data INCLUDING "local H0 observations", and also BAO which is also a lower-z type observation.

==quote page 14 of SPT report==
CMB lensing enables an independent constraint on curvature, although the most powerful curvature constraints still come from combining CMB data with other low-redshift probes (e.g., H0, BAO). The curvature constraint using CMB+H0 data is Ωk = 0.0018 ± 0.0048, while the constraint using CMB+BAO data is Ωk = −0.0089 ± 0.0043. The tightest constraint on the mean curvature that we consider comes from combining the CMB, H0 , and BAO datasets:
Ωk =−0.0059±0.0040. (21)
==endquote==

The tightest constraint, they say, came from including all three types and that is what equation 21 is based on. It is how it should be if the data is sound, including more gives more reliable results, and it represents scientific integrity (rather than "cheating") to include more. It seems to be the opposite of "cherrypicking." Do you understand my reasoning here?
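As a toy illustration of why including more datasets tightens rather than loosens a constraint, one can naively combine the two quoted measurements with inverse-variance weighting. This sketch treats them as independent Gaussians, which they are not (they share the CMB data), so it does not reproduce the paper's joint value of −0.0059 ± 0.0040; it only shows the mechanics:

```python
# The two quoted constraints: Omega_k(CMB+H0) and Omega_k(CMB+BAO)
measurements = [(0.0018, 0.0048), (-0.0089, 0.0043)]

# Naive inverse-variance combination (ignores correlations between datasets)
weights = [1.0 / s**2 for _, s in measurements]
mean = sum(w * x for (x, _), w in zip(measurements, weights)) / sum(weights)
sigma = (1.0 / sum(weights)) ** 0.5
print(f"naive combined Omega_k = {mean:.4f} +/- {sigma:.4f}")
```

The combined error bar comes out smaller than either input, which is the general point: adding sound data tightens the constraint. The paper's full joint likelihood, which handles the shared CMB information properly, lands at a somewhat different central value.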
 
  • #13
I found this thread after reading https://www.physicsforums.com/showthread.php?t=662146, where marcus reiterated the conclusion that a combined fit suggests nonzero mean curvature. This is an old thread, but my post makes more sense here.

It's important to look at the most recent SPT paper 1212.6267 to get a better handle on this. There, they do likelihood estimates for various extensions of [itex]\Lambda\text{CDM}[/itex] models. Table 2 presents the one-parameter model where mean curvature is added. The improvement in fit provided by curvature is completely negligible for WMAP+SPT and substantially below 2[itex]\sigma[/itex] for WMAP+SPT+BAO+[itex]H_0[/itex]. This is marginally insignificant.

The next set of extensions involves non-zero neutrino masses, for which we have independent confirmation. There, both combined data sets also show less than 2[itex]\sigma[/itex] improvements. In particular, while WMAP+SPT shows a marginally insignificant improvement for adding [itex]m_\nu[/itex], the addition of the low-redshift data raises the significance by a considerable fraction. Furthermore, if we look at the 2-parameter extensions that are included in that table, the neutrino mass plus number of effective neutrino species model appears to be significant above 2[itex]\sigma[/itex] when low [itex]z[/itex] is added. While it's not in that table, the 2-parameter model with neutrino mass and curvature is discussed at the bottom of section 6.3. We're told that adding curvature to the neutrino mass extension "reduces the preference for massive neutrinos to 1.7σ."

Finally, it's important to look at section 4, where they discuss the consistency between the WMAP+SPT and low-redshift BAO+[itex]H_0[/itex] data sets. An important graphic is Fig. 2, showing contours for the dilated comoving distance vs the Hubble constant. In the rightmost graph, the colored bubble is WMAP+SPT, while the BAO+[itex]H_0[/itex] data is in greyscale. We see that there is a greater than 2σ gap between the two combined data sets. The paper remarks that "We find the apparent tension significant enough in some model spaces, including [itex]\Lambda\text{CDM}[/itex], to suggest caution in interpretation of the results. However, in no model spaces is the significance sufficient to rule out statistical fluctuations, and we have no evidence for either systematic biases or underestimated uncertainties in the data."

It seems that the low statistical significance of the mean curvature extensions combined with the apparent problems in combining these data sets suggests that it's much too early to conclude that nonzero mean curvature is preferred.
 
  • #14
My point is it's much too early to pretend that perfectly flat exactly zero curvature is preferred
marcus said:
But we'll see which way Nature herself inclines.

I certainly wouldn't "conclude" either way, at this point! I'm especially skeptical of the flat infinite universe. And I hope you are too, fzero, a little. :biggrin:
 
  • #15
Reading the Hou paper, the thing that struck me most was how much degeneracy remains in the parameters. If a non-zero neutrino mass and the six-sigma detection of a running spectral index are accepted, I think the detection of curvature virtually disappears, but that's not a combination they analyse. The value of Neff also seems to come out closer to 3.7 in WMAP and Hou, and throwing that into the mix complicates it further.

One thing is clear, patience is required!
 
  • #16
fzero said:
It seems that the low statistical significance of the mean curvature extensions combined with the apparent problems in combining these data sets suggests that it's much too early to conclude that nonzero mean curvature is preferred.

I disagree; it is purely statistical common sense that a not-exactly-zero spatial curvature (no matter how small the deviation) is overwhelmingly preferred over an exactly zero curvature.

Also I have to admit, like Marcus, that I'm geometrically attracted towards the hypersphere, maybe influenced by Einstein's early model that had this spatial geometry (of course his had constant radius, unlike what would be the case with our universe).
 
  • #17
TrickyDicky said:
I disagree; it is purely statistical common sense that a not-exactly-zero spatial curvature (no matter how small the deviation) is overwhelmingly preferred over an exactly zero curvature.

Also I have to admit, like Marcus, that I'm geometrically attracted towards the hypersphere, maybe influenced by Einstein's early model that had this spatial geometry (of course his had constant radius, unlike what would be the case with our universe).

I can understand why you would believe this, but it is simply not valid scientific reasoning. For any experiment, we must define a hypothesis and a null hypothesis. In the case here, the hypothesis is that the curvature is non-zero, while the null hypothesis is that the curvature is zero. I argue that the data does not confirm that the curvature is non-zero with statistical significance, therefore we must default to the null hypothesis that there is no curvature.

We cannot infer from a null result that our hypothesis is nevertheless true, but just too small an effect to have measured. "Common sense" should not be confused with the scientific method, instead it usually involves the absence of scientific reasoning. While it is correct to say that the data do not rule out a small curvature, it is incorrect to say that the data suggest a non-zero curvature. This is an extremely important point that one of the experimentalists around here would do much more justice to.
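The statistical point above can be made concrete: the combined constraint Ωk = −0.0059 ± 0.0040 sits only about 1.5σ from the flat null hypothesis, which under a Gaussian approximation corresponds to a two-sided p-value of roughly 0.14 (a quick sketch, not the paper's own likelihood analysis):

```python
import math

# How far from zero is the combined constraint Omega_k = -0.0059 +/- 0.0040?
value, sigma = -0.0059, 0.0040
z = abs(value) / sigma  # standard deviations from the null (flat) hypothesis
# Two-sided Gaussian p-value; particle physics convention wants ~5 sigma
# before calling something a discovery
p = math.erfc(z / math.sqrt(2))
print(f"z = {z:.2f} sigma, two-sided p = {p:.2f}")
```

A p-value near 0.14 means a fluctuation at least this large occurs by chance about one time in seven even if the universe is exactly flat, which is why this does not count as evidence of curvature.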

My specific objection was to the part of marcus' statement

marcus said:
You can't take for granted "flat and infinite". Most recent batch of CMB data (SPT) suggested not flat, and finite.

that I've marked in bold. I don't have a problem with the first sentence there. The version that marcus posted in reply,

marcus said:
My point is it's much too early to pretend that perfectly flat exactly zero curvature is preferred

isn't the most appropriate way to phrase things, for the reasons I explained above. The issue is not whether exactly zero curvature is preferred, since it is the null hypothesis. The issue is whether we have evidence of non-zero curvature, which we do not.
 
  • #18
Out of all possible numbers, pretending that the curvature is EXACTLY zero is a really radical assumption. That has always been the hypothesis that one would look for evidence to support. The null hypothesis is, of course, that it is not exactly zero.

I think there is insufficient reason to claim exact zero perfect flat. As I said it is much too early to assume that.

I have never claimed that the curvature is nonzero, although I've pointed out that some observations SUGGEST this. I just don't think there is a valid scientific reason to believe exact zero out of all the things it could be.

Back in October the SPT report had an Omega_k confidence interval that was all on the negative side! It did not even include zero. Of course other studies have published Omega_k confidence intervals that include zero and some positive territory.
But in recent years they've mostly been lopsided (big on the negative side of zero). That PROVES nothing. But it is suggestive when it happens repeatedly with several different studies.

So, we will wait and see.
 
Last edited:
  • #19
TrickyDicky said:
I disagree; it is purely statistical common sense that a not-exactly-zero spatial curvature (no matter how small the deviation) is overwhelmingly preferred over an exactly zero curvature.

Also I have to admit, like Marcus, that I'm geometrically attracted towards the hypersphere, maybe influenced by Einstein's early model that had this spatial geometry (of course his had constant radius, unlike what would be the case with our universe).

Hi TD, I agree about the overwhelming statistical common sense. Paul Steinhardt in his recent Perimeter talk (google "steinhardt pirsa") made the point that the usual inflation theories do NOT support exact zero curvature. They only suggest near zero. And we certainly seem, from the observation evidence, to be looking at near zero.

I guess the hypersphere model must appeal aesthetically/philosophically to quite a few us, who nevertheless try not to let it influence what we believe or assume to be the case. I have no belief either way and assume flat in calculations because the numerical difference between that and near flat is not enough to matter in what I do.
 
Last edited:
  • #20
Had one of those experiments picked out a nonzero spatial curvature, the theory of inflation would either be wrong, or finely tuned through many orders of magnitude.

Therefore given the evidence of lambdaCDM as well as the inflationary paradigm, the overwhelming null hypothesis is that it is zero to within the sensitivity of any conceivable Earth based experiment.

You can turn this around the other way. Without inflation, given the almost exact observed flatness, you are looking at a fine-tuning on the order of one part in 10^60 in your initial conditions at the Planck epoch, assuming standard FRW evolution.

Generally speaking, when you are confronted with a dimensional problem of that magnitude, it usually means that there is a mechanism at play to fix the sensitivity to initial conditions. That mechanism of course is inflation, or at the very least something that has an exponential character.
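The "one part in 10^60" figure can be roughly reproduced from the standard flatness-problem scaling: |Ω − 1| = |k|/(aH)², which grows like a² during radiation domination and like a during matter domination. This is only a back-of-envelope sketch; the epoch values assumed here (a_Planck ~ 1e-32, z_eq ~ 3300) are rough, and the answer shifts a few orders of magnitude with them:

```python
# Rough epochs assumed for illustration (not precise values):
a_planck, a_eq, a_now = 1e-32, 3e-4, 1.0  # Planck era, matter-radiation equality, today

# |Omega - 1| grows as a^2 in the radiation era, then as a in the matter era
growth = (a_eq / a_planck) ** 2 * (a_now / a_eq)

omega_now = 0.01  # |Omega - 1| today, roughly the current observational bound
omega_planck = omega_now / growth
print(f"|Omega - 1| at the Planck epoch had to be ~ {omega_planck:.0e}")
```

With these inputs the growth factor comes out near 10^60, so the initial curvature had to be tuned to something like one part in 10^60 or smaller, which is the fine-tuning Haelfix refers to.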
 
  • #21
fzero said:
I can understand why you would believe this, but it is simply not valid scientific reasoning. For any experiment, we must define a hypothesis and a null hypothesis. In the case here, the hypothesis is that the curvature is non-zero, while the null hypothesis is that the curvature is zero. I argue that the data does not confirm that the curvature is non-zero with statistical significance, therefore we must default to the null hypothesis that there is no curvature.

We cannot infer from a null result that our hypothesis is nevertheless true, but just too small an effect to have measured. "Common sense" should not be confused with the scientific method, instead it usually involves the absence of scientific reasoning. While it is correct to say that the data do not rule out a small curvature, it is incorrect to say that the data suggest a non-zero curvature. This is an extremely important point that one of the experimentalists around here would do much more justice to.

My specific objection was to the part of marcus' statement



that I've marked in bold. I don't have a problem with the first sentence there. The version that marcus posted in reply,



isn't the most appropriate way to phrase things, for the reasons I explained above. The issue is not whether exactly zero curvature is preferred, since it is the null hypothesis. The issue is whether we have evidence of non-zero curvature, which we do not.
What I was trying to say is that we have to take into account the "a priori" chances before we evaluate the statistical data in order to interpret it correctly.
And "a priori" we know that an exactly flat curvature is impossible to measure given the limits of our detectors. So we can only ever hope to measure a definite nonzero curvature; a zero curvature cannot be measured. And we also know that while 0 is a single value, the possible nonzero curvature values are infinite, as there are infinitely many values between 0 and whatever value of curvature. And note that no matter how small the curvature, in the case it was positive, for instance, it would immediately mean a change from an infinite to a finite universe, so it does matter.
So the "a priori" knowledge leads us to consider nonzero curvature as the null hypothesis.
Now Planck has pushed the upper limit on curvature a bit closer to zero, but the "a priori" probabilities are still infinitely many possible nonzero values against the single value 0, and they will remain so unless we finally find a nonzero value, regardless of how much we improve our detectors. We now have accuracy to 1/100, which might look great, but compare 100 with infinity.
I claim this is valid scientific reasoning.
 
  • #22
Haelfix said:
Had one of those experiments picked out a nonzero spatial curvature, the theory of inflation would either be wrong, or finely tuned through many orders of magnitude.

Therefore given the evidence of lambdaCDM as well as the inflationary paradigm, the overwhelming null hypothesis is that it is zero to within the sensitivity of any conceivable Earth based experiment.

You can turn this around the other way. Without inflation, given the almost exact observed flatness, you are looking at a finetuning on the order of one part in 10^60 in your initial conditions at the Planck epoch, assuming standard FRW evolution.

Generally speaking, when you are confronted with a dimensional problem of that magnitude, it usually means that there is a mechanism at play to fix the sensitivity to initial conditions. That mechanism of course is inflation, or at the very least something that has an exponential character.
This is a strange way to reason about the scientific method.
We are doing observations precisely to try to discern whether models like inflation are correct. If you start with the assumption that inflation must be correct, you are certainly not using the scientific method. Certainly you wouldn't be using the observational data to try and falsify the inflationary model, right?

IOW, it is fine to use inflation as "a priori" knowledge to try and analyze scientifically a different cosmological issue, but not to validate inflation itself.
 
  • #23
Hi TrickyDicky.

TrickyDicky said:
What I was trying to say is that we have to take into account the "a priori" chances before we evaluate the statistical data in order to interpret it correctly.
And "a priori" we know that an exactly flat curvature is impossible to measure given the limits of our detectors. So we can only ever hope to measure a definite nonzero curvature; a zero curvature cannot be measured. And we also know that while 0 is a single value, the possible nonzero curvature values are infinite, as there are infinitely many values between 0 and whatever value of curvature. And note that no matter how small the curvature, in the case it was positive, for instance, it would immediately mean a change from an infinite to a finite universe, so it does matter.
So the "a priori" knowledge leads us to consider nonzero curvature as the null hypothesis.

If you really want to go down that road (and you are making a classic mistake regarding putting a probability measure over the real numbers), then you have to redefine what you mean by spatial flatness in the first place, since the assumptions of the FRW metric break down.

Remember, this model of the universe is not exact either. We use it to model structures that are much larger than some characteristic distance scale. And if we start talking about deviations from flatness of one part in 10^90, then you start having to talk about deviations due to much larger perturbations, like galaxies.
 
  • #24
Haelfix said:
Hi TrickyDicky.

If you really want to go down that road (and you are making a classic mistake regarding putting a probability measure over the real numbers), then you have to redefine what you mean by spatial flatness in the first place, since the assumptions of the FRW metric break down. Remember, this model of the universe is not exact either. We use it to model structures that are much larger than some characteristic distance scale. And if we start talking about deviations from flatness of one part in 10^90, then you start having to talk about deviations due to much larger perturbations, like galaxies.
Hi Haelfix, I'm not aware of any mistake; we are talking about a continuous measure. A different thing is using a model as a constraint on the possible deviations from flatness, which is what I think you mean here. But then again we run the risk, as commented above, of assuming what we want the observations to check.
 
  • #25
TrickyDicky said:
IOW, it is fine to use inflation as "a priori" knowledge to try and analyze scientifically a different cosmological issue, but not to validate inflation itself.

Which is what we are doing here. Although of course, all of scientific reasoning is ultimately circular. What matters is that it is self consistent with data.

Anyway, the evidence for inflation comes from multiple independent experiments and physical consequences, so it is OK to use it to in fact put some sort of prior on whether any new Earth-based experiment will measure nonzero curvature (again, to within the conceivable sensitivity). Indeed, such a departure would be inconsistent with inflation and therefore inconsistent with all the existing data that leads up to the theory.
 
  • #26
marcus said:
Out of all possible numbers, pretending that the curvature is EXACTLY zero is a really radical assumption. That has always been the hypothesis that one would look for evidence to support. The null hypothesis is, of course, that it is not exactly zero.

That the curvature is non-zero is the alternative hypothesis. The null hypothesis has to be the simplest one consistent with available data. In the absence of any of the datasets WMAP, SPT, BAO, ##H_0##, etc., we have no evidence of non-zero curvature. So if we are going to use these data sets to test the hypothesis that the curvature is non-zero, we are necessarily testing against the null hypothesis that the curvature is zero.

I think there is insufficient reason to claim exact zero perfect flat. As I said it is much too early to assume that.

It is always suspicious when some parameter is small or zero in the absence of a symmetry, so I would not claim that we know there is exactly zero curvature either. But that is the simplest model that fits the data within statistical significance.

I have never claimed that the curvature is nonzero, although I've pointed out that some observations SUGGEST this. I just don't think there is a valid scientific reason to believe exact zero out of all the things it could be.

My point is that it is incorrect to say that any of these observations suggest non-zero curvature. One could fill volumes with the number of ##2\sigma## observations that turned out to be nothing in particle physics history alone. I am not saying that zero curvature has been demonstrated either, but that is the best model to use for independent purposes.

Back in October the SPT report had an Omega_k confidence interval that was all on the negative side! It did not even include zero. Of course other studies have published Omega_k confidence intervals that include zero and some positive territory.
But in recent years they've mostly been lopsided (big on the negative side of zero). That PROVES nothing. But it is suggestive when it happens repeatedly with several different studies.

So, we will wait and see.

None of these results were statistically significant. The more statistically significant non-zero curvature fits were in tension with things like neutrino masses. And the separate datasets themselves are in tension, so it is problematic to draw conclusions by combining them.

I am not saying that the universe is flat. I am saying that it is incorrect to insist that we really have evidence that it is not flat.


TrickyDicky said:
What I was trying to say is that we have to take into account the "a priori" chances before we evaluate the statistical data in order to interpret it correctly.
And "a priori" we know that an exactly flat curvature is impossible to measure given the limits of our detectors. So we can only ever hope to measure a definite nonzero curvature; a zero curvature cannot be measured. And we also know that while 0 is a single value, the possible nonzero curvature values are infinite, as there are infinitely many values between 0 and whatever value of curvature. And note that no matter how small the curvature, in the case it was positive, for instance, it would immediately mean a change from an infinite to a finite universe, so it does matter.
So the "a priori" knowledge leads us to consider nonzero curvature as the null hypothesis.
Now Planck has pushed the upper limit on curvature a bit closer to zero, but the "a priori" probabilities are still infinitely many possible nonzero values against the single value 0, and they will remain so unless we finally find a nonzero value, regardless of how much we improve our detectors. We now have accuracy to 1/100, which might look great, but compare 100 with infinity.
I claim this is valid scientific reasoning.

The scientific method has the goal of finding the simplest model that supports all available data. If we choose the parameters correctly, setting a parameter to zero makes the model simpler. It does not matter that someone throwing a dart at a line of values would tend to find something nonzero. Until we actually measure a parameter to be nonzero, that component of the model does not enter the textbook theory.

For example, there are an infinite number of terms that one can add as corrections to Einstein's equation. A priori, there's no good reason for the coefficients of any of them to vanish, but there are arguments that the coefficients are too small to measure. If you took the first ten terms and added them to ##\Lambda##CDM, we could use the existing datasets to fit their coefficients and we would quite possibly find non-zero parameters, with enormous errorbars. No one would suggest that you should actually use the resulting fit to draw any physical conclusions.
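The point that extra free parameters essentially always improve a fit, regardless of physical content, can be shown with a completely made-up toy example: fitting a constant versus a constant-plus-slope model to data whose true slope is zero. The least-squares chi-square never increases when a parameter is added; whether the improvement means anything is a separate significance question (under Wilks' theorem, for one extra parameter the significance is roughly sqrt of the chi-square drop, in sigma):

```python
import random

# Toy data: a constant with Gaussian noise -- the true slope is exactly zero
random.seed(42)
xs = [i / 10 for i in range(20)]
ys = [1.0 + random.gauss(0, 0.1) for _ in xs]
sigma = 0.1

# Model 1: constant only (best fit is the sample mean)
c = sum(ys) / len(ys)
chi2_const = sum(((y - c) / sigma) ** 2 for y in ys)

# Model 2: constant + slope (ordinary least-squares line)
n = len(xs)
sx, sy = sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))
b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
a = (sy - b * sx) / n
chi2_line = sum(((y - (a + b * x)) / sigma) ** 2 for x, y in zip(xs, ys))

# The line family contains the constant family, so chi2 can only go down
print(f"delta chi^2 = {chi2_const - chi2_line:.2f} (significance ~ sqrt of this)")
```

A small chi-square drop from a spurious slope is exactly analogous to a small curvature preference appearing in a fit: the fitted value is nonzero, but that alone is not evidence for the extra parameter.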
 
  • #27
Haelfix said:
Had one of those experiments picked out a nonzero spatial curvature, the theory of inflation would either be wrong, or finely tuned through many orders of magnitude...

That SOUNDS like you are saying what you are in fact not saying :biggrin::rolleyes:
Usual inflation does not get you zero curvature, so a small positive mean curvature would be consistent with inflation and would not require any extra fine tuning.
It SOUNDS like you are contradicting that. But I'm sure you know.

What you said is if one of THOSE experiments had picked out... They didn't have the resolution to pick out a very small positive curvature. So that is not what we are talking about. If THOSE experiments had picked out a positive curvature it would necessarily have been awkwardly large, and if the result were confirmed it would have made trouble for inflation.

But nothing you said negates the idea that there could , in fact, be a small overall positive curvature which would be perfectly compatible with the usual inflation.

I just want to point that out casual readers aren't misled.
 
  • #28
It's actually more accurate to say that standard minimal, slow-roll, single-field inflation with 60+ e-folds predicts zero spatial curvature to within the viability and approximations of the model (see e.g. Peacock p. 326).

If we start talking about curvature effects that survive through such an inflationary regime, either we are talking about values that are so parametrically small that they are not included within the regime of validity of standard FRW universes (in which case it doesn't make sense to talk about global curvature in this way), or we are talking about values that might be detectable, but then would require an incredible and delicate conspiracy within the initial conditions of the initial inflationary patch.

Now it is possible to cook up more complicated and fantastic models that predict different values, but again that's always possible. You can fit anything in science with an arbitrarily contrived model, but you must always pick the simplest baseline first.
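The "60 e-folds" statement connects to curvature through the standard scaling: |Ω − 1| ∝ 1/(aH)², and during slow-roll inflation H is nearly constant while a grows by e^N, so any pre-inflation curvature is diluted by a factor of e^(−2N). A one-line sketch of that scaling:

```python
import math

N = 60  # e-folds of slow-roll inflation, the "60 efolds+" quoted above
# |Omega - 1| scales as 1/(aH)^2; with H ~ constant and a growing by e^N,
# pre-existing curvature is suppressed by exp(-2N), roughly 1e-52 for N = 60
dilution = math.exp(-2 * N)
print(f"pre-inflation curvature diluted by ~ {dilution:.0e}")
```

This is why any curvature surviving a standard inflationary epoch is expected to be far below observable levels, absent the fine-tuned initial conditions Haelfix describes.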
 
  • #29
The basic topic of discussion in this thread is what I cited in post#1.
We still don't BELIEVE any of this about the curvature because there is quite a variety among the results reported in the recent studies. There is however one October 2012 report that gave a confidence interval which did not include zero.

This could be what Haelfix was talking about when he said "if one of THOSE studies had picked out..."

And in fact that particular recent study did determine a range of overall mean curvature which was sufficiently far from zero that it would probably not sit well with inflation.

But the jury is still out...
==quote post#1==
http://arxiv.org/pdf/1210.7231v1.pdf
Scroll to Table 3 on page 12 and look at the rightmost column which combines the most data:
Code:
Ω_Λ     0.7152 ± 0.0098
H_0     69.62 ± 0.79
σ_8     0.823 ± 0.015
z_EQ    3301 ± 47

Perhaps the most remarkable thing is the tilt towards positive overall curvature, corresponding to a negative value of Ω_k.

For that, see equation (21) on page 14:
Ω_k = −0.0059 ± 0.0040.
Basically they are saying that with high probability you are looking at a spatially finite universe with slight positive curvature. The flattest it could be, IOW, is Ω_k = −0.0019, with
Ω_total = 1.0019
And a radius of curvature 14/sqrt(.0019) ≈ 320 billion LY.
Plus they are saying Ω_total COULD be as high as 1.0099, which would mean
radius of curvature 14/sqrt(.0099) ≈ 140 billion LY.

So the traditionally favored idea of perfect flatness and spatial infinitude is hanging on by its 2 sigma fingernails. It is still "consistent" with the data at a 2 sigma level.
==endquote==
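The arithmetic in the quote is easy to reproduce. For a closed FRW model the radius of curvature is R = R_Hubble/sqrt(Ω_total − 1) = R_Hubble/sqrt(−Ω_k). A quick sketch (the 14 Gly Hubble radius and the interval endpoints are the ones quoted above):

```python
import math

R_HUBBLE = 14.0  # Hubble radius in billions of light years, as quoted above

def curvature_radius(omega_k):
    """Radius of curvature of a closed FRW universe (omega_k < 0), in Gly."""
    return R_HUBBLE / math.sqrt(-omega_k)

# Endpoints of the SPT interval Omega_k = -0.0059 +/- 0.0040:
print(curvature_radius(-0.0019))  # flattest allowed case, ~321 Gly
print(curvature_radius(-0.0099))  # most curved allowed case, ~141 Gly
```

The central value also reproduces the paper's significance statement: 0.0059/0.0040 ≈ 1.5, i.e. consistent with zero curvature at 1.5σ, as quoted at the top of the thread.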
And of course that particular study, the latest SPT, could have something wrong with it! I'm reacting more to the overall collective drift I see in the latest reports---nothing definite or final, but consistent with a small positive curvature (i.e. a small negative Omega_k) which MIGHT still be small enough to be compatible with inflation.
 
  • #30
marcus said:
So the traditionally favored idea of perfect flatness and spatial infinitude is hanging on by its 2 sigma fingernails. It is still "consistent" with the data at a 2 sigma level.

It's important to remember that ##2\sigma## is also the measure of the inconsistency between the high and low-redshift data sets. Did any of the Planck papers suggest that this tension has eased? I have only seen the comment in the parameters paper 1303.5076 that says that the Hubble and matter density fits were in excellent agreement with BAO.
 
  • #31
fzero said:
The scientific method has the goal of finding the simplest model that supports all available data.
Usually it is stated the other way around: starting from the hypothetical models, we look for the observational data that leads us either to reject each hypothesis or to maintain it.
fzero said:
If we choose the parameters correctly, setting a parameter to zero makes the model simpler.
That is called tweaking (fine-tuning) parameters to simplify the models or to adjust them to a previous prejudice, but I'm not sure that is exactly what the scientific method is about. It is no doubt a practical way to manage data.
 
  • #32
marcus said:
==quote post#1==
http://arxiv.org/pdf/1210.7231v1.pdf
Scroll to Table 3 on page 12 and look at the rightmost column which combines the most data:
Code:
Ω_Λ     0.7152 ± 0.0098
H_0     69.62 ± 0.79
σ_8     0.823 ± 0.015
z_EQ    3301 ± 47

Perhaps the most remarkable thing is the tilt towards positive overall curvature, corresponding to a negative value of Ω_k.

For that, see equation (21) on page 14:
Ω_k = −0.0059 ± 0.0040.
Basically they are saying that with high probability you are looking at a spatially finite universe with slight positive curvature.
==endquote==
And of course that particular study, the latest SPT, could have something wrong with it!

I think it is important also to read the associated paper by Hou et al:

http://arxiv.org/abs/1212.6267

"These extensions have similar observational consequences and are partially degenerate when considered simultaneously. These degeneracies can weaken or enhance the apparent deviation of any single extension from the ΛCDM model. Of the 6 one-parameter model extensions considered, we find the CMB data to have the largest statistical preference for running with -0.046 < dn_s/d ln k < -0.003 at 95% confidence. This preference for dn_s/d ln k strengthens to 2.7σ for the combination of CMB+BAO+H0. Running of this magnitude is difficult to explain in the context of single-field, slow-roll inflation models. When varying the effective number of massless neutrino species, we find Neff = 3.62 ± 0.48 for the CMB data. Adding H0 and BAO measurements tightens the constraint to Neff = 3.71 ± 0.35, 1.9σ above the expected value for three neutrino species. Larger values of Neff relieve the mild tension between the CMB, H0, and BAO datasets in ΛCDM."

There is significant evidence from other sources for Σm_ν > 0. The value of Neff ~ 3.7 is similar to that found by WMAP. The next most favoured extension seems to be a running spectral index, as stated above, and after that perhaps a different value of the helium fraction, Y_p. Curvature seems to be some way down the list, and once the degeneracies are taken into account the evidence seems to weaken, though that's just my reading of the paper; they don't specifically look at the four-parameter combination of running with Σm_ν > 0 and Neff ~ 3.7 plus a variable curvature.
 
  • #33
I'm just having a first look through the Planck results but two graphs have jumped out at me.

http://arxiv.org/abs/1303.5076

For curvature, look at the top row of figure 21 on page 36.

For the degeneracy of ns versus Neff and Yp see figure 24 on page 39.

There is a complete appendix starting on page 59 which discusses the discrepancy between the Planck results and those of Story and Hou from the SPT in that Planck finds no need for any 'new physics'.
 
  • #34
Figure 21 on page 36 is nice!

There is also section 6.2.3 Curvature around page 40, where they conclude that in view of equation (68) the universe is spatially flat to within 1%. Nothing new there; the WMAP reports reached similar nearly-flat conclusions, so it is more of a confirmation of what has become accepted wisdom.

Here are equations 68a and 68b that they stress in the concluding paragraph of their section on Curvature:
==quote==
...by the addition of BAO data. We then find
100Ω_K = −0.05 (+0.65, −0.66)   (95%; Planck+WP+highL+BAO),   (68a)

100Ω_K = −0.10 (+0.62, −0.65)   (95%; Planck+lensing+WP+highL+BAO).   (68b)
==endquote==
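Unpacking those asymmetric error bars for 68a: the 95% range for 100Ω_K runs from −0.05 − 0.66 = −0.71 up to −0.05 + 0.65 = +0.60, i.e. Ω_total between roughly 0.994 and 1.007, which is where the "flat to within 1%" statement comes from. A minimal sketch of the conversion (the helper name is mine, just for illustration):

```python
def omega_total_range(central, err_plus, err_minus):
    """Convert 100*Omega_K = central (+err_plus, -err_minus) into an
    interval for Omega_total = 1 - Omega_K."""
    ok_lo = (central - err_minus) / 100.0  # most negative Omega_K allowed
    ok_hi = (central + err_plus) / 100.0   # most positive Omega_K allowed
    # Omega_total = 1 - Omega_K, so the endpoints swap:
    return (1.0 - ok_hi, 1.0 - ok_lo)

# Equation (68a): 100*Omega_K = -0.05 (+0.65, -0.66)
print(omega_total_range(-0.05, 0.65, 0.66))  # roughly (0.994, 1.0071)
```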

Note that the central values are, as usual, negative. They've been coming out mostly negative for years, I guess everyone realizes. For what it's worth. :smile:

==quote==
These limits are consistent with (and slightly tighter than) the results reported by Hinshaw et al. (2012) from combining the nine-year WMAP data with high resolution CMB measurements and BAO data.
==endquote==
http://arxiv.org/abs/1303.5076

The idea of near but not necessarily exact flatness is appealing for several reasons, and by now I would imagine it is widely accepted. For one thing it makes calculation easier---you get to use perfect flatness because (without being assured that it is really the case) it is such a good approximation!

People who, for philosophical/aesthetic reasons, prefer to imagine the universe as exactly flat, infinite and containing an infinite amount of matter and energy, can picture it according to their taste. Others, who like an infinite panorama of bubbles, can imagine things according to their taste. And those who for different philosophical/aesthetic reasons, are more comfortable with the large hypersphere picture, can think along those lines with equal justification. The "near flat" conclusion accommodates everybody without prejudice. :biggrin:

The main thing though, to repeat, is that you get to *calculate* using hypothetical exact flatness.

And FWIW, in study after study the central values of the Omega_k confidence intervals keep coming out negative: in most cases before the addition of late-universe observations like BAO (i.e. galaxy counting, census-taking of the more contemporaneous universe), but also, as in equations 68a and 68b, with the inclusion of BAO data. I don't think we have any idea what that means, if anything. Maybe someone has a guess.
 
Last edited:
  • #35
Maybe I should mention that early-universe observation (like the Planck mission) is viewed as the main testing arena, in other words a proving ground, for Quantum Gravity.
This paper, which came out a couple of days ago, exemplifies that.

http://arxiv.org/abs/1303.4989
Loop Quantum Gravity and the Planck Regime of Cosmology
Abhay Ashtekar
(Submitted on 20 Mar 2013)
The very early universe provides the best arena we currently have to test quantum gravity theories. The success of the inflationary paradigm in accounting for the observed inhomogeneities in the cosmic microwave background already illustrates this point to a certain extent because the paradigm is based on quantum field theory on the curved cosmological space-times. However, this analysis excludes the Planck era because the background space-time satisfies Einstein's equations all the way back to the big bang singularity. Using techniques from loop quantum gravity, the paradigm has now been extended to a self-consistent theory from the Planck regime to the onset of inflation, covering some 11 orders of magnitude in curvature. In addition, for a narrow window of initial conditions, there are departures from the standard paradigm, with novel effects, such as a modification of the consistency relation involving the scalar and tensor power spectra and a new source for non-Gaussianities. Thus, the genesis of the large scale structure of the universe can be traced back to quantum gravity fluctuations in the Planck regime. This report provides a bird's eye view of these developments for the general relativity community.
23 pages, 4 figures. Plenary talk at the Conference: Relativity and Gravitation: 100 Years after Einstein in Prague. To appear in the Proceedings to be published by Edition Open Access. Summarizes results that appeared in journal articles [2-13]

According to Ashtekar, LQG can be used to model an era of expansion before inflation in which conditions might arise that affect how it plays out in novel and measurable ways. In papers leading up to this one synonyms like "pre-inflationary era" have been used in place of "Planck regime".
 

1. What is the South Pole Telescope (SPT) and how does it contribute to the study of cosmic models?

The South Pole Telescope is a ground-based telescope located at the Amundsen-Scott South Pole Station in Antarctica. It is designed to study the cosmic microwave background (CMB) radiation, which is the oldest light in the universe. By measuring the CMB, the SPT helps scientists understand the structure and evolution of the universe, including the parameters of different cosmic models.

2. What are the new cosmic model parameters that have been derived from the SPT data?

The new cosmic model parameters derived from the SPT data include the Hubble constant, the density of dark matter, and the density of dark energy. These parameters are important in determining the age, expansion rate, and composition of the universe.

3. How does the SPT data improve upon previous measurements of cosmic model parameters?

The SPT data provides more precise and accurate measurements of cosmic model parameters compared to previous experiments. This is due to the SPT's location in the South Pole, which allows for clearer and more direct observations of the CMB radiation.

4. How do the new cosmic model parameters affect our understanding of the universe?

The new cosmic model parameters help refine and validate our understanding of the universe and its evolution. By accurately measuring these parameters, scientists can better understand the distribution of matter and energy in the universe and how it has changed over time.

5. What are the potential implications of the new cosmic model parameters on future research and discoveries?

The new cosmic model parameters derived from the SPT data can have significant implications on future research and discoveries in the field of cosmology. They can help guide and inform future experiments and simulations, leading to a better understanding of the fundamental nature of the universe.
