An article stating Cosmology is in crisis. Is it in crisis?

  • Thread starter: Tanelorn
  • Tags: article, Cosmology


I don't think so; cosmologists just adhere to Occam's razor. If observations fit our expectations, we assume with greater certainty that the model is correct. But scientists tend not to shy away from saying that something is wrong; after all, that's how you make a name for yourself.
I don't really think the paper justifies "crisis".

Croft said:
On average, results in the last 10 years are consistent with expectations, given their error bars, something which should instill confidence in future measurements. There are some signs that recent measurements of dark energy parameters are closer to the “expected” values for ΛCDM than statistically likely. These may be explainable by correlations between measurements which we have not included. On the other hand this may serve as a sign that as cosmology collaboration sizes increase carrying out more blind analyses (as in particle physics) may be a good idea.

It's a very good idea, which is why people are doing it. Even now, more than 4 years before Euclid launches or DESI starts running, people are already building mock surveys to understand the systematics and uncertainties in these measurements.

There's one thing in the paper I'm a little cautious of, however. From the data section:

Croft said:
We have made use of the NASA Astrophysics Data System 2 to generate our dataset by carrying out an automated search of publication abstracts for the years (1990-2010). We limited the search to published papers which include cosmological parameter values and their error bars in the paper abstract itself

This worries me greatly. Their cause for concern is that only 2 of 28 measurements of ##\Omega_{\Lambda}## deviate from the WMAP result by more than 1 ##\sigma##. But what is the probability that some of these papers quoted not the raw parameter constraint but the joint constraint of their data plus the CMB or other data? It's a certainty, because some of these tests require the CMB. It's a nice idea, but I'm a bit skeptical that these results are as independent as they are assumed to be; for example, WMAP7 isn't independent of WMAP5, yet they seem to have made no allowance for that. I would love to test my hypothesis, but sadly they don't publish their data or a list of the papers used. Just to give an example, I looked up the famous Eisenstein 2005 SDSS and Cole 2005 2dFGRS BAO papers to see whether the quoted parameters used the CMB.

Eisenstein said:
Independent of the constraints provided by the CMB acoustic scale, we find Ωm=0.273+/-0.025+0.123(1+w0)+0.137ΩK. Including the CMB acoustic scale, we find that the spatial curvature is ΩK=-0.010+/-0.009 if the dark energy is a cosmological constant.

Cole said:
Fitting to a ΛCDM model, assuming a primordial ns= 1 spectrum, h= 0.72 and negligible neutrino mass, the preferred parameters are Ωmh= 0.168 +/- 0.016 and a baryon fraction Ωb/Ωm= 0.185 +/- 0.046 (1σ errors). The value of Ωmh is 1σ lower than the 0.20 +/- 0.03 in our 2001 analysis of the partially complete 2dFGRS. This shift is largely due to the signal from the newly sampled regions of space, rather than the refinements in the treatment of observational selection. This analysis therefore implies a density significantly below the standard Ωm= 0.3: in combination with cosmic microwave background (CMB) data from the Wilkinson Microwave Anisotropy Probe (WMAP), we infer Ωm= 0.231 +/- 0.021.

So in my entirely statistically useless study we have 6 parameters with quoted errors: 2 rely on the WMAP cosmology and 2 are the same measurement, updated. However, Croft and Daily don't actually consider 3 of these parameters (##\Omega_{m}h## twice, ##\Omega_{b}/\Omega_{m}##), so in fact they would only get 3 parameters out of these two papers, 2 of which rely on WMAP. Assuming all of these papers are statistically independent is a weak assumption, and unless they took steps they didn't tell us about, I would be skeptical of any claims of suspicious over-consistency. You can't just take cosmological parameters and assume they're all independent; some tests quoted by Croft and Daily cannot measure these parameters without other datasets as priors, and different papers may well use the same data.
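To see why the 2-of-28 figure is flagged as surprising in the first place, here's a quick back-of-envelope binomial check (my own sketch, not from the paper), assuming Gaussian error bars and fully independent measurements:

```python
from math import comb, erf, sqrt

# Chance a single Gaussian measurement lands more than 1 sigma from the truth
p_out = 1.0 - erf(1.0 / sqrt(2.0))  # ~0.317

n, k_obs = 28, 2  # the figures above: 2 of 28 Omega_Lambda values beyond 1 sigma

# Expected number of >1-sigma outliers if all 28 measurements were independent
expected = n * p_out  # ~8.9

# Binomial probability of seeing k_obs or fewer outliers by chance
p_tail = sum(comb(n, k) * p_out**k * (1.0 - p_out)**(n - k)
             for k in range(k_obs + 1))

print(f"expected outliers: {expected:.1f}")
print(f"P(<= {k_obs} outliers | independence): {p_tail:.1e}")
```

Under independence you'd expect roughly 9 of the 28 values to sit beyond 1 ##\sigma##, and seeing 2 or fewer would be a fraction-of-a-percent fluke. The whole question is whether the independence assumption actually holds.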

This was going to be a quick post but it turned into a rant. It's a nice idea for a paper, but I don't think an automated search can be trusted; you just don't know what numbers they are quoting. I do, however, agree with the author of the blog that we should do blind analyses where possible (many big projects do already) and that data and code should be open (this paper is a key example of why), but I don't really know what he means by systems engineering (I'm not sure what it would achieve that mocks wouldn't).
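To make the independence worry concrete, here's a toy Monte Carlo (all numbers made up, purely illustrative) of what happens when a weak independent dataset is combined with a WMAP-like prior and the resulting joint constraint is then compared back to WMAP:

```python
import random
from math import sqrt

random.seed(1)

TRUTH = 0.7        # hypothetical true Omega_Lambda (illustrative only)
SIG_W = 0.02       # made-up WMAP-like error bar
SIG_X = 0.06       # made-up error bar of a weaker independent dataset
N_TRIALS = 100_000

within_1sig = 0
for _ in range(N_TRIALS):
    w = random.gauss(TRUTH, SIG_W)   # "WMAP" measurement
    x = random.gauss(TRUTH, SIG_X)   # independent measurement
    # Inverse-variance weighted joint constraint, as papers often quote
    a, b = 1.0 / SIG_W**2, 1.0 / SIG_X**2
    joint = (a * w + b * x) / (a + b)
    sig_joint = 1.0 / sqrt(a + b)
    # Compare the joint value back to "WMAP" in units of its quoted error
    if abs(joint - w) < sig_joint:
        within_1sig += 1

print(f"fraction within 1 sigma of WMAP: {within_1sig / N_TRIALS:.3f} "
      f"(would be ~0.68 if genuinely independent)")
```

Because the joint value is pulled toward the prior, nearly every trial lands within 1 ##\sigma## of "WMAP", exactly the kind of over-consistency the paper reports, with no blind-analysis failure needed to explain it.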
 
Any physics field is in a sort of crisis when it seems to be converging on a final theory that explains all observations.

Because even though the practical goal of physics is to provide good models of Nature, for many *physicists* the thrill is in the process of discovery. When there is nothing left to discover...