# An article claims cosmology is in crisis. Is it in crisis?

1. Jan 11, 2016

### Tanelorn

2. Jan 11, 2016

### newjerseyrunner

I don't think so; cosmologists just adhere to Occam's razor. If observations fit our current model, we assume with greater confidence that it's correct. But scientists tend not to shy away from saying that something is wrong; that's how you make a name for yourself.

3. Jan 11, 2016

### ruarimac

I don't really think the paper justifies "crisis".

It's a very good idea, which is why people are doing it. We're still more than 4 years before Euclid launches or DESI starts running, and people are already building mock surveys to understand the systematics and uncertainties in these measurements.

There's one thing about the paper I'm a little cautious of, however. From the data section of the paper:

This worries me greatly. Their cause for concern is that only 2 of the 28 measurements of $\Omega_{\Lambda}$ deviate from the WMAP result by more than 1$\sigma$. But what is the probability that some of these papers quoted not the raw parameter constraint but the joint constraint of their data plus the CMB or other data? It's a certainty, because some of these tests require the CMB. It's a nice idea, but I'm skeptical that these results are as independent as they assume; for example, WMAP7 isn't independent of WMAP5, yet they seem to have made no allowance for that. I would love to test my hypothesis, but sadly they don't publish their data or a list of the papers used. Just to give an example, I looked up the well-known Eisenstein et al. SDSS 2005 and Cole et al. 2dFGRS 2005 BAO papers to see whether the quoted parameters used the CMB.
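To see why that 2-of-28 figure looks anomalous under the independence assumption, here is a quick binomial sanity check (my own sketch, not from the paper; the count $n=28$ and the 1$\sigma$ threshold come from the post, the rest is standard Gaussian statistics):

```python
from math import comb, erf, sqrt

# Fraction of a Gaussian lying within +/- 1 sigma (~68.27%)
p_within = erf(1 / sqrt(2))
p_beyond = 1 - p_within  # ~31.73% chance one measurement deviates by > 1 sigma

# If the 28 measurements were independent with correctly estimated errors,
# the count beyond 1 sigma would be Binomial(28, p_beyond).
n = 28
expected = n * p_beyond  # roughly 9 measurements expected beyond 1 sigma

# Probability of seeing 2 or fewer deviate by more than 1 sigma
p_two_or_fewer = sum(
    comb(n, k) * p_beyond**k * p_within**(n - k) for k in range(3)
)

print(f"expected beyond 1 sigma: {expected:.1f}")
print(f"P(<= 2 beyond 1 sigma):  {p_two_or_fewer:.4f}")
```

Under independence this is a sub-percent probability, which is exactly why the clustering is either a genuine anomaly or, as argued above, a sign that the inputs share data.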

So, in my entirely statistically useless study, we have 6 parameters with quoted errors: 2 rely on a WMAP cosmology and 2 are the same measurement, updated. However, Croft and Dailey don't actually consider 3 of these parameters ($\Omega_{m}h$ twice, $\Omega_{b}/\Omega_{m}$), so in fact they would only get 3 parameters out of these two papers, 2 of which rely on WMAP. Assuming all of these papers are statistically independent is a weak assumption, and unless they took steps they didn't tell us about, I would be skeptical of any claims of over-correlation. You can't just take cosmological parameters and assume they're all independent: some of the tests quoted by Croft and Dailey cannot measure these parameters without other datasets as priors, and different papers may well use the same data.
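The shared-prior worry can be illustrated with a toy Monte Carlo (entirely my own sketch, not from the paper; the error bars `S_CMB` and `S_SURVEY` are made-up values for illustration). If each "independent" quoted constraint is actually the survey's own measurement combined with a common WMAP-like prior, the quoted values hug the WMAP value far more tightly than their quoted 1$\sigma$ errors suggest:

```python
import random
from math import sqrt

random.seed(1)

# Hypothetical error bars, chosen only for illustration: the CMB constraint
# is much tighter than a single survey's own measurement.
S_CMB, S_SURVEY = 0.3, 1.0
N_MEAS, N_TRIALS = 28, 2000  # 28 measurements per trial, as in the paper

def frac_within_1sigma(use_cmb_prior):
    """Average fraction of quoted values within 1 quoted sigma of the CMB value."""
    hits = 0
    for _ in range(N_TRIALS):
        w = random.gauss(0, S_CMB)            # one shared WMAP-like result
        for _ in range(N_MEAS):
            m = random.gauss(0, S_SURVEY)     # the survey's own estimate
            if use_cmb_prior:
                # Quoted number is the joint (inverse-variance) combination
                var = 1 / (1 / S_CMB**2 + 1 / S_SURVEY**2)
                v = var * (w / S_CMB**2 + m / S_SURVEY**2)
                sigma = sqrt(var)
            else:
                v, sigma = m, S_SURVEY        # genuinely independent result
            if abs(v - w) < sigma:
                hits += 1
    return hits / (N_TRIALS * N_MEAS)

indep = frac_within_1sigma(False)
contaminated = frac_within_1sigma(True)
print("independent results within 1 sigma of CMB:", round(indep, 3))
print("CMB-contaminated results within 1 sigma  :", round(contaminated, 3))
```

The independent case lands near the expected ~68%, while the CMB-contaminated case puts nearly every quoted value within 1$\sigma$ of the WMAP result, reproducing the kind of clustering the paper flags without any new physics.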

This was going to be a quick post, but it turned into a rant. It's a nice idea for a paper, but I don't think an automated search can be trusted: you just don't know what numbers are being quoted. I do, however, agree with the author of the blog that we should do blind analyses where possible (many big projects already do) and that data and code should be open (this paper is a key example of why). But I don't really know what he means by systems engineering (I'm not sure why mocks wouldn't serve the same purpose).

Last edited: Jan 11, 2016
4. Jan 11, 2016

### nikkkom

Any physics field is in a sort of crisis when it seems to be converging on a final theory that explains all observations.

Because even though the practical goal of physics is to provide good models of Nature, for many *physicists* the thrill is in the process of discovery. When there is nothing left to discover...