Systematic study comparing cosmological models?

  • #1
Vincentius
Hi, my question is whether there exists a study that systematically compares different cosmological models on how well they fit the same standard cosmological data sets (CMB, luminosity distances, BAO, SNe, lensing, ...). I can find very little besides LCDM.

In the rare case where a comparison exists, it leads to confusion. For instance, Melia claims that a coasting model ("Rh=ct"), i.e. [itex]\rho \propto a^{-2}[/itex], fits the data better than LCDM, leading to long (perhaps unnecessary) disputes over the subject, though my impression is that Melia made a serious attempt.

Apart from this particular case, do comparisons exist of LCDM with the single-fluid model where [itex]\rho \propto a^{-1}[/itex], with the dual-fluid model where [itex]\rho_1 \propto a^{-2}[/itex] and [itex]\rho_2 \propto a^{0}[/itex] (a constant, Λ-like density), and possibly with other models?
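For concreteness, here is a minimal sketch of the expansion histories these scalings imply in a flat universe (the density parameters are illustrative placeholders, not fits, and the function names are mine): a fluid with [itex]\rho \propto a^{-n}[/itex] contributes a term [itex]\Omega (1+z)^n[/itex] to [itex]E(z)^2 = H(z)^2/H_0^2[/itex].

[code]
import numpy as np

# Flat Friedmann equation: E(z)^2 = sum_i Omega_i * (1 + z)^(n_i)
# for fluids with rho_i ∝ a^(-n_i). Omega values are illustrative, not fits.

def E_lcdm(z, Om=0.3, OL=0.7):
    """LCDM: matter (rho ∝ a^-3) plus a cosmological constant (rho ∝ a^0)."""
    return np.sqrt(Om * (1 + z)**3 + OL)

def E_coasting(z):
    """Single fluid with rho ∝ a^-2 (a coasting, Rh = ct expansion history)."""
    return 1.0 + z

def E_single(z):
    """Single fluid with rho ∝ a^-1."""
    return np.sqrt(1.0 + z)

def E_dual(z, O1=0.5, O2=0.5):
    """Dual fluid: rho1 ∝ a^-2 plus rho2 ∝ a^0."""
    return np.sqrt(O1 * (1 + z)**2 + O2)

z = np.linspace(0.0, 2.0, 5)
for name, E in [("LCDM", E_lcdm), ("coasting a^-2", E_coasting),
                ("single a^-1", E_single), ("dual a^-2 + a^0", E_dual)]:
    print(f"{name:16s}", np.round(E(z), 3))
[/code]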

I would very much appreciate it if someone could help out.
 

Answers and Replies

  • #2
Chalnoth
Science Advisor
Hi, my question is whether there exists a study that systematically compares different cosmological models on how well they fit the same standard cosmological data sets (CMB, luminosity distances, BAO, SNe, lensing, ...). I can find very little besides LCDM.
Unfortunately, it's pretty difficult to do this kind of comparison in general. There are a number of arbitrary decisions that have to be made when comparing two different models of the universe. Typically, physicists trust that as the data gets better and better, these arbitrary decisions will make less and less of a difference and it'll become clear which models are or are not accurate.

In other words, you can do this sort of model comparison, but you have to make certain arbitrary choices that have a critical effect on the outcome, making it so that unless the statistics are extremely one-sided, there's just no way to solidly say one model is better than another.

In the rare case where a comparison exists, it leads to confusion. For instance, Melia claims that a coasting model ("Rh=ct"), i.e. [itex]\rho \propto a^{-2}[/itex], fits the data better than LCDM, leading to long (perhaps unnecessary) disputes over the subject, though my impression is that Melia made a serious attempt.
Perhaps it fits some data better, but certainly not all. The Rh=ct model is quite close to LCDM for a good amount of time, but diverges greatly in the early universe. This means that CMB and nucleosynthesis observations are particularly difficult to reconcile with an Rh=ct model.

For example, the Rh=ct model does fairly well with the supernova data, but this is of no surprise at all because the supernova data is only good at estimating the ratio of dark energy density to matter density, but doesn't constrain the total amount of either very well at all. The Rh=ct universe, with [itex]\Omega_\Lambda = \Omega_m = 0[/itex], has little trouble fitting this.

But add in some CMB data, and this completely breaks down, because the CMB does a very good job of estimating the total energy/matter density of the universe (but it doesn't measure the ratio very well, which is why combining the two data types is best).
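As a rough illustration of that degeneracy, here is a minimal sketch (assuming illustrative values [itex]H_0 = 70[/itex] km/s/Mpc and [itex]\Omega_m = 0.3[/itex]; the function names are mine) comparing the supernova distance modulus in flat LCDM with that of an Rh=ct expansion history, whose comoving distance is [itex](c/H_0)\ln(1+z)[/itex]:

[code]
import numpy as np
from scipy.integrate import quad

D_H = 299792.458 / 70.0  # Hubble distance in Mpc, assuming H0 = 70 km/s/Mpc

def dl_lcdm(z, Om=0.3):
    """Luminosity distance in flat LCDM, in Mpc."""
    I, _ = quad(lambda zp: 1.0 / np.sqrt(Om * (1 + zp)**3 + (1 - Om)), 0.0, z)
    return D_H * (1 + z) * I

def dl_rhct(z):
    """Luminosity distance for Rh = ct: (1+z) times (c/H0) ln(1+z), in Mpc."""
    return D_H * (1 + z) * np.log(1 + z)

def mu(dl_mpc):
    """Distance modulus 5 log10(d_L / 10 pc); 1 Mpc = 1e5 * 10 pc."""
    return 5.0 * np.log10(dl_mpc * 1e5)

for z in [0.1, 0.5, 1.0, 1.5]:
    print(f"z = {z}: mu_LCDM = {mu(dl_lcdm(z)):.2f}, mu_RhCt = {mu(dl_rhct(z)):.2f}")
[/code]

The two distance moduli differ by only a couple of tenths of a magnitude out to [itex]z \sim 1.5[/itex], which is why the supernova data alone cannot cleanly separate the models.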
 
  • #3
Vincentius
Thanks Chalnoth. I gave Melia's case just as an example of the problem. My question is really whether there exists a rigorous, systematic comparison of (all kinds of) different models, in particular the ones I mentioned.
 
  • #4
Vincentius
Could you expand a little on the arbitrary choices that one has to make, and which determine the outcome? This gives the impression that data is pretty much useless as a discriminator.
 
  • #5
Chalnoth
Science Advisor
Thanks Chalnoth. I gave Melia's case just as an example of the problem. My question is really whether there exists a rigorous, systematic comparison of (all kinds of) different models, in particular the ones I mentioned.
Unfortunately, it's just impossible to do those kinds of comparisons in an unambiguous way. Here's a presentation for a talk that went over some of these issues, for example:
http://astrostatistics.psu.edu/su11scma5/lectures/Trotta_scmav.pdf

Of particular relevance to this discussion is page 5, which references three different studies asking whether the data favor [itex]n \neq 1[/itex]; the odds they quote for [itex]n \neq 1[/itex] differ by more than a factor of two.

In essence, if you're in a situation where you might want to do a careful model comparison, then you won't be able to produce a strong conclusion anyway because you had to make some arbitrary choices that impact the result. And if you wait until the data is strong enough that you can make that strong conclusion, then the careful model comparison doesn't really net you any additional information.
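As a toy version of the point on that slide (with made-up numbers, not the actual spectral index measurements): suppose the data gave a Gaussian likelihood [itex]n = 0.96 \pm 0.02[/itex], and we compare M0 (n = 1 exactly) against M1 (n free, with a uniform prior of width W around 1). The resulting odds depend directly on the arbitrary choice of W:

[code]
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

n_hat, sigma = 0.96, 0.02  # made-up "measurement" of n

# Evidence of M0 (n = 1 exactly): the likelihood evaluated at n = 1.
Z0 = norm.pdf(1.0, loc=n_hat, scale=sigma)

for W in [0.1, 0.2, 0.5]:
    # Evidence of M1: likelihood averaged over a uniform prior on [1-W/2, 1+W/2].
    Z1, _ = quad(lambda n: norm.pdf(n, loc=n_hat, scale=sigma) / W,
                 1.0 - W / 2, 1.0 + W / 2)
    print(f"prior width {W}: odds for n != 1 are about {Z1 / Z0:.1f} : 1")
[/code]

Widening the prior from 0.1 to 0.5 changes the odds by more than a factor of three (and even flips which model is favored), with the data held fixed.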
 
  • #6
Chalnoth
Science Advisor
Could you expand a little on the arbitrary choices that one has to make, and which determine the outcome? This gives the impression that data is pretty much useless as a discriminator.
See Bayes' Theorem:
https://en.wikipedia.org/wiki/Bayes'_theorem

[tex]P(m|d) = {P(d|m) P(m) \over P(d)}[/tex]

[itex]P(m|d)[/itex]: The probability of the model (m) given the data (d). This is what you're trying to infer when making a measurement.
[itex]P(d|m)[/itex]: The probability of observing specific data given a particular model. This is what you can actually measure.
[itex]P(m)[/itex]: The probability that a model is accurate before examining the data at all.
[itex]P(d)[/itex]: This is just a normalization factor that makes the probabilities sum to one. It doesn't impact the result.

It's [itex]P(m)[/itex] that is arbitrary, and there's no way to fix this. How likely you assume the model to be before even looking at the data fundamentally changes how likely you infer the model to be after examining the data.

What makes this situation not completely hopeless is that the result you get for [itex]P(m|d)[/itex] converges to the same value no matter what [itex]P(m)[/itex] you assume, given accurate enough data (provided you don't assume [itex]P(m) = 0[/itex] for a given model). So basically this all comes down to the pretty general suggestion that you keep an open mind.
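Here is a quick sketch of that convergence with a deliberately simple toy (two coin models standing in for two cosmologies; names and numbers are mine): the same data pull two very different priors toward the same posterior as the sample grows:

[code]
import numpy as np

rng = np.random.default_rng(0)

# Two toy models for coin flips: mA says P(heads) = 0.5, mB says P(heads) = 0.6.
# The data are actually generated from mB.
flips = rng.random(5000) < 0.6

def posterior_mB(heads, n, prior_B):
    """P(mB | data) via Bayes' theorem, starting from prior P(mB) = prior_B."""
    # Log Bayes factor of mB over mA for 'heads' heads in n flips.
    log_bf = heads * np.log(0.6 / 0.5) + (n - heads) * np.log(0.4 / 0.5)
    odds = (prior_B / (1 - prior_B)) * np.exp(log_bf)
    return odds / (1 + odds)

for n in [10, 100, 1000, 5000]:
    h = int(flips[:n].sum())
    print(f"n = {n:4d}: P(mB|d) = {posterior_mB(h, n, 0.5):.3f} (prior 0.5), "
          f"{posterior_mB(h, n, 0.01):.3f} (prior 0.01)")
[/code]

With a handful of flips the two priors give very different posteriors; with thousands of flips both are driven to essentially 1, provided neither prior was exactly zero.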
 
  • #7
Vincentius
Ok, interesting. Then why not take P(m) equal for all models, so as to avoid an a priori preference for any model? Let the data decide?
 
  • #8
Orodruin
Staff Emeritus
Science Advisor
Homework Helper
Insights Author
Gold Member
2021 Award
Ok, interesting. Then why not take P(m) equal for all models, so as to avoid an a priori preference for any model? Let the data decide?
It seems natural at first, but it is actually also an arbitrary choice. In particular, when it comes to estimating parameters within a model, you simply cannot say that "all parameter values have equal probability", because this depends on the parametrisation (see the sketch at the end of this post).

Of course, you could always go frequentist, but that has its own problems.

These partially illustrate the problems:
http://www.smbc-comics.com/index.php?id=4127
https://xkcd.com/1132/
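To make the parametrisation point concrete, here is a small sketch: a prior that is flat in a parameter [itex]\theta[/itex] is not flat in the equally valid parameter [itex]\phi = \theta^2[/itex], so "equal probability for everything" means different things in two descriptions of the same model:

[code]
import numpy as np

rng = np.random.default_rng(1)

# Draw theta from a flat prior on [0, 1], then look at phi = theta**2.
theta = rng.uniform(0.0, 1.0, 1_000_000)
phi = theta**2

counts, edges = np.histogram(phi, bins=4, range=(0.0, 1.0))
for lo, hi, c in zip(edges[:-1], edges[1:], counts):
    print(f"P({lo:.2f} < phi < {hi:.2f}) ≈ {c / len(phi):.3f}")
# A prior flat in phi would put 0.25 in each bin; the flat-in-theta prior
# puts about 0.50 in the first bin and about 0.13 in the last.
[/code]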
 
  • #9
Vincentius
I understand it is not necessarily easy, certainly not trivial, to set up a comparison, but I fail to see a good reason not to run a comparison for a class of simple models. Models are difficult to compare if the data are ill-conditioned and/or the number of degrees of freedom in the models is too large (e.g. a neural net fits everything you present to it). But in cosmology there is complementary data, and we usually consider simple models with only a few parameters.
Is there perhaps the technical problem that certain measurements are indirect (like distances) and presume a certain underlying cosmology, which is different for each model in the comparison? So that one would have to iterate models until convergence? Just guessing.
 
  • #10
Orodruin
Staff Emeritus
Science Advisor
Homework Helper
Insights Author
Gold Member
2021 Award
Do you understand the basics of frequentist or Bayesian hypothesis testing?
 
  • #11
Chalnoth
Science Advisor
Ok, interesting. Then why not take P(m) equal for all models, so as to avoid an a priori preference for any model? Let the data decide?
One problem with this is that models with more parameters tend to fit the data better no matter what.

For example, if I take the LCDM model and add a single parameter to it to allow dark energy to vary over time, then I am guaranteed to get a better fit to the data, whether or not dark energy actually varies over time.
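A toy illustration of that guarantee, with a straight line standing in for the time-varying dark energy parameter: even when the data are generated from a pure constant, the model with the extra slope parameter always achieves an equal or lower [itex]\chi^2[/itex], because the simpler model is a special case of it (slope = 0):

[code]
import numpy as np

rng = np.random.default_rng(2)

# Data generated from a CONSTANT (no trend), plus Gaussian noise.
x = np.linspace(0.0, 1.0, 50)
y = 1.0 + 0.1 * rng.standard_normal(50)

# Model A: constant only. Model B: constant plus slope (one extra parameter).
resid_A = y - np.polyval(np.polyfit(x, y, 0), x)
resid_B = y - np.polyval(np.polyfit(x, y, 1), x)
print(f"chi^2, constant model:      {np.sum(resid_A**2):.4f}")
print(f"chi^2, one extra parameter: {np.sum(resid_B**2):.4f}")  # always <=
[/code]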
 
