Systematic study comparing cosmological models?

SUMMARY

This discussion centers on the challenges of systematically comparing different cosmological models against standard cosmological data sets such as the Cosmic Microwave Background (CMB), supernova luminosity distances (SNe), and lensing. The conversation highlights the unavoidable arbitrary choices that significantly affect model comparisons, as seen in the debate surrounding Melia's coasting model ("Rh=ct") versus the Lambda Cold Dark Matter (LCDM) model. It concludes that while comparisons can be made, they often lead to confusion and do not yield definitive results due to the inherent complexities and assumptions involved in model selection.

PREREQUISITES
  • Understanding of cosmological models, specifically LCDM and coasting models.
  • Familiarity with standard cosmological data sets, including CMB, SNe, and lensing.
  • Knowledge of Bayesian statistics and Bayes' Theorem.
  • Basic grasp of frequentist versus Bayesian hypothesis testing.
NEXT STEPS
  • Research the implications of the Cosmic Microwave Background (CMB) on cosmological model fitting.
  • Explore the differences between Bayesian and frequentist approaches in cosmological data analysis.
  • Investigate the role of parameterization in model comparisons and its impact on fitting accuracy.
  • Examine existing studies that compare cosmological models, focusing on methodologies and outcomes.
USEFUL FOR

Astronomers, cosmologists, and researchers in astrophysics who are involved in model comparison and analysis of cosmological data will benefit from this discussion.

Vincentius
Hi, my question is whether there exists a study systematically comparing different cosmological models in how well they fit the same standard cosmological data sets (CMB, luminosity distances, BAO, SNe, lensing, ...). I can find very little besides LCDM.

In the rare case of a comparison, it leads to confusion. For instance, Melia claims that a coasting model ("Rh=ct"), i.e. ρ ∝ a^{-2}, fits the data better than LCDM, leading to long (perhaps unnecessary) disputes over the subject, while my impression is that Melia made a serious attempt.

Apart from this particular case, do comparisons exist of LCDM with the single-fluid model where ρ ∝ a^{-1}, with the dual-fluid model where ρ_1 ∝ a^{-2} and ρ_2 = ρ_Λ ∝ a^0, and possibly other models?

I would very much appreciate if someone could help out.
 
Vincentius said:
Hi, my question is whether there exists a study systematically comparing different cosmological models in how well they fit the same standard cosmological data sets (CMB, luminosity distances, BAO, SNe, lensing, ...). I can find very little besides LCDM.
Unfortunately, it's pretty difficult to do this kind of comparison in general. There are a number of arbitrary decisions that have to be made when comparing two different models of the universe. Typically, physicists trust that as the data gets better and better, these arbitrary decisions will make less and less of a difference and it'll become clear which models are or are not accurate.

In other words, you can do this sort of model comparison, but you have to make certain arbitrary choices that have a critical effect on the outcome, making it so that unless the statistics are extremely one-sided, there's just no way to solidly say one model is better than another.

Vincentius said:
In the rare case of a comparison, it leads to confusion. For instance, Melia claims that a coasting model ("Rh=ct"), i.e. ρ ∝ a^{-2}, fits the data better than LCDM, leading to long (perhaps unnecessary) disputes over the subject, while my impression is that Melia made a serious attempt.
Perhaps it fits some data better, but certainly not all. The Rh=ct model is quite close to LCDM for a good amount of time, but diverges greatly in the early universe. This means that CMB and nucleosynthesis observations are particularly difficult to reconcile with an Rh=ct model.

For example, the Rh=ct model does fairly well with the supernova data, but this is no surprise: the supernova data are mainly sensitive to the ratio of dark energy density to matter density and don't constrain the total amount of either very well. The Rh=ct universe, with \Omega_\Lambda = \Omega_m = 0, has little trouble fitting this.

But add in some CMB data, and this completely breaks down, because the CMB does a very good job of estimating the total energy/matter density of the universe (but it doesn't measure the ratio very well, which is why combining the two data types is best).
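
To make the supernova point concrete, here is a minimal numerical sketch (Python with numpy/scipy; H0 = 70 km/s/Mpc and \Omega_m = 0.3 are illustrative values, not fits) comparing the distance modulus predicted by flat LCDM with that of a coasting a ∝ t expansion, for which d_L = (1+z)(c/H_0) ln(1+z):

```python
# Minimal sketch: distance modulus vs. redshift in flat LCDM and in a coasting
# (Rh = ct, a ∝ t) model. H0 and Omega_m are illustrative values, not fitted ones.
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458   # speed of light, km/s
H0 = 70.0             # assumed Hubble constant, km/s/Mpc
OMEGA_M = 0.3         # assumed matter density for the LCDM example

def dl_lcdm(z, omega_m=OMEGA_M, h0=H0):
    """Luminosity distance (Mpc) in flat LCDM: d_L = (1+z) (c/H0) int_0^z dz'/E(z')."""
    E = lambda zp: np.sqrt(omega_m * (1 + zp)**3 + (1 - omega_m))
    integral, _ = quad(lambda zp: 1.0 / E(zp), 0.0, z)
    return (1 + z) * (C_KM_S / h0) * integral

def dl_coasting(z, h0=H0):
    """Luminosity distance (Mpc) for a ∝ t: d_L = (1+z) (c/H0) ln(1+z)."""
    return (1 + z) * (C_KM_S / h0) * np.log(1 + z)

def mu(dl_mpc):
    """Distance modulus from a luminosity distance in Mpc."""
    return 5.0 * np.log10(dl_mpc * 1.0e6 / 10.0)

for z in (0.1, 0.5, 1.0, 1.5):
    diff = mu(dl_lcdm(z)) - mu(dl_coasting(z))
    print(f"z={z:.1f}  mu_LCDM={mu(dl_lcdm(z)):.3f}  "
          f"mu_coasting={mu(dl_coasting(z)):.3f}  diff={diff:+.3f}")
```

At these redshifts the two curves differ by only a couple of tenths of a magnitude at fixed H0 (and even less once the absolute calibration is marginalised over), which is part of why supernova data alone separate these expansion histories only weakly.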
 
Thanks Chalnoth. I gave Melia's case just as an example of the problem. My question is really if there exists a rigorous systematic comparison of (all kinds of) different models, in particular the ones I mentioned.
 
Could you expand a little on the arbitrary choices that one has to make, and which determine the outcome? This gives the impression that data is pretty much useless as a discriminator.
 
Vincentius said:
Thanks Chalnoth. I gave Melia's case just as an example of the problem. My question is really if there exists a rigorous systematic comparison of (all kinds of) different models, in particular the ones I mentioned.
Unfortunately, it's just impossible to do those kinds of comparisons in an unambiguous way. Here's a presentation for a talk that went over some of these issues, for example:
http://astrostatistics.psu.edu/su11scma5/lectures/Trotta_scmav.pdf

Of particular relevance for this discussion is page 5, which references three different studies asking how strongly the data favor n \neq 1; the odds quoted for n \neq 1 varied between the studies by more than a factor of two.

In essence, if you're in a situation where you might want to do a careful model comparison, then you won't be able to produce a strong conclusion anyway because you had to make some arbitrary choices that impact the result. And if you wait until the data is strong enough that you can make that strong conclusion, then the careful model comparison doesn't really net you any additional information.
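
To illustrate how different analyses can quote odds differing by factors of a few, here is a toy sketch (hypothetical numbers, not the actual studies referenced on that slide): a Gaussian "measurement" of n, and the Bayes factor between a model where n is free (with a flat prior of some assumed width) and a model that fixes n = 1. The answer swings from favouring the free parameter to disfavouring it purely as a function of the prior width:

```python
# Toy sketch (hypothetical numbers): Bayes factor for "n free" vs "n fixed at 1"
# as a function of the prior width assumed for n. Likelihood: a Gaussian
# measurement n_hat = 0.96 +/- 0.02, chosen purely for illustration.
from scipy.stats import norm

n_hat, sigma = 0.96, 0.02

def evidence_free_n(prior_width, center=1.0):
    """Evidence for a model with a flat prior on n of the given width, centred on 1.
    The average of the Gaussian likelihood over the flat prior is done with the normal CDF."""
    lo, hi = center - prior_width / 2.0, center + prior_width / 2.0
    mass = norm.cdf(hi, loc=n_hat, scale=sigma) - norm.cdf(lo, loc=n_hat, scale=sigma)
    return mass / prior_width

evidence_fixed = norm.pdf(n_hat, loc=1.0, scale=sigma)   # model that fixes n = 1

for width in (0.1, 0.5, 1.0):
    B = evidence_free_n(width) / evidence_fixed
    print(f"prior width {width:4.1f}:  B(n free : n = 1) = {B:6.2f}")
# Same data, same likelihood -- only the assumed prior width changes,
# and the odds move by factors of several.
```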
 
Vincentius said:
Could you expand a little on the arbitrary choices that one has to make, and which determine the outcome? This gives the impression that data is pretty much useless as a discriminator.
See Bayes' Theorem:
https://en.wikipedia.org/wiki/Bayes'_theorem

P(m|d) = {P(d|m) P(m) \over P(d)}

P(m|d): The probability of the model (m) given the data (d). This is what you're trying to infer when making a measurement.
P(d|m): The probability of observing specific data given a particular model. This is what you can actually measure.
P(m): The probability that a model is accurate before examining the data at all.
P(d): This is just a normalization factor that makes the probabilities sum to one. It doesn't impact the result.

It's P(m) that is arbitrary, and there's no way to fix this. How likely you assume the model to be before even looking at the data fundamentally changes how likely you infer the model to be after examining the data.

What makes this situation not completely hopeless is that the result you get for P(m|d) converges to the same value no matter what P(m) you assume, provided the data are accurate enough and you don't assume P(m) = 0 for a given model. So basically this all comes down to the pretty general suggestion that you keep an open mind.
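
A toy numerical version of that convergence (a sketch with a coin-flip stand-in for "models", nothing cosmological): with little data, the assumed P(m) dominates the posterior, but as the data accumulate every nonzero prior is driven to the same P(m|d).

```python
# Sketch: two simple "models" for a coin (p = 0.5 vs p = 0.6), data drawn from
# the second. Whatever nonzero prior P(m) you start with, the posterior P(m|d)
# converges to the same answer as the data grow.
import numpy as np

rng = np.random.default_rng(0)
P_TRUE = 0.6
MODELS = {"p=0.5": 0.5, "p=0.6": 0.6}

for n in (10, 100, 1000):
    heads = rng.binomial(n, P_TRUE)
    # log P(d|m) for each model (the binomial coefficient cancels in the ratio)
    loglike = {name: heads * np.log(p) + (n - heads) * np.log(1 - p)
               for name, p in MODELS.items()}
    for prior in (0.01, 0.5, 0.99):
        # posterior odds = prior odds * likelihood ratio, then convert to a probability
        log_odds = np.log(prior / (1 - prior)) + loglike["p=0.6"] - loglike["p=0.5"]
        post = 1.0 / (1.0 + np.exp(-log_odds))
        print(f"n={n:5d}  prior P(p=0.6)={prior:4.2f}  posterior={post:.3f}")
```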
 
Ok, interesting. Then why not take P(m) equal for all models, so as to avoid any a priori preference for a model? Let the data decide?
 
Vincentius said:
Ok, interesting. Then why not take P(m) equal for all models, so as to avoid any a priori preference for a model? Let the data decide?
It seems natural at first, but it is actually also an arbitrary choice. In particular, when it comes to estimating parameters within a model, you simply cannot say that "all parameter values are equally probable", because this depends on the parametrisation.
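
A quick sketch of that parametrisation dependence (the range 0.1–10 is purely illustrative): a prior that is flat in a parameter x assigns a very different probability to the same region than a prior that is flat in log x, even though both sound like "no preference".

```python
# Sketch: a "flat" prior on a parameter x is not flat in log(x), so
# "equal probability for everything" means different things in different variables.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000
LO, HI = 0.1, 10.0   # illustrative parameter range

x_flat_in_x = rng.uniform(LO, HI, N)                              # uniform in x
x_flat_in_logx = np.exp(rng.uniform(np.log(LO), np.log(HI), N))   # uniform in log x

for name, samples in [("flat in x", x_flat_in_x), ("flat in log x", x_flat_in_logx)]:
    frac = np.mean(samples < 1.0)
    print(f"{name:14s}: prior probability that x < 1 is about {frac:.2f}")
# flat in x gives ~0.09, flat in log x gives ~0.50: the same "no preference" idea,
# two very different priors.
```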

Of course, you could always go frequentist, but that has its own problems.

These partially illustrate the problems:
http://www.smbc-comics.com/index.php?id=4127
https://xkcd.com/1132/
 
I understand it is not necessarily easy, certainly not trivial, to set up a comparison, but I fail to see a good reason not to run a comparison for a class of simple models. Models are difficult to compare if the data are ill-conditioned and/or the number of degrees of freedom in the models is too large (e.g. a neural net fits everything you present to it). But in cosmology there is complementary data, and we usually consider simple models with only a few parameters.
Is there perhaps the technical problem that certain measurements are indirect (like distances) and presume a certain underlying cosmology, which is different for each model in the comparison? So that one would have to iterate models until convergence? Just guessing.
 
Do you understand the basics of frequentist or Bayesian hypothesis testing?
 
Vincentius said:
Ok, interesting. Then why not take P(m) equal for all models, so as to avoid any a priori preference for a model? Let the data decide?
One problem with this is that models with more parameters tend to fit the data better no matter what.

For example, if I take the LCDM model and add a single parameter to it to allow dark energy to vary over time, then I am guaranteed to get a better fit to the data, whether or not dark energy actually varies over time.
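
A toy version of that guarantee (a sketch, not the actual dark-energy analysis): simulated data that genuinely contain no trend still get a lower chi-squared from a fit with an extra slope parameter, so "fits better" is not by itself evidence for the extra physics.

```python
# Sketch: a model with an extra free parameter never fits worse, even when the
# extra parameter describes nothing real. The data are a constant plus noise;
# the "constant + slope" fit still lowers chi^2 every time.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 50)
sigma = 1.0

for trial in range(3):
    y = 5.0 + rng.normal(0.0, sigma, x.size)   # true model: constant, no slope

    # 1-parameter fit: constant only
    c0 = y.mean()
    chi2_const = np.sum(((y - c0) / sigma) ** 2)

    # 2-parameter fit: constant + slope (least squares)
    slope, intercept = np.polyfit(x, y, 1)
    chi2_line = np.sum(((y - (intercept + slope * x)) / sigma) ** 2)

    print(f"trial {trial}: chi2(const) = {chi2_const:6.2f}   chi2(const+slope) = {chi2_line:6.2f}")
# chi^2 always drops when the slope is added; that improvement alone says nothing
# about whether the slope (or a time-varying dark energy) is real.
```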
 
