Systematic study comparing cosmological models?


Discussion Overview

The discussion centers around the existence of systematic studies that compare various cosmological models based on their fit to standard cosmological data sets, such as the Cosmic Microwave Background (CMB), luminosity distances, baryon acoustic oscillations (BAO), supernovae (SNe), and gravitational lensing. Participants explore the challenges and complexities involved in making such comparisons, particularly in the context of different models like LCDM and alternative theories.

Discussion Character

  • Debate/contested
  • Technical explanation
  • Exploratory

Main Points Raised

  • Some participants express difficulty in finding studies that systematically compare cosmological models beyond LCDM, noting that comparisons often lead to confusion.
  • One participant mentions Melia's coasting model ("Rh=ct") and its claims of fitting data better than LCDM, but acknowledges that this model diverges significantly in the early universe, complicating its compatibility with CMB and nucleosynthesis observations.
  • Another participant highlights the arbitrary choices involved in model comparisons, suggesting that these choices can critically affect outcomes and make it challenging to definitively state which model is superior.
  • There is a discussion about the implications of using Bayes' Theorem in model comparison, particularly the role of prior probabilities (P(m)) and how they can influence the results.
  • Some participants propose that setting equal prior probabilities for all models could mitigate a priori preferences, but others argue that this approach is also arbitrary and may not be feasible due to the nature of parameter estimation.
  • Concerns are raised about the indirect nature of certain measurements and the potential need for iterative modeling to achieve convergence in comparisons.
  • One participant questions the feasibility of comparisons for simple models, suggesting that complementary data in cosmology could allow for more straightforward comparisons despite the challenges.
  • There is a mention of the tendency for more complex models to fit data better, regardless of their physical validity, which complicates the comparison process.

Areas of Agreement / Disagreement

Participants generally agree that systematic comparisons of cosmological models are fraught with challenges and that arbitrary choices can significantly influence outcomes. However, there is no consensus on the best approach to conducting these comparisons or on the validity of specific models discussed.

Contextual Notes

Limitations include the dependence on arbitrary choices in model comparison, the impact of parameterization on prior probabilities, and the challenges posed by indirect measurements that may require assumptions about underlying cosmologies.

Vincentius
Hi, my question is whether there exists a study systematically comparing different cosmological models in how well they fit the same standard cosmological data sets (CMB, luminosity, BAO, SNe, lensing, ...). I can find very little besides LCDM.

In the rare case of a comparison, it leads to confusion. For instance, Melia claims that a coasting model ("Rh=ct"), i.e. ρ ∝ a^{-2}, fits data better than LCDM, leading to long (perhaps unnecessary) disputes over the subject, while my impression is that Melia made a serious attempt.

Apart from this particular case, do comparisons exist of LCDM with the single-fluid model where ρ ∝ a^{-1}, with the dual-fluid model where ρ_1 ∝ a^{-2} and ρ_2 = ρ_Λ ∝ a^0, and possibly other models?

I would very much appreciate if someone could help out.
 
Vincentius said:
Hi, my question is whether there exists a study systematically comparing different cosmological models in how well they fit the same standard cosmological data sets (CMB, luminosity, BAO, SNe, lensing, ...). I can find very little besides LCDM.
Unfortunately, it's pretty difficult to do this kind of comparison in general. There are a number of arbitrary decisions that have to be made when comparing two different models of the universe. Typically, physicists trust that as the data gets better and better, these arbitrary decisions will make less and less of a difference and it'll become clear which models are or are not accurate.

In other words, you can do this sort of model comparison, but you have to make certain arbitrary choices that have a critical effect on the outcome, making it so that unless the statistics are extremely one-sided, there's just no way to solidly say one model is better than another.

Vincentius said:
In the rare case of a comparison, it leads to confusion. For instance, Melia claims that a coasting model ("Rh=ct"), i.e. ρ ∝ a^{-2}, fits data better than LCDM, leading to long (perhaps unnecessary) disputes over the subject, while my impression is that Melia made a serious attempt.
Perhaps it fits some data better, but certainly not all. The Rh=ct model is quite close to LCDM for a good amount of time, but diverges greatly in the early universe. This means that CMB and nucleosynthesis observations are particularly difficult to reconcile with an Rh=ct model.

For example, the Rh=ct model does fairly well with the supernova data, but this is no surprise at all, because the supernova data are only good at estimating the ratio of dark energy density to matter density and constrain the total amount of either only poorly. The Rh=ct universe, with \Omega_\Lambda = \Omega_m = 0, has little trouble fitting this.

But add in some CMB data, and this completely breaks down, because the CMB does a very good job of estimating the total energy/matter density of the universe (but it doesn't measure the ratio very well, which is why combining the two data types is best).
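To make the supernova-versus-CMB point concrete, here is a minimal numerical sketch (not from the thread; H0 = 70 km/s/Mpc and Ωm = 0.3 are assumed illustrative values) comparing luminosity distances in flat LCDM with those in an Rh=ct universe, where H(z) = H0(1+z) gives d_L = (1+z)(c/H0) ln(1+z):

```python
import numpy as np
from scipy.integrate import quad

H0 = 70.0           # Hubble constant in km/s/Mpc (assumed value)
C_KMS = 299792.458  # speed of light in km/s

def dl_lcdm(z, om=0.3):
    """Luminosity distance (Mpc) in flat LCDM with Omega_m = om."""
    integrand = lambda zp: 1.0 / np.sqrt(om * (1 + zp)**3 + (1 - om))
    dc, _ = quad(integrand, 0.0, z)   # dimensionless comoving distance
    return (1 + z) * (C_KMS / H0) * dc

def dl_coasting(z):
    """Luminosity distance (Mpc) in the Rh=ct (coasting) model."""
    return (1 + z) * (C_KMS / H0) * np.log(1 + z)

# The two models nearly coincide at low redshift, which is part of
# why supernova data alone discriminate between them so poorly.
print(dl_lcdm(0.01), dl_coasting(0.01))
print(dl_lcdm(3.0), dl_coasting(3.0))
```

At low z both reduce to cz/H0, so SNe alone separate the models only weakly; the differences grow toward the early universe, which is where the CMB and nucleosynthesis constraints bite.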
 
Thanks Chalnoth. I gave Melia's case just as an example of the problem. My question is really if there exists a rigorous systematic comparison of (all kinds of) different models, in particular the ones I mentioned.
 
Could you expand a little on the arbitrary choices that one has to make, and which determine the outcome? This gives the impression that data is pretty much useless as a discriminator.
 
Vincentius said:
Thanks Chalnoth. I gave Melia's case just as an example of the problem. My question is really if there exists a rigorous systematic comparison of (all kinds of) different models, in particular the ones I mentioned.
Unfortunately, it's just impossible to do those kinds of comparisons in an unambiguous way. Here's a presentation for a talk that went over some of these issues, for example:
http://astrostatistics.psu.edu/su11scma5/lectures/Trotta_scmav.pdf

Of particular relevance for this discussion is page 5, which references three different studies asking whether or how the data favors n \neq 1, where the range in measured odds of n \neq 1 varied by more than a factor of two.

In essence, if you're in a situation where you might want to do a careful model comparison, then you won't be able to produce a strong conclusion anyway because you had to make some arbitrary choices that impact the result. And if you wait until the data is strong enough that you can make that strong conclusion, then the careful model comparison doesn't really net you any additional information.
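The prior dependence behind that factor-of-two spread in odds can be reproduced in a toy setting. The numbers below are invented for illustration: a Gaussian likelihood for a hypothetical extra parameter theta, compared against the simpler model that fixes theta = 0. Only the assumed prior width changes, yet the Bayes factor moves by an order of magnitude:

```python
import numpy as np
from scipy.integrate import quad

def like(theta):
    # toy Gaussian likelihood, peaked at theta = 0.3 with width 0.1
    return np.exp(-0.5 * ((theta - 0.3) / 0.1)**2)

def evidence(width):
    # evidence for the extended model under a flat prior of the given
    # width centred on zero: integral of likelihood times prior density
    val, _ = quad(lambda t: like(t) / width, -width / 2, width / 2)
    return val

# Bayes factor of the extended model vs. the model with theta fixed
# at 0, for three equally "reasonable" prior widths
for w in (1.0, 2.0, 10.0):
    print(w, evidence(w) / like(0.0))
```

The data and the likelihood never change here; only the arbitrary prior range does, and with it the reported odds.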
 
Vincentius said:
Could you expand a little on the arbitrary choices that one has to make, and which determine the outcome? This gives the impression that data is pretty much useless as a discriminator.
See Bayes' Theorem:
https://en.wikipedia.org/wiki/Bayes'_theorem

P(m|d) = {P(d|m) P(m) \over P(d)}

P(m|d): The probability of the model (m) given the data (d). This is what you're trying to infer when making a measurement.
P(d|m): The probability of observing specific data given a particular model. This is what you can actually measure.
P(m): The probability that a model is accurate before examining the data at all.
P(d): This is just a normalization factor that makes the probabilities sum to one. It doesn't impact the result.

It's P(m) that is arbitrary, and there's no way to fix this. How likely you assume the model to be before even looking at the data fundamentally changes how likely you infer the model to be after examining the data.

What makes this situation not completely hopeless is that the result you get for P(m|d) converges to the same value no matter what P(m) you assume, given accurate enough data (provided you don't assume that P(m) = 0 for a given model). So basically this all comes down to the pretty general suggestion that you keep an open mind.
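Both points can be sketched numerically. The evidence values P(d|m) below are invented for illustration; in practice they would come from integrating each model's likelihood over its parameters:

```python
# Hypothetical model evidences P(d|m) for two models, A and B.
evidences = {"A": 2.0e-5, "B": 1.0e-5}

def posterior(evidences, prior):
    """P(m|d) = P(d|m) P(m) / P(d); P(d) is just the normalisation."""
    unnorm = {m: evidences[m] * prior[m] for m in evidences}
    norm = sum(unnorm.values())
    return {m: v / norm for m, v in unnorm.items()}

# Equal priors: the data favour A two to one.
print(posterior(evidences, {"A": 0.5, "B": 0.5}))
# A strong prior for B drags the posterior the other way —
# with evidences this close, the choice of P(m) drives the conclusion.
print(posterior(evidences, {"A": 0.1, "B": 0.9}))
```

Only when the evidences differ by a much larger factor does the choice of prior stop mattering, which is the convergence described above.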
 
Ok, interesting. Then why not take P(m) equal for all models, as to avoid a priori preference for any model? Let the data decide?
 
Vincentius said:
Ok, interesting. Then why not take P(m) equal for all models, as to avoid a priori preference for any model? Let the data decide?
It seems natural at first, but it is actually also an arbitrary choice. In particular, when it comes to estimating parameters in a model, you simply cannot say that "all parameter values have equal probability", because this will depend on the parametrisation.

Of course, you could always go frequentist, but that has its own problems.

These partially illustrate the problems:
http://www.smbc-comics.com/index.php?id=4127
https://xkcd.com/1132/
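The parametrisation problem is easy to demonstrate numerically. In this made-up example, a prior that is flat in a parameter x is automatically non-flat in the equivalent parameter y = x², so "equal probability" is not a parametrisation-independent statement:

```python
import numpy as np

rng = np.random.default_rng(0)

# A "non-informative" flat prior on x over [0, 1]...
x = rng.uniform(0.0, 1.0, 200_000)

# ...induces a non-flat prior on the re-parametrised y = x**2.
# Under a genuinely flat prior on y we would have P(y < 0.25) = 0.25,
# but the flat-in-x prior gives P(y < 0.25) = P(x < 0.5) = 0.5.
frac = np.mean(x**2 < 0.25)
print(frac)
```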
 
I understand it is not necessarily easy, certainly not trivial, to set up a comparison, but I fail to see a good reason not to run a comparison for a class of simple models. Models are difficult to compare if the data are ill-conditioned and/or the number of degrees of freedom in the models is too large (e.g. a neural net fits everything you present it). But in cosmology there is complementary data and we usually consider simple models with only a few parameters.
Is there perhaps the technical problem that certain measurements are indirect (like distances) and presume a certain underlying cosmology, which is different for each model in the comparison? So that one would have to iterate models until convergence? Just guessing.
 
Do you understand the basics of frequentist or Bayesian hypothesis testing?
 
Vincentius said:
Ok, interesting. Then why not take P(m) equal for all models, as to avoid a priori preference for any model? Let the data decide?
One problem with this is that models with more parameters tend to fit the data better no matter what.

For example, if I take the LCDM model and add a single parameter to it to allow dark energy to vary over time, then I am guaranteed to get a better fit to the data, whether or not dark energy actually varies over time.
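This guaranteed improvement is a generic property of nested least-squares fits and can be checked with made-up data: the synthetic data below are genuinely linear, yet adding a quadratic term still lowers the residual sum of squares.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.1, x.size)  # truly linear data

def rss(degree):
    """Residual sum of squares of the best-fit polynomial of that degree."""
    coeffs = np.polyfit(x, y, degree)
    return float(np.sum((y - np.polyval(coeffs, x))**2))

# The quadratic model contains the linear one as a special case, so
# its best fit can never be worse — whatever the data actually are.
print(rss(1), rss(2))
```

This is why raw goodness-of-fit alone cannot arbitrate between models of different complexity; some penalty for extra parameters (an Occam factor, information criterion, or the Bayesian evidence) is needed.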
 
