A Systematic study comparing cosmological models?

  1. Jun 15, 2016 #1
    Hi, my question is whether there exists a study that systematically compares different cosmological models on how well they fit the same standard cosmological data sets (CMB, luminosity distances, BAO, SNe, lensing, ...). I can find very little besides LCDM.

    In the rare case where a comparison exists, it leads to confusion. For instance, Melia claims that a coasting model ("Rh=ct"), i.e. [itex]\rho \propto a^{-2}[/itex], fits the data better than LCDM, leading to long (perhaps unnecessary) disputes over the subject, while my impression is that Melia made a serious attempt.

    Apart from this particular case, do comparisons exist of LCDM with the single-fluid model where [itex]\rho \propto a^{-1}[/itex] and with the dual-fluid model where [itex]\rho_1 \propto a^{-2}[/itex] and [itex]\rho_2 = \rho_\Lambda \propto a^0[/itex], and possibly other models?
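
    (For concreteness, any such fit starts from the expansion rate each model predicts: a fluid with [itex]\rho \propto a^{-n}[/itex] simply contributes a [itex](1+z)^n[/itex] term to the squared expansion rate. A minimal sketch of my own, with purely illustrative density fractions:)

[code]
# Minimal sketch (illustrative, not from any paper): the dimensionless Hubble
# rate E(z) = H(z)/H0 for the models mentioned above, assuming a flat FRW
# universe. A fluid with rho ∝ a^(-n) contributes a (1+z)^n term to E^2.
import numpy as np

def E_lcdm(z, Om=0.3):
    # LCDM: matter (rho ∝ a^-3) plus a cosmological constant (rho ∝ a^0)
    return np.sqrt(Om * (1 + z)**3 + (1 - Om))

def E_single_fluid(z):
    # single fluid with rho ∝ a^-1
    return np.sqrt(1 + z)

def E_dual_fluid(z, f1=0.5):
    # dual fluid: rho_1 ∝ a^-2 plus rho_2 = rho_Lambda ∝ a^0;
    # f1 is the present-day density fraction of the a^-2 component (made up)
    return np.sqrt(f1 * (1 + z)**2 + (1 - f1))

z = np.linspace(0.0, 2.0, 5)
print(E_lcdm(z))
print(E_single_fluid(z))
print(E_dual_fluid(z))
[/code]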

    I would very much appreciate it if someone could help out.
     
  3. Jun 15, 2016 #2

    Chalnoth

    Science Advisor

    Unfortunately, it's pretty difficult to do this kind of comparison in general. There are a number of arbitrary decisions that have to be made when comparing two different models of the universe. Typically, physicists trust that as the data gets better and better, these arbitrary decisions will make less and less of a difference and it'll become clear which models are or are not accurate.

    In other words, you can do this sort of model comparison, but you have to make certain arbitrary choices that have a critical effect on the outcome, making it so that unless the statistics are extremely one-sided, there's just no way to solidly say one model is better than another.

    Perhaps it fits some data better, but certainly not all. The Rh=ct model is quite close to LCDM for a good amount of time, but diverges greatly in the early universe. This means that CMB and nucleosynthesis observations are particularly difficult to reconcile with an Rh=ct model.

    For example, the Rh=ct model does fairly well with the supernova data, but this is of no surprise at all because the supernova data is only good at estimating the ratio of dark energy density to matter density, but doesn't constrain the total amount of either very well at all. The Rh=ct universe, with [itex]\Omega_\Lambda = \Omega_m = 0[/itex], has little trouble fitting this.

    But add in some CMB data, and this completely breaks down, because the CMB does a very good job of estimating the total energy/matter density of the universe (but it doesn't measure the ratio very well, which is why combining the two data types is best).
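
    To make that degeneracy concrete, here is a rough sketch (my own; the Hubble constant and [itex]\Omega_m[/itex] values are illustrative) of the distance moduli predicted by flat LCDM and by Rh=ct. At supernova redshifts the two differ by only a few tenths of a magnitude, much of which can be absorbed into the calibration, whereas the early-universe predictions differ dramatically:

[code]
# Rough sketch (my own; H0 and Omega_m are illustrative) of the luminosity
# distances predicted by flat LCDM and by Rh = ct.
import numpy as np
from scipy.integrate import quad

C = 299792.458  # speed of light in km/s
H0 = 70.0       # illustrative Hubble constant in km/s/Mpc

def dL_lcdm(z, Om=0.3):
    # flat LCDM: d_L = (1+z) (c/H0) * integral_0^z dz'/E(z')
    E = lambda zp: np.sqrt(Om * (1 + zp)**3 + (1 - Om))
    I, _ = quad(lambda zp: 1.0 / E(zp), 0.0, z)
    return (1 + z) * (C / H0) * I  # in Mpc

def dL_rhct(z):
    # Rh = ct (a proportional to t): d_L = (1+z) (c/H0) ln(1+z)
    return (1 + z) * (C / H0) * np.log(1 + z)

mu = lambda d: 5 * np.log10(d) + 25  # distance modulus for d in Mpc
for z in (0.1, 0.5, 1.0):
    print(z, round(mu(dL_lcdm(z)) - mu(dL_rhct(z)), 3))
[/code]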
     
  4. Jun 15, 2016 #3
    Thanks Chalnoth. I gave Melia's case just as an example of the problem. My question is really whether there exists a rigorous, systematic comparison of (all kinds of) different models, in particular the ones I mentioned.
     
  5. Jun 15, 2016 #4
    Could you expand a little on the arbitrary choices that one has to make, and which determine the outcome? This gives the impression that data is pretty much useless as a discriminator.
     
  6. Jun 15, 2016 #5

    Chalnoth

    Science Advisor

    Unfortunately, it's just impossible to do those kinds of comparisons in an unambiguous way. Here's a presentation for a talk that went over some of these issues, for example:
    http://astrostatistics.psu.edu/su11scma5/lectures/Trotta_scmav.pdf

    Of particular relevance for this discussion is page 5, which references three different studies asking whether the data favor a scalar spectral index [itex]n \neq 1[/itex]; the odds they reported for [itex]n \neq 1[/itex] varied by more than a factor of two.

    In essence, if you're in a situation where you might want to do a careful model comparison, then you won't be able to produce a strong conclusion anyway because you had to make some arbitrary choices that impact the result. And if you wait until the data is strong enough that you can make that strong conclusion, then the careful model comparison doesn't really net you any additional information.
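
    A toy version of the slide's point (my own made-up numbers, not Trotta's): take a nested comparison where model M0 fixes [itex]n = 1[/itex] and model M1 lets [itex]n[/itex] float with a flat prior of width w. The Bayes factor then depends directly on the arbitrary choice of w:

[code]
# Toy example (made-up numbers) of how the Bayes factor for a nested model
# depends on an arbitrary prior width. M0 fixes n = 1; M1 lets n float with
# a flat prior of width w centred on 1. The "data" is a Gaussian measurement
# n_hat +/- sigma.
import numpy as np
from scipy.integrate import quad

n_hat, sigma = 0.96, 0.02   # illustrative, roughly like spectral-index fits

def like(n):
    return np.exp(-0.5 * ((n - n_hat) / sigma)**2) / (sigma * np.sqrt(2 * np.pi))

Z0 = like(1.0)  # evidence of M0: the likelihood at the fixed point
for w in (0.1, 0.5, 1.0):
    # evidence of M1: likelihood averaged over the flat prior [1 - w/2, 1 + w/2]
    Z1, _ = quad(lambda n: like(n) / w, 1 - w / 2, 1 + w / 2)
    print(f"prior width {w}: Bayes factor B01 = {Z0 / Z1:.2f}")
[/code]

    With these numbers the Bayes factor swings from mildly favoring M1 to mildly favoring M0 purely as a function of the prior width.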
     
  7. Jun 15, 2016 #6

    Chalnoth

    Science Advisor

    See Bayes' Theorem:
    https://en.wikipedia.org/wiki/Bayes'_theorem

    [tex]P(m|d) = {P(d|m) P(m) \over P(d)}[/tex]

    [itex]P(m|d)[/itex]: The probability of the model (m) given the data (d). This is what you're trying to infer when making a measurement.
    [itex]P(d|m)[/itex]: The probability of observing specific data given a particular model. This is what you can actually measure.
    [itex]P(m)[/itex]: The probability that a model is accurate before examining the data at all.
    [itex]P(d)[/itex]: This is just a normalization factor that makes the probabilities sum to one. It doesn't impact the result.

    It's [itex]P(m)[/itex] that is arbitrary, and there's no way to fix this. How likely you assume the model to be before even looking at the data fundamentally changes how likely you infer the model to be after examining the data.

    What makes this situation not completely hopeless is that the inferred [itex]P(m|d)[/itex] converges to the same value no matter what [itex]P(m)[/itex] you assume, provided the data are accurate enough (and provided you don't assume [itex]P(m) = 0[/itex] for a given model). So basically this all comes down to the pretty general suggestion that you keep an open mind.
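
    A small numerical illustration of that convergence (my own sketch, with made-up models and numbers): two models predict a Gaussian observable with different means, the data are drawn from model A, and the posterior probability of A is tracked for several priors:

[code]
# Sketch (made-up models): P(m|d) stops depending on the prior P(m) once the
# data are good enough. Two models predict a Gaussian observable with
# different means; we draw more and more data from model A and track P(A|d)
# for several priors P(A).
import numpy as np

rng = np.random.default_rng(0)
mu_A, mu_B, sig = 0.0, 0.5, 1.0

def log_like(data, mu):
    return -0.5 * np.sum(((data - mu) / sig)**2)

for N in (5, 50, 500):
    data = rng.normal(mu_A, sig, N)
    lA, lB = log_like(data, mu_A), log_like(data, mu_B)
    for pA in (0.01, 0.5, 0.99):
        # posterior odds = likelihood ratio * prior odds
        post = 1.0 / (1.0 + np.exp(lB - lA) * (1 - pA) / pA)
        print(f"N={N:4d}  prior={pA:4.2f}  P(A|d)={post:.3f}")
[/code]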
     
  8. Jun 15, 2016 #7
    Ok, interesting. Then why not take P(m) equal for all models, so as to avoid an a priori preference for any model? Let the data decide?
     
  9. Jun 15, 2016 #8

    Orodruin

    Staff Emeritus
    Science Advisor
    Homework Helper
    Gold Member

    It seems natural at first, but it is actually also an arbitrary choice. In particular, when it comes to estimating parameters in a model, you simply cannot say that "all parameter values are equally probable", because this will depend on the parametrisation, as the sketch below shows.
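
    A quick numerical check of the parametrisation point (illustrative): a prior that is uniform in a decay rate [itex]\lambda[/itex] is not uniform in the lifetime [itex]\tau = 1/\lambda[/itex], so "equal probability for everything" means different things in different variables:

[code]
# Illustrative check: a "flat" prior is flat only in one parametrisation.
# A prior uniform in a rate lambda is not uniform in the lifetime tau = 1/lambda.
import numpy as np

rng = np.random.default_rng(1)
lam = rng.uniform(0.5, 2.0, 100_000)  # flat prior on lambda in [0.5, 2.0]
tau = 1.0 / lam                       # the same prior, expressed in tau

# Probability that each variable lies in the lower half of its range [0.5, 2.0]:
print(np.mean(lam < 1.25))   # ~0.5 by construction
print(np.mean(tau < 1.25))   # ~0.8: the induced prior on tau is not flat
[/code]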

    Of course, you could always go frequentist, but that has its own problems.

    These partially illustrate the problems:
    http://www.smbc-comics.com/index.php?id=4127
    https://xkcd.com/1132/
     
  10. Jun 16, 2016 #9
    I understand it is not necessarily easy, certainly not trivial, to set up a comparison, but I fail to see a good reason not to run a comparison for a class of simple models. Models are difficult to compare if the data are ill-conditioned and/or the number of degrees of freedom in the models is too large (e.g. a neural net fits everything you present it with). But in cosmology there is complementary data, and we usually consider simple models with only a few parameters.
    Is there perhaps the technical problem that certain measurements are indirect (like distances) and presume a certain underlying cosmology, which is different for each model in the comparison? So that one would have to iterate models until convergence? Just guessing.
     
  11. Jun 16, 2016 #10

    Orodruin

    Staff Emeritus
    Science Advisor
    Homework Helper
    Gold Member

    Do you understand the basics of frequentist or Bayesian hypothesis testing?
     
  12. Jun 16, 2016 #11

    Chalnoth

    Science Advisor

    One problem with this is that models with more parameters tend to fit the data better no matter what.

    For example, if I take the LCDM model and add a single parameter to it to allow dark energy to vary over time, then I am guaranteed to get a better fit to the data, whether or not dark energy actually varies over time.
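
    A toy demonstration of the same effect (mine, unrelated to any cosmological data set): draw data from a straight line and fit polynomials of increasing degree; the best-fit residuals can only go down, whether or not the extra terms mean anything:

[code]
# Toy demonstration: adding a parameter never makes the best fit worse, even
# when the extra parameter is meaningless. Data come from a straight line;
# we fit polynomials of increasing degree.
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 30)
y = 1.0 + 2.0 * x + rng.normal(0, 0.1, x.size)  # truth: a line plus noise

for deg in (1, 2, 3):
    coeffs = np.polyfit(x, y, deg)
    resid = y - np.polyval(coeffs, x)
    print(f"degree {deg}: sum of squared residuals = {np.sum(resid**2):.4f}")
[/code]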
     