What are the best parameters for λCDM?

  1. Sep 10, 2015 #1
    A man with one watch knows what time it is; a man with two watches is never sure. I've seen several combinations of 'standard' ΛCDM parameters. The NED search site has the three-year WMAP values as the default, and you can select the five-year values as an option. I've also seen several papers on the Planck study with a different set than WMAP's. Is there a set of parameters that everyone agrees is best to use as of 2015?
     
  3. Sep 10, 2015 #2

    marcus

    Science Advisor
    Gold Member
    Dearly Missed

    Look up the latest Planck mission reports.
    Planck was a mission similar to WMAP, but with colder instruments.
    The Planck report tables usually have a column over to the right where they combine their data with WMAP and one or two other studies and give collective results for the cosmic model parameters.
     
    Last edited: Sep 10, 2015
  4. Sep 10, 2015 #3

    marcus

    Science Advisor
    Gold Member
    Dearly Missed

    http://xxx.lanl.gov/abs/1502.01589
    Planck 2015 results. XIII. Cosmological parameters
    Where to look? What parameters interest you?

    I would suggest looking at TABLE 4 ON PAGE 31 over to the right where it includes not only Planck but also a selection of "ext" or external data, from other studies.

    Table 4. Parameter 68 % confidence limits for the base ΛCDM model from Planck CMB power spectra, in combination with lensing reconstruction (“lensing”) and external data (“ext,” BAO+JLA+H0)

    In that column the figure for H0 given is
    67.74 ± 0.46

    I'm curious, which parameters are you especially looking for?
     
  5. Sep 10, 2015 #4
    The latest paper with the 2015 results gives six columns. The earlier paper from 2013 gave three. That's nine different sets of results. Throw in the Sullivan paper, which primarily dealt with Type Ia SNe, and you've got ten sets of parameters to choose from. I want to plug these numbers into a model: I need one set, not ten options.
     
  6. Sep 10, 2015 #5
    I'm just looking for the right set of numbers to plug into Ned Wright's Cosmology calculator as well as the parameters to use when querying the NED database (Ned and NED are not related as far as I know).
     
  7. Sep 10, 2015 #6
    That appears to be what Wikipedia is using. I guess if Wikipedia says it, it must be so.
     
  8. Sep 10, 2015 #7

    marcus

    Science Advisor
    Gold Member
    Dearly Missed

    At some point it is a personal choice. Just be sure you SAY which column of the Planck report you are using. I strongly favor using the most recent major comprehensive study, so Table 4 on page 31, and I would use 67.74 for H0 as I said.
    Also, in the same column, Omega_Lambda = 0.6911.
     
  9. Sep 10, 2015 #8
    Religion is a personal choice; science must be objective.
    Thanks for the info.
     
  10. Sep 10, 2015 #9

    marcus

    Science Advisor
    Gold Member
    Dearly Missed

    Try to follow expert community consensus, but at some points you HAVE to make a choice. So be explicit and honest with your readers about which choices you are making.

    Nothing here is completely objective. But as for cosmo parameters, today (with collective results spanning several recent studies) the differences tend to be small, less than a percent on H0. Not a big deal.

    BTW, what is the square root of 0.6911 times 67.74? That would be the long-term Hubble rate H.
    Google calculator says:
    .6911^(1/2)*67.74 = 56.314
    Google calculator says:
    1/(67.74 km/s per Mpc) = 14.43 billion years
    Google calculator says:
    1/(56.314 km/s per Mpc) = 17.36 billion years
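
    For anyone who wants to check those conversions, here is a minimal Python sketch. It only reproduces the arithmetic above; the unit constants (km per Mpc, seconds per Gyr) are standard values I've added, not something from the Planck paper.
    [code]
    # Sketch of the conversions quoted above: long-term Hubble rate and Hubble times.
    KM_PER_MPC = 3.0857e19      # kilometres in one megaparsec
    SEC_PER_GYR = 3.1557e16     # seconds in one billion (Julian) years

    def hubble_time_gyr(h_kms_per_mpc):
        """Convert a Hubble rate in km/s per Mpc to the corresponding time 1/H in Gyr."""
        h_per_sec = h_kms_per_mpc / KM_PER_MPC
        return 1.0 / h_per_sec / SEC_PER_GYR

    H0 = 67.74                   # present-day Hubble rate, km/s per Mpc (Planck 2015, Table 4)
    Omega_L = 0.6911             # dark-energy density parameter, same column
    H_inf = Omega_L ** 0.5 * H0  # long-term (de Sitter) Hubble rate

    print(H_inf)                     # ~56.31 km/s per Mpc
    print(hubble_time_gyr(H0))       # ~14.43 Gyr
    print(hubble_time_gyr(H_inf))    # ~17.36 Gyr
    [/code]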

    You may as well use Jorrie's Lightcone calculator, which makes tables and draws curves as well. More fun and informative than Wright's.
    It uses default parameters of 14.4 and 17.3 (billion years), but you can tweak those to 14.43 and 17.36 if you want. It's quick and easy to adjust the parameters.
    Check it out:
    http://www.einsteins-theory-of-relativity-4engineers.com/LightCone7/LightCone.html
     
  11. Sep 10, 2015 #10

    Chalnoth

    Science Advisor

    This is why scientific papers publish multiple parameter sets. The thing to pay attention to is that the parameter results agree with one another to within the error bars. I.e., scientists use different assumptions in an attempt to make sure that their assumptions don't change the result.

    It's an unfortunate fact of statistical inference that it is fundamentally impossible to not make some assumptions. One way to see this is to look at Bayes' theorem:

    [tex]P(v|d) = {P(v)P(d|v) \over P(d)}[/tex]

    Here I've used ##v## to represent the parameter values and ##d## to represent the data. Here's a description of what each probability in the above equation means:

    ##P(v|d)##: The probability of certain parameter values being true given the data. This is the probability that we're interested in when performing measurements.
    ##P(v)##: The probability of certain parameter values being true if you have complete ignorance as to what the data says. This is known as the "prior probability."
    ##P(d|v)##: The probability of seeing specific data values given that the true parameters are described by ##v##. This is the probability distribution that is most directly measured by the experimental apparatus.
    ##P(d)##: This turns out to just be an overall normalization factor to make sure the total probability is equal to one. It has no impact on the interpretation of the equation.

    From this, there are two subjective decisions that cannot be avoided:
    1. What experimental measurements do I include in ##d##?
    2. What probability distribution do I use for the prior probability ##P(v)##?

    For the first question, you might be tempted to answer, "all of it," but that turns out to be a very difficult thing to do in practice. Subtle calibration differences between different experiments can lead to unexpected errors if you try, so it requires quite a bit of work, and how much data to include becomes a matter of prioritization. For example, if you are making use of the 2015 Planck data release, there's not much benefit from including the WMAP data because Planck is so much more sensitive than WMAP.

    For the second question, there just is no possibility of an objective answer, so scientists do the best they can. The most common thing to do in cosmology, for most parameters, is to use a "uniform prior", which is the equivalent of saying ##P(v) = 1##. But there are exceptions: for the amplitude of the primordial fluctuations, for instance, a common choice is to use what's known as the "Jeffreys prior," which is a uniform probability in terms of scale. It's the equivalent of saying, "I don't know what power of 10 this parameter should take."
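
    To make the role of the prior concrete, here is a toy numerical sketch (it has nothing to do with the actual Planck likelihood code): a made-up Gaussian measurement of an amplitude, combined with a uniform prior and with a Jeffreys prior. All the numbers in it are invented for illustration.
    [code]
    # Toy illustration of P(v|d) being proportional to P(v) * P(d|v) with two different priors.
    import numpy as np

    amps = np.linspace(0.1, 10.0, 1000)        # grid of candidate amplitudes v
    data_mean, data_sigma = 3.0, 0.5           # pretend measurement of the amplitude

    likelihood = np.exp(-0.5 * ((amps - data_mean) / data_sigma) ** 2)   # P(d|v)

    prior_uniform = np.ones_like(amps)         # uniform prior: P(v) = const
    prior_jeffreys = 1.0 / amps                # Jeffreys prior: uniform in log(v)

    def posterior(prior, like, grid):
        """Normalize prior * likelihood on the grid; P(d) is just the normalization."""
        p = prior * like
        return p / np.trapz(p, grid)

    post_u = posterior(prior_uniform, likelihood, amps)
    post_j = posterior(prior_jeffreys, likelihood, amps)

    # The posterior means differ slightly because of the prior choice.
    print(np.trapz(amps * post_u, amps), np.trapz(amps * post_j, amps))
    [/code]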
     
  12. Sep 12, 2015 #11

    bapowell

    Science Advisor

    Because science does not confirm a single model, we are always able to add parameters as long as they maintain or improve the fit (e.g., should we add a running of the spectral index to the model, or will a power-law spectrum do?). How to know when you've got the "best" set of parameters depends on a balance of goodness of fit and model complexity. Bayesian model selection is useful for identifying the model best supported by the data.
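
    As a crude illustration of that fit-versus-complexity trade-off, here is a sketch using the Bayesian information criterion (a rough stand-in for full Bayesian model selection); the chi-squared values and the number of data points are invented for the example, not taken from any Planck fit.
    [code]
    # Rough sketch: penalize extra parameters with the Bayesian information criterion.
    import math

    def bic(chi2, n_params, n_data):
        """BIC = chi^2 + k ln N; lower is better, so extra parameters must earn their keep."""
        return chi2 + n_params * math.log(n_data)

    n_data = 2500                             # pretend number of data points
    bic_power_law = bic(2520.0, 6, n_data)    # base 6-parameter model, power-law spectrum
    bic_running   = bic(2518.5, 7, n_data)    # add running of the spectral index

    # The running model fits slightly better, but the complexity penalty outweighs
    # the improvement, so the simpler power-law model is preferred here.
    print(bic_power_law, bic_running)
    [/code]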
     
  13. Sep 13, 2015 #12
    I'm sorry, but I can't agree. One of the biggest failures of modern cosmology is the notion that you can keep on adding parameters to a formula and that this gets you closer to a solution. The fact of the matter is that there are very few things in nature that can't be modeled with six parameters (I can't think of a single one, but I'm trying to give the benefit of the doubt). Adding parameters and then fine-tuning the initial values doesn't result in a deeper understanding of the universe. It may be an interesting branch of mathematics, but it's not science.
     
    Last edited: Sep 13, 2015
  14. Sep 13, 2015 #13

    Chalnoth

    Science Advisor

    This is a rather crass misunderstanding of the current state of cosmology. There are two components of modern cosmology that are usually considered doubtful: dark matter and dark energy.

    As for dark matter, there is today a wide body of independent evidence that corroborates its existence. This is a pretty good overview of the evidence:
    https://medium.com/starts-with-a-bang/five-reasons-we-think-dark-matter-exists-a122bd606ba8

    There's really no sense in doubting dark matter's existence these days, or in claiming it's an extra fitting parameter.

    Dark energy comes closer to this. But again, it's not nearly as bad a situation as you paint.

    First, the cosmological constant has been a component of General Relativity pretty much from the start. The way that General Relativity is derived, in fact, essentially requires the existence of the cosmological constant. Its value had long been assumed to be zero, because it has to take on a value smaller than about ##10^{-120}## in natural units in order for any gravitational collapse to occur. Theorists largely assumed that there must be some kind of symmetry that sets the cosmological constant to zero. However, no such symmetry has been found. Our theories, in other words, seem to be telling us that the cosmological constant probably exists, and therefore we really shouldn't have been all that surprised to see it.

    Furthermore, there is an independent check on the cosmological constant from observations of the CMB: if there is no cosmological constant, then large-scale gravitational potentials don't change over time. A small positive cosmological constant causes those gravitational potentials to decay slowly. So when a photon enters a gravitational potential well, it picks up energy. By the time the photon leaves the potential well, the well will have become more shallow, so the photon doesn't lose as much energy as it gained, giving it a little boost of energy. This effect has been observed in the CMB (it's known as the integrated Sachs-Wolfe effect).

    Finally, theorists have explored a huge variety of potential alternatives to both dark matter and dark energy, and so far haven't been able to come up with anything better. For example, some theorists have proposed that a type of modified gravity might explain the Bullet Cluster (discussed in the article linked above) without the need for dark matter, but in order to make the fit they had to assume a fourth type of neutrino. So they couldn't actually get rid of the dark matter: all their model did was make it so that we didn't need a dark matter particle quite as massive.
     
  15. Sep 13, 2015 #14

    bapowell

    Science Advisor

    Sounds like you didn't read my post. Nobody said that blindly adding parameters to a model is any way to do science. So what exactly don't you agree with?
     
  16. Sep 13, 2015 #15
     
  17. Sep 13, 2015 #16

    bapowell

    Science Advisor

    Are you saying that science has the ability to confirm hypotheses?
     
  18. Sep 13, 2015 #17
    Perhaps you can give me a science history lesson. When, in the history of science, has adding a parameter to a formula resulted in a model that stood the test of time? All of our greatest theories have come from a reduction, or simplification, of what came before. Occam's Razor is not a proof, but it is the best rule of thumb there is when predicting a theory's ability to handle new data. Other than Ptolemaic Astronomy, do you have an example where adding degrees of freedom in order to match the data has worked?
     
  19. Sep 13, 2015 #18

    bapowell

    Science Advisor

    Adding the spectral index as a parameter to describe the spectrum of primordial density perturbations resulted in a model that is significantly improved over the scale-invariant spectrum. Plus, we expect a close-but-not-quite scale-invariant spectrum from inflation, so this is not a blind profusion of parameters in a statistical model; it's actually telling us something about the underlying physics.

    I whole-heartedly agree that Occam's razor is an important component in constructing predictive, simple models. I say as much in my initial post, that the problem of constructing the "best" model given the data necessarily sets up a tension between goodness of fit and model complexity.
     
  20. Sep 13, 2015 #19

    atyy

    Science Advisor

    Do the uniform prior and the Jeffreys prior assign non-zero weight to the same possibilities? If they do, then (IIRC) it should make no difference given enough data, so in a sense the data can override the subjectivity of the prior.

    The data cannot override the subjectivity, however, if the priors differ in which possibilities they assign non-zero probability to - for example, string theory or possibilities as yet unimagined.
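
    A quick numerical check of the first point, building on the toy amplitude example in post #10 (all numbers invented): as the simulated data set grows, the posterior means under the uniform and Jeffreys priors converge.
    [code]
    # Posteriors under two priors converge as the (simulated) data set grows.
    import numpy as np

    rng = np.random.default_rng(0)
    true_amp, sigma = 3.0, 0.5
    amps = np.linspace(0.1, 10.0, 2000)

    for n in (1, 10, 1000):                           # number of independent measurements
        data = rng.normal(true_amp, sigma, size=n)
        loglike = (-0.5 * ((data[None, :] - amps[:, None]) / sigma) ** 2).sum(axis=1)
        like = np.exp(loglike - loglike.max())        # subtract max for numerical stability
        for label, prior in (("uniform", np.ones_like(amps)), ("Jeffreys", 1.0 / amps)):
            post = prior * like
            post /= np.trapz(post, amps)
            print(n, label, np.trapz(amps * post, amps))   # posterior mean
    [/code]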
     
  21. Sep 13, 2015 #20

    Chalnoth

    Science Advisor

    The only restriction is that the Jeffreys prior requires the parameter to be positive (it's uniform in the logarithm of the parameter). For the amplitude this is a perfectly sensible restriction.

    That's true.
     