
Standard candle - in question - affects distance estimates

  1. Apr 11, 2015 #1

    jim mcnamara


    Staff: Mentor

    http://iopscience.iop.org/0004-637X/803/1/20/
    is a link to the abstract of:
    THE CHANGING FRACTIONS OF TYPE IA SUPERNOVA NUV–OPTICAL SUBCLASSES WITH REDSHIFT
    Peter A. Milne, Ryan J. Foley, Peter J. Brown, and Gautham Narayan

    Type Ia supernovae have been treated as having a fixed peak brightness, which allows distance estimates of very distant objects. As I understand this paper, the assumption of uniform brightness -- the idea of a 'standard candle' -- may not hold as well as previously believed.

    If this is correct, distance estimates for objects at very large distances may be wrong -- specifically, overestimated. My take on this: models that depend on those distance estimates may have to change; for example, the accelerating expansion inferred from this kind of observation of the Universe could be affected.

    Does anyone familiar with this area have a clearer understanding? Garth started a thread on this topic in early February, for example.
    https://www.physicsforums.com/threads/type-ia-supernova-not-standard-candles-im-confused.795968/
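
    To make the dependence concrete, here is a minimal sketch (Python; the magnitudes are illustrative, not taken from the paper) of how the assumed absolute magnitude of a standard candle feeds straight into the inferred distance:

    [code]
    # Distance modulus: mu = m - M = 5*log10(d / 10 pc), so d = 10**(mu/5 + 1) pc.
    def inferred_distance_pc(apparent_mag, assumed_absolute_mag):
        """Distance (pc) implied by the distance modulus."""
        mu = apparent_mag - assumed_absolute_mag
        return 10 ** (mu / 5.0 + 1.0)

    m = 24.0  # hypothetical observed peak apparent magnitude
    d_standard = inferred_distance_pc(m, -19.3)  # canonical SN 1a peak magnitude
    d_fainter  = inferred_distance_pc(m, -18.8)  # if the event is really 0.5 mag fainter

    print("%.0f Mpc assuming M = -19.3" % (d_standard / 1e6))
    print("%.0f Mpc assuming M = -18.8" % (d_fainter / 1e6))
    print("distance overestimated by a factor of %.2f" % (d_standard / d_fainter))
    [/code]

    A half-magnitude error in the assumed peak brightness shifts every inferred distance by about 26%, which is why the standard-candle assumption matters so much.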
     
  3. Apr 12, 2015 #2

    Chronos

    Science Advisor
    Gold Member

    We have good evidence that SNe 1a are reliable distance indicators when they have the right spectral properties.
     
  4. Apr 12, 2015 #3

    Chalnoth

    Science Advisor

    Also, cosmologists don't just rely on one type of data. There's lots of corroborating data for the current model: baryon acoustic oscillations, cluster counts, CMB data, nearby measurements of the current expansion rate, and more. It's really hard for there to be a serious problem with the supernova data, because then there'd have to be different problems with all of the other data that has been collected.
     
  5. Apr 13, 2015 #4
    I wish I could say I am surprised, but this has been a long time coming. It began with the discovery of the first super-Chandrasekhar Type 1a SN in 2003, and was further compounded with the discovery of sub-Chandrasekhar Type 1a SNe. These sub-Chandrasekhar events were so numerous that a new subclass was created for them in 2013: Type 1ax SNe. The authors further estimated that between 18% and 48% of all Type 1a SNe discovered prior to 2013 were misclassified and should be Type 1ax SNe instead.

    Type 1ax Supernovae: A New Class of Stellar Explosion - Astrophysical Journal, 767:57 (28pp), April 10, 2013 (free issue)

    There are certainly distinctions between Type 1a and Type 1ax SNe, but as Chronos pointed out, we need a good spectral analysis over time. The most notable distinction is that Type 1ax SNe have an Mv peak between -14.2 and -18.9, well below the Mv peak of -19.3 of Type 1a SNe.

    This really only affects objects further away than about one megaparsec, because at closer distances we have other means of determining cosmological distances, such as Cepheid variables. However, it does call into question not just the rate at which the universe is expanding/accelerating, but also the age of the universe.

    Type 1a Supernovae: Why Our Standard Candle Isn't Really Standard - National Geographic, August 28, 2014
     
    Last edited: Apr 13, 2015
  6. Apr 13, 2015 #5

    Garth

    Science Advisor
    Gold Member

    As Jim pointed out in his OP, I have been arguing for a long time that the simple assumption that Type 1a SNe are standard candles over cosmological time scales (z = 1 and beyond) is naive.

    The question should be, "How does the luminosity of SNe 1a evolve over such time scales?" If the answer is that it is constant, then fine, but that answer is by no means certain.

    As I said in that thread, the conclusion drawn from the SNe 1a data (the amount of cosmological acceleration) depends on the SNe actually being standard candles, and there are two issues with that assumption.

    There are now thought to be three main species of detonating white dwarfs:
    Single Degenerate (SD) systems - A white dwarf accretes matter from a companion red giant that has expanded to fill its Roche lobe, until the white dwarf approaches the Chandrasekhar limit of about [itex]1.44 M_\odot[/itex] and detonates. As they all detonate at ~[itex]1.4 M_\odot[/itex], all SNe 1a are meant to have the same intrinsic luminosity.

    However there are also
    Double Degenerate (DD) systems - where a binary WD or WD/neutron star system spirals together through the emission of gravitational waves and detonates - but these would seem to have roughly double the mass of the SD SNe 1a system, and hence perhaps twice the luminosity.

    And now we have about half of all SNe 1a being
    Contaminated White Dwarf (CWD) systems - with detonation at around [itex]0.85 - 1.2 M_\odot[/itex], depending on the amount of hydrogen contamination. As only a tiny amount of hydrogen (concentrations from [itex]10^{-16}[/itex] to [itex]10^{-21}[/itex]) is required, they would still be classified as SNe 1a from their spectra. With less than the Chandrasekhar mass, they would be less luminous than the SDs.

    The problem over cosmological time scales is that the ratio of the three types of detonation (SD : DD : CWD) within any particular set of observations is likely to change because of the different lifetimes the three systems require.

    Chalnoth's point that other types of data corroborate the result is an important one, but it could be misleading - if the conclusion from the SNe 1a data is wrong, then it is wrong, and that might well call into question the interpretation put upon the other data sets.

    Garth
     
    Last edited: Apr 13, 2015
  7. Apr 13, 2015 #6
    There would appear to be two different methods of deflagration for the single degenerate systems:
    • Sub-Chandrasekhar SNe, or Type 1ax, which have an Mv peak between -14.2 and -18.9; and
    • Chandrasekhar SNe, or Type 1a, which yield an Mv peak of -19.3.
    It would seem that the most likely culprit for the super-Chandrasekhar Type 1a SNe would be the double degenerate systems, which produce an Mv peak brighter than -19.3, though it has been theorized that strong magnetization can allow a white dwarf in a single degenerate system to exceed the Chandrasekhar limit.

    New Mass Limit for White Dwarfs: Super-Chandrasekhar Type Ia Supernova as a New Standard Candle - Physical Review Letters 110, 071102, February 11, 2013 (paid subscription)
    arXiv : 1301.5965 - reprint
     
  8. Apr 13, 2015 #7

    Garth

    Science Advisor
    Gold Member

    Yes, thank you |Glitch| - I was working from the theory behind what the various species of SNe 1a might be; your post has identified the observations of the different types of SNe 1a, and indeed they seem to fall easily into the three theoretical categories.

    Furthermore, you have given their different absolute magnitudes derived from those observations:

    • Single Degenerate - Mv peak of -19.3;
    • Contaminated Degenerate (Type 1ax) - Mv peak between -14.2 and -18.9; and
    • Super-Chandrasekhar Type 1a (New mass limit for white dwarfs: super-Chandrasekhar type Ia supernova as a new standard candle) - which might be strongly magnetized white dwarfs, as in that paper, or the Double Degenerate model. The strongly magnetized model would also explain those exhibiting low kinetic energy.

    A good paper on the subject: Progenitors of type Ia supernovae (319 references!). The variety of models and observations it describes highlights the problem of treating SNe 1a as standard candles.

    Again, the question with this mix of different types is, "How does the ratio of the different species (and hence the absolute magnitudes of SNe 1a data sets) evolve over cosmological time scales?"
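
    As a toy illustration of that question (the fractions and magnitudes below are invented for the sketch, not fitted values), one can see how the average distance bias of a sample drifts as the faint fraction grows:

    [code]
    # Toy model: every event is fitted with the canonical M = -19.3, but the
    # sample is really a mix of sub-types with different true peak magnitudes.
    M_ASSUMED = -19.3

    def mean_distance_bias(population):
        """population: list of (fraction, true_absolute_magnitude).
        Returns the average factor by which distances are overestimated."""
        total = 0.0
        for fraction, m_true in population:
            # each event's inferred distance is off by 10**((m_true - M_ASSUMED)/5)
            total += fraction * 10 ** ((m_true - M_ASSUMED) / 5.0)
        return total

    nearby  = [(0.9, -19.3), (0.1, -17.0)]  # mostly 'classical' bright events
    distant = [(0.6, -19.3), (0.4, -17.0)]  # hypothetical larger faint fraction

    print("nearby sample bias factor :", round(mean_distance_bias(nearby), 2))
    print("distant sample bias factor:", round(mean_distance_bias(distant), 2))
    [/code]

    If the faint fraction really does grow with lookback time, the bias factor itself becomes a function of redshift, which is precisely what makes a secular drift in the species ratio so dangerous for cosmology.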

    Garth
     
    Last edited: Apr 13, 2015
  9. Apr 17, 2015 #8

    Ken G

    Gold Member

    It is important to stress that it is indeed the ratio of the different subtypes that must evolve with age to cause the kind of problem being reported, not just the existence of different subtypes. Also, these different subtypes must be impossible to tell apart by their light curves, or we have no problem. The use of type Ia as standard candles does not rely on them all being the same explosion; that isn't even true of Cepheids -- they are not all the same stars, some are bigger than others. Yet they make a good standard candle unless we confuse the low-metallicity ones with the higher-metallicity ones -- all we have is their pulsation period, so lumping luminosity variations in together makes us subject to Malmquist bias (as happened to Hubble). But that always causes you to underestimate the distance of the farther ones, so that is not what is going on for the type Ia SNe, for which the claim is that we are overestimating the distance of the farther ones.

    In other words, the fundamental assumption in the "standard candle" idea is that we have some means of sorting the objects such that we can create subgroups that have the same luminosity -- those subgroups, post sorting, are the standard candles we use. With Cepheids, stars with the same pulsation period form the standard-candle subgroups. With type Ia SNe, it is SNe with the same light curves (after correcting for cosmological time dilation, of course, but that's easy because the redshift is measured). So having multiple classes of SN wouldn't create any problems with standard candles, unless the classes had similar light curves that could be mistaken for each other. I believe that is the issue here -- not that there is more than one way to blow up a white dwarf, but that some of the different ways of doing it look the same, and worse, the proportion of each has a systematic dependence on the age of the universe. This could in principle allow a kind of reverse Malmquist bias. The Malmquist bias is this: if you have a fixed luminosity variation within your standard-candle subgroups, you will tend to see a higher proportion of the intrinsically more luminous ones at large distance (since you won't see the dim ones), and this causes you to underestimate the distance to the more faraway ones.

    The effect they are claiming is the opposite -- we are overestimating the distance to the SNe that happened earlier in the age of the universe, because at that age there simply weren't the more luminous ones we assume we are seeing when we group by similar light curves. So this is not an effect stemming from a spread in luminosities; it is an effect stemming from a secular trend in those luminosities. A spread just gives Malmquist bias, which would have the opposite effect to the one they are claiming.
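
    Here is a rough Monte Carlo sketch of the two effects being distinguished (all numbers invented for illustration):

    [code]
    # (1) Malmquist bias: a luminosity spread plus a flux limit means the distant
    #     sample keeps only the brighter events -> distances underestimated.
    # (2) Secular evolution: the early-universe population is intrinsically
    #     fainter -> distances overestimated.
    import random

    M_ASSUMED = -19.3

    def distance_ratio(m_true_abs):
        # inferred distance / true distance for one event fitted with M_ASSUMED
        return 10 ** ((m_true_abs - M_ASSUMED) / 5.0)

    random.seed(1)

    # (1) 0.3 mag spread; suppose the flux limit keeps only the brighter half
    sample = [random.gauss(-19.3, 0.3) for _ in range(100000)]
    bright = [M for M in sample if M < -19.3]  # more negative = brighter
    malmquist = sum(distance_ratio(M) for M in bright) / len(bright)

    # (2) no spread, but the high-z population is 0.4 mag fainter on average
    secular = distance_ratio(-18.9)

    print("flux-limited (Malmquist) factor:", round(malmquist, 3))  # below 1
    print("secularly fainter population   :", round(secular, 3))    # above 1
    [/code]

    The two biases really do pull in opposite directions, which is why a secular trend, not a mere spread, is needed to produce the claimed overestimates.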
     
    Last edited: Apr 17, 2015
  10. Apr 17, 2015 #9
    Type 1ax and Type 1a SNe have very similar light curves, but Type 1ax SNe can be 100 times less luminous than Type 1a SNe. If the light curve were the only information available, then Type 1ax SNe would indeed cause an overestimation of the distance -- which appears to have been the case prior to 2013, when anywhere from 18% to 48% of events classified as Type 1a SNe may actually have been Type 1ax. It is indeed the spread in luminosity between Type 1ax and Type 1a that is causing this problem. The assumption has been that all Type 1a SNe have an Mv peak of -19.3, hence a "standard candle." If that same assumption is applied to Type 1ax SNe, then all Type 1ax SNe will appear to be much further away than they actually are.
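
    (A quick sanity check on that factor: a flux ratio of 100 corresponds to [itex]\Delta m = 2.5\log_{10}100 = 5[/itex] magnitudes, and a 5 magnitude error in the assumed peak brightness inflates the inferred distance by [itex]10^{5/5} = 10[/itex], so a worst-case 1ax interloper would be placed ten times too far away.)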

    Besides measuring the light curve and redshift of the SNe, it is also critical to obtain a good spectral analysis over time. A spectral analysis would clearly distinguish Type 1ax from Type 1a SNe, where the light curve and redshift alone may not. Type 1a SNe more than a megaparsec away that were recorded prior to 2013, for which we have no spectral analysis over time, should not be used to determine cosmological distances.
     
    Last edited: Apr 17, 2015
  11. Apr 17, 2015 #10

    Ken G

    Gold Member

    Yes, that's a very sticky problem indeed; we'd like to be able to rely on the light curve alone. But as with higher and lower metallicity Cepheids, we can use auxiliary spectroscopic information to hopefully re-establish the standard-candle sorting we need. If we do that, and it ends up changing the inferences about dark energy, that's going to be a big problem indeed for "precision cosmology."
    Agreed, so the big question now is, what is the dark energy we infer from re-analysis of the data?
     
  12. Apr 17, 2015 #11
    The nature of dark energy has not changed due to the possible overestimation of cosmological distances, but there may be less dark energy than originally estimated, just as the expansion of the universe may not be accelerating quite as fast as originally calculated. We have enough good data to know both that dark energy must exist and that the universe's expansion is accelerating. However, once we eliminate the incomplete data, we should be able to obtain a more accurate measurement of both.
     
    Last edited: Apr 17, 2015
  13. Apr 17, 2015 #12

    Ken G

    Gold Member

    The problem is, if that's all that happens, the picture will lose consistency. If you reduce dark energy, you have no way to increase dark matter, as it has its own constraints. But the total density needs to be critical (flat) if GR is right, so we would need something else to fill in the missing gap. Astronomers did not have a great deal of fun trying to convince people there are two mysterious elements in the dynamics of the universe -- they are going to have awful pains trying to say there need to be three. So let's hope the reanalysis of the data does not lead to much.
     
  14. Apr 17, 2015 #13
    By reducing the amount of dark energy, all you are really saying is that the universe is not accelerating quite as fast as originally estimated, which would also imply that the universe may not be as old as 13.78 billion years. If we assume the amount of dark matter and all baryonic matter in the universe is static, then it must be dark energy that is increasing over time; if everything, including dark energy, were static, then the universe could not be accelerating in its expansion. The "Cosmological Constant" would then not really be a fixed constant, but rather a value that constantly increases with time.
     
  15. Apr 17, 2015 #14

    Ken G

    Gold Member

    What HST measured very accurately is the current value of the Hubble parameter. That parameter determines the critical density, and the universe must have the critical density to be flat (which seems to be the upshot of the WMAP results: we do have critical density). None of those things rely at all on dark matter or type Ia SN observations; we have the result that there must be the critical density. Then we go and look at the density we actually have, via matter and dark matter, and it's only 30% of critical. So without even looking at type Ia observations, we already know we need 70% of the critical density to come from some other source.

    Enter our interpretations of type Ia observations. I think what you are saying is that if we reinterpret the type Ia data, we may find that the expansion is not accelerating as much. That could well be true, but what I'm saying is that we would not be happy to resolve that by simply reducing the amount of dark energy, because we still need 70% of the critical density to come from somewhere. So we cannot reduce the dark energy without postulating something "even darker," and nobody is going to like that. Alternatively, we could try keeping the 70% of critical density but playing with the equation of state so that it is not a cosmological constant any more. That isn't going to be too popular either, because the constant-density-that-is-a-law-of-physics approach that is the cosmological constant seemed like the simplest solution -- we'll instead be facing the very real prospect that GR is wrong, or else we'll have to cook up some arbitrary equation of state for the dark energy to fit the type Ia observations.
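
    A back-of-envelope version of that budget argument, using rounded constants and an illustrative H0 of 70 km/s/Mpc (not a value quoted in this thread):

    [code]
    # Critical density: rho_c = 3 H0^2 / (8 pi G), with H0 converted to 1/s.
    import math

    H0 = 70.0 / 3.0857e19          # 70 km/s/Mpc expressed in 1/s
    G = 6.674e-11                  # gravitational constant, m^3 kg^-1 s^-2

    rho_c = 3 * H0 ** 2 / (8 * math.pi * G)
    print("critical density : %.2e kg/m^3" % rho_c)        # ~9e-27 kg/m^3

    # Matter (baryonic + dark) supplies only ~30% of this, so flatness demands
    # the remaining ~70% come from somewhere -- the slot dark energy fills.
    print("unaccounted ~70%%: %.2e kg/m^3" % (0.7 * rho_c))
    [/code]

    The point of the sketch is that the 70% gap follows from the Hubble parameter and flatness alone; it exists whether or not the type Ia interpretation changes.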

    So none of these options are going to be fun; it's not just a matter of changing the age of the universe and everything is fine. We either need to replace GR, or we need 70% of the critical density to come from something, and we liked it a lot better when that 70% looked like it was coming from a cosmological constant (or, equivalently, a constant energy density in vacuum). Maybe some clever theorist will come up with a new type of gravity that fits this result nicely to replace GR in a way that doesn't feel retrofitted simply to fit this data point, or else find an equation of state for dark energy that makes perfect sense and fits the data. Otherwise, cosmology is going to start looking like a Rube Goldberg mechanism instead of the "precision cosmology" we thought we had.
    Yes, we'd need a new equation of state for dark energy, not less dark energy. People tend to think of dark energy as bad, because it's mysterious and we don't want it, but just having less of it wouldn't be better, and it wouldn't even work -- what we'd need is the same amount of it but with an even more arbitrary equation of state, and that would make the situation much worse than it already is.
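
    (For reference, the standard result behind this: the acceleration equation is [itex]\ddot{a}/a = -\frac{4\pi G}{3}\left(\rho + 3p\right)[/itex], and a cosmological constant has equation of state [itex]p = -\rho_\Lambda[/itex] in units with c = 1, so [itex]\rho + 3p = -2\rho_\Lambda < 0[/itex] and hence [itex]\ddot{a} > 0[/itex]. A constant dark energy density therefore accelerates the expansion on its own; nothing about it needs to grow over time.)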
     
  16. Apr 18, 2015 #15
    I had not considered that perspective, but after your detailed explanation I realize that you are absolutely right. Critical density has to be maintained for GR to be valid, and since all the matter (dark or otherwise) is fixed, there must also be a finite amount of dark energy in order to match our observations. I was under the mistaken impression that in order for there to be continuous acceleration, the amount of dark energy would have to increase proportionally. Since the amount of dark energy in the universe is finite, the only thing that could explain any amount of acceleration is that the nature of dark energy must somehow change. A finite amount of dark energy could easily explain expansion if there is enough to overwhelm gravity, or even contraction if there isn't enough, but it would not explain acceleration. It would be like Earth's gravity getting stronger and stronger even though its mass, radius, and density haven't changed. Somehow dark energy's "repulsive force" would have to continually increase over time, without increasing the amount of dark energy, in order to explain an accelerating universe. But that can't be right either, since it would violate the law of conservation of energy. :confused:

    I do not think dark energy is good or bad; it just is. We are doing our best to understand the nature of dark energy by observing its effects on the universe. It seems to me that dark energy is contradictory: I cannot explain the acceleration of the universe without violating some well-established law of physics, and I find that frustrating but not unexpected. There is so much that I don't know that if I let it get to me I would be a complete wreck by now.
     
  17. Apr 18, 2015 #16

    mfb


    Staff: Mentor

    I think it is too early to speculate about multiple forms of dark energy as long as it is unclear how strong the effect of the supernova issue is.
    I'm sure there are tons of arXiv articles being written about it right now.
     
  18. Apr 18, 2015 #17

    Garth

    Science Advisor
    Gold Member

    OK - now that we are talking about the consequences of SNe 1a not being standard candles, because of an evolution in the ratio of the three species (SD, DD, and Type 1ax), I will add my own take on the matter.

    Ken G's point is a reiteration of Chalnoth's #3.

    My response #5 still stands: much is made of the concordance between the various data sets in the standard [itex]\Lambda[/itex]CDM model.

    Let us look at those other data sets, in particular the CMB and spatial flatness. I am now going to introduce an alternative to the standard model, in order to show that it may indeed be possible to interpret those data sets in other ways.

    One alternative is the linearly expanding model proposed by various authors under different guises, such as: http://arxiv.org/abs/astro-ph/0306448, Introducing the Dirac-Milne universe, and The Rh = ct universe without inflation.
    Such a model expands like the empty Milne universe and requires either an EoS of [itex]\omega = -\frac{1}{3}[/itex] or repulsive antimatter, as in the Dirac-Milne theory, in order to reproduce the Milne expansion without the universe being empty.

    Immediate advantages of such a model are that it requires no inflation, it resolves any age problem in the early universe, and it readily explains why the age of the universe happens to equal the Hubble time.

    In the standard model, the angular size of the first peak in the CMB power spectrum agrees with the angular size of sound-speed-limited fluctuations, magnified by inflation, at t ~ 380,000 years, if space is flat.

    In the R=ct model, the CMB is emitted at t = 12.5 Myr, nearly 40 times later than in the standard model, and the sound-horizon-limited fluctuations are correspondingly larger; however, the hyperbolic space of the Milne model makes distant objects look smaller than in flat space, exactly compensating for the enlarged size.

    The same shrinking of angular measurement by hyperbolic space also applies to the 'standard ruler' of baryon acoustic oscillations - they are larger than in the standard model but have the same angular diameter - and to the baryon-loading second peak of the CMB power spectrum.

    There is thus a degeneracy in the CMB data: it is consistent with both the flat geometry of space in the [itex]\Lambda[/itex]CDM model and the hyperbolic geometry of space in the Milne model.

    BB nucleosynthesis in the R=ct universe has been explored by several authors, such as http://arxiv.org/abs/nucl-th/9902022
    In order to produce the right amount of helium, the baryon density has to be increased to the point where it may explain DM; of course, that leaves the question of where that missing baryonic DM is now hiding. My guess would be in IMBHs and inter-cluster neutral hydrogen. There is also a deuterium problem with the model, but it relieves the [itex]\Lambda[/itex]CDM lithium problem.

    Now to the subject of this thread: are SNe 1a standard candles or not?

    Perlmutter et al., in their seminal paper
    MEASUREMENTS OF OMEGA AND LAMBDA FROM 42 HIGH-REDSHIFT SUPERNOVAE (figures 1 & 2), point out that the ([itex]\Omega_M = 0[/itex], [itex]\Omega_\Lambda = 0[/itex]) Milne model fits the data as well as the standard model.

    [EDIT - see the diagram in Buzz Bloom's post in Fitting a model to astronomical data: the purple solid line is the flat dark-energy model and the green solid line the 'empty' model.]

    That was out to z ~ 1. We are always told that the linearly expanding model is ruled out by subsequent studies of SNe 1a beyond z = 1, where the SNe 1a become brighter again, as expected in the standard model but not in the linearly expanding one.
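
    For anyone who wants to see that comparison numerically, here is a hedged sketch (Python; [itex]\Omega_M = 0.3[/itex], [itex]\Omega_\Lambda = 0.7[/itex], and H0 = 70 km/s/Mpc are illustrative parameters, not fits) of the distance-modulus difference between flat [itex]\Lambda[/itex]CDM and the Milne model:

    [code]
    # Compare distance moduli: flat LambdaCDM vs. the empty (Milne) model,
    # for which d_L = (c/H0) * z * (1 + z/2).
    import math
    from scipy.integrate import quad

    C = 299792.458                  # speed of light, km/s
    H0 = 70.0                       # illustrative Hubble parameter, km/s/Mpc
    D_H = C / H0                    # Hubble distance, Mpc

    def dl_lcdm(z, om=0.3, ol=0.7):
        """Luminosity distance (Mpc) in flat LambdaCDM."""
        dc, _ = quad(lambda zp: 1.0 / math.sqrt(om * (1 + zp) ** 3 + ol), 0.0, z)
        return (1 + z) * D_H * dc

    def dl_milne(z):
        """Luminosity distance (Mpc) in the empty Milne model."""
        return D_H * z * (1 + z / 2.0)

    def mu(d_mpc):
        return 5 * math.log10(d_mpc * 1e6 / 10.0)   # distance modulus

    for z in (0.5, 1.0, 1.5, 2.0):
        print("z = %.1f  mu_LCDM - mu_Milne = %+.3f" % (z, mu(dl_lcdm(z)) - mu(dl_milne(z))))
    [/code]

    With these parameters the two models differ by only about a tenth of a magnitude out to z ~ 1, and the sign of the difference flips somewhere beyond z ~ 1 -- exactly the regime where the standard-candle assumption is doing the most work.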

    However, if the SNe 1a are not standard candles - particularly at high z, where a secular evolution of the species ratio may kick in - then that conclusion may no longer hold, and alternatives such as the R=ct model ought to be re-examined.

    Just my penny's worth...

    Garth
     
    Last edited by a moderator: May 7, 2017
  19. Apr 18, 2015 #18

    Ken G

    Gold Member

    I agree - I confess I was speaking rather colloquially. The truth is, we will eventually be stuck with whatever the observations tell us, but it would sure be nice if it all had a simple explanation rather than a zoo of new and mysterious effects! I suppose some scientists are hoping for the zoo - it will give them more to do - but it's hard to keep a straight face while explaining how well we understand our universe with all these mysterious unknowns floating around!
    I think the fact is, there are no well-established laws of physics when it comes to cosmology; this is truly a new frontier.
     
  20. Apr 18, 2015 #19

    Ken G

    Gold Member

    Sure, but if it does turn out that no changes to the interpretation of the type Ia SNe are needed, then the whole matter is a tempest in a teacup. The cited article doesn't claim a "significant bias" in the parameters, but it certainly raises the prospect that there could be one. If there isn't, they are coming close to crying wolf.
     
  21. Apr 18, 2015 #20

    Ken G

    Gold Member

    That is certainly an interesting thing to point out - it raises the possibility that a simple explanation may yet be possible if it turns out that the Ia data needs significant reinterpretation.
     