Standard candle - in question - affects distance estimates

In summary, the paper discusses how the assumption of a fixed brightness for Type Ia supernovae may not hold, despite its use as a standard candle for estimating the distances of very remote objects. The discovery of sub-Chandrasekhar Type Ia supernovae, and the estimate that a significant fraction of previously classified Type Ia supernovae may actually belong to the newer Type Iax class, raise questions about the accuracy of distance estimates for objects beyond about one megaparsec. This could affect models that rely on those distance estimates, such as the accelerating expansion of the universe. The discussion also covers the need for further analysis of how the luminosity of Type Ia supernovae evolves over cosmological time scales.
  • #1
jim mcnamara
Mentor
http://iopscience.iop.org/0004-637X/803/1/20/
is a link to the abstract of:
THE CHANGING FRACTIONS OF TYPE IA SUPERNOVA NUV–OPTICAL SUBCLASSES WITH REDSHIFT
Peter A. Milne, Ryan J. Foley, Peter J. Brown, and Gautham Narayan

Type Ia supernovae were long taken to have a fixed peak brightness, which allowed distance estimates of very distant objects. According to my understanding of this paper, that assumption of uniform brightness, the idea of a 'standard candle', may not hold as well as previously believed.

If this is correct, it means distance estimates for objects at very large distances may be incorrect, and specifically that the distances may be overestimated. My take on this: models dependent on those distance estimates may have to change; for example, the accelerating expansion of the Universe inferred from this kind of observation could be affected.
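
For concreteness, here is a minimal sketch (in Python) of how the standard-candle argument works and how an error in the assumed peak brightness propagates to the inferred distance. The observed magnitude and the 0.5 mag offset below are made-up numbers for illustration; only M = -19.3 is the commonly quoted fiducial value for normal SNe Ia.
[code]
# Minimal sketch: how the inferred distance depends on the assumed absolute magnitude.
# The fiducial M = -19.3 is the commonly quoted peak M_V for normal SNe Ia; the observed
# magnitude and the 0.5 mag offset are purely illustrative, not taken from the paper.

def luminosity_distance_pc(apparent_mag, absolute_mag):
    """Invert the distance modulus m - M = 5*log10(d / 10 pc)."""
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

m_obs = 24.0                                    # hypothetical observed peak apparent magnitude
d_std = luminosity_distance_pc(m_obs, -19.3)    # assuming the standard-candle value
d_dim = luminosity_distance_pc(m_obs, -18.8)    # if the SN is intrinsically 0.5 mag fainter

print(f"standard-candle distance: {d_std/1e6:.0f} Mpc")
print(f"true distance if fainter: {d_dim/1e6:.0f} Mpc  (ratio {d_std/d_dim:.2f})")
[/code]
If the true peak brightness is fainter than assumed, the inferred distance comes out too large by a factor 10^(ΔM/5), which is exactly the sense of overestimation discussed here.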

Does anyone familiar with this area have a clearer understanding? Garth started a thread on this topic in early February, for example.
https://www.physicsforums.com/threads/type-ia-supernova-not-standard-candles-im-confused.795968/
 
  • #2
We have good evidence that SNe Ia are reliable distance indicators when they have the right spectral properties.
 
  • #3
Also, cosmologists don't just rely on one type of data. There's lots of corroborating data for the current model: baryon acoustic oscillations, cluster counts, CMB data, nearby measurements of the current expansion rate, and more. It's really hard for there to be a serious problem with the supernova data, because then there'd have to be different problems with all of the other data that has been collected.
 
  • #4
I wish I could say I am surprised, but this has been a long time coming. It began with the discovery of the first super-Chandrasekhar Type 1a SN in 2003, and was further compounded by the discovery of sub-Chandrasekhar Type 1a SNe. These sub-Chandrasekhar events were numerous enough that a new class was created for them in 2013: Type 1ax SNe. The authors further estimated that between 18% and 48% of all Type 1a SNe discovered prior to 2013 had been misclassified, and should be Type 1ax SNe instead.

We estimate that in a given volume there are 31 (+17/−13) SNe Iax for every 100 SNe Ia, and for every 1 M☉ of iron generated by SNe Ia at z = 0, SNe Iax generate ∼0.036 M☉.
Type Iax Supernovae: A New Class of Stellar Explosion - Astrophysical Journal, 767:57 (28pp), April 10, 2013 (free issue)

There are certainly distinctions between Type 1a and Type 1ax SNe, but as Chronos pointed out, we need a good spectral analysis over time. The most notable distinction is that Type 1ax SNe have an Mv peak between -14.2 and -18.9, well below the Mv peak of -19.3 of Type 1a SNe.

This really only affects objects farther away than about one megaparsec, because at closer distances we have other means of determining cosmological distances, such as Cepheid variables. However, it does call into question not just the rate at which the universe is expanding/accelerating, but also the age of the universe.

Type 1a Supernovae: Why Our Standard Candle Isn't Really Standard - National Geographic, August 28, 2014
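
To put rough numbers on the Mv spread quoted above: a sketch of the distance overestimate you get if a fainter Iax-like event is treated as a normal Mv = -19.3 event. The specific values fed in are just points within the quoted range, chosen for illustration.
[code]
# Worked example of the M_V spread quoted above: if a Type Iax event (M_V between
# roughly -14.2 and -18.9) is treated as a normal Ia with M_V = -19.3, the inferred
# distance is too large by a factor 10**(dM/5). The sample values are illustrative.

def distance_overestimate_factor(assumed_M, true_M):
    dM = true_M - assumed_M          # how much fainter the object really is (mag)
    return 10 ** (dM / 5)

for true_M in (-18.9, -17.8, -14.2):
    f = distance_overestimate_factor(-19.3, true_M)
    print(f"true M_V = {true_M:5.1f}: distance overestimated by x{f:.2f}")
[/code]
Even a 1.5 mag misclassification inflates the inferred distance by about a factor of two, which is why the fraction of misclassified events matters so much.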
 
  • #5
As Jim pointed out in his OP, I have been arguing for a long time that the simple assumption that Type 1a SNe are standard candles over cosmological time scales (z=1 and beyond) is naive.

The question should be, "How does the luminosity of SNe 1a evolve over such time scales?" If the answer is that it is constant, then fine, but that answer is by no means certain.

As I said in here there are two issues because the conclusion (the amount of cosmological acceleration) drawn from the SNe 1a data depends on:

  • In the far universe: the distance calculated from the Absolute Magnitudes of those very distant SNe Ia being deduced correctly from their apparent magnitudes. This depends on such factors as: the amount of dust extinction, the modelling of their luminosity curves with the delay in detection given their faintness, the correct cosmological geometry, the correct application of the Phillips relationship coupled with cosmological time dilation, any correction because of a secular evolution of metallicity, selection effects (not detecting the fainter members), and probably a few more!
  • In the near universe: the accurate calibration of the Absolute Magnitude of these supernovae to be used as Standard Candles. Confusion between different classes of these supernovae, and a possible evolution of the ratio of these classes in any set of distant SNe Ia, will introduce errors of Absolute Magnitude into the 'Gold set'.
There are now thought to be three main species of detonating white dwarfs:
Single Degenerate (SD) systems - A white dwarf accretes matter from a companion red giant that has expanded to fill its Roche lobe, until the white dwarf approaches the Chandrasekhar limit of about 1.44 M☉ and detonates. As they all detonate at ~1.4 M☉, all such SNe 1a are meant to have the same intrinsic luminosity.

However there are also
Double Degenerate (DD) systems - where a binary WD or WD/neutron star system spirals inward through the emission of gravitational waves and detonates - but these would seem to have double or more the mass of the SD SNe 1a system and hence perhaps twice the luminosity.

And now we have about half of all SNe 1a being
Contaminated White Dwarf (CWD) systems - with detonation at around 0.85-1.2 M☉ depending on the amount of hydrogen contamination. As only a tiny amount of hydrogen (concentrations from 10^-16 to 10^-21) is required, they would still be classified as SNe 1a from their spectra. With less than the Chandrasekhar mass they would be less luminous than the SDs.

The problem over cosmological time scales is that the ratio of the three types of detonation (SD : DD : CWD) within any particular set of observations is likely to change because of the different lifetimes the three systems require.
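
As a toy illustration of that ratio-evolution problem (not a model of any real data set), one can see how a shift in the SD : DD : CWD mix moves the effective mean Absolute Magnitude, and hence the inferred distances. The magnitudes and mixing fractions below are assumptions chosen only to show the direction of the effect.
[code]
# Toy illustration of the ratio-evolution problem: if the SD : DD : CWD mix in a
# high-z sample differs from the low-z calibration sample, the effective mean absolute
# magnitude shifts, and so does the inferred distance. Magnitudes and fractions are
# assumptions for illustration only, not fitted values.

M_species = {"SD": -19.3, "DD": -19.8, "CWD": -18.5}

def mean_M(fractions):
    return sum(frac * M_species[s] for s, frac in fractions.items())

local_mix   = {"SD": 0.6, "DD": 0.2, "CWD": 0.2}   # hypothetical low-z calibration mix
distant_mix = {"SD": 0.4, "DD": 0.1, "CWD": 0.5}   # hypothetical high-z mix

dM = mean_M(distant_mix) - mean_M(local_mix)       # magnitude offset caused by the mix change
print(f"mean M shift: {dM:+.2f} mag -> distance bias factor {10**(dM/5):.2f}")
[/code]
With these made-up numbers the distant population is intrinsically a few tenths of a magnitude fainter than the local calibration assumes, so its distances come out roughly 10-15% too large.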

Chalnoth's point that other types of data corroborate the result is an important one but it could be misleading - if the conclusion from the SNe 1a data is wrong then it is wrong, and might well call into question the interpretation put upon the other data sets.

Garth
 
  • #6
Garth said:
As Jim pointed out in his OP I have been arguing for a long time that the simple assumption that Type 1a SNe are standard candles over cosmological time scales (z=1 and beyond) is naive.
...
There are now thought to be three main species of detonating white dwarfs: Single Degenerate (SD), Double Degenerate (DD), and Contaminated White Dwarf (CWD) systems. The problem over cosmological time scales is that the ratio of the three types of detonation (SD : DD : CWD) within any particular set of observations is likely to change because of the different lifetimes the three systems require.

Garth
There would appear to be two different methods of deflagration for the single degenerate systems:
  • Sub-Chandrasekhar SNe, or Type 1ax, which have an Mv peak between -14.2 and -18.9; and
  • Chandrasekhar SNe, or Type 1a, which yield an Mv peak of -19.3.
It would seem that the most likely culprit for the super-Chandrasekhar Type 1a SNe would be the double degenerate systems, which produce an Mv peak brighter than -19.3, although it has also been theorized that a strongly magnetized white dwarf in a single degenerate system can exceed the Chandrasekhar limit.

We show that strongly magnetized white dwarfs not only can violate the Chandrasekhar mass limit significantly, but exhibit a different mass limit. We establish from a foundational level that the generic mass limit of white dwarfs is 2.58 solar mass.
New Mass Limit for White Dwarfs: Super-Chandrasekhar Type Ia Supernova as a New Standard Candle - Physical Review Letters 110, 071102, February 11, 2013 (paid subscription)
arXiv : 1301.5965 - reprint
 
  • #7
Yes, thank you |Glitch|. I was working from the theory behind what the various species of SNe 1a might be; your post has identified the observations of the different types of SNe 1a, and indeed they seem to fall easily into the three theoretical categories.

Furthermore you have given their different Absolute Magnitudes derived from those observations.

  • Single Degenerate: Mv peak of -19.3,
  • Contaminated Degenerate (Type 1ax): Mv peak between -14.2 and -18.9,
  • and Super-Chandrasekhar Type 1a (New mass limit for white dwarfs: super-Chandrasekhar type Ia supernova as a new standard candle), which might be strongly magnetized white dwarfs, as in that paper, or the Double Degenerate model. The strongly magnetized model would also explain those exhibiting low kinetic energy.

A good paper on the subject: Progenitors of type Ia supernovae (319 references!). Of the DD model it says:
Although a DD merger is thought to experience an accretion induced collapse rather than a thermonuclear explosion, any definitive conclusion about the DD model is currently premature:
(1) There are some parameter ranges in which the accretion induced collapse can be avoided. Recent simulations indicate that the violent mergers of two massive WDs can closely resemble normal SN Ia explosion with the assumption of the detonation formation as an artificial parameter, although these mergers may only contribute a small fraction to the observed population of normal SNe Ia.
(2) This model can naturally reproduce the observed birthrates and delay times of SNe Ia and may explain the formation of some observed super-luminous SNe Ia.
(3) This model can explain the lack of H or He seen in the nebular spectra of SNe Ia.
(4) Recent observational studies of SN 2011fe seem to favor a DD progenitor. In addition, there is no signal of a surviving companion star from the central region of SNR 0509-67.5 (the site of a SN Ia explosion whose light swept Earth about 400 years ago), which may indicate that the progenitor for this particular SN Ia is a DD system.

The variety of models and observations highlights the problem of treating SNe 1a as standard candles.

Again the question is with this mix of different types, "how does the ratio of the different species and hence the Absolute Magnitudes of SNe 1a data sets evolve over cosmological time scales?"

Garth
 
  • #8
It is important to stress that it is indeed the ratio of the different subtypes that must evolve with age to cause the kind of problem being reported, not just the existence of different subtypes. Also, these different subtypes must be impossible to tell apart by their light curves, or we have no problem. The use of type Ia as standard candles does not rely on them all being the same explosion, as indeed that isn't even true of Cepheids-- they are not all the same stars, some are bigger than others. Yet they make a good standard candle unless we confuse the low-metallicity ones with the higher-metallicity ones-- all we have is their pulsation period, so throwing luminosity variations in there all together makes us subject to Malmquist bias (as happened to Hubble). But that always causes you to underestimate the distance of the farther ones, so that is not what is going on for the type Ia SNe, for which it is being claimed we are overestimating the distance of the farther ones.

In other words, the fundamental assumption in the "standard candle" idea is that we have some means of sorting the objects such that we can create subgroups that have the same luminosity-- those are the standard candles we use, post sorting. With Cepheids, it is stars that have the same pulsation period that form the standard-candle subgroups. With type Ia SNe, it is SNe with the same light curves (after correcting for cosmological time dilation of course, but that's easy because the redshift is measured). So having multiple classes of SN wouldn't create any problems with standard candles, unless the multiple classes had similar light curves that could be mistaken for each other. I believe that is the issue here-- not that there is more than one way to blow up a white dwarf, but that some of the different ways of doing it look the same, and worse, the proportion of each has a systematic dependence on the age of the universe. This could in principle allow for a kind of reverse Malmquist bias. The Malmquist bias is that if you have a fixed luminosity variation within your standard-candle subgroups, then you will tend to see a higher proportion of the intrinsically more luminous ones at large distance (since you won't see the dim ones), and this causes you to underestimate the distance to the more faraway ones.

The effect they are claiming is the opposite-- we are overestimating the distance to the SNe that happened earlier in the age of the universe, because at that age, there simply weren't the more luminous ones that we are assuming we are seeing when we group by similar light curves. So this is not an effect stemming from a spread in luminosities, it is an effect stemming from a secular trend in those luminosities. A spread just gives Malmquist bias, which would have the opposite effect to the one they are claiming.
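
A rough Monte Carlo sketch of the distinction drawn above, with entirely made-up numbers for the scatter, the survey limit and the secular shift:
[code]
# Rough Monte Carlo sketch of the two effects distinguished above. All numbers
# (intrinsic scatter, limiting magnitude, secular shift) are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
M0, scatter = -19.3, 0.3          # assumed calibration magnitude and intrinsic spread
true_mu = 44.0                    # true distance modulus of a distant shell of SNe
m_lim = 24.9                      # hypothetical survey limiting apparent magnitude

# (1) Malmquist bias: a symmetric luminosity spread plus a flux limit means only the
#     brighter members are detected, so the inferred distance modulus is biased LOW.
M = rng.normal(M0, scatter, 100000)
m = M + true_mu
detected = m < m_lim
mu_inferred = (m[detected] - M0).mean()
print(f"Malmquist: inferred mu = {mu_inferred:.2f} vs true {true_mu}  (underestimate)")

# (2) Secular trend: the early-universe population is intrinsically 0.2 mag fainter
#     (no selection cut applied), so the inferred distance modulus is biased HIGH.
M_early = rng.normal(M0 + 0.2, scatter, 100000)
mu_inferred_early = (M_early + true_mu - M0).mean()
print(f"Secular dimming: inferred mu = {mu_inferred_early:.2f} vs true {true_mu}  (overestimate)")
[/code]
The point is just the sign of each bias: a spread plus a flux limit pulls inferred distances down, while a secular dimming of the population pulls them up, which is the direction of the claimed effect.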
 
  • #9
Ken G said:
It is important to stress that it is indeed the ratio of the different subtypes that must evolve with age to cause the kind of problem being reported, not just the existence of different subtypes. Also, these different subtypes must be impossible to tell apart by their light curves, or we have no problem.
...
So this is not an effect stemming from a spread in luminosities, it is an effect stemming from a secular trend in those luminosities.
Type 1ax and Type 1a SNe have very similar light curves, but Type 1ax SNe can be 100 times less luminous than Type 1a SNe. If the light curve were the only information available, then Type 1ax SNe would indeed cause an overestimation of the distance, which appears to have been the case prior to 2013: anywhere from 18% to 48% of Type 1ax SNe have been misclassified as Type 1a SNe. It is indeed the spread in luminosity between Type 1ax and Type 1a that is causing this problem. The assumption has been that all Type 1a SNe have an Mv peak of -19.3, hence a "standard candle." If we make that same assumption for Type 1ax SNe, then all Type 1ax SNe will appear to be much farther away than they actually are.

Besides measuring the light curve and redshift of the SNe, it is also critical to obtain a good spectral analysis over time. A spectral analysis would clearly distinguish Type 1ax from Type 1a SNe, where measuring just the light curve and redshift may not. Type 1a SNe more than a megaparsec away that were recorded prior to 2013, for which we have no spectral analysis over time, should not be used to determine cosmological distances.
 
  • #10
|Glitch| said:
Type 1ax and Type 1a SNe have a very similar light curve, but Type 1ax SNe can be 100 times less luminous than Type 1a SNe. If the light curve was the only information available, then Type 1ax SNe would indeed cause an overestimation of the distance.
Yes, that's a very sticky problem indeed, we'd like to be able to rely on the light curve. But like with higher and lower metallicity Cepheids, we can use auxiliary spectroscopic information to hopefully re-establish the standard candle sorting we need. If we do that, and it ends up changing the inferences about dark energy, that's going to be a big problem indeed for "precision cosmology."
Besides measuring the light curve and red shift of the SNe, it is also critical to obtain a good spectral analysis over time. A spectral analysis would clearly distinguish Type 1ax from Type 1a SNe, where measuring just the light curve and red shift may not. Type 1a SNe that are more than a megaparsec away recorded prior to 2013, where we have no spectral analysis over time, should not be used to determine cosmological distances.
Agreed, so the big question now is, what is the dark energy we infer from re-analysis of the data?
 
  • #11
Ken G said:
Agreed, so the big question now is, what is the dark energy we infer from re-analysis of the data?
The nature of dark energy has not changed due to the possible overestimation of cosmological distances, but there may be less dark energy than originally estimated, just as the expansion of the universe may not be accelerating quite as fast as originally calculated. We have enough good data to know both that dark energy must exist and that the universe is accelerating in its expansion. However, once we eliminate the incomplete data we should be able to obtain a more accurate measurement of both.
 
  • #12
|Glitch| said:
The nature of dark energy has not changed due to the possible overestimation of cosmological distances, but there may be less dark energy than originally estimated. Just as the expansion of the universe may not be accelerating quite as fast as originally calculated. We have enough good data to know that both dark energy must exist and that the universe is accelerating in its expansion. However, once we eliminate the incomplete data we should be able to obtain a more accurate measurement of both.
The problem is, if that's all that happens, the picture will lose consistency. If you reduce dark energy, you have no way to increase dark matter, as it has its own constraints. But the dynamics need to be flat if GR is right, so we would need something else to fill in the missing gap. Astronomers did not have a great deal of fun trying to convince people there are two mysterious elements in the dynamics of the universe-- they are going to have awful pains trying to say there need to be three. So let's hope the reanalysis of the data does not lead to much.
 
  • #13
Ken G said:
The problem is, if that's all that happens, the picture will lose consistency. If you reduce dark energy, you have no way to increase dark matter, as it has its own constraints. But the dynamics need to be flat if GR is right, so we would need something else to fill in the missing gap. Astronomers did not have a great deal of fun trying to convince people there are two mysterious elements in the dynamics of the universe-- they are going to have awful pains trying to say there need to be three. So let's hope the reanalysis of the data does not lead to much.
By reducing the amount of dark energy all you are really saying is that the universe is not accelerating quite as fast as originally estimated, which would also imply that the universe may not be as old as 13.78 billion years. If we assume the amount of dark matter and ordinary baryonic matter in the universe is static, then it must be dark energy that is increasing over time. If everything, including dark energy, were static then the universe could not be accelerating in its expansion. The "Cosmological Constant" is not really a fixed constant, but rather a value that constantly increases with time.
 
  • #14
|Glitch| said:
By reducing the amount of dark energy all you are really saying is that the universe is not accelerating quite as fast as originally estimated. Which would also imply that the universe may not be as old as 13.78 billion years.
What HST measured very accurately is the current value of the Hubble parameter. That parameter determines the critical density, and the universe must have the critical density to be flat (which seems to be the upshot of the WMAP results: we do have the critical density). None of those things rely at all on dark matter or type Ia SN observations; we have the result that there must be the critical density. Then we go and look at the density we actually have, via matter and dark matter, and it's only 30% of critical. So without even looking at type Ia observations, we already know we need 70% of the critical density to come from some other source. Enter our interpretations of type Ia observations. I think what you are saying is that if we reinterpret type Ias, we may find that the expansion is not accelerating as much. That could well be true, but what I'm saying is that we would not be happy to resolve that by simply reducing the amount of dark energy, because we still need the 70% of the critical density to come from somewhere. So we cannot reduce the dark energy without postulating something "even darker," and nobody is going to like that. Alternatively, we could try keeping the 70% of critical density, but playing with the equation of state so that it is not a cosmological constant any more. That isn't going to be too popular either, because the constant-density-that-is-a-law-of-physics approach that is the cosmological constant seemed like the simplest solution-- we'll instead be facing the very real prospect that GR is wrong, or else we'll have to cook up some arbitrary equation of state for the dark energy to fit the type Ia observations.
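
For the numbers above, a back-of-the-envelope check of the critical density implied by the measured expansion rate, taking H0 = 70 km/s/Mpc as a round illustrative value rather than a quoted measurement:
[code]
# Back-of-the-envelope check of the critical-density argument: rho_crit = 3 H0^2 / (8 pi G).
# H0 = 70 km/s/Mpc is an assumed round number used only for illustration.
import math

G = 6.674e-11                      # m^3 kg^-1 s^-2
Mpc = 3.086e22                     # metres
H0 = 70 * 1000 / Mpc               # s^-1

rho_crit = 3 * H0**2 / (8 * math.pi * G)
print(f"critical density ~ {rho_crit:.2e} kg/m^3")                       # roughly 9e-27 kg/m^3
print(f"matter (30%): {0.3*rho_crit:.2e},  missing 70%: {0.7*rho_crit:.2e} kg/m^3")
[/code]
The 70% shortfall is fixed by the flatness requirement and the measured matter budget, which is why simply "having less dark energy" leaves a gap that something else must fill.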

So none of these are going to be fun; it's not just a matter of changing the age of the universe and everything is fine. We either need to replace GR, or we need 70% of the critical density to come from something, and we liked it a lot better when that 70% looked like it was coming from a cosmological constant (or, equivalently, a constant energy density in vacuum). Maybe some clever theorist will come up with a new type of gravity that fits this result nicely, replacing GR in a way that doesn't feel retrofitted simply to fit this data point, or else find an equation of state for dark energy that makes perfect sense and fits the data. Otherwise, cosmology is going to start looking like a Rube Goldberg mechanism instead of the "precision cosmology" we thought we had.
The "Cosmological Constant" is not really a fixed constant, but rather a value that constantly increases with time.
Yes, we'd need a new equation of state for dark energy, not less dark energy. People tend to think of dark energy as bad, because it's mysterious and we don't want it, but just having less of it wouldn't be better and it wouldn't even work-- what we'd need is the same amount of it but with an even more arbitrary equation of state, and that is going to make the situation much worse than it already is.
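
A minimal sketch of what "playing with the equation of state" means in practice: a dark-energy component with parameter w dilutes (or grows) as the universe expands unless w = -1 exactly. The w values other than -1 below are illustrative, not fits to any data.
[code]
# Minimal sketch of the equation-of-state point: a component with
# rho(a) = rho_0 * a**(-3*(1+w)) only stays constant for w = -1, i.e. a true
# cosmological constant. The alternative w values are illustrative only.

def rho_de(a, w, rho0=1.0):
    return rho0 * a ** (-3 * (1 + w))

for w in (-1.0, -0.8, -1.2):
    print(f"w = {w:+.1f}: rho(a=0.5)/rho(a=1) = {rho_de(0.5, w):.2f}")
[/code]
Only w = -1 leaves the density untouched as the scale factor grows; anything else makes the dark-energy history, and hence the predicted Hubble diagram, a free function that has to be fitted.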
 
  • #15
Ken G said:
What HST measured very accurately is the current value of the Hubble parameter. That parameter determines the critical density, and the universe must have the critical density to be flat (which seems to be the upshot of the WMAP results, we do have critical density).
...
Yes, we'd need a new equation of state for dark energy, not less dark energy.
I had not considered that perspective, but after your detailed explanation I realize that you are absolutely right. Critical density has to be maintained for GR to be valid, and since all the matter (dark or otherwise) is fixed, there must also be a finite amount of dark energy in order to match our observations. I was under the mistaken impression that in order for there to be continuous acceleration, the amount of dark energy would have to increase proportionally. Since the amount of dark energy in the universe is finite, the only thing that could explain any amount of acceleration is that the nature of dark energy must somehow change. A finite amount of dark energy could easily explain expansion if there is enough to overwhelm gravity, or even contraction if there isn't enough, but it would not explain acceleration. It would be like Earth's gravity getting stronger and stronger even though its mass, radius, and density haven't changed. Somehow dark energy's "repulsive force" would have to continually increase over time, without increasing the amount of dark energy, in order to explain an accelerating universe. But that can't be right either, since it would violate conservation of energy. :confused:

I do not think dark energy is good or bad, it just is. We are doing our best to understand the nature of dark energy by observing its effects on the universe. It seems to me that dark energy is contradictory: I cannot explain the acceleration of the universe without violating some well established law of physics, and I find that frustrating but not unexpected. There is so much that I don't know; if I let it get to me I would be a complete wreck by now.
 
  • #16
I think it is too early to speculate about multiple forms of dark matter as long as it is unclear how strong the effect of the supernova issue is.
I'm sure there are tons of arXiv articles written about it right now.
 
  • #17
OK - now that we are talking about the consequences of SNe 1a not being standard candles because of an evolution in the ratio of the three species, SD, DD and Type 1ax, I will add my own take on the matter.

Ken G's point is a reiteration of Chalnoth's #3.

My response #5 still stands:
Chalnoth's point that other types of data corroborate the result is an important one but it could be misleading - if the conclusion from the SNe 1a data is wrong then it is wrong, and might well call into question the interpretation put upon the other data sets.

Much is made of the concordance in the standard [itex]\Lambda[/itex]CDM model between the various data sets.

Let us look at those other data sets, in particular the CMB and spatial flatness. I am now going to introduce an alternative to that standard model in order to show that it may indeed be possible to interpret those data sets in other ways.

One alternative model is the linearly expanding model proposed by various authors under different guises, such as: http://arxiv.org/abs/astro-ph/0306448 , Introducing the Dirac-Milne universe, and The R_h = ct universe without inflation.
Such a model expands as the Milne empty universe and requires either an EoS of [itex]\omega = -\frac{1}{3}[/itex] or repulsive antimatter, as in the Dirac-Milne theory, in order to produce the Milne model without it being empty.

Immediate advantages of such a model are that it requires no inflation, it resolves any age problem in the early universe, and it readily explains why the age of the universe coincidentally happens to be equal to the Hubble time.

In the standard model the angular size of the first peak in the CMB power spectrum agrees with the angular size of sound speed limited fluctuations magnified by inflation at t ~ 380,000 years if space is flat.

In the R=ct model the CMB is emitted at t = 12.5 Myr, nearly 40 times later than in the standard model, and the sound-horizon-limited fluctuations are correspondingly larger; however, the hyperbolic space of the Milne model makes distant objects look smaller than in flat space, exactly compensating for the enlarged size.

The same shrinking of angular measurement by hyperbolic space also applies to the 'standard ruler' of baryonic acoustic oscillations - they are larger than in the standard model but have the same angular diameter - and to the baryon-loading (second) peak of the CMB power spectrum.

There is a degeneracy in the CMB data, in that it is consistent with both the flat geometry of space in the [itex]\Lambda[/itex]CDM model and the hyperbolic geometry of space of the Milne model.

Big Bang nucleosynthesis in the R=ct universe has been explored by several authors, such as http://arxiv.org/abs/nucl-th/9902022
In order to produce the right amount of helium the baryon density has to be increased, such that it may explain DM; of course, that leaves the question of where that missing baryonic DM is now hiding. My guess would be in IMBHs and inter-cluster neutral hydrogen. There is also a deuterium problem with the model, but it relieves the [itex]\Lambda[/itex]CDM lithium problem.

Now to the subject of this thread: are SNe 1a standard candles or not?
Perlmutter et al. in their seminal paper
Measurements of Omega and Lambda from 42 High-Redshift Supernovae (figures 1 & 2) point out that the ([itex]\Omega_M[/itex] = 0 and [itex]\Omega_\Lambda[/itex] = 0) Milne model fits the data as well as the standard model.

[EDIT - notice the diagram in Buzz Bloom's post in Fitting a model to astronomical data - the purple solid line flat dark energy model and the green solid 'empty' model]

That was out to z ~ 1. We are always told that the linearly expanding model is ruled out by subsequent studies of SNe 1a beyond z = 1 where the SNe 1a become brighter again, as expected in the standard model but not the linearly expanding one.

However if the SNe 1a are not standard candles, particularly at high z where a secular evolution of species ratio may kick in, then that conclusion may no longer hold. And alternatives such as the R=ct model ought to be re-examined.
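
For anyone who wants to see that comparison, here is a short sketch that computes distance moduli for a flat [itex]\Lambda[/itex]CDM model, the empty (Milne) model and a matter-only Einstein-de Sitter model. H0 = 70 km/s/Mpc and the density parameters are assumed round numbers for illustration, not fitted values.
[code]
# Rough sketch of the Hubble-diagram comparison referred to above: distance modulus
# versus redshift for flat LCDM (Om=0.3, OL=0.7), the empty (Milne) model and a
# matter-only Einstein-de Sitter model. H0 and the density parameters are assumptions.
import numpy as np

c, H0 = 299792.458, 70.0                      # km/s and km/s/Mpc
dH = c / H0                                   # Hubble distance in Mpc

def dl_lcdm(z, Om=0.3, OL=0.7, n=2000):       # flat LCDM luminosity distance (Mpc)
    zp = np.linspace(0.0, z, n)
    E = np.sqrt(Om * (1 + zp) ** 3 + OL)
    return (1 + z) * dH * np.trapz(1.0 / E, zp)

def dl_milne(z):                              # empty, linearly expanding (Milne) universe
    return dH * z * (1 + z / 2)

def dl_eds(z):                                # Einstein-de Sitter (Om = 1)
    return 2 * dH * (1 + z) * (1 - 1 / np.sqrt(1 + z))

mu = lambda dl: 5 * np.log10(dl * 1e6 / 10)   # distance modulus from D_L in Mpc

for z in (0.5, 1.0, 1.5):
    print(f"z={z}: mu_LCDM={mu(dl_lcdm(z)):.2f}  mu_Milne={mu(dl_milne(z)):.2f}  "
          f"mu_EdS={mu(dl_eds(z)):.2f}")
[/code]
Out to z ~ 1 the empty model tracks the flat dark-energy model to within roughly a tenth of a magnitude, which is why the discrimination leans so heavily on the z > 1 SNe and hence on how well those events are calibrated.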

Just my penny worth...

Garth
 
  • #18
|Glitch| said:
I do not think dark energy is good or bad, it just is.
I agree, I confess I was speaking rather colloquially. The truth is, we will eventually be stuck with whatever the observations tell us, but it would sure be nice if it all had a simple explanation, rather than a zoo of new and mysterious effects! I guess some scientists are hoping for the zoo, it will give them more to do, but it's hard to keep a straight face as we are explaining how well we understand our universe with all these mysterious unknowns floating around!
I cannot explain the acceleration of the universe without violating some well established law of physics, and I find that frustrating but not unexpected. There is so much that I don't know if I let it get to me I would be a complete wreck by now.
I think the fact is, there are no well established laws of physics when it comes to cosmology, this is truly a new frontier.
 
  • #19
mfb said:
I think it is too early to speculate about multiple forms of dark matter as long as it is unclear how strong the effect of the supernova issue is.
I'm sure there are tons of arXiv articles written about it right now.
Sure, but if it does turn out that no changes to the interpretation of the type Ia SNe are needed, then the whole matter is a tempest in a teacup. The cited article doesn't say there is a "significant bias" to the parameters, but it certainly raises the prospect that there could be. If there isn't, they are coming close to crying wolf.
 
  • #20
Garth said:
There is a degeneracy in the CMB data as it confirms both the flat geometry of space in the [itex]\Lambda[/itex]CDM model and the hyperbolic geometry of space of the Milne model.
That is certainly an interesting thing to point out; it does raise the possibility that a simple explanation may yet be possible, if it turns out that the Ia data needs significant reinterpretation.
 
  • #21
Garth said:
OK - now that we are talking about the consequences of SNe 1a not being standard candles because of an evolution in the ratio of the three species, SD, DD and Type 1ax, I will add my own take on the matter.
...
However if the SNe 1a are not standard candles, particularly at high z where a secular evolution of the species ratio may kick in, then that conclusion may no longer hold. And alternatives such as the R=ct model ought to be re-examined.

Garth
I would not go so far as to claim that any of the data collected with regard to Type 1a SNe is "wrong." However, it may be incomplete in some cases. Type 1a SNe can still be used as a "standard candle," but we need to make certain that we have not just the light curve and redshift, but also spectrographic information over time as well. With that information we can distinguish between sub-Chandrasekhar and super-Chandrasekhar SNe, and eliminate them from distance calculations.

It is really no different than using Cepheid variables for calculating cosmological distances in our local group. We have to be able to distinguish between Pop. II low-metal Cepheid variables, classical Pop. I Cepheid variables, and RR Lyrae variables to ensure that their rate of pulsation matches their luminosity.
 
  • #22
|Glitch| said:
I would not go so far as to claim that any of the data collected with regard to Type 1a SNe is "wrong." However, it may be incomplete in some cases. Type 1a SNe can still be used as a "standard candle," but we need to make certain that we have not just the light curve and red shift, but also spectrographic information over time as well. With that information we can distinguish between a sub-Chandrasekhar and a super-Chandrasekhar SNe, and eliminate them from distance calculations.

It is really no different than using Cepheid variables for calculating cosmological distances in our local group. We have to be able to distinguish between Pop. II low-metal Cepheid variables, classical Pop. I Cepheid variables, and RR Lyrae variables to ensure that their rate of pulsation matches their luminosity.
Agreed,

So, what will be the result when the spectrographic elimination is used?

Garth
 
  • #23
We shall have to see!
 
  • #24
Ken G said:
I agree, I confess I was speaking rather colloquially. The truth is, we will eventually be stuck with whatever the observations tell us, but it would sure be nice if it all had a simple explanation, rather than a zoo of new and mysterious effects! I guess some scientists are hoping for the zoo, it will give them more to do, but it's hard to keep a straight face as we are explaining how well we understand our universe with all these mysterious unknowns floating around!
I think the fact is, there are no well established laws of physics when it comes to cosmology, this is truly a new frontier.
It reminds me of Ptolemy's geocentric model. His "epicycles" matched our observations fairly well, but were overly complex. Our observations can be deceptive if there are limited points of reference. For example, how can one tell whether the Earth is rotating, or whether the Earth is fixed and the sun is orbiting the Earth, if the only reference you have is that the sun appears to rise in the morning and set in the evening? We observe the effect, but not necessarily the correct cause of that effect. Dark energy may be similar in that regard: the effect of its apparent accelerating repulsive force on the universe may have a cause that we have not yet considered.
 
  • #25
Garth said:
Agreed,

So, what will be the result when the spectographic elimination is used?

Garth
Since sub-Chandrasekhar SNe appear to be more numerous than super-Chandrasekhar SNe, I would expect an overestimation of cosmological distances beyond a megaparsec. However, I would not expect a significant alteration because in many cases we do have complete data. The universe is still accelerating, but it may not be accelerating as fast as previously estimated.
 
  • #26
Ken G said:
Yes, that's a very sticky problem indeed, we'd like to be able to rely on the light curve. But like with higher and lower metallicity Cepheids, we can use auxiliary spectroscopic information to hopefully re-establish the standard candle sorting we need. If we do that, and it ends up changing the inferences about dark energy, that's going to be a big problem indeed for "precision cosmology."
Seems like the opposite to me? The better we understand the various types of supernovae, the better the observations can be filtered out / corrected for "truer" standard candles, and this improves the precision - my impression was that this is already what's been happening over time in the successive studies.
 
  • #27
|Glitch| said:
It reminds me of Ptolemy's geocentric model. His "epicycles" matched our observations fairly well, but were overly complex. ...
It was still a pretty good model (it stayed the most accurate available for hundreds of years if I am not mistaken) and changing coordinates to heliocentric isn't a big deal - my understanding is that by itself this didn't improve precision at all. We could still use epicycles (as I understand it, in essence a kind of Fourier series expansion of the orbit), but time-varying Keplerian elements (isn't that the modern version of the epicyclic description?) now make this obsolete, since they are more convenient/precise, no?
 
  • #28
wabbit said:
It was still a pretty good model (it stayed the most accurate available for hundreds of years if I am not mistaken) and changing coordinates to heliocentric isn't a big deal - my understanding is that by itself this didn't improve precision at all. We could still use epicycles (as i understand it, in essence a kind of Fourier series expansion of the orbit), but time varying Keplerian elements (isn't that the modern version of the epicyclic description ? ) now make this obsolete, since they are more convenient/precise, no ?
Yes, Ptolemy's model matched our observations very well, and was the standard for ~1,400 years. It was not until Copernicus' heliocentric model, and Galileo's observations with a new tool - the telescope - that the geocentric model was finally invalidated. This demonstrates how observations with limited references can be deceptive and cause us to construct models that are completely wrong, even though they fit our observations. Once we construct a model that fits all our observations we have a tendency not to look for additional possible solutions.
 
  • #29
|Glitch| said:
Yes, Ptolemy's model matched our observations very well, and was the standard for ~1,400 years. It was not until Copernicus' "heliocentric" model, and Galileo's observations with a new tool - the telescope - that the geocentric model was finally invalidated. Which demonstrates how our observations with limited references can be deceptive and cause us to construct models that are completely wrong, even though they fit our observations. Once we construct a model that fits all our observations we have a tendency not to look for additional possible solutions.
This isn't the topic of this thread so I'll stop after this - but I just don't see anything "wrong" in Ptolemy's model, much less "completely wrong". As I understand it, it purported to model the motion of heavenly bodies relative to us, and was successful at that. It is not contradicted by changing the origin of coordinates, nor by later modelling of the forces (or geometry) causing that motion - in fact I believe it would be as valid today (with a few parameter adjustments, presumably) as it ever was, though it has now been made obsolete by newer tools better suited to the task, and more precise than it was.

I am not aware of a theory within Ptolemy's model of what causes the motion, but this may well be simply ignorance on my part - and if that is the shortcoming you are referring to, then I expect I would agree with you: presumably such a theory must have been quite primitive.

In a similar way I do not consider General Relativity to be wrong in any meaningful sense, although I do expect its description of motion below the Planck scale to be so wrong as to be probably meaningless.
 
  • #30
wabbit said:
Seems like the opposite to me ? The better we understand the various types of supernovas, the better the observations can be filtered out / corrected for "truer" standard candles, and this improves the precision - my impression was that this is already what's been happenning over time in the successive studies.
No question, it is always better science to better understand and calibrate one's own observations. My point is only that when we embark on science, we always have to hope that there is some relatively simple discovery waiting for us down the road, something our minds are capable of getting some kind of handle on. That's never a guarantee-- plenty of scientists have been unlucky enough to set out on a journey that had no such simple payoff waiting for them. Others, like Galileo, were in the opposite boat, having a world of seemingly intractably complicated phenomena that were unified under the simple discovery that the Earth was like the rest of the cosmos and not some completely different object at its center. We find ourselves in similar waters with cosmology, but we don't yet know that the simple unifying concepts are going to come. We had a great start with the cosmological principle; that one seems to check out nicely and is spectacularly unifying. But then came dark matter and dark energy, and depending on whether or not there are unifying concepts out there that will work for them, we can either walk in Galileo's footsteps again, or begin a long wander in the wilderness. How the type Ia supernova data plays out may have a lot to say about which of those journeys we are embarking on.

By the way, the above issue is the sense in which Ptolemy's model was "wrong." It's true that it was a fine model for predicting the motions of the heavenly bodies; that wasn't the problem with it. It also wasn't the epicycles-- yes we prefer simpler theories, but sometimes we have to muddle along with a more complicated one while we are looking for better unifying principles. None of those would make Ptolemy's model "wrong," just destined to be superseded-- and what model doesn't share that destiny? But what made it "wrong" is the wrong way it caused us to think about the relationship between the Earth and the rest of the cosmos. As long as we hold to the ancient Greek view that the Earth is the stationary center of the universe, we are vulnerable to thinking that the Earth is something different, some kind of special element as the Greeks thought. Then we don't look at the rest of the universe with the right eyes-- we don't look for laboratory experiments on Earth to tell us the physics of the Sun, for example. We also don't get Newton's insight into the incredibly unifying concept of gravity, because we think gravity is just the tendency for things to go to the center of the universe, and we have no idea of how to make a satellite orbit Mars, for example.
 
  • #31
OK, you're saying that some interpretations of Ptolemy's model were wrong, or led to dead ends. No quarrel about that; in a way it's similar to saying that the ether interpretation of the Lorentz transformation was wrong - certainly it looks to me like it obscured simple facts.

I don't think that the ancient Greeks held that the Earth was stationary at the center of the universe. I believe there was a range of views then (Aristarchus of Samos is reported as a proponent of heliocentrism, for instance, but he was by no means the only one), debates about them, and probably the awareness, at least for some, that they just didn't have enough data to know for sure. I don't know, on the other hand, if the idea that the universe has no center at all was proposed then.

Back to the topic of this thread - if we have alternate models and they fit the data better, wonderful. Modern cosmology is a young science, and despite amazing advances in such a short time, it is unlikely to be complete - and perhaps it even took a wrong turn as you seem to suggest (some may not see DM and even less a CC as indicative of that but the data will sort it out), in which case it will still leave at least the legacy of great data to be reworked under new assumptions - time will tell, and I sure don't have the expertise required to foresee what it will tell.
 
  • #32
This paper is pertinent to this thread: Theoretical uncertainties of the type Ia supernova rate, especially figure 15 on page 19.
7.5. Outlook
Our models indicate that the progenitor evolution of SNe Ia does not consist of one evolutionary channel, but has many different branches with the relevance of each depending on different aspects of binary evolution. With upcoming supernova surveys we will get more detailed information on the differences between individual SNe Ia. Therefore it is possible to gain insight in the sub-populations of SNe Ia. As a next step the characteristics of newly proposed progenitor channels should be investigated in more detail, including the properties of the merger products or the remaining companion stars. The outcome of such a study, combined with the results of this paper, can be linked with the different sub-populations of SNe Ia. However, when studying the rates of the other progenitor channels the uncertainties discussed in this work should always be kept in mind.

We wait for those upcoming supernova surveys!

Garth
 
  • #33
Garth said:
This paper is pertinent to this thread: Theoretical uncertainties of the type Ia supernova rate, especially figure 15 on page 19.

We wait for those upcoming supernova surveys!
So it is not feasible in the meantime to identify subtly different signatures that would be detectable in the data available from existing surveys, using multiple spectra captured over a decay curve? I was sort of expecting that such a re-estimation might provide at least a first go at the correction implied by such models, but this may have been overly optimistic.
 
  • #34
wabbit said:
So it is not feasible in the meantime to identify subtly different signatures that would be detectable in the data available from existing surveys, using multiple spectra captured over a decay curve ? I was sort of expecting such a kind of reestimation might provide at least a first go at the correction implied by such models, but this may have been overly optimistic.
That paper was on the errors in the SNe 1a formation rate - so it's going to be complicated and messy.

My point has always been that the simple assumption that Type 1a SNe are standard candles over cosmological time scales (z=1 and beyond) is naive.

Garth
 
  • #35
Right, my assumption was that some signature might allow us to isolate the separate channels. This is what they appear to suggest in:
As a next step the characteristics of newly proposed progenitor channels should be investigated in more detail, including the properties of the merger products or the remaining companion stars
Then the question is whether existing surveys captured enough data to identify this signature, so we can proceed from there, or whether they leave too much uncertainty and new surveys are required to sort things out. From what you're saying the latter appears to be more likely.

In any case, as I understand it, if the different channels, estimated separately, then yield the same distance/redshift curve, this would corroborate the existing (LCDM) model, while different curves for different channels would require some explanation.
 
