Challenges in Understanding Dark Energy and Dark Matter: A Scientific Inquiry

In summary, the conversation revolves around the topic of dark energy and dark matter in cosmology. The opinions and theories discussed include the possibility of intrinsic redshift and the role it may play in explaining the dimming of type 1a supernovae. There is also a mention of the importance of properly calibrating SN1ae data and the potential to test cosmology through observation. The conversation also touches on the limitations and assumptions in current cosmological theories.
  • #1
FayeKane
I'm sure it's just me since I've never seen this opinion anywhere, but has it occurred to anyone else that when type 1a supernovæ are dimmer than you expect, the proper response is "Gee, let's find out why the type 1a supernovæ are dimmer than we expect", rather than "Let's explain it by inventing a mysterious invisible entity about which we know nothing"? They've been doing THAT for thousands of years.

Dark energy is more of a problem than what it (supposedly) explains. It's like saying "the big bang was caused by an invisible god". Not only do we then have the far larger problem of explaining where the invisible "god" came from, but far worse, that answer doesn't actually tell us how the big bang happened, either.

Same with dark energy. I predict that when the source of the 1a attenuation is found, "dark energy" will be next to "Eoanthropus dawsoni" in the footnotes of science history.

--faye kane homeless brain

PS
Don't EVEN get me started on "dark matter"!
 
  • #2
There are physicists looking for other possible explanations for both dark matter and dark energy. So far no one has come up with viable alternatives, although MOND has some followers as an alternative to dark matter.

Physical theories (in contrast to religion or economics) are not dogma. All theories must be consistent with observed data or else they are discarded.
 
  • #3
FayeKane said:
I'm sure it's just me since I've never seen this opinion anywhere, but has it occurred to anyone else that when type 1a supernovæ are dimmer than you expect, the proper response is "Gee, let's find out why the type 1a supernovæ are dimmer than we expect", rather than "Let's explain it by inventing a mysterious invisible entity about which we know nothing"?
Faye, as you may or may not know, there is a large body of peer-reviewed, published research claiming that low-redshift bodies are physically associated with bodies of higher redshift. You might want to Google Arp, the Burbidges (both Geoffrey and Margaret), Jayant Narlikar, and others. Their work (and research by myself and my collaborators, drawn from publicly available databases and published after peer review) suggests that there may be a component of redshift that is intrinsic. In other words, cosmological expansion, peculiar motion, and gravitational redshift may not be entirely responsible for the total redshift of an astronomical object.

What is the significance of this concept? Let's consider: in which objects are the redshifts most dominated by the Hubble/distance relationship and the least contaminated by any other possible contributor? They are the objects that are the farthest away from us. In light of this, SN1ae data should have been calibrated on the most distant galaxies, since they are the sample that is least contaminated with peculiar motions and other contributions to total redshift.

How does this pertain to the dark energy problem? If the SN1ae in distant galaxies are taken as our calibration standards, then the question should not be "Why are distant SN1ae too faint?" but "Why are nearby SN1ae too bright?". The simplest answer is that we have over-estimated the distances to nearby galaxies. If Arp, the Burbidges, Narlikar, and others are right, their oft-derided view of intrinsic redshift can neatly explain why nearby SN1ae are brighter than more distant ones. In this view, we have overestimated the distance to nearby galaxies because we have no way to separate the effects of intrinsic redshift from the total redshift of a galaxy. This is not a popular concept, but it is a realistic application of observational astronomy as a test of cosmology.
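Just to make the arithmetic explicit (this is only a schematic low-redshift sketch using a linear Hubble law, not a fit to any particular data set): if the observed redshift has an intrinsic piece,

$$ z_{\rm obs} = z_{\rm cosmo} + z_{\rm intrinsic} \quad\Rightarrow\quad d_{\rm inferred} \approx \frac{c\,z_{\rm obs}}{H_0} \;>\; d_{\rm true} \approx \frac{c\,z_{\rm cosmo}}{H_0} $$

An inflated distance means we predict a fainter apparent magnitude than the supernova actually shows, which is exactly the sense in which nearby SN1ae would look "too bright".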

In cosmology, we know little and assume much. There are times when cosmology can be tested by observation. I believe that this is such a time, and the opportunity has been largely overlooked.
 
  • #4
turbo-1 said:
If the SN1ae in distant galaxies are taken as our calibration standards, then the question should not be "Why are distant SN1ae too faint?" but "Why are nearby SN1ae too bright?".

I'm not understanding this. The important thing about SN1a data is that there is very little calibration necessary. As far as we can tell, all SN1a have the same intrinsic brightness and are standard candles.
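(For readers following along: "standard candle" just means that the absolute magnitude M is taken to be the same for every SN1a, so the apparent magnitude gives the distance directly. Schematically,

$$ m - M = 5\log_{10}\!\left(\frac{d_L}{10\ {\rm pc}}\right), $$

and the only real calibration issue is pinning down M, which is done locally.)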

In this view, we have overestimated the distance to nearby galaxies because we have no way to separate the effects of intrinsic redshift from the total redshift of a galaxy.

But unless I'm wrong, the calibrations for the SNIa bypass redshift altogether. The galaxies in which we see nearby SNIa also have Cepheid data, so redshifts are irrelevant, since they aren't included in the calibration at all.

In cosmology, we know little and assume much. There are times when cosmology can be tested by observation. I believe that this is such a time, and the opportunity has been largely overlooked.

The reason that supernova Ia are important is that you considerably reduce the assumptions. The one big assumption (which people are testing both theoretically and observationally) is that supernova Ia don't evolve in brightness. However, in the original paper, IIRC, they mentioned that you would need very large differences in order to remove dark energy.
 
  • #5
FayeKane said:
I'm sure it's just me since I've never seen this opinion anywhere, but has it occurred to anyone else that when type 1a supernovæ are dimmer than you expect, the proper response is "Gee, let's find out why the type 1a supernovæ are dimmer than we expect"

Take a look at Section 5.1 in the original paper...

http://adsabs.harvard.edu/abs/1998AJ...116.1009R

The basic answer is that if you try to explain this as an evolution effect, you have to explain 1) why there seems to be no evolution effect in computer simulations of type Ia, 2) why there doesn't seem to be any correlation between SN Ia brightness and whether the host galaxy is elliptical or spiral, and 3) why they are not only dimmer but dimmer in such a way that the whole light curve shifts down.

"Let's explain it by inventing a mysterious invisible entity about which we know nothing"? They've been doing THAT for thousands of years.

And you are doing this anyway, by invoking some mysterious invisible entity that makes SN Ia's act differently.

Dark energy is more of a problem than what it (supposedly) explains.

Sure. Come up with a better explanation?

It's like saying "the big bang was caused by an invisible god".

It's not that bad, because you can put in the amount of dark energy and get out predictions for other things that seem to make sense. It's the difference between saying that the big bang was caused by an invisible god, and saying that the big bang was caused by an invisible god whose name was Fred, who was bigger than a breadbox but smaller than an SUV, who likes to eat broccoli and snails, and we know his name is Fred because he signed his name in the Andromeda galaxy.

We are at the point where, if dark energy does exist, we can say things about it.

Same with dark energy. I predict that when the source of the 1a attenuation is found, "dark energy" will be next to "Eoanthropus Dawsoni" in the footnotes of science history.

And you know this how? The original paper went through all of the obvious sources of attenuation and gave reasons why it couldn't be:

1) Evolution
2) Extinction
3) Selection bias
4) Effect of local voids
5) Weak gravitational lensing
6) Light curve filtering method
7) Sample contamination

Now what type of bias are you suggesting? Saying that "dark energy" is causing the expansion of the universe to accelerate is much less mysterious than talking about some "unknown supernova Ia attenuation factor", because "dark energy" is falsifiable, whereas some weird SNIa attenuation field isn't until you give some more details.
 
  • #6
FayeKane said:
I'm sure it's just me since I've never seen this opinion anywhere, but has it occurred to anyone else that when type 1a supernovæ are dimmer than you expect, the proper response is "Gee, let's find out why the type 1a supernovæ are dimmer than we expect", rather than "Let's explain it by inventing a mysterious invisible entity about which we know nothing"? They've been doing THAT for thousands of years.
Yes. But the problem is that the brightness of Type Ia supernovae is set by the physics of how the white dwarf explodes, so there isn't a whole lot of room for there to be a systematic change in brightness with redshift.

FayeKane said:
Dark energy is more of a problem than what it (supposedly) explains. It's like saying "the big bang was caused by an invisible god". Not only do we then have the far larger problem of explaining where the invisible"god" came from, but far worse, that answer doesn't actually tell us how the big bang happened, either.
No, it's actually not even a close comparison. You see, the various dark energy proposals are specific proposals with specific observational predictions. This is vastly different from proposing a nebulous entity whose definition is so vague that you can't ever possibly test for it.

Anyway, the thing is that by now, we don't even need to use supernovae at all to measure the accelerated expansion. There are a number of other observations that show the same thing. Perhaps the most interesting of these is the baryon acoustic oscillation measurement: in the early universe, normal matter (which cosmologists call baryons) was in a plasma state, and sound waves propagated through this plasma throughout our universe. We see the imprint of these sound waves on the CMB, and they have a characteristic size. We can then compare the average distances between nearby galaxies and correlate them with this length scale set by the CMB. This gives us a "standard ruler". And when we use this "standard ruler" to measure the expansion rate over time, we get the same accelerated expansion (to within the errors) as with supernovae.
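Schematically (glossing over the distinction between the transverse and line-of-sight BAO scales), the measurement is

$$ \theta_{\rm BAO}(z) \approx \frac{r_s}{D_A(z)}, $$

where the sound horizon r_s (roughly 150 Mpc comoving) is fixed by the CMB and D_A(z) is the angular diameter distance. Measuring the BAO angle at several redshifts maps out D_A(z), and hence the expansion history, with no reference to supernovae at all.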
 
  • #7
There are several factors that go into Chandrasekhar's equation that determine the brightness of a 1a SN - M and G may have behaved differently in early epochs - even though the MG product may be temporally constant (as is established by experiment) - G may have been larger and inertia may have been less - consequently earlier SN would be dimmer, and the width of the luminosity profile would have been narrower, when the brightness is calibrated against closer SN. Nonetheless, it is much more likely that the acceleration interpretation is correct.
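For reference, the scaling I have in mind is the standard textbook form of the Chandrasekhar mass (constants of order unity suppressed):

$$ M_{\rm Ch} \sim \left(\frac{\hbar c}{G}\right)^{3/2}\frac{1}{(\mu_e m_H)^2} \approx 1.4\,M_\odot , $$

so G enters as G^(-3/2) rather than through the simple MG product.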
 
  • #9
heldervelez said:
I will defend the OP position, if allowed, in this thread: https://www.physicsforums.com/showthread.php?t=354424
Sorry, but that thread's topic is quite different from this thread's. While they are similar, one is addressing how we talk about the expansion. This one, on the other hand, is addressing how, if we agree on how we want to talk about the expansion, said expansion is progressing.
 
  • #10
yogi said:
There are several factors that go into Chandrasekhar's equation that determine the brightness of a 1a SN - M and G may have behaved differently in early epochs

Won't work. If you change G and M enough to change the brightness of supernova Ia, then you change the brightness of all the other stars enough so that people will notice. Now if you can come up with a clever explanation of why a change in G and M would affect *only* SN Ia and nothing else (or maybe it's affecting something that we haven't measured) well... Let's see the explanation first...

I should point out that this is one problem with popular accounts of science, in that you end up with people thinking "scientists are stupid because they didn't think of this obvious explanation." What invariably happens is that people actually *do* think of obvious explanations and there are reasons why the obvious explanation won't work.
 
  • #11
twofish-quant said:
I should point out that this is one problem with popular accounts of science, in that you end up with people thinking "scientists are stupid because they didn't think of this obvious explanation." What invariably happens is that people actually *do* think of obvious explanations and there are reasons why the obvious explanation won't work.
This is one of the things that really struck home for me when I was a student at the university. I found, very quickly, that when tackling a currently-unknown problem, the scientific community as a whole is just extraordinarily resourceful at finding possible alternative explanations. Some of them are obvious. Some are extremely creative and insightful. But in any case the sheer scope and variety I found to be really impressive.

Of course, as with many people interested in this stuff, I also had my own ideas of possible alternative explanations. What I invariably found was that either my own alternative explanation was just flatly and obviously wrong (this was the case most of the time), or somebody else had already thought of it and had developed the idea far, far more than I ever did. Now that I am a trained scientist, I would still say that 99% of my creative ideas turn out to be flat wrong (which I usually realize within a few seconds of having the idea).
 
  • #12
I know exactly what you mean Chalnoth! One of the tricky things in being a scientist is to get the right balance between self confidence and humility. On the one hand you can place too much confidence in your ideas without carefully checking the literature to see who has already thought of them and who has shown why they don't work. On the other hand you can get too used to assuming your idea won't work for some reason or mustn't be novel, which can hold you back. I've seen that in myself and colleagues at times. The balance is really quite tricky!

I spent about 6 months when I was a PhD student trying to work out why a result I was getting was wrong, since it contradicted a previous work by a very well known and successful researcher. I was originally hoping to extend this previous work, and reproducing it was going to be the first step. Eventually it dawned on me that just maybe I was actually right, and fortunately the 6 months I spent working out why I was wrong turned into an exhaustive proof of my result! After 2 years and several journal articles back and forth between me and my supervisors and the other group, we've finally agreed that I was right all along - very gratifying!

That being said, most of my ideas do turn out to be wrong, the hard part is being able to both be confident when you really think you're onto something and also honest enough to relent when the evidence turns against you. That's the lesson Halton Arp never learned (although he's not alone there!).
 
  • #13
Chalnoth said:
What I invariably found was that either my own alternative explanation was just flatly and obviously wrong (this was the case most of the time), or somebody else had already thought of it and had developed the idea far, far more than I ever did.

At which point you think some more. When I tell someone "changing G won't work because you can't obviously change the magnitude of SN Ia without changing other things", that's a challenge. If you think for a while and come up with a way so that you *can* change SN Ia by changing G without changing anything else, GREAT! That's worth a letter to Ap. J.

If you think that you can explain away dark energy by SN Ia differences, then you should work at it. Here are all of the papers on SN Ia; read them for a few weeks. If you can find a gap in logic or you can think of something original, then GREAT! Again, a letter to Ap. J. If you can mention how to fix the gap, then this is progress. Don't just say "I think there might be something weird about SN Ia that may cause distances to be off". Use your imagination and knowledge to figure out what it might be, then come up with some specific observations that can address the weirdness, write a paper, and upload it to the Los Alamos server.

OK, intrinsic redshifts might throw off the accelerating universe. Quantify the effect. Come up with experiments that would show what that effect is.

Yes, it's hard, but you'll be doing science.

The neat thing is that with the web, the sum total of what has been tried before is now online, so it becomes possible for people to do this sort of thinking without needing access to a university library.

Now that I am a trained scientist, I would still say that 99% of my creative ideas turn out to be flat wrong (which I usually realize within a few seconds of having the idea).

That's why it's important to come up with 100 ideas. Also, from a theorist's point of view, the goal is not to be right. It's up to Mother Nature to tell you whether you are right or not. The goal is to be *original* and *interesting*. If you come up with new ideas, most of them die pretty quickly. Occasionally, you'll come up with a new approach or a new way of looking at things, which is publishable because it adds to the conversation, and you get progress even if you end up being wrong.

As a theorist, you can't avoid being dead wrong, since it's nature that decides right and wrong in science. You can at least be wrong in interesting and productive ways.
 
  • #14
twofish-quant said:
At which point you think some more. When I tell someone "changing G won't work because you can't obviously change the magnitude of SN Ia without changing other things", that's a challenge. If you think for a while and come up with a way so that you *can* change SN Ia by changing G without changing anything else, GREAT! That's worth a letter to Ap. J.

If you think that you can explain away dark energy by SN Ia differences, then you should work at it. Here are all of the papers on SN Ia; read them for a few weeks. If you can find a gap in logic or you can think of something original, then GREAT! Again, a letter to Ap. J. If you can mention how to fix the gap, then this is progress. Don't just say "I think there might be something weird about SN Ia that may cause distances to be off". Use your imagination and knowledge to figure out what it might be, then come up with some specific observations that can address the weirdness, write a paper, and upload it to the Los Alamos server.
Well, right, and this has been tried. I think the general consensus in the supernova community right now is that we actually know that there are some biases here, such that the measured expansion rate isn't quite right, but they're not large enough to change the overall picture.
 
  • #15
FayeKane said:
PS
Don't EVEN get me started on "dark matter"!

You do know that people have been detecting dark matter particles (neutrinos) in laboratories for over 50 years now? So could you please "start on it". :)
 
  • #16
twofish-quant said:
Won't work. If you change G and M enough to change the brightness of supernova Ia, then you change the brightness of all the other stars enough so that people will notice. Now if you can come up with a clever explanation of why a change in G and M would affect *only* SN Ia and nothing else (or maybe it's affecting something that we haven't measured) well... Let's see the explanation first...

I don't want to beat this idea to death - but to answer the above - in Chandrasekhar's limit, G and the mass of the star M do not appear as the product MG which determines the simple force relationship of gravity - therefore it is not proper to conclude that a diminished G would have the same effect on the brightness of other objects (those governed by the simple MG product).

In short - the changing G idea is probably of no merit - but not for the reason given above
 
  • #17
Chalnoth said:
Well, right, and this has been tried. I think the general consensus in the supernova community right now is that we actually know that there are some biases here, such that the measured expansion rate isn't quite right, but they're not large enough to change the overall picture.

That's the consensus, but there are enough gaps in our understanding of Sn Ia that it wouldn't hurt for someone to think some more about these issues. Also, once you get past the overall picture, then there is a lot of room for pinning down the exact numbers. Variations in SN Ia may not be enough to kill dark energy, but it is a good idea to get an exact number for how much difference it does make because it may matter for other things like the exact nature of dark matter.
 
  • #18
yogi said:
therefore it is not proper to conclude that a diminished G would have the same effect on the brightness of other objects (those governed by the simple MG product).

Except that the main sequence isn't governed by the simple MG product. If you change G, then you change the pressures/temperatures in stellar cores, which changes fusion rates, which changes just about everything else.
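Roughly (order-of-magnitude hydrostatic scaling only, nothing model-specific):

$$ P_c \sim \frac{G M^2}{R^4}, \qquad \epsilon_{\rm nuc} \propto \rho\,T^{\,n}\ \ (n \gg 1\ \text{for CNO burning}), $$

so even a modest shift in G moves central pressures and temperatures, and because the burning rates are such steep functions of temperature, the luminosities and lifetimes of ordinary stars move with them.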

In short - the changing G idea is probably of no merit - but not for the reason given above

Whoa here. Suppose you were to get around that objection - then why would changing G be of no merit?
 
  • #19
There is an industry of people thinking more about those issues. Google "Nearby Supernova Factory" for an example of an observational effort and "snob supernovae simulation" for an example of the theory. These are just the tips of the iceberg.

The consensus view at present is that SN have hit the 'systematic limit', meaning that we can't really learn anything more about cosmology by simply getting more high-z SN data of the quality we can get with current instruments (that includes the HST). The systematic errors (from possible evolution, among other things) are now greater than the statistical ones, and more data just beats down the statistics, not the systematics.
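Schematically, treating the systematic floor as a single irreducible term,

$$ \sigma_{\rm tot}^2 \approx \frac{\sigma_{\rm stat}^2}{N} + \sigma_{\rm sys}^2 , $$

so once the systematic term dominates, piling on more SN of the same quality (bigger N) stops buying you anything.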

The way out of this is to either make some theoretical breakthrough, so that the empirical relationships (such as the stretch correction) could be put on a firmer basis, get many many more low-z supernovae (as the NSF is doing), or preferably to get a shiny new instrument, such as the JDEM (Joint Dark Energy Mission) proposed by NASA/DOE and/or the Euclid mission proposed by ESA. When the James Webb Space Telescope comes along it will probably get some good SN data, although its wavelength coverage isn't perfectly suited for SN1A, so a dedicated mission would be better.

Edit: On the varying-G issue, there are plenty of papers looking at this as well. You can't just take the standard equations and simply change G -> G(t), though, since that would (in general) violate Lorentz invariance. To do it properly you need to add an extra scalar field to Einstein's equations, and in general G will vary with time and space, although the spatial perturbations are relatively small (at least in models obeying observational constraints). Saying 'varying G' then becomes equivalent to a class of modified gravity models containing extra scalar fields, and these are being looked at in significant detail at present.
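As a concrete example of what "adding a scalar field" means, the simplest scalar-tensor case is Brans-Dicke gravity (standard textbook action, quoted just for illustration, not from any particular paper discussed here):

$$ S = \frac{1}{16\pi}\int d^4x\,\sqrt{-g}\left(\phi R - \frac{\omega}{\phi}\,\partial_\mu\phi\,\partial^\mu\phi\right) + S_{\rm matter}, $$

where the effective Newton constant is roughly 1/phi, so a time- and space-varying scalar is what "varying G" actually looks like in a covariant theory.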

Note that it's a strawman to focus on SN alone, though; the evidence for DE is strong, and comes at least as much from structure formation as from SN. It is from structure formation that you get the tightest constraints on gravity variation, and those constraints mean that, for gravity models obeying them, the internal physics of SN explosions is unaffected compared to the standard model. You can't simply look at one data set in isolation.
 
  • #20
twofish-quant said:
That's the consensus, but there are enough gaps in our understanding of Sn Ia that it wouldn't hurt for someone to think some more about these issues.
The supernova community is very much interested in these issues, and they're continuing to work on them. There really isn't a problem there.

twofish-quant said:
Also, once you get past the overall picture, then there is a lot of room for pinning down the exact numbers. Variations in SN Ia may not be enough to kill dark energy, but it is a good idea to get an exact number for how much difference it does make because it may matter for other things like the exact nature of dark matter.
Naturally, and one way of doing that is to make use of different measures of expansion entirely. When they start to disagree (which they really haven't yet), then we can say there's something interesting to say about cosmology from improving on the systematic errors.
 
  • #21
twofish-quant said:
Except that the main sequence isn't governed by the simple MG product. If you change G, then you change the pressures/temperatures in stellar cores, which changes fusion rates, which changes just about everything else.

If the MG product remains constant, forces remain the same - if G decreases and M proportionately increases, there is no change in the force - and this is what governs the burning rate of stars. In Chandrasekhar's equation, G and M are not simply related, since the theory is based upon the equation of state of the degenerate electrons.



Whoa here. Suppose you were to get around that objection - then why would changing G be of no merit?

It's probably of no merit because the accelerating universe is most likely the correct interpretation - it comports well with the idea of a negative pressure that can be derived from the critical density in some models - there is more physics to support acceleration than to support changing inertial mass and complementary changes in G.
 
  • #22
Chalnoth said:
The supernova community is very much interested in these issues, and they're continuing to work on them. There really isn't a problem there.

Yup, but there is always room for more people in the party. I'm a somewhat inactive member of the supernova community. One thing that I'd really like to do is to start pushing out some of the theoretical codes that are out there, because I think there may be some room for semi-amateurs to get into the game. One thing that I think will happen (although it may take a few years) is that computers are going to blur the line between amateur and professional astronomers.

One interesting side bit is that there is a deep connection between the physics of SN Ia and automobile engine design. What happens in an SN Ia is a delayed detonation. Things start burning very quickly and then shock waves start to form. This is known in layman's terms as "pinging".
 
  • #23
SN detonation codes are very complex. It's not just a matter of sheer grunt; there's a lot of complex physics you have to think hard about implementing. The gap between Pro and Amateur is not just about the number of processor cycles available to you!
 
  • #24
Wallace said:
SN detonation codes are very complex. It's not just a matter of sheer grunt; there's a lot of complex physics you have to think hard about implementing. The gap between Pro and Amateur is not just about the number of processor cycles available to you!
Right, I've seen some of the recent work that's gone into these supernova simulations, and the difficulties remaining are pretty darned impressive. To date, we don't even know for sure what the mechanism that causes the explosion is in the first place!

The basic problem is that starting the collapse of the pre-supernova star is easy, but that collapse causes a shockwave that tends to stall before blowing apart the star. The precise mechanism for this shockwave to "unstick" and cause a supernova is not understood.
 
  • #25
I can't remember who it was, but I saw a talk recently by someone who'd done some interesting work on asymmetric SN1a explosions, caused I think by some larger-than-normal lump accreting onto the white dwarf just as it crosses the Chandrasekhar limit. The claim was that the simulation gave you an abnormal SN1a, but that the results matched well to some known 'unusual' 1a's. It was an interesting talk, but it was clear that a lot of work remained to be done.
 
  • #26
Wallace said:
SN detonation codes are very complex.

I know. I used and extended one for my Ph.D. dissertation.

It's not just a matter of shear grunt, there's a lot of complex physics you have to think hard about implementing.

What normally happens with these codes is that you have a ton of physics that has already been added, and you are interested in adding one small part. Often that physics isn't terribly complex, since you are often just adding something phenomenological. I think that someone with a junior-undergraduate-level understanding of physics can make themselves useful with these sorts of codes, since the amount of expertise that needs to be added is roughly that of a lot of the open source science codes that are on the net.

And there is a ton of stuff that is more or less grunt work.

The gap between Pro and Amateur is not just about the number of processor cycles available to you!

I think a lot of it involves social networks, and time and energy available. Also, one curious thing is that the more CPU you have available, the simpler the code turns out to be. A lot of the complexity in SN simulation revolves around making approximations to get around CPU limits.
 
  • #27
Chalnoth said:
Right, I've seen some of the recent work that's gone into these supernova simulations, and the difficulties remaining are pretty darned impressive. To date, we don't even know for sure what the mechanism that causes the explosion is in the first place!

For SNII's, that's not a product of the simulation but rather the fact that there is a major piece of the puzzle that is missing.

The basic problem is that starting the collapse of the pre-supernova star is easy, but that collapse causes a shockwave that tends to stall before blowing apart the star.

And that happens because you have massive neutrino losses. One big "problem" in supernova simulations is that the obvious fix of somehow magically increasing the neutrino absorption rate just won't work, because we are dealing with densities, energies, and particles that are supposedly well known from electroweak theory.

The type of problem that an amateur could be useful with is to run the simulation a hundred times with a "magic coupling" factor, and then analyze the results to see how the evolution of the explosion changes once you hit the limit at which you get an explosion. Or, if they understand *nothing* at all about supernovae but just have basic programming skills, writing scripts that convert the output of the codes into something more user-friendly would be useful.
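Just as a sketch of what that kind of survey looks like in practice (the run_1d_collapse() wrapper and the numbers below are entirely made up - you'd replace the toy stand-in with a call to whatever 1-d code you actually have):

Code:
# Hypothetical parameter survey over a "magic" neutrino-coupling factor.
# Everything here is illustrative; the toy model below just stands in for
# a real 1-d core-collapse code.

def run_1d_collapse(coupling):
    """Toy stand-in: pretend the model explodes once the artificial
    neutrino-heating enhancement exceeds some threshold, and return
    (exploded, explosion_energy_in_erg)."""
    threshold = 1.6                      # made-up number
    exploded = coupling > threshold
    e_exp = 1.0e51 * (coupling - threshold) if exploded else 0.0
    return exploded, e_exp

def survey(couplings):
    """Run the model once per coupling factor and collect the results."""
    return [(c, *run_1d_collapse(c)) for c in couplings]

if __name__ == "__main__":
    factors = [1.0 + 0.02 * i for i in range(100)]   # 100 runs, 1.0 to ~3.0
    for c, exploded, e_exp in survey(factors):
        print(f"coupling={c:.2f}  exploded={exploded}  E_exp={e_exp:.2e} erg")

The interesting part is then the post-processing: how the explosion energy and ejecta behave as you cross the threshold, which is exactly the kind of analysis that doesn't need a supercomputer.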
 
  • #28
twofish-quant said:
And that happens because you have massive neutrino losses. One big "problem" in supernova simulations is that the obvious fix of somehow magically increasing the neutrino absorption rate just won't work, because we are dealing with densities, energies, and particles that are supposedly well known from electroweak theory.
This isn't necessarily the case. Like I said, it's not clear precisely what's causing the explosion. Many of the codes, for instance, are either one-dimensional or two-dimensional. It is possible that the explosion only occurs if it uses the full three-dimensional behavior of the system.

The basic idea there is that the stalled shock front is an unstable equilibrium, but the simulations that assume spherical or cylindrical symmetry don't have enough freedom to 'unstick' the front. Whereas with a full three-dimensional simulation, instabilities can quickly grow, leading to an eventual explosion.
 
  • #29
Chalnoth said:
Like I said, it's not clear precisely what's causing the explosion.

It's pretty clear what's causing things not to explode. You just have massive energy losses due to neutrino interactions, and that kills the explosion. The name of the game has been to somehow push some of the neutrino energy back into the material.

Many of the codes, for instance, are either one-dimensional or two-dimensional. It is possible that the explosion only occurs if it uses the full three-dimensional behavior of the system.

In that case it's either a hydro effect, an asphericity effect, or a radiation effect. Having a three-d simulation that explodes when a 2-d one doesn't isn't, by itself, useful. You should be able to run the 3-d simulation, explain why 3-d causes a difference, and then work that back into a 1-d code.

The problem with 3-d codes is that even those are missing some important and possibly critical physics. In order to get good hydro, you have to skimp on something else.

The basic idea there is that the stalled shock front is an unstable equilibrium

Been away from the literature for a while, so something new may have come up in the last year, but the stalled shock front is not an unstable equilibrium. If you start reviving the shock, the shock jump conditions increase the shock heating rate, which increases the neutrino losses. So a stalled shock is quite stable for type II's (type Ia is a whole other ball game).

the simulations that assume spherical or cylindrical symmetry don't have enough freedom to 'unstick' the front. Whereas with a full three-dimensional simulation, instabilities can quickly grow, leading to an eventual explosion.

I need to review the literature on type II's over the last year to see if someone has come up with something new, but off the top of my head I don't see how this is going to work. If you have instabilities that affect the shock itself, then you are hosed because you don't have nearly the resolution to see the shock itself. If you have the instabilities develop behind the shock then you have the problem that I mentioned earlier.

One big problem with full three-d simulations is that if you have very detailed hydrodynamics, then most of the time they are using much less detailed neutrino physics, and if you use less detailed neutrino physics, it's not obvious whether the explosions that you are getting are just the result of a crude neutrino algorithm that reduces the losses. If you are using 3-d hydro but 1-d neutrino physics, it's pretty easy to come up with a calculation that is inconsistent.
 
  • #30
Just looking over some literature. If you are talking about the standing accretion shock instability, then it's far from obvious to me how that is going to revive a stalled shock, because it's not obvious how it deals with the fundamental problem of why the shock stalls. If you read the papers very carefully, you'll see a lot of "hopeful" language (i.e. this *could* be responsible, this *might*, etc.).

One pattern with 3-d simulations is that the first ones tend to have explosions, because if you focus on the hydro you have to skimp on the neutrino physics. Then, as you put in better neutrino physics, the explosion disappears. Neutrinos act as a dissipative element, and any time you have a dissipative element, it tends to damp instabilities.
 
  • #31
twofish-quant said:
In that case it's either a hydro effect, an asphericity effect, or a radiation effect. Having a three-d simulation that explodes when a 2-d one doesn't isn't, by itself, useful. You should be able to run the 3-d simulation, explain why 3-d causes a difference, and then work that back into a 1-d code.
Nope, because the explosion in those cases arises from asymmetric instabilities, which cannot be modeled in one dimension.

twofish-quant said:
I need to review the literature on type II's over the last year to see if someone has come up with something new, but off the top of my head I don't see how this is going to work. If you have instabilities that affect the shock itself, then you are hosed because you don't have nearly the resolution to see the shock itself. If you have the instabilities develop behind the shock then you have the problem that I mentioned earlier.
Basically, from the simulations I've seen, what happens is that small oscillations lead to oscillation of the shock front along one axis (e.g. up/down). Those oscillations then grow until the shock front is destabilized and the supernova explodes.

twofish-quant said:
One big problem with full three-d simulations is that if you have very detailed hydrodynamics, then most of the time they are using much less detailed neutrino physics, and if you use less detailed neutrino physics, it's not obvious whether the explosions that you are getting are just the result of a crude neutrino algorithm that reduces the losses. If you are using 3-d hydro but 1-d neutrino physics, it's pretty easy to come up with a calculation that is inconsistent.
From what I can tell, this is a big reason why we still don't know what's causing the explosions.
 
  • #32
Chalnoth said:
Nope, because the explosion in those cases arises from asymmetric instabilities, which cannot be modeled in one dimension.

If it's something like the SASI, then once you understand the basic physics, you can insert it into a 1-d code, either by incorporating a neutrino enhancement term or by including a drag term behind the shock. The strategy is to use 3-d models to tune the parameters for a 1-d model that has the basic physics. You can then use the 1-d model for things like nucleosynthetic calculations.
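To be concrete about what "a term behind the shock" might look like - this is only a sketch, with a functional form and parameter names I've invented, not anything from a published code - the 1-d source term could be as simple as:

Code:
# Hypothetical phenomenological heating term applied only behind the shock,
# with an amplitude and decay length that would be tuned against 3-d runs.
import numpy as np

def extra_heating(r, r_shock, amplitude, width):
    """Extra specific heating rate [erg/g/s]: zero outside the shock,
    decaying exponentially with distance behind the shock front."""
    q = np.zeros_like(r)
    behind = r < r_shock
    q[behind] = amplitude * np.exp(-(r_shock - r[behind]) / width)
    return q

# Example: radial grid out to 500 km, shock stalled at 200 km (made-up values)
r = np.linspace(1.0e6, 5.0e7, 500)                      # cm
q = extra_heating(r, r_shock=2.0e7, amplitude=1.0e21, width=5.0e6)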

If it's a low dimensional axisymmetric instability then you can incorporate that sort of physics in 1-d.

Basically, from the simulations I've seen, what happens is that small oscillations lead to oscillation of the shock front along one axis (e.g. up/down). Those oscillations then grow until the shock front is destabilized and the supernova explodes.

And I'll be skeptical that this really does solve the explosion problem until someone explains how it overwhelms the stabilizing effects of neutrino losses. This is a *very* interesting line of research, but since the 1970's, the story of type II supernova modeling has been one where you have an interesting effect that turns out not to work after you put in more realistic neutrino physics. It doesn't help that people aren't completely sure that energy is conserved, and that there are at least three mechanisms for what might be going on (SASI, acoustic coupling, and MRI).

So I'm not breaking open the champagne yet. :-) :-)

From what I can tell, this is a big reason why we still don't know what's causing the explosions.

The basic problem with type II supernova is that there are about ten different things that are going on, all of which may interact with each other in very complicated ways.
 
  • #33
I'm fairly confident there are no conservation of energy issues, tq, just confused mathematical models.
 
  • #34
Wallace said:
I can't remember who it was, but I saw a talk recently by someone who'd done some interesting work on asymmetric SN1a explosions, caused I think by some larger-than-normal lump accreting onto the white dwarf just as it crosses the Chandrasekhar limit. The claim was that the simulation gave you an abnormal SN1a, but that the results matched well to some known 'unusual' 1a's. It was an interesting talk, but it was clear that a lot of work remained to be done.

Was the talk similar to this discussion? Nebular Spectra and Explosion Asymmetry of Type Ia Supernovae

Garth
 
  • #35
Chronos said:
I'm fairly confident there are no conservation of energy issues, tq, just confused mathematical models.

I'm not so sure. Getting global energy to conserve in a hydro-simulation is much, much harder than it seems. There is a strong likelihood that the first time you run the code, it *won't* conserve energy and you'll be spending a month trying to figure out why.

Even trying to check if you've conserved energy in a 3-d Euler simulation is extremely non-trivial. You have matter falling into the simulation. You have radiation leaving the simulation, you have non-trivial matter-radiation interactions.
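The bookkeeping itself is simple to write down, which is part of what makes it deceptive - something like the sketch below (the diagnostic names and numbers are mine, purely for illustration; the hard part is getting your code to report these quantities accurately, not the arithmetic):

Code:
# Hypothetical global energy-budget check for a hydro run.  All inputs are
# cumulative diagnostics the simulation would have to track.

def energy_residual(e_total_now, e_total_start, e_inflow, e_radiated, e_burn):
    """Relative drift of the global energy budget (dimensionless).

    e_total_now, e_total_start : kinetic + internal + gravitational energy
        on the grid now and at t = 0                              [erg]
    e_inflow   : energy advected in through the outer boundary    [erg]
    e_radiated : energy carried off by neutrinos/radiation        [erg]
    e_burn     : energy released by nuclear burning               [erg]
    """
    expected = e_total_start + e_inflow - e_radiated + e_burn
    return (e_total_now - expected) / abs(expected)

# Flag anything much worse than a part in 1e4 per run as suspicious.
print(energy_residual(1.020e51, 1.000e51, 3.0e49, 1.2e49, 2.0e48))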

One problem with hydro simulations is that even very, very slight differences in energy balance will cause reasonable but incorrect results. It's not the huge bugs that you worry about, since huge bugs are obvious. It's the subtle complicated ones that keep you up at nights. There is also the fact that all numerical codes will have bugs. This is why "run the simulation and declare victory" won't work.

In any case, I won't feel confident in breaking out the champagne until you have three or four groups with different simulations and code bases, come up with the same basic mechanism for an explosion.
 

1. What is dark energy and how is it different from dark matter?

Dark energy is a hypothetical form of energy that is thought to make up about 68% of the total energy content of the universe. It is believed to be responsible for the observed accelerated expansion of the universe. On the other hand, dark matter is a type of matter that does not interact with light and makes up about 27% of the total energy content of the universe. It is thought to play a crucial role in the formation and evolution of galaxies.

2. How do scientists study dark energy and dark matter?

Scientists study dark energy and dark matter through a variety of methods, including observations of the large-scale structure of the universe, measurements of the cosmic microwave background radiation, and simulations using supercomputers. They also use data from gravitational lensing, which is the bending of light by the gravitational pull of dark matter, to study its distribution in the universe.

3. What are the current challenges in understanding dark energy and dark matter?

One of the main challenges in understanding dark energy and dark matter is that they cannot be directly observed or detected using traditional telescopes or instruments. This makes it difficult for scientists to study their properties and behavior. Additionally, the nature of dark energy and dark matter is still largely unknown, making it challenging to develop theories and models to explain their existence.

4. What are some proposed explanations for dark energy and dark matter?

There are several proposed explanations for dark energy and dark matter. Some theories suggest that dark energy is a property of space itself, while others propose the existence of a new type of energy field. As for dark matter, some theories suggest that it could be made up of particles that have not yet been discovered, such as weakly interacting massive particles (WIMPs) or axions.

5. How does understanding dark energy and dark matter contribute to our understanding of the universe?

Understanding dark energy and dark matter is crucial for our understanding of the universe and its evolution. These two mysterious components make up the majority of the energy content of the universe and play a significant role in its structure and expansion. By studying them, scientists can gain insights into the formation of galaxies, the fate of the universe, and potentially uncover new physics and fundamental laws of the universe.
