Undergrad Questions about the lifecycle of stars

SUMMARY

The discussion focuses on the lifecycle of stars, addressing key concepts such as mass effects on volume, fusion rates, and the roles of hydrogen and helium in stellar evolution. It is established that larger stars burn fuel faster due to increased gravitational pressure, leading to a quicker end of the fusion process. The conversation also clarifies that electron degeneracy pressure prevents core collapse in low to medium mass stars, while exceeding the Chandrasekhar limit results in different outcomes, including carbon fusion and potential supernova events.

PREREQUISITES
  • Understanding of stellar evolution and lifecycle stages
  • Knowledge of nuclear fusion processes in stars
  • Familiarity with concepts of gravitational pressure and electron degeneracy pressure
  • Awareness of the Chandrasekhar limit and its implications for white dwarfs
NEXT STEPS
  • Research the process of helium fusion in stars and its energy release mechanisms
  • Explore the implications of the Chandrasekhar limit on stellar remnants
  • Study the differences between low mass and high mass star evolution
  • Learn about the conditions leading to supernova explosions and black hole formation
USEFUL FOR

Astronomy students, astrophysicists, and anyone interested in understanding stellar evolution and the physical processes governing the lifecycle of stars.

  • #31
Ken G said:
The situation for a red giant is completely different, where you really need to think of a red giant as three separate entities, coexisting and controlling each other. A red giant is a degenerate core, a fusing shell around that core, and a puffed out envelope. The degenerate core has a strong gravity that sets the temperature of the fusing shell, and this is not at all how core fusion works, because core fusion self-regulates its own temperature as I mentioned. Shell fusion has its temperature dictated to it, and is typically quite hot, so the fusion rate just goes nuts. That's why red giants are so bright. In fact, red giants would be so bright they'd explode like supernovae, if not for the fact that this heat goes into the envelope and puffs it out. Puffing out the envelope reduces the weight on the fusing shell, which reduces the density and amount of gas in the fusing shell, which dials down the fusion rate even though the temperature is very high. So we should say that core fusion self-regulates its own temperature, while shell fusion self-regulates its density and amount of material. The difference there makes the latter way brighter.

Consider the other limit: a white or nearly white dwarf.
In that case, the gravity of the degenerate core is counteracted by the core's degeneracy pressure, which sets no lower bound on temperature - a white dwarf can cool without contracting.
Now suppose that there is a thin layer of fusible material on top of the degenerate core. It might be cold, and not undergo any fusion. If it is thin - compared to the radius of the underlying degenerate core - then its pressure is independent of its temperature, because it is simply dictated by the weight of the shell.
What prevents a thin layer of fusible material on top of a degenerate core from cooling down and also going degenerate?
Or is it what happens when a red giant turns into a white dwarf? Then what keeps a shell fusing/allows it to go out in due time?
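The pressure snorkack describes is just the weight of the thin layer per unit area: ##P = g\,\Delta M / (4\pi R^2)## with ##g = GM/R^2##, and temperature never enters. A quick numerical sketch (the white-dwarf mass, radius, and layer mass below are assumed round values, not figures from the discussion):

```python
import math

# Hydrostatic pressure at the base of a thin layer of mass dM resting on a
# degenerate core of mass M_core and radius R_core:
#   P = g * dM / (4*pi*R^2),  with  g = G*M_core/R^2
# Temperature never appears: the weight of the layer alone fixes the pressure.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30       # solar mass, kg

M_core = 0.6 * M_sun   # typical white-dwarf mass (assumed round value)
R_core = 7.0e6         # ~7000 km white-dwarf radius (assumed round value)
dM = 1.0e-5 * M_sun    # hypothetical thin accreted hydrogen layer

g = G * M_core / R_core**2                   # surface gravity, m/s^2
P_base = g * dM / (4 * math.pi * R_core**2)  # pressure at base of layer, Pa

print(f"g = {g:.2e} m/s^2, P_base = {P_base:.2e} Pa")
```

Whatever the layer's temperature does, that base pressure stays fixed until the layer's mass or the core's radius changes.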
 
  • #32
snorkack said:
If it is thin - compared to the radius of the underlying degenerate core - then its pressure is independent of its temperature, because it is simply dictated by the weight of the shell.
I'm with you so far.
What prevents a thin layer of fusible material on top of a degenerate core from cooling down and also going degenerate?
Nothing, that is the idea for how white dwarfs accrete matter and make type Ia supernovae. Of course, the opposite can also happen-- the thin layer gets very hot, and undergoes explosive fusion, creating classical novae. Whether the net result of accretion is explosion or accumulation is a big question for how type Ia SNe can happen via that channel (rather than via white dwarf merger).
Or is it what happens when a red giant turns into a white dwarf? Then what keeps a shell fusing/allows it to go out in due time?
I suppose the shells could burn out if their temperature drops, like a wildfire burning out. Thin-shell fusion is notoriously unstable, for just the reason you mention: expansion doesn't weaken the gravity, so adding a given amount of heat does not cause as much expansion, or enough adiabatic cooling, to stabilize it. This causes something known as "thermal pulses" in shell fusion.
 
  • #33
Ken G said:
Expansion already cools the star due to adiabatic cooling, which causes re-contraction. That means stars are "dynamically stable", they return if you kick them adiabatically. The self-regulation of the fusion is a different issue, it is the issue of thermal stability-- what happens if, instead of kicking the star, you put some excess heat into it. Excess heat causes expansion (and cooling) on the dynamical timescale, but on the much slower thermal timescale you then have to ask if the new configuration satisfies the energy balance. That's where the back-reaction on fusion comes in-- normally the expanded and cooled star will have slower fusion, yet a similar luminosity (remember the cancellation between the reduction in the light content due to the adiabatic cooling, coupled with the reduction in the leakage time for that light), so it suffers a net loss of heat and recontracts. If fusion stayed the same, and the luminosity stayed the same, if you put heat into a star it would merely expand and reach a new perfectly happy equilibrium-- neither continuing to expand nor recontracting. If the fusion rate actually increased when you added heat and the star expanded, then the fusion would add more heat, and you'd have a thermal instability. This is precisely what happens in a helium flash and in a type Ia supernova, where the electrons are degenerate.
Actually that's a very good question and requires understanding the key differences between main-sequence stars and red giants. The net total leakage rate (i.e., the luminosity) does not really change when a main-sequence star expands in force balance. What happens when you add heat to make the star expand is that the two effects I mentioned above cancel, and so the luminosity does not change much. But all this only applies to main-sequence stars, because it uses a very simple description where the star is treated as "all one thing." The situation for a red giant is completely different, where you really need to think of a red giant as three separate entities, coexisting and controlling each other. A red giant is a degenerate core, a fusing shell around that core, and a puffed out envelope. The degenerate core has a strong gravity that sets the temperature of the fusing shell, and this is not at all how core fusion works, because core fusion self-regulates its own temperature as I mentioned. Shell fusion has its temperature dictated to it, and is typically quite hot, so the fusion rate just goes nuts. That's why red giants are so bright. In fact, red giants would be so bright they'd explode like supernovae, if not for the fact that this heat goes into the envelope and puffs it out. Puffing out the envelope reduces the weight on the fusing shell, which reduces the density and amount of gas in the fusing shell, which dials down the fusion rate even though the temperature is very high. So we should say that core fusion self-regulates its own temperature, while shell fusion self-regulates its density and amount of material. The difference there makes the latter way brighter.

Ah, I think I got it now. Can I say that since the shell in a red giant has its own dedicated temperature source (from the degenerate core), the temperature of that shell isn't affected by expansion as much as the core temperature of an expanding main-sequence star would?
 
  • #34
JohnnyGui said:
Ah, I think I got it now. Can I say that since the shell in a red giant has its own dedicated temperature source (from the degenerate core), the temperature of that shell isn't affected by expansion as much as the core temperature of an expanding main-sequence star would?

Hardly. A degenerate core is not a source of energy.
But compare the limiting case of a thin layer of fusible material on top of a cold white dwarf.
If the thin layer starts to heat up because of fusion, it will lose heat at an increasing rate: increasing because of radiation upward into space, and also increasing because of conduction downward into the cooler core.
Yet the expansion will only slow the rate of temperature growth - it will not cause actual cooling. Expanding a thin layer to 8 times its previous volume requires heating it to 8 times its previous temperature. Expanding a self-gravitating core to 8 times its previous volume causes its temperature to fall to 1/2 of its previous temperature.
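The two scalings quoted here follow from the ideal gas law and the virial theorem: a thin shell at fixed pressure has ##T \propto V##, while a self-gravitating sphere has ##T \propto M/r \propto V^{-1/3}##. A minimal check of both factors:

```python
# Two limits for an ideal gas expanded to 8x its volume:
# (1) thin shell at fixed pressure (pressure set by the overlying weight):
#     P*V = N*k*T  =>  T grows in proportion to V
# (2) self-gravitating sphere (virial theorem): T ~ M/r ~ V**(-1/3)
expand = 8.0                              # volume increases by a factor of 8

T_factor_thin_shell = expand              # temperature must rise 8x
T_factor_self_grav = expand ** (-1 / 3)   # temperature falls to 1/2

print(T_factor_thin_shell, T_factor_self_grav)
```

The opposite signs of the responses are the whole point: heating a thin shell makes it want to heat further, while heating a self-gravitating sphere cools it.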
 
  • #35
JohnnyGui said:
Ah, I think I got it now. Can I say that since the shell in a red giant has its own dedicated temperature source (from the degenerate core), the temperature of that shell isn't affected by expansion as much as the core temperature of an expanding main-sequence star would?
Yes, you can indeed say that. As the envelope puffs out, what drops is the amount of material and the density in the fusing shell, not its temperature. That is indeed the key difference with stars fusing in their cores.
 
  • #36
snorkack said:
Hardly. A degenerate core is not a source of energy.
But it is a very powerful source of gravity, which does serve as an energy source for the shell around it.
Expanding a thin layer to 8 times its previous volume requires heating it to 8 times its previous temperature. Expanding a self-gravitating core to 8 times its previous volume causes its temperature to fall to 1/2 of its previous temperature.
Yes, your former point is what leads to thermal pulses in shell sources, and also to classical novae, while your latter point is what stabilizes fusion in the Sun right now. But in the case of red giants, the fusing shell is not thin compared to the degenerate core, so is dynamically stabilized in a way that is more like core fusion.
 
  • #37
Ken G said:
Yes, you can indeed say that. As the envelope puffs out, what drops is the amount of material and the density in the fusing shell, not its temperature. That is indeed the key difference with stars fusing in their cores.

Then what happens to the top of the formerly fusing shell?
If a fusing shell expands on heating, the bottom of the shell expands, but does not move. The top of the shell moves outwards.
What turns the upper part of the shell from fusing to nonfusing as it expands?
 
  • #38
snorkack said:
Then what happens to the top of the formerly fusing shell?
If a fusing shell expands on heating, the bottom of the shell expands, but does not move. The top of the shell moves outwards.
What turns the upper part of the shell from fusing to nonfusing as it expands?
What stays the same is the temperature as a function of radius, not the temperature of a given parcel of gas. Thus, "the fusing shell" is defined by a temperature layer, not a given set of gas. As the gas expands, it simply leaves the fusing shell, while the temperature of the fusing shell continues to be kept fixed by the gravity from the core. Thus we should say that the fusing shell self-regulates how much material it contains and what is the density of that material, such that you don't have the same material in the shell that you had prior to the expansion. Note also that the expansion we are talking about here is happening on the evolutionary timescale, not the dynamical timescale where we have the usual adiabatic dynamical stability, and not on the thermal timescale where we have solar-like thermostatic stability. What the degenerate core controls is the temperature at which the thermostat is "set to," and that's the main difference from main-sequence stars-- in the latter, the thermostat is set to the temperature that allows fusion to replace the heat leaking out.
 
  • #39
Let me see if I have this straight:

When hydrogen fusion ceases in the core of a solar-mass star, the core contracts until it is a hot, degenerate mass of helium. This contraction increases the gravitational pull on the shell of hydrogen just outside the core. This increased gravity causes the shell to compress and heat up until it reaches fusion temperatures. But because the gravity is so high, the temperature needed to stabilize it against further contraction is much higher than the temperature in the main-sequence core. This causes the fusion rate to skyrocket until it provides enough energy to offset the energy loss from the shell and to puff out the shell and outer envelope. This reduces the density of material in the shell, stabilizing the fusion rate by way of limiting the amount of fusion fuel in the shell.

Is that mostly correct?
 
  • #40
Yes, that sounds good to me.
 
  • #41
Ken G said:
What stays the same is the temperature as a function of radius, not the temperature of a given parcel of gas. Thus, "the fusing shell" is defined by a temperature layer, not a given set of gas. As the gas expands, it simply leaves the fusing shell, while the temperature of the fusing shell continues to be kept fixed by the gravity from the core.
Why would anything stay the same?
For a star that is 100 % fusible sphere, no core, the temperature decreases with expansion - falls 2 times when volume increases 8 times.
For a star that is nearly 100 % exhausted core, thin fusible shell, the temperature rises with expansion - rises 8 times when volume increases 8 times.
Core size is a continuous argument. You can have a star which is 5 % exhausted, 95 % fusible, or 50 % exhausted 50 % fusible, or 95 % exhausted 5 % fusible.
At which core size, as a fraction of total star mass, does the heat capacity go from negative to infinite?
 
  • #42
snorkack said:
Why would anything stay the same?
The goal is to understand. Hence, looking for things that stay the same is a device for understanding, a useful tool if you will.
For a star that is 100 % fusible sphere, no core, the temperature decreases with expansion - falls 2 times when volume increases 8 times.
Yes, for homologous expansion that is correct. Of course, a star with a degenerate core does not expand homologously, which is the point.
For a star that is nearly 100 % exhausted core, thin fusible shell, the temperature rises with expansion - rises 8 times when volume increases 8 times.
As I said above, the fusing shell is not thin compared to the core, so a red giant does not follow either of these categories as a whole, but the fusing shell in a red giant responds more like the first category, i.e., its fusion is dynamically stabilized. The hydrogen fusing shell in an asymptotic giant is further out in the star and can act more like the second category, which can lead to unstable fusion events called thermal pulses.
Core size is a continuous argument. You can have a star which is 5 % exhausted, 95 % fusible, or 50 % exhausted 50 % fusible, or 95 % exhausted 5 % fusible.
That is correct, the physics of a red giant is very much controlled by the mass of the core. Hence, red giants change as the core mass grows-- their radius and luminosity increases, all for reasons that are readily understandable.
At which core size, as a fraction of total star mass, does the heat capacity go from negative to infinite?
There is a period of evolution after the center runs out of fusible material, a transition from one way of describing the situation to another, and during that transition, the changes in heat capacity occur (though remember that one of the changes is that the star ceases to respond homologously so no longer can be characterized as having a single gravothermal heat capacity). At first, the center is not important, it is not even degenerate, and the star is not recognizably different from a main-sequence star. With time, a degenerate core builds up, but the star is still not a red giant until the core gets enough mass to start to dictate the structure of the star. Also, the envelope is not fully convective either. This intermediate phase is called the "subgiant" phase and is of course more difficult to understand, being a transitional phase. However, once the core mass has built up enough that it is dictating the structure to the rest of the star (roughly when the binding energy of the core is comparable to the binding energy of the rest of the star, which does not take a lot of core mass because it is so highly contracted), at this point we can regard the object as an evolving red giant (evolving as the core mass builds up more and more), and we can understand it via the means I've described above. By the time the core mass reaches about 0.5 solar masses, regardless of the mass of the rest of the star, the structure that this core mass dictates controls both the maximum luminosity the giant reaches, and the point where helium fusion initiates in the core. This is why the evolutionary tracks of all red giants are quite similar.
 
  • #43
snorkack said:
At which core size, as a fraction of total star mass, does the heat capacity go from negative to infinite?
In the largest stars, which have cores fusing into iron and nickel, that is the end of the game.
While outer layers may still be burning carbon and silicon for a while, it doesn't last long.
A neutron star is the next stage, but it might not be stable, so a supernova happens instead.
 
  • #44
Ken G said:
However, once the core mass has built up enough that it is dictating the structure to the rest of the star (roughly when the binding energy of the core is comparable to the binding energy of the rest of the star, which does not take a lot of core mass because it is so highly contracted), at this point we can regard the object as an evolving red giant (evolving as the core mass builds up more and more), and we can understand it via the means I've described above. By the time the core mass reaches about 0.5 solar masses, regardless of the mass of the rest of the star, the structure that this core mass dictates controls both the maximum luminosity the giant reaches, and the point where helium fusion initiates in the core. This is why the evolutionary tracks of all red giants are quite similar.

Stars which are more massive have convective cores while on the main sequence. Therefore protium is exhausted over the whole core. If a star starts with a core mass over 0.5 solar, what does it look like as a red giant?
 
  • #45
snorkack said:
Stars which are more massive have convective cores while on the main sequence. Therefore protium is exhausted over the whole core. If a star starts with a core mass over 0.5 solar, what does it look like as a red giant?
Ah, important question. When the core is already over 0.5 solar masses when the star leaves the main sequence, it cannot go degenerate at all, because it will start fusing helium first. Thus, it will never be a red giant, it will not have a degenerate core that dictates to the structure of the rest of the star and its luminosity will not rise as a result. This is why high-mass stars evolve more or less horizontally across the H-R diagram, they keep their radiative-diffusive luminosity, but they do puff out in radius as their core contracts (still as an ideal gas), and the core and envelope are still separated by a shell of fusion-- but that shell does not have an uncomfortably high temperature dictated to it, it simply continues to fuse to replace the light that leaks out. Since the total rate that light leaks out is not dependent on the radius of the star (when to a first approximation the cross section per gram stays fairly constant), and the fusion temperature is not dictated to it by some hugely contracted degenerate core, it can self-regulate its temperature to simply replace the heat being lost, and that doesn't require increasing the luminosity to get a huge puffing of the envelope. Ironically, this type of star is called a "red supergiant", but it doesn't puff out as much as red giants do in a relative sense, because high-mass main-sequence stars start out larger and lower density in the first place. That's why they can remain ideal gases the whole time.

Incidentally, there is an intermediate mass range, say 2 - 8 solar masses, where the transition between these phases happens very suddenly, creating what is known as the "Hertzsprung gap" observed in the H-R diagram. This sudden core contraction happens because of a gravitational instability that exists for a non-fusing core in that mass range, but that instability only happens to ideal gases, and the core never goes degenerate for the reason you are asking about-- it would have more than 0.5 solar masses by the time it would otherwise go degenerate, so it just starts fusing helium instead.
 
  • #46
Hi @Ken G
Just got to comment ...
thanks for your excellent posts in this thread. You have been filling in a number of holes in my knowledge of stellar physics :smile:

Dave
 
  • #47
Ken G said:
Ah, important question. When the core is already over 0.5 solar masses when the star leaves the main sequence, it cannot go degenerate at all, because it will start fusing helium first. Thus, it will never be a red giant, it will not have a degenerate core that dictates to the structure of the rest of the star and its luminosity will not rise as a result. This is why high-mass stars evolve more or less horizontally across the H-R diagram, they keep their radiative-diffusive luminosity, but they do puff out in radius as their core contracts (still as an ideal gas), and the core and envelope are still separated by a shell of fusion-- but that shell does not have an uncomfortably high temperature dictated to it, it simply continues to fuse to replace the light that leaks out. Since the total rate that light leaks out is not dependent on the radius of the star (when to a first approximation the cross section per gram stays fairly constant), and the fusion temperature is not dictated to it by some hugely contracted degenerate core, it can self-regulate its temperature to simply replace the heat being lost, and that doesn't require increasing the luminosity to get a huge puffing of the envelope. Ironically, this type of star is called a "red supergiant", but it doesn't puff out as much as red giants do in a relative sense, because high-mass main-sequence stars start out larger and lower density in the first place. That's why they can remain ideal gases the whole time.
Why, then, would red supergiants puff out at all?
 
  • #48
davenn said:
Hi @Ken G
Just got to comment ...
thanks for your excellent posts in this thread. You have been filling in a number of holes in my knowledge of stellar physics :smile:

Dave
You are more than welcome! I have indeed found it hard to find this information in most sources, they generally don't do a great job past the main sequence.
 
  • #49
snorkack said:
Why, then, would red supergiants puff out at all?
Also an important question. The most natural expectation when fusion runs out in the core is that the star would simply continue the same homologous contraction it was doing prior to the onset of fusion. One might expect fusion to simply be a long pause in this inexorable homologous contraction while there is a net loss of heat. But the "homologous" in the above essentially means "as though the star were basically all one thing," but that's only true up to (and including, mostly) the main sequence. After core fusion ends, what reverses the contraction of the outer radius of the star is a very significant break in the homology-- the star can no longer be treated as all one thing, it must be treated as three things, a core, a fusing shell, and an envelope.

Thus the key difference between red giants and red supergiants is how compact the core is, and as a result, what its gravitational effect on the rest of the star is. A red giant is a remarkable object that has a core with a volume some one trillionth of the volume of the rest of the star, yet the gravitational binding energy of that core comes to vastly exceed the binding energy of the rest of the object! It's a crucial feature that generates a similarly small fusion engine, a shell around that core (though not thin relative to the core) that is responsible for the huge luminosity of the star because the temperature is forced (by the core) to be very high (by fusion standards). Fusion goes nuts, the rest of the star must dial it down by removing weight from the shell, and that's why red giants puff out, in ways all controlled by the mass of that little tiny degenerate core. The luminosity goes way up because the light need only diffuse through the tiny shell mass before it gets picked up and efficiently carried by the convective envelope, so when the light leaks out so quickly and easily, the luminosity must rise.

In red supergiants, the homology is still broken, and the star must be treated as three things rather than one, and that's why the envelope puffs out rather than contracting as the core contracts. But here the core remains an ideal gas, so it never gets to the huge gravitational scale of a red giant and it never dictates a high temperature to the shell. Nevertheless, as the core contracts, the shell temperature does rise, so the fusion does overproduce a bit, and does need to lift off some weight by puffing out the envelope. But this doesn't change the luminosity much, it remains mostly the same radiative diffusion process it was on the main sequence, because radius is not a key factor in radiative diffusion. The key difference is that in a red supergiant, the virialized temperature of the fusion zone is set by the mass and radius of the fusion zone, and this represents a significant fraction of the stellar mass-- it's a little like that middle zone is still more or less a main-sequence star of its own, with a hole punched out of its center that has a tendency to contract and force up the fusion temperature above its equilibrium value, which is compensated by having the envelope puff out to lift off weight and keep the fusion rate nearly fixed by the nearly-constant radiative diffusion through the fusion zone. In red giants, the tiny mass of the fusion zone has no input into its temperature, that's all controlled by the significantly more massive core, so the fusion rate and luminosity shoot up before the puffing out of the envelope can finally recover an equilibrium at this new crazy fusion temperature. The red supergiant luminosity might still be two orders of magnitude higher than the red giant luminosity, but it started out more like five orders of magnitude higher on the main sequence, and gram for gram of gas that is actually in the fusion zone, the fusion rate in the red giant is much much faster than in the red supergiant.
 
  • #50
davenn said:
Hi @Ken G
Just got to comment ...
thanks for your excellent posts in this thread. You have been filling in a number of holes in my knowledge of stellar physics :smile:

Dave

Wanted to say the same thing. @Ken G really enlightened me on how stars really behave physics-wise. Thanks!

I wanted to make sure I understand the mathematical relationships between the radius ##r## and some other quantities in a star.

Are these correct to say?

- Temperature in a star is inversely proportional to ##r##

- Light (photon) loss per unit time has several relationships with ##r##:
1. It's proportional to the area, thus proportional to ##r^2##
2. It's also inversely proportional to the time it takes a photon to travel from within the star to the surface, thus it's inversely proportional to ##r##
3. It's proportional to the temperature (energy per photon-wise) and since temperature is inversely proportional to ##r##, that means it's again inversely proportional to ##r##
4. This all means that the net change in light loss per unit time when you expand a main-sequence star is 0, before a star contracts again after the expansion "kick". This conclusion is without taking the fusion rate into account that is also affected during expansion.

- Fusion rate is proportional to the temperature ##T## but to different extents depending on what is being fused. Fusion rate of hydrogen is proportional to ##T^4##, fusion rate of helium is proportional to ##T^{40}##. Thus it's proportional to ##r## and influences the light loss per unit time in different amounts depending on what is being fused.

One other question @Ken G ; you said fusion rate is proportional to mass to the 3rd or 4th power. Is this apart from the temperature being higher or lower with mass? So if I add more mass to a star while keeping the temperature constant per unit mass, fusion rate would still go up?
 
  • #51
JohnnyGui said:
- Temperature in a star is inversely proportional to ##r##
Yes, this is the virial theorem, but there are a few important caveats. First of all, the virial theorem is a kind of average statement, so is really only useful when the whole star can be treated as "all one thing," where the temperature is characterized by the temperature over most of the interior mass, and the radius characterizes that mass. So it's best for pre-main-sequence and main-sequence stars, failing badly for giants and supergiants which have decoupled outer radii. Secondly, we must be very clear that the temperature we mean is the interior temperature, not the surface temperature you find in an H-R diagram. You may well know this, but this confusion comes up in a lot of places where people try to marry the Stefan-Boltzmann law, applying only to surface temperature, to the interior temperature. The surface is like the clothes worn by the star, much more than it is like the star itself, but since we only see the surface this can cause confusion.
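As a rough illustration of that virial scaling, ##T \sim GMm_p/(k_B R)## with all order-unity coefficients dropped already lands near the right interior temperature for the Sun (standard constants; this numerical check is my own, not from the thread):

```python
# Virial-theorem estimate of the Sun's interior temperature:
#   T ~ G * M * m_p / (k_B * R)
# Order-of-magnitude only; the coefficient of order unity is dropped.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
k_B = 1.381e-23    # Boltzmann constant, J/K
m_p = 1.673e-27    # proton mass, kg
M_sun = 1.989e30   # kg
R_sun = 6.96e8     # m

T_virial = G * M_sun * m_p / (k_B * R_sun)
print(f"T ~ {T_virial:.1e} K")   # a few 1e7 K; the actual core is ~1.5e7 K
```

Note this is the interior temperature, some four orders of magnitude above the ~5800 K surface temperature that an H-R diagram reports.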
- Light (photon) loss per unit time has several relationships with ##r##:
1. It's proportional to the area, thus proportional to ##r^2##
This is the Stefan-Boltzmann law, but one must be wary of the cause and effect. In protostars that are fully convective and have surface temperatures controlled to be about 3000-4000 K or so, this law is quite useful for understanding the rate that energy is transported through the star. In effect, the luminosity is controlled outside-in, because the convective interior will pony up whatever heat flux the surface says it needs to (via the relation you mention). However, when stars are not fully convective, or when they have fully convective envelopes controlled by tiny interior fusion engines (like red giants), the cause and effect reverses, and the luminosity is handed to the surface. In that case, it is not that the luminosity is proportional to ##r^2##, it is that the radius is proportional to the square root of the luminosity.
2. It's also inversely proportional to the time it takes a photon to travel from within the star to the surface, thus it's inversely proportional to ##r##
When radiative diffusion controls the luminosity (pre-main-sequence and main-sequence, and also giants and supergiants to some degree), what you mention is one of the factors. But not the only one-- diffusion is a random walk, so optical depth enters as well, not just distance to cross.
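The random-walk point can be made concrete: the diffusive escape time scales as ##R^2/(\ell c)##, where ##\ell## is the photon mean free path, versus ##R/c## for a straight shot. Assuming the commonly quoted rough value ##\ell \sim 1## cm for the solar interior (an assumed figure, not one from the thread):

```python
# Photon escape from the Sun as a random walk:
#   straight-line crossing: t = R / c
#   diffusive escape:       t ~ R**2 / (mfp * c)
c = 3.0e8       # speed of light, m/s
R_sun = 6.96e8  # solar radius, m
mfp = 0.01      # photon mean free path, m (assumed rough value, ~1 cm)

t_cross = R_sun / c                  # seconds
t_diffuse = R_sun**2 / (mfp * c)     # seconds
t_diffuse_yr = t_diffuse / 3.156e7   # convert to years

print(f"crossing ~{t_cross:.1f} s, diffusion ~{t_diffuse_yr:.0f} yr")
```

The ratio of the two times is just the optical depth ##R/\ell##, which is the "not just distance to cross" factor mentioned above.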
3. It's proportional to the temperature (energy per photon-wise) and since temperature is inversely proportional to ##r##, that means it's again inversely proportional to ##r##
Best not to think in terms of energy per photon, but rather energy per unit volume. That scales like ##T^4## (that's the other half of the Stefan-Boltzmann law), not T.
4. This all means that the net change in light loss per unit time when you expand a main-sequence star is 0, before the star contracts again after the expansion "kick". This conclusion does not take into account the fusion rate, which is also affected during expansion.
If you are testing dynamical stability (the usual meaning of a "kick"), you would kick it on adiabatic timescales, i.e., timescales very short compared to the energy transport processes that set the luminosity. So for dynamical timescales, use adiabatic expansion, and ignore all energy release and transport. If you want to know how the luminosity evolves as the stellar radius (gradually) changes, that's when the above considerations about the leaky bucket of light come into play.
- Fusion rate is proportional to the temperature ##T## but to different extents depending on what is being fused. The fusion rate of hydrogen is proportional to ##T^4##, the fusion rate of helium to ##T^{40}##. Since temperature is inversely proportional to ##r##, the fusion rate depends on ##r## and influences the light loss per unit time by different amounts depending on what is being fused.
The simplest way to treat fusion is to pretend the exponent of T is very high, and just say T makes minor insignificant adjustments until the fusion rate matches the pre-determined luminosity. For p-p fusion, the exponent is a little low (about 4, as you say), so that's not a terrific approximation, but it's something. For all other fusion (including CNO cycle hydrogen fusion), it's a darn good approximation. So if you are making this approximation, you don't care about the value of the exponent, the fusion just turns on at some T and self-regulates. However, in red giants, where the fusion T cannot self-regulate, there you do need the full exponent, you need to explicitly model the T dependence of the fusion because T is preset to be quite high.
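The steepness argument is easy to quantify: if the rate scales as ##T^n##, then doubling the fusion rate requires only a fractional temperature change of ##2^{1/n} - 1##. A sketch using the rough exponents quoted above (the CNO value of ~18 is my own rough figure, not from the thread):

```python
# If fusion rate ~ T^n, the fractional T change needed to double the rate
# is 2**(1/n) - 1. The steeper the exponent, the tighter the "thermostat".
for n, label in [(4, "p-p"), (18, "CNO (rough)"), (40, "triple-alpha (rough)")]:
    dT = 2 ** (1 / n) - 1
    print(f"{label:>20}: rate x2 needs T up {dT * 100:.1f}%")
```

For the ##T^{40}## case, doubling the rate needs under a 2% temperature rise, which is why "minor insignificant adjustments" of T suffice to match the pre-determined luminosity.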
One other question @Ken G ; you said fusion rate is proportional to mass to the 3rd or 4th power. Is this apart from the temperature being higher or lower with mass? So if I add more mass to a star while keeping the temperature constant per unit mass, fusion rate would still go up?
Yes, for p-p hydrogen fusion, say like in the Sun. In fact, this is not a bad approximation for what would actually happen if you added mass to the Sun-- you wouldn't need to keep the interior temperature the same, the thermostatic effects of fusion would do that for you. The Sun would expand a little, and its luminosity would go up a little because it is now a bigger leakier bucket of light. Fusion would simply increase its own rate to match the light leaking out, and it would do that with very little change in temperature, expressly because it is so steeply dependent on T. But this story would work even better if the fusion rate was even more sensitive to T, say for the CNO cycle fusion in somewhat more massive stars than the Sun. (Ironically, many seemingly authoritative sources get this reasoning backward, and claim that the temperature sensitivity of fusion is why the luminosity is higher for higher mass, on grounds that adding mass will increase the temperature which will increase the fusion rate which will increase the luminosity. They are saying that the sensitivity of the fusion rate to T is why it rules the star's luminosity, when the opposite is true-- it is why the fusion rate is the slave of the luminosity. The situation is similar to having a thermostat in your house, and throwing open the windows in winter-- opening the windows is what causes the heat to escape, not the presence of a furnace, but the extreme sensitivity of a thermostat is what causes the furnace burn rate to be enslaved to how wide you open the windows.)
 
Last edited:
  • #52
So:
You could have a main-sequence star that has a small convective core of, say, 0.4 solar masses, which briefly goes inert when protium is exhausted - then accumulates mass to 0.5 solar masses, undergoes a helium flash, and resumes fusion.
Or you could have a slightly more massive main-sequence star with a convective core of, say, 0.6 solar masses, which promptly begins helium fusion when protium is exhausted.
In both cases, the result is a core of 0.6 solar masses undergoing helium fusion, surrounded by a protium-fusing shell.

Are, therefore, red supergiants and stars that have undergone helium flash homologous to each other?
 
  • #53
snorkack said:
So:
You could have a main-sequence star that has a small convective core of, say, 0.4 solar masses, which briefly goes inert when protium is exhausted - then accumulates mass to 0.5 solar masses, undergoes a helium flash, and resumes fusion.
A main-sequence star with a convective core that massive is a fairly high-mass star, so its core will remain an ideal gas, so it will never undergo a "helium flash". But it will start to fuse helium at some point, so let's continue from there:
Or you could have a slightly more massive main-sequence star with a convective core of, say, 0.6 solar masses, which promptly begins helium fusion when protium is exhausted.
Neither necessarily begins helium fusion promptly, their ideal-gas cores simply accumulate mass gradually as ash is added to them, and are maintained at the temperature of the shell around them. The number 0.5 solar masses only matters if the core goes degenerate before it reaches 0.5 solar masses, as that will produce a red giant, but if the core is still ideal when it reaches 0.5 solar masses, it will never go degenerate, never make a red giant, and never have its luminosity shoot up, it will instead make a red supergiant and keep its luminosity almost the same. If the star is in the range 2-8 solar masses, the transition will happen rather abruptly as the core collapses in a gravitational instability, and at the lower-mass end of that range it will still have less than 0.5 solar masses so will indeed make a red giant and will later have a helium flash. At the higher mass end of that range, the core will already exceed 0.5 solar masses before it goes degenerate, so it will never go degenerate, even after the core collapses and jumps the star across the Hertzsprung gap. That makes a red supergiant, because the core is still ideal.
In both cases, the result is a core of 0.6 solar masses undergoing helium fusion, surrounded by a protium-fusing shell.
Yes, any time the core gets above 0.5 solar masses before going degenerate, it will start helium fusion without ever going degenerate, and will therefore not create a red giant-- we will call it a red supergiant prior to helium fusion. The name "supergiant" is a bit misleading, because although the star will be larger than a giant, it will not be as puffed out relative to its own core, and will actually behave more like a main-sequence star, merely cloaked in a surprisingly large cool envelope due to the effects of having a central hole that is not participating in the fusion and has a tendency to either collapse (below 8 solar masses), or at least contract as ash is added to it.
Are, therefore, red supergiants and stars that have undergone helium flash homologous to each other?
Stars that have undergone a helium flash are fusing helium in their cores, whereas red supergiants (and red giants) have inert cores. So no, they have very different structures. However, stars undergoing core fusion tend to be more homologous, though what they are fusing makes a big difference in the composition of the star, and the presence or absence of additional fusing shells is a complicated break in the homology.
 
  • #54
The assumptions of thermostat and of leaky bucket of light are flagrantly contradictory to each other.
If fusion is a thermostat that turns on at a specified temperature, then fusion only happens in the centre of star.
Near the centre, heat flux and therefore temperature gradient diverges to infinity.
Therefore, the heat flux cannot be carried by radiation.
 
  • #55
snorkack said:
The assumptions of thermostat and of leaky bucket of light are flagrantly contradictory to each other.
That's incorrect. As I explained above, the leaky bucket picture gives you the luminosity of a radiatively diffusive star (nearly) independently of its radius and temperature. This is called the "Henyey track", it was known about before fusion was even discovered. The fact that the radiative diffusive luminosity is independent of radius and temperature is the reason that the thermostat has little to do with the luminosity, it only has to do with the fusion. Again, Eddington understood the luminosity of the Sun quite well before anybody knew there was a thing called fusion. This is not a contradiction; it is historical fact that the leaky bucket picture was understood before fusion was known about, and the discovery of fusion allowed the thermostatic piece to be added, helping us understand how the Sun could be so static for so long.
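The leaky-bucket picture can be sanity-checked with an order-of-magnitude estimate: luminosity ~ (radiation energy stored in the star) / (photon diffusion time). All numbers below are crude illustrative averages (the "typical" interior temperature and mean free path in particular are my own rough assumptions), so agreement to within a factor of a few is all one should expect:

```python
import math

a = 7.566e-16    # radiation constant, J m^-3 K^-4
c = 3.0e8        # speed of light, m/s
R = 6.957e8      # solar radius, m
T_avg = 4.0e6    # assumed "typical" interior temperature, K (illustrative)
l = 0.01         # assumed average photon mean free path, m (illustrative)

E_rad = a * T_avg**4 * (4 / 3) * math.pi * R**3   # stored radiation energy in the "bucket"
t_diff = R**2 / (l * c)                           # diffusion (leak-out) time
L = E_rad / t_diff
print(f"L ~ {L:.1e} W (actual L_sun = 3.8e26 W)")
```

Nothing about fusion appears anywhere in this estimate, which is the point: the leak rate is set by the bucket, and the fusion merely refills it.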
If fusion is a thermostat that turns on at a specified temperature, then fusion only happens in the centre of star.
Of course fusion serves as a thermostat in a main-sequence star, that's astronomy 101. And of course it mostly happens at the center, though of course not only precisely at the center.
Near the centre, heat flux and therefore temperature gradient diverges to infinity.
Therefore, the heat flux cannot be carried by radiation.
I have no idea what this is intended to mean.
 
Last edited:
  • #56
Ken G said:
Of course fusion serves as a thermostat in a main-sequence star, that's astronomy 101. And of course it mostly happens at the center, though of course not only precisely at the center. I have no idea what this is intended to mean.
If fusion is a thermostat and the star has a constant (thermostat-defined fusion) temperature over a core of nonzero size then within that core, temperature gradient is zero and no heat will be radiated away.
If fusion takes place at the centre of zero size (where alone temperature reaches the fusion thermostat temperature) then the radiative flux density diverges to infinity at centre. In which case so does the temperature gradient.
 
  • #57
snorkack said:
If fusion is a thermostat and the star has a constant (thermostat-defined fusion) temperature over a core of nonzero size then within that core, temperature gradient is zero and no heat will be radiated away.
Apparently you are not understanding the concept of a "thermostat," or are interpreting it too narrowly to be of much use to you. All it means is that the fusion self-regulates the temperature such that the fusion rate replaces the heat lost, so if the core temperature found itself for whatever reason being too high or too low compared to the "thermostat setting," the action of fusion would quickly return the core to the necessary temperature. This certainly does not imply that the entire star is at the same temperature. Nor does it imply that fusion only occurs at precisely one temperature. Nevertheless, for the purposes of understanding the extreme sensitivity of fusion to temperature, it is informative to recognize that the fusion domain will lie within a fairly narrow temperature regime. Indeed, even over the entire main sequence, the central temperature remains within about a factor of 2. So we have some 8 orders of magnitude in luminosity, and only about a factor of 2 in central temperature. That's what I call a remarkable thermostat, though it appears you mean something more restrictive by the term than how it is used in most sources.
 
  • #58
Ken G said:
This certainly does not imply that the entire star is at the same temperature. Nor does it imply that fusion only occurs at precisely one temperature. Nevertheless, for the purposes of understanding the extreme sensitivity of fusion to temperature,
And my point is that fusion cannot be "extremely" sensitive to temperature, in order for fusion to take place over an extended volume of space AND support a temperature gradient allowing radiative conduction across that volume.
Indeed, if the sensitivity of fusion to temperature were too strong, fusion would get concentrated into too small volume, leading to too high heat fluxes and temperature gradients and violating the assumption of radiative conduction.
 
  • #59
It is certainly true that the more temperature sensitive is the fusion, the more centrally concentrated is the fusion zone. It is not true that there is some limit to how T sensitive the fusion can be, at least in the sense of solutions to the basic equations (if and when those equations actually apply is another matter, we are working within a given mathematical model). You simply equate the local fusion rate to the divergence of the heat flux; the former is a function of T and the latter a function of derivatives of T, so you simply find the T structure that solves it. You could do it easily, it's generally just solving a second-order differential equation for T. There's always a solution, for any fusion rate that is a continuous function of T no matter how steep. But of course, that is within a given mathematical framework, other issues might appear like convection and so on. They don't change the basic picture, which is why Eddington met with so much success using only simple models (and indeed, even the inclusion of fusion does not change the situation drastically, it only changes the evolutionary timescales drastically, much to Eddington's chagrin).

There are two separate meanings of "thermostat" that I think you are confusing-- one is a tendency to keep the entire star at the same T (which is not what we are talking about), and the other is a tendency to keep the central T at the same value, but there is still a T structure. It is the latter type of "thermostat" that applies for stars on the main sequence, though of course it is only an insightful approximate picture. In actuality, the central T does vary across the main sequence, but surprisingly little-- as the stellar luminosity increases by some 6 orders of magnitude over the bulk of the main sequence, the central T increases by only some factor of 2. The thermostat in my house isn't much more effective than that against things like throwing open all the windows.
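One can invert this observation: if a factor-of-2 range in central T accompanies some 6 to 8 orders of magnitude in luminosity, the effective exponent needed for a fusion rate ##\propto T^n## to track the luminosity is ##n = \log(L_2/L_1)/\log(T_2/T_1)##. A sketch:

```python
import math

# If fusion rate ~ T^n must track the luminosity, the effective exponent
# implied by D decades of L over a factor-of-2 range in central T is:
for decades in (6, 8):
    n = decades * math.log(10) / math.log(2)
    print(f"{decades} decades of L over 2x in T -> effective n ~ {n:.0f}")
```

That the implied exponent lands in the 20s, comfortably in the range of known fusion temperature sensitivities, is consistent with the thermostat picture rather than in tension with it.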
 
  • #60
Furthermore, the basic assumption that stars are leaky buckets of light holds only for a narrow mass range, or not at all.
A third of the Sun's radius is convecting, not radiating. For stars less massive than the Sun, that fraction is bigger. For stars less than about 0.25 solar masses, the whole star is convective - yet fusion does happen.

What should happen to the size of a star when fusion happens?
4 atoms of protium, once ionized, are 8 particles (4 protons, 4 electrons).
1 atom of helium 4, once ionized, is 3 particles (1 alpha, 2 electrons).
##pV = nRT##.
If ##pV## were constant, ##nT## would have to be constant. Then ##T## would have to increase by a factor of 8/3. But that's forbidden by the assumption of a thermostat.
What then? Does the radius of the star have to increase as the number of particles decreases?
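The particle count above can be phrased via the mean molecular weight ##\mu## (mass per particle in units of ##m_H##): ionized hydrogen gives ##\mu = 1/2##, fully ionized helium-4 gives ##\mu = 4/3##, and for an ideal gas at fixed ##p##, ##V##, and total mass, ##T## must scale in proportion to ##\mu##. A sketch of the arithmetic:

```python
from fractions import Fraction

# Mean molecular weight mu = (mass in units of m_H) / (number of particles).
# 4 H atoms, ionized: mass 4, particles 8 (4 p + 4 e)      -> mu = 1/2
# 1 He-4 atom, ionized: mass 4, particles 3 (1 alpha + 2 e) -> mu = 4/3
mu_H = Fraction(4, 8)
mu_He = Fraction(4, 3)

# Ideal gas: p V = N k T with N = M / (mu * m_H). At fixed p, V, and M,
# T scales in proportion to mu.
ratio = mu_He / mu_H
print(f"T_He / T_H = {ratio}")  # 8/3
```

This reproduces the 8/3 factor in the question; whether the star responds with a hotter core, a changed radius, or both is exactly what the structure equations have to sort out.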
 
