Questions about the lifecycle of stars

  • Thread starter: JohnnyGui
  • Tags: Stars
AI Thread Summary
The discussion centers on the lifecycle of stars, addressing various aspects of stellar evolution and fusion processes. It explains that adding mass to a small planet eventually leads to volume shrinkage due to overwhelming gravitational forces. Larger stars burn fuel faster because their greater gravity compresses the core, increasing the fusion rate despite having more fuel. The transition from hydrogen to helium fusion involves complex dynamics where the star shrinks initially, creating conditions for helium fusion to ignite, which then causes expansion. Additionally, the role of electron degeneracy pressure is clarified, emphasizing that it only becomes significant after fusion ceases, allowing for the potential formation of white dwarfs or neutron stars based on mass.
  • #51
JohnnyGui said:
- Temperature in a star is inversely proportional to ##r##
Yes, this is the virial theorem, but there are a few important caveats. First of all, the virial theorem is a kind of average statement, so is really only useful when the whole star can be treated as "all one thing," where the temperature is characterized by the temperature over most of the interior mass, and the radius characterizes that mass. So it's best for pre-main-sequence and main-sequence stars, failing badly for giants and supergiants which have decoupled outer radii. Secondly, we must be very clear that the temperature we mean is the interior temperature, not the surface temperature you find in an H-R diagram. You may well know this, but this confusion comes up in a lot of places where people try to marry the Stefan-Boltzmann law, applying only to surface temperature, to the interior temperature. The surface is like the clothes worn by the star, much more than it is like the star itself, but since we only see the surface this can cause confusion.
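To put a rough number on that interior temperature, here is a minimal sketch of the virial estimate ##T \sim GM\bar{m}/(kR)##, assuming a fully ionized solar-composition gas with mean particle mass ##\bar{m} \approx 0.6\,m_p## (an assumed round value):

```python
# Rough virial-theorem estimate of the Sun's characteristic interior
# temperature, T ~ G*M*m_bar / (k*R).  The mean particle mass m_bar ~ 0.6*m_p
# (fully ionized, solar composition) is an assumed value; this is an
# order-of-magnitude illustration only.
G     = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
k_B   = 1.381e-23   # Boltzmann constant, J/K
m_p   = 1.673e-27   # proton mass, kg
M_sun = 1.989e30    # kg
R_sun = 6.957e8     # m

m_bar = 0.6 * m_p
T_interior = G * M_sun * m_bar / (k_B * R_sun)
print(f"virial interior T ~ {T_interior:.1e} K")   # ~1e7 K, versus ~5800 K at the surface
```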
- Light (photon) loss per unit time has several relationships with ##r##:
1. It's proportional to the area, thus proportional to ##r^2##
This is the Stefan-Boltzmann law, but one must be wary of the cause and effect. In protostars that are fully convective and have surface temperatures controlled to be about 3000-4000 K or so, this law is quite useful for understanding the rate that energy is transported through the star. In effect, the luminosity is controlled outside-in, because the convective interior will pony up whatever heat flux the surface says it needs to (via the relation you mention). However, when stars are not fully convective, or when they have fully convective envelopes controlled by tiny interior fusion engines (like red giants), the cause and effect reverses, and the luminosity is handed to the surface. In that case, it is not that the luminosity is proportional to ##r^2##, it is that the radius is proportional to the square root of the luminosity.
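A small sketch of both directions of that cause and effect, assuming a Hayashi-like surface temperature of 3500 K and a radius of ##2\,R_\odot## as round illustrative numbers:

```python
import math

# Stefan-Boltzmann surface relation L = 4*pi*R^2*sigma*T_surf^4, run both ways.
# The 3500 K surface temperature and 2 R_sun radius are assumed round numbers
# for a Hayashi-like convective star, not values from the thread.
sigma = 5.670e-8    # W m^-2 K^-4
R_sun = 6.957e8     # m
L_sun = 3.828e26    # W

T_surf = 3500.0                                   # K (assumed)
R = 2.0 * R_sun                                   # assumed radius

# outside-in: the surface relation sets the luminosity the interior must supply
L = 4 * math.pi * R**2 * sigma * T_surf**4
print(f"L = {L / L_sun:.2f} L_sun")

# inside-out: when L is handed to the surface, the radius adjusts as R ~ sqrt(L)
R_back = math.sqrt(L / (4 * math.pi * sigma * T_surf**4))
print(f"R recovered from L = {R_back / R_sun:.2f} R_sun")
```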
2. It's also inversely proportional to the time it takes a photon to travel from within the star to the surface, thus it's inversely proportional to ##r##
When radiative diffusion controls the luminosity (pre-main-sequence and main-sequence, and also giants and supergiants to some degree), what you mention is one of the factors. But not the only one-- diffusion is a random walk, so optical depth enters as well, not just distance to cross.
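A back-of-the-envelope version of that random walk, assuming a photon mean free path of roughly 1 cm as an order-of-magnitude stand-in for the solar interior:

```python
# Random-walk estimate of the photon escape time: t ~ R^2/(l*c) = tau*R/c,
# where l is the photon mean free path and tau = R/l is the optical depth.
# The ~1 cm mean free path is an assumed order-of-magnitude value; detailed
# models give longer times because the core is much denser than the average.
R_sun = 6.957e8     # m
c     = 2.998e8     # m/s
l_mfp = 1e-2        # m (assumed)

tau    = R_sun / l_mfp
t_diff = tau * R_sun / c          # seconds
year   = 3.156e7                  # seconds per year
print(f"optical depth tau ~ {tau:.1e}")
print(f"diffusion time   ~ {t_diff / year:.0f} yr")   # thousands of years with these numbers
```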
3. It's proportional to the temperature (energy per photon-wise) and since temperature is inversely proportional to ##r##, that means it's again inversely proportional to ##r##
Best not to think in terms of energy per photon, but rather energy per unit volume. That scales like ##T^4## (that's the other half of the Stefan-Boltzmann law), not T.
4. This all means that the net change in light loss per unit time when you expand a main-sequence star is 0, before a star contracts again after the expansion "kick". This conclusion is without taking the fusion rate into account that is also affected during expansion.
If you are testing dynamical stability (the usual meaning of a "kick"), you would kick it on adiabatic timescales, i.e., timescales very short compared to the energy transport processes that set the luminosity. So for dynamical timescales, use adiabatic expansion, and ignore all energy release and transport. If you want to know how the luminosity evolves as the stellar radius (gradually) changes, that's when the above considerations about the leaky bucket of light come into play.
- Fusion rate is proportional to the temperature ##T## but to different extents depending on what is being fused. Fusion rate of hydrogen is proportional to ##T^4##, fusion rate of helium is proportional to ##T^{40}##. Thus it's proportional to ##r## and influences the light loss per unit time in different amounts depending on what is being fused.
The simplest way to treat fusion is to pretend the exponent of T is very high, and just say T makes minor insignificant adjustments until the fusion rate matches the pre-determined luminosity. For p-p fusion, the exponent is a little low (about 4, as you say), so that's not a terrific approximation, but it's something. For all other fusion (including CNO cycle hydrogen fusion), it's a darn good approximation. So if you are making this approximation, you don't care about the value of the exponent, the fusion just turns on at some T and self-regulates. However, in red giants, where the fusion T cannot self-regulate, there you do need the full exponent, you need to explicitly model the T dependence of the fusion because T is preset to be quite high.
One other question @Ken G ; you said fusion rate is proportional to mass to the 3rd or 4th power. Is this apart from the temperature being higher or lower with mass? So if I add more mass to a star while keeping the temperature constant per unit mass, fusion rate would still go up?
Yes, for p-p hydrogen fusion, say like in the Sun. In fact, this is not a bad approximation for what would actually happen if you added mass to the Sun-- you wouldn't need to keep the interior temperature the same, the thermostatic effects of fusion would do that for you. The Sun would expand a little, and its luminosity would go up a little because it is now a bigger leakier bucket of light. Fusion would simply increase its own rate to match the light leaking out, and it would do that with very little change in temperature, expressly because it is so steeply dependent on T. But this story would work even better if the fusion rate was even more sensitive to T, say for the CNO cycle fusion in somewhat more massive stars than the Sun. (Ironically, many seemingly authoritative sources get this reasoning backward, and claim that the temperature sensitivity of fusion is why the luminosity is higher for higher mass, on grounds that adding mass will increase the temperature which will increase the fusion rate which will increase the luminosity. They are saying that the sensitivity of the fusion rate to T is why it rules the star's luminosity, when the opposite is true-- it is why the fusion rate is the slave of the luminosity. The situation is similar to having a thermostat in your house, and throwing open the windows in winter-- opening the windows is what causes the heat to escape, not the presence of a furnace, but the extreme sensitivity of a thermostat is what causes the furnace burn rate to be enslaved to how wide you open the windows.)
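A minimal sketch of that thermostat arithmetic: if the fusion rate goes as ##T^n##, matching a luminosity larger by a factor ##f## requires the core temperature to rise only by ##f^{1/n}##. The 40% luminosity jump is an assumed example, ##n = 4## is the p-p value quoted above, ##n = 40## is the helium figure quoted earlier, and ##n = 17## is a commonly quoted CNO value assumed here:

```python
# Thermostat in numbers: if the fusion rate scales as T^n, then matching a
# luminosity that is larger by a factor f requires T to rise only by f**(1/n).
# f = 1.4 is an assumed 40% luminosity increase from "adding a little mass".
f = 1.4
for n in (4, 17, 40):
    print(f"n = {n:2d}: core T must rise by a factor of {f**(1.0/n):.3f}")
```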
 
Last edited:
  • #52
So:
You could have a main sequence star that has a small convective core of, say, 0.4 solar masses, which briefly goes inert when protium is exhausted - then accumulates mass to 0.5 solar, undergoes helium flash and resumes fusion.
Or you could have a slightly more massive main sequence star with a convective core of, say, 0.6 solar masses, which promptly begins helium fusion when protium is exhausted.
In both cases, the result is a core of 0.6 solar masses undergoing helium fusion, surrounded by a protium-fusing shell.

Are, therefore, red supergiants and stars that have undergone helium flash homologous to each other?
 
  • #53
snorkack said:
So:
You could have a main sequence star that has a small convective core of, say, 0.4 solar masses, which briefly goes inert when protium is exhausted - then accumulates mass to 0.5 solar, undergoes helium flash and resumes fusion.
A main-sequence star with a convective core that massive is a fairly high-mass star, so its core will remain an ideal gas and it will never undergo a "helium flash". But it will start to fuse helium at some point, so let's continue from there:
Or you could have a slightly more massive main sequence star with a convective core of, say, 0.6 solar masses, which promptly begins helium fusion when protium is exhausted.
Neither necessarily begins helium fusion promptly, their ideal-gas cores simply accumulate mass gradually as ash is added to them, and are maintained at the temperature of the shell around them. The number 0.5 solar masses only matters if the core goes degenerate before it reaches 0.5 solar masses, as that will produce a red giant, but if the core is still ideal when it reaches 0.5 solar masses, it will never go degenerate, never make a red giant, and never have its luminosity shoot up, it will instead make a red supergiant and keep its luminosity almost the same. If the star is in the range 2-8 solar masses, the transition will happen rather abruptly as the core collapses in a gravitational instability, and at the lower-mass end of that range it will still have less than 0.5 solar masses so will indeed make a red giant and will later have a helium flash. At the higher mass end of that range, the core will already exceed 0.5 solar masses before it goes degenerate, so it will never go degenerate, even after the core collapses and jumps the star across the Hertzsprung gap. That makes a red supergiant, because the core is still ideal.
In both cases, the result is a core of 0.6 solar masses undergoing helium fusion, surrounded by a protium-fusing shell.
Yes, any time the core gets above 0.5 solar masses before going degenerate, it will start helium fusion without ever going degenerate, and will therefore not create a red giant-- we will call it a red supergiant prior to helium fusion. The name "supergiant" is a bit misleading, because although the star will be larger than a giant, it will not be as puffed out relative to its own core, and will actually behave more like a main-sequence star, merely cloaked in a surprisingly large cool envelope due to the effects of having a central hole that is not participating in the fusion and has a tendency to either collapse (below 8 solar masses), or at least contract as ash is added to it.
Are, therefore, red supergiants and stars that have undergone helium flash homologous to each other?
Stars that have undergone a helium flash are fusing helium in their cores, whereas red supergiants (and red giants) have inert cores. So no, they have very different structures. However, stars undergoing core fusion tend to be more homologous, though what they are fusing makes a big difference in the composition of the star, and the presence or absence of additional fusing shells is a complicated break in the homology.
 
  • #54
The assumptions of thermostat and of leaky bucket of light are flagrantly contradictory to each other.
If fusion is a thermostat that turns on at a specified temperature, then fusion only happens in the centre of star.
Near the centre, heat flux and therefore temperature gradient diverges to infinity.
Therefore, the heat flux cannot be carried by radiation.
 
  • #55
snorkack said:
The assumptions of thermostat and of leaky bucket of light are flagrantly contradictory to each other.
That's incorrect. As I explained above, the leaky bucket picture gives you the luminosity of a radiatively diffusive star (nearly) independently of its radius and temperature. This is called the "Henyey track", it was known about before fusion was even discovered. The fact that the radiative diffusive luminosity is independent of radius and temperature is the reason that the thermostat has little to do with the luminosity, it only has to do with the fusion. Again, Eddington understood the luminosity of the Sun quite well before anybody knew there was a thing called fusion. This is not a contradiction, it is a historical fact that the leaky bucket picture was understood before fusion was known about, and the discovery of fusion allowed the thermostatic piece to be added, helping us understand how the Sun could be so static for so long.
If fusion is a thermostat that turns on at a specified temperature, then fusion only happens in the centre of star.
Of course fusion serves as a thermostat in a main-sequence star, that's astronomy 101. And of course it mostly happens at the center, though of course not only precisely at the center.
Near the centre, heat flux and therefore temperature gradient diverges to infinity.
Therefore, the heat flux cannot be carried by radiation.
I have no idea what this is intended to mean.
 
Last edited:
  • #56
Ken G said:
Of course fusion serves as a thermostat in a main-sequence star, that's astronomy 101. And of course it mostly happens at the center, though of course not only precisely at the center. I have no idea what this is intended to mean.
If fusion is a thermostat and the star has a constant (thermostat-defined fusion) temperature over a core of nonzero size then within that core, temperature gradient is zero and no heat will be radiated away.
If fusion takes place at the centre of zero size (where alone temperature reaches the fusion thermostat temperature) then the radiative flux density diverges to infinity at centre. In which case so does the temperature gradient.
 
  • #57
snorkack said:
If fusion is a thermostat and the star has a constant (thermostat-defined fusion) temperature over a core of nonzero size then within that core, temperature gradient is zero and no heat will be radiated away.
Apparently you are not understanding the concept of a "thermostat," or are interpreting it too narrowly to be of much use to you. All it means is that the fusion self-regulates the temperature such that the fusion rate replaces the heat lost, so if the core temperature found itself for whatever reason being too high or too low compared to the "thermostat setting," the action of fusion would quickly return the core to the necessary temperature. This certainly does not imply that the entire star is at the same temperature. Nor does it imply that fusion only occurs at precisely one temperature. Nevertheless, for the purposes of understanding the extreme sensitivity of fusion to temperature, it is informative to recognize that the fusion domain will lie within a fairly narrow temperature regime. Indeed, even over the entire main sequence, the central temperature remains within about a factor of 2. So we have some 8 orders of magnitude in luminosity, and only about a factor of 2 in central temperature. That's what I call a remarkable thermostat, though it appears you mean something more restrictive by the term than how it is used in most sources.
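Putting those two quoted factors together gives the effective sensitivity implied by that thermostat (a trivial sketch, nothing beyond the numbers above):

```python
import math

# The "remarkable thermostat" quantified: ~8 orders of magnitude in
# main-sequence luminosity against only a factor of ~2 in central temperature
# corresponds to an effective sensitivity L ~ T_c^n with n ~ 27.
L_span = 1e8    # factor spanned in luminosity (from the post above)
T_span = 2.0    # factor spanned in central temperature (from the post above)
n_eff = math.log(L_span) / math.log(T_span)
print(f"effective exponent n ~ {n_eff:.0f}")   # ~27
```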
 
  • #58
Ken G said:
This certainly does not imply that the entire star is at the same temperature. Nor does it imply that fusion only occurs at precisely one temperature. Nevertheless, for the purposes of understanding the extreme sensitivity of fusion to temperature,
And my point is that fusion cannot be "extremely" sensitive to temperature if fusion is to take place over an extended volume of space AND support a temperature gradient allowing radiative conduction across that volume.
Indeed, if the sensitivity of fusion to temperature were too strong, fusion would get concentrated into too small volume, leading to too high heat fluxes and temperature gradients and violating the assumption of radiative conduction.
 
  • #59
It is certainly true that the more temperature sensitive is the fusion, the more centrally concentrated is the fusion zone. It is not true that there is some limit to how T sensitive the fusion can be, at least in the sense of solutions to the basic equations (if and when those equations actually apply is another matter; we are working within a given mathematical model). You simply equate the local fusion rate to the divergence of the heat flux, the former is a function of T and the latter a function of derivatives of T, so you simply find the T structure that solves it; you could do it easily, it's generally just solving a second-order differential equation for the T. There's always a solution, for any fusion rate that is a continuous function of T no matter how steep. But of course, that is within a given mathematical framework, other issues might appear like convection and so on. They don't change the basic picture, which is why Eddington met with so much success using only simple models (and indeed, even the inclusion of fusion does not change the situation drastically, it only changes the evolutionary timescales drastically, much to Eddington's chagrin).
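A toy illustration of that central concentration, using assumed temperature and density profiles rather than a solved model, just to show how the radius enclosing half the energy generation shrinks as the exponent grows:

```python
import numpy as np

# Illustration of "the more T-sensitive the fusion, the more centrally
# concentrated the fusion zone".  The T(r) and rho(r) profiles below are
# assumed toy shapes, not a solved stellar model; the point is only that the
# half-luminosity radius shrinks as the exponent n grows.
r   = np.linspace(1e-3, 1.0, 2000)     # radius in units of the stellar radius
T   = 1.0 - 0.8 * r                    # toy temperature profile (assumed)
rho = (1.0 - 0.9 * r)**3               # toy density profile (assumed)

for n in (4, 17, 40):
    dL = rho**2 * T**n * r**2          # emissivity ~ rho^2 T^n, times the r^2 shell weight
    cum = np.cumsum(dL)
    cum /= cum[-1]
    r_half = r[np.searchsorted(cum, 0.5)]
    print(f"n = {n:2d}: half the energy is generated inside r = {r_half:.2f}")
```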

There are two separate meanings of "thermostat" that I think you are confusing-- one is a tendency to keep the entire star at the same T (which is not what we are talking about), and the other is a tendency to keep the central T at the same value, but there is still a T structure. It is the latter type of "thermostat" that applies for stars on the main sequence, though of course it is only an insightful approximate picture. In actuality, the central T does vary across the main sequence, but surprisingly little-- as the stellar luminosity increases by some 6 orders of magnitude over the bulk of the main sequence, the central T increases by only some factor of 2. The thermostat in my house isn't much more effective than that against things like throwing open all the windows.
 
  • #60
Furthermore, the basic assumption that stars are leaky buckets of light holds only for a narrow mass range, or not at all.
A third of Sun's radius is convecting, not radiating. For stars less massive than Sun, that fraction is bigger. For stars less than about 0.25 solar masses, the whole star is convective - yet fusion does happen.

What should happen to the size of a star when fusion happens?
4 atoms of protium, once ionized, are 8 particles (4 protons, 4 electrons).
1 atom of helium 4, once ionized, is 3 particles (1 alpha, 2 electrons).
##pV = nRT##.
If pV were constant, nT would have to be constant. Then T would have to increase 8/3 times. But that's forbidden by the assumption of thermostat.
What then? Does the radius of the star have to increase as the number of particles decreases?
 
  • #61
snorkack said:
Furthermore, the basic assumption that stars are leaky buckets of light holds only for a narrow mass range, or not at all.
That's also incorrect, in fact it works over most of the main sequence. It only fails at the lowest masses where the main sequence approaches the Hayashi track and there is no Henyey track leading to the main sequence. However, red dwarfs down there are not only highly convective, they are even starting to become degenerate, which is why there is a mass bottom to the main sequence in the first place.
A third of Sun's radius is convecting, not radiating.
Which is irrelevant, because that region wouldn't have much effect on the radiative diffusion time anyway, given how little mass is up there.
For stars less massive than Sun, that fraction is bigger.
Again, only relevant for the red dwarfs, which are getting close to brown dwarfs, which aren't main sequence stars at all. Should we be surprised a good approximation for understanding the main sequence starts to fail when the main sequence concept itself starts to lose relevance?
For stars less than about 0.25 solar masses, the whole star is convective - yet fusion does happen.
Yup, that's close to where all the main sequence concepts cease to work, it's close to the edge of the main sequence. Nobody should be surprised that approximations start to break down when you get near the edge of a domain.
What should happen to the size of a star when fusion happens?
Almost nothing when it initiates. As it plays out, some changes in the radius do occur, nothing terribly significant at a first level of approximation. This is easy to see from the virial theorem that sets R when T is thermostatic, we say GM^2/R ~ NkT, where N is the number of particles. If we treat T as nearly thermostatic as N is lowered by fusion, and M stays nearly fixed, we expect R to be inversely proportional to N. Of course this is highly approximate, as it treats the star as "all one thing" that is perfectly thermostatic. Actually, as light escapes more easily from the Sun (as its electrons blocking the light start getting eaten up by fusion), the luminosity rises, and so the core temperature must self-regulate its thermostat to be a little higher, which I'm neglecting to first order. Also, as the central regions get a different composition from the rest of the star, the homology starts to break down, and treating the star as "all one thing" will begin to become a worse approximation. Nevertheless, as we shall see shortly, it's still a good way to understand the evolution of the Sun while it is still on the main sequence.
4 atoms of protium, once ionized, are 8 particles (4 protons, 4 electrons).
1 atom of helium 4, once ionized, is 3 particles (1 alpha, 2 electrons).
Except that this change does not happen to the whole star, only to a fairly small fraction of it. What's more, the star already had some helium in it. So between the beginning and ending of the main sequence, the number of particles in the star goes from a situation where some 30% of the stellar mass went from 12 protons, 1 helium, and 14 electrons (that's 27 particles) to 4 helium and 8 electrons (that's 12 particles). The remaining 70% had 27 particles stay 27 particles. So that means in total, 27 particles goes to 0.7*27 + 0.3*12, or about 22.5 particles. No big whoop there, but it does lead us to expect a rise in radius of about 20%. Yup, that's what happens all right, to a reasonable approximation. So what's your issue?
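A quick check of that bookkeeping, using only the numbers above together with the ##R \propto 1/N## virial scaling:

```python
# Particle bookkeeping from the reply above, per 16 amu of gas:
#   before: 12 H nuclei + 1 He nucleus + 14 electrons = 27 particles
#   after:   4 He nuclei + 8 electrons                = 12 particles
# With ~30% of the stellar mass processed and R ~ 1/N at fixed M and
# thermostatic T (the approximation described above):
N_before = 27
N_after_processed = 12
f_processed = 0.30

N_after = (1 - f_processed) * N_before + f_processed * N_after_processed
print(f"particle count: {N_before} -> {N_after:.1f}")              # ~22.5
print(f"expected radius increase ~ {N_before / N_after - 1:.0%}")  # ~20%
```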
If pV were constant, nT would have to be constant. Then T would have to increase 8/3 times. But that's forbidden by the assumption of thermostat.
Yup, indeed it is. So what happens instead is what I just said.
What then? Does the radius of the star have to increase as the number of particles decreases?
That's called evolution on the main sequence.
 
  • Likes: davenn
  • #62
Ken G said:
That's also incorrect, in fact it works over most of the main sequence. It only fails at the lowest masses where the main sequence approaches the Hayashi track and there is no Henyey track leading to the main sequence. However, red dwarfs down there are not only highly convective, they are even starting to become degenerate, which is why there is a mass bottom to the main sequence in the first place.
Which is irrelevant, because that region wouldn't have much effect on the radiative diffusion time anyway, given how little mass is up there.
So how is the luminosity affected by convection? How to demonstrate that convection over most of Sun's volume does not affect Sun's luminosity?
And massive stars have convection in cores. Which is a high density region of the star.
Ken G said:
Again, only relevant for the red dwarfs, which are getting close to brown dwarfs, which aren't main sequence stars at all. Should we be surprised a good approximation for understanding the main sequence starts to fail when the main sequence concept itself starts to lose relevance?
Yup, that's close to where all the main sequence concepts cease to work, it's close to the edge of the main sequence. Nobody should be surprised that approximations start to break down when you get near the edge of a domain.
The assumption of ideal gas breaks down at both ends of the main sequence, and that breakdown is what sets those ends.
The assumption of radiative conduction is inapplicable over much more of the main sequence.
 
  • #63
snorkack said:
So how is the luminosity affected by convection?
Very little.
How to demonstrate that convection over most of Sun's volume does not affect Sun's luminosity?
Because you can get a reasonable estimate of the Sun's luminosity without including it, that's what Henyey did.
And massive stars have convection in cores. Which is a high density region of the star.
Yet it still does not significantly alter the average energy escape time, it is dominated by radiative diffusion which is then used to determine the mass-luminosity relation. Obviously this involves approximations, it is for people who wish to treat a stellar interior as something other than a computerized black box. It's not for everyone.
The assumption of ideal gas breaks down at both ends of the main sequence, and that breakdown is what sets those ends.
Actually, the ideal gas approximation only breaks down at the low-mass end. What happens at the high-mass end is relativity becomes important, because much of the pressure is from relativistic particles (photons).
The assumption of radiative conduction is inapplicable over much more of the main sequence.
That would certainly have come as a big surprise to Eddington and Henyey, and all the progress they made understanding the structure of those stars using precisely that approximation-- even before they knew about fusion. Of course this is all a matter of historical record, I'm not sure you have gained a good conceptual understanding of either main-sequence stars, or the history of their modeling. The first pass for understanding all main-sequence stars (except the low-mass end where convection starts to dominate and degeneracy can also appear) is radiative diffusion. One can then add convection as an improvement that does not have a crucial effect on the luminosity, and one then faces the problem that there is no ab initio model for the complex process that is convection.

What we can see is that you asked me how stars evolve while on the main sequence if we use the thermostatic effects of fusion as our guide, and what we get is a good understanding of what actually does happen. One would think that would have been enough for you.
 
  • Likes: davenn
  • #64
snorkack said:
If fusion is a thermostat that turns on at a specified temperature, then fusion only happens in the centre of star.
Near the centre, heat flux and therefore temperature gradient diverges to infinity.
Therefore, the heat flux cannot be carried by radiation.

snorkack said:
If fusion is a thermostat and the star has a constant (thermostat-defined fusion) temperature over a core of nonzero size then within that core, temperature gradient is zero and no heat will be radiated away.

Fusion is not something that switches on at a certain temperature. The rate of fusion increases as temperature increases until the temperature reaches some peak value, beyond which the rate decreases once more. Obviously the rate here at room temperature is virtually zero, but as you get into the multi-million kelvin range the rate begins to rise sharply.

Also, the core of a star is not at a single temperature. The temperature at the very center is highest and there is a gradual falloff as you move outwards. This is exactly what we would expect for any hot object surrounded by a cooler environment. (Like the hot-pocket whose insides burn your mouth when you bite into it despite the outside being merely warm to the touch)
 
  • #65
Ken G said:
Actually, the ideal gas approximation only breaks down at the low-mass end. What happens at the high-mass end is relativity becomes important, because much of the pressure is from relativistic particles (photons).
Either way, the pressure ceases being described by ##PV = nRT##. And that makes a crucial difference to stability.
Ken G said:
The first pass for understanding all main-sequence stars (except the low-mass end where convection starts to dominate and degeneracy can also appear) is radiative diffusion. One can then add convection as an improvement that does not have a crucial effect on the luminosity, and one then faces the problem that there is no ab initio model for the complex process that is convection.
Then what you need to do is find out, qualitatively, what the effect of convection is.
 
  • #66
snorkack said:
Either way, the pressure ceases being described by ##PV = nRT##.
Sure, it becomes radiation pressure. Which is also easy, but that's for a different thread. Suffice it to say that the mass-luminosity relation is somewhat easier at the high-mass end than over the rest of the main sequence, since the luminosity is characterized by the "Eddington luminosity", which means it is proportional to the stellar mass divided by the average opacity per gram. As is always true with radiative diffusion, the tricky step is in getting the opacity, but it helps when it is predominantly Thomson scattering opacity.
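A short sketch of that scaling, assuming a Thomson-scattering opacity of about ##0.034\ \mathrm{m^2/kg}## (hydrogen mass fraction near 0.7) and a 40 solar mass example star as round numbers:

```python
import math

# Eddington luminosity L_Edd = 4*pi*G*M*c / kappa.  The electron-scattering
# opacity kappa ~ 0.034 m^2/kg assumes a hydrogen mass fraction X ~ 0.7, and
# the 40 M_sun case is just an assumed round number for a high-mass star.
G     = 6.674e-11   # m^3 kg^-1 s^-2
c     = 2.998e8     # m/s
M_sun = 1.989e30    # kg
L_sun = 3.828e26    # W
kappa = 0.034       # m^2/kg (Thomson scattering, assumed composition)

for M in (1, 40):
    L_edd = 4 * math.pi * G * (M * M_sun) * c / kappa
    print(f"M = {M:2d} M_sun: L_Edd ~ {L_edd / L_sun:.1e} L_sun")
```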
Then what you need to do is find out, qualitatively, what the effect of convection is.
Don't forget about the role of rotation. Magnetic fields. Non-Maxwellian velocity distributions. Plasma instabilities. Nah, I think I'll just understand how the star basically works, instead. For that, we already know the answer from the successes in the above narrative in making reasonable (yet certainly approximate) predictions about the effects of evolution on the luminosity and radius. That was the goal, all along.
 
Last edited:
  • Likes: davenn
  • #67
snorkack said:
Now, make the liquid 8 times denser by compressing the drop at unchanged total mass to 2 times smaller linear size.
In that case, the surface gravity of the drop increases 4 times (because of the inverse square law of gravity). Since the density of the liquid was increased 8 times, the weight of a 1 m column was increased 32 times. But since the depth of the column from surface to centre was halved, the pressure at the centre was increased 16 times.

Now, try making the liquid 8 times denser but this time by adding mass to the drop at constant size.
In this case, the surface gravity of the drop increases 8 times (gravity is proportional to mass). Since the density of liquid was still increased 8 times, the weight of the 1 m column is increased 64 times, and since now the depth of column is unchanged, the pressure at the centre was increased also 64 times.

Sorry for my late reply regarding this. I was wondering if there's a way to calculate the critical point at which adding mass actually makes a planet not increase in size anymore. I understand that this depends on the compressibility of the material that a planet consists of.
Could you perhaps give me an example of a calculation for that when a planet completely consists of water theoretically? I'm aware you've given me its compressibility but I'm not sure how to calculate the critical point.

Also, I have a few remarks that I'd really like some verification on to make sure I understand this;

- If the compressibility of a material is 0 (i.e. density doesn't increase) then adding that material to make a planet would make the planet increase in radius, where ##M ∝ r^3##,
So that if the planet's mass is increased by a factor of x, the radius ##r## would increase by a factor of ##x^{\frac{1}{3}}##, and the gravity force is increased by a factor of ##x / x^{\frac{2}{3}} = x^{\frac{1}{3}}##?

- If a material has a compressibility of 100% (i.e. density changes but not radius), then adding that material to make a planet would make the planet increase in density, where ##M ∝ ρ##. So that if a planet's mass is increased by a factor of x, the radius doesn't change and the density, gravity force and pressure are all increased by a factor of x?

- If compressibility at 100% makes a planet not change in radius, what is then the reason that makes a planet shrink in size when adding matter? The only conclusion that I can pull is that the opposing pressure force is not enough. But why is it that pressure force is eventually not enough?
 
Last edited:
  • #68
Drakkith said:
Let me see if I have this straight:

When hydrogen fusion ceases in the core of a solar mass star, the core contracts until it is a hot, degenerate mass of helium. This contraction increases the gravitational pull on the shell of hydrogen just outside the core. This increased gravity causes the shell to compress and heat up until it reaches fusion temperatures. But because the gravity is so high, the temperature needed to stabilize it against further contraction is much higher than the temperature in the main-sequence core. This causes the fusion rate to skyrocket until it provides enough energy to offset the energy loss from the shell and to puff out the shell and outer envelope. This reduces the density of material in the shell, stabilizing the fusion rate by way of limiting the amount of fusion fuel in the shell.

Is that mostly correct?

Degenerate matter does not respond to temperature the same way as non-degenerate matter. The available electron orbitals are all filled with electrons. The electrons can not change to higher or lower energy states.

I think of heat as the random motion of atoms. If you imagine locking the electrons into an outside frame then the motion can only be movement of the nucleus. Doubling or halving the momentum of the nucleus does not change the degenerate frame it is moving in so the higher temperature does not cause increased volume. Increased gravity changes the electron degenerate frame itself. The degenerate gas remains electrically neutral so the nuclei are also closer when gravity increases.

Fusion is still temperature dependent. The fusion rate can rapidly increase because the temperature is not changing volume. Collisions happen more frequently as the temperature increases.

The outer layer(s) are not degenerate. Fusion taking place inside the degenerate core can cause electrons in the surface gas to move away. Surface gases with escape velocity can move away and join the outer shells or leave as part of the planetary nebula or nova. Surface gases without escape velocity remain on the surface and can radiate energy.
 
  • #69
stefan r said:
Degenerate matter does not respond to temperature the same way as non-degenerate matter. The available electron orbitals are all filled with electrons. The electrons can not change to higher or lower energy states.

The part of my post that you highlighted refers to the non-degenerate shell, not the degenerate core.
 
  • #70
JohnnyGui said:
Sorry for my late reply regarding this. I was wondering if there's a way to calculate the critical point at which adding mass actually makes a planet not increase in size anymore. I understand that this depends on the compressibility of the material that a planet consists of.
Could you perhaps give me an example of a calculation for that when a planet completely consists of water theoretically? I'm aware you've given me its compressibility but I'm not sure how to calculate the critical point.
It certainly gets complex, because the compressibility has a complex dependence on pressure, and therefore also is not the same at different depths of the same body.
JohnnyGui said:
Also, I have a few remarks that I'd really like some verification on to make sure I understand this;

- If the compressibility of a material is 0 (i.e. density doesn't increase) then adding that material to make a planet would make the planet increase in radius, where ##M ∝ r^3##,
So that if the planet's mass is increased by a factor of x, the radius ##r## would increase by a factor of ##x^{\frac{1}{3}}##, and the gravity force is increased by a factor of ##x / x^{\frac{2}{3}} = x^{\frac{1}{3}}##?
To avoid roots: if the material has no compressibility, and mass is increased by a factor of ##x^3##,
then radius is increased by a factor of ##x##,
surface gravity, and pressure of a column of given depth, is increased by a factor of ##x##,
central pressure is increased by a factor of ##x^2##.
JohnnyGui said:
- If a material has a compressibility of 100% (i.e. density changes but not radius), then adding that material to make a planet would make the planet increase in density, where ##M ∝ ρ##. So that if a planet's mass is increased by a factor of x, the radius doesn't change and the density, gravity force and pressure are all increased by a factor of x?
If mass is increased by a factor of ##x^3##, then density is increased by a factor of ##x^3##, surface gravity is increased by a factor of ##x^3##, pressure of a column of fixed depth is increased by a factor of ##x^6##, and so is central pressure.
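These scalings, and the 16x and 64x factors quoted earlier in the thread, all follow from the single relation ##P_c \propto GM^2/R^4## for a uniform-density body; a quick check:

```python
# All of these scalings follow from P_central ~ G * M**2 / R**4 (the exact
# prefactor for a uniform-density body is 3/(8*pi), but it cancels in ratios).
def central_pressure_ratio(mass_factor, radius_factor):
    return mass_factor**2 / radius_factor**4

print(central_pressure_ratio(1, 0.5))      # same mass, half the radius   -> 16
print(central_pressure_ratio(8, 1))        # 8x the mass, same radius     -> 64
print(central_pressure_ratio(2**3, 2))     # incompressible, M x8, R x2   -> 4  = x^2 with x = 2
print(central_pressure_ratio(2**3, 1))     # fixed radius, M x8           -> 64 = x^6 with x = 2
```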
JohnnyGui said:
- If compressibility at 100% makes a planet not change in radius, what is then the reason that makes a planet shrink in size when adding matter? The only conclusion that I can pull is that the opposing pressure force is not enough. But why is it that pressure force is eventually not enough?

Compressibility is more of a local property of matter. If the density is independent of pressure, so that pressure can increase with practically no change of density, then the density is constant. If density increases as the square root of pressure (i.e. pressure grows as the square of density), then the planet does not change in radius with added mass.
And if density increases in proportion to pressure, as is the case for an isothermal ideal gas, then the planet will shrink by itself, without any added mass.
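For the cases in between those limits, the standard polytrope scaling (brought in here only as an illustration, not derived in the thread) gives the mass-radius exponent directly, assuming ##P = K\rho^{1+1/n}##:

```python
# Polytrope version of the same point: for P = K * rho**(1 + 1/n), hydrostatic
# equilibrium gives R ~ M**((1 - n)/(3 - n)).
#   n = 0   : incompressible                  -> R ~ M**(1/3)
#   n = 1   : P ~ rho^2                       -> R independent of M
#   n = 1.5 : nonrelativistic degenerate gas  -> R ~ M**(-1/3)
for n in (0.0, 1.0, 1.5):
    exponent = (1 - n) / (3 - n)
    print(f"n = {n:3.1f}: R ~ M^{exponent:+.2f}")
```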
 
  • #71
Drakkith said:
The part of my post that you highlighted refers to the non-degenerate shell, not the degenerate core.
You are right, that was the wrong sentence. Sorry. You probably understood it fine but I am not convinced someone reading it will.
This contraction increases the gravitational pull on the shell of hydrogen just outside the core.
Anything at a given radius has the same gravitational pull from an object of the same mass, regardless of the object's density. The core contraction allows plasma from the shell to fall in. At the lower radius the plasma has a higher temperature.

If I understood it correctly, then there is no difference between the fusion in the non-degenerate shell and main-sequence stars. Identical temperature and pressure would spawn identical fusion rates.

High metal stars are more compact when they form and burn hydrogen faster. The core burns hydrogen faster as helium increases (assuming total mass stays the same). The older core would continue ramping the burn rate exponentially except that degeneracy pressure delays or stops the collapse.

Hypothetically, if degeneracy pressure did not exist would the sun burn straight to iron and a planetary nebula or would it form a micro-black hole?
 
  • #72
stefan r said:
If I understood it correctly, then there is no difference between the fusion in the non-degenerate shell and main-sequence stars. Identical temperature and pressure would spawn identical fusion rates.
That would be true if the temperatures were the same, but Drakkith was pointing out that the temperature in the shell is much higher than typical in the core of a main sequence star, and that's the fundamental reason that red giants are so much more luminous than the main-sequence stars that give rise to them.
Hypothetically, if degeneracy pressure did not exist would the sun burn straight to iron and a planetary nebula or would it form a micro-black hole?
That's an important question, and a good way to understand what degeneracy actually does. If there was not degeneracy (say, if every electron was distinguishable somehow), then there would be nothing to stop the solar core from continuing to lose heat and continue to contract, raising its temperature without limit. When the electrons go ultra relativistic, the core would become vulnerable to catastrophic collapse. So it would burn to iron, and continue to contract after that, ultimately collapsing into a mini black hole (if there was not degeneracy in neutrons either).
 
  • #73
Ken G said:
If there was not degeneracy (say, if every electron was distinguishable somehow),
Or if every electron were an indistinguishable boson.

Come to think of it - protons are fermions. Alpha particles are not.
Do nuclei repel each other in purely electrostatic manner, or also as indistinguishable fermions?
What allows two hydrogen atoms to repel one another? Their electrons are fermions, and so are their protons, but the atoms in total are bosons.
 
  • #74
snorkack said:
Or if every electron were an indistinguishable boson.

Come to think of it - protons are fermions. Alpha particles are not.
Do nuclei repel each other in purely electrostatic manner, or also as indistinguishable fermions?
Electrostatic forces are not so important within nuclei. Instead, one typically just imagines the nucleons are physically pressed together with more or less constant size per nucleon, by the strong force. Degeneracy doesn't matter much either, because the protons and neutrons also press together, and they're distinguishable.
What allows two hydrogen atoms to repel one another? Their electrons are fermions, and so are their protons, but the atoms in total are bosons.
I would have expected hydrogen atoms to experience van der Waals attraction, rather than repulsion, but either way it's electrostatic, not fermionic. Where you see fermionic effects are in the multielectron atoms, where you have to open up the atoms and look at their guts. But what you are saying is, you can't make a cool degenerate gas out of neutral hydrogen atoms, and that's true-- what happens instead is the electrostatic forces create a crystal. But the electrons wouldn't stay in the neutral hydrogen, they'd tunnel out and become like a gas of free particles in a "box" made by the proton lattice. I presume that's what will eventually happen deep in the core of Jupiter as it cools, but I don't know if the repulsion then will be more from the crystal lattice or the degenerate electrons. It's the same issue for cooling white dwarfs: the ions eventually crystallize, but I believe the repulsion is still mostly from the degenerate electron pressure.
 
  • #75
Ken G said:
I would have expected hydrogen atoms to experience van der Waals attraction, rather than repulsion, but either way it's electrostatic, not fermionic. Where you see fermionic effects are in the multielectron atoms, where you have to open up the atoms and look at their guts. But what you are saying is, you can't make a cool degenerate gas out of neutral hydrogen atoms, and that's true-- what happens instead is the electrostatic forces create a crystal. But the electrons wouldn't stay in the neutral hydrogen, they'd tunnel out and become like a gas of free particles in a "box" made by the proton lattice,
But they don't. While in solid diprotium, the molecules do tunnel to rotate around in their positions, the electrons are trapped in molecules. Solid hydrogen is an insulator.
For comparison: He-4 atoms are bosons. But while all He-4 atoms do occupy the same state, they do not occupy the same volume - despite consisting of indistinguishable bosons, ground state helium has a finite density. They still repel.
 
  • #76
snorkack said:
But they don't. While in solid diprotium, the molecules do tunnel to rotate around in their positions, the electrons are trapped in molecules. Solid hydrogen is an insulator.

Are you talking about solid hydrogen as in metallic hydrogen or something else?
 
  • #77
snorkack said:
But they don't. While in solid diprotium, the molecules do tunnel to rotate around in their positions, the electrons are trapped in molecules. Solid hydrogen is an insulator.
For comparison: He-4 atoms are bosons. But while all He-4 atoms do occupy the same state, they do not occupy the same volume - despite consisting of indistinguishable bosons, ground state helium has a finite density. They still repel.
Good point, I forgot the Lennard-Jones potential (often used as a proxy for these kinds of forces) goes repulsive at close distance. It's electrostatic, the fact that the particles are bosons merely means that their ground state is the zero point kinetic energy. Perhaps we should say that the ground state of bosons acts like they have springs between nearest neighbors, whereas the ground state of fermions acts like a Fermi sea and is generally reached while the interparticle forces are still attractive.
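A quick look at the shape of that potential, with ##\epsilon## and ##\sigma## left as unit parameters since only the shape matters here:

```python
# Shape of the Lennard-Jones potential mentioned above:
# V(r) = 4*eps*((s/r)**12 - (s/r)**6) -- attractive (van der Waals) at long
# range, steeply repulsive inside r ~ s.
def lj(r, eps=1.0, s=1.0):
    return 4 * eps * ((s / r)**12 - (s / r)**6)

for r in (0.95, 1.00, 1.12, 1.50, 2.00):
    print(f"r = {r:4.2f} s: V = {lj(r):+7.3f} eps")
```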
 
  • #78
snorkack said:
Compressibility is more of a local property of matter. If the density is independent of pressure, so that pressure can increase with practically no change of density, then the density is constant. If density increases as the square root of pressure (i.e. pressure grows as the square of density), then the planet does not change in radius with added mass.
And if density increases in proportion to pressure, as is the case for an isothermal ideal gas, then the planet will shrink by itself, without any added mass.

So it's the compressibility of a material that determines whether the relationship between pressure and density is proportional or something else? If so, is there a way to calculate the amount of compressibility that would give a proportional relationship?
 