Understanding the Luminosity of Radiative Stars

  • Thread starter: Ken G
  • Tags: Luminosity, Stars
AI Thread Summary
The discussion centers on the luminosity of main-sequence stars, emphasizing that it can be understood without direct reference to nuclear fusion. The mass-luminosity relationship, where luminosity scales with mass cubed, is derived from the star's thermodynamic structure, which is influenced by temperature, density, and radius. Key points include that luminosity is primarily determined by the star's internal structure rather than its surface temperature or fusion processes. The conversation also highlights that the surface temperature is a consequence of luminosity, not the other way around. Overall, the insights challenge the conventional view that nuclear fusion is the primary driver of a star's luminosity.
  • #51
Ken G said:
You were basically correct up to that last part. The temperature of the mirror is not defined

If you touched the mirror, you could convince yourself that its temperature is defined.

Ken G said:
But the key point is, the emergent light will be bluer, so the star will look hotter-- as well it should, the temperature of the gas will be higher. But the luminosity is the same!

But that could not be a blackbody spectrum anymore, as otherwise it would violate energy conservation.
 
  • #52
Ken G said:
(free electron opacity is kind of a lower bound there of about kappa = 0.4 cm^2/g, so using that would yield an upper bound to the luminosity, but real stellar opacities are larger by up to a factor of 10 or so),
Does it mean that subdwarfs are brighter for the same mass, not only smaller and hotter?
Ken G said:
This is negligible for all but the highest mass stars, and would require modifications to the connection between T and M/R that I used, and yield the "Eddington limit" where L is proportional to M. My derivation is for all stars with M below about 50 times solar or so.
Massive stars run into the Eddington limit on the main sequence; other stars encounter it later.
Ken G said:
True enough, but not relevant.
It is the reason I give for excluding case 1). Bright stars tend to be short-lived not only because they are bright (duh) but because they have poor stability.
Ken G said:
Not necessarily, it depends on whether they reach temperatures capable of fusing any remaining nuclear fuel they possess.
If they don't, then the unstable thermal runaway simply operates in the cooling direction.
Ken G said:
Yes. I used the "virial theorem" to arrive at kT ~ GMm/R. That is a first principle. It doesn't apply to your case (1) because it neglects radiation pressure, and it doesn't apply to case (3) because it associates kT with the average kinetic energy per particle, but degeneracy reduces T way below that.
And that raises the question of what the significance of that T is.
 
  • #53
Ken G said:
Bottom line, we not only get the L ~ M^3 scaling we find in the mass-luminosity relationship

It is the mass-luminosity relationship (essentially the same derivation as the one on the Wikipedia page). And it is not really surprising that the luminosity is basically determined only by the mass: after all, the mass of the primordial cloud is the only parameter that can possibly make any difference for star formation (assuming identical chemical composition).

Ken G said:
we can even estimate the constant A if we know something about the opacity kappa, so we can flat out estimate the luminosity of a star knowing only its mass. No fusion, no surface T

Furthermore, it is not surprising that fusion didn't come into it, as the assumption of 'blackbody' radiation doesn't have to care about the details of the processes by which radiation is created and destroyed. It is a 'black box' model based on the assumption of an equilibrium between emission and absorption processes (whatever they may be).

In any case, you can calculate the luminosity from the surface temperature (as determined from the spectrum), and I bet you will get a far more accurate value for it than from your mass-luminosity relationship (where, as you seem to realize yourself, you have to make certain assumptions about the stellar structure and other parameters determining the diffusion process if you want to arrive at an absolute numerical value for the luminosity).
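For concreteness, the surface-temperature route is just the Stefan-Boltzmann law. A minimal sketch with measured solar values (T_eff and R here are observed inputs, not anything derived from first principles):

```python
import math

# Luminosity from the measured effective temperature and radius, via
# L = 4*pi*R^2 * sigma * T_eff^4. Both inputs are observed solar values,
# so this is a consistency check, not a first-principles derivation.
sigma = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
R     = 6.957e8    # solar radius, m
T_eff = 5772.0     # solar effective temperature, K

L = 4 * math.pi * R**2 * sigma * T_eff**4
print(f"L = {L:.2e} W")   # ~3.8e26 W, the measured solar luminosity
```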

Ken G said:
you can see all kinds of erroneous arguments about why high-mass stars fuse faster, but we can now see the reason they do that: they emit light faster,

That would contradict your derivation above: the time t increases with increasing radius and thus with increasing mass. So a more massive star should take longer to emit a certain percentage of the radiative energy it contains.

Ken G said:
and the fusion just has to keep up (because fusion is self-regulated to supply whatever heat is lost by the star, much like a thermostat).

I don't think the fusion rate cares about the radiation lost from the star. It is only determined by the local temperature and density. If you put a 100% reflective mirror around the star, the temperature will steadily increase, and I don't think the fusion will regulate itself down in response. On the contrary, it will result in a fusion bomb.
 
  • #54
Fantasist said:
If you touched the mirror, you could convince yourself that its temperature is defined.
I'll presume you are being facetious, but the mirror would feel hot because it is radiating light. A perfect mirror does not have a temperature.
But that could not be a blackbody spectrum anymore, as otherwise it would violate energy conservation.
It would have the same spectrum as a blackbody, but not the flux given by the Stefan-Boltzmann law. This is called "albedo."
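To see that numerically: scaling a Planck spectrum by a constant factor preserves its shape (same peak wavelength) while the integrated flux falls below sigma*T^4. A minimal sketch, with the reflectivity chosen arbitrarily:

```python
import numpy as np

# A blackbody spectrum scaled by a constant emissivity (1 - reflectivity)
# keeps the same spectral shape -- same peak wavelength -- but carries
# less flux than the Stefan-Boltzmann law would give for that T.
h, c, k_B = 6.626e-34, 2.998e8, 1.381e-23
T = 5772.0            # K
reflectivity = 0.5    # arbitrary illustrative value

lam = np.linspace(1e-7, 3e-6, 2000)   # wavelength grid, m

def planck(lam, T):
    """Planck spectral radiance B_lambda, W m^-3 sr^-1."""
    return (2 * h * c**2 / lam**5) / (np.exp(h * c / (lam * k_B * T)) - 1)

B = planck(lam, T)
Bd = (1 - reflectivity) * B           # "diluted" blackbody

print(lam[np.argmax(B)] == lam[np.argmax(Bd)])   # True: same peak wavelength
print(np.trapz(Bd, lam) / np.trapz(B, lam))      # 0.5: half the flux
```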
 
  • #55
snorkack said:
Does it mean that subdwarfs are brighter for the same mass, not only smaller and hotter?
Yes, that occurred to me as well. Subdwarfs must not just have lower metallicity at their surfaces, but all over, so they should have higher luminosity for the same mass. But they fall below the main sequence for the same spectral type. So I think what must be happening there is, they are actually superluminous for their mass, but because the main sequence is so steep in an HR diagram, and their surface temperatures are shifted upward (perhaps by the very albedo effect we are talking about), they end up looking underluminous for their surface T.
Massive stars run into the Eddington limit on the main sequence; other stars encounter it later.
Yes, I mentioned that, but only very massive stars.
It is the reason I give for excluding case 1). Bright stars tend to be short-lived not only because they are bright (duh) but because they have poor stability.
They are short-lived because they burn up their nuclear fuel quickly, and nuclear fuel is the main thing that delays a star's evolution. Also, low-mass stars have access to the white dwarf stage, which is extremely long-lived. So we really have two issues here-- one is, how quickly do they evolve to their "end stage" (and that is all about how fast their heat leaks out in the form of light), and the other is, what is that end stage and how long-lived is that. I speak only to the first issue here, the second is another thread.
If they don't, then the unstable thermal runaway simply operates in the cooling direction.
No, white dwarfs in the absence of fusion have no runaway effects, they just gradually cool as their heat leaks out. The reason nuclear fusion is thermally unstable in a white dwarf is that the faster it occurs, the more it piles up heat, which increases the temperature of the nuclei, and that increases the fusion rate. If no fusion is occurring, no instabilities are present.
And that raises the question of what the significance of that T is.
T is quite important, that's why I invoke it. But this is the characteristic interior T, not the surface T, which is totally different and is set by the luminosity. The interior T is set by the hydrostatic equilibrium. It's apples and oranges, which is why that Wiki approach is a conceptual boondoggle.
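For concreteness, a minimal numeric sketch of that characteristic interior T for the Sun, using the virial estimate kT ~ GMm/R from earlier in the thread (the order-unity structure factor is set to 1, and the mean molecular weight of 0.6 is an assumed value for an ionized H/He mix):

```python
# Rough check of the virial estimate kT ~ GMm/R for the Sun. The
# order-unity coefficient is deliberately set to 1, so this is only
# an order-of-magnitude estimate of the characteristic interior T.
G   = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
k_B = 1.381e-23    # Boltzmann constant, J/K
M   = 1.989e30     # solar mass, kg
R   = 6.957e8      # solar radius, m
m_p = 1.673e-27    # proton mass, kg
mu  = 0.6          # assumed mean molecular weight, ionized H/He mix

T_char = G * M * mu * m_p / (k_B * R)
print(f"characteristic interior T ~ {T_char:.1e} K")
# ~1.4e7 K, the same ballpark as the ~1.6e7 K central temperature of
# standard solar models -- and nothing like the 5772 K surface T.
```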
 
  • #56
Ken G said:
They are short-lived because they burn up their nuclear fuel quickly, and nuclear fuel is the main thing that delays a star's evolution.
If that were the case, the Eddington limit would set a lower bound to stellar lifetime.
Ken G said:
No, white dwarfs in the absence of fusion have no runaway effects, they just gradually cool as their heat leaks out. The reason nuclear fusion is thermally unstable in a white dwarf is that the faster it occurs, the more it piles up heat, which increases the temperature of the nuclei, and that increases the fusion rate. If no fusion is occurring, no instabilities are present.
The same thermal instability can operate in the other direction. The slower the fusion occurs, the cooler the star gets, and that further slows down fusion, etc. Which is why brown dwarfs do not sustain long-term protium fusion even if they fuse some small amounts of protium when heated up by initial contraction, and also sustain an even lower rate of protium fusion due to pure pycnonuclear reactions.
Ken G said:
T is quite important, that's why I invoke it. But this is the characteristic interior T, not the surface T, which is totally different and is set by the luminosity. The interior T is set by the hydrostatic equilibrium.

Where is that "characteristic" T?
 
  • #57
Fantasist said:
It is the mass-luminosity relationship (essentially the same derivation as the one on the Wikipedia page).
Yes it is, but the Wiki derivation is horrendous, because it first does it completely wrong (plug in the numbers you'd get from their approach, you'll see how staggeringly wrong it is), and then applies a "correction," which completely eradicates the original horrendous physics, and swaps in the real physics through the back door. It is a perfect example of what a conceptual morass you end up in if you think you should be using surface temperature to infer luminosity. When you understand what they really did there, you'll see what I mean.
And it is not really surprising that the luminosity is basically determined only by the mass: after all, the mass of the primordial cloud is the only parameter that can possibly make any difference for star formation (assuming identical chemical composition).
It is extremely surprising that it depends only on the mass, in the sense that it is surprising it does not depend on either R or the fusion physics.

The lack of dependence on R means that if you have a radiating star that is gradually contracting (prior to reaching the main sequence), its luminosity should not change! That would be true even if the star contracted by a factor of 10, if the opacity did not change, and the internal physics did not shift from convection to radiation. But contracting stars do tend to start out highly convective, so do make that transition, and that's why we generally have not noticed this remarkable absence of a dependence on R.

The lack of dependence on fusion physics means that when a star initiates fusion, nothing really happens to the star except it stops contracting. That's not necessarily what must happen; for example, when later in the star's life it begins to fuse helium, it will undergo a radical change in structure, and change luminosity drastically. But the onset of hydrogen fusion does not come with any such drastic restructuring of the star, because it started out with a fairly simple, mostly radiative structure, and when fusion begins, it just maintains that same structure because all the fusion does is replace the heat that is leaking out.
Furthermore, it is not surprising that fusion didn't come into it, as the assumption of 'blackbody' radiation doesn't have to care about the details of the processes by which radiation is created and destroyed.
Try telling that to a red giant that begins fusing helium in its core! But you are certainly right that if we get away with assuming that fusion does not affect the internal structure of the star, then that structure is indeed a kind of black box. That's how Eddington was able to deduce that internal structure before he even knew that fusion existed. Still, if you think it's not surprising that fusion doesn't matter, then not only have you learned an important lesson, you may also find it hard to read all the textbooks and online course notes that tell you the fusion physics explains the mass-luminosity relationship!
In any case, you can calculate the luminosity from the surface temperature (as determined from the spectrum), and I bet you will get a far more accurate value for it than from your mass-luminosity relationship (where, as you seem to realize yourself, you have to make certain assumptions about the stellar structure and other parameters determining the diffusion process if you want to arrive at an absolute numerical value for the luminosity).
I'm sure that's true, but it fails the objective of understanding the luminosity from first principles. We can also just measure the luminosity, that's the most accurate way yet!
That would contradict your derivation above: the time t increases with increasing radius and thus with increasing mass.
That's not what I meant by "emit light faster", I did not mean "the diffusion time is less", I meant "they emit light from their surface at a faster rate."
I don't think the fusion rate cares about the radiation lost from the star.
Well, we know that cannot be true, because the fusion rate equals the rate that radiation is lost from the star.
It is only determined by the local temperature and density.
Thank you for bringing that up, it's an important part of the mistake that many people make. You will see a lot of places that say words to the effect that "because fusion depends so sensitively on temperature, the fusion rate controls the luminosity". That's exactly backward. Because the fusion rate depends so sensitively on temperature, tiny changes in T affect the fusion rate a lot, so the fusion rate has no power to affect the star at all. After all, the thermodynamic properties of the star are not nearly as sensitive to T, so we just need a basic idea of what T is to get a basic idea of what the star is doing. But since fusion needs a very precise idea of what T is, we can always get the fusion to fall in line with minor T modifications. That's why fusion acts like a thermostat on the T, but it has little power to alter the stellar characteristics other than establishing at what central T the star will stop contracting.

If you don't see that, look at it this way. Imagine you are trying to iterate a model of the Sun to get its luminosity right. You give it an M and a R, and you start playing with T. You can get the T basically right just from the gravitational physics (the force balance), and you see that it is in the ballpark of where fusion can happen. You also get L in the right ballpark, before you say anything about fusion (as I showed). But now you want to bring in fusion, so you tinker with T. Let's say originally your T was too high, so the fusion rate was too fast and was way more than L. So you lower T just a little, and poof, the fusion rate responds mightily (this is especially true of CNO cycle fusion, more so than p-p chain, so it works even better for stars a bit more massive than the Sun). So you don't need to change T much at all, so you don't need to update the rest of your calculation much, so you end up not changing L to reach a self-consistent solution! So we see, it is precisely the T-sensitivity of fusion that has made it not affect L much, though many places you will see that logic exactly reversed.
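That iteration can be caricatured in a few lines. A minimal sketch, where the steep exponent and the rate normalization are illustrative stand-ins rather than solar values:

```python
# Toy "fusion thermostat": the structure fixes the luminosity L, and we
# ask what temperature T makes the fusion rate match it. Because the
# rate goes like T**n with large n, the required nudge in T is tiny.
L_target = 1.0      # luminosity, arbitrary units
n = 18              # assumed steep T-sensitivity (CNO-like exponent)

def fusion_rate(T):
    return T**n     # fictitious normalization: rate = 1 at T = 1

T_guess = 1.10      # suppose the gravitational estimate ran 10% hot
print(f"fusion/L at first guess: {fusion_rate(T_guess) / L_target:.1f}")  # ~5.6

T_balanced = L_target ** (1.0 / n)    # solve fusion_rate(T) = L_target
print(f"balanced T: {T_balanced:.3f}")   # 1.000
# A ~10% nudge in T fixes a factor-of-several imbalance in the fusion
# rate, so the rest of the model (and hence L) barely needs updating.
```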
If you put a 100% reflective mirror around the star, the temperature will steadily increase, and I don't think the fusion will regulate itself down in response. On the contrary, it will result in a fusion bomb.
Yes, 100% reflection causes a lot of physical difficulties, because you can't reach an equilibrium. Even if you just stick to 99%, you would not have much problem-- L would still not be changed much.
 
  • #58
snorkack said:
If that were the case, the Eddington limit would set a lower bound to stellar lifetime.
Well, the Eddington limit does set a lower bound to stellar lifetime! Any star with given mass M has a lower limit to its main-sequence lifetime, set by the Eddington limit, but it is generally way shorter than the actual main-sequence lifetime-- except for stars of mass of about 50 solar masses or more.
The same thermal instability can operate in the other direction. The slower the fusion occurs, the cooler the star gets, and that further slows down fusion, etc.
Yes, if there is something that is fusing in the first place. As I said, there is no instability if there is no fusion going on.
Which is why brown dwarfs do not sustain long-term protium fusion even if they fuse some small amounts of protium when heated up by initial contraction, and also sustain an even lower rate of protium fusion due to pure pycnonuclear reactions.
That can't be right. Any instability can go in either direction, so the issue is, which direction is going to dominate? If you have an instability that can either turn off fusion, or make it go nuts, then in some places you will turn the fusion off, and in other places you will make it go nuts. Which of those places is going to matter more, say in an H bomb?
Where is that "characteristic" T?
Throughout the interior of a star, where T is uniformly high and not varying dramatically (though obviously it monotonically decreases with r). If you want to make it precise, you "de-dimensionalize" your T variable. That means you write T(r) = To*y(x) using r = R*x, where y(x) is a dimensionless order-unity function that determines the details of the T structure, and x runs from 0 to 1. Here To is what I am calling the "characteristic T." Then we assume a "homology class", which means that as we vary M from one model to another, we assume that the function y(x) stays the same, so we can look for scaling relations between T and M and R and L and so on. This is also a key aspect of what are called "polytropic models", used routinely (and by Eddington) to model stars. What you don't seem to recognize is that everything I'm saying is basic stellar physics, nothing but a simplified and more conceptually accessible version of Eddington's work on stellar structure.
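The R-cancellation can be checked symbolically from just those ingredients. A minimal sketch, with every order-unity homology factor (the integrals over y(x)) set to 1 and a standard form of the radiative-diffusion scaling assumed (it may differ from the exact expression used upthread by order-unity factors):

```python
import sympy as sp

# Scaling skeleton of the derivation, all order-unity factors dropped:
#   virial:              T0 ~ G*M*m/(k_B*R)
#   radiative diffusion: L  ~ c*a*T0**4 * R**4 / (kappa*M)
# The claim is that R cancels, leaving L ~ M**3 / kappa.
G, M, R, m, k_B, c, a, kappa = sp.symbols('G M R m k_B c a kappa',
                                          positive=True)
T0 = G * M * m / (k_B * R)
L = c * a * T0**4 * R**4 / (kappa * M)

print(sp.simplify(L))   # a*c*G**4*M**3*m**4/(k_B**4*kappa): no R in sight
```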
 
  • #59
Ken G said:
Well, the Eddington limit does set a lower bound to stellar lifetime! Any star with given mass M has a lower limit to its main-sequence lifetime, set by the Eddington limit, but it is generally way shorter than the actual main-sequence lifetime-- except for stars of mass of about 50 solar masses or more.
Then where are all these stars with different large masses and equal main sequence lifetimes?
Ken G said:
That can't be right. Any instability can go in either direction, so the issue is, which direction is going to dominate? If you have an instability that can either turn off fusion, or make it go nuts, then in some places you will turn the fusion off, and in other places you will make it go nuts. Which of those places is going to matter more, say in an H bomb?
If the instability goes in the fusion direction, then the instability disappears and causes stable fusion, as if there were no instability to begin with.
Ken G said:
Throughout the interior of a star, where T is uniformly high and not varying dramatically (though obviously it monotonically decreases with r). If you want to make it precise, you "de-dimensionalize" your T variable. That means you write T(r) = To*y(x) using r = R*x, where y(x) is a dimensionless order-unity function that determines the details of the T structure, and x runs from 0 to 1. Here To is what I am calling the "characteristic T." Then we assume a "homology class", which means that as we vary M from one model to another, we assume that the function y(x) stays the same, so we can look for scaling relations between T and M and R and L and so on.

But the trouble is that opacity varies with temperature in a complex manner.
 
  • #60
snorkack said:
Then where are all these stars with different large masses and equal main sequence lifetimes?
At this very moment? Mostly in star-forming regions in the spiral arms of galaxies, I should imagine. They're just rare, because stars with such high masses are rare. Many seem to think they would have been much more common in the very early universe, so we might perhaps conclude that population III stars largely have that property. It is easy to estimate that minimum lifetime: set L = 4 Pi G M c/kappa and t = f M c^2/L, where f is some small fusion efficiency factor like 0.001 which accounts for how much mass is in the core and how much energy it can release. We get that the minimum main-sequence lifetime, which is also the main-sequence lifetime of all the highest-mass stars, is about t = f c kappa/(4 Pi G). We also have to estimate the cross section per gram, which is kappa; if we take free electrons as our opacity, then kappa is about 0.4 cm^2/g, which is a lower bound, so perhaps just take 1. The result is then about a million years, not a bad estimate.
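In numbers, a minimal sketch in SI units, with f and kappa taken as the rough values suggested above:

```python
import math

# Minimum main-sequence lifetime from the Eddington limit:
#   L_Edd = 4*pi*G*M*c/kappa  and  t = f*M*c^2 / L_Edd
# The mass cancels, leaving t = f*c*kappa/(4*pi*G).
G = 6.674e-11    # m^3 kg^-1 s^-2
c = 2.998e8      # m/s
f = 1e-3         # rough fusion efficiency factor from the post
kappa = 0.1      # m^2/kg, i.e. the 1 cm^2/g suggested above

t = f * c * kappa / (4 * math.pi * G)
print(f"t = {t:.1e} s = {t / 3.156e7:.1e} yr")   # ~1e6 yr, as stated
```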
If the instability goes in the fusion direction, then the instability disappears and causes stable fusion, as if there were no instability to begin with.
Then you will have stable fusion, not fusion turning off everywhere like you claimed above. I just don't see how that flavor of instability is of any particular importance, eventually the star will be in a state of stable fusion if it has the instability you describe. Indeed, that's probably more or less just what's happening in the Sun right now, where fusion on very small scales can either turn itself off or go unstable, but on larger scales you see stable burning. The details don't matter, the total fusion rate is still set by the luminosity! That's the most important thing to get from this thread: the details of fusion don't matter, and that's why you don't see any difference in the star when fusion initiates, or any difference along the main sequence when p-p chain fusion at lower mass gives way to CNO cycle fusion for higher mass stars. Even the one detail that is somewhat important in some ways, which is the fact that fusion is very T-sensitive and quite capable of yielding any L you need, only comes into play in explaining why the main sequence is so narrow in an H-R diagram, which means that stars cease contracting when in that phase.
But the trouble is that opacity varies with temperature in a complex manner.
A fact I pointed out myself. That's why idealizations are necessary to understand the mass-luminosity relation. If you want high accuracy, you must put that in, plus a whole lot of other things like convection zones, neutrino losses, winds, metallicity, rotation, perhaps magnetic fields... etc.
 
  • #61
Ken G said:
At this very moment? Mostly in star-forming regions in the spiral arms of galaxies, I should imagine. They're just rare, because stars with such high masses are rare. Many seem to think they would have been much more common in the very early universe, so we might perhaps conclude that population III stars largely have that property. It is easy to estimate that minimum lifetime: set L = 4 Pi G M c/kappa and t = f M c^2/L, where f is some small fusion efficiency factor like 0.001 which accounts for how much mass is in the core and how much energy it can release. We get that the minimum main-sequence lifetime, which is also the main-sequence lifetime of all the highest-mass stars, is about t = f c kappa/(4 Pi G). We also have to estimate the cross section per gram, which is kappa; if we take free electrons as our opacity, then kappa is about 0.4 cm^2/g, which is a lower bound, so perhaps just take 1. The result is then about a million years, not a bad estimate.
The question is, do massive stars near the Eddington limit exist for periods of time in which a significant fraction of protium is fused (as computed, around 2 million years), or are they destroyed in completely different and much faster ways (shedding most of their mass, unfused, through steady stellar winds or radial oscillations)?
Ken G said:
Then you will have stable fusion, not fusion turning off everywhere like you claimed above.
Yes, if the instability is in the direction of runaway heating. Yet the instability can also go in the direction of runaway cooling.
Ken G said:
I just don't see how that flavor of instability is of any particular importance, eventually the star will be in a state of stable fusion if it has the instability you describe.
No, it often is in a state of long-term cooling, and is a brown dwarf rather than a star. Look at the mass/luminosity relationship of old stars, and it is NOT a continuous relationship, because of the discontinuous jump between the least massive red dwarfs and the most massive brown dwarfs.
Are there perhaps even red and brown dwarfs of equal mass and composition, because of having a path-dependent state and luminosity?
Ken G said:
Indeed, that's probably more or less just what's happening in the Sun right now, where fusion on very small scales can either turn itself off or go unstable, but on larger scales you see stable burning.
Can it? The rate of protium fusion is slow and weakly dependent on temperature, while the Sun's heat capacity is huge.
 
  • #62
snorkack said:
The question is, do massive stars near the Eddington limit exist for periods of time in which a significant fraction of protium is fused (as computed, around 2 million years), or are they destroyed in completely different and much faster ways (shedding most of their mass, unfused, through steady stellar winds or radial oscillations)?
That is indeed an open question. This analysis only covers the luminosity of the star, other evolutionary channels require a different analysis.
Yes, if the instability is in the direction of runaway heating. Yet the instability can also go in the direction of runaway cooling.
But eventually, it will have gone the way of runaway heating in enough places that the star is no longer in that previous state, correct? So the runaway cooling cannot be an important contributor to the structure of the star, the runaway heating is never reversed, it must proceed until something stabilizes it. Just imagine a set of dimmer switches that can be turned up or down, but once they are on all the way, they stay on all the way-- wait long enough, and you will be in a bright room!
No, it often is in a state of long-term cooling, and is a brown dwarf rather than a star. Look at the mass/luminosity relationship of old stars, and it is NOT a continuous relationship, because of the discontinuous jump between the least massive red dwarfs and the most massive brown dwarfs.
I presumed that was because the most massive brown dwarfs have a different internal structure owing to non-ideal-gas type behavior. They are also fusing deuterium, not hydrogen, correct? In any event, it may have some interesting physics going on there, but it has nothing to say about the derivation I gave, as it is a different physical model. My derivation treats an ideal gas because I asserted that the average energy per particle has the ideal-gas connection to the temperature.
Are there perhaps even red and brown dwarfs of equal mass and composition, because of having a path-dependent state and luminosity?
Again, I don't say there is no interesting physics happening to stars that are not ideal gases, I say that if they are subject primarily to ideal-gas physics, then the above derivation applies to them. If they are not, it doesn't.
Can it? The rate of protium fusion is slow and weakly dependent on temperature, while the Sun's heat capacity is huge.
I have no idea what you are saying here. Protium fusion is regular old p-p chain fusion, which is well known to be highly temperature sensitive (though less so than CNO-cycle, that much is true). The large heat capacity of the Sun only means that we can assume the energy in the radiation field is the slave to the heat content, as was done when I used the characteristic T of the ideal gas to get the T of the radiation field. I'm not sure what you are objecting to, the derivation is quite transparent.
 
  • #63
Ken G said:
But eventually, it will have gone the way of runaway heating in enough places that the star is no longer in that previous state, correct? So the runaway cooling cannot be an important contributor to the structure of the star, the runaway heating is never reversed, it must proceed until something stabilizes it. Just imagine a set of dimmer switches that can be turned up or down, but once they are on all the way, they stay on all the way-- wait long enough, and you will be in a bright room!
No. Runaway heating or cooling is too slow to take place in spots within a star - the heat is distributed faster within the star through adiabatic movement, convection and conduction, so runaway cooling or heating happens to the whole star.
Ken G said:
I presumed that was because the most massive brown dwarfs have a different internal structure owing to non-ideal-gas type behavior. They are also fusing deuterium, not hydrogen, correct? In any event, it may have some interesting physics going on there, but it has nothing to say about the derivation I gave, as it is a different physical model. My derivation treats an ideal gas because I asserted that the average energy per particle has the ideal-gas connection to the temperature.
They fuse deuterium and lithium. They ALSO fuse some protium, especially when they are young and hot from the initial contraction. And so do young red dwarfs.
Both young big brown dwarfs and young small red dwarfs are hot; they have some contribution to pressure from thermal pressure and some from degeneracy, and some rate of protium fusion. The difference is that as they age, red dwarfs stabilize at some temperature and rate of protium fusion (these will actually grow in the long term as the protium fraction decreases), while brown dwarfs continue to cool and the protium fusion slows down - and the decreasing radius does NOT cause an increase of interior temperature.
Ken G said:
I have no idea what you are saying here. Protium fusion is regular old p-p chain fusion, which is well known to be highly temperature sensitive (though less so than CNO-cycle, that much is true). The large heat capacity of the Sun only means that we can assume the energy in the radiation field is the slave to the heat content, as was done when I used the characteristic T of the ideal gas to get the T of the radiation field. I'm not sure what you are objecting to, the derivation is quite transparent.
A small deviation in the Sun's interior temperature has a tiny effect on the actual fusion heat generation, so that effect is completely swamped by the rapid adiabatic response to deviation from hydrostatic balance.
After hydrostatic balance is restored, what is the size and direction of the remaining thermal imbalance?
 
  • #64
snorkack said:
No. Runaway heating or cooling is too slow to take place in spots within a star - the heat is distributed faster within the star through adiabatic movement, convection and conduction, so runaway cooling or heating happens to the whole star.
Then when the runaway happens for the whole star, in the heating direction, what stabilizes it, and how does it ever go unstable again? This model just sounds like the helium flash of a normal star, which stabilizes when it knocks the core completely out of the unstable state. That occurs when the gas is highly degenerate; perhaps there's some different physics when the degeneracy is only partial. In any event, the derivation I gave is for ideal gases with minimal radiation pressure, as for the main sequence below about 50 solar masses but with enough mass not to have become degenerate by the time fusion begins (objects that do are generally not called main-sequence stars).
They fuse deuterium and lithium. They ALSO fuse some protium, especially when they are young and hot from the initial contraction. And so do young red dwarfs.
Sure, and if they are radiative ideal gases, my derivation applies to them. The nature of the fusion is irrelevant, as long as it is stabilized in the usual way that fusion is stable in a large ideal gas. The other branch you are describing just sounds like it's not ideal gas physics, so it says nothing about my derivation.
Both young big brown dwarfs and young small red dwarfs are hot; they have some contribution to pressure from thermal pressure and some from degeneracy, and some rate of protium fusion. The difference is that as they age, red dwarfs stabilize at some temperature and rate of protium fusion (these will actually grow in the long term as the protium fraction decreases), while brown dwarfs continue to cool and the protium fusion slows down - and the decreasing radius does NOT cause an increase of interior temperature.
I'm sure you'll find that's all due to the deviation from ideal gas physics. It could be included as some kind of addendum to the derivation of this thread, along the lines of how things are different if the temperature does not come directly from the average kinetic energy per particle as it does in an ideal gas.
A small deviation in the Sun's interior temperature has a tiny effect on the actual fusion heat generation, so that effect is completely swamped by the rapid adiabatic response to deviation from hydrostatic balance.
The adiabatic response is due to the heat generation! But yes, the net result is the stabilization of the fusion, so it can do what I have been saying it does: replace the lost heat, period.
After hydrostatic balance is restored, what is the size and direction of the remaining thermal imbalance?
When the physics is ideal gas physics, as in the Sun, there is no "remaining thermal imbalance", the adiabatic response stabilizes the thermal state. It makes the fusion do nothing but replace the heat lost due to the luminosity of the star, as derived above.
 
  • #65
Ken G said:
Then when the runaway happens for the whole star, in the heating direction, what stabilizes it,
Increasing contribution of thermal pressure.
Ken G said:
I'm sure you'll find that's all due to the deviation from ideal gas physics. It could be included as some kind of addendum to the derivation of this thread, along the lines of how things are different if the temperature does not come directly from the average kinetic energy per particle as it does in an ideal gas.
Yes, it is the contribution of degeneracy pressure.
Now imagine a shrinking ball of gas, and make the assumption that its radial distribution of temperature and density keeps the same shape, that it obeys ideal gas laws, and also that its heat capacity is constant (this last is least likely).
If the radius shrinks by a factor of 2,
then the density increases 8 times,
the surface gravity increases 4 times,
the pressure of a column of fixed depth thus increases 32 times,
and since the column of gas from surface to centre gets 2 times shorter, the central pressure grows 16 times;
but since the central density grew just 8 times, the central temperature must have doubled.
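A quick arithmetic check of that chain, under the stated assumptions (fixed profile shape, ideal gas):

```python
# Homologous contraction at fixed mass and fixed profile shape:
# shrink the radius by a factor s and track the chain of factors above.
s = 2
density  = s**3                 # rho ~ M/R^3               -> x8
gravity  = s**2                 # g   ~ M/R^2               -> x4
column   = density * gravity    # fixed-depth column weight -> x32
pressure = column // s          # column is s times shorter -> x16
temp     = pressure // density  # ideal gas: T ~ P/rho      -> x2

print(density, gravity, column, pressure, temp)   # 8 4 32 16 2
```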

Now, think what degeneracy pressure does.
If you heat water at 1 atmosphere from 273 K to 277 K, it does NOT expand 1.5% like an ideal gas would - it actually shrinks.
When you heat water from 277 K to 373 K, it does expand - but not 35% like an ideal gas, only about 4%.
Then, when you heat water from 373.14 K to 373.16 K, it expands over 1000 times!

If you heat water at higher pressures, you will find:
that it is slightly denser, because very slightly compressed, at any equal temperature below boiling point
that the boiling point rises with pressure
that water expands on heating near the boiling point at all pressures over about 0.01 atm
that the density of water at boiling point decreases with higher temperature and pressure
that steam, like ideal gas, expands on heating at each single pressure
that steam, like ideal gas, is compressed by pressure at each single temperature
that the density of steam at boiling point increases with pressure and temperature
that the contrast between boiling water and boiling steam densities decreases with temperature and pressure.

At about 220 atmospheres of pressure, the contrast disappears.
Now, if you heat water at slightly over 220 bar, the thermal expansion still starts out very slight at low temperatures but increases, and is, though continuous, very rapid around the critical point (about 374 Celsius).

But when you increase the pressure further, you would find that the increase of water's thermal expansion, from the low-temperature liquid-like minimal expansion to the ideal-gas expansion proportional to temperature, would take place at increasing temperatures and also become monotonic, no longer having a maximum near the critical point.

And the interiors of planets and stars typically have much higher pressures than the critical pressure. The transition between liquid-like behaviour with little thermal expansion and mainly degeneracy pressure at low temperature, and ideal-gas-like behaviour with volume or pressure proportional to temperature and mainly thermal particle pressure, would be continuous and monotonic.
 
  • #66
snorkack said:
Yes, it is the contribution of degeneracy pressure.
OK, so that's a different situation. It's quite interesting physics, but not relevant to the luminosity of main-sequence stars.
And the interiors of planets and stars typically have much higher pressures than the critical pressure.
Well, that depends on what one means by "typical!" Certainly there are lots of brown dwarf stars out there, probably the most common type of star, but that's not what you see when you look up at the night sky. So stars like you describe are normally viewed as oddballs, ironically! The "typical star", to most astronomers, is a main-sequence star, and those are ruled by ideal gas pressure, and do not show liquid-like phase changes or degeneracy, until much later in life.
The transition between liquid-like behaviour with little thermal expansion and mainly degeneracy pressure at low temperature, and ideal-gas-like behaviour with volume or pressure proportional to temperature and mainly thermal particle pressure, would be continuous and monotonic.
Sure, but the same could be said about general relativistic corrections as you go from a main-sequence star to a neutron star. You are still not using GR in most stellar models, because the corrections would be unimportant.
 
  • #67
Ken G said:
It's quite interesting physics, but not relevant to the luminosity of main-sequence stars.
Quite relevant.
Ken G said:
Well, that depends on what one means by "typical!" Certainly there are lots of brown dwarf stars out there, probably the most common type of star, but that's not what you see when you look up at the night sky. So stars like you describe are normally viewed as oddballs, ironically! The "typical star", to most astronomers, is a main-sequence star, and those are ruled by ideal gas pressure, and do not show liquid-like phase changes or degeneracy, until much later in life.
They do.
Now, excluding general relativistic effects and also heat production, and assuming a single radial distribution of temperature and density for each radius:

when a shrinking ball of gas is large and tenuous, its pressure is dominated by thermal pressure and therefore its internal temperature is proportional to the inverse of its radius, as demonstrated before;
whereas when the ball is dense and cool, its pressure is dominated by degeneracy pressure and therefore it has minimal thermal expansion - its radius is near a finite minimum and increases very slightly with temperature.
This is a continuous transition. The temperature of a shrinking ball of gas goes through a smooth maximum: first the temperature increases with the inverse of the radius, then the increase slows below that rate, the temperature reaches a certain maximum, then the temperature falls while still being high and accompanied by significant further shrinking, and finally the temperature falls to low levels with very little further shrinking near the minimum size.
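That maximum drops out of a standard toy model: take the hydrostatic estimate of the central pressure, P_c ~ G*M^(2/3)*rho^(4/3), supply it with ideal-gas plus non-relativistic degeneracy pressure, and solve for kT; the result rises, peaks, and falls as the density grows. A minimal sketch in dimensionless units (all physical constants set to 1, so only the shape of the curve is meaningful):

```python
import numpy as np

# Toy central temperature of a contracting gas ball. Equating
#   P_c ~ M**(2/3) * rho**(4/3)      (hydrostatic estimate)
#   P   = rho*kT + rho**(5/3)        (ideal gas + degeneracy)
# gives kT(rho) = M**(2/3)*rho**(1/3) - rho**(2/3): a smooth maximum.
M = 1.0
rho = np.logspace(-3, 3, 10000)
kT = M**(2/3) * rho**(1/3) - rho**(2/3)

i = np.argmax(kT)
print(f"peak kT = {kT[i]:.2f} at rho = {rho[i]:.2f}")   # ~0.25 at ~0.125
# Analytically the peak kT scales as M**(4/3): massive enough balls get
# hot enough to ignite sustained fusion before degeneracy ends the
# contraction, while lighter ones pass the peak and cool as brown dwarfs.
```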

If there is no heat production then this is what happens to the shrinking ball of gas. The speed of evolution varies with the heat loss rate, which becomes slow at low temperatures, so the ball would spend most of its evolution with temperature slowly falling towards zero and radius slowly shrinking towards a nonzero minimum value. But the maximum of internal temperature would happen just the same.

Now what happens if there IS heat production through fusion?
Thermonuclear fusion is strongly dependent on temperature - but the dependence is still continuous. So the heat production rate goes through a continuous maximum roughly where temperature goes through its continuous maximum.
The rate of heat loss via radiation and convection is also dependent on temperature. But it also depends on the temperature gradient, on the surface area (for the same temperature but a different radius), on opacity, thermal expansivity, viscosity... all of which change with density around the continuous maximum of temperature.

Therefore, the ratio of heat production rate through fusion to heat loss rate goes through a continuous maximum which is generally somewhere else than the continuous maximum of temperature (in which direction?), but since the heat production rate through fusion is strongly dependent on temperature, the maximum of heat production/heat loss rate is somewhere quite near the maximum of temperature.

Now, if a shrinking ball of gas near the maximum of temperature, at which point it is a significantly degenerate and nonideal gas (otherwise it would be nowhere near the maximum!), reaches a maximum of the heat production/heat loss ratio which is close to one but does not reach it, then it never reaches thermal equilibrium - the brown dwarf goes on to cool, whereupon the heat generation decreases. Note that there WAS a significant amount of fusion - since the heat generation rate through fusion did approach the heat loss rate near the maximum temperature, it significantly slowed down the shrinking in that period. So fusion was significant but not sustained.

If, however, the maximum of the heat production/heat loss ratio is slightly over one, then that maximum is never reached. The star will stop shrinking when the heat production/heat loss ratio equals one, so it will not reach the would-be maximum temperature, nor the maximum (over one) ratio of heat production to heat loss.

But as shown above, it has a very significant contribution of degeneracy pressure (otherwise it would have been nowhere near the maximum temperature, and the maximum heat production/heat loss ratio would have been far over one, not slightly over one).

And such a stable star IS, by definition, a main sequence star. Most main sequence stars are red dwarfs... and have a significant contribution of degeneracy pressure/nonideal behaviour.
 
  • #68
@snorkack, your analysis essentially begins from the perspective of a star that does not have enough mass to ever reach the main sequence, and then you gradually increase the mass and ask what happens when you get to stars that barely reach the main sequence. These types of stars tend to have two physical effects that are not in my derivation: degeneracy and convection. So your point is well taken that this is a kind of "forgotten population", because no one ever sees any of these stars when they look up in the night sky, yet they are extremely numerous and no doubt play some important role in the grand scheme that those who research them must keep reminding others of. That must be a frustrating position, so when you see people refer to "main sequence stars" in a way that omits this population, you want to comment. I get that, point taken-- but I am still not talking about that type of star, whether we want to call them "main sequence stars" or not. (Personally, I would tend to define a main-sequence star as one that has a protium fusion rate that is comparable to the stellar luminosity, so if it has more deuterium fusion, or if it is mostly just radiating its gravitational energy, then it is not a main-sequence star. The question is then, just how important is degeneracy when you get to the "bottom of the main sequence," and I don't know if it gets really important even in stars that conform to this definition, or if it only gets really important for stars that do not conform, but either way, it is clearly a transitional population, no matter how numerous, between the standard "main sequence" and the brown dwarfs.)

Anyway, you make interesting points about the different physics in stars that are kind of like main-sequence stars, but have important degeneracy effects, in that transitional population that does include a lot of stars by number. The standard simplifications are to either treat the fusion physics in an ideal gas (the standard main-sequence star), or the degeneracy physics in the absence of fusion (a white dwarf), but this leaves out the transitional population that you are discussing. Your remarks are an effort to fill in that missing territory, but are a bit of a sidelight to this thread.

Still, I take your point that if we hold to some formal meaning of a "main-sequence star", and we look at the number of these things, a lot of them are going to be red dwarfs, and the lower mass versions of those are in a transitional domain where degeneracy is becoming more important, and thermal non-equilibrium also raises its head. My purpose here is simply to understand the stars with higher masses than that, say primarily in the realm from 0.5 to 50 solar masses, which are typically ideal gases with a lot of energy transport by radiative diffusion. The interesting conclusions I reach are that not only is the surface temperature of no particular interest in deriving these mass-luminosity relationships, neither is the presence or absence of fusion, in stark refutation of all the places that say you need to understand the fusion rate if you want to derive the luminosity.
 
  • #69
Ken G said:
These types of stars tend to have two physical effects that are not in my derivation: degeneracy and convection.
Yes.
Ken G said:
That must be a frustrating position, so when you see people refer to "main sequence stars" in a way that omits this population, you want to comment. I get that, point taken-- but I am still not talking about that type of star, whether we want to call them "main sequence stars" or not. (Personally, I would tend to define a main-sequence star as one that has a protium fusion rate that is comparable to the stellar luminosity, so if it has more deuterium fusion, or if it is mostly just radiating its gravitational energy, then it is not a main-sequence star. The question is then, just how important is degeneracy when you get to the "bottom of the main sequence," and I don't know if it gets really important even in stars that conform to this definition, or if it only gets really important for stars that do not conform,
Pretty obviously it does. See my derivation of the definition in the previous post.
But trying to restate it:
Any ideal gas sphere with no inner heat source, no matter how small its mass, would keep contracting on the Kelvin-Helmholtz timescale to arbitrarily small size and arbitrarily high internal temperature.
This contraction can be stopped by one of two effects:
1) the gas becomes significantly nonideal, and the gas sphere cools down and slowly finishes contracting to a nonzero final size,
or 2) fusion provides an internal heat source sufficient to stop the contraction.
A gas ball which is still contracting and heating is not yet on the main sequence, whether or not it eventually reaches the main sequence.
Now a low-mass gas ball stops heating because it passes through its temperature maximum, as in 1).
A massive gas ball would reach a much higher maximum temperature but, because of fusion, it never reaches that point. Instead, it acquires an internal heat source that balances heat loss while the temperature is far below the maximum, and the gas behaviour is still close to ideal.
So what happens to an intermediate-mass gas ball? Well, 1) takes place continuously, so the gas behaviour is significantly nonideal while the temperature is still rising towards the maximum, but the rise is slowing because of the nonideal behaviour.
But since 2) can happen at any point where the temperature is rising, it can happen in the region where the temperature is approaching the maximum.
Note that these stars are on the main-sequence side of the end of the main sequence. The main sequence ends exactly because the gas behaviour near the end, on the inner side, is significantly nonideal.
Ken G said:
Still, I take your point that if we hold to some formal meaning of a "main-sequence star", and we look at the number of these things, a lot of them are going to be red dwarfs, and the lower mass versions of those are in a transitional domain where degeneracy is becoming more important, and thermal non-equilibrium also raises its head. My purpose here is simply to understand the stars with higher masses than that, say primarily in the realm from 0.5 to 50 solar masses, which are typically ideal gases with a lot of energy transport by radiative diffusion.

But besides the degeneracy, another important effect is convection.
The whole assumption of radiative heat conduction is that the heat transport is proportional to the temperature gradient, so the temperature gradient changes with the heat flow.

Not the case with convection! The heat transport is negligibly small below a certain gradient (just the conductive heat flow), then becomes arbitrarily large at a fixed (adiabatic) temperature gradient. Convection is also a thermostat, but it fixes the temperature gradient.
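That is the Schwarzschild criterion in caricature; a minimal sketch, with the demanded radiative gradients chosen arbitrarily:

```python
# Convection as a gradient-thermostat: below the adiabatic gradient the
# gas is stable and radiation sets the gradient; above it, convection
# carries essentially any flux at a gradient pinned near adiabatic.
gamma = 5.0 / 3.0              # ideal monatomic gas
grad_ad = 1.0 - 1.0 / gamma    # adiabatic dlnT/dlnP = 0.4

def actual_gradient(grad_rad):
    # grad_rad: the gradient pure radiative diffusion would need
    return grad_rad if grad_rad <= grad_ad else grad_ad

for grad_rad in (0.2, 0.4, 1.0, 5.0):   # arbitrary demanded values
    print(grad_rad, "->", round(actual_gradient(grad_rad), 2))
# However large the demanded gradient, the actual one saturates near
# 0.4: convection fixes the gradient, not the flux.
```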

And convection is significant far from the lower-mass end of the main sequence! The Sun is convective over the outer 30% of its radius.
With this kind of significance, does a derivation requiring the conduction distance to be equal to the star's radius hold water?
 
  • #70
snorkack said:
Pretty obviously it does.
No, it's not obvious at all, nor does your argument answer the question. You would need actual numbers to answer it-- you would need the protium burning rate, the deuterium burning rate, and the luminosity. If the first and last are close, it's a main-sequence star. If the second and last are close, it's a brown dwarf. If the last is unbalanced, it is a protostar. And if it is a protostar, my derivation still applies, unless either convection or degeneracy dominates the internal structure. The rest of what you said I already know.
The main sequence ends exactly because the gas behaviour near the end, on the inner side, is significantly nonideal.
A point I have been making all along-- non-ideal behavior bounds the "bottom of the main sequence," so once degeneracy dominates, we don't call it a main-sequence star any more. There is of course a transition zone which is a "gray area" to the nomenclature-- my derivation begins to break down in that gray area. All the same, everything I said above is correct, and if you want to add some additional physics at the degenerate end of the main sequence, fine, but it is something of a distraction from what this thread is actually about.
With this kind of significance, does a derivation requiring the conduction distance to be equal to the star's radius hold water?
Read the title of the thread.
 
  • #71
Ken G said:
It is extremely surprising that it [the luminosity] depends only on the mass, in the sense that it is surprising it does not depend on either R or the fusion physics.

OK, assume you have a randomly varying radiation source in the center. What would the luminosity look like then?
 
  • #72
snorkack said:
Any ideal gas sphere with no inner heat source, no matter how small its mass, would keep contracting on the Kelvin-Helmholtz timescale to arbitrarily small size and arbitrarily high internal temperature.

Only if it is losing energy, i.e. if it is luminous (and then, strictly speaking, it is not an ideal gas anymore).
 
  • #73
Fantasist said:
OK, assume you have a randomly varying radiation source in the center. What would the luminosity look like then?
If you had different physics than an actual star, you could get a different luminosity than actual stars have. But the way fusion really works is, it self-regulates to replace whatever heat is lost by the mechanism I describe. This is why fusion is stable-- if it didn't do this, our Sun would be a very large H-bomb.
 
  • #74
Ken G said:
If you had different physics than an actual star, you could get a different luminosity than actual stars have. But the way fusion really works is, it self-regulates to replace whatever heat is lost by the mechanism I describe. This is why fusion is stable-- if it didn't do this, our Sun would be a very large H-bomb.

Your comparison of fusion with a thermostat appears to be paradoxical to me: a thermostat decreases the energy production when the temperature increases, but fusion, on the contrary, increases it, so it is potentially destabilizing. The star is only stabilized by the fact that it expands when it is heated, and in the process cools again due to the work done against its own gravitational field.

Irrespective of the stability issue, the bottom line is that the only radiation lost from the star is radiation that has been produced by some kind of radiative process in the first place, whatever the structure and physics of the star may be (and whatever mass-luminosity relationship you may derive from this).
 
  • #75
Fantasist said:
Your comparison of fusion with a thermostat appears to be paradoxical to me: a thermostat decreases the energy production when the temperature increases, but fusion, on the contrary, increases it, so it is potentially destabilizing. The star is only stabilized by the fact that it expands when it is heated, and in the process cools again due to the work done against its own gravitational field.
Yes, but you have to include the entire situation. Fusion, in an environment that expands when it gets hot, acts like a stable thermostat. That's all that has to be true for the situation I described to occur.
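A minimal caricature of why that combination is stable, using the virial theorem's negative effective heat capacity (the exponent, the heat capacity, and the time step are arbitrary illustrative values):

```python
# For a bound ideal-gas star the virial theorem gives E_total = -E_thermal,
# so the effective heat capacity is negative: net heating (fusion above
# losses) expands the star and *lowers* T. In caricature:
#   dT/dt = -(fusion(T) - L) / C,  with fusion(T) = T**n
n, C, L = 18, 10.0, 1.0      # steep exponent, heat capacity, luminosity
def fusion(T):
    return T**n              # rate = L at T = 1 by construction

T, dt = 1.05, 0.01           # start the star 5% too hot
for _ in range(2000):
    T += -(fusion(T) - L) / C * dt

print(f"T settles to {T:.4f}")   # -> 1.0000: overheating self-corrects
# Flip the sign (a positive heat capacity, as in a degenerate core) and
# the same perturbation runs away instead: the helium-flash case
# mentioned upthread.
```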
Irrespective of the stability issue, the bottom line is that the only radiation lost from the star is radiation that has been produced by some kind of radiative process in the first place, whatever the structure and physics of the star may be (and whatever mass-luminosity relationship you may derive from this).
Yes, radiation is created by processes that create radiation, that is true. But we know that, what I'm saying is something very few people realize: the physics of fusion has little effect on the luminosity of a star that transports energy radiatively and obeys ideal-gas physics. Hopefully, more people know this now.
 
  • #76
Fantasist said:
Your comparison of fusion with a thermostat appears to be paradoxical to me: a thermostat decreases the energy production when the temperature increases, but fusion, on the contrary, increases it, so it is potentially destabilizing. The star is only stabilized by the fact that it expands when it is heated, and in the process cools again due to the work done against its own gravitational field.

It's always best not to take analogies too far. Both a thermostat and the physics of a star result in the same effect: the regulation of temperature.
 