Understanding the Luminosity of Radiative Stars

Thread starter: Ken G
Tags: Luminosity, Stars

Summary
The discussion centers on the luminosity of main-sequence stars, emphasizing that it can be understood without direct reference to nuclear fusion. The mass-luminosity relationship, where luminosity scales with mass cubed, is derived from the star's thermodynamic structure, which is influenced by temperature, density, and radius. Key points include that luminosity is primarily determined by the star's internal structure rather than its surface temperature or fusion processes. The conversation also highlights that the surface temperature is a consequence of luminosity, not the other way around. Overall, the insights challenge the conventional view that nuclear fusion is the primary driver of a star's luminosity.
  • #61
Ken G said:
At this very moment? Mostly in star-forming regions in the spiral arms of galaxies, I should imagine. They're just rare; stars with such high masses are rare. Many seem to think they would have been much more common in the very early universe, so we might perhaps conclude that population III stars largely have that property. It is easy to estimate that minimum lifetime: set L = 4πGMc/κ and t = fMc²/L, where f is some small fusion efficiency factor like 0.001 that accounts for how much mass is in the core and how much energy it can release. We get that the minimum main-sequence lifetime, which is also the main-sequence lifetime of all the highest-mass stars, is about t = fcκ/(4πG). We also have to estimate the cross section per gram, which is κ, but if we take free electrons as our opacity, then κ is about 0.4 cm²/g; that is a lower bound, so perhaps just take 1. The result is then about a million years, not a bad estimate.
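As a quick check of the arithmetic in that quote, here is a minimal Python sketch (CGS units; the values f = 0.001 and κ = 1 cm²/g are simply the rough assumptions stated above, not precise inputs):

[CODE lang="python"]
# Minimum main-sequence lifetime from the quote: t = f * c * kappa / (4*pi*G), in CGS units.
import math

G = 6.674e-8      # gravitational constant [cm^3 g^-1 s^-2]
c = 2.998e10      # speed of light [cm/s]
kappa = 1.0       # opacity [cm^2/g]; electron scattering gives ~0.4, rounded up as in the quote
f = 1e-3          # rough fusion efficiency (core mass fraction times energy yield)

t_sec = f * c * kappa / (4 * math.pi * G)
print(f"minimum lifetime ~ {t_sec / 3.156e7:.2e} yr")   # about a million years
[/CODE]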
The question is: do massive stars near the Eddington limit survive for long enough (around 2 million years, as computed) that a significant fraction of their protium is fused, or are they destroyed much faster in completely different ways, shedding most of their mass unfused through steady stellar winds or radial oscillations?
Ken G said:
Then you will have stable fusion, not fusion turning off everywhere like you claimed above.
Yes, if the instability is in the direction of runaway heating. But the instability can also go in the direction of runaway cooling.
Ken G said:
I just don't see how that flavor of instability is of any particular importance, eventually the star will be in a state of stable fusion if it has the instability you describe.
No, it is often in a state of long-term cooling, and is a brown dwarf rather than a star. Look at the mass-luminosity relationship of old stars: it is NOT a continuous relationship, because of the discontinuous jump between the least massive red dwarfs and the most massive brown dwarfs.
Are there perhaps even red and brown dwarfs of equal mass and composition, because of having a path-dependent state and luminosity?
Ken G said:
Indeed, that's probably more or less just what's happening in the Sun right now, where fusion on very small scales can either turn itself off or go unstable, but on larger scales you see stable burning.
Can it? The rate of protium fusion is slow and weakly dependent on temperature, while the Sun's heat capacity is huge.
 
  • #62
snorkack said:
The question is: do massive stars near the Eddington limit survive for long enough (around 2 million years, as computed) that a significant fraction of their protium is fused, or are they destroyed much faster in completely different ways, shedding most of their mass unfused through steady stellar winds or radial oscillations?
That is indeed an open question. This analysis only covers the luminosity of the star, other evolutionary channels require a different analysis.
Yes, if the instability is in the direction of runaway heating. But the instability can also go in the direction of runaway cooling.
But eventually, it will have gone the way of runaway heating in enough places that the star is no longer in that previous state, correct? So the runaway cooling cannot be an important contributor to the structure of the star; the runaway heating is never reversed, and it must proceed until something stabilizes it. Just imagine a set of dimmer switches that can be turned up or down, but once they are on all the way, they stay on all the way-- wait long enough, and you will be in a bright room!
No, it is often in a state of long-term cooling, and is a brown dwarf rather than a star. Look at the mass-luminosity relationship of old stars: it is NOT a continuous relationship, because of the discontinuous jump between the least massive red dwarfs and the most massive brown dwarfs.
I presumed that was because the most massive brown dwarfs have a different internal structure owing to non-ideal-gas type behavior. They are also fusing deuterium, not hydrogen, correct? In any event, it may have some interesting physics going on there, but it has nothing to say about the derivation I gave, as it is a different physical model. My derivation treats an ideal gas because I asserted that the average energy per particle has the ideal-gas connection to the temperature.
Are there perhaps even red and brown dwarfs of equal mass and composition, because of having a path-dependent state and luminosity?
Again, I don't say there is no interesting physics happening to stars that are not ideal gases, I say that if they are subject primarily to ideal-gas physics, then the above derivation applies to them. If they are not, it doesn't.
Can it? The rate of protium fusion is slow and weakly dependent on temperature, while the Sun's heat capacity is huge.
I have no idea what you are saying here. Protium fusion is regular old p-p chain fusion, which is well known to be highly temperature sensitive (though less so than CNO-cycle, that much is true). The large heat capacity of the Sun only means that we can assume the energy in the radiation field is the slave to the heat content, as was done when I used the characteristic T of the ideal gas to get the T of the radiation field. I'm not sure what you are objecting to, the derivation is quite transparent.
 
  • #63
Ken G said:
But eventually, it will have gone the way of runaway heating in enough places that the star is no longer in that previous state, correct? So the runaway cooling cannot be an important contributor to the structure of the star; the runaway heating is never reversed, and it must proceed until something stabilizes it. Just imagine a set of dimmer switches that can be turned up or down, but once they are on all the way, they stay on all the way-- wait long enough, and you will be in a bright room!
No. Runaway heating or cooling is too slow to take place in localized spots within the star - the heat is redistributed faster within the star through adiabatic motion, convection and conduction, so runaway cooling or heating happens to the star as a whole.
Ken G said:
I presumed that was because the most massive brown dwarfs have a different internal structure owing to non-ideal-gas type behavior. They are also fusing deuterium, not hydrogen, correct? In any event, it may have some interesting physics going on there, but it has nothing to say about the derivation I gave, as it is a different physical model. My derivation treats an ideal gas because I asserted that the average energy per particle has the ideal-gas connection to the temperature.
They fuse deuterium and lithium. They ALSO fuse some protium, especially when they are young and hot from the initial contraction. And so do young red dwarfs.
Both young big brown dwarfs and young small red dwarfs are hot; they have some contribution to pressure from thermal pressure and some from degeneracy, and some rate of protium fusion. The difference is that, as they age, red dwarfs stabilize at some temperature and rate of protium fusion (which will actually grow in the long term as the protium fraction decreases), while brown dwarfs continue to cool and their protium fusion slows down - and the decreasing radius does NOT cause an increase of the interior temperature.
Ken G said:
I have no idea what you are saying here. Protium fusion is regular old p-p chain fusion, which is well known to be highly temperature sensitive (though less so than CNO-cycle, that much is true). The large heat capacity of the Sun only means that we can assume the energy in the radiation field is the slave to the heat content, as was done when I used the characteristic T of the ideal gas to get the T of the radiation field. I'm not sure what you are objecting to, the derivation is quite transparent.
A small deviation in the Sun's interior temperature has a tiny effect on the actual fusion heat generation, so that effect is completely swamped by the rapid adiabatic response to the deviation from hydrostatic balance.
After hydrostatic balance is restored, what is the size and direction of the remaining thermal imbalance?
 
  • #64
snorkack said:
No. Runaway heating or cooling is too slow to take place in localized spots within the star - the heat is redistributed faster within the star through adiabatic motion, convection and conduction, so runaway cooling or heating happens to the star as a whole.
Then when the heating runs away for the whole star, what stabilizes it, and how does it ever go unstable again? This model just sounds like the helium flash of a normal star, which stabilizes when it knocks the core completely out of the unstable state. That occurs when the gas is highly degenerate; perhaps there's some different physics when the degeneracy is only partial. In any event, the derivation I gave is for ideal gases with minimal radiation pressure, like the main sequence below about 50 solar masses but with enough mass not to have become degenerate by the time fusion begins (objects that do become degenerate first are generally not called main-sequence stars).
They fuse deuterium and lithium. They ALSO fuse some protium, especially when they are young and hot from the initial contraction. And so do young red dwarfs.
Sure, and if they are radiative ideal gases, my derivation applies to them. The nature of the fusion is irrelevant, as long as it is stabilized in the usual way that fusion is stable in a large ideal gas. The other branch you are describing just sounds like it's not ideal gas physics, so it says nothing about my derivation.
Both young big brown dwarfs and young small red dwarfs are hot; they have some contribution to pressure from thermal pressure and some from degeneracy, and some rate of protium fusion. The difference is that, as they age, red dwarfs stabilize at some temperature and rate of protium fusion (which will actually grow in the long term as the protium fraction decreases), while brown dwarfs continue to cool and their protium fusion slows down - and the decreasing radius does NOT cause an increase of the interior temperature.
I'm sure you'll find that's all due to the deviation from ideal gas physics. It could be included as some kind of addendum to the derivation of this thread, along the lines of how things are different if the temperature does not come directly from the average kinetic energy per particle as it does in an ideal gas.
A small deviation in the Sun's interior temperature has a tiny effect on the actual fusion heat generation, so that effect is completely swamped by the rapid adiabatic response to the deviation from hydrostatic balance.
The adiabatic response is due to the heat generation! But yes, the net result is the stabilization of the fusion, so it can do what I have been saying it does: replace the lost heat, period.
After hydrostatic balance is restored, what is the size and direction of the remaining thermal imbalance?
When the physics is ideal gas physics, as in the Sun, there is no "remaining thermal imbalance"; the adiabatic response stabilizes the thermal state. It makes the fusion do nothing but replace the heat lost due to the luminosity of the star, as derived above.
 
  • #65
Ken G said:
Then when the heating runs away for the whole star, what stabilizes it,
Increasing contribution of thermal pressure.
Ken G said:
I'm sure you'll find that's all due to the deviation from ideal gas physics. It could be included as some kind of addendum to the derivation of this thread, along the lines of how things are different if the temperature does not come directly from the average kinetic energy per particle as it does in an ideal gas.
Yes, it is the contribution of degeneracy pressure.
Now imagine a shrinking ball of gas, and make the assumptions that the shape of its radial distribution of temperature and density remains unchanged, that it obeys the ideal gas law, and also that its heat capacity is constant (this last is the least likely). The factors below are checked numerically in the sketch that follows.
If the radius shrinks by a factor of two,
then the density increases 8 times,
the surface gravity increases 4 times,
the pressure of a column of fixed depth thus increases 32 times,
and since the column of gas from surface to centre gets 2 times shorter, the central pressure grows 16 times;
but since the central density grew only 8 times, the central temperature must have doubled.
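A minimal numeric restatement of those factors (assuming the homologous contraction described above; dimensionless Python):

[CODE lang="python"]
# Scaling factors for a homologously contracting ideal-gas ball whose radius halves.
shrink = 2                                        # radius decreases by this factor
density = shrink**3                               # rho ~ M / R^3            -> 8x
gravity = shrink**2                               # g ~ G M / R^2            -> 4x
column_fixed_depth = density * gravity            # P ~ rho * g * h, fixed h -> 32x
central_pressure = column_fixed_depth / shrink    # column to centre is 2x shorter -> 16x
central_temperature = central_pressure / density  # ideal gas: T ~ P / rho   -> 2x
print(density, gravity, central_pressure, central_temperature)   # 8 4 16 2.0
[/CODE]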

Now, think what degeneracy pressure does.
If you heat water at 1 atmosphere from 273 K to 277 K, it does NOT expand by 1.5% like an ideal gas would - it actually shrinks.
When you heat water from 277 K to 373 K, it does expand - but not by 35% like an ideal gas, only about 1.5%.
Then, when you heat water from 373.14 K to 373.16 K, it expands over 1000 times!

If you heat water at higher pressures, you will find:
that it is slightly denser, because very slightly compressed, at any equal temperature below boiling point
that the boiling point rises with pressure
that water expands on heating near the boiling point at all pressures over about 0.01 atm
that the density of water at boiling point decreases with higher temperature and pressure
that steam, like ideal gas, expands on heating at each single pressure
that steam, like ideal gas, is compressed by pressure at each single temperature
that the density of steam at boiling point increases with pressure and temperature
that the contrast between boiling water and boiling steam densities decreases with temperature and pressure.

At a pressure of about 220 atmospheres, the contrast disappears.
Now, if you heat water at slightly over 220 bar, the thermal expansion still starts out very slight at low temperatures but increases and is, though continuous, very rapid around the critical point (a bit over 374 °C).

But when you increase the pressure further, you would find that the increase of water's thermal expansion, from the minimal liquid-like expansion at low temperature to the ideal-gas expansion proportional to temperature, takes place at increasing temperatures and also becomes monotonic, no longer having a maximum near the critical point.

And the interiors of planets and stars typically have pressures much higher than the critical pressure. The transition between liquid-like behaviour (little thermal expansion, mainly degeneracy pressure) at low temperature and ideal-gas-like behaviour (volume or pressure proportional to temperature, mainly thermal particle pressure) would be continuous and monotonic.
 
  • #66
snorkack said:
Yes, it is the contribution of degeneracy pressure.
OK, so that's a different situation. It's quite interesting physics, but not relevant to the luminosity of main-sequence stars.
And the interiors of planets and stars typically have pressures much higher than the critical pressure.
Well, that depends on what one means by "typical!" Certainly there are lots of brown dwarf stars out there, probably the most common type of star, but that's not what you see when you look up at the night sky. So stars like you describe are normally viewed as oddballs, ironically! The "typical star", to most astronomers, is a main-sequence star, and those are ruled by ideal gas pressure, and do not show liquid-like phase changes or degeneracy, until much later in life.
The transition between liquid-like behaviour (little thermal expansion, mainly degeneracy pressure) at low temperature and ideal-gas-like behaviour (volume or pressure proportional to temperature, mainly thermal particle pressure) would be continuous and monotonic.
Sure, but the same could be said about general relativistic corrections as you go from a main-sequence star to a neutron star. You are still not using GR in most stellar models, because the corrections would be unimportant.
 
  • #67
Ken G said:
It's quite interesting physics, but not relevant to the luminosity of main-sequence stars.
Quite relevant.
Ken G said:
Well, that depends on what one means by "typical!" Certainly there are lots of brown dwarf stars out there, probably the most common type of star, but that's not what you see when you look up at the night sky. So stars like you describe are normally viewed as oddballs, ironically! The "typical star", to most astronomers, is a main-sequence star, and those are ruled by ideal gas pressure, and do not show liquid-like phase changes or degeneracy, until much later in life.
They do.
Now, excluding general relativistic effects and also heat production, and assuming only one radial distribution of temperature and density for each radius:

when a shrinking ball of gas is large and tenuous, its pressure is dominated by thermal pressure and therefore its internal temperature is proportional to the inverse of its radius, as demonstrated before;
whereas when the ball is dense and cool, its pressure is dominated by degeneracy pressure and therefore it has minimal thermal expansion - its radius is near a finite minimum and increases very slightly with temperature.
This is a continuous transition. The temperature of a shrinking ball of gas goes through a smooth maximum: first the temperature increases with the inverse of the radius; then the temperature increase slows below that rate; the temperature reaches a certain maximum; then the temperature falls while still being high, accompanied by significant further shrinking; finally the temperature falls to low levels with very little further shrinking near the minimum size.

If there is no heat production then this is what happens to the shrinking ball of gas. The speed of evolution varies with heat loss rate, which gets slow at the low temperatures, so the ball would spend most of its evolution with temperature slowly falling towards zero and radius slowly shrinking towards nonzero minimum value. But the maximum of internal temperature would happen just the same.
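A toy illustration of that temperature maximum (a sketch only, in arbitrary units; the functional form T ~ a·M/R − b·M^(2/3)/R² is a standard order-of-magnitude estimate for ideal-gas plus electron-degeneracy pressure, and a = b = 1 is assumed here purely for illustration):

[CODE lang="python"]
# Central temperature of a contracting ball with ideal-gas plus degeneracy pressure
# (toy model, arbitrary units): T rises as ~M/R, peaks, then falls as degeneracy wins.
import numpy as np

def T_central(R, M, a=1.0, b=1.0):
    return a * M / R - b * M**(2.0 / 3.0) / R**2

M = 1.0
radii = np.geomspace(10.0, 0.5, 400)     # the ball contracts from R = 10 down to R = 0.5
T = T_central(radii, M)
i = np.argmax(T)
print(f"T peaks at R ~ {radii[i]:.2f}, T_max ~ {T[i]:.3f}")
# Analytic check of this toy form: R_peak = 2*b/(a*M**(1/3)), T_max = a**2 * M**(4/3)/(4*b),
# so T_max grows as M**(4/3): a low enough mass never gets hot enough for sustained fusion.
print("analytic:", 2.0 / M**(1.0 / 3.0), M**(4.0 / 3.0) / 4.0)
[/CODE]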

Now what happens if there IS heat production through fusion?
Thermonuclear fusion is strongly dependent on temperature - but the dependence is still continuous. So the heat production rate goes through a continuous maximum roughly where temperature goes through its continuous maximum.
The rate of heat loss via radiation and convection is also dependent on temperature. But it also depends on the temperature gradient and area for the same temperature but different radius, opacity, thermal expansivity, viscosity... all of which change with density around the continuous maximum of temperature.

Therefore, the ratio of heat production rate through fusion to heat loss rate goes through a continuous maximum which is generally somewhere else than the continuous maximum of temperature (in which direction?), but since the heat production rate through fusion is strongly dependent on temperature, the maximum of heat production/heat loss rate is somewhere quite near the maximum of temperature.

Now, if a shrinking ball of gas near the temperature maximum (at which point it is a significantly degenerate, non-ideal gas - otherwise it would be nowhere near the maximum!) reaches a maximum heat-production/heat-loss ratio that is close to one but does not reach it, then it never reaches thermal equilibrium: the brown dwarf goes on to cool, whereupon the heat generation decreases. Note that there WAS a significant amount of fusion - since the heat generation rate through fusion did approach the heat loss rate near the maximum temperature, it significantly slowed the shrinking in that period. So fusion was significant but not sustained.

If, however, the maximum of the heat-production/heat-loss ratio is slightly over one, then that maximum is never reached. The star will stop shrinking when the heat-production/heat-loss ratio equals one, so it will reach neither the would-be maximum temperature nor the maximum (over one) ratio of heat production to heat loss.

But as shown above, it has a very significant contribution of degeneracy pressure (otherwise it would have been nowhere near the maximum temperature, and the maximum heat production/heat loss ratio would have been far over one, not slightly over one).

And such a stable star IS, by definition, a main sequence star. Most main sequence stars are red dwarfs... and have a significant contribution of degeneracy pressure/nonideal behaviour.
 
  • #68
@snorkack, your analysis essentially begins from the perspective of a star that does not have enough mass to ever reach the main sequence, and then you gradually increase the mass and ask what happens when you get to stars that barely reach the main sequence. These types of stars tend to have two physical effects that are not in my derivation: degeneracy and convection. So your point is well taken that this is a kind of "forgotten population", because no one ever sees any of these stars when they look up in the night sky, yet they are extremely numerous and no doubt play some important role in the grand scheme that those who research them must keep reminding others of. That must be a frustrating position, so when you see people refer to "main sequence stars" in a way that omits this population, you want to comment. I get that, point taken-- but I am still not talking about that type of star, whether we want to call them "main sequence stars" or not. (Personally, I would tend to define a main-sequence star as one that has a protium fusion rate that is comparable to the stellar luminosity, so if it has more deuterium fusion, or if it is mostly just radiating its gravitational energy, then it is not a main-sequence star. The question is then, just how important is degeneracy when you get to the "bottom of the main sequence," and I don't know if it gets really important even in stars that conform to this definition, or if it only gets really important for stars that do not conform, but either way, it is clearly a transitional population, no matter how numerous, between the standard "main sequence" and the brown dwarfs.)

Anyway, you make interesting points about the different physics in stars that are kind of like main-sequence stars, but have important degeneracy effects, in that transitional population that does include a lot of stars by number. The standard simplifications are to either treat the fusion physics in an ideal gas (the standard main-sequence star), or the degeneracy physics in the absence of fusion (a white dwarf), but this leaves out the transitional population that you are discussing. Your remarks are an effort to fill in that missing territory, but are a bit of a sidelight to this thread.

Still, I take your point that if we hold to some formal meaning of a "main-sequence star", and we look at the number of these things, a lot of them are going to be red dwarfs, and the lower mass versions of those are in a transitional domain where degeneracy is becoming more important, and thermal non-equilibrium also raises its head. My purpose here is simply to understand the stars with higher masses than that, say primarily in the realm from 0.5 to 50 solar masses, which are typically ideal gases with a lot of energy transport by radiative diffusion. The interesting conclusions I reach are that not only is the surface temperature of no particular interest in deriving these mass-luminosity relationships, neither is the presence or absence of fusion, in stark refutation of all the places that say you need to understand the fusion rate if you want to derive the luminosity.
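For readers who want to see that R-independence explicitly, here is a minimal symbolic sketch of the standard order-of-magnitude argument (assuming the virial temperature T ~ GMμm_H/(kR), the mean density, and a radiative-diffusion luminosity L ~ 4πR²·(c/(3κρR))·aT⁴; the numerical prefactors are crude, and only the scaling is the point):

[CODE lang="python"]
import sympy as sp

G, M, R, mu, m_H, k, a, c, kappa = sp.symbols('G M R mu m_H k a c kappa', positive=True)

T = G * M * mu * m_H / (k * R)        # virial/ideal-gas interior temperature estimate
rho = 3 * M / (4 * sp.pi * R**3)      # mean density
L = 4 * sp.pi * R**2 * c / (3 * kappa * rho * R) * a * T**4   # radiative-diffusion luminosity

print(sp.simplify(L))
# R cancels: L ~ (G*mu*m_H/k)**4 * a*c/kappa * M**3, i.e. luminosity scales as M^3 / kappa,
# with no reference to R or to the fusion rate, as argued in this thread.
[/CODE]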
 
  • #69
Ken G said:
These types of stars tend to have two physical effects that are not in my derivation: degeneracy and convection.
Yes.
Ken G said:
That must be a frustrating position, so when you see people refer to "main sequence stars" in a way that omits this population, you want to comment. I get that, point taken-- but I am still not talking about that type of star, whether we want to call them "main sequence stars" or not. (Personally, I would tend to define a main-sequence star as one that has a protium fusion rate that is comparable to the stellar luminosity, so if it has more deuterium fusion, or if it is mostly just radiating its gravitational energy, then it is not a main-sequence star. The question is then, just how important is degeneracy when you get to the "bottom of the main sequence," and I don't know if it gets really important even in stars that conform to this definition, or if it only gets really important for stars that do not conform,
Pretty obviously it does. See my derivation of the definition in the previous post.
But trying to restate it:
Any ideal gas sphere with no inner heat source, no matter how small its mass, would keep contracting on the Kelvin-Helmholtz timescale to arbitrarily small size and arbitrarily high internal temperature.
This contraction can be stopped by one of two effects:
1) the gas becomes significantly nonideal, and the gas sphere cools down and slowly finishes contracting to a nonzero final size,
or 2) fusion provides an internal heat source sufficient to stop the contraction.
A gas ball which is still contracting and heating up is not yet on the main sequence, whether or not it eventually reaches the main sequence.
Now a low mass gas ball stops heating because it passes through the maximum pressure as of 1)
A massive gas ball would reach a much higher maximum temperature but, because of fusion, it never reaches that point. Instead, it acquires an internal heat source that balances the heat loss while the temperature is far below the maximum and the gas behaviour is still close to ideal.
So what happens to an intermediate mass gas ball? Well, 1) takes place continuously, so the gas behaviour is significantly nonideal while the temperature is still rising towards the maximum but the rise is slowing because of nonideal behaviour.
But since 2) can happen at any point where temperature is rising, it can happen on the region where the temperature is approaching maximum.
Note that these stars are on the main sequence side of the end of main sequence. Main sequence ends exactly because the gas behaviour near the end, on the inner side, is significantly nonideal.
Ken G said:
Still, I take your point that if we hold to some formal meaning of a "main-sequence star", and we look at the number of these things, a lot of them are going to be red dwarfs, and the lower mass versions of those are in a transitional domain where degeneracy is becoming more important, and thermal non-equilibrium also raises its head. My purpose here is simply to understand the stars with higher masses than that, say primarily in the realm from 0.5 to 50 solar masses, which are typically ideal gases with a lot of energy transport by radiative diffusion.

But besides the degeneracy, another important effect is convection.
The whole assumption of radiative heat conduction is that the heat transport is proportional to temperature gradient, so the temperature gradient changes with heat flow.

Not the case with convection! The heat transport is negligibly small below a certain gradient (just the conductive heat flow), then arbitrarily large at a fixed (adiabatic) temperature gradient. Convection is also a thermostat, but one that fixes the temperature gradient.
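A schematic way to state that difference (a minimal sketch with made-up dimensionless numbers, not a stellar model):

[CODE lang="python"]
# Radiative transport: the gradient adjusts to carry whatever flux is required.
# Convection: once the radiative gradient would exceed the adiabatic one, the actual
# gradient stays pinned near the adiabatic value and convection carries the excess flux.
def temperature_gradient(required_flux, k_rad=1.0, grad_adiabatic=0.4):
    grad_rad = required_flux / k_rad          # gradient radiation alone would need
    if grad_rad <= grad_adiabatic:
        return grad_rad, "radiative"          # gradient tracks the flux
    return grad_adiabatic, "convective"       # gradient ~ fixed, flux can be anything

for flux in (0.1, 0.3, 0.5, 2.0):
    print(flux, temperature_gradient(flux))
[/CODE]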

And convection is significant far from the lower-mass end of the main sequence! The Sun is convective over the outer 30% of its radius.
With this kind of significance, does a derivation requiring the conduction distance to be equal to star radius hold water?
 
  • #70
snorkack said:
Pretty obviously it does.
No it's not obvious at all, nor does your argument answer the question. You would need actual numbers to answer it-- you would need the protium burning rate, the deuterium burning rate, and the luminosity. If the first and last are close, it's a main-sequence star. If the second and last are close, it's a brown dwarf. If the last is unbalanced, it is a protostar. And if it is a protostar, my derivation still applies, unless either convection or degeneracy dominate the internal structure. The rest of what you said I already know.
Main sequence ends exactly because the gas behaviour near the end, on the inner side, is significantly nonideal.
A point I have been making all along-- non-ideal behavior bounds the "bottom of the main sequence," so once degeneracy dominates, we don't call it a main-sequence star any more. There is of course a transition zone which is a "gray area" to the nomenclature-- my derivation begins to break down in that gray area. All the same, everything I said above is correct, and if you want to add some additional physics at the degenerate end of the main sequence, fine, but it is something of a distraction from what this thread is actually about.
With this kind of significance, does a derivation requiring the conduction distance to be equal to star radius hold water?
Read the title of the thread.
 
  • #71
Ken G said:
It is extremely surprising that it [the luminosity] depends only on the mass, in the sense that it is surprising it does not depend on either R or the fusion physics.

OK, assume you have a randomly varying radiation source in the center. What would the luminosity look like then?
 
  • #72
snorkack said:
Any ideal gas sphere with no inner heat source, no matter how small its mass, would keep contracting on the Kelvin-Helmholtz timescale to arbitrarily small size and arbitrarily high internal temperature.

Only if it is losing energy, i.e. if it is luminous (and then, strictly speaking, it is not an ideal gas anymore).
 
  • #73
Fantasist said:
OK, assume you have a randomly varying radiation source in the center. What would the luminosity look like then?
If you had different physics than an actual star, you could get a different luminosity than actual stars have. But the way fusion really works is, it self-regulates to replace whatever heat is lost by the mechanism I describe. This is why fusion is stable-- if it didn't do this, our Sun would be a very large H-bomb.
 
  • #74
Ken G said:
If you had different physics than an actual star, you could get a different luminosity than actual stars have. But the way fusion really works is, it self-regulates to replace whatever heat is lost by the mechanism I describe. This is why fusion is stable-- if it didn't do this, our Sun would be a very large H-bomb.

Your comparison of fusion with a thermostat appears to be paradoxical to me: a thermostat decreases the energy production when the temperature increases, but fusion, on the contrary, increases it, so it is potentially destabilizing. The star is only stabilized by the fact that it expands when it is heated, and in the process cools again due to the work done against its own gravitational field.

Irrespective of the stability issue, the bottom line is that the only thing lost from the star is radiation, and that radiation has been produced by some kind of radiative process in the first place, whatever the structure and physics of the star may be (and whatever mass-luminosity relationship you may derive from this).
 
  • #75
Fantasist said:
Your comparison of fusion with a thermostat appears to be paradoxical to me: a thermostat decreases the energy production when the temperature increases, but fusion, on the contrary, increases it, so it is potentially destabilizing. The star is only stabilized by the fact that it expands when it is heated, and in the process cools again due to the work done against its own gravitational field.
Yes, but you have to include the entire situation. Fusion, in an environment that expands when it gets hot, acts like a stable thermostat. That's all that has to be true for the situation I described to occur.
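A toy one-zone version of that thermostat (a sketch in arbitrary units; the virial-theorem sign convention and an assumed L_fusion ∝ T⁴ temperature sensitivity are the only physics in it, and the numbers are illustrative, not solar values):

[CODE lang="python"]
# Gravothermal thermostat in one zone: for an ideal-gas star in virial equilibrium the
# total energy is E = -U_thermal, so a net energy LOSS raises T and excess fusion lowers it.
L_loss = 1.0                      # luminosity leaking out, set by the structure

def L_fusion(T):
    return T**4                   # crude temperature sensitivity of p-p fusion

T, dt = 0.8, 0.01                 # start below the equilibrium temperature
for _ in range(2000):
    T += dt * (L_loss - L_fusion(T))    # virial sign: losses heat, excess fusion cools
print(f"settled T ~ {T:.4f}, fusion/loss ~ {L_fusion(T)/L_loss:.4f}")
# T relaxes to the point where fusion exactly replaces the lost heat: a stable thermostat.
[/CODE]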
Irrespective of the stability issue, the bottom line is that the only thing lost from the star is radiation, and that radiation has been produced by some kind of radiative process in the first place, whatever the structure and physics of the star may be (and whatever mass-luminosity relationship you may derive from this).
Yes, radiation is created by processes that create radiation, that is true. But we know that, what I'm saying is something very few people realize: the physics of fusion has little effect on the luminosity of a star that transports energy radiatively and obeys ideal-gas physics. Hopefully, more people know this now.
 
  • #76
Fantasist said:
Your comparison of fusion with a thermostat appears to be paradoxical to me: a thermostat decreases the energy production when the temperature increases, but fusion, on the contrary, increases it, so it is potentially destabilizing. The star is only stabilized by the fact that it expands when it is heated, and in the process cools again due to the work done against its own gravitational field.

It's always best not to take analogies too far. Both a thermostat and the physics of a star result in the same effect: the regulation of temperature.
 
