
Stars in the early universe and stellar processes

  1. Dec 21, 2015 #1
    Hey PF,

    Since there are stars that can be powered predominantly (>50%) by the CNO cycle, which requires carbon as a catalyst, and I understand the core temperatures of these stars are about 10^6 K: does this mean that stars where the triple-alpha process is dominant (~10^8 K) had to exist and die previously for there to be enough carbon available for a star to be powered dominantly by the CNO cycle?



    I'd say the bigger question attached to this is: "Is there a limit on how early in the universe it is possible for a star powered more than 50% by the CNO cycle to exist?"

    And maybe to go a step further:
    "Alternatively, does this mean that regardless of the overdense regions in primordial times, where it would be more likely for a hot star to form (correct me if I've misunderstood the consequences of over- and underdense regions), there still couldn't be dominantly CNO-powered stars (with cores around 10^6 K) until enough hotter stars existed to generate the carbon the cooler stars need?"

    I feel like I'm missing something blatantly obvious here. Possibly I'm ignorant of some early-universe carbon nucleosynthesis event, or I'm not appreciating how much carbon is generated by supernova nucleosynthesis and rare fusion events in some stars.
     
  3. Dec 21, 2015 #2

    mfb

    2016 Award

    Staff: Mentor

    Sure.
    The first generation of stars didn't have elements heavier than helium (at least not in relevant quantities). They also didn't have effective cooling mechanisms, so they were quite massive and short-lived; their ash then added C(N)O to the universe.
     
  4. Dec 22, 2015 #3
    Only sufficiently massive stars.
    Massive stars with low metallicity are hotter than higher-metallicity stars of similar mass, because the pp chain is slower and a higher temperature is needed before the pp luminosity can halt the contraction of the star.
    Sufficiently hot and metal-poor stars can, while still fusing protium by the pp chain, heat up to the point where the triple-alpha process also operates and produces the carbon needed to shift the hydrogen fusion from the pp chain to the CNO cycle.
    Of course, this process is self-limiting at a certain low but nonzero metallicity. And for sufficiently massive, metal-poor stars it can already happen during the pre-main-sequence evolution, long before the protium is exhausted.
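    As a rough way to see how little CNO material is needed once a core is hot, here is a back-of-the-envelope Python sketch. It rests on illustrative assumptions rather than precise reaction rates: near-solar power laws epsilon_pp ~ T^4 and epsilon_CNO ~ X_CNO * T^17, and a ~1% CNO share of the Sun's power at its core temperature; the power laws are stretched well past their range of validity at the hottest temperature shown.

# Rough estimate: what CNO mass fraction X_CNO would make epsilon_CNO
# equal epsilon_pp at a given core temperature?
# Assumptions (illustrative only): epsilon_pp scales as T^4 and
# epsilon_CNO as X_CNO * T^17 near solar core conditions, and in the Sun
# (T ~ 15.7e6 K, X_CNO ~ 0.01) the CNO cycle supplies ~1% of the power.

T6_SUN = 15.7             # solar core temperature in units of 10^6 K
XCNO_SUN = 0.01           # approximate solar CNO mass fraction
CNO_SHARE_SUN = 0.01      # CNO share of solar energy generation (~1%)
NU_PP, NU_CNO = 4.0, 17.0 # approximate local power-law exponents

def x_cno_for_equality(T6):
    """CNO mass fraction at which epsilon_CNO = epsilon_pp at temperature T6,
    scaled from the solar anchor point (crude extrapolation at high T6)."""
    boost = (T6 / T6_SUN) ** (NU_CNO - NU_PP)   # CNO gains roughly T^13 on pp
    return XCNO_SUN / (CNO_SHARE_SUN * boost)

# At solar core temperature no realistic X_CNO makes CNO dominate;
# at the much hotter metal-poor cores discussed above, a tiny trace is enough.
for T6 in (20.0, 30.0, 100.0):
    print(f"T = {T6:5.1f}e6 K  ->  X_CNO needed ~ {x_cno_for_equality(T6):.1e}")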
     
  5. Dec 22, 2015 #4
    Brilliant, I think both answers here cover what I wanted to know. I'll look more into the details of stellar evolution. Thanks a million!
     
  6. Dec 22, 2015 #5

    phyzguy

    Science Advisor

    As snorkack said, stars don't have to live and die to produce the CNO needed to start the CNO cycle. A massive star that starts out composed only of H and He will build up its own CNO and switch over to the CNO cycle fairly quickly. I ran some simulations a while back with the MESA stellar evolution code and found that it only takes about 10,000 years for a star to build up enough CNO to power the CNO cycle.
     
  7. Dec 22, 2015 #6

    Ken G

    Gold Member

    And I'll make a prediction for you, if you still have access to those calculations: as the C builds up and CNO-cycle fusion of H kicks in, the nuclear cross sections, as a function of temperature, obviously increase markedly for the more massive stars. But the luminosity of the star will not increase much at all.
     
  8. Dec 23, 2015 #7
    To go on with the predictions:
    Core temperature should fall;
    The concentration of luminosity near the centre, with highest temperature and metallicity, should promote convection in the core of the star.
     
  9. Dec 23, 2015 #8
    That sounds quite interesting. I'll assume that the carbon-generating events are rare, and that this is made negligible by the sheer mass of the star and thus the temperature of the core? Or is it more complicated than that?

    Also, I would like to refine an earlier question. If we are talking about the first stars, does the mass of a star formed from a gas cloud depend merely on the density fluctuations present in that region after the radiation era, so that a highly overdense region would cause more matter from the relatively homogeneous gas cloud to collapse to a given point?

    Going a little further, does this make it possible for black holes to form after 100 million years, simply due to such regions of extreme overdensity creating very short-lived, massive stars in the early universe?
     
  10. Dec 23, 2015 #9

    Ken G

    Gold Member

    Certainly, because the star will self-regulate its core temperature to avoid producing light faster than it can diffuse out, and that diffusion is the dominant physics that sets the luminosity. There will be some small feedback into the luminosity when the structure adjusts, but the radiative diffusion rate is surprisingly insensitive to the temperature, so the luminosity should not change the way you might expect given how steeply the fusion cross-section function is rising. I hope this experiment gets done, I'd like to know exactly how it comes out-- but it seems clear that the luminosity will not increase as much as the fusion cross-section function, and in fact it might even drop. But if you really want the luminosity to rise, reduce the radiative opacity! Now you will see a whopping effect, way more important than anything happening to fusion cross sections, because the dominant physics that sets the luminosity in reasonably massive main-sequence stars is simply the rate of escape of light.
    Yet convection in the core rarely sets the timescale for heat escape, i.e., the luminosity, until you get to extremely massive stars that are almost fully convective. In fact, I've always wondered about what is happening in the super-massive stars, because they are both highly convective and very far from the Hayashi track, so there is something that requires explaining there. But if we steer clear of the very highest mass stars (where radiative diffusion may give way to convection) and the very lowest mass stars (which go degenerate before reaching the main sequence), then we can use the simple luminosity arguments I'm giving, regardless of the fusion environment.
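    A compact way to state the "luminosity is set by how fast light leaks out" argument is the standard homology estimate for a radiative, ideal-gas star with roughly constant opacity (a textbook order-of-magnitude sketch, not something computed in this thread):

\[
  L \;\sim\; \frac{a c\, T^{4} R}{\kappa \rho}
    \;\sim\; \frac{a c\, T^{4} R^{4}}{\kappa M}
    \;\sim\; \frac{a c}{\kappa}\left(\frac{G \mu m_{\mathrm{H}}}{k_{\mathrm{B}}}\right)^{4} M^{3},
  \qquad
  \rho \sim \frac{M}{R^{3}},
  \quad
  k_{\mathrm{B}} T \sim \frac{G \mu m_{\mathrm{H}} M}{R}.
\]

    No nuclear quantity appears in the result; in this picture a more efficient fusion channel mainly lets the core sit at a lower temperature while supplying the same leak rate.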
     
  11. Dec 23, 2015 #10

    Ken G

    Gold Member

    You are actually asking a research-level question here. It is still debated whether the seed black holes for what end up being the supermassive black holes in quasars are "quasistars" (very massive stars that just kind of fall into their own central black hole), or more normal supernovae that leave black-hole remnants which grow via mergers. What we desperately need are observations of the first stars! It was originally thought that "quasistars" would essentially eat too much of their own luminosity to be visible, but a recent paper (http://arxiv.org/abs/1509.07511) suggests otherwise.
     
  12. Dec 23, 2015 #11
    Then the next prediction:
    Low-mass, low-metallicity stars should have a higher luminosity on the Henyey track than equally massive stars of higher metallicity.
     
  13. Dec 23, 2015 #12

    Ken G

    Gold Member

    Yes, as long as the mass does not go so low that the stars are going degenerate-- that messes up the connection between pressure and temperature that is assumed in Eddington-type models. If that happens, I'd have to do a different estimate to see what would happen. But if we steer clear of red dwarfs, then yes, low metallicity should certainly lead to higher luminosity at the same mass. So now all that remains is: does someone want to run MESA and check the predictions, or can phyzguy just look up the results already existing in those old calculations?
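    One crude way to see the metallicity dependence behind that prediction is the same homology estimate with a Kramers-type (bound-free) opacity, which scales roughly linearly with Z (again an order-of-magnitude assumption, not a result from this thread):

\[
  \kappa \;\sim\; \kappa_{0}\, Z\, \rho\, T^{-7/2}
  \quad\Longrightarrow\quad
  L \;\sim\; \frac{a c\, T^{4} R^{4}}{\kappa M}
    \;\propto\; \frac{\mu^{7.5}\, M^{5.5}}{Z\, R^{1/2}},
\]

    so at fixed mass and radius, lower Z means a proportionally higher radiative luminosity, until the Z-independent electron-scattering opacity takes over as the dominant opacity source.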
     
  14. Dec 25, 2015 #13
    Why is the Hayashi track different from the Henyey track - because of degeneracy, or just because of convection?
     
  15. Dec 25, 2015 #14

    Ken G

    Gold Member

    I think it's all pretty much about the convection. The Hayashi track is not a law that convective stars must follow, it is a limit that says there is simply no equilibrium solution for a star to the right of that track. The physical reason is essentially that if a star were to the right of it, it would go convective, which would bring more heat to the surface and raise the surface T. So when stars try to get to the right of that limit, they go fully convective instead.

    This effect does depend on the opacity at the surface of the star, and first generation stars have surfaces without metals, even if their cores are doing some triple-alpha fusion, so they might not hit the Hayashi limit at all. But, it is often thought that such stars would be of high mass, and high-mass stars don't have much of a Hayashi phase anyway, they become radiative very quickly, and they also undergo fusion very quickly, so it might not matter much if they don't have a Hayashi limit.

    Anyway, the important thing about the Hayashi limit is that it is one of the rare situations where stars essentially work "outside-in", instead of "inside-out." By that I mean, we usually imagine the star is its interior, and the surface just has to kind of "play ball" with whatever is going on inside. The classic example of this is the Henyey track and the main sequence, where the luminosity is primarily set by the rate that light leaks out, and the radius is set by the history of contraction, so the surface has to come to whatever temperature will allow that luminosity to be ejected into space.

    But not on the Hayashi track, because there, if the surface T gets too low, the T gradient gets too steep, and the star goes convective. So what this means is that the surface T cannot go below about 3000 K, so it hits that limit and is typically in that vicinity (it depends a bit on mass). So if we have the radius of the star from its contraction history, knowing its surface T then gives its luminosity, in an outside-in fashion. Or, if it is a red giant, then we don't know the radius; we know the luminosity (it comes from radiative escape and shell fusion working together self-consistently) and the surface T, so that sets the radius-- that's why they puff out into giants.

    So to answer your question, I believe what matters is the convection. The limit is derived from surface physics, and even red dwarfs shouldn't be degenerate at their surfaces.
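    The "outside-in" bookkeeping here is just the photospheric relation L = 4*pi*R^2*sigma*T_eff^4 read in two directions; a minimal numerical sketch (the 3000 K Hayashi value and the sample radius and luminosity are illustrative assumptions):

import math

SIGMA_SB = 5.670374e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN = 3.828e26         # solar luminosity, W
R_SUN = 6.957e8          # solar radius, m

def luminosity_from_radius(radius_m, t_eff_k):
    """Pre-main-sequence case: radius known from the contraction history,
    surface T pinned near the Hayashi value -> the luminosity follows."""
    return 4.0 * math.pi * radius_m**2 * SIGMA_SB * t_eff_k**4

def radius_from_luminosity(luminosity_w, t_eff_k):
    """Red-giant case: L set by the interior and shell fusion, surface T
    pinned near the Hayashi value -> the radius has to puff out to match."""
    return math.sqrt(luminosity_w / (4.0 * math.pi * SIGMA_SB * t_eff_k**4))

# Illustrative numbers only:
print(luminosity_from_radius(3.0 * R_SUN, 3000.0) / L_SUN)     # ~0.7 L_sun
print(radius_from_luminosity(1000.0 * L_SUN, 3000.0) / R_SUN)  # ~120 R_sun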
     
  16. Dec 25, 2015 #15
    Yes, but all stars come from the right of the Hayashi track!
    If gas clouds are 2.7 K now, and maybe 30 K in the early universe, what does a star look like while it is evolving from a surface temperature of 300 K to a surface temperature of 3000 K?
     
  17. Dec 25, 2015 #16

    phyzguy

    Science Advisor

    Figure 6 of this paper shows a simulation of a collapsing protostar as it evolves from a few K up to a central temperature of ~100,000 K. It gets steadily hotter and denser as it collapses.
     
  18. Dec 25, 2015 #17
    Yes. That tracks the central, not "surface" temperature.
    The compressibility of hydrogen gets very big around 2000 K, because of dihydrogen dissociation.
    Does the location of the star's surface, and the density above the surface, depend significantly on metallicity?
     
  19. Dec 25, 2015 #18

    Ken G

    Gold Member

    True, but one would not call that a star, and the reason is that it comes from a different equilibrium solution, on the "other side" of the Jeans instability, if you will. There are two important physical differences: 1) the energy transport timescales are faster than the dynamical time, so the gas is treated as nearly isothermal instead of having a convective or radiative temperature gradient; 2) the force balance is one in which the internal gravity of the "object" is largely insignificant. That situation breaks completely from the kinds of simple assumptions that go into the Hayashi track.
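    The tipping point being described is the Jeans instability; here is a minimal sketch using the usual Jeans-mass and free-fall-time expressions (the cloud temperature, density, and mean molecular weight below are illustrative assumptions, not numbers from this thread):

import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
K_B = 1.381e-23      # Boltzmann constant, J/K
M_H = 1.673e-27      # hydrogen atom mass, kg
M_SUN = 1.989e30     # solar mass, kg
YEAR = 3.156e7       # seconds per year

def jeans_mass(temp_k, density_kg_m3, mu=2.3):
    """Approximate Jeans mass for an isothermal cloud of mean molecular
    weight mu (~2.3 for molecular hydrogen plus helium)."""
    cs2 = K_B * temp_k / (mu * M_H)   # isothermal sound speed squared
    return (5.0 * cs2 / G) ** 1.5 * math.sqrt(3.0 / (4.0 * math.pi * density_kg_m3))

def free_fall_time(density_kg_m3):
    """Characteristic gravitational collapse (free-fall) time."""
    return math.sqrt(3.0 * math.pi / (32.0 * G * density_kg_m3))

rho = 1e-16   # kg/m^3, an illustrative dense-core density
for T in (10.0, 30.0):
    print(f"T = {T:4.1f} K: M_J ~ {jeans_mass(T, rho)/M_SUN:.1f} M_sun, "
          f"t_ff ~ {free_fall_time(rho)/YEAR:.0f} yr")

    Warmer clouds push the Jeans mass up, which is part of why the first stars, forming from gas with poor cooling, are expected to be massive.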
     
  20. Dec 26, 2015 #19
    How so?
    High-metallicity gas, at least, becomes optically thick long before it reaches even a central temperature of 2000 K, let alone the Hayashi track.
     
  21. Dec 26, 2015 #20

    Ken G

    Gold Member

    I don't think that's true. If it's in hydrostatic equilibrium with an important self-gravity, the Hayashi limit should apply, and the star should be fairly hot. The usual picture is that stellar gas starts out so spread out that it is essentially isothermal and has no important gravity, so it's not a star. Then it hits the Jeans instability and goes out of equilibrium, until it finds a new hydrostatic solution where its own gravity is very important and the gas is way hotter. This is where the Hayashi limit becomes relevant, in my understanding of the situation. If there is a short phase where some of these concepts are ambiguous, so be it, but the Hayashi track is reached soon enough.
     