# Star formation

1. Oct 14, 2005

### vincentm

I'm reading up on star formation, and from what I've understood so far, protogalactic clouds with density fluctuations cool and then fragment, after which the fragments break up again into subfragments. Now, does the density in these individual subfragments increase the temperature enough to start nuclear fusion? And at what temperature can fusion start? I know that an increase in temperature alone isn't enough to start fusion. So what does happen to make the temperature increase, besides the density and the collapsing of the cloud?

2. Oct 14, 2005

### Labguy

A decent synopsis of the basics of protostar collapse can be found at http://www.astronomynotes.com/evolutn/s3.htm (and following pages). But it doesn't mention temperature, which is about 12 to 14 million K. Any protostar with less than ~0.079 solar masses won't have enough mass for hydrogen fusion to start, so we get a brown dwarf instead of a star. Either way, the high temperatures in a new stellar core are caused only by gravity compressing the protostar material at the center.

Last edited: Oct 14, 2005
3. Oct 14, 2005

### vincentm

Thanks labguy.

4. Oct 14, 2005

### hellfire

Fragmentation requires energy dissipation. This phase of the collapse of a cloud is called isothermal collapse. As soon as the cloud can no longer cool efficiently because it becomes opaque at high density, fragmentation stops (the Jeans mass no longer decreases) and the temperature rises. This phase is called adiabatic collapse, and it lasts until there is enough radiation pressure to halt the collapse.
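To flesh out the Jeans-mass remark (this is a standard result, not spelled out in the post above): for an ideal gas the Jeans mass scales as

$$M_J \propto \frac{T^{3/2}}{\rho^{1/2}}$$

During isothermal collapse, $$T$$ stays fixed while $$\rho$$ grows, so $$M_J$$ falls and ever-smaller subfragments become gravitationally unstable. Once the collapse turns adiabatic, $$T \propto \rho^{\gamma - 1} = \rho^{2/3}$$ for a monatomic gas, so $$M_J \propto \rho^{1/2}$$ grows again and fragmentation shuts off.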

5. Oct 15, 2005

### SpaceTiger

Staff Emeritus
You're right, there's a density dependence for nuclear burning as well. Does this mean that it's wrong for Labguy and others to give you a temperature range for nuclear burning?

Nope.

Well, not for astronomy purposes anyway. The basic reason that the process occurs within a small range of temperatures is that the temperature dependence is very steep. For the proton-proton chain, for example, it goes roughly as:

$$\epsilon \propto \rho T^4$$

while another hydrogen burning process, the CNO cycle, goes

$$\epsilon \propto \rho T^{17}$$

That means that you can vary the density of the stellar interior quite a bit, but the onset of nuclear burning will still occur at roughly the same temperature. As stars move on to burn heavier elements, the temperature dependences become even steeper.
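To put a rough number on "roughly the same temperature" (my own illustration, not from the thread): if we suppose burning switches on when the energy generation rate $$\epsilon = \rho T^n$$ reaches some fixed threshold, then the onset temperature scales as $$T \propto \rho^{-1/n}$$, and the steeper the exponent, the less the onset temperature shifts when the density changes.

```python
# Sketch under an assumed fixed ignition threshold: eps = rho * T**n,
# so the onset temperature scales as T ∝ rho**(-1/n).

def onset_temperature_shift(density_factor, n):
    """Factor by which the onset temperature changes when the density
    changes by density_factor, assuming eps ∝ rho * T**n."""
    return density_factor ** (-1.0 / n)

# Increase the density tenfold:
pp_shift = onset_temperature_shift(10.0, 4)    # pp chain, eps ∝ rho T^4
cno_shift = onset_temperature_shift(10.0, 17)  # CNO cycle, eps ∝ rho T^17

print(f"pp-chain onset temperature shifts by a factor of {pp_shift:.2f}")
print(f"CNO-cycle onset temperature shifts by a factor of {cno_shift:.2f}")
```

For the CNO cycle, a factor-of-ten change in density moves the onset temperature by only about 13%; even for the shallower pp chain the shift is well under a factor of two, which is why quoting a single ignition temperature range is a reasonable approximation.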

The basic reason that the collapse leads to an increase in temperature is that you're releasing gravitational potential energy. It's not too different from the reason that a dropping ball increases its speed as it approaches the ground. Gravitational potential energy gets converted into kinetic energy. In the collapsing star, it's the kinetic energy of the molecules -- and, therefore, the temperature -- that's increased.
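A back-of-the-envelope sketch of this energy argument (my numbers and assumptions, not from the thread): by the virial theorem, a collapsed self-gravitating gas ball retains roughly half of the released gravitational potential energy as thermal energy. Taking a uniform sphere of hydrogen, $$U = -3GM^2/5R$$, and equating $$|U|/2$$ to the thermal energy $$\tfrac{3}{2}(M/m_p)kT$$ gives a characteristic mean temperature $$T \sim GMm_p/5kR$$.

```python
# Order-of-magnitude estimate, assuming a uniform pure-hydrogen sphere
# and the virial theorem (half the released gravitational energy heats
# the gas): T ~ G * M * m_p / (5 * k_B * R).

G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
k_B = 1.381e-23  # Boltzmann constant, J/K
m_p = 1.673e-27  # proton mass, kg

def virial_temperature(mass_kg, radius_m):
    """Characteristic mean temperature of a collapsed uniform gas sphere."""
    return G * mass_kg * m_p / (5.0 * k_B * radius_m)

# One solar mass collapsed to one solar radius:
T = virial_temperature(1.989e30, 6.957e8)
print(f"mean temperature ~ {T:.1e} K")  # millions of kelvin
```

The crude estimate lands at a few million kelvin, the right ballpark for hydrogen ignition: a mean over the whole sphere, so lower than the central temperature quoted earlier in the thread, but it shows that gravitational contraction alone supplies enough energy.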

Of course, things are not always this simple. Sometimes the energy can be released by other means (like radiating light), leaving the temperature constant as the cloud collapses. This is the "isothermal" phase that hellfire was talking about. However, the radiation can only escape as long as the material it's passing through is of low enough density that it isn't absorbed. As the cloud collapses, its density increases, and eventually it's capable of absorbing the light before it escapes. This then allows the temperature to rise, and the cloud transitions to the "adiabatic" (no heat exchange) phase, again mentioned by hellfire. These are (relatively) simple cases, and you can probably imagine that real stars are much more complicated than that. Nevertheless, it's always good to get a grasp of the conceptual picture before trying to understand the details.

Last edited: Oct 15, 2005