sophiecentaur said:
Something that I always wondered about is the possibility of controlling the descent rate by 'flying'/skimming through the upper atmosphere so that the vehicle loses KE at a lower rate. I appreciate that we are talking about supersonic/hypersonic flight and that control would not be trivial, but if you could extend the actual time over which the energy is dissipated, the power would be lower and so the temperature problem would be much less severe. You could even consider avoiding the radio blackout due to the plasma around the nose of the vehicle.
I think it must 'just' be a matter of flight control. Is that problem insoluble?
cjl said:
That's how pretty much all manned vehicles reenter, actually - the Shuttle and even capsules fly a lifting trajectory to decrease heating rates and acceleration loads. Interestingly enough, stretching out the reentry decreases the maximum heating rate, but it actually increases the integrated heat load, so it isn't completely beneficial. It's still used, though, as much as anything to minimize the acceleration to which the passengers are subjected.
See this image for an example:
http://www-public.tu-bs.de:8080/~y0021684/pic/apollo11_reentry.png
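To put rough numbers on that tradeoff, here is a toy ballistic-entry sketch in Python. Everything in it is an assumed illustration (exponential atmosphere, constant flight-path angle, made-up vehicle parameters, and a Sutton-Graves-type convective heating scaling q ∝ √ρ·v³), not a real trajectory code:

```python
import math

# Toy ballistic-entry model illustrating the tradeoff above: a shallower
# (stretched-out) entry lowers the PEAK heating rate but raises the TOTAL
# integrated heat load. All parameter values are assumed examples.

RHO0, H_SCALE = 1.225, 7200.0   # sea-level density [kg/m^3], scale height [m]
BETA = 400.0                    # ballistic coefficient m/(Cd*A) [kg/m^2]
RN = 1.0                        # nose radius [m]
K_SG = 1.74e-4                  # Sutton-Graves constant (SI units)

def entry(v0, gamma_deg, h0=120e3, dt=0.05):
    """Integrate a straight-line ballistic entry; return (peak q, total heat)."""
    gamma = math.radians(gamma_deg)
    v, h = v0, h0
    q_peak, heat_total = 0.0, 0.0
    while h > 0.0 and v > 100.0:
        rho = RHO0 * math.exp(-h / H_SCALE)
        q = K_SG * math.sqrt(rho / RN) * v**3     # stagnation heating [W/m^2]
        q_peak = max(q_peak, q)
        heat_total += q * dt                      # integrated heat load [J/m^2]
        v += -(rho * v**2) / (2.0 * BETA) * dt    # drag deceleration only
        h += -v * math.sin(gamma) * dt            # descend along entry angle
    return q_peak, heat_total

for angle in (30.0, 5.0):       # steep vs. shallow (stretched-out) entry
    qp, Q = entry(v0=7800.0, gamma_deg=angle)
    print(f"angle {angle:4.1f} deg: peak q = {qp/1e6:6.2f} MW/m^2, "
          f"total heat = {Q/1e6:7.1f} MJ/m^2")
```

With these assumed numbers, the shallow 5° entry shows a substantially lower peak heating rate than the steep 30° entry, but a substantially higher integrated heat load - exactly the tradeoff described above.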
It may be losing KE at a lower rate, but that means it is spending more time at high speed, so it is exposed to the extreme temperatures for a longer time. The problem is that the temperatures aren't caused by the rate of deceleration but by the speed itself: the faster the object moves, the hotter it gets. Basically, in the reference frame of the entry vehicle, the air is coming on at a given atmospheric temperature, pressure, density, etc. and some free-stream Mach number. This air stagnates (again, in the reference frame of the vehicle) against the walls of the vehicle, raising its temperature dramatically. The temperature the wall tends toward, which is typically slightly different from the stagnation temperature, is called either the adiabatic wall temperature or the recovery temperature, and it is enormously high compared to the relatively warm free-stream temperature in the atmosphere.
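For a feel of the magnitude, the stagnation temperature follows directly from the free-stream conditions. A minimal Python sketch, with an assumed Mach 20 entry into 220 K air (illustrative values only):

```python
# Minimal sketch: stagnation temperature of the free stream in the
# vehicle's reference frame. Assumed illustrative values only.
gamma = 1.4     # ratio of specific heats (calorically perfect air)
T_inf = 220.0   # free-stream static temperature [K], upper atmosphere
M_inf = 20.0    # free-stream Mach number, roughly orbital entry speed

# T0/T = 1 + (gamma - 1)/2 * M^2
T0 = T_inf * (1.0 + 0.5 * (gamma - 1.0) * M_inf**2)
print(f"Stagnation temperature: {T0:.0f} K")  # about 17,800 K
```

In reality, at these Mach numbers the air dissociates and γ is no longer constant, so this calorically perfect estimate is far too high, but it makes the point: the free-stream air is "relatively warm" only until it stagnates against the vehicle.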
elegysix said:
So what I gather is that unless we measure it, or simulate it with a computer, we wouldn't even have an educated guess.
Now, I'm not asking for an exact model or anything... I'm just looking for a ballpark approximation. If you wanted to simulate this with a computer, you'd have to have some equations to start with... what would those be?
In 1961 the Soviets did the first human spaceflight, so how did they decide whether or not their capsule would disintegrate upon reentry?
Was it a Fortran nightmare or something? lol
Or did they melt enough equipment prior to that to know what to make the front of their ship from?
The ballpark figure would be the adiabatic wall temperature, since that is, in theory, the highest temperature the wall can reach. It is typically defined as:
\frac{T_{aw}}{T_{e}} = 1 + r \frac{\gamma-1}{2}M^{2}_{e}
where
T_{aw} is the adiabatic wall temperature
T_{e} is the static temperature of the flow just outside the boundary layer but inside the shock layer
r is the recovery factor
\gamma is the ratio of specific heats (\approx 1.4 in air)
M_{e} is the edge Mach number just outside the boundary layer but inside the shock layer
The recovery factor, r, is approximated very closely by \textrm{Pr}^{1/2} for laminar flow and \textrm{Pr}^{1/3} for turbulent flow. Here, \textrm{Pr} is the Prandtl number.
Edge quantities can be calculated from either normal or oblique shock relations depending on your situation (normal shock for the stagnation region of a blunt nose, oblique shock for a slender body); a rough worked example follows below.
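Putting the pieces together, here is a minimal Python sketch of that ballpark estimate for a blunt nose (normal shock), using the same illustrative Mach 20 / 220 K free stream as above and treating the flow as turbulent for the worst case. The free-stream values and Prandtl number are assumed example inputs:

```python
import math

gamma = 1.4   # ratio of specific heats for air
Pr = 0.71     # Prandtl number of air (treated as constant here)
T1 = 220.0    # free-stream static temperature [K] (assumed example)
M1 = 20.0     # free-stream Mach number (assumed example)

# Normal shock relations for the static temperature ratio and
# post-shock Mach number; these become the edge quantities.
T2_T1 = ((2*gamma*M1**2 - (gamma - 1)) * ((gamma - 1)*M1**2 + 2)) \
        / ((gamma + 1)**2 * M1**2)
M2 = math.sqrt(((gamma - 1)*M1**2 + 2) / (2*gamma*M1**2 - (gamma - 1)))

T_e, M_e = T1 * T2_T1, M2

# Recovery factor: Pr^(1/2) laminar, Pr^(1/3) turbulent (worst case).
r = Pr ** (1.0 / 3.0)

# Adiabatic wall temperature from the relation above.
T_aw = T_e * (1.0 + r * 0.5 * (gamma - 1.0) * M_e**2)
print(f"Edge static temperature: {T_e:8.0f} K")
print(f"Edge Mach number:        {M_e:8.2f}")
print(f"Adiabatic wall temp:     {T_aw:8.0f} K")
```

With constant γ = 1.4 this lands just below the ideal stagnation temperature from earlier, as expected since r < 1; the same real-gas caveat applies, which is part of why a design based on this number would be extremely conservative.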
This would give an absolute worst-case scenario: assume the flow is fully turbulent and that the craft lingers long enough to heat all the way up to that adiabatic wall temperature. Of course, that will never actually happen, but early designs would have had to be either extremely conservative like this or incredibly risky. I would imagine the Soviets used some variation of this approach. They probably melted some equipment along the way as well.
It is interesting to note that we still don't have a good answer to this, so the thermal protection system (TPS) is typically vastly overdesigned. Getting a better grip on this would let us make a much less bulky TPS and greatly increase our payload and/or decrease the amount of fuel required to reach orbit.