Chestermiller said:
For pure materials, the ideal gas is only a model for real gas behavior above the melting point and at low reduced pressures, ##p/p_{critical}##. For real gases, the heat capacity is not constant, and varies with both temperature and pressure. So, the solution to your problem is, first of all, to take into account the temperature-dependence of the heat capacity (and pressure-dependence, if necessary). Secondly, real materials experience phase transitions, such as condensation, freezing, and changes in crystal structure (below the freezing point). So one needs to take into account the latent heat effects of these transitions in calculating the
change in entropy. And, finally, before and after phase transitions, the heat capacity of the material can be very different (e.g., ice and liquid water).
Chet,
Is this too long for a comment?
Thank you for explaining what may be the solution, for real materials, to my infinite-entropy-change problem; as I indicated, I suspected something of the kind might be the explanation. Do you know, however, whether taking into account both the variation of heat capacity with temperature and pressure and the phase transitions always leads to a finite value of ∫dq/T when integrated between 0 K and a higher temperature? Do you care? Maybe you are concerned only with entropy changes for processes operating between two non-zero temperatures. Did you use entropy-change calculations in your chemical engineering work? I know that in some cases they can be used to show that a proposed process is impossible, because it would require a reduction in the entropy of an isolated system and thus violate the second law. A well-known example is a Carnot-cycle heat engine with an efficiency greater than the limit set by the requirement that the entropy decrease caused by removing thermal energy from the high-temperature heat bath must be accompanied by at least as great an entropy increase caused by adding thermal energy to the low-temperature heat bath.
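Spelled out, with ##Q_h## the heat removed per cycle from the hot bath at ##T_h## and ##Q_c## the heat delivered to the cold bath at ##T_c## (my notation), the requirement on the baths is
$$\Delta S_{\text{baths}} = -\frac{Q_h}{T_h} + \frac{Q_c}{T_c} \;\ge\; 0
\quad\Longrightarrow\quad
\eta = \frac{Q_h - Q_c}{Q_h} = 1 - \frac{Q_c}{Q_h} \;\le\; 1 - \frac{T_c}{T_h}.$$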
One might think that the Δ(entropy) = ∫dq/T law should always give a finite entropy change for pure ideal gases as well as for real materials, but as I demonstrated quite simply, it does not when the lower temperature is 0 K, even though an ideal gas has no variation of heat capacity with temperature or pressure and undergoes no phase transitions. Stimulated by the PF discussion, I recently thought about this problem some more and arrived at a solution, or at least an explanation, that is now obvious to me. It actually favors the thermodynamic definition of entropy change, which gives an infinite (or arbitrarily large) entropy change for a process involving an ideal gas whose starting temperature is at, or arbitrarily near, absolute zero, over the statistical-mechanical view, which requires any finite system to have a finite absolute entropy at every temperature, including absolute zero, and therefore gives only a finite entropy change for any process on a finite system between two states at any two temperatures, even when one of them is absolute zero. This explanation takes quite a few lines to state; I hope it is not so obvious that I am wasting your time, mine, and that of anyone else reading this post by going through it.
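(The simple demonstration I am referring to is just the constant-heat-capacity integral, written here for heating at constant volume:
$$\Delta S = \int_{T_1}^{T_2} \frac{dq}{T} = \int_{T_1}^{T_2} \frac{C_V\,dT}{T} = C_V \ln\frac{T_2}{T_1} \;\longrightarrow\; \infty \quad\text{as } T_1 \to 0.)$$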
The basic reason that the statistical-mechanical (SM) entropy S of a pure (classical) ideal gas SYS in any equilibrium macrostate, at 0 K or at any temperature above that, is zero or a finite positive number, whereas its thermodynamic (THRM) entropy change between an equilibrium macrostate of SYS at 0 K and one at any higher temperature is infinite, is this: the SM entropy of SYS in some equilibrium macrostate MAC is calculated from a discrete approximation NA to the uncertainty of the exact state of SYS when in MAC, where NA is the number of microstates available to SYS when it is in MAC, under some largely arbitrary definition of the size in phase space of a microstate of SYS. The THRM entropy change between two equilibrium macrostates of SYS, by contrast, is calculated from the (multi-dimensional) area or volume in phase space of the set of microstates available to SYS in those macrostates, which can be any positive real number (and is 0 for a macrostate at 0 K). The details follow:
The state of an ideal gas SYS composed of N point particles, each of mass m, which interact only by elastic collisions, is specified by a point ##P_s## in 6N-dimensional phase space: 3N coordinates for position and 3N for momentum. If the gas is in equilibrium, confined to a cube 1 unit on a side, and has a thermal energy E, both SM and THRM consider ##P_s## to be equally likely to be anywhere on the energy surface ES determined by E, which is the set of all points corresponding to SYS having thermal energy E; the probability density of ##P_s## being at any point x is the same positive constant for each x ∈ ES, and 0 elsewhere. Since E is purely (random) kinetic energy, ##E = \sum_{i=1}^{N} p_i^2/(2m)##, where ##p_i## is the ith particle's momentum, this energy surface is the set of all points whose position coordinates lie within the unit cube in the position part of the phase space of SYS and whose momentum coordinates lie on the (3N-1)-dimensional sphere MS in momentum space centered at the origin with radius ##r = \sqrt{2mE}##. The area (or volume) where ##P_s## might be is proportional to the area A of MS, and ##A \propto r^{3N-1}##. The entropy S of SYS is proportional to the logarithm of the area of phase space where ##P_s## might be, ##S \propto \ln A##, therefore ##S = c_1 + c_2 \ln E##; and since ##E \propto T## by the equipartition theorem, ##S = c_1 + c_2[c_3 + \ln T]##. Thus ##dS/dT = c_2/T##, and since ##E \propto T##, also ##dS/dE = c_4/T##; choosing ##c_4## to be 1 (that is, measuring S in suitable units), ##dS = dE/T##. This shows the origin of your THRM dS law for ideal gases (with dE = dq), which you probably knew.
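Making the constants explicit (my notation, with Boltzmann's constant k inserted, which the proportionality argument above leaves free):
$$S = k\ln A + \text{const} = \frac{3N-1}{2}\,k\ln E + \text{const},
\qquad
\frac{dS}{dE} = \frac{(3N-1)k}{2E} \approx \frac{3Nk}{2E} = \frac{1}{T},$$
using the equipartition result ##E = \tfrac{3}{2}NkT## in the last step.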
SM approximates this law, adequately for high T and therefore large A, by dividing phase space up into boxes with more-or-less arbitrary dimensions of position and momentum, and replacing A by the number NA of boxes that contain at least one point of ES. This makes S a function of T that is not even continuous, let alone differentiable, but for large T the jumps in NA, and so in S, as a function of T are small enough compared to S to ignore, so the SM entropy can also follow the dS = dE/T law approximately, and be about equal to the THRM entropy, for suitable box dimensions. However, as T approaches 0 K, the divergence of the SM entropy from the THRM entropy using these box dimensions becomes severe. As T decreases in steps by factors of, say, D, the THRM entropy S decreases by a fixed amount per step (proportional to ln D), becoming arbitrarily negative for low enough T, though T never quite reaches 0 K by this process. For T = 0 K, A = 0, so S = (some positive) const. × ln(A) = const. × ln(0) = minus infinity. But since the energy surface ES must intersect at least one box of the SM partition of phase space, NA can never fall below 1, no matter how small T (and so A) becomes; thus the SM entropy S can never fall below const. × ln(1) = 0.

The THRM absolute entropy can be finite, except at T = 0, because although ΔS from a state of SYS whose T is arbitrarily close to 0 K to a state at a higher T can be arbitrarily large (and positive), S at the starting state can be negative enough that the resulting S for the state at the higher temperature is some fixed finite number, regardless of how near 0 K the starting state is. For the SM entropy the situation is different: although the SM ΔS is about as large as the THRM ΔS at ordinary temperatures, the SM S at the starting state can never be less than 0. The temperature at which the SM entropy gets stuck at 0, unable to go lower as T is lowered further, is not a basic feature of the laws of the universe: making SYS bigger, or making the boxes of the SM partition of phase space smaller, would lower the sticking temperature, and of course making SYS smaller, or the boxes larger, would raise it.
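If it helps to see the "sticking" numerically, here is a minimal sketch of the comparison in Python. Everything in it is my own toy choice: the particle number, the box side delta, and the use of (r/delta)^(3N-1) as a crude stand-in for the exact box count. Take it only as an illustration of the behavior described above, not as the construction itself.
[CODE=python]
import math

# Toy comparison of "THRM" (continuous) vs "SM" (box-counting) entropy
# for an ideal gas of N point particles of mass m in a unit cube.
# All numbers, including the box size, are arbitrary illustrative choices.

N = 5          # number of particles (tiny, so the effect shows up quickly)
m = 1.0        # particle mass
k = 1.0        # Boltzmann's constant in these units
delta = 0.1    # side of an SM "box" in momentum space (arbitrary)

def entropies(T):
    E = 1.5 * N * k * T                 # equipartition: E = (3/2) N k T
    r = math.sqrt(2.0 * m * E)          # radius of the momentum sphere MS
    # Continuous ("THRM") entropy, up to an additive constant:
    # S = k ln(A), with A proportional to r**(3N-1)
    S_thrm = k * (3 * N - 1) * math.log(r)
    # Crude proxy for box counting ("SM"): the number of momentum boxes of
    # side delta intersected by MS scales roughly as (r/delta)**(3N-1),
    # but can never fall below 1.
    NA = max(1.0, (r / delta) ** (3 * N - 1))
    S_sm = k * math.log(NA)
    return S_thrm, S_sm

for T in [1e2, 1e0, 1e-2, 1e-4, 1e-6]:
    S_thrm, S_sm = entropies(T)
    print(f"T = {T:8.0e}   S_thrm = {S_thrm:9.2f}   S_sm = {S_sm:9.2f}")
[/CODE]
At the higher temperatures the two columns differ only by an additive constant (set by the box size), while at the lowest temperatures S_sm is pinned at 0 and S_thrm keeps falling without bound.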
I have read somewhere (though maybe it was written by a low-temperature physicist) that the amount of interesting phenomena for a system within a range of temperatures is proportional to the ratio of the highest to the lowest temperature of that range, not to their difference. If so, there would be as much of such phenomena between 0.001 K and 0.01 K as between 100 K and 1000 K, but the usual SM entropy measure would show no entropy difference between any two states of a very small system in the lower range, while showing a non-zero difference between states in the upper range. It would therefore be of no help in analyzing processes in the lower range, even though it is of some help in the upper one (or, if not for these two particular ranges for a given system, then for some other pair of ranges, each with a 10-to-1 temperature ratio). The THRM entropy measure, on the other hand, would show as much entropy difference (and a non-zero one) between states at the bottom and at the top of the lower range as between states at the bottom and at the top of the upper range.
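(Assuming, as an idealization, a heat capacity C that is constant over each range, that last claim is just the arithmetic
$$\Delta S = C\ln\frac{0.01\ \text{K}}{0.001\ \text{K}} = C\ln 10 = C\ln\frac{1000\ \text{K}}{100\ \text{K}}.)$$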