Gibbs paradox: an urban legend in statistical physics

The discussion centers on the assertion that the Gibbs paradox in classical statistical physics is a misconception, with claims that no real paradox exists regarding the mixing of distinguishable particles. Participants reference key works by Van Kampen and Jaynes, arguing that Gibbs's original conclusions about indistinguishability were flawed and that the N! term in entropy calculations is logically necessary rather than paradoxical. The conversation highlights the distinction between classical and quantum mechanics, emphasizing that classical mechanics treats particles as distinguishable, which should yield measurable mixing entropy. Some contributors challenge the notion that the paradox reveals inconsistencies in classical models, suggesting that it is an urban legend rather than a fundamental issue. The debate underscores the ongoing confusion and differing interpretations surrounding the implications of the Gibbs paradox in both classical and quantum contexts.
  • #181
Yes indeed. The only consistent theory of matter we have today is quantum theory and quantum statistics. To establish the classical theory you have to borrow some ingredients from this more comprehensive theory. At least two ingredients I know of: (a) the natural measure of phase-space volumes is ##h^{d}=(2 \pi \hbar)^{d}##, and (b) the indistinguishability of particles in terms of bosons and fermions (depending on spin).

The simplest way is to use quantum field theory ("2nd quantization") and the grand-canonical ensemble. As an example take spin-0 bosons. The one-particle states we take as given by the wave functions defined in a cubic volume (length ##L##) with periodic boundary conditions (in order to have properly defined momentum observables).

Then the quantum field is completely determined by the annihilation and creation operators for momentum eigenstates ##\hat{a}(\vec{p})## and ##\hat{a}^{\dagger}(\vec{p})## and the Hamilton operator is given by
$$\hat{H}=\sum_{\vec{p}} \frac{\vec{p}^2}{2m} \hat{a}^{\dagger}(\vec{p}) \hat{a}(\vec{p}).$$
With ##\vec{p} \in \frac{2 \pi \hbar}{L} \mathbb{Z}^3##.

The annihilation and creation operators obey the commutation relations (bosonic fields)
$$[\hat{a}(\vec{p}),\hat{a}^{\dagger}(\vec{p}')]=\delta_{\vec{p},\vec{p}'}, \quad [\hat{a}(\vec{p}),\hat{a}(\vec{p}')]=0.$$
A convenient complete set of orthonormal basis states is given by the Fock states, i.e., the eigenstates of the occupation-number operators ##\hat{N}(\vec{p})=\hat{a}^{\dagger}(\vec{p}) \hat{a}(\vec{p})##. The eigenvalues are ##N(\vec{p}) \in \{0,1,2,\ldots\}=\mathbb{N}_0##.
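As an aside (my own illustration, not part of the original post), this operator algebra can be checked numerically in a truncated Fock space; the cutoff `n_max` is an arbitrary choice:

```python
import numpy as np

# Truncated Fock-space sketch: a = sum_n sqrt(n+1) |n><n+1| (cut off at n_max),
# so the number operator N = a^dagger a has eigenvalues 0, 1, 2, ..., n_max-1.
n_max = 8
a = np.diag(np.sqrt(np.arange(1, n_max)), k=1)   # annihilation operator
N = a.conj().T @ a                                # occupation-number operator
evals = np.sort(np.linalg.eigvalsh(N))
print(evals)                                      # 0, 1, ..., n_max-1
assert np.allclose(evals, np.arange(n_max))

# The bosonic commutator [a, a^dagger] = 1 holds away from the cutoff row,
# where the truncation necessarily spoils it.
comm = a @ a.conj().T - a.conj().T @ a
assert np.allclose(comm[:-1, :-1], np.eye(n_max - 1))
```

The last assertion makes the usual caveat explicit: a finite matrix cannot satisfy ##[\hat{a},\hat{a}^{\dagger}]=1## exactly, only on the subspace below the cutoff.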

To get the thermodynamics we need the grand-canonical partition sum
$$Z=\mathrm{Tr} \exp[-\beta (\hat{H}-\mu \hat{N})],$$
where
$$\hat{H}=\sum_{\vec{p}} E_{\vec{p}} \hat{N}(\vec{p}), \quad \hat{N}=\sum_{\vec{p}} \hat{N}(\vec{p}).$$
For the following it's more convenient to define the functional
$$Z[\alpha]=\mathrm{Tr} \exp[-\sum_{\vec{p}} \alpha(\vec{p}) \hat{N}(\vec{p})].$$
That's easy to calculate using the Fock basis (occupation-number basis)
$$Z[\alpha]=\prod_{\vec{p}} \sum_{N(\vec{p})=0}^{\infty} \exp[-\alpha(\vec{p}) N(\vec{p})] = \prod_{\vec{p}}\frac{1}{1-\exp[-\alpha(\vec{p})]}.$$
The occupation number distribution is given by
$$f(\vec{p})=\langle \hat{N}(\vec{p}) \rangle=\frac{1}{Z} \mathrm{Tr}\, \hat{N}(\vec{p}) \exp[-\beta (\hat{H}-\mu \hat{N})] .$$
This can be calculated from the functional
$$f(\vec{p})=-\left . \frac{\partial}{\partial \alpha(\vec{p})} \ln Z[\alpha] \right|_{\alpha(\vec{p})=\beta(E_{\vec{p}}-\mu)} = \frac{\exp[-\beta(E_{\vec{p}}-\mu)]}{1-\exp[-\beta(E_{\vec{p}}-\mu)]}=\frac{1}{\exp[\beta(E_{\vec{p}}-\mu)]-1}.$$
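This derivative trick is easy to verify numerically for a single mode (my own sketch; the value of ##\alpha=\beta(E_{\vec{p}}-\mu)## is arbitrary):

```python
import math

# For one mode, ln Z(alpha) = -ln(1 - e^{-alpha}), and the occupation number
# is f = -d(ln Z)/d(alpha) = 1/(e^{alpha} - 1).  Compare a central finite
# difference against the closed Bose-Einstein form.
def ln_Z_mode(alpha):
    return -math.log(1.0 - math.exp(-alpha))

def f_closed(alpha):
    return 1.0 / math.expm1(alpha)   # 1/(e^alpha - 1)

alpha = 0.7                           # example value of beta*(E - mu) > 0
h = 1e-6
f_numeric = -(ln_Z_mode(alpha + h) - ln_Z_mode(alpha - h)) / (2*h)
print(f_numeric, f_closed(alpha))
assert abs(f_numeric - f_closed(alpha)) < 1e-6
```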
The partition sum itself is given by
$$\Omega(V,\beta,\mu)=\ln Z(V,\beta,\mu)=-\sum_{\vec{p}} \ln \{1-\exp[-\beta(E_{\vec{p}}-\mu)] \}.$$
The thermodynamic limit is not trivial since obviously we have the constraints ##\beta>0## and ##\mu<0##, and for too large ##\beta## and too large ##\mathcal{N}=\langle N \rangle## we cannot make ##L \rightarrow \infty## and keep ##n=\mathcal{N}/V## constant. The reason is that we need to treat the ground state ("zero mode" of the field) separately before doing the limit. The thorough investigation leads to the possibility of Bose-Einstein condensation for large ##n## and large ##\beta## (since ##\beta## turns out to be ##\beta=1/(k T)## that means low temperatures).

Restricting ourselves to non-degenerate states, i.e., high temperature and not too large ##n## we can naively make ##L \rightarrow \infty##. Then in any momentum-volume element ##\mathrm{d}^3 p## we have ##\frac{V}{(2 \pi \hbar)^3} \mathrm{d}^3 p## single-particle states and thus we can substitute the sum by an integral
$$\Omega=-\frac{V}{(2 \pi \hbar)^3} \int_{\mathbb{R}^3} \mathrm{d}^3 p \ln\{1-\exp[-\beta(E_{\vec{p}}-\mu)]\}.$$
The integral is non-trivial, but the classical limit is simple. It applies for small occupation numbers, i.e., for ##\exp[-\beta(E_{\vec{p}}-\mu)]\ll 1##. Then we can approximate ##\ln[1-\exp(\ldots)]\simeq-\exp(\ldots)##
and
$$\Omega=\frac{V}{(2 \pi \hbar)^3} \int_{\mathbb{R}^3} \mathrm{d}^3 p \exp[-\beta(E_{\vec{p}}-\mu)].$$
With ##E_{\vec{p}}=\vec{p}^2/(2m)## we can evaluate the Gaussian integral, leading to
$$\Omega=\frac{V}{(2 \pi \hbar)^3} \left (\frac{2 \pi m}{\beta} \right)^{3/2} \exp(\beta \mu).$$
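The Gaussian integral can be checked numerically (my own sketch, in units with ##m=\beta=1## so that all prefactors drop out):

```python
import numpy as np

# Check of the Gaussian momentum integral behind Omega (units m = beta = 1):
#   integral over R^3 of exp(-p^2/2) d^3p = (2 pi)^{3/2}
dp = 1e-3
p = (np.arange(20000) + 0.5) * dp            # midpoint rule on [0, 20]
numeric = np.sum(4*np.pi * p**2 * np.exp(-p**2/2)) * dp
closed = (2*np.pi)**1.5
print(numeric, closed)
assert abs(numeric - closed) < 1e-4
```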
Now the meaning of the constants becomes clear by evaluating the internal energy and the average particle number
$$\mathcal{N}=\langle N \rangle=\frac{1}{\beta} \partial_{\mu} \Omega=\Omega.$$
Further we have
$$U=\langle E \rangle=-\partial_{\beta} \Omega+ \mu \mathcal{N}=\frac{3}{2 \beta} \mathcal{N},$$
from which
$$\beta=1/(k T).$$
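Both identities above can be verified by finite differences of the closed-form ##\Omega## (my own sketch; the overall constant is set to 1 since it drops out of the ratios):

```python
import math

# Omega(beta, mu) = C * beta^{-3/2} * exp(beta*mu); set C = 1.
def Omega(beta, mu):
    return beta**-1.5 * math.exp(beta*mu)

beta, mu, h = 1.3, -0.8, 1e-6
# N = (1/beta) * dOmega/dmu  should equal  Omega itself
N = (1/beta) * (Omega(beta, mu+h) - Omega(beta, mu-h)) / (2*h)
# U = -dOmega/dbeta + mu*N   should equal  (3/2) N / beta = (3/2) N k T
U = -(Omega(beta+h, mu) - Omega(beta-h, mu)) / (2*h) + mu*N
print(N, Omega(beta, mu))
print(U, 1.5*N/beta)
assert abs(N - Omega(beta, mu)) < 1e-6
assert abs(U - 1.5*N/beta) < 1e-6
```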
To get the relation to the more usual thermodynamic potentials we calculate the entropy. The statistical operator is
$$\hat{\rho}=\frac{1}{Z} \exp(-\beta \hat{H} + \beta \mu \hat{N})$$
and thus the entropy
$$S=-k\, \mathrm{Tr}\, \hat{\rho} \ln \hat{\rho}=k (\Omega +\beta U - \beta \mu \mathcal{N})=k \Omega+\frac{U-\mu \mathcal{N}}{T}.$$
To get the usual potentials we note that with
$$\Phi=\Phi(V,T,\mu)=-k T \Omega$$
one gets after some algebra
$$\mathrm{d} \Phi=\mathrm{d} V \partial_V \Phi - S \mathrm{d} T - \mathcal{N} \mathrm{d} \mu.$$
On the other hand from the above expression for the entropy we find
$$\Phi=U-T S - \mu \mathcal{N}.$$
From this it follows
$$\mathrm{d} U = \mathrm{d} V \partial_V \Phi + T \mathrm{d} S+\mu \mathrm{d} \mathcal{N}$$
which gives
$$P=-\left (\frac{\partial \Phi}{\partial V} \right)_{T,\mu}.$$
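Since ##\Omega## is proportional to ##V## and ##\mathcal{N}=\Omega##, this last relation reproduces the ideal-gas law; a short sketch (my own, illustrative numbers, ##k=1##) makes the chain explicit:

```python
import math

# With Omega = V * g(beta, mu) (g independent of V) and Phi = -kT*Omega,
# the pressure P = -dPhi/dV = kT*g = N kT / V, i.e. the ideal-gas law.
k, T, mu, V = 1.0, 2.0, -1.0, 5.0
beta = 1/(k*T)
g = beta**-1.5 * math.exp(beta*mu)         # per-volume factor, constants set to 1
def Phi(vol):
    return -k*T * vol * g

h = 1e-6
P = -(Phi(V+h) - Phi(V-h)) / (2*h)         # pressure from the potential
N = V * g                                   # since N = Omega
print(P*V, N*k*T)
assert abs(P*V - N*k*T) < 1e-6
```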
 
  • #182
Philip Koeck said:
How would you define the density of states without using QM?

Use the Boltzmann distribution. From the abstract of your paper, that's what you're using anyway to derive the ideal gas law. The only difference with classical physics (distinguishable particles) vs. quantum (indistinguishable particles) is that you get the Boltzmann distribution directly instead of as an approximation in the low density limit.
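The low-density limit mentioned here is easy to see numerically (my own sketch): the Bose-Einstein occupation ##1/(e^{x}-1)## with ##x=\beta(E-\mu)## collapses onto the Boltzmann factor ##e^{-x}## as soon as occupations are small.

```python
import math

# Bose-Einstein occupation vs. Boltzmann factor for increasing x = beta*(E - mu).
for x in (1.0, 5.0, 10.0):
    be = 1.0 / math.expm1(x)      # 1/(e^x - 1)
    boltz = math.exp(-x)
    print(x, be, boltz, be/boltz)

# The relative difference is itself ~ e^{-x}, so at x = 10 it is below 5e-5.
assert abs(1/math.expm1(10.0) - math.exp(-10.0)) / math.exp(-10.0) < 5e-5
```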
 
  • #183
PeterDonis said:
Use the Boltzmann distribution. From the abstract of your paper, that's what you're using anyway to derive the ideal gas law. The only difference with classical physics (distinguishable particles) vs. quantum (indistinguishable particles) is that you get the Boltzmann distribution directly instead of as an approximation in the low density limit.
I'm trying to point out the following:
For quantum particles I can use the quantum mechanical expression for the density of states of an ideal gas in the derivation and I can get all the macroscopic relations for an ideal gas as a result, for example U = 1.5 N k T.

For classical particles I don't have an expression for the density of states, as far as I know.
Therefore I'm forced to use the above relation between U and T as a normalization condition in order to arrive at the MB-distribution.
In other words, I'm not actually deriving the macroscopic relations from the statistical description in the case of classical particles, I'm using them as additional input.

My question is whether there is some way of arriving at the density of states for classical particles without resorting to QM. That's what seems to be missing in the classical description.
 
  • #184
You can't get an absolute measure for the density of states in the classical description. The reason is that there's no "natural phase-space measure". What is clear is that phase space density is the right measure for the number of states due to Liouville's theorem. The missing "natural phase-space measure" is again given by quantum theory and the "absolute number of states" is the phase-space volume divided by ##h^f## (##f## is the configuration-space dimension, the phase-space dimension is ##2f##).
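The counting rule "number of states = phase-space volume / ##h^f##" can be illustrated directly (my own sketch, units ##\hbar=1##): count the momentum lattice points ##\vec{p} \in \frac{2\pi\hbar}{L}\mathbb{Z}^3## inside a sphere ##|\vec{p}|\le P## and compare with ##V \cdot \frac{4}{3}\pi P^3/(2\pi\hbar)^3##.

```python
import numpy as np

# Momenta live on the lattice p in (2*pi*hbar/L) * Z^3 (hbar = 1 here), so the
# number of states with |p| <= P approaches the phase-space estimate
#   V * (4/3 pi P^3) / (2 pi hbar)^3   for large L.
L, P = 40.0, 3.0
V = L**3
step = 2*np.pi / L                           # momentum-lattice spacing
R = int(P/step) + 1
n = np.arange(-R, R+1)
nx, ny, nz = np.meshgrid(n, n, n, indexing="ij")
p2 = (step**2) * (nx**2 + ny**2 + nz**2)
count = int(np.sum(p2 <= P**2))              # exact lattice count
estimate = V * (4/3)*np.pi*P**3 / (2*np.pi)**3
print(count, estimate)
assert abs(count - estimate) / estimate < 0.01
```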
 
  • #185
vanhees71 said:
You can't get an absolute measure for the density of states in the classical description. The reason is that there's no "natural phase-space measure". What is clear is that phase space density is the right measure for the number of states due to Liouville's theorem. The missing "natural phase-space measure" is again given by quantum theory and the "absolute number of states" is the phase-space volume divided by ##h^f## (##f## is the configuration-space dimension, the phase-space dimension is ##2f##).
Alternatively, does anybody know of a workaround that doesn't require the density of states or anything else from QM and doesn't use the macroscopic relations as constraints or for normalization in any way?
 
  • #186
This workaround has been used with great success for as long as classical statistical physics itself: you simply normalize the phase-space distribution function to the given total number of particles.
 
  • #187
vanhees71 said:
The workaround has been used with great success since classical statistical physics has been used. You simply normalize the phase-space distribution function to the given total number of particles.
I don't know anything about the phase-space distribution function, and I can only find it in connection with QM on the internet.
I'm not sure that's what I was looking for.
 
  • #188
The single-particle phase-space distribution function is the quantity usually called ##f(t,\vec{x},\vec{p})## appearing in the Boltzmann transport equation. It's defined as the particles per unit phase-space volume at the phase-space point ##(\vec{x},\vec{p})## at time ##t##.
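The normalization mentioned a few posts above is then just ##\int \mathrm{d}^3 x\, \mathrm{d}^3 p\, f = N##. A short sketch (my own, units ##m=kT=1##, a uniform gas in a box) shows it for the Maxwell-Boltzmann form:

```python
import numpy as np

# Normalize a Maxwell-Boltzmann phase-space distribution f(x, p) to a given
# particle number N_total (uniform gas in a box of volume V, m = kT = 1).
N_total, V = 1000.0, 8.0
n = N_total / V                               # number density
def f(p):                                     # f(x, p), independent of x here
    return n * (2*np.pi)**-1.5 * np.exp(-p**2/2)

# Integrate over the box (factor V) and over momentum (midpoint rule).
dp = 1e-3
p = (np.arange(20000) + 0.5) * dp
N_check = V * np.sum(4*np.pi * p**2 * f(p)) * dp
print(N_check)                                # recovers N_total
assert abs(N_check - N_total) < 1e-2
```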
 
