Gibbs paradox: an urban legend in statistical physics

In summary, the thread discusses the misconception surrounding the mixing of distinguishable particles and the supposed paradox in classical statistical physics. The original poster found that there is no real paradox, a conclusion supported by various references and papers. The correction factor of 1/N! in the entropy calculation is not a paradox but a logical necessity: it is imposed not to obtain an extensive entropy, but by combinatorial logic. This is different from Gibbs' definition of entropy in terms of phase-space volume. Within classical mechanics there is no intrinsic reason to include this factor, whereas in quantum theory it follows from indistinguishability. The correction factor persists in the classical limit as a natural consequence of Boltzmann statistics.
  • #176
autoUFC said:
The argument that you cannot place the particles in a given position after removing the barrier has a lot of power in the quantum setting.

Why wouldn't it have the same power in a classical setting? Classical physics still says it takes work (and therefore has an entropy cost) to separate a mixture of two types of particles.
 
Likes vanhees71
  • #177
DrDu said:
First attach n primers to each of two carriers and synthesise random single strand DNA of length N from e.g. A and C, only.
To better understand the thermodynamic cycle you described, could you briefly explain (or provide a link explaining) what primers and carriers are and how they work in the context of your proposed experiment?
 
  • #178
HPt said:
I believe your misconception is that you think of putting one kind of gas of distinguishable particles in one part and another kind of gas of distinguishable particles in the other part (such that you have knowledge of what particle is in which part). But that's not the setup. Instead think of putting a wall in a box containing pairwise distinguishable particles. Now, if you take out the dividing wall again, I hope you will agree that you don't get an irreversible diffusion.

This statement only echoes how the Gibbs paradox is introduced in most textbooks (e.g. Huang); it doesn't contain any original insight. I invite you to read a little beyond the first sentence of the abstract.

I have a few questions concerning your setup. I have little knowledge of quantum information, and that may be the reason for my doubts.

Suppose you start with a large number of pairwise distinguishable quantum particles in a volume. Then you place the barrier. One cannot say that, after placing the barrier, each particle is necessarily on one side or the other. We should get a superposition, isn't that so?

Suppose you have sensors that can detect particles and identify what kind of particle was detected. Using the detectors we can change the state, destroying the superposition, and determine precisely which particles are on each side. The state after using the detectors would be different from the superposed state. Would that make any difference (to whether or not the entropy increases) when the barrier is removed?
 
  • #179
Hi autoUFC,
you have to distinguish quantum states, microstates and macrostates:
  • Quantum states are represented by vectors in the state space, and they can be superposed. If ##\binom 1 0## and ##\binom 0 1## represent quantum states, then so does their superposition ##\frac{1}{\sqrt{2}}\binom 1 1##. (The factor ##\frac{1}{\sqrt{2}}## is there because vectors representing quantum states are normalized.)
  • Microstates are represented by basis vectors of the state space. So, they depend on the basis you choose, but once you have chosen a basis, only certain quantum states are microstates.
  • Macrostates are represented by density operators. A density operator assigns each microstate a probability. Entropy is defined for macrostates as ##-k \sum p(m) \ln p(m)##, where ##p(m)## denotes the probability assigned to microstate ##m## and the sum runs over all microstates (see the numerical sketch after this list).
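To make this definition concrete, here is a minimal numerical sketch (plain Python; the two-particle left/right toy macrostate is invented for illustration, not taken from the thread):

```python
import math

k = 1.380649e-23  # Boltzmann constant in J/K

def entropy(probabilities):
    """Entropy S = -k * sum over microstates of p * ln(p)."""
    return -k * sum(p * math.log(p) for p in probabilities if p > 0)

# Toy macrostate: two distinguishable particles, each equally likely to
# be on the left or right side of the box -> 4 equiprobable microstates.
p_uniform = [0.25, 0.25, 0.25, 0.25]
print(entropy(p_uniform) / k)  # 2*ln(2) ≈ 1.386, i.e. k*ln(2) per particle
```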
autoUFC said:
Suppose you start with a large number of pairwise distinguishable quantum particles in a volume. Then you place the barrier. One cannot say that, after placing the barrier, each particle is necessarily on one side or the other. We should get a superposition, isn't that so?
Quantum states where a particle is "smeared" across both sides are possible. However, similarly to how you assume that in thermodynamic equilibrium the microstates can be chosen to be energy eigenstates, you may assume that they can be chosen not to be smeared across both sides. Crucially, however, in the macrostate you describe there are microstates with non-zero probability where a certain particle is on the left side, and other microstates with non-zero probability where the same particle is on the right side. This uncertainty about which particle is on which side contributes to the entropy of the macrostate.
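As a rough illustration of that contribution (a back-of-the-envelope calculation, not part of the post above): if each of ##N## pairwise distinguishable particles is, independently, on the left or right side with probability ##1/2##, there are ##2^N## equiprobable microstates, and this uncertainty alone contributes
$$S_{\text{sides}} = -k \sum_{m} p(m) \ln p(m) = -k \, 2^N \frac{1}{2^N} \ln \frac{1}{2^N} = N k \ln 2,$$
which is exactly the classic entropy of mixing for equal sub-volumes.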
autoUFC said:
Suppose you have sensors that can detect particles and identify what kind of particle was detected. Using the detectors we can change the state, destroying the superposition, and determine precisely which particles are on each side. The state after using the detectors would be different from the superposed state. Would that make any difference (to whether or not the entropy increases) when the barrier is removed?
Yes, using the detector would change the macrostate. You wouldn't destroy superpositions, because in the present setup we are considering a macrostate and not a quantum state, but you would change the probabilities assigned to certain microstates. For example, as soon as you measure that a certain particle is on the left side, the probabilities of all microstates where this particle is on the right side would collapse to zero. For that reason, such a measurement would decrease the entropy of the macrostate, and removing the barrier after the measurement would result in an entropy increase.
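A minimal sketch of that last point (again plain Python, with an invented two-particle toy model): represent the macrostate as microstate probabilities, condition on the measurement outcome, and the entropy drops by ##k \ln 2##.

```python
import math

def entropy_over_k(probs):
    """Entropy in units of k: -sum of p * ln(p) over microstates."""
    return -sum(p * math.log(p) for p in probs.values() if p > 0)

# Microstate labels give the side (L/R) of particle 1 and particle 2.
macro = {"LL": 0.25, "LR": 0.25, "RL": 0.25, "RR": 0.25}
print(entropy_over_k(macro))   # 2*ln(2) ≈ 1.386

# Measurement: particle 1 is found on the left side.  Microstates with
# particle 1 on the right collapse to probability zero; renormalize.
post = {m: p for m, p in macro.items() if m[0] == "L"}
norm = sum(post.values())
post = {m: p / norm for m, p in post.items()}
print(entropy_over_k(post))    # ln(2) ≈ 0.693 -> the entropy decreased
```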
 
Likes vanhees71 and PeterDonis
  • #180
Maybe slightly off topic:

Several posts have mentioned the way the density of states is calculated from first principles for (indistinguishable) quantum particles.

Having the density of states means, I believe, that it's possible to derive all the equations of an ideal gas from first principles: for example, you can get the Sackur-Tetrode equation (reproduced below) and also the relationship between total energy and temperature directly from statistical physics.
See my text on RG for details if you want to ( https://www.researchgate.net/publication/330675047_An_introduction_to_the_statistical_physics_of_distinguishable_and_indistinguishable_particles_in_an_ideal_gas ).

I haven't seen anything comparable for classical particles. How would you define the density of states without using QM?
In my own text on RG I use the relation between energy and temperature as an extra requirement to find the right MB-distribution, but that seems like I'm just forcing the results to match the expected properties of an ideal gas rather than deriving the properties from first principles.
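For reference, the Sackur-Tetrode equation mentioned above, in its standard form for a monatomic ideal gas:
$$S = N k \left[ \ln\left( \frac{V}{N} \left( \frac{2 \pi m k T}{h^2} \right)^{3/2} \right) + \frac{5}{2} \right],$$
which, together with ##U = \frac{3}{2} N k T##, follows from first principles once the density of states is fixed by the ##h^3## phase-space measure and the ##1/N!## counting factor.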
 
Likes vanhees71
  • #181
Yes indeed. The only consistent theory of matter we have today is quantum theory and quantum statistics. To establish the classical theory you have to borrow some ingredients from this more comprehensive theory. At least two such ingredients I know of: (a) the natural measure of phase-space volumes is ##h^{d}=(2 \pi \hbar)^{d}##, and (b) the indistinguishability of particles in terms of bosons and fermions (depending on spin).

The simplest way is to use quantum field theory ("2nd quantization") and the grand-canonical ensemble. As an example take spin-0 bosons. The one-particle states we take to be given by wave functions defined in a cubic volume (length ##L##) with periodic boundary conditions (in order to have properly defined momentum observables).

Then the quantum field is completely determined by the annihilation and creation operators for momentum eigenstates ##\hat{a}(\vec{p})## and ##\hat{a}^{\dagger}(\vec{p})## and the Hamilton operator is given by
$$\hat{H}=\sum_{\vec{p}} \frac{\vec{p}^2}{2m} \hat{a}^{\dagger}(\vec{p}) \hat{a}(\vec{p}).$$
With ##\vec{p} \in \frac{2 \pi \hbar}{L} \mathbb{Z}^3##.

The annihilation and creation operators obey the commutation relations (bosonic fields)
$$[\hat{a}(\vec{p}),\hat{a}^{\dagger}(\vec{p}')]=\delta_{\vec{p},\vec{p}'}, \quad [\hat{a}(\vec{p}),\hat{a}(\vec{p}')]=0.$$
A convenient complete set of orthonormalized basis vectors is given by the Fock states, i.e., the eigenstates of the occupation-number operators ##\hat{N}(\vec{p})=\hat{a}^{\dagger}(\vec{p}) \hat{a}(\vec{p})##. The eigenvalues are ##N(\vec{p}) \in \{0,1,2,\ldots\}=\mathbb{N}_0##.

To get the thermodynamics we need the grand-canonical partition sum
$$Z=\mathrm{Tr} \exp[-\beta (\hat{H}-\mu \hat{N})],$$
where
$$\hat{H}=\sum_{\vec{p}} E_{\vec{p}} \hat{N}(\vec{p}), \quad \hat{N}=\sum_{\vec{p}} \hat{N}(\vec{p}).$$
For the following it's more convenient to define the functional
$$Z[\alpha]=\mathrm{Tr} \exp[-\sum_{\vec{p}} \alpha(\vec{p}) \hat{N}(\vec{p})].$$
That's easy to calculate using the Fock basis (occupation-number basis)
$$Z[\alpha]=\prod_{\vec{p}} \sum_{N(\vec{p})=0}^{\infty} \exp[-\alpha(\vec{p}) N(\vec{p})] = \prod_{\vec{p}}\frac{1}{1-\exp[-\alpha(\vec{p})]}.$$
The occupation number distribution is given by
$$f(\vec{p})=\langle \hat{N}(\vec{p}) \rangle=\frac{1}{Z} \mathrm{Tr}\, \hat{N}(\vec{p}) \exp[-\beta (\hat{H}-\mu \hat{N})].$$
This can be calculated from the functional
$$f(\vec{p})=-\left . \frac{\partial}{\partial \alpha(\vec{p})} \ln Z[\alpha] \right|_{\alpha(\vec{p})=\beta(E_{\vec{p}}-\mu)} = \frac{\exp[-\beta(E_{\vec{p}}-\mu)]}{1-\exp[-\beta(E_{\vec{p}}-\mu)]}=\frac{1}{\exp[\beta(E_{\vec{p}}-\mu)]-1}.$$
The partition sum itself is given by
$$\Omega(V,\beta,\mu)=\ln Z(V,\beta,\mu)=-\sum_{\vec{p}} \ln \{1-\exp[-\beta(E_{\vec{p}}-\mu)] \}.$$
The thermodynamic limit is not trivial, since obviously we have the constraints ##\beta>0## and ##\mu<0##, and for too large ##\beta## and too large ##\mathcal{N}=\langle N \rangle## we cannot take ##L \rightarrow \infty## and keep ##n=\mathcal{N}/V## constant. The reason is that we need to treat the ground state ("zero mode" of the field) separately before taking the limit. The thorough investigation leads to the possibility of Bose-Einstein condensation for large ##n## and large ##\beta## (since ##\beta## turns out to be ##\beta=1/(k T)##, that means low temperatures).

Restricting ourselves to the non-degenerate regime, i.e., high temperature and not too large ##n##, we can naively take ##L \rightarrow \infty##. Then in any momentum-volume element ##\mathrm{d}^3 p## we have ##\frac{V}{(2 \pi \hbar)^3} \mathrm{d}^3 p## single-particle states, and thus we can replace the sum by an integral
$$\Omega=-\frac{V}{(2 \pi \hbar)^3} \int_{\mathbb{R}^3} \mathrm{d}^3 p \ln\{1-\exp[-\beta(E_{\vec{p}}-\mu)]\}.$$
The integral is non-trivial, but the classical limit is simple. It applies for small occupation numbers, i.e., for ##\exp[-\beta(E_{\vec{p}}-\mu)]\ll 1##. Then we can approximate ##\ln(1-x) \approx -x##, so that
$$\Omega=\frac{V}{(2 \pi \hbar)^3} \int_{\mathbb{R}^3} \mathrm{d}^3 p \exp[-\beta(E_{\vec{p}}-\mu)].$$
With ##E_{\vec{p}}=\vec{p}^2/(2m)## we can evaluate the Gaussian integral, leading to
$$\Omega=\frac{V}{(2 \pi \hbar)^3} \left (\frac{2 \pi m}{\beta} \right)^{3/2} \exp(\beta \mu).$$
Now the meaning of the constants becomes clear by evaluating the internal energy and the average particle number
$$\mathcal{N}=\langle N \rangle=\frac{1}{\beta} \partial_{\mu} \Omega=\Omega.$$
Further we have
$$U=\langle E \rangle=-\partial_{\beta} \Omega+ \mu \mathcal{N}=\frac{3}{2 \beta} \mathcal{N},$$
from which
$$\beta=1/(k T).$$
To get the relation to the more usual thermodynamic potentials we calculate the entropy. The statistical operator is
$$\hat{\rho}=\frac{1}{Z} \exp(-\beta \hat{H} + \beta \mu \hat{N})$$
and thus the entropy
$$S=-k \mathrm{Tr}(\hat{\rho} \ln \hat{\rho})=k (\Omega +\beta U - \beta \mu \mathcal{N})=k \Omega+\frac{U-\mu \mathcal{N}}{T}.$$
To get the usual potentials we note that with
$$\Phi=\Phi(V,T,\mu)=-k T \Omega$$
one gets after some algebra
$$\mathrm{d} \Phi=\mathrm{d} V \partial_V \Phi - S \mathrm{d} T - \mathcal{N} \mathrm{d} \mu.$$
On the other hand from the above expression for the entropy we find
$$\Phi=U-T S - \mu \mathcal{N}.$$
From this it follows
$$\mathrm{d} U = \mathrm{d} V \partial_V \Phi + T \mathrm{d} S+\mu \mathrm{d} \mathcal{N}$$
which gives
$$P=-\left (\frac{\partial \Phi}{\partial V} \right)_{T,\mu}.$$
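As a quick numerical sanity check of the classical-limit results above (a sketch in plain Python; the parameter values and unit system are arbitrary, chosen only for the check), one can verify ##\mathcal{N} = \frac{1}{\beta}\partial_{\mu} \Omega = \Omega## and ##U = \frac{3}{2\beta}\mathcal{N}## by finite differences:

```python
import math

hbar, m, V = 1.0, 1.0, 1.0   # arbitrary units, invented for this check

def Omega(beta, mu):
    """Classical-limit grand potential:
    Omega = V/(2*pi*hbar)^3 * (2*pi*m/beta)^(3/2) * exp(beta*mu)."""
    return (V / (2 * math.pi * hbar) ** 3
            * (2 * math.pi * m / beta) ** 1.5 * math.exp(beta * mu))

beta, mu, eps = 2.0, -1.5, 1e-6

# N = (1/beta) * dOmega/dmu should reproduce Omega itself.
N = (Omega(beta, mu + eps) - Omega(beta, mu - eps)) / (2 * eps * beta)
print(N, Omega(beta, mu))        # agree to ~1e-10

# U = -dOmega/dbeta + mu*N should reproduce (3/2) * N / beta.
U = -(Omega(beta + eps, mu) - Omega(beta - eps, mu)) / (2 * eps) + mu * N
print(U, 1.5 * N / beta)         # agree as well
```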
 
Likes Philip Koeck
  • #182
Philip Koeck said:
How would you define the density of states without using QM?

Use the Boltzmann distribution. From the abstract of your paper, that's what you're using anyway to derive the ideal gas law. The only difference between classical physics (distinguishable particles) and quantum physics (indistinguishable particles) is that classically you get the Boltzmann distribution directly instead of as an approximation in the low-density limit.
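Indeed, in the low-occupancy limit ##\exp[\beta(E_{\vec{p}}-\mu)] \gg 1##, the Bose-Einstein distribution from post #181 reduces to the Boltzmann form:
$$f(\vec{p})=\frac{1}{\exp[\beta(E_{\vec{p}}-\mu)]-1} \approx \exp[-\beta(E_{\vec{p}}-\mu)].$$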
 
Likes vanhees71
  • #183
PeterDonis said:
Use the Boltzmann distribution. From the abstract of your paper, that's what you're using anyway to derive the ideal gas law. The only difference between classical physics (distinguishable particles) and quantum physics (indistinguishable particles) is that classically you get the Boltzmann distribution directly instead of as an approximation in the low-density limit.
I'm trying to point out the following:
For quantum particles I can use the quantum mechanical expression for the density of states of an ideal gas in the derivation, and I can get all the macroscopic relations for an ideal gas as a result, for example ##U = \frac{3}{2} N k T##.

For classical particles I don't have an expression for the density of states, as far as I know.
Therefore I'm forced to use the above relation between U and T as a normalization condition in order to arrive at the MB-distribution.
In other words, I'm not actually deriving the macroscopic relations from the statistical description in the case of classical particles, I'm using them as additional input.

My question is whether there is some way of arriving at the density of states for classical particles without resorting to QM. That's what seems to be missing in the classical description.
 
Likes vanhees71
  • #184
You can't get an absolute measure for the density of states in the classical description. The reason is that there's no "natural phase-space measure". What is clear from Liouville's theorem is that phase-space volume is the right measure for the number of states. The missing "natural phase-space measure" is again given by quantum theory, and the "absolute number of states" is the phase-space volume divided by ##h^f## (##f## is the configuration-space dimension; the phase-space dimension is ##2f##).
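A standard one-dimensional illustration of this counting: for a harmonic oscillator of frequency ##\nu##, the phase-space area enclosed by the orbit of energy ##E## is ##\oint p \, \mathrm{d}q = E/\nu##, so the number of states with energy below ##E## is
$$\frac{1}{h} \oint p \, \mathrm{d}q = \frac{E}{h \nu},$$
in agreement with the spacing ##h \nu## of the quantum energy levels.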
 
Likes Philip Koeck
  • #185
vanhees71 said:
You can't get an absolute measure for the density of states in the classical description. The reason is that there's no "natural phase-space measure". What is clear from Liouville's theorem is that phase-space volume is the right measure for the number of states. The missing "natural phase-space measure" is again given by quantum theory, and the "absolute number of states" is the phase-space volume divided by ##h^f## (##f## is the configuration-space dimension; the phase-space dimension is ##2f##).
Alternatively, does anybody know of a workaround that doesn't require the density of states or anything else from QM and doesn't use the macroscopic relations as constraints or for normalization in any way?
 
  • #186
The workaround has been used with great success for as long as classical statistical physics has existed. You simply normalize the phase-space distribution function to the given total number of particles.
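Concretely, for a homogeneous ideal gas of ##N## particles in a volume ##V## in equilibrium, this normalization condition reads (standard Maxwell-Boltzmann form):
$$f(\vec{x},\vec{p}) = \frac{N}{V} \left( \frac{1}{2 \pi m k T} \right)^{3/2} \exp\left( -\frac{\vec{p}^2}{2 m k T} \right), \qquad \int \mathrm{d}^3 x \, \mathrm{d}^3 p \, f(\vec{x},\vec{p}) = N.$$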
 
  • #187
vanhees71 said:
The workaround has been used with great success for as long as classical statistical physics has existed. You simply normalize the phase-space distribution function to the given total number of particles.
I don't know anything about the phase-space distribution function, and online I can only find it in connection with QM.
I'm not sure that's what I was looking for.
 
  • #188
The single-particle phase-space distribution function is the quantity usually called ##f(t,\vec{x},\vec{p})## appearing in the Boltzmann transport equation. It's defined as the number of particles per unit phase-space volume at the phase-space point ##(\vec{x},\vec{p})## at time ##t##.
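For completeness, ##f## evolves according to the Boltzmann transport equation (with external force ##\vec{F}## and collision term ##C[f]##):
$$\partial_t f + \frac{\vec{p}}{m} \cdot \nabla_{\vec{x}} f + \vec{F} \cdot \nabla_{\vec{p}} f = C[f].$$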
 
Likes Philip Koeck
