Gibbs paradox: an urban legend in statistical physics

AI Thread Summary
The discussion centers on the assertion that the Gibbs paradox in classical statistical physics is a misconception, with claims that no real paradox exists regarding the mixing of distinguishable particles. Participants reference key works by Van Kampen and Jaynes, arguing that Gibbs's original conclusions about indistinguishability were flawed and that the N! term in entropy calculations is logically necessary rather than paradoxical. The conversation highlights the distinction between classical and quantum mechanics, emphasizing that classical mechanics treats particles as distinguishable, which should yield measurable mixing entropy. Some contributors challenge the notion that the paradox reveals inconsistencies in classical models, suggesting that it is an urban legend rather than a fundamental issue. The debate underscores the ongoing confusion and differing interpretations surrounding the implications of the Gibbs paradox in both classical and quantum contexts.
autoUFC
TL;DR Summary
There is no paradox. Gibbs made a mistake. Ehrenfest explained the entropy of mixing of distinguishable elements with the problem of the dog's fleas. However, more than a century later this fact is ignored by most.
Hi,
I recently discovered that there is no real paradox in the question of the mixing of classical distinguishable particles. I was shocked. Most books and all my professors suggest that an extensible entropy could not be defined for distinguishable particles.

I believe that many of you will be skeptical of my claim that no paradox exists. After a careful read of the Wikipedia article on the Gibbs paradox I learned that there is no real paradox. Gibbs made a mistake. Van Kampen 1984 and Jaynes 1996 (two references that I found in the Wikipedia page) make the point clear. Jaynes goes as far as to suggest that Gibbs was senile when he wrote about the paradox of mixing.
A two-page text that gets right to the point is
Swendsen, R. H. (2008). Gibbs’ paradox and the definition of entropy. Entropy, 10(1), 15-18.

I would like to know if other people are surprised by that. It seems to me that because physicists are more concerned with Bose and Fermi indistinguishable particles, this misconception regarding the mixing of distinguishable elements has lingered.
 
I'm not aware of the claim that an "extensible entropy" (whatever you mean by that) could not be defined for distinguishable particles. It's of course clear that there must be mixing entropy for distinguishable particles, because it's a real observable physical phenomenon.

That there is no Gibbs paradox in classical statistical physics is a bold claim. You have to assume extensivity of entropy (in the thermodynamic limit) of a classical many-body system to get the correct ##1/N!## factor in the canonical and grand-canonical ensembles and thus get the entropy right. Quantum theory implies this indistinguishability factor from its very foundations.
 
Extensivity does not need to be assumed. Extensivity follows from the correct definition of entropy, where the 1/N! is included not to obtain an extensive entropy but because combinatorial logic demands it. Most people seem to think that this term appears to make classical statistics agree with quantum statistics in a limit. This is not the case, as the articles I mentioned show, and as the dog's-fleas problem reveals. The N! is no paradox; it is logically needed.
 
I guess my point is being lost. I know that quantum mechanics imposes indistinguishable particles. My point is: at the time of Gibbs, one could consider a model of identical particles that are permutable, meaning that exchanging two of them would result in a different state. Gibbs suggested that counting states in this way would not produce the correct form for the entropy. This is not true. The N! correction is imposed by logic. In granular media, for instance, there is the Edwards entropy of jamming; it includes the N! term, but grains are surely distinguishable, and exchanging two grains results in a different state. I could try to explain the logic behind it, but I guess that the references I included do a better job than I would.
 
But within classical mechanics, where is the logic to include this crucial factor ##1/N!##? In classical mechanics any particle is individually distinguishable from any other, because you can (in principle) follow its trajectory. So there's no "logical reason" to implement this factor other than the extensivity argument (which AFAIK Boltzmann did for the first time).

Of course today we don't have a problem with that, because we know that QT is the right theory of matter and there the indistinguishability is pretty "logical", leading to the existence of bosons and fermions (by a topological argument), and then the factor ##1/N!## of course persists in the classical limit (where both statistics get the classical Boltzmann statistics, including this factor).
 
vanhees71 said:
But within classical mechanics, where is the logic to include this crucial factor ##1/N!##? In classical mechanics any particle is individually distinguishable from any other, because you can (in principle) follow its trajectory. [...] in the classical limit (where both statistics get the classical Boltzmann statistics, including this factor).

According to the cited paper, it comes naturally from Boltzmann's definition of entropy, which is different from Gibbs's definition of entropy in terms of volume of phase space.
 
vanhees71 said:
I'm not aware of the claim that an "extensible entropy" (whatever you mean by that) could not be defined for distinguishable particles.

@andresB
Is the general idea of an "extensible" property that it is a numerical quantity P of matter such that P(M1 + M2) = P(M1) + P(M2), where M1+M2 indicates combining masses M1 and M2 physically?
 
You mean extensive! Sure, that's the point. If you assume that the particles are distinguishable (and in my opinion nothing in classical mechanics makes particles indistinguishable) then, by calculating the entropy according to Boltzmann and Planck, you get a non-extensive expression for the entropy of a gas consisting of (chemically) "identical" particles, which leads to Gibbs's paradox. This paradox can only be resolved by assuming the indistinguishability of particles, in the sense that you have to count any configuration which results from a specific configuration by only exchanging particles as one configuration, which leads to the inclusion of the crucial factor ##1/N!## in the canonical partition sum:
$$Z=\frac{1}{N!}\int_{\mathbb{R}^{6N}} \mathrm{d}^{6 N} \Gamma \exp[-\beta H(\Gamma)], \quad \beta=\frac{1}{k_{\text{B}} T}.$$
This was of course known by Gibbs and Boltzmann, but I still think there is no argument within fundamental classical physics to include this factor. So the textbooks are right when they say that this is an early hint that there's something not correct in the classical particle model. Today we know that this is indeed true and that we need quantum theory and the indistinguishability of identical particles as a fundamental property. Anyway, historically quantum theory was discovered due to such subtle inconsistencies occurring in statistical physics (most famously the failure of the application of classical statistics to the electromagnetic radiation field in a cavity, whose solution led Planck to introduce, "in an act of desperation", the light-quantum hypothesis).
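As a quick numerical illustration of what the ##1/N!## does, here is a sketch (Python, with ##k_{\text{B}} = 1## and the thermal wavelength set to 1 — both simplifying assumptions for illustration, not from the thread) comparing the ideal-gas entropy with and without the factor:

```python
import math

# Boltzmann-Planck entropy of a classical ideal gas (k_B = 1, lambda = 1),
# without and with the 1/N! correction to the state counting.

def S_without_factor(N, V):
    # S = N ln V + (3/2) N  -- no 1/N!, not extensive
    return N * math.log(V) + 1.5 * N

def S_with_factor(N, V):
    # subtract ln N! (here via lgamma); this is the Sackur-Tetrode form
    return N * math.log(V) + 1.5 * N - math.lgamma(N + 1)

N, V = 10_000, 10_000.0  # fixed density N/V = 1

# If S is extensive, doubling N and V should exactly double S.
print(S_without_factor(2 * N, 2 * V) - 2 * S_without_factor(N, V))  # = 2N ln 2 > 0
print(S_with_factor(2 * N, 2 * V) - 2 * S_with_factor(N, V))        # ~ 0, only O(ln N) Stirling corrections
```

Without the factor the entropy deficit of two separate half-systems is exactly ##2N\ln 2##, which is the spurious "mixing entropy" of identical gases; with the factor it shrinks to Stirling corrections of order ##\ln N##.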
 
  • #10
autoUFC said:
Jaynes goes as far as to suggest that Gibbs was senile when he wrote about the paradox of mixing.

What Jaynes wrote is:
In reading Gibbs, it is important to distinguish between early and late Gibbs. His Heterogeneous Equilibrium of 1875 is the work of a man at the absolute peak of his intellectual powers; no logical subtlety escapes him and we can find no statement that appears technically incorrect today. In contrast, his Statistical Mechanics of 1902 is the work of an old man in rapidly failing health, with only one more year to live. Inevitably, some arguments are left imperfect and incomplete toward the end of the work.

It's interesting that Jaynes's own book, Probability Theory: The Logic of Science, was left incomplete due to Jaynes's death.
 
  • #11
vanhees71 said:
You mean extensive! Sure, that's the point. If you assume that the particles are distinguishable (and in my opinion nothing in classical mechanics make particles indistinguishable) then by calculating the entropy according to Boltzmann and Planck, you get a non-extensive expression for the entropy of a gas consisting of (chemically) "identical" particles which leads to Gibbs's paradox.

According to Swendsen, a non-extensive entropy does not work even for distinguishable particles

https://aapt.scitation.org/doi/10.1119/1.2174962
https://www.mdpi.com/1099-4300/10/1/15
 
  • #12
These two sentences from vanhees71 are exactly what I meant when I wrote that this paradox is an urban legend. In fact, there is no problem with the entropy in the classical model. The N! term is imposed by logic. Gibbs was right in believing identical particles are impermutable (meaning that exchanging two of them leads to the same state). However, he was right by coincidence. There is no hint of something wrong in the classical particle model. These are the misconceptions that I see as urban legend. Not long ago I was one of the believers in these mystical arguments, as Van Kampen calls them.

vanhees71 said:
"If you assume that the particles are distinguishable (and in my opinion nothing in classical mechanics make particles indistinguishable) then by calculating the entropy according to Boltzmann and Planck, you get a non-extensive expression for the entropy."

"textbooks are right when they say that this is an early hint that there's something not correct in the classical particle model."
 
  • #14
Hi, I'm the author of the paper Eur. J. Phys. 35 (2014) 015023 (also freely available on arXiv). As I explicitly demonstrate in my paper, there is a paradox that manifests as a contradiction of the second law of thermodynamics. This contradiction arises not only for classical distinguishable particles but, as demonstrated in my paper, also within the quantum realm. The reason is that quantum particles may also be distinguishable: as an example, think of a gas of buckyball molecules where each buckyball is made up of two distinct carbon isotopes in such a way that no two buckyballs are identical. Therefore, the claim that the Gibbs paradox exposes an inconsistency of classical statistical mechanics, or that it is resolved by (the indistinguishability of identical particles in) QM, is false.
 
  • #15
The different buckyballs are then, well, different. How does this show anything? Two hydrogen atoms are identical (and untraceable in QM); a deuterium atom and a hydrogen atom are not identical. Am I misunderstanding something here?
 
  • #16
hutchphd said:
The different buckyballs are then, well, different. How does this show anything? Two hydrogen atoms are identical (and untraceable in QM); a deuterium atom and a hydrogen atom are not identical. Am I misunderstanding something here?
You can make a gas out of pairwise different buckyballs and show that this gas (although treated strictly quantum-mechanically) suffers from the Gibbs Paradox just like a gas of classical distinguishable particles. I invite you to read my paper where I do exactly this.
 
  • #17
Pairwise different? Sorry, I do not know the term. Perhaps the learning curve is large for me here.
 
  • #18
hutchphd said:
Pairwise different? Sorry, I do not know the term. Perhaps the learning curve is large for me here.
Pairwise different means that you can choose any two of them (i.e., any pair) and they are always different.
 
  • #19
HPt said:
Hi, I'm the author of the paper Eur. J. Phys. 35 (2014) 015023 (also freely available on arXiv). As I explicitly demonstrate in my paper, there is a paradox that manifests as a contradiction of the second law of thermodynamics. This contradiction arises not only for classical distinguishable particles but, as demonstrated in my paper, also within the quantum realm. The reason is that quantum particles may also be distinguishable: as an example, think of a gas of buckyball molecules where each buckyball is made up of two distinct carbon isotopes in such a way that no two buckyballs are identical. Therefore, the claim that the Gibbs paradox exposes an inconsistency of classical statistical mechanics, or that it is resolved by (the indistinguishability of identical particles in) QM, is false.
I'm sorry to say that I'm confused. If in the two partial volumes the gases are of distinguishable kinds (i.e., gases of different molecules/atoms) then you must get a mixing entropy. So there's nothing paradoxical in this situation, because you must get a mixing entropy and indeed you can measure it.

Where the paradox comes into play is if the gases are of the same kind of molecules/atoms. Then there must not be a mixing entropy, and you get rid of it only by assuming that the molecules/atoms are indistinguishable in the quantum-mechanical sense (even while otherwise using classical counting of "complexions"), i.e., you have to multiply the number of classical microstates by ##1/N!##. Then, using the Boltzmann-Planck formula, you get the correct result that no mixing entropy occurs and also that the entropy of a single gas is extensive (the Sackur-Tetrode formula for the entropy of an ideal gas is the exactly calculable example).
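For reference, the Sackur-Tetrode formula mentioned here can be written in the standard form
$$S = N k_{\text{B}}\left[\ln\left(\frac{V}{N\lambda^3}\right)+\frac{5}{2}\right], \quad \lambda=\frac{h}{\sqrt{2\pi m k_{\text{B}} T}},$$
where ##\lambda## is the thermal de Broglie wavelength. Doubling ##N## and ##V## together leaves ##S/N## unchanged, so this expression is extensive; dropping the ##1/N!## would replace ##V/N## by ##V## and break exactly that property.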
 
  • #20
hutchphd said:
The different Buckyballs are then, well, different. How does this show anything?? Two hydrogen atoms are identical (and untraceable in QM) A deuterium atom and hydrogen atom are not identical. Am I misunderstanding something here?
Of course in this case the molecules/atoms are different, because they have different masses, i.e., they are distinguishable. If you put a gas of ##\mathrm{D}_2## molecules into one half of the box and ##\mathrm{H}_2## molecules at the same temperature and pressure into the other, then remove the diaphragm and wait until the gas is equilibrated, i.e., completely mixed, you gain the mixing entropy. The mixing entropy in that case is
$$\Delta S=k_{\text{B}} N \ln 2.$$

https://en.wikipedia.org/wiki/Entropy_of_mixing

If both halves of the volume contain the same molecules (i.e., only ##\mathrm{D}_2## or only ##\mathrm{H}_2##), there's no mixing entropy.
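To make the bookkeeping behind ##\Delta S=k_{\text{B}} N \ln 2## explicit, here is a minimal sketch (Python, ##k_{\text{B}}=1##, reading ##N## as the total particle number — my reading, not necessarily the poster's): each gas simply expands from its half into the full volume.

```python
import math

# Mixing entropy of two distinguishable ideal gases at equal T and p (k_B = 1):
# each gas expands from its own volume into the total volume,
# Delta S = N1 ln(V/V1) + N2 ln(V/V2).
def mixing_entropy(N1, V1, N2, V2):
    V = V1 + V2
    return N1 * math.log(V / V1) + N2 * math.log(V / V2)

# Equal halves with N/2 particles of each kind gives Delta S = N ln 2.
N = 1000
print(mixing_entropy(N / 2, 1.0, N / 2, 1.0) / (N * math.log(2)))  # ≈ 1.0
```

Note that the result depends only on each gas expanding into twice its volume, not on how different the two species are — which is why the paradox is sharpest in the limit of identical gases.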
 
  • #21
The problem is with semantics. Let us define as impermutable two particles that are identical and for which exchanging the two does not change the state. Quantum particles are impermutable. At the time of Gibbs, one could think of permutable identical particles, meaning that, although identical, exchanging two particles counts as two states. That would apply, for instance, to the counting of states for the jamming entropy of macroscopic grains, or the buckyballs of Hjalmar. Strictly, these examples involve nearly identical particles, whereas the classical particle model considers particles that are perfectly identical but permutable.

What is surprising is that textbooks suggest, erroneously, that the classical model predicts an increase in entropy due to the mixing of identical permutable particles, when in fact there is none.
 
  • #22
Of course quantum particles are impermutable but classical particles are not. That's the problem with classical statistical mechanics concerning the Gibbs paradox. It's resolved through quantum mechanics and you can "repair" the results from classical statistics by simply taking over the impermutability from quantum mechanics. I don't think that you can argue purely within classical mechanics that indistinguishable particles are impermutable. For that you need to invoke the arguments from quantum theory.

If, however, the particles are distinguishable, even if only by a "minimal difference" as in the examples with the buckyballs or gases distinguished only by having different isotopes of their atoms within their molecules, then you have the mixing entropy.
 
  • #23
vanhees71 said:
If in the two partial volumes the gases are of distinguishable kinds (i.e., gases of different molecules/atoms) then you must get a mixing entropy.

No, that's only the case if you know which particle is in which partial volume. Consider the following simple thought experiment: Take a volume, then put 1,000,000 different buckyballs in it, and now divide this volume into two equal halves. After that you have two partial volumes, each containing a gas of approximately 500,000 buckyballs (but you don't know which 500,000 of the initial 1,000,000 buckyballs are in which partial volume). Now, since dividing the volume didn't decrease the entropy, removing this division obviously won't increase it. After all, you have exactly the same situation as before you divided the volume in the first place.
As an aside: If you knew which 500,000 buckyballs are on the left partial volume and which are on the right (say, you have measured each particle of each partial volume), then removing the partition between the two partial volumes would increase the entropy (because you would have lost the information about which particle is located in which partial volume).
 
  • #24
I don't know which thread participants have read Jaynes's paper http://bayes.wustl.edu/etj/articles/gibbs.paradox.pdf , but the characterization of that paper as "Gibbs made a mistake" is incorrect. The paper says that Gibbs explained the situation correctly, but in an obscure way.

The paper also says that classical mechanics and QM face similar difficulties in defining an extensive entropy, so it favors @HPt 's view.
 
  • #25
HPt said:
No, that's only the case if you know which particle is in which partial volume. Consider the following simple thought experiment: Take a volume, then put 1,000,000 different buckyballs in it, and now divide this volume into two equal halves. After that you have two partial volumes, each containing a gas of approximately 500,000 buckyballs (but you don't know which 500,000 of the initial 1,000,000 buckyballs are in which partial volume). Now, since dividing the volume didn't decrease the entropy, removing this division obviously won't increase it. After all, you have exactly the same situation as before you divided the volume in the first place.
As an aside: If you knew which 500,000 buckyballs are on the left partial volume and which are on the right (say, you have measured each particle of each partial volume), then removing the partition between the two partial volumes would increase the entropy (because you would have lost the information about which particle is located in which partial volume).
In the Gibbs paradox setup with different kinds of gases you have a volume divided by a diaphragm, and you put one kind of gas into each of the compartments at the same temperature and pressure. Then you take out the diaphragm and wait until both gases are completely mixed. You get an increase of entropy, because you have an irreversible change of state.

A paradox only occurs if you do the same with identical gases at the beginning, because then there's no change at all once one takes the indistinguishability of the particles into account; this indistinguishability is generically quantum and contradicts classical physics.
 
  • #26
Stephen Tashi said:
The paper also says that classical mechanics and QM face similar difficulties in defining an extensive entropy, so it favors @HPt 's view.
Very nice reference. I am particularly pleased that the questions I had in my head were in fact the appropriate questions (I confess an irrational fear of thermodynamics from my undergraduate days). It seems to me that the remaining issues all have to do with defining an absolute entropy: is it ever necessary to look at anything other than changes in S? In what context is unknown information relevant if it remains unknown (but knowable in principle)? It seems to me never.
There are echoes of "hidden variables" here, which also worry me...
 
  • #27
vanhees71 said:
A paradox only occurs if you do the same with identical gases at the beginning, because then there's no change at all once one takes the indistinguishability of the particles into account; this indistinguishability is generically quantum and contradicts classical physics.
No paradox exists. As you say, nothing changes. That is exactly what is predicted by the statistical mechanics of classical particles, which was the only particle model at the time of Gibbs. Mixing of identical permutable particles does not increase entropy.
 
  • #28
It's predicted only if I take the indistinguishability of the classical particles from quantum theory into consideration. Then and only then indeed you get the correct result that the mixing of identical (I prefer to call them indistinguishable) particles does not increase entropy, while the mixing of distinguishable particles does indeed increase the entropy.

I don't know why one would insist so much on deriving statistical physics (i.e., mostly the Boltzmann equation, from which everything else follows, including equilibrium thermodynamics) purely from classical mechanics. We all know that the only appropriate theory of matter is quantum theory anyway. This tells you that you have to count identical particles as indistinguishable when counting the number of microstates consistent with a given macrostate in applying the Boltzmann-Planck equation. For details see Sects. 1.5 (for classical statistics) and 1.8 (for quantum statistics) in

https://itp.uni-frankfurt.de/~hees/publ/kolkata.pdf
 
  • #29
vanhees71 said:
It's predicted only if I take the indistinguishability of the classical particles from quantum theory into consideration. Then and only then indeed you get the correct result that the mixing of identical (I prefer to call them indistinguishable) particles does not increase entropy, while the mixing of distinguishable particles does indeed increase the entropy.

But that doesn't seem to address the problems with the classical statistics of distinguishable particles:
https://www.researchgate.net/publication/237695485_Statistical_mechanics_of_colloids_and_Boltzmann's_definition_of_the_entropy
 
  • #31
vanhees71 said:
I don't know why one would insist so much on deriving statistical physics purely from classical mechanics.

As mentioned: granular media, colloidal systems, dog's fleas, fullerene balls are systems of permutable classical particles. They surely are not identical, but they are so similar that one should consider them identical. Not identical in the quantum sense, where exchanging two of them does not result in a different state, but in the sense that one cannot tell two of them apart. For all these systems it is necessary, and possible, to define a consistent entropy.
Also, there is the mathematical combinatorial problem of sets of identical elements. For instance, understanding the logic that demands the introduction of the N! correction is needed to deduce the Poisson distribution from the maximum entropy principle.
https://math.stackexchange.com/questions/2241655/maximum-entropy-principle-for-poisson-distribution
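To illustrate that Poisson connection: maximizing the "corrected" entropy ##\sum_n p_n \ln\frac{1}{p_n\,n!}## at fixed mean requires, by the Lagrange stationarity condition, that ##\ln\frac{1}{p_n n!}## be affine in ##n## — and the Poisson distribution satisfies exactly that. A minimal numerical check (Python; the value of ##\lambda## is an arbitrary choice for illustration):

```python
import math

# For the entropy sum_n p_n * ln(1/(p_n * n!)) with normalization and mean
# fixed, stationarity requires ln(1/(p_n * n!)) to be affine in n.  For the
# Poisson distribution p_n = exp(-lam) * lam**n / n! this value equals
# lam - n*ln(lam), which is indeed affine in n.
lam = 3.0
for n in range(8):
    p_n = math.exp(-lam) * lam**n / math.factorial(n)
    value = -math.log(p_n * math.factorial(n))
    assert abs(value - (lam - n * math.log(lam))) < 1e-9
print("Poisson satisfies the affine stationarity condition")
```

Without the ##n!## inside the logarithm the affine-in-##n## condition would instead pick out a geometric distribution, which is the combinatorial point being made here.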
 
  • #32
vanhees71 said:
[No increase in entropy when mixing classical identical permutable particles is] predicted only if I take the indistinguishability of the classical particles from quantum theory into consideration.

Not true. That is an urban myth.
 
  • #33
autoUFC said:
As mentioned: granular media, colloidal systems, dog's fleas, fullerene balls are systems of permutable classical particles. They surely are not identical, but they are so similar that one should consider them identical. Not identical in the quantum sense, where exchanging two of them does not result in a different state, but in the sense that one cannot tell two of them apart. For all these systems it is necessary, and possible, to define a consistent entropy.
Also, there is the mathematical combinatorial problem of sets of identical elements. For instance, understanding the logic that demands the introduction of the N! correction is needed to deduce the Poisson distribution from the maximum entropy principle.
https://math.stackexchange.com/questions/2241655/maximum-entropy-principle-for-poisson-distribution
autoUFC said:
Not true. That is an urban myth.
It's only a claim that this is an urban myth, but it's not convincing to just claim this. I think you need QT to argue for this factor ##1/N!##.

Also, there should be mixing entropy when the constituents of the systems in the two compartments of the Gibbs-paradox setup are distinguishable, even if only by a seemingly "small" difference, e.g., if you have the same gas but with atoms of different isotopes. I don't know whether there are experiments which measure such mixing entropies, though. In any case I'd be interested in corresponding papers.

I've no clue what the Poisson distribution has to do with all this.
 
  • #34
vanhees71 said:
I don't know whether there are experiments which measure such mixing entropies, though. In any case I'd be interested in corresponding papers.

Wouldn't this constitute a "test" for the existence of hidden variables (quantum or classical)?
 
  • #35
vanhees71 said:
I think you need QT to argue for this factor 1/N!.

No, QT is not needed. In section 4 of my paper I show that there is no entropy increase when mixing distinguishable identical classical particles.
 
  • #36
Can you give the gist of the argument here?
 
  • #37
We discuss in circles. Just once more: For distinguishable particles there IS mixing entropy!
 
  • #38
hutchphd said:
Can you give the gist of the argument here?
Suppose two dogs, with N fleas shared between them. What is the most probable partition of the fleas among the two dogs?
Assuming that fleas are permutable elements (meaning that exchanging two results in a different state), there are ##N!/(n_1!\,n_2!)## ways to have ##n_1## fleas on dog 1 and ##n_2=N-n_1## on dog 2. The maximum occurs when ##\mathrm{d}\ln(1/n_1!)/\mathrm{d}n_1 = \mathrm{d}\ln(1/n_2!)/\mathrm{d}n_2##, which leads to ##n_1=n_2##.
In the case of classical ideal gases, besides the number of particles there are also energy and volume. All the terms of the entropy come out as usual; the ##1/n!## comes from the enumeration of the permutations of particles among the two systems, just as in the dog's-fleas problem.
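The counting in the flea argument is easy to check by brute force; a brief sketch (Python) enumerating the ##N!/(n_1!\,n_2!)## arrangements:

```python
from math import comb

# With N permutable fleas, the number of ways to have n1 fleas on dog 1
# (and n2 = N - n1 on dog 2) is the binomial coefficient N!/(n1! n2!).
N = 100
ways = {n1: comb(N, n1) for n1 in range(N + 1)}
most_probable = max(ways, key=ways.get)
print(most_probable)  # 50, i.e. n1 = n2 = N/2
```

The maximum of the binomial coefficient at ##n_1=n_2=N/2## is the discrete version of the stationarity condition stated above.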
 
  • #39
autoUFC said:
I recently discovered that there is no real paradox in the question of the mixing of classical distinguishable particles. I was shocked. Most books and all my professors suggest that an extensible entropy could not be defined for distinguishable particles.

I'm a bit late for this discussion and apologize if what I bring up has already been mentioned in some of the answers.

As I see it there are two paradoxes:
One is that the statistical entropy for an ideal gas of distinguishable particles is non-extensive, whereas thermodynamic entropy is extensive. To resolve this one has to assume that gas molecules are actually indistinguishable, and that leads to the Sackur-Tetrode expression for entropy, which is extensive.

The second paradox is the mixing paradox, which states that if you mix two different gases the entropy increases, but if you mix two gases of identical, but distinguishable atoms the entropy cannot increase since macroscopically nothing changes due to mixing identical atoms.

In the textbook by Blundell and Blundell the mixing paradox is demonstrated using the (extensive) Sackur-Tetrode expression. To me that indicates that the mixing paradox doesn't automatically go away just by making entropy extensive. You have to require that atoms are indistinguishable explicitly once more to resolve the mixing paradox.

Most comments welcome.
 
  • #40
vanhees71 said:
We discuss in circles. Just once more: For distinguishable particles there IS mixing entropy!
I've also wondered if there are experimental results that show that if you mix for example He with Ne the total entropy increases, whereas if you mix Ne with Ne it doesn't.
Is there anything like that?
 
  • #41
vanhees71 said:
You mean extensive! Sure, that's the point. If you assume that the particles are distinguishable (and in my opinion nothing in classical mechanics makes particles indistinguishable) then, by calculating the entropy according to Boltzmann and Planck, you get a non-extensive expression for the entropy of a gas consisting of (chemically) "identical" particles, which leads to Gibbs's paradox. This paradox can only be resolved by assuming the indistinguishability of particles, in the sense that you have to count any configuration which results from a specific configuration by only exchanging particles as one configuration, which leads to the inclusion of the crucial factor ##1/N!## in the canonical partition sum:
$$Z=\frac{1}{N!}\int_{\mathbb{R}^{6N}} \mathrm{d}^{6 N} \Gamma \exp[-\beta H(\Gamma)], \quad \beta=\frac{1}{k_{\text{B}} T}.$$

This factor 1/N! is only correct for low occupancy, I believe. One has to assume that there is hardly ever more than one particle in the same state or phase space cell. Do you agree?
What happens if occupancy is not low?
 
  • #42
Philip Koeck said:
The second paradox is the mixing paradox, which states that if you mix two different gases the entropy increases, but if you mix two gases of identical, but distinguishable atoms the entropy cannot increase since macroscopically nothing changes due to mixing identical atoms.

If the identical particles are distinguishable there is no paradox. If you do the Gibbs-paradox setup with ortho- and para-helium (at sufficiently low temperature, so that the transition probability between the two states is small) then you should get a mixing entropy, because when taking out the dividing wall the atoms in the two states diffuse irreversibly into each other, and thus there's entropy.

The paradox occurs, whenever you have indistinguishable particles on both sides and do classical statistics with the non-extensive entropy formula. The correct formula is, of course, the Sackur-Tetrode formula, which you indeed get by dividing by ##N!##. It's the classical limit of the correct quantum counting of states in the approximation that Pauli blocking or Bose enhancement is negligible due to low occupation numbers. This correct classical limit takes nevertheless the indistinguishability into account and avoids the Gibbs paradox.
 
  • #43
vanhees71 said:
If the identical particles are distinguishable there is no paradox. If you do the Gibbs-paradox setup with ortho- and para-helium (at sufficiently low temperature, so that the transition probability between the two states is small) then you should get a mixing entropy, because when taking out the dividing wall the atoms in the two states diffuse irreversibly into each other, and thus there's entropy.

How does that apply to the mixing of distinguishable globules of butterfat in milk (or any other colloid), where by removing a barrier between two containers (mixing) you can increase the entropy, but by restoring the barrier you can decrease the entropy in a reversible way?
 
  • #44
vanhees71 said:
If the identical particles are distinguishable there is no paradox. If you do the Gibbs-paradox setup with ortho- and para-helium (at sufficiently low temperature, so that the transition probability between the two states is small) then you should get a mixing entropy, because when taking out the dividing wall the atoms in the two states diffuse irreversibly into each other, and thus there's entropy.

Now I'm confused about your terminology.
Are you saying that identical particles can be distinguishable?
Would you say that ortho- and para-helium are identical?
 
  • #45
If you have distinct particles there is mixing entropy, but that implies that diffusion of the distinct particles is irreversible, and thus you cannot restore the lower-entropy state by simply restoring the barrier: you need to expend work to sort the mixed particles back into the two compartments. In other words, to lower the entropy you must do work, and this is precisely what the various phenomenological formulations of the 2nd law say.
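To attach a number to "you must do work": for two distinct ideal gases with ##N## particles on each side, at equal volume and temperature, the mixing entropy is ##\Delta S = 2 N k \ln 2## (each gas doubles its volume), and the minimum reversible isothermal work to re-sort them is ##W_{\min} = T\,\Delta S##. A minimal Python sketch (function names are mine):

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def mixing_entropy(N_each):
    # Two distinct ideal gases, N_each particles per side, equal V and T:
    # each gas doubles its volume on mixing, so Delta S = 2 N k ln 2
    return 2 * N_each * k_B * math.log(2)

def min_unmix_work(N_each, T):
    # Minimum (reversible, isothermal) work to re-sort: W = T * Delta S
    return T * mixing_entropy(N_each)

# half a mole per side at 300 K: roughly 1.7 kJ of sorting work
print(min_unmix_work(6.022e23 / 2, 300.0))
```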
 
  • #46
Philip Koeck said:
As I see it there are two paradoxes:
One is that statistical entropy for an ideal gas of distinguishable particles is non-extensive,
This is false.

Please note that I use the term permutable to differentiate from the term distinguishable. It may be impossible to distinguish between two identical particles, but in the classical model of identical particles one should count as distinct two states that are the same except that two particles have exchanged their positions and velocities. That is, when counting the number of states of classical permutable particles, one needs to account for the fact that permutations lead to different states.
Not relevant right now, but I would like to add that identical quantum particles are impermutable.

Let us now tackle the problem of equilibrium in the exchange of particles.
Assume a system composed of two chambers 1 and 2. Assume that they can exchange particles, but the total is constant: ##n_1+n_2 = N##.
Given that ##n_1## permutable particles are in chamber 1, the entropy of the whole system is ##k## times the logarithm of the number of accessible states of the whole system,
##\Omega(n_1)=\Omega_1(n_1)\,\Omega_2(n_2)\,\frac{N!}{n_1!\,n_2!},##
where ##\Omega(n_1)## is the state count for the whole system given that ##n_1## particles are in chamber 1, ##\Omega_1(n_1)## is the count for chamber 1, and ##\Omega_2(n_2)## the count for chamber 2. The last factor is the number of ways to choose which of the permutable particles are in chamber 1.

The key is the factor ##N!/(n_1!\,n_2!)##. For equilibrium in temperature and pressure this factor is not needed, and one rightly concludes that
##\Omega(n_1)=\Omega_1(n_1)\,\Omega_2(n_2),##
since in the cases of thermal and mechanical equilibrium there is no exchange of particles, and it is determined which particles are in which chamber.
However, when considering exchange of classical permutable particles, that is no longer the case, and the number of possible permutations needs to be included when counting the states of the whole system. I hope it is now clear that, when considering exchange of particles between two subsystems, entropy should logically be defined as ##S=k\ln(\Omega(n)/n!)##. Extensivity follows.
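This counting can be checked numerically. The following Python sketch (the function names `ln_omega_config` and `S_over_k` are mine, and only the configurational part ##V^n## of the one-chamber state count is kept, since the momentum factors are common to both sides at fixed temperature) verifies that with the permutation factor included and the ##N!## divided out, removing the wall between two chambers of identical permutable particles at equal density produces no mixing entropy, up to sub-extensive ##\ln N## corrections:

```python
import math

def ln_omega_config(n, V):
    # configurational part of the state count for n permutable
    # particles in volume V: ln(V**n)
    return n * math.log(V)

def S_over_k(n1, V1, n2, V2):
    """S/k = ln[ Omega1 * Omega2 * N!/(n1! n2!) ] - ln N!
           = ln Omega1 + ln Omega2 - ln n1! - ln n2!"""
    N = n1 + n2
    ln_binom = math.lgamma(N + 1) - math.lgamma(n1 + 1) - math.lgamma(n2 + 1)
    return (ln_omega_config(n1, V1) + ln_omega_config(n2, V2)
            + ln_binom - math.lgamma(N + 1))

# identical permutable particles at equal density on both sides
S_wall = S_over_k(1000, 1.0, 1000, 1.0)     # wall in place
S_no_wall = S_over_k(2000, 2.0, 0, 1.0)     # wall removed: all 2000 in V=2
# the two agree up to sub-extensive (ln N) terms: no mixing entropy
print(S_wall, S_no_wall)
```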

Philip Koeck said:
To resolve this one has to assume that gas-molecules are actually indistinguishable and that leads to the Sackur-Tetrode expression for entropy, which is extensive.
It is true that assuming molecules are impermutable leads to an extensive entropy, impermutable meaning that exchanging two of them leads to the same state. Usually, in quantum theory, the terms identical or indistinguishable are used; I am using impermutable to emphasize that permutable particles may still be identical.
My point is that permutable particles also have an extensive entropy: in the case of the classical ideal gas, precisely the Sackur-Tetrode expression. You can include the ##N!## term either by assuming that the particles are impermutable or by including it to account for the permutations of classical particles between the two subsystems; both lead to extensivity.
Philip Koeck said:
The second paradox is the mixing paradox, which states that if you mix two different gases the entropy increases, but if you mix two gases of identical, but distinguishable atoms the entropy cannot increase since macroscopically nothing changes due to mixing identical atoms.
There is no paradox. Mixing identical permutable (classical) particles does not increase entropy. The inclusion of ##N!## is necessary for the correct counting of accessible states. Extensivity follows.

Philip Koeck said:
In the textbook by Blundell and Blundell the mixing paradox is demonstrated using the (extensive) Sackur-Tetrode expression. To me that indicates that the mixing paradox doesn't automatically go away just by making entropy extensive. You have to require that atoms are indistinguishable explicitly once more to resolve the mixing paradox.

Most comments welcome.

Blundell, like most books, is very bad on this point. In his paper, Van Kampen includes a short list of doubtful quotes from textbooks and then writes:

"Actually the problem of the N! was completely cleared up by Ehrenfest and Trkal
(Ann. Physik 1921), but their argument is ignored in most of the literature. It may
therefore be of some use to take up the matter again starting from scratch, as a service to future textbook writers. "

It seems to me that his efforts were for naught, as textbooks of the 21st century are still misleading.
 
  • #47
autoUFC said:
..., entropy should logically be defined as ##S=k\ln(\Omega(n)/n!)##. Extensivity follows.

Interesting. I used something like that in a text I put on ResearchGate. Not sure whether it makes sense, though.
Indeed, I get the Sackur-Tetrode expression even for distinguishable particles like that.
Here's the link: https://www.researchgate.net/publication/330675047_An_introduction_to_the_statistical_physics_of_distinguishable_and_indistinguishable_particles_in_an_ideal_gas
 
  • #48
Philip Koeck said:
Now I'm confused about your terminology.
Are you saying that identical particles can be distinguishable?
Would you say that ortho- and para-helium are identical?
I don't like to use the word "identical" in this context. I just tried to use the language implied by your own argument: "mix two gases of identical, but distinguishable atoms".

I thought you meant two gases consisting of the same atoms, which are however distinguishable. This can only be the case if you have the same atoms (in my example He) in different states (in my example ortho- and para-He). These atoms are indeed distinguishable, because they are in different states (distinguished by the total spin of the two electrons being either 0 or 1).

Indeed, transitions between the two states of He are strongly suppressed due to the different symmetries of the spatial parts of the wave functions (the total wave function of the two electrons must of course always be antisymmetric, because electrons are fermions). That's why, when helium was discovered in the spectral decomposition of sunlight, it was at first believed that there were two different new elements, while in fact these were only the two different states of helium (spin singlet = para-helium, spin triplet = ortho-helium).

So for the "Gibbs-paradox experiment" you have to treat the two states of He as distinguishable, and thus you'd expect mixing entropy, i.e., an increase in entropy when letting the two previously separated gases diffuse into each other.
 
  • #49
Philip Koeck said:
Interesting. I used something like that in a text I put on ResearchGate. Not sure whether it makes sense, though.
Indeed, I get the Sackur-Tetrode expression even for distinguishable particles like that.
Here's the link: https://www.researchgate.net/publication/330675047_An_introduction_to_the_statistical_physics_of_distinguishable_and_indistinguishable_particles_in_an_ideal_gas
But there you get the Sackur-Tetrode formula only because you don't write ##S=\ln W## but instead put ##S=\ln(W/N!)##. So you get a different distribution function but the same entropy for distinguishable as for indistinguishable particles, but only because you put in this ##1/N!## by hand again. If you had put it into ##W## from the very beginning, there would be no difference whatsoever in the treatment of distinguishable and indistinguishable particles. This seems a bit paradoxical to me.
 
  • #50
vanhees71 said:
But there you get the Sackur-Tetrode formula only because you don't write ##S=\ln W## but instead put ##S=\ln(W/N!)##. So you get a different distribution function but the same entropy for distinguishable as for indistinguishable particles, but only because you put in this ##1/N!## by hand again. If you had put it into ##W## from the very beginning, there would be no difference whatsoever in the treatment of distinguishable and indistinguishable particles. This seems a bit paradoxical to me.
My thinking is this: W is simply the number of different ways to arrive at a certain distribution of particles among the available energy levels.
In equilibrium the system will be close to the distribution with the highest W, which is also that with the highest entropy.
The exact relationship between S and W is not quite clear, however.
Swendsen, for example, states that there is an undefined additive function of ##N## (which I chose to be ##-k\ln N!##).
I assume that in the expression ##S = k\ln\Omega## the quantity ##\Omega## stands for a probability (actually, a quantity proportional to a probability) rather than a number. I believe Boltzmann might have reasoned like that too, since he used W, as in Wahrscheinlichkeit (I think Swendsen wrote something like that as well).
That's why I introduce the correction ##1/N!## for distinguishable particles.
I'm not saying this is the right way to think about it. I just tried to make sense of things for myself.

W is given by pure combinatorics so I can't really redefine W as you suggest.
The only place where I have some freedom in this derivation is where I connect my W to entropy, and, yes, I do that differently for distinguishable and indistinguishable particles.
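For what it's worth, the purely combinatorial ##W## for permutable particles, ##W = \frac{N!}{\prod_i n_i!}\prod_i g_i^{n_i}##, and the "corrected" count ##W/N! = \prod_i g_i^{n_i}/n_i!## differ only by the factor ##N!##, which is constant at fixed ##N##, so maximizing either over the occupation numbers picks out the same distribution. A small Python sketch (function names are mine) checks the combinatorial formula against brute-force enumeration for a toy case:

```python
import math
from itertools import product

def W_boltzmann(ns, gs):
    """Permutable-particle counting for occupations ns over levels
    with degeneracies gs: W = N!/(prod n_i!) * prod g_i**n_i."""
    N = sum(ns)
    W = math.factorial(N)
    for n, g in zip(ns, gs):
        W = W * g**n // math.factorial(n)  # exact: multinomials are integers
    return W

def W_brute_force(ns, gs):
    # enumerate all assignments of N labeled particles to
    # (level, sublevel) states and count those matching ns
    N = sum(ns)
    states = [(i, j) for i, g in enumerate(gs) for j in range(g)]
    count = 0
    for assign in product(states, repeat=N):
        occ = [0] * len(gs)
        for (i, _) in assign:
            occ[i] += 1
        if occ == list(ns):
            count += 1
    return count

ns, gs = (2, 1), (2, 3)   # toy case: 3 particles, two levels
print(W_boltzmann(ns, gs), W_brute_force(ns, gs))  # both give 36
# dividing by N! rescales W by a constant, leaving the most
# probable occupation numbers {n_i} unchanged
```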
 