Gibbs paradox: an urban legend in statistical physics

Summary:
The discussion centers on the assertion that the Gibbs paradox in classical statistical physics is a misconception, with claims that no real paradox exists regarding the mixing of distinguishable particles. Participants reference key works by Van Kampen and Jaynes, arguing that Gibbs's original conclusions about indistinguishability were flawed and that the N! term in entropy calculations is logically necessary rather than paradoxical. The conversation highlights the distinction between classical and quantum mechanics, emphasizing that classical mechanics treats particles as distinguishable, which should yield measurable mixing entropy. Some contributors challenge the notion that the paradox reveals inconsistencies in classical models, suggesting that it is an urban legend rather than a fundamental issue. The debate underscores the ongoing confusion and differing interpretations surrounding the implications of the Gibbs paradox in both classical and quantum contexts.
  • #61
vanhees71 said:
From a didactical point of view I find the truly historical approach to teaching the necessity of quantum theory is the approach via thermodynamics. What were considered a "few clouds" in the otherwise "complete" picture of physics in the middle of the 19th century all lay within thermodynamics (e.g., the specific heat of solids at low temperature, the black-body radiation spectrum, the theoretical foundation of Nernst's 3rd Law), and all were resolved by quantum theory. It is not by chance that quantum theory was discovered by Planck while deriving the black-body spectrum by statistical means, and it was Einstein's real argument for the introduction of "light quanta" and "wave-particle duality", which were important steps towards the development of the full modern quantum theory.
One could consider hypothetical quantum particles that are permutable. What I mean is particles that must be in discrete quantum states with quantized energy, but for which permutations lead to different states of the multi-particle system. Reif, section 9.4, refers to the statistics of permutable quantum particles as Maxwell-Boltzmann statistics.

Einstein, in one of the papers introducing BE statistics, writes that he has shown that permutable photons would obey Wien's law. He notes that considering photons as impermutable (normal indistinguishable quantum particles) he derives Planck's law.
I never saw this derivation by Einstein, but I believe that, to derive Wien's law, he assumes that there is just one way for n permutable photons to occupy the same quantum state. However, since these are permutable photons, one could as well consider that there are n! ways for n permutable photons to be in the same state. In that case one finds that permutable photons would obey Planck's law, just as normal photons do. In similar fashion, one could get these hypothetical permutable quantum particles to follow Bose-Einstein, or Fermi-Dirac (by imposing the Pauli exclusion principle).

I am not saying that identical quantum particles are permutable. I am just saying that the fact that they obey FD and BE statistics does not prove that they are impermutable.
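
For concreteness, here is a minimal single-mode numerical sketch of that counting argument (my own framing, not Einstein's actual derivation; the values of ##x=\beta\epsilon## and the truncation are illustrative assumptions). Counting only one internal arrangement for the ##n## quanta in a mode amounts to weighting occupation ##n## by ##e^{-nx}/n!##, which gives the Wien mean occupation ##e^{-x}##; granting ##n!## internal arrangements cancels that factor and reproduces the Bose-Einstein sum, i.e. Planck's ##1/(e^x-1)##.

```python
# Sketch (not Einstein's derivation): mean occupation of a single mode of energy eps,
# with occupation n weighted by g(n) * exp(-n * beta * eps).
#   g(n) = 1/n!  -> quanta counted as statistically independent    -> Wien,   <n> = e^{-x}
#   g(n) = 1     -> extra n! internal arrangements cancel the 1/n! -> Planck, <n> = 1/(e^x - 1)
from math import factorial, exp, expm1

def mean_occupation(x, g, nmax=60):
    """x = beta*eps; g(n) is the counting weight; nmax truncates the sum over n."""
    weights = [g(n) * exp(-n * x) for n in range(nmax)]
    return sum(n * w for n, w in enumerate(weights)) / sum(weights)

for x in (0.5, 1.0, 2.0):
    wien_like = mean_occupation(x, lambda n: 1.0 / factorial(n))
    planck_like = mean_occupation(x, lambda n: 1.0)
    print(f"x = {x}: independent counting <n> = {wien_like:.6f} (Wien e^-x = {exp(-x):.6f}), "
          f"flat counting <n> = {planck_like:.6f} (Planck 1/(e^x-1) = {1.0/expm1(x):.6f})")
```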
 
  • #62
autoUFC said:
Einstein, in one of the papers
Please.
 
  • #63
hutchphd said:
Please.
?
 
  • #64
autoUFC said:
Einstein, in one of the papers introducing BE statistics, writes that he has shown that permutable photons would obey Wien's law. He notes that considering photons as impermutable (normal indistinguishable quantum particles) he derives Planck's law.

That is the quote. I guess "mutually statistically independent entities" are what I call identical permutable particles. I see no other way to get to Wien's law.

"An aspect of Bose’s theory of radiation and of my analogous theory of the ideal gases which has been criticized by Mr. Ehrenfest and other colleagues is that in these theories the quanta or molecules are not treated as mutually statistically independent entities; this matter not being pointed out explicitly in our treatments. This is absolutely correct. If the quanta are treated as mutually statistically independent in their localization, one arrives at Wien’s displacement law; if
one treats the gas molecules in an analogous manner, one arrives at the classical equation of
state of the ideal gases, even when proceeding in all other respects exactly as Bose and I have done."

From: A. Einstein, "Quantum theory of the monoatomic ideal gas, second treatise" (1925).
 
  • #65
autoUFC said:
One could consider hypothetical quantum particles that are permutable. What I mean is particles that must be in discrete quantum states with quantized energy, but for which permutations lead to different states of the multi-particle system. Reif, section 9.4, refers to the statistics of permutable quantum particles as Maxwell-Boltzmann statistics.
Do you know about para-Fermi and para-Bose statistics? Both can be formulated consistently in QM and Fermi, Bose and Maxwell-Boltzmann statistics are special cases.
 
  • #66
Stephen Tashi said:
The link to the Jaynes paper given by the Wikipedia article is: http://bayes.wustl.edu/etj/articles/gibbs.paradox.pdf and, at the current time, the link works.

I read this paper long ago and just re-read it. I think it is quite on point, and I am also convinced that my DNA example is as close as possible to the example Jaynes gives with two gases which are distinguished only by their solubility in "Whifnium 1" and "Whifnium 2", yet to be discovered. The two DNA samples will be similar in all macroscopic respects, e.g. their solubility, average molar weight, etc. On a macroscopic level, they differ in their affinity to their relative carriers, which take the place of Whifnium. Whether or not we have the carriers available will change the definition of the macro state and our ability to exploit the difference in a thermodynamic cycle.
 
  • #67
DrDu said:
Do you know about para-Fermi and para-Bose statistics? Both can be formulated consistently in QM and Fermi, Bose and Maxwell-Boltzmann statistics are special cases.

I did not know about it. I read the Wikipedia article and one of its references, by Cattani and Bassalo, https://arxiv.org/abs/0903.4773. In that article they obtain Maxwell-Boltzmann as a special case of parastatistics when ##E-\mu \gg kT## (##E## energy, ##\mu## chemical potential, ##k## Boltzmann's constant, ##T## temperature).
I see this as disingenuous: in this limit both Fermi-Dirac and Bose-Einstein reduce to Maxwell-Boltzmann, and we do not say that MB is a special case of FD or BE.

Therefore, I do not agree that Maxwell-Boltzmann is a special case of parastatistics, since parastatistics never agrees with Maxwell-Boltzmann in the low-temperature regime.
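
For reference, that limit is easy to check numerically. A small sketch using only the standard single-state mean occupations (the values of ##x=\beta(E-\mu)## are arbitrary illustrative choices):

```python
# Mean occupation of a single-particle state with x = beta*(E - mu).
# For exp(x) >> 1, both Fermi-Dirac and Bose-Einstein approach Maxwell-Boltzmann;
# that shared dilute limit is not the same as MB being a special case of either.
from math import exp

for x in (1.0, 3.0, 6.0, 10.0):
    n_fd = 1.0 / (exp(x) + 1.0)   # Fermi-Dirac
    n_be = 1.0 / (exp(x) - 1.0)   # Bose-Einstein
    n_mb = exp(-x)                # Maxwell-Boltzmann
    print(f"x = {x:4.1f}:  FD = {n_fd:.6f}   BE = {n_be:.6f}   MB = {n_mb:.6f}")
```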
 
  • #68
No, parastatistics depends on an integer parameter p: if p = 1, the usual Bose or Fermi statistics arises; in the limit p → ∞, Maxwell-Boltzmann arises, independent of temperature.
 
  • #69
DrDu said:
No, parastatistics depends on an integer parameter p: if p = 1, the usual Bose or Fermi statistics arises; in the limit p → ∞, Maxwell-Boltzmann arises, independent of temperature.

I read that in Wikipedia. However, this claim is not justified there. In the work of Cattani and Bassalo, they recover MB in the high-temperature limit of Gentile statistics. Are you aware of any work that demonstrates that MB is the p → ∞ limit of parastatistics?
 
  • #70
R. Haag's Local Quantum Physics contains this statement. I suppose you can also find it in the article by Green cited by Cattani and Bassalo. I would not trust too much a preprint which is not even consistently written in one language.
 
  • #71
autoUFC said:
I am not sure what you are saying that is not justified within a strict classical theory.

Is the idea that classical particles may be indistinguishable (or impermutable as I prefer)?
If so, I agree, indistinguishable particles (in the quantum sense) is not consistent with classical mechanics.

If you intend to say that the inclusion of the ##1/N!## is not justified in classical mechanics, then you are wrong. This term is demanded by the definition of entropy as ##S=k\ln W##, with ##W## being the number of accessible states for a system with two partitions that can exchange identical classical particles.
Classical particles are distinguishable, because you can track each individual particle from its initial position in phase space.

So the inclusion of ##1/N!## must be justified from another model for matter, and of course since 1926 we know it's quantum mechanics and the indistinguishability of identical particles. The argument with the phase-space trajectories is obsolete because of the uncertainty relation, i.e., within a system of many identical particles you can't follow any individual particle. In the formalism this is implemented in the Bose or Fermi condition on the many-body Hilbert space, according to which only those vectors are allowed which are symmetric (bosons) or antisymmetric (fermions) under arbitrary permutations of particles, i.e., a permutation of any particle pair doesn't change the state. That justifies the inclusion of the said factor ##1/N!## for an ##N##-body system of identical particles.
 
  • #72
vanhees71 said:
Classical particles are distinguishable, because you can track each individual particle from its initial position in phase space.

So the inclusion of ##1/N!## must be justified from another model for matter, and of course since 1926 we know it's quantum mechanics and the indistinguishability of identical particles. The argument with the phase-space trajectories is obsolete because of the uncertainty relation, i.e., within a system of many identical particles you can't follow any individual particle. In the formalism this is implemented in the Bose or Fermi condition on the many-body Hilbert space, according to which only those vectors are allowed which are symmetric (bosons) or antisymmetric (fermions) under arbitrary permutations of particles, i.e., a permutation of any particle pair doesn't change the state. That justifies the inclusion of the said factor ##1/N!## for an ##N##-body system of identical particles.

No, you are wrong. The inclusion of ##1/N!## is not only justified under the classical model, it is demanded.
When you count the number of states of an isolated system, with energy ##E##, composed of two subsystems in thermal contact, with energies ##E_1## and ##E_2##, you can consider that the number of states is the product of the numbers of states of the two subsystems:
##W(E_1)=W_1(E_1) \times W_2(E_2)##
Note that ##W(E_1)## is the number of accessible states for the whole system given the partition of the energy, with ##E_1## in subsystem 1, therefore it is a function of ##E_1##.

When you consider that the two subsystems can exchange classical particles, with ##N=N_1+N_2##, you also have to include the number of ways of splitting the labelled classical particles between the two subsystems:
##W(N_1)=W_1(N_1) \times W_2(N_2) \times \frac{N!}{N_1!\,N_2!}##
or
##\frac{W(N_1)}{N!}= \frac{W_1(N_1)}{N_1!} \times \frac{W_2(N_2)}{N_2!}##

Therefore, when you consider exchange of classical particles the ##1/N!## needs to be included.

I agree that quantum particles do justify the inclusion of this term. However, to deny that this term is also necessary under the classical model is simply to deny combinatorial logic.

Note that the nice factorization leads to an extensive entropy, meaning that extensivity follows from combinatorial logic. You do not include the term in order to obtain extensivity; you include it because of the counting, and obtain extensivity as a result.
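
This counting can be verified by brute force for tiny systems. A sketch (the model ##W_i(N_i)=M_i^{N_i}##, with subsystem ##i## consisting of ##M_i## cells for labelled particles, is my own illustrative assumption, not part of the argument above):

```python
# Brute-force check of  W(N_1) = W_1(N_1) * W_2(N_2) * N!/(N_1! N_2!)  for labelled
# ("classical", permutable) particles.  Illustrative model: subsystem i is a set of
# M_i cells and W_i(N_i) = M_i**N_i (each labelled particle sits in one cell).
from itertools import product
from math import comb

M1, M2 = 3, 4                 # number of cells in subsystems 1 and 2
N = 5                         # total number of labelled particles
cells = list(range(M1 + M2))  # cells 0..M1-1 belong to subsystem 1, the rest to subsystem 2

counts = {}
for assignment in product(cells, repeat=N):    # every placement of the N labelled particles
    N1 = sum(1 for c in assignment if c < M1)  # how many particles landed in subsystem 1
    counts[N1] = counts.get(N1, 0) + 1

for N1 in range(N + 1):
    N2 = N - N1
    factorized = comb(N, N1) * M1**N1 * M2**N2   # W_1 * W_2 * N!/(N_1! N_2!)
    print(f"N1 = {N1}: brute force = {counts.get(N1, 0):6d}   factorized = {factorized:6d}")
```

Dividing both sides by ##N!## then gives exactly the factorization ##W(N_1)/N! = [W_1(N_1)/N_1!] \times [W_2(N_2)/N_2!]## written above.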
 
  • #73
autoUFC said:
No, you are wrong. The inclusion of ##1/N!## is not only justified under the classical model, it is demanded.
It's really difficult to discuss this matter if you are not careful. I say the inclusion of ##1/N!## is NOT justified within the classical particle paradigm. The only way to come to the conclusion that you must include such a factor is the demand that the statistically defined entropy (applied to an ideal gas) must be extensive, as the phenomenological entropy is, and that the Gibbs paradox must be avoided. There is no reason other than this "empirical" one within the classical particle picture.

The reason to include the factor from a modern point of view is quantum theory and the impossibility to track individual particles in a system of identical particles due to the Heisenberg uncertainty relation.

The most simple derivation is, of course, to start with quantum statistics and count quantum states of the many-body system (most easily in the "grand canonical approach"), where the indistinguishability of fermions and bosons is worked in from the very beginning, and no difficulties with the additional factor needed in the classical derivation show up. Of course, it then turns out that classical Boltzmann statistics, including the said factor, is a good approximation if the occupation probability of every single-particle state is small, i.e., if ##\exp(\beta(E-\mu)) \gg 1##.
 
  • #74
vanhees71 said:
It's really difficult to discuss this matter if you are not careful. I say the inclusion of ##1/N!## is NOT justified within the classical particle paradigm.

Are you trolling me?

vanhees71 said:
The only way to come to the conclusion that you must include such a factor is the demand that the statistically defined entropy (applied to an ideal gas) must be extensive, as the phenomenological entropy is, and that the Gibbs paradox must be avoided. There is no reason other than this "empirical" one within the classical particle picture.

I already explained why you are wrong. I guess you are really just trolling...
vanhees71 said:
The reason to include the factor from a modern point of view is quantum theory and the impossibility to track individual particles in a system of identical particles due to the Heisenberg uncertainty relation.

The most simple derivation is, of course, to start with quantum statistics and count quantum states of the many-body system (most easily in the "grand canonical approach"), where the indistinguishability of fermions and bosons is worked in from the very beginning, and no difficulties with the additional factor needed in the classical derivation show up. Of course, it then turns out that classical Boltzmann statistics, including the said factor, is a good approximation if the occupation probability of every single-particle state is small, i.e., if ##\exp(\beta(E-\mu)) \gg 1##.
Would be nice to get some meaningful thoughts, though...
There are several interesting topics of discussion regarding this question. For instance:

The fact that the permutation term ##N!/(N_1!\,N_2!)## is needed to account for all accessible states of systems that exchange classical particles, something that I have mentioned a few times but vanhees71 has yet to comment on.
Or the fact that this simple combinatorial logic has been missing from textbooks on statistical mechanics for more than a century.
Or the fact that including the ##1/N!## term in the entropy of the classical particle model was never a paradox, a fact that some people simply refuse to accept. I am not sure whether the reason for this is a difficulty in giving up preconceived ideas, or maybe just trolling fun.
 
  • #75
I'd like to discuss 3 things that I find very surprising in connection with entropy and the mixing paradox.

The first is in connection with the mixing of para- and ortho-helium.
I would say that the difference between these two does not affect the distribution of kinetic energies in a gas.
A gas of para-helium and a gas of ortho-helium should have (almost?) exactly the same distribution of kinetic energies at the same temperature, I would think.
In other words the difference between para and ortho-He is thermodynamically irrelevant.
How is it then that the entropy change can depend on whether you mix two volumes containing the two pure forms in each or whether you mix two volumes already containing mixtures?

The second is in connection with the knowledge of the experimenter that's been mentioned:
Entropy increase is loss of information and you cannot lose information you don't have.
If I give two students the same experiment: Both get a volume that's divided into two halves by a partition wall. In both cases one half volume contains para- and the other ortho-helium.
I tell one student about this, whereas the other student doesn't even know that two forms exist.
If the two students could do an experiment to measure the entropy change due to mixing, would the first one find an entropy increase and the second one not?

Now the third: It's been pointed out that classical systems should be described as limiting cases of quantum systems, so you only really need BE and FD statistics and the correct Boltzmann statistics emerge in the low occupancy limit.
What about if you have a very obviously classical system to start with? Imagine, for example, a large volume with perfectly elastic walls in zero gravity and in this volume billions of tiny, but macroscopic, perfectly elastic and frictionless metal spheres. (Alternatively you can think of clusters or colloids or macromolecules.) These spheres can even be identical and they still would be distinguishable simply because they are trackable with some optical system.
Wouldn't it make sense to describe this system directly in classical terms rather than as a limiting case of quantum statistics?
 
  • #76
Philip,
Concerning your first point: ortho- and para-hydrogen can in principle be separated easily, e.g., using their different magnetic moments, and this can be done reversibly. So you could set up a thermodynamic cycle and measure the entropy difference of mixing the two gases.
Concerning your second question, this is the point made in the paper by Jaynes which has been cited repeatedly in this thread. According to him, the entropy of a macro state depends on how many macro parameters we use to define it. If a student has no information on the difference between ortho- and para-hydrogen, he is discussing different macrostates than a student who has this information and can measure it.
Concerning your third question, I would say that we never describe anything molecule-sized or larger completely in terms of quantum mechanics. Born-Oppenheimer separation, separation of nuclear spin, rovibrational states, etc. lead to a description of molecules which is also incompatible with pure Bose or Fermi statistics.
 
  • #77
Philip Koeck said:
I'd like to discuss 3 things that I find very surprising in connection with entropy and the mixing paradox.

The first is in connection with the mixing of para- and ortho-helium.
I would say that the difference between these two does not affect the distribution of kinetic energies in a gas.
A gas of para-helium and a gas of ortho-helium should have (almost?) exactly the same distribution of kinetic energies at the same temperature, I would think.
In other words the difference between para and ortho-He is thermodynamically irrelevant.
How is it then that the entropy change can depend on whether you mix two volumes containing the two pure forms in each or whether you mix two volumes already containing mixtures?

The second is in connection with the knowledge of the experimenter that's been mentioned:
Entropy increase is loss of information and you cannot lose information you don't have.
If I give two students the same experiment: Both get a volume that's divided into two halves by a partition wall. In both cases one half volume contains para- and the other ortho-helium.
I tell one student about this, whereas the other student doesn't even know that two forms exist.
If the two students could do an experiment to measure the entropy change due to mixing, would the first one find an entropy increase and the second one not?

Now the third: It's been pointed out that classical systems should be described as limiting cases of quantum systems, so you only really need BE and FD statistics and the correct Boltzmann statistics emerge in the low occupancy limit.
What about if you have a very obviously classical system to start with? Imagine, for example, a large volume with perfectly elastic walls in zero gravity and in this volume billions of tiny, but macroscopic, perfectly elastic and frictionless metal spheres. (Alternatively you can think of clusters or colloids or macromolecules.) These spheres can even be identical and they still would be distinguishable simply because they are trackable with some optical system.
Wouldn't it make sense to describe this system directly in classical terms rather than as a limiting case of quantum statistics?
ad 1) Concerning ortho- and para-He: these are non-identical (composite) particles and as such show mixing entropy if first separated into two parts of a volume and then allowed to diffuse through each other. They differ in spin (0 vs. 1).

ad 2) I agree with this. In order to measure mixing entropy of course you need the information about the initial and the final state. As I already said, I couldn't find any real-world experiment in the literature demonstrating mixing entropy, though in nearly any theory textbook the mixing entropy is explained and the Gibbs paradox discussed with the conclusion that you need QUANTUM statistics as a really fundamental argument to introduce the correct counting via the indistinguishability of identical particles.

ad 3) I think there are no "classical systems", only quantum systems, though of course many-body systems very often behave classically due to decoherence and the fact that a coarse-grained description in terms of "macroscopic observables" is sufficient; these observables then behave classically to an overwhelming accuracy.

Of course, there are exceptions, usually in low-temperature situations. There you have collective "quantum behavior" of macroscopic observables (BECs, superconductivity, superfluidity, the specific heat of diamond even at room temperature, ...).
 
  • #78
Philip Koeck said:
What about if you have a very obviously classical system to start with? Imagine, for example, a large volume with perfectly elastic walls in zero gravity and in this volume billions of tiny, but macroscopic, perfectly elastic and frictionless metal spheres. (Alternatively you can think of clusters or colloids or macromolecules.) These spheres can even be identical and they still would be distinguishable simply because they are trackable with some optical system.
Wouldn't it make sense to describe this system directly in classical terms rather than as a limiting case of quantum statistics?

There is something called Edwards entropy, which is the entropy of grains packed in a volume. Different from traditional entropy, it depends on the volume and number of grains, not on energy. As you mention, this is a very obviously classical system to start with. People working on this problem say that the entropy is the logarithm of the number of accessible states divided by ##N!##.
See for instance
Asenjo-Andrews, D. (2014). Direct computation of the packing entropy of granular materials (Doctoral thesis). https://doi.org/10.17863/CAM.16298
Accessible at
https://www.repository.cam.ac.uk/handle/1810/245871.

There you may read, in section 6.2.7:

"When we plot ##S^∗## as a function of ##N## (Figure 6.16), we note (again) that its ##N-##dependence is not very linear, in other words, this form of ##S^∗## is also not extensive. This should come as no surprise because also in equilibrium statistical mechanics, the partition function of a system of N distinct objects is not extensive. We must follow Gibbs’ original argument about the equilibrium between macroscopically identical phases of classical particles and this implies that we should subtract ln ##N!## from ##S^∗##. We note that there is much confusion in the literature on this topic, although papers by van Kampen [93] and by Jaynes [39] clarify the issue. Indeed, if we plot ##S^∗ − \ln N!## versus ##N## we obtain an almost embarrassingly straight line that, moreover, goes through the origin. Previous studies on the entropy of jammed systems, such as the simulations of Lubachevsky et al. [55] presented in Chapter 2, ignored the ##N!## term. "

The author could be a little clearer. For instance, the sentence "the partition function of a system of N distinct objects is not extensive" gives the impression that there is some paradox in this problem. In fact, for permutable objects the entropy is ##S=k\ln(W/N!)## and the free energy is ##F=-kT \ln(Z/N!)##. So, in the same way that ##\ln W## for a system of identical classical objects is not extensive, ##\ln Z## is also not extensive.

Note that Boltzmann's principle was that ##S=k \ln W##. However, this ##W## should be the number of accessible states of an isolated system composed of subsystems that exchange particles (or grains). In my previous posts I explained why Boltzmann's principle leads to the inclusion of the ##1/N!## term by combinatorial logic.

A curious thing is the fact that Lubachevsky did not include the ##1/N!##. I would say that he was misled by what one reads in most textbooks, which wrongly suggest that there is no reason to include this term in the classical model.
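
The extensivity point can be illustrated with the simplest possible counting model (a toy sketch, not the granular-packing computation of the thesis; the form ##W(N,V)=(cV)^N##, each of ##N## labelled particles occupying one of ##cV## cells, is an assumption standing in for any ##W \propto V^N##):

```python
# Toy illustration: for labelled particles with W(N, V) = (c*V)**N (an assumption for
# this sketch), ln W per particle keeps growing as N grows at fixed density,
# while ln(W/N!) per particle approaches the constant ln(c*v) + 1, i.e. it is extensive.
from math import lgamma, log

c = 1.0            # cells per unit volume (arbitrary)
v = 10.0           # volume per particle, held fixed (fixed density)

print(f"{'N':>7} {'ln W / N':>12} {'ln(W/N!) / N':>14}")
for N in (10, 100, 1000, 10_000, 100_000):
    V = v * N                       # scale the volume with N
    lnW = N * log(c * V)            # uncorrected, non-extensive count
    lnW_red = lnW - lgamma(N + 1)   # subtract ln N!  (lgamma(N+1) = ln N!)
    print(f"{N:>7} {lnW / N:12.4f} {lnW_red / N:14.4f}")
```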
 
  • #79
vanhees71 said:
in nearly any theory textbook the mixing entropy is explained and the Gibbs paradox discussed with the conclusion that you need QUANTUM statistics as a really fundamental argument to introduce the correct counting via the indistinguishability of identical particles.

That is a sad truth. Hardly any theory textbook presents the most precise explanation for the inclusion of the ##1/N!## term.

vanhees71 said:
ad 3) I think there are no "classical systems", only quantum systems, though of course many-body systems very often behave classically due to decoherence and the fact that a coarse-grained description in terms of "macroscopic observables" is sufficient; these observables then behave classically to an overwhelming accuracy.
There are classical and quantum models of real systems. I do not think that a quantum model would be any good for describing a pack full of grains, or a galaxy of stars.

By the way, vanhees71, can you please tell me: do you think that the permutation term ##N!/(N_1!\,N_2!)## should be included when counting the number of states of an isolated system divided into two parts that can exchange classical particles? I would like to know your answer to that.
 
  • #80
The correct answer is that you have to divide by the ##N!##; nobody doubts this. We have only discussed whether this is justified within a strictly classical model (my opinion is no), or whether you need to argue either with the extensivity condition for entropy (which is how Boltzmann, Gibbs et al. argued at the time), i.e., by a phenomenological argument (you introduce another phenomenological principle into classical statistical physics), or with quantum mechanics and the indistinguishability of identical particles, which is part of the foundations of quantum theory.
 
  • #81
vanhees71 said:
The correct answer is that you have to divide by the ##N!##; nobody doubts this. We have only discussed whether this is justified within a strictly classical model (my opinion is no), or whether you need to argue either with the extensivity condition for entropy (which is how Boltzmann, Gibbs et al. argued at the time), i.e., by a phenomenological argument (you introduce another phenomenological principle into classical statistical physics), or with quantum mechanics and the indistinguishability of identical particles, which is part of the foundations of quantum theory.
It is not a question of opinion. I presented a logical reason for the inclusion of this term under the classical model. I would imagine that, to state that there is no justification for the ##1/N!## under classical physics, you should refute my argument.
(I say "my" argument, but van Kampen attributes the explanation to Ehrenfest.)

This term comes from counting the possible ways of distributing the classical particles between systems that can exchange particles. One does not need to appeal to quantum mechanics. Van Kampen writes:

In statistical mechanics this dependence is obtained by inserting a factor ##1/N!## in the partition function. Quantum mechanically this factor enters automatically and in many textbooks that is the way in which it is justified. My point is that this is irrelevant: even in classical statistical mechanics it can be derived by logic — rather than by the somewhat mystical arguments of Gibbs and Planck. Specifically I take exception to such statements as: "It is not possible to understand classically why we must divide by N! to obtain the correct counting of states", and: "Classical statistics thus leads to a contradiction with experience even in the range in which quantum effects in the proper sense can be completely neglected".

These two quotes that van Kampen disses are from Huang (1963) and Münster (1969).
 
  • #82
autoUFC said:
My point is that this is irrelevant: even in classical statistical mechanics it can be derived by logic
Thanks for this discussion, it has been enlightening.
I disagree with this characterization. Within classical mechanics the ##1/N!## is a rubric pasted on ex post facto to make the statistical theory work out. Statistical mechanics should not require a redefinition of how we count items!
Until this discussion I hadn't fully appreciated how directly this pointed to the need for something that looks a lot like Quantum Mechanics.
 
  • #83
hutchphd said:
Thanks for this discussion, it has been enlightening.
I disagree with this characterization. Within classical mechanics the ##1/N!## is a rubric pasted on ex post facto to make the statistical theory work out. Statistical mechanics should not require a redefinition of how we count items!
Until this discussion I hadn't fully appreciated how directly this pointed to the need for something that looks a lot like Quantum Mechanics.
Well...
I believe that to disagree you should refute my argument. Can you tell me the following?
How many states are available to an isolated system composed of two subsystems, 1 and 2, that exchange classical particles? In your answer you should consider that subsystem 1 has ##W_1(E_1,N_1,V_1)## accessible states and subsystem 2 has ##W_2(E_2,N_2,V_2)## accessible states.

My answer is that the number of states ##W## available to the isolated system composed of the two subsystems of classical particles is
##W=W_1 \times W_2 \times \frac{N!}{N_1!\,N_2!}##

Is my answer wrong? If it is not wrong, can you see the factors ##1/N_1!## and ##1/N_2!##? These factors are NOT a rubric pasted on ex post facto to make the statistical theory work out. In fact, I would say that not including them would be an illogical redefinition of how we count items!

I really feel I am being trolled, as people keep disagreeing with me without addressing my arguments.
 
  • #84
With all due respect, I will answer as I choose. Several people (smarter than I) have clearly told you that your arithmetic is fine, but that there is no classical reason to count that way, other than that it gives you the obviously desired answer. That does not make it "logical".

You are not being trolled. Perhaps you are not trying hard enough to understand?
 
  • #85
hutchphd said:
With all due respect, I will answer as I choose. Several people (smarter than I) have clearly told you that your arithmetic is fine, but that there is no classical reason to count that way, other than that it gives you the obviously desired answer. That does not make it "logical".

You are not being trolled. Perhaps you are not trying hard enough to understand?
Perhaps you are not trying hard enough to understand. You say there is no classical reason to count this way. What other way of counting exists? As far as I know, counting is neither classical nor quantum. I do not count this way in order to obtain a desired answer; what do you mean by that, anyway?

If you were to count in your way, what would be your result? Can you please tell me?
 
  • #86
Stephen Tashi said:
I don't know which thread participants have read Jaynes' paper http://bayes.wustl.edu/etj/articles/gibbs.paradox.pdf , but the characterization of that paper as "Gibbs made a mistake" is incorrect. The paper says that Gibbs explained the situation correctly, but in an obscure way.

I have read Jaynes again. He says that Gibbs probably presented the correct explanation in an early work, only that he phrased his thoughts in a confusing way. As Jaynes writes about Gibbs's text:
"The decipherment of this into plain English required much effort, sustained only by faith in Gibbs; but eventually there was the reward of knowing how Champollion felt when he realized that he had mastered the Rosetta stone."

But Jaynes places the blame on those who followed:
"In particular, Gibbs failed to point out that an "integration constant" was not an arbitrary
constant, but an arbitrary function. But this has, as we shall see, nontrivial physical consequences.
What is remarkable is not that Gibbs should have failed to stress a fine mathematical point in almost the last words he wrote; but that for 80 years thereafter all textbook writers (except possibly Pauli) failed to see it."

I have to say, however, that Jaynes is also quite unclear. One does not find in Jaynes the term ##N!/(N_1!\,N_2!)## that I see as the key to the problem. Van Kampen is a bit better, but he also does not stress the mathematical point clearly; the binomial coefficient appears in an unnumbered equation between eqs. (9) and (10). In my opinion, the best explanation is in the work by Swendsen. Also, the work by Hjalmar, regarding the buckyballs, has the explanation with this binomial coefficient.
 
  • #87
Philip Koeck said:
This factor 1/N! is only correct for low occupancy, I believe. One has to assume that there is hardly ever more than one particle in the same state or phase space cell. Do you agree?
What happens if occupancy is not low?

In classical systems two particles cannot be in the same state, as the state is determined by a point in a continuous phase space. If you consider phase cells, you could choose them so small that no two particles will ever occupy the same cell.
There is the Maxwell-Boltzmann statistics,
https://en.m.wikipedia.org/wiki/Maxwell–Boltzmann_statistics
As I see it, this would be a statistics for quantum-like particles (states are assumed to be discrete, with possibly multiple occupancy) that are permutable (if two particles change places you get a different state). As you correctly state, in the limit where occupancy is low, MB agrees with the quantum statistics.
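
That low-occupancy agreement can be checked by directly counting ##N##-particle configurations over ##M## single-particle states (the values of ##N## and ##M## below are arbitrary illustrative choices):

```python
# Number of N-particle configurations over M single-particle states:
#   Bose-Einstein      : multisets of states       C(M + N - 1, N)
#   Fermi-Dirac        : subsets of states         C(M, N)
#   corrected Boltzmann: labelled placements / N!  M**N / N!
# At low occupancy (M >> N) all three counts agree, which is the sense in which
# Maxwell-Boltzmann statistics with the 1/N! matches the quantum statistics.
from math import comb, factorial

N = 3
for M in (5, 50, 500, 5000):
    be = comb(M + N - 1, N)
    fd = comb(M, N)
    mb = M**N / factorial(N)
    print(f"M = {M:5d}:  BE = {be:>12}  FD = {fd:>12}  MB/N! = {mb:>14.1f}")
```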
 
  • #88
vanhees71 said:
I must admit that, though looking for such experiments on Google Scholar, I couldn't find any. I guess it's very hard to realize such a "Gibbs-paradox setup" in the real world and to measure the entropy change.
Pauli, in his lecture notes on thermodynamics, describes such an experiment, which uses not semipermeable membranes but temperature changes to freeze out the components.

You could also think of using, e.g. specific antibodies attached to a "fishing rod" i.e. a carrier. By changing the temperature or buffer conditions, you can reversibly bind or remove specific molecules. The occupation of the binding sites will depend on the concentration of the molecules which changes upon mixing. So binding to the fishing rods will occur at lower temperatures in the mixture as compared to the unmixed solutions.
 
  • #89
DrDu said:
You could also think of using, e.g. specific antibodies attached to a "fishing rod" i.e. a carrier. By changing the temperature or buffer conditions, you can reversibly bind or remove specific molecules. The occupation of the binding sites will depend on the concentration of the molecules which changes upon mixing. So binding to the fishing rods will occur at lower temperatures in the mixture as compared to the unmixed solutions.
I assume that you know this to be the basis for an entire class of "lateral flow assays" (see ELISA), and these can provide very high sensitivity optically. I am not sure whether these could be made easily reversible (but I'm certain this reflects only my personal lack of knowledge). Interesting thoughts.
 
  • #90
vanhees71 said:
We have only discussed whether this is justified within a strictly classical model (my opinion is no), or whether you need to argue either with the extensivity condition for entropy (which is how Boltzmann, Gibbs et al. argued at the time), i.e., by a phenomenological argument (you introduce another phenomenological principle into classical statistical physics)

If I am understanding Jaynes' argument correctly, he is arguing that you can justify the ##1 / N!## factor in classical physics using a phenomenological argument. At the bottom of p. 2 of his paper, he says:

...in the phenomenological theory Clausius defined entropy by the integral of ##dQ / T## over a reversible path; but in that path the size of a system was not varied, therefore the dependence of entropy on size was not defined.

He then references Pauli (1973) for a correct phenomenological analysis of extensivity, which would then apply to both the classical and quantum cases. He discusses this analysis in more detail in Section 7 of his paper.
 
