Gibbs paradox: an urban legend in statistical physics

AI Thread Summary
The discussion centers on the assertion that the Gibbs paradox in classical statistical physics is a misconception, with claims that no real paradox exists regarding the mixing of distinguishable particles. Participants reference key works by Van Kampen and Jaynes, arguing that Gibbs's original conclusions about indistinguishability were flawed and that the N! term in entropy calculations is logically necessary rather than paradoxical. The conversation highlights the distinction between classical and quantum mechanics, emphasizing that classical mechanics treats particles as distinguishable, which should yield measurable mixing entropy. Some contributors challenge the notion that the paradox reveals inconsistencies in classical models, suggesting that it is an urban legend rather than a fundamental issue. The debate underscores the ongoing confusion and differing interpretations surrounding the implications of the Gibbs paradox in both classical and quantum contexts.
  • #51
Please disabuse me of any foolishness in the following. It seems to me the question of distinguishability is a real problem for classical mechanics which is obviated by quantum theory.
If I buy a dozen new golf balls they might all be identical to my eye when new, but if I examine them after a year (assuming no errant shots) that will no longer be so... they will be somewhat different. Molecules in the "classical" universe, each having a unique history, surely could not be expected to be identical after millennia. It then seems to require an ill-defined artifice to arbitrarily define them as such. These stipulations then need to give way to macroscopic rules at some scale between molecules and golf balls. Just throw in an N!
Whatever one does will not be pretty. Quantum mechanics ties this into a rather neat bundle. It seems to me the least objectionable solution (indeed this is the best of all possible worlds).
 
  • #52
I think this problem was really solved by Landauer. Consider for example mixing two samples of hemoglobin.
The point is that molecules like hemoglobin are indistinguishable on a macroscopic level, but they are distinguishable on a microscopic level, because two molecules almost certainly have a different isotopic composition (C-13 and deuterium), so that with very high probability there are no two identical hemoglobin molecules within one human person. Let's say you are mixing two samples of hemoglobin from two different persons. Will you measure any increase in entropy? That depends. Of course you could determine the isotopic pattern of each hemoglobin molecule of each person before mixing. Then you would certainly find an increase of entropy (= loss of information) upon mixing. But you don't have this information if you only have macroscopic information on the composition of the hemoglobin (which would be identical for the two persons).
Hence the point is the following: the information contained in the labels distinguishing different individuals would contribute to entropy. Usually you simply don't have this information, whence it also does not contribute to entropy. Hence the mixing entropy will be zero, because you can't lose information which you don't have.
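To put a rough number on "the information contained in the labels", here is a minimal back-of-the-envelope sketch (my own illustration, not from Landauer; the sample size n is hypothetical). If each person's sample contains n mutually distinguishable molecules, the label information lost on mixing is ##k_B \ln \binom{2n}{n} \approx 2 n k_B \ln 2##, i.e., exactly the classical mixing entropy you would have assigned had you recorded the isotopic patterns first:

```python
from math import lgamma, log

k_B = 1.380649e-23  # Boltzmann constant, J/K

def ln_binom(n, k):
    # ln C(n, k) via log-gamma, numerically stable for very large n
    return lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)

n = 6.0e20  # hypothetical number of hemoglobin molecules per sample (~1 mmol)

# Entropy of not knowing which of the C(2n, n) ways of splitting the
# mixed sample reproduces the two original samples:
print(k_B * ln_binom(2 * n, n))   # ~0.0115 J/K for this n
print(2 * n * k_B * log(2))       # Stirling estimate 2n k_B ln 2, same
```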
 
  • Like
Likes HPt, dextercioby, DrStupid and 1 other person
  • #53
DrDu said:
Will you measure any increase in entropy?
What in fact would be measured?
If you do this by somehow "counting" each molecule type then your argument seems a tautology to me. You obviously cannot lose information you never had.
I really don't understand this stuff.
 
Last edited:
  • #54
vanhees71 said:
... only because you don't write ##S=\ln W## but just put ##S=\ln(W/N!)##...

It is claimed that that was the original definition of entropy given by Boltzmann.
 
  • #55
Philip Koeck said:
My thinking is this: W is simply the number of different ways to arrive at a certain distribution of particles among the available energy levels.
In equilibrium the system will be close to the distribution with the highest W, which is also that with the highest entropy.
The exact relationship between S and W is not quite clear, however.
Swendsen, for example, states that there is an undefined additive function of N (which I chose to be -k ln(N!)).
I assume that in the expression S = k ln(omega) the quantity omega stands for a probability (actually a quantity proportional to a probability) rather than a number. I believe Boltzmann might have reasoned like that too, since he used W, as in Wahrscheinlichkeit (I think Swendsen wrote something like that too).
That's why I introduce the correction 1/N! for distinguishable particles.
I'm not saying this is the right way to think about it. I just tried to make sense of things for myself.

W is given by pure combinatorics so I can't really redefine W as you suggest.
The only place where I have some freedom in this derivation is where I connect my W to entropy, and, yes, I do that differently for distinguishable and indistinguishable particles.
Let me first say that I like your manuscript very much, because it gives a very clear derivation of the distributions in terms of counting microstates (Planck's "complexions") for the different statistics (Boltzmann, Bose, Fermi). Of course, the correct Boltzmann statistics is the one dividing by ##N!## to take into account the indistinguishability of identical particles, where by identical I mean particles with all intrinsic properties (mass, spin, charges) the same.

That's of course not justified within a strict classical theory, because according to classical mechanics you can precisely follow each individual particle's trajectory in phase space, and thus each particle is individually distinguished from any other identical particle simply by labelling its initial point in phase space.

Of course, it's also right that in classical statistical physics there's the fundamental problem of the "natural" measure of phase-space volumes, because there is none within classical physics. What's clear in classical mechanics is that the available phase-space volume is a measure for the a-priori equal probabilities of microstates because of the Liouville theorem. Boltzmann introduced an arbitrary phase-space volume, and thus the entropy is anyway only defined up to an arbitrary constant, which is chosen ##N##-dependent by some authors (e.g., in the very good book by Becker, Theory of Heat) and then used to adjust the entropy to make it extensive, leading to the Sackur-Tetrode formula.

I think it's a bit of an unnecessary discussion though, because today we know that the only adequate theory of matter is quantum theory including the indistinguishability of identical particles, leading (by topological arguments) to Bose-Einstein or Fermi-Dirac Fock space realizations of many-body states (in ##\geq 3## spatial dimensions; in 2 dimensions you can have "anyons", and indeed some quasiparticles have been found in condensed-matter physics of 2D structures like graphene that behave as such).

As you very nicely demonstrate, the classical limit for BE as well as FD statistics leads to the MB statistics under consideration of the indistinguishability of identical particles, leading to the correct Sackur-Tetrode entropy for ideal gases.

From a didactical point of view I find the truly historical approach to teaching the necessity of quantum theory is the approach via thermodynamics. The "few clouds" in the otherwise "complete" picture of physics in the middle of the 19th century were all within thermodynamics (e.g., the specific heat of solids at low temperature, the black-body radiation spectrum, the theoretical foundation of Nernst's 3rd Law), and all have been resolved by quantum theory. It's not by chance that quantum theory was discovered by Planck when deriving the black-body spectrum by statistical means, and it was Einstein's real argument for the introduction of "light quanta" and "wave-particle duality", which were important steps towards the development of the full modern quantum theory.
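As a concrete illustration of how the Sackur-Tetrode formula comes out numerically once ##h## is used as the phase-space measure, here is a minimal sketch (my own; it just plugs standard constants into ##S = N k_B [\ln(V/(N\lambda^3)) + 5/2]## with ##\lambda = h/\sqrt{2\pi m k_B T}##):

```python
from math import pi, log

# Physical constants (SI)
h   = 6.62607015e-34   # Planck constant, J s
k_B = 1.380649e-23     # Boltzmann constant, J/K
N_A = 6.02214076e23    # Avogadro constant, 1/mol
u   = 1.66053907e-27   # atomic mass unit, kg

# One mole of helium-4 at T = 298.15 K and p = 1 bar
T = 298.15
p = 1.0e5
m = 4.0026 * u
N = N_A
V = N * k_B * T / p    # ideal-gas volume

# Thermal de Broglie wavelength and Sackur-Tetrode entropy
lam = h / (2 * pi * m * k_B * T) ** 0.5
S = N * k_B * (log(V / (N * lam**3)) + 2.5)
print(f"S = {S:.1f} J/(mol K)")  # ~126 J/(mol K), close to the tabulated value
```

Note that the ##\ln(V/N)## inside the logarithm is exactly the ##1/N!## at work; without it the expression would not even be extensive.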
 
  • Like
Likes hutchphd and Philip Koeck
  • #56
andresB said:
It is claimed that that was the original definition of entropy given by Boltzmann.
As far as I know, Boltzmann introduced the factor consistently in both the probabilities (or numbers of microstates) and the entropy, i.e., he put it in both places, and from the information-theoretical point of view that should be so, because entropy is the measure of missing information for a given probability distribution, i.e., entropy should be a unique functional of the probability distribution.
 
  • #57
Philip Koeck said:
I've also wondered if there are experimental results that show that if you mix for example He with Ne the total entropy increases, whereas if you mix Ne with Ne it doesn't.
Is there anything like that?

In his book “Theory of Heat”, Richard Becker points to an idea to calculate the entropy increase for the irreversible mixing of different gases by performing an imagined reversible process using semipermeable walls. Have a look at the chapters “A gas consisting of several components” and “The increase of entropy for irreversible mixing”.
 
  • Like
Likes Philip Koeck and vanhees71
  • #58
I must admit that, though I looked for such experiments on Google Scholar, I couldn't find any. I guess it's very hard to realize such a "Gibbs-paradox setup" in the real world and measure the entropy change.
 
  • Like
Likes Philip Koeck and Lord Jestocost
  • #59
vanhees71 said:
I guess it's very hard to realize such a "Gibbs-paradox setup" in the real world and measuring the entropy change.

Being in the 21st century, I think it is possible to perform this kind of experiment.
How did scientists in the 19th century imagine identical but distinguishable particles?
At least I am thinking of billiard balls meticulously numbered with India ink.
We already note two points here:
a) Due to the labelling, even the classical billiard balls won't be exactly identical.
b) The amount of information which can be stored on a billiard ball is finite. Therefore extensivity, i.e., labelling an arbitrarily large number of balls, is not strictly possible.

How would we do this in the 21st century? Labelling of larger molecules is possible either by isotopic substitution or by using DNA tags. For the latter case, highly advanced methods exist which would allow one to set up a thermodynamic cycle in any lab.

First attach n primers to each of two carriers and synthesise random single-stranded DNA of length N from, e.g., A and C only.
In the second step, synthesize the complementary strands.
If the number of DNA molecules per carrier satisfies ##n \ll 2^N##, the probability of finding two identical sequences on the same or on different carriers is practically 0. Hence all DNA molecules are distinguishable.
Note that this is clearly not extensive. With N = 1000, for example, the amount of DNA necessary to find two identical sequences would probably spontaneously collapse into a star.

Now if you put each carrier with attached double stranded DNA into a vial with buffer and heat it up, the complementary strands will separate from the carrier. The amount of heat supplied as a function of temperature can easily be recorded with standard calorimeters.
On a macroscopic level, the DNA from different carriers will be indistinguishable and appear as a homogeneous chemical substance.

Remove the carriers and mix the DNA solutions.
Now put the two carriers into the solution and lower the temperature. Heat will be released at a somewhat lower temperature than before, because the concentration of the individual DNA strands is lower.
At the end of the process, the DNA is bound to its original carrier again.
Hence we have completed a thermodynamic cycle consisting of an irreversible mixing step and a reversible separation. The entropy can easily be calculated from the calorimetric measurements.

So we see that
a) Mixing of distinguishable particles really leads to an increase of entropy.
b) This is not a problem, because distinguishability of microscopic particles cannot be made arbitrarily extensive.
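For the skeptical: a quick birthday-problem estimate (my own sketch; the value of n is hypothetical) of how strongly ##n \ll 2^N## suppresses duplicate strands:

```python
from math import exp

# Probability that at least two of n random strands of length N over the
# two-letter alphabet {A, C} coincide. For n << 2^N the standard
# birthday-problem approximation is p ~ 1 - exp(-n(n-1)/(2*2^N)).
def collision_prob(n, N):
    return 1.0 - exp(-n * (n - 1) / (2.0 * 2.0**N))

n = 1e15  # hypothetical number of strands per carrier
print(collision_prob(n, 100))    # ~0.33: N = 100 is still borderline
print(collision_prob(n, 200))    # ~3e-31: effectively impossible
print(collision_prob(n, 1000))   # 0.0 to double precision
# A duplicate only becomes likely once n ~ 2^(N/2), i.e. ~2^500
# molecules for N = 1000 -- hence the star.
```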
 
Last edited:
  • Like
Likes hutchphd and vanhees71
  • #60
vanhees71 said:
Of course, the correct Boltzmann statistics is the one dividing by ##N!## to take into account the indistinguishability of identical particles, where by identical I mean particles with all intrinsic properties (mass, spin, charges) the same.
That's of course not justified within a strict classical theory, because according to classical mechanics you can precisely follow each individual particle's trajectory in phase space, and thus each particle is individually distinguished from any other identical particle simply by labelling its initial point in phase space.

I am not sure what you are saying is not justified within a strict classical theory.

Is the idea that classical particles may be indistinguishable (or impermutable as I prefer)?
If so, I agree, indistinguishable particles (in the quantum sense) is not consistent with classical mechanics.

If you intend to say that the inclusion of the 1/N! is not justified in classical mechanics then you are wrong. This term is demanded by the definition of entropy as S=k ln(W), with W being the number of accessible states for a system with two partitions that can exchange identical classical particles.
 
  • #61
vanhees71 said:
From a didactical point of view I find the true historical approach to teach the necessity of quantum theory is the approach via thermodynamics. What was considered a "few clouds" in the otherwise "complete" picture of physics mid of the 19th century, which all were within thermodynamics (e.g., the specific heat of solids at low temperature, the black-body radiation spectrum, the theoretical foundation of Nernst's 3rd Law) and all have been resolved by quantum theory. It's not by chance that quantum theory was discovered by Planck when deriving the black-body spectrum by statistical means, and it was Einstein's real argument for the introduction of "light quanta" and "wave-particle duality", which were important steps towards the development of the full modern quantum theory.
One could consider hypothetical quantum particles that are permutable. What I mean is particles that need to be in a discrete quantum state with quantized energy, but for which permutations lead to a different state of the multi-particle system. Reif, section 9.4, refers to the statistics of permutable quantum particles as Maxwell-Boltzmann statistics.

Einstein, in one of the papers introducing BE statistics, writes that he has shown that permutable photons would obey Wien's law. He notes that by considering photons as impermutable (normal indistinguishable quantum particles) he derives Planck's law.
I never saw this derivation by Einstein, but I believe that, to derive Wien's law, he assumes that there is just one way that n permutable photons may occupy the same quantum state. However, since these are permutable photons, one could as well consider that there are n! ways for n permutable photons to be in the same state. In this case one finds that permutable photons would obey Planck's law, just as normal photons do. In similar fashion, one could get these hypothetical permutable quantum particles to follow Bose-Einstein, or Fermi-Dirac (imposing the Pauli exclusion principle).

I am not saying that identical quantum particles are permutable. I am just saying that the fact that they obey FD and BE statistics does not prove that they are impermutable.
 
  • #62
autoUFC said:
Einstein, in one of the papers
Please.
 
  • #63
hutchphd said:
Please.
?
 
  • #64
autoUFC said:
Einstein, in one of the papers introducing BE statistics, writes that he has shown that permutable photons would obey Wien's law. He notes that considering photons as impermutable (normal indistinguishable quantum particles) he derives Plank's law.

That is the quote. I guess "mutually statistically independent entities" are what I call identical permutable particles. I see no other way to get to Wien's law.

"An aspect of Bose’s theory of radiation and of my analogous theory of the ideal gases which has been criticized by Mr. Ehrenfest and other colleagues is that in these theories the quanta or molecules are not treated as mutually statistically independent entities; this matter not being pointed out explicitly in our treatments. This is absolutely correct. If the quanta are treated as mutually statistically independent in their localization, one arrives at Wien’s displacement law; if
one treats the gas molecules in an analogous manner, one arrives at the classical equation of
state of the ideal gases, even when proceeding in all other respects exactly as Bose and I have done."

From: A. Einstein, "Quantum theory of the monoatomic ideal gas. Second treatise" (1925).
 
  • Like
Likes hutchphd
  • #65
autoUFC said:
One could consider hypothetical quantum particles that are permutable. What I mean is particles that need to be in a discrete quantum state with quantized energy, but for which permutations lead to a different state of the multi-particle system. Reif, section 9.4, refers to the statistics of permutable quantum particles as Maxwell-Boltzmann statistics.
Do you know about para-Fermi and para-Bose statistics? Both can be formulated consistently in QM and Fermi, Bose and Maxwell-Boltzmann statistics are special cases.
 
  • #66
Stephen Tashi said:
The link to the Jaynes paper given by the Wikipedia article is: http://bayes.wustl.edu/etj/articles/gibbs.paradox.pdf and, at the current time, the link works.

I read this paper long ago and just re-read it. I think it is quite spot on, and I am also convinced that my DNA example is as close as possible to the example Jaynes gives with two gases which are distinguished only by their solubility in "Whifnium 1" and "Whifnium 2", yet to be discovered. The two DNA samples will be similar in all macroscopic respects, e.g. their solubility, average molar weight, etc. On a macroscopic level they differ in their affinity to their respective carriers, which take the place of Whifnium. Whether we have the carriers available or not changes the definition of the macrostate and our ability to exploit the samples' difference in a thermodynamic cycle.
 
  • Like
Likes Dale
  • #67
DrDu said:
Do you know about para-Fermi and para-Bose statistics? Both can be formulated consistently in QM and Fermi, Bose and Maxwell-Boltzmann statistics are special cases.

I did not know about it. I read the Wikipedia article and one of its references, by Cattani and Bassalo, https://arxiv.org/abs/0903.4773. In the article they obtain Maxwell-Boltzmann as a special case of parastatistics when ##E-\mu \gg kT##
(##E## energy, ##\mu## chemical potential, ##k## Boltzmann constant, and ##T## temperature).
I see this as disingenuous. In this limit both Fermi-Dirac and Bose-Einstein reduce to Maxwell-Boltzmann, and we do not say that MB is a special case of FD or BE.

Therefore, I do not agree that Maxwell-Boltzmann is a special case of parastatistics, since parastatistics never agrees with Maxwell-Boltzmann in the low-temperature regime.
 
  • #68
No, parastatistics depends on an integer parameter p: if p = 1, the usual Bose or Fermi statistics arises; if p = infinity, Maxwell-Boltzmann arises, independent of temperature.
 
  • #69
DrDu said:
No, parastatistics depends on an integer parameter p: if p = 1, the usual Bose or Fermi statistics arises; if p = infinity, Maxwell-Boltzmann arises, independent of temperature.

I read that on Wikipedia. However, this claim is not justified there. In the work of Cattani and Bassalo, they recover MB in the high-temperature limit of Gentile statistics. Are you aware of any work that demonstrates that MB is the ##p \to \infty## limit of parastatistics?
 
  • #70
R. Haag, Local Quantum Physics, contains this statement. I suppose you can also find it in the article by Green cited by Cattani and Bassalo. I would not trust too much a preprint which is not even consistently written in one language.
 
  • #71
autoUFC said:
I am not sure what you are saying is not justified within a strict classical theory.

Is the idea that classical particles may be indistinguishable (or impermutable as I prefer)?
If so, I agree, indistinguishable particles (in the quantum sense) is not consistent with classical mechanics.

If you intend to say that the inclusion of the 1/N! is not justified in classical mechanics then you are wrong. This term is demanded by the definition of entropy as S=k ln(W), with W being the number of accessible states for a system with two partitions that can exchange identical classical particles.
Classical particles are distinguishable, because you can track each individual particle from its initial position in phase space.

So the inclusion of ##1/N!## must be justified from another model for matter, and of course since 1926 we know it's quantum mechanics and the indistinguishability of identical particles. The argument with the phase-space trajectories is obsolete because of the uncertainty relation, i.e., within a system of many identical particles you can't follow any individual particle. In the formalism that's implemented in the Bose or Fermi condition on the many-body Hilbert space, according to which only such vectors are allowed which are symmetric (bosons) or antisymmetric (fermions) under arbitrary permutations of particles, i.e., any permutation of a particle pair doesn't change the state. That justifies the inclusion of the said factor ##1/N!## for an ##N##-body system of identical particles.
 
  • Like
Likes hutchphd
  • #72
vanhees71 said:
Classical particles are distinguishable, because you can track each individual particle from its initial position in phase space.

So the inclusion of ##1/N!## must be justified from another model for matter, and of course since 1926 we know it's quantum mechanics and the indistinguishability of identical particles. The argument with the phase-space trajectories is obsolete because of the uncertainty relation, i.e., within a system of many identical particles you can't follow any individual particle. In the formalism that's implemented in the Bose or Fermi condition on the many-body Hilbert space, according to which only such vectors are allowed which are symmetric (bosons) or antisymmetric (fermions) under arbitrary permutations of particles, i.e., any permutation of a particle pair doesn't change the state. That justifies the inclusion of the said factor ##1/N!## for an ##N##-body system of identical particles.

No. You are wrong. The inclusion of ##1/N!## is not only justified under the classical model, it is demanded.
When you count the number of states of an isolated system with energy ##E##, composed of two subsystems in thermal contact with energies ##E_1## and ##E_2##, you can consider that the number of states is the product of the numbers of states of each subsystem:
##W(E_1)=W_1(E_1) × W_2(E_2)##
Note that ##W(E_1)## is the number of accessible states for the whole system given the partition of the energy, with ##E_1## in subsystem 1; therefore it is a function of ##E_1##.

When you consider that the two subsystems can exchange classical particles, with ##N=N_1+N_2##, you have to include the number of possible permutations of classical particles between the two subsystems:
##W(N_1)=W_1(N_1) × W_2(N_2) × [ N! /( N_1! N_2! ) ]##
or
##W(N_1)/N!= [W_1(N_1)/N_1!] × [W_2(N_2)/N_2! ]##

Therefore, when you consider exchange of classical particles the ##1/N!## needs to be included.

I agree that quantum particles do justify the inclusion of this term. However, to deny that this term is also necessary under the classical model is simply to deny combinatorial logic.

Note that the nice factorization leads to an extensive entropy, meaning that extensivity follows from combinatorial logic. You do not include the term to obtain extensivity. You include it due to logic and obtain extensivity as a result.
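Here is a minimal numerical sketch of that claim (my own toy model: each classical particle gets ##g## equally probable one-particle cells, ##g## proportional to the volume, so a subsystem of ##N## labelled particles has ##W = g^N## microstates):

```python
from math import lgamma, log

def ln_W(N, g):
    # ln of the microstate count for N labelled particles in g cells
    return N * log(g)

def S(N, g):
    # Boltzmann entropy with the 1/N! correction, in units of k
    return ln_W(N, g) - lgamma(N + 1)

N, g = 1.0e6, 1.0e12   # toy sizes; only the scaling matters

# Without the N!, doubling both N and the volume (g) leaves a spurious
# excess of exactly 2N ln 2 -- the Gibbs mixing term:
print(ln_W(2*N, 2*g) - 2 * ln_W(N, g))   # ~1.386e6 = 2N ln 2

# With the 1/N! demanded by the permutation factor N!/(N1! N2!),
# the same doubling leaves only a Stirling-order remainder:
print(S(2*N, 2*g) - 2 * S(N, g))         # ~7.5, negligible vs S ~ 1.5e7
```

So dividing by ##N!## is exactly what turns the binomial factor into the factorization ##W/N! = (W_1/N_1!)(W_2/N_2!)## and makes ##\ln(W/N!)## extensive.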
 
  • #73
autoUFC said:
No. You are wrong. The inclusive of ##1/N!## is not only justfied under the classical model, it is demanded.
It's really difficult to discuss this matter if you are not careful. I say the inclusion of ##1/N!## is NOT justified within the classical particle paradigm. The only way to come to the conclusion that you must include such a factor comes from the demand that the statistically defined entropy (applied to an ideal gas) must be extensive, as the phenomenological entropy is, and to avoid the Gibbs paradox. There's no reason other than this "empirical" one within the classical particle picture.

The reason to include the factor from a modern point of view is quantum theory and the impossibility to track individual particles in a system of identical particles due to the Heisenberg uncertainty relation.

The simplest derivation is, of course, to start with quantum statistics and count quantum states of the many-body system (most easily in the "grand canonical approach"), where the indistinguishability for fermions and bosons is worked in from the very beginning and no difficulties with the additional factor needed in the classical derivation show up. Of course, it then turns out that the classical Boltzmann statistics, including the said factor, is a good approximation if the occupation probability for all single-particle states is small, i.e., if ##\exp(\beta(E-\mu)) \gg 1##.
 
  • Like
Likes hutchphd
  • #74
vanhees71 said:
It's really difficult to discuss this matter if you are not careful. I say the inclusion of ##1/N!## is NOT justified within the classical particle paradigm.

Are you trolling me?

vanhees71 said:
The only way to come to the conclusion that you must include such a factor comes from the demand that the statistically defined entropy (applied to an ideal gas) must be extensive, as the phenomenological entropy is, and to avoid the Gibbs paradox. There's no reason other than this "empirical" one within the classical particle picture.

Already explained that you are wrong. I guess you are really just trolling...
vanhees71 said:
The reason to include the factor from a modern point of view is quantum theory and the impossibility to track individual particles in a system of identical particles due to the Heisenberg uncertainty relation.

The simplest derivation is, of course, to start with quantum statistics and count quantum states of the many-body system (most easily in the "grand canonical approach"), where the indistinguishability for fermions and bosons is worked in from the very beginning and no difficulties with the additional factor needed in the classical derivation show up. Of course, it then turns out that the classical Boltzmann statistics, including the said factor, is a good approximation if the occupation probability for all single-particle states is small, i.e., if ##\exp(\beta(E-\mu)) \gg 1##.
Would be nice to get some meaningful thoughts, though...
There are several interesting topics of discussion regarding this question. For instance:

The fact that the permutation term ##[ N ! / ( N_1! N_2! ) ]## is needed to account for all accessible states for systems that exchange classical particles. Something that I mentioned a few times but vanhees71 has yet to comment on.
Or the fact that this simple combinatorial logic has been missing from textbooks on statistical mechanics for more than a century.
Or the fact that including the ##1/N!## term in the entropy of the classical particle model was never a paradox. A fact that some people simply refuse to accept. I am not sure if the reason for this is a difficulty in giving up preconceived ideas, or maybe just trolling fun.
 
  • Skeptical
Likes weirdoguy
  • #75
I'd like to discuss 3 things that I find very surprising in connection with entropy and the mixing paradox.

The first is in connection with the mixing of para- and ortho-helium.
I would say that the difference between these two does not affect the distribution of kinetic energies in a gas.
A gas of para-helium and a gas of ortho-helium should have (almost?) exactly the same distribution of kinetic energies at the same temperature, I would think.
In other words the difference between para and ortho-He is thermodynamically irrelevant.
How is it then that the entropy change can depend on whether you mix two volumes each containing one of the pure forms or two volumes already containing mixtures?

The second is in connection with the knowledge of the experimenter that's been mentioned:
Entropy increase is loss of information and you cannot lose information you don't have.
If I give two students the same experiment: Both get a volume that's divided into two halves by a partition wall. In both cases one half volume contains para- and the other ortho-helium.
I tell one student about this, whereas the other student doesn't even know that two forms exist.
If the two students could do an experiment to measure the entropy change due to mixing, would the first one find an entropy increase and the second one not?

Now the third: It's been pointed out that classical systems should be described as limiting cases of quantum systems, so you only really need BE and FD statistics and the correct Boltzmann statistics emerge in the low occupancy limit.
What about if you have a very obviously classical system to start with? Imagine, for example, a large volume with perfectly elastic walls in zero gravity and in this volume billions of tiny, but macroscopic, perfectly elastic and frictionless metal spheres. (Alternatively you can think of clusters or colloids or macromolecules.) These spheres can even be identical and they still would be distinguishable simply because they are trackable with some optical system.
Wouldn't it make sense to describe this system directly in classical terms rather than as a limiting case of quantum statistics?
 
Last edited:
  • Like
Likes vanhees71
  • #76
Philip,
concerning your first point: ortho- and para-hydrogen can in principle be separated easily, e.g. using their different magnetic moments, and this can be done reversibly. So you could set up a thermodynamic cycle and measure the entropy difference of mixing the two gases.
Concerning your second question, this is the point made in the paper by Jaynes which has been cited repeatedly in this thread. According to him, the entropy of a macrostate depends on how many macro parameters we use to define it. If a student has no information on the difference of ortho- and para-hydrogen, he is discussing other macrostates than a student who has this information and can measure it.
Concerning your third question, I would say that we never describe molecules or anything larger completely in terms of quantum mechanics. Born-Oppenheimer separation, separation of nuclear spin, rovibrational states etc. lead to a description of molecules which is also incompatible with pure Bose or Fermi statistics.
 
  • Like
Likes vanhees71
  • #77
Philip Koeck said:
I'd like to discuss 3 things that I find very surprising in connection with entropy and the mixing paradox.

The first is in connection with the mixing of para- and ortho-helium.
I would say that the difference between these two does not affect the distribution of kinetic energies in a gas.
A gas of para-helium and a gas of ortho-helium should have (almost?) exactly the same distribution of kinetic energies at the same temperature, I would think.
In other words the difference between para and ortho-He is thermodynamically irrelevant.
How is it then that the entropy change can depend on whether you mix two volumes each containing one of the pure forms or two volumes already containing mixtures?

The second is in connection with the knowledge of the experimenter that's been mentioned:
Entropy increase is loss of information and you cannot lose information you don't have.
If I give two students the same experiment: Both get a volume that's divided into two halves by a partition wall. In both cases one half volume contains para- and the other ortho-helium.
I tell one student about this, whereas the other student doesn't even know that two forms exist.
If the two students could do an experiment to measure the entropy change due to mixing, would the first one find an entropy increase and the second one not?

Now the third: It's been pointed out that classical systems should be described as limiting cases of quantum systems, so you only really need BE and FD statistics and the correct Boltzmann statistics emerge in the low occupancy limit.
What about if you have a very obviously classical system to start with? Imagine, for example, a large volume with perfectly elastic walls in zero gravity and in this volume billions of tiny, but macroscopic, perfectly elastic and frictionless metal spheres. (Alternatively you can think of clusters or colloids or macromolecules.) These spheres can even be identical and they still would be distinguishable simply because they are trackable with some optical system.
Wouldn't it make sense to describe this system directly in classical terms rather than as a limiting case of quantum statistics?
ad 1) Concerning ortho- and para-He: these are non-identical (composite) particles and as such show mixing entropy if first separated in two parts of a volume and then allowed to diffuse through each other. They differ in spin (0 vs. 1).

ad 2) I agree with this. In order to measure mixing entropy of course you need the information about the initial and the final state. As I already said, I couldn't find any real-world experiment in the literature demonstrating mixing entropy, though in nearly any theory textbook the mixing entropy is explained and the Gibbs paradox discussed with the conclusion that you need QUANTUM statistics as a really fundamental argument to introduce the correct counting via the indistinguishability of identical particles.

ad 3) I think there are no "classical systems", only quantum systems, though many-body systems very often behave classically due to decoherence and the fact that a coarse-grained description in terms of "macroscopic observables" is sufficient; these observables then behave classically to an overwhelming accuracy.

Of course, there are exceptions, usually in low-temperature situations. There you have collective "quantum behavior" of macroscopic observables (BECs, superconductivity, superfluidity, the specific heat of diamond even at room temperature, ...).
 
  • Like
Likes Philip Koeck
  • #78
Philip Koeck said:
What about if you have a very obviously classical system to start with? Imagine, for example, a large volume with perfectly elastic walls in zero gravity and in this volume billions of tiny, but macroscopic, perfectly elastic and frictionless metal spheres. (Alternatively you can think of clusters or colloids or macromolecules.) These spheres can even be identical and they still would be distinguishable simply because they are trackable with some optical system.
Wouldn't it make sense to describe this system directly in classical terms rather than as a limiting case of quantum statistics?

There is something called Edwards entropy, which is the entropy for grains packed in a volume. Different from traditional entropy, it depends on the volume and number of grains, not on energy. As you mention, this is a very obviously classical system to start with. People working on this problem say that the entropy is the logarithm of the number of accessible states divided by N!.
See for instance
Asenjo-Andrews, D. (2014). Direct computation of the packing entropy of granular materials (Doctoral thesis). https://doi.org/10.17863/CAM.16298
Accessible at
https://www.repository.cam.ac.uk/handle/1810/245871.

There, in section 6.2.7, you may see:

"When we plot ##S^∗## as a function of ##N## (Figure 6.16), we note (again) that its ##N-##dependence is not very linear, in other words, this form of ##S^∗## is also not extensive. This should come as no surprise because also in equilibrium statistical mechanics, the partition function of a system of N distinct objects is not extensive. We must follow Gibbs’ original argument about the equilibrium between macroscopically identical phases of classical particles and this implies that we should subtract ln ##N!## from ##S^∗##. We note that there is much confusion in the literature on this topic, although papers by van Kampen [93] and by Jaynes [39] clarify the issue. Indeed, if we plot ##S^∗ − \ln N!## versus ##N## we obtain an almost embarrassingly straight line that, moreover, goes through the origin. Previous studies on the entropy of jammed systems, such as the simulations of Lubachevsky et al. [55] presented in Chapter 2, ignored the ##N!## term. "

The author could be a little clearer. For instance, the sentence "the partition function of a system of N distinct objects is not extensive" gives the impression that there is some paradox in this problem. In fact, for permutable objects the entropy is ##S=\ln(W/N!)## and the free energy is ##F=-kT \ln(Z/N!)##. So, in the same way that ##\ln W## for a system of identical classical objects is not extensive, ##\ln Z## is also not extensive.

Note that Boltzmann's principle was that S = k ln(W). However, this W should be the number of accessible states for an isolated system composed of subsystems that exchange particles (or grains). In my previous posts I explained why Boltzmann's principle leads to the inclusion of the 1/N! term by combinatorial logic.

A curious thing is the fact that Lubachevsky did not include the 1/N!. I would say that he was misled by what one reads in most textbooks, which wrongly suggest that this term has no reason to be included in the classical model.
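A quick toy check of that last point, the canonical-ensemble analogue of the counting check in my earlier post (my own sketch, using the simplest example ##Z = z^N## with one-particle partition function ##z = cV##, ##c## an arbitrary constant):

```python
from math import lgamma, log

c = 1.0e24  # arbitrary constant with units 1/volume; only the scaling matters

def ln_Z(N, V):
    # N distinct (permutable) objects: Z = (c V)^N, so ln Z = N ln(cV)
    return N * log(c * V)

N, V = 1.0e6, 1.0e-3

# ln Z is not extensive: doubling N and V leaves an excess of 2N ln 2 ...
print(ln_Z(2*N, 2*V) - 2 * ln_Z(N, V))                      # ~1.386e6

# ... while ln(Z/N!), i.e. -F/kT with the 1/N! included, is extensive:
print((ln_Z(2*N, 2*V) - lgamma(2*N + 1))
      - 2 * (ln_Z(N, V) - lgamma(N + 1)))                   # ~7.5, Stirling-order
```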
 
  • #79
vanhees71 said:
in nearly any theory textbook the mixing entropy is explained and the Gibbs paradox discussed with the conclusion that you need QUANTUM statistics as a really fundamental argument to introduce the correct counting via the indistinguishability of identical particles.

That is a sad truth. Hardly any theory textbook presents the most precise explanation for the inclusion of the ##1/N!## term.

vanhees71 said:
ad 3) I think there are no "classical systems", only quantum systems, though many-body systems very often behave classically due to decoherence and the fact that a coarse-grained description in terms of "macroscopic observables" is sufficient; these observables then behave classically to an overwhelming accuracy.
There are classical and quantum models of real systems. I do not think that a quantum model would be any good for describing a pack full of grains, or a galaxy of stars.

By the way, vanhees71, can you please tell me: do you think that the permutation term ##[ N! /(N_1! N_2!) ]## should be included when counting the number of states of an isolated system divided in two parts that can exchange classical particles? I would like to know your answer to that.
 
  • #80
The correct answer is that you have to take out the ##N!##. This nobody doubts. We have only discussed whether this is justified within a strictly classical model (my opinion is, no) or whether you need to argue either with the extensivity condition for entropy (which was how Boltzmann, Gibbs et al. argued at the time), i.e., by a phenomenological argument (i.e., you introduce another phenomenological principle into classical statistical physics), or with quantum mechanics and the indistinguishability of identical particles, which is part of the foundations of quantum theory.
 
  • Like
Likes hutchphd
  • #81
vanhees71 said:
The correct answer is that you have to take out the ##N!##. This nobody doubts. We have only discussed whether this is justified within a strictly classical model (my opinion is, no) or whether you need to argue either with the extensivity condition for entropy (which was how Boltzmann, Gibbs et al. argued at the time), i.e., by a phenomenological argument (i.e., you introduce another phenomenological principle into classical statistical physics), or with quantum mechanics and the indistinguishability of identical particles, which is part of the foundations of quantum theory.
It is not a question of opinion. I presented a logical reason for the inclusion of this term under the classical model. I would imagine that, to state that there is no justification under classical physics for the ##1/N!##, you should refute my argument.
(I say my argument, but van Kampen attributes the explanation to Ehrenfest.)

This term comes from counting the possible permutations of classical particles between systems that can exchange particles. One does not need to appeal to quantum mechanics. Van Kampen writes:

In statistical mechanics this dependence is obtained by inserting a factor ##1/N!## in the partition function. Quantum mechanically this factor enters automatically and in many textbooks that is the way in which it is justified. My point is that this is irrelevant: even in classical statistical mechanics it can be derived by logic — rather than by the somewhat mystical arguments of Gibbs and Planck. Specifically I take exception to such statements as: "It is not possible to understand classically why we must divide by N! to obtain the correct counting of states", and: "Classical statistics thus leads to a contradiction with experience even in the range in which quantum effects in the proper sense can be completely neglected".

These two quotes that van Kampen disses are from Huang (1963) and Münster (1969).
 
  • #82
autoUFC said:
My point is that this is irrelevant: even in classical statistical mechanics it can be derived by logic
Thanks for this discussion, it has been enlightening.
I disagree with this characterization. Within classical mechanics the 1/N! is a rubric pasted on ex post facto to make the statistical theory work out. Statistical mechanics should not require a redefinition of how we count items!
Until this discussion I hadn't fully appreciated how directly this pointed to the need for something that looks a lot like Quantum Mechanics.
 
  • Like
Likes vanhees71 and DrClaude
  • #83
hutchphd said:
Thanks for this discussion, it has been enlightening.
I disagree with this characterization. Within classical mechanics the 1/N! is a rubric pasted on ex post facto to make the statistical theory work out. Statistical mechanics should not require a redefinition of how we count items!
Until this discussion I hadn't fully appreciated how directly this pointed to the need for something that looks a lot like Quantum Mechanics.
Well...
I believe that to disagree you should refute my argument. Can you tell me the following:
How many states are available to an isolated system that is composed of two subsystems 1 and 2 that exchange classical particles? In your answer you should consider that subsystem 1 has ##W_1(E_1,N_1, V_1)## accessible states and subsystem 2 has ##W_2(E_2,N_2, V_2)## accessible states.

My answer is: the number of states ##W## available to the isolated system composed of the two subsystems of classical particles is
##W=W_1×W_2×[ N!/ (N_1! N_2!)]##

Is my answer wrong? If it is not wrong, can you see the factors ##1/N_1!## and ##1/N_2!##? These factors are NOT a rubric pasted on ex post facto to make the statistical theory work out. In fact, I would say that not including them would be an illogical redefinition of how we count items!

I really feel I am being trolled, as people keep disagreeing with me without addressing my arguments.
 
  • Skeptical
Likes weirdoguy
  • #84
With all due respect, I will answer as I choose. Several people (smarter than I) have clearly told you that your arithmetic is fine but there is no classical reason to count that way, other than that it gives you the obviously desired answer. That does not make it "logical".

You are not being trolled. Perhaps you are not trying hard enough to understand?
 
  • Like
Likes vanhees71, DrClaude and weirdoguy
  • #85
hutchphd said:
With all due respect, I will answer as I choose. Several people (smarter than I) have clearly told you that your arithmetic is fine but there is no classical reason to count that way, other than that it gives you the obviously desired answer. That does not make it "logical".

You are not being trolled. Perhaps you are not trying hard enough to understand?
Perhaps you are not trying hard enough to understand. You say there is no classical reason to count this way. What other way to count exists? As far as I know, counting is neither classical nor quantum. I do not count this way to obtain a desired answer; what do you mean by that, anyway?

If you were to count in your way, what would be your result? Can you please tell me?
 
  • #86
Stephen Tashi said:
I don't know which thread participants have read Jaynes's paper http://bayes.wustl.edu/etj/articles/gibbs.paradox.pdf , but the characterization of that paper as "Gibbs made a mistake" is incorrect. The paper says that Gibbs explained the situation correctly, but in an obscure way.

I have read Jaynes again. He says that Gibbs probably presented the correct explanation in an early work, only that he phrased his thoughts in a confusing way. As Jaynes writes about Gibbs's text:
"The decipherment of this into plain English required much effort, sustained only by faith in Gibbs; but eventually there was the reward of knowing how Champollion felt when he realized that he had mastered the Rosetta stone."

But Jaynes places the blame on those that followed:
"In particular, Gibbs failed to point out that an "integration constant" was not an arbitrary
constant, but an arbitrary function. But this has, as we shall see, nontrivial physical consequences.
What is remarkable is not that Gibbs should have failed to stress a fine mathematical point in almost the last words he wrote; but that for 80 years thereafter all textbook writers (except possibly Pauli) failed to see it."

I have to say, however, that Jaynes is quite unclear also. One does not find in Jaynes the term ##[N! /( N_1! N_2!)]## that I see as the key to the problem. Van Kampen is a bit better, but he also does not stress the mathematical point clearly. The binomial coefficient appears in an unnumbered equation between eqs. (9) and (10). In my opinion, the best explanation is in the work by Swendsen. Also, the work by Hjalmar, regarding the buckyballs, has the explanation with this binomial coefficient.
 
  • #87
Philip Koeck said:
This factor 1/N! is only correct for low occupancy, I believe. One has to assume that there is hardly ever more than one particle in the same state or phase space cell. Do you agree?
What happens if occupancy is not low?

In classical systems two particles cannot be in the same state, as the state is determined by a point in a continuous phase space. If you consider phase cells, you could choose them so small that no two particles will ever occupy the same cell.
There is the Maxwell-Boltzmann statistics,
https://en.m.wikipedia.org/wiki/Maxwell–Boltzmann_statistics
As I see it, this would be a statistics for quantum-like particles (as states are assumed to be discrete, possibly with multiple occupancy) that are permutable (if two change places you get a different state). As you correctly state, in the limit where occupancy is low, MB agrees with the quantum statistics.
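To make the low-occupancy limit quantitative, here's a small sketch (my own illustration) comparing the mean occupation numbers of the three statistics at ##x = (E-\mu)/kT##:

```python
from math import exp

# Mean occupation number per single-particle state at x = (E - mu)/kT
def n_MB(x): return exp(-x)             # Maxwell-Boltzmann
def n_BE(x): return 1.0 / (exp(x) - 1)  # Bose-Einstein
def n_FD(x): return 1.0 / (exp(x) + 1)  # Fermi-Dirac

for x in (0.1, 1.0, 5.0, 10.0):
    print(f"x={x:5.1f}  MB={n_MB(x):.4e}  BE={n_BE(x):.4e}  FD={n_FD(x):.4e}")

# At x = 10 the occupancy is ~4.5e-5 and the three agree to that same
# relative accuracy; at x = 0.1 (occupancy of order 1) they differ wildly.
```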
 
  • Like
Likes Philip Koeck
  • #88
vanhees71 said:
I must admit that, though I looked for such experiments on Google Scholar, I couldn't find any. I guess it's very hard to realize such a "Gibbs-paradox setup" in the real world and measure the entropy change.
Pauli, in his lecture notes on thermodynamics, describes such an experiment which does not use semipermeable membranes but temperature changes to freeze out the components.

You could also think of using, e.g., specific antibodies attached to a "fishing rod", i.e., a carrier. By changing the temperature or buffer conditions, you can reversibly bind or remove specific molecules. The occupation of the binding sites will depend on the concentration of the molecules, which changes upon mixing. So binding to the fishing rods will occur at lower temperatures in the mixture as compared to the unmixed solutions.
 
  • Like
Likes vanhees71
  • #89
DrDu said:
You could also think of using, e.g., specific antibodies attached to a "fishing rod", i.e., a carrier. By changing the temperature or buffer conditions, you can reversibly bind or remove specific molecules. The occupation of the binding sites will depend on the concentration of the molecules, which changes upon mixing. So binding to the fishing rods will occur at lower temperatures in the mixture as compared to the unmixed solutions.
I assume that you know this to be the basis for an entire class of "lateral flow assays" (see ELISA), and these can provide very high sensitivity optically. I am not sure whether these could be made easily reversible (but I'm certain this reflects only my personal lack of knowledge). Interesting thoughts.
 
  • #90
vanhees71 said:
We have only discussed whether this is justified within a strictly classical model (my opinion is, no) or whether you need to argue either with the extensivity condition for entropy (which was how Boltzmann, Gibbs et al. argued at the time), i.e., by a phenomenological argument (i.e., you introduce another phenomenological principle into classical statistical physics)

If I am understanding Jaynes' argument correctly, he is arguing that you can justify the ##1 / N!## factor in classical physics using a phenomenological argument. At the bottom of p. 2 of his paper, he says:

...in the phenomenological theory Clausius defined entropy by the integral of ##dQ / T## over a reversible path; but in that path the size of a system was not varied, therefore the dependence of entropy on size was not defined.

He then references Pauli (1973) for a correct phenomenological analysis of extensivity, which would then apply to both the classical and quantum cases. He discusses this analysis in more detail in Section 7 of his paper.
 
  • #91
autoUFC said:
I really feel I am being trolled, as people keep disagreeing with me without addressing my arguments.

I see no indication that anyone in this thread is trolling.

autoUFC said:
You say there is no classical reason to count this way. What other way to count exists?

A way that takes into account what happens if ##N## changes. If you only consider processes in which ##N## is constant, which is all you have considered in your posts, you cannot say anything about the extensivity of entropy. To even address that question at all, you need to consider processes in which ##N## changes. That is the key point Jaynes makes in the passage from his paper that I quoted in my previous post.

autoUFC said:
One does not find in Jaynes the term ##[N! /( N_1! N_2!)]## that I see as the key to the problem.

Jaynes in Section 7 of his paper is discussing a general treatment of extensivity (how entropy varies with ##N##), not the particular case of two types of distinguishable particles that you are considering. His general analysis applies to your case and for that case it will give the term you are interested in.
 
  • Informative
Likes hutchphd
  • #92
PeterDonis said:
If I am understanding Jaynes' argument correctly, he is arguing that you can justify the ##1 / N!## factor in classical physics using a phenomenological argument. At the bottom of p. 2 of his paper, he says:
He then references Pauli (1973) for a correct phenomenological analysis of extensivity, which would then apply to both the classical and quantum cases. He discusses this analysis in more detail in Section 7 of his paper.
I checked Pauli vol. III. There is only the treatment of ideal-gas mixtures within phenomenological thermodynamics, and there you start from the correct extensive entropy. After the usual derivation of the entropy change when letting two different gases diffuse into each other,
$$\bar{S}-S=R [n_1 \ln(V/V_1)+n_2 \ln(V/V_2)] > 0,$$
he states:

"The increase in entropy, ##\bar{S}-S##, is independent of the nature of the two gases. They must simply be different. If both gases are the same, then the change in entropy is zero; that is
$$\bar{S}-S=0.$$
We see, therefore, that there is no continuous transition between two gases. The increase in entropy is always finite, even if the gases are only infinitesimally different. However, if the two gases are the same, then the change in entropy is zero. Therefore, it is not allowed to let the difference between two gases gradually vanish. (This is important in quantum theory.)"

The issue with the counting in statistical mechanics is not discussed, but I think the statement is very clear that there is mixing entropy for different gases and (almost trivially) none for identical gases. I still don't see where in all this there is a justification in classical statistics for the factor ##1/N!## in the counting of "complexions", other than the phenomenological input that the entropy must be extensive. From a microscopic point of view there is no justification other than the indistinguishability due to quantum theory. I think you need quantum theory to justify the factor ##1/N!## in the counting of complexions, and that you also need quantum theory to get a well-defined entropy, because you need ##h=2 \pi \hbar## as the phase-space-volume measure to count phase-space cells. Then you also don't get the horrible expressions for the classical entropy with dimensionful arguments of logarithms. That's all I'm saying.
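For what it's worth, Pauli's formula above gives the familiar numbers directly; a minimal sketch (mine, with the standard gas constant):

```python
from math import log

R = 8.314462618  # gas constant, J/(mol K)

def delta_S(n1, V1, n2, V2):
    # Pauli's mixing entropy for two DIFFERENT ideal gases that end up
    # sharing V = V1 + V2 (moles n1, n2); zero by fiat for identical gases
    V = V1 + V2
    return R * (n1 * log(V / V1) + n2 * log(V / V2))

# One mole of each gas in equal half-volumes: the textbook 2 R ln 2,
# independent of what the two gases are, as long as they differ at all.
print(delta_S(1.0, 1.0, 1.0, 1.0))  # ~11.53 J/K
print(2 * R * log(2))               # same
```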
 
  • #93
vanhees71 said:
I checked Pauli vol. III. There is only the treatment of ideal-gas mixtures within phenomenological thermodynamics, and there you start from the correct extensive entropy.

Yes, Jaynes explicitly says that Pauli did not prove that entropy must be extensive, he just assumed it and showed what the phenomenology would have to look like if it was true.

vanhees71 said:
there is no continuous transition between two gases. The increase in entropy is always finite, even if the gases are only infinitesimally different

What does "infinitesimally different" mean? There is no continuous parameter that takes one gas into the other. You just have two discrete cases: one type of gas particle, or two types of gas particle mixed together.

Jaynes' treatment seems to me to correctly address this as well. Let me try to summarize his argument:

(1) In the case where we have just one type of gas particle, removing the barrier between the two halves of the container does not change the macrostate at all. We still have one type of gas particle, spread through the entire container. So macroscopically, the process of removing the barrier is easily reversible: just reinsert the barrier. It is true that particles that were confined to the left half of the container before are now allowed to be in the right half, and vice versa, and reinserting the barrier does not put all of the particles that were confined to each half back where they were originally; the precise set of particles that are confined to each half will be different after the barrier is reinserted, as compared to before it was removed. But since all of the particles are of the same type, we have no way of distinguishing the state before the barrier was removed from the state after the barrier was reinserted, so there is no change in entropy.

(2) In the case where we have two types of gas particle, removing the barrier does change the macrostate; now we have to allow for particles of both types being in both halves of the container, instead of each type being confined to just one half. This is reflected in the fact that the process of removing the barrier is now not reversible: we can't just reinsert the barrier and get back the original macrostate. To get back the original macrostate, we would have to pick out all the particles that were in the "wrong" half of the container and move them back to where they were before the barrier was removed. The mixing entropy ##N k \ln 2## is a measure of how much information is required to perform that operation, which will require some external source of energy and will end up increasing the entropy of that external source by at least that much (for example by forcing heat to flow from a hot reservoir to a cold reservoir and decreasing the temperature difference between them).

There is no continuum between these two alternatives; they are discrete. Alternative 2 obtains if there is any way of distinguishing the two types of gas particle available to us. It doesn't depend on any notion of "how different" they are.

vanhees71 said:
I still don't see where in all this there is a justification in classical statistics for the factor ##1 / N!## in the counting of "complexions"

Jaynes, in section 7 of his paper, shows how this factor arises, in appropriate cases (as he notes, entropy is not always perfectly extensive), in a proper analysis that includes the effects of changing ##N##. As he comments earlier in his paper (and as I have referenced him in previous posts), if your analysis only considers the case of constant ##N##, you can't say anything about how entropy varies as ##N## changes. To address the question of extensivity of entropy at all, you have to analyze processes where ##N## changes.

vanhees71 said:
you need ##h = 2 \pi \hbar## as the phase-space-volume measure to count phase-space cells. Then you also don't get the horrible expressions for the classical entropy with dimensionful arguments of logarithms.

Jaynes addresses this as well: he notes that, in the phenomenological analysis, a factor arises which has dimensions of action, but the phenomenology gives no explanation of where it comes from. I agree that you need quantum mechanics to supply that explanation.
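For reference, the quantum-corrected result is the standard Sackur-Tetrode formula (textbook form, not quoted from Jaynes), in which ##h## enters through the thermal de Broglie wavelength ##\lambda## and makes the argument of the logarithm dimensionless:
$$S = N k \left[ \ln\!\left(\frac{V}{N\lambda^3}\right) + \frac{5}{2} \right], \qquad \lambda = \frac{h}{\sqrt{2\pi m k T}}.$$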
 
  • Like
Likes dextercioby, hutchphd and vanhees71
  • #94
PeterDonis said:
What does "infinitesimally different" mean? There is no continuous parameter that takes one gas into the other. You just have two discrete cases: one type of gas particle, or two types of gas particle mixed together.
Don't ask me; that's what Pauli wrote. Of course there's no continuous parameter, but that is also not explainable within classical mechanics: there is no consistent classical model of matter. It's not by chance that quantum theory was discovered because of the inconsistencies of classical statistical physics. It all started with Planck's black-body radiation law: classical statistics of the electromagnetic field leads to the Rayleigh-Jeans catastrophe, so there was no way out other than Planck's "act of desperation". Other examples are the specific heats at low temperature, the impossibility of deriving Nernst's 3rd Law from classical statistics, and, last but not least, the Gibbs paradox discussed here.
PeterDonis said:
Jaynes' treatment seems to me to correctly address this as well. Let me try to summarize his argument:
Sure, I fully agree with Jaynes' treatment, and of course there's no paradox if you use the information-theoretical approach and the indistinguishability of identical particles from quantum theory.

I just don't think that there is any classical justification for the crucial factor ##1/N!##, because classical mechanics tells you that you can follow any individual particle's trajectory in phase space. That this is practically impossible is not an argument for this factor, because you have to count different microstates compatible with the macrostate under consideration.

Anyway, the discussion is moot, because today we know the solution of all these quibbles: it's QT!
 
  • Like
Likes hutchphd
  • #95
vanhees71 said:
classical mechanics tells you that you can follow any individual particle's trajectory in phase space. That this is practically impossible is not an argument for this factor, because you have to count different microstates compatible with the macrostate under consideration.

This would imply that in classical mechanics there can never be any such thing as a gas with only one type of particle, or even a gas with just two types: a gas containing ##N## particles would have to have ##N## types of particles, since each individual particle must be its own type: each individual particle is in principle distinguishable from every other one, and microstates must be counted accordingly.

If this is the case, then there can't be any Gibbs paradox in classical mechanics because you can't even formulate the scenario on which it is based.
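A brute-force count shows what "counted accordingly" does to the numbers (my own sketch): with fully labeled particles, the "half left, half right" macrostate contains ##\binom{N}{N/2}## microstates, whose logarithm tends to ##N \ln 2##, whereas with unlabeled particles it is a single arrangement.

```python
# Sketch: microstate counting with fully labeled (classical) particles.
# Each assignment of N labeled particles to the two halves is a distinct
# microstate, so the macrostate "N/2 in each half" contains C(N, N/2)
# of them; ln C(N, N/2) approaches N ln 2 for large N.
import math

for N in (4, 10, 100, 10_000):
    lnW = math.lgamma(N + 1) - 2 * math.lgamma(N // 2 + 1)  # ln C(N, N/2)
    print(f"N = {N:6d}: ln C(N, N/2) = {lnW:10.2f},  N ln 2 = {N * math.log(2):10.2f}")
# Treating the particles as unlabeled instead collapses this macrostate
# to a single arrangement, and the combinatorial mixing term disappears.
```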
 
  • Like
Likes dextercioby, hutchphd and vanhees71
  • #96
I have a question related to this topic. Since the canonical ensemble corresponds to an open system, and the micro-canonical ensemble corresponds to a closed system, is it true that the idea of density matrix is the appropriate way to describe canonical ensembles and not micro-canonical ensembles?
 
  • #97
PeterDonis said:
If this is the case, then there can't be any Gibbs paradox in classical mechanics because you can't even formulate the scenario on which it is based.
Not sure about that.
If entropy is only a function of macroscopic variables then you don't even take into account which particle is which. So it shouldn't matter whether they are distinguishable or not.
Even for distinguishable particles you would conclude that mixing two identical gases will not increase entropy, wouldn't you?
 
  • Like
Likes vanhees71
  • #98
PeterDonis said:
This would imply that in classical mechanics there can never be any such thing as a gas with only one type of particle, or even a gas with just two types: a gas containing ##N## particles would have to have ##N## types of particles, since each individual particle must be its own type: each individual particle is in principle distinguishable from every other one, and microstates must be counted accordingly.

If this is the case, then there can't be any Gibbs paradox in classical mechanics because you can't even formulate the scenario on which it is based.
Well, yes. That's an extreme formulation, but I'd say it's correct. It just shows once more that classical mechanics is not the correct description of matter, because it contradicts the observations. You need quantum theory!
 
  • #99
love_42 said:
I have a question related to this topic. Since the canonical ensemble corresponds to an open system, and the micro-canonical ensemble corresponds to a closed system, is it true that the idea of density matrix is the appropriate way to describe canonical ensembles and not micro-canonical ensembles?
All quantum states are described by statistical operators, a.k.a. density matrices.

The difference between the standard ensembles lies in the constraints under which you maximize the Shannon-Jaynes-von Neumann entropy to find the equilibrium statistical operator:

Microcanonical: You have a closed system and know only the values of the conserved quantities (the 10 from spacetime symmetry: energy, momentum, angular momentum, and center-of-mass velocity, plus possibly some conserved charges like baryon number, strangeness, and electric charge).

Canonical: The considered system is part of a larger system and can exchange (heat) energy with it. The energy of the subsystem is known only on average (see the sketch after this list).

Grand-Canonical: The considered system is part of a larger system but you can exchange (heat) energy and particles between the systems. The energy and conserved charges of the subsystem are known only on average.
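As a concrete illustration of the canonical case (a sketch only; the three-level spectrum and the target mean energy are invented for the example), maximizing the Shannon entropy subject to a fixed average energy yields the Gibbs distribution ##p_i \propto e^{-\beta E_i}##, with ##\beta## fixed by the constraint:

```python
# Sketch: canonical ensemble from the maximum-entropy principle.
# Find beta such that the Gibbs distribution p_i ~ exp(-beta E_i)
# reproduces a prescribed mean energy, then report p and its entropy.
import math

E = [0.0, 1.0, 2.0]   # toy three-level spectrum (arbitrary units)
E_target = 0.7        # prescribed average energy -- an invented number

def mean_energy(beta):
    w = [math.exp(-beta * e) for e in E]
    return sum(e * wi for e, wi in zip(E, w)) / sum(w)

# Bisection on beta; mean_energy decreases monotonically with beta.
lo, hi = -50.0, 50.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if mean_energy(mid) > E_target else (lo, mid)
beta = 0.5 * (lo + hi)

w = [math.exp(-beta * e) for e in E]
Z = sum(w)
p = [wi / Z for wi in w]
S = -sum(pi * math.log(pi) for pi in p)   # Shannon entropy, units of k
print(f"beta = {beta:.4f}")
print("p =", [round(pi, 4) for pi in p])
print(f"S/k = {S:.4f}, <E> = {sum(pi * e for pi, e in zip(p, E)):.4f}")
```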
 
  • Like
Likes dextercioby
  • #100
Philip Koeck said:
Not sure about that.
If entropy is only a function of macroscopic variables then you don't even take into account which particle is which. So it shouldn't matter whether they are distinguishable or not.
Even for distinguishable particles you would conclude that mixing two identical gases will not increase entropy, wouldn't you?
True, but this is the phenomenological approach, and there you don't use statistical physics to derive the thermodynamic quantities to begin with, so no counting problem occurs. That's the strength of phenomenological thermodynamics: you take a few simple axioms based on observations and describe very many phenomena very well. That's why the physicists of the 18th and 19th centuries got so far with thermodynamics that they thought there were only a "few clouds" on the horizon, which would be dispelled by more accurate observations and better axioms.

Together with the statistical approach à la Boltzmann, Maxwell, and Gibbs, and later Planck and Einstein, one had to learn that what was really needed was a "revolutionary act of desperation": quantum theory had to be developed.
 
  • Like
Likes Philip Koeck