Gibbs paradox: an urban legend in statistical physics

AI Thread Summary
The discussion centers on the assertion that the Gibbs paradox in classical statistical physics is a misconception, with claims that no real paradox exists regarding the mixing of distinguishable particles. Participants reference key works by Van Kampen and Jaynes, arguing that Gibbs's original conclusions about indistinguishability were flawed and that the N! term in entropy calculations is logically necessary rather than paradoxical. The conversation highlights the distinction between classical and quantum mechanics, emphasizing that classical mechanics treats particles as distinguishable, which should yield measurable mixing entropy. Some contributors challenge the notion that the paradox reveals inconsistencies in classical models, suggesting that it is an urban legend rather than a fundamental issue. The debate underscores the ongoing confusion and differing interpretations surrounding the implications of the Gibbs paradox in both classical and quantum contexts.
  • #151
autoUFC said:
So we have two possibilities.
1) You start already in equilibrium when you remove the partition. Then you have no entropy increase. You may put the partition back and remain in equilibrium. At every moment you have extensivity.

2) You start out of equilibrium when you remove the partition. Then you have the partial pressures of each kind of gas to define the macroscopic state of each compartment. Again, you have extensivity.

In the mixing scenario that has been under discussion throughout this entire thread, we are talking about 2). In your 1) above, the partial pressure of both gases must be the same on both sides of the partition, and that is not the initial condition we have been discussing. Of course if the gases are already mixed, you can insert and remove the partition as much as you like and it won't change anything--not entropy, and not the partial pressures of either gas. But that is not the scenario we have been discussing.

In the scenario we have been discussing, i.e., 2) above, the partial pressure of each gas is zero on one side of the partition, and positive on the other side. So yes, this partial pressure differential can be viewed as driving the mixing process. And, as I have said before, if you include a chemical potential term in your analysis (which here will include the partial pressure of each gas), you can keep track of the mixing process with it.

However, note carefully that, in case 2) as I have just described it, there is still mixing entropy. The process of mixing is still not reversible. And Jaynes' general point about all such cases still holds: in such cases it doesn't even make sense to talk about extensivity of entropy, because macroscopic variables are changing.
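
For concreteness, here is the standard bookkeeping for case 2) (a minimal sketch, assuming ideal gases with the same particle number ##N##, volume ##V##, and temperature ##T## on the two sides): after the partition is removed, each gas expands isothermally into the doubled volume, so
$$\Delta S_{\text{mix}} = Nk\ln\frac{2V}{V} + Nk\ln\frac{2V}{V} = 2Nk\ln 2 > 0,$$
which is the nonzero mixing entropy just described; in case 1) the corresponding change is zero.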
 
  • #152
autoUFC said:
You said that Jaynes writes that entropy of mixing and vapor pressure are instances of non-extensivity. Can you tell us what in Jaynes' text gave you this impression?

Jaynes' discussion in sections 5, 6, and 7 of his paper shows, to begin with, that assigning any entropy at all to a system depends on how we choose to partition its microstates into macrostates. His discussion throughout the paper also shows that extensivity of entropy is not derived from first principles in most treatments, but simply assumed without argument. Probably the best general statement he gives for what you come up with when you try to actually derive conclusions about extensivity of entropy from first principles is item (b) at the bottom of p. 12 of the paper.

The discussion of mixing in section 5 of Jaynes' paper also makes the important point that, to assign entropy to a system at all, we have to make assumptions about how the microscopic state space of the system is partitioned in terms of macroscopic variables. Which means that assignments of entropy depend on our technological capabilities, since those determine what we consider to be the relevant macroscopic variables. So claims like "entropy is extensive", even in cases where they are true, must be understood as claims about our current knowledge, not inherent properties of the system. Jaynes' discussion of how one experimenter with technical capabilities that a second experimenter lacks can create effects that look to the second experimenter like second law violations is instructive.
 
  • #153
autoUFC said:
Most books and all my professors suggest that an extensive entropy cannot be defined for distinguishable particles.

Btw, to take the discussion back to your original post that started this thread, Jaynes agrees with you that it is possible to define an extensive entropy for distinguishable particles. He shows how to do it.
 
  • #154
HPt said:
Hi, I'm the author of the paper Eur. J. Phys. 35 (2014) 015023 (also freely available at arXiv). As I explicitly demonstrate in my paper, there is a paradox that manifests as a contradiction to the second law
PeterDonis said:
In the scenario we have been discussing, i.e., 2) above, the partial pressure of each gas is zero on one side of the partition, and positive on the other side. So yes, this partial pressure differential can be viewed as driving the mixing process. And, as I have said before, if you include a chemical potential term in your analysis (which here will include the partial pressure of each gas), you can keep track of the mixing process with it.

However, note carefully that, in case 2) as I have just described it, there is still mixing entropy. The process of mixing is still not reversible. And Jaynes' general point about all such cases still holds: in such cases it doesn't even make sense to talk about extensivity of entropy, because macroscopic variables are changing.

In the problem of thermal contact, macroscopic variables are also changing. However, you said that exchange of heat is not a violation of extensivity. Why would exchange of particles be?
 
  • #155
PeterDonis said:
The discussion of mixing in section 5 of Jaynes' paper also makes the important point that, to assign entropy to a system at all, we have to make assumptions about how the microscopic state space of the system is partitioned in terms of macroscopic variables. Which means that assignments of entropy depend on our technological capabilities, since those determine what we consider to be the relevant macroscopic variables. So claims like "entropy is extensive", even in cases where they are true, must be understood as claims about our current knowledge, not inherent properties of the system. Jaynes' discussion of how one experimenter with technical capabilities that a second experimenter lacks can create effects that look to the second experimenter like second law violations is instructive.

Yes. Jaynes is instructive. However, I do not agree completely with the way he states his arguments. Maybe I misunderstand, but reading this discussion I have the impression that he argues that the increase in entropy due to mixing depends on having the capability of segregating the mixed gases.
That gets us to the uncomfortable situation where the entropy of a system depends on our current capabilities. There is also the question of a large number of pairwise distinguishable particles. We have agreed that in this case entropy may or may not increase in mixing. As you said, if one ignores where each of the particles was before mixing, there is no entropy increase due to mixing.

I say additionally that entropy increases only if you have a simple rule describing how the particles were separated before mixing. My particular way of thinking is that our capabilities only determine how to convert the free energy before mixing into work. But entropy in a mixing process will increase as long as there is a simple way to describe the initial state.

Maybe these two ways to understand this question are like interpretations of quantum mechanics. That is, they lead to the same predictions.

If you have a simple way to determine how particles are segregated before mixing, you could have semi-permeable membranes that only let through particles that were initially in one subsystem. With these semi-permeable membranes you can convert free energy into work.

If the only way to determine how particles are segregated before mixing is a list of the initial positions of each particle, a semi-permeable membrane would be a Maxwell demon.

Jaynes' interpretation is based on the experimenter's capacity. The interpretation I am most comfortable with is based on the experimenter's information.
 
  • #156
PeterDonis said:
Btw, to take the discussion back to your original post that started this thread, Jaynes agrees with you that it is possible to define an extensive entropy for distinguishable particles. He shows how to do it.
Yes. We know that including the ##1/N!## leads to an extensive entropy, and Jaynes (following Pauli) tells us that. However, he does not present the argument based on the inclusion of the permutation term that accounts for the ignorance of which particles are in which subsystem after the barrier is removed.

As he writes:
"Note that the Pauli analysis has not demonstrated from the principles of physics that entropy actually should be extensive; it has only indicated the form our equations must take if it is."
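
For readers who want the step spelled out, here is a sketch of the standard bookkeeping (textbook material, not specific to Jaynes' paper): dividing the phase-space volume by ##N!## and using Stirling's approximation ##\ln N! \approx N\ln N - N## turns the non-extensive ##Nk\ln V## term into ##Nk\ln(V/N)##, giving the Sackur-Tetrode form
$$S=Nk\left[\ln\left(\frac{V}{N\lambda^3}\right)+\frac{5}{2}\right],$$
which is homogeneous of degree 1 in ##(U,V,N)## since ##\lambda## is intensive. Jaynes' point in the quoted sentence is that this fixes the form extensivity requires; it does not derive extensivity from the dynamics.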
 
  • #157
autoUFC said:
I see no problem. The arguments of those authors show that there is no entropy of mixing in the classical model of particles.
But there must be mixing entropy for non-identical gases which were separated by the divider wall and then diffuse over the entire volume after taking out the wall.

As I said before, I cannot find a real-world experimental measurement to demonstrate this.

I also still don't see a convincing argument for a strictly classical derivation of the correct extensive entropy without invoking quantum-mechanical arguments (absolute phase-space element scale being ##h=2 \pi \hbar## and indistinguishability of identical particles).
 
  • #158
vanhees71 said:
Of course, if you have a gas consisting of two kinds of non-identical particles in a large volume you don't separate them by simply putting in a divider adiabatically. You'd just have two parts of the mixed gas in thermal equilibrium and no entropy change in putting in the divider (that's a tautology, because adiabatic means it's without change of entropy). To separate the non-identical particles in two distinct parts of the volume you need to do work and the entropy of the gas sorted into the two parts is lowered. Of course, you need to increase the entropy elsewhere, and the answer to that puzzle is due to Landauer and Szilard using the information-theoretical approach to entropy in connection to the famous "Maxwell demon". Here you have a demon sorting non-identical particles, while in the original Maxwell-demon setup the demon sorts particles by energy. The principle is of course the same.

That this information-theoretical approach is the correct one has been empirically shown recently also in the context with quantum statistics. The quantum Maxwell demon precisely worked as predicted using the information-theoretical approach.

Although I believed in a Landauer-type Maxwell demon explanation of the absence of mixing entropy before, and even expressed this earlier in this thread, I no longer think this is relevant to explaining the mixing entropy of distinguishable particles. The example of the mixing of DNA showed that you can have a fishing rod for distinguishable particles and use it to make the mixing reversible. While the fishing rod contains information on the particle, it can be re-used, so that a thermodynamic cycle can in principle be run arbitrarily often. Furthermore, the generation of the fishing rod and the DNA may itself take place in a random fashion, so one does not necessarily have to record the identity of every single particle. It also puts a spotlight on the question of extensivity: with a finite-size label, you cannot label an infinite number of particles. This also holds true for isotopic labelling. Obviously, there is only a finite number of ways to isotopically label a molecule, though this number may be huge.
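
To put a rough number on "huge" (a back-of-the-envelope count that ignores the molecule's rotational symmetry, so it is an overestimate by a factor of order the symmetry group): a ##\text{C}_{60}## cage built from 30 ##^{12}\text{C}## and 30 ##^{13}\text{C}## atoms admits
$$\binom{60}{30} \approx 1.2\times 10^{17}$$
distinct isotope patterns, so even this single labelling scheme supplies an enormous, but still finite, stock of pairwise distinguishable molecules.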
 
  • #159
vanhees71 said:
But there must be mixing entropy for non-identical gases which were separated by the divider wall and then diffuse over the entire volume after taking out the wall.

As I said before, I cannot find a real-world experimental measurement to demonstrate this.

I also still don't see a convincing argument for a strictly classical derivation of the correct extensive entropy without invoking quantum-mechanical arguments (absolute phase-space element scale being ##h=2 \pi \hbar## and indistinguishability of identical particles).

The question of classical versus quantum becomes irrelevant considering the setup proposed by HPt, I think.

In the thought experiment he proposed, with a large number of distinguishable quantum particles, there is no mixing entropy. Would you agree with that?
 
  • #160
autoUFC said:
In the thought experiment he proposed, with a large number of distinguishable quantum particles, there is no mixing entropy. Would you agree with that?
Please point to a specific reference. Semantics do not provide specificity.
 
  • #161
In classical mechanics particles are not indistinguishable because, even if they look identical, they can be labeled by a set of coordinates in space and time (unique for each particle!). In this way you can distinguish between "identical" particles by following their paths instant by instant. So if you embrace a fully classical point of view, particles can always be distinguished just because they occupy a specific position in space and time, and the Gibbs paradox is not solvable (at least in my opinion) because there is no a priori reason to introduce the ##\frac 1 {N!}## factor. It is different, however, if you decide to disregard this difference by saying: well, even if I could distinguish particles by their positions, it should not really change anything because they are identical in all their properties, so why not just ignore this and divide by ##N!##? This is how you solve the Gibbs paradox from a classical-mechanics point of view, but to me it is not rigorous. Only QM tells you a priori that particles are indistinguishable. I really don't see a way out: if you somehow manage to introduce that ##\frac 1 {N!}## you are also implicitly negating the possibility of labeling particles by their space-time coordinates.
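
In formulas, the choice being debated is (a standard sketch for the classical ideal gas, writing ##z = V/\lambda^3## for the one-particle partition function):
$$Z_N = z^N \;\;\text{(labeled particles)} \qquad \text{vs.} \qquad Z_N = \frac{z^N}{N!} \;\;\text{(permutations not counted)},$$
and only the second choice makes ##S = k(\ln Z_N + \beta U)## extensive. The question raised above is whether anything in classical mechanics itself licenses dropping the ##N!## permutations.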
 
  • #162
autoUFC said:
The question of classical versus quantum becomes irrelevant considering the setup proposed by HPt, I think.

In the thought experiment he proposed, with a large number of distinguishable quantum particles, there is no mixing entropy. Would you agree with that?
No, why? Whenever I first put distinguishable particles into separated parts of a box and then take out the dividing walls (adiabatically), I get an irreversible diffusion mixing all the particles in the whole volume, no matter how many sorts of particles I use for the gedanken experiment.
 
  • #163
hutchphd said:
Please point to a specific reference. Semantics do not provide specificity.
HPt said:
Hi, I'm the author of the paper Eur. J. Phys. 35 (2014) 015023 (also freely available at arXiv). As I explicitly demonstrate in my paper, there is a paradox that manifests as a contradiction to the second law of thermodynamics. This contradiction does not only arise for classical distinguishable particles but, as demonstrated in my paper, also within the quantum realm. The reason for this is that quantum particles, too, may be distinguishable: as an example, think of a gas of buckyball molecules where each buckyball is made up of two distinct carbon isotopes in such a way that no two buckyballs are identical. Therefore, the claim that the Gibbs paradox exposes an inconsistency of classical statistical mechanics, or that the Gibbs paradox is resolved by (the indistinguishability of identical particles in) QM, is false.
 
  • #164
vanhees71 said:
No, why? Whenever I first put distinguishable particles into separated parts of a box and then take out the dividing walls (adiabatically), I get an irreversible diffusion mixing all the particles in the whole volume, no matter how many sorts of particles I use for the gedanken experiment.

I was under the impression that we had agreed on this point.
Would you point out what in HPt's paper is at fault? Could you tell us what mistake he has made that led him to his conclusion (which you disagree with) that there will be no increase in entropy?
 
  • #165
Well, already the statement in the abstract that the mixing entropy for the Gibbs paradox with non-identical gases is false, is itself wrong. In fact the statement is right, and for ideal gases you can understand it qualitatively and intuitively from the fact that the initial state (two non-identical gases in a box separated by a wall, one gas in each part) and the final state (wall taken out, the gases completely mixed over the entire volume) are macroscopically distinct from each other, and that the mixing is irreversible, i.e., getting to the new equilibrium state after the wall is taken out, i.e., the diffusion of the gases throughout the full volume, is an irreversible process. You cannot sort the distinct types of gas molecules back into their parts by simply putting in a dividing wall adiabatically; you'd need a "demon", similar to Maxwell's demon, for that.
 
  • #166
Motore said:
Just to be clear: the correct mixing entropy is not only extensive entropy, but you also have to include additive entropy, right? Which means saying that mixing entropy is evidence of non-extensivity is correct.

Friends,
I will be away from the thread for a while. Personal things will keep me busy. I apologize if I do not answer immediately any question regarding one of my posts.

That was my first time posting on PF, and it has been interesting. I confess that I was frustrated at times. But now I have to say that I may have been unfair. The discussion has provided lots of food for thought.

At this moment I wish to state a few things of which I am still convinced, even after reading some of you arguing to the contrary.

1) There is no paradox in the question of mixing classical particles of the same kind. Swendsen has an argument that is as simple as it gets (see my first post for a specific reference).

The same argument (or something similar) was employed by HPt in his setup of distinguishable quantum particles.

2) Entropy of mixing is no violation of extensivity.

I stand by this statement only in the classical model.

I initially suggested that this idea was a confusion. However, the argument that you cannot place the particles in a given position after removing the barrier has a lot of power in the quantum setting. Quantum field theory is out of my comfort zone, but I am aware that quantum theory has some strange effects connected to boundary conditions (the Casimir effect comes to mind). BRB
 
  • #168
autoUFC said:
you said that exchange of heat is not a violation of extensivity

No, I said, following Jaynes, that in cases like exchange of heat, where a macroscopic variable is changing, it does not even make sense to talk about extensivity of entropy.
 
  • #169
autoUFC said:
he argues that the increase in entropy due to mixing depends on having the capability of segregating the mixed gases.
That gets us to the uncomfortable situation where the entropy of a system depends on our current capabilities.

Yes, exactly. That is Jaynes' point in section 5 of his paper. (He makes similar arguments in his book Probability: The Logic of Science.)

autoUFC said:
entropy in a mixing process will increase as long as there is a simple way to describe the initial state.

Only if you know about the simple way.

For example, consider two experimenters: experimenter A does not know the difference between gas A and gas B, while experimenter B does. Experimenter A will assign zero entropy change to the process of mixing gas A and gas B, because he doesn't know there are two gases present; he thinks he is just mixing two containers of the same gas. (Note that it does not matter whether he considers individual particles of the gas to be distinguishable or not.)

Experimenter B, who knows there are two gases, can use that information to extract useful work from the mixing process. And to experimenter A, it will look like experimenter B can violate the second law: he is extracting useful work from a process that is reversible (no entropy change), without any other external input or output. This is the kind of thing Jaynes is describing in Section 5 of his paper.
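
To attach numbers to this (a minimal sketch, assuming the usual setup of equal amounts ##N## of each gas at temperature ##T##, manipulated with semipermeable pistons): reversible isothermal mixing lets experimenter B extract up to
$$W_{\max} = T\,\Delta S_{\text{mix}} = 2NkT\ln 2,$$
while experimenter A, who assigns ##\Delta S = 0##, has no entropy budget from which this work could come.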

Of course you could say that, once we tell experimenter A about the difference between gas A and gas B, he will realize that he was simply using an incomplete model, and that experimenter B's model is the correct one. But scientific models are always tentative; it is always possible that we could make further discoveries in the future that would force us to change our model. For example, experimenter C could come along tomorrow, knowing that there are actually two subtypes of gas A and gas B, and could extract useful work from what looks to experimenter B like an even mixture of the two gases. We can't rule out such a possibility, so we have to treat even our best current knowledge about how to assign entropy to various systems as tentative, a reflection of that best current knowledge, not an objective property of the system independent of our knowledge.

autoUFC said:
Jaynes interpretation is based on the experimenter capacity. The interpretation I am most comfortable is based on the experimenter information.

There's no difference; what Jaynes means by "capacity" is the same thing as what you mean by "information". Jaynes is simply focusing on how the information is used to extract useful work from a process. If the information can't be used to extract useful work, it makes no difference to the entropy. That is the correct way of capturing what you mean by "a simple way to describe the state".
 
  • #170
Then I don't know what you mean by extensivity. Where does Jaynes state this? In his paper "The Gibbs Paradox", which we discussed above, he agrees with what I say, i.e., that there is mixing entropy and extractable work if the two non-identical gases are well separated first and then mixed. He uses pistons for his arguments, but that's equivalent to the argument with the separating wall. If, of course, you cannot distinguish the gases, you cannot put them separately into the two parts of the volume, and then you cannot extract the work because there is no entropy difference (p. 8 of the paper). Of course, I also fully agree with the information-theoretical meaning of entropy, because there are real-world experiments demonstrating just this theoretical argument for "quantum Maxwell demons", as established in the classic paper by Szilard (and also one by Landauer), but that's another (though related) story.

Let's argue purely within phenomenological thermodynamics. Then for an ideal gas you have
$$\mathrm{d} S=\frac{1}{T} (\mathrm{d} U + p \mathrm{d} V-\mu \mathrm{d} N).$$
So the natural independent variables for the entropy are the extensive variables ##U##, ##V##, and ##N##. Then extensivity means that entropy is a homogeneous function of degree 1, i.e.,
$$S(\alpha U,\alpha V,\alpha N)=\alpha S(U,V,N).$$
For an ideal gas the Sackur-Tetrode formula is extensive in this sense (see the formulae in my posting #122). Note that in this formula the thermal wavelength ##\lambda## is intensive. In the independent variables used here its definition is
$$\lambda=\sqrt{\frac{2 \pi \hbar^2}{m k T}}=\sqrt{\frac{3 \pi \hbar^2 N}{mU}}.$$
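
As a quick numerical check that this formula is indeed homogeneous of degree 1 (a sketch in Python; the values below are arbitrary but lie in the dilute, high-temperature regime where the Sackur-Tetrode formula applies):

```python
import numpy as np

hbar = 1.054571817e-34   # J s
k = 1.380649e-23         # J/K
m = 6.6335e-26           # kg, roughly one argon atom

def sackur_tetrode(U, V, N):
    # S = N k [ ln(V / (N lambda^3)) + 5/2 ], with the thermal wavelength
    # expressed in the natural variables via k T = 2 U / (3 N):
    lam = np.sqrt(3 * np.pi * hbar**2 * N / (m * U))
    return N * k * (np.log(V / (N * lam**3)) + 2.5)

N = 1e20                  # particles
V = 1e-3                  # m^3, one litre
U = 1.5 * N * k * 300.0   # J, corresponding to T = 300 K
for a in (1.0, 2.0, 10.0):
    print(a * sackur_tetrode(U, V, N), sackur_tetrode(a * U, a * V, a * N))
# Both columns agree, i.e. S(aU, aV, aN) = a S(U, V, N).
```

The agreement is exact here because ##\lambda## depends on ##U## and ##N## only through the intensive ratio ##U/N##.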

Is Jaynes saying that this formula is wrong? If yes, where? That would be confusing, because it's the result of the classical limit of the Bose-Einstein or Fermi-Dirac results for the entropy of ideal gases, which I thought was undisputed in the literature to be correct.
 
  • #171
vanhees71 said:
No, why? Whenever I put first distinguishable particles in separated parts of a box and then take (adiabatically) out the dividing walls, I get an irreversible diffusion
I believe your misconception is that you think of putting one kind of gas of distinguishable particles in one part and another kind of gas of distinguishable particles in the other part (such that you have knowledge of which particle is in which part). But that's not the setup. Instead, think of putting a wall into a box containing pairwise distinguishable particles. Now, if you take out the dividing wall again, I hope you will agree that you don't get an irreversible diffusion.
vanhees71 said:
Well, already the statement in the abstract that mixing entropy for the Gibbs paradox with non-identical gases was false, is wrong.
This statement only echoes how the Gibbs paradox is introduced in most textbooks (e.g. Huang). It doesn't contain any original insight. I invite you to read a little beyond the first sentence of the abstract.
 
  • #172
Of course, if the gases were already mixed before the wall is taken out, then there's no mixing entropy. The mixing entropy is defined by the entropy gain due to the irreversible process of diffusion when the wall is taken out with gas ##A## in one part and gas ##B## in the other. That's a clearly different state from the one where you have a mixture of both gases in each part. Then, of course, taking out the wall doesn't change anything and no entropy change occurs.

In your example at the beginning of the paper, with the ##\text{C}_{60}## consisting of 30 ##^{12}\text{C}## and 30 ##^{13}\text{C}## isotopes, you would have to put distinguishable mixtures of isomers in the one part and the other. Then, taking out the wall, you'd get mixing entropy. If, however, you cannot distinguish the isomers from the very beginning, you have indistinguishable mixtures of all isomers in both parts from the very beginning, and you'd of course not get any increase in entropy when taking out the wall.
 
  • #173
If I may be so presumptuous as to summarize this (very interesting and enlightening) discussion. The world seems to run on QM. In that framework the term "distinguishable identical particle" is a meaningless phrase. Of course Josiah Gibbs and company could not know that.

I think the rest is semantics and not so interesting.
 
  • #174
vanhees71 said:
Then of course taking out the walls doesn't change anything and no entropy change would occur.
Exactly. However, if you calculate the entropy change with statistical mechanics in the conventional way (see most textbooks, such as Huang, or section 2 of my paper) you do get a non-zero entropy change. This is the "false increase in entropy" I'm referring to in the abstract of my paper.
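
For readers following along, a sketch of that conventional bookkeeping (the careful version is in the textbooks and in section 2 of the paper): treating the ##N## pairwise distinguishable particles on each side without any permutation correction gives each half ##S = Nk[\ln(V/\lambda^3)+3/2]##, and after the wall is removed the ##2N## particles in ##2V## give
$$\Delta S = 2Nk\ln\!\left(\frac{2V}{\lambda^3}\right) - 2Nk\ln\!\left(\frac{V}{\lambda^3}\right) = 2Nk\ln 2,$$
even though, as just agreed, nothing macroscopic has actually changed.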
 
  • #175
Ok, then we completely agree. I never thought that this is what is discussed as the "Gibbs paradox".
 
  • #176
autoUFC said:
the argument that you cannot place the particles in a given position after removing the barrier has a lot of power in the quantum setting.

Why wouldn't it also have the same power in a classical setting? Classical physics still says it takes work (and therefore expends entropy) to separate a mixture of two types of particles.
 
  • #177
DrDu said:
First attach n primers to each of two carriers and synthesise random single strand DNA of length N from e.g. A and C, only.
To better understand the thermodynamic cycle you described, could you briefly explain (or provide a link describing) what primers and carriers are and how they work in the context of your proposed experiment?
 
  • #178
HPt said:
I believe your misconception is that you think of putting one kind of gas of distinguishable particles in one part and another kind of gas of distinguishable particles in the other part (such that you have knowledge of which particle is in which part). But that's not the setup. Instead, think of putting a wall into a box containing pairwise distinguishable particles. Now, if you take out the dividing wall again, I hope you will agree that you don't get an irreversible diffusion.

This statement only echoes how the Gibbs paradox is introduced in most textbooks (e.g. Huang). It doesn't contain any original insight. I invite you to read a little beyond the first sentence of the abstract.

I have a few questions concerning your setup. I have little knowledge of quantum information, and that may be the reason for my doubts.

Suppose you start with a large number of pairwise distinguishable quantum particles in a volume. Then you place the barrier. One cannot say that, after placing the barrier, each particle is necessarily on one side or the other. We should get a superposition, isn't that so?

Suppose that you have sensors that can detect particles and are capable of identifying what kind of particle was detected. Using the detectors we can change the state, destroying the superposition, and determine precisely which particles are on each side. The state after using the detectors would be different from the superposed state. Would that make any difference (in the increase or not of entropy) when the barrier is removed?
 
  • #179
Hi autoUFC,
you have to distinguish quantum states, microstates and macrostates:
  • Quantum states are represented by vectors in the state space and they can be superposed. If ##\binom 1 0## and ##\binom 0 1## represent quantum states then so does their superposition ##\frac{1}{\sqrt{2}}\binom 1 1##. (The factor ##\frac{1}{\sqrt{2}}## is there because vectors representing quantum states are normalized.)
  • Microstates are represented by basis vectors of the state space. So they depend on the basis you choose, but once you have chosen a basis, only certain quantum states are microstates.
  • Macrostates are represented by density operators. A density operator assigns each microstate a probability. Entropy is defined for macrostates as ##-k \sum p(m) \ln p(m)##, where ##p(m)## denotes the probability assigned to microstate ##m## and the sum goes over all microstates (a small worked example follows this list).
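As a minimal worked example of this definition: a macrostate that assigns equal probability ##p(m)=1/W## to ##W## microstates has
$$S=-k\sum_{m=1}^{W}\frac{1}{W}\ln\frac{1}{W}=k\ln W.$$
In particular, if each of ##N## particles may independently be on either side of the barrier with probability ##\tfrac{1}{2}##, the which-side uncertainty alone contributes ##k\ln 2^N = Nk\ln 2## to the entropy.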
autoUFC said:
Suppose you start with a large number of pairwise distinguishable quantum particles in a volume. Then you place the barrier. One cannot say that, after placing the barrier, each particle is necessarily on one side or the other. We should get a superposition, isn't that so?
Quantum states where a particle is "smeared" across both sides are possible. However, similarly to how you assume that in thermodynamic equilibrium the microstates can be chosen to be energy eigenstates, you may assume that they can be chosen not to be smeared across both sides. Crucially, however, in the macrostate you describe there are microstates with non-zero probability where a certain particle is on the left side, and other microstates with non-zero probability where the same particle is on the right side. This uncertainty about which particle is on which side contributes to the entropy of the macrostate.
autoUFC said:
Suppose that you have sensors that can detect particles and are capable of identifying what kind of particle was detected. Using the detectors we can change the state, destroying the superposition, and determine precisely which particles are on each side. The state after using the detectors would be different from the superposed state. Would that make any difference (in the increase or not of entropy) when the barrier is removed?
Yes, using the detectors would change the macrostate. You wouldn't destroy superpositions, because in the present setup we are considering a macrostate and not a quantum state, but you would change the probabilities assigned to certain microstates. For example, as soon as you measure that a certain particle is on the left side, the probabilities of all those microstates where this particle is on the right side would collapse to zero. For that reason, such a measurement would decrease the entropy of the macrostate, and removing the barrier again after the measurement would result in an entropy increase.
 
  • #180
Maybe slightly off topic:

Several posts have mentioned the way the density of states is calculated from first principles for (indistinguishable) quantum particles.

Having the density of states means, I believe, that it's possible to derive all the equations of an ideal gas from first principles. For example, you can get the Sackur-Tetrode equation and also the relationship between total energy and temperature directly from statistical physics.
See my text on RG for clarity if you want to ( https://www.researchgate.net/publication/330675047_An_introduction_to_the_statistical_physics_of_distinguishable_and_indistinguishable_particles_in_an_ideal_gas ).

I haven't seen anything comparable for classical particles. How would you define the density of states without using QM?
In my own text on RG I use the relation between energy and temperature as an extra requirement to find the right MB-distribution, but that seems like I'm just forcing the results to match the expected properties of an ideal gas rather than deriving the properties from first principles.
 
  • #181
Yes indeed. The only consistent theory of matter we have today is quantum theory and quantum statistics. To establish the classical theory you have to borrow some ingredients from this more comprehensive theory. At least two, as far as I know: (a) the natural measure of phase-space volumes is ##h^{d}=(2 \pi \hbar)^{d}##, and (b) the indistinguishability of particles in terms of bosons and fermions (depending on spin).

The most simple way is to use quantum field theory ("2nd quantization") and the grand-canonical ensemble. As an example take spin-0 bosons. The one-particle states we take as given by the wave functions defined in a cubic volume (length ##L##) with periodic boundary conditions (in order to have properly defined momentum observables).

Then the quantum field is completely determined by the annihilation and creation operators for momentum eigenstates ##\hat{a}(\vec{p})## and ##\hat{a}^{\dagger}(\vec{p})## and the Hamilton operator is given by
$$\hat{H}=\sum_{\vec{p}} \frac{\vec{p}^2}{2m} \hat{a}^{\dagger}(\vec{p}) \hat{a}(\vec{p}).$$
With ##\vec{p} \in \frac{2 \pi \hbar}{L} \mathbb{Z}^3##.

The annihilation and creation operators obey the commutation relations (bosonic fields)
$$[\hat{a}(\vec{p}),\hat{a}^{\dagger}(\vec{p}')]=\delta_{\vec{p},\vec{p}'}, \quad [\hat{a}(\vec{p}),\hat{a}(\vec{p}')]=0.$$
A convenient complete set of orthonomalized basis functions are the Fock states, i.e., the eigenstates of the occupation-number operators ##\hat{N}(\vec{p})=\hat{a}^{\dagger}(\vec{p}) \hat{a}(\vec{p})##. The eigenvalues are ##N(\vec{p}) \in \{0,1,2,\ldots\}=\mathbb{N}_0##.

To get the thermodynamics we need the grand-canonical partition sum
$$Z=\mathrm{Tr} \exp[-\beta (\hat{H}-\mu \hat{N})],$$
where
$$\hat{H}=\sum_{\vec{p}} E_{\vec{p}} \hat{N}(\vec{p}), \quad \hat{N}=\sum_{\vec{p}} \hat{N}(\vec{p}).$$
For the following it's more convenient to define the functional
$$Z[\alpha]=\mathrm{Tr} \exp[-\sum_{\vec{p}} \alpha(\vec{p}) \hat{N}(\vec{p})].$$
That's easy to calculate using the Fock basis (occupation-number basis)
$$Z[\alpha]=\prod_{\vec{p}} \sum_{N(\vec{p})=0}^{\infty} \exp[-\alpha(\vec{p}) N(\vec{p})] = \prod_{\vec{p}}\frac{1}{1-\exp[-\alpha(\vec{p})]}.$$
The occupation number distribution is given by
$$f(\vec{p})=\langle \hat{N}(\vec{p}) \rangle=\frac{1}{Z} \mathrm{Tr}\, \hat{N}(\vec{p}) \exp[-\beta (\hat{H}-\mu \hat{N})].$$
This can be calculated from the functional
$$f(\vec{p})=-\left . \frac{\partial}{\partial \alpha(\vec{p})} \ln Z[\alpha] \right|_{\alpha(\vec{p})=\beta(E_{\vec{p}}-\mu)} = \frac{\exp[-\beta(E_{\vec{p}}-\mu)]}{1-\exp[-\beta(E_{\vec{p}}-\mu)]}=\frac{1}{\exp[\beta(E_{\vec{p}}-\mu)]-1}.$$
The partition sum itself is given by
$$\Omega(V,\beta,\mu)=\ln Z(V,\beta,\mu)=-\sum_{\vec{p}} \ln \{1-\exp[-\beta(E_{\vec{p}}-\mu)] \}.$$
The thermodynamic limit is not trivial, since we obviously have the constraints ##\beta>0## and ##\mu<0##, and for too large ##\beta## and too large ##\mathcal{N}=\langle N \rangle## we cannot make ##L \rightarrow \infty## and keep ##n=\mathcal{N}/V## constant. The reason is that we need to treat the ground state ("zero mode" of the field) separately before taking the limit. The thorough investigation leads to the possibility of Bose-Einstein condensation for large ##n## and large ##\beta## (since ##\beta## turns out to be ##\beta=1/(k T)##, that means low temperatures).

Restricting ourselves to non-degenerate states, i.e., high temperature and not too large ##n## we can naively make ##L \rightarrow \infty##. Then in any momentum-volume element ##\mathrm{d}^3 p## we have ##\frac{V}{(2 \pi \hbar)^3} \mathrm{d}^3 p## single-particle states and thus we can substitute the sum by an integral
$$\Omega=-\frac{V}{(2 \pi \hbar)^3} \int_{\mathbb{R}^3} \mathrm{d}^3 p \ln\{1-\exp[-\beta(E_{\vec{p}}-\mu)]\}.$$
The integral is non-trivial, but the classical limit is simple. It is obtained for small occupation numbers, i.e., for ##\exp[-\beta(E_{\vec{p}}-\mu)]\ll 1##. Then we can set ##\ln[1-\exp(\ldots)]=-\exp(\ldots)##
and
$$\Omega=\frac{V}{(2 \pi \hbar)^3} \int_{\mathbb{R}^3} \mathrm{d}^3 p \exp[-\beta(E_{\vec{p}}-\mu)].$$
With ##E_{\vec{p}}=\vec{p}^2/(2m)## we can evaluate the Gaussian integral, leading to
$$\Omega=\frac{V}{(2 \pi \hbar)^3} \left (\frac{2 \pi m}{\beta} \right)^{3/2} \exp(\beta \mu).$$
Now the meaning of the constants becomes clear by evaluating the internal energy and the average particle number
$$\mathcal{N}=\langle N \rangle=\frac{1}{\beta} \partial_{\mu} \Omega=\Omega.$$
Further we have
$$U=\langle E \rangle=-\partial_{\beta} \Omega+ \mu \mathcal{N}=\frac{3}{2 \beta} \mathcal{N},$$
from which
$$\beta=1/(k T).$$
To get the relation to the more usual thermodynamic potentials we calculate the entropy. The statistical operator is
$$\hat{\rho}=\frac{1}{Z} \exp(-\beta \hat{H} + \beta \mu \hat{N})$$
and thus the entropy
$$S=-k \,\mathrm{Tr}\,(\hat{\rho} \ln \hat{\rho})=k (\Omega +\beta U - \beta \mu \mathcal{N})=k \Omega+\frac{U-\mu \mathcal{N}}{T}.$$
To get the usual potentials we note that with
$$\Phi=\Phi(V,T,\mu)=-k T \Omega$$
one gets after some algebra
$$\mathrm{d} \Phi=\mathrm{d} V \partial_V \Phi - S \mathrm{d} T - \mathcal{N} \mathrm{d} \mu.$$
On the other hand, from the above expression for the entropy we find
$$\Phi=U-S T - \mathcal{N} \mu.$$
From this it follows
$$\mathrm{d} U = \mathrm{d} V \partial_V \Phi + T \mathrm{d} S+\mu \mathrm{d} \mathcal{N}$$
which gives
$$P=-\left (\frac{\partial \Phi}{\partial V} \right)_{T,\mu}.$$
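
A small numerical illustration of the classical-limit step above (a sketch in Python, in arbitrary units with ##\beta=1##; it only checks that the Bose-Einstein occupation approaches the Boltzmann factor once ##\exp[-\beta(E_{\vec{p}}-\mu)]\ll 1##):

```python
import numpy as np

beta, mu = 1.0, -5.0              # arbitrary units; mu < 0 for the ideal Bose gas
for E in (0.0, 1.0, 5.0, 10.0):
    x = beta * (E - mu)
    bose = 1.0 / np.expm1(x)      # exact: 1 / (exp[beta(E - mu)] - 1)
    boltzmann = np.exp(-x)        # classical limit
    print(f"E = {E:5.1f}   Bose = {bose:.6e}   Boltzmann = {boltzmann:.6e}")
# The relative difference is of order exp(-x), so the agreement improves
# rapidly as beta*(E - mu) grows, which is the small-occupancy condition.
```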
 
  • #182
Philip Koeck said:
How would you define the density of states without using QM?

Use the Boltzmann distribution. From the abstract of your paper, that's what you're using anyway to derive the ideal gas law. The only difference with classical physics (distinguishable particles) vs. quantum (indistinguishable particles) is that you get the Boltzmann distribution directly instead of as an approximation in the low density limit.
 
  • #183
PeterDonis said:
Use the Boltzmann distribution. From the abstract of your paper, that's what you're using anyway to derive the ideal gas law. The only difference with classical physics (distinguishable particles) vs. quantum (indistinguishable particles) is that you get the Boltzmann distribution directly instead of as an approximation in the low density limit.
I'm trying to point out the following:
For quantum particles I can use the quantum mechanical expression for the density of states of an ideal gas in the derivation and I can get all the macroscopic relations for an ideal gas as a result, for example U = 1.5 N k T.

For classical particles I don't have an expression for the density of states, as far as I know.
Therefore I'm forced to use the above relation between U and T as a normalization condition in order to arrive at the MB-distribution.
In other words, I'm not actually deriving the macroscopic relations from the statistical description in the case of classical particles, I'm using them as additional input.

My question is whether there is some way of arriving at the density of states for classical particles without resorting to QM. That's what seems to be missing in the classical description.
 
  • #184
You can't get an absolute measure for the density of states in the classical description. The reason is that there's no "natural phase-space measure". What is clear is that phase space density is the right measure for the number of states due to Liouville's theorem. The missing "natural phase-space measure" is again given by quantum theory and the "absolute number of states" is the phase-space volume divided by ##h^f## (##f## is the configuration-space dimension, the phase-space dimension is ##2f##).
 
  • #185
vanhees71 said:
You can't get an absolute measure for the density of states in the classical description. The reason is that there's no "natural phase-space measure". What is clear is that phase space density is the right measure for the number of states due to Liouville's theorem. The missing "natural phase-space measure" is again given by quantum theory and the "absolute number of states" is the phase-space volume divided by ##h^f## (##f## is the configuration-space dimension, the phase-space dimension is ##2f##).
Alternatively, does anybody know of a workaround that doesn't require the density of states or anything else from QM, and doesn't use the macroscopic relations as constraints or for normalization in any way?
 
  • #186
This workaround has been used with great success for as long as classical statistical physics itself: you simply normalize the phase-space distribution function to the given total number of particles.
 
  • #187
vanhees71 said:
The workaround has been used with great success since classical statistical physics has been used. You simply normalize the phase-space distribution function to the given total number of particles.
I don't know anything about the phase-space distribution function, and on the internet I can only find it in connection with QM.
I'm not sure that's what I was looking for.
 
  • #188
The single-particle phase-space distribution function is the quantity usually called ##f(t,\vec{x},\vec{p})## appearing in the Boltzmann transport equation. It's defined as the particles per unit phase-space volume at the phase-space point ##(\vec{x},\vec{p})## at time ##t##.
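
Concretely, the normalization mentioned in #186 reads (a minimal sketch for the equilibrium ideal gas)
$$N=\int \mathrm{d}^3 x \, \mathrm{d}^3 p \, f(t,\vec{x},\vec{p}),$$
with the Maxwell-Boltzmann form ##f=\frac{n}{(2 \pi m k T)^{3/2}} \exp[-\vec{p}^2/(2 m k T)]## and ##n=N/V##; note that no ##h## appears anywhere in this classical prescription.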
 