Can indistinguishable particles obey Boltzmann statistics?

Summary
The discussion centers on the validity of Boltzmann statistics for indistinguishable particles, challenging the common textbook assertion that such particles must be indistinguishable to resolve the Gibbs paradox. A combinatorial derivation suggests that Boltzmann statistics apply to distinguishable particles, while Bose-Einstein statistics apply to indistinguishable ones, raising questions about the applicability of Boltzmann statistics at any temperature or density. Participants debate the historical context of models like the Drude model, which treats electrons as distinguishable, despite their quantum nature as fermions. The conversation highlights confusion over definitions of indistinguishability and the implications for statistical mechanics, particularly regarding the treatment of identical particles in classical versus quantum contexts. Ultimately, the discussion underscores the complexity of particle statistics and the need for clarity in definitions and assumptions.
  • #91
Philip Koeck said:
If so, which factor in the above distribution function accounts for this?
The chemical potential of an ideal gas is a function of ##N##, ##V##, and ##T## or simply ##P## and ##T##.
$$\mu=kT\ln\left(\frac{\lambda^{3}N}{V}\right)=kT\ln\left(\frac{\lambda^{3}P}{kT}\right)$$
where ##\lambda=h/\sqrt{2\pi mkT}##
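As a quick numerical sanity check of this expression (the choice of helium at room temperature and 1 atm is an illustrative assumption, not from the thread):

```python
# Evaluate mu = kT ln(lambda^3 N/V) with lambda = h / sqrt(2 pi m k T)
# for an assumed example: helium-4 at 300 K and 1 atm.
import math

h = 6.62607015e-34    # Planck constant, J s
k = 1.380649e-23      # Boltzmann constant, J/K

def thermal_wavelength(m, T):
    """Thermal de Broglie wavelength lambda = h / sqrt(2 pi m k T)."""
    return h / math.sqrt(2 * math.pi * m * k * T)

def chemical_potential(n, m, T):
    """mu = kT ln(lambda^3 n), with n = N/V the number density."""
    lam = thermal_wavelength(m, T)
    return k * T * math.log(lam**3 * n)

m_he = 4.0026 * 1.66053906660e-27   # helium-4 mass, kg
T = 300.0                            # K
P = 101325.0                         # Pa
n = P / (k * T)                      # ideal-gas number density, 1/m^3

mu = chemical_potential(n, m_he, T)
print(mu / (k * T))   # dimensionless mu/kT; large and negative for a dilute gas
```

For a dilute gas ##\lambda^3 N/V \ll 1##, so the logarithm is large and negative, consistent with the low-occupancy regime discussed below.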
 
  • #92
NFuller said:
The chemical potential of an ideal gas is a function of ##N##, ##V##, and ##T## or simply ##P## and ##T##.
$$\mu=kT\ln\left(\frac{\lambda^{3}N}{V}\right)=kT\ln\left(\frac{\lambda^{3}P}{kT}\right)$$
where ##\lambda=h/\sqrt{2\pi mkT}##
Thanks for the help! It seems that this expression is derived based on "correct Boltzmann counting" (at least where I found it in Blundell's book) and it nicely puts the factor N back into the Boltzmann distribution.
 
  • #93
I've not read the entire thread to the end, but there seems to be a lot of confusion due only to the didactical mistake, perpetuated for almost 200 years, of treating classical statistics first. The problem with classical statistics is that there is no way to properly define phase-space distribution functions and the entropy. Boltzmann was ingenious enough to plug in the additional factor ##1/N!## to remedy the Gibbs paradox with a handwaving argument about indistinguishability of classical particles, but for a true understanding of both the correct entropy expression and phase-space distribution functions you necessarily need quantum mechanics, which introduces the notion of a natural "action" or "phase-space-volume measure" in terms of Planck's constant ##h=2 \pi \hbar##.

Here it is of utmost help for the understanding to derive the Fermi-Dirac, Bose-Einstein, and Boltzmann statistics in the original way by counting the combinatorics to distribute particles over quantum (!) states. It's very clearly written in Landau and Lifshitz volume 5. I've stolen this from them in my transport-theory manuscript:

https://th.physik.uni-frankfurt.de/~hees/publ/kolkata.pdf

The derivation can be found for the Boltzmann statistics in Sec. 1.2.2 (using of course the necessary minimum of the quantum definition of the phase-space volume) and the quantum statistics cases in Sec. 1.8.

Of course, it's also a good idea to derive classical statistical mechanics from the very beginning, starting from the Liouville equation for phase-space distributions and deriving the Boltzmann transport equation by cutting the BBGKY hierarchy at the lowest non-trivial order. That makes it entirely clear why Boltzmann's H-theorem is valid and thus why equilibrium is the state of maximum entropy under the constraints due to the additive conservation laws.
 
  • #94
vanhees71 said:
I've not read the entire thread to the end, but there seems to be a lot of confusion due only to the didactical mistake, perpetuated for almost 200 years, of treating classical statistics first. The problem with classical statistics is that there is no way to properly define phase-space distribution functions and the entropy. Boltzmann was ingenious enough to plug in the additional factor ##1/N!## to remedy the Gibbs paradox with a handwaving argument about indistinguishability of classical particles, but for a true understanding of both the correct entropy expression and phase-space distribution functions you necessarily need quantum mechanics, which introduces the notion of a natural "action" or "phase-space-volume measure" in terms of Planck's constant ##h=2 \pi \hbar##.

Here it is of utmost help for the understanding to derive the Fermi-Dirac, Bose-Einstein, and Boltzmann statistics in the original way by counting the combinatorics to distribute particles over quantum (!) states. It's very clearly written in Landau and Lifshitz volume 5. I've stolen this from them in my transport-theory manuscript:
The problem I have with this view is that both BE and FD statistics are based on indistinguishability (I think; correct me if I'm wrong). Clearly we can make the approximation of low occupancy and arrive at Boltzmann statistics (as worked out for BE earlier in the thread by NFuller), but in my mind that doesn't remove the assumption of indistinguishability. Is there really no place for a purely classical description of an ideal gas of very large particles (C60 molecules, colloids, a heavy noble gas)? The way I see things, these particles are definitely distinguishable, either because they can be tracked with a microscope or because they are actually slightly different, as in the case of colloids. The de Broglie wavelength of these particles would also be tiny if they are fast enough (on average), so I see no reason to use quantum mechanics. In summary, I see two reasons not to treat them as a limiting case of quantum statistics: they are distinguishable, and they are much too heavy and fast.
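The "tiny de Broglie wavelength" point can be put into rough numbers. A minimal sketch (the temperature and the helium comparison are illustrative assumptions):

```python
# Thermal de Broglie wavelength of C60 versus helium at room temperature.
# Purely illustrative numbers for the "too heavy" argument above.
import math

h = 6.62607015e-34     # Planck constant, J s
k = 1.380649e-23       # Boltzmann constant, J/K
u = 1.66053906660e-27  # atomic mass unit, kg

def thermal_wavelength(m, T):
    """lambda = h / sqrt(2 pi m k T)"""
    return h / math.sqrt(2 * math.pi * m * k * T)

T = 300.0
lam_he  = thermal_wavelength(4.0 * u, T)    # helium-4
lam_c60 = thermal_wavelength(720.0 * u, T)  # C60 = 60 x 12 u

print(lam_he, lam_c60)  # C60's wavelength is sqrt(180) ~ 13x smaller than helium's
```

Since ##\lambda \propto 1/\sqrt{m}##, the heavier particle's thermal wavelength is smaller by ##\sqrt{720/4}##, which is the quantitative content of the "too heavy and fast" intuition.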
 
  • #95
Of course, Bose-Einstein and Fermi-Dirac statistics are based on indistinguishability, and this is one of the simplest examples of the fact that classical physics is not entirely correct on the microscopic level of matter. It cannot be described in any classical way, but that is not a problem but a feature! At the same time you cure the problems of classical statistical physics by interpreting it as an approximation of quantum statistics, and you understand why macroscopic matter behaves classically to such high accuracy in almost all circumstances of our everyday lives!

You also can't establish classical statistics properly without quantum theory, since you have no natural measure for phase-space volumes within classical physics. Boltzmann was well aware of this problem too. Nowadays it's easy to derive the correct natural measure as ##h^{f}=(2 \pi \hbar)^{f}##, where ##f## is the number of degrees of freedom in configuration space. Although phase space has ##2f## dimensions, because it consists of configuration-space as well as canonical-momenta degrees of freedom, each conjugate pair ##qp## has the dimension of an action, so the phase-space volume has dimension ##(\text{action})^{f}##, and ##h^{f}## is indeed the measure with the correct dimension.
 
  • #96
vanhees71 said:
Of course, Bose-Einstein and Fermi-Dirac statistics are based on indistinguishability, and this is one of the simplest examples of the fact that classical physics is not entirely correct on the microscopic level of matter. It cannot be described in any classical way, but that is not a problem but a feature! At the same time you cure the problems of classical statistical physics by interpreting it as an approximation of quantum statistics, and you understand why macroscopic matter behaves classically to such high accuracy in almost all circumstances of our everyday lives!

You also can't establish classical statistics properly without quantum theory, since you have no natural measure for phase-space volumes within classical physics. Boltzmann was well aware of this problem too. Nowadays it's easy to derive the correct natural measure as ##h^{f}=(2 \pi \hbar)^{f}##, where ##f## is the number of degrees of freedom in configuration space. Although phase space has ##2f## dimensions, because it consists of configuration-space as well as canonical-momenta degrees of freedom, each conjugate pair ##qp## has the dimension of an action, so the phase-space volume has dimension ##(\text{action})^{f}##, and ##h^{f}## is indeed the measure with the correct dimension.
I'm not sure that you need an absolute measure for a volume in phase space. To derive the Maxwell-Boltzmann distribution for an ideal gas, for example, it's sufficient to state that the number of states with energies between ##u## and ##u+du## is proportional to the volume of a spherical shell in momentum space (and to the real-space volume). There's no need to come up with a phase-space volume unit involving Planck's constant, which, I agree, is a rather strange thing to do for an entirely classical system (if such a system exists).
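The shell argument can be checked numerically: weight the shell volume ##\propto p^2\,dp## by the Boltzmann factor and normalize, and the proportionality constant (and any would-be factor of ##h##) cancels. A minimal sketch, with nitrogen at 300 K as an assumed example:

```python
# Maxwell-Boltzmann speed distribution from the shell argument:
# density proportional to v^2 exp(-m v^2 / 2kT); the constant is fixed
# by normalization alone, with no reference to h. N2 at 300 K, illustrative.
import math

k = 1.380649e-23                 # Boltzmann constant, J/K
m = 28.0 * 1.66053906660e-27     # N2 mass, kg
T = 300.0

def unnorm(v):
    """Unnormalized speed density: v^2 exp(-m v^2 / 2kT)."""
    return v * v * math.exp(-m * v * v / (2 * k * T))

# Crude trapezoidal integration on a fine grid (endpoint terms ~ 0).
vmax, n = 5000.0, 200000
dv = vmax / n
Z = sum(unnorm(i * dv) for i in range(1, n)) * dv                 # normalization
vbar = sum(i * dv * unnorm(i * dv) for i in range(1, n)) * dv / Z  # mean speed

print(vbar, math.sqrt(8 * k * T / (math.pi * m)))  # both ~ 476 m/s
```

The numerically computed mean speed agrees with the textbook result ##\bar{v}=\sqrt{8kT/\pi m}##, illustrating that relative quantities need no absolute phase-space measure.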
 
  • #97
Without an absolute measure of phase space you have to introduce an arbitrary one, because otherwise you cannot define entropy properly. There must be no dimensionful quantities in logarithms!
 
  • #98
Let me quote Callen, Thermodynamics and an Introduction to Thermostatics, 2nd ed., sec. 16-9:
Callen said:
[...] the partition function becomes
$$
z = \frac{1}{h^3} \int e^{-\beta \mathcal{H}} dx \, dy \, dz \, dp_x \, dp_y \, dp_z \quad \quad (16.68)
$$
Except for the appearance of the classically inexplicable prefactor (##1/h^3##), this representation of the partition sum (per mode) is fully classical. It was in this form that statistical mechanics was devised by Josiah Willard Gibbs in a series of papers in the Journal of the Connecticut Academy between 1875 and 1878. Gibbs' postulate of equation 16.68 (with the introduction of the quantity ##h##, for which there was no a priori classical justification) must stand as one of the most inspired insights in the history of physics. To Gibbs, the numerical value of ##h## was simply to be determined by comparison with empirical thermophysical data.
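For a free particle in a box, the integral in (16.68) factorizes into the volume ##V## times a Gaussian momentum integral, giving ##z=V(2\pi mkT)^{3/2}/h^3=V/\lambda^3##. A numerical sketch of that special case (helium at 300 K in 1 m³, all values illustrative assumptions):

```python
# Gibbs' z = (1/h^3) Int exp(-beta H) d^3x d^3p for a free particle in a box,
# evaluated numerically and compared with the closed form V / lambda^3.
import math

h = 6.62607015e-34
k = 1.380649e-23
m = 4.0 * 1.66053906660e-27   # helium-4 mass, kg
T, V = 300.0, 1.0
beta = 1.0 / (k * T)

# 1D Gaussian momentum integral of exp(-beta p^2 / 2m) by trapezoid; cube for 3D.
pmax = 20 * math.sqrt(m / beta)   # +- 20 thermal momenta covers the tails
n = 100000
dp = 2 * pmax / n
I1 = sum(math.exp(-beta * (-pmax + i * dp)**2 / (2 * m)) for i in range(n + 1)) * dp

z_numeric = V * I1**3 / h**3
lam = h / math.sqrt(2 * math.pi * m * k * T)
z_exact = V / lam**3

print(z_numeric / z_exact)   # ~ 1.0
```

The only place ##h## enters is as the overall unit of phase-space volume, exactly the "classically inexplicable prefactor" Callen describes.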
 
  • #99
vanhees71 said:
Without an absolute measure of phase space you have to introduce an arbitrary one, because otherwise you cannot define entropy properly. There must be no dimensionful quantities in logarithms!
You are assuming that ##W## in the expression ##S = k \ln W## stands for a volume in phase space. What if we instead regard ##W## as a whole number, the number of ways that a system can realize a certain distribution of particles among energy levels? Obviously, for a classical gas "energy level" actually refers to a small range of energies.
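For concreteness, the whole-number reading of ##W## for distinguishable particles is the multinomial count ##W=N!/\prod_i n_i!##. A toy sketch with assumed small numbers:

```python
# W as a whole number: the ways N distinguishable particles can realize
# given occupation numbers (n_1, n_2, ...). Toy example: N = 4, 3 levels.
import math
from itertools import product

def W(occupation):
    """Multinomial count N! / (n_1! n_2! ...) for distinguishable particles."""
    w = math.factorial(sum(occupation))
    for n in occupation:
        w //= math.factorial(n)
    return w

print(W((2, 1, 1)))  # 4!/(2! 1! 1!) = 12
print(W((4, 0, 0)))  # all particles in one level: 1 way

# Sanity check: summing W over all occupations of 4 particles in 3 levels
# recovers 3^4 = 81, the total number of microstates.
total = sum(W(occ) for occ in product(range(5), repeat=3) if sum(occ) == 4)
print(total)  # 81
```

Maximizing ##\ln W## under the particle-number and energy constraints is then the standard route to the Boltzmann distribution; the question of a phase-space unit only enters when ##W## must be tied to a continuum of states.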
 
  • #100
To introduce entropy in the usual hand-waving way you have to count microstates compatible with a given macrostate. In classical physics it's suggestive to use the phase-space volume as the measure of states because of Liouville's theorem: the phase-space volume is conserved along the Hamiltonian flow of the system, and that's how Gibbs et al. came to this correct assumption. To "count" you need a natural measure of phase space, i.e., a natural scale for phase-space volumes (of the dimension of the appropriate power of action), and there is no such natural scale in classical physics.

A more convincing argument for me is the information-theoretical approach to statistical physics. There it's clear that the Shannon-Jaynes (von Neumann) entropy is always relative to what is considered "complete possible information" and a corresponding reference probability distribution, which in the case of classical physics again is equipartition over the available phase-space volume. Then the same dilemma with the missing natural scale for phase-space volumes arises as with the naive approach to entropy.

What you suggest is, of course, a correct approach, using the microcanonical ensemble, but that doesn't help with the dilemma, since again you need to count the available microstates in terms of phase-space volumes.
 
  • #101
A little spin-off from this thread: a state for one particle is given by a small volume of size ##h^3## in phase space. If two particles occupied the same volume in one-particle phase space, that would mean, in classical terms, that they are at the same spatial coordinates and moving with the same momentum vector at a given time. In other words, they would be inside each other. For classical particles (C60 molecules etc.) I would say that's not possible. That seems to indicate that FD statistics is the obvious choice for describing classical particles. Most textbooks, however, introduce classical systems as having no limit on the number of particles per state. Do you agree with my thinking?
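One way to quantify the occupancy of an ##h^3## cell is the degeneracy parameter ##n\lambda^3##, the average number of particles per cell. A sketch with an assumed (hypothetical) dilute C60 gas; the density is an invented illustration:

```python
# Average occupation per h^3 phase-space cell ~ degeneracy parameter n*lambda^3.
# Hypothetical dilute C60 gas at 300 K with an assumed density of 1e20 m^-3.
import math

h = 6.62607015e-34
k = 1.380649e-23
u = 1.66053906660e-27  # atomic mass unit, kg

T = 300.0
m = 720.0 * u          # C60 mass
n = 1e20               # assumed number density, 1/m^3

lam = h / math.sqrt(2 * math.pi * m * k * T)
print(n * lam**3)      # << 1: far less than one particle per cell on average
```

With occupancies this far below one, the question of whether two such particles could share a cell never arises in practice, which is why the FD-versus-BE distinction washes out.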
 
  • #102
Hm, that's a contradictio in adjecto, because classical particles only make sense in a realm where the Bose or Fermi nature is irrelevant. Both Bose and Fermi statistics have the Boltzmann statistics as their low-occupation-number limit (including the ##N!## factor "repairing" the Gibbs paradox). The low-occupation-number constraint makes the indistinguishability of particles irrelevant, since there is on average less than one particle in a single-particle phase-space cell of size ##h^3##.
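A minimal numerical illustration of this limit: the mean occupation per single-particle state for the three statistics as a function of ##x=(\epsilon-\mu)/kT##, which agree as soon as the occupation is small:

```python
# Mean occupation number per single-particle state:
#   Bose-Einstein: 1/(e^x - 1), Fermi-Dirac: 1/(e^x + 1), Boltzmann: e^{-x},
# with x = (eps - mu)/kT. All three coincide when e^{-x} << 1.
import math

def n_be(x): return 1.0 / (math.exp(x) - 1.0)
def n_fd(x): return 1.0 / (math.exp(x) + 1.0)
def n_b(x):  return math.exp(-x)

for x in (0.5, 2.0, 10.0):
    print(x, n_be(x), n_fd(x), n_b(x))
```

At ##x=10## the three values agree to better than 0.01%, while at ##x=0.5## (occupation of order one) they differ strongly, which is exactly where the quantum nature of the particles matters.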
 
  • #103
vanhees71 said:
Hm, that's a contradictio in adjecto, because classical particles only make sense in a realm where the Bose or Fermi nature is irrelevant. Both Bose and Fermi statistics have the Boltzmann statistics as their low-occupation-number limit (including the ##N!## factor "repairing" the Gibbs paradox). The low-occupation-number constraint makes the indistinguishability of particles irrelevant, since there is on average less than one particle in a single-particle phase-space cell of size ##h^3##.
Assuming we could create a system of classical particles like C60 molecules at high occupancy, would it follow FD or BE statistics? Or is this not even a sensible question?
 
  • #104
It depends on how high the occupancy is. As a whole, C60 is a boson, so if not too close together the molecules behave as bosons. The carbon atoms are also bosons (if you have the usual ##^{12}\text{C}## isotope), but of course on the level of the fundamental constituents you have fermions. I guess, however, that to get this fermionic nature into action you'd have to pack the buckyballs so close together that you destroy them ;-).
 
  • #105
vanhees71 said:
It depends on how high the occupancy is. As a whole, C60 is a boson, so if not too close together the molecules behave as bosons. The carbon atoms are also bosons (if you have the usual ##^{12}\text{C}## isotope), but of course on the level of the fundamental constituents you have fermions. I guess, however, that to get this fermionic nature into action you'd have to pack the buckyballs so close together that you destroy them ;-).
Are you beginning to see the problem? If C60 truly behaved like a boson, you would be able to put any number of particles into the same state (or "point" in phase space). I find that really hard to imagine. I think they'll simply, and very classically, be in each other's way, even considering the effects of uncertainty. To me it seems that quantum statistics simply doesn't apply to systems that are "too classical".
 
  • #106
Philip Koeck said:
Are you beginning to see the problem? If C60 truly behaved like a boson you would be able to put any number of particles into the same state (or "point" in phase space). I find that really hard to imagine. I think they'll simply and very classically be in each others way, even considering the effects of uncertainty. To me it seems that quantum statistics simply doesn't apply to systems that are "too classical".
Bose-Einstein condensates of molecules exist. While no one has been able to cool molecules as big as C60 down to temperatures where BEC happens, there is no reason to think it doesn't make sense for many C60 molecules to be in the same quantum state.

By the way, double-slit type experiments have been performed using C60 (and even bigger molecules), and quantum effects are visible.
 
  • #107
DrClaude said:
Bose-Einstein condensates of molecules exist. While no one has been able to cool molecules as big as C60 down to temperatures where BEC happens, there is no reason to think it doesn't make sense for many C60 molecules to be in the same quantum state.

By the way, double-slit type experiments have been performed using C60 (and even bigger molecules), and quantum effects are visible.
Thanks. Experiments are always convincing. Maybe it is time to skip all classical statistics and start directly with quantum statistics, as vanhees71 suggested earlier.
 
  • #108
One more thing has turned up. It's been mentioned several times (also on Wikipedia and in textbooks) that the Boltzmann distribution is a high-temperature and low-occupancy limiting case of the BE and FD distributions. I can show that ##W## approaches the correct Boltzmann counting for low occupancy, as discussed in posts 69 to 73 (before calculating a distribution), but I'm having a hard time seeing how high ##T## would help in general. Only if I insert expressions for the chemical potential and density of states that are valid for an ideal gas of indistinguishable particles into the BE or FD distribution do I get something that approaches the Boltzmann distribution for high ##T##. Is the mentioned limiting case general, or only valid for the ideal gas? Can anyone point me to some literature?
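One way to see the role of high ##T## for the ideal gas specifically: at fixed density, raising ##T## shrinks the fugacity ##e^{\mu/kT}=n\lambda^3##, which is exactly the low-occupancy condition, so for the ideal gas "high ##T##" acts only through the chemical potential. A numerical sketch (helium at an assumed fixed density, values illustrative):

```python
# For an ideal gas at fixed density, the fugacity z = e^{mu/kT} = n lambda^3
# shrinks as T grows, so e^{(eps-mu)/kT} >> 1 for all eps >= 0 and the BE/FD
# occupations collapse onto the Boltzmann one. Helium, n = 2.45e25 m^-3 assumed.
import math

h = 6.62607015e-34
k = 1.380649e-23
m = 4.0 * 1.66053906660e-27   # helium-4 mass, kg
n = 2.45e25                   # assumed fixed number density, 1/m^3

zs = []
for T in (1.0, 10.0, 300.0):
    lam = h / math.sqrt(2 * math.pi * m * k * T)
    z = n * lam**3            # fugacity e^{mu/kT}
    zs.append(z)
    # Worst case eps = 0: occupations z/(1-z) (BE), z/(1+z) (FD), z (Boltzmann)
    print(T, z, z / (1 - z), z / (1 + z))
```

Since ##\lambda\propto T^{-1/2}##, the fugacity falls off as ##T^{-3/2}## at fixed ##n##, which is why the high-##T## and low-occupancy statements coincide for the ideal gas.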
 
