Can indistinguishable particles obey Boltzmann statistics?

Summary
The discussion centers on the validity of Boltzmann statistics for indistinguishable particles, challenging the common textbook assertion that such particles must be indistinguishable to resolve the Gibbs paradox. A combinatorial derivation suggests that Boltzmann statistics apply to distinguishable particles, while Bose-Einstein statistics apply to indistinguishable ones, raising questions about the applicability of Boltzmann statistics at any temperature or density. Participants debate the historical context of models like the Drude model, which treats electrons as distinguishable, despite their quantum nature as fermions. The conversation highlights confusion over definitions of indistinguishability and the implications for statistical mechanics, particularly regarding the treatment of identical particles in classical versus quantum contexts. Ultimately, the discussion underscores the complexity of particle statistics and the need for clarity in definitions and assumptions.
  • #61
A micro-rant:

From the point of view of mathematical probability modeling, the language of statistical physics is a gosh-awful mess. Instead of clearly stated probability models (e.g. clearly stated probability spaces) we have microstates, this-and-that kinds of ensembles, distinguishable identical particles, and identical particles that cannot be distinguished. We have "statistics" that do not satisfy the definition of "statistic" used in mathematical statistics and "Entropy" that seeks to be a property of matter instead of a property of a probability distribution.

A micro-reply to the micro-rant:

The origins of statistical physics preceded the modern development of probability theory and preceded the modern formulation of mathematical statistics. So it isn't surprising that presentations of statistical physics follow different traditions.
Philip Koeck said:
You can find my version of the mentioned derivations here:
https://www.researchgate.net/publication/322640913_A_microcanonical_derivation_gives_the_Boltzmann_for_distinguishable_and_the_Bose-Einstein_distribution_for_indistinguishable_particles

You say:
I will derive the most probable distribution of N particles among k energy levels from combinatorics and some classical thermodynamics.

We can survive the use of "distribution" to mean something different than a probability distribution. There is a probability distribution involved, but a particular "distribution of N particles among k energy levels" is an outcome of that probability distribution. For a probability distribution that assigns a probability to each possible "distribution" of N particles among k energy levels, you derive the most probable outcome.
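To make the "outcome" concrete, here is a tiny enumeration (a toy of my own, assuming N distinguishable particles and equally likely levels) in which each labeled assignment of particles to levels is one equally probable outcome, and a "distribution" is the induced occupancy vector:

```python
from itertools import product

# Toy setup (assumed for illustration): N = 4 distinguishable particles,
# k = 3 equally likely energy levels.
N, k = 4, 3

counts = {}
for assignment in product(range(k), repeat=N):   # all k^N labeled outcomes
    occ = tuple(assignment.count(level) for level in range(k))
    counts[occ] = counts.get(occ, 0) + 1

best = max(counts, key=counts.get)               # most probable "distribution"
print(best, counts[best] / k**N)                 # a permutation of (2, 1, 1), prob 12/81
```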

However, what physically is the interpretation of a particular outcome? After all, in a gas things are changing. So do we define an outcome with a reference to time? If we ignore time, do we think of realizing an outcome as picking a container of gas at random from a population of containers of gas sitting on a shelf, each in a static condition as far as the "distribution" of particles in energy levels goes?

I've read (somewhere, I can't remember) that Boltzmann's original thinking did involve time - i.e. that he thought of a point describing a specific container of gas moving around in phase space. In equilibrium, the probability of the gas having a given property was the fraction of the time the gas had that property within the confined set in phase space where its point moved around. So realizing an outcome physically amounted to picking a random time to observe the gas.

I also read that this concept of an outcome eventually caused problems - meaning problems within Boltzmann's lifetime and before the advent of QM.

For a probability distribution whose outcomes give numbers of particles per energy level, what is the final (classical) verdict on the physical definition of such an outcome?
NFuller said:
A microstate is a unique distribution of particles in phase space. Swapping the position and momentum of two identical particles will give the same configuration in phase space and the same microstate. If we didn't get the same microstate, that would imply that some microstates with many possible permutations are much more likely than others. The problem is that such a system cannot be at equilibrium. At equilibrium, the system must be in a maximum entropy configuration which occurs when each microstate comprising the equilibrium macrostate is equally likely.

The above quote alludes to a probability distribution whose outcome is a microstate - or some property of a microstate. So the question again arises: what physically is meant by realizing such an outcome? Observe the physical system at a random time? Pick a physical system at random from a set of physical systems where the property is not changing in time?

The passage deals with
1) The way a microstate is defined
2) The assertion that at equilibrium, the probability distribution whose outcomes are microstates is a maximum entropy distribution.

The definition of microstate can be justified by "the voice of authority" or tradition. I assume it can also be justified by arguments about practicality along the lines of "It wouldn't make physical sense to define a microstate so it depended on which particular particles are in an energy level because ..." It isn't clear (to me) which type of justification is being used for item 1)

It also isn't clear what justification is implied for item 2). It could be justified solely by empirical tests - or it might be justified by a Bayesian form of reasoning. If we take the Bayesian approach we face the usual criticism: You defined a particular type of outcome (i.e. microstate) and assumed a maximum entropy distribution for it, but why didn't you define a different type of outcome and assume a maximum entropy distribution for that type of outcome?
 
  • #62
Stephen Tashi said:
The definition of microstate can be justified by "the voice of authority" or tradition. I assume it can also be justified by arguments about practicality along the lines of "It wouldn't make physical sense to define a microstate so it depended on which particular particles are in an energy level because ..." It isn't clear (to me) which type of justification is being used for item 1)
I agree with your rant that the definition of microstate, or whatever-state, is a bit sloppy. A microstate is generally interpreted to mean a specific configuration of the sub-units of the system. How to deal with those microstates, and what the physical meaning of those states is, depends on the type of ensemble used. In the microcanonical ensemble, each microstate has an equal probability of being selected at equilibrium. In the canonical ensemble, the probability to select a microstate depends on the energy of the state and the temperature of the system.
Stephen Tashi said:
It also isn't clear what justification is implied for item 2). It could be justified solely by empirical tests - or it might be justified by a Bayesian form of reasoning. If we take the Bayesian approach we face the usual criticism: You defined a particular type of outcome (i.e. microstate) and assumed a maximum entropy distribution for it, but why didn't you define a different type of outcome and assume a maximum entropy distribution for that type of outcome?
The justification used is the second law of thermodynamics, which states that when the system reaches equilibrium, it is in a maximum entropy state. Since we want statistical mechanics to reproduce classical thermodynamics, we impose the requirement that the equilibrium probability distributions maximize the entropy.
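For reference, the standard sketch of that requirement (not specific to anyone's derivation here): maximizing ##-\sum_i p_i \ln p_i## subject to normalization and a fixed average energy via Lagrange multipliers gives
$$\mathcal{L}=-\sum_i p_i\ln p_i-\alpha\Big(\sum_i p_i-1\Big)-\beta\Big(\sum_i p_iE_i-U\Big),\qquad \frac{\partial\mathcal{L}}{\partial p_i}=0\;\Rightarrow\;p_i=\frac{e^{-\beta E_i}}{\sum_j e^{-\beta E_j}},$$
which is the canonical distribution; with the normalization constraint alone, the maximum entropy distribution is the uniform one of the microcanonical ensemble.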
 
  • #63
Philip Koeck said:
Many textbooks claim that particles that obey Boltzmann statistics have to be indistinguishable in order to ensure an extensive expression for entropy. However, a first-principles derivation using combinatorics gives the Boltzmann distribution only for distinguishable particles and the Bose-Einstein distribution for indistinguishable particles (see Beiser, Atkins or my own text on ResearchGate). Is there any direct evidence that indistinguishable particles can obey Boltzmann statistics?

Reading Kardar's comments that I linked to in post #58, I think the answer is yes, classical indistinguishable particles can obey Boltzmann statistics. There is no derivation, simply a postulation. However, there seems to be no problem (in terms of consistency with thermodynamics and the other postulates of classical statistical mechanics) with postulating the 1/N! factor.
 
  • #64
NFuller said:
Strictly speaking this is true. The actual number of states, valid even for small numbers of identical particles, is
$$W=\frac{(N+g-1)!}{N!(g-1)!}$$
where ##g## is the number of states. For the case given above this means
$$W=\frac{(2+2-1)!}{2!(2-1)!}=3$$
In the high temperature limit where ##g>>1## and the low density limit where ##g>>N##, the expression above can be simplified to give the standard Boltzmann counting
$$W=\frac{g^{N}}{N!}$$
Do you have a pdf or a link for the approximation that you could send easily? I can't quite get it right.

About $$W = \frac {g^N} {N!}$$ This is actually not the number of ways of distributing N distinguishable particles among g states; the correct expression for distinguishable particles is $$W = g^N$$
To me it seems that Bose-Einstein does not give the Boltzmann distribution for ##g>>N>>1##, but something similar to Boltzmann, only for indistinguishable particles.
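A quick numeric check of the three per-level counts (a sketch, with n particles and g states in a single level):

```python
from math import comb, factorial

def W_bose_einstein(n, g):
    """Exact count for n indistinguishable particles in g states: (n+g-1)!/(n!(g-1)!)."""
    return comb(n + g - 1, n)

def W_corrected_boltzmann(n, g):
    """The "correct Boltzmann counting" limit: g^n / n!."""
    return g**n / factorial(n)

def W_distinguishable(n, g):
    """Distinguishable particles: g^n."""
    return g**n

for n, g in [(2, 2), (2, 100), (3, 10**4)]:
    print(n, g, W_bose_einstein(n, g), W_corrected_boltzmann(n, g), W_distinguishable(n, g))
```

For ##g>>n## the exact Bose-Einstein count approaches ##g^n/n!## and not ##g^n##, which is the point being made above.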
 
  • #65
One can also have indistinguishable particles in classical mechanics. Instead of using the configuration space ##Q^N## of distinguishable particles and its symplectic manifold ##T^\ast Q^N##, one can use the configuration space ##Q_N = (Q^N \setminus \Delta) / S_N##, where ##\Delta## is the set of coinciding points and ##S_N## is the group of permutations of ##N## elements. The phase space then becomes ##T^\ast Q_N##. One can define Hamiltonians almost as usual, but one has to make sure that they obey permutation symmetry in order to be well defined on the equivalence classes: ##H([p_1,\ldots, q_N]) = \sum_{i=1}^N \frac {p_i^2} {2m} + \sum_{i<j}^N V(\left|q_i - q_j\right|)##. The corresponding Liouville measure and the entropy will automatically get the correct ##\frac 1 {N!}## factors and the statistics is the usual Boltzmann statistics. So if one starts with a configuration space of indistinguishable particles, the Gibbs paradox is resolved naturally and no factors need to be smuggled in.
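A sketch of the measure-theoretic step behind this (under the assumptions already stated in the post; ##\Delta## has measure zero, so removing it changes no integrals): away from ##\Delta## the quotient map ##T^\ast(Q^N\setminus\Delta)\to T^\ast Q_N## is ##N!##-to-one, so for any permutation-symmetric function ##f##
$$\int_{T^\ast Q_N} f\,d\Lambda_N=\frac{1}{N!}\int_{T^\ast Q^N} f\,d\Lambda .$$
Applied to ##f=e^{-\beta H}##, the partition function, and with it the entropy, picks up exactly the ##\frac 1 {N!}## that otherwise has to be inserted by hand.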
 
  • #67
NFuller said:
each microstate has an equal probability of being selected at equilibrium.
What is the physical interpretation of "being selected"? Are we talking about picking a random time and taking the microstate of the system at that time to be the one that is selected?

The justification used is the second law of thermodynamics which states that when the system reaches equilibrium, that is a maximum entropy state. Since we want statistical mechanics to reproduce classical thermodynamics, we impose the requirement that the equilibrium probability distributions maximize the entropy.

I can understand that as a purely empirical claim. What I don't understand is an attempt to justify the definition of microstate by deductive logic - if that's what's being attempted.

If we are discussing Shannon entropy then when a system is at equilibrium some probability distributions of its properties may be maximum entropy distributions and others may not. How do we explain why assuming a maximum entropy distribution for microstates (as defined by occupancy numbers) is a good idea - as opposed to assuming a maximum entropy distribution for selecting the type of state whose description includes which particular particles occupy various energy levels?
 
  • #68
NFuller said:
This wiki page actually goes through much of the derivation. https://en.wikipedia.org/wiki/Maxwell–Boltzmann_statistics

From that article we have:
Thus when we count the number of possible states of the system, we must count each and every microstate, and not just the possible sets of occupation numbers.

Which implies the author of the article is willing (at that stage of exposition) to define a microstate as a set of information that includes more than just the occupancy numbers.

In post #24, in reply to:

You seem to be saying that swapping two particles in different states does not lead to a different microstate even if it's obvious that the particles have been swapped. My understanding was that swapping distinguishable particles in different states leads to a new microstate even if the particles are identical.

you wrote:

A microstate is a unique distribution of particles in phase space. Swapping the position and momentum of two identical particles will give the same configuration in phase space and the same microstate.

What is the consensus definition of a "microstate"?
 
  • #70
Stephen Tashi said:
What is the physical interpretation of "being selected"? Are we talking about picking a random time and taking the microstate of the system at that time to be the one that is selected?
Yes
Stephen Tashi said:
I can understand that as a purely empirical claim. What I don't understand is an attempt to justify the definition of microstate by deductive logic - if that's what's being attempted.

If we are discussing Shannon entropy then when a system is at equilibrium some probability distributions of its properties may be maximum entropy distributions and others may not. How do we explain why assuming a maximum entropy distribution for microstates (as defined by occupancy numbers) is a good idea - as opposed to assuming a maximum entropy distribution for selecting the type of state whose description includes which particular particles occupy various energy levels?
I'm sorry but I don't think I understand what you are asking. Can you rephrase this?
Stephen Tashi said:
What is the consensus definition of a "microstate"?
I think the closest I can give to a "consensus definition" is the one given in Kardar's statistical physics book. He says
At any time ##t##, the microstate of a system of ##N## particles is described by specifying the positions ##\vec{q}_i(t)## and momenta ##\vec{p}_i(t)## of all of the particles. The microstate thus corresponds to a point ##\mu(t)##, in the ##6N##-dimensional phase space ##\Gamma=\Pi_{i=1}^{N}\{\vec{q}_{i},\vec{p}_{i}\}##
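A minimal toy encoding of that definition (my own illustration, not Kardar's notation):

```python
import numpy as np

N = 4
rng = np.random.default_rng(0)
q = rng.uniform(0.0, 1.0, size=(N, 3))   # positions q_i of the N particles
p = rng.normal(0.0, 1.0, size=(N, 3))    # momenta p_i of the N particles

mu = np.concatenate([q.ravel(), p.ravel()])  # one point in 6N-dimensional phase space
assert mu.size == 6 * N

# Swapping the rows for particles 0 and 1 gives a *different* vector mu,
# so this literal encoding labels the particles.
q[[0, 1]] = q[[1, 0]]
p[[0, 1]] = p[[1, 0]]
mu_swapped = np.concatenate([q.ravel(), p.ravel()])
print(np.array_equal(mu, mu_swapped))    # False (generically)
```

Note that swapping two particles changes the vector, a point taken up later in the thread.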
 
  • #71
Philip Koeck said:
I've tried to fill in the gaps of this derivation (see appended pdf) and I don't get the same result. Am I making a mistake?
It looks like something went wrong here:
[attached image upload_2018-2-16_11-26-27.png: the derivation step in question]

After factoring out the ##g^{n}## the expression should read as
$$\frac{g^{n}\left(1+n/g\right)^{g}}{n^{n}}$$
then use the approximation
$$\left(1+\frac{n}{g}\right)^{g}\approx e^{n}$$
for ##g>>n##.
 

  • #72
NFuller said:
It looks like something went wrong here:
[attachment 220421: the derivation step in question]
After factoring out the ##g^{n}## the expression should read as
$$\frac{g^{n}\left(1+n/g\right)^{g}}{n^{n}}$$
I really don't see a mistake there, I'm afraid. I'm not actually factoring out ##g^{n}##. What happens to the ##n## in the exponent ##n+g## in your result?
 
  • #73
I think this is how it is justified:
$$\frac{(n+g)^{n+g}}{n^{n}g^{g}}=\frac{(n+g)^{n}(n+g)^{g}}{n^{n}g^{g}}\approx\frac{g^{n}(n+g)^{g}}{n^{n}g^{g}}=\frac{g^{n}g^{g}\left(1+n/g\right)^{g}}{n^{n}g^{g}}$$
$$=\frac{g^{n}\left(1+n/g\right)^{g}}{n^{n}}\approx\frac{g^{n}e^{n}}{n^{n}}\approx\frac{g^{n}}{n!}$$
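One can check this numerically. Note the chain uses Stirling in the form ##\ln n!\approx n\ln n-n##, so the two sides agree at the level of ##\ln W## (which is what enters ##S=k\ln W##) rather than as a ratio of the W's themselves; a sketch:

```python
from math import lgamma, log

def ln_W_exact(n, g):
    """ln of (n+g)^(n+g) / (n^n g^g), the left-hand side above."""
    return (n + g) * log(n + g) - n * log(n) - g * log(g)

def ln_W_approx(n, g):
    """ln of g^n / n!, using lgamma(n+1) = ln(n!)."""
    return n * log(g) - lgamma(n + 1)

# The ratio of the logarithms approaches 1 when g >> n >> 1.
for n, g in [(10, 100), (100, 10**4), (1000, 10**6)]:
    print(n, g, ln_W_exact(n, g) / ln_W_approx(n, g))
```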
 
  • #74
NFuller said:
I think this is how it is justified:
$$\frac{(n+g)^{n+g}}{n^{n}g^{g}}=\frac{(n+g)^{n}(n+g)^{g}}{n^{n}g^{g}}\approx\frac{g^{n}(n+g)^{g}}{n^{n}g^{g}}=\frac{g^{n}g^{g}\left(1+n/g\right)^{g}}{n^{n}g^{g}}$$
$$=\frac{g^{n}\left(1+n/g\right)^{g}}{n^{n}}\approx\frac{g^{n}e^{n}}{n^{n}}\approx\frac{g^{n}}{n!}$$
Thanks, that must be it. I didn't see the additional approximation.
 
  • #75
NFuller said:
I'm sorry but I don't think I understand what you are asking. Can you rephrase this?

I'll try. But we face the fundamental problem that the meaning of a physical system "selecting" a certain state hasn't been defined. (e.g. Are we talking about the state that is "selected" if we select a random time in [0,T] to measure the state of a system in equilibrium?)

Two competing definitions of micro-state have cropped up. In definition 1) a micro-state is only described by the occupancy numbers. In definition 2) the description also includes the labels of which particles are occupying the energy levels.

So a [classical] view is that a given physical system can be described by two probability distributions: f1 describes the probability of the system "selecting" the states of definition 1) and f2 describes the probability of selecting states of definition 2).

We are going to model the system in equilibrium either by assuming f1 to be a maximum entropy distribution subject to some constraints or we are going to model the system by assuming f2 is a maximum entropy distribution subject to the same constraints.

How do we choose between using f1 versus f2? Is the choice made on a purely empirical basis - to match data from experiments? Or is there some collection of assumptions and definitions that can deduce which distribution we choose?

Speculating about the deductive way - the appropriate choice may be dictated by how we define "equilibrium".
I think the closest I can give to a "consensus definition" is the one given in Kardar's statistical physics book.

The literal interpretation of that definition would distinguish particle 1 from particle 2 via the position of its data in the 6N-dimensional vector. So that definition agrees with definition 2). It also agrees with the definition used in the Wikipedia article https://en.wikipedia.org/wiki/Maxwell–Boltzmann_statistics.

To introduce the whatever-we-shall-call-it concept of definition 1), the Wikipedia article speaks of "degeneracies" of microstates.
 
  • #76
Stephen Tashi said:
Two competing definitions of micro-state have cropped up. In definition 1) a micro-state is only described by the occupancy numbers. In definition 2) the description also includes the labels of which particles are occupying the energy levels.
Stephen Tashi said:
How do we choose between using f1 versus f2? Is the choice made on a purely empirical basis - to match data from experiments? Or is there some collection of assumptions and definitions that can deduce which distribution we choose?
I think I understand your confusion. The choice of which description of microstate to use depends on the type of statistical ensemble being employed. For example, your definition 2 says to include the labels of which particles occupy which energy levels, but what if all the particles have the same energy? Then we must use the microcanonical ensemble and definition 1 and f1 is used. If all the particles are at the same temperature but may have different energies, then the canonical ensemble is used, which follows from definition 2.
 
  • #77
NFuller said:
The choice of which description of microstate to use depends on the type of statistical ensemble being employed. For example, your definition 2 says to include the labels of which particles occupy which energy levels, but what if all the particles have the same energy?
I understand a situation where the totality of the particles has a constant energy.

Then we must use the microcanonical ensemble and definition 1 and f1 is used.
I understand that's the standard procedure. I don't understand the justification for "must". Is it empirical or deductive? Even if it's only tradition, there must be some empirical reason why the tradition is followed.

If all the particles are at the same temperature but may have different energies, then the canonical ensemble is used, which follows from definition 2.

I understand that's standard procedure, but again, I don't see the justification for it. We can't justify it by saying that the procedure is justified by the definition and the definition justifies the procedure.
 
  • #78
NFuller said:
I think this is how it is justified:
$$\frac{(n+g)^{n+g}}{n^{n}g^{g}}=\frac{(n+g)^{n}(n+g)^{g}}{n^{n}g^{g}}\approx\frac{g^{n}(n+g)^{g}}{n^{n}g^{g}}=\frac{g^{n}g^{g}\left(1+n/g\right)^{g}}{n^{n}g^{g}}$$
$$=\frac{g^{n}\left(1+n/g\right)^{g}}{n^{n}}\approx\frac{g^{n}e^{n}}{n^{n}}\approx\frac{g^{n}}{n!}$$
I have a sort of a summary of my view on things now:
I've appended some derivations that show most of what I've come up with.
In short: The Boltzmann distribution follows from "traditional" Boltzmann counting for distinguishable particles.
A distribution like Boltzmann, but without the factor N, follows from "correct" Boltzmann counting, which is a limiting case of Bose-Einstein counting for indistinguishable particles when g >> n >> 1 for every energy level.
I don't see that this necessarily makes particles distinguishable. Low occupancy is not the same as distinguishability, in my opinion.
In both cases I assume S = k ln W when I determine the Lagrange multipliers and for deriving an expression for S at the end.
Obviously if I allow for S = k ln W + f(N) the results change.
In both cases I get an extensive expression for S, so there's no indication of a paradox, again assuming S = k ln W.
Two things worry me: there is no factor N in the Boltzmann distribution from "correct" counting, and S for distinguishable particles is missing the "pV-term".
Any comments?
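For what it's worth, a toy numerical check of the extensivity point (entirely a construction of mine: a single effective level, with the assumption ##g = cV## that the number of states scales with volume, as in the density-of-states argument discussed later):

```python
from math import log

k = 1.0  # Boltzmann's constant in arbitrary units

def S_traditional(N, V, c=1e26):
    """S = k ln W with W = g^N ("traditional" counting), assuming g = c*V."""
    return k * N * log(c * V)

def S_corrected(N, V, c=1e26):
    """S = k ln W with W = g^N/N!, using Stirling ln N! ~ N ln N - N."""
    return k * (N * log(c * V) - N * log(N) + N)

N, V = 1e22, 1.0
print(S_traditional(2 * N, 2 * V) / S_traditional(N, V))  # ~2.02: not extensive
print(S_corrected(2 * N, 2 * V) / S_corrected(N, V))      # 2.0: extensive
```

If g is instead taken independent of V, both expressions scale linearly with N, which is consistent with the extensive results reported above.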
 

  • #79
Philip Koeck said:
I don't see that this necessarily makes particles distinguishable. Low occupancy is not the same as distinguishability, in my opinion.
It may be helpful to look back at posts 59 and 60. There, a simple example was given showing how to count the states of two identical particles. The Bose-Einstein counting is the exact counting, but if ##g>>n>>1## then this can be approximated by "correct Boltzmann counting". This is not making the particles distinguishable, it is only a mathematical approximation.
Philip Koeck said:
Two things worry me: there is no factor N in the Boltzmann distribution from "correct" counting, and S for distinguishable particles is missing the "pV-term".
This is really the whole point. The correct counting lacks the factor N and gives the correct thermodynamic relations. The incorrect counting has the factor N which cancels out with another factor N later, so you end up missing the pressure term.
 
  • #80
Stephen Tashi said:
I understand a situation where the totality of the particles has a constant energy.
It's not just that the total energy is constant, but that each particle has a constant average energy. This is sufficient because equilibrium statistical mechanics is a time independent construction of the particle behavior.
Stephen Tashi said:
I understand that's the standard procedure. I don't understand the justification for "must". Is it empirical or deductive? Even if it's only tradition, there must be some empirical reason why the tradition is followed.
The ansatz of the microcanonical ensemble is that the momenta of all the particles lie on the surface of a ##3N##-dimensional sphere in momentum space, i.e. the total energy is fixed. Thus if there is a system where all the particles have an average energy ##E##, then the ansatz is satisfied, and the microcanonical ensemble is valid.
 
  • #81
NFuller said:
It's not just that the total energy is constant, but that each particle has a constant average energy.
How could there be a non-constant average? I can see how each particle could have the same expected value of energy. Mathematical expectations (and averages) are taken with respect to some variable. So to define what it means for a particle to have an average energy, we need to know what physical variable we are averaging over. Is the average taken with respect to time in some long time interval?

This is sufficient because equilibrium statistical mechanics is a time independent construction of the particle behavior.

Hearing time mentioned makes me hopeful. Are we getting closer to answering my question about what it means for a system to "select" a microstate? After all, if we are computing probabilities that the system "selects" a microstate, we need to know what that means physically to "select". I suggested that we pick a random time from a uniform distribution in some long time interval [0,T] and observe the microstate of the system at the selected time. Nobody has supported or opposed that definition of "selecting".

The ansatz of the microcanonical ensemble is that the momenta of all the particles lie on the surface of a ##3N##-dimensional sphere in momentum space, i.e. the total energy is fixed.

I don't know what the word "ansatz" means in this context. Going by the Wikipedia article https://en.wikipedia.org/wiki/Microcanonical_ensemble , the microcanonical ensemble is used to represent a system of particles that has a time-invariant value of energy. Is the only way to represent such a system to represent each individual particle as having the same time-invariant value of energy?

Thus if there is a system where all the particles have an average energy ##E##, then the ansatz is satisfied, and the microcanonical ensemble is valid.

What's valid is that a system where all particles have the same average energy may satisfy the definition of a microcanonical ensemble.

But this doesn't answer the question of why, in a microcanonical ensemble, a particular definition of "microstate" is appropriate for defining events with equal probability. The definition of "microcanonical ensemble" is made without defining a "microstate".

My understanding so far:
By definition, in the "microcanonical ensemble", each particle has the same average energy ##E## where the average is taken with respect to time, say time over some long time interval. The system of particles has an average energy ##E_S## where the average is taken with respect to time. Since both ##E## and ##E_S## are averages taken with respect to time they are constants with respect to time.

I think the definition of "microcanonical ensemble" also says that the total energy ##E(t)## of the system at time ##t## is constant with respect to time. Assuming that is a requirement, then it must be that ##E(t) = E_S##. This still leaves open the possibility that the energy of an individual particle can vary with time.

Do I have the right picture?
 
  • #82
NFuller said:
It may be helpful to look back at posts 59 and 60. There, a simple example was given showing how to count the states of two identical particles. The Bose-Einstein counting is the exact counting, but if ##g>>n>>1## then this can be approximated by "correct Boltzmann counting". This is not making the particles distinguishable, it is only a mathematical approximation.
I completely agree.

NFuller said:
This is really the whole point. The correct counting lacks the factor N and gives the correct thermodynamic relations. The incorrect counting has the factor N which cancels out with another factor N later, so you end up missing the pressure term.
Here we might be at the core of my problem.
As I see it the Boltzmann distribution for classical, distinguishable particles, such as Xenon atoms (written without index i) is this:
n = N g ea e-bu
Here n is the number of particles in a particular energy level with energy u, g is the number of states in that level, N the total number of particles, a is chemical potential/kT and b is 1/kT.
If T is constant, I would say g does not depend on N. I don't think the chemical potential depends on N either, does it?
On the other hand n must be proportional to N.
That's why the factor N has to be there, I think (unless the chemical potential changes with N).

In this context, I think I've noticed that the cause of the non-extensive entropy expression for classical, distinguishable particles, based on "incorrect" Boltzmann counting, is actually that the density of states g(u)du is made proportional to V. I've seen two ways of coming up with this expression for the density of states (in Beiser and in Blundell), and I don't quite buy either of them.

I agree that for quantum mechanical particles the situation can be different since g depends on N and/or V, for example for photons in a box and maybe for a hydrogen gas.
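For reference, the textbook free-particle density of states (the result whose derivations in Beiser and Blundell are being questioned; spin degeneracy omitted) is
$$g(u)\,du=\frac{2\pi V(2m)^{3/2}}{h^{3}}\,\sqrt{u}\,du,$$
so the proportionality to V enters through the box quantization of the momentum states.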
 
  • #83
Stephen Tashi said:
How could there be a non-constant average? I can see how each particle could have the same expected value of energy. Mathematical expectations (and averages) are taken with respect to some variable. So to define what it means for a particle to have an average energy, we need to know what physical variable we are averaging over. Is the average taken with respect to time in some long time interval?
What I meant to say was that each particle has the same average energy in the microcanonical ensemble.
Stephen Tashi said:
Hearing time mentioned makes me hopeful. Are we getting closer to answering my question about what it means for a system to "select" a microstate? After all, if we are computing probabilities that the system "selects" a microstate, we need to know what that means physically to "select". I suggested that we pick a random time from a uniform distribution in some long time interval [0,T] and observe the microstate of the system at the selected time. Nobody has supported or opposed that definition of "selecting".
This definition is reasonable.
Stephen Tashi said:
But this doesn't answer the question of why, in a microcanonical ensemble, a particular definition of "microstate" is appropriate for defining events with equal probability. The definition of "microcanonical ensemble" is made without defining a "microstate".
As I mentioned before, a microstate is a point in a ##6N##-dimensional phase space, i.e. a set of ##N## positions and ##N## momenta. In the microcanonical ensemble, the momenta are restricted to lie on the surface of a ##3N##-dimensional sphere, which is sufficient to constrain one of the thermodynamic variables, i.e. the energy. I don't know of any other way to describe this.
 
  • #84
Philip Koeck said:
Here we might be at the core of my problem.
As I see it the Boltzmann distribution for classical, distinguishable particles, such as Xenon atoms (written without index i) is this:
$$n = N g e^{a} e^{-bu}$$
Here n is the number of particles in a particular energy level with energy u, g is the number of states in that level, N the total number of particles, a is chemical potential/kT and b is 1/kT.
It looks like you are starting to derive the grand canonical ensemble because you have introduced the chemical potential ##\mu##. In that case ##N##, as you have defined it, does not exist because in this ensemble the particle number is not fixed. What you have is not exactly the grand canonical ensemble; it looks like there is a factor ##n## missing in one of the exponentials. I think it may be easier to start with constructing either the microcanonical or canonical ensemble where ##N## is a fixed value and convince yourself of the counting that way.
 
  • #85
NFuller said:
It looks like you are starting to derive the grand canonical ensemble because you have introduced the chemical potential ##\mu##. In that case ##N##, as you have defined it, does not exist because in this ensemble the particle number is not fixed. What you have is not exactly the grand canonical ensemble; it looks like there is a factor ##n## missing in one of the exponentials. I think it may be easier to start with constructing either the microcanonical or canonical ensemble where ##N## is a fixed value and convince yourself of the counting that way.
Not at all. I assume constant U and constant N. The constant N constraint leads to the Lagrange multiplier alpha, which turns out to be ##-\mu/kT##. So the chemical potential occurs in the distribution because of the constraint of constant N. I've appended the text once more.
You're not really answering my question of whether the factor N in the Boltzmann distribution makes sense or not.
 

  • #86
NFuller said:
As I mentioned before, a microstate is a point in a ##6N##-dimensional phase space, i.e. a set of ##N## positions and ##N## momenta. In the microcanonical ensemble, the momenta are restricted to lie on the surface of a ##3N##-dimensional sphere, which is sufficient to constrain one of the thermodynamic variables, i.e. the energy. I don't know of any other way to describe this.

My understanding of that:

One point on the constant-energy surface in the ##6N##-dimensional phase space represents the state of the system (at a given time) and the changing state of the system is visualized by a "moving point" on that surface. By definition of this ##6N##-dimensional point, subsets of its components represent data for individual particles, so by definition of such a point, each individual particle is "distinguished".

However, to justify computing a probability distribution based on the above model in a particular way requires more assumptions than merely using the above as a definition. The basic concept must be the (imperfect) notion that "Each point on the surface has the same probability of being where the system is" - meaning (the equally imperfect concept) that "The system spends the same fraction of time (in some long time interval [0,T]) at each point on the surface".

Naturally the notion of probability "at" a point must be replaced by a probability density. And the notion of the fraction of time a system spends "at" a point only makes literal sense if the system stops dead in its tracks for some finite interval of time.

The calculations based on defining discrete microstates and doing combinatorics on them are unjustified unless we establish facts beyond the mere definition of the microcanonical ensemble. These facts are

1) The probability density for the system being at a point on the constant-energy surface is a uniform distribution over that surface.

2) With the way the energy levels of the discrete microstates are defined, assuming a uniform distribution over the microstates approximates a uniform probability density over the constant-energy surface - and the correct answer (to a given computation) about the uniform probability density can be found by taking the limit of the calculation performed on the discrete microstates as the number of microstates approaches infinity.

I'll conjecture fact 1) can be established by defining "equilibrium" to mean exactly the situation described in fact 1). Instead of such legal trickery, there are probably experimental ways to test whether a system that is in equilibrium (using the empirical notion of that word) satisfies fact 1).

I'll conjecture that fact 2) is never established in typical expositions of thermodynamics! The mathematical aspects of it look imposing. They involve ergodic processes and limits of sequences of functions. (Maybe there's no mathematical way to make the classical model actually work!)
 
  • #87
Philip Koeck said:
Not at all. I assume constant U and constant N. The constant N constraint leads to the Lagrange multiplier alpha, which turns out to be ##-\mu/kT##. So the chemical potential occurs in the distribution because of the constraint of constant N. I've appended the text once more.
You're not really answering my question of whether the factor N in the Boltzmann distribution makes sense or not.
It looks like when using the correct Boltzmann counting, without the factor ##N##, the number of particles in the ##i##th state is a function of the temperature only. This may be reasonable in the thermodynamic limit, but I'm not sure. Correct me if I'm wrong, but it looks like you are holding ##N##, ##T##, and ##U## constant in the derivation. What is bothering me is that this is not one of the five standard ensemble types, so I am wondering if this approach is meaningful when describing a thermodynamic state.
 
  • #88
Stephen Tashi said:
However, to justify computing a probability distribution based on the above model in a particular way requires more assumptions than merely using the above as a definition. The basic concept must be the (imperfect) notion that "Each point on the surface has the same probability of being where the system is" - meaning (the equally imperfect concept) that "The system spends the same fraction of time (in some long time interval [0,T]) at each point on the surface".
This is generally justified a priori by stating that there is no directional preference in the momentum, so the points are uniformly distributed on the sphere.
Stephen Tashi said:
(Maybe there's no mathematical way to make the classical model actually work!)
You may be right. I think I have heard of people trying to prove the a priori arguments given and they always fail miserably.
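For what it's worth, the "uniform on the sphere" statement is easy to realize numerically, using the standard Gaussian trick (my own illustration, not something from the thread):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_shell(N, E, m=1.0):
    """Momenta uniform on the constant-energy sphere sum_i |p_i|^2 / (2m) = E."""
    p = rng.normal(size=3 * N)            # isotropic: no directional preference
    p *= np.sqrt(2.0 * m * E) / np.linalg.norm(p)
    return p.reshape(N, 3)

p = sample_shell(N=1000, E=1500.0)
print(np.sum(p**2) / 2.0)                      # 1500.0: exactly on the energy shell
print(np.mean(np.sum(p**2, axis=1) / 2.0))     # ~1.5 = E/N average per particle
```

Each call stands in for one "selection" of a microstate; averaging over many calls plays the role of the time average discussed above.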
 
  • #89
NFuller said:
It looks like when using the correct Boltzmann counting, without the factor ##N##, the number of particles in the ##i##th state is a function of the temperature only. This may be reasonable in the thermodynamic limit, but I'm not sure. Correct me if I'm wrong, but it looks like you are holding ##N##, ##T##, and ##U## constant in the derivation. What is bothering me is that this is not one of the five standard ensemble types, so I am wondering if this approach is meaningful when describing a thermodynamic state.
If I restrict myself to ideal gases then U depends only on T and N, so T is automatically constant, I agree.
Also notice that I say nothing about T until I interpret the Lagrange multipliers, so T is not really part of the model I use in the derivations.
I was assuming my derivations were microcanonical due to the constant U, but I don't really know.
 
  • #90
Let's look at a very specific case:
The Boltzmann distribution for "correct Boltzmann counting", which is an approximation to Bose-Einstein, is this (written without index):
$$n = g e^{a} e^{-bu}$$
Here n is the number of particles in a particular energy level with energy u, g is the number of states in that level, N the total number of particles, a is chemical potential/kT and b is 1/kT.
Now assume we have two identical containers with equal V. In each there is an ideal gas at very low pressure at temperature T.
Container A contains twice as many atoms as container B, so obviously the pressure and the internal energy are twice as high in A.
Let's divide the range of kinetic energies into discrete energy levels for the sake of the model.
I would say that for a given energy level u, the number of atoms in that level, n, should be twice as big for container A as for container B. Do you agree?
If so, which factor in the above distribution function accounts for this?
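A numeric sketch of one possible answer (a toy of mine, with made-up levels and degeneracies): if ##e^{a}## is fixed by the constraint ##\sum_i n_i = N## at fixed T and fixed ##g_i##, the N-dependence sits entirely in the chemical-potential factor, and every level's occupancy doubles when N doubles:

```python
from math import exp, log

u = [0.0, 1.0, 2.0, 3.0]   # hypothetical level energies
g = [1.0, 2.0, 2.0, 1.0]   # hypothetical numbers of states per level
beta = 1.0                 # 1/kT, held fixed

def occupations(N):
    """n_i = g_i e^a e^(-b u_i), with e^a set by the constraint sum(n_i) = N."""
    Z = sum(gi * exp(-beta * ui) for gi, ui in zip(g, u))
    a = log(N / Z)         # a (= mu/kT here) shifts by ln 2 when N doubles
    return [gi * exp(a - beta * ui) for gi, ui in zip(g, u)]

nA = occupations(2000.0)   # container A
nB = occupations(1000.0)   # container B
print([na / nb for na, nb in zip(nA, nB)])   # [2.0, 2.0, 2.0, 2.0]
```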
 
