Boltzmann with degenerate levels

  • Thread starter: jostpuur
  • Tags: Boltzmann, Levels
Summary
The discussion centers on the implications of using the Boltzmann distribution for a system with degenerate energy levels. It highlights how modifications to the energy levels, particularly the introduction of a small positive epsilon, affect the probability distributions and partition functions. The conversation emphasizes the need for careful consideration of degeneracy factors in the partition function, as well as the assumption that all accessible microstates are equally probable at equilibrium. There is debate over whether the uniformity of the probability distribution can be justified, especially in classical models where microstates may not be clearly defined. Ultimately, the discussion raises questions about the foundational assumptions of statistical mechanics and the relationship between classical and quantum descriptions of systems.
  • #31
jostpuur,
There is a great deal of literature out there justifying the Gibbs formula for entropy as a generalization of Boltzmann's. Note that the Boltzmann entropy formula presupposes the system is already in thermodynamic equilibrium. Gibbs's is a natural generalization which reduces to Boltzmann's under the equilibrium assumption, while retaining additivity when you combine systems within the classical domain.
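For concreteness, the reduction is the standard one-line computation (nothing specific to this thread): with W accessible microstates and the equilibrium assumption p_i = 1/W, the Gibbs sum collapses to Boltzmann's formula,
S = -k_B\sum_{i=1}^{W} p_i\log p_i = -k_B\sum_{i=1}^{W}\frac{1}{W}\log\frac{1}{W} = k_B\log W.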

But the short answer to how I would justify that formula is that it leads to empirically confirmed predictions of the behavior of a broad class of systems, from ideal gases to magnetizable materials. You are welcome to try to improve upon that if you like.
 
  • #32
If we've agreed that the entropy for discrete valued random variables is what it is, that does not directly imply any unique entropy for continuously valued random variables. For example, suppose we have some probability density \rho(x) defined on the interval [0,1] so that
\int\limits_0^1 \rho(x)\,dx = 1
holds. Suppose we want to approximate this random variable by a discretely valued one, but let's not use the most obvious discretization; instead, use some non-trivial points 0 < x_1 < x_2 < \cdots < x_N < 1. Then we get probabilities p_1, p_2, \ldots, p_N by the formula
p_n \approx \rho(x_n)(x_{n+1} - x_n)
and they will satisfy
\sum_{n=1}^N p_n = 1
It turns out that
-\sum_{n=1}^N p_n\log(p_n) \approx -\int\limits_0^1 \rho(x)\log\big(\rho(x)f(x)\big)\,dx + \underbrace{\log(N)}_{\to\infty}
holds, where the additional function f(x) is something that satisfies f(x) \approx N(x_{n+1} - x_n). If x_{n+1} - x_n \approx \frac{1}{N} holds at least in order of magnitude, this f(x) makes sense. Based on this it would seem reasonable to state that the entropy should be given by the formula
S = -\int\limits_0^1 \rho(x)\log\big(\rho(x)f(x)\big)\,dx

With a change of notation the entropy can also be written as
-\int\limits_0^1 \bar{\rho}(x)\log(\bar{\rho}(x))\,w(x)\,dx
with \bar{\rho}(x) = \rho(x)f(x) and a weight w(x) = \frac{1}{f(x)}, which perhaps looks nicer.
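As a quick numerical sanity check of the approximation above, here is a minimal Python sketch; the density \rho(x) = 2x and the grid x_n = (n/(N+1))^2 are my own illustrative choices, which give f(x) = 2\sqrt{x}:

import numpy as np

N = 100_000

# Illustrative choices: rho(x) = 2x on [0,1], and a non-uniform grid
# x_n = g(n/(N+1)) with g(u) = u^2, so f(x) = g'(g^{-1}(x)) = 2*sqrt(x).
rho = lambda x: 2.0 * x
f = lambda x: 2.0 * np.sqrt(x)

u = np.arange(1, N + 1) / (N + 1)
x = u**2                                   # 0 < x_1 < ... < x_N < 1
dx = np.diff(np.append(x, 1.0))            # spacings x_{n+1} - x_n

p = rho(x) * dx                            # p_n ~ rho(x_n)(x_{n+1} - x_n)
p /= p.sum()                               # normalize so sum p_n = 1

H_discrete = -np.sum(p * np.log(p))

# Right-hand side: -int rho log(rho f) dx, via a simple Riemann sum.
xs = np.linspace(1e-12, 1.0, 200_001)
S_cont = np.sum(-rho(xs) * np.log(rho(xs) * f(xs))) * (xs[1] - xs[0])

print(H_discrete, S_cont + np.log(N))      # the two should approximately agree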

This means that the use of entropy is not going to solve the issues that arise from the different discretizations of continuous variables in the context of classical physics and the Boltzmann distribution.
 
  • #33
jostpuur said:
If we've agreed that the entropy for discrete valued random variables is what it is, that does not directly imply any unique entropy for continuously valued random variables.

Right. One way to say it is that even though there is a unique "most natural" probability distribution on a finite set of possible events, which is to assume they are all equally likely, there is no unique "most natural" probability distribution on an infinite set. To get agreement between statistical mechanics and thermodynamics, I think you have to assume something along the lines of "equal volumes in phase space imply equal probabilities". Classically, I don't think there is any real motivation for making this assumption other than the fact that it works.
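To make that coordinate-dependence concrete, here is a small Python sketch (the coordinates x and y = x^3 are my own illustrative choice): both priors are "uniform" in some parametrization of the same state space [0,1], yet they assign different probabilities to the same event.

import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Prior 1: uniform in the coordinate x.
x1 = rng.uniform(0.0, 1.0, n)

# Prior 2: uniform in the coordinate y = x**3, i.e. x = y**(1/3).
x2 = rng.uniform(0.0, 1.0, n) ** (1.0 / 3.0)

# Probability of the same physical event, x < 1/2, under each prior:
print((x1 < 0.5).mean())   # ~ 0.500
print((x2 < 0.5).mean())   # ~ 0.125, since P(y < 1/8) = 1/8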
 
  • #34
jostpuur,
What your exposition boils down to is simply the fact that you can carry out an arbitrary change of variables over the state space and change the form of the Gibbs entropy to include the weighting factor you mentioned; conversely, I would point out that once you're done, I can introduce a change of coordinates to recover the Gibbs form.

Six of one, half a dozen of the other. The question is: what is particular about the state-space coordinates which yield the Gibbs form? We answer that in quantum theory and link it to the classical case via the correspondence principle.
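For the record, the bookkeeping behind this remark is the standard change-of-variables identity, stated here in the notation of the previous post rather than anything new: choosing a new coordinate y(x) = \int_0^x w(t)\,dt, the density transforms as \bar{\rho}(y) = \rho(x)/w(x) = \rho(x)f(x), and
-\int \bar{\rho}(y)\log\bar{\rho}(y)\,dy = -\int\limits_0^1 \rho(x)\log\big(\rho(x)f(x)\big)\,dx
so the weighted entropy is exactly the Gibbs form in the coordinate y.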
 
  • #35
Some further elaboration... look at the dimensional factors in the entropy integral.
S= -\kappa \int \rho(\xi) \log(\rho(\xi)) d\xi
where \xi is our state-space coordinate multi-variable. We should understand that the state-space coordinates are not generally dimensionless quantities. As such, the probability density function is not a dimensionless number either, but rather a probability density. Its occurrence alone within the logarithm begs the introduction of a unit-canceling factor, let's say \eta. Call this the gauge factor.

Our entropy formula would then become:
S= -\kappa \int \rho(\xi) \log(\rho(\xi)\eta) d\xi
And naturally the question this begs, when considering arbitrary coordinate systems on state space, is the local variation of this gauge, and hence the coordinate dependence of the gauge factor.
S= -\kappa \int \rho(\xi) \log(\rho(\xi)\eta(\xi)) d\xi
But if unimodular coordinates are used then the gauge factor will be a constant, and as mentioned before we recover the Gibbs form plus an additive factor corresponding, in this case, to the expectation value of a constant. It just shifts the zero point of the entropy: S = S&#039; + \kappa\log(\eta).
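A minimal numerical illustration of this point in Python, with a Gaussian density and a meters-to-millimeters rescaling as my own illustrative choice (and \kappa = 1): the bare differential entropy shifts by \log(1000) under the unit change, while the gauged form gives the same number once \eta is expressed in the matching units.

import numpy as np

# The same Gaussian state-space density expressed in two unit systems:
# xi in meters (sigma = 2 m) and xi' = 1000*xi in millimeters (sigma = 2000 mm).
def differential_entropy(sigma):
    # Closed form of -int rho log(rho) dxi for a Gaussian of width sigma.
    return 0.5 * np.log(2.0 * np.pi * np.e * sigma**2)

S_m = differential_entropy(2.0)       # coordinates in meters
S_mm = differential_entropy(2000.0)   # same state, coordinates in millimeters
print(S_mm - S_m)                     # = log(1000): depends on the units

# Gauge factor eta = 1 meter, expressed in each unit system.  Since eta is
# constant, -int rho log(rho*eta) dxi = differential entropy - log(eta).
eta_m, eta_mm = 1.0, 1000.0
print(S_m - np.log(eta_m), S_mm - np.log(eta_mm))   # equal: gauge cancels units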

Mind you, I'm saying nothing new here, just using an alternative narrative.
 
