Boltzmann with degenerate levels

  • Context: Undergrad 
  • Thread starter: jostpuur
  • Tags: Boltzmann, Levels

Discussion Overview

The discussion revolves around the application of the Boltzmann distribution to a system with degenerate energy levels. Participants explore how changes in the model's energy levels affect the probability distributions and the partition function, considering the implications of degeneracy and the limit of small perturbations.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • One participant presents a model with energy levels and corresponding probabilities defined by the Boltzmann distribution, introducing a perturbation with a small epsilon to the energy levels.
  • Another participant emphasizes the need to account for degeneracy in the partition function, suggesting that it should sum over states rather than just energy levels (see the sketch after this list).
  • A different participant questions the uniformity of the probability distribution, proposing that states with similar energies might compete for the same random events, which could affect their probabilities.
  • Another participant discusses the assumption of equal likelihood for all microstates at equilibrium, referencing the principle of indifference as a justification.
  • One participant introduces the density matrix of the canonical ensemble and its relation to the partition function, highlighting the role of degeneracy in the energy spectrum.
  • Some participants reiterate the fundamental assumption of equilibrium statistical physics that all accessible microstates are equally probable, while noting potential contradictions in classical models.
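
For concreteness, here is a minimal sketch of that degeneracy point (my own illustration with made-up numbers, not code from the thread): summing over individual states and summing over energy levels weighted by their degeneracies give the same partition function.

import numpy as np

# Illustrative three-level system: the level E = 1 is doubly degenerate.
beta = 1.0                                   # inverse temperature 1/(kT)
E_states = np.array([0.0, 1.0, 1.0, 2.0])    # one entry per microstate

Z_states = np.sum(np.exp(-beta * E_states))  # partition function over states

E_levels = np.array([0.0, 1.0, 2.0])         # distinct energy levels
g = np.array([1, 2, 1])                      # degeneracy of each level
Z_levels = np.sum(g * np.exp(-beta * E_levels))  # same sum, grouped by level

assert np.isclose(Z_states, Z_levels)

# The probability of finding the system at level E_i then carries the
# degeneracy factor explicitly:
p_levels = g * np.exp(-beta * E_levels) / Z_levels
print(p_levels)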

Areas of Agreement / Disagreement

Participants express differing views on the treatment of degeneracy and the implications for the probability distributions. There is no consensus on whether one probability distribution is correct over the other, and the discussion remains unresolved regarding the impact of degeneracy and the definition of the partition function.

Contextual Notes

Limitations include the dependence on definitions of states and degeneracy, as well as unresolved mathematical steps regarding the partition function and the treatment of perturbations.

  • #31
jostpuur,
There is a great deal of literature out there justifying the Gibbs formula for entropy as a generalization of Boltzmann's. Note that the Boltzmann entropy formula presupposes the system is already in thermodynamic equilibrium. Gibbs' entropy is a natural generalization which reduces to Boltzmann's under the equilibrium assumption while retaining additivity within the classical domain when you combine systems.

But the short answer to how I would justify that formula is that it leads to empirically confirmed predictions of the behavior of a broad class of systems, from ideal gases to magnetizable materials. You are welcome to try to improve upon that if you like.
 
  • #32
If we've agreed that the entropy for discrete valued random variables is what it is, it's not going to directly imply any unique entropy for continuously valued random variables. For example, suppose we have some probability density \rho(x) defined on the interval [0,1] so that
\int\limits_0^1 \rho(x)\,dx = 1
holds. Suppose we want to approximate this random variable by a discretely valued random variable, but let's not use the most obvious discretization; instead use some non-trivial points 0<x_1<x_2<\cdots<x_N<1. Then we get probabilities p_1,p_2,\ldots,p_N by the formula
p_n \approx \rho(x_n)(x_{n+1}-x_n)
and they will satisfy
\sum_{n=1}^N p_n = 1
It turns out that
-\sum_{n=1}^N p_n\log(p_n) \approx -\int\limits_0^1 \rho(x)\log\big(\rho(x)f(x)\big)\,dx + \underbrace{\log(N)}_{\to\infty}
holds, where the additional function f(x) is something that satisfies f(x)\approx N(x_{n+1}-x_n). If x_{n+1}-x_n\approx \frac{1}{N} holds at least as far as orders of magnitude are concerned, this f(x) makes sense. Based on this it would seem reasonable to state that the entropy should be given by the formula
S = -\int\limits_0^1 \rho(x)\log\big(\rho(x)f(x)\big)\,dx

With a change of notation the entropy could also be written as
-\int\limits_0^1 \bar{\rho}(x)\log(\bar{\rho}(x))\,w(x)\,dx
with a weight w(x)=\frac{1}{f(x)}, which perhaps looks nicer.

This means that the use of entropy is not going to solve the issues that arise from the different discretizations of continuous variables in the context of classical physics and the Boltzmann distribution.
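
A quick numerical check of the above (my own sketch, with an arbitrarily chosen density and grid, not from the original post):

import numpy as np

# Density rho(x) = 2x on [0,1] and the non-uniform grid x_n = (n/N)^2,
# so that f(x) = N (x_{n+1} - x_n) is genuinely non-constant.
N = 100_000
edges = (np.arange(N + 1) / N) ** 2     # 0 = x_0 < x_1 < ... < x_N = 1
x = edges[:-1]                          # left endpoints x_n
dx = np.diff(edges)                     # widths x_{n+1} - x_n

rho = 2.0 * x                           # integrates to 1 on [0, 1]
p = rho * dx                            # p_n ~ rho(x_n) (x_{n+1} - x_n)
p /= p.sum()                            # absorb the O(1/N) discretization error

mask = p > 0
H_discrete = -np.sum(p[mask] * np.log(p[mask]))   # -sum p_n log(p_n)

f = N * dx                              # f(x_n) = N (x_{n+1} - x_n)
rho_safe = np.maximum(rho, 1e-300)      # avoid log(0); those terms vanish anyway
H_continuum = -np.sum(rho * np.log(rho_safe * f) * dx) + np.log(N)

print(H_discrete, H_continuum)          # the two values nearly coincide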
 
  • #33
jostpuur said:
If we've agreed that the entropy for discrete valued random variables is what it is, it's not going to directly imply any unique entropy for continuously valued random variables.

Right. One way to say it is that even though there is a unique "most natural" probability distribution on a finite set of possible events, which is to assume they are equally likely, there is no unique "most natural" probability distribution on any infinite set. To get agreement between statistical mechanics and thermodynamics, I think you have to assume something along the lines of "equal volumes in phase space imply equal probabilities". Classically, I don't think there is really any motivation for making this assumption other than the fact that it works.
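
Spelled out (the standard microcanonical phrasing, in my own words): for any region A of the accessible energy shell \Gamma_E in phase space,
P(A) = \frac{\mathrm{vol}(A\cap\Gamma_E)}{\mathrm{vol}(\Gamma_E)}
i.e. probability is taken proportional to Liouville volume, not to any particular coordinate parametrization of the states.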
 
Likes: mfb
  • #34
jostpuur,
What your exposition boils down to is simply the fact that you can carry out an arbitrary change of variable over the state space and change the form of Gibbs' entropy to include the weighting factor you mentioned; or, I would point out, once you're done I can introduce a change of coordinates to recover the Gibbs form.

Six of one, half a dozen of the other. The question is: what is particular about the state-space coordinates which yield the Gibbs form? We answer that in quantum theory and link it to the classical case via the correspondence principle.
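
To make the "six of one" point explicit (my own working, not from the post, reusing the f(x) from post #32): pick a new coordinate u with du = dx/f(x), so that the density transforms as \rho_u(u) = \rho(x)f(x). Then
-\int \rho_u(u)\log\rho_u(u)\,du = -\int \rho(x)f(x)\log\big(\rho(x)f(x)\big)\,\frac{dx}{f(x)} = -\int \rho(x)\log\big(\rho(x)f(x)\big)\,dx
so the weighted entropy in the x coordinate is exactly the plain Gibbs form in the u coordinate.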
 
  • #35
Some further elaboration... look at the dimensional factors in the entropy integral.
S= -\kappa \int \rho(\xi) \log(\rho(\xi)) d\xi
where \xi is our state-space coordinate multi-variable. We should understand that the state-space coordinates are not generally dimensionless quantities. As such the probability density function is not a dimensionless number either; it carries the inverse units of the state-space volume element. Its occurrence alone within the logarithm begs the introduction of a unit-cancelling factor, let's say \eta. Call this the gauge factor.

Our entropy formula would then become:
S= -\kappa \int \rho(\xi) \log(\rho(\xi)\eta) d\xi
And naturally the question begged, in considering arbitrary coordinate systems on state space, is the local change of this gauge, and hence the coordinate dependence of the gauge factor.
S= -\kappa \int \rho(\xi) \log(\rho(\xi)\eta(\xi)) d\xi
But if unimodular coordinates are used then the gauge factor will be a constant, and as mentioned before we recover the Gibbs form plus an additive factor corresponding to, in this case, the expectation value of a constant. It just shifts the zero entropy point: S = S' + \kappa \log(\eta).

Mind you I'm saying nothing new here, just using an alternative narrative.
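
As a toy illustration of the gauge factor (my own sketch, with a uniform density and made-up numbers): the same physical state described in metres and in centimetres gives differential entropies differing by \log(100), and a consistently transformed \eta cancels the discrepancy.

import numpy as np

# Uniform density on an interval of physical length 2: rho = 1/L, so
# -int rho log(rho) dx = log(L), which depends on the unit of length.
L_m, L_cm = 2.0, 200.0                  # the same interval in m and in cm
print(np.log(L_cm) - np.log(L_m))       # log(100): a pure unit artefact

# With the gauge factor eta (one fixed reference length, expressed in
# each unit system), S = -int rho log(rho * eta) dx = log(L / eta):
eta_m, eta_cm = 1.0, 100.0
print(np.log(L_m / eta_m), np.log(L_cm / eta_cm))   # both log(2): invariant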
 
