Non-sensical negative entropy? (grand canonical ensemble)

AI Thread Summary
The discussion revolves around calculating the entropy for a system of N indistinguishable particles with energy states of ±ε using the grand canonical ensemble. The initial entropy formula presented, S = (positive terms) - N ln N, is problematic as it suggests non-extensive behavior and the potential for negative entropy at large N. Participants suggest that the confusion may stem from the distinction between extensive and extrinsic entropy, with a recommendation to focus on the Gibbs entropy formulation, which is inherently positive. The calculations provided indicate that while the entropy term -N ln N appears, other terms scale positively with N, raising concerns about the physical interpretation of the results. Ultimately, the discussion highlights the complexities of statistical mechanics and the importance of correctly applying ensemble theory to avoid contradictions in entropy calculations.
Hello,

I was investigating a system with N indistinguishable particles, each of which can have an energy \pm \epsilon, and using the grand canonical ensemble, i.e. \Xi = \sum_{N=0}^{\infty} e^{\beta \mu N} Z_N.

But my entropy formula is S = \left( \textrm{a couple of $\sim N $ positive terms } \right) - N \ln N. Not only is this formula not extensive, it also indicates that the entropy will get (arbitrarily) negative for large N! (Also, the formula depends on the temperature, but I'm keeping that constant.)
Note: to avoid confusion, the N that appears in the formula is actually \langle N \rangle.

Must this be a calculation error? The calculation is not long and I've looked through it carefully and everything is straightforward... I'm quite confused at this point!
 
I think the more fundamental way to think of entropy is the sum of -p*ln(p) over all the possible states, where p is the probability of each state. Since p<1, this is positive. Since p is proportional to N, this can look a lot like -N*ln(N) to within overall constants and terms proportional to N. I'm not sure of the resolution of your specific issue, but I suspect it would be resolved by thinking in terms of -p*ln(p) rather than -N*ln(N).
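A quick numerical illustration of the -p*ln(p) point (a minimal Python sketch; the two-level probabilities, the value of beta*epsilon, and k_B = 1 are illustrative choices, not something taken from the thread):

Code:
import numpy as np

def gibbs_entropy(p):
    """S/k_B = -sum p ln p over the states with nonzero probability."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# Boltzmann probabilities for one particle with levels -eps and +eps
beta, eps = 1.0, 1.0
w = np.exp([beta * eps, -beta * eps])
p = w / w.sum()

print(gibbs_entropy(p))           # always >= 0
print(gibbs_entropy([0.5, 0.5]))  # maximal for two states: ln 2 ~ 0.693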
 
mr. vodka said:
Hello,

I was investigating a system with N indistinguishable particles, each of which can have an energy \pm \epsilon, and using the grand canonical ensemble, i.e. \Xi = \sum_{N=0}^{\infty} e^{\beta \mu N} Z_N. <snip>

Without knowing the specific system I can't say for sure, however, one can definitely have systems whose ground-state degeneracy is a function of N (i.e. entropy is extrinsic). These occur in what are called frustrated system, a prototypical example is spins on a kagome lattice.
 
I think there may be a confusion between the terms "extensive" and "extrinsic." The latter just means depends on N-- which entropy usually does. But "extensive" means that the entropy of two systems is the sum of their individual entropies.
 
Thank you both for replying:

@ Ken G:

Are you referring to the Gibbs entropy? I'm not sure why: I cannot choose the entropy function myself, it follows from the grand canonical ensemble (\Xi, as described above) and the fact that the relevant free energy is \Phi = E - TS - \mu N where \Phi = -k_B T \ln \Xi. In other words: I've solved for S.

@ maverick:

I've described my system in my original post; did you read over it, or do you think my description is not sufficient?
 
mr. vodka said:
Are you referring to the Gibbs entropy? I'm not sure why: I cannot choose the entropy function myself, it follows from the grand canonical ensemble (\Xi, as described above) and the fact that the relevant free energy is \Phi = E - TS - \mu N where \Phi = -k_B T \ln \Xi. In other words: I've solved for S.
I'm trying to connect the entropy to a physically meaningful parameter that seems like it should be positive. Is it not true that the entropy function has physical effects only in terms of its changes, like energy, so it wouldn't matter whether it is positive or negative as long as the changes are correct? It seems to me the physical issue behind a grand canonical ensemble is that the creation of additional particles has an energy cost, which must be drawn from some reservoir at T, and that has entropy consequences; but it also gives the system access to more states (the sum over -p ln(p) is a larger sum), and the combined system will always maximize its expected uncertainty, because uncertainty is associated with likelihood, within the external conservation constraints. So it always has to boil down to maximizing the sum of -p ln(p), no matter how that result is derived. That will always be a positive quantity, so I'm wondering if there is not a way to recast the entropy function you are using in terms of a sum over -p ln(p), and assert that any physical system will maximize that quantity subject to the constraints.
 
I understand your general way of thinking, but it's not even that it is negative (if it were negative by a constant I wouldn't worry as much), but how it is negative. There are two weird things:

1) the N ln N term is not extensive
2) more importantly, the - N ln N suggests that the larger you make the system (and don't forget I'm increasing E proportionally) the lower the entropy is... I can't make sense out of that.

What's peculiar is that when I calculate it for distinguishable particles, I don't get this mess!
 
@OP:

Would you mind showing us the steps of your calculation so that we can independently check your derivation?
 
  • #10
mr. vodka said:
[...]each of which can have an energy \pm \epsilon, <snip>

I wonder if the problem is here: you allow particles to be either in a free (positive energy) or bound (negative energy) state. Perhaps if you instead use E \pm \epsilon, with \epsilon < E, the problem goes away?
 
  • #11
Thank you both for the comments.

@ Ken G: I'm familiar with the Gibbs paradox and the Gibbs factor, but I don't think it's the answer; if anything, it's the "problem": I know it's the Gibbs factor in the grand canonical ensemble that is giving me this, but sadly the Gibbs factor has to be included (as the wiki page says).

@ Dickfore: Sure!

So the grand canonical partition function for the system described in the original post is \Xi = \sum_{N=0}^{+\infty} \frac{ e^{\beta \mu N} Z_N }{N!} = \sum_{N=0}^{+\infty} \frac{e^{\beta \mu N} Z_1^N}{N!} = e^{e^{\beta \mu} Z_1} where Z_1 = e^{\beta \varepsilon} + e^{- \beta \varepsilon} = 2 \cosh{\beta \varepsilon} is the canonical partition function for a one particle system. Define for future ease y=e^{\beta \mu} Z_1, then \log \Xi = y.

We also know that the grand canonical potential \Phi = E - TS - \mu N and from statistical mechanics \Phi = -k_B T \log \Xi, hence: \boxed{S}=\frac{E}{T} + k_B \log \Xi - \frac{\mu}{T} N =\boxed{ \frac{E}{T} + k_B y - \frac{\mu}{T} N }.

It would be nice to express this expression for S in terms of N, so I calculate N:
\langle N \rangle = \frac{1}{\beta} \frac{\partial}{\partial \mu} \log \Xi = y which gives that the middle term in the boxed expression (i.e. k_B y) goes like N.

Now to rewrite the last term \frac{\mu}{T} N = k_B \left( \beta \mu \right) N note that \beta \mu = \log e^{\beta \mu} = \log y - \log Z_1 = \log N - \log Z_1.

Using these two rewritings, we get that S = \frac{E}{T} + k_B N - k_B N \log N + k_B N \log Z_1.
(Note that Z_1 only depends on temperature.)

So every term except the third (-N log N) scales like N.

EXTRA CALCULATION: Let's be a bit more careful and check whether E scales as N:
E = - \frac{\partial}{\partial \beta} \log \Xi |_{\beta\mu = \textrm{ constant }} = - e^{\beta \mu} \frac{\partial}{\partial \beta} Z_1 = - e^{\beta \mu} 2 \varepsilon \sinh \beta \epsilon = - \epsilon y \tanh \beta \epsilon = - \epsilon N \tanh \beta \epsilon
It does.
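A quick numerical look at the result (a minimal Python sketch; k_B = 1 and the value of beta*epsilon are illustrative choices of mine, and the code simply evaluates the expressions as derived above):

Code:
import numpy as np

x = 0.5                       # beta * epsilon (temperature held fixed)
Z1 = 2.0 * np.cosh(x)         # single-particle partition function
N = np.array([1.0, 10.0, 100.0, 1000.0])   # values of <N>

E_over_T = -x * N * np.tanh(x)             # E/T = beta*E with k_B = 1
S = E_over_T + N - N * np.log(N) + N * np.log(Z1)

for n, s in zip(N, S):
    print(f"<N> = {n:7.0f}   S/k_B = {s:12.2f}")
# The -N ln N term dominates for large <N>, so S eventually turns negative.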

EDIT: Resnick, thanks for your post, it appeared while I was writing my post. I don't think it would matter. So say I used \varepsilon_0 \pm \varepsilon, then the only thing that changes in the calculation is Z_1 \to e^{-\beta \varepsilon_0} Z_1 and the only place where I use the explicit form of Z_1 is in the last line to show E goes as N, and I've redone the calculation with this adapted Z_1 and it doesn't change this fact.
 
  • #12
mr. vodka said:
So the grand canonical partition function for the system described in the original post is \Xi = \sum_{N=0}^{+\infty} \frac{ e^{\beta \mu N} Z_N }{N!} = \sum_{N=0}^{+\infty} \frac{e^{\beta \mu N} Z_1^N}{N!} = e^{e^{\beta \mu} Z_1} where Z_1 = e^{\beta \varepsilon} + e^{- \beta \varepsilon} = 2 \cosh{\beta \varepsilon} is the canonical partition function for a one particle system.
I believe there is a problem with this expression. If you just give each particle one possible state, of energy 0 (if we chose something else it would just show up in the chemical potential), then you would have Z_1 = 1, and Z_1^N = 1. Yet you divide by N!, so your result for the number of ways the system of N indistinguishable particles can be arranged in one energy state is less than 1! That isn't right: there is 1 state, not 1/N! states, for N indistinguishable particles all at energy 0.
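The one-level version of this objection can be made concrete with a small numerical sketch (assumptions of mine: a single level at energy 0 and a fugacity z = e^{\beta\mu} < 1): for indistinguishable particles there is exactly one N-particle state, so the exact grand partition function is the geometric series 1/(1-z), while the Z_1^N/N! recipe gives e^z instead.

Code:
import math

z = 0.5   # fugacity e^{beta*mu}; must be < 1 for the exact sum to converge

# Exact: exactly one state for each N (all particles in the single level)
Xi_exact = sum(z**N for N in range(200))                       # -> 1/(1-z) = 2
# With the N! "Gibbs factor": only 1/N! of a state per N
Xi_gibbs = sum(z**N / math.factorial(N) for N in range(100))   # -> e^z ~ 1.649

print(Xi_exact, 1.0 / (1.0 - z))
print(Xi_gibbs, math.exp(z))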
 
  • #13
mr. vodka said:
So the grand canonical partition function for the system described in the original post is \Xi = \sum_{N=0}^{+\infty} \frac{ e^{\beta \mu N} Z_N }{N!} = \sum_{N=0}^{+\infty} \frac{e^{\beta \mu N} Z_1^N}{N!} = e^{e^{\beta \mu} Z_1} where Z_1 = e^{\beta \varepsilon} + e^{- \beta \varepsilon} = 2 \cosh{\beta \varepsilon} is the canonical partition function for a one particle system. [...]

Using these two rewritings, we get that S = \frac{E}{T} + k_B N - k_B N \log N + k_B N \log Z_1. <snip>

On a first attempt I get

S = - k y + \beta \mu y + \frac{\beta \epsilon y}{\tanh (\beta \epsilon)} = \langle N \rangle \left( \frac{\mu}{kT} + \frac{\epsilon}{kT(\frac{\epsilon}{kT} + \frac{(\epsilon)^3}{3(kT)^3} + \ldots)} - k \right)

Which may be the same as yours (up to the error you made for E, it's 1/tanh). I don't see the problem with this. S is extensive, it goes to infinity as T -> 0 and it's never negative. Did I mess up somewhere?
 
  • #14
Ken G said:
I believe there is a problem with this expression. If you just give each particle one possible state, of energy 0 (if we chose something else it would just show up in the chemical potential), then you would have Z_1 = 1, and Z_1^N = 1. Yet you divide by N!, so your result for the number of ways the system of N indistinguishable particles can be arranged in one energy state is less than 1! That isn't right: there is 1 state, not 1/N! states, for N indistinguishable particles all at energy 0.

He's making a common approximation that tries to undo the double counting with indistinguishable particles; it overcorrects, though, because not all states are actually over-counted N! times.
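A concrete illustration of that over-correction for the two-level system at hand (a sketch; the values of N are arbitrary): the exact number of distinct occupation patterns for N indistinguishable particles in two levels is N + 1, while "distinguishable count divided by N!" would give 2^N/N! at infinite temperature.

Code:
import math

# N indistinguishable particles, 2 single-particle levels:
# a microstate is fixed by how many particles occupy the lower level -> N + 1 states.
for N in (1, 2, 5, 10):
    exact = N + 1
    gibbs = 2**N / math.factorial(N)   # distinguishable count / N!
    print(f"N = {N:2d}   exact states = {exact:3d}   2^N/N! = {gibbs:.4f}")
# Already for N >= 2 the Gibbs-factor estimate is smaller than the true count,
# and it collapses toward zero as N grows: dividing by N! over-corrects here.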
 
  • #15
Thank you all three for the replies. I don't consider the matter settled though, please read on.

@ Ken G & last post of maverick: I understand your reasoning, Ken, but are you sure it's correct? You regard Z as something that counts the number of ways of arranging things, but I think that's not quite right. \Omega in the isolated-system expression S=k \ln \Omega does that, and Z has been derived from it. More specifically, the formula Z = \sum_n e^{-\beta E_n} was derived from it for distinguishable particles, and when you deal with indistinguishable particles you divide \Omega by N!, and I think this factor (in the derivation) transfers unharmed to Z, i.e. the derivation shows that for indistinguishable particles one needs to divide Z by N!. Do you object to this chain of reasoning? It seems more exact, to me, than yours (although less intuitive, granted).

EDIT: I checked the derivation of Z from \Omega and indeed the N! factor transfers unscathed.

EDIT: Upon further reflection, I've changed my mind! I'll make a new post, in case you've already read this post (and thus won't notice the important EDIT).

@ first post of maverick:
How do you get 1/tanh? At what step of my derivation of E did I make a mistake?

But that is a side note. More importantly: I agree with your formula, but the difference is that you leave \mu as it is, which seems sensible, I agree; but look at my calculation (the relevant part starts from "Now to rewrite the last term..."), or if you just want the result: I got that \mu \sim \log N. This small remark introduces the non-extensiveness which eventually leads to a negative entropy. This proportionality might surprise you -- it surprised me at least. Hopefully you might find an error in my reasoning.
 
  • #16
I'm not very used to the grand canonical ensemble and haven't looked at your calculation in detail, but maybe my thoughts are still helpful.

The grand canonical ensemble corresponds to a physical system with fixed temperature T and fixed chemical potential µ (along with other system parameters like V). The microscopic structure of the system determines the grand canonical partition function Z_G(T,µ) and the grand canonical potential Φ(T,µ). Either of them determines all thermodynamical quantities.

If you're talking about S and N in this context, you are always talking about S(T,µ) and N(T,µ). So in order to change N, you have to change the system parameters T and µ. This makes the relationship between S and N non-obvious, I would say.

For the canonical ensemble, the situation is different. Here, we have T and N as system parameters. From the partition function Z(T,N) we get the entropy S(T,N) as a function of N. This means, we can change N without changing other system parameters, which should lead to extensive behaviour.
 
  • #17
Hello Kith, thank you for your comments.

Indeed I started with S(T,µ), but as I was interested in the dependence on N, I rewrote it as S(T,N): I rewrote any dependence on µ as a dependence on N.
But when interpreting S(T,N) I'm assuming that I can change N while keeping T constant. In other words, I'm assuming that to, say, double N, I only need to let µ change. Maybe you're saying that this is impossible?
 
  • #18
Ken G: I must say I revoke my "proof". I still stand by the derivation step itself, but the N! is actually already wrong at the level of \Omega, hence Z inherits its mistakes. Maverick is correct: dividing by N! overshoots, in exactly those cases you described: when all the particles are in the same state, which is counted only once even for distinguishable particles. My apologies for answering too soon about your objection. Thanks for making me realize this!

Do you think this might be the reason for my odd result?

EDIT: It might very well: I'm dividing by N! and then taking the log, which according to Stirling's approximation gives -N ln N, exactly the unwelcome term that I'm complaining about. (This also suggests I'd get the same weird answer with a regular canonical ensemble, also divided by N!)
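The Stirling connection can be checked directly (a sketch; the N values are arbitrary): \ln(1/N!) = -\ln N! \approx -N \ln N + N, which is exactly the combination -k_B N \ln N + k_B N appearing in the formula above.

Code:
import math

for N in (10, 100, 1000):
    exact = -math.lgamma(N + 1)          # -ln N!
    stirling = -N * math.log(N) + N      # leading terms of Stirling's approximation
    print(f"N = {N:5d}   -ln N! = {exact:12.2f}   -N ln N + N = {stirling:12.2f}")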
 
  • #19
mr. vodka said:
But when interpreting S(T,N) I'm assuming that I can change N while keeping T constant. In other words I'm assuming that to, say, double N, I only need to let µ change. Maybe you say that this is impossible?
No, I suggested that the µ change could be the reason for the apparent non-extensiveness of S. But I only just realized, that µ also changes in the canonical case, so there should probably be no difference between both cases due to this.
 
  • #20
Hey kith, thanks for your input. In any case, I've eliminated µ completely in the expression for S, so that shouldn't be the problem.

I don't know if you read the post just above yours, but I think the problem might be resolved :) It seems the reason is that the N! that one usually divides by is not exact but rather an approximation, and a horribly bad approximation in my system.
 
  • #21
mr. vodka said:
Hey kith, thanks for your input. In any case, I've eliminated µ completely in the expression for S, so that shouldn't be the problem.

I don't know if you read the post just above yours, but I think the problem might be resolved :) It seems the reason is that the N! that one usually divides by is not exact but rather an approximation, and a horribly bad approximation in my system.

Well, if you remove mu you're moving to the canonical ensemble, which makes this the two-state paramagnet whose solution is in any stat mech book; the only catch is that they're now indistinguishable particles.


P.S. Nevermind my comment about tanh, I messed up
 
  • #22
maverick_starstrider said:
Well, if you remove mu you're moving to the canonical ensemble, which makes this the two-state paramagnet whose solution is in any stat mech book; the only catch is that they're now indistinguishable particles.

Well, not exactly the canonical ensemble, but I think I get your point: they should give the same predictions for large N.

So this is all cleared up then :) Feels good. Thank you all for your time.
 
  • #23
http://ocw.mit.edu/courses/physics/8-333-statistical-mechanics-i-statistical-mechanics-of-particles-fall-2007/lecture-notes/lec13.pdf comments (after Eq IV.49) that the N! kludge is not needed in the two-state system because the particles are distinguished by being fixed at different points on the lattice.
 
  • #24
atyy said:
http://ocw.mit.edu/courses/physics/8-333-statistical-mechanics-i-statistical-mechanics-of-particles-fall-2007/lecture-notes/lec13.pdf comments (after Eq IV.49) that the N! kludge is not needed in the two-state system because the particles are distinguished by being fixed at different points on the lattice.

Well ya, that's the classic two-state paramagnet but he said they're indistinguishable. All we know is that each individual indistinguishable particle has two possible energy states.
 
  • #25
maverick_starstrider said:
Well ya, that's the classic two-state paramagnet but he said they're indistinguishable. All we know is that each individual indistinguishable particle has two possible energy states.

Can such a system exist? If it does, what is the correct counting (I guess it should come from quantum stat mech)?
 
  • #26
atyy said:
http://ocw.mit.edu/courses/physics/8-333-statistical-mechanics-i-statistical-mechanics-of-particles-fall-2007/lecture-notes/lec13.pdf comments (after Eq IV.49) that the N! kludge is not needed in the two-state system because the particles are distinguished by being fixed at different points on the lattice.

Another solution, not requiring quantum mechanics but providing the same result, was essentially put forth by Jaynes:

http://128.252.91.101/etj/articles/gibbs.paradox.pdf

in which the information associated with knowing whether or not the particles are indistinguishable corresponds to a specific amount of entropy.
 
  • #27
atyy said:
Can such a system exist? If it does, what is the correct counting (I guess it should come from quantum stat mech)?

Well, based on his math it would be a system of indistinguishable, non-interacting bosons with two energy states. I'm a theorist, so I can't point to a specific substance and say "this is modeled by this Hamiltonian", but a priori I can't think of a good reason that forbids it.
 
  • #28
One way to count the states is to note that if the particles are indistinguishable and we have N of them, then we are going to fill N states, and there is going to be n in the lower energy state and m in the upper energy state, where n+m=N and n runs from 0 to N. So further subdivide the sum over N with a sum over n, and the partition function contribution in each term is e^{nE/kT} \cdot e^{(n-N)E/kT}, which = e^{(2n-N)E/kT}. There's no need to divide by anything, because we are only counting once each "n" configuration, for each N. For given N, the sum over n from 0 to N of e^{(2n-N)E/kT} is easy enough to calculate, then sum that over all N.
 
  • #29
Ken G said:
One way to count the states is to note that if the particles are indistinguishable and we have N of them, then we are going to fill N states, and there is going to be n in the lower energy state and m in the upper energy state, where n+m=N and n runs from 0 to N. <snip>

Seems reasonable, does it work?
 
  • #30
mr. vodka said:
Hello,

I was investigating a system with N indistinguishable particles, each of which can have an energy \pm \epsilon, and using the grand canonical ensemble, i.e. \Xi = \sum_{N=0}^{\infty} e^{\beta \mu N} Z_N...

mr. vodka said:
@ Dickfore: Sure!

So the grand canonical partition function for the system described in the original post is \Xi = \sum_{N=0}^{+\infty} \frac{ e^{\beta \mu N} Z_N }{N!} = \sum_{N=0}^{+\infty} \frac{e^{\beta \mu N} Z_1^N}{N!} = e^{e^{\beta \mu} Z_1} where Z_1 = e^{\beta \varepsilon} + e^{- \beta \varepsilon} = 2 \cosh{\beta \varepsilon} is the canonical partition function for a one particle system. Define for future ease y=e^{\beta \mu} Z_1, then \log \Xi = y...

Uhm, why do you divide by N! in your second post and you had not mentioned that in the OP?
 
  • #31
Dickfore said:
Uhm, why do you divide by N! in your second post and you had not mentioned that in the OP?

We covered this, it's an approximation for indistinguishable particles since you double count states otherwise.
 
  • #32
maverick_starstrider said:
We covered this, it's an approximation for indistinguishable particles since you double count states otherwise.

I was trying to follow a pedagogical approach so that the OP will see their mistake. :wink:
 
  • #33
mr. vodka said:
I don't know if you read the post just above yours, but I think the problem might be resolved :) It seems the reason is that the N! that one usually divides by is not exact but rather an approximation, and a horribly bad approximation in my system.

I guess the basic issue is to count correctly as Ken G pointed out. However, I did find an example calculation where counting could involve exactly an N! (explicitly) as well as not (ie. it's hidden in the correct counting).

In http://ocw.mit.edu/courses/physics/8-333-statistical-mechanics-i-statistical-mechanics-of-particles-fall-2007/lecture-notes/lec23.pdf the first method of counting introduces an N! between Eq VII.11 & VII.12, while there isn't such a factor in Eq VII.24 & VII.25.

Again in http://farside.ph.utexas.edu/teaching/sm1/lectures/node80.html there is no N! explicitly introduced to count, yet http://farside.ph.utexas.edu/teaching/sm1/lectures/node82.html shows that the N! is in fact not an approximation, but exact. So it's not so much that the N! is an approximation, but it's already secretly included in some methods of counting.
 
  • #34
I did not read the whole thread and do not know for sure whether this argument has been brought up: usually the N \ln N term and a term in \ln Z which goes like N \ln V combine to give N \ln (V/N). In the thermodynamic limit V/N, and hence the density, is held constant, so the combined term is extensive.
 
  • #35
Yes, one should use the Sackur-Tetrode formula, but make sure that the argument of a logarithm is a dimensionless quantity, i.e., for a monatomic ideal gas with g=2s+1 spin degrees of freedom, you have

S=k N \left \{\frac{5}{2}+ \ln \left [\frac{g V}{\hbar^3 N} \left (\frac{m U}{3 \pi N} \right)^{3/2} \right ] \right \}.

The important point is that you can get this formula for an extensive entropy (solving Gibbs's paradox) only as the semi-classical limit of the quantum distributions for indistinguishable particles. In this lowest-order approximation there's no difference between bosons and fermions. That's why \hbar enters this formula: it cannot be eliminated here precisely because the argument of the logarithm must be dimensionless.
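A quick numerical sanity check of the extensivity (a sketch; g = m = k_B = \hbar = 1 and the values of N, V, U are arbitrary illustrative choices): doubling N, V and U doubles S.

Code:
import math

def sackur_tetrode(N, V, U, g=1.0, m=1.0, k=1.0, hbar=1.0):
    """S = k N {5/2 + ln[ g V / (hbar^3 N) * (m U / (3 pi N))^(3/2) ]}"""
    return k * N * (2.5 + math.log(g * V / (hbar**3 * N)
                                   * (m * U / (3.0 * math.pi * N))**1.5))

S1 = sackur_tetrode(N=10.0, V=1e3, U=1e3)
S2 = sackur_tetrode(N=20.0, V=2e3, U=2e3)
print(S1, S2, S2 / S1)   # the ratio is 2: the entropy is extensive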
 
  • #36
I think the OP should have used the canonical ensemble if the number of particles is fixed. Then, because we are talking about indistinguishable particles, the multiplicity factor is 1, and the partition function reads:
Z = \sum_{n = 0}^{N}{\exp \left[ -\beta \, \left( n \, \epsilon + (N - n)(-\epsilon) \right) \right]}
Z = e^{\beta \, N \, \epsilon} \, \sum_{n = 0}^{N}{e^{-2 \, \beta \, \epsilon \, n}}
This is the sum of a finite geometric sequence. The result is:
Z = e^{\beta \, \epsilon \, N} \, \frac{1 - e^{-2 \, \beta \, \epsilon \, (N + 1)}}{1 - e^{-2 \, \beta \, \epsilon}}
Z = e^{\beta \, \epsilon \, N} \, \frac{e^{-\beta \, \epsilon (N + 1)} \, \left[ e^{\beta \, \epsilon \, (N + 1)} - e^{-\beta \, \epsilon (N + 1)} \right]}{e^{-\beta \, \epsilon} \, \left[ e^{\beta \, \epsilon} - e^{-\beta \, \epsilon} \right]}
Z = \frac{\sinh \left[\beta \, \epsilon \, (N + 1) \right] }{ \sinh (\beta \, \epsilon) }
Try finding the entropy associated with this partition function!
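A quick cross-check of the closed form against the direct sum (a sketch; the values of β, ε and N are arbitrary test values):

Code:
import numpy as np

def Z_direct(beta, eps, N):
    """Sum over n = number of particles in the +eps level (0..N)."""
    n = np.arange(N + 1)
    return np.sum(np.exp(-beta * (n * eps + (N - n) * (-eps))))

def Z_closed(beta, eps, N):
    return np.sinh(beta * eps * (N + 1)) / np.sinh(beta * eps)

beta, eps, N = 0.3, 1.0, 20
print(Z_direct(beta, eps, N), Z_closed(beta, eps, N))   # the two agree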
 
  • #37
I agree-- see post #28! But you advanced the calculation another step, I was lazy!
 
  • #38
Ken G said:
I agree-- see post #28! But you advanced the calculation another step, I was lazy!
Yes, you had proposed the same idea.

Anyway, from the partition function:
Dickfore said:
Z = \frac{\sinh \left[\beta \, \epsilon \, (N + 1) \right] }{ \sinh (\beta \, \epsilon) }
Try finding the entropy associated with this partition function!
one can evaluate the free energy:
F = -\frac{1}{\beta} \, \ln{Z}
and the entropy (\partial/\partial T = d \beta/d T \, \partial/\partial \beta = -\beta^2 \, \partial/\partial \beta) is:
S = -\frac{\partial F}{\partial T} = \beta^2 \, \frac{\partial F}{\partial \beta}
Doing the derivatives, I get:
S = -\beta \, \epsilon \, \left[ (N + 1) \, \coth \left( \beta \, \epsilon \, (N + 1) \right) - \coth \left( \beta \, \epsilon \right) \right] + \ln \frac{\sinh \left( \beta \epsilon (N + 1) \right)}{\sinh \left( \beta \epsilon \right)}
This has a finite limit when N \rightarrow \infty:
\lim_{N \rightarrow \infty} S = \beta \, \epsilon \, \coth \left( \beta \, \epsilon \right) - \ln \left[ 2 \, \sinh \left( \beta \, \epsilon \right) \right]
Taking T = \frac{1}{\beta}, and plotting the above as a function of x \equiv T/\epsilon, we have the following plot:
http://www.wolframalpha.com/input/?i=Plot[Coth[1/x]/x+-+Log[2+Sinh[1/x]],{x,0,10}]

As can be seen, the entropy is non-negative.
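The same conclusion can be checked numerically (a sketch with k_B = 1; the value of βε and the list of N are illustrative, and ln sinh is evaluated in a stable way to avoid overflow at large N):

Code:
import numpy as np

def ln_sinh(a):
    """Numerically stable ln(sinh(a)) for a > 0."""
    return a - np.log(2.0) + np.log1p(-np.exp(-2.0 * a))

def entropy(x, N):
    """S (k_B = 1) from Z = sinh(x (N+1)) / sinh(x), with x = beta*epsilon."""
    a = x * (N + 1)
    lnZ = ln_sinh(a) - ln_sinh(x)
    return lnZ - a / np.tanh(a) + x / np.tanh(x)   # S = ln Z - beta d(ln Z)/d(beta)

x = 1.0                                            # beta * epsilon
for N in (1, 10, 100, 1000):
    print(N, entropy(x, N))                        # non-negative for all N
print("N -> infinity limit:", x / np.tanh(x) - np.log(2.0 * np.sinh(x)))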
 
  • #39
Thanks for carrying the calculation to its conclusion, but isn't it odd that the N dependence in S is not extensive? Or did you already calculate S/N here, the limits are tricky.
 
  • #40
Ken G said:
Thanks for carrying the calculation to its conclusion, but isn't it odd that the N dependence in S is not extensive? Or did you already calculate S/N here, the limits are tricky.

No, it is S, not S/N. The last expression is the limit of S when N \rightarrow \infty.

The "non-additivity" of the partition function comes from the fact that the partition function is not of the form:
<br /> Z \neq z^{N}_1(\beta, \epsilon)<br />
 
  • #41
Good point, the entropy here shouldn't be expected to be extensive. That answers mr vodka's question about that! We shouldn't have expected the entropy to be extensive, because indistinguishable particles don't get access to more states when you add more particles if they all go into the ground state. That's what must be happening here, a Bose-Einstein condensation.
 
  • #42
The chemical potential of the system is:

\mu = \left( \frac{\partial F}{\partial N} \right)_{\beta} = -\frac{1}{\beta} \, \beta \, \epsilon \, \coth \left[ \beta \, \epsilon \, (N + 1) \right] = -\epsilon \, \coth \left[ \beta \, \epsilon \, (N + 1) \right]

As long as \mu < -\epsilon (lower than the lowest possible energy level), there is no macroscopic population of the ground state (Bose-Einstein condensation).

This corresponds to:
\coth \left[ \beta \, \epsilon \, (N + 1) \right] > 1
which is always true!

So, I don't think this system can undergo Bose-Einstein condensation.
 
  • #43
Yes, I was replacing the level energies with 0 and E rather than -epsilon and +epsilon. It's problematic to have a negative energy state with an unfixed N value-- clearly N will go to infinity because the reservoir will gain energy by making particles, and eventually the reservoir will gain infinite energy, violating the capacity of the reservoir itself. I don't think we really want negative energy states in a grand canonical distribution.
 
  • #44
The choice of an energy reference level is irrelevant in statistical mechanics. The criterion for BEC is:
\mu = \epsilon_{\min}
instead of:
\mu = 0
 
  • #45
The reference level can't be irrelevant. If N is not fixed, and we have thermal contact with a reservoir, then there must be a very important physical difference between a positive and a negative energy level. The presence of a negative energy level will clearly create an energy divergence, and N will increase without bound, with no possibility of any equilibrium. Equilibrium occurs when the cost to the reservoir, in the currency of accessible configurations, of losing energy to the system is balanced by the number of configurations accessible to the system for having gained that energy. But if there is a negative energy state, then there is no such tradeoff, and it will populate infinitely in a grand canonical distribution.

I realize it is often said that energy is not specified to within a fixed overall constant additive term, but what is meant here by a state of -epsilon energy is that creating a particle and putting it in that state releases energy epsilon to the reservoir. That is already a change in energy, so is not ambiguous to within an additive constant term. If there is an energy cost for creating the particle, that has to be included in the meaning of -epsilon-- the chemical potential is not hard-wired to know how much energy it takes to make a particle, the chemical potential has to be told that (in our description of the energies of the states), and then it responds by telling us how much energy is invested on average per particle. If this has the wrong sign, and we have a grand canonical distribution, it has to blow up.
 
  • #46
Ken G said:
The reference level can't be irrelevant. If N is not fixed, and we have thermal contact with a reservoir, then there must be a very important physical difference between a positive and a negative energy level. <snip>
If N is not fixed, then we are in contact with a "particle reservoir" at some chemical potential μ (similar to the case where, if E is not fixed, we are in contact with a thermal reservoir at some temperature T). However, μ itself, being an energy, is determined "up to a reference level".

Without going into details, perhaps an inquiry into thermodynamics can help. The fundamental equation of thermodynamics for a system with a variable number of particles is given by:
dE = T \, dS - \Lambda \, d\lambda + \mu \, dN
where we used a generalized coordinate \lambda and a corresponding generalized force \Lambda.

Next, suppose:
\mu = \mu' + \epsilon_0
where \epsilon_0 is some arbitrary reference level. It is straightforward to show that the simultaneous redefinition:
E = E' + \epsilon_0 \, N
satisfies the fundamental equation of thermodynamics with E' and \mu'.

Many textbooks (especially when describing BE condensation and photon systems) use the condition:
\mu = \left( \frac{\partial F}{\partial N} \right)_{\lambda, T} = 0
as one of the necessary conditions for a minimum of the free energy when the number of particles is not fixed. However, a careful analysis shows that, when in contact with a "particle reservoir" at chemical potential \mu_0, this condition is:
\mu = \mu_0
instead.

EDIT:
As for your example, if the chemical potential of the particle reservoir is \mu < -\epsilon, then it actually costs energy to populate the lower-lying level.
 
  • #47
Something here just doesn't make physical sense. The energy epsilon is not an arbitrary energy that could have a fixed number added to it with no physical consequences, because epsilon is not an energy level, it has to be an energy difference. It is the difference between the energy of having a particle in that level, and having no particle at all (this would not be necessary if N were fixed, then only the energy difference between the two levels would be physically significant). So we cannot say that adding something to epsilon will just add something to mu and have no other physical consequence at all, because the conservation of energy says that any time we end up with a particle in the level with energy epsilon, the reservoir at T must provide that energy, which comes at a cost in number of configurations. Or, if we end up with a particle at energy -epsilon, the reservoir gains that energy. That's what epsilon has to mean here, it can't mean anything else for the problem to be well posed. So it cannot have an unspecified constant term added to it arbitrarily, or else the whole problem is underdetermined. For example, we didn't say if the particle has mass or not, so we must assume that epsilon accounts for any mass. Or, we didn't say if the particle has to be extracted from some deep potential well, so again that has to be in epsilon. We aren't free to let epsilon be whatever we want, because if it doesn't mean the energy the reservoir must part with to end up with a particle in that state, then we do not have enough information to solve for the thermodynamics of this system.
 
  • #48
But Ken G, isn't what you're saying based on the assumption that the ground energy state of the reservoir is zero? Why can it not also be negative, as the system's?

This ties into Dickfore's explanation: mu quantifies this; mu is the measure of how much the energy changes if a particle is added. According to your assumption, namely that the reservoir has ground level 0 or higher, the mu of the reservoir is 0 or higher. In that case you indeed have a problem and the bosons come flooding in. But mu is taken to be negative in such a case.
 
  • #49
mr. vodka said:
But Ken G, isn't what you're saying based on the assumption that the ground energy state of the reservoir is zero? Why can it not also be negative, as the system's?
I'm saying epsilon has to be relative to that ground state energy, or else we cannot solve the problem, we won't have enough information. How can we find the entropy of a system if we don't even know the expectation value of N? I guess the point is, we are not solving for that, we are only solving for the entropy as a function of that expectation value, and finding that it reaches a constant value for large N, independent of the expectation of N. But that still might not be meaningful if there is no equilibrium for the system-- as would be true if there really is a negative energy for every particle that appears from the particle reservoir.
 
  • #50
Ken G said:
I'm saying epsilon has to be relative to that ground state energy, or else we cannot solve the problem, we won't have enough information. How can we find the entropy of a system if we don't even know the expectation value of N?

I'm not sure what you're saying. Why are you claiming "or else we cannot solve the problem"? mu determines the expectation value of N. Do you have any backing for the statement "epsilon has to be relative to that ground state energy, or else we cannot solve the problem, we won't have enough information"? It seems wrong to me, but I don't know how to prove that before I know why you think it's true.
 