Why is there a minus sign in the fermionic gas entropy equation?

MathematicalPhysicist
The problem is to show that for a fermionic gas the entropy is given by:
\sigma=-\int d\epsilon\, D(\epsilon)\left[f(\epsilon)\log f(\epsilon)+(1-f(\epsilon))\log(1-f(\epsilon))\right]
where D(\epsilon) is the derivative operator with respect to \epsilon, and f(\epsilon) is the Fermi-Dirac distribution function.
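Explicitly, the distribution I have in mind is the usual one (with \tau = k_B T):

f(\epsilon)=\frac{1}{e^{(\epsilon-\mu)/\tau}+1}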


Now, what I think is that I only need to show that the entropy equals minus the integrand, but I'm not sure where the minus sign comes from.

I mean, the entropy is defined as the logarithm of the number of possible states, and the function that counts this number is (f^f)*((1-f)^(1-f)),
because f counts the number of states below the chemical potential and 1-f the number above it, and we raise each to the power of itself because their sum equals the number of states of the system.
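Written out per orbital, what I mean is

\log\left[f^{f}(1-f)^{1-f}\right]=f\log f+(1-f)\log(1-f),

which is the integrand up to the overall sign.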

But I don't see where the minus sign comes from. Can you help me with this?

thanks in advance.
 
D(\varepsilon) is not an energy derivative; it is the single-particle density of states as a function of energy.

To derive that expression, you need to calculate the entropy in the grand canonical ensemble, using the relation

\sigma = -\left(\frac{\partial \mathcal{F}}{\partial \tau}\right)_{V,\mu}

where \tau = k_BT and \mathcal{F} is the grand free energy. If this is the way you did the problem, I would guess you just forgot the minus sign in this relation.
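In case it is the source of confusion: the grand free energy is related to the grand partition function by \mathcal{F} = -\tau \ln Z_G, so the two minus signs combine as

\sigma = -\frac{\partial}{\partial \tau}\left(-\tau \ln Z_G\right)_{V,\mu} = \frac{\partial}{\partial \tau}\left(\tau \ln Z_G\right)_{V,\mu}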
 
I don't understand: what is D(\epsilon)?
 
So, because -\tau \log(Z_G)=\mathcal{F},
I only need to find Z_G, which equals:
Z_G=1+\exp(\beta(\mu -\epsilon))
But still, what is D(\epsilon)?
Do you mean D is the function that counts the number of possible states? If so, then by definition the entropy equals the log of it.
 
The density of states is precisely what it sounds like: D(\varepsilon) is the number of single-particle states per unit energy. If n(\varepsilon) denotes the number of states with energy below \varepsilon, then d n(\varepsilon)/d\varepsilon = D(\varepsilon). Weighting each state by its occupation f(\varepsilon) gives
\int_{-\infty}^{\infty}d\varepsilon~D(\varepsilon)f(\varepsilon) = N

where N is the number of particles in the system. You need this density when approximating sums by integrals. If you're summing over a discrete index, say n for example, then

\sum_{n} \rightarrow \int dn
when approximating the sum by an integral. In this case, D(n) = 1. However, if you want to write that integral over energy instead, then because the states typically aren't equally spaced as a function of energy (and even if they were, the spacing wouldn't be "1"; you'd need some dimensionful constant), you need the density of states:

\sum_{n} \rightarrow \int d\varepsilon~D(\varepsilon)
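For example (not needed for this particular problem, just to make it concrete), for a spin-1/2 ideal gas in a three-dimensional box of volume V the standard result is

D(\varepsilon)=\frac{V}{2\pi^{2}}\left(\frac{2m}{\hbar^{2}}\right)^{3/2}\sqrt{\varepsilon}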

Now, as for determining the entropy, you're almost right about the grand partition function. However, what you've written is the grand partition function of a single mode. The full grand partition function is the product over all single-particle modes, each of which has a different energy \varepsilon_n:

Z_G = \prod_{n}Z(\varepsilon_n) = \prod_n(1 + e^{-\beta(\varepsilon_n - \mu)})
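Taking the log turns the product into a sum, so

\ln Z_G = \sum_n \ln\left(1 + e^{-\beta(\varepsilon_n - \mu)}\right), \qquad \mathcal{F} = -\tau \ln Z_G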

Hence,

\sigma = -\left(\frac{\partial \mathcal{F}}{\partial \tau}\right)_{V,\mu} = \frac{\partial}{\partial \tau} \left(\tau \ln Z_G\right) = \sum_n \frac{\partial}{\partial \tau}\left[\tau \ln \left(1 + e^{-\beta(\varepsilon_n - \mu)}\right)\right]

In converting that sum to an integral over energy, you introduce the density of states. Some playing around with the summand/integrand of the entropy expression will yield the expression you have in your first post.
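To spell out that playing around for a single mode (writing x = (\varepsilon_n - \mu)/\tau as shorthand, so that f = 1/(e^{x}+1), e^{-x} = f/(1-f), and x = \ln\frac{1-f}{f}):

\frac{\partial}{\partial \tau}\left[\tau \ln\left(1 + e^{-x}\right)\right] = \ln\left(1 + e^{-x}\right) + x f = -\ln(1-f) + f\ln\frac{1-f}{f} = -\left[f\ln f + (1-f)\ln(1-f)\right]

Summing over n and converting the sum to an energy integral with D(\varepsilon) then reproduces the expression in the first post, overall minus sign included.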
 
thanks.
 
Wow, thanks for that, Mute! I'd always wondered why you need to multiply the integrand by the density of states. Fantastic explanation!
 