Deriving the thermodynamic beta from Lagrange multipliers

SUMMARY

This discussion derives the thermodynamic beta (β) using Lagrange multipliers in statistical mechanics. The derivation starts from a system of N identical particles distributed across energy levels, subject to constraints on the total particle number and the total energy. Maximizing the logarithm of the number of microstates (Ω) with the help of Stirling's approximation yields the Boltzmann distribution, and comparison with the first law of thermodynamics establishes the relationship between β and temperature (T): β = 1/(kB T), where kB is the Boltzmann constant (or β = -1/(kB T) in the sign convention of the opening post). The discussion also clarifies the interpretation of β and the associated thermodynamic quantities.

PREREQUISITES
  • Understanding of statistical mechanics concepts, particularly microstates and macrostates.
  • Familiarity with Lagrange multipliers and their application in optimization problems.
  • Knowledge of thermodynamic quantities such as temperature (T) and energy (E).
  • Proficiency in mathematical techniques including Stirling's approximation and partial derivatives.
NEXT STEPS
  • Study the derivation of the partition function Z in statistical mechanics.
  • Explore the relationship between entropy (S) and thermodynamic potentials.
  • Learn about the implications of the first law of thermodynamics in statistical mechanics.
  • Investigate the role of Lagrange multipliers in other areas of physics and optimization.
USEFUL FOR

This discussion is useful for physicists specializing in statistical mechanics and thermodynamics, and for anyone interested in the mathematical foundations of these fields.

McLaren Rulez
I'm nearly at the end of this derivation but totally stuck, so I'd appreciate a nudge in the right direction.

Consider a set of N identical but distinguishable particles in a system with total energy E. These particles are to be placed in energy levels ##E_i## for ##i = 1, 2, \ldots, r##. Assume that we have ##n_i## particles in energy level ##E_i##. The two constraints we impose are ##\sum_{i}^{r}n_i = N## and ##\sum_{i}^{r}E_i n_i = E##.

The number of microstates in a given macrostate is given by
\begin{equation}
\Omega = \frac{N!}{\prod_{i}^r n_{i}!}
\end{equation}
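For small systems this multinomial count can be checked directly by enumerating all assignments of labeled particles to levels. A toy sanity check (the occupation numbers below are made up):

```python
from itertools import product
from math import factorial

def multiplicity(occ):
    """Omega = N! / (n_1! n_2! ... n_r!) for occupation numbers occ."""
    N = sum(occ)
    denom = 1
    for n in occ:
        denom *= factorial(n)
    return factorial(N) // denom

def brute_force(occ):
    """Count assignments of N labeled particles to r levels with the given occupations."""
    N, r = sum(occ), len(occ)
    count = 0
    for assignment in product(range(r), repeat=N):
        if all(assignment.count(level) == n for level, n in enumerate(occ)):
            count += 1
    return count

occ = (2, 1, 1)  # 4 particles: two in level 1, one each in levels 2 and 3
print(multiplicity(occ), brute_force(occ))  # both give 12
```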

We want to maximize this. For ease of notation we work with ##\ln\Omega##, and we use Stirling's approximation (##\ln x! \approx x\ln x - x##) to obtain
\begin{equation}
\ln\Omega = N\ln N - N - \sum_{i}^{r}\left(n_i\ln n_i - n_i\right)
\end{equation}
Since ##\sum_{i}^{r}n_i = N##, the last term cancels the ##-N##, leaving ##\ln\Omega = N\ln N - \sum_{i}^{r}n_i\ln n_i##.
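Stirling's approximation improves rapidly with x, which is what justifies using it for macroscopic particle numbers. A quick numerical comparison, using `lgamma(x + 1)` for the exact ##\ln x!##:

```python
from math import log, lgamma

# lgamma(x + 1) equals ln(x!) exactly; Stirling's form is x ln x - x
for x in (10, 100, 1000, 10000):
    exact = lgamma(x + 1)
    approx = x * log(x) - x
    print(x, exact, approx, abs(exact - approx) / exact)  # relative error shrinks with x
```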

Maximizing this function subject to the constraints ##\sum_{i}^{r}n_i = N## and ##\sum_{i}^{r}E_i n_i = E## is a classic Lagrange multiplier problem. We write the undetermined multipliers as ##\alpha## and ##\beta## for the two constraints. Here ##\partial\ln\Omega/\partial n_i = -\ln n_i##, and absorbing the overall sign into the (as yet undetermined) multipliers gives
\begin{align}
\frac{\partial\ln\Omega}{\partial n_i} &= \alpha\frac{\partial n_i}{\partial n_i} + \beta\frac{\partial (E_i n_i)}{\partial n_i} \nonumber \\
\ln n_i &= \alpha + \beta E_i \nonumber \\
\therefore n_i &= e^{\alpha}e^{\beta E_i}
\end{align}

Now, we use the first constraint equation to determine ##\alpha##. We get
\begin{align}
\sum_i^r n_i &= N \nonumber \\
\sum_i^r e^{\alpha}e^{\beta E_i} &= N \nonumber \\
e^\alpha &= \frac{N}{\sum_i^re^{\beta E_i}} \nonumber \\
e^\alpha &= \frac{N}{Z}
\end{align}
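As a quick numeric check (made-up levels; note that ##\beta## is negative in the sign convention used above), ##e^\alpha = N/Z## indeed normalizes the occupations to N:

```python
import math

E_levels = [0.0, 1.0, 2.0]   # hypothetical energy levels
beta = -0.5                  # negative, in this thread's sign convention
N = 1000.0

Z = sum(math.exp(beta * Ei) for Ei in E_levels)          # partition function
n = [(N / Z) * math.exp(beta * Ei) for Ei in E_levels]   # n_i = e^alpha e^(beta E_i)

print(sum(n))  # sums back to N
```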

We have introduced the partition function ##Z=\sum_i^re^{\beta E_i}## in the last line. Next, the second constraint equation determines ##\beta##. Substituting ##n_i = (N/Z)e^{\beta E_i}##,
\begin{align}
\sum_i^r E_i n_i &= E \nonumber \\
\frac{N\sum_{i}^{r} E_i e^{\beta E_i}}{\sum_i^r e^{\beta E_i}} &= E
\end{align}
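In general this energy constraint fixes ##\beta## only implicitly, and one solves it numerically. A sketch with made-up levels, using bisection (the mean energy per particle is monotonically increasing in ##\beta##, since its derivative is the variance of E):

```python
import math

E_levels = [0.0, 1.0, 2.0]    # hypothetical energy levels
N, E_target = 1000.0, 500.0   # target mean energy per particle is 0.5

def mean_energy(beta):
    """Mean energy per particle: sum(E_i e^(beta E_i)) / sum(e^(beta E_i))."""
    w = [math.exp(beta * Ei) for Ei in E_levels]
    return sum(Ei * wi for Ei, wi in zip(E_levels, w)) / sum(w)

# mean_energy is monotonically increasing in beta, so bisection works
lo, hi = -50.0, 50.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if mean_energy(mid) < E_target / N:
        lo = mid
    else:
        hi = mid
beta = 0.5 * (lo + hi)
print(beta, N * mean_energy(beta))  # beta comes out negative in this convention
```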

I'm assuming I should somehow connect ##E## with ##T##, so let's say ##E=Nk_B T##. Then we have
\begin{align}
\frac{N}{Z}\frac{\partial Z}{\partial\beta} &= E \\
\frac{\partial\ln Z}{\partial\beta} &= k_B T
\end{align}

How do I get to ##\beta = -\frac{1}{k_B T}## here? Notice that this derivation requires an extra minus sign compared to the usual definition of ##\beta## and this should come out naturally too, shouldn't it?
 
Of course, it's easier to flip the sign of ##\beta## from the very beginning, because then you have ##\beta>0## for the usual case where the Hamiltonian of the system is bounded from below but can become infinite (e.g., for an ideal gas you have ##H=\vec{p}^2/(2m) \geq 0##). Then you have
$$Z(\beta,\alpha)=\sum_i \exp(-\beta E_i + \alpha n_i).$$
Now to get the interpretation of ##\beta## and ##\alpha## in terms of the thermodynamical quantities note that
$$U=\langle E \rangle=-\partial_{\beta} \ln Z, \quad \mathcal{N}=\langle N \rangle=\partial_{\alpha} \ln Z.$$
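These identities are easy to verify numerically with finite differences on a toy system (the state energies and per-state particle numbers below are made up):

```python
import math

E_states = [0.0, 1.0, 2.0]   # hypothetical state energies
N_states = [1, 1, 2]         # hypothetical particle numbers per state

def lnZ(beta, alpha):
    return math.log(sum(math.exp(-beta * E + alpha * n)
                        for E, n in zip(E_states, N_states)))

def averages(beta, alpha):
    Z = math.exp(lnZ(beta, alpha))
    P = [math.exp(-beta * E + alpha * n) / Z
         for E, n in zip(E_states, N_states)]
    U = sum(p * E for p, E in zip(P, E_states))      # <E>
    Nbar = sum(p * n for p, n in zip(P, N_states))   # <N>
    return U, Nbar

beta, alpha, h = 0.8, 0.3, 1e-6
U, Nbar = averages(beta, alpha)
dlnZ_dbeta = (lnZ(beta + h, alpha) - lnZ(beta - h, alpha)) / (2 * h)
dlnZ_dalpha = (lnZ(beta, alpha + h) - lnZ(beta, alpha - h)) / (2 * h)
print(U, -dlnZ_dbeta)     # U = -d(lnZ)/d(beta)
print(Nbar, dlnZ_dalpha)  # <N> = d(lnZ)/d(alpha)
```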
Further the probability distribution is
$$P_i=\frac{1}{Z} \exp(-\beta E_i+\alpha n_i).$$
This implies that the entropy is given by
$$S=-k_B \sum_i P_i \ln P_i=k_B (\ln Z+\beta U-\alpha \mathcal{N}).$$
With the above relations you get
$$\frac{1}{k_B} \mathrm{d} S = \beta \mathrm{d} U - \alpha \mathrm{d} \mathcal{N}.$$
Then comparing this to the 1st Law
$$\mathrm{d} U=T \mathrm{d} S +\mu \mathrm{d} \mathcal{N}$$
you'll get
$$\beta=\frac{1}{k_B T}, \quad \alpha=\frac{\mu}{k_B T}.$$
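A numerical sanity check of the identification ##\beta = 1/(k_B T)##: for fixed particle number (so the ##\alpha## term drops out), compute U and S from the canonical distribution and estimate ##T = \mathrm{d}U/\mathrm{d}S## by finite differences (made-up levels, with ##k_B = 1##):

```python
import math

kB = 1.0
E_levels = [0.0, 1.0, 2.0]   # hypothetical energy levels

def U_and_S(beta):
    """Mean energy and Gibbs entropy of the canonical distribution."""
    w = [math.exp(-beta * Ei) for Ei in E_levels]
    Z = sum(w)
    P = [wi / Z for wi in w]
    U = sum(p * Ei for p, Ei in zip(P, E_levels))
    S = -kB * sum(p * math.log(p) for p in P)
    return U, S

beta, h = 0.7, 1e-6
U1, S1 = U_and_S(beta - h)
U2, S2 = U_and_S(beta + h)
T = (U2 - U1) / (S2 - S1)    # T = dU/dS at fixed particle number
print(beta, 1.0 / (kB * T))  # the two agree
```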
 
