# Derivation of Boltzmann distribution, heatbaths

1. Aug 4, 2009

### haushofer

Hi, I have some questions about the derivation of the Boltzmann distribution, for instance as given in Mandl's "Statistical Physics".

Put a system (1) in a heat bath (2) with temperature T. In thermal equilibrium, system (1) will then also have temperature T. The energy of system (1) is not fixed, due to heat exchange, so we say that the energy of (1) lies in the interval $[E,E+\delta E]$. My first question is: how can we fix the temperature with the heat bath but introduce a small deviation in the energy? Shouldn't we also say something like "the temperature of (1) lies in the interval $[T, T+ \delta T]$"? Or are the effects of different order?

Now we label the microstates of (1) as $\{1,2,\ldots,r,\ldots\}$ with corresponding energies $\{E_1,E_2,\ldots,E_r,\ldots\}$, and assume that $E_1 \leq E_2 \leq \ldots \leq E_r \leq \ldots$. The interval $\delta E$ is smaller than the minimum spacing between these $E_r$'s. Now comes the thing that confuses me:

"The probability $p_r$ that system (1) will be in a definite (micro)state r with energy $E_r$ is proportional to the number of states of the heat bath compatible with this, given that the total energy has the constant value $E_0$. These heat-bath states must have an energy lying in the interval $[E_0-E_r,\ E_0-E_r + \delta E]$ (due to the fact that $\delta E$ is smaller than the minimum level spacing). There are $\Omega_2(E_0 - E_r)$ such states, so that

$$p_r = \mathrm{const.}\cdot\Omega_2(E_0 - E_r)$$

My question is: why not

$$p_r = \mathrm{const.}\cdot\Omega_1(E_r)\,\Omega_2(E_0 - E_r)$$
?

It has something to do with the fact that $E_r$ is the variable here, and that we have to choose one of the energies (of system (1) OR system (2)) as our independent variable (so fixing one energy fixes the other), but I can't see why my formula would be wrong. And the resulting distribution depends on it (you also get the entropy of system (1) exponentiated in the distribution). If we fix $E_0 - E_r$ we also fix $E_r$, which makes $\Omega_1(E_r)$ constant, but that shouldn't allow us to drop it from the derivation, right?

So why is my proposal wrong, and how can we fix T but allow for an uncertainty in E? :)
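As a numerical sanity check (my own toy model, not from Mandl): take a heat bath of $N$ identical harmonic oscillators sharing $E$ quanta, so that $\Omega_2(E) = \binom{E+N-1}{N-1}$. Computing $p_r \propto \Omega_2(E_0 - E_r)$ for a few system energies shows the successive ratios $p_{r+1}/p_r$ are nearly constant, i.e. the distribution falls off geometrically like a Boltzmann factor:

```python
from math import comb

# Toy heat bath: N identical harmonic oscillators sharing E quanta.
# The number of bath microstates is Omega_2(E) = C(E+N-1, N-1).
def omega_bath(E, N):
    return comb(E + N - 1, N - 1)

N = 100   # oscillators in the bath (chosen large compared to the system)
E0 = 200  # total number of quanta shared by system + bath

# p_r proportional to Omega_2(E0 - E_r), for system energies E_r = 0..5
weights = [omega_bath(E0 - Er, N) for Er in range(6)]
Z = sum(weights)
p = [w / Z for w in weights]

# Successive ratios p_{r+1}/p_r are nearly constant (~0.67 here),
# i.e. an exponential Boltzmann-like falloff e^{-beta * 1 quantum}.
ratios = [p[r + 1] / p[r] for r in range(5)]
print(ratios)
```

The ratios drift only slightly because the bath is finite; as $N \to \infty$ they become exactly constant, which is the Boltzmann distribution.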

Last edited: Aug 4, 2009
2. Aug 4, 2009

### Count Iblis

First: the reason you have an uncertainty $\delta E$ in the energy contained in the heat bath has nothing to do with energy exchange; it is simply that the energy cannot be exactly specified when you choose to describe a system statistically. The reason is that if there were no uncertainty in the energy of an isolated system, the exact energy eigenstate the system is in would be fixed, and then the entropy of the system would be zero. This is called the fine-grained entropy of the system.

Now, the whole point of statistical physics is that you know some macroscopic variables of the system to some accuracy, and then there are many possible microstates the system could be in. But, because of the above argument, that requires you to specify some uncertainty in the internal energy of the system. The entropy of the system described this way is called the coarse-grained entropy.

In the derivation of the Boltzmann distribution, what you do is assume that the heat capacity of the heat bath is so large that its temperature is not affected by energy exchange with the system it is in contact with. The $\delta E$ doesn't figure in here. Note that you write:

"The interval $\delta E$ is smaller than the minimum spacing between these $E_r$'s",

so it should be clear that $\delta E$ has nothing to do with energy exchange.

Now, in the derivation, what you do is let the system be in some quantum state with energy $E_r$, which means that the heat bath has energy $E_0 - E_r$ with an uncertainty $\delta E$. Since all accessible quantum states are equally likely, the probability of this situation is proportional to the total number of quantum states corresponding to it. This total number is the number of states the heat bath can be in, $\Omega_2(E_0 - E_r)$, times the number of states the system can be in; but the system is in a uniquely defined quantum state, so there is only one possible state it can be in. So the probability is proportional to $\Omega_2(E_0 - E_r)$, which leads to the Boltzmann distribution.
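For completeness, the last step spelled out (the standard textbook expansion, not quoted from the post above): expand $\ln\Omega_2$ around $E_0$, using $\partial S_2/\partial E = 1/T$ with $S_2 = k\ln\Omega_2$:

$$\ln\Omega_2(E_0 - E_r) \approx \ln\Omega_2(E_0) - E_r\left.\frac{\partial \ln\Omega_2}{\partial E}\right|_{E_0} = \ln\Omega_2(E_0) - \frac{E_r}{kT},$$

so that

$$p_r = \mathrm{const.}\cdot\Omega_2(E_0 - E_r) \propto e^{-E_r/kT}.$$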

3. Aug 5, 2009

### haushofer

But the entropy is defined by

$$S = k \ln{\Omega}$$

so I don't see exactly why the entropy would be zero if I knew the energy exactly; is this "fine-grained entropy" of yours different from the definition above? I thought that taking $\delta E$ smaller than those minimum spacings was an assumption which you justify later (the spacing becomes very small when the number of particles N becomes very large), but you're right that beyond that, the two are not very clearly related.

So you say that for a given $E_0 - E_r$, $\Omega_{1}(E_r) = 1$? I still don't see why I'm not allowed to take this factor into my analysis; like I said, the resulting distribution is different: you would get

$$p_r = \mathrm{const.}\cdot e^{S_1(E_r)/k}\,e^{S_2(E_0 - E_r)/k}$$

4. Aug 5, 2009

### Count Iblis

If there is no uncertainty specified for the energy, then (excluding cases where the system is exactly degenerate) there is only one state the system can be in. So $\Omega$ would equal 1, and thus the (fine-grained) entropy is $k \ln\Omega = 0$.

Now, in your formula involving $\Omega_1$, this $\Omega_1$ will be the fine-grained $\Omega$. This is because when the energy of the heat bath is given, you know exactly which state the system is in: $\delta E$ being less than the spacing of the system's energy levels means that the value of $E_r$ can be determined accurately enough to fix which state r the system is in. This means that $\Omega_1$ is always 1.

5. Aug 5, 2009

### haushofer

But the entropy concerns the uncertainty about which specific microstate, compatible with a definite energy $E_r$, the macrostate is realized in. There is no $\delta E$ entering here; we just don't know WHICH microstate compatible with a definite energy our system occupies, and this is expressed via the entropy. With your reasoning, the entropy of an isolated system with definite energy would always be 0, right?

So I see that the $\Omega$'s are functions of the ENERGY. So, if I fix the energy of the heat bath, I can see that this amounts to a fixed number of microstates of my subsystem compatible with the given heat-bath energy. What I don't see is why this should be 1, and why I'm not allowed to take $\Omega_1$ into account in my analysis. As far as I know,

$$\Omega_{total}(E_1 + E_2) = \Omega_1(E_1)\Omega_2(E_2)$$

Maybe I'm being a pain in the *** here, but I really don't see this :)

6. Aug 6, 2009

### Count Iblis

Well, you are computing the probability that the subsystem is in one specific state r with energy $E_r$, not in some group of states containing $\Omega_1$ states.
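In symbols (my paraphrase of the point above): the factor $\Omega_1$ answers a different question. The probability that the system has *energy* $E_r$, in any of its compatible microstates, is

$$P(E_r) = \Omega_1(E_r)\,p_r = \mathrm{const.}\cdot\Omega_1(E_r)\,\Omega_2(E_0 - E_r),$$

whereas the probability of one definite microstate r is just $p_r = \mathrm{const.}\cdot\Omega_2(E_0 - E_r)$.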

Also, what matters is not whether we know which state the heat bath is in; what matters is whether, given the specification of the external variables and the internal energy, we could in principle tell. For physics, what matters is the number of accessible quantum states. If you have specified the volume and the geometry of the system and defined what the interactions are, then the energy eigenstates are fixed with infinite precision. If you then also say that the energy is precisely E, that defines exactly what state the system is in. There is then only 1 state the system could possibly be in (assuming the energy levels are not degenerate).

This is why you need to specify some nonzero value for $\delta E$. If the number of particles is very large, the spacing between the energy levels of the system goes to zero, so you'll always have a very large number of states inside the interval of any finite $\delta E$ for a macroscopic system.

You can also say that $S/(k\ln 2)$ is the number of bits one would need in order to specify exactly what quantum state the system is in, given a specification of the external variables (nature of the system, volume, internal energy, etc.). Clearly, if you were to specify the external variables in such a way that the exact quantum state the system is in is already fixed, then $S = 0$.
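A quick illustration of the bit count (my own example, not from the post): for $N$ independent two-state spins with no constraint, $\Omega = 2^N$, so $S/(k\ln 2) = \ln\Omega/\ln 2 = N$ bits, one per spin:

```python
from math import log

N = 50                  # number of two-state spins (toy system)
Omega = 2 ** N          # accessible microstates with no constraint
S_over_k = log(Omega)   # dimensionless entropy S/k = ln(Omega)
bits = S_over_k / log(2)

print(bits)  # ~50: one bit per spin needed to pin down the microstate
```

And indeed, if the external specification already fixes the state, $\Omega = 1$ and the bit count is $\ln 1/\ln 2 = 0$.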

7. Aug 8, 2009

### haushofer

Yes, I now see that the $p$ I propose doesn't give the probability we're interested in! I'll take another look at the relevant chapters of the book by Huang, but your explanation already makes a lot more sense to me now :) Thanks!