H theorem: equilibrium in statistical mechanics

In summary, Reif's book on statistical mechanics discusses equilibrium and how it relates to the fundamental postulate: an isolated system approaches equilibrium, and in equilibrium the probabilities of its states do not change in time. Reif stresses that the fundamental postulate cannot be proved; it is an axiom on which statistical mechanics is based, and he refers readers to appendix A.12 on the H theorem for more detail. The expression for dS/dt appearing before deltaS>=0 vanishes only when the probabilities are all equal, which raises the question of whether the postulate can be derived. The discussion turns to Fermi's golden rule, whose symmetric transition rates underlie the H theorem but which replaces an assumed unitary evolution with dynamics that are no longer unitary.
  • #1
Sam_Goldberg
Hi guys, I am reading Reif's book on statistical mechanics and have a question on the H theorem. In section 2.3, Reif gives (on page 54) both the definition of equilibrium as well as a fundamental postulate.

Definition: "An equilibrium situation is characterized by the fact that the probability of finding the system in any one state is independent of time (i.e., the representative ensemble is the same irrespective of time). All macroscopic parameters describing the isolated system are then also time-independent."

Fundamental Postulate: "An isolated system in equilibrium is equally likely to be in any of its accessible states."

A few pages later Reif talks about how nonequilibrium situations tend to approach equilibrium, and then refers me to his appendix A.12 on the H theorem (pages 624-626). His arguments are essentially those on this wikipedia page (read up to deltaS>=0):

http://en.wikipedia.org/wiki/H-theorem#Quantum_mechanical_H-theorem

Here is the question: look at the formula for dS/dt before deltaS>=0. It appears as if dS/dt is equal to zero if and only if (correct?) Pa = Pb for all a and all b, since each term in the double summation contributes a positive value to dS/dt unless the probabilities are equal. It therefore seems as if one could prove the fundamental postulate I listed above as follows: if the system is isolated and in equilibrium, then the probabilities Pi do not change in time by virtue of Reif's definition of equilibrium. Thus, in particular, H (or S = -kH) does not change in time. However, dH/dt (or dS/dt) is zero if and only if all the probabilities are equal, and, therefore, we may conclude that isolation and equilibrium imply that the system is equally likely to be in any of its possible states. This "proof" cannot be correct, for Reif emphasized that the fundamental postulate cannot be proved; it is an axiom on which statistical mechanics is based. Thus, how is the above incorrect? Where did I go wrong?

Thanks in advance for all your help. :smile: (By the way, how do you pronounce the author's name?)
 
  • #2
I agree.
This equation:

dS/dt = (k/2) Σ_{a,b} W_{ab} (P_a - P_b)(ln P_a - ln P_b)


implies the probabilities are all the same at equilibrium.
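That claim can be checked numerically. Below is a minimal sketch of my own (not from Reif or the Wikipedia page): evolve the master equation dP_a/dt = Σ_b W_ab (P_b - P_a) with symmetric rates W_ab = W_ba, and watch the entropy S = -Σ_a P_a ln P_a (taking k = 1) climb until all the P_a are equal.

```python
import math
import random

random.seed(0)
N = 4                                    # number of accessible states (toy choice)

# symmetric transition rates W[a][b] = W[b][a]
W = [[0.0] * N for _ in range(N)]
for a in range(N):
    for b in range(a + 1, N):
        W[a][b] = W[b][a] = random.uniform(0.5, 1.5)

P = [0.7, 0.1, 0.1, 0.1]                 # arbitrary non-equilibrium start
dt, steps = 0.01, 2000

def entropy(P):
    # S = -sum_a P_a ln P_a, with k = 1
    return -sum(p * math.log(p) for p in P if p > 0)

S_prev = entropy(P)
for _ in range(steps):
    # forward-Euler step of dP_a/dt = sum_b W_ab (P_b - P_a)
    dP = [sum(W[a][b] * (P[b] - P[a]) for b in range(N)) for a in range(N)]
    P = [p + dt * d for p, d in zip(P, dP)]
    S = entropy(P)
    assert S >= S_prev - 1e-12           # dS/dt >= 0 at every step
    S_prev = S

print(P)  # every P_a has relaxed to 1/N = 0.25
```

The sum in dS/dt is a sum of non-negative terms (P_a - P_b)(ln P_a - ln P_b), so it vanishes only when every pair satisfies P_a = P_b, exactly the observation in the question.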

Note that this then appears as a consequence of Fermi's golden rule.
This also implies the existence of continuum states, like EM radiation, and that brings us back close to the beginning of the story: Max Planck.

Have you come across any good reading on the link between continuum states and the second law?
 
  • #3
Second comment: what about the classical equivalent?
I learned it by cutting phase space into small cells of size hbar.
I was very unsatisfied by that, for two reasons:

- I never clearly understood the equiprobability in this case
- I was embarrassed by the reference to QM: could the H-theorem not be derived without it?

Any hint?
 
  • #4
There is no assumption of a continuum of states here at all. What you need to assume is that the probability per unit time for a system to make a transition from state |i> to state |j>, W_{i,j}, is symmetric:

W_{i,j} = W_{j,i}
 
  • #5
The reason you can't use the H-theorem to prove the equal prior probability postulate is simply that the H-theorem makes an assumption which is known to be false.

To see that something must be wrong, recall that the transition probabilities are considered to be symmetrical, yet the H-theorem says that the entropy will increase in time or stay the same. This then suggests that the entropy should stay constant, and that's indeed what a more precise analysis shows: due to unitary time evolution, the fine-grained entropy stays constant (and stays equal to zero at all times for a pure state).

To regain the nonzero entropy as we use it in thermodynamics requires one to perform a coarse graining procedure. To see that this coarse-grained entropy increases, one has to assume some low entropy initial conditions. The fact that entropy increases is then due to the assumed conspirational initial conditions.
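Both halves of this can be seen in a small numerical toy of my own devising (a random 3-level system, hbar = 1; none of this is from the thread): unitary evolution rho -> U rho U† leaves the von Neumann entropy -Tr(rho ln rho) exactly unchanged, while coarse graining, modeled here crudely by deleting the off-diagonal elements of rho, can only raise it.

```python
import numpy as np

rng = np.random.default_rng(1)

def vn_entropy(rho):
    """von Neumann entropy -Tr(rho ln rho), with k = 1."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log(evals)))

# random mixed state rho (positive semidefinite, unit trace)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
rho = A @ A.conj().T
rho /= np.trace(rho).real

# random Hermitian Hamiltonian and the unitary U = exp(-i H t)
B = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
H = (B + B.conj().T) / 2
E, V = np.linalg.eigh(H)
t = 0.7
U = V @ np.diag(np.exp(-1j * E * t)) @ V.conj().T

rho_t = U @ rho @ U.conj().T

S0 = vn_entropy(rho)
S_unitary = vn_entropy(rho_t)                            # identical to S0
S_coarse = vn_entropy(np.diag(np.real(np.diag(rho_t)))) # off-diagonals dropped

print(S0, S_unitary, S_coarse)
```

A pure state (rank-one rho) has S = 0 and keeps it forever under U, which is the "entropy stays zero" statement above; only the coarse-grained entropy grows.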
 
  • #6
Count Iblis said:
To regain the nonzero entropy as we use it in thermodynamics requires one to perform a coarse graining procedure. To see that this coarse-grained entropy increases, one has to assume some low entropy initial conditions. The fact that entropy increases is then due to the assumed conspirational initial conditions.

Isn't the coarse graining procedure somehow included / hidden in Fermi's Golden Rule?
The question of unitary evolution appears to me to be the same in the second law, the H-theorem, and Fermi's Golden Rule.

As a student I was really shocked by Fermi's Golden Rule, precisely because it replaces an assumed unitary evolution by something which is not unitary anymore. And I keep silent about other difficulties, like the recourse to a "perturbation" that nature obviously ignores. I also avoid relating that topic to the projection postulate of QM, to avoid more useless headaches.

I think that the H-theorem, as well as Fermi's Golden Rule, is difficult to teach and even more difficult to learn. It doesn't harm to forget about the subtleties. After all, after long considerations and more subtle maths, one ends up with the same result.
 
  • #7
In the case of Fermi's Golden Rule, you use the formula for the transition probability, then consider a transition to a group of final states in the infinite-volume limit (in that limit there is a continuum of states), and take the limit t --> infinity.

Because of these limits you can ignore coherences between the different final states (so, you can integrate the squared absolute values of the amplitudes). You will also not see the system evolving back to the initial state.
 
  • #8
Thanks guys for the responses.

Count Iblis:

I think I understood what you said, but let me make sure I really have it. The transition probabilities W_{s,r} and W_{r,s} are due to perturbations from a small perturbing Hamiltonian H_1, and they are proportional to (in quantum bra-ket notation) |<r|H_1|s>|^2 or |<s|H_1|r>|^2. If there is no perturbing Hamiltonian H_1 and the system is in an exact energy eigenstate |E>, then it will, as you said, stay in that state due to the unitary time evolution.

Now, since the system is isolated (remember the wording of the fundamental postulate of statistical mechanics), there cannot be any perturbing Hamiltonian H_1, just the main one; let's call it H. Therefore, we cannot speak of a transition from one energy eigenstate to another, because such a thing does not happen. That is why Fermi's golden rule does not work in this case.

This raises the question: if the system is not in an exact quantum eigenstate, then how are we supposed to describe systems in statistical mechanics? The fundamental postulate speaks of probabilities; what, then, are these probabilities? In the next two paragraphs I will show you an idea I had for how we're supposed to describe systems in statistical mechanics. Is it correct? Please let me know.

As you well said in your second post, in statistical mechanics we never take an exact energy, but a range between E and E + dE, where dE is macroscopically small but microscopically large (large!). How would we describe the system? Here's how I believe it would be done: we take all the possible wavefunctions |s> of the system whose mean (quantum mechanical mean) energy <s|H|s> lies between E and E + dE. All these states will be in our statistical ensemble.

Now, there are many, many energy eigenstates between E and E + dE; call them |Ej>. In general, the quantum mechanical probability |<Ej|s>|^2 is different for different states |s>. But, if we average all of the quantum mechanical probabilities across all the different states |s> that fit in the energy range, we can obtain the various statistical mechanics probabilities Pj, and these would be the probabilities the fundamental postulate is referring to. Is this right? Is this how we do statistical mechanics?
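This averaging recipe can be tried in a toy calculation (my own construction, not from Reif): draw random normalized states |s> from an N-dimensional "energy shell" spanned by the eigenstates |Ej>, and average the quantum probabilities |<Ej|s>|^2 over the ensemble. The averages come out flat, close to 1/N for every j, which is the equal a priori probability of the fundamental postulate.

```python
import numpy as np

rng = np.random.default_rng(42)
N, samples = 10, 20000        # shell dimension, ensemble size (toy values)

P = np.zeros(N)
for _ in range(samples):
    # a random state |s> in the shell: Gaussian components, then normalize,
    # so that sum_j |<Ej|s>|^2 = 1 for each member of the ensemble
    s = rng.normal(size=N) + 1j * rng.normal(size=N)
    s /= np.linalg.norm(s)
    P += np.abs(s) ** 2       # the |<Ej|s>|^2, read off in the eigenbasis
P /= samples

print(P)  # every entry is close to 1/N = 0.1
```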

Finally, I'm sorry to say that while I learned quantum mechanics at the level of Shankar's book, I'm just a beginner at thermal physics in general. Thus, I didn't really understand lalbatros's and Count Iblis's last posts. Are they relevant to what I just posted above? Like I said, now that I have seen why Fermi's golden rule does not apply here, I'm only interested in figuring out how states are described in statistical mechanics.

Thank you.
 
  • #9
Yes, what you do is consider a big ensemble of systems that would all be in the same macrostate but whose exact quantum states are different. The assumption is then that averaging over this ensemble will yield the correct answer for macroscopic observables of an individual system.

If you have some individual member of the ensemble and measure e.g. the pressure, then that measurement can be formulated as performing a complicated observation of the many particle system that involves performing a time average. So, the assumption is actually that if you take an individual member of the ensemble and perform some averaging over time of an observable, then you can replace that chaotic time averaging by averaging over the entire ensemble. So, the time averaging can be seen as a sort of taking a poll of the entire ensemble (or vice versa).

Now, an ensemble of systems that are in exact eigenstates would look time independent. But you know that you can formulate the expectation value of an observable as a trace over states and then you can evaluate that in any basis.

The space spanned by energy eigenstates within the energy interval of dE is large enough to contain states that you can picture in a semi-classical way as containing molecules that move around.
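The swap of a time average for an ensemble average can be illustrated with a deliberately crude classical stand-in (my own toy, not anything from the posts above): a particle hopping symmetrically on a ring of 5 sites has a uniform stationary distribution, and the long-run time average of an observable along one trajectory agrees with the average of that observable over the uniform ensemble.

```python
import random

random.seed(3)
N = 5
f = [0.0, 1.0, 2.0, 3.0, 4.0]   # an arbitrary observable f(site)

site, total, steps = 0, 0.0, 200_000
for _ in range(steps):
    # symmetric hop left or right on a periodic ring
    site = (site + random.choice((-1, 1))) % N
    total += f[site]

time_avg = total / steps
ensemble_avg = sum(f) / N       # average over the uniform stationary ensemble

print(time_avg, ensemble_avg)   # the two averages agree closely
```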
 
  • #10
Count Iblis said:
In the case of Fermi's Golden Rule, you use the formula for the transition probability, then consider a transition to a group of final states in the infinite-volume limit (in that limit there is a continuum of states), and take the limit t --> infinity.

Because of these limits you can ignore coherences between the different final states (so, you can integrate the squared absolute values of the amplitudes). You will also not see the system evolving back to the initial state.


Clearly the H-theorem is a consequence of Fermi's Golden Rule.
I also think that ignoring coherences is indeed the quantum way of coarse graining.
 
  • #11
Count Iblis said:
Yes, what you do is consider a big ensemble of systems that would all be in the same macrostate but whose exact quantum states are different. The assumption is then that averaging over this ensemble will yield the correct answer for macroscopic observables of an individual system.

If you have some individual member of the ensemble and measure e.g. the pressure, then that measurement can be formulated as performing a complicated observation of the many particle system that involves performing a time average. So, the assumption is actually that if you take an individual member of the ensemble and perform some averaging over time of an observable, then you can replace that chaotic time averaging by averaging over the entire ensemble. So, the time averaging can be seen as a sort of taking a poll of the entire ensemble (or vice versa).

Now, an ensemble of systems that are in exact eigenstates would look time independent. But you know that you can formulate the expectation value of an observable as a trace over states and then you can evaluate that in any basis.

The space spanned by energy eigenstates within the energy interval of dE is large enough to contain states that you can picture in a semi-classical way as containing molecules that move around.

Okay, so you're saying that statistical mechanics is done the following way:

1. Specify the macrostate of the ensemble of systems. This includes the dimensions of the cylinder you contain the gas in (e.g., the volume). It may also include external electromagnetic fields. Finally, it must contain the energy E, measured to a macroscopic accuracy of dE.
2. Take a large ensemble, and measure certain values. You could, as you said, measure the average pressure by performing complicated observations on all the systems. Or, you could measure the exact energy and determine the average probability of being in an exact eigenstate (called in statistical mechanics a microstate). This statistical mechanical probability Pj is obtained by averaging all the |<Ej|s>|^2. According to the fundamental postulate, if one starts with a system in equilibrium, then the values for all the average probabilities Pj should come out the same whether we waited 10 seconds or 20 seconds to take the measurements. But if the system is not in equilibrium, then we will get different and unequal values for the average probabilities depending on whether we waited 10 or 20 seconds. Of course, as mentioned above, we cannot use the H theorem to prove the fundamental postulate, since the H theorem speaks of perturbations and transition probabilities, which make no sense for an isolated system. It's applying a rule to the wrong situation.

This leaves only one question, and then I think I've understood it: Count Iblis, you mentioned that we may either do a time averaging of the measurement or use a large ensemble and perform one measurement on each. How can the time averaging work? Doesn't a quantum mechanical measurement change the state of the system? Thus, don't we have to use an average over systems in an ensemble rather than one system over time? If we keep on measuring the value of the pressure for a single system, we're going to get the same number again, again, again... and thus we won't have a good average value. Correct?
 

