Equilibrium, entropy, probability - Release of constraints

  • #1
SchroedingersLion
Hi everyone,

I have a fundamental question about the first part of Swendsen's An Introduction to Statistical Mechanics and Thermodynamics.

Suppose we have two isolated systems of volumes ##V_1## and ##V_2##. We distribute ##N## ideal gas particles across the two systems with total energy ##E##.
Suppose we bring the systems into contact so that they can exchange particles.

Now, following Boltzmann's original train of thought relating the probabilities of individual microstates to the thermodynamic quantity called entropy, we make the following assumptions:
1) The macroscopic state of an isolated system goes from less probable to more probable states.
2) The most probable state is called equilibrium, a state where all macroscopic system descriptions are stationary.
3) Everything we do not know is equally likely.

From 3), it follows that each position within the two volumes is equally likely to be occupied by a particle. From this, one can derive a probability distribution ##P(N_1|N)## of finding ##N_1## of the ##N## particles in ##V_1##. It is a binomial distribution with a well-defined maximum. The author defines the equilibrium value of ##N_1## to be the maximizer of ##P(N_1|N)##, i.e.
$$
N_{1}^{eq}=\underset{N_1}{\text{argmax}}\, P(N_1|N).
$$

The author makes it very clear that the actual particle number ##N_1## keeps fluctuating in time. But close to the thermodynamic limit, the distribution ##P(N_1|N)## is sharply peaked, so that ##N_{1}^{eq}=\text{argmax}\, P(N_1|N)=\langle N_1\rangle## with negligible variance. That means that in equilibrium, a macroscopic measurement (one that cannot resolve the small variance, i.e. the statistical fluctuations in ##N_1##) will always give the same result ##N_{1}^{eq}##.
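To make the sharp-peak claim concrete, here is a small sketch (my own illustration, not from the book) that evaluates the binomial ##P(N_1|N)## for ##N## particles, each of which sits in ##V_1## with probability ##p = V_1/V##; the values ##N = 10000## and ##p = 1/2## (equal volumes) are assumed for illustration:

```python
# Sketch (not from the book): the binomial distribution P(N_1 | N) for N
# particles, each of which is in V_1 with probability p = V_1 / V.
# N and p below are assumed values for illustration.
from math import lgamma, log, sqrt

def log_P(n1, N, p):
    """ln of the binomial probability C(N, n1) * p**n1 * (1-p)**(N - n1)."""
    return (lgamma(N + 1) - lgamma(n1 + 1) - lgamma(N - n1 + 1)
            + n1 * log(p) + (N - n1) * log(1 - p))

N, p = 10_000, 0.5                        # equal volumes: p = V_1 / V = 1/2
n1_eq = max(range(N + 1), key=lambda n1: log_P(n1, N, p))  # argmax of P

mean = N * p                              # <N_1>
rel_width = sqrt(N * p * (1 - p)) / mean  # relative fluctuation ~ 1/sqrt(N)
print(n1_eq, mean, rel_width)             # peak at N*p; width shrinks with N
```

The peak sits at ##Np## and the relative width shrinks like ##1/\sqrt{N}##, which is why a macroscopic measurement cannot resolve the fluctuations.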

The author gives Boltzmann's (configurational) entropy definition for the subsystems:
$$
S(N_i,V_i) = \ln\left(P(N_i|N)\right), \quad \text{for } i\in \left\{1,2\right\}.
$$
The total configurational entropy can then be written as
$$
S(N,V) = S(N_1,V_1) + S(N_2,V_2).
$$

Then it follows that the equilibrium value ##N_{1}^{eq}## maximizes entropy ##S(N_1,V_1)##.

I am unsure about the entropy of a constrained state where we force ##N_1 \neq N_{1}^{eq}## particles to be within ##V_1##, assuming the connection to ##V_2## is closed (adiabatic barrier).
Question 1)
If I want the entropy of this state, would I just insert this number ##N_1## into ##S(N_1,V_1) = \ln\left(P(N_1|N)\right)## from before? Then I would have a state of low entropy, and based on the three assumptions above, the entropy would be maximized upon release of the constraint. But I don't see why I would be allowed to use the function ##P(N_1|N)##, as it was derived under the assumption that there is no barrier.
Question 2)
Since the constraint fixes the particle number and there is no macroscopic change anymore... does that not mean that the constrained state is also an equilibrium state? The difference to the unconstrained equilibrium state would then be that there are no more microscopic fluctuations...
The distribution ##P(N_1|N)## is now a delta peak. So what is the logarithm of a delta peak? You see how I am getting confused.

SL
 
  • #2
SchroedingersLion said:
Suppose we bring the systems into contact so that they can exchange particles...

I am unsure about the entropy of a constrained state where we force ##N_1 \neq N_{1}^{eq}## particles to be within ##V_1##. Assuming the connection to ##V_2## is closed (adiabatic barrier).
If there is an exchange of particles, there can be no adiabatic barrier. Please clarify this part of your question.

SchroedingersLion said:
Question 1)
If I want the entropy of this state, would I just insert this number ##N_1## into ##S(N_1,V_1) = \ln\left(P(N_1|N)\right)## from before? Then I would have a state of low entropy, and based on the three assumptions above, the entropy would be maximized upon release of the constraint. But I don't see why I would be allowed to use the function ##P(N_1|N)##, as it was derived under the assumption that there is no barrier.
Note that the total number of particles is ##N = N_1 + N_2##, and you don't know how they were originally distributed between the two volumes ##V_1## and ##V_2##. In the book, ##N_1## is the number of particles found in ##V_1## when the particles can roam freely over the total volume ##V = V_1 + V_2##; this value is independent of the content of ##V_2##. The contribution to the entropy from the particles in ##V_1## is ##S(N_1, V_1) = \ln\left(P(N_1|N)\right)##, and this probability equals the probability of finding ##N - N_1## particles in ##V_2##. In the same way, ##S(N_2, V_2) = \ln\left(P(N_2|N)\right)##, where ##P(N_2|N)## is the same as the probability that ##N - N_2## particles are in ##V_1##, regardless of what ##V_1## contains.

The total entropy is the sum ##S(N_1, V_1) + S(N_2, V_2) = S(N, V)##, as is logical: the probability that one volume holds a certain number of particles is the same as the probability that the other volume holds the complementary number, and "not" the sum of both probabilities.

You are understanding the meaning of ##N## in the book differently from what you propose in the problem.
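The complementary-probability statement above can be checked numerically. This is my own sketch with assumed values ##N = 50## and ##V_1/V = 0.3##, not something from the book:

```python
# Check (illustrative, assumed numbers): the event "n1 particles in V_1" is
# the same event as "N - n1 particles in V_2", so the probabilities agree.
from math import comb

def P(n1, N, p):
    """Probability that n1 of N particles are in the volume of fraction p."""
    return comb(N, n1) * p**n1 * (1 - p)**(N - n1)

N, p1 = 50, 0.3            # assumed: 50 particles, V_1 is 30% of the total
for n1 in range(N + 1):
    assert abs(P(n1, N, p1) - P(N - n1, N, 1 - p1)) < 1e-15
print("complementary probabilities agree")
```

The check passes for every ##n_1## because both expressions describe the same event, only viewed from the other container.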
SchroedingersLion said:
Question 2)
Since the constraint fixes the particle number and there is no macroscopic change anymore... does that not mean that the constrained state is also an equilibrium state? The difference to the unconstrained equilibrium state would then be that there are no more microscopic fluctuations...
The distribution ##P(N_1|N)## is now a delta peak. So what is the logarithm of a delta peak? You see how I am getting confused.

I do not understand what you mean by a restriction: at the beginning the volumes are separated, and then they are joined into a single larger volume.
 
  • #3
Thanks for the response.

I just don't understand how the entropy of the constrained system is connected to the unconstrained system.

I have ##N## total particles and a total volume ##V## that is split between two subvolumes, ##V=V_1+V_2##, where ##N_1## particles are in ##V_1## and ##N_2## particles are in ##V_2##, with ##N=N_1 + N_2##.

Two cases:
1) There is no barrier, so the particles can distribute themselves freely. In this case, the probability ##P(N_1|N)## is maximized at the equilibrium value of ##N_1##. Thus, the function ##S(N_1) := \ln(P(N_1|N))## is also maximized there. Note that in deriving the function ##S(N_1)##, the author relied on the fact that there is no barrier.
That means in any other scenario, I would not be allowed to use the derived function ##S(N_1)##, because the probability density ##P## would look different.

2) There is a barrier, so the particles are stuck in either ##V_1## or ##V_2##. Now, if ##N_1## is not the equilibrium value, what is the entropy of this state? Do I still insert the value ##N_1## into the function ##S(N_1)## from before? That does not make sense in my eyes, as this function was derived under the assumption that particles can move freely. The probability ##P(N_1|N)## will have a different shape in this case, so the entropy function will look different as well.
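For what it's worth, here is how the comparison plays out numerically if one does insert the constrained ##N_1## into the same function ##S(N_1) = \ln P(N_1|N)##. This is a sketch with assumed numbers (##N = 1000##, equal volumes, constrained ##N_1 = 400##), taking the book's unconstrained distribution at face value:

```python
# Illustrative sketch (assumed numbers): evaluate S(N_1) = ln P(N_1 | N)
# at a constrained N_1 != N_1^eq and at the equilibrium value, using the
# *unconstrained* binomial distribution, as in the book's definition.
from math import lgamma, log

def S(n1, N, p):
    """ln of the binomial probability C(N, n1) * p**n1 * (1-p)**(N - n1)."""
    return (lgamma(N + 1) - lgamma(n1 + 1) - lgamma(N - n1 + 1)
            + n1 * log(p) + (N - n1) * log(1 - p))

N, p = 1_000, 0.5          # equal volumes assumed
n1_eq = 500                # the most probable value for p = 1/2
n1_forced = 400            # a constrained, non-equilibrium value

dS = S(n1_eq, N, p) - S(n1_forced, N, p)
print(dS > 0)              # True: releasing the constraint raises S
```

Whether that assignment is legitimate is exactly Question 1; the code only shows that, under that assignment, the entropy increases when the constraint is released.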
 

