# Understanding Entropy and the 2nd Law of Thermodynamics

## Introduction

The second law of thermodynamics and the associated concept of entropy have been sources of confusion for thermodynamics students for generations.  The objective of the present development is to clear up much of this confusion.  We begin by briefly reviewing the first law of thermodynamics, in order to introduce in a precise way the concepts of thermodynamic equilibrium states, heat flow, mechanical energy flow (work), and reversible and irreversible process paths.

## First Law of Thermodynamics

A thermodynamic equilibrium state of a system is defined as one in which the temperature and pressure are constant, and do not vary with either location within the system (i.e., spatially uniform temperature and pressure) or with time (i.e., temporally constant temperature and pressure).

Consider a closed system (no mass enters or exits) that, at initial time $t_i$, is in an initial equilibrium state, with internal energy $U_i$, and, at a later time $t_f$, is in a new equilibrium state with internal energy $U_f$.  The transition from the initial equilibrium state to the final equilibrium state is brought about by imposing a time-dependent heat flow across the interface between the system and the surroundings, and a time-dependent rate of doing work at the interface between the system and the surroundings. Let $\dot{q}(t)$ represent the rate of heat addition across the interface at time t, and let $\dot{w}(t)$ represent the rate at which the system does work at the interface at time t. According to the first law (basically conservation of energy),
$$\Delta U=U_f-U_i=\int_{t_i}^{t_f}{(\dot{q}(t)-\dot{w}(t))dt}=Q-W$$
where Q is the total amount of heat added and W is the total amount of work done by the system on the surroundings at the interface.

The time variation of $\dot{q}(t)$ and $\dot{w}(t)$ between the initial and final states uniquely defines the so-called process path. There are an infinite number of possible process paths that can take the system from the initial to the final equilibrium state. The only constraint is that Q-W must be the same for all of them.
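To make this concrete, here is a minimal numerical sketch (the heat and work rates are made up purely for illustration) of two different process paths between the same pair of equilibrium states: Q and W differ between the paths, but Q − W is the same.

```python
import numpy as np

# Two hypothetical process paths between the same initial and final
# equilibrium states. Q and W differ between paths, but Q - W (= ΔU) must match.
t = np.linspace(0.0, 10.0, 1001)  # time grid, s

# Path A: modest steady heating, modest steady rate of doing work
q_a = np.full_like(t, 5.0)   # heat addition rate, W
w_a = np.full_like(t, 2.0)   # rate of doing work, W

# Path B: more heat in, more work out, chosen so that Q - W matches path A
q_b = np.full_like(t, 8.0)
w_b = np.full_like(t, 5.0)

Q_a, W_a = np.trapz(q_a, t), np.trapz(w_a, t)  # cumulative Q and W, path A
Q_b, W_b = np.trapz(q_b, t), np.trapz(w_b, t)  # cumulative Q and W, path B

print(Q_a - W_a)  # ΔU along path A
print(Q_b - W_b)  # ΔU along path B: same value, even though Q and W differ
```

Any pair of rate histories whose cumulative difference equals ΔU is an admissible path; the first law constrains only the combination Q − W, not Q and W individually.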

A reversible process path is defined as one for which, at each instant of time along the path, the system is only slightly removed from being in thermodynamic equilibrium with its surroundings.  The path can therefore be considered a continuous sequence of thermodynamic equilibrium states, and the temperature and pressure throughout the system are spatially uniform along the entire path.  In order to maintain these conditions, a reversible path must be carried out very slowly, so that $\dot{q}(t)$ and $\dot{w}(t)$ are both very close to zero over the entire path.

An irreversible process path is typically characterized by rapid rates of heat transfer $\dot{q}(t)$ and of doing work $\dot{w}(t)$ at the interface with the surroundings.  This produces significant temperature and pressure gradients within the system (i.e., the pressure and temperature are not spatially uniform), and thus it is not possible to identify specific representative values for either the temperature or the pressure of the system (except at the initial and final equilibrium states). However, the pressure $P_{Int}(t)$ and temperature $T_{Int}(t)$ at the interface can always be measured and controlled using the surroundings to impose whatever process path we desire.  (This is equivalent to specifying the rates of heat flow and of doing work at the interface, $\dot{q}(t)$ and $\dot{w}(t)$.)

Both for reversible and irreversible process paths, the rate at which the system does work on the surroundings is given by:
$$\dot{w}(t)=P_{Int}(t)\dot{V}(t)$$
where, again, $P_{Int}(t)$ is the pressure at the interface with the surroundings, and $\dot{V}(t)$ is the rate of change of system volume at time t.

If the process path is reversible, the pressure P throughout the system is uniform, and thus matches the pressure at the interface, such that

$$P_{Int}(t)=P(t)\mbox{ (reversible process path only)}$$

Therefore, in the case of a reversible process path, $$\dot{w}(t)=P(t)\dot{V}(t)\mbox{ (reversible process path only)}$$
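As an illustration of the reversible-work expression, here is a sketch (the values of n, T, and the volumes are assumed purely for illustration) that evaluates $\int P\,dV$ numerically for an isothermal reversible compression of an ideal gas and compares it with the closed-form result $nRT\ln(V_f/V_i)$.

```python
import numpy as np

# Reversible isothermal compression of an ideal gas (assumed parameters).
# Along a reversible path P_int = P = nRT/V, so W = ∫ P dV = nRT ln(Vf/Vi).
n, R, T = 1.0, 8.314, 300.0          # mol, J/(mol·K), K
V_i, V_f = 0.010, 0.005              # m^3: compressed to half the volume

V = np.linspace(V_i, V_f, 100001)    # volume history along the path
P = n * R * T / V                    # system pressure (spatially uniform)

W_numeric = np.trapz(P, V)           # ∫ P dV; negative: work done ON the gas
W_exact = n * R * T * np.log(V_f / V_i)

print(W_numeric, W_exact)            # the two agree closely
```

The negative sign of W reflects the sign convention in the article: W is the work done *by* the system, which is negative during a compression.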

This completes our discussion of the First Law of Thermodynamics.

## Second Law of Thermodynamics

In the previous section, we focused on the infinite number of process paths that are capable of taking a closed thermodynamic system from an initial equilibrium state to a final equilibrium state. Each of these process paths is uniquely determined by specifying the heat transfer rate $\dot{q}(t)$ and the rate of doing work $\dot{w}(t)$ as functions of time at the interface between the system and the surroundings. We noted that the cumulative amount of heat transfer and the cumulative amount of work done over an entire process path are given by the two integrals:
$$Q=\int_{t_i}^{t_f}{\dot{q}(t)dt}$$
$$W=\int_{t_i}^{t_f}{\dot{w}(t)dt}$$
In the present section, we will be introducing a third integral of this type (involving the heat transfer rate $\dot{q}(t)$) to provide a basis for establishing a precise mathematical statement of the Second Law of Thermodynamics.

The discovery of the Second Law came about in the 19th century, and involved contributions by many brilliant scientists. There have been many statements of the Second Law over the years, couched in complicated language, typically involving heat reservoirs, Carnot engines, and the like. These statements have been a source of unending confusion for students of thermodynamics for over a hundred years. What has been sorely needed is a precise mathematical definition of the Second Law that avoids all the complicated rhetoric. The remarkable thing is that such a precise definition has existed all along. It was formulated by Clausius back in the 1800s.

(The following is a somewhat fictionalized account, designed to minimize the historical discussion, and focus more intently on the scientific findings.) Clausius wondered what would happen if he evaluated the following integral over each of the possible process paths between the initial and final equilibrium states of a closed system:
$$I=\int_{t_i}^{t_f}{\frac{\dot{q}(t)}{T_{Int}(t)}dt}$$
where $T_{Int}(t)$ is the temperature at the interface with the surroundings at time t. He carried out extensive calculations on many systems undergoing a variety of both reversible and irreversible paths, and discovered something astonishing:  for any closed system, the values calculated for the integral over all the possible reversible and irreversible paths (between the initial and final equilibrium states) are not arbitrary; instead, there is a unique upper bound to the value of the integral. Clausius also found that this observation is consistent with all the “word definitions” of the Second Law.

Clearly, if there is an upper bound for this integral, this upper bound has to depend only on the two equilibrium states, and not on the path between them. It must therefore be regarded as a point function of state. Clausius named this point function Entropy.

But how could the value of this point function be determined without evaluating the integral over every possible process path between the initial and final equilibrium states to find the maximum? Clausius made another discovery. He determined that, out of the infinite number of possible process paths, there exists a well-defined subset, each member of which gives exactly the same maximum value for the integral. This subset consists of all the reversible process paths. Thus, to determine the change in entropy between two equilibrium states, one must first “dream up” a reversible path between the two states and then evaluate the integral over that path. Any other process path will give a value for the integral lower than the entropy change.  (Note that the reversible process path used to determine the entropy change does not necessarily need to bear any resemblance to the actual process path.  Thus, for example, if the actual process path were adiabatic, the reversible path would not need to be adiabatic.)

So, mathematically, we can now state the Second Law as follows:

$$I=\int_{t_i}^{t_f}{\frac{\dot{q}(t)}{T_{Int}(t)}dt}\leq\Delta S=\int_{t_i}^{t_f} {\frac{\dot{q}_{rev}(t)}{T(t)}dt}$$
where $\dot{q}_{rev}(t)$ is the heat transfer rate for any of the reversible paths between the initial and final equilibrium states, and T(t) is the system temperature at time t (which, for a reversible path, matches the temperature at the interface with the surroundings, $T_{Int}(t)$). This constitutes a precise mathematical statement of the Second Law of Thermodynamics.  The relationship is referred to as the Clausius Inequality.
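A simple worked example (with assumed parameter values) illustrates the inequality: heating an ideal gas at constant volume from $T_1$ to $T_2$ by contact with a reservoir held at $T_2$ gives $I = Q/T_2$, which falls short of the entropy change $\Delta S = nC_v\ln(T_2/T_1)$ computed over a reversible path.

```python
import numpy as np

# Assumed setup: n moles of a monatomic ideal gas at constant volume, heated
# from T1 to T2 by contact with a reservoir at fixed interface temperature T2
# (irreversible), compared with a reversible path where T_int tracks the gas.
n, Cv = 1.0, 12.47            # mol, J/(mol·K)  (monatomic ideal gas, 3R/2)
T1, T2 = 300.0, 400.0         # K

# Irreversible path: all the heat crosses the interface at T_int = T2
Q = n * Cv * (T2 - T1)        # total heat added, J
I_irrev = Q / T2              # Clausius integral for this path

# Reversible path: dS = n Cv dT / T, integrated from T1 to T2
dS = n * Cv * np.log(T2 / T1)

print(I_irrev, dS)            # I_irrev < ΔS, consistent with the inequality
```

The shortfall ΔS − I is the entropy generated by the irreversible heat transfer across the finite temperature difference at the interface.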

Paper of interest for further learning:
http://arxiv.org/abs/cond-mat/9901352

74 replies
1. Jimster41 says:

Thanks Chet, I hope it’s okay if I keep asking you questions. It really is my favorite way to learn, and I can’t get enough of the second law, and I’m sure I will learn something – not the least of which will be precision of terms.

In the case of a gas in a cylinder with a piston (aka “the damper”), why does the amount of dissipation vary with the amount of force per unit time? What does the time rate of force have to do with the efficiency of conversion to mechanical energy? Why does the difference at any given time between the system and surroundings dictate the reversibility, as opposed to, say, the amount of energy transferred altogether?

2. Chestermiller says:

Thanks Jimster.

> Thanks Chester.
> Yes. I really did find that clear.
>
> Which is not to say I understood it…
>
> What is special about the reversible paths?
Reversible paths minimize the dissipation of mechanical energy to thermal energy, and maximize the ability of temperature differences to be converted into mechanical energy. In reversible paths, the pressure exerted by the surroundings at the interface with the system is only slightly higher or lower than the pressure throughout the system, and the temperature at the interface with the surroundings is only slightly higher or lower than the temperature throughout the system. This situation is maintained over the entire path from the initial to the final equilibrium state of the system.

For irreversible paths, the dissipation of mechanical energy to thermal energy is the result of viscous dissipation. The same thing happens if you compress a combination of a spring and (viscous) damper connected in parallel. If you compress the combination very rapidly from an initial length to a final length, you generate lots of heat in the damper (since the force carried by the damper is proportional to the velocity difference across the damper). On the other hand, if you compress the combination very slowly, the force carried by the damper is much less, and you generate much less heat. The amount of work you need to do in the latter case to bring about the compression is also much less. This is a very close analogy to what happens when you cause a gas in a cylinder to compress.
> Are all the other paths, the non-reversible ones, the same, or do some integrate to different values <= DeltaS than others?

They integrate to different values < ΔS. The equal sign does not apply to irreversible paths. They are all less.

Chet
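The spring-damper analogy can be sketched numerically; the parameter values below are assumed purely for illustration.

```python
# Sketch of the spring-damper analogy (assumed parameter values): compress a
# spring (stiffness k) and damper (coefficient c) in parallel by ΔL at constant
# speed. The elastic work k·ΔL²/2 is the same either way; the damper dissipates
# c·v·ΔL, which grows with the compression speed v.
k, c, dL = 100.0, 50.0, 0.1   # N/m, N·s/m, m

def work_done(duration):
    v = dL / duration          # compression speed, m/s
    elastic = 0.5 * k * dL**2  # stored in the spring (recoverable), J
    dissipated = c * v * dL    # lost as heat in the damper, J
    return elastic + dissipated, dissipated

fast = work_done(0.1)    # rapid compression
slow = work_done(100.0)  # slow, near-reversible compression

print(fast, slow)  # the rapid path requires more work and dissipates more heat
```

In the slow limit the damper force vanishes, the dissipation goes to zero, and the total work approaches the recoverable elastic work, which is the analogue of a reversible compression of the gas.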

3. anorlunda says:

Kudos chestermiller. That was clear and understandable. The historical perspective really helped.

I look forward to the day when chestermiller makes it similarly easy to understand why this second law implies that “the entropy of the universe tends to a maximum”, and how it relates to the kind of information debated in the Hawking/Susskind “black hole wars.”

4. Jimster41 says:

Thanks Chester.
Yes. I really did find that clear.

Which is not to say I understood it…

What is special about the reversible paths? Are all the other paths, the non-reversible ones, the same, or do some integrate to different values <= DeltaS than others?

5. DrDu says:

But you claimed your formulation of the Clausius inequality to be *mathematically precise*, didn't you?

6. DrDu says:

I fear that one may get from this article the impression that the concept of entropy can only be introduced under very restricting assumptions. Here are some rhetorical questions: Are the systems for which we can introduce entropy really restricted to those describable in terms of only T and P? How about chemical processes or magnetization? Can heat and work only enter through the boundaries? How about warming a glass of milk in the microwave, then? In this case, the pressure is constant, but we can't assign a unique temperature to the system, not even at the boundary, as the distribution of energy over the internal states of the molecules is out of equilibrium. This already shows that the Clausius inequality is of restricted value, as the integrals aren't defined for most irreversible processes. In fact, we don't need to rack our brains over the complicated structure of non-equilibrium states. The point is that we can calculate entropy by integrating over a sequence of equilibrium states. It plays no role whether we can approximate this integral by an actual quasistatic process.

7. GM Jackson says:

Thanks for your definitive take on thermodynamics one and two.

8. nothingkwt says:

I have always thought that a reversible process would give the minimum change in entropy, i.e. a lower bound for the integral. Is it not the case that the higher the entropy change, the more energy is being dissipated by irreversibilities? In other words, why exactly is the integral always less than or equal to ΔS, and not greater than or equal to it?

9. wvphysicist says:

> This question goes beyond the scope of what I was trying to cover in my article.  It involves the thermodynamics of mixtures.  I'm trying to decide whether to answer this in the present Comments or write an introductory Insight article on the subject.  I need time to think about what I want to do.  Meanwhile, I can tell you that there is an entropy increase for the change that you described, and that the entropy change can be worked out using the integral of $dq_{rev}/T$.

I cannot understand the symbols in the integral.

10. wvphysicist says:

I have two questions about closed systems.  Consider two closed systems; both have a chemical reaction area which releases a small amount of heat, and both are initially at the freezing point.  One has water and no ice, and the other has ice. I expect that after the chemical reaction, the water system will absorb the heat with a tiny change in temperature, and the other will convert a small amount of ice to water.  Is there any difference in the increase of energy? Suppose I choose masses to enable the delta T of the water system to go toward zero.  Is there any difference?

> I don't have a clear idea of what this equation is about.  Let me try to articulate my understanding, and you can then correct it.  You have an isolated system containing an exothermic chemical reaction vessel in contact with a cold bath.  In one case, the bath contains ice floating in water at 0 C.  In the other case, the bath contains only water at 0 C.  Is there a difference in the energy transferred from the reaction vessel to the bath in the two cases?  How is this affected if the amount of water in the second case is increased?  (Are you also asking about the entropy changes in these cases?)

Using identical reaction vessels, the energy transferred is set the same. The question is about the entropy change.  Will heating water a tiny delta T or melting ice result in the same entropy change?

11. wvphysicist says:

OK, I understand a little more and accept the last sentence. I think primarily about heat engines.

I have two questions about closed systems.  Consider two closed systems; both have a chemical reaction area which releases a small amount of heat, and both are initially at the freezing point.  One has water and no ice, and the other has ice. I expect that after the chemical reaction, the water system will absorb the heat with a tiny change in temperature, and the other will convert a small amount of ice to water.  Is there any difference in the increase of energy? Suppose I choose masses to enable the delta T of the water system to go toward zero.  Is there any difference?

I have another problem with entropy.  Some folks say it involves information.  I have maintained that only energy is involved.  Consider a system containing two gases. The atoms are identical except half are red and the other half are blue. Initially the red and blue are separated by a card in the center of the container.  The card is removed and the atoms mix.  How can there be a change in entropy?

Oh, one more please. Can you show an example where the entropy change is negative, like you were saying?

12. insightful says:

So you're saying there is a temperature gradient between the "bulk" system and the interface, but no temperature gradient between the "bulk" surroundings and the interface?

13. insightful says:

I find your "temperature at the interface with the surroundings" confusing in that to me it implies an average temperature between the system and the surroundings at that point. Would it be more clear to say "temperature of the surroundings at the interface" or am I missing something?

14. Jimster41 says:

That was great Chet, it helps to know the purpose and scope. Hey, can you explain to a confused student why the change in entropy in a closed system is not always greater than or equal to 0? I think I know (Poincaré recurrence?) but I also think I'm probably wrong.

15. MexChemE says:

Excellent article! One of the best definitions of entropy and the second law I’ve ever read.

16. davidbenari says:

In the analysis of mixtures, we have that for ideal mixtures ##\Delta_{mix} H=0##. So I think it could be argued that the entropy change for ideal mixtures is zero, according to ##dS=\frac{dq_{rev}}{T}##. However, this is not the case, and in fact the entropy of mixing is given by ##-nR\sum_i x_i \ln x_i##.

How can I resolve this?

I’m not sure if this is the kind of reply that is expected here, so I would like to know that too, hehe.

Thanks.

17. Evanish says:

This is probably a stupid question, but what is dt? It appears in most of the equations, but I can’t find its definition in the article.