# Understanding Entropy and the 2nd Law of Thermodynamics

#### Introduction

The second law of thermodynamics and the associated concept of entropy have been sources of confusion to thermodynamics students for over a century. The objective of the present development is to clear up much of this confusion. We begin by briefly reviewing the first law of thermodynamics, in order to introduce in a precise way the concepts of thermodynamic equilibrium states, heat flow, mechanical energy flow (work), and reversible and irreversible process paths.

#### First Law of Thermodynamics

A thermodynamic equilibrium state of a system is defined as one in which the temperature and pressure are constant, and do not vary with either location within the system (i.e., spatially uniform temperature and pressure) or with time (i.e., temporally constant temperature and pressure).

Consider a closed system (no mass enters or exits) that, at initial time [itex]t_i[/itex], is in an initial equilibrium state, with internal energy [itex]U_i[/itex], and, at a later time [itex]t_f[/itex], is in a new equilibrium state with internal energy [itex]U_f[/itex]. The transition from the initial equilibrium state to the final equilibrium state is brought about by imposing a time-dependent heat flow across the interface between the system and the surroundings, and a time-dependent rate of doing work at the interface between the system and the surroundings. Let [itex]\dot{q}(t)[/itex] represent the rate of heat addition across the interface at time t, and let [itex]\dot{w}(t)[/itex] represent the rate at which the system does work at the interface at time t. According to the first law (basically conservation of energy),

[tex]\Delta U=U_f-U_i=\int_{t_i}^{t_f}{(\dot{q}(t)-\dot{w}(t))dt}=Q-W[/tex]

where Q is the total amount of heat added and W is the total amount of work done by the system on the surroundings at the interface.
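As a concrete check of this bookkeeping, one can pick rate functions [itex]\dot{q}(t)[/itex] and [itex]\dot{w}(t)[/itex] and integrate them numerically. The particular functional forms and numbers below are invented purely for illustration, not taken from the article:

```python
import numpy as np

def trapezoid(y, x):
    """Trapezoidal-rule approximation of the integral of y over the grid x."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# Hypothetical heat-addition and work rates over a 10-second process path;
# these functional forms are assumptions chosen only for the sketch.
t = np.linspace(0.0, 10.0, 100_001)        # time grid from t_i = 0 to t_f = 10 s
q_dot = 50.0 * np.exp(-t / 4.0)            # rate of heat addition to the system, W
w_dot = 20.0 * np.sin(np.pi * t / 10.0)    # rate at which the system does work, W

Q = trapezoid(q_dot, t)   # total heat added over the path
W = trapezoid(w_dot, t)   # total work done by the system
delta_U = Q - W           # first law: change in internal energy, path-independent

print(f"Q = {Q:.1f} J, W = {W:.1f} J, ΔU = {delta_U:.1f} J")
```

Any other pair of rate functions with the same Q − W would take the system between the same two equilibrium states, with the same ΔU.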

The time variation of [itex]\dot{q}(t)[/itex] and [itex]\dot{w}(t)[/itex] between the initial and final states uniquely defines the so-called process path. There are an infinite number of possible process paths that can take the system from the initial to the final equilibrium state. The only constraint is that Q-W must be the same for all of them.

A **reversible process path** is defined as one for which, at each instant of time along the path, the system is only slightly removed from being in thermodynamic equilibrium with its surroundings. So the path can be considered as a continuous sequence of thermodynamic equilibrium states. As such, the temperature and pressure throughout the system along the entire reversible process path are completely spatially uniform. In order to maintain these conditions, a reversible path must be carried out very slowly, so that [itex]\dot{q}(t)[/itex] and [itex]\dot{w}(t)[/itex] are both very close to zero over the entire path.

An **irreversible process path** is typically characterized by rapid rates of heat transfer [itex]\dot{q}(t)[/itex] and work being done [itex]\dot{w}(t)[/itex] at the interface with the surroundings. This produces significant temperature and pressure gradients within the system (i.e., the pressure and temperature are not spatially uniform throughout), and thus it is not possible to identify specific representative values for either the temperature or the pressure of the system (except at the initial and the final equilibrium states). However, the pressure ##P_{Int}(t)## and temperature ##T_{Int}(t)## at the interface can always be measured and controlled using the surroundings to impose whatever process path we desire. (This is equivalent to specifying the rate of heat flow [itex]\dot{q}(t)[/itex] and the rate of doing work [itex]\dot{w}(t)[/itex] at the interface.)

Both for reversible and irreversible process paths, the rate at which the system does work on the surroundings is given by:

[tex]\dot{w}(t)=P_{Int}(t)\dot{V}(t)[/tex]

where, again, ##P_{Int}(t)## is the pressure at the interface with the surroundings, and where [itex]\dot{V}(t)[/itex] is the rate of change of system volume at time t.

If the process path is reversible, the pressure P throughout the system is uniform, and thus matches the pressure at the interface, such that

[tex]P_{Int}(t)=P(t)\mbox{ (reversible process path only)}[/tex]

Therefore, in the case of a reversible process path, [tex]\dot{w}(t)=P(t)\dot{V}(t)\mbox{ (reversible process path only)}[/tex]
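As a sketch of this formula (with made-up numbers), consider the reversible isothermal expansion of an ideal gas. Along such a path the uniform system pressure is P = nRT/V, so the work integral has the closed form nRT ln(V_f/V_i), which a direct numerical integration should reproduce:

```python
import numpy as np

# Reversible isothermal expansion of an ideal gas: along a reversible path
# P_Int(t) = P(t) = nRT/V(t), so W = ∫ P dV = nRT ln(Vf/Vi).
# The gas amount, temperature, and volumes are arbitrary illustrative choices.
n, R, T = 1.0, 8.314, 300.0       # mol, J/(mol·K), K
V_i, V_f = 0.010, 0.020           # m^3

V = np.linspace(V_i, V_f, 200_001)
P = n * R * T / V                 # uniform system pressure along the path

# Trapezoidal-rule approximation of W = ∫ P dV
W_numeric = float(np.sum(0.5 * (P[1:] + P[:-1]) * np.diff(V)))
W_exact = n * R * T * np.log(V_f / V_i)

print(W_numeric, W_exact)   # both ≈ 1728.8 J
```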

This completes our discussion of the First Law of Thermodynamics.

#### Second Law of Thermodynamics

In the previous section, we focused on the infinite number of process paths that are capable of taking a closed thermodynamic system from an initial equilibrium state to a final equilibrium state. Each of these process paths is uniquely determined by specifying the heat transfer rate [itex]\dot{q}(t)[/itex] and the rate of doing work [itex]\dot{w}(t)[/itex] as functions of time at the interface between the system and the surroundings. We noted that the cumulative amount of heat transfer and the cumulative amount of work done over an entire process path are given by the two integrals:

[tex]Q=\int_{t_i}^{t_f}{\dot{q}(t)dt}[/tex]

[tex]W=\int_{t_i}^{t_f}{\dot{w}(t)dt}[/tex]

In the present section, we will be introducing a third integral of this type (involving the heat transfer rate [itex]\dot{q}(t)[/itex]) to provide a basis for establishing a precise mathematical statement of the Second Law of Thermodynamics.

The discovery of the Second Law came about in the 19th century and involved contributions by many brilliant scientists. There have been many statements of the Second Law over the years, couched in complicated language, typically involving heat reservoirs, Carnot engines, and the like. These statements have been a source of unending confusion for students of thermodynamics for over a hundred years. What has been sorely needed is a precise mathematical definition of the Second Law that avoids all the complicated rhetoric. The sad part about all this is that such a precise definition has existed all along. It was formulated by Clausius back in the 1800s.

(The following is a somewhat fictionalized account, designed to minimize the historical discussion, and focus more intently on the scientific findings.) Clausius wondered what would happen if he evaluated the following integral over each of the possible process paths between the initial and final equilibrium states of a closed system:

[tex]I=\int_{t_i}^{t_f}{\frac{\dot{q}(t)}{T_{Int}(t)}dt}[/tex]

where ##T_{Int}(t)## is the temperature at the interface with the surroundings at time t. He carried out extensive calculations on many systems undergoing a variety of both reversible and irreversible paths and discovered something astonishing: **For any closed system, the values calculated for the integral over all the possible reversible and irreversible paths (between the initial and final equilibrium states) are not arbitrary; instead, there is a unique upper bound to the value of the integral.** Clausius also found that this observation is consistent with all the “word definitions” of the Second Law.

Clearly, if there is an upper bound for this integral, this upper bound has to depend only on the two equilibrium states, and not on the path between them. It must therefore be regarded as a point function of state. Clausius named this point function Entropy.

But how could the value of this point function be determined without evaluating the integral over every possible process path between the initial and final equilibrium states to find the maximum? Clausius made another discovery. He determined that, **out of the infinite number of possible process paths, there exists a well-defined subset, each member of which gives exactly the same maximum value for the integral. This subset consists of all the reversible process paths**. Thus, to determine the change in entropy between two equilibrium states, one must first “dream up” a reversible path between the two states and then evaluate the integral over that path. Any other process path will give a value for the integral lower than the entropy change. (Note that the reversible process path used to determine the entropy change does not necessarily need to bear any resemblance to the actual process path. Thus, for example, if the actual process path were adiabatic, the reversible path would not need to be adiabatic.)

So, mathematically, we can now state the Second Law as follows:

[tex]I=\int_{t_i}^{t_f}{\frac{\dot{q}(t)}{T_{Int}(t)}dt}\leq\Delta S=\int_{t_i}^{t_f} {\frac{\dot{q}_{rev}(t)}{T(t)}dt}[/tex]

where [itex]\dot{q}_{rev}(t)[/itex] is the heat transfer rate for any of the reversible paths between the initial and final equilibrium states, and T(t) is the *system* temperature at time t (which, for a reversible path, matches the temperature at the interface with the surroundings ##T_{Int}(t)##). This constitutes a precise mathematical statement of the Second Law of Thermodynamics. The relationship is referred to as the *Clausius Inequality*.
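A minimal numerical illustration of the inequality (the gas, temperatures, and paths below are assumptions chosen for the sketch): heat a monatomic ideal gas at constant volume from T_i to T_f, once irreversibly by contact with a single reservoir at T_f (so ##T_{Int} = T_f## throughout), and once reversibly, with the interface temperature tracking the uniform system temperature:

```python
import numpy as np

# Constant-volume heating of n mol of a monatomic ideal gas from T_i to T_f.
# All numbers are illustrative assumptions.
n = 1.0
R = 8.314             # J/(mol·K)
Cv = 1.5 * R          # molar heat capacity at constant volume, monatomic gas
T_i, T_f = 300.0, 400.0

# Irreversible path: contact with a single reservoir at T_f, so the
# interface temperature is T_Int = T_f throughout and I = ∫ q̇/T_Int dt = Q/T_f.
Q = n * Cv * (T_f - T_i)
I_irrev = Q / T_f

# Reversible path: the interface temperature tracks the (uniform) system
# temperature, so I = ∫ n Cv dT / T = n Cv ln(T_f/T_i) = ΔS.
delta_S = n * Cv * np.log(T_f / T_i)

print(I_irrev, delta_S)   # I_irrev < ΔS, as the Clausius inequality requires
```

Here I_irrev ≈ 3.12 J/K while ΔS ≈ 3.59 J/K, so the irreversible path indeed falls below the upper bound set by the reversible paths.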

Paper of interest for further learning:

http://arxiv.org/abs/cond-mat/9901352


This is probably a stupid question, but what is dt? It appears in most of the equations, but I can’t find its definition in the article.

dt is the differential element of change in time.

Thanks.

In the analysis of mixtures, we have that for ideal mixtures ##\Delta_{mix} H=0##. So I think it could be argued that the entropy change for ideal mixtures is zero, according to ##dS=\frac{dq_{rev}}{T}##. However, this is not the case, and in fact the entropy of mixing is given by ##-nR\sum_i x_i \ln x_i##.

How can I resolve this?

I’m not sure if this is the kind of reply that is expected here, so I would like to know that too.

Thanks.

LaTeX doesn’t seem to work here.


Excellent article! One of the best definitions of entropy and the second law I’ve ever read.

That was great, Chet; it helps to know the purpose and scope. Hey, can you explain to a confused student why the change in entropy in a closed system is not always greater than or equal to 0? I think I know (Poincaré recurrence?) but I also think I'm probably wrong.

I find your "temperature at the interface with the surroundings" confusing in that to me it implies an average temperature between the system and the surroundings at that point. Would it be more clear to say "temperature of the surroundings at the interface" or am I missing something?

So you're saying there is a temperature gradient between the "bulk" system and the interface, but no temperature gradient between the "bulk" surroundings and the interface?

OK, I understand a little more and accept the last sentence. I think primarily about heat engines.

I have two questions about closed systems. Consider two closed systems; both have a chemical reaction area which releases a small amount of heat, and both are initially at the freezing point. One has water and no ice, and the other has ice. I expect that after the chemical reaction the water system will absorb the heat with a tiny change in temperature, and the other will convert a small amount of ice to water. Is there any difference in the increase of energy? Suppose I choose masses to enable the delta T of the water system to go toward zero. Is there any difference?

I have another problem with entropy. Some folks say it involves information. I have maintained that only energy is involved. Consider a system containing two gases. The atoms are identical except half are red and the other half are blue. Initially the red and blue are separated by a card in the center of the container. The card is removed and the atoms mix. How can there be a change in entropy?

Oh, one more, please. Can you show an example where the entropy change is negative, like you were saying?

> I have two questions about closed systems. Consider two closed systems; both have a chemical reaction area which releases a small amount of heat, and both are initially at the freezing point. One has water and no ice, and the other has ice. I expect that after the chemical reaction the water system will absorb the heat with a tiny change in temperature, and the other will convert a small amount of ice to water. Is there any difference in the increase of energy? Suppose I choose masses to enable the delta T of the water system to go toward zero. Is there any difference?

I don't have a clear idea of what this question is about. Let me try to articulate my understanding, and you can then correct it. You have an isolated system containing an exothermic chemical reaction vessel in contact with a cold bath. In one case, the bath contains ice floating in water at 0 C. In the other case, the bath contains only water at 0 C. Is there a difference in the energy transferred from the reaction vessel to the bath in the two cases? How is this affected if the amount of water in the second case is increased? (Are you also asking about the entropy changes in these cases?)

Using identical reaction vessels, the energy transferred is set the same. The question is about the entropy change. Will heating water a tiny delta T or melting ice result in the same entropy change?
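For what it's worth, the ##\int dq_{rev}/T## comparison raised here can be sketched numerically (the heat capacity and quantities below are illustrative assumptions): the ice-water bath absorbs Q at a fixed 273.15 K, while a water-only bath warms slightly, and its entropy change approaches Q/T0 as its mass grows:

```python
import numpy as np

# Same heat Q released into (a) an ice-water bath at 0 °C and (b) a pure
# water bath at 0 °C.  Quantities below are illustrative assumptions.
Q = 1000.0            # J of heat absorbed by the bath
T0 = 273.15           # K, freezing point
c = 4186.0            # J/(kg·K), specific heat of liquid water

# (a) Ice melting: the bath stays at T0, so ΔS = ∫ dq_rev/T = Q/T0.
dS_ice = Q / T0

# (b) Water only: the bath warms from T0 to T0 + ΔT with Q = m c ΔT,
# so ΔS = m c ln((T0 + ΔT)/T0).  As the mass grows, ΔT → 0 and ΔS → Q/T0.
for m in (1.0, 10.0, 100.0):          # kg of water in the bath
    dT = Q / (m * c)
    dS_water = m * c * np.log((T0 + dT) / T0)
    print(m, dS_water)

print(dS_ice)   # the water-bath values approach this as the mass increases
```

In the limit of vanishing ΔT, the two baths give the same entropy change; for any finite mass the warming bath's ΔS is slightly smaller than Q/T0.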

This question goes beyond the scope of what I was trying to cover in my article. It involves the thermodynamics of mixtures. I'm trying to decide whether to answer this in the present Comments or write an introductory Insight article on the subject. I need time to think about what I want to do. Meanwhile, I can tell you that there is an entropy increase for the change that you described, and that the entropy change can be worked out using the integral of ##dq_{rev}/T##.

I cannot understand the symbols in the integral.