What is the connection between Maxwell's Demon and Liouville's Theorem?

Summary
Maxwell's Demon challenges the second law of thermodynamics by suggesting a mechanism for separating fast and slow molecules, potentially creating a perpetual motion machine of the second kind. The discussion highlights the importance of accounting for the entropy associated with the demon's actions, particularly the energy required to erase information about the gas particles. Participants debate the implications of using large "cannonball" molecules, questioning how their size and the demon's energy expenditures affect the thermodynamic bookkeeping. The conversation also touches on the relationship between energy, entropy, and information, suggesting that the demon's knowledge of particle states complicates the traditional understanding of entropy. Ultimately, the discussion emphasizes the need to reconcile these concepts in order to fully understand the implications for thermodynamics.
  • #61
I'm tending to think that the 2nd law honestly doesn't apply to every conceivable system. Why should it? It says that we expect systems to go from less likely states to more likely ones, but it was all worked out in the context of gases, which exhibit vast numbers of microstates and blindly wandering molecules. That means we can get away with using very naive probability theory. Why should that be generalisable to computers with very deliberate wiring and small numbers of states? Not micro-, not macro-, just states.

A computer like that doesn't have anything we can *relevantly* identify as temperature, and why should it? We can have TdS+pdV systems that don't need FdL, or TdS+FdL without pdV, so why not pdV+FdL without any mention of temperature or entropy? A computer at 0 kelvin would probably work rather well. If the program is basically cyclic then there is no accumulation of entropy in its logical state. If you want to claim that it's oozing entropy, then you'd have to mean that it dissipates heat somewhere in the universe, but is there really any prospect of finding a *mechanical* reason for putting a numerical limit on how much heat it has to dissipate? Maybe it runs on neutrinos or something.

What does likely mean anyway? In thermodynamics it means microstates per macrostate, but we seem to have agreed that macrostates are in the eye of the beholder. There's been no discussion of conditional probability, or dependent probabilities. The whole treatment of probability has been restricted to what dumb gases need.

There have been some suggestions that Turing machines can just as easily run backwards as forwards, but I don't see this either. I just need a 2-bit counter and I've got a one-way system. I'm probably going to need to feed it some energy, but the cannonball-gas argument already showed that this energy can be made negligible compared with the energy being switched around by the demon, so we concluded that it's not about the energy anyway. If it's not about the energy, though, I can have my 2-bit counter, it can be arbitrarily efficient, and the entropy I need to generate elsewhere to keep it running can be made arbitrarily small.

With that cannonball gas we can actually shift a hell of a lot of entropy per decision. To see that, let the balls be ever so slightly inelastic and let them eventually dissipate their KE by warming the cannonballs over a long, long time. The demon was supposedly struggling to run on a limited entropy budget, but with big cannonballs he can afford to drink a bit of electricity. This has been my bottom line worry all along. We can hand-wave about this stuff, but can we write equations for it in units that match up?

I really think it's on the cards that computers could walk all over the 2nd law, but what would be the damage? Would we have to bin all the books? Nope. Gases would still behave just the same, and computers would still be ten orders of magnitude away from finding out. I can't think of an area of physics that would totally implode if computers could break the 2nd law.
 
  • #62
lugita15 said:
Ken G, if you believe that the 2nd law of thermodynamics is so natural, and that it is an obvious consequence of not only Newton's laws but any laws remotely like them, then why is Boltzmann's H-theorem so hard to prove, involving advanced mathematics and clever pieces of reasoning? Your view of the 2nd law as a near-tautology "systems are more likely to evolve into more likely states" should allow for a much easier proof of the theorem.

Boltzmann's H-theorem concerns equilibrium only (or, if you prefer, the tendency for a system to approach equilibrium). The theorem itself, dH/dt ≤ 0, is not trivial, but it is not "hard to prove" either; classical and quantum mechanical proofs are available from many sources.
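For reference, and not from the thread itself, here is the standard kinetic-theory form of the quantity being discussed. H is built from the one-particle velocity distribution f(v, t) of a dilute gas,

$$ H(t) = \int f(\mathbf{v}, t)\, \ln f(\mathbf{v}, t)\, \mathrm{d}^3 v , $$

and the H-theorem states that, for a gas obeying the Boltzmann equation with the molecular-chaos assumption,

$$ \frac{dH}{dt} \le 0 , $$

with equality only for the Maxwell-Boltzmann distribution; up to constants, −H plays the role of the entropy per unit volume, which is why the theorem is read as a statistical approach to equilibrium.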
 
  • #63
AdrianMay said:
I'm tending to think that the 2nd law honestly doesn't apply to every conceivable system.

The second law can be broken for short times, in the sense of the fluctuation theorem (S is allowed to fluctuate, just like any other physical quantity), and there are challenges using systems far from equilibrium, but to date no meaningful violation of the second law of thermodynamics has ever been observed.

http://prl.aps.org/abstract/PRL/v89/i5/e050601
http://www.mdpi.org/entropy/papers/e6010001.pdf
 
  • #64
AdrianMay said:
I'm tending to think that the 2nd law honestly doesn't apply to every conceivable system. Why should it? It says that we expect systems to go from less likely states to more likely ones, but it was all worked out in the context of gases, which exhibit vast numbers of microstates and blindly wandering molecules. That means we can get away with using very naive probability theory. Why should that be generalisable to computers with very deliberate wiring and small numbers of states? Not micro-, not macro-, just states.
We should expect it to apply to computers because computers also represent a vast number of states, not a small number. Now, if you build a quantum computer, you have microstates interacting with macrostates, and you might be able to isolate the microstates and get into the area of quantum thermodynamics (which still has some questions associated with the different interpretations and so forth). Some hold that thermodynamics is just a kind of special case and might not hold for quantum systems; others (like that de Broglie quote) take the opposite view that thermodynamic thinking is quite fundamental, and that even things like wave functions and spacetime are merely instances of deeper thermodynamic (entropy-controlled) engines.
 
  • #65
lugita15 said:
Ken G, if you believe that the 2nd law of thermodynamics is so natural, and that it is an obvious consequence of not only Newton's laws but any laws remotely like them, then why is Boltzmann's H-theorem so hard to prove, involving advanced mathematics and clever pieces of reasoning? Your view of the 2nd law as a near-tautology "systems are more likely to evolve into more likely states" should allow for a much easier proof of the theorem.

I would say the second law is not a near tautology. It's a simple concept: "the more likely a situation, the more likely it is to occur" gets right to the core of the second law, but stating it precisely can get rather complicated.

You have to have the concept of a microstate and a macrostate to begin with, and to know which macrostate is associated with each microstate. This can be less than obvious, and can be different for different observers. You have to have a mechanism by which each microstate changes, in time, into another microstate. Well, no, not exactly: you have to have a mechanism by which ALMOST EVERY microstate changes into another microstate, and that requires a very large number of microstates. Then you have to know that this mechanism allows, by a series of steps, almost every microstate to evolve into almost every other microstate, and you have to know or assume that, as a result of this process, almost every microstate is just as likely to occur as any other. You have to show that almost every microstate yields the same macrostate (the equilibrium macrostate). Only then can you say that almost every microstate which does not yield the equilibrium macrostate will evolve in time so that it approaches the equilibrium macrostate. This is a statement of the second law.

The entropy is defined as proportional to (let's say equal to) the logarithm of the number of microstates that yield a given macrostate. That means the entropy of a non-equilibrium macrostate will be lower than that of the equilibrium macrostate, and it will tend to increase toward that of the equilibrium macrostate. This is another way of stating the second law. It also means that the entropy of the equilibrium macrostate is almost equal to the logarithm of the total number of microstates. Even this description is not complete. It's these details that cause the H-theorem to be so complicated.
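To make that counting concrete, here is a minimal Python sketch of my own (a toy two-state system, not anything from the thread): N "particles" that are each heads or tails, all 2^N microstates assumed equally likely, the macrostate taken to be the number of heads k, and the entropy of a macrostate taken as the log of its multiplicity.

```python
# Toy microstate/macrostate bookkeeping: N two-state particles, macrostate = number of heads.
from math import comb, log

N = 100  # toy number of two-state "particles"
total_microstates = 2 ** N

for k in (0, 10, 25, 50):
    W = comb(N, k)                     # multiplicity: microstates yielding macrostate k
    S = log(W)                         # entropy of that macrostate, S = ln W
    print(f"k = {k:3d}: S = ln W = {S:8.2f}, "
          f"fraction of all microstates = {W / total_microstates:.3e}")

# The k = N/2 macrostate has by far the most microstates, and its entropy
# ln C(N, N/2) is already close to ln(2^N), the log of the TOTAL number of
# microstates -- this is the "equilibrium macrostate" dominating the count.
```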


Ken G said:
But there's no reason to think that using a machine to separate gas into different Ts is going to reduce the entropy, I think it's pretty clear that's a dead duck and the details of the machine (like if it has memory) are not terribly important or even advisable to analyze (since you cannot individually analyze every possible machine, just like the Patent Office cannot). But granted, you want to be convinced it's a dead duck, so for that you will need to invoke a lot of experience in how machines work, and if you see it for a few examples, you can develop the faith you seek in the second law. You can't add to my faith by analyzing a few more examples, you could only have an effect by finding a counterexample (certainly a noble effort, most likely doomed to fail but instructive in how it fails each time).

I think we agree on how things work, we disagree on what is interesting or important. I am interested in the idea that some fundamental statements can be made about the thermodynamics of computing. You may have to treat every real case as a separate example, but I think it is very interesting if some statements can be made about the thermodynamics of individual logical operations, like Landauer's statement that only irreversible logical operations will unavoidably generate thermodynamic entropy which must be greater than some minimum value. The patent office rejects perpetual motion machines because they are not in the business of finding where somebody screwed up when they are sure that they have screwed up. I think there may be situations where finding out where they screwed up can be interesting and informative, and yield a new or more complete understanding of the second law.

AdrianMay said:
I'm tending to think that the 2nd law honestly doesn't apply to every conceivable system. Why should it? It says that we expect systems to go from less likely states to more likely ones, but it was all worked out in the context of gases, which exhibit vast numbers of microstates and blindly wandering molecules. That means we can get away with using very naive probability theory. Why should that be generalisable to computers with very deliberate wiring and small numbers of states? Not micro-, not macro-, just states.

The fundamental (classical) statistical mechanics question is "how do you describe the evolution of a physical system when you don't have complete knowledge of its state?". You don't have complete knowledge of the initial state or any intermediate state. The second law does not apply to situations in which you have complete knowledge of the initial state.

If you don't have complete knowledge of initial conditions, one approach is to assign probabilities to every conceivable initial condition, and then, using physical principles, calculate the probabilities of what you will finally measure. The second law says that if you have a situation where a particular final measurement is almost certain, then that's almost certainly what you will measure. Or maybe, more weakly, the second law says that if you have a situation where a particular final measurement is most likely, then that's most likely what you will measure. Otherwise, the second law is not applicable.
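As a hedged illustration of that "incomplete knowledge" picture, here is a toy simulation of my own choosing (the Ehrenfest urn model, not something discussed in the thread): whatever the exact initial microstate, the macrovariable "fraction of particles in the left box" almost certainly drifts toward 1/2, which is exactly the weak statement above.

```python
# Ehrenfest urn model: N labelled particles in two boxes; each step a randomly
# chosen particle hops to the other box. Track the fraction in the left box.
import random

N = 1000                    # number of particles (toy value)
left = N                    # a definite initial condition: all on the left
random.seed(0)

for step in range(1, 20001):
    if random.randrange(N) < left:    # the picked particle happened to be on the left
        left -= 1
    else:
        left += 1
    if step % 5000 == 0:
        print(f"step {step:6d}: fraction in left box = {left / N:.3f}")

# Typical output drifts from 1.000 toward ~0.5 and then just fluctuates around it,
# regardless of the detailed initial microstate.
```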
 
  • #66
I agree with you that analyzing why the second law continues to apply no matter how hard you try to "trick it" into not applying is informative. It's kind of like the Lorentz symmetry in relativity, or the uncertainty principle in quantum mechanics: laws that people tried very hard to "get around" with all kinds of examples, but eventually gave up the effort and instead just accepted the law. Indeed that is how physics works, to a large extent: we never know our laws are correct, we just eventually gain faith in them after trying hard enough to refute them (and usually we eventually do refute them in ways that are very informative indeed).

I'm just saying that the easiest way to analyze situations involving very high velocities, or very small systems, or Maxwell's Demons, is to accept that Lorentz symmetry, the uncertainty principle, and the second law of thermodynamics are going to tell you what will happen there. So the question then becomes why they are going to work, not why they are not going to work. I feel we benefit more from understanding and accepting the law, which has so much empirical support in all these contexts, than we do by constantly doubting it, even though I admit maintaining constant doubt is a key part of scientific progress.

So what I really mean is, when we analyze these Demons, we should be looking for the places where the entropy goes, given that we know the entropy has to go somewhere or the Demon just won't work (and indeed the time-reversed version of the Demon will work instead). In this way, we have a guide to keep us from overlooking some entropy destination, rather than looking for why the second law isn't going to work, which some parts of this thread started to get the flavor of (I'm not saying that was your approach).
 
  • #67
Ken G said:
In this way, we have a guide to keep us from overlooking some entropy destination-- rather than looking for why the second law isn't going to work, which some parts of this thread started to get the flavor of (I'm not saying that was your approach).

Well, I had not looked at Maxwell's demon too much before this thread, and I thought that the answer was that the demon would not work due to the second law. This has changed.

I have been trying to understand the concept of indistinguishable particles in a classical gas, using what I called a "billiard ball" gas, supposing for clarity that each ball is imprinted with a unique serial number that has no effect on collisions, and trying to understand why the thermodynamics of this gas is not changed by erasing the serial numbers: you still need to make the indistinguishable-particle assumption as long as no thermodynamic process is a function of those serial numbers. So it's not a quantum effect; QM just makes it a matter of principle rather than a happenstance. So when I saw a "gas of cannonballs" I was definitely interested.

Regarding the second law, it's still not fixed in my mind. Not so much its validity as its range of application: the trouble in defining a "macrostate", and the extent to which the macrostate is somewhat arbitrary, depending on the capabilities of the person doing the measurement rather than on the system itself, which again points out the arbitrariness of the entropy. Entropy is missing information, and if you manage to gain more information without disturbing the system (classical, again), then you reduce the entropy of the system without having altered it in any way.
 
  • #68
Rap said:
Well, I had not looked at Maxwell's demon too much before this thread, and I thought that the answer was that the demon would not work due to the second law. This has changed.
Yes, the Demon does work. Whether or not it is a practical way to get free energy is not so clear; maybe it's just technologically infeasible.
Rap said:
I have been trying to understand the concept of indistinguishable particles in a classical gas, using what I called a "billiard ball" gas and supposing for clarity that each was imprinted with a unique serial number that had no effect on collisions, and why the thermodynamics of this gas is not changed by erasing the serial numbers; you still need to make the indistinguishable-particle assumption.
I'm not clear on what you mean by still needing to make the indistinguishability assumption. It seems to me this will simply depend on your goals, and you can make it in some situations and not in others. You can treat distinguishable (classical) particles as indistinguishable if it doesn't matter to the outcomes you have in mind, and you can treat indistinguishable particles as distinguishable too; it depends on what you care about, or more correctly, what you can get away with not caring about. That's generally true of the entropy concept: the order is, we choose what we care about and what we know, that controls the entropy, and the entropy gives us a second law.
Rap said:
As long as no thermodynamic process is a function of those serial numbers, it's not a quantum effect; QM just says it's a matter of principle, rather than a happenstance.
Quantum mechanics just brings in another type of situation we might need to care about, because it brings in entanglement. In some situations, particles become entangled in ways where indistinguishability is of fundamental importance, and we have to care about it or we miss the necessary entanglements. In other words, sometimes nature tells us what we need to care about, rather than us telling her what we want to care about.
Rap said:
Regarding the second law, it's still not fixed in my mind. Not so much its validity as its range of application: the trouble in defining a "macrostate", and the extent to which the macrostate is somewhat arbitrary, depending on the capabilities of the person doing the measurement rather than on the system itself, pointing out again the arbitrariness of the entropy. Entropy is missing information, and if you manage to gain more information without disturbing the system (classical, again), then you reduce the entropy of the system without having altered it in any way.
I would say that concept is not supposed to be "fixed" in your mind, it is supposed to be highly fluid in your mind! Entropy is malleable, it is whatever we need it to be to describe the relative probability of various categories of outcomes. In my view, entropy is nothing but a classification scheme for states, and some classification schemes are more useful than others. So the trick to using entropy, and the second law, is simply finding a good classification scheme, and that's not always easy.
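As a tiny illustration of "entropy as a classification scheme" (my own toy example, not from the thread): the same eight microstates of three coins can be grouped two different ways, and the entropy assigned to the state the observer actually sees depends on which scheme is in use.

```python
# Same microstates, two classification schemes, two different entropies.
from itertools import product
from math import log

microstates = list(product("HT", repeat=3))   # all 8 equally likely microstates
observed = ("H", "H", "T")                    # the actual (but not fully known) state

# Scheme A: classify by the exact number of heads.
W_A = sum(1 for m in microstates if m.count("H") == observed.count("H"))
# Scheme B: classify only by whether the number of heads is even or odd.
W_B = sum(1 for m in microstates
          if m.count("H") % 2 == observed.count("H") % 2)

print(f"scheme A (count heads): W = {W_A}, S = ln W = {log(W_A):.3f}")
print(f"scheme B (parity only): W = {W_B}, S = ln W = {log(W_B):.3f}")
# Output: W = 3 vs W = 4 -- the coarser classification assigns a larger entropy
# to the very same physical situation.
```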
 
  • #69
I'm not sure I understand these brain arguments against the problem. It seems that the argument is that the mental processes required by the demon increase the entropy to preserve the second law of thermodynamics (unless I am misunderstanding).

I don't know if this has been brought up or not, but what if the demon were an extraordinarily stupid and simple being who had no idea what he was doing, but he still manipulated the system with such luck as to match exactly what the thinking demon would have done? I suppose this is similar to a sequence of atoms spontaneously forming into crystals, but that doesn't seem impossible either (though certainly unlikely).

I don't see how this could be considered anything other than statistical, though I am certainly no expert.
 
  • #70
Acala said:
I'm not sure I understand these brain arguments against the problem. It seems that the argument is that the mental processes required by the demon increase the entropy to preserve the second law of thermodynamics (unless I am misunderstanding).

I don't know if this has been brought up or not, but what if the demon were an extraordinarily stupid and simple being who had no idea what he was doing, but he still manipulated the system with such luck as to match exactly what the thinking demon would have done? I suppose this is similar to a sequence of atoms spontaneously forming into crystals, but that doesn't seem impossible either (though certainly unlikely).

I don't see how this could be considered anything other than statistical, though I am certainly no expert.

You get around this problem by assuming the demon is a computer. That way you have a definite physical system to deal with.
 
  • #71
Acala said:
I don't see how this could be considered anything other than statistical, though I am certainly no expert.
I think it's fair to say that the second law is purely statistical. It is not an unbreakable law; it's a way to make predictions. The more complex the system, the more reliable the prediction, but the systems that thermodynamics is applied to are so spectacularly complex that the predictions are essentially completely reliable. Good thing too: our lives depend on that constantly.
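A rough numerical illustration of why "essentially completely reliable" is not an exaggeration (my own back-of-the-envelope sketch, assuming independent molecules): the probability that all N gas molecules are found by chance in one half of a box is (1/2)^N.

```python
# Probability of a gross spontaneous fluctuation versus system size.
from math import log10

for N in (10, 100, 1_000, 6 * 10**23):   # the last value is roughly a mole of molecules
    log10_p = -N * log10(2)
    print(f"N = {N:.0e}: P(all in one half) = 10^{log10_p:.3g}")

# For a handful of particles this is a rare-but-possible event; for a mole of
# gas the exponent is about -1.8e23, which is why macroscopic violations of
# the second law are never observed in practice.
```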
 
  • #72
Ah, that is a clever solution, Rap.

And thanks Ken G, that always troubled me about the concept. I can see how it's virtually completely true even if it's possible to be broken in exceedingly rare circumstances.
 
  • #73
Acala said:
And thanks Ken G, that always troubled me about the concept. I can see how it's virtually completely true even if it's possible to be broken in exceedingly rare circumstances.
Indeed, some think that the origin of our universe was just one of those exceedingly rare circumstances where the second law had a hiccup!
 
  • #74
In case anyone is interested, I have posted an attachment of a paper on "The Gibbs Paradox" by E.T. Jaynes. It seems relevant, especially because Rap posted about particle indistinguishability. Jaynes seems to be taking a viewpoint similar to Ken G's; namely, that entropy is largely related to the observer's knowledge and their intent in setting up an experimental apparatus.

A thermodynamic state is defined by specifying a small number of macroscopic quantities such as temperature, volume, magnetization, stress, etc. Denote them by {X1, X2, ..., Xn}, which are observed and/or controlled by the experimenter; n is seldom greater than 4.

From this he goes on to give a number of illustrative physical examples which relate to the entropy of mixing of indistinguishable and vaguely distinguishable particles and uses this to support his thesis that entropy is not a property of the microstate, and that there is no paradox because the propositions of thermodynamics only relate to pre-defined sets of macrostates and are not meant to be statements about the physical microstates. He says about the second law:

Therefore, the correct statement of the second law is not that an entropy decrease is impossible in principle, or even improbable; rather that it cannot be achieved reproducibly by manipulating the macrovariables {X1, X2, ..., Xn} that we have chosen to define our macrostate. Any attempt to write a second law stronger than this will put one at the mercy of a trickster, who can produce a violation of it.
 


  • #75
JDStupi said:
In case anyone is interested, I have posted an attachment of a paper on "The Gibbs Paradox" by E.T. Jaynes. It seems relevant, especially because Rap posted about particle indistinguishability. Jaynes seems to be taking a viewpoint similar to Ken G's; namely, that entropy is largely related to the observer's knowledge and their intent in setting up an experimental apparatus.
I don't think this is Ken G's viewpoint. Ken G, am I wrong?
 
  • #76
JDStupi said:
In case anyone is interested, I have posted an attachment of a paper on "The Gibbs Paradox" by E.T. Jaynes. It seems relevant, especially because Rap posted about particle indistinguishability. Jaynes seems to be taking a viewpoint similar to Ken G's; namely, that entropy is largely related to the observer's knowledge and their intent in setting up an experimental apparatus.

From this he goes on to give a number of illustrative physical examples which relate to the entropy of mixing of indistinguishable and vaguely distinguishable particles and uses this to support his thesis that entropy is not a property of the microstate, and that there is no paradox because the propositions of thermodynamics only relate to pre-defined sets of macrostates and are not meant to be statements about the physical microstates. He says about the second law:

As far as I am concerned, Jaynes is up there with Boltzmann and Gibbs when it comes to understanding entropy. He is the number one contributor, in my mind, to the understanding of the relationship between thermodynamic entropy and information entropy.

His explanation of the Gibbs paradox gives a deep insight into entropy. If you have two gases separated by a partition, and they have identical particles, then removing the partition changes nothing: the resulting gas is in equilibrium and the entropy is the sum of the entropies of the two gases when the partition was in. If they are different particles, no matter how small the difference, then upon removing the partition you have non-equilibrium, and once equilibrium is re-established you have a net increase in entropy: the total entropy is greater than the sum of the two original entropies, and the increase is always the same. The crucial point is that if the particles are different but you have no experimental ability to tell that they are different, then removing the partition changes nothing: there is no detectable disequilibrium, and no entropy change. Entropy is not only a function of the system, it's a function of what you happen to know, or choose to know, about the system.

This is applicable to the concept of "correct Boltzmann counting": when you calculate the entropy of a gas assuming the particles have separate identities, you wind up with a non-extensive entropy (entropies do not add), and you have to subtract log(N!) to get the right answer. You can see that saying the particles are distinguishable is equivalent to taking your original gas and, instead of having two boxes as in the Gibbs paradox, having N separate boxes, each containing one particle which is different in some way (i.e. distinguishable) from every other particle. Again, as in the Gibbs paradox, the entropies will not add. But you have no experimental ability to tell that the particles are different. Therefore, if you calculate entropy by assuming they are distinguishable, you have to subtract the log(N!) error you made by that assumption. And now entropies add up (i.e. the entropy is extensive).
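Here is a minimal numerical check of that point, a sketch of my own under simplifying assumptions: keep only the volume-dependent, configurational part of the classical ideal-gas entropy in units of k, so that distinguishable particles give S = N ln V and the Gibbs-corrected ("correct Boltzmann counting") version gives S = N ln V − ln N!. Doubling the system (N → 2N, V → 2V) should double an extensive entropy.

```python
# Extensivity test of the configurational entropy with and without the ln N! correction.
from math import log, lgamma

def S_distinguishable(N, V):
    # particles treated as labelled: S/k = N ln V (volume part only)
    return N * log(V)

def S_indistinguishable(N, V):
    # same thing with the Gibbs correction: subtract ln(N!) = lgamma(N+1)
    return N * log(V) - lgamma(N + 1)

N, V = 10**4, 10.0 * 10**4     # toy particle number and volume (arbitrary units, V/N = 10)
for name, S in (("distinguishable  ", S_distinguishable),
                ("indistinguishable", S_indistinguishable)):
    ratio = S(2 * N, 2 * V) / S(N, V)
    print(f"{name}: S(2N, 2V) / S(N, V) = {ratio:.4f}")

# Only the Gibbs-corrected entropy comes out extensive (ratio = 2 up to a tiny
# sub-leading ln N term from Stirling's formula); without subtracting ln N!
# the ratio is noticeably larger than 2, i.e. the entropies do not add.
```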

If you want a really good book on this subject, which goes through it carefully and clearly, giving many examples, check out "A Farewell to Entropy: Statistical Thermodynamics Based on Information" by Arieh Ben-Naim.
 
  • #77
lugita15 said:
I don't think this is Ken G's viewpoint. Ken G, am I wrong?
JDStupi is right: that is very much my viewpoint. Entropy emerges once we have defined the categories of states that fit what we know and what our goals are. Once we have that, we get entropy from ln(N), and then we have the second law, which simply states that less populated categories will give way to more populated ones over time, and this progression will be more reliable if the systems involved are very large. We also get the condition for processes that occur "spontaneously", which is everything that happens in thermodynamics, and that is Nf > Ni. It might not be quite tautological, but it is certainly pretty simple and logical. The difficulty is in keeping track of what N is for things like Demons, and that's a worthwhile exercise along the lines of what Rap is trying to do, but if we want to know how the Demon works, the issue isn't really whether it will hold, but why it must hold, or else the Demon will work in reverse.
 
  • #78
According to a March 8, 2012 article in "Nature" (see citation below), the "idea of a connection between information and thermodynamics can be traced back to Maxwell's 'demon'", and the Landauer principle, which helped to resolve the paradox, has finally been experimentally verified.

According to the Nature article, "The paradox of the apparent violation of the second law can be resolved by noting that during a full thermodynamic cycle, the memory of the demon, which is used to record the coordinates of each molecule, has to be reset to its initial state [refs. 11, 12]. Indeed, according to Landauer's principle, any logically irreversible transformation of classical information is necessarily accompanied by the dissipation of at least kT ln(2) of heat per lost bit (about 3 × 10⁻²¹ J at room temperature (300 K)), where k is the Boltzmann constant and T is the temperature."

See: Antoine Bérut, et al., "Experimental verification of Landauer’s principle linking information and thermodynamics" Nature 483, 187–189 (08 March 2012)
http://phys.org/news/2012-03-landauer-dissipated-memory-erased.html
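A quick numerical check of the Landauer bound quoted above (just plugging in the constants, nothing beyond what the article states): erasing one bit at temperature T dissipates at least kT ln 2 of heat.

```python
# Landauer bound at room temperature.
from math import log

k = 1.380649e-23     # Boltzmann constant, J/K
T = 300.0            # room temperature, K

E_bit = k * T * log(2)
print(f"kT ln 2 at 300 K = {E_bit:.2e} J")   # ~2.9e-21 J, i.e. about 3 x 10^-21 J
```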
 
  • #79
From JDStupi in post #74 on the possibility of a Maxwell's Demon, originally from "The Gibbs Paradox" by E.T. Jaynes:
Therefore, the correct statement of the second law is not that an entropy decrease is impossible in principle, or even improbable; rather that it cannot be achieved reproducibly by manipulating the macrovariables {X1, X2, ..., Xn} that we have chosen to define our macrostate. Any attempt to write a second law stronger than this will put one at the mercy of a trickster, who can produce a violation of it.
Simon Van der Meer of CERN, in his 1984 Nobel Prize in Physics lecture, explains how he used RF electronics to violate Liouville's Theorem (see http://en.wikipedia.org/wiki/Liouville's_theorem_(Hamiltonian)) and reduce the phase-space volume of a relativistic antiproton beam. See Van der Meer's lecture (http://ki1.nobel.ki.se/nobel_prizes/physics/laureates/1984/meer-lecture.pdf), following Equation (1) on page 294.
 
