What is the connection between Maxwell's Demon and Liouville's Theorem?

AI Thread Summary
Maxwell's Demon presents a challenge to the second law of thermodynamics by suggesting a mechanism to separate fast and slow molecules, potentially creating a perpetual heat engine. The discussion highlights the importance of considering the entropy associated with the demon's actions, particularly the energy required to erase information about the gas particles. Participants debate the implications of using large "cannonball" molecules, questioning how their size and the demon's energy expenditures affect thermodynamic principles. The conversation also touches on the relationship between energy, entropy, and information, suggesting that the demon's knowledge of particle states complicates the traditional understanding of entropy. Ultimately, the discussion emphasizes the need to reconcile these concepts to understand the implications for thermodynamics fully.
  • #51
Ken G said:
I would say it must be consistent with any laws of mechanics, but it is independent of those laws, and it doesn't even really require any laws of mechanics be in place. Mechanics are the details, thermodynamics is what you can do without even knowing the details.
We can easily have a universe in which the laws of mechanics do not lead to the second law of thermodynamics. For instance, the laws could dictate that systems try to attain a specific ordered state.
Yes, if you have a model for the Demon. Do we have mechanical models of brains?
My point was that if we can use the 2nd law of thermodynamics to conclude that Maxwell's demon cannot reliably decrease the total entropy, and we can conclude this without examining in detail how it works, then we should be able to reach a similar conclusion using Newton's laws, again without looking at a specific model for the demon. If nothing else, we can adapt the proof of the second law of thermodynamics given in e.g. the later chapters of Feynman Lectures volume 1, for the case of an unspecified device which is assumed to have the properties we ascribe to Maxwell's demon. If we do something like this, we can find out some of what makes the second law "tick", such as the role of information.
The "why" of the second law is independent of mechanics, it is thermodynamics. It all boils down to, entropy is our way of counting which collections of configurations contain more equally likely states, and hence are what will happen
I agree that the definition of entropy is independent of the laws of physics. But I disagree with your assertion that thermodynamics is independent of mechanics. The fact that entropy tends to increase with time more often than it decreases seems like a contingent fact of the universe. And in fact, even in our universe with our laws of physics, the fluctuation theorem suggests that if entropy gets really high, it is possible in principle for the second law of thermodynamics to go in reverse.
 
  • #52
lugita15 said:
We can easily have a universe in which the laws of mechanics do not lead to the second law of thermodynamics. For instance, the laws could dictate that systems try to attain a specific ordered state.
"Disorder" simply means "more ways of being", which means "more likely", and that's the second law in a nutshell. The sole assumption is that you can just count the ways of being (the number of configurations)-- this is the crux of statistical mechanics, that every individual state is equally likely. That's the only assumption behind the second law, and if it weren't true, it would only mean that we would need to have a more sophisticated concept of what entropy is, beyond just ln(N), if some states were "preferred" by the mechanics. But any mechanics without that property yields the second law, quite generally.
My point was that if we can use the 2nd law of thermodynamics to conclude that Maxwell's demon cannot reliably decrease the total entropy, and we can conclude this without examining in detail how it works, then we should be able to reach a similar conclusion using Newton's laws, again without looking at a specific model for the demon.
And that is what is not true. Newton's laws are about the details, thermodynamics is what you can do without anything like Newton's laws. That's why the main principles of thermodynamics were discovered independently of Newton's laws (like the work of Carnot and Clausius), and sometimes even prior to them (like Boyle's law).
If nothing else, we can adapt the proof of the second law of thermodynamics given in e.g. the later chapters of Feynman Lectures volume 1, for the case of an unspecified device which is assumed to have the properties we ascribe to Maxwell's demon. If we do something like this, we can find out some of what makes the second law "tick", such as the role of information.
Right, with no reference to any mechanism or mechanics of the Demon. This is crucial-- the mechanics only serve as informative examples of the second law, they are not part of the derivation of it. The derivation proceeds along the lines I gave above, and with no mention of any laws of mechanics, other than that they do not pick out preferred states. One might thus say that the second law arises from the universe being "non-teleological", which still remains the hardest thing for many to accept about it. It might not even be true-- it's only a law after all! But we won't know until we can really model thought, to see if it brings in some kind of teleology that could motivate essentially "magical" treatments of the Demon.
 
  • #53
Ken G said:
No, that is exactly what cannot be true, in any theory of mechanics exhibited by large systems. That's pretty much the whole point of thermodynamics! Again, "disorder" simply means "more ways of being", which means "more likely", and that's the second law in a nutshell. The sole assumption is that you can just count the ways of being (the number of configurations)-- this is the crux of statistical mechanics, that every individual state is equally likely. That's the only assumption behind the second law, and if it weren't true, it would only mean that we would need to have a more sophisticated concept of what entropy is, beyond just ln(N).
Are you saying that it is literally impossible to have laws of physics in which all the particles work together to produce a particular ordered state?
And that is what is not true. Newton's laws are about the details, thermodynamics is what you can do without anything like Newton's laws. That's why the main principles of thermodynamics were discovered independently of Newton's laws (like the work of Carnot and Clausius), and sometimes even prior to them (like Boyle's law).
Sure, just like Kepler's laws were discovered before Newton's law of gravitation, and the Balmer series was discovered before Schrödinger's equation. Phenomena of nature can be discovered independently even if they derive theoretically from a common source.
Right, with no reference to any mechanism or mechanics of the Demon. This is crucial-- the mechanics only serve as informative examples of the second law, they are not part of the derivation of it. The derivation proceeds along the lines I gave above, and with no mention of any laws of mechanics.
I was envisioning a different sort of procedure. I'm suggesting doing the statistical mechanics derivation of the second law of thermodynamics from Newton's laws of motion, as outlined in the Feynman lectures and fleshed out by Boltzmann, but restricting the proof to the case where you have a Maxwell's demon with unspecified mechanism. So the rest of the scenario will be analyzed according to mechanics; it is only the demon that is a black box.
If you set F=mv instead of F=ma, as the ancients imagined, you still get the second law of thermodynamics, without any difference. Indeed, this is the second law in highly dissipative situations, and it's still just thermodynamics.
I don't think this is too surprising; (this part of) Aristotelian physics is just Newtonian physics in the limit of strongly dissipative forces.
 
  • #54
I confess that I did some editing of my last post, just after I posted it, so some of the points were clarified and you might want to look at the improved version. I'm not saying it's impossible to have laws that create ordered states, I'm saying that only a very general assumption about the laws is required to rule that out (the assumption needed is that all possible states are equally likely, so none are picked out by the laws as special in some way). I would call that "non-teleological" laws, similar to what we get in relativity where there are no preferred reference frames. The key point about the Demon is that it is not given a pass to violate this rule-- it cannot target specific states, it must "throw darts" like everything else, and what it hits, ultimately, is simply the largest target, i.e., the highest entropy. Anything else is "magic". Now, of course we must include the entire "target", not just the gas and its entropy, so that's why apparent violations of the second law invariably result from not recognizing the full space of possible outcomes that are being affected by the Demon. That's how the Demon works-- by gaining access to some other set of possible outcomes that can mitigate the "smaller target" of the reduced entropy in the gas. So my point is, this is not some technicality about how the Demon functions, it is how the Demon functions, thermodynamically speaking. No magic, no teleology, and you have the second law, regardless of the mechanics.

Now of course, this is the thermodynamics view-- I don't say reality actually contains no magic, and no teleology. It is physics that doesn't have those things, and does well without them, and that seems to be the reason that thermodynamics works so well. We don't have any reason to think a mind, demonic or human, can violate the second law, but we can notice that the second law stems from how our mind analyzes nature, so the law is as much a product of our minds as it is something that is a rule of nature itself. Hence we don't have to say that our minds result from the action of the second law, we can always assert the converse if we prefer. But either way, the two come together, and I would say the burden is on anyone who would claim a Demon can do something that the patent office would reject as implausible.
 
  • #55
It's the temperature and entropy thing again: when the second law talks about entropy, it is always in the context of temperature. Thermodynamic entropy is information entropy, but not all information entropy is thermodynamic entropy. Thermodynamic entropy always involves temperature.

Let's assume we can ignore the measurement process as far as its contribution to the second law balance sheet is concerned. Measuring the cannonballs in the cannonball gas suggests to me this is true. Let's ignore the trapdoor process as well. Then we have two containers of gas and a demon. In the beginning, the two gas systems are at the same temperature. Some time later, they are not. There has been a thermodynamic entropy reduction, if we ignore the demon. Let's say the second law holds, and the demon is some kind of computer, a Turing machine for simplicity. This means that the demon's thermodynamic (!) entropy must be increased, or, if it must remain at some low temperature, that its increasing entropy is continually dumped outside. The information entropy of the demon's logical state is not the subject of the second law, unless you can define some "logic temperature" and multiply it by k times the info entropy to get an energy. Let's say you cannot. Then it follows that there can exist no Turing machine that accomplishes the demon's purpose without the dissipation of the lost gas entropy as HEAT. If you want to throw a monkey wrench into the workings of the second law, then you have to design a Turing machine that accomplishes the demon's purpose while dissipating less than that lost entropy. If you can design a Turing machine that is as general as possible, which dissipates no heat, you will have surely thrown a monkey wrench into the second law (given the arguable assumptions already made).
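A back-of-envelope version of that balance sheet, as a sketch under assumed numbers (equal, constant heat capacities C = Nk for each container, and a 10% temperature split):

```python
from math import log

T0, dT = 300.0, 30.0          # starting temperature and final split, K (assumed)
Th, Tc = T0 + dT, T0 - dT     # energy conservation: Th + Tc = 2*T0

# Entropy change of the two containers, in units of C = N*k each:
dS_gas = log(Th / T0) + log(Tc / T0)
print(f"gas entropy change: {dS_gas:+.5f} C  (negative: the gas lost entropy)")
print(f"so the demon/computer must generate at least {-dS_gas:.5f} C elsewhere,")
print("dissipated as heat if it is to stay at some fixed temperature")
```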

This is partly what Bennett and E&N are discussing, trying to figure out the thermodynamics of a Turing machine, or some other universal computer. If you break the computer down into individual logical operations, then one or more of those operations must dissipate heat. The heat dissipation, or absence of it, for various logical operations, is what they are discussing, and I see it as an interesting discussion, given the implications.

They also discuss Szilard's principle, which says the entropy cost is incurred in the measurement process. As far as I can see, Bennett disagrees; as for E&N, I don't know.
 
  • #56
Rap said:
It's the temperature and entropy thing again: when the second law talks about entropy, it is always in the context of temperature. Thermodynamic entropy is information entropy, but not all information entropy is thermodynamic entropy. Thermodynamic entropy always involves temperature.
Actually I think it's the other way around-- the meaning of temperature stems from thermodynamic entropy. The best way to think about temperature is to multiply it by k and say that kT is the energy a system must take in to increase its access to undifferentiated states by a factor of e. But one can talk about counting states without ever referring to temperature, if you simply don't care about energy (given that it's conserved, we usually do choose to care about it, but we don't have to, and we still have a second law). So T comes in not so much with entropy, but when you want to connect entropy to energy. That means it relates to the importance of entropy, because energy measures the consequences of what the entropy is doing. But we still have a second law even if we have never heard of either energy or temperature.
Let's say the second law holds, and the demon is some kind of computer, a Turing machine for simplicity. This means that the demon's thermodynamic (!) entropy must be increased, or, if it must remain at some low temperature, that its increasing entropy is continually dumped outside.
Be careful not to confuse entropy with energy. If the Demon is at a very low T, then a very small energy change can correspond to a huge entropy change, and the Demon can remain at low T yet still increase its entropy without dumping anything to the outside. Indeed, if the Demon is nearly at absolute zero T, then it can increase its own entropy arbitrarily, and still be at nearly absolute zero T because it is taking on very little energy. But I agree that in practice, most Demons are not going to have an effective T that is incredibly small, so they are going to need an environment they can dump non-negligible heat into and increase the entropy of.
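The arithmetic behind this is just dS = dQ/T; a quick numeric check with illustrative numbers:

```python
k = 1.380649e-23                 # Boltzmann constant, J/K
Q = 1e-21                        # a very small heat intake, J (assumed)

for T in (300.0, 1.0, 0.01):     # candidate demon temperatures, K
    dS = Q / T                   # entropy gained by the demon, J/K
    # note Q = k*T gives dS = k exactly: multiplicity grows by a factor of e
    print(f"T = {T:6.2f} K   dS = {dS:.1e} J/K = {Q / (k * T):.2e} k")
```

At T = 0.01 K the same tiny heat intake buys thousands of k of entropy, which is the point: a near-zero-T demon absorbs entropy almost for free energetically.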

If you want to throw a monkey wrench into the workings of the second law, then you have to design a Turing machine that accomplishes the demon's purpose while dissipating less than that lost entropy. If you can design a Turing machine that is as general as possible, which dissipates no heat, you will have surely thrown a monkey wrench into the second law (given the arguable assumptions already made).
But you will also run afoul of the basic axioms of thermodynamics, which will not be possible to avoid. Let's say you do make such a Turing machine, whose functioning is to reduce the entropy in the gas by more than it increases the entropy in the environment in which it is functioning. Then the net result of the action of your machine is to go from more likely configurations to less likely ones. What keeps the machine from running backward? Remember, all the laws of physics involved are time reversible, so if I send t to -t, I get a version of your Turing machine that makes the opposite decisions and causes the gas to come to the same T, and does so in a way that increases entropy. So what is going to make your Turing machine not turn into mine? I claim that is just exactly what it will do. Remember, the Turing machine is not a magical genie; even if you give it a program, it can follow that program in either order of time, and it will follow it in whichever order increases entropy, because the arrow of time comes from less likely configurations being replaced by more likely ones. The arrow of time isn't magic either.
 
  • #57
Ken G, if you believe that the 2nd law of thermodynamics is so natural, and that it is an obvious consequence of not only Newton's laws but any laws remotely like them, then why is Boltzmann's H-theorem so hard to prove, involving advanced mathematics and clever pieces of reasoning? Your view of the 2nd law as a near-tautology "systems are more likely to evolve into more likely states" should allow for a much easier proof of the theorem.
 
  • #58
Ken G said:
Actually I think it's the other way around-- the meaning of temperature stems from thermodynamic entropy. The best way to think about temperature is to multiply it by k and say that kT is the energy a system must take in to increase its access to undifferentiated states by a factor of e. But one can talk about counting states without ever referring to temperature, if you simply don't care about energy (given that it's conserved, we usually do choose to care about it, but we don't have to, and we still have a second law). So T comes in not so much with entropy, but when you want to connect entropy to energy. That means it relates to the importance of entropy, because energy measures the consequences of what the entropy is doing. But we still have a second law even if we have never heard of either energy or temperature.

Hmmm - ok, right.

Ken G said:
Be careful not to confuse entropy with energy. If the Demon is at a very low T, then a very small energy change can correspond to a huge entropy change, and the Demon can remain at low T yet still increase its entropy without dumping anything to the outside. Indeed, if the Demon is nearly at absolute zero T, then it can increase its own entropy arbitrarily, and still be at nearly absolute zero T because it is taking on very little energy. But I agree that in practice, most Demons are not going to have an effective T that is incredibly small, so they are going to need an environment they can dump non-negligible heat into and increase the entropy of.

Yes. I am keeping close track of the difference between entropy and energy, and I agree with the above.

Ken G said:
But you will also run afoul of the basic axioms of thermodynamics, which will not be possible to avoid. Let's say you do make such a Turing machine, whose functioning is to reduce the entropy in the gas by more than it increases the entropy in the environment in which it is functioning. Then the net result of the action of your machine is to go from more likely configurations to less likely ones. What keeps the machine from running backward? Remember, all the laws of physics involved are time reversible, so if I send t to -t, I get a version of your Turing machine that makes the opposite decisions and causes the gas to come to the same T, and does so in a way that increases entropy. So what is going to make your Turing machine not turn into mine? I claim that is just exactly what it will do. Remember, the Turing machine is not a magical genie; even if you give it a program, it can follow that program in either order of time, and it will follow it in whichever order increases entropy, because the arrow of time comes from less likely configurations being replaced by more likely ones. The arrow of time isn't magic either.

Good point. It shows that in order for a Turing machine to have a forward direction, it must generate entropy. Let's assume the Turing machine is made from macroscopic mechanical parts. The arrangement of its mechanical parts is its "logical state". This logical state has nothing to do with thermodynamics; it constitutes a macrostate, and says nothing about the many microstates which give rise to this configuration. The entropy the demon generates is thermodynamic entropy, which shows up as heat (assuming its temperature is not absolute zero). Now, this entropy, by the second law, must be greater than the entropy it removed from the two gases. The problem now is to prove this, hopefully by conceptually breaking down the Turing machine into separate logical operations, and showing which logical operations are generating the entropy, and what is the least amount that they generate, in principle. Then show that when you add them all up for a program that contains the minimum number of entropy-generating logical steps, but which still implements the demon's purpose, that value of entropy is greater than the entropy removed from the two gases.
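A sketch of that bookkeeping using Landauer's bound (imported here as an assumption: erasing one bit dissipates at least kT ln 2 of heat), with the demon recording roughly one bit per molecule:

```python
from math import log

k, T = 1.380649e-23, 300.0        # J/K; environment temperature (assumed)

erase_cost = k * T * log(2)       # minimum heat to erase one recorded bit, J
print(f"heat per erased bit  : {erase_cost:.3e} J")
print(f"entropy of that heat : {erase_cost / T / k:.4f} k   (= ln 2)")
# A one-bit sorting decision can reduce the gas entropy by at most ~k*ln 2
# (the Szilard-engine value), so resetting the demon's memory wipes out the
# gain -- at best the books balance, and the second law survives.
```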
 
  • #59
lugita15 said:
Ken G, if you believe that the 2nd law of thermodynamics is so natural, and that it is an obvious consequence of not only Newton's laws but any laws remotely like them, then why is Boltzmann's H-theorem so hard to prove, involving advanced mathematics and clever pieces of reasoning? Your view of the 2nd law as a near-tautology "systems are more likely to evolve into more likely states" should allow for a much easier proof of the theorem.
This is certainly a valid challenge, and I'll give it some thought, but on the surface I would say the difference is between understanding what the second law is basically saying, and placing that fairly intuitive statement into a more rigorous mathematical structure. In the meantime, consider these words by de Broglie:
"When Boltzmann and his continuators developed their statistical interpretation of Thermodynamics, one could have considered Thermodynamics to be a complicated branch of Dynamics. But, with my actual ideas, it's Dynamics that appear to be a simplified branch of Thermodynamics. I think that, of all the ideas that I've introduced in quantum theory in these past years, it's that idea that is, by far, the most important and the most profound."
 
  • #60
Rap said:
The problem now is to prove this, hopefully by conceptually breaking down the Turing machine into separate logical operations, and showing which logical operations are generating the entropy, and what is the least amount that they generate, in principle. Then show that when you add them all up for a program that contains the minimum number of entropy-generating logical steps, but which still implements the demon's purpose, that value of entropy is greater than the entropy removed from the two gases.
But what I would say is, once you have accomplished that breakdown, you still don't know which way your machine will function until you either invoke the second law, or claim to have experience in similar machines. Both are determined by the answer to the entropy issue, not the other way around. So our job is not to prove that the machine increases entropy, it is to figure out what the machine will do, given that it increases entropy. The problem with the "Demon" is that we borrow from the function of our brains to give the Demon magical properties, but we only do that because we think we are familiar with thought and decisions-- as soon as you have to provide a machine instead, broken down as you say, then you immediately do not know what that machine will do (in particular, what sense of time your machine will have). That comes from entropy; there is no other way to calculate what the machine will actually do once you build it, unless you invoke experience with similar machines (but then it's not a theoretical analysis any more).

If you do go that latter way, and rely on your experience with machines (say computers) instead of with brains, then the way the second law works in our experience is already built into the analysis. So the choice is, invoke experience and use it to show that the second law is behind that experience (which is what you want to do), or invoke the second law and figure out what will happen without the benefit of experience (which is what the Patent Office does when it refuses to consider perpetual motion machines). But there's no reason to think that using a machine to separate gas into different Ts is going to reduce the entropy, I think it's pretty clear that's a dead duck and the details of the machine (like if it has memory) are not terribly important or even advisable to analyze (since you cannot individually analyze every possible machine, just like the Patent Office cannot). But granted, you want to be convinced it's a dead duck, so for that you will need to invoke a lot of experience in how machines work, and if you see it for a few examples, you can develop the faith you seek in the second law. You can't add to my faith by analyzing a few more examples, you could only have an effect by finding a counterexample (certainly a noble effort, most likely doomed to fail but instructive in how it fails each time).
 
  • #61
I'm tending to think that the 2nd law honestly doesn't apply to every conceivable system. Why should it? It says that we expect systems to go from less likely states to more likely ones, but it was all worked out in the context of gases, which exhibit vast numbers of microstates and blindly wandering molecules. That means we can get away with using very naive probability theory. Why should that be generalisable to computers with very deliberate wiring and small numbers of states? Not micro-, not macro-, just states.

A computer like that doesn't have anything we can *relevantly* identify as temperature, and why should it? We can have TdS+pdV systems that don't need FdL, or TdS+FdL without pdV, so why not pdV+FdL without any mention of temperature or entropy? A computer at 0 Kelvin would probably work rather well. If the program is basically cyclic then there is no accumulation of entropy in its logical state. If you want to claim that it's oozing entropy, then you'd have to mean that it dissipates heat somewhere in the universe, but is there really any prospect of finding a *mechanical* reason for putting a numerical limit on how much heat it has to dissipate? Maybe it runs on neutrinos or something.

What does likely mean anyway? In thermodynamics it means microstates per macrostate, but we seem to have agreed that macrostates are in the eye of the beholder. There's been no discussion of conditional probability, or dependent probabilities. The whole treatment of probability has been restricted to what dumb gases need.

There have been some suggestions that Turing machines can just as easily run backwards as forwards, but I don't see this either. I just need a 2-bit counter and I've got a one-way system. I'm probably going to need to feed it some energy, but the cannonball gas argument already showed that this energy can be made negligible compared with the energy being switched around by the demon, so we concluded that it's not about the energy anyway. If it's not about the energy though, I can have my 2-bit counter, it can be arbitrarily efficient, and the entropy I need to generate elsewhere to keep it running can be made arbitrarily small.
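One caveat worth making concrete (a toy sketch): a plain wrapping 2-bit counter is a permutation of its four states, hence logically reversible, and by itself fixes no direction; only a many-to-one step -- a reset, or saturation -- is one-way, and that is exactly the step Landauer's argument charges for:

```python
increment = {0: 1, 1: 2, 2: 3, 3: 0}   # 2-bit counter, wraps around: a bijection
saturate  = {0: 1, 1: 2, 2: 3, 3: 3}   # sticks at 3: states 2 and 3 merge

def reversible(step):
    # a map on states can be run backwards iff it never merges two states
    return len(set(step.values())) == len(step)

print("wrapping counter reversible?  ", reversible(increment))   # True
print("saturating counter reversible?", reversible(saturate))    # False
```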

With that cannonball gas we can actually shift a hell of a lot of entropy per decision. To see that, let the balls be ever so slightly inelastic and let them eventually dissipate their KE by warming the cannonballs over a long, long time. The demon was supposedly struggling to run on a limited entropy budget, but with big cannonballs he can afford to drink a bit of electricity. This has been my bottom line worry all along. We can hand-wave about this stuff, but can we write equations for it in units that match up?

I really think it's on the cards that computers could walk all over the 2nd law, but what would be the damage? Would we have to bin all the books? Nope. Gases would still behave just the same, and computers would still be ten orders of magnitude away from finding out. I can't think of an area of physics that would totally implode if computers could break the 2nd law.
 
  • #62
lugita15 said:
Ken G, if you believe that the 2nd law of thermodynamics is so natural, and that it is an obvious consequence of not only Newton's laws but any laws remotely like them, then why is Boltzmann's H-theorem so hard to prove, involving advanced mathematics and clever pieces of reasoning? Your view of the 2nd law as a near-tautology "systems are more likely to evolve into more likely states" should allow for a much easier proof of the theorem.

Boltzmann's H-theorem holds for equilibrium only (or, if you prefer, describes the tendency for a system to approach equilibrium). The theorem itself, dH/dt ≤ 0, is not trivial but not 'hard to prove' either-- classical and quantum mechanical proofs are available from many sources.
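A toy relaxation run, an illustration of the tendency rather than a proof: random pairwise collisions that conserve momentum and energy drive an arbitrary 2D velocity distribution toward a Maxwellian, and a histogram estimate of H drifts downward:

```python
import numpy as np

rng = np.random.default_rng(0)
v = rng.uniform(-1, 1, size=(20000, 2))   # far-from-Maxwellian initial velocities

def H(v, bins=60):
    # Histogram estimate of Boltzmann's H = integral of f*ln(f), up to a
    # constant bin-volume factor; only the downward trend matters here.
    f, _ = np.histogramdd(v, bins=bins, range=[(-3, 3), (-3, 3)], density=True)
    f = f[f > 0]
    return float(np.sum(f * np.log(f)))

for rnd in range(6):
    print(f"collision round {rnd}: H ~ {H(v):+.1f}")
    pairs = rng.permutation(len(v)).reshape(-1, 2)      # random collision partners
    a, b = v[pairs[:, 0]], v[pairs[:, 1]]
    cm, rel = (a + b) / 2, (a - b) / 2                  # centre-of-mass, relative v
    theta = rng.uniform(0, 2 * np.pi, size=(len(cm), 1))
    speed = np.linalg.norm(rel, axis=1, keepdims=True)  # |rel| kept: energy conserved
    rel = speed * np.hstack([np.cos(theta), np.sin(theta)])
    v[pairs[:, 0]], v[pairs[:, 1]] = cm + rel, cm - rel
```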
 
  • #63
AdrianMay said:
I'm tending to think that the 2nd law honestly doesn't apply to every conceivable system.

The second law can be broken for short times, in terms of the fluctuation theorem (S is allowed to fluctuate, just like any other physical quantity), and there are challenges using systems far from equilibrium, but to date no meaningful violation of the second law of thermodynamics has ever been observed.

http://prl.aps.org/abstract/PRL/v89/i5/e050601
http://www.mdpi.org/entropy/papers/e6010001.pdf
 
  • #64
AdrianMay said:
I'm tending to think that the 2nd law honestly doesn't apply to every conceivable system. Why should it? It says that we expect systems to go from less likely states to more likely ones, but it was all worked out in the context of gases, which exhibit vast numbers of microstates and blindly wandering molecules. That means we can get away with using very naive probability theory. Why should that be generalisable to computers with very deliberate wiring and small numbers of states? Not micro-, not macro-, just states.
We should expect it to apply to computers because computers also represent a vast number of states, not a small number. Now, if you build a quantum computer, you have microstates interacting with macrostates, and you might be able to isolate the microstates and get into the area of quantum thermodynamics (which still has some questions associated with the different interpretations and so forth). Some hold that thermodynamics is just a kind of special case and might not hold for quantum systems, others (like that de Broglie quote) take the opposite view that thermodynamical thinking is quite fundamental, and even things like wave functions and spacetime are merely instances of deeper thermodynamic (entropy controlled) engines.
 
  • #65
lugita15 said:
Ken G, if you believe that the 2nd law of thermodynamics is so natural, and that it is an obvious consequence of not only Newton's laws but any laws remotely like them, then why is Boltzmann's H-theorem so hard to prove, involving advanced mathematics and clever pieces of reasoning? Your view of the 2nd law as a near-tautology "systems are more likely to evolve into more likely states" should allow for a much easier proof of the theorem.

I would say the second law is not a near-tautology. It's a simple concept: "the more likely a situation, the more likely it is to occur" gets right to the core of the second law, but stating it precisely can get rather complicated.

You have to have the concept of a microstate and a macrostate to begin with, and which macrostate is associated with each microstate. This can be less than obvious, and can be different for different observers. You have to have a mechanism by which each microstate changes, in time, into another microstate. Well, no, not exactly: you have to have a mechanism by which ALMOST EVERY microstate changes into another microstate. That requires a very large number of microstates. Then you have to know that this mechanism allows, by a series of steps, almost every microstate to evolve into almost every other microstate. You have to know or assume that, as a result of this process, almost every microstate is just as likely to occur as any other microstate.

You have to show that almost every microstate yields the same macrostate (the equilibrium macrostate). Only then can you say that almost every microstate which does not yield the equilibrium macrostate will evolve in time in a way that approaches that equilibrium macrostate. This is a statement of the second law. The entropy is defined as proportional to (let's say equal to) the logarithm of the number of microstates that yield a given macrostate. That means that the entropy of a non-equilibrium macrostate will be lower than that of the equilibrium macrostate, and its entropy will tend to increase to that of the equilibrium macrostate. This is another way of stating the second law. It also means that the entropy of the equilibrium macrostate is almost equal to the logarithm of the total number of microstates. Even this description is not complete. It's these details that cause the H-theorem to be so complicated.
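The Ehrenfest urn model is about the smallest system with all of those ingredients: the microstate is which labelled balls sit in urn A, the dynamics moves one randomly chosen ball, and the macrostate is the count in A. A quick sketch (toy sizes) showing the drift to equilibrium and the climb of S = ln W:

```python
import random
from math import comb, log

random.seed(1)
N = 1000
in_A = [True] * N                 # microstate: every ball starts in urn A
n = N                             # macrostate: number of balls in A

for step in range(5001):
    if step % 1000 == 0:
        print(f"step {step:4d}:  n_A = {n:4d}   S = ln W = {log(comb(N, n)):6.1f}")
    ball = random.randrange(N)    # the dynamics: move one random ball across
    n += -1 if in_A[ball] else 1
    in_A[ball] = not in_A[ball]
```

Started far from equilibrium, n_A drifts to N/2 and S climbs toward ln C(N, N/2); fluctuations back down are possible but exponentially rare, which is the second law in miniature.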


Ken G said:
But there's no reason to think that using a machine to separate gas into different Ts is going to reduce the entropy, I think it's pretty clear that's a dead duck and the details of the machine (like if it has memory) are not terribly important or even advisable to analyze (since you cannot individually analyze every possible machine, just like the Patent Office cannot). But granted, you want to be convinced it's a dead duck, so for that you will need to invoke a lot of experience in how machines work, and if you see it for a few examples, you can develop the faith you seek in the second law. You can't add to my faith by analyzing a few more examples, you could only have an effect by finding a counterexample (certainly a noble effort, most likely doomed to fail but instructive in how it fails each time).

I think we agree on how things work, we disagree on what is interesting or important. I am interested in the idea that some fundamental statements can be made about the thermodynamics of computing. You may have to treat every real case as a separate example, but I think it is very interesting if some statements can be made about the thermodynamics of individual logical operations, like Landauer's statement that only irreversible logical operations will unavoidably generate thermodynamic entropy which must be greater than some minimum value. The patent office rejects perpetual motion machines because they are not in the business of finding where somebody screwed up when they are sure that they have screwed up. I think there may be situations where finding out where they screwed up can be interesting and informative, and yield a new or more complete understanding of the second law.

AdrianMay said:
I'm tending to think that the 2nd law honestly doesn't apply to every conceivable system. Why should it? It says that we expect systems to go from less likely states to more likely ones, but it was all worked out in the context of gases, which exhibit vast numbers of microstates and blindly wandering molecules. That means we can get away with using very naive probability theory. Why should that be generalisable to computers with very deliberate wiring and small numbers of states? Not micro-, not macro-, just states.

The fundamental (classical) statistical mechanics question is "how do you describe the evolution of a physical system when you don't have complete knowledge of its state?". You don't have complete knowledge of the initial state or any intermediate state. The second law does not apply to situations in which you have complete knowledge of the initial state.

If you don't have complete knowledge of initial conditions, one approach is to assign probabilities to every conceivable initial condition, and then, using physical principles, calculate the probabilities of what you will finally measure. The second law says that if you have a situation where a particular final measurement is almost certain, then that's almost certainly what you will measure. Or maybe, more weakly, the second law says that if you have a situation where a particular final measurement is most likely, then that's most likely what you will measure. Otherwise, the second law is not applicable.
 
  • #66
I agree with you that analyzing why the second law continues to apply no matter how hard you try to "trick it" into not applying is informative. It's kind of like the Lorentz symmetry in relativity, or the uncertainty principle in quantum mechanics-- laws that people tried very hard to "get around" with all kinds of examples, but eventually gave up the effort and instead just accepted the law. Indeed that is how physics works, to a large extent-- we never know our laws are correct, we just eventually gain faith in them after trying hard enough to refute them (and usually we eventually do refute them in ways that are very informative indeed). I'm just saying that the easiest way to analyze situations involving very high velocities, or very small systems, or Maxwell's Demons, is to accept that Lorentz symmetry, and the uncertainty principle, and the second law of thermodynamics, are going to tell you what will happen there. So the question then becomes, why are they going to work, not why are they not going to work. I feel we benefit more from understanding and accepting a law that has so much empirical support in all these contexts than we do by constantly doubting it, even though I admit maintaining constant doubt is a key part of scientific progress. So what I really mean is, when we analyze these Demons, we should be looking for the places that the entropy goes, given that we know the entropy has to go somewhere or the Demon just won't work (and indeed the time-reversed version of the Demon will work instead). In this way, we have a guide to keep us from overlooking some entropy destination-- rather than looking for why the second law isn't going to work, which some parts of this thread started to get the flavor of (I'm not saying that was your approach).
 
  • #67
Ken G said:
In this way, we have a guide to keep us from overlooking some entropy destination-- rather than looking for why the second law isn't going to work, which some parts of this thread started to get the flavor of (I'm not saying that was your approach).

Well, I had not looked at Maxwell's demon too much before this thread, and I thought that the answer was that the demon would not work due to the second law. This has changed.

I have been trying to understand the concept of indistinguishable particles in a classical gas, using what I called a "billiard ball" gas, supposing for clarity that each ball was imprinted with a unique serial number that had no effect on collisions, and asking why the thermodynamics of this gas is not changed by erasing the serial numbers: you still need to make the indistinguishable-particle assumption. As long as no thermodynamic process is a function of those serial numbers, it's not a quantum effect; QM just makes it a matter of principle, rather than a happenstance. So when I saw a "gas of cannonballs" I was definitely interested.

Regarding the second law, it's still not fixed in my mind. Not so much its validity, but its range of application: the trouble in defining a "macrostate", and the extent to which the macrostate is somewhat arbitrary, depending on the capabilities of the person doing the measurement rather than on the system itself, pointing out again the arbitrariness of the entropy. Entropy is missing information, and if you manage to gain more information without disturbing the system (classical, again), then you reduce the entropy of the system without having altered it in any way.
 
  • #68
Rap said:
Well, I had not looked at Maxwell's demon too much before this thread, and I thought that the answer was that the demon would not work due to the second law. This has changed.
Yes, the Demon does work. Whether or not it is a practical way to get free energy is not so clear, maybe it's just technologically unfeasible.
I have been trying to understand the concept of indistinguishable particles in a classical gas, using what I called a "billiard ball" gas, supposing for clarity that each ball was imprinted with a unique serial number that had no effect on collisions, and asking why the thermodynamics of this gas is not changed by erasing the serial numbers: you still need to make the indistinguishable-particle assumption.
I'm not clear on what you mean by still needing to make the indistinguishable assumption. It seems to me this will simply depend on your goals, and you can make it in some situations and not make it in others. You can treat distinguishable (classical) particles as indistinguishable if it doesn't matter to the outcomes you have in mind, and you can treat indistinguishable particles as distinguishable too; it depends on what you care about, or more correctly, what you can get away with not caring about. That's generally true of the entropy concept-- the order is, we choose what we care about and what we know, that controls the entropy, and the entropy gives us a second law.
As long as no thermodynamic process is a function of those serial numbers, it's not a quantum effect; QM just makes it a matter of principle, rather than a happenstance.
Quantum mechanics just brings in another type of situation we might need to care about, because it brings in entanglement. In some situations, particles become entangled in ways where indistinguishability is of fundamental importance, and we have to care about it or we miss the necessary entanglements. In other words, sometimes nature tells us what we need to care about, rather than us telling her what we want to care about.
Regarding the second law, it's still not fixed in my mind. Not so much its validity, but its range of application: the trouble in defining a "macrostate", and the extent to which the macrostate is somewhat arbitrary, depending on the capabilities of the person doing the measurement rather than on the system itself, pointing out again the arbitrariness of the entropy. Entropy is missing information, and if you manage to gain more information without disturbing the system (classical, again), then you reduce the entropy of the system without having altered it in any way.
I would say that concept is not supposed to be "fixed" in your mind, it is supposed to be highly fluid in your mind! Entropy is malleable, it is whatever we need it to be to describe the relative probability of various categories of outcomes. In my view, entropy is nothing but a classification scheme for states, and some classification schemes are more useful than others. So the trick to using entropy, and the second law, is simply finding a good classification scheme, and that's not always easy.
 
  • #69
I'm not sure I understand these brain arguments against the problem. It seems that the argument is that the mental processes required by the demon increase the entropy to preserve the second law of thermodynamics (unless I am misunderstanding).

I don't know if this has been brought up or not, but what if the demon were an extraordinarily stupid and simple being who had no idea what he was doing, but he still manipulated the system with such luck as to match exactly what the thinking demon would have done? I suppose this is similar to a sequence of atoms spontaneously forming into crystals, but that doesn't seem impossible either (though certainly unlikely).

I don't see how this could be considered anything other than statistical, though I am certainly no expert.
 
  • #70
Acala said:
I'm not sure I understand these brain arguments against the problem. It seems that the argument is that the mental processes required by the demon increase the entropy to preserve the second law of thermodynamics (unless I am misunderstanding).

I don't know if this has been brought up or not, but what if the demon were an extraordinarily stupid and simple being who had no idea what he was doing, but he still manipulated the system with such luck as to match exactly what the thinking demon would have done? I suppose this is similar to a sequence of atoms spontaneously forming into crystals, but that doesn't seem impossible either (though certainly unlikely).

I don't see how this could be considered anything other than statistical, though I am certainly no expert.

You get around this problem by assuming the demon is a computer. That way you have a definite physical system to deal with.
 
  • #71
Acala said:
I don't see how this could be considered anything other than statistical, though I am certainly no expert.
I think it's fair to say that the second law is purely statistical. It is not an unbreakable law, it's a way to make predictions. The more complex the system, the more reliable the prediction, but the systems that thermodynamics is applied to are so spectacularly complex that the predictions are essentially completely reliable. Good thing too-- our lives depend on that constantly.
 
  • #72
Ah, that is a clever solution, Rap.

And thanks Ken G, that always troubled me about the concept. I can see how it's virtually completely true even if it's possible to be broken in exceedingly rare circumstances.
 
  • #73
Acala said:
And thanks Ken G, that always troubled me about the concept. I can see how it's virtually completely true even if it's possible to be broken in exceedingly rare circumstances.
Indeed, some think that the origin of our universe was just one of those exceedingly rare circumstances where the second law had a hiccup!
 
  • #74
In case anyone is interested, I have posted an attachment of a paper on "The Gibbs Paradox" by E.T. Jaynes. It seems relevant, especially because Rap posted about particle indistinguishability. Jaynes seems to be taking a viewpoint similar to Ken G's; namely, that entropy is largely related to the observer's knowledge and their intent in setting up an experimental apparatus.

A thermodynamic state is defined by specifying a small number of macroscopic quantities such as temperature, volume, magnetization, stress, etc.-- denote them by {X1, X2, ..., Xn}-- which are observed and/or controlled by the experimenter, where n is seldom greater than 4.

From this he goes on to give a number of illustrative physical examples which relate to the entropy of mixing of indistinguishable and vaguely distinguishable particles and uses this to support his thesis that entropy is not a property of the microstate, and that there is no paradox because the propositions of thermodynamics only relate to pre-defined sets of macrostates and are not meant to be statements about the physical microstates. He says about the second law:

Therefore, the correct statement of the second law is not that an entropy decrease is impossible in principle, or even improbable; rather that it cannot be achieved reproducibly by manipulating the macrovariables {X1, X2, ..., Xn} that we have chosen to define our macrostate. Any attempt to write a second law stronger than this will put one at the mercy of a trickster, who can produce a violation of it.
 


  • #75
JDStupi said:
In case anyone is interested, I have posted an attachment of a paper on "The Gibbs Paradox" by E.T. Jaynes. It seems relevant, especially because Rap posted about particle indistinguishability. Jaynes seems to be taking a viewpoint similar to Ken G's; namely, that entropy is largely related to the observer's knowledge and their intent in setting up an experimental apparatus.
I don't think this is Ken G's viewpoint. Ken G, am I wrong?
 
  • #76
JDStupi said:
In case anyone is interested, I have posted an attachment of a paper on "The Gibbs Paradox" by E.T. Jaynes. It seems relevant, especially because Rap posted about particle indistinguishability. Jaynes seems to be taking a viewpoint similar to Ken G's; namely, that entropy is largely related to the observer's knowledge and their intent in setting up an experimental apparatus.

From this he goes on to give a number of illustrative physical examples which relate to the entropy of mixing of indistinguishable and vaguely distinguishable particles and uses this to support his thesis that entropy is not a property of the microstate, and that there is no paradox because the propositions of thermodynamics only relate to pre-defined sets of macrostates and are not meant to be statements about the physical microstates. He says about the second law:

As far as I am concerned, Jaynes is up there with Boltzmann and Gibbs when it comes to understanding entropy. He is the number one contributor, in my mind, to the understanding of the relationship between thermodynamic entropy and information entropy.

His explanation of the Gibbs paradox gives a deep insight into entropy. If you have two gases separated by a partition, and they have identical particles, then removing the partition changes nothing: the resulting gas is in equilibrium and the entropy is the sum of the entropies of the two gases when the partition was in. If they are different particles, no matter how small the difference, then upon removing the partition you have disequilibrium, and upon reaching equilibrium you have a net increase in entropy: the total entropy is greater than the sum of the two original entropies, and the increase is always the same. The crucial point is that if they are different but you have no experimental ability to tell that they are different, then removing the partition changes nothing: there is no detectable disequilibrium, and no entropy change. Entropy is not only a function of the system, it's a function of what you happen to know, or choose to know, about the system.

This is applicable to the concept of "correct Boltzmann counting", where, when you calculate the entropy of a gas assuming the particles have separate identities, you wind up with a non-extensive entropy (entropies do not add) and you have to subtract log(N!) to get the right answer. Saying that the particles are distinguishable is equivalent to taking your original gas and, instead of having two boxes as in the Gibbs paradox, having N separate boxes, each containing one particle which is different in some way (i.e., distinguishable) from every other particle. Again, as in the Gibbs paradox, the entropies will not add. But you have no experimental ability to tell you that the particles are different. Therefore, if you calculate entropy by assuming they are distinguishable, you have to subtract the log(N!) error you made by that assumption. And now entropies add up (i.e., the entropy is extensive).
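A numeric companion (a sketch with a toy particle number; Stirling does the work): the naive distinguishable-particle count produces a spurious mixing gain of 2N ln 2 even for identical gases, and the subtracted log(N!) terms cancel it to within O(ln N):

```python
from math import lgamma, log

def ln_fact(n):                  # ln(n!) via the log-gamma function
    return lgamma(n + 1)

N = 10**6                        # particles per box (toy size, not physical)

naive_gain = 2 * N * log(2)      # apparent mixing entropy (units of k), two boxes
# correct Boltzmann counting: subtract ln(N!) per box before, ln((2N)!) after
correction = 2 * ln_fact(N) - ln_fact(2 * N)
print(f"naive mixing 'gain' : {naive_gain:.4e} k")
print(f"counting correction : {correction:.4e} k")
print(f"net, identical gas  : {naive_gain + correction:.4e} k  (O(ln N): negligible)")
```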

If you want a really good book on this subject, which goes through it carefully and clearly, giving many examples, check out "A Farewell to Entropy: Statistical Thermodynamics Based on Information" by Arieh Ben-Naim.
 
  • #77
lugita15 said:
I don't think this is Ken G's viewpoint. Ken G, am I wrong?
JDStupi is right-- that is very much my viewpoint. Entropy emerges once we have defined the sets of categories of states that fit what we know and what our goals are. Once we have that, we get entropy from ln(N), and then we have the second law, which simply states that less populated categories will give way to more populated ones, over time, and this progress will be more reliable if the systems involved are very large. We also get the condition for processes that occur "spontaneously", which is everything that happens in thermodynamics, and that is Nf > Ni. It might not be quite tautological, but it is certainly pretty simple and logical. The difficulty is in keeping track of what N is for things like Demons, and that's a worthwhile exercise along the lines of what Rap is trying to do, but if we want to know how the Demon works, the issue isn't really whether the second law will hold, but why it must hold, or else the Demon will work in reverse.
 
  • #78
According to a March 8, 2012 article in "Nature" (see citation below), the "idea of a connection between information and thermodynamics can be traced back to Maxwell's 'demon'", and the Landauer principle which helped to resolve the paradox has finally been experimentally verified.

According to the Nature article, "The paradox of the apparent violation of the second law can be resolved by noting that during a full thermodynamic cycle, the memory of the demon, which is used to record the coordinates of each molecule, has to be reset to its initial state [11, 12]. Indeed, according to Landauer's principle, any logically irreversible transformation of classical information is necessarily accompanied by the dissipation of at least kT ln(2) of heat per lost bit (about 3 × 10^-21 J at room temperature (300 K)), where k is the Boltzmann constant and T is the temperature."
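The quoted figure is easy to check:

```python
from math import log

k = 1.380649e-23                              # Boltzmann constant, J/K
T = 300.0                                     # room temperature, K
print(f"k*T*ln(2) = {k * T * log(2):.2e} J")  # ~2.87e-21 J, i.e. about 3e-21 J
```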

See: Antoine Bérut, et al., "Experimental verification of Landauer’s principle linking information and thermodynamics" Nature 483, 187–189 (08 March 2012)
http://phys.org/news/2012-03-landauer-dissipated-memory-erased.html
 
  • #79
From JDStupi in post #74 on the possibility of a Maxwell's Demon, originally from "The Gibbs Paradox" by E.T. Jaynes:
Therefore, the correct statement of the second law is not that an entropy decrease is impossible in principle, or even improbable; rather that it cannot be achieved reproducibly by manipulating the macrovariables {X1, X2, ..., Xn} that we have chosen to define our macrostate. Any attempt to write a second law stronger than this will put one at the mercy of a trickster, who can produce a violation of it.
Simon Van der Meer of CERN, in his 1984 Nobel Prize in Physics lecture, explains how he used RF electronics to violate Liouville's Theorem (see http://en.wikipedia.org/wiki/Liouville's_theorem_(Hamiltonian)) and reduce the phase-space volume of a relativistic antiproton beam. See Simon Van der Meer's lecture (http://ki1.nobel.ki.se/nobel_prizes/physics/laureates/1984/meer-lecture.pdf) following Equation (1) on page 294.
 
