lugita15 said:
Ken G, if you believe that the 2nd law of thermodynamics is so natural, and that it is an obvious consequence of not only Newton's laws but any laws remotely like them, then why is Boltzmann's H-theorem so hard to prove, involving advanced mathematics and clever pieces of reasoning? Your view of the 2nd law as a near-tautology "systems are more likely to evolve into more likely states" should allow for a much easier proof of the theorem.
I would say the second law is not a near-tautology. It rests on a simple concept: "the more likely a situation, the more likely it is to occur" gets right to the core of the second law. But stating it precisely can get rather complicated.
You have to have the concept of a microstate and a macrostate to begin with, and you need to know which macrostate is associated with each microstate. This can be less than obvious, and can be different for different observers.

You have to have a mechanism by which each microstate changes, in time, into another microstate. Well, no, not exactly: you need a mechanism by which ALMOST EVERY microstate changes into another microstate. That requires a very large number of microstates. Then you have to know that this mechanism allows, by a series of steps, almost every microstate to evolve into almost every other microstate. You have to know or assume that, as a result of this process, almost every microstate is just as likely to occur as any other microstate. You have to show that almost every microstate yields the same macrostate (the equilibrium macrostate). Only then can you say that almost every microstate which does not yield the equilibrium macrostate will evolve in time toward the equilibrium macrostate. This is a statement of the second law.

The entropy is defined as proportional to (let's say equal to) the logarithm of the number of microstates that yield a given macrostate. That means the entropy of a non-equilibrium macrostate will be lower than that of the equilibrium macrostate, and its entropy will tend to increase toward that of the equilibrium macrostate. This is another way of stating the second law. It also means that the entropy of the equilibrium macrostate is almost equal to the logarithm of the total number of microstates. Even this description is not complete. It's these details that cause the H-theorem to be so complicated.
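The counting argument above can be made concrete with a toy model (my own illustration, not from the H-theorem itself): N particles, each in the left or right half of a box. A microstate specifies each particle's side; the macrostate is just the count k in the left half, so the number of microstates yielding macrostate k is C(N, k), and the entropy is its logarithm. The sketch below shows that almost every microstate sits near the equilibrium macrostate, and that the equilibrium entropy is almost the log of the total number of microstates:

```python
from math import comb, log

N = 100  # particles, each in the left or right half of a box

# Number of microstates yielding macrostate k (k particles on the left):
total_microstates = 2 ** N
omega = {k: comb(N, k) for k in range(N + 1)}

# Entropy of a macrostate, in units of Boltzmann's constant:
entropy = {k: log(omega[k]) for k in range(N + 1)}

# The equilibrium macrostate is the one realized by the most microstates.
k_eq = max(omega, key=omega.get)  # k_eq == N // 2

# Fraction of ALL microstates whose macrostate lies within 10 of equilibrium:
near_eq = sum(omega[k] for k in range(k_eq - 10, k_eq + 11)) / total_microstates

print(k_eq)       # 50: the even split
print(near_eq)    # about 0.96: almost every microstate looks like equilibrium

# Equilibrium entropy is almost log(total number of microstates):
print(entropy[k_eq] / log(total_microstates))
```

Already at N = 100 roughly 96% of all microstates lie within 10 particles of the even split; for a mole of gas the concentration is overwhelmingly sharper, which is why "almost every" does so much work in the statements above.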
Ken G said:
But there's no reason to think that using a machine to separate gas into different Ts is going to reduce the entropy. I think it's pretty clear that's a dead duck, and the details of the machine (like whether it has memory) are not terribly important, or even advisable to analyze (since you cannot individually analyze every possible machine, just like the Patent Office cannot). But granted, you want to be convinced it's a dead duck, so for that you will need to invoke a lot of experience in how machines work, and if you see it for a few examples, you can develop the faith you seek in the second law. You can't add to my faith by analyzing a few more examples; you could only have an effect by finding a counterexample (certainly a noble effort, most likely doomed to fail, but instructive in how it fails each time).
I think we agree on how things work; we disagree on what is interesting or important. I am interested in the idea that some fundamental statements can be made about the thermodynamics of computing. You may have to treat every real case as a separate example, but I think it is very interesting if some statements can be made about the thermodynamics of individual logical operations, like Landauer's statement that only irreversible logical operations will unavoidably generate thermodynamic entropy, which must be greater than some minimum value. The Patent Office rejects perpetual motion machines because it is not in the business of finding where somebody screwed up when it is sure that they have screwed up. I think there may be situations where finding out where they screwed up can be interesting and informative, and yield a new or more complete understanding of the second law.
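For concreteness, the minimum value in Landauer's statement is the well-known bound k_B T ln 2 of heat per erased bit (erasure being the canonical irreversible logical operation). A minimal sketch of the arithmetic, using the exact SI value of the Boltzmann constant:

```python
from math import log

K_B = 1.380649e-23  # Boltzmann constant in J/K (exact, 2019 SI definition)

def landauer_limit(temperature_kelvin: float) -> float:
    """Minimum heat (joules) dissipated by erasing one bit at temperature T."""
    return K_B * temperature_kelvin * log(2)

# At room temperature (300 K), erasing one bit must dissipate at least:
E = landauer_limit(300.0)
print(E)  # about 2.87e-21 J
```

The number is tiny compared to what real logic gates dissipate, which is why the bound is of conceptual rather than engineering interest: it attaches an unavoidable thermodynamic cost to a purely logical operation.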
AdrianMay said:
I'm tending to think that the 2nd law honestly doesn't apply to every conceivable system. Why should it? It says that we expect systems to go from less likely states to more likely ones, but it was all worked out in the context of gases, which exhibit vast numbers of microstates and blindly wandering molecules. That means we can get away with using very naive probability theory. Why should that be generalisable to computers with very deliberate wiring and small numbers of states? Not micro-, not macro-, just states.
The fundamental (classical) statistical mechanics question is "how do you describe the evolution of a physical system when you don't have complete knowledge of its state?". You don't have complete knowledge of the initial state or any intermediate state. The second law does not apply to situations in which you have complete knowledge of the initial state.
If you don't have complete knowledge of the initial conditions, one approach is to assign probabilities to every conceivable initial condition and then, using physical principles, calculate the probabilities of what you will finally measure. The second law says that if you have a situation where a particular final measurement is almost certain, then that's almost certainly what you will measure. Or maybe, more weakly, the second law says that if you have a situation where a particular final measurement is most likely, then that's most likely what you will measure. Otherwise, the second law is not applicable.
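This ensemble view can be sketched with a toy model of my own (essentially the Ehrenfest urn): we know only the initial macrostate (say, 10 of 50 particles in the left half), not the microstate, so we run many trials of simple hopping dynamics and look at the distribution of final measurements. The most likely final measurement turns out to be the equilibrium value near N/2, which is all the second law claims here:

```python
import random
from collections import Counter

random.seed(0)  # reproducible sketch
N, STEPS, TRIALS = 50, 2000, 500

def evolve(k: int) -> int:
    """Evolve the macrostate k (particles in the left half) for STEPS steps.

    Dynamics: at each step a random particle hops to the other half;
    a left-half particle is the one picked with probability k/N.
    """
    for _ in range(STEPS):
        if random.random() < k / N:
            k -= 1  # a left particle hops right
        else:
            k += 1  # a right particle hops left
    return k

# Incomplete knowledge: we know only the initial macrostate (k = 10),
# so we sample an ensemble of histories instead of one trajectory.
finals = Counter(evolve(10) for _ in range(TRIALS))
most_likely, _ = finals.most_common(1)[0]
print(most_likely)  # clusters near N/2 = 25, far from the initial k = 10
```

Starting far from equilibrium (k = 10), essentially every member of the ensemble drifts toward the even split, so the predicted final measurement concentrates near 25 even though no individual trajectory was known in advance.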