Hi, I'd appreciate it if somebody would clarify a couple of things for me.

First, it looks like the concept of determinism is not the same in physics and in the theory of computation. Physical determinism requires a unique evolution both forward AND backward in time; that is, information doesn't get lost. You can picture it as a film roll: a linear transition between states. In computation, however, determinism seems to imply a unique evolution only forward: there is a unique path from the initial state to the end state, but not backwards. Information can get lost in a deterministic finite state machine as you transition, and you can't get back to the initial state. Does that sound right? Why the different definitions in the two domains?

Second, it sounds like classical physical systems are deterministic (by the definition I gave above), although there are some exceptions in Newtonian classical mechanics (e.g. point particles accelerating to infinity under the force of gravity). Now if the classical world is deterministic, how does the concept of entropy fit in, conceptually? I understand that when entropy increases, it means there are more available states for the system to be in than a 'tick of time' before. But if there is a unique history forward, how much sense does it make to talk about available states? There's only one path forward. It's like saying that with every successive slide of the film roll, there are more possible slides available to be the next slide. Well, not really: given the transition rules (the physical laws) and the initial state, there's one and only one possible next slide. It sounds like these available states are logical possibilities in some abstract space. What am I missing here?

Thanks. Pavel.
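P.S. To make the computation-side point concrete, here is a toy sketch I put together (the state names and transition table are my own invention, just for illustration). Two states feed into the same successor, so running the machine forward is fully determined, while running it backward is not:

```python
# Toy deterministic finite state machine whose transitions are not invertible.
# Both A and B map to C on input '0', so knowing the current state does not
# tell you which state you came from: information about the past is lost.
delta = {
    ("A", "0"): "C",
    ("B", "0"): "C",  # A and B merge into C -- the history is erased here
    ("C", "0"): "C",
}

def run(state, inputs):
    """Forward evolution is unique: one state plus one input gives one next state."""
    for symbol in inputs:
        state = delta[(state, symbol)]
    return state

# Forward: deterministic -- distinct initial states end up indistinguishable.
assert run("A", "00") == run("B", "00") == "C"

# Backward: the preimage of C under input '0' is not a single state.
preimage = [s for (s, sym), t in delta.items() if sym == "0" and t == "C"]
print(preimage)  # several predecessors, so there is no unique reverse evolution
```

This is exactly the asymmetry I mean: the forward map is a function, but its inverse is not.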
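P.S. On the entropy side, here is the kind of state counting I have in mind (a toy coin-flip example of my own, not from any textbook). The "macrostate" is the number of heads; its multiplicity is how many distinct sequences (microstates) realize it. These counts are combinatorial possibilities in an abstract space, not alternative futures of a single deterministic trajectory, which is what puzzles me:

```python
# Count microstates per macrostate for N fair coin flips.
# Macrostate: total number of heads. Multiplicity: number of distinct
# head/tail sequences (microstates) that produce that total.
from math import comb

N = 4
for heads in range(N + 1):
    # Multiplicity peaks at the middle macrostate (2 heads out of 4).
    print(heads, comb(N, heads))  # prints: 0 1, 1 4, 2 6, 3 4, 4 1
```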