Why the randomness

1. Oct 18, 2012

weisenhm

Disclaimer: I am a PhD-level neurobiologist and, like most of my peers, a crappy physicist. I went back to round out my scientific understanding and am giving physics a crack on my own as a curious adult, and in the process I have gotten hooked on the beauty of the subject. I am working at the level of a first calculus-based course, and I am especially blown away by the thermo and the quantum stuff. I feel like I can grasp each small piece of these topics, but I must confess that the deep understanding, or clarity, is something I am still working toward.

As an example, I can study the Carnot cycle and understand entropy as it is introduced in the thermodynamic sense, but it is not obvious to me how this relates to the statistical derivation, aside from seeing that they give the same result. The mathematical equivalence doesn't help me figure out why they are related.

My question deals with the statistical derivation and probabilities. In the maximization of microstates, the notion of probability once again comes into play as a driver of physical processes. Is this just because the most likely events are the ones that happen, and there is nothing mysterious about it? Otherwise, how does the universe "know" which state is the most likely? And is this probability related to the probabilities in quantum mechanics? I know these are really big, and perhaps silly, questions (see disclaimer), but I'd appreciate any feedback.

2. Oct 18, 2012

jambaugh

I found it helpful when I realized that entropy isn't about "disorder," since that is a subjective judgment. It is instead about the (paradoxically, less subjective) idea of our knowledge about the system.

The key is in what we mean by a given physical system, such as an amount of gas. For each constraint on a quantity in the system's specification there is a corresponding physical constraint. If we consider a specific volume of gas, we must have rigid walls holding the gas in. If we specify an amount of gas at a specific pressure, we must have movable walls designed to press on the gas with constant force per unit area. If we refer to an unspecified amount of gas with constant pressure and volume, we are allowing the number of gas particles to change, say through a pressure valve, to maintain that pressure within a volume of rigid walls.

If we're careful physicists, all the terms we use have some operational meaning or some specifically mathematical meaning and there's no subjectivity.

Now in statistical mechanics we consider how specific we can get in describing a system. Start with, say, a fixed volume of gas consisting of a fixed number of particles, which we have allowed to exchange energy randomly with a constant-temperature heat bath for long enough to be in approximate equilibrium as far as random heat exchange is concerned. In statistical mechanics we may ask the question: "How much more specific could we be in defining/constraining this system?" We try to reduce the lack of specificity to a count of the "most specific distinct descriptions," where "distinct" means physically distinguishable (so we ignore swaps of indistinguishable particles). Classically these numbers are uncountable, but we can define a phase-space volume unit (an action unit) and get a relative number.
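As a toy illustration of that counting (my own made-up example, not part of the physics above): put N particles in a box and describe a "macrostate" only by how many sit in the left half. The number of most-specific descriptions consistent with that coarse description is the binomial coefficient, and it is overwhelmingly concentrated near the even split.

```python
# Toy count of "most specific distinct descriptions": N indistinguishable
# particles, each either in the left or right half of a box. The macrostate
# "n particles on the left" is compatible with C(N, n) microstates.
from math import comb

N = 100
multiplicity = {n: comb(N, n) for n in range(N + 1)}
total = 2 ** N  # all microstates, weighted equally

# The even split is the single most probable macrostate.
peak = max(multiplicity, key=multiplicity.get)
print(peak)  # 50

# Fraction of ALL microstates lying within 10 particles of the even split.
near = sum(multiplicity[n] for n in range(40, 61)) / total
print(round(near, 3))
```

Even at a mere 100 particles, well over 95% of the microstates sit within 10 of the even split; at 10^23 particles the concentration is incomprehensibly sharper, which is one answer to "how does the universe know which state is most likely."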

We see this number behaving multiplicatively when we combine two systems, so we look at its logarithm (which is then additive) and call it entropy. It is our measure of how non-specific we are being in constraining our system. The higher the entropy, the vaguer we are in restricting the range of states that count as examples of "the system." In a laboratory we would reject experiments where the system violated our constraints, e.g. where a wall failed to stop the gas from leaking out.
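The multiplicative-to-additive step is just a property of the logarithm; a two-line sketch (microstate counts below are arbitrary numbers for illustration):

```python
# Why the logarithm: multiplicities multiply when two independent systems
# are combined (each pairing of microstates is a distinct joint microstate),
# so log-multiplicity -- entropy, in units of Boltzmann's constant -- adds.
from math import log, isclose

omega_1 = 1_000                      # microstates consistent with system 1's constraints
omega_2 = 250                        # microstates for system 2
omega_combined = omega_1 * omega_2   # joint microstate count

S1, S2 = log(omega_1), log(omega_2)
S_combined = log(omega_combined)

assert isclose(S_combined, S1 + S2)  # entropy is additive
```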

I find it clarifying to understand that entropy is not a function of the system's state but rather of the constraints we impose on the system in defining it. Indeed, "the system" can be a single particle, but a particle constrained to be, say, in a room of some volume. And even though classically that particle is always in exactly one state, our constraints allow that state to be any one of many, so the entropy is likewise non-zero.

The system really isn't just the particle but the particle plus its environment. So we can talk about the "thermodynamic force" driving the particle from a small room to a bigger room. That force is nothing more than the aggregate of the constraint forces imposed by the walls, which, when added up over equally probable cases, has a net statistical component pointing toward the bigger room.
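In numbers (volumes below are made-up for illustration): if every accessible position is equally probable, the particle is simply found in the larger region more often, and the per-particle entropy gain on opening the connecting door is the log of the volume ratio.

```python
# Hedged sketch of the "statistical force" toward the bigger room, under
# the assumption of a uniform distribution over accessible positions.
from math import log

V_small, V_big = 1.0, 9.0
p_big = V_big / (V_small + V_big)  # chance of finding the particle in the big room
print(p_big)  # 0.9

# Entropy gain (in units of k_B) when the particle's accessible volume
# grows from V_small to V_small + V_big: ln(V_final / V_initial).
dS = log((V_small + V_big) / V_small)
print(round(dS, 3))  # 2.303
```

No individual wall "pushes" the particle anywhere special; the asymmetry is purely in the count of equally probable cases.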

Now abstract these "rooms" to arbitrary regions of the system's state space (phase space) and you have the kernel of classical statistical thermodynamics. Quantum mechanics adds some features in that quantization makes it easier to justify our counting methods. We end up counting dimensions of the Hilbert space (since it projectively defines sharp modes as one-dimensional subspaces, which we identify with the vectors that span them) instead of arbitrary units of volume in phase space.

I have not mentioned probabilistic constraints where we impose "soft" conditions on the system so that we can define a probability distribution over the state space. Think of the above example as a "crisp" constraint where we've specified a uniform distribution over some subset ("room") of state space. And then generalize from there.

We can derive a formula for entropy in terms of the probability distribution. We can consider hypothetical distributions and then find the ones that maximize entropy (subject to some less direct constraints such as that the expectation value of the energy be some value). We then simply assume that we are just as ignorant of the system as we say we are (that the two entropy formulas must agree) and we can calculate probabilities from the principle of maximum entropy.
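A rough numerical sketch of that last step, on a toy three-level system with made-up energies (nothing here is specific to the gas examples above): among all distributions with a given mean energy, the Gibbs entropy -Σ p ln p is maximized by a Boltzmann-form distribution, recognizable by its equal successive level ratios.

```python
# Maximum entropy under a "soft" constraint: fix <E> = U on energy levels
# E = 0, 1, 2 and scan the one remaining free parameter. The entropy
# maximum should land on the Boltzmann form, i.e. p1/p0 == p2/p1.
from math import log

U = 0.7  # imposed expectation value of the energy (arbitrary choice)

def entropy(p):
    return -sum(x * log(x) for x in p if x > 0)

best_t, best_S = None, -1.0
# With normalization and <E> = U imposed, t = p2 parameterizes everything:
# p = (1 - U + t, U - 2t, t).
for k in range(1, 3500):
    t = k * 1e-4
    p = (1 - U + t, U - 2 * t, t)
    if min(p) < 0:
        continue
    S = entropy(p)
    if S > best_S:
        best_S, best_t = S, t

p0, p1, p2 = 1 - U + best_t, U - 2 * best_t, best_t
# Boltzmann form means a geometric sequence of probabilities: p1**2 == p0*p2.
print(round(p1 * p1 / (p0 * p2), 2))  # 1.0
```

So the "probabilities" of statistical mechanics drop out of an honesty requirement: we assume exactly as much ignorance as our constraints declare, and maximize entropy subject to them.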