
Meaning of equiprobability principle in statistical mechanics

  1. Sep 8, 2008 #1

    This question is probably a dumb one, but I admit that I am quite perturbed by this issue.
    Indeed, I don't understand why ensembles like the microcanonical or the canonical one
    are called "equilibrium ensembles".
    I agree that they correspond to stationary probability measures, but why they should refer only to
    equilibrium is a mystery to me.

    The reason for my misunderstanding is that microstates corresponding to out-of-equilibrium macrostates are allowed in these ensembles. For example, microstates corresponding to a non-uniform density are allowed in the microcanonical ensemble (for perfect gases, say), where all microstates on the hypersurface of constant energy (with a given uncertainty) are allowed. Now, I thought that, at equilibrium, the density was uniform in a homogeneous fluid...

    I am even more dubious when, searching for answers about the foundations of statistical mechanics, I read papers and courses where the Boltzmann H-theorem, or more generally the second law, is (it seems) correctly understood by assuming that all microstates on the surface of constant energy are "visited" with the same probability (the microcanonical ensemble appears again, in a general context this time), and that the equilibrium macrostate corresponds to an overwhelming number of microstates compared to the non-equilibrium ones.

    Thinking a lot about it leads me to the conclusion, perhaps the wrong one, that the canonical ensemble distributions refer to systems that can be observed for an infinite time, which leads to the equiprobability "principle" (somewhat explained by the historical idea of ergodicity) and to the time independence of the distributions, but not especially to the equilibrium (macro)states of these systems.

    What is your opinion about that? Thank you for any comments that could help me!
  3. Sep 8, 2008 #2


    Science Advisor

    According to the canonical ensemble, an ideal gas, even at equilibrium, has a minute probability of having all its molecules gathered in one corner of the box. This probability is so low that we have never seen it happen.
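That probability is easy to estimate: treating the N molecules as independent, the chance that all of them sit in, say, one chosen half of the box is (1/2)^N. A minimal sketch (the function name and the half-box simplification are mine, not from the post):

```python
# Probability that an ideal gas of N non-interacting molecules is found
# entirely in one chosen half of its box: each molecule sits in that half
# independently with probability 1/2, so P = (1/2)**N.
def prob_all_in_one_half(n_molecules: int) -> float:
    return 0.5 ** n_molecules

print(prob_all_in_one_half(10))   # 0.0009765625
print(prob_all_in_one_half(100))  # ~7.9e-31
```

For a mole of gas (N ~ 6e23) the result underflows to exactly 0.0 in double precision, which is one way of seeing why the fluctuation is never observed.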

    The ergodic theorem is said by most books to be useless (I haven't seen it myself), because it requires longer than the lifetime of the universe for a system to attain equilibrium.

    The most widely accepted explanation of why statistical mechanics "works", even though all fundamental physical laws are time-reversible (there's some funny stuff in a small part of particle physics, but we don't have to worry about that), is that it has to do with the large number of particles.
    Mehran Kardar discusses this in Lecture 9 at:
    http://ocw.mit.edu/OcwWeb/Physics/8-333Fall-2005/LectureNotes/index.htm [Broken]
    Also try:

    There have been prominent dissenters from the above view, like Prigogine, who wanted to add a term to Schrödinger's equation.
  4. Sep 8, 2008 #3
    Precisely, and that's the reason why I don't understand the well-known name "equilibrium ensemble" (it seems that we don't have the same definition of equilibrium, actually, and I would like to have yours if you don't mind).

    This is not the principal reason, according to what I have read. Actually, the KAM theorem and the existence of chaotic dynamical systems for which the ergodic theorem does not hold seem to be the principal ones.

    I agree with that, but as I said, I don't understand the qualifier "equilibrium distribution", because it seems that the reason it works is only effective thanks to the overwhelming number of microstates corresponding to the equilibrium macrostate.

    I will try to look at those references, thanks.
  5. Sep 8, 2008 #4


    Science Advisor

    The time after which macroscopic variables such as pressure and temperature no longer change.

    Yes, the ergodic theorem holds for most systems we study. I just meant that it is too weak to explain why the systems we study come to equilibrium much more rapidly than the lower limit calculated by the ergodic theorem.
  6. Sep 8, 2008 #5
    Yes, but to me this definition is a little "wrong". Just consider the answer to Zermelo's objection to the Boltzmann H-theorem, which can be put as: "yes, the system will eventually enter a microstate corresponding to a macrostate out of equilibrium, but you will have to wait a little longer than the age of the universe in order to see it, dude!"
    OK, but that's not an answer for me! This sentence explains why it works, i.e. why we can describe equilibrium systems with statistical distributions containing microstates that correspond to non-equilibrium states, and that's all (which is quite good, after all). But what I want to underline is that the "equilibrium distributions" are a little more than just equilibrium distributions. Do you see my point?
  7. Sep 8, 2008 #6


    Science Advisor

    The canonical ensemble is given the qualifier "equilibrium" because it is only able to predict the results of classical thermodynamics. Yes, statistical mechanics is only an effective theory - the microcanonical and canonical ensembles are conceptually completely different ensembles - yet they are close enough for most practical purposes.
  8. Sep 8, 2008 #7


    Science Advisor

    The definition I gave of equilibrium is just an experimental one. Of course, it may be that experimentally we never find systems in equilibrium, and the theory cannot be applied.

    Anyway, I agree with your point about the age of the universe - it's exactly why I said the ergodic theorem is useless in practice - all statistical mechanics is based on a prayer (which appears to be answered affirmatively very often).

    Edit: whoops, no I don't see your point. If an equilibrium system takes longer than the age of the universe to become non-equilibrium, then it's as good as being at equilibrium for us.
  9. Sep 8, 2008 #8


    Science Advisor

    There's an interesting comment in Kardar's first lecture:

    "A system under study is said to be in equilibrium when its properties do not change appreciably with time over the intervals of interest (observation times). The dependence on the observation time makes the concept of equilibrium subjective. For example, window glass is in equilibrium as a solid over many decades, but flows like a fluid over time scales of millennia. At the other extreme, it is perfectly legitimate to consider the equilibrium between matter and radiation in the early universe during the first minutes of the big bang."

    Is it subjectivity of equilibrium that is worrying you?
  10. Sep 8, 2008 #9
    The fact is that you can make predictions about the probability of occurrence of a non-equilibrium state thanks to a so-called "equilibrium distribution" (indeed, Boltzmann always used the microcanonical ensemble, or the equiprobability principle for all microstates, at least implicitly in his arguments against Loschmidt and Zermelo, for example).
    This fact alone seems very odd to me, and that's all.

    You can say that if the system contains Avogadro's number of particles, but what if you are trying to understand the foundations of statistical mechanics for arbitrary numbers of particles?

    What I actually want to discuss is that the fact that "all that stuff about ensembles works" and the fact that they can be called "equilibrium ensembles" (these are two different things for me) look like a big coincidence to me, hidden behind the law of large numbers.
  11. Sep 8, 2008 #10
    That's a part of it, yes. I don't know if that is the same thing as what he wants to say, or whether the example of window glass is inappropriate in this case. Indeed, it is well known from the statistical physics of disordered systems that glass states are not "real" equilibrium states but metastable states. Still, this could be interesting, because it could lead physicists to a more "rigorous" (less subjective) definition of equilibrium that would command a consensus.
  12. Sep 9, 2008 #11
    Thermal equilibrium doesn't exist at the level of microstates. You can say that some particular member of the microcanonical ensemble of a gas consisting of N molecules has all its molecules in a small volume of the total available volume. But then you can write down many observables that only have a particular outcome for a small fraction of all the members of the ensemble.

    You can't strike them all out, because for any given member of the ensemble there is an observable that singles it out as special: the observable that measures if a state is the same as the given member.

    What you can say is that the ensemble contains a subset of states that is identical to an equilibrium situation of lower entropy. E.g., if the gas was first confined to a smaller volume and then underwent free expansion into a larger volume, then the original set of microstates is contained in the final set of microstates. This fact, combined with equal a priori probabilities, forms the argument for why entropy always increases. So there is nothing strange about that.
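For the free-expansion example, the counting argument can be made concrete: with equal a priori probabilities the number of accessible spatial configurations scales like V^N, so the entropy change is N ln(V/V0) in units of Boltzmann's constant. A small sketch (the variable names are mine, not from the post):

```python
import math

# Free expansion of an ideal gas from volume v0 to v > v0: the number of
# accessible spatial configurations scales like V**N, so with equal a
# priori probabilities the entropy change, in units of Boltzmann's
# constant k, is dS/k = ln(v**N / v0**N) = N * ln(v / v0).
def entropy_increase_over_k(n_particles: float, v0: float, v: float) -> float:
    return n_particles * math.log(v / v0)

# Doubling the volume available to 1000 particles:
print(entropy_increase_over_k(1000, 1.0, 2.0))  # 1000 * ln 2, about 693.1
```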

    If you keep track of the microstates, then you have precise information about the gas anyway and the entropy is zero. It always remains zero. E.g., if you let the volume increase, then the state the gas evolves into is simply related to the original state via a unitary transformation, so it ends up in some unique state. If you don't know which state the gas was in, then the number of states in which it can end up is equal to the original number of states (because they are related to each other via a unitary transformation).

    But for all practical purposes we can replace this smaller set of states by the set of all the states that can exist in the larger volume, despite the fact that most of these states could not have evolved out of the smaller volume. What matters for statistical mechanics is that we can't see the difference between the real probability distribution and the equilibrium probability distribution on the macro scale.

    The entropy has then increased because we cannot tell the microstates the gas can be in apart from the microstates it can't be in. So we may just as well consider all the microstates to be equally likely. Or you can say that the information about which microstates it can be in has been scrambled by the (intractable) time evolution.

    This means that there are correlations in the state of the gas that betray that it evolved from a low-entropy state, but these are assumed to be invisible at the macro level. If we could reverse the direction of time, then these correlations would be relevant and the gas would evolve back to the low-entropy initial conditions. So the H-theorem would fail, because its assumption that the state is randomly chosen is not true. It is never true, of course, but in this case the correlations are relevant.
  13. Sep 10, 2008 #12
    That's not what I am saying, at least I think.
    I know that equilibrium is about macrovariables. I don't see what the problem is if I am trying to build equivalence classes of microstates corresponding to some given values of macrovariables.

    I'm not sure I agree. In an ideal gas, for instance, you must have uniform temperature, pressure and density at thermodynamic equilibrium. While the density is easy to compute everywhere given a partition of the phase space, temperature and pressure seem more complicated to evaluate from a given microstate. This often leads to the conclusion that temperature and pressure have only an ensemble-average meaning, so that a microstate can't refer to particular configurations of the temperature and pressure fields. That's actually not correct, in my opinion. Indeed, following the idea of Boltzmann, one can make a linear partition of the one-particle phase space and, counting the fraction of particles in each cell of the partition, evaluate in principle the one-particle distribution function f(x,p) for one given microstate. The validity of this procedure is ensured by the large number of particles in the system; otherwise it doesn't hold, but that is not a problem, since temperature and pressure then have no meaning either.
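That coarse-graining estimate can be checked numerically: draw the velocities of a single microstate from a Maxwell distribution and recover the temperature from the empirical mean of v². The 1D setup, the units m = k_B = 1 and the function name are my choices for illustration, not from the post; the relative error shrinks like 1/sqrt(N):

```python
import random
import statistics

# Estimate the temperature from one microstate, Boltzmann style: in 1D with
# m = k_B = 1, the Maxwell distribution of a velocity component is Gaussian
# with variance k_B*T/m = T, so the empirical mean of v**2 over the N
# particles of a single microstate estimates T.
def temperature_from_microstate(n_particles: int, temperature: float = 1.0,
                                seed: int = 0) -> float:
    rng = random.Random(seed)
    sigma = temperature ** 0.5
    velocities = [rng.gauss(0.0, sigma) for _ in range(n_particles)]
    return statistics.fmean(v * v for v in velocities)

# The estimate tightens as N grows (fluctuations ~ 1/sqrt(N)):
for n in (100, 10_000, 1_000_000):
    print(n, temperature_from_microstate(n))
```

For small N the estimate fluctuates wildly, matching the remark that the procedure only makes sense for a large number of particles.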

    I see your point, but I don't understand why it is obvious that the equilibrium distribution of an ideal gas in a volume V > V0 must contain the microstates of an ideal gas at equilibrium in V0. You see, my point consists in asking "why would you use the equal a priori probabilities principle, and what does it mean?". I argue that the equal a priori probabilities principle is sufficient to describe a system at equilibrium (thanks to the law of large numbers) but not necessary. I'm apparently the only one to think that way (as far as I know), and I don't understand why.

    Thanks for the reminder, but I don't think that's what I am doing.

    This is probably the reason why everybody seems not to understand what my problem is... I know quite well how to do statistical mechanics for practical cases, but my question arises when I wonder why it works. After reading a lot of papers and books on the subject, I could not understand the transition between an invariant probability measure and an equilibrium probability measure (the transition lies, as you both said, in not distinguishing, for practical cases, between equilibrium states and hypothetical states that would require waiting an infinite time).
    To be more precise, it seems to me that the canonical distributions we use to describe equilibrium contain almost the same information as the "real" equilibrium distributions (which remain unknown in practice), but they also contain less information than the "real" equilibrium distributions.

    My only "problem" is actually that I am not able to express my way of thinking clearly, and that apparently I am the only one on Earth "perturbed" by this subject.
  14. Sep 10, 2008 #13


    Science Advisor

    I'm sure you're not. After explaining the ergodic theorem, my lecturer gave a smirk and said something like "and beyond that, all statistical mechanics is based on a prayer". (N Berker - not sure if I'm quoting him correctly, so if that doesn't sound right, it's my error :rofl:)
  15. Sep 10, 2008 #14
    I've followed the thread with great interest although I didn't post. I find statistical mechanics a subject that physicists use without caring too much about its foundations; for example, in this thread only one person is replying to the author, something that never happens in topics about quantum mechanics principles and the like. So I understand Zacku when he feels "alone", but his question is perfectly logical. Rather than insert myself into the dialogue with atyy, I prefer to answer the question "Meaning of equiprobability principle in statistical mechanics".

    In the beginning, SM was born to "reduce" thermodynamics to mechanics. The whole point is that a dynamical system with some statistical hypotheses presents behavior characteristic of thermodynamic systems, such as relaxing to equilibrium. So the first way of justifying the use of the ensembles is to (try to) derive such hypotheses directly from the dynamics; this path leads to the ergodic problem, and there are no big physical results. Another way a physicist can imagine is to simply promote the use of ensembles to the status of physical principles. Even if this is a (valid) way to found thermodynamics, it loses meaning when SM is applied to every other dynamical system, and there are important predictions of SM that we cannot neglect.
    But there is ANOTHER way, completely different, and it starts OUT of physics. If I roll a die and ask you which number will come up, you can only assign equal probability to all the numbers; in other words, you assign equal probability to every possible state when you cannot do better. If I tell you that my die is loaded and only even numbers come up, you can make a better prediction. The more information you have, the more accurate your prediction will be. This is how SM works: it makes predictions about the state of a system based on the partial knowledge you have. The quantity of missing information is called "entropy". You find the distribution that maximizes your missing information compatibly with your a priori information; if you have no information you obtain the uniform density (as for the die), and if your only information is the energy you obtain the canonical density.
    So SM doesn't "guarantee" that its predictions work, but it provides a criterion for obtaining an estimator based on partial knowledge. You have to view the ensembles in that spirit and never look for a dynamical meaning. Moreover, thermodynamic systems need only a few bits of information (temperature, pressure and volume) to make excellent predictions. That's very peculiar, and it comes from the law of large numbers: lots of microstates with the same macroscopic behavior.
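The die example can be made quantitative with a tiny maximum-entropy calculation: maximizing the entropy subject to a prescribed mean gives "canonical" weights p_i ∝ exp(-beta·i), with beta fixed by the constraint. A sketch under those assumptions (the solver details and function name are mine):

```python
import math

# Maximum-entropy distribution over die faces 1..6 with a prescribed mean:
# the maximizer has the canonical form p_i ∝ exp(-beta * i), and beta is
# fixed by requiring the mean under p to equal the target. We solve for
# beta by bisection, using the fact that the mean is decreasing in beta.
def maxent_die(target_mean: float) -> list[float]:
    faces = range(1, 7)

    def mean_for(beta: float) -> float:
        weights = [math.exp(-beta * i) for i in faces]
        return sum(i * w for i, w in zip(faces, weights)) / sum(weights)

    lo, hi = -50.0, 50.0  # mean_for(lo) is near 6, mean_for(hi) near 1
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if mean_for(mid) > target_mean:
            lo = mid      # mean still too high: move toward larger beta
        else:
            hi = mid
    beta = (lo + hi) / 2.0
    weights = [math.exp(-beta * i) for i in faces]
    z = sum(weights)
    return [w / z for w in weights]

print(maxent_die(3.5))  # no real constraint: the uniform density, 1/6 each
print(maxent_die(4.5))  # tilted exponentially toward the high faces
```

With target mean 3.5 the constraint carries no information and the uniform density comes back; any other target tilts the weights exponentially, exactly the canonical form the post describes.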

  16. Sep 10, 2008 #15


    Science Advisor

    Just to point out Count Iblis made an excellent post too (ok, that's 2 people, and now 3:smile:).
  17. Sep 10, 2008 #16


    Science Advisor

    What I don't understand about this approach is this: first I know only the energy, then I calculate. Now, if I make a measurement of the pressure, shouldn't I recalculate my entropy on the assumption of fixed energy and pressure? Then the entropy would decrease each time I made a measurement?
  18. Sep 10, 2008 #17
    I know all these approaches, and actually my favorite is the last one. Better than that, I think the last procedure you mentioned is the correct one for applying probability theory to classical or quantum mechanics (in the context of the foundations of statistical mechanics, I recall).
    Precisely, it is because of this procedure that I asked the question about the meaning of the equiprobability principle. As a matter of fact, consider a Hamiltonian system for which the energy is a constant of motion (known with a given uncertainty). If this system is composed of N particles and is confined in a volume V, you find that the best a priori probability distribution is the equiprobability of microstates on the hypersurface of constant energy. OK. Now, why does everyone then say that "the equilibrium ensemble distribution for this system is the microcanonical one"?
    How can we say that, since we only know E, V and N? We don't even know whether the system is at equilibrium or out of equilibrium. Indeed, as we don't know at all what the microstate of the system is, all states corresponding to the values E, V, N are allowed with the same probability: equilibrium ones as well as out-of-equilibrium ones.
    It just so happens that, if N is large, the microcanonical averages coincide with the equilibrium values of other macrovariables (intensive ones, for instance), but that's all.
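That "just so happens" is the law of large numbers at work, and it can be seen in a toy count: put each of N particles independently in the left or right half of a box, give every configuration equal a priori probability, and ask what fraction of microstates has a left-half occupancy within 1% of N/2. A sketch (the 1% window and function name are my arbitrary choices):

```python
import math

# Fraction of the 2**N equiprobable left/right configurations whose
# left-half occupancy lies within `tolerance` (as a fraction of N) of the
# uniform value N/2. Exact binomial count with integer arithmetic.
def fraction_near_uniform(n: int, tolerance: float = 0.01) -> float:
    lo = math.ceil(n * (0.5 - tolerance))
    hi = math.floor(n * (0.5 + tolerance))
    favorable = sum(math.comb(n, k) for k in range(lo, hi + 1))
    return favorable / 2 ** n

# The near-uniform macrostate swallows almost all microstates as N grows:
for n in (100, 1_000, 10_000):
    print(n, fraction_near_uniform(n))
```

Already for N ~ 10^4 the near-uniform macrostate owns over 95% of the microstates; for a mole it owns all but a fraction too small to write down, which is why the microcanonical averages land on the equilibrium values.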
  19. Sep 10, 2008 #18
    Indeed yes, your entropy decreases if you know more about your system. In fact, if you knew all the particles' positions and velocities in a classical gas, your entropy would be zero. You have to look at this quantity not as something you can measure with an "entropy meter" but as the expectation value that the theory predicts. The decrease of entropy should not be regarded as a contradiction of the second law; in fact your system is and stays at equilibrium and the value of its entropy is fixed; you are only adjusting your predictions based on your knowledge.

    When you do ensemble averages, you are observing your system on a timescale where you assume that the average properties you are interested in don't change. For example, it doesn't make sense to read the predictions of SM for a gas at the microscopic scale, because that isn't the correct timescale. Besides, it may happen that you try to apply SM to a gas evolving at the macroscale, for example in an adiabatic expansion. SM allows you to derive the collective properties of your system that depend on THAT state. So if your system is evolving, you of course cannot take the predictions seriously, because the state, in terms of macroscopic variables, is not the same. Trying to describe macroscopic evolution with a density operator is non-equilibrium SM, a fertile research field (about which I know nothing).

  20. Sep 10, 2008 #19
    I'm not sure about that. Ensemble averages also exist in the case of non-equilibrium statistical mechanics and correspond to macrovariables at a given time t0. Now, at a time t0+dt you can have a different probability distribution, which leads to different values of the macrovariables of interest... but I'm not sure I understood what you meant...

    Of course, because all the information about the system is known and probability can't come from nowhere (it comes from a lack of information).

    Which state ?

    It can actually be done; you can find some papers by Roger Balian on this subject here
  21. Sep 10, 2008 #20
    You're right, some of my statements fail.
    The paper is very interesting; I'm going to read it.

    So what is now the exact question of the thread?
