
Derivation of the Boltzmann distribution

  May 11, 2010 #1
    Hey, so I'm looking at a quick derivation of the Boltzmann distribution in a thermal physics textbook.
    It says to consider a small two-state system with states of energies [itex]E = 0[/itex] and [itex]E = \epsilon[/itex]. This system is connected to a reservoir, which has energy [itex]U_0[/itex] when the small system is in the zero-energy state and energy [itex]U_0 - \epsilon[/itex] when the small system is in the [itex]\epsilon[/itex] state. If the number of states available to the reservoir at energy [itex]U[/itex] is denoted by [itex]g(U)[/itex], then the numbers of states available to the reservoir in the two cases are [itex]g(U_0)[/itex] and [itex]g(U_0 - \epsilon)[/itex] respectively.

    The part I don't understand is the next step. He makes an argument from "the fundamental assumption", which states that "quantum states are either accessible or inaccessible to the system, and the system is equally likely to be in any one accessible state as in any other." He says that the ratio of the probability of the system being in the [itex]E = \epsilon[/itex] state to the probability of it being in the [itex]E = 0[/itex] state will be

    [tex] \frac{P(\epsilon )}{P(0)} = \frac{g(U_0 - \epsilon)}{g(U_0)} [/tex]

    I am struggling to see how one can come to this conclusion. Any help? Does the fundamental assumption carry the hidden meaning that the probability of being in an energy state is proportional to the number of states accessible at that energy? That is what this relationship implies, I would think. But I can't see why this would be true. Thanks.

     
  May 11, 2010 #2

    Jano L.


    Hi Hellabyte,
    in fact, we suppose that the probabilities of degenerate states of the reservoir (states with the same energy) are the same. This is the fundamental assumption, and it is quite plausible if we do not know anything else about these states. In classical statistical physics we do the same thing and suppose that the probability density is uniform on the energy surface in phase space.
    Best,
    Jano
     
  May 11, 2010 #3
    Note that the states are not assumed to be exactly degenerate (i.e. of exactly the same energy). The microcanonical ensemble is defined by specifying a small energy range which represents how accurately we know the internal energy of the system. One assumes that all energy eigenstates with an energy within this interval are equally likely.
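
    In symbols: if [itex]W(E, \Delta E)[/itex] counts the energy eigenstates with eigenvalue in [itex](E, E + \Delta E)[/itex], the microcanonical assignment is

    [tex] p_i = \frac{1}{W(E, \Delta E)}, \qquad S = k_B \ln W(E, \Delta E). [/tex]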
     
  May 11, 2010 #4

    Jano L.


    I've never understood this point. The system is isolated, so its energy should be constant and definite. Our knowledge or ignorance of it should not matter.
    If we allow the energy to fluctuate by means of an additional reservoir, the postulate of equal probabilities is clearly wrong.
    But suppose we allow the energy to make a random walk through the states with energy in the interval [itex](E, E + \Delta)[/itex].
    What is then the value of [itex]\Delta[/itex]? Can we calculate it somehow?
     
  May 11, 2010 #5
    In statistical mechanics, we consider not a single system but a large number of "identically" prepared systems, where "identically" does not mean exactly the same, but only the same as far as certain macroscopic properties are concerned.

    The microcanonical ensemble is a set of such systems, each with a definite energy, but with the energies of the members distributed with equal probability within a range [itex]\Delta E[/itex]. The value of [itex]\Delta E[/itex] is an arbitrary choice.

    You can easily see that specifying a finite value for [itex]\Delta E[/itex] is of crucial importance, even though we tend to ignore this in actual computations (it doesn't seem to matter how you choose [itex]\Delta E[/itex]). Suppose we have an isolated system whose energy levels are nondegenerate. Being isolated, it has well-defined energy eigenstates. Specifying the energy exactly for such a system would thus specify the exact microstate the system is in, and therefore the entropy would have to be zero.

    This zero entropy of a system is called the "fine-grained entropy". It always stays zero due to unitary time evolution: information does not get lost, so if you have complete information about a system at [itex]t = 0[/itex], you will still have complete information at any later time. Clearly, this is not the relevant notion of entropy for thermodynamics.
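
    In formulas: the fine-grained entropy here is just the von Neumann entropy

    [tex] S_{\text{fg}} = -k_B \, \mathrm{Tr}\left( \rho \ln \rho \right), [/tex]

    and since unitary evolution [itex]\rho(t) = U \rho(0) U^{\dagger}[/itex] leaves the eigenvalues of [itex]\rho[/itex] unchanged, [itex]S_{\text{fg}}[/itex] is constant in time.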

    The entropy should reflect the lack of knowledge we have about a system. We deal with this by considering an ensemble of systems in a range of different microstates that are all compatible with the specified macrostate. Now, because of the way quantum mechanics works, we would only need to specify the energy exactly in order to pin down the exact microstate of the system. What this means in practice is that you have energy levels that are astronomically close to each other; specifying the exact energy eigenstate would thus require an astronomical amount of information.
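
    Just to get a feel for "astronomically close", here is a back-of-the-envelope estimate in Python (the numbers are made up but representative):

[code]
from math import log10

# Rough toy model (assumed numbers): N two-level units give ~2^N energy
# eigenvalues spread over a bandwidth of order N energy units, so the
# mean level spacing is of order N / 2^N.
N = 1e23
log10_spacing = log10(N) - N * log10(2)   # log10 of N / 2^N
print(log10_spacing)   # about -3e22, i.e. a spacing of 10^(-3e22)
[/code]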

    So, we simply specify that the energy of a system is between [itex]E[/itex] and [itex]E + \Delta E[/itex], consider the ensemble of such systems, and define thermodynamic quantities as ensemble averages. The entropy can be interpreted as the amount of information you would have to specify in order to point out some specific member of the ensemble. The choice of [itex]\Delta E[/itex] will only affect the entropy by a few bits and can thus be ignored.
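
    Concretely (with toy numbers of my own): since [itex]S = \ln(\rho \, \Delta E) = \ln \rho + \ln \Delta E[/itex] for a density of states [itex]\rho[/itex], rescaling [itex]\Delta E[/itex] shifts the entropy by an amount that is negligible next to the [itex]\sim 10^{23}[/itex] you already have:

[code]
from math import log

ln_rho = 1e23   # assumed: ln(density of states) of a macroscopic system
for f in (2, 1_000, 1e6):
    dS = log(f)   # extra entropy from widening Delta_E by the factor f
    print(f, dS, dS / ln_rho)   # at most ~14 nats against 10^23
[/code]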

    In contrast, [itex]\Delta E = 0[/itex] would mean that the ensemble consists of only one member, and you need zero bits of information to point out that member.
     
  May 12, 2010 #6
    Yes, I understand what the fundamental assumption is saying, but how does that relationship spring up?
     
  May 12, 2010 #7
    All accessible states are equally likely. If the subsystem is in the excited state, the reservoir has [itex]g(U_0 - \epsilon)[/itex] accessible states, and if the subsystem is in the ground state, the reservoir has [itex]g(U_0)[/itex] accessible states. This is like throwing two dice: the sum of the numbers can range from 2 to 12, but the probabilities of the possible outcomes are not the same. 7 is more likely than 2 because there are more ways to throw 7 than there are to throw 2. There are 6 times as many states for 7 as there are for 2, so the ratio of the probabilities is 6.
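
    Here is a quick brute-force check of those dice counts in Python (nothing deep, just enumeration):

[code]
from itertools import product

# Enumerate all 36 equally likely outcomes of two dice.
counts = {}
for a, b in product(range(1, 7), repeat=2):
    counts[a + b] = counts.get(a + b, 0) + 1

print(counts[7], counts[2])    # 6 ways to throw 7, 1 way to throw 2
print(counts[7] / counts[2])   # ratio of the probabilities: 6.0
[/code]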

    In this case, the number of states of the combined system (reservoir plus subsystem) that correspond to the subsystem being in the excited state is [itex]g(U_0 - \epsilon)[/itex].

    The number of states of the combined system that correspond to the subsystem being in the ground state is [itex]g(U_0)[/itex].

    But all these states are equally likely, so the ratio of the probabilities is:

    [tex] \frac{g(U_0 - \epsilon)}{g(U_0)} [/tex]
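
    If you want to see the Boltzmann factor actually emerge from this kind of counting, here is a toy reservoir in Python (my own example, not from the textbook): [itex]N[/itex] two-level units, so [itex]g(U) = \binom{N}{U}[/itex] when [itex]U[/itex] quanta of size [itex]\epsilon = 1[/itex] are excited.

[code]
from math import comb, exp, log

N, U0 = 10_000, 3_000   # reservoir size and energy (assumed toy values)

# Exact ratio of accessible reservoir states, g(U0 - eps) / g(U0):
ratio = comb(N, U0 - 1) / comb(N, U0)

# Boltzmann factor exp(-eps/kT), using 1/kT = d(ln g)/dU, which Stirling's
# approximation gives as ln((N - U0) / U0) for this binomial model:
beta = log((N - U0) / U0)

print(ratio)        # ~0.428510
print(exp(-beta))   # ~0.428571 -- agreement to about 1 part in 10^4
[/code]

    The same expansion works in general:

    [tex] \frac{g(U_0 - \epsilon)}{g(U_0)} = e^{\ln g(U_0 - \epsilon) - \ln g(U_0)} \approx e^{-\epsilon \, \partial \ln g / \partial U} = e^{-\epsilon / k_B T}, [/tex]

    since [itex]1/T = \partial (k_B \ln g) / \partial U[/itex] defines the temperature of the reservoir.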
     
  May 13, 2010 #8
    Great I think I got it. Thanks a lot!
     
  May 17, 2010 #9

    Jano L.


    Hi Count,

    I still do not understand this [itex]\Delta[/itex]. A nondegenerate system would cease to evolve in both classical and quantum theory, and we would have no need to describe it statistically, as we would already know the state.

    I think we can just say that the level [itex]E[/itex] is [itex]g(E)[/itex]-fold degenerate and that the entropy is the logarithm of this number. Then we make the hypothesis that all of these states are equally probable and introduce the ensemble of states with sharp energy [itex]E[/itex].
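
    In formulas:

    [tex] S(E) = k_B \ln g(E), [/tex]

    with each of the [itex]g(E)[/itex] states assigned probability [itex]1/g(E)[/itex].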

    Right?
     
  May 17, 2010 #10
    The problem here is that if the system is not exactly degenerate, the definition will, strictly speaking, not apply. Note that a system that is not degenerate can still evolve in time, because it doesn't have to be in an exact energy eigenstate. The spacing between the energy eigenvalues must necessarily be astronomically small if you have a system with an astronomically large number of degrees of freedom. This means that the system can evolve almost arbitrarily slowly.

    Then, to evaluate thermal averages, we can average over some ensemble in which each member is in a "realistic" state, which is then not an exact energy eigenstate. But this average will be a trace over certain basis states with almost well-defined energies. Due to the invariance of the trace, you can then evaluate the trace over the exact energy eigenstates.
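
    In formulas: for any observable,

    [tex] \langle A \rangle = \mathrm{Tr}(\rho A) = \sum_n \langle E_n | \rho A | E_n \rangle, [/tex]

    and because the trace is basis independent, you may evaluate it in the exact energy eigenbasis [itex]\{ |E_n\rangle \}[/itex] even if the ensemble was defined in terms of other states.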
     
  May 17, 2010 #11

    Jano L.


    I see the problem: in fact, we never know the energy precisely, so we never have the situation where the system is in a nondegenerate state (if there were one).

    I think the key point is that the isolated system is not necessarily in an energy eigenstate. In fact, the initial state should depend on the preparation of the system.
    Now the question is: why do we suppose that the probability of measuring the energy [itex]E[/itex] is uniform in the interval [itex](E, E + \Delta)[/itex]? It seems to be just a mathematical convenience, since in reality this distribution should reflect the preparation process. I would guess that a distribution peaked around [itex]E[/itex] would describe the results of energy measurements better.
     