Why is the uniform measure natural in (equilibrium) statistical mechanics?
Apr12-12, 04:01 PM
For example, the microcanonical ensemble uses a Dirac delta distribution concentrated on an energy shell E. Strictly speaking this is not a uniform distribution (even restricted to the shell), but it comes close.
Why is uniformity (in phase space, or a relevant restriction thereof) natural for equilibrium?
Some say it is because the uniform measure is invariant under Hamiltonian evolution (Liouville's theorem), but that seems irrelevant to why it should describe equilibrium.
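For reference, the invariance claim amounts to Liouville's theorem: a phase-space density $\rho(x,p,t)$ evolves under Hamiltonian flow as

$$\frac{d\rho}{dt} = \frac{\partial\rho}{\partial t} + \{\rho, H\} = 0,$$

so any $\rho$ that is constant on phase space (or a function of $H$ alone, such as a uniform density on an energy shell) has $\{\rho, H\} = 0$ and is stationary. This shows the uniform measure is *a* stationary measure, not that it is the *only* or the *natural* one, which is the point at issue.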
Others say it is an expression of indifference: one does not know which state the system is in, so every microstate is seen as equally likely. However, this is more arbitrary than it seems, since it depends on the coordinate system one is using. For example, suppose one instead works with the coordinates x²,p, where position is labelled by its square. By the same indifference argument, one could take a uniform distribution there, but that gives radically different results from a uniform distribution on the ordinary phase space x,p.
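The coordinate dependence of "indifference" is easy to check numerically. A minimal sketch (my own illustration, using a hypothetical one-dimensional position on [0, 1] and ignoring momentum): being uniform in u = x² induces a non-uniform distribution over x, so the two priors assign different probabilities to the same physical event.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# "Indifference" expressed in the coordinate x on [0, 1]:
x_uniform = rng.uniform(0.0, 1.0, n)

# "Indifference" expressed in the coordinate u = x^2 on [0, 1],
# mapped back to position via x = sqrt(u):
u_uniform = rng.uniform(0.0, 1.0, n)
x_from_u = np.sqrt(u_uniform)

# Same physical question, different answers: probability that x < 1/2.
p_x = np.mean(x_uniform < 0.5)  # about 0.5
p_u = np.mean(x_from_u < 0.5)   # about 0.25, since x < 1/2 iff u < 1/4
print(p_x, p_u)
```

The second prior concentrates probability at larger x (density ∝ x in this coordinate choice), so "equally likely microstates" is only well defined once a preferred coordinate system, here the canonical (x, p), has been singled out.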
So why uniformity?
The best answer I can think of: because it gives the correct results. But that shouldn't be the last word for a theory that is supposed to be explanatory.