atyy said:
So I think the "information alone" approach is, to begin with, not a good description of itself. Also, it should be the microscopic physics that permits or does not permit us to "coarse grain" successfully. Pretty much all the same points Zacku made - except he likes the "information only" approach!
Thinking about this since the opening of this thread, I believe that the indifference principle allows one to retrieve the microcanonical distribution through a correct use of probability theory.
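To make that concrete, here is the standard statement (just a sketch, using the usual notation $\Omega(E)$ and $\delta E$ rather than anything specific from this thread): knowing only that the energy lies in a thin shell, indifference assigns equal probability to every accessible microstate,

$$ p_i = \frac{1}{\Omega(E)}, \qquad E \le H(x_i) \le E + \delta E, $$

where $\Omega(E)$ counts the microstates in the shell.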
However, its other use - namely, assuming that the values of measured macrovariables are ensemble averages (as stated for instance by R. Balian) - is more difficult for me to understand. In quantum mechanics, when we make a measurement we must find an eigenvalue of the observable of interest; that is, we don't find a quantum average value unless we make more than one measurement.
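In symbols (the standard textbook statement, not anything specific to Balian): a single measurement of an observable $A$ on a state $\rho$ returns one eigenvalue $a_n$, with Born-rule probability $p_n$, whereas the ensemble average

$$ \langle A \rangle = \operatorname{Tr}(\rho A) = \sum_n a_n\, p_n $$

only emerges after averaging over many repeated measurements.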
As a matter of fact, if one knows the expression of a macrovariable observable, then an apparatus will measure a time average of this quantity (even in QM). Assuming that this time average equals an ensemble average is, in my view, equivalent to the ergodic problem.
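Written out (classically, for simplicity; the quantum version is analogous), the assumption is

$$ \bar{A} \equiv \lim_{T \to \infty} \frac{1}{T} \int_0^T A\big(x(t)\big)\, dt \;=\; \langle A \rangle_{\mathrm{mc}} \equiv \int A(x)\, \rho_{\mathrm{mc}}(x)\, dx, $$

i.e. the time average along a single trajectory equals the microcanonical ensemble average.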
It seems, as we said earlier, that the answer to the statistical mechanics "problem" is not the ergodic theorem since, despite the famous work of Sinai on this subject, it is too restrictive to build the bridge between time averages and ensemble averages.
It seems that the answer perhaps lies in the ideas raised by Khinchin (I'm reading his book on statistical mechanics).
So I agree with your comment, atyy: there must be something other than statistical inference alone (except for the microcanonical case, I would say) to explain canonical ensembles. Note that this still doesn't settle the problem of whether a real ensemble probability distribution exists for a system at equilibrium.
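For comparison, the pure-inference (Jaynes-style) route gets the canonical ensemble by maximizing entropy at fixed mean energy,

$$ \rho = \frac{e^{-\beta H}}{Z}, \qquad Z = \operatorname{Tr}\, e^{-\beta H}, $$

with $\beta$ fixed by $\operatorname{Tr}(\rho H) = E$; the question above is whether this inference step alone justifies identifying such averages with what is actually measured.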