Philip Koeck said:
That would definitely solve my problem. So some textbooks simply got it wrong, and the factor 1/N! does not account for indistinguishability but only for identity.
Perhaps this is a topic for the other thread, but what deductive process is going on between premises concerning things being identical or indistinguishable and conclusions about formulae for physical quantities? (Is it a Bayesian form of deduction, as in Jaynes?)
In purely mathematical problems about combinatorics, the given information about things being identical or indistinguishable is used to define what is meant by "ways". This is needed in order to interpret the inevitable question: "In how many different ways can...?".
In physics, in addition to showing that we have counted the number of "ways" correctly, we need some justification of the form: "The following is the physically correct way to define a 'way': ...".
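To make that concrete, here is a minimal Python sketch (my own toy numbers: N = 2 particles, M = 3 single-particle states, nothing from the thread) showing how three different definitions of a "way" give three different counts:

```python
from math import comb

# Toy counting exercise: N particles distributed over M single-particle states.
# The answer to "In how many different ways...?" depends on what a "way" means.
N, M = 2, 3

# "Way" = an ordered assignment (particles carry labels): M**N configurations.
ways_labelled = M ** N

# "Way" = an occupation-number pattern (no labels, any occupancy allowed):
# multisets of size N drawn from M states.
ways_unlabelled = comb(N + M - 1, N)

# "Way" = an occupation-number pattern with at most one particle per state.
ways_exclusive = comb(M, N)

print(ways_labelled, ways_unlabelled, ways_exclusive)  # 9 6 3
```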
A frequently seen deductive pattern is:
1. Provide formulae for the number of "ways" using combinatorics.
2. Deduce probability distributions from the combinatorial results - usually by assuming each "way" has the same probability.
In this thread there is also a concern with:
3. Use the probability distributions to compute (Shannon) entropy.
4. Check that the entropy computations resolve the Gibbs paradox - i.e. make sure entropy is an "extensive" quantity.
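Here is a small numerical sketch of steps 1-4 for a toy lattice gas, under assumptions of my own choosing: the number of "ways" is W = V^N (N labelled particles, each in one of V cells), each way is equally probable (so the Shannon entropy is just ln W), and the 1/N! correction is either applied or not; the sizes N and V are arbitrary.

```python
from math import log, lgamma

def entropy(N, V, with_correction):
    """ln W for a toy lattice gas of N labelled particles in V cells (W = V**N),
    optionally with W divided by N! (i.e. subtract ln N!).  Equal probability per
    "way" means the Shannon entropy of the distribution is just ln W."""
    S = N * log(V)
    if with_correction:
        S -= lgamma(N + 1)  # ln N!
    return S

N, V = 10_000, 100_000  # arbitrary toy sizes

for corrected in (False, True):
    S_one = entropy(N, V, corrected)            # one box
    S_two = entropy(2 * N, 2 * V, corrected)    # two such boxes merged
    print(f"1/N! applied: {corrected},  S(2N,2V) - 2*S(N,V) = {S_two - 2 * S_one:.1f}")

# Without the correction the mismatch is 2N*ln(2) ~ 13863: a spurious "entropy of
# mixing" for identical gases (the Gibbs paradox).  With the correction it shrinks
# to ~0.5*ln(pi*N) ~ 5, negligible per particle, so the entropy is effectively
# extensive.
```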
My interpretation of the Jaynes paper "The Gibbs Paradox"
http://www.damtp.cam.ac.uk/user/tong/statphys/jaynes.pdf is that information about "identical" or "indistinguishable" particles is not an absolute form of information - i.e. it is not a property of Nature that is independent of who is performing the experiments. If particles are "indistinguishable" to a certain experimenter, then that experimenter doesn't know how to keep track of which one is which, and cannot perform any experiments that would require distinguishing among them. From the Bayesian perspective, entropy (when defined as a function of a probability distribution) is defined relative to the experimenter's capabilities.
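As a toy illustration of that last point (my own example, not taken from the paper): two experimenters assign different probability distributions to the same two-particle system, depending on whether they can tell the particles apart, and so they compute different Shannon entropies.

```python
from math import log2
from collections import Counter
from itertools import product

def shannon_entropy(probs):
    """Shannon entropy in bits of a discrete probability distribution."""
    return -sum(p * log2(p) for p in probs if p > 0)

# Two particles, A and B, each equally likely to sit in box 0 or box 1.

# Experimenter 1 can keep track of which particle is which:
# four equiprobable microstates (box of A, box of B).
microstates = list(product((0, 1), repeat=2))
p_fine = [1 / len(microstates)] * len(microstates)
print(shannon_entropy(p_fine))    # 2.0 bits

# Experimenter 2 cannot tell A from B and only records occupation numbers,
# so microstates that differ by swapping A and B get merged.
coarse = Counter(tuple(sorted(m)) for m in microstates)
p_coarse = [n / len(microstates) for n in coarse.values()]
print(shannon_entropy(p_coarse))  # 1.5 bits
```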