A. Neumaier said:
Because the methods used are those of statistical mechanics. Everything is about the behavior of a macroscopic system (which can be analyzed only through statistical mechanics) interacting with a microscopic one. And because stevendaryl's argument is based on a misinterpretation of how statistical mechanics relates to observation. I tried to correct this misinterpretation, but it is so deeply ingrained in stevendaryl's thinking that the information I provided only confuses him.
Yes, but a single spin is a tiny system, whereas statistical mechanics is about the behavior of macroscopic systems, in our case the measurement device. Measuring a macroscopic system is not governed by Born's rule but by the identification of the expectation value with the measured value (within the intrinsic uncertainty). Born's rule is valid only for complete measurements of very small quantum systems such as a Stern-Gerlach spin.
I'm puzzled how you come to this conclusion. In many-body physics we describe the system's state, as for any other quantum system, by a statistical operator ##\hat{\rho}## and define the expectation value of an observable, represented by a self-adjoint operator ##\hat{A}##, by
$$\langle A \rangle_{\rho}=\mathrm{Tr}(\hat{\rho} \hat{A}).$$
This is Born's rule.
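To make the trace formula concrete, here is a minimal numerical illustration for a spin-1/2 system (the 70/30 mixture and the choice ##\hat{A}=\hat{\sigma}_z## are of course just an arbitrary example):

```python
import numpy as np

# Observable: the Pauli matrix sigma_z, with eigenvalues +1 and -1
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

# Statistical operator rho: a mixed state, 70% spin-up, 30% spin-down
rho = np.array([[0.7, 0.0], [0.0, 0.3]], dtype=complex)

# Born's rule in trace form: <A>_rho = Tr(rho A)
expectation = np.trace(rho @ sigma_z).real
print(expectation)  # 0.7*(+1) + 0.3*(-1) = 0.4
```

The same formula ##\mathrm{Tr}(\hat{\rho} \hat{A})## covers pure states (##\hat{\rho}=|\psi\rangle\langle\psi|##) and arbitrary mixtures alike, which is why it is the convenient form of Born's rule in many-body physics.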
What you usually measure for macroscopic systems in a "classical setup" are not arbitrary expectation values of this kind, but relevant "pretty coarse-grained" observables. E.g., for an ideal gas you consider some volume and check quantities like density, pressure, temperature etc. (assuming you have a state in or close to thermal equilibrium). These are quantities describing a large number of particles in a "collective way". The density, e.g., is obtained by just counting the particles in a volume element containing many particles and then dividing by this volume. The volume is microscopically large, i.e., it contains many particles, but on the other hand it's macroscopically small, i.e., the coarse-grained variable doesn't change much on the scale of the volume. Experience shows that such a "separation of scales" for such coarse-grained variables occurs quite often for macroscopic systems, and that is what makes the thermodynamical approach so successful. From this approach you can derive the macroscopic equations like the (Vlasov-)Boltzmann(-Uehling-Uhlenbeck) equation (formalizing the "coarse-graining procedure" either by applying a gradient expansion or some projection method), and further viscous or even ideal hydrodynamics.
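Just to illustrate the counting procedure (the numbers and the smooth toy profile are invented for this sketch, not meant to model any particular gas):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 10^6 "particles" in a 1D box of length L, sampled from a
# smooth large-scale density profile (denser near the middle of the box)
L, N = 1.0, 1_000_000
x = L * rng.beta(2, 2, size=N)

# Coarse-graining cells: microscopically large (many particles per cell)
# but macroscopically small (the profile varies little across one cell)
n_cells = 50
counts, edges = np.histogram(x, bins=n_cells, range=(0.0, L))
cell_volume = L / n_cells

# Coarse-grained density: particle number per cell divided by the volume
density = counts / cell_volume
print(counts.min())  # even the emptiest cell holds over a thousand particles
print(density[:3])   # a smooth profile, essentially free of shot noise
```

With of the order of ##10^4## particles per cell the relative fluctuation of each density value is only ##\sim 1/\sqrt{N_{\text{cell}}}##, i.e., below a percent; that is the separation of scales at work.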
There is a community of quantum physicists who like to try to derive Born's rule from the other postulates, among them Weinberg in his newest textbook, as an attempt to solve an apparent "measurement problem". As far as I understand, what they mean by this "problem" is the question why a measurement leads to a definite outcome, although according to quantum theory there are only probabilities, given by Born's rule, to find a certain value whenever the system is not prepared such that the measured observable is determined. The idea behind these attempts seems to be to derive from the dynamical postulates alone how a definite outcome occurs with just these probabilities when the observable is measured. The problem is of course that you get a circular argument, and Weinberg finally comes to the conclusion that he cannot satisfactorily derive Born's rule from the other postulates. That doesn't mean that some other clever physicist won't come along and find a convincing set of postulates for QT from which Born's rule can be derived. Whether you find this line of argument convincing is subjective. I don't see any merit in such an attempt, unless it turns up something really new.

The paradigmatic example where such a methodology opened a whole new universe (for general relativists quite literally) of mathematical thinking is the attempt to prove the parallel postulate of Euclidean geometry from the other postulates. It led to the discovery of non-Euclidean geometries and a plethora of new "geometrical ideas", including the group-theoretical approach à la Klein, which is so vital for modern physics through Noether's theorems and all that.
In the minimal interpretation there is no measurement problem. For the Stern-Gerlach experiment, i.e., measuring a spin component (usually chosen as ##\sigma_z##) of a spin-1/2 particle, it's pretty simple to explain everything quantum theoretically, because the system is simple enough that you can do so. The experiment consists more or less just of an appropriately chosen static magnetic field, which has a large homogeneous component in the ##z## direction and also some inhomogeneity along ##z## (which is necessarily accompanied by an inhomogeneity in another direction because of Gauss's law, ##\vec{\nabla} \cdot \vec{B}=0##). One simple model for such a field is ##\vec{B}=(B_0 + \beta z)\vec{e}_z-\beta y \vec{e}_y##. Then it's quite easy to show that if ##\beta |\langle y \rangle| \ll B_0##, the particle beam splits into two well-separated partial beams which sort the two possible states ##\sigma_z = \pm 1## almost perfectly. That's how you measure (or even prepare!) nearly pure ##\sigma_z## eigenstates, by entangling position and ##\sigma_z##. There's nothing mysterious about this, but of course the argument uses the full standard QT postulates, including Born's rule.
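For completeness, here is a quick symbolic check of the Gauss's-law remark, together with the usual semiclassical shortcut for the beam splitting (treating the two ##\sigma_z## eigenstates as moving in the potentials ##\mp \mu |\vec{B}|##; this is only a sketch of the mechanism, not the full quantum calculation with entanglement of position and spin):

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
B0, beta, mu = sp.symbols('B_0 beta mu', positive=True)

# The model field from above: B = (B0 + beta*z) e_z - beta*y e_y
Bx, By, Bz = sp.Integer(0), -beta * y, B0 + beta * z

# Gauss's law: the divergence of the model field vanishes identically
div_B = sp.diff(Bx, x) + sp.diff(By, y) + sp.diff(Bz, z)
print(div_B)  # 0

# Semiclassical potential for the sigma_z = +1 state: U = -mu*|B|,
# so the force along the beam-splitting direction is F_z = -dU/dz
B_mag = sp.sqrt(Bx**2 + By**2 + Bz**2)
F_z = -sp.diff(-mu * B_mag, z)
print(sp.simplify(F_z.subs(y, 0)))
# -> mu*beta (up to the sign of B0 + beta*z): on the beam axis the
#    sigma_z = +1 state feels a constant force along z, and the
#    sigma_z = -1 state (potential +mu*|B|) the opposite one, so the
#    beam splits into the two partial beams described above.
```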