Well, this point of view is also a bit dangerous, because what's meant by an ensemble is that you can prepare each single member of the ensemble in a well-defined way, which finally defines the idea of "state".

A. Neumaier said:
You are an outsider - where is your record of publications in the foundations of quantum mechanics? Or at least you are using a pseudonym so that you appear to be an outsider. This in itself would not be problematic. But you are making erroneous accusations based on a lack of sufficient understanding. This is very problematic.
These explanations are valid for the Copenhagen interpretation but are meaningless in the context of the minimal (statistical) interpretation. In the minimal (statistical) interpretation discussed by Ballentine and Peres, a single system has no associated state at all. Thus your statements ''each copy of the system is in the same [resp. a different] pure state'' do not apply in their interpretation. You are seeing errors in their book only because you project your own Copenhagen-like interpretation (where a single system has a state) into a different interpretation that explicitly denies this. If a single system has no state, there is nothing that could collapse, hence there is no collapse. Upon projecting away one of the spins, an ensemble of 2-spin systems in an entangled pure state automatically is an ensemble in a mixed state of the subsystem, without anything mysterious having to happen in between. Looking at conditional expectations is all that is needed to verify this. No collapse is needed.
Thus not Ballentine and Peres but your understanding of their exposition is faulty. You should apologize for having discredited highly respectable experts on the foundations of quantum mechanics on insufficient grounds.
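(As an illustrative worked example of the partial-trace claim in the quote above, not part of the quoted text: take the singlet state of two spins,
$$\hat\rho = |\psi\rangle\langle\psi|, \qquad |\psi\rangle = \frac{1}{\sqrt{2}}\left(|{+}\rangle\otimes|{-}\rangle - |{-}\rangle\otimes|{+}\rangle\right).$$
Tracing out the second spin gives
$$\hat\rho_1 = \mathrm{Tr}_2\,\hat\rho = \frac{1}{2}\left(|{+}\rangle\langle{+}| + |{-}\rangle\langle{-}|\right) = \frac{1}{2}\hat{1},$$
i.e., the maximally mixed state of the subsystem, and all expectation values of observables of spin 1 alone follow from ##\hat\rho_1## without invoking any collapse.)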
In the formalism, a state is just a self-adjoint trace-class operator with trace 1, but that's an empty phrase from the physics point of view, because physics is about real things in the lab. Thus it must be possible to define a state in an operational way for a single object, and in this sense the question of the collapse is of some importance, i.e., how can you make sure that you prepare a real-world system in a state that is described by the abstract statistical operator?
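(For definiteness, the usual defining properties meant here are
$$\hat\rho^\dagger = \hat\rho, \qquad \hat\rho \geq 0, \qquad \mathrm{Tr}\,\hat\rho = 1, \qquad \langle A \rangle = \mathrm{Tr}\left(\hat\rho \hat A\right)$$
for any observable ##\hat A##; the operational question is how a concrete lab procedure singles out such a ##\hat\rho##.)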
I take a pragmatic view on this: A state is defined by a real-world experimental setup. E.g., at a particle accelerator you prepare particles in a state with quite well-defined momentum. As far as I know, accelerator physicists construct their devices without much use of quantum theory; they use the classical description of the motion of charged particles in the classical electromagnetic fields designed to achieve a particle beam of high quality (i.e., high luminosity with a pretty well-defined momentum).
The preparations usually discussed in textbooks, in the sense of idealized von Neumann filter measurements, can also be understood in a very pragmatic way. Take the Stern-Gerlach experiment as an example. This can be fully treated quantum mechanically (although it's usually not done in the usual textbooks; for a very simple introduction, you can have a look at my QM 2 manuscript (in German) [1]). Then you have a rather well-defined spin-position entangled state with practically separated partial beams of definite spin (determined spin-z component). Then you simply block all unwanted partial beams by putting some absorber material in the way at the corresponding positions. What's left is then by construction a beam of particles in a well-defined ##\sigma_z## eigenstate.
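(Schematically, and in idealized form: for a spin-1/2 beam the state after the magnet is of the form
$$|\Psi\rangle = c_+ \, |\phi_+\rangle \otimes |{+}z\rangle + c_- \, |\phi_-\rangle \otimes |{-}z\rangle,$$
where ##|\phi_\pm\rangle## are spatially well-separated wave packets. Blocking the ##|\phi_-\rangle## partial beam with an absorber leaves a beam described, up to normalization, by the ##|{+}z\rangle## component, which is what the text above means by preparation through filtering.)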
Last but not least, you have to check such claims empirically, i.e., you have to make a sufficient set of measurements to make sure that you have really prepared the state to the accuracy you want.
Now comes the dilemma: We all tend to think in the naive collapse way when considering such filter measurements, assuming that the naive pragmatic way of filtering away the unwanted beams really does prepare the remaining beam in the way we think. This means that we assume that with this preparation procedure each single system (as a part of the ensemble used to check the probabilistic predictions of QT) is in this very state (it can of course also be a mixture). On the other hand, if we believe that relativistic quantum field theory provides the correct description, there's no action at a distance as implicitly assumed in the collapse hypothesis but only local interactions of the particles with all the elements of the preparation apparatus, including the "beam dumps" filtering away the unwanted beams. So you can take the collapse as a short-hand description of the preparation procedure, but not literally as something happening to the real-world entities (or ensembles of so prepared entities) without getting into a fundamental contradiction with the very foundations of local relativistic QT.
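(In formulas, the "short-hand" reading is just the usual state-update rule: if the filter selects the outcome associated with a projector ##\hat P##, one describes the sub-ensemble that passed by
$$\hat\rho \;\longrightarrow\; \hat\rho' = \frac{\hat P \hat\rho \hat P}{\mathrm{Tr}\left(\hat P \hat\rho\right)},$$
understood as bookkeeping for the selected sub-ensemble defined by the preparation procedure, not as a physical process acting instantaneously on each individual system.)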
It's also interesting to see what experts in this field think. Yesterday we had Anton Zeilinger in our Physics Colloquium, and he gave just a great talk about all his Bell experiments (including one of the recent loophole-free measurements). In the discussion somebody asked the question about the collapse (and I could ask another question about whether the communication loophole is really closed by using "random number generators" to switch the distant measurements at A's and B's places in such a way that no FTL information transfer between the two sites is possible, but that's another story). His answer was very pragmatic too: He took the epistemic point of view of Bohr (he also mentioned Heisenberg, but I'm not sure whether Bohr's and Heisenberg's views on this subject are really the same), i.e., that the quantum formalism is just a way to describe probabilities and that the collapse is indeed nothing else than updating the description due to reading off a measurement result. So at least Zeilinger, who has done all these mind-boggling experiments his whole life, has a very down-to-earth no-nonsense view on this issue. I was very satisfied ;-)).