A. Neumaier said:
None of my views on the foundations of quantum mechanics, as argued in this thread, is standard. Does it matter? It resolves, or at least greatly reduces, all quantum mysteries - and that alone is what matters.
- It is fairly well known that real measurements are rarely von Neumann measurements but rather POVM measurements. Nevertheless, people are content to base their foundations on the former.
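(For reference, the standard formulation: a POVM is a family of effects E_k with
$$E_k \ge 0, \qquad \sum_k E_k = 1, \qquad p_k = \mathrm{Tr}(\rho\, E_k),$$
and a von Neumann measurement is the special case in which the E_k are mutually orthogonal projectors.)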
- It is well-known that real systems are dissipative, and it is known that these are modeled in the quantum domain by Lindblad equations (lots of quantum optics literature exists on this). Nevertheless, people are content to base their foundations on a conservative (lossless) dynamics.
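(For concreteness, the standard Lindblad form is
$$\dot\rho = -\frac{i}{\hbar}[H,\rho] + \sum_k \left( L_k \rho L_k^\dagger - \tfrac{1}{2}\{L_k^\dagger L_k,\rho\} \right),$$
where the operators L_k encode the dissipative coupling to the environment; lossless Schrödinger dynamics is recovered as the special case with all L_k = 0.)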
- It is well-known how dissipation results from the interaction with the environment. Nevertheless, people are content to ignore the environment in their foundations. (This has changed a little with time. There is now often lip service paid to decoherence, and often also the claim that it settles things when taken together with the traditional assumptions. In my opinion, it doesn't.)
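(Why it doesn't: decoherence explains why the reduced density matrix of the system rapidly becomes diagonal in a preferred basis,
$$\rho \to \sum_k p_k\, |k\rangle\langle k|,$$
but such a mixture still describes all outcomes at once; it does not explain the occurrence of a single definite outcome.)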
- It is known (though less well-known) that models in which the electromagnetic field is treated classically and only the detector is quantized produce exactly the same Poisson statistics for photodetection as models employing a quantum field in a coherent state. This conclusively proves that the detector signals are artifacts produced by the detector and cannot be evidence of photons (since photons are completely absent in the first model). Nevertheless, people are content, in their foundations, to treat detector signals as proof of photon arrival.
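A toy numerical sketch of this semiclassical point - all parameter values here are invented for illustration: a fixed classical intensity I drives a detector that clicks independently in each short time bin, and the count statistics come out Poissonian with no photons anywhere in the model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Semiclassical photodetection toy model: a *classical* field of constant
# intensity I drives a detector that clicks independently in each time bin
# with probability eta * I * dt. (All values are arbitrary illustration.)
eta, I, dt = 0.8, 5.0, 1e-3    # detector efficiency, intensity, bin width
T = 1.0                        # counting window
n_bins = int(T / dt)
n_runs = 100_000

p_click = eta * I * dt         # per-bin click probability (<< 1)
counts = rng.binomial(n_bins, p_click, size=n_runs)   # clicks per run

# As dt -> 0 the binomial converges to a Poisson distribution with
# mean eta*I*T - the same prediction a coherent quantum field gives.
print("sample mean:", counts.mean(), " expected:", eta * I * T)
print("sample var :", counts.var(), " (Poisson: variance = mean)")
```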
- It is well-known that the most fundamental theory of Nature is quantum field theory, in which particles are mere field excitations and not the basic ontological entities. Nevertheless, people are content to base their foundations on a quantum mechanics of particles.
I agree with all of that, but I'm not at all convinced that taking all of that complexity into account makes any difference. There is a reason that discussions of Bell's inequality and other foundational issues use simplified models: reasoning about the more realistic models is much more difficult. The assumption is that if we can understand what is going on in the more abstract model, then we can extend that understanding to more realistic models. It's sort of like how, when Einstein was reasoning about SR, he used idealized clocks and light signals, and didn't try to take into account that clocks might be damaged by rapid acceleration, or that the arrival time of a light signal may be ambiguous, etc. The judgment that a simplified model captures the essence of a conceptual problem is certainly error-prone, and any conclusion is always eligible to be reopened if someone argues that more realistic details would invalidate it.
But in the case of QM, I really don't see that any of the difficulties of interpretation are resolved by the complexities you bring up. On the contrary, it seems to me that those complexities can't possibly resolve them in the way you seem to be suggesting.
Whether it's QM or QFT, you have the same situation:
- You have an experiment that involves a measurement with some set of possible outcomes: o_1, o_2, ..., o_N
- You use your theory to predict probabilities for each outcome: p_1, p_2, ..., p_N
- You perform the measurement and get some particular outcome: o_j
- Presumably, if you repeat the measurement often enough with the same initial conditions, the relative frequency of getting o_j will approach p_j. (If not, your theory is wrong, or you're making some error in your experimental setup, or in your calculations, or something)
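A minimal sketch of that frequency check, with invented outcome labels and probabilities:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical experiment: N possible outcomes o_j with theory-predicted
# probabilities p_j (values are made up for illustration).
outcomes = ["o_1", "o_2", "o_3"]
p = np.array([0.5, 0.3, 0.2])

# Repeat the measurement many times with the same initial conditions.
n_trials = 100_000
results = rng.choice(len(outcomes), size=n_trials, p=p)

# The relative frequency of each o_j should approach p_j.
for j, name in enumerate(outcomes):
    print(f"{name}: frequency {np.mean(results == j):.4f} vs predicted {p[j]:.4f}")
```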
What you seem to be saying is that the outcome o_j is actually determined by the details you left out of your analysis. That seems completely implausible to me, in light of the EPR experiment (unless, as in Bohmian mechanics, the details have a nonlocal effect). In EPR, Alice and Bob are far apart. Alice performs a spin measurement along a particular axis, and the theory says that she will get spin-up with probability 1/2 and spin-down with probability 1/2. It's certainly plausible, considering Alice's result in isolation, that the details of her measuring device, or the electromagnetic field, or the atmosphere in the neighborhood of her measurement might affect the measurement process, so that the result is actually deterministic, and the 50/50 probability is some kind of averaging over ignored details. But that possibility becomes completely implausible when you take into account the perfect anti-correlation between her result and Bob's. How do the details of Bob's device happen to always produce the opposite effect of the details of Alice's device?
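(For definiteness, the quantum prediction at issue: for the spin singlet, measurements along unit vectors a and b are correlated as
$$E(\mathbf a, \mathbf b) = -\,\mathbf a \cdot \mathbf b,$$
so each side separately sees 50/50 results, while equal axes give perfect anti-correlation.)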
I understand that you can claim that in reality, the anti-correlation isn't perfect. Maybe it's only 90% anti-correlation, or whatever. But that doesn't really change the implausibility much. In those 90% of the cases where they get opposite results, it seems to me that either the details of Bob's and Alice's devices are irrelevant, or that mysteriously, the details are perfectly matched to produce opposite results. I just don't believe that that makes sense. Another argument that it can't be the details of their devices that make the difference is that it is possible to produce electrons that are guaranteed to be spin-up along a certain axis. Then we can test whether Alice always gets spin-up, or whether the details of her measuring device sometimes convert that into spin-down. That way, we can get an estimate as to the importance of those details. My guess is that they aren't important, but I need somebody who knows about experimental results to confirm or contradict that guess.
So if the ignored, microscopic details of Alice's and Bob's devices aren't important (and I just don't see how they plausibly can be), that leaves the ignored environment: the rest of the universe. Can details about the rest of the universe be what determines Alice's and Bob's outcomes? To me, that sounds like a hidden-variables theory of exactly the type that Bell tried to rule out. The hidden variable \lambda in his analysis just represents any details that are common to Alice's and Bob's measurements. The common environment would certainly count. Of course, Bell's proof might have loopholes that haven't been completely closed. But it seems very implausible to me.
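(Schematically, Bell's locality assumption is that the correlation factorizes through the common past:
$$E(\mathbf a, \mathbf b) = \int d\lambda\, \rho(\lambda)\, A(\mathbf a, \lambda)\, B(\mathbf b, \lambda), \qquad |A| \le 1, \; |B| \le 1,$$
from which the CHSH bound
$$|E(\mathbf a, \mathbf b) + E(\mathbf a, \mathbf b') + E(\mathbf a', \mathbf b) - E(\mathbf a', \mathbf b')| \le 2$$
follows, while the singlet correlations reach 2√2. Any model in which the shared environment plays the role of \lambda is subject to the same bound.)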
What I would like to see is some kind of simulation of the EPR experiment in which the supposed nondeterminism is actually resolved by the ignored details. That's what would convince me.
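In the meantime, here is a baseline that any such simulation would have to beat - a toy local model of my own, in the spirit of Bell's original example (the shared hidden variable is a random direction fixed at the source, and each wing's outcome is the sign of its setting vector dotted with that direction; all numbers are for illustration only):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy *local* hidden-variable model of the EPR pair.
# The shared hidden variable lambda is a random direction fixed at the
# source; each wing's outcome depends only on its own setting and lambda.
n = 500_000
phi = rng.uniform(0.0, 2.0 * np.pi, size=n)
lam = np.stack([np.cos(phi), np.sin(phi)], axis=-1)   # hidden unit vectors

def outcome(angle, lam):
    """Deterministic +/-1 outcome: sign of (setting vector . lambda)."""
    setting = np.array([np.cos(angle), np.sin(angle)])
    return np.sign(lam @ setting)

def E(a, b):
    """Correlation between Alice at angle a and Bob at angle b (anti-aligned)."""
    return np.mean(outcome(a, lam) * -outcome(b, lam))

print("E(0, 0)    =", E(0.0, 0.0))          # -1: perfect anti-correlation
print("E(0, pi/4) =", E(0.0, np.pi / 4))    # -0.5, whereas QM gives -cos(pi/4) ~ -0.71

# CHSH combination at the angles where QM is maximally nonlocal:
a1, a2, b1, b2 = 0.0, np.pi / 2, np.pi / 4, -np.pi / 4
S = abs(E(a1, b1) + E(a1, b2) + E(a2, b1) - E(a2, b2))
print("CHSH S =", S, "(local bound: 2; quantum prediction: 2*sqrt(2) ~ 2.83)")
```

The model reproduces the perfect anti-correlation at equal settings, but at intermediate angles its correlation is linear in the angle rather than -cos(θ), and its CHSH value stays at Bell's bound of 2 instead of reaching the quantum 2√2. That gap is exactly what any "ignored details" proposal would have to close without nonlocality.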