ttn said:
I agree that some things in the "Bell's concept..." paper are not as mathematically precise as one could wish. This is in part because it's a pedagogical paper (for physics students and teachers) and partly because I think there is a point at which mathematical precision actually gets in the way of clear understanding. Perhaps you would like the scholarpedia "Bell's theorem" entry more -- it's a bit more technical (with two of the four authors being mathematicians) and covers the same ideas.
I agree that technical expositions often aren't very pedagogical. But I think that at some point there should be some place where the axioms of the theory are written down precisely and the theorems are proven rigorously. I believe that much of the power of physics comes from the fact that the mathematical language underpinning physics is very pedantic. I think it's desirable to know the limitations of our theories. This can often lead to a deeper understanding and even to new discoveries.
I will take a look at the scholarpedia article.
But as to the λ thing, it seems here is an example where the "vague paragraphs of text" are actually important to take in. The λ refers to the possible microstates of the particle pair that can be produced by a given sort of preparation procedure. For QM, the preparation procedure produces a pair in the spin singlet state, period. So ρ(λ) is a delta function! Do you think there's some problem integrating over that when the time comes?
If \rho(\lambda) were a delta function, then it would be possible to formulate this using the Dirac measure. But in QM, you can always multiply a state by a nonzero complex number and it still describes the same physical situation. So the distribution would really have to be something like an indicator function, and I'm not sure whether that can be done. Maybe it works if you modify the theory a little and switch to projective space: then \lambda isn't the wave function itself, but rather an equivalence class of wave functions.
However, I have to admit that I misunderstood this at first. I thought you were considering arbitrary distributions of the \lambda's, as one usually does in the derivation of Bell's theorem, and that looked like a hopeless task. If you restrict the distributions under consideration to such special cases, it looks much more feasible.
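To spell out the Dirac-measure idea, here is a sketch (using the deterministic form of Bell's correlation formula for simplicity, and assuming every run of the preparation yields the same singlet state \psi_s):

```latex
% General distribution over hidden states \lambda:
E(a,b) = \int A(a,\lambda)\,B(b,\lambda)\,d\rho(\lambda)
% If the preparation always yields \psi_s, take \rho = \delta_{\psi_s}:
E(a,b) = \int A(a,\lambda)\,B(b,\lambda)\,d\delta_{\psi_s}(\lambda)
       = A(a,\psi_s)\,B(b,\psi_s)
% To absorb the phase/normalization ambiguity, let \lambda range over
% projective space instead of the Hilbert space itself:
\lambda = [\psi] \in \mathbb{P}(\mathcal{H}),\qquad
[\psi] = \{\,c\psi \mid c \in \mathbb{C}\setminus\{0\}\,\}
```

On the projective-space reading, the "indicator function" worry goes away: the Dirac measure sits on a single point [\psi_s] of \mathbb{P}(\mathcal{H}) rather than on a ray's worth of equivalent vectors.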
Bohr's Copenhagen interpretation insisted that the directly perceivable macroscopic classical world existed. (He literally insisted repeatedly on this, in the context of saying that all empirical data was ultimately statements about such macroscopic things.) So strictly speaking, the Copenhagen interpretation involves dividing the world into two realms -- the classical/macro realm and the quantum/micro realm. The former just unproblematically exists, essentially by postulate, and so there are lots and lots of beables there.
Well, I think that the term "Copenhagen interpretation" is used more loosely today. I would probably consider myself a quantum instrumentalist. I don't assume that the classical world exists (more precisely, I'm agnostic about it, and I tend to drop the assumption of its existence when problems emerge). In fact, the measurement apparatus itself, and everything else, should also behave quantum mechanically. We just don't include it in our models most of the time (but we could, and doing so leads to very useful results; see decoherence). The quantum-classical split doesn't have an ontological status, in my opinion. We use classical theories only to interpret the results of measurements; they aren't part of the quantum theory itself. That means: if we obtain a value for a position measurement of a quantum particle, for example, and if we were then to use a classical theory for the further description of the system (instead of quantum mechanics), it would probably be best to assign the obtained value to the position variable of that classical theory in order to model the situation well. That doesn't mean that the quantum particle has suddenly acquired a position. It just means that we have mentally assigned a classical position to it in order to get a more intuitive understanding of the situation. We do this for the sole reason that we have more intuition for classical theories than for quantum theories. If this view of quantum theory makes me a non-Copenhagenist, so be it, but I think it is shared, at least in a similar form, by most physicists. I don't insist on being an advocate of any particular interpretation.
So, long story short: classical theories aren't part of the quantum description; they are only an interpretational tool. Quantum mechanics doesn't assign definite values to observables. It assigns only probability distributions, from which we can calculate expectation values. Nothing more. Quantum mechanics doesn't tell us that the outcome of a position measurement will be "5". It just tells us that if we prepare the system identically and perform the same measurement 100 times, then the mean of the results will be "5". In fact, QM doesn't even have a mechanism to predict individual outcomes of an experiment. It's not a theory like classical mechanics, where you just don't know the exact positions and momenta and therefore supplement the theory with a probability distribution. QM is probability, full stop. It's not a theory about an underlying reality. Individual outcomes aren't even observables of the pure quantum theory, so how can they be beables of the theory? You said yourself that a beable is
"whatever a certain candidate theory says one should take seriously, as corresponding to something that is physically real."
But quantum mechanics doesn't say that one should take the individual outcomes seriously, because it's not a theory of individual outcomes. It doesn't predict them; they aren't an element of the theory (unless you add them artificially, as in Bohmian mechanics, but I'm only talking about standard QM here). Individual outcomes are external things that aren't part of the theory, and if they aren't part of the theory, they can't be beables of the theory. In your own paper, you quoted Bell saying that beables are always to be viewed with respect to a particular theory (in our case QM); they aren't global things that apply to all theories. I think this is even the most relevant difference between Bohmian mechanics and standard QM. Assigning beable status to individual outcomes would arguably turn standard QM into something very close to Bohmian mechanics.
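The "statistics only" claim above can be made concrete with a small numerical sketch. All the numbers here (the value 5 and the spread) are invented for illustration; the point is just that the prediction attaches to the ensemble, not to any single run:

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def measure_position():
    # Hypothetical apparatus: individual outcomes scatter around 5.
    # Standard QM predicts only this distribution, never a single draw.
    return random.gauss(5.0, 1.0)

# 100 identically prepared systems, 100 identical measurements:
outcomes = [measure_position() for _ in range(100)]
mean = sum(outcomes) / len(outcomes)

# No single entry of `outcomes` is predicted by the theory,
# but the mean is pinned down (up to statistical fluctuation).
print(mean)
```

Any individual entry of `outcomes` is, from the theory's point of view, just a sample; only the distribution (and hence the mean) is a prediction.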
In short, the "measurement outcomes" most certainly *are* beables for Copenhagen. Bohr himself insisted on it specifically. And to deny such outcomes beable status -- for *any* theory -- is frankly borderline crazy. We are talking here about concrete things like which way a certain pointer in a certain lab pointed at a certain time. To deny "physically real" status to such things is ... well ... to approach solipsism.
As I said, you could also include the measurement apparatus, and thus the pointer, in the quantum description, as the decoherence people do. Then decoherence tells you that the quantum state of the pointer will be sharply peaked over certain values after a very short time, and that the peak keeps getting sharper every nanosecond. But it will never reach an exact eigenstate: technically it remains in a superposition unless you wait an infinite amount of time, even though the peak becomes so sharp that it practically makes no sense to talk about superpositions anymore. In that sense, the pointer of the measurement apparatus -- if described quantum mechanically -- behaves no differently than a quantum particle: we can compute only probability distributions. It's just that macroscopic objects have sharply peaked quantum states, much like particles shortly after a measurement. Sharply peaked quantum states are the classical limit of quantum theory, so to speak, but they aren't classical; they are only classical enough in the sense that the corresponding classical theory provides a good approximation to the quantum description. I really don't have a problem with that. In particular, I don't see why this would approach solipsism. I have spent a considerable amount of time thinking about this kind of stuff, and I haven't always thought about it this way.
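As a toy illustration of the "sharper every nanosecond, but never exactly classical" point, here is the generic exponential-suppression caricature of decoherence (all rates and positions are made up; this is not a model of any real apparatus):

```python
import math

# Toy pointer in a superposition of two positions x0 and x1,
# with an illustrative decoherence rate gamma.
gamma = 1.0
x0, x1 = 0.0, 1.0

def off_diagonal(t):
    # Generic caricature: the coherence between the two pointer
    # positions decays as exp(-gamma * (x0 - x1)**2 * t).
    return 0.5 * math.exp(-gamma * (x0 - x1) ** 2 * t)

# The coherence shrinks at every instant but is never exactly zero
# at any finite time -- the superposition is suppressed, not removed.
for t in [0.0, 1.0, 10.0]:
    print(t, off_diagonal(t))
```

The off-diagonal element is astronomically small almost immediately, yet strictly positive for all finite times, which is exactly the "practically, but never technically, classical" behavior described above.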
You have to view it this way: a physical model is to nature like the word "banana" is to the yellow thing you can buy in the supermarket. Ceci n'est pas une pipe (google it if you don't recognize it). Theories only describe our world; some theories have simply turned out to be useful. It's the theories that we classify with words like "local", "realistic" and so on -- not nature itself. If I say that standard QM doesn't have a beable corresponding to individual outcomes, this means that standard QM isn't concerned with individual outcomes. It doesn't make predictions about them. It only describes some aspects of the world, just as Newtonian gravity doesn't describe nuclear physics. Still, we can classify these theories using words like "local". You wouldn't say that Newtonian gravity can't be classified as "local" or "nonlocal" just because it has no means to describe nuclear physics. QM has no means to describe individual outcomes. Maybe that doesn't satisfy you, but it's enough for virtually every application I can think of, and it doesn't prevent the theory from being classified. Maybe there is a deeper theory that can talk about individual outcomes and has them as beables. But so far, there is only Bohmian mechanics, and I don't think it has any particular advantage over standard QM.
What exactly you say is going on physically at the micro-level of course depends on which theory you're talking about and in particular what objects have beable status for the theory in question. But I fundamentally disagree about the last part of what you write. An individual measurement absolutely does tell us something about nature. Think about what an individual measurement means, concretely and physically. It means (at least) that some macroscopic object (think: pointer) moved a certain way. That is something we can directly perceive. It is pre-eminently physical, a fact about nature. (This was also one of the points that I guess you glossed over in the "long, vague paragraphs of text".)
But an individual measurement tells us very little about nature. It could just as well be a measurement error; with only one datapoint, we are completely unable to tell. The actual value of a single measurement is almost useless -- we need a larger dataset to gain real information, and the standard deviation is just as important as the measured values themselves. Measurements are always imperfect, and physics has to deal with this imperfection somehow. There is no way around it: there will never be a perfect measurement apparatus, and so physics can't possibly live without statistics. This is a fundamental fact that can't be overcome. The only interesting features of a measured dataset are its statistical properties. If you measure a single datapoint, say the position of an atom, to be "5", this only tells you that the position might have been "5" -- but it might just as well not have been, because the apparatus may have given you a wrong number due to the intrinsic imperfection of physical measurements. Even worse, if you measure the value "5", then this value is almost certainly wrong, because for a continuous variable a measurement error of exactly "0" has probability zero. We can never reliably reproduce individual outcomes, but we can reproduce their statistics. That, by the way, is also one of the main reasons why I'm willing to give up the beable status of individual measurements so easily.
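The single-datapoint-versus-statistics point can be sketched numerically. Here is a hypothetical noisy apparatus (the true value 5 and the noise level are invented); the standard error of the mean shrinks like 1/sqrt(N), so only a large ensemble pins the value down:

```python
import random
import statistics

random.seed(1)  # fixed seed for reproducibility

def noisy_measurement(true_value=5.0, noise=1.0):
    # Illustrative apparatus: every reading carries Gaussian noise,
    # so an individual outcome is almost surely not exactly correct.
    return random.gauss(true_value, noise)

for n in [1, 100, 10000]:
    data = [noisy_measurement() for _ in range(n)]
    mean = statistics.fmean(data)
    # Standard error of the mean ~ stdev / sqrt(n); undefined for n = 1,
    # which is exactly the "one datapoint tells you nothing" situation.
    sem = statistics.stdev(data) / n ** 0.5 if n > 1 else float("nan")
    print(n, mean, sem)
```

A single draw can sit anywhere within the noise band, while the 10000-sample mean lands close to 5 with a small standard error -- the statistics are reproducible even though the individual outcomes are not.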
Some do, some don't. But (as discussed above) this doesn't matter. You can give it either status you want, and "Copenhagen QM" is still nonlocal.
But maybe giving up the individual outcomes as beables makes it local. In fact, I could imagine that this would make Bell's locality definition equivalent to the definition that quantum field theorists use, which would be really cool in my opinion.