Dadface said:
But what assumptions about properties, if any, are made in QM? Are either of the following assumptions made?
Sorry for coming to this a bit late. Been a tad busy.
My advice for anyone trying to understand Bell's inequality is to completely forget about QM. The inequality itself has absolutely nothing to do with QM - it is a restriction on plain old probabilities.
What is the Bell inequality about, then? Well, we imagine 2 locations - say Alice's Lab and Bob's Lab. There's some measuring device at each location. Each device has a dial that can be set to various values, and the devices also have a readout to give the result obtained during the measurement.
So nothing quantum, no assumption about anything at all - just settings and measurement results - just data.
We imagine that Alice and Bob do a whole series of runs of this experiment and then look at the data. So they're going to be able to work out (from the data) things like the probability of getting some result. They're also going to be able to work out (from the data) the probability of getting some result ##given## a particular setting that they chose. And if they get together at some later stage they can also pool their data to work out the joint probabilities.
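Just to stress how mundane this step is, here's a minimal Python sketch of the bookkeeping (the data layout and the setting/result labels are made up for illustration - nothing hangs on them):
[code]
# Each pooled run: (Alice's setting, Bob's setting, Alice's result, Bob's result)
runs = [
    ("a1", "b1", +1, -1),
    ("a1", "b2", -1, -1),
    ("a2", "b1", +1, +1),
    # ... thousands more runs ...
]

def p_joint(runs, A, B, a, b):
    """Estimate P(A, B | a, b): the relative frequency of the result pair
    (A, B) among just those runs where the settings were (a, b)."""
    given = [r for r in runs if r[0] == a and r[1] == b]
    if not given:
        return None  # no runs with this setting pair
    return sum(1 for (_, _, ra, rb) in given if ra == A and rb == B) / len(given)

print(p_joint(runs, +1, -1, "a1", "b1"))  # 1.0 with the toy data above
[/code]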
Let's imagine they've got together to look at their joint data. They find that there's some evidence that their data are correlated. They want to explain this - correlation cries out for explanation. Surely there's some connection between the things they've measured if they're seeing a correlation?
So they make the assumption that there is some set of properties (unmeasured in their experiments) that is the underlying cause of the observed correlation in the data.
So experimentally they can work out the probabilities of particular results ##given## particular device settings, ##P(A,B | a,b)##, where ##A## and ##B## are the measurement results Alice and Bob respectively get, and ##a## and ##b## are the measurement device settings they respectively chose.
Their assumption of some underlying cause means that really, if they could somehow know the underlying properties, they would have ##P(A,B | a,b, \lambda, \mu, \ldots)## where ##\lambda, \mu, \ldots## are the values of these underlying properties. It turns out that we can lump all of these underlying properties together and just use the single symbol ##\lambda## to represent all of them. So ##\lambda## just means some set of properties.
These properties 'explain' the observed correlation. What does this mean? Well, it means that if we've taken account of (or we know) ##all## of these properties, then any leftover fluctuation in the data has to be independent (if it weren't, if there were still some correlation left, then we wouldn't have captured all of the underlying properties). That means we can write $$P(A,B | a,b, \lambda ) = P(A | a,b, \lambda ) \, P(B | a,b, \lambda )$$
Now of course it would be rather strange to assume that the results in Alice's Lab depend in some way on the ##settings## in Bob's Lab (and vice versa). If there were some dependence we'd have to explain that - there'd have to be some connection, some difference to Alice's set-up when Bob turned his dial to another setting. Colloquially, we might say that Alice's experimental set-up would 'know' about any changes made to Bob's configuration. So it's very natural to assume that no such connection exists. This is the 'locality' assumption - and it's very reasonable, as you can see!
The upshot is that the conditional joint probability can now (with this locality assumption) be written as $$P(A,B | a,b, \lambda ) = P(A | a, \lambda ) \, P(B | b, \lambda )$$
The last piece is the 'realism' bit - this gets used later on in the derivation, where an assumption is made in the math. This assumption is tantamount to saying that properties exist independently of measurement. It's given the fancy name of 'counterfactual definiteness' - but it's really nothing more than a cornerstone of classical physics. In a nutshell it's saying that if I have an object I can measure its position, but I could have measured its momentum instead and I'd have gotten such and such a value. If you think about it, it's pretty much an underlying assumption of all classical physics. The term 'counterfactual definiteness' just makes it sound like something mysterious and intellectual.
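One step worth making explicit, since it's the form the derivation actually works with: ##\lambda## itself is never observed, so to get back to the measured probabilities we have to average over it. Writing ##p(\lambda)## for the distribution of the underlying properties (a sum here for simplicity - an integral if ##\lambda## is continuous), the locality assumption gives $$P(A,B | a,b) = \sum_\lambda p(\lambda) \, P(A | a, \lambda) \, P(B | b, \lambda)$$ Note there's a further tacit assumption buried in this: ##p(\lambda)## itself doesn't depend on the settings ##a,b## - i.e. the underlying properties don't 'know' what Alice and Bob are going to choose to measure.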
With these entirely reasonable assumptions it can then be shown that there exist constraints on the probability functions - not all choices of function will be consistent. (This kind of result, in a totally different context, was derived by Boole a century before Bell - so it's known in classical probability theory.) It simply says that, given joint distributions of random variables, the marginal distributions are constrained. The constraint for our experimental set-up above is, of course, simply the Bell inequality.
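For concreteness, here's the most commonly quoted modern form of that constraint - the CHSH inequality (Bell's 1964 original is a slightly less general version). Give Alice two possible settings ##a, a'## and Bob two possible settings ##b, b'##, with results ##A, B = \pm 1##, and define the correlation $$E(a,b) = \sum_\lambda p(\lambda) \, \bar{A}(a,\lambda) \, \bar{B}(b,\lambda)$$ where ##\bar{A}(a,\lambda)## and ##\bar{B}(b,\lambda)## are the average results given ##\lambda##, each lying between ##-1## and ##+1##. A couple of lines of algebra then give $$\left| E(a,b) + E(a,b') + E(a',b) - E(a',b') \right| \le 2$$ for ##any## choice of the four settings.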
No QM here so far - no assumptions of any mechanisms, no 'fields', no 'particles', just measurement results and the probabilities that can be worked out from them and some very natural assumptions about what might be causing any correlation between the data.
The thing is, as we know, there are physical systems we can examine - and when we do the experiments we find the probabilities we work out from the data are not constrained as we expect from the analysis. Therefore at least one of the assumptions we've made in the analysis can't be correct. They might all be incorrect, but at least one has to be decidedly iffy.
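If you want to see both sides of this numerically, here's a toy Python sketch (the local model is my own invented example - nothing canonical about it). Each pair carries a shared random angle ##\lambda##, and each result depends only on the local setting and ##\lambda##. However you tune a model like this, the CHSH quantity stays at or below 2; the quantum prediction for the spin-singlet state, ##E(a,b) = -\cos(a-b)##, reaches ##2\sqrt{2} \approx 2.83## at the standard settings.
[code]
import math
import random

def chsh(E, a, a2, b, b2):
    """The CHSH combination |E(a,b) + E(a,b') + E(a',b) - E(a',b')|."""
    return abs(E(a, b) + E(a, b2) + E(a2, b) - E(a2, b2))

# Toy local hidden-variable model: lambda is a shared random angle and each
# side's result depends only on its OWN setting and lambda (locality).
def E_local(a, b, n=200_000):
    total = 0
    for _ in range(n):
        lam = random.uniform(0.0, 2.0 * math.pi)
        A = 1 if math.cos(a - lam) >= 0 else -1
        B = -1 if math.cos(b - lam) >= 0 else 1  # anti-correlated at equal settings
        total += A * B
    return total / n

# Quantum-mechanical prediction for the spin-singlet state.
def E_qm(a, b):
    return -math.cos(a - b)

# Settings (in radians) chosen to maximise the quantum value.
a, a2, b, b2 = 0.0, math.pi / 2, math.pi / 4, -math.pi / 4

print("local model:", chsh(E_local, a, a2, b, b2))  # ~2, up to sampling noise
print("QM:         ", chsh(E_qm, a, a2, b, b2))     # 2*sqrt(2) ~ 2.83
[/code]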
So as far as QM is concerned (which does predict the right experimental result) we're saying that QM cannot be wholly replaced by any theory which makes all of these natural assumptions. That's Bell's theorem.
Don't know whether this answers your question or not - but hope it helps frame things a bit.