I'm not sure I would agree with that, gva. If you look at one of the clearest papers ever written on the Bell inequalities, by the master himself (John Bell):
https://cds.cern.ch/record/142461/files/198009299.pdf
Then you'll see that mid-way through that paper there is a very general model of the experimental situation that is used to derive the inequality. We have 2 measurement devices that measure 'something', and the results of the measurement are yes/no (or 1/0 or up/down - whichever takes your fancy). We can also adjust the setting on each measurement device. If we let A and B stand for the measurement results, and a and b stand for the measurement settings of the respective devices, then many runs of the experiment will allow us to measure P(A,B | a,b).
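To make that concrete, here is a minimal sketch of estimating P(A,B | a,b) from repeated runs. The function names, the toy 'source', and the setting labels are all illustrative choices of mine, not anything from Bell's paper - the point is only that the joint distribution is just counted frequencies of pings and dings:

```python
from collections import Counter
import random

def estimate_joint(run_experiment, settings, n_runs=10000, seed=0):
    """Estimate P(A, B | a, b) by tallying outcomes over many runs.

    `run_experiment(a, b, rng)` stands in for one run of the apparatus:
    given settings (a, b) it returns the pair of yes/no outcomes (A, B).
    """
    rng = random.Random(seed)
    counts = Counter(run_experiment(*settings, rng) for _ in range(n_runs))
    return {outcome: c / n_runs for outcome, c in counts.items()}

# Toy source: a shared coin flip makes both outcomes agree,
# whatever the settings are - perfectly correlated results.
def correlated_source(a, b, rng):
    shared = rng.choice([0, 1])
    return (shared, shared)

p = estimate_joint(correlated_source, settings=("a1", "b1"))
```

With this toy source only the outcomes (0,0) and (1,1) ever occur, each with frequency about one half - a correlation crying out for an explanation, which is exactly where the hidden variables come in.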
The idea is to explain any correlation between the results by some extra variables - something is assumed to be happening behind the scenes, so to speak, that gives rise to the correlations. So, it is assumed, what we actually have is P(A,B | a,b, {hv}), where hv is this collection of 'hidden' variables - which, if we only knew them, we could use to explain the measured statistics; averaging over the hidden variables recovers the measured P(A,B | a,b). Bell takes very great care to stress that no assumption is made about the nature of these hidden variables - or even about the thing we're measuring - so we could have particles, fields, or some hybrid entity. The hidden variables could be a collection of discrete things or functions - or even wavefunctions! No model of physics is assumed - in particular no quantum assumptions are used.
If we then make some (on the surface) very reasonable assumptions about the properties of those variables, it can be shown that there is a constraint on the correlations. Since the correlations are things we can directly measure, we can test this constraint on any physical system that gives us these yes/no answers and might exhibit correlations.
These 'reasonable' assumptions are the realism and locality conditions, which might be loosely expressed as:
- things have properties independent of measurement
- statistics 'here' are not influenced by settings 'there'
Using these assumptions we get the constraint on the correlations that have to be satisfied by probabilities of the form P(A,B | a,b, {hv}) - and this is the celebrated Bell inequality.
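Here is a small sketch of what that constraint looks like in the CHSH form of the inequality, |S| <= 2. The strategies and angles below are illustrative choices of mine (this is a simulation, not Bell's derivation): any local model in which both outcomes are fixed functions of the setting and a shared hidden variable stays within the bound.

```python
import math
import random

def chsh_value(strategy, n_trials=20000):
    """Estimate the CHSH quantity S = E(a,b) - E(a,b') + E(a',b) + E(a',b')
    for a local hidden-variable strategy.

    `strategy(setting, hv)` returns +1 or -1 given a device setting (an
    angle) and the shared hidden variable hv. Locality is built in: each
    outcome depends only on its own setting and the shared hv.
    """
    rng = random.Random(0)
    a, a2 = 0.0, math.pi / 2           # Alice's two settings
    b, b2 = math.pi / 4, -math.pi / 4  # Bob's two settings

    def E(x, y):  # correlator for one pair of settings
        total = 0
        for _ in range(n_trials):
            hv = rng.uniform(0, 2 * math.pi)  # shared hidden variable
            total += strategy(x, hv) * strategy(y, hv)
        return total / n_trials

    return E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

# A deterministic local model: answer +1 if the hidden angle lies
# within a quarter turn of the setting, else -1.
def sign_model(setting, hv):
    return 1 if math.cos(hv - setting) >= 0 else -1

s = chsh_value(sign_model)

# A strategy that ignores the hidden variable entirely saturates the
# classical bound: every correlator is 1, so S = 1 - 1 + 1 + 1 = 2.
s_det = chsh_value(lambda setting, hv: 1)
```

However you rig the hidden variables, such local models give |S| <= 2 - whereas quantum mechanics (and experiment) can reach 2*sqrt(2) at these settings, which is precisely why the inequality is testable.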
It is important to note that absolutely no assumption is made about particles or fields or the nature of the extra variables (other than these 2 'reasonable' assumptions). I have heard it argued that Bell makes the implicit assumption that we have particles - but I don't understand this myself, since all we're connecting is measurement results using pretty straightforward probability models. The detectors just go 'ping' or 'ding', and we just model the statistics of the pings and dings using good old probability functions. What causes the pings or dings is of earth-shattering irrelevancy.
There is one implicit assumption, though - that it is possible to make truly random choices of the measurement device settings (sometimes called the 'freedom of choice' assumption).