Rade said:
In reading about Bell tests I found this link:
http://www.abc.net.au/science/features/quantum/, which discusses numerous loopholes in both the theory and the experimental design of Bell tests. Taken together, these appear to suggest that "local realist" explanations of all Bell test violations should exist. That is, no single Bell test is free of at least one of the loopholes listed below, and since it takes only one loophole to invalidate a test's results as evidence against "local realism", there are in fact no Bell tests that disprove "local realism". Second part of the thread: is anyone aware of yet additional "loopholes" in the published literature?
Bell Test Loopholes (every Bell test contains at least one of these)
1. Fair sampling loophole
2. Subtraction of accidentals loophole
3. Failure of rotational invariance loophole
4. Synchronization loophole
5. Enhancement loophole
6. Double detection loophole
7. Locality-light cone interpretation loophole
8. Detection Loophole
9. Coincidence Loophole
That's a lot of loopholes. But experimenters are quite careful about their experimental designs and their adjustments to the raw data. Bell tests are getting better; instrumentation and detection are getting better. I'm betting that when a test is done that everybody can agree is loophole free, that test will show a violation of the inequality applied to it.
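To make "violation of the inequality" concrete, here's a minimal numerical sketch (Python is just my choice here) of the standard CHSH quantity S = E(a,b) - E(a,b') + E(a',b) + E(a',b'). Any local realist model has to satisfy |S| <= 2, while quantum mechanics, using the correlation E = cos 2(a - b) for the maximally entangled polarization state used in the usual optical tests, predicts 2*sqrt(2) at the standard angle choices:

    import numpy as np

    def E(a_deg, b_deg):
        # Quantum correlation for the polarization-entangled pairs used in
        # the usual optical Bell tests: E(a, b) = cos 2(a - b)
        return np.cos(2 * np.radians(a_deg - b_deg))

    # Standard CHSH settings, in degrees
    a, a2, b, b2 = 0.0, 45.0, 22.5, 67.5

    S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
    print(f"S = {S:.3f}")   # ~2.828, i.e. 2*sqrt(2)
    # Local realist bound: |S| <= 2

That gap between 2*sqrt(2) and 2 is the whole game: a loophole-free experiment that lands near 2.8 is what I'm betting on.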
The only questions remaining to be answered will then have to do with
how the results are interpreted. What do experimental violations of the
inequalities mean? Apparently most physicists don't take
the idea that nature is nonlocal too seriously. Keep in mind that
"quantum nonlocality" doesn't mean superluminal propagation
of anything. I would also guess that most physicists think that
there is a real world out there full of phenomena that exist whether
we happen to be looking at them or not -- and that these phenomena
have certain definite properties at any instant, and function according
to certain rules.
So, what exactly does it mean to say that experimental violations
of Bell inequalities contradict local realism, or that such violations
disallow local realistic models? I'm not sure.
Here's Wikipedia's definition of local realism:
-----------
In physics, the principle of locality is that distant objects cannot have direct influence on one another: an object is influenced directly only by its immediate surroundings. This was stated as follows by Albert Einstein in his article "Quantum Mechanics and Reality" ("Quanten-Mechanik und Wirklichkeit", Dialectica 2:320-324, 1948):
"The following idea characterises the relative independence of objects far apart in space (A and B): external influence on A has no direct influence on B; this is known as the Principle of Local Action, which is used consistently only in field theory. If this axiom were to be completely abolished, the idea of the existence of quasienclosed systems, and thereby the postulation of laws which can be checked empirically in the accepted sense, would become impossible."
Local realism is the combination of the principle of locality with the assumption that all objects must objectively have their properties already before these properties are observed. Einstein liked to say that the Moon is "out there" even when no one is observing it.
------------
According to this definition, locality means that the settings (or results, or whatever) at A don't affect the results at B, and vice versa. There's nothing in the experiments themselves to indicate that A and B are affecting each other; saying that they are is a matter of interpretation. My current approach is to assume that the experimental results are more or less correct, and that nature is local (i.e., that the speed of light is a limit) -- so there's something wrong with interpretations that conclude that violations of the inequalities mean that 'something' is propagating superluminally between A and B.
Realism here is defined as "the assumption that all objects must objectively have their properties already before these properties are observed." Well, nobody really knows what the term "photon", as defined in quantum theory, corresponds to in nature.
In optical Bell tests, the photons that are detected are transmitted by
the filters. It isn't known exactly what is actually happening 'in nature' at the level of the interaction between the emitted light and the filter. It isn't known exactly what is happening between emission and detection, or during the emission process. There's no qualitative apprehension of entities or events or processes at the 'quantum level' of nature. There are models which relate experimental preparations and results of course, but these don't give a clear qualitative picture of the 'quantum world'.
Einstein wanted a more realistic theory. Everybody would like to have
a more realistic theory -- i.e., an accurate 'picture' of what's happening at
the level of the quantum. But there isn't one.
So, what does it mean to say that a photon has certain properties
independent of detection? Not much so far -- all attempts, afaik,
to model Bell tests in a 'realistic' way have been contradicted.
Of course that doesn't mean that the light incident on the filters
doesn't have any physical properties. It wouldn't make any
sense to say that. But the 'realistic' models are not up to speed yet.
The optical Bell tests that use atomic cascades (see, for example, Aspect et al. 1982) indicate that in order to get entanglement (i.e., violation of an inequality), coincident detections have to be causally linked to the same oscillator (the same atom). Quantum theory says that two photons emitted in opposite directions by the same atom will have identical polarization. This doesn't refer to any property of the emitted light prior to detection. Nevertheless, the tacit assumption is that light emitted in opposite directions by the same oscillator at the same instant has some common physical characteristics (which would correspond to Schroedinger's meaning when he coined the term 'entanglement') -- which is the thing that Bell's theorem, as interpreted by some, is supposed to disallow.
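To illustrate what "identical polarization" amounts to operationally, here's a small sketch (assuming ideal polarizers and the (|HH> + |VV>)/sqrt(2) pair state that the cascade sources are usually described with): the joint detection probabilities depend only on the relative angle of the two polarizers, and at equal settings the two sides agree every time:

    import numpy as np

    def joint_probs(a, b):
        # Ideal-polarizer predictions for a (|HH> + |VV>)/sqrt(2) pair;
        # angles a, b in radians; '+' = transmitted, '-' = rejected
        c2 = 0.5 * np.cos(a - b) ** 2
        s2 = 0.5 * np.sin(a - b) ** 2
        return {'++': c2, '--': c2, '+-': s2, '-+': s2}

    print(joint_probs(0.3, 0.3))
    # equal settings -> {'++': 0.5, '--': 0.5, '+-': 0.0, '-+': 0.0}
    # the two sides always agree: "identical polarization"

    a, b = np.radians(20.0), np.radians(55.0)
    p = joint_probs(a, b)
    E = p['++'] + p['--'] - p['+-'] - p['-+']
    print(E, np.cos(2 * (a - b)))   # both give cos 2(a - b)

Note that these are predictions about coincidence counts, not a statement about what each photon 'is' before it meets its polarizer.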
The whole thing is an intriguing mystery, a significant part of which is evaluating the physical meaning of Bell's theorem. My current understanding is that, while experimental violations of inequalities can be used as a fairly trustworthy indicator of entanglement, the Bell tests aren't necessarily telling us that nature is nonlocal or that a more realistic theory is, in principle, impossible.
Anyway, instead of worrying about the loopholes I would rather marvel at the promise of actually being able to do stuff with quantum entanglement.
It's not that experimental loopholes aren't an important consideration -- it's that statements about nature being nonlocal or realistic theories being impossible because of Bell's theorem aren't an important consideration.
Inequalities based on a Bell-type model of the prototypical optical Bell test will always be violated. What A and B are measuring individually is not the same as what A and B are measuring jointly. In the joint context (the Bell test context), the settings and results at one end aren't separable from (aren't independent of) the settings and results at the other. They're being considered *together*. Inequalities based on a model that separates them will always be violated.
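Here's a toy illustration of that separability point (a sketch of a generic local hidden-variable model, not any particular published one): each pair carries one shared polarization angle, and each arm's outcome depends only on its own setting plus that shared variable. A model of this separable kind never pushes the CHSH quantity past 2, while the quantum formula gives 2*sqrt(2):

    import numpy as np

    rng = np.random.default_rng(0)

    def outcome(setting, lam):
        # Deterministic local rule: the result at one arm depends only on
        # that arm's setting and the shared hidden polarization angle lam
        return np.where(np.cos(2 * (setting - lam)) >= 0, 1, -1)

    def E_lhv(a, b, n=200_000):
        lam = rng.uniform(0.0, np.pi, n)   # one shared lam per pair
        return np.mean(outcome(a, lam) * outcome(b, lam))

    a, a2, b, b2 = np.radians([0.0, 45.0, 22.5, 67.5])
    S = E_lhv(a, b) - E_lhv(a, b2) + E_lhv(a2, b) + E_lhv(a2, b2)
    print(f"toy separable model: S ~ {S:.3f}")          # ~2.0, never beyond
    print(f"quantum prediction:  S = {2*np.sqrt(2):.3f}")  # ~2.828

This particular toy rule lands right at the bound of 2 (up to sampling noise); no choice of local rule for the outcomes gets it to 2.828.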
Keep in mind that the only physical connection that A and B have is that they're jointly measuring something that came from the same oscillator at the same instant. The coincidence correlations are produced by the joint settings of the filters.
So, there are only two possibilities. Either the filters are jointly analyzing the same thing, or the two arms of the setup are 'communicating' or 'influencing each other' via some unknown and undetectable, superluminally propagating 'disturbance' whose speed has no upper limit. My guess is that most working physicists are inclined to believe the former rather than the latter.