What do violations of Bell's inequalities tell us about nature?

Violations of Bell's inequalities suggest that either non-locality or anti-realism must be true in quantum mechanics, but they do not definitively imply one over the other. Bell's theorem indicates that classical locality cannot be maintained within quantum theory, challenging traditional materialist views. Some participants argue that without a clear mechanism, accepting non-locality is problematic, while others express skepticism about interpretations like superdeterminism or many worlds due to their untestable nature. The discussion highlights a divide in preferences for either anti-realism or non-locality, with many calling for more experimental evidence to clarify these interpretations. Ultimately, the implications of Bell's inequalities remain a complex and unresolved issue in the foundations of quantum physics.

What do observed violations of Bell's inequality tell us about nature?

  • Nature is non-local

    Votes: 10 31.3%
  • Anti-realism (quantum measurement results do not pre-exist)

    Votes: 15 46.9%
  • Other: Superdeterminism, backward causation, many worlds, etc.

    Votes: 7 21.9%

  • Total voters
    32
  • #91
audioloop said:
travis, do you believe in CFD ?

Interesting question.

The first thing I'd say is: who cares? If the topic is Bell's theorem, then it simply doesn't matter. CFD *follows* from locality in the same way that "realism" / hidden variables do. That is: the only way to locally (and, here crucially, non-conspiratorially) explain even the perfect correlations is with a "realistic" hidden-variable theory with pre-determined values for *all* possible measurements, i.e., a model with the CFD property. So... to whatever extent somebody thinks CFD needs to be assumed to then derive a Bell inequality, it doesn't provide any kind of "out" since CFD follows from locality. That is, the overall logic is still: locality --> X, and then X --> inequality. So whether X is just "realism" or "realism + CFD" or whatever, it simply doesn't make any difference to what the correct answer to this thread's poll is.

So, having argued that it's irrelevant to the official subject of the thread, let me now actually answer the question. Do I believe in CFD? I'm actually not sure. Or: yes and no. Or: it depends on a really subtle point about what, exactly, CFD means. Let me try to explain. As I think everybody knows, my favorite extant quantum theory is the dBB pilot-wave theory. So maybe we can just consider the question: does the pilot-wave theory exhibit the CFD property?

To answer that, we have to be very careful. One's first thought is undoubtedly that, as a *deterministic* hidden variable theory, of course the pilot wave theory exhibits CFD: whatever the outcome is going to be, is determined by the initial conditions, so ... it exhibits CFD. Clear, right?

On the other hand, I've already tried to make a point in this thread about how, although the pilot-wave theory assigns definite pre-existing values (that are then simply revealed in appropriate measurements) to particle positions, it does *not* do this in regard to spin. That is, the pilot-wave theory is in an important sense not "realistic" in regard to spin. And that starts to make it sound like, actually, at least in regard to the spin measurements that are the main subject of modern EPR-Bell discussions, perhaps the pilot-wave theory does *not*, after all, exhibit CFD.

So, which is it? Actually both are true! The key point here is that, according to the pilot-wave theory, there will be many physically different ways of "measuring the same property". Here is an example that goes back to David Albert's classic book, "QM and Experience." Imagine a spin-1/2 particle whose wave function is in the "spin up along x" spin eigenstate. Now let's measure its spin along z. The point is, there are various ways of doing that. First, we might use a set of SG magnets that produce a field like B_z ~ B_0 + bz (i.e., a field in the +z direction that increases in the +z direction). Then it happens that if the particle starts in the upper half of its wave packet (upper here meaning w.r.t. the z-direction) it will come out the upper output port and be counted as "spin up along z"; whereas if it happens instead to start in the lower half of the wave packet it will come out the lower port and be counted as "spin down along z". So far so good. But notice that we could also have "measured the z-spin" using a SG device with fields like B_z ~ B_0 - bz (i.e., a field in the z-direction that *decreases* in the +z direction). Now, if the particle starts in the upper half of the packet it'll still come out of the upper port... *but now we'll call this "spin down along z"*. Whereas if it instead starts in the lower half of the packet it'll still come out of the lower port, but we'll now call this *spin up along z*.

And if you follow that, you can see the point. Despite being fully deterministic, what the outcome of a "measurement of the z-spin" will be -- for the same exact initial state of the particle (including the "hidden variable"!) -- is not fixed. It depends on which *way* the measurement is carried out!
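Albert's two-device example can be caricatured in a few lines of code. This is an editorial sketch under two stated assumptions (not a pilot-wave simulation): a particle starting in the upper half of its packet deterministically exits the upper port, and the spin label attached to each port depends on the sign of the field gradient.

```python
# Toy version of Albert's example. Assumptions: upper-half initial
# position -> upper exit port (deterministic), and the port-to-label
# convention flips with the gradient's sign.

def measure_z_spin(z0, gradient_sign):
    """Return (exit_port, spin_label) for a 'z-spin measurement'.

    z0            : particle's initial position within the packet
                    (> 0 means upper half).
    gradient_sign : +1 for a field B_z ~ B_0 + b*z,
                    -1 for a field B_z ~ B_0 - b*z.
    """
    exit_port = "upper" if z0 > 0 else "lower"  # fixed by z0 alone
    if gradient_sign > 0:
        label = "up" if exit_port == "upper" else "down"
    else:  # reversed gradient: same port, opposite label
        label = "down" if exit_port == "upper" else "up"
    return exit_port, label

# Same exact initial condition, two physically different devices:
print(measure_z_spin(z0=+0.3, gradient_sign=+1))  # ('upper', 'up')
print(measure_z_spin(z0=+0.3, gradient_sign=-1))  # ('upper', 'down')
```

Both calls "measure S_z" in the operator sense, yet the same initial state yields opposite labels, which is the many-to-one experiment-to-operator point made below.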

Stepping back for a second, this all relates to the (rather weird) idea from ordinary QM that there is this correspondence between experiments (that are usually thought of as "measuring some property" of something) and *operators*. So the point here is that, for the pilot-wave theory, this correspondence is actually many-to-one. That is, at least in some cases (spin being one of them), many physically distinct experiments all correspond to the same one operator (here, S_z). But (unsurprisingly) distinct experiments can have distinct results, even for the same input state.

So... back finally to the original question... if what "CFD" means is that for each *operator*, there is some definite fact of the matter about what the outcome of an unperformed measurement would have been, then NO, the pilot-wave theory does *not* exhibit CFD. On the other hand, if "CFD" means that for each *specific experiment*, there is some definite fact of the matter about what the outcome would have been, then YES, of course -- the theory is deterministic, so of course there is a fact about how unperformed experiments would have come out had they been performed.

This may seem like splitting hairs for no reason, but the fact is that all kinds of confusion have been caused by people just assuming -- wrongly, at least insofar as this particular candidate theory is concerned -- that it makes perfect sense to *identify* "physical properties" (that are revealed or made definite or whatever by appropriate measurements) with the corresponding QM operators. This is precisely what went wrong with all of the so-called "no hidden variable" theorems (Kochen-Specker, etc.). And it is also just the point that needs to be sorted out to understand whether the pilot-wave theory exhibits CFD or not. The answer, I guess, is: "it's complicated".

That make any sense?
 
  • #92
The notion of 'particles' is oxymoronic. If microscopic entities obey Heisenberg's uncertainty principle, as we know they do, one is forced to admit that the concept of a "microscopic particle" is self-contradictory. If an entity obeys the HUP, one cannot simultaneously determine its position and momentum, and as a consequence one cannot determine, even in principle, how its position will vary in time. One therefore cannot predict its future locations with certainty, and it lacks the defining attributes of a classical particle, such as an exact position and momentum in spacetime. Why should an entity of such uncertain and evidently non-spatial nature obey classical notions like locality at all times?
 
  • #93
ttn: regarding MWI, I am aware of the difficulties with the pure WF view, but what do you think of Wallace and Timpson's Spacetime State Realism proposal?

It seems David Wallace is the only one every MWI adherent refers to when the difficult questions are asked. He just wrote a huge *** book on the Everettian interpretation and argues for solving the Born Rule problem with decision theory. He argues that the ontological/preferred-basis issue is solved by decoherence + emergence. Lastly, he posits Spacetime State Realism.
 
  • #94
ttn said:
It depends on exactly what you mean by "realism".
I think one of the easiest (for me) ways to understand "realism" as per pilot-wave is "contextual realism". Demystifier does a good job discussing this issue here when debating whether a particular paper discussed in that thread ruled out the pilot wave model:
What their experiment demonstrates is that realism, if it exists, must be not only nonlocal, but also contextual. Contextuality means that the value of the measured variable may change by the act of measurement. BM is both nonlocal and contextual, making it consistent with the predictions of standard QM as well as with their experiment. In fact, after Eq. (4), they discuss BM explicitly and explain why it is consistent with their results. Their "mistake" is their definition of "reality" as an assumption that all measurement outcomes are determined by pre-existing properties of particles independent of the measurement. This is actually the definition of non-contextual reality, not of reality in general. The general definition of reality is the assumption that some objective properties exist even when measurements are not performed. It does not mean that these properties cannot change by the physical act of measurement. In simpler terms, they do not show that the Moon does not exist if nobody looks at it. They only show that the Moon, if it exists when nobody looks at it, must change its properties when somebody looks at it. I also emphasize that their experiment only confirms a fact that was theoretically known for a long time: that QM is contextual. In this sense, they have not discovered something new about QM, but only confirmed something old.
Non-local Realistic theories disproved
https://www.physicsforums.com/showthread.php?t=167320

Since others have written it down far more eloquently than I could, the necessary contextuality present in the pilot-wave model is also summarized in an easily understandable way (for me) here:
One of the basic ideas of Bohmian Mechanics is that position is the only basic observable to which all other observables of orthodox QM can be reduced. So, Bohmian Mechanics will qualify VD (value definiteness) as follows: “Not all observables defined in orthodox QM for a physical system are defined in Bohmian Mechanics, but those that are (i.e. only position) do have definite values at all times.” Both this modification of VD (value definiteness) and the rejection of NC (noncontextuality) immediately immunize Bohmian Mechanics against any no HV argument from the Kochen Specker Theorem.
The Kochen-Specker Theorem
http://plato.stanford.edu/entries/kochen-specker/index.html

So, while the KS theorem establishes a contradiction between VD + NC and QM, the qualification above immunizes pilot-wave/deBroglie/Bohmian mechanics from contradiction.
 
  • #95
nanosiborg said:
I had been thinking that it would be pointless to make a local nonrealistic theory, since the question, following Einstein (and Bell), was whether a local model with hidden variables can be compatible with QM. But a local nonrealistic (and necessarily nonviable, because explicitly local) theory could be used to illustrate that hidden variables, i.e., the realism of LHV models, have nothing to do with LHV models' incompatibility with QM and experiment.
ttn said:
Well, you'd only convince the kind of person who voted (b) in the poll, if you somehow managed to show that *no* "local nonrealistic" model could match the quantum predictions. Just showcasing the silly local coin-flipping particles model doesn't do that.
Yes, I see.
ttn said:
But I absolutely agree with the way you put it, about what the question is post-Einstein. Einstein already showed (in the EPR argument, or some less flubbed version of it -- people know that Podolsky wrote the paper without showing it to Einstein first and Einstein was pissed when he saw it, right?) that "realism"/LHV is the only way to locally explain the perfect correlations. Post-Einstein, the LHV program was the only viable hope for locality! And then Bell showed that this only viable hope won't work. So, *no* local theory will work. I'm happy to hear we're on the same page about that. But my point here is just that, really, the best way to convince somebody that "local non-realistic" theories aren't viable is to just run the proof that local theories aren't viable (full stop). But somehow this never actually works. People have this misconception in their heads that a "local non-realistic" theory can work, even though they can't produce an explicit example, and they just won't let go of it.
Yes, I do think I'm following you on all this. That we're on the same page. Not sure when I changed from the "realism or locality has to go" way of thinking to the realization that it's all about the locality condition being incompatible with QM and experiment and that realism/hidden variables are actually irrelevant to that consideration.
ttn said:
Since it so perfectly captures the logic involved here, it's worth mentioning here the nice little paper by Tim Maudlin

http://www.stat.physik.uni-potsdam.de/~pikovsky/teaching/stud_seminar/Bell_EPR-2.pdf

where he introduces the phrase: "the fallacy of the unnecessary adjective". The idea is just that when somebody says "Bell proved that no local realist theory is viable", it is actually true -- but highly misleading since the extra adjective "realist" is totally superfluous. As Maudlin points out, you could also say "Bell proved that no local theory formulated in French is viable". It's true, he did! But that does not mean that we can avoid the spectre of nonlocality simply by re-formulating all our theories in English! Same with "realism". Yes, no "local realist" theory is viable. But anybody who thinks this means we can save locality by jettisoning realism, has been duped by the superfluous adjective fallacy.
Yes, as I mentioned, I get this now, and feel like I've made progress in my understanding of Bell.
I like the way Maudlin writes also. Thanks for the link. In the process of rereading it.
nanosiborg said:
I'd put it like this. Bell's formulation of locality, as it affects the general form of any model of any entanglement experiment designed to produce statistical dependence between the quantitative (data) attributes of spacelike separated paired detection events, refers to at least two things: 1) genuine relativistic causality, the independence of spacelike separated events, ie., that the result A doesn't depend on the setting b, and the result B doesn't depend on the setting a. 2) statistical independence, ie., that the result A doesn't alter the sample space for the result B, and vice versa. In other words, that the result at one end doesn't depend in any way on the result at the other end.
ttn said:
I don't understand what you mean here.
I don't think I do either. I'm just fishing for any way to understand Bell's theorem that will allow me to retain the assumption that nature is evolving in accordance with the principle of local action. That nature is exclusively local. Because the assumption that nonlocality exists in nature is pretty heavy duty. Just want to make sure any possible nuances and subtleties have been dealt with. I've come to think that experimental loopholes and hidden variables ('realism') are unimportant. That it has to do solely with the explicit denotation of the locality assumption. So, I'm just looking for (imagining) possible hidden assumptions in the denotation of locality that might preclude nonlocality as the cause of Bell inequality violations.

ttn said:
For the usual case of two spin-entangled spin-1/2 particles, the sample space for Bob's measurement is just {+,-}.
If the joint sample space is (+,-), (-,+), (+,+), (-,-), then a detection of, say, + at A does change the joint sample space from (+,-), (-,+), (+,+), (-,-) to (+,-), (+,+).

But yes I see that the sample space at either end is always (+,-) no matter what. At least in real experiments. In the ideal, iff θ is either 0° or 90°, then a detection at one end would change the sample space at the other end.

But the sample space of what's registered by the detectors isn't the sample space I was concerned about. There's also the sample space of what's transmitted by the filters, and the sample space ρ(λ) that's emitted by the source. It's how a detection might change ρ(λ) that I was concerned with.
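The conditioning described in the exchange above can be spelled out in a tiny sketch (an editorial illustration, just set manipulation): the joint sample space shrinks once Alice's outcome is known, while each marginal sample space stays {+, -}.

```python
from itertools import product

# Joint sample space for the two outcomes (Alice's result, Bob's result).
joint = set(product("+-", repeat=2))  # {(+,+), (+,-), (-,+), (-,-)}

# Conditioning on "+ at A" keeps only the pairs whose first entry is "+".
after_A_plus = {outcome for outcome in joint if outcome[0] == "+"}

print(sorted(after_A_plus))  # [('+', '+'), ('+', '-')]

# Bob's marginal sample space is unchanged by the conditioning:
bob_marginal = {outcome[1] for outcome in after_A_plus}
print(sorted(bob_marginal))  # ['+', '-']
```

This is exactly the distinction drawn above: a detection at A trivially restricts the *joint* sample space, but the question being raised concerns ρ(λ), not the detector outcomes.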

ttn said:
This is certainly not affected by anything Alice or her particle do. So if you're somehow worried that the thing you call "2) statistical independence" might actually be violated, I don't think it is. But I don't think that even matters, since I don't see anything like this "2) ..." being in any way assumed in Bell's proof. But, basically, I just can't follow what you say here.
I think that statistical independence is explicated in the codification of Bell's locality condition. Whether or not it's relevant to the interpretation of Bell's theorem I have no idea at the moment. The more I think about it, the more it just seems too simplistic, too pedestrian.

nanosiborg said:
The problem is that a Bell-like (general) local form necessarily violates 2 (an incompatibility that has nothing to do with locality), because Bell tests are designed to produce statistical (ie., outcome) dependence via the selection process (which proceeds via exclusively local channels, and produces the correlations it does because of the entangling process which also proceeds via exclusively local channels, and produces a relationship between the entangled particles via, eg., emission from a common source, interaction, 'zapping' with identical stimuli, etc.).
ttn said:
Huh?
Well, the premise might be wrong, maybe this particular inconsistency between experimental design and Bell locality isn't significant or relevant to Bell inequality violations, but I have to believe that you understand the statement.

nanosiborg said:
Ok, I don't think it has anything to do with Jarrett's idea that "Bell locality" = "genuine locality" + "completeness", but rather the way I put it above, in terms of an incompatibility between the statistical dependence designed into the experiments and the statistical independence expressed by Bell locality.

Is this a possibility, or has Bell (and/or you) dealt with this somewhere?
ttn said:
The closest I can come to making sense of your worry here is something like this: "Bell assumes that stuff going on by Bob should be independent of stuff going on by Alice, but the experiments reveal correlations, so one of Bell's premises isn't reflected in the experiment." I'm sure I have that wrong and you should correct me. But on the off chance that that's right, I think it would be better to express it this way: "Bell assumes locality and shows that this implies a certain limit on the correlations; the experiments show that the correlations are stronger than the limit allows; therefore we conclude that nature is nonlocal". That is, it sounds like you are trying to make "something about how the experimental data should come out" into a *premise* of Bell's argument, instead of the *conclusion* of the argument. But it's not a premise, it's the conclusion. And the fact that the real data contradicts that conclusion doesn't invalidate his reasoning; it just shows that his *actual* premise (namely, locality!) is false.
In a previous post I said something like this: Bell locality places upper and lower boundaries on the correlations, and the QM-predicted correlations lie, almost entirely, outside those boundaries.
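That boundary claim can be checked numerically. The following is an editorial sketch assuming the standard singlet-state prediction E(a, b) = -cos(a - b) for two spin-1/2 particles and the usual CHSH angle choices:

```python
from math import cos, pi, sqrt

def E(a, b):
    """QM singlet-state correlation for analyzer angles a, b (radians)."""
    return -cos(a - b)

# Standard CHSH settings: Alice uses 0 or 90 degrees, Bob 45 or 135.
a1, a2 = 0, pi / 2
b1, b2 = pi / 4, 3 * pi / 4

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)

print(abs(S))      # ~2.828..., i.e. 2*sqrt(2) (the Tsirelson value)
print(abs(S) > 2)  # True: outside the bound obeyed by every local model
```

So the QM prediction at these settings exceeds the local bound of 2 by a factor of sqrt(2), which is the quantitative content of "lying outside those boundaries".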

Is the following quote what you're saying is a better way to say what you think I'm saying but is wrong?: "Bell assumes locality and shows that this implies a certain limit on the correlations; the experiments show that the correlations are stronger than the limit allows; therefore we conclude that nature is nonlocal."

Or are you saying that that's the correct way of saying it? Or what?

I think the way I'd phrase it is that Bell codified the assumption of locality in a way that denotes the independence (from each other) of paired events at the filters and detectors. Bell proved that models of quantum entanglement that incorporate Bell's locality condition cannot be compatible with QM. It is so far the case that models of quantum entanglement that incorporate Bell's locality condition are inconsistent with experimental results.

I don't yet understand how/why it's concluded that nature is nonlocal.
 
  • #96
nanosiborg said:
There's also the sample space of what's transmitted by the filters, and the sample space ρ(λ) that's emitted by the source. It's how a detection might change ρ(λ) that I was concerned with.

Now you're questioning the "no conspiracy" assumption. It's true that you can avoid the conclusion of nonlocality by denying that the choice of measurement settings is independent of the state of the particle pair -- or equivalently by saying that ρ(λ) varies as the measurement settings vary. But there lies "superdeterminism", i.e., cosmic conspiracy theory.



Is the following quote what you're saying is a better way to say what you think I'm saying but is wrong?: "Bell assumes locality and shows that this implies a certain limit on the correlations; the experiments show that the correlations are stronger than the limit allows; therefore we conclude that nature is nonlocal."

Or are you saying that that's the correct way of saying it? Or what?

That's the simple (and correct) way to express what I thought you were saying.



I think the way I'd phrase it is that Bell codified the assumption of locality in a way that denotes the independence (from each other) of paired events at the filters and detectors. Bell proved that models of quantum entanglement that incorporate Bell's locality condition cannot be compatible with QM. It is so far the case that models of quantum entanglement that incorporate Bell's locality condition are inconsistent with experimental results.

I don't yet understand how/why it's concluded that nature is nonlocal.

Because if every possible local theory disagrees with experiment, then every possible local theory is FALSE.
 
  • #97
bohm2 said:
So, while the KS theorem establishes a contradiction between VD + NC and QM, the qualification above immunizes pilot-wave/deBroglie/Bohmian mechanics from contradiction.

Yes, that's right. Kochen-Specker rules out non-contextual hidden variable (VD) theories. The dBB pilot-wave theory is not a non-contextual hidden variable (VD) theory.

And, of course, separately: Bell's theorem rules out local theories. The pilot-wave theory is not a local theory.

People who voted for (b) in the poll evidently get these two theorems confused. They try to infer the conclusion of KS from Bell.
 
  • #98
Quantumental said:
ttn: regarding MWI, I am aware of the difficulties with the pure WF view, but what do you think of Wallace and Timpson's Spacetime State Realism proposal?

I read it when it came out and haven't thought of it since. In short, meh.


It seems David Wallace is the only one every MWI adherent refers to when the difficult questions are asked. He just wrote a huge *** book on the Everettian interpretation and argues for solving the Born Rule problem with decision theory. He argues that the ontological/preferred-basis issue is solved by decoherence + emergence. Lastly, he posits Spacetime State Realism.

Haven't read DW's new book. Everything I've seen about the attempt to derive the Born rule from decision theory has been, to me, just ridiculous. But I would like to see DW's latest take on it. Not sure if you intended this, but (what I would call) the "ontology issue" and the "preferred basis issue" are certainly not the same thing. Not sure what you meant exactly with the last almost-sentence. (Shades of ... "the castle AAARRRGGGG")
 
  • #99
In my experience, whenever things are philosophically murky and people are stuck in one or more "camps", it sometimes helps to ask a technical question whose answer is independent of how you interpret things, but which might throw some light on those interpretations. That's what Bell basically did with his inequality. It may not have solved anything about the interpretation of quantum mechanics, but certainly afterwards, any interpretation has to be understood in light of his theorem.

Anyway, here's a technical question about Many-Worlds. Suppose you have a wave function for the entire universe, Ψ. Is there some mathematical way to interpret it as a superposition, or mixture, of macroscopic "worlds"?

Going the other way, from macroscopic to quantum, is certainly possible (although I'm not sure if it is unique--probably not). With every macroscopic object, you can associate a collection of wave packets for the particles making up the object, where the packet is highly peaked at the location of the macroscopic object.

But going from a microscopic description in terms of individual particle descriptions to a macroscopic description in terms of objects is much more complicated. Certainly it's not computationally tractable, since a macroscopic object involves unimaginable numbers of particles, but I'm wondering if it is possible, conceptually.
 
  • #100
ttn said:
Now you're questioning the "no conspiracy" assumption. It's true that you can avoid the conclusion of nonlocality by denying that the choice of measurement settings is independent of the state of the particle pair -- or equivalently by saying that ρ(λ) varies as the measurement settings vary. But there lies "superdeterminism", i.e., cosmic conspiracy theory.
No I don't like any of that stuff. What I'm getting at has nothing to do with 'conspiracies'. At the outset, given a uniform λ distribution (is this what's called rotational invariance?) and the rapid and random varying of the a and b settings, then would the sample space for a or b be all λ values? Anyway, whatever the sample space for a or b (depending on the details of the local model), then given a detection at, say, A, associated with some a, then would the sample space for b be a reduced set of possible λ values?

ttn said:
That's the simple (and correct) way to express what I thought you were saying.
If "therefore we conclude that nature is nonlocal" is omitted, then that's what I was saying.

ttn said:
Because if every possible local theory disagrees with experiment, then every possible local theory is FALSE.
Ok, let's say that every possible local theory disagrees with experiment. It doesn't then follow that nature is nonlocal, unless it's proven that the local form (denoting causal independence of spacelike separated events) doesn't also codify something in addition to locality, some acausal sort of independence (such as statistical independence), which might act as the effective cause of the incompatibility between the local form and the experimental design, precluding nonlocality.
 
  • #101
stevendaryl said:
Anyway, here's a technical question about Many-Worlds. Suppose you have a wave function for the entire universe, Ψ. Is there some mathematical way to interpret it as a superposition, or mixture, of macroscopic "worlds"?

Going the other way, from macroscopic to quantum, is certainly possible (although I'm not sure if it is unique--probably not). With every macroscopic object, you can associate a collection of wave packets for the particles making up the object, where the packet is highly peaked at the location of the macroscopic object.

But going from a microscopic description in terms of individual particle descriptions to a macroscopic description in terms of objects is much more complicated. Certainly it's not computationally tractable, since a macroscopic object involves unimaginable numbers of particles, but I'm wondering if it is possible, conceptually.

This is just the normal way that all MWI proponents already think about the theory. It's a theory of the whole universe, described by the universal wave function, obeying Schroedinger's equation at all times. (No collapse postulates or other funny business.) Decoherence gives rise to a coherent "branch" structure such that it's possible to think of each branch as a separate (or at least, independent) world.

For more details, see any contemporary treatment of MWI, e.g., the David Wallace book that was mentioned earlier. (Incidentally, I just ordered myself a copy!)
 
  • #102
nanosiborg said:
No I don't like any of that stuff. What I'm getting at has nothing to do with 'conspiracies'.

Well, what you suggested was a violation of what is actually called the "no conspiracy" assumption. I'm sure you didn't *mean* to endorse a conspiracy theory... (See the scholarpedia entry on Bell's theorem for more details on this no conspiracy assumption.)


If "therefore we conclude that nature is nonlocal" is omitted, then that's what I was saying.

Well yeah, OK, but my point was kind of that, if I was understanding the first part (and now it sounds like I was?), then what actually follows logically is that nature is nonlocal. So I guess you should think about the reasoning some more.


Ok, let's say that every possible local theory disagrees with experiment. It doesn't then follow that nature is nonlocal, unless it's proven that the local form (denoting causal independence of spacelike separated events) doesn't also codify something in addition to locality, some acausal sort of independence (such as statistical independence), which might act as the effective cause of the incompatibility between the local form and the experimental design, precluding nonlocality.

What you wrote after "unless" is just a way of saying that, actually, it wasn't established that "every possible local theory disagrees with experiment". Can we at least agree that, if every possible local theory disagrees with experiment, then nature is nonlocal -- full stop?
 
  • #103
ttn said:
Well, what you suggested was a violation of what is actually called the "no conspiracy" assumption. I'm sure you didn't *mean* to endorse a conspiracy theory... (See the scholarpedia entry on Bell's theorem for more details on this no conspiracy assumption.)
Ok. Thanks.

ttn said:
Well yeah, OK, but my point was kind of that, if I was understanding the first part (and now it sounds like I was?), then what actually follows logically is that nature is nonlocal.
And my point is that nonlocality in nature doesn't necessarily follow from the generalized nonviability of the locality condition.

ttn said:
What you wrote after "unless" is just a way of saying that, actually, it wasn't established that "every possible local theory disagrees with experiment". Can we at least agree that, if every possible local theory disagrees with experiment, then nature is nonlocal -- full stop?
Well, no to both statements. The point is that every possible local theory can disagree with experiment in an exclusively local universe if the general locality condition is encoding something (in addition to locality) that's necessarily incompatible with the experimental designs of Bell tests but which has nothing to do with locality.

I take Bell's formulation as general, and assume that the QM treatment of quantum entanglement will always agree with experiment. So, insofar as Bell locality and QM have been mathematically proven to be incompatible, then there's no possible viable local theory of quantum entanglement.

But consider that Bell tests are designed to produce statistical dependence by the entanglement creation process (eg., common emitter, interaction of the particles, common 'zapping' of separated particles, etc.) and the data pairing process, both of which proceed along exclusively local channels.

Then consider that the locality condition codifies statistical independence. I'm just wondering if there's anything significant enough about that inconsistency so that it, and not nonlocality, might be the effective cause of the inconsistency between local theories and experiment.
 
  • #104
nanosiborg said:
The point is that every possible local theory can disagree with experiment in an exclusively local universe if the general locality condition is encoding something (in addition to locality) that's necessarily incompatible with the experimental designs of Bell tests but which has nothing to do with locality.

True. That's also, for example, what Jarrett thought. But... I can't understand what exactly you are proposing this "extra illicit something" to *be*. If you have something definite in mind, I would enjoy hearing about it. Probably it will turn out that you haven't really fully understood Bell's locality condition (as Jarrett didn't when he made similar charges) and that actually whatever you have in mind is not at all smuggled in. But who knows, maybe you're right.

On the other hand, if you don't have anything definite in mind -- if it's just "well what if there's some illicit assumption smuggled in there? prove that there isn't such a thing!" -- then that would be quite silly and would certainly leave nothing to discuss.


But consider that Bell tests are designed to produce statistical dependence by the entanglement creation process (e.g., common emitter, interaction of the particles, common 'zapping' of separated particles, etc.) and the data pairing process, both of which proceed along exclusively local channels.

If the claim is that there is some extra illicit assumption built into Bell's definition of locality, I don't see how you think it helps to bring up the experiments. Shouldn't you be talking about the mathematical proof of Bell's theorem, and arguing that there is an assumption in the theorem other than (genuine) locality?


Then consider that the locality condition codifies statistical independence.

I don't understand what you think you mean by that. What the locality condition codifies is ... locality. It certainly does *not* just say: A and B should be statistically independent. If you think that is the locality condition, you need to actually read Bell and understand what he did before you start criticizing him.
 
  • #105
ttn said:
True. That's also, for example, what Jarrett thought. But... I can't understand what exactly you are proposing this "extra illicit something" to *be*. If you have something definite in mind, I would enjoy hearing about it.
Just an intuited possibility of something ('statistical' independence) that's sort of hidden by the causal independence (locality) that's codified in the locality condition, and that might be inconsistent with the experimental designs to a significant enough extent that it would be considered the effective cause of the inconsistency between local theories and experiment.

ttn said:
Probably it will turn out that you haven't really fully understood Bell's locality condition (as Jarrett didn't when he made similar charges) and that actually whatever you have in mind is not at all smuggled in. But who knows, maybe you're right.
I'll agree that at this point the former seems much more likely than the latter.

ttn said:
On the other hand, if you don't have anything definite in mind -- if it's just "well what if there's some illicit assumption smuggled in there? prove that there isn't such a thing!" -- then that would be quite silly and would certainly leave nothing to discuss.
I agree. Certainly no disproof is required of what I'm suggesting, rather vaguely, might be the case. It's along the lines of, I have this vague notion, help me explore it if you think there's any possibility that there might be something to it. You've indicated that you don't, and the more I get into it the more I think you're probably right. But I'd like to at least get to the point where I have a clearly formulated hypothesis instead of just a vague notion.

ttn said:
If the claim is that there is some extra illicit assumption built into Bell's definition of locality, I don't see how you think it helps to bring up the experiments.
Shouldn't you be talking about the mathematical proof of Bell's theorem, and arguing that there is an assumption in the theorem other than (genuine) locality?
The mathematical proof only tells us that the locality condition is incompatible with QM. The possible incompatibility of the suggested extra illicit (and less visible) assumption can only be demonstrated when evaluated in relation to experimental design.

nanosiborg said:
Then consider that the locality condition codifies statistical independence.
ttn said:
I don't understand what you think you mean by that. What the locality condition codifies is ... locality. It certainly does *not* just say: A and B should be statistically independent.
I just left out, "in addition to codifying locality (i.e., causal independence)", which I thought was understood. Certainly the locality condition doesn't only codify statistical independence. Part of what I'm wondering is if it codifies statistical independence. Or, in other words, does the locality condition only codify locality (causal independence)?

If the locality condition codifies statistical independence in addition to codifying locality, then the question becomes: is the inconsistency between the statistical independence codified by the locality condition and the statistical dependency necessitated by the experimental design significant enough that this inconsistency is the effective cause of the inconsistency between the predictions of models incorporating the locality condition and experimental results?
 
  • #106
nanosiborg said:
If the locality condition codifies statistical independence in addition to codifying locality, then the question becomes: is the inconsistency between the statistical independence codified by the locality condition and the statistical dependency necessitated by the experimental design significant enough that this inconsistency is the effective cause of the inconsistency between the predictions of models incorporating the locality condition and experimental results?

I think this is a restatement of the detection and fair-selection "loopholes"...?

That doesn't make it ipso facto wrong, just gives us a starting point for considering whether, in any given experiment, the experiment might not completely preclude locality.
 
  • #107
Nugatory said:
I think this is a restatement of the detection and fair-selection "loopholes"...?
If that's all it is, then I agree with T. Norsen that there's nothing there. But I think it's a different consideration than those loopholes, which have been more or less closed in more recent experiments, haven't they? Anyway, we're assuming that the QM approach handles the various usual experimental loopholes correctly and adequately, so that when all of them are closed the results will still be in agreement with QM predictions.
 
  • #108
nanosiborg said:
The mathematical proof only tells us that the locality condition is incompatible with QM. The possible incompatibility of the suggested extra illicit (and less visible) assumption can only be demonstrated when evaluated in relation to experimental design.

The mathematical proof tells us that the locality condition is incompatible with the *empirical predictions* of QM. QM, the theory, actually plays zero role whatever in Bell's argument. Or put it this way: it plays exactly the same role that the dBB pilot-wave theory plays. Here's what I mean. Bell formulates a careful definition of locality, and shows on its basis that local theories will always make predictions in accord with the inequality. OK, so now experimentalists go and do the tests and they find that the inequality is empirically violated. So locality is refuted. That's it.

Now if you want you can also say: the theory called "QM" makes predictions that violate the inequality, which evidently shows that it must be a nonlocal theory -- and indeed, if you just look at the theory and test it against Bell's definition of locality, it's nonlocal. And the same is true for the pilot wave theory, the GRW theory, and all other empirically viable extant theories. But the point is that we never had to say anything about -- or even *think* about -- any particular candidate theory in the course of proving that nature is nonlocal.
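That logic can be illustrated with a quick calculation (a sketch; the function names and measurement settings below are chosen for illustration, not taken from any particular paper). Any local theory keeps the CHSH combination within |S| <= 2, while the quantum singlet-state correlation E(a,b) = -cos(a-b) exceeds that bound:

```python
import numpy as np

def chsh(E, a, a2, b, b2):
    # CHSH combination; the Bell/CHSH theorem says any local theory
    # yields |S| <= 2 for every choice of the four settings.
    return E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

def E_qm(x, y):
    # Quantum prediction for spin measurements on the singlet state.
    return -np.cos(x - y)

# Standard settings (in radians) that maximize the quantum violation.
a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = chsh(E_qm, a, a2, b, b2)
print(abs(S))  # |S| = 2*sqrt(2) ~ 2.83 > 2: no local theory can reproduce this
```

Since experiments confirm the quantum value, the local bound is empirically violated, which is exactly the inference described above.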


I just left out, "in addition to codifying locality (i.e., causal independence)", which I thought was understood. Certainly the locality condition doesn't only codify statistical independence. Part of what I'm wondering is if it codifies statistical independence. Or, in other words, does the locality condition only codify locality (causal independence)?

Statistical dependence/independence of what? Maybe that's what I'm missing. If what you mean is: statistical in/dependence of the outcomes, A and B, on the two sides, then there's a sense in which Bell's condition does just assert statistical independence. Many people over the years have rejected Bell's conclusion on the grounds that, they say, it doesn't rule out *nonlocality*, it just rules out *nonlocal correlations* -- and, they say, two distant events can be *correlated* without one of them causally influencing the other. That's of course true, but such people fail to appreciate the special conditions that Bell described (e.g., that we completely specify the goings-on in the past light cone of one of the events, in a region with a certain special relationship to the other distant event, etc.) under which, actually, "statistical dependence" *does* necessarily require nonlocal causation. I always refer such people to Bell's last (and I think on this subject, best) paper, "la nouvelle cuisine", where he takes as an explicit theme the idea of needing to carefully distinguish causal connections from mere statistical correlations. To whatever extent this is what you're thinking, you would probably benefit from reading that paper too.
 
  • #109
Nugatory said:
I think this is a restatement of the detection and fair-selection "loopholes"...?

That's kind of what I was thinking when I suggested earlier that he was doubting the "no conspiracy" assumption. (Although, really, the "no conspiracies" assumption that is used in the proof of the theorem, is not exactly the same thing as the fair-sampling assumption that experimentalists use when they interpret their data as violating the inequality. They're related, though.)
 
  • #110
Maui said:
The point is not why there could potentially be non-locality but why there is locality.
I'm not very knowledgeable about the various quantum theories of gravity but a number of them try to do away with spacetime. And some physicists, like Gisin, who are convinced that violation of Bell's implies that nature is non-local, further argue that nonlocal quantum correlations would appear to emerge, from "outside" space-time:
To put the tension in other words: no story in space-time can tell us how nonlocal correlations happen, hence nonlocal quantum correlations seem to emerge, somehow, from outside space-time.
Quantum nonlocality: How does Nature perform the trick?
http://lanl.arxiv.org/pdf/0912.1475.pdf
If so, whatever causes entanglement does not travel from one place to the other; the category of “place” simply isn't meaningful to it. It might be said to lie *beyond* spacetime. Two particles that are half a world apart are, in some deeper sense, right on top of each other. If some level of reality underlies quantum mechanics, that level must be non-spatial.
How Quantum Entanglement Transcends Space and Time
http://www.fqxi.org/community/forum/topic/994?search=1

But since only entities localized in spacetime can ever be observed, it's not clear if "progress" can be made on this issue, which kind of highlights Einstein's concerns; nevertheless, I found these two questions/problems discussed in the paper below very interesting, and they would support what you are suggesting:
...we define a theory to be empirically incoherent in case the truth of the theory undermines our empirical justification for believing it to be true. Thus, goes the worry, if a theory rejects the fundamental existence of spacetime, it is threatened with empirical incoherence because it entails that there are, fundamentally, no local beables situated in spacetime; but since any observations are of local beables, doesn't it then follow that none of our supposed observations are anything of the kind? The only escape would be if spacetime were in some way derived or (to use the term in a very general sense, as physicists do) 'emergent' from the theory. But the problem is that without fundamental spacetime, it is very hard to see how familiar space and time and the attendant notion of locality could emerge in some way...at least without some concrete proposals on the table.
Maudlin, quoted in that paper, also makes the following point, which the author refers to as Maudlin's challenge and ultimately criticizes:
But one might also try instead to derive a physical structure with the form of local beables from a basic ontology that does not postulate them. This would allow the theory to make contact with evidence still at the level of local beables, but would also insist that, at a fundamental level, the local structure is not itself primitive...This approach turns critically on what such a derivation of something isomorphic to local structure would look like, where the derived structure deserves to be regarded as physically salient (rather than merely mathematically definable). Until we know how to identify physically serious derivative structure, it is not clear how to implement this strategy.
Emergent spacetime and Empirical (In)coherence
http://arxiv.org/pdf/1206.6290.pdf
 
  • #111
Before this thread goes quietly into the night, I would just like to point out one last time that -- despite the fact that "anti-realism" won the poll by a large margin -- not a single person has been willing to answer my challenge. Here it is one last time in case anybody missed it...

Bell's inequality, as everybody knows, is a constraint on the correlations that can be exhibited between the outcomes of spin measurements on pairs of entangled particles, as the alignments of the measuring devices are changed. In principle, to be empirically viable, a theory needs to be able to make the correct predictions for the statistics that will be observed for *all possible* alignments. But for the sake of discussion, let us focus here on a very small and simple subset -- namely, just the case where both Alice and Bob measure the spins of their particles along the z-direction.

Clearly, to be empirically viable, i.e., to be able to make the right predictions for *all possible* measurements, a theory will have to at least make the right predictions for this particular case. As it turns out, experiment tells us that, in this case, there is a perfect (anti-) correlation of outcomes: whenever Alice's particle goes up, Bob's goes down, and vice versa.

So here is the challenge. People who answered "anti-realism" in the poll evidently believe that there exists a theory that is (a) local and (b) non-realist and which is empirically viable. As noted, this theory must surely be able to explain what is empirically observed in the special case of parallel measurements, if it is really empirically viable. So... what theory is this? Explain how the perfectly anti-correlated outcomes (in just this case where Alice and Bob both measure along the z-direction) can be accounted for in a local but non-realistic model.
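The force of the parallel-settings case can be seen in a small simulation (a sketch only; the hidden-variable model and all names here are illustrative, not anyone's actual proposal). A local model reproduces the perfect anti-correlation precisely by carrying pre-determined, anti-aligned values -- which is just the "realism" that locality is said to force:

```python
import numpy as np

rng = np.random.default_rng(0)

def run_pairs(n, angle_a, angle_b):
    # Shared hidden variable: one random axis angle per emitted pair.
    lam = rng.uniform(0.0, 2.0 * np.pi, n)
    # Each outcome depends only on the local setting and the shared lam:
    # a deterministic, local, "realistic" assignment of +/-1 values.
    out_a = np.sign(np.cos(angle_a - lam))
    out_b = -np.sign(np.cos(angle_b - lam))  # source emits anti-aligned pairs
    return out_a, out_b

# Both sides measure along the same direction (the z-direction case).
A, B = run_pairs(100_000, 0.0, 0.0)
print(np.mean(A * B))  # -1.0: whenever Alice's particle goes up, Bob's goes down
```

A model of this kind handles the parallel case easily; the content of Bell's theorem is that no such assignment can also match the quantum statistics at non-parallel settings.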

Or, if you can't do that, please have the dignity to retract your vote. Thank you very much.
 
  • #112
ttn, you make it sound like this is the first time that a classical explanation for a quantum phenomenon appears inadequate and incoherent. Of course, this is not the case - classical intuition is the number one barrier; you could raise the same Newtonian objections towards the uncertainty principle, for instance, and the people voting anti-realism are merely acknowledging the reality of observations. Quite a number of experiments have been performed that prove that quantum particles do not have fixed properties at all times, as you would expect classically. I do not understand why a quantum physicist would ever go on a rampage about something as indefensible as realism in quantum physics unless he wanted to turn known physics upside-down. Do you?
 
  • #113
ttn said:
Statistical dependence/independence of what? Maybe that's what I'm missing. If what you mean is: statistical in/dependence of the outcomes, A and B, on the two sides, ...
Yes.

ttn said:
... then there's a sense in which Bell's condition does just assert statistical independence.
Yes, and I think this might be significant for the following reasons.

The only way to make explicit, to codify, the assumption of locality in a model of quantum entanglement is via the formal expression of statistical independence.

Bell inequalities are based on the correlational boundaries imposed by this formal constraint, which means that any and all 'explicitly local' theories of quantum entanglement can't possibly violate a Bell inequality.

Bell tests are designed to produce statistical dependence (via entirely local means), and a model explicitly based on statistical independence would not be expected to reproduce all the results of experiments based on statistical dependence.

All of this is fine for Bell's main purpose, which was to see if local (hidden variable, but as we've seen HVs are superfluous) theories of quantum entanglement can be compatible with QM. Or, in other words, if QM could be interpreted locally -- and he proved that it can't be.

However, many people want to extend the applicability of Bell's theorem to say that it means that nature is nonlocal. Which means that statistical dependence of the sort designed into Bell tests is impossible in a local universe. But that doesn't seem reasonable to me, so I wondered where it came from.

Those who believe that Bell's theorem proves that nature is nonlocal have assumed that (via codifying locality as statistical independence) in a local universe, we should expect the angular dependence (the correlation observed experimentally) to be bounded such that it can never reproduce the Malus' Law angular dependence that's observed experimentally.

Prior to the adoption of statistical independence as being formally synonymous with the assumption of locality, the Malus' Law angular dependence is what would have been expected from Bell tests. Following the adoption of statistical independence as being formally synonymous with the assumption of locality, and applying this in models of experiments designed to produce statistical dependence via local means, it was expected that the angular dependence produced in Bell tests would not only not be Malus' Law but would in some cases even be linear -- an expectation that contradicts known empirical optics laws.

In considering this, it seemed to me then that the oddity wasn't the angular dependencies produced in Bell tests, but the fact that Bell inequalities are based on angular dependency expectations that have no foundation in physics. In fact, their sole foundation is the application of models based on statistical independence to experiments based on statistical dependence.

So, there seems to me to be a basic problem with extending the meaning of Bell's theorem to encompass nature. What Bell's theorem does, and the only thing it does (as far as I can tell), is definitively rule out local theories of quantum entanglement (a nonetheless monumental result).

And here I'll restate my position regarding bohm2's poll. Violations of Bell inequalities tell us nothing about nature.
 
  • #114
Maui said:
ttn, you make it sound like this is the first time that a classical explanation for a quantum phenomenon appears inadequate and incoherent. Of course, this is not the case - classical intuition is the number one barrier; you could raise the same Newtonian objections towards the uncertainty principle, for instance, and the people voting anti-realism are merely acknowledging the reality of observations. Quite a number of experiments have been performed that prove that quantum particles do not have fixed properties at all times, as you would expect classically. I do not understand why a quantum physicist would ever go on a rampage about something as indefensible as realism in quantum physics unless he wanted to turn known physics upside-down. Do you?

Sure, I love turning stuff upside down. But what you say here doesn't seem relevant. The question (that the poll was about) was not: "is realism true?" It was rather "what do violations of Bell inequalities tell us about nature?" So saying that there is all kinds of evidence that realism is not true -- I agree, it isn't, at least with the silly meaning that people give to it here (namely, non-contextual hidden variables) -- is irrelevant. The point is that something *more* than this -- something *much more interesting than this* -- follows from the violations of Bell's inequality, namely: nonlocality.

Also, part of your comments above suggest that you misunderstood the challenge. I never said that people voting (b) should provide a "classical" (also local and non-realist) explanation of the perfect correlations. The explanation can be "quantum" (whatever that means exactly) or whatever flavor you like. It just has to be local.

Surely the reasoning here is clear? If somebody thinks we get to choose whether to reject realism or locality in the face of Bell inequality violations, and opts for rejecting realism, surely they believe that the empirical data can be explained locally. I'm just saying: put up or shut up. Show me a local non-realist way to explain the perfect correlations or retract your false vote. Simple.
 
  • #115
nanosiborg said:
The only way to make explicit, to codify, the assumption of locality in a model of quantum entanglement is via the formal expression of statistical independence.

No, there's a whole heck of a lot more to it than that. You should read "la nouvelle cuisine" or perhaps my paper on Bell's formulation:

http://arxiv.org/abs/0707.0401


Those who believe that Bell's theorem proves that nature is nonlocal have assumed that (via codifying locality as statistical independence) in a local universe, we should expect the angular dependence (the correlation observed experimentally) to be bounded such that it can never reproduce the Malus' Law angular dependence that's observed experimentally.

They've *assumed* that??!? That's the whole content of the theorem!


Prior to the adoption of statistical independence as being formally synonymous with the assumption of locality, the Malus' Law angular dependence is what would have been expected from Bell tests.

I don't see why. Malus' law has nothing to do with it. That law describes the fraction of photons passing through a polarizer at one angle, which then also pass through a subsequent polarizer at a different angle. It's the probability for a single photon to pass one polarizer, given that it's passed another. In the Bell tests there are two particles. Thinking that it's somehow just "obvious" that they should exhibit statistics that have something to do with Malus' law can only be a confusion.
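The distinction can be stated numerically (a sketch using the standard textbook formulas; the variable names and the choice of angle are mine). Malus' law is a single-photon transmission probability, whereas the Bell-test quantity is a two-photon coincidence probability, e.g. (1/2)cos^2(theta) for a polarization-entangled pair with analyzers at relative angle theta:

```python
import numpy as np

theta = np.pi / 6  # relative analyzer angle, 30 degrees

# Malus' law: probability that a photon which passed a first polarizer
# also passes a second one rotated by theta relative to the first.
p_malus = np.cos(theta) ** 2

# Textbook coincidence probability for a polarization-entangled pair:
# both photons pass analyzers set at relative angle theta.
p_coincidence = 0.5 * np.cos(theta) ** 2

print(p_malus, p_coincidence)  # ~0.75 vs ~0.375: similar shape, different quantity
```

The cos^2 shape of the coincidence rate is a quantum prediction about *pairs*; it is not obtained by applying Malus' law to single photons, which is the confusion being pointed out above.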



So, there seems to me to be a basic problem with extending the meaning of Bell's theorem to encompass nature. What Bell's theorem does, and the only thing it does (as far as I can tell), is definitively rule out local theories of quantum entanglement (a nonetheless monumental result).

And here I'll restate my position regarding bohm2's poll. Violations of Bell inequalities tell us nothing about nature.

I find this bizarre. If we know that no local theory can be true, then the correct description of nature is nonlocal. If the true theory is a nonlocal theory, then nature is nonlocal. Yes, it's amazing that we can know that the true theory is a nonlocal theory without (yet) knowing what the true theory *is*. But, that is the situation. Saying that, yes, we know the true theory is nonlocal -- but we can't say anything about nature -- that's bizarre.
 
  • #116
Maui said:
I do not understand why a quantum physicist would ever go on a rampage about something as indefensible as realism in quantum physics unless he wanted to turn known physics upside-down.
Maybe I'm misunderstanding but ttn's argument doesn't have much to with realism. As I understand it, his basic argument with respect to violations of Bell's inequalities is the following:
...the role of Bell’s theorem is not to set constraints on how ‘realist’ we are allowed to be about quantum systems but rather, much more interestingly, to characterize a structural property of any theory that aims to cover the domain of validity covered so far by quantum mechanics, namely non-locality. As a consequence, whether a theory aiming to supersede quantum theory will be ‘realist’, ‘non-realist’, ‘half-realist’ or ‘one-third realist’, this will concern the further conceptual and formal resources of that theory and not at all the Bell theorem.
Non-Local Realistic Theories and the Scope of the Bell Theorem
http://arxiv.org/ftp/arxiv/papers/0811/0811.2862.pdf
 
  • #117
ttn said:
No, there's a whole heck of a lot more to it than that. You should read "la nouvelle cuisine" or perhaps my paper on Bell's formulation:

http://arxiv.org/abs/0707.0401
Yes, lots of details. But, for the argument I'm currently making, that's the essence of it. I will read those papers, thanks. Maybe they'll change my mind.

ttn said:
They've *assumed* that??!? That's the whole content of the theorem!
It's assumed that what explicitly local models predict is what should be expected in a local universe, ignoring the inconsistency of explicitly local models with experimental design, and what prior empirical optical results would lead us to reasonably expect the results of such tests to be.

ttn said:
I don't see why. Malus' law has nothing to do with it. That law describes the fraction of photons passing through a polarizer at one angle, which then also pass through a subsequent polarizer at a different angle. It's the probability for a single photon to pass one polarizer, given that it's passed another. In the Bell tests there are two particles. Thinking that it's somehow just "obvious" that they should exhibit statistics that have something to do with Malus' law can only be a confusion.
Or maybe it's a not so obvious insight. In Bell tests there are streams of photons, paired by time correlation and a relationship that's presumably produced through some common (local) emission process. What's recorded at both ends as rate of coincidental detection might be seen as analogous to the resulting intensity in Malus' Law setups involving crossed polarizers, and the angular dependence or correlation between rate of coincidental detection and θ seen as analogous (in the ideal) to the Malus' Law angular dependence. But, then again, maybe that's not a good analogy.

As I said, I'm just exploring alternatives, because the interpretationally based theoretical 'inference' of nonlocality in nature from Bell test results seems to me to be on rather shaky grounds. Yes, the outcome independence of the locality condition seems to be the only way to make an explicitly local model of quantum entanglement, but it doesn't follow from that that nonlocal models of quantum entanglement are true and correct models of deep reality. The assumption that nature is nonlocal isn't a verifiable or falsifiable hypothesis, and, so far in my explorations, there are at least as many reasons to think that nature is exclusively local as there are to think it's nonlocal.

ttn said:
I find this bizarre. If we know that no local theory can be true, then the correct description of nature is nonlocal. If the true theory is a nonlocal theory, then nature is nonlocal. Yes, it's amazing that we can know that the true theory is a nonlocal theory without (yet) knowing what the true theory *is*. But, that is the situation. Saying that, yes, we know the true theory is nonlocal -- but we can't say anything about nature -- that's bizarre.
What we know is that experiments designed to produce statistical dependence can't be viably modeled by explicit statistical independence. We don't know that any theory is a true 'description' of deep reality. Strictly speaking QM is neither a local nor a nonlocal theory. It doesn't model quantum entanglement in terms of statistical independence and the fact that it takes into account the statistical dependency produced by the experimental designs doesn't make it a nonlocal theory, and anyway it's not designed to be a 'description' of what's happening in deep reality. Imho, it's quite bizarre to conjecture that nature is nonlocal from optical Bell test results, ignoring the inconsistency of explicitly local models with experimental design, and what prior empirical optical results would lead (at least some of) us to reasonably expect the results of such tests to be.

It seems we're at an impasse on this, so, for the moment, we can just agree to disagree. Of course I do agree with your opposition to the "2." ('anti-realism') votes and your clarification of the issue and (non)relevancy of 'realism'. I admire your contributions to your field.

I'm willing to conjecture, even bet, that nothing that applied physics can actually use (i.e., no physical faster-than-light anything) will ever come from the assumption of nonlocality in nature per se. The most parsimonious 'explanation' for this will remain simply that there's no 'nonlocality' in nature.
 
  • #118
bohm2 said:
Maybe I'm misunderstanding but ttn's argument doesn't have much to with realism. As I understand it, his basic argument with respect to violations of Bell's inequalities is the following:

...the role of Bell’s theorem is not to set constraints on how ‘realist’ we are allowed to be about quantum systems but rather, much more interestingly, to characterize a structural property of any theory that aims to cover the domain of validity covered so far by quantum mechanics, namely non-locality. As a consequence, whether a theory aiming to supersede quantum theory will be ‘realist’, ‘non-realist’, ‘half-realist’ or ‘one-third realist’, this will concern the further conceptual and formal resources of that theory and not at all the Bell theorem.
But we already know that non-contextual chairs and tables do not exist according to quantum mechanics, so realism as it is usually (naively) defined is broken at the level of atoms and electrons. Given that, who needs additional magic like non-locality at all costs, and what does it explain better? He has no qm explanation for the reality of chairs and tables that matches both the postulates of qm and our experience of them, so adding non-locality brings nothing substantial. Though it seems obvious that if realism fails, so does locality; nonlocality is implied by the consistency of the classical world, and in the end both will be found to be incorrect and incompatible with qm.
 
  • #119
nanosiborg said:
It's assumed that what explicitly local models predict is what should be expected in a local universe, ignoring the inconsistency of explicitly local models with experimental design, and what prior empirical optical results would lead us to reasonably expect the results of such tests to be.

This idea that you keep repeating -- that there is some inconsistency between the theorem and the "experimental design" that makes it improper for us to conclude anything from the experiments -- really makes no sense to me. What you're saying strikes me as just like the following silly scenario. Suppose everybody still thought the world was flat, but somebody figured out that if you designed a rocket and flew up to a very great altitude and looked down and took a picture of the earth, you could really see what shape it is. OK, so they decide to build the rocket and perform the experiment, even though everybody expects that, when they get up there, they'll just see the flat Earth stretching off forever in all directions. Then they run the experiment, take the picture, and -- lo and behold! -- it is immediately obvious that, actually, the Earth is round! Everyone is shocked and surprised!

But then nanosiborg comes along and says: not so fast. There is an inconsistency between the assumption that everybody held (namely that the Earth was flat) and the "experimental design" (meaning that the experiment actually shows that the Earth is round). This inconsistency (which I guess is just the fact that there is a conflict between what many people *expected* and what the experiment actually *showed*) means that actually we cannot conclude from the experiment that the Earth is round. The most we can say is that theories according to which the Earth is flat are no longer viable. But this tells us nothing about nature.

Tell me how what you're saying isn't just parallel to that (I think, manifestly absurd) response to the hypothetical scenario.


The assumption that nature is nonlocal isn't a verifiable or falsifiable hypothesis

Hogwash. Aspect's experiment (and other more recent and better versions of the same thing) experimentally prove that nature is nonlocal. They falsify locality.
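The quantitative content of that claim can be checked directly: for the spin-singlet state, quantum mechanics predicts correlations E(a, b) = -cos(a - b) whose CHSH combination exceeds the bound of 2 that any local hidden-variable model must satisfy. A minimal sketch in Python (the angle choice below is the standard one for maximal violation, picked for illustration rather than taken from any particular experiment):

```python
import math

# Quantum-mechanical correlation for the spin-singlet state:
# E(a, b) = -cos(a - b), where a and b are the two analyzer angles.
def E(a, b):
    return -math.cos(a - b)

# Standard angle choice for maximal violation (an assumption for
# illustration): a = 0, a' = pi/2, b = pi/4, b' = 3*pi/4.
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

# CHSH combination; any local hidden-variable model obeys |S| <= 2.
S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))

print(S)       # ~2.828, i.e. 2*sqrt(2), the Tsirelson bound
print(S > 2)   # True: the QM prediction violates the CHSH inequality
```

This 2√2 value is what the Aspect-type experiments measure, to within experimental error, which is why they rule out the whole class of local models rather than any one specific theory.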



Strictly speaking QM is neither a local nor a nonlocal theory.

Hogwash. QM is a nonlocal theory, at least by the best definition of locality that we have going -- namely, Bell's as presented in "la nouvelle cuisine". You have a better/different formulation of "locality" to propose? I'm all ears. Or you think there's some flaw in Bell's formulation? I'm all ears.



I'm willing to conjecture, even bet on, that nothing that applied physics can actually use (ie., no physical faster than light anything) will ever come from the assumption of nonlocality in nature per se. The most parsimonious 'explanation' for this will remain simply that there's no 'nonlocality' in nature.

Quantum teleportation?


Anyway, read the papers I mentioned. It's clear (to me at least) that you are clinging to loopholes that don't in fact exist, because you don't yet fully appreciate what Bell did. You need to study his work carefully before you take a strong position on whether he screwed up or not.
 