View Poll Results: What do observed violations of Bell's inequality tell us about nature?
- Nature is non-local: 10 votes (30.30%)
- Anti-realism (quantum measurement results do not pre-exist): 15 votes (45.45%)
- Other (superdeterminism, backward causation, many worlds, etc.): 8 votes (24.24%)

Voters: 33. You may not vote on this poll.

## What do violations of Bell's inequalities tell us about nature?

 Quote by ttn You think that, by saying there are no pre-existing values, we can consistently maintain locality....That is, you do not accept that Einstein/EPR validly argued "from locality to" pre-existing values. That is, you think that it is possible to explain the perfect correlations locally but without pre-existing values. This is precisely why I issued "ttn's challenge" in my first post in this thread: please display an actual concrete (if toy) model that explains the perfect correlations locally without relying on pre-existing values.
This is the part that always confused me. What difference would there be between a local vs non-local non-realism? Maudlin notes this, I think, when he writes:
 The microscopic world, Bohr assured us, is at least unanschaulich (unvisualizable) or even non-existent. Unvisualizable we can deal with—a 10-dimensional space with compactified dimensions is, I suppose, unvisualizable but still clearly describable. Non-existent is a different matter. If the subatomic world is non-existent, then there is no ontological work to be done at all, since there is nothing to describe. Bohr sometimes sounds like this: there is a classical world, a world of laboratory equipment and middle-sized dry goods, but it is not composed of atoms or electrons or anything at all. All of the mathematical machinery that seems to be about atoms and electrons is just part of an uninterpreted apparatus designed to predict correlations among the behaviors of the classical objects. I take it that no one pretends anymore to understand this sort of gobbledegook, but a generation of physicists raised on it might well be inclined to consider a theory adequately understood if it provides a predictive apparatus for macroscopic events, and does not require that the apparatus itself be comprehensible in any way. If one takes this attitude, then the problem I have been trying to present will seem trivial. For there is a simple algorithm for associating certain clumped up wavefunctions with experimental situations: simply pretend that the wavefunction is defined on a configuration space, and pretend that there are atoms in a configuration, and read off the pretend configuration where the wavefunction is clumped up, and associate this with the state of the laboratory equipment in the obvious way. If there are no microscopic objects from which macroscopic objects are composed, then as long as the method works, there is nothing more to say. Needless to say, no one interested in the ontology of the world (such as a many-worlds theorist) can take this sort of instrumentalist approach.
Can the world be only wavefunction?
In Ch. 4 of "Many Worlds?: Everett, Quantum Theory, and Reality"

So, if non-realism, then the issue of locality vs non-locality seems kind of pointless, since there don't appear to be any ontological issues. I mean, what ontological difference would there be between the local and non-local versions of non-realism? Anyway, that's how I understood it, or else I'm not getting it. As I posted previously, I think Gisin argues similarly here:
 What is surprising is that so many good physicists interpret the violation of Bell’s inequality as an argument against realism. Apparently their hope is to thus save locality, though I have no idea what locality of a non-real world could mean? It might be interesting to remember that no physicist before the advent of relativity interpreted the instantaneous action at a distance of Newton’s gravity as a sign of non-realism...
Is realism compatible with true randomness?
http://arxiv.org/pdf/1012.2536v1.pdf

And even a Bayesian argument seems hard to swallow because as Timpson notes:
 We just do look at data and we just do update our probabilities in light of it; and it’s just a brute fact that those who do so do better in the world; and those who don’t, don’t. Those poor souls die out. But this move only invites restatement of the challenge: why do those who observe and update do better? To maintain that there is no answer to this question, that it is just a brute fact, is to concede the point. There is an explanatory gap. By contrast, if one maintains that the point of gathering data and updating is to track objective features of the world, to bring one’s judgements about what might be expected to happen into alignment with the extent to which facts actually do favour the outcomes in question, then the gap is closed. We can see in this case how someone who deploys the means will do better in achieving the ends: in coping with the world. This seems strong evidence in favour of some sort of objective view of probabilities and against a purely subjective view, hence against the quantum Bayesian... The form of the argument, rather, is that there exists a deep puzzle if the quantum Bayesian is right: it will forever remain mysterious why gathering data and updating according to the rules should help us get on in life. This mystery is dispelled if one allows that subjective probabilities should track objective features of the world. The existence of the means/ends explanatory gap is a significant theoretical cost to bear if one is to stick with purely subjective probabilities. This cost is one which many may not be willing to bear; and reasonably so, it seems.
Quantum Bayesianism: A Study
http://arxiv.org/pdf/0804.2047v1.pdf

 Quote by ttn Luckily, truth is not decided by majority vote. So far -- since nobody has risen to answer my challenge -- all the results prove is that 12 people hold a view that they have no actual basis for.
Or maybe yours is not a strong enough argument. I will point out: I am not aware of any Bohmian that would say that EPR was correct in believing:

It is unreasonable to require that only those observables which can be simultaneously measured have reality. I.e. that counterfactual observables do have reality.

So in my book, every Bohmian is an anti-realist.

 Quote by bohm2 This is the part that always confused me. What difference would there be between a local vs non-local non-realism?
I certainly agree with you (and Maudlin) that -- if the rejection of "realism" means that there is no physical reality at all -- then the idea that there is still something meaningful for "locality" to mean is completely crazy. Clearly, if there's no physical reality, then it makes no sense to say that all the causal influences that propagate around from one physically real hunk of stuff to another move at or slower than 3 x 10^8 m/s. If there's no reality, then reality's neither local nor nonlocal because there's no reality!

But the point is that there are very few people who actually seriously think there's no physical reality at all. (This would be solipsism, right? Note that even the arch-quantum-solipsist Chris Fuchs denies being a solipsist! Point being, very few people, perhaps nobody, would openly confess to thinking there's no physical reality at all.)

And yet there are at least 12 people right here on this thread who say that Bell's theorem proves that realism is false! What gives? Well, those people simply don't mean by "realism" the claim that there's a physical world out there. They mean something much much much narrower, much subtler. They mean in particular something like: "there is a fact of the matter about what the outcome of a measurement was destined to be, before the measurement was even made, and indeed whether it is in fact made or not." That is, they mean, roughly, that there are "hidden variables" (not to be found in QM's wave functions) that determine how things are going to come out.

 So, if non-realism, then the issue of locality vs non-locality seems kind of pointless, since there don't appear to be any ontological issues.
Correct... if "non-realism" means solipsism. But if instead "non-realism" just means the denial of hidden variables / pre-existing values / counter-factual definiteness, then it indeed makes perfect sense.

Of course, in the context of Bell's theorem, what really matters is just whether endorsing this (latter, non-insane) type of "non-realism" gives us a way of avoiding the unpalatable conclusion of non-locality. At least 12 people here think it does! And yet none of them have yet addressed the challenge: produce a local but non-realist model that accounts for the perfect correlations.

(Note, even if somebody did this, they'd still technically need to show that you can *also* account for the *rest* of the QM predictions -- namely the predictions for what happens when the analyzers are *not* parallel -- before they could really be in a position to say that local non-realism is compatible with all the QM predictions. My challenge is thus quite "easy" -- it only pertains to a subset of the full QM predictions! And yet no takers... This of course just shows how *bad* non-realism is. If you are a non-realist, you can't even account for this perfect-correlations *subset* of the QM predictions locally! That's what EPR pointed out long ago...)
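For concreteness (my own toy illustration, not ttn's; all names are made up), here is what the "realist" side of the challenge looks like in code: a local model with pre-existing values that trivially reproduces the perfect anticorrelations at equal settings. The challenge is to do the same *without* the pre-assigned instruction set `lam`:

```python
import random

def make_pair(angles):
    # Shared "instruction set" fixed at the source: a pre-existing
    # outcome for every possible setting (the hidden variable lambda).
    lam = {a: random.choice([+1, -1]) for a in angles}
    alice = lambda a: lam[a]    # depends only on Alice's local setting
    bob = lambda b: -lam[b]     # depends only on Bob's local setting
    return alice, bob

angles = [0, 45, 90]
alice, bob = make_pair(angles)
for a in angles:
    # perfect anticorrelation whenever the two settings are equal
    assert alice(a) * bob(a) == -1
```

Each wing's outcome is a function of its own setting and the shared source variable only, so the model is manifestly local; the price is exactly the pre-existing values the non-realist wants to deny.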

 Quote by DrChinese Or maybe yours is not a strong enough argument. I will point out: I am not aware of any Bohmian that would say that EPR was correct in believing: It is unreasonable to require that only those observables which can be simultaneously measured have reality. I.e. that counterfactual observables do have reality. So in my book, every Bohmian is an anti-realist.

But what you, Dr C, are missing above is that when Podolsky said something was "unreasonable", what he actually meant (and absolutely should have said instead!) was: "inconsistent with locality". But I've explained this to you so many times over the years, without getting through, that there's really no point even trying again.

 We should all be thinking of reality as fields, with particles as excitations of the fields, instead of crippled and incoherent classical-like models. Classical-like concepts like time, space, 'physical stuff', realism... could well be emergent. Just my unprofessional view (backed by some of the great names in physics). In the same way that we cannot, even in principle, predict the behavior of certain large collections of bodies from the behavior of just one constituent (e.g. a flock of birds), it seems equally impossible to predict the behavior of a large ensemble of particles from looking at just one electron or proton. Hence it could be totally impossible to understand the reality of chairs and tables by looking at just quantum mechanical rules and axioms. The fundamental aspect of the emergent system is its capacity to be what it is while being completely unlike any other version of what it is. And we are just beginning to approach problems in this direction - we also have to embrace the emergence of life from non-life and consciousness from non-consciousness, among other similar phenomena (like the possible emergence of reality from non-reality - these three: life, consciousness, and physical stuff, account for all that can be observed in the universe). Emergence is an observational fact and sounds much less absurd than many of the other ideas put forward here. P.S. Since none of my conscious thoughts can at present be modelled and framed in purely classical/physical terms, shouldn't we also be proposing hidden variables for explaining the reality of the paragraph I wrote above?

 Quote by audioloop travis, do you believe in CFD ?
Interesting question.

The first thing I'd say is: who cares? If the topic is Bell's theorem, then it simply doesn't matter. CFD *follows* from locality in the same way that "realism" / hidden variables do. That is: the only way to locally (and, here crucially, non-conspiratorially) explain even the perfect correlations is with a "realistic" hidden-variable theory with pre-determined values for *all* possible measurements, i.e., a model with the CFD property. So... to whatever extent somebody thinks CFD needs to be assumed to then derive a Bell inequality, it doesn't provide any kind of "out" since CFD follows from locality. That is, the overall logic is still: locality --> X, and then X --> inequality. So whether X is just "realism" or "realism + CFD" or whatever, it simply doesn't make any difference to what the correct answer to this thread's poll is.

So, having argued that it's irrelevant to the official subject of the thread, let me now actually answer the question. Do I believe in CFD? I'm actually not sure. Or: yes and no. Or: it depends on a really subtle point about what, exactly, CFD means. Let me try to explain. As I think everybody knows, my favorite extant quantum theory is the dBB pilot-wave theory. So maybe we can just consider the question: does the pilot-wave theory exhibit the CFD property?

To answer that, we have to be very careful. One's first thought is undoubtedly that, as a *deterministic* hidden variable theory, of course the pilot wave theory exhibits CFD: whatever the outcome is going to be, is determined by the initial conditions, so ... it exhibits CFD. Clear, right?

On the other hand, I've already tried to make a point in this thread about how, although the pilot-wave theory assigns definite pre-existing values (that are then simply revealed in appropriate measurements) to particle positions, it does *not* do this in regard to spin. That is, the pilot-wave theory is in an important sense not "realistic" in regard to spin. And that starts to make it sound like, actually, at least in regard to the spin measurements that are the main subject of modern EPR-Bell discussions, perhaps the pilot-wave theory does *not*, after all, exhibit CFD.

So, which is it? Actually both are true! The key point here is that, according to the pilot-wave theory, there will be many physically different ways of "measuring the same property". Here is the classic example, which goes back to David Albert's book, "QM and Experience." Imagine a spin-1/2 particle whose wave function is in the "spin up along x" spin eigenstate. Now let's measure its spin along z. The point is, there are various ways of doing that. First, we might use a set of SG magnets that produce a field like B_z ~ B_0 + bz (i.e., a field in the +z direction that increases in the +z direction). Then it happens that if the particle starts in the upper half of its wave packet (upper here meaning w.r.t. the z-direction) it will come out the upper output port and be counted as "spin up along z"; whereas if it happens instead to start in the lower half of the wave packet it will come out the lower port and be counted as "spin down along z". So far so good. But notice that we could also have "measured the z-spin" using a SG device with fields like B_z ~ B_0 - bz (i.e., a field in the z-direction that *decreases* in the +z direction). Now, if the particle starts in the upper half of the packet it'll still come out of the upper port... *but now we'll call this "spin down along z"*. Whereas if it instead starts in the lower half of the packet it'll still come out of the lower port, but we'll now call this *spin up along z*.

And if you follow that, you can see the point. Despite being fully deterministic, what the outcome of a "measurement of the z-spin" will be -- for the same exact initial state of the particle (including the "hidden variable"!) -- is not fixed. It depends on which *way* the measurement is carried out!
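The two-device story above can be condensed into a deterministic toy model (a sketch of the description in this post, not actual pilot-wave dynamics; the "same port either way" behavior is taken as given from the no-crossing property):

```python
def sg_measure(z0, gradient_sign):
    # Toy version of Albert's example as described above: the exit
    # port is fixed by where the particle sits in the packet (upper
    # half -> upper port), while the spin *label* attached to each
    # port depends on which way the field gradient points.
    port = +1 if z0 > 0 else -1   # upper (+1) or lower (-1) exit port
    label = port * gradient_sign  # how that port is calibrated
    return port, label

z0 = 0.3  # particle happens to start in the upper half of its packet
port_a, spin_a = sg_measure(z0, +1)  # field increasing in +z
port_b, spin_b = sg_measure(z0, -1)  # field decreasing in +z
assert port_a == port_b == +1  # same port both times...
assert spin_a != spin_b        # ...but opposite "z-spin" outcomes
```

Same initial state, same hidden variable, fully deterministic evolution; yet the reported "value of S_z" differs between the two physically distinct ways of "measuring S_z".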

Stepping back for a second, this all relates to the (rather weird) idea from ordinary QM that there is a correspondence between experiments (that are usually thought of as "measuring some property" of something) and *operators*. So the point here is that, for the pilot-wave theory, this correspondence is actually many-to-one. That is, at least in some cases (spin being one of them), many physically distinct experiments all correspond to the same one operator (here, S_z). But (unsurprisingly) distinct experiments can have distinct results, even for the same input state.

So... back finally to the original question... if what "CFD" means is that for each *operator*, there is some definite fact of the matter about what the outcome of an unperformed measurement would have been, then NO, the pilot-wave theory does *not* exhibit CFD. On the other hand, if "CFD" means that for each *specific experiment*, there is some definite fact of the matter about what the outcome would have been, then YES, of course -- the theory is deterministic, so of course there is a fact about how unperformed experiments would have come out had they been performed.

This may seem like splitting hairs for no reason, but the fact is that all kinds of confusion have been caused by people just assuming -- wrongly, at least in so far as this particular candidate theory is concerned -- that it makes perfect sense to *identify* "physical properties" (that are revealed or made definite or whatever by appropriate measurements) with the corresponding QM operators. This is precisely what went wrong with all of the so-called "no hidden variable" theorems (Kochen-Specker, etc.). And it is also just the point that needs to be sorted out to understand whether the pilot-wave theory exhibits CFD or not. The answer, I guess, is: "it's complicated".

That make any sense?

 The notion of 'particles' is oxymoronic. If microscopic entities obey Heisenberg’s uncertainty principle, as we know they do, one is forced to admit that the concept of “microscopic particle” is a self-contradictory concept. This is because if an entity obeys HUP, one cannot simultaneously determine its position and momentum and, as a consequence, one cannot determine, not even in principle, how the position of the entity will vary in time. Consequently, one cannot predict with certainty its future locations and it doesn't have the requisites of classical particles like exact position and momentum in spacetime. What is the reason why an entity of uncertain nature but evidently non-spatial should obey classical notions like locality at all times?
 ttn: regarding MWI, I am aware of the difficulties with the pure WF view, but what do you think of Wallace and Timpson's Spacetime State Realism proposal? It seems David Wallace is the only one every MWI adherent refers to for the difficult questions. He just wrote a huge *** book on the Everettian interpretation and argues for solving the Born Rule problem with decision theory. He argues that the ontological/preferred-basis issue is solved by decoherence + emergence. Lastly, he posits Spacetime State Realism.

 Quote by ttn It depends on exactly what you mean by "realism".
I think one of the easiest ways (for me) to understand "realism" in the pilot-wave theory is as "contextual realism". Demystifier does a good job discussing this issue here, when debating whether a particular paper discussed in that thread ruled out the pilot-wave model:
 What their experiment demonstrates is that realism, if it exists, must be not only nonlocal, but also contextual. Contextuality means that the value of the measured variable may change by the act of measurement. BM is both nonlocal and contextual, making it consistent with the predictions of standard QM as well as with their experiment. In fact, after Eq. (4), they discuss BM explicitly and explain why it is consistent with their results. Their "mistake" is their definition of "reality" as an assumption that all measurement outcomes are determined by pre-existing properties of particles independent of the measurement. This is actually the definition of non-contextual reality, not of reality in general. The general definition of reality is the assumption that some objective properties exist even when measurements are not performed. It does not mean that these properties cannot change by the physical act of measurement. In simpler terms, they do not show that the Moon does not exist if nobody looks at it. They only show that the Moon, if it exists when nobody looks at it, must change its properties when somebody looks at it. I also emphasize that their experiment only confirms a fact that was theoretically known for a long time: that QM is contextual. In this sense, they have not discovered something new about QM, but only confirmed something old.
Non-local Realistic theories disproved

Since others write things down so much more eloquently than I could, the necessary contextuality present in the pilot-wave model is also summarized in an easily understandable way (for me) here:
 One of the basic ideas of Bohmian Mechanics is that position is the only basic observable to which all other observables of orthodox QM can be reduced. So, Bohmian Mechanics will qualify VD (value definiteness) as follows: “Not all observables defined in orthodox QM for a physical system are defined in Bohmian Mechanics, but those that are (i.e. only position) do have definite values at all times.” Both this modification of VD (value definiteness) and the rejection of NC (noncontextuality) immediately immunize Bohmian Mechanics against any no HV argument from the Kochen Specker Theorem.
The Kochen-Specker Theorem
http://plato.stanford.edu/entries/ko...ker/index.html

So, while the KS theorem establishes a contradiction between VD + NC and QM, the qualification above immunizes pilot-wave/deBroglie/Bohmian mechanics from contradiction.

 Quote by nanosiborg I had been thinking that it would be pointless to make a local nonrealistic theory, since the question, following Einstein (and Bell) was if a local model with hidden variables can be compatible with QM? But a local nonrealistic (and necessarily nonviable because of explicit locality) theory could be used to illustrate that hidden variables, ie., the realism of LHV models, have nothing to do with LHV models' incompatibility with QM and experiment.
 Quote by ttn Well, you'd only convince the kind of person who voted (b) in the poll, if you somehow managed to show that *no* "local nonrealistic" model could match the quantum predictions. Just showcasing the silly local coin-flipping particles model doesn't do that.
Yes, I see.
 Quote by ttn But I absolutely agree with the way you put it, about what the question is post-Einstein. Einstein already showed (in the EPR argument, or some less flubbed version of it -- people know that Podolsky wrote the paper without showing it to Einstein first and Einstein was pissed when he saw it, right?) that "realism"/LHV is the only way to locally explain the perfect correlations. Post-Einstein, the LHV program was the only viable hope for locality! And then Bell showed that this only viable hope won't work. So, *no* local theory will work. I'm happy to hear we're on the same page about that. But my point here is just that, really, the best way to convince somebody that "local non-realistic" theories aren't viable is to just run the proof that local theories aren't viable (full stop). But somehow this never actually works. People have this misconception in their heads that a "local non-realistic" theory can work, even though they can't produce an explicit example, and they just won't let go of it.
Yes, I do think I'm following you on all this. That we're on the same page. Not sure when I changed from the "realism or locality has to go" way of thinking to the realization that it's all about the locality condition being incompatible with QM and experiment and that realism/hidden variables are actually irrelevant to that consideration.
 Quote by ttn Since it so perfectly captures the logic involved here, it's worth mentioning here the nice little paper by Tim Maudlin http://www.stat.physik.uni-potsdam.d...Bell_EPR-2.pdf where he introduces the phrase: "the fallacy of the unnecessary adjective". The idea is just that when somebody says "Bell proved that no local realist theory is viable", it is actually true -- but highly misleading since the extra adjective "realist" is totally superfluous. As Maudlin points out, you could also say "Bell proved that no local theory formulated in French is viable". It's true, he did! But that does not mean that we can avoid the spectre of nonlocality simply by re-formulating all our theories in English! Same with "realism". Yes, no "local realist" theory is viable. But anybody who thinks this means we can save locality by jettisoning realism, has been duped by the superfluous adjective fallacy.
Yes, as I mentioned, I get this now, and feel like I've made progress in my understanding of Bell.
I like the way Maudlin writes also. Thanks for the link. In the process of rereading it.
 Quote by nanosiborg I'd put it like this. Bell's formulation of locality, as it affects the general form of any model of any entanglement experiment designed to produce statistical dependence between the quantitative (data) attributes of spacelike separated paired detection events, refers to at least two things: 1) genuine relativistic causality, the independence of spacelike separated events, ie., that the result A doesn't depend on the setting b, and the result B doesn't depend on the setting a. 2) statistical independence, ie., that the result A doesn't alter the sample space for the result B, and vice versa. In other words, that the result at one end doesn't depend in any way on the result at the other end.
 Quote by ttn I don't understand what you mean here.
I don't think I do either. I'm just fishing for any way to understand Bell's theorem that will allow me to retain the assumption that nature is evolving in accordance with the principle of local action. That nature is exclusively local. Because the assumption that nonlocality exists in nature is pretty heavy duty. Just want to make sure any possible nuances and subtleties have been dealt with. I've come to think that experimental loopholes and hidden variables ('realism') are unimportant. That it has to do solely with the explicit denotation of the locality assumption. So, I'm just looking for (imagining) possible hidden assumptions in the denotation of locality that might preclude nonlocality as the cause of Bell inequality violations.

 Quote by ttn For the usual case of two spin-entangled spin-1/2 particles, the sample space for Bob's measurement is just {+,-}.
If the joint sample space is {(+,-), (-,+), (+,+), (-,-)}, then a detection of, say, + at A does change the joint sample space from {(+,-), (-,+), (+,+), (-,-)} to {(+,-), (+,+)}.

But yes, I see that the sample space at either end is always {+,-} no matter what. At least in real experiments. In the ideal case, iff θ is either 0° or 90°, then a detection at one end would change the sample space at the other end.
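For the ideal spin-1/2 singlet case, this can be made precise with the standard QM joint-probability formula (a sketch; the function name is mine). Conditioning on Alice's result leaves Bob's sample space as {+,-} in general, but at θ = 0 the conditional probability piles entirely onto one outcome:

```python
import math

def joint_prob(outcome_a, outcome_b, theta):
    # Standard QM joint probabilities for singlet-state spin
    # measurements along directions separated by angle theta (radians).
    if outcome_a == outcome_b:
        return 0.5 * math.sin(theta / 2) ** 2
    return 0.5 * math.cos(theta / 2) ** 2

# Unconditionally, Bob gets +1 or -1 with probability 1/2 each.
# Conditioned on Alice getting +1 at theta = 0:
theta = 0.0
p_b_plus = joint_prob(+1, +1, theta) / 0.5   # P(B=+ | A=+)
p_b_minus = joint_prob(+1, -1, theta) / 0.5  # P(B=- | A=+)
assert p_b_plus == 0.0 and p_b_minus == 1.0  # perfect anticorrelation
```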

But the sample space of what's registered by the detectors isn't the sample space I was concerned about. There's also the sample space of what's transmitted by the filters, and the sample space ρ(λ) that's emitted by the source. It's how a detection might change ρ(λ) that I was concerned with.

 Quote by ttn This is certainly not affected by anything Alice or her particle do. So if you're somehow worried that the thing you call "2) statistical independence" might actually be violated, I don't think it is. But I don't think that even matters, since I don't see anything like this "2) ..." being in any way assumed in Bell's proof. But, basically, I just can't follow what you say here.
I think that statistical independence is explicated in the codification of Bell's locality condition. Whether or not it's relevant to the interpretation of Bell's theorem I have no idea at the moment. The more I think about it, the more it just seems too simplistic, too pedestrian.

 Quote by nanosiborg The problem is that a Bell-like (general) local form necessarily violates 2 (an incompatibility that has nothing to do with locality), because Bell tests are designed to produce statistical (ie., outcome) dependence via the selection process (which proceeds via exclusively local channels, and produces the correlations it does because of the entangling process which also proceeds via exclusively local channels, and produces a relationship between the entangled particles via, eg., emission from a common source, interaction, 'zapping' with identical stimuli, etc.).
 Quote by ttn Huh???
Well, the premise might be wrong, maybe this particular inconsistency between experimental design and Bell locality isn't significant or relevant to Bell inequality violations, but I have to believe that you understand the statement.

 Quote by nanosiborg Ok, I don't think it has anything to do with Jarrett's idea that "Bell locality" = "genuine locality" + "completeness", but rather the way I put it above, in terms of an incompatibility between the statistical dependence designed into the experiments and the statistical independence expressed by Bell locality. Is this a possibility, or has Bell (and/or you) dealt with this somewhere?
 Quote by ttn The closest I can come to making sense of your worry here is something like this: "Bell assumes that stuff going on by Bob should be independent of stuff going on by Alice, but the experiments reveal correlations, so one of Bell's premises isn't reflected in the experiment." I'm sure I have that wrong and you should correct me. But on the off chance that that's right, I think it would be better to express it this way: "Bell assumes locality and shows that this implies a certain limit on the correlations; the experiments show that the correlations are stronger than the limit allows; therefore we conclude that nature is nonlocal". That is, it sounds like you are trying to make "something about how the experimental data should come out" into a *premise* of Bell's argument, instead of the *conclusion* of the argument. But it's not a premise, it's the conclusion. And the fact that the real data contradicts that conclusion doesn't invalidate his reasoning; it just shows that his *actual* premise (namely, locality!) is false.
In a previous post I said something like this: Bell locality places upper and lower boundaries on the correlations, and the QM-predicted correlations lie, almost entirely, outside those boundaries.
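That boundary claim is easy to check numerically: the QM singlet correlation is E(a,b) = -cos(a-b), and plugging the standard settings into the CHSH combination gives |S| = 2√2, outside the local bound |S| ≤ 2 (a textbook computation, sketched here):

```python
import math

def E_qm(a, b):
    # Standard QM prediction for the singlet-state correlation of
    # spin measurements along directions a and b (angles in radians).
    return -math.cos(a - b)

# Standard CHSH settings: a = 0, a' = 90 deg, b = 45 deg, b' = 135 deg
a, a2, b, b2 = 0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4
S = E_qm(a, b) - E_qm(a, b2) + E_qm(a2, b) + E_qm(a2, b2)
print(abs(S))  # 2*sqrt(2), approx. 2.828: outside the local bound |S| <= 2
assert abs(S) > 2
```

Any local model of the Bell/CHSH type is provably stuck inside |S| ≤ 2, so the QM value (and the experimental data tracking it) is what forces the choice discussed in this thread.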

Is the following quote what you're saying is a better way to say what you think I'm saying but is wrong?: "Bell assumes locality and shows that this implies a certain limit on the correlations; the experiments show that the correlations are stronger than the limit allows; therefore we conclude that nature is nonlocal."

Or are you saying that that's the correct way of saying it? Or what?

I think the way I'd phrase it is that Bell codified the assumption of locality in a way that denotes the independence (from each other) of paired events at the filters and detectors. Bell proved that models of quantum entanglement that incorporate Bell's locality condition cannot be compatible with QM. It is so far the case that models of quantum entanglement that incorporate Bell's locality condition are inconsistent with experimental results.

I don't yet understand how/why it's concluded that nature is nonlocal.

 Quote by nanosiborg There's also the sample space of what's transmitted by the filters, and the sample space ρ(λ) that's emitted by the source. It's how a detection might change ρ(λ) that I was concerned with.
Now you're questioning the "no conspiracy" assumption. It's true that you can avoid the conclusion of nonlocality by denying that the choice of measurement settings is independent of the state of the particle pair -- or equivalently by saying that ρ(λ) varies as the measurement settings vary. But there lies "superdeterminism", i.e., cosmic conspiracy theory.

 Is the following quote what you're saying is a better way to say what you think I'm saying but is wrong?: "Bell assumes locality and shows that this implies a certain limit on the correlations; the experiments show that the correlations are stronger than the limit allows; therefore we conclude that nature is nonlocal." Or are you saying that that's the correct way of saying it? Or what?
That's the simple (and correct) way to express what I thought you were saying.

 I think the way I'd phrase it is that Bell codified the assumption of locality in a way that denotes the independence (from each other) of paired events at the filters and detectors. Bell proved that models of quantum entanglement that incorporate Bell's locality condition cannot be compatible with QM. It is so far the case that models of quantum entanglement that incorporate Bell's locality condition are inconsistent with experimental results. I don't yet understand how/why it's concluded that nature is nonlocal.
Because if every possible local theory disagrees with experiment, then every possible local theory is FALSE.

 Quote by bohm2 So, while the KS theorem establishes a contradiction between VD + NC and QM, the qualification above immunizes pilot-wave/deBroglie/Bohmian mechanics from contradiction.
Yes, that's right. Kochen-Specker rules out non-contextual (NC) hidden-variable theories with value definiteness (VD). The dBB pilot-wave theory is not such a theory -- it's contextual.

And, of course, separately: Bell's theorem rules out local theories. The pilot-wave theory is not a local theory.

People who voted for (b) in the poll evidently get these two theorems confused. They try to infer the conclusion of KS from Bell.

 Quote by Quantumental ttn: regarding MWI, I am aware of the difficulties with the pure WF view, but what do you think of Wallace and Timpson's Spacetime State Realism proposal?
I read it when it came out and haven't thought about it since. In short, meh.

 It seems David Wallace is the only one every MWI adherent refers to when asked the difficult questions. He just wrote a huge *** book on the Everettian interpretation and argues for solving the Born rule problem with decision theory. He argues that the ontological/preferred basis issue is solved by decoherence + emergence. Lastly, he posits Spacetime State Realism.
Haven't read DW's new book. Everything I've seen about the attempt to derive the Born rule from decision theory has been, to me, just ridiculous. But I would like to see DW's latest take on it. Not sure if you intended this, but (what I would call) the "ontology issue" and the "preferred basis issue" are certainly not the same thing. Not sure what you meant exactly with the last almost-sentence. (Shades of ... "the castle AAARRRGGGG")

 In my experience, whenever things are philosophically murky, and people are stuck into one or more "camps", it sometimes helps to ask a technical question whose answer is independent of how you interpret things, but which might throw some light on those interpretations. That's what Bell basically did with his inequality. They may not have solved anything about the interpretation of quantum mechanics, but certainly afterwards, any interpretation has to understood in light of his theorem. Anyway, here's a technical question about Many-Worlds. Supposing that you have a wave function for the entire universe, $\Psi$. Is there some mathematical way to interpret it as a superposition, or mixture, of macroscopic "worlds"? Going the other way, from macroscopic to quantum, is certainly possible (although I'm not sure if it is unique--probably not). With every macroscopic object, you can associate a collection of wave packets for the particles making up the object, where the packet is highly peaked at the location of the macroscopic object. But going from a microscopic description in terms of individual particle descriptions to a macroscopic description in terms of objects is much more complicated. Certainly it's not computationally tractable, since a macroscopic object involves unimaginable numbers of particles, but I'm wondering if it is possible, conceptually.

 Quote by ttn Now you're questioning the "no conspiracy" assumption. It's true that you can avoid the conclusion of nonlocality by denying that the choice of measurement settings is independent of the state of the particle pair -- or equivalently by saying that ρ(λ) varies as the measurement settings vary. But there lies "superdeterminism", i.e., cosmic conspiracy theory.
No I don't like any of that stuff. What I'm getting at has nothing to do with 'conspiracies'. At the outset, given a uniform λ distribution (is this what's called rotational invariance?) and the rapid and random varying of the a and b settings, then would the sample space for a or b be all λ values? Anyway, whatever the sample space for a or b (depending on the details of the local model), then given a detection at, say, A, associated with some a, then would the sample space for b be a reduced set of possible λ values?

 Quote by ttn That's the simple (and correct) way to express what I thought you were saying.
If "therefore we conclude that nature is nonlocal" is omitted, then that's what I was saying.

 Quote by ttn Because if every possible local theory disagrees with experiment, then every possible local theory is FALSE.
Ok, let's say that every possible local theory disagrees with experiment. It still doesn't follow that nature is nonlocal -- unless it's proven that the locality condition (denoting causal independence of spacelike separated events) doesn't also codify something in addition to locality: some acausal sort of independence (such as statistical independence) which might be the effective cause of the incompatibility between the locality condition and the experimental design, thereby precluding the inference to nonlocality.

 Quote by stevendaryl Anyway, here's a technical question about Many-Worlds. Supposing that you have a wave function for the entire universe, $\Psi$. Is there some mathematical way to interpret it as a superposition, or mixture, of macroscopic "worlds"? Going the other way, from macroscopic to quantum, is certainly possible (although I'm not sure if it is unique--probably not). With every macroscopic object, you can associate a collection of wave packets for the particles making up the object, where the packet is highly peaked at the location of the macroscopic object. But going from a microscopic description in terms of individual particle descriptions to a macroscopic description in terms of objects is much more complicated. Certainly it's not computationally tractable, since a macroscopic object involves unimaginable numbers of particles, but I'm wondering if it is possible, conceptually.
This is just the normal way that all MWI proponents already think about the theory. It's a theory of the whole universe, described by the universal wave function, obeying Schroedinger's equation at all times. (No collapse postulates or other funny business.) Decoherence gives rise to a coherent "branch" structure such that it's possible to think of each branch as a separate (or at least, independent) world.

For more details, see any contemporary treatment of MWI, e.g., the David Wallace book that was mentioned earlier. (Incidentally, I just ordered myself a copy!)

 Quote by nanosiborg No I don't like any of that stuff. What I'm getting at has nothing to do with 'conspiracies'.
Well, what you suggested was a violation of what is actually called the "no conspiracy" assumption. I'm sure you didn't *mean* to endorse a conspiracy theory... (See the Scholarpedia entry on Bell's theorem for more details on this "no conspiracy" assumption.)

 If "therefore we conclude that nature is nonlocal" is omitted, then that's what I was saying.
Well yeah, OK, but my point was kind of that, if I was understanding the first part (and now it sounds like I was?), then what actually follows logically is that nature is nonlocal. So I guess you should think about the reasoning some more.

 Ok, let's say that every possible local theory disagrees with experiment. It still doesn't follow that nature is nonlocal -- unless it's proven that the locality condition (denoting causal independence of spacelike separated events) doesn't also codify something in addition to locality: some acausal sort of independence (such as statistical independence) which might be the effective cause of the incompatibility between the locality condition and the experimental design, thereby precluding the inference to nonlocality.
What you wrote after "unless" is just a way of saying that, actually, it wasn't established that "every possible local theory disagrees with experiment". Can we at least agree that, if every possible local theory disagrees with experiment, then nature is nonlocal -- full stop?