What do violations of Bell's inequalities tell us about nature?

In summary: violations of Bell's inequalities don't imply that nature is nonlocal, though it's tempting to assume that it is, given that nonlocal hidden variable models of quantum entanglement are viable.

What do observed violations of Bell's inequality tell us about nature?

  • Nature is non-local

    Votes: 10 31.3%
  • Anti-realism (quantum measurement results do not pre-exist)

    Votes: 15 46.9%
  • Other: Superdeterminism, backward causation, many worlds, etc.

    Votes: 7 21.9%

  • Total voters
    32
  • #71
Gordon Watson said:
Anti-realism is not a good catch-phrase, imho. I suggest you change it. As I see it: There is nothing against "realism" in the well-known fact that a "measurement" perturbs the "measured" "system".

I agree; reality cannot be equated with values. Values are just attributes of something (objects, entities, processes, etc.).
 
  • #72
I read the paper "Against `Realism'" here: http://arxiv.org/abs/quant-ph/0607057

It's thought-provoking, and I think most of the points are well-taken, but I wasn't completely convinced by all of the arguments.

The point of the two-word phrase "local realism" is really, it seems to me, to distinguish between interpretations like the Bohm interpretation, which are realistic, but not local, and interpretations like Many-Worlds, which are local, but not realistic.

The argument in the paper against MWI is in some ways compelling, and in other ways not very. If I can oversimplify, the argument is that MWI is a useless theory of physics, because the whole point of a physical theory is to predict outcomes of experiments (or the probabilities of various outcomes), while in the MWI there are no definite outcomes (or all possible outcomes occur). MWI denies that there is any fact of the matter as to whether Alice measured an electron to have spin-up or spin-down relative to a particular axis. So it's not clear how to relate MWI to what we actually observe.

That sounds like a plausible reductio ad absurdum. But my feeling, based on the experience with many similar arguments, is that any piece of science or math can appear meaningless if you subject it to a withering enough philosophical examination.

The problem, it seems to me, is that when we're talking about tiny little systems, such as electrons or atoms or molecules or photons, the recipe given by quantum mechanics seems perfectly meaningful (if weird). The quantum recipe tells us the likelihood for various observable outcomes for certain experimental setups, and we can actually repeat the experiment and gather statistics, and check the correctness of the quantum predictions. So quantum mechanics, with the usual recipe, clearly has empirical content.

Now, the way I see it, the only step you have to make to get to something like MWI is to consider: A human being, together with a macroscopic measuring device is just a huge collection of particles, all of which empirically obey the Rules of Quantum Mechanics. Therefore, there is no reason not to treat macroscopic systems as quantum systems. But if you do that, you have to consider superpositions of macroscopically distinguishable states (cats that are a superposition of dead and alive). Either quantum mechanics is wrong (and there's no evidence of it being wrong), or it applies to macroscopic objects as well as microscopic objects.

I don't think that decoherence really changes the picture much. The way I understand decoherence is that it's a matter of realizing that the "system" in the case of a macroscopic object like a cat is not just the cat, but also electromagnetic and gravitational fields. So you don't have a universe in which a cat is in a superposition of dead and alive, you have the whole universe being in a superposition of a state in which the cat is dead and a state in which the cat is alive. It seems to me that decoherence is not an alternative to MWI--the MWI concept of the entire universe being in a superposition of states is an inevitable consequence of decoherence.

So even though I agree that MWI has disturbing philosophical implications, it seems that once you've accepted that quantum mechanics applies to electrons and photons and atoms, the MWI interpretation is an inexorable conclusion.

The alternative of considering some kind of "pilot wave" theory I don't think is philosophically any better, and I really don't think that it ends up being any different than MWI. The reason I say that is because even though a Bohm-style interpretation assumes that particles have definite positions, which sounds philosophically more acceptable, there is something weird about the trajectories of these particles. No, I'm not actually talking about the nonlocal interactions (even though that is pretty weird itself for someone who has spent a lot of time with Special Relativity). I'm talking about the fact that particles don't affect each other!

The trajectory of a particle is determined, in a Bohm-type theory, by the wave function. The wave function evolves deterministically according to Schrodinger's equation (or Dirac, or whatever), completely independently of the locations of the particles. So the wave function influences the particles, but is not influenced by them. In this way, the particles are not really participants in the physics, they are just actors following a script provided by the wave function, and have no influence on each other.

To me, that's as big of a philosophical disaster as MWI is. We might be comforted that an electron really does have a location at each moment, but its location now has no causal effect on anything in the future. In a pilot-wave theory, it's still the case that all the action, and all the physics, is in the wave function, rather than in the particles. And the wave function evolves smoothly and doesn't hesitate to allow a dead cat's wave function to be in superposition with an alive cat's wave function.
 
  • #73
stevendaryl said:
I read the paper "Against `Realism'" here: http://arxiv.org/abs/quant-ph/0607057

It's thought-provoking, and I think most of the points are well-taken, but I wasn't completely convinced by all of the arguments. ...

That is a fair assessment of the paper. ttn and others (including myself) have debated this many times. Not surprisingly, Bohmians tend to agree with the conclusion more often than others. :smile:

I, being mostly a non-realist, reject his thesis. I find that Bohmian representations of such experiments as delayed choice entanglement swapping (DCES) are unsatisfactory. Those seem to me to require a non-realistic interpretation of some kind. I realize that Bohmians do not agree, however; you can judge for yourself.
 
  • #74
stevendaryl said:
I read the paper "Against `Realism'" here: http://arxiv.org/abs/quant-ph/0607057

It's thought-provoking, and I think most of the points are well-taken, but I wasn't completely convinced by all of the arguments.

Thanks for taking the time to read it and for your thoughtful comments. I'll resist the temptation to get into a huge discussion of all the issues raised, but just make a couple brief remarks.

The point of the two-word phrase "local realism" is really, it seems to me, to distinguish between interpretations like the Bohm interpretation, which are realistic, but not local, and interpretations like Many-Worlds, which are local, but not realistic.

1. Yes, clearly, that is the overall intent of people who speak of "local realism". But it doesn't really help clarify *exactly* what the "realism" in "local realism" is supposed to mean. Recall that there are some senses in which e.g. the Bohm interpretation is "realistic" and some senses in which it isn't. So just saying "theories like the Bohm interpretation are realistic" doesn't help much. We need a crisp statement of what "realism" means, and then a crisp identification of where, exactly, any such assumption is made in Bell's derivation -- and here I mean the *full* derivation: (Bell, footnote 10 of "Bertlmann's Socks and the Nature of Reality") "My own first paper on this subject starts with a summary of the EPR argument *from locality to* deterministic hidden variables. But the commentators have almost universally reported that it begins with deterministic hidden variables."

2. It is hardly as clear as you imply that MWI is local. I know everybody claims this, but in so far as MWI has *only* the wave function in its ontology, and insofar as the wave function doesn't live in physical space (but instead some high-dimensional configuration space), it seems that MWI doesn't posit any physically real stuff in ordinary physical space at all. And so I literally have no idea what it would even mean to say that it's local, i.e., that the causal influences that propagate around between different hunks of stuff in physical space do so exclusively slower than the speed of light. It's ... a bit like saying that Beethoven's 5th symphony is local. It's not so much that it's non-local, but just that it's not even clear what it could mean to make *either* claim. Incidentally, there is a really nice and interesting paper that suggests a way for MWI to posit some local beables, i.e., some physical stuff in 3-space. The authors end up concluding (correctly I think) that this theory is actually non-local:

http://arxiv.org/abs/0903.2211
The alternative of considering some kind of "pilot wave" theory I don't think is philosophically any better, and I really don't think that it ends up being any different than MWI.

The main difference with (traditional formulations of) MWI, to me, is that the pilot-wave theory provides a way to have a fundamental/microscopic theory that actually accounts for the macroscopic stuff we observe: there are trees and tables and planets and people in the theory (namely, tree-shaped, table-shaped, etc., collections of particles in 3-space) and also the pointers on lab equipment are predicted to move in the way, and with the statistics, that we actually observe in experiments. That's quite an achievement compared to virtually all other contenders. Ordinary QM only gets familiar macroscopic stuff by separately positing it, and then making up special ad hoc dynamical rules for how it interacts with the microworld. MWI, as I pointed out above, doesn't seem to have any "stuff" in 3-space at all, so evidently there are no trees, planets, etc. -- instead just tree-ish and planet-ish delusions in people's minds.
The reason I say that is because even though a Bohm-style interpretation assumes that particles have definite positions, which sounds philosophically more acceptable, there is something weird about the trajectories of these particles.

Sure, the trajectories are "weird". I understand the discomfort about the particles responding to, but not in turn affecting, the wave function. Sometimes people talk about this as a violation of Newton's third law -- the particles don't react back on the wf. So, sure, maybe that's "weird", but who said Newton's third law has to be respected?
To me, that's as big of a philosophical disaster as MWI is.

If you think that, it makes me think you haven't appreciated exactly why MWI is so "philosophically" (I would say: "physically"!) disastrous.
We might be comforted that an electron really does have a location at each moment, but its location now has no causal effect on anything in the future.

Seriously, the main value of the particles having definite positions is not that it makes you feel good (compared to feeling queasy about indefinite/fuzzy positions in ordinary QM or whatever), but that if you get a whole bunch of electrons (and other particles) together, into a macroscopic thing, then the macroscopic thing will actually have some definite shape, be at some definite place, etc., -- without the need to wave your arms and make up special rules and extra postulates. The macroscopic world (that we know about directly through sense perception) just comes out, just emerges, from the microscopic picture, without any philosophical mumbo jumbo.
 
  • #75
ttn said:
Sure, the trajectories are "weird". I understand the discomfort about the particles responding to, but not in turn affecting, the wave function. Sometimes people talk about this as a violation of Newton's third law -- the particles don't react back on the wf. So, sure, maybe that's "weird", but who said Newton's third law has to be respected?

If the particles don't affect the wave function, and the only thing that affects the particles is the wave function, then the particles aren't really participants in the physics. I know that the usual way of doing one-particle Bohmian mechanics has an ordinary Newtonian-type force term, and then an additional term due to the "quantum potential", which can be viewed as a small correction to the Newtonian prediction. But in the actual world, there is no "external" potential, there is only interactions with other particles. So a many-particle analog to the quantum potential is all there is to affect particle trajectories.

If you think that, it makes me think you haven't appreciated exactly why MWI is so "philosophically" (I would say: "physically"!) disastrous.

No, I think I understand exactly why MWI is conceptually a nightmare, when it comes to understanding the relationship between theory and experience. But I don't think a "pilot wave" theory is any better. It just supplements the universal wave function with particles that carry out a pantomime of actual physical interactions.

Seriously, the main value of the particles having definite positions is not that it makes you feel good (compared to feeling queasy about indefinite/fuzzy positions in ordinary QM or whatever), but that if you get a whole bunch of electrons (and other particles) together, into a macroscopic thing, then the macroscopic thing will actually have some definite shape, be at some definite place, etc., -- without the need to wave your arms and make up special rules and extra postulates. The macroscopic world (that we know about directly through sense perception) just comes out, just emerges, from the microscopic picture, without any philosophical mumbo jumbo.

It seems equally full of mumbo jumbo to me. The particles are just putting a do-nothing mask on the wave function.

Suppose you arrange a bunch of atoms into a solid brick wall. Then a Bohmian type of theory would predict that the wall would continue to exist for a good long time afterward, giving a reassuring sense of solidity. But now, you take a baseball (another clump of atoms arranged in a particular pattern) and throw it at the wall. What happens then?

The question for a Bohmian type theory is what wave function are you using to compute trajectories? The full wave function describes not the actual locations of the particles of the ball and the wall, but a probability amplitude for particles being elsewhere. If, as you seem to agree, the wave function affects the particles, but not the other way around, then the fact that you've gathered atoms into a wall doesn't imply that the wave function is any more highly peaked at the location of the wall. So if it's the wave function that affects the trajectory of the ball, then why should the ball bounce off the wall?

The principal fact that Bohmians use to show that Bohmian mechanics reproduces the predictions of quantum mechanics is that if particles are initially randomly distributed according to the square of the wave function, then the evolution of the wave function and the motion of the particles will maintain this relationship. That's good to know in an ensemble sense, but when you get down to a small number of particles--say one electron--the wave function may say that the electron has equal probabilities of being in New York and in Los Angeles, but the electron is actually only in one of those spots. So either the wave function has to be affected by the actual location (by a mechanism that, as far as I know, hasn't been demonstrated) or there has to be the possibility of an electron having a location that is in no way related to the wave function (except in the very weak sense that if the electron is at some position, then the wave function has to be nonzero at that position).
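The preservation fact referenced in the first sentence ("equivariance") can at least be verified in one exactly solvable case. A minimal numerical sketch of mine (not from the thread; units hbar = m = 1): for a free Gaussian packet, the exact Bohmian trajectories are just the packet's scaling motion, and an initially |psi|^2-distributed ensemble stays |psi|^2-distributed.

```python
import numpy as np

# Sketch (not from the thread) of "equivariance", with hbar = m = 1.
# For a free Gaussian packet with initial spread s0, the exact Bohmian
# trajectories are the scaling motion x(t) = x0 * s(t)/s0, where
# s(t) = s0 * sqrt(1 + (t / (2*s0**2))**2) is the spread of |psi_t|^2.
# If the initial positions are |psi_0|^2-distributed, the ensemble at
# time t should again match |psi_t|^2.

rng = np.random.default_rng(0)
s0, t = 1.0, 3.0
s_t = s0 * np.sqrt(1.0 + (t / (2.0 * s0**2)) ** 2)

x0 = rng.normal(0.0, s0, size=200_000)  # sample of |psi_0|^2
x_t = x0 * (s_t / s0)                   # Bohmian positions at time t

print(abs(np.std(x_t) - s_t) < 0.01)    # True: spread matches |psi_t|^2
```

Of course this only checks the ensemble claim, not the single-electron worry raised above.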

So either you have to have a "wave function collapse" or some other way for the wave function to change that doesn't involve evolution according to the Schrodinger equation, or you have the possibility that the trajectories of physical objects are unaffected by the locations of other physical objects. Which is certainly contrary to experience.
 
  • #76
You raise a number of important and interesting points... far more interesting than the lab reports I should probably be grading instead of writing this! =)

stevendaryl said:
If the particles don't affect the wave function, and the only thing that affects the particles is the wave function, then the particles aren't really participants in the physics.

Now this is a very strange statement. I think it reflects a view which your other comments also seem to reflect -- namely that you are accustomed (from ordinary QM or whatever) to thinking of the wave function as the thing where, so to speak, "the action is". So to you, if the wave function of a particle is split in half, with part in LA and part in NY, then there's going to be a 50/50 chance of detecting it in LA/NY, according to all the usual QM rules. And then you assume that this is still the case in Bohm's theory, such that the "real particle position" that supplements the wf is a kind of pointless epiphenomenon that "doesn't participate in the physics".

If you want to understand the Bohm theory, though, you have to accept that it just doesn't work this way. You have to retrain yourself to think in a different way. In particular, you have to accept that the physical stuff we interact with in real life (particles, brick walls, balls, apparatus pointers, etc.) is not "made of" wave function, but is instead made of particles. This is hard for people to even understand as an option, because in ordinary QM (which everyone always learns first), there is only the one thing -- the wave function -- so *of course* that is where all the physics is.

But in Bohm's theory there are really *two* things, the wave *and* the particle. So there is a legitimate and meaningful and important question: which one are things like tables and chairs made of? And the answer (that you have to provisionally accept if you want to understand the Bohmian view of the world at all) is that stuff is made of *particles*.
Despite what the traditional terminology suggests, it's actually the *wave function* that is the "hidden variable" in Bohm's theory -- the particles are right there, visible, in front of your eyes when you look out on and interact physically with the world; whereas the wave function is this spooky ethereal invisible thing that is sort of magically acting behind the scenes to make the particles move the way they move.

That's the overview point. Now let me try to explain exactly how some of your comments exhibit this confusion about how to understand the "roles" of the two things, the wave and the particles...
I know that the usual way of doing one-particle Bohmian mechanics has an ordinary Newtonian-type force term, and then an additional term due to the "quantum potential", which can be viewed as a small correction to the Newtonian prediction. But in the actual world, there is no "external" potential, there is only interactions with other particles. So a many-particle analog to the quantum potential is all there is to affect particle trajectories.

This is really an aside, but actually the "usual way of doing ... bohmian mechanics" does *not* involve any "quantum potential". Yes, Bohm and a few others like to formulate the theory that way, as was discussed in the thread above. But (I think at least) it is much better (and certainly these days more standard) to forget the silly quantum potential, and just let the ordinary wave function be the thing that "guides" or "pilots" the particles. The quantum potential is a big bloated pointless middle man, at best. Better to just get rid of it and define the theory in terms of (a) the wave function obeying the usual Sch eq, and (b) the particles obeying the guidance law, basically v = j/rho (where j and rho are what are usually called the probability current and density respectively).
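To make the guidance law concrete, here is a minimal numerical sketch (my illustration, not from the post; units hbar = m = 1): compute v = j/rho on a grid, and check it against the one case where the answer is obvious, a plane wave, where every particle should move at the classical velocity hbar*k/m.

```python
import numpy as np

# Bohmian guidance law v = j/rho for a 1D wave function on a grid, with
# j = (hbar/m) * Im(psi* dpsi/dx) and rho = |psi|^2 (hbar = m = 1).

def bohmian_velocity(psi, dx, hbar=1.0, m=1.0):
    """Velocity field v(x) = j(x)/rho(x) on a uniform grid."""
    dpsi = np.gradient(psi, dx)                    # d(psi)/dx, central differences
    j = (hbar / m) * np.imag(np.conj(psi) * dpsi)  # probability current
    rho = np.abs(psi) ** 2                         # probability density
    return j / rho

# Sanity check: plane wave psi = exp(i*k*x) should give v = k everywhere.
x = np.linspace(0.0, 10.0, 2001)
k = 2.0
v = bohmian_velocity(np.exp(1j * k * x), x[1] - x[0])
print(np.allclose(v, k, atol=1e-3))                # True
```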
No, I think I understand exactly why MWI is conceptually a nightmare, when it comes to understanding the relationship between theory and experience. But I don't think a "pilot wave" theory is any better. It just supplements the universal wave function with particles that carry out a pantomime of actual physical interactions.

Here you assume that the "actual physical interactions" are happening in the wave function, so that the particles are (at best) pantomiming some one small part of the physics. But that's not the right way to think about it. If what we mean by "physical" is stuff like balls crashing into brick walls, then that is particles. What you call a ball or a brick wall is, in bohm's theory, a collection of *particles*.
Suppose you arrange a bunch of atoms into a solid brick wall. Then a Bohmian type of theory would predict that the wall would continue to exist for a good long time afterward, giving a reassuring sense of solidity. But now, you take a baseball (another clump of atoms arranged in a particular pattern) and throw it at the wall. What happens then?

It'll bounce off the wall. The theory predicts this. Here is how to think about it. Pretend the brick wall and the ball are each just single particles, and assume that they have an interaction potential which is basically zero for r>R, and basically infinite for r<R. Call the wall's coordinate "x" and the ball's "y". The configuration space is now the x-y plane. Suppose the wall is initially at rest near x=0, i.e., its initial wf is some sharply peaked stationary packet centered on x=0. The ball starts at some negative value of y, say -L, and has a positive velocity; so take its initial wf to be an appropriate packet. Now the wave function for the 2 particle system -- which we assume is a product state of the two one-particle wf's just described -- is thus a little packet located at (x=0, y=-L) and moving with some group velocity toward the origin (x=0, y=0). What Schroedinger's equation says now is that the packet (in the 2d config space) will propagate up, bounce off the big potential wall at (x=0,y=0), and reflect back down. (I assume here that the mass of the wall particle is large compared to the mass of the ball particle.) So much for the wave function.

What about the actual/bohmian particle positions? Well, at t=0, the wall has some actual position in the support of its wf, and likewise for the ball. And then the actual configuration point just moves along with the moving/bouncing packet in configuration space. So the story you'd tell about the two particles in real space is: the wall particle just sits there the whole time, while the ball particle comes toward it, bounces off, and heads away.

Now you want to ask: what happens if, instead of initially being in a (near) position eigenstate, the wall is initially in a superposition of two places? It's a good question, but if you think it through carefully, you'll find that the theory says exactly what anybody would consider the right/reasonable thing. So, just recapitulate the above, but now with the initial wf for the wall being a sum of two packets, one peaked at x=0 and one peaked at x=D. Now (I'm picturing all of this playing out in the x-y plane, and hopefully you are too) the initial 2-particle wave function in the 2D config space has *two* lumps: one at (x=0, y=-L), and the other at (x=D, y=-L). So then run the wf forward in time using the sch eq: the two lumps each propagate "upward" (i.e., in the y-direction). Eventually the first lump reaches the potential wall near (x=0,y=0) and bounces back down. Meanwhile the other lump continues to propagate up until it reaches the potential wall near (x=D,y=D) at which point it too reflects and starts propagating back down. So much for the wave function.

What about the particles? The point here is that in bohm's theory the *actual configuration* is in one, or the other, of the two initial lumps. If (by chance) it happens to be in the first lump, then the story is *exactly* as it was previously -- the other, "empty" part of the wave function (corresponding to the wall having been at x=D) is simply irrelevant. It plays no role whatever and could just as well have been dropped. On the other hand, if (by chance) the actual positions are initially in the second lump, then the story (of the particles) is as follows: the ball propagates toward the wall (which is at x=D) until the ball gets to x=D, and then it bounces off. That is, there is some fact of the matter about where the wall actually is, and the ball bounces off the wall just as one would expect it to.

The only thing that could possibly confuse anybody about this is that they are thinking: but the wall really *isn't* in one or the other of the definite places, x=0 or x=D, it's in a *superposition* of both! Indeed, that's what you'd say in ordinary QM. And then you'd have to make up some story about how throwing the ball at the wall constitutes a measurement of its position and so collapses its wave function and thus causes it (the wall) to acquire a definite position, just in time to let the ball bounce off it. But all of this is un-bohmian. In bohm's theory everything is just simple and clear and normal. The wall (meaning, the wall PARTICLE) is, from the beginning, definitely somewhere. Maybe we don't know, for a given run of the experiment, where it is, but who cares. It is somewhere. The ball bounces off this actual wall when it hits this actual wall. Simple.
The question for a Bohmian type theory is what wave function are you using to compute trajectories? The full wave function describes not the actual locations of the particles of the ball and the wall, but a probability amplitude for particles being elsewhere. If, as you seem to agree, the wave function affects the particles, but not the other way around, then the fact that you've gathered atoms into a wall doesn't imply that the wave function is any more highly peaked at the location of the wall.

Well, it certainly implies that there's some kind of "peak" (at the point in configuration space corresponding to the arrangement of atoms you just made). But you're right -- this is just one peak in a vast mountain range, so to speak. There are lots of other peaks. But these, as it turns out, are totally irrelevant. They don't affect the motion of the particles (because, so to speak, the evolution of the actual configuration -- the actual particle positions -- only depends on the structure of the wave function around this actual configuration point... the theory is "local in configuration space"). Of course, there can be interference effects, and so on, but the theory again perfectly agrees with the usual QM predictions -- it just does so without extra ad hoc philosophical magic postulates about what happens during "measurements".
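The "local in configuration space" point lends itself to a tiny numerical check (my construction, a 1D configuration space, hbar = m = 1): with two well-separated lumps in the wave function, the guidance velocity evaluated inside the occupied lump is, to fantastic accuracy, what you would get from that lump alone.

```python
import numpy as np

def lump(x, center, k, width=0.5):
    # A Gaussian lump with phase exp(i*k*x), i.e. drifting with velocity k
    return np.exp(-((x - center) ** 2) / (4 * width ** 2)) * np.exp(1j * k * x)

def velocity_at(psi, x0, h=1e-5):
    # Bohmian velocity v = Im(psi* psi') / |psi|^2 at the point x0 (hbar = m = 1)
    dpsi = (psi(x0 + h) - psi(x0 - h)) / (2 * h)
    return np.imag(np.conj(psi(x0)) * dpsi) / abs(psi(x0)) ** 2

full = lambda x: lump(x, 0.0, 1.0) + lump(x, 20.0, -2.0)  # occupied lump + distant "empty" lump
near = lambda x: lump(x, 0.0, 1.0)                        # occupied lump alone

# At a configuration point inside the first lump, the far lump's
# contribution is exponentially tiny, so it makes no practical
# difference to the particle's velocity:
print(abs(velocity_at(full, 0.5) - velocity_at(near, 0.5)) < 1e-9)  # True
```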
So if it's the wave function that affects the trajectory of the ball, then why should the ball bounce off the wall?

Because that's what the theory's fundamental laws (the Sch eq and the guidance equation) say will happen.
The principal fact that Bohmians use to show that Bohmian mechanics reproduces the predictions of quantum mechanics is that if particles are initially randomly distributed according to the square of the wave function, then the evolution of the wave function and the motion of the particles will maintain this relationship. That's good to know in an ensemble sense, but when you get down to a small number of particles--say one electron--the wave function may say that the electron has equal probabilities of being in New York and in Los Angeles, but the electron is actually only in one of those spots. So either the wave function has to be affected by the actual location (by a mechanism that, as far as I know, hasn't been demonstrated) or there has to be the possibility of an electron having a location that is in no way related to the wave function (except in the very weak sense that if the electron is at some position, then the wave function has to be nonzero at that position).

I'm not seeing what you think the problem is. Let a single particle come up to a 50/50 beam splitter, and "split in half". Half of the wf goes to LA and half to NY. Ordinary QM says now if you make a measurement of the position (in LA, say) and (say) you actually *find* that the particle is there, that is because some magic happened -- the intervention by the measurement device pre-empted the normal (schroedinger) evolution of the particle's wave function, and made it collapse so that now *all* of its support is in LA, with the "lump" over in NY vanishing. According to Bohm's theory it's much simpler. Particle position detectors don't do anything magical -- they just respond to where the particle is (just like the ball above responds to the actual location of the wall). And note, the word "particle" there means "particle" -- as opposed to the wf! Got that? Particle detectors detect *particles*, not wave function. So the particle detector clicks or beeps or whatever if (as might have been the case with 50% probability) the particle was in fact already actually there in LA. Simple.

So either you have to have a "wave function collapse" or some other way for the wave function to change that doesn't involve evolution according to the Schrodinger equation, or you have the possibility that the trajectories of physical objects are unaffected by the locations of other physical objects. Which is certainly contrary to experience.

You are missing some important points about how the theory works. Hopefully the above clarifies. Note that there is certainly no "collapse postulate" in the axioms of bohm's theory -- the wf (of the universe, basically) obeys the sch eq *all the time*.

HOWEVER, there is a really cool and important thing about bohm's theory -- you can meaningfully define a wave function of a *sub-system*. Take the wall/ball system above. The wave function is a function \psi(x,y). But we also have in the picture the actual wall position X and the actual ball position Y. So we can construct a mathematical object like \psi(x,Y) -- the "universal" wave function, but evaluated at the point y=Y. This is called the "conditional wave function for the wall": \psi_w(x) = \psi(x,Y). And likewise, \psi_b(y) = \psi(X,y) is called the "conditional wave function of the ball".

Now here's the amazingly beautiful thing. Think about how the conditional wave function of the wall, \psi_w(x), evolves in time. To be sure, it starts off having two lumps, one at x=0 and one at x=D. But if you think about how \psi(x,y) evolves in time (with the two lumps becoming *separated* in the y-direction, because one of them reflects earlier than the other), you will see that \psi_w(x) actually "collapses" -- after all the reflecty business has run its course, \psi_w(x) will be *either* a one-lump function peaked at x=0, *or* a one-lump function peaked at x=D. Which one happens depends, of course, on the (random) initial positions of the particles.
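A toy version of this collapse (my own construction, not from the post): write the post-bounce wave function as two product lumps that have become separated in the ball coordinate y, then evaluate at the ball's actual position Y. Only one branch survives.

```python
import numpy as np

def gauss(u, center, width=0.2):
    return np.exp(-((u - center) ** 2) / (2 * width ** 2))

D = 3.0                            # the wall's "other" position
x = np.linspace(-2.0, 5.0, 400)    # wall coordinate grid

def conditional_wall_wf(Y):
    """psi_w(x) = psi(x, Y) for a two-lump psi separated in the y-direction."""
    # Branch 1: wall at x=0, ball lump centered at y=-1 (reflected early).
    # Branch 2: wall at x=D, ball lump centered at y=-4 (reflected late).
    return gauss(x, 0.0) * gauss(Y, -1.0) + gauss(x, D) * gauss(Y, -4.0)

# If the ball's actual position Y lies in the first y-lump, psi_w is a
# single lump at x=0; the empty x=D branch is exponentially suppressed.
print(x[np.argmax(np.abs(conditional_wall_wf(Y=-1.0)))])  # peak near x = 0
print(x[np.argmax(np.abs(conditional_wall_wf(Y=-4.0)))])  # peak near x = D
```

Which single-lump function you end up with depends only on where the actual configuration landed, exactly as described above.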

The point is -- and this is really truly one of the most important and beautiful things about Bohm's theory -- the theory actually *predicts* (on the basis of fundamental dynamical laws which are simple and clear and which say *nothing* about "collapse" or "measurement") that *sub-system* wave functions (these "conditional wave functions") will collapse, in basically just the kinds of situations where, in ordinary QM, you'd have to bring in your separate measurement axioms to make sure the wfs collapsed appropriately. So not only does bohm's theory make all the right predictions (contrary to what I think you are worrying), it actually manages to *derive* the weird rules about measurement that are instead *postulated* in ordinary QM.
 
  • #77
ttn said:
If you want to understand the Bohm theory, though, you have to accept that it just doesn't work this way. You have to retrain yourself to think in a different way. In particular, you have to accept that the physical stuff we interact with in real life (particles, brick walls, balls, apparatus pointers, etc.) is not "made of" wave function, but is instead made of particles.

I'm not questioning what it's "made of", I'm questioning what it means to "interact with" something physical. In ordinary Newtonian physics it means, for example, that a ball will bounce off a wall. How, in Bohm's theory, is such an interaction modeled? Well, the original one-particle model is not really applicable, because it had "external" fields that interacted with the particle. In a many-particle version of Bohm's theory (which I haven't seen written down, but I assume that such a thing exists), the only forces are those due to particle-particle interactions (such as electromagnetic interactions). The question is: What kind of force acts on an electron due to other electrons, in the Bohm theory? Is there an inverse-square law based on the positions of electrons? I don't think so---such an interaction would not (I don't think) reproduce the same predictions as orthodox quantum mechanics. Instead, what I think would be the case is that the "force" on one electron would depend not on the positions of other electrons, but on the shape of the wave function.

If that's not the case, I would like to see a simple example worked out; for example, a Bohmian model of two point-masses interacting through a harmonic oscillator potential. That seems simple enough that it could be worked out explicitly. Maybe I'll try myself.

Here you assume that the "actual physical interactions" are happening in the wave function, so that the particles are (at best) pantomiming some one small part of the physics. But that's not the right way to think about it. If what we mean by "physical" is stuff like balls crashing into brick walls, then that is particles. What you call a ball or a brick wall is, in Bohm's theory, a collection of *particles*.

It's not a matter of "how to think about" it. It's a matter of what the theory says about how particles interact. Yes, I agree, a brick wall is a collection of particles, and a ball is another collection of particles, and all the particles are following some pilot wave. It's a quantitative, not a philosophical, question of what happens when the actual ball is far away from the central part of the wave function representing the amplitude for the ball's position. Does the ball behave in a classical way, bouncing off the wall, regardless of the shape of the wave function? It's not a philosophical question, but a technical question.
 
  • #78
stevendaryl said:
I'm not questioning what it's "made of", I'm questioning what it means to "interact with" something physical. In ordinary Newtonian physics it means, for example, that a ball will bounce off a wall. How, in Bohm's theory, is such an interaction modeled? Well, the original one-particle model is not really applicable, because it had "external" fields that interacted with particle. In a many-particle version of Bohm's theory (which I haven't seen written down, but I assume that such a thing exists), the only forces are those due to particle-particle interactions (such as electromagnetic interactions).

Did you actually read my last post? All of it? I spent a long time and actually explained in detail exactly how you'd model this. It's a two-particle system, just the kind of thing you say here you "haven't seen written down". Anyway, I get now that you're raising a slightly different sort of issue than what I addressed before, but please do read what I wrote before carefully, because it will help you understand the theory.



The question is: What kind of force acts on an electron due to other electrons, in the Bohm theory? Is there an inverse-square law based on the positions of electrons? I don't think so---such an interaction would not (I don't think) reproduce the same predictions as orthodox quantum mechanics. Instead, what I think would be the case is that the "force" on one electron would depend not on the positions of other electrons, but on the shape of the wave function.

Yes and no. The main point is just that the particles don't really interact directly with one another, in the (classical) way you're thinking of here. The theory instead works like this: one puts the usual interaction terms in the Hamiltonian (whatever one would do in ordinary QM for the situation under study -- so, maybe an inverse-square-law Coulomb-type force if we are really trying to talk about two electrons scattering off each other) and then has the usual Schrödinger equation in which these interaction terms influence how the wave function behaves. Then, as you pointed out earlier, the particles come along for the ride, surfing as it were on the wave function. So the fact that one has inverse-square-law (or whatever) forces in the Hamiltonian will end up making the particles tend to repel each other, etc. (In some appropriate classical limit, they would just behave like classical particles interacting directly via 1/r^2 forces... but of course more interesting things can happen when one isn't in the classical limit.)

But it's not like the particles experience two kinds of forces: the ones that the wave function exerts on them, and then also the 1/r^2 Coulomb forces that they exert directly on each other. That's just not what the theory says. The particles *only* experience the "forces" (and really "force" is not at all the right word for it, since the relevant law is *nothing like* F=ma, but leave that aside here) exerted on them by the wave function. That's it. So it's not exactly that the particles don't exert forces on each other, but rather that their (Coulomb, whatever) interaction is completely mediated by the wave function.

Notice that the (wrong) way you're thinking it should work is actually kinda/sorta the way you might talk about it in the quantum potential formulation of the theory, which I don't like -- partly because it invites this kind of thinking: that "really", the theory is just "classical physics but with an extra quantumish force". But it's not. That's really just a wrong and misleading way to try to understand it.



If that's not the case, I would like to see a simple example worked out; for example, a Bohmian model of two point-masses interacting through a harmonic oscillator potential. That seems simple enough that it could be worked out explicitly. Maybe I'll try myself.

Sure, play with it. But actually there's not much to work out. You'd do all the standard things to understand how the wave function works (e.g., probably, switch to relative/average coordinates so it decouples into free motion of the CM plus a 1-particle-type problem, which can be solved easily, for the relative coordinate). Then you make up some initial conditions and figure out what the time-dependent wave function will be. That is all totally standard and not at all Bohmian. The Bohmian part is now easy. Given the solution of the Schrödinger equation, i.e., given the time-dependent wf, see how the particles will move for various possible initial conditions.
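For what it's worth, the standard decoupling being described here, assuming two equal masses m and the potential V = (k/2)(x_1 - x_2)^2, is just:

```latex
% Center-of-mass and relative coordinates:
%   X = (x_1 + x_2)/2,   r = x_1 - x_2,   M = 2m,   mu = m/2
\[
H \;=\; \frac{p_1^2}{2m} + \frac{p_2^2}{2m} + \tfrac{1}{2}k(x_1 - x_2)^2
  \;=\; \frac{P^2}{2M} + \frac{p_r^2}{2\mu} + \tfrac{1}{2}k r^2,
\]
so a product initial condition $\psi = \Phi(X,t)\,\varphi(r,t)$ stays a product:
free-particle motion for $X$, an ordinary 1D harmonic oscillator for $r$.
The specifically Bohmian step is then just the guidance equation applied to
the full wave function,
\[
\frac{dx_j}{dt} \;=\; \frac{\hbar}{m}\,\operatorname{Im}
\frac{\partial_{x_j}\psi(x_1, x_2, t)}{\psi(x_1, x_2, t)}
\bigg|_{x_1 = X_1(t),\, x_2 = X_2(t)},
\]
evaluated at the actual particle positions.
```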


It's not a matter of "how to think about" it. It's a matter of what the theory says about how particles interact. Yes, I agree, a brick wall is a collection of particles, and a ball is another collection of particles, and all the particles are following some pilot wave. It's a quantitative, not a philosophical, question of what happens when the actual ball is far away from the central part of the wave function representing the amplitude for the ball's position. Does the ball behave in a classical way, bouncing off the wall, regardless of the shape of the wave function? It's not a philosophical question, but a technical question.

Not sure if this is exactly what you want, but take a single particle in 1D, with a Gaussian initial wf. As we all know, Schrödinger's equation implies that the Gaussian wf will remain Gaussian but spread in time. The Bohmian trajectories will just spread with it. For example, if the particle happens to start right in the middle, it'll just sit there, but if it instead starts a little bit off to the right, it'll *accelerate* to the right (such that its distance from the middle increases, and indeed increases nonlinearly, with time), and if it starts a little to the left it'll accelerate to the left.
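These trajectories follow from a well-known closed form: for a free Gaussian packet each Bohmian trajectory scales with the packet width, x(t) = x_0 σ(t)/σ_0 with σ(t) = σ_0 √(1 + (ħt/2mσ_0²)²). A quick sketch, in units where ħ = m = σ_0 = 1 (my choice, not the post's):

```python
import numpy as np

# Bohmian trajectories in a freely spreading Gaussian packet.
# Known exact result: x(t) = x0 * sigma(t) / sigma0, i.e. the
# trajectories are carried outward with the spreading width.
hbar = m = sigma0 = 1.0

def width(t):
    return sigma0 * np.sqrt(1.0 + (hbar * t / (2.0 * m * sigma0**2))**2)

def trajectory(x0, t):
    return x0 * width(t) / sigma0

# A particle starting dead center never moves:
print(trajectory(0.0, 4.0))            # 0.0

# A particle starting off to the right moves further right:
print(trajectory(1.0, 4.0) > 1.0)      # True

# ... and its outward displacement grows nonlinearly (it accelerates):
d1 = trajectory(1.0, 1.0) - trajectory(1.0, 0.0)
d2 = trajectory(1.0, 2.0) - trajectory(1.0, 1.0)
print(d2 > d1)                         # True
```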
 
  • #79
nanosiborg said:
I'm using hidden variable theory and realistic theory interchangeably. So, any hidden variable theory is a realistic theory. Any theory which does not incorporate hidden variables is a nonrealistic theory.
ttn said:
Well that's liable to cause confusion when you talk to other people here. But whatever. The main question is: do you think that Bell's theorem leaves us a choice of giving up locality OR giving up hidden variables? If so, perhaps you can answer my challenge: provide an example of a local (toy) model that successfully predicts the perfect correlations but without "hidden variables".
No, I don't think that Bell's theorem leaves us a choice of giving up locality OR giving up hidden variables. I think it rules out any local hidden variable theory or interpretation of QM solely because of the locality condition. Further, I agree with what I take to be your position that, whether realistic or hidden variable or nonrealistic or whatever, no explicitly local theory of quantum entanglement can match all the predictions of QM or all the experimental results.

I had been thinking that it would be pointless to make a local nonrealistic theory, since the question, following Einstein (and Bell) was if a local model with hidden variables can be compatible with QM? But a local nonrealistic (and necessarily nonviable because of explicit locality) theory could be used to illustrate that hidden variables, ie., the realism of LHV models, have nothing to do with LHV models' incompatibility with QM and experiment.

Your coin-flip model, insofar as it would incorporate a λ representing the coin-flip, would be a hidden variable model. But because the coin-flip won't change the individual detection probability, λ can be omitted. (?) We can do that with Bell's general LHV form also, because in Bell tests λ is assumed to be varying randomly and therefore has no effect on the individual detection probability -- ie., rate of individual detection remains the same no matter what the setting of the polarizer, so the inclusion of a randomly varying λ is superfluous. (?) Bell only includes it (I suppose) because that's the question he's exploring. That is, it's because the inclusion of a λ term is a major part of an exercise aimed at answering whether a local hidden variable interpretation of standard QM is possible.

In the course of doing that it's been shown as well that a local interpretation of QM is impossible. So, it should be clear that I agree with you (and Bell) that it's all about the locality condition.

ttn said:
See Bell's paper "la nouvelle cuisine" (in the 2nd edition of "speakable and unspeakable"). Or see section 6 of

http://www.scholarpedia.org/article/Bell's_theorem

or (for more detail) this paper of mine:

http://arxiv.org/abs/0707.0401
Thanks. I like your writing style. It's very clear and clearly organized. Just that sometimes things get a bit complicated, and some of it (eg., in the scholarpedia article) is momentarily a bit over my head. But I expect to have everything sorted out for myself after another dozen or so slow readings of it and your papers.
ttn said:
This way of writing it also presupposes determinism. See how Bell formulated locality in such a way that neither determinism nor hidden variables are presupposed.
Ok.
ttn said:
I don't really disagree with any of that, except the implication that this λ represents a (specifically) *"hidden"* variable -- i.e., something supplementary to the usual QM wave function. It is better to understand the λ as denoting "whatever a given theory says constitutes a complete description of the system being analyzed". For ordinary QM, λ would thus (in the usual EPR-Bell kind of setup) just be the 2-particle wave function of the particle pair. For deBB it would be the wave function plus the two particle positions. And so on. Of course the point is then that you can derive the inequality without any constraints on λ.
Ok.
ttn said:
I agree that the violation of Bell locality looks a bit different, or manifests differently, in the two theories. My point was just that, in the abstract as it were, the two non-localities are "the same" in the sense that, for both theories, something that happens at a certain space-time point is *affected* by something outside its past light cone.
Ok.
ttn said:
Incidentally, I think you have the wrong idea about how deBB actually works. The "quantum potential" is a kind of pointless and weird way of formulating the theory that Bohm of course used, but basically nobody in the last 20-30 years who works on the theory thinks of it in those terms anymore. See this recent paper of mine (intended as an accessible introduction to the theory for physics students) to get a sense of how the theory should actually be understood:
http://arxiv.org/abs/1210.7265
Thanks.
ttn said:
Sure. How about this super-brief one: Jarrett only thought that Bell's formulation of locality could be broken into two parts -- one that captures genuine relativistic causality, and the other some other unrelated thing ...
I'd put it like this. Bell's formulation of locality, as it affects the general form of any model of any entanglement experiment designed to produce statistical dependence between the quantitative (data) attributes of spacelike separated paired detection events, refers to at least two things: 1) genuine relativistic causality, the independence of spacelike separated events, ie., that the result A doesn't depend on the setting b, and the result B doesn't depend on the setting a. 2) statistical independence, ie., that the result A doesn't alter the sample space for the result B, and vice versa. In other words, that the result at one end doesn't depend in any way on the result at the other end.

The problem is that a Bell-like (general) local form necessarily violates 2 (an incompatibility that has nothing to do with locality), because Bell tests are designed to produce statistical (i.e., outcome) dependence via the selection process (which proceeds via exclusively local channels, and produces the correlations it does because of the entangling process, which also proceeds via exclusively local channels and produces a relationship between the entangled particles via, e.g., emission from a common source, interaction, 'zapping' with identical stimuli, etc.).

ttn said:
... -- because he misunderstood a crucial aspect of Bell's formulation. In particular, he didn't (fully) understand that (roughly speaking) what we were calling "λ" above should be understood as denoting what some candidate theory says constitutes a *complete* description of the state of the system prior to measurement. (He missed the "complete" part. Then he discovered that, if λ does *not* provide a complete description of the system, then violation of the condition does not necessarily imply non-locality! The violation could instead be blamed on the use of incomplete state descriptions! Hence his idea that "Bell locality" = "genuine locality" + "completeness". But in fact Bell already saw this coming and carefully formulated the condition to ensure that its violation would indicate genuine nonlocality. Jarrett simply missed this.)
Ok, I don't think it has anything to do with Jarrett's idea that "Bell locality" = "genuine locality" + "completeness", but rather the way I put it above, in terms of an incompatibility between the statistical dependence designed into the experiments and the statistical independence expressed by Bell locality.

Is this a possibility, or has Bell (and/or you) dealt with this somewhere?
 
  • #80
nanosiborg said:
I had been thinking that it would be pointless to make a local nonrealistic theory, since the question, following Einstein (and Bell) was if a local model with hidden variables can be compatible with QM? But a local nonrealistic (and necessarily nonviable because of explicit locality) theory could be used to illustrate that hidden variables, ie., the realism of LHV models, have nothing to do with LHV models' incompatibility with QM and experiment.

Well, you'd only convince the kind of person who voted (b) in the poll, if you somehow managed to show that *no* "local nonrealistic" model could match the quantum predictions. Just showcasing the silly local coin-flipping particles model doesn't do that.

But I absolutely agree with the way you put it, about what the question is post-Einstein. Einstein already showed (in the EPR argument, or some less flubbed version of it -- people know that Podolsky wrote the paper without showing it to Einstein first and Einstein was pissed when he saw it, right?) that "realism"/LHV is the only way to locally explain the perfect correlations. Post-Einstein, the LHV program was the only viable hope for locality! And then Bell showed that this only viable hope won't work. So, *no* local theory will work. I'm happy to hear we're on the same page about that. But my point here is just that, really, the best way to convince somebody that "local non-realistic" theories aren't viable is to just run the proof that local theories aren't viable (full stop). But somehow this never actually works. People have this misconception in their heads that a "local non-realistic" theory can work, even though they can't produce an explicit example, and they just won't let go of it.

Since it so perfectly captures the logic involved here, it's worth mentioning here the nice little paper by Tim Maudlin

http://www.stat.physik.uni-potsdam.de/~pikovsky/teaching/stud_seminar/Bell_EPR-2.pdf

where he introduces the phrase: "the fallacy of the unnecessary adjective". The idea is just that when somebody says "Bell proved that no local realist theory is viable", it is actually true -- but highly misleading since the extra adjective "realist" is totally superfluous. As Maudlin points out, you could also say "Bell proved that no local theory formulated in French is viable". It's true, he did! But that does not mean that we can avoid the spectre of nonlocality simply by re-formulating all our theories in English! Same with "realism". Yes, no "local realist" theory is viable. But anybody who thinks this means we can save locality by jettisoning realism, has been duped by the superfluous adjective fallacy.


I'd put it like this. Bell's formulation of locality, as it affects the general form of any model of any entanglement experiment designed to produce statistical dependence between the quantitative (data) attributes of spacelike separated paired detection events, refers to at least two things: 1) genuine relativistic causality, the independence of spacelike separated events, ie., that the result A doesn't depend on the setting b, and the result B doesn't depend on the setting a. 2) statistical independence, ie., that the result A doesn't alter the sample space for the result B, and vice versa. In other words, that the result at one end doesn't depend in any way on the result at the other end.

I don't understand what you mean here. For the usual case of two spin-entangled spin-1/2 particles, the sample space for Bob's measurement is just {+,-}. This is certainly not affected by anything Alice or her particle do. So if you're somehow worried that the thing you call "2) statistical independence" might actually be violated, I don't think it is. But I don't think that even matters, since I don't see anything like this "2) ..." being in any way assumed in Bell's proof. But, basically, I just can't follow what you say here.

The problem is that a Bell-like (general) local form necessarily violates 2 (an incompatibility that has nothing to do with locality), because Bell tests are designed to produce statistical (i.e., outcome) dependence via the selection process (which proceeds via exclusively local channels, and produces the correlations it does because of the entangling process, which also proceeds via exclusively local channels and produces a relationship between the entangled particles via, e.g., emission from a common source, interaction, 'zapping' with identical stimuli, etc.).

Huh?

Ok, I don't think it has anything to do with Jarrett's idea that "Bell locality" = "genuine locality" + "completeness", but rather the way I put it above, in terms of an incompatibility between the statistical dependence designed into the experiments and the statistical independence expressed by Bell locality.

Is this a possibility, or has Bell (and/or you) dealt with this somewhere?

The closest I can come to making sense of your worry here is something like this: "Bell assumes that stuff going on by Bob should be independent of stuff going on by Alice, but the experiments reveal correlations, so one of Bell's premises isn't reflected in the experiment." I'm sure I have that wrong and you should correct me. But on the off chance that that's right, I think it would be better to express it this way: "Bell assumes locality and shows that this implies a certain limit on the correlations; the experiments show that the correlations are stronger than the limit allows; therefore we conclude that nature is nonlocal". That is, it sounds like you are trying to make "something about how the experimental data should come out" into a *premise* of Bell's argument, instead of the *conclusion* of the argument. But it's not a premise, it's the conclusion. And the fact that the real data contradicts that conclusion doesn't invalidate his reasoning; it just shows that his *actual* premise (namely, locality!) is false.
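The "certain limit on the correlations" can be made completely concrete. A small brute-force check (my illustration, not from the post): in the CHSH setup, every deterministic local strategy, i.e. every pre-assignment of ±1 outcomes to Alice's two settings and Bob's two settings, gives |S| ≤ 2. Local stochastic models are mixtures of these strategies and so obey the same bound, while QM reaches 2√2.

```python
import itertools

# CHSH combination S = E(a1,b1) + E(a1,b2) + E(a2,b1) - E(a2,b2).
# For a deterministic local strategy, the outcomes A1, A2 (Alice's two
# settings) and B1, B2 (Bob's) are fixed +/-1 values, so each E is just
# a product. Enumerate all 16 strategies:
S_values = [
    A1 * B1 + A1 * B2 + A2 * B1 - A2 * B2
    for A1, A2, B1, B2 in itertools.product([+1, -1], repeat=4)
]

print(max(abs(S) for S in S_values))       # 2: the Bell/CHSH bound
print(all(abs(S) == 2 for S in S_values))  # True: every strategy hits exactly |S| = 2
```

The reason every strategy lands exactly on 2 is that S = A1(B1+B2) + A2(B1−B2), and one of the two brackets is always 0 while the other is ±2.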
 
  • #81
nanosiborg said:
I've been using 'hidden variable' to refer to any denotation (in a Bell test model) which refers to an underlying parameter which contributes to the determination of individual results. It doesn't have to include a pre-existing, pre-scripted value for how any specific measurement will come out. It's just included in the model to refer to any underlying parameter which contributes to the determination of individual results.

My understanding of Bell locality is that the denotation of Bell locality in a Bell test model requires some such hidden variable, whether the definition of that hidden variable includes a denotation about precisely how the hidden variable affects individual detection or not.



There is another aspect to the form that Bell locality imposes on LHV models of quantum entanglement to consider. Any Bell LHV model of quantum entanglement must necessarily denote coincidental detection as a function of the product of the independent functions for individual detection at A and B. So the relevant underlying parameter determining coincidental detection is the same underlying parameter determining individual detection. I think the underlying parameter determining coincidental detection can be viewed as an invariant (per any specific run in any specific Bell test preparation) relationship between the motional properties of the entangled particles, and therefore a nonvariable underlying parameter. I'm not sure how to think about this. Is it significant? If so, how do we get from a randomly varying underlying parameter to a nonvarying underlying parameter?
Reading underlying parameter = hidden variable: a nonvarying underlying parameter produces the perfect correlations for spin measurements along parallel directions for spin-entangled particles, and a varying underlying parameter would produce the measurements when the directions are not parallel. Bell proved that a local deterministic hidden variable model does not explain the measurements when the detector settings are not parallel. So this would be the challenge: to show a local hidden variable model, with varying and nonvarying underlying parameters, that does explain the measurements.
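For concreteness, here is the standard textbook toy LHV model of this kind (a hidden unit vector λ carried by each pair; my example, not something proposed in the thread). It keeps the perfect anticorrelation at parallel settings, but at intermediate angles its correlation is linear in the angle, −(1 − 2θ/π), instead of the quantum −cos θ, which is exactly where Bell's theorem bites:

```python
import numpy as np

# Toy local hidden variable model: each pair carries a hidden angle lam,
# uniform on [0, 2*pi). Alice at setting a outputs A = sign(cos(lam - a));
# Bob at setting b outputs B = -sign(cos(lam - b)). Each side's outcome
# depends only on its own setting and the shared lam -- fully local.
rng = np.random.default_rng(0)
lam = rng.uniform(0.0, 2.0 * np.pi, size=200_000)

def E(a, b):
    A = np.sign(np.cos(lam - a))
    B = -np.sign(np.cos(lam - b))
    return float(np.mean(A * B))

e_par = E(0.0, 0.0)             # parallel settings
e_45 = E(0.0, np.pi / 4.0)      # settings 45 degrees apart

print(e_par)                                   # -1.0: perfect anticorrelation survives
print(abs(e_45 - (-0.5)) < 0.02)               # True: the linear law -(1 - 2*theta/pi)
print(abs(e_45 - (-np.cos(np.pi / 4))) > 0.1)  # True: misses the QM value of about -0.71
```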
 
  • #82
ttn said:
Good point! But I think the real lesson here is again just that "realistic" is used to mean all kinds of different things by all kinds of different people in all kinds of different contexts. There is surely a sense in which the coin-flipping-particles model could be considered "realistic" -- namely, it tells a perfectly clear and definite story about really-existing processes. There's nothing the least bit murky, unspeakable, metaphysically indefinite, or quantumish about it. So, if that's what "realistic" means, then it's realistic. But if "realistic" means instead specifically that there are pre-existing definite values (supporting statements about counter-factuals) then the coin-flipping-particles model is clearly not realistic.

So... anybody who talks about "realism" (and in particular, anybody who says that Bell's theorem leaves us the choice of abandoning "realism" to save locality) better say really really carefully exactly what they mean.

Incidentally, equivocation on the word "realism" is exactly how muddle-headed people manage to infer, from something like the Kochen-Specker theorem (which shows that you cannot consistently assign pre-existing definite values to a certain set of "observables"), that the moon isn't there when nobody looks.
Well said...

They confuse "real" with counterfactual definiteness. "Real" comes from the Latin res: thing, object, just that.

Values are just attributes of objects: qualities, characteristics, secondary aspects, i.e., properties of objects. Reality: the state of things as they actually exist.
 
  • #83
ttn said:
So... anybody who talks about "realism" (and in particular, anybody who says that Bell's theorem leaves us the choice of abandoning "realism" to save locality) better say really really carefully exactly what they mean.

Incidentally, equivocation on the word "realism" is exactly how muddle-headed people manage to infer, from something like the Kochen-Specker theorem (which shows that you cannot consistently assign pre-existing definite values to a certain set of "observables"), that the moon isn't there when nobody looks.

I guess Einstein was one of those muddle-heads. :smile: He believed (but could not prove) that particles had pre-existing values for non-commuting observables, and said that any other position was unreasonable. He defined elements of reality and realism quite specifically.

ttn, no need for us to debate the point again; this is just the opposition's placard. Although by looking at the poll results as of now, it looks like you are losing 6-12. :biggrin:
 
  • #84
DrChinese said:
I guess Einstein was one of those muddle-heads. :smile: He believed (but could not prove) that particles had pre-existing values for non-commuting observables, and said that any other position was unreasonable. He defined elements of reality and realism quite specifically.

As usual, you suggest here that Einstein/EPR simply came out and said "we feel like believing in pre-existing values for non-commuting observables; we feel like anything else is unreasonable." In other words, you suggest that there was no *argument* for this *conclusion*. But, of course, there was. And (to quote Bell) it was an argument "from locality to" these pre-existing values. That is, what Einstein showed (and this was admittedly somewhat obscured in the EPR paper that Podolsky wrote and published before even showing it to Einstein!) was that believing in these pre-existing values is the only way to locally explain the perfect correlations.

To be sure, Einstein was wrong about something. In particular, he simply assumed that locality was true. Then, applying his perfectly valid *argument* "from locality to" pre-existing values (or "hidden variables" or "realism" or CFD or whatever anybody wants to call it), he *concluded* that these pre-existing values really existed. (Which of course in turn implies that the QM description, which fails to mention any pre-existing values, is incomplete.) Now it turns out locality is false. So Einstein was wrong to assume it. The EPR argument can no longer be used as a proof for the existence of pre-existing values since we now know that its premise (locality) is actually false! But none of this undermines in the slightest bit the validity of the argument "from locality to" these pre-existing values. That is, it remains absolutely true that pre-existing values are the only way to locally explain the perfect correlations -- whether locality is true or not.

ttn, no need for us to debate the point again; this is just the opposition's placard.

I no longer consider it possible to convince you of any of this, so, yes, we don't need to debate it. But I think re-hashing the points can help others make a better and more informed judgment about the subject of this poll.

Let me urge you to put up a better, or at least additional, placard. So far your placard amounts to "nuh uh". Your position, though, is clear. You think that, by saying there are no pre-existing values, we can consistently maintain locality. That is, you do not accept that Einstein/EPR validly argued "from locality to" pre-existing values. That is, you think that it is possible to explain the perfect correlations locally but without pre-existing values. This is precisely why I issued "ttn's challenge" in my first post in this thread: please display an actual concrete (if toy) model that explains the perfect correlations locally without relying on pre-existing values.

To not do this is to confess that your position (your vote for (b) in the poll) is indefensible.



Although by looking at the poll results as of now, it looks like you are losing 6-12.

Luckily, truth is not decided by majority vote. So far -- since nobody has risen to answer my challenge -- all the results prove is that 12 people hold a view that they have no actual basis for.
 
  • #85
Amazing! Travis Norsen, in person...
Travis, do you believe in CFD?
 
  • #86
ttn said:
You think that, by saying there are no pre-existing values, we can consistently maintain locality...That is, you do not accept that Einstein/EPR validly argued "from locality to" pre-existing values. That is, you think that it is possible to explain the perfect correlations locally but without pre-existing values. This is precisely why I issued "ttn's challenge" in my first post in this thread: please display an actual concrete (if toy) model that explains the perfect correlations locally without relying on pre-existing values.
This is the part that always confused me. What difference would there be between local and non-local non-realism? Maudlin notes this, I think, when he writes:
The microscopic world, Bohr assured us, is at least unanschaulich (unvisualizable) or even non-existent. Unvisualizable we can deal with—a 10-dimensional space with compactified dimensions is, I suppose, unvisualizable but still clearly describable. Non-existent is a different matter. If the subatomic world is non-existent, then there is no ontological work to be done at all, since there is nothing to describe. Bohr sometimes sounds like this: there is a classical world, a world of laboratory equipment and middle-sized dry goods, but it is not composed of atoms or electrons or anything at all. All of the mathematical machinery that seems to be about atoms and electrons is just part of an uninterpreted apparatus designed to predict correlations among the behaviors of the classical objects. I take it that no one pretends anymore to understand this sort of gobbledegook, but a generation of physicists raised on it might well be inclined to consider a theory adequately understood if it provides a predictive apparatus for macroscopic events, and does not require that the apparatus itself be comprehensible in any way.

If one takes this attitude, then the problem I have been trying to present will seem trivial. For there is a simple algorithm for associating certain clumped up wavefunctions with experimental situations: simply pretend that the wavefunction is defined on a configuration space, and pretend that there are atoms in a configuration, and read off the pretend configuration where the wavefunction is clumped up, and associate this with the state of the laboratory equipment in the obvious way. If there are no microscopic objects from which macroscopic objects are composed, then as long as the method works, there is nothing more to say. Needless to say, no one interested in the ontology of the world (such as a many-worlds theorist) can take this sort of instrumentalist approach.
Can the world be only wavefunction?
In Ch. 4 of "Many Worlds?: Everett, Quantum Theory, and Reality"

So, if non-realism holds, then the issue of locality vs non-locality seems kind of pointless, since there don't appear to be any ontological issues. I mean, what ontological difference would there be between the local and non-local versions of non-realism? Anyway, that's how I understood it, or else I'm not getting it. As I posted previously, I think Gisin argues similarly here:
What is surprising is that so many good physicists interpret the violation of Bell’s inequality as an argument against realism. Apparently their hope is to thus save locality, though I have no idea what locality of a non-real world could mean? It might be interesting to remember that no physicist before the advent of relativity interpreted the instantaneous action at a distance of Newton’s gravity as a sign of non-realism...
Is realism compatible with true randomness?
http://arxiv.org/pdf/1012.2536v1.pdf

And even a Bayesian argument seems hard to swallow because as Timpson notes:
We just do look at data and we just do update our probabilities in light of it; and it’s just a brute fact that those who do so do better in the world; and those who don’t, don’t. Those poor souls die out. But this move only invites restatement of the challenge: why do those who observe and update do better? To maintain that there is no answer to this question, that it is just a brute fact, is to concede the point. There is an explanatory gap. By contrast, if one maintains that the point of gathering data and updating is to track objective features of the world, to bring one’s judgements about what might be expected to happen into alignment with the extent to which facts actually do favour the outcomes in question, then the gap is closed. We can see in this case how someone who deploys the means will do better in achieving the ends: in coping with the world. This seems strong evidence in favour of some sort of objective view of probabilities and against a purely subjective view, hence against the quantum Bayesian...

The form of the argument, rather, is that there exists a deep puzzle if the quantum Bayesian is right: it will forever remain mysterious why gathering data and updating according to the rules should help us get on in life. This mystery is dispelled if one allows that subjective probabilities should track objective features of the world. The existence of the means/ends explanatory gap is a significant theoretical cost to bear if one is to stick with purely subjective probabilities. This cost is one which many may not be willing to bear; and reasonably so, it seems.
Quantum Bayesianism: A Study
http://arxiv.org/pdf/0804.2047v1.pdf
 
Last edited:
  • #87
ttn said:
Luckily, truth is not decided by majority vote. So far -- since nobody has risen to answer my challenge -- all the results prove is that 12 people hold a view that they have no actual basis for.

Or maybe yours is not a strong enough argument. I will point out: I am not aware of any Bohmian that would say that EPR was correct in believing:

It is unreasonable to require that only those observables which can be simultaneously measured have reality. I.e. that counterfactual observables do have reality.

So in my book, every Bohmian is an anti-realist.
 
  • #88
bohm2 said:
This is the part that always confused me. What difference would there be between a local vs non-local non-realism?

I certainly agree with you (and Maudlin) that -- if the rejection of "realism" means that there is no physical reality at all -- then the idea that there is still something meaningful for "locality" to mean is completely crazy. Clearly, if there's no physical reality, then it makes no sense to say that all the causal influences that propagate around from one physically real hunk of stuff to another move at or slower than 3 x 10^8 m/s. If there's no reality, then reality's neither local nor nonlocal because there's no reality!

But the point is that there are very few people who actually seriously think there's no physical reality at all. (This would be solipsism, right? Note that even the arch-quantum-solipsist Chris Fuchs denies being a solipsist! Point being, very few people, perhaps nobody, would openly confess to thinking there's no physical reality at all.)

And yet there are at least 12 people right here on this thread who say that Bell's theorem proves that realism is false! What gives? Well, those people simply don't mean by "realism" the claim that there's a physical world out there. They mean something much much much narrower, much subtler. They mean in particular something like: "there is a fact of the matter about what the outcome of a measurement was destined to be, before the measurement was even made, and indeed whether it is in fact made or not." That is, they mean, roughly, that there are "hidden variables" (not to be found in QM's wave functions) that determine how things are going to come out.


So, if non-realism, then the issue of locality vs non-locality seems kind of pointless since there don't appear to be any ontological issues.

Correct... if "non-realism" means solipsism. But if instead "non-realism" just means the denial of hidden variables / pre-existing values / counter-factual definiteness, then it indeed makes perfect sense.

Of course, in the context of Bell's theorem, what really matters is just whether endorsing this (latter, non-insane) type of "non-realism" gives us a way of avoiding the unpalatable conclusion of non-locality. At least 12 people here think it does! And yet none of them have yet addressed the challenge: produce a local but non-realist model that accounts for the perfect correlations.

(Note, even if somebody did this, they'd still technically need to show that you can *also* account for the *rest* of the QM predictions -- namely the predictions for what happens when the analyzers are *not* parallel -- before they could really be in a position to say that local non-realism is compatible with all the QM predictions. My challenge is thus quite "easy" -- it only pertains to a subset of the full QM predictions! And yet no takers... This of course just shows how *bad* non-realism is. If you are a non-realist, you can't even account for this perfect-correlations *subset* of the QM predictions locally! That's what EPR pointed out long ago...)
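The logic of the challenge can be illustrated with a toy model (my own sketch, not anything from the thread; the names are made up): give each pair a shared hidden direction λ, and let each side's outcome depend only on λ and its own local setting. Such pre-existing values reproduce the perfect anti-correlations at equal settings exactly, but, as Bell showed, they cannot match the full cosine dependence QM predicts at intermediate angles:

```python
import numpy as np

rng = np.random.default_rng(0)

def lhv_trial(a, b):
    # Pre-existing values: each pair carries a hidden direction lam;
    # each outcome is determined locally by lam and that side's setting alone.
    lam = rng.uniform(0, 2 * np.pi)
    A = 1 if np.cos(lam - a) >= 0 else -1    # Alice's result: depends only on (lam, a)
    B = -1 if np.cos(lam - b) >= 0 else 1    # Bob's result: depends only on (lam, b)
    return A, B

def correlation(a, b, n=200_000):
    return np.mean([A * B for A, B in (lhv_trial(a, b) for _ in range(n))])

# Parallel settings: the model reproduces perfect anti-correlation exactly...
print(correlation(0.0, 0.0))          # -1.0
# ...but at intermediate angles it gives a linear, not cosine, dependence,
# so it cannot match the full set of QM predictions (E_QM = -cos(a-b)).
print(correlation(0.0, np.pi / 4))    # ≈ -0.5, whereas QM predicts ≈ -0.707
```

This is just the familiar point in miniature: pre-existing values handle the perfect-correlations subset easily, and it is the *rest* of the QM predictions that they cannot accommodate locally.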
 
  • #89
DrChinese said:
Or maybe yours is not a strong enough argument. I will point out: I am not aware of any Bohmian that would say that EPR was correct in believing:

It is unreasonable to require that only those observables which can be simultaneously measured have reality. I.e. that counterfactual observables do have reality.

So in my book, every Bohmian is an anti-realist.

It depends on exactly what you mean by "realism". I'll say something about this later, in answer to audioloop's question.

But what you, Dr C, are missing above is that when Podolsky said something was "unreasonable", what he actually meant (and absolutely should have said instead!) was: "inconsistent with locality". But I've explained this so many times to you over the years, without getting through, there's really no point even trying again.
 
  • #90
We should all be thinking of reality as fields, with particles as excitations of the fields, instead of crippled and incoherent classical-like models. Classical-like concepts like time, space, 'physical stuff', realism... could well be emergent. Just my unprofessional view (backed by some of the great names in physics).

In the same way that we cannot, even in principle, predict the behavior of certain large collections of bodies from the behavior of just one constituent (e.g. a flock of birds), it seems equally impossible to predict the behavior of a large ensemble of particles from looking at just one electron or proton. Hence it could be totally impossible to understand the reality of chairs and tables by looking at just quantum mechanical rules and axioms. The fundamental aspect of the emergent system is its capacity to be what it is while being completely unlike any other version of what it is. And we are just beginning to approach problems in this direction - we also have to embrace the emergence of life from non-life and consciousness from non-consciousness, among other similar phenomena (like the possible emergence of a reality from a non-reality - these three, life, consciousness and physical stuff, account for all that can be observed in the universe). Emergence is an observational fact and sounds much less absurd than many of the other ideas put forward here.

PS. Since none of my conscious thoughts can at present be modeled and framed in purely classical/physical terms, shouldn't we also be proposing hidden variables for explaining the reality of the paragraph I wrote above? :tongue:
 
Last edited:
  • #91
audioloop said:
Travis, do you believe in CFD?

Interesting question.

The first thing I'd say is: who cares? If the topic is Bell's theorem, then it simply doesn't matter. CFD *follows* from locality in the same way that "realism" / hidden variables do. That is: the only way to locally (and, here crucially, non-conspiratorially) explain even the perfect correlations is with a "realistic" hidden-variable theory with pre-determined values for *all* possible measurements, i.e., a model with the CFD property. So... to whatever extent somebody thinks CFD needs to be assumed to then derive a Bell inequality, it doesn't provide any kind of "out" since CFD follows from locality. That is, the overall logic is still: locality --> X, and then X --> inequality. So whether X is just "realism" or "realism + CFD" or whatever, it simply doesn't make any difference to what the correct answer to this thread's poll is.

So, having argued that it's irrelevant to the official subject of the thread, let me now actually answer the question. Do I believe in CFD? I'm actually not sure. Or: yes and no. Or: it depends on a really subtle point about what, exactly, CFD means. Let me try to explain. As I think everybody knows, my favorite extant quantum theory is the dBB pilot-wave theory. So maybe we can just consider the question: does the pilot-wave theory exhibit the CFD property?

To answer that, we have to be very careful. One's first thought is undoubtedly that, as a *deterministic* hidden variable theory, of course the pilot wave theory exhibits CFD: whatever the outcome is going to be, is determined by the initial conditions, so ... it exhibits CFD. Clear, right?

On the other hand, I've already tried to make a point in this thread about how, although the pilot-wave theory assigns definite pre-existing values (that are then simply revealed in appropriate measurements) to particle positions, it does *not* do this in regard to spin. That is, the pilot-wave theory is in an important sense not "realistic" in regard to spin. And that starts to make it sound like, actually, at least in regard to the spin measurements that are the main subject of modern EPR-Bell discussions, perhaps the pilot-wave theory does *not*, after all, exhibit CFD.

So, which is it? Actually both are true! The key point here is that, according to the pilot-wave theory, there will be many physically different ways of "measuring the same property". Here is the classic example that goes back to David Albert's classic book, "QM and Experience." Imagine a spin-1/2 particle whose wave function is in the "spin up along x" spin eigenstate. Now let's measure its spin along z. The point is, there are various ways of doing that. First, we might use a set of SG magnets that produce a field like B_z ~ B_0 + bz (i.e., a field in the +z direction that increases in the +z direction). Then it happens that if the particle starts in the upper half of its wave packet (upper here meaning w.r.t. the z-direction) it will come out the upper output port and be counted as "spin up along z"; whereas if it happens instead to start in the lower half of the wave packet it will come out the lower port and be counted as "spin down along z". So far so good. But notice that we could also have "measured the z-spin" using a SG device with fields like B_z ~ B_0 - bz (i.e., a field in the z-direction that *decreases* in the +z direction). Now, if the particle starts in the upper half of the packet it'll still come out of the upper port... *but now we'll call this "spin down along z"*. Whereas if it instead starts in the lower half of the packet it'll still come out of the lower port, but we'll now call this *spin up along z*.

And if you follow that, you can see the point. Despite being fully deterministic, what the outcome of a "measurement of the z-spin" will be -- for the same exact initial state of the particle (including the "hidden variable"!) -- is not fixed. It depends on which *way* the measurement is carried out!
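The two-apparatus point can be put in a few lines of toy code (my own sketch, not ttn's; the function names are made up): the exit port is fixed by the hidden position z0 alone, while the "spin" label attached to each port depends on the sign of the field gradient:

```python
def sg_outcome(z0):
    """Which port the particle exits. Deterministic: it depends only on the
    hidden position z0 within the packet, not on the field gradient."""
    return "upper" if z0 > 0 else "lower"

def labelled_result(z0, field_gradient_sign):
    """The 'S_z measurement result' we record: port plus labelling convention."""
    port = sg_outcome(z0)
    # With B_z ~ B0 + b*z, the upper port is labelled "spin up";
    # with B_z ~ B0 - b*z, the labels on the two ports are swapped.
    if field_gradient_sign > 0:
        return "up" if port == "upper" else "down"
    return "down" if port == "upper" else "up"

z0 = +0.3   # particle starts in the upper half of its wave packet
print(labelled_result(z0, +1))  # "up"   -- first apparatus calls this spin up
print(labelled_result(z0, -1))  # "down" -- second apparatus, same z0, opposite label
```

Same initial state, same hidden variable, two physically different ways of "measuring S_z", two different recorded results: that is the many-to-one experiment/operator correspondence in miniature.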

Stepping back for a second, this all relates to the (rather weird) idea from ordinary QM that there is this correspondence between experiments (that are usually thought of as "measuring some property" of something) and *operators*. So the point here is that, for the pilot-wave theory, this correspondence is actually many-to-one. That is, at least in some cases (spin being one of them), many physically distinct experiments all correspond to the same one operator (here, S_z). But (unsurprisingly) distinct experiments can have distinct results, even for the same input state.

So... back finally to the original question... if what "CFD" means is that for each *operator*, there is some definite fact of the matter about what the outcome of an unperformed measurement would have been, then NO, the pilot-wave theory does *not* exhibit CFD. On the other hand, if "CFD" means that for each *specific experiment*, there is some definite fact of the matter about what the outcome would have been, then YES, of course -- the theory is deterministic, so of course there is a fact about how unperformed experiments would have come out had they been performed.

This may seem like splitting hairs for no reason, but the fact is that all kinds of confusion have been caused by people just assuming -- wrongly, at least in so far as this particular candidate theory is concerned -- that it makes perfect sense to *identify* "physical properties" (that are revealed or made definite or whatever by appropriate measurements) with the corresponding QM operators. This is precisely what went wrong with all of the so-called "no hidden variable" theorems (Kochen-Specker, etc.). And it is also just the point that needs to be sorted out to understand whether the pilot-wave theory exhibits CFD or not. The answer, I guess, is: "it's complicated".

That make any sense?
 
  • #92
The notion of 'particles' is oxymoronic. If microscopic entities obey Heisenberg's uncertainty principle, as we know they do, one is forced to admit that the concept of a "microscopic particle" is self-contradictory. If an entity obeys the HUP, one cannot simultaneously determine its position and momentum and, as a consequence, cannot determine, not even in principle, how its position will vary in time. One therefore cannot predict its future locations with certainty, and it lacks the defining requisites of a classical particle: exact position and momentum in spacetime. Why should an entity of such uncertain, evidently non-spatial nature obey classical notions like locality at all times?
 
  • #93
ttn: regarding MWI, I am aware of the difficulties with the pure WF view, but what do you think of Wallace and Timpson's spacetime state realism proposal?

It seems David Wallace is the one every MWI adherent refers to when asked the difficult questions. He just wrote a huge *** book on the Everettian interpretation and argues for solving the Born rule problem with decision theory. He argues that the ontological/preferred-basis issue is solved by decoherence + emergence. Lastly, he posits spacetime state realism.
 
  • #94
ttn said:
It depends on exactly what you mean by "realism".
I think one of the easiest (for me) ways to understand "realism" as per pilot-wave is "contextual realism". Demystifier does a good job discussing this issue here when debating whether a particular paper discussed in that thread ruled out the pilot wave model:
What their experiment demonstrates is that realism, if exists, must be not only nonlocal, but also contextual. Contextuality means that the value of the measured variable may change by the act of measurement. BM is both nonlocal and contextual, making it consistent with the predictions of standard QM as well as with their experiment. In fact, after Eq. (4), they discuss BM explicitly and explain why it is consistent with their results. Their "mistake" is their definition of "reality" as an assumption that all measurement outcomes are determined by pre-existing properties of particles independent of the measurement. This is actually the definition of non-contextual reality, not of reality in general. The general definition of reality is the assumption that some objective properties exist even when measurements are not performed. It does not mean that these properties cannot change by the physical act of measurement. In simpler terms, they do not show that Moon does not exist if nobody looks at it. They only show that Moon, if exists when nobody looks at it, must change its properties by looking at it. I also emphasize that their experiment only confirms a fact that was theoretically known for a long time: that QM is contextual. In this sense, they have not discovered something new about QM, but only confirmed something old.
Non-local Realistic theories disproved
https://www.physicsforums.com/showthread.php?t=167320

Since I hate writing stuff in my own words when others write it down so much more eloquently, the necessary contextuality present in the pilot-wave model is also summarized in an easily understandable way (for me) here:
One of the basic ideas of Bohmian Mechanics is that position is the only basic observable to which all other observables of orthodox QM can be reduced. So, Bohmian Mechanics will qualify VD (value definiteness) as follows: “Not all observables defined in orthodox QM for a physical system are defined in Bohmian Mechanics, but those that are (i.e. only position) do have definite values at all times.” Both this modification of VD (value definiteness) and the rejection of NC (noncontextuality) immediately immunize Bohmian Mechanics against any no HV argument from the Kochen Specker Theorem.
The Kochen-Specker Theorem
http://plato.stanford.edu/entries/kochen-specker/index.html

So, while the KS theorem establishes a contradiction between VD + NC and QM, the qualification above immunizes pilot-wave/deBroglie/Bohmian mechanics from contradiction.
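The KS contradiction can even be checked by brute force. The sketch below (my own illustration, not from the thread) encodes the constraint pattern of the Mermin-Peres square, a standard KS-type construction: a 3x3 grid of two-qubit observables whose three row products are +I and whose column products are +I, +I, -I. No noncontextual assignment of definite ±1 values can satisfy all six constraints at once:

```python
from itertools import product

# Constraint pattern of the Mermin-Peres square: the three row products
# must each be +1; the column products must be +1, +1, -1.
rows = [(0, 1, 2), (3, 4, 5), (6, 7, 8)]
cols = [(0, 3, 6), (1, 4, 7), (2, 5, 8)]
col_signs = [+1, +1, -1]

# Try every possible noncontextual assignment of +/-1 values to the 9 cells.
solutions = 0
for vals in product([+1, -1], repeat=9):
    ok = all(vals[i] * vals[j] * vals[k] == +1 for i, j, k in rows)
    ok = ok and all(vals[i] * vals[j] * vals[k] == s
                    for (i, j, k), s in zip(cols, col_signs))
    solutions += ok

print(solutions)  # 0: no noncontextual value assignment exists
```

The contradiction is visible by hand, too: multiplying the three row constraints says the product of all nine values is +1, while the column constraints say it is -1. Contextual theories like dBB escape because the value revealed depends on which measurement context is actually realized.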
 
Last edited:
  • #95
nanosiborg said:
I had been thinking that it would be pointless to make a local nonrealistic theory, since the question, following Einstein (and Bell), was whether a local model with hidden variables can be compatible with QM. But a local nonrealistic (and necessarily nonviable, because explicitly local) theory could be used to illustrate that hidden variables, i.e., the realism of LHV models, have nothing to do with LHV models' incompatibility with QM and experiment.
ttn said:
Well, you'd only convince the kind of person who voted (b) in the poll, if you somehow managed to show that *no* "local nonrealistic" model could match the quantum predictions. Just showcasing the silly local coin-flipping particles model doesn't do that.
Yes, I see.
ttn said:
But I absolutely agree with the way you put it, about what the question is post-Einstein. Einstein already showed (in the EPR argument, or some less flubbed version of it -- people know that Podolsky wrote the paper without showing it to Einstein first and Einstein was pissed when he saw it, right?) that "realism"/LHV is the only way to locally explain the perfect correlations. Post-Einstein, the LHV program was the only viable hope for locality! And then Bell showed that this only viable hope won't work. So, *no* local theory will work. I'm happy to hear we're on the same page about that. But my point here is just that, really, the best way to convince somebody that "local non-realistic" theories aren't viable is to just run the proof that local theories aren't viable (full stop). But somehow this never actually works. People have this misconception in their heads that a "local non-realistic" theory can work, even though they can't produce an explicit example, and they just won't let go of it.
Yes, I do think I'm following you on all this. That we're on the same page. Not sure when I changed from the "realism or locality has to go" way of thinking to the realization that it's all about the locality condition being incompatible with QM and experiment and that realism/hidden variables are actually irrelevant to that consideration.
ttn said:
Since it so perfectly captures the logic involved here, it's worth mentioning here the nice little paper by Tim Maudlin

http://www.stat.physik.uni-potsdam.de/~pikovsky/teaching/stud_seminar/Bell_EPR-2.pdf [Broken]

where he introduces the phrase: "the fallacy of the unnecessary adjective". The idea is just that when somebody says "Bell proved that no local realist theory is viable", it is actually true -- but highly misleading since the extra adjective "realist" is totally superfluous. As Maudlin points out, you could also say "Bell proved that no local theory formulated in French is viable". It's true, he did! But that does not mean that we can avoid the spectre of nonlocality simply by re-formulating all our theories in English! Same with "realism". Yes, no "local realist" theory is viable. But anybody who thinks this means we can save locality by jettisoning realism, has been duped by the superfluous adjective fallacy.
Yes, as I mentioned, I get this now, and feel like I've made progress in my understanding of Bell.
I like the way Maudlin writes also. Thanks for the link. In the process of rereading it.
nanosiborg said:
I'd put it like this. Bell's formulation of locality, as it affects the general form of any model of any entanglement experiment designed to produce statistical dependence between the quantitative (data) attributes of spacelike separated paired detection events, refers to at least two things: 1) genuine relativistic causality, the independence of spacelike separated events, ie., that the result A doesn't depend on the setting b, and the result B doesn't depend on the setting a. 2) statistical independence, ie., that the result A doesn't alter the sample space for the result B, and vice versa. In other words, that the result at one end doesn't depend in any way on the result at the other end.
ttn said:
I don't understand what you mean here.
I don't think I do either. I'm just fishing for any way to understand Bell's theorem that will allow me to retain the assumption that nature is evolving in accordance with the principle of local action. That nature is exclusively local. Because the assumption that nonlocality exists in nature is pretty heavy duty. Just want to make sure any possible nuances and subtleties have been dealt with. I've come to think that experimental loopholes and hidden variables ('realism') are unimportant. That it has to do solely with the explicit denotation of the locality assumption. So, I'm just looking for (imagining) possible hidden assumptions in the denotation of locality that might preclude nonlocality as the cause of Bell inequality violations.

ttn said:
For the usual case of two spin-entangled spin-1/2 particles, the sample space for Bob's measurement is just {+,-}.
If the joint sample space is (+,-), (-,+), (+,+), (-,-), then a detection of, say, + at A does change the joint sample space from (+,-), (-,+), (+,+), (-,-) to (+,-), (+,+).

But yes I see that the sample space at either end is always (+,-) no matter what. At least in real experiments. In the ideal, iff θ is either 0° or 90°, then a detection at one end would change the sample space at the other end.

But the sample space of what's registered by the detectors isn't the sample space I was concerned about. There's also the sample space of what's transmitted by the filters, and the sample space ρ(λ) that's emitted by the source. It's how a detection might change ρ(λ) that I was concerned with.
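For what it's worth, the two sample-space claims in play here can be made concrete with the standard QM singlet probabilities (a sketch of textbook formulas, not of any particular hidden-variable model): Bob's marginal never changes with Alice's result, but the conditional distribution does:

```python
import numpy as np

def singlet_joint(theta):
    """Joint outcome probabilities for a spin singlet, with analyzer angle
    difference theta (standard QM prediction)."""
    same = np.sin(theta / 2) ** 2 / 2      # P(+,+) = P(-,-)
    opp  = np.cos(theta / 2) ** 2 / 2      # P(+,-) = P(-,+)
    return {(+1, +1): same, (-1, -1): same, (+1, -1): opp, (-1, +1): opp}

theta = 0.0                                # parallel analyzers
p = singlet_joint(theta)

# Bob's marginal is 50/50 whatever Alice does (no signalling)...
print(p[(+1, -1)] + p[(-1, -1)])           # P(B=-1) = 0.5
# ...but conditioning on Alice's result reshapes the joint sample space:
p_a_plus = p[(+1, +1)] + p[(+1, -1)]
print(p[(+1, -1)] / p_a_plus)              # P(B=-1 | A=+1) = 1.0 at theta=0
```

At theta = 0 the joint sample space effectively shrinks to {(+,-), (-,+)}, which is exactly the conditioning point made above, while Bob's local statistics are untouched.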

ttn said:
This is certainly not affected by anything Alice or her particle do. So if you're somehow worried that the thing you call "2) statistical independence" might actually be violated, I don't think it is. But I don't think that even matters, since I don't see anything like this "2) ..." being in any way assumed in Bell's proof. But, basically, I just can't follow what you say here.
I think that statistical independence is explicated in the codification of Bell's locality condition. Whether or not it's relevant to the interpretation of Bell's theorem I have no idea at the moment. The more I think about it, the more it just seems too simplistic, too pedestrian.

nanosiborg said:
The problem is that a Bell-like (general) local form necessarily violates 2 (an incompatibility that has nothing to do with locality), because Bell tests are designed to produce statistical (ie., outcome) dependence via the selection process (which proceeds via exclusively local channels, and produces the correlations it does because of the entangling process which also proceeds via exclusively local channels, and produces a relationship between the entangled particles via, eg., emission from a common source, interaction, 'zapping' with identical stimulii, etc.).
ttn said:
Huh?
Well, the premise might be wrong, maybe this particular inconsistency between experimental design and Bell locality isn't significant or relevant to Bell inequality violations, but I have to believe that you understand the statement.

nanosiborg said:
Ok, I don't think it has anything to do with Jarrett's idea that "Bell locality" = "genuine locality" + "completeness", but rather the way I put it above, in terms of an incompatibility between the statistical dependence designed into the experiments and the statistical independence expressed by Bell locality.

Is this a possibility, or has Bell (and/or you) dealt with this somewhere?
ttn said:
The closest I can come to making sense of your worry here is something like this: "Bell assumes that stuff going on by Bob should be independent of stuff going on by Alice, but the experiments reveal correlations, so one of Bell's premises isn't reflected in the experiment." I'm sure I have that wrong and you should correct me. But on the off chance that that's right, I think it would be better to express it this way: "Bell assumes locality and shows that this implies a certain limit on the correlations; the experiments show that the correlations are stronger than the limit allows; therefore we conclude that nature is nonlocal". That is, it sounds like you are trying to make "something about how the experimental data should come out" into a *premise* of Bell's argument, instead of the *conclusion* of the argument. But it's not a premise, it's the conclusion. And the fact that the real data contradicts that conclusion doesn't invalidate his reasoning; it just shows that his *actual* premise (namely, locality!) is false.
In a previous post I said something like: Bell locality places upper and lower boundaries on the correlations, and the QM-predicted correlations lie, almost entirely, outside those boundaries.

Is the following quote what you're saying is a better way to say what you think I'm saying but is wrong?: "Bell assumes locality and shows that this implies a certain limit on the correlations; the experiments show that the correlations are stronger than the limit allows; therefore we conclude that nature is nonlocal."

Or are you saying that that's the correct way of saying it? Or what?

I think the way I'd phrase it is that Bell codified the assumption of locality in a way that denotes the independence (from each other) of paired events at the filters and detectors. Bell proved that models of quantum entanglement that incorporate Bell's locality condition cannot be compatible with QM. It is so far the case that models of quantum entanglement that incorporate Bell's locality condition are inconsistent with experimental results.

I don't yet understand how/why it's concluded that nature is nonlocal.
 
Last edited by a moderator:
  • #96
nanosiborg said:
There's also the sample space of what's transmitted by the filters, and the sample space ρ(λ) that's emitted by the source. It's how a detection might change ρ(λ) that I was concerned with.

Now you're questioning the "no conspiracy" assumption. It's true that you can avoid the conclusion of nonlocality by denying that the choice of measurement settings is independent of the state of the particle pair -- or equivalently by saying that ρ(λ) varies as the measurement settings vary. But there lies "superdeterminism", i.e., cosmic conspiracy theory.



Is the following quote what you're saying is a better way to say what you think I'm saying but is wrong?: "Bell assumes locality and shows that this implies a certain limit on the correlations; the experiments show that the correlations are stronger than the limit allows; therefore we conclude that nature is nonlocal."

Or are you saying that that's the correct way of saying it? Or what?

That's the simple (and correct) way to express what I thought you were saying.



I think the way I'd phrase it is that Bell codified the assumption of locality in a way that denotes the independence (from each other) of paired events at the filters and detectors. Bell proved that models of quantum entanglement that incorporate Bell's locality condition cannot be compatible with QM. It is so far the case that models of quantum entanglement that incorporate Bell's locality condition are inconsistent with experimental results.

I don't yet understand how/why it's concluded that nature is nonlocal.

Because if every possible local theory disagrees with experiment, then every possible local theory is FALSE.
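To make that concrete (my own numerical illustration, not part of the original post): Bell-type arguments bound the CHSH combination of correlations at 2 for *every* local model, while the QM prediction E(a,b) = -cos(a-b) for the singlet reaches 2√2:

```python
import numpy as np

def E_qm(a, b):
    """QM correlation for the spin singlet with analyzer angles a and b."""
    return -np.cos(a - b)

def chsh(E, a, a2, b, b2):
    """CHSH combination; every local model must keep this at or below 2."""
    return abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))

# Standard optimal angle choices for maximal violation:
a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = chsh(E_qm, a, a2, b, b2)
print(S)   # ≈ 2.828 (= 2*sqrt(2)) > 2, the bound obeyed by every local model
```

Experiments confirm the QM value, so the whole class of local models, realist or not, is ruled out at once.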
 
  • #97
bohm2 said:
So, while the KS theorem establishes a contradiction between VD + NC and QM, the qualification above immunizes pilot-wave/deBroglie/Bohmian mechanics from contradiction.

Yes, that's right. Kochen-Specker rules out non-contextual hidden variable (VD) theories. The dBB pilot-wave theory is not a non-contextual hidden variable (VD) theory.

And, of course, separately: Bell's theorem rules out local theories. The pilot-wave theory is not a local theory.

People who voted for (b) in the poll evidently get these two theorems confused. They try to infer the conclusion of KS, from Bell.
 
  • #98
Quantumental said:
ttn: regarding MWI, I am aware of the difficulties with the pure WF view, but what do you think of Wallace and Timpson's spacetime state realism proposal?

I read it when it came out and haven't thought of it since. In short: meh.


It seems David Wallace is the one every MWI adherent refers to when asked the difficult questions. He just wrote a huge *** book on the Everettian interpretation and argues for solving the Born rule problem with decision theory. He argues that the ontological/preferred-basis issue is solved by decoherence + emergence. Lastly, he posits spacetime state realism.

Haven't read DW's new book. Everything I've seen about the attempt to derive the Born rule from decision theory has been, to me, just ridiculous. But I would like to see DW's latest take on it. Not sure if you intended this, but (what I would call) the "ontology issue" and the "preferred basis issue" are certainly not the same thing. Not sure what you meant exactly with the last almost-sentence. (Shades of ... "the castle AAARRRGGGG")
 
  • #99
In my experience, whenever things are philosophically murky and people are stuck in one or more "camps", it sometimes helps to ask a technical question whose answer is independent of how you interpret things, but which might throw some light on those interpretations. That's what Bell basically did with his inequality. It may not have solved anything about the interpretation of quantum mechanics, but certainly afterwards, any interpretation has to be understood in light of his theorem.

Anyway, here's a technical question about Many-Worlds. Supposing that you have a wave function for the entire universe, [itex]\Psi[/itex]. Is there some mathematical way to interpret it as a superposition, or mixture, of macroscopic "worlds"?

Going the other way, from macroscopic to quantum, is certainly possible (although I'm not sure if it is unique--probably not). With every macroscopic object, you can associate a collection of wave packets for the particles making up the object, where the packet is highly peaked at the location of the macroscopic object.

But going from a microscopic description in terms of individual particle descriptions to a macroscopic description in terms of objects is much more complicated. Certainly it's not computationally tractable, since a macroscopic object involves unimaginable numbers of particles, but I'm wondering if it is possible, conceptually.
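The macroscopic-to-quantum direction described above can at least be sketched numerically. A minimal sketch, with all numbers purely illustrative: assign a particle a normalized Gaussian wave packet sharply peaked at the macroscopic object's position x0.

```python
import numpy as np

# Illustrative sketch: a normalized Gaussian wave packet peaked at the
# (assumed) macroscopic position x0, with width sigma much smaller than
# the scale of the object.
x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]
x0, sigma = 2.0, 0.05  # object position; packet width << object scale

psi = np.exp(-(x - x0) ** 2 / (4 * sigma ** 2))
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)  # normalize on the grid

peak = x[np.argmax(np.abs(psi) ** 2)]  # where |psi|^2 is largest
norm = np.sum(np.abs(psi) ** 2) * dx   # total probability
print(peak)   # ~2.0: density peaked at the object's location
print(norm)   # ~1.0
```

As the post notes, the non-trivial direction is the reverse one: carving a many-particle universal wave function into such packets, which is where the uniqueness and tractability questions arise.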
 
  • #100
ttn said:
Now you're questioning the "no conspiracy" assumption. It's true that you can avoid the conclusion of nonlocality by denying that the choice of measurement settings is independent of the state of the particle pair -- or equivalently by saying that ρ(λ) varies as the measurement settings vary. But there lies "superdeterminism", i.e., cosmic conspiracy theory.
No I don't like any of that stuff. What I'm getting at has nothing to do with 'conspiracies'. At the outset, given a uniform λ distribution (is this what's called rotational invariance?) and the rapid and random varying of the a and b settings, then would the sample space for a or b be all λ values? Anyway, whatever the sample space for a or b (depending on the details of the local model), then given a detection at, say, A, associated with some a, then would the sample space for b be a reduced set of possible λ values?

ttn said:
That's the simple (and correct) way to express what I thought you were saying.
If "therefore we conclude that nature is nonlocal" is omitted, then that's what I was saying.

ttn said:
Because if every possible local theory disagrees with experiment, then every possible local theory is FALSE.
Ok, let's say that every possible local theory disagrees with experiment. It doesn't then follow that nature is nonlocal, unless it's proven that the local form (denoting causal independence of spacelike separated events) doesn't also codify something in addition to locality, some acausal sort of independence (such as statistical independence), which might act as the effective cause of the incompatibility between the local form and the experimental design, precluding nonlocality.
 
Last edited:
  • #101
stevendaryl said:
Anyway, here's a technical question about Many-Worlds. Supposing that you have a wave function for the entire universe, [itex]\Psi[/itex]. Is there some mathematical way to interpret it as a superposition, or mixture, of macroscopic "worlds"?

Going the other way, from macroscopic to quantum, is certainly possible (although I'm not sure if it is unique--probably not). With every macroscopic object, you can associate a collection of wave packets for the particles making up the object, where the packet is highly peaked at the location of the macroscopic object.

But going from a microscopic description in terms of individual particle descriptions to a macroscopic description in terms of objects is much more complicated. Certainly it's not computationally tractable, since a macroscopic object involves unimaginable numbers of particles, but I'm wondering if it is possible, conceptually.

This is just the normal way that all MWI proponents already think about the theory. It's a theory of the whole universe, described by the universal wave function, obeying Schroedinger's equation at all times. (No collapse postulates or other funny business.) Decoherence gives rise to a coherent "branch" structure such that it's possible to think of each branch as a separate (or at least, independent) world.

For more details, see any contemporary treatment of MWI, e.g., the David Wallace book that was mentioned earlier. (Incidentally, I just ordered myself a copy!)
 
  • #102
nanosiborg said:
No I don't like any of that stuff. What I'm getting at has nothing to do with 'conspiracies'.

Well, what you suggested was a violation of what is actually called the "no conspiracy" assumption. I'm sure you didn't *mean* to endorse a conspiracy theory... (See the scholarpedia entry on Bell's theorem for more details on this no conspiracy assumption.)


If "therefore we conclude that nature is nonlocal" is omitted, then that's what I was saying.

Well yeah, OK, but my point was kind of that, if I was understanding the first part (and now it sounds like I was?), then what actually follows logically is that nature is nonlocal. So I guess you should think about the reasoning some more.


Ok, let's say that every possible local theory disagrees with experiment. It doesn't then follow that nature is nonlocal, unless it's proven that the local form (denoting causal independence of spacelike separated events) doesn't also codify something in addition to locality, some acausal sort of independence (such as statistical independence), which might act as the effective cause of the incompatibility between the local form and the experimental design, precluding nonlocality.

What you wrote after "unless" is just a way of saying that, actually, it wasn't established that "every possible local theory disagrees with experiment". Can we at least agree that, if every possible local theory disagrees with experiment, then nature is nonlocal -- full stop?
 
  • #103
ttn said:
Well, what you suggested was a violation of what is actually called the "no conspiracy" assumption. I'm sure you didn't *mean* to endorse a conspiracy theory... (See the scholarpedia entry on Bell's theorem for more details on this no conspiracy assumption.)
Ok. Thanks.

ttn said:
Well yeah, OK, but my point was kind of that, if I was understanding the first part (and now it sounds like I was?), then what actually follows logically is that nature is nonlocal.
And my point is that nonlocality in nature doesn't necessarily follow from the generalized nonviability of the locality condition.

ttn said:
What you wrote after "unless" is just a way of saying that, actually, it wasn't established that "every possible local theory disagrees with experiment". Can we at least agree that, if every possible local theory disagrees with experiment, then nature is nonlocal -- full stop?
Well, no to both statements. The point is that every possible local theory can disagree with experiment in an exclusively local universe if the general locality condition is encoding something (in addition to locality) that's necessarily incompatible with the experimental designs of Bell tests but which has nothing to do with locality.

I take Bell's formulation as general, and assume that the QM treatment of quantum entanglement will always agree with experiment. So, insofar as Bell locality and QM have been mathematically proven to be incompatible, then there's no possible viable local theory of quantum entanglement.

But consider that Bell tests are designed to produce statistical dependence by the entanglement creation process (eg., common emitter, interaction of the particles, common 'zapping' of separated particles, etc.) and the data pairing process, both of which proceed along exclusively local channels.

Then consider that the locality condition codifies statistical independence. I'm just wondering if there's anything significant enough about that inconsistency so that it, and not nonlocality, might be the effective cause of the inconsistency between local theories and experiment.
 
  • #104
nanosiborg said:
The point is that every possible local theory can disagree with experiment in an exclusively local universe if the general locality condition is encoding something (in addition to locality) that's necessarily incompatible with the experimental designs of Bell tests but which has nothing to do with locality.

True. That's also, for example, what Jarrett thought. But... I can't understand what exactly you are proposing this "extra illicit something" to *be*. If you have something definite in mind, I would enjoy hearing about it. Probably it will turn out that you haven't really fully understood Bell's locality condition (as Jarrett didn't when he made similar charges) and that actually whatever you have in mind is not at all smuggled in. But who knows, maybe you're right.

On the other hand, if you don't have anything definite in mind -- if it's just "well what if there's some illicit assumption smuggled in there? prove that there isn't such a thing!" -- then that would be quite silly and would certainly leave nothing to discuss.


But consider that Bell tests are designed to produce statistical dependence by the entanglement creation process (eg., common emitter, interaction of the particles, common 'zapping' of separated particles, etc.) and the data pairing process, both of which proceed along exclusively local channels.

If the claim is that there is some extra illicit assumption built into Bell's definition of locality, I don't see how you think it helps to bring up the experiments. Shouldn't you be talking about the mathematical proof of Bell's theorem, and arguing that there is an assumption in the theorem other than (genuine) locality?


Then consider that the locality condition codifies statistical independence.

I don't understand what you think you mean by that. What the locality condition codifies is ... locality. It certainly does *not* just say: A and B should be statistically independent. If you think that is the locality condition, you need to actually read Bell and understand what he did before you start criticizing him.
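To illustrate the distinction being drawn here with a toy example of my own (not from Bell or the thread): Bell's locality condition requires the joint probabilities to factorize only *conditionally on λ*, i.e. P(A,B|a,b,λ) = P(A|a,λ)·P(B|b,λ). It does not forbid A and B from being strongly correlated once λ is averaged over:

```python
import random

# Toy Bell-local model: outcomes factorize conditionally on lam,
#   P(A,B | a,b,lam) = P(A | a,lam) * P(B | b,lam),
# yet A and B are perfectly correlated after lam is averaged out.
def run_pairs(n=100_000):
    same = 0
    a_plus = 0
    for _ in range(n):
        lam = random.choice([-1, +1])  # shared hidden variable from source
        A = lam  # Alice's outcome: fixed locally by lam alone
        B = lam  # Bob's outcome: fixed locally by lam alone
        same += (A == B)
        a_plus += (A == +1)
    return same / n, a_plus / n

p_same, p_a_plus = run_pairs()
print(p_same)    # 1.0: outcomes always agree (statistically dependent)
print(p_a_plus)  # ~0.5: each marginal on its own is still 50/50
```

So a perfectly local model reproduces the statistical dependence created at the source; "locality" and "statistical independence of A and B" are not the same condition.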
 
  • #105
ttn said:
True. That's also, for example, what Jarrett thought. But... I can't understand what exactly you are proposing this "extra illicit something" to *be*. If you have something definite in mind, I would enjoy hearing about it.
Just an intuited possibility of something ('statistical' independence) that's sort of hidden by the causal independence (locality) that's codified in the locality condition, and that might be inconsistent with the experimental designs to a significant enough extent that it would be considered the effective cause of the inconsistency between local theories and experiment.

ttn said:
Probably it will turn out that you haven't really fully understood Bell's locality condition (as Jarrett didn't when he made similar charges) and that actually whatever you have in mind is not at all smuggled in. But who knows, maybe you're right.
I'll agree that at this point the former seems much more likely than the latter.

ttn said:
On the other hand, if you don't have anything definite in mind -- if it's just "well what if there's some illicit assumption smuggled in there? prove that there isn't such a thing!" -- then that would be quite silly and would certainly leave nothing to discuss.
I agree. Certainly no disproof is required of what I'm suggesting, rather vaguely, might be the case. It's along the lines of, I have this vague notion, help me explore it if you think there's any possibility that there might be something to it. You've indicated that you don't, and the more I get into it the more I think you're probably right. But I'd like to at least get to the point where I have a clearly formulated hypothesis instead of just a vague notion.

ttn said:
If the claim is that there is some extra illicit assumption built into Bell's definition of locality, I don't see how you think it helps to bring up the experiments.
Shouldn't you be talking about the mathematical proof of Bell's theorem, and arguing that there is an assumption in the theorem other than (genuine) locality?
The mathematical proof only tells us that the locality condition is incompatible with QM. The possible incompatibility of the suggested extra illicit (and less visible) assumption can only be demonstrated when evaluated in relation to experimental design.

nanosiborg said:
Then consider that the locality condition codifies statistical independence.
ttn said:
I don't understand what you think you mean by that. What the locality condition codifies is ... locality. It certainly does *not* just say: A and B should be statistically independent.
I just left out, "in addition to codifying locality (ie., causal independence)", which I thought was understood. Certainly the locality condition doesn't only codify statistical independence. Part of what I'm wondering is if it codifies statistical independence at all. Or, in other words, does the locality condition only codify locality (causal independence)?

If the locality condition codifies statistical independence in addition to codifying locality, then the question becomes: is the inconsistency between the statistical independence codified by the locality condition and the statistical dependency necessitated by the experimental design significant enough that this inconsistency is the effective cause of the inconsistency between the predictions of models incorporating the locality condition and experimental results?
 
Last edited:
1. What are Bell's inequalities and how do they relate to nature?

Bell's inequalities are a set of mathematical inequalities that describe the limits of classical physics in explaining certain phenomena in nature. They are used to test the validity of quantum mechanics, which is a more accurate and comprehensive theory of nature.

2. Why are violations of Bell's inequalities significant?

Violations of Bell's inequalities indicate that classical physics is not sufficient to explain certain phenomena in nature, and that quantum mechanics is a more accurate and comprehensive theory. This challenges our understanding of the fundamental laws of nature and opens up new possibilities for scientific exploration.

3. How are violations of Bell's inequalities detected?

Violations of Bell's inequalities are detected through experiments that involve measuring the properties of entangled particles. These particles are connected in such a way that their properties are correlated, even when they are separated by large distances. By measuring the properties of these particles, scientists can determine if they violate Bell's inequalities.

4. What do violations of Bell's inequalities tell us about the nature of reality?

Violations of Bell's inequalities suggest that reality is not as deterministic as classical physics suggests. Instead, it supports the idea that quantum mechanics allows for non-local connections between particles, and that the act of measurement can affect the properties of these particles. This challenges our traditional understanding of causality and the nature of reality.

5. How do violations of Bell's inequalities impact our understanding of the universe?

Violations of Bell's inequalities have significant implications for our understanding of the universe. They suggest that there are fundamental aspects of reality that are beyond our current understanding, and that there may be new laws and principles at work in the universe. This opens up new avenues for research and exploration in the field of quantum mechanics and the nature of the universe.
