Bell Locality: New Paper Clarifies Arguments

  • #1
ttn
I've argued here in the past (with dr chinese and others) about what, exactly, is proved by Bell's Theorem. Here is a new paper which addresses and clarifies many of those points:

http://www.arxiv.org/abs/quant-ph/0601205

I suspect it will be of interest to people here. But before Patrick jumps on me about MWI, I'll just say this: I am taking it as given that the measurement in each wing of these EPR/Bell correlation experiments has a definite outcome. Given that reasonable assumption, the conclusions in the paper follow. But if one holds (with MWI) that these experiments do not have definite outcomes, all bets are off.
 
  • #2
ttn said:
But before Patrick jumps on me about MWI, I'll just say this: I am taking it as given that the measurement in each wing of these EPR/Bell correlation experiments has a definite outcome.


Hehe ! This brainwashing has had its effect all right :biggrin:
I'm staying out of here :!)
 
  • #3
ttn said:
I've argued here in the past (with dr chinese and others) about what, exactly, is proved by Bell's Theorem. Here is a new paper which addresses and clarifies many of those points:

http://www.arxiv.org/abs/quant-ph/0601205

I suspect it will be of interest to people here. But before Patrick jumps on me about MWI, I'll just say this: I am taking it as given that the measurement in each wing of these EPR/Bell correlation experiments has a definite outcome. Given that reasonable assumption, the conclusions in the paper follow. But if one holds (with MWI) that these experiments do not have definite outcomes, all bets are off.
Thanks for posting the link to your "Bell Locality and the Nonlocal Character of Nature", ttn. Ever since you last visited this forum I've been meaning to reread your "EPR and Bell Locality", and research some other papers in order to better understand the argument(s) you advocate. Your latest paper looks like it might make that task a bit easier in a way.

Vanesch has been kind enough in a recent thread to discuss with me at some length his thinking behind his adoption of the MWI. But I still think that the reasoning behind the idea that experiments have definite outcomes is much stronger than the reasoning behind the idea that they don't.
 
  • #4
It's damn hard to stay out of here :grumpy:

The statement by the OP illustrates my POV: you'll have to choose between some form of non-locality and parallel outcomes if the empirical predictions of QM are correct.

Given the disaster of the first option, and the fact that the second option is already foreseen in the formalism of QM, I go for the latter: it is MWI or non-locality, in a nutshell.
 
  • #5
vanesch said:
it is MWI or non-locality, in a nutshell.

The nut in this shell prefers to go for nonlocality rather than an excess of ontological baggage :smile:
 
  • #6
Tez said:
The nut in this shell prefers to go for nonlocality rather than an excess of ontological baggage :smile:

Ok. Bye, general relativity, then...
 
  • #7
vanesch said:
Ok. Bye, general relativity, then...

I'll worry about that if and when we have a quantum theory of gravity. At the current energy scales there's no particular conflict, and I certainly lean to a belief that quantum mechanics reveals more of the "reality of the universe" than GR. However I highly doubt it'll be a problem, for the simple reason that I doubt by then we'll be seeing space and time as having the ontological status that GR gives them...
 
  • #8
Tez said:
I'll worry about that if and when we have a quantum theory of gravity. At the current energy scales there's no particular conflict, and I certainly lean to a belief that quantum mechanics reveals more of the "reality of the universe" than GR. However I highly doubt it'll be a problem, for the simple reason that I doubt by then we'll be seeing space and time as having the ontological status that GR gives them...

I agree with Tez, but I really don't think there's any point arguing about this. Neither side will convince the other.

Perhaps a more interesting question for discussion would be what *other* possible routes exist to avoid the conclusions in the above paper. Sure, you can get around the conclusion if you deny that experiments actually have definite outcomes. But if someone wasn't comfortable denying that (and many people, I think rightly, aren't), what *else* could they deny instead if they wanted to avoid the conclusion that nature is non-local?

There is a long history of proposed answers here, e.g., people saying you can have a local theory as long as it doesn't have any hidden variables, or you can have a local theory as long as it isn't deterministic, etc. Does anyone think those positions are viable? Does anyone think there is some other principle that can be rejected instead of locality (I mean other than the principle that when we see a pointer pointing left it's actually pointing left)?
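To fix ideas, here is a quick numerical sketch of the constraint at stake (my own illustrative aside, not from the paper): in the CHSH form of Bell's inequality, any Bell-local model is bounded by 2, while the standard quantum singlet-state correlation E(a, b) = -cos(a - b), with an illustrative optimal choice of angles, reaches 2*sqrt(2).

```python
import math

# CHSH sketch: each side chooses between two measurement angles, outcomes
# are +/-1, and E(a, b) is the outcome correlation. For the spin singlet,
# quantum mechanics predicts E(a, b) = -cos(a - b).

def E(a, b):
    return -math.cos(a - b)

a1, a2 = 0.0, math.pi / 2              # Alice's two settings
b1, b2 = math.pi / 4, 3 * math.pi / 4  # Bob's two settings (an optimal choice)

# the CHSH combination; |S| <= 2 for any Bell-local model
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))  # ~2.83 = 2*sqrt(2), above the Bell-local bound of 2
```

The angles here are just the textbook-optimal set; any settings with |S| > 2 make the same point.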
 
  • #9
Well, there's this somewhat strange "blockworld" idea, based on the observation that one has to eventually "bring the notebooks together and compare data": the idea is that the manifestation of Bell correlations is due to the events in some sense having a common future rather than a common past.

But I don't claim to really understand it, or the appeal of it, since it then seems to deny my free will not to meet up with the other person; and once one starts allowing such conspiracies, it's easy to get pretty much whatever you want via the backwards lightcone...
 
  • #10
I am looking at the paper. Should be the seed for some good discussion in there.

ttn, I had linked your (?) earlier paper ("EPR and Bell Locality") on my site a while back because I liked some of the things it covered. Even if I don't always agree...

In the meantime, Tez* has taught me another lesson in why it is always good to stay close to the formalism. So I will stick with the "either/or" for now.

(*Thanks again Tez; I am still digesting steering theorems and suspect this may be beyond my reach, but I haven't given up yet.)
 
  • #11
Tez said:
I'll worry about that if and when we have a quantum theory of gravity. At the current energy scales there's no particular conflict, and I certainly lean to a belief that quantum mechanics reveals more of the "reality of the universe" than GR.


The main argument for MWI is not locality of course, it is unitarity and the refusal to give special "ad hoc" status to the physics of measurement apparatus (you cannot, in the standard view, DERIVE the hermitian "measurement observable" of a device, if the quantum physical description of the device is given; you have to decide "ad hoc" what its measurement basis is). Decoherence can give you the measurement basis, by looking at the pointer states which are stable against interactions with the environment. But decoherence only makes sense in an MWI setting. You get the locality (and the resolution of the apparent non-locality and spooky action at a distance in EPR situations) for free.

MWI is based upon taking unitarity seriously. And then you get locality for free, if the unitary interactions of the theory are local (which they are).
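To make the pointer-state selection concrete, here is a toy sketch (my own illustration; the two-qubit model and labels are made up for the example): once an "environment" qubit holds an orthogonal record of each pointer state, the off-diagonal coherences of the system's reduced density matrix vanish.

```python
import math

def kron(u, v):
    # tensor product of two single-qubit state vectors
    return [ui * vi for ui in u for vi in v]

def reduced_density(state):
    # partial trace over the second qubit of a real two-qubit pure state,
    # components ordered as |s e> with index 2*s + e
    return [[sum(state[2 * s + e] * state[2 * t + e] for e in range(2))
             for t in range(2)] for s in range(2)]

r = 1 / math.sqrt(2)
# environment keeps no record: (|0> + |1>)/sqrt(2) tensor |0>
product = kron([r, r], [1.0, 0.0])
# environment records the pointer state: (|0>|0> + |1>|1>)/sqrt(2)
entangled = [r, 0.0, 0.0, r]

print(reduced_density(product))    # off-diagonals ~0.5: coherence survives
print(reduced_density(entangled))  # off-diagonals 0: decohered in that basis
```

The basis in which the off-diagonals die is exactly the basis the environment "measures", which is the decoherence story in miniature.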
 
  • #12
ttn said:
Perhaps a more interesting question for discussion would be what *other* possible routes exist to avoid the conclusions in the above paper. Sure, you can get around the conclusion if you deny that experiments actually have definite outcomes. But if someone wasn't comfortable denying that (and many people, I think rightly, aren't), what *else* could they deny instead if they wanted to avoid the conclusion that nature is non-local?

One possible explanation is that nature is inconsistent (in the sense that it cannot be described by a mathematical structure). Before you cry wolf: this is exactly Bohr's point of view. Quantum theory does not describe nature; it just gives you relations between experimental preparations and outcomes. And when you DO try to describe nature (using hidden, or not, variables) you run into problems. So microphysics is NOT DESCRIBABLE by a mathematical structure. Maybe it doesn't even exist, and measurements "just are". Or "knowledge just is". And quantum theory is the statistics of inconsistency.

Another possible explanation is solipsism. Quantum theory correctly describes our sensations, but there's no underlying reality that is responsible for them (and hence we will not find any mathematical structure that can do so).

Seriously, when I look at the alternatives (non-locality, inconsistency, solipsism), I think that MWI is not so bad!

EDIT: I forgot, of course, the most prosaic alternative: EPR situations do not exist, and the empirical evidence for them is flawed (the local-realist loopholes).
 
  • #13
vanesch said:
MWI is based upon taking unitarity seriously. And then you get locality for free, if the unitary interactions of the theory are local (which they are).

Most Everettians I've talked to say that the things with ontological status in their view (quantum states) are nonlocal. (Sure, when doing a Bell experiment the Hamiltonians are local, but that doesn't make the whole description local - give me nonlocal Hamiltonians and I'll violate a Bell inequality for you with an unentangled "local" wavefunction!)

I think you have to go further than saying MWI is based upon taking Unitarity seriously. IMO it is based on taking one particular mathematical formulation of QM seriously, and in particular the tensor product structure is simply presumed (I'm far more worried about "preferred tensor product structures" than "preferred bases", since the former speaks to our notions of systems and locality far more deeply).

Let me elaborate on this "one particular math formalism point". I'm going to cut and paste from an email exchange, sorry!
>
> Imagine that we lived in a universe in which the Wigner distribution
> for every system was positive. That is, all of standard quantum
> mechanics is true (Born rule etc), but an additional restriction
> forces positivity of every Wigner distribution. The set of states with
> positive Wigner distribution is very large - it includes coherent
> states, and entangled states - such as squeezed states, in fact it
> includes all Gaussian states, but many other states as well. They form
> a convex set and thus a completely self contained subset of quantum
> mechanics (e.g. All Hamiltonians would be the same - implying the same
> hydrogen atom spectrum - and one can't evolve out of this set given
> only states within it). Interestingly, since the original EPR argument
> used only the
> (position-momentum) squeezed state and Gaussian type measurements,
> they could also have had an EPR paper and Bohrian answer!
>
> Now let's also imagine the people in this universe do not actually have
> the Wigner formalism - they only have the standard Hilbert space
> formalism. Thus they are writing down states in a Hilbert space,
> describing their measurements in the standard von-Neumann way as
> giving an entanglement between the system and apparatus and so on.
> They see an "intrinsically" probabilistic theory and nonseparability.
> They go through the same metaphysical convulsions we do about the
> collapse of the wavefunction.
>
> If these people adopt an Everettian approach to understanding the
> underlying reality implied by their physical theory, is it really
> justified? If one advocates that Everett follows from just accepting
> the math of QM, then it should be just as applicable to this universe
> - the math is identical. Only the original set of states is different.
>
> But in such a universe it is clear - the physicists have been tricked
> by their mathematics (inseparability of states with respect to a
> particular tensor product, a belief in "objective" probabilities and
> so on). In fact there is this perfectly good realist explanation (in
> terms of the Wigner probability densities as classical uncertainties
> over a phase
> space) lurking out there.
>
> Until I am convinced that this is not a good analogy for where we're
> at with quantum mechanics as it stands, I'm not prepared to take what
> I see as an extremist way out!
>

My point in that email is that the eye-glazing wonder an Everettian feels when they see
(|0>+|1>)|0>
evolve to
(|0>|0>+|1>|1>)
would be felt by inhabitants of "Gaussian World", since they have an identical Hilbert space structure. They may well adopt MWI. The only difference between that world and ours, IMO, is that we haven't found the equivalent probabilistic description over a suitable "ontic phase space".

I believe your other argument about not being able to "derive" the measurement observable of the device given its physical description as a particular justification for MWI is spurious. In Liouville mechanics, if I give you the Liouville distribution of one system (the apparatus) and the interaction Hamiltonian between it and some other system (which is described by another Liouville distribution), and you then evolve them both to the coupled (joint) Liouville distribution, you cannot "derive" what observable it corresponds to either. In this purely classical situation, just as in the quantum case, some other physical (generally empiricist) input is required.

I'm off to Pareee until monday afternoon, I look forward to reading your reply when I get back.

Tez
 
  • #14
Tez said:
> Imagine that we lived in a universe in which the Wigner distribution
> for every system was positive. That is, all of standard quantum
> mechanics is true (Born rule etc), but an additional restriction
> forces positivity of every Wigner distribution. The set of states with
> positive Wigner distribution is very large - it includes coherent
> states, and entangled states - such as squeezed states, in fact it
> includes all Gaussian states, but many other states as well. They form
> a convex set and thus a completely self contained subset of quantum
> mechanics (e.g. All Hamiltonians would be the same - implying the same
> hydrogen atom spectrum - and one can't evolve out of this set given
> only states within it). Interestingly, since the original EPR argument
> used only the
> (position-momentum) squeezed state and Gaussian type measurements,
> they could also have had an EPR paper and Bohrian answer!

I find this a rather amazing statement; I didn't know that the positivity of the Wigner function (which turns it into a genuine probability density in phase space) was conserved under unitary evolution!

What I know about Wigner functions is about what's written here:
http://en.wikipedia.org/wiki/Wigner_quasi-probability_distribution

But if what you write is true then I don't see how one could ever arrive at a Bell theorem violating state. After all, the x and p can serve as the "hidden variables" in this case (with distribution given by the Wigner function).

I'm even wondering how one could avoid obtaining non-coherent states from unitary evolution if we start out with only coherent states. After all, isn't that exactly what happens in, for instance, a PDC xtal ? A coherent pump beam is directed onto the system, which evolves into a non-coherent state of two entangled beams.

>
> Now let's also imagine the people in this universe do not actually have
> the Wigner formalism - they only have the standard Hilbert space
> formalism. Thus they are writing down states in a Hilbert space,
> describing their measurements in the standard von-Neumann way as
> giving an entanglement between the system and apparatus and so on.
> They see an "intrinsically" probabilistic theory and nonseparability.
> They go through the same metaphysical convulsions we do about the
> collapse of the wavefunction.
>
> If these people adopt an Everettian approach to understanding the
> underlying reality implied by their physical theory, is it really
> justified? If one advocates that Everett follows from just accepting
> the math of QM, then it should be just as applicable to this universe
> - the math is identical. Only the original set of states is different.

I understand your POV, I think. In fact, in standard classical probability theory, there is an equivalence between taking several parallel universes with weights given by the distribution over phase space, and saying that this is overkill and that there is ONE ontological universe, and that the others never existed and were simply part of our ignorance. So it would seem justified, in this view, to eliminate the parallel universes, because they could have been eliminated all along. Nevertheless, they could be considered too. There's nothing WRONG with considering parallel worlds when doing classical probability. Only, there's no compelling reason to do so. But it is not wrong.

However, I would think that Bell's theorem and its violation indicate to us that we WILL NOT FIND such an underlying distribution (if I understand things correctly, a positive Wigner distribution would allow exactly that: have a pre-existing phase space distribution of hidden variables which "tag" the individual parallel worlds with a definite probability) - at least, if we're bound to have only local dynamics on this phase space - Bohm's theory being an example of the possibility of doing so if this condition is relaxed.

As to the factorization of Hilbert space, you're right that it is the choice of factorisation which determines the basis and everything. But I think that there is a very evident factorisation: H_observerbody x H_rest
After all, what we need to explain is how the observer (connected to a body, that is to say, a certain number of physical degrees of freedom) is to see the rest of the universe.

Probably more later, have to go...
 
  • #15
Tez said:
In Liouville mechanics, if I give you the Liouville distribution of one system (the apparatus) and the interaction Hamiltonian between it and some other system (which is described by another Liouville distribution), and you then evolve them both to the coupled (joint) Liouville distribution, you cannot "derive" what observable it corresponds to either. In this purely classical situation, just as in the quantum case, some other physical (generally empiricist) input is required.

I don't understand this: after all, we will get out a distribution of probabilities of the state of the apparatus (integrating over the distribution of probabilities of the other system). That's good enough: you will find a probability distribution for the readings of the display of the instrument (5% chance that it reads "3 V", 20% chance that it reads "8 V" etc...). That is because each individual state of the apparatus (each point of its part of phase space) corresponds to a specific display reading.

But that is NOT the case for a quantum apparatus, which can be in superpositions of "classical display reading states". We can just as well have the state |3V + 8V> = |3V> + |8V> as the state |3V - 8V>. But we don't know what to do with these states. So IN ORDER FOR US to be able to say that we should calculate probabilities in the {|3V>, |8V>} basis, we have already to say that this is the relevant basis, and not the {|3V+8V>, |3V-8V>} basis. We have to pick "pointer states", which correspond to classical states. Once we have defined these pointer states, we can trace back through the unitary dynamics of the apparatus and see with what system states this corresponds, and once we've done that, we know what the "eigenspaces" are on which the hermitian observable is to be constructed.
For instance, when having an instrument with a dial, one has to choose POSITION STATES of the dial as pointer states, and not momentum states of the dial, or others. If we were to wrongly choose the momentum states, we would find that the instrument measures an entirely different quantity than we thought (for instance, if the instrument was thought to measure a particle momentum, then suddenly it will turn out to measure a particle position). If we were to apply the Born rule in this "wrong" pointer basis, we would come to entirely different conclusions about the behaviour of the microsystem.

In the classical view, ALL instrument states are classical states, hence we do not have this difficulty. Of course we cannot really know what the apparatus is supposed to measure until we've said WHAT aspect of the state of the apparatus is the "measured quantity" (in our case, the display - I suppose this is what you are alluding to), but this has no bearing on the calculated probability distribution.
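A toy numerical sketch of this basis dependence (my own illustration, reusing the |3V>, |8V> labels from above): the Born rule applied to the same state in the pointer basis versus the "wrong" rotated basis gives completely different probability assignments.

```python
import math

# the apparatus state (|3V> + |8V>)/sqrt(2), written in the {|3V>, |8V>} basis
state = [1 / math.sqrt(2), 1 / math.sqrt(2)]

def born_probs(state, basis):
    # Born rule with real vectors: P(k) = |<basis_k|state>|^2
    return [sum(b * s for b, s in zip(bk, state)) ** 2 for bk in basis]

pointer = [[1.0, 0.0], [0.0, 1.0]]  # the pointer basis {|3V>, |8V>}
r = 1 / math.sqrt(2)
rotated = [[r, r], [r, -r]]         # the "wrong" basis {|3V+8V>, |3V-8V>}

print(born_probs(state, pointer))  # both readings equally likely (1/2, 1/2)
print(born_probs(state, rotated))  # (1, 0) up to rounding: a different story
```

Nothing in the bare formalism prefers one of these bases; that choice is exactly the extra input being discussed.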
 
  • #16
vanesch said:
I find this a rather amazing statement; I didn't know that the positivity of the Wigner function (which turns it into a genuine probability density in phase space) was conserved under unitary evolution!

What I know about Wigner functions is about what's written here:
http://en.wikipedia.org/wiki/Wigner_quasi-probability_distribution

But if what you write is true then I don't see how one could ever arrive at a Bell theorem violating state. After all, the x and p can serve as the "hidden variables" in this case (with distribution given by the Wigner function).

I'm even wondering how one could avoid obtaining non-coherent states from unitary evolution if we start out with only coherent states. After all, isn't that exactly what happens in, for instance, a PDC xtal ? A coherent pump beam is directed onto the system, which evolves into a non-coherent state of two entangled beams.

A squeezed state (the output of a PDC) is still a gaussian state!

I'm not quite sure where your confusion is. Is it my claim that they could still have entanglement and an EPR paradox even though there is a hidden variable model given by the Wigner function? Check out chapter 21 of Speakable and Unspeakable, where Bell discusses it very briefly.

Of course, even though the EPR state has a positive Wigner distribution, it could be used to demonstrate nonlocality, but only if you go beyond things like position and momentum measurements, which are all Gaussian. Bohr's reply to EPR also used only Gaussian-state arguments and so would remain equally valid.

It is a mathematical fact, but one that I can't think of how to prove to you in a few lines on an internet forum, that if everything starts off gaussian then nothing you can do (under energy conserving Hamiltonians) will take you out of such. I'll hunt around for some notes on this and try and stick them on my webpage...

More later,
Tez
 
  • #17
Tez said:
Of course, even though the EPR state has a positive Wigner distribution, it could be used to demonstrate nonlocality, but only if you go beyond things like position and momentum measurements, which are all Gaussian. Bohr's reply to EPR also used only Gaussian-state arguments and so would remain equally valid.

Ok, I did a small calculation myself.
I started with the entangled state:
[tex]-e^{-({x1}-5)^2 - ({x2}-6)^2} + e^{-({x1}-1)^2 - ({x2}-2)^2}[/tex]

Clearly, these are two entangled gaussian states:
|1>|2>-|5>|6>

where |u> stands for exp(-(x-u)^2)

These are entangled gaussian states (even with average momentum 0!).

When I apply the formula for the Wigner distribution in Mathematica, after some number-crunching with the command:

(* Wigner transform of psi; psistar denotes the complex conjugate of psi *)
wig[x1_, x2_, p1_, p2_] :=
  1/(Pi hbar) Integrate[
    psistar[x1 + y1, x2 + y2] psi[x1 - y1, x2 - y2] Exp[2 I (p1 y1 + p2 y2)],
    {y1, -Infinity, Infinity}, {y2, -Infinity, Infinity}]

I obtain:
[tex]
\frac{e^{\frac{-8i\,{p1} - {p1}^2 - 8i\,{p2} - {p2}^2 - 4\left( 61 - 2\,{x1} + {x1}^2 - 4\,{x2} + {x2}^2 \right)}{2}} \left( e^{4i\left( -28i + {p1} + {p2} \right)} - e^{8\left( 9 + {x1} + {x2} \right)} - e^{8\left( 9 + i\,{p1} + i\,{p2} + {x1} + {x2} \right)} + e^{4\left( i\,{p1} + i\,{p2} + 4\left( {x1} + {x2} \right) \right)} \right)}{2\hbar}
[/tex]

And when you plot that function, say, for p1 = p2 = 0, then you have two POSITIVE bumps, around {1,2} and around {5,6} (as expected), but ALSO A NEGATIVE BUMP around {3,4}.
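For anyone who wants to check without Mathematica, here is a direct numerical version of the same calculation (my own sketch; hbar = 1, overall prefactor chosen as 1/pi): at p1 = p2 = 0 the wavefunction is real, so the phase factor drops out and a plain double quadrature suffices.

```python
import math

def psi(x1, x2):
    # the same (unnormalised) superposition of two Gaussian product states
    return (math.exp(-(x1 - 1) ** 2 - (x2 - 2) ** 2)
            - math.exp(-(x1 - 5) ** 2 - (x2 - 6) ** 2))

def wigner_p0(x1, x2, half_width=6.0, n=120):
    # W(x1, x2, 0, 0) is proportional to
    # Integral psi(x1+y1, x2+y2) psi(x1-y1, x2-y2) dy1 dy2  (midpoint rule);
    # with psi real and p1 = p2 = 0 the exponential phase is just 1
    h = 2 * half_width / n
    total = 0.0
    for i in range(n):
        y1 = -half_width + (i + 0.5) * h
        for j in range(n):
            y2 = -half_width + (j + 0.5) * h
            total += psi(x1 + y1, x2 + y2) * psi(x1 - y1, x2 - y2)
    return total * h * h / math.pi

print(wigner_p0(1, 2))  # positive bump at {1,2}
print(wigner_p0(5, 6))  # positive bump at {5,6}
print(wigner_p0(3, 4))  # negative bump at {3,4}, as in the plot
```

At the midpoint {3,4} the integrand is -(a - b)^2 for two Gaussians a, b, so the negativity is exact, not a numerical artifact.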

So I still don't understand your statement.
 
  • #18
Ok, another, slightly more realist situation:

Take the function of entangled gaussians:
[tex]
-e^{-({x1}-5)^2 + 5i\,{x1} - ({x2}-6)^2 + 3i\,{x2}} + e^{-({x1}-1)^2 + 3i\,{x1} - ({x2}-2)^2 + 5i\,{x2}}
[/tex]

It is the same as before, except now that we have genuine position and momentum entanglement:
|1> has a central momentum of 3 and |2> has a central momentum of 5, while |5> has a central momentum of 5 and |6> has a central momentum of 3.

We can again do the same computation, and now the Wigner function is:
[tex]
\frac{e^{-139 + 3\,{p1} - \frac{{p1}^2}{2} + 3\,{p2} - \frac{{p2}^2}{2} + 4\,{x1} - 2\,{x1}^2 + 8\,{x2} - 2\,{x2}^2} \left( e^{2\left( 56 + {p2} \right)} + e^{2\left( {p1} + 8\left( {x1} + {x2} \right) \right)} - 2\,e^{73 + {p1} + {p2} + 8\,{x1} + 8\,{x2}} \cos\!\left( 2\left( -16 + 2\,{p1} + 2\,{p2} - {x1} + {x2} \right) \right) \right)}{2\hbar}
[/tex]

if I didn't make any mistake. For x1 = x2 = 4 and p1 = p2 = 4, it shows again a negative bump.

 
  • #19
A superposition of gaussian states is not itself a gaussian state.
 
  • #20
Hi Patrick

As slyboy said - a superposition of gaussian states is not a gaussian state (the convexity I referred to above is under mixing, not coherent superposition).

But I realized while re-reading my notes from when I was thinking about this that the terminology I was using ("conservative Hamiltonians") is a restricted class of Hamiltonians, and in my idiocy I wrote "energy conserving Hamiltonians". To get the superposition example you give would require a Hamiltonian outside of this class. Sorry for wasting your time with that. [The stupid terminology comes from the analogy with classical mechanics, where conservative Hamiltonians preserve the area of a Liouville distribution - likewise these Hamiltonians preserve the areas of the Wigner distribution, thus keeping minimal uncertainty (gaussian) states as minimal uncertainty states.]

The conservative Hamiltonians, which I'll simply define as those that preserve the gaussian nature of states, include basically everything representable as a quadratic form in the canonical variables. Examples include the standard harmonic oscillator, the squeezing Hamiltonian you alluded to above (which will create the entangled states from unentangled ones), and, more interestingly for this particular discussion, the "x*p" Hamiltonian. This latter is the sort of Hamiltonian commonly invoked in a von-Neumannesque description of a position measurement: We start with a delocalised particle (e.g. in a momentum eigenstate, which is Gaussian) and a massive "pointer" system in a gaussian state well localized in position, and then couple the two systems under the xp Hamiltonian. Thus "gaussian world" has a "measurement problem". As I mentioned above, it also has all the same states and Hamiltonians as required to perform all elements of the EPR/Bohr argument (which, despite a myriad of papers claiming the contrary, clearly has nothing interesting to say about locality; it is an argument about the completeness of QM).
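The Gaussian-preserving property can be seen at the level of covariance matrices. A small sketch of my own (hbar = 1, squeezing strength illustrative): a quadratic Hamiltonian such as x*p acts on the (x, p) covariance matrix by a symplectic matrix, so a minimum-uncertainty Gaussian stays a minimum-uncertainty Gaussian.

```python
import math

# hbar = 1. A zero-mean Gaussian state is fixed by its covariance matrix
# sigma of (x, p). A quadratic Hamiltonian acts as sigma -> S sigma S^T
# with S symplectic, so Gaussian states stay Gaussian under it.

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

r = 0.7                                        # illustrative squeezing strength
S = [[math.exp(-r), 0.0], [0.0, math.exp(r)]]  # flow generated by H ~ x*p
J = [[0.0, 1.0], [-1.0, 0.0]]                  # the symplectic form

# check that S is symplectic: S J S^T = J (up to float rounding)
SJSt = mat_mul(mat_mul(S, J), transpose(S))
assert all(abs(SJSt[i][j] - J[i][j]) < 1e-12 for i in range(2) for j in range(2))

sigma = [[0.5, 0.0], [0.0, 0.5]]               # vacuum covariance matrix
sigma2 = mat_mul(mat_mul(S, sigma), transpose(S))
det = sigma2[0][0] * sigma2[1][1] - sigma2[0][1] * sigma2[1][0]
print(det)  # stays ~0.25: still a minimum-uncertainty (Gaussian) state
```

The determinant condition det(sigma) = 1/4 is the minimum-uncertainty statement, and symplectic evolution preserves it exactly.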

So the gaussian toy universe has a standard Hilbert space representation which its inhabitants may well be tempted to try and understand using an MWI. However, once someone in this universe found this concrete Wigner distribution "hidden variable model", I'd imagine that those musings would be dropped quite quickly and there'd be a lot of embarrassed physicists looking around wondering why they'd ever entertained such beliefs. (I am not saying this to be insulting or polemic, I genuinely think something similar will happen with full QM.)

Some other comments based on your various postings: As you noted, Bell's theorem tells us nothing about whether or not there exists a classical-probability type of interpretation of quantum mechanics, only that whatever the underlying "ontic states" are (the equivalent of the position/momentum phase space for gaussian world, say) they must be nonlocal. Is this so bad? As I mentioned above, I have never heard an MWI'er claim that they have local ontic states; I'd be interested if this is your claim.

Regarding the "preferred factorizations" of the Hilbert space, I think there are a myriad of interesting problems to be tackled, but this isn't the place to list them I guess. [A new example that just popped to mind and thus may well be trivial: When one moves from non-relativistic QM to QFT, certain factorizations can get "mixed up". An example is spin and momentum, which non-relativistically are described in a tensor product, but which by relativistic observers are described in a direct sum. These two possibilities yield quite different ontologies under MWI, as I understand it.] I do agree that the "observer/rest" split is about as natural as one can get. However my (perhaps limited) understanding of standard MWI is that actual splittings are not limited to observer/everything-else situations, and thus an ambiguous ontology is somewhat inevitable.

Regarding the Liouville distribution stuff: The reason you can claim that in the classical case one can simply look at the marginal of the apparatus system is because you and I know that the relevant phase space variables we want to describe the world in are position and momentum. However I can take any old canonical re-definition of the phase space variables (x->x+p, p->x-p or something), and give you the Liouville distribution in terms of these variables. My claim is simply that some form of empirical input is required in order to get the "correct" interpretation of the measurement. In the quantum case (under the view that QM is incomplete), we simply cannot expect to extract the information you desire. Let me harp back to gaussian world (which, although you may not agree with how I'm using it, I hope you see is a useful pedagogical tool!): your argument applies to it with no modification. There, however, it's clear that what you are expecting of the theory is something one can be tricked into by an assumption of completeness of the particular Hilbert space formalism.

Oh, another thing popped into mind: A different sort of problem with regarding quantum states as ontic entities arises from the fact that in many physical situations I can use completely different quantum states to describe exactly the same physical situation. It can be done in a way that there is no operational way to distinguish which description is "correct". An example is the case of the laser, where some people advocate assigning a certain mixed state while others a pure coherent state, but there is provably no way to distinguish the two. [I have a light reading paper http://www.arxiv.org/abs/quant-ph/0507214 if you want to see how I think these situations should be understood; I am the middle author.] From the point of view that quantum states are an incomplete encapsulation of an observer's information about the world, this presents few problems.
But I suspect an Everettian must insist that there is one "actual fact" about which state is the correct description. Since there is no way to determine whether they are correct, I further suspect there is a dangerous philosophical cliff just waiting for them to fall off :) . It's not a well-thought-through argument; don't respond unless you think it's interesting!

Man, this is one of the longest posts I've ever made on a forum. If one of my students catches it they'll be moaning all day at me! Travis I'm sorry for the derailment - if a moderator wants to split this discussion off somehow that'd be fine.

Finally let me point out that I am agnostic regarding interpretations, though I think the evidence suggests that none of them are completely satisfactory, but that the quantum state is incomplete. This latter assertion is not itself an interpretation, because we have no concrete example of how to quantify this intuition and represent quantum states as states of knowledge over some sort of underlying ontological state space. If I were a betting man, I'd bet it's possible though.

Tez
 
Last edited:
  • #21
Tez said:
So the gaussian toy universe has a standard Hilbert space representation which its inhabitants may well be tempted to try and understand using a MWI. However, once someone in this universe found this concrete Wigner distribution "hidden variable model", I'd imagine that those musings would be dropped quite quickly and there'd be a lot of embarrassed physicists looking around wondering why they'd ever entertained such beliefs. (I am not saying this to be insulting or polemic, I genuinely think something similar will happen with full QM.)

I don't know if the physicists would be "embarrassed". Because, after all, the positivity of the Wigner (or other) distribution in a phase space is only one aspect of the problem ; the other one is the dynamics. A truly probabilistic interpretation would want to see one or other nice dynamical rule, relating only to the POINTS in phase space, and not to the DISTRIBUTION in phase space. If you consider the Wigner distribution to be a probabilistic distribution (which means, in reality, there's only ONE POINT in phase space which describes reality, but we are simply unsure about which one it is, so we take an ensemble of points), then the dynamics should NOT depend on the specific form of the distribution, in the sense that the final distribution should be the convolution of the initial distribution with a kernel function which describes the dynamics of the individual points. If that kernel function is a Dirac delta, then we have a deterministic evolution ; if not, we have a probabilistic evolution (a point in phase space evolves into a distribution).
So IF all these conditions are satisfied, then one COULD consider taking on this new ontology. There's no embarrassment to have, to adapt one's picture of the world to the new knowledge one obtains.
Nevertheless, EVEN in such a classical setting, a MWI view is ALSO a possibility (there's nothing *wrong* with it, although it loses its interest). For instance, there was nothing wrong with people upholding "action at a distance" in Newton's time. It was the right paradigm to view Newtonian gravity in. But with the advent of relativity, that gave problems.
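As an aside, the kernel condition is easy to play with numerically. Here is a toy sketch (my own construction, nothing to do with any specific physical model): a distribution on a one-dimensional grid is evolved by convolution with a kernel that knows nothing about the distribution itself. A delta kernel gives deterministic point dynamics, a spread-out kernel gives probabilistic point dynamics, and in both cases the evolution map is linear in the distribution, as a genuinely ignorance-style interpretation requires.

```python
import numpy as np

def evolve(dist, kernel):
    # One time step: final distribution = convolution of the initial
    # distribution with a kernel describing the dynamics of the points.
    # Crucially, the kernel does not depend on dist itself.
    return np.convolve(dist, kernel, mode="same")

x = np.arange(101)

# Two different states of knowledge about the same underlying point.
sharp = np.zeros(101); sharp[40] = 1.0
fuzzy = np.exp(-0.5 * ((x - 40) / 5.0) ** 2); fuzzy /= fuzzy.sum()

# Deterministic dynamics: a Dirac-like kernel (each point shifts by +2 cells).
dirac = np.zeros(11); dirac[7] = 1.0
# Probabilistic dynamics: each point spreads into a narrow Gaussian.
spread = np.exp(-0.5 * ((np.arange(11) - 5) / 1.5) ** 2); spread /= spread.sum()

# Linearity: evolving a mixture equals mixing the evolved distributions.
mix = 0.5 * sharp + 0.5 * fuzzy
```

If the map from initial to final distribution were anything other than such a state-independent convolution, the "distribution of ignorance about one real point" reading would fail.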

Some other comments based on your various postings: As you noted Bell's theorem tells us nothing about whether or not there exists a classical probability type of interpretation of quantum mechanics, only that whatever the underlying "ontic states" are (the equivalent of the position/momentum phases space for gaussian world say) they must be nonlocal.

The states do not matter, the dynamics does. Bell's theorem tells us that if there is to be a probabilistic interpretation with an underlying deterministic dynamics (a phase space with a distribution, a la Wigner), that the dynamics will involve non-local effects.

Is this so bad? As I mentioned above I have never heard an MWI'er claim that they have local ontic states, I'd be interested if this is your claim.

What's bad is non-local dynamics IMO.

[to be continued]
 
  • #22
Tez said:
Regarding the "preferred factorizations" of the Hilbert space, I think there are a myriad of interesting problems to be tackled, but this isn't the place to list them I guess. [A new example that just popped to mind and thus may well be trivial: When one moves from non-relativistic QM to QFT, certain factorizations can get "mixed up". An example is spin and momentum, which non-relativistically are described in a tensor product, but which by relativistic observers are described in a direct sum. These two possibilities yield quite different ontologies under MWI, as I understand it] I do agree that the "observer/rest" split is about as natural as one can get. However my (perhaps limited) understanding of standard MWI is that actual splittings are not limited to observer/everything else situations, and thus an ambiguous ontology is somewhat inevitable.

I agree with you, and I'm a heretic MWI-er :-) I think that the best view is not an objective number of worlds, or even a very well defined number of worlds, but a pseudo-Schmidt factorisation where a conscious observer has his bodystate in one of several eigenspaces of "clearly distinct conscious experience states" (meaning: my brain is in - one of a set of - states where I clearly saw light A flash, or it is in a state where I clearly saw light B flash: as such I put by hand a coarse-grained preferred basis for my brainstates - but they also are supposed to emerge from decoherence). As such there is some fuzziness in the concept of what exactly is an observation, but this corresponds to the same fuzziness in what exactly is a conscious experience vs the physical state of my brain.
I put in by hand what does not come out obviously; I don't care. For instance, I put in by hand the Born rule, because I'm pretty convinced that it cannot be derived from unitary QM (as such I'm an MWI heretic).
All this doesn't matter. The fuzziness is not in the ontology (that's simply the wavefunction), it resides in how exactly subjective experience is derived from it, and as we all know, there IS some fuzziness in that. But for CLEAR outcomes of experiment (where the details of what is exactly consciously experienced don't matter because the outcome is so clear) the view is rather clear (and corresponds to standard QM predictions). I think that even with this fuzziness, this is _in any case_ better than the crude "projection postulate". As such, I don't have to worry about a spin degree of freedom more or less being part of my body (or the part of my body that determines my subjective experience).

I repeat again that I'm not selling MWI as a kind of religion, or as any absolute truth or whatever, but just as a way of looking at the QM formalism which eliminates some troubles - if you're willing to accept the strangeness of parallel worlds (in other words: of the superposition principle all the way!). And the day that I'll learn of a _better_ way, I'll have no regrets switching to that better way. For the moment, MWI gives me the clearest view on the formalism of QM.

Regarding the Liouville distribution stuff: The reason you can claim that in the classical case one can simply look at the marginal of the apparatus system is because you and I know that the relevant phase space variables we want to describe the world in are position and momentum. However, I can take any old canonical redefinition of the phase space variables (x->x+p, p->x-p, or something) and give you the Liouville distribution in terms of these variables. My claim is simply that some form of empirical input is required in order to get the "correct" interpretation of the measurement.

Yes, but in the classical case, you will not make any _error_ that way. However, I think you are, in classical mechanics, ALSO confronted with the final problem of how to link a classical state of a body with a conscious subjective experience of that body, which is nothing else but the ultimate "observation", and the one you have to get right in order to subjectively accept that the theory agrees with observation. THAT problem is the same one I'm also facing in MWI. Maybe that's what you were referring to then.

In the quantum case (under the view that QM is incomplete), we simply cannot expect to extract the information you desire. Let me hark back to gaussian world (which, although you may not agree with how I'm using it, I hope you see is a useful pedagogical tool!) - your argument applies with no modification to it. There, however, it's clear that what you are expecting of the theory is something one can be tricked into by an assumption of completeness of the particular Hilbert space formalism.

As I said, the day that we have the insight of a more classical-like description (as is the case in your Gaussian world), we can change our ontology ! I will certainly agree with you that a classical ontology "feels better" than this MWI stuff. But, this MWI stuff (to me at least) feels better than claims of inconsistency or of ignorance! I think that with every formalism comes its "natural" ontology, which gives most insight into the theory.

Oh, another thing popped into mind: A different sort of problem with regarding quantum states as ontic entities arises from the fact that in many physical situations I can use completely different quantum states to describe exactly the same physical situation.

(didn't read the paper yet)
If you're alluding to the way you can get identical density matrices from different state mixtures, yes, I'll agree with you that that is disturbing. The way I could try to weasel out is by claiming that every "mixture" is in fact just "one leg" of an entangled pure state (the density matrix being the reduced density matrix).

Man, this is one of the longest posts I've ever made on a forum. If one of my students catches it they'll be moaning all day at me! Travis I'm sorry for the derailment - if a moderator wants to split this discussion off somehow that'd be fine.

Thanks, it is one of the most interesting discussions on MWI I've ever had. Usually I simply have to explain what I mean, and have to hear "naaah! too crazy!"

Again, I'm not religiously attached to MWI. To me it is simply a great way to get a feeling for the formalism of quantum theory, that's all. And I regret it being rejected so quickly.
 
  • #23
I certainly very much appreciate it that you're not vehemently religious about MWI - it doesn't really pay to be religious about anything in the long run! Unless you're taking up a collection, I guess. Because there are many, many many-worlders in my field (for a reason I don't quite understand), I have talked to, or yelled at, a number of the more famous and vocal proponents, and (with the exception of Harvey Brown) I'd say they are an overly zealous bunch. But in other things they're smart as hell, which is why I cannot dismiss it all outright. Since a reasoned conversation such as ours is pretty much impossible with them, however, I've pretty much given up on talking about these things - rather resolving to one day learn more about it all and to write a paper with all the ambiguities I see spelt out. Given the number of half-written papers I have, this is somewhat unlikely to ever actually appear...

The problem we discuss in that paper I mentioned goes beyond "ambiguity of mixtures" I think, though it is certainly related (and is the way we introduce it in the optical context). Basically there are many situations in "practical physics" wherein standard procedure is to describe a system with a pure state, although a mixed state description is equally valid. [Ambiguity of mixtures is generally about assigning a system a state from one of two ensembles whose convex decompositions form the same mixed state.] But why it's interesting (in the context of our discussion) is that the arguments (which revolve around superselection rules in many cases) strike at the very heart of what is a quantum superposition and how we know when we have one of these dastardly creatures...

Ultimately I think it boils down to thinking of quantum states as encapsulating not just an intrinsic set of physical features of a system, but also extrinsic ones. This (for different reasons, obviously) is not so far from a "relative state" picture - at least at a glib "lip-service" level - but it's certainly not a priori obvious to me that these ambiguities don't result in quite differing ontologies in the MW picture.

As I said, I haven't really thought it through carefully and it may well be possible to trivially dismiss this set of concerns. Actually that would be a strength of the approach were it to be the case...
 
  • #24
Tez said:
...it also has all the same states and Hamiltonians as required to perform all elements of the EPR/Bohr argument (which despite a myriad of papers claiming the contrary clearly has nothing interesting to say about locality, it is an argument about the completeness of QM).

It's not either/or. The EPR argument was that locality *requires* incompleteness. Or, equivalently, that completeness entails nonlocality. I mean, it's just a fact that orthodox QM's explanation of the EPR-type correlations involves nonlocality. EPR's point was to point this out, and note that one could perhaps construct a local replacement theory by rejecting the completeness doctrine and adding "hidden variables." (Of course, some of this was unfortunately obscured by Podolsky's write-up.)

But what I really wanted to ask is: where are you going with the whole Wigner distribution thing? I got the impression you think one can use these to construct a local theory, by interpreting the Wigner distributions as classical stat-mech-like probability distributions for x,p. That's certainly not true, right? So then what's the point?



Finally let me point out that I am agnostic regarding interpretations, though I think the evidence suggests that none of them are completely satisfactory, but that the quantum state is incomplete. This latter assertion is not itself an interpretation, because we have no concrete example of how to quantify this intuition and represent quantum states as states of knowledge over some sort of underlying ontological state space. If I were a betting man, I'd bet it's possible though.

What are your objections to Bohmian Mechanics? I mean, what makes that particular option not "completely satisfactory" (so far as it goes)?
 
  • #25
ttn said:
It's not either/or. The EPR argument was that locality *requires* incompleteness. Or, equivalently, that completeness entails nonlocality. I mean, it's just a fact that orthodox QM's explanation of the EPR-type correlations involves nonlocality.

I agree with the first two sentences, but not the third, because I don't really see how to quantify "involves nonlocality" unless we decide on some operational measure of nonlocality (e.g. can it violate a Bell inequality - which in this case it can't...)

EPR's point was to point this out, and note that one could perhaps construct a local replacement theory by rejecting the completeness doctrine and adding "hidden variables." (Of course, some of this was unfortunately obscured by Podolsky's write-up.)

Agreed - I think everyone who wants to write about foundations should be forced by law to read Don Howard's "Einstein and Separability" paper, and then we'd presumably get a lot fewer of the fluffy historical redactions of this paper that poorly reflect Einstein's arguments. I find it amazing how Einstein saw that separability was going to be a major problem just from the appearance of Bose-Einstein statistics, before QM had actually been properly formed!

But what I really wanted to ask is: where are you going with the whole Wigner distribution thing? I got the impression you think one can use these to construct a local theory, by interpreting the Wigner distributions as classical stat-mech-like probability distributions for x,p. That's certainly not true, right? So then what's the point?

Why is it not true? They are positive, normalized probability distributions, and there is no evolution (or measurement) which will make them go negative. The theory is just a classical one of "restricted observers" - the restriction being basically that they are forced into obeying the uncertainty principle, because everything (e.g. their frames of reference and all their measurement apparatuses, and all subsystems) is obeying the principle. Measurements which collapse states (in the Hilbert space representation) are basically just "collapses" of the Wigner distribution due to an updating of information, accompanied by some disturbance (which preserves the uncertainty principle). What is fun to play around with (and why I originally explored this thing a bit) is to look at how the disturbance necessarily occurs.

Actually I find it amusing that Bohr's various arguments for preservation of the uncertainty principle can be read word for word as applying to the gaussian world and not QM...
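To make the positivity claim concrete, here's a quick numerical check (my own throwaway code, with ħ = 1): compute the Wigner function directly from its defining integral for a single Gaussian state, where it stays nonnegative, and for a cat-state superposition of two Gaussians, where it promptly goes negative. That contrast is exactly why gaussian world admits the epistemic reading while full QM doesn't - for pure states, only Gaussians have nonnegative Wigner functions (Hudson's theorem).

```python
import numpy as np

HBAR = 1.0

def wigner(psi, xs, ps, ylim=10.0, ny=2001):
    """W(x,p) = (1/(pi*hbar)) * Integral dy psi*(x+y) psi(x-y) exp(2ipy/hbar)."""
    ys = np.linspace(-ylim, ylim, ny)
    dy = ys[1] - ys[0]
    W = np.empty((len(xs), len(ps)))
    for i, xv in enumerate(xs):
        f = np.conj(psi(xv + ys)) * psi(xv - ys)
        phases = np.exp(2j * np.outer(ps, ys) / HBAR)
        W[i, :] = np.real(phases @ f) * dy / (np.pi * HBAR)
    return W

def gaussian(x, x0=1.0, p0=0.5, s=1.0):
    # A coherent-state-like Gaussian wavepacket of width s.
    return (np.pi * s**2) ** -0.25 * np.exp(-((x - x0) ** 2) / (2 * s**2) + 1j * p0 * x / HBAR)

def cat(x, a=3.0):
    # Superposition of two well-separated Gaussians (overlap is negligible).
    return (gaussian(x, x0=a, p0=0.0) + gaussian(x, x0=-a, p0=0.0)) / np.sqrt(2.0)

xs = np.linspace(-4.0, 4.0, 41)
ps = np.linspace(-4.0, 4.0, 41)
W_gauss = wigner(gaussian, xs, ps)  # everywhere nonnegative
W_cat = wigner(cat, xs, ps)         # interference fringes dip below zero
```

The negative fringes of the cat state sit between the two humps, which is precisely the structure that blocks any "mere ignorance about a real phase-space point" reading of the full theory.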

I will publish a pithy paper on this sort of stuff soon, though it's hard to motivate myself or my co-authors to finish it because we're not sure anyone will read it. Including ourselves. The tack we're taking is a little different - we're saying start with classical mechanics and impose an epistemic restriction (equivalent to the uncertainty principle), and then ask how much this in-between theory (which I call "quassical mechanics") resembles quantum mechanics. This tack is more mathematically interesting because the Liouville distributions do not have to be gaussian - thus quassical mechanics incorporates this gaussian world but goes a little further.

What are your objections to Bohmian Mechanics? I mean, what makes that particular option not "completely satisfactory" (so far as it goes)?

Because I've had a lot less exposure to Bohmians (Valentini is actually the only one I've interacted locally with!) it's harder for me to glibly rattle off the various things that make me unhappy with it. I'll put some thought into it, however, and see if I can come up with something coherent. [It's probably good practice to do this sort of thing on a forum, where I have time to think, before being forced to confront the beasts in the wild ;)] As with my comments on MWI I think you'll find I don't really have any major or crushing objections - I simply try and push these things in a way that it becomes clear to me what the ontology is and to what extent it can be justified in a "formalism independent" manner...
 
  • #26
Tez said:
I agree with the first two sentences, but not the third, because I don't really see how to quantify "involves nonlocality" unless we decide on some operational measure of nonlocality (e.g. can it violate a Bell inequality - which in this case it can't...)

The last statement is just factually wrong: orthodox QM's predictions do violate Bell inequalities. But what you maybe had in mind is correct: this doesn't mean that OQM is a nonlocal theory (since some of the premises from which the inequality is derived don't apply to OQM), and to address that question we need to define "locality". Well, how about the definition of Bell Locality that is presented in the paper referenced in the first post of this thread? Bell Locality is *not* just shorthand for "violates a Bell inequality." Bell Locality is an attempt to give a precise, mathematical definition of what it means to prohibit superluminal causation. Maybe you would be interested to look at that paper... I'd certainly be interested in hearing what, if anything, you think is flawed in that definition of locality.

Of course, there are other definitions out there, e.g., "signal locality." But that's a completely different concept -- it's about whether or not humans can do a certain thing (namely send signals) and not about the fundamental causal character of theories.

Here are some relevant facts: OQM is signal-local, but violates Bell Locality. (That's what I had in mind when I said it was nonlocal.) For what it's worth, Bohmian Mechanics is also signal-local, and it also violates Bell Locality.
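To be concrete about the first fact (this is the standard textbook computation, written out only for definiteness): the singlet-state correlation E(a,b) = -cos(a-b) pushes the CHSH combination to 2*sqrt(2), beyond the bound of 2 obeyed by any Bell Local theory.

```python
import numpy as np

def E(a, b):
    # Quantum-mechanical correlation of spin measurements along
    # angles a and b (in one plane) on the spin singlet state.
    return -np.cos(a - b)

def chsh(a, a2, b, b2):
    # CHSH combination; theories satisfying Bell's factorizability
    # condition obey |S| <= 2.
    return E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

# The standard angles giving maximal quantum violation.
S = chsh(0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4)
```

With those angles |S| = 2*sqrt(2) ≈ 2.83, which is the Tsirelson maximum for quantum mechanics.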



Why is it not true? They are positive, normalized probability distributions, and there is no evolution (or measurement) which will make them go negative. The theory is just a classical one of "restricted observers" - the restriction being basically that they are forced into obeying the uncertainty principle, because everything (e.g. their frames of reference and all their measurement apparatuses, and all subsytems) are obeying the principle. Measurements which collapse states (in the Hilbert space representation) are basically just "collapses" of the Wigner distribution due to an updating of information, accompanied by some disturbance (which preserves the uncertainty principle). What is fun to play around with (and why I originally explored this thing a bit) is to look at how the disturbance necessarily occurs.

Well, we know from Bell's theorem and experiment that no local hidden variable theory of this type is going to work. I mean, you can probably reproduce certain things with such a theory, e.g., the statistics for position measurements on entangled particles. But you know going in that any such local scheme is not going to work for *all* possible experiments/measurements. That's what Bell proved. So why bother trying to cook something up that is known in advance to be doomed?



Because I've had a lot less exposure to Bohmian's (Valentini is actually the only one I've interacted locally with!) its harder for me to glibly rattle off concisely the various things that make me unhappy with it. I'll put some thought into it, however, and see if I can come up with something coherent. [Its probably good practise to do this sort of thing on a forum, where I have time to think, before being forced to confront the beasts in the wild ;)] As with my comments on MWI I think you'll find I don't really have any major or crushing objections - I simply try and push these things in a way that it becomes clear to me what is the ontology and to what extent it can be justified in a "formalism independent" manner...

OK, I hope you don't think I was trying to pick a fight or something. I was just worried (from the above stuff about Wigner distributions) that you thought a local hidden variable theory was still on the table as possible, and hence that Bohm's theory was vetoable because of its nonlocality (something which lots and lots of people erroneously believe). In which case we could probably have a very interesting conversation about locality... one that might even bring this thread back near the originally intended topic! :smile:
 
  • #27
ttn said:
Of course, there are other definitions out there, e.g., "signal locality." But that's a completely different concept -- it's about whether or not humans can do a certain thing (namely send signals) and not about the fundamental causal character of theories.

How about: all relevant equations are to be written in a Lorentz-invariant way ?
 
  • #28
Travis, you need to reread what I said about the gaussian world construction. I am not talking about it as anything more than a toy universe model. When I said "(e.g. can it violate a Bell inequality - which in this case it can't...)" I was referring to the specific EPR construction. You then twist that around and say I'm factually wrong because orthodox QM can violate a Bell inequality. Attack all the strawmen you want; I don't have time for that sort of debating.

My claim, which you can try and refute but I think you will fail, is that all the EPR and Bohr debates operationally involved only Gaussian states, and thus they could have occurred in a universe governed by the physics of this gaussian world. I am talking about the original position and momentum EPR, not some "generalized EPR" which violates a Bell inequality.

I have read La Nouvelle Cuisine several times and I think Bell's notion of locality is quite fine for me.

So yes - I know "no hidden variable theory of this type is going to work", because the one I constructed explicitly had locality (by virtue of using position as one of the canonical variables). But if your claim is that "no hidden variable theory which treats quantum states as no more than classical ignorance about the values of some underlying ontic states is going to work" then you have some mathematical proving to do.

Let me reiterate - the reason I brought this example up was to show that it is not implausible that a Hilbert space kind of structure is compatible with the classical probability structure, and that a MWI seems to me primarily to arise from the way that we "watch" the tensor product and entanglement evolve in the standard measurement scenario.
 
  • #29
Tez said:
... you need to reread what I said about the gaussian world construction. I am not talking about it as anything more than a toy universe model. When I said "(e.g. can it violate a Bell inequality - which in this case it can't...)" I was referring to the specific EPR construction. You then twist that around and say I'm factually wrong because orthodox QM can violate a Bell inequality. Attack all the strawmen you want, I don't have time for that sort of debating.

I'm sorry, I wasn't trying to twist your meaning and construct a straw man argument. I just misunderstood what you were saying there. Yes, perhaps you are right about that toy universe in which states are restricted to Gaussians. I assumed (erroneously, for which I apologize) that you intended this "toy" to be somehow applicable to the real universe. But I guess you were just using it to illustrate something about the original Einstein/Bohr debate.

My claim, which you can try and refute but I think you will fail, is that all the EPR and Bohr debates operationally involved only Gaussian states, and thus they could have occurred in a universe governed by the physics of this gaussian world. I am talking about the original position and momentum EPR, not some "generalized EPR" which violates a Bell inequality.

OK.

I have read La Nouvelle Cuisine several times and I think Bell's notion of locality is quite fine for me.

OK. Do you then accept the two-part argument that no Bell Local theory can reproduce the quantum predictions (for, at least, the generalized EPR scenario involving spin-entanglement)? I suspect not, given what you say below...


So yes - I know "no hidden variable theory of this type is going to work", because the one I constructed explicitly had locality (by virtue of using position as one of the canonical variables). But if your claim is that "no hidden variable theory which treats quantum states as no more than classical ignorance about the values of some underlying ontic states is going to work" then you have some mathematical proving to do.

I don't quite get the qualification of "hidden variable theory". What would be an example of a hidden variable theory that *didn't* "treat quantum states as no more than classical ignorance about the values of some underlying ontic states"?

So maybe I'm just confused about your statement here. But I think I am making at least the claim you're suggesting (that "no hidden variable theory... is going to work", i.e., is going to be able to account for the empirically verified QM predictions for the generalized EPR/Bell spin experiments in a local way). In fact, I'm making an even stronger claim: no theory *of any kind* ("hidden variable" or not) will be able to account for those results locally. But I'm confused why you would say that I "have some mathematical proving to do." What exactly remains to be proved? Bell proved that no hidden variable theory of a certain type could reproduce all those predictions. Do you think that proof is somehow flawed? And then my recent paper proves that hidden variables of just that type are required by Bell Locality (and some of the correlations). Do you think that proof is somehow flawed?

I think I somehow conveyed the wrong tone before, so let me say explicitly: I'm not trying to be rude or confrontational here. I'm just sincerely interested in knowing what you think remains to be proved.

Or maybe we're talking past each other again, and you're still talking exclusively about the toy gaussian world? ...in which case all of my comments just above are completely off the track.
 
  • #30
ttn said:
OK. Do you then accept the two-part argument that no Bell Local theory can reproduce the quantum predictions (for, at least, the generalized EPR scenario involving spin-entanglement)?

Absolutely. It's the reason I am still in physics.


I don't quite get the qualification of "hidden variable theory". What would be an example of a hidden variable theory that *didn't* "treat quantum states as no more than classical ignorance about the values of some underlying ontic states"?

Well, I'm imagining things like Beltrametti and Bugajski's non-outcome-deterministic model. Personally I don't see the point in investigating non-outcome-deterministic models, but people do.

However I have something stronger in mind than what you maybe think I do when I say "treat the quantum states as no more than classical ignorance". For instance, in Bohmian mechanics the wavefunction is not a purely epistemic object. As far as I know, in BM there is no isomorphism induced from the quantum state [itex]|\psi\rangle[/itex] to probability densities [itex]P_\psi(\lambda) d\lambda[/itex] over some hidden variable space [itex]\Lambda[/itex]. Thus the interpretation of the wavefunction in BM is not purely epistemic - some parts of it "touch the world" (through the quantum potential). In the gaussian world the Wigner distributions are isomorphic to the states and yet do not "touch the world", so I would call them "purely epistemic". They can be understood as merely encapsulations of an observer's "state of knowledge" about the world.
Note that this gaussian model is deterministic - the point of the particle in phase space determines what outcome it will give to any particular measurement - and this is a feature I like and generally presume.

So let's call "epistemic models" the deterministic hidden variable theories of the form I just briefly described. The gaussian world is formed in Hilbert space, but can be understood in terms of a local epistemic model. Quantum mechanics cannot be understood in terms of a local epistemic model, but perhaps it can in terms of a nonlocal one (i.e. one in which the ontic states [itex]\lambda\in\Lambda[/itex] are nonlocal).
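In case the "isomorphism" language is too abstract, here's the sort of thing I mean, sketched in code (my own throwaway parametrization, restricted for brevity to uncorrelated Gaussian pure states, ħ = 1): the map between states and phase-space probability densities is invertible in both directions, so the state carries nothing beyond the probability assignment.

```python
import numpy as np
from dataclasses import dataclass

HBAR = 1.0

@dataclass(frozen=True)
class GaussianState:
    """A pure Gaussian state, parametrized by its phase-space means
    and the position width sigma (no x-p correlations, for brevity)."""
    x0: float
    p0: float
    sigma: float

def to_distribution(state):
    """State -> (mean vector, covariance matrix) of its Wigner density.
    For these states the Wigner function is an ordinary Gaussian pdf."""
    mean = np.array([state.x0, state.p0])
    cov = np.diag([state.sigma**2 / 2.0, HBAR**2 / (2.0 * state.sigma**2)])
    return mean, cov

def from_distribution(mean, cov):
    """Inverse map: recover the state from its Wigner density alone."""
    return GaussianState(mean[0], mean[1], np.sqrt(2.0 * cov[0, 0]))
```

The round trip state -> density -> state loses nothing, and every such density saturates the uncertainty bound det(cov) = (ħ/2)², which is the "epistemic restriction" in this toy setting.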

So maybe I'm just confused about your statement here. But I think I am making at least the claim you're suggesting (that "no hidden variable theory... is going to work", i.e., is going to be able to account for the empirically verified QM predictions for the generalized EPR/Bell spin experiments in a local way). In fact, I'm making an even stronger claim: no theory *of any kind* ("hidden variable" or not) will be able to account for those results locally. But I'm confused why you would say that I "have some mathematical proving to do." What exactly remains to be proved? Bell proved that no hidden variable theory of a certain type could reproduce all those predictions. Do you think that proof is somehow flawed? And then my recent paper proves that hidden variables of just that type are required by Bell Locality (and some of the correlations). Do you think that proof is somehow flawed?

I absolutely agree that no local hidden variable theory (epistemic or otherwise) is going to work. I hope that's clear from what I wrote above. I was challenging you to prove that no epistemic model whatsoever is going to work.
 
  • #31
And let me nip something in the bud: The hidden variable models Bell gave in those two papers were not epistemic models - the ontic state space included the full information about the quantum state as well as a hidden parameter. However, in Kochen and Specker's '67 paper they give what I would call an epistemic model (which also happens to be non-contextual) for a spin-1/2 system...
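For concreteness, that spin-1/2 construction can be rendered in a few lines (my own Monte Carlo rendering, not Kochen and Specker's notation): the ontic state is a point lambda on the unit sphere, a qubit state with Bloch vector psi_hat corresponds to the density rho(lambda) = (1/pi) max(psi_hat . lambda, 0), and a measurement along n deterministically returns "up" iff n . lambda >= 0. This reproduces the Born rule P(up) = cos²(theta/2).

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_ks(psi_hat, n_samples):
    """Rejection-sample ontic states lambda from the Kochen-Specker
    spin-1/2 density rho(lambda) = (1/pi) max(psi_hat . lambda, 0)."""
    out = []
    while len(out) < n_samples:
        v = rng.normal(size=(n_samples, 3))
        v /= np.linalg.norm(v, axis=1, keepdims=True)    # uniform on the sphere
        d = v @ psi_hat
        keep = (d > 0) & (rng.random(n_samples) < d)     # accept w.p. psi_hat . lambda
        out.extend(v[keep])
    return np.array(out[:n_samples])

def outcome(lams, n_hat):
    # Deterministic response: spin-up along n iff n . lambda >= 0.
    return lams @ n_hat >= 0

psi_hat = np.array([0.0, 0.0, 1.0])        # Bloch vector of the prepared state
lams = sample_ks(psi_hat, 200_000)

theta = 2 * np.pi / 3                      # measurement axis at 120 degrees
n_hat = np.array([np.sin(theta), 0.0, np.cos(theta)])
p_up = outcome(lams, n_hat).mean()         # should approach cos^2(theta/2) = 0.25
```

Here the quantum state is nothing but a probability density over lambda, which is what makes the model epistemic in the sense defined above.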
 
  • #32
Tez said:
However I have something stronger in mind than what you maybe think I do when I say "treat the quantum states as no more than classical ignorance". For instance, in Bohmian mechanics the wavefunction is not a purely epistemic object. As far as I know, in BM there is no isomorphism induced from the quantum state [itex]|\psi\rangle[/itex] to probability densities [itex]P_\psi(\lambda) d\lambda[/itex] over some hidden variable space [itex]\Lambda[/itex]. Thus the interpretation of the wavefunction in BM is not purely epistemic - some parts of it "touch the world" (through the quantum potential). In the gaussian world the Wigner distributions are isomorphic to the states and yet do not "touch the world", so I would call them "purely epistemic". They can be understood as merely encapsulations of an observer's "state of knowledge" about the world.

Yes, you're right, I didn't appreciate your intended emphasis on *no more than* classical ignorance. And you're of course right about Bohm's theory. In that theory, the wf is not merely a statement about our knowledge. It refers to a real "wave" out there in the world (from which we can infer something about the probability distribution of particles according to the quantum equilibrium hypothesis).


Note that this gaussian model is deterministic - the point of the particle in phase space determines what outcome it will give to any particular measurement - and this is a feature I like and generally presume.

Hmmm. But if you're regarding this distribution as purely epistemic, doesn't that mean it's implicit in the model that there exists a real particle which has some definite (but of course unknown) values for position and momentum? (Otherwise I just don't know what you mean by "purely epistemic" -- that knowledge has to be knowledge about some real state of affairs, or it isn't really knowledge.) But then I wonder: is the model deterministic in the sense that the Wigner distributions at different times are consistent (in the sense of "equivariance" in Bohm's theory)? That is, does the x,p probability distribution at one time flow, via the underlying Schroedinger dynamics, into the same new probability distribution you'd get by evolving it forward using the implied particle-with-definite-position-and-momentum ontology?

Maybe that isn't clear. What I'm getting at is that there seem to be two aspects to the dynamics: the measurement part, and the non-measurement part. You claimed that the model is deterministic in the sense that there is a definite phase space point which determines what the outcome will be for either an x-measurement or a p-measurement. No problem there. The question is: what about the other half of the dynamics, the non-measurement part? If there is an actual phase space point, what controls its evolution from one moment to the next, and is this consistent with the Schroedinger type evolution which lies behind the time evolution of the Wigner distributions? I suspect the answer is that these aren't consistent, which seems like a serious problem. But I'm not really sure.


So let's call "epistemic models" the deterministic hidden variable theories of the form I just briefly described. The gaussian world is formed in Hilbert space, but can be understood in terms of a local epistemic model. Quantum mechanics cannot be understood in terms of a local epistemic model, but perhaps it can in terms of a nonlocal one (i.e. one in which the ontic states [itex]\lambda\in\Lambda[/itex] are nonlocal).

I don't understand this. The model you described above does have some "ontic" commitments, right? It says there's a definite (but unknown) phase space point for the particle. Maybe you want to deny that, and just take the phase space probability distribution (the wigner dist) as elementary. But if a theory is purely epistemic in that sense -- if it makes no ontic claims whatever -- then I literally don't know what there is to talk about in regard to the locality/nonlocality of the model. Indeed, it ceases to be a model or theory in the sense I'm used to -- it ceases to say *anything* about the external world. So then what is left of the question of whether or not what it says about the world is consistent with relativity's causal prohibitions?



I absolutely agree that no local hidden variable theory (epistemic or otherwise) is going to work.

Well we know a non-local hvt can work, because there's an example: Bohm's theory. The question that most interests me is: what do you have to give up to get rid of the nonlocality? Everybody has been saying for decades that you can have a local model so long as it isn't deterministic, or so long as it isn't "realistic", or so long as it doesn't have hidden variables, etc. But as far as I can tell, all these claims are wrong. Nonlocality is not a price paid for introducing determinism/hv's/etc. I think you indicated that you agree with me here. So I'm still confused about what exactly we're arguing about... :uhh:


I hope that's clear from what I wrote above. I was challenging you to prove that no epistemic model whatsoever is going to work.

I guess I'm objecting to the phrase "epistemic model." If what you mean by "epistemic" is that the model makes zero ontic claims -- i.e., doesn't purport to be *about* anything, i.e., doesn't say anything about a really-existing external world -- then (a) I don't know why you're calling it a "model" (since it's not then a model *of* anything) and (b) I don't know what it even means to assess its locality.
 
  • #33
ttn said:
Maybe that isn't clear. What I'm getting at is that there seem to be two aspects to the dynamics: the measurement part, and the non-measurement part. You claimed that the model is deterministic in the sense that there is a definite phase space point which determines what the outcome will be for either an x-measurement or a p-measurement. No problem there. The question is: what about the other half of the dynamics, the non-measurement part? If there is an actual phase space point, what controls its evolution from one moment to the next, and is this consistent with the Schroedinger type evolution which lies behind the time evolution of the Wigner distributions? I suspect the answer is that these aren't consistent, which seems like a serious problem. But I'm not really sure.

No, it is consistent - in fact this is what underpins various results in quantum information to do with being able to classically simulate quantum computers built only out of such gaussian operations. In fact these simulations are nearly always done by following the covariance matrix (Fourier transform of the Wigner distribution). The Hamiltonian evolution induces exactly the same symplectic transform on the canonical variables as it would in the classical case, while on the states it induces standard unitary evolution. The closest paper I have at hand discussing the evolution is quant-ph/0402004, though they may not prove it, just assume it! Ah - I also just noticed that in quant-ph/0204052, in the second paragraph on page 2, they state what I said above without proof. I don't think it'd be too hard to prove, but if you'd like to see it I'll try.
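For what it's worth, the consistency can be checked numerically in a few lines (my own toy sketch, not taken from those papers; the covariance matrix and time are made up). Assuming a unit-frequency harmonic oscillator, whose classical flow is a rotation in phase space, the covariance matrix of the Gaussian Wigner distribution evolves as V → S V Sᵀ, and a cloud of definite phase-space points pushed through the same classical flow reproduces exactly that:

```python
import numpy as np

rng = np.random.default_rng(0)

V0 = np.array([[1.5, 0.3],
               [0.3, 0.8]])           # initial Gaussian Wigner covariance (made up)
t = 0.7
# Unit-frequency harmonic oscillator: the classical flow rotates (x, p)
S = np.array([[np.cos(t),  np.sin(t)],
              [-np.sin(t), np.cos(t)]])

# Epistemic reading: sample definite phase-space points from the distribution
# and evolve each point along the classical (symplectic) flow.
pts = rng.multivariate_normal([0.0, 0.0], V0, size=200_000)
V_points = np.cov((pts @ S.T).T)      # empirical covariance of the evolved cloud

# Wigner-level evolution of the same Gaussian state
V_wigner = S @ V0 @ S.T
print(V_points, V_wigner, sep="\n")   # should agree up to sampling noise
```

The point is that for quadratic Hamiltonians the two halves of the dynamics never come apart: following the distribution and following the putative underlying points give the same answer.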




I don't understand this. The model you described above does have some "ontic" commitments, right? It says there's a definite (but unknown) phase space point for the particle. Maybe you want to deny that, and just take the phase space probability distribution (the wigner dist) as elementary. But if a theory is purely epistemic in that sense -- if it makes no ontic claims whatever -- then I literally don't know what there is to talk about in regard to the locality/nonlocality of the model. Indeed, it ceases to be a model or theory in the sense I'm used to -- it ceases to say *anything* about the external world. So then what is left of the question of whether or not what it says about the world is consistent with relativity's causal prohibitions?

I don't understand what you don't understand! But let me try and help by saying yes, the gaussian model does have ontic commitments which are represented by the points of phase space, which in turn describe the position and momentum (or quadrature value in optics) of the system; no, I don't want to take the phase space distribution as ontic, that's the last thing I want to do!

I'm simply imagining the ontic entities out there are nonlocal - in some way or another they disrespect our precious notions of locality. I see quantum theory as a probabilistic theory about these Unidentified Ontic Objects (UOOs).



Well we know a non-local hvt can work, because there's an example: Bohm's theory. The question that most interests me is: what do you have to give up to get rid of the nonlocality? Everybody has been saying for decades that you can have a local model so long as it isn't deterministic, or so long as it isn't "realistic", or so long as it doesn't have hidden variables, etc. But as far as I can tell, all these claims are wrong. Nonlocality is not a price paid for introducing determinism/hv's/etc. I think you indicated that you agree with me here. So I'm still confused about what exactly we're arguing about... :uhh:
As you said, we have no disagreement on those points.


I guess I'm objecting to the phrase "epistemic model." If what you mean by "epistemic" is that the model makes zero ontic claims -- i.e., doesn't purport to be *about* anything, i.e., doesn't say anything about a really-existing external world -- then (a) I don't know why you're calling it a "model" (since it's not then a model *of* anything) and (b) I don't know what it even means to assess its locality.

No - an epistemic model makes all sorts of ontic claims (once one identifies the ontic state space!). The epistemic part refers only to the fact that the quantum states are interpreted epistemically.

More simply, by epistemic model I mean only this: quantum states are to the (nonlocal) UOOs as the probability distributions in statistical mechanics are to the points in phase space of atoms.

In the big picture I look to what Jaynes did in '57, when he showed how many of the laws of thermodynamics could be understood as not truly fundamental, but rather as following from how any rational being must calculate given coarse-grained information (pressure, temp, etc.) about an underlying reality. I suspect that many features of QM - collapse and linearity being the main two - also follow not from something truly fundamental, but rather from similar such principles.
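Jaynes' procedure can be illustrated in a few lines (a toy sketch of my own; the energy levels and mean energy are arbitrary numbers): among all distributions over some energy levels consistent with a coarse-grained mean energy, the maximum-entropy one comes out in Gibbs form exp(-βE)/Z, i.e., the "law" is an inference, not extra physics:

```python
import numpy as np
from scipy.optimize import minimize

# Toy Jaynes-style inference: among all distributions over four energy levels
# consistent with a given mean energy, find the one of maximum entropy.
E = np.array([0.0, 1.0, 2.0, 3.0])    # made-up energy levels
mean_E = 1.2                           # the coarse-grained datum (made up)

def neg_entropy(p):
    p = np.clip(p, 1e-12, 1.0)         # guard the log at the boundary
    return float(np.sum(p * np.log(p)))

constraints = [
    {"type": "eq", "fun": lambda p: p.sum() - 1.0},
    {"type": "eq", "fun": lambda p: p @ E - mean_E},
]
res = minimize(neg_entropy, np.full(4, 0.25), bounds=[(0, 1)] * 4,
               constraints=constraints)
p = res.x

# Gibbs form means log p_i is linear in E_i: all three slope estimates
# should agree on a single inverse temperature beta.
betas = -np.diff(np.log(p)) / np.diff(E)
print(p, betas)
```

Nothing about the dynamics of the system was used; the exponential form falls out of the constraint plus the entropy maximization alone.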
 
  • #34
ttn said:
Hmmm. But if you're regarding this distribution as purely epistemic, doesn't that mean it's implicit in the model that there exists a real particle which has some definite (but of course unknown) values for position and momentum? (Otherwise I just don't know what you mean by "purely epistemic" -- that knowledge has to be knowledge about some real state of affairs, or it isn't really knowledge.) But then I wonder: is the model deterministic in the sense that the Wigner distributions at different times are consistent (in the sense of "equivariance" in Bohm's theory)? That is, does the x,p probability distribution at one time flow, via the underlying Schroedinger dynamics, into the same new probability distribution you'd get by evolving it forward using the implied particle-with-definite-position-and-momentum ontology?

Yes, that's also the objection I had, even with the limited positive definite Wigner states.

It isn't sufficient to say that we can just have, at each moment in time, a positive-definite probability function over some state space. One also needs to define a dynamics that governs the flow of this probability distribution in such a way that it really is a flow of independent points, i.e. that the final distribution is the convolution of the initial distribution and a "dynamic kernel function", where this dynamic kernel function is independent of the initial distribution, of course.
That kernel function then describes the true dynamics of each individual state (point in phase space), independent of how we (epistemologically) had a distribution of probability over the different points. This is what Bohmian mechanics does, if I'm not mistaken. But this dynamics is then assuredly non-local (a flow in phase space can be local or not, depending on whether we can split the phase space into a direct sum of sub-phase-spaces corresponding to remote systems, and whether the flow also splits correspondingly).
 
  • #35
A note on what I mean by epistemic models, plus the qubit model of Kochen and Specker:

http://www.physicsnerd.com/NotesForPhysicsForums.pdf
 
