Is QM Inherently Non-local in EPR and Bell Discussions?

  • Thread starter: DrChinese
In summary: It's a cool thing to happen, sure, but it doesn't mean that non-locality is what's causing it. Regarding Bell tests: what they exclude is local realism, not locality itself. But this doesn't mean that QM is non-local.
  • #211
DrChinese said:
2. I wonder if you might have anything up your sleeve in the way of an example or reference I could take a peek at? Or perhaps you can elaborate on your position? You had mentioned this in an earlier post too in which you were talking about the undefined results.
It's actually a relatively straightforward idea so someone has surely done this, but I don't have sufficient QM literacy to provide you with a paper. The "Proof that all signal local theories have local interpretations" thread is an attempt to make a legible general version of this, but the following should give you an idea of what I mean:
Let's assume (for the sake of discussion) that there is a local hidden state theory for QM, that we have a repeatable EPR set-up where we measure the polarization of two entangled photons, each along one of three axes with a binary outcome, and that we can only make one meaningful measurement on each photon. And, let's also assume that this particular set-up is signal local.
Then we can restrict the state space to a list of all the possible combinations of measurement results - so the state space has size [itex]2^6=64[/itex].
Now, there are a large number of subsets of the state space, [itex]2^{64}[/itex] of them in fact, but we can only experimentally test the probability for [itex]48[/itex] subsets.
If we simply assign the appropriate (and experimentally verifiable) probabilities to those [itex]48[/itex] subsets, and in addition assign probability [itex]0[/itex] to the empty set and probability [itex]1[/itex] to the entire set, it turns out that signal locality ensures that we end up with a probability measure on the state space. (This last clause is really what that other thread is about.)
Now, this interpretation clearly assumes that there is a local hidden state, and is local, but does not run into Bell's-theorem-type contradictions.
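A minimal Python sketch of this counting (an illustration, not from the thread; it assumes the count of 48 decomposes as 36 joint events plus 12 single-side marginals):

[code]
from itertools import product

# Hidden state: an outcome (+1/-1) pre-assigned to each of the 6 possible
# measurements (3 axes per photon), giving 2^6 = 64 states.
states = list(product([-1, 1], repeat=6))  # (A_X, A_Y, A_Z, B_x, B_y, B_z)
assert len(states) == 64

# Experimentally testable events: joint results "Alice's axis i gave a
# AND Bob's axis j gave b" (3*2*3*2 = 36), plus the single-side marginals
# "axis k gave v" (6*2 = 12): 48 in all, out of 2^64 subsets.
joint = [(i, a, j, b) for i in range(3) for a in (-1, 1)
         for j in range(3) for b in (-1, 1)]
marginals = [(k, v) for k in range(6) for v in (-1, 1)]
print(len(joint) + len(marginals))  # 48
[/code]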
 
  • #212
That is a FAPP argument, which boils down to the statement: "QM is good as long as we ask the correct questions" (correct = good common sense borrowed from everyday-life experience).
This is pretty close to what I've been beginning to believe. (With a different interpretation of "correct".)

These interpretational issues always seem to me to be questions that only a "superobserver" can ask, but not a real person. They seem to be questions that cannot, even in principle, be captured by an experiment.

For example, Penrose's thought experiment about weather: we can sit "externally" and ponder the macroscopic superposition of states, but the fact is that once we sit down and observe it, then no matter what happens, we will only observe one state. So if we try to ask this question "internally" by setting up an experiment to count how many states there are, it must always say "there's only one state".

Of course, if we had two copies of the planet, we could perform a state-counting experiment that could either say "there's one state" or "there's two states".


A more concise way of stating it is:

Suppose there is only world A. Then, I will only observe one world.
Suppose there is only world B. Then, I will only observe one world.
From which it follows:
Suppose there is only the superposition A + B. Then, I will only observe one world.


At this moment in time, you have to wonder why we only perceive one weather state. This is what reduction does for you, or vanesch's consciousness does.
So, I don't have to wonder why -- I'm simply in a superposition of states, each of which perceives only one weather state. As far as I can tell, I don't need to invoke any sort of reduction or consciousness to deduce that the answer to the question "Do you perceive only one weather state?" will be "yes" 100% of the time.


I'll admit it's a quirky way of looking at things, but it seems to me to be the natural thing to do. (But I know I'm weird)

(Incidentally, the distinction between an internal and an external question is used to great practical effect in formal logic... nonstandard analysis is one particular success, and it seems to me to be very appropriate for this particular problem)
 
  • #213
As I said somewhere else, the transition from a unitary view to actual observations always involves a statement (postulate) about perception (=consciousness)
Hurkyl said:
So, I don't have to wonder why -- I'm simply in a superposition of states, each of which perceives only one weather state. As far as I can tell, I don't need to invoke any sort of reduction or consciousness to deduce that the answer to the question "Do you percieve only one weather state?" will be "yes" 100% of the time.

I do that too, but you should realize that this is a postulate.

The next point is: with what probability do you perceive that weather state ? And I bet your answer is "using the Born rule".
But again, that's a postulate.

Next comes: IN WHAT BASIS ? Because applying the Born rule depends on the chosen basis (this is what makes the entire story different from classical probability considerations). I bet your answer will be "in a classical-looking basis". Ok, but again, that's a postulate.

In classical physics, you DON'T have to make such statements. It would not even occur to you: your brain is in a certain state, which contains the information you are "aware of". But in QM, your brain is in a superposition. And it depends on the basis in which you look at this superposition whether you can even begin to say that you are going to look at only "one state", and that you will see it probabilistically, for instance according to the Born rule. So there is no evident 1-1 relation anymore between your brain state and your conscious perception.

I do exactly that too. But you should realize that 1) it is an extra set of postulates and 2) they tell you something about what you (consciously) perceive from the "entire world state".
 
  • #214
Careful said:
Aha, the noble art of stating exactly what others said while twisting the emotional output :smile: Perhaps, I should become a lawyer indeed ... :rofl:

:rofl: Maybe you ARE in fact a lawyer, only, you're not aware of it :rofl:
 
  • #215
NateTG said:
Then we can restrict the state space to a list of all the possible combinations of measurement results - so the state space has size [itex]2^6=64[/itex].
Now, there are a large number of subsets of the state space, [itex]2^{64}[/itex] of them in fact, but we can only experimentally test the probability for [itex]48[/itex] subsets.

I'm trying to understand what you mean here. If you are claiming that, for each experimental setup, QM produces probabilities which satisfy the Kolmogorov axioms (in other words, ARE probabilities) and are signal-local, then that is correct of course: QM produces such probabilities. I guess that's the list of 48 numbers you want to see.

But local *reality* makes an extra assumption, which is that for a given known "hidden state", there is statistical independence of the outcomes on both sides. This is deduced from the "reality assumption" that the left-going particle HAS A STATE which will determine the (probabilities of) outcomes of measurements done on it, and that this state will need to provide for probabilities of *potential* experiments, those experiments not even being decided upon. In my opinion, it is *that* part which fails with LR, not the "local" part.
Then you need to provide for the 64 numbers also satisfying a HYPOTHETICAL factorization condition, and THAT's what cannot be done while respecting the Kolmogorov axioms.
 
  • #216
For someone looking for the math behind this question: in Chapter 2, Peskin and Schroeder demonstrate that non-relativistic QM is a non-local theory (because it's non-relativistic, of course). They then move to a relativistic quantum field theory and show that it is indeed local. Good stuff, and all you need is in Chapter 2.
 
  • #217
drunkenfool said:
For someone looking for the math behind this question: in Chapter 2, Peskin and Schroeder demonstrate that non-relativistic QM is a non-local theory (because it's non-relativistic, of course). They then move to a relativistic quantum field theory and show that it is indeed local. Good stuff, and all you need is in Chapter 2.

:smile: P&S only consider the unitary evolution, of course. Indeed, the unitary dynamics can in a way be made local as they show. But that doesn't address the measurement. They only show that the Green's functions remain within the light cone (equivalently, that space-like separated field operators commute).
 
  • #218
Nate, could you provide a bit more detail, I can't quite follow it (and the more general discussion in the other thread just created more general confusion!).

NateTG said:
Let's assume (for the sake of discussion) that there is a local hidden state theory for QM, that we have a repeatable EPR set-up where we measure the polarization of two entangled photons, each along one of three axes with a binary outcome, and that we can only make one meaningful measurement on each photon. And, let's also assume that this particular set-up is signal local.


ok so I'm imagining Alice is measuring a two-outcome measurement along either the X axis, Y axis or Z axis of the Bloch sphere. Bob does something similar along one of three axes x, y, z. (Note that we cannot actually have x=X, y=Y, z=Z if we are using a singlet state, for example, because it is well known that there exists an LHV model for this specific set of choices!) But, whatever, just some other set of three orthogonal directions.


Then we can restrict the state space to a list of all the possible combinations of measurement results - so the state space has size [itex]2^6=64[/itex].

Originally I assumed you meant the state space for both particles? But then we'd have 9 possible pairs of measurements which can be performed (Xx, Xy, Xz, Yx, ... Zz), and a bunch more possible sets of results. So I think you mean a state space on just one side - Alice's, for example - which has X={-1,1}, Y={-1,1} and Z={-1,1}. But I can't work out where the 2^64 is coming from (unless it's something like: make a vector whose 6 entries correspond to all 6 outcomes, i.e. [X=-1, X=+1, Y=-1, Y=+1, Z=-1, Z=+1], and then look at all possible assignments of 1,0 to each entry - but this doesn't make a lot of sense as the state space, nor as "subsets of the state space", which is what you say next).

I tell you this so you see I'm trying!
 
  • #219
Next comes: IN WHAT BASIS ? Because applying the Born rule depends on the chosen basis (this is what makes the entire story different from classical probability considerations). I bet your answer will be "in a classical-looking basis".
Well, I would say the basis defined by the measurement!

The next point is: with what probability do you perceive that weather state ? And I bet your answer is "using the Born rule".
Ack! I don't. :frown:

I worked through the math, and if I postulate consistency with the frequentist interpretation (i.e. a counting experiment should agree with the probability distribution), and if I assume that I did the work right, then in the simple case of 2 states I would get:

If we're in state [itex]u|0\rangle + v|1\rangle[/itex], then we see state 0 with probability [itex]|u|/(|u|+|v|)[/itex]
 
  • #220
Hurkyl said:
Ack! I don't. :frown:
I worked through the math, and if I postulate consistency with the frequentist interpretation (i.e. a counting experiment should agree with the probability distribution), and if I assume that I did the work right, then in the simple case of 2 states I would get:

?? That doesn't work (this is well known, btw, and is a serious problem - although not even the one I'm addressing).

Imagine that you do a binary experiment with probabilities 0.01 (+) and 0.99 (-) each time. You do that N times. This means that you end up in a sum of the states which correspond to the outcomes: |++--+..>. In other words, all binary sequences of N +/- are present, each just once (the order is the one in the time series of the experiment). Each of these states is a basis state in the product Hilbert space [itex]H_2 \otimes H_2 \otimes \cdots \otimes H_2[/itex] (N spaces).

The state you end up with is a sum over these 2^N basis states, because each "measurement" evolved your [itex]|n-1\rangle[/itex] state into
[itex]\sqrt{0.01}\,|n-1\rangle|+\rangle + \sqrt{0.99}\,|n-1\rangle|-\rangle[/itex]

So we see that all the basis states are present in the final [itex]|\psi\rangle[/itex], and the basis state |+--+...> has complex amplitude [itex](0.01)^{A/2} \times (0.99)^{B/2}[/itex], where A is the number of + and B the number of - in the basis state we're calculating the coefficient of.

So according to the BORN rule, we have a probability to observe this state which is equal to [itex](0.01)^A \times (0.99)^B[/itex], which is of course correct. (A+B = N)

However, if you do a frequentist (world counting) interpretation, then EACH OF THE PRESENT STATES is equally probable, right ?
Now, that means that the particular state we're talking about has, just like any other, a probability of [itex]1/2^N[/itex] of occurring, no matter what A or B are. Clearly that is not the same (and not correct), as it doesn't depend upon (0.01) or (0.99), for instance. Just ANY binary experiment would result in exactly the same frequentist probability for a time series, whatever the relative amplitudes of the two contributions.

Or am I missing what you are getting at ?
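A quick numerical illustration of this gap (a sketch, using the post's 0.01/0.99 numbers):

[code]
# Born weight vs. naive branch counting for an N-trial binary experiment
# with single-trial probabilities 0.01 (+) and 0.99 (-), as in the post.
p, N = 0.01, 20

for A in (0, 5):                     # A = number of '+' results in a sequence
    born = p**A * (1 - p)**(N - A)   # squared amplitude of that sequence
    naive = 1 / 2**N                 # equal weight for each of the 2^N branches
    print(A, born, naive)

# 'born' varies over many orders of magnitude with A; 'naive' is a
# constant ~9.5e-7, blind to p entirely -- the point made above.
[/code]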
 
  • #221
I was looking at it like this:

Suppose I do the following counting experiment:

Let [itex]\psi = u | 0 \rangle + v | 1 \rangle[/itex] be some quantum state.

Now, let's suppose I was able to work with the following state:

[tex]\psi \otimes \psi \otimes \cdots \otimes \psi \otimes | 0 \rangle[/tex]

that is, N copies of [itex]\psi[/itex], and the initial state of my "counter".

Now, I apply the operator:

[tex]T |x\rangle \otimes |y\rangle = |x\rangle \otimes |x + y\rangle[/tex]

N times, each time applied to one of the N input states and to my counter state.

This experiment, IMO, captures the notion of repeating an experiment N times and counting how many times we got the outcome [itex]|1 \rangle[/itex].

I postulate that this should agree with the frequentist interpretation of statistics: probabilities are supposed to be the proportion of times we expect to see a given outcome if we repeat the experiment multiple times.

So, for the state [itex]\psi = u |0\rangle + v |1\rangle[/itex], I need [itex]P(1 | \psi)[/itex] to "agree" with the counting experiment, which I'm interpreting to mean that if we project onto the counter portion of the state, the basis states of greatest amplitude should cluster around [itex]P(1 | \psi)[/itex].

We start with:

[tex]
\sum_{\vec{x}} u^{N - i} v^i |x_1 \rangle \otimes \cdots \otimes |x_N \rangle \otimes | i \rangle
[/tex]

where [itex]\vec{x}[/itex] ranges over all binary N-tuples, and i denotes the number of 1's in [itex]\vec{x}[/itex].

If we project this onto the counter, we get:

[tex]
\sum_{i = 0}^N \binom{N}{i} u^{N - i} v^i | i \rangle
[/tex]

the magnitudes of the amplitudes are (proportionally) binomially distributed with parameter [itex]p = |v| / (|u| + |v|)[/itex].

The largest amplitudes will cluster around pN, so this suggests to me that the "right" way to assign probabilities is that [itex]P(1 | \psi) = |v| / (|u| + |v|)[/itex].
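A numerical check of this clustering claim (a sketch; u = 0.6, v = 0.8 are arbitrary real amplitudes with [itex]|u|^2 + |v|^2 = 1[/itex]):

[code]
import numpy as np
from math import comb

# Magnitudes of the counter amplitudes C(N,i) u^(N-i) v^i, and the i/N
# at which they peak: it sits near |v|/(|u|+|v|), not |v|^2 -- exactly
# the non-Born rule derived above.
u, v, N = 0.6, 0.8, 200
amps = np.array([comb(N, i) * u**(N - i) * v**i for i in range(N + 1)])

print(np.argmax(np.abs(amps)) / N)   # ~0.57
print(abs(v) / (abs(u) + abs(v)))    # 0.5714...
print(abs(v)**2)                     # 0.64, the Born value, for contrast
[/code]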
 
  • #222
Hurkyl said:
I postulate that this should agree with the frequentist interpretation of statistics: probabilities are supposed to be the proportion of times we expect to see a given outcome if we repeat the experiment multiple times.

Ok...

So, for the state [itex]\psi = u |0\rangle + v |1\rangle[/itex], I need [itex]P(1 | \psi)[/itex] to "agree" with the counting experiment, which I'm interpreting to mean that if we project onto the counter portion of the state, the basis states of greatest amplitude should cluster around [itex]P(1 | \psi)[/itex].

and further:

the magnitudes of the amplitudes are (proportionally) binomially distributed with parameter [itex]p = |v| / (|u| + |v|)[/itex]

Isn't that applying the Born rule ? :smile: What links "greatest amplitude" (Hilbert norm) and "probability" except for the Born rule ? Why should it be more probable for you to "experience" this large (in Hilbert norm) component than a "small" (in Hilbert norm) one, except by POSTULATING that it is the Hilbert norm that gives the probability ?

Normally, in MWI, one has the more natural tendency to say that EACH TERM (no matter what Hilbert norm, as long as it is non-zero) is a *separate world*, and that you are "in just one of these worlds" (implicitly giving equal probabilities to each "world"). This comes about because one considers "a copy of Hurkyl" in each of these worlds, experiencing whatever happened there, and "you" are "just one of them". So all "Hurkyl"s are equal for the law - meaning that "you" have probability 1/N of being one of them. Nowhere does the Hilbert norm appear in this scheme of reasoning.

My claim (in the little paper I wrote about this) is that you have, in any case, to postulate EITHER this "world counting" hypothesis OR the Born rule (which you are implicitly using, by looking at the terms with "greatest amplitude").
You've also shown here, quite correctly, that IF YOU USE THE BORN RULE LATER in the process, this "trickles down" and it is equivalent to applying already the Born rule at each individual process.
But you've used the Born rule :-) (by looking at which terms had the largest Hilbert norms).
cheers,
Patrick.
 
  • #223
Isn't that applying the Born rule ? What links "greatest amplitude" (Hilbert norm) and "probability" except for the Born rule ?
I thought the Born rule was a specific map from amplitudes to probability, and not just the postulate that it's an order-preserving map?

But don't fear, I can work with a weaker hypothesis!

So for each N, we have the state:

[tex]
\sum_{i = 0}^N \binom{N}{i} u^{N - i} v^i | i \rangle
[/tex]

Suppose we rewrite [itex]| i \rangle[/itex] as [itex]| i / N \rangle[/itex] so that it's labelled by the proportion it represents, rather than the counter.

Then, we take this state and project it further as follows:

[tex]
| x \rangle \rightarrow
\left\{
\begin{array}{l @{\quad} l}
| A \rangle & x \in (p - \epsilon, p + \epsilon) \\
| B \rangle & \mbox{otherwise}
\end{array}
\right.
[/tex]

Where [itex]\epsilon[/itex] is your favorite, small positive real number.

So, this projects the state down to [itex]s | A \rangle + t |B \rangle[/itex]. As N goes to infinity, the ratio s / t also goes to infinity.

Since [itex]| A \rangle[/itex] denotes a "near the proportion p" result from the modified experiment that ends by applying the above projection after the counting experiment, I can conclude that the result is [itex]| A \rangle[/itex] almost surely. (At least, it is if I assume that "infinitesimal" amplitudes are mapped to "infinitesimal" probabilities -- this is much weaker than assuming the order-preserving map.)
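A numerical version of the s/t claim (a sketch, same u, v convention as before):

[code]
import numpy as np
from math import comb

# Hilbert-norm weight inside vs. outside the (p-eps, p+eps) window for
# the counter amplitudes C(N,i) u^(N-i) v^i; the ratio s/t grows with N.
u, v, eps = 0.6, 0.8, 0.05
p = abs(v) / (abs(u) + abs(v))

for N in (50, 200, 800):
    i = np.arange(N + 1)
    amps = np.array([comb(N, k) * u**(N - k) * v**k for k in range(N + 1)])
    inside = np.abs(i / N - p) < eps
    s = np.linalg.norm(amps[inside])    # weight projected onto |A>
    t = np.linalg.norm(amps[~inside])   # weight projected onto |B>
    print(N, s / t)
[/code]

The ratio grows with N but only diverges in the limit - which is the finite-N objection raised in the next post.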




If you were to say this:

the state [itex]u |0 \rangle + v |1 \rangle[/itex], if I specified more detail, would wind up being a superposition of U states that correspond to [itex]| 0 \rangle[/itex] and V states that correspond to [itex]| 1 \rangle[/itex], with [itex]U / V = |u| / |v|[/itex],

then my derivation yields the equal counting rule for probabilities.

If you don't say this, then it would seem to require a very convoluted method to show that you generally get the right answers when you statistically analyze the results of a repeatable experiment.
 
  • #224
Hi ! You just went through 2 epochs of MWI history :smile:
Hurkyl said:
So, this projects the state down to [itex]s | A \rangle + t |B \rangle[/itex]. As N goes to infinity, the ratio s / t also goes to infinity.
You just reinvented the original argument by Everett and DeWitt :smile:
They argued, indeed, that in the limit of an infinity of measurements, the state which DOESN'T correspond to the right statistics has 0 Hilbert norm (and hence isn't there anymore). So suddenly all these worlds, with all these Hurkyls in them, "disappear in a puff of 0 Hilbert norm".
The objection is of course: what about a finite number of measurements ? The relative number of independent Hurkyls in independent worlds having seen a statistically significant, though finite, measurement IN FLAGRANT CONFLICT with the Hilbert norm rises with N. It is only when you take the limit that "poof", they go away into 0.
(At least, it is if I assume that "infinitesimal" amplitudes are mapped to "infinitesimal" probabilities -- this is much weaker than assuming the order-preserving map)
Uh, oh, that's the Born rule. Remember, in MWI, each present "observer state" is to be an independent observer, who lives his life. You're one of them. It is the "being one of them" that generates the probabilistic aspect.
Nevertheless, you *still* have to make an assumption, no matter how weak, OUTSIDE of the strict frame of unitary QM, and it is an assumption about perception. Here, you make the assumption that observers in *small* worlds, well, aren't observers. Don't count. But how small is small ?
If you were to say this:
the state [itex]u |0 \rangle + v |1 \rangle[/itex], if I specified more detail, would wind up being a superposition of U states that correspond to [itex]| 0 \rangle[/itex] and V states that correspond to [itex]| 1 \rangle[/itex], with [itex]U / V = |u| / |v|[/itex],
then my derivation yields the equal counting rule for probabilities.
That's another popular argument. It is, in fact, what Deutsch sneaks into his recent argument for "deriving the Born rule" from decision-theoretic considerations. But again, that's of course an extra hypothesis. And there's a difficulty with it, because this comes down to redefining the Hilbert space (you introduce new degrees of freedom). Ok, but once we have these new degrees of freedom (with their Hamiltonian dynamics?), what stops me from having superpositions in THAT new space, where you cannot play that trick anymore ? Are you going to introduce AGAIN new degrees of freedom ?
Boy, at the rate at which you reinvent MWI arguments (you just covered about 50 years in, what, 30 minutes?), you'll soon find all FUTURE arguments too :-)

If you don't say this, then it would seem to require a very convoluted method to show that you generally get the right answers when you statistically analyze the results of a repeatable experiment.
It is the holy grail of hard-core MWIers. My viewpoint is that in ANY CASE you will need to introduce an extra hypothesis, outside of unitary QM, related to exactly WHAT makes you observe an "observer state" - in other words, linking what you consciously perceive to your body state.
 
  • #225
Tez said:
But I can't work out where the 2^64 is coming from (unless it's something like: make a vector whose 6 entries correspond to all 6 outcomes, i.e. [X=-1, X=+1, Y=-1, Y=+1, Z=-1, Z=+1], and then look at all possible assignments of 1,0 to each entry - but this doesn't make a lot of sense as the state space, nor as "subsets of the state space", which is what you say next).
I tell you this so you see I'm trying!
By 'state space' I mean the space of all possible states - in this case, the 64 possible state vectors. I chose that because it's a way of describing the state that clearly specifies it as far as all of the measurements are concerned. Perhaps calling it the 'potential state space' would be clearer.
Now, when we run an experiment, we can't measure all 6 of the values in a particular vector, but instead we can only measure 2. As a consequence, we can only measure the probability of subsets like: all vectors where X=+1 and x=-1.
From here, the next step is to (try to) construct a minimal measure on this potential state space with the property that every subset of the potential state space whose probability is experimentally testable has measure equal to that probability, and that the entire space has measure 1. If this probability measure exists, then for every run of the experiment we can imagine some lambda from this 'potential state space' being assigned.

Now, due to a brain fart, I thought that the subsets for which probability was testable did, indeed, form an algebra, so it would be possible to simply assign measures to those sets, and be done with it, but that is not the case.
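NateTG's existence question can be posed concretely as a linear-programming feasibility check (a sketch; qm_prob is a hypothetical placeholder standing in for the experimentally testable probabilities):

[code]
import numpy as np
from itertools import product
from scipy.optimize import linprog

# Does any probability measure on the 64 hidden states reproduce a given
# set of testable pairwise probabilities? Posed as an LP feasibility check.
states = list(product([0, 1], repeat=6))   # (A0, A1, A2, B0, B1, B2)

def qm_prob(i, a, j, b):
    # placeholder: independent fair coins; swap in the real predictions
    return 0.25

A_eq, b_eq = [], []
for i in range(3):
    for j in range(3):
        for a in (0, 1):
            for b in (0, 1):
                A_eq.append([1.0 if (s[i] == a and s[3 + j] == b) else 0.0
                             for s in states])
                b_eq.append(qm_prob(i, a, j, b))
A_eq.append([1.0] * 64)                    # measure must sum to 1
b_eq.append(1.0)

res = linprog(c=np.zeros(64), A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 1)] * 64, method="highs")
print("consistent measure exists" if res.success else "no such measure")
[/code]

With singlet-state probabilities at Bell-inequality-violating angle choices, the LP should come back infeasible - Bell's theorem in computational form.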
 
  • #226
vanesch said:
Well, the problem is that if you take quantum theory seriously, that's exactly what happens: your detector IS in two "mutually exclusive states" at the same time. That's what unitary evolution dictates, and it is the very founding principle of quantum mechanics.
This is called the superposition principle, and it is exactly the same principle that says that an electron in a hydrogen atom is both above and below the nucleus, and to the left and to the right of it, which are also "classically mutually exclusive states". This is exactly what the wavefunction is supposed to mean: the electron is in the state ABOVE the nucleus, is ALSO to the left of it, is ALSO to the right of it, and is ALSO below it, with the amplitudes given by the value of the wavefunction.
A quantum particle that impinges on a screen with several holes goes through the first hole, and ALSO goes through the second hole, and ALSO goes through the third hole.
And if you take this principle seriously all the way (that's what MWI does) then your particle detector SAW the particle, and DIDN'T see the particle. So on the display of the detector it is written "CLICK" AND it is written also "NO CLICK". And if you look at it, your eyes will BOTH see "click" and "no click". And your brain will BOTH register the information of the fact that your eyes saw "click" and that your eyes DIDN'T see click.
Only... you are only consciously aware of ONE of these possibilities.
Interference and the production of wave packets require the principle of linear superposition. Quantum theory is concerned with interference at the sub-microscopic level -- the level of interaction of the quantum disturbances themselves (including measuring-device quanta). There is some relation to the physical reality of this level in QM's wave equation and wave functions wrt phases, phase relations, and amplitudes. It seems pretty certain that the details aren't in one-to-one correspondence with the physical reality of the sub-microscopic phenomena. Anyway, in order to say anything unambiguous about the quantum realm it's necessary to have these phenomena interact with macroscopic instruments.

The recorded (at a certain time) position of a particle at some location, or that a cat is alive (or dead) is unambiguous (and necessarily thermodynamically irreversible for the consistency of quantum theory). Afaik, quantum theory doesn't say that a detecting screen will detect an individual quantum in two different locations, or that a cat will be found to be both alive and dead. Measurement results are well defined values. Of course, in any set of many measurements of an identically prepared system, a detecting screen will have detected in many different locations, and the cat(s) will sometimes be alive and sometimes dead after a certain delta t from the opening of the radioactive material's enclosure.

vanesch said:
*IF* quantum theory as we know it applies to all the particles and interactions in this scheme (the atoms of the detector, of your eyes, of your brain etc...) then there is no escaping this conclusion. This is due to the fact that *ALL* interactions we know (electroweak, strong, except for gravity), are, as far as we know in current quantum theory, described by a UNITARY EVOLUTION OPERATOR.
So what are the ways out of this riddle ?
1) this is indeed what happens, and for some strange (?) reason, we are only aware of one of the states. This is the picture I'm advocating - unless we've good indications of the other possibilities.
2) this unitary evolution is a very good approximation which is in fact, slightly non-linear. this can be a minor modification to QM, or this can be just an indication that QM is a good effective theory for something totally different.
3) we've not yet included gravity. Maybe gravity will NOT be described by a unitary evolution operator.
4) there's maybe another interaction that spoils the strictly unitary evolution
5) somehow the act of observation (what's that ?) is a physical process that acts upon the wavefunction (that's the von Neumann view: but WHAT PHYSICS is this act of observation then ?) and reduces the state of whatever you're "observing".
I prefer number 5. The physics of the measurement process depends in part on the hardware that's doing the measuring, doesn't it? The wave equation for a free particle is different than for one that is interacting with some measuring device.
In the S-cat scenario, the measuring device includes whatever an emitted quantum disturbance interacts with that eventually amplifies the quantum disturbance and frees the poisonous gas, the poisonous gas itself, and the cat. The cat is the "pointer" or "clicker" of the device.

There is a problem in that quantum measurement processes are essentially uncontrollable and unpredictable. In the process of measuring the quantum disturbance, definite phase relations are destroyed, and the wavelike object that has been evolving unitarily is transformed into a particle-like object which eventually manifests macroscopically as a well defined value.

The problem doesn't really have to do with why we don't see the S-cat alive and dead, or a quantum particle here and there as a singular outcome of an individual measurement. It has to do with the fact that we can't see what's happening at the sub-microscopic level of the quantum disturbance itself.
 
  • #227
Sherlock said:
Isn't Bell's general formulation for local realistic theories an exact definition?
[tex]P(a,b) = \int d\lambda\, \rho(\lambda)\, A(a,\lambda)\, B(b,\lambda)[/tex]
DrChinese said:
That is the separability requirement, also often referred to as "Bell Locality". It is also sometimes called "factorizability" which may or may not be the same thing, depending on your exact definition. Separability is sometimes defined as the following, where A and B are the two systems:

1) Each [system] possesses its own, distinct physical state.
2) The joint state of the two systems is wholly determined by these separate states.

But that does not include the "realistic" requirement, which I call "Bell Reality". It is the requirement that there are values for observables which could have been measured instead. "It follows that c is another unit vector..." from Bell, just after his (14). If you don't insert this assumption into the mix, there is no Bell Theorem.
That (realism) assumption is embodied in Bell's general LHV formulation (via the inclusion of lambda), isn't it?

Bell's locality requirement is based on the assumption that the statistics of two spacelike separated sets of detection events must be independent. But that assumption is wrong, because the statistics produced by two opposite-moving disturbances emitted by the same atom during the same transitional process and analyzed by a common measurement operator are going to be related.

Local realism seems to be disallowed for quantum theories, but locality as far as Nature is concerned isn't ruled out.
 
  • #228
Bell, as the originator of these ideas, didn't disambiguate them. But since they turned out to be so very important a number of sharp thinkers have pondered them deeply and come up with the formulation Dr. Chinese sets forth.

It doesn't seem to me to be constructive to go back now and reassert Bell's original formulation as if it were some tablet of the Law handed down from on high. Ideas develop, even the ideas of great men.
 
  • #229
Sherlock said:
Bell's locality requirement is based on the assumption that the statistics of two spacelike separated sets of detection events must be independent. But that assumption is wrong, because the statistics produced by two opposite-moving disturbances emitted by the same atom during the same transitional process and analyzed by a common measurement operator are going to be related.

This is something that you can see for yourself is quite different from the "Bell Reality" requirement. If you begin with Bell Locality (separability) as an assumption alone (and there is no unit vector c), you never get to Bell's Theorem as a conclusion. In fact, nothing at all strange happens except that you come to the conclusion that QM violates this (this is the point which ttn has made). You will NOT come to the conclusion that local realistic theories must respect Bell's Inequality. That is because the Inequality absolutely depends on the existence of the Bell Reality assumption.

What is not clear to me - and I know what Bell says - is whether or not the Bell Locality requirement is also necessary to arrive at Bell's Inequality. I think that it might be more accurate to say that parameter independence (PI) is a requirement but not outcome independence (OI) - where PI+OI=Bell Locality. Sure, it is in the proof and conventional wisdom is that it is a requirement. (And everyone knows how I feel about convention and QM. :bugeye: ) But here is a case where I personally feel that convention *may* be wrong. Suppose you deny separability - i.e. assume that there IS in fact a link between the outcomes at Alice and Bob (OI is false). Guess what, you can still end up with Bell's Inequality assuming PI alone! That shouldn't be possible if Bell Locality were necessary to the mix. By the way, PI is the requirement mentioned specifically in EPR - not OI.

If you are interested, I can explain the proof of this in more detail. But I wouldn't bet my (:rofl: non-)reputation on it.
 
  • #230
DrChinese said:
What is not clear to me - and I know what Bell says - is whether or not the Bell Locality requirement is also necessary to arrive at Bell's Inequality. I think that it might be more accurate to say that parameter independence (PI) is a requirement but not outcome independence (OI) - where PI+OI=Bell Locality. Sure, it is in the proof and conventional wisdom is that it is a requirement. (And everyone knows how I feel about convention and QM. :bugeye: ) But here is a case where I personally feel that convention *may* be wrong. Suppose you deny separability - i.e. assume that there IS in fact a link between the outcomes at Alice and Bob (OI is false). Guess what, you can still end up with Bell's Inequality assuming PI alone! That shouldn't be possible if Bell Locality were necessary to the mix. By the way, PI is the requirement mentioned specifically in EPR - not OI.
If you deny separability, then you're treating it like QM does, aren't you?
I'm not sure what you're getting at. What do you mean by "parameter independence"?
 
  • #231
Sherlock said:
If you deny separability, then you're treating it like QM does, aren't you?

I'm not sure what you're getting at. What do you mean by "parameter independence"?

At some point (perhaps Jarrett?), it was noticed that "Bell Locality" (separability or factorizability) could be split into 2 different elements which have now come to be called Parameter Independence (PI) and Outcome Independence (OI). This is why I say BL=PI+OI.

Parameter Independence means that Alice's outcome is not affected by Bob's polarizer setting (i.e. how Bob chooses to measure his particle, which is his measurement parameter).

Outcome Independence means that Alice's outcome is not affected by Bob's outcome.
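In the standard notation (not written out in the thread: [itex]\lambda[/itex] the hidden state, a, b the settings, A, B the outcomes), the decomposition reads:

[tex]
\begin{array}{l}
\mbox{Bell Locality:} \quad P(A, B \mid a, b, \lambda) = P(A \mid a, \lambda)\, P(B \mid b, \lambda) \\
\mbox{PI:} \quad P(A \mid a, b, \lambda) = P(A \mid a, \lambda) \\
\mbox{OI:} \quad P(A \mid a, b, B, \lambda) = P(A \mid a, b, \lambda)
\end{array}
[/tex]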

It is known that Alice's local likelihood of a particular outcome does not change based on Bob's parameter or his outcome. However, knowledge of both Bob's parameter and Bob's outcome would in fact give you a more complete specification of Alice's system. So that is why ttn (and many others) says oQM is not Bell local.

What I am trying to push - I think - is that if you assume parameter independence (and ignore outcome independence) and Bell Reality (let c be another unit vector...) then that is sufficient to lead to Bell's Inequality. Bell's inequality is violated in experiments, therefore either parameter independence or Bell Reality fails. oQM is a parameter independent theory (i.e. it is local in this specific limited respect), but does deny Bell Reality. Ergo it is realism, not locality, that needs to be sacrificed.

Keep in mind, in oQM you do not get a more complete specification of the system if you only specify Alice and/or Bob's parameters - you still get the same superposition until there is a measurement. So why do we want to even think about parameter independence as it relates to locality? In my mind, it is because you need parameter independence to match up to signal locality and therefore keep the concepts of relativity intact. But that is just one view.
 
  • #232
Is space-time inherently classical ?

Could this be the same question from another point of view?
Could it help to take this other pov?
 
  • #233
DrChinese said:
At some point (perhaps Jarrett?), it was noticed that "Bell Locality" (separability or factorizability) could be split into 2 different elements which have now come to be called Parameter Independence (PI) and Outcome Independence (OI). This is why I say BL=PI+OI.
Parameter Independence means that Alice's outcome is not affected by Bob's polarizer setting (i.e. how Bob chooses to measure his particle, which is his measurement parameter).
Outcome Independence means that Alice's outcome is not affected by Bob's outcome.
It is known that Alice's local likelihood of a particular outcome does not change based on Bob's parameter or his outcome. However, knowledge of both Bob's parameter and Bob's outcome would in fact give you a more complete specification of Alice's system. So that is why ttn (and many others) says oQM is not Bell local.
What I am trying to push - I think - is that if you assume parameter independence (and ignore outcome independence) and Bell Reality (let c be another unit vector...) then that is sufficient to lead to Bell's Inequality. Bell's inequality is violated in experiments, therefore either parameter independence or Bell Reality fails. oQM is a parameter independent theory (i.e. it is local in this specific limited respect), but does deny Bell Reality. Ergo it is realism, not locality, that needs to be sacrificed.
Keep in mind, in oQM you do not get a more complete specification of the system if you only specify Alice and/or Bob's parameters - you still get the same superposition until there is a measurement. So why do we want to even think about parameter independence as it relates to locality? In my mind, it is because you need parameter independence to match up to signal locality and therefore keep the concepts of relativity intact. But that is just one view.
Thanks for your efforts, DrChinese. I understand now what's meant by PI. This was Bell's "vital assumption". Since the quantum correlations in Bell tests are aggregates of individual joint measurements, each of which is initiated by a detection at either A or B, it would seem that PI is equivalent to OI.

I agree with your conclusion that realism (but not necessarily locality) needs to be sacrificed. The reason locality isn't necessarily disallowed is that the detection schemes needed to produce correlations that violate a Bell inequality require that observations at A and B depend on each other. That is, while the settings at A and B are varied randomly, the pairings aren't random. The observations at A and B aren't independent, so the statistics at A and B aren't independent -- and locality doesn't require that they be independent. So separable vs. non-separable isn't the same distinction as local vs. non-local.
 
