Does MWI Resolve Locality Problems with Entanglement?

In summary, according to the Everett interpretation, the splitting of worlds is completely local: it does not require the intervention of any faster-than-light influence. Within any single world, however, the correlations still violate Bell's inequality, which is what creates the appearance of non-locality.
  • #1
peter0302
I believe I read that the Everett Many Worlds Interpretation resolves the apparent locality problems with entanglement (i.e. the necessity of a faster-than-light influence to explain the correlations between the behavior of entangled particles). If so I'm not sure how.

MWI says that the universe splits every time two quanta interact, right? So for the entangled photons A and B, each time the two photons are sent through polarizers the universe splits four ways (AB, ab, aB, Ab). The universe splits four ways each time the experiment is run (n times), for a total of 4^n universes after the entire experiment is over.

Assuming the Bell inequality is still violated, then depending on the polarizer settings (say 22.5 and 45 degrees), we know that the number of occurrences of AB and ab will be grossly disproportionate to Ab and aB. Meaning, somehow, in the act of splitting the universes, bias was given toward both particles either passing or failing. Does this not imply the same locality issues as any single-world interpretation would?
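The disproportion being described can be made concrete. Below is a minimal Python sketch (my own illustration, not from the thread), assuming an ideal photon pair in the polarization-entangled state (|HH⟩ + |VV⟩)/√2; for that state the joint probabilities are ½cos²(a−b) for matching outcomes and ½sin²(a−b) for mismatched ones:

```python
import math

def joint_probs(a_deg, b_deg):
    # Joint outcome probabilities for an ideal pair in (|HH> + |VV>)/sqrt(2),
    # with polarizers at angles a_deg and b_deg. Uppercase = photon passes,
    # lowercase = photon is blocked (the AB/ab/Ab/aB notation above).
    d = math.radians(a_deg - b_deg)
    c2, s2 = math.cos(d) ** 2, math.sin(d) ** 2
    return {"AB": 0.5 * c2, "ab": 0.5 * c2, "Ab": 0.5 * s2, "aB": 0.5 * s2}

p = joint_probs(22.5, 45.0)
for outcome, prob in sorted(p.items()):
    print(outcome, round(prob, 4))
# AB and ab come out near 0.427 each, Ab and aB near 0.073 each:
# "both pass / both blocked" outnumbers the mixed outcomes almost 6 to 1.
```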
 
  • #2
peter0302 said:
I believe I read that the Everett Many Worlds Interpretation resolves the apparent locality problems with entanglement (i.e. the necessity of a faster-than-light influence to explain the correlations between the behavior of entangled particles). If so I'm not sure how.
The “how” is that MWI uses its own version of “local”: the local connections between worlds created by the splitting process produce, in any single world (or at least in our single world), results that appear non-local by Einstein's local and realistic definition of the term.

Does it resolve “locality” problems?
Only from within an MWI view of what “local” is.
Until some MWI-devised experiment is performed that conclusively demonstrates at least one of those other worlds actually exists, IMO the answer is no.
 
  • #3
What I still don't understand is whether the theory explains how the "splitting routine" is biased toward creating the appearance of our beloved Bell inequality violations without "knowing" the angles of the two polarizers. And how can any mechanism "know" the angles in delayed-choice experiments, where the choice is made while the photon is mid-flight?
 
  • #4
peter0302 said:
MWI says that the universe splits every time two quanta interact, right? So for the entangled photons A and B, each time the two photons are sent through polarizers the universe splits four ways (AB, ab, aB, Ab). The universe splits four ways each time the experiment is run (n times), for a total of 4^n universes after the entire experiment is over.
From what I've read, MWI advocates don't view the entire universe as splitting; the splitting is completely local. So, if the first experimenter over here splits into A and a, and the second experimenter over there splits into B and b, the universe doesn't have to map copies in one location to copies in another until there's been time for a signal to pass between them. The Everett Interpretation FAQ says:
Q12 Is many-worlds a local theory?

The simplest way to see that the many-worlds metatheory is a local theory is to note that it requires that the wavefunction obey some relativistic wave equation, the exact form of which is currently unknown, but which is presumed to be locally Lorentz invariant at all times and everywhere. This is equivalent to imposing the requirement that locality is enforced at all times and everywhere. Ergo many-worlds is a local theory.

Another way of seeing this is to examine how macrostates evolve. Macrostate descriptions of objects evolve in a local fashion. Worlds split as the macrostate description divides inside the light cone of the triggering event. Thus the splitting is a local process, transmitted causally at light or sub-light speeds. (See "Does the EPR experiment prohibit locality?" and "When do worlds split?")
 
  • #5
JesseM said:
From what I've read, MWI advocates don't view the entire universe as splitting; the splitting is completely local. So, if the first experimenter over here splits into A and a, and the second experimenter over there splits into B and b, the universe doesn't have to map copies in one location to copies in another until there's been time for a signal to pass between them. The Everett Interpretation FAQ says:

I've heard there are different varieties of MWI, and that although some explain the splitting itself as local, most plainly assume "the universe" splits at that moment.

If splitting were local, then it gets interesting, I'd think: if at one location there is a split into 1000 universes, and at another into 2000, then when the local "splitting waves" (or whatever they would be) meet each other, do they suddenly multiply into 2 million universes?

If one believes in MWI, and one considers life worthwhile, is there a moral obligation to conduct as many quantum experiments as possible, in order to share the joy?

Regarding the non-locality of entanglement, the above-quoted "Everett Interpretation FAQ" is from 1995.

However, many decisive experiments regarding non-locality were performed in 1998 and later, especially the GHZ experiments, which take non-locality out of the realm of statistical correlations into that of definite predictions.

[Edit added:] Nowadays it's a bit odd to just postulate that there should be a local explanation without being able to specify one that is at least plausible.

(Also I've seen MWI listed as non-local in some overviews.)
 
  • #6
colorSpace said:
I've heard there are different varieties of MWI, and that although some explain the splitting itself as local, most plainly assume "the universe" splits at that moment.
Where did you hear this? Every source I've read which addresses this question and is written by actual scientists says that the splitting is local; I think the notion of the entire universe splitting is just a popular misconception.
colorSpace said:
Regarding the non-locality of Entanglement, the above quoted 'Everett Interpretation FAQ" is from 1995.

However many decisive experiments regarding non-locality were performed 1998 and later, especially the GHZ experiments, which take non-locality out of the statistical correlation realm into that of definite predictions.
But all claims of non-locality are based on correlations between measurements at one location and events at another--the only difference with the GHZ experiments is that you have a 100% chance of seeing certain correlations. I don't see why the MWI would have any greater trouble with perfect correlations; it could just say something like "every copy of experimenter A that sees outcome 1 is guaranteed to be matched with a copy of experimenter B that sees outcome 0, and vice versa."
 
  • #7
JesseM said:
Where did you hear this? Every source I've read which addresses this question and is written by actual scientists says that the splitting is local, I think the notion of the entire universe splitting is just a popular misconception.

Aside from being perhaps a "popular misconception", I would say that it is commonly expressed in such a way that it is said that "the universe splits", rather than "the universe in the space around the event splits"; and "there will be multiple universes", rather than "there will be multiple expanding bubbles which ultimately become universes". Otherwise I have no reason to disagree here.

JesseM said:
But all claims of non-locality are based on correlations between measurements at one location and events at another--the only difference with the GHZ experiments is that you have a 100% chance of seeing certain correlations. I don't see why the MWI would have any greater trouble with perfect correlations, it could just say something like "every copy of experimenter A that sees outcome 1 is guaranteed to be matched with a copy of experimenter B that sees outcome 0, and vice versa."

A 100% chance of seeing a correlation is called a definite prediction. It means that it doesn't depend on chance, which makes it a much more "realistic" thing, for example by the EPR definition of what an element of reality is, AFAIK. It doesn't necessarily make things more difficult for MWI, but it does make the phenomenon less vague, and makes it (even) clearer that there is "really" something happening, non-locally.

It seems obvious to me that MWI would have to postulate that such an experimenter "A" will be matched by a corresponding experimenter "B".

However, it seems equally obvious that if "A" and "B" are spacelike separated, then this would imply that the splitting process is non-local, contrary to the postulate that splitting is assumed (or defined) to be local.

And this is where the experiments from 1998 and later come in, as they ensure the spacelike separation of "A" and "B".

[Edit added:] On re-reading this I find it again necessary to point out that the result at "B" is affected by the measurement angles at "A", not just correlated with the result at "A". That is, there is a spacelike-separated effect of actions at "A" on results at "B". Only talking about correlations too easily creates the impression that the results could simply be in a corresponding sequence, independent of any actions at the other location.
 
  • #8
If you don't have collapse, then the following is true in LQFT:


(1) Space-time has a causal structure.

This is the familiar structure given to us by Special Relativity.


(2) We can restrict states.

If R is a region of space-time, then we can consider the restriction of the state of the universe to R. If S is a subregion of R, then we can further restrict the state to S, etc.

(i.e. if we know everything about the universe, we know everything about Alice's lab. And if we know that, then we can describe any experiment that takes place in part of her lab)


(3) The state of the universe is non-local.

If the region R consists of two regions S and T, then knowledge of the state restricted to S and the state restricted to T is insufficient to recover the state in R.

(i.e. if you know everything about Alice's lab and you know everything about Bob's lab, that isn't sufficient to fully describe any experiment that involves both labs)


(4) The evolution of the state is local.

If a region R is causally determined by some other region S, and we know the state of the universe restricted to S, that is enough to determine the state restricted to R.

(i.e. if we know everything about an instantaneous region of space two light-years in diameter about the Earth, we can then fully describe any experiment that happens on Earth for the next year)
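Point (3) can be illustrated with a two-qubit example (a numpy sketch of my own, not part of the post): two distinct global states whose restrictions to each lab are identical, so knowing both restrictions is not enough to recover the global state.

```python
import numpy as np

def restrict_to_A(rho):
    # Restrict a two-qubit density matrix to the first qubit
    # (partial trace over the second qubit).
    return np.einsum('ijkj->ik', rho.reshape(2, 2, 2, 2))

# Two distinct global states...
phi_plus  = np.array([1, 0, 0, 1]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
phi_minus = np.array([1, 0, 0, -1]) / np.sqrt(2)  # (|00> - |11>)/sqrt(2)
rho_plus  = np.outer(phi_plus, phi_plus)
rho_minus = np.outer(phi_minus, phi_minus)

# ...whose restrictions to either lab are the same maximally mixed state:
print(restrict_to_A(rho_plus))            # [[0.5 0. ] [0.  0.5]]
print(restrict_to_A(rho_minus))           # identical restriction
print(np.allclose(rho_plus, rho_minus))   # False: globally they differ
```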
 
  • #9
colorSpace said:
It seems obvious to me that MWI would have to postulate that such an experimenter "A" will be matched by a corresponding experimenter "B".

However it seems equally obvious, that if "A" and "B" are spacelike separate, that then this would imply the splitting process to be non-local, contrary to the postulate that splitting is assumed (or defined) to be local.
Why would this imply non-local splitting? If you have observers A and B, and when A makes a measurement he splits into a version who sees result + and a version who sees result -, and B likewise splits into a version who sees result + and a version who sees result -, then the universe doesn't have to map versions of A to versions of B until after there's been time for a signal to pass between them, and once that happens it can match them in such a way that there's a 100% chance that any version of A who saw + will be paired up with a version of B who saw - and vice versa (I think the GHZ experiment actually involves three experimenters A B and C, but obviously it's the same principle).
colorSpace said:
[Edit added:]On re-reading this I find it again necessary to point out that the result at "B" is impacted by the measurement angles at "A", not just in correlation with the result of "A". That is, there is a spacelike separate effect of action at "A" on results at "B". Only talking about correlations too easily creates the impression that the results could simply be in a corresponding sequence, independent of any actions at the other location.
What do you mean "not just in correlation with the result at 'A'?" You mean even before knowing what outcome was observed at A, you think you'd observe different results at B depending on the choice of measurement angles at A? This is definitely incorrect as it would imply the possibility of FTL communication.
 
  • #10
JesseM said:
What do you mean "not just in correlation with the result at 'A'?" You mean even before knowing what outcome was observed at A, you think you'd observe different results at B depending on the choice of measurement angles at A? This is definitely incorrect as it would imply the possibility of FTL communication.

I'll respond to the second paragraph first since it seems to me this needs to be clarified first. Your second question doesn't quite make sense to me, since in order to perform the experiment, one doesn't need to know the outcome of A, that is only necessary for the evaluation afterwards, when analyzing what happened in the experiment.

The answer to the first question is that the results of B do depend on the measurement angles at A, in their relation to the angles at B. That is, how A and B correlate depends on the relative angles at A and B. In other words, results at B correlate not just with the results at A, but with the combination of the results at A and the relative measurement angles at A and B.

JesseM said:
Why would this imply non-local splitting? If you have observers A and B, and when A makes a measurement he splits into a version who sees result + and a version who sees result -, and B likewise splits into a version who sees result + and a version who sees result -, then the universe doesn't have to map versions of A to versions of B until after there's been time for a signal to pass between them, and once that happens it can match them in such a way that there's a 100% chance that any version of A who saw + will be paired up with a version of B who saw - and vice versa (I think the GHZ experiment actually involves three experimenters A B and C, but obviously it's the same principle).

This is, I think, the first time I hear about the concept of "pairing up", so I am not sure my response will be on the point. However my first response would be this:

If there is a third observer exactly in the middle "M", between A and B, and if A and B immediately send signals about the measurement result, then the universe (unless it has non-local intelligence) has no means of instantly pairing up the correct sub-universes, as it depends on the measurement angles how they will need to be paired up (especially in the GHZ case of more than two entangled particles). That is, there would need to be an intelligent process, yet no time for such a process. I hope this response reflects an adequate interpretation of this concept of "pairing up", of which I don't know more than you have written.
 
  • #11
JesseM said:
What do you mean "not just in correlation with the result at 'A'?" You mean even before knowing what outcome was observed at A, you think you'd observe different results at B depending on the choice of measurement angles at A? This is definitely incorrect as it would imply the possibility of FTL communication.
colorSpace said:
I'll respond to the second paragraph first since it seems to me this needs to be clarified first. Your second question doesn't quite make sense to me, since in order to perform the experiment, one doesn't need to know the outcome of A, that is only necessary for the evaluation afterwards, when analyzing what happened in the experiment.
My second question didn't say anything about knowing the actual results at A, I was asking whether you thought the results at B would differ statistically depending on the choice of measurement angle at A. In other words, do you think the probability of getting a particular outcome at B will change depending on what angle is chosen at A? Again, this would imply the possibility of FTL communication. The entanglement only reveals itself in looking at the correlations between results at A and results at B, but when viewed in isolation, the probability of getting a given result at A does not in any way depend on what choice of measurement was made at B.
colorSpace said:
The answer to the first question is that the results of B do depend on the measurement angles at A, in their relation to the angles at B. That is, how A and B correlate depends on the relative angles at A and B. In other words, results at B correlate not just with the results at A, but with the combination of the results at A and the relative measurement angles at A and B.
Of course, I wasn't saying otherwise. But this is still just a matter of correlations that can only be seen once a classical signal has informed someone about both the choice of measurement and the result at both A and B.
colorSpace said:
This is, I think, the first time I hear about the concept of "pairing up", so I am not sure my response will be on the point. However my first response would be this:

If there is a third observer exactly in the middle "M", between A and B, and if A and B immediately send signals about the measurement result, then the universe (unless it has non-local intelligence) has no means of instantly pairing up the correct sub-universes, as it depends on the measurement angles how they will need to be paired up (especially in the GHZ case of more than two entangled particles). That is, there would need to be an intelligent process, yet no time for such a process. I hope this response reflects an adequate interpretation of this concept of "pairing up", of which I don't know more than you have written.
I think you're confusing yourself by all this talk of "sub-universes", all we really need to think about is which copy of a system over here is matched with which signal/causal effect from copies of a system over there.

Let's look at the simple case of two-particle entanglement. Suppose Bob and Alice are each receiving one member of an entangled pair, each has three measurement settings A, B, and C, and the particles are entangled in such a way that they are always guaranteed to get opposite results if they pick the same measurement setting--if Bob picks setting A and gets result +1, then if Alice also picks setting A she's guaranteed to get result -1. As I explained in post #5 of this thread, in this situation local realism predicts that when they pick different settings, they should get opposite results on at least 1/3 of all trials; but with the right choice of measurement angles it is possible to ensure that they actually get opposite results less frequently, say on only 1/4 of all trials, which violates the Bell inequality.
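The 1/3 bound quoted here can be checked by brute force. A short Python sketch (my own, assuming deterministic local hidden variables: each pair carries fixed ±1 answers for the three settings, with Bob's answers the opposite of Alice's so that equal settings always anti-correlate) enumerates every possible assignment:

```python
from itertools import product

fractions = []
for alice in product([+1, -1], repeat=3):   # Alice's fixed answers to A, B, C
    bob = tuple(-x for x in alice)          # Bob's answers: always opposite
    diff = [(i, j) for i in range(3) for j in range(3) if i != j]
    opposite = sum(alice[i] == -bob[j] for i, j in diff)
    fractions.append(opposite / len(diff))

print(min(fractions))   # 0.333...: no assignment gets below 1/3, so an
                        # observed 1/4 rate rules out this kind of model
```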

But now look at this in a situation where we allow multiple copies of each experimenter. For concreteness, let's say we have A.I. experimenters doing a simulated version of this experiment on computers at different locations, and we want to reproduce this apparent violation of Bell's theorem in a purely classical way, just by running multiple copies of each A.I. experimenter on each computer. Now suppose on a given trial the Bob-A.I. picks a particular setting, say A, and the computer has to decide how to split Bob into copies who observe different results before it gets a message from the other computer about what setting the Alice-A.I. chose. All it needs to do is split Bob into 8 copies with the following results:

1. Bob measures A, gets +1
2. Bob measures A, gets +1
3. Bob measures A, gets +1
4. Bob measures A, gets +1
5. Bob measures A, gets -1
6. Bob measures A, gets -1
7. Bob measures A, gets -1
8. Bob measures A, gets -1

Now sometime later, a group of signals from all the Alice-copies comes from the other computer, and the computer simulating Bob has to decide which Alice-copy-signal is received by each Bob-copy. Suppose it turns out that Alice had also chosen setting A, and the computer simulating her had split her into 8 copies in the same way:

1. Alice measures A, gets +1
2. Alice measures A, gets +1
3. Alice measures A, gets +1
4. Alice measures A, gets +1
5. Alice measures A, gets -1
6. Alice measures A, gets -1
7. Alice measures A, gets -1
8. Alice measures A, gets -1

In this case the computer simulating Bob can match up signals like this:

Bob 1 gets signal from Alice 5 (Bob +1, Alice -1)
Bob 2 gets signal from Alice 6 (Bob +1, Alice -1)
Bob 3 gets signal from Alice 7 (Bob +1, Alice -1)
Bob 4 gets signal from Alice 8 (Bob +1, Alice -1)
Bob 5 gets signal from Alice 1 (Bob -1, Alice +1)
Bob 6 gets signal from Alice 2 (Bob -1, Alice +1)
Bob 7 gets signal from Alice 3 (Bob -1, Alice +1)
Bob 8 gets signal from Alice 4 (Bob -1, Alice +1)

This will guarantee that each Bob finds that Alice got the opposite result from his own.

On the other hand, suppose it turns out that Alice had chosen setting C, and her computer had split her up like this:

1. Alice measures C, gets +1
2. Alice measures C, gets +1
3. Alice measures C, gets +1
4. Alice measures C, gets +1
5. Alice measures C, gets -1
6. Alice measures C, gets -1
7. Alice measures C, gets -1
8. Alice measures C, gets -1

In this case, the computer simulating Bob could match the signals like so:

Bob 1 gets signal from Alice 1 (Bob +1, Alice +1)
Bob 2 gets signal from Alice 2 (Bob +1, Alice +1)
Bob 3 gets signal from Alice 3 (Bob +1, Alice +1)
Bob 4 gets signal from Alice 5 (Bob +1, Alice -1)
Bob 5 gets signal from Alice 4 (Bob -1, Alice +1)
Bob 6 gets signal from Alice 6 (Bob -1, Alice -1)
Bob 7 gets signal from Alice 7 (Bob -1, Alice -1)
Bob 8 gets signal from Alice 8 (Bob -1, Alice -1)

In this case 6/8 of the Bob-copies find that Alice got the same result as their own, while only 2/8 = 1/4 find that Alice got the opposite result. If Bob and Alice don't realize they are living in computer simulations and have been split into multiple copies, they will think that their results violate the Bell inequality.

You could certainly have a third computer midway between the ones simulating Alice and Bob, simulating a third observer, "Marvin". Then if Alice and Bob each send their results to Marvin, and there are 8 copies of Marvin as well, you could match them up like so:

Marvin 1 gets signals from Bob 1 and Alice 1 (Bob +1, Alice +1)
Marvin 2 gets signals from Bob 2 and Alice 2 (Bob +1, Alice +1)
Marvin 3 gets signals from Bob 3 and Alice 3 (Bob +1, Alice +1)
Marvin 4 gets signals from Bob 4 and Alice 5 (Bob +1, Alice -1)
Marvin 5 gets signals from Bob 5 and Alice 4 (Bob -1, Alice +1)
Marvin 6 gets signals from Bob 6 and Alice 6 (Bob -1, Alice -1)
Marvin 7 gets signals from Bob 7 and Alice 7 (Bob -1, Alice -1)
Marvin 8 gets signals from Bob 8 and Alice 8 (Bob -1, Alice -1)

But there's no need for the computers simulating Alice and Bob to know which copy of Alice is paired up with which copy of Bob at the same instant--for example, Bob's computer doesn't have to decide this until a simulated message from Alice has had time to arrive there, or a simulated message from Marvin sent after he had received a message from Alice (if both messages were sent as quickly as possible they would reach Bob at the same moment). So, we can still simulate all the aspects of this situation that are predicted by QM perfectly well using classical computers that create multiple copies of each observer, with the actual signals between computers not able to travel any faster than the simulated messages between observers in the simulated universe.
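The deferred matching described in this post can be written out as a short Python sketch (the 8-copy lists and the two pairings are taken from the post; the function names are my own). It checks that a pairing decided only once the signals meet reproduces both the perfect anti-correlation for equal settings and the Bell-violating 1/4 rate for unequal ones:

```python
def split(setting):
    # Locally split one experimenter into 8 copies: the first 4 see +1,
    # the last 4 see -1 (the copy lists from the post).
    return [(setting, +1 if i < 4 else -1) for i in range(8)]

def match(bob_copies, alice_copies):
    # Deferred matching, performed only once Alice's signals have had time
    # to reach Bob. Same setting: every Bob pairs with an opposite-result
    # Alice. Different settings: the post's pairing, where only 2 of 8
    # Bob-copies see the opposite result.
    if bob_copies[0][0] == alice_copies[0][0]:
        order = [4, 5, 6, 7, 0, 1, 2, 3]   # Bob 1 -> Alice 5, ... (0-indexed)
    else:
        order = [0, 1, 2, 4, 3, 5, 6, 7]   # Bob 4 -> Alice 5, Bob 5 -> Alice 4
    return [(bob[1], alice_copies[j][1]) for bob, j in zip(bob_copies, order)]

def frac_opposite(pairs):
    return sum(b == -a for b, a in pairs) / len(pairs)

print(frac_opposite(match(split('A'), split('A'))))  # 1.0
print(frac_opposite(match(split('A'), split('C'))))  # 0.25
```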
 
  • #12
JesseM said:
My second question didn't say anything about knowing the actual results at A, I was asking whether you thought the results at B would differ statistically depending on the choice of measurement angle at A.

No, looking at a specific result locally and in isolation, the probability is always 50% (in the case of spins) for one or the other direction, relative to the local angle.

JesseM said:
I think you're confusing yourself by all this talk of "sub-universes", all we really need to think about is which copy of a system over here is matched with which signal/causal effect from copies of a system over there.

On reading your response, I have to say I find everything that I said is confirmed, and no sign of confusion on my behalf.

First, I think that moving the whole scenario onto a computer simulation doesn't help your case, since on a computer space is simulated, and so there is no 'locality' in the first place. Or maybe I missed the point there. A computer (even a classical one, I guess) could simulate anything it wants to simulate, and we are still left with the task of figuring out what is going on "inside the simulation". So I think we should put that aside as distracting.

JesseM said:
Bob 1 gets signal from Alice 5 (Bob +1, Alice -1)
Bob 2 gets signal from Alice 6 (Bob +1, Alice -1)
Bob 3 gets signal from Alice 7 (Bob +1, Alice -1)
Bob 4 gets signal from Alice 8 (Bob +1, Alice -1)
Bob 5 gets signal from Alice 1 (Bob -1, Alice +1)
Bob 6 gets signal from Alice 2 (Bob -1, Alice +1)
Bob 7 gets signal from Alice 3 (Bob -1, Alice +1)
Bob 8 gets signal from Alice 4 (Bob -1, Alice +1)
JesseM said:
Bob 1 gets signal from Alice 1 (Bob +1, Alice +1)
Bob 2 gets signal from Alice 2 (Bob +1, Alice +1)
Bob 3 gets signal from Alice 3 (Bob +1, Alice +1)
Bob 4 gets signal from Alice 5 (Bob +1, Alice -1)
Bob 5 gets signal from Alice 4 (Bob -1, Alice +1)
Bob 6 gets signal from Alice 6 (Bob -1, Alice -1)
Bob 7 gets signal from Alice 7 (Bob -1, Alice -1)
Bob 8 gets signal from Alice 8 (Bob -1, Alice -1)

As these two sets of assignments show, the sub-universes are matched up differently depending on the measurement angle. But the first signal sent from A to B doesn't even need to include information about the angle; it could be just the measurement results. Still, this would require the sub-universes to be matched up already.

Your description skips the part where the "computer" needs to match up the sub-universes, and how it does that. It does indeed seem to require logic, which in turn requires time, which in turn isn't available (for example when the signals are sent immediately and meet in the middle, and are recorded there). Also, in reality there is no computer available to perform this "matching up"; it would have to be a physical process that can take place with any kind of signal that could be recorded in the middle, so I'd say with two photons meeting in the middle. How are two photons going to figure that out?


JesseM said:
But there's no need for the computers simulating Alice and Bob to know which copy of Alice is paired up with which copy of Bob at the same instant--for example, Bob's computer doesn't have to decide this until a simulated message from Alice has had time to arrive there, or a simulated message from Marvin sent after he had received a message from Alice (if both messages were sent as quickly as possible they would reach Bob at the same moment).

My point is that when the signals arrive in the middle, an observer there can immediately write down the results from both sides, which means the sub-universes must be matched up at this instant, there is no time interval available for some non-existing computer to figure out how to match up the universes. And again, the information about the measurement angles doesn't even have to be part of the signal.

Even if you were to construct some kind of undercover avalanche of "universe-internal" information that gets sent along "automatically", you will still run into an unsolvable problem (I think) when one extends the experiment to a large triangle with A, B and C at the corners and three additional observers at the middle points (AB, BC, AC). Then the three sub-universes must be matched up when the signals meet at these middle points AB, BC and AC. But if there is no FTL at all (or rather: no non-locality), then at that point in time, at each middle point, there can be information about only two of the three angles, and I'd guess this information wouldn't be sufficient. An additional limiting factor is that the matching up now has to be consistent: the meeting points AB and AC must make the same choice about which A sub-universe to pick, otherwise there will be a contradiction (once BC has already been established, so to speak).

As presented so far, I don't think this is a viable scenario, for multiple reasons.
 
  • #13
colorSpace said:
First, I think that moving the whole scenario onto a computer simulation doesn't help your case, since on a computer, space is simulated and so there is no 'locality' in the first place. Or maybe I missed the point there.
You missed the point that the two computers simulating the two experimenters are at different physical locations, "with the actual signals between computers not able to travel any faster than the simulated messages between observers in the simulated universe". To make this more clear, we could imagine the simulation is something like a large cellular automaton, with each "cell" of simulated space being simulated by a physically distinct computing element, and the distance between any pair of computing elements in the real world being just as great as the distance between cells in the simulated universe.
colorSpace said:
As these two sets of assignments show, the sub-universes are matched up differently depending on the measurement angle. But the first signal sent from A to B doesn't even need to include information about the angle, it could be just the measurement results.
OK, just suppose the simulation is passing along all physical information about the event of Alice's measurement(s) as part of its physics simulation, so when deciding the mapping it need not be limited by the specifics of what Alice chooses to mention or not mention. After all, even in a simulation of a classical universe which obeyed locality we would expect all information about a given region's past light cone to influence the simulation of what was happening in that region.
colorSpace said:
Your description skips the part where the "computer" needs to match up the sub-universes, and how it does that.
I don't know what a "sub-universe" is. All the computer needs to do is match up copies of a system over here with copies of a system over there.
colorSpace said:
It does seem to indeed require logic, which in turn requires time, which in turn isn't available (for example when the signals are sent immediately and meet in the middle, and are recorded there).
OK, but that's not an issue related to locality, it's just a limitation of computer simulations. In reality, when a signal from one event reaches me the laws of physics can "compute" the effects of the signal on me instantly, whereas a computer simulation of me and the signal might require some extra time to compute these effects. Since all we're interested in here is the question of whether any violations of locality are needed to explain quantum effects, in the thought-experiment we are free to brush aside this practical concern and imagine we have ideal computers that can instantly compute the effects of one event on another distant system at the precise moment that other system enters the event's future light cone (but no sooner).
colorSpace said:
Also, in reality there is no computer available to perform this "matching up", it would have to be a physical process that can take place with any kind of signal that could be recorded in the middle, so I'd say with two photons meeting in the middle. How are two photons going to figure that out?
You are taking this much too literally; the computer is just meant to represent whatever local laws of physics can potentially do. We don't bother asking how the laws of physics "figure out" how to guide each particle according to whatever complicated equations we use to represent these laws; this is more like a philosophical question than a scientific one. All that we're interested in here is the issue of whether local laws can reproduce the results of QM; instead of a computer I could just as easily talk about a genie in each region of space who magically creates multiple copies of any system in that region and decides what signals/causal effects passed along by other genies get mapped to which copy. How the genie does this is irrelevant to the thought-experiment; all that matters is that the genie can only do copying/mapping on the basis of information that is part of his past light cone.
colorSpace said:
My point is that when the signals arrive in the middle, an observer there can immediately write down the results from both sides, which means the sub-universes must be matched up at this instant
Again, what are "sub-universes"? There is no need for a computer or genie responsible for copies of Bob over here to figure out which ones get mapped to which copies of Alice over there until the event of Alice's measurement enters Bob's past light cone (and at this moment the event of various Marvin copies receiving signals from both Bob and Alice is also entering Bob's light cone, so each copy of Bob can then be matched with a copy of Alice and a copy of Marvin in such a way that everything stays consistent).
colorSpace said:
Even if you were to construct some kind of undercover avalanche of "universe-internal" information that gets sent along "automatically", then you will still run into an unsolvable problem (I think) when one extends the experiment to a large triangle with A, B and C at the corners and three additional observers at each middle point (AB, BC, AC). Then the three sub-universes must be matched up when the signals meet at these middle points AB, BC and AC. But if there is no FTL at all (or rather: no non-locality), then at this point in time, at each middle point, there can be only information about two of the three angles, and I'd guess this information wouldn't be sufficient.
I still think you have confused the issue with all your talk of "sub-universe"; there is no need for the computers/genies in any region--whether the region of A, B, C, or AB, AC, and BC--to decide which copies in that region are mapped to which copies in another region until the event that differentiated the copies has had time to enter the first region's past light cone. So at a certain time each copy of AB will be mapped to some copy of A and some copy of B; each copy of AC will be mapped to some copy of A and some copy of C; and each copy of BC will be matched to some copy of B and some copy of C; but until there has been time for signals from all three measurements to reach some common point in space, there is no need for any mappings that include A, B and C together in order to accurately simulate what observers at each location in space experience.
 
  • #14
JesseM said:
Since all we're interested in here is the question of whether any violations of locality are needed to explain quantum effects, in the thought-experiment we are free to brush aside this practical concern and imagine we have ideal computers that can instantly compute the effects of one event on another distant system at the precise moment that other system enters the event's future light cone (but no sooner).

I think your thought-experiment in itself "brushes aside" too many things that are not possible in a local classical universe. For example this ideal computer that can instantly compute all those things just doesn't exist. Real computations in a classical universe are limited by the speed of light, among other things. It seems you respond to most of my points only with hand-waving.

JesseM said:
I still think you have confused the issue with all your talk of "sub-universe"; there is no need for the computers/genies in any region--whether the region of A, B, C, or AB, AC, and BC--to decide which copies in that region are mapped to which copies in another region until the event that differentiated the copies has had time to enter the first region's past light cone. So at a certain time each copy of AB will be mapped to some copy of A and some copy of B; each copy of AC will be mapped to some copy of A and some copy of C; and each copy of BC will be matched to some copy of B and some copy of C; but until there has been time for signals from all three measurements to reach some common point in space, there is no need for any mappings that include A, B and C together in order to accurately simulate what observers at each location in space experience.

"Sub-universe" is simply my name for what you call "copy of a system". Apparently you are confused about what I'm saying, since when you write "until there has been time for signals from all three measurements to reach some common point in space", then that is exactly what I mean by signals meeting in the middle points between A, B and C. I called these middle points AB, AC, and BC. At these points, the mapping needs to take place. With that explanation, I will simply copy in this scenario again, for you consideration:

Even if you were to construct some kind of undercover avalanche of "universe-internal" information that gets sent along "automatically", then you will still run into an unsolvable problem (I think) when one extends the experiment to a large triangle with A, B and C at the corners and three additional observers at each middle point (AB, BC, AC). Then the three sub-universes must be matched up when the signals meet at these middle points AB, BC and AC. But if there is no FTL at all (or rather: no non-locality), then at this point in time, at each middle point, there can be only information about two of the three angles, and I'd guess this information wouldn't be sufficient. An additional limiting factor here is, that now the matching up has to be consistent, that is, the meeting points AB and AC must make the same choice about which A sub-universe to pick, otherwise there will be a contradiction (when BC has already been established, so to speak).
 
  • #15
colorSpace said:
I think your thought-experiment in itself "brushes aside" too many things that are not possible in a local classical universe. For example this ideal computer that can instantly compute all those things just doesn't exist. Real computations in a classical universe are limited by the speed of light, among other things. It seems you respond to most of my points only with hand-waving.
The universe itself can "compute" the outcome of an interaction that takes place within an arbitrarily small region in an arbitrarily short time according to the laws of physics, no? You're really hung up on my use of computers, but as I said they are not relevant to the question of locality; what I'm asking you to imagine is just local laws of physics which involve many copies of each system/region of space, and which update what happens to each copy of that system/region (including what signals or causal effects it receives from which copies of systems in other regions) based only on what has happened in the system's past light cone.
colorSpace said:
"Sub-universe" is simply my name for what you call "copy of a system". Apparently you are confused about what I'm saying, since when you write "until there has been time for signals from all three measurements to reach some common point in space", then that is exactly what I mean by signals meeting in the middle points between A, B anc C. I called these middle points AB, AC, and BC.
I thought AB was the point in space and time where signals from A and B could meet before there'd been time for signals from C to reach that point, and likewise for AC and BC. If signals from all three events reach all three points simultaneously (so that there is no moment when AB, AC, and BC have each received signals from a pair of measurements but there is no region that has access to information about all three measurements), what is the reason you gave each one a name involving only two of the three letters?
colorSpace said:
At these points, the mapping needs to take place. With that explanation, I will simply copy in this scenario again, for you consideration:

Even if you were to construct some kind of undercover avalanche of "universe-internal" information that gets sent along "automatically", then you will still run into an unsolvable problem (I think) when one extends the experiment to a large triangle with A, B and C at the corners and three additional observers at each middle point (AB, BC, AC). Then the three sub-universes must be matched up when the signals meet at these middle points AB, BC and AC. But if there is no FTL at all (or rather: no non-locality), then at this point in time, at each middle point, there can be only information about two of the three angles, and I'd guess this information wouldn't be sufficient.

Here you say "at each middle point, there can be only information about two of the three angles", which is also consistent with my earlier interpretation that AB had the measurements at A and B in its past light cone but not C, and likewise for BC and AC. If this is not what you meant, please clarify; if it is what you meant, then please reconsider my objection that "until there has been time for signals from all three measurements to reach some common point in space, there is no need for any mappings that include A, B and C together in order to accurately simulate what observers at each location in space experience." If you still disagree with this, perhaps it would help if we came up with an actual example (like the GHZ experiment which does involve three observers) to see whether there'd be any problems with a copying-and-mapping scheme of the kind I describe.
 
Last edited:
  • #16
JesseM said:
The universe itself can "compute" the outcome of an interaction that takes place within an arbitrarily small region in an arbitrarily short time according to the laws of physics, no? You're really hung up on my use of computers, but as I said they are not relevant to the question of locality; what I'm asking you to imagine is just local laws of physics which involve many copies of each system/region of space, and which update what happens to each copy of that system/region (including what signals or causal effects it receives from which copies of systems in other regions) based only on what has happened in the system's past light cone.

The logic that is necessary to match up the copies of each system seems non-trivial, and to require time; however, it appears that there is zero time available, and in a classical universe nothing can happen in zero time. That is, a viable theory would have to specify a way for this to happen physically that can work in reality (including in the case below); otherwise this may be a point where 'non-locality' is hidden: in the unspecified computations that this model requires.

JesseM said:
I thought AB was the point in space and time where signals from A and B could meet before there'd been time for signals from C to reach that point, and likewise for AC and BC.

Yes, correct - it seemed to me this hadn't been clear, since the reasoning in this scenario is that when the signals reach the midpoints AB, AC and BC, all copies of all systems need to be matched up. I'll give an example:

Let's say when signals meet at AB, that A5 is matched with B2. When signals meet at AC, A5 is matched with C3. This already implies that at BC, the copies of systems B and C that need to be matched up include the pair B2 and C3.

So at this point, the mapping (A5,B2,C3) is already fixed, even though there is no "common point in space" that has been reached by "signals from all three measurements".

Since there is no upper limit on the number of entangled particles, this can be extended arbitrarily. You might have a large ring of 100 systems, where at each midpoint the available information is only 2 out of 100 angles and results, and still the copies of all 100 systems must be matched up.
 
  • #17
colorSpace said:
The logic that is necessary to match up the copies of each system seems non-trivial, and to require time; however, it appears that there is zero time available, and in a classical universe nothing can happen in zero time. That is, a viable theory would have to specify a way for this to happen physically that can work in reality (including in the case below); otherwise this may be a point where 'non-locality' is hidden: in the unspecified computations that this model requires.
I disagree; viable physical theories never explain how the universe does what it does, they just specify mathematical equations describing what rules it follows (and it seems to me certain processes can indeed happen in 'zero time' in certain theories, like emission/absorption events in quantum field theory). The issue is just whether a local set of copying/matching rules can account for Bell inequality violations ('local' in the sense that the rules cannot depend on any events outside of the past light cone of the event in question); if they can, then any further objection based on wondering how the universe follows the rules is a metaphysical one.
colorSpace said:
Yes, correct - it seemed to me this hadn't been clear, since the reasoning in this scenario is that when the signals reach the midpoints AB, AC and BC, all copies of all systems need to be matched up. I'll give an example:

Let's say when signals meet at AB, that A5 is matched with B2. When signals meet at AC, A5 is matched with C3. This already implies that at BC, the copies of systems B and C that need to be matched up include the pair B2 and C3.
This objection is valid if you assign importance to the "names" I gave each copy, but if we dispense with the names and just describe each copy in terms of its physical features, then this is no longer a problem. In other words, instead of saying this:
1. Bob measures A, gets +1
2. Bob measures A, gets +1
3. Bob measures A, gets +1
4. Bob measures A, gets +1
5. Bob measures A, gets -1
6. Bob measures A, gets -1
7. Bob measures A, gets -1
8. Bob measures A, gets -1
I could have said something more like this:
There are 4 indistinguishable copies of Bob who measure A and get +1, and 4 indistinguishable copies of Bob who measure A and get -1
So in this case when Alice measures C, with 4 copies of her getting +1 and 4 copies getting -1, and a bundle of signals from the Alice-copies reaches Bob, then we could just say something like "3 of the Bob-copies that got +1 get signals from an Alice copy that got +1, and the other Bob-copy that got +1 gets a signal from an Alice copy that got -1; likewise, 3 of the Bob-copies that got -1 get signals from an Alice copy that got -1, and the other Bob-copy that got -1 gets a signal from an Alice copy that got +1". In order to keep things consistent, all we need to know is that any Bob-copy that got signals from an Alice copy which say she got +1 is "marked" with this association and therefore must continue to get signals from an Alice copy that got +1; as long as the copies are otherwise indistinguishable it doesn't matter if the signal is from the "same" Alice copy (and if the Alice copies that got +1 do differentiate in some other way, so that Alice copies that got +1 later differentiate into Alice copies with property X and copies with property Y, then as soon as a Bob-copy gets a signal from a version with property X after the differentiation-event, he is afterwards 'marked' as being associated with an Alice copy that got +1 and has property X and will only receive signals from Alice-copies with this history afterwards).

So as long as we do not assign specific names to copies beyond the physical aspects of their history that differentiate them, and the "mapping" amounts to no more than a statement that a copy A who gets a signal from a copy B with a certain history is marked in this way and will continue to only receive signals from copies of B with that same history, I don't think we will have the type of problem you mention above. But again, if you disagree perhaps it would be a good idea to try to apply all this to a specific example like the GHZ experiment.
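For concreteness, here is a rough sketch of the kind of fraction-based matching rule I have in mind, for the photon case where QM predicts P(same result) = cos²(relative angle). The copy count and the matching logic here are purely illustrative assumptions, not a worked-out theory:

```python
import math

def match_copies(n_copies, rel_angle):
    """Match each Bob-copy's result to an Alice-copy's result using only the
    locally available relative polarizer angle, so that the joint statistics
    reproduce the QM prediction P(same result) = cos^2(rel_angle).
    Copies carry no 'names', only their physically distinguishing results."""
    p_same = math.cos(rel_angle) ** 2
    half = n_copies // 2              # half the copies got +1, half got -1
    n_same = round(half * p_same)     # how many of these map to a same-result copy
    pairs = []
    for bob_result in (+1, -1):
        for i in range(half):
            alice_result = bob_result if i < n_same else -bob_result
            pairs.append((bob_result, alice_result))
    return pairs

# Relative angle 30 degrees: p_same = 0.75, so 3 of each 4 Bob-copies are
# matched with a same-result Alice-copy, as in the 3-of-4 example above.
pairs = match_copies(8, math.radians(30))
assert sum(1 for b, a in pairs if b == a) == 6
```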
 
  • #18
JesseM said:
I disagree, viable physical theories never explain how the universe does what it does, they just specify mathematical equations describing what rules it follows (and it seems to me certain processes can indeed happen in 'zero time' in certain theories, like emission/absorption events in quantum field theory). The issue is just whether a local set of copying/matching rules can account for Bell inequality violations ('local' in the sense that the rules cannot depend on any events outside of the past light cone of the event in question), if they can than any further objections based on wondering how the universe follows the rules is a metaphysical one.

Not a metaphysical explanation, but a specification of the physical processes is needed. Your model appears to move a lot of what needs to happen into a hypothetical computer that is simulating the universe. I think this model is non-local in the first place, since the required computer's architecture is not specified within physical space.

JesseM said:
This objection is valid if you assign importance to the "names" I gave each copy, but if we dispense with the names and just describe each copy in terms of its physical features, then this is no longer a problem. In other words, instead of saying this:

I could have said something more like this:

So in this case when Alice measures C, with 4 copies of her getting +1 and 4 copies getting -1, and a bundle of signals from the Alice-copies reaches Bob, then we could just say something like "3 of the Bob-copies that got +1 get signals from an Alice copy that got +1, and the other Bob-copy that got +1 gets a signal from an Alice copy that got -1; likewise, 3 of the Bob-copies that got -1 get signals from an Alice copy that got -1, and the other Bob-copy that got -1 gets a signal from an Alice copy that got +1". In order to keep things consistent, all we need to know is that any Bob-copy that got signals from an Alice copy which say she got +1 is "marked" with this association and therefore must continue to get signals from an Alice copy that got +1; as long as the copies are otherwise indistinguishable it doesn't matter if the signal is from the "same" Alice copy (and if the Alice copies that got +1 do differentiate in some other way, so that Alice copies that got +1 later differentiate into Alice copies with property X and copies with property Y, then as soon as a Bob-copy gets a signal from a version with property X after the differentiation-event, he is afterwards 'marked' as being associated with an Alice copy that got +1 and has property X and will only receive signals from Alice-copies with this history afterwards).

So as long as we do not assign specific names to copies beyond the physical aspects of their history that differentiate them, and the "mapping" amounts to no more than a statement that a copy A who gets a signal from a copy B with a certain history is marked in this way and will continue to only receive signals from copies of B with that same history, I don't think we will have the type of problem you mention above. But again, if you disagree perhaps it would be a good idea to try to apply all this to a specific example like the GHZ experiment.

My understanding is that you used "4 indistinguishable copies" only to address the probabilities of 1/4 for some events. So I don't see how giving names, or not, would affect the argument. The argument being that at the midpoints, at the point in time when the copies must be matched up, only partial information is available.

Rather, it seems that this theory isn't developed to the point where it could even try to answer this problem. I'm not sure what advantage there would be in using a more specific GHZ example, unless this theory has a more specific promise of addressing this situation. However, one thing I'm quite sure about: I wouldn't have enough time to do so.

However I thank you for explaining the theory to this point.

[Edit added:] BTW, in the GHZ case, the scenario doesn't involve probabilities of 1/4, so the "indistinguishable copies" are not needed and would be redundant.
 
Last edited:
  • #19
colorSpace said:
Not a metaphysical explanation, however a specification of the physical processes is needed. Your model appears to move a lot of what needs to happen into a hypothetical computer that is simulating the universe. I think this model is non-local in the first place, since the required computer's architecture is not specified within physical space.
I've already said the computer is not a part of the "model", it's just an analogy for a universe obeying local laws, just like the local magic genies in my second analogy. And never in physics do we need a "specification of the physical processes" for a mathematical law, physics is just about stating mathematical laws and testing if they match up with experiments. (can you name the 'physical process' that causes matter and energy to curve spacetime in the way predicted by general relativity?) If it's possible to come up with a mathematical law governing copies of systems and signals that describes the statistics of which versions of a signal match up to which copies of a system, and this law can reproduce the statistics seen in QM, and is also "local" in the sense that the mapping rule only depends on information in the past light cone of the event of the signal reaching the system, then I think that's all that's needed to show that a local many-worlds type explanation can logically account for the statistics seen in QM.
colorSpace said:
My understanding is that you used "4 indistinguishable copies" only to address the probabilities of 1/4 for some events.
Just like giving each copy a name, saying that there would be 4 copies was also just a simplification to make the example easier to follow. Since probabilities are continuous in QM, it would be more "realistic" to imagine a continuous infinity of copies of each system, and to say something like "50% of copies of Bob who measured A got result +1 while 50% got the result -1, and when the copies of Bob who got result +1 got signals from Alice, 75% of these got a signal that Alice had measured +1 while 25% got a signal that Alice had measured -1", etc.
colorSpace said:
So I don't see how giving names, or not, would affect the argument. The argument being that at the midpoints, at the point in time when the copies must be matched up, only partial information is available.
But without giving them names there is no longer the problem that if AB saw Alice 5 matched with Bob 7 and BC saw Bob 7 matched with Carl 6, then AC must have seen Alice 5 matched with Carl 6. Instead you can say something like "25% of copies of AB got a signal that Alice had got +1 and Bob had gotten -1, 25% of the copies of BC got a signal that Bob had got -1 and Carl had got -1, and 25% of the copies of AC got a signal that Alice had got +1 and Carl had got -1". There's no need for the mapping at AC to have faster-than-light knowledge of the mapping at AB and BC here, since each of these observers can see 4 possible results and each result has a 25% chance (since there are no signs of entanglement when you measure only two particles of a 3-particle entangled system). Then when a fourth observer ABC gets signals from AB, BC, and AC, the signals from each of these three can be mapped to ABC in such a way that everything is consistent (so you don't have ABC hearing from AB that A got +1 but hearing from AC that A got -1) and that the quantum correlations predicted in the GHZ experiment are observed.
colorSpace said:
[Edit added:] BTW, in the GHZ case, the scenario doesn't involve probabilities of 1/4, so the "indistinguishable copies" are not needed and would be redundant.
The need for "indistinguishable copies" has nothing specifically to do with there being a 1/4 probability involved, obviously. There will be a 1/2 probability that each particle gives result +1 or -1, and then with multiple copies of each experimenter at A, B, and C, we can ensure that copies of A and B and C are always mapped with one another in such a way as to give the correlations predicted by QM which violate single-universe local realism.
 
  • #20
JesseM said:
I've already said the computer is not a part of the "model", it's just an analogy for a universe obeying local laws, just like the local magic genies in my second analogy. And never in physics do we need a "specification of the physical processes" for a mathematical law, physics is just about stating mathematical laws and testing if they match up with experiments. (can you name the 'physical process' that causes matter and energy to curve spacetime in the way predicted by general relativity?) If it's possible to come up with a mathematical law governing copies of systems and signals that describes the statistics of which versions of a signal match up to which copies of a system, and this law can reproduce the statistics seen in QM, and is also "local" in the sense that the mapping rule only depends on information in the past light cone of the event of the signal reaching the system, then I think that's all that's needed to show that a local many-worlds type explanation can logically account for the statistics seen in QM.

I think you are quite right in introducing the word "magic", since that would be required to match up the universes in such a way. There are just too many things being assumed (such as this avalanche of available information, just because the edge of a light cone is reached) that are not possible in local classical physics. The universe doesn't send around complete information about all internal states at the very edge of any light cone - this is just beyond "physics", let alone local classical physics. You mentioned above that emission/absorption events may require zero time in quantum physics. But if so, and if these are events worth mentioning in this context, then this would likely imply non-local effects as well.

I don't think it is viable for a theory to simply say "at this point some magic happens", especially when the magic is required to do so much. Insofar as the model is local, it appears so only by moving the problem into unspecified territory, or magic, and so it doesn't seem viable.

Furthermore, there is still a very simple problem: you haven't yet resolved the GHZ case with more than two locations, as below.

JesseM said:
Just like giving each copy a name, saying that there would be 4 copies was also just a simplification to make the example easier to follow. Since probabilities are continuous in QM, it would be more "realistic" to imagine a continuous infinity of copies of each system, and to say something like "50% of copies of Bob who measured A got result +1 while 50% got the result -1, and when the copies of Bob who got result +1 got signals from Alice, 75% of these got a signal that Alice had measured +1 while 25% got a signal that Alice had measured -1", etc.

This appears redundant in regard to addressing the challenge of the triangular situation.

JesseM said:
But without giving them names there is no longer the problem that if AB saw Alice 5 matched with Bob 7 and BC saw Bob 7 matched with Carl 6, then AC must have seen Alice 5 matched with Carl 6.

That is not really a problem, just something that makes it perhaps a little more difficult to come up with a specific solution, which you haven't presented yet.

JesseM said:
Instead you can say something like "25% of copies of AB got a signal that Alice had got +1 and Bob had gotten -1, 25% of the copies of BC got a signal that Bob had got -1 and Carl had got -1, and 25% of the copies of AC got a signal that Alice had got +1 and Carl had got -1". There's no need for the mapping at AC to have faster-than-light knowledge of the mapping at AB and BC here, since each of these observers can see 4 possible results and each result has a 25% chance (since there are no signs of entanglement when you measure only two particles of a 3-particle entangled system).

Introducing percentages in the GHZ case still seems redundant and out of place.

The point is that the mapping at AB does not have information about the measurement angle at C (nor the result, but the point here is mainly the angle), yet the angle at C may impose requirements on the necessary mapping at AB. That is the challenge to answer, given that:

JesseM said:
Then when a fourth observer ABC gets signals from AB, BC, and AC, the signals from each of these three can be mapped to ABC in such a way that everything is consistent (so you don't have ABC hearing from AB that A got +1 but hearing from AC that A got -1) and that the quantum correlations predicted in the GHZ experiment are observed.

This appears wrong, as for a fourth observer ABC there is no more mapping left to do (other than perhaps removing redundant cases that need not have been introduced in the first place). Any copy of A is already mapped to a specific copy of B due to AB, and to a specific copy of C due to AC. When both AB and AC are mapped, no other possibilities (or requirements, for that matter) are left to map. At that point, the problem is either solved, or not.
 
  • #21
JesseM said:
From what I've read, MWI advocates don't view the entire universe as splitting; the splitting is completely local. So, if the first experimenter over here splits into A and a, and the second experimenter over there splits into B and b, the universe doesn't have to map copies in one location to copies in another until there's been time for a signal to pass between them. The Everett Interpretation FAQ says:


Thanks, Jesse - that makes the most sense out of any explanation I've heard. So this means essentially that Experimenter A and Experimenter B both exist in superposed states (reflecting the possible outcomes they could have seen) until they meet, at which time the universe splits in four? If so, that still requires that the mechanism that splits the universe have knowledge of both polarizer settings - although the information traveled slower than light, it still traveled. Are the polarizer settings somehow reflected in the wavefunctions of the respective observers when they meet? That would be the only way to explain the correlations; the respective results being reflected in the wavefunctions is not enough, as the correlations depend entirely on the relative angle between the polarizers.

On the issue of computer simulations, I see no problem using a single computer to simulate anything. Correct programming can adequately simulate locality without the need for separate computers interacting across vast distances - simply deny distant "objects" access to information about one another. The bits and logic gates in the computer are not quantum devices and random numbers are not really random. Therefore I do not see why they would behave in any way other than the way in which they were programmed.
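A toy one-dimensional lattice makes the point: a single computer runs everything, yet locality holds because each cell's next state depends only on its immediate neighbours, so information spreads at most one cell per step (an effective "speed of light"). The update rule here is just an illustrative assumption:

```python
# Each cell's next state depends only on itself and its immediate neighbours,
# so a signal (a 1 among 0s) spreads at most one cell per time step.
def step(cells):
    n = len(cells)
    return [max(cells[max(i - 1, 0):i + 2]) for i in range(n)]

cells = [0] * 9
cells[4] = 1                # an 'event' at the centre cell
for _ in range(2):
    cells = step(cells)
# After 2 steps the event has reached exactly cells 2..6 and no further:
assert cells == [0, 0, 1, 1, 1, 1, 1, 0, 0]
```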
 
Last edited:
  • #22
colorSpace said:
The point is that the mapping at AB does not have information about the measurement angle at C (nor the result, but the point here is mainly the angle), yet the angle at C may impose requirements on the necessary mapping at AB. That is the challenge to answer, given that: ...

To this statement of mine I can add that even though I'm not sure enough of the details of GHZ entanglement to say whether it will create this situation, I am (quite) certain that this situation will occur with 'entanglement-swapping', or a combination of entanglement-swapping and GHZ, when the 'swapping' is done at location C.

Then it may be difficult to come up with a mapping logic that works at the midpoint AB (without the information from C).
 
  • #23
Probably a little late to this discussion, but the issue of locality in QM is addressed directly in this groundbreaking paper by David Deutsch:

http://xxx.lanl.gov/abs/quant-ph/9906007

Basically, the paper says that: "All information in quantum systems is, notwithstanding Bell's theorem, localised. Measuring or otherwise interacting with a quantum system S has no effect on distant systems from which S is dynamically isolated, even if they are entangled with S. Using the Heisenberg picture to analyse quantum information processing makes this locality explicit, and reveals that under some circumstances (in particular, in Einstein-Podolski-Rosen experiments and in quantum teleportation) quantum information is transmitted through 'classical' (i.e. decoherent) information channels."
 
  • #24
Michael Bacon said:
Probably a little late to this discussion, but the issue of locality in QM is addressed directly in this groundbreaking paper by David Deutsch:

http://xxx.lanl.gov/abs/quant-ph/9906007

Basically, the paper says that: "All information in quantum systems is, notwithstanding Bell's theorem, localised. Measuring or otherwise interacting with a quantum system S has no effect on distant systems from which S is dynamically isolated, even if they are entangled with S. Using the Heisenberg picture to analyse quantum information processing makes this locality explicit, and reveals that under some circumstances (in particular, in Einstein-Podolski-Rosen experiments and in quantum teleportation) quantum information is transmitted through 'classical' (i.e. decoherent) information channels."

A while ago I read a similar paper on D.Deutsch's old homepage. The proof seems very abstract, and its conclusion appears to be that a qubit could theoretically be passed along with the measurement results that B sends to A after the experiment, which would then make things fit so as to create the impression that there had been a non-local influence beforehand. I think this will run into the same challenges that I raised above, so I wouldn't be convinced until I hear a convincing response to them.

Also, this seems to assume something that, at least naively, looks very unreasonable: in the EPR experiment the results could be sent by plain email, so the qubit would have to travel through the typing fingers into the keyboard, attach itself to the ASCII codes sent over the internet, through repeaters, and so on. I simply can't believe that. Or am I missing something?

To me this explanation, so far, would seem more than "half-baked".
 
  • #25
The many-worlds interpretation doesn't resolve non-locality. Non-locality is not a problem; it's a characteristic of nature. The fact that many people feel bad about it is just due to subjective viewpoints.

In the many-worlds interpretation there is no collapse of the wavefunction and no measurement process in the Copenhagen/von Neumann sense. That has nothing to do with non-locality.

Above all, non-locality is, as Feynman said, scratched against our noses by nature. Any interpretation that tries to falsify a fact is a misinterpretation.
 
  • #26
Non-locality is not a problem; it's a characteristic of nature. The fact that many people feel bad about it is just due to subjective viewpoints.
Wow, that's bold. How did you come to know so much about nature?

Any interpretation that tries to falsify a fact is a misinterpretation.
Wow again. So Bohm, Cramer, and anyone else who tries to resolve locality with QM is a liar? What support do you have for such a bold assertion?
 
  • #27
peter0302 said:
Wow again. So Bohm, Cramer, and anyone else who tries to resolve locality with QM is a liar? What support do you have for such a bold assertion?

What do you mean by "tries to resolve locality with QM"?
 
  • #28

1. What is MWI and how does it relate to locality problems and entanglement?

MWI stands for the Many-Worlds Interpretation, a reading of quantum mechanics proposed by Hugh Everett in the 1950s. It holds that the wavefunction of a quantum system never collapses but instead branches into multiple parallel worlds. The interpretation is often invoked to explain entanglement, where two particles can be connected in such a way that the state of one is correlated with the state of the other even when they are separated by large distances. Locality problems arise when trying to reconcile this apparently non-local connection with the principle of locality, which states that no physical influence can travel faster than light.
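The tension with locality can be made concrete with the standard CHSH quantity. A quick numeric check (Python; assumes the ideal quantum correlation E(a, b) = cos 2(a − b) for polarization-entangled photons, with the usual textbook angle settings) shows the quantum prediction exceeding the local-realist bound of 2:

```python
import math

def E(a, b):
    # Quantum correlation for ideal polarization-entangled photons:
    # E(a, b) = cos(2 * (a - b)), angles in degrees.
    return math.cos(math.radians(2 * (a - b)))

# Standard CHSH settings (degrees): two per side
a1, a2 = 0.0, 45.0
b1, b2 = 22.5, 67.5
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(S)  # 2*sqrt(2) ≈ 2.828 > 2: the Bell-CHSH bound is violated
```

Any local hidden-variable model must satisfy |S| ≤ 2, so the excess is exactly what every interpretation, MWI included, has to account for.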

2. How does MWI resolve locality problems with entanglement?

MWI suggests that the apparent non-locality of entanglement arises because the universe is constantly branching into parallel worlds, each containing a copy of the entangled particles. On this view the particles are not communicating across large distances; rather, the correlations emerge from how the local branches are matched up when the results are compared. This eliminates the need for faster-than-light influences and is consistent with the principle of locality.

3. Are there any alternative theories to MWI that also attempt to explain locality problems with entanglement?

Yes, there are several alternative approaches, such as the Copenhagen interpretation and pilot-wave theory. Each has its own way of explaining entanglement and how it relates to locality. MWI, however, remains one of the most widely discussed interpretations among physicists.

4. Are there any experiments that have been conducted to test the validity of MWI in resolving locality problems with entanglement?

While numerous experiments have confirmed the phenomenon of entanglement, there is currently no known way to directly test the validity of MWI or any other interpretation of quantum mechanics. The differences between interpretations are largely philosophical and cannot be settled by experiment.

5. What are some potential implications of MWI's resolution of locality problems with entanglement?

If MWI is correct, it would mean that the universe is constantly branching into multiple parallel worlds, each of which contains a version of ourselves and everything else in the universe. This idea has profound philosophical implications and challenges our understanding of reality. It also has potential practical applications, such as in quantum computing and communication, where the phenomenon of entanglement is harnessed for various purposes.
