Entanglement and teleportation

daytripper
I read up on Wikipedia about entanglement and teleportation, but it left me with a few questions. If you go to This Link, you'll see that they give the analogy "Bob has created two atoms called I and II which are maximally entangled". Now obviously, Bob can't create two atoms at will, so how do two particles become entangled? Other parts of the article suggest that because they're identical particles in the same instant of time, they're basically one particle, so what happens to one will happen to the other. But then it says the transmission of information cannot go faster than the speed of light. If this is true, then I would assume that the communication between the particles is transmitted through some sort of EM wave. There's a lot of confusion right now; could someone clear this up for me? I just realized that my questions might not be obvious from the text, so I will list them.
1) What defines which identical particles in the same time period are entangled and which are not? Is there an "entanglement process" or are two identical particles that exist at the same point in time automatically entangled?
2) Does the transfer of particle state information happen instantaneously no matter what the distance?
3) If not, is the information transmitted through some sort of EM wave?
 
An entangled pair can be created by a technique called parametric down conversion; I suggest you google for that. Besides, two entangled particles can indeed be seen as one 'bigger' particle with twice as much energy (supposing that each particle has the same amount of energy), or half the wavelength. This is how entangled pairs can beat the diffraction limit. I refer to my journal for more info. Just look at the faster-than-light communication entry and the for-qubit-lovers entry.

marlon
P.S. this is nice: http://marcus.whitman.edu/~beckmk/QM/grangier/Thorn_ajp.pdf
 
Marlon, I read your faster-than-light article. Just to verify: you said that the communication between the two particles themselves is instantaneous, but the ability to read these particles has to be enabled through a classical communication channel? Do physicists know how the particles communicate faster than the speed of light, or is there no explanation behind that?
 
daytripper said:
Do physicists know how the particles communicate faster than the speed of light, or is there no explanation behind that?

The faster-than-light aspect comes directly from the entanglement. If you have an entangled pair of two atoms (one has spin up along the x-axis and the other has spin down, for example) and you measure the spin of one atom along the x-axis, then you automatically know the spin value of the other atom, because it has to be in the opposite direction. First of all, communication like that is impossible because of the necessary classical phone call that is required between the two observers. Secondly, both observers need to measure along the very same axis, but who says they will do that?

regards
marlon
 
I must have my concept of entanglement wrong then. I thought entanglement meant that what was done to one entangled particle was automatically done to the other. But this is just a matter of "If this particle is moving this way, the other particle has to be moving that way". No communication is done between the particles themselves. right?
 
Let's say two observers, separated by a light year, have come to the agreement that spin up refers to 1 and spin down to 0 (assuming they live quite long...). (They have come to this agreement through the conventional ways of communication, in this example taking years, setting aside side effects such as signal loss.) When they agreed on this, they also agreed to prepare two isolated photons (one here and one there) and to bring them into a 'long-lasting' state of entanglement.
Let's assume they have the possibility (by chance, or agreed beforehand) to each measure their photon at the exact same time, or within a time frame that would allow FTL.

Then, observer A puts the photon into a forced 'spin up' state, which will, due to entanglement, instantaneously be transferred to the other photon, forcing the one at observer B into an immediate spin down state. This way, observer B will be able to read 0 from his photon.

Apart from the fact that measuring the spin state requires classical communication, the whole procedure would greatly exceed light speed.
 
daytripper said:
I must have my concept of entanglement wrong then. I thought entanglement meant that what was done to one entangled particle was automatically done to the other. But this is just a matter of "If this particle is moving this way, the other particle has to be moving that way". No communication is done between the particles themselves. right?

In fact, it is neither!
If it were "what is done to one is also done to the other", you obviously would have a faster-than-light communication channel. Some fools even thought that you could build a rocket motor that way: have entangled atoms, one set in the rocket and one on Earth, then accelerate those on Earth so that those in the rocket would also accelerate :-)

On the other hand, it is not just learning about an unknown parameter of the other atom either. Bell's theorem tells us that that is not the case.

Let us separate two issues: one is discussion of the *mechanism* that is responsible for entanglement: there are many discussions about it, and people have different views on what is actually going on (I have my own view, which I don't stop defending over here :-). The other issue is about what is actually observed: here, most people agree (there's still a "local realist" crowd who denies all experimental results and claims they are all tricked, badly analysed, or oversold, but they are seen by most others as kind of cranky).

I won't go into the mechanism explanations. I will just try to state what is actually predicted by quantum theory, no matter what interpretational flavor. It is about 2 observers, Alice and Bob, who receive each one of the two entangled particles (photons, atoms, whatever).
Now, they can each do only one measurement on their particle, but they have a choice of WHICH experiment to do, which is parametrised by a variable, theta_Alice and theta_Bob respectively (usually a polarisation angle).
So Alice makes a choice of theta_Alice, and then gets a result (up or down) for the measurement at hand.
Bob on his side makes a choice of theta_Bob, and then gets a result (up or down) for the measurement at hand.

Alice has a certain probability of getting "up", P(a_up, theta_Alice), which is only a function of theta_Alice.
Bob has a certain probability of getting "up", P(b_up, theta_Bob), which is only a function of theta_Bob.
So far, so good: this is what people mean by 'there is no information transfer': Bob, with his measurement, cannot learn anything about Alice's choice of theta_Alice, and vice versa.

BUT, but...
If Alice and Bob COME TOGETHER, AND COMPARE NOTES, then they observe something strange: there is a correlation: the probability
P(a_up, b_up, {theta_Alice, theta_Bob} ) is such that it does not satisfy a property which is called Bell locality.
In order to explain this in detail, you should study a bit Bell's theorem. In short, it comes down to the following point. Bell worked out what would be the requirement on the joint probability P(a_up,b_up,{theta_Alice,theta_Bob}) when we assume that the two particles share some common "hidden variables", and then have to generate the probability of "up" or "down" at Bob and Alice, INDEPENDENTLY.
So Bell assumed that there is a common variable lambda, and that P(a_up, theta_Alice) is in fact given by P(a_up,lambda,theta_Alice), and that at Bob's the probability is given by P(b_up,lambda,theta_Bob) ; and that these probabilities are independently generated, once we know lambda.
This means then that the joint probability is a product:
P(a_up,b_up,lambda,{theta_Alice,theta_Bob}) = P(a_up,lambda,theta_Alice) x P(b_up,lambda,theta_Bob).
But we don't know anything about lambda, it just has an unknown probability distribution, P(lambda), so our observed correlation is then, according to Bell:

P(a_up,b_up,{theta_Alice,theta_Bob}) =
Integral P(lambda) P(a_up,b_up,lambda,{theta_Alice,theta_Bob}) d lambda

Of course, there is still a lot of freedom, because of the choice of P(lambda), P(a_up,lambda,theta_Alice) and so on, but Bell succeeded nevertheless in writing down some INEQUALITIES which the joint probability needs to satisfy.

Well, it turns out that the joint probabilities for entangled particles in quantum theory DO NOT ALWAYS SATISFY those Bell inequalities.

What does this mean, statistically? Well, it just means that one of Bell's hypotheses is not satisfied.
And Bell's hypotheses are that the probabilities of Alice and Bob observing "up" for their chosen angles are generated INDEPENDENTLY as a function of a COMMON SET OF (HIDDEN) VARIABLES.
This is a very reasonable hypothesis when "statistical" things happen and a correlation is observed. If somehow you arrange that there cannot be any DIRECT influence (because there's a big distance, a concrete wall, etc. between Alice and Bob), then if you observe a correlation, you normally assume a COMMON CAUSE (the hidden variable).
So this is somehow not true in quantum theory: you can have correlations without having a "common cause".
But it is also true that Bob cannot learn anything from Alice's CHOICE from his local measurement, nor can Alice learn anything from Bob's choice. So this thing doesn't allow you to send information from Alice to Bob.
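
For concreteness, here is a minimal numeric sketch (Python) of that comparison, using one standard Bell-type combination (the CHSH form) and a toy hidden-variable model with exactly the factorized structure above. The angles and the toy model are illustrative choices, not taken from any particular experiment:

import numpy as np

rng = np.random.default_rng(0)

def E_qm(a, b):
    # Standard quantum prediction for polarization-entangled photons
    # (the |HH> + |VV> state): correlation E(a,b) = cos(2(a-b)).
    return np.cos(2 * (a - b))

def E_lhv(a, b, n=200_000):
    # Toy Bell-local model: each pair carries a shared hidden
    # polarization axis lambda; each side computes +1 or -1 from
    # lambda and its OWN setting only (Bell's factorized form).
    lam = rng.uniform(0, np.pi, n)
    A = np.where(np.cos(2 * (lam - a)) > 0, 1, -1)
    B = np.where(np.cos(2 * (lam - b)) > 0, 1, -1)
    return np.mean(A * B)

# CHSH combination S = E(a,b) - E(a,b') + E(a',b) + E(a',b');
# any model with the factorized structure satisfies |S| <= 2.
a, a2, b, b2 = np.radians([0.0, 45.0, 22.5, 67.5])
for name, E in (("quantum", E_qm), ("toy local model", E_lhv)):
    S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
    print(name, round(float(S), 3))   # quantum ~ 2.828, toy local model ~ 2.0

# Note: in both models, each side separately sees 50/50 outcomes whatever
# the other setting is (no signalling); the difference only shows up in
# the joint statistics, exactly as described above.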

cheers,
Patrick.
 
daytripper said:
I must have my concept of entanglement wrong then. I thought entanglement meant that what was done to one entangled particle was automatically done to the other. But this is just a matter of "If this particle is moving this way, the other particle has to be moving that way". No communication is done between the particles themselves. right?

Well, your definition is correct, but the communication part is just the fact that if you measure one spin, you automatically know what the other observer will have as an outcome when he measures the other atom of the entangled pair.

regards
marlon
 
daytripper said:
1) What defines which identical particles in the same time period are entangled and which are not?
Their behavior wrt some detection scheme or other is
correlated. For example, different, separated parts of
the (same) television signal (wave) are entangled.

daytripper said:
Is there an "entanglement process" or are
two identical particles that exist at the same point in time
automatically entangled?
"Entanglement processes" produce the entangled phenomena
observed experimentally. The entangled phenomena have a
common cause (including, but not necessarily requiring, that
they've interacted prior to detection). Marlon mentioned PDC.
There are also other experimental processes that produce
entanglement.

daytripper said:
2) Does the transfer of particle state information happen
instantaneously no matter what the distance?
"Particle state information" is something that *we*
generate via theory and observation. Are the separated,
entangled physical phenomena *causing* each other
(instantaneously or superluminally)? There's no direct
evidence of that. But some interpretations have it that
that's what's happening. My personal opinion is that that
sort of *causation-at-a-distance* probably isn't what's
happening.

The correlations are a function of analyzing (even
via spacelike separated events) motional properties that the
entangled phenomena have in common due to their having
interacted in the past, or being created at the same time
and place (eg., a wave moving omnidirectionally away from
its source and rotating parallel to some plane-- separate,
individual points on the wave are entangled wrt the
rotation). Separated objects in any *system* of objects
moving together as a group are entangled wrt the movement
of the system as a whole.

daytripper said:
3) If not, is the information transmitted through
some sort of EM wave?
Information, in the sense of something being communicated
from one place to another, is transmitted electromagnetically.
There might be other waves in nature moving faster than EM
waves, but nobody has detected that yet. So, as far as
anybody knows, the speed of electromagnetic radiation in
a vacuum is the upper limit.

Nothing *needs* to be being transferred instantaneously or
superluminally to understand why the correlations of entangled
phenomena are what they are. For example, in the case of photons
entangled in polarization, light waves emitted (presumably by
the same atom) during the same interval are analyzed by
crossed linear polarizers. No nonlocal causation needs to be
happening -- the polarizers are simply, in effect, analyzing
the same light at the same time, and a cos^2 theta correlation
for coincidental detection emerges (which is what would be
expected if the same light is being analyzed by crossed linear
polarizers).

Now, I'm aware of analyses of this that conclude
that the light incident on the polarizers can't have been
made the same by the emission process, that it must happen
at the instant the detection that initiates a coincidence
interval occurs. But, these analyses are flawed, imho.
One way to approach it is by considering where the qm
projection along the plane of transmission (by the polarizer
at the initially detecting end) comes from. There's, imo,
a sound physical basis for it. Anyway, what results is
a probability of 1 for the initiating detection, and
a cos^2 theta probability at the other end for the same
interval. So, the joint probability of detection
(the probability of coincidental detection)
wrt any interval is 1(cos^2 theta). And, experiments
support this prediction.

The assumption of the causal independence of spacelike
separated individual results holds as long as one is
careful to modify the probabilistic picture following
the initiating detection. Maybe current 'pictures'
of spin and polarization are inadequate to describe
exactly what is happening. But, the plane of polarization,
and the intensity, of the light transmitted by the first
polarizer (associated with the start of the coincidence
interval) is a subset of the emitted light incident
on each polarizer for the common interval. This
light produced a photon, which represents maximal
intensity for that coincidence interval, at the first
detector. So, it follows from standard optics that
the probability that it will produce a photon at the
second detector (via analyzing the light from the
same emission, or set of emissions) is cos^2 theta,
where theta is the angular difference between the
settings of the two polarizers.
 
  • #10
Sherlock said:
The correlations are a function of analyzing (even
via spacelike separated events) motional properties that the
entangled phenomena have in common due to their having
interacted in the past, or being created at the same time
and place (eg., a wave moving omnidirectionally away from
its source and rotating parallel to some plane-- separate,
individual points on the wave are entangled wrt the
rotation). Separated objects in any *system* of objects
moving together as a group are entangled wrt the movement
of the system as a whole.
This pretty much sums up my conception of the process. There just can't be any impossible or mysterious factors involved. We just haven't identified all the properties and restrictions on their motion. Unless I miss the point, communication at a distance is merely speculation, right?
:shy:
 
  • #11
LindaGarrette said:
This pretty much sums up my conception of the process. There just can't be any impossible or mysterious factors involved. We just haven't identified all the properties and restrictions on their motion. Unless I miss the point, communication at a distance is merely speculation, right?
Well, there can't be any *impossible* factors involved. :)

But, there are mysterious factors involved, and they have,
imo, as much (maybe more in the case of entanglement) to do
with the way competing formulations are analysed as with
the entangled phenomena themselves.

The (speculative) inference of instantaneous or
superluminal *causal* relationships between the separated
phenomena is allowed, logically, given certain assumptions
(or, more strictly, the experimental negation of certain
*interpretations* of certain assumptions via the formulation
of probability statements regarding joint detection, and
the restriction of alternatives).

I've outlined the reasons why I don't think that experimental
violations of Bell inequalities are telling us what some people
seem to think they're telling us. Was Bell wrong? No, he
said his formulation regarding probability of joint detection is
incompatible with qm. It is. It's also incompatible with
experimental results, which support the qm formulation.
The problem is that the usual lhv formulation, following Bell,
doesn't take into account that the probabilities for individual
detection have changed once a detection is registered and
a coincidence interval is initiated. If you give the qm
projection operator the correct, imo, physical interpretation
in these experiments, then the qm formulation can be
seen as a sort of lhv theory itself.

It seems like a good bet that all the properties of light, electricity,
etc. haven't been identified yet -- at least not precisely enough
to give a clear picture of the physical details of what's happening
in the entanglement experiments.
 
  • #12
Oh, I see. So it's more like saying that because the photons were produced at the same exact time, any reading of the particles will probably be the same, depending on the point in time the photon was "read" (seeing the same light at the same time). I was thinking like the idiots that were going to use it for rocket propulsion. Haha. Thank you for clearing that up for me. I thought that the actions of Alice would produce an effect on Bob's photon.
 
  • #13
Sherlock said:
The (speculative) inference of instantaneous or
superluminal *causal* relationships between the separated
phenomena is allowed, logically

But entanglement isn't one of those causal relationships, right?
 
  • #14
daytripper said:
But entanglement isn't one of those causal relationships, right?

I don't think so, but some pretty smart people do.

The problems arise because of the way some lhv formulas
are done. If you describe joint detection in terms of
the product of the *initial* (prior to detection) individual
probabilities, then you get some predictions that don't agree
with qm (or experiment). But, the probability of individual
detection changes upon a detection being registered at one
end or the other. When that's taken into account, then
the idea that the filters are analyzing a common property
(or properties) imparted at emission is ok.
 
  • #15
daytripper said:
Oh, I see. So it's more like saying that because the photons were produced at the same exact time, any reading of the particles will probably be the same, depending on the point in time the photon was "read".
(seeing the same light at the same time).

More like, because the photons were produced at the same exact
time *and place* (like from the same atomic 'burp'), subsequent
analysis of the photons by the same sort of device will produce
results that are correlated.

There's a lot of great stuff written about this sort of thing.
If you're really interested, then you should read all of Bell's
work on this (and check out all of the citations, including the
EPR paper, the Aspect papers, etc). That should set you back
at least a few months, but it will give you a much better
understanding of the difficulties involved -- and the
considerations that led to the belief by some that there
are superluminal 'influences' (or whatever you want to
call the nonlocal stuff) in nature.
 
  • #16
Ok. I'll check out those articles. Thank you for helping me out even through my confusion. I must go now.
 
  • #17
Sherlock said:
The problem is that the usual lhv formulation, following Bell,
doesn't take into account that the probabilities for individual
detection have changed once a detection is registered and
a coincidence interval is initiated. If you give the qm
projection operator the correct, imo, physical interpretation
in these experiments, then the qm formulation can be
seen as a sort of lhv theory itself.

I read, and re-read this several times, and I can't make up what you mean. I am reasonably well acquainted (or so I think) with Bell's reasoning.
What do you mean by "the probabilities for individual detection have changed once a detection is registered" ??

cheers,
Patrick.

EDIT: btw, this has probably already been cited, but I just found a very very thorough reference on all things Bell:

http://plato.stanford.edu/entries/bell-theorem/
 
  • #18
vanesch said:
What do you mean by "the probabilities for individual
detection have changed once a detection is registered" ??
Prior to detection the probability of individual detection at each
end is .5. A detection at one end or the other starts the
coincidence circuitry. A 'coincidence interval' is electronically
defined and, for this interval, the probability of detection at
the detecting end is no longer .5. It's 1. The probability of
detection at the other end for this interval is no longer .5, but
cos^2 theta (where theta is the angular difference of the
polarizer settings). So, the probability of joint detection is
1(cos^2 theta).
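
In numbers, a minimal sketch (Python) of the standard quantum prediction for an ideal polarization-entangled pair; the angles are arbitrary examples:

import math

for theta_deg in (0, 22.5, 45, 67.5, 90):
    theta = math.radians(theta_deg)
    p_single = 0.5                       # unconditional detection probability at each side
    p_joint = 0.5 * math.cos(theta)**2   # both sides detect (ideal pair, ideal detectors)
    p_cond = p_joint / p_single          # probability at the far side GIVEN a detection here
    print(f"theta = {theta_deg:>5} deg   joint = {p_joint:.3f}   conditional = {p_cond:.3f}")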

The transmission axis of the polarizer at the initially detecting
end is taken or projected as the global emission parameter,
because:
(1) the intensity of the detected light is a subset of the
emitted light.
(2) the transmission axis of the polarizer at the initially
detecting end represents the or 'a' plane of maximal
transmission by the polarizer(s) wrt the light emitted
during the interval (a photon *was* produced out of
light that was filtered from the emitted light).
(3) PMT response is directly proportional to the intensity
of the light transmitted by the polarizer.
(4) the intensities of the light between the polarizers and
their respective PMT's are related by cos^2 theta, which
therefore represents the probability of joint detection
for any coincidence interval.
 
  • #19
daytripper said:
I thought that the actions of Alice would produce an effect on Bob's photon.

As far as anyone can tell, that is EXACTLY what happens. Of course it is just as likely that it is Bob's actions that affect Alice's results. These scenarios end up being indistinguishable, which is of course a bit puzzling.
 
  • #20
Sherlock said:
The problem is that the usual lhv formulation, following Bell,
doesn't take into account that the probabilities for individual
detection have changed once a detection is registered and
a coincidence interval is initiated. If you give the qm
projection operator the correct, imo, physical interpretation
in these experiments, then the qm formulation can be
seen as a sort of lhv theory itself.

That is certainly an unconventional description of the situation. Since the results change upon the "first" detection, as you also point out in other posts, and that "causes" the results at the other detector to immediately change, you are saying that the results ARE dependent on space-like separated observer settings. That is the opposite of a LHV interpretation.
 
  • #21
Sherlock said:
Prior to detection the probability of individual detection at each
end is .5. A detection at one end or the other starts the
coincidence circuitry. A 'coincidence interval' is electronically
defined and, for this interval, the probability of detection at
the detecting end is no longer .5. It's 1. The probability of
detection at the other end for this interval is no longer .5, but
cos^2 theta (where theta is the angular difference of the
polarizer settings). So, the probability of joint detection is
1(cos^2 theta).

Ah, I see :-)

But now you have another difficulty. I have a vague "deja vu" feeling, when I've been through this with a person whose nickname was nightlight.
Ok, you're absolutely right that EPR experiments, in a completely classical wave setting, are explainable using Maxwell's theory and detectors whose triggering is proportional to the incident intensity.
But,...

Now you have a serious problem with the "photon" concept! Because if what you write is correct, then a single-photon state, incident on a beamsplitter, and detected by two photodetectors on the two arms, should then click in perfect coincidence, no ? (it is as in your case, but with theta= 0 degrees)
After all the intensities on both detectors are identical (the beamsplitter splits the intensity 50-50).
So, do you expect coincidence or not ?


cheers,
Patrick.
 
  • #22
DrChinese said:
As far as anyone can tell, that is EXACTLY what happens. Of course it is just as likely that it is Bob's actions that affect Alice's results. These scenarios end up being indistinguishable, which is of course a bit puzzling.

As you might know by now, I don't think that Alice's actions "have some effect" at Bob's, because that would imply, in one way or another, a non-local dynamical interaction. Of course, the possibility is not excluded. It is possible that there is such a "spooky action at a distance". But then you should admit that it is very strange that, given that spooky actions at a distance are possible, there is nevertheless a conspiracy that forbids us to use them to make a faster-than-light telephone.
But it is also possible that all what happens is strictly local, and that the correlation probability only has a meaning AFTER the results have been brought together, by applying the Born rule only at the point where messengers (which are entangled with the different outcomes at Alice and Bob) come together and can allow for a physical implementation of a correlation measurement (a coincidence circuit with counter, say). It is only when we observe THAT circuit that we apply the Born rule. It is only by inference that we then suppose that we know what happened at Bob's or at Alice's place.
You can, or you cannot, agree with that explanation. If you feel like the observed world is "really out there" then you have no option but to accept "spooky action at a distance". But I would like to stress that this is *NOT* the only logical possibility. You can still "save locality" by accepting what quantum theory cries out: macroscopic superposition.

cheers,
Patrick.
 
  • #23
vanesch said:
As you might know by now, I don't think that Alice's actions "have some effect" at Bob's, ...

Yes, MWI (and a few others) is a possibility. What I was trying to stress is that Daytripper's idea that there is NOT a causal connection cannot be supported by the evidence. In other words, the evidence is compatible with a causal connection between Alice and Bob's observations.
 
  • #24
Q:
daytripper said:
I thought that the actions of Alice would produce an effect on Bob's photon.
A's:
vanesch said:
As you might know by now, I don't think that Alice's actions "have some effect" at Bob's, because that would imply, in one way or another, a non-local dynamical interaction.
DrChinese said:
As far as anyone can tell, that is EXACTLY what happens.

It seems that there is a difference of opinion as to what is actually happening. Is this something I have to get my own opinion on through research or, to put it bluntly, is one of you wrong?
 
  • #25
daytripper said:
It seems that there is a difference of opinion as to what is actually happening. Is this something I have to get my own opinion on through research or, to put it bluntly, is one of you wrong?

Both!

The "consensus" within the physics community is that either there are a) no hidden variables; or b) there are non-local ("spooky action at a distance") effects. There are also c) variant interpretations such as Many Worlds (MWI) that try to solve the paradox with other assumptions (as Vanesch himself has recently pointed out in a very nice original paper).

The main thing is to discard the "naive" view that the situation results from some lack of knowledge - even though there is unquestionably much more to learn. After all, a), b) and c) are all quite different!
 
  • #26
daytripper said:
But entanglement isn't one of those causal relationships, right?
Many scientists (including Sherlock) may disagree, but I think everything is causally determined, even at the quantum level, if only because there isn't enough evidence to the contrary. But the effect of any quantum interaction on space-time reality would be irrelevant. (Oops, looks like my post is out of place in the thread. I'm not accustomed to finding the most recent posts last.)
 
  • #27
DrChinese said:
Yes, MWI (and a few others) is a possibility. What I was trying to stress is that Daytripper's idea that there is NOT a causal connection cannot be supported by the evidence. In other words, the evidence is compatible with a causal connection between Alice and Bob's observations.

Ok, I can agree with that :approve:

The only thing experimental evidence suggests strongly is that the individual probabilities, and the correlations, as calculated by quantum theory (according to your favorite scheme, they all give the same result of course) are strongly supported, and that this implies that some inequalities a la Bell are violated.
As such the total set of hypotheses that were used (locality, reality, independence of probabilities at remote places...) is falsified. But it is an error to jump directly to the throat of locality. This is a possibility, but it doesn't follow from any evidence. Just as the denial of hidden variables is a possibility, but not necessary.

The way I see it (even if you do not want to go explicitly in an MWI scheme) is that for Alice, it does not make sense to consider Bob's "outcomes" until she observes them (and calculates a correlation), in the same way as it doesn't make sense to talk about the position of a particle until it is observed.
If you use the quantity (Bob's outcome, or the position of a particle) without having observed it, it leads you to bizarre results, and I think that's simply what is happening here.
In many cases, you can get away with that (for instance if the particle can be considered classical, you can talk about its position without punishment), but in certain cases (double slit experiment) you get paradoxical situations.
In the same way, talking about the "result a remote observer had" before observing it yourself is something you can get away with most of the time, but sometimes you get paradoxical results (EPR).
This view is of course inspired by MWI, because, from Alice's point of view Bob didn't get one single outcome: he went into a superposition of states depending on the outcome (so talking about his outcome doesn't make sense yet). It is only upon interaction with Alice that a specific outcome state for Bob is chosen. But at that point, the hypotheses that go into Bell's inequality don't make sense anymore because information from both sides IS present.

So in a way, EPR is yet another example of a paradoxical result one can obtain when one talks about quantities that do not (yet) have an existence; in this case, Bob's results before Alice saw them.

cheers,
Patrick.
 
  • #28
daytripper said:
It seems that there is a difference of opinion as to what is actually happening. Is this something I have to get my own opinion on through research or, to put it bluntly, is one of you wrong?

I think it is fine to have different opinions, as long as you understand the other's opinion. The factual information, however, shouldn't be a matter of opinion, and the factual information is that there is a strong indication that the quantum predictions are right. I say "strong indication" because (as is often underlined by local realists) in all cases, some experimental corrections are needed before the results come out.

I have been in favor of MWI, but I also know that there are other explanations. For instance in Bohmian mechanics, there is an explicit non-local interaction (the quantum potential). So there the issue is solved: locality is gone (from the start), and apart from that, the universe is (almost) classical. The predictions (at least in non-relativistic QM) are equivalent to standard quantum theory. So this is a clear settling of the issue (and when you look at Bohm's theory, you wouldn't even consider Bell's inequalities: the non-locality is so evident that it is clear that they will be violated).

I just wanted to point out that this is _not the only solution to the riddle_ and that locality can be saved at the expense, I agree, of some weirdness.

I could even say that if you let go of locality, then Bohm's mechanics is really the solution to all of your problems :-) The non-local mechanism is clear (it is the quantum potential).

The most ambiguous view is Copenhagen, with some quantum/classical transition. If you stick to it, you're in deep sh**! And that's what most people then have: they switched too early from quantum to classical behavior, and then they find themselves with "impossible" classical results: correlations without a mechanism !

cheers,
Patrick.
 
  • #29
vanesch said:
Ah, I see :-)

But now you have another difficulty. I have a vague "deja vu" feeling, when I've been through this with a person whose nickname was nightlight.
Ok, you're absolutely right that EPR experiments, in a completely classical wave setting, are explainable using Maxwell's theory and detectors whose triggering is proportional to the incident intensity.
But,...

Now you have a serious problem with the "photon" concept! Because if what you write is correct, then a single-photon state, incident on a beamsplitter, and detected by two photodetectors on the two arms, should then click in perfect coincidence, no ? (it is as in your case, but with theta= 0 degrees)
After all the intensities on both detectors are identical (the beamsplitter splits the intensity 50-50).
So, do you expect coincidence or not ?


cheers,
Patrick.

I'll get to this question below, but first:

I don't think there's any conflict with the
photon concept in the way I've learned to
look at the EPR/Bell experiments.

The photon is associated with a detection
event, which is associated with an emission
event. A detection event is a detection event.
It's not .5 a detection or 1.5 a detection -- it's
1 detection. Photons are, by definition, indivisible.

The emission models have been built from -- at
least in part, and certainly in the sense that
they must accord with -- the experimental
*results* (the detections).

What's happening in between (which is what
we're talking about) is anybody's guess. :-)

The way I learned about photons doesn't
*require* that I think of them as indivisible
wave trains, or energy packets, or point
particles when I'm thinking about light
in terms that I want to correspond to what's
physically happening prior to detection.

So, I don't have a problem with the photon
concept -- just looking for deeper (ie., real
physical) explanations for some experimental
results.

The result of the beamsplitter situation you describe
is certainly resistant to explanation using the wave
picture.

However, even the best beam splitters
don't produce an exact 50-50 split of the beam.
This is demonstrated experimentally. But the
difference is small, and in conjunction with PMT
calibration considerations seems not adequate to
explain the result of detection at one arm or the
other, but never both, per single
emission/detection interval.

Regarding your question, I would have to know
exactly the type of beamsplitter and detector
and calibrations to say what I would expect.
I already know the results of some of these
sorts of experiments. But, like the dot-by-dot
interference results, I have so far no satisfactory
way to explain why, if there is some wave activity
incident on locations where nothing is detected,
it isn't detected there.

The argument for the notion that the light wave(s)
itself (which correspond to photon detection event)
is *indivisible* is, I think, based on that question
pointing to the inadequacy of the wave picture
in some situations.

However, I think that we *have* successfully
refuted at least one set of arguments for the
existence of superluminal causal connections.

Whether or not these do exist in some medium that's
inaccessible to us remains an open question -- just
maybe not a necessary one at the moment.

The question of whether or not light is propagating
and interacting with other media, as, fundamentally,
*indivisible* wavetrains or bundles seems like a
better one.
 
  • #30
DrChinese said:
That is certainly an unconventional description of the situation. Since the results change upon the "first" detection, as you also point out in other posts, and that "causes" the results at the other detector to immediately change, you are saying that the results ARE dependent on space-like separated observer settings. That is the opposite of a LHV interpretation.

Yes, of course the joint results depend on the joint
settings. But, that doesn't mean that what happens
at one end is affecting what happens at the other.

I didn't say that the results change upon first
detection. The probability of detection changes,
because the result was that a detection occurred.
This initiates a coincidence interval. And, during
this interval, the probability of detection at the
other end is different than it was prior to the
detection that initiated the interval.

What I offered was a local interpretation in that
it requires no superluminal effects, no causal
connection between Alice and Bob to understand
why the probability of coincidental detection is
cos^2 theta.

You're right that it isn't, strictly speaking, a hidden
*variable* description. This is because the *variability*
of the global parameter isn't relevant to the
variability of coincidental detection. (The variability
of this parameter is, however, relevant to the variability
of individual detection. As Bell pointed out, if you
augmented qm with this value, then it would certainly
improve the precision of individual result predictions.
Such a formulation, for individual results, is not in conflict
with qm.)

The description I offered is certainly not unconventional.
What's unconventional is saying that spacelike separated
events are causally affecting each other, that what Alice
does has some influence on what Bob does via some sort
of superluminal 'transmission' or whatever. There's simply
no good reason to adopt that belief ... yet. :-)
 
  • #31
DrChinese said:
The "consensus" within the physics community is that either there are a) no hidden variables; or b) there are non-local ("spooky action at a distance") effects. There are also c) variant interpretations such as Many Worlds (MWI) that try to solve the paradox with other assumptions (as Vanesch himself has recently pointed out in a very nice original paper).

The main thing is to discard the "naive" view that the situation results from some lack of knowledge - even though there is unquestionably much more to learn. After all, a), b) and c) are all quite different!
I disagree with you here.

a) Hidden variables.

The *existence* of hidden variables is not in question. Bell's
point is that the addition of a *variable* global parameter will
not only not enhance qm predictions wrt joint detection, it
will give different predictions for some settings. That qm is
correct is confirmed by experiment.

Nobody knows how the emitted light is behaving prior
to detection. This is the hidden variable(s) -- and as
long as it's behaving pretty much the same at both
ends during any given coincidence interval (which
is the condition that the applicability of the cos^2 theta
formula depends on), this hidden variable(s) is *irrelevant*
to the determination of coincidental detection.

b) non-local ("spooky action at a distance") effects

This is an unnecessary option (given an understanding
of Bell's analysis, optics, and the probability calculus).

I suspect that if one suggests this as a serious possibility
to working physicists one will get a non-serious reply in
most cases.

c) variant interpretations such as MWI

I think that most physicists would put this
sort of stuff in the "not even wrong" category.
______

Regarding the naive view, if the "situation" you're referring
to is the *debate* about the meaning of experimental
violations of Bell inequalities, then "lack of knowledge"
would certainly seem to have something to do with it.

If the "situation" you're referring to is the data
produced in the experiments, then there is enough
knowledge to explain this via local transmissions.

The inference of nonlocal effects via violations of
Bell inequalities rests primarily on the assumption that
the general lhv formulation proposed by Bell is the
*only* way to formulate a local description of the
probabilities in the joint-detection context. The
problem is that this *general* lhv formulation is
flawed (ie., inapplicable) -- for reasons that I've
pointed out in other posts in this thread.

So, imho, what should be discarded are MWI, Bohmian
mechanics, and visions of events in New York instantaneously
affecting events in, say, Los Angeles -- even though events
happening at more or less the same time (8pm Eastern/5pm Pacific)
in each place might well be correlated wrt some context or
some phenomenon (like, say, a giant storm system covering
the entire continental United States). :)
 
  • #32
Sherlock said:
The photon is associated with a detection
event, which is associated with an emission
event. A detection event is a detection event.
It's not .5 a detection or 1.5 a detection -- it's
1 detection. Photons are, by definition, indivisible.

You are not, by any coincidence, the schizophrenic alter-ego of nightlight, are you? :-)

What you describe is the so-called semi-classical model: we quantise "matter" but we treat the EM field as classical (Maxwell). It is true that many properties of light-matter interaction can be dealt with appropriately in this semiclassical model, and it is true that the "star" phenomena usually invoked to point to the existence of "photons" (the photo-electric effect, the Compton effect) are in fact also explainable in this semiclassical view, in that the *apparent* lumpiness of the EM interaction is due to the quantisation of matter, and not so much due to the quantisation of the EM field itself.
But the quantum field view assigns a real existence to photons themselves, independently of their detection, and there are situations such as the anti-correlation detections which are not explainable in the frame of a semi-classical model, but follow quite nicely from a full quantum-field theoretic treatment.


The way I learned about photons doesn't
*require* that I think of them as indivisible
wave trains, or energy packets, or point
particles when I'm thinking about light
in terms that I want to correspond to what's
physically happening prior to detection.

Photons are not simply "point particles" or "wave trains": they have in fact no genuine existence in a classical field theory like Maxwell's. The quantum state of the EM field cannot, in most cases, be fully described by a classical field E(x,y,z) and B(x,y,z), and photons are specific quantum states of the field.
All imaging of photons as wave trains or classical point particles will, at a certain point, lead to paradoxes.


However, even the best beam splitters
don't produce an exact 50-50 split of the beam.
This is demonstrated experimentally. But the
difference is small, and in conjunction with PMT
calibration considerations seems not adequate to
explain the result of detection at one arm or the
other, but never both, per single
emission/detection interval.

Regarding your question, I would have to know
exactly the type of beamsplitter and detector
and calibrations to say what I would expect.
I already know the results of some of these
sorts of experiments. But, like the dot-by-dot
interference results, I have so far no satisfactory
way to explain why, if there is some wave activity
incident on locations where nothing is detected,
it isn't detected there.

Ha, but that's exactly where a true QED photon description differs from any Maxwellian picture: the point is that the initial state (an incoming single-photon state on a beam splitter) evolves into a quantum state which is a superposition of two quantum states you CAN measure. You CAN measure a photon in the reflection arm, and you can measure a photon in the transmission arm. So the state after the beam splitter is a superposition of your "measurement states" of the quantum EM field (reflection, transmission), and the typical quantum measurement procedure takes place: with a certain probability you detect the first term (photon reflected), and with another probability you detect the second term (photon transmitted). However, you cannot detect both of them, because that would correspond not to a superposition, but to a product state (a 2-photon state).

The "activity" at both detectors is only there in a Maxwellian picture. In the QED picture, there is a quantum mechanical superposition of "activity at T detector" and "activity at the R detector". As our measurement makes us apply the Born rule in this basis, nature has to choose, event by event, which of both eigenstates will be realized, following the Born rule.

The point usually made by people who want to stick to the semiclassical model is: I need a detailed description of the detector, the beam splitter etc...
I think that this is missing the point, and even "trying to confuse the enemy" :-)
The reason is this: if QED makes the CORRECT predictions without this detailed knowledge, that means that this detailed knowledge doesn't matter. For instance, you say that the beamsplitter is never exactly 50-50. Granted. So say that it can vary between 40-60 and 60-40. What does this change? QED doesn't need these details to give you a gross outcome which is verified. I didn't ask you if you expected 13.6% or 18.5% correlation. I asked if you expected about 100% correlation (after taking finite efficiencies into account), or about 0% correlation. This gross estimate should be independent of the details of the beamsplitter or the detector and its calibration, because QED can tell you this result in an "ideal" situation: 0% correlation. In an ACTUAL experiment (and some have been conducted), you don't find exactly 0% of course: you find something like 0.4%. But you don't find something like 85%.
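
To make the contrast concrete, here is a toy Monte Carlo (Python) of the two pictures; the efficiency is a made-up number, and only the qualitative difference in the normalized coincidence rate matters:

import numpy as np

rng = np.random.default_rng(1)
n, eta = 1_000_000, 0.1     # trials; assumed (made-up) detector efficiency

# Single-photon picture: in each trial the photon ends up in ONE arm, never both.
path_T = rng.random(n) < 0.5
t_q = path_T & (rng.random(n) < eta)
r_q = ~path_T & (rng.random(n) < eta)

# Classical-intensity picture: each detector fires independently, with a
# probability proportional to the (halved) incident intensity.
t_c = rng.random(n) < 0.5 * eta
r_c = rng.random(n) < 0.5 * eta

def g2(t, r):
    # normalized coincidence rate: P(T and R) / (P(T) * P(R))
    return (t & r).mean() / (t.mean() * r.mean())

print("single photon :", round(g2(t_q, r_q), 3))   # ~ 0: essentially no coincidences
print("classical wave:", round(g2(t_c, r_c), 3))   # ~ 1: coincidences at the accidental rate

This is essentially the kind of anti-correlation measurement reported in the Thorn et al. paper linked near the top of the thread.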

The argument for the notion that the light wave(s)
itself (which correspond to photon detection event)
is *indivisible* is, I think, based on that question
pointing to the inadequacy of the wave picture
in some situations.

Well, the photon picture as given by QED is a bit more involved than "indivisible" light waves :-)

However, I think that we *have* successfully
refuted at least one set of arguments for the
existence of superluminal causal connections.

The problem is that you need a picture which is globally explaining results. You cannot switch to a Maxwellian picture which give you classical intensities to explain EPR results, and then switch to a particle view to explain anti-correlations. You have to explain both at once, with the same picture. QED can do that, but the price to pay is that you have to accept a full quantum view of the EM field. Once you do that, you cannot use the intensity explanation of classical fields anymore to explain the EPR correlations, in the sense that the photon passes, or doesn't pass, the polarizer, and not that its intensity passes "a bit".

cheers,
Patrick.
 
  • #33
Sherlock said:
I didn't say that the results change upon first
detection. The probability of detection changes,
because the result was that a detection occurred.
This initiates a coincidence interval. And, during
this interval, the probability of detection at the
other end is different than it was prior to the
detection that initiated the interval.

I would say that that is the usual definition of conditional probability :-) I think that this is not the resolution of the EPR riddle.


What I offered was a local interpretation in that
it requires no superluminal effects, no causal
connection between Alice and Bob to understand
why the probability of coincidental detection is
cos^2 theta.

Yes, but exactly the same classical intensity explanation DOES NOT WORK for anti-coincidence experiments.

cheers,
Patrick.
 
  • #34
Sherlock said:
This is an unnecessary option (given an understanding
of Bell's analysis, optics, and the probability calculus).

Let's not forget that optics is not necessary. If you take quantum mechanics for granted, you get exactly the same Bell violations with electrons. It is only that the experiments are easier to carry out with light than with electrons.

c) variant interpretations such as MWI

I think that most physicists would put this
sort of stuff in the "not even wrong" category.

As I pointed out before, this is a misconception. It would mean that all people working on subjects like quantum gravity, string theory or decoherence are working in the "not even wrong" category.

You cannot seriously calculate the quantum states (and its entropy) of a black hole without taking the superposition principle seriously on the scale of several solar masses. Also all the work on decoherence only makes sense in a MWI setting.

The reason to prefer MWI is not that it is somehow fancy or that the mystery part of it has some strange attraction. MWI is the natural consequence of two principles:
- the quantum-mechanical superposition principle
- locality (in the relativistic sense)

These two principles have been the major guiding ideas in the development of all of current modern physics, and at no single instant have they had consequences which were explicitly contradicted by experiment. Each time the superposition principle could be tested, it won: from the exchange terms in molecular spectroscopy, over phonons and other collective quantum phenomena in solids, to Bose-Einstein condensates and all that. It has never been shown to fail.
In the same way, Lorentz invariance and its associated requirement of locality has been a major guiding principle which has withstood many experimental challenges. The price to pay was a major revision of our notion of time, which could have been classified in the "not even wrong" category too if intuition were the only judge.
At this point, there is absolutely no indication that we should limit the applicability of either the superposition principle or locality. And when you take these two "good soldiers" seriously all the way, you have no choice but to end up in an MWI view.
That doesn't mean that tomorrow we will not find the limits of applicability of these principles - gravity might be such a limit, although current indications go in the opposite direction. But as of now, they are to be considered universally valid, because they have never undergone any experimental contradiction whatsoever. You are of course allowed to have personal preferences - based upon intuition - that make you dislike the apparent weirdness of MWI. However, you cannot say that MWI is "not even wrong". Remember that the weirdness of MWI is only on a philosophical level: concerning hard predictions of observation, it is in FULL agreement with all that has ever been observed, on the same level as Copenhagen QM or Bohmian mechanics. It does so, however, without violating the two basic principles which led to the rest of the theory in the first place, a claim which Copenhagen or Bohmian mechanics cannot make.

cheers,
Patrick.
 
  • #35
vanesch said:
You are not, by any coincidence, the schizophrenic alter-ego of nightlight, are you? :-)

I'm not nightlight, no ... or schizophrenic,
as far as I know. :) It was an interesting
discussion you had there. Lots of messages.
I still haven't read most of them.

I think that one could construct a semi-classical
model to account for the anti-correlation (beamsplitter)
situations. Not sure if *I* can do it. It would be an
interesting exercise.

About these photon fields that exist independent of
detection -- how do we know that they exist?

I don't think of photons using the images you (I) mentioned.

If you think of light in terms of photons, what sort of
imagery do you associate with this? Is your imagery wrt photons
strictly mathematical/symbolic? Or, do photons correspond
to some 'natural' physical form or phenomenon, and, if so, what?
That is, just what sort of picture do you get from
the quantum theoretical picture of photons and
quantized EM fields.

For me, the quantum theoretical 'picture' is devoid of any
sort of real imagery (that is, imagery analogous to my sensory
experience of natural phenomena). When I'm (trying) to do a
calculation using quantum theory, I'm not thinking in pictures.
But, I *want* to think in pictures wrt this stuff. :) And, I
don't see any reason why that should be absolutely
impossible. Well, for the foreseeable future it's
impossible wrt some experiments. :)

The problem is that you need a picture which is globally explaining results. You cannot switch to a Maxwellian picture which give you classical intensities to explain EPR results, and then switch to a particle view to explain anti-correlations.

Why not? People do this all the time. In the case of the EPR
results it gives the simplest explanation, requiring no exotic
natural phenomenon.

You have to explain both at once, with the same picture. QED can do that, but the price to pay is that you have to accept a full quantum view of the EM field. Once you do that, you cannot use the intensity explanation of classical fields anymore to explain the EPR correlations, in the sense that the photon passes, or doesn't pass, the polarizer, and not that its intensity passes "a bit".

You're thinking of the qm ability to reproduce
the results of measurements as an explanation.
But, what actual understanding does it provide?
Isn't this the conundrum that is quantum theory
itself? Physical details of the processes that
produce the results of the beamsplitter and
double-slit experiments with photons aren't provided
by the theory. In that sense it is, following
Einstein's appraisal, an incomplete description
of the physical reality. Bohr said it is impossible
to visualize what's happening at the quantum level.
I'm not so sure that Bohr was correct about that.

It's true that if you characterize photons as indivisible
particles of light, then you can, after a fashion, 'account'
for the results of experiments. But, I wouldn't call these
accounts explanations in the usual way that we use the word
explanation.

Resorting to this characterization goes to the foundation
of quantum theory. No imagery, no real-world 3D details
of pre-detection behavior of the light. There are good
reasons for this approach of course. Just experimental
results. Quantum statistics. A consistent mathematical
structure. A method for calculating the probable results
of any experimental setup, and an associated abstract
'picture' that gives little insight into the actual
behavior of the 'phenomena' in question.
(There are no half-photons because there are no
half-detections.)

Anyhow, since I see no reason why the behavior of
waves in undetectable media should necessarily
be fundamentally different than the behavior of waves
in detectable media, I use the wave analogy when possible
and speculate about the details. I think that this
approach will eventually provide a better understanding
of nature than the more strictly instrumental approach
that characterizes quantum theory. I also think that
quantum theory will be around pretty much forever. It
does, after all, 'work'. In fact, I don't see how
you could *possibly* get incorrect predictions if you
use it correctly. There does seem to be something not
entirely arbitrary about the idea of a fundamental
quantum of action. Light might well be quantized
independent of detection -- I just don't know exactly
what that might mean in physically descriptive
terms.

To offer as an 'explanation' for the results
of the beamsplitter experiments that the emitted
light exists in a superposition of photon states
and that mother nature 'chooses' (quite randomly,
with equal passion for both detectors) which path
will be taken by all of the light emitted in a single
emission/detection interval ... well, forgive me
if I don't find that a compelling 'description' of
what's happening at the quantum level of interactions. :)

By the way, what does "Warnings" mean where it
says "View so-and-so's Warnings"?
 
  • #36
vanesch said:
I would say that that is the usual definition of
conditional probability :-) I think that this is not
the resolution of the EPR riddle.

It is if you follow the logic of my interpretation,
and realize that in the usual lhv formulation -- the one
that is incompatible with some qm predictions --
the probabilities are not calculated conditionally.
But they should be -- and in the simplest
descriptive approach the joint results are seen to
be both locally produced and following standard
classical optics theory. This renders unnecessary
any other description or interpretive explanation
for these types of experiments.

vanesch said:
Yes, but exactly the same classical intensity
explanation DOES NOT WORK for anti-coincidence experiments.

Not so far. :) And anyway, so what? The classical
intensity argument didn't work wrt the EPR/Bell
stuff either ... until the problem with the probabilistic
picture assumed by Bell-inspired lhv formulations became
clear.

And let's be clear here. In the joint EPR/Bell
context the variability of the supplementary global
parameter isn't a factor. So, the qm formulation
is, as usual, as quantitatively complete as it needs
to be without it, and can be interpreted as a
local description in concert with the classical
optics stuff (which gives a more *visualizably*
descriptive interpretation of what's happening).
 
  • #37
vanesch said:
Ok, I can agree with that :approve:

The only thing experimental evidence suggests strongly is that the individual probabilities, and the correlations, as calculated by quantum theory (according to your favorite scheme, they all give the same result of course) are strongly supported, and that this implies that some inequalities a la Bell are violated.

As such the total set of hypotheses that were used (locality, reality, independence of probabilities at remote places...) is falsified. But it is an error to jump directly to the throat of locality. This is a possibility, but it doesn't follow from any evidence. Just as the denial of hidden variables is a possibility, but not necessary.

And I can agree with that (well said as always...) ! In fact, I personally like keeping the locality assumption.
 
  • #38
Sherlock said:
Regarding the naive view, if the "situation" you're referring
to is the *debate* about the meaning of experimental
violations of Bell inequalities, then "lack of knowledge"
would certainly seem to have something to do with it.

If the "situation" you're referring to is the data
produced in the experiments, then there is enough
knowledge to explain this via local transmissions.

The inference of nonlocal effects via violations of
Bell inequalities rests primarily on the assumption that
the general lhv formulation proposed by Bell is the
*only* way to formulate a local description of the
probabilities in the joint-detection context. The
problem is that this *general* lhv formulation is
flawed (ie., inapplicable) -- for reasons that I've
pointed out in other posts in this thread.

The HV assumption made by Bell is not flawed in any respect. If you can describe a situation in which the determinate existence of a hypothetical third polarizer setting yields results compatible with experiment, please say so.

In the words of Bell: "It follows that c is another unit vector [in addition to a and b] ...". Therefore, there are 8 possible outcome combinations (+/- at each of a, b and c) that must total to 100% (total probability = 1). It is a fact that 2 of the 8 cases have a negative probability when the angles have certain settings (such as a=0, b=67.5, c=45 degrees).

If you are not addressing this, then you are ignoring Bell entirely. Which is your right, but it is the essence of the issue.
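
For concreteness, here is one way to see where that negative number comes from (a sketch of the arithmetic, not a quote from Bell; it uses the convention in this thread that the chance of matching outcomes at settings x and y is cos^2(x-y)). Write p_1 ... p_8 for the probabilities of the eight cases (+++), (++-), (+-+), (+--), (-++), (-+-), (--+), (---) at a, b, c. The matching pairs then give

M_{ab} = p_1 + p_2 + p_7 + p_8 = \cos^2(a-b)
M_{ac} = p_1 + p_3 + p_6 + p_8 = \cos^2(a-c)
M_{bc} = p_1 + p_4 + p_5 + p_8 = \cos^2(b-c)

Adding the three and using \sum_i p_i = 1 gives p_1 + p_8 = \tfrac{1}{2}(M_{ab} + M_{ac} + M_{bc} - 1), hence

p_2 + p_7 = M_{ab} - (p_1 + p_8) = \tfrac{1}{2}\left(1 + \cos^2(a-b) - \cos^2(a-c) - \cos^2(b-c)\right).

For a=0, b=67.5, c=45 degrees this is \tfrac{1}{2}(1 + 0.146 - 0.5 - 0.854) \approx -0.104, so the (++-) and (--+) cases cannot both be assigned non-negative probabilities.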
 
  • #39
vanesch said:
Let's not forget that optics is not necessary. If you take quantum mechanics for granted, you get exactly the same Bell violations with electrons. It is only that the experiments are easier to carry out with light than with electrons.

If you're doing an optics experiment, and you can
use classical optics to account for the results, then
why wouldn't you want to do that?

Personally, I (apparently unlike you) try *not* to
think about quantum theory as much as possible. :)

About my guess that most physicists would
characterize MWI as "not even wrong", you wrote:

vanesch said:
As I pointed out before, this is a misconception. It would mean that all people working on subjects like quantum gravity, string theory or decoherence are working in the "not even wrong" category.

These are clearly fictional accounts of the real world.
Some fictions are useful. Some fictions become
'necessary' in the continued absence of a more
realistic account. But, MWI or CI or Bohmian mechanics
simply aren't necessary. Excess baggage.

Now, about quantum gravity, string theory, and
decoherence. I don't understand any of that stuff.
Although, I think it's safe to say that my peanut butter
sandwich is rapidly decohering. Anyhow, let's say
you've calculated the quantum states of a black
hole. Then what? Is there any way to ascertain
whether or not you were right? Suppose there is,
and you find that your calculations are correct.
Will you really have a better *understanding* of
what a black hole actually is. I don't think so, but
of course I could be quite wrong. General Relativity
seems like a *very* simplistic account of the
complex wave interactions that produce observable
gravitational behavior. So, to try to marry it with quantum
theory (a decidedly non-descriptive 'description' of
reality) seems ill-advised from the start.

Has decoherence really helped us understand anything
better than we did without it?

Some of the (presumably) smartest people in the
world have been working on string theory ever since
someone noticed some interesting mathematical
connections back in, what, the 60's? Suppose, that
they eventually (in our lifetimes) get it to consistently
tie everything together. So what? Is anybody going
to actually use it to describe their experiments? Or
will it be useful only for metaphysical speculation?
Or, will it not be used much at all by anybody, but just
be nice that everything finally got 'unified'.

vanesch said:
The reason to prefer MWI is not that it is somehow fancy or that the mystery part of it has some strange attraction. MWI is the natural consequence of two principles:
- the quantum-mechanical superposition principle
- locality (in the relativistic sense)

Natural consequence? Ok. Necessary? No. There must be
some good reasons why the physics community isn't too
excited about MWI.

vanesch said:
In the same way, Lorentz invariance and its associated requirement of locality has been a major guiding principle which did withstand many experimental challenges. The price to pay was a major revision of our notion of time, which could have been classified in the "not even wrong" category too if intuition was the only judge.

I don't know. I think I understand special relativity, but it
hasn't altered my basic intuitive notion of time. The physical
basis of the Lorentz-Fitzgerald contraction is another thing
altogether.

vanesch said:
At this point, there is absolutely no indication that we should limit the applicability of either the superposition principle or of locality. And when you take these two "good soldiers" seriously all the way, you have no choice but to end up in an MWI view.

I don't think of locality in terms of applicability. It's the way
the world is, until demonstrated otherwise.

As for the superposition principle, whether it's a
frequency distribution of possible experimental results, or
of events in some other medium, it applies to waves.

The choice to not adopt the MWI view, is simply the choice
to not adopt extraneous symbolic baggage in trying to
understand things.

By the way, I hope you don't mind me being sort
of the devil's advocate wrt some of your statements.
I'm sure you know *much* more about these things
than I probably ever will. I've already learned much
from your (and others) postings and it's been very
entertaining. So, it is in a spirit of gratitude,
a genuine fascination with the physical world, and
an intense desire to keep things as simple as
possible :) that my replies are submitted.
 
  • #40
Sherlock said:
If you're doing an optics experiment, and you can
use classical optics to account for the results, then
why wouldn't you want to do that?

That's sort of funny, you know. Application of the classical-optics formula cos^2(theta) is incompatible with hidden variables but consistent with experiment.
 
  • #41
Well, you guys have flown over my head. Haha. I'll stick with "either a non-local causal relationship or taking the same measurements of the same light at the same time"
 
  • #42
Sherlock said:
If you're doing an optics experiment, and you can
use classical optics to account for the results, then
why wouldn't you want to do that?

Well, because of some view that there should be an underlying unity to physics. You're not required to subscribe to that view, but I'd say that physics then loses a lot of interest - that's of course just my opinion. The idea is that there ARE universal laws of nature. Maybe that's simply not true. Maybe nature follows totally different laws from case to case. But then physics reduces to a catalog of experiments, without any guidance. A bit like biology before the advent of its molecular understanding.
I think that the working hypothesis that there ARE universal laws has not yet been falsified. Within that frame, you'd think that ONE AND THE SAME theory must account for all experimental observations concerning optics. We have such a theory, and it is called QED. Of course we had older theories, like Maxwell's theory and even the corpuscular theory ; and QED shows us IN WHAT CIRCUMSTANCES these older theories are good approximations ; and in what circumstances we will get deviations from their predictions.
It just turns out that in EPR type experiments you are in fact NOT in a regime where you can use Maxwell's theory because it is exactly the same regime in which you have the anti-coincidence counts. In one case however, Maxwell gives you (I'd say, by accident) an answer which corresponds to the QED prediction, in the other case, it is completely off.

But when you analyse EPR experiments in more detail, you can see that Maxwell DOES NOT give you ALL correct answers:

If you have the situation:
Code:
-------> (PDC)  -------> PBS(alice) --> A+
            |                      |
            |                      0---------> A-
            |
            |
            X---- > PBS(Bob) --> B+
                             |
                             |
                             O------->B-
Then Maxwell will give you the right cos^2(theta) correlation between A+ and B+, as some people point out, but Maxwell will NOT give you the correct correlations between (A+ OR A-) AND (B+) AND (B-), which are ANTI-correlations.

(A+ OR A-) works here as the "trigger" (one photon seen in Alice's arm), while the other photon can only be detected at B+ OR at B- but not simultaneously. This is a complicated version of the Thorn experiment.
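
To make this concrete, here is a toy Monte Carlo of the heralded anti-coincidence test (my own illustration in Python, NOT the actual Thorn et al. analysis; the detector efficiency and the intensity fluctuations are invented numbers). It compares the anti-correlation parameter alpha = P(B+ and B- | trigger) / [P(B+|trigger) P(B-|trigger)] for a classical-intensity model of Bob's beam and for a one-photon-per-herald model:

Code:
import numpy as np

rng = np.random.default_rng(0)
trials, eta = 200_000, 0.05          # heralded trials and detector efficiency (made-up numbers)

# classical-intensity picture: the split field can fire BOTH of Bob's detectors
I = rng.exponential(1.0, trials)                 # fluctuating pulse intensity (arbitrary units)
p_plus, p_minus = eta * I * 0.5, eta * I * 0.5   # half of the intensity to each PBS port
hit_plus  = rng.random(trials) < p_plus
hit_minus = rng.random(trials) < p_minus
alpha_classical = (hit_plus & hit_minus).mean() / (hit_plus.mean() * hit_minus.mean())

# single-photon picture: the heralded photon ends up in ONE port only
port = rng.random(trials) < 0.5                  # True -> goes to B+, False -> goes to B-
hit_plus  = port  & (rng.random(trials) < eta)
hit_minus = ~port & (rng.random(trials) < eta)
alpha_photon = (hit_plus & hit_minus).mean() / (hit_plus.mean() * hit_minus.mean())

print(alpha_classical)   # about 2 here; any classical intensity model gives alpha >= 1
print(alpha_photon)      # exactly 0 in this toy model: the photon never fires both ports

Of course this only illustrates the logic of the test; in the real experiment the small residual coincidences come from accidentals and double-pair emission.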

Personally, I (apparently unlike you) try *not* to
think about quantum theory as much as possible. :)

Well, I still believe in the working hypothesis of a "unity of physics" in that there is a single set of universal laws that nature should obey. All the rest is, proverbially, "stamp collecting" :-)
As such, quantum theory is the basis for ALL of physics (except for GR, that's the big riddle).

These are clearly fictional accounts of the real world.
Some fictions are useful. Some fictions become
'necessary' in the continued absence of a more
realistic account.

They are the natural consequence of a belief in a "unity of physics".

But, MWI or CI or Bohmian mechanics
simply aren't necessary. Excess baggage.

Well, for me the essence of physics is the identification of an objective world with the Platonic world (the mathematical objects), in such a way that the subjectively observed world corresponds to what you can deduce from those mathematical objects. MWI, CI and Bohmian mechanics are different mappings between an objective world and the Platonic world; only they all finally lead to the same subjectively observed phenomena. Now if physics were "finished", it would be a matter of taste which one you pick out. But somehow you have to choose, I think.
However, physics is not finished yet. So this choice of mapping can be more or less inspiring for new ideas.


Suppose there is,
and you find that your calculations are correct.
Will you really have a better *understanding* of
what a black hole actually is.

I think that the perfect understanding is a fully coherent mapping between a postulated objective world and the platonic world of mathematical objects, in such a way that all of our subjective observations are in agreement with that mapping. There may be more than one way of doing this. I am still of the opinion that there exists at least one way.
Apart from basing the meaning of "explanation" on intuition (and we should know by now that that is not a reliable thing to do), I don't know what else it can mean to "explain" something.

General Relativity
seems like a *very* simplistic account of the
complex wave interactions that produce observable
gravitational behavior. So, to try marry it with quantum
theory (a decidedly non-descriptive 'description' of
reality) seems ill-advised from the start.

Well, this is a remark I never understood. If you have a theory which makes unambiguous, correct predictions of experiments, then in what way is there still something not "understood" ? I can understand the opposite argument: discrepancies between a theory's prediction and an experimental result can point to a more complex underlying "reality". But if the theory makes the right predictions ? I would then be inclined to think that the theory already possesses ALL the ingredients describing the phenomenon under study, no ?

Has decoherence really helped us understand anything
better than we did without it?

For sure! It resolved an ambiguity in the formulation of quantum theory, namely WHEN to apply the Born rule (the famous Heisenberg cut). After all, the application of the Born rule is somehow left to the judgement of the person studying the phenomenon: he can, or cannot, include more and more "apparatus", apparently complicating the calculations; nevertheless, from a certain point upward, this seems like a useless complication. Decoherence theory tells us why: from the moment you have "irreversible coupling to the environment", putting more stuff from the "observer" part into the "system under study" part doesn't change the final result. This explains why "simplistic" quantum calculations often give accurate results. It is sufficient that the *essential quantum mechanical phenomena* are taken into account in a full quantum calculation, and that we apply the Born rule just after that "level of complexity", continuing with classical calculations upwards; we will then have the same results as if we did very, very complicated fully unitary quantum calculations, including everything, all the atoms of the measurement apparatus and all that.
Without these results of decoherence theory, quantum theory was in fact not usable, unless some ontological status was given to the quantum-classical transition (that's the Copenhagen view). But then that apparently depended on the choice of the scientist to include, or not, certain features of the apparatus in his calculation. Hence the ambiguity. And decoherence theory tells us that it doesn't matter where we make the cut, as long as we make it late enough.
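
As a trivial numerical illustration of that last point (my own toy example, nothing more): whether you apply the Born rule to a qubit directly, or first couple it unitarily to an "apparatus" qubit and apply the Born rule to the apparatus pointer instead, the predicted probabilities are identical - the cut can be moved upward without consequence.

Code:
import numpy as np

a, b = 0.6, 0.8                                   # system amplitudes, |a|^2 + |b|^2 = 1
sys = np.array([a, b], dtype=complex)

# cut placed low: Born rule applied to the system itself
p_direct = np.abs(sys) ** 2

# cut placed higher: first entangle the system with an apparatus qubit (a CNOT),
# then apply the Born rule to the apparatus pointer
cnot = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
joint = cnot @ np.kron(sys, np.array([1, 0], dtype=complex))   # system (x) apparatus in |0>
p_pointer = (np.abs(joint.reshape(2, 2)) ** 2).sum(axis=0)     # marginal over the system

print(p_direct, p_pointer)   # both give [0.36, 0.64]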


Natural consequence? Ok. Necessary? No. There must be
some good reasons why the physics community isn't too
excited about MWI.

Ha, sample bias! You say that the people who are excited by MWI are working on fantasies, and that the "others", whom you seem to identify with the entire community, are not excited by it :-)

Honestly, I think that many physicists have their noses too deep in their actual (interesting) work on a technical level to be concerned about foundational issues. They apply the Born rule, get out predictions, and do measurements. It is only for a minority, working on very fundamental issues, that it really matters in their work which view to hold. And most of those do "get excited about MWI", or at least consider the possibility, or reckon that if they don't like it, they'll have to come up with a *specific mechanism* which denies MWI. As I said, I'm open to that, and the possibility exists that gravity will do exactly that. But if that's true, the work of most string theorists goes into the dustbin.

As for the superposition principle, whether it's a
frequency distribution of possible experimental results, or
of events in some other medium, it applies to waves.

I think that that is a serious misunderstanding of what exactly the superposition principle of quantum theory tells us - but I've seen many people think of it that way. Somehow the superposition principle is immediately associated with "linear partial differential equations" (= waves). It's probably because of the way the material is usually introduced.

However, that's not at all the content of the superposition principle, as I understand it. The superposition principle says that if a physical situation A exists, and a different physical situation B exists, then for every complex number U there automatically exists a distinct physical situation.
We write this in ket notation as |A> + U|B>. And this holds independently of the nature of situation A and situation B.
This is, at first sight, a mind boggling statement and it is the fundamental idea behind quantum theory.
It is exactly what Schroedinger thought it couldn't mean: if situation A is "my cat is dead" and situation B is "my cat chases a bird" then there exists a *new* situation for each complex number U: |my cat is dead> + U |my cat chases a bird>.

The choice to not adopt the MWI view, is simply the choice
to not adopt extraneous symbolic baggage in trying to
understand things.

Well, in my view "understanding", as I explained, is in its purest form a mapping from a postulated objective reality into the world of mathematical objects, and MWI is ONE such mapping. As I want to have *a* mapping, I find the one given by MWI the cleanest.

cheers,
Patrick.
 
  • #43
DrChinese said:
If you can explain a situation in which the determinate existence of a hypothetical third observable polarizer yields results compatible with experiment, please so state.

Not sure what you mean. In the situations we're considering
there is a source of (opposite moving) entangled photons,
two polarizers, one at each end, to filter the emitted light, and
a PMT behind each polarizer to facilitate detection when
a certain amount of light has been transmitted by the polarizer.

The angular difference (theta) between the polarizer settings
determines the probability of joint detection. This probability varies
as cos^2 theta. There's never a negative probability of joint
detection. It goes from 0 (for theta = 90 degrees) to 1 (for
theta = 0 degrees).

DrChinese said:
In the words of Bell: "It follows that c is another unit vector [in addition to a and b] ...". Therefore, there are 8 possible outcomes (permutations on a/b/c) that must total to 100% (total probability=1). It is a fact that 2 cases of the 8 have a negative probability when the angles have certain settings (such as a=0, b=67.5, c=45 degrees).

If you are not addressing this, then you are ignoring Bell entirely. Which is your right, but it is the essence of the issue.

a, b and c are values for theta? What's the problem? I don't
see where you would get any negative probabilities. Or, maybe
a, b and c aren't values for theta? Then, what are they?
Individual settings? Ok, so you get the theta for a set of
joint measurements by combining the individual settings, |a-b|
or |b-c| or |a-c| and so on. I don't see any negative
probabilities coming out of this. I don't understand
what you think is the essence of the Bell issue.

By the way, your web page is cool. I too am a fan of Cream. :)
 
  • #44
Sherlock said:
Not sure what you mean. In the situations we're considering
there is a source of (opposite moving) entangled photons,
two polarizers, one at each end, to filter the emitted light, and
a PMT behind each polarizer to facilitate detection when
a certain amount of light has been transmitted by the polarizer.

The angular difference (theta) between the polarizer settings
determines the probability of joint detection. This probability varies
as cos^2 theta. There's never a negative probability of joint
detection. It goes from 0 (for theta = 90 degrees) to 1 (for
theta = 0 degrees).



a, b and c are values for theta? What's the problem? I don't
see where you would get any negative probabilities. Or, maybe
a, b and c aren't values for theta? Then, what are they?
Individual settings? Ok, so you get the theta for a set of
joint measurements by combining the individual settings, |a-b|
or |b-c| or |a-c| and so on. I don't see any negative
probabilities coming out of this. I don't understand
what you think is the essence of the Bell issue.

By the way, your web page is cool. I too am a fan of Cream. :)

Cream was awesome, by the way!

a, b and c are the hypothetical settings you could have IF local hidden variables existed. This is what Bell's Theorem is all about. The difference between any two is a theta. If there WERE a hidden variable function independent of the observations (called lambda collectively), then the outcome at the third (unobserved) setting exists independently BY DEFINITION and has a non-negative probability.

Bell has nothing to do with explaining coincidences, timing intervals, etc. This is always a red herring with Bell. ALL theories predict coincidences, and most "contender" theories yield predictions quite close to Malus' Law anyway. The fact that there is perfect correlation at a particular theta is NOT evidence of non-local effects and never was. The fact that detections are triggered a certain way is likewise meaningless. The idea that Malus' Law leads to negative probabilities for certain cases is what Bell is about, and that is where his selection of those cases and his inequality come in.

Suppose we set polarizers at a=0 and b=67.5 degrees. For the a+b+ and a-b- cases, we call that correlation. The question is, was there a determinate value IF we could have measured at c=45 degrees? Because IF there was such a determinate value, THEN the a+b+c- and a-b-c+ cases together should have a non-negative likelihood (>=0). Instead, Malus' Law yields a prediction of about -10%. Therefore our assumption of the hypothetical c is wrong if Malus' Law (cos^2) is right.
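
If you want to check that number yourself, the bookkeeping over the eight cases boils down to one line (a little Python snippet of mine, assuming cos^2 of the angle difference for the chance that two settings give matching outcomes):

Code:
import numpy as np

def negative_cases(a, b, c):
    """Sum of the (++-) and (--+) case 'probabilities' that a hidden-variable model
    is forced into when every pair of settings must match with probability cos^2."""
    match = lambda x, y: np.cos(np.radians(x - y)) ** 2
    return 0.5 * (1 + match(a, b) - match(a, c) - match(b, c))

print(negative_cases(0, 67.5, 45))   # about -0.104, i.e. the -10% above
print(negative_cases(0, 20, 80))     # positive for other angle choices: no conflict there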
 
  • #45
I would like to add a text I prepared concerning the MWI view on an EPR experiment.

H = H_alice x H_bob x H_cable x H_sys1 x H_sys2

|psi(t0)> = |alice0>|bob0>|cable0>(|z+>|z-> - |z->|z+>)/sqrt(2)

Remember, |z+> = cos(th) |th+> + sin(th) |th->
|z-> = -sin(th) |th+> + cos(th) |th->


from t0 to t1, Bob measures system 2 along direction th_b:

This means that a time evolution operator U_b acts,
such that:

U_b |bob0> |thb+> -> |bob+> |sys0>
U_b |bob0> |thb-> -> |bob-> |sys0>

U_b acting only on H_bob x H_sys2.

Rewriting psi(t0):

|psi(t0)> = |alice0>|bob0>|cable0>(|z+>(-sin(thb) |thb+> + cos(thb) |thb->) -
|z->( cos(thb) |thb+> + sin(thb) |thb->) )/sqrt(2)

Applying U_b

|psi(t1)> = {- sin(thb)|alice0>|bob+>|cable0>|z+>|sys0>
+ cos(thb) |alice0>|bob->|cable0>|z+>|sys0>
- cos(thb) |alice0>|bob+>|cable0>|z->|sys0>
- sin(thb) |alice0>|bob->|cable0>|z->|sys0>}/sqrt(2)


from t1 to t2, Alice measures system 1 along direction th_a, so we have
an evolution operator U_a which acts:

U_a |alice0> |tha+> -> |alice+>|sys0>
U_a |alice0> |tha-> -> |alice->|sys0>

U_a acts only on H_alice x H_sys1

Rewriting psi(t1):

|psi(t1)> = {- sin(thb)|alice0>|bob+>|cable0>(cos(tha) |tha+> + sin(tha) |tha->)|sys0>
+ cos(thb) |alice0>|bob->|cable0>(cos(tha) |tha+> + sin(tha) |tha->)|sys0>
- cos(thb) |alice0>|bob+>|cable0>(-sin(tha) |tha+> + cos(tha) |tha->)|sys0>
- sin(thb) |alice0>|bob->|cable0>(-sin(tha) |tha+> + cos(tha) |tha->)|sys0>}/sqrt(2)

and applying U_a:

|psi(t2)> = {- sin(thb) cos(tha)|alice+>|bob+>|cable0> |sys0> |sys0>
- sin(thb) sin(tha)|alice->|bob+>|cable0> |sys0> |sys0>
+ cos(thb) cos(tha)|alice+>|bob->|cable0> |sys0> |sys0>
+ cos(thb) sin(tha)|alice->|bob->|cable0> |sys0> |sys0>
+ cos(thb) sin(tha)|alice+>|bob+>|cable0> |sys0> |sys0>
- cos(thb) cos(tha)|alice->|bob+>|cable0> |sys0> |sys0>
+ sin(thb) sin(tha)|alice+>|bob->|cable0> |sys0> |sys0>
- sin(thb) cos(tha)|alice->|bob->|cable0> |sys0> |sys0>}/sqrt(2)

or:

|psi(t2)> = { (-sin(thb) cos(tha) + cos(thb) sin(tha) ) |alice+>|bob+>
+(-sin(thb) sin(tha) - cos(thb) cos(tha) ) |alice->|bob+>
+( cos(thb) cos(tha) + sin(thb) sin(tha) ) |alice+>|bob->
+( cos(thb) sin(tha) - sin(thb) cos(tha) ) |alice->|bob-> } |cable0> |sys0>|sys0> /sqrt(2)

or:

|psi(t2)> = { sin(tha-thb) |alice+> |bob+>
-cos(tha-thb) |alice-> |bob+>
+cos(tha-thb) |alice+> |bob->
+sin(tha-thb) |alice-> |bob-> } |cable0> |sys0>|sys0> /sqrt(2)


We can now play the game of bob sending his message on a cable between t2 and t3.

U_cable-bob leads then to a mapping:

|bob+> |cable0> -> |bob+> |cable+>
|bob-> |cable0> -> |bob-> |cable->

U_cable_bob acts only on the space H_bob x H_cable

The change in state is obvious:

|psi(t3)> = { sin(tha-thb) |alice+> |bob+> |cable+>
-cos(tha-thb) |alice-> |bob+> |cable+>
+cos(tha-thb) |alice+> |bob-> |cable->
+sin(tha-thb) |alice-> |bob-> |cable-> } |sys0>|sys0> /sqrt(2)

Between t3 and t4, the signal propagates on the cable from Bob to Alice.
Note that this can be represented by an evolution operator U_cable, but as
we didn't discriminate between the state of the cable at "bob" and at "alice"
we represent this evolved state by the same symbol |cable+> or |cable->, with
the understanding that the signal is, at t4, available locally at Alice.

Next, from t4 to t5, Alice reads the cable message. So Alice will learn
what Bob measured.

U_alice_cable acts on the space H_alice x H_cable.

It leads to the mapping:

|alice+>|cable+> -> |alice++> |cable0>
|alice+>|cable-> -> |alice+-> |cable0>
|alice->|cable+> -> |alice-+> |cable0>
|alice->|cable-> -> |alice--> |cable0>

(we put the cable state back to 0 ; in fact it doesn't matter what we do there).

So our final state is:

|psi(t5)> = { sin(tha-thb) |alice++> |bob+>
-cos(tha-thb) |alice-+> |bob+>
+cos(tha-thb) |alice+-> |bob->
+sin(tha-thb) |alice--> |bob-> } |cable0> |sys0>|sys0> /sqrt(2)



Let us now look at Alice's possible evolutions:

At t0 and t1, Alice is in the Alice0 state, 100% probability.
From t1 to t2, the state of Alice evolves, and after t2,
Alice has 50% chance to be in the state Alice+ and 50% chance to be in the state Alice-.
This remains so until t4: you can verify that the total squared norm of the vector multiplying
|alice+> remains 1/2.

Between t4 and t5, Alice's state changes.

At t5, we have:
50% chance that Alice was in an alice+ state before, and finally 1/2sin^2(tha-thb) chance
that she ends up in an alice++ state, and 1/2cos^2(tha-thb) chance that she ends up
in an alice+- state (both possibilities do add up to the original 50% chance to be in
alice+ before).

50% chance that Alice was in an alice- state before, and finally 1/2cos^2(tha-thb) chance
that she ends up in an alice-+ state, and 1/2sin^2(tha-thb) chance that she ends up in
an alice-- state.

The chance that she sees an anti-correlation (-+ or +-) is cos^2(tha-thb),
and the chance that she sees a correlation is sin^2(tha-thb).
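
For anyone who wants to check these numbers without redoing the algebra, here is a small numpy sketch I put together. It models each measurement as a standard von Neumann premeasurement (a unitary that flips a pointer qubit), which is not literally the U_a, U_b above (those also reset the measured system to |sys0>), but it yields the same joint probabilities:

Code:
import numpy as np

def measurement_unitary(theta):
    # Unitary on (pointer, system): flip the pointer iff the system is along |theta->.
    # Conventions as above: |theta+> = (cos, sin), |theta-> = (-sin, cos) in the z basis.
    plus  = np.array([np.cos(theta),  np.sin(theta)], dtype=complex)
    minus = np.array([-np.sin(theta), np.cos(theta)], dtype=complex)
    P_plus, P_minus = np.outer(plus, plus.conj()), np.outer(minus, minus.conj())
    I, X = np.eye(2, dtype=complex), np.array([[0, 1], [1, 0]], dtype=complex)
    return np.kron(I, P_plus) + np.kron(X, P_minus)

def apply_two_site(psi, U, sites):
    # Apply a two-qubit unitary to the given tensor factors of psi (shape 2x2x2x2).
    out = np.tensordot(U.reshape(2, 2, 2, 2), psi, axes=([2, 3], sites))
    return np.moveaxis(out, [0, 1], sites)

tha, thb = 0.3, 1.1                    # arbitrary analyzer angles, in radians

# tensor factors: (Alice pointer, Bob pointer, system 1, system 2), everything starts in |0>
up, down = np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)
singlet = (np.multiply.outer(up, down) - np.multiply.outer(down, up)) / np.sqrt(2)
psi = np.multiply.outer(np.multiply.outer(up, up), singlet)     # |alice0>|bob0>(singlet)

psi = apply_two_site(psi, measurement_unitary(thb), [1, 3])     # Bob measures system 2
psi = apply_two_site(psi, measurement_unitary(tha), [0, 2])     # Alice measures system 1

joint = (np.abs(psi) ** 2).sum(axis=(2, 3))    # joint pointer probabilities [[++, +-], [-+, --]]
print(joint)
print(np.sin(tha - thb) ** 2 / 2, np.cos(tha - thb) ** 2 / 2)   # the 1/2 sin^2 and 1/2 cos^2 above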

Note that it is upon reception of the cable signal (which is itself in a superposition)
that Alice's assignment to one of the states makes her decide whether Bob saw a + or a -.

Note that all time evolution operators act "locally", that is, only on those subspaces which are in "local contact".

cheers,
Patrick.
 
  • #46
vanesch said:
The way I see it (even if you do not want to go explicitly in an MWI scheme) is that for Alice, it does not make sense to consider Bob's "outcomes" until she observes them (and calculates a correlation), in the same way as it doesn't make sense to talk about the position of a particle until it is observed.
If you use the quantity (Bob's outcome, or the position of a particle) without having observed it, it leads you to bizarre results, and I think that's simply what is happening here.
In many cases, you can get away with that (for instance if the particle can be considered classical, you can talk about its position without punishment), but in certain cases (double slit experiment) you get paradoxical situations.
In the same way, talking about the "result a remote observer had" before observing it yourself is something you can get away with most of the time, but sometimes you get paradoxical results (EPR).
This view is of course inspired by MWI, because, from Alice's point of view Bob didn't get one single outcome: he went into a superposition of states depending on the outcome (so talking about his outcome doesn't make sense yet). It is only upon interaction with Alice that a specific outcome state for Bob is chosen. But at that point, the hypotheses that go into Bell's inequality don't make sense anymore because information from both sides IS present.

So in a way, EPR is yet another example of a paradoxical result one can obtain when one talks about quantities that do not (yet) have an existence ; in this case, Bob's results before Alice saw them.

cheers,
Patrick.


Hi Patrick!

Now that my semester has ended (gave my final exam yesterday), I can finally go back to enjoying thinking about physics (instead of thinking about how to *explain* physics!).

Your point of view is very interesting. Of course, the question that arises is this. Let's say we carry out an EPR type of experiment, you and I. I choose a certain setting and make my measurement. But we never get together to compare our results. So, how do I experience this? I am still in a linear superposition of quantum states, right? But how would my consciousness experience this?

That's the part that I find difficult to accept. Given that I consider myself and my brain as classical entities, I find it difficult to think that such a large structure could remain in a quantum state until I was able to get in touch with you to compare our measurements. Just because I made an EPR type of measurement. What about any other type of measurement? If I look at the impact of a single photon going through a double slit setup, is my mind also in a linear superposition of the possible outcomes? If not, why would this be different from the case of the EPR measurement?


Let me say again that I find your posts *extremely* interesting and informative. Thanks for taking the time to post!

Pat
 
  • #47
vanesch said:
...

Note that it is upon reception of the cable signal (which is in a superposition)
that it is ALICE'S assignment to one of the states which makes her decide whether Bob saw a + or a -.

Note that all time evolution operators act "locally" that means only on those subspaces which are in "local contact".

cheers,
Patrick.

Patrick,

I like your example, but I still don't really follow the logic here of your MWI application. I assume that there are still no hidden variables, is that correct?

And suppose, assuming we could actually do this... Bob's photon polarization is checked .001 second after emission. The result is sent to Alice. Alice's entangled photon is placed into a coil of fiber optics and left there for a "while", perhaps just going around in circles or something - but not yet measured. She now knows the Bob result and can predict accurately what her photon will do.

So does the statement quoted above about Alice's receiving the cable etc. still apply? Just wondering...
 
  • #48
Hi Pat !

Nice to have you here again.

nrqed said:
So, how do I experience this? I am still in a linear superposition of quantum states, right? But how would my consciousness experience this?

Well, if you believe in quantum mechanics "all the way up", there's no way your body can get out of a superposition, simply because of the linearity of the time evolution operator. That's the essence of any MWI view. The funny thing is that we don't experience this. So whatever the *observer* associated with a body is, it cannot observe the entire quantum description of the body. That's why I postulate that an *observer* (call it your consciousness, but usually I get into a lot of trouble with that :-) is only *associated* with ONE of the states which occur in the Schmidt decomposition, and I simply say that this association happens randomly, according to the Born rule.

There is a part which is personal input, and there is a part which - I think - is common to all relative-state views.
Let me start with what is generally accepted in relative-state views. The most important part is that we say that in the whole universe, all dynamics is ruled by quantum theory, namely by the unitary evolution operator. This is what most people work on. You know I don't know much about string theory, but I think I know that this part is not touched upon even there.
So there is no explicit projection postulate. Just unitary evolution.
Next comes decoherence (which, let us recall, doesn't make sense outside of an MWI view). What decoherence essentially says is the following. Split the entire universe into two parts: "yourstuff" and "theenvironment". Yourstuff contains you, your apparatus, and the system under study.
The Hilbert space of the universe is the tensor product H_yourstuff x H_env. Let us say that we start out in a peculiar state, where you haven't yet interacted with the environment: |psi(t0)> = |you0> x |env0>, but where, within "you0", there is a "superposition" present in one way or another.
After a very short time, due to interactions with the environment, this will evolve into a state which is not a pure product state anymore, |psi(t1)>. The Schmidt decomposition theorem tells us that there is a basis in H_yourstuff, namely |y_n> and a basis in H_env, namely |e_n> such that ANY psi can be written as:

|psi(t1)> = Sum_n a_n |y_n> |e_n>

Of course, |y_n> and |e_n>, as a basis, depend on psi(t1). Decoherence now tells us that the |y_n> are quasi-classical states once everything in "yourstuff" has interacted with the environment. Further evolution will now simply take place in terms of these quasi-classical states (the environment basis and the "yourstuff" basis will remain essentially the same, except for internal time evolution), so the |y_n>|e_n> can be considered stationary states of the overall Hamiltonian... well, not completely, but in such a way that the classicality of y_n is not seriously affected.
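
By the way, the Schmidt decomposition is nothing exotic numerically: it is just a singular value decomposition of the coefficient matrix of the state. A minimal numpy sketch (random state, dimensions picked arbitrarily):

Code:
import numpy as np

rng = np.random.default_rng(1)

# a random pure state of "yourstuff" (dimension 4) entangled with "theenvironment" (dimension 6)
psi = rng.normal(size=(4, 6)) + 1j * rng.normal(size=(4, 6))
psi /= np.linalg.norm(psi)

# |psi> = sum_n a_n |y_n>|e_n>: the |y_n> are the columns of Y, the |e_n> are the rows of Eh
Y, a, Eh = np.linalg.svd(psi, full_matrices=False)

print(np.round(a**2, 4))               # the a_n^2, i.e. the Born weights of the branches
print(np.isclose((a**2).sum(), 1.0))   # they sum to 1
print(np.allclose((Y * a) @ Eh, psi))  # and the decomposition reconstructs the state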

Some people claim that this solves the "appearance of classicality". I think that this has been an important step, but that the problem is not solved. After all, there's a state of my body that appears in EACH of the terms! Zeh acknowledges this: he recognizes that at the "end" he STILL needs to use the Born rule. So decoherence solved the "basis" problem: it showed that the environment induces a natural basis which is a basis made of classical states. But it doesn't tell us why we should pick out one state in the sum, or with what probability we should do this.

MWI people try to find ways to do that last thing, by considering "natural" distributions of observers over their body states. I recently wrote a small paper (quant-ph/0505059) in which I argue that this cannot be done without finally introducing something that is equivalent to the Born rule. So instead of torturing ourselves to find a way, let's just bluntly say that you associate your consciousness with ONE of the states of your body, which appears in a product state with the rest of the universe, using the Born rule. That last part is of course my personal stuff.

That's the part that I find difficult to accept. Given that I consider myself and my brain as classical entities, I find it difficult to think that such large structure could remain in a quantum state until I would be able to be in touch with you to compare our measurements.

Well, "classical", in an MWI view, means essentially: hopelessly entangled with the environment. If you insist on absolute classicality, there's no way out but to introduce a genuine collapse, with all the problems it brings (non-locality, and the arbitrariness of when it happens). Once you are hopelessly entangled with the environment, you can never INTERFERE anymore with your other terms. The reason is this:
If you have |stuff> (|you1> + |you2>), you could think of a local measurement on the "yous" which has eigenstates which mix you1 and you2, say (up to normalization): |youa> = |you1> + |you2> and |youb> = |you1> - |you2>. This would then show you the absence of youb situations, while you1 and you2 individually have a youb component. That's typical "quantum interference".
But once you have |you1> |env1> + |you2> |env2> with |env1> and |env2> essentially orthogonal, there's no way you can do this anymore. A measurement of youa/b will give you 50% chance to have youa, and 50% chance of having youb. So it is as if you changed from a superposition into a statistical mixture, which is what most people consider a transition to a classical situation. Nevertheless, you remain in a quantum state, not a "local superposition" anymore, but an entanglement with the environment.
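
Here is that argument in numbers (my own toy example, with |env1> and |env2> taken exactly orthogonal):

Code:
import numpy as np

you1, you2 = np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)
env1, env2 = np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)
youa = (you1 + you2) / np.sqrt(2)      # the "interference" eigenstates
youb = (you1 - you2) / np.sqrt(2)

def prob(outcome, state):
    # probability of finding "you" in `outcome`, summing over the environment index
    amp_env = np.tensordot(outcome.conj(), state, axes=([0], [0]))
    return float((np.abs(amp_env) ** 2).sum())

# local superposition, environment untouched: full interference
local = np.outer((you1 + you2) / np.sqrt(2), env1)
print(prob(youa, local), prob(youb, local))           # 1.0  0.0 : no "youb" situations

# entangled with orthogonal environment states: interference gone
mixed_in = (np.outer(you1, env1) + np.outer(you2, env2)) / np.sqrt(2)
print(prob(youa, mixed_in), prob(youb, mixed_in))     # 0.5  0.5 : looks like a mixture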

So you are right in that "bob" will of course mix with his environment, and Alice too. But as long as his environment is space-like separated from Alice's, you can include that BIG lump into the Bob state and the Alice state, and both environments still haven't interacted. I have indeed been wondering what happens when the two light cones mix, even long before Alice "saw" the result of Bob.

Just because I made an EPR type of measurement. What about any other type of measurement? If I look at the impact of a single photon going through a double slit setup, is my mind also in a linear superposition of the possible outcomes? If not, why would this be different than the case of the EPR measurement?

You mix with the environment with EVERY SINGLE SMALL interaction you are aware of (according to MWI). It happens all the time. Or better, your BODY mixes so. And each time (that's my personal view) your mind gets attached to ONE of those body states, through the Born rule. The "standard" MWI view is that there are many minds, one attached to each product body state, and that you simply statistically are "one" of them, and they hope to find schemes that make the Born rule come out - I think I've shown that you cannot get it out if you do not put it in somehow.

Now, I already said this (it is, I think, Penrose's view), that the possibility is still open that gravity will induce a GENUINE collapse. Too bad for string theory then :-) However, as long as we limit ourselves to EM, weak and strong interactions, we KNOW we have unitary evolution. So I don't see how a physical process based upon those interactions generates a non-unitary collapse.

cheers,
Patrick.
 
  • #49
DrChinese said:
I like your example, but I still don't really follow the logic here of your MWI application. I assume that there are still no hidden variables, is that correct?

That's the point. There are no hidden variables, and everything is local. So what gives, in Bell ? What gives is that, from Alice's point of view, Bob simply didn't have a definite result, and so you cannot talk about a joint probability, until SHE "decided" which branch to take. But when she did, information was present from both sides, so the Bell factorisation hypothesis is not justified anymore.

And suppose, assuming we could actually do this... Bob's photon polarization is checked .001 second after emission. The result is sent to Alice. Alice's entangled photon is placed into a coil of fiber optics and left there for a "while", perhaps just going around in circles or something - but not yet measured. She now knows the Bob result and can predict accurately what her photon will do.

Well, it changes the order in my example, of course, because the interactions are in a different order, but it won't change the conclusion. In fact, you can even take out Bob now, he doesn't serve any purpose anymore. Through Bob, she observes the state of the Bob-photon, and a while later, she observes her own photon. I think your example is in fact less "spectacular", because now there is no need for Alice to have Bob in a superposition: on her first observation of Bob's result, she and Bob are "in agreement" ; you simply have that Alice's photon is still in a superposition.

Let's do it.

H = H_alice x H_bob x H_sys1 x H_sys2

|psi(t0)> = |alice0>|bob0>(|z+>|z-> - |z->|z+>)/sqrt(2)

Remember, |z+> = cos(th) |th+> + sin(th) |th->
|z-> = -sin(th) |th+> + cos(th) |th->


from t0 to t1, Bob measures system 2 along direction th_b:

This means that a time evolution operator U_b acts,
such that:

U_b |bob0> |thb+> -> |bob+> |sys0>
U_b |bob0> |thb-> -> |bob-> |sys0>

U_b acting only on H_bob x H_sys2.

Rewriting psi(t0):

|psi(t0)> = |alice0>|bob0>(|z+>(-sin(thb) |thb+> + cos(thb) |thb->) -
|z->( cos(thb) |thb+> + sin(thb) |thb->) )/sqrt(2)

Applying U_b

|psi(t1)> = {- sin(thb)|alice0>|bob+>|z+>|sys0>
+ cos(thb) |alice0>|bob->|z+>|sys0>
- cos(thb) |alice0>|bob+>|z->|sys0>
- sin(thb) |alice0>|bob->|z->|sys0>}/sqrt(2)

Bob now sends his message on a cable to Alice, and Alice reads it.
We lump all this in a time evolution operator U_cable, which is "ready"
at time t2:

U_cable: |alice0>|bob+> -> |alice0+>|bob+>
and: |alice0>|bob-> -> |alice0->|bob->

So we have:

|psi(t2)> = {- sin(thb)|alice0+>|bob+>|z+>|sys0>
+ cos(thb) |alice0->|bob->|z+>|sys0>
- cos(thb) |alice0+>|bob+>|z->|sys0>
- sin(thb) |alice0->|bob->|z->|sys0>}/sqrt(2)

From t2 to t3, Alice measures system 1 along direction th_a, so we have
an evolution operator U_a which acts:

U_a |alice0X> |tha+> -> |alice+X>|sys0>
U_a |alice0X> |tha-> -> |alice-X>|sys0>

with X equal to + or -

U_a acts only on H_alice x H_sys1

Rewriting psi(t2):

|psi(t2)> = {- sin(thb)|alice0+>|bob+>(cos(tha) |tha+> + sin(tha) |tha->)|sys0>
+ cos(thb) |alice0->|bob->(cos(tha) |tha+> + sin(tha) |tha->)|sys0>
- cos(thb) |alice0+>|bob+>(-sin(tha) |tha+> + cos(tha) |tha->)|sys0>
- sin(thb) |alice0->|bob->(-sin(tha) |tha+> + cos(tha) |tha->)|sys0>}/sqrt(2)

and applying U_a:

|psi(t3)> = {- sin(thb) cos(tha)|alice++>|bob+> |sys0> |sys0>
- sin(thb) sin(tha)|alice-+>|bob+> |sys0> |sys0>
+ cos(thb) cos(tha)|alice+->|bob-> |sys0> |sys0>
+ cos(thb) sin(tha)|alice-->|bob-> |sys0> |sys0>
+ cos(thb) sin(tha)|alice++>|bob+> |sys0> |sys0>
- cos(thb) cos(tha)|alice-+>|bob+> |sys0> |sys0>
+ sin(thb) sin(tha)|alice+->|bob-> |sys0> |sys0>
- sin(thb) cos(tha)|alice-->|bob-> |sys0> |sys0>}/sqrt(2)

or:

|psi(t3)> = { (-sin(thb) cos(tha) + cos(thb) sin(tha) ) |alice++>|bob+>
+(-sin(thb) sin(tha) - cos(thb) cos(tha) ) |alice-+>|bob+>
+( cos(thb) cos(tha) + sin(thb) sin(tha) ) |alice+->|bob->
+( cos(thb) sin(tha) - sin(thb) cos(tha) ) |alice-->|bob-> } |sys0>|sys0> /sqrt(2)

or:

|psi(t3)> = { sin(tha-thb) |alice++> |bob+>
-cos(tha-thb) |alice-+> |bob+>
+cos(tha-thb) |alice+-> |bob->
+sin(tha-thb) |alice--> |bob-> } |sys0>|sys0> /sqrt(2)





Let us now look at Alice's possible evolutions:

Up to t1, Alice is in the Alice0 state, with 100% probability.
From t1 to t2, she learns about Bob's results, and her state evolves.
At t2, Alice has 50% chance to be in the Alice0+ state, and 50% chance
to be in the Alice0- state. In each case, she's in perfect agreement with
Bob, who, relative to each of her possibilities, no longer occurs in a superposition.

From t2 to t3, Alice measures her own photon.
If she was, with 50% chance, in the Alice0+ state, then she will now be, with a
probability 1/2 sin^2(tha-thb), in the Alice++ state, and with a probability
1/2 cos^2(tha-thb), in the Alice-+ state.

If she was in the Alice0- state, she will now be, with a probability
1/2cos^2(tha-thb), in the alice+- state, etc...

Note that upon reception of the message from Bob, she "decided" what Bob's state
was, and from there on she's in agreement with him, in each of her possible states.
It is when she observes her own photon (which was, at t2, still in a superposition
with respect to her), that she "decides" what state it is in. She is, of course,
still in agreement with Bob.


As I said, it is much less spectacular this way, because you only have Alice having a "superposition" of states of her photon. It's more spectacular to have her have a superposition of states of Bob.

cheers,
Patrick.
 
  • #50
I should probably add a remark to my previous post (the one with the first calculation).

At a certain point, we had:

vanesch said:
|psi(t2)> = { sin(tha-thb) |alice+> |bob+>
-cos(tha-thb) |alice-> |bob+>
+cos(tha-thb) |alice+> |bob->
+sin(tha-thb) |alice-> |bob-> } |cable0> |sys0>|sys0> /sqrt(2)

This can be re-written of course as:

|psi(t2)> = |alice+> (sin(tha-thb)|bob+>+cos(tha-thb) |bob->)|cable0>|sys0>|sys0>/sqrt(2)

+ |alice-> (-cos(tha-thb) |bob+> + sin(tha-thb) |bob->) |cable0>|sys0>|sys0>/sqrt(2)


So here it is clear that alice, in the alice+ state, "lives" with a Bob in superposition, so for her, at this moment, it doesn't make sense to talk about Bob's result.

This was maybe not obvious the way I wrote it earlier.

cheers,
Patrick.
 
