Entanglement and teleportation

In summary: Entanglement is a strong correlation between particles, but it does not allow information to be transmitted instantly. Comparing measurement results still requires a signal that propagates no faster than light.
  • #36
vanesch said:
I would say that that is the usual definition of
conditional probability :-) I think that this is not
the resolution of the EPR riddle.

It is if you follow the logic of my interpretation,
and realize that in the usual lhv formulation
(the one that is incompatible with some qm predictions)
the probabilities are not calculated conditionally.
But they should be -- and in the simplest
descriptive approach the joint results are seen to
be both locally produced and in accord with standard
classical optics. This renders unnecessary
any other description or interpretive explanation
for these types of experiments.

vanesch said:
Yes, but exactly the same classical intensity
explanation DOES NOT WORK for anti-coincidence experiments.

Not so far. :) And anyway, so what? The classical
intensity argument didn't work wrt the EPR/Bell
stuff either ... until the problem with the probabilistic
picture assumed by Bell-inspired lhv formulations became
clear.

And let's be clear here. In the joint EPR/Bell
context the variability of the supplementary global
parameter isn't a factor. So, the qm formulation
is, as usual, as quantitatively complete as it needs
to be without it, and can be interpreted as a
local description in concert with the classical
optics stuff (which gives a more *visualizably*
descriptive interpretation of what's happening).
 
  • #37
vanesch said:
Ok, I can agree with that :approve:

The only thing experimental evidence suggests strongly is that the individual probabilities, and the correlations, as calculated by quantum theory (according to your favorite scheme, they all give the same result of course) are strongly supported, and that this implies that some inequalities a la Bell are violated.

As such the total set of hypotheses that were used (locality, reality, independence of probabilities at remote places...) is falsified. But it is an error to jump directly to the throat of locality. This is a possibility, but it doesn't follow from any evidence. Just as the denial of hidden variables is a possibility, but not necessary.

And I can agree with that (well said as always...) ! In fact, I personally like keeping the locality assumption.
 
  • #38
Sherlock said:
Regarding the naive view, if the "situation" you're referring
to is the *debate* about the meaning of experimental
violations of Bell inequalities, then "lack of knowledge"
would certainly seem to have something to do with it.

If the "situation" you're referring to is the data
produced in the experiments, then there is enough
knowledge to explain this via local transmissions.

The inference of nonlocal effects via violations of
Bell inequalities rests primarily on the assumption that
the general lhv formulation proposed by Bell is the
*only* way to formulate a local description of the
probabilities in the joint-detection context. The
problem is that this *general* lhv formulation is
flawed (ie., inapplicable) -- for reasons that I've
pointed out in other posts in this thread.

The HV assumption made by Bell is not flawed in any respect. If you can explain a situation in which the determinate existence of a hypothetical third observable polarizer yields results compatible with experiment, please so state.

In the words of Bell: "It follows that c is another unit vector [in addition to a and b] ...". Therefore, there are 8 possible outcomes (permutations on a/b/c) that must total to 100% (total probability=1). It is a fact that 2 cases of the 8 have a negative probability when the angles have certain settings (such as a=0, b=67.5, c=45 degrees).

If you are not addressing this, then you are ignoring Bell entirely. Which is your right, but it is the essence of the issue.
 
  • #39
vanesch said:
Let's not forget that optics is not necessary. If you take quantum mechanics for granted, you get exactly the same Bell violations with electrons. It is only that the experiments are easier to carry out with light than with electrons.

If you're doing an optics experiment, and you can
use classical optics to account for the results, then
why wouldn't you want to do that?

Personally, I (apparently unlike you) try *not* to
think about quantum theory as much as possible. :)

About my guess that most physicists would
characterize MWI as "not even wrong", you wrote:

vanesch said:
As I pointed out before, this is a misconception. It would mean that all people working on subjects like quantum gravity, string theory or decoherence are working in the "not even wrong" category.

These are clearly fictional accounts of the real world.
Some fictions are useful. Some fictions become
'necessary' in the continued absence of a more
realistic account. But, MWI or CI or Bohmian mechanics
simply aren't necessary. Excess baggage.

Now, about quantum gravity, string theory, and
decoherence. I don't understand any of that stuff.
Although, I think it's safe to say that my peanut butter
sandwich is rapidly decohering. Anyhow, let's say
you've calculated the quantum states of a black
hole. Then what? Is there any way to ascertain
whether or not you were right? Suppose there is,
and you find that your calculations are correct.
Will you really have a better *understanding* of
what a black hole actually is? I don't think so, but
of course I could be quite wrong. General Relativity
seems like a *very* simplistic account of the
complex wave interactions that produce observable
gravitational behavior. So, to try to marry it with quantum
theory (a decidedly non-descriptive 'description' of
reality) seems ill-advised from the start.

Has decoherence really helped us understand anything
better than we did without it?

Some of the (presumably) smartest people in the
world have been working on string theory ever since
someone noticed some interesting mathematical
connections back in, what, the 60's? Suppose, that
they eventually (in our lifetimes) get it to consistently
tie everything together. So what? Is anybody going
to actually use it to describe their experiments? Or
will it be useful only for metaphysical speculation?
Or, will it not be used much at all by anybody, and it will just
be nice that everything finally got 'unified'?

vanesch said:
The reason to prefer MWI is not that it is somehow fancy or that the mystery part of it has some strange attraction. MWI is the natural consequence of two principles:
- the quantum-mechanical superposition principle
- locality (in the relativistic sense)

Natural consequence? Ok. Necessary? No. There must be
some good reasons why the physics community isn't too
excited about MWI.

vanesch said:
In the same way, Lorentz invariance and its associated requirement of locality has been a major guiding principle which did withstand many experimental challenges. The price to pay was a major revision of our notion of time, which could have been classified in the "not even wrong" category too if intuition was the only judge.

I don't know. I think I understand special relativity, but it
hasn't altered my basic intuitive notion of time. The physical
basis of the Lorentz-Fitzgerald contraction is another thing
altogether.

vanesch said:
At this point, there is absolutely no indication that we should limit the applicability of either the superposition principle or locality. And when you take these two "good soldiers" seriously all the way, you have no choice but to end up in a MWI view.

I don't think of locality in terms of applicability. It's the way
the world is, until demonstrated otherwise.

As for the superposition principle, whether it's a
frequency distribution of possible experimental results, or
of events in some other medium, it applies to waves.

The choice to not adopt the MWI view, is simply the choice
to not adopt extraneous symbolic baggage in trying to
understand things.

By the way, I hope you don't mind me being sort
of the devil's advocate wrt some of your statements.
I'm sure you know *much* more about these things
than I probably ever will. I've already learned much
from your (and others) postings and it's been very
entertaining. So, it is in a spirit of gratitude,
a genuine fascination with the physical world, and
an intense desire to keep things as simple as
possible :) that my replies are submitted.
 
  • #40
Sherlock said:
If you're doing an optics experiment, and you can
use classical optics to account for the results, then
why wouldn't you want to do that?

That's sort of funny, you know. Application of classical optics' formula [tex]\cos^2\theta[/tex] is incompatible with hidden variables but consistent with experiment.
 
  • #41
Well, you guys have flown over my head. Haha. I'll stick with "either a non-local causal relationship or taking the same measures of the same light at the same time".
 
  • #42
Sherlock said:
If you're doing an optics experiment, and you can
use classical optics to account for the results, then
why wouldn't you want to do that?

Well, because of some view that there should be an underlying unity to physics. You're not required to subscribe to that view, but I'd say that physics then loses a lot of interest - that's of course just my opinion. The idea is that there ARE universal laws of nature. Maybe that's simply not true. Maybe nature follows totally different laws from case to case. But then physics reduces to a catalog of experiments, without any guidance. A bit like biology before the advent of its molecular understanding.
I think that the working hypothesis that there ARE universal laws has not yet been falsified. Within that frame, you'd think that ONE AND THE SAME theory must account for all experimental observations concerning optics. We have such a theory, and it is called QED. Of course we had older theories, like Maxwell's theory and even the corpuscular theory ; and QED shows us IN WHAT CIRCUMSTANCES these older theories are good approximations ; and in what circumstances we will get deviations from their predictions.
It just turns out that in EPR type experiments you are in fact NOT in a regime where you can use Maxwell's theory because it is exactly the same regime in which you have the anti-coincidence counts. In one case however, Maxwell gives you (I'd say, by accident) an answer which corresponds to the QED prediction, in the other case, it is completely off.

But when you analyse EPR experiments in more detail, you can see that Maxwell DOES NOT give you ALL correct answers:

If you have the situation:
Code:
-------> (PDC) -------> PBS(Alice) ----> A+
            |                   |
            |                   o------> A-
            |
            X------> PBS(Bob) ----> B+
                          |
                          o------> B-
Then Maxwell will give you the right cos^2(theta) correlation between A+ and B+, as some people point out, but Maxwell will NOT give you the correct correlations between (A+ OR A-) AND B+ AND B-, which are ANTI-correlations.

(A+ OR A-) works here as the "trigger" (one photon seen in Alice's arm), while the other photon can only be detected at B+ OR at B- but not simultaneously. This is a complicated version of the Thorn experiment.
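To make the contrast concrete, here is a small Python sketch (my own toy model, not the actual Thorn et al. analysis) of why a classical field cannot reproduce the anti-coincidence: a classical pulse splits its intensity over both PBS ports, so both detectors can click on one trigger, while a single photon clicks at exactly one port. The angle `th` and the unit-efficiency detector model are illustrative assumptions.

```python
import math
import random

# Toy contrast (my own sketch, not the actual Thorn et al. analysis):
# what a classical field vs. a single photon predicts at Bob's PBS.
# Maxwell: the pulse SPLITS, intensity cos^2(th) to B+, sin^2(th) to B-,
# so with sensitive detectors BOTH ports can click on the same trigger.
# A single photon clicks at exactly one port, never both.
random.seed(0)
TRIALS = 100_000
th = 0.6                    # illustrative angle (radians)
p_plus = math.cos(th) ** 2  # intensity fraction sent to the B+ port

classical_coinc = 0
quantum_coinc = 0  # a single photon can never fire B+ AND B-
for _ in range(TRIALS):
    # Classical toy detectors: each port clicks independently with
    # probability equal to the intensity fraction it receives.
    if random.random() < p_plus and random.random() < 1.0 - p_plus:
        classical_coinc += 1

print("classical B+/B- coincidence fraction:", classical_coinc / TRIALS)
print("single-photon coincidence fraction:  ", quantum_coinc / TRIALS)
```

The classical coincidence fraction comes out near cos^2(th)*sin^2(th), clearly nonzero, which is the signature Maxwell cannot avoid.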

Sherlock said:
Personally, I (apparently unlike you) try *not* to
think about quantum theory as much as possible. :)

Well, I still believe in the working hypothesis of a "unity of physics" in that there is a single set of universal laws that nature should obey. All the rest is, proverbially, "stamp collecting" :-)
As such, quantum theory is the basis for ALL of physics (except for GR, that's the big riddle).

Sherlock said:
These are clearly fictional accounts of the real world.
Some fictions are useful. Some fictions become
'necessary' in the continued absence of a more
realistic account.

They are the natural consequence of a belief in a "unity of physics".

Sherlock said:
But, MWI or CI or Bohmian mechanics
simply aren't necessary. Excess baggage.

Well, for me the essence of physics is the identification of an objective world with the Platonic world (the mathematical objects), in such a way that the subjectively observed world corresponds to what you can deduce from those mathematical objects. MWI, CI and Bohmian mechanics are different mappings between an objective world and the Platonic world ; only they all lead, in the end, to the same subjectively observed phenomena. Now if physics were "finished", it would be a matter of taste which one you pick. But somehow you have to choose, I think.
However, physics is not finished yet. So this choice of mapping can be more or less inspiring for new ideas.


Sherlock said:
Suppose there is,
and you find that your calculations are correct.
Will you really have a better *understanding* of
what a black hole actually is?

I think that the perfect understanding is a fully coherent mapping between a postulated objective world and the platonic world of mathematical objects, in such a way that all of our subjective observations are in agreement with that mapping. There may be more than one way of doing this. I am still of the opinion that there exists at least one way.
Apart from basing the meaning of "explanation" on intuition (and we should know by now that that is not a reliable thing to do), I don't know what else it can mean to "explain" something.

Sherlock said:
General Relativity
seems like a *very* simplistic account of the
complex wave interactions that produce observable
gravitational behavior. So, to try marry it with quantum
theory (a decidedly non-descriptive 'description' of
reality) seems ill-advised from the start.

Well, this is a remark I never understood. If you have a theory which makes unambiguous, correct predictions of experiments, then in what way is there still something not "understood" ? I can understand the opposite argument: discrepancies between a theory's prediction and an experimental result can point to a more complex underlying "reality". But if the theory makes the right predictions ? I would then be inclined to think that the theory already possesses ALL the ingredients describing the phenomenon under study, no ?

Sherlock said:
Has decoherence really helped us understand anything
better than we did without it?

For sure ! It resolved an ambiguity in the formulation of quantum theory, namely WHEN to apply the Born rule (the famous Heisenberg cut). After all, the application of the Born rule is somehow left to the judgement of the person studying the phenomenon: he can, or cannot, include more and more "apparatus", apparently complicating the calculations ; nevertheless, from a certain point upward, this seems like a useless complication.

Decoherence theory tells us why: from the moment you have "irreversible coupling to the environment", putting more stuff from the "observer" part into the "system under study" part doesn't change the final result. This explains why "simplistic" quantum calculations often give accurate results. It is sufficient that the *essential quantum mechanical phenomena* are taken into account in a full quantum calculation; we can then apply the Born rule just after that "level of complexity" and continue with classical calculations upwards, and we will have the same results as if we did very complicated, fully unitary quantum calculations including everything, all the atoms of the measurement apparatus and all that.
Without these results of decoherence theory, quantum theory was in fact not usable, except if some ontological status was given to the quantum-classical transition (that's the Copenhagen view). But apparently that depended then on the choice of the scientist to include, or not, certain features of the apparatus into his calculation. Hence the ambiguity. And decoherence theory tells us that it doesn't matter where we do this, if we do it late enough.
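As an illustration of the point about irreversible coupling to the environment, here is a minimal numpy sketch (a two-qubit toy model of my own, not a realistic apparatus): entangling the system with orthogonal environment states and tracing the environment out removes the off-diagonal (interference) terms while leaving the Born-rule probabilities untouched.

```python
import numpy as np

# Two-qubit toy model (my own illustration, not a realistic apparatus):
# a system qubit in superposition couples irreversibly to one
# environment qubit; tracing the environment out kills the off-diagonal
# (interference) terms but leaves the Born-rule probabilities unchanged.
alpha, beta = 0.6, 0.8          # system amplitudes, alpha^2 + beta^2 = 1
sys_state = np.array([alpha, beta])

# Before coupling: a pure state, with nonzero off-diagonal coherence.
rho_before = np.outer(sys_state, sys_state.conj())

# Coupling: |0>|e0> and |1>|e1> with orthogonal environment states.
e0, e1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
joint = alpha * np.kron([1.0, 0.0], e0) + beta * np.kron([0.0, 1.0], e1)
rho_joint = np.outer(joint, joint.conj())

# Partial trace over the environment; index order is (sys, env).
rho_after = rho_joint.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

print("coherence before:", rho_before[0, 1])    # alpha*beta = 0.48
print("coherence after: ", rho_after[0, 1])     # 0.0: coherence gone
print("probabilities:   ", np.diag(rho_after))  # [0.36, 0.64] unchanged
```

Putting "more apparatus" into the system side just repeats this pattern at a larger scale, which is why it stops mattering where the cut is placed once the coupling is irreversible.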


Sherlock said:
Natural consequence? Ok. Necessary? No. There must be
some good reasons why the physics community isn't too
excited about MWI.

Ha, sample bias ! You say that the people who are excited by MWI are working on fantasies, and that the "others", whom you seem to identify with the entire community, are not excited by it :-)

Honestly, I think that many physicists have their noses too deep into their actual (interesting) work on a technical level to be concerned about foundational issues. They apply the Born rule, get out predictions, and do measurements. It is only for a minority, working on very fundamental issues, that it really matters in their work which view to hold. And most of those do "get excited about MWI", or at least consider the possibility, or reckon that if they don't like it, they'll have to come up with a *specific mechanism* which denies MWI. As I said, I'm open to that, and the possibility exists that gravity will do exactly that. But if that's true, the work of most string theorists goes into the dustbin.

Sherlock said:
As for the superposition principle, whether it's a
frequency distribution of possible experimental results, or
of events in some other medium, it applies to waves.

I think that that is a serious misunderstanding of what exactly the superposition principle of quantum theory tells us - but I've seen many people think of it that way. Somehow the superposition principle is associated immediately with "linear partial differential equations" (= waves). It's probably because of the way the material is usually introduced.

However, that's not at all the content of the superposition principle, as I understand it. The superposition principle says that if a physical situation A exists, and a different physical situation B exists, then for every complex number U there automatically exists a distinct physical situation.
We write this in ket notation as |A> + U|B>. And this, independent of the nature of situation A and situation B.
This is, at first sight, a mind boggling statement and it is the fundamental idea behind quantum theory.
It is exactly what Schroedinger thought it couldn't mean: if situation A is "my cat is dead" and situation B is "my cat chases a bird" then there exists a *new* situation for each complex number U: |my cat is dead> + U |my cat chases a bird>.
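A minimal numerical illustration of the principle as stated (the basis vectors and the sample values of U are arbitrary choices of mine): each complex U labels a distinct normalized state |A> + U|B>, with its own measurement statistics.

```python
import numpy as np

# Illustration of the superposition principle as stated above (basis
# vectors and sample values of U are arbitrary choices): each complex
# number U labels a distinct normalized state |A> + U|B>.
A = np.array([1.0, 0.0])   # "situation A", e.g. |cat dead>
B = np.array([0.0, 1.0])   # "situation B", e.g. |cat chases bird>

probs = []
for U in [0.0, 1.0, 1j, -2.0 + 0.5j]:
    psi = A + U * B
    psi = psi / np.linalg.norm(psi)   # normalize the new state
    p_B = abs(np.vdot(B, psi)) ** 2   # = |U|^2 / (1 + |U|^2)
    probs.append(p_B)
    print(f"U = {U}: P(situation B) = {p_B:.3f}")
```

Note that U = 1 and U = 1j already give the same P(B) but different relative phases, so they are genuinely different physical situations even with identical occupation probabilities.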

Sherlock said:
The choice to not adopt the MWI view, is simply the choice
to not adopt extraneous symbolic baggage in trying to
understand things.

Well, in my view of "understanding", which, as I explained, is in its purest form a mapping from a postulated objective reality into the world of mathematical objects, it is ONE such mapping. As I want to have *A* mapping, I find the one given by MWI the cleanest.

cheers,
Patrick.
 
  • #43
DrChinese said:
If you can explain a situation in which the determinate existence of a hypothetical third observable polarizer yields results compatible with experiment, please so state.

Not sure what you mean. In the situations we're considering
there is a source of (opposite moving) entangled photons,
two polarizers at each end to filter the emitted light, and
a PMT behind each polarizer to facilitate detection when
a certain amount of light has been transmitted by the polarizer.

The angular difference (theta) between the polarizer settings
determines the probability of joint detection. This probability varies
as cos^2 theta. There's never a negative probability of joint
detection. It goes from 0 (for theta = 90 degrees) to 1 (for
theta = 0 degrees).

DrChinese said:
In the words of Bell: "It follows that c is another unit vector [in addition to a and b] ...". Therefore, there are 8 possible outcomes (permutations on a/b/c) that must total to 100% (total probability=1). It is a fact that 2 cases of the 8 have a negative probability when the angles have certain settings (such as a=0, b=67.5, c=45 degrees).

If you are not addressing this, then you are ignoring Bell entirely. Which is your right, but it is the essence of the issue.

a, b and c are values for theta? What's the problem? I don't
see where you would get any negative probabilities. Or, maybe
a, b and c aren't values for theta? Then, what are they?
Individual settings? Ok, so you get the theta for a set of
joint measurements by combining the individual settings, |a-b|
or |b-c| or |a-c| and so on. I don't see any negative
probabilities coming out of this. I don't understand
what you think is the essence of the Bell issue.

By the way, your web page is cool. I too am a fan of Cream. :)
 
  • #44
Sherlock said:
Not sure what you mean. In the situations we're considering
there is a source of (opposite moving) entangled photons,
two polarizers at each end to filter the emitted light, and
a PMT behind each polarizer to facilitate detection when
a certain amount of light has been transmitted by the polarizer.

The angular difference (theta) between the polarizer settings
determines the probability of joint detection. This probability varies
as cos^2 theta. There's never a negative probability of joint
detection. It goes from 0 (for theta = 90 degrees) to 1 (for
theta = 0 degrees).



a, b and c are values for theta? What's the problem? I don't
see where you would get any negative probabilities. Or, maybe
a, b and c aren't values for theta? Then, what are they?
Individual settings? Ok, so you get the theta for a set of
joint measurements by combining the individual settings, |a-b|
or |b-c| or |a-c| and so on. I don't see any negative
probabilities coming out of this. I don't understand
what you think is the essence of the Bell issue.

By the way, your web page is cool. I too am a fan of Cream. :)

Cream was awesome, by the way!

a, b and c are the hypothetical settings you could have IF local hidden variables existed. This is what Bell's Theorem is all about. The difference between any two is a theta. If there WERE a hidden variable function independent of the observations (called lambda collectively), then the third (unobserved) setting existed independently BY DEFINITION and has a non-negative probability.

Bell has nothing to do with explaining coincidences, timing intervals, etc. This is always a red herring with Bell. ALL theories predict coincidences, and most "contender" theories yield predictions quite close to Malus' Law anyway. The fact that there is perfect correlation at a particular theta is NOT evidence of non-local effects and never was. The fact that detections are triggered a certain way is likewise meaningless. The idea that Malus' Law leads to negative probabilities for certain cases is what Bell is about, and that is where his selection of those cases and his inequality comes in.

Suppose we set polarizers at a=0 and b=67.5 degrees. For the a+b+ and a-b- cases, we call that correlation. The question is, was there a determinate value IF we could have measured at c=45 degrees? Because IF there was such a determinate value, THEN a+b+c- and a-b-c+ cases should have a non-negative likelihood (>=0). Instead, Malus' Law yields a prediction of about -10%. Therefore our assumption of the hypothetical c is wrong if Malus' Law (cos^2) is right.
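The counting argument can be checked in a few lines of Python (a sketch assuming the cos^2 joint prediction; grouping the 8 hidden-variable cases into symmetric pairs is just bookkeeping):

```python
import math

# Sketch of the counting argument, assuming the quantum/Malus joint
# prediction: the probability that both photons give the SAME answer at
# polarizer angles x and y is cos^2(x - y).  A local hidden variable
# model assigns each pair predetermined +/- answers at a, b and c,
# giving 8 cases; group them into 4 symmetric pairs:
#   s1 = P(+++ or ---), s2 = P(++- or --+),
#   s3 = P(+-+ or -+-), s4 = P(+-- or -++).
a, b, c = 0.0, 67.5, 45.0   # degrees, as in the example above

def p_same(x, y):
    return math.cos(math.radians(x - y)) ** 2

A = p_same(a, b)   # s1 + s2 = cos^2(67.5) ~ 0.146
B = p_same(a, c)   # s1 + s3 = cos^2(45.0) = 0.500
C = p_same(b, c)   # s1 + s4 = cos^2(22.5) ~ 0.854

# With s1 + s2 + s3 + s4 = 1, adding the three equations gives
# A + B + C = 2*s1 + 1, so:
s1 = (A + B + C - 1) / 2
s2, s3, s4 = A - s1, B - s1, C - s1

print(f"s1={s1:.4f}  s2={s2:.4f}  s3={s3:.4f}  s4={s4:.4f}")
# s2 ~ -0.1036: one "probability" is negative, so NO non-negative
# assignment over the 8 predetermined cases reproduces cos^2(theta).
```

The pair s2 covers exactly the a+b+c- and a-b-c+ cases named above, and it comes out at about -10%, which is the contradiction the post describes.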
 
  • #45
I would like to add a text I prepared concerning the MWI view on an EPR experiment.

H = H_alice x H_bob x H_cable x H_sys1 x H_sys2

|psi(t0)> = |alice0>|bob0>|cable0>(|z+>|z-> - |z->|z+>)/sqrt(2)

Remember, |z+> = cos(th) |th+> + sin(th) |th->
|z-> = -sin(th) |th+> + cos(th) |th->


from t0 to t1, Bob measures system 2 along direction th_b:

This means that a time evolution operator U_b acts,
such that:

U_b |bob0> |thb+> -> |bob+> |sys0>
U_b |bob0> |thb-> -> |bob-> |sys0>

U_b acting only on H_bob x H_sys2.

Rewriting psi(t0):

|psi(t0)> = |alice0>|bob0>|cable0>(|z+>(-sin(thb) |thb+> + cos(thb) |thb->) -
|z->( cos(thb) |thb+> + sin(thb) |thb->) )/sqrt(2)

Applying U_b

|psi(t1)> = {- sin(thb)|alice0>|bob+>|cable0>|z+>|sys0>
+ cos(thb) |alice0>|bob->|cable0>|z+>|sys0>
- cos(thb) |alice0>|bob+>|cable0>|z->|sys0>
- sin(thb) |alice0>|bob->|cable0>|z->|sys0>}/sqrt(2)


from t1 to t2, Alice measures system 1 along direction th_a, so we have
an evolution operator U_a which acts:

U_a |alice0> |tha+> -> |alice+>|sys0>
U_a |alice0> |tha-> -> |alice->|sys0>

U_a acts only on H_alice x H_sys1

Rewriting psi(t1):

|psi(t1)> = {- sin(thb)|alice0>|bob+>|cable0>(cos(tha) |tha+> + sin(tha) |tha->)|sys0>
+ cos(thb) |alice0>|bob->|cable0>(cos(tha) |tha+> + sin(tha) |tha->)|sys0>
- cos(thb) |alice0>|bob+>|cable0>(-sin(tha) |tha+> + cos(tha) |tha->)|sys0>
- sin(thb) |alice0>|bob->|cable0>(-sin(tha) |tha+> + cos(tha) |tha->)|sys0>}/sqrt(2)

and applying U_a:

|psi(t2)> = {- sin(thb) cos(tha)|alice+>|bob+>|cable0> |sys0> |sys0>
- sin(thb) sin(tha)|alice->|bob+>|cable0> |sys0> |sys0>
+ cos(thb) cos(tha)|alice+>|bob->|cable0> |sys0> |sys0>
+ cos(thb) sin(tha)|alice->|bob->|cable0> |sys0> |sys0>
+ cos(thb) sin(tha)|alice+>|bob+>|cable0> |sys0> |sys0>
- cos(thb) cos(tha)|alice->|bob+>|cable0> |sys0> |sys0>
+ sin(thb) sin(tha)|alice+>|bob->|cable0> |sys0> |sys0>
- sin(thb) cos(tha)|alice->|bob->|cable0> |sys0> |sys0>}/sqrt(2)

or:

|psi(t2)> = { (-sin(thb) cos(tha) + cos(thb) sin(tha) ) |alice+>|bob+>
+(-sin(thb) sin(tha) - cos(thb) cos(tha) ) |alice->|bob+>
+( cos(thb) cos(tha) + sin(thb) sin(tha) ) |alice+>|bob->
+( cos(thb) sin(tha) - sin(thb) cos(tha) ) |alice->|bob-> } |cable0> |sys0>|sys0> /sqrt(2)

or:

|psi(t2)> = { sin(tha-thb) |alice+> |bob+>
-cos(tha-thb) |alice-> |bob+>
+cos(tha-thb) |alice+> |bob->
+sin(tha-thb) |alice-> |bob-> } |cable0> |sys0>|sys0> /sqrt(2)
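As a cross-check of the algebra above, this numpy sketch (with arbitrary test angles tha and thb) projects the singlet directly onto the two measurement bases and recovers the same four amplitudes:

```python
import numpy as np

# Cross-check of the hand derivation (arbitrary test angles): project
# the singlet onto the product measurement bases and compare with the
# derived amplitudes sin(tha-thb) and cos(tha-thb), over sqrt(2).
def basis(th):
    # |th+>, |th-> written in the z basis, consistent with
    # |z+> =  cos(th)|th+> + sin(th)|th->
    # |z-> = -sin(th)|th+> + cos(th)|th->
    plus = np.array([np.cos(th), -np.sin(th)])
    minus = np.array([np.sin(th), np.cos(th)])
    return plus, minus

tha, thb = 0.3, 1.1   # arbitrary angles in radians
zp, zm = np.array([1.0, 0.0]), np.array([0.0, 1.0])
singlet = (np.kron(zp, zm) - np.kron(zm, zp)) / np.sqrt(2)

ap, am = basis(tha)   # Alice measures system 1 along tha
bp, bm = basis(thb)   # Bob measures system 2 along thb

def amp(u, v):
    # amplitude of the outcome <u| for system 1 and <v| for system 2
    return float(np.dot(np.kron(u, v), singlet))

s, c = np.sin(tha - thb), np.cos(tha - thb)
print(amp(ap, bp), "should equal", s / np.sqrt(2))   # |alice+>|bob+>
print(amp(am, bp), "should equal", -c / np.sqrt(2))  # |alice->|bob+>
print(amp(ap, bm), "should equal", c / np.sqrt(2))   # |alice+>|bob->
print(amp(am, bm), "should equal", s / np.sqrt(2))   # |alice->|bob->
```

The squared amplitudes also reproduce the probabilities quoted further down: correlation sin^2(tha-thb), anti-correlation cos^2(tha-thb).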


We can now play the game of bob sending his message on a cable between t2 and t3.

U_cable-bob leads then to a mapping:

|bob+> |cable0> -> |bob+> |cable+>
|bob-> |cable0> -> |bob-> |cable->

U_cable_bob acts only on the space H_bob x H_cable

The change in state is obvious:

|psi(t3)> = { sin(tha-thb) |alice+> |bob+> |cable+>
-cos(tha-thb) |alice-> |bob+> |cable+>
+cos(tha-thb) |alice+> |bob-> |cable->
+sin(tha-thb) |alice-> |bob-> |cable-> } |sys0>|sys0> /sqrt(2)

Between t3 and t4, the signal propagates on the cable from Bob to Alice.
Note that this can be represented by an evolution operator U_cable, but as
we didn't discriminate between the state of the cable at "bob" and at "alice"
we represent this evolved state by the same symbol |cable+> or |cable->, with
the understanding that the signal is, at t4, available locally at Alice.

Next, from t4 to t5, Alice reads the cable message. So Alice will learn
what Bob measured.

U_alice_cable acts on the space H_alice x H_cable.

It leads to the mapping:

|alice+>|cable+> -> |alice++> |cable0>
|alice+>|cable-> -> |alice+-> |cable0>
|alice->|cable+> -> |alice-+> |cable0>
|alice->|cable-> -> |alice--> |cable0>

(we put the cable state back to 0 ; in fact it doesn't matter what we do there).

So our final state is:

|psi(t5)> = { sin(tha-thb) |alice++> |bob+>
-cos(tha-thb) |alice-+> |bob+>
+cos(tha-thb) |alice+-> |bob->
+sin(tha-thb) |alice--> |bob-> } |cable0> |sys0>|sys0> /sqrt(2)



Let us now look at Alice's possible evolutions:

At t0 and t1, Alice is in the Alice0 state, 100% probability.
From t1 to t2, the state of Alice evolves, and after t2,
Alice has 50% chance to be in the state Alice+ and 50% chance to be in the state Alice-.
This remains so until t4: you can verify that the total squared length of the
vector multiplying |alice+> remains 1/2.

Between t4 and t5, Alice's state changes.

At t5, we have:
50% chance that Alice was in an alice+ state before, and finally (1/2) sin^2(tha-thb) chance
that she ends up in an alice++ state, and (1/2) cos^2(tha-thb) chance that she ends up
in an alice+- state (both possibilities add up to the original 50% chance to be in
alice+ before).

50% chance that Alice was in an alice- state before, and finally (1/2) cos^2(tha-thb) chance
that she ends up in an alice-+ state, and (1/2) sin^2(tha-thb) chance that she ends up in
an alice-- state.

The chance that she sees an anti-correlation (-+ or +-) is cos^2(tha-thb),
and the chance that she sees a correlation is sin^2(tha-thb).

Note that it is upon reception of the cable signal (which is in a superposition)
that it is ALICE'S assignment to one of the states which makes her decide whether Bob saw a + or a -.

Note that all time evolution operators act "locally" that means only on those subspaces which are in "local contact".

cheers,
Patrick.
 
  • #46
vanesch said:
The way I see it (even if you do not want to go explicitly in an MWI scheme) is that for Alice, it does not make sense to consider Bob's "outcomes" until she observes them (and calculates a correlation), in the same way as it doesn't make sense to talk about the position of a particle until it is observed.
If you use the quantity (Bob's outcome, or the position of a particle) without having observed it, it leads you to bizarre results, and I think that's simply what is happening here.
In many cases, you can get away with that (for instance if the particle can be considered classical, you can talk about its position without punishment), but in certain cases (double slit experiment) you get paradoxical situations.
In the same way, talking about the "result a remote observer had" before observing it yourself is something you can get away with most of the time, but sometimes you get paradoxical results (EPR).
This view is of course inspired by MWI, because, from Alice's point of view Bob didn't get one single outcome: he went into a superposition of states depending on the outcome (so talking about his outcome doesn't make sense yet). It is only upon interaction with Alice that a specific outcome state for Bob is chosen. But at that point, the hypotheses that go into Bell's inequality don't make sense anymore because information from both sides IS present.

So in a way, EPR is yet another example of a paradoxical result one can obtain when one talks about quantities that do not (yet) have an existence ; in this case, Bob's results before Alice saw them.

cheers,
Patrick.


Hi Patrick!

Now that my semester has ended (gave my final exam yesterday), I can finally go back to enjoying thinking about physics (instead of thinking about how to *explain* physics!).

Your point of view is very interesting. Of course, the question that arises is this. Let's say we carry out an EPR type of experiment, you and I. I choose a certain setting and make my measurement. But we never get together to compare our results. So, how do I experience this? I am still in a linear superposition of quantum states, right? But how would my consciousness experience this?

That's the part that I find difficult to accept. Given that I consider myself and my brain as classical entities, I find it difficult to think that such a large structure could remain in a quantum state until I would be able to be in touch with you to compare our measurements. Just because I made an EPR type of measurement. What about any other type of measurement? If I look at the impact of a single photon going through a double slit setup, is my mind also in a linear superposition of the possible outcomes? If not, why would this be different from the case of the EPR measurement?


Let me say again that I find your posts *extremely* interesting and informative. Thanks for taking the time to post!

Pat
 
  • #47
vanesch said:
...

Note that it is upon reception of the cable signal (which is in a superposition)
that it is ALICE'S assignment to one of the states which makes her decide whether Bob saw a + or a -.

Note that all time evolution operators act "locally" that means only on those subspaces which are in "local contact".

cheers,
Patrick.

Patrick,

I like your example, but I still don't really follow the logic here of your MWI application. I assume that there are still no hidden variables, is that correct?

And suppose, assuming we could actually do this... Bob's photon polarization is checked .001 second after emission. The result is sent to Alice. Alice's entangled photon is placed into a coil of fiber optics and left there for a "while", perhaps just going around in circles or something - but not yet measured. She now knows the Bob result and can predict accurately what her photon will do.

So does the statement quoted above about Alice's receiving the cable etc. still apply? Just wondering...
 
  • #48
Hi Pat !

Nice to have you here again.

nrqed said:
So how do I experience this? I am still in a linear superposition of quantum states, right? But how would my consciousness experience this?

Well, if you believe in quantum mechanics "all the way up", there's no way your body can get out of a superposition, simply because of the linearity of the time evolution operator. That's the essence of any MWI view. The funny thing is that we don't experience this. So whatever the *observer* associated with a body is, it cannot observe the entire quantum description of that body. That's why I postulate that an *observer* (call it your consciousness, though usually I get into a lot of trouble with that :-) is only *associated* with ONE of the states occurring in the Schmidt decomposition, and I simply say that this association happens randomly, according to the Born rule.

There is a part which is personal input, and there is a part which - I think - is common to all relative-state views.
Let me start with what is generally accepted in relative-state views. The most important part is that we say that in the whole universe, all dynamics is ruled by quantum theory, namely by the unitary evolution operator. This is what most people work on. You know I don't know much about string theory, but I think I know that this part is not touched upon even there.
So there is no explicit projection postulate. Just unitary evolution.
Next comes decoherence (which, let us recall, doesn't make sense outside of an MWI view). What decoherence essentially says is the following. Split the entire universe in two parts: "yourstuff" and "theenvironment". Yourstuff contains you, your apparatus, and the system under study.
The Hilbert space of the universe is the tensor product H_yourstuff x H_env. Let us say that we start out in a peculiar state, where you haven't yet interacted with the environment: |psi(t0)> = |you0> x |env0>, but where, within |you0>, there is a "superposition" present in one way or another.
After a very short time, due to interactions with the environment, this will evolve into a state |psi(t1)> which is no longer a pure product state. The Schmidt decomposition theorem tells us that there is a basis |y_n> in H_yourstuff and a basis |e_n> in H_env such that psi(t1) can be written as:

|psi(t1)> = Sum_n a_n |y_n> |e_n>

Of course |y_n> and |e_n>, as bases, depend on psi(t1). Decoherence now tells us that the |y_n> are quasi-classical states once everything in "yourstuff" has interacted with the environment. Further evolution will now simply take place in terms of these quasi-classical states (the environment basis and the "yourstuff" basis will remain essentially the same, except for internal time evolution), so the |y_n>|e_n> can be considered stationary states of the overall hamiltonian... well, not completely, but in such a way that the classicality of the |y_n> is not seriously affected.

Some people claim that this solves the "appearance of classicality". I think that this has been an important step, but that the problem is not solved. After all, there's a state of my body that appears in EACH of the terms! Zeh recognizes this, because he recognizes that at the "end" he STILL needs to use the Born rule. So decoherence solved the "basis" problem: it showed that the environment induces a natural basis which is made of classical states. But it doesn't tell us why we should pick out one state in the sum, or with what probability we should do so.

MWI people try to find ways to do that last thing, by considering "natural" distributions of observers over their body states. I wrote recently a small paper (quant-ph/0505059) where I think that this cannot be done without finally introducing something that is equivalent to the Born rule. So instead of torturing ourselves to find a way, let's just bluntly say that you associate your consciousness to ONE of the states of your body, which appears in a product state with the rest of the universe, using the Born rule. That last part is of course my personal stuff.
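Concretely, the Schmidt decomposition invoked above is, numerically, nothing but the singular value decomposition of the state's coefficient matrix. Here is a minimal sketch (Python with numpy; the dimensions and the random state are just toy assumptions):

```python
import numpy as np

# Toy bipartite state on H_yourstuff (dim 2) x H_env (dim 3),
# stored as psi[i, j] = amplitude of the basis ket |i>|j>.
rng = np.random.default_rng(0)
psi = rng.normal(size=(2, 3)) + 1j * rng.normal(size=(2, 3))
psi /= np.linalg.norm(psi)

# The SVD *is* the Schmidt decomposition:
# psi = Sum_n a_n |y_n>|e_n>, with a_n >= 0 and orthonormal |y_n>, |e_n>.
U, a, Vh = np.linalg.svd(psi, full_matrices=False)
# Columns of U are the Schmidt basis |y_n> of H_yourstuff;
# rows of Vh are the Schmidt basis |e_n> of H_env (both orthonormal).

# Rebuild psi from its Schmidt terms and check the weights.
psi_rebuilt = sum(a[n] * np.outer(U[:, n], Vh[n, :]) for n in range(len(a)))
assert np.allclose(psi, psi_rebuilt)
assert np.isclose(np.sum(a**2), 1.0)  # the a_n^2 are the Born-rule weights
```

The number of nonzero a_n (the Schmidt rank) tells you how entangled "yourstuff" is with the environment; a product state has exactly one term.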

That's the part that I find difficult to accept. Given that I consider myself and my brain as classical entities, I find it difficult to think that such large structure could remain in a quantum state until I would be able to be in touch with you to compare our measurements.

Well, "classical" in an MWI view, means essentially: hopelessly entangled with the environment. If you insist on absolute classicity, there's no way out but to introduce a genuine collapse, with all problems it brings in (non-locality, and the arbitrariness of when it happens). Once you are hopelessly entangled with the environment, you can never INTERFERE anymore with your other terms. The reason is this:
If you have |stuff> (|you1> + |you2>), you could think of a local measurement on the "yous" which has eigenstates mixing you1 and you2, say: |youa> = (|you1> + |you2>)/sqrt(2) and |youb> = (|you1> - |you2>)/sqrt(2). This would then show you the absence of youb situations, while you1 and you2 individually each have a youb component. That's typical "quantum interference".
But once you have |you1> |env1> + |you2> |env2> with |env1> and |env2> essentially orthogonal, there's no way you can do this anymore. A measurement of youa/b will give you 50% chance to have youa, and 50% chance of having youb. So it is as if you changed from a superposition into a statistical mixture, which is what most people consider a transition to a classical situation. Nevertheless, you remain in a quantum state, not a "local superposition" anymore, but an entanglement with the environment.
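This transition from superposition to effective mixture can be seen directly in the reduced density matrix. A small sketch (numpy; the two-dimensional "you" and "environment" spaces are of course toy assumptions):

```python
import numpy as np

you1, you2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
youa = (you1 + you2) / np.sqrt(2)   # the interference-revealing state |youa>
env1, env2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])  # orthogonal env states

def rho_you(state):
    # Reduced density matrix of "you": reshape to psi[you, env], trace out env.
    m = state.reshape(2, 2)
    return m @ m.conj().T

# Local superposition, environment untouched: (|you1> + |you2>) |env1>
pure = np.kron(youa, env1)
# Hopelessly entangled: (|you1>|env1> + |you2>|env2>)/sqrt(2)
entangled = (np.kron(you1, env1) + np.kron(you2, env2)) / np.sqrt(2)

p_youa_pure = youa @ rho_you(pure) @ youa       # interference intact: probability 1
p_youa_ent = youa @ rho_you(entangled) @ youa   # washed out: probability 1/2
print(p_youa_pure, p_youa_ent)
```

The entangled case gives 50/50 for the youa/youb measurement, exactly as described: the off-diagonal (interference) terms of the reduced density matrix are killed by the orthogonality of the environment states.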

So you are right, in that "bob" will of course mix with his environment, and Alice too. But as long as his environment is space-like separated from Alice's, you can include that BIG lump into the Bob state and the Alice state and both environments still didn't interact. I have indeed been wondering what happens when the two light cones mix, even long before Alice "saw" the result of Bob.

Just because I made an EPR type of measurement. What about any other type of measurement? If I look at the impact of a single photon going through a double slit setup, is my mind also in a linear superposition of the possible outcomes? If not, why would this be different than the case of the EPR measurement?

You mix with the environment with EVERY SINGLE SMALL interaction you are aware of (according to MWI). It happens all the time. Or better, your BODY mixes so. And each time (that's my personal view) your mind gets attached to ONE of those body states, through the Born rule. The "standard" MWI view is that there are many minds, one attached to each product body state, and that you simply statistically are "one" of them; they hope to find schemes that make the Born rule come out - I think I've shown that you cannot get it out if you do not put it in somehow.

As I already said, the possibility is still open (it is, I think, Penrose's view) that gravity will induce a GENUINE collapse. Too bad for string theory then :-) However, as long as we limit ourselves to the EM, weak and strong interactions, we KNOW we have unitary evolution. So I don't see how a physical process based upon those interactions generates a non-unitary collapse.

cheers,
Patrick.
 
  • #49
DrChinese said:
I like your example, but I still don't really follow the logic here of your MWI application. I assume that there are still no hidden variables, is that correct?

That's the point. There are no hidden variables, and everything is local. So what gives, in Bell ? What gives is that, from Alice's point of view, Bob simply didn't have a definite result, and so you cannot talk about a joint probability, until SHE "decided" which branch to take. But when she did, information was present from both sides, so the Bell factorisation hypothesis is not justified anymore.

And suppose, assuming we could actually do this... Bob's photon polarization is checked .001 second after emission. The result is sent to Alice. Alice's entangled photon is placed into a coil of fiber optics and left there for a "while", perhaps just going around in circles or something - but not yet measured. She now knows the Bob result and can predict accurately what her photon will do.

Well, it changes the order in my example, of course, because the interactions are in a different order, but it won't change the conclusion. In fact, you can even take out Bob now, he doesn't serve any purpose anymore. Through Bob, she observes the state of the Bob-photon, and a while later, she observes her own photon. I think your example is in fact less "spectacular", because now there is no need for Alice to have Bob in a superposition: on her first observation of Bob's result, she and Bob are "in agreement" ; you simply have that Alice's photon is still in a superposition.

Let's do it.

H = H_alice x H_bob x H_sys1 x H_sys2

|psi(t0)> = |alice0>|bob0>(|z+>|z-> - |z->|z+>)/sqrt(2)

Remember, |z+> = cos(th) |th+> + sin(th) |th->
|z-> = -sin(th) |th+> + cos(th) |th->


from t0 to t1, Bob measures system 2 along direction th_b:

This means that a time evolution operator U_b acts,
such that:

U_b |bob0> |thb+> -> |bob+> |sys0>
U_b |bob0> |thb-> -> |bob-> |sys0>

U_b acting only on H_bob x H_sys2.

Rewriting psi(t0):

|psi(t0)> = |alice0>|bob0>(|z+>(-sin(thb) |thb+> + cos(thb) |thb->) -
|z->( cos(thb) |thb+> + sin(thb) |thb->) )/sqrt(2)

Applying U_b

|psi(t1)> = {- sin(thb)|alice0>|bob+>|z+>|sys0>
+ cos(thb) |alice0>|bob->|z+>|sys0>
- cos(thb) |alice0>|bob+>|z->|sys0>
- sin(thb) |alice0>|bob->|z->|sys0>}/sqrt(2)

Bob now sends his message on a cable to Alice, and Alice reads it.
We lump all this in a time evolution operator U_cable, which is "ready"
at time t2:

U_cable: |alice0>|bob+> -> |alice0+>|bob+>
and: |alice0>|bob-> -> |alice0->|bob->

So we have:

|psi(t2)> = {- sin(thb)|alice0+>|bob+>|z+>|sys0>
+ cos(thb) |alice0->|bob->|z+>|sys0>
- cos(thb) |alice0+>|bob+>|z->|sys0>
- sin(thb) |alice0->|bob->|z->|sys0>}/sqrt(2)

From t2 to t3, Alice measures system 1 along direction th_a, so we have
an evolution operator U_a which acts:

U_a |alice0X> |tha+> -> |alice+X>|sys0>
U_a |alice0X> |tha-> -> |alice-X>|sys0>

with X equal to + or -

U_a acts only on H_alice x H_sys1

Rewriting psi(t2):

|psi(t2)> = {- sin(thb)|alice0+>|bob+>(cos(tha) |tha+> + sin(tha) |tha->)|sys0>
+ cos(thb) |alice0->|bob->(cos(tha) |tha+> + sin(tha) |tha->)|sys0>
- cos(thb) |alice0+>|bob+>(-sin(tha) |tha+> + cos(tha) |tha->)|sys0>
- sin(thb) |alice0->|bob->(-sin(tha) |tha+> + cos(tha) |tha->)|sys0>}/sqrt(2)

and applying U_a:

|psi(t3)> = {- sin(thb) cos(tha)|alice++>|bob+> |sys0> |sys0>
- sin(thb) sin(tha)|alice-+>|bob+> |sys0> |sys0>
+ cos(thb) cos(tha)|alice+->|bob-> |sys0> |sys0>
+ cos(thb) sin(tha)|alice-->|bob-> |sys0> |sys0>
+ cos(thb) sin(tha)|alice++>|bob+> |sys0> |sys0>
- cos(thb) cos(tha)|alice-+>|bob+> |sys0> |sys0>
+ sin(thb) sin(tha)|alice+->|bob-> |sys0> |sys0>
- sin(thb) cos(tha)|alice-->|bob-> |sys0> |sys0>}/sqrt(2)

or:

|psi(t3)> = { (-sin(thb) cos(tha) + cos(thb) sin(tha) ) |alice++>|bob+>
+(-sin(thb) sin(tha) - cos(thb) cos(tha) ) |alice-+>|bob+>
+( cos(thb) cos(tha) + sin(thb) sin(tha) ) |alice+->|bob->
+( cos(thb) sin(tha) - sin(thb) cos(tha) ) |alice-->|bob-> } |sys0>|sys0> /sqrt(2)

or:

|psi(t3)> = { sin(tha-thb) |alice++> |bob+>
-cos(tha-thb) |alice-+> |bob+>
+cos(tha-thb) |alice+-> |bob->
+sin(tha-thb) |alice--> |bob-> } |sys0>|sys0> /sqrt(2)
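As a sanity check on the algebra, the four collected amplitudes can be verified numerically by projecting the singlet directly onto the analyzer eigenstates (a numpy sketch; the angle values are arbitrary assumptions):

```python
import numpy as np

def ket(th, sign):
    # Analyzer eigenstates in the z basis. Inverting the posts' convention
    # |z+> = cos(th)|th+> + sin(th)|th-> and |z-> = -sin(th)|th+> + cos(th)|th->
    # gives |th+> = cos(th)|z+> - sin(th)|z-> and |th-> = sin(th)|z+> + cos(th)|z->.
    if sign == '+':
        return np.array([np.cos(th), -np.sin(th)])
    return np.array([np.sin(th), np.cos(th)])

tha, thb = 0.3, 1.1  # arbitrary analyzer angles
zp, zm = ket(0.0, '+'), ket(0.0, '-')
singlet = (np.kron(zp, zm) - np.kron(zm, zp)) / np.sqrt(2)

def amp(sa, sb):
    # Amplitude for Alice finding tha(sa) and Bob finding thb(sb)
    return np.kron(ket(tha, sa), ket(thb, sb)) @ singlet

d = tha - thb
assert np.isclose(amp('+', '+'),  np.sin(d) / np.sqrt(2))
assert np.isclose(amp('-', '+'), -np.cos(d) / np.sqrt(2))
assert np.isclose(amp('+', '-'),  np.cos(d) / np.sqrt(2))
assert np.isclose(amp('-', '-'),  np.sin(d) / np.sqrt(2))
```

Squaring these amplitudes reproduces the joint probabilities: 1/2 sin^2(tha-thb) for equal results and 1/2 cos^2(tha-thb) for opposite ones.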





Let us now look at Alice's possible evolutions:

Up to t1, Alice is in the Alice0 state, with 100% probability.
From t1 to t2, she learns about Bob's result, and her state evolves.
At t2, Alice has a 50% chance to be in the Alice0+ state, and a 50% chance
to be in the Alice0- state. In each case she's in perfect agreement with
Bob, who, in each of her possibilities, no longer appears to her to be in a superposition.

From t2 to t3, Alice measures her own photon.
If she was, with 50% chance, in the Alice0+ state, then she will now be, with a
probability 1/2 sin^2(tha-thb), in the Alice++ state, and with a probability
1/2 cos^2(tha-thb), in the Alice-+ state.

If she was in the Alice0- state, she will now be, with a probability
1/2cos^2(tha-thb), in the alice+- state, etc...

Note that upon reception of the message from Bob, she "decided" what Bob's state
was, and from there on she's in agreement with him, in each of her possible states.
It is when she observes her own photon (which was, at t2, still in a superposition
with respect to her), that she "decides" what state it is in. She is, of course,
still in agreement with Bob.


As I said, it is much less spectacular this way, because you only have Alice having a "superposition" of states of her photon. It's more spectacular to have her have a superposition of states of Bob.

cheers,
Patrick.
 
  • #50
I should probably add a remark to my previous post (the one with the first calculation).

At a certain point, we had:

vanesch said:
|psi(t2)> = { sin(tha-thb) |alice+> |bob+>
-cos(tha-thb) |alice-> |bob+>
+cos(tha-thb) |alice+> |bob->
+sin(tha-thb) |alice-> |bob-> } |cable0> |sys0>|sys0> /sqrt(2)

This can be re-written of course as:

|psi(t2)> = |alice+> (sin(tha-thb) |bob+> + cos(tha-thb) |bob->) |cable0>|sys0>|sys0>/sqrt(2)

+ |alice-> (-cos(tha-thb) |bob+> + sin(tha-thb) |bob->) |cable0>|sys0>|sys0>/sqrt(2)


So here it is clear that alice, in the alice+ state, "lives" with a Bob in superposition, so for her, at this moment, it doesn't make sense to talk about Bob's result.

This was maybe not obvious the way I wrote it earlier.

cheers,
Patrick.
 
  • #51
I will now, based upon my first calculation, show how this is related to a Copenhagen view of things.

We had, at t1, the measurement of Bob; this can again partly be described with the unitary evolution operator, but now we apply the Heisenberg cut at the level of "Bob":
vanesch said:
This means that a time evolution operator U_b acts,
such that:

U_b |bob0> |thb+> -> |bob+> |sys0>
U_b |bob0> |thb-> -> |bob-> |sys0>

U_b acting only on H_bob x H_sys2.

Rewriting psi(t0):

|psi(t0)> = |alice0>|bob0>|cable0>(|z+>(-sin(thb) |thb+> + cos(thb) |thb->) -
|z->( cos(thb) |thb+> + sin(thb) |thb->) )/sqrt(2)

Applying U_b

|psi(t1)> = {- sin(thb)|alice0>|bob+>|cable0>|z+>|sys0>
+ cos(thb) |alice0>|bob->|cable0>|z+>|sys0>
- cos(thb) |alice0>|bob+>|cable0>|z->|sys0>
- sin(thb) |alice0>|bob->|cable0>|z->|sys0>}/sqrt(2)
we have to apply the projection postulate now.

With 50% chance, bob is bob+, and we then have (after renormalization, which absorbs the overall 1/sqrt(2)):

|psi(t1+)> = - sin(thb) |alice0> |bob+> |cable0> |z+> |sys0>
- cos(thb) |alice0> |bob+> |cable0> |z-> |sys0>

(and with 50% chance, we had bob- and another state which I won't write out).

HERE YOU SEE THE NON-LOCALITY AT WORK.
Indeed, the decision to go to the bob+ state immediately affected the amount of |z+> and |z-> (the Alice particle) in the state!
This wasn't the case when we kept the entire state |psi(t1)>: you can verify that the total squared length of the components containing |z+> was still 1/2 in that case.
So the mechanism of the projection introduces the non-locality, in that the length of tensor product components (Hilbert spaces of remote subsystems) has suddenly changed. The evolution with Alice will be similar, but the main EPR effect occurred right here, in the Copenhagen view.
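This shift of remote weights can be made explicit numerically. In the sketch below (numpy; the angle is an arbitrary assumption), the |z+> weight of the Alice particle is 1/2 in the full state, but jumps to sin^2(thb) once Bob's side is projected onto bob+:

```python
import numpy as np

def ket(th, sign):
    # |th+> = cos(th)|z+> - sin(th)|z->,  |th-> = sin(th)|z+> + cos(th)|z->
    if sign == '+':
        return np.array([np.cos(th), -np.sin(th)])
    return np.array([np.sin(th), np.cos(th)])

thb = 0.7  # Bob's analyzer angle (arbitrary)
zp, zm = np.array([1.0, 0.0]), np.array([0.0, 1.0])
singlet = (np.kron(zp, zm) - np.kron(zm, zp)) / np.sqrt(2)

# Weight of |z+> on Alice's side (first factor) in the full state:
P_zp = np.kron(np.outer(zp, zp), np.eye(2))
before = singlet @ P_zp @ singlet  # 1/2

# Copenhagen projection of Bob's particle onto |thb+>, then renormalize:
P_bob = np.kron(np.eye(2), np.outer(ket(thb, '+'), ket(thb, '+')))
collapsed = P_bob @ singlet
collapsed /= np.linalg.norm(collapsed)
after = collapsed @ P_zp @ collapsed  # sin(thb)^2: the remote weight changed

print(before, after, np.sin(thb) ** 2)
```

Before the projection, the |z+> weight on Alice's side is independent of thb; after it, the weight depends on Bob's analyzer setting, which is exactly the formal non-locality being pointed out.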

Alice's measurement:
from t1 to t2, Alice measures system 1 along direction th_a, so we have
an evolution operator U_a which acts:

U_a |alice0> |tha+> -> |alice+>|sys0>
U_a |alice0> |tha-> -> |alice->|sys0>

U_a acts only on H_alice x H_sys1

Rewriting psi(t1+):

|psi(t1+)> = - sin(thb)|alice0>|bob+>|cable0>(cos(tha) |tha+> + sin(tha) |tha->)|sys0>
- cos(thb) |alice0>|bob+>|cable0>(-sin(tha) |tha+> + cos(tha) |tha->)|sys0>


and applying U_a:

|psi(t2)> = - sin(thb) cos(tha)|alice+>|bob+>|cable0> |sys0> |sys0>
- sin(thb) sin(tha)|alice->|bob+>|cable0> |sys0> |sys0>
+ cos(thb) sin(tha)|alice+>|bob+>|cable0> |sys0> |sys0>
- cos(thb) cos(tha)|alice->|bob+>|cable0> |sys0> |sys0>

or:

|psi(t2)> = {(-sin(thb) cos(tha) + cos(thb) sin(tha) ) |alice+>|bob+>
+(-sin(thb) sin(tha) - cos(thb) cos(tha) ) |alice->|bob+>}|cable0> |sys0>|sys0>

or:

|psi(t2)> = { sin(tha-thb) |alice+> |bob+>
-cos(tha-thb) |alice-> |bob+>} |cable0> |sys0>|sys0>

After this measurement, again we have to use the projection postulate: with sin^2(tha-thb) probability, alice will have measured a + state (we already know bob had a + state and this is taken into account: we have here a conditional probability for alice), and the state will be, after normalization:

|psi(t2+)> = |alice+>|bob+> |cable0> |sys0>|sys0>

The whole "mystery" resides then in 2 things:

1) what about this non-locality ? Clearly it is contained in the quantum formalism (a la Copenhagen) and clearly also it doesn't correspond to any specific dynamics. One cannot say that it is "due to a force yet to be discovered", because it *is* already present in the formalism, and it is NOT some dynamics of unknown sort.

2) How does it come about that Bob changes the states at Alice's in such a way that a) this directly influences the probabilities of the outcomes Alice will observe, but b) the mixture of influences that Bob prepares for Alice (when repeating the experiment) is exactly such that, when weighted with Bob's mixture of outcomes, Alice finally finds a 50/50 probability AS IF Bob didn't influence her stuff at all? That, to me, sounds like a serious conspiracy :-)

It is here that I see a certain superiority of the MWI view: we know why this has to remain 50/50 because after the unitary evolution at Bob, the length of the vectors at Alice weren't influenced.
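That "no conspiracy needed" point can be checked directly: Alice's reduced density matrix is unaffected by Bob's measurement once you average over his outcomes (a numpy sketch; the angle is arbitrary):

```python
import numpy as np

def ket(th, sign):
    # |th+> = cos(th)|z+> - sin(th)|z->,  |th-> = sin(th)|z+> + cos(th)|z->
    if sign == '+':
        return np.array([np.cos(th), -np.sin(th)])
    return np.array([np.sin(th), np.cos(th)])

zp, zm = np.array([1.0, 0.0]), np.array([0.0, 1.0])
singlet = (np.kron(zp, zm) - np.kron(zm, zp)) / np.sqrt(2)

def rho_alice(state):
    m = state.reshape(2, 2)  # indices (alice particle, bob particle)
    return m @ m.conj().T

rho_before = rho_alice(singlet)

# Bob measures along an arbitrary angle; average Alice's state over his outcomes.
# The unnormalized branches already carry their Born weights, so we just add them.
thb = 1.234
rho_after = np.zeros((2, 2))
for sign in '+-':
    proj = np.kron(np.eye(2), np.outer(ket(thb, sign), ket(thb, sign)))
    rho_after = rho_after + rho_alice(proj @ singlet)

assert np.allclose(rho_before, np.eye(2) / 2)  # maximally mixed: 50/50 for any analyzer
assert np.allclose(rho_after, rho_before)      # Bob's choice of thb cannot signal to Alice
```

The same holds for any local unitary or measurement Bob performs: the lengths of the vectors on Alice's side are untouched, so her marginal statistics stay 50/50 automatically.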

cheers,
Patrick.
 
  • #52
vanesch said:
Well, because of some view that there should be an underlying unity to physics. You're not required to subscribe to that view, but I'd say that physics then loses a lot of its interest - that's of course just my opinion.
The idea is that there ARE universal laws of nature. Maybe that's simply not true. Maybe nature follows totally different laws from case to case. But then physics reduces to a catalog of experiments, without any guidance. A bit like biology before the advent of its molecular understanding.
I think that the working hypothesis that there ARE universal laws has not yet been falsified. Within that frame, you'd think that ONE AND THE SAME theory must account for all experimental observations concerning optics. We have such a theory, and it is called QED. Of course we had older theories, like Maxwell's theory and even the corpuscular theory ; and QED shows us IN WHAT CIRCUMSTANCES these older theories are good approximations ; and in what circumstances we will get deviations from their predictions.
It just turns out that in EPR type experiments you are in fact NOT in a regime where you can use Maxwell's theory because it is exactly the same regime in which you have the anti-coincidence counts. In one case however, Maxwell gives you (I'd say, by accident) an answer which corresponds to the QED prediction, in the other case, it is completely off.

I don't think that adopting a more easily visualizable 'classical'
explanation when possible for some experiments destroys the idea
that there are universal organizing principles. I believe that
a coherent 'big picture' that is close to the 'deep' or 'true'
nature of the universe can eventually be developed. (I think that
it will be some sort of wave mechanics that will account
for both the orderly and the chaotic/turbulent aspects of
reality, and that it will provide a communicable 'picture'
in a way that current quantum theory doesn't.) But that
belief isn't why I study physics.

Yes, QED can account for the instrumentally produced
data. But that isn't a picture of the sub-microscopic,
sub-atomic reality. It's a picture of the experimental data.
There is no picture of what light actually is, just
sometimes paradoxical experimental results. Using the
same single-photon light source you can make light behave
as if it is composed of indivisible 'particles' or
divisible waves. (The same setups that produce
anti-coincidence counts can be modified to produce
interference effects.) This could be due to the
interference-producing setups analyzing indivisible units
in aggregate (via combined streams using interferometer
in the beamsplitter setups or long time exposure using
detection location data in the double-slit setups), or it
could be due to instrumental insensitivity to sub-threshold
(divisible) wave activity. The answer isn't clear yet, afaik.

In any case, I don't think the fact that the cos^2 theta
formula works in the standard two-detector optical EPR/Bell
setup, and the fact that it's a 200 year old optics formula is
just a coincidence. (Remember all that stuff about
an "underlying unity to physics" above? :) )

There do seem to be organizing principles that are peculiar
to certain scales and contexts. The phenomenology
of, say, human social interactions is certainly different
than the phenomenology of quantum interactions.

It seems unlikely to me that there will ever be anything
like a quantum gravity. Gravitational behavior (in accordance
with the equivalence principle by the way) can be thought of
as emerging via complex wave interactions many orders of
magnitude greater in complexity than the simpler interactions
that are characterized as quantum. This isn't to say that
there aren't quantum interactions happening in and between
gravitating bodies -- they just aren't important in that
context, they don't *determine* gravitational behavior.

String theory, on the other hand, by positing the existence
of an underlying universal particulate medium, seems very well
motivated, though obviously a contrivance. I think it's
sort of the wrong approach, and even if they get it to
work mathematically for everything that won't necessarily
mean that it's a 'true' description of reality.

vanesch said:
... for me the essence of physics is the identification of an objective world with the Platonic world (the mathematical objects), in such a way that the subjectively observed world corresponds to what you can deduce from those mathematical objects. MWI, CI and Bohmian mechanics are different mappings between an objective world and the Platonic world ; only they lead to finally the same subjectively observed phenomena. Now if physics would be "finished" then it is a matter of taste which one you pick out. But somehow you have to choose I think.
However, physics is not finished yet. So this choice of mapping can be more or less inspiring for new ideas.

For me, the essence of physics is the recognition of associations
or connections wrt natural and experimentally observed phenomena
and the ability to quantify those (intuitive?) associations.
(For example, I'll bet you've wondered why there is any motion
at all. Most people just take it as a given. There's motion,
now proceed to Newton's Laws and so on. But, there are
observations that indicate that the universe is expanding
omnidirectionally. Could these observations be the basis
for a new fundamental, universal law?)

I agree that physics is not only not finished, it's pretty
much just getting started. I also think that MWI, CI and Bohmian
mechanics *are* a matter of taste, and not very inspiring. :)

vanesch said:
I think that the perfect understanding is a fully coherent mapping between a postulated objective world and the platonic world of mathematical objects, in such a way that all of our subjective observations are in agreement with that mapping. There may be more than one way of doing this. I am still of the opinion that there exists at least one way.
Apart from basing the meaning of "explanation" on intuition (and we should know by now that that is not a reliable thing to do), I don't know what else can it mean, to "explain" something.

If there's more than one way of doing it (and using the
method that you advocate almost assures that there will
always be more than one way) then why would you consider
any one of those ways to be the 'perfect' understanding?

One's 'intuition' changes as one learns and observes.

My intuition tells me that, for example, MWI, CI and
Bohmian mechanics are *not* providing us with a true
picture of the real world -- regardless of how
'coherently' they 'map'. I think that most scientists'
intuitions would tell them this, and I think that
scientists intuitive judgements about things should
be taken seriously.

vanesch said:
If you have a theory which makes unambiguous, correct predictions of experiments, then in what way is there still something not "understood" ? I can understand the opposite argument: discrepancies between a theory's prediction and an experimental result can point to a more complex underlying "reality". But if the theory makes the right predictions ? I would then be inclined to think that the theory already possesses ALL the ingredients describing the phenomenon under study, no?

Well, yes and no. :) For example, quantum theory makes
correct predictions. But, the *phenomena* under study are
experimental results, not an 'underlying reality' that
the results are, as presumed by some, about. So, you
sometimes get incomprehensible results. From this, the
CI view is that the 'quantum world' is simply
incomprehensible, and that analogies from the world of
our sensory experience are simply inapplicable. And, I
consider that to be a very wrongheaded view.

As for my statement regarding GR as simplistic:
if gravitational behavior is complex wave
interactions, then GR is an oversimplification.
Lots of people think that GR, and even the
Standard Model, won't be up to the task of
handling recent astronomical observations.

And, regarding MWI, I don't consider it to be a
physical theory -- even though it might be
a very clean mapping. :)
 
  • #53
vanesch said:
The whole "mystery" resides then in 2 things:

1) what about this non-locality ? Clearly it is contained in the quantum formalism (a la Copenhagen) and clearly also it doesn't correspond to any specific dynamics. One cannot say that it is "due to a force yet to be discovered", because it *is* already present in the formalism, and it is NOT some dynamics of unknown sort.

2) How does it come that Bob changes the states in such a way at Alice's that a) this directly influences the probabilities of outcomes Alice will observe, but b) that the mixture of influences that Bob prepares for Alice (when repeating the experiment) is exactly such, that, when weighting with Bob's mixture of outcomes, Alice finds finally a 50/50 probability AS IF Bob didn't influence her stuff. That, to me, sounds like a serious conspiracy :-)

It is here that I see a certain superiority of the MWI view: we know why this has to remain 50/50 because after the unitary evolution at Bob, the length of the vectors at Alice weren't influenced.

I don't think this clearly states the essence of the real physical
mystery, which I view as concerning whether all of the light
incident on a polarizer during a certain coincidence interval
associated with a photon detection is being transmitted by the
polarizer or not (there are similar considerations for the two-slit
and beamsplitter setups -- is the emitted light associated with a
photon detection going through both slits when they are both open,
and is the emitted light associated with a photon detection being
both reflected and transmitted after interacting with a beamsplitter?).
That is, it's known what photons *are* theoretically and to a certain
extent instrumentally, but the actual physical nature of photons
isn't known. Hence, there are some interpretational problems.

As for the projection, it's based on the idea that Alice and Bob
are analyzing in the joint context the same value of some physical
property during a certain interval associated with the production of
that value. The projected axis is taken as the axis of
maximum probability of detection because it produced a
detection. This in itself doesn't imply a nonlocal
physical connection between Alice and Bob. The nonlocal
stuff comes from people thinking that Bell proved
that the light incident on the polarizers couldn't
have a common motional property.

But, this is the essence of what Schroedinger called entanglement --
that two objects which have interacted, or have been produced
by the same process (like being emitted via one and the same
atomic transition), carry with them in their subsequent motion
information of the motion imparted via the interaction or
the process that created them. This shared property of
motion will stay with the objects no matter how far apart
they travel, as long as no external torques are introduced
which might modify the value of the shared property.

Probabilities are not explanations. They're descriptions of
behavior at the level of instrumental detection, which to
a certain extent can't be controlled.
 
  • #54
Sherlock said:
In any case, I don't think the fact that the cos^2 theta
formula works in the standard two-detector optical EPR/Bell
setup, and the fact that it's a 200 year old optics formula is
just a coincidence. (Remember all that stuff about
an "underlying unity to physics" above? :) )

Well, it doesn't really work. It works for ONE specific correlation, under the assumption (which is semi-classically correct) that you have a probability of clicking proportional to incident intensity, namely A+ and B-. Now if you use ABSORBING polarizers, that's all you get, so there it is ok. But if you use *polarizing beam splitters*, it DOESN'T work for some of the other correlations, as I tried to point out in post number 42 in this thread.

Now, if, within the same experiment, a certain way of reasoning explains SOME results, and is in contradiction with OTHERS, then that way of reasoning IS WRONG.

Like my old physics teacher used to say: we know that many solids expand as a function of temperature. Now, in summer, days are longer, and they are hotter too... (but it doesn't work for the summer nights...)

cheers,
Patrick.
 
  • #55
Sherlock said:
That is, it's known what photons *are* theoretically and to a certain
extent instrumentally, but the actual physical nature of photons
isn't known. Hence, there are some interpretational problems.

Que veut le peuple ? (What more do the people want?)

If you know what they are "theoretically" and you know what they mean instrumentally, what else is there to know ?
A "mechanical" picture (like the discussions people had in the 19th century about *in what matter* the E and B fields had to propagate) ?

cheers,
Patrick.
 
  • #56
Regarding cos^2 theta correlation curve in EPR/Bell experiments
you wrote:

vanesch said:
Well, it doesn't really work. It works for ONE specific correlation ...

It describes the data curves for a class of setups that have some things in common with the setup from which the formula was originally derived.

You disappoint me if you don't see at least the possibility
of some connection between the two.
 
  • #57
Regarding photons, you wrote:

vanesch said:
Que veut le peuple ?
If you know what they are "theoretically" and you know what they mean instrumentally, what else is there to know ?
A "mechanical" picture (like the discussions people had in the 19th century about *in what matter* the E and B fields had to propagate) ?

I know what 'gods' are 'theoretically'. And I know how people
react to the word. But I have no idea what gods *are*.
That is, I have no way of knowing how (in what form) or if
they exist outside those contexts.

It's sort of the same with photons, except that photons
are a much more interesting subject -- especially entangled
ones.

So, yes, I'd say that there's a lot more to be known
about photons, about light, than is currently known.
Some sort of mechanical picture of the deep reality
would be nice. Do you think that's impossible?

I think that not being curious in this way would
make physics a lot less interesting.
 
  • #58
Sherlock said:
Regarding cos^2 theta correlation curve in EPR/Bell experiments

It describes the data curves for a class of setups, which have
some things in common with the setup from which it was originally
obtained.

You disappoint me if you don't see at least the possibility
of some connection between the two.

Sorry to disappoint you :-)

The link is however, rather clear. In the QED picture, the AVERAGE photon count rate is of course equal to the classical intensity, and we know that the classical intensities are related with a cos^2 theta curve.
So if you consider that the light beams are made up of classical *pulses* with random orientation, and you look at the intensities per pulse that get through the polarizers, then you get the cos^2 theta relationship. On average, then, the photon counting rates must also be related by a cos^2 theta relationship.
So *A* way to respect this constraint is just to have a correlation PER EVENT which is given by cos^2 theta. But that doesn't NEED to be so. For A+ and B-, it is so, agreed. But for A+ and A-, they have, in the same classical picture, intensities which vary from 50-50 to 0-100 (namely 50-50 when the incoming classical pulse is under 45 degrees with the polarizing BS orientation, and 0-100 when the classical pulse is parallel (or perpendicular) to the BS orientation). So you would expect a certain correlation rate (about 50%: you have EQUAL intensities in the 50-50 -> full correlation and you have anti-correlation in the 0-100 case).
Well, this IS NOT THE CASE. You find perfect anticorrelation. So this illustrates that the picture of a classical pulse with a random polarization, and a probability of triggering PER CLASSICAL PULSE of the photodetector, proportional to the classical intensity of the individual pulse, DOES NOT WORK IN THIS SETUP. If it doesn't work for certain aspects of the set-up, it doesn't work AT ALL.
The proportionality of detections and classical intensities only works ON AVERAGE, not necessarily PULSE PER PULSE.
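
A small Monte Carlo makes the classical-pulse prediction quantitative. This is a sketch of one simple reading of the model just described (not code from the thread): each detector is assumed to fire independently, with probability equal to the classical intensity it receives per pulse.

```python
# Sketch of the classical-pulse model under discussion: each pulse has a
# random polarization lambda; the A+ output of the polarizing beamsplitter
# carries intensity cos^2(lambda), the A- output sin^2(lambda), and each
# detector is ASSUMED to fire with probability equal to its intensity.
import math
import random

random.seed(0)
N = 200_000
double = 0  # pulses where A+ AND A- both fire
for _ in range(N):
    lam = random.uniform(0.0, 2.0 * math.pi)
    if (random.random() < math.cos(lam) ** 2
            and random.random() < math.sin(lam) ** 2):
        double += 1

# The lambda-average of cos^2 * sin^2 is 1/8, so this model predicts
# double triggers on roughly 12.5% of pulses; experiment sees essentially none.
print(double / N)
```

That nonzero double-trigger rate is exactly the kind of simultaneous A+/A- hit that is never observed.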

The ONLY picture which gives you a consistent view on all the data is the photon picture, with a SINGLE DETECTABLE ENTITY PER "PULSE" in each arm. And if you accept THAT, you appreciate the EPR "riddle", and you do not explain it with the old cos^2 theta law, because that SAME cos^2 theta law would also give us SIMULTANEOUS HITS in A+ and A-, which we don't have. The EPR problem is only valid in the case where you do not have simultaneous
YES/NO answers, of course, otherwise you have, apart from a +z and a -z answer, also a (+z AND -z) answer, which changes Bell's ansatz.

But I repeat my question: people do experiments with light for 2 reasons: it is feasible, and they *assume* already that we accept the photon picture. If you do not do so, then doing the EPR experiment with light is probably not very illuminating (-:.
However, (at least on paper), you can do the same thing WITH ELECTRONS. Now, I take it that you accept that a single electron going onto two detectors will only be detected ONCE, right ? Well, according to quantum theory, you get exactly the same situation (the cos^2 theta correlation) there. So how is this now explained "classically" ?

(ok, the angle is now defined differently because of the difference between spin-1 and spin-1/2 particles).

Do you:
a) think that QM just makes a wrong prediction there ?
b) do not accept that a single electron can only be detected in 1 detector ?
c) other ?

cheers,
Patrick.
 
  • #59
Sherlock said:
So, yes, I'd say that there's a lot more to be known
about photons, about light, than is currently known.
Some sort of mechanical picture of the deep reality
would be nice. Do you think that's impossible?

No, it is not impossible; Bohm's theory does exactly that.
The main objection I have against the view that we need a mechanical picture as an explanation is: what MORE does a mechanical picture explain? Isn't it simply because we grew up with Newton's mechanics, and the associated mathematics (calculus), and we developed more "gut feeling" for it? What is so special about some mechanical view of things? I have nothing *against* a mechanical view, but I don't think a mechanical view is worth sacrificing OTHER ideas. And that's what, for instance, Bohm's theory does: it sacrifices locality (and so does the projection postulate).

I will agree with you that quantum theory, or general relativity, or whatever, doesn't give us a "final view" on how nature "really" works; for the moment, however, it is the best we have. 300 years from now, I'm pretty sure that our paradigms will have changed completely, and people will look back on our discussions with a smile in the same way we could look back on people developing a "world view" based upon a Newtonian picture. And they will be naive in turn, because 600 years from now, their descendants will again have changed their views :-)

In short, I think it is a meaningless exercise to try to say what nature "really" looks like. But what you can try to do is to build a mental picture that gives you the clearest possible view on how nature is seen using things that we KNOW right now. It is in that context that I see MWI. I do not know/think/hope that the MWI view is the "real" view on the world (which, as I outlined, I don't think we'll ever have). I think that MWI is about the purest mental picture of quantum theory, because *it respects most of all its basic postulates*. That's all. If you do formally ugly things, such as the projection postulate, to get "closer to your gut feeling about nature", I think you miss the essential content of quantum theory, and as such you're in a bad shape to see where it could be extended, because you have already mutilated it!

cheers,
Patrick.
 
  • #60
vanesch said:
That's the point. There are no hidden variables, and everything is local. So what gives, in Bell ? What gives is that, from Alice's point of view, Bob simply didn't have a definite result, and so you cannot talk about a joint probability, until SHE "decided" which branch to take. But when she did, information was present from both sides, so the Bell factorisation hypothesis is not justified anymore.

...

As I said, it is much less spectacular this way, because you only have Alice having a "superposition" of states of her photon. It's more spectacular to have her have a superposition of states of Bob.

cheers,
Patrick.

Thanks, that helps me to understand this perspective better!
 
  • #61
Sherlock said:
In any case, I don't think the fact that the cos^2 theta
formula works in the standard two-detector optical EPR/Bell
setup, and the fact that it's a 200 year old optics formula is
just a coincidence. (Remember all that stuff about
an "underlying unity to physics" above? :) )

There are definitely TWO ways to look at that statement. Some of the vocal local realists argue that the cos^2 law isn't correct! They do that so the Bell Inequality can be respected; and then explain that experimental loopholes account for the difference between observation and their theory.

Clearly, classical results sometimes match QM and sometimes don't; and when they don't, you really must side with the predictions of QM. Even Einstein saw that this was a steamroller he had to ride, and the best he could muster was that QM was incomplete.
 
  • #62
I said the following:

vanesch said:
And if you accept THAT, you appreciate the EPR "riddle", and you do not explain it with the old cos^2 theta law, because that SAME cos^2 theta law would also give us SIMULTANEOUS HITS in A+ and A-, which we don't have. The EPR problem is only valid in the case where you do not have simultaneous
YES/NO answers, of course, otherwise you have, apart from a +z and a -z answer, also a (+z AND -z) answer, which changes Bell's ansatz.

and I would like to illustrate WHERE it changes Bell's ansatz.

Consider again 3 directions, a, b and c, for Alice and Bob.

Alice has an A+ and an A- detector, and Bob has a B+ and a B- detector.
Usually people talk only about the A+ hit or the "no-A+ hit" (where it is understood that the no-A+ hit is an A- hit).

We then take as hidden variable a bit for each a, b and c:

If we have a+ this means that Alice will have A+ and bob will have no B+ in the a direction, if we have a b+ that means that Alice will have an A+ and bob will have no B+ in the b direction, and ...

So we can have: a(+/-) b(+/-) c(+/-) as hidden state. But that description already includes the anti-correlation: if A+ triggers, then A- does NOT trigger, and if A- triggers, then A+ does not trigger. When A+ and A- do not trigger, that is then assumed to be due to the finite quantum efficiencies of the detector, which lead to the "fair sampling hypothesis".

But if we accept the possibility that A+ AND A- trigger together, then each direction has, besides the + and - possibility, a THIRD possibility, namely X: double trigger. So from here on, we have 27 different possible states. This changes completely the "probability bookkeeping" and Bell's inequalities are bound to change. The local realist could even introduce a fourth possibility: A+ and A- do not trigger, and this is not due to some inefficiency, with symbol 0.

So we have a(+/-/X/0), b(+/-/X/0), c(+/-/X/0) which gives us 64 possibilities.
You can then easily show that Bell's inequalities are different and that experiments don't violate them.

The blow to this view is that whenever you make up a detector law as a function of intensity which allows you to consider the 0 case, you also have to consider the X case. The X case is never observed, so there are reasons to think that the 0 case doesn't exist either, especially because QED tells us so, and that you do get out the right results (including the observed number of 0 cases) when applying the quantum efficiency under the fair sampling hypothesis.
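
The state counting above is easy to verify by brute enumeration (a trivial sketch, not from the original post; the labels +, -, X, 0 are the four outcome types just defined):

```python
# Enumerate hidden states: one outcome label per direction a, b, c.
from itertools import product

standard = list(product("+-", repeat=3))      # Bell's usual bookkeeping: 8 states
with_double = list(product("+-X", repeat=3))  # allow the double trigger X: 27
with_zero = list(product("+-X0", repeat=3))   # also allow "neither fires": 64

print(len(standard), len(with_double), len(with_zero))
```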

cheers,
Patrick.
 
  • #63
DrChinese said:
That's sort of funny, you know. Application of classical optics' formula [tex]\cos^2\theta[/tex] is incompatible with hidden variables but consistent with experiment.

The cos^2 theta formula isn't incompatible with hidden
variables.

For the context of individual results you can write,

P = cos^2 |a - lambda|,

where P is the probability of detection, a is the
polarizer setting and lambda is the variable
angle of emission polarization.

This doesn't conflict with qm. If you knew
the value of lambda, or had any info about
how it was varying (other than just that
it's varying randomly), then you could more
accurately predict individual results (by
individual results I mean the data streams
at one end or the other).

How do we know that there *is* a hidden
variable operating in the individual measurement
context? Because, if you keep the polarizer
setting constant the data stream varies
randomly.

Now, this hidden variable doesn't just
stop existing because we decide to
combine the individual data streams wrt
joint polarizer settings.

However, the *variability* of lambda
isn't a factor wrt determining coincidental
detection.

DrChinese said:
a b and c are the hypothetical settings you could have IF local hidden variables existed. This is what Bell's Theorem is all about. The difference between any two is a theta. If there WERE a hidden variable function independent of the observations (called lambda collectively), then the third (unobserved) setting existed independently BY DEFINITION and has a non-negative probability.

Bell has nothing to do with explaining coincidences, timing intervals, etc. This is always a red herring with Bell. ALL theories predict coincidences, and most "contender" theories yield predictions quite close to Malus' Law anyway. The fact that there is perfect correlation at a particular theta is NOT evidence of non-local effects and never was. The fact that detections are triggered a certain way is likewise meaningless. It is the idea that Malus' Law leads to negative probabilities for certain cases that Bell is about, and that is where his selection of those cases and his inequality comes in.

Suppose we set polarizers at a=0 and b=67.5 degrees. For the a+b+ and a-b- cases, we call that correlation. The question is, was there a determinate value IF we could have measured at c=45 degrees? Because IF there was such a determinate value, THEN a+b+c- and a-b-c+ cases should have a non-negative likelihood (>=0). Instead, Malus' Law yields a prediction of about -10%. Therefore our assumption of the hypothetical c is wrong if Malus' Law (cos^2) is right.

Bell demonstrated that using the variability of lambda
to augment the qm formulation for coincidental
detection gives a result that is incompatible
with qm predictions for all values of theta
except 0, 45 and 90 degrees.

Now, there's at least two ways to interpret Bell's
analysis. Either (1) lambda suddenly stops existing when we
decide to combine individual results, or (2) the variability
of lambda isn't relevant wrt joint detection.

I think the latter makes more sense, and in fact
it's part of the basis for the qm account which
assumes that photons emitted by the same atom
are entangled in polarization via the emission
process. This is why you have an entangled
quantum state prior to detection. So, all you
need to know to accurately predict the
*coincidental* detection curve is the angular
difference between the polarizer settings. And,
as in all such situations where you're analyzing,
in effect, the same light with crossed linear
polarizers the cos^2 theta formula holds.
 
  • #64
Sherlock said:
The cos^2 theta formula isn't incompatible with hidden
variables.

For the context of individual results you can write,

P = cos^2 |a - lambda|,

where P is the probability of detection, a is the
polarizer setting and lambda is the variable
angle of emission polarization.

Ok, that's the probability for the A+ detector to trigger. And what is the probability for the A- detector to trigger, then ? P = sin^2 |a - lambda| I'd say...

cheers,
Patrick.

EDIT:

I played around a bit with this, and in fact, it is not so easy to arrive at a CORRELATION function which is cos^2(a-b). Indeed, let's take your probability which is p(a+) = cos^2(lambda-a).
Assuming independent probabilities, the correlation for an individual event is then given by p(a+) p(b+) = cos^2(lambda-a) sin^2(lambda-b) (the b+ on the other side is the b- on "this" side).

Now, by the rotation symmetry of the problem, lambda has to be uniformly distributed between 0 and 2 Pi, so we have to weight this p(a+) p(b+) with this uniform distribution in lambda:

P(a+, b+) = 1/(2 Pi) Integral (lambda=0 -> 2 Pi) cos^2(lambda-a) sin^2(lambda-b) d lambda.

If you do that, you find:

1/8 (2 - Cos(2 (a-b)) ) = 1/8 (3-2 Cos^2[a-b])

And NOT 1/2 sin^2(a-b) !

I checked this with a small Monte Carlo simulation in Mathematica and this comes out the same. Ok, in the MC I compared a+ with b+ (not with b-), and then the result is 1/8 (2+cos(2(a-b)))

So this specific model doesn't give us the correct, measured correlations...

cheers,
Patrick.

I attach the small Mathematica notebook with calculation...
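
The same Monte Carlo is easy to redo without Mathematica. This is a re-implementation sketch in Python under the same model assumptions (p(a+) = cos^2(lambda - a), independent detections on the two sides); the test angle theta = 30 degrees is an arbitrary choice:

```python
# Re-implementation of the Monte Carlo described above: pulses with
# uniformly random polarization lambda, and INDEPENDENT detection
# probabilities cos^2(lambda - angle) on each side (the a+/b+ comparison).
import math
import random

random.seed(1)

def coincidence_rate(a, b, n=400_000):
    """Fraction of pulses where the a-side and b-side detectors both fire."""
    hits = 0
    for _ in range(n):
        lam = random.uniform(0.0, 2.0 * math.pi)
        if (random.random() < math.cos(lam - a) ** 2
                and random.random() < math.cos(lam - b) ** 2):
            hits += 1
    return hits / n

theta = math.radians(30.0)
mc = coincidence_rate(0.0, theta)
model = (2.0 + math.cos(2.0 * theta)) / 8.0  # the 1/8 (2 + cos 2(a-b)) result
qm = 0.5 * math.cos(theta) ** 2              # quantum prediction for this ++ case
print(mc, model, qm)
```

The simulated rate lands on the 1/8 (2 + cos 2(a-b)) curve, well away from the quantum 1/2 cos^2(a-b), confirming that this particular local model does not reproduce the measured correlations.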
 

Attachments

  • coslaw.zip
  • #65
Sherlock said:
The cos^2 theta formula isn't incompatible with hidden
variables.

For the context of individual results you can write,

P = cos^2 |a - lambda|,

where P is the probability of detection, a is the
polarizer setting and lambda is the variable
angle of emission polarization.

This doesn't conflict with qm. If you knew
the value of lambda, or had any info about
how it was varying (other than just that
it's varying randomly), then you could more
accurately predict individual results (by
individual results I mean the data streams
at one end or the other).

How do we know that there *is* a hidden
variable operating in the individual measurement
context? Because, if you keep the polarizer
setting constant the data stream varies
randomly.

Now, this hidden variable doesn't just
stop existing because we decide to
combine the individual data streams wrt
joint polarizer settings.

Now, there's at least two ways to interpret Bell's
analysis. Either (1) lambda suddenly stops existing when we
decide to combine individual results, or (2) the variability
of lambda isn't relevant wrt joint detection.

I think the latter makes more sense, and in fact
it's part of the basis for the qm account which
assumes that photons emitted by the same atom
are entangled in polarization via the emission
process. This is why you have an entangled
quantum state prior to detection. So, all you
need to know to accurately predict the
*coincidental* detection curve is the angular
difference between the polarizer settings. And,
as in all such situations where you're analyzing,
in effect, the same light with crossed linear
polarizers the cos^2 theta formula holds.

Or Lambda=LHV does not exist, a possibility you consistently pass over. It is a simple matter to show that with a table of 8 permutations on A/B/C, there are no values that can be inserted that add to 100% without having negative values at certain angle settings.

A=___ (try 0 degrees)
B=___ (try 67.5 degrees)
C=___ (try 45 degrees)

Hypothetical hidden variable function: __________ (should be cos^2 or at least close)

1. A+ B+ C+: ___ %
2. A+ B+ C-: ___ %
3. A+ B- C+: ___ %
4. A+ B- C-: ___ %
5. A- B+ C+: ___ %
6. A- B+ C-: ___ %
7. A- B- C+: ___ %
8. A- B- C-: ___ %

It is the existence of C that relates to the hidden variable function. What you describe is just fine as long as we are talking about A and B only. (Well, there are still some problems but there is wiggle room for those determined to keep the hidden variables.) But with C added, everything falls apart as you can see.

You can talk all day long about joint probabilities and lambda, but that continues to ignore the fact that you cannot make the above table work out. If you are testing something else, you are ignoring Bell. After you account for the above table, then your explanation might make sense. Meanwhile, the Copenhagen Interpretation (and MWI) accounts for the facts that LHV cannot.
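
The table above can be made concrete with a little algebra. Assuming the probability of SAME outcomes at relative angle theta follows the cos^2(theta) (Malus-law) form used in this thread, the eight case probabilities are constrained, and the [A+ B+ C-] + [A- B- C+] pair at a=0, b=67.5, c=45 degrees is forced negative. A sketch of that standard computation (my own illustration, not code from the thread):

```python
import math

def match(theta_deg):
    """Probability that two sides give the SAME result at relative
    angle theta, in the Malus-law (cos^2) form assumed in the thread."""
    return math.cos(math.radians(theta_deg)) ** 2

a, b, c = 0.0, 67.5, 45.0
m_ab, m_ac, m_bc = match(b - a), match(c - a), match(b - c)

# Writing p1..p8 for the eight hidden-variable cases in the table above:
#   match(A,B) = p1+p2+p7+p8,  match(A,C) = p1+p3+p6+p8,
#   match(B,C) = p1+p4+p5+p8,  and all eight sum to 1.
# Then m_ac + m_bc = 2(p1+p8) + (1 - m_ab), which gives p1+p8, and
# p2+p7 (the A+B+C- and A-B-C+ cases) follows:
p1_plus_p8 = (m_ac + m_bc - 1.0 + m_ab) / 2.0
p2_plus_p7 = m_ab - p1_plus_p8

print(round(p2_plus_p7, 4))  # about -0.1036: a negative "probability"
```

This is the roughly -10% mentioned earlier in the thread: no non-negative assignment of the eight cases can reproduce all three cos^2 correlations at these angles.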
 
  • #66
I would like to point out, from a previous exchange with vanesch about EPR and many worlds, the following point (1):

Usual "orthodox Copenhagen QM" contains

1) a local hidden variable that corresponds to the specification of the PRECISE endstate when the latter is degenerate. The "standard" Copenhagen QM is a special configuration of the endstate that corresponds to its maximum.

However, there is more :

2) a NON-LOCAL hidden variable.

Let's look at the latter: a non-local measurement is obtained by the operator [tex]\sigma_z\otimes(\vec{\sigma}\cdot\vec{n}_b)[/tex]... hence both sides are measured at once, and there is no identity operator acting alone on the other side (no non-disturbing operator).

Let's consider [tex] \theta_b=0[/tex]

Hence both directions of measurement are the same. Then clearly the only 2 possible endstates are:

|+-> or |-+>, with [tex] p(+-)=|<+-|\Psi>|^2=\frac{1}{2}=p(-+)[/tex]

This seems intuitive and easy to understand.

However, one can see things in another way, by noting that:

[tex]M=\sigma_z\otimes\sigma_z=\left(\begin{array}{cccc} 1 &&&\\&-1&&\\&&1&\\&&&-1\end{array}\right)[/tex]

Hence the eigenvalues of M are 1 and -1, and both are degenerate. 1 corresponds to |A=B> and -1 to |A≠B> (same or different results at A and B).

Here again, the eigenSPACE can be parametrized :

[tex] |same>=\left(\begin{array}{c}\cos(\chi)\\0\\\sin(\chi)\\0\end{array}\right)[/tex]
[tex]|different>=\left(\begin{array}{c}0\\ \cos(\delta)\\0\\ \sin(\delta)\end{array}\right)[/tex]
[tex] |\Psi>=\frac{1}{\sqrt{2}}\left(\begin{array}{c}0\\1\\-1\\0\end{array}\right)[/tex]

So that : [tex] p(different)=|<different|\Psi>|^2=\frac{1}{2}\cos(\delta)^2[/tex]

[tex] p(same)=|<same|\Psi>|^2=\frac{1}{2}\sin(\chi)^2 [/tex]

Where [tex]\chi,\delta[/tex] are GLOBAL HIDDEN VARIABLES...

So that in fact 2p(same)=1 at MAX...what is the interpretation of this, if there is no mistake of course...??
 
  • #67
kleinwolf said:
So that in fact 2p(same)=1 at MAX...what is the interpretation of this, if there is no mistake of course...??

To me the interpretation is that your chi and delta are just variables that parametrize the eigenspaces of the operator sigma_z x sigma_z.

However, I don't understand your calculation. When you write out sigma-z x sigma-z, I presume in the basis (++, -+,+-,--), then I'd arrive at a diagonal matrix which is (1,-1,-1,1)... You seem to have taken the DIRECT SUM, no ?


cheers,
Patrick.
 
  • #68
Yes, you're entirely right... my mistake is unforgivable, since it changes all the subsequent calculation and the interpretation of [tex]\delta[/tex].

Then the result is [tex] p(same)=0\quad p(diff)=\frac{1}{2}(1-\sin(2\chi))[/tex]
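
This corrected result is easy to check numerically. A quick sketch (my own check, not from the thread), assuming the basis ordering (++, +-, -+, --), the singlet state (0, 1, -1, 0)/sqrt(2), and the eigenvector parametrizations |same> = (cos chi, 0, 0, sin chi), |diff> = (0, cos chi, sin chi, 0) for the +1 and -1 eigenspaces of diag(1, -1, -1, 1):

```python
import math

SQ2 = math.sqrt(2.0)
singlet = [0.0, 1.0 / SQ2, -1.0 / SQ2, 0.0]  # basis ordering (++, +-, -+, --)

def prob(state, psi):
    """Born rule |<state|psi>|^2 for real-valued vectors."""
    return sum(s * p for s, p in zip(state, psi)) ** 2

chi = 0.3  # arbitrary parameter of the degenerate eigenspaces
same = [math.cos(chi), 0.0, 0.0, math.sin(chi)]  # +1 eigenspace
diff = [0.0, math.cos(chi), math.sin(chi), 0.0]  # -1 eigenspace

print(prob(same, singlet))  # p(same) = 0
print(prob(diff, singlet), 0.5 * (1.0 - math.sin(2 * chi)))  # p(diff) = (1 - sin 2chi)/2
```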

However, you'll admit there are 2 ways of computing the probabilities with your corrected M:

locally : p(+-)=p(-+)=1/2

globally, the endstate |->_g=(0,cos(a),sin(a),0) gives the probabilities:

p(+-)=cos(a)^2, p(-+)=sin(a)^2... hence, on average or for special values of a, the same as locally... but infinitely many more possibilities are allowed.

Can this be seen in the statistical results of an experiment, and how would one change the value of a experimentally?
 
  • #69
kleinwolf said:
Yes, you're entirely right...my mistake is unforgivable, since this will change all the afterwards calculation and interpretation of [tex]\delta[/tex].

Then the result is [tex] p(same)=0\quad p(diff)=\frac{1}{2}(1-\sin(2\chi))[/tex]

Well, don't you find it funny that the sum of the probabilities for the two possible outcomes doesn't add up to 1? You could think that for each event, you have two possible results: they are the same, or they are different. And if you add up their probabilities, you don't find 1.
It's like flipping a coin: 25% chance you get heads, 30% you get tails :-)

cheers,
Patrick.
 
  • #70
It's just because we don't understand QM. But QM can accommodate everyone: just put [tex]\chi=-\frac{\pi}{4}\Rightarrow p(diff)=1[/tex]

In the other calculation, the sum adds up to 1 in every case...

So what does it mean that the probabilities of the possible outcomes don't add up to 1 in every case for the other calculation?

Just because the correlation of the singlet state, even when measured along the same directions, is not always perfect. Remember: there is a non-local part and a local one... here it's just the non-local one.

Best regards.
 
