# Aspect/Innsbruck Interpretation which respects SR locality

#### JesseM

vanesch said:
The problem is that then, every physical theory is realist (the word loses its meaning). After all, no matter what theory, its formalism will be "real" in the Platonic sense and is the "machinery that produces the numbers I observe", so the "real description" of the universe.

I saw "realism" as a more constrained way, in that "what is observed is really there, locally encoded in the thing we're observing". So if I see an electron with a spin-up, because that's what my detector says, then I say that there really is an electron with a spin up, and it is spin up even if I wouldn't have observed it. And that's of course NOT possible in QM. The only thing that QM says is that the part of my state that corresponds to my conscient observation that I experience is entangled with the part of the electron's state that has spin up, but there can (or cannot) be other parts of the electron's state that are different (say, spin down), but of which I will never hear again. So the "realism" has shifted from "it is really there" to "a relationship between me, my observations, and part of what's there".

cheers,
Patrick.
At one point when I was thinking about how the Everett interpretation could explain the results of the EPR experiment in a local realist way, I came up with the following analogy to show how in principle an Everett-like interpretation could do this:
Say Bob and Alice are each receiving one of an entangled pair of photons, and their decisions about which spin axis to measure are totally deterministic, so the only "splitting" necessary is in the different possible results of their measurements. Label the three spin axes a, b, and c. If they always find opposite spins when they both measure their photons along the same axis, a local hidden-variables theory implies that, when the axes are chosen at random, they must get opposite spins at least 5/9 of the time over all nine setting combinations, which works out to at least 1/3 of the time on the trials where they happen to choose different axes (assuming there's no correlation between their choice of which axes to measure and the states of the photons before they make the measurement). I forget what the actual quantum probability of opposite spins along different axes ends up being in this type of experiment (it is smaller than the local bound, which is the whole point), but for the sake of the copy-counting below let's take it to be 1/3.
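The local-hidden-variables bound can be checked by brute force: with perfect same-axis anticorrelation, each photon pair must carry predetermined spins on all three axes, and there are only eight possible assignments. The enumeration below is my own sketch (axis and function names are mine), not part of the original discussion:

```python
from itertools import product

AXES = range(3)  # the three axes a, b, c

def opposite_fraction(triple, include_same_axis):
    """For a pair whose photon-1 spins on (a, b, c) are `triple` and
    whose photon-2 spins are the negations, return the fraction of
    axis-setting pairs that give opposite results."""
    pairs = [(x, y) for x in AXES for y in AXES
             if include_same_axis or x != y]
    # Alice reads triple[x]; Bob reads -triple[y]; the results are
    # opposite exactly when triple[x] == triple[y].
    hits = sum(1 for x, y in pairs if triple[x] == triple[y])
    return hits / len(pairs)

assignments = list(product((+1, -1), repeat=3))  # 8 hidden-variable states
worst_diff = min(opposite_fraction(t, False) for t in assignments)
worst_all = min(opposite_fraction(t, True) for t in assignments)
print(worst_diff)  # 0.333...: bound of 1/3 conditional on different axes
print(worst_all)   # 0.555...: bound of 5/9 over all nine setting pairs
```

Any local assignment thus gives opposite results at least 1/3 of the time on different-axis trials (5/9 over all trials); a quantum prediction below that bound is what rules local hidden variables out.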

So suppose Bob's decision will be to measure along axis a, and Alice's decision will be to measure along axis c. When they do this, suppose each splits into 6 parallel versions, 3 measuring spin + and 3 measuring spin -. Label the 6 Bobs like this:

Bob 1: a+
Bob 2: a+
Bob 3: a+
Bob 4: a-
Bob 5: a-
Bob 6: a-

Similarly, label the 6 Alices like this:

Alice 1: c+
Alice 2: c+
Alice 3: c+
Alice 4: c-
Alice 5: c-
Alice 6: c-

Note that the decision of how they split is based only on the assumption that each has a 50% chance of getting + and a 50% chance of getting - on whatever axis they choose; no knowledge of what the other one was doing is needed. And again, only when a signal travelling at the speed of light or slower passes from one to the other does the universe need to decide which Alice shares the same world with which Bob. When that happens, they can be matched up like this:

Alice 1 (c+) <--> Bob 1 (a+)
Alice 2 (c+) <--> Bob 2 (a+)
Alice 3 (c+) <--> Bob 4 (a-)
Alice 4 (c-) <--> Bob 3 (a+)
Alice 5 (c-) <--> Bob 5 (a-)
Alice 6 (c-) <--> Bob 6 (a-)

This ensures that each one has a 2/3 chance of finding out the other got the same spin, and a 1/3 chance that the other got the opposite spin. If Bob and Alice were two AIs running on classical computers in real time, you could simulate Bob on one computer and Alice on another, make copies of each according to purely local rules whenever each measured a quantum particle, and then use this type of matching rule to decide which of the signals from the various copies of Alice will be passed on to which copy of Bob. You wouldn't have to make that decision until the information from the computer simulating Alice was actually transmitted to the computer simulating Bob. So using purely local rules you could ensure that, after many trials like this, a randomly selected copy of AI Bob or AI Alice would record the same type of statistics seen in the Aspect experiment, including the violation of Bell's inequality.

Note that you wouldn't have to simulate any hidden variables in this case--you only have to decide what the spin was along the axis each one measured; you never have to decide the spins along the other two unmeasured axes of each photon.
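The two-computer simulation described above can be sketched in a few lines. The copy labels and the matching permutation follow the tables in the post; the code itself is my own illustration, with a random world-selection standing in for "a randomly selected copy":

```python
import random

# Spins recorded by the six local copies (generated with no reference
# to what the partner is doing -- a purely local rule).
BOB   = ['+', '+', '+', '-', '-', '-']   # all measured axis a
ALICE = ['+', '+', '+', '-', '-', '-']   # all measured axis c

# Matching rule, applied only once a light-speed signal connects them:
# Alice copy i shares a world with Bob copy MATCH[i] (per the tables).
MATCH = [0, 1, 3, 2, 4, 5]

def one_trial(rng):
    """Pick a random Alice copy and report whether her world's Bob
    copy recorded the opposite spin."""
    i = rng.randrange(6)
    return ALICE[i] != BOB[MATCH[i]]

rng = random.Random(0)
n = 100_000
opposite = sum(one_trial(rng) for _ in range(n)) / n
print(opposite)  # ~1/3: the different-axis opposite-spin rate
```

Each splitting step uses only local information; the nonlocal-looking correlation appears only in the deferred matching, which is exactly the point of the thought experiment.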
Now, I realize that the various Everett interpretations are not so straightforward--in my computer simulation above, probability has a clear frequentist meaning, while the problem of getting a notion of "probability" out of any version of the Everett interpretation is notoriously difficult, and perhaps it can't work at all without tacking on extra assumptions. Still, I got the impression that this was the general type of explanation that Mark Rubin was aiming for in his papers, where each observation creates a local splitting of the observer, but the observations of spatially separated observers are only mapped to each other once a signal has had the chance to pass between them.

#### chronon

Hans de Vries said:
The alternative local model: we presume that both photons share a property because they are entangled. They are more equal than other seemingly equal photons.
This sounds to me like a standard local hidden variables theory. As such it would have to obey Bell's inequality, and so disagrees with the results of the Aspect experiments.

I have to say that I do tend towards believing in local realism (although I don't like the 'realism' bit; see http://www.chronon.org/Articles/localreal.html), and I hope either for the success of Caroline Thompson's attack on the statistics of the experiments, or for an explanation in terms of subluminal (but maybe 'spooky') transfer of information between the detectors. But I don't see Hans' model as succeeding.

#### Hans de Vries

Gold Member
chronon said:
This sounds to me like a standard local hidden variables theory. As such it would have to obey Bell's inequality, and so disagrees with the results of the Aspect experiments.
The word "hidden variables" is always used in combination with non-
commuting quantities. This is not the case here with the extra degrees
of freedom.

Regards, Hans

#### chronon

Hans de Vries said:
The word "hidden variables" is always used in combination with non-
commuting quantities. This is not the case here with the extra degrees
of freedom. Regards, Hans
So "hidden variables" means something else, and as I've said I don't like the term "realist" being used for this.
vanesch said:
The problem is that then, every physical theory is realist (the word loses its meaning).
Indeed. I don't think that the words "non-realist theory" have any meaning.

But let's be clear about what Bell tells us. Suppose we have two detectors A and B with experimenters who are free to choose the settings of the detectors. If Bell's inequalities are violated, then any model which agrees with the results must have something corresponding to information exchange between A and B.

#### JesseM

chronon said:
But let's be clear about what Bell tells us. Suppose we have two detectors A and B with experimenters who are free to choose the settings of the detectors. If Bell's inequalities are violated, then any model which agrees with the results must have something corresponding to information exchange between A and B.
But the information exchange doesn't necessarily have to be faster than light--see my thought experiment above involving two simulated experimenters, Alice and Bob, who split into multiple copies when they make a measurement; the simulation doesn't have to decide which Alice-copy is mapped to which Bob-copy until there has been time for a light-speed signal to pass between them. The Everett interpretation isn't as simple as this, but the thought experiment shows that in principle you can have a local realist theory (without hidden variables, though) in which the Bell inequalities are violated.

#### vanesch

Staff Emeritus
Gold Member
chronon said:
But let's be clear about what Bell tells us. Suppose we have two detectors A and B with experimenters who are free to choose the settings of the detectors. If Bell's inequalities are violated, then any model which agrees with the results must have something corresponding to information exchange between A and B.
Yes, and that something is the state of the observer who goes from A to B to check the correlations!
The trick is that for the observer at A (the one, for instance, who will do the travelling), there IS NO DEFINITE RESULT at B as long as he hasn't gone there to check. It is when you insist on the definiteness of the remote result, before you can check it, that *something corresponding to information exchange* must travel FTL between A and B. But if everything done at B remains in a superposition until A checks it (and it is only at that moment that A can verify the correlation), then it is whatever travels from A to B (or from B to A, or from A to X and B to X) that carries "the information needed", which corresponds to the choice of the partial state vector made when A had to decide on which branch its sentient experience was going to live (according to the Born rule).

When, however, you insist on the "reality" of the measurement at A and the "reality" of the measurement at B, and you insist that every correlation must have a causal origin, then yes, "something" must travel from A to B and from B to A, forward and backward in time. But as that "reality" cannot be checked by a real FTL transmission of information, that "something" remains very vague! It is then left to the interpreter either to accept such a "something", which will never have any verifiable influence, and call it collapse at a distance, or to accept that there is no such "something" but that macroscopic objects, such as persons, can exist in superposed states.
My preference goes to the second possibility. Not because I find this exciting, but because it introduces the fewest new elements into the existing theory; in particular it avoids introducing a "something" which communicates at a distance, but in such a way that we will never be able to use it to communicate at a distance (relativity obliging), and which happens or not according to whether a physical process is called a measurement or an interaction.

cheers,
Patrick.

#### chronon

vanesch said:
Yes, and that something is the state of the observer which goes from A to B to check the correlations!
The trick is that for the observer at A (the one, for instance who will do the travelling), there IS NO DEFINITE RESULT at B as long as he didn't go there to check.
But what if there are two experimenters, one at A and one at B? If they then get together to compare results, you seem to be claiming that the one from A could say to the one from B: "Yes, I realise that you claim to have made a definite measurement at B, but from my point of view that wasn't a real event. In fact you are just part of a superposition which only resolved itself when we met."

You say you think this involves the least new elements, but it still needs "something" which causes our minds to divide when faced with a superposition, which I find far less acceptable than a non-local influence.

#### vanesch

Staff Emeritus
Gold Member
chronon said:
You say you think this involves the least new elements, but it still needs "something" which causes our minds to divide when faced with a superposition, which I find far less acceptable than a non-local influence.
This is indeed a matter of personal opinion, and you hit the nail on the head: I think that this is the true content of the physics of the Born rule, and that you cannot leave it out and derive it from strictly unitary QM. So you can say that unitary QM is the physics of the universe, and the Born rule is the physics of the mind. It is just a personal, maybe strange, but - I think - coherent viewpoint. But it has one (almost) practical consequence: if you take that viewpoint, you will not bother trying to make an FTL telephone with entangled states, trying to use "collapse at a distance" to force somehow what is proven unable to exist in QM.

Now if you *really* want my personal opinion: I don't take my opinion very seriously, but use it just as a means to reason in QM as it stands. It helps me avoid a lot of traps which are often invited by the small talk that goes into EPR and quantum-erasure papers, and which come down (IMHO) to applying the Born postulate too early.
One thing is sure in QM: when you apply the Born postulate only COMPLETELY AT THE END of your calculation, for the quantity that you want to plot in your paper, you NEVER make an error. If you apply it earlier, you can get away with it if you are very careful. If you apply it too early, you have an FTL telephone, and you have made an error against QM theory.
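The double slit makes the "too early" danger concrete: applying the Born rule only at the end keeps the interference cross-terms, while applying it at the slits adds probabilities instead of amplitudes and wipes out the fringes. The amplitude model below is a deliberately crude plane-wave sketch of mine, not a real diffraction calculation:

```python
import cmath

def slit_amplitude(x, slit_pos, k=20.0):
    """Toy far-field amplitude from one slit (illustration only)."""
    return cmath.exp(1j * k * slit_pos * x)

def p_late(x):
    # Born rule applied once, at the end: amplitudes interfere.
    a = slit_amplitude(x, +0.5) + slit_amplitude(x, -0.5)
    return abs(a) ** 2 / 4

def p_early(x):
    # Born rule applied at the slits: probabilities add, no fringes.
    return (abs(slit_amplitude(x, +0.5)) ** 2
            + abs(slit_amplitude(x, -0.5)) ** 2) / 4

print(p_late(0.0))            # 1.0: bright central fringe
print(p_late(cmath.pi / 20))  # ~0.0: dark fringe
print(p_early(0.0), p_early(cmath.pi / 20))  # both 0.5: flat pattern
```

The two recipes only agree once the which-slit information has been irreversibly recorded somewhere, which is exactly the decoherence story discussed later in the thread.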

I am completely open on the seriousness with which you have to take all this. If indeed, say, gravity DOES somehow induce the collapse in QM, then this changes the picture completely. But I wasn't talking about an interpretation of _that_ hypothetical theory, whose formalism I don't even know. I'm talking about a way of seeing _current_ QM theory. What *really* happens, I just don't know, and I think that everybody who claims to know is fooling him/herself.

cheers,
Patrick.

#### vanesch

Staff Emeritus
Gold Member
chronon said:
But what if there are two experimenters one at A and one at B? If they then get together to compare results, you seem to be claiming that the one from A could say to the one from B. "Yes, I realise that you claim to have made a definite measurement at B, but from my point of view that wasn't a real event. In fact you are just part of a superposition which only resolved itself when we met."
Exactly! As seen from the "I" experience of A.
The "I" experience seems to require that it can only observe a product state of the self and the rest of the universe; call it "self-awareness" or whatever. It seems not to support entanglement with the universe. So that "I" experience HAS to choose when it undergoes a physical interaction which entangles its physical structure with something else, and that choice is dictated by the Born rule.

So if A and B are two friends, and they get apart, do an EPR experiment and then come together, there's a possibility that the "I" experience of A now talks to the CLONE of B, while B's "I" experience is now in another branch. There's no way for A to find out that he's now separated from his friend forever, and has to deal with a clone.

Try to explain that to your wife/girlfriend when cheating on her :tongue:

cheers,
Patrick.

#### chronon

I think you can put it this way:

If the Bell inequalities are still violated when you have two experimenters choosing the detector settings at spacelike separation,
then either
(1) Your model must have something corresponding to FTL communication
or
(2) Your model must include the minds of the experimenters

I agree that this is a matter of personal opinion, but I can think of several arguments against (2)

(a) You are trying to model a system consisting of lasers and detectors. Why should you need to bring in the unrelated subject of mental workings?

(b) It moves towards non-falsifiability. If results disagree with your model you can keep the model, just change what the experimenters minds end up believing.

(c) It can be seen as a delaying tactic. By introducing mental workings, which we are unlikely to understand for several decades, it means we can go on using hand-waving arguments.

vanesch said:
you will not bother trying to make a FTL telephone with entangled states, trying to use "collapse at a distance" in order to force somehow what is proven not to be able to exist in QM
I think people will always try to use the results to make an FTL telephone. In case (1) you can think that maybe you can utilise the underlying FTL information transfer; in case (2) you can think that maybe the recipient gets a random signal, but you can arrange things so that you end up in the universe where they get the message you sent.

#### vanesch

Staff Emeritus
Gold Member
chronon said:
I think you can put it this way:

If the Bell inequalities are still violated when you have two experimenters choosing the detector settings at spacelike separation,
then either
(1) Your model must have something corresponding to FTL communication
or
(2) Your model must include the minds of the experimenters
I would agree with you if these two options were all there was to it. But they are not!

In (1), you still have to explain to me why the photodetector cannot be described by the unitary evolution of the Schroedinger equation describing the photodetector processes. If the essential process is photo-emission (in a photomultiplier), then we know perfectly well how that works: through unitary evolution.
So the "measurement problem" still stands unsolved: which physical processes are "measurements" (and don't follow the Schroedinger equation, but follow the Born rule and the projection postulate), and which physical processes are "interactions" following unitary evolution? When is the emission of an electron by an impinging photon a measurement (and hence collapses a wave function through a yet-unknown FTL process), and when is it a physical process that could still in principle be used to do further QM with?
And on top of that you have to invent an FTL transmission into the past, in such a way that you cannot use it.

In (2), you can say that you know that already: it is the Born rule.
So in (2), you essentially solve the measurement process issue and you do not need to invent an FTL communication.

So the options are:

(1) Your model must have something corresponding to FTL communication AND you must STILL explain in what way a measurement is physically different from an interaction.

(2) Your model of the minds of the experimenters is given by the Born rule.

I think that (2) is much closer to the actual formalism than (1). In fact, (1) expects a NEW theory, with new physics in it. Indeed, whenever the exact physical process responsible for the distinction between a measurement and an interaction is found, it will be possible to test that experimentally (even if our current technology is maybe not yet up to it), because it will be IMPOSSIBLE in principle to obtain superpositions in that case. An application of the Born rule implies that you fix the basis in which you "measure", while unitary evolution leaves you free to work in any basis.

I had a similar discussion in another thread here. Look at your photodetector, say a PM. You will probably agree with me that it is the electron emission from the photocathode that is the "measurement process". All the rest is amplification.
So, you say, when a photon impinges on a photocathode, there is a relatively high probability that an electron is emitted. But what, in this process, is not unitary? In what "photon basis" do we now apply the Born rule? I think it is quite obvious that it is the standard photon-number basis of Fock space (there is no photon, or there is 1 photon, or there are 2 photons...). Does this then mean that in all interactions of light with a metal, we have to work in that basis and apply the Born rule?
Hell no! If that were the case, a metal surface wouldn't work as a mirror! Indeed, to act as one, it needs a coherent light state (a superposition of Fock states) to interact with the sea of electrons, in order for them to emit another coherent state, which is the reflected beam. So now, suddenly, the preferred basis is the basis of coherent states?
You will then say: no, it is when a photon is "absorbed" that you apply the Born rule in the Fock basis. But isn't the mirror action an absorption and coherent re-emission of the coherent states by the sea of electrons, then?
Ah, you will say: it is when ENERGY is transferred between the EM field and the electrons that you have to apply the Born rule. But (OK, I haven't carried out the complete calculation) if that were true, stimulated emission couldn't amplify coherent states in a laser!

And this is the case each time you analyse a "measurement device". Each time you think you've found the pivotal process that "does the measurement", you can find situations where very similar interactions are necessarily described by unitary processes, and superpositions have to remain superpositions in order to get the right answer. So why, in some cases, do these processes "collapse" the wavefunction and send out their FTL signals, and not in others?

So maybe there ARE indeed physical processes that collapse the wavefunction, and maybe there ARE then FTL messages sent out. But you'll agree with me that that is a whole lot of new physics to be added, so we're not talking about an *interpretation* of QM anymore. It is only in such a setting that (1) makes sense.

cheers,
Patrick.

#### gptejms

I have logged in after a couple of days and need to catch up with what has been going on. Firstly, what is FTL? Also, let me ask what IMHO is--I've seen it used quite commonly on Physics Forums.
Coming to the discussion between vanesch and chronon: there was a thread, 'Does decoherence solve the measurement problem?', in s.p.r. which is relevant to the present discussion. Decoherence, as you know, knocks off the off-diagonal elements of the density matrix. So you are left with the diagonal elements, each with its own probability. This probability, I think, is like a classical probability--for a two-state system, you'll find some systems in state |1> and some in state |2>. True, you have only one system to make a measurement on, but this does not imply that the system continues to be in a superposition of the two states.
Besides, in 'most' decoherence situations the coherent superposition is short-lived because of dissipation, and the system mostly ends up in the state of lower energy. So does decoherence not solve the measurement problem?

#### vanesch

Staff Emeritus
Gold Member
gptejms said:
True, you have only one system to make a measurement on, but this does not imply that the system continues to be in a superposition of the two states.
Besides, in 'most' decoherence situations the coherent superposition is short-lived because of dissipation, and the system mostly ends up in the state of lower energy. So does decoherence not solve the measurement problem?
FTL = Faster Than Light
IMHO = In My Humble Opinion

Concerning decoherence: it solves *part* of the measurement problem, namely the "preferred basis" problem. It doesn't solve the "projection postulate" problem; however, it does give an interesting property concerning it.
Note also that in order to apply decoherence theory, you already place yourself in an MWI-like situation, otherwise it has no meaning!

In order to apply decoherence theory, you assume that unitary quantum mechanics is strictly correct up to the macroscopic level, so that your macroscopic measurement instruments become entangled with the states of the microsystem under study.

Typically, you consider the initial situation:

System : (a |s1> + b |s2> + c |s3> )

Measurement: |m0> (ignorance state)

Environment: |e0> (not yet interacted).

The physics of the system and the measurement apparatus is then such that the Hamiltonian of it leads to an entanglement:

(a |s1> |m1> + b |s2> |m2> + c |s3> |m3>)

Here, the "m" states are the so-called pointer states of the measurement device.
They are supposed to indicate macroscopically what the value of the measured quantity is. So, for example, it could be a spin-z measurement on the system.
But here you see the arbitrariness of the procedure: if I had written my original system state in another basis, a linear combination of |s1>, |s2> and |s3>, then I would find "pointer states" that are linear combinations of m1, m2 and m3, and my "z-spin" measurement apparatus would work just as well as a "y-spin" measurement apparatus.

It is here that decoherence theory comes to the rescue, by showing that, when the measurement apparatus interacts with the environment, there is only one way to write
|m1> |e1> + |m2> |e2> + |m3> |e3>
that is stable against remixing (so that the Hamiltonian of the m-e interaction takes on a block-diagonal form in the m-e basis).

As such, the m-basis is NOT arbitrary anymore, but is determined by the Hamiltonian of the interaction between the measurement system and the environment (and corresponds to what we call "classical states").
Once this is fixed, we also see that our trio s-m-e now takes the form:

a |s1>|m1>|e1> + b |s2> |m2> |e2> + c |s3> |m3> |e3>

This is as far as decoherence theory brings you: the interaction between a macroscopic measurement system and the environment leads to a unitary evolution of the system which can only be written in one way.

But you now still have to apply the Born rule in order to say:

with probability |a|^2 I measured s1 (through the pointer state m1), and now my state is to be considered to be |s1> |m1> |e1>

This last part is still a mystery which is NOT explained by any physical interaction. In fact, it cannot, because all physical interactions are described by unitary transformations which are linear, so you can never pick out one component of a superposition that way.

But decoherence theory is important because it tells you that IF YOU ARE GOING TO APPLY THE BORN RULE at the end of your calculation, you may just as well apply it in the pointer basis from the moment of serious interaction with the environment (where the pointer basis comes down to the states that are stable against further mixing with the environment). So it allows you the mathematical shortcut which is always applied--"and we measure the position of the electron, |psi(x)|^2..."--without having to plunge into the details of your measurement apparatus and all that.
That's why people say that decoherence theory solves the measurement problem FAPP (for all practical purposes). But it doesn't solve it in principle, because it USES the Born rule at the end of the calculation.
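The s-m-e story above is easy to check numerically. In this sketch (the amplitudes are made up, and three-level kets stand in for system, pointer and environment), tracing out orthogonal environment states leaves a system-pointer density matrix that is diagonal in the pointer basis, with the Born weights |a|^2, |b|^2, |c|^2 on the diagonal -- but actually picking one of them still requires the Born rule:

```python
import numpy as np

def ket(i, dim):
    """Basis vector |i> in a dim-dimensional space."""
    v = np.zeros(dim, dtype=complex)
    v[i] = 1.0
    return v

a, b, c = 0.6, 0.48, 0.64           # |a|^2 + |b|^2 + |c|^2 = 1
# a |s1>|m1>|e1> + b |s2>|m2>|e2> + c |s3>|m3>|e3>
psi = sum(coef * np.kron(np.kron(ket(i, 3), ket(i, 3)), ket(i, 3))
          for i, coef in enumerate((a, b, c)))

# Reduced density matrix of system+pointer: trace out the environment.
rho = np.outer(psi, psi.conj()).reshape(9, 3, 9, 3)
rho_sm = np.einsum('ikjk->ij', rho)

# Orthogonal environment states kill every off-diagonal (interference)
# term between pointer branches; the Born weights sit on the diagonal.
print(np.allclose(rho_sm, np.diag(np.diag(rho_sm))))   # True
print(np.round(np.diag(rho_sm).real, 4))  # 0.36, 0.2304, 0.4096 among zeros
```

The vanishing off-diagonals are the FAPP part; the step from the diagonal weights to "I measured s1" is the part decoherence leaves untouched.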

cheers,
Patrick.

#### ppnl2

Pardon a quick question.

Hans talks about a pair of photons that are phase entangled at 90 degrees and polarized at 45 degrees. Is that possible?

Usually when you talk about entangled photons, they are randomly polarized. How can you know they are polarized at 45 degrees and also entangled at 90 degrees?

I don't think it affects what he said much. It's just that something about the way he said it seemed wrong. But I get confused easily.

#### chronon

vanesch said:
In (1), you still must explain me why the photodetector cannot be described by the unitary evolution of the schroedinger equation describing the photo detector processes.
I agree that in (1) we are still left with the (local) measurement problem. However, I think that the real mystery in quantum theory is nonlocality, and that the measurement problem is minor in comparison. Several possibilities have been proposed, and the main reason they are rejected seems to be that they don't also deal with nonlocality. (Actually, the real measurement problem I see is that people have convinced themselves that they can calculate much more than is in fact the case; see http://www.chronon.org/Articles/shut_up_and_calculate.html)

So in (1) we still have some work to do, but I would maintain that this is also the case with (2), as you then have to include the behaviour of the human mind in your theory - not a trivial matter. Even if you have a simplified model of minds splitting, you have to explain why they split in the way they do, essentially giving the same problem as in (1).

#### gptejms

Vanesch, let's take a specific example, e.g. the double-slit experiment with electrons. Before I make any measurement, the electron is in the state a|1> + b|2>, and there are off-diagonal elements a^*b and ab^*. Now I make a measurement (with a gamma-ray microscope) at slit 1 and find that the electron passes through it. Let me denote my state by |m>--since this is a macroscopic state, I don't expect it to change significantly by the detection of an electron (maybe some microscopic changes take place which record the fact that the electron has passed). Upon my measurement, the electron decoheres (due to interaction with the gamma photon) from the state a|1>+b|2> to |1>: a becomes equal to 1 and b to 0. Not only do the off-diagonal elements go to zero, but also only one of the diagonal elements survives. What is your environment in this case--why do you need it at all?

What is your version of the above experiment?


#### vanesch

Staff Emeritus
Gold Member
chronon said:
I agree that in (1) we are still left with the (local) measurement problem. However, I think that the real mystery in quantum theory is nonlocality, and that the measurement problem is minor in comparison. Several possibilities have been proposed, and the main reason they are rejected seems to be that they don't also deal with nonlocality.
Well, it is hard to accept non-locality, which implies non-causality through relativity, especially when it is not strictly necessary (after all, in the QM formalism there is no real FTL communication or information transmission allowed). But I still claim that the really big problem in QM is the measurement problem. After all, it is not because you cannot explicitly, without approximations, CALCULATE several measurement processes that you cannot know that they must be unitary. If they have a Hamiltonian description, then they ARE unitary.

So in (1) we still have some work to do, but I would maintain that this is also the case with (2), as you then have to include the behaviour of the human mind in your theory - not a trivial matter. Even if you have a simplified model of minds splitting, you have to explain why they split in the way they do, essentially giving the same problem as in (1).
No, I really don't, if I say that the essence of their behaviour is given by the Born rule. I don't have to say WHY it follows the Born rule; it is a fundamental postulate, just as unitary evolution is a fundamental postulate.

Fundamental postulate I: "The universe evolves according to the Schroedinger equation" i hbar d/dt psi = H psi

Fundamental postulate II: "a sentient being gets its subjective experience of its interactions with the universe through random assignment to one term in the superposition, according to the Born rule". Decoherence ensures that this basis is well defined.

As such, I didn't have to touch the formalism of QM at all. I just fixed *where* the Born rule has to be applied, and because it corresponds not to a physical process but to a subjective mental one, I don't have the difficulty of needing a simultaneous unitary description of it, as I would if I fixed the Born rule application earlier in the measurement chain.

I really think it is the minimally invasive interpretation of the formalism of QM.
Moreover, I get as a bonus that I do not need any extra non-local stuff.

cheers,
Patrick.

#### vanesch

Staff Emeritus
Gold Member
gptejms said:
Upon my measurement, the electron decoheres (due to interaction with the gamma photon) from the state a|1>+b|2> to |1>: a becomes equal to 1 and b to 0.
This is not a unitary process. After all, a unitary operator is linear:

U (a |1> + b|2> ) = a U |1> + b U |2>

So there's no way to change these a and b through unitary processes. THIS is the central problem of the measurement process. You can make your physical process as complicated as you want; you cannot get around the fact that the time evolution operator is a linear operator on the states, and that, by definition, superpositions survive the application of U.
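This linearity is easy to check numerically for any unitary whatsoever; the random unitary below is just an arbitrary stand-in of my own choosing:

```python
import numpy as np

rng = np.random.default_rng(0)

# An arbitrary unitary U on a two-state system (QR of a random matrix).
m = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
u, _ = np.linalg.qr(m)

a, b = 0.6, 0.8                      # |a|^2 + |b|^2 = 1
ket1 = np.array([1.0, 0.0], dtype=complex)
ket2 = np.array([0.0, 1.0], dtype=complex)
psi = a * ket1 + b * ket2

# Linearity: evolving the superposition equals superposing the evolutions.
lhs = u @ psi
rhs = a * (u @ ket1) + b * (u @ ket2)
print(np.allclose(lhs, rhs))         # True for every unitary U

# And unitarity preserves the weights: no U turns a into 1 and b into 0.
weight = abs((u @ ket1).conj() @ lhs) ** 2
print(round(weight, 6))              # 0.36, i.e. |a|^2, unchanged
```

However the apparatus is modelled, as long as its evolution has this linear unitary form, the branch weights |a|^2 and |b|^2 cannot be altered, which is exactly the point about a|1>+b|2> never unitarily becoming |1>.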

cheers,
Patrick.

#### gptejms

I expected you to tell me what the environment is, and to give a more detailed explanation in which you would include the states of the observer |m1>, |m2>, etc. (and possibly also of the environment).
Anyway, coming to your objection: for an atom radiating, the state is a(t)|e> + b(t)|g>, and a(t) goes from 1 to 0 and b(t) from 0 to 1---isn't this a unitary process?
Coming back to my double-slit experiment, to what state does decoherence take the initial superposition a|1>+b|2>?

#### vanesch

Staff Emeritus
Gold Member
gptejms said:
I expected you to tell me what the environment is, and to give a more detailed explanation in which you would include the states of the observer |m1>, |m2>, etc. (and possibly also of the environment).
Anyway, coming to your objection: for an atom radiating, the state is a(t)|e> + b(t)|g>, and a(t) goes from 1 to 0 and b(t) from 0 to 1---isn't this a unitary process?
Coming back to my double-slit experiment, to what state does decoherence take the initial superposition a|1>+b|2>?
I'm a bit confused by what you say; so let's first get tuned :-)
In your previous message, you wrote things I don't understand:

Before I make any measurement, the electron is in state a|1> + b|2> and there are off-diagonal elements a^*b and ab^*. Now I make a measurement (with a gamma ray microscope) at slit 1 and find that the electron passes through it. Let me denote my state by |m>---since this is a macroscopic state, I don't expect it to change significantly by the detection of an electron (maybe some microscopic changes take place which have recorded the fact that the electron has passed).
I don't know what you mean by these off-diagonal elements a^*b ... ?

If you "make a measurement and find", you have already left the superposition. Normally, you would say that your gamma ray microscope interacts with your electron, and the combined system would get into the state:

a |1> |gammamicroscope_saw_electron> + b |2> |gammamicroscope_didnt_see_electron>

Also, your *macroscopic state* after having observed the display of the gamma microscope is significantly different according to whether you saw the electron or didn't. In fact, chances are these are orthogonal states.
Indeed, if you write your state as:
|stateofmyfirstproton>|stateofmysecondproton>...|stateofmylastneutron>
it is sufficient for ONE SINGLE PROTON to be at a slightly different place between the two possibilities for your entire states to be orthogonal to each other.

So the end state is:
a |1> |gammamicroscope_saw_electron>|yousawdisplayred> + b |2> |gammamicroscope_didnt_see_electron> |yousawdisplaygreen>

cheers,
patrick.

#### gptejms

vanesch said:
I'm a bit confused by what you say ; so let's first get tuned :-)
In your previous message, you wrote things I don't understand:

I don't know what you mean with this off-diagonal elements a^*b ... ?
$$|\psi> = a|0> + b|1>$$

density matrix $$\rho$$ is
$$\rho = |\psi\rangle\langle\psi|$$, so $$\langle 0|\rho|1\rangle = ab^*$$ and $$\langle 1|\rho|0\rangle = a^*b$$
These off-diagonal elements go to zero by the act of measurement. Decoherence also leads to decay of the off-diagonal elements--which is why it's said to provide a solution to the measurement problem.

So the end state is:
a |1> |gammamicroscope_saw_electron>|yousawdisplayred> + b |2> |gammamicroscope_didnt_see_electron> |yousawdisplaygreen>
So do you mean that the off-diagonal elements survive as such in this mega-superposition?

#### vanesch

Staff Emeritus
Gold Member
gptejms said:
$$|\psi> = a|0> + b|1>$$

density matrix $$\rho$$ is
$$\rho = |\psi\rangle\langle\psi|$$, so $$\langle 0|\rho|1\rangle = ab^*$$ and $$\langle 1|\rho|0\rangle = a^*b$$
These off-diagonal elements go to zero by the act of measurement. Decoherence also leads to decay of the off-diagonal elements--which is why it's said to provide a solution to the measurement problem.
Ah, ok, it was about the components of the density matrix associated with this statevector.

So do you mean that the off-diagonal elements survive as such in this mega-superposition?
Yes, of course they survive, _in the density matrix of the entire system_, including the environment. They have to, by unitarity.
What happens is that when you now calculate the LOCAL DENSITY MATRIX, limited to the system, by taking the partial traces, in this LOCAL density matrix the off-diagonal elements become zero.

Let us limit the description to:

a |1> |gammamicroscope_saw_electron>|yousawdisplayred> + b |2> |gammamicroscope_didnt_see_electron> |yousawdisplaygreen>, and let us include the gamma microscope state in the "you saw" state, to simplify notation:

a |1> |yousawelectron> + b |2> |youdidntsee>

The overall density matrix, in the basis:

|1>|yousaw> , |1>|youdidntsee>, |1>|yourotherstates...> ...
|2> |yousaw> , |2>|youdidntsee>,|2>|yourotherstates...> ...

takes on the form of 4 blocks:

rho_11 rho_12
rho_21 rho_22

with rho_11 the coefficients of |1>|you...><1|<you...| in |state><state|;
rho_12 the coefficients of |2>|you...><1|<you...|;
etc...

The coefficient a b* appears off-diagonal in rho_21, in the term:
|1>|yousaw><2|<youdidntsee|

The coefficient |a|^2 appears in rho_11 on the diagonal:
|1>|yousaw><1|<yousaw|

etc...

To get back to the local density matrix, we have to take the traces of these 4 component matrices (that's what partial tracing out means).

So you see that the trace of rho_11 will essentially be |a|^2,
that the traces of rho_12 and rho_21 will be 0 (because a b* appears off-diagonal), and that the trace of rho_22 will essentially be |b|^2.

Of course, the evolution of the states |you...> will make the off-diagonal components wiggle, but if the |you...> space is big enough, they will never gain significant components on the diagonal.

So you see that after tracing out, the LOCAL density matrix is reduced to:

|a|^2   0
0       |b|^2

if the entanglement is perfect, as the state describes. But this is a PARTIALLY TRACED OUT density matrix, and in order for it to be interpreted as probabilities, you already USE the Born rule, saying that you have summed over the probabilities of all the potential, mutually exclusive cases of the environment. That's why this local density matrix is called an "improper mixture": it behaves as a statistical mixture only if:
- we limit ourselves to the local observables
- we have assumed the Born rule for the total system

This is decoherence in a nutshell....
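For concreteness, here is a toy numerical version of this partial tracing (my own illustration; a 2-state "environment" stands in for the enormous |yousaw>/|youdidntsee> states):

```python
import numpy as np

# Toy decoherence sketch: entangled state a|1>|saw> + b|2>|didnt>,
# then trace out the environment to get the local density matrix.
a, b = 0.6, 0.8
ket1, ket2 = np.array([1., 0.]), np.array([0., 1.])   # system basis |1>, |2>
saw, didnt = np.array([1., 0.]), np.array([0., 1.])   # environment basis

# State vector in H_system x H_environment (kron = tensor product)
psi = a * np.kron(ket1, saw) + b * np.kron(ket2, didnt)
rho = np.outer(psi, psi.conj())                        # full 4x4 density matrix

# Partial trace over the environment: reshape to indices (i, m, j, n)
# and sum the diagonal over the environment indices m = n.
rho_local = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)

# The off-diagonal coefficients a b* sit off-diagonal within the blocks,
# so they trace to zero; only diag(|a|^2, |b|^2) survives locally.
assert np.allclose(rho_local, np.diag([0.36, 0.64]))
```

The full 4x4 matrix still carries the a b* terms (unitarity demands it); they simply become invisible to observables that act on the system alone.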

cheers,
Patrick.

Last edited:

#### vanesch

Staff Emeritus
Gold Member
I would like to add something, concerning this partial tracing out. Imagine I have 2 systems, one with 2 states |1> and |2> and another one with 3 states, |a>,|b> and |c>.
We construct the tensor basis:

{|1>|a>, |1>|b>,|1>|c>,|2>|a>,|2>|b>,|2>|c>}

Let us assume that we have a pure, but entangled state |psi>, written in this basis, and given by the 6-tuple {u_1a, u_1b, ..., u_2c} (with the u_xy complex numbers, normalized).

Imagine now that we have an observable which only observes something on system 1. This means that it can be written as: O x 1 (tensor product of operators), and let us imagine that O has as eigenstates |1> and |2>, with eigenvalues o1 and o2. This means that the eigenstates of Ox1 are:
|1>|a>, |1>|b> and |1>|c> with eigenvalue o1
and
|2>|a>, |2>|b> and |2> |c> with eigenvalue o2

The probability of having eigenvalue o1 is the sum of the probabilities of having |1>|a>, |1>|b> and |1>|c> , so this will be |u_1a|^2 + |u_1b|^2 + |u_1c|^2.

And that is nothing else but the trace of the 1-1 block in the overall density matrix |psi><psi| as you can easily verify.
What is very important is that a trace is invariant under a change of basis. So if we would have taken another basis for the H2 system we would find exactly the same trace of the 1-1 block. And this is the proof that the measurement on system 2 (choosing another basis for the second system) has no influence on the local measurement O.

But you also see that in order to give a meaning to this partial trace, we had to apply the Born rule on the 6-state space H1 x H2 ; once these were probabilities, we could then sum them.

The off-diagonal elements in the local density matrix play a role when we have a local observable O which doesn't diagonalize in the |1>, |2> basis. You can work the algebra out if you want to, it is a bit tedious.
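Here is the 2x3 example worked out numerically (my own sketch; the state and the unitary on the second system are random):

```python
import numpy as np

# Prob(o1) = trace of the 1-1 block of |psi><psi|, and that trace is
# invariant under any change of basis on the second system.
rng = np.random.default_rng(0)
u = rng.normal(size=6) + 1j * rng.normal(size=6)
u = u / np.linalg.norm(u)          # state {u_1a, ..., u_2c} in H1 x H2

# Reshape |psi><psi| to indices (i, m, j, n): i, j label |1>,|2>; m, n label |a>,|b>,|c>
rho = np.outer(u, u.conj()).reshape(2, 3, 2, 3)

# Prob(o1) summed directly over the eigenstates |1>|a>, |1>|b>, |1>|c>:
p1_direct = np.sum(np.abs(u[:3]) ** 2)

# Same number as the trace of the 1-1 block of the density matrix:
p1_trace = np.trace(rho[0, :, 0, :]).real
assert np.isclose(p1_direct, p1_trace)

# Change basis on system 2 with a random unitary V (QR of a random matrix):
V = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))[0]
u2 = np.kron(np.eye(2), V) @ u     # acts only on the second system
rho2 = np.outer(u2, u2.conj()).reshape(2, 3, 2, 3)

# The trace of the 1-1 block is unchanged: the local measurement O
# cannot be influenced by what is done to system 2.
assert np.isclose(np.trace(rho2[0, :, 0, :]).real, p1_trace)
```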

cheers,
Patrick.

#### gptejms

Excellent posts---cheers Patrick!
So is this your conclusion: decoherence 'theory' (I see some people call it a theory) assumes Born's rule in its derivation, so it really does not explain much. Use Born's rule to get a local/reduced density matrix that does not have off-diagonal elements; then say that since you now have only a statistical mixture, you have solved the measurement problem--the argument is flawed. Is this what you are saying?
Because of interactions with the measuring device/environment, the phase information gets dispersed, but is never lost. Superpositions stay---so an MWI kind of thing is needed(?). But my problem is: once you have included the measuring device as well as the environment (which includes the one who wrote down the wavefunction, plus everyone else) in your wavefunction, who is left to make a measurement?

Last edited:

#### vanesch

Staff Emeritus
Gold Member
gptejms said:
Excellent posts---cheers Patrick!
Thanks

So is this your conclusion: decoherence theory (I see some people call it that) assumes Born's rule in its derivation, so it really does not explain much. Use Born's rule to get a local/reduced density matrix that does not have off-diagonal elements; then say that since you now have only a statistical mixture, you have solved the measurement problem--the argument is flawed. Is this what you are saying?
No, decoherence people, like Zeh, realise this and say so. Decoherence DOES show us something, namely the "preferred basis", the one in which product states remain product states under time evolution; this is determined by the character of the interaction between the system and the environment, and always leads to a basis of states which "look classical" (like position states for particles, or coherent field states for EM fields). THAT is the real contribution of decoherence.
It allows you to make the shortcut of applying the Born rule on the system level instead of having to work out the complicated QM of the interaction with the measurement instrument.
But, as you say, considering that it solves the measurement problem is based upon circular reasoning.

Because of interactions with the measuring device/environment, the phase information gets dispersed, but is never lost. Superpositions stay---so an MWI kind of thing is needed(?). But my problem is: once you have included the measuring device as well as the environment in your wavefunction, who is left to make a measurement, to collapse the wavefunction?
Hehe, you're beginning to see the issue! Who's left? My way to solve it is:
my consciousness is left :-) But you got to the gist of the problem, I think.

cheers,
Patrick.