# Density Matrix and State Alpha

1. Jan 7, 2016

### jlcd

There is something I don't quite understand and want clarified. See the Tegmark and Wheeler article "100 Years of the Quantum":

http://arxiv.org/pdf/quant-ph/0101077v1.pdf

Refer to page 6, where part of the quote reads:

"so if we could measure whether the card was in the alpha
or beta-states, we would get a random outcome. In
contrast, if we put the card in the state “face up”, it
would stay “face up” in spite of decoherence. Decoherence
therefore provides what Zurek has termed a “predictability
sieve”, selecting out those states that display
some permanence and in terms of which physics has predictive
power."

But note that the alpha and beta states still exist. What does it mean that we would get a random outcome? Is it saying that, because the outcome is random, the state doesn't occur in our world, yet somehow it still exists?
My math is very high school level, so please stick to what's in the article above and don't make me read a treatise on density matrices that would confuse me more. I just want to understand this alpha state (a superposition of up and down). In the Schrödinger's cat analogy, alpha is "cat dead plus cat alive". The density matrix would still produce it, but only as a random outcome? Again, does that mean we just don't perceive it but it's still there? (Let's not use Copenhagen, where they made the ad hoc classical cut, sweeping it under the rug; let's use the all-quantum formalism, where the system, environment, and measuring device are all quantum.) Thanks.

2. Jan 8, 2016

### naima

I think you will gain nothing by thinking about macroscopic cats! Begin with spins.
There is no more mystery in |0> + |1> than in |0> or in |1>.
You can translate this into density-matrix language by saying that
there is no more mystery in (|0> + |1>)(<0| + <1|) than in |0><0| or |1><1|.
These are pure states, and you will get a perfectly predictable output for each of them if you choose the right orientation for the Stern-Gerlach apparatus.

Last edited: Jan 8, 2016
3. Jan 8, 2016

### jlcd

Really? So, replacing "face up" and "face down" with spin up and spin down in the following:

"so if we could measure whether the electron was in the alpha or beta states, we would get a random outcome. In contrast, if we put the electron in the state "spin up", it would stay "spin up" in spite of decoherence. Decoherence therefore provides what Zurek has termed a "predictability sieve", selecting out those states that display some permanence and in terms of which physics has predictive power."

Does this mean the alpha state of spin up plus spin down is a legal quantum state that we just can't detect with our sensors, but is still there for all intents and purposes? Is this what the article meant? Can any bona fide Science Advisor expand on it? Thanks.

4. Jan 8, 2016

### Staff: Mentor

By "the alpha state of spin up and down" do you mean the state that is an equal superposition of spin-up and spin-down, so that a measurement of spin along the vertical axis has an equal probability of yielding either result? If so, that is a perfectly sensible quantum state - it's the state in which a spin measurement along the horizontal axis has a 100% probability of finding spin-up, and it's the state the particles in one output beam of a horizontally oriented Stern-Gerlach device will be in.

As the above suggests, we can write the wave function in many different forms, some of which are superpositions and some of which are not. This is analogous to the way that I can describe a vector pointing to the northwest as either "a vector pointing to the northwest" or "a superposition (sum) of a vector pointing north and a vector pointing west"; or I could write an ostensibly unsuperposed north-pointing vector as either "a vector pointing north" or "a superposition (sum) of a vector pointing northwest and a vector pointing northeast".

If we're going to measure the spin on a vertical axis, we should write the wave function as a superposition of spin-up and spin-down; if we're going to measure the spin on a horizontal axis we should write the wave function as a superposition of spin-left and spin-right. In that form it's particularly easy to calculate the probability of getting either possible outcome; but in any case after the measurement the particle will be in the state corresponding to a 100% probability of getting whatever you measured.
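As a toy illustration of the two cases above (my own sketch, not from the thread, using the standard two-component representation of spin states), a few lines of NumPy show the same state giving a random outcome on one axis and a certain outcome on the other:

```python
import numpy as np

# Basis states for spin along the vertical (z) axis.
up = np.array([1.0, 0.0])
down = np.array([0.0, 1.0])

# "State alpha": the equal superposition (|up> + |down>) / sqrt(2).
alpha = (up + down) / np.sqrt(2)

# Born rule: probability of an outcome is the squared magnitude of the
# amplitude in that basis.  Vertical-axis measurement: 1/2 each, i.e. random.
p_up = abs(np.dot(up, alpha)) ** 2
p_down = abs(np.dot(down, alpha)) ** 2

# The same state in the horizontal (x) basis:
# |right> = (|up> + |down>)/sqrt(2), |left> = (|up> - |down>)/sqrt(2).
right = (up + down) / np.sqrt(2)
left = (up - down) / np.sqrt(2)
p_right = abs(np.dot(right, alpha)) ** 2   # probability 1: a certain outcome
p_left = abs(np.dot(left, alpha)) ** 2     # probability 0

print(p_up, p_down, p_right, p_left)
```

So the same state alpha is "a random outcome" for a vertical measurement and "100% spin-right" for a horizontal one, exactly the vector-decomposition point made above.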

(By the way - if your math is "very high school" you might want to mark your threads B instead of I to say that you don't want answers that assume you've had a few years of college-level math. I've reset the level in this thread).

5. Jan 8, 2016

### jlcd

Let me go back to the original article, as naima's spins didn't directly answer my questions. Please refer to Tegmark and Wheeler, http://arxiv.org/abs/quant-ph/0101077:

"The second unanswered question in the Everett picture was more subtle but equally important: what physical mechanism picks out the classical states — face up and face down for the card — as special? The problem was that from a mathematical point of view, quantum states like "face up plus face down" (let’s call this "state alpha") or "face up minus face down" ("state beta", say) are just as valid as the classical states "face up" or "face down".

So just as our fallen card in state alpha can collapse into the face up or face down states, a card that is definitely face up — which equals (alpha + beta)/2 — should be able to collapse back into the alpha or beta states, or any of an infinity of other states into which "face up" can be decomposed. Why don’t we see this happen?

"Decoherence answered this question as well. The calculations showed that classical states could be defined and identified as simply those states that were most robust against decoherence. In other words, decoherence does more than just make off-diagonal matrix elements go away. In fact, if the alpha and beta states of our card were taken as the fundamental basis, the density matrix for our fallen card would be diagonal to start with, of the simple form

density matrix = [ 1  0 ]
                 [ 0  0 ]

since the card is definitely in state alpha. However, decoherence would almost instantaneously change the state to

density matrix = [ 1/2   0  ]
                 [  0   1/2 ]

so if we could measure whether the card was in the alpha or beta-states, we would get a random outcome. In contrast, if we put the card in the state "face up", it would stay "face up" in spite of decoherence. Decoherence therefore provides what Zurek has termed a "predictability sieve", selecting out those states that display some permanence and in terms of which physics has predictive power."
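To see how the two quoted matrices fit together, here is a toy NumPy sketch (my own illustration, not from the article): start from the alpha-state density matrix, rewrite it in the face-up/face-down (pointer) basis, delete the off-diagonal terms by hand to mimic decoherence, and transform back, recovering the diag(1/2, 1/2) matrix in the quote.

```python
import numpy as np

# Change of basis between the face-up/face-down basis and the alpha/beta
# basis: |alpha> = (|up> + |down>)/sqrt(2), |beta> = (|up> - |down>)/sqrt(2).
# This matrix is real, symmetric, and its own inverse.
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)

# Card prepared in state alpha: diagonal in the alpha/beta basis.
rho_ab = np.array([[1.0, 0.0], [0.0, 0.0]])

# The same state written in the face-up/face-down (pointer) basis:
# large off-diagonal (interference) terms, [[1/2, 1/2], [1/2, 1/2]].
rho_ud = H @ rho_ab @ H

# Decoherence kills the off-diagonal terms in the POINTER basis.
rho_ud_decohered = np.diag(np.diag(rho_ud))

# Back in the alpha/beta basis: [[1/2, 0], [0, 1/2]], as in the quote,
# so an alpha-vs-beta measurement now gives a random outcome.
rho_ab_decohered = H @ rho_ud_decohered @ H
print(rho_ab_decohered)
```

The key design point of the toy model is that decoherence is diagonal in one particular (pointer) basis; a state diagonal in any other basis gets scrambled into the maximally mixed diag(1/2, 1/2).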

Does the above mean that if the result is a random outcome, it doesn't tally with the classical world and so isn't available to classical-world apparatus, yet the alpha state of face up plus face down still somehow exists?

6. Jan 8, 2016

### Staff: Mentor

It means that if you were to prepare a card in the state alpha (a superposition of face-up and face-down), it would very quickly (immediately, for all practical purposes) evolve to either a face-up state or a face-down state. Continued evolution from there will leave it either face-up or face-down; it will not return to state alpha (or any other state in which it is not unambiguously face-up or face-down according to how it evolved).

This behavior follows from the fact that the card is made up of a very large (maybe 10^24) number of individual particles - that's what makes it subject to decoherence and makes it behave as a "classical" object. The behavior of a single particle in superposition is quite different.

7. Jan 9, 2016

### jlcd

But the above assumes a classical divide or classical bias. In a purely quantum world, both "face up" and the alpha state of face up plus face down are valid states. The density matrix is just a tool for probabilities, a classical device, so how could it choose the preferred basis? Are you familiar with the factorization problem? Do you agree there is such a problem? Demystifier described it thus: "Factorization should be a big problem only for those who take MWI very seriously (even if they do not realize it). But for all the others factorization is not really a problem, because in other interpretations of QM one can always identify a 'natural' factorization... It's a problem if you think you can use any factorization. But in other interpretations you don't use any factorization. You use the 'natural' factorization, which is essentially unique."

Can anyone correct me if this is the essence of the factorization problem? Those who sweep it under the rug are using a classical cut or bias to prefer a "natural" factorization.

8. Jan 9, 2016

### Staff: Mentor

There's no assumed classical/quantum divide there. The behavior I'm describing is quantum-mechanical all the way; the classical behavior of the playing card is the result of combining the individual quantum-mechanical behavior of each of the enormous number of particles that make up a playing card.

You've already said that you don't want the math... so you'll have to take the word of people who have done the math when they say that if you do a purely quantum-mechanical calculation of the evolution of a superposed system of many particles, the superposition will very rapidly evolve into states that behave classically.
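One way to get a feel for why particle number matters (a toy model, not the actual many-body calculation; the 0.99 per-particle overlap is an assumed number): the interference term of the reduced density matrix is suppressed by the product of the overlaps that each environment particle imprints on the two branches, and such a product collapses toward zero extremely fast as the particle count grows.

```python
# Toy model of decoherence scaling: the off-diagonal (interference) term
# is multiplied by the overlap of the two branches' environment states.
# For N independent environment particles this is a product of N
# per-particle overlaps, each assumed here to be 0.99 (barely
# distinguishing the branches on its own).
per_particle_overlap = 0.99

for n in (1, 10, 100, 1000, 10000):
    coherence = per_particle_overlap ** n
    print(f"N = {n:>5}: remaining coherence ~ {coherence:.3e}")
```

Even with each particle almost blind to the difference between branches, by N = 10000 the surviving coherence is astronomically small, which is the sense in which superposition effects become unnoticeable for macroscopic objects.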

If you can get hold of David Lindley's book "Where does the weirdness go?", it's an excellent non-technical explanation of how decoherence works and how quantum mechanics predicts that superposition effects become less and less noticeable as the number of particles in the system increases.

9. Jan 9, 2016

### jlcd

"In a paper entitled "Nothing happens in the Universe of the Everett Interpretation" (http://arxiv.org/abs/1210.8447), Jan-Markus Schwindt has presented an impressive argument against the many-worlds interpretation of quantum mechanics.

The argument he presents is not new, but, in my opinion, nobody ever presented this argument so clearly.

In a nutshell, the argument is this:
To define separate worlds of MWI, one needs a preferred basis, which is an old, well-known problem of MWI. In modern literature, one often finds the claim that the basis problem is solved by decoherence. What J-M Schwindt points out is that decoherence is not enough. Namely, decoherence solves the basis problem only if it is already known how to split the system into subsystems (typically, the measured system and the environment). But if the state in the Hilbert space is all that exists, then such a split is not unique. Therefore, MWI, claiming that the state in the Hilbert space is all that exists, cannot resolve the basis problem, and thus cannot define separate worlds. Period! One needs some additional structure not present in the states of the Hilbert space themselves.

As reasonable possibilities for the additional structure, he mentions observers of the Copenhagen interpretation, particles of the Bohmian interpretation, and the possibility that quantum mechanics is not fundamental at all."

10. Jan 9, 2016

### jlcd

Based on reading Demystifier's references and others: just because something is composed of a very large (maybe 10^24) number of individual particles doesn't mean it is subject to decoherence. Remember, in many worlds everything is pure unitary evolution; there is no decoherence without collapse first. So you must first assume there is collapse; without it, the card won't even be a card. Is this reasoning correct? Also, the density matrix is not just a tool for probability; it has built-in collapse to make the off-diagonal (interference) terms very small, as I read in a paper. Do you agree with everything I described here? I'm just relating what I read. Thanks.

11. Jan 9, 2016

### Staff: Mentor

Wherever you got that idea from, it's wrong. Decoherence is the result of unitary and collapse-free evolution from the initial state.

12. Jan 9, 2016

### jlcd

No. I studied this over several nights and weeks. This is a misconception that has confused even mainstream experts. It's very subtle, but many experts, like Demystifier, PeterDonis, etc., have already agreed with it. Please read the following in detail (and if you don't agree with it, why not?):

http://transactionalinterpretation....ally-split-in-the-many-worlds-interpretation/

http://philsci-archive.pitt.edu/10757/1/Einselection_and_HThm_Final.pdf

"The goal of decoherence is to obtain vanishing of the off-diagonal terms, which corresponds to
the vanishing of interference and the selection of the observable R as the one with respect to
which the universe purportedly ‘splits’ in an Everettian account. As observed by Bub, since the
resulting mixed state is an improper one, it does not license the interpretation of the system’s
density matrix as representing the determinacy of outcome perceived by observers -- but that is a
separate issue. The aim of this paper is to point out that the vanishing of the off-diagonal terms is
crucially dependent on an assumption that makes the derivation circular.

As is apparent from (5), the off-diagonal terms are periodic functions that oscillate in value as a function of time. However, as Bub notes, z(t) will have a very small absolute value, providing for very fast vanishing of the off-diagonal elements and a very long recurrence time for recoherence when n is large, based on the assumption that the initial states of the n environmental subsystems and their associated coupling constants are random. But the randomness appealed to here is not licensed by the Everettian program, which states that the quantum state of the universe is that of a closed system that evolves only unitarily. The 'randomness' of the environmental systems does not arise from within the Everettian picture. When one forecloses that assumption, the decoherence argument fails – and with it, 'einselection,' which depends on essentially the same argument to obtain a preferred macroscopic observable for 'pointers.'"

13. Jan 10, 2016

### Staff: Mentor

I agree with what you've quoted above, but I see nothing there that supports your claim that "there is no decoherence without collapse first."

14. Jan 10, 2016

### jlcd

Well. Mathematically, decoherence is the decay of the off-diagonal elements of the system density matrix in a specific basis. Now the paper is saying: "The crucial point that does not yet seem to have been fully appreciated is this: in the Everettian picture, everything is always coherently entangled, so pure states must be viewed as a fiction -- but that means that it is also fiction that the putative 'environmental systems' are all randomly phased. In helping themselves to this phase randomness, Everettian decoherentists have effectively assumed what they are trying to prove: macroscopic classicality only 'emerges' in this picture because a classical, non-quantum-correlated environment was illegitimately put in by hand from the beginning. Without that unjustified presupposition, there would be no vanishing of the off-diagonal terms and therefore no apparent diagonalization of the system's reduced density matrix that could support even an approximate, 'FAPP' mixed state interpretation."

Therefore, without collapse to randomize the phases in a picture where initially "everything is always coherently entangled", there is no decay of the off-diagonal elements of the system density matrix, and hence no decoherence.
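The phase-randomness step being debated here can be illustrated numerically (a toy sketch of my own; the uniformly random phases are precisely the assumption the quoted paper says is unjustified): averaging one phase factor per environmental subsystem drives the off-diagonal term toward zero.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Each environmental subsystem contributes a phase factor e^{i*theta} to
# the off-diagonal term.  IF the thetas are uniformly random (the disputed
# assumption), their average has magnitude of order 1/sqrt(n_env), so
# interference is washed out for a large environment.
n_env = 100_000
thetas = rng.uniform(0.0, 2.0 * np.pi, size=n_env)
off_diagonal = np.mean(np.exp(1j * thetas))
print(abs(off_diagonal))   # small, on the order of 1/sqrt(n_env)
```

Note what the sketch does and doesn't show: random phases do suppress the off-diagonal term, but the randomness itself is put in by hand via `rng`, which is exactly the circularity the quoted paper is pointing at.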

15. Jan 11, 2016

### naima

Can't we say that when we prepare a Schrödinger cat we have measured it?

16. Jan 11, 2016

### jlcd

Nugatory:

"in the Everettian picture
....."

You said in the other thread, now closed: "Note the text that I have marked in boldface. The paper you are quoting from is talking about the problem the Everettian picture has with decoherence; it doesn't have anything to do with the way that decoherence appears as a consequence of unitary evolution, and it most certainly is not saying what you're claiming it does."

You didn't say this earlier here, so I didn't know. So it means that, depending on the picture, it is one or the other, that is:

In Copenhagen (and some others), decoherence occurs first, then collapse.
In Everettian, collapse occurs first, then decoherence?

Is it just a matter of semantics how they are presented, and are they both right? Do all the others, like Demystifier, agree with this?

This is the distinction I'm inquiring about. Thank you.

17. Jan 12, 2016

### jlcd

In an ongoing thread, "What if there are no Schrödinger's cats?", Bhobba shared in post #22 the following old thread for review: https://www.physicsforums.com/threads/what-if-there-are-no-schroedingers-cats.850935/page-2. I read that he and atyy were discussing improper mixed states. Atyy argued that an improper mixed state is a superposition; Bhobba said it is not, and then wrote: "It's an improper mixed state - it requires an extra interpretive assumption to be proper. I am in no way hiding that." And atyy said: "Yes, that's really what I mean when I say it is a superposition. It is because the whole system remains in a pure state that the mixture of the reduced density matrix isn't proper without an additional interpretive assumption (such as something like collapse or hidden variables)."

I'm trying to connect this with what Kastner wanted to do. It seems she wanted to make the improper mixture proper through her version of collapse, but what I can't understand is why physicists like Bhobba opposed her version, and why Nugatory even wanted to call it misinformation. It's very subtle, and all of this is quite complicated. Decoherence arguments are getting so sophisticated, even for physicists, that we beginners have to work even harder to grasp them.

In Everettian QM there is really no collapse, yet Kastner said there was collapse. So even MWI has many versions. This can really confuse anyone. Maybe someone can lay out improper vs. proper mixtures and collapse vs. no collapse in all the versions (Copenhagen, MWI, Bohmian) as a guide for those of us who are perplexed.

About the book Where Does the Weirdness Go?: I've been reading it, as Nugatory suggested, but it's quite incomplete. It focuses on Copenhagen only, without the more modern quantum Darwinism. A reviewer at Amazon mentioned this: "The moral of all these difficulties seems to be - even with all the progress that has been made and Lindley's own rather 'rosy' outlook in his book - that decoherence has NOT solved the deepest puzzles of QM. Lindley's book subtitle is '...But Not As Strange As You Think'. Alas, it appears the quantum world is actually STRANGER to our familiar macroscopic intuitions than we might hope..."

If anyone is familiar with Kastner: how did her version further complicate an already confusing subject? If the environment is not random and the whole Hilbert space evolves purely unitarily, can there be no decoherence? Is this separate from, or related to, the improper-mixed-state point made by the Bhobba camp? By atyy's reasoning, if the environment is random, can we even call it an improper or a proper mixed state, since (per Kastner) there was no superposition to begin with? And how do these two versions relate? Can anyone enlighten us?

18. Jan 12, 2016

### jlcd

The experts (Bhobba, atyy) are tired of this subject and debate. The not-so-experts are unfamiliar with it or confused. This leaves us newbies without much input; you can't find any layman's books on the details of this. So let me make some statements for comment by those semi-experts or experts who are aware of this.

The problem of outcomes is going from an improper mixed state to a proper mixed state. That is the conventional problem.

My question is how to connect this with Kastner's papers. Is it simply about going from an improper mixed state to a proper mixed state, or is that just semantics? Her arguments are quite different: in a purely unitary Hilbert space there is no decoherence. The conventional puzzle is the problem of going from an improper mixed state to a proper one. The MWI camp says the mixed states make up the separate branches. But Demystifier and others emphasized that this conventional MWI is really Bohmian MWI, because the classical branches are already preferred. So is Kastner's MWI the pure MWI? Note that Copenhagen and Bohmian mechanics have a classically preferred basis, so the argument here is more of a purely quantum one (perhaps quantum Darwinism?). I wrote this to summarize what I've learned about this very subtle, complex idea and to organize my thoughts about it.

In conventional decoherence, the system entangles with the environment, and measuring a subsystem produces decoherence (an improper mixed state). Does this assume the entire universe must be in a pure state, or just the system and environment? Or is the environment the universe itself? My question is why Kastner's work is so widely disbelieved, with Bhobba stating that the environment is naturally random and claiming everyone assumes this except Kastner. But if the environment is naturally random, it's already in a collapsed state, so what caused the initial collapse? Isn't this a valid question? Can anyone comment on all this?

19. Jan 14, 2016

### jlcd

I finally started reading the book Nugatory suggested seriously. It's quite interesting. Someone, I think atyy (I'll search), said Copenhagen and decoherence are separate and incompatible. But I read the following part in the book: "Decoherence therefore completes the Copenhagen interpretation of quantum mechanics, by making the process of measurement a real physical phenomenon rather than a decree from Bohr. Niels Bohr knew that measurements in fact took place in physics laboratories all around the world, but he couldn't explain how, and so simply declared that they happened. This was always a genuine flaw with his interpretation, but decoherence resolves it." I came across this passage at random and it piqued my interest. I'll read the book from start to finish, and return to density matrices with Nugatory after that.

20. Jan 14, 2016

### naima

When you read that decoherence completes the Copenhagen interpretation, you have to understand that it adds something. Do not think it gives you a complete theory; it does not solve the problem of the outcomes.