Ballentine on the quantum Zeno paradox

Demystifier
I know that many people here have a very high opinion of Ballentine's QM textbook. I am also one of them, but one particular subsection of it is (in my opinion) wrong: the subsection on the quantum Zeno paradox, or as Ballentine calls it, the "watched pot" paradox, in Section 12.2 (Exponential and Nonexponential Decay). In this subsection he presents a nice standard argument that continuous observation may prevent decay (which in my opinion is correct), and then in the last paragraph argues that it is false. I think his argument that it is false is itself false. What do you think?
 
I am not sure what "false" means in this context, but the quantum Zeno effect is, as far as I know, routinely observed experimentally. I think it is even used to extend the lifetimes of some states that are used for optical clocks.
 
Come on people, so many of you claimed to love this book. Is it possible that nobody has an opinion on this particular subsection?
 
I just looked at it, and I am not sure I understand correctly what he claims. It seems to me that he says that after measuring and finding a value of an observable, the state of the system is not the corresponding eigenstate. But I thought that this is part of QM, not an interpretation. I suppose I have to read the book first, before commenting. Anyway, why do you think it is wrong?
 
Perhaps you could quote the offending passage? I too feel the Quantum Zeno effect is basic quantum mechanics, so I would like to hear his opinion that it isn't.
 
Ballentine said:
Thus we obtain Pu (t) = 1 (u for the initial undecayed state) in the limit of continuous
observation. Like the old saying “A watched pot never boils,” we have been led to the
conclusion that a continuously observed system never changes its state!

This conclusion is, of course, false. The fallacy clearly results from the
assertion that if an observation indicates no decay, then the state vector must
be |Ψu>. Each successive observation in the sequence would then “reduce” the
state back to its initial value |Ψu>, and in the limit of continuous observation
there could be no change at all. The notion of “reduction of the state vector”
during measurement was criticized and rejected in Sec. 9.3. A more detailed
critical analysis, with several examples, has been given by Ballentine (1990).
Here we see that it is disproven by the simple empirical fact that continuous
observation does not prevent motion.
It is sometimes claimed that the rival
interpretations of quantum mechanics differ only in philosophy, and cannot be
experimentally distinguished. That claim is not always true, as this example
proves.
I don't think this is an empirical fact. As far as I know, decays have been successfully suppressed as predicted by the quantum Zeno effect. Although I'm not familiar with recent experiments, I don't see how truly "continuous" measurements could be performed at all.
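For readers who want to see the "watched pot" limit concretely: assuming only the standard short-time quadratic survival law P(dt) ≈ 1 − (dt/τ_Z)², n ideal measurements in total time t give survival probability [1 − (t/(n·τ_Z))²]^n, which tends to 1 as n grows. The "Zeno time" τ_Z = 1 and t = 0.5 below are illustrative choices, not values from Ballentine's book:

```python
# Sketch of the quantum Zeno limit under the short-time quadratic
# survival law P(dt) = 1 - (dt/tau_z)**2.  The Zeno time tau_z = 1.0
# and the total time t = 0.5 are arbitrary illustrative values.

def survival(t, n, tau_z=1.0):
    """Probability of finding the system still undecayed after n
    equally spaced ideal measurements in total time t."""
    dt = t / n
    return (1.0 - (dt / tau_z) ** 2) ** n

for n in (1, 10, 100, 10000):
    print(n, survival(0.5, n))
# As n grows the survival probability approaches 1: in the limit of
# continuous observation the "watched pot never boils".
```

Of course, the whole dispute in this thread is about whether this idealized limit corresponds to any physically realizable measurement.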
 
Demystifier said:
... and then in the last paragraph argues that it is false. I think his argument that it is false is itself false. What do you think?

Maybe he’s both right and wrong? :rolleyes:

I haven’t read the whole book (yet), so basically I’m just a 'bum', but this is how I see it:
  • Yes, not only in the textbook has Ballentine been advocating this approach, but also in the paper "The quantum Zeno effect is not a general characteristic of continuous measurements", Phys. Rev. A 43, 5165–5167 (1991), http://pra.aps.org/abstract/PRA/v43/i9/p5165_1

  • Ballentine is a prominent advocate of the Ensemble interpretation. (And here I just know he’s dead wrong! [:smile:])

  • Ballentine seems to use the refutation of the "watched pot"/"Zeno effect" as some form of 'evidence' for a 'particular' interpretation (and we all know which! :smile:); "It is sometimes claimed that the rival interpretations of quantum mechanics differ only in philosophy, and cannot be experimentally distinguished. That claim is not always true, as this example proves". To me, this is wrong. If a QM interpretation makes different predictions than 'the others', then it’s no longer an interpretation, but a new theory.

  • According to Wikipedia, the status of the Quantum Zeno Effect is an open question when it comes to the limit of an infinite number of interrogations (http://en.wikipedia.org/wiki/Quantum_Zeno_effect#Experiments_and_discussion): "It is still an open question how closely one can approach the limit of an infinite number of interrogations due to the Heisenberg uncertainty involved in shorter measurement times. ... The interpretation of experiments in terms of the "Zeno effect" helps describe the origin of a phenomenon. Nevertheless, such an interpretation does not bring any principally new features not described with the Schrödinger equation of the quantum system". Here it looks like Ballentine has a point.

  • And this point becomes his main argument (afaict); "We now pass to the limit of continuous observation by letting n become infinite".

If I were as smart and knowledgeable as Ballentine, and were about to write a QM textbook, I would probably have put it slightly differently and hopefully more 'transparently'.

But what do I know... :rolleyes:
 
The quantum Zeno effect formally works in the context of discrete eigenvalues, like spin in some direction. I'm not sure how one would apply it to motion, which is normally thought of as a situation where the eigenvalues are continuous. As for nuclear decay, that isn't thought to be a type of time evolution at all -- there is no Schrödinger equation that evolves the expectation of nuclear decay, so again I don't see how the Zeno effect would apply there.
 
Demystifier said:
This is the subsection on the quantum Zeno paradox, or as Ballentine calls it, the "watched pot" paradox, in Section 12.2 (Exponential and Nonexponential Decay). In this subsection he presents a nice standard argument that continuous observation may prevent decay (which in my opinion is correct), and then in the last paragraph argues that it is false. I think his argument that it is false is itself false.

I read that section years ago, and have just now studied it again. I offer the following thoughts...

1) My first observation is that I suspect the derivation to be faulty, because it takes a limit that corresponds essentially to an infinite tensor product space. This reminds me of what Hartle attempted in his "QM of Individual Systems (1968)" paper, which was subsequently shown to be flawed. See this earlier thread for a bit more detail and references:

Ref thread: "Hartle: QM of Individual Systems (1968)"
https://www.physicsforums.com/showthread.php?t=511885
(esp. my post #7 at the end).

2) On the experimental "evidence" for the QZ effect, there's also this later paper by Ballentine:

L.E. Ballentine, "Comment on quantum Zeno effect",
Phys. Rev. A 43, no. 9, 1991, p. 5165.

Abstract:

The quantum Zeno effect is not a general characteristic of continuous measurements. In a recently reported experiment [Itano et al...], the inhibition of atomic excitation and deexcitation is not due to any "collapse of the wave function", but instead is caused by a very strong perturbation due to the optical pulses and the coupling to the radiation field. The experiment should not be cited as providing empirical evidence in favor of the notion of "wave-function collapse".

3) If I'm right in point (1) above, all it means is that Ballentine's "corollary" -- i.e., that interpretations of QM can sometimes be experimentally distinguished -- is no longer justified, at least not on this evidence, if the derivation itself is flawed.
 
  • #10
DevilsAvocado said:
[*]Ballentine is a prominent advocate of the Ensemble interpretation. (And here I just know he’s dead wrong! [:smile:])
Unless you have experimental evidence that can distinguish between interpretations, you only believe he's "dead wrong".
 
  • #11
Ballentine said:
In a recently reported experiment [Itano et al...], the inhibition of atomic excitation and deexcitation is not due to any "collapse of the wave function", but instead is caused by a very strong perturbation due to the optical pulses and the coupling to the radiation field.
Hmm, so a strong perturbation in the process of doing a measurement is not a collapse of the wavefunction? It sounds like one of the more blatant examples of collapse of a wavefunction.
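Whatever one calls the mechanism, the numbers behind the Itano et al. experiment can be reproduced with a toy model. Below, a two-level system is driven by a resonant π pulse split into n segments, and each "measurement" is modeled simply as erasing the off-diagonal elements of the density matrix. This is a sketch of the generic textbook analysis, not of Ballentine's or Itano's actual calculations:

```python
import numpy as np

def stay_probability(n):
    """Probability the system is still in its initial level after a
    resonant pi pulse interrupted by n ideal measurements, each
    modeled as zeroing the coherences of the density matrix."""
    theta = np.pi / n                      # rotation per segment
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    U = np.array([[c, -s], [s, c]])        # one segment of the Rabi rotation
    rho = np.array([[1.0, 0.0], [0.0, 0.0]])  # start in level 1
    for _ in range(n):
        rho = U @ rho @ U.T                # unitary drive
        rho = np.diag(np.diag(rho))        # "measurement": kill coherences
    return rho[0, 0]

for n in (1, 2, 4, 64):
    print(n, stay_probability(n))
# n = 1 gives 0 (an uninterrupted pi pulse flips the state); large n
# pins the system near its initial level.  For this dephasing model
# the closed form is (1 + cos(pi/n)**n) / 2.
```

Notably, this little model is agnostic about whether the coherence-killing step is "collapse" or a strong environmental perturbation, which is rather the point of the disagreement.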
 
  • #12
So basically, Ballentine does not believe in the quantum Zeno paradox because he does not believe in collapse.
But it seems that he does not understand that effective collapse can almost be "explained" by the modern understanding of decoherence, and this seems to be because he is not aware of the importance of decoherence.

The reason for such a suspicion comes from another part of his (otherwise great) book:
Sec. 9.3 - The Interpretation of a state vector
Subsection - The measurement theorem for general states
After Eq. (9.13) he writes:
"The terms with alpha_r1 notequal alpha _r2 indicate a coherent superposition of macroscopically distinct indicator vectors ... It is clear that the nondiagonal terms in (9.13) cannot vanish ..."
But it seems to me that someone who was familiar with decoherence would immediately recognize that they CAN vanish, due to decoherence. Nevertheless, he does not even mention decoherence, at the very place where a "Modern Introduction" to QM should.

Any comments?
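For what it's worth, the claim that the nondiagonal terms CAN (effectively) vanish is easy to illustrate with a toy model: entangle a qubit superposition with N environment spins, each of whose two branch states overlap by cos(ε); the coherence of the reduced qubit state is then suppressed by cos(ε)^N. The value ε = 0.1 and the product-state environment are illustrative assumptions, not anything from Ballentine's book:

```python
import numpy as np

# Toy decoherence model: a qubit superposition entangled with an
# N-spin "environment".  Each environment spin's |0>-branch state
# overlaps its |1>-branch state by cos(eps), so the coherence of the
# reduced qubit state is suppressed by cos(eps)**N.
# eps = 0.1 is an illustrative choice.

def qubit_coherence(n_env, eps=0.1):
    """|off-diagonal element| of the reduced qubit density matrix."""
    # branch overlap per environment spin: <e1_k|e0_k> = cos(eps)
    overlap = np.cos(eps) ** n_env
    # state (|0>|E0> + |1>|E1>)/sqrt(2)  =>  rho_01 = 0.5 * <E1|E0>
    return 0.5 * abs(overlap)

for n in (0, 10, 100, 1000):
    print(n, qubit_coherence(n))
# The nondiagonal terms never vanish exactly, but they decay
# exponentially with environment size -- gone for all practical purposes.
```

So, strictly speaking, Ballentine's "cannot vanish" is right; the decoherence point is that they become unobservably small.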
 
  • #13
strangerep said:
Unless you have experimental evidence that can distinguish between interpretations, you only believe he's "dead wrong".

Okay, fair enough (if we pretend not to understand the meaning of a smiley), but as I see it, there must be a flip side to the coin: Ballentine only believes he's "dead right"! :-p
 
  • #14
Demystifier said:
So basically, Ballentine does not believe in the quantum Zeno paradox because he does not believe in collapse.
I don’t have Ballentine’s book (and Ballentine’s no relative of mine :-) ), so the following is mostly based on his Comment quoted by DevilsAvocado in post 7 in this thread.

It does not look like “Ballentine does not believe in the quantum Zeno paradox”; he says “The quantum Zeno effect is not a general characteristic of continuous measurements.” I understand this as follows (and I may be wrong): the quantum Zeno paradox exists or does not exist depending on the specific characteristics of the actual measurement.

Furthermore, the authors of the article he (mildly) criticizes write (http://tf.nist.gov/general/pdf/905.pdf) in their reply to his Comment: “Ballentine states that ‘collapse of the wave function’ is not necessary to quantum mechanics. We agree. However, we feel that the explanation given in our article, which invokes von Neumann’s ‘collapse’ postulate, is useful for giving a simple explanation of our experiment.”

So it looks like there is agreement that collapse is not necessary.

Demystifier said:
But it seems that he does not understand that effective collapse can almost be "explained" by the modern understanding of decoherence, and this seems to be because he is not aware of the importance of decoherence.

The reason for such a suspicion comes from another part of his (otherwise great) book:
Sec. 9.3 - The Interpretation of a state vector
Subsection - The measurement theorem for general states
After Eq. (9.13) he writes:
"The terms with alpha_r1 notequal alpha _r2 indicate a coherent superposition of macroscopically distinct indicator vectors ... It is clear that the nondiagonal terms in (9.13) cannot vanish ..."
But it seems to me that someone who was familiar with decoherence would immediately recognize that they CAN vanish, due to decoherence. Nevertheless, he does not even mention decoherence, at the very place where a "Modern Introduction" to QM should.

Any comments?

I cannot be sure Ballentine knew about decoherence in 1998, when his book was published (he knew about it in 2005 though :-) - http://pra.aps.org/abstract/PRA/v72/i2/e022109 ), but in the text you quoted he seems to argue that collapse is, strictly speaking, incompatible with unitary evolution, and I believe he’s right. Furthermore, it seems there is no positive experimental evidence of collapse (see the quote from Schlosshauer’s article at https://www.physicsforums.com/showpost.php?p=2534950&postcount=41 ).

You mentioned decoherence. But, as far as I understand, decoherence is a result of the influence of the environment, i.e. of something external to the experiment, so one can talk about “effective collapse”, but that does not contradict the fact that, strictly speaking, there is no collapse (otherwise unitary evolution would be wrong). So I think I fully understand Ballentine’s thrust against collapse.
 
  • #16
Demystifier said:
I know that many people here have a very high opinion on the Ballentine's QM textbook. I am also one of them, but one particular subsection of it is (in my opinion) wrong. This is the subsection on the quantum Zeno paradox, or as Ballentine calls it, the "watched pot" paradox in Section 12.2 (Exponential and Nonexponential Decay). In this subsection, he presents a nice standard argument that a continuous observation may prevent decay (which in my opinion is correct), and then in the last paragraph argues that it is false. I think that his argument that it is false - is false itself. What do you think?

Best known as the quantum Zeno effect, which has been measured:

http://pra.aps.org/abstract/PRA/v41/i5/p2295_1

Maybe Ballentine dislikes the quantum Zeno effect because it is directly related to the collapse process in QM, which he rejects, but collapse works:

http://pra.aps.org/abstract/PRA/v43/i9/p5168_1
 
  • #17
akhmeteli said:
I cannot be sure Ballentine knew about decoherence in 1998, when his book was published (he knew about it in 2005 though :-) - http://pra.aps.org/abstract/PRA/v72/i2/e022109 ), but in the text you quoted he seems to argue that collapse is, strictly speaking, incompatible with unitary evolution, and I believe he’s right.

Sorry to remark on the obvious, but that has been known since von Neumann introduced the collapse postulate in QM. As any standard textbook on QM explains, there are two evolutions in QM: (1) the unitary evolution, described by the Schrödinger equation, and (2) the non-unitary one, described by the collapse postulate.
 
  • #18
DevilsAvocado said:

Thank you.

Indeed, the text in the book (see, e.g., the quote in kith's post #6 in this thread) gives some grounds to think that Ballentine denies the quantum Zeno effect. It seems to me, though, that he does not deny the effect; rather, he denies its generality. Why do I think so? Because in the quote he refers for details to his article ("Limitations of the Projection Postulate", Found. Phys. 20, 1329–1343 (1990)). He explains in the article that the influence of the detector on the decay rate may indeed all but halt the decay, IF the coupling is strong enough.
 
  • #19
juanrga said:
Maybe Ballentine dislikes the quantum Zeno effect because it is directly related to the collapse process in QM, which he rejects, but collapse works:

http://pra.aps.org/abstract/PRA/v43/i9/p5168_1

Collapse may be a good approximation and work in some situations, but again, the authors of the source you quote agree that "collapse of the wave function" is not necessary to quantum mechanics.
 
  • #20
juanrga said:
Sorry to remark on the obvious, but that has been known since von Neumann introduced the collapse postulate in QM.

I agree. The problem is that many people do agree that "collapse is, strictly speaking, incompatible with unitary evolution", but immediately add: "but that's OK" :-)

juanrga said:
As any standard textbook on QM explains, there are two evolutions in QM: (1) the unitary evolution, described by the Schrödinger equation, and (2) the non-unitary one, described by the collapse postulate.

Except that the "standard textbook in QM" that we are discussing rejects the postulate of collapse :-) Let me repeat that, according to Schlosshauer, there is no positive experimental evidence of collapse.
 
  • #21
Demystifier said:
So basically, Ballentine does not believe in the quantum Zeno paradox because he does not believe in collapse.
But it seems that he does not understand that effective collapse can almost be "explained" by the modern understanding of decoherence, and this seems to be because he is not aware of the importance of decoherence.

The reason for such a suspicion comes from another part of his (otherwise great) book:
Sec. 9.3 - The Interpretation of a state vector
Subsection - The measurement theorem for general states
After Eq. (9.13) he writes:
"The terms with alpha_r1 notequal alpha _r2 indicate a coherent superposition of macroscopically distinct indicator vectors ... It is clear that the nondiagonal terms in (9.13) cannot vanish ..."
But it seems to me that someone who was familiar with decoherence would immediately recognize that they CAN vanish, due to decoherence. Nevertheless, he does not even mention decoherence, at the very place where a "Modern Introduction" to QM should.

Any comments?

Now that I have looked at the text :-), Ballentine does mention decoherence, although not by name:

"Some of the proposed explanations [of collapse] are as follows:...

(iii) The reduction (9.9) is caused by the environment, the “environment” being defined as the rest of the universe other than [the object] (I) and [the apparatus] (II).

This proposal is a bit vague, because it has not been made clear just what part of the environment is supposed to be essential. But it is apparent that if we formally include in (II) all those parts of the environment whose influence might not be negligible, then the same argument that defeated (i) and (ii) will also defeat (iii)."
 
  • #22
Demystifier said:
So basically, Ballentine does not believe in the quantum Zeno paradox because he does not believe in collapse.
But it seems that he does not understand that effective collapse can almost be "explained" by the modern understanding of decoherence, and this seems to be because he is not aware of the importance of decoherence.

The reason for such a suspicion comes from another part of his (otherwise great) book:
Sec. 9.3 - The Interpretation of a state vector
Subsection - The measurement theorem for general states
After Eq. (9.13) he writes:
"The terms with alpha_r1 notequal alpha _r2 indicate a coherent superposition of macroscopically distinct indicator vectors ... It is clear that the nondiagonal terms in (9.13) cannot vanish ..."
But it seems to me that someone who was familiar with decoherence would immediately recognize that they CAN vanish, due to decoherence. Nevertheless, he does not even mention decoherence, at the very place where a "Modern Introduction" to QM should.

Any comments?

Good point. In fact, Ballentine is notorious for downplaying (or not understanding) the role of decoherence. In 2005 he published an attempted rebuttal ("The Quantum Mechanics of Hyperion") of an argument by Zurek ("Why We Don't Need Quantum Planetary Dynamics: Decoherence and the Correspondence Principle for Chaotic Systems"); the dispute was resolved, in Zurek's favour, by Schlosshauer ("Classicality, the ensemble interpretation, and decoherence: Resolving the Hyperion dispute").
 
  • #23
akhmeteli said:
juanrga said:
Maybe Ballentine dislikes the quantum Zeno effect because it is directly related to the collapse process in QM, which he rejects, but collapse works:

http://pra.aps.org/abstract/PRA/v43/i9/p5168_1

Collapse may be a good approximation and work in some situations, but again, the authors of the source you quote agree that "collapse of the wave function" is not necessary to quantum mechanics.

In that paper they are replying to invalid statements made by Ballentine about their work.

The collapse is needed for any consistent formulation of QM. That is why standard textbooks use it.
 
  • #24
akhmeteli said:
I agree. The problem is that many people do agree that "collapse is, strictly speaking, incompatible with unitary evolution", but immediately add: "but that's OK" :-)

Except that the "standard textbook in QM" that we are discussing rejects the postulate of collapse :-) Let me repeat that, according to Schlosshauer, there is no positive experimental evidence of collapse.

But Ballentine's is not a standard textbook in QM. It is a textbook about how he wants QM to be.

To Schlosshauer one could reply that there is no positive experimental evidence of unitarity for the universe as a whole.
 
  • #25
juanrga said:
In that paper they are replying to invalid statements made by Ballentine about their work.

I have no comments on this statement as it does not seem to contain any specifics relevant to this thread.

juanrga said:
The collapse is needed for any consistent formulation of QM.

Again, it seems that the authors of your source agree with the opposite point of view.

Furthermore, as soon as any interpretation includes both unitary evolution and collapse, it becomes, strictly speaking, inconsistent, as the two are mutually contradictory.
 
  • #26
juanrga said:
But Ballentine's is not a standard textbook in QM. It is a textbook about how he wants QM to be.

What criteria do you use to declare Ballentine's book non-standard? The fact that he rejects collapse?

juanrga said:
To Schlosshauer one could reply that there is no positive experimental evidence of unitarity for the universe as a whole.

OK, so you believe we need collapse. I believe we need unitary evolution and therefore I cannot accept collapse.
 
  • #27
juanrga said:
The collapse is needed for any consistent formulation of QM. That is why standard textbooks use it.

From my reading of Bernard d'Espagnat (who has written extensively on the conceptual foundations of quantum mechanics), he does not seem to consider collapse to be a fundamental requirement for consistency within QM. I can't offer any more than that, so I'm really just quoting d'Espagnat parrot-fashion, but given that he makes a point of specifying that wavefunction collapse is "never absolutely necessary", with no mention of the shortfall in consistency you mention, perhaps it's not as clear-cut as you imply. What he seems to imply is that it is a "useful" concept, not an "essential" one.

From Bernard d’Espagnat, “On Physics and Philosophy”.

Bernard d'Espagnat said:
Finally, concerning the quantum rules let us note that, within the convention taken up in this book, the set of them all does include the (generalized) Born rule (which yields probabilities of observation on the basis of knowledge of the wave function) but does not include the (already alluded to ) procedure called “reduction” or “collapse” of the wavefunction. The point is worth mentioning since, in many texts, the two procedures are described together and are not clearly distinguished. In fact, while the latter is often extremely useful, in theory it never is absolutely necessary.
 
  • #28
akhmeteli said:
Again, it seems that the authors of your source agree with the opposite point of view.

My 'source' was the phrase immediately following, which you deleted. The authors of that paper also say that the Everett interpretation of QM is acceptable, but that is far from true. Everett's work is wrong (even well-known many-worlds believers accept that now), and a discussion of the mistakes and inconsistencies of «many-worlds» was given in another recent thread.

That paper was cited because it replies to a series of incorrect statements made by Ballentine in his «comment».

akhmeteli said:
Furthermore, as soon as any interpretation includes both unitary evolution and collapse, strictly speaking, it becomes inconsistent, as these two are mutually contradictory.

No inconsistency, because it is made clear that each postulate describes one type of evolution and not the other. Did you not read von Neumann?

The correct term is incompleteness, because the postulates by themselves cannot really say what exactly the limit of validity of each one is.

Those limits are, however, well understood using generalizations of QM, such as those based on stochastic Schrödinger equations or further generalizations.
 
  • #29
juanrga said:
My 'source' was the phrase immediately following, which you deleted.

I deleted it simply because the phrase was repeated elsewhere, and I did not want to respond to it twice. My response was that not all standard textbooks use collapse; and when you said that Ballentine's textbook is non-standard, I asked what criteria you use, and I still wonder what those criteria might be.

juanrga said:
The authors of that paper also say that the Everett interpretation of QM is acceptable, but that is far from true. Everett's work is wrong (even well-known many-worlds believers accept that now), and a discussion of the mistakes and inconsistencies of «many-worlds» was given in another recent thread.
You may reject Everett, and I also reject Everett, but I believe “many-worlds” is still one of the mainstream interpretations, no matter what was written “in another recent thread”, so what the authors say does not put them outside the mainstream. I have no intention of defending many-worlds, though.

juanrga said:
That paper was cited because it replies to a series of incorrect statements made by Ballentine in his «comment».
Again, without any specifics, I am not sure this is relevant.


juanrga said:
No inconsistency, because it is made clear that each postulate describes one type of evolution and not the other. Did you not read von Neumann?
As a matter of fact, I did read von Neumann.
So there is no inconsistency? Congratulations! It looks like you have just solved the measurement problem of quantum theory!
I insist, though, that there is no positive evidence of non-unitary evolution. On the other hand, you can consider the unitary evolution of a system that, in your opinion, should be described by non-unitary evolution (by including the instrument and, if you wish, the observer in that system). The results of the two evolutions are mutually contradictory. Is there any reason why unitary evolution is not appropriate for such a system?
juanrga said:
The correct term is incompleteness, because the postulates by themselves cannot really say what exactly the limit of validity of each one is.

Those limits are, however, well understood using generalizations of QM, such as those based on stochastic Schrödinger equations or further generalizations.
This seems to suggest that you admit that the projection postulate has some limits of validity. But I am saying pretty much the same thing: the projection postulate may be a good approximation, but it is just an approximation. Unitary evolution, on the other hand, in my opinion never “calls in sick”.
 
  • #30
juanrga said:
But Ballentine's is not a standard textbook in QM. It is a textbook about how he wants QM to be.

His book has far fewer errors than most textbooks on quantum mechanics. I don't understand -- why is it non-standard? A professor can use whatever textbook he wishes to teach the class.
 
  • #31
Runner 1 said:
His book has far fewer errors than most textbooks on quantum mechanics.
I very much agree with that statement, but that's exactly why I find it important to discuss the rare errors in it.
 
  • #32
Demystifier said:
So basically, Ballentine does not believe in the quantum Zeno paradox because he does not believe in collapse.
But it seems that he does not understand that effective collapse can almost be "explained" by the modern understanding of decoherence, and this seems to be because he is not aware of the importance of decoherence.

The reason for such a suspicion comes from another part of his (otherwise great) book:
Sec. 9.3 - The Interpretation of a state vector
Subsection - The measurement theorem for general states
After Eq. (9.13) he writes:
"The terms with alpha_r1 notequal alpha _r2 indicate a coherent superposition of macroscopically distinct indicator vectors ... It is clear that the nondiagonal terms in (9.13) cannot vanish ..."
But it seems to me that someone who was familiar with decoherence would immediately recognize that they CAN vanish, due to decoherence. Nevertheless, he does not even mention decoherence, at the very place where a "Modern Introduction" to QM should.

Any comments?

Ballentine does mention decoherence in chapter 9, page 244: "This ρ is obtained from the total state operator |Ψ><Ψ| by taking the partial trace over the degrees of freedom of the environment. If the difference between the effects of taking paths ABD and ACD on the environment is so great that |e1> and |e2> are orthogonal, then the state reduces to the incoherent mixture ρinc (9.18)."

I suspect his error is that he rejects the projection or collapse postulate, but does not replace it with another postulate; i.e., without the projection postulate, or an explicit postulate that the two are the same, he is not entitled to treat the improper mixture given by the reduced density matrix as a proper mixture. Similar thoughts, that there is a hidden use of the projection postulate, are found in:

http://arxiv.org/abs/quant-ph/0312059 (p9) "The reduced density matrix looks like a mixed state density matrix because, if one actually measured an observable of the system, one would expect to get a definite outcome with a certain probability; in terms of measurement statistics, this is equivalent to the situation in which the system is in one of the states from the set of possible outcomes from the beginning, that is, before the measurement. As Pessoa (1998, p. 432) puts it, “taking a partial trace amounts to the statistical version of the projection postulate.”"

http://philsci-archive.pitt.edu/5439/1/Decoherence_Essay_arXiv_version.pdf (p37) "Ignorance interpretation: The mixed states we find by taking the partial trace over the environment can be interpreted as a proper mixture. Note that this is essentially a collapse postulate."
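The improper-versus-proper-mixture point can be made concrete with the standard two-qubit example (plain numpy, generic textbook material rather than Ballentine's own notation): tracing the environment qubit out of a pure Bell state leaves a reduced state that is maximally mixed, and reading that improper mixture as an ignorance ("proper") mixture is exactly the extra postulate at issue:

```python
import numpy as np

# Partial trace of a Bell state: the reduced state of one qubit is
# maximally mixed, i.e. it *looks like* a proper mixture even though
# the global state is pure.

def partial_trace_B(rho, d=2):
    """Trace out the second qubit of a (d*d x d*d) density matrix."""
    rho4 = rho.reshape(d, d, d, d)      # indices: a, b, a', b'
    return np.einsum('abcb->ac', rho4)  # sum over b = b'

bell = np.zeros(4)
bell[0] = bell[3] = 1 / np.sqrt(2)      # (|00> + |11>)/sqrt(2)
rho = np.outer(bell, bell)              # pure global state
rho_A = partial_trace_B(rho)

print(rho_A)                    # 0.5 * identity: maximally mixed
print(np.trace(rho @ rho))      # purity 1: the global state is pure
print(np.trace(rho_A @ rho_A))  # purity 0.5: the reduced state is not
```

The partial trace itself is just linear algebra, as strangerep notes below in this thread; the interpretive step is only in calling rho_A an ignorance mixture.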
 
  • #33
atyy said:
I suspect his error is that he rejects the projection or collapse postulate, but does not replace it with another postulate,
Good point!
 
  • #34
juanrga said:
In that paper they are replying to invalid statements made by Ballentine about their work.

The collapse is needed for any consistent formulation of QM. That is why standard textbooks use it.

Collapse stands until it is ruled out experimentally.

Are collapse models testable with quantum oscillating systems? The case of neutrinos, kaons, chiral molecules
http://www.nature.com/srep/2013/130606/srep01952/full/srep01952.html?WT.ec_id=SREP-639-20130701

"The effect is stronger for neutral mesons, but still beyond experimental reach. Instead, chiral molecules can offer promising candidates for testing collapse models"


------
Observation of a kilogram-scale oscillator near its quantum ground state
http://eprints.gla.ac.uk/32707/1/ID32707.pdf

"allow us to prepare kilogram test masses in different quantum states, and to study their quantum dynamics"


-----
Effective Field Theory Approach to Gravitationally Induced Decoherence
http://prl.aps.org/abstract/PRL/v111/i2/e021302
http://arxiv.org/pdf/1211.4751v1.pdf


 
  • #35
juanrga said:
The collapse is needed for any consistent formulation of QM. That is why standard textbooks use it.

Collapse is a purely interpretive assumption (in the sense of what it means). Of course it exists in the QM formalism (in the sense of the system being in an eigenstate of the observable afterwards) for filtering-type observations (in many cases the original system is destroyed by the observation), where, as a result of an observation, the state instantaneously changes. But its meaning is open, i.e. in some interpretations, like Ensemble or Copenhagen, it's simply a theoretical concept like probability, useful in helping to predict certain results, but actually existing out there? Nope. And like probability, the fact that it changes instantaneously is of zero concern, any more than when you throw a die and the probability changes from 1/6 on each face to a dead cert for one face and zero for the rest.

And Ballentine indeed has a few rare errors, such as implicitly assuming that in interpretations where the state is considered a complete description of a quantum system, it must be real (in fact most versions of Copenhagen explicitly assume it's a subjective state of knowledge, and it matters not one whit that such a state changes instantaneously), but his view that collapse only really applies to filtering-type observations isn't one of them.

Thanks
Bill
 
Last edited:
  • #36
atyy said:
I suspect his error is that he rejects the projection or collapse postulate, but does not replace it with another postulate

I don't believe he rejects it, but rather formally shows it only applies to filtering type observations.

That aside, and IMHO it's not that big a deal, it is indeed a big issue that he rejects decoherence as an explanation for APPARENT collapse and only alludes to it in a roundabout way in his textbook, because his interpretation cries out for it. Indeed where he does mention it, it's more or less forced on him - you can't really escape it - but he tries to. Rather strange really.

Maybe it skates a bit close to the Achilles Heel of his interpretation - namely exactly how is an actual outcome selected, and even more basic - why do we get any outcome at all.

Thanks
Bill
 
  • #37
atyy said:
I suspect his error is that he rejects the projection or collapse postulate, but does not replace it with another postulate,
I see no "error" there. As I read it, he does replace the CP, not by another postulate, but by an analytical treatment of measurement by an apparatus -- in his sections 9.2 et seq.

atyy said:
Similar thoughts that there is a hidden use of the projection postulate
[...]
"As Pessoa (1998, p. 432) puts it, `taking a partial trace amounts to the statistical version of the projection postulate.'"
I don't think this is a "use" of the PP. The fact that taking a partial trace is legitimate when dealing with an observable that is trivial on that component of a composite system comes from the basic QM maths (i.e., the use of tensor product Hilbert spaces), hence is not itself a postulate.
 
  • #38
bhobba said:
[...] it indeed is a big issue [Ballentine] rejects dechoherence as an explanation for APPARENT collapse and only alludes to it it in a round about way in his textbook,
I don't see how this gives an apparent collapse. IIUC, decoherence (interaction with a thermal environment) essentially just causes the off-diagonal terms in the state operator to decay very fast. I.e., it doesn't determine a final specific outcome, but rather reduces a quantum-probabilistic situation to one of classical probability.

Or am I missing something?

[...] Indeed where he does mention it, its more or less forced on him - you can't really escape it - but he tries to.
Where precisely are you referring to in Ballentine? (I didn't read p. 244 that way, but maybe you had somewhere else in mind?)

Maybe it skates a bit close to the Achilles Heel of his interpretation - namely exactly how is an actual outcome selected, and even more basic - why do we get any outcome at all.
As to why we get any outcome at all, I don't see that any interpretation explains that properly -- it always seems to be some variation on "it's magic!". :biggrin:
 
  • #39
strangerep said:
I see no "error" there. As I read it, he does replace the CP, not by another postulate, but by an analytical treatment of measurement by an apparatus -- in his sections 9.2 et seq.

Yes, that's fine.
strangerep said:
I don't think this is a "use" of the PP. The fact that taking a partial trace is legitimate when dealing with an observable that is trivial on that component of a composite system comes from the basic QM maths (i.e., the use of tensor product Hilbert spaces), hence is not itself a postulate.

Yes, but then if it is used to specify the state of a sub-ensemble (when using a filtering measurement as state preparation), the density matrix must represent a proper mixture, whereas a reduced density matrix is an improper mixture. Treating an improper mixture as if it were a proper mixture is an additional assumption (equivalent to collapse).
 
Last edited:
  • #40
strangerep said:
I don't see how this gives an apparent collapse.

It has been discussed innumerable times - no need to go through it again.

The basic idea is that, mathematically and observationally, an improper mixed state is indistinguishable from a proper one. That's what is meant - and I think apparent is a very apt description of it, and since I have seen others such as Demystifier use it, I am not the only one. If you don't think it's apt, arguing about it won't change anything - semantics is a rather silly thing to argue about - simply understand that's what decoherence proponents mean.

Thanks
Bill
 
  • #41
strangerep said:
Where precisely are you referring to in Ballentine? (I didn't p244 that way, but maybe you had somewhere else in mind?)

Page 241, on the spin recombination experiment. He discusses it using the decoherence paradigm and an interesting different paradigm that gives the same answer. He doesn't name it though. In fact it's well known that Ballentine doesn't believe decoherence is of any value interpretation-wise - and in fact, within the paradigm of his interpretation, it's pretty useless - unless you are worried about some issues he doesn't worry about. And no, I don't want to discuss what that is - if you or anyone is interested, it really requires another thread.

strangerep said:
As to why we get any outcome at all, I don't see that any interpretation explains that properly -- it always seems to be some variation on "it's magic!". :biggrin:

Well, it's a bit more reasonable in BM, where particles have a well-defined position and momentum, while in MWI you must experience some world.

Thanks
Bill
 
  • #42
strangerep said:
I see no "error" there. As I read it, he does replace the CP, not by another postulate, but by an analytical treatment of measurement by an apparatus -- in his sections 9.2 et seq.

There is no error - and indeed it's replaced by other reasonable assumptions and/or analysis.

I use physical continuity, and I thought Ballentine did as well, but for the life of me I can't find it in his book - I may have picked it up elsewhere. Either way, Ballentine reduces it to other considerations.

Thanks
Bill
 
  • #43
atyy said:
if it is used to specify the state of a sub-ensemble (when using a filtering measurement as state preparation), the density matrix must represent a proper mixture, whereas a reduced density matrix is an improper mixture. [...].
I'm missing something here, possibly because I'm too indoctrinated with Ballentine's terminology: he avoids the term "mixture" because of its ambiguity, and uses instead the terms "pure state" and "nonpure state".

You're using the term "proper mixture" to mean "nonpure state", right?
(I'd better not attempt any further response until we clear this up.)
 
  • #44
bhobba said:
It has been discussed innumerable times - no need to go through it again.
It would be more helpful to give me a link to a specific thread.

[...]- simply understand that's what decoherence proponents mean.
Well, I'm not a mind-reader. That's why I raised my query -- to try and clarify "what decoherence proponents mean". When this thread was originally started, I didn't have time to participate properly -- my background on these specific points was a bit narrow. Similarly, I don't normally have time to follow every thread in the QM forum closely. But now it's the Xmas-NY break, and since this thread was re-activated, I figured I'd try to catch up on a few things.

TBH, I'm a bit disappointed that you're being so short with me. I've tried to help plenty of other people on PF in the past but I hardly ever request assistance for myself.
 
  • #45
bhobba said:
I use physical continuity [...]
What do you mean by "physical continuity"? (A link to a paper or previous thread is fine if you can't be bothered explaining.)
 
  • #46
strangerep said:
You're using the term "proper mixture" to mean "nonpure state", right? (I'd better not attempt any further response until we clear this up.)

Not quite.

An improper mixture is his nonpure state. A proper mixture is a state where pure states are randomly presented for observation. If the states of a proper mixture are eigenstates of the observable, then measurement problem solved - the state is there prior to observation - no collapse, nothing changes, everything sweet. If it's an improper mixture you can't tell the difference, but since it wasn't prepared the same way, but rather by decoherence (via tracing over the environment), it can't be said to be the same - there is no way to tell the difference, but you can't say for sure the state was there prior to observation. You can interpret it that way, but you can't say it is the same. That is what's meant by apparent.

There is also the issue of the pointer basis it singles out (i.e., the 'components' of the mixed state, which, as Ballentine shows, are not unique) - but that is a slightly different issue that again requires its own thread.
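As a quick numerical illustration of that non-uniqueness (a numpy sketch; the two decompositions chosen are just the standard textbook example), the 50/50 mixture of |0> and |1> and the 50/50 mixture of |+> and |-> produce the exact same density matrix, so the 'components' cannot be read off the matrix itself:

```python
import numpy as np

# Two different ensembles that yield one and the same state operator.
ket0 = np.array([[1.0], [0.0]])
ket1 = np.array([[0.0], [1.0]])
ketp = (ket0 + ket1) / np.sqrt(2)   # |+>
ketm = (ket0 - ket1) / np.sqrt(2)   # |->

# 50/50 mixture of |0> and |1>
rho_z = 0.5 * ket0 @ ket0.T + 0.5 * ket1 @ ket1.T
# 50/50 mixture of |+> and |->
rho_x = 0.5 * ketp @ ketp.T + 0.5 * ketm @ ketm.T

print(np.allclose(rho_z, rho_x))   # True: both are I/2
```

Both ensembles give I/2, which is why the decomposition into components is not unique.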

BTW it's all treated in the paper I constantly link to regarding this stuff:
http://philsci-archive.pitt.edu/5439/1/Decoherence_Essay_arXiv_version.pdf

It's basically a cut down version of my go-to book on this by Schlosshauer:
https://www.amazon.com/dp/3540357734/?tag=pfamazon01-20

I personally believe, like Ballentine, it should be in the library of anyone interested in QM, but then again this sort of stuff interests me a lot and may not interest those into say QFT or solid state physics.

Thanks
Bill
 
Last edited by a moderator:
  • #47
strangerep said:
I'm missing something here, possibly because I'm too indoctrinated with Ballentine's terminology: he avoids the term "mixture" because of its ambiguity, and uses instead the terms "pure state" and "nonpure state".

You're using the term "proper mixture" to mean "nonpure state", right?
(I'd better not attempt any further response until we clear this up.)

In my understanding, there are two sorts of mixed states (which hopefully are the same as "nonpure states").

A proper mixed state is when Alice makes Ensemble A in pure state |A> and Ensemble B in pure state |B>, then she makes a Super-Ensemble C consisting of equal numbers of members of Ensemble A and Ensemble B. If she hands me C without labels A and B, I can use a mixed density matrix to describe the statistics of my measurements on C. But if in addition I receive the labels A and B, then I can divide C into two sub-ensembles, each with its own density matrix, since C was just a mixture of A and B. Here C is a "proper" mixture, which can be naturally divided into sub-ensembles. This is the sort of mixture we use in quantum statistical mechanics.

An improper mixed state is when I have an ensemble C in a pure state, each member of which consists of a subsystem A entangled with subsystem B. If I do a partial trace over B, I get a density matrix (the reduced density matrix) which describes the statistics of all measurements that are "local" to A. This reduced density matrix for A is not a pure state, and is an "improper" mixed state. There is no natural way to partition this into sub-ensembles, since there is only one ensemble C.
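The two preparations can be compared numerically (a minimal numpy sketch; the Bell state and the 50/50 ensemble are just illustrative choices): the reduced density matrix of subsystem A of an entangled pure state comes out element-for-element identical to the proper 50/50 mixture, even though the two were prepared completely differently:

```python
import numpy as np

# Pure entangled state of C = A (x) B: the Bell state (|00> + |11>)/sqrt(2)
bell = np.zeros(4)
bell[0] = bell[3] = 1 / np.sqrt(2)
rho_C = np.outer(bell, bell)                    # pure: Tr(rho_C^2) = 1

# Improper mixture: partial trace over B (sum over B's basis indices)
rho_A_improper = np.einsum('ijkj->ik', rho_C.reshape(2, 2, 2, 2))

# Proper mixture: 50/50 ensemble of |0> and |1> prepared directly
rho_A_proper = 0.5 * np.diag([1.0, 0.0]) + 0.5 * np.diag([0.0, 1.0])

print(np.allclose(rho_A_improper, rho_A_proper))   # True: same matrix
```

Same matrix, different pedigree: no measurement local to A can distinguish the two, which is exactly the point at issue.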
 
  • #48
strangerep said:
TBH, I'm a bit disappointed that you're being so short with me. I've tried to help plenty of other people on PF in the past but I hardly ever request assistance for myself.

Sorry that's my fault :biggrin: I just had a long discussion in some thread with bhobba about this, and it turned out that basically we agreed on all technical details, but I didn't like the terminology of "apparent collapse".

As I understand, the terminology "apparent collapse" as used by some decoherence folks does not imply that when decoherence is used in any interpretation with collapse, that the need for collapse is removed. The terminology seems most appropriate to me in the many-worlds interpretation.
 
  • #50
atyy said:
@strangerep, the paper bhobba linked to http://philsci-archive.pitt.edu/5439/1/Decoherence_Essay_arXiv_version.pdf [Bas Hensen] has explicit examples of proper and improper mixtures and their density matrices in section 1.2.3.
Yes, I was reviewing it while you were composing your previous messages... :biggrin:

Previously, I found it difficult to get into Hensen because he uses a version of QM with collapse-to-eigenstate as a fundamental postulate. But his description of the distinction between "proper" and "improper" mixed states is clear. Observationally, they're the same: what Ballentine calls a "nonpure state". The difference is in how they were prepared. I guess that's the whole point: using a thermal coupling to the environment, and then forgetting about the environment by tracing it out, produces a state which is observationally indistinguishable from a "proper" mixed state.

I.e., "reduction" vs "apparent reduction", though not "apparent collapse". :wink:

Thus far, I have no problem with it, and... I continue to be puzzled why there's often such fuss and long-winded discussion about environmental decoherence. For me that mechanism became compelling about 10 yrs ago when I read the paper of Ford & O'Connell in which they show analytically how coupling to a thermal field causes the off-diagonal terms in the density matrix to decay extremely fast.
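That mechanism can be sketched with a toy dephasing model (purely schematic; the exponential form and the value of tau_d here are illustrative assumptions, not Ford & O'Connell's actual calculation): the populations stay put while the coherences are exponentially suppressed, leaving what looks like a classical coin flip:

```python
import numpy as np

# Toy pure-dephasing model: off-diagonal elements of a qubit density
# matrix decay as exp(-t/tau_d); diagonal populations are untouched.
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho0 = np.outer(plus, plus)            # pure |+><+|, maximal coherence

tau_d = 1e-3                           # illustrative decoherence time (arb. units)

def rho(t):
    r = rho0.copy()
    decay = np.exp(-t / tau_d)
    r[0, 1] *= decay                   # coherences decay...
    r[1, 0] *= decay
    return r                           # ...populations remain 0.5 each

print(np.round(rho(0.1), 6))           # effectively diag(0.5, 0.5)
```

After a time of order 100 tau_d the matrix is diagonal to any realistic precision, i.e. quantum-probabilistic interference terms have been reduced to classical probabilities, which is the "extremely fast" decay referred to above.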

ISTM, Ballentine uses a summarized form of a similar idea, but in his case the fluctuations (i.e., interaction with the environment) result in incoherent phase changes in the beams on opposite sides of the apparatus in his fig 9.2.

So... I see no conflict between Ballentine's treatment in his section 9.5, and the now-well-known mechanism called "decoherence".

But such reduction from a pure density matrix by interaction-with-environment, yielding a nonpure density matrix, is not what I understand by "collapse-to-eigenstate post-measurement". So I guess my problem was with the way I'd seen the words "collapse" and "reduction" used elsewhere (seemingly interchangeably). Good to get that sorted out.

Cheers.

Edit:
[...] but I didn't like the terminology of "apparent collapse".
I think I don't like it either.
 
Last edited: