# Collapse and Peres' Coarse Graining


#### atyy

Collapse is the sudden change in the wave function after a measurement. Unless otherwise stated, I will assume that neither the wave function nor collapse is necessarily physical; they are just calculational tools that are part of the experimental success of quantum mechanics. Collapse is in almost all major textbooks except Ballentine and Peres, and is used in a standard way in the Bell tests.

Peres suggests that collapse can be replaced with coarse-graining. We know that collapse is verified by all experimental data to date, and that coarse-graining, if successful, must make exactly the same predictions as collapse. I think it is an interesting idea, but I have only ever seen it in Peres. Can coarse-graining without collapse successfully reproduce the predictions of quantum mechanics? In particular, can one do the coarse-graining without collapse explicitly and recover the correlations for the Bell tests? Is the coarse-graining in a Bell test a local procedure?

Looking at the Peres book, I can only find that he talks about coarse graining in the context of classical mechanics, to explain classical irreversibility. Can you specify where exactly he says that it can replace the quantum collapse?

On p. 376 he uses the term "blurring", which is what I (and, I think, vanhees71) was referring to:

"Consistency thus requires the measuring process to be irreversible. There are no superobservers in our physical world.

Formally, the logical consistency of the "dequantization" of a measuring apparatus implies the equivalence of two different descriptions of the same process ... This reduced density matrix can be converted by means of Eq. (10.56) into a Wigner function, ##W_{A}(q, p)##. Some blurring (see page 316) converts the latter into a fuzzy Wigner function which is nowhere negative and may be interpreted as a Liouville density ##f_{A}(q,p)## if the ##\hbar^{2}## terms in the quantum Liouville equation (10.67) are negligible for the macroscopic apparatus."
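Peres's blurring step can be illustrated numerically. The sketch below is my own illustration (not from Peres): it takes the n = 1 Fock state of a harmonic oscillator with ##\hbar = 1##, whose Wigner function is negative at the origin, and convolves it with a Gaussian of variance 1/2 in each phase-space variable, which yields the everywhere-nonnegative Husimi function, a legitimate candidate for a Liouville density:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Phase-space grid (hbar = 1, dimensionless q and p)
q = np.linspace(-6.0, 6.0, 481)
dq = q[1] - q[0]
Q, P = np.meshgrid(q, q)
R2 = Q**2 + P**2

# Wigner function of the n = 1 Fock state:
# W_1(q, p) = -(1/pi) (1 - 2(q^2 + p^2)) exp(-(q^2 + p^2))
W = -(1.0 / np.pi) * (1.0 - 2.0 * R2) * np.exp(-R2)
print(W.min())          # -1/pi at the origin: not a probability density

# "Blurring": normalized Gaussian convolution with variance 1/2
# (sigma = 1/sqrt(2)) in each variable; the result is the Husimi
# function, which is nonnegative everywhere.
W_blurred = gaussian_filter(W, sigma=(1.0 / np.sqrt(2.0)) / dq)
print(W_blurred.min())  # ~0 up to discretization error
```

The smoothing width is the standard choice: Gaussian variance ##\hbar/2## turns the Wigner function into the Husimi function, which is guaranteed nonnegative for every state.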

Some other references where Peres (and coauthors) mention similar ideas are:

http://arxiv.org/abs/quant-ph/9712044
Quantum and classical descriptions of a measuring apparatus
Ori Hay, Asher Peres

http://arxiv.org/abs/quant-ph/9906023
Classical interventions in quantum systems. I. The measuring process
Asher Peres

It seems hopeless to fight against this bad word "collapse" in our discussions. I don't know of a single experiment where I need a collapse to understand it. See the other thread here for the paradigmatic example of a Stern-Gerlach experiment:

https://www.physicsforums.com/threa...y-non-physicists-1.790546/page-5#post-4969170

Anyway, collapse works FAPP. Can coarse-graining really replace the collapse postulate? If it can, is the coarse-graining local?

Thanks atyy!

Peres's explanation of blurring is itself quite blurred, but it seems similar to the much better explored idea of decoherence. For the latter, it is already well known (and much discussed at this forum) in what sense it can and cannot explain the appearance of "collapse".

Collapse is in almost all major textbooks except Ballentine and Peres, and is used in a standard way in the Bell tests.
How exactly is collapse used in Bell tests? I think this isn't true.

We know that collapse is verified by all experimental data to date
Again I don't believe this is true. How exactly is collapse "verified" by anything?

Thanks atyy!

Peres's explanation of blurring is itself quite blurred, but it seems similar to the much better explored idea of decoherence. For the latter, it is already well known (and much discussed at this forum) in what sense it can and cannot explain the appearance of "collapse".

Also, I guess I should say that Peres doesn't explicitly say that blurring can replace collapse. That is something that has come up more in discussions with vanhees71.

I think one way in which Peres's blurring is a bit different from decoherence is that he goes from a Wigner function (with negative portions) to a Liouville density (positive all over). In a sense, once we have a Liouville density, things have collapsed since it is a classical probability distribution with definite outcomes. So I am willing to consider that it might be different from decoherence. But yes, blurring seems not as sharply defined as collapse (which is a sharp rule once one has made the classical/quantum cut, and has measurements with time stamps), and I doubt that if it works it can be a local procedure in the Bell tests.

How exactly is collapse used in Bell tests? I think this isn't true.

Again I don't believe this is true. How exactly is collapse "verified" by anything?
If nothing else, collapse is at least a useful bookkeeping device. Even if this is not something which exists in nature, it is "verified" through the mental practice of many theoretical physicists.

Also, I guess I should say that Peres doesn't explicitly say that blurring can replace collapse. That is something that has come up more in discussions with vanhees71.

I think one way in which Peres's blurring is a bit different from decoherence is that he goes from a Wigner function (with negative portions) to a Liouville density (positive all over). In a sense, once we have a Liouville density, things have collapsed since it is a classical probability distribution with definite outcomes. So I am willing to consider that it might be different from decoherence. But yes, blurring seems not as sharply defined as collapse (which is a sharp rule once one has made the classical/quantum cut, and has measurements with time stamps), and I doubt that if it works it can be a local procedure in the Bell tests.
Even if you get a positive Wigner function, this function still gives probabilities for different outcomes and no single outcome is picked out by it. So similarly to decoherence, you still have the single-outcome problem. In other words, you still need to postulate either collapse or some other additional assumption.

Peres's explanation of blurring is itself quite blurred, but it seems similar to the much better explored idea of decoherence. For the latter, it is already well known (and much discussed at this forum) in what sense it can and cannot explain the appearance of "collapse".

Just a bit to add to my reply in post #9. I think you are right. One thing collapse does, and which it is not obvious coarse graining can do, is link a classical outcome with a quantum outcome. In Peres's case, I think all he gets is a classical outcome for the apparatus. For the quantum system, one probably obtains a decohered density matrix, which after collapse will be a proper mixed state. This should not reproduce the Bell test predictions, since the proper mixed state only represents a non-selective measurement, whereas the Bell tests use selective measurements.
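The selective/non-selective distinction can be made explicit numerically. The sketch below is my own illustration (it assumes the singlet polarization state ##(|HV\rangle - |VH\rangle)/\sqrt{2}## in the product basis HH, HV, VH, VV and the standard CHSH settings): a non-selective H/V measurement on A's photon leaves the separable mixture ##(|HV\rangle\langle HV| + |VH\rangle\langle VH|)/2##, and the CHSH combination drops from ##2\sqrt{2}## to below the classical bound:

```python
import numpy as np

def proj(theta):
    # Projector onto linear polarization at angle theta
    v = np.array([np.cos(theta), np.sin(theta)])
    return np.outer(v, v)

def pol_obs(theta):
    # +1/-1 polarization observable at angle theta
    return proj(theta) - proj(theta + np.pi / 2)

# Singlet state (|HV> - |VH>)/sqrt(2), product basis (HH, HV, VH, VV)
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2.0)
rho = np.outer(psi, psi)

# Non-selective H/V measurement on A: rho -> sum_i (P_i x 1) rho (P_i x 1)
PH = np.kron(proj(0.0), np.eye(2))
PV = np.kron(proj(np.pi / 2), np.eye(2))
rho_ns = PH @ rho @ PH + PV @ rho @ PV   # = (|HV><HV| + |VH><VH|)/2

def chsh(state):
    # |S| for the standard CHSH settings
    E = lambda al, be: float(np.trace(state @ np.kron(pol_obs(al), pol_obs(be))))
    a, a2, b, b2 = 0.0, np.pi / 4, np.pi / 8, 3 * np.pi / 8
    return abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))

print(chsh(rho))     # 2*sqrt(2) ~ 2.83: entangled state violates CHSH
print(chsh(rho_ns))  # sqrt(2) ~ 1.41 for these settings: no violation left
```

In other words, the non-selective mixture keeps the H/V correlations but has lost the coherences that the Bell-violating statistics require.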

Even if you get a positive Wigner function, this function still gives probabilities for different outcomes and no single outcome is picked out by it. So similarly to decoherence, you still have the single-outcome problem. In other words, you still need to postulate either collapse or some other additional assumption.

Yes, I think you are right.

I guess the point atyy makes is valid if you were exposed to the Copenhagen doctrine long enough, in the following sense. For definiteness, let's discuss the Aspect experiment in ultrasimplified form, i.e., not considering wave packets for the photon states, which in principle is what one must do for a fully correct description (also with respect to our discussion here), but I have to formulate this carefully before I can write it down here. So let's make the handwaving argument considering polarization states only and just add the space-time aspects of the measurement procedure "by hand".

You start with a two-photon Fock state with entangled polarization states (usually prepared using parametric downconversion and appropriate phase shifters for one of the photons), represented by the following ket:

$$|\Psi \rangle=\frac{1}{\sqrt{2}} (|HV \rangle-|VH \rangle).$$

Each single photon is in a mixed state (using the standard reduction formalism: take the trace over B's photon to get A's photon's state and vice versa); for both it is
$$\hat{\rho}_A= \hat{\rho}_B=\frac{1}{2} (|H \rangle \langle H|+|V \rangle \langle V|),$$
i.e., the polarization of each single photon is maximally uncertain (maximum von Neumann entropy).
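The reduction above is easy to check numerically. Here is a short sketch (my own, for illustration; the product-basis ordering HH, HV, VH, VV is a convention I'm assuming) that builds ##|\Psi\rangle##, traces out B's photon, and confirms the maximally mixed result:

```python
import numpy as np

# Two-photon polarization state |Psi> = (|HV> - |VH>)/sqrt(2),
# product basis ordered (HH, HV, VH, VV).
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2.0)
rho = np.outer(psi, psi.conj())   # full two-photon density matrix

# Reduced state of A's photon: partial trace over B's 2-dim factor
rho_A = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)
print(rho_A)            # (1/2) * identity: maximally mixed

# von Neumann entropy S = -tr(rho ln rho) = ln 2, the maximum for a qubit
evals = np.linalg.eigvalsh(rho_A)
S = -np.sum(evals * np.log(evals))
print(S)                # ln 2 ~ 0.693
```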

Now Alice and Bob each detect, at far-distant places, the polarization of one of the photons. Due to the geometrical setup, a precise enough measurement of the photon detection times makes sure that A and B always look at two photons belonging to the same entangled pair. Suppose now that Alice is much closer to the photon-pair source than Bob, so that she detects her photon way before Bob does.

I'd also like to discuss the simplest case, where Alice uses a polarization filter letting horizontally polarized photons through and Bob one letting vertically polarized photons through. I think there's no debate about the outcome when A and B compare their measurement protocols (modulo detector inefficiencies, which can be made arbitrarily small nowadays, so that we can neglect them for our idealized description): if A detects her photon (which is then necessarily H-polarized), Bob also detects his (then necessarily V-polarized) photon, and if A doesn't detect her photon, B doesn't detect his either.

Now let's discuss the experiment from the point of view of a Copenhagen-collapse interpreter (with whom I heavily disagree) and from the point of view of a minimal interpreter (with whom I heavily agree).

Copenhagen-collapse interpreter's point of view

Suppose A detects her photon (which happens with 50% probability). Since it's then for sure H-polarized, according to the Copenhagen collapse mechanism the entire state collapses instantaneously after this measurement to the state described by
$$|\Psi' \rangle=|HV \rangle \; \Rightarrow \; \hat{\rho}_B'=|V \rangle \langle V|.$$
I've already renormalized the ket to be of norm 1 again. So, taking the subensemble where A detects her photon, B for sure also detects his photon, because it's vertically polarized.
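For concreteness, the collapse step the Copenhagen-collapse interpreter performs can be written out in a few lines (my own sketch, using the same HH, HV, VH, VV product-basis convention): apply the projector for A's H outcome, renormalize, and read off B's reduced state:

```python
import numpy as np

# |Psi> = (|HV> - |VH>)/sqrt(2) in the product basis (HH, HV, VH, VV)
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2.0)

# Projector for "A finds H": |H><H| on A's factor, identity on B's
P_H = np.kron(np.diag([1.0, 0.0]), np.eye(2))

phi = P_H @ psi                 # unnormalized post-measurement state
p_H = float(np.vdot(phi, phi))  # Born probability of A's outcome: 1/2
phi = phi / np.sqrt(p_H)        # renormalized: |Psi'> = |HV>

# B's reduced state after the selective measurement: |V><V|
rho_B = np.trace(np.outer(phi, phi.conj()).reshape(2, 2, 2, 2),
                 axis1=0, axis2=2)
print(p_H, rho_B)
```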

Criticism against this view

This point of view violates relativistic causality and contradicts the very foundations of QED, which is (I think also undoubtedly) the correct model to describe these experiments: if there were a collapse like this, the detection of A's photon would have to instantaneously change the state of B's photon from ##\hat{\rho}_B## to ##\hat{\rho}_B'##, i.e., from "maximally uncertain" to "determined".

Now this contradicts QED by construction: the interaction of A's photon with her polarization foil and photon detector is local from the point of view of QED, because QED is constructed as a local relativistic QFT, and there can be no FTL signal propagation (note that such signals are described by the retarded propagator, not the Feynman propagator, as in classical electrodynamics!).

So, as atyy said, if you want to invoke the collapse argument, you must not misinterpret it as a real physical process, and the 100% correlation between the outcomes of A's and B's polarization measurements cannot be satisfactorily explained by the collapse. It's an ad-hoc assumption that apparently simplifies the prediction of the measurement outcome.

Minimal interpreter's point of view

There's no need for the collapse to explain the result of the experiment in terms of QED. You just evaluate the transition probabilities according to the corresponding S-matrix elements. In this case, you simply have to take "wave-packet states", leading to detection probabilities as a function of the times and locations of A's and B's detection events ("clicks of the photon detectors"). The result is of course the same: 100% correlation between the single-photon polarizations, but nowhere did I invoke a collapse argument.

As I said, I should work this out in mathematical form using standard QED (quantum optics, to be more precise, because one has to use the standard effective theory to describe the optical instruments involved, in this case polarizers).

Minimal interpreter's point of view

There's no need for the collapse to explain the result of the experiment in terms of QED. You just evaluate the transition probabilities according to the corresponding S-matrix elements. In this case, you simply have to take "wave-packet states", leading to detection probabilities as a function of the times and locations of A's and B's detection events ("clicks of the photon detectors"). The result is of course the same: 100% correlation between the single-photon polarizations, but nowhere did I invoke a collapse argument.

As I said, I should work this out in mathematical form using standard QED (quantum optics, to be more precise, because one has to use the standard effective theory to describe the optical instruments involved, in this case polarizers).

Could I see an explicit calculation or a reference?

If nothing else, collapse is at least a useful bookkeeping device. Even if this is not something which exists in nature, it is "verified" through the mental practice of many theoretical physicists.
By the same line of argument, anyone can argue that "collapse" is a useful bookkeeping device in classical probability, and "verified" through the mental practice of probability.

Minimal interpreter's point of view

There's no need for the collapse to explain the result of the experiment in terms of QED. You just evaluate the transition probabilities according to the corresponding S-matrix elements. In this case, you simply have to take "wave-packet states", leading to detection probabilities as a function of the times and locations of A's and B's detection events ("clicks of the photon detectors"). The result is of course the same: 100% correlation between the single-photon polarizations, but nowhere did I invoke a collapse argument.

As I said, I should work this out in mathematical form using standard QED (quantum optics, to be more precise, because one has to use the standard effective theory to describe the optical instruments involved, in this case polarizers).

To add to my request in post #15 for an explicit calculation, I'd also like to ask what the calculation looks like in the Schroedinger picture. I assume we are using free fields (either Maxwell or Dirac), so the theory should rigorously exist. The real experiments are done using Maxwell fields, but it may be easier to use Dirac fields, which also show entanglement and violate Bell inequalities.

Criticism against this view

This point of view violates relativistic causality and contradicts the very foundations of QED, which is (I think also undoubtedly) the correct model to describe these experiments: if there were a collapse like this, the detection of A's photon would have to instantaneously change the state of B's photon from ##\hat{\rho}_B## to ##\hat{\rho}_B'##, i.e., from "maximally uncertain" to "determined".

Now this contradicts QED by construction: the interaction of A's photon with her polarization foil and photon detector is local from the point of view of QED, because QED is constructed as a local relativistic QFT, and there can be no FTL signal propagation (note that such signals are described by the retarded propagator, not the Feynman propagator, as in classical electrodynamics!).

So, as atyy said, if you want to invoke the collapse argument, you must not misinterpret it as a real physical process, and the 100% correlation between the outcomes of A's and B's polarization measurements cannot be satisfactorily explained by the collapse. It's an ad-hoc assumption that apparently simplifies the prediction of the measurement outcome.

As much as I like the ensemble interpretation, I have to note that your argument is flawed. First, a Bell test is not about explaining 100%-correlated measurements.
And when we consider a correct Bell test, it turns out that it is not possible to get a violation of a Bell inequality in an ideal experiment without an FTL signal (you can easily verify that using Nick Herbert's simplified proof; see for example this thread for a discussion of it).

You get a Bell test by choosing certain relative angles of A's and B's polarizers. Nothing changes in the argument by setting the polarizers at relative angles other than ##\pi/2##, which I chose to simplify the discussion. There's no FTL signal necessary to explain the violation of Bell's (or related) inequalities, because there's nothing traveling faster than light. This is so by construction of QED.
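To make the role of the relative angles concrete, here is a sketch (my own, not from the post; it assumes the standard CHSH settings 0 and 45 degrees for A, 22.5 and 67.5 degrees for B) that computes the singlet correlations from the two-photon state of post #8 and checks the CHSH violation:

```python
import numpy as np

def proj(theta):
    # Projector onto linear polarization at angle theta
    v = np.array([np.cos(theta), np.sin(theta)])
    return np.outer(v, v)

def pol_obs(theta):
    # +1/-1 observable: "passes at theta" minus "passes at theta + 90 deg"
    return proj(theta) - proj(theta + np.pi / 2)

# Singlet polarization state (|HV> - |VH>)/sqrt(2), basis (HH, HV, VH, VV)
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2.0)

def E(alpha, beta):
    # Correlation <Psi| A(alpha) x B(beta) |Psi> = -cos 2(alpha - beta)
    return psi @ np.kron(pol_obs(alpha), pol_obs(beta)) @ psi

# Standard CHSH settings (radians): 0, 45 deg for A; 22.5, 67.5 deg for B
a, a2, b, b2 = 0.0, np.pi / 4, np.pi / 8, 3 * np.pi / 8
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))   # 2*sqrt(2) ~ 2.828: above the local-realist bound of 2
```

Note that the code only evaluates quantum-mechanical expectation values; it is silent on whether any FTL mechanism underlies them, which is exactly the point under debate.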

To add to my request in post #15 for an explicit calculation, I'd also like to ask what the calculation looks like in the Schroedinger picture. I assume we are using free fields (either Maxwell or Dirac), so the theory should rigourously exist. The real experiments are done using Maxwell fields, but it may be easier to use Dirac fields, which also show entanglement and violate Bell inequalities.
Ok, I'll see that I get this done over the weekend, but I'll not use the Schroedinger picture, because that's very inconvenient in relativistic QFT. Of course there are only free fields, as usual in quantum optics; then you only need a "wave-packet description" for the photons. The polarizer is described as ideal, in terms of a projection operator located at Alice's and Bob's places. Everything works, of course, in the two-photon Fock space.

To add to my request in post #15 for an explicit calculation, I'd also like to ask what the calculation looks like in the Schroedinger picture. I assume we are using free fields (either Maxwell or Dirac), so the theory should rigorously exist. The real experiments are done using Maxwell fields, but it may be easier to use Dirac fields, which also show entanglement and violate Bell inequalities.
If you want to learn the Schrodinger picture for quantum field theory, I recommend the book by Hatfield
https://www.amazon.com/dp/0201360799/?tag=pfamazon01-20

You get a Bell test by choosing certain relative angles of A's and B's polarizers. Nothing changes in the argument by setting the polarizers at relative angles other than ##\pi/2##, which I chose to simplify the discussion.
When you change the angle of a polarizer, you change the preferred basis for your calculations.

Collapse is the sudden change in the wave function after a measurement. Unless otherwise stated, I will assume that neither the wave function nor collapse is necessarily physical; they are just calculational tools that are part of the experimental success of quantum mechanics. Collapse is in almost all major textbooks except Ballentine and Peres, and is used in a standard way in the Bell tests.
Peres suggests that collapse can be replaced with coarse-graining. We know that collapse is verified by all experimental data to date, and that coarse-graining, if successful, must make exactly the same predictions as collapse. I think it is an interesting idea, but I have only ever seen it in Peres. Can coarse-graining without collapse successfully reproduce the predictions of quantum mechanics? In particular, can one do the coarse-graining without collapse explicitly and recover the correlations for the Bell tests? Is the coarse-graining in a Bell test a local procedure?
If collapse is treated as a black box for thermodynamical irreversibility, then it would be just a form of quantum coarse-graining, which is usually given physical meaning through the Planckian hypothesis and consequently the Heisenberg indeterminacy (Fundamentals of Statistical Mechanics, Bloch, p. 225). In that sense quantum coarse-graining (which can also be made equivalent to the notion of decoherence if one identifies the function of the environment degrees of freedom with internal microstates; the result is qualitatively the same) should by definition be enough to account for anything derived from the concept of collapse (in the operational role that you allow in the OP), shouldn't it? A different question is whether one is satisfied by this way of explaining irreversibility, but then one is not in the QM interpretational camp; one is simply questioning the completeness of QM as a theory. In other words, either collapse is equivalent to coarse-graining (decoherence), or the difference cannot be resolved within QM.
The latter case is equivalent to claiming that statistical mechanics is not fundamental but a good approximation, and you need to justify it. Tough racket.

If you want to learn the Schrodinger picture for quantum field theory, I recommend the book by Hatfield
https://www.amazon.com/dp/0201360799/?tag=pfamazon01-20
Yes, that's a marvelous book. Let me do the calculation. I'll write it up and post it here, as soon as it's ready :-).

If collapse is treated as a black box for thermodynamical irreversibility, then it would be just a form of quantum coarse-graining, which is usually given physical meaning through the Planckian hypothesis and consequently the Heisenberg indeterminacy (Fundamentals of Statistical Mechanics, Bloch, p. 225). In that sense quantum coarse-graining (which can also be made equivalent to the notion of decoherence if one identifies the function of the environment degrees of freedom with internal microstates; the result is qualitatively the same) should by definition be enough to account for anything derived from the concept of collapse (in the operational role that you allow in the OP), shouldn't it? A different question is whether one is satisfied by this way of explaining irreversibility, but then one is not in the QM interpretational camp; one is simply questioning the completeness of QM as a theory. In other words, either collapse is equivalent to coarse-graining (decoherence), or the difference cannot be resolved within QM.
The latter case is equivalent to claiming that statistical mechanics is not fundamental but a good approximation, and you need to justify it. Tough racket.
Are you saying there's a "collapse" in classical statistical mechanics? If so, what would that be? In classical statistical physics the probabilistic nature of the description is simply due to our ignorance of the full state (position and momentum of each particle of the system). The irreversibility comes into the game by construction (e.g., Boltzmann's "Stosszahlansatz" (molecular-chaos hypothesis)) and is thus more an input than a derivation. Also the thermodynamical arrow of time is brought in as an input, because you (usually silently) use the fundamental causality direction of time.

Could I see an explicit calculation or a reference?

I don't think that there is anything special about QED here. Using the S-matrix just amounts to computing amplitudes for outcomes and squaring the amplitudes to get probabilities for outcomes. The outcomes are typically outgoing momenta and spins for various types and numbers of particles.

I think that whether the "minimal interpretation" has a collapse or not is just a matter of terminology. You compute amplitudes for various outcomes, and you observe statistics for those various outcomes (running the experiment many times). The minimal interpretation is just that the observed statistics should be related to the predicted amplitudes according to the Born rule. If you look at a single run of the experiment, then you obviously have a single outcome, while the theory predicts amplitudes for various outcomes, but in the ensemble, frequentist view, that's not a problem--the theory only talks about what happens when the experiment is performed many times.

So the minimalist interpretation just amounts to refraining from asking what happens on a single run. It's the "shut up and calculate" interpretation.

Are you saying there's a "collapse" in classical statistical mechanics?

It depends on whether you consider probability to be an objective property of many identical runs of an experiment, or a subjective property of a single run. If you view classical probability as subjective--it's about your knowledge of the system, rather than a fact about the system itself--then the probability distribution "collapses" whenever you learn more information.
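stevendaryl's point can be made concrete with a purely classical example (my own illustration; the anti-correlated pair of "coins" is an arbitrary toy distribution): learning an outcome "collapses" the joint distribution to a conditional one instantly, with no physical process involved:

```python
# Joint distribution for two perfectly anti-correlated classical "coins",
# mirroring the photon-pair statistics: P(H,V) = P(V,H) = 1/2.
P = {("H", "V"): 0.5, ("V", "H"): 0.5}

# Marginal for B before anything is known about A: 50/50
P_B = {b: sum(p for (a, bb), p in P.items() if bb == b) for b in ("H", "V")}
print(P_B)            # {'H': 0.5, 'V': 0.5}

# We learn that A = "H"; the classical "collapse" is just conditioning
p_AH = sum(p for (a, b), p in P.items() if a == "H")
P_B_given_AH = {b: sum(p for (a, bb), p in P.items()
                       if a == "H" and bb == b) / p_AH
                for b in ("H", "V")}
print(P_B_given_AH)   # {'H': 0.0, 'V': 1.0}: updated instantly, no physics
```

The update is as instantaneous as one likes, yet nothing propagated anywhere: only our description changed. That is exactly the subjective-probability reading of collapse.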

Well, it's at least not a physical process, as claimed by some collapse proponents in the case of quantum theory. It's then even less needed in classical than in quantum theory.

Ok, I'll see that I get this done over the weekend, but I'll not use the Schroedinger picture, because that's very inconvenient in relativistic QFT. Of course there are only free fields, as usual in quantum optics; then you only need a "wave-packet description" for the photons. The polarizer is described as ideal, in terms of a projection operator located at Alice's and Bob's places. Everything works, of course, in the two-photon Fock space.

If you want to learn the Schrodinger picture for quantum field theory, I recommend the book by Hatfield
https://www.amazon.com/dp/0201360799/?tag=pfamazon01-20

Yes, that's a marvelous book. Let me do the calculation. I'll write it up and post it here, as soon as it's ready :).

Just to stress: what I am skeptical about is the claim that QFT has different postulates from QM. As far as I understand, it is not sufficient to have {unitary evolution + Born rule without collapse} nor {unitary evolution + Born rule without collapse + coarse graining}. My understanding is that in QM and QFT, whatever that additional postulate is, if one works in the Schroedinger picture (and some forms of the Heisenberg picture) and asks what the state is after the first measurement, conditioned on obtaining a certain result, that state will be the same state predicted by the collapse postulate. I think one could escape collapse by denying that the Schroedinger picture is correct in QFT (which would be an interesting discussion).

Just to stress what I am skeptical about is the claim that QFT has different postulates from QM. As far as I understand, it is not sufficient to have {unitary evolution + Born rule} nor {unitary evolution + Born rule + coarse graining}. My understanding is that in QM and QFT, whatever that additional postulate is, if one works in the Schroedinger picture (and some forms of the Heisenberg picture), and asks what the state is after the first measurement conditioned on obtaining a certain result, that state will be the same state predicted by the collapse postulate.
Concerning Born rule, unitarity and collapse, QM and QFT share the same set of postulates.

As much as I like the ensemble interpretation, I have to note that your argument is flawed. First, a Bell test is not about explaining 100%-correlated measurements.
And when we consider a correct Bell test, it turns out that it is not possible to get a violation of a Bell inequality in an ideal experiment without an FTL signal (you can easily verify that using Nick Herbert's simplified proof; see for example this thread for a discussion of it).

You get a Bell test by choosing certain relative angles of A's and B's polarizers. Nothing changes in the argument by setting the polarizers at relative angles other than ##\pi/2##, which I chose to simplify the discussion. There's no FTL signal necessary to explain the violation of Bell's (or related) inequalities, because there's nothing traveling faster than light. This is so by construction of QED.

What zonde is saying is correct. Bell's theorem says (keeping in mind the usual loopholes) that FTL is an essential part of quantum mechanics if it is to explain the nonlocal correlations, and collapse in the standard interpretation is one form of FTL. But this should not be misunderstood as being against relativity, because the FTL does not result in any transfer of classical information. Weinberg, for example, presents collapse as part of relativistic QFT.

Concerning Born rule, unitarity and collapse, QM and QFT share the same set of postulates.

Thanks, that is my understanding too. Weinberg says the same. I am trying to understand vanhees71's anti-collapse view.

I am trying to understand vanhees71's anti-collapse view.
Me too. :)

I don't think that there is anything special about QED here. Using the S-matrix just amounts to computing amplitudes for outcomes and squaring the amplitudes to get probabilities for outcomes. The outcomes are typically outgoing momenta and spins for various types and numbers of particles.

I think that whether the "minimal interpretation" has a collapse or not is just a matter of terminology. You compute amplitudes for various outcomes, and you observe statistics for those various outcomes (running the experiment many times). The minimal interpretation is just that the observed statistics should be related to the predicted amplitudes according to the Born rule. If you look at a single run of the experiment, then you obviously have a single outcome, while the theory predicts amplitudes for various outcomes, but in the ensemble, frequentist view, that's not a problem--the theory only talks about what happens when the experiment is performed many times.

So the minimalist interpretation just amounts to refraining from asking what happens on a single run. It's the "shut up and calculate" interpretation.

But without collapse, how can measurement be used as a means of quantum state preparation, where we use the classical result obtained to figure out the quantum state of the selected sub-ensemble? (I do understand there is a more general collapse rule than projective measurements, but let's keep things simple here, since there is still collapse in the more general rule.) Does this mean that measurement cannot be used as a form of state preparation in the minimal interpretation?

Well, it's at least not a physical process, as claimed by some collapse proponents in the case of quantum theory. It's then even less needed in classical than in quantum theory.

If probabilities are subjective, then there is no conceptual problem with collapse. But the meaning of subjective probability is that there is some actual (unknown) state of the system at each moment, and probability reflects our ignorance about that state. Quan
But without collapse, how can measurement be used as a means of quantum state preparation, where we use the classical result obtained to figure out the quantum state of the selected sub-ensemble? (I do understand there is a more general collapse rule than projective measurements, but let's keep things simple here, since there is still collapse in the more general rule.) Does this mean that measurement cannot be used as a form of state preparation in the minimal interpretation?

That's a good point. In order to apply QM, you have to assume that when you rerun the experiment many times, your preparation procedure returns the system to the same quantum mechanical state. I'm not 100% sure whether you need a collapse assumption here or not.
