# Experimental Tests of Projection Postulate

Have there been any experiments designed to explicitly test the projection postulate? I mean that part of it that says the measured particle is left in an eigenstate of the measured operator.

The usual devices for measuring particles (photomultipliers, phosphor screens, etc.) don't really allow the postulate to be tested, since the measured particle is absorbed, not re-emitted in a perfect eigenstate. Are there other measurement methods that allow testing the projection postulate?

You can't really "test" the postulate, since it is something that applies to some experiments and not others. For some experiments, such as Stern-Gerlach measurements, it works very well.

Quantum opticians use the term "non-demolition" to describe measurements where the state updates according to the projection postulate. There has been considerable work done on engineering these measurements in quantum optical systems. Just look up "non-demolition measurements" on the arXiv for further details.

slyboy said:
You can't really "test" the postulate, since it is something that applies to some experiments and not others. For some experiments, such as Stern-Gerlach measurements, it works very well.

Even in a Stern-Gerlach apparatus, I would think there is no actual "measurement" until the electron actually hits a detector. Until that time it would be in a superposition of possible pathways through the magnets, wouldn't it?

The perturbative effects of the magnetic field would reduce the state vector, I think. I've been wondering if the postulate could just be done away with by saying the operator itself is observable, but I'm not too sure whether this is correct. It seems to me that the operator has all the properties needed to be considered an observable, and if so, would the potentials therein be considered real? Though I'm not sure whether a von Neumann-type measurement would have a meaning in such a scenario, or whether reduction of the state vector would even have any conceptual meaning.

Even in a Stern-Gerlach apparatus, I would think there is no actual "measurement" until the electron actually hits a detector. Until that time it would be in a superposition of possible pathways through the magnets, wouldn't it?

There is always ambiguity in exactly where one applies the measurement postulate, if at all (since you might be a fan of the many-worlds viewpoint).

In the case of Stern-Gerlach, it is clear that one can recombine the outputs and perform another experiment that shows it is still a coherent superposition. This is true of the vast majority of "nondemolition" measurements, since one typically needs a great deal of control over the measurement interaction in order to make the projection postulate applicable. Typically, the "measuring device" is actually another quantum system, which can itself be coherently manipulated. In quantum optics, it might be another photon for example. The second system is then measured destructively, by absorption in a detector or something like that. The entire experiment could be reversed up to the point that the destructive measurement is made.
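The recombination point can be sketched numerically. Below is a minimal NumPy toy model (an idealised spin-1/2, not a model of any particular apparatus): coherent recombination leaves the transverse polarisation intact, while applying the projection postulate mid-way destroys it.

```python
import numpy as np

# Spin-1/2 prepared in the x-up state: equal superposition of z-up and z-down.
up = np.array([1, 0], dtype=complex)
down = np.array([0, 1], dtype=complex)
psi = (up + down) / np.sqrt(2)  # |x+>

sx = np.array([[0, 1], [1, 0]], dtype=complex)  # Pauli x

# Coherent recombination: the state is unchanged, so <Sx> stays maximal.
coherent = np.vdot(psi, sx @ psi).real

# Destructive z-measurement (projection postulate applied mid-way):
# rho = 1/2 |up><up| + 1/2 |down><down|, and the x-polarisation vanishes.
rho_mixed = 0.5 * np.outer(up, up.conj()) + 0.5 * np.outer(down, down.conj())
mixed = np.trace(rho_mixed @ sx).real

print(coherent)  # ~1: interference survives recombination
print(mixed)     # ~0: projection destroys it
```

A subsequent x-measurement distinguishes the two cases, which is exactly what a recombination experiment tests.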

However, this might be true in principle even of more complicated measurements involving "macroscopic" measuring devices. It is just that the Hamiltonians required to do this are almost impossible to engineer in practice. Evidence that macroscopic superpositions are possible, in the experiments of Zeilinger for example, indicates that this might be true.

von Neumann emphasized that there is a great deal of ambiguity in exactly where the projection postulate is applied (at the level of quantum systems, macroscopic systems, the brain of the conscious observer, etc.). Generally, it is a mathematical idealisation that allows us to calculate what will happen in experiments without having to deal with complicated entangled states of macroscopic systems. However, the probabilistic aspect of QM has to be applied at some stage. Exactly where it has to be applied and what it means is one of the main topics of debate in the foundations of quantum theory.

von Neumann emphasized that there is a great deal of ambiguity in exactly where the projection postulate is applied (at the level of quantum systems, macroscopic systems, the brain of the conscious observer, etc.). Generally, it is a mathematical idealisation that allows us to calculate what will happen in experiments without having to deal with complicated entangled states of macroscopic systems.

Is it generally accepted, then, that the projection postulate is not formally true, but is an approximation of the measuring device's unitary evolution, in the limit as the number of particles goes to infinity?

Is it generally accepted, then, that the projection postulate is not formally true, but is an approximation of the measuring device's unitary evolution, in the limit as the number of particles goes to infinity?

That's a thorny question. Not much is generally accepted when it comes to the measurement part of quantum theory. If you ask most physicists, they will probably mumble something about decoherence, but there is no universally accepted answer to this.

What is true is that if there is an interaction such that the quantum state is an entangled superposition of two or more systems, and you can guarantee that the two branches will not interfere with one another due to the nature of the interaction Hamiltonians that exist, then, from the perspective of one of the systems, no distinction can be made between using the full unitary dynamics of the whole superposition and first applying the projection postulate to the other systems before continuing with the unitary dynamics.
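This equivalence is easy to verify in a toy model. The NumPy sketch below (hypothetical amplitudes a and b, and perfectly orthogonal pointer states) shows that the reduced state of the system after the unitary interaction is identical to the mixture obtained by first applying the projection postulate to the other system:

```python
import numpy as np

# Hypothetical post-measurement amplitudes (any normalised pair works)
a, b = 1 / np.sqrt(3), np.sqrt(2 / 3)
s0 = np.array([1, 0], dtype=complex)
s1 = np.array([0, 1], dtype=complex)
A0, A1 = s0, s1   # perfectly orthogonal pointer states

# Entangled state a|0>|A0> + b|1>|A1> after the unitary interaction
psi = a * np.kron(s0, A0) + b * np.kron(s1, A1)
rho = np.outer(psi, psi.conj())

# Perspective of the system alone: trace out the pointer
rho_sys = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

# Projection postulate applied to the pointer first: a classical mixture
rho_proj = abs(a) ** 2 * np.outer(s0, s0) + abs(b) ** 2 * np.outer(s1, s1)

print(np.allclose(rho_sys, rho_proj))  # True: the two prescriptions agree
```

As long as the pointer states stay orthogonal (no re-interference), no measurement on the system alone can tell the two descriptions apart.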

In my opinion, it doesn't have much to do with the limit of a large number of particles, because you can still imagine engineering an interaction that causes the two branches to interfere, even though it may be difficult in practice. On the other hand, macroscopic systems are more likely to cause decoherence under the Hamiltonians that typically exist in nature.

Even if you do accept that sort of answer, there are many unresolved issues. For example, how do quantum probabilities come about if the universe just consists of a massively entangled wavefunction with no collapses?

People who are bothered by this sort of question have proposed alterations to quantum mechanics that resolve the ambiguities, e.g. Bohmian mechanics and spontaneous collapse models. However, these theories have yet to be made fully compatible with relativity; indeed it is difficult to do so, because reproducing the violation of the Bell inequalities means that they have to tackle nonlocality head on.

NateTG
Homework Helper
Hmm, this is a testable notion:

If you run a stream of electrons through such a Stern-Gerlach separator-recombiner in an EPR-like setup, and then check the correlation along a perpendicular axis, you should be able to see whether the results do or do not align.

If the correlation along the perpendicular axis is preserved, what happens if you put a 'nilpotent' destructive detector along the path of one of the beams? (By nilpotent I mean a destructive detector that should never detect anything.)

Hurkyl
Staff Emeritus
Gold Member
For example, how do quantum probabilities come about if the universe just consists of a massively entangled wavefunction with no collapses?

Instead of saying, for instance, that there's a 50% chance of a particle being in a spin up state, you'd say that 50% of the states in the superposition correspond to spin up registering on the measuring device.

Instead of saying you're very likely to see about 50 spin-ups in 100 experiments, you'd say that most of the states in the superposition correspond to about 50 spin-ups detected in the 100 experiments.

slyboy said:
What is true is that if there is an interaction such that the quantum state is an entangled superposition of two or more systems, and you can guarantee that the two branches will not interfere with one another due to the nature of the interaction Hamiltonians that exist, then, from the perspective of one of the systems, no distinction can be made between using the full unitary dynamics of the whole superposition and first applying the projection postulate to the other systems before continuing with the unitary dynamics.

What kind of interaction hamiltonians would prevent interference? Are they non-hermitian?

What kind of interaction hamiltonians would prevent interference? Are they non-hermitian?

No, they are just the usual hermitian interaction hamiltonians that occur in nature. The main point is that if you have a macroscopic system, such as the pointer on a measuring device, it is likely to couple to environmental degrees of freedom (the em field, dust particles, etc.) very differently depending on its state in position space. Then, you would have to be able to control all of these environmental degrees of freedom on a quantum level in order to cause the different position states of the pointer to reinterfere. This is impossible in practice, so we can apply the projection postulate to make effective predictions once we know that the system has interacted with the measuring device.
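A back-of-envelope model of this suppression: suppose each of n environment modes records the pointer position, with a single-mode overlap c = <e0|e1> between the two conditional environment states (the value 0.9 below is an arbitrary illustrative choice).

```python
def pointer_coherence(n, c=0.9):
    """Magnitude of the off-diagonal (interference) element of the
    pointer's reduced density matrix after it has become correlated
    with n environment modes, where c = <e0|e1> is the overlap of the
    two conditional environment states per mode (0.9 is an arbitrary
    illustrative value)."""
    # |Psi> = (|P0>|e0...e0> + |P1>|e1...e1>)/sqrt(2); tracing out the
    # environment multiplies the coherence 1/2 by <e0|e1>**n.
    return 0.5 * c ** n

for n in (1, 10, 100):
    print(n, pointer_coherence(n))
```

The coherence decays exponentially with the number of environment modes, which is why re-interfering a macroscopic pointer is hopeless in practice even though it is not forbidden in principle.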

Instead of saying, for instance, that there's a 50% chance of a particle being in a spin up state, you'd say that 50% of the states in the superposition correspond to spin up registering on the measuring device. Instead of saying you're very likely to see about 50 spin-ups in 100 experiments, you'd say that most of the states in the superposition correspond to about 50 spin-ups detected in the 100 experiments.

Of course, you can say that, but the fact of the matter is that it appears to us that measurements have actual outcomes, rather than being terms in a superposition, so you have to explain why we have this experience.

More seriously, it works well for the situation that you describe, but what about unequal superpositions, e.g.

$\frac{1}{\sqrt{3}}| \mbox{up}_z \rangle | \mbox{measuring device registers up} \rangle + \frac{\sqrt{2}}{\sqrt{3}} | \mbox{down}_z \rangle | \mbox{measuring device registers down} \rangle$

There are only two terms in the superposition, so by your prescription the probabilities should be 50-50. However, the actual QM probabilities are 1/3 and 2/3. You have to explain why we can give a probability interpretation to the amplitudes of states, rather than just the number of terms.

Another problem is: how do you decide which basis it is OK to make the probability statement in? I could decompose the spin state in the x-basis, and then the relative states of the measuring device would be superpositions of the "registers up" and "registers down" states.

All these are problems that afflict any interpretation wherein QM is complete and the wavefunction is taken to be a literal specification of the state of reality, such as many worlds. I am not saying that these questions have no good answers, since the many-worlders have come up with several ingenious proposals (albeit proposals that are not universally accepted). The main point is just that there must be more to it than simply reading the probabilities directly from the wavefunction.
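The discrepancy is easy to exhibit numerically. A sketch with the amplitudes above: branch counting predicts 50-50, the Born weights are 1/3 and 2/3, and sampled frequencies follow the latter.

```python
import numpy as np

# The amplitudes from the unequal superposition above
a, b = 1 / np.sqrt(3), np.sqrt(2 / 3)
amps = np.array([a, b])

born = np.abs(amps) ** 2        # Born rule weights: [1/3, 2/3]
naive = np.full(2, 0.5)         # "count the branches": [1/2, 1/2]

# Frequencies in repeated experiments follow the Born weights:
rng = np.random.default_rng(0)
outcomes = rng.choice([0, 1], size=100_000, p=born)
print(born)                     # ~[0.333, 0.667]
print(outcomes.mean())          # ~0.667, not the naive 0.5
```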

slyboy said:
[...] The main point is that if you have a macroscopic system, such as the pointer on a measuring device, it is likely to couple to environmental degrees of freedom (the em field, dust particles, etc.) very differently depending on its state in position space. Then, you would have to be able to control all of these environmental degrees of freedom on a quantum level in order to cause the different position states of the pointer to reinterfere. This is impossible in practice, so we can apply the projection postulate to make effective predictions once we know that the system has interacted with the measuring device.

Is it fair to say, then, that the macroscopic pointer states still formally interfere with each other, but the interference effects are so minute that the pointer's behavior cannot be distinguished from classical behavior?

Is it fair to say, then, that the macroscopic pointer states still formally interfere with each other, but the interference effects are so minute that the pointer's behavior cannot be distinguished from classical behavior?

Yes, pretty much. Have a look at Zurek's Physics Today article on the subject for more details.

vanesch
Staff Emeritus
Gold Member
slyboy said:
More seriously, it works well for the situation that you describe, but what about unequal superpositions, e.g.

$\frac{1}{\sqrt{3}}| \mbox{up}_z \rangle | \mbox{measuring device registers up} \rangle + \frac{\sqrt{2}}{\sqrt{3}} | \mbox{down}_z \rangle | \mbox{measuring device registers down} \rangle$

There are only two terms in the superposition, so by your prescription the probabilities should be 50-50. However, the actual QM probabilities are 1/3 and 2/3. You have to explain why we can give a probability interpretation to the amplitudes of states, rather than just the number of terms.

This is indeed THE remark that kills off many "naive" statistical interpretations of the wavefunction, something that Everett and co. never really solved in a satisfactory way. The closest is Deutsch's "rational decider" argument, but even there he needs additional "reasonable assumptions".

Another problem is: how do you decide which basis it is OK to make the probability statement in? I could decompose the spin state in the x-basis, and then the relative states of the measuring device would be superpositions of the "registers up" and "registers down" states.

All these are problems that afflict any interpretation wherein QM is complete and the wavefunction is taken to be a literal specification of the state of reality, such as many worlds. I am not saying that these questions have no good answers, since the many-worlders have come up with several ingenious proposals (albeit proposals that are not universally accepted). The main point is just that there must be more to it than simply reading the probabilities directly from the wavefunction.

I think that is very true, and MWI proponents (of which I'm in a way part, with caveats) cannot avoid ADDING extra hypotheses for the Born rule to emerge, no matter their repeated claims of the opposite. However, there's NOTHING WRONG with adding extra hypotheses, as long as they make sense. But I think extra hypotheses are in any case necessary in order to make the Born rule appear, the most important one being that ONLY ONE of the terms (in what basis?) is observed (with what probability?).
The difference with Copenhagen-style interpretations is that no physical process is responsible for a wavefunction collapse. It is there that Copenhagen-style interpretations "do not make much sense": by some magic, there are "measurement processes" in nature which "make the transition from the quantum to the classical world". Only, no such physical process is known, while the processes happening in the measurement apparatus ARE known (otherwise we wouldn't know what the apparatus is measuring in the first place; except gravity perhaps, but this is clearly not explicitly stated in the Copenhagen-style interpretations), and it is left very vague exactly where and how this transition is supposed to take place.

However, all this shouldn't stop you from using the projection postulate "FAPP" (for all practical purposes).

cheers,
Patrick.

I am just a grad student so my understanding of QM is still naive, but I am still not getting why the Born rule needs to be a fundamental postulate. Can't it just be viewed as a heuristic, an approximation to the hopelessly complicated unitary dynamics of a macroscopic system?

Patrick, the "relational" measurement paper I posted about at https://www.physicsforums.com/showthread.php?t=80769 claims to solve the basis choice problem. Could you maybe take a look at it and tell us what you think?

I have read it quickly. If I am not wrong, it deals only with the additional difficulty of the time transformation due to Lorentz invariance or the equivalence principle of GR. But I think it does not solve anything for the measurement.
The collapse postulate just defines a property in a given reference frame for the whole system. We have to re-express the projector of this property if we change the frame of reference, in order to have the same property correctly expressed in both frames. Therefore, we still have the problem of the (prediction) preferred basis.

Seratend.

vanesch
Staff Emeritus
Gold Member
Nicky said:
I am just a grad student so my understanding of QM is still naive, but I am still not getting why the Born rule needs to be a fundamental postulate. Can't it just be viewed as a heuristic, an approximation to the hopelessly complicated unitary dynamics of a macroscopic system?

Simply said, no, for a very simple reason: let us call that very complicated UNITARY operator, U. If it is unitary, no matter how complicated, it is LINEAR.

Take the system under study in state |s1>, and your measurement apparatus in its pre-measurement state |M0>.
Now, if state |s1> always gives you outcome 17 on the measurement dial, then your hopelessly complicated U has at least the following property:

U |s1> x |M0> = |s_something> x |M-17>

If state |s2> always gives you an outcome 38 on the measurement dial,
then your U has at least also the property:

U |s2> x |M0> = |s_somethingelse> x |M-38>

Purely from the linearity follows then:

U (a |s1> + b |s2>) x |M0> = a |s_something> x |M-17> + b |s_something_else> x |M-38>

and there's no way in which you can only get ONE of the terms, probabilistically. By linearity, you ALWAYS get BOTH terms.
Born's rule gives you ONE term, with a certain probability.
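The argument can be checked directly. Below is a NumPy sketch with a toy unitary U (a permutation matrix standing in for the measurement interaction; the state labels are of course hypothetical) applied to a superposition: both terms survive, exactly as linearity demands.

```python
import numpy as np

# System states |s1>, |s2> and a 3-level apparatus:
# |M0> = ready, |M17> = dial reads 17, |M38> = dial reads 38.
s1 = np.array([1, 0], dtype=complex)
s2 = np.array([0, 1], dtype=complex)
M0, M17, M38 = np.eye(3, dtype=complex)

# A permutation unitary with the two required properties:
#   U |s1>|M0> = |s1>|M17>,   U |s2>|M0> = |s2>|M38>
U = np.zeros((6, 6), dtype=complex)
U[1, 0] = U[0, 1] = 1   # |s1>|M0> <-> |s1>|M17>
U[5, 3] = U[3, 5] = 1   # |s2>|M0> <-> |s2>|M38>
U[2, 2] = U[4, 4] = 1   # remaining basis states untouched

a, b = 0.6, 0.8          # arbitrary amplitudes with |a|^2 + |b|^2 = 1
psi_in = np.kron(a * s1 + b * s2, M0)
psi_out = U @ psi_in

# Linearity forces BOTH branches to appear in the output:
expected = a * np.kron(s1, M17) + b * np.kron(s2, M38)
print(np.allclose(psi_out, expected))  # True
```

No choice of unitary U can map this input to just one of the two terms, which is why the probabilistic Born rule cannot be a consequence of the unitary dynamics alone.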

cheers,
Patrick.

vanesch said:
Purely from the linearity follows then:

U (a |s1> + b |s2>) x |M0> = a |s_something> x |M-17> + b |s_something_else> x |M-38>

and there's no way in which you can only get ONE of the terms, probabilistically. By linearity, you ALWAYS get BOTH terms.
Born's rule gives you ONE term, with a certain probability.

cheers,
Patrick.

You mean the projection postulate gives one term and the Born rule gives the probability :tongue2:

Seratend.

vanesch
Staff Emeritus
Gold Member
seratend said:
You mean projection postulate gives one term and born rules the probability :tongue2:

Eh yes.

I think that is very true, and MWI proponents (of which I'm in a way part, with caveats) cannot avoid ADDING extra hypotheses for the Born rule to emerge, no matter their repeated claims of the opposite. However, there's NOTHING WRONG with adding extra hypotheses, as long as they make sense. But I think extra hypotheses are in any case necessary in order to make the Born rule appear, the most important one being that ONLY ONE of the terms (in what basis?) is observed (with what probability?).

I agree that extra hypotheses are needed, and that there is NOTHING WRONG with adding them in principle. However, I think we must be incredibly careful about just what kind of hypotheses we allow ourselves to add. This is because adding hypotheses about probability is very likely to constrain the possible interpretation we can give to quantum probabilities. For example, if we simply say that the probabilities ARE given by the Born rule, and leave it at that, then that strongly suggests that quantum probabilities MUST be taken to be objective probabilities of some sort.

As you probably know, philosophers, statisticians, economists, physicists, etc. have debated the interpretation of probabilities in the classical case ad infinitum, with no clear consensus having emerged. The three front-runners are the frequentist, propensity, and subjective interpretations of probability.

Now, my personal favourite is the subjective interpretation, but that is pretty much beside the point here. However, I am inclined to believe (hope, pray, etc.) that the interpretational problems of quantum theory can be resolved without having to fix on a particular interpretation of probability. If not, then I don't really see any hope of resolving them, since the debate about interpretation of probability is in many ways even more divisive than the debate about quantum mechanics.

Therefore, I would say that the only sort of hypotheses we should add in order to derive the Born rule are ones that can be formulated in all three interpretations of probability. In particular, hypotheses of the form "the probability is ..." should not be allowed because they cannot be formulated in the subjective interpretation of probability.

vanesch said:
[...] Purely from the linearity follows then:

U (a |s1> + b |s2>) x |M0> = a |s_something> x |M-17> + b |s_something_else> x |M-38>

and there's no way in which you can only get ONE of the terms, probabilistically. By linearity, you ALWAYS get BOTH terms.
Born's rule gives you ONE term, with a certain probability.

Thanks much, Patrick. Very clear explanation.

vanesch
Staff Emeritus
Gold Member
slyboy said:
Therefore, I would say that the only sort of hypotheses we should add in order to derive the Born rule are ones that can be formulated in all three interpretations of probability. In particular, hypotheses of the form "the probability is ..." should not be allowed because they cannot be formulated in the subjective interpretation of probability.

I'm not sure about that (even convinced of the opposite, say :tongue2: ). In a MWI setting, where objectively, nothing is probabilistic and "everything" happens, the Born-style probabilities are only observer-related (and hence subjective). So what's wrong with: "and for the observer in that branch, the subjective probability is..." ? A bit in the style of the rational decider's probabilities in Deutsch's approach ?

cheers,
Patrick.

I'm not sure about that (even convinced of the opposite, say ). In a MWI setting, where objectively, nothing is probabilistic and "everything" happens, the Born-style probabilities are only observer-related (and hence subjective). So what's wrong with: "and for the observer in that branch, the subjective probability is..." ? A bit in the style of the rational decider's probabilities in Deutsch's approach ?

That's not really a thoroughgoing subjective theory of probability. It gels with the Jaynesian view of probability perhaps, since there an agent's probability is determined by the objective information that the agent has, i.e. it is an objective fact about an agent's objective knowledge. In this case, the information that an agent has depends on which branch of the wavefunction they find themselves in, so perhaps everything works out OK.

However, in a more thoroughgoing subjective approach, there is simply no objective fact that constrains the probabilities that an agent should assign. It is a matter of their state of belief, rather than of their information or knowledge. In my opinion, this is the more justifiable subjective theory, since information and knowledge are notoriously slippery concepts to define in this context, but belief has a well-defined operational meaning via de Finetti's arguments. In this approach, things that are directly related to probability, such as quantum amplitudes, ought not to appear as part of the "objective state of reality", so this approach is thoroughly incompatible with many-worlds in any case, even without considering what additional hypotheses are needed to derive the Born rule.

slyboy said:
That's not really a thoroughgoing subjective theory of probability. It gels with the Jaynesian view of probability perhaps, since there an agent's probability is determined by the objective information that the agent has, i.e. it is an objective fact about an agent's objective knowledge. In this case, the information that an agent has depends on which branch of the wavefunction they find themselves in, so perhaps everything works out OK.

However, in a more thoroughgoing subjective approach, there is simply no objective fact that constrains the probabilities that an agent should assign. It is a matter of their state of belief, rather than of their information or knowledge. In my opinion, this is the more justifiable subjective theory, since information and knowledge are notoriously slippery concepts to define in this context, but belief has a well-defined operational meaning via de Finetti's arguments. In this approach, things that are directly related to probability, such as quantum amplitudes, ought not to appear as part of the "objective state of reality", so this approach is thoroughly incompatible with many-worlds in any case, even without considering what additional hypotheses are needed to derive the Born rule.

Wow slyboy, this is almost philosophy! Where is the physics (the logical deductions, not the subjective or objective ones)?

Seratend.