
Experimental Tests of Projection Postulate

  1. Jul 7, 2005 #1
    Have there been any experiments designed to explicitly test the projection postulate? I mean that part of it that says the measured particle is left in an eigenstate of the measured operator.

    The usual devices for measuring particles (photomultipliers, phosphor screens, etc.) don't really allow the postulate to be tested, since the measured particle is absorbed, not re-emitted in a perfect eigenstate. Are there other measurement methods that allow testing the projection postulate?
  3. Jul 7, 2005 #2
    You can't really "test" the postulate, since it is something that applies to some experiments and not others. For some experiments, such as Stern-Gerlach measurements, it works very well.

    Quantum opticians use the term "non-demolition" to describe measurements where the state updates according to the projection postulate. There has been considerable work done on engineering these measurements in quantum optical systems. Just look up "non-demolition measurements" on the arXiv for further details.
  4. Jul 7, 2005 #3
    Even in a Stern-Gerlach apparatus, I would think there is no actual "measurement" until the electron actually hits a detector. Until that time it would be in a superposition of possible pathways through the magnets, wouldn't it?
  5. Jul 7, 2005 #4
    The perturbative effects of the magnetic field would reduce the state vector, I think. I've been wondering if the postulate could just be done away with by saying the operator itself is observable, but I'm not too sure whether this is correct. It seems to me that the operator has all the properties needed to be considered an observable, and if so, would the potentials therein be considered real? Though I'm not sure whether a von Neumann-type measurement would have a meaning in such a scenario, or whether reduction of the state vector would even have any conceptual meaning.
  6. Jul 8, 2005 #5
    There is always ambiguity in exactly where one applies the measurement postulate, if at all (since you might be a fan of the many-worlds viewpoint).

    In the case of Stern-Gerlach, it is clear that one can recombine the outputs and perform another experiment that shows it is still a coherent superposition. This is true of the vast majority of "nondemolition" measurements, since one typically needs a great deal of control over the measurement interaction in order to make the projection postulate applicable. Typically, the "measuring device" is actually another quantum system, which can itself be coherently manipulated. In quantum optics, it might be another photon for example. The second system is then measured destructively, by absorption in a detector or something like that. The entire experiment could be reversed up to the point that the destructive measurement is made.
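    The recombination point can be illustrated numerically. Here is a minimal Python sketch of my own (the 2x2 density-matrix encoding and all names are mine, not from the thread): a spin prepared in |+x>, split along z and coherently recombined, is still |+x> with certainty, whereas applying the projection postulate between the magnets leaves a 50-50 mixture that a subsequent x-measurement can detect.

```python
# Toy illustration: coherent Stern-Gerlach recombination vs. mid-flight
# collapse, using bare 2x2 matrices of real amplitudes (no conjugation
# needed since everything here is real).
import math

def outer(v, w):
    # |v><w| as a 2x2 matrix
    return [[v[i] * w[j] for j in range(2)] for i in range(2)]

def expval(rho, proj):
    # Tr(rho * proj): probability of passing the projector
    return sum(rho[i][j] * proj[j][i] for i in range(2) for j in range(2))

plus_x = [1 / math.sqrt(2), 1 / math.sqrt(2)]   # |+x> in the z basis

rho_coherent = outer(plus_x, plus_x)        # beams recombined, no collapse
rho_collapsed = [[0.5, 0.0], [0.0, 0.5]]    # projected onto the z basis

P_plus_x = outer(plus_x, plus_x)            # projector onto |+x>

print(round(expval(rho_coherent, P_plus_x), 6))   # 1.0: coherence survives
print(round(expval(rho_collapsed, P_plus_x), 6))  # 0.5: collapse shows up
```

    The off-diagonal elements of the density matrix are exactly what the collapse destroys, which is why the perpendicular-axis measurement distinguishes the two cases.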

    However, this might be true in principle even of more complicated measurements involving "macroscopic" measuring devices. It is just that the Hamiltonians required to do this are almost impossible to engineer in practice. Evidence that macroscopic superpositions are possible, in the experiments of Zeilinger for example, indicates that this might be true.

    von Neumann emphasized that there is a great deal of ambiguity in exactly where the projection postulate is applied (at the level of quantum systems, macroscopic systems, the brain of the conscious observer, etc.). Generally, it is a mathematical idealisation that allows us to calculate what will happen in experiments without having to deal with complicated entangled states of macroscopic systems. However, the probabilistic aspect of QM has to be applied at some stage. Exactly where it has to be applied and what it means is one of the main topics of debate in the foundations of quantum theory.
  7. Jul 8, 2005 #6
    Is it generally accepted, then, that the projection postulate is not formally true, but is an approximation of the measuring device's unitary evolution, in the limit as the number of particles goes to infinity?
  8. Jul 8, 2005 #7
    That's a thorny question. Not much is generally accepted when it comes to the measurement part of quantum theory. If you ask most physicists then they will probably mumble something about decoherence, but there is no universally accepted answer to this.

    What is true is the following: suppose an interaction leaves the quantum state as an entangled superposition of two or more systems, and the interaction Hamiltonians that exist guarantee that the branches will not interfere with one another. Then, from the perspective of one of the systems, no distinction can be made between using the full unitary dynamics of the whole superposition and first applying the projection postulate to the other systems before continuing with the unitary dynamics.
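    This equivalence can be checked directly in a toy 2x2 example (a sketch of my own; the state and variable names are not from the thread): once the branch states of the partner system are orthogonal, tracing the partner out gives exactly the same reduced state as applying the projection postulate to it first.

```python
# Joint state a|0>|E0> + b|1>|E1>, with orthogonal "environment" branches.
import math

a, b = 1 / math.sqrt(3), math.sqrt(2.0 / 3.0)

# Amplitudes psi[system][environment] (real, so no conjugation needed)
psi = [[a, 0.0],
       [0.0, b]]

# Reduced density matrix of the system: trace out the environment
rho_traced = [[sum(psi[i][k] * psi[j][k] for k in range(2))
               for j in range(2)] for i in range(2)]

# Projecting the environment first gives the classical mixture
# {|a|^2, |b|^2} -- which is the very same matrix
rho_projected = [[a * a, 0.0], [0.0, b * b]]

same = all(abs(rho_traced[i][j] - rho_projected[i][j]) < 1e-12
           for i in range(2) for j in range(2))
print(same)   # True: no local measurement can tell the two apart
```

    The equivalence fails as soon as the environment branches overlap, which is exactly when re-interference becomes possible.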

    In my opinion, it doesn't have much to do with the limit of a large number of particles, because you can still imagine engineering an interaction that causes the two branches to interfere, even though it may be difficult in practice. On the other hand, macroscopic systems are more likely to cause decoherence under the Hamiltonians that typically exist in nature.

    Even if you do accept that sort of answer, there are many unresolved issues. For example, how do quantum probabilities come about if the universe just consists of a massively entangled wavefunction with no collapses?

    People who are bothered by this sort of question have proposed alterations to quantum mechanics that resolve the ambiguities, e.g. Bohmian mechanics and spontaneous collapse models. However, these theories have yet to be made fully compatible with relativity - indeed it is difficult to do so, because reproducing the violation of the Bell inequalities means that they have to tackle nonlocality head on.
  9. Jul 8, 2005 #8



    Hmm, this is a testable notion:

    If you ran a stream of electrons through such a Stern-Gerlach separator-recombiner in an EPR-like setup, and then checked the correlation along a perpendicular axis, you should be able to check whether the results do or do not stay aligned.

    If the correlation along the perpendicular axis is preserved, what happens if you put a 'nilpotent' destructive detector along the path of one of the beams? (By nilpotent I mean a destructive detector that should never detect anything.)
  10. Jul 8, 2005 #9



    Instead of saying, for instance, that there's a 50% chance of a particle being in a spin up state, you'd say that 50% of the states in the superposition correspond to spin up registering on the measuring device.

    Instead of saying you're very likely to see about 50 spin-ups in 100 experiments, you'd say that most of the states in the superposition correspond to about 50 spin-ups detected in the 100 experiments.
  11. Jul 8, 2005 #10
    What kind of interaction hamiltonians would prevent interference? Are they non-hermitian?
  12. Jul 10, 2005 #11
    No, they are just the usual hermitian interaction hamiltonians that occur in nature. The main point is that if you have a macroscopic system, such as the pointer on a measuring device, it is likely to couple to environmental degrees of freedom (the em field, dust particles, etc.) very differently depending on its state in position space. Then, you would have to be able to control all of these environmental degrees of freedom on a quantum level in order to cause the different position states of the pointer to reinterfere. This is impossible in practice, so we can apply the projection postulate to make effective predictions once we know that the system has interacted with the measuring device.

    Of course, you can say that, but the fact of the matter is that it appears to us that measurements have actual outcomes, rather than being terms in a superposition, so you have to explain why we have this experience.

    More seriously, it works well for the situation that you describe, but what about unequal superpositions, e.g.

    [itex] \frac{1}{\sqrt{3}}| \mbox{up}_z \rangle | \mbox{measuring device registers up} \rangle + \frac{\sqrt{2}}{\sqrt{3}} | \mbox{down}_z \rangle | \mbox{measuring device registers down} \rangle [/itex]

    There are only two terms in the superposition, so by your prescription the probabilities should be 50-50. However, the actual QM probabilities are 1/3 and 2/3. You have to explain why we can give a probability interpretation to the amplitudes of states, rather than just the number of terms.
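    As a quick numeric restatement of this point (a sketch of mine, not part of the original post): the Born rule weights each branch by the squared modulus of its amplitude, so counting the two terms as 50-50 gets the wrong answer.

```python
import math

amp_up = 1 / math.sqrt(3)               # amplitude of the "up" branch
amp_down = math.sqrt(2) / math.sqrt(3)  # amplitude of the "down" branch

p_up = abs(amp_up) ** 2       # Born rule: |amplitude|^2, not branch count
p_down = abs(amp_down) ** 2

print(round(p_up, 4), round(p_down, 4))   # 0.3333 0.6667
```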

    Another problem is: how do you decide which basis it is OK to make the probability statement in? I could decompose the spin state in the x-basis, and then the relative states of the measuring device would be superpositions of the "registers up" and "registers down" states.

    All these are problems that afflict any interpretation wherein QM is complete and the wavefunction is taken to be a literal specification of the state of reality, such as many worlds. I am not saying that these questions have no good answers, since the many-worlders have come up with several ingenious proposals (albeit proposals that are not universally accepted). The main point is just that there must be more to it than simply reading the probabilities directly from the wavefunction.
  13. Jul 10, 2005 #12
    Is it fair to say, then, that the macroscopic pointer states still formally interfere with each other, but the interference effects are so minute that the pointer's behavior cannot be distinguished from classical behavior?
  14. Jul 10, 2005 #13
    Yes, pretty much. Have a look at Zurek's Physics Today article on the subject for more details.
  15. Jul 11, 2005 #14



    This is indeed THE remark that kills off many "naive" statistical interpretations of the wavefunction, something that Everett and Co never really solved in a satisfactory way. The closest is Deutsch's "rational decider" argument, but even there, he needs additional "reasonable assumptions".

    I think that is very true, and MWI proponents (of which I am in a way one, with caveats) cannot avoid ADDING extra hypotheses in order for the Born rule to emerge, despite their repeated claims to the contrary. However, there's NOTHING WRONG with adding extra hypotheses, as long as they make sense. But I think extra hypotheses are in any case necessary for the Born rule to appear, the most important one being that ONLY ONE of the terms (in what basis?) is observed (with what probability?).
    The difference with Copenhagen-style interpretations is that no physical process is responsible for the wavefunction collapse. That is where Copenhagen-style interpretations "do not make much sense": by some magic, there are "measurement processes" in nature which "make the transition from the quantum to the classical world". Only, no such physical process is known (except gravity, perhaps - but this is clearly not explicitly stated in the Copenhagen-style interpretations), while the processes happening in the measurement apparatus ARE known - otherwise we wouldn't know what the apparatus is measuring in the first place - and it is left very vague exactly where and how this transition is supposed to take place.

    However, all this shouldn't stop you from using the projection postulate "FAPP" (for all practical purposes).

  16. Jul 11, 2005 #15



  17. Jul 11, 2005 #16
    I am just a grad student, so my understanding of QM is still naive, but I am still not getting why the Born rule needs to be a fundamental postulate. Can't it just be viewed as a heuristic, an approximation to the hopelessly complicated unitary dynamics of a macroscopic system?
  18. Jul 11, 2005 #17
    I have quickly read it. If I am not wrong, it deals only with the additional difficulty of the time transformation due to Lorentz invariance or the equivalence principle of GR, but I think it does not solve anything for the measurement problem.
    The collapse postulate just defines a property in a given reference frame for the whole system. We have to re-express the projector of this property if we change the frame of reference, in order to have the same property correctly expressed in both frames. Therefore, we still have the preferred-basis problem for predictions.

  19. Jul 11, 2005 #18



    Simply said, no, for a very simple reason: let us call that very complicated UNITARY operator, U. If it is unitary, no matter how complicated, it is LINEAR.

    Take the system under study in state |s1>, and your measurement apparatus in its pre-measurement state |M0>.
    Now, if state |s1> always gives you outcome 17 on the measurement dial, then your hopelessly complicated U has at least the following property:

    U |s1>x|M0> = |s_something> x |M-17>

    If state |s2> always gives you an outcome 38 on the measurement dial,
    then your U has at least also the property:

    U |s2> x |M0> = |s_somethingelse> x |M-38>

    Purely from the linearity follows then:

    U (a |s1> + b |s2> ) x |M0> = a |s_something> x |M-17> + b |s_something_else> x |M-38>

    and there's no way in which you can only get ONE of the terms, probabilistically. By linearity, you ALWAYS get BOTH terms.
    Born's rule gives you ONE term, with a certain probability.
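    The linearity argument above can be checked mechanically. A minimal sketch of my own (the basis ordering and labels are my choice): define U only by its action on the two calibration product states, extend it by linearity, and observe that a superposed input always retains both pointer terms.

```python
import math

# Joint basis order: |s1,M0>, |s2,M0>, |s_something,M-17>, |s_else,M-38>
def U(vec):
    # U|s1,M0> -> |s_something,M-17>, U|s2,M0> -> |s_else,M-38>,
    # extended to all inputs purely by linearity
    return [0.0, 0.0, vec[0], vec[1]]

a, b = 1 / math.sqrt(2), 1 / math.sqrt(2)
out = U([a, b, 0.0, 0.0])       # U applied to (a|s1> + b|s2>) x |M0>

# Linearity alone forces BOTH pointer readings to appear:
print(out[2] != 0 and out[3] != 0)   # True
```

    No choice of linear U can send a superposed input to just one of the two pointer states, which is exactly why the Born rule cannot be an ordinary unitary approximation.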

  20. Jul 11, 2005 #19
    You mean the projection postulate gives one term, and the Born rule gives the probability :tongue2:

  21. Jul 11, 2005 #20



    Eh yes. :redface: