
Wavefunction collapse: is that really an axiom?

  1. Oct 11, 2007 #1
    Can the wavefunction collapse not be derived or is it really an axiom?

    How can the answer to this question (yes or no) be proven?

    If it is an axiom, is it the best formulation, and is the wording not dangerous?

    Let's enjoy this endless discussion !!!
  3. Oct 11, 2007 #2


    Science Advisor

    There are several different interpretations of QM.
    In some of them, there is no need for a collapse postulate.
  4. Oct 11, 2007 #3


    Staff Emeritus
    Science Advisor
    Gold Member

    I think the thing we can sensibly say is that wavefunction collapse cannot follow from the unitary time evolution, which is easy to establish.
  5. Oct 11, 2007 #4
    But it is if you include the interaction with the measuring device into the QM model.
  6. Oct 11, 2007 #5


    Staff Emeritus
    Science Advisor
    Gold Member

    Only if you choose some model other than unitary evolution for describing the measurement process.
  7. Oct 12, 2007 #6
    Do you mean that the "measurement axiom" contradicts my/the postulate
    that the (unitary) equation of evolution governs all interactions (including measurement systems)?
    Last edited: Oct 12, 2007
  8. Oct 12, 2007 #7


    Staff Emeritus
    Science Advisor
    Gold Member

    Yes, of course! That's the whole issue (or rather, half the issue) in the "measurement problem". It is (to me at least) one of the reasons to take MWI seriously.

    There's no unitary evolution (no matter how complicated) that can result in a collapsed wavefunction. This can be shown in 5 lines of algebra.
  9. Oct 12, 2007 #8
    The algebra is simple and true.
    The problem is that the wavefunction collapse doesn't really exist.
    Just like microreversibility doesn't contradict the second law: microreversibility doesn't necessarily imply the existence of a chaos demon.
  10. Oct 12, 2007 #9
    The algebra is simple and true.
    (even simple inspection of collapse algebra is enough for that, especially on the density matrix)
    The problem is that the wavefunction collapse doesn't really exist.
    Just like microreversibility doesn't contradict the second law: microreversibility doesn't necessarily imply the existence of a chaos demon.

    In addition, I am quite sure that the collapse axiom can be derived from the Schrödinger equation. But the understanding is missing, to my knowledge.
  11. Oct 12, 2007 #10


    Science Advisor

    There is actually a proof that it cannot. The proof is based on the fact that the Schrodinger equation involves only local interactions, while the collapse, including the cases with two or more entangled particles, requires nonlocal interactions.

    There is, however, something that contains some elements of a collapse but can be obtained from the Schrodinger equation. This is the environment-induced decoherence. And it is closely related to the second law emerging from time-symmetric laws of a large number of degrees of freedom. See e.g.
    http://xxx.lanl.gov/abs/quant-ph/0312059 (Rev. Mod. Phys. 76, 1267-1305 (2004))
    Last edited: Oct 12, 2007
  12. Oct 12, 2007 #11
    Could you please show it then? I don't know about you guys, but I am so sick of qualitative arguments involving wavefunction collapse, etc.

    To address the OP: it is my understanding that if you postulate the Born interpretation, then wavefunction collapse follows from it; in other words, physicists weren't just sitting around and postulating "wave collapse" as some popular books/shows would have one believe.

    more concretely,

    [tex]\langle \Omega \rangle_\alpha = \sum_i \omega_i | \langle \omega_i | \alpha \rangle|^2[/tex]

    where the Born interpretation is that the quantity [tex]| \langle \omega_i | \alpha \rangle|^2[/tex] is to be interpreted as the probability of measuring the value [tex]\omega_i[/tex]

    [tex]\langle \Omega \rangle_\alpha = \sum_i \omega_i | \langle \omega_i | \alpha \rangle|^2[/tex]
    [tex]= \sum_i \omega_i \langle \omega_i | \alpha \rangle ^* \langle \omega_i | \alpha \rangle[/tex]
    [tex]= \sum_i \omega_i \langle \alpha | \omega_i \rangle \langle \omega_i | \alpha \rangle[/tex]
    [tex]= \sum_i \langle \alpha | \omega_i \rangle \omega_i \langle \omega_i | \alpha \rangle[/tex]
    [tex]= \sum_i \langle \alpha | \omega_i \rangle \langle \omega_i | \Omega | \omega_i \rangle \langle \omega_i | \alpha \rangle[/tex]
    [tex]= \langle \alpha |\left(\sum_i | \omega_i \rangle \langle \omega_i |\right) \Omega \left(\sum_j | \omega_j \rangle \langle \omega_j | \right)| \alpha \rangle[/tex]
    [tex]=\langle \alpha | \Omega | \alpha \rangle[/tex]

    so for some state [tex]\phi = \sum_i c_i \psi_i[/tex] that is NOT an eigenket of the operator (but can always be formed from a linear combination of eigenkets), we have:

    [tex]\langle \phi | \Omega | \phi \rangle = \langle \phi | \Omega | \sum_j c_j \psi_j \rangle[/tex]
    [tex]=\langle \phi | \sum_j c_j \omega_j \psi_j \rangle[/tex]
    [tex]=\sum_j \langle \sum_i c_i \psi_i | c_j \omega_j \psi_j \rangle[/tex]
    [tex]=\sum_i \sum_j c_i^* c_j \omega_j \langle \psi_i | \psi_j \rangle[/tex]

    and by orthogonality of states, we have:

    [tex]=\sum_i |c_i|^2 \omega_i[/tex]

    which shows that the average value of our experiments will be a weighted average of the eigenvalues, i.e. the "wavefunction collapse" such that any individual measurement yields a particular eigenvalue. In the classical limit, the spectrum of eigenvalues is nearly continuous and so the effect is unnoticeable.
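    The derivation above can be checked numerically. The following is a minimal sketch with an invented 3-level example (the operator and state are arbitrary, not from the thread): the direct expectation value <alpha|Omega|alpha> should equal the Born-probability-weighted sum of eigenvalues.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # A random Hermitian operator Omega on a 3-dimensional Hilbert space
    A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
    Omega = (A + A.conj().T) / 2

    # A random normalized state |alpha>
    alpha = rng.normal(size=3) + 1j * rng.normal(size=3)
    alpha /= np.linalg.norm(alpha)

    # Eigen-decomposition: Omega |omega_i> = omega_i |omega_i>
    omegas, vecs = np.linalg.eigh(Omega)

    # Born probabilities |<omega_i|alpha>|^2
    probs = np.abs(vecs.conj().T @ alpha) ** 2

    direct = (alpha.conj() @ Omega @ alpha).real   # <alpha|Omega|alpha>
    weighted = np.sum(probs * omegas)              # sum_i |c_i|^2 omega_i

    assert np.isclose(direct, weighted)            # the two expressions agree
    assert np.isclose(probs.sum(), 1.0)            # probabilities sum to 1
    ```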

    So what's the big deal? Can someone explain to me why this is, for some people, such a big damn mystery?
  13. Oct 12, 2007 #12
    There is no need for any algebra to show that the wavefunction collapse is not described by a unitary evolution. This follows simply from the definition of the wavefunction. By definition, the wavefunction is a probability amplitude. This means that the measurements described by the wavefunction are a random, probabilistic, unpredictable process, which cannot be described by a deterministic "unitary evolution". That's the whole point of quantum mechanics. In my opinion, looking for a unitary description of the collapse is equivalent to looking for "hidden variables".

  14. Oct 12, 2007 #13
    I agree, meopemuk.
    The collapse is not a unitary transformation,
    and it is not even a transformation at all.

    After the collapse, there is no wave function anymore, but a statistical mixture.

    That's the axiom.

    My view is that after the interaction of a small system with a measuring device,
    the state of the small system loses its meaning,
    and only the combined wavefunction has a meaning.

    The problem that remains is how the axiom emerges from the "complex" evolution.
    This is a challenge, and I am confident that it will or can be explained trivially.
    I am also sure that solving this problem is not really useful for the progress of QM;
    it is the kind of problem that time and generations solve.
  15. Oct 12, 2007 #14


    Staff Emeritus
    Science Advisor
    Gold Member

    Ok, here goes the "proof".

    Axiom 1: every state of a system is a ray in Hilbert space.

    Now, consider the system "measurement device + SUT" (SUT = system under test, say, an electron spin). This is quantum-mechanically described by a ray in Hilbert space. As we have degrees of freedom belonging to the SUT and other degrees of freedom belonging to the measurement device, the Hilbert space of the overall system is the tensor product of the Hilbert spaces of the individual systems

    H = H_m x H_sut

    Now, consider that before the measurement, the SUT is in a certain state, say |a> + |b> and the measurement system is in a classically-looking state |M0>. As we have now individually assigned states for each of the subsystems, the overall state is given by the tensor product of both substates:

    |psi0> = |M0> x ( |a> + |b> )

    Now we do a measurement. That comes down to having an interaction hamiltonian between both subsystems, and from that interaction hamiltonian follows a unitary evolution operator over a certain time, say time T. We write this operator as U(0,T), it evolves the entire system from time 0 to time T.

    Now, let us first consider that our SUT was in state |a> and our measurement system was in (classically looking) state |M0>, which is its state before a measurement was done.

    "Doing a measurement" would result in our measurement device getting into a classically-looking state |Ma> for sure, assuming that |a> was an eigenvector of the measurement. As such, our interaction between our system and our measurement apparatus, described by U(0,T), is given by:

    U(0,T) { |M0> x |a> } = |Ma> x |a>

    Indeed, the state of the measurement device is now for sure the classically-looking state Ma, and (property of a measurement on a system in an eigenstate) the SUT didn't change.

    We can tell now the same story if the system was in state b:

    U(0,T) { |M0> x |b> } = |Mb> x |b>

    where Mb is the classically looking state of the measurement apparatus with the pointer on "b".

    Now from linearity of U follows that:

    U(0,T) { |M0> x (|a> + |b>) } = |Ma> x |a> + |Mb> x |b>

    We didn't find "sometimes |Ma> x |a> and sometimes |Mb> x |b>" ; the unitary evolution gave us an entangled superposition.

    Now, you can say "yes, but |u> + |v> means: sometimes |v> and sometimes |u>"

    but that's not true of course. Consider the other measurement apparatus N which does the following:

    U(0,T) { |N0> x (|a> + |b>) } = |Nu> x (|a> + |b> )

    U(0,T) { |N0> x (|a> - |b>) } = |Nd> x (|a> - |b>)

    From this, we can deduce that

    U(0,T) {|N0> x |a> } = 1/2 (|Nu> x { |a> + |b> } + |Nd> x { |a> - |b> })

    Clearly, if |u> + |v> meant "sometimes u and sometimes v", then we could never have |a> + |b> (which would then be "sometimes a and sometimes b") always giving rise to Nu and never to Nd, because |a> alone gives rise to sometimes Nu and sometimes Nd.
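    The proof above can be illustrated with a toy calculation. The sketch below assumes the simplest possible model (an illustration, not from the thread itself): both the SUT and the pointer are qubits, and the measurement interaction U(0,T) is taken to be a CNOT-like unitary that correlates the pointer with the SUT basis states. Applying it to |M0> x (|a> + |b>) yields an entangled state of Schmidt rank 2, not a single pointer state.

    ```python
    import numpy as np

    # Basis states for the system under test (SUT)
    ket_a = np.array([1, 0], dtype=complex)   # |a>
    ket_b = np.array([0, 1], dtype=complex)   # |b>
    M0 = np.array([1, 0], dtype=complex)      # pointer "ready" state |M0>

    # Measurement unitary on (pointer (x) SUT): a CNOT controlled by the SUT.
    # It realizes  |M0>|a> -> |Ma>|a>  and  |M0>|b> -> |Mb>|b>,
    # with |Ma> = |0>, |Mb> = |1>.
    U = np.array([[1, 0, 0, 0],
                  [0, 0, 0, 1],
                  [0, 0, 1, 0],
                  [0, 1, 0, 0]], dtype=complex)
    assert np.isclose(U.conj().T @ U, np.eye(4)).all()   # U is unitary

    sut = (ket_a + ket_b) / np.sqrt(2)        # superposition |a> + |b>
    psi0 = np.kron(M0, sut)                   # |M0> x (|a> + |b>)
    psi_final = U @ psi0                      # = (|Ma>|a> + |Mb>|b>)/sqrt(2)

    # If the final state were a product state |M?> x |s?> (a definite
    # pointer reading), its 2x2 coefficient matrix would have Schmidt
    # rank 1.  The singular values show rank 2: an entangled state.
    C = psi_final.reshape(2, 2)
    svals = np.linalg.svd(C, compute_uv=False)
    assert (svals > 1e-12).sum() == 2   # Schmidt rank 2: not a pointer state
    ```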
  16. Oct 12, 2007 #15


    Staff Emeritus
    Science Advisor
    Gold Member

    The problem is that, for a density matrix, the transition from "superposition" to "statistical mixture" is the following transformation:

    take the density matrix of the "superposition", and write it in matrix form in the *correct basis*. Now set all non-diagonal elements to 0. You now have the statistical mixture.

    But again, that is a point-wise state change (if you take the density matrix to define the state) which is not described by the normal evolution equation of the density matrix. In other words, the transformation "superposition" -> "mixture" for the density matrix is again a state change which is not described by a physical interaction (which is normally described by the usual evolution equation of the density matrix).

    In other words, that's nothing else but another way of writing down a non-unitary evolution, which is not the result of a known physical interaction.
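    A small worked example of this point, assuming a single qubit in the superposition (|a> + |b>)/sqrt(2): zeroing the off-diagonal elements of the density matrix preserves the trace but lowers the purity Tr(rho^2), and since any unitary evolution rho -> U rho U^+ preserves Tr(rho^2), this transformation cannot be unitary.

    ```python
    import numpy as np

    psi = np.array([1, 1], dtype=complex) / np.sqrt(2)
    rho = np.outer(psi, psi.conj())        # density matrix of |a> + |b>

    mixture = np.diag(np.diag(rho))        # kill the off-diagonal elements

    purity_before = np.trace(rho @ rho).real          # pure state
    purity_after = np.trace(mixture @ mixture).real   # mixed state

    assert np.isclose(purity_before, 1.0)
    assert np.isclose(purity_after, 0.5)
    # Any unitary evolution rho -> U rho U^+ leaves Tr(rho^2) invariant,
    # so this diagonalization cannot be the result of a unitary process.
    ```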
  17. Oct 12, 2007 #16
    My guess is that 99% of all experiments involve a single measurement of the system's state. (The counterexample is the bubble chamber, where we repeatedly measure the particle's position and obtain a continuous track.) In these cases we do not care what the state of the system and its wavefunction are after the measurement (collapse). It is important to realize that one needs to consider the abrupt change of the wavefunction after measurement only in (not very common) experiments with repeated measurements performed on the same system.

  18. Oct 12, 2007 #17

    I would rather say:

    which is more conveniently described by a non-unitary transformation

    Decoherence can already easily wipe off non-diagonal elements.

    I also remember my master's thesis, 25+ years ago.
    I worked on the Stark effect in beam-foil spectroscopy: the time dependence with the quantum beats and the atomic decay.
    (By the way, the off-diagonal elements for the H atoms exiting the foil were crucial in the simulation.)

    The Hamiltonian also had to simulate the decays of the atomic levels.
    That looks like a non-unitary transformation too, doesn't it?
    I also didn't want to embarrass myself with the full QED machinery.
    Guess how I modelled it:

    - adding a non-Hermitian term to the Hamiltonian (related to the decay rates)
    - and calculating the resulting non-unitary evolution operator
    (the density matrix was therefore decaying, which is quite natural)

    This is nothing strange or surprising, and it shows clearly how non-unitary evolution can simply occur as a limiting case of a unitary evolution.
    With such a simple approach the Stark effect is calculated very well: the energy levels, the perturbed lifetimes, and the time dependence and polarisation of the light emission.
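    Here is a minimal sketch of the kind of decay model described above, with invented numbers (two levels with energies E and decay rates gamma; none of these values are from the actual thesis work): adding -i*gamma/2 to a diagonal Hamiltonian makes the evolution operator non-unitary, and the populations decay as exp(-gamma*t).

    ```python
    import numpy as np

    E = np.array([1.0, 1.5])        # level energies (hbar = 1), invented
    gamma = np.array([0.1, 0.3])    # decay rates, invented
    t = 2.0

    # Effective non-Hermitian Hamiltonian H = diag(E) - (i/2) diag(gamma).
    # Since H is diagonal, exp(-i H t) is just the diagonal of exponentials.
    U = np.diag(np.exp(-1j * (E - 0.5j * gamma) * t))   # non-unitary

    psi0 = np.array([1, 1], dtype=complex) / np.sqrt(2)
    psi_t = U @ psi0

    # Each amplitude decays as exp(-gamma_i t / 2), so the populations
    # |c_i|^2 decay as exp(-gamma_i t): the density matrix "leaks".
    pops = np.abs(psi_t) ** 2
    expected = 0.5 * np.exp(-gamma * t)

    assert np.allclose(pops, expected)
    assert np.linalg.norm(psi_t) < 1.0   # norm is no longer conserved
    ```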

    In somewhat pedantic words (I am not a mathematician): the limit of a sequence of unitary transformations may not be a unitary transformation, or so it looks to me. And this can just mean that in some situations the unitary evolution may just be an academic vision: coarse graining makes it practical and non-unitary.

    Why should I go for more science-fiction?
    Last edited: Oct 12, 2007
  19. Oct 12, 2007 #18


    Staff Emeritus
    Science Advisor
    Gold Member

    More precisely, decoherence can wipe off the non-diagonal elements of a relative state -- you have to use a partial trace to discard all of the information about the environment before you can get the matrix to look diagonal.

    That is incorrect. If [itex]T_i \to T[/itex] then [itex]T_i^* \to T^*[/itex]. Finally, because multiplication is continuous,

    [tex]1 = \lim_i 1 = \lim_i T_i^* T_i = (\lim_i T_i^*) (\lim_i T_i) = T^* T.[/tex]
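    A quick numerical illustration of this argument, using a hypothetical sequence of 2x2 rotation matrices: each T_i is unitary, the sequence converges in norm, and the limit satisfies T*T = 1 exactly as the algebra above requires.

    ```python
    import numpy as np

    def rotation(angle):
        """Real 2x2 rotation matrix; always orthogonal (hence unitary)."""
        return np.array([[np.cos(angle), -np.sin(angle)],
                         [np.sin(angle),  np.cos(angle)]])

    theta = 0.7
    T = rotation(theta)                      # the limit operator

    for i in [1, 10, 100, 1000]:
        T_i = rotation(theta * (1 + 1 / i))  # sequence converging to T
        assert np.allclose(T_i.T @ T_i, np.eye(2))   # each T_i is unitary
        print(i, np.linalg.norm(T_i - T))            # distance shrinks

    assert np.allclose(T.T @ T, np.eye(2))   # the limit is unitary too
    ```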
  20. Oct 12, 2007 #19


    Staff Emeritus
    Science Advisor
    Gold Member

    Nobody is asking you to go for science-fiction. They are asking you to go to MWI. :tongue:

    And, incidentally, your argument appears quite analogous to rejecting the kinetic theory of gases simply because temperature and pressure are good enough for your favorite applications.
    Last edited: Oct 12, 2007
  21. Oct 12, 2007 #20


    Staff Emeritus
    Science Advisor
    Gold Member

    Don't get me wrong. Collapse is a very practical and well-working "approximation", of course. The point is that in order for collapse to occur, you have to leave quantum mechanics. You have, as Bohr wanted it, to decide somehow about a transition to a classical world.

    When you do classical mechanics, you can describe your measurement apparatus classically. That means, for instance, in a Lagrangian formulation, that you can give a generalized degree of freedom Qm to the pointer of your measurement apparatus if you want to. Say that you have an (old-fashioned) voltmeter, connected to a system (an electric network). You can simply solve for the behaviour of the network, calculate the voltage difference between two points, and "make your transition" to a measurement; that is, say that you stop there with the system physics, and the measurement apparatus, DEFINED to be a voltmeter, will measure that difference.
    But you can also include the voltmeter in your system, add the degree of freedom Qm (and all the other internal degrees of freedom of the apparatus) to your "system", and redo all the classical dynamics. You will now simply find that the end state of that variable Qm will indicate the result (the position of the pointer). So it is up to you to decide whether or not the internal dynamics of the apparatus and of Qm was worth the effort (taking into account the non-idealities of your measurement apparatus), but that doesn't change the result much.

    But in quantum mechanics, if you REMAIN in quantum mechanics and do the full calculation, you find TWO TOTALLY DIFFERENT outcomes. Indeed, the state description of your pointer is now given by "pointer states", the classically-looking states |Qm>.
    You might think, inspired by the classical example, that if you do the quantum-mechanical calculation completely, that if you include the apparatus, you would find sometimes a |Qm = 5V> and sometimes a |Qm=2V> and that the randomness is somehow given by tiny interactions with the environment or whatever. That this is the result of the apparent randomness of quantum mechanics, but that at the end of the day, you will find your apparatus in one or other pointer state, corresponding to what you observed.
    In that case, one would have the quantum-mechanical equivalent of the above classical procedure, and it would be a matter of convenience whether or not we include the apparatus in the physical description.
    We would have that:
    |a> |Q0> would always evolve into |a> |Qa>
    and that
    (|a> + |b>) |Q0> would evolve half of the time into |a> |Qa> and half of the time into |b> |Qb>.

    This would then be the "correspondence rule" and we would fully understand it. The dynamics of the interaction with the apparatus would somehow be responsible for the apparent random behaviour of quantum mechanics, and at the end of the day, we would see that the apparatus always ends up in one of its "pointer states". It would then be an approximation to go directly from (|a> + |b>) to |a> or to |b>, a very good one, exactly as in the case of the classical volt meter.

    BUT THIS IS IMPOSSIBLE in quantum mechanics if the evolution is unitary, no matter how complicated the interaction. That's the little proof I provided. No matter how complicated the dynamics, if the time evolution is unitary, the above evolution is not possible. You DO NOT end up in a single pointer state.

    The end state, namely |a> |Qa> + |b> |Qb> is not a good approximation of "sometimes |a> |Qa> and sometimes |b> |Qb>". We obtain, when we carefully work out the dynamics of the interaction of the measurement apparatus with the system, a totally different state, which is NOT a pointer state. THIS is the difficulty in principle.
    There is no description, in terms of quantum mechanics, which corresponds to the classical end state and which follows out of the dynamics. It is not even a close approximation. You have to LEAVE quantum mechanics in order to be able to say that the "apparatus is now in pointer state Qb". Quantum-mechanically, you can't claim that, because out of the calculation does NOT follow that the apparatus is in state |Qb>.