Coldcall said:
The evidence regarding the failure of "decoherence" to address the measurement problem is in the public domain. Even Zurek, one of the founders of "decoherence", admits it's not a solution to the measurement problem.
And if you think it does solve the MP, then you have been misled in a big way.
My own feeling was that Zurek's early papers (along with what others like Gell-Mann were saying) were creating the right framework, but somewhere along the line the thinking became contorted in trying to cash out new formal machinery.
For me, the big problem is always in anchoring observation back to a static, located, human observer when reality is dynamic and self-organising (as I presume from a systems-science standpoint). So observers and their measurements have to be generalised in that direction, taking a cue from global boundary-constraint thinking.
So in this view, decoherence would be about expanding light cones of QM potential. When the spatiotemporal scale is still small, the potential has little context and so is less likely to encounter some crisp collapsing context. But as the scale grows, it becomes rapidly more likely that collapse will occur.
This is hard to explain unless you can think about QM potential as a vagueness. There is not even a crisply existent wavefunction until the scale, the field of view, has grown enough to take in, say, a pair of particles that could frame some definite exchange.
It is a phase transition view, I guess. When the scale is small, you may have, in effect, a particle surrounded by a vague QM potential to "do something". The particle's gravitational and EM fields give it a QM potential or "presence" that propagates as a spherical boundary moving at the speed of light. But it is a very raw QM state - like the chaotic jostle of dipoles in a hot bar magnet.
Then the scale grows large enough that a second particle comes within exchange range. At that point, a crisp wavefunction can exist. There is a global boundary condition that can constrain that vaguer potential. General limits to what can happen are created, and then something does happen. It is like the sufficient cooling that allows a crisply divided local~global state of order in a magnet.
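To pin down the magnet analogy, here is the textbook Landau picture of a ferromagnet (standard material, nothing specific to the QM claim being made here). The free energy as a function of the magnetisation M is roughly

F(M) \approx F_0 + a\,(T - T_c)\,M^2 + b\,M^4, \qquad a, b > 0.

Above the critical temperature T_c the only minimum is M = 0 - the "vague" jostle of dipoles with no net order. Below T_c two crisp minima appear at M = \pm\sqrt{a(T_c - T)/2b}, and the system must fall into one of them. The global boundary condition (the temperature) is what allows a definite, crisply divided state of order to exist at all; it does not merely reveal an order that was already there.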
In effect, the wavefunction and its collapse are two faces of the same thing. The wavefunction was not "always there and evolving" in an independent sense. Instead, there was a rawer potential for somethingness developing; then a crisp QM wavefunction/crisp classical collapse did something with that spreading potential. We only impute an evolving wavefunction after the fact.
OK, I'm thinking aloud here, as this was the general picture I took from Zurek's early writings about decoherence, combined with what I was hearing at the time from quantum vagueness guys like Chibeni (that stuff seems to have died a death, sadly), and Cramer's convincing arguments for retrocausality. Plus, as I say, what seems obvious from a phase-transition, systems-science way of looking at reality.
Zurek seems to be working in the right area on this...
http://arxiv.org/PS_cache/cond-mat/pdf/0701/0701768v2.pdf
But I think the key thing missing is the idea that QM information starts vague and needs a classical context to turn it into crisp QM probabilities - even if those crisp QM probabilities still describe the crisply entangled, uncertain and superposed states on the wavefunction side of things.
Boiling it down, the usual framing of the measurement problem is that we have an evolving wavefunction forever in search of the machinery that forces its collapse. The difficulty in seeing why the wavefunction should collapse (because no internal mechanism or hidden variables are permitted) leads people to say that collapse requires consciousness or, in many-worlds fashion, that it never happens.
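For reference, that usual framing in symbols (this is just the textbook von Neumann measurement scheme, stated for concreteness): unitary evolution takes a superposed system plus a ready apparatus to an entangled superposition,

\Big( \sum_i c_i |s_i\rangle \Big) |A_0\rangle \;\xrightarrow{U}\; \sum_i c_i |s_i\rangle |A_i\rangle,

yet any actual measurement yields a single outcome |s_k\rangle|A_k\rangle with probability |c_k|^2. Nothing in the unitary dynamics picks out the k, and that gap is the missing "machinery".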
Decoherence is broadly the attempt to put the collapse machinery back out there in the physical world. And really it would be good to have it happening as a global boundary constraint - that is, something that is present and active over all classical spatiotemporal scales. (Technical note: global here means a thermodynamic macrostate rather than "largest size".)
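And for what decoherence itself actually delivers, here is the standard reduced-density-matrix sketch (again textbook, not the vagueness extension being argued for here). After the system entangles with its environment, |\Psi\rangle = \sum_i c_i |s_i\rangle |E_i\rangle, and tracing out the environment gives

\rho_S = \mathrm{Tr}_E |\Psi\rangle\langle\Psi| = \sum_{i,j} c_i c_j^* \langle E_j | E_i \rangle\, |s_i\rangle\langle s_j| \;\longrightarrow\; \sum_i |c_i|^2 \, |s_i\rangle\langle s_i| \quad \text{as } \langle E_j | E_i \rangle \to \delta_{ij}.

The interference (off-diagonal) terms are suppressed as the environmental records become orthogonal, which is why we never see macroscopic superpositions. But the result is still an improper mixture - it does not say which outcome occurs - which is exactly the caveat Zurek himself concedes, as noted at the top of this thread.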
Then what I take this to require is that the collapse machinery in fact manufactures the wavefunctions out of rawer QM potential. So it is the collapse that causes the wavefunctions, not the wavefunctions that must produce a collapse.