slyboy said:
The simple reason is that you still have a superposition state if you also include the environment in your description.
That is not to say that decoherence doesn't offer some useful insights.
The way I understand it (I'm not really an expert, but I have been studying decoherence for a while now) is the following; feel free to comment on it, correct it, etc.
In the following, I'll try to make a distinction between what I'd call "unitary quantum mechanics" (UQM) and the "probability projection" (PP). Unitary quantum mechanics is the whole of quantum mechanics, with the state space and Schrödinger evolution, but without the link to the probabilistic interpretation; that link is added as an extra ingredient, which I call the "probability projection" (or "collapse of the wave function", or whatever you prefer). This extra ingredient is the heart of the so-called "measurement problem" of quantum mechanics.
All of textbook quantum mechanics is unitary quantum mechanics, but in the end, when you have to calculate probabilities for experimental outcomes, you suddenly have to apply the probability projection, which is a non-unitary operation and hence cannot be explained by a Schrödinger evolution.
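Just to make that last point concrete, here is a little numpy sketch (my own toy illustration, nothing more): a unitary like the Hadamard satisfies U^dag U = 1 and preserves the norm of the state, while the projector used in the PP does not, so the projection step cannot be a Schrödinger evolution.

```python
import numpy as np

# State |psi> = a|0> + b|1> in the computational basis.
a, b = 0.6, 0.8
psi = np.array([a, b], dtype=complex)

# Schroedinger evolution: any unitary U satisfies U^dag U = I.
U = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # e.g. a Hadamard
print(np.allclose(U.conj().T @ U, np.eye(2)))        # True: unitary

# Probability projection: outcome "0" occurs with probability |a|^2,
# and the state is replaced by the (renormalized) projection P0|psi>.
P0 = np.array([[1, 0], [0, 0]], dtype=complex)
prob0 = np.linalg.norm(P0 @ psi) ** 2
print(prob0)                                          # 0.36 = |a|^2

# P0 is NOT unitary, so this step cannot be a Schroedinger evolution.
print(np.allclose(P0.conj().T @ P0, np.eye(2)))       # False
```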
The concept of the density operator rho in quantum mechanics mixes the two concepts of UQM and PP in a subtle way. A pure state in UQM can be represented by a density operator rho = |state><state|; this is just another mathematical way of writing down a state. However, one can also make "weighted combinations" of states, rho = sum_i p_i rho_i, with p_i the classical probabilities of a statistical ensemble. This is, strictly speaking, still compatible with UQM: extending the notion of the physical state of a system from pure states (the Hilbert space) to these statistical ensembles (mixtures) is formalized by these rho operators, and the only probabilities we deal with so far are "classical" ensemble probabilities. However, there is a catch: DIFFERENT statistical mixtures can correspond to the same rho operator. If we now assume the PP (probability projection), we can show that these apparently different statistical mixtures are experimentally indistinguishable, meaning that they give rise to exactly the same expectation values for all possible observables. But in order to establish that, it is, as I said, necessary to assume the PP.
We can hence say that the rho operator for a pure state is still a UQM concept, but that the general rho operator describing a mixture needs the PP in order to be useful for calculating expectation values (and hence for physically describing a state).
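As a small illustration (again my own numpy sketch, with sigma_z as an arbitrary example observable): a 50/50 mixture of |0> and |1> and a 50/50 mixture of |+> and |-> are different ensembles, yet they give the same rho, and hence - once we accept the PP rule <A> = tr(rho A) - the same expectation value for every observable.

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
ketp = (ket0 + ket1) / np.sqrt(2)   # |+>
ketm = (ket0 - ket1) / np.sqrt(2)   # |->

def proj(v):
    """Pure-state density operator |v><v|."""
    return np.outer(v, v.conj())

# Two DIFFERENT statistical mixtures...
rho_A = 0.5 * proj(ket0) + 0.5 * proj(ket1)   # 50/50 of |0>, |1>
rho_B = 0.5 * proj(ketp) + 0.5 * proj(ketm)   # 50/50 of |+>, |->

# ...give the SAME rho operator:
print(np.allclose(rho_A, rho_B))   # True (both are I/2)

# With the PP, every observable A has expectation value tr(rho A),
# so the two mixtures are experimentally indistinguishable.
A = np.array([[1, 0], [0, -1]], dtype=complex)   # e.g. sigma_z
print(np.trace(rho_A @ A).real, np.trace(rho_B @ A).real)   # both 0
```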
If we consider two quantum systems (the Hilbert space is the tensor product of H1 and H2) and we consider observables A1, ... which relate only to the first system, then the expectation values of A1, ... for a general mixture rho can be calculated using only rho_1, which is nothing else but tr_2(rho) (this notation means that we take the trace of rho over the second Hilbert space).
So rho_1 already contains all the information we need to calculate ANY expectation value for measurements concerned only with the first system. Note that in order for this to have a meaning, we need the PP: without it, a rho operator cannot give rise to an expectation value.
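Here is a quick numpy sketch of that (the partial_trace_2 helper is just something I wrote for the illustration): for an entangled two-qubit state, tr(rho (A1 x 1)) and tr(rho_1 A1) agree, and rho_1 = tr_2(rho) is already mixed even though the total state is pure.

```python
import numpy as np

def partial_trace_2(rho, d1, d2):
    """tr_2(rho): trace out the second factor of H1 (x) H2."""
    return np.trace(rho.reshape(d1, d2, d1, d2), axis1=1, axis2=3)

# An entangled pure state of two qubits, (|00> + |11>)/sqrt(2).
psi = np.zeros(4, dtype=complex)
psi[0] = psi[3] = 1 / np.sqrt(2)
rho = np.outer(psi, psi.conj())

rho1 = partial_trace_2(rho, 2, 2)
print(rho1)   # I/2: a mixed state, even though rho itself is pure

# For an observable A1 acting only on the first system,
# tr(rho (A1 x 1)) equals tr(rho1 A1).
A1 = np.array([[1, 0], [0, -1]], dtype=complex)   # sigma_z on system 1
full = np.trace(rho @ np.kron(A1, np.eye(2))).real
reduced = np.trace(rho1 @ A1).real
print(np.allclose(full, reduced))   # True
```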
Decoherence now. Decoherence theory is nothing else but a mathematical observation within UQM. Consider a simple quantum system coupled to a complicated quantum system (the "measurement apparatus" or the "thermal environment"). If we start with a pure product state and include a coupling term in the overall Hamiltonian, then very quickly the whole thing evolves into an entangled state according to UQM (so far, no surprises). An entangled, but still pure, state.
If we now limit ourselves to observables on the simple system, we can trace out the environment to produce a rho operator for the simple system.
Well, it now turns out that this rho operator becomes diagonal in a VERY short time in a special basis, the so-called coherent states, and that the diagonal components are nothing else but the probabilities we would have calculated using the PP without taking the quantum behaviour of the measurement apparatus or the environment into account. But remember: to calculate those probabilities at all, we needed the PP!
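To see the mechanism in the simplest possible toy model (my own sketch, with a made-up product-form coupling, so don't read too much into the numbers): the system qubit starts in (|0>+|1>)/sqrt(2), each environment qubit picks up a small rotation whose sign depends on the system state, and the off-diagonal element of the reduced rho gets multiplied by the overlap <E_1|E_0> once per environment qubit, so it dies off exponentially fast while the diagonal entries - the PP probabilities 1/2, 1/2 - stay put. The evolution of system + environment is unitary throughout; only the reduced description loses its coherence.

```python
import numpy as np

theta = 0.3   # made-up per-qubit coupling angle for this toy model

# Environment qubit states correlated with system |0> and |1> respectively;
# each starts in |0> and gets rotated by +theta or -theta.
e0 = np.array([np.cos(theta),  np.sin(theta)], dtype=complex)
e1 = np.array([np.cos(theta), -np.sin(theta)], dtype=complex)

for n_env in [0, 1, 5, 10, 20, 40]:
    # The off-diagonal of the reduced rho is multiplied by <E_1|E_0> per qubit.
    overlap = np.vdot(e1, e0) ** n_env
    rho_sys = 0.5 * np.array([[1, overlap], [np.conj(overlap), 1]])
    print(n_env, abs(rho_sys[0, 1]))   # coherence -> 0, diagonal stays 1/2
```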
So decoherence explains why what we do according to textbook QM is correct: we restrict ourselves to the simple system, give no QM description of the measurement apparatus, and at the end of the day apply the PP to the simple system to calculate probabilities of measurement outcomes. It doesn't matter that we didn't take the QM description of the measurement apparatus and environment into account, because after a lot of complicated calculations we would have found the same thing.
It also indicates what the "robust states" in an environment are: the so-called coherent states. They are, not surprisingly, the closest QM descriptions of classical states (particles at a certain position and momentum, with small uncertainties, etc.).
What decoherence DOESN'T explain is the PP itself, because it needs it. However, it does somewhat justify the use of the PP by showing that it is consistent.
The problem decoherence leaves us with is that we'll probably never find out exactly WHEN the PP has to be applied - if ever - because apparently applying the PP at a high level (after entanglement with the environment) or at a low level (at the level of the measurement itself) gives us the same expectation values.
Here I've written down my own personal understanding of what decoherence means. It is probably incomplete, and maybe wrong, so it would be interesting to discuss it...
cheers,
Patrick.