Is decoherence a resolution to the issue of measurement? I have read arguments on both sides.
It's contentious - as you say, there are arguments on both sides. Since you know this, you must expect a range of different views and are inviting opinions. So here's one.
Whether decoherence helps with measurement issues depends upon which version of the measurement problem you have in mind. Historically, a number of related but distinct problems have gone under this heading, and in some cases, whether or not there is a problem depends upon interpretational issues in quantum mechanics.
(a) One version of the measurement problem stems from the fact that in many versions of quantum mechanics, there are two rules governing the evolution of the quantum state. There's Schrodinger's equation, but there's also the collapse postulate. S's equation is supposed to govern what happens when no measurement is made, or when a system is left alone, but the collapse postulate says that, on measurement, the system is projected in a probabilistic manner into a new state. The collapse postulate can be used to explain why we get such different distributions in the two-slit experiment depending on whether we measure which hole the particle travelled through.
This gives rise to a number of problems: how can there be two incompatible dynamic laws in a fundamental theory? Why aren't measuring devices just complexes of other quantum objects and so subject to the usual Schrodinger equation? Yet, if they were treated as such, they wouldn't cause or bring about any collapse in the wave function - it would just keep evolving a la Schrodinger. Do different laws hold for different realms? Can the macroscopic not be reduced to the microscopic? And at what point does a complex of objects stop obeying Schrodinger's equation, become a measuring device and bring about collapse?
Decoherence can make a very strong claim to solve this version of the problem. Decoherence shows that, for any system that we would normally take as a measuring device, the Schrodinger equation alone should account for the different patterns that appear in the two-slit experiment. It turns out that, in these situations where the particle is part of a system containing a measuring device, following S's equation for the whole system alone will pretty much give you the same statistics as using the projection postulate. The projection postulate destroys interference terms, but decoherence, it is claimed, shows that in these kinds of situations the interference terms tend to zero very quickly, and the probabilities predicted are just about the same. Accordingly, the empirical and observed differences in statistical behaviour between situations where there's measurement and where there's not can be explained without the collapse postulate after all.
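If it helps, the claim about interference terms can be illustrated with a toy calculation (a sketch in Python/NumPy, not anything from a real apparatus; the path amplitudes and the one-qubit "environment" are made-up ingredients):

```python
import numpy as np

# Toy two-slit setup: a "which-path" qubit plus a one-qubit environment.
# The amplitudes a, b and the perfect environment records are made-up choices.
a, b = 1/np.sqrt(2), 1/np.sqrt(2)

# No which-path measurement: environment stays in |0>, uncorrelated.
psi_iso = np.kron(a*np.array([1.0, 0]) + b*np.array([0, 1.0]), np.array([1.0, 0]))

# Measurement-like interaction: each path imprints an orthogonal record.
psi_dec = a*np.kron([1.0, 0], [1.0, 0]) + b*np.kron([0, 1.0], [0, 1.0])

def reduced(psi):
    """Partial trace over the environment -> the system's 2x2 density matrix."""
    rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)
    return np.trace(rho, axis1=1, axis2=3)

rho_iso = reduced(psi_iso)  # off-diagonals survive: interference possible
rho_dec = reduced(psi_dec)  # off-diagonals vanish: same statistics as projection
```

The off-diagonal entries of the reduced density matrix are what produce interference; entangling each path with a distinct, orthogonal environment record is exactly what kills them, Schrodinger evolution alone doing the work.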
(b) But the collapse postulate isn't just used to explain the empirical difference between when there's collapse and when there's not. It's also used to explain why things have sharp values, or are observed to have sharp values. For many, when something is in a superpositional state, then it simply lacks a sharp value. This is why we hear about that blasted cat being both dead and alive, or neither, or fuzzily in between the two, or of an electron not having a determinate spin up or spin down but being in some kind of indeterminate state. The trouble is, things don't seem to be in terribly fuzzy states when they're measured. When we measure position at the end of the two-slit experiment, we don't get one big fuzz, but a number of small, point-like dots. One interpretational move is that there are sharp values *only when* the system is in an eigenstate of the relevant observable. When the system is not in an eigenstate, then the values are not sharp. But according to S's equation, the particle at the end of its journey is in a big superposition of various positions. According to the projection postulate, at measurement, the state collapses into an eigenstate in a probabilistic way, and since it *is* an eigenstate, the relevant sharp values obtain. So the problem of how sharpness - definite properties - arises out of superpositions is also handled by the collapse postulate.
I don't think decoherence does solve *this* problem. There is no collapse on decoherence, there is just S's equation, and all decoherence does is show that system+environment too gets into a superposition. If there's a problem with how things can have sharp values when they're in a superpositional state, then I don't see how decoherence helps.
If there's a problem here...this is the rub. Not everyone accepts the eigenstate-eigenvalue link - the idea that things only have sharp values when they're in an eigenstate. But this now goes right to the heart of foundational difficulties in quantum mechanics - namely, what exactly is a superposition? - and so there is no easy resolution here.
(c) There are those (Deutsch? not sure, but I've seen it argued) who think that, in conjunction with a many worlds interpretation (which is one way of solving the sharp value problem by interpreting a superposition not as the lack of sharp value, but as the possession of many sharp values - at different branches), decoherence can be used to solve the preferred basis problem for the MWI. I've never understood how this is supposed to work though - so I won't push it.
Gosh - that was long.
I think this is where decoherence is really supposed to help, but I'm not sure I agree with all of this. The result of decoherence is that the eigenstates of the system get entangled with certain states of the environment. These states are called "pointer states". I don't know the details, but these pointer states are supposed to be records of the result of the interaction, which are stable in the sense that they will stick around at least for a while. Since the well-defined states of your memory appear to be somewhat stable records of the result of the measurement you've just performed, I think that when the physicist performing the experiment is considered part of the environment, the pointer states will be states in which the physicist's memory isn't in a superposition.
Suppose e.g. that you bet $1000 that the spin will be "up", and then you perform the measurement. The state of the system+environment will change like this:
(|↓> + |↑>)|blank> → |↑>|:smile:> + |↓>|:yuck:>
Yes, there will be other terms, which have your memory in a superposition of "smile" and "yuck", but what decoherence does is to make the coefficients in front of them go to zero very rapidly. Now each of the remaining terms is interpreted as a "world" in which a particular result happened, and "you" (a different you in each world) remember that it happened.
Edit: This was actually a mistake. What I should have done is to define |S>=|↓>+|↑> and then said that the density matrix changes as described by
|S>|blank><blank|<S| → |↑>|:smile:><:smile:|<↑| + |↓>|:yuck:><:yuck:|<↓|
This is a mixed state, not a superposition. See also my comments in the posts below this one.
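The pure-vs-mixed distinction can be checked numerically with the purity Tr(ρ²), which is 1 for a pure state and less than 1 for a mixed one (a toy Python/NumPy sketch with equal, made-up weights):

```python
import numpy as np

# |S> = (|down> + |up>)/sqrt(2): a pure superposition of the system alone.
S = np.array([1.0, 1.0]) / np.sqrt(2)
rho_pure = np.outer(S, S)   # has off-diagonal (coherence) terms

# After decoherence, tracing out the memory/environment leaves (approximately)
# an equal-weight mixture of |up> and |down> -- no coherences.
rho_mixed = 0.5*np.outer([1.0, 0], [1.0, 0]) + 0.5*np.outer([0, 1.0], [0, 1.0])

purity = lambda r: np.trace(r @ r).real
print(purity(rho_pure))    # 1.0 -> pure state (superposition)
print(purity(rho_mixed))   # 0.5 -> maximally mixed
```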
But at this point an MWI interpretation is being invoked. If the question is 'how to solve the problem of sharp values given we're in a superposition', then it's MWI rather than decoherence that's doing the work. Interpreting the terms as different worlds, one where the cat is dead and the other alive, was always a way of getting sharpness from a superposition.
Incidentally, can I ask you - or anyone - a technical question about decoherence? The slogan is: in decohering systems, the off diagonal terms tend to zero incredibly quickly. Does this mean that the off-diagonal terms *do* reach zero at some point, or just hover near zero after a very short amount of time? I took it to be the latter, but I'm not certain.
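For what it's worth, in the simplest toy models the off-diagonal terms decay exponentially, so they only hover near zero rather than reaching it. A sketch (Python/NumPy; the decoherence time tau and initial coherence are made-up numbers):

```python
import numpy as np

# Toy model: the off-diagonal (coherence) term decays like exp(-t/tau),
# where tau is a hypothetical decoherence time. It becomes astronomically
# small very fast, but never reaches exactly zero.
tau = 1e-3                          # assumed decoherence timescale (arbitrary units)
t = np.array([0.0, 1e-3, 1e-2, 1e-1])
off_diag = 0.5 * np.exp(-t / tau)   # starts at 0.5, the initial coherence

print(off_diag)                # decays from 0.5 down to ~2e-44
print(np.all(off_diag > 0))    # True: tiny, but strictly positive
```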
MWI is not the only non-collapse interpretation. In BM, for example, hidden variables (particle trajectories) define what world REALLY exists. But I agree, MWI is the most logical one.
I was under the impression from the get-go that decoherence merely produces an appearance (illusion) of classical objects. Sine waves with different amplitudes pile on top of each other, and when 2, 3 or 100 waves pile up, the resulting amplitude is what corresponds to what we interpret as a particle, e.g. at the double slit (though there never were particles at all and no wavefunction collapse!). Sine waves that cancel each other out (e.g. their crests are + and -) simply disappear and no classical objects appear (wavefunctions are fairly well localised within a certain area, i.e. probability density is high).
The environment is merely the other trillions of sine waves (really quantum fields) spread out throughout the Universe, with which, say, the waves of your body interact (add up or cancel each other out, i.e. coherent and decoherent waves). This might potentially explain why quantum objects don't have sharp boundaries.
ha - I was trying not to get drawn into which no-collapse interpretation was the most logical. Merely saying that Fredrik's statement seemed to be an MWI one, what with terms being interpreted as worlds, and there being many me's. He did put the key terms in scare quotes, however, but I'm never sure how to interpret scare quotes. Other than with evolution - my problem (a) - I don't know what role the Bohmian would give decoherence.
@Wavejumper - yes, the appearance of classicality might suggest the off-diagonal terms are only nearly zero rather than zero. But I have also seen people argue that decoherence gives the appearance of sharpness as well, and physicists and physicists' memories suddenly become key parts of the story. I don't understand this approach well enough to elaborate, but you're right to bring this up as an aspect for the original poster.
(I'm less sure about your explanation of the sine waves piling up on each other.)
The way I see it, QM as defined by the Dirac-von Neumann axioms (the usual stuff about Hilbert spaces and the probability rule) either describes what actually happens, or it doesn't. The assumption that it doesn't is the ensemble interpretation, and the assumption that it does is the MWI...unless you impose additional axioms just to get rid of the other worlds. You described one such additional axiom, the idea that "collapse" is a mysterious physical process that eliminates superpositions. I recently came across another one. David Mermin's "Ithaca interpretation" is essentially the MWI with the additional idea that QM somehow doesn't apply to consciousness, and that this gets rid of the other worlds.
I think all of these ideas are more or less crazy, so for me the MWI is the natural choice if we're going to consider the possibility that QM tells us what actually happens. And if we don't, i.e. if we stick to the ensemble interpretation instead, which is what I prefer, then there is no measurement problem anyway. That's why I take the MWI as the starting point.
But that's not what we're doing, or at least not what we should be doing. I think I made a mistake in my previous post. Instead of considering the evolution of the state vector, I should have considered the evolution of the density matrix (or statistical operator, or whatever you prefer to call it). It changes from a pure state (the one corresponding to the state vector on the left in my previous post) to a very good approximation of a mixed state (a mix of pure state operators corresponding to the terms on the right). So what decoherence does is to make the other terms in the density matrix (not in the expansion of a state vector in terms of basis vectors) insignificant, and now the remaining terms can be interpreted as worlds. Without decoherence, I don't think the MWI even makes sense, because there would be nothing to separate the terms that describe the sort of correlations we actually observe from the sort of correlations that are never observed.
They don't ever reach zero. They just get really small. I see that I used language that suggested otherwise, but that was just an accident.
thanks for your answers. They're very helpful and I like the way you discuss these issues - even if we're not always seeing eye to eye.
I'll certainly go along with you on this!
But...there are many attempts to interpret QM literally in a way that doesn't commit you to the MWI. I can't see where, in the formalism, there's anything which explicitly talks about many worlds. We may ultimately agree that other attempts to take QM as a theory about what actually happens lead to problems - that the notion that objects really are in superpositions before measurement is too problematic - but I can't see how this *just is* the MWI.
The axiom I had in mind was simply the collapse postulate. Just to make sure we're on the same page here: this is an axiom originally explicitly formulated by von Neumann. I'm aware that there are different formulations of QM, but this one appears in most textbooks, and it's typically taken as a standard part of QM. It says that:
That's it - after a measurement, the state is different from what it was before, and it's projected into an eigenspace. Whereas before the system was in a superposition of different values, the state is now in an eigenspace, and so the superposition of these values has been eliminated. This is part of the standard (it's from Schaum - which plays it pretty safe!) evolutionary story, and it's a different story from the Schrodinger one.
What I then went on to explain was that this postulate leads to problems - and thus to motivate why some theorists want to ditch it. But it's certainly not part of the axiom that measurement is mysterious. In fact, I'm very surprised to see you adopt the MWI in this context - it makes me think I haven't understood you, or that we have different versions of the collapse postulate in mind. The motivation for the MWI is normally trying to do QM by *dropping* von Neumann's postulate, by letting Schrodinger's equation be the complete evolutionary account. The MWI needs to explain the appearance of collapse - but it's not an axiom on their view.
Is this a FAPP (for all practical purposes) argument? I think the FAPP argument is fine when we're trying to explain the first aspect of the measurement problem I mentioned - the problem about evolution. It turns out that decoherence shows that the density matrix is really very, very close to what we would have got had we applied collapse - so, as far as the empirical evidence goes, we don't have to take the collapse postulate literally. It's a useful heuristic which gets close to the right results, and so there's no need to invoke the process of measurement to explain the failure of the interference pattern in certain situations - the S equation does it single-handedly. But it's not clear how the FAPP argument goes if we're dealing with the problem of sharp values - how we get a sharp value out of a superpositional state. Being close to an eigenstate is not being an eigenstate, and if we were working with the eigenstate-eigenvalue link, then we've got problems.
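The "very close but not equal" point can be made quantitative in a toy model (Python/NumPy; the amplitudes and the environment-record overlap eps are made-up numbers): if the environment's records of the two outcomes overlap by eps, the decohered reduced state differs from the collapse-postulate mixture by a trace distance of order eps - tiny, but never exactly zero.

```python
import numpy as np

# Toy FAPP comparison: decohered reduced state vs. collapse-postulate mixture.
# a, b are made-up path amplitudes; eps is an assumed residual overlap
# <E_up|E_down> between the environment's records of the two outcomes.
a, b = 1/np.sqrt(2), 1/np.sqrt(2)
eps = 1e-8

# Reduced density matrix after imperfect decoherence: off-diagonals scale with eps.
rho_dec = np.array([[abs(a)**2,        a*np.conj(b)*eps],
                    [np.conj(a)*b*eps, abs(b)**2       ]])

# What the (non-selective) collapse postulate would give: a plain mixture.
rho_collapse = np.diag([abs(a)**2, abs(b)**2])

# Trace distance = half the sum of absolute eigenvalues of the difference.
diff = rho_dec - rho_collapse
trace_dist = 0.5 * np.sum(np.abs(np.linalg.eigvalsh(diff)))
print(trace_dist)  # ~5e-9: empirically negligible, but formally distinct
```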
Of course, we might not accept the eigenstate-eigenvalue link - indeed, while this view once seemed dominant, many seem sceptical today - but at that point, it's not clear what the measurement problem is supposed to be any more. If we reject the eigenstate-eigenvalue link, why should we think that systems don't have sharp values, even when they're in superpositional states? - Or, at least, that's how one might argue.
I'm not sure what you're doing when you say 'the remaining terms can now be interpreted as worlds.' Is this a kind of gestalt thing? A heuristic? That, in certain situations, we can, if you like, think of there being worlds? I don't quite follow the physical picture. Either there are many worlds or there are not. Or do you think that they kind of emerge in certain situations - situations where decoherence applies?
I was thinking about it too
But there is a problem: if we have several observers, then why does the consciousness of all these observers end up in the same 'branch'? If the number of consciousnesses is constant (forget about babies) and the number of branches rapidly increases, then all the people we see around us are P-zombies.
Probably these attempts introduce additional axioms?
'Pure' QM + Quantum Decoherence = MWI.
You can add something else (hidden variables, for example) to get a non-MWI theory.
What makes MWI so special is the fact that it is minimalistic.
I'm talking about a QM that includes the projection postulate.
Can you provide a definition of that postulate without using the words "measurement", "observer" or "observed"?
If yes, then measurement is just an ordinary QM process, so the whole point is lost.
If no, then it is just another form of CI, where "measurement" is a "magic" process which cannot be described by QM.
You're welcome to use the word "magic" if you wish. I take this to be a very strong way of saying that you object to interpretations that do not give non-circular definitions here. I may even agree with you. But I recognise that this is contentious. The question of which terms of a theory can or should or need to be given non-circular definitions is delicate. All theories have their primitives. There's nothing in what's literally written in QM textbooks that literally introduces such things as worlds - at least, not as far as I can see.
What do you mean by 'described by QM'? Satisfies the S equation for evolution? But if it's part of QM that there are two principles, S and collapse, then nothing is missed out - one thing happens in measurement, another when the system is not being measured. For theoretical reasons, it may very well be better to make do with just S's equation - I'm not disputing this.
And certainly, the QM that's written in the textbooks doesn't contain a definition of measurement. Or indeed of world
Note that I'm trying very hard to avoid taking sides on interpretational issues. I've nothing against them, I think they're good and valuable. But I'm just trying to be clear about what's part of 'standard text book quantum mechanics' that we all should agree on, and what further assumptions or views need to be invoked to solve or generate ensuing problems.
Yes, we don't know everything. However, when I read "... value of the observable..." I think - wait, why only ONE value? Why not a superposition? Just by saying "result of the measurement" or "value of an observable" we already assume the existence of the collapse. I did not want to criticise the circularity of definitions (the MWI also has some issues with circularity when we try to apply it to the real world). All I wanted to say was that the projection postulate itself makes sense only if some "flavor" of CI has been accepted.
You can't use "pure QM" + projection postulate. You can use "pure QM" + collapse + projection postulate.
On the fundamental level I hope the dream of Max Tegmark (the MUH) is true, so the whole world can be described by one or a few equations for the omnium, like
That description will be non-recursive and all other notions (space, time, causality, measurement) will 'emerge' from it (like in his chapter "Physics from scratch")
But assigning some magical properties to some configurations of atoms, calling them 'measurement devices' ruins that dream.
There isn't. What the formalism says is that a measurement of an observable A changes a pure state [itex]\rho=|\psi\rangle\langle\psi|[/itex] into a mixed state:
[tex]\rho\rightarrow \sum_i P_i\rho P_i[/tex]
where the [itex]P_i=|a_i\rangle\langle a_i|[/itex] are the projection operators onto the one-dimensional subspaces that represent the possible states after the measurement. If we reject the idea that this describes what actually happens, then we can certainly say that only one of the terms represents the state of the system after the measurement. But if we're going to claim that the above is a description of what actually happens, we're going to have to deal with the fact that there's nothing that even suggests that one of the terms is more real than the others.
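For concreteness, the map above can be sketched in a few lines of Python/NumPy (the measurement basis and input state are arbitrary toy choices):

```python
import numpy as np

# Sketch of the non-selective measurement map quoted above:
# rho -> sum_i P_i rho P_i, with P_i = |a_i><a_i| for an orthonormal basis {|a_i>}.
def project(rho, basis):
    """Apply rho -> sum_i P_i rho P_i for an orthonormal basis {|a_i>}."""
    out = np.zeros_like(rho)
    for a in basis:
        P = np.outer(a, a.conj())   # rank-one projector |a><a|
        out = out + P @ rho @ P
    return out

psi = np.array([1.0, 1.0]) / np.sqrt(2)   # equal superposition
rho = np.outer(psi, psi.conj())           # pure state |psi><psi|

basis = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
rho_after = project(rho, basis)           # diag(0.5, 0.5): a mixed state
```

Nothing in the map itself picks out one term of the resulting mixture as "the" outcome, which is exactly the point being made.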
I take this to be the definition of the MWI because of the above and because I haven't found any other definition that makes any kind of sense. Yes, there are people (e.g. Max Tegmark) who claim that the MWI is what you have left when you have removed the probability stuff from the axioms, but this is nonsense. All of these guys use another axiom, which is essentially equivalent to the probability rule, without admitting (or realizing) that this is what they're doing. The "essentially equivalent" axiom is that the Hilbert space of a system is the tensor product of the Hilbert spaces of the subsystems.
We're on the same page, but the version you quoted uses language that suggests that only one of the terms that appear on the right in my version is real. A mixed state can be used either to describe a single system in a specific but unknown state, or an ensemble of systems in lots of different states. Why does Schaum choose the first option? Perhaps because it follows the tradition started by von Neumann, who speculated that the "collapse" is a mysterious physical process that has nothing to do with unitary time evolution and has something to do with consciousness.
I arrived at this view of the MWI while debating it in other threads recently, so if you're interested you could find those threads.
Edit: This is the first post I wrote after I found this way of thinking of the MWI.
That's right. I'm using the decomposition of the Hilbert space of the universe into system+environment, plus the decoherence process to single out something that we can think of as "worlds". I'm defining the worlds to be certain correlations between subsystems, specifically those correlations that are described by the terms that aren't extremely small.
There's only one physical system. Penrose calls it "the omnium" rather than "the universe" because it contains all the worlds. Its time evolution is unitary and described by the Schrödinger equation. The entire history of the omnium is a curve in a Hilbert space. The omnium has subsystems, but the worlds aren't among them. The subsystems are things like "you", "this chair" and "everything else". The worlds are just correlations between the states of the subsystems.
I'm not sure this makes sense, but it's the only way to think of the MWI that I have seen that doesn't look seriously flawed to me.
I was wondering if there's another aspect to the measurement problem in terms of what one defines as the environment. I don't think I totally get decoherence yet, but doesn't it suggest that the environment gets entangled with the system so that the system's superposition gets restricted to some values? If that's the case, don't we end up with a sort of infinite regress in terms of an ever-expanding environment with which the smaller environment must be entangled? Eventually we have the entire universe, and what would the entire universe have to be entangled with to have specific values?
Perhaps decoherence works with closed systems, in which case this argument really wouldn't hold. Like I said, I don't have a firm grasp on decoherence, so I hope someone can clear this up for me. Thanks!
I think this is essentially correct. I've been thinking the same thing myself. My only objection is that I think you should be talking about state operators (i.e. density matrices) instead of state vectors. (See e.g. my edit of my first post in this thread). For all practical purposes, decoherence destroys the superpositions and puts the system into a mixed state instead.
I think this means that the "worlds" are emerging slowly enough that the correlations that define a "split" between classical worlds haven't had time to spread across the universe (or even very far), before new splits have already begun.