Does Decoherence Solve the Measurement Problem Completely?

  • #1
Prathyush
As the title describes: is the measurement problem completely solved by the decoherence program?

Specifically, I would like the following questions addressed.

Is there a clear explanation of what it means to record information?
Can it explain the behaviour of a photographic plate?
What happens to the apparatus after measurement?
 
  • #2
Some believe it solves it; others say it works only for all practical purposes (i.e., technically the state of the system + apparatus + environment is still in a superposition).

Roland Omnes is a proponent of the decoherence approach, not just as a way of solving the measurement problem in practice, but also in principle. See "The Interpretation of Quantum Mechanics", pages 304-309.
 
  • #3
I think this link argues quite nicely why decoherence does not solve the philosophical issues:
http://plato.stanford.edu/entries/qm-decoherence/#SolMeaPro
Money quote:
In the special case of measuring apparatuses, it would explain why we never observe an apparatus pointing, say, to two different results, i.e. decoherence would provide a solution to the measurement problem of quantum mechanics. As pointed out by many authors, however (e.g. Adler 2003; Zeh 1995, pp. 14–15), this claim is not tenable.
See also here
http://arxiv.org/abs/quant-ph/0312059
 
  • #4
It's even hard to argue that decoherence solves any aspect of the measurement problem. All the measurement related features are implicitly imported through the backdoor by using the measurement postulate to define density operators. Any argument for decoherence giving insight into measurement is therefore circular.
 
  • #5
StevieTNZ said:
Some believe it solves it; others say it works only for all practical purposes (i.e., technically the state of the system + apparatus + environment is still in a superposition).

Exactly.

You will find a good discussion of the issue in Schlosshauer's book on decoherence:
https://www.amazon.com/dp/3540357734/?tag=pfamazon01-20

The measurement problem has a number of parts. There is the preferred basis problem, i.e., why a particular basis is singled out; decoherence solves that. Then there is the issue of why a particular outcome occurs, and indeed why any outcome occurs at all. Decoherence doesn't solve that in a fundamental way, but it does for all practical purposes, meaning you can assume the outcome exists prior to observation and no experiment can prove you wrong. Whether that is satisfactory depends purely on your interpretation.

Yes, decoherence incorporates and assumes the Born rule, but it refines it so that some of its 'weirder' features are no longer an issue, e.g., you can assume the system is in the state prior to observation, which you can't do without decoherence; the reasoning is not circular. Interpretations that include decoherence, such as decoherent histories, call probabilities calculated without reference to an actual observational apparatus 'pre-probabilities': they are not manifest until decoherence occurs in an apparatus.

Is there a clear explanation of what it means to record information?
Depends on what you accept as clear. If you mean an explanation of why a particular outcome occurs, then no.

Can it explain the behaviour of a photographic plate?
Depends on what you accept as an explanation - for all practical purposes it does, but if you want more than that, sorry, you are out of luck.

What happens to the apparatus after measurement?
Nothing - the observation selected an outcome; that's it, that's all.

Thanks
Bill
 
  • #6
Jazzdude said:
It's even hard to argue that decoherence solves any aspect of the measurement problem. All the measurement related features are implicitly imported through the backdoor by using the measurement postulate to define density operators. Any argument for decoherence giving insight into measurement is therefore circular.

I'm not sure how familiar anyone is with the specifics of this approach but in the paper below Zurek goes to great lengths to derive the Born rule without any use of density operators and related concepts in order to avoid the circularity mentioned above:

Probabilities from Entanglement, Born's Rule from Envariance (Zurek, 2005)

I'm really not equipped to analyze the subtleties involved with his approach but when I read through it the following caught my eye (p.19):
Zurek said:
To demonstrate Lemma 5 we need one more property: the fact that when a certain event U (with p(U) = 1) can be decomposed into two mutually exclusive events, U = κ ∨ κ⊥, their probabilities must add to unity:

p(U) = p(κ ∨ κ⊥) = p(κ) + p(κ⊥) = 1 .   (29)

This assumption introduces (in a very limited setting) additivity. It is equivalent to the statement that "something will certainly happen".
Could someone accuse him of an act of "smuggling" here?

Also, as far as decoherence in general I quite enjoyed working my way through this:
Decoherence, the measurement problem, and interpretations of quantum mechanics (Schlosshauer, 2004): http://arxiv.org/abs/quant-ph/0312059

David Wallace has written on this topic extensively, I believe.
 
  • #7
eloheim said:
I'm not sure how familiar anyone is with the specifics of this approach but in the paper below Zurek goes to great lengths to derive the Born rule without any use of density operators and related concepts in order to avoid the circularity mentioned above:

Zurek doesn't really argue in the context of decoherence, and he postulates additional structure that allows him to derive the Born rule from something that is pretty close to the Born rule already.

David Wallace has written on this topic extensively, I believe.

David Wallace's own arguments are mostly focused on decision-theoretic approaches to deriving the Born rule in an Everett context. This is also not decoherence, and it also requires additional postulates.
 
  • #8
Jazzdude said:
David Wallace's own arguments are mostly focused on decision-theoretic approaches to deriving the Born rule in an Everett context. This is also not decoherence, and it also requires additional postulates.

I'm pretty sure that he's talking about the work Wallace has done on explaining the emergence of worlds and preferred basis through decoherence. Like in his FAPP paper (http://arxiv.org/abs/1111.2189)
 
  • #9
Quantumental said:
I'm pretty sure that he's talking about the work Wallace has done on explaining the emergence of worlds and preferred basis through decoherence. Like in his FAPP paper (http://arxiv.org/abs/1111.2189)

This is Everett, which of course makes use of decoherence. The OP was asking about just decoherence, however, which is a different thing from MWI.
 
  • #10
Jazzdude said:
This is Everett, which of course makes use of decoherence. The OP was asking about just decoherence, however, which is a different thing from MWI.

Well, the claim of Wallace is that Everett really is just the QM formalism + decoherence.
 
  • #11
Quantumental said:
Well, the claim of Wallace is that Everett really is just the QM formalism + decoherence.

Yes, but it's still not what people mean when they say decoherence. Decoherence on its own is agnostic of the concept of worlds.
 
  • #12
I found this piece by Leifer discussing decoherence useful:
In conclusion, decoherence theory has done a lot for our understanding of the emergence of classicality from quantum theory. However, it does not solve all the foundational questions about quantum theory, at least not on its own. Further, its importance may have been overemphasized by the physics community because other less-developed approaches to emergence could turn out to be of equal importance.
What can decoherence do for us?
http://mattleifer.info/2007/01/24/what-can-decoherence-do-for-us/
 
  • #14
Jazzdude said:
Yes, but it's still not what people mean when they say decoherence. Decoherence on its own is agnostic of the concept of worlds.
Sure, but I think the more interesting debate is whether decoherence can give us a preferred basis and an emergent classical world.
According to a recent paper by Jan Schwindt which was briefly discussed here, it cannot.

There is yet another paper that was released recently by a physicist named Oleg Lychkovskiy: http://arxiv.org/abs/1210.4124
I don't fully grasp this paper myself, but you might.
 
  • #15
Quantumental said:
But look at Matt Leifer's comment in the comment section from 2010. It seems he has been won over by Wallace too.
I don't get that from the 2010 quote. Here is what he writes in the 2007 blog:
However, the point here is that the work is not being done by decoherence alone, as claimed by some physicists, but also by a nontrivial ontological assumption about the state-vector. As I remarked earlier, the latter is itself a point of contention, so it is clear that decoherence alone is not providing a complete solution.
And this is what he wrote in his 2010 post:
To do this, you need to add an ontology, but it turns out that most of the ontologies that have been considered end up relying on precisely these formal derivations to get emergence. Perhaps the best worked out example is in the Everett interpretation where you can look at the long papers by David Wallace to find out how decoherence leads to emergence in that case. There is no new maths in these papers, but it provides the necessary philosophical support that you are looking for in that case. Bohmian mechanics is somewhat similar in that it needs decoherence in order to make the trajectories follow their classical counterparts in a stable manner and again there is no new maths involved in understanding this. Therefore, I guess what I was trying to say is that we seem to understand the broad outline of how classicality emerges, with the proviso that the meaning attached to that understanding is ontology dependent.
So, unless I'm misunderstanding, Leifer is still arguing that decoherence, by itself, cannot solve the measurement problem.
 
  • #16
bohm2 said:
So, unless I'm misunderstanding, Leifer is still arguing that decoherence, by itself, cannot solve the measurement problem.

It doesn't - what it allows for is a minimalist interpretation, like decoherent histories, that does.

Thanks
Bill
 
  • #17
bohm2 said:
So, unless I'm misunderstanding, Leifer is still arguing that decoherence, by itself, cannot solve the measurement problem.

I think the important part is where he says:

" Perhaps the best worked out example is in the Everett interpretation where you can look at the long papers by David Wallace to find out how decoherence leads to emergence in that case. There is no new maths in these papers, but it provides the necessary philosophical support that you are looking for in that case. "
 
  • #18
One must still distinguish between the physical process of decoherence (selection of a preferred pointer basis, effective diagonalization of the density matrix ρ' of the subsystem S') and its interpretation. What decoherence does is transform the quantum probabilities into effective classical ones; but it does not tell us which particular result encoded in the diagonal matrix ρ' will be realized in one specific experiment. In terms of Schrödinger's cat: it explains the absence of coherent superpositions, but for one single cat in one single experiment it does not tell us whether this specific cat will be dead or alive after opening the box.
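To make the "effective diagonalization" concrete, here is a minimal NumPy sketch (my own illustration; the coupling angle theta and the product-form environment are simplifying assumptions, not taken from any of the papers cited in this thread). It shows the off-diagonal terms of ρ' shrinking as more environment qubits become correlated with the system, while the diagonal probabilities never change:

```python
import numpy as np

theta = 0.5  # assumed system-environment coupling angle (illustrative only)
e0 = np.array([1.0, 0.0])                      # environment state tied to |0>
e1 = np.array([np.cos(theta), np.sin(theta)])  # environment state tied to |1>

def reduced_density_matrix(n_env):
    """rho' of the system for (|0>|e0...e0> + |1>|e1...e1>)/sqrt(2),
    after tracing out the n_env environment qubits."""
    overlap = np.dot(e0, e1) ** n_env          # <e0|e1>^n damps the coherences
    return 0.5 * np.array([[1.0, overlap],
                           [overlap, 1.0]])

for n in (0, 10, 100):
    rho = reduced_density_matrix(n)
    print(f"n_env={n:3d}  off-diagonal={rho[0, 1]:.2e}  diagonal={rho[0, 0]:.2f}")
# The coherences decay toward zero while the 0.5/0.5 diagonal stays put:
# a pointer basis is singled out and interference disappears, but no outcome
# is ever selected.
```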
 
  • #19
tom.stoer said:
One must still distinguish between the physical process of decoherence (selection of a preferred pointer basis, effective diagonalization of the density matrix ρ' of the subsystem S') and its interpretation. What decoherence does is transform the quantum probabilities into effective classical ones; but it does not tell us which particular result encoded in the diagonal matrix ρ' will be realized in one specific experiment. In terms of Schrödinger's cat: it explains the absence of coherent superpositions, but for one single cat in one single experiment it does not tell us whether this specific cat will be dead or alive after opening the box.

Sure, but Occam's razor says "both" if there is no preferred basis problem (ignoring the Born rule problem for the moment).
 
  • #20
What do you mean by "both"? Both dead and alive?
 
  • #21
tom.stoer said:
What do you mean by "both"? Both dead and alive?

Yes, Everett.
 
  • #22
But this does not follow mathematically from decoherence; it is one (of many) philosophical interpretations, and therefore decoherence does not fully solve the measurement problem.
 
  • #23
tom.stoer said:
But this does not follow mathematically from decoherence; it is one (of many) philosophical interpretations
It follows from decoherence and the evolution of the wave function if you do not add collapses or other stuff.
 
  • #24
mfb said:
It follows from decoherence and the evolution of the wave function if you do not add collapses or other stuff.

You are mixing up two different things, namely a) formalism and b) its (ontological) interpretation:
a) the "mathematical entities" (subspaces, ...) describing the dead cat and the alive cat are both "present" after decoherence in the density matrix - I agree
b) it is not a matter of physics but of philosophical interpretation whether this corresponds to something "ontologically real" in the sense of MWI, whether you want to add a "collapse" or whatever; physically this is a matter of taste b/c there is no experimental prediction to distinguish between all these interpretations, so it's philosophy or metaphysics (Ockham's razor is philosophy, not physics)

As a platonist, believing in some abstract sense in the reality of the wave function and the specific cat as its realization, I may also believe in MWI. As a positivist I will not believe in any reality but only in the results of my calculation and whether they agree with experimental results or not; they agree with experiments - fine - end of story (it is interesting that there are positivists arguing for MWI and against a collapse - which is a self-contradictory position).

Not even Ockham's razor is sufficient to decide, b/c there are two choices:
1) add complexity to the ontological level in order to reduce the complexity of the interpretation => MWI
2) add complexity to the (not fully understood) explanation or interpretation in order to reduce complexity of the ontological level => collapse (b/c there is only one world = the observable world)
Ockham's razor doesn't tell you whether (1) or (2) is the correct reasoning b/c Ockham's razor is applied to two different 'categories', namely
1) to 'interpretation'
2) to 'ontology'

So decoherence as a purely physical phenomenon cannot tell us anything regarding the metaphysical level. In order to deduce a metaphysical reasoning you have to have some metaphysical input - which is not present in the formalism of QM and decoherence.

Compare the following positions:
1) There are two branches of reality, both real in the same sense, one containing the dead cat and one containing the alive cat; and there are two observers in these two branches ... In that sense everything that is present in the density operator does exist in the above mentioned sense.
2) blablabla regarding collapse ...
3) There is a density operator describing the probability to find a dead cat; but b/c we have no observation of both cats at the same time, we have no indication whether they both exist in some still-to-be-defined sense, so we decide not to ascribe any ontological meaning to the density operator (nor to wave functions, etc.). We use the QM formalism as a model which approximately represents a subset of aspects of "reality" and which allows us to predict the results of a certain class of experiments.

3) is an agnostic position. It does not allow us to explain in any sense why (!) physics (based on mathematics) is a successful description of reality - b/c neither do we make any statement regarding the relation between physics and reality, nor do we make any attempt to define 'reality'. But it still allows us to use quantum mechanics including decoherence to derive experimentally testable and accurate predictions.

Any position that goes beyond (3), like MWI in the sense of (1) or collapse in the sense of (2), adds some metaphysical reasoning beyond decoherence as a pure mathematical fact.
 
  • #25
tom.stoer said:
whether you want to add a "collapse" or whatever; physically this is a matter of taste b/c there is no experimental prediction to distinguish between all these interpretations
I thought it was always possible, in principle, to discover whether a superposition remains or a collapse has occurred, as long as the relevant degrees of freedom in the environment are accounted for?

I wonder if detailed study of the line between "in principle" and "in practice" might reveal something here (based on limited information storage capacity in the universe).
 
  • #26
In the formalism of QM there is neither a collapse nor a branching into many worlds; there's only a single wave function with unitary time evolution (or a density matrix; but taking all d.o.f. into account there is not even the need to consider density matrices)

But when a human observes a pointer in an apparatus, the pointer is not in any superposition, so there must be something like a collapse, a branching, or whatever; and this is beyond the formalism of QM.
 
  • #27
tom.stoer said:
One must still distinguish between the physical process of decoherence (selection of a preferred pointer basis, effective diagonalization of the density matrix ρ' of the subsystem S') and its interpretation. What decoherence does is transform the quantum probabilities into effective classical ones; but it does not tell us which particular result encoded in the diagonal matrix ρ' will be realized in one specific experiment. In terms of Schrödinger's cat: it explains the absence of coherent superpositions, but for one single cat in one single experiment it does not tell us whether this specific cat will be dead or alive after opening the box.

Exactly.

As Schlosshauer says, it transforms a superposition into an 'improper' mixed state. Here 'improper' means it looks mathematically exactly like a mixed state, and no experiment can tell it from one, but in reality it isn't one. It is this 'mimicking' of a mixed state that allows it to be interpreted as one and, as an interpretational move, to solve the measurement problem. Decoherence doesn't by itself solve the measurement problem, but by allowing improper mixed states to be interpreted as proper ones it does so for all practical purposes. The wavefunction collapse issue is still there, but swept under the rug, so to speak.
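A minimal NumPy sketch of this point (my own illustration; the specific two-state example is an assumption, not taken from Schlosshauer's book): the reduced state obtained by tracing the apparatus out of a perfectly decohered entangled state is numerically identical to a proper 50/50 mixture, even though the global state is still pure:

```python
import numpy as np

# Pure entangled state (|0>|up> + |1>|down>)/sqrt(2) of system (x) apparatus
psi = (np.kron([1, 0], [1, 0]) + np.kron([0, 1], [0, 1])) / np.sqrt(2)
rho_total = np.outer(psi, psi)

# 'Improper' mixture: partial trace over the apparatus
# (reshape the 4x4 matrix to indices (s, a, s', a'), then trace over a = a')
rho_improper = np.trace(rho_total.reshape(2, 2, 2, 2), axis1=1, axis2=3)

# 'Proper' mixture: the system really is |0> or |1>, each with p = 1/2
rho_proper = np.diag([0.5, 0.5])

print(np.allclose(rho_improper, rho_proper))  # True: locally indistinguishable
print(np.trace(rho_total @ rho_total))        # 1.0: globally still a pure state
```

No measurement on the system alone can distinguish the two, which is exactly why interpreting the improper mixture as a proper one works for all practical purposes.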

Thanks
Bill
 
  • #28
bhobba said:
Exactly.

As Schlosshauer says, it transforms a superposition into an 'improper' mixed state. Here 'improper' means it looks mathematically exactly like a mixed state, and no experiment can tell it from one, but in reality it isn't one. It is this 'mimicking' of a mixed state that allows it to be interpreted as one and, as an interpretational move, to solve the measurement problem. Decoherence doesn't by itself solve the measurement problem, but by allowing improper mixed states to be interpreted as proper ones it does so for all practical purposes. The wavefunction collapse issue is still there, but swept under the rug, so to speak.

Thanks
Bill

Where does Schlosshauer say this?
I'd appreciate the reference. TIA. jimgraber
 
  • #29
tom.stoer said:
In the formalism of QM there is neither a collapse nor a branching into many worlds; there's only a single wave function with unitary time evolution (or a density matrix; but taking all d.o.f. into account there is not even the need to consider density matrices)

But when a human observes a pointer in an apparatus, the pointer is not in any superposition, so there must be something like a collapse, a branching, or whatever; and this is beyond the formalism of QM.

Why are the pointer and observer not simply in superposition as well?
 
  • #30
eloheim said:
Why are the pointer and observer not simply in superposition as well?

You've come full circle, back to Schrödinger's cat. I can set the experiment up so that in one position of the pointer I'm dead and in the other I'm alive; and now "simply in superposition" means a superposition of me dead and me alive. That's fine as far as the formalism of QM goes, and it makes perfect sense mathematically... But it's not a particularly useful description of anything.
 
  • #31
jimgraber said:
Where does Schlosshauer say this? I'd appreciate the reference. TIA. jimgraber

Page 49 of Decoherence and the Quantum-to-Classical Transition.

Thanks
Bill
 
  • #32
eloheim said:
Why are the pointer and observer not simply in superposition as well?
According to the QM formalism they are; according to my perception they aren't. That's the core of the problem. QM doesn't tell us what we will observe; it only tells us something about the probabilities of observations. If there is a 50% probability for "dead", I will never observe these superpositions or mixed states. I will always observe either "dead" or "alive". But there is nothing in the QM formalism which tells us how the 50% in the density matrix becomes the 100% in my perception.

So QM doesn't tell us how potential results become actual (real) results. Even decoherence doesn't.
 
  • #33
tom.stoer said:
According to the QM formalism they are; according to my perception they aren't. That's the core of the problem. QM doesn't tell us what we will observe; it only tells us something about the probabilities of observations. If there is a 50% probability for "dead", I will never observe these superpositions or mixed states. I will always observe either "dead" or "alive". But there is nothing in the QM formalism which tells us how the 50% in the density matrix becomes the 100% in my perception.

So QM doesn't tell us how potential results become actual (real) results. Even decoherence doesn't.
I like your take on this, and some others. My two cents:

QM is a probability calculus based on classical wave-mechanical concepts of the reality underlying instrumental behavior, concepts which are themselves inferred from the instrumental behavior. Quantum superposition is a mathematical representation, based on classical wave mechanics, of the extent of our knowledge of possible instrumental behaviors. Quantum superposition has the nonclassical character it does precisely because of our ignorance of the reality underlying instrumental behavior. That is, quantum superposition is, in a most important sense, an expression of our ignorance of deep reality.

There is currently no extension or interpretation of QM (including decoherence) which explains instrumental behavior to the extent that that behavior can be predicted in any way other than assigning probabilities to the possibilities associated with any particular instrumental preparation.

Why there's only one observed experimental outcome rather than the multiple ones that might be entailed in a particular superposition isn't the question, imo. The question is rather, e.g., why was there a detection (as opposed to no detection) recorded during a certain interval. Decoherence can't answer this question, because the mathematics of decoherence doesn't tell us any more about the reality underlying instrumental behavior than can be inferred without applying the mathematics of decoherence.

Quantum amplitudes are superposed in accordance with the requirements of any consistent wave mechanical representation. Philosophical pseudo-problems and paradoxes arise due to assuming that quantum states are real ontological states, which is an assumption that has no direct evidentiary support.

The current state of affairs is that the math of quantum decoherence doesn't solve the real measurement problem. Imho, there will never be a solution to the real measurement problem.

It seems likely to me that some form of QM, i.e., a probability calculus regarding instrumental behavior, is the best that can be hoped for -- and that the real quantum measurement problem will remain unsolved.
 
  • #34
nanosiborg, great, thanks.

A few comments:

nanosiborg said:
That is, quantum superposition is, in a most important sense, an expression of our ignorance of deep reality.
Not necessarily; it could be an ontological feature, but see below ...

nanosiborg said:
There is currently no extension or interpretation of QM (including decoherence) ...
Decoherence (in the strict sense of the formalism) isn't an interpretation; it becomes an interpretation if we add something like MWI, or if we are sloppy in our discussions ...

nanosiborg said:
Decoherence can't answer this question, because the mathematics of decoherence doesn't tell us any more about the reality underlying instrumental behavior than can be inferred without applying the mathematics of decoherence.
Exactly. It explains a lot (classical probabilities, pointer basis, perhaps Born's rule), but not everything.

nanosiborg said:
Philosophical pseudo-problems and paradoxes arise due to assuming that quantum states are real ontological states, which is an assumption that has no direct evidentiary support.
The problem is deeper. If you insist on some ontological status of QM you immediately run into these problems. But if you give up an ontological interpretation and introduce "our ignorance of reality", then logically it follows that either QM is not complete as a description of nature or our understanding of QM is not complete. So the problem is not only a philosophical one but a physical one as well. We feel uncomfortable with the situation that there "is" or "seems to be" more than we can calculate. We can then never be sure where the problem resides and whether there may be a physical but yet unknown solution. I think your interpretation regarding "our ignorance of reality" is something we don't like b/c it may be an interpretation only.

The case of decoherence tells us that we can (partially!) solve the measurement problem. And there's some hope - so we don't stop.

nanosiborg said:
It seems likely to me that ... the real quantum measurement problem will remain unsolved.
Yes, that's one possibility.

Perhaps the whole discussion is misguided b/c decoherence adds a pseudo-solution in introducing the incoherent (classical) environment. It seems as if adding a classical environment could solve the quantum measurement problem (the discussion shows that it doesn't). But even if decoherence applies to most experiments (FAPP), we must not forget about experiments which we could construct in principle, namely measurements where the apparatus is perfectly isolated from the environment and where the branching or collapse is not due to decoherence + XYZ. In that case we still have to deal with a small number of entangled d.o.f., and decoherence simply doesn't apply!
 
  • #35
Thanks tom.stoer. I think I should reread your and others' comments and think about this some more. :smile:
 

1. What is decoherence and how does it relate to the measurement problem?

Decoherence is a process in quantum mechanics by which interaction with the environment suppresses a system's wave-like interference behavior, so that the system appears effectively classical. It is believed to be the main reason for the appearance of classical behavior in the macroscopic world. Decoherence is often proposed as a solution to the measurement problem in quantum mechanics, which involves the strange phenomenon of wavefunction collapse during measurement.

2. Does decoherence completely solve the measurement problem?

No, decoherence does not completely solve the measurement problem. While it provides a plausible explanation for the appearance of classical behavior, it does not fully explain the process of wavefunction collapse during measurement. The measurement problem is still a subject of debate and research in the field of quantum mechanics.

3. How does decoherence explain the appearance of classical behavior?

Decoherence explains the appearance of classical behavior by showing how interactions between a quantum system and its environment can lead to the suppression of quantum interference effects. This results in the system appearing to behave classically, as the different possible states of the system become effectively isolated from each other.
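Schematically (my own notation, added for illustration, not part of the original answer): if two alternatives ψ₁ and ψ₂ become correlated with environment states |E₁⟩ and |E₂⟩, the locally observable probability is

P(x) = |ψ₁(x)|² + |ψ₂(x)|² + 2 Re[ψ₁*(x) ψ₂(x) ⟨E₂|E₁⟩],

and the interference term is suppressed as the environment states become distinguishable (⟨E₂|E₁⟩ → 0), while the two classical probabilities are left intact.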

4. Are there any criticisms of using decoherence to solve the measurement problem?

Yes, there are some criticisms of using decoherence to solve the measurement problem. One criticism is that it does not fully explain the process of wavefunction collapse and relies on the assumption that the environment is always in a definite state, which is not always the case. Another criticism is that decoherence does not provide a clear answer to the question of why we observe a particular outcome during measurement.

5. How does decoherence impact the interpretation of quantum mechanics?

The impact of decoherence on the interpretation of quantum mechanics is a subject of ongoing debate. Some interpretations, such as the many-worlds interpretation, incorporate decoherence as a fundamental aspect of their explanation of quantum phenomena. Other interpretations, such as the Copenhagen interpretation, view decoherence as a useful tool but do not consider it to fully solve the measurement problem. Ultimately, the interpretation of quantum mechanics is a matter of personal perspective and philosophical beliefs.
