How decoherence destroys superpositions

  • Thread starter: jaydnul · Level: B · Tags: decoherence
In summary: after decoherence the particle is still described by a probability distribution over outcomes, but the interference between the alternatives is destroyed, so the original superposition is effectively replaced by a classical-looking mixture.
  • #1
jaydnul
If the particle is undisturbed in the two-slit experiment, an interference pattern will show up. The weird quantum behavior here is that the particle can have "negative probabilities" associated with its position and momentum, so these can cancel each other and form the pattern we see.

When the particle interacts with the environment (decoheres), it is said that superposition disappears. Does this mean that the particle is still described by a probability distribution of outcomes, but those outcomes now look much more classical and can't interfere? So really when you say superposition disappears, you mean the negative amplitudes disappear, yes?
 
  • #2
jaydnul said:
The weird quantum behavior here is that the particle can have "negative probabilities" associated with its position and momentum
It cannot. All probabilities are positive.
Amplitudes can be negative, but they are not observable.
jaydnul said:
When the particle interacts with the environment (decoheres), it is said that superposition disappears. Does this mean that the particle is still described by a probability distribution of outcomes, but those outcomes now look much more classical and can't interfere?
Not "still". After decoherence you can start interpreting individual amplitudes (e. g. from the two slits) as probabilities. Before this interpretation is not possible.
 
  • #3
Perhaps you are referring to the Wigner quasi-probability distribution W(x,p). This one is negative in some regions of phase space corresponding to superpositions (hence the "quasi"), but in such a way that the observable probabilities
$$|\psi (x)|^2=\int dp\, W(x,p) $$ and $$|\psi (p)|^2=\int dx\, W(x,p) $$
are non-negative everywhere. W itself is a mathematical construction/interpretation; it is not observable itself.
When there is decoherence, W becomes non-negative everywhere, making it a mathematically 'legit' probability distribution, though again only the above marginals are physically observable.
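For concreteness, here is a rough numerical sketch (Python/numpy, units with ħ = 1, an arbitrarily chosen packet separation) of this behaviour for a superposition of two Gaussian wave packets: W(x,p) dips negative in the interference region between the packets, while the marginal obtained by integrating over p stays non-negative up to numerical error.

[code=python]
# Sketch: W(x,p) = (1/pi) * Int dy  psi*(x+y) psi(x-y) exp(2 i p y), hbar = 1,
# for a "cat" superposition of two Gaussian packets.
import numpy as np

a = 3.0                                   # half-separation of the packets (arbitrary choice)
y = np.linspace(-12.0, 12.0, 2401)        # integration variable
dy = y[1] - y[0]

def psi(x):
    """Even superposition of Gaussians centred at +a and -a (normalized via norm2 below)."""
    return np.exp(-(x - a)**2 / 2) + np.exp(-(x + a)**2 / 2)

x_fine = np.linspace(-20.0, 20.0, 4001)
norm2 = np.sum(np.abs(psi(x_fine))**2) * (x_fine[1] - x_fine[0])

def wigner(x, p):
    integrand = np.conj(psi(x + y)) * psi(x - y) * np.exp(2j * p * y)
    return (np.sum(integrand) * dy).real / (np.pi * norm2)

x_grid = np.linspace(-8.0, 8.0, 81)
p_grid = np.linspace(-4.0, 4.0, 81)
dp = p_grid[1] - p_grid[0]
W = np.array([[wigner(x, p) for p in p_grid] for x in x_grid])

print("min W(x,p):", W.min())                              # clearly negative: fringes near x = 0
print("min of p-marginal:", (W.sum(axis=1) * dp).min())     # ~|psi(x)|^2, non-negative up to numerics
[/code]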
 
  • #4
thephystudent said:
Perhaps you are referring to the Wigner quasi-probability distribution W(x,p)

I have some dim memories of that distribution, but without doubt probabilities in QM can never, ever be negative. It's utterly impossible, as the Kolmogorov axioms easily show. In fact, mathematically, it's known these days that QM is a generalized probability model - the simplest one after ordinary probability theory:
https://arxiv.org/abs/1402.6562

What happened is that during the early days of QM, when they applied the Klein-Gordon (KG) equation, they sometimes got negative probabilities, which confounded the early pioneers. After a while what was really going on became apparent - it was positive probabilities of antiparticles. Nowadays it's all part of a more comprehensive theory, quantum field theory, that explains it all from first principles.

The interesting thing about the KG equation is that it's the most general relativistic equation you can write for a single-valued field. Of course what that single value is isn't spelled out, but it's pretty easy to see it must be complex by simply solving it. In fact it's the most general equation for a spin-zero particle - but that is a whole new discussion - you can find the full details here:
https://www.amazon.com/dp/3319192000/?tag=pfamazon01-20

There is a lot more that can be said about this interesting equation - but it is way off topic - start a new thread if interested.

To answer the OP's original question: what happens is that a superposition is converted to a mixed state by decoherence. A lot more can be said, but it is beyond a B-level thread. I have tried to explain more at the B level many times, but failed utterly, so won't even try. At the B level, get hold of the following:
https://www.amazon.com/dp/0465067867/?tag=pfamazon01-20

Thanks
Bill
 
  • #5
Maybe the original poster was getting probabilities and probability amplitudes confused. Interference effects involve cancellation between probability amplitudes, which can be positive, negative or even complex.
 
  • #6
I marked it as "B" but I have a bachelor's in physics and took quantum mechanics... it's just been a while. I referred to the amplitudes as "negative probabilities" because of a recent explanation I read that said "quantum mechanics is basically probability theory with negative probabilities allowed".

So decoherence effectively turns the amplitudes into probabilities? My question is about the use of the word "classical". After decoherence, the particle can still only be predicted to land in a certain location based on the probabilities given by the decohered wavefunction, but those probabilities are now spread out in a manner that looks much more classical (as opposed to the particle simply staying at the value it had when it first decohered).

Edit: let me try to clear up my question. After decoherence, is the particle still in a superposition of the new classical probabilities, or is the particle definitely in one of those states, with the probabilities just reflecting how much we know (theoretically we could know exactly)?
 
  • #7
jaydnul said:
After decoherence, is the particle still in a superposition of the new classical probabilities
I think it is confusing to call it superposition here.

The answer depends on your favorite interpretation of quantum mechanics.
 
  • #8
mfb said:
I think it is confusing to call it superposition here.

The answer depends on your favorite interpretation of quantum mechanics.

Let's assume the Schrödinger wave collapse interpretation. The quantum particle entangles with the measurement device and they become one quantum system together. The particle will take a single value at the moment of entanglement (the probability of each outcome being proportional to the square of the amplitude of its wavefunction). Now that they are entangled, does the particle stay at that collapsed value for the rest of time? Does it become a classical particle in that sense?
 
  • #9
jaydnul said:
The quantum particle entangles with the measurement device
It does not - because we get decoherence as soon as that would start happening, and with wavefunction collapse that means one option is chosen at random.
jaydnul said:
Now that they are entangled, does the particle stay at that collapsed value for the rest of time?
Its future depends on the setup. We have to use quantum mechanics again to predict what it does in the future - but not starting from the collapsed state as initial state.
 
  • #10
mfb said:
It does not - because we get decoherence as soon as that would start happening, and with wavefunction collapse that means one option is chosen at random.

Its future depends on the setup. We have to use quantum mechanics again to predict what it does in the future - but not starting from the collapsed state as initial state.

That's where I am confused. So the decoherence (measurement) prevents the particle from becoming entangled with the measuring device? I thought entanglement happened as a result of decoherence.
 
  • #11
One of the reasons I don't like collapse interpretations. You have to define an arbitrary point where collapse happens.

Without collapses, the particle just gets entangled with the measurement device. Nothing magical happens.
 
  • #12
mfb said:
One of the reasons I don't like collapse interpretations. You have to define an arbitrary point where collapse happens.

Without collapses, the particle just gets entangled with the measurement device. Nothing magical happens.

Edit for clarity: If a particle has become entangled with the measuring system, what happens when the measurement system tries to measure it again at a later time? Will it have the same value as before?
 
  • #13
If nothing happens in between: yes.
 
  • #14
jaydnul said:
So decoherence effectively turns the amplitudes into probabilities?

No, what turns amplitudes into probabilities is just squaring. What decoherence does is to destroy interference between alternatives.

Here are the rules for computing probabilities in quantum mechanics. If you have an initial state [itex]|I\rangle[/itex] and a final state [itex]|F \rangle[/itex], then you compute the probability amplitude for going from state [itex]|I\rangle[/itex] to state [itex]|F\rangle[/itex] using the Hamiltonian. You get a complex number [itex]T_{IF}[/itex]. The probability of going from state [itex]|I\rangle[/itex] to [itex]|F\rangle[/itex] is then [itex]P_{IF} = |T_{IF}|^2[/itex], which is always a positive number (or zero).

Now, what about interference? Well, suppose that there are two paths to get from [itex]I[/itex] to [itex]F[/itex]: Via intermediate state [itex]|A\rangle[/itex], or via intermediate state [itex]|B\rangle[/itex]. For example, you send an electron through a screen with two slits, and the electron collides with a photographic plate beyond the screen, creating a black dot. You're trying to figure out the probability of getting a dot at a particular location on the plate. The electron could either go via the first slit, or via the second slit.

The amplitude for getting from the initial state [itex]|I\rangle[/itex] to the final state [itex]|F\rangle[/itex] is just the sum of the amplitudes for all the alternative paths. So in the simple case of two alternative intermediate states, we have:

[itex]T_{IF} = T_{IAF} +T_{IBF}[/itex]

where [itex]T_{IXF}[/itex] is the probability amplitude for getting from I to F via intermediate state X.

Then the probability will be:

[itex]P_{IF} = |T_{IF}|^2 = |T_{IAF}|^2 + |T_{IBF}|^2 + 2 Re ((T_{IAF})^* T_{IBF})[/itex]

(This just uses the fact that for two complex numbers A and B, [itex]|A + B|^2 = |A|^2 + |B|^2 + 2 Re(A^* B)[/itex])

If we write [itex]P_{IAF} = |T_{IAF}|^2[/itex] and [itex]P_{IBF} = |T_{IBF}|^2[/itex], then this becomes:

[itex]P_{IF} = P_{IAF} +P_{IBF} + \chi_{AB}[/itex]

where [itex]P_{IAF}[/itex] is the probability of going via intermediate state [itex]A[/itex], and [itex]P_{IBF}[/itex] is the probability of going via intermediate state [itex]B[/itex], and [itex]\chi_{AB}[/itex] is the interference term: [itex]2 Re ((T_{IAF})^* T_{IBF})[/itex], which can be either positive or negative. The probabilities behave like classical probabilities if the interference term is zero (or more generally, is unpredictable so that it averages to zero).

Entanglement destroys this interference term.
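As a quick numerical check of that algebra (the amplitude values below are arbitrary, purely for illustration):

[code=python]
# |T_A + T_B|^2 = |T_A|^2 + |T_B|^2 + 2 Re(T_A* T_B); the last term is the
# interference term, which can be negative.  The numbers are made up.
import numpy as np

T_A = 0.4 * np.exp(1j * 0.3)    # amplitude via slit A (made up)
T_B = 0.5 * np.exp(1j * 2.1)    # amplitude via slit B (made up)

added_then_squared = abs(T_A + T_B)**2
interference = 2 * (np.conj(T_A) * T_B).real
squared_then_added = abs(T_A)**2 + abs(T_B)**2

print(added_then_squared)                     # ~0.319
print(squared_then_added + interference)      # same number
print("interference term:", interference)     # ~ -0.091, negative in this example
[/code]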
 

Attachments
  • 2-slit.png
  • #15
mfb said:
If nothing happens in between: yes.

By nothing, do you mean the particle continues to stay entangled with the device? If so, how does it become unentangled again?

stevendaryl said:
...where [itex]P_{IAF}[/itex] is the probability of going via intermediate state [itex]A[/itex], and [itex]P_{IBF}[/itex] is the probability of going via intermediate state [itex]B[/itex], and [itex]\chi_{AB}[/itex] is the interference term: [itex]2 Re ((T_{IAF})^* T_{IBF})[/itex], which can be either positive or negative. The probabilities behave like classical probabilities if the interference term is zero (or more generally, is unpredictable so that it averages to zero).

Entanglement destroys this interference term.

This is the perfect explanation, thank you. Would it be naive to ask how that interference term is destroyed in a conceptual sense, rather than mathematically?
 
  • #16
jaydnul said:
This is the perfect explanation, thank you. Would it be naive to ask how that interference term is destroyed in a conceptual sense, rather than mathematically?

Well, interference only happens between two different intermediate states that lead to the same final state. So for example, in the two-slit experiment you set things up so that there is a source of electrons (or photons, or whatever), then they have to pass through a screen with two slits, and then finally they smash into a photographic plate beyond the screen, making a black dot. So the final state is smashing into a dot on the plate, and the two intermediate states are the two possible slits the electron can pass through. Interference between these two possibilities leads to the distinctive pattern of spots on the plate. So the initial state [itex]I[/itex] is the creation of the electron on one side of the screen, the intermediate states [itex]A[/itex] and [itex]B[/itex] are the electron passing through one or the other slit, and the final state, [itex]F[/itex] is the electron making a dot on the plate.

Now, suppose that immediately after the electron passes through one of the slits, we make a measurement as to which slit it went through. I don't know--maybe take a picture of it (that's not really possible, but say that it is). Then in that case, the two alternative paths produce different final states for the combined system of electron + measurement device:
  1. There is a dot on the screen at some location, and there is a record of the electron going through the first slit, A. Call that state [itex]F_A[/itex].
  2. There is a dot on the screen at that location, and there is a record of the electron going through the second slit, B. Call that state [itex]F_B[/itex]
For different final states, you don't add the amplitudes and then square, you first square and add the results. So instead of:

[itex]P_{IF} = |T_{IAF} + T_{IBF}|^2[/itex]

you get:

[itex]P_{IF} = |T_{IAF_A}|^2 + |T_{IBF_B}|^2[/itex]

The interference term is lost, because there is only interference between paths with the exact same final state.

Decoherence is like a measurement, in the sense that the particle interacts with the environment in a way that makes a permanent change to the environment. Maybe the pattern of photons radiated away from the electron. So a different path leads to different final states of the environment (even though, unlike a real measurement, you can't actually compute which path the electron took based on the patterns in the environment) and so there is no interference between the two alternatives.
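A toy calculation of this idea, with made-up numbers: tag each path with an environment ("which-path") state, so the cross term picks up the factor <E_A|E_B>. A perfect record (orthogonal environment states) kills the interference entirely; a partial record only partly washes it out.

[code=python]
# Dot probability when each path leaves the environment in state E_A or E_B:
#   P = |T_A|^2 + |T_B|^2 + 2 Re( T_A* T_B <E_A|E_B> )
# (same made-up path amplitudes as in the earlier sketch)
import numpy as np

T_A = 0.4 * np.exp(1j * 0.3)
T_B = 0.5 * np.exp(1j * 2.1)

def dot_probability(E_A, E_B):
    overlap = np.vdot(E_A, E_B)                       # <E_A|E_B>
    return abs(T_A)**2 + abs(T_B)**2 + 2 * (np.conj(T_A) * T_B * overlap).real

no_record = np.array([1.0, 0.0])                      # environment unchanged by either path
record_A  = np.array([1.0, 0.0])                      # environment state if electron took slit A
record_B  = np.array([0.0, 1.0])                      # orthogonal state if it took slit B
partial_B = np.array([np.cos(0.4), np.sin(0.4)])      # imperfect which-path record

print("no which-path info :", dot_probability(no_record, no_record))   # full interference
print("perfect record     :", dot_probability(record_A, record_B))     # cross term gone
print("partial record     :", dot_probability(record_A, partial_B))    # partly washed out
[/code]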
 
  • #17
mfb said:
If nothing happens in between: yes.

I am still curious how unentanglement (de-entanglement?) happens. When two systems become entangled, what needs to happen to reverse it?
 
  • #18
jaydnul said:
By nothing, do you mean the particle continues to stay entangled with the device? If so, how does it become unentangled again?
By nothing, I mean exactly what I said.

If you measure the momentum of a particle, then let the particle fly through a magnetic field changing its momentum, then measure the momentum again, you will get a different result. This is true even in classical physics and shouldn't be surprising.
If you measure the momentum of a particle, then be careful to avoid anything that would change the momentum, then measure it again, you will get the same result as the first measurement.

Entanglement with things called "measurement devices" is irreversible due to decoherence. Otherwise we don't call the things measurement devices, but see them as part of the quantum system.
 
  • #19
mfb said:
By nothing, I mean exactly what I said.

If you measure the momentum of a particle, then let the particle fly through a magnetic field changing its momentum, then measure the momentum again, you will get a different result. This is true even in classical physics and shouldn't be surprising.
If you measure the momentum of a particle, then be careful to avoid anything that would change the momentum, then measure it again, you will get the same result as the first measurement.

Entanglement with things called "measurement devices" is irreversible due to decoherence. Otherwise we don't call the things measurement devices, but see them as part of the quantum system.

Would the particle's altered momentum then be predictable by classical mechanics given that you know the strength of the magnetic field and everything else about the system? Or would altering the momentum cause it to slip back into a superposition of different possible momenta?
 
  • #20
jaydnul said:
Would the particle's altered momentum then be predictable by classical mechanics given that you know the strength of the magnetic field and everything else about the system?
It depends on the system. As an example, you measure the momentum - which means you will have some position uncertainty. The magnetic field could have a different field strength at different locations.
 
  • #21
Perfect, thanks!
 
  • #22
jaydnul said:
I marked it as "B" but I have a bachelor's in physics and took quantum mechanics... it's just been a while.

In that case it's easy - if you remember bra-ket notation and linear algebra.

A state isn't really what beginning texts and even some intermediate texts tell you. It's in fact a positive operator of unit trace. Those of the form |u><u|, where u is any vector, are called pure and are easily mapped to the vector space u belongs to. It's also easy to see that if c is simply a phase factor, i.e. a complex number of unit magnitude, then cu is the same state, i.e. |cu><cu| = |u><u|. Pure states are what the beginning texts call states, but like I said, the concept is really more general than that.

A superposition is simply the fact that in a vector space the linear sum of any two elements is again an element; a superposition of pure states is again a pure state. States of the form ∑pi |ui><ui|, where the pi are positive and sum to 1, are called mixed states. Any state can be shown to be either pure or mixed.

Now what decoherence does, without going into exactly how it happens, is convert a superposition into a mixed state. Obviously, since any state is a superposition of many other states in many different ways, the exact superposition that is converted to a mixed state depends on the observation. You get a state ∑pi |ui><ui| where the |ui> are the eigenvectors of the observation and pi is the probability of getting state |ui> - exactly as Born's rule says.

Now all this follows from standard QM - every interpretation has it. So what is the observation problem, now that we know more about decoherence? It's simply this. If you present states |ui> to be observed in some way with probability pi, then QM isn't that weird: prior to observation it is in state |ui> - you just don't know which one. Such states are called proper mixed states. However, for states not formed that way, i.e. formed via decoherence instead, you can't say that. They are called improper mixed states. There is no way to tell the difference between them, but the fact remains they are made differently. This is the modern version of the observation problem - i.e. how does an improper mixed state become a proper one? Each interpretation has its own take: some say - why worry; mine says - just somehow; in BM it's automatically a proper one anyway - it goes on and on - you can read about them if interested.
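A minimal numerical sketch of that distinction for a made-up two-level example: the superposition is a pure state |u><u| with off-diagonal (interference) terms, while what decoherence leaves behind is the mixed state ∑pi |ui><ui| with those terms gone; Tr(rho^2) separates the two.

[code=python]
# Pure superposition vs the mixed state decoherence leaves behind (toy 2-level example).
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
u = (ket0 + ket1) / np.sqrt(2)                 # superposition (e.g. "both slits")

rho_pure  = np.outer(u, u)                     # |u><u| : off-diagonals carry the interference
rho_mixed = 0.5 * np.outer(ket0, ket0) + 0.5 * np.outer(ket1, ket1)   # after decoherence

print(rho_pure)                                # [[0.5, 0.5], [0.5, 0.5]]
print(rho_mixed)                               # [[0.5, 0.0], [0.0, 0.5]]
print(np.trace(rho_pure @ rho_pure))           # 1.0 -> pure
print(np.trace(rho_mixed @ rho_mixed))         # 0.5 -> mixed
[/code]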

Thanks
Bill
 
  • #23
jaydnul said:
Would it be naive to ask how that interference term is destroyed in a conceptual sense, rather than mathematically?

I'm not sure I can add much more to the excellent answers you've already received but here goes anyway.

I'm a little bit of a decoherence 'heretic' in that I'm not as convinced it provides any kind of answer to the so-called 'measurement problem' as some appear to be. As a way of treating the dynamics of a 'small' system coupled to a 'large' system it's lovely.

Let's take a 2-level system prepared in a pure state. Without any loss of generality we can write its state as $$ | \psi \rangle _S = a |0 \rangle + be^{ i\phi } |1 \rangle$$ where ##a## and ##b## are real numbers. The density operator representing the same state is $$ \rho _S = a^2 |0 \rangle \langle 0 | + b^2 | 1 \rangle \langle 1 | + ab e^{ - i \phi } |0 \rangle \langle 1 | + ab e^{ i \phi } |1 \rangle \langle 0 | $$ The 'off-diagonal' elements here, the terms with the ##\phi##, are the interference terms that stevendaryl talked about.

Let's interact this with another 2-level system, also initially in a pure state, and we're going to get a state that can be written in the form $$ | \psi \rangle _{SM} = \alpha |00 \rangle + \beta |01 \rangle + \gamma | 10 \rangle + \delta | 11 \rangle $$ where all the numbers represented by Greek letters here are complex.

So when we look at the reduced density operator for ##S## we're still going to get these off-diagonal elements in general. The combined ##SM## system is in general entangled (it's possible that at certain times in the interaction the entanglement reduces to zero again). The maximum possible information contained in the correlation (or entanglement) is 2 bits.

Suppose we couple our 2 level system, ##S##, to an n-level system where n is much greater than 2. The maximum possible information that is contained in the entanglement is still just 2 bits and if we chose the right basis for ##M## we would see that it can be treated as a 2-level system during the interaction.

OK, but that's not quite what we want here. We don't want a single n-level system for ##M## we want ##M## to be comprised of lots of systems - the idea here is that the 'environment', or a large macroscopic object like a measuring device, is comprised of lots of quantum objects, and we want to treat the whole caboodle (system + measuring device) using the rules of QM.

Well we still can only generate 2 bits of entanglement between ##S## and ##M##, at most, and this entanglement is spread out over all of the many individual components of ##M##. So each component is a teensy-weensy bit entangled with ##S## - and in the limit where we let n tend to infinity, each of those components has zero entanglement (n here is now the number of objects comprising ##M##)

According to QM, however, that entanglement is still there. But if we're dealing with a large enough system (like a measuring device) then to all intents and purposes the entanglements can be disregarded. So for all practical purposes we're back in a classical world. There's a lot more to it than that, of course, but I'm just trying to get at some intuitive feel for what might be going on.

So decoherence can explain why we don't see the world as some entangled gloopy mess - or why our household pets aren't in some ghastly conglomeration of being alive or dead.

What it doesn't do is tell us (or predict) which version of that classical world we're in - the one where kitty is dead, or the one where kitty is alive.
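A back-of-the-envelope toy version of that "spread-out entanglement" picture (an assumed model, not the one above, in which each of N environment components is nudged only slightly depending on the system state): the system's off-diagonal element gets multiplied by the product of the single-component overlaps, which dies away as N grows even though each individual component is barely entangled.

[code=python]
# Each environment qubit k ends in |e_k^0> or |e_k^1> depending on the system state,
# with overlap <e_k^0|e_k^1> = cos(theta).  The system's coherence picks up the factor
# cos(theta)**N, so many tiny entanglements together suppress interference.
import numpy as np

theta = 0.1                                    # tiny per-component disturbance (arbitrary)
for N in (1, 10, 100, 1000):
    print(N, np.cos(theta)**N)
# 1 -> 0.995, 10 -> 0.951, 100 -> 0.606, 1000 -> 0.0067
[/code]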
 
  • #24
Hi Simon

I think you gave a really well thought-out, thorough response. Could you reflect on and comment on the following:
1. With decoherence, is the reason we get a mixed state (that looks like the cat becoming dead or alive) that we are not including everything about the state of one or both of the systems (in this case the measurement apparatus)?

2. The situation would be best stated as in 'Quantum Enigma' by Bruce Rosenblum and Fred Kuttner (2nd edition), page 209: "Those classical-like probabilities are still probabilities of what will be observed(*1). They are not true classical probabilities of something that actually exists.(*2)"
*1 : the state being a superposition of |alive> + |dead>
*2 : the state appearing to be |alive> OR |dead>, a classical state.
 
  • #25
StevieTNZ said:
With decoherence, is the reason we get a mixed state (that looks like the cat becoming dead or alive) that we are not including everything about the state of one or both of the systems (in this case the measurement apparatus)?

Not really. It's possible to set up a reasonable physical model and to solve it in some circumstances. For example, consider a single-mode coherent state field inside a cavity and model the interaction with the environment by coupling this field with an infinite number of (discrete) field modes indexed by frequency. It's basically just a single harmonic oscillator coupled to an infinite number of harmonic oscillators. In the Fock basis the field inside the cavity is initially pure and written as ##\sum a_n |n \rangle ##. The density matrix for the cavity field in this basis has elements ##\rho_{ij}##. So we want to find out how these elements evolve when they interact with an environment.

To solve it we assume there is a continuum of modes and develop the master equation for the cavity field density operator ##\rho##. In certain cases, for certain choices of environment (environment modes in zero-temperature thermal states, for example) the master equation can be solved exactly. We find that the off-diagonal terms of the density matrix have an exponential decay that depends on ##| i - j|##. In other words the off-diagonal terms are very rapidly driven towards zero.

You can loosely think of this as a kind of de-phasing - the effect of the environment being to randomize the phases in the superposition. Thus the phase coherence is quickly lost (by phase here I mean the complex phases that occur in the complex terms in the superposition and not the field phase).

So it's possible to solve a reasonably realistic model where reasonable approximations are made and obtain decoherence. It's not because we're ignoring stuff but a direct result of the interaction. Of course there are approximations made in deriving the master equation - we work in a weak-coupling regime, take the continuum limit, adopt a suitable coarse-graining procedure in deriving the master equation, etc. One could argue that these procedures essentially throw away information and force irreversibility artificially. But they are only the same kind of approximations made in classical treatments which also lead to irreversibility.

Long and short of it, no, I wouldn't say that the decoherence is only coming about because we're not including everything.
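Purely as a caricature of that off-diagonal decay (the |i - j| dependence of the rate is simply assumed below; the real dependence comes out of the particular master equation and model):

[code=python]
# Caricature of decoherence in the Fock basis: off-diagonal elements rho_ij decay
# with an (assumed) rate proportional to |i - j|; the diagonal is untouched.
import numpy as np

dim, gamma, t = 5, 1.0, 2.0
i, j = np.indices((dim, dim))
rho0 = np.full((dim, dim), 1.0 / dim)                   # maximally coherent toy state
rho_t = rho0 * np.exp(-gamma * t * np.abs(i - j))       # assumed decay law for coherences

np.set_printoptions(precision=3, suppress=True)
print(rho_t)    # populations survive, far-off-diagonal coherences are essentially gone
[/code]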
 
  • #26
Simon Phoenix said:
Long and short of it, no, I wouldn't say that the decoherence is only coming about because we're not including everything.

The issue with decoherence is, quite simply, that it doesn't solve the measurement problem - it merely shifts it somewhat to other issues colloquially grouped as - why do we get any outcomes at all. It is a matter of personal taste and opinion whether it's an advance or not. There is no answer, and you can go round and round in circles discussing/arguing about it but get nowhere.

The best course is to actually study it:
https://www.amazon.com/dp/3642071422/?tag=pfamazon01-20

Then make up your own mind.

I have and hold to the ignorance ensemble interpretation which basically answers - why do we get any outcomes at all - by somehow. But there are all sorts of views - there is who cares, there is you haven't resolved anything, there is BM where the reason you get outcomes is objects are objectively real, MW - tons and tons of them.

It sort of reminds me of the arguments about the meaning of probability, which is hardly surprising since it is now known that QM is a generalized probability model - in fact the simplest after ordinary probability theory. John Baez thinks the root of most arguments about interpretations is simply arguments about probability rehashed:
http://math.ucr.edu/home/baez/bayes.html

I know that in applying probability theory you usually pick the 'interpretation' which seems most reasonable for your application. Mostly it's frequentist; sometimes, such as in Bayesian inference, it's Bayesian; and sometimes even others, such as in the credibility theory used by actuaries, it's decision theory. Maybe that's the best view - simply pick the one that suits you best for what you are doing. Of course even that has problems, because what seems natural for one person is not necessarily the same for another.

Thanks
Bill
 
  • #27
Now, suppose that immediately after the electron passes through one of the slits, we make a measurement as to which slit it went through. I don't know--maybe take a picture of it (that's not really possible, but say that it is). Then in that case, the two alternative paths produce different final states for the combined system of electron + measurement.

Hi,
I'm not a physicist, but I like the subject a lot. Sorry if my question is too naive, but that is the point: isn't the recording on the photographic plate already doing this measurement? I mean, if the plate is positioned closer to the slits, can't it do the job? Why does something that takes a picture of the electron work, while something that effectively does the same thing (the plate) doesn't? Or, in other words, why does a system devised to register the passage of the electron through the slits (A or B) not register the simultaneous passage of the electron through both slits (as the plate does)? Is that clear? Thanks for answering. (And sorry for my English.)
Regards,
Pedro
 
  • #28
Pedro Zanotta said:
I'm not a physicist, but I like the subject a lot. Sorry if my question is too naive, but that is the point: isn't the recording on the photographic plate already doing this measurement? I mean, if the plate is positioned closer to the slits, can't it do the job?

If you're talking about the two-slit experiment, the point is that there are two ways that the electron can get to the plate: it can go through the first slit, or it can go through the second slit. So the dark spot on the photographic plate is not a measurement of which slit the particle went through, it is a measurement of where the particle ended up. Mathematically, a measurement of an intermediate state destroys interference between alternatives, which in turn changes the probabilities for final states.
 
  • #29
Pedro Zanotta said:
Now, suppose that immediately after the electron passes through one of the slits, we make a measurement as to which slit it went through. I don't know--maybe take a picture of it (that's not really possible, but say that it is). Then in that case, the two alternative paths produce different final states for the combined system of electron + measurement.

If you measure the electron's position just after it goes through one of the slits, i.e. immediately behind a slit, you know its position exactly and you do not get a superposition of the states behind each slit, hence no interference.

Please study the following:
https://arxiv.org/abs/quant-ph/0703126

Thanks
Bill
 
  • #30
Thank you, Bill. I'll read the material you suggested.
 
  • #31
bhobba said:
I have and hold to the ignorance ensemble interpretation which basically answers - why do we get any outcomes at all - by somehow.

And that's how 'textbook' QM answers it too - we get something, somehow. I guess I see decoherence, as applied to measurement theory, as an attempt to resolve the unitary-dynamics versus non-unitary-measurement issue of QM. I've not really seen any interpretation of QM that I'm personally 100% happy with; they all seem to me to have this character of sweeping something under a rug - they just use different rugs.

Perhaps the standard QM advice should be re-written - "just shut up and sweep".
 
  • #32
That's the best advice you can indeed give in QM 1, but with the hint that observables and states are not abstract operators in Hilbert space but clearly defined descriptions of measurement devices and preparation prescriptions, and these are defined by the experimentalists in the lab or the observers at their telescopes or however you observe the world around us.

The only thing you need to describe observations with the formalism is Born's rule, and you should take it seriously: if an observable is indeterminate, it's indeterminate, and only probabilities are known (provided the state is known with sufficient precision). If you measure it you get, by construction of the real-world measurement apparatus by a good experimental physicist or engineer, a well-defined value, and if you repeat the experiment sufficiently often with sufficiently well prepared ensembles you can measure the probabilities and compare with the prediction from the formalism. The most amazing thing about QT is how accurately it makes these predictions and how robustly it has withstood all attempts to disprove it.
 

1. How does decoherence destroy superpositions?

Decoherence is a process in which a quantum system interacts with its environment, suppressing interference between the alternatives in a superposition so that the system behaves, for all practical purposes, like a classical mixture of outcomes. The delicate phase information is carried off into the environment; decoherence by itself does not single out which outcome actually occurs.

2. What causes decoherence?

Decoherence is caused by the interaction of a quantum system with its surrounding environment. This environment can include other particles, fields, or even measurement devices. Any interaction that causes the system to lose its isolation and become entangled with its surroundings can lead to decoherence.

3. Can decoherence be reversed?

Once decoherence has occurred, it is difficult to reverse. This is because the information about the original superposition state has been lost due to the interaction with the environment. However, in some cases, it may be possible to partially recover the quantum information through a process called quantum error correction.

4. How does decoherence affect quantum computing?

Decoherence is a major challenge in quantum computing because it can cause errors in the calculations and lead to the loss of quantum information. To overcome this, researchers are developing techniques to reduce the effects of decoherence, such as using error-correcting codes and implementing quantum error correction algorithms.

5. Can decoherence be avoided?

Decoherence is an inevitable process that occurs in all quantum systems. However, it can be minimized by isolating the system from its environment and carefully controlling its interactions. This is why quantum computers are often operated at extremely low temperatures and in highly controlled environments to reduce the effects of decoherence.
