Decoherence does not collapse the wavefunction

In summary, decoherence does not directly collapse the wavefunction but rather explains the appearance of wavefunction collapse through the leakage of quantum information into the environment. The process is reversible in principle but thermodynamically irreversible in practice. Decoherence is not equivalent to wavefunction collapse, and some posters regard it as an escape mechanism for those who are uncomfortable with the idea of true collapse.
  • #1
ZPower
"Decoherence does not generate actual wave function collapse. It only provides an explanation for the appearance of wavefunction collapse. The quantum nature of the system is simply "leaked" into the environment"
http://en.wikipedia.org/wiki/Quantum_decoherence

I have taken a full year of quantum, but this puzzles me. So decoherence does not collapse the wavefunction? But somehow the information leaks to the environment? And it is conserved?
If this is so, then this would be a reversible process, no?
Further, I thought decoherence and wave function collapse were equivalent?


any thoughts? :smile:
 
  • #2
They're equivalent because the leak is thermodynamically irreversible: in practice there is no way to recapture all the information after it leaks into the environment.

But it is reversible in principle, and for systems contained in small environments this reversal (as well as intermediate appearance of collapse) is demonstrable.
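
Here is a minimal numerical sketch of that idea in Python/NumPy (a toy model of my own, with an arbitrary coupling, not taken from any particular paper): one system qubit interacts unitarily with a single "environment" qubit, its reduced coherence drops (the apparent collapse), and because nothing but unitary evolution happened, undoing the interaction brings the coherence back exactly.

[code]
import numpy as np

# System qubit starts in a superposition; a one-qubit "environment" starts in |0>.
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
env0 = np.array([1, 0], dtype=complex)
psi = np.kron(plus, env0)                       # joint state (system x environment)

# Toy interaction: if the system is in |1>, rotate the environment qubit by pi/2,
# so the environment ends up recording which-path information about the system.
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]], dtype=complex)
U = np.zeros((4, 4), dtype=complex)
U[:2, :2] = np.eye(2)                           # system |0>: environment untouched
U[2:, 2:] = R                                   # system |1>: environment rotated

def coherence(psi):
    """Magnitude of the off-diagonal term of the system's reduced density matrix."""
    rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)
    rho_sys = np.trace(rho, axis1=1, axis2=3)   # partial trace over the environment
    return abs(rho_sys[0, 1])

print("coherence before the leak:", coherence(psi))            # 0.5
psi_leaked = U @ psi
print("coherence after the leak :", coherence(psi_leaked))     # ~0: looks collapsed
psi_restored = U.conj().T @ psi_leaked
print("coherence after reversal :", coherence(psi_restored))   # 0.5 again
[/code]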
 
  • #3
They are not equivalent because wave function collapses in a finite time and decoherence would need infinite time to collapse anything. Decoherence is an escape mechanism for those who are scared of real collapses :wink:
 
  • #4
arkajad said:
They are not equivalent because wave function collapses in a finite time and decoherence would need infinite time to collapse anything. Decoherence is an escape mechanism for those who are scared of real collapses :wink:

There is no such mechanism.
 
  • #5
The quantum nature of the system never ceases to be its quantum nature and that nature in part is that we use a probabilistic description of the system which for a quantum system manifests as a wave function.

What is "leaking into the environment" is the a priori information about the system encoded in an original sharp wave-function.

Remember that nowhere in the decoherence process is one insisting that the quantum system in question be measured, yet it is only when a measurement is made that we collapse the system's wave-function.

Take the classical analogue. Imagine you inject a classical particle into a box with smooth idealized mirror walls. You can in principle trace the trajectory and know the future state of the classical particle. But in a realistic setting the walls are thermal and rough. We can map the original classical state into a singular probability distribution (p=1 for the known state and p=0 for all others). As the particle bounces around the thermal box we see this probability distribution spread out, entropy goes up, and the "sharp" classical description "classically decoheres". If we then look for where the classical particle is and measure its classical state, we see the probability distribution "collapse" into another singular form, p=1 for the observed state and p=0 for all others.
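
If a toy calculation helps, here is a crude Python sketch of that classical picture (all numbers arbitrary): the particle starts in a known cell on a ring, random "thermal" hops spread our probability distribution and raise its entropy, and looking at the particle collapses the classical description back to p=1 for the observed cell.

[code]
import numpy as np

rng = np.random.default_rng(0)
N = 50
p = np.zeros(N)
p[0] = 1.0                       # sharp classical description: the particle is in cell 0

def entropy(p):
    q = p[p > 0]
    return float(-np.sum(q * np.log(q)))

print("entropy at injection:", entropy(p))              # 0: we know the state exactly

# "Thermal, rough walls": each bounce the particle hops left, right, or stays put,
# so our probability distribution diffuses over the ring of cells.
for bounce in range(200):
    p = (np.roll(p, 1) + np.roll(p, -1) + p) / 3.0

print("entropy after bouncing :", entropy(p))           # grew toward log(N): "classically decohered"

# "Measurement": we look, actually find the particle somewhere, and update.
found = rng.choice(N, p=p)
p = np.zeros(N)
p[found] = 1.0
print("entropy after looking  :", entropy(p))           # back to 0: the description "collapsed"
[/code]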

Quantum systems behave the same except that they have no classical objective state and their probability description is not a distribution over a set of states. It is that uniquely quantum relative distribution over sets of commuting observables which we can express as a diagonal density operator in the corresponding basis.

Instead of a singular description we have a maximal description in the density operator in the form of a unit trace projection operator (projecting onto a 1-dim subspace) which we can discard for the moment and simply refer to a basis element.
[tex] \psi : \quad \rho = \psi \otimes \psi^\dagger[/tex]

That is the closest we get to an actual state of a quantum system. However again with decoherence we must revert to the more general density operator description. This is why we refer to psi (the wave-function) as the "state vector" even though it is not the system's state. It is the square root of its "singular probabilistic description" [tex]\rho_\psi[/tex].

So keep in mind, even with classical systems we can view our maximal descriptions "object A is in state S" as probabilistic descriptions "object A has probability p=1 of being observed with observable values corresponding to S and p=0 for all others". In both classical and quantum systems decoherence is the spreading of the probabilities due to interaction with inaccessible epistemic elements (the thermal environment), and "collapse" is due to updating the probabilistic description as we again measure the system.
 
  • #6
jambaugh said:
Quantum systems behave the same except that they have no classical objective state and their probability description is not a distribution over a set of states.
That is one point of view; there are other points of view.

It is that uniquely quantum relative distribution over sets of commuting observables which we can express as a diagonal density operator in the corresponding basis.

That is a rather minimalistic and ad hoc point of view.

In both classical and quantum systems decoherence is the spreading of the probabilities due to interaction with inaccessible epistemic elements (the thermal environment), and "collapse" is due to updating the probabilistic description as we again measure the system.

Is this separation between the system and its "epistemic environment" objective, or only in the head of a given quantum-decoherence physicist? Another physicist may draw this separation line perpendicular to the previous one.
 
  • #7
jambaugh said:
Take the classical analogue. Imagine you inject a classical particle into a box with smooth idealized mirror walls. You can in principle trace the trajectory and know the future state of the classical particle. But in a realistic setting the walls are thermal and rough. We can map the original classical state into a singular probability distribution (p=1 for the known state and p=0 for all others). As the particle bounces around the thermal box we see this probability distribution spread out, entropy goes up, and the "sharp" classical description "classically decoheres". If we then look for where the classical particle is and measure its classical state, we see the probability distribution "collapse" into another singular form, p=1 for the observed state and p=0 for all others.
I've not heard that analogy before. Thank you.

arkajad said:
That is one point of view there are other points of view.

arkajad said:
They are not equivalent because wave function collapses in a finite time and decoherence would need infinite time to collapse anything. Decoherence is an escape mechanism for those who are scared of real collapses :wink:

The OP is obviously trying to understand how the decoherence program is supposed to work. If you're only interested in soap-boxing your personal evidenceless viewpoint as if it were absolute truth, then how about warning the OP not to be confused by this? But if you're not too scared, then do tell me more about this finite time: in which velocity reference frame is it extremised, and where approximately is its line between microscopic and macroscopic?
 
  • #8
cesiumfrog said:
But if you're not too scared, then do tell me more about this finite time: in which velocity reference frame is it extremised, and where approximately is its line between microscopic and macroscopic?

There is nothing mysterious about finite time. It is finite in all experiments. Otherwise we would have no data to compare our theories with. And it has nothing to do with microscopic and macroscopic. SQUIDs are macroscopic. As for the "extremised velocity frame" I have no comments - I do not know what you are referring to. Who is "extremising" what and why?
 
  • #9
You say the collapse process (of which you give no details) occurs in some finite period of time. But clearly, different observers will conclude this period of time to have different durations, according to special relativity. So, which observers will be the ones who infer the shortest duration of time for the collapse process? Those in the rest frame of the apparatus? Or the rest frame of the universe?

Also, fine if it has nothing to do with microscopic and macroscopic, but which systems will behave quantum mechanically and which systems will have collapsed into classical behaviour? So far, decoherence has been very successful in predicting that isolated systems will behave quantum mechanically and systems in greater interaction with their environments will behave as though they have collapsed into classical states. But obviously you must disagree with some part of this, because you argue against decoherence. Are you in Penrose's camp, claiming collapse is mediated by gravitons (and somehow related to consciousness), so that mass determines how quickly collapse occurs? Are you in Wigner's camp, claiming collapse depends on whether a chimpanzee mind has contemplated the experimental outcome? What criteria do you propose for predicting which systems will collapse rapidly and which will have long-lasting quantum coherence?
 
  • #10
cesiumfrog said:
You say the collapse process (of which you give no details) occurs in some finite period of time. But clearly, different observers will conclude this period of time to have different durations, according to special relativity. So, which observers will be the ones who infer the shortest duration of time for the collapse process?

No observers are needed. The process can be fully automatized.

Also, fine if it has nothing to do with microscopic and macroscopic, but which systems will behave quantum mechanically and which systems will have collapsed into classical behaviour?

What is classical behaves classically, what is non-classical needs to be described quantum mechanically. Chairs and tables are classical. Things that happen and events that are recorded are classical.

Are you in Penrose's camp, claiming collapse is mediated by gravitons (and somehow related to consciousness)? Are you in Wigner's camp ...

Neither. But perhaps I am in Niels Bohr's camp:

"All information about atoms expressed in classical concepts
All classical concepts defined through space-time pictures" [Bohr 1927]
 
  • #11
You are not in Bohr's camp, as far as I can tell. His philosophy was simply: "This is what you get, and you can't understand my theory any better than what we've given you." A somewhat self-serving philosophy, in my opinion.

To posit this superluminal thing called wave function collapse is an addition to the basic postulates of quantum mechanics. Not that there is anything wrong with superluminal connectivity, and I understand the motivation "to make sense" of the underlying postulates--that is, to motivate an empirical equation with elements we are happier to associate with elements of physical reality--but the fact that the mechanisms proposed to obtain this to date are so loopy only indicates to me that no one has yet stumbled upon the right idea.
 
  • #12
arkajad said:
That is one point of view there are other points of view.
It is more than a point of view. It is exactly and precisely the assumption that probabilities arise from a distribution over a set of states which leads to Bell inequalities. Probabilities are measures: P(A xor B) + P(B xor C) ≥ P(A xor C). Since QM violates Bell inequalities, QM probabilities cannot be expressed as distributions over a state space. Hence the occurrence of negative "quasi-probabilities" in Wigner's quasi-distributions.

That is rather minimalistic and ad hoc point of view.
How so? It is the general setting of QM description. The proper representation of a quantum system is a density (co)operator. Only in an idealized Temp=0 limit do we imagine a sharp system wherein we can "square-root" the density matrix to manifest a mode vector (e.g. a wave-function).

Again note that using a language of probabilistic description we can still in principle incorporate certainty via P(X=x)=1, P(X= other) = 0. It is the more general language incorporating the prior "sharp mode" or "classical state" as such a special case. One can still do all of classical physics in this language by restricting the available physically actualizable observables to a commuting subset.
Is this separation between the system and its "epistemic environment" objective, or only in the head of a given quantum-decoherence physicist? Another physicist may draw this separation line perpendicular to the previous one.

It is relative to the definition of the system. Note that relativity is the "gripping hand" of your false alternatives.

You consider an EPR pair and speak of "the left electron" and "the right electron"; I consider the same EPR pair and speak of "the spin-z = +1/2" and "the spin-z = -1/2" electron. We are partitioning the system into distinct halves (and can thus focus on one half as "the system" and the other as "environment") which are not simply permutations of each other. They are not inseparable but rather hyper-separable.

It is like splitting a position vector (of a classical object) into different coordinates. The coordinates are meaningful relative to a frame but we can choose a continuum of possible frames with a continuum of possible coordinate sets. This doesn't imply that the assertion that this position vector is specifically 3 dimensional, is meaningless.

Likewise in the EPR example we have two "count em! two" electrons. But they ain't objects with an objective fixed separation into components. They are quantum phenomena (a.k.a. quanta) factorable into two component quanta in a continuum of meaningful ways. (Bob looks at spin but using a different axis than mine and his component electrons will be a distinct slicing of the cake.)

And if you look at the logic in interpretations of EPR you'll see in many cases the mistake of thinking that e.g. your L vs R electrons must be a permutation of my spin up vs down electrons prior to a simultaneous measurement of z-spin and position. Until we speak of a measured electron pair we are either speaking about possible outcomes prior to measurement or speaking about the mode of pair production and not a given instance of that pair. Once we make a measurement we are changing what we are referring to and thus collapsing or otherwise discontinuously changing our description.

I bring up the EPR pair because it is a simpler example of this relativity of division. But this applies to the division of system vs episystem as well. Think of this in terms of e.g. Unruh radiation of an accelerating observer. A boosted observer sees a different subdivision of system and episystem and thus his system appears to change "state" increasing in particle number.
 
  • #13
Here's a tip. If one wants to talk about how long it takes for a physical collapse to occur, first define what one means in terms of how one actually observes a collapse, e.g. what observables correspond to non-collapsed and collapsed cases.

Until and unless this is made clear any suppositions about the time it takes or mechanism by which it occurs are no better than the rantings of a medium in a seance.

"You dead husband is wearing a blue coat! He's waving at you lovingly, he says...he says...please deposit $20 to continue your call to the netherworld!"
 
  • #14
jambaugh said:
Here's a tip. If one wants to talk about how long it takes for a physical collapse to occur, first define what one means in terms of how one actually observes a collapse, e.g. what observables correspond to non-collapsed and collapsed cases.

It is a stochastic variable with its distribution determined by the quantum state and the detector. You know quite well that, for instance, radioactive decay is a stochastic process. The same with every other event process controlled by quantum phenomena except that stochastic processes involved are somewhat more complicated than a simple decay.
 
  • #15
jambaugh said:
It is more than a point of view. It is exactly and precisely the assumption that probabilities arise from a distribution over a set of states which leads to Bell inequalities. Probabilities are measures: P(A xor B) + P(B xor C) ≥ P(A xor C). Since QM violates Bell inequalities, QM probabilities cannot be expressed as distributions over a state space. Hence the occurrence of negative "quasi-probabilities" in Wigner's quasi-distributions.

And why do you make such an assumption? Because, first of all, you have learned it from textbooks, second, almost everybody does it, and third, you do not know anything better. But perhaps there are better assumptions? I don't think everybody is happy with such an assumption. In fact I know it.


jambaugh said:
How so? It is the general setting of QM description. The proper representation of a quantum system is a density (co)operator. Only in an idealized Temp=0 limit do we imagine a sharp system wherein we can "square-root" the density matrix to manifest a mode vector (e.g. a wave-function).

The density operator has been devised for dealing with infinite ensembles of systems. You are not an infinite ensemble. You are a unique, individual system. The same with electrons that leave tracks in cloud chambers. The density matrix is of no use for describing the mechanism of formation of a unique track. And such unique tracks are being formed in the labs each day.
 
  • #16
arkajad said:
And why do you make such an assumption? Because, first of all, you have learned from textbooks, second, almost everybody does it, and third, you do not know anything better.
Firstly you are displaying a great deal of hubris to presume to say where I learned anything.

Secondly you are quite wrong as to how I acquired this "assumption".

Thirdly even should we assume I a.) did learn it from a textbook, b.) almost everybody "does it", and c.) that neither I nor these others know anything better...
none of these assumptions make any dent in the validity of my statement but rather would tend to support it.
*When you know the best way you cannot know anything better so failure to know better might be inductive evidence that the way in hand is the best way,
*absent any other information "what everybody does" usually is a pretty good way to do things given everybody has a choice in the matter,
*Textbooks though obviously not infallible are pretty damned good sources of knowledge, (My x-ray diffraction text was an especially good book, chock full of useful and insightful knowledge!).

Fourthly, it is quite common to introduce such ad hominem irrelevances when you have no substantive counter to the actual point at hand. So again, what makes you think you cannot on your own (as I did) derive Bell's inequalities directly from the simple assumption that your probabilities come from a measure over a state manifold, and, in seeing the short and simple form of that derivation, see as self-evident that violation of Bell inequalities invalidates the assumption of an underlying state of reality? Locality arguments are irrelevant.

This I have done with pen and paper and Bell's book (all I lacked was a candle to complete the excommunication :wink:). You try it and you may see something I didn't.
(start with the "metric" d(A,B) = f(A~B) where ~ is symmetric set difference and f is a probability distribution over the set of possible states of your system. Bell's inequality then takes the form of the triangle inequality for this metric assuming f is a measure on the set)
but I'm not interested in your opinion until you do some leg work. (unless of course you agree with me in which case I'll find your opinion wise and fascinating!)
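
If you would rather do the sanity check by machine than by pen and paper, here is a rough Python sketch (my own toy construction, using the symmetric-difference form of the "metric" above): for any probability measure over a finite set of hidden states the triangle inequality P(A≠B) + P(B≠C) ≥ P(A≠C) holds identically, while the standard singlet correlations at 0°, 45°, 90° violate the corresponding bound.

[code]
import numpy as np

rng = np.random.default_rng(1)

# Classical side: three ±1 observables are fixed functions of a hidden state,
# and f is ANY probability measure over the hidden states.
n_states = 8
A, B, C = (rng.choice([-1, 1], size=n_states) for _ in range(3))

worst = -np.inf
for _ in range(10000):
    f = rng.random(n_states)
    f /= f.sum()                                  # a random probability measure
    d = lambda X, Y: f[X != Y].sum()              # d(X,Y) = Prob(X and Y disagree)
    worst = max(worst, d(A, C) - d(A, B) - d(B, C))
print("max 'violation' over random measures:", worst)   # never exceeds 0

# Quantum side: singlet correlations. The probability that the values inferred along
# two axes separated by angle theta disagree is sin^2(theta/2). Axes at 0, 45, 90 deg:
disagree = lambda theta: np.sin(theta / 2) ** 2
lhs = disagree(np.pi / 4) + disagree(np.pi / 4)
rhs = disagree(np.pi / 2)
print("quantum:", round(lhs, 3), "vs", round(rhs, 3), "-> inequality violated:", lhs < rhs)
[/code]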

But perhaps there are better assumptions?
and perhaps there are not. Perhaps if I put one more quarter in the slot machine I'll win the Jackpot. Perhaps-ing in the dark gets one nowhere or worse. What evidence or argument have you for supposing a better assumption and by what criteria and value system are you ordering assumptions better to worse? State them and abide by them or remain silent.

I don't think everybody is happy with such an assumption. In fact I know it.
Happiness is irrelevant. Functional meaning and concurrence with experience is all that is meaningful in physics. Like I say when I miss a (pool) shot
"It should have gone in"... and then I say... "and I should have been rich and famous with loads of hot beautiful babes hanging all over me and getting laid every night!"
I say this to emphasize to myself that this sort of "should" is the insistence that things should conform to my dreams instead of my expectations conforming to actuality. In pool it reminds me that I'm responsible for the result and my desire is irrelevant to how things actually behave.
Density operator has been devised for dealing with infinite ensembles of systems. You are not an infinite ensemble. You are a unique, individual system. The same with electrons that leave tracks in cloud chambers. Density matrix is of no use for describing the mechanism of formation of a unique track. And such unique tracks are being formed in the labs each day.

Firstly, it is irrelevant why the density operator formal language was first introduced. What is relevant is how it functions. This is why you see me refer to it as a "co-operator": it is a linear functional mapping observables to their expectation values (since it is always used in conjunction with the trace):
[tex]\rho : X \mapsto \langle X \rangle = \mathrm{Tr}(\rho X)[/tex]

The density co-operator X-->Tr(rho*X) IS the established set of expectation values for the observables of a system and as such is the most general, all encompassing representation of our knowledge about a physical system. It is the very physical prediction of measurement and as such is the basis for any meaningful statement about a physical system.

So, instead of "the density operator" call this expectation value functional the... expectation value functional if you wish. It is a necessary ingredient in any description of any system be it a singular instance or an ensemble or logically defined class. As it has a previously defined name can we agree to use it?
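
To make the functional picture concrete, here is a tiny Python/NumPy sketch (arbitrary example numbers, nothing more): the same Tr(rho X) recipe gives the predicted expectation values whether rho happens to be a sharp (pure) description or a mixed one.

[code]
import numpy as np

# Two qubit observables (Pauli X and Z).
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def expect(rho, obs):
    """The density co-operator in action: observable -> expectation value."""
    return np.trace(rho @ obs).real

# Sharp (pure) description: rho = |psi><psi| with psi = (|0> + |1>)/sqrt(2).
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho_sharp = np.outer(psi, psi.conj())

# Non-sharp (mixed) description: a 50/50 mixture of |0><0| and |1><1|.
rho_mixed = 0.5 * np.diag([1.0, 0.0]) + 0.5 * np.diag([0.0, 1.0])

for label, rho in [("sharp", rho_sharp), ("mixed", rho_mixed)]:
    print(f"{label}: <X> = {expect(rho, X):+.2f}, <Z> = {expect(rho, Z):+.2f}")
# sharp: <X> = +1, <Z> = 0    mixed: <X> = 0, <Z> = 0
# Same recipe Tr(rho X) either way; the two descriptions differ in what they predict.
[/code]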
 
  • #17
arkajad said:
It is a stochastic variable with its distribution determined by the quantum state and the detector.
It? You mean collapse is a variable?

An observable is a stochastic variable with distribution determined by the mode of system production and the detector i.e. what observable is to be measured. Is that what you mean?

You know quite well that, for instance, radioactive decay is a stochastic process.
Yes, as is excitation decay. But decay is not the same as collapse. You can express the wave function of a decaying atom + its decay product field and see the nice exponential "decay" in the probability amplitude of the atom being alone and existent vs the growth of the probability amplitude for the decay products to exist and the atom to be absent. All within a perfectly coherent wave-function. It is when you measure for the presence of a decay product (e.g. with a gamma detector) that you collapse this composite system description into one where the atom amplitude is 0. To make it easier, look at the system of a particle tunneling to a lower energy state for two square wells separated by a finite potential bridge (one square well deeper than the other). A perfectly coherent "decay" description exists without ever having to invoke a collapsing wave function.
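
For anyone who wants to see this numerically, here is a throwaway Python/NumPy toy (one level coupled weakly to a band of many final levels; the parameters are invented, not a real atom): the survival probability of the "undecayed" level falls off roughly exponentially under purely unitary Schrodinger evolution, with no collapse invoked anywhere.

[code]
import numpy as np
from scipy.linalg import expm

# One discrete "undecayed" level coupled weakly to a band of N "decay product" levels.
N = 400
band = np.linspace(-5.0, 5.0, N)        # energies of the quasi-continuum (arbitrary units)
g = 0.05                                # weak, uniform coupling

H = np.zeros((N + 1, N + 1))
H[1:, 1:] = np.diag(band)
H[0, 1:] = g
H[1:, 0] = g                            # the undecayed level sits at energy 0

psi0 = np.zeros(N + 1, dtype=complex)
psi0[0] = 1.0                           # start fully "undecayed"

for t in [0.0, 2.0, 4.0, 6.0, 8.0]:
    psi_t = expm(-1j * H * t) @ psi0    # exact unitary (fully coherent) evolution
    print(f"t = {t}: survival probability = {abs(psi_t[0])**2:.3f}")
# The amplitude leaks smoothly into the band (roughly exp(-Gamma t) over these times);
# no collapse appears anywhere in the calculation -- the wave function stays coherent.
[/code]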
The same with every other event process controlled by quantum phenomena except that stochastic processes involved are somewhat more complicated than a simple decay.
So again I ask. What is the meaning of a physical collapse as opposed to the CI version where one is only considering the paper wave-function which one updates upon a change in knowledge about the system? Where is the observable? It doesn't have to be a direct observable, it can be a quality derived from observables, just so long as you can point to a time and say "Look! Collapse hasn't happened yet" and "Look here! Collapse has occurred!" (and be pointing at the system and not at a piece of paper.)
 
  • #18
jambaugh said:
It? You mean collapse is a variable?

An observable is a stochastic variable with distribution determined by the mode of system production and the detector i.e. what observable is to be measured. Is that what you mean?
What I mean is that collapse is governed by a stochastic process. Much like radioactive decay but not that simple.

So again I ask. What is the meaning of a physical collapse as opposed to the CI version where one is only considering the paper wave-function which one updates upon a change in knowledge about the system? Where is the observable? It doesn't have to be a direct observable, it can be a quality derived from observables, just so long as you can point to a time and say "Look! Collapse hasn't happened yet" and "Look here! Collapse has occurred!" (and be pointing at the system and not at a piece of paper.)

An observable is what can be observed. For instance a dot on a screen. Then the question is what is the mechanism governing the appearance of such dots in time and space? Because they appear at a certain time and at a certain place. How are the time of appearance and the place of appearance decided? The answer is simple: by a specific stochastic process that is determined by both the wave function and the screen itself. You do not see the collapse, you do not see the wave function, but you can see the dot. Dots are your data. Wave functions and collapses are the auxiliary concepts that are needed in order to explain the emergence of these data.
 
  • #19
jambaugh said:
So, instead of "the density operator" call this expectation value functional the... expectation value functional if you wish. It is a necessary ingredient in any description of any system be it a singular instance or an ensemble or logically defined class. As it has a previously defined name can we agree to use it?

Being an expectation value functional, it is completely irrelevant for the description of an individual quantum system. And nowadays we experiment with individual quantum systems that are being continuously monitored in real time. The density matrix is of no use in such experiments - because it needs infinite ensembles of systems and not just one system. The density matrix is like a probability distribution over a statistical ensemble in classical statistical mechanics. In statistical mechanics we integrate over the phase space, in quantum mechanics we calculate traces. It is good for the statistical description of many-body systems or infinite ensembles of individual systems. Not for one planet or one electron. For one planet circling around the Sun and one electron leaving a track in a cloud chamber something deeper and better is needed.
 
  • #20
arkajad said:
What I mean is that collapse is governed by a stochastic process. Much like radioactive decay but not that simple.
Ok I follow you there, but you still avoid the foundational issue. We can speak of the mechanism of decay, and speak of decay times because decay is an observable process.
---"look, the atom is still there at "
---"look, 3 minutes and 18 seconds later my gamma detector went 'ping'!"
With decay you can see the event (or extrapolate back from the speed of the decay products) and you can then do statistics, establish a distribution on decay times, and measure a half-life and from that information postulate a mechanism for decay.

With collapse the physical process is that a measurement is made. You can model the measurement process, express the composite of measuring device and system in a larger context and let it evolve until the description is one of an array of recorded outcomes with corresponding probabilities. Note that you MUST use a density operator formulation here both because of the entanglement between system and measuring device and because the measurement process is thermodynamic in a fundamental way. That is where one sees decoherence. Now you still haven't collapsed the wave-function or rather the density operator until you make a specific assertion: that a specific outcome was made. Collapse is a conceptual process not a physical one and thus the "time it takes" is the time it takes one to think it or write it down.

And it should be apparent in this investigation of the measurement process that the things we are writing down are in the end a classical probability distribution and thus at the beginning also was a probability distribution (though by use of a more general method of representation not wholly classical.) It is a representation of our knowledge about how the system or meta-system may behave and not a representation of its physical state.

As far as the time for the decoherence implicit in the measurement process, that is arbitrary. We can make the same measurement (and represent it with the same operator) with many specific laboratory configurations provided each configuration ultimately records the same observable for the system being measured. The decoherence process could be set up to take microseconds or weeks, as we choose, and when and where the decoherence occurs is also relative to how we set up the meta-description of the system + measuring device, e.g. how far out we put our meta-system meta-episystem cut.

As far as measuring the system goes, the details of this meta-description are irrelevant. As far as trying to understand measurement and collapse in terms of a model of reality goes, one falls into an infinite regress of measuring devices to observe the measuring devices to observe the measuring devices et cetera ad infinitum.

It is like trying to speak of absolute position. Coordinates only give the position of a system relative to the observer. You can then try to speak of the observers position relative to another observer and you quickly see the futility of it and appreciate the fact that position is always relative. Not meaningless but like electrical potential only meaningful as a difference in values.

An observable is what can be observed. For instance a dot on a screen. Then the question is what is the mechanism governing the appearance of such dots in time and space? Because they appear at a certain time and at a certain place. How are the time of appearance and the place of appearance decided? The answer is simple: by a specific stochastic process that is determined by both the wave function and the screen itself.
Don't confuse the dynamics of that dot, e.g. the mechanism for electron evolution, with its collapse. Look at how you use the wave function in describing that dot. Or more precisely, since presumably you're speaking of a dot on a CRT screen, you have a thermal source (hot cathode), so you rather need to use a density matrix.
You do not see the collapse, you do not see the wave function, but you can see the dot. Dots are your data. Wave functions and collapses are the auxiliary concepts that are needed in order to explain the emergence of these data.

They are auxiliary concepts applying to the prediction of outcomes. They "explain" in so far as they do by expressing maximal prediction. Explanation as you seem to want it to mean would involve breaking the phenomenon down into component phenomena, e.g. fluorescence of the screen, emission of electrons from a hot cathode, propagation through the intermediate e-m field or array of slits and pinholes etc. But in the end each of these component processes must first and foremost be predictably described so that the reductive explanation of the dot makes sense. Then to further explain you must reduce these components. What comes first in this chicken-and-egg chase is key. Classically we stop at a large enough scale that we can refer to an idealization of state we call reality. Quantum mechanics begins with the measurement process as an irreducible phenomenon. As such we begin with prediction and predictive description and not with reality representation.

There is a good reason for this and it is that we can express the features of a reality representation within the scope of predictive description but not (as we see in QM) the reverse. Those features are specifically that of an underlying deterministic model. But falsify the hypothesis that such a model is possible and we still have our predictive description. The predictive language of QM is more general than the representative language of CM which is why it can express both quantum and classical phenomena and does so often at the same time as with the decoherence of the system+measuring device.

Within that predictive language the collapse of either psi or rho (on paper) is when you jump from:
"I'm considering a process by which my quantum propagates from a specific source, I use description rho (or psi). It propagates to a measuring device and according to theory the outcomes of my measurements will have the following probabilities P(X=1)=bla, P(X=2) = bla bla,"
to
"I'm now considering the case where we actually observe X=1 so let's update the description so we can calculate future probabilities!"

This is how the wave functions and density operators are actually used in practice, to represent classes of actualized quantum systems. All you have in the lab are a sequence of measurements or similar events, i.e. the recorded data. You cannot be like the zoologist and pull out the preserved specimen to show what you found and double check its features.

The wave-function and density operator both, are analogous to the zoologist's category of species, and not analogous to the DNA record of a single specimen.

When teaching my probability and statistics class I had the students guess the probability that at least two of the class (of 45) had birthdays on the same day. Then I showed them the calculation for that probability and it was much higher than most guessed (about 94%).
We then went around the room declaring birthdays and sure enough we actually had 2 pairs match up. I then asked the question again... what is the probability that at least two students have the same birthday. One said 90% then quickly corrected himself, 100%!
I asked them then was my calculation wrong? We haven't changed who is in the room?
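
(For anyone who wants to check that number, here is the calculation in a few lines of Python, under the usual idealization of 365 equally likely birthdays.)

[code]
# Probability that at least two of n people share a birthday.
n = 45
p_all_distinct = 1.0
for k in range(n):
    p_all_distinct *= (365 - k) / 365
print(f"P(at least one shared birthday among {n} students) = {1 - p_all_distinct:.3f}")
# prints about 0.941
[/code]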

I did this to emphasize to them the nature of logical classes as opposed to sets. Probabilistic statements are statements about classes of possible outcomes and thus classes of systems, not single instances. By calculating the probability for my room of students I was identifying them as an instance of a particular class and knowing that class I could express a prediction about the actual instance.

Wave-functions are expressions of how a given instance of a quantum system might behave given you know it to be a member of a specific class of systems via the fact that a particular measurement has been made and the value of that measurement is specified.

Thus for example a given momentum eigen-"state" and spin "state" for an electron expresses the class of electrons for which the specified values have been measured. In the momentum-spin representation you have a little Dirac delta function centered at the measured momentum in a tensor product with a specific spin "ket". You can write the "wave-function" in that you expand this Hilbert space vector in terms of components of position eigen-states, and that representation is useful in that it explicitly gives the (square roots of the) probabilities of subsequent position measurements. Theory tells us how P and X relate and thus that this representation is a sinusoidal curve with a specific wave-length (h/p).

But the electron is not a wave-function. The road is not a line on a map. It is an analogue and to understand the type of analogue you must look at how the map is used. In the road case the map is a direct analogue, a model of the reality of the road. In the wave-function case we look at what we do with the wave function. We use it to calculate probabilities for position measurements, it is a logical analogue not a physical one... or rather it is first and foremost a logical analogue.

You may assert it is also a physical one but you must prove your case for that. I assert that wave-function collapse is a specific indicator that it is not a physical analogue but purely a logical/predictive one since it is the logic of updating our class of systems that instigates our collapsing the wave-function on paper.

I understand the temptation to say "an electron is both wave and particle" but one is mapping the quantum electron's behavior into two distinctly classical phenomena, classical waves and classical particles. It is the spectrum of behaviors one is addressing and we see in this "either or" business the relativity of the actual classical representation. It is the necessary relativity of the "reality" one is trying to paint for the electron. The electron is not the sinusoidal wave nor the Dirac delta-function particle... it is a phenomenon of actualizable measurements which we can probabilistically predict using wave functions (and/or density operators) as representations of interrelated probabilities.
 
  • #21
jambaugh said:
Ok I follow you there, but you still avoid the foundational issue. We can speak of the mechanism of decay, and speak of decay times because decay is an observable process.

You can also observe the creation of dots, in real time (like in Tonomura's lab) or post factum.
---"look, the atom is still there at "
---"look, 3 minutes and 18 seconds later my gamma detector went 'ping'!"
With decay you can see the event (or extrapolate back from the speed of the decay products) and you can then do statistics, establish a distribution on decay times, and measure a half-life and from that information postulate a mechanism for decay.

I was in Tonomura's lab; I was watching the creation of dots from interference experiments.

With collapse the physical process is that a measurement is made. You can model the measurement process, express the composite of measuring device and system in a larger context and let it evolve until the description is one of an array of recorded outcomes with corresponding probabilities. Note that you MUST use a density operator formulation here both because of the entanglement between system and measuring device and because the measurement process is thermodynamic in a fundamental way.

Perhaps you MUST. I don't.

That is where one sees decoherence. Now you still haven't collapsed the wave-function or rather the density operator until you make a specific assertion: that a specific outcome was made. Collapse is a conceptual process not a physical one and thus the "time it takes" is the time it takes one to think it or write it down.

Well, you have your way of looking at things, I have my way.

And it should be apparent in this investigation of the measurement process that the things we are writing down are in the end a classical probability distribution and thus at the beginning also was a probability distribution (though by use of a more general method of representation not wholly classical.) It is a representation of our knowledge about how the system or meta-system may behave and not a representation of its physical state.

And our knowledge is a representation of the reality. So, what we are writing is a representation of the reality.

As far as the time for the decoherence implicit in the measurement process, that is arbitrary. We can make the same measurement (and represent it with the same operator) with many specific laboratory configurations provided each configuration ultimately records the same observable for the system being measured. The decoherence process could be set up to take microseconds or weeks, as we choose, and when and where the decoherence occurs is also relative to how we set up the meta-description of the system + measuring device, e.g. how far out we put our meta-system meta-episystem cut.

Someone would have to define decoherence precisely. I have never seen such a definition. It is always defined in such a way that you must become a believer and stop asking questions.

As far as measuring the system goes, the details of this meta-description are irrelevant. As far as trying to understand measurement and collapse in terms of a model of reality goes, one falls into an infinite regress of measuring devices to observe the measuring devices to observe the measuring devices et cetera ad infinitum.
I think the point is not to understand "collapse" but to understand the mechanism by which a single electron creates dots on the screen or in a cloud chamber in real time.

It is like trying to speak of absolute position. Coordinates only give the position of a system relative to the observer. You can then try to speak of the observers position relative to another observer and you quickly see the futility of it and appreciate the fact that position is always relative. Not meaningless but like electrical potential only meaningful as a difference in values.

Who needs an "observer"? Tracks are being formed without any observer. A photographic emulsion is needed, or something equivalent like a cloud chamber or electron microscope and CCD. Observers are just there to enjoy what has been created without their participation.
Don't confuse the dynamics of that dot, e.g. the mechanism for electron evolution, with its collapse. Look at how you use the wave function in describing that dot. Or more precisely, since presumably you're speaking of a dot on a CRT screen, you have a thermal source (hot cathode), so you rather need to use a density matrix.

I am not confusing them. Events happen, they are irreversible, and they are accompanied by wave function collapses. A finite number of them. They do occur in real time. This time and place depends on the wave function, external fields and whatever is causing the collapse - for instance the presence of the screen at a certain location. The density matrix is of no use for describing such a mechanism. I am talking about an individual process, perhaps one event that will never occur again. Of course you can always smear out such an event with additional uncertainties caused by noise, temperature etc. But they are just obscuring the phenomenon that takes place - creating a classically existing, real, dot on a real photographic plate. The dot and the plate are real. They are actualities and not only wavy possibilities. And what we need is a mechanism of how potentialities become actualities in real time.

But the electron is not a wave-function. The road is not a line on a map. It is an analogue and to understand the type of analogue you must look at how the map is used. In the road case the map is a direct analogue, a model of the reality of the road. In the wave-function case we look at what we do with the wave function. We use it to calculate probabilities for position measurements, it is a logical analogue not a physical one... or rather it is first and foremost a logical analogue.

Yes, but we can use them also to extract more than just probabilities. We can use them for the purpose of describing the track formation in real time. Except textbooks are not telling you how to do that.

You may assert it is also a physical one but you must prove your case for that. I assert that wave-function collapse is a specific indicator that it is not a physical analogue but purely a logical/predictive one since it is the logic of updating our class of systems that instigates our collapsing the wave-function on paper.

Well, that is your point of view, and I am not surprised.

I understand the temptation to say "an electron is both wave and particle" but one is mapping the quantum electron's behavior into two distinctly classical phenomena, classical waves and classical particles. It is the spectrum of behaviors one is addressing and we see in this "either or" business the relativity of the actual classical representation. It is the necessary relativity of the "reality" one is trying to paint for the electron. The electron is not the sinusoidal wave nor the Dirac delta-function particle... it is a phenomenon of actualizable measurements which we can probabilistically predict using wave functions (and/or density operators) as representations of interrelated probabilities.

One thing is sure: dots are dots, tables are tables, chairs are chairs, data are data. We need to explain the data and guess what is the mechanism of their formation and their organization.

Data are not wavy. They are classical and Boolean within a reasonable approximation. Their appearance, their characteristics are caused by something real and by something wavy. When you want to explain some statistical averages seen in the data - then the density matrix can be handy. But when you want to understand the mechanism of creation of a single finite data set - the density matrix is useless. It is much like with tossing a coin. You may be interested in finding whether the coin is biased or not by tossing the coin a large number of times, or you may want to know the mechanism by which such an average results - that is, you want to discover a stochastic process that reproduces these averages. Usually knowing the stochastic process lets you calculate more and predict more than just knowing some of the expectation values of some random variables.
 
  • #22
The text that you have entered is too long (22531 characters). Please shorten it to 20000 characters long.
OK Gotta trim it.
 
  • #23
Sorry if this sounds silly but:
what is the purpose of decoherence? Why would it make macroscopic objects behave classically if QM applies to the macroscopic level? I'm a bit confused...

I've read in different books it doesn't collapse the wavefunction, and was even told that even though some particle may look like it's not in a superposition, it still is - because, the way I see it, if a theory is meant to hold all the time, then no collapse occurs, because QM does not allow it.
 
  • #24
StevieTNZ said:
Sorry if this sounds silly but:
what is the purpose of decoherence?
Whose purpose? Purpose presupposes a holder of purpose.
Why would it make macroscopic objects behave classically if QM applies to the macroscopic level? I'm a bit confused...
It helps to first understand what it means to behave classically vs quantum mechanically and there are different ways to see this.

One could say that decoherence destroys interference effects, but that is not always true: a classical wave, e.g. radio waves from a transmitter, will interfere quite nicely.

I've read in different books it doesn't collapse the wavefunction, and was even told that even though some particle may look like its not in a superposition, it still is - because the way I see it is if a theory is meant to hold all the time, then no collapse occurs because QM does not allow it.

As I've come to understand QM, you shouldn't think of the collapse of the wavefunction as a physical process but a conceptual process we apply after the physical act of measurement when we update our information about the system. (Just as you update the value of a Lotto ticket after the drawing, or your suppositions of the likely location of your keys after you see them on the coffee table.)

Likewise superposition is not a physical property of the system but a property of how you are resolving the system in terms of potential observables. A vertically polarized photon "is not in a superposition" of Vert vs Horiz. modes but "is in a superposition" of left circular and right circular polarization modes. It is the modes (of measurement) which superpose not the photon.

Now to understand decoherence you have to go to a richer description using density operators. The sharp density operator of a system which can also be described as a wavefunction:
[tex]\rho = \psi\otimes \psi^\dag \simeq |\psi\rangle \langle \psi |[/tex]
will under decoherence become a "mixed state" density operator

[tex] \rho = p_1 \rho_1 + p_2 \rho_2 + \cdots[/tex]
where the p's are "classical" probabilities and the "rho's" are distinct sharp modes. (I think it is a mistake to distinguish classical vs quantum probabilities. Probabilities are probabilities. Rather one should distinguish a classical probability distribution from a quantum probability "distribution" since the latter is not a density of probabilities over the observable states.)

Its entropy has increased from 0 to some positive value.

So as a system decoheres its description looks more like a classical probability distribution over a set of objective states instead of as a quantum superposition.
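
Here is a small Python/NumPy illustration of that (toy damping factors and nothing more, just to show the trend): start with the sharp rho for psi = (|0>+|1>)/sqrt(2), suppress the off-diagonal terms by hand as decoherence would, and the von Neumann entropy climbs from 0 to log 2 as the description approaches a classical 50/50 mixture.

[code]
import numpy as np

def von_neumann_entropy(rho):
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log(evals)))

psi = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())            # sharp description: entropy 0

for lost in [0.0, 0.5, 0.9, 1.0]:          # fraction of the coherence leaked away
    rho_d = rho.copy()
    rho_d[0, 1] *= (1 - lost)              # decoherence suppresses the off-diagonal terms
    rho_d[1, 0] *= (1 - lost)
    print(f"coherence lost {lost:.0%}: entropy = {von_neumann_entropy(rho_d):.3f}")
# climbs from 0.000 to 0.693 = log 2, i.e. a classical-looking 50/50 mixture of the two modes
[/code]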

One of the principal subjects of interest for decoherence is in considering entangled pairs or other forms of correlated measurements. Decoherence eases the degree of correlation to something we can describe in terms of classical probability distributions.

Physical manifestations are the loss of superconductivity or superfluidity above the critical temperatures. The quantum correlations which make these effects manifest are lost due to too much random interaction with the environment.
 
  • #25
arkajad said:
What I mean is that collapse is governed by a stochastic process.

like CSL models.
 
  • #26
So, should we think of decoherence as being a mathematical abstraction rather than a physical process?
 
  • #27
jambaugh said:
As I've come to understand QM, you shouldn't think of the collapse of the wavefunction as a physical process but a conceptual process we apply after the physical act of measurement when we update our information about the system. (Just as you update the value of a Lotto ticket after the drawing, or your suppositions of the likely location of your keys after you see them on the coffee table.)

Likewise superposition is not a physical property of the system but a property of how you are resolving the system in terms of potential observables. A vertically polarized photon "is not in a superposition" of Vert vs Horiz. modes but "is in a superposition" of left circular and right circular polarization modes. It is the modes (of measurement) which superpose not the photon.

I'm not sure if I just didn't understand your meaning properly, but I don't quite agree with that description.

In my view, the superposition state is in fact the "real" state of the system as long as it's in it. For example, take the state |+> = |0> + |1>. If you measure in the computational basis you would find for example |1>, but this does not mean that the measurement is simply an update of information or that the state was in |1> all the time, like your keys on the table analogy suggests. In the key case they really were on the table all the time, even before the measurement, but in the |+> case this is not true because experiments done to the state before the collapse would yield quite different results between |+> and |1>, in particular measurements in the |+>,|-> basis would find the state |+> 100% of the time.

I tend to think of it more like asking a grey square whether it's black or white: you're bound to get a non-determined answer, and it's not just a matter of updating the information; the state after the collapse is actually different in a real and measurable way.
 
  • #28
7th bardo said:
So,should we think of decoherence as being a mathematical abstraction rather than a physical process?

No more so than we should think of entropy as a mathematical abstraction. Entropy has physical meaning but is not an observable of a system. It is rather a quantitative measure of our knowledge about a given system in so far as it is a property of a maximally restrictive class of systems to which we can say a given system belongs as an instance.

[By maximally restrictive, I mean we use all the existent knowledge about the system, not necessarily all simultaneously possible knowledge about the system. In short I'm not talking about necessarily sharp descriptions and in fact the lack of sharpness is what entropy is quantifying. One may refer to a sharp mode too as a maximally restricted class but in this case maximal in the sense of using all possible information not just what is actually known.]

Since decoherence involves an increase in entropy of a system it too is a description of a (maximally restrictive) system class associated with that system.

A class of systems is a mathematical abstraction with perfectly concrete physical meaning when the class is defined in terms of observables. E.g. the class of electrons (specifying mass and charge) for which the z component of spin has been measured at +1/2 and momentum at say some vector value p.

We express that class of systems by writing a wave-function (if it is sharply described as above) or a density operator (which is more general allowing for cases of non-zero entropy). In a laboratory we may instantiate that class (actualize an instance of an electron) which requires physical constraints and measurements.
 
  • #29
yoda jedi said:
like CSL models.

This is the simplest possibility, with an extremely simple stochastic process. I don't think it is general enough to describe all physical experiments that are being done in the labs. CSL is simple to explain, simple to apply, but it assumes one homogeneous mechanism for all collapses. This is not what we see looking at particle tracks. The collapses are evidently (at least for me) due to the presence of the detectors and there is no need for (and not much use in) collapsing the wave function in a vacuum.
 
  • #30
Zarqon said:
I'm not sure if I just didn't understand your meaning properly, but I don't quite agree with that description.

In my view, the superposition state is in fact the "real" state of the system as long as it's in it. For example, take the state |+> = |0> + |1>.
There are multiple issues here. Letting the "real" issue sit for the moment. The "state" |+> is not in a superposition w.r.t. the |+> vs |-> basis but of course is w.r.t. the |0> vs |1> basis. Hence superposition is not "a property of the system" in an absolute sense but rather a relationship between a given ket and our choice of basis.
If you measure in the computational basis you would find for example |1>,
You might so find. Prior to adding this additional physical assumption you only know the probabilities which is to say you don't know. It is when you actualize the assumption that you "collapse" your knowledge of how the system will subsequently behave. In this sense the quantum collapse is no different from the classical collapse in the case of the glasses...
but this does not mean that the measurement is simply an update of information or that the state was in |1> all the time, like your keys on the table analogy suggests.
The collapse component is simply an update of information. Since the subsequent measurement is not compatible with the implied previous measurement (|+> vs |->) you simultaneously lose any dependence on that previous measurement for future predictions.

Going back to the glasses analogy for a moment. If I last recall seeing my glasses in my car then my probability distribution for where I most likely will find them will take that into account. But once I see them on the coffee table that old assumption is removed.
In the key case they really were on the table all the time, even before the measurement,
Of course and this is where the "glasses" differ from the quantum system but it doesn't detract from the fact that my knowledge about where the glasses might be has been changed by my observing where they are.
but in the |+> case this is not true because experiments done to the state before the collapse would yield quite different results between |+> and |1>,
You can't have your cake and eat it too. Either you did measure |1> or you didn't. You can't go back in time and undo this. So you are talking cases and not a given system. Once you change the assumption that you did measure |1> vs |0> and that you observed the value |1> you are "uncollapsing" the wave-function... and so you have the prior prediction...
in particular measurements in the |+>,|-> basis would find the state |+> 100% of the time.

Consider it this way. Suppose you did make the [1] measurement but did so to a given system after I had measured it (but haven't yet told you what observable I measured nor what value I got.)

You would still write the |1> wave-function, even to describe the system prior to your measurement. If I then told you I measured a specific observable you would use that |1> wave function to predict the probability of the value I measured and finally if I said I measured |+> you would collapse the wave-function to |+> prior to my measurement to see what "alice" measured before me.

By reversing the sequence of assumptions made, I have totally changed where you write the |+> description and where you write the |1> description. Can you still then say these are states of reality? Or are they not truly representations of our knowledge about the system in question?
 
  • #31
Quantum mechanics doesn't state a collapse will occur - and if the theory holds then a collapse never occurs - correct? When we say the wavefunction has collapsed, it really hasn't?
 
  • #32
StevieTNZ said:
Quantum mechanics doesn't state a collapse will occur - and if the theory holds then a collapse never occurs - correct? When we say the wavefunction has collapsed, it really hasn't?

Quantum mechanics, when it was being conceived, was unsure about the collapse. Schrodinger himself was unsure. Then there came applications, and QM concentrated on applications that do not need collapse. The mechanism of forming tracks in cloud chambers was never explained by QM. The Mott problem (http://en.wikipedia.org/wiki/Mott_problem) discussed probabilities of different tracks but did not say anything about the mechanism itself or about the timing of the events. So physicists decided that one is not supposed to ask about "mechanisms". Why? Because no one (except Schrodinger, but who cares?) asks such questions.

The model of Belavkin and Melsheimer (http://arxiv.org/abs/quant-ph/0512192) is just one possibility, but it is not completely satisfactory. There are other options available. But this is not mainstream physics, so the territory is left to the "decoherence teams" - which form the mainstream approach these days.
 
  • #33
jambaugh said:
Consider it this way. Suppose you did make the [1] measurement but did so to a given system after I had measured it (but haven't yet told you what observable I measured nor what value I got.)

You would still write the |1> wave-function, even to describe the system prior to your measurement. If I then told you I measured a specific observable you would use that |1> wave function to predict the probability of the value I measured and finally if I said I measured |+> you would collapse the wave-function to |+> prior to my measurement to see what "alice" measured before me.

By reversing the sequence of assumptions made, I have totally changed where you write the |+> description and where you write the |1> description. Can you still then say these are states of reality? Or are they not truly representations of our knowledge about the system in question?

When I think of an example where you measure on the state without telling me I get the opposite conclusion, explained by the following:

Consider that I start with the state |+>. If I measure in the |+>,|-> basis I would now find the state |+> with 100% probability. Let's now consider what happens if you did a measurement in the |0>,|1> basis without telling me. You would "collapse" the state to one of them, let's just say it happened to be |1>.

Now, without you telling me anything, i.e. my knowledge about the system does not change, I now have a non-zero probability of measuring |-> (50%) if I again measure in my basis. The probability of measuring |-> has thus changed without my knowledge being changed at all.

I can only interpret this as the fact that the physical state has actually changed, which is completely different from any classical analogy, where no amount of information update can ever change the location of either keys or glasses.
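
To put numbers on exactly that sequence, here is a short Python sketch (just standard qubit linear algebra, nothing exotic): before your hidden measurement I would never find |->; after it, whether I condition on the |1> outcome or average over your two possible outcomes, I find |-> half the time.

[code]
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
plus = (ket0 + ket1) / np.sqrt(2)
minus = (ket0 - ket1) / np.sqrt(2)

prob = lambda outcome, state: abs(np.vdot(outcome, state)) ** 2

# I prepared |+>, so before your hidden measurement I never find |->:
print("P(-) before your measurement:", prob(minus, plus))               # 0.0

# You secretly measure in the 0/1 basis; suppose the outcome happened to be |1>:
print("P(-) given your outcome was |1>:", prob(minus, ket1))            # 0.5

# Even averaged over your two possible outcomes (I don't know which occurred):
p_avg = prob(ket0, plus) * prob(minus, ket0) + prob(ket1, plus) * prob(minus, ket1)
print("P(-) averaged over your outcomes:", p_avg)                       # 0.5
[/code]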
 
  • #34
StevieTNZ said:
Quantum mechanics doesn't state a collapse will occur - and if the theory holds then a collapse never occurs - correct? When we say the wavefunction has collapsed, it really hasn't?

Nonlinear quantum mechanics states that collapses occur by themselves.
 
  • #35
What is the difference between linear and nonlinear quantum mechanics? Which one is correct?
 
