Can a conscious observer collapse the probability wave?

  • #51
Meselwulf said:
Also, you want to talk about the quantum flip of a coin?

If you flip a coin 100 times, you create slightly over 10^{30} universes. This disturbed Hoyle, yet it is in fact part of the theory you seem not to be defending very well.

I don't want to talk about MW - nor am I defending it - it's pure hokum IMHO for all sorts of reasons. The huge number of universes it requires is one of those reasons - but it does not disprove it. I am simply pointing out it is an interpretation that no generally accepted refutation exists for and it does not require wavefunction collapse.

The coin analogy is also just that - an analogy. The Kochen-Specker theorem by itself proves an observation is not like a flip of a coin, in that the system does not have the property of heads or tails prior to observation. However, if you take decoherence into account you can say it has the property prior to observation - but that of course requires more work to understand.

Thanks
Bill
 
Last edited:
  • #52
Meselwulf said:
Wrong, we have observed the wave function, it is not a mathematical anomaly.

That would be an interesting trick - observing something in QM without requiring an observable. And once you do have an observable, Copenhagen, the Ensemble interpretation, and others (not all of course) say the only thing that can be predicted is probabilities, and it is the state that tells you those via the usual trace formula Tr(pR), which gives the average - p the state, R the observable.
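For anyone who wants to see the trace formula in action, here is a minimal numerical sketch; the qubit state and the choice of Pauli-z as the observable R are purely illustrative assumptions, not anything taken from the experiments discussed in this thread.

```python
# Minimal sketch of the trace formula E(R) = Tr(pR) for a single qubit.
# The state and observable below are illustrative choices only.
import numpy as np

# Observable R: Pauli-z, with eigenvalues +1 and -1 as the possible outcomes
R = np.array([[1, 0],
              [0, -1]], dtype=complex)

# Pure state |psi> = cos(t)|0> + sin(t)|1>, written as the density operator p = |psi><psi|
t = np.pi / 8
psi = np.array([np.cos(t), np.sin(t)], dtype=complex)
p = np.outer(psi, psi.conj())

# Tr(pR) gives the average of many measurements of R on systems prepared in p
average = np.trace(p @ R).real
print(average)           # 0.7071... which equals cos(2t) for this state
print(np.cos(2 * t))     # same number, as expected
```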

Thanks
Bill
 
  • #53
bhobba said:
I think a number of people such as Ballentine would disagree. See chapter 9 - Ballentine - Quantum Mechanics - A Modern Development - The Interpretation Of The State Vector - page 239 - where he proves any other view leads to problems. Even bog standard Copenhagen disagrees.

My view is not that fatalistic in that I think a view of a state vector as real can be part of a valid interpretation but it is far from certain such must be. In fact most interpretations like the Ensemble Interpretation or Copenhagen don't buy into its reality. My personal interpretation - being the Ensemble interpretation combined with Decoherence - doesn't either.

Thanks
Bill


To be honest, if he disagrees, he is disagreeing with proven experimental fact. So you can continue to believe in what he says, but he has been proven wrong in his speculations.

Quantum wave functions of semi-classical objects have been demonstrated and observed. Please, look up the ''Quantum Resonator.''
 
  • #54
bhobba said:
That would be an interesting trick - observing something in QM without requiring an observable. And once you do have an observable, Copenhagen, the Ensemble interpretation, and others (not all of course) say the only thing that can be predicted is probabilities, and it is the state that tells you those via the usual trace formula Tr(pR), which gives the average - p the state, R the observable.

Thanks
Bill

The definition of observing something requires there being an observable. There cannot be a logical dispute about that!
 
  • #55
Meselwulf said:
The definition of observing something requires there being an observable. There cannot be a logical dispute about that!

Then how do you know via observation a state is real?

Thanks
Bill
 
  • #56
bhobba said:
Then how do you know via observation a state is real?

Thanks
Bill

Is that a real question? The answer is self-explanatory: if you see it, and it holds up under experimental testing over and over again, why would one not think it is real?

This is not a rhetorical question. It's a matter of fact. Scientists have a certain proclivity to understanding how real things exist. Observables, for instance, are represented by Hermitian matrices; spin is an example of such a phenomenon. If spin were not real, we would not be able to measure it and know it was a real artefact of the world.
 
  • #57
Meselwulf said:
To be honest, if he disagrees, he is disagreeing with proven experimental fact. So you can continue to believe in what he says, but he has been proven wrong in his speculations. Quantum wave functions of semi-classical objects have been demonstrated and observed. Please, look up the ''Quantum Resonator.''

I believe his argument because it is very good - you should acquaint yourself with it. I do believe there are a number of ways to evade it such as MW but they all seem a bit contrived to me. However to each their own - if you want to believe the state is real - feel free - but just don't say it must be so because quite simply QM does not demand that view - in fact most interpretations I am aware of - the generally trotted out Copenhagen among them - deny it.

Thanks
Bill
 
Last edited:
  • #58
Meselwulf said:
Is that a real question? The answer is self-explanatory: if you see it, and it holds up under experimental testing over and over again, why would one not think it is real?

This is bog standard basic QM. If you see it you are observing it and hence are subject to the wave function collapse issue and all the other quantum weirdness. If you are not observing it, all you can say is it is in a certain state and, via the usual trace formula, predict probabilities if you were to observe it. The issue here is if the state is real like an electric field or simply a theoretical device - many - probably even most interpretations - do not require its reality.

Thanks
Bill
 
  • #59
bhobba said:
I believe his argument because it is very good - you should acquaint yourself with it. I do believe there are a number of ways to evade it such as MW but they all seem a bit contrived to me. However to each their own - if you want to believe the state is real - feel free - but just don't say it must be so because quite simply QM does not demand that view - in fact most interpretations I am aware of - the generally trotted out Copenhagen among them - deny it.

Thanks
Bill

The Many Worlds interpretation is not sensible for a number of reasons. I won't go over them all, but as regards the greatest problem concerning it, it can surely be classed in the same league as string theory, M-theory, or whatever you wish to call the five-model string theory.

It's in the same league because there is actually no way of experimentally proving it - the universe is what we call ''intrinsically closed'' or ''self contained'' - usually the latter is used in cosmological terminology. This means anything which happens in any one universe must stay within that universe, and whilst it may only seem like a conjecture, it is pivotal that things do not leak between universes because information, just like the matter and energy a black hole swallows, can never be truly lost.

The basic reason for scientists questioning the possibility of MWI is purely the question ''why do many probabilities show up when clearly only one state is ever observed?'' Everett III then asked, well, what if the universe has a wave function itself, which led him to the idea that maybe all the wave functions in the dynamic universe were determined by playing out the possible events in other universes.

Aside from the problems which are not even sensible (like tossing a coin 100 times and finding you create a staggering amount of universes by a series of splittings and mergings off our own), what it implies is also not sensible because it cannot be manifestly physical. The reason is that it actually requires an infinite amount of universes, and infinity doesn't exist in closed universes; in closed universes, everything is finite. So by this reasoning, there can be nothing isomorphic to our universe, and besides, our universe has a certain proclivity to abhor infinities in general.

Copenhagen, however, has been a true success, from decoherence to the observed collapsing of the wave function to the uncertainty principle, which is a cornerstone of the Copenhagen Interpretation. Off the top of my head, hardly any faults even exist for this Interpretation, but you don't like it and your reasons seem aloof to me.
 
  • #60
bhobba said:
This is bog standard basic QM. If you see it you are observing it and hence are subject to the wave function collapse issue and all the other quantum weirdness. If you are not observing it, all you can say is it is in a certain state and, via the usual trace formula, predict probabilities if you were to observe it. The issue here is if the state is real like an electric field or simply a theoretical device - many - probably even most interpretations - do not require its reality.

Thanks
Bill

When you are not observing it, don't you mean that its location is uncertain?

And I don't agree with this:

''The issue here is if the state is real like an electric field or simply a theoretical device - many - probably even most interpretations - do not require its reality.''

It makes no sense. If it is real, then it's real. There can be no question about it, and if you are saying it is real by interpretation, I am not quite sure what is truly meant by that. Interpretations make assertions on what can be measured. If the theory does not match what is measured, either it needs to be adjusted, or scrapped.
 
  • #61
Meselwulf said:
If it is real, then it's real. There can be no question about it

It's not that simple. Many many people, Einstein, Bohr, Feynman, Dirac, all sorts of people have debated it and no conclusion has ever been reached. Since QM is a theory about observations, when you are not observing it you can, if you wish, not ascribe any definite property out there to it. The state you think is real simply tells us the probability of the outcome of an observation - nothing more. Unless that outcome is a dead cert (and in the vast majority of cases it isn't) then you can't say it has that property. In principle you can come up with an observation that determines with certainty a pure state and in that sense you might think it real - but then there are so called mixed states that are not like that.

Now since it does not tell us anything between observations, it's an open question if it has any real property until you observe it. You can think of the state as real if you like - and there is no way to prove you incorrect - or correct for that matter - but if you do, then since you think the state is real you need to explain how something real spontaneously changes to something else - or do you believe nature is simply like that? Although I generally don't like providing the answers to positions I do not agree with, it is possible decoherence could do that - but you need to spell it out.

Thanks
Bill
 
  • #62
Meselwulf said:
Interpretations make assertions on what can be measured. If the theory does not match what is measured, either it needs to be adjusted, or scrapped.
To expand on what bhobba said, I believe you have the meaning of "interpretation" confused with the meaning of a "theory"-- theories make assertions on what can be measured, regardless of interpretation. Interpretations make assertions that cannot be measured or tested in any way-- they make assertions about what is "real", or in some cases, about what is not "real".

The way I like to think about all this is that reality includes whatever apparatus is in place to establish what the reality is. That can be a conscious observer, or other things that play the same role, but it has to be something. A reality that is absent of any apparatus to establish what is real is no kind of reality at all, and we constantly have to use interpretations to "connect the dots" between the elements of the situation that are actually established as real. You can see why what philosophers label "realism" I regard as "unrealism."
 
  • #63
I see that theories make assertions, but at the same time, one has to make assertions to manifest a theory. Obviously, these assertions and theories are coupled to the direction of the most accurate model in physics at the time. To this day, Copenhagen manages to satisfy both to a much greater degree than MWI.

As I said, it is in the same league as string theory based models. There is no experimental proof, only assumptions based on our mathematics.
 
  • #64
bhobba said:
It's not that simple. Many many people, Einstein, Bohr, Feynman, Dirac, all sorts of people have debated it and no conclusion has ever been reached.
Bill


I can assure you that since the Einstein-Bohr debates many things have been resolved. Today our saving grace is that we have experimental evidence that the wave function is in fact real.

I have asked you three times now to look up ''quantum resonator''; it is irrefutable proof that it is actually a physical manifestation.
 
  • #65
Meselwulf said:
I can assure you that since the Einstein-Bohr debates many things have been resolved. Today our saving grace is that we have experimental evidence that the wave function is in fact real.

It is not so easy; read:
(I think it is epistemic, i.e. a knowledge representation of reality, not the reality itself.)
The quantum state can be interpreted statistically, again
http://physics.stackexchange.com/qu...e-interpreted-statistically-again/36390#36390

Is the wave function, an unreal tool, to partially model a real interaction?
https://www.physicsforums.com/showthread.php?t=619851

The quantum state cannot be interpreted statistically?
https://www.physicsforums.com/showthread.php?t=551554
 
Last edited:
  • #66
Meselwulf said:
I can assure you that since the Einstein-Bohr debates many things have been resolved. Today our saving grace is that we have experimental evidence that the wave function is in fact real. I have asked you three times now to look up ''quantum resonator''; it is irrefutable proof that it is actually a physical manifestation.

I will look it up but please answer me a simple question. Given the mixed state 1/2 |a><a| + 1/2 |b><b| what is the corresponding observable that will tell us it is in that state? And if you can't come up with one, why do you think it's real?
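To make the difficulty concrete, here is a small numerical sketch (the basis vectors |a> and |b> are arbitrary stand-ins): two entirely different ensembles give exactly the same density matrix, so no observable's statistics can single out which ensemble, or which 'real' state, you supposedly have.

```python
# Two different ensembles that both yield the mixed state 1/2 |a><a| + 1/2 |b><b|.
# |a>, |b> form an orthonormal basis; |+>, |-> are their equal superpositions.
import numpy as np

a = np.array([1, 0], dtype=complex)
b = np.array([0, 1], dtype=complex)
plus = (a + b) / np.sqrt(2)
minus = (a - b) / np.sqrt(2)

proj = lambda v: np.outer(v, v.conj())

rho_ab = 0.5 * proj(a) + 0.5 * proj(b)          # 50/50 mixture of |a> and |b>
rho_pm = 0.5 * proj(plus) + 0.5 * proj(minus)   # 50/50 mixture of |+> and |->

print(np.allclose(rho_ab, rho_pm))   # True: identical density matrices

# All predictions come from Tr(rho R), so every observable R gives the same
# statistics for both preparations, e.g. the projector onto |a>:
R = proj(a)
print(np.trace(rho_ab @ R).real, np.trace(rho_pm @ R).real)   # 0.5 0.5
```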

Added Later:

Looked it up - could not find any article using it as evidence a state is real. Exactly why do you believe it proves it?

Thanks
Bill
 
Last edited:
  • #67
edit/
 
  • #68
The wave function is physical. Some authors will say a thing doesn't exist until an observation is made on a system, but if the wave function is real, which experimentation seems to suggest, then observation is really not needed to explain this, other than for finding an object in a specific place.
 
  • #69
Zmunkz said:
Some of this conversation is suffering from a terminological problem. To be clear, we don't actually know if there is a "wave collapse" in reality... Mr. Schroedinger's equation is rather explicit that no wave can evolve from a standard superposition into a collapsed spike. The concept of wave collapse was merely an instrumentalist remedy hand-wavingly introduced by Niels Bohr, but it is not clear exactly how or even if that "event" translates into reality. Considering that the collapse could easily be an instrumental concept rather than a realist one, it's going to be hard to settle in a concrete way what "causes" the collapse. Such are the mysteries of quantum mechanics :-)
I agree that it is a terminology problem. However, I think that the terminology is slightly clearer in terms of coherency theory than in the Copenhagen interpretation or the many-worlds interpretation. In terms of coherency, "wave collapse" and "observation" are defined in a very general way. I have in mind a close analogy in terms of synchronously pulsed lasers.
"Wave collapse" isn't much different in my mind from "mode locking". Pulsed lasers can produce wave packets that are less than a picosecond in duration. A synchronously pulsed laser has some property that is modulated with a period equal to the round trip time of the laser cavity. By clipping the tail of the pulse, the wave packet becomes very narrow.
The "observation" in coherency theory is merely the interaction of the measuring device wave with the "system wave". This appears to me very similar to synchronous mode locking. The "system wave" collapses into a wave packet, just due to the interaction. You can't predict exactly when the wave will collapse into a wave packet in a synchronously pulsed laser because the initial wave has an unknown phase. I think this is analogous to the inability to predict the position of the particle after the collapse of the wave function.
"Observation" is a poor word since it implies that there has to be conscious acknowledgment of the results of the interaction. The "observation" is merely a type of nonlinear interaction. Furthermore, "collapse" is a poor word since it implies that the system is no longer a wave after the nonlinear interaction. In actuality, what is left after an observation is a localized wave packet. By Ehrenfests theorem, the wave packet behaves approximately like a classical particle. However, the wave packet will start to disperse soon after forming.
The measuring instrument is never 100% classical in behavior. The Copenhagen interpretation implies that the measuring instrument is somehow behaving like it is made of particles (always) while the system behaves like a wave (until the interaction).
This duality is the source of the logical problem. Our intuition says that everything acts like it is made of classical particles.
Reality says that objects sometimes act like waves and sometimes like particles. This is a problem with intuition, but it is not a problem of logic. The Copenhagen interpretation gives rules that inform us when the system acts like a wave and when it acts like a particle. The logical problems come about when the rules are not self consistent. If the rules were 100% self consistent, there would never be a logical problem. I don't know if one can say that the rules are 100% self consistent, but the percentage is high.
Coherency theory says that everything acts like a wave, but particle properties "emerge" from the wave properties. So as long as the rules regarding waves are self consistent, the theory is probable. This may be an intuitive problem. However, it is not a logical problem.
I think one of the necessary conditions for a wave being a "measuring instrument" is that it is complex. The wave that is the "classical system" has to have many degrees of freedom. Obviously, our brains fulfill that condition in excess. So the Copenhagen interpretation may be based on a half truth. The system being examined is interacting with a complex system, which is interacting with complex sensors, which is interacting with complex nerve endings, which is interacting with complex nerves, which is interacting with a complex brain. There is a time delay between each interaction, since the nonlinear interaction has a response time. By the time the chain of interaction has reached from the examined system to the brain, the system has already interacted with a lot of complex systems. So by the time the system interacts with the brain, the wave function of the system has narrowed into a wave packet.
The interaction with our consciousness may be just a milestone rather than a fundamental condition. The real "observation" occurs immediately after the first complex system has caused a wave packet to appear. However, the brain isn't aware of it at that femtosecond. However, subsequent interactions with complex systems narrow the "wave packet" even further. By the time our brain interacts with the system, the "wave packet" is really narrow. So at that point, the system can be considered "classical".
This is just my interpretation. I will now look up some articles on synchronous mode locking to support this conjecture.
Here they are. I edited this message in order to add these references. Do the email notifications include later editing?
First, some articles on mode locking laser beams.
http://en.wikipedia.org/wiki/Mode-locking
“Mode-locking is a technique in optics by which a laser can be made to produce pulses of light of extremely short duration, on the order of picoseconds (10^-12 s) or femtoseconds (10^-15 s).
The basis of the technique is to induce a fixed phase relationship between the modes of the laser's resonant cavity. The laser is then said to be phase-locked or mode-locked. Interference between these modes causes the laser light to be produced as a train of pulses. Depending on the properties of the laser, these pulses may be of extremely brief duration, as short as a few femtoseconds.

This process can also be considered in the time domain. The amplitude modulator acts as a weak shutter to the light bouncing between the mirrors of the cavity, attenuating the light when it is "closed", and letting it through when it is "open". If the modulation rate f is synchronised to the cavity round-trip time τ, then a single pulse of light will bounce back and forth in the cavity. The actual strength of the modulation does not have to be large; a modulator that attenuates 1% of the light when "closed" will mode-lock a laser, since the same part of the light is repeatedly attenuated as it traverses the cavity.”

http://www.dmphotonics.com/Autocorrelator/ultrafast.pdf
“Now consider the form of the wave packet output of a mode locked laser.” Second, these articles describe mode locking effects in systems that aren't laser beams.
http://arxiv.org/pdf/cond-mat/0106423.pdf
“In this paper, it is shown that a configuration modulated system described by the
Frenkel-Kontorova model can be locked at an incommensurate phase when the quantum zero point energy is taken into account.”

http://pre.aps.org/abstract/PRE/v75/i3/e036208
“Mode locking of a driven Bose-Einstein condensate”
 
Last edited:
  • #70
There is no mystery with the Cat Experiment now. Large systems are free from quantum effects. The cat will be dead if the counter releases the gas, not a mixture of dead and alive. A cat is a system affected by its large size; quantum effects simply don't affect it strongly enough to take hold of the inevitable.
 
  • #71
The basic question is simple: does a truly isolated system, regardless of size, really evolve via the Schroedinger equation, or doesn't it? There is no way out-- this question must be answered, and it makes no difference if one takes an instrumentalist or realist view, the question persists.
 
  • #72
The answer is yes - the total system - environment, system being measured, and measuring apparatus - does evolve by the Schrodinger equation. However, via decoherence, phase leaks to the environment, transforming the pure state into a mixed state. The mixed state can be interpreted as the system being in an eigenstate of the measurement apparatus - but only probabilities can be assigned - we do not know which one. The arbitrariness of the pure states a mixed state can be decomposed into is removed by the definiteness of the possible states of the measurement apparatus.
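A toy numerical sketch of that step may help (the 'copying' of the system basis into a few environment qubits is an illustrative assumption, not a model of any real apparatus): entangle a qubit in superposition with an environment and trace the environment out, and the reduced state comes out with its off-diagonal coherences gone.

```python
# Toy decoherence sketch: a system qubit in superposition becomes correlated with
# n environment qubits (each records the system's basis state), and the environment
# is then traced out.  The reduced density matrix of the system alone has lost its
# off-diagonal terms, i.e. the globally pure state looks like a mixed state locally.
import numpy as np

n_env = 3
alpha, beta = 1 / np.sqrt(2), 1 / np.sqrt(2)

# Total state: alpha |0>|00...0> + beta |1>|11...1>  (system index first, then environment)
dim_env = 2 ** n_env
state = np.zeros(2 * dim_env, dtype=complex)
state[0] = alpha                        # |0> (x) |00...0>
state[dim_env + dim_env - 1] = beta     # |1> (x) |11...1>

# Full density matrix, reshaped so the partial trace over the environment is one sum
rho_full = np.outer(state, state.conj()).reshape(2, dim_env, 2, dim_env)
rho_sys = np.einsum('iaja->ij', rho_full)   # sum over the (equal) environment indices

print(np.round(rho_sys.real, 3))
# [[0.5 0. ]
#  [0.  0.5]]  -> the coherences have vanished; locally this is the mixed state I/2
```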

http://arxiv.org/pdf/quant-ph/0312059v4.pdf
The reduced density matrix looks like a mixed state density matrix because, if one actually measured an observable of the system, one would expect to get a definite outcome with a certain probability; in terms of measurement statistics, this is equivalent to the situation in which the system is in one of the states from the set of possible outcomes from the beginning, that is, before the measurement. As Pessoa (1998, p. 432) puts it, “taking a partial trace amounts to the statistical version of the projection postulate.”

This does not resolve the measurement problem because it does not explain how a particular outcome is selected. But for all practical purposes it does, because there is no way to observationally distinguish the two situations - one where the system really is in a definite state and we simply do not know which one, and one where it is not in any definite state and all that can be predicted is probabilities.

Regarding the reality of a system state, it is not in general possible to have an observable that tells what state a system is in - for pure states you can, but for mixed states you can't. This suggests to me it's like probabilities - not something that is real but rather a codification of knowledge about the system. It does not prove it is not real either - it simply seems more reasonable not to assume it - but opinions are like bums - everyone has one - it does not make it right. The problem of a real system state collapsing via measurement is solved by decoherence.

Thanks
Bill
 
  • #73
Ken G said:
The basic question is simple: does a truly isolated system, regardless of size, really evolve via the Schroedinger equation, or doesn't it? There is no way out-- this question must be answered, and it makes no difference if one takes an instrumentalist or realist view, the question persists.

Is this really controversial? That an isolated system evolves according to the SE has been the implicit assumption in a vast number of models and agrees with every experiment I know of; the better you isolate your system the more it behaves like an ideal QM system.

Furthermore, nowadays we've reached a point where when a system does NOT evolve according to the SE we often know why, i.e. we understand the interactions with the environment quite well (which doesn't necessarily mean that we know how to reduce them). It is this understanding which has allowed us to e.g. push the coherence time of solid state qubits from tens of nanoseconds ten years ago, to hundreds of microseconds today.
 
  • #74
f95toli said:
Is this really controversial? That an isolated system evolves according to the SE has been the implicit assumption in a vast number of models and agrees with every experiment I know of; the better you isolate your system the more it behaves like an ideal QM system. Furthermore, nowadays we've reached a point where when a system does NOT evolve according to the SE we often know why, i.e. we understand the interactions with the environment quite well (which doesn't necessarily mean that we know how to reduce them). It is this understanding which has allowed us to e.g. push the coherence time of solid state qubits from tens of nanoseconds ten years ago, to hundreds of microseconds today.

Exactly. To me it's simply the modern view as detailed in the paper by Schlosshauer I linked to. Really scratching my head why it's not more or less the generally accepted wisdom and why discussions still go on about it. One almost gets the feeling some want it to be more complicated than it really is.

I have read that when Wigner first heard, from some early papers by Zurek, how decoherence addresses the measurement problem, he recognised immediately that it removed the necessity for ideas he was partial to, like consciousness causing collapse. Since then we have deepened our understanding but the basic message seems to be the same - the measurement problem has now largely been solved. Issues do remain and research seems ongoing but as far as I can see the more 'mystical' ideas such as consciousness causing collapse no longer have traction.

Thanks
Bill
 
  • #75
Decoherence does not solve the measurement problem. It neither solves the reduction to a single observed state nor does it explain the Born rule. Claims that it does are based on a misinterpretation of the meaning of the density operator constructed by tracing over the environment.

See my blog at http://aquantumoftheory.wordpress.com for how the measurement problem can be approached in a more coherent way.
 
  • #76
Jazzdude said:
Decoherence does not solve the measurement problem. It neither solves the reduction to a single observed state nor does it explain the Born rule. Claims that it does are based on a misinterpretation of the meaning of the density operator constructed by tracing over the environment.

I don't think it explains the Born rule - but I believe Gleason's theorem does - unless you really want to embrace contextuality. I have been carefully studying Schlosshauer's book on decoherence and feel confident the quote I gave is correct. If not there has been some hard to spot error a lot of people missed - possible of course - but that would not be my initial reaction. It most certainly does not explain how a particular state is singled out but it does explain how it is in an eigenstate prior to observation.

Mind giving us a cut down version of exactly where the math of tracing over the environment fails?

Thanks
Bill
 
Last edited:
  • #77
bhobba said:
I will look it up but please answer me a simple question. Given the mixed state 1/2 |a><a| + 1/2 |b><b| what is the corresponding observable that will tell us it is in that state? And if you can't come up with one, why do you think it's real?

Added Later:

Looked it up - could not find any article using it as evidence a state is real. Exactly why do you believe it proves it?

Thanks
Bill
What you claimed was a “mixed state” is really a projection operator. Projection operators aren't states at all. Projection operators can be described by "defective" matrices and states can be described as vectors. By defective, I mean that the projection operator doesn't have as many linearly independent eigenvectors as it has eigenvalues. In any case, what you wrote can't be a state. I think that I know what you meant, though.
I assume that what you meant is the two photon state “|a>|a>+|b>|b>” which isn’t a mixed state either. However, it is at least a state. I think the question that you were trying to ask is what corresponding observable will tell us if two particles are actually in that entangled state.
If this is what you are asking, then you really want to know how to construct a Bell state analyzer. I will address that question. If I misunderstood your question, then no harm done.
The expression that you intended to write describes a two boson entangled state where two bosons are in the same single photon state. There are at least three other entangled states with different expressions. These are called boson Bell states. Hypothetically, one can determine whether a two photon state is in one of the Bell states or in a mixed state.
For completeness, I will write down the four Bell states. This way, we can discuss the experiments easier.
The letter “a” will represent the horizontal polarization vector and “b” will represent the vertical polarization vector.
A=|a>|a>
B=|b>|b>
C=|a>|a>+|b>|b>
D=|a>|b>+|b>|a>
These are called the Bell states. The Bell state that you presented is C.
One can build an environment where the four states are separately stationary. Stationary means the probability of being in this state is independent of time and trial number. A mixed state would not be stationary. The probability of paired photons being in any one of the four states changes with time in a mixed state.
The precise definition of horizontal and vertical varies with the geometry of the measuring instrument. However, given an ideal apparatus these states are unambiguous. A mixed state with two bosons would be a superposition of at least two of these four states.
Any two photon quantum state can be expressed as,
E=wA+xB+yC+zD.
Determining w, x, y and z would involve making coincidence measurements with polarizers and mirrors. If any one of these parameters equals 1, and the others 0, then E is identified with one of those states. The more two photon coincidences detected, the greater the precision of the measured parameters. A mixed state would involve any of these four parameters being between 0 and 1, noninclusive. I will give some references concerning the experimental determination of the state of a two photon system. Some of the articles provide a schematic of the apparatus they used. The experimental protocol is also described.
A Bell state analyzer is a device for determining the state of a two photon system. Descriptions of the apparatus are shown in each article. Diagrams of the apparatus are shown in the next two articles.
http://arxiv.org/pdf/quant-ph/0410244v2.pdf
“Experimental Realization of a Photonic Bell-State Analyzer
Efficient teleportation is a crucial step for quantum computation and quantum networking. In the case of qubits, four different entangled Bell states have to be distinguished. We have realized a probabilistic, but in principle deterministic, Bellstate analyzer for two photonic quantum bits by the use of a non-destructive controlled-NOT (CNOT) gate based on entirely linear optical elements. This gate was capable of distinguishing between all of the Bell states with higher than 75% fidelity without any noise substraction due to utilizing quantum interference effects.”

http://www.univie.ac.at/qfp/publications3/pdffiles/1996-04.pdf
“We present the experimental demonstration of a Bell-state analyzer employing two-photon interference effects. Photon pairs produced by parametric down-conversion allowed us to generate momentum-entangled Bell states and to demonstrate the properties of this device. The performance obtained indicates its readiness for use with quantum communication schemes and in experiments on the foundations of quantum mechanics.”

Here is some theory. By theory, I mean a hypothetical description of the experiment.
http://en.wikipedia.org/wiki/Bell_test_experiments
“Bell test experiments or Bell's inequality experiments are designed to demonstrate the real world existence of certain theoretical consequences of the phenomenon of entanglement in quantum mechanics which could not possibly occur according to a classical picture of the world, characterised by the notion of local realism. Under local realism, correlations between outcomes of different measurements performed on separated physical systems have to satisfy certain constraints, called Bell inequalities.”

A theoretical discussion on the Bell states is given here.
http://en.wikipedia.org/wiki/Bell_state
“The Bell states are a concept in quantum information science and represent the simplest possible examples of entanglement. They are named after John S. Bell, as they are the subject of his famous Bell inequality.”
 
Last edited:
  • #78
Ken G said:
The basic question is simple: does a truly isolated system, regardless of size, really evolve via the Schroedinger equation, or doesn't it? There is no way out-- this question must be answered, and it makes no difference if one takes an instrumentalist or realist view, the question persists.
According to decoherence theory, the isolated system containing the environmental system and the probed system really does evolve by the Schroedinger equation. The "randomness" of the measured results corresponds to unknown phases in the environmental system. There is an assumption here that there are far more unknown phases in the environmental system than in the measured system. Thus, the environment is considered complex.
One question that I haven't entirely satisfied in my own mind is why you can't consider the unknown phases as "hidden variables". The answer, to the degree that I understand it, is that the unknown phases in the decoherence model do not have the properties of a "hidden variables" defined in Bell's Theorem. When Bell proved that "hidden variables" do not explain quantum mechanics, he carefully defined "hidden variable" in a mathematically formal way. However, the phases of the waves in decoherence theory are "variables" and they are "hidden" in the broadest meaning of the words.
I am not sure, so I would like someone else to comment. Maybe somebody could answer my questions.
1) Why can't the unknown phases in the environment of the probed system be considered "hidden variables"?
2) Why isn't "decoherence theory" ever called a "hidden variable" theory?
 
  • #79
Darwin123 said:
What you claimed was a “mixed state” is really a projection operator.
No, because squaring it doesn't yield the same operator. It is a weighted sum of projection operators, which is a special form of a mixed state operator. You don't seem to be familiar with the density matrix formalism which is essential for talking about decoherence in modern terms.

Darwin123 said:
Why isn't "decoherence theory" ever called a "hidden variable" theory?
Because the environment isn't in a pure state either, and knowing its state doesn't help you to further specify the state of the system.
 
Last edited:
  • #80
We need to be a bit careful about when we talk about "decoherence theory". It is important to understand that this is NOT an interpretation (although elements of it can of course be used to formulate interpretations if you are interested, which I am not)
Hence, I don't think anyone claims that it solves all philosophical problems with QM. However, what it DOES do is to give us quantitative ways of modelling decoherence of quantum systems.
Or, in other words, its predictions match experimental data.

Everyone I know who tries to make their systems behave "more quantum mechanically" (i.e. increase coherence times and so on), myself included, has as a working assumption that the reason we can't keep the cat half-dead (so to speak) for as long as we want is that we are not yet good enough at controlling the interaction with the environment.
Furthermore, the fact that we often model this (say using a Lindbladian) with some phenomenological times T1, T2, T2* etc does not mean that we are appealing to some unknown mechanisms. We quite often have quite a good idea of what is limiting us: in solid state systems it would be e.g. two-level fluctuators in the substrate, itinerant photons because of inadequate filtering etc; in ion-microtraps heating of the metallization etc.
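As a minimal sketch of what that kind of phenomenological modelling looks like (the T2 value and the time grid below are made-up illustrative numbers, not data from any device): under a pure-dephasing Lindblad term the qubit's off-diagonal element just decays as exp(-t/T2) while the populations stay put.

```python
# Sketch of phenomenological pure dephasing: the analytic solution of a
# pure-dephasing Lindblad equation is rho01(t) = rho01(0) * exp(-t/T2),
# with the diagonal populations unchanged.  T2 and the times are made up.
import numpy as np

T2 = 50e-6                                   # assumed dephasing time: 50 microseconds
times = np.linspace(0.0, 200e-6, 5)

rho0 = 0.5 * np.array([[1, 1],
                       [1, 1]], dtype=complex)   # (|0>+|1>)/sqrt(2): maximal coherence

for t in times:
    coherence = abs(rho0[0, 1]) * np.exp(-t / T2)
    print(f"t = {t * 1e6:6.1f} us   |rho01| = {coherence:.3f}")
```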

There are LOTS of us out there working to solve these problems, and the vast majority of us have very little interest in the "philosophy of QM": we just want our devices to work better and "decoherence theory" gives us a route for improvement.
 
  • #81
f95toli said:
There are LOTS of us out there working to solve these problems, and the vast majority of us have very little interest in the "philosophy of QM": we just want our devices to work better and "decoherence theory" gives us a route for improvement.

That sounds like good philosophy. I don't think the "Copenhagen interpretation" gives us a route for improvement.
 
  • #82
bhobba said:
I don't think it explains the Born rule - but I believe Gleason's theorem does - unless you really want to embrace contextuality. I have been carefully studying Schlosshauer's book on decoherence and feel confident the quote I gave is correct. If not there has been some hard to spot error a lot of people missed - possible of course - but that would not be my initial reaction. It most certainly does not explain how a particular state is singled out but it does explain how it is in an eigenstate prior to observation.

Mind giving us a cut down version of exactly where the math of tracing over the environment fails?

First of all, I have read Schlosshauer's publications too, and Wallace's and Zurek's and all the others. And I still disagree with what you say essentially, and I'm not the only one who does.

First of all, Gleason's theorem doesn't really help in deriving the Born rule. It just asserts that the Born rule is the only rule that fulfills some more or less sensible constraints. But it does not explain at all how a measurement is realized, where the randomness would come from and why the states are reduced.

Decoherence also does not explain any of that. It only explains that the ability to interfere is lost after a suitable interaction with the environment. And it specifically says nothing about systems being in eigenstates, unless you make additional, and questionable, assumptions.

There is nothing wrong with tracing over the environment, but you have to be careful with the interpretation of the result of this operation. A state that is reduced by tracing over the environment is the best option for representing the remaining information in the subsystem; it is not an ensemble. And it is also not indistinguishable from an ensemble in the first place. Only after you introduce the measurement postulate can you sensibly construct density operators that represent properties of ensembles and then argue that with the measurement postulate a density operator from tracing and one from an ensemble construction are indistinguishable under a measurement. Any attempt to use this to make a statement about decoherence as an essential ingredient of the measurement process will therefore result in a circular argument. This is known and has been discussed often enough. Some experts in the field don't want to see it, just like others stick to their own explanation. There are good arguments against all proposed solutions of the measurement problem and no agreement at all. So your question of why the discussion continues has a simple answer: because the problem has not been solved yet.
 
  • #83
kith said:
No, because squaring it doesn't yield the same operator. It is a weighted sum of projection operators, which is a special form of a mixed state operator. You don't seem to be familiar with the density matrix formalism which is essential for talking about decoherence in modern terms.


Because the environment isn't in a pure state either, and knowing its state doesn't help you to further specify the state of the system.
I was thinking in terms of the Schroedinger picture rather than the Heisenberg picture.
In the Schroedinger picture, it is the vector that evolves in time. The vector represents a physical system that is "rotating" in time. The rotation represents changes in the physical system.
The operator in the Schroedinger picture is stationary. The operator represents an operation on the coordinates. One could say that it represents a set of mathematical axes rather than a physical system. In the Schroedinger picture, a projection operator could represent a measurement.
The density matrix that you are talking about would result from the projection operator being sandwiched between the world vector and itself. The members of the density matrix are the matrix elements. The numerical values of these matrix elements are invariant to which picture (Schroedinger or Heisenberg) is being used. The matrix elements are considered physical. The probabilities are actually determined by the matrix elements.
The operator is not precisely the same as the density matrix. In any case, I was wrong to call you wrong. You are using a Heisenberg picture, not a Schroedinger picture. Everyone was talking about the Schroedinger equation, so I assumed that you would be using the Schroedinger picture.
In any case, I think that more and more of the system is being included in the vector rather than the operators. I mean by this that physicists are including more and more of the experimental apparatus in the Schroedingers equation. So they now have better "rules" for deciding what is the measurement and what is the system.
That is all that I meant by "decoherence theory." If the trend continues for including parts of the experimental apparatus in the wave equation, then eventually they may get to the point where both apparatus and system are analyzed by some form of Schroedinger's equation.
Or the physicists won't go the entire way. Then "decoherence theory" will "only" be a route to improving the measurements.
In the example that I gave in another post, the evolution of a C60 molecule was modeled with the thermal radiation that it gives off. In previous decades, the thermal radiation would merely have been considered a part of the "classical" apparatus. So I think that mostly answers your question. In principle, the composite system including environment and subsystem satisfies Schroedinger's equation.
Everything is all waves, and there are no particles!
 
  • #84
Darwin123 said:
I was thinking in terms of the Schroedinger picture rather than the Heisenberg picture.
...
In any case, I was wrong to call you wrong. You are using a Heisenberg picture, not a Schroedinger picture. Everyone was talking about the Schroedinger equation, so I assumed that you would be using the Schroedinger picture.

You are entirely on the wrong track. This has nothing to do with Schroedinger or Heisenberg picture. Density operators are a generalization of state vectors to allow for the description of the time evolution of subsystems (tensor factor spaces) of a unitarily evolving system. They are also used to represent classical ensembles of quantum states in a way that is compatible with the measurement postulate.
 
  • #85
Darwin123 said:
What you claimed was a “mixed state” is really a projection operator. Projection operators aren't states at all.

By definition a state is a positive operator of trace 1. A rank-one projection operator is such an operator and is a special type known as a pure state. The rest are called mixed states and it can be proved they are convex sums of pure states, ie of the form sum a_i |b_i><b_i|. As mentioned before the average of an observable R is Tr(pR) where p is the system state. If p is a pure state |a><a| then Tr(|a><a| |a><a|) = 1, ie we have an observable, namely |a><a|, that will tell us with 100% certainty if a system is in that pure state, and its outcome is the same pure state. In that sense you can consider a pure state as real. But the same does not apply to a mixed state, as you will see if you work through the math of say 1/2 |a><a| + 1/2 |b><b|.
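Working through that math numerically (the basis vectors are again arbitrary stand-ins) makes the asymmetry plain: the projector onto |a> detects the pure state |a> with certainty, but nothing plays that role for the mixed state.

```python
# The projector |a><a| "detects" the pure state |a> with certainty, but no
# one-dimensional projector does the same job for 1/2 |a><a| + 1/2 |b><b|.
import numpy as np

a = np.array([1, 0], dtype=complex)
b = np.array([0, 1], dtype=complex)
proj = lambda v: np.outer(v, v.conj())

pure = proj(a)                                # the pure state |a><a|
mixed = 0.5 * proj(a) + 0.5 * proj(b)         # the mixed state in question

P = proj(a)                                   # observable: "is the system in |a>?"
print(np.trace(pure @ P).real)                # 1.0 -> answers yes with certainty
print(np.trace(mixed @ P).real)               # 0.5 -> only a probability

# Since mixed = I/2, Tr(mixed P) = rank(P)/2 for any projector P, which never
# equals 1 for a one-dimensional P - there is no certain "yes" test for it.
```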

Thanks
Bill
 
Last edited:
  • #86
Jazzdude said:
First of all, I have read Schlosshauer's publications too, and Wallace's and Zurek's and all the others. And I still disagree with what you say essentially, and I'm not the only one who does.

Having discussed the issue here and elsewhere, there are those that do disagree with me for sure - but there are plenty that don't.

Jazzdude said:
First of all, Gleason's theorem doesn't really help in deriving the Born rule. It just asserts that the Born rule is the only rule that fulfills some more or less sensible constraints. But it does not explain at all how a measurement is realized, where the randomness would come from and why the states are reduced.

Gleason's theorem shows, provided you make sensible assumptions, that the usual trace formula follows. Of course it does not explain the mechanism of collapse but I do believe it does explain why randomness enters the theory. Determinism is actually contained in a probabilistic theory - but the only probabilities allowed are 0 or 1. That assumption though is inconsistent with the trace formula - that of course is the Kochen-Specker theorem, which follows quite easily from Gleason.

Jazzdude said:
Decoherence also does not explain any of that. It only explains that the ability to interfere is lost after a suitable interaction with the environment. And it specifically says nothing about systems being in eigenstates, unless you make additional, and questionable, assumptions.

I can't follow you here. From the paper posted before:
'Interaction with the environment typically leads to a rapid vanishing of the off-diagonal terms in the local density matrix describing the probability distribution for the outcomes of measurements on the system. This effect has become known as environment-induced decoherence, and it has also frequently been claimed to imply at least a partial solution to the measurement problem.'

Since the off-diagonal elements are for all practical purposes zero it is a mixed state of eigenstates of the measurement apparatus. By the usual interpretation of a mixed state that means it is in an eigenstate but we do not know which one - only probabilities. It has been pointed out, correctly, that a mixed state is not uniquely decomposable into pure states so that is not by itself a correct interpretation. However it is now entangled with the measurement apparatus whose eigenstates ensure it can be so decomposed.

Now, from what you write, is your issue with the above that since it invokes the Born rule it is somehow circular? I have read that before but can't really follow it. I do not believe decoherence explains the Born rule - I have read the envariance argument and do not agree with it - but base my explanation of it on Gleason. What Gleason does is constrain the possible models the formalism of QM allows. It tells us nothing about how an observation collapses a state. But it does tell us what any model consistent with QM must contain and it is that I take as true in decoherence.

Thanks
Bill
 
Last edited:
  • #87
f95toli said:
We need to be a bit careful about when we talk about "decoherence theory". It is important to understand that this is NOT an interpretation (although elements of it can of course be used to formulate interpretations if you are interested, which I am not) Hence, I don't think anyone claims that it solves all philosophical problems with QM. However, what it DOES do is to give us quantitative ways of modelling decoherence of quantum systems. Or, in other words, its predictions match experimental data.

Of course. It most definitely does NOT solve all the philosophical problems with QM - but it does resolve some of them, such as those of Schrodinger's Cat, ie with decoherence taken into account it definitely is alive or dead, not in some weird superposition.

Thanks
Bill
 
  • #88
bhobba said:
Gleason's theorem shows, provided you make sensible assumptions, that the usual trace formula follows. Of course it does not explain the mechanism of collapse but I do believe it does explain why randomness enters the theory. Determinism is actually contained in a probabilistic theory - but the only probabilities allowed are 0 or 1.

It's very unclear why any of the assumptions of Gleason's theorem are mandatory in quantum theory, even why we should assign probabilities to subspaces at all. Your argument is therefore based on assumptions that I don't share. But even if I did, the lack of a mechanism that actually introduces the randomness spoils the result. Deterministic mechanisms don't just turn random because we introduce an ad-hoc probability measure.

'... This effect has become known as environment-induced decoherence, and it has also frequently been claimed to imply at least a partial solution to the measurement problem.'

Yes, claimed, but never sufficiently backed up.

Since the off-diagonal elements are for all practical purposes zero it is a mixed state of eigenstates of the measurement apparatus. By the usual interpretation of a mixed state that means it is in an eigenstate but we do not know which one - only probabilities.

That's precisely where your argument goes wrong. A subsystem description of a single state has the same mathematical form as a mixed (or ensemble) state, but it is still a single state of a single system. Interpreting it as an ensemble of whatever constituents with classical probabilities is just wrong. Now some argue that it is not an ensemble, but that it is indistinguishable from an ensemble. And this argument only holds if you already assume the full measurement postulate. So it does not contribute anything at all to solving the measurement problem.


It has been pointed out, correctly, that a mixed state is not uniquely decomposable into pure states so that is not by itself a correct interpretation. However it is now entangled with the measurement apparatus whose eigenstates ensure it can be so decomposed.

That's the least of the problems, however even with entanglement the branches are not uniquely determined. The preferred basis problem is also not solved entirely, even though good general arguments do exist.


I do not believe decoherence explains the Born rule - I have read the envariance argument and do not agree with it - but base my explanation of it on Gleason. What Gleason does is constrain the possible models the formalism of QM allows. It tells us nothing about how an observation collapses a state. But it does tell us what any model consistent with QM must contain and it is that I take as true in decoherence.

Even then you must agree that the measurement problem is not solved. And just for the record, envariance and the similar decision theory based arguments do not work either. They contain assumptions that are practically equivalent to stating the Born rule.
 
  • #89
f95toli said:
We need to be a bit careful about when we talk about "decoherence theory". It is important to understand that this is NOT an interpretation (although elements of it can of course be used to formulate interpretations if you are interested, which I am not)

Well said, it is not an interpretation, it is an experimental fact of nature. The old quantum idea before decoherence was that it took a human being to collapse the wave function. The Born rule is just that, the statistical probability of such a state collapsing to give a real result, the stuff we call observables represented by Hermitian matrices. But since man has not been around that long, it stands to reason that things have to collapse in natural ways, and decoherence, first proven by Alain Aspect, is the bridge across this dichotomy.
 
  • #90
Jazzdude said:
It's very unclear why any of the assumptions of Gleason's theorem are mandatory in quantum theory, even why we should assign probabilities to subspaces at all. Your argument is therefore based on assumptions that I don't share. But even if I did, the lack of a mechanism that actually introduces the randomness spoils the result. Deterministic mechanisms don't just turn random because we introduce an ad-hoc probability measure.

I fail to see your point. There are two types of models - stochastic (ie fundamentally random) and deterministic. If it was deterministic then you would be able to define a probability measure of 0 and 1 - which you can't do if Gleason's theorem holds. Do you really believe in contextuality and theories that have it like BM? But yes that is an assumption I make and adhere to.

Jazzdude said:
That's precisely where your argument goes wrong. A subsystem description of a single state has the same mathematical form as a mixed (or ensemble) state, but it is still a single state of a single system. Interpreting it as an ensemble of whatever constituents with classical probabilities is just wrong. Now some argue that it is not an ensemble, but that it is indistinguishable from an ensemble. And this argument only holds if you already assume the full measurement postulate. So it does not contribute anything at all to solving the measurement problem.

Since I accept the measurement postulate as a given, that is not an issue. The advantage of the fact it is now a mixed state is that the interpretation is different. A post I saw about it on this forum expressed it pretty well:
https://www.physicsforums.com/showthread.php?t=260622
'Seriously, a mixed state is an ensemble description. In fact, one of the peculiar things about the interplay between mixed state statistics and quantum statistics is that considering particles in a "mixed state" is indistinguishable from considering them in a randomly drawn pure state if that random drawing gives a statistically equivalent description as the mixed state. Worse, there are *different* ensembles of *different* pure states which are all observationally indistinguishable from the "mixed state". What describes a mixed state, or all of these ensembles, is the density matrix rho.'

However in decoherence, as I mentioned, since the mixed state comes from the system state being entangled with the possible states of the measurement apparatus, it singles out one ensemble.

Jazzdude said:
And just for the record, envariance and the similar decision theory based arguments do not work either. They contain assumptions that are practically equivalent to stating the Born rule.

Totally agree.

Thanks
Bill
 
  • #91
Meselwulf said:
The old quantum idea before decoherence was that it took a human being to collapse the wave function.

Actually very few believed that - only Wigner, Von Neumann and their cohort. Wigner later abandoned it however.

Thanks
Bill
 
  • #92
bhobba said:
I fail to see your point. There are two types of models - stochastic (ie fundamentally random) and deterministic. If it was deterministic then you would be able to define a probability measure of 0 and 1 - which you can't do if Gleason's theorem holds. Do you really believe in contextuality and theories that have it like BM? But yes that is an assumption I make and adhere to.

Your determinism argument only makes sense if you want to assign probabilities to subspaces at all. Why should we? We know it makes sense because we observe it, but that's not a good reason for assuming it. Doing so introduces exactly what we really want to understand.


Since I accept the measurement postulate as a given, that is not an issue. The advantage of the fact it is now a mixed state is that the interpretation is different.

If you accept the measurement postulate then you cannot derive anything relevant to solving the measurement problem from using it. Because solving the measurement problem (even partly) means to explain the origin of the measurement postulate.

'Seriously, a mixed state is an ensemble description. In fact, one of the peculiar things about the interplay between mixed state statistics and quantum statistics is that considering particles in a "mixed state" is indistinguishable from considering them in a randomly drawn pure state if that random drawing gives a statistically equivalent description as the mixed state. Worse, there are *different* ensembles of *different* pure states which are all observationally indistinguishable from the "mixed state". What describes a mixed state, or all of these ensembles, is the density matrix rho.'

Like I said, stating that a single reduced state described by a density operator is indistinguishable from an actual ensemble (no matter which realization) requires using the measurement postulate. So it does not help at all for saying anything about how measurement works. Decoherence does not solve the measurement problem, not even remotely, not with the Gleason theorem, not with MWI, just not at all.
 
  • #93
bhobba said:
Actually very few believed that - only Wigner, Von Neumann and their cohort. Wigner later abandoned it however.

Thanks
Bill

I always was of the opinion that to collapse the wave function it was essential to be wearing glasses with heavy frames, a skinny black tie, and a white lab coat.
 
  • #94
f95toli said:
There are LOTS of us out there working to solve these problems, and the vast majority of us have very little interest in the "philosophy of QM": we just want our devices to work better and "decoherence theory" gives us a route for improvement.
Good post! However, I never quite got what people mean when they talk about "decoherence theory". I know the theory of open quantum systems and how decoherence arises there. Is this equivalent to "decoherence theory" or is there more to it? If yes, what are the axioms of "decoherence theory"?
 
  • #95
Jazzdude said:
Your determinism argument only makes sense if you want to assign probabilities to subspaces at all. Why should we? We know it makes sense because we observe it, but that's not a good reason for assuming it. Doing so introduces exactly what we really want to understand.

It's from the postulate that observables are Hermitian operators whose eigenvalues are the possible outcomes. The spectral theorem implies, since obviously the actual eigenvalues are unimportant, that the projection operators of the decomposition give the probability of getting each outcome. That is easy to see if you consider a function of the observable that gives its expectation. Although it is a stronger assumption than the one made in Gleason's Theorem, you can in fact derive the standard trace formula from the simple assumption that expectations are additive, as Von Neumann did in his proof against hidden variable theories. In fact that's the precise assumption Bell homed in on in his refutation - it's not necessarily true of hidden variable theories.
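Here is a quick numerical sketch of that spectral-theorem point; the particular observable and state are random illustrative choices, nothing more.

```python
# Spectral theorem sketch: decompose a Hermitian observable R into eigenvalues r_i
# and projectors P_i, read Tr(rho P_i) as the probability of outcome r_i, and check
# that these probabilities reproduce the trace formula E(R) = Tr(rho R).
import numpy as np

rng = np.random.default_rng(0)

# Random 3-dimensional Hermitian observable and random density matrix
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
R = (A + A.conj().T) / 2
B = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
rho = B @ B.conj().T
rho /= np.trace(rho)

# Spectral decomposition R = sum_i r_i |v_i><v_i|
eigvals, eigvecs = np.linalg.eigh(R)
probs = np.array([np.trace(rho @ np.outer(eigvecs[:, i], eigvecs[:, i].conj())).real
                  for i in range(3)])

print(np.isclose(probs.sum(), 1.0))                                  # probabilities sum to 1
print(np.isclose((eigvals * probs).sum(), np.trace(rho @ R).real))   # E(R) = sum r_i p_i = Tr(rho R)
```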

Jazzdude said:
If you accept the measurement postulate then you cannot derive anything relevant to solving the measurement problem from using it. Because solving the measurement problem (even partly) means to explain the origin of the measurement postulate.

I don't get it - I really don't. The problem with observing a pure state is that it discontinuously changes to an unpredictable state and the system cannot be assumed to be in that state prior to observation. But, like the link I gave on mixed states said: 'a "mixed state" is indistinguishable from considering them in a randomly drawn pure state if that random drawing gives a statistically equivalent description as the mixed state.'. Both are part of the measurement postulate but the second situation does not have the problems of the first, such as in Schrodinger's Cat where the cat can be alive and dead at the same time prior to observation. Being in a mixed state it is either alive or dead. It does not solve all the problems, but it does solve some of them.

Thanks
Bill
 
  • #96
Jazzdude, do you think the dBB interpretation solves the measurement problem?

For me, the measurement problem is mainly to explain collapse / the appearance of collapse and not necessarily to explain the Born rule. If we require an explanation for every probabilistic element of QM, we are implicitly assuming that the theory is deterministic.
 
  • #97
bhobba said:
I don't get it - I really don't.
I think this is a semantic issue. I use the following definitions:

measurement problem: explain collapse
measurement postulate: collapse + Born rule

If I get him right, Jazzdude wants to explain the measurement postulate while you want to solve the measurement problem.

/edit: I forgot the "observables are self-adjoint operators and outcomes are eigenvalues" part in the measurement postulate. This is probably not under doubt by Jazzdude.
 
Last edited:
  • #98
kith said:
Good post! However, I never quite got what people mean when they talk about "decoherence theory". I know the theory of open quantum systems and how decoherence arises there. Is this equivalent to "decoherence theory" or is there more to it? If yes, what are the axioms of "decoherence theory"?

It makes use of the standard postulates of QM - nothing new is required.

Thanks
Bill
 
  • #99
bhobba said:
It makes use of the standard postulates of QM - nothing new is required.
I think so, too. The question is why do people talk about decoherence theory in the first place and what does it include.
 
  • #100
kith said:
I think this is a semantic issue. I use the following definitions:

measurement problem: explain collapse
measurement postulate: collapse + Born rule

If I get him right, Jazzdude wants to explain the measurement postulate while you want to solve the measurement problem.

Maybe.

To me the measurement postulate is E(R) = Tr(pR) where p is the state. I assume it's true. The measurement problem for a pure state follows from the postulate in that it's easy to see that if p is a pure state it will in general discontinuously change to another pure state. However if p is a mixed state of the outcomes of an observation then the interpretation of the postulate is different - it can be assumed to be in one of those states prior to observation with a certain probability. Because decoherence converts a pure state to a mixed state there is no discontinuous change of the state - it can be assumed to be in that state prior to observation. Because of that, as the link I gave said, 'taking a partial trace amounts to the statistical version of the projection postulate.'.

If that doesn't do it I am afraid I will leave it to someone else - I am sort of pooped.

Thanks
Bill
 