Can Quantum Mechanics be studied without the Copenhagen interpretation

vanhees71

Science Advisor
Insights Author
Gold Member
The problem is that, supposing quantum theory is a comprehensive description of nature (and nothing hints that this might not be the case), it should also describe the time evolution of the system consisting of the measured object and the measurement apparatus. The time evolution is unitary, and well before and after this interaction the object and the apparatus should be separate parts of the system, so the corresponding asymptotic states should be described by a product state [itex]|\psi_{\text{obj}} \rangle \otimes |\psi_{\text{app}} \rangle[/itex]. If the interaction is described by a unitary time evolution, and if the initial state has been normalized, then the final state must also be normalized, independently of whether the initial state was an eigenstate of the measured observable; but that's not the case if you invoke the projection (collapse) postulate.

On the other hand, if you use the unitary time evolution to describe the measurement process described above, you'll see that the final state is not the product state given above, but a superposition.
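This can be made concrete with a minimal numpy sketch of a two-qubit toy model, in which a CNOT stands in for the von Neumann measurement interaction (the CNOT is purely an illustrative choice, not taken from any particular reference):

```python
import numpy as np

# Toy von Neumann measurement: object qubit + apparatus "pointer" qubit.
# A CNOT plays the role of the unitary measurement interaction.
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

obj = (ket0 + ket1) / np.sqrt(2)   # object in a superposition
app = ket0                         # apparatus in its "ready" state
initial = np.kron(obj, app)        # product state |psi_obj> (x) |psi_app>

U = np.array([[1, 0, 0, 0],        # CNOT: pointer flips iff object is |1>
              [0, 1, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)

final = U @ initial                # (|00> + |11>)/sqrt(2)

# Unitarity preserves the norm ...
norm = np.linalg.norm(final)

# ... but the final state is entangled, not a product state:
# its Schmidt rank (number of nonzero singular values of the
# reshaped amplitude matrix) is greater than 1.
schmidt_coeffs = np.linalg.svd(final.reshape(2, 2), compute_uv=False)
schmidt_rank = int(np.sum(schmidt_coeffs > 1e-12))
```

The norm stays exactly 1, as unitarity demands, while the Schmidt rank of 2 confirms that the final state is an entangled superposition rather than the asymptotic product state.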

For all practical purposes (FAPP), decoherence explains why measurements work as we are used to: to have a proper measurement apparatus, the measurement result must be FAPP irreversibly stored so that you can read it out. That requires a measurement apparatus that behaves sufficiently classically, i.e., a macroscopic one whose pointer positions are macroscopic observables, while still resolving the quantity to be measured sufficiently. Such a system decoheres quickly through interaction with "the environment"; in the end you are effectively not in a superposition anymore, and the measurement result is fixed forever (FAPP). Reading it off doesn't change the state of the system anymore, and the system may well be destroyed before you notice the result.
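The FAPP claim can be illustrated in the same toy setting: once the object is entangled with an unobserved "environment", tracing the environment out leaves the object with a density matrix whose interference (off-diagonal) terms have vanished. This is a minimal numpy sketch, not a realistic decoherence model:

```python
import numpy as np

# Object entangled with an (unobserved) environment qubit.
state = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)  # (|00> + |11>)/sqrt(2)
rho = np.outer(state, state)                          # pure-state density matrix

# Partial trace over the environment (second qubit):
# rho has indices (a, b, a', b'); sum over b = b'.
rho_obj = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

off_diagonal = abs(rho_obj[0, 1])          # interference term: 0 after tracing out
purity = float(np.trace(rho_obj @ rho_obj))  # 0.5 -> maximally mixed, like a coin toss
```

The reduced state is diag(1/2, 1/2): no interference terms survive, and what remains looks like a classical 50/50 random experiment, which is what the decoherence argument says the pointer readout effectively becomes.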

BTW: A flavor of the Copenhagen Interpretation, known as the Princeton Interpretation and due to J. von Neumann, claims that precisely the latter is the case, i.e., that you collapse the state only at the moment when you (or any conscious being) take note of the measurement result. For me that's esotericism, and it is clearly disproved by delayed-choice experiments such as the quantum eraser, e.g., the one by Walborn et al.

http://en.wikipedia.org/wiki/Quantum_eraser_experiment
 
@Ger Thank you for the link, but I was given the link in the first few comments.
 
The problem is that, supposing quantum theory is a comprehensive description of nature (and nothing hints that this might not be the case), it should also describe the time evolution of the system consisting of the measured object and the measurement apparatus. The time evolution is unitary, and well before and after this interaction the object and the apparatus should be separate parts of the system, so the corresponding asymptotic states should be described by a product state [itex]|\psi_{\text{obj}} \rangle \otimes |\psi_{\text{app}} \rangle[/itex]. If the interaction is described by a unitary time evolution, and if the initial state has been normalized, then the final state must also be normalized, independently of whether the initial state was an eigenstate of the measured observable; but that's not the case if you invoke the projection (collapse) postulate.

On the other hand, if you use the unitary time evolution to describe the measurement process described above, you'll see that the final state is not the product state given above, but a superposition.

For all practical purposes (FAPP), decoherence explains why measurements work as we are used to: to have a proper measurement apparatus, the measurement result must be FAPP irreversibly stored so that you can read it out. That requires a measurement apparatus that behaves sufficiently classically, i.e., a macroscopic one whose pointer positions are macroscopic observables, while still resolving the quantity to be measured sufficiently. Such a system decoheres quickly through interaction with "the environment"; in the end you are effectively not in a superposition anymore, and the measurement result is fixed forever (FAPP). Reading it off doesn't change the state of the system anymore, and the system may well be destroyed before you notice the result.

BTW: A flavor of the Copenhagen Interpretation, known as the Princeton Interpretation and due to J. von Neumann, claims that precisely the latter is the case, i.e., that you collapse the state only at the moment when you (or any conscious being) take note of the measurement result. For me that's esotericism, and it is clearly disproved by delayed-choice experiments such as the quantum eraser, e.g., the one by Walborn et al.

http://en.wikipedia.org/wiki/Quantum_eraser_experiment
Thank you for your response. Let me see if I understand this correctly. You're telling me that the states of the apparatus and of the system end up superposed rather than tensor-multiplied, which forces the new big system (consisting of both the apparatus and the system being measured) to require a renormalization of its states, whereas in the ideal case, where the states of the apparatus and the system are completely independent, no such renormalization would be required.

Did I get that correctly?

If I did (and I really hope I did): This means that the problem doesn't necessarily lie with the Copenhagen Interpretation, but rather with our systems "not being ideal". If we had ideal systems (the ones the Quantum Computing guys are dreaming about), where the measurement doesn't affect the system being measured, then the problem would be solved and the Copenhagen interpretation would be accepted.

Meaning: the current Copenhagen interpretation could be right, but we're not sure that it is, and we're still looking for a new mathematical formalism/interpretation that would take into account the superposition of systems without the need to renormalize them.

Is that it?

Thank you for your time, again.
 
For all practical purposes (FAPP), decoherence explains why measurements work as we are used to: to have a proper measurement apparatus, the measurement result must be FAPP irreversibly stored so that you can read it out. That requires a measurement apparatus that behaves sufficiently classically, i.e., a macroscopic one whose pointer positions are macroscopic observables, while still resolving the quantity to be measured sufficiently. Such a system decoheres quickly through interaction with "the environment"; in the end you are effectively not in a superposition anymore, and the measurement result is fixed forever (FAPP). Reading it off doesn't change the state of the system anymore, and the system may well be destroyed before you notice the result.
Decoherence does not explain why measurements work as we are used to, in any sense (let alone "effectively" or "for all practical purposes"). The superposition always remains. Decoherence entails that given sufficient environmental interaction the branches of the wave function tend to evolve quasi-classically. So decoherence is only useful at all if we think we can solve the measurement problem by appeal to a many-decohering worlds interpretation. But such an interpretation severs the link between probability of measurement outcomes and relative frequency of measurement outcomes, and therefore does affect our "practical purposes".

BTW: A flavor of the Copenhagen Interpretation, known as the Princeton Interpretation and due to J. von Neumann, claims that precisely the latter is the case, i.e., that you collapse the state only at the moment when you (or any conscious being) take note of the measurement result. For me that's esotericism, and it is clearly disproved by delayed-choice experiments such as the quantum eraser, e.g., the one by Walborn et al.

http://en.wikipedia.org/wiki/Quantum_eraser_experiment
I would be interested to hear the argument for why quantum eraser experiments undermine consciousness-causes-collapse theories.
 
I would be interested to hear the argument for why quantum eraser experiments undermine consciousness-causes-collapse theories.
I'm not an expert (as you see here in my discussion), but I'd like to contribute to this. I've just read this website explaining the Quantum Eraser in detail

http://grad.physics.sunysb.edu/~amarch/ [Broken]

And I became convinced that the experiment proves that it is the setup of the experiment that causes the presence or absence of interference patterns, not the observer looking or not looking; i.e., the article "does the universe exist if we're not looking" looked like nonsense to me! The experiment leads directly to the conclusion that a conscious being doesn't really matter, since the entanglement gets erased or created by the new configuration of the experiment, and not by a change in the observer's position/state/etc.
 

Nugatory

Mentor
If we had ideal systems (the ones the Quantum Computing guys are dreaming about), where the measurement doesn't affect the system being measured, then the problem would be solved and the Copenhagen interpretation would be accepted.
I'm not sure how that plays out... the (non-unitary) collapse/projection seems to be both an essential aspect of a Copenhagen measurement and something that necessarily affects the system being measured.
 
I'm not an expert (as you see here in my discussion), but I'd like to contribute to this. I've just read this website explaining the Quantum Eraser in detail

http://grad.physics.sunysb.edu/~amarch/ [Broken]

And I became convinced that the experiment proves that it is the setup of the experiment that causes the presence or absence of interference patterns, not the observer looking or not looking; i.e., the article "does the universe exist if we're not looking" looked like nonsense to me! The experiment leads directly to the conclusion that a conscious being doesn't really matter, since the entanglement gets erased or created by the new configuration of the experiment, and not by a change in the observer's position/state/etc.
That was a really interesting read! However, I don't see how this experiment undermines the idea that wave-function collapse is caused by consciousness. As the author notes, we can think of the loss of interference in this case as being due only to the fact that the photons are entangled and that the presence of the quarter-wave plates changes this entanglement. This doesn't help explain why measuring which path particles take in the standard double-slit experiment destroys interference, and so consciousness-causes-collapse is still a potential explanation for that.
 
This doesn't help explain why measuring which path particles take in the standard double-slit experiment destroys interference, and so consciousness-causes-collapse is still a potential explanation for that.
Nothing can refute consciousness-causes-collapse, because it's basically solipsism in another form - but virtually everyone rejects it because it leads to a totally unnecessarily weird view of the world.

However, decoherence (which is a form of entanglement) does explain why measuring which slit the particle goes through destroys interference.

If you want to investigate this in more detail, check out Susskind's lectures about it (assuming, of course, it's not familiar already):
http://theoreticalminimum.com/courses/quantum-entanglement/2006/fall

One of the high priests of consciousness-causes-collapse, Wigner, when he found out about some of the early work of Zurek on decoherence, realized it was no longer necessary and abandoned it.

Thanks
Bill
 
The experiment leads directly to the conclusion that a conscious being doesn't really matter
It never did, except on the consciousness-causes-collapse view.

It came about because of the use of the word "observation": people associated it with a conscious observer. It really means anything capable of leaving a mark here in the common-sense macro world, which is assumed to have all the standard common-sense properties of existing independently of and externally to us. Once you realize that, you find that a lot of the semi-mystical guff written about QM becomes trivial.

It also points to the real issue with QM - how does this world of everyday experience emerge?

Thanks
Bill
 
So decoherence is only useful at all if we think we can solve the measurement problem by appeal to a many-decohering worlds interpretation.
There are many interpretations that make use of decoherence in their foundations; MW is just one of them, along with, e.g., decoherent histories and the ensemble decoherence interpretation that I hold to.

The issue of decoherence and its relation to the measurement problem is examined here (again, of course, assuming you or others reading this are not familiar with the issues):
http://philsci-archive.pitt.edu/5439/1/Decoherence_Essay_arXiv_version.pdf

Basically, what the issue boils down to is the difference between an improper and a proper mixed state.

Another thing that needs to be made clear, even though it should be obvious from reading the literature: decoherence doesn't seek to explain the QM measurement postulate - it's still required, and the measurement 'problem' is still there. It simply seeks to render it benign by explaining apparent wave-function collapse (and each interpretation does that in a different way). Whether that is good enough is purely a matter of opinion - I believe it is - but opinions vary, and this can often lead to very heated argument, some of which stems from a misunderstanding of what decoherence advocates like myself are saying.
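The improper/proper distinction is easy to see numerically: the reduced state of one half of an entangled pair and a genuine 50/50 classical ensemble yield literally the same density matrix. A minimal numpy sketch (the Bell state is just the simplest illustrative example):

```python
import numpy as np

# "Improper" mixed state: the reduced state of one half of an entangled pair,
# obtained by a partial trace over the other half.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
rho_improper = np.outer(bell, bell).reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

# "Proper" mixed state: a genuine classical ensemble, 50% |0> and 50% |1>.
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
rho_proper = 0.5 * np.outer(ket0, ket0) + 0.5 * np.outer(ket1, ket1)

# The two density matrices are element-by-element identical, so no measurement
# on the subsystem alone can distinguish them -- the crux of the FAPP argument.
same = bool(np.allclose(rho_improper, rho_proper))
```

Since every measurement statistic is a function of the density matrix, the two cases are observationally identical, even though only the proper mixture admits an ignorance interpretation; that is exactly where the interpretational disagreement lives.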

Thanks
Bill
 
There are many interpretations that make use of decoherence in their foundations; MW is just one of them, along with, e.g., decoherent histories and the ensemble decoherence interpretation that I hold to.

The issue of decoherence and its relation to the measurement problem is examined here (again, of course, assuming you or others reading this are not familiar with the issues):
http://philsci-archive.pitt.edu/5439/1/Decoherence_Essay_arXiv_version.pdf

Basically, what the issue boils down to is the difference between an improper and a proper mixed state.

Another thing that needs to be made clear, even though it should be obvious from reading the literature: decoherence doesn't seek to explain the QM measurement postulate - it's still required, and the measurement 'problem' is still there. It simply seeks to render it benign by explaining apparent wave-function collapse (and each interpretation does that in a different way). Whether that is good enough is purely a matter of opinion - I believe it is - but opinions vary, and this can often lead to very heated argument, some of which stems from a misunderstanding of what decoherence advocates like myself are saying.

Thanks
Bill
The paper you cite agrees with me that decoherence fails to solve "the problem of outcomes" (which is essentially the problem I was talking about). I recommend this:
http://arxiv.org/pdf/quant-ph/0112095v3.pdf
Consciousness-causes-collapse is not solipsism simply because it postulates the wave-function over and above consciousness. Indeed the view is consistent with the claim that the early physical universe contained no consciousness at all (and so no collapse).
 
The paper you cite agrees with me that decoherence fails to solve "the problem of outcomes" (which is essentially the problem I was talking about).
It's amazing how two people can read the same thing and reach different conclusions. What it shows is that decoherence gives the appearance of wave-function collapse, so collapse is a non-issue - it's still there, but benign. People who claim decoherence doesn't solve the measurement problem are correct - and decoherence people like me agree. The claim is that it's rendered nothing to worry about - once appropriate other assumptions, depending on the exact interpretation you hold to, are made.

That paper is very even-handed and correctly points out that it leaves the collapse still there. That's true - but it's now no longer an issue:
http://en.wikipedia.org/wiki/Quantum_mind%E2%80%93body_problem [Broken]
'Decoherence does not generate literal wave function collapse. Rather, it only provides an explanation for the appearance of wavefunction collapse, as the quantum nature of the system "leaks" into the environment. That is, components of the wavefunction are decoupled from a coherent system, and acquire phases from their immediate surroundings. A total superposition of the universal wavefunction still exists (and remains coherent at the global level), but its fundamentality remains an interpretational issue. "Post-Everett" decoherence also answers the measurement problem, holding that literal wavefunction collapse simply doesn't exist.[22] Rather, decoherence provides an explanation for the transition of the system to a mixture of states that seem to correspond to those states observers perceive. Moreover, our observation tells us that this mixture looks like a proper quantum ensemble in a measurement situation, as we observe that measurements lead to the "realization" of precisely one state in the "ensemble".'

The consciousness-causes-collapse view is LIKE solipsism, not the same as it. The real issue with it is why choose such a weird view when there are plenty of other interpretations that aren't as weird. For example, consider the double-slit experiment, but have the observer be a computer. Run it and record the output in computer memory. Dismantle the apparatus and display the results a few years later. Are you seriously claiming it collapsed when you displayed the results of what the computer recorded - oh, and someone was around to read it as well? If so, you are led to the view that what's in computer memories is not real until you read it and it is observed by a conscious observer. Try that one in a computer-science class and they will likely leave laughing their heads off. The view it engenders is simply far too weird - I can't prove you wrong - it's just a really, really strange view of the world that most would reject.

Thanks
Bill
 

UltrafastPED

Science Advisor
Gold Member
The problem of "interpretation" arose when philosophers (and others) attempted to interpret the results of quantum mechanics in terms consistent with past experience - such as classical mechanics. This was very appealing from 1926-1936 or so for two reasons:

I. The semi-classical explanations from 1905-1925 often worked, and were "understandable"
II. It was not yet clear that QM was "always correct"; that is, perhaps it was incomplete. Hence the EPR paper.

If it is not possible to construct an actual experiment to decide something then perhaps there is nothing to decide ...
 

vanhees71

Thank you for your response. Let me see if I understand this correctly. You're telling me that the states of the apparatus and the states of the system are superposed rather than being tensor-multiplied, which causes the new big system (which consists of both the apparatus and the system being measured) to require a new renormalization of their states, while in the ideal case they would not require that if the states of apparatus and the system are completely independent.

Did I get that correctly?
No, that was not what I wanted to say. The point is that I suppose that quantum theory applies to the big system consisting of the measurement apparatus + the measured system. Then unitary (linear) time evolution tells me that I don't end up with a state in which the measured system has the definite value [itex]a[/itex], the outcome of the measurement of the observable [itex]A[/itex], but with a superposition.

The collapse requires a non-unitary time evolution and thus assumes, in an ad-hoc way, dynamics not described by quantum mechanics. That's why I don't like the collapse postulate, and since I consider it pretty superfluous, I like to live without it and use the minimal statistical interpretation instead of the flavor(s) of the Copenhagen interpretation which assume a collapse.
 

vanhees71

I'm not an expert (as you see here in my discussion), but I'd like to contribute to this. I've just read this website explaining the Quantum Eraser in detail

http://grad.physics.sunysb.edu/~amarch/ [Broken]

And I became convinced that the experiment proves that it is the setup of the experiment that causes the presence or absence of interference patterns, not the observer looking or not looking; i.e., the article "does the universe exist if we're not looking" looked like nonsense to me! The experiment leads directly to the conclusion that a conscious being doesn't really matter, since the entanglement gets erased or created by the new configuration of the experiment, and not by a change in the observer's position/state/etc.
I couldn't agree more. It's just the choice of how I evaluate the fixed outcome of the experiment that determines whether I get an interference pattern or full which-way information. The point is that the outcome is fixed once and for all (FAPP ;-)), and there is no retrocausal interaction with a system which isn't even there anymore when I look at my measurement protocol.
 

vanhees71

Decoherence does not explain why measurements work as we are used to, in any sense (let alone "effectively" or "for all practical purposes"). The superposition always remains. Decoherence entails that given sufficient environmental interaction the branches of the wave function tend to evolve quasi-classically. So decoherence is only useful at all if we think we can solve the measurement problem by appeal to a many-decohering worlds interpretation. But such an interpretation severs the link between probability of measurement outcomes and relative frequency of measurement outcomes, and therefore does affect our "practical purposes".



I would be interested to hear the argument for why quantum eraser experiments undermine consciousness-causes-collapse theories.
I never understood this argument against the decoherence argument. Of course, it's "FAPP" (and John Bell famously made funny remarks about people using such FAPP arguments; I'm very well aware of this). What I mean is that decoherence explains to me well enough why effectively I do not end up with a superposition of pure states, but with decoherent sums of projection operators, i.e., a mixed state, which describes a state that is like a random experiment on a classical system, e.g., throwing a die. Also, the result is fixed by registering the measured value in a practically irreversible way (as a table of the outcomes of measurements on a sufficiently large ensemble of equally prepared quantum objects, on a piece of paper or electronically on a storage device). After that, usually, the system isn't there anymore. E.g., in the quantum-eraser experiment, the photons all got absorbed somewhere, and you can dismantle your whole equipment before looking at the fixed measurement protocols. Thus (FAPP) I see neither any necessity nor any logically justifiable possibility for how my consciousness, taking notice of this fixed measurement protocol, could retrocausally affect the measurement of a system which isn't even there anymore. I prefer no-nonsense interpretations like the minimal statistical interpretation to esotericism, which might be nice and entertaining as a science-fiction story but cannot be taken seriously in the sense of the natural sciences!
 
Of course, it's "FAPP" (and John Bell famously made funny remarks about people using such FAPP arguments; I'm very well aware of this). What I mean is that decoherence explains to me well enough why effectively I do not end up with a superposition of pure states, but with decoherent sums of projection operators, i.e., a mixed state, which describes a state that is like a random experiment on a classical system, e.g., throwing a die.
Mate, you hit on the crux of the issue. For me, you, and many other people, FAPP, the appearance of collapse, etc. are perfectly OK. But some really get hung up on it, to the point where they misconstrue what decoherence claims. They produce papers saying it doesn't solve the measurement problem, that you still have the measurement postulate, etc. But it's a straw-man argument, because none of those things are claimed. The measurement postulate is still required and the collapse is still there, but when applied to an improper mixed state it's observationally exactly the same as if you had been presented with a system that is in an eigenstate of what's being measured. This means you can assume the collapse has occurred, because there is no way to tell otherwise. But some just don't get it.

Thanks
Bill
 
Errrrr. It's based on two axioms, as found, for example, in Ballentine's book. These are clear and unambiguous statements about nature:

1. Observables are Hermitian operators whose eigenvalues are the possible outcomes of an observation.

2. A positive operator P of unit trace exists, called the state, such that the expected value of the observable O is Tr(OP).


Thanks
Bill
http://en.wikipedia.org/wiki/Axiom
"As used in modern logic, an axiom is simply a premise or starting point for reasoning"


.
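As a concrete illustration of the second postulate quoted above, here is a minimal numpy sketch computing Tr(OP) for a simple observable and state (the specific choices of O and P are illustrative only):

```python
import numpy as np

# Ballentine's second postulate as code: the expected value of the
# observable O in the state P is Tr(O P).
O = np.array([[1.0, 0.0],
              [0.0, -1.0]])             # Hermitian observable, eigenvalues +1 and -1
psi = np.array([1.0, 1.0]) / np.sqrt(2)
P = np.outer(psi, psi)                  # the state: a positive operator of unit trace

expectation = float(np.trace(O @ P))    # outcomes +1 and -1 occur with equal probability
trace_P = float(np.trace(P))            # unit trace, as the postulate requires
```

For this state the two eigenvalues +1 and -1 are equally likely, so the expectation value comes out exactly 0, with no reference to collapse anywhere in the calculation.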
 
Mate, you hit on the crux of the issue. For me, you, and many other people, FAPP, the appearance of collapse, etc. are perfectly OK. But some really get hung up on it, to the point where they misconstrue what decoherence claims. They produce papers saying it doesn't solve the measurement problem, that you still have the measurement postulate, etc. But it's a straw-man argument, because none of those things are claimed.
Of course there are claims like that in the literature. Many publications claim that decoherence solves the measurement problem, and that it does so without requiring the measurement postulate. The rebuttals do quote these papers and specifically answer their arguments.

The measurement postulate is still required and the collapse is still there, but when applied to an improper mixed state it's observationally exactly the same as if you had been presented with a system that is in an eigenstate of what's being measured. This means you can assume the collapse has occurred, because there is no way to tell otherwise.
What have you gained from the fact that you can assume that the collapse happened if you postulated before that it does happen?

But some just don't get it.
Be careful with such statements. My criticizing you in the MWI thread was absolutely valid in the given context. You made a statement about MWI that was not correct because it required the measurement postulate, which is not part of the Everett formalism.


Cheers,

Jazz
 

vanhees71

Now I have to ask the stupid question: what do you mean by "measurement postulate"? If you mean the projection or collapse postulate, I don't see why you need it. In the minimal interpretation the (pure or mixed) state of a system just describes what I know about the system in terms of the probabilistic content of that state according to Born's rule, not more and not less. I can verify the predictions of quantum theory by making measurements on ensembles of systems prepared in this state, not on a single system.

Concerning the many-worlds interpretation, I don't understand in which way it helps to solve the measurement problem. It just introduces the claim that at any measurement the universe splits into parallel universes, which, however, I cannot observe in any way. So what does this add to the solution of the problems with measurement? If I add something unobservable to the (minimally interpreted) quantum theory, I haven't added anything in the sense of science, because science is about empirically checkable notions, not unobservable claims. The same holds for the "paths" in the de Broglie-Bohm-like interpretations. Maybe I overlook something behind these interpretations?
 
What have you gained from the fact that you can assume that the collapse happened if you postulated before that it does happen?
Jazz - FAPP means just that. Applying the measurement postulate to an improper mixed state and to a proper mixed state is observationally exactly the same. If one takes a system that's in an eigenstate of what you are observing and applies the measurement postulate, it's benign - no collapse occurs. A proper mixed state is such a system randomly presented, so applying the measurement postulate is again benign - no collapse occurred. An improper mixed state is observationally equivalent to that, meaning FAPP it looks like, has the appearance of, and all the other descriptors I and others have thrown about regarding the issue apply. This means that, when combined with a specific interpretational assumption about exactly which interpretation incorporating decoherence you are considering, the measurement postulate is rendered benign. For example, in the decoherence ensemble interpretation I hold to, it's the simple expedient of assuming observationally equivalent systems are equivalent. The paper I linked to examines the issue fairly and evenly, and discusses the interpretation I hold to, with its pros and cons. For many, its conclusion that it leaves the essential issue untouched is what they hold to - fine - but for many like myself it has solved the issue - FAPP. This is the essential disagreement - the crux of the issue - and constantly rehashing it won't change anything.

Thanks
Bill
 
Now I have to ask the stupid question: what do you mean by "measurement postulate"? If you mean the projection or collapse postulate, I don't see why you need it.
I think he means the Born rule - the second postulate in Ballentine's treatment. You still need it - but it's much more benign and simple - as you correctly point out - i.e., you can simply interpret the [itex]p_i[/itex] in the mixed state as the probability that the system is in that state. This means, just like you said, that FAPP it resolves the collapse issue.

If I add something unobservable to the (minimally interpreted) quantum theory, I don't have added anything in the sense of science, because science is about empirically checkable notions not unobservable claims. The same holds for the "paths" in the de Broglie-Bohm-like interpretations. Maybe I overlook something behind these interpretations?
You haven't - it's purely a matter of what you find appealing. The MWI, for example, has some really beautiful, quite interesting aspects that appeal to some, at least to me anyway. It's not my personal interpretation, which is the ensemble interpretation with decoherence - but it has some quite interesting features.

Whether or not such things are really science is of course open to debate - but I stand damned because I find them maddeningly interesting.

Thanks
Bill
 
http://en.wikipedia.org/wiki/Axiom
"As used in modern logic, an axiom is simply a premise or starting point for reasoning"
I think those versed in the axiomatic systems of mathematics, as physicists in general are, understand exactly what an axiom is. Axioms can contain statements about the world out there - which the axioms of QM do - and that's what makes it a physical theory, not just mathematics, where the axioms are expressed as formal statements.

Thanks
Bill
 
