Decoherence and standard formalism

fanieh
In the standard mathematical formalism, the environment is treated classically. This is because observers (being macroscopic recording mechanisms) are treated classically, so the system is isolated. Decoherence is about open systems, so how is decoherence compatible with Copenhagen or the standard formalism at all?
How can you make the standard formalism accommodate an open system and environment too? Or is it correct to think that the standard mathematical formalism has already been updated and Copenhagen is already outdated? How would you link the two if you were to give a lecture about this in class (which I will)?
 
fanieh said:
In the standard mathematical formalism, the environment is treated classically

Please provide a reference for this statement. It doesn't look correct to me.
 
PeterDonis said:
Please provide a reference for this statement. It doesn't look correct to me.

I mean, in the Copenhagen interpretation, the environment is treated classically because observers are treated classically. But I'm not interested in interpretations but in the standard mathematical formalism. How does the standard mathematical formalism deal with the environment? In the density matrix approach, the environment is traced out, but only after there is entanglement between the system and environment. In the Copenhagen interpretation, there is no entanglement between system and environment because the environment is treated classically and only the quantum system is treated, as an isolated system. So I'm confused how so many physicists can say Copenhagen is the standard mathematical formalism when it treats the environment as classical and there is no system-environment entanglement. How do you resolve the two?
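To make the tracing-out step concrete, here is a toy numerical sketch (my own minimal two-qubit model, not from any textbook, with the second qubit standing in for the environment):

```python
import numpy as np

# Toy model: one system qubit entangled with one "environment" qubit,
# |psi> = (|00> + |11>)/sqrt(2).
psi = np.zeros(4)
psi[0] = psi[3] = 1 / np.sqrt(2)
rho = np.outer(psi, psi)          # density matrix of the pure joint state

# Reshape to axes (system, env, system', env') and trace out the environment:
# rho_sys[s, s'] = sum_e rho[s, e, s', e].
rho_sys = np.einsum('iaja->ij', rho.reshape(2, 2, 2, 2))
print(rho_sys)    # diag(0.5, 0.5): the off-diagonal coherences are gone
```

The reduced state is diagonal only because the system got entangled with the environment first; tracing out an unentangled environment would leave the system's superposition intact.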
 
fanieh said:
In the standard mathematical formalism, the environment is treated classically. This is because observers (being macroscopic recording mechanisms) are treated classically, so the system is isolated. Decoherence is about open systems, so how is decoherence compatible with Copenhagen or the standard formalism at all?

That's definitely not correct. One of the motivations for the decoherence treatment was to explain how we can get something that looks like a non-unitary irreversible process from a theory (QM) in which the basic interactions are all unitary (and reversible). It's a similar problem to explaining the second law of thermodynamics within classical physics, where all the fundamental processes at an individual level obey time-reversible evolution.

So the basic question here is: if the laws of QM govern everything, then surely they also govern the bits and pieces that make up a measurement device. So where does this discontinuous and irreversible change that we call a 'measurement' come from? The system and the measuring device are all made of things that obey the laws of QM, which means that all of the interactions between the things that make up our system and the things that make up our measuring device are unitary interactions.

So the very essence of the decoherence treatment is to treat everything quantum mechanically, not classically.

If, for example, we had a single EM field mode inside a cavity then there's going to be some leakage of that radiation inside the cavity to the outside world. How do we model this? Well, if some radiation is getting out of the cavity then there has to be a coupling of the field mode inside the cavity to the field modes outside the cavity. Field modes in QM are essentially just quantum harmonic oscillators and so we can model this system as a single (cavity) field mode coupled to a number of field modes outside the cavity. Essentially just a system then of coupled harmonic oscillators.

We can then make some reasonable assumptions about the form of that coupling and assume there are an infinite number of discrete field modes outside the cavity (all distinguished by some frequency). With this we can write down a Hamiltonian for the cavity mode plus outside modes. What we're after is an equation that describes the evolution of the field mode inside the cavity, because that's our system of interest. The field modes outside the cavity are our environment and we're looking for a kind of 'averaged' equation of motion for our cavity mode as a result of all of the myriad interactions with the environmental (outside) modes.

To proceed further we can take the continuum limit for the environmental modes (a continuous distribution of frequencies), assume some basic initial state for the environmental modes (thermal states, for example) and then do a coarse-grained averaging procedure to end up with a master equation for our cavity field mode. What we then have is something that can model a dissipative process in QM fully quantum mechanically. It's no different in spirit to the treatment of spontaneous emission in a fully quantum way.

What's interesting is that if we construct such a model then for certain initial environmental conditions the master equation for the cavity mode can be solved exactly and we find that the effect of the environment is to rapidly drive the cavity field mode into a diagonal density operator - which is the density operator we get from performing a measurement in which we remain ignorant of the result.

For this cavity field mode example if we begin with a cavity field mode prepared in a superposition of two coherent states then there is an exponential decay of the off-diagonal elements of the cavity field density matrix with a decay rate that is proportional to the square of the 'distance' between them - so any macroscopic (big difference between the coherent states) superpositions get driven to mixtures very, very quickly.

Zeh and Zurek did some magnificent work to look at this kind of thing in more general terms and showed that this decoherence is actually a more general feature of interactions of a 'small' quantum system with a large (but still quantum) environment. The idea being that pretty much any 'sensible' quantum environment is going to have this diagonalizing effect on the system of interest. So this leads to the idea that a measuring device can be modeled as a quantum object that is coupled to a large environment and if we treat it this way then we can get something out of it that kind of looks like a quantum measurement (it has the right density matrix for an ensemble).

As a way to model dissipative effects in QM - wonderful; as a solution to the 'measurement problem' I'm less convinced, but clearly some 'decoherence' process must be happening in any measuring device - which is a suitably 'large' quantum object, of course. I don't think that's quite enough to explain all the features of what constitutes a quantum measurement - but it's a fantastic step closer to fully solving that puzzle, I reckon.
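The scaling in the coherent-state example can be sketched numerically. This is only a toy rendering of the standard result, not the actual master-equation solution: I just assume the off-diagonal coherence decays as exp(-gamma * d^2 * t), with d the 'distance' |alpha - beta| and gamma an arbitrary assumed damping rate.

```python
import numpy as np

gamma = 1.0    # assumed damping rate (arbitrary units)
t = 1e-3       # a short time, same units

# Coherence surviving at time t for increasingly 'macroscopic' superpositions:
for d in (0.1, 1.0, 10.0):
    coherence = np.exp(-gamma * d**2 * t)
    print(f"|alpha - beta| = {d}: surviving coherence = {coherence:.6f}")
```

Even for this short time, the d = 10 superposition has lost roughly 10% of its coherence while the d = 0.1 one is essentially untouched; the quadratic dependence on d is why macroscopic superpositions decohere so absurdly fast.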
 
Simon Phoenix said:
That's definitely not correct. One of the motivations for the decoherence treatment was to explain how we can get something that looks like a non-unitary irreversible process from a theory (QM) in which the basic interactions are all unitary (and reversible). [...]

Appreciated your explanations. What I was saying was that Copenhagen was not compatible with the idea that the environment is also quantum. Hence I'm just saying the two are not compatible, and asking whether we should consider Copenhagen outdated once and for all.

By the way, your first paragraph was not exactly right, in that decoherence doesn't automatically mean the Born rule applies. It just delocalizes the phases; to get one outcome, you still need the Born rule.
 
fanieh said:
What I was saying was that Copenhagen was not compatible with the idea the environment is also quantum.

And I asked you for a reference for this statement. Either provide one or stop making this claim.

The standard theory of decoherence, which Simon Phoenix described, is interpretation neutral. It doesn't take any position on whether collapse happens or not.
 
fanieh said:
What I was saying was that Copenhagen was not compatible with the idea that the environment is also quantum. Hence I'm just saying the two are not compatible, and asking whether we should consider Copenhagen outdated once and for all.

By the way, your first paragraph was not exactly right, in that decoherence doesn't automatically mean the Born rule applies. It just delocalizes the phases; to get one outcome, you still need the Born rule.

Yes I misunderstood your initial post slightly - sorry. The issue is to understand how we can get something that looks like 'collapse' from a theory in which all the interactions are governed by time-reversible laws of evolution - so strictly speaking that's mathematically impossible. What we get is something that, for all practical purposes, looks a lot like collapse. It neatly explains why we don't see 'macroscopic' superpositions in the real world.

The issue of why we get one particular outcome (an eigenstate) is not wholly explained within the decoherence treatment - which is one of the reasons I'm not convinced it's a full solution to the so-called 'measurement problem' in QM. I would say that it has to be a big part of that solution though.
 
fanieh said:
Appreciated your explanations. What I was saying was that Copenhagen was not compatible with the idea the environment is also quantum.

That is false.

It is silent on the issue. It simply assumes a classical world exists and observations appear in it. It says nothing about what that world is, i.e. whether it's quantum or not. We now know everything is quantum, even the classical world of Copenhagen.

Thanks
Bill
 
PeterDonis said:
And I asked you for a reference for this statement. Either provide one or stop making this claim.

The standard theory of decoherence, which Simon Phoenix described, is interpretation neutral. It doesn't take any position on whether collapse happens or not.

Here:

https://arxiv.org/pdf/quant-ph/0312059v4.pdf

"The Copenhagen interpretation additionally postulates
that classicality is not to be derived from quantum
mechanics, for example, as the macroscopic limit
of an underlying quantum structure (as is in some sense
assumed, but not explicitely derived, in the standard interpretation),
but instead that it be viewed as an indispensable
and irreducible element of a complete quantum
theory—and, in fact, be considered as a concept prior to
quantum theory. In particular, the Copenhagen interpretation
assumes the existence of macroscopic measurement
apparatuses that obey classical physics and that
are not supposed to be described in quantum mechanical
terms (in sharp contrast to the von Neumann measurement
scheme, which rather belongs to the standard
interpretation); such a classical apparatus is considered
necessary in order to make quantum-mechanical phenomena
accessible to us in terms of the “classical” world of
our experience. This strict dualism between the system
S, to be described by quantum mechanics, and the apparatus
A, obeying classical physics, also entails the existence
of an essentially fixed boundary between S and A,
which separates the microworld from the macroworld (the
“Heisenberg cut”). This boundary cannot be moved significantly
without destroying the observed phenomenon
(i.e., the full interacting compound SA).Especially in the light of the insights gained from decoherence
it seems impossible to uphold the notion of a
fixed quantum–classical boundary on a fundamental level
of the theory."

How is Copenhagen, based on a classical environment, compatible with an environment that is quantum? I'm running out of words to say in a lecture, so I hope you can share how they are still related. Maybe one can say Copenhagen has less explanatory power, even if the mathematics can be fitted to either?
 
  • #10
Simon Phoenix said:
so strictly speaking that's mathematically impossible. What we get is something that, for all practical purposes, looks a lot like collapse.

Technically it's how an improper mixture becomes a proper one. The details can't be discussed in a post - you really need to study a book. THE standard one is:
https://www.amazon.com/dp/3540357734/?tag=pfamazon01-20
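As a minimal numerical sketch of the improper/proper point (my own toy example, not from the book): the reduced matrix you get by tracing out an environment can be numerically identical to a genuine ignorance mixture, which is exactly why no local statistics can distinguish them.

```python
import numpy as np

# A 'proper' mixture: genuine classical ignorance, 50/50 over |0> and |1>.
ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
rho_proper = 0.5 * np.outer(ket0, ket0) + 0.5 * np.outer(ket1, ket1)

# An 'improper' mixture: trace the environment out of the entangled
# pure state (|00> + |11>)/sqrt(2).
psi = np.zeros(4)
psi[0] = psi[3] = 1 / np.sqrt(2)
rho_improper = np.einsum('iaja->ij', np.outer(psi, psi).reshape(2, 2, 2, 2))

# Identical matrices, hence identical measurement statistics on the system;
# the conceptual difference is what the conversion argument is about.
print(np.allclose(rho_proper, rho_improper))   # True
```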

Thanks
Bill
 
  • #11
fanieh said:
"The Copenhagen interpretation additionally postulates
that classicality is not to be derived from quantum
mechanics,

That is false - it makes no such assumption.

I can give a link that carefully explains the details but unfortunately the source strictly speaking doesn't meet our guidelines.

The textbook I mentioned does meet our guidelines and explains it all in excruciating detail as well as some of the issues that still remain.

Thanks
Bill
 
  • #12
bhobba said:
Technically it's how an improper mixture becomes a proper one. The details can't be discussed in a post - you really need to study a book. THE standard one is:
https://www.amazon.com/dp/3540357734/?tag=pfamazon01-20

Ooh - that looks like a fantastic book - I'm sure you've linked to it before but in my typically slow and plodding way this is the first time I've clicked on the link :sorry:
 
  • #13
Simon Phoenix said:
Ooh - that looks like a fantastic book - I'm sure you've linked to it before but in my typically slow and plodding way this is the first time I've clicked on the link :sorry:

It is THE book on the issue - worth every cent IMHO. But it is no easy read - you need at least the background of Griffiths or similar to understand it, though with effort you could likely get by with Susskind.

Thanks
Bill
 
  • #14
bhobba said:
But it is no easy read - you need at least the background of Griffiths or similar to understand it, though with effort you could likely get by with Susskind

I think I might just about manage :wink:
 
  • #15
bhobba said:
That is false - it makes no such assumption.

I can give a link that carefully explains the details but unfortunately the source strictly speaking doesn't meet our guidelines.

The textbook I mentioned does meet our guidelines and explains it all in excruciating detail as well as some of the issues that still remain.

Thanks
Bill

Hey, that paper was written by the same author as the book! The paper is a condensed version of his book. I have the book. I'll find the relevant passages.
 
  • #16
bhobba said:
That is false - it makes no such assumption. [...]

OK. Here's the very passage from the textbook you suggested to Simon, https://www.amazon.com/dp/3540357734/?tag=pfamazon01-20
i.e. Maximilian Schlosshauer's "Decoherence and the Quantum-to-Classical Transition". On page 335, Maximilian wrote:

"The Copenhagen interpretation additionally postulates that classicality is not to be derived from quantum mechanics..."

So Maximilian is wrong??
 
  • #17
fanieh said:
"The Copenhagen interpretation additionally postulates that classicality is not to be derived from quantum mechanics..." So Maximilian is wrong??

On the surface, yes - but context is everything, e.g. there are a number of variants of Copenhagen and different contexts for what classicality means: some take classical to mean exactly how macroscopic objects emerged, which Copenhagen does not explain.

That book is my bible on decoherence, and I know for a fact that is not his view, which is that there are three parts to the measurement problem:

1. The problem of non-observance of interference.
2. How the preferred basis emerges. This is why, for example, classical objects nearly always have a definite position.
3. How an improper mixed state becomes a proper one.

The first two are explained by decoherence; the third some interpretations simply assume, while others explain.

The three taken together imply classicality, because they explain how objects for all practical purposes have definite values, which is what classical physics is.

Now, I could give you the page where he says that, but in this type of situation, where context can be a problem, I would like you to explain to me, in your own words (not a quote), how objects with definite values are not classical. In particular, exactly what do you mean by classicality?

Just as a hint to what might be going on: there is the so-called factorisation problem, which IMHO is way overhyped, but if it's your worry it requires another thread:
https://arxiv.org/abs/1210.8447

Thanks
Bill
 
  • #18
fanieh said:
Especially in the light of the insights gained from decoherence it seems impossible to uphold the notion of a fixed quantum–classical boundary on a fundamental level of the theory."

The above is very, very true.

Now I ask you to think a bit. If there is no fixed boundary, and there isn't (everything is quantum), what's a classical object to begin with?

Once you nut it out (not a quote, but in your own words), please let us know what a classical object is, for your claims to make sense.

Hint - the divide is a human construct and lies outside the theory - it's also related to the factorisation issue. Either way it's a very difficult problem, but a classical object usually has some common characteristics, such as exact position etc.

In modern times we don't worry about it - it's simply when decoherence occurs.

Thanks
Bill
 
  • #19
PeterDonis said:
Please provide a reference for this statement. It doesn't look correct to me.

It's not. The environment is treated via QM, and the interaction randomises phases, so the process is effectively irreversible.

Thanks
Bill
 
  • #21
bhobba said:
Now I ask you to think a bit. If there is no fixed boundary, and there isn't, (everything is quantum) what's a classical object to begin with? [...]

Thank you for your message. I will reread the book and consider what you said. Then I'll get back to you with a broader frame of reference.
 
  • #23
fanieh said:
Then I'll get back to you with a broader frame of reference.

I would suggest forgetting Copenhagen to begin with. It really is old hat:
http://scitation.aip.org/content/aip/magazine/physicstoday/article/58/11/10.1063/1.2155755

We now know both Bohr and Einstein were wrong so its a rather hard issue to debate.

Thanks
Bill
 
  • #24
bhobba said:
I would suggest forgetting Copenhagen to begin with. It really is old hat:
http://scitation.aip.org/content/aip/magazine/physicstoday/article/58/11/10.1063/1.2155755

We now know both Bohr and Einstein were wrong so its a rather hard issue to debate.

Thanks
Bill

Is there any solid experimental proof that the environment is really entangled with the system? Something that would convince Bohr (if he were alive today) that the environment should also be treated quantum mechanically?
 
  • #25
fanieh said:
Is there any solid experimental proof that the environment is really entangled with the system? Something that would convince Bohr (if he were alive today) that the environment should also be treated quantum mechanically?

We have tons of models, but I do not know their experimental status. For example, a few stray photons from the CMBR have been calculated to give a dust particle a definite position. Has it been experimentally confirmed? :sorry:

Thanks
Bill
 
  • #26
fanieh said:
Is there any solid experimental proof that the environment is really entangled with the system? Something that would convince Bohr (if he were alive today) that the environment should also be treated quantum mechanically?
There are experiments in mesoscopic systems (i.e. systems somewhere between microscopic and macroscopic) which show that decoherence is an actual continuous process. Experiments demonstrate that after a finite time of interaction with the environment the system can be in a partially decohered state.
 
  • #27
Demystifier said:
There are experiments in mesoscopic systems (i.e. systems somewhere between microscopic and macroscopic) which show that decoherence is an actual continuous process. Experiments demonstrate that after a finite time of interaction with the environment the system can be in a partially decohered state.

When one is walking in the street wearing a hat, is that hat supposed to be entangled with all the cars and all the buildings and everything in the environment, even the clouds? Or is the hat supposed to be entangled only with the CMBR, the air, or some photons? I know photons from all the objects in the environment hit our retina so we can see them. But what would it mean for our hat to be entangled with all the things out there? Is the hat really entangled with every particle of the car's surface, or only with the photons reflected by the car, and would this be enough to cause a superposition of hat and car such that they form an entangled state?
 
  • #28
fanieh said:
is that hat supposed to be entangled with all the cars and all the buildings and everything in the environment, even the clouds?
Yes, your hat is entangled with pretty much everything, but with none of it very much. In practice, entanglement can only be observed if it is not too far from maximal entanglement. Therefore, your hat appears as if it was not entangled at all.
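A toy model of "entangled, but not very much" (my own illustration, not a real hat calculation): take a 'hat' qubit very weakly entangled with a single environment qubit and check the purity of its reduced state.

```python
import numpy as np

# |psi> = sqrt(1 - eps^2)|00> + eps|11>, with eps << 1: barely entangled.
eps = 1e-3
psi = np.zeros(4)
psi[0], psi[3] = np.sqrt(1 - eps**2), eps

# Reduced state of the 'hat' after tracing out the environment qubit.
rho_hat = np.einsum('iaja->ij', np.outer(psi, psi).reshape(2, 2, 2, 2))

# Purity tr(rho^2) is exactly 1 for an unentangled (pure) state.
purity = np.trace(rho_hat @ rho_hat)
print(purity)   # ~0.999998: the hat looks as if it were not entangled at all
```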
 
  • #29
Demystifier said:
Yes, your hat is entangled with pretty much everything, but with none of it very much. In practice, entanglement can only be observed if it is not too far from maximal entanglement. Therefore, your hat appears as if it was not entangled at all.

The hat can only be maximally entangled if there were only one particle in the environment, but with billions and billions of particles in the environment, the entanglement is divided among billions too. So can this be enough to put the hat in a classical state, instead of a slight superposition of the position observable? Then why worry about how the improper mixed state of the hat becomes a proper mixed state, as per Maximilian? Isn't the entanglement with billions of particles enough to put it in a proper mixed state (a classical position)?
 
  • #30
fanieh said:
The hat can only be maximally entangled if there were only one particle in the environment, but with billions and billions of particles in the environment, the entanglement is divided among billions too. So can this be enough to put the hat in a classical state, instead of a slight superposition of the position observable? Then why worry about how the improper mixed state of the hat becomes a proper mixed state, as per Maximilian? Isn't the entanglement with billions of particles enough to put it in a proper mixed state (a classical position)?
You are mixing two different meanings of the word "classical state". This can be best understood with an example. Consider a classical coin with the classical probability distribution
$$p(head)=\frac{1}{2}, \;\;\; p(tail)=\frac{1}{2}.$$
Knowing only this, can you tell what is the classical state of the coin?
 
  • #31
Demystifier said:
You are mixing two different meanings of the word "classical state". This can be best understood with an example. Consider a classical coin with the classical probability distribution
$$p(head)=\frac{1}{2}, \;\;\; p(tail)=\frac{1}{2}.$$
Knowing only this, can you tell what is the classical state of the coin?

1/2?
I was asking whether you no longer need the Born rule when the hat is entangled with billions of particles in the environment. Can that artificially create the Born rule, or trap the hat from all sides such that it's like being in a position eigenstate?
 
  • #32
fanieh said:
1/2?
Nope.

fanieh said:
I was asking whether you no longer need the Born rule when the hat is entangled with billions of particles in the environment
You need the Born rule even then.
 
  • Like
Likes bhobba
  • #33
Demystifier said:
Nope.

What has this got to do with my statement "the billions of entanglement is enough to put it in proper mixed state (position that is classical)?" In your example, the proper mixed state is head or tail, maybe, but what is the 1/2? For the hat, the proper mixed state in the position basis is a position. How are there two meanings of "classical state"? Kindly elaborate, thank you.

You need the Born rule even then.
 
  • #34
fanieh said:
Can that artificially create the Born rule, or trap the hat from all sides such that it's like being in a position eigenstate?
No, the entanglement does not work in that way. Entanglement always involves a superposition, which is precisely what you want to avoid to get a "classical" state.
 
  • #35
Demystifier said:
Nope.

Let me resend the following. I think when you receive messages directly from Physics Forums, any edits won't be updated for you, so you might have missed the following because of the wrong "quote" tags. About your "Nope":

What has this got to do with my statement "the billions of entanglement is enough to put it in proper mixed state (position that is classical)?" In your example, the proper mixed state is head or tail, maybe, but what is the 1/2? For the hat, the proper mixed state in the position basis is a position. How are there two meanings of "classical state"? Kindly elaborate, thank you.
 
  • #36
Demystifier said:
No, the entanglement does not work in that way. Entanglement always involves a superposition, which is precisely what you want to avoid to get a "classical" state.

I know, without the Born rule. So the hat is still in a superposition of position, but only spread perhaps about an inch (or is it mere millimetres?) from its original position, instead of meters away from the person's head. How does one compute the range? Have you done it?
 
  • #37
fanieh said:
How are there two meanings of "classical state"? Kindly elaborate, thank you.
There is a classical deterministic state and a classical probability state. In the case of a coin, one classical deterministic state is head, another classical deterministic state is tail. A classical probability state is ##p(head)=1/2, \; p(tail)=1/2##. Decoherence, i.e. entanglement with the environment, explains the origin of classical probability states, but it does not explain the origin of classical deterministic states.
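This distinction can be written out in matrix form (my toy rendering, using the coin as a qubit):

```python
import numpy as np

# The classical *probability* state that decoherence delivers:
rho_decohered = np.diag([0.5, 0.5])

# The two classical *deterministic* states are different objects entirely:
rho_head = np.diag([1.0, 0.0])
rho_tail = np.diag([0.0, 1.0])

# Decoherence alone never turns the first matrix into one of the latter two;
# selecting 'head' with probability tr(rho P_head) is where the Born rule enters.
p_head = np.trace(rho_decohered @ rho_head)
print(p_head)   # 0.5
```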
 
  • #38
bhobba said:
I would suggest forgetting Copenhagen to begin with. It really is old hat:
http://scitation.aip.org/content/aip/magazine/physicstoday/article/58/11/10.1063/1.2155755

We now know both Bohr and Einstein were wrong so its a rather hard issue to debate.

Thanks
Bil

Let's have a last word from PeterDonis about whether we should forget Copenhagen to begin with (as bhobba suggested above). I think PeterDonis was saying that even if the environment were treated classically, you would still get the same experimental results, or that it has the same mathematical formalism? Or is it correct to say in front of an audience that Copenhagen has limited power, or is more limited in scope (what is the right thing to say)?
 
Last edited by a moderator:
  • #39
fanieh said:
whether we should forget Copenhagen to begin with

I agree with Demystifier's point that the term "Copenhagen interpretation" doesn't actually name a specific well-defined interpretation. But as far as the mathematical formalism of QM is concerned, there is no such thing as "interpretation"; the formalism is what it is.

fanieh said:
I think Peterdonis was saying even if the environment was treated classically.. you would still get the same experimental result or it has the same mathematical formalism?

I was saying that "if the environment was treated classically" is not something that is done in the mathematical formalism. In many simple cases the environment doesn't need to be treated at all in the formalism; but in cases where it is (such as when you are trying to model decoherence for a macroscopic object), the environment is treated quantum mechanically.
 
  • #40
PeterDonis said:
I agree with Demystifier's point that the term "Copenhagen interpretation" doesn't actually name a specific well-defined interpretation. But as far as the mathematical formalism of QM is concerned, there is no such thing as "interpretation"; the formalism is what it is.
I was saying that "if the environment was treated classically" is not something that is done in the mathematical formalism. In many simple cases the environment doesn't need to be treated at all in the formalism; but in cases where it is (such as when you are trying to model decoherence for a macroscopic object), the environment is treated quantum mechanically.

Ok, so it's a matter of the domain of applicability. When you want to compute the range of your hat's superposition in the position basis as it interacts with all objects in the universe, you need a mathematical formalism which has this ability. Is it right to say Zeh and Zurek make use of the mathematical formalism that was not there in the old mathematical formalism when Bohr developed Copenhagen?
 
  • #41
fanieh said:
Is it right to say Zeh and Zurek make use of the mathematical formalism that was not there in the old mathematical formalism when Bohr developed Copenhagen?

I'm not sure. The basic formalism hasn't changed: you can either use wave functions and differential operators or state vectors and matrices. Both of those formalisms (Schrödinger and Heisenberg) were developed in the mid-1920s. What has changed is the amount of work that has been done to explore how to apply these formalisms to different cases.
 
  • #42
PeterDonis said:
I'm not sure. The basic formalism hasn't changed: you can either use wave functions and differential operators or state vectors and matrices. Both of those formalisms (Schrödinger and Heisenberg) were developed in the mid-1920s. What has changed is the amount of work that has been done to explore how to apply these formalisms to different cases.

So decoherence can be described by just wave functions. Yet the density matrix has its natural home in decoherence.. is the reason that the density matrix's probabilities are more natural in the formalism? Can you give 3 reasons why the density matrix must go hand in hand with decoherence, or why the two fit together so naturally? Or can one teach decoherence by totally bypassing the density matrix.. e.g. instead of saying "trace out the environment", just say collapse or the Born rule is engaged?
 
  • #43
fanieh said:
decoherence can be described by just wave functions

Actually I should have added density matrices to that; I'm not sure wave functions (or state vectors) by themselves are sufficient, because a key point about decoherence is that you don't know the state of the environment, so you have to trace over it. That requires using density matrices. AFAIK those weren't introduced until von Neumann's classic textbook in the early 1930s; so perhaps it is true that the original formalism wasn't enough by itself.

fanieh said:
can one teach decoherence by totally bypassing the density matrix

I don't think so, but I am not an expert in the field.
 
  • #44
Demystifier said:
There is a classical deterministic state and a classical probability state. In the case of a coin, one classical deterministic state is head, another classical deterministic state is tail. A classical probability state is ##p(head)=1/2, \; p(tail)=1/2##. Decoherence, i.e. entanglement with the environment, explains the origin of classical probability states, but it does not explain the origin of classical deterministic states.

For you: is a proper mixed state a classical probability state and not a classical deterministic state? The reason I thought a proper mixed state was a classical deterministic state was that bhobba wrote the following to me in another thread. Is he wrong when he stated:

"There are a number of ways of preparing mixed states. One way is simply to take a state and randomly present it with probability pi. Such are called proper mixed states. With proper mixed states everything is sweet - objective reality exists before observation - much of quantum weirdness disappears. But that is just one way of doing it. Another way is to take a state and subject it to the process of decoherence - you get exactly the same mixed state and there is no way to tell the difference - no way at all. But because it's prepared differently than a proper mixed state it's called an improper one. The trouble is you can't say it's in the state prior to observation - there is simply no way to tell. If not, quantum weirdness remains. This is the modern version of the measurement problem: what causes an improper mixture to become a proper one. Colloquially, it's why we get any outcomes at all. With different interpretations like MW and BM it's trivial; with others like ensemble it's much more controversial - even to the point of whether it's a problem at all."

bhobba stated that in proper mixed states, objective reality exists before observation. But in your terminology it's not in a classical deterministic state, only in a classical probability state. So is bhobba really wrong about it?
 
  • #45
fanieh said:
Bhobba stated that in proper mixed states, objective reality exists before observation. But in your terminology. It's not in classical determistic state, but only in classical probability state. So Bhobba is really wrong about it?

You are confused about basic things.

Before going any further, in your own words, define a pure state and a mixed state. Then apply the Born rule to the mixed state and see what happens.

Once that is done we can proceed,

Thanks
Bill
 
  • #46
PeterDonis said:
I don't think so, but I am not an expert in the field.

It's impossible. Decoherence converts a superposition to a mixed state. It harks back to entanglement: if you observe one part of an entangled system, it's in a mixed state.

Thanks
Bill
 
  • #47
bhobba said:
You are confused about basic things.

Before going any further, in your own words, define a pure state and a mixed state. Then apply the Born rule to the mixed state and see what happens.

Once that is done we can proceed,

Thanks
Bill

You shared this example yourself: the double slit without the detector screen is in a pure state, then the screen causes it to be in a mixed state (to be in the left or right slit), then the Born rule is applied, which makes the electron appear at an eigenposition. But note that a mixed state by itself doesn't mean the Born rule has been applied. Prior to Demystifier's message I thought a proper mixed state was a classical state, due to your description that "With proper mixed states everything is sweet - objective reality exists before observation - much of quantum weirdness disappears." I think you must correct it to "With proper mixed states PLUS THE BORN RULE APPLIED, everything is sweet - objective reality exists before observation - much of quantum weirdness disappears"! Am I right, Demystifier?
 
  • #48
fanieh said:
Or can one teach decoherence by totally bypassing the density matrix

Hi fanieh, let me first off apologize in advance because I'm struggling to understand where the source of your difficulty is and so my comments may not hit the mark properly.

It looks to me like you're mixing up (sorry for the unavoidable pun) 'density matrix' and 'decoherence' a bit. The first thing you've got to get clear in your head is the answer to the question "why do we need density matrices in QM at all?" Forget environments and 'classicality' for the moment - just focus on QM.

When we write down a pure state description for some entity, say ##| \psi \rangle##, then whatever we might interpret this state to mean (interpretation dependent) it is the most complete description available to us that describes the properties of that entity consistent with the label ##\psi##. There's no other mathematical object that does any 'better'.

But how does QM cope when we are less certain about the state? Suppose we're trying to prepare 2-level atoms in the excited state ##|e \rangle##, but our preparation procedure is imperfect and we get atoms in the ground state ##|g \rangle## 10% of the time - and for any given atom we just don't know without doing a measurement whether we have ##|e \rangle## or ##|g \rangle ##.

We can't now write down a pure state for any given atom - even though we know our preparation procedure has produced pure states every time - we simply don't know which of those pure states has been prepared. How are we to treat such an object, or set of objects, within the formalism of QM? In this situation the best we can do is to say something like "for each atom it is in the state ##| e \rangle## with probability 0.9 and in the state ##|g \rangle## with probability 0.1".

The best way to do that is to write down a density operator for each atom that looks like $$\rho = 0.1 |g \rangle \langle g | + 0.9 |e \rangle \langle e |$$ We can think of this as a way to deal with our 'ignorance' of the initial conditions. If you have this ignorance of the preparation conditions then this is the neatest way to describe the properties of your atoms.

You could also calculate expectation values for measurements assuming pure states ##| e \rangle## and then pure states ##| g \rangle## and then add them with the appropriate weightings 0.9 and 0.1 to get your final expectation value. When you're making a measurement you still need to apply the rules of QM (for example, the Born rule) to predict the results whether you use a density matrix approach or this latter approach of just weighting pure state results.
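To make that equivalence concrete, here is a minimal numpy sketch (my own illustration, not from the thread; the basis vectors and the choice of ##\sigma_z## as the observable are arbitrary). It computes the expectation value once via ##\mathrm{Tr}(\rho A)## and once by weighting the two pure-state expectations, and the results agree.

```python
import numpy as np

# Basis: |e> = [1, 0], |g> = [0, 1]
e = np.array([1.0, 0.0])
g = np.array([0.0, 1.0])

# Proper mixture from imperfect preparation: 90% excited, 10% ground
rho = 0.9 * np.outer(e, e) + 0.1 * np.outer(g, g)

# An arbitrary Hermitian observable, here sigma_z in this basis
A = np.array([[1.0, 0.0], [0.0, -1.0]])

# Expectation via the density matrix: <A> = Tr(rho A)
ev_rho = np.trace(rho @ A)

# Same answer by weighting the pure-state expectations
ev_weighted = 0.9 * (e @ A @ e) + 0.1 * (g @ A @ g)

print(ev_rho, ev_weighted)  # both 0.8
```

Either route gives the same number; the density matrix just packages the classical weights and the quantum states into one object.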

Now let's imagine a different situation involving 2-level atoms. This time I'm going to imagine Alice having a perfect preparation procedure that prepares every atom in its excited state ##|e \rangle##. She's now going to fire that atom through a high-Q (lossless) cavity which has nothing in it (the field inside is in its vacuum state, which is a pure state). If we've got the atomic transition frequency matching the cavity mode frequency then there's going to be an interaction between the atom and the field. We can tailor the cavity transit time such that there's going to be ##\frac 1 {2}## probability of finding the atom in its ground state after interaction with the cavity and ##\frac 1 {2}## probability of finding the atom in its excited state after it leaves the cavity.

Suppose Alice now sends these atoms that have been through the cavity on to Bob, but she doesn't make a measurement. What is the 'state' of the atoms that get to Bob? The combined state of the atom-field system is actually given by the pure state $$| \psi \rangle = \frac 1 { \sqrt 2 } ( |e,0 \rangle + |g, 1 \rangle)$$ Conceptually, we now cannot say that Bob's atoms are in pure states - there's no way that we could legitimately make the statement that they're actually in the pure state ##|e \rangle## with probability ##\frac 1 {2}## and the pure state ##|g \rangle## with probability ##\frac 1 {2}##.

What we can say is that, if we're interested in the properties of the atoms alone, then we can treat them as if we had them in the pure state ##|e \rangle## with probability ##\frac 1 {2}## and the pure state ##|g \rangle## with probability ##\frac 1 {2}##.

The beauty of the density matrix approach is that we can use the same formalism for either the first situation in which we had an imperfect preparation procedure AND the second situation. Now in the second situation Bob's 'ignorance' is coming about because he doesn't have access to the full atom-field system, he only has a subset, or a component, to work with. So if you think about it enough it's clear that we're going to have to have a formalism that looks the same (as far as Bob is concerned) for both types of situation above. So physically we can see why 'proper' and 'improper' mixtures have to be equivalent when we're only interested in the properties of subsets of larger systems.

Now - what's the substantive difference here between the field and an environment? In the second situation above we ignore or 'trace out' over the field to get a description that works for the atoms alone. But that's exactly what we do when we trace out over environmental variables in the decoherence approach. The only difference is really that the field is a very, very 'small' environment. The entanglement between the atom and field is shared only between 2 objects. An 'environment' does not consist of just one other single entity but squillions of them and so the entanglement is shared out. We could think of the entanglement as a lump of butter - in the atom-field case it's just like spreading a big chunk of butter on a tiny drop scone. In the atom-environment case it would be like spreading the same chunk over many, many slices of toast - we'd get to the point with enough toast that we didn't even know it had been buttered!
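The atom-field example can be checked with a few lines of numpy (again just a sketch I put together; the reshape/einsum step is one standard way to take a partial trace). Tracing the field out of ##| \psi \rangle = \frac 1 { \sqrt 2 } ( |e,0 \rangle + |g, 1 \rangle)## leaves the atom with exactly the same density matrix as a proper 50/50 mixture of ##|e \rangle## and ##|g \rangle##.

```python
import numpy as np

# Atom basis |e>, |g>; field basis |0>, |1> (photon number)
e, g = np.array([1.0, 0.0]), np.array([0.0, 1.0])
f0, f1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# Entangled atom-field state |psi> = (|e,0> + |g,1>)/sqrt(2)
psi = (np.kron(e, f0) + np.kron(g, f1)) / np.sqrt(2)

# Density matrix of the joint system (still a pure state)
rho_full = np.outer(psi, psi)

# Partial trace over the field: reshape to (atom, field, atom, field)
# indices and contract the two field indices
rho_atom = np.einsum('ajbj->ab', rho_full.reshape(2, 2, 2, 2))

# A proper 50/50 mixture of |e> and |g>, for comparison
rho_proper = 0.5 * np.outer(e, e) + 0.5 * np.outer(g, g)

print(np.allclose(rho_atom, rho_proper))  # True: locally indistinguishable
```

The reduced state has purity ##\mathrm{Tr}(\rho^2) = \frac 1 {2}##, i.e. it is maximally mixed, even though the joint atom-field state is pure - which is exactly why Bob, with access to the atoms alone, cannot tell this improper mixture from a proper one.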
 
  • #49
Demystifier said:
There is a classical deterministic state and a classical probability state. In the case of a coin, one classical deterministic state is head, another classical deterministic state is tail. A classical probability state is ##p(head)=1/2, \; p(tail)=1/2##. Decoherence, i.e. entanglement with the environment, explains the origin of classical probability states, but it does not explain the origin of classical deterministic states.

Maybe what you meant is the following (so as to be compatible with bhobba's statements):

Decoherence = classical probability state
Proper mixed state = classical probability state + classical deterministic state

meaning a proper mixed state already has the Born rule applied..

prior to your message I wrote: "Then why worry how the improper mixed state of the hat becomes a proper mixed state as per Maximilian.. the billions of entanglements is enough to put it in a proper mixed state (a position that is classical)?"

What I was asking is whether the projections (or eigenpositions) from the Born rule can be imitated by surrounding the hat with all kinds of entanglement such that it would be locked in position.. I was well aware of the Born rule prior to this. And your reply was:

"You are mixing two different meanings of the word "classical state". You refer to one meaning of classical state as "head", "tail" and a second meaning of classical state as the probability of 1/2."

You are saying that without the Born rule the hat won't exist in the first place, even if decoherence has produced the probability (the classical probability state).. please confirm if this is what you mean. Thanks.
 
  • #50
fanieh said:
Maybe what you meant is the following (so as to be compatible with Bhobba statements):

Decoherence = classical probability state
Proper mixed state = classical probability state + classical deterministic state

meaning proper mixed state already has born rule applied..
Yes, that's it. Very concise and elegant explanation. :smile:
 