How Does Environmentally Induced Decoherence Affect Quantum State Reduction?

  • Thread starter Feeble Wonk
  • Start date
  • Tags
    Decoherence
In summary: The unitary evolution of the composite (system plus environment) is exact and keeps the composite in a pure, zero-entropy state. The reduced density operator of the system alone is mixed and has higher entropy, and the same holds for the reduced density operator of the environment alone; only the composite remains pure, with zero total entropy. Environmentally induced decoherence thus describes how the interaction between a system and its environment makes the system appear "mixed" and increase in entropy, while the composite remains in a pure, zero-entropy state.
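The claim in this summary can be checked in a few lines. Below is a minimal numerical sketch (my own illustration, not from the thread), assuming a two-qubit Bell state as a stand-in for "system + environment"; the variable names are illustrative.

```python
# A minimal numerical sketch: a pure two-qubit state stands in for system + environment.
import numpy as np

def von_neumann_entropy(rho):
    """S = -Tr(rho log2 rho); eigenvalues at numerical zero are skipped."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

# Pure composite state |Psi> = (|00> + |11>)/sqrt(2)
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho_composite = np.outer(psi, psi.conj())

# Reduced density operators via partial trace; indices after reshape: (s, e, s', e')
rho4 = rho_composite.reshape(2, 2, 2, 2)
rho_system = np.einsum('iaja->ij', rho4)       # trace over the environment
rho_environment = np.einsum('aiaj->ij', rho4)  # trace over the system

print(von_neumann_entropy(rho_composite))    # ~0.0 : composite is pure
print(von_neumann_entropy(rho_system))       # ~1.0 : reduced state is mixed
print(von_neumann_entropy(rho_environment))  # ~1.0 : reduced state is mixed
```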
  • #141
stevendaryl said:
At the intermediate scale of cats, I'm not sure what formalism is appropriate.
That of statistical mechanics, of course!
 
  • Like
Likes bhobba
  • #142
stevendaryl said:
I'm a little unclear as to what you mean by this. Are you just saying that because the system of interest is constantly interacting with the environment (the electromagnetic field), you can't use unitary evolution, because that only describes an isolated system?
I found this paper recently and it seems to address this issue in an understandable way.

The interesting bit for me is part 4 "Formal Treatment of Decoherence"

Decoherence-Free Subspaces and Subsystems
Daniel A. Lidar and K. Birgitta Whaley

Abstract: Decoherence is the phenomenon of non-unitary dynamics that arises as a consequence of coupling between a system and its environment. It has important, harmful implications for quantum information processing, and various solutions to the problem have been proposed. Here we provide a detailed review of the theory of decoherence-free subspaces and subsystems, focusing on their usefulness for preservation of quantum information.

http://arxiv.org/abs/quant-ph/0301032v1.pdf

I would like to know what @A. Neumaier thinks of the paper.
 
Last edited:
  • #143
A. Neumaier said:
Unitary dynamics for small quantum systems is extremely well disproved - people in quantum optics always have to work with dissipative, nonunitary dynamics to describe their small systems quantitatively. Thus it is an experimental fact that small quantum systems cannot be described by unitary evolution.
The reason is that they are almost never isolated enough to justify the unitary approximation. The state reduction or collapse accounts for that.

On the other hand, if one makes a quantum system big enough that its interaction with the neglected environment can be ignored (which is often the case in macroscopic situations) or can be described by classical external interaction terms then unitary dynamics is valid to a very good approximation.

Thus state reduction (= collapse) is not in contradiction with the unitary dynamics of an isolated system.
Well, the non-unitary dynamics doesn't disprove quantum dynamics, because it's derived from it. That's not what I mean. I'm only against using the notion of "quantum jumps". Also in stochastic equations there are no jumps, only fluctuating (generalized) forces. For me, "quantum jumps" à la Bohr imply that there is no dynamical law covering these rapid transitions, but that is not the case for any dynamical equation, be it the fundamental unitary evolution of closed systems or an effective deterministic or stochastic equation for open systems.
 
  • #144
stevendaryl said:
I'm a little unclear as to what you mean by this. Are you just saying that because the system of interest is constantly interacting with the environment (the electromagnetic field), you can't use unitary evolution, because that only describes an isolated system?
Of course, the interaction with the electromagnetic field is, at the fundamental level, also described by unitary time evolution. QED is a QT like any other!
 
  • #145
stevendaryl said:
Yes. And it's also striking that an equal mixture of spin-up and spin-down in the z-direction leads to the same mixed state as an equal mixture of spin-up and spin-down in the x-direction.
Well, it's described by ##\hat{\rho}=\hat{1}/2##. There's no direction whatsoever. That's why it's called "unpolarized": the distribution must not single out any direction ;-)).
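For concreteness, here is a small numerical check of this point (my own sketch; the basis vectors are the standard spin-1/2 states): both equal mixtures give the same ##\hat{\rho}=\hat{1}/2##.

```python
# Equal z-mixture and equal x-mixture both give the unpolarized state 1/2.
import numpy as np

up_z, down_z = np.array([1.0, 0.0]), np.array([0.0, 1.0])
up_x = np.array([1.0, 1.0]) / np.sqrt(2)
down_x = np.array([1.0, -1.0]) / np.sqrt(2)

def equal_mixture(a, b):
    """Equal-weight statistical mixture of two pure states."""
    return 0.5 * np.outer(a, a.conj()) + 0.5 * np.outer(b, b.conj())

rho_z = equal_mixture(up_z, down_z)
rho_x = equal_mixture(up_x, down_x)

print(np.allclose(rho_z, rho_x))           # True: same mixed state
print(np.allclose(rho_z, np.eye(2) / 2))   # True: the unpolarized state 1/2
```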
 
  • #146
Mentz114 said:
http://arxiv.org/abs/quant-ph/0301032v1.pdf
I would like to know what @A. Neumaier thinks of the paper.
It uses an unconventionally broad notion of decoherence - which is usually reserved for the very fast decay of off-diagonal entries in a density matrix given as matrix elements between pointer states.

Decoherence-free subspaces (DFS) are what allow one, e.g., to consider the position and spin degrees of freedom in a Stern-Gerlach experiment to behave unitarily before the measurement. The experimental difficulty in quantum computing is constructing systems whose nonunitary evolution has nearly decoherence-free subspaces of huge dimension.
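To make the DFS idea concrete, here is a toy numerical model (my own sketch, not from the Lidar and Whaley paper; the generator and phase distribution are illustrative assumptions): two qubits subject to collective dephasing. Superpositions within the subspace spanned by |01> and |10> keep their purity, while (|00>+|11>)/sqrt(2) decoheres.

```python
# Toy model: collective dephasing exp(-i*phi*Sz) with a random phase phi per run.
# |01> and |10> share the Sz eigenvalue 0, so superpositions of them keep their
# relative phase (a decoherence-free subspace), while (|00>+|11>)/sqrt(2) dephases.
import numpy as np

rng = np.random.default_rng(0)
Z = np.diag([1.0, -1.0])
I2 = np.eye(2)
Sz = 0.5 * (np.kron(Z, I2) + np.kron(I2, Z))    # collective dephasing generator

def dephased_state(psi, n_runs=5000):
    """Average the density matrix over random collective phases."""
    rho = np.zeros((4, 4), dtype=complex)
    for _ in range(n_runs):
        phi = rng.uniform(0.0, 2.0 * np.pi)
        U = np.diag(np.exp(-1j * phi * np.diag(Sz)))    # Sz is diagonal here
        chi = U @ psi
        rho += np.outer(chi, chi.conj())
    return rho / n_runs

def purity(rho):
    return float(np.real(np.trace(rho @ rho)))

inside_dfs = np.array([0, 1, 1, 0], dtype=complex) / np.sqrt(2)   # (|01>+|10>)/sqrt(2)
outside_dfs = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)  # (|00>+|11>)/sqrt(2)

print(purity(dephased_state(inside_dfs)))   # ~1.0 : protected, stays pure
print(purity(dephased_state(outside_dfs)))  # ~0.5 : off-diagonals averaged away
```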
 
Last edited:
  • Like
Likes Mentz114
  • #147
vanhees71 said:
Where can I find this? I've only looked into the online version of the book a bit. The formula-to-text ratio is a bit too small to make it attractive enough for me to buy it yet. Is it nevertheless good? Of course Haroche is a Nobel Laureate, but that doesn't necessarily imply that he writes good textbooks ;-)).
I think many will agree that the best textbook on decoherence is the one by Schlosshauer:
https://www.amazon.com/dp/3540357734/?tag=pfamazon01-20
In particular, the formula-to-text ratio is higher than in Haroche. (After all, unlike Haroche, Schlosshauer is a theorist.) More importantly, the book explains why decoherence does not completely resolve the measurement problem, even though it significantly alleviates it.
 
Last edited by a moderator:
  • Like
Likes vanhees71 and bhobba
  • #148
Demystifier said:
I think many will agree that the best textbook on decoherence is the one by Schlosshauer:

:smile::smile::smile::smile::smile::smile::smile::smile::smile::smile:

I have a copy - its my bible.

Thanks
Bill
 
  • Like
Likes Demystifier
  • #149
bhobba said:
:smile::smile::smile::smile::smile::smile::smile::smile::smile::smile:

I have a copy - its my bible.
It's one of my bibles too. (The only Bible for decoherence, anyway.)

But for those who do not want to read the whole Bible, there is a shorter (and free) version by the same author:
http://lanl.arxiv.org/abs/quant-ph/0312059
The shorter version is even more direct in explaining what exactly is wrong with arguments in the literature that decoherence completely resolves the measurement problem.
 
  • Like
Likes Mentz114 and bhobba
  • #150
vanhees71 said:
Also in stochastic equations there are no jumps, only fluctuating (generalized) forces.
You seem to think that stochastic processes must always be given by stochastic differential equations. But this is not true.

Classically, there are two basic kinds of stochastic processes - jump processes and diffusion processes; then there are combinations of these, and by a theorem of Kolmogorov no other Markov processes are possible. A classical counting process is always a jump process. Thus it is no surprise that one has the same possibilities in the quantum case.
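As a concrete illustration of the two classes (my own sketch; the rate and step size are arbitrary choices): a counting process advances by unit jumps, while a diffusion process has continuous Gaussian increments.

```python
# A small simulation of the two basic classes of classical Markov processes.
import numpy as np

rng = np.random.default_rng(1)
T, dt = 10.0, 0.001
n = int(T / dt)

# Counting (jump) process: increments are 0 or 1, the path changes only by jumps.
rate = 2.0
jumps = rng.random(n) < rate * dt
counting_path = np.cumsum(jumps)

# Diffusion process: Gaussian increments of size sqrt(dt); the path is continuous
# but nowhere smooth.
wiener_path = np.cumsum(rng.normal(0.0, np.sqrt(dt), n))

print(counting_path[-1])           # an integer near rate*T = 20
print(round(wiener_path[-1], 3))   # a real number of order sqrt(T)
```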
 
  • #151
rubi said:
Well, I believe that experimenters can provide us with a bunch of numbers, but I don't really commit to anything beyond that. Apparently, something is really odd about nature, since the idea that we can assign numbers to all properties of its parts in a consistent way must be given up, and I have no idea what that implies for the interpretation of the measurement results. This is of course an interesting philosophical question, but physicists must accept it as a fact, just like they must accept the constancy of the speed of light.

So you still have a cut - why do you think this is different from the Heisenberg cut you reject?
 
  • #152
atyy said:
So you still have a cut - why do you think this is different from the Heisenberg cut you reject?
I don't have a cut. It would be consistent with QM if the present me believes to have measured a bunch of numbers and the future me concludes that the present me was in a superposition of having measured one set of numbers and another set of numbers. That can happen if the observables that correspond to the knowledge of the present me and the future me don't commute. Hopefully decoherence comes to the rescue and ensures that the present me and the future me don't disagree so much.
 
  • #153
rubi said:
I don't have a cut. It would be consistent with QM if the present me believes to have measured a bunch of numbers and the future me concludes that the present me was in a superposition of having measured one set of numbers and another set of numbers. That can happen if the observables that correspond to the knowledge of the present me and the future me don't commute. Hopefully decoherence comes to the rescue and ensures that the present me and the future me don't disagree so much.

Why doesn't the present you believe yourself to be in a superposition?
 
  • #154
rubi said:
I don't have a cut. It would be consistent with QM if the present me believes to have measured a bunch of numbers and the future me concludes that the present me was in a superposition of having measured one set of numbers and another set of numbers. That can happen if the observables that correspond to the knowledge of the present me and the future me don't commute. Hopefully decoherence comes to the rescue and ensures that the present me and the future me don't disagree so much.
Ouch. Wouldn't it require a cut between the multiple "present you(s)" to arrive at the single later you?
 
  • #155
atyy said:
Why doesn't the present you believe yourself to be in a superposition?
Let's assume I can be described by quantum mechanics as well, just as any other kind of matter in the universe. Let's work in the Heisenberg picture. There is a time-independent quantum state ##\Psi##. Let's assume for simplicity that my knowledge at time ##t## of the measurement results is encoded by a single observable ##X(t)## for every ##t##. It might be that ##\Psi## is an eigenstate of ##X(10)##: ##X(10)\Psi = x\Psi##. The information of the future me (##t=10##) about the measurement results is encoded in the real number ##x##. However, it might be that ##[X(0),X(10)]\neq 0##, so they don't share a common basis of (generalized) eigenvectors and thus the vector ##\Psi##, expanded in the eigenbasis of ##X(0)## might be given by a superposition ##\Psi=\sum a_i \phi_i##. Of course, if the ##X(t)## commute, this isn't an issue.
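A two-level toy version of this argument follows (my own sketch; ##X(0)=\sigma_x## and ##X(10)=\sigma_z## are chosen purely for illustration): ##\Psi## is an eigenstate of ##X(10)## but a superposition in the eigenbasis of ##X(0)##.

```python
import numpy as np

sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])
sigma_z = np.array([[1.0, 0.0], [0.0, -1.0]])

# Psi is an eigenstate of X(10) = sigma_z with eigenvalue x = +1 ...
Psi = np.array([1.0, 0.0])
print(np.allclose(sigma_z @ Psi, Psi))                          # True

# ... but [X(0), X(10)] != 0, so expanded in the eigenbasis of X(0) = sigma_x
# the same vector is a genuine superposition with weights |a_i|^2 = 1/2.
print(np.allclose(sigma_x @ sigma_z - sigma_z @ sigma_x, 0.0))  # False: no common eigenbasis
evals, evecs = np.linalg.eigh(sigma_x)
a = evecs.conj().T @ Psi                                        # expansion coefficients a_i
print(np.abs(a) ** 2)                                           # [0.5  0.5]
```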
 
  • #156
rubi said:
Let's assume I can be described by quantum mechanics as well, just as any other kind of matter in the universe. Let's work in the Heisenberg picture. There is a time-independent quantum state ##\Psi##. Let's assume for simplicity that my knowledge at time ##t## of the measurement results is encoded by a single observable ##X(t)## for every ##t##. It might be that ##\Psi## is an eigenstate of ##X(10)##: ##X(10)\Psi = x\Psi##. The information of the future me (##t=10##) about the measurement results is encoded in the real number ##x##. However, it might be that ##[X(0),X(10)]\neq 0##, so they don't share a common basis of (generalized) eigenvectors and thus the vector ##\Psi##, expanded in the eigenbasis of ##X(0)## might be given by a superposition ##\Psi=\sum a_i \phi_i##. Of course, if the ##X(t)## commute, this isn't an issue.

Isn't that the reply for why your future self believes the present self to be in a superposition?

How does it explain why the present self believes the present self not to be in a superposition?
 
  • #157
Feeble Wonk said:
Ouch. Wouldn't it require a cut between the multiple "present you(s)" to arrive at the single later you?
I don't need a cut. I have a quantum state ##\Psi## and lots of observables that account for any question that I could ask. If some of these observables don't commute, then they can't have definite values at the same "time". Of course, it's very uncommon to include actual physicists in the description of the quantum system.
 
  • #158
atyy said:
Isn't that the reply for why your future self believes the present self to be in a superposition?

How does it explain why the present self believes the present self not to be in a superposition?
It doesn't explain anything. It just describes it. As I said earlier, I have no idea how to interpret the fact that nature prohibits us from describing it using a bunch of numbers that can be known simultaneously. I'm just saying that it is internally consistent, although it may seem pretty weird sometimes. QM has made many weird predictions in the past and all of them have been shown to be consistent with experiments.
 
  • #159
rubi said:
It doesn't explain anything. It just describes it. As I said earlier, I have no idea how to interpret the fact that nature prohibits us from describing it using a bunch of numbers that can be known simultaneously. I'm just saying that it is internally consistent, although it may seem pretty weird sometimes. QM has made many weird predictions in the past and all of them have been shown to be consistent with experiments.

No, I don't mean "explain" in that sense. I would like to know where in the formalism it says that the present self believes itself not to be in a superposition.
 
  • #160
atyy said:
No, I don't mean "explain" in that sense. I would like to know where in the formalism it says that the present self believes itself not to be in a superposition.
It doesn't need to believe that. The formalism says that the present me will use one of the eigenvalues of ##X(0)## corresponding to the eigenvectors ##\phi_i## as the information about the measurement results and if I were to repeat this experiment many times, this choice will be distributed according to the probabilities ##|a_i|^2##.
 
  • #161
rubi said:
It doesn't need to believe that. The formalism says that the present me will use one of the eigenvalues of ##X(0)## corresponding to the eigenvectors ##\phi_i## as the information about the measurement results and if I were to repeat this experiment many times, this choice will be distributed according to the probabilities ##|a_i|^2##.

But you never actually get a measurement result, do you? At least not from the viewpoint of future you?
 
  • #162
atyy said:
But you never actually get a measurement result, do you? At least not from the viewpoint of future you?
The observables ##X(t)## encode my knowledge of the measurement results. The measurement results themselves are contained in an observable ##A## corresponding to the apparatus. At every point in time, I believe to have obtained a measurement result. Quantum theory doesn't predict which one. It's just that this knowledge isn't consistent over time unless the observables commute (which is hopefully ensured by decoherence).

---
By the way, I'm not convinced that the domain of applicability of QM extends to such scenarios, but one can pretend it does and see what follows from it. In principle, the matter that constitutes the physicist should be governed by the same laws as the rest of the universe and the knowledge of the physicist should somehow be encoded in the motion of the particles in his brain, so in principle it should be possible to eliminate the cut completely. Of course, this is nowhere near practical. In quantum gravity, such considerations are forced upon us, because we are dealing with a fully constrained Hamiltonian system and all physics is supposed to arise from looking at correlations.
 
Last edited:
  • #163
If we consider that we have |dead><dead| and |alive><alive|, can we get an inner composition law that generalises the superposition law?
bhobba said:
That's impossible - utterly impossible. A cat can never - never be alive and dead. Cats are decohered to have definite position. The positions of the constituent parts of a cat are different for alive and dead cats.

The sum of |dead><dead| and |alive><alive| is diagonal. Why are you talking about dead AND alive?
Did you read the link to Manko's paper?
 
  • #164
rubi said:
In principle, the matter that constitutes the physicist should be governed by the same laws as the rest of the universe and the knowledge of the physicist should somehow be encoded in the motion of the particles in his brain, so in principle it should be possible to eliminate the cut completely.
I'm clearly missing something critical here. My understanding was that the "warm and noisy" environment of the brain essentially guarantees decoherence and associated state reduction.
Regardless of your interpretational preference, I'm still confused by the idea that the "post-observation" physicist could retrospectively view his brain as being in superposition (with respect to the observation outcome) at the time of observation.
How does this differ from opening the box and seeing whether the cat is dead or alive, then closing the box and claiming that its state is still unknown?
 
Last edited:
  • #165
A. Neumaier said:
Unitary dynamics for small quantum systems is extremely well disproved - people in quantum optics always have to work with dissipative, nonunitary dynamics to describe their small systems quantitatively. Thus it is an experimental fact that small quantum systems cannot be described by unitary evolution.
The reason is that they are almost never isolated enough to justify the unitary approximation. The state reduction or collapse accounts for that.

On the other hand, if one makes a quantum system big enough that its interaction with the neglected environment can be ignored (which is often the case in macroscopic situations) or can be described by classical external interaction terms then unitary dynamics is valid to a very good approximation.

Thus state reduction (= collapse) is not in contradiction with the unitary dynamics of an isolated system.
Thanks. But the question is, what do we mean by an "isolated system"? Standard approaches cannot explain what gives rise to non-unitary collapse. Under TI (the Transactional Interpretation), unitary dynamics takes place in the absence of responses from absorbing systems. As soon as you have absorber response, you get the non-unitary von Neumann measurement transition.
I've provided a quantitative (albeit fundamentally indeterministic) criterion for the conditions under which this occurs--basically, these are decay probabilities. (See http://arxiv.org/abs/1411.2072 for the basic idea and relevant references)
 
  • #166
Just to make clear:
From decoherence you get what looks like classical probabilities. However, as stated in 'Quantum Enigma', they are NOT probabilities of something that actually exists. Decoherence is simply the entanglement of quantum systems with the environment (system(s) + environment = 'system2'). You trace over the environment and you are left with mathematics describing -part- of 'system2'. So no cat, or pointer, or macroscopic object, has a definite position as a result of decoherence (as has been claimed), because 'system2' is still in superposition. All decoherence can show is 'apparent collapse'. Apparent collapse and definite observables (e.g. position) are two completely different things.

Addressing why we don't see macroscopic objects in superposition: clearly 'measurement' has taken place, which is why we see an alive cat as opposed to a dead cat. Where this measurement occurs is still in dispute. Technically there is an observable of system+apparatus+environment which can tell us whether those three are in superposition or not.
 
  • #167
bhobba said:
That I am not sure of.

Regarding Zurek it boils down to the typical modelling thing - there are hidden assumptions in Zurek for sure - but whether they are 'benign' or not is the debate. An example is the decision-theoretic approach of Wallace. I have read his book and it's pretty tight if you accept that using decision theory is a valid approach. For some (me included) it's rather obvious - for others it makes no sense. Personally I find Zurek just another interpretation - and not my favoured one.

Thanks
Bill
Did you see my discussion of Wallace's 'auxiliary condition' as ostensibly part of the 'bare' (Unitary-only) theory? http://arxiv.org/abs/1603.04845
That is not 'benign' in the sense that it presupposes the very quasi-classical separability that is supposedly being explained by 'decoherence'. The same goes for Zurek's basic assumptions of initially separable, localizable systems. They are putting in classicality to get classicality out.
They cannot help themselves to 'typical modeling' because they are claiming to demonstrate the emergence of the very conditions that permit us to identify separable systems in the lab--those that allow us to do the modeling in the first place. The most general quantum initial universe would have nonlocally entangled degrees of freedom with no way to identify a 'system of study' as distinct from the environment.
 
  • #168
naima said:
The sum of |dead><dead| and |alive><alive| is diagonal. Why are you talking about dead AND alive?

Because that is what was said, with or without capitals. There is no sum of dead and alive - there is a density matrix with dead and alive on the diagonal, but if that's what was meant then that's what should have been said.

No - I did not read the paper. How about you give a precis of it.

Thanks
Bill
 
  • #169
rkastner said:
Did you see my discussion of Wallace's 'auxiliary condition' as ostensibly part of the 'bare' (Unitary-only) theory? http://arxiv.org/abs/1603.04845

'However, classicality is implicitly contained in 2 and 3 through the partitioning of the universal degrees of freedom into separable, localized substructures interacting via Hamiltonians that do not re-entangle them, so (given U-O) one has to put in classicality to get classicality out'

That's the factorisation issue. It's a legit issue but, as I have said many times, far too much is made of it IMHO. We do the same thing in classical mechanics, for example, but no one jumps up and down about that.

That said, I have read Wallace's book and he uses an approach based on histories that seems to bypass it.

Thanks
Bill
 
  • #170
naima said:
If we consider that we have |dead><dead| and |alive><alive|, can we get an inner composition law that generalises the superposition law?
Well, you can add them and if you properly normalize them, it corresponds to a statistical mixture of dead and alive.
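Spelled out numerically (my own sketch; "dead" and "alive" are just two basis labels): the normalized sum of the projectors is a diagonal, mixed density matrix, while the projector onto the superposition has off-diagonal interference terms and stays pure.

```python
import numpy as np

dead = np.array([1.0, 0.0])
alive = np.array([0.0, 1.0])

# Normalized sum of |dead><dead| and |alive><alive|: a statistical mixture.
rho_mixture = 0.5 * (np.outer(dead, dead) + np.outer(alive, alive))

# Projector onto (|dead>+|alive>)/sqrt(2): a pure superposition.
cat = (dead + alive) / np.sqrt(2)
rho_superposition = np.outer(cat, cat)

print(rho_mixture)                                       # diagonal, no interference terms
print(rho_superposition)                                 # off-diagonal terms present
print(np.trace(rho_mixture @ rho_mixture))               # 0.5 -> mixed
print(np.trace(rho_superposition @ rho_superposition))   # 1.0 -> pure
```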

Feeble Wonk said:
I'm clearly missing something critical here. My understanding was that the "warm and noisy" environment of the brain essentially guarantees decoherence and associated state reduction.
Yes, the brain is a pretty classical object and there should be a lot of decoherence. That's why the phenomenon I described should be very unlikely. State reduction is only apparent, but that doesn't cause problems, since the relative frequencies predicted by state reduction and apparent state reduction are the same and only those are observable.

Regardless of your interpretational preference, I'm still confused by the idea that the "post-observation" physicist could retrospectively view his brain as being in superposition (with respect to the observation outcome) at the time of observation.
How does this differ from opening the box and seeing whether the cat is dead or alive, then closing the box and claiming that its state is still unknown?
It doesn't differ. It's the same phenomenon as in the Schrödinger cat experiment, but now applied to physicists at different times. In both situations, decoherence is supposed to account for the observed classicality.
 
  • #171
rubi said:
Well, you can add them and if you properly normalize them, it corresponds to a statistical mixture of dead and alive.

Bhobba did not read the Manko paper. Did you?
Manko gives a recipe for all the ways to "add" density matrices. The first is to add the density matrices; another corresponds to adding the vectors. In between you have other inner composition laws with various fringe visibilities.
He uses a trick to manage the phases.
The weekend is coming. Take the time to read it!
arxiv.org/pdf/quant-ph/0207033

Abstract: An addition rule of impure density operators, which provides a pure state density operator, is formulated. Quantum interference including visibility property is discussed in the context of the density operator formalism. A measure of entanglement is then introduced as the norm of the matrix equal to the difference between a bipartite density matrix and the tensor product of partial traces. Entanglement for arbitrary quantum observables for multipartite systems is discussed. Star-product kernels are used to map the formulation of the addition rule of density operators onto the addition rule of symbols of the operators. Entanglement and nonlocalization of the pure state projector and allied operators are discussed. Tomographic and Weyl symbols (tomograms and Wigner functions) are considered as examples. The squeezed-states and some spin-states (two qubits) are studied to illustrate the formalism.
 
Last edited:
  • #172
rubi said:
The observables ##X(t)## encode my knowledge of the measurement results. The measurement results themselves are contained in an observable ##A## corresponding to the apparatus. At every point in time, I believe to have obtained a measurement result. Quantum theory doesn't predict which one. It's just that this knowledge isn't consistent over time unless the observables commute (which is hopefully ensured by decoherence).

---
By the way, I'm not convinced that the domain of applicability of QM extends to such scenarios, but one can pretend it does and see what follows from it. In principle, the matter that constitutes the physicist should be governed by the same laws as the rest of the universe and the knowledge of the physicist should somehow be encoded in the motion of the particles in his brain, so in principle it should be possible to eliminate the cut completely. Of course, this is nowhere near practical. In quantum gravity, such considerations are forced upon us, because we are dealing with a fully constrained Hamiltonian system and all physics is supposed to arise from looking at correlations.

I don't think you have gotten rid of the cut, since you still refer to your "knowledge of the measurement results". So you need the concept of something which can have knowledge, by which you presumably don't include a single electron.
 
  • #173
naima said:
Bhobba did not read the Manko paper. Did you?
Manko gives a recipe for all the ways to "add" density matrices. The first is to add the density matrices; another corresponds to adding the vectors. In between you have other inner composition laws with various fringe visibilities.
He uses a trick to manage the phases.
The weekend is coming. Take the time to read it!
arxiv.org/pdf/quant-ph/0207033
I understand that he proposes additional ways to add density matrices and it might be useful in some situations, but I don't see how it is relevant to the interpretation of QM. A density matrix contains the information about all probability distributions of the observables, but in order to obtain these distributions, it doesn't matter where this density matrix came from.

atyy said:
I don't think you have gotten rid of the cut, since you still refer to your "knowledge of the measurement results". So you need the concept of something which can have knowledge, by which you presumably don't include a single electron.
Well, I put all matter on the quantum side, so there is nothing left on the "other side of the cut". The "knowledge of the measurement results" is just my way to avoid having to explain how information is encoded in the brain. As a toy model, we could certainly assume that the information about a spin measurement is encoded in the spin of a certain electron within some neuron. Light rays are reflected from the pointer of the measurement apparatus and hit the eye of the physicist. The matter of the eyes interacts with the brain matter and the brain might eventually store the information in the spin of some electron. This is almost certainly not how it works, but I'm not a neuroscientist and modeling the realistic way of how information is stored within the brain just makes the model more complex, but not conceptually different. The point is that if all matter in the universe is described on the quantum side, then nothing remains on the classical side, so there is no Heisenberg cut.
 
  • #174
rubi said:
I understand that he proposes additional ways to add density matrices and it might be useful in some situations, but I don't see how it is relevant to the interpretation of QM. A density matrix contains the information about all probability distributions of the observables, but in order to obtain these distributions, it doesn't matter where this density matrix came from.

When I began to work as a programmer we had languages like COBOL, IBM assembly and so on. They used "goto" or "branch" to jump to a labelled line in the program. Several years later no programmer used them; we replaced them with subprograms. Of course, one could still find them deep down in the machine language.

When I began to learn QM the situation was similar. Probabilities or probability densities were associated with transitions from one vector in a Hilbert space to another vector.
Many years later we began to speak in the language of POVMs; the probabilities were now associated with operators. One began to think of a beam splitter as receiving an operator from a channel and giving two output operators. We can follow these operators along the branches of the devices just as we followed the vectors with their amplitudes and phases. At the end, a click tells us which POVM element was chosen by Nature.
As I said in another post, I had a doubt: can we completely avoid the addition of vectors (our "goto") to describe the details of the devices? Can we avoid the Kraus operators? When two branches meet, can we describe the output only with density matrices?
It seems that Manko gives a yes answer.
The fringe visibility is a parameter in his formula. It tells whether we have pure states or decohered states to "add": in the old language, whether we have to add probabilities or probability amplitudes.

I know that we can go on decomposing everything in terms of vectors, adding them, squaring them, multiplying each case by a probability, and adding again. It works very well. But...
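For readers unfamiliar with the POVM language used above, here is a minimal sketch (my own illustration; the unsharp spin-z elements and this particular ##\rho## are arbitrary choices, not taken from Manko's paper): outcome probabilities are ##\mathrm{Tr}(\rho E_i)##, with positive elements that sum to the identity.

```python
import numpy as np

sigma_z = np.array([[1.0, 0.0], [0.0, -1.0]])
eta = 0.8                                     # "sharpness" of the measurement
E_plus = 0.5 * (np.eye(2) + eta * sigma_z)    # POVM element for outcome +
E_minus = 0.5 * (np.eye(2) - eta * sigma_z)   # POVM element for outcome -

rho = np.array([[0.7, 0.2], [0.2, 0.3]])      # some density matrix (trace 1, positive)

p_plus = float(np.trace(rho @ E_plus))
p_minus = float(np.trace(rho @ E_minus))
print(np.allclose(E_plus + E_minus, np.eye(2)))   # True: completeness
print(p_plus, p_minus, p_plus + p_minus)          # 0.66 0.34 1.0
```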
 
  • #175
rubi said:
Well, I put all matter on the quantum side, so there is nothing left on the "other side of the cut". The "knowledge of the measurement results" is just my way to avoid having to explain how information is encoded in the brain. As a toy model, we could certainly assume that the information about a spin measurement is encoded in the spin of a certain electron within some neuron. Light rays are reflected from the pointer of the measurement apparatus and hit the eye of the physicist. The matter of the eyes interacts with the brain matter and the brain might eventually store the information in the spin of some electron. This is almost certainly not how it works, but I'm not a neuroscientist and modeling the realistic way of how information is stored within the brain just makes the model more complex, but not conceptually different. The point is that if all matter in the universe is described on the quantum side, then nothing remains on the classical side, so there is no Heisenberg cut.

But you still need "brain" or "information" as something special. If there is no brain in the universe, then does the theory predict that anything happens?
 
