How Does Environmentally Induced Decoherence Affect Quantum State Reduction?

Feeble Wonk
Despite the best efforts of some of PF's finest, I continue to struggle with the general concept of spontaneous quantum state reduction by means of environmentally induced decoherence.

I think that much of my confusion lies in the confounding degree of ambiguity in the delineation between the "system" and the "environment". On a cosmological scale, this differentiation often seems to me to be somewhat arbitrary.

Maybe part of my challenge is in simply not understanding the terms well enough. So, with regard to the three primary constituents of the decoherence process... System/Apparatus/Environment... could the physicists in the room please try to give me a conceptual definition using actual words (prohibiting any use of numbers, formulas or references to matrices). I recognize that this might feel like trying to teach me French without speaking French, but I'd greatly appreciate the effort. Consider it a charitable attempt at "No Fool Left Behind".

A possible example of what I'm looking for might be something like (pending your correction of this concept)... "The apparatus is the thing by which a preferred basis of observation is isolated and/or determined".

Any takers?
 
Hmmm... I could use some more details as to your confusion, but let me start with a general exposition of my understanding. The system is, of course, that which you are representing, presumably with a density operator. As to the meaning of "the environment", that is any other system or systems which may interact with "The System". One would represent the interaction by constructing a composite system representation (take the tensor product of the Hilbert spaces and form a density operator in the composite space). The interaction is represented by a joint Hamiltonian under which this composite system evolves. To revert to the original system description you would need to trace over the external-system or environment component of the composite operator space. Decoherence manifests here in that entanglement occurs between system and environment, and when we ignore the environment we see a less coherent system.
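In case a concrete calculation helps, here is a minimal sketch of that bookkeeping in Python/numpy (the Bell pair standing in for "system ⊗ environment" is just an arbitrary toy choice, not anything specific from this discussion):

Code:
import numpy as np

# Composite pure state of system (qubit A) and environment (qubit B):
# the Bell state (|00> + |11>)/sqrt(2), written in the product basis.
psi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

# Density operator of the composite system.
rho_AB = np.outer(psi, psi.conj())

# Partial trace over the environment (qubit B): reshape the 4x4 matrix into
# a 2x2x2x2 tensor rho[a, b, a', b'] and sum over b = b'.
rho_A = np.trace(rho_AB.reshape(2, 2, 2, 2), axis1=1, axis2=3)

print(np.round(rho_A.real, 3))          # diag(0.5, 0.5): maximally mixed
print(np.trace(rho_AB @ rho_AB).real)   # 1.0: the composite is still pure
print(np.trace(rho_A @ rho_A).real)     # 0.5: the part alone looks mixed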

Now it is hard to explain this without dragging you through the density operators and partial trace operations and such but let me try it this way.
You meet a cute girl (or guy) who is a math geek, and they leave you their phone number in the form of two numbers on a piece of paper which add up to their number.
Together the numbers represent "coherent" information but they then tear the paper in two giving you only one of the two numbers. You now have what amounts to a random number. This is not a perfect analogy because quantum mechanical correlation (entanglement) can be stronger than classical in a way we can't model with classical pieces of paper and classical information written on them. But the decoherence stage is analogous to this and not very much different from thermal randomization as described in the classical domain. The system entropy goes up. What the quantum mechanical description allows is that the entropy of the parts is greater than the whole because you can have the maximal information in the whole encoded in a way that is not compatible with subdividing the system into those particular parts (that's incompatible in the sense of momentum vs position type complementarity).

So just as, say, a particle can evolve from a sharp state (or rather mode) of definite position to a sharp state where position is not well defined, a composite system can evolve from a sharp state describable as a composite of sharp states of its parts (system + environment e.g.) to another ***sharp*** state (of the whole) wherein the best description of either part is not sharp. It has experienced decoherence.

Now, given you knew the original system state and exactly how it evolved, you could in principle set things up to reverse this; however, it is usually the case that we do not know the sharp initial state of the secondary system, since it is "everything else", and once interaction has occurred the original system is entangled with an ever-expanding sphere of the electromagnetic field in space. You physically cannot catch up with this in order to work with the whole composite system and reverse the decoherence. I'm fond of saying, when I take this over the top, "The entropy of the universe is 0! It's only when we look at separate parts that we get positive entropy!"
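To put a number on that last remark, here is a quick von Neumann entropy check on the same kind of two-qubit toy (again an arbitrary illustrative choice): the whole has zero entropy while each part alone does not.

Code:
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log rho), in bits, ignoring zero eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-(evals * np.log2(evals)).sum())

# Entangled pure state of "system + environment" (a Bell pair again).
psi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
rho_whole = np.outer(psi, psi.conj())

# Reduced state of either half: trace out the other qubit.
rho_part = np.trace(rho_whole.reshape(2, 2, 2, 2), axis1=1, axis2=3)

print(von_neumann_entropy(rho_whole))  # 0.0 -- the whole stays sharp
print(von_neumann_entropy(rho_part))   # 1.0 -- each part alone carries a full bit of entropy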

I hope this clarifies more than it confuses.
 
jambaugh said:
The system entropy goes up. What the quantum mechanical description allows is that the entropy of the parts is greater than the whole because you can have the maximal information in the whole encoded in a way that is not compatible with subdividing the system into those particular parts (that's incompatible in the sense of momentum vs position type complementarity).

That was a very good effort! Thank you.

But before I ask a follow up question, I'd like to make sure I understand this part. When you say "The system entropy goes up", are you referring to the COMPOSITE system or the original system?
 
The original system; in the example I gave, the composite retains zero entropy (sharpness is retained under the unitary dynamical evolution).
 
jambaugh said:
The original system; in the example I gave, the composite retains zero entropy (sharpness is retained under the unitary dynamical evolution).

I think this is getting at my confusion regarding the mixed/pure state designations.

Before interaction with the environment, we can say that the system was in a "pure" state with zero (informational) entropy. But after interaction (is considered), the original system becomes "mixed", with increased entropy. Is that right?

But isn't this really just a result of our ignorance of the quantum state? Because if the quantum state is actually reduced as a result of the interaction, then the resultant state is still "pure" and has zero entropy.

What am I missing here?
 
Yes, you have that right w.r.t. the first question. As for your either/or question, both are correct, depending on the answer to "the state of what?": of the system alone, vs. of the system and environment together as a larger composite system.

Both are occurring: the system "state" is reduced (but it's really not a higher-entropy state of reality, but a higher-entropy class of states) and the resulting state of the larger system is still "pure".

Keep in mind that our wave functions and density operators (i. in the orthodox interp. and ii. especially when considering "mixed" "states") represent classes of possible systems and not specific systems. It is an important philosophical distinction.
 
jambaugh said:
Yes, you have that right w.r.t. the first question. As for your either/or question, both are correct, depending on the answer to "the state of what?": of the system alone, vs. of the system and environment together as a larger composite system.

I've been rolling this around in my head, and I keep hitting the same cognitive stumbling block.

jambaugh said:
Both are occurring: the system "state" is reduced (but it's really not a higher-entropy state of reality, but a higher-entropy class of states) and the resulting state of the larger system is still "pure".

I think that the "pure" vs "mixed" designation continues to lie at the heart of my confusion. Again... IF environmental decoherence triggers state reduction (I'm purposefully avoiding the "collapse" term to avoid unnecessary debate), then it seems to me that the reduced state should be considered pure and have zero (informational) entropy at the moment of reduction. I'm not sure what "reality" refers to in your statement here. However, IF decoherence can be considered the triggering mechanism by which state reduction occurs, then I would think it would be accurate to say that one of the potential quantum states of the reduction would be "realized".

jambaugh said:
Keep in mind that our wave functions and density operators (i. in the orthodox interp. and ii. especially when considering "mixed" "states") represent classes of possible systems and not specific systems. It is an important philosophical distinction.

Yet, this portion of your post seems to imply that the wave function is not "really" reduced by decoherence, but that decoherence simply limits which system states CAN occur upon reduction.

Can anyone please clarify this distinction further?
 
Maybe what you're missing is this? Suppose a system is composed of two parts, as the tensor product of the corresponding Hilbert spaces, and is in a pure state with the corresponding density matrix; in general, the density matrix of either single part (obtained by tracing over the Hilbert space of the other part) represents a mixed state, except in the specific non-entangled case (which doesn't apply here).

If you want I can prove this to you, but there will be math. Or you can take it as a dogma if you don't want the math. Honestly I don't think it's possible to explain this detail without the math, but if you just believe it I don't see the problem (or I don't see by which intuition it would have to be otherwise, it's not like the converse would be intuitive either).

So once you know this, apply it to a system + environment case: the environment interacts with the system and becomes entangled with it, the total is a pure state but the system is now a mixed state.
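For anyone who does want the math, one standard way to see it is via the Schmidt decomposition: any pure state of the composite can be written as ##|\psi\rangle_{AB} = \sum_i \sqrt{p_i}\,|a_i\rangle|b_i\rangle## with orthonormal ##\{|a_i\rangle\}##, ##\{|b_i\rangle\}## and ##\sum_i p_i = 1##. Tracing out B gives ##\rho_A = \sum_i p_i\,|a_i\rangle\langle a_i|##, which is a projector (i.e. pure) only when a single ##p_i## equals 1 - exactly the non-entangled product-state case. Any entanglement at all forces ##\rho_A## to be mixed.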
 
  • #10
ddd123 said:
So once you know this, apply it to a system + environment case: the environment interacts with the system and becomes entangled with it, the total is a pure state but the system is now a mixed state.

The system is in a "mixed" state because there are multiple "potential" states of the system?
 
  • #11
Feeble Wonk said:
The system is in a "mixed" state because there are multiple "potential" states of the system?

No. "Mixed state" and "pure state" are technical terms that refer to properties of the density matrix associated to a quantum state. A pure state's density matrix is just a projector. Both the pure and mixed states represent a probability amplitude for an observable outcome, if that's what you mean. In fact, if you write both the pure and the mixed states with a state vector and a density matrix respectively, using a particular eigenbasis, the probability outcomes of a measurement of an observable with that eigenbasis can be exactly the same. Only with a non-commuting observable do you see the difference, which lies in an interference term given by the relative phase between the pure state vector coefficients (which are complex numbers); whereas a mixed state density matrix is written using real numbers, which, in the above case of there being the same probability distribution, correspond to the former complex numbers squared. That's why the density matrix has less information than a state vector.

So you should not confuse the quantum-informational entropy with the number of possible measurement outcomes: after all, if you had only one outcome for an observable, you could get more with another observable not commuting with the first (which is typical with polarization, for example). Quantum-informational zero entropy refers to the uniqueness of the state vector, represented by complex coefficients in some basis; whereas a mixed state's density matrix, which can be seen as a mix of pure states, has real coefficients (the squared moduli of the pure states' complex coefficients, losing the phase information) for all observables, which means it represents an incoherent mix.

Edit: if by "potential states" you mean the pure states the mix is comprised of, then yes.
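A small numerical illustration of that last point (the ##|+\rangle## state and the 50/50 mixture are just a convenient toy pair): both give exactly the same probabilities in the Z basis, and the difference only shows up for a non-commuting observable.

Code:
import numpy as np

# A pure state |+> = (|0> + |1>)/sqrt(2) and a 50/50 mixture have the same
# diagonal in the Z basis, but only the pure state keeps the off-diagonal
# (phase) terms that matter when you measure another observable.
plus = np.array([1, 1]) / np.sqrt(2)
rho_pure = np.outer(plus, plus.conj())      # projector: rho^2 == rho
rho_mixed = np.diag([0.5, 0.5])             # incoherent 50/50 mixture

# Z-basis probabilities: identical for both states.
print(np.diag(rho_pure).real, np.diag(rho_mixed).real)   # [0.5 0.5] twice

# X-basis (eigenvectors |+>, |->): here the interference term matters.
x_plus = plus
x_minus = np.array([1, -1]) / np.sqrt(2)
for rho, name in [(rho_pure, "pure"), (rho_mixed, "mixed")]:
    p_plus = (x_plus.conj() @ rho @ x_plus).real
    p_minus = (x_minus.conj() @ rho @ x_minus).real
    print(name, p_plus, p_minus)
# pure:  1.0, 0.0   (definite outcome, coherence preserved)
# mixed: 0.5, 0.5   (no interference term, phase information lost)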
 
  • #12
I think these passages from 'Quantum Enigma' by Bruce Rosenblum and Fred Kuttner may assist (pg 209, 2nd edition):

(without referencing the experimental set-up discussed)
Suppose, however, that the photons pass through our boxes and then encounter the macroscopic environment. Assuming thermal randomness, one can calculate the extremely short time after which an interference experiment becomes impossible, for all practical purposes. Averaging over the decohered wave-functions of the atoms leaves us with an equation for a classical-like probability for each atom actually existing wholly in one or the other box...

and

Those classical-like probabilities are still probabilities of what will be observed. They are not true classical probabilities of something that actually exists.

EDIT: realized the emphasis on some words, as per the book, was not present.
 
  • #13
ddd123 said:
Edit: if by "potential states" you mean the pure states the mix is comprised of, then yes.

I'm afraid that it's precisely this ambiguity that confuses me. I thought that is what I meant, but now I'm not sure.
Let me try this a different way, and maybe it will help you help me (and I do very much appreciate the effort).

My confusion initially began during a previous PF thread (can't put my finger on it immediately) that was discussing the ontological "reality" of the wave function. I was trying to understand how this might relate to the process of environmental decoherence on a cosmological scale.

In that thread, there seemed to be a school of thought among some of the participants that, on a cosmological scale, the delineation between the "system" and the "environment" might be somewhat arbitrary. The question of exactly when, and why, actual state reduction occurred in an ontologically "real" wave function (secondary to decoherence) led to the discussion of mixed vs pure quantum states of the environment/system complex. It seemed that some were suggesting that the "mixed" state (mixture of potential pure states) after interaction was a reflection of ignorance of outcome, and that the "pure" (realized?) state could not yet be determined. Others appeared to argue that the differentiation was mathematically irrelevant, and any suggestion to the contrary was utterly philosophical.

Now, I am fully aware that I probably misunderstood the discussion, and "proper" vs "improper" mixes were additional sources of confusion for me, so it's entirely possible that I've also confused the "pure/mixed" and the "proper/improper" terms. Also, I'm confident that the answer to my question is highly interpretation dependent. However, IF we are considering a "universal" wave function on a cosmological scale... and IF we are trying to consider this wave function as being ontologically "real" (whatever that means)... I still have trouble understanding how decoherence "triggers", by direct causation, quantum state collapse (or state reduction, if you prefer). It seems, to my befuddled brain anyway, that the mathematical formalism of decoherence simply places logical limitations on what quantum states can be observed... defining the possible "subsystems" of the universal wave function describing the cosmological system/environment complex.

Is this assessment even remotely close to reasonable? If not, can you identify where (in the likely long chain of errors) my conception is in error?
 
  • #14
jambaugh said:
Keep in mind that our wave functions and density operators (i. in the orthodox interp. and ii. especially when considering "mixed" "states") represent classes of possible systems and not specific systems. It is an important philosophical distinction.
StevieTNZ said:
I think these passages from 'Quantum Enigma' by Bruce Rosenblum and Fred Kuttner may assist
..."Those classical-like probabilities are still probabilities of what will be observed. They are not true classical probabilities of something that actually exists."


Both of these postings seem to suggest a similar concept, though without the necessity of the wave function being ontologically "real".
 
  • #15
Feeble Wonk said:
It seemed that some were suggesting that the "mixed" state (mixture of potential pure states) after interaction was a reflection of ignorance of outcome, and that the "pure" (realized?) state could not yet be determined. Others appeared to argue that the differentiation was mathematically irrelevant, and any suggestion to the contrary was utterly philosophical

Okay. Indeed, the mixed state is the analogue of the classical probability density in phase space. Let me break this down more clearly.

Classical mechanics: an ideal state is a point in phase space; the corresponding density is a Dirac delta in phase space. A statistical density (of microstates) instead assigns a probability distribution over the possible ideal states the system could be in: it's a "smeared cloud" in phase space, if you want to visualize it. A classical physical state is always non-ideal due to measurement errors.

Quantum mechanics: a pure state is a ray (or orbit, if you take gauge transformations into account) in Hilbert space, which for simplicity we call a state vector. The density is now a "matrix" (actually it can also have a term with an integral over continuous observables) which is just a projector onto that pure state vector. A statistical mixture - a mixed state - assigns probabilities (hence real numbers) for a quantum system to be found in a number of pure states. A quantum pure state CAN be realized physically.

As you can see there are fundamental differences: in quantum mechanics, a pure state is already of a statistical nature, since it represents a probability amplitude. So you can consider a mixed state to be just another state: the pure state was a superposition of observable eigenstates, the mixed state is an incoherent mixture of pure states. In the formalism, there's much less "ontological" distinction between pure and mixed states than there is between ideal and ensemble states in classical mechanics. However, you can devise an interpretation in which such an ontological distinction is recovered (I guess, for a psi-ontologist, that's evident).

Now, if your question is how a psi-ontologist justifies the fact that the parts of a composite pure state, taken separately, are mixed states, I guess the trivial answer is "you're not addressing the whole wave-function but only a part, hence whatever you do to a part behaves as a statistical mix to you, but the whole is still a pure state".
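In symbols, the comparison above is: classically, an ideal state is ##\rho(x,p) = \delta(x-x_0)\,\delta(p-p_0)## while a statistical state is a genuine probability density over phase space; quantum mechanically, a pure state is ##\rho = |\psi\rangle\langle\psi|## (a projector), while a mixed state is ##\rho = \sum_i p_i\,|\psi_i\rangle\langle\psi_i|## with ##p_i \ge 0##, ##\sum_i p_i = 1## and more than one ##p_i## nonzero.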

Feeble Wonk said:
However, IF we are considering a "universal" wave function on a cosmological scale... and IF we are trying to consider this wave function as being ontologically "real" (whatever that means)... I still have trouble understanding how decoherence "triggers", by direct causation, quantum state collapse (or state reduction, if you prefer). It seems, to my befuddled brain anyway, that the mathematical formalism of decoherence simply places logical limitations on what quantum states can be observed... defining the possible "subsystems" of the universal wave function describing the cosmological system/environment complex.

Be warned that this is NOT the mathematical formalism of decoherence per se. It's the mathematical formalism of all of quantum mechanics. It's just a mathematical lemma: given a vector in a tensor product of two Hilbert spaces, in general the density operator on just one of the two spaces is not a projector (and is never a projector if the total state is entangled). This means that in all interpretations, in all of quantum mechanics, an entangled pure state is always a mixed state for just one part of it. So the "direct causation" you're talking about is simply the entanglement that happens between the environment and the system (which is inevitable): the "logical limitations" you talk about are inevitable maths, decoherence theory didn't postulate them or anything.

The point of decoherence is that keeping track of pure states becomes impossible, since they're scattered in bits and pieces all over the place, while you want to look at just one system. Suppose your system is hit by a number of photons: each of them becomes entangled with it and then scatters away. You'd have to run after each photon to recover and study the pure state; looking at the system alone, you'll see a mixed state.
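Here is a rough numerical cartoon of that photon picture (the controlled rotation and the angle below are arbitrary choices, just to give each "photon" a partial which-path record): a system qubit starts in a coherent superposition, each environment qubit it meets gets weakly entangled with it, and the off-diagonal (coherence) element of the system's reduced density matrix shrinks with every encounter.

Code:
import numpy as np

def controlled_gate(U, n_qubits, control, target):
    """Matrix on the full 2**n space applying U to `target` when `control` is |1>."""
    dim = 2 ** n_qubits
    M = np.zeros((dim, dim), dtype=complex)
    for i in range(dim):
        bits = [(i >> (n_qubits - 1 - q)) & 1 for q in range(n_qubits)]
        if bits[control] == 0:
            M[i, i] = 1.0
        else:
            for new_bit in (0, 1):
                j_bits = list(bits)
                j_bits[target] = new_bit
                j = sum(b << (n_qubits - 1 - q) for q, b in enumerate(j_bits))
                M[j, i] += U[new_bit, bits[target]]
    return M

n_env = 8                      # number of "photons" the system meets
theta = 0.6                    # how strongly each photon records the system's state
Ry = np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
               [np.sin(theta / 2),  np.cos(theta / 2)]])

# System qubit (index 0) in the superposition |+>, environment qubits all in |0>.
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)
for _ in range(n_env):
    psi = np.kron(psi, np.array([1, 0], dtype=complex))

for k in range(n_env):
    # One scattering event: environment qubit k gets partially entangled with the system.
    psi = controlled_gate(Ry, 1 + n_env, control=0, target=1 + k) @ psi
    # Reduced density matrix of the system: trace out everything else.
    m = psi.reshape(2, -1)
    rho_sys = m @ m.conj().T
    print(f"after {k + 1} scattering events: |rho_01| = {abs(rho_sys[0, 1]):.4f}")
# The printed coherence is 0.5*cos(theta/2)**k, decaying toward zero with each event.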
 
  • #16
Feeble Wonk said:
Despite the best efforts of some of PF's finest, I continue to struggle with the general concept of spontaneous quantum state reduction by means of environmentally induced decoherence.

I think that much of my confusion lies in the confounding degree of ambiguity in the delineation between the "system" and the "environment". On a cosmological scale, this differentiation often seems to me to be somewhat arbitrary.
I think you are right. The split into the "system" and the "environment" is quite arbitrary. In practical calculations this is usually not the problem because physicists have a good intuition about what is a "natural" split in given circumstances. So if you think of QM only as a practical mental tool for human physicists (including cosmologists), then there is no serious problem. But if you think of QM as a fundamental law obeyed by nature itself, irrespective of humans, then there is a deep problem.

How to resolve the problem? Well, to resolve it, the minimal quantum formalism is not enough. You must use some interpretation of QM, and any choice of interpretation is somewhat controversial.

Personally, I like the Bohmian interpretation. Among other things, this interpretation gives a preferred status to the position observable, which circumvents the problem of "arbitrary split into system and environment".

For a related discussion see also
https://www.physicsforums.com/threads/many-worlds-proved-inconsistent.767809/
 
  • #17
Demystifier said:
The split into the "system" and the "environment" is quite arbitrary. In practical calculations this is usually not the problem because physicists have a good intuition about what is a "natural" split in given circumstances.

Can you provide an example in which only an intuitive but not arbitrary choice provides a correct prediction, while another choice is wrong? Of course if calculations turned out correct in all arbitrary choices there wouldn't be a problem.
 
  • #18
ddd123 said:
Can you provide an example in which only an intuitive but not arbitrary choice provides a correct prediction, while another choice is wrong?
There are some examples in Sec. 4 of
http://arxiv.org/abs/1210.8447
 
  • #19
Demystifier said:
There are some examples in Sec. 4 of
http://arxiv.org/abs/1210.8447

Thank you. This is a fascinating paper. I think this will address my confusion directly. It's very similar to a position I've read by Lee Smolin from the cosmological perspective. I'm hoping that I can just be a spectator now and watch while you professionals discuss this.
 
  • #20
Demystifier said:
There are some examples in Sec. 4 of
http://arxiv.org/abs/1210.8447
Do I understand it correctly? The author of this paper gives an argument why a universal wave function equipped with unitary evolution cannot predict the classical world (via decoherence-induced branching).
 
  • #21
As I understood it, he says it can't even predict the quantum world, in the "Nirvana factorization" there's not even entanglement.

The thread linked by Demystifier has little discussion though. What did the MWI people answer?
 
  • #22
ddd123 said:
As I understood it, he says it can't even predict the quantum world, in the "Nirvana factorization" there's not even entanglement.

The thread linked by Demystifier has little discussion though. What did the MWI people answer?
I am trying to understand why this argument concerns only MWI and not any non-collapse treatment of QM. Or is MWI the only non-collapse interpretation? Somehow I got a different impression from comments on different threads.
 
  • #23
What about "collapse is real" treatments too, though? The iterated process it describes seems to render the idea of collapse occurring at some point ambiguous as well, that is what is collapsing in a picture is entangling in another.
 
  • #24
Hmm, but in the pilot wave interpretation collapse is an update of information about where the particles are, right?
Well, maybe I am off, but the idea that unitary evolution can predict the classical world seems strange, so I would like to understand how far the argument of this paper can be taken.
 
  • #25
zonde said:
Hmm, but in the pilot wave interpretation collapse is an update of information about where the particles are, right?
Of course in any hidden variable interpretation there's no physical collapse.

I really don't know much about this topic.
 
  • #26
ddd123 said:
What about "collapse is real" treatments too, though? The iterated process it describes seems to render the idea of collapse occurring at some point ambiguous as well, that is what is collapsing in a picture is entangling in another.
At first glance, it seems that the author is suggesting that an "external observer" is necessary for a "collapse" theory to provide the preferred basis.
 
  • #27
Feeble Wonk said:
At first glance, it seems that the author is suggesting that an "external observer" is necessary for a "collapse" theory to provide the preferred basis.
No, it's a non-collapse theory.
And it's the factorization (decomposition into subsystems) that we can't get without an external observer. Let me explain. A preferred basis is a choice of coordinate system such that measurement produces outcomes along the coordinate axes of our chosen coordinate system, while a factorization is a splitting of the universe into the system, the measurement equipment and the rest of the universe.
 
  • #28
Just a clarification. Once factorization has occurred, is it self-sustaining? For example, once factorization has separated the universe into system, measuring equipment and environment, do all the different objects "know" which is the system and which is the environment, or does the factorization need to be kept constantly in force for the system and environment to have a consistent relationship? For example, with Lego toys, once you put them on the table they can be rearranged into all kinds of combinations, and the manufacturing (analogy of factorization) happens only once; but with superconducting trains you need to energize the rails at all times (analogy of the factorization being constantly in force, or the universe would be messed up).
 
  • #29
zonde said:
And it's the factorization (decomposition into subsystems) that we can't get without an external observer.

Factorisation does not require an external observer.

The issue is whether we get the same results if we factor the system into the natural decomposition of what is observed and what does the observing, or some other weird decomposition. So far decoherence models have only been worked out for that reasonable decomposition - although I recall reading a paper where, for a simple model, that restriction was removed and the same result was obtained regardless of decomposition. Critics claim it's this decomposition that theoretically leads to decoherence, so you have not explained anything. It's a fringe issue that a lot of work has not been done on - most don't worry about it. You will find a lot of threads here about it, some quite heated; it's one of those things that can generate a lot of 'discussion'.

Thanks
Bill
 
  • #30
jlcd said:
Just a clarification. Once factorization has occurred, is it self-sustaining? For example, once factorization has separated

It's a theoretical thing - not an actual process.

Thanks
Bill
 
  • #31
bhobba said:
It's a theoretical thing - not an actual process.

?? How can the following steps not be an actual process?

1. Many worlds occur...
2. One branch is selected and factorization initiated to distinguish system and environment

The above is actual. I was asking whether factorization needs to be maintained at full force; without it, could the one branch revert back to many worlds?
 
  • #32
bhobba said:
Factorisation does not require an external observer.
My statement was about the claim of this paper: http://arxiv.org/abs/1210.8447
And it certainly claims that external observer is needed. From abstract:
"A state vector gets the property of "representing a structure" only with respect to an external observer who measures the state according to a specific factorization and basis."
 
  • #33
zonde said:
"A state vector gets the property of "representing a structure" only with respect to an external observer who measures the state according to a specific factorization and basis."

I know that paper.

That claim is incorrect.

With an observation defined as occurring just after decoherence, it's obviously a purely quantum phenomenon requiring no external observer.

The issue of factorisation - with observations perhaps not giving the same result depending on how a system is factored - is legit, although a fringe issue. Most physicists accept as pretty axiomatic that it doesn't matter how you factor a problem, the answer is the same - it's done in many, many areas of physics and people don't worry about it. But with decoherence some seem perturbed by it.

Thanks
Bill
 
  • #34
jlcd said:
How can the following steps not be an actual process?
1. Many worlds occur...
2. One branch is selected and factorization initiated to distinguish system and environment

That's not what factorisation is - I have no idea what you are talking about.

In MW each part of a mixed state is considered a world. It's an interpretive thing - not an actual process.

Thanks
Bill
 
  • #35
zonde said:
Do I understand it correctly? The author of this paper gives an argument why a universal wave function equipped with unitary evolution cannot predict the classical world (via decoherence-induced branching).
Not exactly. He gives an argument why a universal, unitarily evolving state in the Hilbert space alone cannot predict decoherence and branching. Note that the state ##|\psi\rangle## in the Hilbert space is not exactly the same as the wave function ##\langle x|\psi\rangle##.
 
  • #36
bhobba said:
That's not what factorisation is - I have no idea what you are talking about.

In MW each part of a mixed state is considered a world. It's an interpretive thing - not an actual process.

Thanks
Bill

I have zero idea why you think Many Worlds is not an actual process. If you ask physicists, they will tell you that in Many Worlds there really are many worlds or branches.
 
  • #37
jlcd said:
I have zero idea why you think Many Worlds is not an actual process. If you ask physicists, they will tell you that in Many Worlds there really are many worlds or branches.

Decoherence is a process, interpreting each outcome as a separate world isn't.

Thanks
Bill
 
  • #38
Can you give an example of factorization, and how the critics reasoned that there were different ways to factor it?
 
  • #39
zonde said:
I am trying to understand why this argument concerns only MWI and not any non-collapse treatment of QM. Or is MWI the only non-collapse interpretation? Somehow I got a different impression from comments on different threads.
There are also other non-collapse interpretations. Let me give only two examples:
1. MWI with a priori preferred basis. For instance, it may be the position basis, so the ontology is not the state in the Hilbert space ##|\psi\rangle##, but the wave function ##\langle x_1, ... , x_n|\psi\rangle##.
2. MWI with additional variables. E.g. Bohmian interpretation where particle positions are also ontological.

What their result shows is that ##|\psi\rangle## alone is not enough.
 
  • #40
jlcd said:
Can you give an example of factorization, and how the critics reasoned that there were different ways to factor it?

I won't discuss that paper because I don't agree with much of it. You can do a search on threads here - it has been discussed a lot - a lot more than it deserves IMHO.

But here is the issue with factorisation. Suppose you have a particle detector. It's natural, in analysing how such a thing works, to divide the detector and what is being observed into exactly that - the detector and what it observes. Theory shows, as a result of decoherence, you get a mixed state where the off-diagonal elements are zero and the diagonal terms give a probability of detecting a particle and not detecting it. So far so good. But what if you instead decompose it into what's being observed + half the detector, and the other half? You would have rocks in your head doing that - your job is much, much harder. But as a matter of principle you must get the same result - if you don't, then things are really rotten in the state of Denmark and it stinks to high heaven. In many areas of physics, like balls rolling down inclined planes, you have exactly the same problem - but everyone believes, as an unstated assumption, that it doesn't make any difference - still, it's an issue. Those concerned about the factorisation problem say decoherence is just a result of factoring it into what's being observed and what does the observing. I personally think it's a crock - but it can't be dismissed out of hand.
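As a toy illustration of why the choice of decomposition matters at all (the Bell-basis relabelling below is just one arbitrary example of a "weird" factorization): the very same four-dimensional state vector is entangled - and so gives a mixed reduced state - under one way of splitting the space into two qubits, and is a plain product state under another.

Code:
import numpy as np

def purity_of_first_factor(psi):
    """Tr(rho_A^2) for the first tensor factor of a two-qubit pure state."""
    m = psi.reshape(2, 2)          # rows: first factor, cols: second factor
    rho_A = m @ m.conj().T         # partial trace over the second factor
    return np.trace(rho_A @ rho_A).real

bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
print(round(purity_of_first_factor(bell), 3))      # 0.5 -> entangled in the usual split

# "Re-factor" the same 4-dimensional space: pick new subsystems whose product
# basis is the Bell basis of the old ones. That relabelling is the unitary U below.
U = np.array([[1, 0, 0,  1],
              [1, 0, 0, -1],
              [0, 1, 1,  0],
              [0, 1, -1, 0]], dtype=complex) / np.sqrt(2)
psi_new = U @ bell                                 # same state, described in the new split
print(round(purity_of_first_factor(psi_new), 3))   # 1.0 -> a product state, no entanglement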

In relation to the early universe: as the universe evolves, obviously structures develop and interact with other things. The claim of the factorisation crowd is that decoherence is simply a result of humans factoring it that way, so it doesn't explain anything - hence the claim that nothing happens in MW.

I don't agree - but that's the argument. It's also got nothing to do with observers. But that's a matter of opinion - you can have a look at threads where it's discussed and make up your own mind.

Thanks
Bill
 
  • #41
There is a factorization problem in Newtonian mechanics?
 
  • #42
ddd123 said:
There is a factorization problem in Newtonian mechanics?

Of course there isn't. It's just putting it into perspective.

Thanks
Bill
 
  • #43
Maybe it's not about taking the factorization problem seriously. In my naive view, it's a reductio of the idea that the mathematical ket of the Universe is ontologically representative. The operations done to the ket to factor it differently are all legal, so it works as a reductio. Where do you not agree?
 
  • #44
ddd123 said:
Maybe it's not about taking the factorization problem seriously. In my naive view, it's a reductio of the idea that the mathematical ket of the Universe is ontologically representative. The operations done to the ket to factor it differently are all legal, so it works as a reductio. Where do you not agree?

I am not sure I understand your point. But as far as the wave-function of the universe goes, in many interpretations it makes no sense, e.g. what prepared the universe?

Thanks
Bill
 
  • #45
The criticized interpretation is MWI, so the reductio is directed at that. That it becomes a non sequitur in other contexts is only natural, I guess.
 
  • #46
bhobba said:
But here is the issue with factorisation. Suppose you have a particle detector. It's natural, in analysing how such a thing works, to divide the detector and what is being observed into exactly that - the detector and what it observes. Theory shows, as a result of decoherence, you get a mixed state where the off-diagonal elements are zero and the diagonal terms give a probability of detecting a particle and not detecting it. So far so good. But what if you instead decompose it into what's being observed + half the detector, and the other half? You would have rocks in your head doing that - your job is much, much harder. But as a matter of principle you must get the same result - if you don't, then things are really rotten in the state of Denmark and it stinks to high heaven. In many areas of physics, like balls rolling down inclined planes, you have exactly the same problem - but everyone believes, as an unstated assumption, that it doesn't make any difference - still, it's an issue. Those concerned about the factorisation problem say decoherence is just a result of factoring it into what's being observed and what does the observing. I personally think it's a crock - but it can't be dismissed out of hand.

I wonder if the rocks in the head are a good example of factorization. When you are standing on rocky ground (say in the Andes mountains) and remember that all is quantum (in our latest understanding of decoherence, post-Copenhagen, where there is no classical-quantum division/boundary but all is quantum), then one can ask why the rock is on the ground and not inside your organs, like inside your head... because the universe could decompose it such that the rocks would be anywhere in your body. Is this a valid example?
 
  • #47
bhobba said:
But here is the issue with factorisation. Suppose you have a particle detector. It's natural, in analysing how such a thing works, to divide the detector and what is being observed into exactly that - the detector and what it observes.
This is clearly the assuming-the-conclusion (begging-the-question) fallacy. That way you won't explain anything.
 
  • #48
I don't get the idea of decoherence, even if the factorization problem can be solved.
QM gives statistical predictions. But statistics are calculated from individual events. If we say that there is no more fundamental description than the wavefunction, then we have to represent every individual event with an identical wavefunction. But detections are rather random. So we say that the particle exists as a wavefunction until point X, when it is randomized (wavefunction collapse).
Now the decoherence idea says that an initially coherent wavefunction can become non-coherent. So the first question is: after decoherence, is each separate particle still described by exactly the same (but non-coherent) wavefunction, or does non-coherence mean that each particle is described by a slightly different wave function?
 
  • #49
zonde said:
This is clearly the assuming-the-conclusion (begging-the-question) fallacy. That way you won't explain anything.

Your 'clearly' logic escapes me.

It's a fringe issue - but a genuine one.

Thanks
Bill
 
  • #50
zonde said:
I don't get the idea of decoherence, even if the factorization problem can be solved.

Then you need to study it more:
http://philsci-archive.pitt.edu/5439/1/Decoherence_Essay_arXiv_version.pdf

In particular you need to understand mixed states and the difference between proper mixed states and improper ones.

BTW it requires delving into the math - it can't be explained in words - at least I can't do it.

zonde said:
Now the decoherence idea says that an initially coherent wavefunction can become non-coherent. So the first question is: after decoherence, is each separate particle still described by exactly the same (but non-coherent) wavefunction, or does non-coherence mean that each particle is described by a slightly different wave function?

There is no wave-function after decoherence, because it's in a mixed state.
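A quick numerical way to see the proper/improper point (again just a two-qubit toy, not anything from the linked essay): the reduced state you get by tracing out an entangled partner (an improper mixture) is, as a matrix, identical to the state you would assign after an honest classical coin flip between ##|0\rangle## and ##|1\rangle## (a proper mixture), so no measurement on the system alone can tell them apart.

Code:
import numpy as np

# Improper mixture: trace the environment out of an entangled pure state.
psi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
rho_AB = np.outer(psi, psi.conj())
rho_improper = np.trace(rho_AB.reshape(2, 2, 2, 2), axis1=1, axis2=3)

# Proper mixture: a classical 50/50 coin flip between the pure states |0> and |1>.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
rho_proper = 0.5 * np.outer(ket0, ket0.conj()) + 0.5 * np.outer(ket1, ket1.conj())

print(np.allclose(rho_improper, rho_proper))   # True: locally indistinguishable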

Thanks
Bill
 