New experimental proof of wave-function collapse?

gxu
The following experiment claims to have demonstrated wave-function collapse:

http://www.nature.com/ncomms/2015/150324/ncomms7665/full/ncomms7665.html

I would have no problem if they had claimed that the experiment demonstrated the "non-local" (or, more precisely, quantum) steering effect. In my humble opinion, there is no logic to justify the claim that "the quantum steering effect is equivalent to wave-function collapse". Here wave-function collapse is defined in the strict form of von Neumann's postulate.

Am I missing something important here? I posted this question on Physics SE for discussion and got no response. So please let me know your input - thank you so much!
 
For those interested here is the actual paper:
http://arxiv.org/pdf/1412.7790v1.pdf

I am not an experimental person; we have a number of those who post here, and hopefully they will comment on what it actually shows.

I will simply comment from what QM actually says.

First, QM itself doesn't actually have collapse - that's an interpretational thing. Some interpretations, like GRW, explicitly have it; others, like MW and BM, don't. Others, like the Ensemble interpretation, are ambivalent about it, being compatible with interpretations that have it and those that don't - but for simplicity most, including me, would say it doesn't have it. Still others, like most versions of Copenhagen, have it, but there it means little since it's purely subjective.

So if they had actually observed collapse, they would have ruled out a whole heap of interpretations like BM and MW. That is highly doubtful, since those were cooked up to be indistinguishable from standard QM.

I suspect that when someone who understands the experimental side better than I do looks at it, it will turn out to demonstrate something QM predicts independent of any interpretation, and so it can't demonstrate wave-function collapse.

Added Later:
I gave it a quick squiz. As far as I can see it's simply verifying EPR. Interesting - but no big deal.

Thanks
Bill
 
If it's simply demonstrating entanglement in the positions of two particles, as in the EPR paper, there's no necessity to assume a collapse. In the minimal interpretation it's very simply explained without any collapse: it's the preparation in an entangled state which implies the long-range correlations. There's no spooky action at a distance by which a local measurement on one part of the system (at Alice's place) affects the far-distant other part (at Bob's place).

I'd have to look carefully into the paper to say something specific about it, of course.

BTW: I think the experimental verification of these extremely non-classical predictions of quantum theory is still a very fascinating issue. The theory itself is usually pretty simple to explain, at least when simplified and idealized to its core principles, yet it remains very surprising to our classically trained everyday intuition, and these experiments deliver (sometimes highly accurate) verifications of the "weirdest quantum properties" of nature! From an experimental point of view they are usually quite difficult to realize, an art of its own. So the experiments alone have a high aesthetic appeal to me.
 
gxu said:
The following experiment claims to have demonstrated wave-function collapse:

http://www.nature.com/ncomms/2015/150324/ncomms7665/full/ncomms7665.html

I would have no problem if they had claimed that the experiment demonstrated the "non-local" (or, more precisely, quantum) steering effect. In my humble opinion, there is no logic to justify the claim that "the quantum steering effect is equivalent to wave-function collapse". Here wave-function collapse is defined in the strict form of von Neumann's postulate.

Do I miss something important here? I posted this question on the physics SE for discussion and got no response. So please let me know your input - thank you so much!
You are right. They demonstrate non-locality, but not the collapse. For instance, their experiment can be explained by the Bohmian interpretation, which involves non-locality but not the collapse.

I am sure that at least some of the authors (e.g. Wiseman) of this paper are very well aware of this. But then why do they claim such nonsense in their paper? Because it sells the paper, especially in high-impact journals such as Nature or Science.
 
Well, if you want to publish in Nature, you have to find a little "hype argument" [SCNR].
 
vanhees71 said:
If it's simply demonstrating entanglement in the positions of two particles, as in the EPR paper, there's no necessity to assume a collapse. In the minimal interpretation it's very simply explained without any collapse: it's the preparation in an entangled state which implies the long-range correlations. There's no spooky action at a distance by which a local measurement on one part of the system (at Alice's place) affects the far-distant other part (at Bob's place).

However, can the minimal interpretation without collapse explain it if the measurements are considered in a frame in which they are not simultaneous?
 
Demystifier said:
You are right. They demonstrate non-locality, but not the collapse. For instance, their experiment can be explained by the Bohmian interpretation, which involves non-locality but not the collapse.

I am sure that at least some of the authors (e.g. Wiseman) of this paper are very well aware of this. But then why do they claim such nonsense in their paper? Because it sells the paper, especially in high-impact journals such as Nature or Science.

As far as I know, there is no known Bohmian interpretation of the standard model, so perhaps there is good reason not to consider the Bohmian interpretation as giving the same predictions as all of quantum mechanics?
 
atyy said:
However, can the minimal interpretation without collapse explain it if the measurements are considered in a frame in which they are not simultaneous?
Sure, it simply doesn't matter in which order the measurements occur, as long as the local measurement at A doesn't influence the measurement at B. Particularly if the measurement events are space-like separated, there shouldn't be such an influence, according to relativistic causality constraints. This is the core of the EPR paradox: if you postulate that the 100% correlation is due to a "collapse" caused by A's measurement, then in some frame A's measurement comes before B's, even though A's and B's measurements are space-like separated events. This is the main reason why I don't think collapse is a helpful notion in the interpretation of quantum theory; it only causes trouble. Since it's fortunately not needed at all, I just don't use it in my thinking about quantum theory and its interpretation.

The long-range correlations are due to the preparation procedure, which takes place before either of these measurements is made, and this is a Poincaré-invariant notion, because the preparation procedure is in the past lightcone of both measurement events.
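This point - correlations fixed at preparation, with no dependence on measurement order and no signaling - can be sketched numerically. The following is a minimal illustration, not anyone's actual experiment: it samples outcome pairs directly from the textbook quantum joint distribution for a spin-1/2 singlet, P(a,b) = (1 - ab·cos(θA-θB))/4; the angles and sample size are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_singlet(theta_a, theta_b, n):
    """Sample outcome pairs (+/-1, +/-1) from the quantum joint distribution
    for spin measurements on a singlet pair,
    P(a,b) = (1 - a*b*cos(theta_a - theta_b))/4,
    which yields the correlation E(a,b) = -cos(theta_a - theta_b)."""
    delta = theta_a - theta_b
    outcomes = [(+1, +1), (+1, -1), (-1, +1), (-1, -1)]
    probs = [(1 - a * b * np.cos(delta)) / 4 for a, b in outcomes]
    idx = rng.choice(4, size=n, p=probs)
    pairs = np.array(outcomes)[idx]
    return pairs[:, 0], pairs[:, 1]

n = 200_000

# No signaling: Bob's marginal statistics are the same whatever angle Alice picks.
for theta_a in (0.0, np.pi / 3, np.pi / 2):
    _, b = sample_singlet(theta_a, 0.0, n)
    print(f"Alice at {theta_a:.2f} rad -> Bob's mean outcome {b.mean():+.3f}")

# The long-range *correlation* depends only on the relative angle, set at preparation.
a, b = sample_singlet(0.0, np.pi / 3, n)
print(f"correlation {(a * b).mean():+.3f} vs predicted {-np.cos(np.pi / 3):+.3f}")
```

Because the whole experiment is described by one joint distribution fixed at the source, labeling either measurement "first" is pure bookkeeping: the simulation never needs a collapse step, only the shared distribution.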
 
atyy said:
As far as I know, there is no known Bohmian interpretation of the standard model, so perhaps there is good reason not to consider the Bohmian interpretation as giving the same predictions as all of quantum mechanics?
Actually there is such a version of Bohmian interpretation, but Bohmians are not very interested in writing it down explicitly because it looks very ugly. Bohmians (like many other theorists) are motivated by theoretical beauty and elegance. (Which, of course, are subjective notions, so not all theorists agree that Bohmian theory is beautiful and string theory elegant).
 
  • #10
Demystifier said:
Actually there is such a version of Bohmian interpretation, but Bohmians are not very interested in writing it down explicitly because it looks very ugly. Bohmians (like many other theorists) are motivated by theoretical beauty and elegance. (Which, of course, are subjective notions, so not all theorists agree that Bohmian theory is beautiful and string theory elegant).

How can one see that a Bohmian standard model exists?
 
  • #11
Well, the Bohmian interpretation looks ugly to me already for non-relativistic quantum theory. I never understood why one should find it appealing from an aesthetic point of view, but that's hardly an argument against any theory. My subjective feeling about a theory being ugly or beautiful doesn't matter. If nature likes to behave in a way I consider ugly, that's my personal problem with nature, but that's then how it is.

My argument against Bohm's interpretation is rather that it introduces unobservable "trajectories", which are superfluous to explain the observable facts described by quantum theory, and these observable facts are probabilistic (statistical) and given by Born's rule. There's nothing more to QT than the particular realization of a probability theory defined by it.
 
  • #13
Demystifier said:
E.g. by showing that a Bohmian model for any quantum theory exists. And how can one show that? See
http://lanl.arxiv.org/abs/quant-ph/0302152 [Found.Phys.Lett.18:123-138,2005]
Sec. 5.

Yes, that's among the papers I know about. I have tried to read almost all your papers with great interest! I guess I'm not enough of an expert to evaluate its correctness by myself, and I don't know if there is consensus about whether it really works - at least not in the way Bohmian mechanics for non-relativistic quantum mechanics has been examined in all sorts of tricky situations, and really does seem to work. Would it be fair to say that this is still pretty much at the frontier of research, rather than textbook knowledge? I have the same reservations about MWI - is it really an alternative interpretation to Copenhagen, or is it still an approach where it is unclear whether all the problems have really been worked out?

So would it be fair to say that at the consensus level - e.g., what one can teach to undergraduates - Copenhagen is still the only interpretation of quantum mechanics?

(Consistent histories, maybe - but it essentially has collapse and all the same problems as Copenhagen, just declared not to be problems)
 
  • #14
Note that EPR, Bell's inequality and entanglement don't demonstrate "nonlocality" (though this is the common word for it) so much as they confirm the initial "superposition of states" predicted by quantum mechanics. In other words, the initial states of the photons are not polarized in a particular direction, and the initial spins of the fermions are not along some specific x-y-z direction. The "nonlocality" has to do with those states being 100% correlated antisymmetrically, as required by standard quantum mechanics.

Like others in this thread, I'm not seeing anything that looks like "proof of wave function collapse". It's called "proof of existing quantum theory." There is an unfortunate tendency in physics to conceive of the math as being the reality. The math is the description of the reality, the quantitative language we use to communicate about the reality, subject to experimental verification.

Or to use an analogy from the Matrix, the quote of "There is no spoon." There is no wavefunction. There are phenomena that we measure that are described by math we call "wavefunctions", which aptly predict our measurements. The notion that you can "prove" that a mathematical construct has objective material behavior (collapsing or otherwise) is absurd.
 
  • #15
Let me try to defend the title and the authors a bit.

The words "nonlocal wavefunction collapse" describe the words and position of EPR and Schrodinger on the subject. They thought exactly that,
Alice can collapse the state of Bob into totally different states depending on her measurement choice: If she measures position, then Bob's state will be
an eigenstate of position. If she measures momentum Bob's state will be an eigenstate of momentum. Right after her measurement, she knows instantly (i.e. "nonlocally")
whether Bob's state is |x> (with a random x) or |p> (with a random p). Yes, of course, we know that there is no collapse (probably), and Wiseman also knows that pretty well as Demystifier noticed. But the authors don't talk about the measurement problem, and the problem of collapse. They use the word 'collapse' to describe the fact that Bob's local state changes instantly ('collapsing') upon Alice's measurement.
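This basis-dependent steering is easy to exhibit in a finite-dimensional stand-in for the EPR state. The sketch below uses a qubit Bell state |Φ+> instead of the continuous-variable EPR pair, so the Z and X bases play the role of position and momentum; the function name and setup are purely illustrative.

```python
import numpy as np

# Bell state |phi+> = (|00> + |11>)/sqrt(2); qubit 0 is Alice's, qubit 1 is Bob's.
phi_plus = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

z_basis = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]        # |0>, |1>
x_basis = [np.array([1.0, 1.0]) / np.sqrt(2),                  # |+>
           np.array([1.0, -1.0]) / np.sqrt(2)]                 # |->

def bob_state_given(alice_outcome_vec, psi):
    """Bob's normalized conditional state after Alice projects her qubit
    onto alice_outcome_vec (a two-component state vector)."""
    psi_matrix = psi.reshape(2, 2)          # rows: Alice's index, cols: Bob's
    unnormalized = alice_outcome_vec @ psi_matrix
    return unnormalized / np.linalg.norm(unnormalized)

# If Alice measures Z, Bob is left in a Z eigenstate; if she measures X,
# in an X eigenstate -- her *choice of basis* steers the kind of state Bob gets.
for name, basis in [("Z", z_basis), ("X", x_basis)]:
    for vec in basis:
        print(name, bob_state_given(vec, phi_plus).round(3))
```

For |Φ+>, whichever basis Alice measures in, Bob's conditional state is an eigenstate of that same basis, with the particular outcome random - exactly the "steering" the paper's title dresses up as "collapse".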

So yes they use these words cleverly to have a catchy title, but what they say is not wrong scientifically if you translate the words the way they mean it (by reading the main text).

The effect these authors demonstrate, where Alice can steer Bob's state, may seem impressive, but it also has a classical counterpart: it is possible classically for Alice to 'steer' the classical (probabilistic) state of Bob into different ensembles. The quantum steering effect the authors demonstrated of course cannot be reproduced classically, since entanglement allows Alice to 'steer' much more than is classically possible. However, that doesn't change the fact that this type of "nonlocality" and "collapse" has a classical analogue.
It's good to demystify some quantum effects so that we won't be more impressed than we should be, right, Demystifier? :smile:

(check this paper for the classical analogue: http://arxiv.org/ftp/quant-ph/papers/0310/0310017.pdf )
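The classical analogue can be made concrete with a toy model (not the one from the linked paper - the slips, properties, and probabilities below are made up for illustration): a source sends Alice and Bob identical slips carrying two independent properties, and Alice's choice of which property to read determines how her description of Bob's slip "collapses".

```python
import numpy as np

rng = np.random.default_rng(1)

# A source prints two identical slips per run, one to Alice, one to Bob.
# Each slip carries a color and a parity, drawn independently per run.
n = 100_000
colors = rng.choice(["red", "blue"], size=n)
parities = rng.choice(["even", "odd"], size=n)

# If Alice reads the COLOR off her slip, her description of Bob's slip
# instantly "collapses" into a definite-color ensemble (parity still mixed):
bob_parity_given_red = parities[colors == "red"]
print("P(Bob even | Alice saw red) =", round((bob_parity_given_red == "even").mean(), 3))

# Had she read the PARITY instead, she would have "collapsed" Bob's slip into
# a definite-parity ensemble -- a different decomposition of the very same
# preparation. Bob's unconditional statistics are untouched either way:
print("P(Bob red), unconditional =", round((colors == "red").mean(), 3))
```

What the quantum version adds, and what this toy model cannot reproduce, is that Alice's alternative ensembles correspond to incompatible observables and can violate steering inequalities; the merely classical part of the story is the instantaneous update of her description of Bob's system.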
 
Last edited:
  • #16
JK423 said:
Yes, of course, we know that there is (probably) no collapse, and Wiseman also knows that pretty well, as Demystifier noticed. But the authors don't talk about the measurement problem or the problem of collapse. They use the word 'collapse' to describe the fact that Bob's local state changes instantly ('collapsing') upon Alice's measurement.

Maybe to defend the title even more, one can say that even if Bohmian Mechanics or Many-Worlds were true, one could derive collapse as an effective theory. So the only problem is that the observed phenomenon may be consistent with more theories than quantum mechanics. Very strictly, they should say they do not falsify quantum mechanics; but of course everyone knows this, so that is acceptable sloppiness, in the same way that general relativity has been "proven".
 
  • #17
JK423 said:
It's good to demystify some quantum effects so that we won't be more impressed than we should, right demystifier? :smile:
Absolutely right! :smile:
 
  • #18
uumlau said:
Note that EPR, Bell's inequality and entanglement don't demonstrate "nonlocality" (though this is the common word for it) so much as they confirm the initial "superposition of states" predicted by quantum mechanics. In other words, the initial states of the photons are not polarized in a particular direction, and the initial spins of the fermions are not along some specific x-y-z direction. The "nonlocality" has to do with those states being 100% correlated antisymmetrically, as required by standard quantum mechanics.

Like others in this thread, I'm not seeing anything that looks like "proof of wave function collapse". It's called "proof of existing quantum theory." There is an unfortunate tendency in physics to conceive of the math as being the reality. The math is the description of the reality, the quantitative language we use to communicate about the reality, subject to experimental verification.

Or to use an analogy from the Matrix, the quote of "There is no spoon." There is no wavefunction. There are phenomena that we measure that are described by math we call "wavefunctions", which aptly predict our measurements. The notion that you can "prove" that a mathematical construct has objective material behavior (collapsing or otherwise) is absurd.

Speaking of the Matrix. Guys, why is it valid to consider Many Worlds or Bohmian Mechanics or even the Transactional Interpretation, while physicists want to ignore the Matrix Interpretations? Did you notice Many Worlds and Bohmian mechanics are Newtonian-like, in that you want to attribute trajectories to the wave functions or take the branches as real? Why not just consider the wave function as part of the software or algorithm used by reality? Then it doesn't have to be Bohmians or Many Worlds. It is just a program; observations or measurements are just interactions in the program. Problem solved. If you consider this philosophy and avoid it otherwise, isn't Many Worlds or Bohmian mechanics philosophy too, since it has the same predictions as QM and no new predictions? Note I'm not trying to discuss philosophy; I just want to know why Bohmian mechanics, Many Worlds and even Consistent Histories are valid interpretations while the Matrix (reality created by a program) Interpretation is automatically shut out. Could anyone explain so I know? Knowing how the algorithm or program works in the Matrix Interpretations could even solve key physics problems like quantum gravity, the hierarchy problem, etc.
 
  • #19
atyy said:
Yes, that's among the papers I know about. I have tried to read almost all your papers with great interest!
Actually, I've noticed that you read a huge number of serious physics papers with great interest, and you do all this while actually being a biologist. I am impressed, and I always wonder how you manage to do that. :wideeyed:

atyy said:
Would it be fair to say that this is still pretty much at the frontier of research, rather than textbook knowledge?
I think it would.

atyy said:
I have the same reservations about MWI - is it really an alternative interpretation to Copenhagen, or is it still an approach where it is unclear whether all the problems have really been worked out?
The latter.

atyy said:
So would it be fair to say that at the consensus level - e.g., what one can teach to undergraduates - Copenhagen is still the only interpretation of quantum mechanics?
Depends on what exactly you mean by the Copenhagen interpretation:
https://www.physicsforums.com/threads/there-is-no-copenhagen-interpretation-of-qm.332269/

What is mostly taught to undergraduates is the shut-up-and-calculate interpretation.
 
  • #20
Edward Wij said:
Note I'm not trying to discuss philosophy; I just want to know why Bohmian mechanics, Many Worlds and even Consistent Histories are valid interpretations while the Matrix (reality created by a program) Interpretation is automatically shut out. Could anyone explain so I know? Knowing how the algorithm or program works in the Matrix Interpretations could even solve key physics problems like quantum gravity, the hierarchy problem, etc.

I've never heard of this, and it sounds straight out of new-age conspiracy sites, but aren't you essentially talking about superdeterminism? If I understood it correctly, it's uninteresting simply because it's not a scientifically appealing explanation: you're not making a model that can be used for predictions, you're just saying "the law is that what happens had to happen".
 
  • #21
ddd123 said:
I've never heard of this, and it sounds straight out of new-age conspiracy sites, but aren't you essentially talking about superdeterminism? If I understood it correctly, it's uninteresting simply because it's not a scientifically appealing explanation: you're not making a model that can be used for predictions, you're just saying "the law is that what happens had to happen".

It's not superdeterminism, because the program can have genuine random generators, so the Born rule is really random. And what's its connection with the new age? Can anyone enlighten me? I'm not trying to promote the program interpretation, just wondering what makes Many Worlds or Bohmian mechanics any more valid while it is ignored. It totally supports orthodox Copenhagen too, because the *subjectivity* of wave-function collapse is simply that collapse is a subroutine needed by the program, and the cut can be placed anywhere in the von Neumann chain precisely because it is just a program.
 
  • #22
Edward Wij said:
It's not superdeterminism, because the program can have genuine random generators

What kind of genuine random number generator? A quantum one? Te-hee.
 
  • #24
Edward Wij said:
while physicists want to ignore the Matrix Interpretations?

Why do most people reject solipsism? Same reason.

That said there are some way out ideas along those lines - but just like solipsism they are very fringe.

Thanks
Bill
 
  • #25
Demystifier said:
In fact, it seems that some of them consider it seriously:
http://phys.org/news/2012-10-real-physicists-method-universe-simulation.html
Interesting, though I am not sure "the universe is a simulation" in this version says more than "it is convenient to view the observed laws of physics as a two-step process, the first step describing the laws obeyed by a simulator, the second the laws used in the simulation". Whether this is actually convenient or not, reading it as a simulation seems to be a question of interpretation.

Alternatively, if the simulator does not obey determinable laws, this means some phenomena are in principle beyond our reach - e.g. we cannot predict whether someone running the computer will decide to change some parameter and suddenly modify the observable laws, or just switch off the computer tomorrow. Not a very appealing prospect, but at least so far the program seems to have run bug-free and uninterrupted for quite a while : )
 
  • #26
Well, there's the most unspectacular interpretation of quantum theory, which is the minimal statistical interpretation (or ensemble interpretation), and IMHO that's the one one should teach to students in the beginning.

I'm also pretty sure that you cannot derive "collapse" as an effective theory. How should that work? What one has to do is describe the interaction of a quantum object (in the case considered here a biphoton, i.e., an entangled photon pair) with the macroscopic equipment used to prepare and detect/measure it. The usual way to build such an effective theory is to use some coarse-graining formalism, reducing the (FAPP impossible) microscopic description of the macroscopic equipment to its "relevant macroscopic" degrees of freedom. If this is done correctly within a relativistic QFT, you never get faster-than-light effects between space-like separated events, as it must be. This is so because relativistic QFT is constructed as a microcausal theory, i.e., local observables commute at space-like separations. In conclusion, there cannot be an effective theory implying "collapse", i.e., a spooky interaction at a distance. I read the famous EPR paper (and the much clearer follow-up paper by Einstein alone) as a criticism of collapse, included in some flavors of the Copenhagen interpretation, and not of quantum theory itself. I also think Einstein's interpretation was close to the ensemble interpretation.

Another question is whether "quantum mechanics can be considered complete" when following the ensemble interpretation. I think Einstein believed to the end that it is incomplete. This was based on his philosophical world view, according to which nature should be describable at the most fundamental level by a deterministic theory. IMHO there's no solid argument for this world view. It might well be that quantum theory is complete, and then nature behaves inherently probabilistically. I have no problem with that conclusion; but of course, one can never say that any physical theory is complete, and there's no complete quantum theory of all phenomena yet, since there's no fully consistent quantum theory of gravity yet.
 
  • #27
bhobba said:
Why do most people reject solipsism? Same reason.

That said there are some way out ideas along those lines - but just like solipsism they are very fringe.
To me, solipsism seems to be the only way to make the local-information interpretation of QM consistent.
http://lanl.arxiv.org/abs/1112.2034 [Int. J. Quantum Inf. 10 (2012) 1241016]
 
  • #28
According to Rkastner, objective collapse (in the spirit of this thread, collapse that really happens) must occur before there is even decoherence, because from pure Schrödinger evolution there is no way to derive decoherence. When we assume stray photons from the CMBR, they are already collapsed; without that assumption, there is no decoherence of the dust particles. There is not even a choice of preferred basis if we don't assume collapse from the beginning. So it seems objective collapse must really be happening first. Does anyone object to this?
 
  • #29
Edward Wij said:
According to Rkastner, objective collapse (in the spirit of this thread, collapse that really happens) must occur before there is even decoherence, because from pure Schrödinger evolution there is no way to derive decoherence. When we assume stray photons from the CMBR, they are already collapsed; without that assumption, there is no decoherence of the dust particles. There is not even a choice of preferred basis if we don't assume collapse from the beginning. So it seems objective collapse must really be happening first. Does anyone object to this?

Yes - it's her view, not mine. She believes it's a major assumption that the environment in decoherence models is modeled randomly - I don't. Nor is it the view of the majority of those who have studied the issue; in fact she is the only one I know who espouses it.

Thanks
Bill
 
Last edited:
  • #30
Assuming this holds (I have no idea), is this an argument in favor of ruling out objective collapse? (Not that I would object to that, just trying to understand)
 
  • #31
bhobba said:
Yes - it's her view, not mine. She believes it's a major assumption that the environment in decoherence models is modeled randomly - I don't.

Thanks
Bill

In your thread with her, I thought you were agreeing with her. You said "It's a general assumption. For example do you believe stray photons from the CMBR interacting with a dust particle are entangled?"

She answered: "But it's an assumption you're not allowed to make in order to derive decoherence, because it implicitly injects decoherence from the start. Yes, as I've said, of course things like stray photons are decohered, but that's an empirical observation that can't be used in order to theoretically 'derive' our empirical observation from only the Schrodinger evolution without any actual collapse that could really decohere things. That's why it's circular."

You didn't reply after that, so I thought you were agreeing with her. Can you state the reason why you don't believe it?
 
  • #32
Edward Wij said:
Can you state the reason why you don't believe it?

The number of random states is much, much greater than the number of ordered ones. It's a very common assumption, made in mathematical modelling all the time. It's an assumption - yes - but a very, very reasonable one.

The reason I didn't reply is that it would likely go nowhere, degenerating into a debate about the assumptions in mathematical models. My background is applied math, where mathematical models are very common, as are the assumptions they make. I have my view - she has hers.

Thanks
Bill
 
Last edited:
  • #33
bhobba said:
The number of random states is much, much greater than the number of ordered ones. It's a very common assumption, made in mathematical modelling all the time. It's an assumption - yes - but a very, very reasonable one.

Thanks
Bill

But she has addressed that, in a thread I have read again and again. She said: "Bill, the whole point is that you can't apply 'ordinary statistical thinking' and assume 'random systems' with 'random phases', because quantum theory with only Schrodinger evolution doesn't license that. This is the circularity issue. In QM with only Schrodinger evolution there are no 'stray photons' unless you assume they are 'stray' (i.e., already have randomized phases) from the beginning. All the talk of this or that random dust particle or stray photon is circular. Yes, these are all stray and decohered, but not because of a 'decoherence derivation' which assumes that from the beginning. In fact all these stray objects are decohered *because* there is real collapse. 'Apparent collapse' into the preferred basis is circular and doesn't explain anything."

Well?
 
  • #34
Edward Wij said:
But she has addressed that, in a thread I have read again and again. She said: "Bill, the whole point is that you can't apply 'ordinary statistical thinking' and assume 'random systems' with 'random phases', because quantum theory with only Schrodinger evolution doesn't license that."

Edward - I don't agree with her, and I have told you why. I believe ordinary statistical thinking is applicable. BTW, my view is the orthodox one - hers isn't.

Do you believe, for example, that if you observe stray photons around you they will show any order? She believes that requires explanation - I don't. There is a very powerful theorem called the Central Limit Theorem which says, basically, that if something is determined by many random factors of any distribution, their sum must follow a certain (normal) distribution. Do you really believe the things you find in the environment have not been subjected to many random factors?
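The Central Limit Theorem claim is easy to check numerically. A quick sketch, summing deliberately non-Gaussian (uniform) factors; the factor count and sample size are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Each "system" is the sum of many independent random factors drawn from a
# non-Gaussian (uniform) distribution; the CLT says the standardized sum
# approaches a normal distribution regardless of the factors' own shape.
n_factors, n_samples = 200, 100_000
sums = rng.uniform(-1, 1, size=(n_samples, n_factors)).sum(axis=1)
z = (sums - sums.mean()) / sums.std()

# Compare coverage with the standard normal's 68-95 rule:
within1 = (np.abs(z) < 1).mean()
within2 = (np.abs(z) < 2).mean()
print(f"within 1 sigma: {within1:.3f} (normal: 0.683)")
print(f"within 2 sigma: {within2:.3f} (normal: 0.954)")
```

The point is that the Gaussian shape emerges no matter what distribution the individual factors follow, which is why "the environment is random" is such a generic modelling assumption.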

Thanks
Bill
 
Last edited:
  • #35
bhobba said:
Edward - I don't agree with her and have told you why. I believe ordinary statistical thinking is applicable. BTW my view is the orthodox one - hers isn't.
Do you believe, for example, that if you observe stray photons they will show any order? She believes that requires explanation - I don't.

Thanks
Bill

OK, I'll think about your objection. But why didn't you mention this objection last year in the thread with her? It seems this is the first time you've mentioned it.
 
  • #36
Edward Wij said:
OK, I'll think about your objection. But why didn't you mention this objection last year in the thread with her? It seems this is the first time you've mentioned it.

bhobba said:
The reason I didn't reply is it would likely go nowhere degenerating into a view of assumptions in mathematical models. My background is applied math where mathematical models are very common as well as the assumptions they make. I have my view - she has hers.

I elucidated my view of why that assumption is very common - see my previous post.

Thanks
Bill
 
  • #37
bhobba said:
I elucidated my view of why that assumption is very common - see my previous post.

Thanks
Bill

OK. This is all very important, so I hope others can rephrase what Bill is saying in their own language. The reason this is important is that if Rkastner happens to be right, decoherence is just secondary. Underneath is a real objective collapse, which many authors favor (like the reference the OP of this thread points to).

This reminds me of Penrose. His idea of spacetime collapsing the wave function makes sense. Think of wave functions as all pure states; then you have spacetime, which collapses the wave function. Spacetime is discrete, so it can be the natural collapser of wave functions!
 
  • #38
Edward Wij said:
This is all very important

You do understand that this is a minority view? That doesn't make it incorrect, of course - just that there is likely a reason it hasn't garnered support.

Its similar to the following objection against many worlds:
http://arxiv.org/pdf/1210.8447v1.pdf

Added Later:
One thing to remember is that there is an inherent randomness in QFT - for example, spontaneous emission is explained by interaction with the vacuum, and vacuum fluctuations are random. That would seem to be the ultimate source of randomness.

Thanks
Bill
 
Last edited:
  • #39
Edward Wij said:
Speaking of the Matrix. Guys, why is it valid to consider Many Worlds or Bohmian Mechanics or even the Transactional Interpretation, while physicists want to ignore the Matrix Interpretations? Did you notice Many Worlds and Bohmian mechanics are Newtonian-like, in that you want to attribute trajectories to the wave functions or take the branches as real? Why not just consider the wave function as part of the software or algorithm used by reality? Then it doesn't have to be Bohmians or Many Worlds. It is just a program; observations or measurements are just interactions in the program. Problem solved. If you consider this philosophy and avoid it otherwise, isn't Many Worlds or Bohmian mechanics philosophy too, since it has the same predictions as QM and no new predictions? Note I'm not trying to discuss philosophy; I just want to know why Bohmian mechanics, Many Worlds and even Consistent Histories are valid interpretations while the Matrix (reality created by a program) Interpretation is automatically shut out. Could anyone explain so I know? Knowing how the algorithm or program works in the Matrix Interpretations could even solve key physics problems like quantum gravity, the hierarchy problem, etc.

Or, why have an "interpretation" at all?

Answer: because quantum mechanics doesn't allow one to apply the same intuitions as classical mechanics, so people try to explain how it works in a more-or-less "classical" way. Looking at the world as a simulation, while kind of nonsensical, at least has one virtue: it removes any need to explain "why" things have to work one way or another. And while sometimes our math is very precise, much of it is going to be approximate.

Here's another analogy for why "wave function collapse" doesn't actually mean anything. Let's say that I'm putting apples into a basket. I use the "addition function" to describe the accumulation of apples. I put in 3, then I put in 8, and then I put in 4, and the addition function correctly describes 15 apples. If I reach into the basket, eat an apple, and toss the core back into the basket, that doesn't mean that the "addition function" collapsed. Rather, it means that something happened that addition does not describe. In quantum mechanics, every case of "wave function collapse" involves the system being altered in a way that is not described by the (prior) wave function.
 
Last edited:
  • #41
Demystifier said:
What is mostly taught to undergraduates is the shut-up-and-calculate interpretation.

Well, at least 2 classics mention Bohr or Copenhagen - Landau and Lifshitz, and Messiah. LL was what I read when I first learned QM (together with French and Taylor, and Gasiorowicz). Actually, Messiah was a recommended text for one of the courses I took, but I never read it till recently, and it's interesting that he writes that hidden variables may be possible, but that he will just take Copenhagen since there is no evidence against it yet. He doesn't quite say hidden variables, but his two options are Einstein and Copenhagen, and I interpret Einstein's approach as hidden variables.
 
  • #43
rootone said:
Isn't this just kicking the can further down the road though?

Pretty much.

If what we experience is a simulation, then what is it simulating?

Or rather, what is it that is doing the simulating? Is that simulated too? And so on. But I lean toward the idea that these people don't really take this too seriously and are just having some fun. At least, I hope so.
 
  • #44
bhobba said:
Edward - I don't agree with her and have told you why. I believe ordinary statistical thinking is applicable. BTW my view is the orthodox one - hers isn't.

Do you believe, for example, that if you observe stray photons around you they will have any order? She believes this requires explanation - I don't. There is a very powerful theorem called the Central Limit Theorem that says, basically, that if something is the net result of many independent random factors, whatever their individual distributions, it tends toward a normal (Gaussian) distribution. Do you really believe things you find in the environment have not been subjected to many random factors?

Thanks
Bill

I'd like to understand how exactly your view differs from Ruth's in the language of density matrices (I'm familiar with the math). When we treat a subsystem, we know the phases are not coherent (because we ignore the rest of the system). Therefore two stray photons won't be coherent, of course. Are you saying that Ruth believes they are coherent? Or maybe there is some confusion or misunderstanding in the discussion, especially over something this basic and obvious. Maybe she is just focusing on the collapse part, or arguing that the Born rule must be fully applied for there to be a classical outcome? Since I'm familiar with the math, please tell me in mathematical terms how exactly her view differs from yours, and whether it could just be a matter of focus like I mentioned. Thanks.
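To make the question concrete, here is a minimal numpy sketch of the standard partial-trace calculation I have in mind (a toy Bell-state example, nothing interpretation-specific):

```python
import numpy as np

# Bell state |psi> = (|00> + |11>)/sqrt(2) on a 2-qubit Hilbert space
psi = np.array([1, 0, 0, 1]) / np.sqrt(2)

# Density matrix of the pure state: rho = |psi><psi|
rho = np.outer(psi, psi.conj())

# Trace out the second qubit (the "environment"):
# reshape so that rho[i, j, k, l] = <i j| rho |k l>,
# then sum over the environment indices j == l.
rho_sys = np.einsum('ijkj->ik', rho.reshape(2, 2, 2, 2))

print(rho_sys)
# The reduced state is maximally mixed, diag(1/2, 1/2):
# the off-diagonal (coherence) terms are zero once the
# rest of the system is ignored.
```

This is just the textbook statement that a subsystem of an entangled pure state is described by a mixed reduced density matrix with no off-diagonal coherences in this basis.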
 
  • #45
Edward Wij said:
I'd like to understand how exactly your view differs from Ruth's in the language of density matrices (I'm familiar with the math). When we treat a subsystem, we know the phases are not coherent (because we ignore the rest of the system).

Ruth and I do not differ on any of the technical detail. It's simply the interpretation. If you go and look at some specific decoherence models, you will see there is an assumption (in some of them, anyway) of the environment being random. She does not dispute that being true - what she disputes is that it's a reasonable assumption, one more or less implied by the nature of QM - e.g. what I mentioned before about quantum fluctuations in the vacuum leading to an inherent randomness. She thinks it's a big question mark the theory needs to explain. I don't agree.

Added Later:
If you want to understand the situation you must investigate the detail. The following is the best source I know that starts from only basic knowledge and builds up to it in the final chapters:
http://quantum.phys.cmu.edu/CQT/index.html

The relevant chapter is chapter 26 at the end:
http://quantum.phys.cmu.edu/CQT/chaps/cqt26.pdf

See section 26.4 - The Random Environment. Ruth thinks the assumption of the environment being random, while true, requires explanation. I, and I suspect others as well, think it's a result of the usual quantum randomness, such as that found in spontaneous emission and explained by vacuum fluctuations. The environment will have been subjected to such randomness so frequently that it will be well and truly scrambled, especially considering the Central Limit Theorem I mentioned previously.
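As a quick numerical illustration of the Central Limit Theorem point (a toy Python sketch, obviously not a model of any physical environment):

```python
import numpy as np

rng = np.random.default_rng(0)

# Each "outcome" is the sum of many independent random factors drawn
# from a decidedly non-Gaussian distribution (uniform on [0, 1)).
n_factors, n_samples = 200, 10_000
sums = rng.random((n_samples, n_factors)).sum(axis=1)

# The CLT says the sums should look Gaussian with mean n*mu and
# standard deviation sqrt(n)*sigma  (here mu = 1/2, sigma^2 = 1/12).
mu, sigma = n_factors * 0.5, (n_factors / 12.0) ** 0.5
print(sums.mean(), sums.std())  # close to 100 and ~4.08

# Fraction of outcomes within one standard deviation of the mean:
# a Gaussian predicts ~68%, regardless of the uniform input.
frac = np.mean(np.abs(sums - mu) < sigma)
print(frac)
```

The point is only that a quantity built from many independent random influences ends up normally distributed no matter what the individual influences look like.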

Also, please, please, if you want to discuss that chapter put the time and effort into reading all the previous ones.

Thanks
Bill
 
Last edited:
  • #46
bhobba said:
Ruth and I do not differ on any of the technical detail. It's simply the interpretation. If you go and look at some specific decoherence models, you will see there is an assumption (in some of them, anyway) of the environment being random. She does not dispute that being true - what she disputes is that it's a reasonable assumption, one more or less implied by the nature of QM - e.g. what I mentioned before about quantum fluctuations in the vacuum leading to an inherent randomness. She thinks it's a big question mark the theory needs to explain. I don't agree.

Thanks
Bill

In the thread last year, she was discussing the factoring problem with you - how you decompose the system and environment - because if the decomposition differs, decoherence may lead to different predictions. Is the factoring problem related to quantum fluctuations in the vacuum leading to inherent randomness? Would this solve the factoring problem?
 
  • #47
Edward Wij said:
In the thread last year, she was discussing the factoring problem with you - how you decompose the system and environment - because if the decomposition differs, decoherence may lead to different predictions. Is the factoring problem related to quantum fluctuations in the vacuum leading to inherent randomness? Would this solve the factoring problem?

I don't believe there is a factoring problem - note the word MAY in "may lead to different predictions". I haven't seen two different analyses of the same situation lead to different predictions. I do recall that in one discussion about it someone showed the same ambiguity arises in classical mechanics. My view, as I have mentioned many times, is that some key theorems are required to settle the issue one way or another. That's my view - there are others. They have been thrashed out before and I will not be drawn into it again.

For definiteness I will refer to the following:
http://arxiv.org/pdf/1210.8447v1.pdf

It's saying nothing can happen in MW. Yet we have things like vacuum fluctuations causing inherent randomness. Since MW is cooked up to be indistinguishable from standard QM, it should include that. So I don't necessarily accept that paper's analysis as correct. In saying that, I am appealing to Quantum Field Theory, which I am not as familiar with as I would like. I would like someone with more knowledge of QFT to comment on the exact cause of random vacuum fluctuations.

Thanks
Bill
 
  • #48
bhobba said:
I don't believe there is a factoring problem - note the word MAY in "may lead to different predictions". I haven't seen two different analyses of the same situation lead to different predictions. I do recall that in one discussion about it someone showed the same ambiguity arises in classical mechanics. My view, as I have mentioned many times, is that some key theorems are required to settle the issue one way or another. That's my view - there are others. They have been thrashed out before and I will not be drawn into it again.

For definiteness I will refer to the following:
http://arxiv.org/pdf/1210.8447v1.pdf

It's saying nothing can happen in MW. Yet we have things like vacuum fluctuations causing inherent randomness. Since MW is cooked up to be indistinguishable from standard QM, it should include that. So I don't necessarily accept that paper's analysis as correct. In saying that, I am appealing to Quantum Field Theory, which I am not as familiar with as I would like. I would like someone with more knowledge of QFT to comment on the exact cause of random vacuum fluctuations.

Thanks
Bill

Did Zurek and Maximilian never mention the factoring problem? If you have heard them mention it, let me know - they are the authorities. Maybe the factoring problem is only valid in Many Worlds, as acknowledged by Wallace? Which decoherence authors acknowledge there is a factoring problem? Is it only Ruth?
 
  • #49
Edward Wij said:
Did Zurek and Maximilian never mention the factoring problem? If you have heard them mention it, let me know - they are the authorities. Maybe the factoring problem is only valid in Many Worlds, as acknowledged by Wallace? Which decoherence authors acknowledge there is a factoring problem? Is it only Ruth?

Zurek and Schlosshauer do mention the factoring problem - whether it is a problem depends on what interpretation one is using.

Schlosshauer, http://arxiv.org/abs/quant-ph/0312059
p8: Also, there exists no general criterion for how the total Hilbert space is to be divided into subsystems, while at the same time much of what is called a property of the system will depend on its correlation with other systems. This problem becomes particularly acute if one would like decoherence not only to motivate explanations for the subjective perception of classicality (as in Zurek’s “existential interpretation,” see Zurek, 1993, 1998, 2003b, and Sec. IV.C below), but moreover to allow for the definition of quasiclassical “macrofacts.” Zurek (1998, p. 1820) admits this severe conceptual difficulty: In particular, one issue which has been often taken for granted is looming big, as a foundation of the whole decoherence program. It is the question of what are the “systems” which play such a crucial role in all the discussions of the emergent classicality. (. . . ) [A] compelling explanation of what are the systems—how to define them given, say, the overall Hamiltonian in some suitably large Hilbert space—would be undoubtedly most useful.

Zurek, http://arxiv.org/abs/quant-ph/9805065
p2: We can mention two such open issues right away: Both the formulation of the measurement problem and its resolution through the appeal to decoherence require a Universe split into systems. Yet, it is far from clear how one can define systems given an overall Hilbert space “of everything” and the total Hamiltonian.

p22: As noted before, the problem of measurement cannot be even stated without a recognition of the existence of systems. Therefore, our appeal to the same assumption for its resolution is no sin. However, a compelling explanation of what are the systems — how to define them given, say, the overall Hamiltonian in some suitably large Hilbert space — would be undoubtedly most useful.
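A toy numerical illustration of why the choice of "systems" matters (a numpy sketch; the alternative factorization here is just a relabeling of the global basis, which amounts to carving the same 4-dimensional Hilbert space into two different "qubits"):

```python
import numpy as np

def entanglement_entropy(psi):
    """Von Neumann entropy (in bits) of the reduced state of the first
    factor, for a normalized pure state on C^2 (x) C^2."""
    # Schmidt coefficients = singular values of the 2x2 coefficient matrix
    s = np.linalg.svd(psi.reshape(2, 2), compute_uv=False)
    p = s ** 2
    p = p[p > 1e-12]  # drop zero Schmidt weights before taking logs
    return float(abs((p * np.log2(p)).sum()))

# One fixed vector in the 4-dim Hilbert space: |0> (x) |+>
psi = np.array([1, 1, 0, 0]) / np.sqrt(2)

# Factorization A: the "obvious" qubit (x) qubit split
print(entanglement_entropy(psi))      # product state -> zero entropy

# Factorization B: relabel the global basis (swap |01> <-> |11>),
# i.e. define the two "qubits" differently within the same space
U = np.eye(4)[[0, 3, 2, 1]]
print(entanglement_entropy(U @ psi))  # Bell-like state -> one bit
```

The same global state is a product state under one way of defining the subsystems and maximally entangled under another, which is exactly the ambiguity Zurek is pointing at: without a criterion for what the "systems" are, whether decoherence has occurred is not well defined.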
 
Last edited:
  • Like
Likes Demystifier and bhobba
  • #50
atyy said:
Zurek and Schlosshauer do mention the factoring problem - whether it is a problem depends on what interpretation one is using.

Schlosshauer, http://arxiv.org/abs/quant-ph/0312059
p8: Also, there exists no general criterion for how the total Hilbert space is to be divided into subsystems, while at the same time much of what is called a property of the system will depend on its correlation with other systems. This problem becomes particularly acute if one would like decoherence not only to motivate explanations for the subjective perception of classicality (as in Zurek’s “existential interpretation,” see Zurek, 1993, 1998, 2003b, and Sec. IV.C below), but moreover to allow for the definition of quasiclassical “macrofacts.” Zurek (1998, p. 1820) admits this severe conceptual difficulty: In particular, one issue which has been often taken for granted is looming big, as a foundation of the whole decoherence program. It is the question of what are the “systems” which play such a crucial role in all the discussions of the emergent classicality. (. . . ) [A] compelling explanation of what are the systems—how to define them given, say, the overall Hamiltonian in some suitably large Hilbert space—would be undoubtedly most useful.

Zurek, http://arxiv.org/abs/quant-ph/9805065
p2: We can mention two such open issues right away: Both the formulation of the measurement problem and its resolution through the appeal to decoherence require a Universe split into systems. Yet, it is far from clear how one can define systems given an overall Hilbert space “of everything” and the total Hamiltonian.

p22: As noted before, the problem of measurement cannot be even stated without a recognition of the existence of systems. Therefore, our appeal to the same assumption for its resolution is no sin. However, a compelling explanation of what are the systems — how to define them given, say, the overall Hamiltonian in some suitably large Hilbert space — would be undoubtedly most useful.

In which interpretations is it a problem, and in which is it not? I think it's not a problem for Bill's Ensemble interpretation, because for him the mere fact that there is an outcome means the measurement problem is solved. So let's address the others, for those of us who are not Ensemblers.
 
