Some (unrelated) questions about the measurement problem

In summary, the conversation discusses questions about quantum mechanics, specifically the measurement problem and the role of decoherence in solving it. The participants also touch on the Von Neumann-Wigner interpretation and whether it requires consciousness, and the possibility of formulating QM on a real vector space. Ultimately, the conversation does not reach a consensus on these topics.
  • #1
haushofer
Dear all,

Every now and then I get this itchy feeling and start to think about quantum mechanics. Which raises, of course, some questions. These concern the measurement problem. I decided to post them in a single topic, so I enumerate them. If someone has some insights clarifying my confusions, I'll be very happy.

1) About decoherence: so my understanding is that decoherence is an environment-driven mechanism which erases interference terms. If I have, say, a system which can be in two possible states ##\psi_1## and ##\psi_2##, then ##\psi = \psi_1+\psi_2## and the probability density becomes ##|\psi|^2 = |\psi_1|^2 + |\psi_2|^2 + 2\,\mathrm{Re}(\psi_1^* \psi_2) = |\psi_1|^2 + |\psi_2|^2 + \text{interference}##. Am I right that decoherence states that, if our system interacts with its environment, the interference term eventually dies out and that ##|\psi|^2 \rightarrow |\psi_1|^2 + |\psi_2|^2##? If so, how on Earth can some people claim that decoherence solves the measurement problem, given that you still need some mechanism which eventually picks one of the two states out of this classical probability distribution? I mean, decoherence cannot induce a unitary evolution ##\psi \rightarrow \psi_i## for ##i=1,2##, right? See e.g. Tegmark's https://arxiv.org/abs/quant-ph/0101077, quoting "We argue that modern experiments and the discovery of decoherence have shifted prevailing quantum interpretations away from wave function collapse towards unitary physics". Am I right that decoherence merely turns "quantum probability distributions" (meaning, with interference terms) into classical probability distributions (meaning, no interference terms)? (A small numerical sketch of what I mean is at the end of this post.)

2) About the Von Neumann-Wigner interpretation: I am puzzled about this interpretation (who isn't, but bear with me). In e.g. https://arxiv.org/abs/1009.2404 it is stated that QM needs no consciousness. My question is: why not simply use the double-slit experiment, put a detector near one of the slits, and only look at the screen? I assume the interference pattern will still disappear (right?), so doesn't this conclusively exclude the possibility that "a conscious observation near one of the slits is needed to make the interference pattern disappear"? And the same goes even more for the delayed-choice experiments: because the indirect observation of the particle going through one of the slits is delayed, doesn't this show that consciousness is not a relevant factor in the whole setup, but that mere interaction of the measurement apparatus with the system is?

3) About the Von Neumann-Wigner interpretation: is there any way to "break out of the Von-Neumann chain of regression" without hidden variables or imposing extra dynamics on top of the Schrodinger equation? I see that people often use this Von-Neumann chain to motivate the Von Neumann-Wigner interpretation. And how do adherents of this interpretation explain cosmic events like e.g. the CMB we're receiving from events billions of years ago?

4) About the QM formalism itself (not really about the measurement problem): I do understand that the Schrödinger equation and Heisenberg relations impose that our Hilbert space is complex. But how far could one go with constructing a theory of QM on a real vector space, with writing down a real Schrödinger equation, real commutation relations from the Poisson brackets and so forth? What would be the first obstacle to encounter?
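Coming back to question 1: to make the cross term concrete, here is a minimal numerical sketch (nothing beyond the formula above; the Gaussian wave packets and the phase averaging used to mimic decoherence are arbitrary toy choices):

[CODE]
import numpy as np

# Toy 1D "two-path" state: two Gaussian wave packets with opposite phase gradients.
x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]

def packet(center, k):
    psi = np.exp(-(x - center)**2) * np.exp(1j * k * x)
    return psi / np.sqrt(np.sum(np.abs(psi)**2) * dx)

psi1 = packet(-1.0, +2.0) / np.sqrt(2)
psi2 = packet(+1.0, -2.0) / np.sqrt(2)

coherent  = np.abs(psi1 + psi2)**2              # |psi1|^2 + |psi2|^2 + 2 Re(psi1* psi2)
no_interf = np.abs(psi1)**2 + np.abs(psi2)**2   # what is left if the cross term is erased
cross     = 2 * np.real(np.conj(psi1) * psi2)   # the interference term itself

# Crude stand-in for decoherence: average |psi1 + e^{i phi} psi2|^2 over the relative
# phase phi, as if the environment had scrambled that phase.
phases = np.linspace(0, 2 * np.pi, 64, endpoint=False)
dephased = np.mean([np.abs(psi1 + np.exp(1j * p) * psi2)**2 for p in phases], axis=0)

print("max |interference term|:", np.max(np.abs(cross)))
print("max |dephased - (|psi1|^2 + |psi2|^2)|:", np.max(np.abs(dephased - no_interf)))
# The second number is ~0: only the classical-looking sum of probabilities survives,
# but nothing in this averaging picks one of the two branches.
[/CODE]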

Any insights are appreciated :)

Edit: if some moderator thinks these are too many questions for one topic, feel free to say so and take appropriate action.
 
  • #2
1) You are absolutely right.

2) That paper is wrong. One of the authors (who happens to be my brother) is a psychologist, and the other is, I think, a biologist. Neither of them is a physicist. They mention me in the Acknowledgements, but my contribution was to explain QM to them and to explain why their arguments were wrong.

3) I think the answer to your first question is - no.

4) QM can be formulated without complex numbers. See T. Norsen, Foundations of Quantum Mechanics (2017), Sec. 5.1.
 
  • #3
haushofer said:
1) About decoherence: so my understanding is that decoherence is an environment-driven mechanism which erases interference terms. If I have, say, a system which can be in two possible states ##\psi_1## and ##\psi_2##, then ##\psi = \psi_1+\psi_2## and the probability density becomes ##|\psi|^2 = |\psi_1|^2 + |\psi_2|^2 + 2\,\mathrm{Re}(\psi_1^* \psi_2) = |\psi_1|^2 + |\psi_2|^2 + \text{interference}##. Am I right that decoherence states that, if our system interacts with its environment, the interference term eventually dies out and that ##|\psi|^2 \rightarrow |\psi_1|^2 + |\psi_2|^2##? If so, how on Earth can some people claim that decoherence solves the measurement problem, given that you still need some mechanism which eventually picks one of the two states out of this classical probability distribution? I mean, decoherence cannot induce a unitary evolution ##\psi \rightarrow \psi_i## for ##i=1,2##, right?

Here's the way it works in practice:
  • After decoherence, there are no more interference effects.
  • Without interference effects, quantum probabilities work just like classical probabilities.
  • So you can give the post-decoherence probabilities the same interpretation that you do classical probabilities---that the probabilities reflect ignorance of the true state of the system.
In other words, after decoherence, you might as well assume that a "collapse" has happened. The system is either in state [itex]\psi_1[/itex] or in state [itex]\psi_2[/itex], you just don't know which.

This is intellectually incoherent, in my opinion, but it works fine as a rule of thumb.
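A minimal sketch of the standard mechanism behind these bullet points (a toy example: one system qubit, one environment qubit, an idealized interaction; none of the specifics are taken from a particular reference). Tracing out the environment leaves a diagonal, classical-looking density matrix for the system, but no outcome has been selected:

[CODE]
import numpy as np

# System qubit starts in a superposition; an "environment" qubit records which branch.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

alpha, beta = 1 / np.sqrt(2), 1 / np.sqrt(2)
system = alpha * ket0 + beta * ket1

# Before interaction: product state (system tensor environment-in-|0>).
before = np.kron(system, ket0)

# Idealized decohering interaction (a CNOT): the environment copies the branch label.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
after = CNOT @ before   # = alpha|00> + beta|11>, an entangled state

def reduced_system(state):
    """Trace out the environment (second qubit) from a two-qubit pure state."""
    rho = np.outer(state, state.conj()).reshape(2, 2, 2, 2)
    return np.trace(rho, axis1=1, axis2=3)

print("rho_system before:\n", np.round(reduced_system(before), 3))
print("rho_system after:\n", np.round(reduced_system(after), 3))
# Before: the off-diagonal (coherence) terms are present.
# After: the reduced density matrix is diag(|alpha|^2, |beta|^2) -- indistinguishable,
# for any measurement on the system alone, from a classical ignorance mixture.
# Nothing in the unitary step selected one outcome.
[/CODE]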
 
  • #4
stevendaryl said:
After decoherence, there are no more interference effects.

Roland Omnes in “Decoherence And Ontology” (Ontology Studies 8, 2008, 55):

Let us go back however to less elevated questions. I did not yet mention that decoherence is a dynamical effect that is never perfectly exact. Entangled states of a measured quantum object and a measuring device are disentangled, but a tiny amount of entanglement (or superposition) always survives. The probability for observing a macroscopic interference effect between a dead and a live cat is never exactly zero, but extremely small and becoming exponentially smaller with larger values of time.
 
  • #5
Demystifier said:
One of the authors (who happens to be my brother)
Isn't it a small world after all :wink:.
 
  • #6
Lord Jestocost said:
but extremely small and becoming exponentially smaller with larger values of time.

Interesting that Omnes said that - he is a big supporter of decoherence, so I doubt he is using it as an argument against it. It is of course true, but I must admit one of my real annoyances is with those who criticize decoherence using that argument. Decoherence does not solve the measurement problem - it has merely morphed it - let's get that straight. However the argument that the interference is never exactly zero, so decoherence has problems, is a load of the proverbial - IMHO. It decays so fast, and exponentially at that, that it quickly reaches such a low level that not only do we currently not have the technology to detect it (except in special contrived cases set up to experimentally investigate decoherence), I doubt we ever will, and certainly for everyday objects it can be taken as zero because our senses are nowhere near sensitive enough to detect it. As I often say, would anyone consider something of order 1/googolplex (which the exponential decay heads towards relentlessly) to not, for all practical purposes, be zero - if so I have some land with unbelievable 360-degree ocean views for sale - anyone interested? :-p
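Just to put a rough number on "decays so fast": in the usual exponential model the time needed for the residual coherence to drop below any given threshold grows only logarithmically in the inverse threshold. A toy calculation (the decoherence timescale below is purely illustrative, not a measured value):

[CODE]
import numpy as np

# Toy model: off-diagonal (coherence) magnitude ~ exp(-t / tau_dec).
tau_dec = 1e-20   # illustrative decoherence timescale in seconds (made up for the example)

for threshold in (1e-6, 1e-20, 1e-100):
    t_needed = tau_dec * np.log(1.0 / threshold)
    print(f"coherence < {threshold:g} after ~ {t_needed:.2e} s")
# Because the dependence is logarithmic, pushing the threshold from 10^-6 down to
# 10^-100 only multiplies the required time by about 17 -- the residue is never
# exactly zero, but it falls below any conceivable detection limit almost at once.
[/CODE]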

I must add that what I wrote above was a bit tongue in cheek - I cannot, and will never be able to, prove those with this objection wrong - it hinges on the belief that 'for all practical purposes' (FAPP) is good enough. That is an opinion - not fact - hence not science. It's perfectly OK to hold the view that it never becomes zero and so is not really an explanation, because it depends entirely on your view of explanation - such is philosophy, not science. Most people exposed to it accept FAPP as being fine - but some do not. It's a bit like when you see a proper derivation of SR based on symmetry and group theory: most much prefer it to Lorentz Ether Theory (LET) - but a few don't. Believe it or not, I read somewhere that the great John Bell believed in LET - interesting. It would be a brave man indeed who called Bell a fool - because he most certainly was not. But in a minority on that issue he certainly was. It boils down to the answer to a deep question Poincare asked of Einstein - what is the mechanical basis of SR? Einstein replied - none - which left Poincare in shock. It actually is a deep question, not easily answered - but I will do a separate thread about that.

Thanks
Bill
 
  • #7
bhobba said:
Decoherence does not solve the measurement problem - it has merely morphed it - let's get that straight.

Do you mean - harking back a year or two - that it does not solve the transition from an improper mixture to a proper one? If not, I'm not sure what remains of the problem, please explain.
 
  • #8
Derek P said:
Do you mean - harking back a year or two - that it does not solve the transition from an improper mixture to a proper one? If not, I'm not sure what remains of the problem, please explain.

Exactly. It solves it FAPP - but that's all.

Thanks
Bill
 
  • #9
Demystifier said:
1) You are absolutely right.
See T. Norsen, Foundations of Quantum Mechanics (2017), Sec. 5.1.

[Quote = "T. Norsen, Foundations of Quantum Mechanics (2017), Sec. 5.1"]
The ‘real’ world (using the term in its nonmathematical sense) is the world of ‘real’ quantities (using the term in its mathematical sense)
[/Quote]

I doubt that we can directly read real numbers from measuring instruments. Like complex numbers, real numbers are human conceptual constructs (imaginary). To make this an ontological criterion is, in my opinion, to misunderstand the mathematical concept of number.

There are two well-known constructions of the real numbers from the rationals (among many others) - namely the Dedekind cut method, in which a real number is defined as a class of rationals, and the Cantor-Cauchy completion method, in which a real number is defined as an equivalence class of Cauchy sequences of rational numbers.

Best regards
Patrick
 
  • #10
haushofer said:
About decoherence: so my understanding is that decoherence is an environment-driven mechanism which erases interference terms.

Quoting Ruth E. Kastner (‘Einselection’ of pointer observables: The new H-theorem?, arXiv:1406.4126):

It is often claimed that unitary-only dynamics, together with decoherence arguments, can explain the ‘appearance’ of wave function collapse, i.e, that Schrodinger’s Cat is either alive or is dead. This however is based on implicitly assuming that macroscopic systems (like Schrodinger’s Cat himself) are effectively already ‘decohered,’ since the presumed phase randomness of already-decohered systems is a crucial ingredient in the ‘derivation’ of decoherence.
 
  • #11
haushofer said:
2) About the Von Neumann-Wigner interpretation: I am puzzled about this interpretation (who isn't, but bear with me). In e.g. https://arxiv.org/abs/1009.2404 it is stated that QM needs no consciousness. My question is: why not simply use the double-slit experiment, put a detector near one of the slits, and only look at the screen? I assume the interference pattern will still disappear (right?), so doesn't this conclusively exclude the possibility that "a conscious observation near one of the slits is needed to make the interference pattern disappear"?

How near? You are assuming you can look at the particle as it goes through the slits. If your detector messes with the photon as it passes then the interference pattern will disappear - not because of anything to do with observation but because the detector blocked the slit! Clever schemes with very delicate detectors run into a related problem - if the detector's positional uncertainty is small enough to locate it at a particular slit then the momentum uncertainty will be large enough to require a "kick" from the passing particle that disrupts the interference pattern. If you deduce the slit from the behaviour of an entangled partner, you don't change the pattern on the screen ever - this is frequently misunderstood.
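For the record, a back-of-the-envelope version of that "kick" argument (the standard Bohr-style order-of-magnitude estimate; ##d## is the slit separation and numerical factors are not to be taken seriously): to tell which slit the particle went through, the detector must localize it to better than the slit separation, ##\Delta x \lesssim d##. The uncertainty relation then forces a random transverse momentum kick ##\Delta p_x \gtrsim \hbar/d##, i.e. a random angular deflection ##\Delta\theta \sim \Delta p_x/p \gtrsim \hbar/(p d) = \lambda/(2\pi d)##, using ##p = h/\lambda##. The angular fringe spacing of the two-slit pattern is itself of order ##\lambda/d##, so the kick is just large enough to smear out the very fringes one was trying to preserve.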
 
  • #12
Lord Jestocost said:
Quoting Ruth E. Kastner (‘Einselection’ of pointer observables: The new H-theorem?, arXiv:1406.4126):

It is often claimed that unitary-only dynamics, together with decoherence arguments, can explain the ‘appearance’ of wave function collapse, i.e, that Schrodinger’s Cat is either alive or is dead. This however is based on implicitly assuming that macroscopic systems (like Schrodinger’s Cat himself) are effectively already ‘decohered,’ since the presumed phase randomness of already-decohered systems is a crucial ingredient in the ‘derivation’ of decoherence.
That is hardly an assumption. All that's needed is for the environment's state to be a superposition in the interaction basis. It would take a massive bit of einselection for the environment not to be in such a state.
 
  • #13
Demystifier said:
1) You are absolutely right.

2) That paper is wrong. One of the authors (who happens to be my brother) is a psychologist, and the other is, I think, a biologist. Neither of them is a physicist. They mention me in the Acknowledgements, but my contribution was to explain QM to them and to explain why their arguments were wrong.
:D Life is tough, having brothers like that ;) But I sincerely thought you wrote the paper. Can you say in a few words why the paper is wrong?

4) QM can be formulated without complex numbers. See T. Norsen, Foundations of Quantum Mechanics (2017), Sec. 5.1.
Thanks for the link. I ordered the book; it seems like something I'll enjoy, so many thanks for that! But let's say we start out from the very beginning, building up quantum mechanics, trying to explain all these phenomena like in the early days. When would we be forced to introduce a complex wave function, or, as in your link, two coupled real fields? I like Norsen's comparison to EM fields, by the way; I never thought about that.
 
  • #14
stevendaryl said:
Here's the way it works in practice:
  • After decoherence, there are no more interference effects.
  • Without interference effects, quantum probabilities work just like classical probabilities.
  • So you can give the post-decoherence probabilities the same interpretation that you do classical probabilities---that the probabilities reflect ignorance of the true state of the system.
In other words, after decoherence, you might as well assume that a "collapse" has happened. The system is either in state [itex]\psi_1[/itex] or in state [itex]\psi_2[/itex], you just don't know which.

This is intellectually incoherent, in my opinion, but it works fine as a rule of thumb.

You mean, intellectually incoherent because we don't try to explain how the state eventually evolved to an eigenstate, right? I have to think about your statement, but at least I understand better now why people tend to be less intimidated by the measurement problem with decoherence in the back of their minds.
 
  • #15
bhobba said:
Exactly. It solves it FAPP - but that's all.

Thanks
Bill
Well, if FAPP includes all phenomena, as of course it does, what is left? Justifying the assumption that the mixture is proper - even though it looks exactly like an improper mixture? Not really physics' job surely?
 
  • #16
Derek P said:
If you deduce the slit from the behaviour of an entangled partner, you don't change the pattern on the screen ever - this is frequently misunderstood.
Wait, what? If I obtain information about "which slit" from the entangled particle, the interference pattern is destroyed, right?
 
  • #17
haushofer said:
Can you say in a few words why the paper is wrong?
They assume that, in the setup they consider, the interference can be seen at a single detector. But it cannot. It is only seen in coincidences (correlations) between two detectors. When this is taken into account, their argument no longer works.

haushofer said:
But let's say we start out from the very beginning, building up quantum mechanics, trying to explain all these phenomena like in the early days. When would we be forced to introduce a complex wave function, or, as in your link, two coupled real fields?
It depends on what you take for granted before that. For instance, try to write a linear differential equation for a single real wave function such that the dispersion relation is
$$\hbar\omega=\frac{\hbar^2k^2}{2m}$$
You should find that it is impossible.
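To spell out the obstruction for an equation that is, like Schrödinger's, first order in time (a sketch with a constant-coefficient ansatz; the restriction to first order in time is an assumption of the sketch): try ##\partial_t\psi = \sum_n a_n \partial_x^n \psi## with all ##a_n## real, and insert a plane wave ##e^{i(kx-\omega t)}##. This gives ##-i\omega = \sum_n a_n (ik)^n##. The even-##n## terms on the right are real and must therefore vanish for all ##k##, while the odd-##n## terms produce an ##\omega(k)## that is an odd function of ##k##. The required dispersion ##\omega = \hbar k^2/2m## is even in ##k## and nonzero, so no real choice of the ##a_n## works. With two coupled real components - equivalently one complex ##\psi## - the missing factor of ##i## becomes available, and ##i\hbar\partial_t\psi = -\frac{\hbar^2}{2m}\partial_x^2\psi## does the job.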
 
  • #18
haushofer said:
You mean, intellectually incoherent because we don't try to explain how the state eventually evolved to an eigenstate, right? I have to think about your statement, but at least I understand better now why people tend to be less intimidated by the measurement problem with decoherence in the back of their minds.
They don't have to evolve to a single eigenstate. The actual picture is of an entanglement. And an entanglement can be treated as a probability distribution of eigenstates for the system we are looking at - as long as we ignore the microstate of the environment. That way the observer state is also a PD. Ontologically, we could say that globally the system remains a pure state; it never does resolve to a single eigenstate. That is how MWI deals with it. Otherwise we have a serious unsolved problem. In fact it is provably not solvable, though I can't quote a source.
 
  • #19
haushofer said:
Wait, what? If I obtain information about "which slit" from the entangled particle, the interference pattern is destroyed, right?
Nope. The best-known optical set-up that does such a measurement is the one used by Kim et al. in their DCQE, and in that experiment photon pairs are emitted from the same slit, so there is never any interference pattern at the screen, i.e. the signal detector. The interference pattern doesn't go away; it's never there in the first place.

This is a manifestation of the No Communication Theorem - if you could turn the interference pattern on or off by looking at the idler, you could send messages by entanglement.
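A minimal sketch of the bookkeeping behind that statement (the one-dimensional amplitudes below are made up; only the entanglement structure matters):

[CODE]
import numpy as np

# Toy entangled state: signal photon reaches the screen via path 1 or path 2, with the
# idler perfectly correlated:  |Psi> = (|psi1>|i1> + |psi2>|i2>) / sqrt(2),  <i1|i2> = 0.
x = np.linspace(-10, 10, 2001)
psi1 = np.exp(-0.5 * (x / 3)**2) * np.exp(+1j * 2 * x)   # screen amplitude via path 1
psi2 = np.exp(-0.5 * (x / 3)**2) * np.exp(-1j * 2 * x)   # screen amplitude via path 2

# Marginal pattern at the signal screen = sum over orthogonal idler outcomes:
marginal = 0.5 * (np.abs(psi1)**2 + np.abs(psi2)**2)      # no fringes, ever

# Conditioning on an idler measurement in the (|i1> +/- |i2>)/sqrt(2) basis:
fringes      = 0.5 * np.abs((psi1 + psi2) / np.sqrt(2))**2
anti_fringes = 0.5 * np.abs((psi1 - psi2) / np.sqrt(2))**2

print("max deviation of (fringes + anti-fringes) from marginal:",
      np.max(np.abs(fringes + anti_fringes - marginal)))
# ~0: the conditioned subsets show fringes and anti-fringes, but they add back up to
# the fringe-free marginal -- so nothing done to the idler can switch the pattern on
# the signal screen on or off (no-communication theorem).
[/CODE]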
 
  • #20
Derek P said:
Well, if FAPP includes all phenomena, as of course it does, what is left?

A point of principle. I don't agree with it but that means diddly squat.

Science is funny like that.

In another thread I said the essence of science is doubt (backed up by experiment/observation of course) - this is just another example.

Personally I don't worry about it; just accept it and things are a lot easier. Don't be like Wittgenstein, who started to ask why and destroyed a promising career as an aeronautical researcher :rolleyes:. But then again, few can say they got the great philosopher/mathematician Russell to admit they were right and he was wrong - so maybe it wasn't wasted :-p

Thanks
Bill
 
  • #21
Derek P said:
Well, if FAPP includes all phenomena, as of course it does, what is left? Justifying the assumption that the mixture is proper - even though it looks exactly like an improper mixture? Not really physics' job surely?

Well, if the theory predicts that pure states evolve into pure states, then the assumption that you have a proper mixed state after a certain length of time is inconsistent. It's what I have, in a previous thread, called a "soft inconsistency", because there seems to be no feasible way to demonstrate it.
 
  • #22
bhobba said:
I must add that what I wrote above was a bit tongue in cheek - I cannot, and will never be able to, prove those with this objection wrong - it hinges on the belief that 'for all practical purposes' (FAPP) is good enough

I think there's always a tension in physics between the desire for phenomenological models that are good enough, and desire to REALLY understand what's going on. Physicists are never satisfied with just phenomenological models. Of course, the phenomenological models of the past (Kepler's laws of planetary motion, the Bohr model for atoms, maybe Fermi's model of neutron decay, etc.) ran into limitations as to what sort of phenomena could be accurately described using them, so there was an empirical need to come up with a deeper theory.

QM is sort of unusual, in that it feels to me to be a phenomenological model, but there are no hints about limitations to its range of applicability. Or at least no hints about feasible experiments that could probe these limitations.
 
  • #23
stevendaryl said:
I think there's always a tension in physics between the desire for phenomenological models that are good enough, and desire to REALLY understand what's going on. Physicists are never satisfied with just phenomenological models. Of course, the phenomenological models of the past (Kepler's laws of planetary motion, the Bohr model for atoms, maybe Fermi's model of neutron decay, etc.) ran into limitations as to what sort of phenomena could be accurately described using them, so there was an empirical need to come up with a deeper theory.

QM is sort of unusual, in that it feels to me to be a phenomenological model, but there are no hints about limitations to its range of applicability. Or at least no hints about feasible experiments that could probe these limitations.

I thought that was precisely what things like entanglement experiments did. Or would you say they are too specifically anti-classical to count as probing the limits?
 
  • #24
Derek P said:
I thought that was precisely what things like entanglement experiments did. Or would you say they are too specifically anti-classical to count as probing the limits?
Hmmm... I read entanglement experiments the other way. To me they strongly support the validity of the phenomenological model provided by QM, while defying any attempt to "REALLY understand what's going on".
 
  • #25
Nugatory said:
Hmmm... I read entanglement experiments the other way. To me they strongly support the validity of the phenomenological model provided by QM, while defying any attempt to "REALLY understand what's going on".

Bell's theorem is rightly interpreted to mean we cannot have locality, causality and realism together with definiteness all at once. Locality, causality and realism in their traditional senses are not really negotiable - jettison any one of them and the model becomes insane. So by a process of elimination, we must abandon definiteness.

The statistics work perfectly well if probabilities are frequencies in a history - as given by MWI for instance. But then definiteness is only emergent FAPP in a given history, it is not absolute.

For instance the Kim et al DCQE experiment has to be interpreted as retrocausality if the detection of the signal photon is definite. Well I suppose you could contrive something even worse involving The Lizard People messing with our simulation, but no sane model is possible with definite values. Not even if they only become definite after detection. But under unitary evolution there is nothing definite about the detector state, it is entangled with the idler photon. The impossible correlations then become inevitable.

So given that we cannot hold on to definiteness, my reading of entanglement experiments is that they retain traditional, local, causal, realism with the perfectly benign caveat that reality is not Newtonian objects with definite positions but something else, of which the wavefunction is at least an aspect if not the thing itself. Philosophy is then back in its box and the take-away lesson is that the universe is quantum.
 
  • #26
What are the possible spacetime reference frames that could describe the quantum configuration space? One Bohmian mechanics researcher proposed reciprocal space to house the pilot waves. There are dozens and dozens of such models on the arXiv...



What I want to know is simply this: if the configuration space is located in an actual reciprocal space, as some arXiv researchers seem to suggest, does it mean the pilot wave could have substance as part of its dynamics, or is it purely a wave? Note that in momentum space, with energy and momentum as axes, it doesn't mean objects with extent can't exist. Hence, does reciprocal space mean that only waves can exist, or that only particles can exist?
 

  • #27
Derek P said:
For instance the Kim et al DCQE experiment has to be interpreted as retrocausality if the detection of the signal photon is definite. Well I suppose you could contrive something even worse involving The Lizard People messing with our simulation, but no sane model is possible with definite values. Not even if they only become definite after detection. But under unitary evolution there is nothing definite about the detector state, it is entangled with the idler photon. The impossible correlations then become inevitable.
Can you explain further what you mean by "no sane model is possible with definite values"? Why is a simulation considered worse? And by simulation are you referring to a simulation that keeps definiteness, realism, and causality, but throws out locality?
 
  • #28
haushofer said:
1) About decoherence: so my understanding is that decoherence is an environment-driven mechanism which erases interference terms. If I have, say, a system which can be in two possible states ##\psi_1## and ##\psi_2##, then ##\psi = \psi_1+\psi_2## and the probability density becomes ##|\psi|^2 = |\psi_1|^2 + |\psi_2|^2 + 2\,\mathrm{Re}(\psi_1^* \psi_2) = |\psi_1|^2 + |\psi_2|^2 + \text{interference}##. Am I right that decoherence states that, if our system interacts with its environment, the interference term eventually dies out and that ##|\psi|^2 \rightarrow |\psi_1|^2 + |\psi_2|^2##? If so, how on Earth can some people claim that decoherence solves the measurement problem, given that you still need some mechanism which eventually picks one of the two states out of this classical probability distribution? I mean, decoherence cannot induce a unitary evolution ##\psi \rightarrow \psi_i## for ##i=1,2##, right? See e.g. Tegmark's https://arxiv.org/abs/quant-ph/0101077, quoting "We argue that modern experiments and the discovery of decoherence have shifted prevailing quantum interpretations away from wave function collapse towards unitary physics". Am I right that decoherence merely turns "quantum probability distributions" (meaning, with interference terms) into classical probability distributions (meaning, no interference terms)?

I think Tegmark works within MWI. I'm not sure MWI works, but stated within MWI it seems more difficult to say exactly why it is wrong.

Also, it is true within BM, since BM has unitary evolution with the addition of hidden variables.

Basically decoherence does nothing for the measurement problem. It is a basic fact of quantum mechanics, and necessary for the interpretations that work such as Copenhagen (FAPP) and BM (in non-relativistic QM).
 
  • #29
Azurite said:
One Bohmian mechanics researcher proposed reciprocal space to house the pilot waves. There are dozens and dozens of such models on the arXiv...

Even if there are "dozens and dozens" of such models, you need to pick a specific one and give a reference to it. We can't discuss vague descriptions. We need something concrete.
 
  • #30
kurt101 said:
Can you explain further what you mean by "no sane model is possible with definite values"? Why is a simulation considered worse? And by simulation are you referring to a simulation that keeps definiteness, realism, and causality, but throws out locality?
I don't think there's anything to be gained by discussing absurd hypotheses that do not promote understanding even within their own parameters, let alone in real physics. To do so would be to invite closure of the thread which is still, IMO, being productive.

But I will explain "no sane model is possible with definite values". Bell's theorem shows that any theory that tries to explain QM must violate causality and/or locality, or else give up what Bell calls realism. But Bell realism means the existence of definite values. The statistical part of his theorem relies on correlations between real events corresponding to correlations in the candidate theory. As this is a bit different from what most people mean by realism, it is worth spelling it out. The theorem does not rule out a theory that does not have definite values. Which is precisely the picture you get if you say the wavefunction is fundamental.
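To make the statistical part concrete, here is a minimal CHSH sketch using the textbook singlet correlation and the standard angle choices (nothing here is specific to any experiment discussed in this thread):

[CODE]
import numpy as np

# Spin-singlet correlation for spin measurements along angles a and b: E(a, b) = -cos(a - b).
def E(a, b):
    return -np.cos(a - b)

# Standard CHSH angle choices (radians).
a, a2 = 0.0, np.pi / 2
b, b2 = np.pi / 4, 3 * np.pi / 4

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print("quantum CHSH value:", abs(S))         # 2*sqrt(2) ~ 2.83
print("local hidden-variable bound:", 2)     # any local, causal assignment of definite
                                             # +/-1 values obeys |S| <= 2
[/CODE]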
 
  • #31
Azurite said:
What are the possible spacetime reference frames that could describe the quantum configuration space? One Bohmian mechanics researcher proposed reciprocal space to house the pilot waves. There are dozens and dozens of such models on the arXiv...

What I want to know is simply this: if the configuration space is located in an actual reciprocal space, as some arXiv researchers seem to suggest, does it mean the pilot wave could have substance as part of its dynamics, or is it purely a wave? Note that in momentum space, with energy and momentum as axes, it doesn't mean objects with extent can't exist. Hence, does reciprocal space mean that only waves can exist, or that only particles can exist?
Why talk about Bohmian mechanics? The traditional wavefunction lives in configuration space, not physical space, so it cannot be a physical wave. And, I am reliably informed, the pilot wave in BM does all the work of the WF while the particles just go along for the ride. I have even heard BM described as "Many Worlds in denial". :oldsurprised:
 
  • #32
atyy said:
I think Tegmark works within MWI. I'm not sure MWI works, but stated within MWI it seems more difficult to say exactly why it is wrong.

Also, it is true within BM, since BM has unitary evolution with the addition of hidden variables.

Basically decoherence does nothing for the measurement problem. It is a basic fact of quantum mechanics, and necessary for the interpretations that work such as Copenhagen (FAPP) and BM (in non-relativistic QM).
MW works too. But its agenda is far wider-reaching than Copenhagen and BM, so I'm trying to figure out what you mean by a model working. Copenhagen requires that observed values are definite, doesn't it? In which case Bell's theorem implies that CI violates causality in some frames. Can it still be said to work? Effects preceding their causes? MWI, on the other hand, avoids definiteness and therefore does not violate causality, so I'm not sure why you think it may not work.
 
  • #33
Derek P said:
I'm trying to figure out what you mean by a model working. Copenhagen requires that observed values are definite, doesn't it? In which case Bell's theorem implies that CI violates causality in some frames. Can it still be said to work? Effects preceding their causes? MWI, on the other hand, avoids definiteness and therefore does not violate causality, so I'm not sure why you think it may not work.

I'm being deliberately sloppy. CI works in the sense that it makes sense, but it does not solve the measurement problem. In fact, CI is the poster child for the measurement problem. Operational causality is not violated by CI.
 
  • #34
Derek P said:
[Bell's] theorem does not rule out a theory that does not have definite values. Which is precisely the picture you get if you say the wavefunction is fundamental.

I agree with you (I think). It seems to me that any interpretation besides MWI implicitly involves denying that QM is a fundamental theory that applies to every system, no matter how large or small. Copenhagen or the Ensemble Interpretation or the "Minimal Interpretation" all seem to require a distinction between measurement results and microscopic properties. Measurement results have definite values (the observer measured "spin up" or "spin down") while microscopic properties can be superpositions with a certain amplitude for having this value or that value. If you think of the measurement results as configurations of macroscopic quantum systems, then there is no good reason to believe that they have definite values any more than microscopic properties do. So basically, QM taken seriously as a fundamental, universal theory to me leads to MWI.

Which doesn't mean that I like MWI very much, either. There have been articles about the question of whether probabilities make sense for a deterministic theory, but I have a more basic doubt about MWI.

If the only fundamental object is the wave function evolving unitarily, then I would think that any observed properties of the universe would be properties of the wave function (or maybe the Hilbert space that it lives in, or maybe the Hamiltonian). Let's assume that the whole universe is described by a universal wave function that evolves according to Schrodinger's equation (for now, I'm going to ignore relativity, because QFT makes things a lot more complicated---hopefully this simplification isn't throwing the baby out with the bath water). So let's diagonalize the Hamiltonian, and so an arbitrary state of the universe can be described along the lines of:

[itex]|\psi(t)\rangle = \sum_n c_n |\phi_n\rangle e^{-i E_n t}[/itex]

where the [itex]c_n[/itex] are constant amplitudes and [itex]|\phi_n\rangle[/itex] satisfies the equation [itex]H |\phi_n\rangle = E_n |\phi_n\rangle[/itex]

(I guess I'll assume a discrete spectrum, for the sake of discussion. I don't think anything I have to say will be changed a lot by allowing a continuous spectrum.)

If the wave function is all there is, then it seems like all the phenomena that we see in the world--planets and particles and humans, etc--have to be somehow implicit in that expression. And I think they clearly are not.

Now, I think you can get something like a description of real physical objects out of such a universal wave function. Pick some observable, say the location of some macroscopic object. Then you can certainly rewrite the universal wave function [itex]|\psi(t)\rangle[/itex] as a superposition of "possible worlds" in each of which that object has more-or-less definite macroscopic properties. But the choice of how to split the universal wave function into possible worlds doesn't seem motivated by the quantum mechanics. It seems arbitrary.

I suppose you could say that our universe consists of two things: (1) a universal wave function, and (2) a recipe for dividing the wave function into possible worlds. But (2) seems to me to be an additional fact about the universe, beyond just the universal wave function.
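As a two-qubit toy version of that arbitrariness (my own illustration using the standard Bell state; it only shows that the state vector alone does not fix the decomposition, not that no further criterion could be supplied):

[CODE]
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
plus  = (ket0 + ket1) / np.sqrt(2)
minus = (ket0 - ket1) / np.sqrt(2)

# One and the same global state ...
state = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)

# ... decomposes into "branches" differently depending on the basis you pick:
decomp_z = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)   # worlds {00, 11}
decomp_x = (np.kron(plus, plus) + np.kron(minus, minus)) / np.sqrt(2) # worlds {++, --}

print("same vector?", np.allclose(state, decomp_z), np.allclose(state, decomp_x))
# True, True: the wave function alone does not say whether the "worlds" are the 0/1
# pair or the +/- pair; some extra ingredient (a preferred basis) has to be supplied.
[/CODE]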
 
  • #35
atyy said:
I'm being deliberately sloppy. CI works in the sense that it makes sense, but it does not solve the measurement problem. In fact, CI is the poster child for the measurement problem. Operational causality is not violated by CI.
Ah deliberate sloppiness... well I thought it was uncharacteristic of you. BMW
 
