Some (unrelated) questions about the measurement problem


Discussion Overview

The discussion revolves around the measurement problem in quantum mechanics, specifically focusing on concepts such as decoherence, the Von Neumann-Wigner interpretation, and the formulation of quantum mechanics in real vector spaces. Participants raise questions and explore various interpretations and implications of these topics.

Discussion Character

  • Exploratory
  • Debate/contested
  • Technical explanation

Main Points Raised

  • One participant questions whether decoherence merely transforms quantum probability distributions into classical ones, suggesting that it does not provide a mechanism for selecting a specific state from a probability distribution.
  • Another participant asserts that decoherence does not solve the measurement problem but rather alters its nature, emphasizing that interference effects are never exactly zero, although they become negligible over time.
  • A later reply discusses the implications of decoherence, stating that after decoherence, quantum probabilities can be interpreted similarly to classical probabilities, reflecting ignorance of the system's true state.
  • Questions are raised about the Von Neumann-Wigner interpretation, particularly regarding the necessity of consciousness in the measurement process and the implications of delayed choice experiments.
  • Participants inquire about the possibility of breaking out of the Von Neumann chain of regression without introducing hidden variables or additional dynamics to the Schrödinger equation.
  • One participant explores the feasibility of formulating quantum mechanics using real vector spaces instead of complex ones, questioning the potential obstacles in such an approach.

Areas of Agreement / Disagreement

Participants express differing views on the role of decoherence in the measurement problem, with some arguing it does not resolve the issue while others suggest it changes the nature of the problem. There is no consensus on the implications of the Von Neumann-Wigner interpretation or the formulation of quantum mechanics in real vector spaces.

Contextual Notes

Some discussions involve assumptions about the nature of decoherence and its effects, as well as the interpretations of quantum mechanics that may depend on specific definitions or frameworks. The conversation reflects ongoing debates in the field without reaching definitive conclusions.

haushofer
Dear all,

every now and then I get this itchy feeling and start to think about quantum mechanics. Which raises, of course, some questions. These concern the measurement problem. I decided to post them in one single topic, so I enumerate them. If someone has some insights clarifying my confusions, I'll be very happy.

1) About decoherence: so my understanding is that decoherence is an environment-driven mechanism which erases interference terms. If I have, say, a system which can be in two possible states ##\psi_1## and ##\psi_2##, then ##\psi = \psi_1+\psi_2## and the probability density becomes ##|\psi|^2 = |\psi_1|^2 + |\psi_2|^2 + 2\,\mathrm{Re}(\psi_1^* \psi_2) = |\psi_1|^2 + |\psi_2|^2 + \text{interference}##. Am I right that decoherence states that, if our system interacts with its environment, the interference term eventually dies out, so that ##|\psi|^2 \rightarrow |\psi_1|^2 + |\psi_2|^2##? If so, how on Earth can some people claim that decoherence solves the measurement problem, when you still need some mechanism which eventually picks one of the two states out of this classical probability distribution? I mean, decoherence cannot induce a unitary evolution ##\psi \rightarrow \psi_i## for ##i=1,2##, right? See e.g. Tegmark's https://arxiv.org/abs/quant-ph/0101077, quoting "We argue that modern experiments and the discovery of decoherence have shifted prevailing quantum interpretations away from wave function collapse towards unitary physics". Am I right that decoherence merely turns "quantum probability distributions" (meaning, with interference terms) into classical probability distributions (meaning, no interference terms)?

2) About the Von Neumann-Wigner interpretation: I am puzzled about this interpretation (who isn't, but bear with me). In e.g. https://arxiv.org/abs/1009.2404 it is stated that QM needs no consciousness. My question is: why not simply use the simple double slit experiment, put a detector near one of the slits, and only look at the screen? I assume we'll still see the interference pattern (right?), so doesn't this conclusively exclude the possibility that "a conscious observation near one of the slits is needed to make the interference pattern disappear"? And the same goes even more for the delayed choice experiments: because the indirect observation about the particle going through one of the slits is delayed, doesn't this show that consciousness is not a relevant factor in the whole setup but mere interaction of the measurement apparatus with the system is?

3) About the Von Neumann-Wigner interpretation: is there any way to "break out of the Von-Neumann chain of regression" without hidden variables or imposing extra dynamics on top of the Schrödinger equation? I see that people often use this Von-Neumann chain to motivate the Von Neumann-Wigner interpretation. And how do adherents of this interpretation explain cosmic events like e.g. the CMB we're receiving from events billions of years ago?

4) About the QM formalism itself (not really about the measurement problem): I do understand that the Schrödinger equation and Heisenberg relations impose that our Hilbert space is complex. But how far could one go with constructing a theory of QM on a real vector space, with writing down a real Schrödinger equation, real commutation relations from the Poisson brackets and so forth? What would be the first obstacle to encounter?

Any insights are appreciated :)

-edit If some moderator thinks these are too many questions for one topic, feel free to say so and take appropriate actions.
 
1) You are absolutely right.

2) That paper is wrong. One of the authors (who happens to be my brother) is a psychologist, and the other is, I think, a biologist. Neither of them is a physicist. They mention me in the Acknowledgements, but my contribution was to explain QM to them and to explain why their arguments were wrong.

3) I think the answer to your first question is - no.

4) QM can be formulated without complex numbers. See T. Norsen, Foundations of Quantum Mechanics (2017), Sec. 5.1.
 
haushofer said:
1) About decoherence: so my understanding is that decoherence is an environment-driven mechanism which erases interference terms. If I have, say, a system which can be in two possible states ##\psi_1## and ##\psi_2##, then ##\psi = \psi_1+\psi_2## and the probability density becomes ##|\psi|^2 = |\psi_1|^2 + |\psi_2|^2 + 2\,\mathrm{Re}(\psi_1^* \psi_2) = |\psi_1|^2 + |\psi_2|^2 + \text{interference}##. Am I right that decoherence states that, if our system interacts with its environment, the interference term eventually dies out, so that ##|\psi|^2 \rightarrow |\psi_1|^2 + |\psi_2|^2##? If so, how on Earth can some people claim that decoherence solves the measurement problem, when you still need some mechanism which eventually picks one of the two states out of this classical probability distribution? I mean, decoherence cannot induce a unitary evolution ##\psi \rightarrow \psi_i## for ##i=1,2##, right?

Here's the way it works in practice:
  • After decoherence, there are no more interference effects.
  • Without interference effects, quantum probabilities work just like classical probabilities.
  • So you can give the post-decoherence probabilities the same interpretation that you do classical probabilities---that the probabilities reflect ignorance of the true state of the system.
In other words, after decoherence, you might as well assume that a "collapse" has happened. The system is either in state ##\psi_1## or in state ##\psi_2##, you just don't know which.

This is intellectually incoherent, in my opinion, but it works fine as a rule of thumb.
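stevendaryl's three bullets can be made concrete with a toy reduced density matrix. Below is a minimal numpy sketch (the two-state system, the tunable environment overlap ##c##, and the function name are my own illustration, not anything from this thread): for ##|\Psi\rangle = (|0\rangle|E_0\rangle + |1\rangle|E_1\rangle)/\sqrt{2}##, tracing out the environment leaves off-diagonal (interference) terms proportional to ##\langle E_0|E_1\rangle##; when that overlap is zero, the reduced state is numerically identical to a proper 50/50 mixture.

```python
import numpy as np

def reduced_density_matrix(c):
    """Reduced density matrix of the system for
    |Psi> = (|0>|E0> + |1>|E1>)/sqrt(2), with real overlap <E0|E1> = c.
    c = 1 means no decoherence; c = 0 means full decoherence."""
    E0 = np.array([1.0, 0.0])
    E1 = np.array([c, np.sqrt(1.0 - c**2)])   # normalized, overlap c with E0
    zero = np.array([1.0, 0.0])
    one = np.array([0.0, 1.0])
    psi = (np.kron(zero, E0) + np.kron(one, E1)) / np.sqrt(2)
    rho = np.outer(psi, psi.conj())            # full 4x4 density matrix
    # partial trace over the environment (second tensor factor)
    return rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

print(reduced_density_matrix(1.0))  # coherent: off-diagonals are 0.5
print(reduced_density_matrix(0.0))  # decohered: diag(0.5, 0.5), no interference
```

The "intellectually incoherent" caveat is that this decohered ##\rho_S## is still an improper mixture - it comes from tracing, not from an actual collapse - even though no measurement on the system alone can distinguish the two.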
 
stevendaryl said:
After decoherence, there are no more interference effects.

Roland Omnès in "Decoherence and Ontology" (Ontology Studies 8, 2008, 55):

Let us go back however to less elevated questions. I did not yet mention that decoherence is a dynamical effect that is never perfectly exact. Entangled states of a measured quantum object and a measuring device are disentangled, but a tiny amount of entanglement (or superposition) always survives. The probability for observing a macroscopic interference effect between a dead and a live cat is never exactly zero, but extremely small and becoming exponentially smaller with larger values of time.
 
Demystifier said:
One of the authors (who happens to be my brother)
Isn't it a small world after all :wink:.
 
Lord Jestocost said:
but extremely small and becoming exponentially smaller with larger values of time.

Interesting that Omnès said that - he is a big supporter of decoherence, so I doubt he is using it as an argument against it. It is of course true, but I must admit one of my real annoyances is with those who criticize decoherence using that argument. Decoherence does not solve the measurement problem - it merely morphed it - let's get that straight. However, the argument that interference is never exactly zero, so decoherence has problems, is a load of the proverbial - IMHO. It decays so fast, and exponentially at that, that it quickly reaches such a low level that not only do we currently not have the technology to detect it (except in special contrived cases set up to experimentally investigate decoherence), I doubt we ever will; and certainly for everyday objects it can be taken as zero, because our usual senses are nowhere near sensitive enough to detect it. As I often say, would anyone consider 1/googolplex (which, since the interference decays exponentially, it reaches after not much time at all) to not be, for all practical purposes, zero? If so, I have some land with unbelievable 360° ocean views for sale - anyone interested :-p

I must add that what I wrote above was a bit tongue in cheek - I cannot, and will never be able to, prove those with this objection wrong - it hinges on a certain belief that for all practical purposes (FAPP) is good enough. That is an opinion - not fact - hence not science. It's perfectly OK to hold the view that, since it never becomes exactly zero, it is not really an explanation; it depends entirely on your view of explanation - such is philosophy, not science. Most exposed to it accept FAPP as being fine - but some do not. It's a bit like when you see a proper derivation of SR based on symmetry and group theory: most much prefer it to Lorentz Ether Theory (LET) - but a few don't. Believe it or not, I read somewhere that the great John Bell believed in LET - interesting. It would be a brave man indeed who called Bell a fool - because he most certainly was not. But on that issue he was certainly in a minority. It boils down to the answer to a deep question Poincaré asked of Einstein - what is the mechanical basis of SR? Einstein replied - none - to which Poincaré was in shock. It actually is a deep question, not easily answered - but I will do a separate thread about that.

Thanks
Bill
 
bhobba said:
Decoherence does not solve the measurement problem - merely morphed it - let's get that straight.

Do you mean - harking back a year or two - that it does not solve the transition from an improper mixture to a proper one? If not, I'm not sure what remains of the problem, please explain.
 
Derek P said:
Do you mean - harking back a year or two - that it does not solve the transition from an improper mixture to a proper one? If not, I'm not sure what remains of the problem, please explain.

Exactly. It solves it FAPP - but that's all.

Thanks
Bill
 
Demystifier said:
1) You are absolutely right.
See T. Norsen, Foundations of Quantum Mechanics (2017), Sec. 5.1.

[Quote = "T. Norsen, Foundations of Quantum Mechanics (2017), Sec. 5.1"]
The ‘real’ world (using the term in its nonmathematical sense) is the world of ‘real’ quantities (using the term in its mathematical sense)
[/Quote]

I doubt that we can directly read real numbers from measuring instruments. Like complex numbers, real numbers are human conceptual constructs (imaginary). To make this an ontological criterion is, in my opinion, to misunderstand the mathematical concept of number.

There are two well known constructions of the real numbers from the rationals (among many others): the Dedekind cut method, in which a real number is defined as a class of rationals, and the Cantor-Cauchy completion method, in which a real number is defined as an equivalence class of Cauchy sequences of rational numbers.

Best regards
Patrick
 
  • #10
haushofer said:
About decoherence: so my understanding is that decoherence is an environment-driven mechanism which erases interference-terms.

Quoting Ruth E. Kastner ( ‘Einselection’ of pointer observables: The new H-theorem?, arXiv:1406.4126 [pdf] ):

It is often claimed that unitary-only dynamics, together with decoherence arguments, can explain the ‘appearance’ of wave function collapse, i.e, that Schrödinger’s Cat is either alive or is dead. This however is based on implicitly assuming that macroscopic systems (like Schrödinger’s Cat himself) are effectively already ‘decohered,’ since the presumed phase randomness of already-decohered systems is a crucial ingredient in the ‘derivation’ of decoherence.
 
  • #11
haushofer said:
2) About the Von Neumann-Wigner interpretation: I am puzzled about this interpretation (who isn't, but bear with me). In e.g. https://arxiv.org/abs/1009.2404 it is stated that QM needs no consciousness. My question is: why not simply use the simple double slit experiment, put a detector near one of the slits, and only look at the screen? I assume we'll still see the interference pattern (right?), so doesn't this conclusively exclude the possibility that "a conscious observation near one of the slits is needed to make the interference pattern disappear"?

How near? You are assuming you can look at the particle as it goes through the slits. If your detector messes with the photon as it passes then the interference pattern will disappear - not because of anything to do with observation but because the detector blocked the slit! Clever schemes with very delicate detectors run into a related problem - if the detector's positional uncertainty is small enough to locate it at a particular slit then the momentum uncertainty will be large enough to require a "kick" from the passing particle that disrupts the interference pattern. If you deduce the slit from the behaviour of an entangled partner, you don't change the pattern on the screen ever - this is frequently misunderstood.
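Derek P's momentum-kick point can be made quantitative with the standard order-of-magnitude estimate (a Feynman-style argument, not tied to any particular detector): to tell which of two slits separated by ##d## the particle passed, a detector needs position resolution ##\Delta x \lesssim d/2##, so by the uncertainty relation it can impart a random momentum
$$\Delta p \gtrsim \frac{\hbar}{d}.$$
For a particle of momentum ##p = h/\lambda## this randomizes the deflection angle by ##\Delta\theta \approx \Delta p/p \approx \lambda/(2\pi d)##, which is the same order as the angular fringe spacing ##\lambda/d## - just enough to wash out the pattern, with no appeal to consciousness.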
 
Last edited by a moderator:
  • #12
Lord Jestocost said:
Quoting Ruth E. Kastner ( ‘Einselection’ of pointer observables: The new H-theorem?, arXiv:1406.4126 [pdf] ):

It is often claimed that unitary-only dynamics, together with decoherence arguments, can explain the ‘appearance’ of wave function collapse, i.e, that Schrödinger’s Cat is either alive or is dead. This however is based on implicitly assuming that macroscopic systems (like Schrödinger’s Cat himself) are effectively already ‘decohered,’ since the presumed phase randomness of already-decohered systems is a crucial ingredient in the ‘derivation’ of decoherence.
That is hardly an assumption. All that's needed is for the environment's state to be a superposition in the interaction basis. It would take a massive bit of einselection for the environment not to be in such a state.
 
  • #13
Demystifier said:
1) You are absolutely right.

2) That paper is wrong. One of the authors (who happens to be my brother) is a psychologist, and the other is, I think, a biologist. Neither of them is a physicist. They mention me in the Acknowledgements, but my contribution was to explain QM to them and to explain why their arguments were wrong.
:D Life is tough, having brothers like that ;) But I sincerely thought you wrote the paper. Can you say in a few words why the paper is wrong?

4) QM can be formulated without complex numbers. See T. Norsen, Foundations of Quantum Mechanics (2017), Sec. 5.1.
Thanks for the link. I ordered the book; it seems like something I'll enjoy, so many thanks for that! But let's say we start from the very beginning, building up quantum mechanics, trying to explain all these phenomena as in the early days. When would we be forced to introduce a complex wave function or, as in your link, two coupled real fields? I like Norsen's comparison to EM fields, by the way; I never thought about that.
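For what it's worth, the "two coupled real fields" rewriting is just the split ##\psi = u + iv## (a standard manipulation, not a quote from Norsen): taking real and imaginary parts of ##i\hbar\,\partial_t\psi = -\frac{\hbar^2}{2m}\partial_x^2\psi + V\psi## gives
$$\hbar\,\partial_t u = -\frac{\hbar^2}{2m}\partial_x^2 v + Vv, \qquad \hbar\,\partial_t v = +\frac{\hbar^2}{2m}\partial_x^2 u - Vu.$$
Each real field's time derivative is driven by the spatial curvature of the other, so neither equation closes on a single real field - one way of seeing where a purely real formulation first runs into trouble.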
 
  • #14
stevendaryl said:
Here's the way it works in practice:
  • After decoherence, there are no more interference effects.
  • Without interference effects, quantum probabilities work just like classical probabilities.
  • So you can give the post-decoherence probabilities the same interpretation that you do classical probabilities---that the probabilities reflect ignorance of the true state of the system.
In other words, after decoherence, you might as well assume that a "collapse" has happened. The system is either in state ##\psi_1## or in state ##\psi_2##, you just don't know which.

This is intellectually incoherent, in my opinion, but it works fine as a rule of thumb.

You mean, intellectually incoherent because we don't try to explain how the state eventually evolved to an eigenstate, right? I have to think about your statement, but at least I understand better now why people tend to be less intimidated by the measurement problem with decoherence in the back of their minds.
 
  • #15
bhobba said:
Exactly. It solves it FAPP - but that's all.

Thanks
Bill
Well, if FAPP includes all phenomena, as of course it does, what is left? Justifying the assumption that the mixture is proper - even though it looks exactly like an improper mixture? Not really physics' job surely?
 
  • #16
Derek P said:
If you deduce the slit from the behaviour of an entangled partner, you don't change the pattern on the screen ever - this is frequently misunderstood.
Wait, what? If I obtain "which slit" information from the entangled partner, the interference pattern is destroyed, right?
 
  • #17
haushofer said:
Can you say in a few words why the paper is wrong?
They assume that, in the setup they consider, the interference can be seen at a single detector. But it cannot. It is only seen in coincidences (correlations) between two detectors. When this is taken into account, their argument no longer works.

haushofer said:
But let say we would start out from the very beginning, building up quantum mechanics, trying to explain all these phenomena like in the early days. When would we be forced to introduce a complex wave function, or, as in your link, two coupled real fields?
It depends on what you take for granted before that. For instance, try to write a linear differential equation for a single real wave function, such that the dispersion relation is
$$\hbar\omega=\frac{\hbar^2k^2}{2m}$$
You should find out that it is impossible.
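Demystifier's challenge can be checked symbolically. Here is a small sympy sketch (the ansatz and the coefficient names are my own): take a single real plane wave ##\cos(kx-\omega t)## and try the most general linear equation that is first order in time and second order in space, ##a\,\partial_t\psi = b\,\partial_x^2\psi##. The time derivative produces a sine while the space derivative stays a cosine, so the residual can only vanish for all ##x, t## if ##a = b = 0## - there is no way to impose ##\hbar\omega = \hbar^2k^2/2m## on one real field.

```python
import sympy as sp

x, t, k, w, a, b = sp.symbols('x t k omega a b', real=True)
psi = sp.cos(k*x - w*t)  # candidate single real plane wave

# Most general linear equation, first order in time, second order in space:
#   a * d(psi)/dt = b * d^2(psi)/dx^2
residual = a*sp.diff(psi, t) - b*sp.diff(psi, x, 2)
print(sp.expand(residual))
# residual = a*omega*sin(k*x - omega*t) + b*k**2*cos(k*x - omega*t):
# sin and cos are linearly independent, so it vanishes identically
# only for a = b = 0 (with a complex psi, each derivative rotates
# cos into sin and back, and the equation can close).
```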
 
Last edited:
  • #18
haushofer said:
You mean, intellectually incoherent because we don't try to explain how the state eventually evolved to an eigenstate, right? I have to think about your statement, but at least I understand better now why people tend to be less intimidated by the measurement problem with decoherence in the back of their minds.
They don't have to evolve to a single eigenstate. The actual picture is one of entanglement. And an entangled state can be treated as a probability distribution over eigenstates of the system we are looking at - as long as we ignore the microstate of the environment. That way the observer state is also a PD. Ontologically, we could say that globally the system remains a pure state; it never does resolve to a single eigenstate. That is how MWI deals with it. Otherwise we have a serious unsolved problem. In fact it is provably not solvable, though I can't quote a source.
 
  • #19
haushofer said:
Wait, what? If I obtain information about "which slit" from the entanglement particle, the interference pattern is destroyed, right?
Nope. The best-known optical set-up that does such a measurement is the one used by Kim et al in their DCQE. And in that experiment photon pairs are emitted from the same slit so there is never any interference pattern at the screen i.e. the signal detector. The interference pattern doesn't go away, it's never there in the first place.

This is a manifestation of the No Communication Theorem - if you could turn the interference pattern on or off by looking at the idler, you could send messages by entanglement.
 
  • #20
Derek P said:
Well, if FAPP includes all phenomena, as of course it does, what is left?

A point of principle. I don't agree with it but that means diddly squat.

Science is funny like that.

In another thread I said the essence of science is doubt (backed up by experiment/observation of course) - this is just another example.

Personally I don't worry about it, just accept it and things are a lot easier. Don't be like Wittgenstein, who started to ask why and destroyed a promising career as an aeronautical researcher :rolleyes:. But then again, few can say they got the great philosopher/mathematician Russell to admit they were right and he was wrong - so maybe it wasn't wasted :-p

Thanks
Bill
 
  • #21
Derek P said:
Well, if FAPP includes all phenomena, as of course it does, what is left? Justifying the assumption that the mixture is proper - even though it looks exactly like an improper mixture? Not really physics' job surely?

Well, if the theory predicts that pure states evolve into pure states, then the assumption that you have a proper mixed state after a certain length of time is inconsistent. It's what I have, in a previous thread, called a "soft inconsistency", because there seems to be no feasible way to demonstrate it.
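stevendaryl's premise - that pure states evolve into pure states - is easy to verify numerically. Here is a minimal numpy sketch (the random state, the dimension, and the helper name are my own): the purity ##\mathrm{Tr}\,\rho^2## equals 1 for a pure state and is invariant under ##\rho \to U\rho U^\dagger##, so Schrödinger evolution alone can never produce a proper mixture, whose purity is strictly below 1.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(n, rng):
    """Random unitary via QR decomposition of a complex Gaussian matrix."""
    z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))  # fix column phases

n = 4
psi = rng.normal(size=n) + 1j * rng.normal(size=n)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())      # pure state: Tr(rho^2) = 1

U = random_unitary(n, rng)
rho_t = U @ rho @ U.conj().T         # unitary (Schrodinger-picture) evolution

print(np.trace(rho @ rho).real)      # ~1.0
print(np.trace(rho_t @ rho_t).real)  # ~1.0: still pure after evolution
print(np.trace((np.eye(n)/n) @ (np.eye(n)/n)).real)  # 0.25: a proper mixture
```

This is the "soft inconsistency": asserting a proper mixture after decoherence contradicts the unitary dynamics, even though no feasible experiment resolves the difference.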
 
  • #22
bhobba said:
I must add what I wrote above was a bit tongue in cheek - I can not, and will never be able to, prove those with this objection are wrong - it hinges on a certain belief to do with for all practical purposes (FAPP) being good enough

I think there's always a tension in physics between the desire for phenomenological models that are good enough and the desire to REALLY understand what's going on. Physicists are never satisfied with just phenomenological models. Of course, the phenomenological models of the past (Kepler's laws of planetary motion, the Bohr model for atoms, maybe Fermi's model of neutron decay, etc.) ran into limitations as to what sort of phenomena could be accurately described using them, so there was an empirical need to come up with a deeper theory.

QM is sort of unusual, in that it feels to me to be a phenomenological model, but there are no hints about limitations to its range of applicability. Or at least no hints about feasible experiments that could probe these limitations.
 
  • #23
stevendaryl said:
I think there's always a tension in physics between the desire for phenomenological models that are good enough, and desire to REALLY understand what's going on. Physicists are never satisfied with just phenomenological models. Of course, the phenomenological models of the past (Kepler's laws of planetary motion, the Bohr model for atoms, maybe Fermi's model of neutron decay, etc.) ran into limitations as to what sort of phenomena could be accurately described using them, so there was an empirical need to come up with a deeper theory.

QM is sort of unusual, in that it feels to me to be a phenomenological model, but there are no hints about limitations to its range of applicability. Or at least no hints about feasible experiments that could probe these limitations.

I thought that was precisely what things like entanglement experiments did. Or would you say they are too specifically anti-classical to count as probing the limits?
 
  • #24
Derek P said:
I thought that was precisely what things like entanglement experiments did. Or would you say they are too specifically anti-classical to count as probing the limits?
Hmmm... I read entanglement experiments the other way. To me they strongly support the validity of the phenomenological model provided by QM, while defying any attempt to "REALLY understand what's going on".
 
  • #25
Nugatory said:
Hmmm... I read entanglement experiments the other way. To me they strongly support the validity of the phenomenological model provided by QM, while defying any attempt to "REALLY understand what's going on".

Bell's theorem is rightly interpreted to mean we cannot have locality, causality, realism and definiteness all at once. Locality, causality and realism in their traditional senses are not really negotiable - jettison any one of them and the model becomes insane. So by a process of elimination, we must abandon definiteness.

The statistics work perfectly well if probabilities are frequencies in a history - as given by MWI for instance. But then definiteness is only emergent FAPP in a given history, it is not absolute.

For instance the Kim et al DCQE experiment has to be interpreted as retrocausality if the detection of the signal photon is definite. Well I suppose you could contrive something even worse involving The Lizard People messing with our simulation, but no sane model is possible with definite values. Not even if they only become definite after detection. But under unitary evolution there is nothing definite about the detector state, it is entangled with the idler photon. The impossible correlations then become inevitable.

So given that we cannot hold on to definiteness, my reading of entanglement experiments is that they retain traditional, local, causal, realism with the perfectly benign caveat that reality is not Newtonian objects with definite positions but something else, of which the wavefunction is at least an aspect if not the thing itself. Philosophy is then back in its box and the take-away lesson is that the universe is quantum.
 
Last edited:
  • #26
What are the possible spacetime reference frames that could describe the quantum configuration space? One Bohmian mechanics researcher proposed reciprocal space to house the pilot waves. There are dozens and dozens of such models on the arXiv...

What I want to know is simply this: if the configuration space is located in an actual reciprocal space, as some arXiv researchers seem to suggest, does that mean the pilot wave could have substance as part of its dynamics, or is it purely a wave? Note that in momentum space, with energy and momentum as axes, it doesn't mean objects with distance can't exist. Hence, does reciprocal space mean that only waves can exist, or can particles exist as well?
 

  • #27
Derek P said:
For instance the Kim et al DCQE experiment has to be interpreted as retrocausality if the detection of the signal photon is definite. Well I suppose you could contrive something even worse involving The Lizard People messing with our simulation, but no sane model is possible with definite values. Not even if they only become definite after detection. But under unitary evolution there is nothing definite about the detector state, it is entangled with the idler photon. The impossible correlations then become inevitable.
Can you explain further what you mean by "no sane model is possible with definite values"? Why is a simulation considered worse? And by simulation are you referring to a simulation that keeps definiteness, realism, and causality, but throws out locality?
 
  • #28
haushofer said:
1) About decoherence: so my understanding is that decoherence is an environment-driven mechanism which erases interference terms. If I have, say, a system which can be in two possible states ##\psi_1## and ##\psi_2##, then ##\psi = \psi_1+\psi_2## and the probability density becomes ##|\psi|^2 = |\psi_1|^2 + |\psi_2|^2 + 2\,\mathrm{Re}(\psi_1^* \psi_2) = |\psi_1|^2 + |\psi_2|^2 + \text{interference}##. Am I right that decoherence states that, if our system interacts with its environment, the interference term eventually dies out, so that ##|\psi|^2 \rightarrow |\psi_1|^2 + |\psi_2|^2##? If so, how on Earth can some people claim that decoherence solves the measurement problem, when you still need some mechanism which eventually picks one of the two states out of this classical probability distribution? I mean, decoherence cannot induce a unitary evolution ##\psi \rightarrow \psi_i## for ##i=1,2##, right? See e.g. Tegmark's https://arxiv.org/abs/quant-ph/0101077, quoting "We argue that modern experiments and the discovery of decoherence have shifted prevailing quantum interpretations away from wave function collapse towards unitary physics". Am I right that decoherence merely turns "quantum probability distributions" (meaning, with interference terms) into classical probability distributions (meaning, no interference terms)?

I think Tegmark works within MWI. I'm not sure MWI works, but stated within MWI it seems more difficult to say exactly why it is wrong.

Also, it is true within BM, since BM has unitary evolution with the addition of hidden variables.

Basically decoherence does nothing for the measurement problem. It is a basic fact of quantum mechanics, and necessary for the interpretations that work such as Copenhagen (FAPP) and BM (in non-relativistic QM).
 
  • #29
Azurite said:
One Bohmian mechanic researcher proposed reciprocal space to house the pilot waves. There are dozens and dozens of such models in the arxiv...

Even if there are "dozens and dozens" of such models, you need to pick a specific one and give a reference to it. We can't discuss vague descriptions. We need something concrete.
 
  • #30
kurt101 said:
Can you explain further what you mean by "no sane model is possible with definite values"? Why is a simulation considered worse? And by simulation are you referring to a simulation that keeps definiteness, realism, and causality, but throws out locality?
I don't think there's anything to be gained by discussing absurd hypotheses that do not promote understanding even within their own parameters, let alone in real physics. To do so would be to invite closure of the thread which is still, IMO, being productive.

But I will explain "no sane model is possible with definite values". Bell's theorem shows that any theory that tries to explain QM must violate causality and/or locality, or else what Bell calls realism. But Bell's realism means the existence of definite values. The statistical part of his theorem relies on correlations between real events corresponding to correlations in the candidate theory. As this is a bit different from what most people mean by realism, it is worth spelling out. The theorem does not rule out a theory that does not have definite values - which is precisely the picture you get if you say the wavefunction is fundamental.
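The gap between definite-value models and QM can be shown with the CHSH combination: for the singlet state the quantum correlation at analyzer angles ##a, b## is ##E(a,b) = -\cos(a-b)##, and with the standard angle choices it reaches ##2\sqrt{2}##, above the bound of 2 that any local definite-value model must satisfy. A small sketch (textbook angles, nothing from this thread):

```python
import numpy as np

def E(a, b):
    """Singlet-state spin correlation for analyzer angles a and b (radians)."""
    return -np.cos(a - b)

# Standard CHSH angle choices
a1, a2 = 0.0, np.pi / 2
b1, b2 = np.pi / 4, 3 * np.pi / 4

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))  # 2*sqrt(2) ~ 2.828, vs. the local definite-value bound of 2
```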
 
