I What has changed since the Copenhagen interpretation?

  • #101
A. Neumaier said:
How does Bohmian mechanics model the destruction of particle pairs? It would presumably require that the particles meet at the same position, which is exceedingly improbable.
It is part of the question of how to generalize BM to relativistic QFT. As you know, there is no generally accepted approach to that question. I myself have presented several different approaches. Currently I prefer the approach outlined in Sec. 4.3 of my https://lanl.arxiv.org/abs/1703.08341 and I will soon upload a more detailed paper to arXiv.
 
  • #102
Demystifier said:
Strictly speaking, yes. But if a smaller system is not much entangled with the rest of the Universe, then one can use an approximation by treating this system as a "full" system.

The problem with this view is infrared divergences. We know that even to describe a simple system like an electron, we have to take the coupling to soft photons into account. The problem is that soft photons have arbitrarily large wavelengths and cannot be screened off. So even small systems are strongly coupled to the rest of the universe. The best you can hope for is that this coupling doesn't change much when two systems interact.
 
  • #103
DrDu said:
The problem with this view is infrared divergences. We know that even to describe a simple system like an electron, we have to take the coupling to soft photons into account. The problem is that soft photons have arbitrarily large wavelengths and cannot be screened off. So even small systems are strongly coupled to the rest of the universe. The best you can hope for is that this coupling doesn't change much when two systems interact.
Well, experience teaches us that approximations that ignore this effect are often in agreement with experiments. Take, for example, the quantum mechanical treatment of the hydrogen atom.
 
  • Like
Likes bhobba
  • #104
Demystifier said:
You would help me explain it to you if you would first tell me why you think that it can't.
I don't have an opinion yet. But it isn't obvious to me, so I suspect that it isn't straightforward. For example, what would be the function space that the wave function belongs to? Just to clarify, because you may say "why do you ask that?": if you have infinitely many particles, the wave function will be a function of infinitely many variables, and that would make any integration tricky.
 
  • Like
Likes bhobba
  • #105
Demystifier said:
approximations that ignore this effect are often in agreement with experiments. Take, for example, the quantum mechanical treatment of the hydrogen atom.
Already for hydrogen, the agreement is only reasonable but not perfect. One needs radiative corrections to get a nonzero Lamb shift. This is experimentally measurable.

The bigger the system, the more difficult it is to shield the system from the environment in order to keep the dynamics approximately unitary. Already for helium clusters of around 100 atoms, the concept of temperature becomes relevant - signalling dissipative (non-unitary) behavior. The dissipation is always to the environment.
 
  • Like
Likes bhobba, Mentz114 and Auto-Didact
  • #106
martinbn said:
I don't have an opinion yet. But it isn't obvious to me, so I suspect that it isn't straightforward. For example, what would be the function space that the wave function belongs to? Just to clarify, because you may say "why do you ask that?": if you have infinitely many particles, the wave function will be a function of infinitely many variables, and that would make any integration tricky.
Ah, you ask from a rigorous mathematical point of view. My view is that in physics one does not need to worry too much about that, because infinities in physics are only potential infinities. For instance, if the visible universe has about ##10^{80}## particles, then one can study only those ##10^{80}## particles and approximate their number by infinity only when it makes the analysis simpler.
 
  • Like
Likes Auto-Didact and bhobba
  • #107
A. Neumaier said:
Already for hydrogen, the agreement is only reasonable but not perfect. One needs radiative corrections to get a nonzero Lamb shift. This is experimentally measurable.

The bigger the system, the more difficult it is to shield the system from the environment in order to keep the dynamics approximately unitary. Already for helium clusters of around 100 atoms, the concept of temperature becomes relevant - signalling dissipative (non-unitary) behavior. The dissipation is always to the environment.
That's all true, but note that the measurable Lamb shift is not an effect of dissipation.
 
  • #108
Demystifier said:
That's all true, but note that the measurable Lamb shift is not an effect of dissipation.
This is not quite true. Though generally only the real part is considered, the Lamb shift is actually complex, leading to observably broadened lines in the spectrum. Complex energies are the hallmark of dissipative effects. Note that real spectra always exhibit line broadening.
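The connection between a complex energy and a broadened line can be checked with a toy numerical sketch (the values of ##E_0## and ##\Gamma## below are arbitrary illustrative choices, in units with ##\hbar = 1##): Fourier-transforming the decaying amplitude ##e^{-iE_0 t - \Gamma t/2}## gives a Lorentzian spectrum whose full width at half maximum equals ##\Gamma##, the imaginary part of the energy.

```python
import numpy as np

E0, Gamma = 5.0, 0.5            # complex energy E = E0 - i*Gamma/2 (hbar = 1)
dt = 0.02
t = np.arange(0.0, 100.0, dt)   # decay factor is ~e^{-25} at the cutoff, negligible
psi = np.exp((-1j * E0 - Gamma / 2.0) * t)   # decaying amplitude

omega = np.linspace(4.0, 6.0, 401)
# spectrum S(w) = |integral of psi(t) e^{i w t} dt|^2, via a Riemann sum
F = (psi[None, :] * np.exp(1j * omega[:, None] * t[None, :])).sum(axis=1) * dt
S = np.abs(F) ** 2

above = omega[S >= S.max() / 2]
fwhm = above[-1] - above[0]     # full width at half maximum of the line
print(f"measured FWHM = {fwhm:.3f}  (Gamma = {Gamma})")
```

The measured width comes out equal to ##\Gamma## to within the grid resolution, which is the quantitative sense in which the imaginary part of the energy is a linewidth.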
 
  • #109
Demystifier said:
Ah, you ask from a rigorous mathematical point of view. My view is that in physics one does not need to worry too much about that, because infinities in physics are only potential infinities. For instance, if the visible universe has about ##10^{80}## particles, then one can study only those ##10^{80}## particles and approximate their number by infinity only when it makes the analysis simpler.

Here's something that I've never heard discussed before: If the universe is infinite (and the mass/energy is roughly uniformly distributed), then the wave function of the universe would involve an actual infinite number of particles. The perturbative calculations in QFT assume that the state is a perturbation of the vacuum, but no state with an infinite number of particles can be obtained from the vacuum by any finite number of applications of creation/destruction operators. So is there a mathematical treatment for a truly infinite universe?
 
  • Like
Likes bhobba and martinbn
  • #110
stevendaryl said:
So is there a mathematical treatment for a truly infinite universe?
I'm sure there is, but I'm not sure how rigorous it is.
 
  • #111
stevendaryl said:
The perturbative calculations in QFT assume that the state is a perturbation of the vacuum, but no state with an infinite number of particles can be obtained from the vacuum by any finite number of applications of creation/destruction operators. So is there a mathematical treatment for a truly infinite universe?
Yes. The vacuum sector (zero density and temperature) is relevant for few particle problems, where the asymptotic in-out behavior reflected in the S-matrix is the relevant object for making predictions. This is the textbook material. For finite density and temperature other sectors of the same quantum field theories matter.
These are treated in terms of CTP (closed time path) techniques, and only need finite densities, never total energies or total particle numbers. Thus they can cope with an infinite universe with a finite density.

Of course the mathematics is as nonrigorous as for perturbative QFT, but this is a different matter.
 
  • Like
Likes eloheim, bhobba and stevendaryl
  • #112
A. Neumaier said:
These are treated in terms of CTP (closed time path) techniques, and only need finite densities, never total energies or total particle numbers. Thus they can cope with an infinite universe with a finite density.
On the other hand, Bohmian mechanics needs an explicit multiparticle Hamiltonian, hence cannot cope with an everywhere positive density. The approximate recipe by Demystifier does not work there.
 
  • #113
stevendaryl said:
Here's something that I've never heard discussed before: If the universe is infinite (and the mass/energy is roughly uniformly distributed), then the wave function of the universe would involve an actual infinite number of particles. The perturbative calculations in QFT assume that the state is a perturbation of the vacuum, but no state with an infinite number of particles can be obtained from the vacuum by any finite number of applications of creation/destruction operators. So is there a mathematical treatment for a truly infinite universe?
The problem is that even electrons moving at different speeds are in states which differ by an infinite number of soft photons. So you can't treat both in the same Hilbert space. Also, free and interacting QFT differ by an infinite number of particles, which is paraphrased as Haag's theorem.
 
  • #114
A. Neumaier said:
On the other hand, Bohmian mechanics needs an explicit multiparticle Hamiltonian, hence cannot cope with an everywhere positive density. The approximate recipe by Demystifier does not work there.
Yes, but Bohmian mechanics is not a technique. It is an explanation. It is similar to Boltzmann's interpretation of thermodynamics as the motion of ##10^{23}## point-like atoms obeying the laws of classical mechanics. It is an explanation of thermodynamics, not a technique that should replace the standard techniques of 19th-century thermodynamics.
 
  • Like
Likes Auto-Didact
  • #115
Demystifier said:
It is part of the question of how to generalize BM to relativistic QFT. As you know, there is no generally accepted approach to that question. I myself have presented several different approaches. Currently I prefer the approach outlined in Sec. 4.3 of my https://lanl.arxiv.org/abs/1703.08341 and I will soon upload a more detailed paper to arXiv.
It's uploaded now: http://de.arxiv.org/abs/1811.11643
 
  • Like
Likes Auto-Didact
  • #116
What about Penrose's speculations about gravity-induced wave function collapse? Challenged, I know, but potentially a significant development since, and challenge to, Copenhagen? Fascinating thread.
 
  • #117
edmund cavendish said:
What about Penrose's speculations about gravity-induced wave function collapse? Challenged, I know, but potentially a significant development since, and challenge to, Copenhagen? Fascinating thread.

When I first heard it, it sounded like a completely goofy idea. But on the other hand, who knows what might come out of the effort to reconcile quantum mechanics and gravity? And maybe it relates to the ER=EPR conjecture.
 
  • #118
edmund cavendish said:
What about Penrose's speculations about gravity-induced wave function collapse? Challenged, I know, but potentially a significant development since, and challenge to, Copenhagen? Fascinating thread.
To be clear for others reading, the Diósi-Penrose objective reduction scheme (OR), i.e. non-unitary wave function collapse as an objectively occurring physical phenomenon, is actually more than an interpretation; this is because it postulates that standard QM literally breaks down for masses greater than ##m_{\mathrm {Planck}}##. This happens because above this limit, gravitational fields per GR will also be in superposition, leading to vacuum state issues which are explicitly not allowed in QFT.

Penrose therefore, using a bifurcation theory argument, says that gravitational field superpositions are intrinsically unstable, with superposed mass functioning as the bifurcation parameter. In other words, OR explicitly predicts that all mass superpositions have a natural decay rate proportional to the superposed mass, as per ##\Delta t \geq \frac {\hbar} {\Delta E}##. This prediction is in direct contradiction to standard QM.

This means that OR predicts very specific different experimental results compared to standard QM, namely spontaneous collapse of any object in superposition within a time ##\tau## into a single random one of the orthogonal states. The problem is however that to date no QM experiment has ever been carried out with large enough masses for this effect to be noticeable. This OR effect will only become experimentally distinguishable from standard QM when ##\sim10^{-8} \mathrm{kg}## objects can be put into superposition. There are multiple experiments being carried out to test this.
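As a rough numerical illustration of the OR timescale ##\tau \sim \hbar/\Delta E##: taking the gravitational self-energy of the superposition to be of order ##Gm^2/R## for a mass distribution of size ##R## displaced by about its own size (the exact prefactor and regularization are model-dependent, so this is an order-of-magnitude sketch only, not the thread's calculation), one can see why ##\sim10^{-8}\,\mathrm{kg}## is the interesting regime:

```python
import math

G    = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.055e-34    # reduced Planck constant, J s

def collapse_time(m, R):
    """Order-of-magnitude Diosi-Penrose collapse time tau ~ hbar / E_G,
    taking E_G ~ G m^2 / R for two superposed mass distributions of size R
    separated by ~R.  Prefactor and regularization are model-dependent."""
    E_G = G * m ** 2 / R
    return hbar / E_G

# a ~1e-8 kg silica sphere (density ~2650 kg/m^3): radius ~ 0.1 mm
m = 1e-8
R = (3 * m / 2650 / (4 * math.pi)) ** (1 / 3)
print(f"tau(1e-8 kg sphere)  ~ {collapse_time(m, R):.1e} s")

# a single large molecule: m ~ 1e-22 kg, size ~ 1 nm
print(f"tau(large molecule)  ~ {collapse_time(1e-22, 1e-9):.1e} s")
```

On these crude numbers the collapse of a ##10^{-8}## kg object would be effectively instantaneous, while for a single molecule it would take thousands of years, which is why interference experiments with molecules see nothing and the proposed tests aim at much heavier objects.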
stevendaryl said:
When I first heard it, it sounded like a completely goofy idea. But on the other hand, who knows what might come out of the effort to reconcile quantum mechanics and gravity?
The fact that the OR scheme is so simple is what I find makes it so interesting; the same is true of Bohmian mechanics (BM). The difference however is that BM is mathematically equivalent to QM and explains QM while OR goes beyond QM i.e. QM is a limiting case of (a theory with) OR and OR therefore directly points the way to quantum gravity, or more accurately 'gravitized QM' as Penrose puts it.
 
  • #119
Auto-Didact said:
BM is mathematically equivalent to QM
No, Bohmian mechanics predicts additional observables (exact position values at all times), in contradiction to ordinary quantum mechanics.
 
  • #120
A. Neumaier said:
No, Bohmian mechanics predicts additional observables (exact position values at all times), in contradiction to ordinary quantum mechanics.
The version of BM I recently proposed in http://de.arxiv.org/abs/1811.11643 even makes a new generic measurable prediction. (But not measurable with current technology.)
 
  • #121
Auto-Didact said:
Penrose therefore, using a bifurcation theory argument, says that gravitational field superpositions are intrinsically unstable, with superposed mass functioning as the bifurcation parameter. In other words, OR explicitly predicts that all mass superpositions have a natural decay rate proportional to the superposed mass, as per ##\Delta t \geq \frac {\hbar} {\Delta E}##. This prediction is in direct contradiction to standard QM.
Well, of course any objective collapse proposal (or any other modification) will be in contradiction with QM. That is the whole point. Or perhaps you mean something else?
 
  • #122
martinbn said:
Well, of course any objective collapse proposal (or any other modification) will be in contradiction with QM. That is the whole point. Or perhaps you mean something else?
There are (or at least historically there were) collapse interpretations which are completely mathematically equivalent to standard QM.
A. Neumaier said:
No, Bohmian mechanics predicts additional observables (exact position values at all times), in contradiction to ordinary quantum mechanics.
"Standard BM" predicts ##\psi## and a quantum potential, directly derived from the Schrödinger equation. There are no additional equations, meaning that whatever conceptual differences there may be, it is mathematically equivalent to standard QM. I quote Demystifier's latest paper:
page 11 said:
Similarly to the general rule in physics discussed in Sec. 4.3, the perceptibles in BM do not depend on details of particle trajectories. This is seen from Eqs. (15) and (16), which say that the probability of a perceptible is obtained by integrating out over all microscopic positions $$p^{\mathrm {(appar)}}_l=\int_{\text {supp }\rm {A_1}}d\vec x \int d\vec y\, |\Psi(\vec x,\vec y)|^2.$$ Intuitively, it says that the precise particle positions are not very important for making measurable predictions. It is important that particles have some positions (for otherwise it is not clear how a perceptible can exist), but it is much less important what exactly those positions are. That is why BM (with trajectories) makes the same measurable predictions as standard QM (without trajectories).

It is extremely important not to overlook the general idea above that the precise particle positions are not essential. For otherwise, one can easily make a false "measurable prediction” out of BM that seems to differ from standard QM, when in reality there is no such measurable prediction. The general recipe for making such a false "measurable prediction” out of BM is to put too much emphasis on trajectories and ignore the perceptibles. A lot of wrong “disproofs of BM” of that kind are published in the literature.

Under the peer pressure of making new measurable predictions out of BM, even distinguished Bohmians sometimes fall into this trap. For instance, some try to make new measurable predictions of arrival times by computing the arrival times of microscopic BM trajectories (see e.g. [32, 33]). However, the microscopic trajectories are not perceptibles, so the arrival times obtained from microscopic BM trajectories may be rather deceptive from a measurable point of view. To make a measurable prediction, one must first specify how exactly the arrival time is measured [34], which requires a formulation of the problem in terms of a perceptible. When the problem is formulated in that way, BM makes the same measurable predictions as standard QM, despite the fact that there is no time operator in standard QM (recall also the discussion around Eq. (21)).

...

Why cannot BM trajectories be observed? Or more precisely, why cannot a single measurement reveal a Bohmian particle position with a precision better than the spatial width of the wave function? This is not only because Bohmian positions are not perceptibles; after all, atom positions are also not perceptibles, yet an electron microscope can be used to observe atom positions. The true reason why Bohmian positions cannot be observed with a precision better than the spatial width of the wave function is that there are no local interactions (in the sense explained in Sec. 4.2) between BM particles. To make an analogy, trying to observe a Bohmian trajectory is like trying to observe the Moon's trajectory by watching tides. Classical gravity is a long-range force, so the observation of an effect on B caused by A does not directly reveal the position of A. That is why we cannot observe the Moon's trajectory by watching tides. That is also why there is no direct evidence for the existence of astrophysical dark matter (hypothetical matter with negligible interactions, except gravitational). In that sense, the absence of direct evidence for BM trajectories can be thought of as being analogous [14] to the absence of direct evidence for dark matter.
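The integral in the quoted formula for ##p^{\mathrm {(appar)}}_l## can be made concrete with a toy wave function (my illustrative choice, not from the paper): an equal superposition of two well-separated Gaussian packets in a pointer coordinate ##x##, times a Gaussian in a "microscopic" coordinate ##y##. Integrating ##|\Psi|^2## over the support ##x>0## and over all ##y## gives the probability of one perceptible outcome, with no reference to any particle trajectory:

```python
import numpy as np

# grid for a toy two-coordinate wave function Psi(x, y)
x = np.linspace(-10, 10, 801)
y = np.linspace(-10, 10, 801)
dx = x[1] - x[0]
dy = y[1] - y[0]

def g(u, mu, s=0.5):
    """Normalized Gaussian packet centered at mu with width s."""
    return (2 * np.pi * s ** 2) ** -0.25 * np.exp(-(u - mu) ** 2 / (4 * s ** 2))

X, Y = np.meshgrid(x, y, indexing="ij")
# equal superposition of two well-separated pointer packets, times a y packet
Psi = (g(X, -3.0) + g(X, +3.0)) / np.sqrt(2) * g(Y, 0.0)

# probability of the perceptible "pointer on the right", supp A1 = {x > 0}:
# integrate |Psi|^2 over x in the support and over *all* microscopic y
p_right = (np.abs(Psi) ** 2)[X > 0].sum() * dx * dy
print(f"p(right) = {p_right:.3f}")
```

The result is 1/2 (up to grid error), determined entirely by ##|\Psi|^2##; exactly where the Bohmian particle sits inside a packet never enters.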
Demystifier said:
The version of BM I recently proposed in http://de.arxiv.org/abs/1811.11643 even makes a new generic measurable prediction. (But not measurable with current technology.)
What page/section? Assuming it is Sec. 5.4, I wouldn't call this standard BM, because having every QM wave function explicitly satisfy the wave equation is clearly a relativistic extension to me.
 
  • Like
Likes Demystifier
  • #123
Auto-Didact said:
There are (or at least historically there were) collapse interpretations which are completely mathematically equivalent to standard QM.
Where the collapse is objective and due to nonlinear modification of QM? Anyway the whole point of Penrose is to change QM to solve the measurement problem. Surely you wouldn't expect that the change will give you a theory that is equivalent to QM!
 
  • #124
martinbn said:
Where the collapse is objective and due to nonlinear modification of QM?
No, but then again I didn't say that.
martinbn said:
Anyway the whole point of Penrose is to change QM to solve the measurement problem. Surely you wouldn't expect that the change will give you a theory that is equivalent to QM!
Agreed. The point however is that this is an intermediate thread, meaning that there should be a very clear distinction made between modifications of QM and interpretations of QM, seeing as an objective collapse model can be either.
 
  • #125
Auto-Didact said:
What page/section? Assuming it is from section 5.4,
I meant Sec. 5.2, last paragraph.
 
  • #126
Auto-Didact said:
There are (or at least historically there were) collapse interpretations which are completely mathematically equivalent to standard QM.

I would say "approximately equivalent". Exactly when and how collapse occurs makes a mathematical difference, because collapse eliminates interference terms that in principle contribute to measurable transition probabilities. But the differences are too tiny to be measurable in practice.
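The cross term stevendaryl mentions can be written out for two amplitudes feeding the same outcome (the values below are arbitrary illustrative choices): without collapse the amplitudes add before squaring, while collapse after the branch choice adds probabilities instead, dropping the interference term ##2\,\mathrm{Re}(a_1^* a_2)##.

```python
import numpy as np

# two paths to the same final outcome, with a relative phase
a1 = 0.5
a2 = 0.5 * np.exp(1j * np.pi / 3)

# no collapse: amplitudes add first, then square (interference survives)
p_unitary = abs(a1 + a2) ** 2
# collapse after the branch choice: probabilities add, cross term is gone
p_collapse = abs(a1) ** 2 + abs(a2) ** 2
# the difference is exactly the interference term 2 Re(a1* a2)
interference = p_unitary - p_collapse

print(f"with interference: {p_unitary:.3f}")    # 0.750
print(f"after collapse:    {p_collapse:.3f}")   # 0.500
print(f"cross term:        {interference:.3f}") # 0.250
```

Here the discrepancy is large because the two amplitudes are comparable; in realistic measurement situations the cross term is suppressed by decoherence, which is the sense in which the two descriptions are "approximately equivalent".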
 
  • #127
Demystifier said:
I meant Sec. 5.2, last paragraph.
Ah I see. How would that argument relate to the inherent fractal nature of QM paths? (Abbott & Wise 1981)
stevendaryl said:
I would say "approximately equivalent". Exactly when and how collapse occurs makes a mathematical difference, because collapse eliminates interference terms that in principle contribute to measurable transition probabilities. But the differences are too tiny to be measurable in practice.
My point was that the underlying mathematical formalisms of an interpretation with and without collapse are identical; given that all standard QM formalisms are equivalent, we must conclude that if one of them doesn't describe the measurement process then none of them actually describes the measurement process.

In other words, any mathematical operationalization of a QM measurement into a measurement theory which is consistent with the theoretical formalism of unitary evolution de facto constitutes a modification of standard QM; the operator algebra formalism especially seems to naturally suggest such modifications.
 
  • #128
Auto-Didact said:
My point was that the underlying mathematical formalisms of an interpretation with and without collapse are identical; given that all standard QM formalisms are equivalent, we must conclude that if one of them doesn't describe the measurement process then none of them actually describes the measurement process.
Collapse and No-collapse give different results in the Frauchiger-Renner experiment.
 
  • #129
DarMM said:
Collapse and No-collapse give different results in the Frauchiger-Renner experiment.
Didn't we establish earlier that the FR theorem referred to the inconsistency of having subjective and objective collapse within one interpretation? Such an inconsistency says absolutely nothing about the consistency or inconsistency of objective and subjective collapse between two different interpretations.
 
  • #130
Auto-Didact said:
Didn't we establish earlier that the FR theorem referred to the inconsistency of having subjective and objective collapse within one interpretation? Such an inconsistency says absolutely nothing about the consistency or inconsistency of objective and subjective collapse between two different interpretations.
No, it refers to an inconsistency in Subjective Collapse, although it can be avoided, so it is more an inconsistency in non-perspectival Subjective Collapse models. Separate to that, it shows a difference in predictions between Objective Collapse and No-collapse.
 
  • #131
DarMM said:
No, it refers to an inconsistency in Subjective Collapse, although it can be avoided, so it is more an inconsistency in non-perspectival Subjective Collapse models. Separate to that, it shows a difference in predictions between Objective Collapse and No-collapse.
The controversy surrounding the theorem among experts is enough to disregard it for the moment. I'll read up on it later and come back to this.
 
  • #132
Auto-Didact said:
The controversy surrounding the theorem among experts is enough to disregard it for the moment. I'll read up on it later and come back to this.
I don't think there is much of a controversy, to be honest; most of the literature seems to agree with it broadly, so I don't know if it is valid to reason along those lines.

There is a common objection, but it is nullified by a modification by Luis Masanes, see this paper of Healey: https://arxiv.org/abs/1807.00421

Even ignoring Masanes's modification, the common objection (intervention sensitivity) is true, but what it implies is very strange; again, see Healey's paper.

Current discussion is on Masanes's version, but there seems to be a deep conflict there between pure unitary QM and Relativity:
http://philsci-archive.pitt.edu/15373/
http://philsci-archive.pitt.edu/15357/
 
  • Like
Likes akvadrako
  • #133
Auto-Didact said:
Ah I see. How would that argument relate to the inherent fractal nature of QM paths? (Abbott & Wise 1981)
I don't see any relation between those two things. Why do you think that they might be related?
 
  • #134
DarMM said:
Current discussion is on Masanes's version, but there seems to be a deep conflict there between pure unitary QM and Relativity:
http://philsci-archive.pitt.edu/15373/
http://philsci-archive.pitt.edu/15357/

I think all Shan Gao is showing is the conflict between superluminal effects and special relativity. What he's saying seems straightforward: if Bob makes a measurement, it will collapse Alice's system and she will repeatedly measure the same value (+1 or -1) after each reset performed by a superobserver. And if Bob waits until after all of her measurements, she'll get random answers each time.

But unitary QM doesn't have instantaneous collapse or other superluminal dynamics; decoherence propagates at ≤ c.
 
  • Like
Likes eloheim
  • #135
akvadrako said:
I think all Shan Gao is showing is the conflict between superluminal effects and special relativity.

But unitary QM doesn't have instantaneous collapse or other superluminal dynamics; decoherence propagates at ≤ c.
Let's say in Many Worlds, how is the agreement between Alice and Bob's measurement explained purely locally? I agree the dynamics are local, but the states are not.
 
  • #136
DarMM said:
Let's say in Many Worlds, how is the agreement between Alice and Bob's measurement explained purely locally? I agree the dynamics are local, but the states are not.

I don't see how local dynamics can lead to non-local states, but I think it's a consequence of considering gauge-equivalent states to be physically equivalent. At least most of the papers I've seen that aim to show unitary QM is local work in the Deutsch-Hayden picture, and that feature of gauge-sensitivity is not in debate.

My understanding of a basic measurement of entangled qubits is like this: Alice and Bob make their independent measurements in whatever bases. This causes local splits into two outcomes each, and eventually two labs/observers each. Now they send their results to Charles. If they measured in different bases, his lab splits into 4 worlds, the magnitude of each given by the Born rule. Tipler (2014) describes this in a bit more detail. I suppose the important aspect is that each world carries with it a "label" so they know how to recombine.
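The Born-rule branch weights described above can be sketched for a concrete Bell state (the state ##|\Phi^+\rangle## and the analyzer angles below are my illustrative choices, not from the post): measuring the two qubits in rotated real bases yields four joint outcomes, one per "world", whose squared amplitudes sum to 1 and reproduce the familiar ##\cos^2(a-b)## correlation.

```python
import numpy as np

# Bell state |Phi+> = (|00> + |11>)/sqrt(2), as a 2x2 coefficient matrix
Phi = np.array([[1.0, 0.0], [0.0, 1.0]]) / np.sqrt(2)

def basis(theta):
    """Rows are the two outcome bras of a measurement rotated by theta (real plane)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s], [-s, c]])

a, b = 0.3, 1.0   # Alice's and Bob's analyzer settings (arbitrary)

# branch amplitude for joint outcome (i, j): sum_kl A[i,k] B[j,l] Phi[k,l]
amps = np.einsum('ik,jl,kl->ij', basis(a), basis(b), Phi)
probs = np.abs(amps) ** 2          # Born weights of the four "worlds"

print(probs)
print("P(same outcome) =", probs[0, 0] + probs[1, 1])   # = cos^2(a - b)
```

The four weights are exactly the magnitudes the four post-merge worlds would carry; the calculation itself is silent on the locality question being debated, which is about where those weights "live" before the results are compared.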
 
  • #137
akvadrako said:
I suppose the important aspect is that each world carries with it a "label" so they know how to recombine.
This is probably the best point to focus on.

Multiverse theories with hidden variables, that is extra variables beyond the wavefunction, can achieve locality as this label is an additional degree of freedom.

However in Everettian MWI where is this label to be found locally? It seems to me it is to be found only in the global state and is not associated with any region. This is what I mean by MWI states being nonlocal.

This point is made more clearly in Travis Norsen's book "Foundations of Quantum Mechanics", though it predates that book.
 
  • #138
DarMM said:
However in Everettian MWI where is this label to be found locally?

The labs should contain the information about which measurement was performed and what the results were, and together they encode the original correlation between their qubits. Somehow this must be contained in the message to Charles, perhaps in gauge degrees of freedom.

There is information that isn't locally accessible, but I don't think this implies some kind of non-classical non-locality. You get the same thing when you encrypt a message and store the key/ciphertext at different locations. You could say the message is stored globally, since neither part by itself contains anything except random data (zero information).
 
  • #139
akvadrako said:
The labs should contain the information about which measurement was performed and what the results were, and together they encode the original correlation between their qubits. Somehow this must be contained in the message to Charles, perhaps in gauge degrees of freedom.

There is information that isn't locally accessible, but I don't think this implies some kind of non-classical non-locality. You get the same thing when you encrypt a message and store the key/ciphertext at different locations. You could say the message is stored globally, since neither part by itself contains anything except random data (zero information).
I'm not so sure about this. Remove Charles and let's just have Alice and Bob do a measurement. How does the Alice with the 0 result always meet the Bob with the 0 result? If they split locally along their own devices' bases, shouldn't there be four worlds? A 0 and a 1 Alice are produced locally and a 0 and a 1 Bob are produced locally, so how do they know to interact only with/be in the same world as the matching-result world for the other observer?
 
  • #140
DarMM said:
I'm not so sure about this. Remove Charles and let's just have Alice and Bob do a measurement. How does the Alice with the 0 result always meet the Bob with the 0 result? If they split locally along their own devices' bases, shouldn't there be four worlds? A 0 and a 1 Alice are produced locally and a 0 and a 1 Bob are produced locally, so how do they know to interact only with/be in the same world as the matching-result world for the other observer?

I assume you are asking for something deeper than how the calculation works, as shown in Tipler's paper. I only have my own incomplete idea: first, how do Alice and Bob even know which basis is up? It must be because they have a shared reference frame. I imagine the measurement basis represents a new dimension, defined relative to that reference. When the "split" happens, the copies of Alice/Bob diverge into that new dimension. So when they come into contact the matching subsystems will overlap.
 
  • #141
Demystifier said:
I don't see any relation between those two things. Why do you think that they might be related?
I was hoping you might have already thought deeper about it than I have. I recall Feynman (in his lectures, w.r.t. his path integral and in 1979 w.r.t. particle jets) and Veneziano talking about such things.

Given Feynman's statements, Abbott & Wise's demonstration, as well as BM trajectories clearly being fractals, this naturally suggests to me that zooming in on a particle trajectory in standard QM/BM should produce richness undreamt of, very much in line with what you describe in section 5.2 of your latest paper.

Of course, the specifics of all of this would depend on the trajectories' exact (multi)fractal characteristics in question. Given that you explicitly say Galilean and non-Lorentzian though, this would imply much less constraint than is usually considered. Is there overlap between your idea and Amelino-Camelia or Magueijo's DSR?
 
  • #142
akvadrako said:
I assume you are asking for something deeper than how the calculation works, as shown in Tipler's paper. I only have my own incomplete idea: first, how do Alice and Bob even know which basis is up? It must be because they have a shared reference frame. I imagine the measurement basis represents a new dimension, defined relative to that reference. When the "split" happens, the copies of Alice/Bob diverge into that new dimension. So when they come into contact the matching subsystems will overlap.
I've now read Tipler's paper and to be honest I don't see how it solves the issue; to me it simply declares that the evolution must be local because you could reorder it in another frame, the events being spacelike separated. However, the usual issue is more that there is a tying together of results at spacelike separated points regardless of their temporal ordering in any frame. I couldn't find much discussion of Tipler's paper, which is surprising for such a bold claim, so I might return to it.

It led me on to reading the Deutsch-Hayden paper and the surrounding literature.
akvadrako said:
I don't see how local dynamics can lead to non-local states, but I think it's a consequence of considering gauge-equivalent states to be physically equivalent. At least most of the papers I've seen that aim to show unitary QM is local work in the Deutsch-Hayden picture, and that feature of gauge-sensitivity is not in debate.
From having a look, it seems it is in debate: Wallace and Tipler, MWI proponents themselves, disagree with Deutsch's analysis. Wallace himself thinks that states in MWI are nonlocal, simply because the quantum state itself is nonlocal, so if you take it as ontic you have nonlocal physical states. I've read Deutsch's follow-up paper, but to me he takes a very weak notion of "local reality", where the density matrix of one of the quantum systems (with the other system traced out) is taken as the physical state of the system, the "element of reality" to use Einstein's words. This is very strange to me, but perhaps it works out; however, the details seem to be lacking in Deutsch's paper.

Considering that it's controversial within MWI circles, and given the modified/weakened notion of "local reality", I'm not sure what to make of these arguments, so I'm not 100% convinced MWI is local. Food for thought.
 
  • Like
Likes akvadrako and bhobba
  • #143
Auto-Didact said:
Given Feynman's statements, Abbott & Wise's demonstration, and BM trajectories apparently being fractals, it naturally seems to me that zooming in on a particle trajectory in standard QM/BM should reveal richness undreamt of, very much in line with what you describe in section 5.2 of your latest paper.
Note that the fractal nature in the Abbott & Wise case is caused by measurement. On the other hand, unmeasured BM trajectories do not have a fractal nature. Experiments of the Abbott & Wise type probably cannot easily test Lorentz invariance. On the other hand, Lorentz invariance can be tested by scattering experiments at particle accelerators. The currently strongest particle accelerator (the LHC) sees nothing beyond the Standard Model, but my theory predicts that a much stronger accelerator should see new particles with Lorentz non-invariant cross sections.

Auto-Didact said:
Is there overlap between your idea and Amelino-Camelia or Magueijo's DSR?
Not much.
 
  • Like
Likes Auto-Didact
  • #145
DarMM said:
Considering that it's controversial within MWI circles, and given the modified/weakened notion of "local reality", I'm not sure what to make of these arguments, so I'm not 100% convinced MWI is local. Food for thought.
As a simpler version of this: since the wavefunction for two particles ##\psi(x_1,t_1;x_2,t_2)## can be non-zero even when the points ##(x_1,t_1)## and ##(x_2,t_2)## are spacelike separated, and since the wavefunction is ontic in MWI (in fact it is the fundamental object), there is a real physical quantity associated with spacelike separated pairs.

I don't see how one can avoid this: the wavefunction lives in the space of functions over ##n##-fold products of Minkowski space/a hypersurface thereof (depending on the Schrödinger/Heisenberg picture), not Minkowski space. You could show that predictions/observations don't violate locality, but that's no different from Bohmian mechanics. The fundamental ontology is not local.
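For concreteness, a minimal (textbook, not from this thread) example of the kind of state at issue: take two wave packets ##\varphi_A## and ##\varphi_B## localized in spacelike separated regions ##A## and ##B## on a common hypersurface, and form
$$\psi(x_1,x_2) = \frac{1}{\sqrt{2}}\left[\varphi_A(x_1)\,\varphi_B(x_2) - \varphi_B(x_1)\,\varphi_A(x_2)\right].$$
This ##\psi## does not factor into a function on ##A## times a function on ##B##; whatever physical quantity it encodes is attached jointly to the pair of regions, not to either one alone.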

While writing I saw @Demystifier's post; "alocal" is probably a better word.
 
  • Like
Likes Demystifier
  • #146
Demystifier said:
As I argue in http://de.arxiv.org/abs/1703.08341 , MWI is neither local nor non-local. It is alocal.
Thanks for this @Demystifier , it actually leads into one of the most confusing points of MWI for me.

In MWI the wavefunction is all there is: a single point in ##\mathcal{H}##, the Hilbert space. It then "just so happens" that ##\mathcal{H}## is isomorphic to ##\mathcal{L}^{2}\left(\Sigma\right)##, with ##\Sigma## a space of functions over a hypersurface of a Lorentzian manifold. Due to ##\mathcal{H}## having this structure, and assuming it eventually reaches the point where it has a stable decomposition into:
$$\mathcal{H} = \mathcal{E} \otimes \mathcal{S}$$
with ##\mathcal{E}## the environment Hilbert space, and assuming ##\mathcal{E}## has properties that make it pseudo-classical, and also assuming time evolution behaves in a certain way (not overly entangling), one can show that components in ##\mathcal{S}## decohere in such a manner that their time evolution is approximately isomorphic to (multiple copies of) objects living in ##\mathcal{M}^{4}##, a Lorentzian manifold (Minkowski space for ease, let's say).

I just find the whole picture hard to accept. Minkowski space is essentially an illusion arising only from the fact that ##\mathcal{H}## is isomorphic to a very abstract space built over it, combined with the emergence (somehow) of a stable decomposition where one part is pseudo-classical and thermal and a fortunate evolution law.
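The decoherence step in this story can at least be made quantitative in a toy model. The sketch below (a standard Zurek-style model; the coupling angle and environment sizes are arbitrary choices for illustration, not from the thread) has a system qubit starting in ##|+\rangle## and ##n## environment qubits each rotated conditionally on the system, so the environment records states ##|E_0\rangle, |E_1\rangle## with overlap ##\cos^n\theta##, and the system's reduced density matrix loses its off-diagonal coherence exponentially in ##n##:

```python
import numpy as np

# Toy Zurek-style decoherence model (illustrative assumption): system qubit
# in |+>, each of n environment qubits rotated by angle theta conditional on
# the system state. The environment records satisfy <E0|E1> = cos(theta)**n,
# and tracing out the environment gives the reduced density matrix
#   rho_S = 1/2 * [[1, <E0|E1>], [<E0|E1>, 1]].
theta = 0.4  # hypothetical per-qubit coupling strength
for n in (1, 10, 40):
    overlap = np.cos(theta) ** n
    rho_S = 0.5 * np.array([[1.0, overlap],
                            [overlap, 1.0]])
    print(n, abs(rho_S[0, 1]))  # off-diagonal coherence shrinks as n grows
```

The diagonal entries (the would-be branch weights) are untouched while the interference terms die off, which is the sense in which a stable environment/system split plus suitable dynamics yields approximately classical components.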
 
  • Like
Likes Demystifier
  • #147
DarMM said:
the wavefunction lives in the space of functions over ##n##-fold products of Minkowski space/a hypersurface thereof
Note: In QFT this would be "the space of square-integrable functions over tempered Schwartz distributions on a Lorentzian hypersurface, with integrability defined with respect to the unique measure that leaves the Hamiltonian finite".

Quite an abstract object.
 
  • #148
I don't think it matters that the fundamental object is naturally interpreted as alocal. For a theory to be local, it must be possible to write down the states in a local manner, where all the "elements of reality" are confined to a region. As I understand it, this is what Deutsch and Tipler show in their papers. Though I agree it's not totally convincing and remains an open question.

But to demonstrate that many worlds is non-local, one would have to provide an example experiment which can't be described by any quantum model that involves only local states and dynamics. So far I haven't come across such a demonstration.

Also perhaps relevant: Against Wavefunction Realism (Wallace, 2017). He's saying that you shouldn't take Hilbert space as the fundamental ontology in many worlds, but should instead consider the non-fundamental ontologies implied by specific models. My point in linking this is to show that many worlds does not necessarily imply any specific decomposition.
 
Last edited:
  • #149
DarMM said:
However, the usual issue is rather that results at spacelike separated points are tied together regardless of their temporal ordering in any frame. I couldn't find much discussion of Tipler's paper, which is surprising for such a bold claim, so I might return to it.

Perhaps there is no discussion because, like me, he doesn't see how it's an issue. Basically, there are these "labels" attached to all the elements of the systems that encode the information necessary to know which systems they overlap with. Can you help me understand your objection a little better? Is it about the encoding of that information at the ontic level, or about the mechanism which matches up those overlapping systems? I don't know how it's implemented, but I can't see any reason why it would be problematic.
 
Last edited:
  • #150
DarMM said:
Thanks for this @Demystifier , it actually leads into one of the most confusing points of MWI for me.

In MWI the wavefunction is all there is: a single point in ##\mathcal{H}##, the Hilbert space. It then "just so happens" that ##\mathcal{H}## is isomorphic to ##\mathcal{L}^{2}\left(\Sigma\right)##, with ##\Sigma## a space of functions over a hypersurface of a Lorentzian manifold. Due to ##\mathcal{H}## having this structure, and assuming it eventually reaches the point where it has a stable decomposition into:
$$\mathcal{H} = \mathcal{E} \otimes \mathcal{S}$$
with ##\mathcal{E}## the environment Hilbert space, and assuming ##\mathcal{E}## has properties that make it pseudo-classical, and also assuming time evolution behaves in a certain way (not overly entangling), one can show that components in ##\mathcal{S}## decohere in such a manner that their time evolution is approximately isomorphic to (multiple copies of) objects living in ##\mathcal{M}^{4}##, a Lorentzian manifold (Minkowski space for ease, let's say).

I just find the whole picture hard to accept. Minkowski space is essentially an illusion arising only from the fact that ##\mathcal{H}## is isomorphic to a very abstract space built over it, combined with the emergence (somehow) of a stable decomposition where one part is pseudo-classical and thermal and a fortunate evolution law.
Yes, I think that's essentially the same problem as the problem I discuss in Sec. 3.3.
 