What has changed since the Copenhagen interpretation?

In summary, the paper casts doubt on whether Copenhagen-like interpretations can be considered to give an objective view of experiments, arguing that they are instead perspectival.
  • #106
martinbn said:
I don't have an opinion yet. But it isn't obvious to me, so I suspect that it isn't straightforward. For example, what would be the function space that the wave function belongs to? Just to clarify, because you may say "why do you ask that?": if you have infinitely many particles, the wave function will be a function of infinitely many variables, which would make any integration tricky.
Ah, you ask from a rigorous mathematical point of view. My view is that in physics one does not need to worry too much about that, because infinities in physics are only potential infinities. For instance, if the visible universe has about ##10^{80}## particles, then one can study only those ##10^{80}## particles and approximate it by infinity only when it makes the analysis simpler.
 
  • Like
Likes Auto-Didact and bhobba
  • #107
A. Neumaier said:
Already for hydrogen, the agreement is only reasonable but not perfect. One needs the radiation corrections to get a nonzero Lamb shift. This is experimentally measurable.

The bigger the system, the more difficult it is to shield the system from the environment in order to keep the dynamics approximately unitary. Already for helium clusters of around 100 atoms, the concept of temperature becomes relevant - signalling dissipative (non-unitary) behavior. The dissipation is always to the environment.
That's all true, but note that the measurable Lamb shift is not an effect of dissipation.
 
  • #108
Demystifier said:
That's all true, but note that the measurable Lamb shift is not an effect of dissipation.
This is not quite true. Though generally only the real part is considered, the Lamb shift is actually complex, leading to observably broadened lines in the spectrum. Complex energies are the hallmark of dissipative effects. Note that measured spectra always exhibit line broadening.
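To see the connection in the simplest terms (a textbook sketch, not the full Lamb shift calculation): a level with complex energy ##E_0 - i\Gamma/2## evolves as
$$\psi(t) \propto e^{-i(E_0 - i\Gamma/2)t/\hbar}, \qquad |\psi(t)|^2 \propto e^{-\Gamma t/\hbar},$$
so the imaginary part produces exponential decay, and the Fourier transform of the amplitude is a Lorentzian of width ##\Gamma##, i.e. the natural line width. A nonzero imaginary part is thus exactly the signature of dissipation into the radiation field.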
 
  • #109
Demystifier said:
Ah, you ask from a rigorous mathematical point of view. My view is that in physics one does not need to worry too much about that, because infinities in physics are only potential infinities. For instance, if the visible universe has about ##10^{80}## particles, then one can study only those ##10^{80}## particles and approximate it by infinity only when it makes the analysis simpler.

Here's something that I've never heard discussed before: If the universe is infinite (and the mass/energy is roughly uniformly distributed), then the wave function of the universe would involve an actual infinite number of particles. The perturbative calculations in QFT assume that the state is a perturbation of the vacuum, but no state with an infinite number of particles can be obtained from the vacuum by any finite number of applications of creation/destruction operators. So is there a mathematical treatment for a truly infinite universe?
 
  • Like
Likes bhobba and martinbn
  • #110
stevendaryl said:
So is there a mathematical treatment for a truly infinite universe?
I'm sure there is, but I'm not sure how rigorous it is.
 
  • #111
stevendaryl said:
The perturbative calculations in QFT assume that the state is a perturbation of the vacuum, but no state with an infinite number of particles can be obtained from the vacuum by any finite number of applications of creation/destruction operators. So is there a mathematical treatment for a truly infinite universe?
Yes. The vacuum sector (zero density and temperature) is relevant for few-particle problems, where the asymptotic in-out behavior reflected in the S-matrix is the relevant object for making predictions. This is textbook material. For finite density and temperature, other sectors of the same quantum field theories matter.
These are treated in terms of CTP (closed time path) techniques, and only need finite densities, never total energies or total particle numbers. Thus they can cope with an infinite universe with a finite density.
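To make the "finite densities, not total numbers" point concrete (just a schematic illustration): in such a sector one works with local densities rather than global operators, e.g.
$$n(x) = \langle \psi^\dagger(x)\psi(x)\rangle \ \text{ finite at every } x, \qquad N = \int d^3x\, n(x) \to \infty \ \text{ in infinite volume},$$
so expectation values of local observables remain well defined even though the total particle number and total energy diverge. This is just the usual thermodynamic limit, ##N, V \to \infty## with ##N/V## fixed.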

Of course the mathematics is as nonrigorous as for perturbative QFT, but this is a different matter.
 
  • Like
Likes eloheim, bhobba and stevendaryl
  • #112
A. Neumaier said:
These are treated in terms of CTP (closed time path) techniques, and only need finite densities, never total energies or total particle numbers. Thus they can cope with an infinite universe with a finite density.
On the other hand, Bohmian mechanics needs an explicit multiparticle Hamiltonian, hence cannot cope with an everywhere positive density. The approximate recipe by Demystifier does not work there.
 
  • #113
stevendaryl said:
Here's something that I've never heard discussed before: If the universe is infinite (and the mass/energy is roughly uniformly distributed), then the wave function of the universe would involve an actual infinite number of particles. The perturbative calculations in QFT assume that the state is a perturbation of the vacuum, but no state with an infinite number of particles can be obtained from the vacuum by any finite number of applications of creation/destruction operators. So is there a mathematical treatment for a truly infinite universe?
The problem is that even electrons moving at different speeds are described by states that differ by an infinite number of soft photons, so you can't treat both in the same Hilbert space. Also, free and interacting QFT differ by an infinite number of particles, which is paraphrased as Haag's theorem.
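A schematic way to see why "infinitely many soft photons" forces you out of the original Hilbert space (a standard coherent-state argument, not specific to the details of QED): if the dressed state puts each mode ##k## into a coherent state ##|\alpha_k\rangle##, its overlap with the bare Fock vacuum is
$$\prod_k \langle 0_k|\alpha_k\rangle = \exp\Big(-\tfrac12 \sum_k |\alpha_k|^2\Big) \to 0 \qquad \text{when } \sum_k |\alpha_k|^2 = \infty,$$
and in fact its overlap with every finite-photon-number state vanishes, so it lives in an inequivalent representation of the same field algebra.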
 
  • #114
A. Neumaier said:
On the other hand, Bohmian mechanics needs an explicit multiparticle Hamiltonian, hence cannot cope with an everywhere positive density. The approximate recipe by Demystifier does not work there.
Yes, but Bohmian mechanics is not a technique. It is an explanation. It is similar to Boltzmann's interpretation of thermodynamics as the motion of ##10^{23}## point-like atoms obeying the laws of classical mechanics. That is an explanation of thermodynamics, not a technique meant to replace the standard techniques of 19th-century thermodynamics.
 
  • Like
Likes Auto-Didact
  • #115
Demystifier said:
It is a part of the question how to generalize BM to relativistic QFT. As you know, there is no one generally accepted approach to that question. I myself have presented several different approaches. Currently I prefer the approach outlined in Sec. 4.3 of my https://lanl.arxiv.org/abs/1703.08341 Soon I will upload a more detailed paper on arXiv.
It's uploaded now: http://de.arxiv.org/abs/1811.11643
 
  • Like
Likes Auto-Didact
  • #116
What about Penrose's speculations about gravity-induced wave function collapse? Challenged, I know, but potentially a significant development since, or challenge to, Copenhagen? Fascinating thread.
 
  • #117
edmund cavendish said:
What about Penrose's speculations about gravity-induced wave function collapse? Challenged, I know, but potentially a significant development since, or challenge to, Copenhagen? Fascinating thread.

When I first heard it, it sounded like a completely goofy idea. But on the other hand, who knows what might come out of the effort to reconcile quantum mechanics and gravity? And maybe it relates to the ER=EPR conjecture.
 
  • #118
edmund cavendish said:
What about Penrose's speculations about gravity-induced wave function collapse? Challenged, I know, but potentially a significant development since, or challenge to, Copenhagen? Fascinating thread.
To be clear for others reading, the Diósi-Penrose objective reduction scheme (OR), i.e. non-unitary wave function collapse as an objectively occurring physical phenomenon, is actually more than an interpretation; this is because it postulates that standard QM literally breaks down for masses greater than ##m_{\mathrm {Planck}}##. This happens because above this limit, gravitational fields per GR will also be in superposition, leading to vacuum-state ambiguities that are explicitly not allowed in QFT.

Penrose therefore, using a bifurcation theory argument, says that gravitational field superpositions are intrinsically unstable, with the superposed mass functioning as the bifurcation parameter. In other words, OR explicitly predicts that every mass superposition has a natural decay rate that grows with the superposed mass, with lifetime ##\Delta t \approx \frac {\hbar} {\Delta E}##, where ##\Delta E## is the gravitational self-energy of the superposition. This prediction is in direct contradiction to standard QM.

This means that OR predicts specific experimental results that differ from those of standard QM, namely spontaneous collapse of any object in superposition within a time ##\tau## into a single, randomly chosen one of the orthogonal states. The problem, however, is that to date no QM experiment has ever been carried out with masses large enough for this effect to be noticeable. The OR effect will only become experimentally distinguishable from standard QM when ##\sim10^{-8}\,\mathrm{kg}## objects can be put into superposition. There are multiple experiments being carried out to test this.
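For a rough sense of the scales (one common order-of-magnitude estimate; the precise ##\Delta E## depends on how the two mass distributions and their overlap are modelled): for a lump of mass ##m## and size ##R## superposed over a separation larger than ##R##, the gravitational self-energy of the difference of the two configurations is roughly
$$\Delta E \sim \frac{G m^2}{R}, \qquad \tau \sim \frac{\hbar}{\Delta E} \sim \frac{\hbar R}{G m^2},$$
so ##\tau## falls off very rapidly with increasing mass, which is why the atoms and molecules used in existing interference experiments are far too light for the effect to be noticeable.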
stevendaryl said:
When I first heard it, it sounded like a completely goofy idea. But on the other hand, who knows what might come out of the effort to reconcile quantum mechanics and gravity?
The fact that the OR scheme is so simple is what I find makes it so interesting; the same is true of Bohmian mechanics (BM). The difference, however, is that BM is mathematically equivalent to QM and explains QM, while OR goes beyond QM, i.e. QM is a limiting case of (a theory with) OR; OR therefore directly points the way to quantum gravity, or more accurately 'gravitized QM', as Penrose puts it.
 
  • #119
Auto-Didact said:
BM is mathematically equivalent to QM
No, Bohmian mechanics predicts additional observables (exact position values at all times), in contradiction to ordinary quantum mechanics.
 
  • #120
A. Neumaier said:
No, Bohmian mechanics predicts additional observables (exact position values at all times), in contradiction to ordinary quantum mechanics.
The version of BM I recently proposed in http://de.arxiv.org/abs/1811.11643 even makes a new generic measurable prediction. (But not measurable with current technology.)
 
  • #121
Auto-Didact said:
Penrose therefore, using a bifurcation theory argument, says that gravitational field superpositions are intrinsically unstable, with the superposed mass functioning as the bifurcation parameter. In other words, OR explicitly predicts that every mass superposition has a natural decay rate that grows with the superposed mass, with lifetime ##\Delta t \approx \frac {\hbar} {\Delta E}##, where ##\Delta E## is the gravitational self-energy of the superposition. This prediction is in direct contradiction to standard QM.
Well, of course any objective collapse proposal (or any other modification) will be in contradiction with QM. That is the whole point. Or perhaps you mean something else?
 
  • #122
martinbn said:
Well, of course any objective collapse proposal (or any other modification) will be in contradiction with QM. That is the whole point. Or perhaps you mean something else?
There are (or at least historically there were) collapse interpretations which are completely mathematically equivalent to standard QM.
A. Neumaier said:
No, Bohmian mechanics predicts additional observables (exact position values at all times), in contradiction to ordinary quantum mechanics.
"Standard BM" predicts ##\psi## and a quantum potential, directly derived from the Schrodinger equation. There are no additional equations, meaning that whatever conceptual differences there may be, it is mathematically equivalent to standard QM. I quote Demystifier's latest paper:
page 11 said:
Similarly to the general rule in physics discussed in Sec. 4.3, the perceptibles in BM do not depend on details of particle trajectories. This is seen from Eqs. (15) and (16), which say that the probability of a perceptible is obtained by integrating out over all microscopic positions $$p^{\mathrm {(appar)}}_l=\int_{\text {supp}\,\mathrm{A}_l}d\vec x \int d\vec y\, |\Psi(\vec x,\vec y)|^2.$$ Intuitively, it says that the precise particle positions are not very important for making measurable predictions. It is important that particles have some positions (for otherwise it is not clear how a perceptible can exist), but it is much less important what exactly those positions are. That is why BM (with trajectories) makes the same measurable predictions as standard QM (without trajectories).

It is extremely important not to overlook the general idea above that the precise particle positions are not essential. For otherwise, one can easily make a false "measurable prediction" out of BM that seems to differ from standard QM, when in reality there is no such measurable prediction. The general recipe for making such a false "measurable prediction" out of BM is to put too much emphasis on trajectories and ignore the perceptibles. A lot of wrong "disproofs of BM" of that kind are published in the literature.

Under peer pressure to make new measurable predictions out of BM, even distinguished Bohmians sometimes fall into this trap. For instance, some try to make new measurable predictions of arrival times by computing the arrival times of microscopic BM trajectories (see e.g. [32, 33]). However, the microscopic trajectories are not perceptibles, so the arrival times obtained from microscopic BM trajectories may be rather deceptive from a measurable point of view. To make a measurable prediction, one must first specify how exactly the arrival time is measured [34], which requires a formulation of the problem in terms of a perceptible. When the problem is formulated in that way, BM makes the same measurable predictions as standard QM, despite the fact that there is no time operator in standard QM (recall also the discussion around Eq. (21)).

...

Why cannot BM trajectories be observed? Or more precisely, why cannot a single measurement reveal a Bohmian particle position with a precision better than the spatial width of the wave function? This is not only because Bohmian positions are not perceptibles; after all, atom positions are also not perceptibles, yet an electron microscope can be used to observe atom positions. The true reason why Bohmian positions cannot be observed with a precision better than the spatial width of the wave function is that there are no local interactions (in the sense explained in Sec. 4.2) between BM particles. To make an analogy, trying to observe a Bohmian trajectory is like trying to observe the Moon's trajectory by watching the tides. Classical gravity is a long-range force, so observing the effect on B caused by A does not directly reveal the position of A. That is why we cannot observe the Moon's trajectory by watching the tides. That is also why there is no direct evidence for the existence of astrophysical dark matter (hypothetical matter with negligible interactions, except gravitational). In that sense, the absence of direct evidence for BM trajectories can be thought of as being analogous [14] to the absence of direct evidence for dark matter.
Demystifier said:
The version of BM I recently proposed in http://de.arxiv.org/abs/1811.11643 even makes a new generic measurable prediction. (But not measurable with current technology.)
What page/section? Assuming it is from section 5.4, I wouldn't call this standard BM, because having any QM wave function explicitly satisfy the wave equation clearly is a relativistic extension to me.
 
  • Like
Likes Demystifier
  • #123
Auto-Didact said:
There are (or at least historically there were) collapse interpretations which are completely mathematically equivalent to standard QM.
Where the collapse is objective and due to nonlinear modification of QM? Anyway the whole point of Penrose is to change QM to solve the measurement problem. Surely you wouldn't expect that the change will give you a theory that is equivalent to QM!
 
  • #124
martinbn said:
Where the collapse is objective and due to nonlinear modification of QM?
No, but then again I didn't say that.
martinbn said:
Anyway the whole point of Penrose is to change QM to solve the measurement problem. Surely you wouldn't expect that the change will give you a theory that is equivalent to QM!
Agreed. The point, however, is that this is an intermediate thread, meaning that a very clear distinction should be made between modifications of QM and interpretations of QM, seeing that an objective collapse model can be either.
 
  • #125
Auto-Didact said:
What page/section? Assuming it is from section 5.4,
I meant Sec. 5.2, last paragraph.
 
  • #126
Auto-Didact said:
There are (or at least historically there were) collapse interpretations which are completely mathematically equivalent to standard QM.

I would say "approximately equivalent". Exactly when and how collapse occurs makes a mathematical difference, because collapse eliminates interference terms that in principle contribute to measurable transition probabilities. But the differences are too tiny to be measurable in practice.
 
  • #127
Demystifier said:
I meant Sec. 5.2, last paragraph.
Ah I see. How would that argument relate to the inherent fractal nature of QM paths? (Abbott & Wise 1981)
stevendaryl said:
I would say "approximately equivalent". Exactly when and how collapse occurs makes a mathematical difference, because collapse eliminates interference terms that in principle contribute to measurable transition probabilities. But the differences are too tiny to be measurable in practice.
My point was that the underlying mathematical formalisms of an interpretation with and without collapse are identical; given that all standard QM formalisms are equivalent, we must conclude that if one of them doesn't describe the measurement process, then none of them actually describes the measurement process.

In other words, any mathematical operationalization of a QM measurement into a measurement theory that is consistent with the theoretical formalism of unitary evolution de facto constitutes a modification of standard QM; the operator-algebra formalism especially seems to naturally suggest such modifications.
 
  • #128
Auto-Didact said:
My point was that the underlying mathematical formalisms of an interpretation with and without collapse are identical; given that all standard QM formalisms are equivalent, we must conclude that if one of them doesn't describe the measurement process, then none of them actually describes the measurement process.
Collapse and No-collapse give different results in the Frauchiger-Renner experiment.
 
  • #129
DarMM said:
Collapse and No-collapse give different results in the Frauchiger-Renner experiment.
Didn't we establish earlier that the FR theorem referred to the inconsistency of having subjective and objective collapse within one interpretation? Such an inconsistency says absolutely nothing about the consistency or inconsistency of objective and subjective collapse between two different interpretations.
 
  • #130
Auto-Didact said:
Didn't we establish earlier that the FR theorem referred to the inconsistency of having subjective and objective collapse within one interpretation? Such an inconsistency says absolutely nothing about the consistency or inconsistency of objective and subjective collapse between two different interpretations.
No, it refers to an inconsistency in Subjective Collapse, although it can be avoided, so it is more an inconsistency in non-perspectival Subjective Collapse models. Separately from that, it shows a difference in predictions between Objective Collapse and No-collapse.
 
  • #131
DarMM said:
No, it refers to an inconsistency in Subjective Collapse, although it can be avoided, so it is more an inconsistency in non-perspectival Subjective Collapse models. Separately from that, it shows a difference in predictions between Objective Collapse and No-collapse.
The controversy surrounding the theorem among experts is enough to disregard it for the moment. I'll read up on it later and come back to this.
 
  • #132
Auto-Didact said:
The controversy surrounding the theorem among experts is enough to disregard it for the moment. I'll read up on it later and come back to this.
I don't think there is much of a controversy, to be honest; most of the literature seems to agree with it broadly, so I don't know if it is valid to reason along those lines.

There is a common objection, but it is nullified by a modification by Luis Masanes; see this paper of Healey's: https://arxiv.org/abs/1807.00421

Even ignoring Masanes's modification, the common objection (intervention sensitivity) is true, but what it implies is very strange; again, see Healey's paper.

Current discussion is on Masanes's version, but there seems to be a deep conflict there between pure unitary QM and Relativity:
http://philsci-archive.pitt.edu/15373/
http://philsci-archive.pitt.edu/15357/
 
  • Like
Likes akvadrako
  • #133
Auto-Didact said:
Ah I see. How would that argument relate to the inherent fractal nature of QM paths? (Abbott & Wise 1981)
I don't see any relation between those two things. Why do you think that they might be related?
 
  • #134
DarMM said:
Current discussion is on Masanes's version, but there seems to be a deep conflict there between pure unitary QM and Relativity:
http://philsci-archive.pitt.edu/15373/
http://philsci-archive.pitt.edu/15357/

I think all Shan Gao is showing is the conflict between superluminal effects and special relativity. What he's saying seems straightforward: if Bob makes a measurement, it will collapse Alice's system and she will repeatedly measure the same value (+1 or -1) after each reset performed by a superobserver. And if Bob waits until after all of her measurements, she'll get random answers each time.

But unitary QM doesn't have instantaneous collapse or other superluminal dynamics; decoherence propagates at ≤ c.
 
  • Like
Likes eloheim
  • #135
akvadrako said:
I think all Shan Gao is showing is the conflict between superluminal effects and special relativity.

But unitary QM doesn't have instantaneous collapse or other superluminal dynamics; decoherence propagates at ≤ c.
Let's say in Many Worlds, how is the agreement between Alice's and Bob's measurements explained purely locally? I agree the dynamics are local, but the states are not.
 
  • #136
DarMM said:
Let's say in Many Worlds, how is the agreement between Alice's and Bob's measurements explained purely locally? I agree the dynamics are local, but the states are not.

I don't see how local dynamics can lead to non-local states, but I think it's a consequence of considering gauge-equivalent states to be physically equivalent. At least most of the papers I've seen that aim to show unitary QM is local work in the Deutsch-Hayden picture, and that feature of gauge-sensitivity is not in debate.

My understanding of a basic measurement of entangled qubits is like this: Alice and Bob make their independent measurements in whatever basis. This causes them to split locally into two outcomes each, and eventually into two labs/observers each. Now they send their results to Charles. If they measured in different bases, his lab splits into 4 worlds, the magnitude of each given by the Born rule. Tipler (2014) describes this in a bit more detail. I suppose the important aspect is that each world carries with it a "label" so they know how to recombine.
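A toy version of that bookkeeping (just a two-qubit sketch under the stated setup, not Tipler's full construction): take the shared state ##(|00\rangle + |11\rangle)/\sqrt 2##, let Alice measure in the ##Z## basis and Bob in the ##X## basis, and use ##|0\rangle = (|+\rangle + |-\rangle)/\sqrt 2##, ##|1\rangle = (|+\rangle - |-\rangle)/\sqrt 2## for Bob's qubit. Then
$$\frac{|00\rangle + |11\rangle}{\sqrt 2} = \tfrac12\Big(|0\rangle_A|+\rangle_B + |0\rangle_A|-\rangle_B + |1\rangle_A|+\rangle_B - |1\rangle_A|-\rangle_B\Big),$$
so once the local records are brought together and compared, there are four branches, each of weight ##|\pm\tfrac12|^2 = \tfrac14##, which are exactly the Born-rule magnitudes mentioned above; had they measured in the same basis, only two branches would survive.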
 
  • #137
akvadrako said:
I suppose the important aspect is that each world carries with it a "label" so they know how to recombine.
This is probably the best point to focus on.

Multiverse theories with hidden variables, that is, extra variables beyond the wavefunction, can achieve locality, as this label is an additional degree of freedom.

However, in Everettian MWI, where is this label to be found locally? It seems to me it is to be found only in the global state and is not associated with any region. This is what I mean by MWI states being nonlocal.

This point is made more clearly in Travis Norsen's book "Foundations of Quantum Mechanics", though it predates that book.
 
  • #138
DarMM said:
However, in Everettian MWI, where is this label to be found locally?

The labs should contain the information about what measurement was performed, what the results were, and, taken together, the original correlation between their qubits. Somehow this must be contained in the message to Charles, perhaps in gauge degrees of freedom.

There is information that isn't locally accessible, but I don't think this implies some kind of non-classical non-locality. You get the same thing when you encrypt a message and store the key/ciphertext at different locations. You could say the message is stored globally, since neither part by itself contains anything except random data (zero information).
 
  • #139
akvadrako said:
The labs should contain the information about what measurement was performed, what the results were, and, taken together, the original correlation between their qubits. Somehow this must be contained in the message to Charles, perhaps in gauge degrees of freedom.

There is information that isn't locally accessible, but I don't think this implies some kind of non-classical non-locality. You get the same thing when you encrypt a message and store the key/ciphertext at different locations. You could say the message is stored globally, since neither part by itself contains anything except random data (zero information).
I'm not so sure about this. Remove Charles and let's just have Alice and Bob do a measurement. How does the Alice with the 0 result always meet the Bob with the 0 result? If they split locally along their own devices' bases, shouldn't there be four worlds? A 0 and a 1 Alice are produced locally, and a 0 and a 1 Bob are produced locally; how do they know to interact only with, or end up in the same world as, the matching-result world for the other observer?
 
  • #140
DarMM said:
I'm not so sure about this. Remove Charles and let's just have Alice and Bob do a measurement. How does the Alice with the 0 result always meet the Bob with the 0 result? If they split locally along their own devices' bases, shouldn't there be four worlds? A 0 and a 1 Alice are produced locally, and a 0 and a 1 Bob are produced locally; how do they know to interact only with, or end up in the same world as, the matching-result world for the other observer?

I assume you are asking for something deeper than how the calculation works, as shown in Tipler's paper. I only have my own incomplete idea: first, how do Alice and Bob even know which basis is up? It must be because they have a shared reference frame. I imagine the measurement basis represents a new dimension, defined relative to that reference. When the "split" happens, the copies of Alice/Bob diverge into that new dimension. So when they come into contact, the matching subsystems will overlap.
 
