How do entanglement experiments benefit from QFT (over QM)?

Summary:
Entanglement experiments can benefit from Quantum Field Theory (QFT) due to its ability to incorporate relativistic effects, which are crucial when reference frames impact outcomes. While non-relativistic Quantum Mechanics (QM) suffices for many entanglement scenarios, QFT is necessary for processes involving particle creation and annihilation, particularly in high-energy contexts. Discussions highlight that QFT is often implicitly used in quantum optics, even if not explicitly referenced in entanglement experiments. The consensus is that while QFT provides a more comprehensive framework, the fundamental aspects of entanglement remain consistent across both QM and QFT. Understanding the interplay between relativity and quantum mechanics is essential for addressing questions about causality and information exchange in entangled systems.
  • #91
RUTA said:
QM is just the non-relativistic limit of QFT (Zee has a nice derivation of the SE from the KG equation for example). So, if you have an experiment that is properly analyzed using QM, then your experiment has been analyzed using QFT. Thinking that QFT can somehow resolve the mystery of entanglement found in QM is to say, “I think QM can resolve the mystery of entanglement found in QM.”
The Klein-Gordon equation is not QFT!
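For reference, the derivation RUTA alludes to is the standard non-relativistic reduction of the Klein-Gordon equation, i.e. a statement about a relativistic wave equation rather than about field quantization. In sketch: write the field as a slowly varying envelope times the rest-energy phase,
$$\phi(\mathbf{x},t) = e^{-imc^2 t/\hbar}\,\psi(\mathbf{x},t),\qquad |i\hbar\,\partial_t\psi| \ll mc^2|\psi|.$$
Inserting this into ##\left(\frac{1}{c^2}\partial_t^2 - \nabla^2 + \frac{m^2c^2}{\hbar^2}\right)\phi = 0##, the rest-energy terms cancel and the ##\partial_t^2\psi/c^2## term is negligible, leaving the free Schrödinger equation
$$i\hbar\,\partial_t\psi = -\frac{\hbar^2}{2m}\nabla^2\psi.$$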
 
  • Like
Likes vanhees71 and bhobba
  • #92
Haelfix said:
Quantum field theory is a special case of quantum mechanics, not vice versa. It is the usual quantum mechanics of a special kind of object, namely fields.
Whenever QM and QFT are contrasted, as in this thread, QM refers to the case of finitely many degrees of freedom, while QFT (both relativistic and nonrelativistic) refers to the case of infinitely many degrees of freedom. These differ a lot in their properties. A non-relativistic limit does not change QFT into QM in this sense.

Moreover, many arguments in the foundations depend on things being exact, hence do not survive when limits are involved.

Finally, in QFT, position is a parameter, not an operator, which changes a lot of the foundational aspects. For example, this is the reason why there is no useful QFT version of Bohmian mechanics.

Thus foundations look quite different from the perspectives of QFT and QM.
 
Last edited:
  • Like
Likes vanhees71, Auto-Didact and PeterDonis
  • #93
atyy said:
And the detector too, and the observer :) Which means we have to include the observer in the wave function :) Which means MWI :)

Cheeky boy.:woot::woot::woot:. Really enjoying this discussion BTW. But I still think removing correlations from discussions of locality makes things a lot easier. Even in ordinary relativity you have to have some way of handling it for it to make sense. It can't be used to sync clocks so that's one way out, probably others as well. I just think not worrying about locality in the context of correlations is the easiest.

For Bell it doesn't actually change anything except how you look at it. It shows that in QM the statistical nature of correlations is different from the classical case. But if you want it to be the same you have to introduce the concept of non-locality into correlations. To me doing that is just making a stick to whack yourself with, and we end up with a massive amount of dialogue regarding what it means - some valid, but much of it nonsense - even from people who should know better. We get a lot of papers here, proper peer-reviewed ones, that are really misunderstandings of weak measurements - that's probably the main one - but misunderstanding Bell is up there as well. That's why I generally link to Bell's initial paper with Bertlmann's socks before discussing it.
https://hal.archives-ouvertes.fr/jpa-00220688/document
Keep going - this is really interesting.

Thanks
Bill
 
  • #94
DarMM said:
That's the detection event, not the correlations.
Indeed, as I have stressed for years, the detection event is not the cause of the correlations; the preparation in an entangled state is (I guess you refer to the correlations described by entanglement). In all experiments I know, the preparation in an entangled state ultimately traces back to some preparation due to local interactions, though you can entangle far-distant pieces of a larger system that have never locally interacted (entanglement swapping). That, however, is also due to a selection based on local manipulations of other parts of the system. After all, everything causal is somehow due to local interactions, i.e., the same trick that makes classical relativistic physics local, namely the description by fields, also makes the quantum description local, namely through local (microcausal) relativistic QFTs.
 
  • #95
RUTA said:
My statement stands, it is exactly correct. QM is the non-relativistic limit of QFT, so if your experiment has been properly analyzed using QM, it has been properly analyzed using QFT. The same is true of Newtonian mechanics and special relativity (SR). If you analyze an experiment correctly using Newtonian mechanics, then your experiment is amenable to this non-relativistic limit of SR, so you have just used SR to analyze the experiment.
Nonrelativistic QT is an approximation of relativistic QFT, valid under certain assumptions. Whether nonrelativistic QT is applicable depends on the accuracy to which you check it, i.e., on whether you resolve the relativistic corrections. E.g., the hydrogen atom spectrum as treated in QM 1 (neglecting relativity as well as the magnetic moment of the electron) is pretty accurate, but you see fine structure, hyperfine structure, and radiative corrections like the Lamb shift when looking closer. The relativistic theory so far has not been disproven. On the contrary, it's among the best confirmed theories ever.
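For orientation on the size of these corrections (a standard textbook expansion, quoted here as a sketch): the hydrogen levels including fine structure read
$$E_{n,j} \simeq -\frac{m_e c^2\alpha^2}{2n^2}\left[1 + \frac{\alpha^2}{n^2}\left(\frac{n}{j+\tfrac{1}{2}} - \frac{3}{4}\right) + \mathcal{O}(\alpha^4)\right],$$
so the relativistic corrections enter at relative order ##\alpha^2 \approx 5\times10^{-5}##, with the Lamb shift and hyperfine splitting smaller still.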
 
  • #96
RUTA said:
We’re talking about experiments done and analyzed accurately using QM, yes, certainly that means the predictions match the experimental outcomes. Only a fool would claim otherwise and I’m not a fool. My statement stands.
When it comes to the foundations we discuss here, i.e., the compatibility of Einstein causality with QT, you must of course argue with the relativistic theory, since Einstein causality is certainly invalid in non-relativistic physics (quantum as well as classical). Whenever photon Fock states are involved, we must also use the relativistic theory, at least for the photons. There is no non-relativistic description of photons.

Of course you are right that for much of QT it's good enough to use the non-relativistic description like atomic/molecular physics for not too large ##Z## and much of solid-state physics.
 
  • #97
Haelfix said:
Quantum field theory is a special case of quantum mechanics, not vice versa. It is the usual quantum mechanics of a special kind of object, namely fields. This is really manifest when you study the worldline formalism of QFT (which is equivalent to standard perturbative second-quantization methods).
https://ncatlab.org/nlab/show/worldline+formalism
To answer the OP: the reason entanglement was rarely discussed in the context of QFT (until about fifteen years ago) is that there were significant technical challenges, and the whole formalism manifestly hides most of the entanglement structure. Indeed, it might come as quite a shock, but the entanglement between two field modes is, in general, so strong that the entanglement entropy is UV divergent. For an introduction:

https://arxiv.org/abs/1803.04993
For the purposes of the endless interpretation and foundations-of-QM questions, I really don't think there is anything to be gleaned from phrasing things in the more challenging language. Relativistic QFT is, by construction, formulated precisely in such a way as to ensure that field operators commute within spacelike-separated regions. So the dynamical laws must satisfy this constraint. Exactly how and what a 'measurement' does to break this is of course up to everyone's favorite interpretation, but it's not clear to me what you gain from speaking in the technically more challenging language...
I don't know whether this is a misunderstanding of words again, but QM is a very small part of QT, namely the non-relativistic first-quantization formalism, i.e., the quantization of non-relativistic point-particle mechanics, using position, momentum (and spin) as the fundamental operators representing the observable algebra. This also implies that you work with a fixed number of particles.

QFT is most comprehensive. In the non-relativistic case ("second-quantization formalism") with Hamiltonians that do not include particle-number-changing interaction terms, it's equivalent to the first-quantization formalism. Even in the non-relativistic case, QFT is much more versatile in building effective models for many-body systems. If you read a modern condensed-matter textbook, you'll see that the art usually is to find the right effective degrees of freedom (usually describing collective phenomena) and treat them as a weakly interacting gas, leading to a quasi-particle description. Usually the quasi-particle number is not conserved, and that's why you use QFT. An example is lattice vibrations of solids (the quasi-particles are called phonons).

In the relativistic case, so far only QFT has been used successfully. The reason is simply that in reactions of particles at relativistic energies you usually open channels where particles can be annihilated and/or new ones created. That's most conveniently described as a QFT.

In this sense QM is a proper subset of QFT, and as Witten stresses in his article, entanglement comes automatically. This is also not a surprise, since the necessity of symmetrizing/antisymmetrizing product states for indistinguishable bosons/fermions is already built into the theory from the very beginning by imposing commutation/anticommutation relations for the field operators. You don't need to go into these very special mathematical details to see this.

Even in non-relativistic QM, entanglement is the rule rather than something exceptional. Already the description of two interacting particles (distinguishable or not doesn't matter) leads to entanglement. What are disentangled are the center-of-mass and relative coordinates, describing the free motion of the two-body system as a whole and the relative motion in terms of a quasi-particle (with the reduced mass as its mass) moving in an external potential given by the two-body interaction potential. Transforming back to the coordinates of the original particles shows that you have an entangled state with respect to these observables. For the hydrogen atom, this has been nicely discussed in

https://doi.org/10.1119/1.18977
https://arxiv.org/abs/quant-ph/9709052
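A minimal way to see the two-body statement above: in terms of the center-of-mass and relative coordinates
$$\mathbf{R} = \frac{m_1\mathbf{r}_1 + m_2\mathbf{r}_2}{m_1+m_2},\qquad \mathbf{r} = \mathbf{r}_1 - \mathbf{r}_2,$$
an energy eigenstate factorizes, e.g. for hydrogen
$$\Psi(\mathbf{r}_1,\mathbf{r}_2) = e^{i\mathbf{K}\cdot\mathbf{R}}\,\psi_{n\ell m}(\mathbf{r}),$$
which is a product state in ##(\mathbf{R},\mathbf{r})## but not of the form ##f(\mathbf{r}_1)g(\mathbf{r}_2)## in the original coordinates, i.e. the electron and proton positions are entangled; the references above discuss this in detail for hydrogen.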
 
Last edited:
  • #98
vanhees71 said:
I don't know whether this is a misunderstanding of words again, but QM is a very small part of QT, namely the non-relativistic first-quantization formalism, i.e., the quantization of non-relativistic point-particle mechanics, using position, momentum (and spin) as the fundamental operators representing the observable algebra.
This isn't a misunderstanding of just words: it is a conceptual difference coming from a difference of approach, namely physics as an empirical science (e.g. the perspective of experimental/applied physics) vs. physics as (the purest form of) applied mathematics (e.g. the perspective of theoretical/mathematical physics). Your perspective doesn't require that physical theories also be proper theories within pure mathematics.

The worldline formalism is the result of a research programme from pure mathematics and/or mathematical physics which directly implies that QFT and string theory are basically different manifestations of the same underlying mathematical theory, with QFT being the limit where a brane is reduced to a single point, i.e. a 0-dimensional particle, and string theory the limit where the brane is reduced to a 1-dimensional string, etc.
 
  • Like
Likes dextercioby and vanhees71
  • #99
Sure, you can always try to find even more comprehensive theories, of which QFT is again an approximation, but it's clear that QFT is more comprehensive than QM, which is a special case.

The only problem with your claim that QFT is simply a special case of string theory is that there seems to be no string theory providing the Standard Model as a limit (or has this changed over the years?).
 
  • #100
It is important to understand that what I'm saying about QFT and string theory being different manifestations of the same mathematical theory isn't a statement from physics, but from mathematics - specifically from a more sophisticated branch of mathematics which underlies both the theory of complex analysis and the theory of partial differential equations.

In other words, the statements are independent of whether string theory is correct physics, e.g. whether it reproduces the Standard Model; they are mathematics-based, theory-independent statements about physics, and as such apply to all possible (both true and false) physical theories.

To answer your question more directly - being a constructivist - the generalization of the worldline formalism into worldvolumes is evidence to me that conceptually, and therefore mathematically, QFT = string theory, i.e. string theory can at best - exactly like QFT - only be an EFT, and they are therefore both incapable of serving as a foundation of physics. I spoke about this here and more at length in https://www.physicsforums.com/threads/on-fundamental-theories-in-physics.976173/, which unfortunately is not viewable anymore.

The completion of the constructive QFT programme is IMO our only hope of finding a new theory capable of dethroning QM as the foundation of physics, as well as unifying GR with QT; the mathematics involved in discovering and formulating string theory certainly helps theorists indirectly in the search for this new fundamental theory, but string theory itself as a physical theory isn't a solution, nor does it directly help to find one.
 
  • Like
Likes vanhees71
  • #101
vanhees71 said:
Indeed, as I have stressed for years, the detection event is not the cause of the correlations; the preparation in an entangled state is (I guess you refer to the correlations described by entanglement).
Let me explain it this way. Tsirelson and Landau showed that the Bell inequality violations come from only the two axes actually measured by Alice and Bob having values; values along all other axes are undefined, as you know. That's why we can have such strong non-classical correlations. Intrinsically random variables, where after measurement only the variables you measured have well-defined values, can sustain stronger correlations than classical theories (even stochastic ones), because in classical theories all variables take on well-defined values (even if those values are randomly generated).

However, many people find this odd, because how can nature intrinsically care about "measurement"? So they prefer to investigate other ways of generating correlations that strong.
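As a concrete numerical illustration (a minimal numpy sketch of the textbook singlet calculation, not tied to any particular experiment or to the Tsirelson-Landau argument itself), the CHSH combination for a spin singlet exceeds the classical bound of 2:

Python:
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def spin(n):
    # Spin observable along the unit vector n = (nx, ny, nz); eigenvalues +/-1
    return n[0] * sx + n[1] * sy + n[2] * sz

# Singlet state |psi> = (|01> - |10>)/sqrt(2)
psi = (np.kron([1, 0], [0, 1]) - np.kron([0, 1], [1, 0])) / np.sqrt(2)

def E(a, b):
    # Correlation <psi| (sigma.a) (x) (sigma.b) |psi>
    return np.real(psi.conj() @ np.kron(spin(a), spin(b)) @ psi)

# Measurement directions giving the maximal quantum violation
a, ap = (1, 0, 0), (0, 1, 0)
b = (1 / np.sqrt(2), 1 / np.sqrt(2), 0)
bp = (1 / np.sqrt(2), -1 / np.sqrt(2), 0)

S = E(a, b) + E(ap, b) + E(a, bp) - E(ap, bp)
print(abs(S))  # ~2.828 = 2*sqrt(2), above the classical CHSH bound of 2

For the singlet this reproduces ##E(\mathbf{a},\mathbf{b}) = -\mathbf{a}\cdot\mathbf{b}## and hence ##|S| = 2\sqrt{2}##, the Tsirelson bound.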
 
Last edited:
  • Like
Likes bhobba, vanhees71 and Auto-Didact
  • #102
Well, that perfectly expresses my statement that we simply have to accept what our observation of nature has told us: she behaves not according to classical theories (even stochastic ones) but according to quantum theory, including the stronger-than-classically-possible correlations confirmed by experiment. You may whine as much as you like about the loss of the "classical comfort zone", but nature doesn't care ;-)).
 
  • Like
Likes DarMM
  • #103
The classical/quantum dichotomy is a red herring: researchers aren't so much calling for a return to the 'classical comfort zone' as for an even further departure from classicality than QT, albeit a departure which does have a constructive basis capable of offering an explanation in terms of a mechanism. The reason people make the strawman argument that wanting a mechanism is a call back to classical physics is that classical physics also happened to have such a constructive basis: (real) analysis.

There is no reason whatsoever to think that finding a more comprehensive constructive basis for QT is impossible; on the contrary, the failure to directly formulate a GR-based QFT is sufficient evidence that searching for such a basis - more comprehensive than that offered by classical physics' real analysis - is not a mere matter of academic luxury, but a logical necessity.

This search has certainly not been in vain, for there have definitely been new offerings of such constructive bases, e.g. non-commutative geometry, n-category theory, and the sheaf-theoretic characterization of non-locality I have spoken about. The problem is that these constructive bases tend to be too complicated for the average theorist to fit adequately into the correct place during theory construction, especially if the theorist forgoes foundational research methodology; this leaves theorists stranded, incapable of seeing the forest for the trees.
 
  • Like
Likes julcab12
  • #104
vanhees71 said:
if and where and when a photon detection occurs on your screen or CCD cam is random
Is there something random going on when a detection does not occur? I know it's a philosophical question that you find irrelevant, but that is one of the things one wants to understand with a mechanism.

vanhees71 said:
is not magic at all but due to the interaction of the field with the detector electrons, all described by QFT.
In a similar way, I could introduce a rabbit creation operator that creates a rabbit from the vacuum whenever the magician puts his hand in the previously empty hat. With a little bit of work, I could make this theory compatible with all observations by the spectators in the audience. Would you say that with such a theory there is no magic at all because it is described by the theory?
 
  • #105
A. Neumaier said:
The Klein-Gordon equation is not QFT!

That’s semantics; you can argue that with Zee.
 
  • #106
vanhees71 said:
Nonrelativistic QT is an approximation of relativistic QFT, valid under certain assumptions. Whether nonrelativistic QT is applicable depends on the accuracy to which you check it, i.e., on whether you resolve the relativistic corrections. E.g., the hydrogen atom spectrum as treated in QM 1 (neglecting relativity as well as the magnetic moment of the electron) is pretty accurate, but you see fine structure, hyperfine structure, and radiative corrections like the Lamb shift when looking closer. The relativistic theory so far has not been disproven. On the contrary, it's among the best confirmed theories ever.

Right, but the OP was asking specifically about the mystery of entanglement in experiments analyzed accurately with QM (yes, even when using photons). So, my point is simple: In any theory of physics that may or may not make correspondence with a more general theory, whenever you do an experiment that is accurately analyzed with that theory (accurate in experimental terms, not to be confused with precise), there is nothing more the general theory can add — that’s what correspondence means. If there was something amiss between the experimental outcomes and theoretical predictions, i.e., the theory failed to analyze it accurately, then that would point to something missing in the approximate theory that requires the more general version. But, that is not at all the case with the experiments accurately analyzed with QM that violate Bell’s inequality for example. Therefore, in such experiments when someone says, “You need to use QFT to understand the mysterious outcomes of that QM experiment,” they are saying, “You need to use QM to understand the mysterious outcomes of that QM experiment.” Which brings us right back to where we started.
 
  • Like
Likes DrChinese and Auto-Didact
  • #107
I apologize to PeterDonis for my lack of civility in an earlier post. That was absolutely uncalled for.
 
  • Like
Likes Auto-Didact
  • #108
vanhees71 said:
Well, that perfectly expresses my statement that we simply have to accept what our observation of nature has told us: she behaves not according to classical theories (even stochastic ones) but according to quantum theory, including the stronger-than-classically-possible correlations confirmed by experiment. You may whine as much as you like about the loss of the "classical comfort zone", but nature doesn't care ;-)).
I think the interesting thing is the precise form of "classical comfort zone" we're losing here.

People consider nonlocality, multiple worlds, retrocausality so they're not afraid of strange ideas. It's the fact that measurement actually matters. Only the variables subjected to measurement have defined values.

From decoherence studies we know "measurement" involves anything undergoing decoherence. So that makes it a little less weird. Still though that just makes it "only those variables that get coupled to the classical world have values".
 
  • Like
Likes julcab12
  • #109
DarMM said:
I think the interesting thing is the precise form of "classical comfort zone" we're losing here.

People consider nonlocality, multiple worlds, retrocausality so they're not afraid of strange ideas. It's the fact that measurement actually matters. Only the variables subjected to measurement have defined values.

From decoherence studies we know "measurement" involves anything undergoing decoherence. So that makes it a little less weird. Still though that just makes it "only those variables that get coupled to the classical world have values".

I agree, there is nothing in decoherence that resolves the mystery of quantum correlations.

"Only the variables subjected to measurement have defined values." And that makes it look like measurement brings reality into existence. That wouldn't necessarily be troubling except we have the quantum correlations to explain, so this "bringing-reality-into-existence mechanism" acts ... nonlocally? Or, ... retrocausally? Or, ... ?
 
  • Like
Likes DrChinese
  • #110
But, I'm getting off topic and into the mystery of quantum entanglement in general. Here is what Dr. Chinese asked originally:

A number of posters have asserted that Quantum Field Theory (QFT) provides a better description of quantum entanglement than the non-relativistic Quantum Mechanics. Yet I don't see QFT references in experimental papers on entanglement. Why not?

My answer, as I posted earlier, is that QM is the quantum formalism used to successfully model those experiments. That is, the experimentalists are calling the theory that successfully maps onto their experiments "QM." Of course, there are all kinds of quantum formalisms, so maybe we should just use the term "quantum theory" to refer to the entire collection. [Mermin uses "quantum mechanics" for the entire collection, but I think that would be confusing.]

If you're using a formalism of quantum theory for an experiment and it doesn't match the outcome, then you've chosen the wrong formalism. The question would then be, "Is there some other quantum formalism that does map to the experiment?" If the answer is "yes," then there is something about the formalism you used that doesn't apply to the circumstances of the experiment. In that case, you need to find and use the formalism that does apply. The mysterious quantum entanglement experiments are not of this type, since the formalism (whatever you call it) does indeed map beautifully to the experiments.

If the answer is "no," then we need a new theory altogether. That situation doesn't apply to the OP, as I read it.
 
  • Like
Likes DrChinese
  • #111
vanhees71 said:
Indeed, as I have stressed for years, the detection event is not the cause of the correlations; the preparation in an entangled state is (I guess you refer to the correlations described by entanglement). ... After all, everything causal is somehow due to local interactions, i.e., the same trick that makes classical relativistic physics local, namely the description by fields, also makes the quantum description local, namely through local (microcausal) relativistic QFTs.

How can it be both causal/local AND quantum nonlocal (i.e. those nonlocal correlations, as you call them)? If by local you mean microcausal, then you are not following standard terminology. Saying something is microcausal is meaningless when talking about entanglement, because entanglement does not violate signal locality anyway. So why mention that?

You clearly acknowledge that the classical ideas of entanglement cannot be maintained post Bell, and yet you claim that entanglement outcomes are not dependent on the settings of measurement devices that are distant from each other. Why don't you just say that they are, rather than deny that measurements are a factor?
 
Last edited:
  • Like
Likes Auto-Didact
  • #112
RUTA said:
I agree, there is nothing in decoherence that resolves the mystery of quantum correlations.

"Only the variables subjected to measurement have defined values." And that makes it look like measurement brings reality into existence. That wouldn't necessarily be troubling except we have the quantum correlations to explain, so this "bringing-reality-into-existence mechanism" acts ... nonlocally? Or, ... retrocausally? Or, ... ?
I agree, if reality is being created then any description of what is going on in that creation must be retrocausal, etc. Heisenberg did seem to argue along the lines of this creation, with his idea of potentia becoming facts in measurements.

Bohr however seemed to go along the lines of the microscopic being inconceivable and that a measurement was when that inconceivable stuff "bubbled up" to leave traces at our scale. We can describe the effects on our scale with QM, but not the microscopic itself. So he didn't think reality was being created in a literal sense. To him the variables were labels for classical effects. So only the effect you provoked has a defined value. That you can't combine effects (complementarity) was just a consequence of the microscopic being beyond thought.

So Bohr escapes the need for retrocausality, etc by taking the route of the microscopic being transcendent. The problems people have with that should be clear enough.
 
  • Like
Likes Auto-Didact
  • #113
It doesn't have to be 100% "retro-causal" or "transcendent" does it? Aren't those just labels we have often applied to things that are in the moment uncomfortably mysterious?

I've heard string folks talk about the "Bulk" in the abstract. Well, if there is a real "Bulk" of the kind they seem to suggest, where everything but gravity is off limits but gravity definitely goes, then that sounds like a pretty seriously a-causal, semi-transcendent situation. I say a-causal (and also a-temporal) because isn't "gravity" just space-time curvature, and isn't space-time curvature the sole driver of "proper time" - whatever "proper time" is... physically?

Personally I like to use "differential ageing" (or maybe "pime" aka "physical time") because I don't see any good argument that "time" actually exists - so abandoning the Newtonian fantasy of it makes me slightly less uncomfortable.

To the question of the OP: whatever new constructive models get built, it seems to me they need to account for the way ubiquitous entanglement is drawing what we can only call "random numbers" between very specific space-like separated events.

We've isolated the phenomenon in experiments but it's happening all the time everywhere and certainly has some fundamental relationship to the everyday and everywhere stress-energy tensor. Got to be more we can learn about how it goes in the many-body and complex systems case.

I mean, isn't that why all the hubbub in the condensed-matter domain - re phonons and a plethora of gauge theories? IOW, is it because QFT is better at digging into the many-body QM problem?
 
Last edited:
  • Like
Likes julcab12 and *now*
  • #114
Jimster41 said:
It doesn't have to be 100% "retro-causal" or "transcendent" does it? Aren't those just labels we have often applied to things that are in the moment uncomfortably mysterious?

I can't speak for DarMM, but certainly I didn't imply that those are the only two options for understanding entanglement. In fact, there are many. Sorry if my post caused you to infer otherwise.

Jimster41 said:
I've heard string folks talk about the "Bulk" in the abstract. Well, if there is a real "Bulk" of the kind they seem to suggest, where everything but gravity is off limits but gravity definitely goes, then that sounds like a pretty seriously a-causal, semi-transcendent situation. I say a-causal (and also a-temporal) because isn't "gravity" just space-time curvature, and isn't space-time curvature the sole driver of "proper time" - whatever "proper time" is... physically?

Personally I like to use "differential ageing" (or maybe "pime" aka "physical time") because I don't see any good argument that "time" actually exists - so abandoning the Newtonian fantasy of it makes me slightly less uncomfortable.

This is getting off topic for this thread, but we deal with that issue in chapters 7 & 8 of our book, "Beyond the Dynamical Universe: Unifying Block Universe Physics and Time as Experienced." Even though modern physics is best accounted for by using constraints in a block universe (the first six chapters make that argument), our dynamical experience of time (Passage, Presence, and Direction) is not an "illusion," as some have argued (e.g., Brian Greene's video The Illusion of Time). I will have to leave it at that here, since it's too far off topic for this thread.
 
  • Like
Likes Jimster41
  • #115
Jimster41 said:
It doesn't have to be 100% "retro-causal" or "transcendent" does it?
As @RUTA said, no, those are certainly not the only options. I wrote "retrocausal etc" as I got tired of writing the complete list. I've given a complete list a few times on this forum. Just search for "superdeterminism" and my username and you'll find it.
 
  • #116
RUTA said:
Right, but the OP was asking specifically about the mystery of entanglement in experiments analyzed accurately with QM (yes, even when using photons). So, my point is simple: In any theory of physics that may or may not make correspondence with a more general theory, whenever you do an experiment that is accurately analyzed with that theory (accurate in experimental terms, not to be confused with precise), there is nothing more the general theory can add — that’s what correspondence means. If there was something amiss between the experimental outcomes and theoretical predictions, i.e., the theory failed to analyze it accurately, then that would point to something missing in the approximate theory that requires the more general version. But, that is not at all the case with the experiments accurately analyzed with QM that violate Bell’s inequality for example. Therefore, in such experiments when someone says, “You need to use QFT to understand the mysterious outcomes of that QM experiment,” they are saying, “You need to use QM to understand the mysterious outcomes of that QM experiment.” Which brings us right back to where we started.
Well, you have to analyze an experiment with a theory (or model) that is valid for that experiment. There's no way to analyze an experiment involving photons with non-relativistic QM alone, since photons cannot be described non-relativistically at all. What you can often describe non-relativistically is the matter involved in the experiment, since large parts of atomic, molecular, and solid-state physics can be described by non-relativistic quantum mechanics or even classical mechanics.

Another point is fundamental issues with Einstein causality, which cannot be analyzed using non-relativistic theory at all, since the question of whether causal effects propagate faster than light is irrelevant for non-relativistic physics to begin with. Since in Newtonian physics actions at a distance are the usual way to describe interactions, you cannot expect the causality structure of relativistic spacetime to be respected. So finding violations of Einstein causality in non-relativistic approximations is not a surprise but is built in from the beginning.

Of course, entanglement itself is independent of whether you use relativistic or non-relativistic QT to describe it.
 
  • Like
Likes bhobba and *now*
  • #117
DarMM said:
I think the interesting thing is the precise form of "classical comfort zone" we're losing here.

People consider nonlocality, multiple worlds, retrocausality so they're not afraid of strange ideas. It's the fact that measurement actually matters. Only the variables subjected to measurement have defined values.

From decoherence studies we know "measurement" involves anything undergoing decoherence. So that makes it a little less weird. Still though that just makes it "only those variables that get coupled to the classical world have values".
Well, the problem is popular-science books trying to be sensational in order to sell rather than providing a true picture of science, which is exciting enough in itself. The reason is that good popular-science writing is among the most difficult tasks ever.

You indeed quote the most abused buzzwords of the popular-science literature with respect to QT:

"Nonlocality": It's even difficult to understand locality vs. nonlocality among physicists in full command of the necessary mathematical equipment to describe it. In contemporary physics everything is described on the most fundamental level by relativistic local QFT. So by construction there are no nonlocal interactions on a fundamental level. What's in a sloppy sense "non-local" in QT are the strong correlations described by entanglement which can refer to parts of a quantum system that are measured (by local interactions with measurement devices!) on far-distant (space-like separated) parts of this systems. It would be much more clear to call this "inseparability" as Einstein did in his own version of the EPR paper, which is much more to the point than the EPR paper itself. The conclusion is: There's no contradiction whatsoever between "local interactions" and "inseparability".

"Multiple worlds:" This is just part of the "quantum esoterics" subculture. Since the "multiple worlds" of Everett's relative-state interpretation are unobservable and just ficitions of popular-science writers. There's not much to say about it in a scientific context.

"Retrocausality:" There's nothing retrocausal. It's mostly referring to "delayed-choice setups". It's just the selection of partial ensembles, which can be done in principle at any time after the experiment is finished providing one has the necessary information within the stored measurement protocols. Accepting the probabilistic quantum description of states and the implied correlations through entanglement, no retrocausality is left. Everything is understandable from the causal history of the experimental setup, involving the initial-state interpretation and the measurements done on the system.

Another source of confusion and weirdness comes from the sloppy statement that "only the variables subjected to measurement have defined values". It's important to distinguish clearly between "preparation" and "measurement", though of course state preparation also involves some measurements. The correct statement is that the state of the system implies which observables take determined values, i.e., which, when measured, always (i.e., with 100% probability) lead to one specific outcome. For all other observables the measurement leads to a random result, with probabilities given by the state the system is prepared in.

There is no "classical vs. quantum world". The classical behavior of macroscopic systems is just due to sufficient coarse graining, looking at macroscopic "relevant" observables. There's thus no contradiction between quantum dynamics and the apparent classical dynamics of these relevant observables. Of course, decoherence is the key mechanism, and it's hard to avoid concerning macroscopic systems.
 
  • Like
Likes physicsworks, julcab12, Hans de Vries and 3 others
  • #118
DrChinese said:
How can it be both causal/local AND quantum nonlocal (i.e. those nonlocal correlations, as you call them)? If by local you mean microcausal, then you are not following standard terminology. Saying something is microcausal is meaningless when talking about entanglement, because entanglement does not violate signal locality anyway. So why mention that?

You clearly acknowledge that the classical ideas of entanglement cannot be maintained post Bell, and yet you claim that entanglement outcomes are not dependent on the settings of measurement devices that are distant from each other. Why don't you just say that they are, rather than deny that measurements are a factor?
The problem with the word "nonlocality" is its sloppy use, even in the scientific literature. In relativistic physics all successful theories are local. That is the very reason why the field concept was developed by Faraday, who of course didn't know about relativity in his time, but indeed the field concept turned out to be crucial for formulating relativistically consistent models of interactions. The interactions are described by local laws rather than actions at a distance as in Newtonian physics. This holds true in QT, which is also formulated as QFT, and in the very construction of all successful relativistic QFTs the microcausality constraint is the key point for having both relativistically covariant descriptions and "locality" of interactions. One has to mention it whenever claims about "nonlocality" come up which imply that a measurement on an entangled state at one place would "cause" an immediate change of the system at all far-distant places. You avoid this misunderstanding when using the term "inseparability" rather than "nonlocality" for the strong correlations among far-distant parts of quantum systems described by entanglement.
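For concreteness, the microcausality constraint referred to here is the requirement that local observables built from the field operators commute at spacelike separation,
$$\big[\hat{O}_1(x),\,\hat{O}_2(y)\big] = 0 \quad \text{whenever } x \text{ and } y \text{ are spacelike separated},$$
which guarantees that the outcome statistics in one region cannot be altered by operations performed in a spacelike-separated region, while saying nothing against correlations established by a common preparation.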

Also, you don't read my statements carefully enough. I never claimed that outcomes are independent of the settings of measurement devices. The contrary is true! Everything depends on the specific preparation of the initial state and the specific setup of measurement devices used to probe it. Measurements are due to local interactions of (parts of) the system with measurement devices. The observed strong correlations are due to the initial-state preparation in an entangled state, not due to the local measurement on one part of the system and some mysterious spooky action at a distance on far-distant other parts of the system.
 
  • Like
Likes physicsworks, Hans de Vries, bhobba and 1 other person
  • #119
Retrocausality etc. come about via the rejection of assumptions in the proofs of Bell's theorem and the Kochen-Specker theorem, not in the manner you've stated above.

Also, inseparability is strictly weaker than the non-classical correlations in the CHSH scenario. They're not synonymous.

vanhees71 said:
Another source of confusion and weirdness comes from the sloppy statement that "only the variables subjected to measurement have defined values". It's important to distinguish clearly between "preparation" and "measurement", though of course state preparation also involves some measurements. The correct statement is that the state of the system implies which observables take determined values, i.e., which, when measured, always (i.e., with 100% probability) lead to one specific outcome. For all other observables the measurement leads to a random result, with probabilities given by the state the system is prepared in.
This might be a better way to phrase it, but I feel you make things sound simpler than they are, as your remarks apply equally well to some classical stochastic theories.

First, it needs to be modified with the proviso that the Kochen-Specker theorem shows that observables are associated with phenomena in our devices, not independent properties held by particles. Kemble, as often quoted by Asher Peres, puts it well:
Kemble 1937 said:
"We have no satisfactory reason for
ascribing objective existence to physical quantities as distinguished from the
numbers obtained when we make the measurements which we correlate with
them ... It would be more exact if we spoke of 'making measurements' of
this, that, or the other type instead of saying that we measure this, that, or
the other 'physical quantity.'"
Peres gives good examples of how, even ignoring the KS theorem, inaccuracy in a device meant to measure spin along a given axis means you might really be measuring a POVM that cannot be understood as spin in any direction or really in terms of any classical variable.

However, your phrasing doesn't really distinguish between quantum and classical stochastic theories. How do you explain the fact that in a CHSH scenario, where we have four observables, two for each observer,
$$S_{A1}, S_{A2}, S_{B1}, S_{B2},$$
the outcomes for a round where ##S_{A1}, S_{B1}## are measured are not marginals of a distribution over all four observables?
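One way to make the contrast sharp (a small brute-force sketch, complementary to the singlet calculation earlier in the thread): if a single joint distribution over all four observables existed, the CHSH combination would be a convex mixture of its values on deterministic ##\pm 1## assignments, and no such assignment exceeds 2.

Python:
from itertools import product

# Maximum of the CHSH combination over all deterministic +/-1 assignments
best = max(a1 * b1 + a1 * b2 + a2 * b1 - a2 * b2
           for a1, a2, b1, b2 in product([-1, 1], repeat=4))
print(best)  # 2: any joint distribution obeys |CHSH| <= 2, while QM reaches 2*sqrt(2)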
 
  • Like
Likes Auto-Didact
  • #120
Do you have a reference for this specific CHSH scenario?

Of course, the overarching mathematical edifice here is probability theory, as formulated e.g. with the Kolmogorov axioms. This theory is imho flexible enough to encompass both classical and quantum "stochastic theories", since it does not specify how to choose the probabilities for a specific situation. This choice is of course utterly different in quantum theory and classical statistics, and classical statistics is an approximation of quantum statistics with a limited range of validity. Of course, it cannot describe everything related to EPR, the violation of Bell inequalities, and related issues with correlations described by entanglement, including CHSH in its various forms.

CHSH imho provides no problems within the minimal statistical interpretation. You just do measurements on an ensemble of equally prepared systems with specific measurement setups for each of the correlations you want to measure. Any single experiment is thus consistently described within QT (no matter whether you do an idealized von Neumann filter measurement or some "weak measurement" described by POVMs, which is afaik the most general case).
 
