How do entanglement experiments benefit from QFT (over QM)?

  • #101
vanhees71 said:
Indeed, as I have stressed for years, the detection event is not the cause of the correlations but the preparation in an entangled state (I guess you refer to the correlations described by entanglement).
Let me explain it this way. Tsirelson and Landau showed that the Bell inequality violations come from the fact that only the two axes actually measured by Alice and Bob have definite values; values along all other axes are, as you know, undefined. That's why we can have such strong non-classical correlations. Intrinsically random variables, where after measurement only the variables you measured have well-defined values, can exhibit stronger correlations than classical theories (even stochastic ones), because in classical theories all variables take on well-defined values (even if those values are randomly generated).

However, many people find this odd, because how can nature intrinsically care about "measurement"? So they prefer to investigate other ways of generating correlations that strong.
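To make the strength of these correlations concrete, here is a minimal sketch (assuming ideal spin measurements and the standard singlet-state prediction ##E(a,b) = -\cos(a-b)## for coplanar analyzer angles):

```python
import numpy as np

# Quantum correlation of the spin singlet for analyzers at angles a, b.
def E(a, b):
    return -np.cos(a - b)

# Angle choices that maximize the CHSH combination.
a1, a2 = 0.0, np.pi / 2
b1, b2 = np.pi / 4, -np.pi / 4

S = E(a1, b1) + E(a1, b2) + E(a2, b1) - E(a2, b2)
print(abs(S))          # ~2.828 = 2*sqrt(2), the Tsirelson bound
print(2 * np.sqrt(2))  # models where all variables have values obey |S| <= 2
```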
 
  • Like
Likes bhobba, vanhees71 and Auto-Didact
  • #102
Well, that perfectly expresses my statement that we simply have to accept what our observation of nature has told us: she behaves not according to classical theories (even stochastic ones) but according to quantum theory, including the stronger-than-classically-possible correlations confirmed by experiment. You may whine as much as you like about the loss of the "classical comfort zone", but nature doesn't care ;-)).
 
  • Like
Likes DarMM
  • #103
The classical/quantum dichotomy is a red herring: researchers aren't so much calling for a return to the 'classical comfort zone' as for an even further departure from classicality than QT, but a departure which does have a constructive basis capable of offering an explanation in terms of a mechanism. The reason people make the straw-man argument that wanting a mechanism is a call back to classical physics is that classical physics also happened to have such a constructive basis: (real) analysis.

There is no reason whatsoever to think that finding a more comprehensive constructive basis for QT is impossible; on the contrary, the failure to directly formulate a GR-based QFT is sufficient evidence that searching for such a basis, more comprehensive than that offered by classical physics' real analysis, is not a mere matter of academic luxury but a logical necessity.

This searching has certainly not been in vain, for there have definitely been new offerings of such constructive bases, e.g. non-commutative geometry, n-category theory, and the sheaf-theoretic characterization of non-locality I have spoken about. The problem is that these constructive bases tend to be too complicated for the average theorist to fit into the correct place during theory construction, especially if the theorist forgoes foundational research methodology; this leaves theorists stranded, incapable of seeing the forest for the trees.
 
  • Like
Likes julcab12
  • #104
vanhees71 said:
if and where and when a photon detection occurs on your screen or CCD cam is random
Is there something random going on when a detection does not occur? I know it's a philosophical question that you find irrelevant, but that is one of the things one wants to understand with a mechanism.

vanhees71 said:
is not magic at all but due to the interaction of the field with the detector electrons, all described by QFT.
In a similar way, I could introduce a rabbit creation operator that creates a rabbit from the vacuum whenever the magician puts his hand into the previously empty hat. With a little work, I could make this theory compatible with all observations made by the spectators in the audience. Would you say that with such a theory there is no magic at all, because it is described by the theory?
 
  • #105
A. Neumaier said:
The Klein-Gordon equation is not QFT!

That's semantics; you can argue that with Zee.
 
  • #106
vanhees71 said:
Nonrelativistic QT is an approximation of relativistic QFT, valid under certain assumptions. If nonrelativistic QT is applicable, whether you notice relativistic corrections depends on the accuracy with which you check it. E.g., the hydrogen-atom spectrum as treated in QM 1 (neglecting relativity as well as the magnetic moment of the electron) is pretty accurate, but you see fine structure, hyperfine structure, and radiative corrections like the Lamb shift when looking closer. The relativistic theory has so far not been disproven. On the contrary, it's among the best-confirmed theories ever.

Right, but the OP was asking specifically about the mystery of entanglement in experiments analyzed accurately with QM (yes, even when using photons). So, my point is simple: In any theory of physics that may or may not make correspondence with a more general theory, whenever you do an experiment that is accurately analyzed with that theory (accurate in experimental terms, not to be confused with precise), there is nothing more the general theory can add — that’s what correspondence means. If there was something amiss between the experimental outcomes and theoretical predictions, i.e., the theory failed to analyze it accurately, then that would point to something missing in the approximate theory that requires the more general version. But, that is not at all the case with the experiments accurately analyzed with QM that violate Bell’s inequality for example. Therefore, in such experiments when someone says, “You need to use QFT to understand the mysterious outcomes of that QM experiment,” they are saying, “You need to use QM to understand the mysterious outcomes of that QM experiment.” Which brings us right back to where we started.
 
  • Like
Likes DrChinese and Auto-Didact
  • #107
I apologize to PeterDonis for my lack of civility in an earlier post. That was absolutely uncalled for.
 
  • Like
Likes Auto-Didact
  • #108
vanhees71 said:
Well, that perfectly expresses my statement that we simply have to accept what our observation of nature has told us: she behaves not according to classical theories (even stochastic ones) but according to quantum theory, including the stronger-than-classically-possible correlations confirmed by experiment. You may whine as much as you like about the loss of the "classical comfort zone", but nature doesn't care ;-)).
I think the interesting thing is the precise form of "classical comfort zone" we're losing here.

People consider nonlocality, multiple worlds, and retrocausality, so they're not afraid of strange ideas. What bothers them is the fact that measurement actually matters: only the variables subjected to measurement have defined values.

From decoherence studies we know "measurement" involves anything undergoing decoherence, which makes it a little less weird. Still, that just turns it into "only those variables that get coupled to the classical world have values".
 
  • Like
Likes julcab12
  • #109
DarMM said:
I think the interesting thing is the precise form of "classical comfort zone" we're losing here.

People consider nonlocality, multiple worlds, and retrocausality, so they're not afraid of strange ideas. What bothers them is the fact that measurement actually matters: only the variables subjected to measurement have defined values.

From decoherence studies we know "measurement" involves anything undergoing decoherence, which makes it a little less weird. Still, that just turns it into "only those variables that get coupled to the classical world have values".

I agree, there is nothing in decoherence that resolves the mystery of quantum correlations.

"Only the variables subjected to measurement have defined values." And that makes it look like measurement brings reality into existence. That wouldn't necessarily be troubling except we have the quantum correlations to explain, so this "bringing-reality-into-existence mechanism" acts ... nonlocally? Or, ... retrocausally? Or, ... ?
 
  • Like
Likes DrChinese
  • #110
But, I'm getting off topic and into the mystery of quantum entanglement in general. Here is what Dr. Chinese asked originally:

A number of posters have asserted that Quantum Field Theory (QFT) provides a better description of quantum entanglement than non-relativistic Quantum Mechanics. Yet I don't see QFT references in experimental papers on entanglement. Why not?

My answer, as I posted earlier, is that QM is the quantum formalism used to successfully model those experiments. That is, the experimentalists are calling the theory that successfully maps onto their experiments "QM." Of course, there are all kinds of quantum formalisms, so maybe we should just use the term "quantum theory" to refer to the entire collection. [Mermin uses "quantum mechanics" for the entire collection, but I think that would be confusing.]

If you're using a formalism of quantum theory for an experiment and it doesn't match the outcome, then you've chosen the wrong formalism. The question would then be, "Is there some other quantum formalism that does map to the experiment?" If the answer is "yes," then there is something about the formalism you used that doesn't apply to the circumstances of the experiment. In that case, you need to find and use the formalism that does apply. The mysterious quantum entanglement experiments are not of this type, since the formalism (whatever you call it) does indeed map beautifully to the experiments.

If the answer is "no," then we need a new theory altogether. That situation doesn't apply to the OP, as I read it.
 
  • Like
Likes DrChinese
  • #111
vanhees71 said:
Indeed, as I have stressed for years, the detection event is not the cause of the correlations but the preparation in an entangled state (I guess you refer to the correlations described by entanglement). ... After all, everything causal is somehow due to local interactions, i.e., the same trick that makes classical relativistic physics local, namely the description by fields, also makes the quantum description local, namely through local (microcausal) relativistic QFTs.

How can it be both causal/local AND quantum nonlocal (i.e. those nonlocal correlations, as you call them)? If by local you mean microcausal, then you are not following standard terminology. Saying something is microcausal is meaningless when talking about entanglement, because entanglement does not violate signal locality anyway. So why mention that?

You clearly acknowledge that the classical ideas of entanglement cannot be maintained post-Bell, and yet you claim that entanglement outcomes are not dependent on the settings of measurement devices that are distant from each other. Why don't you just say that they are, rather than deny that measurements are a factor?
 
  • Like
Likes Auto-Didact
  • #112
RUTA said:
I agree, there is nothing in decoherence that resolves the mystery of quantum correlations.

"Only the variables subjected to measurement have defined values." And that makes it look like measurement brings reality into existence. That wouldn't necessarily be troubling except we have the quantum correlations to explain, so this "bringing-reality-into-existence mechanism" acts ... nonlocally? Or, ... retrocausally? Or, ... ?
I agree: if reality is being created, then any description of what is going on in that creation must be retrocausal, etc. Heisenberg did seem to argue along these lines, with his idea of potentia becoming facts in measurements.

Bohr, however, seemed to take the line that the microscopic is inconceivable and that a measurement is when that inconceivable stuff "bubbles up" to leave traces at our scale. We can describe the effects at our scale with QM, but not the microscopic itself. So he didn't think reality was being created in a literal sense. To him the variables were labels for classical effects, so only the effect you provoked has a defined value. That you can't combine effects (complementarity) was just a consequence of the microscopic being beyond thought.

So Bohr escapes the need for retrocausality, etc., by taking the route of the microscopic being transcendent. The problems people have with that should be clear enough.
 
  • Like
Likes Auto-Didact
  • #113
It doesn't have to be 100% "retro-causal" or "transcendent", does it? Aren't those just labels we have often applied to things that are, in the moment, uncomfortably mysterious?

I've heard string folks talk about the "Bulk" in the abstract. Well, if there is a real "Bulk" of the kind they seem to suggest, where everything but gravity is off limits but gravity definitely goes, then that sounds like a pretty seriously a-causal, semi-transcendent situation. I say a-causal (and mean also a-temporal) because isn't "gravity" just space-time curvature, and isn't space-time curvature the sole driver of "proper time" - whatever "proper time" is... physically?

Personally I like to use "differential ageing" (or maybe "pime", aka "physical time"), because I don't see any good argument that "time" actually exists - so abandoning the Newtonian fantasy of it makes me slightly less uncomfortable.

To the question of the OP: whatever new constructive models get built, it seems to me they need to account for the way ubiquitous entanglement is drawing what we can only call "random numbers" between very specific space-like separated events.

We've isolated the phenomenon in experiments, but it's happening all the time, everywhere, and certainly has some fundamental relationship to the everyday and everywhere stress-energy tensor. There's got to be more we can learn about how it goes in the many-body and complex-systems case.

I mean, isn't that why all the hubbub in the condensed-matter domain, re phonons and a plethora of gauge theories? IOW, is it because QFT is better at digging into the many-body QM problem?
 
  • Like
Likes julcab12 and *now*
  • #114
Jimster41 said:
It doesn't have to be 100% "retro-causal" or "transcendent", does it? Aren't those just labels we have often applied to things that are, in the moment, uncomfortably mysterious?

I can't speak for DarMM, but certainly I didn't imply that those are the only two options for understanding entanglement. In fact, there are many. Sorry if my post caused you to infer otherwise.

Jimster41 said:
I've heard string folks talk about the "Bulk" in the abstract. Well, if there is a real "Bulk" of the kind they seem to suggest, where everything but gravity is off limits but gravity definitely goes, then that sounds like a pretty seriously a-causal, semi-transcendent situation. I say a-causal (and mean also a-temporal) because isn't "gravity" just space-time curvature, and isn't space-time curvature the sole driver of "proper time" - whatever "proper time" is... physically?

Personally I like to use "differential ageing" (or maybe "pime", aka "physical time"), because I don't see any good argument that "time" actually exists - so abandoning the Newtonian fantasy of it makes me slightly less uncomfortable.

This is getting off topic for this thread, but we deal with that issue in chapters 7 & 8 of our book, "Beyond the Dynamical Universe: Unifying Block Universe Physics and Time as Experienced." Even though modern physics is best accounted for by using constraints in a block universe (the first six chapters make that argument), our dynamical experience of time (Passage, Presence, and Direction) is not an "illusion," as some have argued (e.g., Brian Greene's video The Illusion of Time). I will have to leave it at that here, since it's too far off topic for this thread.
 
  • Like
Likes Jimster41
  • #115
Jimster41 said:
It doesn't have to be 100% "retro-causal" or "transcendent", does it?
As @RUTA said, no, those are certainly not the only options. I wrote "retrocausal etc." because I got tired of writing the complete list. I've given a complete list a few times on this forum; just search for "superdeterminism" and my username and you'll find it.
 
  • #116
RUTA said:
Right, but the OP was asking specifically about the mystery of entanglement in experiments analyzed accurately with QM (yes, even when using photons). So, my point is simple: In any theory of physics that may or may not make correspondence with a more general theory, whenever you do an experiment that is accurately analyzed with that theory (accurate in experimental terms, not to be confused with precise), there is nothing more the general theory can add — that’s what correspondence means. If there was something amiss between the experimental outcomes and theoretical predictions, i.e., the theory failed to analyze it accurately, then that would point to something missing in the approximate theory that requires the more general version. But, that is not at all the case with the experiments accurately analyzed with QM that violate Bell’s inequality for example. Therefore, in such experiments when someone says, “You need to use QFT to understand the mysterious outcomes of that QM experiment,” they are saying, “You need to use QM to understand the mysterious outcomes of that QM experiment.” Which brings us right back to where we started.
Well, you have to analyze an experiment with a theory (or model) that is valid for analyzing that experiment. There's no way to analyze an experiment involving photons with non-relativistic QM alone, since photons cannot be described non-relativistically at all. What you can often describe non-relativistically is the matter involved in the experiment, since large parts of atomic, molecular, and solid-state physics can be described by non-relativistic quantum mechanics or even classical mechanics.

Another point concerns fundamental issues with Einstein causality, which cannot be analyzed using non-relativistic theory at all, since the question of whether causal effects propagate faster than light is irrelevant for non-relativistic physics to begin with. Since in Newtonian physics actions at a distance are the usual way to describe interactions, you cannot expect the causality structure of relativistic spacetime to be respected. So finding violations of Einstein causality using non-relativistic approximations is not a surprise; it is built in from the beginning.

Of course, entanglement itself is independent of whether you use relativistic or non-relativistic QT to describe it.
 
  • Like
Likes bhobba and *now*
  • #117
DarMM said:
I think the interesting thing is the precise form of "classical comfort zone" we're losing here.

People consider nonlocality, multiple worlds, and retrocausality, so they're not afraid of strange ideas. What bothers them is the fact that measurement actually matters: only the variables subjected to measurement have defined values.

From decoherence studies we know "measurement" involves anything undergoing decoherence, which makes it a little less weird. Still, that just turns it into "only those variables that get coupled to the classical world have values".
Well, the problem is popular-science books trying to be sensational in order to sell rather than to provide a true picture of science, which is exciting enough in itself. The reason is that good popular-science writing is among the most difficult tasks ever.

You indeed quote the most abused buzzwords of the popular-science literature with respect to QT:

"Nonlocality": It's even difficult to understand locality vs. nonlocality among physicists in full command of the necessary mathematical equipment to describe it. In contemporary physics everything is described on the most fundamental level by relativistic local QFT. So by construction there are no nonlocal interactions on a fundamental level. What's in a sloppy sense "non-local" in QT are the strong correlations described by entanglement which can refer to parts of a quantum system that are measured (by local interactions with measurement devices!) on far-distant (space-like separated) parts of this systems. It would be much more clear to call this "inseparability" as Einstein did in his own version of the EPR paper, which is much more to the point than the EPR paper itself. The conclusion is: There's no contradiction whatsoever between "local interactions" and "inseparability".

"Multiple worlds:" This is just part of the "quantum esoterics" subculture. Since the "multiple worlds" of Everett's relative-state interpretation are unobservable and just ficitions of popular-science writers. There's not much to say about it in a scientific context.

"Retrocausality:" There's nothing retrocausal. It's mostly referring to "delayed-choice setups". It's just the selection of partial ensembles, which can be done in principle at any time after the experiment is finished providing one has the necessary information within the stored measurement protocols. Accepting the probabilistic quantum description of states and the implied correlations through entanglement, no retrocausality is left. Everything is understandable from the causal history of the experimental setup, involving the initial-state interpretation and the measurements done on the system.

Another source of confusion and weirdness comes from the sloppy statement that "only the variables subjected to measurement have defined values". It's important to distinguish clearly between "preparation" and "measurement", though of course state preparation also involves some measurements. The correct statement is that the state of the system implies which observables take determined values, i.e., which, when measured, always (i.e., with 100% probability) lead to one specific outcome. For all other observables the measurement leads to a random result, with probabilities given by the state the system is prepared in.

There is no "classical vs. quantum world". The classical behavior of macroscopic systems is just due to sufficient coarse graining, looking at macroscopic "relevant" observables. There's thus no contradiction between quantum dynamics and the apparent classical dynamics of these relevant observables. Of course, decoherence is the key mechanism, and it's hard to avoid concerning macroscopic systems.
 
  • Like
Likes physicsworks, julcab12, Hans de Vries and 3 others
  • #118
DrChinese said:
How can it be both causal/local AND quantum nonlocal (i.e. those nonlocal correlations, as you call them)? If by local you mean microcausal, then you are not following standard terminology. Saying something is microcausal is meaningless when talking about entanglement, because entanglement does not violate signal locality anyway. So why mention that?

You clearly acknowledge that the classical ideas of entanglement cannot be maintained post-Bell, and yet you claim that entanglement outcomes are not dependent on the settings of measurement devices that are distant from each other. Why don't you just say that they are, rather than deny that measurements are a factor?
The problem with the word "nonlocality" is its sloppy use even in the scientific literature. In relativistic physics all successful theories are local. It's the very reason why the field concept was developed by Faraday, who of course didn't know about relativity in his time; but indeed the field concept turned out to be crucial for formulating relativistically consistent models of interactions. The interactions are described by local laws rather than actions at a distance as in Newtonian physics. This holds true in QT, which is also formulated as QFT, and in the very construction of all successful relativistic QFTs the microcausality constraint is the key point in having both relativistically covariant descriptions and the "locality" of interactions. One has to mention it whenever claims about "nonlocality" come up, because such claims imply that the measurement on an entangled state at one place would "cause" an immediate change of the system at all far-distant places. You avoid this misunderstanding when using the term "inseparability" rather than "nonlocality" for the strong correlations among far-distant parts of quantum systems described by entanglement.

Also, you don't read my statements carefully enough. I never claimed that outcomes are independent of the settings of measurement devices. The contrary is true! Everything depends on the specific preparation of the initial state and the specific setup of the measurement devices used to probe it. Measurements are due to local interactions of (parts of) the system with measurement devices. The observed strong correlations are due to the initial-state preparation in an entangled state, not due to the local measurement on one part of the system and some mysterious spooky action at a distance on far-distant other parts of the system.
 
  • Like
Likes physicsworks, Hans de Vries, bhobba and 1 other person
  • #119
Retrocausality etc. come about via the rejection of assumptions in the proofs of Bell's theorem and the Kochen-Specker theorem, not in the manner you've stated above.

Also, inseparability is strictly weaker than the non-classical correlations in the CHSH scenario. They're not synonymous.

vanhees71 said:
Another source of confusion and weirdness comes from the sloppy statement that "only the variables subjected to measurement have defined values". It's important to distinguish clearly between "preparation" and "measurement", though of course state preparation also involves some measurements. The correct statement is that the state of the system implies which observables take determined values, i.e., which, when measured, always (i.e., with 100% probability) lead to one specific outcome. For all other observables the measurement leads to a random result, with probabilities given by the state the system is prepared in.
This might be a better way to phrase it, but I feel you make things sound simpler than they are, as your remarks apply equally to some classical stochastic theories.

First, it needs to be modified with the proviso that the Kochen-Specker theorem shows that observables are associated with phenomena in our devices, not with independent properties held by particles. Kemble, as often quoted by Asher Peres, puts it well:
Kemble 1937 said:
"We have no satisfactory reason for
ascribing objective existence to physical quantities as distinguished from the
numbers obtained when we make the measurements which we correlate with
them ... It would be more exact if we spoke of 'making measurements' of
this, that, or the other type instead of saying that we measure this, that, or
the other 'physical quantity.'"
Peres gives good examples of how, even ignoring the KS theorem, inaccuracy in a device meant to measure spin along a given axis means you might really be measuring a POVM that cannot be understood as spin along any direction, or really in terms of any classical variable at all.

However, your phrasing doesn't really distinguish between quantum and classical stochastic theories. How do you explain the fact that, in a CHSH scenario where we have four observables, two for each observer:
$$S_{A1}, S_{A2}, S_{B1}, S_{B2}$$
the outcomes for a round where ##S_{A1}, S_{B1}## are measured are not marginals of a single distribution over all four observables?
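Here is a minimal sketch of that point: any single joint distribution over all four ##\pm 1##-valued observables is a mixture of the 16 deterministic assignments, so its CHSH value is bounded by the deterministic extremes.

```python
import itertools
import numpy as np

# If one joint distribution covered all four observables, every CHSH
# correlator would be a marginal of it; by convexity it suffices to
# check the 16 deterministic +/-1 assignments.
best = max(
    abs(sA1 * sB1 + sA1 * sB2 + sA2 * sB1 - sA2 * sB2)
    for sA1, sA2, sB1, sB2 in itertools.product([1, -1], repeat=4)
)
print(best)            # 2 -> any joint distribution gives |S| <= 2
print(2 * np.sqrt(2))  # the quantum value ~2.83 exceeds this, so the
                       # per-round outcomes cannot be such marginals
```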
 
  • Like
Likes Auto-Didact
  • #120
Do you have a reference for this specific CHSH scenario?

Of course the overarching mathematical edifice here is "probability theory", as e.g. formulated with the Kolmogorov axioms. This theory is imho flexible enough to encompass both classical and quantum "stochastic theories", since it does not specify how to choose the probabilities for a specific situation. This choice is of course utterly different in quantum theory and in classical statistics, and classical statistics is an approximation of quantum statistics with a limited range of validity. Of course, it cannot describe everything related to EPR, the violation of Bell inequalities, and related issues with correlations described by entanglement, including CHSH in its various forms.

CHSH imho provides no problems within the minimal statistical interpretation. You just do measurements on an ensemble of equally prepared systems, with specific measurement setups for each of the correlations you want to measure. Any single experiment is thus consistently described within QT (no matter whether you do an idealized von Neumann filter measurement or some "weak measurement" described by POVMs, which is afaik the most general case).
 
  • #121
vanhees71 said:
Do you have a reference for this specific CHSH scenario?
Streater's monograph "Lost Causes in Theoretical Physics", Chapter 6, or Chapter 6 of Peres's monograph "Quantum Theory: Concepts and Methods".

It's the well-known CHSH scenario though, not a modification of it. Those authors just emphasize the fact that a given round's outcomes are not marginals.

vanhees71 said:
Of course the overarching mathematical edifice here is "probability theory", as e.g. formulated with the Kolmogorov axioms. This theory is imho flexible enough to encompass both classical and quantum "stochastic theories"
It's not. Quantum theory breaks Kolmogorov's axioms. A quantum state and a context induce a Kolmogorov model via a Gelfand homomorphism. This, and how it manifests in the CHSH scenario, is part of the motivation for saying "only the measured variables have defined values", and why I'm not so confident that it's just sloppy language.

vanhees71 said:
CHSH imho provides no problems within the minimal statistical interpretation. You just do measurements on an ensemble of equally prepared systems, with specific measurement setups for each of the correlations you want to measure. Any single experiment is thus consistently described within QT
The point here isn't that the minimal statistical interpretation has problems with the CHSH scenario or that the outcomes are inconsistent with QM. I'm actually using the minimal statistical interpretation, as Peres and Bohr did. What I'm saying is that in the minimal statistical interpretation our measurements don't uncover or determine properties of the particle (Kochen-Specker theorem), and only the measured variables take values (as given by the fact that they're not marginals of the general case).
 
  • Like
Likes julcab12 and Auto-Didact
  • #122
vanhees71 said:
"Nonlocality": It's even difficult to understand locality vs. nonlocality among physicists in full command of the necessary mathematical equipment to describe it. In contemporary physics everything is described on the most fundamental level by relativistic local QFT. So by construction there are no nonlocal interactions on a fundamental level. What's in a sloppy sense "non-local" in QT are the strong correlations described by entanglement which can refer to parts of a quantum system that are measured (by local interactions with measurement devices!) on far-distant (space-like separated) parts of this systems. It would be much more clear to call this "inseparability" as Einstein did in his own version of the EPR paper, which is much more to the point than the EPR paper itself. The conclusion is: There's no contradiction whatsoever between "local interactions" and "inseparability".

Full command of the mathematics does not mean full command of the physics. You lack full command of the physics.
 
  • #124
DarMM said:
Peres gives good examples
where?
vanhees71 said:
Of course the overarching mathematical edifice here is "probability theory", as e.g. formulated with the Kolmogorov axioms. This theory is imho flexible enough to encompass both classical and quantum "stochastic theories", since it does not specify how to choose the probabilities for a specific situation.
No. Quantum stochastic calculus is the noncommutative version of Kolmogorov's commutative stochastic calculus.
 
  • Like
Likes Auto-Didact
  • #125
  • Like
Likes Auto-Didact and A. Neumaier
  • #126
DarMM said:
Nice reference; I didn't know it before. After ridiculing the textbook form of Born's rule, Peres says, among other things:
Asher Peres said:
If you visit a real laboratory, you will never find there Hermitian operators. All you can see are emitters (lasers, ion guns, synchrotrons and the like) and detectors. The experimenter controls the emission process and observes detection events. The theorist’s problem is to predict the probability of response of this or that detector, for a given emission procedure. Quantum mechanics tells us that whatever comes from the emitter is represented by a state ρ (a positive operator, usually normalized to 1). Detectors are represented by positive operators ##E_µ##, where µ is an arbitrary label whose sole role is to identify the detector. The probability that detector µ be excited is tr(ρ##E_µ##). A complete set of ##E_µ##, including the possibility of no detection, sums up to the unit matrix and is called a positive operator valued measure (POVM) [6].

The various ##E_µ## do not in general commute, and therefore a detection event does not correspond to what is commonly called the “measurement of an observable.” Still, the activation of a particular detector is a macroscopic, objective phenomenon. There is no uncertainty as to which detector actually clicked. [...]
Traditional concepts such as “measuring Hermitian operators,” that were borrowed or adapted from classical physics, are not appropriate in the quantum world. In the latter, as explained above, we have
emitters and detectors, and calculations are performed by means of POVMs.
Just as in the thermal interpretation. In the introduction, he echoes another point of the thermal interpretation:
Asher Peres said:
The situation is much simpler: the pair of photons is a single, nonlocal, indivisible entity . . . It is only because we force upon the photon pair the description of two separate particles that we get the paradox [...]
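Here is a minimal numerical sketch of the emitter/detector calculus in the first quote; the particular three-outcome "trine" POVM below is my own illustration, not taken from Peres:

```python
import numpy as np

def ket(theta):
    """Real qubit state at angle theta in the (|0>, |1>) plane."""
    return np.array([np.cos(theta), np.sin(theta)])

# Three symmetric detector operators: each E_mu >= 0 and sum(E_mu) = identity.
povm = [(2 / 3) * np.outer(ket(k * 2 * np.pi / 3), ket(k * 2 * np.pi / 3))
        for k in range(3)]
assert np.allclose(sum(povm), np.eye(2))

psi = ket(0.3)            # whatever the emitter produces
rho = np.outer(psi, psi)  # its density matrix
probs = [float(np.trace(rho @ E)) for E in povm]
print(probs, sum(probs))  # click probabilities tr(rho E_mu), summing to 1
```

These ##E_\mu## do not commute with one another, so no single click corresponds to the textbook "measurement of an observable", exactly as the quote says.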
 
  • Like
Likes Auto-Didact, Jimster41, mattt and 2 others
  • #127
vanhees71 said:
Well, you have to analyze an experiment with a theory (or model) that is valid for analyzing that experiment. There's no way to analyze an experiment involving photons with non-relativistic QM alone, since photons cannot be described non-relativistically at all. What you can often describe non-relativistically is the matter involved in the experiment, since large parts of atomic, molecular, and solid-state physics can be described by non-relativistic quantum mechanics or even classical mechanics.

Another point concerns fundamental issues with Einstein causality, which cannot be analyzed using non-relativistic theory at all, since the question of whether causal effects propagate faster than light is irrelevant for non-relativistic physics to begin with. Since in Newtonian physics actions at a distance are the usual way to describe interactions, you cannot expect the causality structure of relativistic spacetime to be respected. So finding violations of Einstein causality using non-relativistic approximations is not a surprise; it is built in from the beginning.

Of course, entanglement itself is independent of whether you use relativistic or non-relativistic QT to describe it.

To answer the OP, here is what the experimentalists say in their paper, "Entangled photons, nonlocality and Bell inequalities in the undergraduate laboratory":

Consider a quantum mechanical system consisting of two photons called, for historical reasons, the “signal” and “idler” photons.

So, to address the OP, these experimentalists consider their Bell basis state to be "quantum mechanics," not "quantum field theory" even though they created it with photons. The only place they say anything at all about "relativistic" is here:

This gives no information about the choice of alpha. It is also the probability we would find if the signal photon had not been measured. Thus quantum mechanics (in the Copenhagen interpretation) is consistent with relativistic causality. It achieves that consistency by balancing two improbable claims: the particles influence each other nonlocally, and the randomness of nature prevents us from sending messages that way. A comment by Einstein succinctly captures the oddness of this situation. In a 1947 letter to Max Born he objected that quantum mechanics entails “spooky actions at a distance.”

So, whether or not you consider the Bell basis states to be "relativistic" when created with photons, these states and standard Hilbert space formalism account for the violation of the CHSH inequality when using photons. Again, there is nothing in the formalism without additional metaphysical interpretation that resolves the mystery of the correlations.
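For what it's worth, the consistency with relativistic causality mentioned in the quote is easy to check numerically. A minimal sketch, assuming the polarization Bell state ##(|HH\rangle + |VV\rangle)/\sqrt{2}## and ideal two-outcome analyzers:

```python
import numpy as np

def pass_vec(t):  # polarization state transmitted by an analyzer at angle t
    return np.array([np.cos(t), np.sin(t)])

# The Bell state (|HH> + |VV>)/sqrt(2) with H = [1, 0] and V = [0, 1].
phi = (np.kron([1.0, 0.0], [1.0, 0.0]) +
       np.kron([0.0, 1.0], [0.0, 1.0])) / np.sqrt(2)

def joint_probs(a, b):
    """P(outcome_a, outcome_b) for pass (+) / absorb (-) at angles a, b."""
    probs = {}
    for sa, ta in [("+", a), ("-", a + np.pi / 2)]:
        for sb, tb in [("+", b), ("-", b + np.pi / 2)]:
            amp = np.kron(pass_vec(ta), pass_vec(tb)) @ phi
            probs[sa, sb] = amp ** 2
    return probs

# The signal photon's marginal never depends on the idler's analyzer angle:
for b in [0.0, 0.4, 1.1]:
    p = joint_probs(0.2, b)
    print(round(p["+", "+"] + p["+", "-"], 6))  # 0.5 every time -> no signaling
```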
 
  • Like
Likes Jimster41 and DrChinese
  • #128
vanhees71 said:
Also, you don't read my statements carefully enough. I never claimed that outcomes are independent of the settings of measurement devices. The contrary is true! Everything depends on the specific preparation of the initial state and the specific setup of the measurement devices used to probe it. Measurements are due to local interactions of (parts of) the system with measurement devices. The observed strong correlations are due to the initial-state preparation in an entangled state, not due to the local measurement on one part of the system and some mysterious spooky action at a distance on far-distant other parts of the system.

What do you mean by local interactions with measurement devices? Take a free relativistic quantum field theory. It predicts violation of the Bell inequalities. But interactions are not in the theory.
 
  • #129
atyy said:
Take a free relativistic quantum field theory. It predicts violation of the Bell inequalities.

Does it? In a free theory different free particles cannot be entangled, since that would require them to have interacted in the past (or to be produced from some interaction).
 
  • #130
atyy said:
What do you mean by local interactions with measurement devices? Take a free relativistic quantum field theory. It predicts violation of the Bell inequalities. But interactions are not in the theory.
A free field theory by itself predicts nothing (apart from no scattering) since there are no interactions that would allow anything to be measured.
 
  • Like
Likes weirdoguy
  • #131
PeterDonis said:
In a free theory different free particles cannot be entangled, since that would require them to have interacted in the past (or to be produced from some interaction).
No. In a free theory entangled states are possible: they might have existed forever.
 
  • #132
A. Neumaier said:
A free field theory by itself predicts nothing (apart from no scattering) since there are no interactions that would allow anything to be measured.

A free theory has observables.
 
  • #133
atyy said:
A free theory has observables.
But it is a closed system involving all of spacetime, hence allows no measurement.
 
  • #134
A. Neumaier said:
But it is a closed system involving all of spacetime, hence allows no measurement.

An interacting theory would not change that.
 
  • #135
atyy said:
An interacting theory would not change that.
But it would represent the measurement process (which is what vanhees71 means) and, at least in the thermal interpretation, tell what happens.
 
  • #136
RUTA said:
To answer the OP, here is what the experimentalists say in their paper, "Entangled photons, nonlocality and Bell inequalities in the undergraduate laboratory":
So, to address the OP, these experimentalists consider their Bell basis state to be "quantum mechanics," not "quantum field theory" even though they created it with photons. The only place they say anything at all about "relativistic" is here:
So, whether or not you consider the Bell basis states to be "relativistic" when created with photons, these states and standard Hilbert space formalism account for the violation of the CHSH inequality when using photons. Again, there is nothing in the formalism without additional metaphysical interpretation that resolves the mystery of the correlations.
I don't know where you get the information that the authors of this nice paper do not mean the right thing in writing down Eq. (1). Ok, you might criticize that they don't take the full Bose structure into account, but that's ok here, since they are interested only in the polarization state and label the photons by a distinguishable property, s and i (the physical property distinguishing the photons being the momentum part of the states).

The other quote is, however, indeed unfortunate, because it makes the wrong claim that "the particles influence each other nonlocally". This is simply not true in standard QFT, where the microcausality constraint is valid. The "nonlocality" is about correlations, not about causal effects, and this, together with the correct other half of the sentence ("the randomness of nature prevents us from sending messages that way"), makes the entire theory consistent; consequently there are no "spooky actions at a distance" at all. Einstein was right to criticize the claim of such spooky actions at a distance, because this claim was made by the Copenhageners at the time. It's a shame that, after all the decades in which we have known better through Bell's analysis and all the beautiful experiments done in connection with it, such claims are still made, contradicting the very construction of the theory itself. As often, the math is much smarter than the gibberish sometimes grouped around it in papers and textbooks :-(.
 
  • #137
atyy said:
What do you mean by local interactions with measurement devices? Take a free relativistic quantum field theory. It predicts violation of the Bell inequalities. But interactions are not in the theory.
Thanks for this argument! This makes it very clear that the correlations described by entanglement have nothing to do with spooky actions at a distance. Entanglement is there in the free theory, where no interactions are considered.

Of course, the free theory is empty physics-wise. You can invent as many non-interacting fields as you like; you cannot measure any consequences of their existence, because they don't interact with your measurement devices. So to discuss this issue of "spooky actions at a distance" you cannot simply ignore interactions; you must consider the interaction of the measured object with the measurement device. These interactions are of course also governed by the interactions described by the Standard Model, and thus are strictly local. If a photon hits an atom in a photodetector, the interaction is with this atom, and the "signal" caused by it travels at most at the speed of light and does not lead to any spooky action at a distance, due to the very construction of the theory in terms of a local, microcausal QFT.
 
  • #138
vanhees71 said:
I don't know where you get the information that the authors of this nice paper do not mean the right thing in writing down Eq. (1). Ok, you might criticize that they don't take the full Bose structure into account, but that's ok here, since they are interested only in the polarization state and label the photons by a distinguishable property, s and i (the physical property distinguishing the photons being the momentum part of the states).

The point of the quote addresses the OP -- why don't experimentalists say they're using QFT when analyzing photons? They say they're using QM. This was just another example of the semantics.

vanhees71 said:
This is simply not true in standard QFT, where the microcausality constraint is valid. The "nonlocality" is about correlations, not about causal effects, and this, together with the correct other half of the sentence ("the randomness of nature prevents us from sending messages that way"), makes the entire theory consistent; consequently there are no "spooky actions at a distance" at all.

You understand that the formalism does not supply a causal mechanism for the correlations. That's a start! Now, to understand what so many people find mysterious about entanglement, simply tell them what does explain the correlations. Keep in mind that the formalism predicts the correlations, but does not explain them, i.e., it provides no causal mechanism, as you admit. So, if you want to be done with all this "gibberish," just explain the correlations without merely invoking the formalism.
 
  • Like
Likes Auto-Didact and Lord Jestocost
  • #139
Of course the formalism does not supply a causal mechanism for the correlations in the sense you seem to imply (but don't explicitly mention, to keep all this a mystery ;-)), because there is no such mechanism; the cause is the preparation procedure. E.g., two photons in the polarization-singlet state are created in a parametric down-conversion event: through the local (sic) interaction of a laser field (coherent state) with a birefringent crystal, a photon gets annihilated and two new photons are created, necessarily in accordance with conservation laws (within the limits of the uncertainty relations involved, of course), leading to a two-photon state in which both the momenta and the polarizations of these photons are necessarily entangled. There's nothing mysterious about this. The formalism thus indeed describes, and in that sense also explains, the correlations. By "explaining" in the sense of the natural sciences you always mean that you can understand it (or maybe not) from the fundamental laws discovered so far. The fundamental laws themselves (in contemporary physics mostly expressed in terms of symmetry principles) are the result of careful empirical research and accurate measurements, the development of adequate mathematical models/theories, their testing and, if necessary, refinement.

It is impossible to explain any physics without invoking "the formalism". That is as if you forbade the use of language in communication. It's impossible to communicate without an adequate language, and the major breakthrough in man's attitude towards science in the modern sense was the realization, as Galileo famously put it, that the language of nature is mathematics (particularly geometry); this is more valid in modern physics than ever.
 
  • #140
How does that explain the mechanism controlling the evolution between that preparation and the actual later (and/or space-like separated) event when they get measured and "display" said correlation? You seem to imply that the later (and/or space-like separated) event is fully defined earlier, and only therefore "locally" (in an understandable preparation process). I get that goal, and in some sense I agree. No hidden variables. But then what transpires to cause delay and support separation, but also limit it? What constrains it (in addition to the human experimenter)? How does that work? I am totally unsatisfied with the answer "nothing real, nothing knowable, nothing worth trying to imagine". That hypothetical flow is to me what space-time, and whatever conservation laws (of geometry and duration?) govern it, do microscopically. I am okay with the statement that we will never be able to "observe" the beables of that, but then we are already way down into that problem anyway... doesn't mean we can't deduce stuff.

For example: say no experimenter you or anyone else knows decides to end that particular entanglement experiment. How far does it go? What dictates that it end?
What qualifications does some space-time location have to have to manifest the correlations you prepared? Do all entanglements therefore persist indefinitely? That seems unlikely, since we are standing firmly on classical stuff... and would not be here otherwise.
 
  • #141
Well, I think the first step toward understanding these things is to look at what's done in the lab. You have a very concrete setup consisting of a laser and certain types of birefringent crystals which, through nonlinear optics, enable you to create entangled photon pairs. That's the preparation procedure. This is also well understood by effective QED descriptions, i.e., using some constitutive parameters to describe the down-conversion process. It's all based on phenomenological experience and then brought into an efficient formalism to understand "how it works".

Then you have other equipment to measure polarization. In the simplest case you just use some polarizing foil like "Polaroid" in a certain orientation, letting photons in one linear-polarization state through, while the ones with perpendicular polarization are absorbed. You use these filters at both sites where the photons are registered (or not registered). Then you can establish in a series of measurements that the single-photon polarization is completely undetermined. Keeping accurate measurement protocols so that you can check the correlations for each of the entangled photon pairs, you find a 100% correlation between polarization measurements in the same direction. The only thing that has to do with a human experimenter is that he decides what he wants to measure; there's no subjective element in this, if that's what's behind your question.
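A minimal sketch of these two observations, assuming the Bell state ##(|HH\rangle + |VV\rangle)/\sqrt{2}##, for which QM gives the coincidence probabilities ##P(\pm\pm) = \tfrac{1}{2}\cos^2(a-b)## and ##P(\pm\mp) = \tfrac{1}{2}\sin^2(a-b)##:

```python
import numpy as np

def corr(a, b):
    """Correlation E(a,b) of the +/-1 polarizer outcomes at angles a, b."""
    d = a - b
    p_same = np.cos(d) ** 2  # P(++) + P(--)
    p_diff = np.sin(d) ** 2  # P(+-) + P(-+)
    return p_same - p_diff   # = cos(2d)

print(corr(0.7, 0.7))        # 1.0 -> 100% correlation at equal angles
print(corr(0.0, np.pi / 4))  # 0.0 -> no correlation 45 degrees apart
# Each photon by itself still passes its own filter with probability 1/2:
# the single-photon polarization is completely undetermined.
```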

The very idea that this is an interesting measurement is a prediction of the theory, but it's finally decided by whether you can set up such a concrete experiment to measure it. There's no more you can expect from natural science. What in addition do you expect? Why are such questions never asked about classical mechanics or electrodynamics? You never ask why Newton's postulates describe the Newtonian world accurately (and indeed, within the now-known limits of applicability, Newtonian mechanics is a very good description of the corresponding phenomena observed in nature). But why don't you ask? Is it because the classical-physics description has no irreducible probability element in it? Isn't it a pretty weird idea to think that everything is strictly deterministic, compared to our daily experience of pretty random events?
 
  • #142
That all makes sense. And I've heard others make that argument, that we can't grasp irreducible probabilistic-ness. We aren't wired to... I think that's possible. But I'm not sure I'm willing to give up on that instinct just yet. It does smack of fatalism - and I think there are too many things left un-detailed, if not un-explained.

To which, I thought you were going to say, "it will propagate until it hits something that it must interact with"... so I've been trying to hone my question.

We can enumerate things that would meet that qualification and imagine how they might arrive, in some sense, in the way of our experimental infinitely propagating two-photon. And we will never know when that happens; can't ever know. Fine, but how did those things come into existence? Or were they already there? Well, they got made in the Big Bang or shortly thereafter... particles etc. Distributed by the inflationary period, condensed from... some...

what, Quantum?
Yes, it broke down, condensed, collapsed, interacted. Space started getting bigger. Whatever; hence the ever-rambling Pachinko machine of the universe, full of billiard balls drifting or zooming around... entropy-ing. It's all quite... linear.

But then what causes situations of negative entropy?

It's malleable, and has ... random non-linear fluctuations.

What allows for malleability of the general process; why didn't it just run down completely right off? What governs that malleability? Whence malleable? Why not malleable? Why any non-linearity?

Accidents of interaction, rarities and anomalies, tunneling, non-perturbative effects. Sometimes in a Pachinko machine the balls bounce up.

Oh, can I make a Pachinko machine where the balls bounce up more? What if I make one Pachinko machine that works just right and another one and compare them - but find they are different... like two relativistic observers with identical physics. What common laws govern the designs of those two different machines?

Space-time rules.

What governs those space-time rules. What are those?

I don't know, Einstein said c.

How does c get worked out by space-time?
 
  • #143
Jimster41 said:
How does that explain the mechanism controlling the evolution between that preparation and the actual later (and/or space-like separated) event when they get measured and "display" said correlation?

There is no mechanism, because the “photon”, for example, is merely an encodement of a set of potentialities or possible outcomes of measurements, viz. a mental conceptual link between particular macroscopic preparation devices and a series of macroscopic measurement events.
 
  • #144
Lord Jestocost said:
There is no mechanism, because "phonons", for example, are merely an encodement of a set of potentialities or possible outcomes of measurements, viz. a mental conceptual link between particular macroscopic preparation devices and a series of macroscopic measurement events.

Aren't they also involved in the heat capacity of materials... as in giving different materials different heat capacity. Heat capacity is pretty important to me... on a daily basis.

Sorry, did you mean phonons or photons?
 
  • #145
Sorry, I mean of course "photon".
 
  • Like
Likes Jimster41
  • #146
You think with photons. We all do. I'm going to go ahead and call myself (and you for that matter), and all this here, real as it gets. Don't mean to sound snippy. Cognitive dissonance.

We need to lean on the concept of photons and all these conceptual links to reality with all our weight I think. I don't see any way around it. That's why I keep coming here making an idiot out of myself.

And to try to respond to the point being made - which seems to be kind of that there is nothing universally interesting about our enumeration of observables - they are things we invented so we are mesmerized by them. That just seems pretty solipsistic.

Maybe our two-photon hits a two-photon made by some experimenters we don't know. We are now connected by space-time's rules... I get that it's useless to either of us as a random number, but the connection is physical, isn't it? It has physical implications for what happens next.
 
  • #147
Jimster41 said:
You think with photons.
As a working physicist, just for FAPP! From a philosophical point of view, I don't mistake the map for the territory, an instrumentalist's point of view.
 
  • Like
Likes DrChinese and Jimster41
  • #148
vanhees71 said:
Isn't it a pretty weird idea to think that everything is strictly deterministic, compared to our daily experience of pretty random events?

I agree with this with no reservations. :smile: Human experience is demonstrably NOT deterministic, and yet there is obviously a strong desire to provide rules and order for everything else.
 
  • Like
Likes vanhees71 and Jimster41
  • #149
DrChinese said:
Human experience is demonstrably NOT deterministic, and yet there is obviously a strong desire to provide rules and order for everything else.
Casting dice is also demonstrably NOT deterministic, and yet Laplace provided rules and order for them that remained valid until the advent of quantum theory.
 
  • #150
A. Neumaier said:
But it would represent the measurement process (which is what vanhees71 means) and, at least in the thermal interpretation, tell what happens.

But the thermal interpretation is not (yet?) a standard interpretation. I do agree that what you say would likely be true of an interpretation that solves the measurement problem (e.g., maybe something like Bohmian mechanics or the thermal interpretation, though the latter is also not a standard interpretation at this time).
 