How do entanglement experiments benefit from QFT (over QM)?

  • #151
vanhees71 said:
Of course the formalism does not supply a causal mechanism for the correlations in the sense you seem to imply (but not explicitly mention to keep all this in a mystery ;-)), because there is no causal mechanism. The causal mechanism is the preparation procedure...

I "think" what you mean is that the causal mechanism (such as it is, what you can control) essentially ENDS when the 2 photon entangled state begins. Because there is no known root cause (in any theory I know of) that explains* what the entangled outcomes would be for the various possible observations. In other words: you might be able to create the entanglement initially, but what happens "next" cannot be considered causal or deterministic via the formalism. And I naturally agree with that view, if I am close to what you mean.

And you have then said that "leads to a two-photon state, where both the momenta and the polarization of these photons are necessarily entangled." And you agree that 2 photon state is not classical, so we are in good agreement to this point. The only gap remaining :smile: is acknowledging that whatever happens next is an example of a) apparent randomness; and b) quantum nonlocality, things which MUST be present/embedded in any theoretical framework - even if to say the mechanism is unknown currently. We don't know a) why you get spin up, for example (or any value of a measurement on an entangled basis). And we don't know how the system evolves from a 2-photon state (spin/polarization undefined) to 2 matching 1-photon pure states whose distance/separation precludes influences limited by the light cone defined by a measurement.

You don't see a) and b) as mysteries, OK. We can agree that mysteries are in the eye of the beholder. :smile:

*Even in MWI there is no explanation of why we see a particular outcome; and in BM there is no possibility of observing the pilot wave that guides a particular measurement outcome.
 
  • #152
DrChinese said:
but what happens "next" cannot be considered causal or deterministic via the formalism.

And just because it's not deterministic doesn't mean we can't have any more knowledge about it. There are plenty of statistical systems we can characterize partially (like thermodynamic ones).

To me it all leads to chemistry, and there are plenty of mysteries with respect to how chemistry does what it does - like creating observers who think up a name for it, "chemistry", then notice that it has to behave with relativistic symmetry and think up names for all the symmetries involved, but can't figure out how it does it.
 
  • #153
atyy said:
But the thermal interpretation is not (yet?) a standard interpretation. I do agree that what you say would likely be true of an interpretation that solves the measurement problem (eg. maybe something like Bohmian Mechanics or the thermal interpretation, but that is also not a standard interpretation at this time).
Well, there is only one standard interpretation, printed in many different textbooks, and that is obviously far too idealistic, for example claiming measurements to be described by exact eigenvalues attained via Born's rule rather than by POVMs. (See the quote from Asher Peres in another thread.) Thus one cannot base arguments solely on the standard interpretation.

The thermal interpretation, though nonstandard, indeed solves the measurement problem, without introducing variables not already ubiquitous in QM and QFT. See Section 3 of Part IV of my sequence of papers.
 
  • #154
vanhees71 said:
The only thing that has to do with a human experimenter is that he decides what he wants to measure, and there's no subjective element in this
vanhees71 said:
But why don't you ask? Is it because the classical-physics description has no irreducible probability element in it?
These are closely linked.

It's because the quantum formalism says the statistics of the variables you choose to measure are not, in general, marginals of a joint distribution over all the variables. Thus if in an experiment on entangled particles we can measure ##A, B, C, D## (##A, B## being spin measurements on the first particle and ##C, D## being spin measurements on the second), then if we measure ##A, C## we find ##p(A,C)## is not a marginal of any joint ##p(A, B, C, D)##.

That's the difference between QM and even a stochastic classical theory. It means the sample space is determined by your choice of what to measure.
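To see the claim concretely, here is a small numerical sketch (assuming the spin singlet and the standard optimal CHSH angles; the helper names are ad hoc). The CHSH combination of the four pairwise correlators comes out at ##2\sqrt{2} > 2##, and by Fine's theorem a violation of the CHSH bound is exactly the statement that no joint ##p(A, B, C, D)## has the measured pair distributions as its marginals:

```python
import numpy as np

# Spin singlet |psi> = (|ud> - |du>)/sqrt(2) in the basis |uu>,|ud>,|du>,|dd>
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)
sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])

def spin(theta):
    # Spin observable along a direction at angle theta in the x-z plane
    return np.cos(theta) * sz + np.sin(theta) * sx

def E(a, b):
    # Correlator <psi| spin_a (x) spin_b |psi>, equal to -cos(a - b) for the singlet
    return psi @ np.kron(spin(a), spin(b)) @ psi

a, ap = 0.0, np.pi / 2            # Alice's settings (A and B above)
b, bp = np.pi / 4, 3 * np.pi / 4  # Bob's settings (C and D above)
S = E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)
print(abs(S))  # 2.828... = 2*sqrt(2) > 2: no common sample space
```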
 
  • #155
vanhees71 said:
Of course the formalism does not supply a causal mechanism for the correlations in the sense you seem to imply (but not explicitly mention to keep all this in a mystery ;-)), because there is no causal mechanism. The causal mechanism is the preparation procedure. E.g., two photons in the polarization-singlet state are created in a parametric downconversion event, where through local (sic) interaction of a laser field (coherent state) with a birefringent crystal a photon gets annihilated and two new photons created, necessarily in accordance with conservation laws (within the limits of the uncertainty relations involved of course); this leads to a two-photon state, where both the momenta and the polarization of these photons are necessarily entangled. There's nothing mysterious about this. The formalism thus indeed describes and in that sense also explains the correlations. By "explaining" in the sense of the natural sciences you always mean that you can understand it (or maybe not) from the fundamental laws discovered so far. The fundamental laws themselves (in contemporary modern physics mostly expressed in terms of symmetry principles) are the result of careful empirical research and accurate measurements, development of adequate mathematical models/theories, their test and, if necessary, refinement.

We know how to create and test a Bell basis state, that is not in dispute. It looks like you conflate the causal mechanism for creating a Bell basis state with the causal mechanism needed to account for the conservation principle it represents. As it turns out, the mechanism that creates the Bell basis state provides no mechanism to account for its conservation outcomes, which caused Einstein to believe quantum mechanics is incomplete (Smolin going so far as to claim it's "wrong").

For example, suppose we're talking about the (fallacious) "classical counterpart" to the spin singlet state, i.e., we have conservation of angular momentum in the classical sense. Alice and Bob would measure variable deflections through their SG magnets corresponding to some hidden underlying value of L for each particle, the sum of those hidden, underlying L's being zero per the creation of the state via conservation of L. In that case, the mechanism creating the state also provides a mechanism to explain the subsequent measurement outcomes in each trial of the experiment. Of course, with the real spin singlet state the conservation principle only holds on average when Alice and Bob make different measurements, since they both always measure +1 or -1 at all angles (no partial deflections as in the classical case, which uniquely distinguishes the quantum and classical joint distributions). [See our paper here or video summary here or here.] There is not anything in the mechanism creating the spin singlet state that also provides a mechanism to account for this manner of conservation.
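The average-only character is easy to exhibit in the formalism. A minimal sketch (assuming the spin singlet and ideal spin measurements, outcomes in units of ##\hbar/2##): in every trial Bob's outcome is ##\pm 1##, but conditioned on Alice having found ##+1## at angle ##\alpha##, his outcomes at angle ##\beta## average to ##-\cos(\alpha-\beta)##, i.e., the angular momenta cancel on average only:

```python
import numpy as np

psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)  # spin singlet, basis |uu>,|ud>,|du>,|dd>
sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])

def up(theta):
    # Projector onto the +1 eigenstate of the spin observable at angle theta
    return (np.eye(2) + np.cos(theta) * sz + np.sin(theta) * sx) / 2

alpha, beta = 0.0, np.pi / 3
p_alice_up = psi @ np.kron(up(alpha), np.eye(2)) @ psi  # = 1/2, Alice alone is random
p_both_up = psi @ np.kron(up(alpha), up(beta)) @ psi
p = p_both_up / p_alice_up                              # P(Bob +1 | Alice +1)
print(2 * p - 1, -np.cos(alpha - beta))                 # both -0.5: average-only conservation
```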

I realize you don't need a causal mechanism to account for the average-only conservation to feel as though you "understand" quantum theory. But, the plain and simple fact is that others do. Thus, they don't understand why you're happy and you don't understand why they're not happy. The psychological needs of these two camps are different, that's all.

I'm writing all this because psychologically speaking, I've a foot in each camp. That is, I can live with acausality at the fundamental level, but I want a principled ontology for it. That's introduced in these two episodes of the video series (Episode 1 and Episode 2).

vanhees71 said:
It is impossible to explain any physics without invoking "the formalism". This is as if you forbade the use of language in communicating. It's impossible to communicate without the use of an adequate language, and the major breakthrough in man's attitude towards science in the modern sense is the realization, as Galileo famously put it, that the language of nature is mathematics (particularly geometry), and this is all the more valid with modern physics than ever.

We have the formalism and have seen that it maps to the experiments, which is the first step in understanding the phenomenon. That doesn't explain the phenomenon for everyone, as I just stated, but it is a necessary first step.
 
  • #156
DarMM said:
Quantum theory breaks Kolmogorov's axioms. A quantum state and a context induce a Kolmogorov model via a Gelfand homomorphism.

Yes - you can look on QM as a generalized probability model or, as is usually done, as ordinary probability plus other rules, e.g., at the semi-popular level:
https://www.scottaaronson.com/democritus/lec9.html

I suspect it's trying to tell us something - what, I have no idea. I do know that QM as we know it requires continuity in pure states, which you can't have in ordinary probability theory. This allows many powerful theorems right at the foundations of QM, e.g., Wigner's theorem. But why is nature so mathematically accommodating? I have a sneaking suspicion nature is running us in circles on this one, because it turns out to be equivalent to requiring entanglement.
https://arxiv.org/abs/0911.0695
Thanks
Bill
 
  • #157
atyy said:
But the thermal interpretation is not (yet?) a standard interpretation. I do agree that what you say would likely be true of an interpretation that solves the measurement problem (eg. maybe something like Bohmian Mechanics or the thermal interpretation, but that is also not a standard interpretation at this time).

Personally I would use "not well known" rather than "standard". I do not think there is any standard interpretation other than the math itself. And yes, I do realize you need some kind of interpretation of probability to apply it, but that's true of many areas that use probability. You can prove all sorts of interesting things from the Kolmogorov axioms alone, such as that Brownian motion is continuous but not differentiable anywhere (that's as far as I got with rigorous probability theory), but applying it is another matter, as Ross's Probability Models makes only too clear (groan, some of his problems are HARD - I took it at uni only because I liked the lecturer - never did like the subject).

Thanks
Bill
 
  • #158
bhobba said:
I do know that QM as we know it requires continuity in pure states, which you can't do in ordinary probability
Brownian motion is continuous on the level of pure states.

Don't take the finite dimensional caricature of QM presented by quantum information theory as the full truth!
 
  • #159
bhobba said:
Personally I would use "not well known" rather than "standard". I do not think there is any standard interpretation other than the math itself. And yes, I do realize you need some kind of interpretation of probability to apply it, but that's true of many areas that use probability. You can prove all sorts of interesting things from the Kolmogorov axioms alone, such as that Brownian motion is continuous but not differentiable anywhere (that's as far as I got with rigorous probability theory), but applying it is another matter, as Ross's Probability Models makes only too clear (groan, some of his problems are HARD - I took it at uni only because I liked the lecturer - never did like the subject).

Thanks
Bill
The standard interpretation is still one of the Copenhagen flavors, usually without the collapse postulate. It's pretty close to the minimal interpretation and usually dubbed "the orthodox interpretation". By "standard interpretation" I mean the interpretation used by the majority of theoretical and experimental physicists (even in the quantum optics/AMO community, which is closest to the foundations).
 
  • #160
DrChinese said:
I "think" what you mean is that the causal mechanism (such as it is, what you can control) essentially ENDS when the 2 photon entangled state begins. Because there is no known root cause (in any theory I know of) that explains* what the entangled outcomes would be for the various possible observations. In other words: you might be able to create the entanglement initially, but what happens "next" cannot be considered causal or deterministic via the formalism. And I naturally agree with that view, if I am close to what you mean.

And you have then said that "leads to a two-photon state, where both the momenta and the polarization of these photons are necessarily entangled." And you agree that 2 photon state is not classical, so we are in good agreement to this point. The only gap remaining :smile: is acknowledging that whatever happens next is an example of a) apparent randomness; and b) quantum nonlocality, things which MUST be present/embedded in any theoretical framework - even if to say the mechanism is unknown currently. We don't know a) why you get spin up, for example (or any value of a measurement on an entangled basis). And we don't know how the system evolves from a 2-photon state (spin/polarization undefined) to 2 matching 1-photon pure states whose distance/separation precludes influences limited by the light cone defined by a measurement.

You don't see a) and b) as mysteries, OK. We can agree that mysteries are in the eye of the beholder. :smile:

*Even in MWI there is no explanation of why we see a particular outcome; and in BM there is no possibility of observing the pilot wave that guides a particular measurement outcome.
The entangled state is as causal as any other. QT is a causal theory, as is any dynamical theory of physics. The entangled state evolves according to the standard dynamical laws of QT like any other state.

You are always insisting on classical interpretations, not I! That's the main source of our mutual misunderstandings and quarrels. I just take QT seriously, and I deny any necessity of classical interpretations. In particular, Bell's class of local deterministic theories (usually dubbed "realistic", which is however a misleading term) is ruled out with humongous significance while QT is confirmed!

The theory also clearly says what's random and what is not random. An observable takes a determined value according to the state preparation if and only if the measurement leads to one value with 100% probability. Otherwise it's undetermined, and the outcome of any individual measurement is irreducibly random. When repeated on an ensemble of equally prepared systems, the outcomes of these measurements are distributed according to the probabilities the state describes, and the state describes these probabilities and nothing else. According to QT, and confirmed by all Bell tests with high significance, there's nothing "behind the curtain" which could "determine" the values of such observables.

Ad a) The randomness is not apparent but an objective fact of the behavior of nature.

Ad b) Interactions are local. What's called "nonlocal" refers to correlations between far distant parts of a quantum system described by entanglement.

There's nothing weird with this. It's just what we have figured out over the last 500 years about how nature behaves.
 
  • #161
DarMM said:
These are closely linked.

It's because the quantum formalism says the statistics of the variables you choose to measure are not, in general, marginals of a joint distribution over all the variables. Thus if in an experiment on entangled particles we can measure ##A, B, C, D## (##A, B## being spin measurements on the first particle and ##C, D## being spin measurements on the second), then if we measure ##A, C## we find ##p(A,C)## is not a marginal of any joint ##p(A, B, C, D)##.

That's the difference between QM and even a stochastic classical theory. It means the sample space is determined by your choice of what to measure.
Yes sure, but it's an established fact after 100 years of testing QT. For me that's the only conclusion I can come to in view of all the Bell tests disproving local deterministic HV theories and confirming QT.
 
  • #162
RUTA said:
We know how to create and test a Bell basis state, that is not in dispute. It looks like you conflate the causal mechanism for creating a Bell basis state with the causal mechanism needed to account for the conservation principle it represents. As it turns out, the mechanism that creates the Bell basis state provides no mechanism to account for its conservation outcomes, which caused Einstein to believe quantum mechanics is incomplete (Smolin going so far as to claim it's "wrong").

For example, suppose we're talking about the (fallacious) "classical counterpart" to the spin singlet state, i.e., we have conservation of angular momentum in the classical sense. Alice and Bob would measure variable deflections through their SG magnets corresponding to some hidden underlying value of L for each particle, the sum of those hidden, underlying L's being zero per the creation of the state via conservation of L. In that case, the mechanism creating the state also provides a mechanism to explain the subsequent measurement outcomes in each trial of the experiment. Of course, with the real spin singlet state the conservation principle only holds on average when Alice and Bob make different measurements, since they both always measure +1 or -1 at all angles (no partial deflections as in the classical case, which uniquely distinguishes the quantum and classical joint distributions). [See our paper here or video summary here or here.] There is not anything in the mechanism creating the spin singlet state that also provides a mechanism to account for this manner of conservation.

I realize you don't need a causal mechanism to account for the average-only conservation to feel as though you "understand" quantum theory. But, the plain and simple fact is that others do. Thus, they don't understand why you're happy and you don't understand why they're not happy. The psychological needs of these two camps are different, that's all.

I'm writing all this because psychologically speaking, I've a foot in each camp. That is, I can live with acausality at the fundamental level, but I want a principled ontology for it. That's introduced in these two episodes of the video series (Episode 1 and Episode 2).
We have the formalism and have seen that it maps to the experiments, which is the first step in understanding the phenomenon. That doesn't explain the phenomenon for everyone, as I just stated, but it is a necessary first step.
There is no classical counterpart of spin. Spin is generically quantum, but that's semantics.

Indeed, I think the great merit of the scientific method is that it doesn't care about our psychological needs but establishes clear facts about what's real. Obviously the worldview of classical physics does not describe reality accurately. QT describes it at least more accurately. It may be psychologically problematic for you to face this reality, but I indeed wonder why.
 
  • #163
A. Neumaier said:
Brownian motion is continuous on the level of pure states. Don't take the finite dimensional caricature of QM presented by quantum information theory as the full truth!

Good point. I even mentioned it in one of my posts. But while continuous, it is nowhere differentiable. Still, the paper I posted making the claim 'If one requires the transformation from the last axiom to be continuous, one separates quantum theory from the classical probabilistic one.' is not correct - it should require differentiability.

Thanks
Bill
 
  • #164
bhobba said:
Good point. I even mentioned it in one of my posts. But while continuous, it is nowhere differentiable. Still, the paper I posted making the claim 'If one requires the transformation from the last axiom to be continuous, one separates quantum theory from the classical probabilistic one.' is not correct - it should require differentiability.
That requires too much - wave functions need not be differentiable, only square integrable.
 
  • #165
A. Neumaier said:
That requires too much - wave functions need not be differentiable, only square integrable.

Yes (we won't go into Rigged Hilbert Spaces because it only makes it worse for my position) - but not nowhere differentiable, because we have Schrödinger's Equation. I need to review the number of places I have seen it. But this is getting off-topic. I will need to look at the papers that state it first.

Thanks
Bill
 
  • #166
bhobba said:
but not nowhere differentiable, because we have Schrödinger's Equation.
Schrödinger's equation for ##N## particles and expectations ##\psi^*H\psi## make sense in the Sobolev space of once weakly differentiable functions on ##R^{3N}##. It contains the piecewise linear finite elements that could be used (in principle) to solve it numerically. I don't know whether this space contains nowhere differentiable functions but wouldn't be surprised.
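For what that numerical remark looks like in the simplest possible case, here is a one-dimensional toy sketch (the harmonic oscillator with ##\hbar=m=\omega=1##, not the ##R^{3N}## problem; the potential is lumped at the nodes, a sketch-level quadrature). The weak form only needs ##H^1## functions, so piecewise linear hat functions suffice:

```python
import numpy as np
from scipy.linalg import eigh

# Piecewise-linear FEM for H = -(1/2) d^2/dx^2 + x^2/2 on [-10, 10],
# with psi = 0 at the endpoints (harmless here, the bound states decay fast)
L, n = 10.0, 400
x = np.linspace(-L, L, n + 2)[1:-1]   # interior nodes carry the hat functions
h = x[1] - x[0]

ones = np.ones(n - 1)
K = (2.0 * np.eye(n) - np.diag(ones, 1) - np.diag(ones, -1)) / h        # int phi_i' phi_j'
M = (4.0 * np.eye(n) + np.diag(ones, 1) + np.diag(ones, -1)) * h / 6.0  # int phi_i phi_j
V = np.diag(0.5 * x**2 * h)           # potential, lumped at the nodes

w, c = eigh(0.5 * K + V, M)           # generalized eigenproblem (K/2 + V) c = E M c
print(w[:3])                          # close to [0.5, 1.5, 2.5], the oscillator levels
```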
 
  • #167
bhobba said:
Yes [...] but not nowhere differentiable, because we have Schrödinger's Equation.
A. Neumaier said:
Schrödinger's equation for ##N## particles and expectations ##\psi^*H\psi## make sense in the Sobolev space of once weakly differentiable functions on ##R^{3N}##. It contains the piecewise linear finite elements that could be used (in principle) to solve it numerically. I don't know whether this space contains nowhere differentiable functions but wouldn't be surprised.
Indeed, since ##3N>1##, the Sobolev space ##H^1(R^{3N})## contains nowhere differentiable functions. See the answer to my question Do Sobolev spaces contain nowhere differentiable functions? on MathOverflow.
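A standard example pointing the same way (weaker than the construction in that answer, but quick to check): for ##n\geq 3## take a smooth cutoff ##\varphi## equal to ##1## near the origin and set ##u(x)=\varphi(x)|x|^{-\alpha}## with ##0<\alpha<(n-2)/2##. Then ##u\in L^2##, and ##|\nabla u|\sim\alpha|x|^{-\alpha-1}## near ##0## is square integrable there because ##2(\alpha+1)<n##, so ##u\in H^1(R^n)##, yet ##u## is unbounded at the origin. Summing translates of ##u## over a countable dense set with rapidly decreasing coefficients gives an ##H^1## function that is discontinuous, hence non-differentiable, on a dense set; the full nowhere-differentiable statement requires the lacunary construction in the linked answer.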
 
  • #168
vanhees71 said:
Of course the formalism does not supply a causal mechanism for the correlations in the sense you seem to imply (but not explicitly mention to keep all this in a mystery ;-)), because there is no causal mechanism.
This is a philosophical statement, not a scientific one, and certainly not a statement concerned with finding the complete pure mathematical theory for which QT is 'applied mathematics', i.e. the currently unknown uniquely correct mathematical model capable of capturing all of QT without any glaring conceptual problems.

From the history of physics, we have learned that all physical theories have such a unique form of pure mathematics underlying them: for Newtonian mechanics it is calculus, for Maxwell theory it is vector calculus, for GR it is Riemannian geometry, for Hamiltonian mechanics it is symplectic geometry, etc.; for QT we have not yet found the correct form of pure mathematics, this is still work in progress.

Having a unique mathematical theory underlying a physical theory - which moreover can typically be directly generalized mathematically (i.e. not merely heuristically, e.g. through perturbative methods, linearizations or small-angle idealizations) in a plethora of ways and directions - means that the physical theory can be derived from first principles and unified with other mathematical and/or physical theories; this means that there are no conceptual problems in the foundation of that physical theory.

All fundamental physical theories known so far were capable of being derived from first principles eventually, all except for QT, which moreover cannot easily be generalized or unified with other physical theories without extreme heuristics e.g. perturbation theory in the case of QFT.
vanhees71 said:
The causal mechanism is the preparation procedure. E.g., two photons in the polarization-singlet state are created in a parametric downconversion event, where through local (sic) interaction of a laser field (coherent state) with a birefringent crystal a photon gets annihilated and two new photons created, necessarily in accordance with conservation laws (within the limits of the uncertainty relations involved of course); this leads to a two-photon state, where both the momenta and the polarization of these photons are necessarily entangled. There's nothing mysterious about this. The formalism thus indeed describes and in that sense also explains the correlations. By "explaining" in the sense of the natural sciences you always mean that you can understand it (or maybe not) from the fundamental laws discovered so far. The fundamental laws themselves (in contemporary modern physics mostly expressed in terms of symmetry principles) are the result of careful empirical research and accurate measurements, development of adequate mathematical models/theories, their test and, if necessary, refinement.

It is impossible to explain any physics without invoking "the formalism". This is as if you forbade the use of language in communicating. It's impossible to communicate without the use of an adequate language, and the major breakthrough in man's attitude towards science in the modern sense is the realization, as Galileo famously put it, that the language of nature is mathematics (particularly geometry), and this is all the more valid with modern physics than ever.
The causal mechanism is not the preparation procedure; what you have offered is not an actual explanation but just a heuristic description of the phenomenology retrofitted into a post-hoc-ergo-propter-hoc statement; while such heuristics sound nice and help pragmatic experimentalists not to worry about the foundations, they are completely fallacious and therefore unacceptable for anyone really interested in rigorous explanation and understanding at an academic level.

Your heuristics import your philosophy into the practice of physics, because you are assuming that the axioms for QT that you have chosen are necessary, sufficient and capable of giving a complete conceptual description, while in actuality your chosen axioms are purely pragmatic heuristics; even worse, when extended beyond their range of applicability they end up being patently fallacious and therefore fundamentally incapable of giving a complete description of the physics.

This is the danger of making a hurried, premature axiomatization of a physical theory instead of finding the correct derivation from first principles, i.e. constructing a new form of pure mathematics tailor-made for that physical theory which dovetails with the rest of pure mathematics: von Neumann et al. just bum-rushed a premature axiomatization of the physics into the foundation of QM and we are suffering to this day because of that.

The lesson to take away from this is that an axiomatization of a theory typically offers almost nothing of substance directly for the construction or discovery of new mathematics, especially if done sloppily/incorrectly, because an axiomatization can easily just end up being a meaningless game in formal mathematics; in other words, axiomatization is an art form and not all axiomatizations are works of art, far from it.

Any physical theory which cannot be based on a principle which is conceptually coherent by itself as a mathematical theory should always be looked at with the necessary cautionary suspicion; this is for me the same reason to be suspicious of string theory and also the same reason to be suspicious of the highly artificial (i.e. non-pure) mathematical constructions in mathematical economics and econometrics.

To demonstrate that your axiom-based heuristic view of QT without any coherent underlying principles - i.e. the minimal interpretation - is not a necessary way of looking at QT, others, in particular Popescu and Rohrlich, have actually given a completely different way of changing the foundational structure of QT by changing the roles of axioms, postulates and principles: https://doi.org/10.1007/BF02058098
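To get a concrete feel for what Popescu and Rohrlich changed: they take no-signaling as the principle and ask which correlations it permits, and the answer is strictly more than QT permits. A minimal sketch (the function names are ad hoc) comparing the CHSH value of their "box" with the quantum singlet optimum:

```python
import numpy as np

def chsh(E):
    # E(x, y): correlation of +-1 outcomes for settings x, y in {0, 1}
    return abs(E(0, 0) + E(0, 1) + E(1, 0) - E(1, 1))

def E_pr(x, y):
    # Popescu-Rohrlich box: a XOR b = x AND y with uniform marginals, so the
    # outcomes agree unless both settings are 1; no-signaling, yet maximal
    return 1.0 if x * y == 0 else -1.0

def E_qm(x, y):
    # Singlet correlations at the optimal CHSH angles
    a = [0.0, np.pi / 2][x]
    b = [np.pi / 4, -np.pi / 4][y]
    return -np.cos(a - b)

print(chsh(E_pr))  # 4.0, the algebraic maximum allowed by no-signaling alone
print(chsh(E_qm))  # 2.828... = 2*sqrt(2), Tsirelson's bound
```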
vanhees71 said:
Indeed, I think the great merit of the scientific method is that it doesn't care about our psychological needs but establishes clear facts about what's real. Obviously the worldview of classical physics does not describe reality accurately. QT describes it at least more accurately. It may be psychologically problematic for you to face this reality, but I indeed wonder why.
Psychologically problematic aspects of any explanation - especially a scientific explanation which can be put into mathematical form - imply conceptual problems within that explanation.

Conceptual problems in science practically always mean that the particular chosen form of mathematics used in the explanation is not sufficient to fully describe the phenomenon that that form of mathematics is aiming to describe, i.e. a more sophisticated form of mathematics is needed to naturally model/capture/explain that phenomenon.

I would say that it is pretty obvious that the problems in the foundations of QT are precisely of this nature: in the absence of glaring experimental deviations, we have always needed a new form of mathematics to help solve the remaining conceptual issues, and there is no reason whatsoever to suspect that the case is different for QT; on the contrary, because of the unexplained introduction of complex numbers into the foundations of physics, there is every reason to suspect that a new form of mathematics is needed to resolve the problems in the foundations of QT.
 
  • #169
Auto-Didact said:
for QT we have not yet found the correct form of pure mathematics, this is still work in progress.
For quantum mechanics, it is functional analysis in Hilbert spaces. Only for quantum field theory, clear mathematical foundations are fragmentary. The interpretation is a completely disjoint issue.
 
  • #170
A. Neumaier said:
For quantum mechanics, it is functional analysis in Hilbert spaces. Only for quantum field theory, clear mathematical foundations are fragmentary. The interpretation is a completely disjoint issue.
It is explicitly an assumption that the interpretation is a disjoint issue: all interpretative issues in physics always change when mathematical foundations change; the removal of Newtonian absolute space and time from the foundations of physics due to relativity theory is the prime example of this. Feynman spoke a lot about the resolution of such conceptual issues by changing foundations in The Character of Physical Law, among his many works and lectures.

Functional analysis in function spaces is only a necessary but not sufficient ingredient of the pure mathematical apparatus required to describe QT in full, exactly as you say.
 
  • #171
Auto-Didact said:
It is explicitly an assumption that the interpretation is a disjoint issue:
Your arguments are also full of assumptions solely based on your faith, none of them verifiable.
Auto-Didact said:
all interpretative issues in physics always change when mathematical foundations change; the removal of Newtonian absolute space and time from the foundations of physics due to relativity theory is the prime example of this.
But there is not the slightest hint that there is a deeper nice theory "deforming" quantum mechanics to something of which the latter is a limiting case. If it existed, it would have been found by now.
 
  • #172
A. Neumaier said:
Your arguments are also full of assumptions solely based on your faith, none of them verifiable.
Alas, making assumptions is necessary in order to progress. Making assumptions in and of itself isn't problematic if one is aware that one is making assumptions; I am fully aware that I am doing this, not just reflectively but strategically: making your assumptions explicit directly opens them up to falsification. This is a formal reasoning strategy I learned in medical practice, called diagnostics.
A. Neumaier said:
But there is not the slightest hint that there is a deeper nice theory "deforming" quantum mechanics to something of which the latter is a limiting case. If it existed, it would have been found by now.
Not if the right conceptualization is missing; it of course only needs to be found once. Discovery of new pure mathematics in the absence of empirical guidance is not a trivial technical problem which can be resolved by throwing more money and manpower at it; if that were so, all the Millennium Prize problems in mathematics would have been solved ages ago.

It instead requires a careful solving of the conceptual issue in tandem with the construction of a novel mathematical concept; such events are exceedingly rare occurrences, and they require creativity, imagination, vision and boldness beyond the mere technical mastery taught in schools and for which graduate students are selected. Newton, Euler, Gauss and Grothendieck are prime examples of mathematicians who displayed all the required characteristics to achieve such things.
 
  • #173
Auto-Didact said:
All fundamental physical theories known so far were capable of being derived from first principles eventually, all except for QT, which moreover cannot easily be generalized or unified with other physical theories without extreme heuristics e.g. perturbation theory in the case of QFT.
Since I don't care about psychology, which is far too complicated for me as a physicist, let me just pick this quote.

I don't know what you mean by "first principles". For me, what has turned out a posteriori, after about 400 years of scientific research since Galilei and Newton, to be something like "first principles" are symmetry principles, and a great deal of QT relies on these principles. I don't know in which sense you mean that QT is not derivable from "first principles" in contradistinction to classical physics.
 
  • #174
vanhees71 said:
Since I don't care about psychology, which is far too complicated for me as a physicist, let me just pick this quote.

I don't know what you mean by "first principles". For me, what has turned out a posteriori, after about 400 years of scientific research since Galilei and Newton, to be something like "first principles" are symmetry principles, and a great deal of QT relies on these principles. I don't know in which sense you mean that QT is not derivable from "first principles" in contradistinction to classical physics.
Derivation from first principles is a foundational research methodology used in theory construction which integrates the conceptual, the mathematical and the axiomatic based on an empirical fact. It can be done at multiple levels of completion; an example of a complete derivation from first principles would be inventing calculus, using it to define force and axiomatically defining space, time and mass all in tandem with each other in order to give a complete model of motion, an empirical phenomenon.
 
  • #175
Auto-Didact said:
Derivation from first principles is a foundational research methodology

This is getting way off the topic of this thread. Please confine discussion to the thread topic.
 
  • #176
Going back a bit in this thread...

I think I have learned a bit more about QFT from some of the great posts here, especially about the situations in which QFT is the theory to apply. Specifically, it seems as if QFT is best applied when scattering is being discussed and the results might include any of a variety of particles. On the other hand: while QFT might include elements that describe entanglement, apparently that is a weaker/less useful side of things. My sense is that this explains why entanglement experiments don't require the deeper theory of QFT - the basics of entanglement are well described by QM/QED without the need for any relativistic considerations (I don't consider descriptions of entangled photons as being relativistic, although others might).

And as to some of the discussions about "microcausality": As I now understand it, there are 2 key (and somewhat opposing) elements at play. Both relate to the act of performing a measurement on entangled Alice and considering what happens to remote Bob (the previously entangled partner):

1) The no-signaling theorem: the marginal probability of an outcome for Bob does NOT change due to Alice's choice of measurement. In short, Bob's outcomes are always random.
2) The experimentally demonstrated quantum nonlocality: the state of Bob DOES change due to Alice's choice of measurement. In short, Bob is cast into a pure state relative to Alice. (Both points are sketched numerically below.)
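Both points can be checked directly in the formalism. A minimal numerical sketch (assuming an ideal polarization singlet and projective polarizers; the helper names are ad hoc): Bob's conditional state depends on Alice's setting (point 2), yet averaging over her outcomes always returns ##I/2## (point 1), so nothing Bob can measure locally reveals her choice:

```python
import numpy as np

# Polarization singlet (|HV> - |VH>)/sqrt(2) in the basis |HH>,|HV>,|VH>,|VV>
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)
rho = np.outer(psi, psi)

def proj(theta):
    # Projector onto linear polarization at angle theta
    v = np.array([np.cos(theta), np.sin(theta)])
    return np.outer(v, v)

def bob_states(theta):
    # Bob's conditional states for Alice's two outcomes, plus his marginal
    marginal, conditionals = np.zeros((2, 2)), []
    for PA in (proj(theta), np.eye(2) - proj(theta)):
        M = np.kron(PA, np.eye(2))
        p = np.trace(M @ rho)                  # probability of Alice's outcome
        cond = M @ rho @ M / p                 # joint state given her outcome
        rho_B = np.trace(cond.reshape(2, 2, 2, 2), axis1=0, axis2=2)
        conditionals.append(rho_B)
        marginal += p * rho_B
    return conditionals, marginal

for theta in (0.0, 0.4, 1.2):
    conditionals, marginal = bob_states(theta)
    print(np.round(conditionals[0], 3))  # depends on Alice's setting (point 2)
    print(np.round(marginal, 6))         # always I/2 (point 1): no signaling
```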

I realize some of the posters here may not agree with my assessments, no problem there. But hopefully I am a little further along than before. :smile:
 
  • #177
Again, as soon as photons are involved, there's no other way than QED to describe them adequately. QFT of course contains everything about entanglement, as does any flavor of QT.

Also read again your QFT books about what "local interaction" and "microcausality" mean in contradistinction to long-ranged correlations due to entanglement. This resolves the apparent contradiction between the possibility of long-range correlations described by entanglement on the one hand and the fact that no instantaneous or even acausal influence of A's measurement on B's photons is necessary on the other.
 
  • #178
DrChinese said:
1) The no-signaling theorem: the marginal probability of an outcome for Bob does NOT change due to Alice's choice of measurement. In short, Bob's outcomes are always random.

Yes.

DrChinese said:
2) The experimentally demonstrated quantum nonlocality: the state of Bob DOES change due to Alice's choice of measurement. In short, Bob is cast into a pure state relative to Alice.

I think a better phrasing would be that the correlations between Bob's and Alice's measurement outcomes can violate the Bell inequalities. Putting it in terms of "change of state" raises issues (discussed already quite thoroughly in some recent thread or other) that don't need to be raised to describe the experimental facts of quantum nonlocality.
 
  • #179
vanhees71 said:
Yes sure, but it's an established fact after 100 years of testing QT. For me that's the only conclusion I can come to in view of all the Bell tests disproving local deterministic HV theories and confirming QT.
Indeed, but the fact that your choice of measurement selects the sample space is what leads to the measurement "creating", in some sense, the value. It means for example that we cannot think of the state ##|\uparrow_{z}\rangle## as actually representing a particle with angular momentum ##\frac{\hbar}{2}## about the ##z##-axis, for we must select the sample space via our measurement choice. It only means:
If you choose to measure ##S_{z}## then you will get ##\uparrow## with probability ##1##
Taking it otherwise, that is to actually mean the particle has ##\frac{\hbar}{2}## angular momentum about the ##z##-axis, leads to nonlocality issues.

It is in this sense that we are led to the measurement "creating the value". I don't think it is sloppy language.
 
  • #180
vanhees71 said:
1. Again, as soon as photons are involved, there's no other way than QED to describe them adequately.

2. QFT of course contains everything about entanglement, as does any flavor of QT.

1. And yet, entanglement fundamentally does not require photons and does not require QFT. Hard to make that a case for the more complex theory. The old case of spin-1/2 electrons brings about the fundamental issues of quantum nonlocality that we wish to resolve.

2. I guess I can't dispute that. But I certainly saw doubts about the entanglement side from a number of the posters. Apparently there are some entanglement issues that are not fully resolved. Although you seem satisfied, so that is a good recommendation.
 
  • #181
Kurt Gottfried and Tung-Mow Yan in “Quantum Mechanics: Fundamentals” (Second Edition):

“Thus it is finally a matter of taste whether one calls quantum mechanics local or not. In the statistical distribution of measurement outcomes on separate systems in entangled states there is no hint of non-locality. Quantum theory does not offer any means for superluminal signaling. But quantum mechanics, and by that token nature itself, does display perfect correlations between distant outcomes, even though Bell's theorem establishes that pre-existing values cannot be assigned to such outcomes and it is impossible to predict which of the correlated outcomes any particular event will reveal.” [emphasized by LJ]
 
  • #182
DarMM said:
Indeed, but the fact that your choice of measurement selects the sample space is what leads to the measurement "creating", in some sense, the value. It means for example that we cannot think of the state ##|\uparrow_{z}\rangle## as actually representing a particle with angular momentum ##\frac{\hbar}{2}## about the ##z##-axis, for we must select the sample space via our measurement choice. It only means:
If you choose to measure ##S_{z}## then you will get ##\uparrow## with probability ##1##
Taking it otherwise, that is to actually mean the particle has ##\frac{\hbar}{2}## angular momentum about the ##z##-axis, leads to nonlocality issues.

It is in this sense that we are led to the measurement "creating the value". I don't think it is sloppy language.
I think this formulation: "the fact that your choice of measurement selects the sample space is what leads to the measurement 'creating', in some sense, the value"

is what leads to the misunderstandings documented by @DrChinese's point of view. Taking the minimal statistical interpretation seriously, you should rather say: "the choice of measurements selects the ensembles you consider, given an ensemble defined by the preparation of the state".

In this way you get rid of the misunderstanding that the local measurement at A must lead to an instantaneous influence on the measured entities at B. It is in accordance with the fact that the temporal order of the measurements does not play any role (if the measurement events are space-like separated there's even no temporal order at all!), because you don't need the argument of the collapse proponent that the measurement at A causally affects the measurement at B. Both A and B can choose what they measure, and all you know from the state preparation are the probabilities for the outcomes of measurements at A and B. Sufficiently detailed measurement protocols and clever arrangements, as described by the delayed-choice setups of Bell tests (and these are realized in various "quantum-eraser setups" in real-world labs nowadays!), allow you to choose different subensembles based on the measurements in the measurement protocol.

For me the only consistent interpretation, i.e., obeying both the locality/microcausality principle of the usual QFT formalism and the possibility of stronger-than-classically-possible long-ranged correlations described through entanglement, is the minimal statistical interpretation, based on the assumption that the random nature of the outcome of measurements (no matter whether you describe them in idealized (gedanken) setups as complete measurements or more realistically, taking into account the non-ideality of real-world measurement devices in terms of the POVM formalism) is inherent in nature and not due to incomplete knowledge of the state as in classical statistical physics.

The important lesson to be learned from all these discussions is that, when in doubt on metaphysical concepts, which are necessarily unsharp compared to the scientific content of a theory, you have to go back to the successful formalism and find a metaphysical interpretation that is consistent with it, i.e., the empirically well-established facts about the behavior of nature as analyzed for over 100 years since the first discovery of quantum aspects of nature in 1900. The great success of modern natural science methodology is due to the decoupling of science from philosophy, and as far as I can see, philosophy can only a posteriori build a metaphysical world view after the scientific issues are clear, and then it might be of some value also for the understanding of the implications of the scientific discoveries for a more general worldview.
 
  • #183
vanhees71 said:
Taking the minimal statistical interpretation seriously, you should rather say: "the choice of measurements selects the ensembles you consider, given an ensemble defined by the preparation of the state".
I'm not sure it is this easy. So the initial preparation gives one an ensemble ##\rho##. When one selects a measurement you're saying it "selects the ensemble" you consider. What is the relation of this ensemble to the original ensemble given by a preparation? A subensemble or what?
 
  • #184
vanhees71 said:
I think this formulation: "the fact that your choice of measurement selects the sample space is what leads to the measurement 'creating', in some sense, the value"

is what leads to the misunderstandings documented by @DrChinese's point of view. Taking the minimal statistical interpretation seriously, you should rather say: "the choice of measurements selects the ensembles you consider, given an ensemble defined by the preparation of the state".

In this way you get rid of the misunderstanding that the local measurement at A must lead to an instantaneous influence on the measured entities at B. It is in accordance with the fact that the temporal order of the measurements does not play any role (if the measurement events are space-like separated there's even no temporal order at all!), because you don't need the argument of the collapse proponent that the measurement at A causally affects the measurement at B. Both A and B can choose what they measure, and all you know from the state preparation are the probabilities for the outcomes of measurements at A and B. Sufficiently detailed measurement protocols and clever arrangements, as described by the delayed-choice setups of Bell tests (and these are realized in various "quantum-eraser setups" in real-world labs nowadays!), allow you to choose different subensembles based on the measurements in the measurement protocol.

For me the only consistent interpretation, i.e., obeying both the locality/microcausality principle of the usual QFT formalism and the possibility of stronger-than-classically-possible long-ranged correlations described through entanglement, is the minimal statistical interpretation, based on the assumption that the random nature of the outcome of measurements (no matter whether you describe them in idealized (gedanken) setups as complete measurements or more realistically, taking into account the non-ideality of real-world measurement devices in terms of the POVM formalism) is inherent in nature and not due to incomplete knowledge of the state as in classical statistical physics.

The important lesson to be learned from all these discussions is that, when in doubt on metaphysical concepts, which are necessarily unsharp compared to the scientific content of a theory, you have to go back to the successful formalism and find a metaphysical interpretation that is consistent with it, i.e., the empirically well-established facts about the behavior of nature as analyzed for over 100 years since the first discovery of quantum aspects of nature in 1900. The great success of modern natural science methodology is due to the decoupling of science from philosophy, and as far as I can see, philosophy can only a posteriori build a metaphysical world view after the scientific issues are clear, and then it might be of some value also for the understanding of the implications of the scientific discoveries for a more general worldview.

Much wording around a simple question: Does an observable of a quantum system have the same value just before the measurement as is obtained by the measurement, or not? (The Copenhagen camp denies that an observable has any value before the measurement.)
 
  • #185
DarMM said:
I'm not sure it is this easy. So the initial preparation gives one an ensemble ##\rho##. When one selects a measurement you're saying it "selects the ensemble" you consider. What is the relation of this ensemble to the original ensemble given by a preparation? A subensemble or what?
In a highly idealized way you start with a preparation procedure. E.g., you prepare polarization-entangled photon pairs (say in the singlet state) via parametric downconversion. Then A and B measure the polarization of both photons. If both choose to measure the polarization in the same direction, each ideally sees just unpolarized photons. With sufficiently precise time stamps in each observer's measurement protocol they can relate the outcomes of their polarization measurements to each entangled pair and later select subensembles, i.e., they can select all pairs where A measured horizontal polarization and look at what B found for his photon in each pair, finding the 100% correlation: whenever A finds H, B finds V, and vice versa. It's of course a subensemble half as large as the original. The other partial ensemble is just complementary, and the total ensemble simply reflects that each single photon is perfectly unpolarized.

Of course, for a more realistic evaluation of real-lab experiments you have to take into account that all preparations and measurements are non-ideal, and you have to carefully evaluate the systematic and statistical errors. In the formalism that can (sometimes) be described by the POVM formalism. I'm not arguing against the POVM formalism but against the claim that it's something going beyond standard Q(F)T in the minimal interpretation.

Of course A and B can choose arbitrary directions for their polarization measurements, and you can still select subensembles and evaluate the correlations. You can choose appropriately different measurement setups to also demonstrate the violation of Bell's inequality. This is of course only possible on ensembles, because you need measurements of incompatible observables, which cannot be realized on a single system but only successively on ensembles of equally prepared systems. All these are indeed probabilistic statements about the outcomes of measurements and nothing more.
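The equal-angle case described here can even be sampled classically, which is exactly why it looks so innocent on its own; the tension with a common sample space only appears when incompatible settings are combined, as in the CHSH subexperiments. A toy version of the timestamped-protocol bookkeeping (ad hoc, ideal detectors, equal angles only):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Equal-angle polarization measurements on the singlet: the joint probabilities
# are P(H,V) = P(V,H) = 1/2 and P(H,H) = P(V,V) = 0 (perfect anticorrelation)
alice = rng.integers(0, 2, n)   # 0 = H, 1 = V
bob = 1 - alice                 # pair-by-pair anticorrelation

print(alice.mean())             # ~0.5: Alice alone sees unpolarized light
sub = bob[alice == 0]           # subensemble selected from the joint protocol
print((sub == 1).mean())        # 1.0: whenever A found H, B found V
```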
 
  • #186
Lord Jestocost said:
Much wording around a simple question: Does an observable of a quantum system have the same value just before the measurement as is obtained by the measurement, or not? (The Copenhagen camp denies that an observable has any value before the measurement.)
Within the minimal interpretation, which is a no-nonsense flavor of Copenhagen, it depends on the prepared state whether an observable has a determined value or not. If it does not have a determined value, you only know the probabilities for the outcomes of measurements of this observable, given the state the measured system is prepared in. That's it. There's no necessity for any additional elements within QT. It's a complete description of what's observed, including the randomness of the outcomes of measurements of observables that are not determined by the state preparation.
 
  • #187
vanhees71 said:
Of course A and B can choose arbitrary directions for their polarization measurements, and you can still select subensembles and evaluate the correlations
But one's choice of measurement produces a complete sample space that cannot be understood as a subensemble of the preparation. The state ##\rho## and one's choice of a context give a complete sample space that cannot be seen as a subensemble of another; that's basically what the CHSH setup tells you, as does Fine's theorem.

That's what's confusing about QM: the preparation alone does not give an ensemble, only the preparation together with a context does.
 
  • #188
vanhees71 said:
If it does not have a determined value, you only know the probabilities for the outcomes of measurements of this observable, given the state the measured system is prepared in
Yes, but the Kochen-Specker theorem, the CHSH inequality and Fine's theorem show you that just because ##|\uparrow_{z}\rangle## will give ##\frac{\hbar}{2}## with certainty when measured along the ##z##-axis, the particle does not actually possess ##\frac{\hbar}{2}## along the ##z##-axis prior to the measurement.

I mean, in a certain sense one just needs the Kochen-Specker theorem alone. If you cannot assign pre-existent values to variables, but then in the measurement one obtains a value, then how do you get around the fact that the value arises in the measurement?

I mean you are either saying there was a value prior to measurement or there wasn't. If there was, you run into contextuality problems and possible fine-tuning, and you're essentially talking about a hidden-variable theory. If you are saying the latter, then the value is literally created by the measurement process. I don't see what else one could be saying.
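The contextuality horn can be made completely finite. A brute-force sketch of the Mermin-Peres magic square (in one common convention: QM fixes each row product of the nine observables to ##+1## and each column product to ##+1##, except the third column, which is ##-1##). No noncontextual assignment of ##\pm 1## values satisfies all six constraints at once:

```python
from itertools import product

# Try every assignment of +-1 values to the 9 observables of the square
count = 0
for v in product([1, -1], repeat=9):
    g = [v[0:3], v[3:6], v[6:9]]                       # 3x3 grid of values
    rows_ok = all(r[0] * r[1] * r[2] == 1 for r in g)  # each row multiplies to +1
    cols = [g[0][j] * g[1][j] * g[2][j] for j in range(3)]
    cols_ok = cols[0] == 1 and cols[1] == 1 and cols[2] == -1
    count += rows_ok and cols_ok

print(count)  # 0: no noncontextual value assignment exists
```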
 
  • #189
I don't understand this statement. Of course, all subensembles together give the prepared ensemble (everything in an idealized sense of no losses). The choice of the subensembles of course depends on the specific measurement setup.

Concerning CHSH, I think the example in Wikipedia,

https://en.wikipedia.org/wiki/CHSH_inequality#A_typical_CHSH_experiment
is correctly described. You indeed need "four subexperiments", distinguished by different relative orientations of the polarization measurements. You can of course not do all four measurements on a single realization. So you select four different and mutually exclusive "subensembles" by each measurement, the total ensemble being given by the same state preparation in all subexperiments.
 
  • #190
vanhees71 said:
I don't understand this statement. Of course, all subensembles together give the prepared ensemble (everything in an idealized sense of no losses). The choice of the subensembles of course depends on the specific measurement setup.
But it is literally not true, due to the structure of quantum probability. By Fine's theorem, the variables in a CHSH test cannot all be considered as defined on a common sample space. Thus they cannot be considered to be drawn from a common ensemble. If they're not marginals on a common sample space, they cannot be thought of as subensembles. See Streater's book "Lost Causes in Theoretical Physics", Chapter 6. They're not sub-experiments.

However, the main point here is how you react to the Kochen-Specker theorem. It says that observables either have no pre-existent values, or they do but the values are contextual. Which one do you take to be true? If the former, how do you avoid the conclusion that the measurement creates the value?
 
  • #191
It's clear that vanhees71 isn't bothered by entanglement because it has a precise mathematical description and is empirically verified. He doesn't require any ontological explanation for entanglement and is confused by the fact that anyone else does. What confuses me is that he participates in foundations discussions, given his lack of appreciation for the ontological motives of the participants. Although, I must admit, I'm almost as bad when I point out that a desire for dynamical explanation, e.g., via causal mechanisms and/or hidden variables, is what has to be abandoned and replaced by constraint-based explanation (with no dynamical counterpart). For many people, that's equivalent to telling them to forget ontological explanation altogether :-)
 
  • #192
RUTA said:
It's clear that vanhees71 isn't bothered by entanglement because it has a precise mathematical description and is empirically verified. He doesn't require any ontological explanation for entanglement and is confused by the fact that anyone else does. What confuses me is that he participates in foundations discussions, given his lack of appreciation for the ontological motives of the participants.
Forgive the psychoanalysis, but from my experience with such matters, the fact that he sees that other serious physicists are worried about these issues and continues to participate in good faith demonstrates to me that he either feels he can actually alleviate our worries through his explanation, or - even though he believes his stance is pragmatically justified - he has some uncertainty regarding this important issue about which he subconsciously wishes to learn more; what better place to learn more directly than from those honestly expressing their doubts?
 
  • #193
Well this just broaches the topic of vanhees realism once more. All we can say is that for suitably prepared initial posts there are probabilities of various vanhees responses formed as irreversible records on our monitors. To go beyond this and posit an ontic vanhees is unwarranted by the formalism.

Considering that the physicsforums servers are in America and there are vanhees observations in Frankfurt, the correlation between these would require a nonlocally extended ontic vanhees. There is literally no other explanation.
 
  • #194
In any case, I am happy that @vanhees71 does continue to discuss these matters, because it helps to demonstrate - from the more rigorous contrary position - exactly how fragile the minimal interpretation actually is. Demonstrating this in the public domain may naturally elicit feelings of uneasiness among physicists - who are not used to encountering such fragile arguments in physics - but it is necessary for them to take these feelings seriously, because we are talking about the currently accepted foundations of physics: all of (theoretical) physics built on these foundations is what is at stake.

In the face of this uneasiness the scientist is actually forced to make an ethical decision which displays his character: either he confronts the matter head-on and honestly admits that he doesn't know, or he pretends to know and so abandons the very principles of science; those who opt for the latter are easy to detect, because they will tend to start arguing for censorship of further discussion. Self-censorship is the beginning of the death of science; it is very interesting to note that Peter Woit's latest blogpost is also on this very topic.

As Feynman puts it, the scientist - funded by, and therefore having obligations to, society - actually has only one choice: he must fearlessly admit that he does not know and live with the uncertainty that his beloved theory might actually be wrong; doing anything else is just an exercise in self-deception and - even worse - deception of others, including the public. Smolin has made this very point clearer than anyone else I have encountered in either the popular or professional literature.

As Feynman says: "I can live with doubt, and uncertainty, and not knowing. I think it's much more interesting to live not knowing than to have answers which might be wrong. I have approximate answers, and possible beliefs, and different degrees of certainty about different things, but I'm not absolutely sure of anything. There are many things I don't know anything about, such as whether it means anything to ask 'Why are we here?' I might think about it a little bit, and if I can't figure it out then I go on to something else. But I don't have to know an answer. I don't feel frightened by not knowing things, by being lost in the mysterious universe without having any purpose — which is the way it really is, as far as I can tell. Possibly. It doesn't frighten me."
 
  • #195
Regarding statistical interpretations of quantum mechanics, Paul Busch, Pekka J. Lahti and Peter Mittelstaedt put it in the following way in Chapter IV.3, "Ensemble and Hidden Variable Interpretations", of "The Quantum Theory of Measurement" (Second Revised Edition, Springer):

“The statistical interpretations of quantum mechanics can be divided into two groups, the measurement statistics and the statistical ensemble interpretations (Sects. III.3.2-3). These interpretations rely explicitly on the relative frequency interpretation of probability, and in them the meaning of probability is often wrongly identified with the common method of testing probability assertions...

In the measurement statistics interpretation the quantum mechanical probability distributions, such as ##p^A_T##, are considered only epistemically as the distributions for measurement outcomes ... In this pragmatic view quantum mechanics is only a theory of measurement outcomes providing convenient means for calculating the possible distributions of such outcomes. It may well be that such an interpretation is sufficient for some practical purposes; but it is outside the interest of this treatise to go into any further details, for example, to study the presuppositions of such a minimal interpretation. The measurement problem is simply excluded in such an interpretation ...

The ensemble interpretation of quantum mechanics describes individual objects only statistically as members of ensembles. This interpretation is motivated by the idea that each physical quantity has a definite value at all times. Thus no measurement problem would occur in this interpretation. Some merits of the ensemble interpretation of quantum mechanics are put forward, for example, in [Bal70,88, d'Espagnat 1971]. But these merits seem to consist only of a more or less trivial avoiding of the conceptual problems, like the measurement problem, arising in a realistic approach. In fact it is only in the hidden variable approaches that one tries to take seriously the idea of the value-definiteness of all quantities.”
 
  • #196
DarMM said:
But it is literally not true, due to the structure of quantum probability. By Fine's theorem, the variables in a CHSH test cannot all be considered as defined on a common sample space. Thus they cannot be considered to be drawn from a common ensemble. If they are not marginals on a common sample space, they cannot be thought of as subensembles. See Chapter 6 of Streater's book "Lost Causes in Theoretical Physics". They are not sub-experiments.

However, the main point here is how you react to the Kochen-Specker theorem. It says that observables either have no pre-existing values, or they do but the values are contextual. Which one do you take to be true? If the former, how do you avoid the conclusion that the measurement creates the value?
Again I don't understand. It must be possible to do the described experiments to test the CHSH relation. If you cannot do this within QT, you cannot even define the quantities entering this relation to test it.

In the example from the Wikipedia article quoted above, there are four incompatible experimental setups necessary. Each experiment very clearly subdivides an ensemble into subensembles according to the polarization measurements on the two photons. If this were not possible, you couldn't do this very experiment.

Since the measurements are mutually incompatible, you need to prepare four ensembles in the same state, do one of the four measurements to divide each of them into the appropriate subensembles, and then combine the probabilistic outcomes to check the CHSH relation.

It's like the simpler example of testing the uncertainty relation ##\Delta x \Delta p \geq \hbar/2##. Of course you cannot measure position and momentum accurately on one particle. Thus you need to prepare a first ensemble of single particles in the state ##\hat{\rho}##, do a very accurate position measurement, and evaluate ##\Delta x##. Then you have to prepare a second ensemble of single particles, again in the same state ##\hat{\rho}##, measure momentum very accurately, and evaluate ##\Delta p##. With these two incompatible measurements together you can test the uncertainty relation for particles prepared in the state ##\hat{\rho}##.
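(A minimal numerical sketch of this two-ensemble procedure - my own illustration, under the assumption of a minimum-uncertainty Gaussian preparation of position width ##\sigma##, for which ##\Delta x = \sigma## and ##\Delta p = \hbar/2\sigma##:)

```python
import numpy as np

# Sketch (illustration only): estimate Delta x and Delta p from two
# independently prepared ensembles, since no single run measures both.
rng = np.random.default_rng(0)
hbar = 1.0
sigma = 0.7      # position-space width of the prepared Gaussian state
N = 100_000      # ensemble size per (incompatible) measurement

# Ensemble 1: accurate position measurements, |psi(x)|^2 ~ N(0, sigma^2)
dx = rng.normal(0.0, sigma, N).std()

# Ensemble 2: same preparation, accurate momentum measurements; for this
# minimum-uncertainty state |phi(p)|^2 ~ N(0, (hbar / (2 * sigma))^2)
dp = rng.normal(0.0, hbar / (2 * sigma), N).std()

print(dx * dp, ">=", hbar / 2)  # ~0.5 up to sampling error: bound saturated
```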
 
  • #197
vanhees71 said:
Each experiment very clearly subdivides and ensemble in subensembles according to the polarization measurements on the two photons. If this were not possible, you couldn't do this very experiment
That's not the point. The point is that the ensembles found in each of the four measurement choices have mathematical features preventing them from being understood as selections from one common ensemble. If you measure ##A, C## and you measure ##B, D##, they cannot be thought of as subensembles of one ensemble, nor as alternate coarse-grainings of a common ensemble. They are simply two different ensembles. That is a mathematical fact, reflected in the fact that there is no Gelfand homomorphism subsuming all four observables.
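(One can see this concretely by asking for the common sample space directly. The following sketch - my own illustration, assuming SciPy is available - poses the existence of a joint distribution ##p(A,A',B,B')## reproducing the four quantum correlations as a linear-programming feasibility problem, which fails at the Tsirelson point, in line with Fine's theorem:)

```python
import itertools
import numpy as np
from scipy.optimize import linprog

# Sketch (illustration only): does a joint distribution p(A, A', B, B')
# over +/-1 outcomes reproduce the four measured correlations? Feasible
# iff the CHSH inequalities hold (Fine's theorem).
outcomes = list(itertools.product([-1, 1], repeat=4))  # (A, A', B, B')
r = 1 / np.sqrt(2)
# indices: 0=A, 1=A' (Alice), 2=B, 3=B' (Bob); quantum values -cos(2(a-b))
targets = {(0, 2): -r, (0, 3): r, (1, 2): -r, (1, 3): -r}

A_eq, b_eq = [], []
for (i, j), corr in targets.items():
    A_eq.append([o[i] * o[j] for o in outcomes])  # sum_o p(o) o_i o_j
    b_eq.append(corr)
A_eq.append([1.0] * len(outcomes))                # normalization of p
b_eq.append(1.0)

res = linprog(np.zeros(len(outcomes)), A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 1)] * len(outcomes))
print("common sample space exists:", res.success)  # False: no such p
```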

However, even the whole CHSH setup is a side point. The main point is the KS theorem, which says that either the values don't pre-exist the measurement, or they do but are contextual. It's one or the other. If you take the option that they don't pre-exist the measurement, then how do you avoid the measurement creating them?
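(Stated compactly: the KS theorem shows that for ##\dim\mathcal{H}\ge 3## there is no valuation ##v## assigning each observable ##\hat{A}## one of its eigenvalues such that
$$v\big(f(\hat{A})\big) = f\big(v(\hat{A})\big)$$
holds for all observables ##\hat{A}## and functions ##f## simultaneously, i.e. independently of the measurement context.)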
 
  • #198
DarMM said:
[...]

However, even the whole CHSH setup is a side point. The main point is the KS theorem, which says that either the values don't pre-exist the measurement, or they do but are contextual. It's one or the other. If you take the option that they don't pre-exist the measurement, then how do you avoid the measurement creating them?
The problem that makes this unconvincing to me is that a projective measurement is not a measurement of a state but an enforced change of state. Measuring a dynamical variable is never projective. Given that the 'value' before projection is irrelevant to the dynamics (the angular momentum is preserved), isn't KS just saying that we cannot assign values because we cannot know them?
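(For concreteness, the "enforced change of state" referred to here is the Lüders projection rule: for a projector ##\hat{P}_a## associated with outcome ##a##,
$$\hat{\rho} \;\longmapsto\; \frac{\hat{P}_a \hat{\rho} \hat{P}_a}{\mathrm{Tr}\big(\hat{P}_a \hat{\rho}\big)},$$
a nontrivial update of the state rather than a passive readout of a pre-existing value.)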

The remote-influence vs. nonlocal-correlations argument cannot be settled by the formalism.
The remote-influence believers should explain what is actually passed between the locations and how this can be experimentally detected.
 
  • #199
Mentz114 said:
The problem that makes this unconvincing to me is that a projective measurement is not a measurement of a state but an enforced change of state. Measuring a dynamical variable is never projective. Given that the 'value' before projection is irrelevant to the dynamics (the angular momentum is preserved), isn't KS just saying that we cannot assign values because we cannot know them?
No, this is pretty clear in its actual proof.
 
  • #200
DarMM said:
No, this is pretty clear in its actual proof.
The proof only applies to projections, not measurements.
What is the point in assigning values to irrelevant and unknowable properties?
No mathematical theorem can prove or disprove the existence of a real thing.
 