Bell vs Kolmogorov: Unravelling Probability Theory Limits

In summary: QT guarantees that the only states that survive post-measurement are the eigenstates of the measured observable's operator. In the absence of QT, any process could reach detailed balance.
  • #36
Killtech said:
Why would you expect a classical probability formulation to describe the macroscopic measurement device in a non-classical fashion?
You didn't make a claim about a classical probability formulation to describe the measuring device. You made a claim about "a full classical probability formulation of QT". That includes more than just the measuring device.

Killtech said:
The point is that this seems to be a proper description of the QT system in question, able to fully explain all its possible outcomes.
This is a stronger claim than "a classical probability formulation of the measuring device", and @vanhees71 is not making this stronger claim. Only you are. @vanhees71 said:

vanhees71 said:
The result is a quantum master equation, which usually is not a Markov process. Often it can be approximated by a Markov process
Again, the bolded part is crucial, and it does not support your (@Killtech) claim. Nobody disputes that in some cases, quantum aspects of the scenario don't make any significant difference and so you can approximate what is going on with a classical model. But you are making a stronger claim than that.
 
  • Like
Likes vanhees71
  • #37
PeterDonis said:
You didn't make a claim about a classical probability formulation to describe the measuring device. You made a claim about "a full classical probability formulation of QT". That includes more than just the measuring device.
But what more is there in describing the results if not all that can be observed? I fear I don't understand your requirement here. In case you are asking me for something like an ontology or the like, I am not fully familiar with these concepts, so I cannot tell myself what you are missing.

So please specify what your requirement for that statement is. Maybe that will help me find out what I am missing about QT which makes probability theory seem to break. Because so far, I don't see anything.

PeterDonis said:
Again, the bolded part is crucial, and it does not support your claim. Nobody disputes that in some cases, quantum aspects of the scenario don't make any significant difference and so you can approximate what is going on with a classical model. But you are making a stronger claim than that.
That part of the discussion is about another aspect. Here we were discussing to what extent parts of the process can or cannot be described using the familiar Markov framework. The statement you quoted specifically refers to the quantum master equation as a process.
 
  • #38
Killtech said:
what more is there in describing the results if not all that can be observed?
A model doesn't just describe results, it predicts them. If a classical model does not make correct predictions for all scenarios (and as the quote I gave from @vanhees71's post makes clear, it doesn't), it's not valid as an explanation for all scenarios. And your claim is about explaining results, not just describing them.
 
  • Like
Likes vanhees71
  • #39
Killtech said:
I know the quantum master equation, but I wonder if, for the aim of a better understanding/interpretation, a full classical probability formulation of QT isn't at least somewhat useful. And as far as I can see, I can't find any major obstacles other than terminology.

1. Measurement
What brings me to Markov is however not the regular time evolution (which the quantum master equation describes) but measurement. The way the calculation of probabilities works in projective measurements has the same signature as a discrete-time Markov chain. Basically all I am saying is that you can build a stochastic matrix out of rule 6 (as linked above) - or a Markov kernel if that is done for the entire state space. Nothing more is needed to define a discrete-time Markov chain. But it would then describe only a single measurement in a single experimental setup and nothing else. Is there any argument about that?
I guess there's some misunderstanding between us about what we mean by a Markov process. For me it describes a time evolution of the state without memory.

There is a hierarchy of equations for the time evolution of open quantum systems. In the first step you consider the time evolution of a closed system and then take the partial trace over the parts of the system you are not interested in but which interact with the part of interest. The resulting "master equation" for the state of the system of interest is non-Markovian.

In the next step one can often approximate this non-Markovian process by a Markov process, i.e., when the "memory time" is small compared to the typical time scales over which the relevant observables of the system change.
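To make that Markovian limit concrete, here is a minimal sketch (my own toy illustration in plain NumPy, with made-up operators and rates, not taken from the lecture notes): it integrates a Lindblad-type master equation, i.e. the memoryless approximation just described, for a two-level system decaying from its excited state.

```python
# Toy illustration (assumed operators/rates): Euler integration of the Lindblad
# master equation  d(rho)/dt = -i[H, rho] + gamma (L rho L^+ - 1/2 {L^+ L, rho})
# for a two-level system decaying from its excited state.
import numpy as np

hbar = 1.0
H = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)   # (1/2) sigma_z
L = np.array([[0, 0], [1, 0]], dtype=complex)           # lowering operator sigma_-
gamma = 0.1                                             # decay rate (made up)
rho = np.array([[1, 0], [0, 0]], dtype=complex)         # start in the excited state

dt, steps = 0.01, 5000
for _ in range(steps):
    comm = H @ rho - rho @ H
    diss = L @ rho @ L.conj().T - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L)
    rho = rho + dt * (-1j / hbar * comm + gamma * diss)

print(np.real(np.trace(rho)))   # stays ~1 (trace preserving)
print(np.real(rho[0, 0]))       # excited population ~ exp(-gamma * t): memoryless decay
```

The excited-state population decays exponentially with the chosen rate, which is exactly the memoryless behavior a Markovian master equation encodes.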
Killtech said:
2. Undisturbed time evolution
When it comes to the quantum master equation it is a huge simplification to utilize the linear formulation of the time evolution of states - but due to Born's rule you cannot fully utilize the linearity of the state space and the linearity of ensemble probabilities at the same time.

However, any time evolution of a deterministic system can still be written in terms of a Markov process. All you need is the Markov property that the time evolution depends only on the current state and not on the history of how it got there. That seems to be the case in QT, and the time evolution outside of measurement is actually deterministic, right?
As I said above, usually an open quantum system is only approximately described by a Markov process.

A measurement is nothing principally different than any other interaction between the system under consideration and other parts of the larger system relevant for the description of the system under consideration. Measurement devices are described by the same physical laws as any other "piece of matter".
Killtech said:
Now let's be clear that in QT the state space H is continuous and not discrete. So the master equation for such a system is usually given by the Fokker-Planck equation. Now the dimension of the Hilbert space is infinite, so... it's a very high-dimensional problem. However, the deterministic nature means we only have a drift term to worry about, with no diffusion, ##D(a_{n,t},t)=\frac 1 2 \sigma^2 (a_{n,t},t)=0## (where ##a_{n,t}## is the amplitude for the state ##\Psi_n##). Hmm, if we write it in terms of the amplitudes in the Hamiltonian eigenstate basis, it's actually not difficult to solve, since we have only a trivial linear drift ##\mu(a_{n,t}, t) = -i E_n \hbar^{-1} a_{n,t}##.
The Fokker-Planck equation is another simplification for special cases. Also there are both Markovian and non-Markovian versions of it. The former is equivalent to a Langevin equation with fluctuating forces described by white noise, the latter to one with colored noise and memory. About the latter I've even written a paper :-)) though it's not about the relation to quantum theory:

https://arxiv.org/abs/1905.09652
Killtech said:
So that would be the formal master equation for a continuously distributed ensemble given by a probability density ##\rho(a_1, a_2, ...)## over complex amplitudes for each basis state (we would have to limit it to a finite number of states/dimension for a probability density to exist, but whatever). For that matter it works the same for electrodynamics as it would for QT - but only for as long as there is no measurement which messes this up.

That said, I have no idea how QT actually deals with any non-trivial ensembles like the continuously distributed case I have above. I just don't see how a density matrix is able to handle such a case. Are such ensembles of no interest in QT?
What do you mean? The ensemble is of course described by a statistical operator, which in principle can be any self-adjoint positive semidefinite operator of trace 1. Of course in the most general case you get very complicated equations. E.g., you may have high-order correlations in the initial state, such that Wick's theorem doesn't apply any more. Concerning this a standard review paper is

K. Chou, Z. Su, B. Hao and L. Yu, Equilibrium and Nonequilibrium Formalisms made unified, Phys. Rept. 118, 1 (1985), https://doi.org/10.1016/0370-1573(85)90136-X

Killtech said:
You could potentially dumb this down to a discrete state space of interest, enabling a simpler matrix representation in some simplified cases and using other trickery.

3. Total time evolution
Lastly those two types of very different processes - measurement and undisturbed time evolution - have to be properly merged together to obtain the full "master equation" of QT for an experiment. For me that task sounds very much like what Markov decision processes can do.

That's it
So I hope that helped a little. Which of the 3 points is causing the biggest trouble to follow?

Thanks, will have to look into it. Anyhow, your link does not work for me. I get a "DOI not found".
As I said above, what bothers me most is the claim that measurements are something different from just interactions between the measured system and the measurement device. To the contrary, the approach via the formalism of open quantum systems aims at describing the measurement in terms of the standard dynamical description of QT.

Do you mean the link to Wolfgang Cassing's lecture notes? Here it is again; I checked that it works:

https://doi.org/10.1140/epjst/e2009-00959-x
 
  • #40
PeterDonis said:
You were talking about "a full classical probability formulation of QT". That's not what the theory of open quantum systems is. As @vanhees71 said:

The bolded phrase is the crucial one, and it does not support your claim.
Indeed, classical descriptions are approximations of the full quantum description. You have to "throw away" correlations. An example is the step from the Kadanoff-Baym equations (full quantum description of the one-particle phase-space distribution function) to semiclassical Boltzmann-Uehling-Uhlenbeck transport equations, which works via "gradient expansion". This is formally an expansion in powers of ##\hbar##, i.e., it applies when quantum effects average out over "macroscopically small but microscopically large" space-time scales.
 
  • #41
vanhees71 said:
I guess there's some misunderstanding between us, what we mean by Markov process. For me it's describing a time evolution of the state without memory.
It generally describes an evolution of a state without memory. The notion of time in this context is abstract.

vanhees71 said:
There is a hierarchy of equations for the time evolution of open quantum systems. In the first step you consider the time evolution of a closed system and then take the partial trace over the parts of the system you are not interested in but which is interacting with the part of interest. The resulting "master equation" for the state of the system of interest is non-Markovian.
When you talk about time evolution, do you mean a continuous type of measurement? The projective von-Neumann measurements don't formally advance time. So indeed I am not sure we are talking about the same thing.

vanhees71 said:
The Fokker-Planck equation is another simplification for special cases. Also there are both Markovian and non-Markovian versions of it. The former is equivalent to a Langevin equation with fluctuating forces described by white noise, the latter to one with colored noise and memory. About the latter I've even written a paper :-)) though it's not about the relation to quantum theory:

https://arxiv.org/abs/1905.09652
In the cases you describe and in the paper you linked there is noise. But in QT the time evolution of the state does not normally have a diffusion (unless in the presence of a continuous measurement). If the quantum system starts in an ensemble made of a single quantum state, it will never produce an ensemble made of more than one state. So there cannot be any kind of diffusion present.

This situation should be the same as if we wanted to describe an ensemble of solutions to the classical Maxwell equations. Neglecting any thermal interactions, the Fokker-Planck equation in such a case does not have a diffusion term, same as in QT. Both are governed by deterministic PDEs, so why should the mathematics differ between two things that are of the same type?

vanhees71 said:
As I said above, what bothers me most is the claim that measurements are something different from just interactions between the measured system and the measurement device. To the contrary, the aim of the approach via the description of measurements using the formalism for describing open quantum system aims at the description of the measurement in terms of the standard dynamical description of QT.
I don't exactly understand where you see a disagreement there at all. I never claimed that measurement is not just an interaction... because I indeed absolutely agree with you that it is an interaction between the system and the device. If I made any claim, it was solely that it is an interaction, as opposed to a purely passive extraction of data that has no impact on the system. Pointing out that the von-Neumann rules look like a Markovian state transition should make that clear, since you wouldn't need anything like them if nothing changed. That change/interaction is what prevents all random variables from having well-defined values at all times, and it makes a lot of sense. You don't have that effect if the measurement doesn't interact with/affect the state.

The thing is that QT gives laws for how this interaction unfolds that do differ from the normal time evolution given by the Hamiltonian operator. So I wonder, why not just take QT's word for it and accept them as they are? Whether these rules derive from some deeper truth that can be expressed via a unitary-operator time evolution or not is of no concern for me here.
 
  • #42
Killtech said:
It generally describes an evolution of a state without memory. The notion of time in this context is abstract.
Ok. Then I don't know what we are discussing.
Killtech said:
When you talk about time evolution, do you mean a continuous type of measurement? The projective von-Neumann measurements don't formally advance time. So indeed I am not sure we are talking about the same thing.
I thought we were discussing the time evolution of the measurement process as described by quantum theory, and that would mean we are discussing the description of the measured system, interacting with the measurement device, in terms of the formalism of open quantum systems. Since you mentioned Markov processes, I thought that's what you were referring to, and then it's in principle not true that this dynamics is described by a Markov process to begin with. The quantum-dynamical formalism always leads to a non-Markovian description, which however can often be approximated by a Markovian one.
Killtech said:
In the cases you describe and in the paper you linked there is noise. But in QT the time evolution of the state does not normally have a diffusion (unless in the presence of a continuous measurement). If the quantum system starts in an ensemble made of a single quantum state, it will never produce an ensemble made of more than one state. So there cannot be any kind of diffusion present.
That's of course utterly wrong. When you want to measure something you have to let the measured system interact with the measurement device in a way that it gets entangled with the measurement device. Then performing the partial trace over the measurement device's degrees of freedom leads to a mixed state of the measured system.
Killtech said:
This situation should be the same as if we wanted to describe an ensemble of solutions to the classical Maxwell equations. Neglecting any thermal interactions, the Fokker-Planck equation in such a case does not have a diffusion term, same as in QT. Both are governed by deterministic PDEs, so why should the mathematics differ between two things that are of the same type?
The Fokker-Planck equation describes drag and diffusion. It's equivalent to a Langevin equation which contains a damping term and a random-noise force term. Ensemble-Averaging then kills the random-noise term and delivers an equation of motion for the ensemble average with a friction term.
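For what it's worth, here is a minimal numerical sketch of that last statement (my own toy illustration, not from the paper): an Euler-Maruyama simulation of the Langevin equation ##\mathrm{d}v = -\gamma v\,\mathrm{d}t + \sigma\,\mathrm{d}W##; averaging over the ensemble removes the noise term and leaves the damped mean.

```python
# Toy illustration: Euler-Maruyama for dv = -gamma*v*dt + sigma*dW.
# The ensemble average of v follows the purely damped equation, since the
# white-noise term averages to zero.
import numpy as np

rng = np.random.default_rng(0)
gamma, sigma = 1.0, 0.5       # damping and noise strength (made up)
dt, steps, n_traj = 0.01, 200, 5000

v = np.ones(n_traj)           # all trajectories start at v = 1
for _ in range(steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_traj)
    v += -gamma * v * dt + sigma * dW

print(v.mean())                        # ~ exp(-gamma * t): the noise is averaged away
print(np.exp(-gamma * steps * dt))     # the deterministic damped mean for comparison
```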
Killtech said:
I don't exactly understand where you see a disagreement there at all. I never claimed that measurement is not just an interaction... because I indeed absolutely agree with you that it is an interaction between the system and the device. If I made any claim, it was solely that it is an interaction, as opposed to a purely passive extraction of data that has no impact on the system. Pointing out that the von-Neumann rules look like a Markovian state transition should make that clear, since you wouldn't need anything like them if nothing changed. That change/interaction is what prevents all random variables from having well-defined values at all times, and it makes a lot of sense. You don't have that effect if the measurement doesn't interact with/affect the state.

The thing is that QT gives laws for how this interaction unfolds that do differ from the normal time evolution given by the Hamiltonian operator. So I wonder, why not just take QT's word for it and accept them as they are? Whether these rules derive from some deeper truth that can be expressed via a unitary-operator time evolution or not is of no concern for me here.
If you refer to the standard textbook collapse rules for von-Neumann filter measurements, then this is just an ad-hoc description working of course well FAPP, but it has nothing to do with Markovian or non-Markovian descriptions of the measurement process as a dynamical process of the measured system interacting with the measurement device.

Of course, I'm fully with you concerning the last paragraph. QT is the most comprehensive description of the behavior of matter we have, and there's not a single example where it doesn't work. Also, the notorious measurement problem seems to me to be solved by the formalism of open quantum systems discussed here, and thus by decoherence.
 
  • #43
vanhees71 said:
If you refer to the standard textbook collapse rules for von-Neumann filter measurement, then this is just an ad-hoc description working of course well FAPP, but it has nothing to do with markovian or non-markovian descriptions of the measurement process as a dynamical process of the measured system interacting with the measurement device.
I think I see where the misunderstanding comes from, and it is indeed my mistake. I take the axioms of QT as what I am given to work with and try to make sense of them, with the von-Neumann measurements being a good start.

However, I have conflated it with the much more general axiom of wave function collapse, which assumes a state transition
$$|\psi\rangle \rightarrow |a_i\rangle \text{ with probability } |\langle \psi|a_i\rangle|^2$$

That is a history-independent state transition on the Hilbert space that defines a proper discrete-time Markov chain that I can work with. The collapse assumption does not specify how long the collapse process takes but only deals with the initial and final state - so it is a black-box process. While there may be many flaws to this crude approach, it is inherently useful as it allows a de-facto classical treatment in probability theory. It is in practice the same as the probabilistic description of a dice roll or coin flip, packaging the exact physical description of the mechanics involved into an easy-to-handle black box.

But as quantum states cannot be directly measured, i.e. their state space is hidden from direct measurement, the statement must be interpreted in a wider sense, beyond a single experimental setup: no single experiment can distinguish any ensemble composition all on its own. It takes a vast number of experiments using different settings to get enough statistics to be able to identify an ensemble, yielding the probabilities for each state in the composition. So we can view measurement as a process producing an ensemble, which can in turn be viewed as the initial preparation of an ensemble for any follow-up experiment.

Having that, it leaves only the time evolution of the state outside of measurement to deal with - i.e. the part that is not subject to measurement itself, yet where the time evolution as described by the Hamiltonian is still critical. That part may be abstract, but it is deterministic and again does not require any memory. It is also defined on the same state space as the collapse process, so it is directly compatible with the Markov chains defined there. Both combined seem to me like a classical model of QT.
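To make that concrete, here is a minimal sketch of what I mean (just a toy illustration in NumPy; the Hamiltonian, time step and basis are made up): deterministic unitary evolution between measurements, and the von-Neumann collapse treated as a memoryless transition whose kernel is the Born rule.

```python
# Toy sketch: unitary evolution between measurements plus Born-rule collapse
# treated as a Markov transition over the measurement eigenbasis.
import numpy as np

rng = np.random.default_rng(1)
basis = np.eye(2, dtype=complex)                 # measurement eigenstates |a_0>, |a_1>
H = np.array([[0, 1], [1, 0]], dtype=complex)    # made-up Hamiltonian (sigma_x)

def unitary_step(psi, t, hbar=1.0):
    """Deterministic evolution psi -> exp(-i H t / hbar) psi."""
    w, V = np.linalg.eigh(H)
    U = V @ np.diag(np.exp(-1j * w * t / hbar)) @ V.conj().T
    return U @ psi

def collapse(psi):
    """One projective measurement: Born-rule probabilities act as the Markov kernel."""
    p = np.abs(basis.conj().T @ psi) ** 2
    i = rng.choice(len(p), p=p / p.sum())
    return basis[:, i], i

psi, outcomes = basis[:, 0], []
for _ in range(1000):                            # evolve, measure, repeat
    psi = unitary_step(psi, t=0.3)
    psi, i = collapse(psi)
    outcomes.append(i)

print(np.bincount(outcomes) / len(outcomes))     # empirical outcome frequencies
```

The only memory the chain needs is the current state vector itself, which is exactly the Markov property I am after.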
 
  • #44
DrChinese said:
If you assume "EPR/Bell realism" and "EPR/Bell locality", then Bell's Theorem can lead to some disagreement with traditional probability concepts. For example, you *can* get to negative probabilities with these assumptions.
No. Both are quite compatible with classical probability theory. So you cannot get negative probabilities from that.

Your link adds the predictions of QT to this mixture. Once they violate the Bell inequalities, which can be proven from your other assumptions, those "negative probabilities" are simply a consequence of the old and rather trivial logical insight that from a contradiction everything follows.
 
  • #45
DrChinese said:
Well, in the example I referenced: the quantum prediction (assuming EPR/Bell realism) leads to a violation of the first and third axioms. There are 8 subcases presented, and cases [2] and [7] have a combined probability of occurring of -10.36% (violating the first axiom, that all cases be non-negative real numbers) - either [2] or [7] or both must be negative.
No, this also assumes Einstein locality. With EPR realism alone you cannot reach such results. Proof by explicit construction: Bohmian mechanics. It is realistic, that means also EPR realistic, and gives quantum probabilities in a completely Kolmogorovian way.
DrChinese said:
If you think Kolmogorov is wrong to begin with, then maybe you can rescue local realism.
This would be quite stupid. Kolmogorovian probability also follows from the objective Bayesian interpretation, that means, essentially from the logic of plausible reasoning. Giving up logic is not a reasonable decision.

Or you simply reject (classical) realism, i.e. you require an observer dependent reality (a/k/a "contextuality"). The choice of measurement basis somehow influences the reality witnessed elsewhere.
DrChinese said:
EPR rejected this path, and concluded: "No reasonable definition of reality could be expected to permit this." Of course, EPR didn't know about Bell. Had they known, I believe that the EPR authors would have conceded the point: elements of reality are limited to those that can be simultaneously measured/predicted.
Of course, one can guess that Einstein would not have been strong enough to reject his own theory even if it had been in conflict with experiment. But it is equally plausible that he would have preserved EPR realism and accepted a return to a preferred frame. That's a question of belief.
 
  • Like
Likes gentzen
  • #46
PeterDonis said:
Does one exist? If so, please give a reference. Personal theories/speculations are not permitted. If you can't give a reference for such a formulation, it is off topic for discussion here.
Every realistic interpretation of QT starting with Bohmian mechanics will do it.
 
  • Like
Likes gentzen, Demystifier and Killtech
  • #47
Sunil said:
Or you simply reject (classical) realism, i.e. you require an observer dependent reality (a/k/a "contextuality"). The choice of measurement basis somehow influences the reality witnessed elsewhere.
The "choice of measurement basis" means simply the choice which observable you measure (in the usual textbook sense of a precise von Neumann measurement). Of course the result depends upon what you measure, i.e., the specific interaction of the measured system with the measurement device to measure precisely this chosen observable. The profound contradistinction between QT and what you call classical realism is that in QT it depends on the state the system is prepared in, which observables take determined/certain values and which "really" don't. In classical realism you assume that all observables always take determined values, and the probabilistic nature of the observations on microscopic objects come from our ignorance of some hidden observables.

Sunil said:
Of course, one can guess that Einstein would not have been strong enough to reject his own theory even if it had been in conflict with experiment. But it is equally plausible that he would have preserved EPR realism and accepted a return to a preferred frame. That's a question of belief.
I'm not sure how Einstein would have reacted to the experimental results concerning the violation of Bell's inequalities, clearly showing that local deterministic HV theories are ruled out by these experiments with high accuracy and significance. I'm sure he'd still look for more classical unified field theories, because for him the inseparability described by quantum theory and the indeterminism of observables were unacceptable. What we know for sure is that he didn't like the (in)famous EPR paper very much. He thought it missed the point of his criticism, which was indeed his quibble with "inseparability".
 
  • #48
This quote was falsely attributed to me; I had simply forgotten to delete these two lines, and now it's too late to correct this:
DrChinese said:
Or you simply reject (classical) realism, i.e. you require an observer dependent reality (a/k/a "contextuality"). The choice of measurement basis somehow influences the reality witnessed elsewhere.
I would not write this, given that I don't see any incompatibility between contextuality and classical realism.
vanhees71 said:
In classical realism you assume that all observables always take determined values, and the probabilistic nature of the observations on microscopic objects comes from our ignorance of some hidden observables.
Not necessarily. If there is some interaction, classical realism does not assume that the result of the interaction is predefined by the state of one side. That the interaction can reasonably be classified as a measurement, which then presupposes that what is measured already exists, is an additional hypothesis about the reality of this particular interaction.

If I ask my friend whether he wants to come over this afternoon, I do not assume he already has a prepared answer.
 
  • Like
Likes gentzen, Killtech and vanhees71
  • #49
Then you have to define what you mean by "classical realism". As usual, the word "realism" seems to have more meanings than there are people using it in discussions about QT ;-)).
 
  • Like
Likes DrChinese
  • #50
vanhees71 said:
The "choice of measurement basis" means simply the choice which observable you measure (in the usual textbook sense of a precise von Neumann measurement). Of course the result depends upon what you measure, i.e., the specific interaction of the measured system with the measurement device to measure precisely this chosen observable. The profound contradistinction between QT and what you call classical realism is that in QT it depends on the state the system is prepared in, which observables take determined/certain values and which "really" don't. In classical realism you assume that all observables always take determined values, and the probabilistic nature of the observations on microscopic objects come from our ignorance of some hidden observables.
Uh, but that makes for an unrealistically hard condition to fulfil in real life.

Your definition of realism requires that the outcome you measure does not just describe the state as it is after the measurement but must also describe the state as it was before? So your definition requires that the measurement is entirely passive and does not itself change the state. But you yourself claimed it to be an interaction, which seems like a contradiction to me - an interaction that changes nothing isn't an interaction.

I think your reasoning, more importantly, relies on the classical model of a measurement that does not affect what it measures.
 
  • #51
No. All I say is that given the state of the system all there is are the probabilities for the outcome of a measurement of a given observable. Whether or not after the measurement the system is described by a state where the measured observable takes a determined value depends on the specific setup of the measurement. E.g., if you measure a photon usually it's absorbed by the detector and thus it doesn't make sense to ask in which state it might be now, because it's simply destroyed by the measurement procedure. The opposite case is, e.g., a polarization filter. If the photon goes through it has a definite (linear) polarization state.
 
  • #52
Sunil said:
I would not write this, given that I don't see any incompatibility between contextuality and classical realism.

Not necessarily. If there is some interaction, classical realism does not assume that the result of the interaction is predefined by the state of one side. That the interaction can reasonably be classified as a measurement, which then presupposes that what is measured already exists, is an additional hypothesis about the reality of this particular interaction.

If I ask my friend whether he wants to come over this afternoon, I do not assume he already has a prepared answer.

You are ignoring the history of the debate that Bell addresses. EPR defined "elements of reality", which are measurement outcomes that can be predicted with certainty. Asking a friend to come over this afternoon does not fit that definition.

EPR said that if measurement outcomes could be predicted in advance, it was not reasonable to require that they be simultaneously predicted in advance. That is the dividing line that was drawn between one side (sometimes labeled as "classical realism") and contextuality - which says the choice of measurement basis is fundamental to the (statistical) results.

It is true that this places an additional requirement on the EPR realism (classical) side that is not present on the contextual side of the line. The contextual side *is* the minimal quantum viewpoint, as the actual formula for spin correlations only depends on the measurement context and nothing else. So you are mistaken, there is an important distinction/incompatibility between classical realism and contextuality in the quantum world.
 
  • #53
DrChinese said:
You are ignoring the history of the debate that Bell addresses. EPR defined "elements of reality", which are measurement outcomes that can be predicted with certainty. Asking a friend to come over this afternoon does not fit that definition.
And therefore this can be something not really existing now, without violating either classical or EPR realism.
DrChinese said:
EPR said that if measurement outcomes could be predicted in advance, it was not reasonable to require that they be simultaneously predicted in advance. That is the dividing line that was drawn between one side (sometimes labeled as "classical realism") and contextuality - which says the choice of measurement basis is fundamental to the (statistical) results.
For the realistic interpretations of QT this is not a problem at all: reality is defined by the configuration, the only real "measurements" are measurements of the configuration, and everything else is contextual (that means, the result of an interaction, like asking my friend).

DrChinese said:
It is true that this places an additional requirement on the EPR realism (classical) side that is not present on the contextual side of the line. The contextual side *is* the minimal quantum viewpoint, as the actual formula for spin correlations only depends on the measurement context and nothing else. So you are mistaken, there is an important distinction/incompatibility between classical realism and contextuality in the quantum world.
Of course there is a distinction - these are completely different things.
But EPR realism taken alone, without Einstein locality, gives nothing, because the measurement of Alice disturbs Bob's system, or at least the quantum formalism does not suggest any non-disturbance. And therefore it will be hard to construct a conflict.
 
  • #54
DrChinese said:
You ignoring the history of the debate that Bell addresses. EPR defined "elements of reality", which are measurement outcomes that can be predicted with certainty. Asking a friend to come over this afternoon does not fit that definition.
These arguments were made with the picture of classical pointlike particles in mind, for which classical realism may require such a thing. If, however, you reject the idea of dealing with objects of such a nature to begin with, then I very much doubt classical realism can make such a restriction. After all, contextuality is still present in simple scenarios, like asking a friend to come over, and it would be weird if realism couldn't deal with that in general.

EDIT:
Preexisting values require that the measurement does not itself change anything. That's the main assumption they made, at least implicitly.
 
Last edited:
  • #55
Killtech said:
These arguments were made with the picture of classical pointlike particles in mind, for which classical realism may require such a thing. If, however, you reject the idea of dealing with objects of such a nature to begin with, then I very much doubt classical realism can make such a restriction. After all, contextuality is still present in simple scenarios, like asking a friend to come over, and it would be weird if realism couldn't deal with that in general.

EDIT:
Preexisting values require that the measurement does not itself change anything. That's the main assumption they made, at least implicitly.

Again, you can't make any sense of a discussion about Bell's paper (fittingly entitled "On the Einstein Podolsky Rosen paradox") without going back to EPR. And you don't need to agree with EPR's definitions/assumptions, but if you don't, you simply won't be on the same page as the rest of the scientific community.
 
  • #56
Killtech said:
These arguments were made with the picture of classical pointlike particles in mind, for which classical realism may require such a thing. If, however, you reject the idea of dealing with objects of such a nature to begin with, then I very much doubt classical realism can make such a restriction. After all, contextuality is still present in simple scenarios, like asking a friend to come over, and it would be weird if realism couldn't deal with that in general.

EDIT:
Preexisting values require that the measurement does not itself change anything. That's the main assumption they made, at least implicitly.
In the EPR argument, the claim that measurements had pre-existing values was not an assumption, it was the conclusion of their argument. And I don’t think that the argument implicitly or otherwise assumes classical pointlike particles.

Certainly even classically, measurements don’t necessarily reveal a pre-existing value. The value of a measurement might be some kind of cooperative result of the interaction between the system being measured (the “measuree” to coin a term) and the system doing the measurement (the “measurer”). For example, if you are doing a survey to find out if someone is pro- or anti- gun control, the answer you get from them may depend on how the question is asked. I’m not sure that this is exactly what is meant by “contextuality”, so let me just call it an “emergent value ”, since the value emerges through the interaction between the “measurer” and the “measuree”.

So the question arises: is quantum uncertainty due to measurement results being emergent in this sense?

Einstein et al argued that it can’t be.

Alice and Bob are two experimenters who are far away from each other, performing measurements. Alice performs her measurement and gets a result. On the basis of her result, she knows with 100% certainty what result Bob will get (assuming he told her ahead of time which measurement he would perform).

So Einstein et al reasoned that there can’t be anything “emergent” about Bob’s result. If the result depended on details about Bob and his measurement device and exactly how the measurement was performed, then how can Alice predict with 100% certainty what result Bob will get, since she doesn’t know any of those details? She only knows (1) What measurement Bob will perform (because he told her ahead of time) and (2) what the result of her measurement was. No other details about Bob’s situation are relevant. Einstein thought that this situation could only be explained by the result being pre-determined. That was the conclusion, not the assumption.

There is nothing about particles or things being pointlike.
 
  • Like
Likes Demystifier, DrChinese, gentzen and 1 other person
  • #57
stevendaryl said:
So Einstein et al reasoned that there can’t be anything “emergent” about Bob’s result. If the result depended on details about Bob and his measurement device and exactly how the measurement was performed, then how can Alice predict with 100% certainty what result Bob will get, since she doesn’t know any of those details? She only knows (1) What measurement Bob will perform (because he told her ahead of time) and (2) what the result of her measurement was. No other details about Bob’s situation are relevant. Einstein thought that this situation could only be explained by the result being pre-determined. That was the conclusion, not the assumption.
One thing that bothers me about EPR is the use of "predict". If you can predict with 100% certainty, then it has to be an element of reality. But the prediction of Alice is not for the future of her measurement! So it is not really a prediction. She can conclude something about the "present", or, in more relativistic terminology, about the spacelike part of space-time relative to her measurement. Also, it is not possible, given all the information in the past of Bob's measurement, to make a 100% certain prediction about his outcome; you need Alice's result, which is not in his past.

I would like to see the whole setup in the form of an initial value problem: given a spacelike hypersurface that contains the event of the production of the entangled pair, all the initial data on it, and the evolution, make predictions on future spacelike hypersurfaces. Or, if Alice's outcome is needed, then choose a spacelike hypersurface containing that event and not containing Bob's measurement, and, given all the data on it, make the predictions, and so on.
 
  • Like
Likes WernerQH
  • #58
If the measurement events at A's and B's places are space-like separated, i.e., such that one of these measurements cannot causally influence the other, there's always an inertial frame where both events are simultaneous, and you can use the corresponding ##t = \text{const}## hypersurface to describe the experiment. Then you can Lorentz boost to any other inertial frame, where the time order is either that A's measurement happened before B's or the other way around. This shows that the outcome of the joint measurement is indeed independent of the time order of the measurements, and thus the measurements themselves cannot be mutually causally connected in any way. Formally that's ensured by the microcausality conditions of local observables in local relativistic QFT descriptions.
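To spell this out (a standard textbook step, added just for completeness): for two events with coordinate separation ##\Delta t##, ##\Delta x##, a boost with velocity ##\beta c## gives
$$\Delta t' = \gamma \left( \Delta t - \frac{\beta\,\Delta x}{c} \right),$$
and for space-like separation ##c|\Delta t| < |\Delta x|## the choice ##\beta = c \Delta t/\Delta x## satisfies ##|\beta| < 1## and yields ##\Delta t' = 0##; slightly larger or smaller ##\beta## puts either event first.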
 
  • #59
vanhees71 said:
If the measurement events at A's and B's places are space-like separated, i.e., such that one of these measurements cannot causally influence the other, there's always an inertial frame where both events are simultaneous, and you can use the corresponding ##t = \text{const}## hypersurface to describe the experiment. Then you can Lorentz boost to any other inertial frame, where the time order is either that A's measurement happened before B's or the other way around. This shows that the outcome of the joint measurement is indeed independent of the time order of the measurements, and thus the measurements themselves cannot be mutually causally connected in any way. Formally that's ensured by the microcausality conditions of local observables in local relativistic QFT descriptions.
Yes, but this is not related to my comment!
 
  • #60
That was referring to your request for a description of the whole setup as an initial-value problem.
 
  • #61
vanhees71 said:
That was referring to your request for a description of the whole setup as an initial-value problem.
Ok, but I want to see it done.
 
  • #63
martinbn said:
One thing that bothers me about EPR is the use of "predict". If you can predict with 100% certainty, then it has to be an element of reality. But the prediction of Alice is not for the future of her measurement!

I don’t understand this objection. Alice and Bob will get together later and tell each other what results they got. She’s making a prediction about what Bob will say.

I guess we can coin a new term, something like “teledict”, which would be defined as computing some fact about conditions that are not in your backwards light cone (whether or not they are in the future).
 
  • Like
Likes Demystifier and vanhees71
  • #64
stevendaryl said:
I don’t understand this objection. Alice and Bob will get together later and tell each other what results they got. She’s making a prediction about what Bob will say. I guess we can coin a new term, something like “teledict”, which would be defined as computing some fact about conditions that are not in your backwards light cone (whether or not they are in the future).
It is not an objection but something that bothers me because it is not clear to me. Changing it to what Bob would say is not really an improvement. Between Bob's measurement and the meeting with Alice there is nothing strange or unusual about the reality of the measurement outcome. So the question really is what Bob will measure.

What I don't understand is the following. If you can predict with 100% certainty the value of a dynamical variable, then the measurement only reveals a preexisting value, and the theory should account for that. That is how I understand EPR's notion of what is reasonable to consider an element of reality. And they argue that this applies to Bob's measurement. But for me, to predict the outcome of Bob's measurement means that, given all the data in the past light-cone of that event, you can calculate the value Bob will get. And that is not possible. What they do is look at Alice's outcome and infer Bob's. But that is not a prediction. It is almost like looking at the answer before saying what the answer is. Of course it is not quite like that, so I am not objecting to anything, I am only stating what bothers me. The two possible outcomes for Alice and Bob are the pairs {1,-1} and {-1,1}. So if you see the first half of the pair you can "predict" the other, so!
 
  • Like
Likes vanhees71
  • #65
stevendaryl said:
In the EPR argument, the claim that measurements had pre-existing values was not an assumption, it was the conclusion of their argument. And I don’t think that the argument implicitly or otherwise assumes classical pointlike particles.

Certainly even classically, measurements don’t necessarily reveal a pre-existing value. The value of a measurement might be some kind of cooperative result of the interaction between the system being measured (the “measuree” to coin a term) and the system doing the measurement (the “measurer”). For example, if you are doing a survey to find out if someone is pro- or anti- gun control, the answer you get from them may depend on how the question is asked. I’m not sure that this is exactly what is meant by “contextuality”, so let me just call it an “emergent value ”, since the value emerges through the interaction between the “measurer” and the “measuree”.

So the question arises: is quantum uncertainty due to measurement results being emergent in this sense?

Einstein et al argued that it can’t be.

Alice and Bob are two experimenters who are far away from each other, performing measurements. Alice performs her measurement and gets a result. On the basis of her result, she knows with 100% certainty what result Bob will get (assuming he told her ahead of time which measurement he would perform).

So Einstein et al reasoned that there can’t be anything “emergent” about Bob’s result. If the result depended on details about Bob and his measurement device and exactly how the measurement was performed, then how can Alice predict with 100% certainty what result Bob will get, since she doesn’t know any of those details? She only knows (1) What measurement Bob will perform (because he told her ahead of time) and (2) what the result of her measurement was. No other details about Bob’s situation are relevant. Einstein thought that this situation could only be explained by the result being pre-determined. That was the conclusion, not the assumption.

There is nothing about particles or things being pointlike.
If you present it like that, then I am sure I follow the argument there.

The argument against an emergent behavior can only be made on the grounds of the spatial separation and nothing else - and as such it is already based on an assumption only.

On the other hand, any attempt to understand the behavior of Bell experiments by studying their statistics leads to a requirement of emergent behavior: we can just write down all outcomes in a logic table, and each such logic table can be implemented via a logic circuit that realizes this statistic. We will find that any such implementation is fundamentally emergent - and therefore it runs into issues when the decisions of Alice and Bob are delayed until the last possible moment and the time between them is shorter than the travel time of logical signals within the circuit.

The statistical analysis via a logic circuit represents well the way classical Kolmogorov probability theory views Bell's results. So if we want to preserve it, we are required to postulate an emergent behavior, i.e. something akin to a "wave function" collapse interpreted in terms of some physical object collapsing.

But so far no experiment has been able to provide evidence for either interpretation, hence as of now both options remain equivalent alternative interpretations and neither can be excluded.
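As a minimal illustration of the "logic table" view (my own toy sketch, not a claim about any specific experiment): enumerating every deterministic assignment of ±1 outcomes to the two settings per side shows that no fixed table can push the CHSH combination past the classical bound of 2, while QT reaches ##2\sqrt 2##; that gap is what forces either emergent behavior of the kind described above or giving up such pre-assigned tables.

```python
# Toy sketch: every deterministic local "logic table" A(a), A(a'), B(b), B(b')
# with values in {+1, -1}; evaluate the CHSH combination
# S = E(a,b) + E(a,b') + E(a',b) - E(a',b'), where E(x,y) = A(x) * B(y).
from itertools import product

best = max(abs(A0*B0 + A0*B1 + A1*B0 - A1*B1)
           for A0, A1, B0, B1 in product([+1, -1], repeat=4))

print(best)        # 2: the classical (pre-assigned value) bound
print(2 * 2**0.5)  # ~2.83: the quantum-mechanical maximum (Tsirelson bound)
```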
 
  • #66
martinbn said:
What I don't understand is the following. If you can predict with 100% certainty the value of a dynamical variable, then the measurement only reveals a preexisting value, and the theory should account for that. That is how I understand EPR's notion of what is reasonable to consider an element of reality. And they argue that this applies to Bob's measurement. But for me, to predict the outcome of Bob's measurement means that, given all the data in the past light-cone of that event, you can calculate the value Bob will get. And that is not possible. What they do is look at Alice's outcome and infer Bob's. But that is not a prediction. It is almost like looking at the answer before saying what the answer is. Of course it is not quite like that, so I am not objecting to anything, I am only stating what bothers me. The two possible outcomes for Alice and Bob are the pairs {1,-1} and {-1,1}. So if you see the first half of the pair you can "predict" the other, so!
The important point is that the violation of Bell's inequality, together with the assumption of locality (in the sense that there is no faster-than-light influence of A's measurement event on B's measurement event and vice versa, if the measurement events are space-like separated), rules out that the measurement outcomes are predetermined. To the contrary, both A and B simply measure the polarization of exactly unpolarized photons (I guess the preparation of such entangled pairs provides the most accurate source of single unpolarized photons possible).

According to Q(F)T however, in the entangled photon state ##|HV \rangle-|VH \rangle##, there are 100% correlations between the outcomes of A's and B's measurements (provided they measure the polarization in precisely the same or precisely perpendicular directions), i.e., they are there due to the preparation of the photon pairs in this state. So indeed if A measured H, she immediately knows that B will find V for his photon, and that's what's indeed confirmed by experiments, but of course you can only confirm this correlation by comparing the measurement protocols after both measurements were done. So there is no contradiction between locality in the sense of QFT ("microcausality") and the 100% long-range correlations between the two photons due to their preparation in an entangled state.
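For completeness, a small numerical sketch of those correlations (my own illustration; the angles are arbitrary): joint passing probabilities for the state ##(|HV\rangle - |VH\rangle)/\sqrt 2## behind linear polarizers at angles ##\alpha## and ##\beta##.

```python
# Toy sketch: joint detection probabilities for (|HV> - |VH>)/sqrt(2) behind
# linear polarizers at angles alpha (A) and beta (B).
import numpy as np

def pol(theta):
    """Linear polarization state at angle theta in the H/V basis."""
    return np.array([np.cos(theta), np.sin(theta)])

H, V = pol(0.0), pol(np.pi / 2)
psi = (np.kron(H, V) - np.kron(V, H)) / np.sqrt(2)   # the entangled two-photon state

def p_both_pass(alpha, beta):
    """Probability that both photons pass their polarizers."""
    return abs(np.kron(pol(alpha), pol(beta)) @ psi) ** 2

print(p_both_pass(0.3, 0.3))               # ~0   : equal angles -> never both pass
print(p_both_pass(0.3, 0.3 + np.pi / 2))   # ~0.5 : perpendicular angles -> whenever A passes, B passes
```

The perfect (anti-)correlation at equal or perpendicular angles comes entirely from the prepared state; nothing in the sketch refers to the time order of the two detections.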
 
  • #67
martinbn said:
What I don't understand is the following. If you can predict with 100% certainty the value of a dynamical variable, then the measurement only reveals a preexisting value, and the theory should account for that.
The way you formulate it sounds deeply flawed. Take the statement and apply it to any deterministic theory, which in principle makes predictions about the future with 100% certainty. This view would strictly imply that the future already preexists for such a theory.

But surely we can agree that is not a general requirement for deterministic theories.
 
  • #68
Killtech said:
The way you formulate it sounds deeply flawed. Take the statement and apply it to any deterministic theory, which in principle makes predictions about the future with 100% certainty. This view would strictly imply that the future already preexists for such a theory.
Not the future, but the value. If you can predict with 100% certainty the location of a particle 10 years from now, then that means that “the location 10 years from now” is a function of the current state.
 
  • #69
martinbn said:
The two possible outcomes for Alice and Bob are the pairs {1,-1} and {-1,1}. So if you see the first half of the pair you can "predict" the other, so!
Yes, but I don’t understand what point you are making.

Let me introduce a fictional situation and see what you think about it. Suppose that there are a pair of coins. Each coin can be flipped to give a result of “heads” or “tails”. Looking at either coin in isolation reveals no pattern to the results, other than 50/50 chance for each outcome. Yet comparing the two coins shows that the nth flip of one coin always gives the opposite result of the nth flip of the other coin.

I think that most people confronted with such a coin would assume that either the results are predetermined, or that there is some kind of long range interaction between the coins.
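Just to be explicit about the fictional coins (a trivial sketch, my own illustration): predetermined, opposite results reproduce exactly that pattern.

```python
# Trivial sketch of the fictional coin pair: results are predetermined and opposite.
import numpy as np

rng = np.random.default_rng(2)
coin_a = rng.choice([+1, -1], size=10_000)   # predetermined flip results for coin A
coin_b = -coin_a                             # coin B always shows the opposite face

print(coin_a.mean(), coin_b.mean())          # each ~0: looks like a fair 50/50 coin alone
print((coin_a * coin_b).mean())              # exactly -1: the nth flips are always opposite
```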
 
  • Like
Likes vanhees71
  • #70
stevendaryl said:
Not the future, but the value. If you can predict with 100% certainty the location of a particle 10 years from now, then that means that “the location 10 years from now” is a function of the current state.
where the "current state" is unusually a collection of values relating to various locations, i.e. non local data.

So in the thinking of EPR, such a "current state" would only be allowed to contain data from within the specific light cone? I suppose that EPR had something like a particle picture in mind, where they thought of something like local hidden variables for each particle that carried the required information. Their whole argumentation makes a lot more sense with this in mind. But as of today we know that no such idea will work.

On the other hand, if the "current state" isn't restricted to the light cone, then there is no issue with viewing Bell's results as emergent.
 
