Classical chaos and quantum mechanics

Summary:
The discussion centers on the relationship between classical chaos and quantum mechanics, arguing that the unpredictability in quantum experiments is not due to chaos in the quantum state but rather in the classical observables that can be measured. It emphasizes that macroscopic measurements reflect only a small subset of observables, which can exhibit chaotic behavior despite the underlying linearity of quantum mechanics. The participants debate the implications of Bell's theorem and the assumptions made about measurement processes, suggesting that existing idealizations may not accurately represent reality. The conversation also touches on the nature of randomness and probability in quantum mechanics, highlighting the complexities of interpreting measurement outcomes. Overall, the discussion seeks to reconcile deterministic quantum motion with the probabilistic nature of observed phenomena.
  • #31
A. Neumaier said:
I challenge the applicability to reality: Real states are not exactly of the form that they can be interpreted in terms of the CHSH inequality, and real observables cannot exactly be interpreted as random variables of the required form. Only idealizations that strip away most observable degrees of freedom except a few spin variables can be interpreted in these terms.

Can we modify any of the better known toy examples of Bell's inequality (e.g. the "EPR apparatus" of http://www.theory.caltech.edu/classes/ph125a/istmt.pdf or the consumer survey of http://quantumuniverse.eu/Tom/Bells Inequalities.pdf) to illustrate how the thermal approach would predict (or at least tolerate) a violation of the inequality?

Those toy examples deal with reality at the level of probability. They don't postulate any underlying state vectors as the causes of probability. What kind of ensemble is attributed thermodynamic properties in the thermal approach to QM? Are these ensembles whose individuals are described by state vectors? Or are they ensembles whose individuals have only classical properties?
 
  • #32
A. Neumaier said:
This is only because in interpreting classical mechanics the idealization is made that the observation can be done in principle without error. But a finite subsystem of a classical system cannot get perfect information about a disjoint classical subsystem with which it interacts. Once this is properly modeled, the situation becomes not very different from the quantum situation.

I certainly agree with the bold statement, so the point I made can be raised even in classical mechanics.

However, I think there is a coincidental reason (General Relativity) why we kind of "get away" with this in classical mechanics. GR describes in an alternative way what might otherwise have to be explained in a more complex way, and this is one good reason why GR is preferred in classical mechanics. But in a unification, we might in fact need the more complex theory.

Confession: I (secretly) think that this "issue" in classical mechanics may provide a deeper insight into what gravity is, in a way that makes the marriage with Standard Model physics more natural. This idea also implies that gravity is an emergent phenomenon at "low" energy, explained by a balance between universal negotiations, which are attractive, and inertia, which resists them. After all, GR describes how matter and energy define curvature, but it does not explain the mechanism in terms of something else.

But I cannot see how to understand unification in terms of inference while ignoring this information-coding limitation. I cannot ignore this detail.
A. Neumaier said:
Physics is concerned with objectively valid statements about Nature, not with subjective views.
Superficially this is of course true.

However, a more careful analysis of the inferences suggests that things are more complicated.

An inference is, by definition, conditional on its choice of inference logic; this is what I mean by subjective. And part of my understanding of QM is that inferences are indistinguishable from physical interactions. Two interacting systems are effectively making measurements on each other, but in general without a classical backdrop. This is the hard part to grasp; it needs to be solved, and it likely implies a reconstruction of quantum theory, yielding current QM as the limiting case of a classically dominant observer (so that the information-coding limit never becomes "relevant").

What we expect of a stable situation is a universe where the observers (= matter content; here I count any piece of matter as an observer) "populating it" are interacting with each other in such a way that the laws of interaction, as inferred by any of these observers, are consistent.

Then either the situation is stable, and both the population of observers (matter) and the laws are stable. This is, by the way, what we expect in our universe today.
Or the situation is not stable, and both the population of observers and the laws are evolving (maybe in the early big bang, at the TOE energy scale). This is also, in essence, Smolin's idea of evolving law (though Smolin doesn't present the whole picture I argue for here; on the specifics of evolving law he has the same view as me).

/Fredrik
 
  • #33
Stephen Tashi said:
Can we modify any of the better known toy examples of Bell's inequality (e.g. the "EPR apparatus" of http://www.theory.caltech.edu/classes/ph125a/istmt.pdf or the consumer survey of http://quantumuniverse.eu/Tom/Bells Inequalities.pdf) to illustrate how the thermal approach would predict (or at least tolerate) a violation of the inequality?

Those toy examples deal with reality at the level of probability. They don't postulate any underlying state vectors as the causes of probability. What kind of ensemble is attributed thermodynamic properties in the thermal approach to QM? Are these ensembles whose individuals are described by state vectors? Or are they ensembles whose individuals have only classical properties?
The thermal interpretation makes no predictions that deviate anywhere from quantum mechanics; so the standard examples of Bell violations by QM apply.
 
  • #34
Fra said:
may provide a deeper insight into what gravity is
I am primarily interested in interpreting the well-understood part of quantum theory.

Fra said:
An inference, is by definition conditional relative to its choice of inference logic, this is what i mean by subjective.
The logic used in quantum field theory, the deepest level currently understood, is still classical logic. The only subjective part is the choice of assumptions. Logic then objectively defines the possible conclusions.
 
  • #35
A. Neumaier said:
The thermal interpretation makes no predictions that deviate anywhere from quantum mechanics; so the standard examples of Bell violations by QM apply.

The toy examples illustrate situations where Bell's inequality does apply. I thought that perhaps a thermal interpretation of those situations would reveal why Bell's inequality need not apply.
 
  • #36
Very interesting thread,

I read the PO thread mentioned by Urs, and one thing that struck me was what Gerard 't Hooft (of all people) said:
Fine, but how then do space and time develop curvature? Do we have to dodge the question what that curvature really is?

I sort of balked; this is standard textbook stuff. See Chapter 7 of Ohanian, Gravitation and Spacetime.

Of course you can't show that space-time is curved (I discussed this at length with Steve Carlip once); it's a basic impossibility, because you can never tell whether it's actually curved or whether something (gravitons perhaps) simply makes it behave as if it were curved. Anyway, as the above shows, you can get the full EFEs from the linearised equation, which would fit in with Dr. Neumaier's interpretation. How you interpret them is up to you.

Thanks
Bill
 
  • #37
Stephen Tashi said:
The toy examples illustrate situations where Bell's inequality does apply. I thought that perhaps a thermal interpretation of those situations would reveal why Bell's inequality need not apply.
In QM, the standard examples violate the Bell inequalities. That is precisely why Bell's theorem rules out local hidden variable theories.
 
  • #38
A. Neumaier said:
In QM, the standard examples violate the Bell inequalities. That is precisely why Bell's theorem rules out local hidden variable theories.

I'm losing the thread of the thread!

In posts #9, #10, #11 @stevendaryl said your 3 assertions appear to contradict Bell's theorem. (He said "it" contradicted the theorem, and I don't understand whether he meant one particular assertion of the three.) You stated objections to Bell's theorem, which (as I understand it) states conditions under which hidden variables won't work. Is the main point of discussing the non-applicability of Bell's theorem to show that the variables postulated by the thermal interpretation are not hidden variables? Or does the thermal interpretation employ hidden variables (or statistics from them) in a manner that shows Bell's theorem does not apply to real experiments?
 
  • #39
Demystifier said:
By Ehrenfest theorem, localized wave-packets move according to the classical laws. In this sense linearity of QM is compatible with classical chaos.
Only for the free particle and the harmonic oscillator, since only then are the equations for the expectation value of position identical with the classical equations. This is because ##\vec{F}## is in general not linear in ##\vec{x}##: the Ehrenfest theorem gives (after some calculation for the standard Hamiltonian ##\hat{H}=\hat{\vec{p}}^2/(2m) + V(\hat{\vec{x}})##)
$$m \frac{\mathrm{d}^2}{\mathrm{d} t^2} \langle \vec{x} \rangle = \langle \vec{F}(\vec{x}) \rangle,$$
where ##\hat{\vec{F}}=-\nabla V(\hat{\vec{x}})##, but if ##\vec{F}## is not at most linear in ##\vec{x}##, you have
$$\langle \vec{F}(\vec{x}) \rangle \neq \vec{F}(\langle \vec{x} \rangle),$$
and the EoM for the averages a la Ehrenfest is not the same as the classical equation of motion.
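
To make the inequality concrete, here is a minimal numerical sketch (my own illustration, assuming NumPy; the Gaussian below merely stands in for ##|\psi|^2## of a wave packet, and the cubic force is a hypothetical example of a nonlinear ##\vec{F}##). For ##\sigma \to 0## the two numbers coincide, which is the well-localized regime discussed below.

Python:
import numpy as np

# Sample a Gaussian position distribution, standing in for |psi|^2
# of a wave packet centered at x0 with width sigma.
x0, sigma = 1.0, 0.5
x = np.random.default_rng(0).normal(x0, sigma, 1_000_000)

F = lambda y: -y**3  # force of the quartic potential V(x) = x^4/4, not linear in x

print(F(x.mean()))   # F(<x>)  ->  about -1.00
print(F(x).mean())   # <F(x)>  ->  about -(x0**3 + 3*x0*sigma**2) = -1.75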
 
  • #40
A. Neumaier said:
No. Nature need not be defended, only explained.
The natural sciences are even less ambitious: Nature needs neither to be defended nor explained, but only quantitatively and objectively observed, and the observations described. That the description can be brought into relatively simple mathematical form, in terms of General Relativity and relativistic quantum field theory at the currently most fundamental level (which is surely incomplete, because these two big parts have not yet been made consistent), is an amazing miracle without any rational explanation. What might be behind it is not a subject of the natural sciences and thus has no place in these forums (at least I hope so).
 
  • #41
vanhees71 said:
The natural sciences are even less ambitious: Nature needs neither to be defended nor explained, but only quantitatively and objectively observed, and the observations described. That the description can be brought into relatively simple mathematical form, in terms of General Relativity and relativistic quantum field theory at the currently most fundamental level (which is surely incomplete, because these two big parts have not yet been made consistent), is an amazing miracle without any rational explanation. What might be behind it is not a subject of the natural sciences and thus has no place in these forums (at least I hope so).

We don't even need to describe. We only need to predict.
 
  • #42
You mean to get the next round of funding for our theoretical studies ;-))).
 
  • #43
Stephen Tashi said:
I'm losing the thread of the thread!

In posts #9, #10, #11 @stevendaryl said your 3 assertions appear to contradict Bell's theorem. (He said "it" contradicted the theorem, and I don't understand whether he meant one particular assertion of the three.)

Perhaps I misunderstood the thermal interpretation and how/if classical chaos is involved, but let me describe why I said that.

Quantum-mechanically, there is an oddness about measurement. If you prepare an electron to be spin-up in the x-direction, and then measure its spin in the z-direction, you will get a definite result, either +1/2 or -1/2, but it seems to be nondeterministic. It's sort of puzzling where the nondeterminism comes from, because the dynamics, Schrödinger's equation, is deterministic. This isn't an argument of any kind; it's just a description of why something might seem puzzling.

Now, one way that you could make it seem less puzzling is by looking at a classical analog--metastable systems. A simple example might be a coin carefully balanced on its edge inside a box. It's neither heads nor tails. But if you gently shake the box, the coin will almost certainly make a transition to a stable configuration, either heads or tails. There is no feasible way to predict which result will occur.
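
A toy simulation of this kind of metastability (a sketch of my own, assuming NumPy; the double-well potential ##V(x) = -x^2/2 + x^4/4## plays the role of the box, with its unstable equilibrium at ##x = 0## as the balanced coin):

Python:
import numpy as np

rng = np.random.default_rng(0)

def settle(noise=1e-9, dt=1e-3, steps=30_000):
    # Overdamped motion dx/dt = -V'(x) = x - x**3 in the double well,
    # started at the unstable equilibrium x = 0 with a tiny random kick per step.
    x = 0.0
    for _ in range(steps):
        x += (x - x**3) * dt + noise * rng.normal()
    return int(np.sign(x))  # +1 ("heads") or -1 ("tails")

print([settle() for _ in range(10)])  # the signs look random, though the drift is deterministic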

Metastable systems are very relevant to measurement, quantum or otherwise, because a measurement of a microscopic quantity such as an electron's spin requires amplifying the microscopic property so that it makes a macroscopic difference. For example, spin-up leads to a visible dot on a piece of photographic paper at one position, and spin-down leads to a visible dot at a macroscopically different position. This kind of amplification requires that the measuring device be in a metastable state that can be nudged into a stable state by the tiniest of influences.

With that background as to how I was thinking of measurements, here is what I thought the thermal interpretation was saying: that quantum nondeterminism is explained by the metastability of the measurement devices (or, more generally, the environment). That's what I was saying is ruled out by Bell's theorem. Unlike the case of the coin on its edge in classical physics, it can't be the case that the results of a quantum measurement are deterministic but unpredictable in practice because of the macroscopic number of particles involved.
 
  • #44
stevendaryl said:
With that background as to how I was thinking of measurements, here is what I thought the thermal interpretation was saying: that quantum nondeterminism is explained by the metastability of the measurement devices (or, more generally, the environment). That's what I was saying is ruled out by Bell's theorem.

Does the conclusion of Bell's theorem rule out that explanation of non-determinism? Or do you mean that Bell's theorem rules out that explanation in the premises of the theorem?

Since Bell's theorem is a theorem in physics (rather than mathematics), it is remarkably difficult to find agreement on what the premises of the theorem are. For example, in the links I gave previously, the scenario for Bell's theorem is illustrated by populations of things and measuring devices in a deterministic setting. There are all sorts of ways to add stochastic elements to the scenario, and it isn't clear (to me) which are allowed by the premises of the theorem. For example, can the outcome of a measurement by a particular measuring device on a particular object be stochastic, with a distribution that depends only on the hidden variables? Can each measuring device have hidden variables? Can the entangled pairs be stochastically assigned their hidden variables from some fixed but unknown distribution of values?

In this paper http://www.ijqf.org/wps/wp-content/uploads/2014/12/Tumulka-Bell-paper.pdf , Bell is quoted as saying:

It is important to note that to the limited degree to which determinism plays a role in the EPR argument [and thus the proof of Bell's theorem], it is not assumed but inferred. [...] It is remarkably difficult to get this point across, that determinism is not a presupposition of the analysis.

However, the paper doesn't explicitly answer the questions posed above.

(I'm thinking of the theorem in the form: if such-and-such premises hold, then Bell's inequality is satisfied.)
 
  • #45
A. Neumaier said:
I only claim that we know already how to get certain probabilistic observable effects (namely those of hydrodynamics) from deterministic quantum mechanics, and this by a mechanism involving expectation values only. And I claim that a proof about an idealized situation (as in Bell type theorems) does not tell anything conclusive about the real, nonideal situation.

I'm confused about the terminology "deterministic quantum mechanics". What would that be?

One interpretation is that if we look at expected values of certain variables then applying (conventional) quantum mechanics gives us deterministic predictions about some of these expected values as functions of others. If that interpretation is correct then what is the role of "chaos", mentioned in the title of the thread?

I suppose we could have deterministic equations involving expected values and re-introduce probability by arguing that these equations are chaotic. However, since measured averages can differ from the mathematical expected value, there is already a probabilistic aspect to applying the equations to predict experimental outcomes. Do we need the chaos to add even more variability?
 
  • #46
Stephen Tashi said:
Does the conclusion of Bell's theorem rule out that explanation of non-determinism? Or do you mean that Bell's theorem rules out that explanation in the premises of the theorem?

No, I think it's not ruled out ahead of time.

Stephen Tashi said:
For example, can the outcome of a measurement by a particular measuring device on a particular object be stochastic, with a distribution that depends only on the hidden variables? Can each measuring device have hidden variables? Can the entangled pairs be stochastically assigned their hidden variables from some fixed but unknown distribution of values?

What's confusing about those questions is that, yes, Bell's setup rules them out, but no, it wouldn't make any difference to the conclusion.

You can go through the derivation of Bell's theorem (in the anti-correlated spin-1/2 case) where you allow for
  1. hidden variables in the measurement devices
  2. nondeterminism in the measurement process
(The possibility that "entangled pairs [are] assigned their hidden variables from some fixed but unknown distribution of values" is already part of the derivation.)

However, in the case of EPR, the fact that you get perfect correlations or anticorrelations in certain circumstances pretty much rules out those two possibilities.

The more general assumption is that when Alice performs a measurement of the spin of her particle, she gets the result "spin-up" with probability

$$P_A(\lambda, \alpha, h_a),$$

where ##\lambda## is the hidden variable associated with the entangled pair, ##\alpha## is the orientation of her detector, and ##h_a## stands for some extra hidden variables associated with her detector. Similarly, we can assume that Bob gets "spin-up" with probability

$$P_B(\lambda, \beta, h_b),$$

where ##\beta## is the orientation of his detector and ##h_b## are his hidden variables. Now perfect anti-correlation implies that, no matter what values ##\lambda, h_a, h_b## have,

$$P_A(\lambda, \beta, h_a) \, P_B(\lambda, \beta, h_b) = 0.$$

That is, they can't both get spin-up if their detectors are at the same orientation ##\beta##.

We also have

$$(1 - P_A(\lambda, \beta, h_a)) \, (1 - P_B(\lambda, \beta, h_b)) = 0.$$

They can't both get spin-down, either. (Since spin-up and spin-down are the only possibilities, the probability of spin-down and the probability of spin-up must add up to 1.)

So now, let's ask whether it is possible for ##h_a## to play any role at all in the outcome.

Case 1: There is some value of ##h_a## such that ##P_A(\lambda, \beta, h_a) > 0##.

If ##P_A(\lambda, \beta, h_a) > 0##, this implies that ##P_B(\lambda, \beta, h_b) = 0## for all possible values of ##h_b##. So for this particular combination of ##\beta, \lambda##, the value of ##h_b## is irrelevant.

Case 2: There is some value of ##h_a## such that ##P_A(\lambda, \beta, h_a) = 0##.

That implies that ##1 - P_B(\lambda, \beta, h_b) = 0## for all possible values of ##h_b##, which means that for all ##h_b##, ##P_B(\lambda, \beta, h_b) = 1##. So the value of ##h_b## is irrelevant in this case as well.

These two cases imply that for any ##\beta, \lambda##, either ##P_B(\lambda, \beta, h_b) = 0## for all ##h_b##, or ##P_B(\lambda, \beta, h_b) = 1## for all ##h_b##. So if ##\lambda, \beta## are as in Case 1, Bob will definitely measure spin-down at orientation ##\beta##; if ##\lambda, \beta## are as in Case 2, Bob will definitely measure spin-up at orientation ##\beta##. Either way, Bob's result must be a deterministic function of ##\lambda## and ##\beta##.

Similarly, we conclude that Alice's result must be a deterministic function of ##\lambda## and ##\alpha##, her detector's orientation.

Bell just skips to this conclusion--either because he could immediately see it, or because he had been through the argument already.
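
The step from the two anti-correlation constraints to determinism can be checked mechanically (a sketch of my own, assuming NumPy; here pA, pB abbreviate ##P_A(\lambda, \beta, h_a)## and ##P_B(\lambda, \beta, h_b)## for one fixed ##\lambda, \beta##):

Python:
import numpy as np

# Scan all probability pairs (pA, pB) on a grid and keep those compatible with
# perfect anti-correlation at equal settings:
#   pA * pB == 0             (never both spin-up)
#   (1 - pA) * (1 - pB) == 0 (never both spin-down)
grid = np.linspace(0.0, 1.0, 101)
pA, pB = np.meshgrid(grid, grid)
ok = np.isclose(pA * pB, 0.0) & np.isclose((1 - pA) * (1 - pB), 0.0)

print(sorted(zip(pA[ok].tolist(), pB[ok].tolist())))  # [(0.0, 1.0), (1.0, 0.0)]

Only the two deterministic assignments survive, matching the case analysis above.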
 
  • #47
Stephen Tashi said:
I'm confused about the terminology "deterministic quantum mechanics". What would that be?
It's a "contradictio in adjecto" ;-)).
 
  • #48
vanhees71 said:
It's a "contradictio in adjecto" ;-)).
I assume it just means that quantum mechanics is indeed a deterministic theory, in that the law that evolves one quantum state into another is deterministic.

And if we care only about the expectation values, then we have deterministic predictions all the way, and indeed it seems very similar to classical mechanics. My incomplete understanding of Neumaier's objective is that in this neighbourhood there are some ideas and some points to make that relate somehow to the foundations of statistical methods in physics?

(Though my original point goes in almost a different direction: as I see it from the point of view of rational inference, the deductive "deterministic evolution" is itself merely an "expectation" that happens not to be challenged, because the laws are in equilibrium right now, as required by the unification of information about initial conditions and law. I just had a vague feeling that Neumaier was approaching this same thing from a different angle.)

/Fredrik
 
  • #49
No, QT is indeterministic, i.e., even with complete determination of the state, not all observables take a determined value. As with any physics, of course, QT is causal (even in the strong time-local sense), i.e., if the state is known at a time ##t=t_0##, it is known at any time ##t>t_0##, given the Hamiltonian of the system.
 
  • #50
vanhees71 said:
No, QT is indeterministic, i.e., even with complete determination of the state, not all observables take a determined value. As with any physics, of course, QT is causal (even in the strong time-local sense), i.e., if the state is known at a time ##t=t_0##, it is known at any time ##t>t_0##, given the Hamiltonian of the system.

I think you missed what I tried to say. (Apart from noting that determinism is different from causality, I agree with what you say.)

What I meant is: yes, QM does not predict individual events of the future; the "predictions" of QM are rather probability distributions over the future (which, roughly speaking, are the "expectation values", including, say, their standard deviations etc.; what kind of distribution we actually have is a different question). And THESE predictions are just as deterministic as Newton's laws of mechanics.

The problem of non-commuting observables, IMHO, has nothing at all to do with a lack of deductive rules. As I see it, it is merely a logical consequence of asking to specify properties of distributions in non-commuting ways. This in itself is a totally different issue from deductive vs. probabilistic predictions due to "uncertainty about LAW".

After all, how do you scientifically verify a prediction of QM? You surely can't do it with one or two samples; you more or less have to estimate the distribution of results in order to get a statistically significant result. And this is why what the LAWS of QM really predict is probability distributions over the future, and these are as exact as determinism is in classical mechanics.

/Fredrik
 
  • #51
vanhees71 said:
Only for the free particle and the harmonic oscillator, since only then are the equations for the expectation value of position identical with the classical equations. This is because ##\vec{F}## is in general not linear in ##\vec{x}##: the Ehrenfest theorem gives (after some calculation for the standard Hamiltonian ##\hat{H}=\hat{\vec{p}}^2/(2m) + V(\hat{\vec{x}})##)
$$m \frac{\mathrm{d}^2}{\mathrm{d} t^2} \langle \vec{x} \rangle = \langle \vec{F}(\vec{x}) \rangle,$$
where ##\hat{\vec{F}}=-\nabla V(\hat{\vec{x}})##, but if ##\vec{F}## is not at most linear in ##\vec{x}##, you have
$$\langle \vec{F}(\vec{x}) \rangle \neq \vec{F}(\langle \vec{x} \rangle),$$
and the EoM for the averages a la Ehrenfest is not the same as the classical equation of motion.
For well-localized wave packets, ##\langle \vec{F}(\vec{x}) \rangle = \vec{F}(\langle \vec{x} \rangle)## is a good approximation. Besides, without the Ehrenfest theorem, how would you explain that classical physics is a good approximation at the macroscopic level?
 
  • #52
Stephen Tashi said:
I'm confused about the terminology "deterministic quantum mechanics". What would that be?
vanhees71 said:
It's a "contradictio in adjecto" ;-)).
Deterministic quantum mechanics means that the quantum state satisfies a deterministic dynamical law, the Schrödinger (resp. for mixed states the von Neumann) equation.

In the thermal interpretation, the beables (in Bell's sense) are all expectation values and their deterministic dynamics is given by the Ehrenfest theorem, which expresses the time derivative of any expectation value in terms of another expectation value. This is a dynamics of manifestly extended objects. In particular it is nonlocal (in Bell's sense, not in the sense of QFT), hence there is no contradiction with Bell's theorem. On the other hand, it satisfies extended causality, hence respects the requirements of relativity theory, including a propagation speed of information bounded by the speed of light.
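
For reference, the deterministic dynamics meant here is the standard Ehrenfest identity (stated for an operator ##A## without explicit time dependence):
$$\frac{\mathrm{d}}{\mathrm{d} t} \langle A \rangle = \frac{i}{\hbar} \langle [\hat{H}, A] \rangle,$$
whose right-hand side is itself an expectation value.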

Stephen Tashi said:
what is the role of "chaos", mentioned in the title of the thread? I suppose we could have deterministic equations involving expected values and re-introduce probability by arguing that these equations are chaotic. However, since measured averages can differ from the mathematical expected value, there is already a probabilistic aspect to applying the equations to predict experimental outcomes. Do we need the chaos to add even more variability?
Chaos is present (at least for the hydrodynamical expectation values) independently of whether or not it is needed. It produces probabilities independent of measurement, just as in classical mechanics. This is an advantage since one can model measurement as in the classical case, and deduce the necessary presence of uncertainty in measurements from the impossibility of cloning a state. (A no cloning theorem is also valid classically.)

stevendaryl said:
Metastable systems are very relevant to measurement, quantum or otherwise, because a measurement of a microscopic quantity such as an electron's spin requires amplifying the microscopic property so that it makes a macroscopic difference. For example, spin-up leads to a visible dot on a piece of photographic paper at one position, and spin-down leads to a visible dot at a macroscopically different position. This kind of amplification requires that the measuring device be in a metastable state that can be nudged into a stable state by the tiniest of influences.

With that background as to how I was thinking of measurements, here is what I thought the thermal interpretation was saying: that quantum nondeterminism is explained by the metastability of the measurement devices (or, more generally, the environment). That's what I was saying is ruled out by Bell's theorem.
Practical indeterminism in theoretically deterministic systems comes, like classical chaos, from local instabilities, for example (but not only) from tiny perturbations of metastable states. But as is known already from the existence of Bohmian mechanics, to which Bell's theorem does not apply, Bell's theorem has nothing to say about nonlocal deterministic models of quantum mechanics. Thus it doesn't rule out the thermal interpretation.

Stephen Tashi said:
Bell's theorem is a theorem in physics (rather than mathematics)
No. In (for example) the form stated earlier by Rubi, it is a purely mathematical theorem. Its application to physics is riddled with interpretation issues, since one needs interpretation to relate the mathematics to experiment.
 
  • #53
Fra said:
And if we care only about the expectation values, then we have deterministic predictions all the way, and indeed it seems very similar to classical mechanics. My incomplete understanding of Neumaier's objective is that in this neighbourhood there are some ideas and some points to make that relate somehow to the foundations of statistical methods in physics?
Indeed.
vanhees71 said:
No, QT is indeterministic, i.e., even with complete determination of the state, not all observables take a determined value.
It depends on what one declares to be the observables.

In the thermal interpretation the theoretical observables (beables in Bell's sense) are the expectations, and they satisfy a deterministic dynamics. In practice, only a small subset of these is approximately observable.

In Born's interpretation, the theoretical ''observables'' are unobservable operators. Calling unobservable things observables leads to apparent indeterminism. It was a misnomer that led to the strange, unfortunate situation in the foundations of quantum mechanics that has now persisted for nearly a hundred years.

The thermal interpretation completely rejects Born's interpretation while retaining all the formal structure of quantum mechanics and its relation to experiment.
 
  • #54
Demystifier said:
For well-localized wave packets, ##\langle \vec{F}(\vec{x}) \rangle = \vec{F}(\langle \vec{x} \rangle)## is a good approximation. Besides, without the Ehrenfest theorem, how would you explain that classical physics is a good approximation at the macroscopic level?
Ehrenfest's theorem only holds for conservative dynamics, i.e. if the whole environment is included in the state. To explain that classical physics is a good approximation at the macroscopic level needs much more argumentation than just a reference to Ehrenfest's theorem, since wave packets delocalize quite quickly, and the environment is intrinsically nonlocal. One needs careful arguments with decoherence to show the emergence of (dissipative) classicality for a subsystem.
 
  • #55
A. Neumaier said:
No. In (for example) the form stated earlier by Rubi, it is a purely mathematical theorem. Its application to physics is riddled with interpretation issues, since one needs interpretation to relate the mathematics to experiment.

Interpreting what @rubi said as mathematics depends on stating the definition of the CHSH inequality. The best explanation I've found so far is that the CHSH inequality is an inequality involving conditional expectations (https://physics.stackexchange.com/questions/237321/simplified-derivation-of-chsh-bell-inequalities). If that's the correct interpretation, I'd like to find an online article or class notes etc. that state the CHSH inequality explicitly as an inequality involving conditional expectations (i.e. state this fact instead of presenting it implicitly in the physics of experiments carried out by Alice and Bob).
 
  • #56
Stephen Tashi said:
Interpreting what @rubi said as mathematics depends on stating the definition of the CHSH inequality. The best explanation I've found so far is that the CHSH inequality is an inequality involving conditional expectations (https://physics.stackexchange.com/questions/237321/simplified-derivation-of-chsh-bell-inequalities). If that's the correct interpretation, I'd like to find an online article or class notes etc. that state the CHSH inequality explicitly as an inequality involving conditional expectations (i.e. state this fact instead of presenting it implicitly in the physics of experiments carried out by Alice and Bob).
I didn't write the inequality explicitly in my post, but this is the inequality that I meant:
##\left|\left<A_{\alpha}B_{\beta}\right>+\left<A_{\alpha}B_{\beta^\prime}\right>+\left<A_{\alpha^\prime}B_{\beta}\right>-\left<A_{\alpha^\prime}B_{\beta^\prime}\right>\right|\leq 2##
where
##\left<A_{\alpha}B_{\beta}\right> = \int_\Lambda A_\alpha(\lambda) B_\beta(\lambda) \,\mathrm d\mu(\lambda)##
 
  • #57
rubi said:
Bell's theorem at full rigour is of the form: Let ##A_\alpha, B_\beta : \Lambda\rightarrow[-1,1]## be random variables (for every ##\alpha,\beta\in[0,2\pi]##) on a probability space ##(\Lambda,\Sigma,\mathrm d\mu)##

rubi said:
I didn't write the inequality explicitly in my post, but this is the inequality that I meant:
##\left|\left<A_{\alpha}B_{\beta}\right>+\left<A_{\alpha}B_{\beta^\prime}\right>+\left<A_{\alpha^\prime}B_{\beta}\right>-\left<A_{\alpha^\prime}B_{\beta^\prime}\right>\right|\leq 2##
where
##\left<A_{\alpha}B_{\beta}\right> = \int_\Lambda A_\alpha(\lambda) B_\beta(\lambda) \,\mathrm d\mu(\lambda)##

If ##\alpha## is an index, what is the definition of ##\alpha'## ? Does it denote any arbitrary index possibly different than ##\alpha## ?

Are any of the random variables involved assumed to be pairwise independent?
 
  • #58
Stephen Tashi said:
If ##\alpha## is an index, what is the definition of ##\alpha'## ? Does it denote any arbitrary index possibly different than ##\alpha## ?
##\alpha,\alpha^\prime,\beta,\beta^\prime## can be any numbers in ##[0,2\pi]## and they needn't even be different. But they may be.

Stephen Tashi said:
Are any of the random variables involved assumed to be pairwise independent?
No assumptions other than the ones I listed are required.

In fact, the theorem is much more general. I just adapted it to the typical Alice/Bob experiment, where you would call the random variable ##A_\alpha## and so on. The general theorem goes as follows:

Let ##W,X,Y,Z: \Lambda \rightarrow [-1,1]## be random variables on the probability space ##(\Lambda,\Sigma,\mathrm d\mu)##. Then the inequality ##\left|\left<WY\right>+\left<WZ\right>+\left<XY\right>-\left<XZ\right>\right|\leq 2## holds.
Proof:
##\left|\left<WY\right>+\left<WZ\right>+\left<XY\right>-\left<XZ\right>\right|##
##= \left|\int_\Lambda (W(\lambda)(Y(\lambda)+Z(\lambda)) + X(\lambda)(Y(\lambda)-Z(\lambda)))\,\mathrm d\mu(\lambda)\right|##
##\leq \int_\Lambda (\left|W(\lambda)\right|\left|Y(\lambda)+Z(\lambda)\right| + \left|X(\lambda)\right|\left|Y(\lambda)-Z(\lambda)\right|)\,\mathrm d\mu(\lambda)##
##\leq \int_\Lambda (\left|Y(\lambda)+Z(\lambda)\right| + \left|Y(\lambda)-Z(\lambda)\right|)\,\mathrm d\mu(\lambda)##
##\leq \int_\Lambda 2 \,\mathrm d\mu(\lambda) = 2##
(The proof of the last inequality is left as an exercise to the reader. :wink:)​

Now in the situation of a typical Alice/Bob experiment, the random variables should refer to the measurement of spin variables of Alice (##A##) and Bob (##B##) along some angles ##\alpha,\beta\in[0,2\pi]## and the correlations one is interested in are correlations between a spin of Alice along some angle ##\alpha## and Bob along some angle ##\beta##, for any combinations of angles. Then one just needs to fill in ##W=A_\alpha##, ..., ##Z=B_{\beta^\prime}##. So we really just apply a general theorem in probability theory to a concrete physical situation.
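
A quick Monte Carlo check of the general theorem (a sketch of my own, assuming NumPy; the four response functions below are arbitrary choices, and any measurable functions into ##[-1,1]## on a common probability space would do):

Python:
import numpy as np

rng = np.random.default_rng(1)

# One common probability space: a million samples of a hidden variable lambda.
lam = rng.uniform(0.0, 2 * np.pi, 1_000_000)

# Four arbitrary random variables W, X, Y, Z with values in [-1, 1].
W, X = np.cos(3 * lam), np.sign(np.sin(lam))
Y, Z = np.cos(lam) ** 3, np.tanh(np.sin(2 * lam))

S = abs((W * Y).mean() + (W * Z).mean() + (X * Y).mean() - (X * Z).mean())
print(S)  # always <= 2 (up to sampling error), as the theorem guarantees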
 
  • #59
rubi said:
Let ##W,X,Y,Z: \Lambda \rightarrow [-1,1]## be random variables on the probability space ##(\Lambda,\Sigma,\mathrm d\mu)##.

Thank you! I see the difficulty of applying this theorem to experimental tests. The mathematical model says that for each realization of an outcome ##\lambda_0 \in \Lambda## we have simultaneously defined values for ##W(\lambda_0), X(\lambda_0), Y(\lambda_0), Z(\lambda_0)##. Mathematically we can define a random variable such as ##H = WXYZ## on ##\Lambda## and speak of ##\left<WXYZ\right>##. However, in the experimental tests of entanglement, we do not simultaneously realize all these random variables on a given outcome ##\lambda_0##.

One model for entanglement experiments is that each random variable ##V## is defined on a subset of ##\Lambda## which represents outcomes where ##V## was realized. A random variable that is a product such as ##WY## is defined on the intersection of the subsets associated with ##W## and ##Y##.
 
  • #60
rubi said:
I didn't write the inequality explicitly in my post, but this is the inequality that I meant:
##\left|\left<A_{\alpha}B_{\beta}\right>+\left<A_{\alpha}B_{\beta^\prime}\right>+\left<A_{\alpha^\prime}B_{\beta}\right>-\left<A_{\alpha^\prime}B_{\beta^\prime}\right>\right|\leq 2##
where
##\left<A_{\alpha}B_{\beta}\right> = \int_\Lambda A_\alpha(\lambda) B_\beta(\lambda) \,\mathrm d\mu(\lambda)##
Reading the posts in this thread, I thought I could ask the following question.

Do Bell inequalities need explicit experimental verification in special experiments aimed at checking the inequalities?

The violation of the classical CHSH ##\leq 2## inequality for two spins 1/2 is based on calculations of QM correlators like ##\left<A B\right>, \left<A B^\prime\right>, \left<A^\prime B^\prime\right>, \left<A^\prime B\right>##, where ##A## and ##B## are spin operators built from the Pauli matrices and ##\left<\cdot\right>## is an average over the singlet wave function. It is then easy to show that CHSH can reach ##2\sqrt{2} > 2##. The calculations are based on the rules of QM and are exact.

Now, if we think that CHSH ##\leq 2## should be preserved and try to make complicated experiments, we somehow implicitly assume that the rules of calculation that we used to obtain ##2\sqrt{2}## are not exact. But if that is so, how do we have the Standard Model of particle physics, which is a very precise confirmation of QM?

If it had been found, after Bell's 1964 paper, that CHSH is always ##\leq 2## in test experiments, this would mean that the rules of QM are not completely correct, in contradiction to all other experiments in particle physics, solid state physics, ...
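
That calculation is short enough to reproduce directly (a sketch of my own, assuming NumPy; the angles below are the standard CHSH-optimal settings in the x-z plane):

Python:
import numpy as np

# Pauli matrices and the singlet state (|01> - |10>)/sqrt(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def spin(theta):
    # Spin observable along the direction at angle theta in the x-z plane
    return np.cos(theta) * sz + np.sin(theta) * sx

def E(a, b):
    # Correlator <A(a) B(b)> in the singlet state; analytically -cos(a - b)
    return np.real(psi.conj() @ np.kron(spin(a), spin(b)) @ psi)

a, ap, b, bp = 0.0, np.pi / 2, np.pi / 4, -np.pi / 4
S = abs(E(a, b) + E(a, bp) + E(ap, b) - E(ap, bp))
print(S)  # 2*sqrt(2) ≈ 2.828, above the classical bound of 2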
 
