Classical chaos and quantum mechanics

In summary, Urs Schreiber is saying that the thermal interpretation of quantum mechanics provides a resolution to the problem of the origin of quantum probabilities, but this remains a conjecture until there is a proof.
  • #36
Very interesting thread,

I read the PO thread mentioned by Urs and one thing that struck me was what Gerard 't Hooft (of all people) said:
Fine, but how then do space and time develop curvature? Do we have to dodge the question what that curvature really is?

I sort of balked - this is standard textbook stuff - see Chapter 7 of Ohanian, Gravitation and Spacetime.

Of course you can't show that space-time is curved - I discussed this at length with Steve Carlip once - it's a basic impossibility, because you can never tell if it's actually curved or whether something (gravitons perhaps) simply makes it behave like it's curved. Anyway, as the above shows, you can get the full EFEs from the linearised equation, which would fit in with Dr. Neumaier's interpretation. How you interpret them is up to you.

Thanks
Bill
 
  • #37
Stephen Tashi said:
The toy examples illustrate situations where Bell's inequality does apply. I thought that perhaps a thermal interpretation of those situations would reveal why Bell's inequality need not apply.
In QM, the standard examples violate the Bell inequalities. That's the whole point: it is why Bell's theorem rules out local hidden-variable theories.
 
  • #38
A. Neumaier said:
In QM, the standard examples violate the Bell inequalities. That's the whole point: it is why Bell's theorem rules out local hidden-variable theories.

I'm losing the thread of the thread!

In posts #9, #10, #11 @stevendaryl said your 3 assertions appear to contradict Bell's theorem. (He said "it" contradicted the theorem, and I don't understand if he meant one particular assertion of the three...) You stated objections to Bell's theorem, which (as I understand it) states conditions where hidden variables won't work. Is the main point of discussing the non-applicability of Bell's theorem to show that variables postulated by the thermal interpretation are not hidden variables? Or does the thermal interpretation employ hidden variables (or statistics from them) in a manner that shows Bell's theorem does not apply to real experiments?
 
  • #39
Demystifier said:
By Ehrenfest theorem, localized wave-packets move according to the classical laws. In this sense linearity of QM is compatible with classical chaos.
Only for the free particle and the harmonic oscillator, since only then are the equations for the expectation value of position identical with the classical equations. The reason is that ##\vec{F}## need not be linear in ##\vec{x}##: for the standard Hamiltonian ##\hat{H}=\hat{\vec{p}}^2/(2m) + V(\hat{\vec{x}})##, the Ehrenfest theorem gives (after some calculation)
$$m \frac{\mathrm{d}^2}{\mathrm{d} t^2} \langle \vec{x} \rangle = \langle \vec{F}(\vec{x}) \rangle,$$
where ##\hat{\vec{F}}=-\nabla V(\hat{\vec{x}})##, but if ##\vec{F}## is not at most linear in ##\vec{x}##, you have
$$\langle \vec{F}(\vec{x}) \rangle \neq \vec{F}(\langle \vec{x} \rangle),$$
and the EoM for the averages a la Ehrenfest is not the same as the classical equation of motion.
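
A minimal numerical sketch of this point (an illustration added here, not part of the original post), assuming a Gaussian position distribution as a crude stand-in for a wave packet: for a linear force the two sides of the Ehrenfest relation agree, while for a cubic force ##\langle \vec{F}(\vec{x}) \rangle## and ##\vec{F}(\langle \vec{x} \rangle)## differ.

[CODE=python]
# Compare <F(x)> with F(<x>) for a Gaussian position distribution
# (mean 1.0, spread 0.5), for a linear and a cubic force, by sampling.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=0.5, size=1_000_000)

for name, F in [("linear F(x) = -x  ", lambda x: -x),
                ("cubic  F(x) = -x^3", lambda x: -x**3)]:
    print(f"{name}:  <F(x)> = {F(x).mean():+.3f}   F(<x>) = {F(x.mean()):+.3f}")

# linear: both sides are -1.000.
# cubic: <F(x)> is about -1.75 while F(<x>) = -1.00, since for a Gaussian
# E[x^3] = mu^3 + 3*mu*sigma^2 = 1 + 3*0.25 = 1.75.
[/CODE]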
 
  • #40
A. Neumaier said:
No. Nature need not be defended, only explained.
The natural sciences are even less ambitious: Nature need neither be defended nor explained but only quantitatively and objectively observed, and the observations described. That the description can be brought into relatively simple mathematical form, in terms of General Relativity and relativistic quantum field theory at the currently most fundamental level (which for sure is incomplete, because these two big parts have not yet been made consistent), is an amazing miracle without any rational explanation. What might be behind it is not a subject of the natural sciences and thus has no place in these forums (at least I hope so).
 
  • #41
vanhees71 said:
The natural sciences are even less ambitious: Nature need neither be defended nor explained but only quantitatively and objectively observed, and the observations described. That the description can be brought into relatively simple mathematical form, in terms of General Relativity and relativistic quantum field theory at the currently most fundamental level (which for sure is incomplete, because these two big parts have not yet been made consistent), is an amazing miracle without any rational explanation. What might be behind it is not a subject of the natural sciences and thus has no place in these forums (at least I hope so).

We don't even need to describe. We only need to predict.
 
  • Like
Likes vanhees71
  • #42
You mean to get the next round of funding for our theoretical studies ;-))).
 
  • Like
Likes atyy
  • #43
Stephen Tashi said:
I'm losing the thread of the thread!

In posts #9, #10, #11 @stevendaryl said your 3 assertions appear to contradict Bell's theorem. (He said "it" contradicted the theorem, and I don't understand if he meant one particular assertion of the three...)

Perhaps I misunderstood the thermal interpretation and how/if classical chaos is involved, but let me describe why I said that.

Quantum-mechanically, there is an oddness about measurement. If you prepare an electron to be spin-up in the x-direction, and then measure its spin in the z-direction, you will get a definite result, either +1/2 or -1/2, but it seems to be nondeterministic. It's sort of puzzling as to where the nondeterminism comes from, because the dynamics, Schrödinger's equation, are deterministic. This isn't an argument of any kind, it's just a description of why something might seem puzzling.

Now, one way that you could make it seem less puzzling is by looking at a classical analog--metastable systems. A simple example might be a coin carefully balanced on its edge inside a box. It's neither heads nor tails. But if you gently shake the box, the coin will almost certainly make a transition to a stable configuration, either heads or tails. There is no feasible way to predict which result will occur.
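
(A toy sketch added here for illustration, with a double-well potential standing in for the balanced coin: the final state is a deterministic function of the nudge, but the nudge is far too small to know in practice.)

[CODE=python]
# Overdamped particle starting at the unstable equilibrium x = 0 of the
# double well V(x) = x^4/4 - x^2/2, i.e. dx/dt = -V'(x) = x - x^3.
# A tiny random nudge decides deterministically, but unpredictably in
# practice, which stable state ("heads" x = +1 or "tails" x = -1) results.
import numpy as np

rng = np.random.default_rng(42)
for trial in range(5):
    eps = rng.normal(scale=1e-12)   # unobservably small perturbation
    x, dt = eps, 0.01
    for _ in range(10_000):         # integrate until the particle settles
        x += dt * (x - x**3)
    print(f"nudge = {eps:+.1e}  ->  settles at x = {x:+.2f}")
[/CODE]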

Metastable systems are very relevant to measurement, quantum or otherwise, because a measurement of a microscopic quantity such as an electron's spin requires amplifying the microscopic property so that it makes a macroscopic difference. For example, spin-up leads to a visible dot on a piece of photographic paper at one position, and spin-down leads to a visible dot at a macroscopically different position. This kind of amplification requires that the measuring device be in a metastable state that can be nudged into a stable state with the tiniest of influences.

With that background as to how I was thinking of measurements, here is what I thought the thermal interpretation was saying. I thought it was saying that quantum nondeterminism was explained by metastability of the measurement devices (or more generally, the environment). That's what I was saying was ruled out by Bell's theorem. Unlike the case of the coin on its edge in classical physics, it can't be the case that the result of a quantum measurement is deterministic but unpredictable in practice because of the macroscopic number of particles involved.
 
  • #44
stevendaryl said:
With that background as to how I was thinking of measurements, here is what I thought the thermal interpretation was saying. I thought it was saying that quantum nondeterminism was explained by metastability of the measurement devices (or more generally, the environment). That's what I was saying was ruled out by Bell's theorem.

Does the conclusion of Bell's theorem rule out that explanation of non-determinism? Or do you mean that Bell's theorem rules out that explanation in its premises?

Since Bell's theorem is a theorem in physics (rather than mathematics), it is remarkably difficult to find agreement on what the premises of the theorem are. For example, in the links I gave previously, the scenario for Bell's theorem is illustrated by populations of things and measuring devices in a deterministic setting. There are all sorts of ways to add stochastic elements to the scenario, and it isn't clear (to me) which are allowed by the premises of the theorem. For example, can the outcome of a measurement by a particular measuring device on a particular object be stochastic with a distribution that depends only on the hidden variables? Can each measuring device have hidden variables? Can the population of entangled pairs be stochastically assigned their hidden variables from some fixed but unknown distribution of values?

In this paper http://www.ijqf.org/wps/wp-content/uploads/2014/12/Tumulka-Bell-paper.pdf , Bell is quoted as saying:

It is important to note that to the limited degree to which determinism plays a role in the EPR argument [and thus the proof of Bell's theorem], it is not assumed but inferred. [...] It is remarkably difficult to get this point across, that determinism is not a presupposition of the analysis.

However, the paper doesn't explicitly answer the questions posed above.

(I'm thinking of the theorem in the form: if such-and-such premises hold, then Bell's inequality is satisfied.)
 
Last edited:
  • #45
A. Neumaier said:
I only claim that we know already how to get certain probabilistic observable effects (namely those of hydrodynamics) from deterministic quantum mechanics, and this by a mechanism involving expectation values only. And I claim that a proof about an idealized situation (as in Bell type theorems) does not tell anything conclusive about the real, nonideal situation.

I'm confused about the terminology "deterministic quantum mechanics". What would that be?

One interpretation is that if we look at expected values of certain variables then applying (conventional) quantum mechanics gives us deterministic predictions about some of these expected values as functions of others. If that interpretation is correct then what is the role of "chaos", mentioned in the title of the thread?

I suppose we could have deterministic equations involving expected values and re-introduce probability by arguing that these equations are chaotic. However, since measured averages can differ from the mathematical expected value, there is already a probabilistic aspect to applying the equations to predict experimental outcomes. Do we need the chaos to add even more variability?
 
  • #46
Stephen Tashi said:
Does the conclusion of Bell's theorem rule out that explanation of non-determinism? Or do you mean that Bell's theorem rules out that explanation in its premises?

No, I think it's not ruled out ahead of time.

For example, can the outcome of a measurement by a particular measuring device on a particular object be stochastic with a distribution that depends only on the hidden variables? Can each measuring device have hidden variables? Can the population of entangled pairs be stochastically assigned their hidden variables from some fixed but unknown distribution of values?

What's confusing about those questions is that, yes, Bell's setup rules them out, but no, it wouldn't make any difference to the conclusion.

You can go through the derivation of Bell's theorem (in the anti-correlated spin-1/2 case) where you allow for
  1. hidden variables in the measurement devices
  2. nondeterminism in the measurement process
(The possibility that "entangled pairs [are] assigned their hidden variables from some fixed but unknown distribution of values" is already part of the derivation.)

However, in the case of EPR, the fact that you get perfect correlations or anticorrelations in certain circumstances pretty much rules out those two possibilities.

The more general assumption is that when Alice performs a measurement of the spin of her particle, she gets a result "spin-up" with probability:

[itex]P_A(\lambda, \alpha, h_a)[/itex]

where [itex]\lambda[/itex] is the hidden variable associated with the entangled pair, [itex]\alpha[/itex] is the orientation of her detector, and [itex]h_a[/itex] is some extra hidden variables associated with her detector. Similarly, we can assume that Bob gets "spin-up" with probability:

[itex]P_B(\lambda, \beta, h_b)[/itex]

where [itex]\beta[/itex] is the orientation of his detector and [itex]h_b[/itex] are his hidden variables. Now perfect anti-correlation implies that, no matter what values [itex]\lambda, h_a, h_b[/itex] have,

[itex]P_A(\lambda, \beta, h_a) P_B(\lambda, \beta, h_b) = 0[/itex]

That is, they can't both get spin-up if their detectors are at the same orientation, [itex]\beta[/itex].

We also have:
[itex](1 - P_A(\lambda, \beta, h_a)) (1-P_B(\lambda, \beta, h_b)) = 0[/itex]

They can't both get spin-down, either. (Since spin-up or spin-down are the only possibilities, the probability of spin-down and the probability of spin-up must add up to 1)

So now, let's ask whether it is possible for the detector variables [itex]h_a[/itex] and [itex]h_b[/itex] to play any role at all in the outcome.

Case 1: There is some value of [itex]h_a[/itex] such that [itex]P_A(\lambda, \beta, h_a) > 0[/itex]

If [itex]P_A(\lambda, \beta, h_a) > 0[/itex], this implies that [itex]P_B(\lambda, \beta, h_b) = 0[/itex] for all possible values of [itex]h_b[/itex]. So for this particular combination of [itex]\beta, \lambda[/itex], the value of [itex]h_b[/itex] is irrelevant.

Case 2: There is some value of [itex]h_a[/itex] such that [itex]P_A(\lambda, \beta, h_a) = 0[/itex]

That implies that [itex]1 - P_B(\lambda, \beta, h_b) = 0[/itex] for all possible values of [itex]h_b[/itex], which means that for all [itex]h_b[/itex], [itex]P_B(\lambda, \beta, h_b) = 1[/itex]. So the value of [itex]h_b[/itex] is irrelevant in this case, as well.

These two cases imply that for any [itex]\beta, \lambda[/itex], either [itex]P_B(\lambda, \beta, h_b) = 0[/itex] for all [itex]h_b[/itex], or [itex]P_B(\lambda, \beta, h_b) = 1[/itex] for all values of [itex]h_b[/itex]. So that means that if [itex]\lambda, \beta[/itex] are as in Case 1, then Bob will definitely measure spin-down at orientation [itex]\beta[/itex]. If [itex]\lambda, \beta[/itex] are as in Case 2, then Bob will definitely measure spin-up at orientation [itex]\beta[/itex]. So Bob's result must be a deterministic function of [itex]\lambda[/itex] and [itex]\beta[/itex].

Similarly, we conclude that Alice's result must be a deterministic function of [itex]\lambda[/itex] and [itex]\alpha[/itex], her detector's orientation.

Bell just skips to this conclusion, either because he could immediately see it or because he had been through the argument already.
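
A tiny numerical companion to the Case 1 / Case 2 argument (added here as a check; the grid scan is illustrative only): the two perfect-anticorrelation constraints leave only the deterministic probability pairs.

[CODE=python]
# Scan candidate probabilities (pA, pB) that Alice and Bob each get
# spin-up at equal settings, keeping those compatible with perfect
# anticorrelation:
#   pA * pB == 0              (never both spin-up)
#   (1 - pA) * (1 - pB) == 0  (never both spin-down)
# The only survivors are (0, 1) and (1, 0): deterministic outcomes.
import numpy as np

grid = np.linspace(0.0, 1.0, 101)
pA, pB = np.meshgrid(grid, grid)
ok = np.isclose(pA * pB, 0.0) & np.isclose((1 - pA) * (1 - pB), 0.0)
print(sorted({(float(a), float(b)) for a, b in zip(pA[ok], pB[ok])}))
# -> [(0.0, 1.0), (1.0, 0.0)]
[/CODE]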
 
  • #47
Stephen Tashi said:
I'm confused about the terminology "deterministic quantum mechanics". What would that be?
It's a "contradictio in adjecto" ;-)).
 
  • Like
Likes bhobba
  • #48
vanhees71 said:
It's a "contradictio in adjecto" ;-)).
I assume it just means that quantum mechanics is indeed a deterministic theory, in that the law that evolves one quantum state into another is deterministic.

And if we care only about the expectation values, then we have deterministic predictions all the way, and indeed it seems very similar to classical mechanics. My incomplete understanding of Neumaier's objective is that in this neighbourhood there are some ideas and some points to make that relate somehow to the foundations of statistical methods in physics?

(Though my original point is almost in a different direction. As I see it from the point of view of rational inference, the deductive "deterministic evolution" is itself merely an "expectation" that HAPPENS not to be challenged, because the laws are in equilibrium right now, as required by the unification of information about initial conditions and law. I just had a vague feeling that Neumaier was approaching this same thing from a different angle.)

/Fredrik
 
  • #49
No, QT is indeterministic, i.e., even with complete determination of the state, not all observables take a determined value. Like any physics, of course, QT is causal (even in the strong time-local sense), i.e., if the state is known at a time ##t=t_0##, it's known at any time ##t>t_0##, given the Hamiltonian of the system.
 
  • #50
vanhees71 said:
No, QT is indeterministic, i.e., even with complete determination of the state, not all observables take a determined value. Like any physics, of course, QT is causal (even in the strong time-local sense), i.e., if the state is known at a time ##t=t_0##, it's known at any time ##t>t_0##, given the Hamiltonian of the system.

I think you missed what I tried to say. (Except that determinism is different from causality, I agree with what you say.)

What I meant is: yes, QM does not predict individual events of the future; the "predictions" of QM are rather probability distributions over the future (which, roughly speaking, are the "expectation values", including, say, their standard deviations; which kind of distribution we have is a different question). And THESE predictions are just as deterministic as Newton's laws of mechanics.

The problem of non-commuting observables, IMHO, has nothing at all to do with a lack of deductive rules. It is merely, as I see it, a logical consequence of asking to specify properties of distributions in non-commuting ways. This in itself is a totally different issue from deductive vs. probabilistic predictions due to "uncertainty of LAW".

After all, how do you scientifically verify the prediction of QM? You surely can't do it with one or two samples; you more or less have to estimate the distribution of results in order to get a statistically significant result. And this is why what the LAWS of QM really predict anyway is just probability distributions of the future, and these are as exact as determinism in classical mechanics.

/Fredrik
 
  • #51
vanhees71 said:
Only for the free particle and the harmonic oscillator, since only then are the equations for the expectation value of position identical with the classical equations. The reason is that ##\vec{F}## need not be linear in ##\vec{x}##: for the standard Hamiltonian ##\hat{H}=\hat{\vec{p}}^2/(2m) + V(\hat{\vec{x}})##, the Ehrenfest theorem gives (after some calculation)
$$m \frac{\mathrm{d}^2}{\mathrm{d} t^2} \langle \vec{x} \rangle = \langle \vec{F}(\vec{x}) \rangle,$$
where ##\hat{\vec{F}}=-\nabla V(\hat{\vec{x}})##, but if ##\vec{F}## is not at most linear in ##\vec{x}##, you have
$$\langle \vec{F}(\vec{x}) \rangle \neq \vec{F}(\langle \vec{x} \rangle),$$
and the EoM for the averages a la Ehrenfest is not the same as the classical equation of motion.
For well-localized wave packets, ##\langle \vec{F}(\vec{x}) \rangle = \vec{F}(\langle \vec{x} \rangle)## is a good approximation. Besides, without the Ehrenfest theorem, how would you explain that classical physics is a good approximation at the macroscopic level?
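
One can check numerically how localization controls the error (same illustrative Gaussian stand-in as in the sketch after post #39): for the cubic force ##F(x)=-x^3## and a Gaussian packet with mean ##\mu## and width ##\sigma##, the mismatch ##\langle F(x)\rangle - F(\langle x\rangle) = -3\mu\sigma^2## vanishes quadratically as the packet narrows.

[CODE=python]
# Ehrenfest mismatch <F(x)> - F(<x>) for F(x) = -x^3 and Gaussian packets
# of mean mu = 1 and shrinking width sigma; it scales as -3*mu*sigma^2.
import numpy as np

rng = np.random.default_rng(0)
for sigma in (0.5, 0.1, 0.02):
    x = rng.normal(loc=1.0, scale=sigma, size=1_000_000)
    mismatch = (-x**3).mean() - (-(x.mean()) ** 3)
    print(f"sigma = {sigma:4.2f}:  <F> - F(<x>) = {mismatch:+.5f}"
          f"   (predicted {-3 * sigma**2:+.5f})")
[/CODE]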
 
  • #52
Stephen Tashi said:
I'm confused about the terminology "deterministic quantum mechanics". What would that be?
vanhees71 said:
It's a "contradictio in adjecto" ;-)).
Deterministic quantum mechanics means that the quantum state satisfies a deterministic dynamical law, the Schrödinger (resp. for mixed states the von Neumann) equation.

In the thermal interpretation, the beables (in Bell's sense) are all expectation values and their deterministic dynamics is given by the Ehrenfest theorem, which expresses the time derivative of any expectation value in terms of another expectation value. This is a dynamics of manifestly extended objects. In particular it is nonlocal (in Bell's sense, not in the sense of QFT), hence there is no contradiction with Bell's theorem. On the other hand, it satisfies extended causality, hence respects the requirements of relativity theory, including a propagation speed of information bounded by the speed of light.

Stephen Tashi said:
what is the role of "chaos", mentioned in the title of the thread? I suppose we could have deterministic equations involving expected values and re-introduce probability by arguing that these equations are chaotic. However, since measured averages can differ from the mathematical expected value, there is already a probabilistic aspect to applying the equations to predict experimental outcomes. Do we need the chaos to add even more variability?
Chaos is present (at least for the hydrodynamical expectation values) independently of whether or not it is needed. It produces probabilities independent of measurement, just as in classical mechanics. This is an advantage since one can model measurement as in the classical case, and deduce the necessary presence of uncertainty in measurements from the impossibility of cloning a state. (A no cloning theorem is also valid classically.)

stevendaryl said:
Metastable systems are very relevant to measurement, quantum or otherwise, because a measurement of a microscopic quantity such as an electron's spin requires amplifying the microscopic property so that it makes a macroscopic difference. For example, spin-up leads to a visible dot on a piece of photographic paper at one position, and spin-down leads to a visible dot at a macroscopically different position. This kind of amplification requires that the measuring device be in a metastable state that can be nudged into a stable state with the tiniest of influences.

With that background as to how I was thinking of measurements, here is what I thought the thermal interpretation was saying. I thought it was saying that quantum nondeterminism was explained by metastability of the measurement devices (or more generally, the environment). That's what I was saying was ruled out by Bell's theorem.
Practical indeterminism in theoretically deterministic systems comes, like classical chaos, from local instabilities, for example (but not only) from tiny perturbations of metastable states. But as known already from the existence of Bohmian mechanics, where Bell's theorem does not apply, Bell's theorem has nothing to say about nonlocal deterministic models of quantum mechanics. Thus it doesn't rule out the thermal interpretation.

Stephen Tashi said:
Bell's theorem is a theorem in physics (rather than mathematics)
No. In (for example) the form stated earlier by Rubi, it is a purely mathematical theorem. Its application to physics is riddled with interpretation issues, since one needs interpretation to relate the mathematics to experiment.
 
Last edited:
  • Like
Likes dextercioby and bhobba
  • #53
Fra said:
And if we care only about the expectation values, then we have deterministic predictions all the way, and indeed it seems very similar to classical mechanics. My incomplete understanding of Neumaier's objective is that in this neighbourhood there are some ideas and some points to make that relate somehow to the foundations of statistical methods in physics?
Indeed.
vanhees71 said:
No, QT is indeterministic, i.e., even with complete determination of the state, not all observables take a determined value.
It depends on what one declares to be the observables.

In the thermal interpretation the theoretical observables (beables in Bell's sense) are the expectations, and they satisfy a deterministic dynamics. Practically, only a small subset of these is even approximately observable.

In Born's interpretation, the theoretical ''observables'' are unobservable operators. Calling unobservable things observables leads to apparent indeterminism. It was a misnomer that led to the strange, unfortunate situation in the foundations of quantum mechanics that has persisted now for nearly a hundred years.

The thermal interpretation completely rejects Born's interpretation while retaining all the formal structure of quantum mechanics and its relation to experiment.
 
Last edited:
  • #54
Demystifier said:
For well-localized wave packets, ##\langle \vec{F}(\vec{x}) \rangle = \vec{F}(\langle \vec{x} \rangle)## is a good approximation. Besides, without the Ehrenfest theorem, how would you explain that classical physics is a good approximation at the macroscopic level?
Ehrenfest's theorem only holds for conservative dynamics, i.e. if the whole environment is included in the state. To explain that classical physics is a good approximation at the macroscopic level needs much more argumentation than just a reference to Ehrenfest's theorem, since wave packets delocalize quite quickly, and the environment is intrinsically nonlocal. One needs careful arguments with decoherence to show the emergence of (dissipative) classicality for a subsystem.
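
For concreteness, a standard textbook estimate of that delocalization (added here for reference, not specific to any interpretation): a free Gaussian packet of initial width ##\sigma_0## spreads as
$$\sigma(t) = \sigma_0\sqrt{1 + \left(\frac{\hbar t}{2m\sigma_0^2}\right)^2},$$
so an electron initially localized to ##\sigma_0 = 1\,\text{nm}## roughly doubles its width already after ##t = \sqrt{3}\cdot 2m\sigma_0^2/\hbar \approx 3\times 10^{-14}\,\text{s}##. This is why the well-localized-packet assumption cannot be maintained for long without the environment.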
 
  • #55
A. Neumaier said:
No. In (for example) the form stated earlier by Rubi, it is a purely mathematical theorem. Its application to physics is riddled with interpretation issues, since one needs interpretation to relate the mathematics to experiment.

Interpreting what @rubi said as mathematics depends on stating the definition of the CHSH inequality. The best explanation I've found so far is that the CHSH inequality is an inequality involving conditional expectations (https://physics.stackexchange.com/questions/237321/simplified-derivation-of-chsh-bell-inequalities ). If that's the correct interpretation, I'd like to find an online article or class notes etc. that states the CHSH inequality as an inequality involving conditional expectations (i.e., states this fact instead of presenting it implicitly in the physics of experiments carried out by Alice and Bob).
 
  • #56
Stephen Tashi said:
Interpreting what @rubi said as mathematics depends on stating the definition of the CHSH inequality. The best explanation I've found so far is that the CHSH inequality is an inequality involving conditional expectations (https://physics.stackexchange.com/questions/237321/simplified-derivation-of-chsh-bell-inequalities ). If that's the correct interpretation, I'd like to find an online article or class notes etc. that states the CHSH inequality as an inequality involving conditional expectations (i.e., states this fact instead of presenting it implicitly in the physics of experiments carried out by Alice and Bob).
I didn't write the inequality explicitly in my post, but this is the inequality that I meant:
##\left|\left<A_{\alpha}B_{\beta}\right>+\left<A_{\alpha}B_{\beta^\prime}\right>+\left<A_{\alpha^\prime}B_{\beta}\right>-\left<A_{\alpha^\prime}B_{\beta^\prime}\right>\right|\leq 2##
where
##\left<A_{\alpha}B_{\beta}\right> = \int_\Lambda A_\alpha(\lambda) B_\beta(\lambda) \,\mathrm d\mu(\lambda)##
 
  • Like
Likes dextercioby
  • #57
rubi said:
Bell's theorem at full rigour is of the form: Let ##A_\alpha, B_\beta : \Lambda\rightarrow[-1,1]## be random variables (for every ##\alpha,\beta\in[0,2\pi]##) on a probability space ##(\Lambda,\Sigma,\mathrm d\mu)##

rubi said:
I didn't write the inequality explicitly in my post, but this is the inequality that I meant:
##\left|\left<A_{\alpha}B_{\beta}\right>+\left<A_{\alpha}B_{\beta^\prime}\right>+\left<A_{\alpha^\prime}B_{\beta}\right>-\left<A_{\alpha^\prime}B_{\beta^\prime}\right>\right|\leq 2##
where
##\left<A_{\alpha}B_{\beta}\right> = \int_\Lambda A_\alpha(\lambda) B_\beta(\lambda) \,\mathrm d\mu(\lambda)##

If ##\alpha## is an index, what is the definition of ##\alpha'##? Does it denote any arbitrary index, possibly different from ##\alpha##?

Are any of the random variables involved assumed to be pairwise independent?
 
  • #58
Stephen Tashi said:
If ##\alpha## is an index, what is the definition of ##\alpha'##? Does it denote any arbitrary index, possibly different from ##\alpha##?
##\alpha,\alpha^\prime,\beta,\beta^\prime## can be any numbers in ##[0,2\pi]## and they needn't even be different. But they may be.

Are any of the random variables involved assumed to be pairwise independent?
No assumptions other than the ones I listed are required.

In fact, the theorem is much more general. I just adapted it to the typical Alice/Bob experiment, where you would call the random variable ##A_\alpha## and so on. The general theorem goes as follows:

Let ##W,X,Y,Z: \Lambda \rightarrow [-1,1]## be random variables on the probability space ##(\Lambda,\Sigma,\mathrm d\mu)##. Then the inequality ##\left|\left<WY\right>+\left<WZ\right>+\left<XY\right>-\left<XZ\right>\right|\leq 2## holds.
Proof:
##\left|\left<WY\right>+\left<WZ\right>+\left<XY\right>-\left<XZ\right>\right|##
##= \left|\int_\Lambda (W(\lambda)(Y(\lambda)+Z(\lambda)) + X(\lambda)(Y(\lambda)-Z(\lambda)))\,\mathrm d\mu(\lambda)\right|##
##\leq \int_\Lambda (\left|W(\lambda)\right|\left|Y(\lambda)+Z(\lambda)\right| + \left|X(\lambda)\right|\left|Y(\lambda)-Z(\lambda)\right|)\,\mathrm d\mu(\lambda)##
##\leq \int_\Lambda (\left|Y(\lambda)+Z(\lambda)\right| + \left|Y(\lambda)-Z(\lambda)\right|)\,\mathrm d\mu(\lambda)##
##\leq \int_\Lambda 2 \,\mathrm d\mu(\lambda) = 2##
(The proof of the last inequality is left as an exercise to the reader. :wink:)

Now in the situation of a typical Alice/Bob experiment, the random variables should refer to the measurement of spin variables of Alice (##A##) and Bob (##B##) along some angles ##\alpha,\beta\in[0,2\pi]## and the correlations one is interested in are correlations between a spin of Alice along some angle ##\alpha## and Bob along some angle ##\beta##, for any combinations of angles. Then one just needs to fill in ##W=A_\alpha##, ..., ##Z=B_{\beta^\prime}##. So we really just apply a general theorem in probability theory to a concrete physical situation.
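
As a sanity check of the general theorem, a small Monte Carlo sketch (added here; the particular functions of ##\lambda## are arbitrary choices): any four random variables with values in ##[-1,1]## on a common probability space obey the bound. The "exercise" step follows from ##|y+z| + |y-z| = 2\max(|y|,|z|) \leq 2## for ##y,z \in [-1,1]##.

[CODE=python]
# Draw four arbitrary random variables W, X, Y, Z with values in [-1, 1]
# as functions of a common lambda ~ U(0, 1), and confirm in every trial
# that |<WY> + <WZ> + <XY> - <XZ>| <= 2.
import numpy as np

rng = np.random.default_rng(1)
worst = 0.0
for _ in range(1_000):
    lam = rng.uniform(size=10_000)               # samples of lambda
    W = np.sign(np.sin(rng.uniform(1, 50) * lam))
    X = np.cos(rng.uniform(1, 50) * lam)
    Y = np.sign(lam - rng.uniform())
    Z = 2.0 * lam ** rng.uniform(0.1, 5.0) - 1.0
    S = abs((W*Y).mean() + (W*Z).mean() + (X*Y).mean() - (X*Z).mean())
    worst = max(worst, S)
print(f"largest |CHSH| value found: {worst:.4f}   (theorem bound: 2)")
[/CODE]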
 
Last edited:
  • Like
Likes dextercioby and Mentz114
  • #59
rubi said:
Let ##W,X,Y,Z: \Lambda \rightarrow [-1,1]## be random variables on the probability space ##(\Lambda,\Sigma,\mathrm d\mu)##.

Thank you! I see the difficulty of applying this theorem to experimental tests. The mathematical model says that for each realization of an outcome ##\lambda_0 \in \Lambda## we have simultaneously defined values for ##W(\lambda_0), X(\lambda_0), Y(\lambda_0), Z(\lambda_0)##. Mathematically we can define a random variable such as ##H = WXYZ## on ##\Lambda## and speak of ##<WXYZ>##. However in the experimental tests of entanglement, we do not simultaneously realize all these random variables on a given outcome ##\lambda_0##.

One model for entanglement experiments is that each random variable ##V## is defined on a subset of ##\Lambda## which represents outcomes where ##V## was realized. A random variable that is a product such as ##WY## is defined on the intersection of the subsets associated with ##W## and ##Y##.
 
  • #60
rubi said:
I didn't write the inequality explicitly in my post, but this is the inequality that I meant:
##\left|\left<A_{\alpha}B_{\beta}\right>+\left<A_{\alpha}B_{\beta^\prime}\right>+\left<A_{\alpha^\prime}B_{\beta}\right>-\left<A_{\alpha^\prime}B_{\beta^\prime}\right>\right|\leq 2##
where
##\left<A_{\alpha}B_{\beta}\right> = \int_\Lambda A_\alpha(\lambda) B_\beta(\lambda) \,\mathrm d\mu(\lambda)##
Reading the posts in this thread, I thought that I could ask the following question.

Do Bell inequalities need explicit experimental verification in special experiments aimed at checking the inequalities?

The violation of the classical CHSH ##\leq 2## inequality for two spins 1/2 is based on calculations of QM correlators like ##\langle AB\rangle, \langle AB'\rangle, \langle A'B'\rangle, \langle A'B\rangle##, where ##A## and ##B## are the spin operators built from Pauli matrices and ##\langle\cdot\rangle## is an average over the singlet wave function. It is then easy to show that CHSH can reach ##2\sqrt{2} > 2##. The calculations are based on the rules of QM and are exact.

Now, if we think that CHSH ##\leq 2## should be preserved and try to make complicated experiments, we somehow implicitly assume that the rules of calculation we used to obtain ##2\sqrt{2}## are not exact. But if that is so, how then do we have the Standard Model of particle physics, which is a very precise confirmation of QM?

If it had been found, after Bell's 1964 paper, that CHSH is always ##\leq 2## in test experiments, this would mean that the rules of QM are not completely correct, in contradiction to all other experiments in particle physics, solid-state physics, ...
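
For reference, the calculation read describes takes only a couple of lines (a sketch added here; the angle choices are standard ones adapted to the form of the inequality in post #56):

[CODE=python]
# With the singlet correlator E(alpha, beta) = -cos(alpha - beta), the
# CHSH combination of post #56 reaches 2*sqrt(2) > 2 at suitable angles.
import numpy as np

E = lambda a, b: -np.cos(a - b)                  # singlet spin correlator
a, ap, b, bp = 0.0, np.pi/2, np.pi/4, 7*np.pi/4  # alpha, alpha', beta, beta'
S = abs(E(a, b) + E(a, bp) + E(ap, b) - E(ap, bp))
print(f"S = {S:.4f},  2*sqrt(2) = {2*np.sqrt(2):.4f}")   # S = 2.8284
[/CODE]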
 
  • Like
Likes Mentz114
  • #61
read said:
Reading the posts in this thread, I thought that I could ask the following question.

Do Bell inequalities need explicit experimental verification in special experiments aimed at checking the inequalities?

The violation of the classical CHSH ##\leq 2## inequality for two spins 1/2 is based on calculations of QM correlators like ##\langle AB\rangle, \langle AB'\rangle, \langle A'B'\rangle, \langle A'B\rangle##, where ##A## and ##B## are the spin operators built from Pauli matrices and ##\langle\cdot\rangle## is an average over the singlet wave function. It is then easy to show that CHSH can reach ##2\sqrt{2} > 2##. The calculations are based on the rules of QM and are exact.

Now, if we think that CHSH ##\leq 2## should be preserved and try to make complicated experiments, we somehow implicitly assume that the rules of calculation we used to obtain ##2\sqrt{2}## are not exact. But if that is so, how then do we have the Standard Model of particle physics, which is a very precise confirmation of QM?

If it had been found, after Bell's 1964 paper, that CHSH is always ##\leq 2## in test experiments, this would mean that the rules of QM are not completely correct, in contradiction to all other experiments in particle physics, solid-state physics, ...

If I understand correctly what you're saying, then you're right. QM predicts a violation of Bell's inequality (and the CHSH inequality), so if experiments didn't find a violation, that would show that QM is wrong.
 
  • #62
Fra said:
I think you missed what I tried to say. (Except that determinism is different from causality, I agree with what you say.)
/Fredrik
It is very important to understand the difference between determinism and causality before entering any sensible (i.e., science-based rather than philosophical gibberish) discussion of QT.

Definition 1: A theory is deterministic if and only if at any time all observables of a system have determined values.

Definition 2a: A theory is causal if and only if, when the state of a system is given for ##t<t_0##, the state of the system is determined at any time ##t \geq t_0## (weak form).

Quantum theory is indeterministic, because not all observables of a system can take determined values at once, but it's causal, even in a stronger sense (locality in time): if the quantum state is given at ##t=t_0##, it is determined at any later time ##t \geq t_0##.
 
  • #63
stevendaryl said:
If I understand correctly what you're saying, then you're right. QM predicts a violation of Bell's inequality (and the CHSH inequality), so if experiments didn't find a violation, that would show that QM is wrong.

I mean that the mere fact that CHSH > 2 is calculated by QM, purely theoretically, is enough to prove the nonlocality of QM. There is no need for specific experiments with entangled photons to see if this is experimentally confirmed.
 
  • #64
read said:
I mean that the mere fact that CHSH > 2 is calculated by QM, purely theoretically, is enough to prove the nonlocality of QM. There is no need for specific experiments with entangled photons to see if this is experimentally confirmed.

I would say that the theoretical prediction of QM is enough to show that it is nonlocal in Bell's sense. Actual experimental tests of the inequality are tests of QM, not demonstrations that QM is nonlocal in Bell's sense.
 
  • #65
vanhees71 said:
Definition 1: A theory is deterministic if and only if at any time all observables of a system have determined values.
This is not the definition I used, which resolves our disagreement.

What i had in mind:

A theory is deterministic iff the future state is implied (by a deductive rule) from the current state.
(The alternative to this, is a theory that is inductive, stochastic or evolutionary)

(Note the distinction between states and single events; this is the gap in connecting the probabilistic foundation to reality, because we do not directly observe distributions, only single events.)
vanhees71 said:
Definition 2a: A theory is causal if and only if, when the state of a system is given for ##t<t_0##, the state of the system is determined at any time ##t \geq t_0## (weak form).
This is a strange definition to me. Your definition of causality also implies determinism, if by "determined" you mean exactly and uniquely determined.

You are excluding general non-deductive causation with this definition.

If we can replace the word "determined" by "inferred", I can agree.

I think of a theory as causal when its inferences about future states depend only on the current and past states. But the inference need not be deductive!

So QM is causal and deterministic in my sense. The fact that individual observations of events are only probabilistically determined by the state, even if the past is known precisely, is noted separately, as single events are not what defines the state space in QM anyway. The state space is defined by (according to interpretation) probability distributions, ensembles, or "information states", and the theory defines a causal flow on this space which is deterministic in QM.

That all possible observables do not commute has, in my eyes, nothing to do with indeterminism. It has to do with dependence among the underlying observables; i.e., conjugate variables (if we define them as related by the Fourier transform) are statistically dependent.

/Fredrik
 
  • Like
Likes dextercioby
  • #66
stevendaryl said:
I would say that the theoretical prediction of QM is enough to show that it is nonlocal in Bell's sense. Actual experimental tests of the inequality are tests of QM, not demonstrations that QM is nonlocal in Bell's sense.
Still, I would like to ask further. More specifically, the correlators in CHSH are just ##-\cos(\alpha-\beta)##, and this follows directly from the Pauli matrices and the singlet wave function. Now, for the standard angle choices (differences of 45 and 135 degrees) we get ##2\sqrt{2}##, about 41% more than the classical bound of 2. Why should we check the CHSH inequality? If we think deviations of that size are possible, then other, more precise and developed experiments in particle physics should also see them.
 
