I Classical chaos and quantum mechanics

  • #51
vanhees71 said:
Only for the free particle and the harmonic oscillator, since only then are the equations for the expectation value of position identical with the classical equations. This is because ##\vec{F}## is then at most linear in ##\vec{x}##. In general, the Ehrenfest theorem gives (after some calculation for the standard Hamiltonian ##\hat{H}=\hat{\vec{p}}^2/(2m) + V(\hat{\vec{x}})##)
$$m \frac{\mathrm{d}^2}{\mathrm{d} t^2} \langle \vec{x} \rangle=\langle \vec{F}(\vec{x}) \rangle,$$
where ##\hat{\vec{F}}=-\nabla V(\hat{\vec{x}})##, but if ##\vec{F}## is not at most linear in ##\vec{x}##, you have
$$\langle \vec{F}(\vec{x}) \rangle \neq \vec{F}(\langle \vec{x} \rangle),$$
and the EoM for the averages à la Ehrenfest is not the same as the classical equation of motion.
For well-localized wave packets, ##\langle \vec{F}(\vec{x}) \rangle = \vec{F}(\langle \vec{x} \rangle)## is a good approximation. Besides, without the Ehrenfest theorem, how would you explain that classical physics is a good approximation at the macroscopic level?
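The gap between ##\langle \vec{F}(\vec{x}) \rangle## and ##\vec{F}(\langle \vec{x} \rangle)## for a nonlinear force, and its closing for a well-localized packet, is easy to see numerically. A minimal sketch (the quartic potential, the constant `k`, and the Gaussian packet widths are illustrative choices, not taken from the thread):

```python
import numpy as np

rng = np.random.default_rng(0)

# Anharmonic force F(x) = -k x^3 (from V(x) = k x^4 / 4); k is an
# arbitrary illustrative constant.
k = 1.0
F = lambda x: -k * x**3

def compare(mu, sigma, n=1_000_000):
    """Monte Carlo estimate of <F(x)> vs F(<x>) for a Gaussian position
    distribution with mean mu and width sigma."""
    x = rng.normal(mu, sigma, n)
    return F(x).mean(), F(x.mean())

# Broad packet: the two quantities differ markedly,
# since E[x^3] = mu^3 + 3 mu sigma^2 for a Gaussian.
broad = compare(mu=1.0, sigma=1.0)    # <F> ~ -4, F(<x>) ~ -1

# Well-localized packet: they nearly coincide, as noted above.
narrow = compare(mu=1.0, sigma=0.01)

print(broad)
print(narrow)
```

For the broad packet the exact Gaussian moment gives ##\langle F \rangle = -k(\mu^3 + 3\mu\sigma^2) = -4## while ##F(\langle x \rangle) = -1##, so the Ehrenfest dynamics of the mean visibly departs from the classical equation.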
 
  • #52
Stephen Tashi said:
I'm confused about the terminology "deterministic quantum mechanics". What would that be?
vanhees71 said:
It's a "contradictio in adjecto" ;-)).
Deterministic quantum mechanics means that the quantum state satisfies a deterministic dynamical law, the Schrödinger (resp. for mixed states the von Neumann) equation.

In the thermal interpretation, the beables (in Bell's sense) are all expectation values and their deterministic dynamics is given by the Ehrenfest theorem, which expresses the time derivative of any expectation value in terms of another expectation value. This is a dynamics of manifestly extended objects. In particular it is nonlocal (in Bell's sense, not in the sense of QFT), hence there is no contradiction with Bell's theorem. On the other hand, it satisfies extended causality, hence respects the requirements of relativity theory, including a propagation speed of information bounded by the speed of light.

Stephen Tashi said:
what is the role of "chaos", mentioned in the title of the thread? I suppose we could have deterministic equations involving expected values and re-introduce probability by arguing that these equations are chaotic. However, since measured averages can differ from the mathematical expected value, there is already a probabilistic aspect to applying the equations to predict experimental outcomes. Do we need the chaos to add even more variability?
Chaos is present (at least for the hydrodynamical expectation values) independently of whether or not it is needed. It produces probabilities independent of measurement, just as in classical mechanics. This is an advantage since one can model measurement as in the classical case, and deduce the necessary presence of uncertainty in measurements from the impossibility of cloning a state. (A no cloning theorem is also valid classically.)

stevendaryl said:
Metastable systems are very relevant to measurement, quantum or otherwise, because a measurement of a microscopic quantity such as an electron's spin requires amplifying the microscopic property so that it makes a macroscopic difference. For example, spin-up leads to a visible dot on a piece of photographic paper at one position, and spin-down leads to a visible dot at a macroscopically different position. This kind of amplification requires that the measuring device be in a metastable state that can be nudged into a stable state by the tiniest of influences.

That was the background for how I was thinking of measurements, and for what I thought the thermal interpretation was saying. I thought it was saying that quantum nondeterminism was explained by metastability of the measurement devices (or more generally, the environment). That's what I was saying was ruled out by Bell's theorem.
Practical indeterminism in theoretically deterministic systems comes, as in classical chaos, from local instabilities, for example (but not only) from tiny perturbations of metastable states. But as is already known from the existence of Bohmian mechanics, Bell's theorem has nothing to say about nonlocal deterministic models of quantum mechanics. Thus it does not rule out the thermal interpretation.

Stephen Tashi said:
Bell's theorem is a theorem in physics (rather than mathematics)
No. In (for example) the form stated earlier by Rubi, it is a purely mathematical theorem. Its application to physics is riddled with interpretation issues, since one needs an interpretation to relate the mathematics to experiment.
 
Last edited:
  • Like
Likes dextercioby and bhobba
  • #53
Fra said:
And if we care only about the expectation values, then we have deterministic predictions all the way, and indeed it seems very similar to classical mechanics. My incomplete understanding of Neumaier's objective is that in this neighbourhood there are some ideas and points to make that relate somehow to the foundations of statistical methods in physics?
Indeed.
vanhees71 said:
No, QT is indeterministic, i.e., even with complete determination of the state, not all observables take a determined value.
It depends on what one declares to be the observables.

In the thermal interpretation the theoretical observables (beables in Bell's sense) are the expectations, and they satisfy a deterministic dynamics. Practically, only a small subset of these is approximately observable.

In Born's interpretation, the theoretical ''observables'' are unobservable operators. Calling unobservable things observables leads to apparent indeterminism. It was a misnomer that led to the strange, unfortunate situation in the foundations of quantum mechanics that has persisted now for nearly a hundred years.

The thermal interpretation completely rejects Born's interpretation while retaining all the formal structure of quantum mechanics and its relation to experiment.
 
Last edited:
  • #54
Demystifier said:
For well-localized wave packets, ##\langle \vec{F}(\vec{x}) \rangle = \vec{F}(\langle \vec{x} \rangle)## is a good approximation. Besides, without the Ehrenfest theorem, how would you explain that classical physics is a good approximation at the macroscopic level?
Ehrenfest's theorem only holds for conservative dynamics, i.e. if the whole environment is included in the state. To explain that classical physics is a good approximation at the macroscopic level needs much more argumentation than just a reference to Ehrenfest's theorem, since wave packets delocalize quite quickly, and the environment is intrinsically nonlocal. One needs careful arguments with decoherence to show the emergence of (dissipative) classicality for a subsystem.
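The "wave packets delocalize quite quickly" point can be made quantitative with the textbook spreading law for a free Gaussian packet, ##\sigma(t)=\sigma_0\sqrt{1+(\hbar t/2m\sigma_0^2)^2}##. A rough numerical sketch (the masses and initial widths are illustrative choices):

```python
import numpy as np

HBAR = 1.054_571_817e-34  # J s

def packet_width(t, sigma0, m):
    """Width sigma(t) of a free Gaussian wave packet:
    sigma(t) = sigma0 * sqrt(1 + (hbar t / (2 m sigma0^2))^2)."""
    return sigma0 * np.sqrt(1.0 + (HBAR * t / (2.0 * m * sigma0**2))**2)

# Electron prepared with 1 nm width: after one second it is tens of km wide.
print(packet_width(1.0, 1e-9, 9.109e-31))

# A 1 g "particle" with 1 micrometre width: spreading is utterly negligible.
print(packet_width(1.0, 1e-6, 1e-3))
```

The contrast illustrates why a bare appeal to Ehrenfest's theorem is not enough: for microscopic masses the localization assumption breaks down almost immediately, and decoherence arguments are needed to recover classicality.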
 
  • #55
A. Neumaier said:
No. In (for example) the form stated earlier by Rubi, it is a purely mathematical theorem. Its application to physics is riddled with interpretation issues, since one needs an interpretation to relate the mathematics to experiment.

Interpreting what @rubi said as mathematics depends on stating the definition of the CHSH inequality. The best explanation I've found so far is that the CHSH inequality is an inequality involving conditional expectations (https://physics.stackexchange.com/questions/237321/simplified-derivation-of-chsh-bell-inequalities ). If that's the correct interpretation, I'd like to find an online article or class notes etc. that states the CHSH as an inequality involving conditional expectations (i.e. states this fact instead of presenting it implicitly in the physics of experiments carried out by Alice and Bob).
 
  • #56
Stephen Tashi said:
Interpreting what @rubi said as mathematics depends on stating the definition of the CHSH inequality. The best explanation I've found so far is that the CHSH inequality is an inequality involving conditional expectations (https://physics.stackexchange.com/questions/237321/simplified-derivation-of-chsh-bell-inequalities ). If that's the correct interpretation, I'd like to find an online article or class notes etc. that states the CHSH as an inequality involving conditional expectations (i.e. states this fact instead of presenting it implicitly in the physics of experiments carried out by Alice and Bob).
I didn't write the inequality explicitly in my post, but this is the inequality that I meant:
##\left|\left<A_{\alpha}B_{\beta}\right>+\left<A_{\alpha}B_{\beta^\prime}\right>+\left<A_{\alpha^\prime}B_{\beta}\right>-\left<A_{\alpha^\prime}B_{\beta^\prime}\right>\right|\leq 2##
where
##\left<A_{\alpha}B_{\beta}\right> = \int_\Lambda A_\alpha(\lambda) B_\beta(\lambda) \,\mathrm d\mu(\lambda)##
 
  • Like
Likes dextercioby
  • #57
rubi said:
Bell's theorem at full rigour is of the form: Let ##A_\alpha, B_\beta : \Lambda\rightarrow[-1,1]## be random variables (for every ##\alpha,\beta\in[0,2\pi]##) on a probability space ##(\Lambda,\Sigma,\mathrm d\mu)##

rubi said:
I didn't write the inequality explicitly in my post, but this is the inequality that I meant:
##\left|\left<A_{\alpha}B_{\beta}\right>+\left<A_{\alpha}B_{\beta^\prime}\right>+\left<A_{\alpha^\prime}B_{\beta}\right>-\left<A_{\alpha^\prime}B_{\beta^\prime}\right>\right|\leq 2##
where
##\left<A_{\alpha}B_{\beta}\right> = \int_\Lambda A_\alpha(\lambda) B_\beta(\lambda) \,\mathrm d\mu(\lambda)##

If ##\alpha## is an index, what is the definition of ##\alpha'## ? Does it denote any arbitrary index possibly different than ##\alpha## ?

Are any of the random variables involved assumed to be pairwise independent?
 
  • #58
Stephen Tashi said:
If ##\alpha## is an index, what is the definition of ##\alpha'## ? Does it denote any arbitrary index possibly different than ##\alpha## ?
##\alpha,\alpha^\prime,\beta,\beta^\prime## can be any numbers in ##[0,2\pi]## and they needn't even be different. But they may be.

Are any of the random variables involved assumed to be pairwise independent?
No assumptions other than the ones I listed are required.

In fact, the theorem is much more general. I just adapted it to the typical Alice/Bob experiment, where you would call the random variables ##A_\alpha## and so on. The general theorem goes as follows:

Let ##W,X,Y,Z: \Lambda \rightarrow [-1,1]## be random variables on the probability space ##(\Lambda,\Sigma,\mathrm d\mu)##. Then the inequality ##\left|\left<WY\right>+\left<WZ\right>+\left<XY\right>-\left<XZ\right>\right|\leq 2## holds.
Proof:
##\left|\left<WY\right>+\left<WZ\right>+\left<XY\right>-\left<XZ\right>\right|##
##= \left|\int_\Lambda (W(\lambda)(Y(\lambda)+Z(\lambda)) + X(\lambda)(Y(\lambda)-Z(\lambda)))\,\mathrm d\mu(\lambda)\right|##
##\leq \int_\Lambda (\left|W(\lambda)\right|\left|Y(\lambda)+Z(\lambda)\right| + \left|X(\lambda)\right|\left|Y(\lambda)-Z(\lambda)\right|)\,\mathrm d\mu(\lambda)##
##\leq \int_\Lambda (\left|Y(\lambda)+Z(\lambda)\right| + \left|Y(\lambda)-Z(\lambda)\right|)\,\mathrm d\mu(\lambda)##
##\leq \int_\Lambda 2 \,\mathrm d\mu(\lambda) = 2##
(The proof of the last inequality is left as an exercise to the reader. :wink:)

Now in the situation of a typical Alice/Bob experiment, the random variables should refer to the measurement of spin variables of Alice (##A##) and Bob (##B##) along some angles ##\alpha,\beta\in[0,2\pi]## and the correlations one is interested in are correlations between a spin of Alice along some angle ##\alpha## and Bob along some angle ##\beta##, for any combinations of angles. Then one just needs to fill in ##W=A_\alpha##, ..., ##Z=B_{\beta^\prime}##. So we really just apply a general theorem in probability theory to a concrete physical situation.
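The general theorem above can also be checked numerically: for any four [-1,1]-valued random variables on a common probability space, the combination never exceeds 2. A small Monte Carlo sketch (the sign-of-cosine "response functions" below are an arbitrary illustrative choice of hidden-variable model, not from the thread):

```python
import numpy as np

rng = np.random.default_rng(1)

def chsh(W, X, Y, Z):
    """|<WY> + <WZ> + <XY> - <XZ>| estimated from samples of four
    random variables defined on a common probability space."""
    return abs((W*Y).mean() + (W*Z).mean() + (X*Y).mean() - (X*Z).mean())

# Many random hidden-variable models: lam is the hidden variable,
# the four responses are arbitrary [-1,1]-valued functions of lam.
max_seen = 0.0
for _ in range(200):
    lam = rng.uniform(0.0, 2*np.pi, 10_000)
    a, b, c, d = rng.uniform(0.0, 2*np.pi, 4)
    W = np.sign(np.cos(lam - a))
    X = np.sign(np.cos(lam - b))
    Y = np.sign(np.cos(lam - c))
    Z = np.sign(np.cos(lam - d))
    max_seen = max(max_seen, chsh(W, X, Y, Z))

print(max_seen)
```

Because the bound ##|W(Y+Z)+X(Y-Z)| \leq 2## holds pointwise in ##\lambda##, the sample estimate can never exceed 2 either, regardless of the model chosen.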
 
Last edited:
  • Like
Likes dextercioby and Mentz114
  • #59
rubi said:
Let ##W,X,Y,Z: \Lambda \rightarrow [-1,1]## be random variables on the probability space ##(\Lambda,\Sigma,\mathrm d\mu)##.

Thank you! I see the difficulty of applying this theorem to experimental tests. The mathematical model says that for each realization of an outcome ##\lambda_0 \in \Lambda## we have simultaneously defined values for ##W(\lambda_0), X(\lambda_0), Y(\lambda_0), Z(\lambda_0)##. Mathematically we can define a random variable such as ##H = WXYZ## on ##\Lambda## and speak of ##<WXYZ>##. However, in the experimental tests of entanglement, we do not simultaneously realize all these random variables on a given outcome ##\lambda_0##.

One model for entanglement experiments is that each random variable ##V## is defined on a subset of ##\Lambda## which represents outcomes where ##V## was realized. A random variable that is a product such as ##WY## is defined on the intersection of the subsets associated with ##W## and ##Y##.
 
  • #60
rubi said:
I didn't write the inequality explicitly in my post, but this is the inequality that I meant:
##\left|\left<A_{\alpha}B_{\beta}\right>+\left<A_{\alpha}B_{\beta^\prime}\right>+\left<A_{\alpha^\prime}B_{\beta}\right>-\left<A_{\alpha^\prime}B_{\beta^\prime}\right>\right|\leq 2##
where
##\left<A_{\alpha}B_{\beta}\right> = \int_\Lambda A_\alpha(\lambda) B_\beta(\lambda) \,\mathrm d\mu(\lambda)##
Reading the posts in this thread, I thought that I could ask the following question.

Do Bell inequalities need explicit experimental verification in special experiments aimed at checking the inequalities?

The violation of the classical CHSH <= 2 inequality for two spins 1/2 is based on calculations of QM correlators like <A.B>, <A.B'>, <A'.B'>, <A'.B>, where A and B are the spin operators built from the Pauli matrices and <> is an average over the singlet wave function. It is then easy to show that CHSH can reach 2·sqrt(2) > 2. The calculations are based on the rules of QM and are exact.

Now, if we think that CHSH <= 2 should be preserved and try to make complicated experiments, we somehow implicitly assume that the rules of calculation we used to obtain 2·sqrt(2) are not exact. But if that is so, how do we then have the Standard Model of particle physics, which is a very precise confirmation of QM?

If it had been found, after Bell's 1964 paper, that CHSH is always <= 2 in test experiments, this would have meant that the rules of QM are not completely correct, in contradiction to all other experiments in particle physics, solid state physics, ...
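The 2·sqrt(2) figure can be reproduced in a few lines from the singlet correlator ##E(\alpha,\beta) = -\cos(\alpha-\beta)##; the particular angles below are one standard maximizing choice for the combination ##\left<A_{\alpha}B_{\beta}\right>+\left<A_{\alpha}B_{\beta^\prime}\right>+\left<A_{\alpha^\prime}B_{\beta}\right>-\left<A_{\alpha^\prime}B_{\beta^\prime}\right>##:

```python
import numpy as np

def E(alpha, beta):
    """Singlet-state spin correlator, E(alpha, beta) = -cos(alpha - beta),
    the standard QM result discussed in the thread."""
    return -np.cos(alpha - beta)

# One standard choice of settings maximizing the CHSH combination
# <A B> + <A B'> + <A' B> - <A' B'>.
a, ap = 0.0, np.pi / 2
b, bp = np.pi / 4, -np.pi / 4

S = abs(E(a, b) + E(a, bp) + E(ap, b) - E(ap, bp))
print(S, 2 * np.sqrt(2))  # both equal 2*sqrt(2) ~ 2.828 > 2
```

Each of the four correlators has magnitude ##\sqrt{2}/2## with signs that all add constructively, giving the Tsirelson value ##2\sqrt{2}##.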
 
  • Like
Likes Mentz114
  • #61
read said:
Reading the posts in this thread, I thought that I could ask the following question.

Do Bell inequalities need explicit experimental verification in special experiments aimed at checking the inequalities?

The violation of the classical CHSH <= 2 inequality for two spins 1/2 is based on calculations of QM correlators like <A.B>, <A.B'>, <A'.B'>, <A'.B>, where A and B are the spin operators built from the Pauli matrices and <> is an average over the singlet wave function. It is then easy to show that CHSH can reach 2·sqrt(2) > 2. The calculations are based on the rules of QM and are exact.

Now, if we think that CHSH <= 2 should be preserved and try to make complicated experiments, we somehow implicitly assume that the rules of calculation we used to obtain 2·sqrt(2) are not exact. But if that is so, how do we then have the Standard Model of particle physics, which is a very precise confirmation of QM?

If it had been found, after Bell's 1964 paper, that CHSH is always <= 2 in test experiments, this would have meant that the rules of QM are not completely correct, in contradiction to all other experiments in particle physics, solid state physics, ...

If I understand correctly what you're saying, then you're right. QM predicts a violation of Bell's inequality (and the CHSH inequality), so if experiments didn't find a violation, that would show that QM is wrong.
 
  • #62
Fra said:
I think you missed what i tried to say. (Except that determinism is different from causality i agree with what you say).
/Fredrik
It is very important to understand the difference between determinism and causality before entering any sensible (i.e., science-based rather than philosophical-gibberish) discussion of QT.

Definition 1: A theory is deterministic if and only if at any time all observables of a system have determined values.

Definition 2a: A theory is causal if and only if, whenever the state of a system is given for ##t<t_0##, the state of the system is determined at any time ##t \geq t_0## (weak form).

Quantum theory is indeterministic, because not all observables of a system can take determined values at once, but it is causal, even in a stronger sense (locality in time): if the quantum state is given at ##t=t_0##, it is determined at any later time ##t \geq t_0##.
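The distinction drawn here can be illustrated with the smallest possible example, a single spin 1/2. A minimal sketch (the Hamiltonian ##H = (\omega/2)\sigma_z## and the time value are arbitrary illustrative choices): the state evolves deterministically under a unitary, yet in the evolved state ##\sigma_z## has zero variance while ##\sigma_x## does not, so not all observables have determined values at once.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli x
sz = np.array([[1, 0], [0, -1]], dtype=complex)  # Pauli z

omega = 1.0  # arbitrary precession frequency

def U(t):
    """Unitary evolution for H = (omega/2) sz: causal and deterministic."""
    return np.diag([np.exp(-1j*omega*t/2), np.exp(+1j*omega*t/2)])

def variance(op, psi):
    """Variance <op^2> - <op>^2 of an observable in the state psi."""
    mean = (psi.conj() @ op @ psi).real
    mean_sq = (psi.conj() @ op @ op @ psi).real
    return mean_sq - mean**2

psi0 = np.array([1.0, 0.0], dtype=complex)  # "spin up", a sz eigenstate
psi = U(2.7) @ psi0                         # uniquely determined later state

print(variance(sz, psi))  # ~ 0: sz has a determined value in this state
print(variance(sx, psi))  # ~ 1: sx does not -> indeterminism
```

The state at any later time is fixed by the Schrödinger equation (causality, even determinism of the state), while the nonzero variance of ##\sigma_x## shows the indeterminism of observable values.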
 
  • #63
stevendaryl said:
If I understand correctly what you're saying, then you're right. QM predicts a violation of Bell's inequality (and the CHSH inequality), so if experiments didn't find a violation, that would show that QM is wrong.

I mean that the mere fact that QM predicts CHSH > 2, purely theoretically, is enough to prove the nonlocality of QM. There is no need for specific experiments with entangled photons to confirm this experimentally.
 
  • #64
read said:
I mean that the mere fact that QM predicts CHSH > 2, purely theoretically, is enough to prove the nonlocality of QM. There is no need for specific experiments with entangled photons to confirm this experimentally.

I would say that the theoretical prediction of QM is enough to show that it is nonlocal in Bell's sense. Actual experimental tests of the inequality are tests of QM, not demonstrations that QM is nonlocal in Bell's sense.
 
  • #65
vanhees71 said:
Definition 1: A theory is deterministic if and only if at any time all observables of a system have determined values.
This is not the definition I used, which resolves our disagreement.

What i had in mind:

A theory is deterministic iff the future state is implied (by a deductive rule) from the current state.
(The alternative to this is a theory that is inductive, stochastic or evolutionary.)

(Note the distinction between the state and single events; this is the gap in connecting the probabilistic foundation to reality, because we do not directly observe distributions, only single events.)
vanhees71 said:
Definition 2a: A theory is causal if and only if the state of a system is given for ##t<t_0## then the state of the system is determined at any time ##t \geq t_0## either (weak form).
This is a strange definition to me. Your definition of causality also implies determinism, if by "determined" you mean exactly and uniquely determined.

You are excluding general non-deductive causations with this definition.

If we can replace the word "determined" by "inferred", I can agree.

I think of a theory as causal, when its inferences of the future states only depend on the current and past states. But the inference need not be deductive!

So QM is causal and deterministic in my sense. The fact that individual observations of events are only probabilistically determined by the state even if the past is known precisely, is noted separately, as single events are not what defines the state space in QM anyway. The state space is defined by (according to interpretation) P-distributions, ensembles or "information states", and the theory defines a causal flow on this space which is deterministic in QM.

The fact that not all observables commute has, in my eyes, nothing to do with indeterminism. It has to do with dependence of the underlying observables. I.e., conjugate variables (if we define them as related by the Fourier transform) are statistically dependent.

/Fredrik
 
  • Like
Likes dextercioby
  • #66
stevendaryl said:
I would say that the theoretical prediction of QM is enough to show that it is nonlocal in Bell's sense. Actual experimental tests of the inequality are tests of QM, not demonstrations that QM is nonlocal in Bell's sense.
Still, I would like to ask further. More specifically, the CHSH correlators are just -cos(angle(a,b)), and this follows directly from the Pauli matrices and the singlet wave function. For suitable angles (relative angles of 45 and 135 degrees) we get 2·sqrt(2), about 41% above the classical bound of 2. Why should we check the CHSH inequality in dedicated experiments? If QM were off by that much, other more precise and well-developed experiments in particle physics should also see it.
 