A skeptic's view on Bohmian Mechanics

In summary: the paper "Quantum Probability Theory and the Foundations of Quantum Mechanics" discusses the use of Bohmian mechanics in understanding quantum mechanics. It references a blog article by Reinhard Werner that raises questions about the validity of Bohmian trajectories and their connection to empirical reality. The article also discusses the use of wave functions versus density operators for describing single systems, and the concept of the "fapp fixed outcomes" problem. There is a debate about the usefulness of Bohmian mechanics and whether it adds any new understanding to quantum mechanics. Ultimately, the paper argues that Bohmian mechanics is just a commentary on quantum mechanics and is not necessary for physicists to understand or use.
  • #176
Demystifier said:
As usual, you use double standards. Classical mechanics also claims to be deterministic, hence not represented by an ensemble but by a single collection of classical particles. Nevertheless, you do not say that thermodynamics is incompatible with classical mechanics.
This is why something like the ergodic hypothesis is needed. A classical mechanical system is definitely only in one pure state and all observables are determined by this state. In order to predict the correct thermodynamics, it must be the case that the particles of this one state are distributed in a chaotic way, such that averages turn out to be the same as if we had computed the ensemble average.
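A minimal numerical sketch can illustrate this point (my own toy example, not from the thread; units chosen so that kT/m = 1): if the velocities of a single "pure state" of many classical particles happen to be distributed thermally, then the average over that one state already agrees with the ensemble prediction.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative sketch (not a dynamical simulation): one "pure state" of N
# classical particles whose velocities are chaotically (here: thermally)
# distributed. With kT/m = 1, the ensemble average of v^2 is 3.
N = 1_000_000
v = rng.normal(0.0, 1.0, size=(N, 3))            # velocities of this ONE state
single_state_avg = float(np.mean((v**2).sum(axis=1)))
ensemble_avg = 3.0                               # Maxwell-Boltzmann prediction
print(single_state_avg)                          # close to 3.0
```

The agreement is just the statement above in miniature: the distribution of particles within one state mimics the ensemble.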
 
  • Like
Likes dextercioby
  • #177
Demystifier said:
As usual, you use double standards. Classical mechanics also claims to be deterministic, hence not represented by an ensemble but by a single collection of classical particles. Nevertheless, you do not say that thermodynamics is incompatible with classical mechanics.
This is because statistical mechanics does not make the assumption of equilibrium to deduce results.

The equilibrium assumption is made in statistical mechanics only for special systems known to be in equilibrium, such as crystals, water at rest, or a gas at uniform pressure and temperature. For all other cases (the majority of real-world situations), there is nonequilibrium statistical mechanics, which derives hydrodynamics and elasticity theory under much weaker assumptions, or kinetic theory for very dilute gases. The latter give a fairly realistic description of the universe at large, which is not in equilibrium.

On the other hand, Bohmian mechanics cannot even start without assuming Bohmian quantum equilibrium of the whole universe! This is quite a different sort of assumption. And it is assumed without any justification except that it is needed to reproduce Born's rule.

Thus the same standards result in a very different appraisal of the two theories.
 
  • #178
rubi said:
No, it doesn't lead to Bell's inequalities. The joint assumption of both
(1) ##A(\lambda,\vec a,\vec b) = A(\lambda,\vec a)##, ##B(\lambda,\vec a,\vec b) = B(\lambda,\vec b)##
(2) ##P(\lambda,\vec a,\vec b) = P(\lambda)##
leads to Bell's inequality. In a hidden variable theory, we can deny (1), which leads to non-locality; or we can deny (2), which (according to Bell) leads to superdeterminism; or we can of course deny both. Apparently, Bohmian mechanics not only violates (1), but also violates (2), which would mean that it is not just non-local, but also superdeterministic (according to Bell). By using an expression for the correlations, given by ##\left<A(\vec a) B(\vec b)\right> = \int A(\lambda,\vec a) B(\lambda,\vec b) P(\lambda, \vec a, \vec b)\mathrm d\lambda##, we can in principle write down a local hidden variable theory. The usual argument is that because it violates (2), it must be superdeterministic. I don't understand why this argument does not equally apply to Bohmian mechanics, if BM violates (2), which it does.
Ah, now I see the source of confusion. When I was referring to a quantity ##P(\lambda)## for the first time, it was not the same quantity as you defined it above. What you call ##P(\lambda)##, Bell calls ##\rho(\lambda)##.

So let me explain it all over again, now using your conventions. In BM we have
$$A(\lambda,\vec a,\vec b) \neq A(\lambda,\vec a)$$
$$B(\lambda,\vec a,\vec b) \neq B(\lambda,\vec b)$$
So when the measurement setup changes, then ##A(\lambda,\vec a,\vec b)## and ##B(\lambda,\vec a,\vec b)## change. Due to this, there is no need to change ##P(\lambda)## and introduce superdeterminism in BM.
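A toy Monte Carlo sketch (my own illustrative construction, not Bohmian mechanics itself) makes this concrete: with a setting-independent ##P(\lambda)## but a response function that depends on both settings, the singlet correlation ##-\cos\theta## comes out and the CHSH bound of 2 is violated, without any tampering with ##P(\lambda)##.

```python
import numpy as np

rng = np.random.default_rng(0)

def correlation(alpha, beta, n=200_000):
    """Deterministic nonlocal hidden-variable toy model: lambda = (l1, l2)
    is uniform and setting-independent; B depends on BOTH settings."""
    l1, l2 = rng.random(n), rng.random(n)
    A = np.where(l1 < 0.5, 1, -1)                # depends on lambda only
    theta = beta - alpha
    anti = l2 < np.cos(theta / 2) ** 2           # nonlocal dependence on a, b
    B = np.where(anti, -A, A)
    return float(np.mean(A * B))                 # approaches -cos(theta)

# CHSH with the standard angles: any local model is bounded by |S| <= 2
a, ap, b, bp = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = correlation(a, b) - correlation(a, bp) + correlation(ap, b) + correlation(ap, bp)
print(abs(S))                                    # about 2.83, as in QM
```

This is only a sketch of the logical point (deny (1), keep (2)); the functional forms are ad hoc and not derived from any Bohmian dynamics.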
 
  • Like
Likes PeterDonis
  • #179
A. Neumaier said:
This is because statistical mechanics does not make the assumption of equilibrium to deduce results.

The equilibrium assumption is made in statistical mechanics only for special systems known to be in equilibrium, such as crystals, water at rest, or a gas at uniform pressure and temperature. For all other cases (the majority of real-world situations), there is nonequilibrium statistical mechanics, which derives hydrodynamics and elasticity theory under much weaker assumptions, or kinetic theory for very dilute gases. The latter give a fairly realistic description of the universe at large, which is not in equilibrium.

On the other hand, Bohmian mechanics cannot even start without assuming Bohmian quantum equilibrium of the whole universe! This is quite a different sort of assumption. And it is assumed without any justification except that it is needed to reproduce Born's rule.

Thus the same standards result in a very different appraisal of the two theories.
As I already said, there is no point in arguing about BM as we (you and me) cannot even agree on the foundations of classical statistical mechanics, or even on the concept of probability. I wish you good luck with your thermal interpretation inspired by your own interpretation of thermodynamics and statistical mechanics. :smile:
 
  • #180
Demystifier said:
Ah, now I see the source of confusion. When I was referring to a quantity ##P(\lambda)## for the first time, it was not the same quantity as you defined it above.
What other quantity ##P(\lambda)## did you refer to then?

Due to this, there is no need to change ##P(\lambda)## and introduce superdeterminism in BM.
It seems like ##P(\lambda)## depends on ##\vec a##, ##\vec b## through ##\lambda##, which determines the settings of the measurement apparata. In order to compute expectation values like ##\left<A(\vec a)B(\vec b)\right>##, you need to tune ##\lambda## such that it reflects the particular settings.
 
  • #181
I just wanted to say to the participants above (especially Arnold Neumaier, demystifier, rubi) that I very much enjoyed reading the back and forth. Although I doubt many minds were changed, the scope and intensity of the debate was very enlightening to me.

Thanks! :smile:
 
  • Like
Likes eloheim, rubi and Demystifier
  • #182
rubi said:
What other quantity ##P(\lambda)## did you refer to then?
In BM ##P=|\Psi|^2##, where ##\Psi## is the conditional wave function.
 
  • #183
DrChinese said:
Although I doubt many minds were changed
Even if we haven't changed our minds, at least we sharpened our arguments. :smile:
 
  • Like
Likes DrChinese
  • #184
Demystifier said:
Even if we haven't changed our minds, at least we sharpened our arguments. :smile:

I couldn't agree more. And you responded with grace at each step. Not easy from those two, coming at you from every side. :smile: I give everyone kudos for making their strongest arguments. You had me following links to some material I had not seen before (there's a lot of that it seems).
 
  • #185
Demystifier said:
In BM ##P=|\Psi|^2##, where ##\Psi## is the conditional wave function.
Okay, I actually had this in mind, too. But isn't this also the distribution with respect to which expectation values are computed?
 
  • #186
Demystifier said:
As I already said, there is no point in arguing about BM as we (you and me) cannot even agree on the foundations of classical statistical mechanics, or even on the concept of probability. I wish you good luck with your thermal interpretation inspired by your own interpretation of thermodynamics and statistical mechanics. :smile:
All my arguments here are solely about the shut-up-and-calculate part of probability, statistics, and Bohmian mechanics. Thus they are independent of the details how one interprets probability or thermodynamics or statistical mechanics.
 
  • #187
A. Neumaier said:
All my arguments here are solely about the shut-up-and-calculate part of probability, statistics, and Bohmian mechanics.
No they are not. You neither shut up nor calculate. :-p
 
Last edited:
  • #188
rubi said:
Okay, I actually had this in mind, too. But isn't this also the distribution with respect to which expectation values are computed?
To clarify those things completely, one would need to rewrite the equations of BM in the language of Bell's theorem. As far as I am aware, nobody, not even Bell himself, has done it explicitly.

The important question is: Does BM involve fine tuning of initial conditions? For if it does, then it is superdeterministic. Well, the catch is that "fine tuning" is not a precisely defined concept. BM assumes quantum equilibrium at the beginning of the experiment. Bohmians don't think of it as "fine tuning", but some critics, like Neumaier, think of it as fine tuning. It is somewhat subjective, but if you want to call quantum equilibrium "fine tuning", then yes, BM is "superdeterministic". But even if one calls it "superdeterminism", it is a rather soft kind of superdeterminism, much softer than the superdeterminism needed in local theories.
 
  • Like
Likes eloheim
  • #189
Demystifier said:
To clarify those things completely, one would need to rewrite the equations of BM in the language of Bell's theorem. As far as I am aware, nobody, not even Bell himself, has done it explicitly.
I don't understand why this is not immediate. Aren't the particle positions ##\vec x## supposed to be the hidden variables, and aren't they supposed to be distributed according to ##\left|\Psi\right|^2##? If that is the case, then the spin along some axis should be given by some functions ##A,B(\vec x,\vec a,\vec b)##, and the correlations should be given by ##\int A(\vec x,\vec a,\vec b) B(\vec x,\vec a,\vec b) \left|\Psi\right|^2\mathrm d\vec x##. Now the question is whether ##\left|\Psi\right|^2## depends on ##\vec a##, ##\vec b##, but it seems to me that it does, because BM must model the measurement apparatus and the environment in order to (supposedly) reproduce QM, and of course the angles ##\vec a##, ##\vec b## are some functions of the hidden variables ##\vec x##, because they are determined by the positions of the atoms that make up the apparatus. Hence, it seems to me that BM meets Bell's definition of superdeterminism. It wouldn't be that way if BM worked without taking the measurement theory into account, but we learned in the last thread that it won't reproduce QM this way.

But even if one calls it "superdeterminism", it is a rather soft kind of superdeterminism, much softer than the superdeterminism needed in local theories.
What makes the BM superdeterminism softer than the superdeterminism required in local hidden variable theories?
 
  • #190
rubi said:
I don't understand why this is not immediate. Aren't the particle positions ##\vec x## supposed to be the hidden variables, and aren't they supposed to be distributed according to ##\left|\Psi\right|^2##? If that is the case, then the spin along some axis should be given by some functions ##A,B(\vec x,\vec a,\vec b)##, and the correlations should be given by ##\int A(\vec x,\vec a,\vec b) B(\vec x,\vec a,\vec b) \left|\Psi\right|^2\mathrm d\vec x##. Now the question is whether ##\left|\Psi\right|^2## depends on ##\vec a##, ##\vec b##, but it seems to me that it does, because BM must model the measurement apparatus and the environment in order to (supposedly) reproduce QM, and of course the angles ##\vec a##, ##\vec b## are some functions of the hidden variables ##\vec x##, because they are determined by the positions of the atoms that make up the apparatus. Hence, it seems to me that BM meets Bell's definition of superdeterminism. It wouldn't be that way if BM worked without taking the measurement theory into account, but we learned in the last thread that it won't reproduce QM this way.

What makes the BM superdeterminism softer than the superdeterminism required in local hidden variable theories?
During the night I have thought about all this once again (thank you for asking good questions that force me to think more carefully), and now I have some new insights. :woot:

Now my central claim is this: The fact that ##P(\lambda, \vec a,\vec b)## depends on ##\vec a##, ##\vec b## does not automatically imply that the theory is superdeterministic.

To explain this claim, let me present a counterexample: 18th and 19th century physics! Suppose that the world is described by classical non-relativistic mechanics, in which all forces are either Newton gravitational forces or Coulomb electrostatic forces. These forces involve action at a distance, so the laws of physics are nonlocal. Let ##\vec a## and ##\vec b## be positions of two macroscopic charged balls. They produce electric fields which, nonlocally, influence the motion of all other charged particles. Let ##\lambda## be the positions and momenta of all atoms in the air around the balls. The atoms are the hidden variables of the 18th and 19th centuries, so ##\lambda## is a hidden variable. Furthermore, let us introduce probability in this deterministic theory by taking elements of Boltzmann statistical mechanics. We assume that the atoms of the air are in thermal equilibrium (which is an analogue of quantum equilibrium in BM). The motion of the atoms depends on the electric field produced by the balls, so the Hamiltonian describing the motion of ##\lambda## depends on ##\vec a## and ##\vec b##. Hence the Boltzmann distribution is a function of the form
$$P(\lambda | \vec a,\vec b)$$
Note that ##P(\lambda | \vec a,\vec b)## is conditional probability, which should be distinguished from joint probability ##P(\lambda, \vec a,\vec b)##. They are related by the Bayes formula
$$P(\lambda | \vec a,\vec b)=\frac{P(\lambda, \vec a,\vec b)}{P(\vec a,\vec b)}$$
where
$$P(\vec a,\vec b)=\int d\lambda \, P(\lambda, \vec a,\vec b)$$
is marginal probability.

Now what is superdeterminism? Superdeterminism means that ##\vec a## and ##\vec b## cannot be chosen independently. In other words, superdeterminism means that
$$P(\vec a,\vec b)\neq P_a(\vec a) P_b(\vec b) \quad \text{(Eq. 1)}$$
where
$$P_a(\vec a)=\int d\lambda \int d^3b \, P(\lambda, \vec a,\vec b)$$
and similarly for ##P_b##. So is non-local classical mechanics superdeterministic? It can be superdeterministic in principle! Nevertheless, it is not superdeterministic in the FAPP (for all practical purposes) sense. Even though there are forces between the charged balls, which in principle can make them correlated, in practice we can choose the positions ##\vec a## and ##\vec b## of the balls independently. That's because the force falls off with the distance, so the mutual influence between ##\vec a## and ##\vec b## can be neglected when the balls are very far from each other. Therefore ##\vec a## and ##\vec b## are not correlated FAPP, so classical non-local mechanics is not superdeterministic FAPP.
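The approach to the factorized form in (Eq. 1) can be sketched with a toy discrete distribution (my own example; the parameter eps is an ad hoc stand-in for the residual coupling between the two settings): as the coupling falls off, the joint probability approaches the product of its marginals.

```python
import numpy as np

def factorization_error(eps):
    """Max deviation |P(a,b) - Pa(a)*Pb(b)| for a toy 2x2 joint distribution
    whose correlation is controlled by a residual coupling eps."""
    P = np.array([[1 + eps, 1 - eps],
                  [1 - eps, 1 + eps]], dtype=float)
    P /= P.sum()                       # normalize the joint distribution
    Pa = P.sum(axis=1)                 # marginal over b
    Pb = P.sum(axis=0)                 # marginal over a
    return float(np.abs(P - np.outer(Pa, Pb)).max())

for eps in (0.5, 0.05, 0.005):
    print(eps, factorization_error(eps))   # deviation shrinks with eps
```

In this toy model the deviation is exactly eps/4, so (Eq. 1) becomes an equality in the limit of vanishing coupling, which is the FAPP situation described above.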

Now Bohmian mechanics. I hope the analogy between the example above and BM is quite clear. There is only one important difference. In BM, the non-local force does not fall off with distance. Nevertheless, the mutual influence between ##\vec a## and ##\vec b## can also be neglected. That's because ##\vec a## and ##\vec b## are orientations of macroscopic apparatuses, and decoherence destroys entanglement (in a FAPP sense) between macroscopic degrees of freedom. Without entanglement (in the FAPP sense) there is no correlation between ##\vec a## and ##\vec b##. And without correlation, that is, when (Eq. 1) becomes an equality, there is no superdeterminism.

The only thing I didn't explain is the following. If ##P(\lambda|\vec a,\vec b)## does not imply superdeterminism (in the FAPP sense), what is the justification for using ##P(\lambda)## in the Bell theorem? I don't know at the moment, but let me note that there are proofs of non-locality which do not rest on the explicit introduction of ##\lambda##, and personally I like such proofs much more. An example is the Hardy proof of non-locality, reviewed e.g. in my
https://arxiv.org/abs/quant-ph/0609163
 
  • #191
Demystifier said:
Now my central claim is this: The fact that ##P(\lambda, \vec a,\vec b)## depends on ##\vec a##, ##\vec b## does not automatically imply that the theory is superdeterministic.

The way I understand it is that it is the assumption of locality, together with Bell's inequalities, that suggests superdeterminism. In deriving the Bell inequalities, we can write:

[itex]P(A,B | \alpha, \beta) = \sum_\lambda P(\lambda | \alpha, \beta) P(A | \alpha, \beta, \lambda) P(B | \alpha, \beta, \lambda, A)[/itex]

(where, as usual, [itex]A[/itex] is Alice's result, [itex]B[/itex] is Bob's result, [itex]\alpha[/itex] is Alice's setting, [itex]\beta[/itex] is Bob's setting, and [itex]\lambda[/itex] is some hidden variable that is a potential common influence to Alice's and Bob's measurements.)

This form is perfectly general (I think); it is not hard to come up with a model of this sort that satisfies the quantum predictions for EPR. But assuming locality and no-superdeterminism further constrains the probabilities:
  • [itex]P(\lambda | \alpha, \beta) = P(\lambda)[/itex]
If [itex]\lambda[/itex] is a common influence to both Alice's and Bob's measurements, then it must be determined in the intersection of their backwards lightcones. But (assuming no-superdeterminism), the choice of [itex]\alpha[/itex] and [itex]\beta[/itex] can be made at the last moment, too late to influence [itex]\lambda[/itex].

We can similarly argue from locality and no-superdeterminism that
  • [itex]P(A|\alpha, \beta, \lambda) = P(A|\alpha, \lambda)[/itex]
  • [itex]P(B|\alpha, \beta, \lambda, A) = P(B|\beta, \lambda)[/itex]
So the final assumed form of the probability distributions is determined by locality and "free will" (no-superdeterminism):
  • [itex]P(A, B|\alpha, \beta) = \sum_\lambda P(\lambda) P(A|\alpha, \lambda) P(B|\beta, \lambda)[/itex]
And of course, Bell shows that no such model can reproduce the predictions of QM for EPR.
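That bound can be checked by brute force (a standard convexity argument, sketched here in code): the extreme points of the local form [itex]\sum_\lambda P(\lambda) P(A|\alpha,\lambda)P(B|\beta,\lambda)[/itex] are deterministic assignments, so scanning all of them gives the maximum of the CHSH combination for any local model.

```python
import itertools

# Deterministic strategies are the extreme points of the local form, so the
# maximum of the CHSH combination over all local models is attained on the
# 16 assignments A(a), A(a'), B(b), B(b') in {-1, +1}.
best = 0.0
for A0, A1, B0, B1 in itertools.product([-1, 1], repeat=4):
    S = A0 * B0 + A0 * B1 + A1 * B0 - A1 * B1
    best = max(best, abs(S))
print(best)   # 2, below the quantum value 2*sqrt(2)
```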

If you aren't assuming locality, then there is no problem in letting [itex]\lambda[/itex] depend on [itex]\alpha[/itex] and [itex]\beta[/itex], even if they are chosen at the last minute, as long as the choices are made before either measurement.

In the case where, for example, Bob's choice is made after Alice's measurement, then the above analysis doesn't quite apply, since in that case, [itex]\lambda[/itex] cannot depend on [itex]\beta[/itex]. But if we don't assume locality, then Bob's result can depend on Alice's result, so again, there is no problem with Bell's inequality.
 
  • #192
Demystifier said:
The fact that ##P(\lambda, \vec a,\vec b)## depends on ##\vec a##, ##\vec b## does not automatically imply that the theory is superdeterministic.
And if we look at the situation at some earlier time, when ##\vec a## and ##\vec b## are not yet set but instead we have only the (yet to be determined) outputs of random number generators ##rnd_a## and ##rnd_b##? These random numbers don't have the type of influence as in your classical analogue. And that is the type of dependency that we call superdeterminism (dependency between ##\lambda## and ##rnd_a##, ##rnd_b##).
On the other hand if, at the moment when we set ##\vec a## or ##\vec b##, distribution of ##\lambda## changes and becomes dependent on ##\vec a## or ##\vec b## it would be non-locality instead of superdeterminism.
 
  • #193
zonde said:
And if we look at the situation at some earlier time, when ##\vec a## and ##\vec b## are not yet set but instead we have only the (yet to be determined) outputs of random number generators ##rnd_a## and ##rnd_b##? These random numbers don't have the type of influence as in your classical analogue.
I guess you mean pseudo-random, not truly random, right? If so, then it has a classical analogue, because thermal fluctuations of the classical atoms ##=\lambda## can serve as a pseudo-random generator.

zonde said:
And that is the type of dependency that we call superdeterminism (dependency between ##\lambda## and ##rnd_a##, ##rnd_b##).
Superdeterminism is not when ##rnd_a## and ##rnd_b## depend on ##\lambda##. Superdeterminism is when this dependence is such that ##rnd_a## is correlated with ##rnd_b##. As I have explained, such a correlation is FAPP absent in both classical and Bohmian mechanics.
 
  • #194
Demystifier said:
Even though there are forces between the charged balls, which in principle can make them correlated, in practice we can choose the positions ##\vec a## and ##\vec b## of the balls independently. That's because the force falls off with the distance, so the mutual influence between ##\vec a## and ##\vec b## can be neglected when the balls are very far from each other. Therefore ##\vec a## and ##\vec b## are not correlated FAPP, so classical non-local mechanics is not superdeterministic FAPP.
That doesn't follow. If you have a statistical description of the situation, you can integrate out the short scales ("renormalize") and thereby obtain an effective model that describes only the long range effects. In general, you will obtain extra terms that correspond to a long range correlation, even though your initial model only contained short-range interactions. (See the Ising model for example.)

Now Bohmian mechanics. I hope the analogy between the example above and BM is quite clear. There is only one important difference. In BM, the non-local force does not fall off with distance. Nevertheless, the mutual influence between ##\vec a## and ##\vec b## can also be neglected. That's because ##\vec a## and ##\vec b## are orientations of macroscopic apparatuses, and decoherence destroys entanglement (in a FAPP sense) between macroscopic degrees of freedom. Without entanglement (in the FAPP sense) there is no correlation between ##\vec a## and ##\vec b##. And without correlation, that is, when (Eq. 1) becomes an equality, there is no superdeterminism.
I don't think that decoherence will destroy correlations. It makes certain interference terms go away, which results in expectation values peaked on classical trajectories. However, classical trajectories needn't lose their correlations.

but let me tell that there are proofs of non-locality which do not rest on explicit introducing of ##\lambda##, and personally I like such proofs much more. An example is the Hardy proof of non-locality, reviewed e.g. in my
https://arxiv.org/abs/quant-ph/0609163
Proofs like Hardy or GHZ aren't proofs of non-locality. They rather show that hidden variable models must be contextual, which can't be deduced from Bell's inequality alone. And they don't exclude superdeterministic models either.

stevendaryl said:
But assuming locality and no-superdeterminism further constrains the probabilities:
  • [itex]P(\lambda | \alpha, \beta) = P(\lambda)[/itex]
If [itex]\lambda[/itex] is a common influence to both Alice's and Bob's measurements, then it must be determined in the intersection of their backwards lightcones. But (assuming no-superdeterminism), the choice of [itex]\alpha[/itex] and [itex]\beta[/itex] can be made at the last moment, too late to influence [itex]\lambda[/itex].
This is not a locality assumption. It just enforces that the angles ##\alpha##, ##\beta## aren't correlated with ##\lambda##. If that condition is violated, it is still possible that the reason for the correlation of ##\alpha## and ##\beta## lies in the common past. Thus the condition is not related to locality. It merely enforces that the angles can be freely chosen instead of being determined (possibly by the common past).
 
  • #195
rubi said:
That doesn't follow.
So you think that classical mechanics is superdeterministic?

rubi said:
I don't think that decoherence will destroy correlations.
I do (in FAPP sense, of course).

rubi said:
However, classical trajectories needn't lose their correlations.
Nobody ever observed them, which confirms my claim that they are FAPP absent. To repeat, they are possible in principle, but impossible FAPP.

rubi said:
Proofs like Hardy or GHZ aren't proofs of non-locality.
Well, Hardy takes it as a proof of nonlocality.

rubi said:
They rather show that hidden variable models must be contextual,
Sure, they show that too. See my paper I mentioned above where I discuss it in more detail.
 
Last edited:
  • #196
rubi said:
This is not a locality assumption. It just enforces that the angles ##\alpha##, ##\beta## aren't correlated with ##\lambda##. If that condition is violated, it is still possible that the reason for the correlation of ##\alpha## and ##\beta## lies in the common past.

But for [itex]\alpha[/itex] and [itex]\beta[/itex] to be correlated would be superdeterminism. [itex]\alpha[/itex] is the measurement choice made by Alice, and [itex]\beta[/itex] is the measurement choice made by Bob. They are free to use whatever means they like to make that choice. For the choices to be correlated is superdeterminism.
 
  • #197
RockyMarciano said:
Because they are the same in the EPR definition of realism as deterministic.
You are probably right that EPR criterion is the reason why many think that non-realism and non-determinism are the same. Thank you for that insight!

But it is a misinterpretation of the EPR criterion. The EPR criterion says something like "If we can predict with certainty ... then there is an element of reality ..." So according to EPR, determinism implies reality. However, the converse is not true; reality does not imply determinism. Therefore determinism and reality are not equivalent, and not the same. The GRW theory is a well-known example of a theory (compatible with QM) which has elements of non-deterministic reality.
 
  • #198
Demystifier said:
So you think that classical mechanics is superdeterministic?
Depends on the particular model. I'm just telling you that the short-rangedness of interactions in general leads to long-range correlations, which you denied. Also, in CM, the state isn't fine-tuned and the absence of correlations can then be justified on the basis of molecular chaos. However, in BM, the hidden variables must be distributed according to ##\left|\Psi\right|^2##, which is a restriction on the allowed distributions that can in principle introduce correlations.

I do (in FAPP sense, of course).
However, that's just an opinion. I demand proof. If the wave-function evolves unitarily, correlations shouldn't go away, but rather propagate to finer parts of the system.

Nobody ever observed them.
That doesn't invalidate the argument.

Well, Hardy takes it as a proof of nonlocality.
He's wrong. Consistent histories is local despite allowing the Hardy state, so locality can't be ruled out by the argument.

stevendaryl said:
But for [itex]\alpha[/itex] and [itex]\beta[/itex] to be correlated would be superdeterminism. [itex]\alpha[/itex] is the measurement choice made by Alice, and [itex]\beta[/itex] is the measurement choice made by Bob. They are free to use whatever means they like to make that choice. For the choices to be correlated is superdeterminism.
Which is what I'm saying. The ##P(\lambda|\vec a,\vec b)=P(\lambda)## condition prohibits superdeterminism, but it is unrelated to locality. The question remains: How can BM be non-superdeterministic despite violating this condition?
 
  • #199
Demystifier said:
I guess you mean pseudo-random, not truly random, right? If so, then it has a classical analogue, because thermal fluctuations of the classical atoms ##=\lambda## can serve as a pseudo-random generator.
Let it be pseudo-random. Why does it matter?

Demystifier said:
Superdeterminism is not when ##rnd_a## and ##rnd_b## depend on ##\lambda##. Superdeterminism is when this dependence is such that ##rnd_a## is correlated with ##rnd_b##. As I have explained, such a correlation is FAPP absent in both classical and Bohmian mechanics.
Look, if we can set the hidden variable so that the spin of entangled particles is always opposite in either the measurement basis of Alice or of Bob, then we can violate Bell inequalities just as QM does.
I will use photons. Let's say the entangled photons have opposite linear polarization in Alice's measurement basis. Alice's measurement will give a certain outcome based on the two possible options, HV or VH. Bob's measurement will give a probabilistic outcome according to Malus's law, ##P=\sin^2(\beta-\alpha)## or ##P=\cos^2(\beta-\alpha)##. And that is all we need to replicate the predicted correlations of QM.

This is how superdeterminism can violate Bell's inequality. And there is no need for ##\alpha## to be correlated with ##\beta##.
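A Monte Carlo sketch of this recipe (my own rendering of the scheme described above; the seed and CHSH angles are arbitrary choices) reproduces the QM correlation ##-\cos(2(\beta-\alpha))##, with the superdeterminism residing entirely in the hidden polarization being aligned with Alice's basis ##\alpha##:

```python
import numpy as np

rng = np.random.default_rng(1)

def correlation(alpha, beta, n=200_000):
    """Superdeterministic toy model: the photon pair's hidden polarization
    is always H/V in ALICE'S basis, i.e. lambda depends on alpha."""
    hv = rng.random(n) < 0.5                       # HV vs VH in Alice's basis
    A = np.where(hv, 1, -1)                        # Alice: deterministic outcome
    pol = np.where(hv, alpha + np.pi / 2, alpha)   # Bob's photon polarization
    B = np.where(rng.random(n) < np.cos(beta - pol) ** 2, 1, -1)  # Malus's law
    return float(np.mean(A * B))                   # approaches -cos(2(beta-alpha))

# CHSH at the standard photon angles 0, 45, 22.5, 67.5 degrees
a, ap, b, bp = 0.0, np.pi / 4, np.pi / 8, 3 * np.pi / 8
S = correlation(a, b) - correlation(a, bp) + correlation(ap, b) + correlation(ap, bp)
print(abs(S))   # about 2.83, violating the Bell bound of 2
```

Note that ##\alpha## and ##\beta## are never correlated with each other here; only the hidden variable is tuned to ##\alpha##, which is the dependency at issue.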
 
  • #200
rubi said:
Which is what I'm saying. The ##P(\lambda|\vec a,\vec b)=P(\lambda)## condition prohibits superdeterminism, but it is unrelated to locality.

I disagree; it's the combination of no-superdeterminism and locality that prevents [itex]\lambda[/itex] from depending on [itex]\vec{a}[/itex] and [itex]\vec{b}[/itex]. If the choice of [itex]\lambda[/itex] is only made after Alice and Bob make their choices, then it doesn't imply superdeterminism, but it does imply nonlocality, since Alice's choice influences the [itex]\lambda[/itex] that in turn influences Bob's result.
 
  • #201
stevendaryl said:
I disagree; it's the combination of no-superdeterminism and locality that prevents [itex]\lambda[/itex] from depending on [itex]\vec{a}[/itex] and [itex]\vec{b}[/itex]. If the choice of [itex]\lambda[/itex] is only made after Alice and Bob make their choices, then it doesn't imply superdeterminism, but it does imply nonlocality, since Alice's choice influences the [itex]\lambda[/itex] that in turn influences Bob's result.
If ##P(\lambda|\vec a,\vec b)## depends on ##\vec a## and ##\vec b##, it just means that it is possible that ##\vec a## and ##\vec b## depend on ##\lambda##. Let's say ##\lambda## is in the intersection of the past light cones. A local theory can perfectly well violate ##P(\lambda|\vec a,\vec b)=P(\lambda)##. You prepare two particles in the non-entangled state ##\left|\vec a\right>\otimes\left|\vec b\right>##, and before you send the particles to Alice and Bob, you send Alice an order to align her detector along ##\vec a##, and you do the same with Bob. This introduces a perfectly local dependence of ##\vec a## and ##\vec b## on the prepared state. So obviously, the condition can easily be violated in a local theory. Hence, a local theory is not required to satisfy the condition.
 
  • #202
rubi said:
Depends on the particular model. I'm just telling you that the short-rangedness of interactions in general leads to long-range correlations, which you denied.
Can you give a concrete example, where the correlations refer to macro objects?

rubi said:
Also, in CM, the state isn't fine-tuned and the absence of correlations can then be justified on the basis of molecular chaos.
Yes, but "molecular chaos" often means Boltzmann distribution, i.e. thermal equilibrium.

rubi said:
However, in BM, the hidden variables must be distributed according to ##\left|\Psi\right|^2##
This is quantum equilibrium, which is analogous to thermal equilibrium above.

rubi said:
which is a restriction on the allowed distributions that can in principle introduce correlations.
In principle yes, but FAPP no.

rubi said:
However, that's just an opinion. I demand proof.
Now you sound like Neumaier. Science is not about proofs. Science is about evidence. For evidence, see the book by Schlosshauer on decoherence.

rubi said:
If the wave-function evolves unitarily, correlations shouldn't go away, but rather propagate to finer parts of the system.
Yes, and precisely because they are in the finer parts, they are not visible FAPP.

rubi said:
He's wrong. Consistent histories is local despite allowing the Hardy state, so locality can't be ruled out by the argument.
That's because CH forbids the classical rules of logic, while Hardy's proof of nonlocality uses classical rules of logic. My opinion is that the rules of logic should be universal for all science, be it classical physics, quantum physics, or beyond quantum physics.
 
  • #203
Demystifier said:
Can you give a concrete example, where the correlations refer to macro objects?
You measure two far-away spins in an Ising lattice. The pointers of the measurement apparatuses will be correlated if ##T\leq T_\text{crit}##. This is important for hard disks.
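The Ising point can be illustrated even in the simplest exactly solvable case (my own example, not from the post): the zero-field 1D Ising chain, where the transfer-matrix result gives ##\left<s_i s_{i+d}\right> = \tanh(\beta J)^d##, so purely nearest-neighbour couplings correlate arbitrarily distant spins, with a correlation length that grows without bound as ##T\to 0##.

```python
import math

def ising_1d_correlation(beta_J, d):
    """Exact two-point function <s_i s_{i+d}> for the zero-field 1D
    Ising chain in the thermodynamic limit (transfer-matrix result):
    tanh(beta*J) ** d."""
    return math.tanh(beta_J) ** d

def correlation_length(beta_J):
    # xi = -1 / ln(tanh(beta*J)); grows without bound as T -> 0
    return -1.0 / math.log(math.tanh(beta_J))
```

For instance, at ##\beta J = 2## spins ten lattice sites apart are still roughly 69% correlated, even though each spin only couples to its immediate neighbours.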

Yes, but "molecular chaos" often means Boltzmann distribution, i.e. thermal equilibrium. This is quantum equilibrium, which is analogous to thermal equilibrium above.
But ##\left|\Psi\right|^2## changes depending on ##\Psi## and specific ##\Psi## can have a form that contains correlations. On the other hand, the Boltzmann distribution is a distribution of maximum entropy. It is even the marginal of a uniform distribution on some constant energy surface. So the amount of possible correlations is minimal. The lack of correlations in the Boltzmann distribution does not at all imply that the same must hold for ##\left|\Psi\right|^2##.

In principle yes, but FAPP no.
Again, this requires proof.

Now you sound like Neumaier. Science is not about proofs. Science is about evidence. For evidence, see the book by Schlosshauer on decoherence.
I know Schlosshauer's book. Theoretical physics is of course about proofs. It's about stating some axioms and showing that they imply certain facts about physics. Of course, the level of rigor may vary, but the arguments must be convincing and continuously improved. Evidence is what we get from experimental physics. Proof (rigorous or not) is what theoretical physics is about. One can't simply postulate a claim in science. One always needs to justify it.

Yes, and precisely because they are in the finer parts, they are not visible FAPP.
I don't think this has been established. In unitary evolution, entanglement is supposed to spread to macroscopic objects, like Schrödinger's cat. Decoherence just makes interference terms between the individual branches of the wave-function go away. However, within each branch, the correlations remain.

That's because CH forbids the classical rules of logic, while Hardy's proof of nonlocality uses classical rules of logic. My opinion is that the rules of logic should be universal for all science, be it classical physics, quantum physics, or beyond quantum physics.
CH doesn't forbid the classical rules of logic. On the contrary, it saves the classical rules of logic. As I have explained in the last thread, without the single framework rule of CH, the classical rules of logic are violated experimentally and there is nothing we can do about it (except argue only about logically consistent statements, which are singled out by the single framework rule of CH).
 
  • #204
rubi said:
CH doesn't forbid the classical rules of logic. On the contrary, it saves the classical rules of logic. As I have explained in the last thread, without the single framework rule of CH, the classical rules of logic are violated experimentally and there is nothing we can do about it (except argue only about logically consistent statements, which are singled out by the single framework rule of CH).

Only in a very formal way. It throws common sense and reality away.
 
  • Like
Likes eloheim
  • #205
rubi said:
If ##P(\lambda|\vec a,\vec b)## depends on ##\vec a## and ##\vec b##, it just means that it is possible that ##\vec a## and ##\vec b## depend on ##\lambda##. Let's say ##\lambda## is in the intersection of the past light cones. A local theory can perfectly well violate ##P(\lambda|\vec a,\vec b)=P(\lambda)##. You prepare two particles in the non-entangled state ##\left|\vec a\right>\otimes\left|\vec b\right>##

I think there is a confusion about what the notation means. I'm assuming that [itex]\vec{a}[/itex] and [itex]\vec{b}[/itex] are the measurement choices made by Alice and Bob, respectively. The twin pair is prepared before those choices are made.
 
  • #206
atyy said:
Only in a very formal way. It throws common sense and reality away.
It's not worse than Copenhagen and it actually clarifies Copenhagen a lot.

stevendaryl said:
I think there is a confusion about what the notation means. I'm assuming that [itex]\vec{a}[/itex] and [itex]\vec{b}[/itex] are the measurement choices made by Alice and Bob, respectively. The twin pair is prepared before those choices are made.
Yes, sure, but it's perfectly possible and compatible with locality that the choices are determined by the preparation procedure of the twin pairs. This is superdeterminism. It will violate the condition, despite being completely local. Hence, the violation of the condition is compatible with locality. The exclusion of non-locality in Bell's proof is purely due to the requirements on the observables ##A## and ##B##.
 
  • #207
rubi said:
He's wrong. Consistent histories is local despite allowing the Hardy state, so locality can't be ruled out by the argument.

The way that I understand consistent histories (which is not all that well), there is a sense in which there is no dynamics. The laws of quantum mechanics (such as Schrödinger's equation, or QFT) are used to derive a probability distribution on histories. But within a history, you've just got an unfolding of events (or values of mutually commuting observables). You can't really talk about one event in a history causing or influencing another event. Locality to me is only meaningful in a dynamic view, where future events, or future values of variables, are influenced by current events or current values of variables.
 
  • #208
rubi said:
Yes, sure, but it's perfectly possible and compatible with locality that the choices are determined by the preparation procedure of the twin pairs. This is superdeterminism.

Right. If you assume locality, then the dependence of the measurement choices on [itex]\lambda[/itex] implies superdeterminism. But if you don't assume locality, then the dependence of [itex]\lambda[/itex] on the measurement choices doesn't imply superdeterminism.
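To make the role of these assumptions concrete, here is a sketch (my own illustration; the setting names, angles, and the sign-of-cosine outcome rule are illustrative choices): when ##\lambda## is drawn independently of the settings and the outcomes are local deterministic functions ##A(a,\lambda)## and ##B(b,\lambda)##, the CHSH combination stays within ##\pm 2##.

```python
import math
import random

random.seed(1)

# Illustrative detector angles; "a2"/"b2" stand for a' and b'
SETTINGS = {"a": 0.0, "a2": math.pi / 2, "b": math.pi / 4, "b2": 3 * math.pi / 4}

def E(x, y, n=50000):
    """Monte Carlo correlation <A(x,lam) * B(y,lam)> for one concrete
    local deterministic model: lam is an angle drawn independently of
    the settings, and each outcome is the sign of cos(setting - lam)."""
    total = 0
    for _ in range(n):
        lam = random.uniform(0.0, 2.0 * math.pi)
        A = 1 if math.cos(SETTINGS[x] - lam) >= 0 else -1
        B = 1 if math.cos(SETTINGS[y] - lam) >= 0 else -1
        total += A * B
    return total / n

def chsh():
    # CHSH combination: any model of this local deterministic form
    # obeys |S| <= 2, while quantum mechanics can reach 2*sqrt(2)
    # (Tsirelson's bound)
    return E("a", "b") - E("a", "b2") + E("a2", "b") + E("a2", "b2")
```

This particular model saturates the classical bound (S ≈ 2); no choice of local deterministic outcome functions with setting-independent ##\lambda## can exceed it.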
 
  • Like
Likes eloheim
  • #209
rubi said:
It's not worse than Copenhagen and it actually clarifies Copenhagen a lot.

Don't get me wrong - I actually respect immensely that the CH people have pursued the initial idea very conscientiously (and yes, the initial idea was worth pursuing, even if it ended up showing that we cannot have a single fine-grained reality). And CH's conception of locality and the Bell inequalities is far better than Werner's wrongheaded criticism of Bohmian Mechanics.

However, I think Copenhagen is superior to CH. Copenhagen retains common sense and is more broadminded. Copenhagen is consistent with all interpretations (BM, CH, MWI), whereas I don't see how CH is consistent with BM.
 
  • #210
rubi said:
Again, this requires proof.
Fine, let us say that I can't prove (with a level of rigor that would satisfy you) that BM is not superdeterministic. Can you prove that it is? As you can see, your arguments so far didn't convince me, and I claim (again, without a proof) that your arguments wouldn't convince Bell.

Anyway, if you claim that BM is superdeterministic, this is certainly an important claim (provided that it is correct), so I would suggest that you try to convince a referee of an important physics journal.
 

1. What is Bohmian Mechanics?

Bohmian Mechanics, also known as the de Broglie–Bohm theory, is a theory of quantum mechanics that proposes a deterministic interpretation of quantum phenomena. It suggests that particles have definite positions and velocities at all times, even when not being observed.

2. How does Bohmian Mechanics differ from other interpretations of quantum mechanics?

Bohmian Mechanics differs from other interpretations, such as the Copenhagen interpretation, by rejecting the idea of wave function collapse and instead positing that particles have definite positions and trajectories at all times. It also suggests that there are hidden variables that determine the behavior of particles.

3. Is Bohmian Mechanics widely accepted in the scientific community?

No, Bohmian Mechanics is not widely accepted in the scientific community. It is considered a minority view and is still a topic of debate and research among physicists.

4. What are the main criticisms of Bohmian Mechanics?

One of the main criticisms of Bohmian Mechanics is that it is not as mathematically elegant as other interpretations of quantum mechanics. It also introduces the concept of hidden variables, which some scientists argue goes against the principles of Occam's razor.

5. Are there any practical applications of Bohmian Mechanics?

Currently, there are no practical applications of Bohmian Mechanics. However, some scientists believe that it may have potential in areas such as quantum computing and understanding the behavior of complex systems.
