A skeptic's view on Bohmian Mechanics

rubi

Science Advisor
I disagree; it's the combination of no-superdeterminism and locality that prevents [itex]\lambda[/itex] from depending on [itex]\vec{a}[/itex] and [itex]\vec{b}[/itex]. If the choice of [itex]\lambda[/itex] is only made after Alice and Bob make their choices, then it doesn't imply superdeterminism, but it does imply nonlocality, since Alice's choice influences the [itex]\lambda[/itex] that in turn influences Bob's result.
If ##P(\lambda|\vec a,\vec b)## depends on ##\vec a## and ##\vec b##, it just means that it is possible that ##\vec a## and ##\vec b## depend on ##\lambda##. Let's say ##\lambda## is in the intersection of the past light cones. A local theory can perfectly well violate ##P(\lambda|\vec a,\vec b)=P(\lambda)##. You prepare two particles in the non-entangled state ##\left|\vec a\right>\otimes\left|\vec b\right>## and before you send the particles to Alice and Bob, you send Alice the order to align her detector along ##\vec a##, and you do the same with Bob. This introduces a perfectly local dependence of ##\vec a## and ##\vec b## on the prepared state. So obviously, the condition can easily be violated in a local theory. Hence, a local theory is not required to satisfy the condition.
 

Demystifier

Science Advisor
Insights Author
2018 Award
Depends on the particular model. I'm just telling you that the short-rangedness of interactions in general leads to long-range correlations, which you denied.
Can you give a concrete example where the correlations refer to macro objects?

Also, in CM, the state isn't fine-tuned and the absence of correlations can then be justified on the basis of molecular chaos.
Yes, but "molecular chaos" often means Boltzmann distribution, i.e. thermal equilibrium.

However, in BM, the hidden variables must be distributed according to ##\left|\Psi\right|^2##
This is quantum equilibrium, which is analogous to thermal equilibrium above.

which is a restriction on the allowed distributions that can in principle introduce correlations.
In principle yes, but FAPP no.

However, that's just an opinion. I demand proof.
Now you sound like Neumaier. Science is not about proofs. Science is about evidence. For evidence, see the book by Schlosshauer on decoherence.

If the wave-function evolves unitarily, correlations shouldn't go away, but rather propagate to finer parts of the system.
Yes, and precisely because they are in the finer parts, they are not visible FAPP.

He's wrong. Consistent histories is local despite allowing the Hardy state, so locality can't be ruled out by the argument.
That's because CH forbids the classical rules of logic, while Hardy's proof of nonlocality uses classical rules of logic. My opinion is that the rules of logic should be universal for all science, be it classical physics, quantum physics, or beyond quantum physics.
 

rubi

Science Advisor
Can you give a concrete example where the correlations refer to macro objects?
You measure two far away spins in an Ising lattice. The pointers of the measurement apparatuses will be correlated if ##T\leq T_\text{crit}##. This is important for hard disks.
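As a rough numerical illustration of this claim, here is a minimal sketch of a 2D Ising lattice with Metropolis dynamics; the lattice size, temperatures and function names are chosen only for illustration. Below the critical temperature the two far-apart spins stay strongly correlated, above it the correlation essentially vanishes.

[CODE=python]
import numpy as np

rng = np.random.default_rng(1)

def metropolis_sweep(s, beta):
    """One Metropolis sweep over an L x L periodic Ising lattice (J = 1, h = 0)."""
    L = s.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        nb = s[(i + 1) % L, j] + s[(i - 1) % L, j] + s[i, (j + 1) % L] + s[i, (j - 1) % L]
        dE = 2 * s[i, j] * nb          # energy cost of flipping spin (i, j)
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            s[i, j] *= -1

def far_spin_correlation(T, L=24, equil=300, samples=300):
    """Estimate <s(0,0) s(L/2,L/2)>, the correlation of two far-apart 'pointer' spins."""
    s = np.ones((L, L), dtype=int)     # start from an ordered lattice
    for _ in range(equil):
        metropolis_sweep(s, 1.0 / T)
    acc = 0.0
    for _ in range(samples):
        metropolis_sweep(s, 1.0 / T)
        acc += s[0, 0] * s[L // 2, L // 2]
    return acc / samples

# The 2D Ising critical temperature is about 2.27 (in units of J/k_B).
for T in (1.5, 3.5):
    print(f"T = {T}: far-spin correlation ~ {far_spin_correlation(T):+.2f}")
[/CODE]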

Yes, but "molecular chaos" really means Boltzmann distribution, i.e. thermal equilibrium.


This is quantum equilibrium, which is analogous to thermal equilibrium above.
But ##\left|\Psi\right|^2## changes depending on ##\Psi##, and a specific ##\Psi## can have a form that contains correlations. On the other hand, the Boltzmann distribution is a distribution of maximum entropy. It is even the marginal of a uniform distribution on some constant energy surface. So the amount of possible correlations is minimal. The lack of correlations in the Boltzmann distribution does not at all imply that the same must hold for ##\left|\Psi\right|^2##.

In principle yes, but FAPP no.
Again, this requires proof.

Now you sound like Neumaier. Science is not about proofs. Science is about evidence. For evidence, see the book by Schlosshauer on decoherence.
I know Schlosshauer's book. Theoretical physics is of course about proofs. It's about stating some axioms and showing that they imply certain facts about physics. Of course, the level of rigor may vary, but the arguments must be convincing and continuously improved. Evidence is what we get from experimental physics. Proof (rigorous or not) is what theoretical physics is about. One can't simply postulate a claim in science. One always needs to justify it.

Yes, and precisely because they are in the finer parts, they are not visible FAPP.
I don't think this has been established. In unitary evolution, entanglement is supposed to spread to macroscopic objects, like Schrödinger's cat. Decoherence just makes interference terms between the individual branches of the wave-function go away. However, within each branch, the correlations remain.

That's because CH forbids the classical rules of logic, while Hardy's proof of nonlocality uses classical rules of logic. My opinion is that the rules of logic should be universal for all science, be it classical physics, quantum physics, or beyond quantum physics.
CH doesn't forbid the classical rules of logic. On the contrary, it saves the classical rules of logic. As I have explained in the last thread, without the single framework rule of CH, the classical rules of logic are violated experimentally and there is nothing we can do about it (except argue only about logically consistent statements, which are singled out by the single framework rule of CH).
 

atyy

Science Advisor
CH doesn't forbid the classical rules of logic. On the contrary, it saves the classical rules of logic. As I have explained in the last thread, without the single framework rule of CH, the classical rules of logic are violated experimentally and there is nothing we can do about it (except argue only about logically consistent statements, which are singled out by the single framework rule of CH).
Only in a very formal way. It throws common sense and reality away.
 

stevendaryl

Staff Emeritus
Science Advisor
Insights Author
If ##P(\lambda|\vec a,\vec b)## depends on ##\vec a## and ##\vec b##, it just means that it is possible that ##\vec a## and ##\vec b## depend on ##\lambda##. Let's say ##\lambda## is in the intersection of the past light cones. A local theory can perfectly well violate ##P(\lambda|\vec a,\vec b)=P(\lambda)##. You prepare two particles in the non-entangled state ##\left|\vec a\right>\otimes\left|\vec b\right>##
I think there is a confusion about what the notation means. I'm assuming that [itex]\vec{a}[/itex] and [itex]\vec{b}[/itex] are the measurement choices made by Alice and Bob, respectively. The twin pair is prepared before those choices are made.
 

rubi

Science Advisor
Only in a very formal way. It throws common sense and reality away.
It's not worse than Copenhagen and it actually clarifies Copenhagen a lot.

I think there is a confusion about what the notation means. I'm assuming that [itex]\vec{a}[/itex] and [itex]\vec{b}[/itex] are the measurement choices made by Alice and Bob, respectively. The twin pair is prepared before those choices are made.
Yes, sure, but it's perfectly possible and compatible with locality that the choices are determined by the preparation procedure of the twin pairs. This is superdeterminism. It will violate the condition, despite being completely local. Hence, the violation of the condition is compatible with locality. In Bell's proof, non-locality is excluded purely through the requirements on the observables ##A## and ##B##.
 

stevendaryl

Staff Emeritus
Science Advisor
Insights Author
He's wrong. Consistent histories is local despite allowing the Hardy state, so locality can't be ruled out by the argument.
The way that I understand consistent histories (which is not all that well), there is a sense in which there is no dynamics. The laws of quantum mechanics (such as Schrodinger's equation, or QFT) are used to derive a probability distribution on histories. But within a history, you've just got an unfolding of events (or values of mutually commuting observables). You can't really talk about one event in a history causing or influencing another event. Locality to me is only meaningful in a dynamic view, where future events, or future values of variables are influenced by current events or current values of variables.
 

stevendaryl

Staff Emeritus
Science Advisor
Insights Author
Yes, sure, but it's perfectly possible and compatible with locality that the choices are determined by the preparation procedure of the twin pairs. This is superdeterminism.
Right. If you assume locality, then the dependence of the measurement choices on [itex]\lambda[/itex] implies superdeterminism. But if you don't assume locality, then the dependence of [itex]\lambda[/itex] on the measurement choices doesn't imply superdeterminism.
 

atyy

Science Advisor
It's not worse than Copenhagen and it actually clarifies Copenhagen a lot.
Don't get me wrong - I actually respect immensely that the CH people have really pursued the initial idea very conscientiously (and yes, the initial idea was worth pursuing, even if it ended with the conclusion that we cannot have a single fine-grained reality). And CH's conception of locality and the Bell inequalities is far better than Werner's wrong-headed criticism of Bohmian Mechanics.

However, I think Copenhagen is superior to CH. Copenhagen retains common sense and is more broadminded. Copenhagen is consistent with all interpretations (BM, CH, MWI), whereas I don't see how CH is consistent with BM.
 

Demystifier

Science Advisor
Insights Author
2018 Award
Again, this requires proof.
Fine, let us say that I can't prove (with a level of rigor that would satisfy you) that BM is not superdeterministic. Can you prove that it is? As you can see, your arguments so far didn't convince me, and I claim (again, without a proof) that your arguments wouldn't convince Bell.

Anyway, if you claim that BM is superdeterministic, this is certainly an important claim (provided that it is correct), so I would suggest you try to convince a referee of an important physics journal.
 

Demystifier

Science Advisor
Insights Author
2018 Award
You measure two far away spins in an Ising lattice. The pointers of the measurement apparata ...
To make it work, the experimentalist needs to do a lot of fine tuning (and the ability to do it is what makes him a good experimentalist). If such a correlation is not something you want, it is very unlikely to happen spontaneously and ruin your intended experiment.
 

Demystifier

Science Advisor
Insights Author
2018 Award
But ##\left|\Psi\right|^2## changes depending on ##\Psi##, and a specific ##\Psi## can have a form that contains correlations. On the other hand, the Boltzmann distribution is a distribution of maximum entropy.
By finding a Bohmian version of the H-theorem, Valentini has shown that quantum equilibrium, in effect, also maximizes entropy.
 

Demystifier

Science Advisor
Insights Author
2018 Award
CH doesn't forbid the classical rules of logic.
As I explained in the thread "An argument against Bohmian mechanics?", it does. In classical logic, the statement ##S_x=+1\wedge S_y=-1## is either true or false, but in any case it is a meaningful statement. In CH this statement is forbidden by claiming that it is meaningless. For me, it's a change of the rules of logic.
 

rubi

Science Advisor
The way that I understand consistent histories (which is not all that well), there is a sense in which there is no dynamics. The laws of quantum mechanics (such as Schrodinger's equation, or QFT) are used to derive a probability distribution on histories. But within a history, you've just got an unfolding of events (or values of mutually commuting observables). You can't really talk about one event in a history causing or influencing another event. Locality to me is only meaningful in a dynamic view, where future events, or future values of variables are influenced by current events or current values of variables.
I don't really understand this criticism. The dynamics in CH is probabilistic. It's an extension of the theory of classical stochastic processes to the quantum regime. Would you say that there is no dynamics in Brownian motion? Aren't stock prices dynamic? Of course, you can only calculate probabilities, but you can ask, for example, what the probability is for ##X## at time ##t##, given ##Y## at time ##0##. If this is high, then ##Y## has a tendency to "cause" ##X##. If the price of some stock is very high today, then it's not so likely that it drops to ##0## overnight, but you can never be sure.

But anyway, I would find it more interesting to restrict the discussion to whether Bohmian mechanics is superdeterministic or not, since this can be analyzed mathematically, while adopting CH is a matter of taste.

Right. If you assume locality, then the dependence of the measurement choices on [itex]\lambda[/itex] implies superdeterminism. But if you don't assume locality, then the dependence of [itex]\lambda[/itex] on the measurement choices doesn't imply superdeterminism.
I don't understand why not. If the formula for the correlations is ##\int A(\lambda,\alpha,\beta) B(\lambda,\alpha,\beta) P(\lambda|\alpha,\beta)\mathrm d\lambda##, then changing ##P## will in general change the correlations, and thus specifying the correct ##\alpha##-, ##\beta##-dependent ##P## seems essential to reproduce the QM correlations. And if ##P## depends on ##\alpha## and ##\beta## in Bohmian mechanics, then it seems like those are determined by the dynamics earlier, i.e. without the correct dynamical determination of ##\alpha## and ##\beta##, BM is unable to reproduce the QM correlations, which sounds very superdeterministic to me.

However, I think Copenhagen is superior to CH. Copenhagen retains common sense and is more broadminded. Copenhagen is consistent with all interpretations (BM, CH, MWI), whereas I don't see how CH is consistent with BM.
I don't think of CH as a separate interpretation. It's rather an inevitable advancement of the vanilla formalism of QM. It's just not logically possible to reason about statements of the form ##S_x=1\wedge S_y=1##. You will necessarily get probabilities that don't add up to ##1##, independent of the interpretation. The single framework rule just formalizes how to obtain consistent statements. It's just that people intuitively apply the rules correctly in Copenhagen or other interpretations, except in those cases, in which they obtain paradoxes. I also don't see how CH is incompatible with BM (assuming BM reproduces QM).

Fine, let us say that I can't prove (with a level of rigor that would satisfy you) that BM is not superdeterministic. Can you prove that it is? As you can see, your arguments so far didn't convince me, and I claim (again, without a proof) that your arguments wouldn't convince Bell.
Up to now, I have carefully explained where BM satisfies a criterion that Bell himself proposed as a formalization of the notion of superdeterminism. So even if the criterion does not actually imply superdeterminism, I have at least shown that BM satisfies a criterion that has been referred to as "superdeterminism". Now you have gone as far as to say that Bell is wrong and his inequality doesn't really rule out non-superdeterministic local hidden variable theories, and that one must really use Hardy's proof instead in order to obtain a definite result. I'm not sure Bell would agree with this. All I'm asking for is a convincing argument for why Bell's notion doesn't imply superdeterminism, but so far you have only stated your opinion.

Anyway, if you claim that BM is superdeterministic, this is certainly an important claim (provided that it is correct), so I would suggest you to try to convince a referee of an important physics journal.
Maybe I will, but it has a pretty low priority for me. :smile:

To make it work, the experimentalist needs to do a lot of fine tuning (and the ability to do it is what makes him a good experimentalist). If such a correlation is not something what you want, it is very unlikely that it will happen spontaneously and ruin your intended experiment.
The Ising model is used to describe magnetism and it has been well-tested that there is long range order in magnets. See this link.

By finding a Bohmian version of H-theorem, Valentini has shown that quantum equilibrium, in effect, also minimizes entropy.
Boltzmann's H-theorem (which relies on the Stosszahlansatz, which is difficult to prove in general) states that the entropy of a single-particle distribution never decreases. Hence it will eventually attain its maximum, which is given by the Maxwell-Boltzmann distribution. It doesn't imply that the phase space distribution is given by a maximum entropy distribution. This is much more difficult to prove.
I suppose Valentini has some analogous theorem, which states that some quantity always grows and its maximum is attained for ##\left|\Psi\right|^2##. This doesn't imply that it has maximum entropy. On the contrary, every distribution can be realized as ##\left|\Psi\right|^2## for some ##\Psi##; I only need to take the square root. Let ##P## be any distribution, hence in ##L^1##, and set ##\Psi=\sqrt{P}##. Then ##\Psi## will be an admissible quantum state in some ##L^2## space. Thus, ##\left|\Psi\right|^2## will usually not maximize entropy.
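To make the square-root construction concrete, here is a minimal numerical sketch (the grid size and the particular correlated distribution are arbitrary choices for illustration): take a strongly correlated two-variable distribution ##P##, set ##\Psi=\sqrt{P}##, and check that ##\Psi## is a normalized state whose ##\left|\Psi\right|^2## keeps the correlations.

[CODE=python]
import numpy as np

# A strongly correlated discrete distribution P(x1, x2) on an n x n grid:
# the two variables are almost always found at the same value.
n = 50
P = np.full((n, n), 1e-4)
np.fill_diagonal(P, 1.0)
P /= P.sum()                       # P is L^1-normalized, i.e. a probability distribution

Psi = np.sqrt(P)                   # the square-root construction
print("L2 norm of Psi:", np.sum(np.abs(Psi) ** 2))     # exactly 1: Psi is a valid state

# |Psi|^2 reproduces P, correlations included:
x1, x2 = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
prob = np.abs(Psi) ** 2
mean1, mean2 = (prob * x1).sum(), (prob * x2).sum()
cov = (prob * x1 * x2).sum() - mean1 * mean2
var = (prob * x1 ** 2).sum() - mean1 ** 2               # same for x2 by symmetry
print("correlation coefficient under |Psi|^2:", cov / var)   # close to 1
[/CODE]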

As I explained in the thread "An argument against Bohmian mechanics?", it does. In classical logic, the statement ##S_x=+1\wedge S_y=-1## is either true or false, but in any case it is a meaningful statement. In CH this statement is forbidden by claiming that it is meaningless. For me, it's a change of the rules of logic.
##S_x=+1\wedge S_y=-1## is not a meaningful statement in any interpretation. You will inevitably get probabilities that don't add up to ##1## and violate classical logic if you allow such statements. The single framework rule just tells you which statements are meaningful, so you can use classical logic to argue about them.
 

Demystifier

Science Advisor
Insights Author
2018 Award
So even if the criterion does not actually imply superdeterminism, I have at least shown that BM satisfies a criterion that has been referred to as "superdeterminism".
Fine, I can agree with that.

All I'm asking for is a convincing argument for why Bell's notion doesn't imply superdeterminism, but so far you have only stated your opinion.
I'm sorry that my argument is not sufficiently convincing for you. But Bell also argued against superdeterminism. Did you find his arguments more convincing?

Maybe I will, but it has a pretty low priority for me. :smile:
May I ask what is your main research area (if research is what you do for living anyway)? :smile:

##S_x=+1\wedge S_y=-1## is not a meaningful statement in any interpretation.
As I already mentioned, Holland found a counterexample in his book on Bohmian mechanics.
 

rubi

Science Advisor
I'm sorry that my argument is not sufficiently convincing for you. But Bell also argued against superdeterminism. Did you find his arguments more convincing?
Well, Bell has argued that we should reject superdeterministic theories and I agree with him, but did he argue that BM is not superdeterministic? Was it even known to him that BM requires the inclusion of full measurement theory in order to reproduce QM? I thought this was a fairly recent result.

May I ask what is your main research area (if research is what you do for living anyway)? :smile:
Mostly canonical quantum gravity, but also topics in axiomatic QFT. I can't be more specific, since there are only a few people with that combination and I'd prefer to stay anonymous. :biggrin:

As I already mentioned, Holland found a counterexample in his book on Bohmian mechanics.
Yes, but it must exploit the ##d=2## loophole of the KS theorem. I don't see how you can allow arbitrary logical connections of quantum propositions in ##d>2## without getting some contradiction with QM. It's a no-go theorem after all. :smile:
 

Demystifier

Science Advisor
Insights Author
2018 Award
Depends on the particular model.
If, in some models, classical mechanics can be superdeterministic, and classical mechanics (as an approximation) is a perfectly acceptable theory, then superdeterminism is also acceptable. If so, then I see no problem with the idea that BM may also be superdeterministic.

Of course, classical mechanics is generally not considered to be superdeterministic. I have tried to explain why it is not considered superdeterministic, and why, by a similar criterion, BM is also not superdeterministic. If you have a different criterion, by which both can be superdeterministic, I am fine with that too.
 

Demystifier

Science Advisor
Insights Author
2018 Award
Was it even known to him that BM requires the inclusion of full measurement theory in order to reproduce QM? I thought this was a fairly recent result.
Yes, it was very well known by him. And it is not a recent result, because it was discovered by Bohm in 1951.

Mostly canonical quantum gravity, but also topics in axiomatic QFT. I can't be more specific, since there are only a few people with that combination and I'd prefer to stay anonymous. :biggrin:
Fair enough! By contrast, anyone can check my inSPIRE record
http://inspirehep.net/search?ln=en&p=find+author+nikolic,+h&of=hcs&action_search=Search&sf=earliestdate&so=d

Yes, but it must exploit the ##d=2## loophole of the KS theorem.
No, as I already explained, it does not exploit the ##d=2## loophole. It exploits the contextuality loophole. KS theorem shows that non-contextual hidden variables are impossible (except for ##d=2##). But contextual hidden variables are not restricted by the KS theorem.
 

rubi

Science Advisor
If, in some models, classical mechanics can be superdeterministic, and classical mechanics (as an approximation) is a perfectly acceptable theory, then superdeterminism is also acceptable. If so, then I see no problem with the idea that BM may also be superdeterministic.

Of course, classical mechanics is generally not considered to be superdeterministic. I have tried to explain why it is not considered superdeterministic, and why, by a similar criterion, BM is also not superdeterministic. If you have a different criterion, by which both can be superdeterministic, I am fine with that too.
Okay, but if superdeterminism were admissible in a physical theory, then why aren't we looking for a (superdeterministic) local hidden variable model instead? (Even if you don't like CH, Hardy's paradox and GHZ are certainly compatible with superdeterministic locality as well.) There would be no need to give up locality, to introduce preferred frames, and to violate Lorentz symmetry.

Yes, it was very well known by him. And it is not a recent result, because it was discovered by Bohm in 1951.
That seems odd, since the theory of decoherence, which your argument seems to rely on, was developed in the 70's.

No, as I already explained, it does not exploit the ##d=2## loophole. It exploits the contextuality loophole. KS theorem shows that non-contextual hidden variables are impossible (except for ##d=2##). But contextual hidden variables are not restricted by the KS theorem.
But in a contextual theory, the statement is not really ##S_x=+1\wedge S_y=-1##, but rather ##(\text{in context A: } S_x=+1)\wedge(\text{in context B: } S_y=-1)##. You can never have ##\text{in context A: } (S_x=+1\wedge S_y=-1)##. This is exactly what the single framework rule in CH says. You can never argue about ##S_x## and ##S_y## in the same context. There is nothing mysterious about it.
 

stevendaryl

Staff Emeritus
Science Advisor
Insights Author
I don't understand why not. If the formula for the correlations is ##\int A(\lambda,\alpha,\beta) B(\lambda,\alpha,\beta) P(\lambda|\alpha,\beta)\mathrm d\lambda##, then changing ##P## will in general change the correlations, and thus specifying the correct ##\alpha##-, ##\beta##-dependent ##P## seems essential to reproduce the QM correlations. And if ##P## depends on ##\alpha## and ##\beta## in Bohmian mechanics, then it seems like those are determined by the dynamics earlier, i.e. without the correct dynamical determination of ##\alpha## and ##\beta##, BM is unable to reproduce the QM correlations, which sounds very superdeterministic to me.
Well, let me make up a particular model that is nonlocal, but not superdeterministic, and which agrees with the predictions of quantum mechanics:

Alice's detector has a setting, [itex]\alpha[/itex], which can take on a value from [itex]0[/itex] to [itex]2\pi[/itex], representing a direction in the x-y plane. Similarly, Bob's detector has a setting, [itex]\beta[/itex], which represents an angle in the x-y plane.

[itex]\lambda[/itex] in this model has 4 possible values:
  1. [itex]\lambda_{uu}[/itex]
  2. [itex]\lambda_{ud}[/itex]
  3. [itex]\lambda_{du}[/itex]
  4. [itex]\lambda_{dd}[/itex]
These values determine Alice's result [itex]A[/itex] and Bob's result, [itex]B[/itex] in the obvious way:
  1. [itex]A(\lambda_{uu}) = A(\lambda_{ud}) = B(\lambda_{uu}) = B(\lambda_{du}) = +1[/itex]
  2. [itex]A(\lambda_{du}) = A(\lambda_{dd}) = B(\lambda_{ud}) = B(\lambda_{dd}) = -1[/itex]
Initially, all 4 values of [itex]\lambda[/itex] are equally likely, with probability 1/4. If at any time, the value of [itex]\alpha[/itex] or [itex]\beta[/itex] changes (or when they are set for the first time), then the value of [itex]\lambda[/itex] changes nondeterministically:

If [itex]\alpha[/itex] changes, then
  • [itex]\lambda_{uu} \Rightarrow \lambda_{uu}[/itex] with probability [itex]sin^2(\frac{\alpha - \beta}{2})[/itex]
  • [itex]\lambda_{uu} \Rightarrow \lambda_{ud}[/itex] with probability [itex]cos^2(\frac{\alpha - \beta}{2})[/itex]
  • [itex]\lambda_{ud} \Rightarrow \lambda_{ud}[/itex] with probability [itex]cos^2(\frac{\alpha - \beta}{2})[/itex]
  • [itex]\lambda_{ud} \Rightarrow \lambda_{uu}[/itex] with probability [itex]sin^2(\frac{\alpha - \beta}{2})[/itex]
If [itex]\beta[/itex] changes, then
  • [itex]\lambda_{uu} \Rightarrow \lambda_{uu}[/itex] with probability [itex]sin^2(\frac{\alpha - \beta}{2})[/itex]
  • [itex]\lambda_{uu} \Rightarrow \lambda_{du}[/itex] with probability [itex]cos^2(\frac{\alpha - \beta}{2})[/itex]
  • [itex]\lambda_{ud} \Rightarrow \lambda_{ud}[/itex] with probability [itex]cos^2(\frac{\alpha - \beta}{2})[/itex]
  • [itex]\lambda_{ud} \Rightarrow \lambda_{dd}[/itex] with probability [itex]sin^2(\frac{\alpha - \beta}{2})[/itex]
Note that the value of [itex]\lambda[/itex] is allowed to change in-flight. But that's fine if you allow nonlocal interactions.

If I did this correctly, this is just the "collapse" interpretation dressed up in the language of hidden variables, but the "collapse" interpretation shows that superdeterminism is not implied by the quantum EPR predictions.
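Here is a minimal simulation sketch of this toy model, with two assumptions not spelled out above: the transition rules for [itex]\lambda_{du}[/itex] and [itex]\lambda_{dd}[/itex] are taken to mirror the listed [itex]\lambda_{uu}[/itex]/[itex]\lambda_{ud}[/itex] ones, and Bob's setting is taken to be the last one fixed, so only the [itex]\beta[/itex]-change rule fires.

[CODE=python]
import numpy as np

rng = np.random.default_rng(0)

# lambda is encoded as a pair (A_result, B_result), each +1 or -1:
# (+1, +1) = lambda_uu, (+1, -1) = lambda_ud, (-1, +1) = lambda_du, (-1, -1) = lambda_dd

def beta_update(lam, alpha, beta):
    """Bob's setting changes: keep Bob's index, resample Alice's.
    The lambda_du / lambda_dd rules are assumed to mirror the listed ones."""
    a, b = lam
    p_same = np.sin((alpha - beta) / 2) ** 2
    a = b if rng.random() < p_same else -b
    return a, b

def run_pair(alpha, beta):
    lam = (rng.choice([1, -1]), rng.choice([1, -1]))   # all four values equally likely
    lam = beta_update(lam, alpha, beta)                # nonlocal in-flight update
    return lam

def correlation(alpha, beta, n=50_000):
    r = np.array([run_pair(alpha, beta) for _ in range(n)])
    return np.mean(r[:, 0] * r[:, 1])

for alpha, beta in [(0.0, 0.0), (0.0, np.pi / 3), (0.0, np.pi / 2), (np.pi / 4, np.pi)]:
    print(f"model E = {correlation(alpha, beta):+.3f}   "
          f"singlet -cos(alpha-beta) = {-np.cos(alpha - beta):+.3f}")
[/CODE]

In this sketch the printed model correlations agree with the singlet value [itex]-\cos(\alpha-\beta)[/itex], while the initial [itex]\lambda[/itex] is settings-independent; the settings only act on [itex]\lambda[/itex] after they are chosen, via the nonlocal update.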
 

rubi

Science Advisor
Hmm... So what is the function ##P(\lambda|\alpha,\beta)## in this model and why don't ##A## and ##B## depend on ##\alpha## and ##\beta##? Also, why does ##\lambda## change in a probabilistic way? If we want to check for superdeterminism, we must first of all have a deterministic theory.
 

Demystifier

Science Advisor
Insights Author
2018 Award
Okay, but if superdeterminism was admissible in a physical theory, then why aren't we looking for a (superdeterministic) local hidden variable model instead?
The general idea is that only "soft" superdeterminism is admissible, i.e. superdeterminism which does not involve some kind of conspiracy in initial conditions. But what exactly is conspiracy? Unfortunately, there is no precise definition. Is thermal equilibrium a conspiracy? Is quantum equilibrium a conspiracy? Is 't Hooft's theory of local hidden variables a conspiracy? As you may guess, opinions differ.

That seems odd, since the theory of decoherence, which your argument seems to rely on, was developed in the 70's.
In a sense, Bohm's work was a precursor to decoherence. But Bohm was not the first. Before him, von Neumann had similar insights in 1932.

But in a contextual theory, the statement is not really ##S_x=+1\wedge S_y=-1##, but rather ##\text{In the context A}, S_x=+1\wedge \text{In the context B}, S_y=-1##. You can never have ##\text{In the context A}, S_x=+1\wedge S_y=-1##. This is exactly what the single framework rule in CH says. You can never argue about ##S_x## and ##S_y## in the same context. There is nothing mysterious about it.
As you say, this is contextuality in the CH framework. But in the framework of hidden variable theories, contextuality is interpreted in a slightly different way.

Note also the following. In BM, a particle at a given instant of time has both position and momentum (velocity times mass). And yet, in the FAPP sense, it makes the same predictions as standard QM. If you think it's impossible, note again that I said FAPP. The FAPP acronym was devised by Bell, and one always needs to have the FAPP caveat in mind when thinking about BM. Without the FAPP caveat, BM looks impossible, wrong, inconsistent, or in contradiction with experiments. One must learn the FAPP way of thinking to understand how BM leads to the same predictions as standard QM.
 

stevendaryl

Staff Emeritus
Science Advisor
Insights Author
Hmm... So what is the function ##P(\lambda|\alpha,\beta)## in this model and why don't ##A## and ##B## depend on ##\alpha## and ##\beta##?
Well, if [itex]\lambda[/itex] depends on [itex]\alpha[/itex] and [itex]\beta[/itex], and [itex]A[/itex] depends on [itex]\lambda[/itex], then indirectly, [itex]A[/itex] depends on [itex]\alpha[/itex] and [itex]\beta[/itex].

Also, why does ##\lambda## change in a probabilistic way? If we want to check for superdeterminism, we must first of all have a deterministic theory.
If something is not deterministic, then it surely is not superdeterministic, either. The point is to show that having [itex]\lambda[/itex] depend on [itex]\alpha[/itex] and [itex]\beta[/itex] does not imply superdeterminism.

I was about to remark that any classically probabilistic model can be turned into a deterministic model by introducing yet more hidden variables, but it occurs to me that that isn't completely trivial. It's trivial if you assume that there are only finitely many probabilistic "choices" that need to be made, but if there are potentially infinitely many, I'm not sure.
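For the finite case, here is a minimal sketch of that promotion, reusing the ##\beta##-update rule from the toy-model sketch above (the function names are again only illustrative): the single random draw per pair is itself made part of the hidden state fixed at the source, after which every outcome is a deterministic function of the enlarged hidden variable and the settings.

[CODE=python]
import numpy as np

def deterministic_outcome(lam, u, alpha, beta):
    """Same beta-update rule as in the sketch above, but the 'coin flip' u (in [0, 1))
    is part of the hidden state, so the outcome is a deterministic function."""
    a, b = lam
    p_same = np.sin((alpha - beta) / 2) ** 2
    a = b if u < p_same else -b
    return a, b

# Enlarged hidden state prepared at the source: (lam, u). Alice and Bob then pick
# alpha and beta; the results are fixed functions of the hidden state and settings.
rng = np.random.default_rng(1)
lam = (rng.choice([1, -1]), rng.choice([1, -1]))
u = rng.random()
print(deterministic_outcome(lam, u, 0.0, np.pi / 3))
[/CODE]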
 

stevendaryl

Staff Emeritus
Science Advisor
Insights Author
The general idea is that only "soft" superdeterminism is admissible, i.e. superdeterminism which does not involve some kind of conspiracy in initial conditions. But what exactly is conspiracy? Unfortunately, there is no precise definition. Is thermal equilibrium a conspiracy? Is quantum equilibrium a conspiracy? Is 't Hooft's theory of local hidden variables a conspiracy? As you may guess, opinions differ.
To me, it's superdeterminism if the explanation for why something happened can potentially involve fine-tuning the initial conditions of the entire universe.

That's the situation with EPR. The superdeterminism loophole to Bell's inequality would require that Alice's and Bob's choices (what axes to measure spin relative to) are determined ahead of time (and so the hidden-variable [itex]\lambda[/itex] can be chosen so as to take into account those choices). It's not that difficult (for me) to imagine that Alice and Bob are themselves deterministic state machines at the microscopic level. However, Alice and Bob don't have to "generate" a free choice on their own. They can base their choice on external conditions---maybe choose this or that based on radioactive decay, or the result of a soccer game, or the presence or absence of a shooting star during a particular moment, or whatever. So that's what I mean by fine-tuning of the entire universe. A hypothetical superdeterministic model would have to use Alice's and Bob's determinism to predict what their choices will be, but those choices might be to defer the decision to another event, which could potentially involve anything.

I'm not 100% sure about the argument against superdeterminism, though. If you take a movie of the universe and run it backwards, it looks superdeterministic. So using superdeterminism to argue against something implicitly assumes an arrow of time which itself is unexplained. Maybe the superdeterminism required for a local hidden-variables model of QM is somehow connected to the arrow of time?

The other approach, weird in its own way, is the retrocausal approach. Rather than choosing the value of [itex]\lambda[/itex] by some potentially enormous calculation involving the entire universe to figure out Alice's and Bob's choices, you just let them make their choices however they want to, and then allow a back-in-time transmission of information to communicate these choices to the moment [itex]\lambda[/itex] is decided. It seems to me that there is a sense in which a retrocausal model will look superdeterministic: [itex]\lambda[/itex] is chosen taking Alice's and Bob's future choices into account. It's just a mechanism to explain the seeming superdeterminism.

Getting back on-topic: There should be a definitive answer, one way or the other, about whether BM requires superdeterminism of the conspiracy kind. I don't see that it does.
 
