Assumptions of the Bell theorem

In summary: The aim of this thread is to make a list of all the additional assumptions that are necessary to prove the Bell theorem. A further aim is to list the assumptions that are used in some but not all versions of the theorem, and so are not really necessary. The list of necessary and unnecessary assumptions is preliminary, so I invite others to supplement and correct it.
  • #211
Demystifier said:
I don't think that this symmetry assumption is justified, but I also think that you don't actually need it in the rest of your analysis.
It's not needed, but for EPR experiments, it is true, I think. There are 4 combinations for the two results:

  1. ##P_1: A = +1, B = +1##
  2. ##P_2: A = +1, B = -1##
  3. ##P_3: A = -1, B = +1##
  4. ##P_4: A = -1, B = -1##
I think that ##P_1 = P_4## and ##P_2 = P_3##. Since they have to add up to 1, we can write:
  • ##P_2 = \frac{1}{2} - P_1##
  • ##P_3 = \frac{1}{2} - P_1##
  • ##P_4 = P_1 ##
The correlation ##E## is related to the four probabilities via:

##E = P_1 + P_4 - P_2 - P_3 = 4 P_1 - 1##
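This relation is easy to sanity-check numerically. A minimal sketch (my own, assuming the standard singlet-state prediction ##P_1 = \frac{1}{2}\sin^2(\theta/2)## for relative detector angle ##\theta##, which is an input of the sketch, not derived here):

```python
import numpy as np

# Check that E = 4*P_1 - 1 reproduces the familiar singlet correlation
# E(a, b) = -cos(theta), where P_1 = (1/2) sin^2(theta/2) is the standard
# quantum prediction for the ++ outcome.
for theta in np.linspace(0.0, np.pi, 9):
    P1 = 0.5 * np.sin(theta / 2) ** 2
    E = 4 * P1 - 1
    assert np.isclose(E, -np.cos(theta))
print("E = 4*P1 - 1 matches -cos(theta) at all sampled angles")
```

This works because ##4 \cdot \frac{1}{2}\sin^2(\theta/2) - 1 = 2\sin^2(\theta/2) - 1 = -\cos\theta##.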
 
  • #212
atyy said:
Is the causal Markov condition the same argument as given in Wood and Spekkens (Fig. 19)?
https://arxiv.org/abs/1208.4119

Why do you say Maudlin tacitly assumes postulates equivalent to CMC in bad faith?
Wood and Spekkens rely heavily on the CMC in their paper. Fig. 19 is just one particular case of a causal graph. The CMC is used to derive the independence relations from it, which are written below that figure. But it also appears in many other places in their paper. The whole section II is dedicated to the CMC, where they state it as "Theorem 1." It's a pretty good introduction.

Regarding Maudlin: He seems to be very aggressive towards people who disagree with him and point out the flaws in his arguments. I think he knows perfectly well that once he stops being vague, his point is moot, so he refuses to do that. His paper, "What Bell did", which was cited earlier, got published only because someone was willing to write a direct rebuttal. The publishers found it too polemical. For me, his attitude qualifies as bad faith, but that's just my opinion.
 
  • Like
Likes atyy
  • #213
stevendaryl said:
It's not needed, but for EPR experiments, it is true, I think. There are 4 combinations for the two results:

  1. ##P_1: A = +1, B = +1##
  2. ##P_2: A = +1, B = -1##
  3. ##P_3: A = -1, B = +1##
  4. ##P_4: A = -1, B = -1##
I think that ##P_1 = P_4## and ##P_2 = P_3##. Since they have to add up to 1, we can write:
  • ##P_2 = \frac{1}{2} - P_1##
  • ##P_3 = \frac{1}{2} - P_1##
  • ##P_4 = P_1 ##
The correlation ##E## is related to the four probabilities via:

##E = P_1 + P_4 - P_2 - P_3 = 4 P_1 - 1##
As far as I can see, it's valid only for the usual EPR state, only for measurement in the z-direction, and most importantly, only for quantum probabilities. Probabilities predicted by a general Bell-type HV theory, which a priori need not reproduce the predictions of quantum theory, need not obey those symmetries.
 
  • #214
Demystifier said:
As far as I can see, it's valid only for the usual EPR state, only for measurement in the z-direction

How is it tied to the z-direction? There are parameters ##\vec{a}## and ##\vec{b}## that are arbitrary. The ##P_1## is the probability of the first measurement getting +1 in the ##\vec{a}## direction and the second measurement getting +1 in the ##\vec{b}## direction.

and most importantly, only for quantum probabilities. Probabilities predicted by a general Bell-type HV theory, which a priori don't need to have the same predictions as quantum theory, don't need to obey those symmetries.

Well, Bell’s analysis is not for a general case, it’s for a case like EPR where measurements always produce a result of ##\pm 1##. But I guess you can always cast anything into that form by calling one result +1 and all incompatible results -1?

Also, Bell’s motivation for assuming that the results are deterministic relies on perfect correlations/anticorrelations.

So I don’t think of Bell’s result as being about any correlated measurement, but more specifically about EPR type measurements.
 
Last edited:
  • #215
atyy said:
Is the causal Markov condition the same argument as given in Wood and Spekkens (Fig. 19)?
https://arxiv.org/abs/1208.4119

In its conclusions, this paper has some healthy reflections on our missing understanding of the process of inferring causal laws from correlations, which gives hope to the world!

"Note, however, that if the conditional probabilities that appear in classical causal models are interpreted as degrees of belief – and we take this to be the most sensible interpretation – then the transition from classical causal modelsto quantum causal models involves not only a modification to physics, but a modification to the rules of inference. In this view, the correct theory of inference is not a priori but empirical. Nonetheless, one cannot simply declare by fiat that some formulation of quantum theory is a theory of inference. One must justify this claim. At a minimum,one must determine how standard concepts in a theory of inference generalize to the quantum domain. One could also reconsider the various proposals for axiomatic derivations of classical probability theory, for instance, that ofCox [48] or that of de Finetti [49], to see whether a reasonable modification of the axioms yields a quantum theory of inference. Ideally, one would show that if quantum causal models imply a modification to both our physics and to our theory of inference, then these modifications are not independent. After all, the physics determines the precise manner in which an agent can gather information about the world and in turn act upon it and so the physics should determine what is the most adaptive theory of inference for an agent. It is in this sense that the project of defining quantum causal models is not yet complete and only with such a completion in hand can one really say that a causal explanation of the Bell correlations without recourse to fine-tuning has been achieved"
-- https://arxiv.org/abs/1208.4119

The fine-tuning conclusion here is interesting, and is one of the main reasons why I think an evolving, interacting, agent-centered model seems not only plausible but the only viable option. This is a tough problem of course.

/Fredrik
 
  • Like
Likes atyy
  • #216
Demystifier said:
Is it your own invention to call it "factorizability assumption", or is there another reference where it is called so?
I may have seen it in a previous reference, but I'm not sure. I don't mind adopting a different term if "factorizability" is considered misleading. The point is that this assumption, the specific equation that I pointed out, is what Bell meant by "locality" for the purposes of deriving his theorem.
 
  • Like
Likes vanhees71 and Demystifier
  • #217
PeterDonis said:
I may have seen it in a previous reference, but I'm not sure.
Reading through the Stanford Encyclopedia of Philosophy article that was referenced earlier in this thread, that article does use "factorizability condition" to refer to the one I was referring to.
 
  • Like
Likes vanhees71
  • #218
NEWS RELEASE 24-SEP-2020
A question of reality
SPRINGER
Research News

Physicist Reinhold Bertlmann of the University of Vienna, Austria has published a review of the work of his late long-term collaborator John Stewart Bell of CERN, Geneva in EPJ H. This review, 'Real or Not Real: that is the question', explores Bell's inequalities and his concepts of reality and explains their relevance to quantum information and its applications.

John Stewart Bell's eponymous theorem and inequalities set out, mathematically, the contrast between quantum mechanical theories and local realism. They are used in quantum information, which has evolving applications in security, cryptography and quantum computing.

The distinguished quantum physicist John Stewart Bell (1928-1990) is best known for the eponymous theorem that proved current understanding of quantum mechanics to be incompatible with local hidden variable theories. Thirty years after his death, his long-standing collaborator Reinhold Bertlmann of the University of Vienna, Austria, has reviewed his thinking in a paper for EPJ H, 'Real or Not Real: That is the question'. In this historical and personal account, Bertlmann aims to introduce his readers to Bell's concepts of reality and contrast them with some of his own ideas of virtuality.

Bell spent most of his working life at CERN in Geneva, Switzerland, and Bertlmann first met him when he took up a short-term fellowship there in 1978. Bell had first presented his theorem in a seminal paper published in 1964, but this was largely neglected until the 1980s and the introduction of quantum information.

Bertlmann discusses the concept of Bell inequalities, which arise through thought experiments in which a pair of spin-½ particles propagate in opposite directions and are measured by independent observers, Alice and Bob. The Bell inequality distinguishes between local realism - the 'common sense' view in which Alice's observations do not depend on Bob's, and vice versa - and quantum mechanics, or, specifically, quantum entanglement. Two quantum particles, such as those in the Alice-Bob situation, are entangled when the state measured by one observer instantaneously influences that of the other. This theory is the basis of quantum information.

And quantum information is no longer just an abstruse theory. It is finding applications in fields as diverse as security protocols, cryptography and quantum computing. "Bell's scientific legacy can be seen in these, as well as in his contributions to quantum field theory," concludes Bertlmann. "And he will also be remembered for his critical thought, honesty, modesty and support for the underprivileged."
###
Reference:
R. Bertlmann (2020), Real or Not Real: that is the question, European Physical Journal H, DOI 10.1140/epjh/e2019-90071-6
https://www.eurekalert.org/pub_releases/2020-09/s-aqo092420.php
 
  • #219
Demystifier said:
But tools for what? If they are tools for making predictions, then it's OK. But physicists also use theories for conceptual understanding. So if the theory is inconsistent, then conceptual understanding can also be inconsistent. And if conceptual understanding is inconsistent and the physicist is aware of it, then the tool does not fulfill its purpose.

I agree. And besides, actual models can have inconsistencies, but nature is not inconsistent.
I think the problem comes from confusing models with reality itself.

 
  • Like
Likes Demystifier
  • #220
Demystifier said:
But tools for what? If they are tools for making predictions, then it's OK. But physicists also use theories for conceptual understanding.

I agree with you, but to me, the “shut up and calculate” crowd seems to not care about conceptual understanding, as long as their tools for making predictions work well enough.
 
  • Like
Likes Demystifier
  • #221
stevendaryl said:
the “shut up and calculate” crowd seems to not care about conceptual understanding, as long as their tools for making predictions work well enough.
Engineering? I always thought of science as the process of creating the tools, and engineering as just using the tools.

I have one funny memory from a class in analytical mechanics, where half of the group was in the science program and half were engineering students. We had the same books and took the same exams; the difference was more in the emphasis on learning how to use the tools versus conceptual understanding.

I think on a bad day, the lecturer, who was very much a person who encouraged deeper questions, was provoked by one of the engineering students who asked many stupid questions like "I do not understand why I need to learn this, I will not have use for this when I get employed", and the teacher responded in frustration to the engineering student: "You are obviously not here to understand, you are just here to learn".

The science part of the group was amused.

/Fredrik
 
  • Like
  • Wow
Likes Demystifier, vanhees71, stevendaryl and 1 other person
  • #222
stevendaryl said:
I was going to argue that QM probabilities do violate the Kolmogorov axioms, but then I convinced myself that they don't
Isn't the event space not a sigma algebra, so unlike Kolmogorov probability, if ##E## is an event and ##F## is an event, we don't necessarily have ##E \land F## as an event?
 
  • Like
Likes Fra and stevendaryl
  • #223
Kolmo said:
Isn't the event space not a sigma algebra, so unlike Kolmogorov probability, if ##E## is an event and ##F## is an event, we don't necessarily have ##E \land F## as an event?
In the standard minimal Copenhagen-style interpretations, I think there is no way around this conclusion. Some people argue that quantum theory needs to be modified. But this raises the question: at what scale? Recently I read that they managed to entangle bacteria, so we have already reached the scale of living organisms. And further progress seems to be only a question of time. Also, I have yet to see a theory where this idea has been successfully employed.
 
  • #224
Suppose the quest here is to explain QM as some ultimate generalisation of (or, more likely, as an approximation to such a generalisation of) probabilistic reasoning from the fundamental level and up. Then, if one compares this with how one needs a connection between tangent planes to compare vectors from different nearby spaces, the situation here is far more complicated. I envision that instead of nearby tangent planes, we have different ways to encode information (data compression algorithms). So making sense of ##E \land F## also requires a connection that defines the recoding of information. Instead of parallel transport, we have a kind of recoding of information. The most obvious "point" of recoding information is that, depending on the situation, one can achieve a more effective code that retains more significant information under lossy constraints. But in the simplest case, a non-lossy, invertible recoding may allow one to make sense of this. The simplest case of conjugate variables are essentially related by a Fourier transform. Here one can also motivate different codes by their efficiency of data compression, especially when subject to lossy constraints (limited capacity).

I am not aware of much explicit and working in this direction, but all this is a key part of the reconstruction I personally seek. It also holds many problematic questions to be both conceptually made sense of, and next to be made explicit in some choice of mathematical model.

This is, I think, naturally relevant in any perspective where one considers interacting qbist-agents. It is pretty obvious that an agent can never retain all information from its own history, so some sort of "lossy retention" scheme is required. And here the choice of "data compression" becomes highly relevant. It seems natural to consider the choice of data compression as part of an agent's traits, subject to evolution. Many questions arise, like: what is the conceptual relation between the agent's capacity to hold information and conventional measures such as mass? And what is the relation between the internal code and the communication channel? This gives many suggestive associations to holographic connections.

If anyone is aware of anything published in this direction, I would be happy to read it; even papers with failed attempts would be interesting! One finds tangential papers, but none that attempts a head-on reconstruction of QM (or a generalisation of QM) with this method.
Nullstein said:
Some people argue that quantum theory needs to be modified. But this raises the question: At what scale?
The "scale" I envision this construction, would likely be at some very fundamental Planck scale, before the emergent of space, as it seems the "separation" of general connections between spacetime and internal spaces must be "reworked" before. So I see it more as a conceptual tool that may also have a mathematical counterpart in considering how a unified measure, is spontaneously split up into diversity.

/Fredrik
 
  • #225
The way I was taught it was that we take two random number generators and transmit them to some distant lab partner, Bob. However, before we do, we secretly use a third RNG to generate a number ##\lambda## and store it in the two RNGs to be sent to Bob. They then display to Bob their generated value added to our ##\lambda## value.

Bob will have a probability distribution over the outcomes of the two RNGs that does not factorise, reflecting their correlation:
##p(a,b) \neq p(a)p(b)##

However if Bob learned the hidden RNG output ##\lambda## we kept to ourselves, he could decorrelate the results:
##p(a,b|\lambda) = p(a|\lambda)p(b|\lambda)##
Since he would have learned the value of the correlating variable.

Since QM violates this factorisation condition, it means there is no such decorrelating variable to learn for entangled states, i.e. no hidden variable behind the correlation; the correlation is irreducibly probabilistic. This reflects that QM is an intrinsically probabilistic theory.
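This toy model is easy to simulate. A Monte Carlo sketch (my own illustration; I give each local RNG a bias toward 0, an arbitrary choice of mine, because unbiased outputs would remain independent even after adding ##\lambda##):

```python
import random

random.seed(0)
N = 400_000

def biased_bit():
    # Each local RNG outputs 1 with probability 0.2 (arbitrary bias)
    return 1 if random.random() < 0.2 else 0

n_a0 = n_b0 = n_a0b0 = 0
n_lam = [0, 0]
n_a0_lam = [0, 0]; n_b0_lam = [0, 0]; n_a0b0_lam = [0, 0]

for _ in range(N):
    lam = random.randint(0, 1)   # the secretly shared value
    a = biased_bit() ^ lam       # each RNG "adds" lam mod 2 before display
    b = biased_bit() ^ lam
    n_a0 += (a == 0); n_b0 += (b == 0); n_a0b0 += (a == 0 and b == 0)
    n_lam[lam] += 1
    n_a0_lam[lam] += (a == 0); n_b0_lam[lam] += (b == 0)
    n_a0b0_lam[lam] += (a == 0 and b == 0)

# Unconditionally the outcomes are correlated: p(a,b) != p(a)p(b)
print(f"p(0,0) = {n_a0b0/N:.3f}  vs  p(0)p(0) = {n_a0/N * n_b0/N:.3f}")

# Conditioned on lam, the distribution factorises: p(a,b|lam) = p(a|lam)p(b|lam)
for lam in (0, 1):
    pa = n_a0_lam[lam] / n_lam[lam]
    pb = n_b0_lam[lam] / n_lam[lam]
    pab = n_a0b0_lam[lam] / n_lam[lam]
    print(f"lam={lam}: p(0,0|lam) = {pab:.3f}  vs  p(0|lam)p(0|lam) = {pa*pb:.3f}")
```

The first line shows the non-factorising joint distribution Bob sees; the conditional lines show the decorrelation once ##\lambda## is known.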
 
  • #226
Kolmo said:
Isn't the event space not a sigma algebra, so unlike Kolmogorov probability, if ##E## is an event and ##F## is an event, we don't necessarily have ##E \land F## as an event?
Is it equivalent to saying that ##E \land F## is an event with zero probability?

Here is an example. Consider a single spin-1/2 particle. One would say that ##\sigma_x=##right and ##\sigma_y=##up are events, while their conjunction isn't. But let us think from a macroscopic point of view. An event is the appearance of a dot on the screen of the Stern-Gerlach apparatus. With x-orientation of the magnet the dot appears either on the right or on the left, while with y-orientation of the magnet the dot appears either up or down. But one can also imagine that two dots appear, one on the right and one up, so the conjunction of the events is an event too. It's just that such a conjunction has zero probability.
 
  • Like
Likes Kolmo
  • #227
Yes, I realized that what I said, although strictly accurate, doesn't really bring out the difference much, because it can also be the case in classical probability that ##E \land F = \emptyset##.

So it would be better to say the difference is that in classical Kolmogorov probability, if ##E \land F = \emptyset## then ##P(E|F) = 0##, since then ##E## is a subevent of the event ##\neg F##; i.e. being incompatible/disjoint implies being contradictory/mutually exclusive.

Whereas in QM we can have no ##E \land F## event and yet still ##P(E|F) \neq 0##.
 
Last edited:
  • #228
Kolmo said:
Whereas in QM we can have no ##E \land F## event and yet still ##P(E|F) \neq 0##.
For example?
 
  • #229
Demystifier said:
Is it equivalent to saying that ##E \land F## is an event with zero probability?

Here is an example. Consider a single spin-1/2 particle. One would say that ##\sigma_x=##right and ##\sigma_y=##up are events, while their conjunction isn't. But let us think from a macroscopic point of view. An event is the appearance of a dot on the screen of the Stern-Gerlach apparatus. With x-orientation of the magnet the dot appears either on the right or on the left, while with y-orientation of the magnet the dot appears either up or down. But one can also imagine that two dots appear, one on the right and one up, so the conjunction of the events is an event too. It's just that such a conjunction has zero probability.
No, this example doesn't suffice to show the contradiction. It's not as easy to see. The contradiction arises if you have multiple sets of observables that commute within their set, but don't commute globally. The GHZ experiment is such an example.

We have the observables ##A_x, A_y, B_x, B_y, C_x, C_y## that can take values ##\pm 1##. Following Mermin, we can conclude from the probabilities ##P(A_x B_y C_y = -1) = 1##, ##P(A_y B_x C_y = -1) = 1## and ##P(A_y B_y C_x = -1) = 1## that ##P(A_x B_x C_x = -1) = 1##. However, QM predicts ##P(A_x B_x C_x = -1) = 0##.

This can only be possible if we are not allowed to form the joint event ##X\wedge Y\wedge Z##, because if we take ##X = \{A_x B_y C_y = -1\}##, ##Y = \{A_y B_x C_y = -1\}##, ##Z = \{A_y B_y C_x = -1\}## and if we could form this joint event, we would have ##X\wedge Y\wedge Z = \{A_x B_x C_x = -1\}## and its probability would be forced to be ##1## instead of ##0##.
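These sign conventions can be checked directly. A small numerical sketch (my own, using the GHZ state ##(|000\rangle + |111\rangle)/\sqrt{2}##, which reproduces the probabilities quoted above):

```python
import numpy as np

# Pauli matrices and the GHZ state (|000> + |111>)/sqrt(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
ghz = np.zeros(8, dtype=complex)
ghz[0] = ghz[7] = 1 / np.sqrt(2)

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

# Expectation values <GHZ| O |GHZ> for the four product observables
for label, op in [("XYY", kron3(X, Y, Y)), ("YXY", kron3(Y, X, Y)),
                  ("YYX", kron3(Y, Y, X)), ("XXX", kron3(X, X, X))]:
    val = np.real(ghz.conj() @ op @ ghz)
    print(label, val)
# XYY, YXY, YYX each give -1, so each product equals -1 with certainty,
# while XXX gives +1, so P(A_x B_x C_x = -1) = 0, contradicting the
# classical inference that the product of the first three forces -1.
```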
 
  • #230
I think along the lines that @Demystifier was saying, in the GHZ experiment, the events are not

##A_x = +1##
##A_x = -1##
##B_x = +1##
etc.

The events are:
  • Someone measures the spin of particle ##A## in the x-direction, and the result is ##+1##
  • Someone measures the spin of particle ##A## in the x-direction, and the result is ##-1##
  • etc

These events have well-defined probabilities that obey the Kolmogorov axioms. And since you can't measure a particle's spin along two different axes at the same time, the joint probability of:

"Someone measured the spin of particle A in the x-direction and the result is +1"

and

"Someone measured the spin of particle A in the y-direction and the result is +1"

is zero.
 
  • Like
Likes Demystifier
  • #231
stevendaryl said:
I think along the lines that @Demystifier was saying, in the GHZ experiment, the events are not

##A_x = +1##
##A_x = -1##
##B_x = +1##
etc.

The events are:
  • Someone measures the spin of particle ##A## in the x-direction, and the result is ##+1##
  • Someone measures the spin of particle ##A## in the x-direction, and the result is ##-1##
  • etc

These events have well-defined probabilities that obey the Kolmogorov axioms. And since you can't measure a particle's spin along two different axes at the same time, the joint probability of:

"Someone measured the spin of particle A in the x-direction and the result is +1"

and

"Someone measured the spin of particle A in the y-direction and the result is +1"

is zero.
Yes, as I said in post 224, you can attempt to modify QM at some scale and get around the problem, but then we aren't talking about vanilla QM. The statement is just that plain vanilla QM violates Kolmogorov probability.

If you attempt to describe the event "Someone measured the spin" within QM itself, it would again have to have an operator associated with it and you would find other operators that wouldn't commute with it, so you could construct a similar contradiction at a different level. (Also, currently there is no known working example for such a modified theory.)
 
  • #232
Nullstein said:
Yes, as I said in post 224, you can attempt to modify QM at some scale and get around the problem, but then we aren't talking about vanilla QM. The statement is just that plain vanilla QM violates Kolmogorov probability.

Okay, I guess I agree with that. Except that I'm not sure that "vanilla" QM is really well-defined. To talk about probabilities in QM, you have to bring in measurements, and QM doesn't really say what goes on in a measurement.

If you attempt to describe the event "Someone measured the spin" within QM itself, it would again have to have an operator associated with it and you would find other operators that wouldn't commute with it, so you could construct a similar contradiction at a different level. (Also, currently, there is no known working example for such a modified theory.)

I think that maybe the idea that is given by many introductions to quantum mechanics, that all observables are created equal, is just wrong.

In an experiment, we mentally divide the universe into three parts:
  1. System A: the system of interest
  2. System B: the measuring device
  3. System C: the rest of the universe

With this division, it seems that we can use any observable we want for System A. But maybe that's misleading, because when we do an experiment, we don't directly "observe" anything about System A. What we do is set up an interaction between System A and System B so that the relevant microscopic property of System A can be "read off" by looking at a corresponding macroscopic property of System B.

So for System B, it is not the case that all observables are created equal. The only observables that are relevant are macroscopic observables. You know, things like "There is a black spot at this location of a photographic plate", or "The Geiger counter clicked", or "The LED turned on". The fact that there are other observables about System B that don't commute with those macroscopic observables is just not relevant.
 
  • Like
Likes Demystifier
  • #233
stevendaryl said:
So for System B, it is not the case that all observables are created equal. The only observables that are relevant are macroscopic observables. You know, things like "There is a black spot at this location of a photographic plate", or "The Geiger counter clicked", or "The LED turned on".
When one starts from that, one naturally ends up at the Bohmian interpretation, as I explained in the paper linked in my signature.
 
  • #234
stevendaryl said:
So for System B, it is not the case that all observables are created equal. The only observables that are relevant are macroscopic observables. You know, things like "There is a black spot at this location of a photographic plate", or "The Geiger counter clicked", or "The LED turned on". The fact that there are other observables about System B that don't commute with those macroscopic observables is just not relevant.
I don't think this helps, because even if you only care about such macroscopic observables, it's a general feature of QM that the time-evolved version of an operator doesn't commute with the operator itself. Generically, a unitary evolution will always produce Schroedinger cat states, i.e. the macroscopic observable gets entangled with the microscopic system. (In fact, this is required in order to have a measurement.) So you get a similar contradiction with Kolmogorov probability again, for macroscopic observables at different times. In order to get around that, it doesn't suffice to just restrict the set of observables; you must also drop unitary time evolution at the level of the full system A+B+C.
 
  • #235
stevendaryl said:
So for System B, it is not the case that all observables are created equal. The only observables that are relevant are macroscopic observables. You know, things like "There is a black spot at this location of a photographic plate", or "The Geiger counter clicked", or "The LED turned on". The fact that there are other observables about System B that don't commute with those macroscopic observables is just not relevant.
Nullstein said:
I don't think this helps, because even if you only care about such macroscopic observables, it's a general feature of QM that the time evolved version of an operator doesn't commute with the operator itself. Generically, a unitary evolution will always produce Schroedinger cat states, i.e. the macroscopic observable state gets entangled with the microscopic system.

But what we see at the macroscopic level is not deterministic unitary evolution, but nondeterminism. Rather than evolving into a dead cat/live cat superposition, we see either a dead cat, with a certain probability, or a live cat, with a certain probability.
 
  • #236
stevendaryl said:
But what we see at the macroscopic level is not deterministic unitary evolution, but nondeterminism. Rather than evolving into a dead cat/live cat superposition, we see either a dead cat, with a certain probability, or a live cat, with a certain probability.
I agree that we don't observe macroscopic superpositions. I'm just saying that if we want to avoid conflicts with Kolmogorov probability, the time evolution of the macroscopic observables can't be determined by an unmodified microscopic unitary evolution, because unitary evolution generally produces non-commutativity, i.e. ##[A(t), A(t')] \neq 0##.
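The point that unitary evolution generically gives ##[A(t), A(0)] \neq 0## can be made concrete in a two-level toy model (my own sketch; ##A = \sigma_z## and ##H = \sigma_x## are arbitrary choices for illustration):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)   # Hamiltonian H = sigma_x
Z = np.array([[1, 0], [0, -1]], dtype=complex)  # observable A = sigma_z

t = 0.3
# exp(-i*H*t) = cos(t)*I - i*sin(t)*X, since X^2 = I
U = np.cos(t) * np.eye(2) - 1j * np.sin(t) * X
A_t = U.conj().T @ Z @ U                        # Heisenberg-picture A(t)

comm = Z @ A_t - A_t @ Z
print(np.linalg.norm(comm))                     # nonzero for generic t
```

Here ##A(t) = \cos(2t)\,\sigma_z + \sin(2t)\,\sigma_y##, so the commutator vanishes only at special times ##t = n\pi/2##.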
 
  • #237
Nullstein said:
it doesn't suffice to just restrict the set of observables, but you must also drop unitary time evolution at the level of the full system A+B+C.
Both many worlds and Bohmian mechanics are counterexamples to this claim. Fundamentally they are unitary in the full system, but at the effective emergent level they explain the illusion of non-unitary collapse.
 
  • #238
Demystifier said:
Both many worlds and Bohmian mechanics are counterexamples to this claim. Fundamentally they are unitary in the full system, but at the effective emergent level they explain the illusion of non-unitary collapse.
No, it's not fixed in any interpretation. You also have effective non-unitarity in Copenhagen, but what is needed to escape the violation of Kolmogorov probability is non-unitarity on the level of the full system.
 
  • #239
Demystifier said:
Both many worlds and Bohmian mechanics are counterexamples to this claim. Fundamentally they are unitary in the full system, but at the effective emergent level they explain the illusion of non-unitary collapse.
I am not convinced that Many Worlds is a counterexample, because it is difficult (for me, at least) to understand how probabilities arise in Many Worlds. I’d have to understand that in order to answer the question of whether Kolmogorov rules apply.

Bohmian mechanics avoids the problem, though.
 
  • Like
Likes Demystifier
  • #240
stevendaryl said:
Bohmian mechanics avoids the problem, though.
In Bohmian mechanics, macroscopic observables are supposed to be functions of the microscopic variables: ##A = f(x_1, \ldots, x_n)##. You then get ##A(t) = U^\dagger A U = f(U^\dagger x_1 U, \ldots, U^\dagger x_n U) = f(x_1(t), \ldots, x_n(t))##. So the non-commutativity ##[x_i, x_i(t)]## directly translates to the non-commutativity ##[A,A(t)]##, i.e. the macroscopic variables don't commute at different times. So you can construct a contradiction to Kolmogorov probability again as above.
 
  • #241
Nullstein said:
In Bohmian mechanics, macroscopic observables are supposed to be functions of the microscopic variables: ##A = f(x_1, \ldots, x_n)##. You then get ##A(t) = U^\dagger A U = f(U^\dagger x_1 U, \ldots, U^\dagger x_n U) = f(x_1(t), \ldots, x_n(t))##. So the non-commutativity ##[x_i, x_i(t)]## directly translates to the non-commutativity ##[A,A(t)]##, i.e. the macroscopic variables don't commute at different times. So you can construct a contradiction to Kolmogorov probability again as above.
I don't think that the quantity ##U^\dagger x_1 U## makes sense. In Bohmian mechanics, the quantity ##x_1## you are referring to is not an operator.
 
  • #242
Demystifier said:
I don't think that the quantity ##U^\dagger x_1 U## makes sense. In Bohmian mechanics, the quantity ##x_1## you are referring to is not an operator.
But Bohmian mechanics has to agree with the predictions of QM. The claim of Bohmian mechanics is that its predictions agree with the QM predictions when restricted to macroscopic observables ##A = f(x_1, \ldots, x_n)## (and their time evolved versions). So we're allowed to compute ##A(t)## using QM. If there was no agreement then BM and QM would lead to different time evolutions of these macroscopic variables.
 
  • #243
Nullstein said:
But Bohmian mechanics has to agree with the predictions of QM. The claim of Bohmian mechanics is that its predictions agree with the QM predictions when restricted to macroscopic observables ##A = f(x_1, \ldots, x_n)## (and their time evolved versions). So we're allowed to compute ##A(t)## using QM. If there was no agreement then BM and QM would lead to different time evolutions of these macroscopic variables.
But if you don't restrict to those macroscopic observables, presumably you can use Kolmogorov probability, since one can do that for Bohmian mechanics?
 
  • #244
Nullstein said:
But Bohmian mechanics has to agree with the predictions of QM. The claim of Bohmian mechanics is that its predictions agree with the QM predictions when restricted to macroscopic observables ##A = f(x_1, \ldots, x_n)## (and their time evolved versions). So we're allowed to compute ##A(t)## using QM. If there was no agreement then BM and QM would lead to different time evolutions of these macroscopic variables.
I see, you speak of time evolution in the Heisenberg picture. Can you translate your claims to the Schrodinger picture? That would help, because Bohmian mechanics is much better understood in the Schrodinger picture.
 
  • #245
atyy said:
But if you don't restrict to those macroscopic observables, presumably you can use Kolmogorov probability, since one can do that for Bohmian mechanics?
I think BM generally satisfies Kolmogorov probability if you look at just one instant of time, because they only consider commuting variables at each instant. However, unitary time evolution will evolve this set of observables into a new set, which won't commute with the old set. This happens independently of whether the observables are microscopic or macroscopic, so the situation won't improve if we include the microscopic variables. I was only referring to the macroscopic observables, because stevendaryl suggested that the problem might go away if we restrict ourselves to them.

I think there is no doubt about the paradoxes at the microscopic level. The argument then just goes like this: ##A=f(x)## is diagonal in the position eigenbasis and ##A(t)=f(U^\dagger x U)## is diagonal in the time-evolved position basis. Since the time-evolved position eigenbasis is incompatible with the time-zero position eigenbasis, ##A(t)## is incompatible with ##A##. So there are paradoxes at the microscopic level if and only if there are paradoxes at the macroscopic level (under the assumption that quantum mechanics is correct).
 
