Can Non-Realism = a Non-Deterministic Hidden Variable Theory?

In summary: violations of the Bell inequality can be explained by non-realism (non-counterfactual-definiteness) and non-determinism, rather than by non-locality. Bell's theorem has been extensively studied in the modern literature, and it is best to understand the modern interpretations and clarifications of the theorem before examining the original references, to avoid confusion and misinterpretation.
  • #1
morrobay
Given the locality assumption p(ab|xy,λ) = p(a|x,λ) p(b|y,λ), with λ defining single-valued realism (a, b, a′, b′ each equal to ±1), the inequality
S = ⟨ab⟩ + ⟨ab′⟩ + ⟨a′b⟩ − ⟨a′b′⟩ ≤ 2 is derived. Bell originally pointed out that classical indeterminism would not be enough for a hidden variable theory to overcome the restrictions imposed by the inequality. Later (Science 177, 880-881, 1972) he allowed that a hidden variable theory could be non-deterministic: it could evolve randomly, even discontinuously, so that values at one instant do not specify values at the next instant.
So if realism can be given up to explain the inequality violations, then why not also a non-deterministic hidden variable theory?
How can the above inequality be derived when the past variable λ is not a constant and there are no restrictions on causal relationships?
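A quick way to see where the bound comes from (my own sketch, not part of the thread): under single-valued realism, each λ fixes definite values a, b, a′, b′ ∈ {−1, +1}, and enumerating all sixteen assignments shows S never exceeds 2:

```python
from itertools import product

# Under single-valued realism, each hidden variable value λ fixes
# definite outcomes a, b, a', b' ∈ {-1, +1}. Enumerate all 16
# assignments and evaluate S = ab + ab' + a'b - a'b'.
values = [a * b + a * bp + ap * b - ap * bp
          for a, ap, b, bp in product([-1, 1], repeat=4)]

print(max(values), min(values))  # 2 -2
```

Since S = a(b + b′) + a′(b − b′), and one of b ± b′ is always 0 while the other is ±2, every assignment gives S = ±2; averaging over any distribution of λ therefore keeps the expectation between −2 and +2.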
 
  • #2
I think it would be wise to give the reference you got this from. If it's Science 177 880-881 1972 then you should say so from the start. Also, not everyone has access to journals like Science.

But what Bell's theorem shows is well known. It's really got nothing to do with determinism per se; it's to do with the somewhat related concept of counter-factual definiteness, which you can look up.

Thanks
Bill
 
  • #3
I understand the definition of counterfactual definiteness: that measurements that have not been made can be predicted. The Bell inequality is based on this. But with the definition that Bell has given for λ I would not expect CFD to apply. Therefore the inequality violations would be expected, and explained by non-realism and the related non-counterfactual-definiteness and non-determinism, and not necessarily by non-locality.
 
  • #4
bhobba said:
I think it would be wise to give the reference you got this from. If it's Science 177 880-881 1972 then you should say so from the start. Also, not everyone has access to journals like Science.

But what Bell's theorem shows is well known. It's really got nothing to do with determinism per se; it's to do with the somewhat related concept of counter-factual definiteness, which you can look up.

Thanks
Bill

See page 22 of this reference: http://arxiv.org/pdf/0902.3827v4.pdf

* I like this paper up until many worlds
 
  • #5
morrobay said:
But with the definition that Bell has given for λ I would not expect CFD to apply

It's been a long time since I read Bell.

What exactly is your intent here? To understand Bell's theorem or to examine the original references?

If the former then there are many many sources to help you eg:
http://www.johnboccio.com/research/quantum/notes/paper.pdf

If the latter then I suggest you do the former first, otherwise you are likely to get yourself in a knot.

I used to post a lot on a relativity forum, and many people used to pick apart Einstein's original papers or popularisations, ignoring the huge amount of work that has been done since clarifying and expanding on them. That is not the way to proceed. The way forward is to read the modern literature, such as the link I gave above; then, if the historical material interests you, return to the original paper. If you then find issues with the original paper you can take it up with those familiar with those sources.

Thanks
Bill
 
  • #6
morrobay said:
That measurements that have not been made can be predicted.

That's not it. It's the ability to speak meaningfully of the definiteness of the results of measurements that have not been performed (i.e. the ability to assume the existence of objects, and properties of objects, even when they have not been measured).

That's likely the cause of your issues. As I said, while related to determinism, it is most definitely not the same.

Thanks
Bill
 
  • #8
bhobba said:
It's been a long time since I read Bell.

What exactly is your intent here? To understand Bell's theorem or to examine the original references?

If the former then there are many many sources to help you eg:
http://www.johnboccio.com/research/quantum/notes/paper.pdf

If the latter then I suggest you do the former first, otherwise you are likely to get yourself in a knot.

I used to post a lot on a relativity forum, and many people used to pick apart Einstein's original papers or popularisations, ignoring the huge amount of work that has been done since clarifying and expanding on them. That is not the way to proceed. The way forward is to read the modern literature, such as the link I gave above; then, if the historical material interests you, return to the original paper. If you then find issues with the original paper you can take it up with those familiar with those sources.

Thanks
Bill

Not picking apart, but I think QM is incomplete regarding an explanation for the inequality violations.
 
  • #9
morrobay said:
Not picking apart, but I think QM is incomplete regarding an explanation for the inequality violations.

I disagree. It's a theorem. Whether the tests of it are loophole-free is a different question from the theorem itself.

If you don't think so, you should be able to explain, in your own words (not a link), why you think so.

If you can't then maybe the issue lies in your understanding.

I have noticed you discuss Bell a lot, which suggests to me that perhaps there is something you aren't quite grasping, like your incorrect view of counter-factual definiteness.

Thanks
Bill
 
  • #10
morrobay said:
Given the locality assumption p(ab|xy,λ) = p(a|x,λ) p(b|y,λ), with λ defining single-valued realism (a, b, a′, b′ each equal to ±1), the inequality
S = ⟨ab⟩ + ⟨ab′⟩ + ⟨a′b⟩ − ⟨a′b′⟩ ≤ 2 is derived. Bell originally pointed out that classical indeterminism would not be enough for a hidden variable theory to overcome the restrictions imposed by the inequality. Later (Science 177, 880-881, 1972) he allowed that a hidden variable theory could be non-deterministic: it could evolve randomly, even discontinuously, so that values at one instant do not specify values at the next instant.

Yes, you can certainly consider hidden-variables theories where the hidden variables evolve nondeterministically. In the case of an EPR experiment, we produce two correlated particles, and measure the spin (or polarization) of each particle along some axis. The problem with a nondeterministic theory is that if the SAME axis is chosen for both particle measurements, the results are perfectly correlated (or anti-correlated). If there were some randomness in the hidden variable, then there would be no way to get perfect correlations.
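stevendaryl's point can be illustrated with a toy simulation (my own construction; the flip-noise model is illustrative, not from any reference): a shared hidden variable reproduces perfect anti-correlation at equal settings only if each wing's response to it is deterministic.

```python
import random

random.seed(0)

def anticorrelation(noise, trials=100_000):
    """Fraction of runs with opposite outcomes at EQUAL settings, in a
    local model where each wing independently flips its intended
    outcome with probability `noise`."""
    hits = 0
    for _ in range(trials):
        lam = random.choice([-1, 1])                    # shared hidden variable
        a = lam if random.random() >= noise else -lam   # Alice's outcome
        b = -lam if random.random() >= noise else lam   # Bob's outcome
        hits += (a == -b)
    return hits / trials

print(anticorrelation(0.0))  # deterministic responses: exactly 1.0
print(anticorrelation(0.1))  # any randomness: ≈ 0.9² + 0.1² = 0.82 < 1
```

With any nonzero noise, the two wings agree to anti-correlate only when both flips coincide, so the observed anti-correlation drops strictly below 1, contradicting the quantum prediction at equal settings.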
 
  • #11
bhobba said:
I disagree. It's a theorem. Whether the tests of it are loophole-free is a different question from the theorem itself.

If you don't think so, you should be able to explain, in your own words (not a link), why you think so.

If you can't then maybe the issue lies in your understanding.

I have noticed you discuss Bell a lot, which suggests to me that perhaps there is something you aren't quite grasping, like your incorrect view of counter-factual definiteness.

Thanks
Bill

My understanding of counter-factual definiteness is the EPR viewpoint that photons simultaneously have definite spins on the x, y, z axes. If photon 1 of an entangled pair has spin up on the x-axis, then photon 2 is known to have spin down on the x-axis, so that total angular momentum = 0.
Now if the spin of photon 2 is measured on the y-axis and is spin down, then with CFD you know that if photon 1 had been measured instead on the same axis it would have been spin up. Now in a deterministic hidden variable theory, λ would determine those spins.
 
  • #12
stevendaryl said:
Yes, you can certainly consider hidden-variables theories where the hidden variables evolve nondeterministically. In the case of an EPR experiment, we produce two correlated particles, and measure the spin (or polarization) of each particle along some axis. The problem with a nondeterministic theory is that if the SAME axis is chosen for both particle measurements, the results are perfectly correlated (or anti-correlated). If there were some randomness in the hidden variable, then there would be no way to get perfect correlations.

Suppose λ is a deterministic contextual hidden variable and it evolves λt0 → λt1. Then the ontic spin σ at t0 interacts with the measuring device until the measurement at t1, when S, the observed spin, is measured. (It is my understanding that Bell assumed that ontic spin could be directly measured.)
So if λ could be a contextual physical hidden variable that is dependent on the experimental setup, then the perfect correlations when detector settings are the same, and the inequality violations when detector settings are not aligned, could be explained. Note: if the above could be put into a formal statement, it would be a possible explanation.
 
  • #13
morrobay said:
My understanding of counter-factual definiteness is the EPR viewpoint that photons simultaneously have definite spins on the x, y, z axes.

It's the definition I gave previously, which is different from what you said. It is not that if you measure one you know the other; that is a given from entanglement. It is whether you can speak meaningfully of properties when not measured, i.e. whether, before you measure one part of the entangled EPR pair, you can speak meaningfully of it having properties.

I linked to a paper that derives Bell's theorem from careful definitions. Here is its definition of counterfactual definiteness:
'Let us define a “counterfactual” theory as one whose experiments uncover properties that are pre-existing. In other words, in a counterfactual theory it is meaningful to assign a property to a system (e.g. the position of an electron) independently of whether the measurement of such property is carried out. [Sometimes this counterfactual definiteness property is also called “realism”, but it is best to avoid such a philosophically laden term to avoid misconceptions].'

May I suggest you read it and we can have a discussion based on common understandings.

Thanks
Bill
 
  • #14
morrobay said:
So if λ could be a contextual physical hidden variable that is dependent on the experimental setup, then the perfect correlations when detector settings are the same, and the inequality violations when detector settings are not aligned, could be explained. Note: if the above could be put into a formal statement, it would be a possible explanation.

I have zero idea what you are trying to say.

Bell's theorem shows that if you have counter-factual definiteness, i.e. loosely that the entangled pair has properties regardless of whether they are measured or not, then the correlation predicted by QM requires a violation of locality, providing of course you think locality is a valid concept for correlated systems. If you have a contextual hidden variable that determines what is being measured prior to measurement, then you have counter-factual definiteness and hence locality is not possible. Generally, contextual means dependent on what's being measured. Exactly what context do you think changes in EPR?

Thanks
Bill
 
  • #15
morrobay said:
it is my understanding that Bell assumed that ontic spin could be directly measured

Before I can comment, you need to define what you mean by ontic spin.

But Bell assumed spin could be measured, no caveat, ontic or otherwise.

Thanks
Bill
 
  • #16
bhobba said:
I have zero idea what you are trying to say.

Bell's theorem shows that if you have counter-factual definiteness, i.e. loosely that the entangled pair has properties regardless of whether they are measured or not, then the correlation predicted by QM requires a violation of locality, providing of course you think locality is a valid concept for correlated systems. If you have a contextual hidden variable that determines what is being measured prior to measurement, then you have counter-factual definiteness and hence locality is not possible. Generally, contextual means dependent on what's being measured. Exactly what context do you think changes in EPR?

Thanks
Bill

Rather than paraphrase on this contextual hidden variable, where counter-factual definiteness does apply, please see
http://arxiv.org/pdf/quant-ph/0611259.pdf
 
  • #17
morrobay said:
Rather than paraphrase on this contextual hidden variable, where counter-factual definiteness does apply, please see
http://arxiv.org/pdf/quant-ph/0611259.pdf

I have bad experiences with links that supposedly claim certain things, only to find they don't.

So before going any further, please give a summary of its argument and we can proceed from that.

If you find that difficult for some reason say so and another way to proceed can be discussed.

Added Later:
I relented and read it. This is the so-called chameleon model. It has been discussed before:
https://www.physicsforums.com/threa...ons-behind-bells-and-related-theorems.727438/

See Dr Chinese's response:
This argument has been around in numerous variations for some time, and has failed to gain traction. Primarily because it goes directly against the EPR assumption (prior paragraph) regarding simultaneous elements of reality. In other words: if you reject that EPR assumption (as Accardi essentially does after about 10 pages) then you don't get the Bell result. That is already generally accepted, hence nothing really new in this line of reasoning.

Bottom line: there is a reason it has failed to gain traction; it's a non-issue.

Thanks
Bill
 
  • #19
bhobba said:
Bell's theorem shows that if you have counter-factual definiteness, i.e. loosely that the entangled pair has properties regardless of whether they are measured or not, then the correlation predicted by QM requires a violation of locality, providing of course you think locality is a valid concept for correlated systems.

I think this has been discussed before, but Bell's assumption about the relationship between local hidden variables and measurement results is NOT perfectly general. What he assumed is that, in the case of twin-pair type experiments, there are two functions:

[itex]A(a, \lambda)[/itex]
[itex]B(b, \lambda)[/itex]

giving the measurement result [itex]A[/itex] for Alice as a function of Alice's setting [itex]a[/itex] and the hidden variable [itex]\lambda[/itex], and the measurement result [itex]B[/itex] as a function of Bob's setting [itex]b[/itex] and [itex]\lambda[/itex]. That is not perfectly general, because it would satisfy classical locality for [itex]A[/itex] to depend probabilistically on [itex]a[/itex] and [itex]\lambda[/itex], rather than being determined by them (and similarly for [itex]B[/itex]). However, this more general assumption doesn't help; if the relationship is not a function (that is, the probabilities are not 0 or 1) then there is no way to get perfect correlation in the case [itex]a=b[/itex].

Assuming that it is a function implies counter-factual definiteness, but CFD is not an assumption; it's a conclusion from the fact of perfect correlations.

You could go through the whole Bell argument without assuming CFD, and it would not change the conclusion, there would just be an extra step in the argument.
 
  • #20
Let λ be a set of local contextual (dependent on θ2 − θ1) hidden variables that determine the probability distribution for any settings.
The observed spin S = ±1 is the result of the interaction between the ontic (σ) system, in which superpositions evolve according to contextual hidden variables, and the measuring device. This does not imply that the λt0 σ value also equals the λt1 observed spin ±1.
So counter-factual definiteness does not apply. I am suggesting that this model can give perfect correlations when detectors are aligned and also agree with QM: E(a,b) = cos(θ2 − θ1), with S = 2√2 > 2.
 
  • #21
morrobay said:
So counter-factual definiteness does not apply.

"Ontic" describes what is there, as opposed to the nature or properties of that being, i.e. it is real. So, obviously, counter-factual definiteness does apply.

You have been given a link to a previous PF thread where your paper was discussed. It's a total non-issue. You have been given a textbook link that examines it in detail. The definition it uses, based on CONTEXTUAL hidden variables, is explicitly non-local, so it does not violate Bell. Contextual means that if you change what's being measured, then instantaneously, i.e. in a non-local way, things change.

morrobay said:
Let λ be a set of local contextual (dependent on θ2 − θ1) hidden variables that determine the probability distribution for any settings.

That's the exact issue. As the textbook I linked to pointed out, you can only have local contextual hidden variables under the definitions used in that paper. Under the usual definitions it's non-local.

There really is no more to be said so I will leave it there. I suggest you study the textbook I linked to.

Added Later:
I forgot to mention: if it depends on the difference, then of course it can't be local. Change the difference and it changes instantaneously.

Thanks
Bill
 
  • #22
morrobay said:
Let λ be a set of local contextual (dependent on θ2 − θ1) hidden variables that determine the probability distribution for any settings.
The observed spin S = ±1 is the result of the interaction between the ontic (σ) system, in which superpositions evolve according to contextual hidden variables, and the measuring device. This does not imply that the λt0 σ value also equals the λt1 observed spin ±1.
So counter-factual definiteness does not apply. I am suggesting that this model

I don't understand that argument. Bell's proof makes no assumption about the existence of any "ontic" [itex]\sigma[/itex]. It's specifically about the relationship between a hidden variable [itex]\lambda[/itex], the settings [itex]a[/itex] and [itex]b[/itex] of the measurement devices, and the outcomes [itex]A[/itex] and [itex]B[/itex].
 
  • #23
stevendaryl said:
Bell's proof makes no assumption about the existence of any "ontic" [itex]\sigma[/itex]. It's specifically about the relationship between a hidden variable [itex]\lambda[/itex], the settings [itex]a[/itex] and [itex]b[/itex] of the measurement devices, and the outcomes [itex]A[/itex] and [itex]B[/itex].

I asked the OP to define what ontic meant here. He didn't elaborate. I can only assume it means real, in which case it's a counter-factual definiteness assumption. But regardless, you are correct: Bell makes no specific assumption as far as measurements are concerned. His is a theorem about what follows if you assume it's real and exists prior to observation, and what follows if it isn't.

The biggest issue for me, though, is his assumption of local contextual variables that depend on the difference of two things. That means it's explicitly non-local; it's a contradiction in terms. In fact, in the textbook link I gave that analysed it, that's the key point: under our usual definitions of locality it's explicitly non-local. The paper is basically a tricky, non-standard way to define it as local.

Thanks
Bill
 
  • #24
morrobay said:
Given the locality assumption p(ab|xy,λ) = p(a|x,λ) p(b|y,λ), with λ defining single-valued realism (a, b, a′, b′ each equal to ±1), the inequality
S = ⟨ab⟩ + ⟨ab′⟩ + ⟨a′b⟩ − ⟨a′b′⟩ ≤ 2 is derived.
How can the above inequality be derived when the past variable λ is not a constant and there are no restrictions on causal relationships?

The past variable [itex]\lambda[/itex] may be a random variable (or set of variables), denoting any and all known and unknown information contained in the shared past (light cones) of the pair of particles measured. It is not necessarily a constant.

Bell inequalities are derived from the assumption that knowing all information [itex]\lambda[/itex] that could reach both particles by traveling at or below the speed of light would completely explain any correlation they currently share. Classically, this makes sense, since forces and interactions between pairs of particles propagate no faster than light (so far as we know).
However, quantum mechanics predicts, and experiments have shown, violations of such Bell inequalities. This means that the correlations between some pairs of particles cannot be explained with all the information [itex]\lambda[/itex] in the shared past of both particles.
Either there simply is no such information to be found, or the correlations may yet be explained by influences propagating faster than light. So far, conclusions of this depend on how you interpret quantum mechanics.
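To make the violation concrete (a sketch of my own; the angle choices are the standard CHSH-optimal ones, and the sign convention for E depends on the entangled state used): taking the correlation E(θ1, θ2) = cos(θ2 − θ1) quoted in the thread, the CHSH combination reaches 2√2:

```python
from math import cos, pi

def E(t1, t2):
    # Quantum-mechanical correlation of the two ±1 outcomes
    # (sign convention depends on the entangled state used).
    return cos(t2 - t1)

# Standard CHSH-optimal settings for the two wings.
a, ap = 0.0, pi / 2
b, bp = pi / 4, -pi / 4

S = E(a, b) + E(a, bp) + E(ap, b) - E(ap, bp)
print(S)  # ≈ 2.828, i.e. 2√2 > 2
```

Each of the first three terms contributes cos(π/4) = √2/2, and the subtracted term contributes −√2/2, so the four terms add constructively to 2√2.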
 
  • #25
stevendaryl said:
I don't understand that argument. Bell's proof makes no assumption about the existence of any "ontic" [itex]\sigma[/itex]. It's specifically about the relationship between a hidden variable [itex]\lambda[/itex], the settings [itex]a[/itex] and [itex]b[/itex] of the measurement devices, and the outcomes [itex]A[/itex] and [itex]B[/itex].
jfizzix said:
The past variable [itex]\lambda[/itex] may be a random variable (or set of variables), denoting any and all known and unknown information contained in the shared past (light cones) of the pair of particles measured. It is not necessarily a constant.

Bell inequalities are derived from the assumption that knowing all information [itex]\lambda[/itex] that could reach both particles by traveling at or below the speed of light would completely explain any correlation they currently share. Classically, this makes sense, since forces and interactions between pairs of particles propagate no faster than light (so far as we know).
However, quantum mechanics predicts, and experiments have shown, violations of such Bell inequalities. This means that the correlations between some pairs of particles cannot be explained with all the information [itex]\lambda[/itex] in the shared past of both particles.
Either there simply is no such information to be found, or the correlations may yet be explained by influences propagating faster than light. So far, conclusions of this depend on how you interpret quantum mechanics.

Not questioning Bell's original derivation and assumptions. The intent is a local non-realistic model with a contextual hidden variable that can explain the inequality violations:
predicting the quantum correlations from outcomes A and B, and also the perfect correlations when detectors are aligned.
So λ (dependent on θ2 − θ1) in this case involves not only all the information of the past variable but also the physical interactions during measurement: λt0 (ontic), settings a and b, λt1 (observed).
Perhaps if more emphasis on an assumption of counterfactual definiteness had originally been included, then realism would have been given up instead of locality.
 
  • #26
morrobay said:
The intent is a local non-realistic model with a contextual hidden variable that can explain the inequality violations.

It can't be non-realistic, since you have ontic quantities. You haven't defined what you mean by ontic even though it's been requested, but its usual dictionary meaning is real. You have a non-local real theory.

Thanks
Bill
 
  • #27
bhobba said:
It can't be non-realistic, since you have ontic quantities. You haven't defined what you mean by ontic even though it's been requested, but its usual dictionary meaning is real. You have a non-local real theory.

Thanks
Bill

You tell me why ontic (in this context, properties that have not been measured, like position, momentum, spin) is defined as non-local.
 
  • #28
morrobay said:
You tell me why ontic (in this context, properties that have not been measured, like position, momentum, spin) is defined as non-local.

I have explained it before: you defined your contextual hidden variables to depend on the difference between two quantities, in this case the angles of a polariser. That is explicitly non-local. It's the same in Newtonian gravity; that too is explicitly non-local because it depends on the difference between two distances. This was explained in the textbook link I gave, which I have been urging you to study.

If you define ontic that way (that's not its definition; from a dictionary it is: possessing the character of real rather than phenomenal existence), then it says nothing about being realistic or not realistic. Real was the meaning in the paper you linked to, which leads me to suspect English may not be your native language and some of these terms aren't clear to you.

Generally, properties like spin, position, etc. don't have the property of locality or non-locality; they are real or not real, obey counterfactual definiteness, etc., but not locality. The reason your contextual hidden variable has the property of non-locality is that you have defined it to depend on the difference of two quantities; that difference changes, and instantaneously it changes.

Thanks
Bill
 
  • #29
bhobba said:
I have explained it before: you defined your contextual hidden variables to depend on the difference between two quantities, in this case the angles of a polariser. That is explicitly non-local. It's the same in Newtonian gravity; that too is explicitly non-local because it depends on the difference between two distances. This was explained in the textbook link I gave, which I have been urging you to study.

If you define ontic that way (that's not its definition; from a dictionary it is: possessing the character of real rather than phenomenal existence), then it says nothing about being realistic or not realistic.

Thanks
Bill

Let me define contextuality during measurement as the angle setting at polariser A or the angle setting at polariser B, and suppose that all interactions during measurement at A or B have no influence on each other. So, as in the Accardi paper, while quantum contextuality is non-local, probabilistic contextuality is local.
Again, the intent is a local explanation for the distributions that do take into account θ2 − θ1 and that result in Sqm = 2√2.
 
  • #30
morrobay said:
Let me define contextuality during measurement as the angle setting at polariser A or the angle setting at polariser B, and suppose that all interactions during measurement at A or B have no influence on each other. So, as in the Accardi paper, while quantum contextuality is non-local, probabilistic contextuality is local. Again, the intent is a local explanation for the distributions that result in Sqm = 2√2.

That's the exact point the textbook I linked to made. Under the usual definitions of this stuff you have come up with a non-local realistic theory. However, if you define things differently you can get what you suggested. The whole thing is basically a game of semantics and a total non-issue.

I do it with Bell as well. I don't believe locality is a property of correlated systems, so I exclude it. However, if you want a realistic theory you can't exclude it, and then it must be non-local. It's just a game of semantics; it's of no real value, just that a certain way of looking at it makes more sense to me. If looking at probability that way makes more sense to you, that's fine. But that's all it is: a game of semantics.

Thanks
Bill
 
  • #31
morrobay said:
So λ (dependent on θ2 − θ1) in this case involves not only all the information of the past variable but also the physical interactions during measurement: λt0 (ontic), settings a and b, λt1 (observed).

Two things that I don't understand about that one sentence: First, [itex]\lambda[/itex] is supposed to be something that is "set" in the backwards light-cone of the two measurements; in other words, at the moment the twin pair is produced. [itex]\theta_2 - \theta_1[/itex] is a fact about the measurement process, and is definitely NOT in the backwards light-cone. That quantity can be changed in-flight, right before the detection event.

The second thing I don't understand is how [itex]\theta_2 - \theta_1[/itex] is a local quantity. It depends on two distant quantities. By definition, I think that would be nonlocal.
 
  • Like
Likes bhobba
  • #32
stevendaryl said:
Two things that I don't understand about that one sentence: First, [itex]\lambda[/itex] is supposed to be something that is "set" in the backwards light-cone of the two measurements; in other words, at the moment the twin pair is produced. [itex]\theta_2 - \theta_1[/itex] is a fact about the measurement process, and is definitely NOT in the backwards light-cone. That quantity can be changed in-flight, right before the detection event.

The second thing I don't understand is how [itex]\theta_2 - \theta_1[/itex] is a local quantity. It depends on two distant quantities. By definition, I think that would be nonlocal.

When you say that λ is "set" in the past light cone, produced at the entanglement of the pair, are you implying that λ does not interact with the observable during the measurement process?
I.e. (λt0 + ontic unmeasured particle) → interactions with detector setting a or b at A or B → (λt1 + observed particle measurement).
Maybe someone can elaborate on exactly how λ and the particle interact physically, starting from the moment the pair is produced, through the measurement interaction, to the observed outcome.
And I redefined the contextual hidden variable to depend on the experimental setting θ at A or B, for locality. And, as you say, the detector angle can be changed in flight. So whether λ is a function with outcome ±1 or the outcome is stochastic, I cannot see how counter-factual definiteness could apply.
So again: is a local, non-realistic hidden variable theory possible from the above, one that can have perfect correlations when detectors are aligned and also predict the inequality violations, Sqm = 2√2, when the detectors at A and B are not aligned?
 
  • #33
morrobay said:
When you say that λ is "set" in the past light cone, produced at the entanglement of the pair, are you implying that λ does not interact with the observable during the measurement process?

The outcome at each detector is assumed to be a function of the variable [itex]\lambda[/itex], which is set at the moment of pair creation, and the detector setting.

There is a joint probability function, [itex]P(\theta_1, \theta_2, A, B)[/itex] which is the probability that Alice gets outcome [itex]A[/itex], and Bob gets outcome [itex]B[/itex], given that Alice's device has setting [itex]\theta_1[/itex] and Bob's device has setting [itex]\theta_2[/itex].

To explain this joint probability distribution in terms of local hidden variables would be to write it in the form:

[itex]P(\theta_1, \theta_2, A, B) = \sum_\lambda P(\lambda) P(\theta_1, A, \lambda) P(\theta_2, B, \lambda)[/itex]

where
  • [itex]P(\lambda)[/itex] is the probability that the hidden variable has value [itex]\lambda[/itex]
  • [itex]P(\theta_1, A, \lambda)[/itex] is the probability that Alice gets outcome [itex]A[/itex] given the hidden variable has value [itex]\lambda[/itex] and her device has setting [itex]\theta_1[/itex]
  • [itex]P(\theta_2, B, \lambda)[/itex] is the probability that Bob gets outcome [itex]B[/itex] given the hidden variable has value [itex]\lambda[/itex] and his device has setting [itex]\theta_2[/itex]
So, yes, the outcome for Alice is assumed to depend on some kind of interaction between [itex]\lambda[/itex] and [itex]\theta_1[/itex], and the outcome for Bob depends on some kind of interaction between [itex]\lambda[/itex] and [itex]\theta_2[/itex]. But [itex]\lambda[/itex] has nothing to do with [itex]\theta_2 - \theta_1[/itex].
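The factorised form above can be exercised numerically (my own sketch; the finite set of λ values and the random response tables are illustrative assumptions): however the per-λ probabilities are chosen, a model of this form never exceeds the CHSH bound of 2.

```python
import random

random.seed(1)

def random_local_model(n_lambda=8):
    """Random model of the factorised form
    P(θ1, θ2, A, B) = Σ_λ P(λ) P(θ1, A, λ) P(θ2, B, λ)."""
    w = [random.random() for _ in range(n_lambda)]
    p_lam = [x / sum(w) for x in w]
    # Per-λ expected outcome (a number in [-1, 1]) for each wing and setting.
    mean_A = {s: [random.uniform(-1, 1) for _ in range(n_lambda)] for s in ("a", "a'")}
    mean_B = {s: [random.uniform(-1, 1) for _ in range(n_lambda)] for s in ("b", "b'")}

    def corr(s1, s2):
        # The factorisation makes each correlation a λ-average of products.
        return sum(p * x * y for p, x, y in zip(p_lam, mean_A[s1], mean_B[s2]))
    return corr

worst = 0.0
for _ in range(1000):
    corr = random_local_model()
    S = corr("a", "b") + corr("a", "b'") + corr("a'", "b") - corr("a'", "b'")
    worst = max(worst, abs(S))

print(worst)  # stays at or below 2
```

This is the numerical counterpart of the algebraic fact that |S| ≤ 2 for any λ-average of products of quantities bounded by 1, which is exactly what the factorisation forces.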
 
  • #34
morrobay said:
So whether λ is a function with outcome ±1 or the outcome is stochastic, I cannot see how counter-factual definiteness could apply.

We assume a probability distribution of the form:

[itex]P(\theta_1, \theta_2, A, B) = \sum_\lambda P(\lambda) P_A(\theta_1, A, \lambda) P_B(\theta_2, B, \lambda)[/itex]

Now, we use the fact that if [itex]\theta_1 = \theta_2[/itex], then the correlation (or anti-correlation) is perfect. So we have, for perfect anti-correlation:

[itex]P(\theta, \theta, A, A) = \sum_\lambda P(\lambda) P_A(\theta, A, \lambda) P_B(\theta, A, \lambda) = 0[/itex]

There is no way to have a sum of nonnegative terms add up to 0 unless each term is 0. So we conclude:

[itex]P_A(\theta, A, \lambda) P_B(\theta, A, \lambda) = 0[/itex]

(for all values of [itex]\lambda[/itex] with nonzero probability).

This implies

[itex]P_A(\theta, A, \lambda) = 0[/itex] or [itex]P_B(\theta, A, \lambda) = 0[/itex]

This is true for every value of [itex]\lambda[/itex] and [itex]\theta[/itex]. That means that for any value of [itex]\lambda[/itex] and any angle [itex]\theta[/itex], either it is impossible for Alice to get result [itex]A[/itex] at that angle, or it is impossible for Bob to get result [itex]A[/itex] at that angle. So, if Alice gets result [itex]A[/itex] at some angle [itex]\theta[/itex], then it is DEFINITE that Bob cannot get result [itex]A[/itex] at that angle. That's counterfactual definiteness. It follows from the assumption of local hidden variables and the fact of perfect anti-correlation (or correlation).
 

1. What is non-realism?

In this context, non-realism (closely related to non-counterfactual-definiteness) is the position that physical quantities do not possess definite values prior to, and independently of, measurement. It rejects the assumption that a measurement merely reveals a pre-existing property of the system.

2. What is a non-deterministic hidden variable theory?

A non-deterministic hidden variable theory proposes the existence of hidden variables, factors that are not directly observable, to explain the behavior of quantum systems, while allowing those variables to evolve randomly, even discontinuously, so that their values at one instant do not fix their values at the next.

3. Can non-realism and a non-deterministic hidden variable theory coexist?

In principle, yes: non-realism does not by itself rule out hidden variables, and one can consider theories in which the hidden variables evolve non-deterministically. Whether such a theory can reproduce the quantum predictions, in particular the perfect correlations at equal detector settings, is the question debated in this thread.

4. How does non-realism affect our understanding of Bell inequality violations?

If realism (counterfactual definiteness) is given up, the Bell inequality can no longer be derived, so its experimental violation need not be attributed to non-locality. This is why some interpretations abandon realism rather than locality.

5. What are the implications of a non-deterministic hidden variable theory?

It challenges determinism, the idea that all events are fixed by prior causes, and suggests a more probabilistic picture of the universe. As discussed in the thread, however, intrinsic randomness in the hidden variables makes it difficult to reproduce the perfect correlations observed when both detectors share the same setting.
