Boole vs. Bell - the latest paper of De Raedt et al

  • Thread starter harrylin
  • Tags
    Bell Paper
In summary, the De Raedt paper discusses the apparent contradictions between quantum theory and probability frameworks, and argues that these contradictions arise from incomplete consideration of the premises of the derivation of the inequalities. The authors present extended Boole-Bell inequalities which are binding for both classical and quantum models, and show that apparent violations of these inequalities can be explained in an Einstein-local way.
  • #1
harrylin
Two years ago an intriguing paper by De Raedt's team concerning Bell's theorem appeared in Europhysics Letters (http://arxiv.org/PS_cache/arxiv/pdf/0907/0907.0767v2.pdf).

Now (officially next month), an elaboration on those ideas has been published:

Hans De Raedt et al: "Extended Boole-Bell inequalities applicable to quantum theory"
J. Comp. Theor. Nanosci. 8(6), 1011 (June 2011)
http://www.ingentaconnect.com/content/asp/jctn/2011/00000008/00000006/art00013

Full text also in http://arxiv.org/abs/0901.2546

De Raedt et al. do not claim to be the first to discuss these issues; they refer to quite a number of earlier papers by other authors that raise similar points.

Below I present a short summary of their very elaborate explanations.

It all looks very plausible to me, since I tend to regard Bell's theorem as a magician's trick: we tend to interpret an apparent miracle as a trick, even if nobody can explain how the trick is done. Now this paper appears to explain "how it's done", and I would like to hear whether there are valid objections.

Before we discuss their criticism of Bell's "element of reality", it may be good to discuss Boole's example of patients and illnesses, which De Raedt et al. reproduce in this paper. They show that, by failing to account for unknown causes of the observations, one can draw up inequalities similar to Bell's without any valid reason to infer spooky action at a distance - although it appears that way.

Does anyone challenge the correctness of that claim?

Regards,
Harald

--------------------------------------------------------
Abstract:
We address the basic meaning of apparent contradictions of quantum theory and probability frameworks as expressed by Bell's inequalities. We show that these contradictions have their origin in the incomplete considerations of the premises of the derivation of the inequalities. A careful consideration of past work, including that of Boole and Vorob'ev, has led us to the formulation of extended Boole-Bell inequalities that are binding for both classical and quantum models. The Einstein-Podolsky-Rosen-Bohm gedanken experiment and a macroscopic quantum coherence experiment proposed by Leggett and Garg are both shown to obey the extended Boole-Bell inequalities. These examples as well as additional discussions also provide reasons for apparent violations of these inequalities.

The above abstract is IMHO a rather "soft" reflection of the contents: the way I read it, this paper basically claims to show that Bell's theorem is wrong! It does so in an elaborate way; here are some fragments of the text (copied from the arXiv version):

"the Achilles heel of Bell's interpretations: [..] all of Bell's derivations assume from the start that ordering the data into triples as well as into pairs must be appropriate and commensurate with the physics. [..] From our work above it is then an immediate corollary that Bell's inequalities cannot be violated; not even by influences at a distance."

The paper next discusses such things as "Filtering-type measurements on the spin of one spin-1/2 particle", "Application to quantum flux tunneling", "Application to Einstein-Podolsky-Rosen-Bohm (EPRB) experiments" (in particular Stern-Gerlach).

To top it off, illustrations of apparent Bell violations are given, including one of a similar inequality using "a simple, realistic every-day experiment involving doctors who perform allergy tests on patients". [..] "Together these examples represent an infinitude of possibilities to explain apparent violations of Boole-Bell inequalities in an Einstein local way." Special attention is given to "EPR-Bohm experiments and measurement time synchronization".

"It is often claimed that a violation of such inequalities implies that either realism or Einstein locality should be abandoned. As we saw in our counterexample which is both Einstein local and realistic in the common sense of the word, it is the one to one correspondence of the variables to the logical elements of Boole that matters when
we determine a possible experience, but not necessarily the choice between realism and Einstein locality."
[..]
"The mistake here is that Bell and followers insist from the start that the same element of reality occurs for the three different experiments with three different setting pairs."

The -IMHO- most important conclusion of the paper is that "A violation of the Extended Boole-Bell inequalities cannot be attributed to influences at a distance"; they argue that a violation can only arise from the grouping of the data into pairs.
 
  • #2
Note: this thread may be seen in part as a continuation of the thread "Violation of Bell's Theorem"; my purpose here is to focus on the discussions in the peer-reviewed literature instead of on ideas of physicsforums members.
https://www.physicsforums.com/showthread.php?t=496839
 
  • #3
harrylin said:
"The mistake here is that Bell and followers insist from the start that the same element of reality occurs for the three different experiments with three different setting pairs."

Yes, I quite agree that is an assumption of Bell. A correct one, of course! And this is not coming from the quantum mechanical side, it is coming from the realism side. As I have said many times before: if the above is NOT a concise requirement, then what DOES IT MEAN TO BE REALISTIC?

Note: virtually anything LESS than the above is essentially returning to the standard QM position.
 
  • #4
(Citing De Raedt: "The mistake here is that Bell and followers insist from the start that the same element of reality occurs for the three different experiments with three different setting pairs.")
DrChinese said:
Yes, I quite agree that is an assumption of Bell. A correct one, of course! And this is not coming from the quantum mechanical side, it is coming from the realism side. As I have said many times before: if the above is NOT a concise requirement, then what DOES IT MEAN TO BE REALISTIC?

Note: virtually anything LESS than the above is essentially returning to the standard QM position.

Isn't the unknown element of reality different every time in Boole's example of illnesses and patients? :uhh:

Harald
 
  • #5
harrylin said:
(Citing De Raedt: "The mistake here is that Bell and followers insist from the start that the same element of reality occurs for the three different experiments with three different setting pairs.")


Isn't the unknown element of reality different every time in Boole's example of illnesses and patients? :uhh:

Harald

Why should I accept this is a good example?

It is true that an inequality is violated in this very specific and artificial case. But while Boole may be criticizing French doctors (Aspect?) and their choice of days to test, I think it is clear that ANY test is subject to statistical extremes. Nonetheless, I think it is safe to say that this test has been replicated enough times that this example can reasonably be rejected. For one thing, there are dozens of angle settings at which the test has been done, which correspond to dozens of attributes.

SO: it violates the prime assumption that any pair of attributes can be selected randomly (freely) by the observer (see the last paragraph of EPR). *You* are supposed to give me a realistic dataset (which was provided) and *I* pick the pairs separately without knowing the results in advance. Doesn't really work if YOU hand pick the data AND the pairs.

And guess what, that experiment has already been done by Weihs et al. In that experiment, independent random number generators fast switch the angle settings on both sides. See Figure 2.

http://arxiv.org/abs/quant-ph/9810080

So your hypothesis is invalid. But I still ask: if Bell and EPR are wrong as to the definition of reality, what is reality if a, b and c do not simultaneously exist?
 
  • #6
DrChinese said:
Why should I accept this is a good example?

It is true that an inequality is violated in this very specific and artificial case. But while Boole may be criticizing French doctors (Aspect?) and their choice of days to test, I think it is clear that ANY test is subject to statistical extremes. Nonetheless, I think it is safe to say that this test has been replicated enough times that this example can reasonably be rejected. For one thing, there are dozens of angle settings at which the test has been done, which correspond to dozens of attributes.

SO: it violates the prime assumption that any pair of attributes can be selected randomly (freely) by the observer (see the last paragraph of EPR). *You* are supposed to give me a realistic dataset (which was provided) and *I* pick the pairs separately without knowing the results in advance. Doesn't really work if YOU hand pick the data AND the pairs.

And guess what, that experiment has already been done by Weihs et al. In that experiment, independent random number generators fast switch the angle settings on both sides. See Figure 2.

http://arxiv.org/abs/quant-ph/9810080

So your hypothesis is invalid. But I still ask: if Bell and EPR are wrong as to the definition of reality, what is reality if a, b and c do not simultaneously exist?

Thanks for the interesting criticism of Boole (but do you really not know that much of our probability theory originates with him, and that he died long before Aspect?).
First of all, I have no hypothesis nor do I provide data sets; thus you must confuse me with someone else.

However, I fully agree with you that Boole's hidden variables look a bit artificial, if that is what you mean. Personally I wonder what kind of randomly fluctuating hidden variables would always yield such results, independent of the location (random fluctuation looks like a plausible assumption to me). The only thing that Boole's example clearly shows, I think, is that particular assumptions must be made about unknown variables if you want to make statistical claims about their possible effects.

Now, the cities Lille and Lyon were indeed forced on the reader (by Boole or by De Raedt, I'm not sure). Do you claim that if you choose another set of cities, you will necessarily find significantly different results? And if so, your claim is based on what assumptions about the unknown hidden variables?

Note: as we discussed in the other thread, EPR claimed to give no definition of reality, but only a (one-way) criterion. However, the essential point of the paper that we are discussing here is that such philosophical questions are not even raised.
 
  • #7
harrylin said:
Thanks for the interesting criticism of Boole (but do you really not know that much of our probability theory originates with him, and that he died long before Aspect?).

...

Now, the cities Lille and Lyon were indeed forced on the reader (by Boole or by De Raedt, I'm not sure). Do you claim that if you choose another city, you will necessarily find significantly different results? And if so, your claim is based on what assumptions about the unknown hidden variables?

I was just joking around about the cities and Aspect because they are both French. I actually was not aware that Boole himself constructed this example, not sure about that either way and it doesn't matter.

The issue is: can a local realistic data set be constructed in which the data, selected by the observer (that's me), violates a Bell inequality. I say NO, and this dataset does not disprove it because it doesn't meet the rules of the game. Please feel free to attempt to give me a suitable dataset to look at. I want something where a) there are perfect correlations at any of 3 attributes AND b) the match rate averages 25% when *I* make the selection of a pair. I can tell you my selections will be pseudo-random.

So basically, I am making no assumptions at all. De Raedt (or you) can provide anything. Now please understand, the De Raedt team is very smart and they have done a lot in this area. It is quite complicated, and in fact they have provided sample datasets which can meet some of the criteria I have listed - it is quite impressive. So don't get me wrong. But this particular paper does not negate Bell.
 
  • #8
De Raedt's point doesn't seem to be remotely novel; it's long been known that in any rigorous proof of Bell's theorem you must include a "no-conspiracy condition" which says that there is no statistical correlation between which pair of elements are sampled on each trial and the likelihood that the trial will consist of any given triplet of predetermined results (or any given set of hidden variables, if you are not assuming there's a perfect correlation whenever both experimenters choose the same detector setting, and therefore the results may not be predetermined prior to measurement). See for example section D, p. 6 of this derivation of Bell's theorem, or Bell's own quote from a 1985 interview in the superdeterminism wiki article. If the choice of detector settings is truly random then the no-conspiracy condition will obviously hold, and even in a deterministic universe there are some strong conceptual arguments for thinking it's a good assumption, see my posts to "Rap" on this thread.
 
  • #9
DrChinese said:
I was just joking around about the cities and Aspect because they are both French. I actually was not aware that Boole himself constructed this example, not sure about that either way and it doesn't matter.

The issue is: can a local realistic data set be constructed in which the data, selected by the observer (that's me), violates a Bell inequality. I say NO, and this dataset does not disprove it because it doesn't meet the rules of the game. Please feel free to attempt to give me a suitable dataset to look at. I want something where a) there are perfect correlations at any of 3 attributes AND b) the match rate averages 25% when *I* make the selection of a pair. I can tell you my selections will be pseudo-random.

So basically, I am making no assumptions at all. De Raedt (or you) can provide anything. Now please understand, the De Raedt team is very smart and they have done a lot in this area. It is quite complicated, and in fact they have provided sample datasets which can meet some of the criteria I have listed - it is quite impressive. So don't get me wrong. But this particular paper does not negate Bell.

I don't know why you would think that I have the ambition to create a local and realistic model for quantum mechanics; the topic of this thread is about probability analysis as used for quantum mechanics. Bell was a smart guy and De Raedt is a smart guy too; but they disagree. If the arguments of De Raedt et al against Bell's Theorem are valid, then Bell's theorem is invalid. If it is invalid, the question of whether such a model is possible remains open; but if it is valid (or if a corrected version of it is valid!), then trying to develop such a model is like trying to develop a perpetuum mobile.

As a reminder, Bell's Theorem is the following sweeping claim about any imaginable hidden parameters:

"In a theory in which parameters are added to quantum mechanics to determine the results of individual measurements, without changing the statistical predictions, there must be a mechanism whereby the setting of one measuring device can influence the reading of another instrument [instantly], however remote."

Obviously, in order to make a valid general statement about such unknown hidden variables, these should be assumed to be of any value and location and to possibly exist in the measurement equipment as well as in the measured "entangled" entities. And when reading Bell's paper in the past, his introduction of "parameters lambda" suggested to me that he did keep it as general as possible, although I found his notation of them as simply "lambda" a bit doubtful.

And now:

1. We seem to agree that such hidden variables may be expected to be random - which implies that the set of lambdas is likely to be different for each measurement.

2. You certainly agree with De Raedt that Bell assumes that the same element of reality occurs for the three different experiments with three different setting pairs.

1+2. For me that immediately invalidates Bell's Theorem, as cited here above.

Harald
 
  • #10
JesseM said:
De Raedt's point doesn't seem to be remotely novel; it's long been known that in any rigorous proof of Bell's theorem you must include a "no-conspiracy condition" which says that there is no statistical correlation between which pair of elements are sampled on each trial and the likelihood that the trial will consist of any given triplet of predetermined results (or any given set of hidden variables, if you are not assuming there's a perfect correlation whenever both experimenters choose the same detector setting, and therefore the results may not be predetermined prior to measurement). See for example section D, p. 6 of this derivation of Bell's theorem, or Bell's own quote from a 1985 interview in the superdeterminism wiki article. If the choice of detector settings is truly random then the no-conspiracy condition will obviously hold, and even in a deterministic universe there are some strong conceptual arguments for thinking it's a good assumption, see my posts to "Rap" on this thread.

Thanks. I don't think that De Raedt et al. suggest any kind of conspiracy; almost certainly they assume random values. I think that they merely presented Boole's example for the educational purpose of clarifying the importance of subtle assumptions that are often overlooked. In fact I know that for sure, since they explain what they try to teach as follows:

"it is the one to one correspondence of the variables to the logical elements of Boole that matters when we determine a possible experience, but not necessarily the choice between realism and Einstein locality."

Does anyone here know a corrected Bell-type derivation based on lambda1, lambda2 and lambda3? That would be helpful. :smile:

Harald
 
  • #11
harrylin said:
Thanks. I don't think that De Raedt et al. suggest any kind of conspiracy; almost certainly they assume random values.
But if there is no conspiracy and the choices of which values to sample are random with respect to the hidden variables, then the sampled pairs will be statistically representative of the actual triples of predetermined values that are required under local realism. Isn't De Raedt's whole argument that Bell's inequality can be violated if some values of hidden variables are more likely to occur when you sample one pair (say, a and b) than when you sample another (say, b and c)? For example, the text around equation (8) in the first paper says:
Realism plays a role in the arguments of Bell and followers because they introduce a variable λ representing an element of reality and then write

[tex]\Gamma = \langle A_a (\lambda) A_b (\lambda) \rangle + \langle A_a (\lambda) A_c (\lambda) \rangle + \langle A_b (\lambda) A_c (\lambda) \rangle \geq -1 \, (8)[/tex]

Because no λ exists that would lead to a violation except a λ that depends on the index pairs (a, b), (a, c) and (b, c) the simplistic conclusion is that either elements of reality do not exist or they are non-local. The mistake here is that Bell and followers insist from the start that the same element of reality occurs for the three different experiments with three different setting pairs. This assumption implies the existence of the combinatorial-topological cyclicity that in turn implies the validity of a non-trivial inequality but has no physical basis. Why should the elements of reality not all be different? Why should they, for example not include the time of measurement?
Of course Bell does not actually assume that for a finite number of trials, exactly the same values of hidden variables occur on trials where a and b are sampled as on trials where b and c are sampled, only that the probability of a given value of lambda on a trial where the sample was a+b is the same as the probability of that value on a trial where the sample was a+c. And note that this does not exclude the notion that the probability of getting different hidden variable values could vary with time, but in that case if you knew the probability distribution for lambda at the actual times of measurement t1, t2, ... tN then you could construct a total probability distribution for lambda for a randomly selected measurement at one of those N times, and as long as the probability of choosing a+b vs. a+c or b+c was independent of the time of measurement (so for example the measurement at t2 was equally likely to be any of those three), then you can still derive the inequality.
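As a side note, the bound in (8) itself is easy to check once one assumes a single triple of ±1 outcomes per trial; a minimal enumeration (my own illustration, not taken from either paper):

[code]
from itertools import product

# For any single triple of dichotomic outcomes (each +1 or -1), the quantity
# Aa*Ab + Aa*Ac + Ab*Ac is never below -1, so averaging over any distribution
# of such triples keeps Gamma >= -1.
values = [Aa*Ab + Aa*Ac + Ab*Ac for Aa, Ab, Ac in product((+1, -1), repeat=3)]
print(min(values))  # -1
[/code]

Averaging over any distribution of such triples therefore keeps Γ ≥ -1; the question raised by De Raedt et al. is whether the three terms may be taken to refer to the same triple in the first place.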

In the case where there is a perfect correlation when both experimenters choose the same setting, and therefore under local realism without a "conspiracy" we are forced to assume that all the outcomes for all three settings were predetermined at some time prior to the choice of which setting to use, the argument is even simpler to think about. In this case, let's say for setting a the particle must be predetermined to either have property A or property not-A, for setting b it must be predetermined to have B or not-B, and for c it must be predetermined to have C or not-C. So each particle pair has some set of predetermined values like [A, not-B, C] or [not-A, B, C] etc. In this case, if we could magically know these values for all the particles, Boole's inequality (derived here for example) would show that the following must be true of the complete set of all particles:

Number(A, not B) + Number(B, not C) ≥ Number(A, not C)

And as I discussed a bit in [post=3290345]this post[/post] and [post=3291704]this one[/post], if you add the assumption that the choice of which two settings to actually use in measurement is random and statistically uncorrelated with the three predetermined values (so that in the limit as the number of trials goes to infinity, on the trials where the measurement was a,b the fraction where the three predetermined values were [A, not-B, C] approaches being the same as the fraction of b,c trials where the three predetermined values were [A, not-B, C]) then by the law of large numbers, you can conclude that the greater the number of trials, the smaller the chance that the following inequality will be violated:

[of the subset of all particle pairs where #1 was measured at angle a and #2 was measured at angle b, the number in this subset where particle #1 had property A and particle #2 had property not-B]

+

[of the subset of all particle pairs where #1 was measured at angle b and #2 was measured at angle c, the number in this subset where particle #1 had property B and particle #2 had property not-C]

greater than or equal to

[of the subset of all particle pairs where #1 was measured at angle a and #2 was measured at angle c, the number in this subset where particle #1 had property A and particle #2 had property not-C]

If you disagree, think of it this way. Suppose we generate a hypothetical list of the predetermined values for each in a series of N trials, where N is fairly large, say N=100, like this:

trial #1: [A, B, C]
trial #2: [A, not-B, not-C]
trial #3: [not-A, B, not-C]
trial #4: [A, B, not-C]
...
trial #100: [A, not-B, not-C]

You can use any algorithm you want to generate this list, including one where you pick the values for each trial based on a probability distribution for all 8 possible combinations, and the probability distribution itself changes depending on the number of the trial (equivalent to De Raedt's notion that the probability distribution for lambda might be time-dependent). Anyway, once you have the list, then select which two the imaginary experimenters are going to sample using a rule that is random with respect to the actual set of predetermined values on that trial--for example, you could use this random number generator with Min=1 and Max=3, and then on each trial if it gives "1" you say that the measurement was a,b, if it gives "2" you say the measurement was b,c, and if it gives "3" you say the measurement was a,c. I would say that regardless of what algorithm you chose to generate the original list of predetermined values, the fact that the choice of which values were sampled on each trial was random ensures that if the number of entries N on the list is large, the probability is very small that you'll get a violation of the inequality above involving measured subsets. Would you disagree with that?
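For concreteness, here is a minimal numerical sketch of exactly that procedure (the drifting distribution and the pseudo-random pair choice below are my own illustrative stand-ins, not anything taken from the papers):

[code]
import random

rng = random.Random(42)
N = 100_000   # number of trials; N=100 as in the example also works, just with more scatter

# Any rule may generate the predetermined triples; here the probability that a
# property holds drifts with the trial number, mimicking a time-dependent lambda.
def make_triple(trial):
    p = 0.2 + 0.6 * trial / N
    return tuple(rng.random() < p for _ in range(3))   # (A, B, C) as booleans

ab = bc = ac = 0   # counts in the three measured subsets
for trial in range(N):
    A, B, C = make_triple(trial)
    pair = rng.randrange(3)          # which pair gets "measured", independent of the triple
    if pair == 0 and A and not B:    # a,b measured: count (A, not-B)
        ab += 1
    elif pair == 1 and B and not C:  # b,c measured: count (B, not-C)
        bc += 1
    elif pair == 2 and A and not C:  # a,c measured: count (A, not-C)
        ac += 1

print(ab, bc, ac, ab + bc >= ac)     # the inequality holds, with overwhelming probability
[/code]

Whatever rule is substituted for make_triple, as long as the pair selection stays independent of the triples, the measured counts track the Boole inequality that holds for the full list.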
harrylin said:
Does anyone here know a corrected Bell-type derivation based on lambda1, lambda2 and lambda3? That would be helpful. :smile:
The very idea that the probability distribution for lambda would be different depending on which two settings were actually used in measurements would, by definition, be a violation of the no-conspiracy condition. Just imagine that our choice of detector settings on each trial is based on the value at that moment of a parameter in a chaotic system, like the weather. In this case how could there possibly be a correlation between the probability we would choose a particular setting and the probability the hidden variables associated with the particle at an earlier time would take a particular value? Remember from our [post=3248153]earlier discussion[/post] of Bell's "La nouvelle cuisine" paper that lambda can refer to the hidden variables in a cross-section of the past light cone of the two measurements, possibly at a much earlier time than when the measurements are made (though later than the time the past light cone of both measurements stops overlapping). The google books links to the "nouvelle cuisine" paper in that earlier discussion have mostly stopped working, but if you need to refresh your memory there's also a good discussion of the arguments in that paper here, see in particular fig. 2 at the top of p. 3 where lambda is supposed to specify the value of hidden variables in region 3 in the past light cone of the measurement region 1 (though that would just be a lambda for hidden variables that condition the outcome of that measurement, if you're concerned with both measurements you'd need a lambda that gives values of hidden variables in an analogous cross-section of the past light cone of measurement region 2 as well). But if you're still not convinced that a correlation between detector settings and probabilities of hidden variables would require a very strange sort of "conspiracy", see my discussion with Rap on this thread, and we can discuss this point further here if you like.
 
  • #12
Jesse, I'll come back to your last message - but first I would like a clarification!

Do you disagree with De Raedt that Bell assumed the same lambda for the three trials, so that you think that it was just a flaw in notation that Bell wrote lambda instead of lambda1, lambda2, and lambda3?

Please don't forget that lambda is not a probability distribution.

Thanks,
Harald
 
  • #13
JesseM said:
Of course Bell does not actually assume that for a finite number of trials, exactly the same values of hidden variables occur on trials where a and b are sampled as on trials where b and c are sampled, only that the probability of a given value of lambda on a trial where the sample was a+b is the same as the probability of that value on a trial where the sample was a+c.

To reveal your error: is there any difference in coincidence count rate between the a+b case and the a+c case in Bell test experiments? And if there is, do you think your naive assumption that the outcomes at angles (a, b, c) are spatially and temporally stationary is valid?

Look at Figure 7 of this paper in which the Weihs et al raw data is analyzed.
http://arxiv.org/pdf/0712.2574

It shows a plot of the number of coincident counts assuming different fixed delays between the two sides, and they show that for different pairs of angles, the maximum occurs at different time delays. In other words, as Alice or Bob rotates their polarizer, the coincidence counts change. This was also observed in the 1967 Kocher experiment.

If lambda is supposed to encapsulate all hidden parameters governing the result, and you are assuming that they are the same, then the coincidence counting should not be affected by rotating the polarizer on one side. But since it is, it means your naive assumption about lambda is false.
 
  • #14
harrylin said:
Jesse, I'll come back to your last message - but first I would like a clarification!

Do you disagree with De Raedt that Bell assumed the same lambda for the three trials, so that you think that it was just a flaw in notation that Bell wrote lambda instead of lambda1, lambda2, and lambda3?

Please don't forget that lambda is not a probability distribution.
lambda is a variable which can take multiple values, and Bell normally writes equations that integrate over all possible values of lambda. I agree equation (8) in De Raedt's first paper that I quoted above would be a bad choice of notation, but does this equation or one like it appear in any of Bell's papers? Can you point to a specific equation in a specific paper by Bell that you think this criticism applies to?
 
  • #15
JesseM said:
lambda is a variable which can take multiple values, and Bell normally writes equations that integrate over all possible values of lambda. I agree equation (8) in De Raedt's first paper that I quoted above would be a bad choice of notation, but does this equation or one like it appear in any of Bell's papers? Can you point to a specific equation in a specific paper by Bell that you think this criticism applies to?

Every time Bell writes P(a,b), it is implicit that this result is obtained by averaging over a given distribution of lambda; there is therefore no material difference between including and excluding lambda from the equation. So this line of argument is a red herring. De Raedt is focusing on the fact that in Bell-type inequalities the distribution of lambda is the same for each term of the inequality, which you certainly do not dispute. So his criticism applies just as well to any Bell-type inequality.
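For reference, the expectation value Bell writes in his 1964 paper has the form

[tex]P(a,b) = \int A(a,\lambda)\, B(b,\lambda)\, \rho(\lambda)\, d\lambda ,[/tex]

with one and the same distribution ρ(λ) appearing in every correlation term of the inequality.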
 
  • #16
billschnieder said:
Everytime Bell writes P(a,b), it is implicit that this result is obtained by normalizing over a given distribution of lambda, there is therefore no material difference between including and excluding lambda from the equation. So this line of argument is a red-herring. De Raedt is focusing on the fact that in Bell-type inequalities, the distribution of lambda is the same for each term of the inequality which you certainly do not dispute. So his criticism applies just as well to any Bell-type inequality.
Did you actually read harrylin's comment I was responding to? He was concerned not with probability distributions but with particular values of lambda being different on individual measurements. I agree that Bell assumes a single probability distribution for lambda which does not vary depending on the detector settings; once again, this is the "no-conspiracy condition" whose violation would require very bizarre "conspiracies" among events distributed throughout the entire past light cone of the measurement.
 
  • #17
JesseM said:
lambda is a variable which can take multiple values, and Bell normally writes equations that integrate over all possible values of lambda. I agree equation (8) in De Raedt's first paper that I quoted above would be a bad choice of notation, but does this equation or one like it appear in any of Bell's papers? Can you point to a specific equation in a specific paper by Bell that you think this criticism applies to?

I see, indeed I don't find a precise statement of Bell to that effect... :confused: - I still suspect that you misunderstood De Raedt, but perhaps so did I! :bugeye:
So I need a little "time-out" now. I'll be back. :smile:
 
  • #18
JesseM said:
I agree that Bell assumes a single probability distribution for lambda which does not vary depending on the detector settings; once again, this is the "no-conspiracy condition" whose violation would require very bizarre "conspiracies" among events distributed throughout the entire past light cone of the measurement.

In other words, you are saying Bell does not allow the detectors themselves to possess any hidden variables. This is enough to eliminate Bell's theorem off-hand. But if you have changed your mind and Bell also included detector hidden variables into "lambda", then it is indeed a conspiracy by Bell proponents when they try to claim that "lambda" should not depend on the detector angle.

Is it so conspiratorial to think that if the detector hidden variables are also included in "lambda", then "lambda" must change in a specific way according to the detector angle, at the time of measurement?

This is not a serious argument at all against the De Raedt et al paper, which I do not think you have understood.
 
  • #19
DrChinese said:
The issue is: can a local realistic data set be constructed in which the data, selected by the observer (that's me), violates a Bell inequality. I say NO, and this dataset does not disprove it because it doesn't meet the rules of the game. Please feel free to attempt to give me a suitable dataset to look at.

You have posted this nonsensical challenge too many times so I will challenge you now to follow it through. My goal here is to make sure what you are asking for is very clear. I hope you will not run away. Once your demands are very clear and unambiguous, I will produce the dataset you seek.

1) What do you mean by "local realistic data set"?
2) Are you referring to results obtainable from an experiment? If so, describe the experiment and I will produce the dataset.
3) If not the results from an experiment, what else could "dataset" mean?
4) Are you planning to further select a subset of data from the "realistic dataset"? If so, what method are you going to use to select the subset?
5) Are you thinking that the "realistic dataset" will correspond to existing properties of some object, and your selection method will correspond to an experiment such as a Bell test experiment? If so, will you be splitting this dataset into three to obtain P(a,b) from the first third, P(b,c) from the second third, and P(a,c) from the last third, or are you planning to obtain P(a,b) from the full "dataset", P(b,c) from the full dataset, and the same for P(a,c)? If not, please answer 4) carefully.
 
  • #20
billschnieder said:
In other words, you are saying Bell does not allow the detectors themselves to possess any hidden variables.
No, where did you get that little notion? I already mentioned to you in an [post=3308953]earlier post[/post] that lambda could stand for all the local variables relevant to the final outcome in an entire spacelike cross-section of the past light cone of the measurement event:
JesseM said:
And suppose we are defining the "hidden variables" for each particle to be restricted to a cross-section of the past light cone of each measurement (like "region 3" in fig. 2 on p. 3 of this paper), at a time one year before the measurement was actually performed but after the time the past light cones of the two measurements had ceased to overlap (again see the diagram in that paper).
Please go look at that diagram, you might learn something (and you would learn even more if you read the full paper). Anyway, obviously if lambda consists of everything in a cross-section of the past light cone of the measurement that's relevant to determining the final output, then it will contain any variables associated with the detectors at that time that are relevant to the final output. I suppose you could say that there might be random events which would alter the state of the detector and/or particle in unpredictable ways between this earlier time and the time of measurement, but if you think carefully about it you'll realize that if the outcomes weren't already predetermined by the full state of the past light cone cross-sections at a time after the two past light cones had ceased to overlap, then in a universe obeying locality there would be no way to explain why subsequent random events were "coordinated" in such a way as to ensure that if the experimenters chose the same setting, then they'd be guaranteed to get the same outcome with probability 1.
billschnieder said:
But if you have changed your mind and Bell also included detector hidden variables into "lambda", then it is indeed a conspiracy by Bell proponents when they try to claim that "lambda" should not depend on the detector angle.
Only detector hidden variables from a time prior to the choice of detector setting, though; that's what you seem to be missing here. Again I really recommend you read the paper above, or possibly get your hands on Bell's own "La nouvelle cuisine" paper which the paper above is discussing (Bell's paper is not available online, but you can find it in https://www.amazon.com/dp/0521368693/?tag=pfamazon01-20).
 
  • #21
billschnieder said:
You have posted this nonsensical challenge too many times so I will challenge you now to follow it through. My goal here is to make sure what you are asking for is very clear. I hope you will not run away. Once your demands are very clear and unambiguous, I will produce the dataset you seek.

1) What do you mean by "local realistic data set"?
2) Are you referring to results obtainable from an experiment? If so, describe the experiment and I will produce the dataset.
3) If not the results from an experiment, what else could "dataset" mean?
4) Are you planning to further select a subset of data from the "realistic dataset"? If so, what method are you going to use to select the subset?
5) Are you thinking that the "realistic dataset" will correspond to existing properties of some object, and your selection method will correspond to an experiment such as a Bell test experiment? If so, will you be splitting this dataset into three to obtain P(a,b) from the first third, P(b,c) from the second third, and P(a,c) from the last third, or are you planning to obtain P(a,b) from the full "dataset", P(b,c) from the full dataset, and the same for P(a,c)? If not, please answer 4) carefully.

1) If it is realistic, then there must be definite values for the angle settings I choose (0, 120, 240). If it is local, then those values don't change for Alice based on the setting at Bob, and vice versa.

2) The hypothetical experiment is a source of polarization-entangled photon pairs from Type I PDC crystals (i.e. identical polarization characteristics). These will exhibit perfect correlations in the manner of EPR, so that identical measurements on A and B yield identical results. You can label them +/-, H/T, 1/0 or whatever you like as long as it is some binary value. Because this is a hypothetical experiment, you will make up the data. You need only supply data for one photon stream (say Alice's), since the other is a polarization clone. We will assume that for every item you present, there was a matching detection on the other side within some suitable window which allowed us to make the pair.

3) n/a

4) Yes, I will select the subset of pairs (from the triples you provide) without regard to the values you provide me. Just let me know how many you plan to provide. I will select whimsically (since it won't exactly be "random"). It doesn't much matter as long as my selections are independent of your values.

Since we won't need a big sample, that means there is a chance that my selections will be somewhat at variance from the stats that might be anticipated from a larger collection. But I think we can manage that.

5) n/a, see 4).

I see this as an exercise to identify the issues we each consider relevant to this discussion.
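For anyone who wants to try the game numerically, here is a minimal sketch of it (my own illustration; the example dataset and the +/- encoding are placeholders, not data supplied by anyone in this thread):

[code]
import random

def match_rate(triples, n_rounds=100_000, seed=0):
    """Pick a setting pair at random each round, independently of the supplied
    triple, and count how often the two (cloned) outcomes agree."""
    rng = random.Random(seed)
    pairs = [(0, 1), (0, 2), (1, 2)]   # settings 0/120/240 -> pairs (a,b), (a,c), (b,c)
    matches = 0
    for _ in range(n_rounds):
        t = rng.choice(triples)        # one entry of the supplied "realistic dataset"
        i, j = rng.choice(pairs)       # the observer's (pseudo-)random pair choice
        matches += (t[i] == t[j])
    return matches / n_rounds

# Example dataset of predetermined triples (+/-) supplied by the other party.
dataset = [('+', '+', '-'), ('+', '-', '-'), ('-', '+', '-'), ('+', '+', '+')]
print(match_rate(dataset))  # about 0.5 for this dataset
[/code]

Any triple of binary values has at least one agreeing pair, so with uniformly random pair choices that are independent of the supplied values the match rate cannot average below 1/3, whereas 25% is the quantum prediction for these angle settings.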
 
  • #22
DrChinese said:
1) If it is realistic, then there must be definite values for the angle settings I choose (0, 120, 240). If it is local, then those values don't change for Alice based on the setting at Bob, and vice versa.
So you want a list with each entry in the list being 3 angles, like what? Be specific about what exactly this dataset is supposed to contain. Do you want me to give you a list of triples, with each triple being three angles (0, 120, 240)? Be specific.

2) The hypothetical experiment is a source of polarization-entangled photon pairs from Type I PDC crystals (i.e. identical polarization characteristics). These will exhibit perfect correlations in the manner of EPR, so that identical measurements on A and B yield identical results.
The dataset above with each point containing three angles (?) is supposed to be generated from Alice and Bob? Those are only two people; are you expecting to obtain 3 values per data point from only Alice and Bob, or did you forget to mention Cyndi? Please be specific.

- We have (only) two photons from your PDC crystals, one heading toward Alice and one heading toward Bob. They each measure one result. In what alternate universe are you going to get a dataset with three values per data point? Are you sure this is the correct experiment? Your description of the experiment is inconsistent with your description of the dataset. Unless by dataset, you did not mean the result of an experiment.

3) n/a
If you did not mean the result of an experiment in (2), you still need to answer this question. Otherwise, amend your description of the experiment such that the experiment generates a list of triples as you described in (1).

4) Yes, I will select the subset of pairs (from the triples you provide) without regard to the values you provide me. Just let me know how many you plan to provide. I will select whimsically (since it won't exactly be "random"). It doesn't much matter as long as my selections are independent of your values.
You ask for a dataset of triples, to be produced by an experiment which can only produce pairs, so that you will then select three subsets of pairs. Is this really what you are asking? And all of this is from a hypothetical experiment? There is a disconnect between your description of the dataset and the experiment which you need to fix. You still need to specify the method you will use to select the subset. For example, are you going to use any data point in more than one subset? This is important because whatever you do has to be consistent with what is done in actual Bell-test experiments for this exercise to be meaningful.

Since we won't need a big sample, that means there is a chance that my selections will be somewhat at variance from the stats that might be anticipated from a larger collection. But I think we can manage that.

5) n/a, see 4).
You need to fix the earlier problems first.
 
  • #23
JesseM said:
Anyway, obviously if lambda consists of everything in a cross-section of the past light cone of the measurement that's relevant to determining the final output, then it will contain any variables associated with the detectors at that time that are relevant to the final output. I suppose you could say that there might be random events which would alter the state of the detector and/or particle in unpredictable ways between this earlier time and the time of measurement, but if you think carefully about it you'll realize that if the outcomes weren't already predetermined by the full state of the past light cone cross-sections at a time after the two past light cones had ceased to overlap, then in a universe obeying locality there would be no way to explain why subsequent random events were "coordinated" in such a way as to ensure that if the experimenters chose the same setting, then they'd be guaranteed to get the same outcome with probability 1.
Then you are being short sighted. According to local realism, if EVERY hidden parameter (particle + detector) at the moment of interaction is identical, you SHALL obtain the EXACT same result. If you do not obtain the same result, it means the conditions were DIFFERENT. Predetermination is not limited to the photon only.

Only detector hidden variables from a time prior to the choice of detector setting though
Huh? So after the detector setting is changed, the hidden variables of the detector magically stop being relevant? What a silly idea.

A lot of Bell test experiments have been performed. I'm sure you are going to find one which confirms your repeated unsubstantiated claims that whenever, "the experimenters chose the same setting, then they'd be guaranteed to get the same outcome with probability 1".

You only need ONE experiment in which, when the detector settings were identical, the experimenters got the same outcome with probability 1. I dare you to find such a result from the multitude that have been performed. Of course you can't because it doesn't exist, so why do you keep repeating such falsehood?
 
  • #24
JesseM said:
But if there is no conspiracy and the choices of which values to sample are random with respect to the hidden variables, then the sampled pairs will be statistically representative of the actual triples of predetermined values that are required under local realism.
That's quite a mouthful, and regretfully it surpasses my comprehension due to my insufficient understanding of this matter. However, see further below.
Isn't De Raedt's whole argument that Bell's inequality can be violated if some values of hidden variables are more likely to occur when you sample one pair (say, a and b) than when you sample another (say, b and c)? For example, the text around equation (8) in the first paper says:
I don't think that they meant that a value would be more likely to occur for one sample pair than another: that doesn't make sense to me and it's not what they wrote. Nevertheless it may have been poorly formulated two years ago, for in the new paper that we are discussing here, they formulate it differently:

"in the actual experiments identical λ ’s are available for each of the data pairs (1, 2), (1, 3), (2, 3). This means that all of Bell’s derivations assume from the start that ordering the data into triples as well as into pairs must be appropriate and commensurate with the
physics. This “hidden” assumption was never discussed by Bell." [...]
"he implies the existence of identical elements of reality for each of the three pairs."

Of course Bell does not actually assume that for a finite number of trials, exactly the same values of hidden variables occur on trials where a and b are sampled as on trials where b and c are sampled, only that the probability of a given value of lambda on a trial where the sample was a+b is the same as the probability of that value on a trial where the sample was a+c.
After reading it over, I agree with that: Bell certainly had in mind that lambda should be allowed to have a different value for different pairs.

Now, De Raedt's argument here seems to be that although that was Bell's intention, his way of manipulating the data is only valid if lambda is the same for different pairs.

On top of that, although lambda basically stands for a single parameter, according to Bell it was harmless to substitute sets of lambdas or sets of functions of lambda for it. He did not prove that this is a valid thing to do for the probability rules that he applied. Intuitively I found that suspect, and if I understand it correctly, De Raedt claims that it is wrong.

If I misread De Raedt on either issue, I will be happy to be corrected. :tongue2:
And note that this does not exclude the notion that the probability of getting different hidden variable values could vary with time, but in that case if you knew the probability distribution for lambda at the actual times of measurement t1, t2, ... tN then you could construct a total probability distribution for lambda for a randomly selected measurement at one of those N times, and as long as the probability of choosing a+b vs. a+c or b+c was independent of the time of measurement (so for example the measurement at t2 was equally likely to be any of those three), then you can still derive the inequality.
I doubt that that is what De Raedt has in mind; and I think that I already asked this, but where was such an improved, more general derivation published?
In the case where there is a perfect correlation when both experimenters choose the same setting, and therefore under local realism without a "conspiracy" we are forced to assume that all the outcomes for all three settings were predetermined at some time prior to the choice of which setting to use, the argument is even simpler to think about.
That sounds reasonable; however I think I have read somewhere (possibly Tim Maudlin) that in fact, in real measurements, there is no perfect correlation for the same settings. Can anyone clarify that?
In this case, let's say for setting a the particle must be predetermined to either have property A or property not-A, for setting b it must be predetermined to have B or not-B, and for c it must be predetermined to have C or not-C. So each particle pair has some set of predetermined values like [A, not-B, C] or [not-A, B, C] etc. In this case, if we could magically know these values for all the particles, Boole's inequality (derived here for example) would show that the following must be true of the complete set of all particles:

Number(A, not B) + Number(B, not C) ≥ Number(A, not C)

And as I discussed a bit in [post=3290345]this post[/post] and [post=3291704]this one[/post], if you add the assumption that the choice of which two settings to actually use in measurement is random and statistically uncorrelated with the three predetermined values (so that in the limit as the number of trials goes to infinity, on the trials where the measurement was a,b the fraction where the three predetermined values were [A, not-B, C] approaches being the same as the fraction of b,c trials where the three predetermined values were [A, not-B, C]) then by the law of large numbers, you can conclude that the greater the number of trials, the smaller the chance that the following inequality will be violated:

[of the subset of all particle pairs where #1 was measured at angle a and #2 was measured at angle b, the number in this subset where particle #1 had property A and particle #2 had property not-B]

+

[of the subset of all particle pairs where #1 was measured at angle b and #2 was measured at angle c, the number in this subset where particle #1 had property B and particle #2 had property not-C]

greater than or equal to

[of the subset of all particle pairs where #1 was measured at angle a and #2 was measured at angle c, the number in this subset where particle #1 had property A and particle #2 had property not-C]

If you disagree, think of it this way. Suppose we generate a hypothetical list of the predetermined values for each in a series of N trials, where N is fairly large, say N=100, like this:

trial #1: [A, B, C]
trial #2: [A, not-B, not-C]
trial #3: [not-A, B, not-C]
trial #4: [A, B, not-C]
...
trial #100: [A, not-B, not-C]

You can use any algorithm you want to generate this list, including one where you pick the values for each trial based on a probability distribution for all 8 possible combinations, and the probability distribution itself changes depending on the number of the trial (equivalent to De Raedt's notion that the probability distribution for lambda might be time-dependent). Anyway, once you have the list, then select which two the imaginary experimenters are going to sample using a rule that is random with respect to the actual set of predetermined values on that trial--for example, you could use this random number generator with Min=1 and Max=3, and then on each trial if it gives "1" you say that the measurement was a,b, if it gives "2" you say the measurement was b,c, and if it gives "3" you say the measurement was a,c. I would say that regardless of what algorithm you chose to generate the original list of predetermined values, the fact that the choice of which values were sampled on each trial was random ensures that if the number of entries N on the list is large, the probability is very small that you'll get a violation of the inequality above involving measured subsets. Would you disagree with that?
Interesting! As my probability calculus is still rusty, I will consider pondering this over if the answer to my question just above is in the negative and if your essay originates from the peer-reviewed literature.
The very idea that the probability distribution for lambda would be different depending on which two settings were actually used in measurements would, by definition, be a violation of the no-conspiracy condition. [..]
I immediately agree with that and I suppose De Raedt et al too.
 
  • #25
billschnieder said:
So you want a list with each entry in the list being 3 angles, like? Be specific what exactly it is this dataset is supposed to contain. Do you want me to give you a list of triples, with each triple being three angles (0,120, 240)? Be specific.


The dataset above with each point containing three angles (?) is supposed to be generated from Alice and Bob? Those are only two people, are you expecting to obtain 3 values per data point from only Alice and Bob, or did you forget to mention Cyndi. Please be specific.

- We have (only) two photons from your PDC crystals, one heading toward Alice and one heading toward Bob. They each measure one result. In what alternate universe are you going to get a dataset with three values per data point? Are you sure this is the correct experiment? Your description of the experiment is inconsistent with your description of the dataset. Unless by dataset, you did not mean the result of an experiment.


If you did not mean the result of an experiment in (2), you still need to answer this question. Otherwise, amend your description of the experiment such that the experiment generates a list of triples as you described in (1).


You ask for a dataset of triples, to be produced by an experiment which can only produce pairs, so that you will then select three subsets of pairs. Is this really what you are asking? And all of this is from a hypothetical experiment? There is a disconnect between your description of the dataset and the experiment which you need to fix. You still need to specify the method you will use to select the subset. For example, are you going to use any data point in more than one subset? This is important because whatever you do has to be consistent with what is done in actual Bell-test experiments for this exercise to be meaningful.


You need to fix the earlier problems first.

Triples such as:

+ + -

Since Alice and Bob are clones, no need to repeat yourself. Of course in an observer independent reality, the kind EPR specifically mentions, Alice's decision on what to observe should not affect Bob in any way. And of course per EPR we expect that when Alice and Bob measure the same way, the results are identical.

The question is: can you present a realistic dataset that will yield results consistent with QM? Bell says no.

Now you specifically mention alternate universes. Interesting comment from a realist. As I have said before, I believe you actually follow the Bell program and are just too ornery to admit it.
 
  • #27
Rap said:
I think this paper might be relevant, I don't know if it's been mentioned already:

http://www.mdpi.com/1099-4300/10/2/19/

Thanks, that author is mentioned by De Raedt, and the title is similar - so I agree that it's likely relevant. :smile:

PS. apparently this professor is an expert in this field:
http://w3.msi.vxu.se/Personer/akhmasda/CV.htm
 
  • #28
To summarize the Khrennikov paper -
Khrennikov said:
Consider a system of three random variables [itex]a_i; i = 1,2,3[/itex]. Suppose for simplicity that they take discrete values and moreover they are dichotomous: [itex]a_i=\pm 1[/itex]. Suppose that these variables as well as their pairs can be measured and hence joint probabilities for pairs are well defined: [itex]P_{a_i,a_j}(\alpha_i,\alpha_j) \ge 0[/itex] and [itex]\sum_{\alpha_i,\alpha_j=\pm 1}P_{a_i,a_j}(\alpha_i,\alpha_j)= 1[/itex].

Question: Is it possible to construct the joint probability distribution, [itex]P_{a1,a2,a3}(\alpha_1,\alpha_2,\alpha_3)[/itex] for any triple of random variables?

The answer is no, if Bell's inequalities are violated.

The paper concludes:
Khrennikov said:
In probability theory Bell’s type inequalities were studied during last hundred years as constraints for probabilistic compatibility of families of random variables – possibility to realize them on a single probability space. In opposite to quantum physics, such arguments as nonlocality and “death of reality” were not involved in considerations. In particular, nonexistence of a single probability space does not imply that the realistic description (a map [itex]\lambda \rightarrow a(\lambda)[/itex]) is impossible to construct.
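To make the "single probability space" question concrete, here is a small feasibility check (my own sketch, not from Khrennikov's paper, and simplified to use only the pair correlations E12, E13, E23 rather than the full pair distributions; numpy, scipy and the function name joint_distribution_exists are my assumptions). It asks a linear program whether any probability assignment over the 8 outcome triples reproduces the given pair correlations:

[code]
# Does a single joint distribution over the 8 triples (a1,a2,a3) in {+1,-1}^3
# reproduce given pair correlations?  (Illustrative sketch only.)
import itertools
import numpy as np
from scipy.optimize import linprog

def joint_distribution_exists(E12, E13, E23):
    atoms = list(itertools.product((+1, -1), repeat=3))   # the 8 elementary events
    A_eq = [
        [1.0] * 8,                                         # probabilities sum to 1
        [s1 * s2 for s1, s2, s3 in atoms],                 # reproduces E[a1*a2]
        [s1 * s3 for s1, s2, s3 in atoms],                 # reproduces E[a1*a3]
        [s2 * s3 for s1, s2, s3 in atoms],                 # reproduces E[a2*a3]
    ]
    b_eq = [1.0, E12, E13, E23]
    res = linprog(c=np.zeros(8), A_eq=A_eq, b_eq=b_eq, bounds=[(0.0, 1.0)] * 8)
    return res.success

print(joint_distribution_exists(-0.3, -0.3, -0.3))   # True: one probability space suffices
print(joint_distribution_exists(-0.8, -0.8, -0.8))   # False: a Bell-type bound is violated
[/code]

The first example admits a joint distribution; the second violates the Bell-type condition 1 + E12 + E13 + E23 ≥ 0 and is therefore infeasible, which is exactly the "no single probability space" situation described above.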
 
  • #29
Rap said:
To summarize the Khrennikov paper -

["Is it possible to construct the joint probability distribution [..]?"]
The answer is no, if Bell's inequalities are violated.

The paper concludes:
["nonexistence of a single probability space does not imply that the realistic description (a map λ→a(λ)) is impossible to construct."]

Thank you, it surely helps to understand where De Raedt et al "are coming from". A few other points of that paper may be interesting for this discussion:

"The joint probability distribution does not exist just because
those observables could not be measured simultaneously."

(That sounds very much like Bell's situation.)

"Eberhard[..] operated with statistical data obtained from three different experimental contexts, C1 , C2 , C3 , in such a way as [if] it was obtained on the basis of a single context. He took results belonging to one experimental setup and add[ed] or substract[ed] them from results belonging to another experimental setup. These are not proper manipulations from the viewpoint of statistics. One never performs algebraic mixing of data obtained for [a] totally different sample."

(Isn't that also what Bell did?)

This Bell Theorem paradox makes me increasingly think of the paradox of mutual time dilation. Some people (laymen and even a few experts) commit the fallacy of thinking that mutual time dilation proves that observer-independent reality doesn't exist (or even that SR must be wrong, because they think that it rejects reality).

Regards,
Harald

PS, there is another intriguing remark, not sure if it is on-topic:
"in contrast to the EPR-Bohm state, one can really (as EPR claimed) associate with the original EPR state a single probability measure describing incompatible quantum observables (position and momentum)."
Can someone here explain what Khrennikov meant?
 
  • #30
billschnieder said:
JesseM said:
Anyway, obviously if lambda consists of everything in a cross-section of the past light cone of the measurement that's relevant to determining the final output, then it will contain any variables associated with the detectors at that time that are relevant to the final output. I suppose you could say that there might be random events which would alter the state of the detector and/or particle in unpredictable ways between this earlier time and the time of measurement, but if you think carefully about it you'll realize that if the outcomes weren't already predetermined by the full state of the past light cone cross-sections at a time after the two past light cones had ceased to overlap, then in a universe obeying locality there would be no way to explain why subsequent random events were "coordinated" in such a way as to ensure that if the experimenters chose the same setting, then they'd be guaranteed to get the same outcome with probability 1.
Then you are being short sighted. According to local realism, if EVERY hidden parameter (particle + detector) at the moment of interaction is identical, you SHALL obtain the EXACT same result. If you do not obtain the same result, it means the conditions were DIFFERENT. Predetermination is not limited to the photon only.
I don't understand how this statement is supposed to conflict with my own above. I said nothing at all about the hidden parameters at the moment of detection; I followed Bell in talking about the value of the hidden parameters in some cross-section of the past light cone well before the decision was made about which detector setting to use. Again look at fig. 2 on p. 3 of this paper, and imagine that in addition to "region 3" in the past light cone of region 1 where Alice's measurement is made, we also draw in an analogous "region 4" in the past light cone of region 2 where Bob's measurement is made. Suppose the events of Alice and Bob choosing their detector settings happen in regions 1 and 2, and that they make the choice either using some genuinely random process (if the laws of physics are not purely deterministic), or using some chaotic system like a chaotic pendulum, in which case also assume the time difference between regions 3 and 4 and regions 1 and 2 is large enough that we can assume the butterfly effect applies, so that if even one tiny detail of region 3/4 had been different then that might have caused them to make a different choice of detector settings in regions 1 and 2.

In either case, if we then observe that on any trial where they both happen to choose the same setting they always get identical measurement outcomes, do you disagree with the conclusion that regions 3 and 4 must already contain enough information to predetermine what outcome the particle will give to each of the three possible measurement settings? I am making no assumption that the "information" in regions 3 and 4 that determines the outcomes consists solely of hidden variables associated with the particles themselves; it's conceivable it would also consist of variables associated with the detectors too, but the basic restriction is that we only consider hidden variables in regions 3 and 4.

So please tell me, yes or no, whether you agree that there should be sufficient information in regions 3 and 4 to predetermine what outcome the particle will give to each possible detector setting, under the assumption that the settings themselves are chosen by some random or pseudorandom chaotic events in regions 1 and 2, and also under the assumption that if both experimenters happen to choose the same setting then they are guaranteed to get the same outcome with probability one (if you aren't willing to make this latter assumption since you aren't interested in theoretical questions about the incompatibility of QM and local realism in idealized experiments, but are only interested in whether local realism is ruled out by real practical experiments, see the comments below).
billschnieder said:
Only detector hidden variables from a time prior to the choice of detector setting though
Huh? So after the detector setting is changed, the hidden variables of the detector magically stop being relevant? What a silly idea.
Duh, no. I'm just saying that Bell defines lambda to only deal with hidden variables from a time prior to the choice of detector settings (like region 3 in the diagram which I really hope you've at least glanced at, otherwise this discussion is a waste of time). Of course hidden variables at a later time would be relevant if you chose to consider them, but Bell doesn't in his definition of lambda.

The point of defining lambda in this way is that by including enough data from a complete cross-section of the past light cone, one should be able to "screen off" correlations between measurement outcomes A and B in regions 1 and 2 that might result from something like a common cause in the region where the past light cones of measurement regions 1 and 2 overlap. In other words, although P(A|a) might be different than P(A|a,b,B) due to such a common cause, we can be sure that P(A|a,lambda) = P(A|a,lambda,b,B) in a local realistic theory where lambda can contain arbitrarily many variables in region 3, which lies before the measurement but after the past light cones have ceased to overlap.

In addition, by defining lambda in such a way that it refers only to variables before the experimenters choose the detector setting, we can also show that, barring a very strange "conspiracy" between seemingly unrelated events, it is reasonable to assume that there is no statistical correlation between the choice of detector setting and the specific variables encapsulated in lambda that predetermine the measurement results for different settings (or the probabilities for different measurement results at different settings, if we are not assuming that both experimenters get the same result with probability 1 if they use the same setting). In other words, P(lambda|a) = P(lambda), which is reasonable given that lambda refers to conditions well before the choice of detector setting a.
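To illustrate the screening-off condition with something concrete (a toy model of my own, not anything from Bell or from this thread; the 10% local noise and the XOR response rule are arbitrary choices): a shared common cause lambda makes A and B correlated, so P(A|a) differs from P(A|a,b,B), but once lambda is conditioned on, the distant setting and outcome add nothing.

[code]
# Toy common-cause model: lambda correlates A and B, yet conditioning on
# lambda "screens off" the distant outcome: P(A|a,lam) = P(A|a,lam,b,B).
import random
random.seed(1)

trials = []
for _ in range(200000):
    lam = random.choice([0, 1])                              # shared hidden variable
    a, b = random.choice([0, 1]), random.choice([0, 1])      # independent local settings
    flipA = random.random() < 0.1                            # 10% purely local noise
    flipB = random.random() < 0.1
    A = (+1 if lam ^ a else -1) * (-1 if flipA else +1)      # local function of (a, lam)
    B = (+1 if lam ^ b else -1) * (-1 if flipB else +1)      # local function of (b, lam)
    trials.append((a, b, lam, A, B))

def prob(event, cond):
    selected = [t for t in trials if cond(t)]
    return sum(event(t) for t in selected) / len(selected)

A_up = lambda t: t[3] == +1
print(prob(A_up, lambda t: t[0] == 0))                                    # P(A|a)      ~ 0.50
print(prob(A_up, lambda t: t[0] == 0 and t[1] == 0 and t[4] == +1))       # P(A|a,b,B)  ~ 0.82
print(prob(A_up, lambda t: t[0] == 0 and t[2] == 1))                      # P(A|a,lam)  ~ 0.90
print(prob(A_up, lambda t: t[0] == 0 and t[2] == 1 and t[1] == 0 and t[4] == +1))  # ~ 0.90
[/code]

The last two numbers agree, which is P(A|a,lambda) = P(A|a,lambda,b,B); the first two differ, which is the surface correlation the common cause produces.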
billschnieder said:
A lot of Bell test experiments have been performed. I'm sure you are going to find one which confirms your repeated unsubstantiated claims that whenever "the experimenters chose the same setting, then they'd be guaranteed to get the same outcome with probability 1".

You only need ONE experiment in which, when the detector settings were identical, the experimenters got the same outcome with probability 1. I dare you to find such a result from the multitude that have been performed. Of course you can't, because it doesn't exist, so why do you keep repeating such a falsehood?
When I talked about getting the same outcome I was discussing Bell's theoretical proof of the incompatibility of QM and local hidden variables. In a theoretical proof we are free to consider an ideal experiment where the detectors are perfectly efficient and never miss a single particle, nor are there any other particles in the region that might be detected besides the members of the entangled pair emitted by the source. In this case, do you deny that if both detectors are detecting members of an entangled pair and the same polarizer angle is used, QM predicts the probability of identical results (both passing through the polarizer or both being reflected) is 1?

If you don't wish to deal with ideal theoretical experiments, then basically you're telling me that you don't care about the question of whether orthodox QM and local realism are compatible or incompatible on a purely theoretical level--which is what Bell's theorem is all about!

But even if you don't care about this question, I [post=3275052]already told you in a previous post you never responded to[/post] that Bell did come up with inequalities that apply even when we don't make the assumption of perfect correlations when identical settings are used. I'd be happy to discuss the derivation of these inequalities, but only if you are willing to drop your attitude on earlier threads of refusing to even consider talking about probabilities in "limit frequentist" terms (i.e. defining probabilities to mean the frequencies that would be seen as the number of repetitions of the same experimental conditions went to infinity, so that there are assumed to be true objective probabilities which our own measured frequencies only approximate--see this thread for an extensive discussion of the idea that probabilities can have objective values).

In versions of his proof where he drops the assumption of perfect correlations, so that lambda can only give probabilities of outcomes rather than predetermined outcomes, the proof really only makes sense when probabilities are interpreted in such objective terms, so if you aren't willing to think about such probabilities even for the sake of argument then you're just dealing with a strawman version of Bell's argument.
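For the version without perfect correlations, the bound itself is easy to check numerically. The sketch below is my own illustration (the response functions, the settings and the uniform lambda distribution are arbitrary choices, not Bell's or anyone else's specific model): it wires up the generic local form A(a, lambda), B(b, lambda) with P(lambda|a) = P(lambda) and estimates the CHSH combination, which stays within ±2 whatever functions or lambda distribution one substitutes, while QM reaches 2√2.

[code]
# Generic local hidden-variable model: the CHSH combination never exceeds 2.
import numpy as np

rng = np.random.default_rng(0)
lambdas = rng.uniform(0.0, 2.0 * np.pi, 50000)    # shared hidden variable, any distribution

def A(setting, lam):                               # Alice's local, deterministic response
    return 1.0 if np.cos(setting + lam) >= 0 else -1.0

def B(setting, lam):                               # Bob's local, deterministic response
    return 1.0 if np.cos(setting - lam) >= 0 else -1.0

def E(a, b):                                       # correlation E[A*B], averaged over lambda
    return np.mean([A(a, lam) * B(b, lam) for lam in lambdas])

a1, a2 = 0.0, np.pi / 2
b1, b2 = np.pi / 4, 3 * np.pi / 4
S = E(a1, b1) + E(a1, b2) + E(a2, b1) - E(a2, b2)
print(abs(S))    # stays <= 2 for any local model of this form; QM reaches 2*sqrt(2) ~ 2.83
[/code]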
 
  • #31
billschnieder said:
So you want a list in which each entry is three angles? Be specific about exactly what this dataset is supposed to contain. Do you want me to give you a list of triples, each triple being the three angles (0, 120, 240)?


The dataset above, with each point containing three angles (?), is supposed to be generated by Alice and Bob? Those are only two people. Are you expecting to obtain three values per data point from only Alice and Bob, or did you forget to mention Cyndi? Please be specific.

- We have (only) two photons from your PDC crystals, one heading toward Alice and one heading toward Bob. They each measure one result. In what alternate universe are you going to get a dataset with three values per data point? Are you sure this is the correct experiment? Your description of the experiment is inconsistent with your description of the dataset. Unless by dataset, you did not mean the result of an experiment.


If you did not mean the result of an experiment in (2), you still need to answer this question. Otherwise, amend your description of the experiment such that the experiment generates a list of triples as you described in (1).


You ask for a dataset of triples, to be produced by an experiment which can only produce pairs, so that you will then select three subsets of pairs. Is this really what you are asking? And all of this is from a hypothetical experiment? There is a disconnect between your description of the dataset and the experiment which you need to fix. You still need to specify the method you will use to select the subsets. For example, are you going to use any data point in more than one subset? This is important because whatever you do has to be consistent with what is done in actual Bell-test experiments for this exercise to be meaningful.


You need to fix the earlier problems first.


Then, duplets?



 
  • #32
DrChinese said:
Triples such as:

+ + -

Since Alice and Bob are clones, no need to repeat yourself. Of course in an observer independent reality, the kind EPR specifically mentions, Alice's decision on what to observe should not affect Bob in any way. And of course per EPR we expect that when Alice and Bob measure the same way, the results are identical.

The question is: can you present a realistic dataset that will yield results consistent with QM? Bell says no.

Now you specifically mention alternate universes. Interesting comment from a realist. As I have said before, I believe you actually follow the Bell program and are just too ornery to admit it.
You still haven't answered the questions clarifying what you mean by dataset. See my previous response to you and address the points raised. Specifically:

1) Do you want me to give you a dataset in which each point is a triple of angles, or a triple of outcomes? From your last post it appears you are suggesting a triple of outcomes like (+ + -); is that correct? Just so I do not proceed on a faulty assumption.

2) If you want a triple of outcomes, those must correspond to *outcomes* from an experiment, but your description of the experiment only includes *two* stations, so where is the third outcome coming from? Unless they are not really outcomes from an experiment but something else. Please describe this "something else" that gives us three values.

3) What method are you going to use to select pairs from these triples, in a manner that is similar to what is actually done in Bell-test experiments? This question is very important and simply saying you will do it arbitrarily does not cut it. We are trying to discuss here the issues that are important for understanding the reason experiments violate Bell inequalities.

Once you address those issues, it will be clear what you mean by "realistic dataset", and then I will present one, if you haven't withdrawn your request by then.
 
  • #33
harrylin said:
In reply to JesseM who wrote:
"In the case where there is a perfect correlation when both experimenters choose the same setting, and therefore under local realism without a "conspiracy" we are forced to assume that all the outcomes for all three settings were predetermined at some time prior to the choice of which setting to use, the argument is even simpler to think about."

That sounds reasonable; however, I think I have read somewhere (possibly Tim Maudlin) that in fact, in true measurements, there is no perfect correlation for the same settings. Can anyone clarify that?

I have now found it, indeed on p. 18 of Tim Maudlin's "Quantum Non-Locality and Relativity":
real laboratory conditions at best allow some approximation of perfect agreement or disagreement

So that appears to be a non-starter, unless QM predicts a perfect correlation of observations, contrary to the facts.

Harald
 
  • #34
JesseM said:
So please tell me, yes or no, whether you agree that there should be sufficient information in region 3 and 4 to predetermine what outcome the particle will give to each possible detector setting, under the assumption that the settings themselves are chosen by some random or pseudorandom chaotic events in regions 1 and 2, ..
The above is true only in the very unlikely alternate universe in which the detectors have no hidden variables themselves. If this is really the universe in which Bell was working, his work should be thrown out as a joke off the bat.

and also under the assumption that if both experimenters happen to choose the same setting then they are guaranteed to get the same outcome with probability one

Assuming that the same outcome will be guaranteed with probability one, without also assuming that ALL possible hidden parameters (including instrument parameters) are the same, is foolish at best.

Duh, no. I'm just saying that Bell defines lambda to only deal with hidden variables from a time prior to the choice of detector settings
I am telling you this is a foolish approach, Duh.

If you don't wish to deal with ideal theoretical experiments, then basically you're telling me that you don't care about the question of whether orthodox QM and local realism are compatible or incompatible on a purely theoretical level
Oh, on the contrary, I am dealing with such theoretical un-performable experiments in the "Violation of Bell inequalities" thread. I see no reason to repeat all those arguments here. If you have a valid response, rather than an irrelevant detour into musings about frequencies, you can post it in that thread. I simply ignore such irrelevant outbursts with multiple hyperlinks to off-topic and already debunked droppings.
 
  • #35
Replying to "Bell defines lambda to only deal with hidden variables from a time prior to the choice of detector settings":
billschnieder said:
[..]
I am telling you this is a foolish approach [..]

To get this thread back on track: although I still don't fully understand this topic, from reading the paper I guess that it's important for the mathematical arguments of De Raedt et al that hidden variables exist everywhere, thus not only in the photons but also in the instruments. Is my understanding correct?

Thus I wonder (and this is a shot in the dark): if hidden variables could exist only in the photons or only in the instruments, would Bell's theorem then perhaps be correct?

Thanks,
Harald
 
