Is Bell's Logic Aimed at Decoupling Correlated Outcomes in Quantum Mechanics?

  • Thread starter: Gordon Watson
  • Tags: Logic
Summary:
The discussion centers on Bell's logic and its implications for decoupling correlated outcomes in quantum mechanics. Participants debate whether Bell's approach effectively separates variables or if it fails to address the underlying correlations between outcomes. Some argue that Bell's logic appears flawed, suggesting that it leads to the conclusion that outcomes G and G' are not correlated under certain conditions. Others defend Bell's reasoning, asserting that it accurately reflects the complexities of hidden variables and correlations in quantum mechanics. The conversation highlights ongoing tensions in interpreting Bell's theorem and its relevance to the physics community.
  • #61
morrobay said:
Could one of you folks above please give a simplified explanation / example of the two
opposing arguments here , if possibe, for those not versed in advanced probability theory.
I understand basic Bell 101
thanks

The dispute is over whether the phrase 'local realism' is an appropriate characterization of the assumptions that Bell made in his theorem. I am arguing that it is not appropriate, and that physicists should drop that phrase in favor of Bell's 'local causality'.
 
  • #62
morrobay said:
for those not versed in advanced probability theory.

Believe it or not, this is elementary probability theory. But it often gets muddled by unnecessarily complicated analogies.
 
  • #63
Maaneli said:
That's right, which is why I don't know what you mean by 'other hidden variables'.
I meant there might be other variables besides the ones dealing with observable things like detector settings and measurement outcomes, and that these variables (unlike the former ones) would be hidden ones. Maybe it would have been clearer if I had written "other, hidden, variables" or "other (hidden) variables" to make clear that I was contrasting them with the non-hidden variables like A, B, a, and b. Reread the comment in this light and hopefully you will no longer find anything to disagree with there:
Bell does not just assume that since there is a marginal correlation between the results of different measurements on a pair of particles, there must be a causal relation between the measurements; instead his whole argument is based on explicitly considering the possibility that this correlation would disappear when conditioned on other hidden variables
 
  • #64
JesseM said:
I meant there might be other variables besides the ones dealing with observable things like detector settings and measurement outcomes, and that these variables (unlike the former ones) would be hidden ones. Maybe it would have been clearer if I had written "other, hidden, variables" or "other (hidden) variables" to make clear that I was contrasting them with the non-hidden variables like A, B, a, and b. Reread the comment in this light and hopefully you will no longer find anything to disagree with there:

Sorry, but I still don't understand. Are you saying that these other variables besides the ones dealing with observable things like detector settings and measurement outcomes, are not encompassed by lambda? If so, then what could you possibly mean by 'hidden'? And if not, then why not just say that there are no other hidden variables other than what Bell defines as encompassed by lambda?
 
  • #65
In the actual version of this study they weren't randomly selected. See the Simpson's paradox wikipedia page, where I think you got this example from (unless it also appears in other sources):

In other words, they were sampling a group that had already been assigned A or B by their doctors
My example is different from the wikipedia example; the fact that the same numbers are used does not mean you should ignore everything I actually said and respond to the wikipedia treatment of Simpson's paradox. For one, there is no omniscient being in the wikipedia example. It seems to me you are just grasping at straws here.

JesseM said:
I already gave you an example--just get a bunch of people who haven't received any treatment yet to volunteer for a study, then have a computer with a random number generator randomly assign each person to receive treatment A or treatment B. Do you agree that P(given person will be assigned by random number generator to receive treatment A) should be uncorrelated with P(given person will have some other background factor such as high socioeconomic status or large kidney stones)? If so, then the only reason group A might contain more people with a given factor (like large kidney stones) than group B would be a random statistical fluctuation, and the likelihood of any statistically significant difference in these background factors between group A and group B would get smaller and smaller the larger your sample size.
You do not know what you are talking about. The question you asked is irrelevant to the discussion and for the last time, there are no socioeconomic factors in the example I presented. You seem to have a hard time actually following an argument, and spend a lot of ink responding to what you want the argument to be rather than what it actually is. Looks like grandstanding to me.

Your only relevant response so far is essentially that a random number generator can do the job of producing a fair sample. You clearly do not deny the fact that the probability of success of each treatment will differ from those of the omniscient being unless the proportions within the sampled population are the same as in the universe. Yet your only cop-out is the idea that a random number generator will produce the same distribution. I have performed the simulation (see the attached Python code), and the results confirm once and for all that you have no clue what you are saying. If you still deny it, do yours and post the result.

Remember, we are interested ONLY in obtaining two groups that have the same proportion of large-stone to small-stone people as in the universe of all people with the disease. Alternatively, we are interested in two groups with exactly the same proportions of small stones and large stones. Feel free to calculate the probability of drawing two groups with the same proportions.

Python Code:
Code:
import random

NUMBER_OF_TRIALS = 100
TEST_SIZE = 100
UNIVERSE_FRAC_LARGE = 0.7
UNIVERSE_SIZE = 1000000
DIFFERENCE_PERMITTED = 0.01
UNIVERSE_FRAC_SMALL = 1.0 - UNIVERSE_FRAC_LARGE

def calc_freqs(l):
    # takes a binary list, prints and returns the fractions
    # of large-stone and small-stone people
    frac_large = 1.0 * l.count(1) / len(l)
    frac_small = 1.0 * l.count(0) / len(l)
    print('Large: %8.2f, Small: %8.2f' % (frac_large, frac_small))
    return frac_large, frac_small

# generate a population of UNIVERSE_SIZE people, UNIVERSE_FRAC_LARGE of whom have large stones
# and UNIVERSE_FRAC_SMALL of whom have small stones, as a binary list
# 1 = large stones,  0 = small stones
population = [1] * int(UNIVERSE_FRAC_LARGE * UNIVERSE_SIZE) + [0] * int(UNIVERSE_FRAC_SMALL * UNIVERSE_SIZE)

# shuffle it to start with
population = random.sample(population, len(population))

n = 0  # number of trials in which the two groups' fractions match each other to within DIFFERENCE_PERMITTED
m = 0  # number of those trials in which group 1 also matches the universe fractions

# for each trial, extract two groups of TEST_SIZE randomly from the population, compute the
# fractions of large and small stones, and compare them with each other and with the universe
largest_deviation_btw = (0.0, 0.0)
largest_deviation_unv = (0.0, 0.0)

for i in range(NUMBER_OF_TRIALS):
    fl1, fs1 = calc_freqs(random.sample(population, TEST_SIZE))  # group 1
    fl2, fs2 = calc_freqs(random.sample(population, TEST_SIZE))  # group 2

    _dev_btw = (abs(fl1 - fl2), abs(fs1 - fs2))                                  # group 1 vs group 2
    _dev_unv = (abs(fl1 - UNIVERSE_FRAC_LARGE), abs(fs1 - UNIVERSE_FRAC_SMALL))  # group 1 vs universe
    if _dev_btw[0] < DIFFERENCE_PERMITTED > _dev_btw[1]:
        n += 1
        if _dev_unv[0] < DIFFERENCE_PERMITTED > _dev_unv[1]:
            m += 1

    if largest_deviation_btw < _dev_btw:
        largest_deviation_btw = _dev_btw
    if largest_deviation_unv < _dev_unv:
        largest_deviation_unv = _dev_unv

print("Probability of producing two similar groups: %8.4f" % (float(n) / NUMBER_OF_TRIALS))
print("Probability of producing two similar groups, also similar to universe: %8.4f" % (float(m) / NUMBER_OF_TRIALS))
print("Largest deviation observed between groups -- Large: %8.2f, Small: %8.2f" % largest_deviation_btw)
print("Largest deviation observed between groups and universe -- Large: %8.2f, Small: %8.2f" % largest_deviation_unv)

Results:
Code:
Probability of producing two similar groups:   0.0700
Probability of producing two similar groups, also similar to universe:   0.0100
Largest deviation observed between groups -- Large:     0.21, Small:     0.21
Largest deviation observed between groups and universe -- Large:     0.13, Small:     0.13

Note: with a random number generator, you sometimes find deviations larger than 20% between groups! And this is just for a simple situation with only ONE hidden parameter. It quickly gets much, much worse if you increase the number of hidden parameters. At this rate, you will need to do an exponentially large number of experiments (compared to the number of parameters) to even have a chance of measuring a single fair sample, and even then you will not know when you have obtained one, because the experimenters do not even know what "fair" means. And remember, we are assuming that a small-stone person has the same chance of being chosen as a large-stone person. It could very well be that small-stone people are shy and never volunteer, etc., and you quickly get into a very difficult situation in which a fair sample is extremely unlikely.
 
  • #66
Continuing...
JesseM said:
No you didn't. This is the key point you seem to be confused about: the marginal correlation between treatment B and recovery observed by the omniscient being is exactly the same as that observed by the experimenters. The omniscient being does not disagree that those who receive treatment B have an 83% chance of recovery, and a person who receives treatment A has a 73% chance of recovery.
Yes he does. He disagrees that treatment B is marginally more effective than treatment A. The experimenters think they are calculating a marginal probability of success for each treatment, but the omniscient being knows that they are not. This is the issue you are trying to dodge with your language here.
The problem is not with what the omniscient being knows! The problem is what the doctors believe they know from their experiments. Now I know that you are just playing tricks and avoiding the issue. Those calculating from Aspect type experiments do not know the nature of all the hidden elements of reality involved either, so they think they have fully sampled all possible hidden elements of reality at play. They think their correlations can be compared with Bell's marginal probability. How can they possibly know that? What possible random number generator can ensure that they sample all possible hidden elements of reality fairly, when they have no clue about the details? For all we know some of them may even be excluded by the experimental set-ups!

Simple yes or no is not possible here; there is some probability the actual statistics on a finite number of trials would obey Bell's inequalities, and some probability they wouldn't, and the law of large numbers says the more trials you do, the less likely it is your statistics will differ significantly from the ideal statistics that would be seen given an infinite number of trials (so the less likely a violation of Bell's inequalities would become in a local realist universe).

This is an interesting admission. Would you say then that the law of large numbers will work for a situation in which the experimental setups typically used for Bell-type experiments were systematically biased against some λs but favored other λs? Yes or No. Or do you believe that Bell test setups are equally fair to all possible λs? Yes or No.
 
  • #67
morrobay said:
Could one of you folks above please give a simplified explanation / example of the two
opposing arguments here , if possibe, for those not versed in advanced probability theory.
I understand basic Bell 101
thanks

The summary of my argument is this. I am making two points:

1. Bell's definition of "local causality" also excludes all "logical dependence", which is unwarranted because logical dependence exists in situations that are demonstrably locally causal.

2. Bell calculates his marginal probability for the outcomes at the two stations by integrating over all possible values of the hidden elements λ. Therefore his inequalities are only comparable to experiments performed where all possible hidden elements λ are realized. But since experimenters do not know anything about λ (since it is hidden), it is not possible to perform an experiment comparable to Bell's inequalities.
 
  • #68
Maaneli said:
Sorry, but I still don't understand. Are you saying that these other variables besides the ones dealing with observable things like detector settings and measurement outcomes, are not encompassed by lambda?
No, I just explained to you that "other" was meant to contrast with non-hidden variables like A, B, a, b, not with lambda. Nowhere did I suggest any hidden variables not encompassed by lambda.
Maaneli said:
And if not, then why not just say that there are no other hidden variables other than what Bell defines as encompassed by lambda?
I thought it was clear from my previous post that you were misunderstanding when you imagined the "other" was a contrast to lambda rather than a contrast to the non-hidden variables. That's why I said 'Maybe it would have been clearer if I had written "other, hidden, variables" or "other (hidden) variables" to make clear that I was contrasting them with the non-hidden variables like A, B, a, and b'.
 
  • #69
morrobay said:
Could one of you folks above please give a simplified explanation / example of the two
opposing arguments here , if possibe, for those not versed in advanced probability theory.
I understand basic Bell 101
thanks

:smile:

I am the guy who presents the standard approach. If I deviate, I say so. JesseM also presents standard science.

There are 2 other groups represented. One group advocates that Bell's Theorem + Bell Tests combined do not rule out Local Realism. The argument varies, but in recent posts relates to the idea that classical phenomena can violate Bell Inequalities - thus proving that Bell cannot be relied upon. This argument has been soundly rejected, we are simply rehashing for iteration 4,823.

The other group insists that Bell essentially requires there to be a violation of locality within QM. On the other hand, the consensus is instead that either locality or realism can be violated. (I.e. take your pick.) This argument has some merit as there does not appear to be another mechanism* for explaining entanglement. However, this is not strictly a deduction from Bell. So we are debating that point. Norsen, channeled here by maaneli, is arguing for one side. I am defending the status quo.

*Actually there are at least 2 others, but this is the short version of the explanation.
 
  • #70
DrChinese said:
:smile:

I am the guy who presents the standard approach. If I deviate, I say so. JesseM also presents standard science.

There are 2 other groups represented. One group advocates that Bell's Theorem + Bell Tests combined do not rule out Local Realism. The argument varies, but in recent posts relates to the idea that classical phenomena can violate Bell Inequalities - thus proving that Bell cannot be relied upon. This argument has been soundly rejected, we are simply rehashing for iteration 4,823.

The other group insists that Bell essentially requires there to be a violation of locality within QM. On the other hand, the consensus is instead that either locality or realism can be violated. (I.e. take your pick.) This argument has some merit as there does not appear to be another mechanism* for explaining entanglement. However, this is not strictly a deduction from Bell. So we are debating that point. Norsen, channeled here by maaneli, is arguing for one side. I am defending the status quo.

*Actually there are at least 2 others, but this is the short version of the explanation.

<< [Bell] and Norsen, channeled here by maaneli, is arguing for one side. >>

It is important to recognize that I am representing Bell's own understanding of his theorem, not just Norsen's.
 
  • #71
Maaneli said:
<< [Bell] and Norsen, channeled here by Maaneli, is arguing for one side. >>

It is important to recognize that I am representing Bell's own understanding of his theorem, not just Norsen's.

You may be presenting some of Bell's thoughts, but Norsen's conclusion is most definitely NOT Bell's. Otherwise, why would Norsen have a need to write about it? And again, please, for the sake of our readers, please do not try to misrepresent the argument as your perspective being a common opinion. It is quite a minority view. Counting you, I know 2 in that camp. The majority view is represented by Zeilinger, Aspect, etc. And I channel that one. :smile:

And by the way, my apologies for mangling the spelling of your name in a previous post.
 
  • #72
billschnieder said:
My example is different from the wikipedia example, the fact the same numbers are used does not mean you should ignore everything I actually said and respond to the wikipedia treatment of simpson's paradox. For one, there is no omniscient being in the wikipedia.
There is no literal omniscient being in Bell's example either, it's just a shorthand so we can talk about theoretical probabilities that could not actually be empirically measured by normal experimenters in such a theoretical universe. We might use the same shorthand in discussing the wikipedia example if the size of kidney stones was not known to the experimenters.
billschnieder said:
You do not know what you are talking about. The question you asked is irrelevant to the discussion and for the last time, there are no socioeconomic factors in the example I presented.
I was proposing a variant on your example in which there was some causal factor creating a marginal correlation between treatment B and recovery, unlike the case where assignment into groups was truly random and thus any marginal correlation seen in a small sample should represent a random statistical fluctuation; in the random-assignment case the marginal correlation would be guaranteed to disappear in the limit as the sample size approached infinity (law of large numbers). My variant is more relevant to the scenario Bell is analyzing, since he does say there can be a marginal correlation between measurement outcomes (i.e. a correlation when you don't condition on the hidden variables), and he doesn't say this is just a random statistical fluctuation, but rather that it represents a causal influence on the two measurement outcomes from the hidden variables.

Am I not allowed to propose my own variants on your examples? You seem to never be willing to discuss the examples I give, yet expect me to discuss the examples you propose.
billschnieder said:
Your only relevant response so far is essentially that a random number generator can do the job of producing a fair sample.
As I have tried to explain before, you are using "fair sample" in two quite distinct senses without seeming to realize it. One use of "fair" is that we are adequately controlling for other variables, so that the likelihood of having some specific value of another variable (like large kidney stones) is not correlated with the value the variables we're studying (like treatment type and recovery rate), so that any marginal correlation in the variables we're studying reflects an actual causal influence. Another use of "fair" is just that the frequencies in your sample are reasonably close to the probabilities that would be observed if the experiment were repeated under the same conditions with a sample size approaching infinity.

Only the second sense of "fair sample" is relevant to Bell's argument. The first is not relevant, since Bell does not need to control for the influences of hidden variables on observable measurement outcomes, because he's not trying to infer any causal influence of one measurement on the other measurement. To test the Bell inequalities of course you do need a "fair sample" in the second sense of a sufficiently large number of measurements such that the frequencies of coincidences in your sample should be close to the probabilities of coincidences given by integrating (probability of coincidence given hidden state λi)*(probability of hidden state λi) over each possible value of i. But as long as your sample is "fair" in this second sense, it's no problem whatsoever if the ideal probabilities given by that integral are such that the hidden variables create marginal correlations between measurement outcomes, despite the fact that the measurement outcomes have no causal influence on one another (directly analogous to the doctors in my example being more likely to assign treatment B to those with small kidney stones, and thus creating a marginal correlation between receiving treatment B and recovery despite the fact that treatment B has no causal influence on a patient's recovery...this is why I introduced this variant example, to clearly distinguish between the two senses of 'fair' by looking at an example where the sample could be fair in the second sense even if it wasn't fair in the first).

Do you agree or disagree that as long as the sample is "fair" in the second sense, it doesn't matter to Bell's argument whether it's "fair" in the first sense?
billschnieder said:
You clearly do not deny the fact that the probability of success of each treatment will differ from those of the omniscient being unless the proportions within the sampled population are the same as in the universe.
As I understand it the word "probability" inherently refers to the frequencies that would be observed in the limit as the number of trials approaches infinity, so I would rather say the frequency of success of each treatment in your sample of 700 people differs from the probabilities the omniscient being knows would be seen if the experiment were repeated under identical conditions with an infinite number of subjects. And the fact that they differ is only because the sample isn't "fair" in the second sense.
billschnieder said:
Yet your only cop-out is the idea that a random number generator will produce the same distribution.
When I talked about a random number generator I was trying to show how you could make the sample "fair" in the first sense (which is the main sense you seemed to be talking about in some of your earlier posts), assuming it was fair in the second sense. It is certainly true that in the limit as the size of your sample approaches infinity, if you are using a random number generator to assign people to treatments the proportions of people with various preexisting traits (small kidney stones, high socioeconomic status) should become identical in both treatment groups. With a finite-size sample there may still be statistical fluctuations which make the sample not "fair" in the second sense, though the larger the sample the smaller the probability of any significant difference between observed frequencies and the ideal probabilities (the frequencies that would be observed in the limit as sample size approaches infinity).
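A minimal sketch of this point (the 70/30 split, the group sizes, and the function name below are illustrative assumptions, not figures from the thread): with random assignment, the fraction of any background trait in each group converges on the population fraction as the groups grow.
Code:
import random

def assign_and_compare(n_subjects, frac_large=0.7, seed=1):
    # Randomly split n_subjects (frac_large of whom have 'large stones')
    # into groups A and B, and return the large-stone fraction in each.
    rng = random.Random(seed)
    subjects = [1 if rng.random() < frac_large else 0 for _ in range(n_subjects)]
    group_a, group_b = [], []
    for s in subjects:
        # a random number generator, not a doctor, decides the group
        (group_a if rng.random() < 0.5 else group_b).append(s)
    return sum(group_a) / len(group_a), sum(group_b) / len(group_b)

for n in (100, 10000, 1000000):
    fa, fb = assign_and_compare(n)
    print("n=%8d: group A large-stone fraction %.3f, group B %.3f" % (n, fa, fb))
The larger n is, the closer both fractions get to 0.7, which is the sense in which random assignment controls for the background factor.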
billschnieder said:
I have performed the simulation (see the attached Python code), and the results confirm once and for all that you have no clue what you are saying. If you still deny it, do yours and post the result.

Remember, we are interested ONLY in obtaining two groups that have the same proportion of large-stone to small-stone people as in the universe of all people with the disease. Alternatively, we are interested in two groups with exactly the same proportions of small stones and large stones. Feel free to calculate the probability of drawing two groups with the same proportions.
I don't know python, so if you want me to respond to this example I'll need some summary of what's being computed. Are you taking a "universe" of 1,000,000 people, of which exactly 700,000 have large kidney stones, and then randomly picking 100,000 from the entire universe to assign to 1000 groups of 100 people each? I think that's right, but after that it's unclear--what's the significance of DIFFERENCE_PERMITTED = 0.01, are you comparing the fraction of large kidney stones in each group with the 70% in the universe as a whole, and seeing how many differ by more than 1%? (i.e. the fraction of groups of 100 that have more than 71 with large kidney stones, or less than 69?) You also seem to be looking at differences between individual pairs of groups (comparing them to each other rather than to the universe), but only a comparison with the true ratio in the universe as a whole seems directly relevant to our discussion.

Also I'd be curious what you are talking about when you say "and the results confirm once and for all that you have no clue what you are saying". Certainly the "law of large numbers" doesn't say that it would be particularly unlikely to find a difference of more than 1% between the fraction of hits in a sample of 100 and the probability of hits, so which specific statements of mine do you think are disproven by your results?
billschnieder said:
Note: with a random number generator, you sometimes find deviations larger than 20% between groups! And this is just for a simple situation with only ONE hidden parameter. It quickly gets much, much worse if you increase the number of hidden parameters. At this rate, you will need to do an exponentially large number of experiments (compared to the number of parameters) to even have a chance of measuring a single fair sample, and even then you will not know when you have obtained one, because the experimenters do not even know what "fair" means.
It may seem counterintuitive, but if you're only concerned with the issue of whether the frequencies in measured variables are close to the ideal probabilities for those same measured variables (i.e. whether it's a 'fair sample', in the second sense above, for the measured variables only), it makes no difference at all whether the measured variables are influenced by 2 hidden variables or 2 trillion hidden variables! If you want to get a small likelihood of a significant difference between frequencies of these measured variables and the ideal probabilities for the measured variables (where 'ideal probabilities' means the frequency you'd see if the experiment were repeated an infinite number of times under the same conditions, or the probabilities that would be known by the 'omniscient being'), then the sample size you need to do this depends only on the ideal probability distribution on different values for the measured variables. Even if the ideal probability that this measured variable M will take a value Mj is computed by integrating (probability of Mj given hidden state λi)*(probability of hidden state λi) over each possible value of i, where there are 2 trillion possible values of i, this makes no difference whatsoever if all you care about is that the observed frequencies of different values of M match up with the ideal probabilities of different values of M.

If you disagree that only the ideal probability distribution on the measured variables is important when choosing the needed sample size, please respond to the coin flip simulation example in post #51, it's directly relevant to this. There the computer was programmed so that the value of the measured variable (F, which can take two values corresponding to heads and tails) depended on a very large number of hidden variables, but I claimed there would be no statistical difference in the output of this program from a simpler program that just picked a random number from 1 to 2 and used that to determine heads or tails. Would you disagree with that? Also, in that post I included a textbook equation to show that only the ideal probability distribution on the measured variable X is important when figuring out the probability that the average value of X over n trials will differ by more than some small amount [tex]\epsilon[/tex] from the ideal expectation value [tex]\mu[/tex] that would be seen over an infinite number of trials:
For a somewhat more formal argument, just look at http://www.dartmouth.edu/~chance/teaching_aids/books_articles/probability_book/Chapter8.pdf, particularly the equation that appears on p. 3 after the sentence that starts "By Chebyshev's inequality ..." If you examine the equation and the definition of the terms above, you can see that if we look at the average value for some random variable X after n trials (the [tex]S_n / n[/tex] part), the probability that it will differ from the expectation value [tex]\mu[/tex] by an amount greater than or equal to [tex]\epsilon[/tex] must be smaller than or equal to [tex]\sigma^2 / n\epsilon^2[/tex], where [tex]\sigma^2[/tex] is the variance in the value of the original random variable X. And both the expectation value for X and the variance of X depend only on the probability that X takes different possible values (like the variable F in the coin example which has a 0.5 chance of taking F=0 and a 0.5 chance of taking F=1); it shouldn't matter if the value of X on each trial is itself determined by the value of some other variable λ which can take a huge number of possible values.
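As a quick numeric illustration of the quoted bound (my own arithmetic, not from post #51; the 1% tolerance and 1% failure probability are arbitrary targets): a 0/1 variable like F has variance at most 0.25, so the Chebyshev bound [tex]\sigma^2 / n\epsilon^2[/tex] gives a required sample size that depends only on the distribution of the measured variable, not on how many hidden λ values lie behind it.
Code:
# Chebyshev bound: P(|S_n/n - mu| >= eps) <= sigma^2 / (n * eps^2)
sigma_sq = 0.25   # worst-case variance of a 0/1 variable such as F
eps = 0.01        # tolerated gap between observed frequency and ideal probability
max_prob = 0.01   # desired bound on the probability of a larger gap

n_needed = sigma_sq / (max_prob * eps ** 2)
print("Trials needed by the Chebyshev bound: %d" % n_needed)
# -> 250000, whatever the number of hidden lambda values determining each outcome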
billschnieder said:
And remember, we are assuming that a small-stone person has the same chance of being chosen as a large-stone person. It could very well be that small-stone people are shy and never volunteer, etc., and you quickly get into a very difficult situation in which a fair sample is extremely unlikely.
Again you need to be clear what you mean by "fair sample". For "fair sample" in the first sense of establishing causality, all that matters is that people with small kidney stones are equally represented in group A and group B, it doesn't matter if the overall ratio of small stone subjects to large stone subjects in the study is the same as in the population at large (so it wouldn't matter if small stone people in the population at large were less likely to volunteer). For "fair sample" in the second sense of the frequencies matching the probabilities that would be seen in an arbitrarily large sample, it's true that in the medical test example shyness might create a sample that doesn't reflect the population at large. But this isn't an issue in the Aspect experiments, because Bell's proof can apply to any experimental conditions that meet some broad criteria (like each experimenter randomly choosing from three detector settings, and the choices and measurements having a spacelike separation), and the inequalities concern the ideal statistics one would see if one repeated the same experiment with the same conditions an infinite number of times. So as long as one repeats the experiment with the same conditions each time, and as long as the "conditions" you've chosen match some broad criteria like a spacelike separation between measurements, then the "population at large" you're considering is just a hypothetical infinite set of experiments repeated under just those same conditions. This means the only reason the statistics in your small sample might differ significantly from the ideal statistics would be random statistical fluctuation, there can't be any systematic bias that predictably causes your sample to differ in conditions from the "population at large" (as with shyness causing a systematic decrease in the proportion of small kidney stone patients in a study as compared with the general population), because of the very way the "population at large" would be understood in Bell's proof.
 
  • #73
JesseM said:
No you didn't. This is the key point you seem to be confused about: the marginal correlation between treatment B and recovery observed by the omniscient being is exactly the same as that observed by the experimenters. The omniscient being does not disagree that those who receive treatment B have an 83% chance of recovery, and a person who receives treatment A has a 73% chance of recovery.
billschnieder said:
Yes he does. He disagrees that treatment B is marginally more effective than treatment A.
I don't know what "marginally more effective" means--how would you define the term "marginal effectiveness"? Are you talking about the causal influence of the two treatments on recovery chances, the ideal marginal correlation between the different treatments and recovery chances that would be observed if the sample size were increased to infinity and the experiment repeated with the same conditions, the correlations in marginal frequencies seen in the actual experiment, or something else? In the above statement I was just talking about the correlations in marginal frequencies seen in the actual experiment (i.e. the fact that F(treatment B, recovery) is higher than F(treatment B)*F(recovery) in the experiment), for which the omniscient being would note exactly the same frequencies as the experimenters.
billschnieder said:
The problem is not with what the omniscient being knows! The problem is what the doctors believe they know from their experiments.
What do they believe they know, and do you think the people doing calculations from Aspect type experiments are wrongly believing they know something analogous? For example, if you're saying the doctors "believe they know" that treatment B has some causal role in recovery, are you saying that experimenters believe they know that a correlation in two observable variables (like Alice's measurement and Bob's measurement) indicates that one is having a causal influence on the other?
billschnieder said:
Now I know that you are just playing tricks and avoiding the issue. Those calculating from Aspect type experiments do not know the nature of all the hidden elements of reality involved either, so they think they have fully sampled all possible hidden elements of reality at play.
No, they don't. λ could have a vastly larger number of possible values than the number of particle pairs that could be measured in all of human history, and no one who understands Bell's proof would disagree.
billschnieder said:
They think their correlations can be compared with Bell's marginal probability. How can they possibly know that?
Because regardless of the number of possible values λ could take on an infinite set of experiments with the same measurable conditions, and the ideal probability distribution on all the different values in this infinite set, the sample size needed to get a low risk of statistical fluctuations depends only on the ideal probability distribution on the measurable variables. To see why you need to consider either the coin flip example in post #51, or the textbook equation from the same post.
billschnieder said:
What possible random number generator can ensure that they sample all possible hidden elements of reality fairly
By referring to a "random number generator" I presume you are talking about the first sense of "fair sampling" I mentioned in the previous post, but as I said there this is irrelevant to Bell's argument. Anyone with a good understanding of Bell's argument should see it's very obvious that λ is not equally likely to take a given value on trials where measurable variables like A and a took one set of values (say, a=60 degree axis and A=spin-up) as it is to take the same value on trials where these measurable variables took different values (say, a=60 degree axis and A=spin-down).
JesseM said:
Simple yes or no is not possible here; there is some probability the actual statistics on a finite number of trials would obey Bell's inequalities, and some probability they wouldn't, and the law of large numbers says the more trials you do, the less likely it is your statistics will differ significantly from the ideal statistics that would be seen given an infinite number of trials (so the less likely a violation of Bell's inequalities would become in a local realist universe). Yes or No.

billschnieder said:
This is an interesting admission. Would you say then that the law of large numbers will work for a situation in which the experimental setups typically used for Bell-type experiments were systematically biased against some λs but favored other λs?
Yes. Note that in the ideal calculation of the marginal probability of a coincidence, you do a sum over all possible values of i of (probability of coincidence given hidden-variable state λi), multiplied by the probability of that λi, so you're already explicitly taking this possibility into account. The idea is just that whatever the experimental conditions we happen to choose, there must be some ideal probability distribution on λi's that would be seen in an infinite sample of experiments repeated under the same measurable conditions, and it's that distribution that goes into the calculation of the ideal marginal probabilities of various outcomes. And obviously there's no causal reason that a real finite sample of experiments under these conditions would systematically differ from the ideal infinite sample under exactly the same observable conditions, so any difference in frequencies from the ideal probabilities must be purely a matter of random statistical fluctuation. Finally, the law of large numbers says the probability of significant fluctuations goes down the larger your sample size, and as I've said the rate at which it goes down should depend only on the ideal probability distribution on the measured variables, it doesn't make any difference if there are a vast number of hidden-variable states that can influence the values of these measured variables.
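As a toy illustration of that calculation (the three λ values, their weights, and the conditional probabilities below are invented for the example): the ideal marginal probability is just the λ-weighted average of the conditional coincidence probabilities, so an experimental setup that makes some λ values rare only changes the weights in the sum.
Code:
# Invented hidden-variable distribution and conditional coincidence probabilities
p_lambda = {1: 0.6, 2: 0.3, 3: 0.1}              # P(lambda_i), sums to 1
p_coinc_given_lambda = {1: 0.2, 2: 0.5, 3: 0.9}  # P(coincidence | lambda_i)

# marginal probability: sum over i of P(coincidence | lambda_i) * P(lambda_i)
p_coincidence = sum(p_coinc_given_lambda[i] * p_lambda[i] for i in p_lambda)
print("Ideal marginal coincidence probability: %.2f" % p_coincidence)  # 0.36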
billschnieder said:
Or do you believe that Bell test setups are equally fair to all possible λs? Yes or No.
No, the fact that equation (2) in Bell's paper includes in the integral the probability density for each given value of λ makes it obvious he wasn't assuming all values of λ are equally probable. I also made this explicit in my coin-flip-simulation example from post #51:
First, the program randomly generates a number from 1 to 1000000 (with equal probabilities of each), and each possible value is associated with some specific value of an internal variable λ; for example, it might be that if the number is 1-20 that corresponds to λ=1, while if the number is 21-250 that corresponds to λ=2 (so λ can have different probabilities of taking different values), and so forth up to some maximum λ=n.
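A sketch of such a program (reconstructed from the description above, not post #51's actual code; the λ ranges and per-λ heads probabilities are invented): however many unequally likely λ values sit behind each flip, the observed frequency of heads simply converges on the single marginal probability obtained by weighting over λ.
Code:
import random

# invented mapping from ranges of the 1..1000000 draw to lambda values,
# and from each lambda to a probability of heads
LAMBDA_RANGES = [(1, 20, 1), (21, 250, 2), (251, 1000000, 3)]  # (low, high, lambda)
P_HEADS_GIVEN_LAMBDA = {1: 0.9, 2: 0.7, 3: 0.49}

def flip():
    draw = random.randint(1, 1000000)  # lambda values have unequal probabilities
    lam = next(l for lo, hi, l in LAMBDA_RANGES if lo <= draw <= hi)
    return 1 if random.random() < P_HEADS_GIVEN_LAMBDA[lam] else 0  # F: 1 = heads, 0 = tails

trials = 200000
freq_heads = sum(flip() for _ in range(trials)) / trials
print("Observed frequency of heads over %d flips: %.3f" % (trials, freq_heads))
# converges on sum_i P(heads|lambda_i) * P(lambda_i), exactly as a one-line
# biased-coin program with that single probability would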
 
  • #74
JesseM said:
As I have tried to explain before, you are using "fair sample" in two quite distinct senses without seeming to realize it. One use of "fair" is that we are adequately controlling for other variables, so that the likelihood of having some specific value of another variable (like large kidney stones) is not correlated with the value the variables we're studying (like treatment type and recovery rate), so that any marginal correlation in the variables we're studying reflects an actual causal influence. Another use of "fair" is just that the frequencies in your sample are reasonably close to the probabilities that would be observed if the experiment were repeated under the same conditions with a sample size approaching infinity.

The definition of "fair" depends on the question you are trying to answer. If you are interested in the truth, "fair" means you take 100% of those who are right and 0% of those who are wrong. If you are looking for equal representation, "fair" means you take 50% of those who are right and 50% of those who are wrong.

If you are interested in comparing the effectiveness of a drug, "fair" means the two groups on which you administer both drugs do not differ in any significant way as concerns any parameter that correlates with the effectiveness of the drug. If you are trying to measure a sample of a population in order to extrapolate results from your sample to the population, "fair" means the distribution of all parameters in your sample does not differ significantly from the distribution of the parameters in the population.

If you are measuring frequencies of photons in order to compare with inequalities generated from the perspective of an omniscient being where all possible parameters are included, "fair" means the distribution of all parameters of the photons actually measured does not differ significantly from the distribution of the parameters in the full universe considered by the omniscient being. I say it is impossible for experimenters to make sure of that; you say it is not, and that their samples are fair. It is clear here who is making the extraordinary claim.


this makes no difference whatsoever if all you care about is that the observed frequencies of different values of M match up with the ideal probabilities of different values of M.
You have given no mechanism by which experimenters have ensured this, or can ensure this.

If you disagree that only the ideal probability distribution on the measured variables is important when choosing the needed sample size, please respond to the coin flip simulation example in post #51, it's directly relevant to this. There the computer was programmed so that the value of the measured variable (F, which can take two values corresponding to heads and tails) depended on a very large number of hidden variables, but I claimed there would be no statistical difference in the output of this program from a simpler program that just picked a random number from 1 to 2 and used that to determine heads or tails. Would you disagree with that? Also, in that post I included a textbook equation to show that only the ideal probability distribution on the measured variable X is important when figuring out the probability that the average value of X over n trials will differ by more than some small amount [tex]\epsilon[/tex] from the ideal expectation value [tex]\mu[/tex] that would be seen over an infinite number of trials:
I don't see how this is relevant. It is not possible to do an infinite number of Aspect type experiments, or for doctors treating a disease to measure an infinite number of groups, so I don't see the relevance here.

But this isn't an issue in the Aspect experiments, because Bell's proof can apply to any experimental conditions that meet some broad criteria (like each experimenter randomly choosing from three detector settings, and the choices and measurements having a spacelike separation), and the inequalities concern the ideal statistics one would see if one repeated the same experiment with the same conditions an infinite number of times.
"Randomly choosing three detector angles" does not mean the same as "randomly sampling all hidden elements of reality". That is the part you do not yet understand. If you have a hidden element of reality which interacts with the a detector angle such that for example everything from 0-75 deg behaves similarly but everything from 75 to 90 behaves differently, and you randomly choose an angle, you will not sample the hidden parameters fairly. Do you deny this.

So as long as one repeats the experiment with the same conditions each time, and as long as the "conditions" you've chosen match some broad criteria like a spacelike separation between measurements, then the "population at large" you're considering is just a hypothetical infinite set of experiments repeated under just those same conditions. This means the only reason the statistics in your small sample might differ significantly from the ideal statistics would be random statistical fluctuation, there can't be any systematic bias that predictably causes your sample to differ in conditions from the "population at large" (as with shyness causing a systematic decrease in the proportion of small kidney stone patients in a study as compared with the general population), because of the very way the "population at large" would be understood in Bell's proof.
Can you point me to an Aspect type experiment in which the same conditions were repeated an infinite number of times? NOTE: "Same conditions" includes all macro and microscopic properties of the detectors and the photon source for each iteration. Can you even point to an experiment in which the experimenters made sure EVEN ONE SINGLE condition was repeated on another trial? Just changing to the same angle is not enough.
 
  • #75
JesseM said:
I don't know what "marginally more effective" means--how would you define the term "marginal effectiveness"?
If P(A) represents the marginal probability of successful treatment with drug A and P(B) represents the marginal probability of successful treatment with drug B, and P(A) > P(B), then drug A is marginally more effective. This should have been obvious, unless you are just playing semantic games here.

the ideal marginal correlation between the different treatments and recovery chances that would be observed if the sample size were increased to infinity and the experiment repeated with the same conditions
There may be other factors, correlated with the factors directly influencing the rate of success, that thwart the experimenters' attempts to generate a fair sample, and unless they know about all these relationships they can never ensure a fair sample. Not every experiment can be repeated an infinite number of times with the same conditions.

What do they believe they know, and do you think the people doing calculations from Aspect type experiments are wrongly believing they know something analogous?
The doctors believe their sample is fair but the omniscient being knows that it is not. Have you ever heard of the "fair sampling assumption"?


Anyone with a good understanding of Bell's argument should see it's very obvious that λ is not equally likely to take a given value ...
...
No, the fact that equation (2) in Bell's paper includes in the integral the probability density for each given value of λ makes it obvious he wasn't assuming all values of λ are equally probable. I also made this explicit in my coin-flip-simulation example from post #51:
Who said anything about different λs being equally likely? Fair does not mean all lambdas must be equally likely. Fair in this case means the likelihoods of the lambdas in the sample are not significantly different from their likelihoods in the population.
 
  • #76
billschnieder said:
If you are measuring frequencies of photons in order to compare with inequalities generated from the perspective of an omniscient being where all possible parameters are included, "fair" means the distribution of all parameters of the photons actually measured does not differ significantly from the distribution of the parameters in the full universe considered by the omniscient being. I say it is impossible for experimenters to make sure of that; you say it is not, and that their samples are fair. It is clear here who is making the extraordinary claim...

Not really. All science is based on experiments, and it is not possible to assure ourselves that the demon isn't fooling us by always presenting a biased sample. In other words, there is always a fair sampling assumption operating in the background of science. But that is not what is meant by the Fair Sampling Assumption. This has to do with the matching of events (usually a time window) and detector efficiencies.

As I believe has already been mentioned, Rowe et al closed this some time back. In addition, there are numerous experiments (non-Bell such as GHZ) in which the time window is not a factor. These support the Bell conclusion, providing very powerful confirming evidence. A local realist would predict that such experiments should not be possible.

So my point is that either way you define fair sampling, it should not affect one's conclusion.
 
  • #77
DrChinese said:
You may be presenting some of Bell's thoughts, but Norsen's conclusion is most definitely NOT Bell's.

Really? Why do you think that? As far as I can tell, with respect to local causality vs local realism, and with respect to understanding what EPR argued, Norsen's conclusions are the same as Bell's. And those are the things we are talking about. So I don't know how you came to your conclusion.


DrChinese said:
Otherwise, why would Norsen have a need to write about it?

Well, your premise seems to be wrong to begin with. But even if it were correct, this would still be a non-sequitur.

I can think of plentiful reasons why Norsen might have felt the need to write about it. The most obvious is to discuss and clarify the confusions (e.g. 'local realism') about what Bell actually assumed in his theorem. For example, check out Norsen's paper, 'Against Realism'.

Actually, in light of your comment, I'm curious now - have you ever read any of Norsen's papers on Bell?


DrChinese said:
And again, please, for the sake of our readers, please do not try to misrepresent the argument as your perspective being a common opinion.

:confused: Where exactly do you think I said that my perspective is a 'common opinion'? I think you know that I have not made such a claim. I have repeatedly emphasized that I am simply presenting Bell's understanding of his own theorem, and claiming that the popular understanding of Bell (yes, even among the famous physicists that you quote) is incorrect.


DrChinese said:
It is quite a minority view. Counting you, I know 2 in that camp. The majority view is represented by Zeilinger, Aspect, etc. And I channel that one. :smile:

Yes, it is a minority view, but that has no logical bearing on its validity. Nevertheless, since you seem to be swayed by ad populum arguments, you may be interested to know that it is not nearly as minor of a view as you think. In fact, the majority of the quantum foundations physics community takes this view. That may not be as big as, say, the quantum optics community, but it is considerably larger than the two people that you know.

As for the majority view that you are channeling, I think it's rather odd that you seem so content with taking Zeilinger and Aspect's word for it, without even trying to confirm them for yourself by going directly to the source (Bell's own writings). Especially when you know that there are serious people who dispute Zeilinger and Aspect's interpretation of Bell's theorem. Would it be such a terrible thing for you if Zeilinger and Aspect were wrong?


DrChinese said:
And by the way, my apologies for mangling the spelling of your name in a previous post.

No worries, it happens.
 
  • #78
DrChinese said:
You may be presenting some of Bell's thoughts, but Norsen's conclusion is most definitely NOT Bell's. Otherwise, why would Norsen have a need to write about it? And again, please, for the sake of our readers, please do not try to misrepresent the argument as your perspective being a common opinion. It is quite a minority view. Counting you, I know 2 in that camp. The majority view is represented by Zeilinger, Aspect, etc. And I channel that one. :smile:

And by the way, my apologies for mangling the spelling of your name in a previous post.

Btw, I am still waiting for your response to my post #25.
 
  • #79
billschnieder said:
If you are interested in comparing the effectiveness of a drug, "fair" means the two groups on which you administer both drugs do not differ in any significant way as concerns any parameter that correlates with the effectiveness of the drug.
Yes, this is what I meant when I said:
One use of "fair" is that we are adequately controlling for other variables, so that the likelihood of having some specific value of another variable (like large kidney stones) is not correlated with the value the variables we're studying (like treatment type and recovery rate), so that any marginal correlation in the variables we're studying reflects an actual causal influence.
billschnieder said:
If you are trying measure on a sample of a population in order to extrapolate results from your sample to the population, "fair" means the distribution of all parameters in your sample does not differ significantly from the distribution of the parameters in the population.
Yes, and this is what I meant when I talked about the second use of "fair":
Another use of "fair" is just that the frequencies in your sample are reasonably close to the probabilities that would be observed if the experiment were repeated under the same conditions with a sample size approaching infinity.
So, do you agree with my statement that, of these two, only the second sense of "fair sample" is relevant to Bell's argument?

To make the question more precise, suppose all of the following are true:

1. We repeat some experiment with particle pairs N times and observe frequencies of different values for measurable variables like A and B

2. N is sufficiently large such that, by the law of large numbers, there is only a negligible probability that these observed frequencies differ by more than some small amount [tex]\epsilon[/tex] from the ideal probabilities for the same measurable variables (the 'ideal probabilities' being the ones that would be seen if the experiment was repeated under the same observable conditions an infinite number of times)

3. Bell's reasoning is sound, so he is correct in concluding that in a universe obeying local realist laws (or with laws obeying 'local causality' as Maaneli prefers it), the ideal probabilities for measurable variables like A and B should obey various Bell inequalities

...would you agree that if all of these are true (please grant them for the sake of the argument when answering this question, even though I know you would probably disagree with 3 and perhaps also doubt it is possible in practice to pick a sufficiently large N so that 2 is true), then the experiment constitutes a valid test of local realism/local causality, so if we see a sizeable violation of Bell inequalities in our observed frequencies there is a high probability that local realism is false? Please give me a yes-or-no answer to this question.

If you say yes, it would be a valid test if 1-3 were true but you don't actually believe 2 and/or 3 could be true in reality, then we can focus on your arguments for disbelieving either of them. For example, for 2 you might claim that if N is not large enough that the frequencies of hidden-variable states are likely to match the ideal probabilities for these states (because the number of hidden-variable states can be vastly larger than any achievable N), then that also means the frequencies of values of observable variables like A and B aren't likely to match the ideal probabilities for these variables either. I would say that argument is based on a misconception about statistics, and point you to the example of the coin-flip-simulator and the more formal textbook equation in post #51 to explain why. But again, I think it will help focus the discussion if you first address the hypothetical question about whether we would have a valid test of local realism if 1-3 were all true.
JesseM said:
this makes no difference whatsoever if all you care about is that the observed frequencies of different values of M match up with the ideal probabilities of different values of M.
billschnieder said:
You have given no mechanism by which experimenters have ensured this, or can ensure this.
Again, it's just the law of large numbers. If we are repeating an experiment under the same observable conditions, do you deny that there should be some fact of the matter as to the ideal probability distribution for each variable if the experiment were (hypothetically) repeated under the same observable conditions an infinite number of times (the ideal probability distribution known by an omniscient being, perhaps)? If you don't deny that there is some "true" probability distribution for a given type of experiment in this sense, then the law of large numbers says that if you repeat the experiment N times, then the probability p that the observed frequencies differ from the ideal probabilities by more than some small amount [tex]\epsilon[/tex] can be made as small as you want by picking a sufficiently large value of N--do you disagree?

As before, please give me a yes-or-no answer. If you do disagree with either of the above you are misunderstanding something about statistics. If you don't disagree with either of the above, then the question is just how large N must be to have a fairly small chance of a significant difference between observed frequencies of values of measurable variables and the ideal probabilities for the values of these measurable variables. You seem to be arguing that N would depend on the number of possible values of the hidden variable λ, but this is what my arguments in post #51 were intended to disprove.
billschnieder said:
I don't see how this is relevant. It is not possible to do an infinite number of Aspect type experiments, or for doctors treating a disease to measure an infinite number of groups, so I don't see the relevance here.
You acknowledge that in statistics, we can talk about "probabilities" of events which are conceptually distinct from the frequencies of those events in some finite set of trials, right? Conceptually the meaning of "probability" is just the frequencies that would be seen as the sample size approaches infinity. And by the law of large numbers, if you repeat an experiment under some specific conditions a sufficiently large number of times, you can make the likelihood that your observed frequencies will differ significantly from the ideal probabilities (i.e. the frequencies that would be seen if you repeated the experiment under the same conditions an infinite number of times) arbitrarily small.
billschnieder said:
"Randomly choosing three detector angles" does not mean the same as "randomly sampling all hidden elements of reality". That is the part you do not yet understand. If you have a hidden element of reality which interacts with the a detector angle such that for example everything from 0-75 deg behaves similarly but everything from 75 to 90 behaves differently, and you randomly choose an angle, you will not sample the hidden parameters fairly. Do you deny this.
If we're using my second definition of "fair sampling", and we're repeating the experiment a large number of times, then I would deny your claim that we're not sampling the hidden parameters fairly. There is going to be some ideal probability distribution on the hidden parameters that would occur if the experiment were repeated in the same way an infinite number of times, ideal probabilities which we can imagine are known by the omniscient being. Whatever these ideal probabilities are, by picking a sufficiently large number of trials, the likelihood that the actual frequencies differ significantly from the ideal probabilities can be made arbitrarily low.
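Here is a minimal sketch of what I mean, using your own 0-75 vs 75-90 degree scenario; the particular response rule and the distribution of the hidden value are invented purely for illustration:

[code]
import random

def outcome(angle_deg, lam):
    """Toy rule in which the hidden value lam interacts with the detector
    angle, and the behaviour changes above 75 degrees."""
    if angle_deg < 75:
        return 1 if lam > 0.5 else -1
    return 1 if lam > 0.1 else -1

def average_outcome(n_trials):
    total = 0
    for _ in range(n_trials):
        angle = random.uniform(0, 90)   # randomly chosen detector angle
        lam = random.random()           # hidden variable, never observed
        total += outcome(angle, lam)
    return total / n_trials

# The 'ideal' average (what the omniscient being would assign) is
# approximated here by one very long run; shorter runs converge to it.
print("ideal (approx):", average_outcome(2_000_000))
for n in (100, 10_000, 1_000_000):
    print(n, average_outcome(n))
[/code]

Even though the hidden value is never observed and its effect changes abruptly at 75 degrees, the observed average of the measurable outcome still settles down to its ideal value as the number of trials grows.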
billschnieder said:
Can you point me to an Aspect type experiment in which the same conditions were repeated an infinite number of times.
Again, "repeating an infinite number of times" is just a theoretical way of defining what we mean by the "true" or ideal probabilities for different values of any variables involved. And again, the law of large numbers says that with enough trials, you can make the actual frequencies on your set of trials be very unlikely to differ significantly from these ideal probabilities.
billschnieder said:
NOTE: "Same conditions" includes all macro and microscopic properties of the detectors and the photon source for each iteration.
Not necessary: the sample space consists of all possible cases where some observable conditions are the same but other micro conditions can vary. Are you familiar with the concept of microstates and macrostates in statistical mechanics, and how we reason about the probability that a given macrostate will evolve into a different macrostate by considering all possible microstates it could be in? Same idea here.
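As a toy version of that idea (the four-coin system below is just an illustrative stand-in for a real statistical-mechanics example):

[code]
from itertools import product

# 'Macrostate' = number of heads among 4 coins; 'microstates' = the
# individual head/tail sequences compatible with that count.
microstates = list(product("HT", repeat=4))
for k in range(5):
    compatible = [m for m in microstates if m.count("H") == k]
    print(k, "heads:", len(compatible), "of", len(microstates), "microstates")
[/code]

Probabilities for macro-level statements come from counting over all the micro conditions consistent with the fixed observable conditions, without ever having to pin down which microstate actually occurred.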
 
  • #80
JesseM said:
I thought it was clear from my previous post that you were misunderstanding when you imagined the "other" was a contrast to lambda rather than a contrast to the non-hidden variables. That's why I said 'Maybe it would have been clearer if I had written "other, hidden, variables" or "other (hidden) variables" to make clear that I was contrasting them with the non-hidden variables like A, B, a, and b'.

The way you described it was still rather unclear to me. But in any case, the point of Bell's c variable is to encompass ALL of the past (non-hidden) causes of outcomes A and B, in the experimental set-up. So your use of the word 'hidden' to refer to the "other" was just unnecessary and misleading.
 
  • #81
Maaneli said:
Well, your premise seems to be wrong to begin with. But even if it were correct, this would still be a non-sequitur.

I can think of plentiful reasons why Norsen might have felt the need to write about it. The most obvious is to discuss and clarify the confusions (e.g. 'local realism') about what Bell actually assumed in his theorem. For example, check out Norsen's paper, 'Against Realism'.

Actually, in light of your comment, I'm curious now - have you ever read any of Norsen's papers on Bell?

As for the majority view that you are channeling, I think it's rather odd that you seem so content with taking Zeilinger and Aspect's word for it, without even trying to confirm them for yourself by going directly to the source (Bell's own writings). Especially when you know that there are serious people who dispute Zeilinger and Aspect's interpretation of Bell's theorem. Would it be such a terrible thing for you if Zeilinger and Aspect were wrong?

I do follow Norsen's work, and in fact have had a link to one of his papers from my site for many years. From EPR and Bell Locality (2005):

"A new formulation of the EPR argument is presented, one which uses John Bell's mathematically precise local causality condition in place of the looser locality assumption which was used in the original EPR paper and on which Niels Bohr seems to have based his objection to the EPR argument. The new formulation of EPR bears a striking resemblance to Bell's derivation of his famous inequalities. The relation between these two arguments -- in particular, the role of EPR as part one of Bell's two-part argument for nonlocality -- is also discussed in detail. "

This of course has a lot of similarity to arguments you are making, and I would be happy to discuss.

Now, I don't agree with much of his work, but I happen to think it is worth discussing. So everything you are saying about me and the majority view is pretty much backwards. Zeilinger and Aspect hardly need me to defend them, and I am fairly certain they are familiar with Norsen's ideas and "generally" reject them. That's not a cut at all, as you say serious people can have different opinions. In fact, I feel there ARE points of view that are different than my own which are worthy, and they may or may not be mainstream. Norsen has put a lot of energy into the analysis of the EPR history and it is worth a listen. And by the way, I don't say that about a lot of things. But I read a lot too, and have my own opinion of things as well.

Specifically, I cannot see any way around the Bell (14) issue and I don't see how you or Norsen get around that. Here is the issue: I demand of any realist that a suitable dataset of values at three simultaneous settings (a b c) be presented for examination. That is in fact the realism requirement, and fully follows EPR's definition regarding elements of reality. Failure to do this with a dataset which matches QM expectation values constitutes the Bell program. Clearly, Bell (2) has only a and b, and lacks c. Therefore Bell (2) is insufficient to achieve the Bell result.

I have gone around and around with Travis on the point and he could never explain it to me. But believe me, I am all ears. From a physical perspective, I do follow the idea that non-locality offers an out. But there are other outs. And further, I am not certain I can even describe what a non-realistic solution might look like. Just saying it is contextual doesn't seem to solve a lot.
 
  • #82
DrChinese said:
I do follow Norsen's work, and in fact have had a link to one of his papers from my site for many years. From EPR and Bell Locality (2005):

"A new formulation of the EPR argument is presented, one which uses John Bell's mathematically precise local causality condition in place of the looser locality assumption which was used in the original EPR paper and on which Niels Bohr seems to have based his objection to the EPR argument. The new formulation of EPR bears a striking resemblance to Bell's derivation of his famous inequalities. The relation between these two arguments -- in particular, the role of EPR as part one of Bell's two-part argument for nonlocality -- is also discussed in detail. "

This of course has a lot of similarity to arguments you are making, and I would be happy to discuss.

OK, great, how about we start with post #25?

DrChinese said:
In fact, I feel there ARE points of view that are different than my own which are worthy, and they may or may not be mainstream.

Great, so then let's discuss. But first, have you read La Nouvelle Cuisine, or The Theory of Local Beables, or Free Variables and Local Causality, or Bertlmann's socks? If not, I highly recommend all of them, and particularly La Nouvelle. Or, to keep it light, you can just start with my summary of La Nouvelle in post #25. I would like to see how you think you can reconcile Bell's reasoning with that of Zeilinger and Aspect.

DrChinese said:
Specifically, I cannot see any way around the Bell (14) issue and I don't see how you or Norsen get around that.

I'm not sure how else to explain this. The necessity of c in Bell's theorem is not being disputed. You seem to think that its introduction has something to do with the introduction of a 'realism' assumption or counterfactual definiteness. I've asked you for a reference which supports your interpretation (and explains the reasoning behind it), but you have yet to provide one. In any case, as Bell explains, c just specifies (as a consequence of the principle of local causality) the non-hidden common past causes for the outcomes A and B, in the experimental set-up. I explained this in greater detail in post #25.
 
  • #83
Maaneli said:
Great, so then let's discuss. But first, have you read La Nouvelle Cuisine, or The Theory of Local Beables, or Free Variables and Local Causality, or Bertlmann's socks? If not, I highly recommend all of them, and particularly La Nouvelle. Or, to keep it light, you can just start with my summary of La Nouvelle in post #25. I would like to see how you think you can reconcile Bell's reasoning with that of Zeilinger and Aspect.

I will go back to #25, and we can discuss any point you like. However, I do not accept Bell's statements in these books as a reference in and of themselves. I have his works and he says a lot of things at different times and in different contexts. So don't ask me to accept these statements at face value. And don't ask me to reconcile them to generally accepted science either. Suppose he is a Bohmian? :smile: Instead, we can discuss them wherever they are good expressions of what we want to communicate. It is Bell's 1965 work that stands in the literature, for better or worse, so I tend to work with it a lot. But I will try and be flexible.

As to the idea about a, b and c: I have already given you reference upon reference to my perspective in general terms (respected authors with less formal papers), and I can quote specific reference papers from the same sources saying the same thing in formal peer-reviewed terms. This is a generally accepted definition of realism, and it follows EPR too. If you accept a, b and c as an assumption of Bell, then we are already at the same point and there is no further debate required.

On the other hand, I think you don't accept Bell (14) as an assumption.
 
  • #84
Maaneli said:
In other words, all the realism in Bell's theorem is introduced as part of Bell's definition and application of his local causality condition. And the introduction of the unit vector, c, follows from the use of the local causality condition. Indeed, in La Nouvelle Cuisine (particularly section 9 entitled 'Locally explicable correlations'), Bell explicitly discusses the relation of c to the hidden variables, lambda, and the polarizer settings, a and b, and explicitly shows how they follow from the local causality condition. To summarize it, Bell first defines the 'principle of local causality' as follows:

"The direct causes (and effects) of events are near by, and even the indirect causes (and effects) are no further away than permitted by the velocity of light."

In fact, this definition is equivalent to the definition of relativistic causality, and one can readily see that it implicitly requires the usual notion of realism in special relativity (namely, spacetime events, and their causes and effects) in its very formulation. Without any such notion of realism, I hope you can agree that there can be no principle of local causality.

Bell then defines a locally causal theory as follows:

"A theory will be said to be locally causal if the probabilities attached to values of 'local beables' ['beables' he defines as those entities in a theory which are, at least, tentatively, taken seriously, as corresponding to something real, and 'local beables' he defines as beables which are definitely associated with particular spacetime regions] in a spacetime region 1 are unaltered by specification of values of local beables in a spacelike separated region 2, when what happens in the backward light cone of 1 is already sufficiently specified, for example by a full specification of local beables in a spacetime region 3 [he then gives a figure illustrating this]."

You can clearly see that the local causality principle cannot apply to a theory without local beables. To spell it out, this means that the principle of local causality is not applicable to nonlocal beables, nor a theory without beables of any kind.

Bell then shows how one might try to embed quantum mechanics into a locally causal theory. To do this, he starts with the description of a spacetime diagram (figure 6) in which region 1 contains the output counter A (=+1 or -1), along with the polarizer rotated to some angle a from some standard position, while region 2 contains the output counter B (=+1 or -1), along with the polarizer rotated to some angle b from some standard position which is parallel to the standard position of the polarizer rotated to a in region 1. He then continues:

"We consider a slice of space-time 3 earlier than the regions 1 and 2 and crossing both their backward light cones where they no longer overlap. In region 3 let c stand for the values of any number of other variables describing the experimental set-up, as admitted by ordinary quantum mechanics. And let lambda denote any number of hypothetical additional complementary variables needed to complete quantum mechanics in the way envisaged by EPR. Suppose that the c and lambda together give a complete specification of at least those parts of 3 blocking the two backward light cones."

From this consideration, he writes the joint probability for particular values A and B as follows:


[tex]\{A, B|a, b, c, \lambda\} = \{A|B, a, b, c, \lambda\}\,\{B|a, b, c, \lambda\}[/tex]

He then says, "Invoking local causality, and the assumed completeness of c and lambda in the relevant parts of region 3, we declare redundant certain of the conditional variables in the last expression, because they are at spacelike separation from the result in question. Then we have


[tex]\{A, B|a, b, c, \lambda\} = \{A|a, c, \lambda\}\,\{B|b, c, \lambda\}.[/tex]"

Bell then states that this formula has the following interpretation: "It exhibits A and B as having no dependence on one another, nor on the settings of the remote polarizers (b and a respectively), but only on the local polarizers (a and b respectively) and on the past causes, c and lambda. We can clearly refer to correlations which permit such factorization as 'locally explicable'. Very often such factorizability is taken as the starting point of the analysis. Here we have preferred to see it not as the formulation of 'local causality', but as a consequence thereof."

Bell then shows that this is the same local causality condition used in the derivation of the CHSH inequality, which the predictions of quantum mechanics clearly violate. Hence, Bell concludes that quantum mechanics cannot be embedded in a locally causal theory.

And again, the variable c here is nothing but part of the specification of the experimental set-up (as allowed for by 'ordinary quantum mechanics'), just as are the polarizer settings a and b (in other words, a, b, and c are all local beables); and the introduction of c in the joint probability formula follows from the local causality condition, as part of the complete specification of causes of the events in regions 1 and 2. So, again, there is no notion of realism in c that is any different than in a and b and what already follows from Bell's application of his principle of local causality.
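As an aside, a minimal numerical sketch may help make the factorizability condition concrete; the deterministic response functions and the uniform distribution over lambda below are invented purely for illustration and are not taken from Bell:

[code]
import math, random

def A(a, lam):
    # depends only on the local setting a and the shared past cause lam
    return 1 if math.cos(2 * (a - lam)) >= 0 else -1

def B(b, lam):
    # likewise depends only on b and lam, never on a or on A's outcome
    return -1 if math.cos(2 * (b - lam)) >= 0 else 1

def E(a, b, n=200_000):
    """Correlation predicted by this factorizable (locally causal) toy model."""
    return sum(A(a, lam) * B(b, lam)
               for lam in (random.uniform(0, math.pi) for _ in range(n))) / n

a1, a2, b1, b2 = 0.0, math.pi / 4, math.pi / 8, 3 * math.pi / 8
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print("factorizable toy model |S| ~", abs(S))   # ~2 up to Monte Carlo noise
print("QM prediction |S| =", 2 * math.sqrt(2))  # about 2.83, above the bound
[/code]

Whatever response functions and lambda distribution one picks, as long as the joint probability factorizes in the way quoted above, |S| cannot exceed the CHSH bound of 2, whereas the quantum prediction reaches 2*sqrt(2).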

So there you go, straight from the horse's mouth. I hope you will have taken the time to carefully read through what I presented above, and to corroborate it for yourself by also reading (or re-reading) La Nouvelle Cuisine.

Here is most of your #25. Notice how Bell strays from the EPR language here? He is making a somewhat different argument, which is probably OK. I do that a bit in my Bell proof web pages. So I will look back over some of the book, so we can make sure we are discussing the same apples. May be a day or so though.
 
  • #85
Maaneli said:
The way you described it was still rather unclear to me. But in any case, the point of Bell's c variable is to encompass ALL of the past (non-hidden) causes of outcomes A and B, in the experimental set-up. So your use of the word 'hidden' to refer to the "other" was just unnecessary and misleading.
Why are you talking about c, though? The "other hidden variables" referred to lambda, not c, as I already explained. Do you disagree that lambda refers to the hidden variables, and that these are "other" to the measurable variables? Again:
Bell does not just assume that since there is a marginal correlation between the results of different measurements on a pair of particles, there must be a causal relation between the measurements; instead his whole argument is based on explicitly considering the possibility that this correlation would disappear when conditioned on other hidden variables (i.e., when conditioned on lambda)
 
  • #86
DrChinese said:
Here is most of your #25. Notice how Bell strays from the EPR language here? He is making a somewhat different argument, which is probably OK. I do that a bit in my Bell proof web pages.

Yes, Bell was using a more precise and quantitative formulation of the local causality criterion that EPR used. So it's not surprising that his language will stray from EPR.

DrChinese said:
So I will look back over some of the book, so we can make sure we are discussing the same apples. May be a day or so though.

Sounds good. Thanks for taking the time to read it over.
 
  • #87
JesseM said:
Why are you talking about c, though? The "other hidden variables" referred to lambda, not c, as I already explained. Do you disagree that lambda refers to the hidden variables, and that these are "other" to the measurable variables? Again:

OK, now I see what you meant. My bad.
 
  • #88
JesseM said:
Bell does not just assume that since there is a marginal correlation between the results of different measurements on a pair of particles, there must be a causal relation between the measurements; instead his whole argument is based on explicitly considering the possibility that this correlation would disappear when conditioned on other hidden variables (i.e., when conditioned on lambda)

But this characterization is still problematic. It is not accurate to say that Bell's whole argument is based on explicitly considering the possibility that this marginal correlation would 'disappear' when conditioned on (locally causal) hidden variables; rather, he asked whether the correlation could be explained in terms of a theory in which the measurement outcomes were conditioned on locally causal hidden variables. In other words, he asked whether QM could be embedded within a locally causal theory.
 
  • #89
Maaneli said:
You can clearly see that the local causality principle cannot apply to a theory without local beables. To spell it out, this means that the principle of local causality is not applicable to nonlocal beables, nor a theory without beables of any kind.
Maaneli, what you say in the above resonates strongly with how I am (currently) seeing it all.

What I see is that "Bell Locality" has in it two related yet (apparently) distinct senses of 'locality':

(i) "state separability" (for spatiotemporally separated systems) ,

and

(ii) "local causality" .

The first of these seems (to me) to correspond to the idea of "local beables".

... Would you say the same?
____________________________________

Here are some definitions to make clearer where I am coming from:

[The following definitions are ((slightly) adapted) from Ruta's post #556 in another thread.]

(i) Any two systems A and B, regardless of the history of their interactions, separated by a non-null spatiotemporal interval have their own (separate) independent 'real' states such that the joint state is completely determined by the independent states.

(ii) Any two spacelike separated systems A and B are such that the separate 'real' state of A cannot be 'influenced' by events in the neighborhood of B, and vice versa.
____________________________________
____________________________________

Next.

(In a thread parallel to this one) JesseM wrote:
JesseM said:
In a local realist theory, all physical facts--including macro-facts about "events" spread out over a finite swatch of space-time--ultimately reduce to some collection of local physical facts defined at individual points in spacetime (or individual 'bits' if spacetime is not infinitely divisible).
JesseM, it sounds (to me) like what you mean by "local realism" (part of which is expressed in the quote above) is equivalent (in meaning) to (i) and (ii) above.

... Do you agree with this assessment?
____________________________________
 
  • #90
Maaneli said:
...rather, he asked whether the correlation could be explained in terms of a theory in which the measurement outcomes were conditioned on locally causal hidden variables. In other words, he asked whether QM could be embedded within a locally causal theory.

And the apparent answer to this question was YES.

We have 2 copies of an encyclopedia which we put into 2 separate trunks. We send those trunks into separate regions of space. Then we have Alice and Bob ask questions which are answered after they open the trunks and look at the books. Their correlated results could match QM (by analogy) as far as anyone knows.

That is Bell (2), his beginning point, which is based on the ending point of EPR. There is nothing obvious that prevents this explanation from being reasonable. As long as you have Alice and Bob, 2 parties, looking at a pair of entangled particles, at ONLY settings a and b, this might be feasible. And in fact a number of authors claim to have Local Realistic theories which can satisfy this condition (although I personally don't bother to examine such claims as they are meaningless to me après Bell).
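To see why the trunk analogy is so plausible at two settings and where the trouble starts at three, here is a toy enumeration; the three 'questions' and the identical copies are just the analogy made literal, and the quantum figure of 1/4 per pair is the standard cos^2 prediction for suitably chosen polarizer settings:

[code]
from itertools import product

# Each 'encyclopedia' fixes an answer (+1/-1) to every possible question.
# With three questions a, b, c there are only 8 possible encyclopedias.
questions = ("a", "b", "c")
pairs = (("a", "b"), ("b", "c"), ("a", "c"))

for answers in product((+1, -1), repeat=3):
    book = dict(zip(questions, answers))
    # Alice and Bob hold identical copies, so same question -> same answer,
    # which reproduces the perfect correlations at equal settings.
    matches = sum(book[x] == book[y] for x, y in pairs)
    print(book, "matching pairs:", matches)

# Every one of the 8 rows has at least one matching pair, so for ANY mixture
# of encyclopedias:  P(a=b) + P(b=c) + P(a=c) >= 1.
# QM, for suitably chosen settings, predicts only 1/4 per pair (3/4 in total).
[/code]

So a predetermined-answer model handles any two settings without strain; it is only when the full a, b, c dataset is demanded that the Bell-type constraint bites.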
 
