Is Bell's Logic Aimed at Decoupling Correlated Outcomes in Quantum Mechanics?

  • Thread starter: Gordon Watson
  • Tags: Logic
In summary, the conversation discusses the separation of Bell's logic from his mathematics and the understanding of one in relation to the other. A paper by Bell is referenced, where he suggests decoupling outcomes in order to avoid inequalities. However, his logic is deemed flawed and it is concluded that the implications of Bell's lambda and his logic are not fully understood. The importance of Bell's theorem in the physics community is also questioned.
  • #71
Maaneli said:
<< [Bell] and Norsen, channeled here by Maaneli, is arguing for one side. >>

It is important to recognize that I am representing Bell's own understanding of his theorem, not just Norsen's.

You may be presenting some of Bell's thoughts, but Norsen's conclusion is most definitely NOT Bell's. Otherwise, why would Norsen have a need to write about it? And again, for the sake of our readers, please do not misrepresent your perspective as a common opinion. It is quite a minority view. Counting you, I know 2 in that camp. The majority view is represented by Zeilinger, Aspect, etc. And I channel that one. :smile:

And by the way, my apologies for mangling the spelling of your name in a previous post.
 
  • #72
billschnieder said:
My example is different from the Wikipedia example; the fact that the same numbers are used does not mean you should ignore everything I actually said and respond to the Wikipedia treatment of Simpson's paradox. For one, there is no omniscient being in the Wikipedia example.
There is no literal omniscient being in Bell's example either; it's just shorthand that lets us talk about theoretical probabilities that could not actually be measured empirically by ordinary experimenters in such a universe. We might use the same shorthand in discussing the Wikipedia example if the size of kidney stones was not known to the experimenters.
billschnieder said:
You do not know what you are talking about. The question you asked is irrelevant to the discussion and for the last time, there are no socioeconomic factors in the example I presented.
I was proposing a variant on your example in which some causal factor creates a marginal correlation between treatment B and recovery. That is unlike the case where assignment into groups is truly random: there, any marginal correlation seen in a small sample should represent a random statistical fluctuation, and it would be guaranteed to disappear in the limit as the sample size approached infinity (law of large numbers). My variant is more relevant to the scenario Bell is analyzing, since he does say there can be a marginal correlation between measurement outcomes (i.e. a correlation when you don't condition on the hidden variables), and he doesn't say this is just a random statistical fluctuation, but rather that it represents a causal influence on the two measurement outcomes from the hidden variables.

Am I not allowed to propose my own variants on your examples? You seem to never be willing to discuss the examples I give, yet expect me to discuss the examples you propose.
billschnieder said:
Your only relevant response so far is essentially that a random number generator can do the job of producing a fair sample.
As I have tried to explain before, you are using "fair sample" in two quite distinct senses without seeming to realize it. One use of "fair" is that we are adequately controlling for other variables, so that the likelihood of having some specific value of another variable (like large kidney stones) is not correlated with the values of the variables we're studying (like treatment type and recovery rate), so that any marginal correlation in the variables we're studying reflects an actual causal influence. Another use of "fair" is just that the frequencies in your sample are reasonably close to the probabilities that would be observed if the experiment were repeated under the same conditions with a sample size approaching infinity.

Only the second sense of "fair sample" is relevant to Bell's argument. The first is not relevant, since Bell does not need to control for the influences of hidden variables on observable measurement outcomes, because he's not trying to infer any causal influence of one measurement on the other measurement. To test the Bell inequalities you do, of course, need a "fair sample" in the second sense of a sufficiently large number of measurements, such that the frequencies of coincidences in your sample should be close to the probabilities of coincidences given by integrating (probability of coincidence given hidden state λi)*(probability of hidden state λi) over each possible value of i. But as long as your sample is "fair" in this second sense, it's no problem whatsoever if the ideal probabilities given by that integral are such that the hidden variables create marginal correlations between measurement outcomes, despite the fact that the measurement outcomes have no causal influence on one another (directly analogous to the doctors in my example being more likely to assign treatment B to those with small kidney stones, and thus creating a marginal correlation between receiving treatment B and recovery despite the fact that treatment B has no causal influence on a patient's recovery). This is why I introduced this variant example: to clearly distinguish between the two senses of 'fair' by looking at an example where the sample could be fair in the second sense even if it wasn't fair in the first.
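Written out explicitly (just restating the integral above as a sum over a discrete set of hidden states), the ideal coincidence probability is

[tex]P(\text{coincidence}) = \sum_i P(\text{coincidence}|\lambda_i)\,P(\lambda_i)[/tex]

and a sample is "fair" in the second sense when the observed frequency of coincidences is close to this number; nothing in that requirement forbids the hidden states from creating marginal correlations between the two measurement outcomes.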

Do you agree or disagree that as long as the sample is "fair" in the second sense, it doesn't matter to Bell's argument whether it's "fair" in the first sense?
billschnieder said:
You clearly do not deny the fact that the probability of success of each treatment will differ from those of the omniscient being unless the proportions within the sampled population are the same as in the universe.
As I understand it the word "probability" inherently refers to the frequencies that would be observed in the limit as the number of trials approaches infinity, so I would rather say the frequency of success of each treatment in your sample of 700 people differs from the probabilities the omniscient being knows would be seen if the experiment were repeated under identical conditions with an infinite number of subjects. And the fact that they differ is only because the sample isn't "fair" in the second sense.
billschnieder said:
Yet your only cop-out is the idea that a random number generator will produce the same distribution.
When I talked about a random number generator I was trying to show how you could make the sample "fair" in the first sense (which is the main sense you seemed to be talking about in some of your earlier posts), assuming it was fair in the second sense. It is certainly true that in the limit as the size of your sample approaches infinity, if you are using a random number generator to assign people to treatments, the proportions of people with various preexisting traits (small kidney stones, high socioeconomic status) should become identical in both treatment groups. With a finite-size sample there may still be statistical fluctuations which make the sample not "fair" in the second sense, though the larger the sample the smaller the probability of any significant difference between observed frequencies and the ideal probabilities (the frequencies that would be observed in the limit as sample size approaches infinity).
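To put a rough number on that (my own back-of-the-envelope estimate, treating the two groups as having about n/2 members each): with random assignment and a population fraction p of large-stone patients, the difference between the two groups' large-stone proportions has a standard deviation of roughly

[tex]2\sqrt{\frac{p(1-p)}{n}}[/tex]

so for p = 0.7 and n = 700 subjects that is about 0.035, i.e. a typical gap of a few percentage points, shrinking toward zero as n grows.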
billschnieder said:
I have performed the simulation, see attached python code, and the results confirm once and for all that you have no clue what you are saying. If you still deny it, do yours and post the result.

Remember, we are interested ONLY in obtaining two groups that have the same proportion of large-stone to small-stone people as in the universe of all people with the disease. Alternatively, we are interested in two groups with exactly the same proportions of small stones and large stones. Feel free to calculate the probability of drawing two groups with the same proportions.
I don't know python, so if you want me to respond to this example I'll need some summary of what's being computed. Are you taking a "universe" of 1,000,000 people, of which exactly 700,000 have large kidney stones, and then randomly picking 100,000 from the entire universe to assign to 1000 groups of 100 people each? I think that's right, but after that it's unclear: what's the significance of DIFFERENCE_PERMITTED = 0.01? Are you comparing the fraction of large kidney stones in each group with the 70% in the universe as a whole, and seeing how many differ by more than 1% (i.e. the fraction of groups of 100 that have more than 71 with large kidney stones, or fewer than 69)? You also seem to be looking at differences between individual pairs of groups (comparing them to each other rather than to the universe), but only a comparison with the true ratio in the universe as a whole seems directly relevant to our discussion.

Also I'd be curious what you are talking about when you say "and the results confirm once and for all that you have no clue what you are saying". Certainly the "law of large numbers" doesn't say that it would be particularly unlikely to find a difference of more than 1% between the fraction of hits in a sample of 100 and the probability of hits, so which specific statements of mine do you think are disproven by your results?
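To put a rough number on this too (again my own estimate, not taken from your code): for a group of 100 drawn from a population with 70% large stones, the standard deviation of the observed fraction is

[tex]\sqrt{\frac{0.7 \times 0.3}{100}} \approx 0.046[/tex]

so a deviation of 1% is only about a fifth of a standard deviation, and deviations larger than 1% should show up in roughly three-quarters of the groups. That is entirely consistent with the law of large numbers; it just reflects how small a group of 100 is relative to a 1% tolerance.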
billschnieder said:
Note, with a random number generator, you sometimes find deviations larger than 20% between groups! And this is just for a simple situation with only ONE hidden parameter. It quickly gets much, much worse if you increase the number of hidden parameters. At this rate, you will need to do an exponentially large number of experiments (compared to the number of parameters) to even have a chance of measuring a single fair sample, and even then you will not know when you have had one, because the experimenters do not even know what fair means.
It may seem counterintuitive, but if you're only concerned with the issue of whether the frequencies in measured variables are close to the ideal probabilities for those same measured variables (i.e. whether it's a 'fair sample', in the second sense above, for the measured variables only), it makes no difference at all whether the measured variables are influenced by 2 hidden variables or 2 trillion hidden variables! If you want to get a small likelihood of a significant difference between frequencies of these measured variables and the ideal probabilities for the measured variables (where 'ideal probabilities' means the frequency you'd see if the experiment were repeated an infinite number of times under the same conditions, or the probabilities that would be known by the 'omniscient being'), then the sample size you need to do this depends only on the ideal probability distribution on different values for the measured variables. Even if the ideal probability that this measured variable M will take a value Mj is computed by integrating (probability of Mj given hidden state λi)*(probability of hidden state λi) over each possible value of i, where there are 2 trillion possible values of i, this makes no difference whatsoever if all you care about is that the observed frequencies of different values of M match up with the ideal probabilities of different values of M.

If you disagree that only the ideal probability distribution on the measured variables is important when choosing the needed sample size, please respond to the coin flip simulation example in post #51, it's directly relevant to this. There the computer was programmed so that the value of the measured variable (F, which can take two values corresponding to heads and tails) depended on a very large number of hidden variables, but I claimed there would be no statistical difference in the output of this program from a simpler program that just picked a random number from 1 to 2 and used that to determine heads or tails. Would you disagree with that? Also, in that post I included a textbook equation to show that only the ideal probability distribution on the measured variable X is important when figuring out the probability that the average value of X over n trials will differ by more than some small amount [tex]\epsilon[/tex] from the ideal expectation value [tex]\mu[/tex] that would be seen over an infinite number of trials:
For a somewhat more formal argument, just look at http://www.dartmouth.edu/~chance/teaching_aids/books_articles/probability_book/Chapter8.pdf, particularly the equation that appears on p. 3 after the sentence that starts "By Chebyshev's inequality ..." If you examine the equation and the definition of the terms above, you can see that if we look at the average value for some random variable X after n trials (the [tex]S_n / n[/tex] part), the probability that it will differ from the expectation value [tex]\mu[/tex] by an amount greater than or equal to [tex]\epsilon[/tex] must be smaller than or equal to [tex]\sigma^2 / n\epsilon^2[/tex], where [tex]\sigma^2[/tex] is the variance of the original random variable X. And both the expectation value for X and the variance of X depend only on the probability that X takes different possible values (like the variable F in the coin example which has an 0.5 chance of taking F=0 and an 0.5 chance of taking F=1), so it shouldn't matter if the value of X on each trial is itself determined by the value of some other variable λ which can take a huge number of possible values.
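To restate that inequality compactly here:

[tex]P\left(\left|\frac{S_n}{n} - \mu\right| \geq \epsilon\right) \leq \frac{\sigma^2}{n\epsilon^2}[/tex]

where [tex]S_n[/tex] is the sum of the n trial values, [tex]\mu[/tex] is the expectation value of X and [tex]\sigma^2[/tex] its variance. Note that the number of possible values of λ appears nowhere in it.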
billschnieder said:
And remember we are assuming that a small-stone person has as fair a chance of being chosen as a large-stone person. It could very well be that small-stone people are shy and never volunteer, etc., and you quickly get into a very difficult situation in which a fair sample is extremely unlikely.
Again you need to be clear what you mean by "fair sample". For "fair sample" in the first sense of establishing causality, all that matters is that people with small kidney stones are equally represented in group A and group B; it doesn't matter if the overall ratio of small-stone subjects to large-stone subjects in the study is the same as in the population at large (so it wouldn't matter if small-stone people in the population at large were less likely to volunteer). For "fair sample" in the second sense of the frequencies matching the probabilities that would be seen in an arbitrarily large sample, it's true that in the medical test example shyness might create a sample that doesn't reflect the population at large. But this isn't an issue in the Aspect experiments, because Bell's proof can apply to any experimental conditions that meet some broad criteria (like each experimenter randomly choosing from three detector settings, and the choices and measurements having a spacelike separation), and the inequalities concern the ideal statistics one would see if one repeated the same experiment with the same conditions an infinite number of times. So as long as one repeats the experiment with the same conditions each time, and as long as the "conditions" you've chosen match some broad criteria like a spacelike separation between measurements, then the "population at large" you're considering is just a hypothetical infinite set of experiments repeated under just those same conditions. This means the only reason the statistics in your small sample might differ significantly from the ideal statistics would be random statistical fluctuation; there can't be any systematic bias that predictably causes your sample to differ in conditions from the "population at large" (as with shyness causing a systematic decrease in the proportion of small kidney stone patients in a study as compared with the general population), because of the very way the "population at large" would be understood in Bell's proof.
 
  • #73
JesseM said:
No you didn't. This is the key point you seem to be confused about: the marginal correlation between treatment B and recovery observed by the omniscient being is exactly the same as that observed by the experimenters. The omniscient being does not disagree that those who receive treatment B have an 83% chance of recovery, and that those who receive treatment A have a 73% chance of recovery.
billschnieder said:
Yes he does. He disagrees that treatment B is marginally more effective than treatment A.
I don't know what "marginally more effective" means--how would you define the term "marginal effectiveness"? Are you talking about the causal influence of the two treatments on recovery chances, the ideal marginal correlation between the different treatments and recovery chances that would be observed if the sample size were increased to infinity and the experiment repeated with the same conditions, the correlations in marginal frequencies seen in the actual experiment, or something else? In the above statement I was just talking about the correlations in marginal frequencies seen in the actual experiment (i.e. the fact that F(treatment B, recovery) is higher than F(treatment B)*F(recovery) in the experiment), for which the omniscient being would note exactly the same frequencies as the experimenters.
billschnieder said:
The problem is not with what the omniscient being knows! The problem is what the doctors believe they know from their experiments.
What do they believe they know, and do you think the people doing calculations from Aspect type experiments are wrongly believing they know something analogous? For example, if you're saying the doctors "believe they know" that treatment B has some causal role in recovery, are you saying that experimenters believe they know that a correlation in two observable variables (like Alice's measurement and Bob's measurement) indicates that one is having a causal influence on the other?
billschnieder said:
Now I know that you are just playing tricks and avoiding the issue. Those calculating from Aspect type experiments do not know the nature of all the hidden elements of reality involved either, so they think they have fully sampled all possible hidden elements of reality at play.
No, they don't. λ could have a vastly larger number of possible values than the number of particle pairs that could be measured in all of human history, and no one who understands Bell's proof would disagree.
billschnieder said:
They think their correlations can be compared with Bell's marginal probability. How can they possibly know that?
Because regardless of the number of possible values λ could take on an infinite set of experiments with the same measurable conditions, and the ideal probability distribution on all the different values in this infinite set, the sample size needed to get a low risk of statistical fluctuations depends only on the ideal probability distribution on the measurable variables. To see why you need to consider either the coin flip example in post #51, or the textbook equation from the same post.
billschnieder said:
What possible random number generator can ensure that they sample all possible hidden elements of reality fairly?
By referring to a "random number generator" I presume you are talking about the first sense of "fair sampling" I mentioned in the previous post, but as I said there this is irrelevant to Bell's argument. Anyone with a good understanding of Bell's argument should see it's very obvious that λ is not equally likely to take a given value on trials where measurable variables like A and a took one set of values (say, a=60 degree axis and A=spin-up) as it is to take the same value on trials where these measurable variables took different values (say, a=60 degree axis and A=spin-down).
JesseM said:
Simple yes or no is not possible here; there is some probability the actual statistics on a finite number of trials would obey Bell's inequalities, and some probability they wouldn't, and the law of large numbers says the more trials you do, the less likely it is your statistics will differ significantly from the ideal statistics that would be seen given an infinite number of trials (so the less likely a violation of Bell's inequalities would become in a local realist universe). Yes or No.

billschnieder said:
This is an interesting admission. Would you say then that the law of large numbers will work for a situation in which the experimental setups typically used for Bell-type experiments were systematically biased against some λs but favored other λs?
Yes. Note that in the ideal calculation of the marginal probability of a coincidence, you do a sum over all possible values of i of (probability of coincidence given hidden-variable state λi), multiplied by the probability of that λi, so you're already explicitly taking this possibility into account. The idea is just that whatever the experimental conditions we happen to choose, there must be some ideal probability distribution on λi's that would be seen in an infinite sample of experiments repeated under the same measurable conditions, and it's that distribution that goes into the calculation of the ideal marginal probabilities of various outcomes. And obviously there's no causal reason that a real finite sample of experiments under these conditions would systematically differ from the ideal infinite sample under exactly the same observable conditions, so any difference in frequencies from the ideal probabilities must be purely a matter of random statistical fluctuation. Finally, the law of large numbers says the probability of significant fluctuations goes down the larger your sample size, and as I've said the rate at which it goes down should depend only on the ideal probability distribution on the measured variables, it doesn't make any difference if there are a vast number of hidden-variable states that can influence the values of these measured variables.
billschnieder said:
Or do you believe that Bell test setups are equally fair to all possible λs? Yes or No.
No, the fact that equation (2) in Bell's paper includes in the integral the probability density for each given value of λ makes it obvious he wasn't assuming all values of λ are equally probable. I also made this explicit in my coin-flip-simulation example from post #51:
First, the program randomly generates a number from 1 to 1000000 (with equal probabilities of each), and each possible value is associated with some specific value of an internal variable λ; for example, it might be that if the number is 1-20 that corresponds to λ=1, while if the number is 21-250 that corresponds to λ=2 (so λ can have different probabilities of taking different values), and so forth up to some maximum λ=n.
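To spell out the kind of program I mean, here is a rough sketch, written in Python-like form for concreteness since that is what you attached (though I can't vouch for the idiom, and the particular λ values and the heads/tails rule below are just made up for illustration, not the exact program from post #51). The measured variable F depends on a hidden variable λ that can take over 500,000 values with wildly unequal probabilities, yet its statistics are indistinguishable from those of a plain fair coin, because the induced probability P(F = heads) is 0.5 in both programs:

[code]
import random

N_TRIALS = 100000

def flip_with_hidden_variable():
    r = random.randint(1, 1000000)
    if r <= 250000:
        lam = 1          # lambda = 1 with probability 0.25
    elif r <= 500000:
        lam = 2          # lambda = 2 with probability 0.25
    else:
        lam = r          # 500,000 rare lambda values, each with probability 1e-6
    # a fixed deterministic rule assigning heads (1) or tails (0) to each lambda value:
    return 1 if (lam == 1 or (lam > 2 and lam % 2 == 0)) else 0

def flip_simple():
    return random.randint(0, 1)   # plain fair coin, no hidden variables at all

freq_hidden = sum(flip_with_hidden_variable() for _ in range(N_TRIALS)) / N_TRIALS
freq_simple = sum(flip_simple() for _ in range(N_TRIALS)) / N_TRIALS
print(freq_hidden, freq_simple)   # both should come out close to 0.5
[/code]

The sample size needed for the observed frequency of heads to settle near 0.5 is exactly the same for both programs, which is the point: the number of possible λ values never enters into the required number of trials.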
 
  • #74
JesseM said:
As I have tried to explain before, you are using "fair sample" in two quite distinct senses without seeming to realize it. One use of "fair" is that we are adequately controlling for other variables, so that the likelihood of having some specific value of another variable (like large kidney stones) is not correlated with the values of the variables we're studying (like treatment type and recovery rate), so that any marginal correlation in the variables we're studying reflects an actual causal influence. Another use of "fair" is just that the frequencies in your sample are reasonably close to the probabilities that would be observed if the experiment were repeated under the same conditions with a sample size approaching infinity.

The definition of "fair" depends on the question you are trying to answer. If you interested in the truth, "fair" means you take 100 % of those who are right and 0% of those who are wrong. If you are looking for equal representation "fair" means you take 50% of those who are right and 50% of those who are wrong.

If you are interested in comparing the effectiveness of a drug, "fair" means the two groups on which you administer both drugs do not differ in any significant way as concerns any parameter that correlates with the effectiveness of the drug. If you are trying to measure a sample of a population in order to extrapolate results from your sample to the population, "fair" means the distribution of all parameters in your sample does not differ significantly from the distribution of the parameters in the population.

If you are measuring frequencies of photons in order to compare with inequalities generated from the perspective of an omniscient being where all possible parameters are included, "fair" means the distribution of all parameters of the photons actually measured does not differ significantly from the distribution of the parameters in the full universe considered by the omniscient being. I say it is impossible for experimenters to make sure of that; you say it is not and their samples are fair. It is clear here who is making the extraordinary claim.


this makes no difference whatsoever if all you care about is that the observed frequencies of different values of M match up with the ideal probabilities of different values of M.
You have given no mechanism by which experimenters have ensured this, or can ensure this.

If you disagree that only the ideal probability distribution on the measured variables is important when choosing the needed sample size, please respond to the coin flip simulation example in post #51, it's directly relevant to this. There the computer was programmed so that the value of the measured variable (F, which can take two values corresponding to heads and tails) depended on a very large number of hidden variables, but I claimed there would be no statistical difference in the output of this program from a simpler program that just picked a random number from 1 to 2 and used that to determine heads or tails. Would you disagree with that? Also, in that post I included a textbook equation to show that only the ideal probability distribution on the measured variable X is important when figuring out the probability that the average value of X over n trials will differ by more than some small amount [tex]\epsilon[/tex] from the ideal expectation value [tex]\mu[/tex] that would be seen over an infinite number of trials:
I don't see how this is relevant. It is not possible to do an infinite number of Aspect type experiments, or for doctors treating a disease to measure an infinite number of groups, so I don't see the relevance here.

But this isn't an issue in the Aspect experiments, because Bell's proof can apply to any experimental conditions that meet some broad criteria (like each experimenter randomly choosing from three detector settings, and the choices and measurements having a spacelike separation), and the inequalities concern the ideal statistics one would see if one repeated the same experiment with the same conditions an infinite number of times.
"Randomly choosing three detector angles" does not mean the same as "randomly sampling all hidden elements of reality". That is the part you do not yet understand. If you have a hidden element of reality which interacts with the a detector angle such that for example everything from 0-75 deg behaves similarly but everything from 75 to 90 behaves differently, and you randomly choose an angle, you will not sample the hidden parameters fairly. Do you deny this.

So as long as one repeats the experiment with the same conditions each time, and as long as the "conditions" you've chosen match some broad criteria like a spacelike separation between measurements, then the "population at large" you're considering is just a hypothetical infinite set of experiments repeated under just those same conditions. This means the only reason the statistics in your small sample might differ significantly from the ideal statistics would be random statistical fluctuation; there can't be any systematic bias that predictably causes your sample to differ in conditions from the "population at large" (as with shyness causing a systematic decrease in the proportion of small kidney stone patients in a study as compared with the general population), because of the very way the "population at large" would be understood in Bell's proof.
Can you point me to an Aspect type experiment in which the same conditions were repeated an infinite number of times? NOTE: "Same conditions" includes all macro and microscopic properties of the detectors and the photon source for each iteration. Can you even point to an experiment in which the experimenters made sure EVEN ONE SINGLE condition was repeated on another trial? Just changing to the same angle is not enough.
 
  • #75
JesseM said:
I don't know what "marginally more effective" means--how would you define the term "marginal effectiveness"?
If P(A) represents the marginal probability of successful treatment with drug A and P(B) represents the marginal probability of successful treatment with drug B, and P(A) > P(B), then drug A is marginally more effective. This should have been obvious unless you are just playing semantic games here.

the ideal marginal correlation between the different treatments and recovery chances that would be observed if the sample size were increased to infinity and the experiment repeated with the same conditions
There may be other factors that are correlated with the factors directly influencing the rate of success, which thwart the experimenters' attempts to generate a fair sample, and unless they know about all these relationships they can never ensure a fair sample. Not every experiment can be repeated an infinite number of times with the same conditions.

What do they believe they know, and do you think the people doing calculations from Aspect type experiments are wrongly believing they know something analogous?
The doctors believe their sample is fair but the omniscient being knows that it is not. Have you ever heard of the "fair sampling assumption"?


Anyone with a good understanding of Bell's argument should see it's very obvious that λ is not equally likely to take a given value ...
...
No, the fact that equation (2) in Bell's paper includes in the integral the probability density for each given value of λ makes it obvious he wasn't assuming all values of λ are equally probable. I also made this explicit in my coin-flip-simulation example from post #51:
Who said anything about different λs being equally likely? Fair does not mean all lambdas must be equally likely. Fair in this case means the likelihoods of the lambdas in the sample are not significantly different from their likelihoods in the population.
 
  • #76
billschnieder said:
If you are measuring frequencies of photons in order to compare with inequalities generated from the perspective of an omniscient being where all possible parameters are included, "fair" means the distribution of all parameters of the photons actually measured does not differ significantly from the distribution of the parameters in the full universe considered by the omniscient being. I say it is impossible for experimenters to make sure of that; you say it is not and their samples are fair. It is clear here who is making the extraordinary claim...

Not really. All science is based on experiments, and it is not possible to assure ourselves that the demon isn't fooling us by always presenting a biased sample. In other words, there is always a fair sampling assumption operating in the background of science. But that is not what is meant by the Fair Sampling Assumption. This has to do with the matching of events (usually a time window) and detector efficiencies.

As I believe has already been mentioned, Rowe et al closed this some time back. In addition, there are numerous experiments (non-Bell such as GHZ) in which the time window is not a factor. These support the Bell conclusion, providing very powerful confirming evidence. A local realist would predict that such experiments should not be possible.

So my point is that either way you define fair sampling, it should not affect one's conclusion.
 
  • #77
DrChinese said:
You may be presenting some of Bell's thoughts, but Norsen's conclusion is most definitely NOT Bell's.

Really? Why do you think that? As far as I can tell, with respect to local causality vs local realism, and with respect to understanding what EPR argued, Norsen's conclusions are the same as Bell's. And those are the things we are talking about. So I don't know how you came to your conclusion.


DrChinese said:
Otherwise, why would Norsen have a need to write about it?

Well, your premise seems to be wrong to begin with. But even if it were correct, this would still be a non-sequitur.

I can think of plentiful reasons why Norsen might have felt the need to write about it. The most obvious is to discuss and clarify the confusions (e.g. 'local realism') about what Bell actually assumed in his theorem. For example, check out Norsen's paper, 'Against Realism'.

Actually, in light of your comment, I'm curious now - have you ever read any of Norsen's papers on Bell?


DrChinese said:
And again, for the sake of our readers, please do not misrepresent your perspective as a common opinion.

:confused: Where exactly do you think I said that my perspective is a 'common opinion'? I think you know that I have not made such a claim. I have repeatedly emphasized that I am simply presenting Bell's understanding of his own theorem, and claiming that the popular understanding of Bell (yes, even among the famous physicists that you quote) is incorrect.


DrChinese said:
It is quite a minority view. Counting you, I know 2 in that camp. The majority view is represented by Zeilinger, Aspect, etc. And I channel that one. :smile:

Yes, it is a minority view, but that has no logical bearing on its validity. Nevertheless, since you seem to be swayed by ad populum arguments, you may be interested to know that it is not nearly as minor of a view as you think. In fact, the majority of the quantum foundations physics community takes this view. That may not be as big as, say, the quantum optics community, but it is considerably larger than the two people that you know.

As for the majority view that you are channeling, I think it's rather odd that you seem so content with taking Zeilinger and Aspect's word for it, without even trying to confirm them for yourself by going directly to the source (Bell's own writings). Especially when you know that there are serious people who dispute Zeilinger and Aspect's interpretation of Bell's theorem. Would it be such a terrible thing for you if Zeilinger and Aspect were wrong?


DrChinese said:
And by the way, my apologies for mangling the spelling of your name in a previous post.

No worries, it happens.
 
  • #78
DrChinese said:
You may be presenting some of Bell's thoughts, but Norsen's conclusion is most definitely NOT Bell's. Otherwise, why would Norsen have a need to write about it? And again, for the sake of our readers, please do not misrepresent your perspective as a common opinion. It is quite a minority view. Counting you, I know 2 in that camp. The majority view is represented by Zeilinger, Aspect, etc. And I channel that one. :smile:

And by the way, my apologies for mangling the spelling of your name in a previous post.

Btw, I am still waiting for your response to my post #25.
 
  • #79
billschnieder said:
If you are interested in comparing the effectiveness of a drug, "fair" means the two groups on which you administer both drugs do not differ in any significant way as concerns any parameter that correlates with the effectiveness of the drug.
Yes, this is what I meant when I said:
One use of "fair" is that we are adequately controlling for other variables, so that the likelihood of having some specific value of another variable (like large kidney stones) is not correlated with the value the variables we're studying (like treatment type and recovery rate), so that any marginal correlation in the variables we're studying reflects an actual causal influence.
billschnieder said:
If you are trying to measure a sample of a population in order to extrapolate results from your sample to the population, "fair" means the distribution of all parameters in your sample does not differ significantly from the distribution of the parameters in the population.
Yes, and this is what I meant when I talked about the second use of "fair":
Another use of "fair" is just that the frequencies in your sample are reasonably close to the probabilities that would be observed if the experiment were repeated under the same conditions with a sample size approaching infinity.
So, do you agree with my statement that, of these two, only the second sense of "fair sample" is relevant to Bell's argument?

To make the question more precise, suppose all of the following are true:

1. We repeat some experiment with particle pairs N times and observe frequencies of different values for measurable variables like A and B

2. N is sufficiently large such that, by the law of large numbers, there is only a negligible probability that these observed frequencies differ by more than some small amount [tex]\epsilon[/tex] from the ideal probabilities for the same measurable variables (the 'ideal probabilities' being the ones that would be seen if the experiment was repeated under the same observable conditions an infinite number of times)

3. Bell's reasoning is sound, so he is correct in concluding that in a universe obeying local realist laws (or with laws obeying 'local causality' as Maaneli prefers it), the ideal probabilities for measurable variables like A and B should obey various Bell inequalities

...would you agree that if all of these are true (please grant them for the sake of the argument when answering this question, even though I know you would probably disagree with 3 and perhaps also doubt it is possible in practice to pick a sufficiently large N so that 2 is true), then the experiment constitutes a valid test of local realism/local causality, so if we see a sizeable violation of Bell inequalities in our observed frequencies there is a high probability that local realism is false? Please give me a yes-or-no answer to this question.

If you say yes, that it would be a valid test if 1-3 were true, but you don't actually believe 2 and/or 3 could be true in reality, then we can focus on your arguments for disbelieving either of them. For example, for 2 you might claim that if N is not large enough that the frequencies of hidden-variable states are likely to match the ideal probabilities for these states (because the number of hidden-variable states can be vastly larger than any achievable N), then that also means the frequencies of values of observable variables like A and B aren't likely to match the ideal probabilities for these variables either. I would say that argument is based on a misconception about statistics, and point you to the example of the coin-flip-simulator and the more formal textbook equation in post #51 to explain why. But again, I think it will help focus the discussion if you first address the hypothetical question about whether we would have a valid test of local realism if 1-3 were all true.
JesseM said:
this makes no difference whatsoever if all you care about is that the observed frequencies of different values of M match up with the ideal probabilities of different values of M.
billschnieder said:
You have given no mechanism by which experimenters have ensured this, or can ensure this.
Again, it's just the law of large numbers. If we are repeating an experiment under the same observable conditions, do you deny that there should be some fact of the matter as to the ideal probability distribution for each variable if the experiment were (hypothetically) repeated under the same observable conditions an infinite number of times (the ideal probability distribution known by an omniscient being, perhaps)? If you don't deny that there is some "true" probability distribution for a given type of experiment in this sense, then the law of large numbers says that if you repeat the experiment N times, then the probability p that the observed frequencies differ from the ideal probabilities by more than some small amount [tex]\epsilon[/tex] can be made as small as you want by picking a sufficiently large value of N--do you disagree?

As before, please give me a yes-or-no answer. If you do disagree with either of the above you are misunderstanding something about statistics. If you don't disagree with either of the above, then the question is just how large N must be to have a fairly small chance of a significant difference between observed frequencies of values of measurable variables and the ideal probabilities for the values of these measurable variables. You seem to be arguing that N would depend on the number of possible values of the hidden variable λ, but this is what my arguments in post #51 were intended to disprove.
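To make that concrete with the Chebyshev bound from post #51 (a rough worked example, using my own illustrative numbers): for a two-valued measured variable the variance satisfies [tex]\sigma^2 \leq 1/4[/tex], so to make the chance of the observed frequency differing from the ideal probability by more than [tex]\epsilon = 0.01[/tex] smaller than [tex]\delta = 0.01[/tex], it is enough to take

[tex]N \geq \frac{1}{4\,\delta\,\epsilon^2} = \frac{1}{4 \times 0.01 \times (0.01)^2} = 250{,}000[/tex]

trials, since then [tex]\sigma^2 / N\epsilon^2 \leq \delta[/tex]. Nothing in this bound depends on how many values λ can take.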
billschnieder said:
I don't see how this is relevant. It is not possible to do an infinite number of Aspect type experiments, or for doctors treating a disease to measure an infinite number of groups, so I don't see the relevance here.
You acknowledge that in statistics, we can talk about "probabilities" of events which are conceptually distinct from the frequencies of those events in some finite set of trials, right? Conceptually the meaning of "probability" is just the frequencies that would be seen as the sample size approaches infinity. And by the law of large numbers, if you repeat an experiment under some specific conditions a sufficiently large number of times, you can make the likelihood that your observed frequencies will differ significantly from the ideal probabilities (i.e. the frequencies that would be seen if you repeated the experiment under the same conditions an infinite number of times) arbitrarily small.
billschnieder said:
"Randomly choosing three detector angles" does not mean the same as "randomly sampling all hidden elements of reality". That is the part you do not yet understand. If you have a hidden element of reality which interacts with the a detector angle such that for example everything from 0-75 deg behaves similarly but everything from 75 to 90 behaves differently, and you randomly choose an angle, you will not sample the hidden parameters fairly. Do you deny this.
If we're using my second definition of "fair sampling", and we're repeating the experiment a large number of times, then I would deny your claim that we're not sampling the hidden parameters fairly. There is going to be some ideal probability distribution on the hidden parameters that would occur if the experiment were repeated in the same way an infinite number of times, ideal probabilities which we can imagine are known by the omniscient being. Whatever these ideal probabilities are, by picking a sufficiently large number of trials, the likelihood that the actual frequencies differ significantly from the ideal probabilities can be made arbitrarily low.
billschnieder said:
Can you point me to an Aspect type experiment in which the same conditions were repeated an infinite number of times?
Again, "repeating an infinite number of times" is just a theoretical way of defining what we mean by the "true" or ideal probabilities for different values of any variables involved. And again, the law of large numbers says that with enough trials, you can make the actual frequencies on your set of trials be very unlikely to differ significantly from these ideal probabilities.
billschnieder said:
NOTE: "Same conditions" includes all macro and microscopic properties of the detectors and the photon source for each iteration.
That's not necessary; the sample space consists of all possible cases where some observable conditions are the same but other micro conditions can vary. Are you familiar with the concept of microstates and macrostates in statistical mechanics, and how we reason about the probability a given macrostate will evolve into a different macrostate by considering all possible microstates it could be in? Same idea here.
 
  • #80
JesseM said:
I thought it was clear from my previous post that you were misunderstanding when you imagined the "other" was a contrast to lambda rather than a contrast to the non-hidden variables. That's why I said 'Maybe it would have been clearer if I had written "other, hidden, variables" or "other (hidden) variables" to make clear that I was contrasting them with the non-hidden variables like A, B, a, and b'.

The way you described it was still rather unclear to me. But in any case, the point of Bell's c variable is to encompass ALL of the past (non-hidden) causes of outcomes A and B, in the experimental set-up. So your use of the word 'hidden' to refer to the "other" was just unnecessary and misleading.
 
  • #81
Maaneli said:
Well, your premise seems to be wrong to begin with. But even if it were correct, this would still be a non-sequitur.

I can think of plentiful reasons why Norsen might have felt the need to write about it. The most obvious is to discuss and clarify the confusions (e.g. 'local realism') about what Bell actually assumed in his theorem. For example, check out Norsen's paper, 'Against Realism'.

Actually, in light of your comment, I'm curious now - have you ever read any of Norsen's papers on Bell?

As for the majority view that you are channeling, I think it's rather odd that you seem so content with taking Zeilinger and Aspect's word for it, without even trying to confirm them for yourself by going directly the source (Bell's own writings). Especially when you know that there are serious people who dispute Zeilinger and Aspect's interpretation of Bell's theorem. Would it be such a terrible thing for you if Zeilinger and Aspect were wrong?

I do follow Norsen's work, and in fact have had a link to one of his papers from my site for many years. From EPR and Bell Locality (2005):

"A new formulation of the EPR argument is presented, one which uses John Bell's mathematically precise local causality condition in place of the looser locality assumption which was used in the original EPR paper and on which Niels Bohr seems to have based his objection to the EPR argument. The new formulation of EPR bears a striking resemblance to Bell's derivation of his famous inequalities. The relation between these two arguments -- in particular, the role of EPR as part one of Bell's two-part argument for nonlocality -- is also discussed in detail. "

This of course has a lot of similarity to arguments you are making, and I would be happy to discuss.

Now, I don't agree with much of his work, but I happen to think it is worth discussing. So everything you are saying about me and the majority view is pretty much backwards. Zeilinger and Aspect hardly need me to defend them, and I am fairly certain they are familiar with Norsen's ideas and "generally" reject them. That's not a cut at all, as you say serious people can have different opinions. In fact, I feel there ARE points of view that are different than my own which are worthy, and they may or may not be mainstream. Norsen has put a lot of energy into the analysis of the EPR history and it is worth a listen. And by the way, I don't say that about a lot of things. But I read a lot too, and have my own opinion of things as well.

Specifically, I cannot see any way around the Bell (14) issue and I don't see how you or Norsen get around that. Here is the issue: I demand of any realist that a suitable dataset of values at three simultaneous settings (a b c) be presented for examination. That is in fact the realism requirement, and fully follows EPR's definition regarding elements of reality. Failure to do this with a dataset which matches QM expectation values constitutes the Bell program. Clearly, Bell (2) has only a and b, and lacks c. Therefore Bell (2) is insufficient to achieve the Bell result.

I have gone around and around with Travis on the point and he could never explain it to me. But believe me, I am all ears. From a physical perspective, I do follow the idea that non-locality offers an out. But there are other outs. And further, I am not certain I can even describe what a non-realistic solution might look like. Just saying it is contextual doesn't seem to solve a lot.
 
  • #82
DrChinese said:
I do follow Norsen's work, and in fact have had a link to one of his papers from my site for many years. From EPR and Bell Locality (2005):

"A new formulation of the EPR argument is presented, one which uses John Bell's mathematically precise local causality condition in place of the looser locality assumption which was used in the original EPR paper and on which Niels Bohr seems to have based his objection to the EPR argument. The new formulation of EPR bears a striking resemblance to Bell's derivation of his famous inequalities. The relation between these two arguments -- in particular, the role of EPR as part one of Bell's two-part argument for nonlocality -- is also discussed in detail. "

This of course has a lot of similarity to arguments you are making, and I would be happy to discuss.

OK, great, how about we start with post #25?

DrChinese said:
In fact, I feel there ARE points of view that are different than my own which are worthy, and they may or may not be mainstream.

Great, so then let's discuss. But first, have you read La Nouvelle Cuisine, or The Theory of Local Beables, or Free Variables and Local Causality, or Bertlmann's socks? If not, I highly recommend all of them, and particularly La Nouvelle. Or, to keep it light, you can just start with my summary of La Nouvelle in post #25. I would like to see how you think you can reconcile Bell's reasoning with that of Zeilinger and Aspect.

DrChinese said:
Specifically, I cannot see any way around the Bell (14) issue and I don't see how you or Norsen get around that.

I'm not sure how else to explain this. The necessity of c in Bell's theorem is not being disputed. You seem to think that its introduction has something to do with the introduction of a 'realism' assumption or counterfactual definiteness. I've asked you for a reference which supports your interpretation (and explains the reasoning behind it), but you have yet to provide one. In any case, as Bell explains, c just specifies (as a consequence of the principle of local causality) the non-hidden common past causes for the outcomes A and B, in the experimental set-up. I explained this in greater detail in post #25.
 
  • #83
Maaneli said:
Great, so then let's discuss. But first, have you read La Nouvelle Cuisine, or The Theory of Local Beables, or Free Variables and Local Causality, or Bertlmann's socks? If not, I highly recommend all of them, and particularly La Nouvelle. Or, to keep it light, you can just start with my summary of La Nouvelle in post #25. I would like to see how you think you can reconcile Bell's reasoning with that of Zeilinger and Aspect.

I will go back to #25, and we can discuss any point you like. However, I do not accept Bell's statements in these books as a reference in and of themselves. I have his works and he says a lot of things at different times and in different contexts. So don't ask me to accept these statements at face value. And don't ask me to reconcile them to generally accepted science either. Suppose he is a Bohmian? :smile: Instead, we can discuss them wherever they are good expressions of what we want to communicate. It is Bell's 1965 work that stands in the literature, for better or worse, so I tend to work with it a lot. But I will try and be flexible.

As to the idea about a, b and c: I have already given you reference upon reference to my perspective in general terms (respected authors with less formal papers), and I can quote specific reference papers from the same sources saying the same thing in formal peer-reviewed terms. This is a generally accepted definition of realism, and it follows EPR too. If you accept a, b and c as an assumption of Bell, then we are already at the same point and there is no further debate required.

On the other hand, I think you don't accept Bell (14) as an assumption.
 
  • #84
Maaneli said:
In other words, all the realism in Bell's theorem is introduced as part of Bell's definition and application of his local causality condition. And the introduction of the unit vector, c, follows from the use of the local causality condition. Indeed, in La Nouvelle Cuisine (particularly section 9 entitled 'Locally explicable correlations'), Bell explicitly discusses the relation of c to the hidden variables, lambda, and the polarizer settings, a and b, and explicitly shows how they follow from the local causality condition. To summarize it, Bell first defines the 'principle of local causality' as follows:

"The direct causes (and effects) of events are near by, and even the indirect causes (and effects) are no further away than permitted by the velocity of light."

In fact, this definition is equivalent to the definition of relativistic causality, and one can readily see that it implicitly requires the usual notion of realism in special relativity (namely, spacetime events, and their causes and effects) in its very formulation. Without any such notion of realism, I hope you can agree that there can be no principle of local causality.

Bell then defines a locally causal theory as follows:

"A theory will be said to be locally causal if the probabilities attached to values of 'local beables' ['beables' he defines as those entities in a theory which are, at least, tentatively, taken seriously, as corresponding to something real, and 'local beables' he defines as beables which are definitely associated with particular spacetime regions] in a spacetime region 1 are unaltered by specification of values of local beables in a spacelike separated region 2, when what happens in the backward light cone of 1 is already sufficiently specified, for example by a full specification of local beables in a spacetime region 3 [he then gives a figure illustrating this]."

You can clearly see that the local causality principle cannot apply to a theory without local beables. To spell it out, this means that the principle of local causality is not applicable to nonlocal beables, nor a theory without beables of any kind.

Bell then shows how one might try to embed quantum mechanics into a locally causal theory. To do this, he starts with the description of a spacetime diagram (figure 6) in which region 1 contains the output counter A (=+1 or -1), along with the polarizer rotated to some angle a from some standard position, while region 2 contains the output counter B (=+1 or -1), along with the polarizer rotated to some angle b from some standard position which is parallel to the standard position of the polarizer rotated to a in region 1. He then continues:

"We consider a slice of space-time 3 earlier than the regions 1 and 2 and crossing both their backward light cones where they no longer overlap. In region 3 let c stand for the values of any number of other variables describing the experimental set-up, as admitted by ordinary quantum mechanics. And let lambda denote any number of hypothetical additional complementary variables needed to complete quantum mechanics in the way envisaged by EPR. Suppose that the c and lambda together give a complete specification of at least those parts of 3 blocking the two backward light cones."

From this consideration, he writes the joint probability for particular values A and B as follows:


{A, B|a, b, c, lambda} = {A|B, a, b, c, lambda} {B|a, b, c, lambda}​

He then says, "Invoking local causality, and the assumed completeness of c and lambda in the relevant parts of region 3, we declare redundant certain of the conditional variables in the last expression, because they are at spacelike separation from the result in question. Then we have


{A, B|a, b, c, lambda} = {A|a, c, lambda} {B|b, c, lambda}.​

Bell then states that this formula has the following interpretation: "It exhibits A and B as having no dependence on one another, nor on the settings of the remote polarizers (b and a respectively), but only on the local polarizers (a and b respectively) and on the past causes, c and lambda. We can clearly refer to correlations which permit such factorization as 'locally explicable'. Very often such factorizability is taken as the starting point of the analysis. Here we have preferred to see it not as the formulation of 'local causality', but as a consequence thereof."

Bell then shows that this is the same local causality condition used in the derivation of the CHSH inequality, and which the predictions of quantum mechanics clearly violate. Hence, Bell concludes that quantum mechanics cannot be embedded in a locally causal theory.

And again, the variable c here is nothing but part of the specification of the experimental set-up (as allowed for by 'ordinary quantum mechanics'), just as are the polarizer settings a and b (in other words, a, b, and c are all local beables); and the introduction of c in the joint probability formula follows from the local causality condition, as part of the complete specification of causes of the events in regions 1 and 2. So, again, there is no notion of realism in c that is any different than in a and b and what already follows from Bell's application of his principle of local causality.

So there you go, straight from the horse's mouth. I hope you will have taken the time to carefully read through what I presented above, and to corroborate it for yourself by also reading (or re-reading) La Nouvelle Cuisine.

Here is most of your #25. Notice how Bell strays from the EPR language here? He is making a somewhat different argument, which is probably OK. I do that a bit in my Bell proof web pages. So I will look back over some of the book, so we can make sure we are discussing the same apples. May be a day or so though.
 
  • #85
Maaneli said:
The way you described it was still rather unclear to me. But in any case, the point of Bell's c variable is to encompass ALL of the past (non-hidden) causes of outcomes A and B, in the experimental set-up. So your use of the word 'hidden' to refer to the "other" was just unnecessary and misleading.
Why are you talking about c, though? The "other hidden variables" referred to lambda, not c, as I already explained. Do you disagree that lambda refers to the hidden variables, and that these are "other" to the measurable variables? Again:
Bell does not just assume that since there is a marginal correlation between the results of different measurements on a pair of particles, there must be a causal relation between the measurements; instead his whole argument is based on explicitly considering the possibility that this correlation would disappear when conditioned on other hidden variables (i.e., when conditioned on lambda)
 
  • #86
DrChinese said:
Here is most of your #25. Notice how Bell strays from the EPR language here? He is making a somewhat different argument, which is probably OK. I do that a bit in my Bell proof web pages.

Yes, Bell was using a more precise and quantitative formulation of the local causality criterion that EPR used. So it's not surprising that his language will stray from EPR.

DrChinese said:
So I will look back over some of the book, so we can make sure we are discussing the same apples. May be a day or so though.

Sounds good. Thanks for taking the time to read it over.
 
  • #87
JesseM said:
Why are you talking about c, though? The "other hidden variables" referred to lambda, not c, as I already explained. Do you disagree that lambda refers to the hidden variables, and that these are "other" to the measurable variables? Again:

OK, now I see what you meant. My bad.
 
  • #88
JesseM said:
Bell does not just assume that since there is a marginal correlation between the results of different measurements on a pair of particles, there must be a causal relation between the measurements; instead his whole argument is based on explicitly considering the possibility that this correlation would disappear when conditioned on other hidden variables (i.e., when conditioned on lambda)

But this characterization is still problematic. It is not accurate to say that Bell's whole argument is based on explicitly considering the possibility that this marginal correlation would 'disappear' when conditioned on (locally causal) hidden variables; rather, he asked whether the correlation could be explained in terms of a theory in which the measurement outcomes were conditioned on locally causal hidden variables. In other words, he asked whether QM could be embedded within a locally causal theory.
 
  • #89
Maaneli said:
You can clearly see that the local causality principle cannot apply to a theory without local beables. To spell it out, this means that the principle of local causality is not applicable to nonlocal beables, nor to a theory without beables of any kind.
Maaneli, what you say in the above resonates strongly with how I am (currently) seeing it all.

What I see is that "Bell Locality" has in it two related yet (apparently) distinct senses of 'locality':

(i) "state separability" (for spatiotemporally separated systems) ,

and

(ii) "local causality" .

The first of these seems (to me) to correspond to the idea of "local beables".

... Would you say the same?
____________________________________

Here are some definitions to make clearer where I am coming from:

[The following definitions are ((slightly) adapted) from Ruta's post #556 in another thread.]

(i) Any two systems A and B, regardless of the history of their interactions, separated by a non-null spatiotemporal interval have their own (separate) independent 'real' states such that the joint state is completely determined by the independent states.

(ii) Any two spacelike separated systems A and B are such that the separate 'real' state of A cannot be 'influenced' by events in the neighborhood of B, and vice versa.
____________________________________
____________________________________

Next.

(In a thread parallel to this one) JesseM wrote:
JesseM said:
In a local realist theory, all physical facts--including macro-facts about "events" spread out over a finite swath of space-time--ultimately reduce to some collection of local physical facts defined at individual points in spacetime (or individual 'bits' if spacetime is not infinitely divisible).
JesseM, it sounds (to me) like what you mean by "local realism" (part of which is expressed in the quote above) is equivalent (in meaning) to (i) and (ii) above.

... Do you agree with this assessment?
____________________________________
 
Last edited:
  • #90
Maaneli said:
...rather, he asked whether the correlation could be explained in terms of a theory in which the measurement outcomes were conditioned on locally causal hidden variables. In other words, he asked whether QM could be embedded within a locally causal theory.

And the apparent answer to this question was YES.

We have 2 copies of an encyclopedia which we put into 2 separate trunks. We send those trunks into separate regions of space. Then we have Alice and Bob ask questions which are answered after they open the trunks and look at the books. Their correlated results could match QM (by analogy) as far as anyone knows.

That is Bell (2), his beginning point, which builds on the ending point of EPR. There is nothing obvious that prevents this explanation from being reasonable. As long as you have Alice and Bob, 2 parties, looking at a pair of entangled particles, at ONLY settings a and b, this might be feasible. And in fact a number of authors claim to have Local Realistic theories which can satisfy this condition (although I personally don't bother to examine such claims as they are meaningless to me après Bell).
 
  • #91
JesseM said:
I don't know what "marginally more effective" means--how would you define the term "marginal effectiveness"? Are you talking about the causal influence of the two treatments on recovery chances, the ideal marginal correlation between the different treatments and recovery chances that would be observed if the sample size were increased to infinity and the experiment repeated with the same conditions, the correlations in marginal frequencies seen in the actual experiment, or something else?
billschnieder said:
if P(A) represents the marginal probability of successful treatment with drug A and P(B) represents the marginal probability of successful treatment with drug B, then if P(A) > P(B), drug A is marginally more effective. This should have been obvious unless you are just playing semantic games here.
It's still not clear what you mean by "the marginal probability of successful treatment". Do you agree that ideally "probability" can be defined by picking some experimental conditions you're repeating for each subject, and then allowing the number of subjects/trials to go to infinity (this is the frequentist interpretation of probability, its major rival being the Bayesian interpretation--see the article Frequentists and Bayesians)? If so, what would be the experimental conditions in question? Would they just involve replicating whatever experimental conditions were used in the actual experiment with 700 people, or would they involve some ideal experimental conditions which control for other variables like kidney stone size even if the actual experiment did not control for these things?

For example, take my example where the actual experiment was done by sampling patients whose treatments had not been assigned randomly, but had been assigned by their doctors. In this case there might be a systematic bias where doctors are more likely to assign treatment A to patients with large kidney stones (because these patients have more severe symptoms and A is seen as a stronger treatment) and more likely to assign treatment B to patients with small ones. If we imagine repeating this experiment a near-infinite number of times with the same experimental conditions, then those same experimental conditions would still involve the same set of doctors assigning treatments to a near-infinite number of patients, so the systematic bias of the doctors would influence the final probabilities, and thus the "marginal probability of recovery with treatment B" would be higher because patients who receive treatment B are more likely to have small kidney stones, not because treatment B is causally more effective. On the other hand, if we imagine repeating a different experiment that adequately controls for all other variables (in the limit as the sample size approaches infinity), like one where the patients are randomly assigned to treatment A or B, then in this case the "marginal probability of recovery with treatment A" would be higher. So in this specific experiment where treatment was determined by the doctor, which would you say was higher, the marginal probability of recovery with treatment A or the marginal probability of recovery with treatment B? Without knowing the answer to this question I can't really understand what your terminology is supposed to mean.
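For what it's worth, this doctor-bias scenario is easy to simulate; the numbers below are purely invented for illustration (treatment A is causally better for both stone sizes, but doctors assign A mostly to large-stone patients), and even with a huge sample the marginal recovery rate still comes out higher for treatment B:

[code]
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000                                     # invented parameters, for illustration only

large_stone = rng.random(n) < 0.5                 # background variable (kidney stone size)
p_assign_A = np.where(large_stone, 0.9, 0.1)      # doctors' systematic assignment bias
gets_A = rng.random(n) < p_assign_A

# Treatment A is causally better for BOTH stone sizes.
p_recover = np.where(gets_A,
                     np.where(large_stone, 0.73, 0.93),
                     np.where(large_stone, 0.69, 0.87))
recovered = rng.random(n) < p_recover

def rate(mask):
    return round(recovered[mask].mean(), 3)

print("marginal:      A", rate(gets_A), " B", rate(~gets_A))                              # B looks better
print("large stones:  A", rate(gets_A & large_stone), " B", rate(~gets_A & large_stone))  # A is better
print("small stones:  A", rate(gets_A & ~large_stone), " B", rate(~gets_A & ~large_stone))# A is better
[/code]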
JesseM said:
the ideal marginal correlation between the different treatments and recovery chances that would be observed if the sample size were increased to infinity and the experiment repeated with the same conditions
billschnieder said:
There may be other factors, correlated with the factors directly influencing the rate of success, that thwart the experimenters' attempts to generate a fair sample, and unless they know about all these relationships they can never ensure a fair sample. Not every experiment can be repeated an infinite number of times with the same conditions.
In practice no experiment can be repeated an infinite number of times, obviously. Again, I'm talking about the definition of what we mean when we talk about "probability" (as distinct from frequencies on any finite number of trials, which can differ from the 'true probabilities' due to statistical fluctuations, like if a coin has a 0.5 probability of landing heads even though for any finite number of flips you are unlikely to find that exactly half the flips were heads). In the frequentist interpretation, probability is understood to mean the frequencies that would be seen if we could hypothetically repeat the same experiment an infinite number of times, even if this is impossible in practice. Do you think there are situations where even hypothetically it doesn't make sense to talk about repetition under the same experimental conditions (so even a hypothetical 'God' would not be able to define 'probability' in this way?) If so, perhaps you'd better give me your own definition of what you even mean by the word "probability", if you're not using the frequentist interpretation that I use.
JesseM said:
What do they believe they know, and do you think the people doing calculations from Aspect type experiments are wrongly believing they know something analogous?
billschnieder said:
The doctors believe their sample is fair but the omniscient being knows that it is not. Have you ever heard of the "fair sampling assumption"?
Given that my whole question was about what you meant by fair, this is not a helpful answer. The "fair sampling assumption" is a term that is used specifically in discussions of Aspect-type experiments; it refers to the idea that the statistics of the measured particle pairs should be representative of the statistics in all pairs emitted by the source. That doesn't really help me understand what you mean by "the doctors believe their sample is fair", since I'm not sure what larger group you want the statistics in the sample to be representative of (again, whether it's a hypothetical much larger group of tests repeated under the same experimental conditions, or a hypothetical much larger group of tests done under some different experimental conditions that may be better designed to control for other variables, or something else entirely if you aren't using the frequentist understanding of the meaning of 'probability').
billschnieder said:
Would you say then that the law of large numbers will work for a situation in which the experimental setups typically used for Bell-type experiments were systematically biased against some λs but favored other λs?
JesseM said:
Anyone with a good understanding of Bell's argument should see it's very obvious that λ is not equally likely to take a given value
billschnieder said:
Who said anything about different λs being equally likely? Fair does not mean all lambdas must be equally likely.
You didn't say anything about "fair" in the question I was responding to, you just asked if the setups were "systematically biased against some λs but favored other λs". I took that to mean that under the experimental setup, some λs were systematically less likely to occur than others (what else would 'systematically biased against some λs' mean?)
billschnieder said:
Fair in this case means the likelihood of the lambdas in sample are not significantly different from their likelihoods in the population.
As before, you need to explain what "the population" consists of. Again, does it consist of a hypothetical repetition of the same experimental conditions a much larger (near-infinite) number of times? If so, then by definition the actual sample could not be "systematically biased" compared to the larger population, since the larger population is defined in terms of the same experimental conditions. Perhaps you mean repeating similar experimental conditions but with ideal detector efficiency so all particle pairs emitted by the source are actually detected, which would be more like the meaning of the "fair sampling assumption"? If neither of these captures your meaning, please give your own definition of what you do mean by "the population".
 
  • #92
Eye_in_the_Sky said:
Here are some definitions to make clearer where I am coming from:

[The following definitions are ((slightly) adapted) from Ruta's post #556 in another thread.]

(i) Any two systems A and B, regardless of the history of their interactions, separated by a non-null spatiotemporal interval have their own (separate) independent 'real' states such that the joint state is completely determined by the independent states.

(ii) Any two spacelike separated systems A and B are such that the separate 'real' state of A cannot be 'influenced' by events in the neighborhood of B, and vice versa.
____________________________________
____________________________________

Next.

(In a thread parallel to this one) JesseM wrote:
JesseM said:
In a local realist theory, all physical facts--including macro-facts about "events" spread out over a finite swath of space-time--ultimately reduce to some collection of local physical facts defined at individual points in spacetime (or individual 'bits' if spacetime is not infinitely divisible).
JesseM, it sounds (to me) like what you mean by "local realism" (part of which is expressed in the quote above) is equivalent (in meaning) to (i) and (ii) above.

... Do you agree with this assessment?
____________________________________
My quote above doesn't give the full definition of what I mean by "local realist", I would have to add a condition similar to your (ii), that spacelike-separated physical facts A and B cannot causally influence one another (which could be stated as the condition that if you know the complete set of local physical facts in the past light cone of A, and express that knowledge as the value of a variable λ whose different values correspond to all possible combinations of local physical facts in a past light cone, then P(A|λ)=P(A|λ,B)). With this addition, I'd say that what I mean by "local realism" does appear to be the same as your own (i) and (ii).
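If it helps, here is a minimal numerical sketch of that screening-off condition (a toy common-cause model of my own, with nothing specific to spin measurements): a shared λ influences both outcomes, so A and B are marginally correlated, yet P(A|λ) is unchanged when we additionally condition on B.

[code]
import numpy as np

rng = np.random.default_rng(2)
n = 2_000_000

# Toy common-cause model: lambda is a shared bit that influences both outcomes.
lam = rng.integers(0, 2, n)
A = rng.random(n) < np.where(lam == 1, 0.9, 0.2)   # outcome A depends only on lambda
B = rng.random(n) < np.where(lam == 1, 0.8, 0.3)   # outcome B depends only on lambda

def P(event, given=None):
    mask = np.ones(n, dtype=bool) if given is None else given
    return round(event[mask].mean(), 3)

# Marginally, A and B are correlated (lambda is their common cause)...
print("P(A)            =", P(A))
print("P(A | B)        =", P(A, B))
# ...but conditioning on lambda screens the correlation off: P(A|lam) = P(A|lam,B).
print("P(A | lam=1)    =", P(A, lam == 1))
print("P(A | lam=1, B) =", P(A, (lam == 1) & B))
[/code]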
 
  • #93
Eye_in_the_Sky said:
Maaneli, what you say in the above resonates strongly with how I am (currently) seeing it all.

What I see is that "Bell Locality" has in it two related yet (apparently) distinct senses of 'locality':

(i) "state separability" (for spatiotemporally separated systems) ,

and

(ii) "local causality" .

The first of these seems (to me) to correspond to the idea of "local beables".

... Would you say the same?

Yes!
 
  • #94
JesseM said:
With this addition, I'd say that what I mean by "local realism" does appear to be the same as your own (i) and (ii).

If that's the case Jesse, can you tell us which parts of your definition of 'local realism' refer to 'locality', and which parts refer to 'realism', and whether these definitions are independent of each other? I think you know what I'm driving at ...
 
  • #95
JesseM said:
It's still not clear what you mean by "the marginal probability of successful treatment".
A = Treatment A results in recovery from the disease
P(A) = marginal probability of recovery after administration of treatment A.
If it is the meaning of marginal probability you are unsure of, this will help (http://en.wikipedia.org/wiki/Conditional_probability)

Do you agree that ideally "probability" can be defined by picking some experimental conditions you're repeating for each subject, and then allowing the number of subjects/trials to go to infinity

Probability means "Rational degree of belief" defined in the range from 0 to 1 such that 0 means uncertain and 1 means certain. Probability does not mean frequency, although probability can be calculated from frequencies. Probabilities can be assigned for many situations that can never be repeated. A rational degree of belief can be formed about a lot of situations that have never happened. The domain of probability theory is to deal with uncertainty, indeterminacy and incomplete information. As such it makes not much sense to talk of "true probability". You can talk of the "true relative frequency".

For example, take my example where the actual experiment was done by sampling patients whose treatments had not been assigned randomly, but had been assigned by their doctors. In this case there might be a systematic bias where doctors are more likely to assign treatment A to patients with large kidney stones (because these patients have more severe symptoms and A is seen as a stronger treatment) and more likely to assign treatment B to patients with small ones. If we imagine repeating this experiment a near-infinite number of times with the same experimental conditions, then those same experimental conditions would still involve the same set of doctors assigning treatments to a near-infinite number of patients, so the systematic bias of the doctors would influence the final probabilities, and thus the "marginal probability of recovery with treatment B" would be higher because patients who receive treatment B are more likely to have small kidney stones, not because treatment B is causally more effective.
So you agree that one man's marginal probability is another man's conditional probability. That is the point I've been pointing out to you ad nauseam. Comparing probabilities defined on different probability spaces is guaranteed to produce paradoxes and spooky business.
On the other hand, if we imagine repeating a different experiment that adequately controls for all other variables (in the limit as the sample size approaches infinity),
This is the point you still have not understood. It is not possible to control for "all other variables" which you know nothing about, even if it were possible to repeat the experiment an infinite number of times. Without knowing everything relevant about "all other variables", your claim to be randomly selecting between them is no different from the case in which the doctors did the selection. For example, imagine that I come to you today and say, I want to do an experiment on dolphins, give me a representative sample of 1000 dolphins. Without knowing anything about the details of my experiment, and all the parameters that affect the outcome of my experiment, could you explain to me how you would go about generating this "random list of dolphins", and tell me what "an infinite number of times" means in this context? If you could answer this question, it would help tremendously in understanding your point of view.

And let us say, you came up with some list, and I did my experiment and came up with the number of dolphins passing some test (say N), and I calculated the relative frequency N/1000. Will you call this number the marginal probability of a dolphin passing my test? Or the conditional probability of the dolphin passing my test, conditioned on the method of selecting the list?

Do you think there are situations where even hypothetically it doesn't make sense to talk about repetition under the same experimental conditions (so even a hypothetical 'God' would not be able to define 'probability' in this way?)
You can hypothesize anything you want. But not everything that you hypothesize can be compared with something that is actually done. To be able to compare an actual experiment to a hypothetical situation, you have to make sure all relevant entities in the hypothetical situation are present in the actual experiment and vice versa.

For example, let us say your hypothetical situation assumes that an experimental condition is measured an infinite number of times (in your words, "hypothetically repeat the same experiment an infinite number of times", "hypothetical much larger group of tests repeated under the same experimental conditions"). Now suppose an experiment is actually performed in which the experimenters repeatedly measure at a given detector setting (say, a detector angle) a very large number of times.

Your argument here is that, since the hypothetical situation requires repeating the same conditions multiple times and the experimenters have done that, then their results are comparable. In other words, according to you, the results of Aspect-type experiments are comparable to Bell's inequalities.

My argument here is that, since the experimenters can never guarantee that any setting has been repeated, they can not compare their results with Bell's inequalities. In other words, if they collect 1000000 data points for the detector angle 90 degrees, the experimenters can not guarantee that they have repeated a single condition 1000000 times, rather than 1000000 different conditions exactly once each. And until they can do that, their results are not comparable to Bell's inequalities.

Of course they have control over their detector angle, but they have no clue about the detailed workings of the microscopic components. And guess what, photons interact at the microscopic level, not the macroscopic level, so their claims to having repeated the same experimental conditions multiple times are bogus.

JesseM said:
Given that my whole question was about what you meant by fair, this is not a helpful answer. The "fair sampling assumption" is a term that is used specifically in discussions of Aspect-type-experiments
You asked:
JesseM said:
What do they believe they know, and do you think the people doing calculations from Aspect type experiments are wrongly believing they know something analogous?
To which my answer was,
The doctors believe their sample is fair but the omniscient being knows that it is not. Have you ever heard of the "fair sampling assumption"?
I assumed it would be obvious to you that those doing Aspect-type experiments also believe their samples are fair, which is analogous to the doctors believing their sampling was fair, which directly answers your question!

JesseM said:
You didn't say anything about "fair" in the question I was responding to, you just asked if the setups were "systematically biased against some λs but favored other λs". I took that to mean that under the experimental setup, some λs were systematically less likely to occur than others (what else would 'systematically biased against some λs' mean?)
You must be kidding right? I don't know why I bother answering these silly questions. Look up the meaning of "biased", Einstein.

JesseM said:
As before, you need to explain what "the population" consists of. Again, does it consist of a hypothetical repetition of the same experimental conditions a much larger (near-infinite number of times)? If so, then by definition the actual sample could not be "systematically biased" compared to the larger population, since the larger population is defined in terms of the same experimental conditions. Perhaps you mean repeating similar experimental conditions but with ideal detector efficiency so all particle pairs emitted by the source are actually detected, which would be more like the meaning of the "fair sampling assumption"? If neither of these capture your meaning, please give your own definition of what you do mean by "the population".
You are confused. The population is the entirety of what actually exists of the "thing" under consideration (see http://en.wikipedia.org/wiki/Sampling_(statistics)#Population_definition). The "population" is not some hypothetical repetition of a large number of hypothetical individuals or "things".

You could have a 100% efficient detector and yet not have a fair sample. It is a mistake to assume that the "fair sampling assumption" has only to do with detector efficiency. You could have a 100% efficient detector and still not detect all the particles leaving the source, precisely because the whole of the experimental apparatus is responsible for the non-detection of some photons, not just the detector. All you need in order to get an unfair sample is an experimental apparatus which rejects photons based on their hidden properties and the experimental settings.
 
  • #96
Maaneli said:
If that's the case Jesse, can you tell us which parts of your definition of 'local realism' refer to 'locality', and which parts refer to 'realism', and whether these definitions are independent of each other? I think you know what I'm driving at ...
As I said before, my impression is that "local realism" is mostly used as just a composite phrase which refers to the type of local theory that Bell was discussing, "realism" doesn't need to have any independent meaning outside of its use in this phrase. If physicists called it "Bellian locality", would you require that "Bellian" have some independent definition beyond the definition that the whole phrase "Bellian locality" refers to the type of local theory Bell discussed?
 
  • #97
billschnieder said:
You could have a 100% efficient detector and yet not have a fair sample. It is a mistake to assume that the "fair sampling assumption" has only to do with detector efficiency. You could have a 100% efficient detector and still not detect all the particles leaving the source, precisely because the whole of the experimental apparatus is responsible for the non-detection of some photons, not just the detector. All you need in order to get an unfair sample is an experimental apparatus which rejects photons based on their hidden properties and the experimental settings.

This is true. In fact, this concept drives the De Raedt LR simulation model by introducing a time delay element which affects coincidence window size. Detector efficiency is not itself a factor. I think the net result is essentially the same whether it is detector efficiency or not.
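Here is a bare-bones sketch of that idea (my own construction, not the De Raedt model itself): every photon that reaches a detector is registered, yet whether a pair ends up counted as a coincidence depends jointly on its hidden variable and the local settings, so the λ-distribution of the counted pairs need not match the λ-distribution at the source.

[code]
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000

lam = rng.uniform(0.0, 2.0 * np.pi, n)      # hidden variable at the source (uniform)
a, b = 0.0, np.pi / 4                        # polarizer settings (arbitrary choice)

# Hypothetical coincidence criterion: a pair is counted only if lambda is
# "well aligned" with both local settings (a stand-in for time-window effects).
counted = (np.abs(np.cos(lam - a)) > 0.5) & (np.abs(np.cos(lam - b)) > 0.5)

print("fraction of emitted pairs counted:  ", round(counted.mean(), 3))
print("<cos(2*lambda)> over the source:    ", round(np.cos(2 * lam).mean(), 3))           # ~0 for a uniform source
print("<cos(2*lambda)> over counted pairs: ", round(np.cos(2 * lam[counted]).mean(), 3))  # clearly biased away from 0
[/code]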
 
  • #98
JesseM said:
It's still not clear what you mean by "the marginal probability of successful treatment".
billschnieder said:
A = Treatment A results in recovery from the disease
P(A) = marginal probability of recovery after administration of treatment A.
If it is the meaning of marginal probability you are unsure of, this will help (http://en.wikipedia.org/wiki/Conditional_probability)
I think you know perfectly well that I understand the difference between marginal and conditional as we have been using these terms extensively. It often seems like you may be intentionally playing one-upmanship games where you snip out all the context of some question or statement I ask and make it sound like I was confused about something very trivial...in this case the context made clear exactly what I found ambiguous in your terms:
For example, take my example where the actual experiment was done by sampling patients whose treatments had not been assigned randomly, but had been assigned by their doctors. In this case there might be a systematic bias where doctors are more likely to assign treatment A to patients with large kidney stones (because these patients have more severe symptoms and A is seen as a stronger treatment) and more likely to assign treatment B to patients with small ones. If we imagine repeating this experiment a near-infinite number of times with the same experimental conditions, then those same experimental conditions would still involve the same set of doctors assigning treatments to a near-infinite number of patients, so the systematic bias of the doctors would influence the final probabilities, and thus the "marginal probability of recovery with treatment B" would be higher because patients who receive treatment B are more likely to have small kidney stones, not because treatment B is causally more effective. On the other hand, if we imagine repeating a different experiment that adequately controls for all other variables (in the limit as the sample size approaches infinity), like one where the patients are randomly assigned to treatment A or B, then in this case the "marginal probability of recovery with treatment A" would be higher. So in this specific experiment where treatment was determined by the doctor, which would you say was higher, the marginal probability of recovery with treatment A or the marginal probability of recovery with treatment B? Without knowing the answer to this question I can't really understand what your terminology is supposed to mean.
This scenario, where there is a systematic bias in how doctors assign treatment which influences the observed correlations in frequencies between treatment and recovery in the sample, is a perfectly well-defined one (in fact it's exactly the one assumed in the wikipedia page on Simpson's paradox), so if your terms are well-defined you should be able to answer the question about whether treatment A or treatment B has a higher "marginal probability of successful treatment" in this particular scenario. So please answer it if you want to continue using this type of terminology.

In general I notice that you almost always refuse to answer simple questions I ask you about your position, or to address examples I give you, while you have no problem coming up with examples and commanding me to address them, or posing questions and then saying "answer yes or no". Again it seems like this may be a game of one-upmanship here, where you refuse to address anything I ask you to, but then forcefully demand that I address examples/questions of yours, perhaps to prove that you are in the "dominant" position and that I "can't tell you what to do". If you are playing this sort of macho game, count me out, I'm here to try to have an intellectual discussion which gets at the truth of these matters, not to prove what an alpha male I am by forcing everyone to submit to me. I will continue to make a good-faith effort to answer your questions and address your examples, as long as you will extend me the same courtesy (not asking you to answer every sentence of mine with a question mark, just the ones I specifically/repeatedly request that you address); but if you aren't willing to do this, I won't waste any more time on this discussion.
billschnieder said:
Probability means "Rational degree of belief" defined in the range from 0 to 1 such that 0 means uncertain and 1 means certain.
"Rational degree of belief" is a very ill-defined phrase. What procedure allows me to determine the degree to which it is rational to believe a particular outcome will occur in a given scenario?
billschnieder said:
Probability does not mean frequency, although probability can be calculated from frequencies.
You seem to be unaware of the debate surrounding the meaning of "probability", and of the fact that the "frequentist interpretation" is one of the most popular ways of defining its meaning. I already linked you to the wikipedia article on frequency probability which starts out by saying:
Frequency probability is the interpretation of probability that defines an event's probability as the limit of its relative frequency in a large number of trials. The development of the frequentist account was motivated by the problems and paradoxes of the previously dominant viewpoint, the classical interpretation. The shift from the classical view to the frequentist view represents a paradigm shift in the progression of statistical thought.
Under the wikipedia article on the classical interpretation they say:
The classical definition of probability was called into question by several writers of the nineteenth century, including John Venn and George Boole. The frequentist definition of probability became widely accepted as a result of their criticism, and especially through the works of R.A. Fisher.
Aside from wikipedia you might look at the Interpretations of Probability article from the Stanford Encyclopedia of Philosophy. In the section on frequency interpretations they start by discussing "finite frequentism" which just defines probability in terms of frequency on some finite number of real trials, so if you flip a coin 10 times and get 7 heads that would automatically imply the "probability" of getting heads was 0.7. This interpretation has some obvious problems, so that leads them to the meaning that I am using when I discuss "ideal probabilities", known as "infinite frequentism":
Some frequentists (notably Venn 1876, Reichenbach 1949, and von Mises 1957 among others), partly in response to some of the problems above, have gone on to consider infinite reference classes, identifying probabilities with limiting relative frequencies of events or attributes therein. Thus, we require an infinite sequence of trials in order to define such probabilities. But what if the actual world does not provide an infinite sequence of trials of a given experiment? Indeed, that appears to be the norm, and perhaps even the rule. In that case, we are to identify probability with a hypothetical or counterfactual limiting relative frequency. We are to imagine hypothetical infinite extensions of an actual sequence of trials; probabilities are then what the limiting relative frequencies would be if the sequence were so extended.
The article goes on to discuss the idea that this infinite series of trials should be defined as ones that all share some well-defined set of conditions, which Von Mises called "collectives — hypothetical infinite sequences of attributes (possible outcomes) of specified experiments that meet certain requirements ... The probability of an attribute A, relative to a collective ω, is then defined as the limiting relative frequency of A in ω."
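As a trivial illustration of the "hypothetical limiting relative frequency" idea (a biased coin standing in for "a specified experiment repeated under the same known conditions"; the 0.3 bias is just an assumption for illustration):

[code]
import numpy as np

rng = np.random.default_rng(4)
p_true = 0.3                                    # the 'probability' in the frequentist sense

for n in (10, 1_000, 100_000, 10_000_000):
    flips = rng.random(n) < p_true              # n repetitions under identical known conditions
    print(f"n = {n:>10,}   relative frequency = {flips.mean():.4f}")
# The relative frequency wanders for small n and settles toward 0.3 as n grows;
# the frequentist 'probability' is the limit of that sequence.
[/code]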

There are certainly other interpretations of probability, discussed in the article (you can find more extensive discussions of different interpretations in a book like Philosophical Theories of Probability--much of the chapter on the frequentist interpretation can be read on google books here). I think most of them would be difficult to apply to Bell's reasoning though. The more subjective definitions would have the problem that you'd have trouble deciding who is supposed to be the "subject" that defines probabilities dealing with λ (whose value on each trial, and even possible range of values, would be unknown to human experimenters). And the more "empirical" definitions which deal only with frequencies in actual observed trials would have the same sort of problem, since we don't actually observe the value of λ.

Anyway, do you think there is anything inherently incoherent about using the frequentist interpretation of probability when following Bell's reasoning? If so, what? And if you prefer a different interpretation of the meaning of "probability", can you give a definition less vague than "rational degree of belief", preferably by referring to some existing school of thought referred to in an article or book?
billschnieder said:
Probabilities can be assigned for many situations that can never be repeated.
But the frequentist interpretation is just about hypothetical repetitions, which can include purely hypothetical ideas like "turning back the clock" and running the same single experiment over again at the same moment (with observable conditions held the same but non-observed conditions, like the precise 'microstate' in a situation where we have only observed the 'macrostate', allowed to vary randomly) rather than actually repeating it at successively later times (which might be impossible because the original experiment destroyed the object we were experimenting on, say).
billschnieder said:
The domain of probability theory is to deal with uncertainty, indeterminacy and incomplete information.
Yes, and the idea is that we are considering a large set of trials in which the things we know are the same in every trial (like the 'macrostate' in statistical mechanics which just tells us the state of macro-variables like temperature and pressure) but the things we don't know vary randomly (like the 'microstate' in statistical mechanics which deals with facts like the precise position of every microscopic particle in the system). In classical statistical mechanics the "probability" that a system with a given macrostate at t0 will evolve to another given macrostate at t1 is determined by considering every possible microstate consistent with the original macrostate at t0 (the number of possible microstates for any human-scale system being astronomically large) and seeing what fraction will evolve into a microstate at t1 which is consistent with the macrostate whose probability we want to know. So here we are considering a situation in which we only know some limited information about the system, and are figuring out the probabilities by considering a near-infinite number of possible trials in which the unknown information (the precise microstate) might take many possible values. Do you think this is an improper way of calculating probabilities? It does seem to be directly analogous to how Bell was calculating the probabilities of seeing different values of observable variables by summing over all possible values of the hidden variables.
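To make the macrostate/microstate point concrete, here is a deliberately tiny toy of my own (not a claim about any real system): the micro-dynamics is a deterministic cyclic shift of a 12-cell lattice, the macrostate is just the number of 1s in the left half, and the macro-level transition probabilities are obtained purely by counting the microstates compatible with the initial macrostate--structurally the same move as summing over the unknown λ.

[code]
import itertools
from collections import Counter

N, k = 12, 4                                    # 12 cells; macrostate: 4 ones in the left half
half = N // 2

def left_count(state):
    return sum(state[:half])

def step(state):                                # deterministic micro-dynamics: cyclic shift right
    return state[-1:] + state[:-1]

# Enumerate every microstate compatible with the macrostate "left_count == k",
# weight them equally, and see which macrostate each one evolves into.
micro = [s for s in itertools.product((0, 1), repeat=N) if left_count(s) == k]
after = Counter(left_count(step(s)) for s in micro)

for m in sorted(after):
    print(f"P(left_count = {m} next | left_count = {k} now) = {after[m] / len(micro):.3f}")
[/code]

Different microstates compatible with the same macrostate evolve into different macrostates, and the macro-level "probability" comes entirely from counting them with equal weight, even though the micro-dynamics is deterministic.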
billschnieder said:
As such it makes not much sense to talk of "true probability".
It does in the frequentist interpretation.
JesseM said:
For example, take my example where the actual experiment was done by sampling patients whose treatments had not been assigned randomly, but had been assigned by their doctors. In this case there might be a systematic bias where doctors are more likely to assign treatment A to patients with large kidney stones (because these patients have more severe symptoms and A is seen as a stronger treatment) and more likely to assign treatment B to patients with small ones. If we imagine repeating this experiment a near-infinite number of times with the same experimental conditions, then those same experimental conditions would still involve the same set of doctors assigning treatments to a near-infinite number of patients, so the systematic bias of the doctors would influence the final probabilities, and thus the "marginal probability of recovery with treatment B" would be higher because patients who receive treatment B are more likely to have small kidney stones, not because treatment B is causally more effective.
billschnieder said:
So you agree that one man's marginal probability is another man's conditional probability.
The comment above says nothing of the sort. I'm just saying that to talk about "probability" in the frequentist interpretation you need to define the conditions that you are imagining being repeated in an arbitrarily large number of trials. And in the case above, the conditions include the fact that on every trial the treatment was assigned by a member of some set of doctors, which means that the marginal probability of (treatment B, recovery) is higher than the marginal probability of (treatment A, recovery) despite the fact that treatment B is not causally more effective (and I'm asking you whether in this scenario you'd say treatment B is 'marginally more effective', a question you haven't yet answered). Nowhere in the above am I saying anything about conditional probabilities.

Even if you don't want to think of probabilities in frequentist terms, would you agree that whenever we talk about "probabilities" we at least need to define a sample space (or probability space, which is just a sample space with probabilities on each element) which includes the conditions that could obtain on any possible trial in our experiment? If so, would you agree that when defining the sample space, we must define what process was used to assign treatments to patients, that a sample space where treatment was assigned by doctors would be a different one than a sample space where treatment was assigned by a random number generator on a computer?
billschnieder said:
That is the point I've been pointing out to you ad nauseam. Comparing probabilities defined on different probability spaces is guaranteed to produce paradoxes and spooky business.
I'm not asking you to "compare probabilities defined on different probability spaces", and Bell's argument doesn't require you to do that either. I'm just asking, for the probability space I outlined where treatments would be decided by doctors, whether you would say treatment B was "marginally more effective" if it turned out that the probability (or frequency) of (treatment B, recovery) was higher than the probability of (treatment A, recovery).
billschnieder said:
This is the point you still have not understood. It is not possible to control for "all other variables" which you know nothing about, even if it were possible to repeat the experiment an infinite number of times.
Sure it would be. If treatment was assigned by a random number generator, then in the limit as the number of trials went to infinity the probability of any correlation between traits of patients prior to treatment (like large kidney stones) and the treatment they were assigned would approach 0. This is just because there isn't any way the traits of patients would causally influence the random number generator so that there would be a systematic difference in the likelihood that patients with different versions of a trait (say, large vs. small kidney stones) would be assigned treatment A vs. treatment B. Do you disagree?
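This is easy to check numerically with the same invented kidney-stone parameters as before: under assignment by a random number generator the correlation between stone size and assigned treatment shrinks toward zero as N grows, while the doctor-bias assignment keeps it large no matter how big N gets.

[code]
import numpy as np

rng = np.random.default_rng(5)

def stone_treatment_corr(n, doctor_bias):
    large = rng.random(n) < 0.5
    p_A = np.where(large, 0.9, 0.1) if doctor_bias else np.full(n, 0.5)
    gets_A = rng.random(n) < p_A
    # correlation between stone size and assigned treatment
    return np.corrcoef(large.astype(float), gets_A.astype(float))[0, 1]

for n in (100, 10_000, 1_000_000):
    print(f"N = {n:>9,}   random assignment: {stone_treatment_corr(n, False):+.4f}"
          f"   doctor assignment: {stone_treatment_corr(n, True):+.4f}")
[/code]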

And again, if we are talking about Bell's argument it doesn't matter if there is such a correlation between the value of the hidden variable λ and the value of some measurable variable like A, you don't need to "control for" the value of the hidden variable in the sense you need to "control for" the value of a background variable like S={large kidney stones, small kidney stones} above. This is because the only need for that type of control is if you want to establish a causal relation between measurable variables like treatment and recovery, but Bell is not trying to establish a causal relation between spacelike-separated measurement outcomes, quite the opposite in fact. If you disagree it would help if you would respond to post #79 (you might not have even noticed that one because it was on an earlier page from my next post to you, #91, which you were responding to here), particularly the question I was asking here (which only requires a yes-or-no answer):
So, do you agree with my statement that of these two, Only the second sense of "fair sample" is relevant to Bell's argument?

To make the question more precise, suppose all of the following are true:

1. We repeat some experiment with particle pairs N times and observe frequencies of different values for measurable variables like A and B

2. N is sufficiently large such that, by the law of large numbers, there is only a negligible probability that these observed frequencies differ by more than some small amount [tex]\epsilon[/tex] from the ideal probabilities for the same measurable variables (the 'ideal probabilities' being the ones that would be seen if the experiment was repeated under the same observable conditions an infinite number of times)

3. Bell's reasoning is sound, so he is correct in concluding that in a universe obeying local realist laws (or with laws obeying 'local causality' as Maaneli prefers it), the ideal probabilities for measurable variables like A and B should obey various Bell inequalities

...would you agree that if all of these are true (please grant them for the sake of the argument when answering this question, even though I know you would probably disagree with 3 and perhaps also doubt it is possible in practice to pick a sufficiently large N so that 2 is true), then the experiment constitutes a valid test of local realism/local causality, so if we see a sizeable violation of Bell inequalities in our observed frequencies there is a high probability that local realism is false? Please give me a yes-or-no answer to this question.

If you say yes, it would be a valid test if 1-3 were true but you don't actually believe 2 and/or 3 could be true in reality, then we can focus on your arguments for disbelieving either of them. For example, for 2 you might claim that if N is not large enough that the frequencies of hidden-variable states are likely to match the ideal probabilities for these states (because the number of hidden-variable states can be vastly larger than any achievable N), then that also means the frequencies of values of observable variables like A and B aren't likely to match the ideal probabilities for these variables either. I would say that argument is based on a misconception about statistics, and point you to the example of the coin-flip-simulator and the more formal textbook equation in post #51 to explain why. But again, I think it will help focus the discussion if you first address the hypothetical question about whether we would have a valid test of local realism if 1-3 were all true.
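Here is a minimal sketch of that statistical point (my own numbers, in the spirit of the coin-flip example): the hidden variable ranges over roughly 4.6·10^18 equally likely values, so essentially no value of λ is ever repeated in the sample, yet the observed frequency of the observable outcome still converges on its ideal probability.

[code]
import numpy as np

rng = np.random.default_rng(6)
n = 100_000

# Hidden variable: a 62-bit integer, so ~4.6e18 possible values -- vastly more
# than the number of trials, so essentially no lambda value is ever repeated.
lam = rng.integers(0, 2**62, size=n)

# Observable outcome determined by lambda: A = +1 iff lambda falls in the lowest
# 70% of its range (an arbitrary assumption, so the ideal P(A=+1) is 0.7).
A = np.where(lam < 0.7 * 2**62, 1, -1)

print("distinct lambda values in the sample:", np.unique(lam).size, "out of", n)
print("observed frequency of A = +1:        ", np.mean(A == 1))   # close to 0.7 nonetheless
[/code]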
 
Last edited:
  • #99
(continued from previous post)
billschnieder said:
Without knowing everything relevant about "all other variables", your claim to be randomly selecting between them is no different from the case in which the doctors did the selection.
If I am not interested in the causal relation between treatment and recovery, but am only interested in the ideal correlations between treatment and recovery that would be seen if the same experiment (where doctors assigned treatment) were repeated an infinite number of times, then there is no need to try to guarantee that there is no correlation between background variables and treatment types. After all, even in the case of a near-infinite number of trials there might be a causal relation between the background variables and treatment types (like the doctors being more likely to assign treatment A to patients with worse symptoms), and all I am interested in is that the observed frequencies in my sample are close to the frequencies that would occur if the same experiment were repeated an infinite number of times under the same conditions (including the fact that doctors are assigning treatments). Do you disagree? Please tell me yes or no.

The Aspect experiment case is directly analogous. People doing Aspect-type experiments are not interested in showing a causal link between values of observable variables like the two measurement outcomes, they're just interested in trying to measure frequencies which are close to the ideal probabilities that would be seen if the same type of experiment were repeated an infinite number of times under the same observable conditions. After all, Bell's theorem concerns the ideal probabilities that would be seen in any experiment of this type assuming local realism is true, and in the frequentist interpretation (which as I've said seems to be the most natural way to interpret 'probabilities' in the context of Bell's proof) these ideal probabilities are just the frequencies that would be seen if the same experiment were repeated an infinite number of times in a local realist universe.
billschnieder said:
For example, imagine that I come to you today and say, I want to do an experiment on dolphins, give me a representative sample of 1000 dolphins. Without knowing anything about the details of my experiment, and all the parameters that affect the outcome of my experiment, could you explain to me how you would go about generating this "random list of dolphins", and tell me what "an infinite number of times" means in this context? If you could answer this question, it would help tremendously in understanding your point of view.
I can't answer without a definition of what you mean by "representative sample"--representative of what? You can only define "representative" by defining what conditions you are imagining the dolphins are being sampled in the ideal case of an infinite number of trials. If the fact that *I* am making the selection on a particular date (since the dolphin population may change depending on the date) is explicitly part of these conditions, then the infinite set of trials can be imagined by supposing that we are rewinding history to the same date for each new group of 1000 in the infinite collection, and having me make the selection on that date with the same specified observable conditions. So relative to this ideal infinite set, I can use whatever method I like to select my 1000, because the fact that it's up to me to decide how to pick them is explicitly part of the conditions.

On the other hand, if the ideal infinite set of trials is defined in such a way that every dolphin currently alive at this date should appear in the sample of 1000 with equal frequency in the infinite set, this will be more difficult, because whatever method I am using to pick dolphins might bias me to be less likely to pick some dolphins currently alive than others. But the Aspect-type experiments are more analogous to the first case, since Bell's reasoning applies to *any* experimental conditions that meet some basic criteria (like each experimenter choosing randomly between three detector settings, and the measurements being made at a spacelike separation), so as long as our particular experiment meets those basic criteria, we are free to define the ideal infinite set in terms of an infinite repetition of the particular observable conditions that *we* chose for our experiment.
billschnieder said:
And let us say, you came up with some list, and I did my experiment and came up with the number of dolphins passing some test (say N), and I calculated the relative frequency N/1000. Will you call this number the marginal probability of a dolphin passing my test? Or the conditional probability of the dolphin passing my test, conditioned on the method of selecting the list?
Again, in the frequentist interpretation, to talk about any "probability" you need to specify what known conditions obtain in your ideal infinite set; until you do, your question is not well-defined.
JesseM said:
Do you think there are situations where even hypothetically it doesn't make sense to talk about repetition under the same experimental conditions (so even a hypothetical 'God' would not be able to define 'probability' in this way?)
billschnieder said:
You can hypothesize anything you want. But not everything that you hypothesize can be compared with something that is actually done. To be able to compare an actual experiment to a hypothetical situation, you have to make sure all relevant entities in the hypothetical situation are present in the actual experiment and vice versa.
Yes, and we are free to define our ideal infinite set in terms of "the same observable conditions that held in the finite number of trials we actually performed".
billschnieder said:
For example, let us say your hypothetical situation assumes that an experimental condition is measured an infinite number of times (in your words, "hypothetically repeat the same experiment an infinite number of times", "hypothetical much larger group of tests repeated under the same experimental conditions"). Now suppose an experiment is actually performed in which the experimenters repeatedly measure at a given detector setting (say, a detector angle) a very large number of times.
No, the "conditions" are conditions for each individual trial, the number of trials isn't part of the "conditions" in the frequentist interpretation. So you specify some known conditions that should hold on a particular trial (say, a certain person flipping a certain coin in a certain room on a certain date), and then define the ideal probabilities as the frequencies that would be seen if you had an infinite set of trials where those conditions applied to every individual member of the set (while other unknown conditions, like the exact position of every air molecule in the room, can vary randomly)
billschnieder said:
My argument here is that, since the experimenters can never guarantee that any setting has been repeated, they can not compare their results with Bell's inequalities. In other words, if they collect 1000000 data points for the detector angle 90 degrees, the experimenters can not guarantee that they have repeated a single condition 1000000 times, rather than 1000000 different conditions exactly once each. And until they can do that, their results are not comparable to Bell's inequalities.

Of course they have control over their detector angle, but they have no clue about the detailed workings of the microscopic components. And guess what, photons interact at the microscopic level, not the macroscopic level, so their claims to having repeated the same experimental conditions multiple times are bogus.
But they don't need to "repeat a single condition", they just need to make sure the known conditions match those assumed in the ideal infinite case. As you said earlier, probability deals with situations of imperfect information, so we are holding knowns constant while allowing unknowns to vary. And as I said above in response to this comment, Bell's analysis is much like the analysis in statistical mechanics where we calculate the probabilities of one macrostate transitioning to another (with the macrostate defined in terms of measurable macro-variables like pressure and temperature) by imagining a near-infinite number of cases where the initial macrostate is held constant but the microstate (which gives the precise microscopic state of the system) is allowed to take all possible values consistent with the macrostate. Are you familiar with this type of reasoning in statistical mechanics, and if so do you have any problem with tests of the theory involving a number of trials much smaller than the number of possible initial microstates? Please give a direct answer to this question.
JesseM said:
Given that my whole question was about what you meant by fair, this is not a helpful answer. The "fair sampling assumption" is a term that is used specifically in discussions of Aspect-type experiments
billschnieder said:
You asked:
What do they believe they know, and do you think the people doing calculations from Aspect type experiments are wrongly believing they know something analogous?
To which my answer was,
The doctors believe their sample is fair but the omniscient being knows that it is not. Have you ever heard of the "fair sampling assumption"?
I assumed it would be obvious to you that those doing Aspect-type experiments also believe their samples are fair, which is analogous to the doctors believing their sampling was fair, which directly answers your question!
No, it doesn't answer my question at all, because in an earlier post (#72) I explained that I didn't know what you meant by "fair", giving two possible senses of this word, and you didn't tell me which sense you were using (or define another sense I didn't think of). If I don't know what you mean by the word "fair" and you refuse to explain, obviously no response of yours involving the word "fair" will qualify as an answer I know how to interpret.

Again, here were the two quite distinct meanings of "fair sample" I offered:
As I have tried to explain before, you are using "fair sample" in two quite distinct senses without seeming to realize it. One use of "fair" is that we are adequately controlling for other variables, so that the likelihood of having some specific value of another variable (like large kidney stones) is not correlated with the value the variables we're studying (like treatment type and recovery rate), so that any marginal correlation in the variables we're studying reflects an actual causal influence. Another use of "fair" is just that the frequencies in your sample are reasonably close to the probabilities that would be observed if the experiment were repeated under the same conditions with a sample size approaching infinity.
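To illustrate the difference between those two senses, here's a toy simulation (entirely my own construction with invented numbers, not anyone's real medical data). In the first scenario treatment assignment is random, so any marginal correlation between treatment and recovery in a small sample is a fluctuation that washes out at large n; in the second, a hidden "severity" variable influences both the assignment and the recovery rate, so the marginal correlation persists no matter how large the sample gets:

```python
import random

def trial(rng, confounded):
    """One 'patient': hidden severity influences recovery; if confounded,
    it also influences which treatment is assigned."""
    severe = rng.random() < 0.5
    if confounded:
        treated_b = rng.random() < (0.8 if severe else 0.2)
    else:
        treated_b = rng.random() < 0.5           # random assignment
    p_recover = 0.3 if severe else 0.8           # the treatment itself does nothing
    return treated_b, rng.random() < p_recover

def recovery_gap(n, confounded, seed=0):
    """Marginal difference in recovery rate between treatment B and not-B."""
    rng = random.Random(seed)
    stats = {True: [0, 0], False: [0, 0]}        # treated_b -> [recoveries, count]
    for _ in range(n):
        b, recovered = trial(rng, confounded)
        stats[b][0] += recovered
        stats[b][1] += 1
    rate = lambda k: stats[k][0] / stats[k][1]
    return rate(True) - rate(False)

if __name__ == "__main__":
    for n in (100, 1_000_000):
        print(n, "random assignment:", round(recovery_gap(n, False), 3),
                 " confounded:", round(recovery_gap(n, True), 3))
```

So a sample can be perfectly "fair" in the second (frequency) sense while being completely "unfair" in the first (confounding) sense.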
JesseM said:
You didn't say anything about "fair" in the question I was responding to, you just asked if the setups were "systematically biased against some λs but favored other λs". I took that to mean that under the experimental setup, some λs were systematically less likely to occur than others (what else would 'systematically biased against some λs' mean?)
billschnieder said:
You must be kidding right? I don't know why I bother answering these silly questions. Look up the meaning of "biased", Einstein.
"Einstein"? Like I said, if you want to have an intellectual discussion that's fine, but if you're going to descend to the level of middle school taunts I'm not going to continue. To identify what a "biased" sample is you have to identify the population you are drawing from--this page says "A biased sample is one in which the method used to create the sample results in samples that are systematically different from the population", so see the discussion of "population" below.
billschnieder said:
You are confused. The population is the entirety of what actually exists of the "thing" under consideration (see http://en.wikipedia.org/wiki/Sampling_(statistics)#Population_definition). The "population" is not some hypothetical repetition of a large number of hypothetical individuals or "things".
Maybe you should have read your own link more carefully; in some cases they do explicitly define it that way:
In other cases, our 'population' may be even less tangible. For example, Joseph Jagger studied the behaviour of roulette wheels at a casino in Monte Carlo, and used this to identify a biased wheel. In this case, the 'population' Jagger wanted to investigate was the overall behaviour of the wheel (i.e. the probability distribution of its results over infinitely many trials), while his 'sample' was formed from observed results from that wheel. Similar considerations arise when taking repeated measurements of some physical characteristic such as the electrical conductivity of copper.
And with "population" defined in this way, you have to define the conditions that we're imagining are being repeated "over infinitely many trials" before you can define what a "biased sample" is. So I thought it might have been that when you said "systematically biased against some λs but favored other λs", you might have been imagining the "ideal" set of trials would be one where each value of λ occurred with equal frequency, so any systematic departure from that would constitute sampling bias. If that's not what you meant, what did you mean? "Systematically biased" with respect to what "true" or "ideal" frequencies/probabilities?
billschnieder said:
You could have a 100% efficient detector and yet not have a fair sample.
If the "population" was explicitly defined in terms of an infinite set of repetitions of the exact observable experimental conditions you were using, then by definition your experimental conditions would not show any systematic bias and would thus be a "fair sample". And Bell's theorem doesn't assume anything too specific about the observed experimental conditions beyond some basic criteria like a spacelike separation between measurements (though it may be that 100% detector efficiency is needed as one of these criteria to make the proof rigorous, in which case a frequentist would only say that Bell's inequalities would be guaranteed to hold in an infinite repetition of an experiment with perfect detector efficiency, and any actual experiment with imperfect efficiency could be a biased sample relative to this infinite set)
billschnieder said:
All you need in order to get an unfair sample, is an experimental apparatus which rejects photons based on their hidden properties and experimental settings.
If the apparatus "rejects photons" then doesn't that mean you don't have "a 100% efficient detector", by definition? Or do you mean "rejects" in some different sense here, like the photons more likely to have one value of the measurable property A than another depending on "their hidden properties and experimental settings"?
 
  • #100
JesseM said:
As I said before, my impression is that "local realism" is mostly used as just a composite phrase which refers to the type of local theory that Bell was discussing, "realism" doesn't need to have any independent meaning outside of its use in this phrase.

I wasn't asking what your impression is about how the phrase is used by the broader physics community, but rather whether in *your* use of the phrase, you can precisely relate the words 'local' and 'realism' to your definitions which seem essentially identical to Bell's principle of local causality. Now, maybe it's just a 'composite phrase' for you as well. But in that case, I would still insist that it's problematic. If the words 'local' and 'realism' have any clear meaning, then it should be possible to identify the parts of the definitions to which they correspond, as well as how they relate to each other. After all, this is possible with Bell's phrase 'local causality', so why shouldn't it be possible with 'local realism'?

And actually, it is not true that the phrase 'local realism' is mostly used as just a composite phrase which refers to the type of local theory that Bell was discussing. If anything, it is mostly believed that locality and realism are two separate assumptions of Bell's theorem (as seen, for example, in the quotes of Zeilinger and Aspect that DrC posted), and many physicists claim that there is a choice to drop either locality or realism as a consequence of the violation of Bell inequalities. So which understanding do you hold? Do you think of locality and realism as two separate assumptions, or do you take Bell's view that only locality (as Bell defined it) and causality are assumed?
JesseM said:
If physicists called it "Bellian locality", would you require that "Bellian" have some independent definition beyond the definition that the whole phrase "Bellian locality" refers to the type of local theory Bell discussed?

The difference is that it is clear what 'Bellian' and 'locality' refers to in the phrase, 'Bellian locality', as well as how the meaning of the two words relate to each other. By contrast, it is not very clear with the phrase 'local realism'.
 
  • #101
Maaneli said:
I wasn't asking what your impression is about how the phrase is used by the broader physics community, but rather whether in *your* use of the phrase, you can precisely relate the words 'local' and 'realism' to your definitions which seem essentially identical to Bell's principle of local causality. Now, maybe it's just a 'composite phrase' for you as well.
Yeah, I would say that it's just been a composite phrase for me; I'm just using it to be understood by others, so as long as they understand I'm talking about the same type of local theory Bell was talking about, that's fine with me. I do think that it'd be possible to come up with an independent definition of "realism" that fits with what I mean by the composite phrase, though. For example, I might say that in a realist theory the universe should have a well-defined state at each moment in time, and then I could modify my point about deterministic vs. probabilistic local realist theories from post #63 on Understanding Bell's Mathematics:
In a realist theory, all physical facts--including macro-facts about "events" spread out over a finite swatch of time--ultimately reduce to some collection of instantaneous physical facts about the state of the universe at individual moments of time. Without loss of generality, then, let G and G' be two possibilities for what happens at some moment of time T.

--In a deterministic realist theory, if λ represents the instantaneous physical facts about the state of the universe at some time prior to T, then this allows us to determine whether G or G' occurs with probability one.

--An intrinsically probabilistic realist theory is a somewhat more subtle case, but for any probabilistic realist theory it should be possible to break it up into two parts: a deterministic mathematical rule that gives the most precise possible probability of the universe having a given state at time T based on information about states prior to T, and a random "seed" number whose value is combined with the probability to determine what state actually occurred at T. This "most precise possible probability" does not represent a subjective probability estimate made by any observer, but is the probability function that nature itself is using, the most accurate possible formulation of the "laws of physics" in a universe with intrinsically probabilistic laws.

For example, if the mathematical rule determines the probability of G is 70% and the probability of G' is 30%, then the random seed number could be a randomly-selected real number on the interval from 0 to 1, with a uniform probability distribution on that interval, so that if the number picked was somewhere between 0 and 0.7, that would mean G occurred, and if it was 0.7 or greater, G' occurred. The value of the random seed number associated with each probabilistic choice (like the choice between G and G') can be taken as truly random, uncorrelated with any other fact about the state at times earlier than T, while the precise probability of different events could be generated deterministically from a λ which contained information about all instantaneous states at times prior to T.
This definition doesn't require that there be a unique correct definition of simultaneity, just that it's possible to come up with a simultaneity convention such that either the deterministic or the probabilistic case above holds. Of course there might be universes where this wasn't true, which I might still want to call "realist", like one where backwards time travel was possible or some weird theory of quantum gravity where time was emergent rather than fundamental. This case is harder to deal with--maybe I'd just want to require that there is some well-defined mathematical set where each member of the set is a "possible universe" and that the laws of physics assign a well-defined probability distribution to the entire set. But at least the previous definition makes sense for a realist universe where it makes sense to order events in time, I think.
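To make the "deterministic rule plus random seed" picture from the quoted passage concrete, here's a minimal Python sketch (purely illustrative; the probability rule and the state labels are made up):

```python
import random

def law_of_nature(lam):
    """Deterministic mathematical rule: maps the complete prior state lam to
    the probability that outcome G (rather than G') occurs at time T.
    The specific numbers here are purely illustrative."""
    return 0.7 if lam == "example prior state" else 0.5

def what_happens_at_T(lam, seed):
    """Combine the deterministic probability with a truly random seed in [0, 1)
    that is uncorrelated with anything earlier than T."""
    p_G = law_of_nature(lam)
    return "G" if seed < p_G else "G'"

if __name__ == "__main__":
    rng = random.Random()                  # stands in for nature's intrinsic randomness
    outcomes = [what_happens_at_T("example prior state", rng.random())
                for _ in range(100_000)]
    print("fraction of G:", outcomes.count("G") / len(outcomes))  # ~0.7
```

The point is just that the rule mapping λ to a probability is itself deterministic; only the seed is irreducibly random, and it carries no correlation with anything earlier than T.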
Maaneli said:
If the words 'local' and 'realism' have any clear meaning, then it should be possible to identify the parts of the definitions to which they correspond, as well as how they relate to each other.
Why is it necessarily problematic to have a composite phrase where the individual parts don't have any clear independent meaning? If we combined the phrase into one word, "localrealism", would that somehow be more acceptable since we don't expect individual parts of a single word to have their own separate meanings?
Maaneli said:
And actually, it is not true that the phrase 'local realism' is mostly used as just a composite phrase which refers to the type of local theory that Bell was discussing. If anything, it is mostly believed that locality and realism are two separate assumptions of Bell's theorem (as seen, for example, in the quotes of Zeilinger and Aspect that DrC posted), and many physicists claim that there is a choice to drop either locality or realism as a consequence of the violation of Bell inequalities.
Fair enough, my understanding of how the phrase is used may be wrong. But I wonder if it's possible that a lot of physicists just haven't thought about the separate meanings very much, and assume they should have separate meanings even if they couldn't give a clear definition of what criteria a nonlocal realist theory would satisfy (even if they can point to specific 'know it when I see it' examples of nonlocal realist theories like Bohmian mechanics, and likewise can point to examples of ways of defining locality that they'd understand as nonrealist, like the definition from QFT).
Maaneli said:
The difference is that it is clear what 'Bellian' and 'locality' refers to in the phrase, 'Bellian locality', as well as how the meaning of the two words relate to each other. By contrast, it is not very clear with the phrase 'local realism'.
But "Bellian" has no independent physical definition here, it just refers to the views of a particular historical figure. For example, we wouldn't be able to make sense of the phrase "Bellian nonlocality", whereas I think you would probably require that if "realism" and "locality" have clear independent meanings, we should be able to define what set of theories would qualify as "non-local, realist".
 
  • #102
Maaneli said:
If anything, it is mostly believed that locality and realism are two separate assumptions of Bell's theorem (as seen, for example, in the quotes of Zeilinger and Aspect that DrC posted), and many physicists claim that there is a choice to drop either locality or realism as a consequence of the violation of Bell inequalities.

:smile:

I was re-reading some material today on our subject; are you familiar with the work of Michael Redhead?

Incompleteness, nonlocality, and realism (winner, 1988 LAKATOS AWARD FOR AN OUTSTANDING CONTRIBUTION TO THE PHILOSOPHY OF SCIENCE):
http://books.google.com/books?id=Yt...ead incompleteness&pg=PP1#v=onepage&q&f=false

He analyzed the issue of whether or not locality was a sufficient criterion for the Bell result. He also provides a number of definitions of Bell locality. Generally, he did not find that this was sufficient. However, the subject gets pretty complicated, as subtle changes in definitions can change your perspective. So I don't consider this work to answer the question in a manner exact enough to settle the issue finally.

The problem I have always had is that if you start with your local causality (or my locality, both of which to me are the same thing as Bell's 2) as a premise, you tend to see it as all which is needed for Bell. On the other hand, if you start with realism as a premise, you likewise tend to see IT as all which is needed for Bell. In other words, your starting point dictates some of your perspective. That is why I believe it is usually accepted that both local causality and realism are required for the Bell result. It is a tacit acknowledgment that there are some definitional issues involved.
 
  • #103
DrChinese said:
:smile:

I was re-reading some material today on our subject; are you familiar with the work of Michael Redhead?

Incompleteness, nonlocality, and realism (winner, 1988 LAKATOS AWARD FOR AN OUTSTANDING CONTRIBUTION TO THE PHILOSOPHY OF SCIENCE):
http://books.google.com/books?id=Yt...ead incompleteness&pg=PP1#v=onepage&q&f=false

He analyzed the issue of whether or not locality was a sufficient criterion for the Bell result. He also provides a number of definitions of Bell locality. Generally, he did not find that this was sufficient. However, the subject gets pretty complicated, as subtle changes in definitions can change your perspective. So I don't consider this work to answer the question in a manner exact enough to settle the issue finally.

I'm familiar with the work of Redhead, but I haven't looked at this paper yet. I wonder though if he refers at all to Bell's own definition of local causality from La Nouvelle?
DrChinese said:
The problem I have always had is that if you start with your local causality (or my locality, both of which to me are the same thing as Bell's 2) as a premise, you tend to see it as all which is needed for Bell. On the other hand, if you start with realism as a premise, you likewise tend to see IT as all which is needed for Bell.

I don't think this is the issue. The point I've been making is that Bell's local causality (which he shows is all that is needed for the derivation of his inequality, as well as the CHSH inequality) requires, as part of its definition, a notion of realism: specifically, the assumption of 'local beables'. If one rejects that notion of realism, then there simply is no Bell locality, and thus no Bell theorem. That's why 'realism' (assuming it refers to Bell's notion of realism) and Bell locality are not two separate assumptions, and why you cannot reject realism without rejecting Bell's theorem altogether.

Also, we know that realism is not a sufficient premise for Bell. After all, there exist theories of nonlocal (contextual or noncontextual) beables which violate the Bell inequalities.

DrChinese said:
In other words, your starting point dictates some of your perspective. That is why I believe it is usually accepted that both local causality and realism are required for the Bell result. It is a tacit acknowledgment that there are some definitional issues involved.

So I don't think it's a definitional issue. I have also never seen Zeilinger or Aspect or anyone else in quantum optics argue that Bell's local causality condition is insufficient for the derivation of the Bell inequality, nor have I ever seen any indication from any of those guys that they are even familiar with Bell's definition of local causality.
 
  • #104
JesseM said:
Yeah, I would say that it's just been a composite phrase for me, I'm just using it to be understood by others so as long as they understand I'm talking about the same type of local theory Bell was talking about, that's fine with me. I do think that it'd be possible to come up with an independent definition of "realism" that fits with what I mean by the composite phrase though. For example, I might say that in a realist theory the universe should have a well-defined state at each moment in time, and then I could modify my point about deterministic vs. probabilistic local realist theories from post #63 on Understanding Bell's Mathematics:

When you say that the universe should have a 'well-defined state at each moment in time', are you proposing that the universe is something objectively real? Are there local beables in your universe?

Also, it sounds like the very formulation of your definition of locality depends on your definition of realism, in which case, would you agree that if one rejects your definition of realism, then there can be no locality, and thus no means by which to derive the Bell inequality?
JesseM said:
Fair enough, my understanding of how the phrase is used may be wrong. But I wonder if it's possible that a lot of physicists just haven't thought about the separate meanings very much, and assume they should have separate meanings even if they couldn't give a clear definition of what criteria a nonlocal realist theory would satisfy (even if they can point to specific 'know it when I see it' examples of nonlocal realist theories like Bohmian mechanics, and likewise can point to examples of ways of defining locality that they'd understand as nonrealist, like the definition from QFT).

Yes, I think it's the case that a lot of physicists just haven't thought about the meanings very much. And this is a problem, IMO, because, on the basis of this lack of thinking and understanding, many physicists go so far as to say that the violation of the Bell inequalities implies that reality doesn't exist, or that the world is local but 'non-real', or that hidden-variable theories have been proven to be impossible. And then they go on to teach these misunderstandings to classes of graduate students and undergrads, and mislead those students into thinking that there is no ontological way to formulate QM, and that if they try to do so, then they are just being naive or are just in denial of the facts. They also use this misunderstanding to recommend the rejection of grant proposals for research on ontological formulations of QM, because they think that such formulations of QM have already been proven to be impossible.
JesseM said:
But "Bellian" has no independent physical definition here, it just refers to the views of a particular historical figure.

It doesn't matter whether it has a 'physical' definition or not. The point is that it's logically clear what 'Bellian' refers to and how it relates to the word 'locality'.
JesseM said:
For example, we wouldn't be able to make sense of the phrase "Bellian nonlocality",

Although I could have some plausible idea as to what 'Bellian nonlocality' might entail, it's true that I wouldn't be able to identify a precise definition that I could ascribe to Bell. And that's simply because Bell did not propose a definition of 'nonlocal causality'.
JesseM said:
whereas I think you would probably require that if "realism" and "locality" have clear independent meanings, we should be able to define what set of theories would qualify as "non-local, realist".

If the term 'nonlocal' already requires as part of its definition the assumption of 'realism' (assuming that realism has been precisely defined), then I would say that the phrase 'nonlocal realist' is redundant and potentially misleading. Instead, it would be sufficient to say "we should be able to define what set of theories would qualify as 'nonlocal [causal]'".
 
  • #105
Maaneli said:
When you say that the universe should have a 'well-defined state at each moment in time', are you proposing that the universe is something objectively real?
The universe has an objectively real state at every moment; I don't know what it would mean to say "the universe is something objectively real" apart from this.
Maaneli said:
Are there local beables in your universe?
The definition is broad--in some universes satisfying the definition it might be possible to break down the "state of the universe at a given moment" into a collection of local states of each point in space at that time, but in others it might not be.
Maaneli said:
Also, it sounds like the very formulation of your definition of locality depends on your definition of realism, in which case, would you agree that if one rejects your definition of realism, then there can be no locality, and thus no means by which to derive the Bell inequality?
Instead of saying that a theory is "local" or "nonlocal" as a whole, let's say that some mathematically-definable element of a theory is local if 1) all facts about the value of this element can be broken down into local facts about individual points in spacetime, and 2) the value at one point is only causally influenced by local facts in the point's past light cone. So in this case, if the "element" in the copenhagen interpretation is the density matrix for a measurement at a single place and time, then I think it'd make sense to say this element is local even if the copenhagen interpretation is not realist, and even though other elements of the theory like the wavefunction for entangled particles cannot really be considered local. In the case of a local realist theory, the "element" would consist of all objective facts about the state of the universe.
 
