Is Bell's Logic Aimed at Decoupling Correlated Outcomes in Quantum Mechanics?

  • Thread starter: Gordon Watson
  • Tags: Logic
Summary:
The discussion centers on Bell's logic and its implications for decoupling correlated outcomes in quantum mechanics. Participants debate whether Bell's approach effectively separates variables or if it fails to address the underlying correlations between outcomes. Some argue that Bell's logic appears flawed, suggesting that it leads to the conclusion that outcomes G and G' are not correlated under certain conditions. Others defend Bell's reasoning, asserting that it accurately reflects the complexities of hidden variables and correlations in quantum mechanics. The conversation highlights ongoing tensions in interpreting Bell's theorem and its relevance to the physics community.
  • #91
JesseM said:
I don't know what "marginally more effective" means--how would you define the term "marginal effectiveness"? Are you talking about the causal influence of the two treatments on recovery chances, the ideal marginal correlation between the different treatments and recovery chances that would be observed if the sample size were increased to infinity and the experiment repeated with the same conditions, the correlations in marginal frequencies seen in the actual experiment, or something else?
billschnieder said:
If P(A) represents the marginal probability of successful treatment with drug A and P(B) the marginal probability of successful treatment with drug B, then P(A) > P(B) means drug A is marginally more effective. This should have been obvious, unless you are just playing semantic games here.
It's still not clear what you mean by "the marginal probability of successful treatment". Do you agree that ideally "probability" can be defined by picking some experimental conditions you're repeating for each subject, and then allowing the number of subjects/trials to go to infinity (this is the frequentist interpretation of probability, its major rival being the Bayesian interpretation--see the article Frequentists and Bayesians)? If so, what would be the experimental conditions in question? Would they just involve replicating whatever experimental conditions were used in the actual experiment with 700 people, or would they involve some ideal experimental conditions which control for other variables like kidney stone size even if the actual experiment did not control for these things?

For example, take my example where the actual experiment was done by sampling patients whose treatments had not been assigned randomly, but had been assigned by their doctors. In this case there might be a systematic bias where doctors are more likely to assign treatment A to patients with large kidney stones (because these patients have more severe symptoms and A is seen as a stronger treatment) and more likely to assign treatment B to patients with small ones. If we imagine repeating this experiment a near-infinite number of times with the same experimental conditions, then those same experimental conditions would still involve the same set of doctors assigning treatments to a near-infinite number of patients, so the systematic bias of the doctors would influence the final probabilities, and thus the "marginal probability of recovery with treatment B" would be higher because patients who receive treatment B are more likely to have small kidney stones, not because treatment B is causally more effective. On the other hand, if we imagine repeating a different experiment that adequately controls for all other variables (in the limit as the sample size approaches infinity), like one where the patients are randomly assigned to treatment A or B, then in this case the "marginal probability of recovery with treatment A" would be higher. So in this specific experiment where treatment was determined by the doctor, which would you say was higher, the marginal probability of recovery with treatment A or the marginal probability of recovery with treatment B? Without knowing the answer to this question I can't really understand what your terminology is supposed to mean.
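As a rough illustration of the scenario just described, here is a minimal Python sketch with made-up assignment and recovery numbers (loosely following the Wikipedia kidney-stone example; none of the numbers come from the thread itself):

```python
import random

# Hypothetical sketch of the doctor-assignment bias described above: doctors give
# the "stronger" treatment A mostly to large-stone patients, so the marginal
# recovery frequency for B looks better even though A is assumed to be more
# effective within each stone-size group (all numbers here are illustrative).
random.seed(1)

def simulate(n_patients, randomized):
    counts = {"A": [0, 0], "B": [0, 0]}              # treatment -> [recovered, total]
    for _ in range(n_patients):
        large = random.random() < 0.5                # half the patients have large stones
        if randomized:
            t = random.choice("AB")                  # random number generator assigns treatment
        else:
            t = "A" if random.random() < (0.9 if large else 0.2) else "B"
        p = {("A", True): 0.73, ("B", True): 0.69,   # assumed recovery probabilities:
             ("A", False): 0.93, ("B", False): 0.87}[(t, large)]  # A better in both groups
        counts[t][0] += random.random() < p
        counts[t][1] += 1
    return {t: rec / max(tot, 1) for t, (rec, tot) in counts.items()}

print("doctor-assigned:", simulate(100_000, randomized=False))   # B's marginal rate is higher
print("randomized:     ", simulate(100_000, randomized=True))    # A's marginal rate is higher
```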
JesseM said:
the ideal marginal correlation between the different treatments and recovery chances that would be observed if the sample size were increased to infinity and the experiment repeated with the same conditions
billschnieder said:
There may be other factors, correlated with the factors directly influencing the rate of success, that thwart the experimenters' attempts to generate a fair sample, and unless they know about all these relationships they can never ensure a fair sample. Not every experiment can be repeated an infinite number of times with the same conditions.
In practice no experiment can be repeated an infinite number of times, obviously. Again, I'm talking about the definition of what we mean when we talk about "probability" (as distinct from frequencies on any finite number of trials, which can differ from the 'true probabilities' due to statistical fluctuations, like if a coin has a 0.5 probability of landing heads even though for any finite number of flips you are unlikely to find that exactly half the flips were heads). In the frequentist interpretation, probability is understood to mean the frequencies that would be seen if we could hypothetically repeat the same experiment an infinite number of times, even if this is impossible in practice. Do you think there are situations where even hypothetically it doesn't make sense to talk about repetition under the same experimental conditions (so even a hypothetical 'God' would not be able to define 'probability' in this way)? If so, perhaps you'd better give me your own definition of what you even mean by the word "probability", if you're not using the frequentist interpretation that I use.
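To make the finite-flips point concrete, here is a quick Python sketch (the 0.5 "true probability" is just the fair-coin assumption from the example):

```python
import random

# Sketch of the frequentist point above: the "true probability" 0.5 is the limiting
# relative frequency, while any finite run of flips typically deviates from it.
random.seed(2)
for n in (10, 100, 10_000, 1_000_000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    print(f"{n:>9} flips: observed frequency of heads = {heads / n:.4f}")
```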
JesseM said:
What do they believe they know, and do you think the people doing calculations from Aspect type experiments are wrongly believing they know something analogous?
billschnieder said:
The doctors believe their sample is fair but the omniscient being knows that it is not. Have you ever heard of the "fair sampling assumption"?
Given that my whole question was about what you meant by fair, this is not a helpful answer. The "fair sampling assumption" is a term that is used specifically in discussions of Aspect-type-experiments, it refers to the idea that the statistics of the measured particle pairs should be representative of the statistics in all pairs emitted by the source, which doesn't really help me understand what you mean by "the doctors believe their sample is fair" since I'm not sure what larger group you want the statistics in the sample to be representative of (again, whether it's a hypothetical much larger group of tests repeated under the same experimental conditions, or a hypothetical much larger group of tests done under some different experimental conditions that may be better-designed to control for other variables, or something else entirely if you aren't using the frequentist understanding of the meaning of 'probability')
billschnieder said:
Would you say then that the law of large numbers will work for a situation in which the experimental setups typically used for Bell-type experiments were systematically biased against some λs but favored other λs?
JesseM said:
Anyone with a good understanding of Bell's argument should see it's very obvious that λ is not equally likely to take a given value
billschnieder said:
Who said anything about different λs being equally likely? Fair does not mean all lambdas must be equally likely.
You didn't say anything about "fair" in the question I was responding to, you just asked if the setups were "systematically biased against some λs but favored other λs". I took that to mean that under the experimental setup, some λs were systematically less likely to occur than others (what else would 'systematically biased against some λs' mean?)
billschnieder said:
Fair in this case means that the likelihood of the lambdas in the sample is not significantly different from their likelihoods in the population.
As before, you need to explain what "the population" consists of. Again, does it consist of a hypothetical repetition of the same experimental conditions a much larger (near-infinite number of times)? If so, then by definition the actual sample could not be "systematically biased" compared to the larger population, since the larger population is defined in terms of the same experimental conditions. Perhaps you mean repeating similar experimental conditions but with ideal detector efficiency so all particle pairs emitted by the source are actually detected, which would be more like the meaning of the "fair sampling assumption"? If neither of these capture your meaning, please give your own definition of what you do mean by "the population".
 
  • #92
Eye_in_the_Sky said:
Here are some definitions to make clearer where I am coming from:

[The following definitions are ((slightly) adapted) from Ruta's post #556 in another thread.]

(i) Any two systems A and B, regardless of the history of their interactions, separated by a non-null spatiotemporal interval, have their own (separate) independent 'real' states such that the joint state is completely determined by the independent states.

(ii) Any two spacelike separated systems A and B are such that the separate 'real' state of A cannot be 'influenced' by events in the neighborhood of B, and vice versa.
____________________________________
____________________________________

Next.

(In a thread parallel to this one) JesseM wrote:
JesseM said:
In a local realist theory, all physical facts--including macro-facts about "events" spread out over a finite swath of space-time--ultimately reduce to some collection of local physical facts defined at individual points in spacetime (or individual 'bits' if spacetime is not infinitely divisible).
JesseM, it sounds (to me) like what you mean by "local realism" (part of which is expressed in the quote above) is equivalent (in meaning) to (i) and (ii) above.

... Do you agree with this assessment?
____________________________________
My quote above doesn't give the full definition of what I mean by "local realist", I would have to add a condition similar to your (ii), that spacelike-separated physical facts A and B cannot causally influence one another (which could be stated as the condition that if you know the complete set of local physical facts in the past light cone of A, and express that knowledge as the value of a variable λ whose different values correspond to all possible combinations of local physical facts in a past light cone, then P(A|λ)=P(A|λ,B)). With this addition, I'd say that what I mean by "local realism" does appear to be the same as your own (i) and (ii).
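For reference, the screening-off condition just described, and the factorization it yields, can be written compactly (a standard presentation; the local detector settings a and b are notation added here, not used explicitly above):

```latex
% Screening-off: given the complete past-light-cone data \lambda (plus the local
% setting a, notation added here for concreteness), learning the distant outcome B
% or setting b tells us nothing more about A:
P(A \mid a, \lambda) = P(A \mid a, b, B, \lambda)
% Applying the same condition to B yields the factorized joint probability that
% Bell's derivation uses:
P(A, B \mid a, b, \lambda) = P(A \mid a, \lambda)\, P(B \mid b, \lambda)
```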
 
  • #93
Eye_in_the_Sky said:
Maaneli, what you say in the above resonates strongly with how I am (currently) seeing it all.

What I see is that "Bell Locality" has in it two related yet (apparently) distinct senses of 'locality':

(i) "state separability" (for spatiotemporally separated systems) ,

and

(ii) "local causality" .

The first of these seems (to me) to correspond to the idea of "local beables".

... Would you say the same?

Yes!
 
  • #94
JesseM said:
With this addition, I'd say that what I mean by "local realism" does appear to be the same as your own (i) and (ii).

If that's the case Jesse, can you tell us which parts of your definition of 'local realism' refer to 'locality', and which parts refer to 'realism', and whether these definitions are independent of each other? I think you know what I'm driving at ...
 
  • #95
JesseM said:
It's still not clear what you mean by "the marginal probability of successful treatment".
A = Treatment A results in recovery from the disease
P(A) = marginal probability of recovery after administration of treatment A.
If it is the meaning of marginal probability you are unsure of, this will help (http://en.wikipedia.org/wiki/Conditional_probability)

Do you agree that ideally "probability" can be defined by picking some experimental conditions you're repeating for each subject, and then allowing the number of subjects/trials to go to infinity

Probability means "rational degree of belief", defined in the range from 0 to 1 such that 0 means uncertain and 1 means certain. Probability does not mean frequency, although probability can be calculated from frequencies. Probabilities can be assigned for many situations that can never be repeated. A rational degree of belief can be formed about a lot of situations that have never happened. The domain of probability theory is to deal with uncertainty, indeterminacy and incomplete information. As such, it does not make much sense to talk of a "true probability". You can talk of the "true relative frequency".

For example, take my example where the actual experiment was done by sampling patients whose treatments had not been assigned randomly, but had been assigned by their doctors. In this case there might be a systematic bias where doctors are more likely to assign treatment A to patients with large kidney stones (because these patients have more severe symptoms and A is seen as a stronger treatment) and more likely to assign treatment B to patients with small ones. If we imagine repeating this experiment a near-infinite number of times with the same experimental conditions, then those same experimental conditions would still involve the same set of doctors assigning treatments to a near-infinite number of patients, so the systematic bias of the doctors would influence the final probabilities, and thus the "marginal probability of recovery with treatment B" would be higher because patients who receive treatment B are more likely to have small kidney stones, not because treatment B is causally more effective.
So you agree that one man's marginal probability is another man's conditional probability, which is the point I've been pointing out to you ad nauseam. Comparing probabilities defined on different probability spaces is guaranteed to produce paradoxes and spooky business.
On the other hand, if we imagine repeating a different experiment that adequately controls for all other variables (in the limit as the sample size approaches infinity),
This is the point you still have not understood. It is not possible to control for "all other variables" which you know nothing about, even if it were possible to repeat the experiment an infinite number of times. Without knowing everything relevant about "all other variables", your claim to be randomly selecting between them is no different from the case in which the doctors did the selection. For example, imagine that I come to you today and say: I want to do an experiment on dolphins, give me a representative sample of 1000 dolphins. Without knowing anything about the details of my experiment, and all the parameters that affect the outcome of my experiment, could you explain to me how you would go about generating this "random list of dolphins"? Also tell me what "an infinite number of times" means in this context. If you could answer this question, it will help tremendously in understanding your point of view.

And let us say you came up with some list, and I did my experiment and came up with the number of dolphins passing some test (say N), and I calculated the relative frequency N/1000. Will you call this number the marginal probability of a dolphin passing my test? Or the conditional probability of the dolphin passing my test, conditioned on the method of selecting the list?

Do you think there are situations where even hypothetically it doesn't make sense to talk about repetition under the same experimental conditions (so even a hypothetical 'God' would not be able to define 'probability' in this way?)
You can hypothesize anything you want. But not everything that you hypothesize can be compared with something that is actually done. To be able to compare an actual experiment to a hypothetical situation, you have to make sure all relevant entities in the hypothetical situation are present in the actual experiment and vice versa.

For example, let us say your hypothetical situation assumes that an experimental condition is measured an infinite number of times (in your words, "hypothetically repeat the same experiment an infinite number of times", "hypothetical much larger group of tests repeated under the same experimental conditions"). Then suppose an experiment is actually performed in which the experimenters repeatedly measure at a given detector setting (say, a detector angle) a very large number of times.

Your argument here is that, since the hypothetical situation requires repeating the same conditions multiple times and the experimenters have done that, then their results are comparable. In other words, according to you, the results of Aspect-type experiments are comparable to Bell's inequalities.

My argument here is that, since the experimenters can never guarantee that any setting has been repeated, they can not compare their results with Bell's inequalities. In other words, if they collect 1000000 data points for the detector angle 90 degrees, the experimenters can not guarantee that they have repeated a single condition 1000000 times, rather than 1000000 different conditions exactly once each. And until they can do that, their results are not comparable to Bell's inequalities.

Of course they have control over their detector angle, but they have no clue about the detailed workings of the microscopic components. And guess what, photons interact at the microscopic level, not the macroscopic level, so their claims to having repeated the same experimental conditions multiple times are bogus.

JesseM said:
Given that my whole question was about what you meant by fair, this is not a helpful answer. The "fair sampling assumption" is a term that is used specifically in discussions of Aspect-type-experiments
You asked:
JesseM said:
What do they believe they know, and do you think the people doing calculations from Aspect type experiments are wrongly believing they know something analogous?
To which my answer was,
The doctors believe their sample is fair but the omniscient being knows that it is not. Have you ever heard of the "fair sampling assumption"?
I assumed it would be obvious to you that those doing Aspect-type experiments also believe their samples are fair, which is analogous to the doctors believing their sampling was fair, which directly answers your question!

JesseM said:
You didn't say anything about "fair" in the question I was responding to, you just asked if the setups were "systematically biased against some λs but favored other λs". I took that to mean that under the experimental setup, some λs were systematically less likely to occur than others (what else would 'systematically biased against some λs' mean?)
You must be kidding right? I don't know why I bother answering these silly questions. Look up the meaning of "biased", Einstein.

JesseM said:
As before, you need to explain what "the population" consists of. Again, does it consist of a hypothetical repetition of the same experimental conditions a much larger (near-infinite number of times)? If so, then by definition the actual sample could not be "systematically biased" compared to the larger population, since the larger population is defined in terms of the same experimental conditions. Perhaps you mean repeating similar experimental conditions but with ideal detector efficiency so all particle pairs emitted by the source are actually detected, which would be more like the meaning of the "fair sampling assumption"? If neither of these capture your meaning, please give your own definition of what you do mean by "the population".
You are confused. The population is the entirety of what actually exists of the "thing" under consideration (see http://en.wikipedia.org/wiki/Sampling_(statistics)#Population_definition). The "population" is not some hypothetical repetition of a large number of hypothetical individuals or "things".

You could have a 100% efficient detector and yet not have a fair sample. It is a mistake to assume that the "fair sampling assumption" has only to do with detector efficiency. You could have a 100% efficient detector and not detect all the particles leaving the source, precisely because the whole of the experimental apparatus is responsible for non-detection of some photons, not just the detector. All you need in order to get an unfair sample is an experimental apparatus which rejects photons based on their hidden properties and the experimental settings.
 
  • #96
Maaneli said:
If that's the case Jesse, can you tell us which parts of your definition of 'local realism' refer to 'locality', and which parts refer to 'realism', and whether these definitions are independent of each other? I think you know what I'm driving at ...
As I said before, my impression is that "local realism" is mostly used as just a composite phrase which refers to the type of local theory that Bell was discussing, "realism" doesn't need to have any independent meaning outside of its use in this phrase. If physicists called it "Bellian locality", would you require that "Bellian" have some independent definition beyond the definition that the whole phrase "Bellian locality" refers to the type of local theory Bell discussed?
 
  • #97
billschnieder said:
You could have a 100% efficient detector and yet not have a fair sample. It is a mistake to assume that the "fair sampling assumption" has only to do with detector efficiency. You could have a 100% efficient detector and not detect all the particles leaving the source, precisely because the whole of the experimental apparatus is responsible for non-detection of some photons, not just the detector. All you need in order to get an unfair sample is an experimental apparatus which rejects photons based on their hidden properties and the experimental settings.

This is true. In fact, this concept drives the De Raedt LR simulation model by introducing a time delay element which affects coincidence window size. Detector efficiency is not itself a factor. I think the net result is essentially the same whether it is detector efficiency or not.
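A minimal toy sketch of that idea in Python (this is not the De Raedt model or any real apparatus; the cosine-based outcome and keep rules are invented purely to show how setting-and-λ-dependent rejection skews the detected subsample):

```python
import math, random

# Toy sketch of the point above: an apparatus whose accept/reject decision depends on
# the hidden variable and the local setting can make the detected subsample
# unrepresentative of all emitted pairs, even with ideal detectors downstream.
# This is NOT the De Raedt model or any real apparatus; the cosine rules are invented.
random.seed(5)

def trial(a, b):
    lam = random.uniform(0, 2 * math.pi)              # shared hidden variable
    A = 1 if math.cos(lam - a) >= 0 else -1           # local deterministic outcomes
    B = -1 if math.cos(lam - b) >= 0 else 1
    # "rejection": the pair is kept with a probability that depends on lam and the settings
    keep = random.random() < abs(math.cos(2 * (lam - a))) * abs(math.cos(2 * (lam - b)))
    return A, B, keep

def correlation(a, b, n=200_000, detected_only=True):
    s = c = 0
    for _ in range(n):
        A, B, keep = trial(a, b)
        if keep or not detected_only:
            s += A * B
            c += 1
    return s / c if c else 0.0

a, b = 0.0, math.pi / 8
print("all emitted pairs:  ", correlation(a, b, detected_only=False))
print("detected subsample: ", correlation(a, b, detected_only=True))
```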
 
  • #98
JesseM said:
It's still not clear what you mean by "the marginal probability of successful treatment".
billschnieder said:
A = Treatment A results in recovery from the disease
P(A) = marginal probability of recovery after administration of treatment A.
If it is the meaning of marginal probability you are unsure of, this will help (http://en.wikipedia.org/wiki/Conditional_probability)
I think you know perfectly well that I understand the difference between marginal and conditional as we have been using these terms extensively. It often seems like you may be intentionally playing one-upmanship games where you snip out all the context of some question or statement I ask and make it sound like I was confused about something very trivial...in this case the context made clear exactly what I found ambiguous in your terms:
For example, take my example where the actual experiment was done by sampling patients whose treatments had not been assigned randomly, but had been assigned by their doctors. In this case there might be a systematic bias where doctors are more likely to assign treatment A to patients with large kidney stones (because these patients have more severe symptoms and A is seen as a stronger treatment) and more likely to assign treatment B to patients with small ones. If we imagine repeating this experiment a near-infinite number of times with the same experimental conditions, then those same experimental conditions would still involve the same set of doctors assigning treatments to a near-infinite number of patients, so the systematic bias of the doctors would influence the final probabilities, and thus the "marginal probability of recovery with treatment B" would be higher because patients who receive treatment B are more likely to have small kidney stones, not because treatment B is causally more effective. On the other hand, if we imagine repeating a different experiment that adequately controls for all other variables (in the limit as the sample size approaches infinity), like one where the patients are randomly assigned to treatment A or B, then in this case the "marginal probability of recovery with treatment A" would be higher. So in this specific experiment where treatment was determined by the doctor, which would you say was higher, the marginal probability of recovery with treatment A or the marginal probability of recovery with treatment B? Without knowing the answer to this question I can't really understand what your terminology is supposed to mean.
This scenario, where there is a systematic bias in how doctors assign treatment which influences the observed correlations in frequencies between treatment and recovery in the sample, is a perfectly well-defined one (in fact it's exactly the one assumed in the wikipedia page on Simpson's paradox), so if your terms are well-defined you should be able to answer the question about whether treatment A or treatment B has a higher "marginal probability of successful treatment" in this particular scenario. So please answer it if you want to continue using this type of terminology.

In general I notice that you almost always refuse to answer simple questions I ask you about your position, or to address examples I give you, while you have no problem coming up with examples and commanding me to address them, or posing questions and then saying "answer yes or no". Again it seems like this may be a game of one-upmanship here, where you refuse to address anything I ask you to, but then forcefully demand that I address examples/questions of yours, perhaps to prove that you are in the "dominant" position and that I "can't tell you what to do". If you are playing this sort of macho game, count me out, I'm here to try to have an intellectual discussion which gets at the truth of these matters, not to prove what an alpha male I am by forcing everyone to submit to me. I will continue to make a good-faith effort to answer your questions and address your examples, as long as you will extend me the same courtesy (not asking you to answer every sentence of mine with a question mark, just the ones I specifically/repeatedly request that you address); but if you aren't willing to do this, I won't waste any more time on this discussion.
billschnieder said:
Probability means "Rational degree of belief" defined in the range from 0 to 1 such that 0 means uncertain and 1 means certain.
"Rational degree of belief" is a very ill-defined phrase. What procedure allows me to determine the degree to which it is rational to believe a particular outcome will occur in a given scenario?
billschnieder said:
Probability does not mean frequency, although probability can be calculated from frequencies.
You seem to be unaware of the debate surrounding the meaning of "probability", and of the fact that the "frequentist interpretation" is one of the most popular ways of defining its meaning. I already linked you to the wikipedia article on frequency probability which starts out by saying:
Frequency probability is the interpretation of probability that defines an event's probability as the limit of its relative frequency in a large number of trials. The development of the frequentist account was motivated by the problems and paradoxes of the previously dominant viewpoint, the classical interpretation. The shift from the classical view to the frequentist view represents a paradigm shift in the progression of statistical thought.
Under the wikipedia article on the classical interpretation they say:
The classical definition of probability was called into question by several writers of the nineteenth century, including John Venn and George Boole. The frequentist definition of probability became widely accepted as a result of their criticism, and especially through the works of R.A. Fisher.
Aside from wikipedia you might look at the Interpretations of Probability article from the Stanford Encyclopedia of Philosophy. In the section on frequency interpretations they start by discussing "finite frequentism" which just defines probability in terms of frequency on some finite number of real trials, so if you flip a coin 10 times and get 7 heads that would automatically imply the "probability" of getting heads was 0.7. This interpretation has some obvious problems, so that leads them to the meaning that I am using when I discuss "ideal probabilities", known as "infinite frequentism":
Some frequentists (notably Venn 1876, Reichenbach 1949, and von Mises 1957 among others), partly in response to some of the problems above, have gone on to consider infinite reference classes, identifying probabilities with limiting relative frequencies of events or attributes therein. Thus, we require an infinite sequence of trials in order to define such probabilities. But what if the actual world does not provide an infinite sequence of trials of a given experiment? Indeed, that appears to be the norm, and perhaps even the rule. In that case, we are to identify probability with a hypothetical or counterfactual limiting relative frequency. We are to imagine hypothetical infinite extensions of an actual sequence of trials; probabilities are then what the limiting relative frequencies would be if the sequence were so extended.
The article goes on to discuss the idea that this infinite series of trials should be defined as ones that all share some well-defined set of conditions, which Von Mises called "collectives — hypothetical infinite sequences of attributes (possible outcomes) of specified experiments that meet certain requirements ... The probability of an attribute A, relative to a collective ω, is then defined as the limiting relative frequency of A in ω."
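In symbols, the limiting-relative-frequency definition quoted above reads as follows (a standard statement; n_A(n) denotes the count of attribute A in the first n trials of the collective ω):

```latex
% The limiting-relative-frequency definition quoted above, in symbols: with
% n_A(n) the number of occurrences of attribute A in the first n trials of the
% collective \omega,
P(A \mid \omega) = \lim_{n \to \infty} \frac{n_A(n)}{n}
```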

There are certainly other interpretations of probability, discussed in the article (you can find more extensive discussions of different interpretations in a book like Philosophical Theories of Probability--much of the chapter on the frequentist interpretation can be read on google books here). I think most of them would be difficult to apply to Bell's reasoning though. The more subjective definitions would have the problem that you'd have trouble saying who is supposed to be the "subject" that defines probabilities dealing with λ (whose value on each trial, and even possible range of values, would be unknown to human experimenters). And the more "empirical" definitions which deal only with frequencies in actual observed trials would have the same sort of problem, since we don't actually observe the value of λ.

Anyway, do you think there is anything inherently incoherent about using the frequentist interpretation of probability when following Bell's reasoning? If so, what? And if you prefer a different interpretation of the meaning of "probability", can you give a definition less vague than "rational degree of belief", preferably by referring to some existing school of thought referred to in an article or book?
billschnieder said:
Probabilities can be assigned for many situations that can never be repeated.
But the frequentist interpretation is just about hypothetical repetitions, which can include purely hypothetical ideas like "turning back the clock" and running the same single experiment over again at the same moment (with observable conditions held the same but non-observed conditions, like the precise 'microstate' in a situation where we have only observed the 'macrostate', allowed to vary randomly) rather than actually repeating it at successively later times (which might be impossible because the original experiment destroyed the object we were experimenting on, say).
billschnieder said:
The domain of probability theory is to deal with uncertainty, indeterminacy and incomplete information.
Yes, and the idea is that we are considering a large set of trials in which the things we know are the same in every trial (like the 'macrostate' in statistical mechanics which just tells us the state of macro-variables like temperature and pressure) but the things we don't know vary randomly (like the 'microstate' in statistical mechanics which deals with facts like the precise position of every microscopic particle in the system). In classical statistical mechanics the "probability" that a system with a given macrostate at t0 will evolve to another given macrostate at t1 is determined by considering every possible microstate consistent with the original macrostate at t0 (the number of possible microstates for any human-scale system being astronomically large) and seeing what fraction will evolve into a microstate at t1 which is consistent with the macrostate whose probability we want to know. So here we are considering a situation in which we only know some limited information about the system, and are figuring out the probabilities by considering a near-infinite number of possible trials in which the unknown information (the precise microstate) might take many possible values. Do you think this is an improper way of calculating probabilities? It does seem to be directly analogous to how Bell was calculating the probabilities of seeing different values of observable variables by summing over all possible values of the hidden variables.
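A toy Python sketch of that macrostate/microstate counting (the bit-flip dynamics below are invented for illustration and are not any physical law; the point is only the "enumerate all microstates consistent with the macrostate, then count" procedure):

```python
from itertools import combinations

# Toy sketch of the macrostate/microstate counting described above. Microstate: a
# tuple of n bits; macrostate: the number of 1s. The update rule (XOR each bit with
# its right neighbour, cyclically) is invented purely for illustration -- the point
# is the "enumerate all microstates consistent with the macrostate, then count the
# fraction evolving into the target macrostate" procedure.
n = 12

def evolve(state):
    return tuple(state[i] ^ state[(i + 1) % n] for i in range(n))

def transition_probability(m0, m1):
    micro = [tuple(1 if i in ones else 0 for i in range(n))
             for ones in combinations(range(n), m0)]   # all microstates with macrostate m0
    hits = sum(sum(evolve(s)) == m1 for s in micro)    # how many land in macrostate m1
    return hits / len(micro)

print(transition_probability(6, 6))   # P(macrostate 6 at t1 | macrostate 6 at t0)
```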
billschnieder said:
As such it makes not much sense to talk of "true probability".
It does in the frequentist interpretation.
JesseM said:
For example, take my example where the actual experiment was done by sampling patients whose treatments had not been assigned randomly, but had been assigned by their doctors. In this case there might be a systematic bias where doctors are more likely to assign treatment A to patients with large kidney stones (because these patients have more severe symptoms and A is seen as a stronger treatment) and more likely to assign treatment B to patients with small ones. If we imagine repeating this experiment a near-infinite number of times with the same experimental conditions, then those same experimental conditions would still involve the same set of doctors assigning treatments to a near-infinite number of patients, so the systematic bias of the doctors would influence the final probabilities, and thus the "marginal probability of recovery with treatment B" would be higher because patients who receive treatment B are more likely to have small kidney stones, not because treatment B is causally more effective.
billschnieder said:
So you agree that one man's marginal probability is another man's conditional probability.
The comment above says nothing of the sort. I'm just saying that to talk about "probability" in the frequentist interpretation you need to define the conditions that you are imagining being repeated in an arbitrarily large number of trials. And in the case above, the conditions include the fact that on every trial the treatment was assigned by a member of some set of doctors, which means that the marginal probability of (treatment B, recovery) is higher than the marginal probability of (treatment A, recovery) despite the fact that treatment B is not causally more effective (and I'm asking you whether in this scenario you'd say treatment B is 'marginally more effective', a question you haven't yet answered). Nowhere in the above am I saying anything about conditional probabilities.

Even if you don't want to think of probabilities in frequentist terms, would you agree that whenever we talk about "probabilities" we at least need to define a sample space (or probability space, which is just a sample space with probabilities on each element) which includes the conditions that could obtain on any possible trial in our experiment? If so, would you agree that when defining the sample space, we must define what process was used to assign treatments to patients, that a sample space where treatment was assigned by doctors would be a different one than a sample space where treatment was assigned by a random number generator on a computer?
billschnieder said:
Which is the point I've been pointing out to you Ad-nauseam. Comparing probabilities defined on different probability spaces is guaranteed to produce paradoxes and spooky business.
I'm not asking you to "compare probabilities defined on different probability spaces", and Bell's argument doesn't require you to do that either. I'm just asking, for the probability space I outlined where treatments would be decided by doctors, whether you would say treatment B was "marginally more effective" if it turned out that the probability (or frequency) of (treatment B, recovery) was higher than the probability of (treatment A, recovery).
billschnieder said:
This is the point you still have not understood. It is not possible to control for "all other variables" which you know nothing about, even if it were possible to repeat the experiment an infinite number of times.
Sure it would be. If treatment was assigned by a random number generator, then in the limit as the number of trials went to infinity the probability of any correlation between traits of patients prior to treatment (like large kidney stones) and the treatment they were assigned would approach 0. This is just because there isn't any way the traits of patients would causally influence the random number generator so that there would be a systematic difference in the likelihood that patients with different versions of a trait (say, large vs. small kidney stones) would be assigned treatment A vs. treatment B. Do you disagree?
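Here is a quick Python sketch of that limiting claim (again with made-up numbers): under assignment by a random number generator, the fraction of large-stone patients in the treatment-A group approaches the overall fraction as the number of trials grows.

```python
import random

# Sketch of the limiting claim above (made-up numbers): when a random number
# generator assigns treatment, the fraction of large-stone patients in the
# treatment-A group approaches the overall fraction as the number of trials grows.
random.seed(3)
for n in (100, 10_000, 1_000_000):
    large_in_A = total_A = large_overall = 0
    for _ in range(n):
        large = random.random() < 0.5
        if random.choice("AB") == "A":        # assignment ignores the trait entirely
            total_A += 1
            large_in_A += large
        large_overall += large
    gap = abs(large_in_A / total_A - large_overall / n)
    print(f"n = {n:>9}: |P(large | A) - P(large)| = {gap:.4f}")   # shrinks toward 0
```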

And again, if we are talking about Bell's argument it doesn't matter if there is such a correlation between the value of the hidden variable λ and the value of some measurable variable like A, you don't need to "control for" the value of the hidden variable in the sense you need to "control for" the value of a background variable like S={large kidney stones, small kidney stones} above. This is because the only need for that type of control is if you want to establish a causal relation between measurable variables like treatment and recovery, but Bell is not trying to establish a causal relation between spacelike-separated measurement outcomes, quite the opposite in fact. If you disagree it would help if you would respond to post #79 (you might not have even noticed that one because it was on an earlier page from my next post to you, #91, which you were responding to here), particularly the question I was asking here (which only requires a yes-or-no answer):
So, do you agree with my statement that, of these two, only the second sense of "fair sample" is relevant to Bell's argument?

To make the question more precise, suppose all of the following are true:

1. We repeat some experiment with particle pairs N times and observe frequencies of different values for measurable variables like A and B

2. N is sufficiently large such that, by the law of large numbers, there is only a negligible probability that these observed frequencies differ by more than some small amount ε from the ideal probabilities for the same measurable variables (the 'ideal probabilities' being the ones that would be seen if the experiment were repeated under the same observable conditions an infinite number of times)

3. Bell's reasoning is sound, so he is correct in concluding that in a universe obeying local realist laws (or with laws obeying 'local causality' as Maaneli prefers it), the ideal probabilities for measurable variables like A and B should obey various Bell inequalities

...would you agree that if all of these are true (please grant them for the sake of the argument when answering this question, even though I know you would probably disagree with 3 and perhaps also doubt it is possible in practice to pick a sufficiently large N so that 2 is true), then the experiment constitutes a valid test of local realism/local causality, so if we see a sizeable violation of Bell inequalities in our observed frequencies there is a high probability that local realism is false? Please give me a yes-or-no answer to this question.

If you say yes, it would be a valid test if 1-3 were true but you don't actually believe 2 and/or 3 could be true in reality, then we can focus on your arguments for disbelieving either of them. For example, for 2 you might claim that if N is not large enough that the frequencies of hidden-variable states are likely to match the ideal probabilities for these states (because the number of hidden-variable states can be vastly larger than any achievable N), then that also means the frequencies of values of observable variables like A and B aren't likely to match the ideal probabilities for these variables either. I would say that argument is based on a misconception about statistics, and point you to the example of the coin-flip-simulator and the more formal textbook equation in post #51 to explain why. But again, I think it will help focus the discussion if you first address the hypothetical question about whether we would have a valid test of local realism if 1-3 were all true.
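Here is a rough numerical sketch of condition 2 (this is not the coin-flip simulator from post #51; the ideal probability 0.25, ε = 0.01, and N = 20,000 are arbitrary choices for illustration):

```python
import random

# Rough numerical sketch of condition 2 above (not the coin-flip simulator from
# post #51): the ideal probability 0.25, epsilon = 0.01, and N = 20,000 are
# arbitrary choices for illustration.
random.seed(4)
p_ideal, epsilon, N, repetitions = 0.25, 0.01, 20_000, 500

within = 0
for _ in range(repetitions):
    freq = sum(random.random() < p_ideal for _ in range(N)) / N   # observed frequency
    within += abs(freq - p_ideal) <= epsilon
print(f"{within / repetitions:.3f} of repetitions landed within epsilon of the ideal probability")
```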
 
  • #99
(continued from previous post)
billschnieder said:
Without knowing everything relevant about "all other variables", your claim to be randomly selecting between them is no different from the case in which the doctors did the selection.
If I am not interested in the causal relation between treatment and recovery, but am only interested in the ideal correlations between treatment and recovery that would be seen if the same experiment (where doctors assigned treatment) were repeated an infinite number of times, then there is no need to try to guarantee that there is no correlation between background variables and treatment types. After all, even in the case of a near-infinite number of trials there might be a causal relation between the background variables and treatment types (like the doctors being more likely to assign treatment A to patients with worse symptoms), and all I am interested in is that the observed frequencies in my sample are close to the frequencies that would occur if the same experiment were repeated an infinite number of times under the same conditions (including the fact that doctors are assigning treatments). Do you disagree? Please tell me yes or no.

The Aspect experiment case is directly analogous. People doing Aspect-type experiments are not interested in showing a causal link between values of observable variables like the two measurement outcomes, they're just interested in trying to measure frequencies which are close to the ideal probabilities that would be seen if the same type of experiment were repeated an infinite number of times under the same observable conditions. After all, Bell's theorem concerns the ideal probabilities that would be seen in any experiment of this type assuming local realism is true, and in the frequentist interpretation (which as I've said seems to be the most natural way to interpret 'probabilities' in the context of Bell's proof) these ideal probabilities are just the frequencies that would be seen if the same experiment were repeated an infinite number of times in a local realist universe.
billschnieder said:
For example, imagine that I come to you today and say, I want to do an experiment on dolphins, give me a representative sample of 1000 dolphins. Without knowing anything about the details of my experiment, and all the parameters that affect the outcome of my experiment, could you explain to me how you will go about generating this "random list of dolphins", also tell me what an infinite number of times means in this context. If you could answer this question, it will help tremendously in understanding your point of view.
I can't answer without a definition of what you mean by "representative sample"--representative of what? You can only define "representative" by defining what conditions you are imagining the dolphins are being sampled in the ideal case of an infinite number of trials. If the fact that *I* am making the selection on a particular date (since the dolphin population may change depending on the date) is explicitly part of these conditions, then the infinite set of trials can be imagined by supposing that we are rewinding history to the same date for each new group of 1000 in the infinite collection, and having me make the selection on that date with the same specified observable conditions. So relative to this ideal infinite set, I can use whatever method I like to select my 1000, because the fact that it's up to me to decide how to pick them is explicitly part of the conditions.

On the other hand, if the ideal infinite set of trials is defined in such a way that every dolphin currently alive at this date should appear in the sample of 1000 with equal frequency in the infinite set, this will be more difficult, because whatever method I am using to pick dolphins might bias me to be less likely to pick some dolphins currently alive than others. But the Aspect type experiments are more analogous to the first case, since Bell's reasoning applies to *any* experimental conditions that meet some basic criteria (like each experimenter choosing randomly between three detector settings, and the measurements being made at a spacelike separation), so as long as our particular experiment meets those basic criteria, we are free to define the ideal infinite set in terms of an infinite repetition of the particular observable conditions that *we* chose for our experiment.
billschnieder said:
And let us say, you came up with some list, and I did my experiment and came up with the number of dolphins passing some test (say N), and I calculated the relative frequency N/1000. Will you call this number the marginal probability of a dolphin passing my test? Or the conditional probability of the dolphin passing my test, conditioned on on the method of selecting the list?
Again, in the frequentist interpretation, to talk about any "probability" you need to specify what known conditions obtain in your ideal infinite set; until you do, your question is not well-defined.
JesseM said:
Do you think there are situations where even hypothetically it doesn't make sense to talk about repetition under the same experimental conditions (so even a hypothetical 'God' would not be able to define 'probability' in this way?)
billschnieder said:
You can hypothesize anything you want. But not everything that you hypothesize can be compared with something that is actually done. To be able to compare an actual experiment to a hypothetical situation, you have to make sure all relevant entities in the hypothetical situation are present in the actual experiment and vice versa.
Yes, and we are free to define our ideal infinite set in terms of "the same observable conditions that held in the finite number of trials we actually performed".
billschnieder said:
For example, let us say your hypothetical situation assumes that an experimental condition is measured an infinite number of times (in your words, "hypothetically repeat the same experiment an infinite number of times", "hypothetical much larger group of tests repeated under the same experimental conditions"). Then suppose an experiment is actually performed in which the experimenters repeatedly measure at a given detector setting (say, a detector angle) a very large number of times.
No, the "conditions" are conditions for each individual trial, the number of trials isn't part of the "conditions" in the frequentist interpretation. So you specify some known conditions that should hold on a particular trial (say, a certain person flipping a certain coin in a certain room on a certain date), and then define the ideal probabilities as the frequencies that would be seen if you had an infinite set of trials where those conditions applied to every individual member of the set (while other unknown conditions, like the exact position of every air molecule in the room, can vary randomly)
billschnieder said:
My argument here is that, since the experimenters can never guarantee that any setting has been repeated, they can not compare their results with Bell's inequalities. In other words, if they collect 1000000 data points for the detector angle 90 degrees, the experimenters can not guarantee that they have repeated a single condition 1000000 times, rather than 1000000 different conditions exactly once each. And until they can do that, their results are not comparable to Bell's inequalities.

Of course they have control over their detector angle, but they have no clue about the detailed workings of the microscopic components. And guess what, photons interact at the microscopic level, not the macroscopic level, so their claims to having repeated the same experimental conditions multiple times are bogus.
But they don't need to "repeat a single condition", they just need to make sure the known conditions match those assumed in the ideal infinite case. As you said earlier, probability deals with situations of imperfect information, so we are holding knowns constant while allowing unknowns to vary. And as I said above in response to this comment, Bell's analysis is much like the analysis in statistical mechanics where we calculate the probabilities of one macrostate transitioning to another (with the macrostate defined in terms of measurable macro-variables like pressure and temperature) by imagining a near-infinite number of cases where the initial macrostate is held constant but the microstate (which gives the precise microscopic state of the system) is allowed to take all possible values consistent with the macrostate. Are you familiar with this type of reasoning in statistical mechanics, and if so do you have any problem with tests of the theory involving a number of trials much smaller than the number of possible initial microstates? Please give a direct answer to this question.
JesseM said:
Given that my whole question was about what you meant by fair, this is not a helpful answer. The "fair sampling assumption" is a term that is used specifically in discussions of Aspect-type-experiments
billschnieder said:
You asked:
What do they believe they know, and do you think the people doing calculations from Aspect type experiments are wrongly believing they know something analogous?
To which my answer was,
The doctors believe their sample is fair but the omniscient being knows that it is not. Have you ever heard of the "fair sampling assumption"?
I assumed it would be obvious to you that those doing Aspect-type experiments also believe their samples are fair, which is analogous to the doctors believing their sampling was fair, which directly answers your question!
No, it doesn't answer my question at all, because in an earlier post (#72) I explained that I didn't know what you meant by "fair", giving two possible senses of this word, and you didn't tell me which sense you were using (or define another sense I didn't think of). If I don't know what you mean by the word "fair" and you refuse to explain, obviously no response of yours involving the word "fair" will qualify as an answer I know how to interpret.

Again, here were the two quite distinct meanings of "fair sample" I offered:
As I have tried to explain before, you are using "fair sample" in two quite distinct senses without seeming to realize it. One use of "fair" is that we are adequately controlling for other variables, so that the likelihood of having some specific value of another variable (like large kidney stones) is not correlated with the value the variables we're studying (like treatment type and recovery rate), so that any marginal correlation in the variables we're studying reflects an actual causal influence. Another use of "fair" is just that the frequencies in your sample are reasonably close to the probabilities that would be observed if the experiment were repeated under the same conditions with a sample size approaching infinity.
JesseM said:
You didn't say anything about "fair" in the question I was responding to, you just asked if the setups were "systematically biased against some λs but favored other λs". I took that to mean that under the experimental setup, some λs were systematically less likely to occur than others (what else would 'systematically biased against some λs' mean?)
billschnieder said:
You must be kidding right? I don't know why I bother answering these silly questions. Look up the meaning of "biased", Einstein.
"Einstein"? Like I said, if you want to have an intellectual discussion that's fine, but if you're going to descend to the level of middle school taunts I'm not going to continue. To identify what a "biased" sample is you have to identify the population you are drawing from--this page says "A biased sample is one in which the method used to create the sample results in samples that are systematically different from the population", so see the discussion of "population" below.
billschnieder said:
You are confused. The population is the entirety of what actually exists of the "thing" under consideration (see http://en.wikipedia.org/wiki/Sampling_(statistics)#Population_definition). The "population" is not some hypothetical repetition of a large number of hypothetical individuals or "things".
Maybe you should have read your own link more carefully, in some cases they do explicitly define it that way:
In other cases, our 'population' may be even less tangible. For example, Joseph Jagger studied the behaviour of roulette wheels at a casino in Monte Carlo, and used this to identify a biased wheel. In this case, the 'population' Jagger wanted to investigate was the overall behaviour of the wheel (i.e. the probability distribution of its results over infinitely many trials), while his 'sample' was formed from observed results from that wheel. Similar considerations arise when taking repeated measurements of some physical characteristic such as the electrical conductivity of copper.
And with "population" defined in this way, you have to define the conditions that we're imagining are being repeated "over infinitely many trials" before you can define what a "biased sample" is. So I thought it might have been that when you said "systematically biased against some λs but favored other λs", you might have been imagining the "ideal" set of trials would be one where each value of λ occurred with equal frequency, so any systematic departure from that would constitute sampling bias. If that's not what you meant, what did you mean? "Systematically biased" with respect to what "true" or "ideal" frequencies/probabilities?
billschnieder said:
You could have a 100% efficient detector and yet not have a fair sample.
If the "population" was explicitly defined in terms of an infinite set of repetitions of the exact observable experimental conditions you were using, then by definition your experimental conditions would not show any systematic bias and would thus be a "fair sample". And Bell's theorem doesn't assume anything too specific about the observed experimental conditions beyond some basic criteria like a spacelike separation between measurements (though it may be that 100% detector efficiency is needed as one of these criteria to make the proof rigorous, in which case a frequentist would only say that Bell's inequalities would be guaranteed to hold in an infinite repetition of an experiment with perfect detector efficiency, and any actual experiment with imperfect efficiency could be a biased sample relative to this infinite set)
billschnieder said:
All you need in order to get an unfair sample, is an experimental apparatus which rejects photons based on their hidden properties and experimental settings.
If the apparatus "rejects photons" then doesn't that mean you don't have "a 100% efficient detector", by definition? Or do you mean "rejects" in some different sense here, like the photons more likely to have one value of the measurable property A than another depending on "their hidden properties and experimental settings"?
 
  • #100
JesseM said:
As I said before, my impression is that "local realism" is mostly used as just a composite phrase which refers to the type of local theory that Bell was discussing, "realism" doesn't need to have any independent meaning outside of its use in this phrase.

I wasn't asking what your impression is about how the phrase is used by the broader physics community, but rather whether in *your* use of the phrase, you can precisely relate the words 'local' and 'realism' to your definitions which seem essentially identical to Bell's principle of local causality. Now, maybe it's just a 'composite phrase' for you as well. But in that case, I would still insist that it's problematic. If the words 'local' and 'realism' have any clear meaning, then it should be possible to identify the parts of the definitions to which they correspond, as well as how they relate to each other. After all this is possible with Bell's phrase 'local causality' so why shouldn't it be possible with 'local realism'? And actually, it is not true that the phrase 'local realism' is mostly used as just a composite phrase which refers to the type of local theory that Bell was discussing. If anything, it is mostly believed that locality and realism are two separate assumptions of Bell's theorem (as seen, for example, in the quotes of Zeilinger and Aspect that DrC posted), and many physicists claim that there is a choice to drop either locality or realism as a consequence of the violation of Bell inequalities. So which understanding do you hold? Do you think of locality and realism as two separate assumptions, or do you take Bell's view that only locality (as Bell defined it) and causality are assumed?
JesseM said:
If physicists called it "Bellian locality", would you require that "Bellian" have some independent definition beyond the definition that the whole phrase "Bellian locality" refers to the type of local theory Bell discussed?

The difference is that it is clear what 'Bellian' and 'locality' refers to in the phrase, 'Bellian locality', as well as how the meaning of the two words relate to each other. By contrast, it is not very clear with the phrase 'local realism'.
 
  • #101
Maaneli said:
I wasn't asking what your impression is about how the phrase is used by the broader physics community, but rather whether in *your* use of the phrase, you can precisely relate the words 'local' and 'realism' to your definitions which seem essentially identical to Bell's principle of local causality. Now, maybe it's just a 'composite phrase' for you as well.
Yeah, I would say that it's just been a composite phrase for me, I'm just using it to be understood by others so as long as they understand I'm talking about the same type of local theory Bell was talking about, that's fine with me. I do think that it'd be possible to come up with an independent definition of "realism" that fits with what I mean by the composite phrase though. For example, I might say that in a realist theory the universe should have a well-defined state at each moment in time, and then I could modify my point about deterministic vs. probabilistic local realist theories from post #63 on Understanding Bell's Mathematics:
In a realist theory, all physical facts--including macro-facts about "events" spread out over a finite swath of time--ultimately reduce to some collection of instantaneous physical facts about the state of the universe at individual moments of time. Without loss of generality, then, let G and G' be two possibilities for what happens at some moment of time T.

--In a deterministic realist theory, if λ represents the instantaneous physical facts about the state of the universe at some time prior to T, then this allows us to determine whether G or G' occurs with probability one.

--An intrinsically probabilistic realist theory is a somewhat more subtle case, but for any probabilistic realist theory it should be possible to break it up into two parts: a deterministic mathematical rule that gives the most precise possible probability of the universe having a given state at time T based on information about states prior to T, and a random "seed" number whose value is combined with the probability to determine what state actually occurred at T. This "most precise possible probability" does not represent a subjective probability estimate made by any observer, but is the probability function that nature itself is using, the most accurate possible formulation of the "laws of physics" in a universe with intrinsically probabilistic laws.

For example, if the mathematical rule determines the probability of G is 70% and the probability of G' is 30%, then the random seed number could be a randomly-selected real number on the interval from 0 to 1, with a uniform probability distribution on that interval, so that if the number picked was somewhere between 0 and 0.7 that would mean G occurred, and if it was 0.7 or greater than G' occurred. The value of the random seed number associated with each probabilistic choice (like the choice between G and G') can be taken as truly random, uncorrelated with any other fact about the state at times earlier than T, while the precise probability of different events could be generated deterministically from a λ which contained information about all instantaneous states at times prior to T.
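To make the "deterministic rule plus random seed" split concrete, here is a minimal Python sketch (the rule, the state label, and the 70/30 split are illustrative assumptions echoing the 70%/30% example above, nothing beyond what the paragraph describes):

```python
import random

def law(lam):
    """Hypothetical deterministic rule: maps a prior state lambda to the
    most precise possible probability that G (rather than G') occurs at T."""
    return 0.7 if lam == "example prior state" else 0.3  # numbers are illustrative

def outcome(lam, seed):
    """The seed is a uniform random number on [0, 1), uncorrelated with anything
    earlier than T; combined with the rule's probability it fixes the outcome."""
    return "G" if seed < law(lam) else "G'"

lam = "example prior state"
print(outcome(lam, random.random()))     # one probabilistic 'choice' by nature

# Over many repetitions the frequency of G approaches the rule's probability:
N = 100_000
print(sum(outcome(lam, random.random()) == "G" for _ in range(N)) / N)  # ~0.7
```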
This definition doesn't require that there be a unique correct definition of simultaneity, just that it's possible to come up with a simultaneity convention such that either the deterministic or the probabilistic case above holds. Of course there might be universes where this wasn't true which I might still want to call "realist", like one where backwards time travel was possible or some weird theory of quantum gravity where time was emergent rather than fundamental. This case is harder to deal with--maybe I'd just want to require that there is some well-defined mathematical set where each member of the set is a "possible universe" and that the laws of physics assign a well-defined probability distribution to the entire set. But at least the previous definition makes sense for a realist universe where it makes sense to order events in time, I think.
Maaneli said:
If the words 'local' and 'realism' have any clear meaning, then it should be possible to identify the parts of the definitions to which they correspond, as well as how they relate to each other.
Why is it necessarily problematic to have a composite phrase where the individual parts don't have any clear independent meaning? If we combined the phrase into one word, "localrealism", would that somehow be more acceptable since we don't expect individual parts of a single word to have their own separate meanings?
Maaneli said:
And actually, it is not true that the phrase 'local realism' is mostly used as just a composite phrase which refers to the type of local theory that Bell was discussing. If anything, it is mostly believed that locality and realism are two separate assumptions of Bell's theorem (as seen, for example, in the quotes of Zeilinger and Aspect that DrC posted), and many physicists claim that there is a choice to drop either locality or realism as a consequence of the violation of Bell inequalities.
Fair enough, my understanding of how the phrase is used may be wrong. But I wonder if it's possible that a lot of physicists just haven't thought about the separate meanings very much, and assume they should have separate meanings even if they couldn't give a clear definition of what criteria a nonlocal realist theory would satisfy (even if they can point to specific 'know it when I see it' examples of nonlocal realist theories like Bohmian mechanics, and likewise can point to examples of ways of defining locality that they'd understand as nonrealist, like the definition from QFT).
Maaneli said:
The difference is that it is clear what 'Bellian' and 'locality' refers to in the phrase, 'Bellian locality', as well as how the meaning of the two words relate to each other. By contrast, it is not very clear with the phrase 'local realism'.
But "Bellian" has no independent physical definition here, it just refers to the views of a particular historical figure. For example, we wouldn't be able to make sense of the phrase "Bellian nonlocality", whereas I think you would probably require that if "realism" and "locality" have clear independent meanings, we should be able to define what set of theories would qualify as "non-local, realist".
 
  • #102
Maaneli said:
If anything, it is mostly believed that locality and realism are two separate assumptions of Bell's theorem (as seen, for example, in the quotes of Zeilinger and Aspect that DrC posted), and many physicists claim that there is a choice to drop either locality or realism as a consequence of the violation of Bell inequalities.

:smile:

I was re-reading some material today on our subject. Are you familiar with the work of Michael Redhead?

Incompleteness, nonlocality, and realism (winner, 1988 LAKATOS AWARD FOR AN OUTSTANDING CONTRIBUTION TO THE PHILOSOPHY OF SCIENCE):
http://books.google.com/books?id=Yt...ead incompleteness&pg=PP1#v=onepage&q&f=false

He analyzed the issue of whether or not locality was a sufficient criterion for the Bell result. He also provides a number of definitions of Bell locality. Generally, he did not find that this was sufficient. However, the subject gets pretty complicated, as subtle changes in definitions can change your perspective. So I don't consider this work to answer the question in a manner that is exact and will settle the issue finally.

The problem I have always had is that if you start with your local causality (or my locality, both of which to me are the same thing as Bell's 2) as a premise, you tend to see it as all which is needed for Bell. On the other hand, if you start with realism as a premise, you likewise tend to see IT as all which is needed for Bell. In other words, your starting point dictates some of your perspective. That is why I believe it is usually accepted that both local causality and realism are required for the Bell result. It is a tacit acknowledgment that there are some definitional issues involved.
 
  • #103
DrChinese said:
:smile:

I was re-reading some material today on our subject. Are you familiar with the work of Michael Redhead?

Incompleteness, nonlocality, and realism (winner, 1988 LAKATOS AWARD FOR AN OUTSTANDING CONTRIBUTION TO THE PHILOSOPHY OF SCIENCE):
http://books.google.com/books?id=Yt...ead incompleteness&pg=PP1#v=onepage&q&f=false

He analyzed the issue of whether or not locality was a sufficient criterion for the Bell result. He also provides a number of definitions of Bell locality. Generally, he did not find that this was sufficient. However, the subject gets pretty complicated, as subtle changes in definitions can change your perspective. So I don't consider this work to answer the question in a manner that is exact and will settle the issue finally.

I'm familiar with the work of Redhead, but I haven't looked at this paper yet. I wonder though if he refers at all to Bell's own definition of local causality from La Nouvelle?
DrChinese said:
The problem I have always had is that if you start with your local causality (or my locality, both of which to me are the same thing as Bell's 2) as a premise, you tend to see it as all which is needed for Bell. On the other hand, if you start with realism as a premise, you likewise tend to see IT as all which is needed for Bell.

I don't think this is the issue. The point I've been making is that Bell's local causality (which he shows is all that is needed for the derivation of his inequality, as well as the CHSH inequality) requires as part of its definition a notion of realism, specifically, the assumption of 'local beables'. If one rejects that notion of realism, then there simply is no Bell locality, and thus no Bell theorem. That's why 'realism' (assuming it refers to Bell's notion of realism) and Bell locality are not two separate assumptions, and why you cannot reject realism without rejecting Bell's theorem altogether.

Also, we know that realism is not a sufficient premise for Bell. After all, there exist theories of nonlocal (contextual or noncontextual) beables which violate the Bell inequalities.

DrChinese said:
In other words, your starting point dictates some of your perspective. That is why I believe it is usually accepted that both local causality and realism are required for the Bell result. It is a tacit acknowledgment that there are some definitional issues involved.

So I don't think it's a definitional issue. I have also never seen Zeilinger or Aspect or anyone else in quantum optics argue that Bell's local causality condition is insufficient for the derivation of the Bell inequality, nor have I ever seen any indication from any of those guys that they are even familiar with Bell's definition of local causality.
 
  • #104
JesseM said:
Yeah, I would say that it's just been a composite phrase for me, I'm just using it to be understood by others so as long as they understand I'm talking about the same type of local theory Bell was talking about, that's fine with me. I do think that it'd be possible to come up with an independent definition of "realism" that fits with what I mean by the composite phrase though. For example, I might say that in a realist theory the universe should have a well-defined state at each moment in time, and then I could modify my point about deterministic vs. probabilistic local realist theories from post #63 on Understanding Bell's Mathematics:

When you say that the universe should have a 'well-defined state at each moment in time', are you proposing that the universe is something objectively real? Are there local beables in your universe?

Also, it sounds like the very formulation of your definition of locality depends on your definition of realism, in which case, would you agree that if one rejects your definition of realism, then there can be no locality, and thus no means by which to derive the Bell inequality?
JesseM said:
Fair enough, my understanding of how the phrase is used may be wrong. But I wonder if it's possible that a lot of physicists just haven't thought about the separate meanings very much, and assume they should have separate meanings even if they couldn't give a clear definition of what criteria a nonlocal realist theory would satisfy (even if they can point to specific 'know it when I see it' examples of nonlocal realist theories like Bohmian mechanics, and likewise can point to examples of ways of defining locality that they'd understand as nonrealist, like the definition from QFT).

Yes, I think it's the case that a lot of physicists just haven't thought about the meanings very much. And this is a problem, IMO, because, on the basis of this lack of thinking and understanding, many physicists go so far as to say that the violation of the Bell inequalities implies that reality doesn't exist, or that the world is local but 'non-real', or that hidden-variable theories have been proven to be impossible. And then they go on to teach these misunderstandings to classes of graduate students and undergrads, and mislead those students into thinking that there is no ontological way to formulate QM, and that if they try to do so, then they are just being naive or are just in denial of the facts. They also use this misunderstanding to recommend the rejection of grant proposals for research on ontological formulations of QM, because they think that such formulations of QM have already been proven to be impossible.
JesseM said:
But "Bellian" has no independent physical definition here, it just refers to the views of a particular historical figure.

It doesn't matter whether it has a 'physical' definition or not. The point is that it's logically clear what 'Bellian' refers to and how it relates to the word 'locality'.
JesseM said:
For example, we wouldn't be able to make sense of the phrase "Bellian nonlocality",

Although I could have some plausible idea as to what 'Bellian nonlocality' might entail, it's true that I wouldn't be able to identify a precise definition that I could ascribe to Bell. And that's simply because Bell did not propose a definition of 'nonlocal causality'.
JesseM said:
whereas I think you would probably require that if "realism" and "locality" have clear independent meanings, we should be able to define what set of theories would qualify as "non-local, realist".

If the term 'nonlocal' already requires as part of its definition the assumption of 'realism' (assuming that realism has been precisely defined), then I would say that the phrase 'nonlocal realist' is redundant and potentially misleading. Instead, it would be sufficient to say "we should be able to define what set of theories would qualify as 'nonlocal [causal]'".
 
  • #105
Maaneli said:
When you say that the universe should have a 'well-defined state at each moment in time', are you proposing that the universe is something objectively real?
The universe has an objectively real state at every moment, I don't know what it would mean to say "the universe is something objectively real" apart from this.
Maaneli said:
Are there local beables in your universe?
The definition is broad--in some universes satisfying the definition it might be possible to break down the "state of the universe at a given moment" into a collection of local states of each point in space at that time, but in others it might not be.
Maaneli said:
Also, it sounds like the very formulation of your definition of locality depends on your definition of realism, in which case, would you agree that if one rejects your definition of realism, then there can be no locality, and thus no means by which to derive the Bell inequality?
Instead of saying that a theory is "local" or "nonlocal" as a whole, let's say that some mathematically-definable element of a theory is local if 1) all facts about the value of this element can be broken down into local facts about individual points in spacetime, and 2) the value at one point is only causally influenced by local facts in the point's past light cone. So in this case, if the "element" in the copenhagen interpretation is the density matrix for a measurement at a single place and time, then I think it'd make sense to say this element is local even if the copenhagen interpretation is not realist, and even though other elements of the theory like the wavefunction for entangled particles cannot really be considered local. In the case of a local realist theory, the "element" would consist of all objective facts about the state of the universe.
 
  • #106
JesseM said:
The universe has an objectively real state at every moment, I don't know what it would mean to say "the universe is something objectively real" apart from this.

Can you define what a 'state' is?
JesseM said:
The definition is broad--in some universes satisfying the definition it might be possible to break down the "state of the universe at a given moment" into a collection of local states of each point in space at that time, but in others it might not be.

The only universe I am concerned about is one in which the principle of local causality is applicable.
JesseM said:
Instead of saying that a theory is "local" or "nonlocal" as a whole, let's say that some mathematically-definable element of a theory is local if 1) all facts about the value of this element can be broken down into local facts about individual points in spacetime, and 2) the value at one point is only causally influenced by local facts in the point's past light cone. So in this case, if the "element" in the copenhagen interpretation is the density matrix for a measurement at a single place and time, then I think it'd make sense to say this element is local even if the copenhagen interpretation is not realist, and even though other elements of the theory like the wavefunction for entangled particles cannot really be considered local. In the case of a local realist theory, the "element" would consist of all objective facts about the state of the universe.

If the 'element' would consist of all objective facts about the state of the universe, then the density matrix cannot be part of them. A density matrix represents a state of knowledge about a system, not the objective facts about the system.

In any case, based on what you said, it seems that you would have to define a locally causal theory as one in which every objectively real element of a theory satisfies your 1) and 2). So if someone denies your definition of realism, then you cannot even formulate a locally causal theory.
 
  • #107
Maaneli said:
Also, we know that realism is not a sufficient premise for Bell. After all, there exist theories of nonlocal (contextual or noncontextual) beables which violate the Bell inequalities.

This is a debatable point. First, you and every other Bohmian I know agree that nature is contextual. So that is pretty well rejecting realism a priori (which is a reasonable view).

Second, there are some who feel that the Bohmian program asserts forms of realism that are experimentally excluded. Now, I realize that Bohmians reject this evidence. I am just saying it is debatable. I personally am accepting that the Bohmian viewpoint is NOT clearly ruled out experimentally. Although it would be nice to see the Bohmian side come up with something first for a change. (Like some/any prediction that could be tested.)

And third, I personally think that realism IS enough for the Bell result (I have presented this logic previously). But that is a minority view. There, you have it and heard it here first. I am stating something deviant. :biggrin:
 
  • #108
DrChinese said:
This is a debatable point. First, you and every other Bohmian I know agree that nature is contextual. So that is pretty well rejecting realism a priori (which is a reasonable view).

The acceptance of contextuality only implies a rejection of the realism in, say, classical mechanics. But in a contextual theory like deBB, the particles always have definite positions in spacetime, whether or not they are measured. So I don't see how one can think that the contextuality of deBB theory implies a rejection of realism altogether.
DrChinese said:
Second, there are some who feel that the Bohmian program asserts forms of realism that are experimentally excluded. Now, I realize that Bohmians reject this evidence. I am just saying it is debatable.

Who claims this? I'm not familiar with anyone in the physics literature who seriously argues that the realism implied by deBB is experimentally excluded. Certainly Tony Leggett does not assert this. Nor does Zeilinger.
DrChinese said:
I personally am accepting that the Bohmian viewpoint is NOT clearly ruled out experimentally. Although it would be nice to see the Bohmian side come up with something first for a change. (Like some/any prediction that could be tested.)

Do you know of Valentini's work on nonequilibrium deBB field theory in inflationary cosmology?

I've also developed some semiclassical deBB gravity models which could, in principle, be experimentally discriminated from standard semiclassical gravity theory, through the use of matter-wave interferometry with macromolecules. But that's currently new and unpublished work.
DrChinese said:
And third, I personally think that realism IS enough for the Bell result (I have presented this logic previously). But that is a minority view. There, you have it and heard it here first. I am stating something deviant. :biggrin:

Yeah, perhaps you won't be surprised if I say that I'm extremely skeptical of this claim. :wink:

But I might like to see that argument just for kicks.
 
  • #109
JesseM said:
It often seems like you may be intentionally playing one-upmanship games where you snip out all the context of some question or statement I ask and make it sound like I was confused about something very trivial
Pot calling kettle black.

This scenario, where there is a systematic bias in how doctors assign treatment which influences the observed correlations in frequencies between treatment and recovery in the sample, is a perfectly well-defined one
And how exactly is this different from the issue we are discussing? Haven't I told you umpteen times that Aspect-type experimenters are unable to make sure there is no systematic bias in their experiments? How do you expect me to continue a discussion with you if you ignore everything I say and keep challenging every tiny tangential issue, like the meaning of fair, or the meaning of population? You think I have all the time in the world to be following you down these rabbit trails which are not directly relevant to the issue being discussed. Have you noticed every one of your responses is now three posts long, mostly filled with tangential issues? Are you unable to focus in on just what is relevant? You may have the time for this but I don't.

In general I notice that you almost always refuse to answer simple questions I ask you about your position
See the previous paragraph for the reason why. I answer the ones that I believe will further the relevant discussion and ignore temptations to go down yet another rabbit trail.

"Rational degree of belief" is a very ill-defined phrase. What procedure allows me to determine the degree to which it is rational to believe a particular outcome will occur in a given scenario?
It is well defined to me. If you disagree, give an example and I will show you how a rational degree of belief can be formed. Or better, give an example in which you think the above definition does not apply. My definition above covers both the "frequentist" and "Bayesian" views as special cases, each of which is not a complete picture by itself. If you think it does not, explain in what way it does not.
But the frequentist interpretation is just about hypothetical repetitions, which can include purely hypothetical ideas like "turning back the clock" and running the same single experiment over again at the same moment (with observable conditions held the same but non-observed conditions, like the precise 'microstate' in a situation where we have only observed the 'macrostate', allowed to vary randomly) rather than actually repeating it at successively later times (which might be impossible because the original experiment destroyed the object we were experimenting on, say).
So why are you so surprised when I tell you that such idealized problems, which presuppose infinite independent repetitions of a "random experiment" can not be directly compared to anything real, where infinite repetition of a "random experiment" is not possible? If Bell's theorem were an entirely theoretical exercise with no comparison being made to reality, and no conclusions about reality being drawn from it, do you really believe we would be having this discussion?

If it is your view that Bell's inequalities do not say anything about reality, and that no reasonable physicist can possibly draw any conclusions about the real world from Bell's theorem, then we can end this quibble, because you and I will be in full agreement. Is that what you are saying?

The comment above says nothing of the sort. I'm just saying that to talk about "probability" in the frequentist interpretation you need to define the conditions that you are imagining being repeated in an arbitrarily large number of trials.
No I don't. You are the one who insists probability must be defined that way not me.

would you agree that when defining the sample space, we must define what process was used to assign treatments to patients, that a sample space where treatment was assigned by doctors would be a different one than a sample space where treatment was assigned by a random number generator on a computer?
Yes, I have told you as much recently. But what has that got to do with anything? All I am telling you is that you cannot compare a probability defined on one sample space with one defined on another. My point is, just because you use a random number generator does not mean you have the same probability space as the idealized, infinitely repeated one you theorized about. What don't you understand about that?

I'm not asking you to "compare probabilities defined on different probability spaces", and Bell's argument doesn't require you to do that either.
Oh, but that is what you are doing by comparing Bell's inequalities with the results of Aspect type experiments whether Bell "requires" it or not. It is not about what Bell requires, it is about what is done every time a real experiment is compared to Bell's inequalities.

Sure it would be. If treatment was assigned by a random number generator, then in the limit as the number of trials went to infinity the probability of any correlation between traits of patients prior to treatment (like large kidney stones) and the treatment they were assigned would approach 0.
How exactly can actual doctors doing actual experiments repeat the trial to infinity?

This is just because there isn't any way the traits of patients would causally influence the random number generator so that there would be a systematic difference in the likelihood that patients with different versions of a trait (say, large vs. small kidney stones) would be assigned treatment A vs. treatment B. Do you disagree?

If you were repeating the experiment an infinite number of times with the random number generator producing two groups every time, then I agree that theoretically, the average composition of both groups will tend towards the same value. But in the real world, you do not have an infinite number of people with kidney stones, and it is impossible to repeat the experiment an infinite number of times. Therefore, unless the experimenters know that the size of the stones matters, and specifically control for that, the results of their single experiment cannot be compared to any idealized, theoretical result obtained by repeating a hypothetical experiment an infinite number of times. Is this too difficult to understand?
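A toy simulation (my own illustration with made-up numbers, not anything either poster computed) makes the contrast concrete: with doctor-style biased assignment the two treatments show different marginal recovery rates even though the model makes them equally effective, while with random assignment the difference washes out as the number of patients grows.

```python
import random

def recovery_prob(stone_large):
    # Assumption for this toy model: both treatments are equally effective,
    # but large stones are simply harder to cure.
    return 0.5 if stone_large else 0.9

def trial(n, assign):
    """Simulate n patients; `assign` maps stone size to a treatment label."""
    stats = {"A": [0, 0], "B": [0, 0]}          # treatment -> [recoveries, patients]
    for _ in range(n):
        large = random.random() < 0.5           # half the patients have large stones
        t = assign(large)
        stats[t][1] += 1
        stats[t][0] += random.random() < recovery_prob(large)
    return {t: round(r / max(p, 1), 3) for t, (r, p) in stats.items()}

doctor_choice = lambda large: "A" if random.random() < (0.8 if large else 0.2) else "B"
coin_flip     = lambda large: random.choice("AB")

print("doctor-assigned:", trial(100_000, doctor_choice))  # B looks marginally better
print("randomized:     ", trial(100_000, coin_flip))      # A and B look the same
```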
 
  • #110
JesseM said:
So, do you agree with my statement that of these two, Only the second sense of "fair sample" is relevant to Bell's argument?
The concept that a fair sample is needed to be able to draw inferences about the population from a sample of it is relevant to Bell's argument, irrespective of which specific type of fair sample is at issue in a specific experiment.

In post #91 you said the following, numbered for convenience
JesseM said:
As before, you need to explain what "the population" consists of.
1) Again, does it consist of a hypothetical repetition of the same experimental conditions a much larger (near-infinite number of times)? If so, then by definition the actual sample could not be "systematically biased" compared to the larger population, since the larger population is defined in terms of the same experimental conditions.
2) Perhaps you mean repeating similar experimental conditions but with ideal detector efficiency so all particle pairs emitted by the source are actually detected, which would be more like the meaning of the "fair sampling assumption"?

1) Wrong. IF you define the population like that, the actual sample in a real experiment can still be systematically biased compared to the large population, IF those doing the experiment have no way to ensure that they are actually repeating the same experiment multiple times, even if it were possible to actually repeat it multiple times.

2) A fair sample in the context of Aspect-type experiments means that the probabilities of non-detection at Alice and Bob are independent of each other, and also independent of the hidden elements of reality.



To make the question more precise, suppose all of the following are true:

1. We repeat some experiment with particle pairs N times and observe frequencies of different values for measurable variables like A and B

2. N is sufficiently large such that, by the law of large numbers, there is only a negligible probability that these observed frequencies differ by more than some small amount ε from the ideal probabilities for the same measurable variables (the 'ideal probabilities' being the ones that would be seen if the experiment was repeated under the same observable conditions an infinite number of times); a numerical illustration of how large such an N might need to be follows after the question below

3. Bell's reasoning is sound, so he is correct in concluding that in a universe obeying local realist laws (or with laws obeying 'local causality' as Maaneli prefers it), the ideal probabilities for measurable variables like A and B should obey various Bell inequalities

...would you agree that if all of these are true (please grant them for the sake of the argument when answering this question, even though I know you would probably disagree with 3 and perhaps also doubt it is possible in practice to pick a sufficiently large N so that 2 is true), then the experiment constitutes a valid test of local realism/local causality, so if we see a sizeable violation of Bell inequalities in our observed frequencies there is a high probability that local realism is false? Please give me a yes-or-no answer to this question.
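As a numerical illustration of premise 2 (my own sketch, not part of the original post): the Hoeffding bound P(|observed frequency − ideal probability| > ε) ≤ 2·exp(−2Nε²) gives one way to estimate how large N must be for a repeated two-valued outcome to sit within ε of its ideal probability except with negligible probability δ.

```python
import math

def min_trials(epsilon, delta):
    """Smallest N such that, by the Hoeffding bound 2*exp(-2*N*eps^2),
    P(|observed frequency - ideal probability| > epsilon) <= delta
    for a repeated two-valued outcome."""
    return math.ceil(math.log(2 / delta) / (2 * epsilon ** 2))

print(min_trials(0.01, 1e-6))   # roughly 7.3e4 trials for a 1% tolerance
```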

No I do not agree. The premises you presented are not sufficient (even if they were all true) for the statement in bold to be true. Here is an example I have given you in a previous thread which makes the point clearer I believe:

The point is that certain assumptions are made about the data when deriving the inequalities, that must be valid in the data-taking process. God is not taking the data, so the human experimenters must take those assumptions into account if their data is to be comparable to the inequalities.

Consider a certain disease that strikes persons in different ways depending on circumstances. Assume that we deal with sets of patients born in Africa, Asia and Europe (denoted a,b,c). Assume further that doctors in three cities Lyon, Paris, and Lille (denoted 1,2,3) are assembling information about the disease. The doctors perform their investigations on randomly chosen but identical days (n) for all three where n = 1,2,3,...,N for a total of N days. The patients are denoted Alo(n) where l is the city, o is the birthplace and n is the day. Each patient is then given a diagnosis of A = +1/-1 based on presence or absence of the disease. So if a patient from Europe examined in Lille on the 10th day of the study was negative, A3c(10) = -1.

According to the Bell-type Leggett-Garg inequality

Aa(.)Ab(.) + Aa(.)Ac(.) + Ab(.)Ac(.) >= -1

In the case under consideration, our doctors can combine their results as follows

A1a(n)A2b(n) + A1a(n)A3c(n) + A2b(n)A3c(n)

It can easily be verified that by combining any possible diagnosis results, the Leggett-Garg inequality will not be violated, as the result of the above expression will always be >= -1, so long as the cyclicity (XY+XZ+YZ) is maintained. Therefore the average result will also satisfy that inequality and we can therefore drop the indices and write the inequality only based on place of origin as follows:

<AaAb> + <AaAc> + <AbAc> >= -1

Now consider a variation of the study in which only two doctors perform the investigation. The doctor in Lille examines only patients of type (a) and (b) and the doctor in Lyon examines only patients of type (b) and (c). Note that patients of type (b) are examined twice as much. The doctors not knowing, or having any reason to suspect that the date or location of examinations has any influence decide to designate their patients only based on place of origin.

After numerous examinations they combine their results and find that

<AaAb> + <AaAc> + <AbAc> = -3

They also find that the single outcomes Aa, Ab, Ac, appear randomly distributed around +1/-1 and they are completely baffled. How can single outcomes be completely random while the products are not random. After lengthy discussions they conclude that there must be superluminal influence between the two cities.

But there are other more reasonable reasons. Note that by measuring in only two cities they have removed the cyclicity intended in the original inequality. It can easily be verified that the following scenario will result in what they observed:

- on even dates Aa = +1 and Ac = -1 in both cities while Ab = +1 in Lille and Ab = -1 in Lyon
- on odd days all signs are reversed

In the above case
<A1aA2b> + <A1aA2c> + <A1bA2c> >= -3
which is consistent with what they saw. Note that this equation does NOT maintain the cyclicity (XY+XZ+YZ) of the original inequality for the situation in which only two cities are considered and one group of patients is measured more than once. But by dropping the indices for the cities, it gives the false impression that the cyclicity is maintained.

The reason for the discrepancy is that the data is not indexed properly in order to provide a data structure that is consistent with the inequalities as derived. Specifically, the inequalities require cyclicity in the data, and since experimenters cannot possibly know all the factors in play in order to know how to index the data to preserve the cyclicity, it is unreasonable to expect their data to match the inequalities.

For a fuller treatment of this example, see Hess et al, Possible experience: From Boole to Bell. EPL. 87, No 6, 60007(1-6) (2009)
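A few lines of Python make the even/odd-day rule above easy to check (this is just a direct transcription of the two-city scenario as stated, not code from the Hess et al. paper):

```python
# Even days:  Aa = +1 and Ac = -1 in both cities; Ab = +1 in Lille, Ab = -1 in Lyon.
# Odd days:   all signs reversed.
# Lille examines patient types (a, b); Lyon examines types (b, c).

def A(city, patient_type, day):
    sign = 1 if day % 2 == 0 else -1
    if patient_type == "a":
        return sign
    if patient_type == "c":
        return -sign
    return sign if city == "Lille" else -sign   # type "b" depends on the city

N = 1000
ab = sum(A("Lille", "a", n) * A("Lyon", "b", n) for n in range(N)) / N
ac = sum(A("Lille", "a", n) * A("Lyon", "c", n) for n in range(N)) / N
bc = sum(A("Lille", "b", n) * A("Lyon", "c", n) for n in range(N)) / N

print(ab + ac + bc)   # -3.0, far below the -1 bound of the original inequality
```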

The key word is "cyclicity" here. Now let's look at various inequalities:

Bell's equation (15):
1 + P(b,c) >= | P(a,b) - P(a,c)|
a, b, c each occur in two of the three terms, each time together with a different partner. However, in actual experiments the (b,c) pair is analyzed at a different time from the (a,b) pair, so the b's are not the same. Just because the experimenter sets a macroscopic angle does not mean that the complete microscopic state of the instrument, which he has no control over, is in the same state.

CHSH:
|q(d1,y2) - q(a1,y2)| + |q(d1,b2)+q(a1,b2)| <= 2
d1, y2, a1, b2 each occur in two of the four terms. Same argument above applies.

Leggett-Garg:
Aa(.)Ab(.) + Aa(.)Ac(.) + Ab(.)Ac(.) >= -1

All of your premises could be true, and you will still not avoid the pitfall, if the data is not indexed in accordance with the expectations of the inequalities. But it is impossible to do that.
 
  • #111
JesseM said:
If the "population" was explicitly defined in terms of an infinite set of repetitions of the exact observable experimental conditions you were using, then by definition your experimental conditions would not show any systematic bias and would thus be a "fair sample". And Bell's theorem doesn't assume anything too specific about the observed experimental conditions beyond some basic criteria like a spacelike separation between measurements (though it may be that 100% detector efficiency is needed as one of these criteria to make the proof rigorous, in which case a frequentist would only say that Bell's inequalities would be guaranteed to hold in an infinite repetition of an experiment with perfect detector efficiency, and any actual experiment with imperfect efficiency could be a biased sample relative to this infinite set)
Is it your claim that Bell's "population" is defined in terms of "an infinite set of repetitions of the exact observable experimental conditions you were using"? If that is what you mean here then I fail to see the need to make any fair sampling assumption at all. Why would the fact that detectors are not efficient not already be included in what you call "the exact observable experimental conditions you were using"? So either, 1) that is not what Bell's population is defined as, or 2) No experimental condition testing Bell's inequalities will ever be unfair, so there is no point even making a "fair sampling assumption". Or maybe you do not understand that fair sampling is not about detector efficiency. I could have a fair sample with 1% detector efficiency, provided the rejection of photons was not based on a property of the photons themselves.

If the apparatus "rejects photons" then doesn't that mean you don't have "a 100% efficient detector", by definition?
No, it doesn't mean that at all. In Aspect-type experiments, you have a series of devices like beam splitters or cross-polarizers, not to mention coincidence counters, before you have any detector. The detector is the device which actually detects a photon. However, even if your detector is 100% efficient and detects everything that reaches it, that doesn't mean everything is reaching it. The rest of the apparatus could be eliminating photons before that point.
 
  • #112
JesseM said:
For example, imagine that I come to you today and say, I want to do an experiment on dolphins, give me a representative sample of 1000 dolphins. Without knowing anything about the details of my experiment, and all the parameters that affect the outcome of my experiment, could you explain to me how you will go about generating this "random list of dolphins", also tell me what an infinite number of times means in this context. If you could answer this question, it will help tremendously in understanding your point of view.

I can't answer without a definition of what you mean by "representative sample"--representative of what?
Representative of the entire dolphin population.

You can only define "representative" by defining what conditions you are imagining the dolphins are being sampled
Oh, so you are saying you need to know the "hidden" factors in order to be able to generate a fair sample. So then you agree that without a clear understanding of what factors are important for my experiment, you can not possibly produce a representative sample. This is what I have been telling you all along. Do you see now how useless a random number generator will be in such a case, where you have no clue what the "hidden" factors are?
 
  • #113
fair sampling and the scratch lotto-card analogy

Let us now go back to your famous scratch-lotto example:
JesseM said:
The scratch lotto analogy was only a few paragraphs and would be even shorter if I didn't explain the details of how to derive the conclusion that the probability of identical results when different boxes were scratched should be greater than 1/3, in which case it reduces to this:

Perhaps you could take a look at the scratch lotto analogy I came up with a while ago and see if it makes sense to you (note that it's explicitly based on considering how the 'hidden fruits' might be distributed if they were known by a hypothetical observer for whom they aren't 'hidden'):

Suppose we have a machine that generates pairs of scratch lotto cards, each of which has three boxes that, when scratched, can reveal either a cherry or a lemon. We give one card to Alice and one to Bob, and each scratches only one of the three boxes. When we repeat this many times, we find that whenever they both pick the same box to scratch, they always get the same result--if Bob scratches box A and finds a cherry, and Alice scratches box A on her card, she's guaranteed to find a cherry too.

Classically, we might explain this by supposing that there is definitely either a cherry or a lemon in each box, even though we don't reveal it until we scratch it, and that the machine prints pairs of cards in such a way that the "hidden" fruit in a given box of one card always matches the hidden fruit in the same box of the other card. If we represent cherries as + and lemons as -, so that a B+ card would represent one where box B's hidden fruit is a cherry, then the classical assumption is that each card's +'s and -'s are the same as the other--if the first card was created with hidden fruits A+,B+,C-, then the other card must also have been created with the hidden fruits A+,B+,C-.

Is that too long for you? If you just have a weird aversion to this example (or are refusing to address it just because I have asked you a few times and you just want to be contrary)


I have modified it to make the symbols more explicit and the issue more clear as follows:

Suppose we have a machine that generates pairs of scratch lotto cards, each of which has three boxes (1,2,3) that, when scratched, can reveal either a cherry or a lemon (C, L). We give one card to Alice and one to Bob, and each scratches only one of the three boxes. Let us denote the outcomes (ij) such that (CL) means Alice got a cherry and Bob got a lemon. There are therefore only 4 possible pairs of outcomes: CC, CL, LC, LL. Let us denote the pair of choices by Alice and Bob as (ab); for example, (11) means they both selected box 1 on their cards, and (31) means Alice selected box 3 and Bob selected box 1. There are therefore 9 possible choice combinations: 11, 12, 13, 21, 22, 23, 31, 32 and 33.

When we repeat this many times, we find that
(a) whenever they both pick the same box to scratch, they always get the same result. That is whenever the choices are, 11, 22 or 33, the results are always CC or LL.
(b) whenever they both pick different boxes to scratch, they get the same results only with a relative frequency of 1/4.

How might we explain this?
We might suppose that there is definitely either a cherry or a lemon in each box, even though we don't reveal it until we scratch it. In which case, there are only 8 possible cards that the machine can produce: CCC, CCL, CLC, CLL, LCC, LCL, LLC, LLL. To explain outcome (a) then, we might say that "hidden" fruit in a given box of one card always matches the hidden fruit in the same box of the other card. Therefore the machine must always send the same type of card to Bob and Alice. However, doing this introduces a conflict for outcome (b) as follows:

Consider the case where the cards sent to Bob and Alice were of the LLC type. Since outcome (b) involves Alice and Bob scratching different boxes, there are six possible ways they could scratch.

12LL (i.e., Alice scratches box 1, Bob scratches box 2, Alice gets Lemon, Bob gets Lemon)
21LL
13LC
31CL
23LC
32CL (i.e., Alice scratches box 3, Bob scratches box 2, Alice gets Cherry, Bob gets Lemon)

Out of the 6 possible outcomes, only 2 (the first two) correspond to the same outcome for both Alice and Bob. Therefore the relative frequency will be 2/6 = 1/3, not 1/4 as observed. The same holds for every mixed card type (for CCC or LLL cards the results always match), so the overall relative frequency is at least 1/3. This is analogous to the violation of Bell's inequalities.
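For completeness (my own check, not part of the post), a brute-force enumeration over all eight identical-card instruction sets confirms this counting:

```python
from itertools import product

pairs = [(a, b) for a in range(3) for b in range(3) if a != b]  # different boxes
for card in product("CL", repeat=3):       # the 8 possible instruction sets
    matches = sum(card[a] == card[b] for a, b in pairs)
    print("".join(card), f"{matches}/{len(pairs)}")
# Mixed cards (CCL, CLC, ...) match in 2 of 6 cases (1/3);
# CCC and LLL match in 6 of 6, so the overall rate can never fall below 1/3.
```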

According to JesseM, it is impossible to explain both outcome (a) and outcome (b) with an instruction set as the above illustration shows.


JesseM,
Does this faithfully reflect the example you want me to address? If not point out any errors and I will amend as necessary.
 
  • #114
Maaneli said:
Yeah, perhaps you won't be surprised if I say that I'm extremely skeptical of this claim. :wink:

But I might like to see that argument just for kicks.

It's short and sweet, but you probably won't accept it any more than Norsen did.

A single particle, Alice, has 3 elements of reality at angles 0, 120, 240 degrees. This is by assumption, the realistic assumption, and from the fact that these angles - individually - could be predicted with certainty.

It is obvious from the Bell program that there are NO datasets of Alice which match the QM expectation value. Ergo, the assumption is invalid. And you don't need to consider settings of Bob at all. You simply cannot construct the Alice dataset. QED.

The key difference is that the elements of reality are NOT referring to separate particles. They never were intended to! All the talk about Bob's setting affecting Alice's outcome only relates to Bell tests. But it should be clear that there is no realistic Alice who can match the QM expectation value.
 
  • #115
fair sampling and the scratch lotto-card analogy

(continuing from my last post)
So far, the conundrum is that the only cases which explain outcome (a) produce a relative frequency (1/3) for outcome (b) that is significantly higher than the one predicted by QM and observed in experiments (1/4).

There is however one interesting observation not included in the above treatment. In all experiments performed so far, most of the particles sent to the detector are undetected. In the situation above, this is equivalent to saying that not all the cards sent to Alice or Bob reveal a fruit when scratched.

The alternative explanation:
A more complete example then must include "no-fruit" (N) as a possible outcome, so that in addition to the four outcomes listed initially (CC, CL, LC, LL) we must add the four cases in which only one fruit is revealed for each pair of cards sent (CN, NC, LN, NL) and the one case in which no fruit is revealed for each pair sent (NN). Interestingly, in real experiments, whenever only one of the pair is detected, the whole pair is discarded. This is the purpose of the coincidence circuitry used in Aspect-type experiments.

One might explain it by supposing that a "no-fruit" (N) result is obtained whenever Alice or Bob makes an error by scratching the chosen box too hard, so that they also scratch off the hidden fruit underneath it. In other words, their scratching is not 100% efficient. However, no matter how low their efficiency, if this mistake happens randomly enough, the sample which reveals a fruit will still be representative of the population sent from the card machine, and by considering just those cases in which no mistake was made during scratching (cf. using coincidence circuitry), the conundrum remains. Therefore in this case, the efficiency of the detector does not matter.

There is yet another possibility. What if the "no-fruit" (N) result is an instruction carried by the card itself rather than a result of inefficient scratching? So instead of always having either a cherry or a lemon in each box, we allow for the possibility that some boxes are just left empty (N) and will therefore never produce a fruit no matter how efficiently they scratch.

Keeping this in mind, let us now reconsider the LLC case we discussed above, except that the machine has the freedom to generate the pair such that in one card of the pair generated at a time, one of the boxes is empty (N). For example, the card LNC is sent to Alice while the card LLC is sent to Bob. Note that now the machine is no longer sending exactly the same card to both Alice and Bob. The question then is, can this new instruction set explain both outcomes (a) and (b)? Let us verify:

(a) When both Alice and Bob select the same box to scratch, the possible outcomes for the (LNC,LLC) pair of cards sent are 11LL, 33CC, 22NL. However, since the 22NL case results in only a single fruit, it is rejected as an error case. Therefore in every case in which they both scratch the same box and they both reveal a fruit, they always reveal the same fruit. Outcome (a) is therefore explained.

(b) What about outcome (b)? All the possible results when they select different boxes from the (LNC,LLC) pair are 12LL, 21NL, 13LC, 31CL, 23NC, 32CL. As you can see, in 2 of the 6 possible cases, only a single fruit is revealed. Therefore we reject those two and have only 4 possible outcomes for which they scratch a different box and both of them observe a fruit (12LL, 13LC, 31CL, 32CL). However, in only one of these do they get the same fruit. Therefore in one out of the four possible outcomes in which they both scratch different boxes and both get a fruit, they get the same fruit (12LL), corresponding to a relative frequency of 1/4, just as predicted by QM and observed in real experiments.

The same applies to all other possible instruction sets in which the machine has the freedom to put an empty box in one of the boxes of the pair sent out. The conundrum is therefore resolved.
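The counting for the (LNC, LLC) pair can be checked mechanically; the short sketch below (my own, not from the post) just enumerates the nine box choices and throws away the single-fruit events, as the coincidence circuitry would:

```python
from itertools import product

alice_card = "LNC"   # Alice's box 2 is empty ('N')
bob_card   = "LLC"

same_box, diff_box = [], []
for a, b in product(range(3), repeat=2):
    oa, ob = alice_card[a], bob_card[b]
    if "N" in (oa, ob):                       # single-fruit event: pair discarded
        continue
    (same_box if a == b else diff_box).append(oa == ob)

print(f"same box:      {sum(same_box)}/{len(same_box)}")   # 2/2 -> always match
print(f"different box: {sum(diff_box)}/{len(diff_box)}")   # 1/4 -> matches QM rate
```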
 
  • #116


billschnieder said:
(continuing from my last post)
So far, the conundrum is that the only cases which explain outcome (a) produce a relative frequency (1/3) for outcome (b) that is significantly higher than the one predicted by QM and observed in experiments (1/4).

There is however one interesting observation not included in the above treatment. In all experiments performed so far, most of the particles sent to the detector are undetected. In the situation above, this is equivalent to saying that not all the cards sent to Alice or Bob reveal a fruit when scratched.

The alternative explanation:
...

Well, yes and no. This is an area I am fairly familiar with.

First, we need to agree that the FULL universe in the LR alternative makes a different prediction than what is observed. Therefore it does not match the QM expectation value and Bell's Inequality is respected. Bell's Theorem stands.

Second, it is hypothetically possible to attempt a treatment as you describe. This does have some superficial similarity to the simulation model of De Raedt et al. However, there are in fact extremely severe constraints and getting somewhere with your idea is MUCH more difficult than you may be giving credit for. Keep in mind this approach IS NOT RULED OUT BY BELL'S THEOREM. I capitalized those letters because we are moving from one track to an entirely different one. As we will see, there are still elements of Bell's logic to consider here.

Third, let's consider your hypothesis and the constraints it must satisfy. I will just supply a couple so we can have a starting point.

a) The full universe must obey the Bell Inequality, and most authors pick a straight-line function to stay as close to the QM expectation as possible. This means that there exists some BIAS() function which accounts for the difference between the full universe and the sample actually detected. I will discuss this function in a followup post.
b) The alternative model you suggest will make experimentally verifiable predictions. For example, you must be able to show that there are specific parts of the apparatus that are responsible for the absorption of the "missing" radiation. So keep in mind that the complete absence of such effect is a powerful counterexample.

Now, I realize that you may think something like: "a) and b) don't matter, it at least proves that a local realistic position is tenable." But it actually doesn't, at least not in the terms you are thinking. Yes, I freely acknowledge that Bell's Theorem does NOT rule out LR theories that yield DIFFERENT predictions than QM. I think this is generally accepted as possible by the physics community. It is the idea that QM and LR are compatible that is ruled out. So this means that a) and b) are important. As mentioned I will discuss this in a followup post.
 
  • #117
I am attaching a graph of the BIAS() function for a local realistic theory in which Bell's Inequality is respected, as you are suggesting, because the full universe is not being detected. Your hypothesis is what I refer to as the Unfair Sampling Assumption. The idea is that an Unfair Sample can explain the reason why local realism exists but QM predictions hold in actual experiments.

Your LR candidate does not need to follow this graph, but it will at least match it in several respects. Presumably you want to have a minimal bias function, so I have presented that case.

You will notice something very interesting about the bias: it is not equal for all Theta! This is a big problem for a local realistic theory. And why is that? Because Theta should not be a variable in a theory in which Alice and Bob are being independently measured. On the other hand, if your theory can explain that naturally, then you would be OK. But again, this is where your theory will start making experimentally falsifiable predictions. And that won't be so easy to get around, considering every single prediction you make will involve an effect that no one has ever noticed in hundreds of thousands of experiments. So not impossible, but very difficult. Good luck! :smile:
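To see informally why such a bias function must depend on Theta (a reconstruction of my own for illustration; DrChinese's attached graph may be computed differently), one can compare the QM match probability for polarization-entangled photons with a straight-line local-realist match probability and look at the gap as a function of the relative angle between the settings:

```python
import math

def qm_match(theta):
    """QM probability of identical outcomes at relative polarizer angle theta."""
    return math.cos(theta) ** 2

def linear_lr_match(theta):
    """A straight-line local-realist match probability: 1 at 0, 0 at pi/2."""
    return 1 - 2 * theta / math.pi

for deg in range(0, 91, 15):
    t = math.radians(deg)
    gap = qm_match(t) - linear_lr_match(t)
    print(f"{deg:2d} deg   QM={qm_match(t):.3f}   LR={linear_lr_match(t):.3f}   gap={gap:+.3f}")
# The gap is zero at 0, 45 and 90 degrees but nonzero in between, i.e. any
# sampling bias that converts the straight-line curve into the QM curve must
# itself vary with the relative angle of two independently chosen settings.
```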
 

Attachments

  • Bell.UnfairSamplingAssumption1.jpg (graph of the BIAS() function)
  • #118


DrChinese said:
Well, yes and no.
Just to be clear, I need some clear answers from you before I proceed to talk about your bias function.
1). Do you agree that my explanation above explains the situation through "instruction sets" -- something Mermin said was not possible?

2) Do you at least admit that Mermin was wrong in declaring that in this specific example which he originated, it is impossible to explain the outcome through an instruction set?

3) Do you admit that the way my explanation works, more closely matches real Aspect-type experiments than Mermin's/JesseM's original example in which non-detection is not considered?

4) Do you agree that without coincidence counting, Bell's inequalities are not violated? In other words, Bell's inequalities are only violated in real experiments when the "full universe" is limited to the full universe of coincidence counts, rather than the "full universe" of emissions from the detector? If you disagree, please let me know which "full universe" you are referring to.
 
  • #119


billschnieder said:
Just to be clear, I need some clear answers from you before I proceed to talk about your bias function.
1). Do you agree that my explanation above explains the situation through "instruction sets" -- something Mermin said was not possible?

2) Do you at least admit that Mermin was wrong in declaring that in this specific example which he originated, it is impossible to explain the outcome through an instruction set?

3) Do you admit that the way my explanation works, more closely matches real Aspect-type experiments than Mermin's/JesseM's original example in which non-detection is not considered?

4) Do you agree that without coincidence counting, Bell's inequalities are not violated? In other words, Bell's inequalities are only violated in real experiments when the "full universe" is limited to the full universe of coincidence counts, rather than the "full universe" of emissions from the detector? If you disagree, please let me know which "full universe" you are referring to.

1. No one has ever - that I know of - said an Instruction Set explanation which does NOT match QM expectation value is impossible.

2. No, Mermin is completely correct.

3. No, there is absolutely no justification whatsoever for your ad hoc model. I have seen this plenty of times previously. For example, the graph I posted was created last year during similar discussion with someone else.

Please look at what I wrote above: a) your hypothesis does not violate Bell's Theorem; and b) your "model", which actually does NOT explain anything at all, would be susceptible to experimental falsification. IF you made any specific prediction, THEN I am virtually certain that existing experiments would prove it wrong. Of course, you would need to make one first. On the other hand, the QM model has been subjected to a barrage of tests and has passed all.

4. Sure, the full universe could consist of photons which are not being detected today. Those photons, hypothetically, could have attributes which are different, on average, than those that were detected. No argument about the principle.

But that would be hotly contested if you actually came out with a model (which you obviously have not). The reason is that there is substantial evidence that no such thing actually occurs! I am not sure how much you know about the generation and detection of entangled particles, but they are not limited to photons. And the statistics don't really leave a whole lot of room for the kind of effect you describe. Particles with mass can be more accurately detected than photons as they have a bigger footprint. For example, Rowe's experiment sees violation of a Bell inequality with detection of the full sample of ions.

http://www.nature.com/nature/journal/v409/n6822/full/409791a0.html

So my point is that it is "easy" to get around Bell by predicting a difference with QM. But that very difference leads to immediate conflict with experiment. That is why Bell's Theorem is so important.
 
  • #120
From the abstract to the Rowe paper referenced above:

"Local realism is the idea that objects have definite properties whether or not they are measured, and that measurements of these properties are not affected by events taking place sufficiently far away. Einstein, Podolsky and Rosen used these reasonable assumptions to conclude that quantum mechanics is incomplete. Starting in 1965, Bell and others constructed mathematical inequalities whereby experimental tests could distinguish between quantum mechanics and local realistic theories. Many experiments (1, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15) have since been done that are consistent with quantum mechanics and inconsistent with local realism. But these conclusions remain the subject of considerable interest and debate, and experiments are still being refined to overcome 'loopholes' that might allow a local realistic interpretation. Here we have measured correlations in the classical properties of massive entangled particles (Be+ ions): these correlations violate a form of Bell's inequality. Our measured value of the appropriate Bell's 'signal' is 2.25 plus/minus 0.03, whereas a value of 2 is the maximum allowed by local realistic theories of nature. In contrast to previous measurements with massive particles, this violation of Bell's inequality was obtained by use of a complete set of measurements. Moreover, the high detection efficiency of our apparatus eliminates the so-called 'detection' loophole."

The first sentence should be recognized as something I have said many times on this board, in various ways. Namely, there are 2 critical assumptions associated with local realism, not 1. Realism being the existence of particle properties independent of the act of observation; and locality being the idea that those properties are not affected by spacelike separated events.
 
