response to post #109:
JesseM said:
It often seems like you may be intentionally playing one-upmanship games where you snip out all the context of some question or statement I ask and make it sound like I was confused about something very trivial
billschnieder said:
Pot calling kettle black.
Can you point to any examples where you think I have done something like this? I may misunderstand your meaning at times, but I generally quote and respond to almost all the context you provide, and in cases where I think you may be confused about something basic I usually adopt a tentative tone and say something like "if you are arguing X, then you're misunderstanding Y".
JesseM said:
This scenario, where there is a systematic bias in how doctors assign treatment which influences the observed correlations in frequencies between treatment and recovery in the sample, is a perfectly well-defined one
billschnieder said:
And this is different from the issue we are discussing how exactly.
It's not! If you looked carefully at the context when I brought up this example, you'd see my point was that in this example we aren't trying to establish a causal relation between treatment and recovery, but just want to know the statistical correlation between the two under the given observable conditions (which include the fact that doctors are assigning treatments). In this case the systematic bias isn't a problem at all; it's just part of the experimental conditions that we want to determine probabilities for!

Suppose the frequentist "God" knows it happens to be true that, in the limit as the number of patients being assigned treatments by these doctors went to infinity, the fraction of patients who recovered under treatment B (more of whom had small gallstones) would be 82%, and the fraction who recovered under treatment A (more of whom had large gallstones) would be 77%. Then if I, in my sample of 700 patients assigned treatments by doctors, found that 83% of those with treatment B recovered and 78% of those with treatment A recovered, the frequentist God would smile beneficently upon my experiment and say "good show old boy, your measured frequencies were very close to the ideal probabilities you were trying to measure!" Of course, if I tried to claim this meant treatment B was causally more effective, the frequentist God would become wrathful and cast me down into the lowest circle of statistician hell (reserved for those who fail to remember that correlation is not causation), but I'd remain in his favor as long as I was humble and claimed only that my observed frequencies were close to the ideal probabilities that would result if the same experimental conditions (including the systematic bias introduced by the doctors, which is not a form of sampling bias since the doctors themselves are part of the experimental conditions) were repeated a near-infinite number of times.
And again, my point is that this sort of situation, where we are only interested in the ideal probabilities that would result if the same experimental conditions were repeated a near-infinite number of times, and are not interested in establishing that correlations in observed frequencies represent actual causal influences, is directly analogous to the type of situation Bell is modeling in his equations. Whatever marginal and conditional probabilities appear in his equations, in the frequentist interpretation (which, again, I think is the only reasonable one to use when understanding what "probability" means in his analysis) they just represent the frequencies that would occur if the experiment were repeated with the same observable conditions a near-infinite number of times.
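To make the doctor scenario concrete, here is a quick Python sketch. All the numbers in it are my own illustrative assumptions (not the exact 82%/77% figures above): a hidden variable (gallstone size) influences both which treatment a doctor assigns and the chance of recovery, so the observed correlation between treatment and recovery does not reflect causal effectiveness, yet a finite sample still tracks the long-run frequencies of the same biased process.

```python
import random

# Illustrative (assumed) parameters of the biased assignment process:
P_SMALL = 0.5                              # P(patient has small gallstones)
P_B_GIVEN = {"small": 0.9, "large": 0.1}   # doctors' biased assignment rule
P_RECOVER = {                              # recovery depends on stone size AND treatment;
    ("A", "small"): 0.93, ("A", "large"): 0.73,  # A is better in BOTH strata
    ("B", "small"): 0.87, ("B", "large"): 0.69,
}

def recovery_rates(n_patients, seed=0):
    """Simulate n_patients passing through the biased assignment process
    and return the observed recovery frequency under each treatment."""
    rng = random.Random(seed)
    counts = {"A": [0, 0], "B": [0, 0]}    # treatment -> [recovered, total]
    for _ in range(n_patients):
        size = "small" if rng.random() < P_SMALL else "large"
        treatment = "B" if rng.random() < P_B_GIVEN[size] else "A"
        recovered = rng.random() < P_RECOVER[(treatment, size)]
        counts[treatment][1] += 1
        counts[treatment][0] += recovered
    return {t: rec / tot for t, (rec, tot) in counts.items()}

# The "ideal" long-run frequencies for this same biased process:
big = recovery_rates(1_000_000)
# A small sample, like the 700 patients in the example:
small = recovery_rates(700, seed=1)
print(big, small)
```

In both runs treatment B shows the higher observed recovery rate even though A has the higher recovery probability within each stratum, and the 700-patient frequencies approximate the million-patient ones, which is all the frequentist claim above requires.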
billschnieder said:
Haven't I told you umpteenth times that Aspect-type experimenters are unable to make sure there is no systematic bias in their experiments?
Yes, and the type of systematic bias I talk about above (which is different from sampling bias) isn't a problem for these experimenters, which is what I have been trying to tell you at least umpteen times. They are just trying to make sure the observed frequencies in the experiments match the ideal frequencies that would obtain if the experiment were repeated a near-infinite number of times under the same observed macro-conditions (while other unobserved conditions, like the exact state of various micro/hidden variables, would be allowed to vary). As long as any systematic correlation between unobserved conditions and observed results (akin to the systematic correlation between unobserved gallstone size and observed treatment type) in the actual experiment is just a mirror of systematic correlations which would also exist in the ideal near-infinite run, then they want that systematic bias to be there if their observed frequencies are supposed to match the ideal probabilities.
billschnieder said:
How do you expect me to continue a discussion with you if you ignore everything I say and keep challenging every tiny tangential issue, like the meaning of fair, or the meaning of population.
I don't think they are tangential though--as you said in this recent post, "Please, note I am trying to engage in a precise discussions so don't assume you know where I am going with this". The fact that you conflate different meanings of "fair" is actually pretty essential, because it means you falsely argue that the experimenters need to control for different values of hidden variables in a manner akin to controlling for gallstone size in the medical experiment where they're trying to determine the causal effectiveness of different treatments, and the fact that they don't need to "control for" the effects of hidden variables in this way in order to test local realism using Bell inequalities is central to my argument. Likewise the meaning of "population" gets to the heart of the fact that you refuse to consider Bell's analysis in terms of the frequentist view of probability (a thoroughly mainstream view, perhaps the predominant one, despite your attempts to portray my talk about infinite repetitions as somehow outlandish or absurd), where my argument is that the frequentist interpretation is really the only clear way to understand the meaning of the probabilities that appear in the proof (especially the ones involving hidden variables, which of course cannot be defined in an empirical way by us ordinary mortals who can't measure them).
billschnieder said:
You think I have all the time in the world to be following you down these rabbit trails which are not directly relevant to the issue being discussed.
Notice that whenever I ask questions intended to clarify the meaning of words like "population" and "fair/biased", I tend to say things like "if you want to continue using this term, please answer my questions"...if you think defining these terms is so "tangential", you have the option of just restructuring your argument to avoid using such terms altogether. Likewise, if you don't want to waste a lot of time on the philosophy of probability, you have the option to just say something like "I personally don't like the frequentist interpretation but I understand it's a very traditional and standard way of thinking about probabilities, and since I want to confront your (and Bell's) argument on its own terms, if you think the frequentist interpretation is the best way to think about the probabilities that appear in Bell's proof, I'll agree to adopt this interpretation for the sake of the argument rather than get into a lot of philosophical wrangling about the meaning of probability itself". But if you can't be a little accommodating in ways like these, then this sort of wrangling seems necessary to me.
billschnieder said:
Have you noticed every of your responses is now three posts long
None of my responses to individual posts of yours have gone above two posts, actually.
billschnieder said:
See the previous paragraph for the reason why. I answer the ones that I believe will further the relevant discussion and ignore temptations to go down yet another rabbit trail.
Well, at least in your most recent posts you've addressed some of my questions and the lotto example, showing you aren't just refusing for the sake of being difficult. Thanks for that. And see above for why I do think the issues I raise are relevant and not tangential. If you both refuse to discuss the meaning of terms and the interpretation of probability, and refuse "for the sake of the argument" to stop using the terms and adopt the mainstream frequentist view of probability, then I think there is no way to continue having a meaningful discussion.
JesseM said:
"Rational degree of belief" is a very ill-defined phrase. What procedure allows me to determine the degree to which it is rational to believe a particular outcome will occur in a given scenario?
ThomasT said:
It is well defined to me. If you disagree, give an example and I will show you how a rational degree of belief can be formed. Or better, give an example in which you think the above definition does not apply.
OK, here are a few:
--suppose we have a coin whose shape has been distorted by intense heat, and want to know the "probability" that it will come up heads when flipped, which we suspect will no longer be 0.5 due to the unsymmetrical shape and weight distribution. With "probability" defined as "rational degree of belief", do you think there can be any well-defined probability before we have actually tried flipping it a very large number of times (or modeling a large number of flips on a computer)?
--in statistical mechanics the observable state of a system like a box of gas can be summed up with a few parameters whose value gives the system's "macrostate", like temperature and pressure and entropy. A lot of calculations depend on the idea that the system is equally likely to be in any of the "microstates" consistent with that macrostate, where a microstate represents the most detailed possible knowledge about every particle making up the system. Do you think this is justified under the "rational degree of belief" interpretation, and if so how?
--How do you interpret probabilities which are conditioned on the value of a hidden variable H whose value (and even range of possible values) is impossible to measure empirically? I suppose we could imagine a quasi-omniscient being who can measure it and form rational degrees of belief about unknown values of A and B based on knowledge of H, but this is just as non-empirical as the frequentist idea of an infinite set of trials. So would you say an expression like P(AB|H) is just inherently meaningless? You didn't seem to think it was meaningless when you debated what it should be equal to in the OP here, though. If you still defend it as meaningful, I'd be interested to hear how the "rational degree of belief" interpretation handles a totally non-empirical case like this.
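On the statistical mechanics example in the second bullet, here is a toy Python illustration (my own toy system, not a real gas): a "gas" of 4 two-level particles, where a microstate is the full tuple of particle states and the macrostate is just the total energy. The equal-a-priori-probability postulate then assigns each microstate consistent with a given macrostate the same weight.

```python
from itertools import product
from math import comb

# Toy "gas": 4 particles, each in energy level 0 or 1.
N = 4
microstates = list(product([0, 1], repeat=N))  # all 2**N detailed configurations

# Group microstates by macrostate (the total energy).
by_macrostate = {}
for m in microstates:
    by_macrostate.setdefault(sum(m), []).append(m)

for energy, states in sorted(by_macrostate.items()):
    # Under the equal-a-priori-probability postulate, each of these
    # microstates gets probability 1/len(states) given the macrostate.
    assert len(states) == comb(N, energy)
    print(f"macrostate E={energy}: {len(states)} microstates, "
          f"each with probability {1 / len(states):.4f}")
```

The question in the bullet above is what, under a "rational degree of belief" reading, justifies assigning these equal weights in the first place.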
billschnieder said:
My definition above covers both the "frequentists" and "bayesian" views as special cases
How can you treat the frequentist view as a "special case" when in that interpretation all probabilities are defined in terms of infinite samples, whereas you seem to be saying the definition of probability should never have anything to do with imaginary scenarios involving infinite repetitions of some experiment?
billschnieder said:
So why are you so surprised when I tell you that such idealized problems, which presuppose infinite independent repetitions of a "random experiment" can not be directly compared to anything real, where infinite repetition of a "random experiment" is not possible?
I'm surprised because here you seem to categorically deny the logic of the frequentist interpretation, when it is so totally mainstream (I noticed on p. 89 of the book I linked to earlier that the whole concept of a "sample space" comes from von Mises' frequentist analysis of probability, although it was originally called the "attribute space") and when even those statisticians who don't prefer the frequentist interpretation would probably acknowledge that the law of large numbers means it is reasonable to treat frequencies in real-world experiments with large samples as a good approximation to the frequentist's ideal frequencies in a hypothetical infinite series of trials. For example, would you deny that if we flip a distorted coin 1000 times in the same style, whatever fraction of heads we observe is likely to be close to the ideal fraction that would occur if (purely hypothetically) we could flip it a vastly greater number of times in the same style without the coin degrading over time?
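The 1000-flip claim is easy to sketch in Python. The true long-run heads fraction of a real bent coin is of course unknown; here I simply assume one (0.57) so the simulator can play the role of the frequentist "God" and we can watch the finite-sample frequencies converge toward it.

```python
import random

TRUE_P = 0.57  # assumed ideal long-run heads fraction, known only to the simulator

def heads_fraction(n_flips, seed=0):
    """Flip the distorted coin n_flips times 'in the same style' and
    return the observed fraction of heads."""
    rng = random.Random(seed)
    return sum(rng.random() < TRUE_P for _ in range(n_flips)) / n_flips

print(heads_fraction(1_000))      # close to the ideal 0.57
print(heads_fraction(1_000_000))  # closer still, per the law of large numbers
```

The 1000-flip frequency typically lands within a couple of percentage points of the ideal value, which is exactly the sense in which a large but finite real-world sample approximates the hypothetical infinite run.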
billschnieder said:
If Bell's theorem were an entirely theoretical exercise with no comparison being made to reality, and no conclusions about reality being drawn from it, do you really believe we would be having this discussion?
Again you strangely act as if I am saying something weird or bizarre by talking about infinite repetitions, suggesting either that you aren't familiar with frequentist thought or that you think a huge proportion of the statistics community is thoroughly deluded if they believe the frequentist definition is even meaningful (regardless of whether they favor it personally). Surely you must realize that the mainstream view says ideal probabilities (based on a hypothetical infinite sample size) can be compared with real frequencies thanks to the law of large numbers, and that even if you think I'm wrong to take that view, there's certainly nothing novel about it.
JesseM said:
I'm just saying that to talk about "probability" in the frequentist interpretation you need to define the conditions that you are imagining being repeated in an arbitrarily large number of trials.
billschnieder said:
No I don't. You are the one who insists probability must be defined that way not me.
I don't say it "must be", just that it's a coherent view of probability and it's the one that makes the most sense when considering the totally non-empirical probabilities that appear in Bell's reasoning.
JesseM said:
would you agree that when defining the sample space, we must define what process was used to assign treatments to patients, that a sample space where treatment was assigned by doctors would be a different one than a sample space where treatment was assigned by a random number generator on a computer?
billschnieder said:
Yes, I have told you as much recently. But what has that got to do with anything.
It's got to do with the point I made at the start of this post (repeating something I had said in many previous posts): if you are explicitly defining your sample space in terms of conditions that cause a systematic correlation between the values of observable and hidden variables (like people being more likely to be assigned treatment B if they have small gallstones) and are just trying to measure the probabilities for observable variables in this sample space (not trying to claim that correlations between observable variables mean they are having a causal influence on one another, like the claim that the higher correlation between treatment B and recovery means treatment B is causally more effective), then this particular form of "systematic bias" in your experiments is no problem whatsoever! And this is why, in the Aspect-type experiments, it's no problem if the hidden variables are more likely to take certain values on trials where an observable variable like A took one value (Alice measuring spin-up with her detector setting, say) than another value (Alice measuring spin-down).
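This closing point can be sketched with a toy hidden-variable model in Python (my own illustrative model, not Bell's actual construction): a hidden variable H biases Alice's observed outcome, so H is systematically correlated with the outcome, yet the frequencies in a finite run still track the ideal long-run frequencies of that same biased process.

```python
import random

def run_trials(n, seed=0):
    """Simulate n trials in which a hidden variable H biases the
    observed outcome A; return (observed P(A=up), observed P(H=1 | A=up))."""
    rng = random.Random(seed)
    n_up = 0
    h_and_up = 0
    for _ in range(n):
        h = rng.random() < 0.5        # hidden variable, 50/50 on each trial
        p_up = 0.8 if h else 0.2      # H systematically biases the outcome
        if rng.random() < p_up:
            n_up += 1
            h_and_up += h
    return n_up / n, h_and_up / n_up

p_up_small, p_h_small = run_trials(700)        # a finite experimental run
p_up_big, p_h_big = run_trials(1_000_000)      # stand-in for the ideal run
print(p_up_small, p_up_big)   # both near the ideal P(A=up) = 0.5
print(p_h_small, p_h_big)     # both near 0.8: H correlates with the outcome
```

The correlation P(H=1 | A=up) is far from the marginal P(H=1) = 0.5, i.e. the hidden variable is "biased" toward certain observed outcomes, and precisely because that correlation is part of the process itself, the finite-run frequencies still approximate the ideal ones.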