## [SOLVED] probability (was Re: EEQT)

On 2004-08-12, Arnold Neumaier <Arnold.Neumaier@univie.ac.at> wrote:
> Probabilities of single events are meaningless.

Oh boy. This is a really blatant misconception of probability. I can't
tell whether the context rescues this (it seems not, though you are
arguing against another 'interpretation' that I also find odious), but
the words themselves make probability a useless concept.

One can try to rescue it, e.g. by looking at ensembles, assuming
exchangeability & independence, and trying to identify frequencies with
probabilities, but this gets incoherent, circular, or irrelevant to
experiment very quickly.

Independence is a statement about non-correlation between probabilities
of single events, which you claim are meaningless.

Exchangeability is a statement that certain classes of possible
realizations of ensembles are equally likely -- in other words, it
depends on having a concept of probability for single events, where
these single events are realizations of ensemble measurements.

If you want to use finite ensembles, all probabilities must now be
rationals, which seems a big limitation. We're also not guaranteed that
the final frequency really has any connection to the probability as we
want it -- instead we have to bring up "for all practical purposes"
arguments.
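To make the finite-ensemble point concrete, here is a small Python sketch (my own illustration, not part of the original post; the function name is mine): a finite ensemble of n trials can only ever yield frequencies of the form k/n, so an irrational probability can never be matched exactly, only approximated.

```python
from fractions import Fraction
import random

def ensemble_frequency(p, n, seed=0):
    """Simulate a finite ensemble of n Bernoulli(p) trials and
    return the observed frequency as an exact rational number."""
    rng = random.Random(seed)
    hits = sum(rng.random() < p for _ in range(n))
    return Fraction(hits, n)

# With p = 1/pi (irrational), no finite ensemble frequency can equal p:
p = 1 / 3.141592653589793
for n in (10, 100, 1000):
    f = ensemble_frequency(p, n)
    print(n, f, float(f) - p)   # frequency is always rational, never exactly p
```

Whatever n we choose, the returned `Fraction` has a denominator dividing n, which is the sense in which a finite ensemble forces all probabilities to be rationals.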

If you want to use infinite ensembles (the only case where we can
_really_ say that the limiting frequencies approach the probability),
we have another problem: infinite ensembles' limiting frequencies are a
tail property. We can only measure the head, which can be arbitrarily
different, no matter how far out you measure it. There's no grounding
that lets us connect the rest of the ensemble to what we actually
measure.
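The tail-property point can be illustrated with a toy computation (my own sketch, not from the original post): two infinite 0/1 sequences that differ on an arbitrarily long finite head still have the same limiting frequency, so no finite measurement pins the limit down.

```python
def limiting_frequency(seq_rule, n):
    """Frequency of 1s among the first n terms of an infinite 0/1
    sequence whose i-th term is seq_rule(i)."""
    return sum(seq_rule(i) for i in range(n)) / n

# A sequence with limiting frequency 1/2 ...
base = lambda i: i % 2
# ... and a version whose first million entries are overwritten with 1s.
# The tail, and hence the limiting frequency, is unchanged:
corrupted = lambda i: 1 if i < 10**6 else i % 2

# But any head we can actually observe may be arbitrarily misleading:
print(limiting_frequency(corrupted, 10**6))   # 1.0 -- looks nothing like 1/2
```

Measuring the first million terms of `corrupted` suggests a frequency of 1, even though its limiting frequency equals that of `base`.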

Both types of ensembles are imaginary and have nothing to do with
any data we've actually measured.

--
Aaron Denney
-><-


Aaron Denney wrote:
> On 2004-08-12, Arnold Neumaier wrote:
>
>> Probabilities of single events are meaningless.
>
> Oh boy. This is a really blatant misconception of probability.

The misconception seems to be completely on your side.

What is the probability that 'I will die of cancer'?
This is a single event that either will happen, or will not happen.
If you consider this single event only, the probability is 1 or 0
depending on what will actually happen. (But this sort of probability
is not what we talk about in physics.)

On the other hand one may assign a probability based on some facts
about me. These facts determine an ensemble of people, from which
one can form a statistical estimate of the probability. It clearly
depends on which sort of ensemble one regards me as belonging to,
what probability you will assign. I belong to many ensembles, and
the answer is different for each of these.

Thus probabilities are meaningful not for the single event but only
as a property of the ensemble under consideration.

This can also be seen from the mathematical foundations. Probabilities
are determined by measures on the set of elementary events. All
statements in measure theory are _only_ about expectations and
probabilities of all possible realizations simultaneously, and say
nothing at all about any particular realization.

For a finite binary random sequence with independent bits, the
sequence 111111111 has exactly the same status and probability as the
sequence 101001101 or 000000000, although only the second looks
random.

Arnold Neumaier



Arnold Neumaier says...

> What is the probability that 'I will die of cancer'?
> This is a single event that either will happen, or will not happen.
> If you consider this single event only, the probability is 1 or 0
> depending on what will actually happen. (But this sort of probability
> is not what we talk about in physics.)
>
> On the other hand one may assign a probability based on some facts
> about me. These facts determine an ensemble of people, from which
> one can form a statistical estimate of the probability.

I don't see how ensembles help give any more precise meaning to
probability. Suppose you say that "The probability that someone in
risk group A will die of cancer is 1/3". That doesn't mean that for
any 3 people in group A, 1 of them will die of cancer. It doesn't
mean that for any 30 people, 10 of them will die of cancer. It
doesn't even mean that in the limit as N goes to infinity, the ratio
f_N = (# who die of cancer)/N equals 1/3. What is true is that almost
surely f_N goes to 1/3 as N goes to infinity, but that "almost
surely" is a probabilistic concept as well. So you can't define
probabilities completely in terms of ensembles.

Saying that there is a 1/3 chance that I will die of cancer is
meaningful without ensembles if you interpret that as a measure of my
*belief* that I will die of cancer (1 meaning that I'm certain I
will, 0 meaning that I'm certain I won't). Of course, that's
unsatisfying because we feel that quantum mechanical probabilities
are revealing something objective, rather than subjective. So I don't
know what the resolution is, but I don't think ensembles are on any
firmer ground than any other interpretation of probability.

--
Daryl McCullough
Ithaca, NY
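The "almost surely" point above can be seen numerically. A quick Python sketch (my own illustration, not from the original post): the observed frequency wanders toward 1/3 as N grows, but nothing guarantees it equals 1/3 for any finite N.

```python
import random

def sample_frequency(p, n, seed):
    """Observed frequency of the event in n independent trials,
    each occurring with probability p."""
    rng = random.Random(seed)
    return sum(rng.random() < p for _ in range(n)) / n

p = 1 / 3
for n in (30, 3000, 300000):
    # Frequencies drift toward 1/3 but need not hit it exactly:
    print(n, sample_frequency(p, n, seed=42))
```

The law of large numbers says only that large deviations become improbable, and "improbable" is itself a probabilistic notion, which is exactly the circularity being discussed.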


On 2004-08-13, Arnold Neumaier <Arnold.Neumaier@univie.ac.at> wrote:
> Aaron Denney wrote:
>> On 2004-08-12, Arnold Neumaier <Arnold.Neumaier@univie.ac.at> wrote:
>>
>>> Probabilities of single events are meaningless.
>>
>> Oh boy. This is a really blatant misconception of probability.
>
> The misconception seems to be completely on your side.
>
> What is the probability that 'I will die of cancer'?
> This is a single event that either will happen, or will not happen.

Yep. Events don't have to "half-happen" to have a probability of 0.5.

> If you consider this single event only, the probability is 1 or 0
> depending on what will actually happen.

After the fact, yes. Before no, unless you know enough to time
evolve the system.

This same argument would also lead to the probability of heads on a coin
flip being 0 or 1, depending on exactly how one flips it. Actually,
if you know enough about the flip beforehand, these would be the proper
ones, no matter how fair the coin is.

> (But this sort of probability is not what we talk about in physics.)
> On the other hand one may assign a probability based on some facts
> about me.

It isn't? It sounds like we would all the time, and it is just a
limiting case of your example.

If we have all the facts we can push the probabilities to 0 or 1.

(Speaking classically. A full QM treatment of your life
would involve interactions with radioactive particles, which
will undoubtedly keep it from being exactly 0 or 1, though
laws of large numbers will probably push it near 0 or 1 -- either
you got enough or you didn't.)

> These facts determine an ensemble of people, from which
> one can form a statistical estimate of the probability.

But this step is unnecessary.

> It clearly depends on which sort of ensemble one regards me as
> belonging to, what probability you will assign. I belong to many
> ensembles, and the answer is different for each of these.

Can you tell me which one is the right one to use for this question?
Why or why not? Since you said it is either 0 or 1, which ensemble
gives that answer?

You don't belong to _any_ ensemble. They're sometimes useful for
constructing models, but they're not the only way to do it.

If you want to assign something a probability of 1/3, that doesn't
require the construction of an ensemble of size 3N, which includes N
ways that event can happen.

> Thus probabilities are meaningful not for the single event but only
> as a property of the ensemble under consideration.

And yet people bet on individual events all the time.

> This can also be seen from the mathematical foundations. Probabilities
> are determined by measures on the set of elementary events.

That's one way of defining the axioms of probability theory and getting
the standard results for manipulating probabilities in various self
consistent and maximally useful ways. It's not the only way, though
it can give a nice intuitive understanding (assuming one knows
measure theory better than probability theory, which seems unlikely).

If you want to invoke measure theory, fine -- but then the measures
_are_ the probabilities, and the measures are not determined by the
ensembles, only by the set you choose for the elementary events.

> All statements in measure theory are _only_ about expectations and
> probabilities of all possible realizations simultaneously, and say
> nothing at all about any particular realization.

All statements in measure theory are about, well, measures over
sets and subsets. If you're modeling probability with it, then
the measure over a subset _is_ supposed to be the probability
of that subset occurring, or the probability of those particular
realizations. Of course they don't say that the event will or
will not happen, unless the probability is zero or one.

> For a finite binary random sequence with independent bits,
> the sequence 111111111 has exactly the same status and probability
> as the sequence 101001101 or 000000000, although only the second
> looks random.

I'm not sure what you're getting at here. Were this phrased
a bit more precisely, I wouldn't disagree.

How do you define "random" and "independent" for each bit, if single
events don't have probabilities? If 1 is more or less probable than
0 for each digit, they will indeed have different statuses and
probabilities.

--
Aaron Denney
-><-
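The claim about equally probable bit strings is easy to check numerically. A small Python sketch (my own illustration, not part of the thread): under independent fair bits every 9-bit string has probability (1/2)^9, while under biased bits the "random-looking" and "non-random-looking" strings really do get different probabilities, which is Aaron's point.

```python
def sequence_probability(bits, p_one=0.5):
    """Probability of a particular bit string under independent bits
    with P(1) = p_one at each position."""
    prob = 1.0
    for b in bits:
        prob *= p_one if b == "1" else (1 - p_one)
    return prob

# Fair, independent bits: all three 9-bit strings are equally likely.
for s in ("111111111", "101001101", "000000000"):
    print(s, sequence_probability(s))          # each exactly 1/512

# Biased bits (P(1) = 0.8, an arbitrary choice): they are not.
for s in ("111111111", "101001101", "000000000"):
    print(s, sequence_probability(s, 0.8))
```

So "same status and probability" holds only under the assumption that each single bit is equally likely to be 0 or 1, i.e. it already presupposes per-bit probabilities.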


In article <411CA50D.8080100@univie.ac.at>, Arnold Neumaier writes:
|> Aaron Denney wrote:
|> > On 2004-08-12, Arnold Neumaier wrote:
|> >
|> >> Probabilities of single events are meaningless.
|> >
|> > Oh boy. This is a really blatant misconception of probability.
|>
|> The misconception seems to be completely on your side.
|>
|> What is the probability that 'I will die of cancer'?
|> This is a single event that either will happen, or will not happen.
|> If you consider this single event only, the probability is 1 or 0
|> depending on what will actually happen. (But this sort of probability
|> is not what we talk about in physics.)
|>
|> On the other hand one may assign a probability based on some facts
|> about me. These facts determine an ensemble of people, from which
|> one can form a statistical estimate of the probability.

I am sorry, but you are wrong on both counts. There are several
concepts of probability, and mathematical statisticians work with all
except the erroneous one:

The purest mathematical one is a specialisation of measure theory, and
probabilities are simply positive measures of total measure one over
some Borel set. Nice and easy, but PURE mathematics.

The simplest semi-physical approximation is a 'repeatable' experiment
(let's leave the philosophy of repeatability out of it), which is what
most people are taught at a naive level.

There is a very common error where excessive simplification leads
people to confuse the concepts of a distribution of a measurement over
a sample and a probability. You have made that error, I am afraid,
though not in its simple form.

There is, however, also the concept of the probability of a
non-repeatable event, which can be handled mathematically just as
easily as a repeatable experiment. What you can't do is to MEASURE
such probabilities, though you can do some measurement with ensembles
(as you correctly state).

Note, however, that this all depends on the existence of time's arrow
(causality again!), because the probability has a meaning only up
until the time that the event takes place (and it may change as time
progresses, too). Whereafter, it is either 0 or 1, true.

So, if you are working with a model of the universe that does not have
such a concept, your first statement is correct. But few physicists
do.

Regards,
Nick Maclaren.



In article , Aaron Denney writes:
|> On 2004-08-12, Arnold Neumaier wrote:
|> > Probabilities of single events are meaningless.
|>
|> Oh boy. This is a really blatant misconception of probability. I can't
|> tell whether the context rescues this (it seems not, though you are
|> arguing against another 'interpretation' that I also find odious), but
|> the words themselves make probability a useless concept.

That is true :-( But see below for a possible cause of confusion.

It is also claimed that talking about the probability of an inherently
non-repeatable event is meaningless, but even that is based on a naive
and mistaken view of probability. What is true is that it is quite
hard to do much with such probabilities.

|> Independence is a statement about non-correlation between probabilities
|> of single events, which you claim are meaningless.

Er, that is NOT well-phrased! There is also the serious problem that
many people may be using "event" to mean what a probabilist would call
"outcome". The word "event" is a common and fruitful source of
confusion.

Regards,
Nick Maclaren.



Daryl McCullough wrote:
> Arnold Neumaier says...
>
>> What is the probability that 'I will die of cancer'?
>> This is a single event that either will happen, or will not happen.
>> If you consider this single event only, the probability is 1 or 0
>> depending on what will actually happen. (But this sort of probability
>> is not what we talk about in physics.)
>>
>> On the other hand one may assign a probability based on some facts
>> about me. These facts determine an ensemble of people, from which
>> one can form a statistical estimate of the probability.
>
> I don't see how ensembles help give any more precise meaning to
> probability. Suppose you say that "The probability that someone
> in risk group A will die of cancer is 1/3". That doesn't mean
> that for any 3 people in group A, 1 of them will die of cancer.
> It doesn't mean that for any 30 people, 10 of them will die of
> cancer.

I claimed neither of these.

To say that "The probability that someone in risk group A will die of
cancer is 1/3" means nothing more or less than that exactly 1/3 of
_all_ people in risk group A will die of cancer. Of course, we cannot
check this before we have information about how all people in risk
group A died, but once we have this information, we know.

Usually we only have incomplete knowledge about the ensemble. This is
why statisticians say that they _estimate_ probabilities based on
_incomplete_ knowledge, collected from a sample, whereas they
_compute_ probabilities from _assumed_ complete knowledge about the
ensemble, namely the theoretical probability distribution. Estimates
are usually inaccurate but useful; this reconciles the two
approaches; hence one finds no difficulties at all in actual
practice.

A more extensive discussion can be found in my theoretical physics
FAQ at http://www.mat.univie.ac.at/~neum/physics-faq.txt
It currently has the following entries about classical probability:
5a. Random numbers in probability theory
5b. How meaningful are probabilities of single events?
5c. What about the subjective interpretation of probabilities?
5d. What is the meaning of probabilities?
5e. How do probabilities apply in practice?
5f. Priors and entropy in probability theory

> Saying that there is a 1/3 chance that I will die of cancer is
> meaningful without ensembles if you interpret that as a measure
> of my *belief* that I will die of cancer (1 meaning that I'm
> certain I will, 0 meaning that I'm certain I won't). Of course,
> that's unsatisfying because we feel that quantum mechanical
> probabilities are revealing something objective, rather than
> subjective.

This is also unsatisfactory because "belief" is an even more poorly
defined concept than probability, one that depends on the psychology
of human beings, which is not a sound foundation for physics. On the
other hand, "ensemble" is a precise concept with far-reaching
applications, independent of any beliefs, and hence adequate for use
in quantum mechanics.

Arnold Neumaier



Nick Maclaren wrote:
> In article <411CA50D.8080100@univie.ac.at>,
> Arnold Neumaier writes:
> |> What is the probability that 'I will die of cancer'?
> |> This is a single event that either will happen, or will not happen.
> |> If you consider this single event only, the probability is 1 or 0
> |> depending on what will actually happen. (But this sort of probability
> |> is not what we talk about in physics.)
> |>
> |> On the other hand one may assign a probability based on some facts
> |> about me. These facts determine an ensemble of people, from which
> |> one can form a statistical estimate of the probability.
>
> I am sorry, but you are wrong on both counts. There are several
> concepts of probability, and mathematical statisticians work with
> all except the erroneous one:
>
> The purest mathematical one is a specialisation of measure theory,
> and probabilities are simply positive measures of total measure one
> over some Borel set. Nice and easy, but PURE mathematics.

This case conforms to my statements. A discrete measure with
probabilities that are integral multiples of 1/N can be viewed as an
ensemble of N experiments. All statements that make sense for
discrete measures make corresponding assertions about such an
ensemble. From a practical point of view, all other measures are
just useful approximations to summarise the information in ensembles
with very large N.

> The simplest semi-physical approximation is a 'repeatable'
> experiment (let's leave the philosophy of repeatability out of
> it), which is what most people are taught at a naive level.

Here the concept of probability is already dubious, unless you
specify which set of repetitions (i.e., the ensemble) you are
applying it to.

> There is a very common error where excessive simplification
> leads people to confuse the concepts of a distribution of a
> measurement over a sample and a probability. You have made that
> error, I am afraid, though not in its simple form.

No. There is a sample distribution, and there is a theoretical
distribution. Both assign probabilities to events. The sample
distribution is the right one if the ensemble is taken as the sample
you know; the theoretical distribution is the right one if the
ensemble is taken as the set of all elements in the set upon which
the $\sigma$ algebra is based.

> There is, however, also the concept of the probability of a
> non-repeatable event, which can be handled mathematically just
> as easily as a repeatable experiment.

If it is as easy, please give the mathematical formulation of the
probability that the earth will be hit by an asteroid in the year
9999.

> What you can't do is to
> MEASURE such probabilities, though you can do some measurement
> with ensembles (as you correctly state).

As I mentioned in another post, probability assignments to single
events can be neither verified nor falsified. Thus they are
meaningless.

Arnold Neumaier



Aaron Denney wrote:
> On 2004-08-13, Arnold Neumaier wrote:
>
>> What is the probability that 'I will die of cancer'?
>> This is a single event that either will happen, or will not happen.
>
> Yep. Events don't have to "half-happen" to have a probability of 0.5.

Probability assignments to single events can be neither verified nor
falsified. Indeed, suppose we intend to throw a coin exactly once.
Person A claims 'the probability of the coin coming out head is 50%'.
Person B claims 'the probability of the coin coming out head is 20%'.
Person C claims 'the probability of the coin coming out head is 80%'.
Now we throw the coin and find 'head'. Who was right? It is
undecidable. Thus there cannot be objective content in the statement
'the probability of the coin coming out head is p', when applied to a
single case.

Subjectively, of course, every person may feel (and is entitled to
feel) right about their probability assignment. But for use in
science, such a subjective view (where everyone is right, no matter
which statement was made) is completely useless.

>> If you consider this single event only, the probability is 1 or 0
>> depending on what will actually happen.
>
> After the fact, yes. Before no, unless you know enough to time
> evolve the system.

What if someone knows the fact and someone else doesn't??? I am
discussing objective probability, since physics is an objective
science.

>> It clearly depends on which sort of ensemble one regards me as
>> belonging to, what probability you will assign. I belong to many
>> ensembles, and the answer is different for each of these.
>
> Can you tell me which one is the right one to use for this question?
> Why or why not? Since you said it is either 0 or 1, which ensemble
> gives that answer?

The ensemble consisting of me only, as appropriate for a single case.
In other ensembles, the probability is just the proportion of people
in the ensemble dying of cancer; of course this probability, though
it is a well-defined number, can be estimated only approximately - at
least until I am dead ;-)

>> Thus probabilities are meaningful not for the single event but only
>> as a property of the ensemble under consideration.
>
> And yet people bet on individual events all the time.

Oh yes. They estimate probabilities, based on their favorite
ensemble. But as you know, people often lose their bets!

>> This can also be seen from the mathematical foundations. Probabilities
>> are determined by measures on the set of elementary events.
>
> That's one way of defining the axioms of probability theory and getting
> the standard results for manipulating probabilities in various self
> consistent and maximally useful ways. It's not the only way,

But it is the only consistent way.

>> All statements in measure theory are _only_ about expectations and
>> probabilities of all possible realizations simultaneously, and say
>> nothing at all about any particular realization.
>
> All statements in measure theory are about, well, measures over
> sets and subsets. If you're modeling probability with it, then
> the measure over a subset _is_ supposed to be the probability

"_is_", not "_is_ supposed to be".

> of that subset occurring, or the probability of those particular
> realizations. Of course they don't say that the event will or
> will not happen, unless the probability is zero or one.

Yes; this is why they say nothing at all about the single case.

>> For a finite binary random sequence with independent bits,
>> the sequence 111111111 has exactly the same status and probability
>> as the sequence 101001101 or 000000000, although only the second
>> looks random.
>
> I'm not sure what you're getting at here. Were this phrased
> a bit more precisely, I wouldn't disagree.
>
> How do you define "random" and "independent" for each bit, if single
> events don't have probabilities? If 1 is more or less probable than
> 0 for each digit, they will indeed have different statuses and
> probabilities.

Here I meant 'random bit' to say both probabilities are 1/2. A random
sequence is _not_ a sequence of numbers but a sequence of random
numbers. Only the realizations are sequences of ordinary numbers.
Sequences of ordinary numbers are _never_ random, but they can 'look
random' (in a subjective sense).

Arnold Neumaier



In article <411F45F1.10704@univie.ac.at>, Arnold Neumaier wrote: >Daryl McCullough wrote: > >To say that "The probability that someone in risk group A will die >of cancer is 1/3" means nothing more or less than that exactly 1/3 >of _all_ people in risk group A will die of cancer. That is completely and utterly wrong. You can see that by taking a nice, simple example (i.e. not cancer). Fair coins have a probability .5 of coming up heads. If you toss 10 fair coins, it is NOT necessarily the case that exactly 5 will show heads. You get exactly the same situation with a fixed number of electrons in quantum mechanics. >Of course, we cannot check this before we have information about >how all people in risk group A died, but once we have this information, >we know. Let us say 7 coins out of 10 show heads. Would that mean that the probability of a fair coin showing heads was 70%? Oh, come now. If you toss them again (which is where repeatable experiments come in), you might well have 4 come up heads. And so on. >Usually we only have incomplete knowledge about the ensemble. >This is why statisticians say that they _estimate_ probabilities >based on _incomplete_ knowledge, collected from a sample. That is true, but it is only ONE of the things that statisticians do. And not the most important one, either. >Whereas they _compute_ probabilities from $_assumed_complete$ knowledge >about the ensemble, namely the theoretical probability distribution. Grrk. We also compute confidence intervals on probabilities based on data, compute probabilities based on a known mathematical model and values of its parameters, compute various forms of best estimates of probabilities based on data, and so on. >Estimates are usually inaccurate but useful; this reconciles the two >approaches; hence one finds no difficulties at all in actual practice. Hmm. I am glad that you find the problems so simple. Not all leading statisticians do. 
>> Saying that there is a 1/3 chance that I will die of cancer is
>> meaningful without ensembles if you interpret that as a measure
>> of my *belief* that I will die of cancer (1 meaning that I'm
>> certain I will, 0 meaning that I'm certain I won't). Of course,
>> that's unsatisfying because we feel that quantum mechanical
>> probabilities are revealing something objective, rather than
>> subjective.
>
>This is also unsatisfactory because "belief" is an even poorer defined
>concept than probability, that depends on psychology of human beings,
>which is not a sound foundation for physics.

Yes and no. That is not true if he defines the mathematical model he
is using and the data and methods he is using to estimate the
parameters of that model. That is, after all, precisely the
statistical analogue of measuring a physical constant!

>On the other hands, "ensemble" is a precise concept with far-reaching
>applications, independent of any beliefs, and hence adequate for use
>in quantum mechanics.

Well, it wasn't a standard term when I did my (masters equivalent)
course in mathematical statistics, though that was some 30+ years
back. I can guess what it means, but I don't think that "a precise
concept" is what a mathematical statistician would call it.

Please note that I am replying to this posting and not yours to mine,
because it gives a clearer example of some of the issues.

Regards,
Nick Maclaren.
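[Editorial illustration, not part of the thread: Nick's coin example is easy to check numerically. This sketch tosses 10 fair coins repeatedly; individual batches rarely show exactly 5 heads, but the average over many batches settles near 5.]

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def heads_in_10():
    """Toss 10 fair coins; return the number of heads."""
    return sum(random.random() < 0.5 for _ in range(10))

# Individual batches of 10 tosses rarely give exactly 5 heads...
print([heads_in_10() for _ in range(8)])

# ...but the average over many batches settles near 5.
mean = sum(heads_in_10() for _ in range(100_000)) / 100_000
print(round(mean, 2))
```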



Nick Maclaren wrote:
> In article <411F45F1.10704@univie.ac.at>,
> Arnold Neumaier wrote:
>
>>Daryl McCullough wrote:
>>
>>To say that "The probability that someone in risk group A will die
>>of cancer is 1/3" means nothing more or less than that exactly 1/3
>>of _all_ people in risk group A will die of cancer.
>
> That is completely and utterly wrong. You can see that by taking
> a nice, simple example (i.e. not cancer).
>
> Fair coins have a probability .5 of coming up heads. If you toss
> 10 fair coins, it is NOT necessarily the case that exactly 5 will
> show heads.

This is not a correct translation of my claim. If you take any finite
$\sigma$ algebra representing a fair coin, one has a finite ensemble
of elementary events, and exactly half of them come out heads. If you
take an infinite $\sigma$ algebra, the ensemble is infinite, but with
the natural weighting, again exactly half of them come out heads.
This is precisely what I claimed.

'Tossing 10 fair coins' is just a sloppy way of saying 'Selecting a
sample of size 10 from the total ensemble', and it is obvious that
here the number of heads is 5 only on average over many random
samples, again as I had claimed in an unquoted part of the post to
which you replied.

>>On the other hands, "ensemble" is a precise concept with far-reaching
>>applications, independent of any beliefs, and hence adequate for use
>>in quantum mechanics.
>
> Well, it wasn't a standard term when I did my (masters equivalent)
> course in mathematical statistics, though that was some 30+ years
> back. I can guess what it means, but I don't think that "a precise
> concept" is what a mathematical statistician would call it.

I am talking here (s.p.r.) physics language. In mathematical terms, a
classical ensemble is the set of elementary events underlying the
$\sigma$ algebra over which the measure is defined.

Arnold Neumaier
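[Editorial illustration, not part of the thread: Arnold's finite-ensemble picture can be made concrete by enumerating every equally weighted 10-toss sequence; over the whole ensemble the average number of heads is exactly 5, even though individual sequences range from 0 to 10 heads.]

```python
from itertools import product

# The full finite ensemble of 10-toss sequences, each elementary
# event equally weighted: 2**10 = 1024 sequences in total.
outcomes = list(product('HT', repeat=10))
avg_heads = sum(seq.count('H') for seq in outcomes) / len(outcomes)
print(avg_heads)  # 5.0: exact over the whole ensemble,
                  # though individual sequences have 0..10 heads
```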



Arnold Neumaier says...

>Probability assignments to single events can be neither verified nor
>falsified.

Probabilistic predictions can *never* be verified or falsified by any
(finite) number of observations. If the prediction is that half of all
particles of type X decay within T seconds, how many measurements does
it take to prove the prediction is true? How many measurements does it
take to prove the prediction is false? The answer is that there is no
number that is sufficient.

It's just that as we make more and more observations, we become more
and more confident that the prediction is true (or that it's false).
There is never a point where it is absolutely verified or absolutely
falsified, although we can get to a point where we are as good as
certain one way or the other. But for any finite number of
observations, we don't know whether the probabilistic prediction is
true or not. We're in the same boat whether we are talking about 1
observation or 1000.

--
Daryl McCullough
Ithaca, NY
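[Editorial illustration, not part of the thread: Daryl's point can be seen in the binomial model (the sample size n = 100 is an arbitrary choice here). Under the prediction p = 0.5, every possible decay count has probability strictly between 0 and 1, so no finite outcome can strictly verify or falsify the prediction.]

```python
from math import comb

def binom_pmf(k, n, p):
    """Probability of exactly k successes in n independent trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Prediction: half of all particles decay within T seconds (p = 0.5).
# Whatever count k out of n = 100 we observe, the data neither has
# probability 0 (which would falsify) nor 1 (which would verify).
n = 100
pmfs = [binom_pmf(k, n, 0.5) for k in range(n + 1)]
print(all(0 < q < 1 for q in pmfs))  # True
```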



Arnold Neumaier says...

>To say that "The probability that someone in risk group A will die
>of cancer is 1/3" means nothing more or less than that exactly 1/3
>of _all_ people in risk group A will die of cancer.

That is not true. That's the *frequency* with which people in group A
die of cancer. It is *not* the probability. The frequency is supposed
to *approach* the probability in some sense, but they aren't the same
thing.

Think about it. If you have a coin with a 50% chance of heads and 50%
chance of tails, then the frequency jumps around as time goes on. With
the first coin toss, the frequency of heads is either 0 or 1. With the
second coin toss, the frequency is either 0, 1/2, or 1. With the third
toss, the frequency is either 0, 1/3, 2/3, or 1. Nobody would say that
the *probability* jumps around like that. The probability is always
the same (well, unless there is some time-dependent effect).

Probability is *not* the same thing as relative frequency, and
ensembles don't help to define probability. They can help define
relative frequency, but only in the case of *finite* ensembles
(frequency is not well-defined for an infinite ensemble). But it is
exactly in the case of finite ensembles that the difference between
probability and relative frequency is the most pronounced.

--
Daryl McCullough
Ithaca, NY
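[Editorial illustration, not part of the thread: the jumping-around of the relative frequency is easy to see in a simulation, using exact fractions from the stdlib `fractions` module.]

```python
import random
from fractions import Fraction

random.seed(2)  # fixed seed for reproducibility

# Running relative frequency of heads for a fair coin: it moves with
# every toss, while the underlying probability stays fixed at 1/2.
heads = 0
path = []
for n in range(1, 11):
    heads += random.random() < 0.5
    path.append(Fraction(heads, n))
print(path)  # after toss 1 the frequency is 0 or 1 -- never 1/2
```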



On 2004-08-16, Arnold Neumaier wrote:
> To say that "The probability that someone in risk group A will die
> of cancer is 1/3" means nothing more or less than that exactly 1/3
> of _all_ people in risk group A will die of cancer.
> Of course, we cannot check this before we have information about
> how all people in risk group A died, but once we have this information,
> we know.

Okay, you have two choices for defining this ensemble. Either (a) it
is actual people in this ensemble, and it's a finite set, or (b) it
is imaginary people similar to the ones you actually care about.

For (b), the probability is not objective, not checkable by anyone
else.

For (a), well, suppose I flip a coin ten times, and get 6 heads and 4
tails. I really, really, hope that you don't think that the coin has
a probability of exactly .6 of coming up heads. If you flip it again
twice and get two tails, is the probability for heads now .5?

--
Aaron Denney
-><-
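[Editorial illustration, not part of the thread: one standard way to make Aaron's point quantitative is a Bayesian estimate. With a uniform Beta(1,1) prior over the coin's heads-probability, the posterior mean after the data is a hedged estimate, not the raw frequency.]

```python
def posterior_mean(heads, tails, a=1.0, b=1.0):
    """Posterior mean of the heads-probability under a Beta(a, b) prior."""
    return (a + heads) / (a + heads + b + tails)

print(posterior_mean(6, 4))  # 7/12 ~ 0.583: pulled toward 1/2, not a flat 0.6
print(posterior_mean(6, 6))  # 0.5 after two further tails
```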



On 2004-08-16, Arnold Neumaier wrote:
> Aaron Denney wrote:
>> On 2004-08-13, Arnold Neumaier wrote:
>>
>>>What is the probability that 'I will die of cancer'?
>>>This is a single event that either will happen, or will not happen.
>>
>> Yep. Events don't have to "half-happen" to have a probability of .5.
>
> Probability assignments to single events can be neither verified nor
> falsified.

Right. Probability assignments inherently have some subjectivity --
what someone knows determines the assignment.

> Indeed, suppose we intend to throw a coin exactly once.
> Person A claims 'the probability of the coin coming out head is 50%'.
> Person B claims 'the probability of the coin coming out head is 20%'.
> Person C claims 'the probability of the coin coming out head is 80%'.
> Now we throw the coin and find 'head'. Who was right? It is undecidable.

Any of them, or none of them, depending on what they knew about the
prior conditions of tossing. "Appropriate" probability assignment
would be better language than "correct". All of them could be correct
if A knows only that it has heads and tails, and that both can come
up, if B knows that the coin is heavier on the heads side by a
certain amount, and C knows that the tosser is extremely practiced
and can make it come out heads 80% of the time.

> Thus there cannot be objective content in the statement
> 'the probability of the coin coming out head is p',
> when applied to a single case. Subjectively, of course, every person
> may feel (and is entitled to feel) right about their probability
> assignment. But for use in science, such a subjective view (where
> everyone is right, no matter which statement was made) is completely
> useless.

Not at all. Suppose someone you trust completely assures you that a
coin is biased so that during tests it comes up one way 80% of the
time, and the other way 20% of the time, but refuses to tell you
which is which.
What is your probability assignment that the coin comes up heads on
one toss? I claim this is the same case as flipping any coin.
Deterministically it will come up whatever it comes up as. P = 0 or
1, if you knew everything. Still, even in this case, where the
"objective" probability is .8 or .2, the best representation of the
information available to you for a single toss is .5 to heads and .5
to tails. If you know it will be flipped twice, you should assign .34
to HH and TT and .16 to HT and TH.

>>>If you consider this single event only, the probability is 1 or 0,
>>>depending on what will actually happen.
>>
>> After the fact, yes. Before, no, unless you know enough to time
>> evolve the system.
>
> What if someone knows the fact and someone else doesn't???
> I am discussing objective probability, since physics is an objective
> science.

Then someone gets a better estimate. Probability theory is a way of
reasoning about uncertainty. If someone knows that they know enough
information, they get what will actually happen with probability 1.
Someone who doesn't know enough will get a whole range of possible
outcomes with different probabilities. These different probabilities
will still be useful information about what choices to make. It's
okay for the results to be subjective because different people start
with different information. If someone starts with incorrect, rather
than incomplete, information, they'll get incorrect results. This is
expected.

Probability is _not_ figuring out how often something happens in
repeatable experiments. It can be applied to that, yielding the well
known de Finetti exchangeability results linking long-run frequency
with probability, but limiting it to that case is perverse.

>>>It clearly depends on which sort of ensemble one regards me to belong
>>>to, what probability you will assign. I belong to many ensembles, and
>>>the answer is different for each of these.
>> Can you tell me which one is the right one to use for this question?
>> Why or why not? Since you said it is either 0 or 1, which ensemble
>> gives that answer?
>
> The ensemble consisting of me only, as appropriate for a single case.
>
> In other ensembles, the probability is just the proportion of
> people in the ensemble dying of cancer; of course this probability,
> though it is a well-defined number, can be estimated only
> approximately - at least until I am dead ;-)

Do you allow infinite ensembles, or are only rational numbers
acceptable probabilities?

>>>Thus probabilities are meaningful not for the single event but only
>>>as a property of the ensemble under consideration.
>>
>> And yet people bet on individual events all the time.
>
> Oh yes. They estimate probabilities, based on their favorite ensemble.
> But as you know, people often lose their bets!

Sure. That doesn't mean they estimated the probability wrong, or are
misusing probability theory. Refusing to let probability theory deal
with single events reduces its applicability to almost nothing, and
people do successfully use it for single events.

>>>This can also be seen from the mathematical foundations. Probabilities
>>>are determined by measures on the set of elementary events.
>>
>> That's one way of defining the axioms of probability theory and getting
>> the standard results for manipulating probabilities in various self
>> consistent and maximally useful ways. It's not the only way,
>
> But it is the only consistent way.

There are at least three or four different ways. They all give the
same answers in the areas where they all apply.

>> of that subset occurring, or the probability of those particular
>> realizations. Of course they don't say that the event will or
>> will not happen, unless the probability is zero or one.
>
> Yes; this is why they say nothing at all about the single case.

Sure they do, just not something definite. That's why they're
probabilities instead of certainties.
Now, you can do most of this reasoning about uncertainty with
ensembles rather than states of knowledge, but it's much harder and
more complicated. You have to make sure that the ensembles you come
up with are not only consistent with your state of knowledge, but
also don't tell you anything more -- that they aren't biased. The
choice of ensemble to use is just as subjective as the choices of A,
B, and C in your example above.

--
Aaron Denney
-><-
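[Editorial illustration, not part of the thread: the two-toss assignment in Aaron's biased-coin example can be checked with a short mixture calculation over the 50/50 prior on the two bias directions. The correct values are .5 for one toss, and .34 to each of HH and TT, .16 to each of HT and TH, for two.]

```python
# Heads-probability is either 0.8 or 0.2, each equally likely a priori.
prior = {0.8: 0.5, 0.2: 0.5}

p_heads = sum(p * w for p, w in prior.items())         # one toss
p_hh = sum(p * p * w for p, w in prior.items())        # HH (= TT by symmetry)
p_ht = sum(p * (1 - p) * w for p, w in prior.items())  # HT (= TH)

print(p_heads)                          # 0.5 for a single toss
print(round(p_hh, 2), round(p_ht, 2))   # 0.34 and 0.16
```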



In article <4121F056.7090303@univie.ac.at>, Arnold Neumaier wrote:
>
>>>>To say that "The probability that someone in risk group A will die
>>>>of cancer is 1/3" means nothing more or less than that exactly 1/3
>>>>of _all_ people in risk group A will die of cancer.
>>
>> That is completely and utterly wrong. You can see that by taking
>> a nice, simple example (i.e. not cancer).
>>
>> Fair coins have a probability .5 of coming up heads. If you toss
>> 10 fair coins, it is NOT necessarily the case that exactly 5 will
>> show heads.
>
>This is not a correct translation of my claim.

Good, because it is completely wrong. It IS a correct example of what
you posted, so I am glad that you didn't mean it.

>If you take any finite $\sigma$ algebra representing a
>fair coin, one has a finite ensemble of elementary events,
>and exactly half of them come out heads. If you take an infinite
>$\sigma$ algebra, the ensemble is infinite, but with the natural
>weighting, again exactly half of them come out heads.

The very concept of "with the natural weighting, again exactly half
of them come out heads" is misleading to a degree when applied to a
limit process (which this is). See below.

>'Tossing 10 fair coins' is just a sloppy way of saying
>'Selecting a sample of size 10 from the total ensemble',
>and it is obvious that here the number of heads is 5 only on
>average over many random samples, again as I had claimed in an
>unquoted part of the post to which you replied.
>
>>>>On the other hands, "ensemble" is a precise concept with far-reaching
>>>>applications, independent of any beliefs, and hence adequate for use
>>>>in quantum mechanics.
>>
>> Well, it wasn't a standard term when I did my (masters equivalent)
>> course in mathematical statistics, though that was some 30+ years
>> back. I can guess what it means, but I don't think that "a precise
>> concept" is what a mathematical statistician would call it.
>
>I am talking here (s.p.r.) physics language.
>In mathematical terms, a classical ensemble is the set of elementary
>events underlying the $\sigma$ algebra over which the measure is
>defined.

I am afraid that this shows a SERIOUS misunderstanding of measure
theory (i.e. Borel sets and Lebesgue measure). Yes, discrete measures
(i.e. over countable sets) have such a basis, but that does NOT
extend to the general case. And using that 'simplification' vastly
complicates the theory.

In general, there IS no set of elementary events underlying the Borel
set. Even when that is defined on top of another set that does have a
concept of basic elements (which is not necessarily the case), it
isn't rare for the measure of all such elements to be zero. The
simple and classic example is the real interval [0,1] with the
uniform measure.

Regards,
Nick Maclaren.
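[Editorial note, not part of the thread: Nick's closing example can be written out. For Lebesgue measure $\lambda$ on $[0,1]$, every single point has measure zero, yet the whole interval has measure one:]

```latex
\lambda(\{x\}) = 0 \quad \text{for every } x \in [0,1],
\qquad
\lambda\Big(\bigcup_{n} \{x_n\}\Big)
  = \sum_{n} \lambda(\{x_n\}) = 0
  \quad \text{for any countable } \{x_n\},
\qquad
\lambda([0,1]) = 1 .
```

Countable additivity only reaches countable unions; since $[0,1]$ is uncountable, no assignment of weights to its individual points determines the measure.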



Daryl McCullough wrote:
> Arnold Neumaier says...
>
>>Probability assignments to single events can be neither verified nor
>>falsified.
>
> Probabilistic predictions can *never* be verified or falsified by any
> (finite) number of observations. If the prediction is that half of all
> particles of type X decay within T seconds, how many measurements does
> it take to prove the prediction is true? How many measurements does
> it take to prove the prediction is false?

All those in the defining ensemble.

You seem to be thinking of an infinite ensemble; then your statement
is true. But if the ensemble is finite, one knows the probability of
any statement about a random variable x once all realizations
$x(\omega)$ and their weights are known. This completely characterizes
the ensemble.

Arnold Neumaier
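[Editorial illustration, not part of the thread: Arnold's finite-ensemble claim rendered concretely. The outcomes and weights below are invented for illustration; once every realization x(omega) and its weight is listed, any probability is a finite weighted sum, with no estimation step.]

```python
# A toy finite ensemble: omega -> (x(omega), weight); weights sum to 1.
ensemble = {
    'omega1': (1, 0.25),
    'omega2': (0, 0.25),
    'omega3': (1, 0.50),
}

def prob(event):
    """Probability that x satisfies `event`, read off the whole ensemble."""
    return sum(w for x, w in ensemble.values() if event(x))

print(prob(lambda x: x == 1))  # 0.75, known exactly once the table is known
```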