SpectraCat said:
I am still stuck on the concept that you can't make meaningful statements about the probabilities of single events. What about the following scenario:
1) you have a group of 2 atoms of isotope A, with 5 second half-life
2) you have a group of 2 atoms of isotope B, with 5 year half-life
What is the probability that one of the A atoms will decay before one of the B atoms?
From posts Arnold Neumaier has made on this thread, it seems he will say that the question as I have phrased it above is not scientifically meaningful. If this is true (i.e., Arnold does think that it is meaningless, and I have not misunderstood something), then please answer the following question:
How big do I have to make the pools (5 atoms, 5000 atoms, 5x10^23 atoms) before the question DOES become scientifically meaningful? Because if I have not misunderstood, other statements Prof. Neumaier has made on this thread indicate that he *does* think scientifically meaningful statements can be made about probabilities of events from "large ensembles", so it seems that at some point, the pools must reach a critical size where "statistical significance" (or whatever the proper term is) is achieved.
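A minimal Monte Carlo sketch of the quoted scenario, assuming independent exponential lifetimes with the stated half-lives (the simulation is only an illustration; the relative frequency it reports refers to the simulated ensemble of repetitions, not to any single run):

```python
import math
import random

# Illustrative assumption: each atom's lifetime is exponential with
# rate lambda = ln(2) / half-life.
HALF_LIFE_A = 5.0                     # seconds
HALF_LIFE_B = 5.0 * 365.25 * 86400    # 5 years, in seconds
LAMBDA_A = math.log(2) / HALF_LIFE_A
LAMBDA_B = math.log(2) / HALF_LIFE_B

def a_decays_first(n_a=2, n_b=2):
    """One repetition: does some A atom decay before every B atom?"""
    first_a = min(random.expovariate(LAMBDA_A) for _ in range(n_a))
    first_b = min(random.expovariate(LAMBDA_B) for _ in range(n_b))
    return first_a < first_b

trials = 100_000
freq = sum(a_decays_first() for _ in range(trials)) / trials
print(f"relative frequency over {trials} simulated repetitions: {freq:.6f}")

# Analytically, for independent exponential lifetimes, the first decay
# among all four atoms is an A atom with probability
# 2*LAMBDA_A / (2*LAMBDA_A + 2*LAMBDA_B), which is extremely close to 1
# here -- a statement about the (simulated) ensemble.
```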
In general, if you have a complete specification of an ensemble, you can derive scientific statements about anonymous members of the ensemble.
This is the case, e.g., when analysing past data. You can say p% of the population of the US in the census of year X earned above Y dollars.
It is also the case when you have a theoretical model defining the ensemble. You can say the probability of casting an even number with a perfect die is 50%, since the die is an anonymous member of the theoretical ensemble. But you cannot say anything about the probability of casting an even number in the next throw at a particular location in space and time, since this is an ensemble of size 1 - so the associated probabilities are provably 0 or 1.
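A minimal Python sketch of this distinction, simulating a large ensemble of throws of a fair die alongside a single throw (all numbers are placeholders):

```python
import random

random.seed(0)

# The 50% refers to the theoretical ensemble of throws, approximated here
# by a large simulated ensemble.
throws = [random.randint(1, 6) for _ in range(1_000_000)]
even_fraction = sum(1 for t in throws if t % 2 == 0) / len(throws)
print(f"fraction of even results in the simulated ensemble: {even_fraction:.4f}")

# A single throw, by contrast, yields one definite outcome; in the
# ensemble of size 1 consisting of just this throw, the relative
# frequency of ''even'' is either 0 or 1.
single = random.randint(1, 6)
print("next throw:", single, "-> even" if single % 2 == 0 else "-> odd")
```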
In practice, interest is mainly in the prediction of incompletely specified ensembles.
In this case, the scientific practice is to replace the intended ensemble by a theoretical model of the ensemble, which is precisely known once one estimates its parameters from the available part of the ensemble, using a procedure that may also depend on other assumptions such as a prior (or a class of priors whose parameters are estimated as well).
In this case, all computed/estimated probabilities refer to this theoretical (often infinitely large) ensemble, not to a particular instance. (From a mathematical point of view, ensemble = probability space, the sample space being the set of all realizations of the ensemble.)
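A minimal sketch of this replacement, assuming an exponential decay model whose rate parameter is estimated by maximum likelihood from the observed part of the ensemble (the true rate and sample size are placeholders):

```python
import math
import random

random.seed(1)

# "Nature": the unknown true parameter used only to generate the data.
TRUE_LAMBDA = 0.2
observed_lifetimes = [random.expovariate(TRUE_LAMBDA) for _ in range(500)]

# Maximum-likelihood estimate for an exponential model: 1 / sample mean.
lambda_hat = len(observed_lifetimes) / sum(observed_lifetimes)
print(f"estimated decay constant: {lambda_hat:.4f} (true value {TRUE_LAMBDA})")

# All subsequent probabilities are computed from the fitted model, i.e.
# they refer to the (infinitely large) theoretical ensemble it defines.
survival_10 = math.exp(-lambda_hat * 10.0)
print(f"model probability that an atom survives 10 time units: {survival_10:.4f}")
```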
Now there is a standard way to infer from the model statements about the intended ensemble: One specifies one's assumptions going into the model (such as independence assumptions, Gaussian measure assumptions, etc.), the method of estimating the parameters from the data, a confidence level deemed adequate, and which statistical tests are used to check the confidence level for a particular prediction in a particular situation. Then one makes a definite statement about the prediction (such as ''this bridge is safe for crossing by trucks up to 10 tons''), perhaps accompanied by a mention of the confidence level. The definite statement satisfies the scientific standards of derivation and is checkable. It may still be right or wrong - this is in the nature of scientific statements.
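A minimal sketch of this procedure, assuming independent, roughly Gaussian measurements and a normal approximation for the confidence bound (all numbers, including the threshold of 10, are placeholders):

```python
import random
import statistics

random.seed(2)

# Stated assumptions: independent measurements, approximately Gaussian.
measurements = [random.gauss(9.2, 0.4) for _ in range(40)]  # placeholder data

mean = statistics.mean(measurements)
sem = statistics.stdev(measurements) / len(measurements) ** 0.5
upper_bound = mean + 1.96 * sem   # upper confidence bound, normal approximation

print(f"sample mean {mean:.2f}, approximate upper confidence bound {upper_bound:.2f}")
if upper_bound < 10.0:
    # The definite, checkable statement, issued at the chosen confidence level.
    print("Definite statement: the quantity stays below 10.")
```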
If a method of prediction and assessment of confidence leads to wrong predictions at a rate significantly higher than the assigned confidence level allows, the method will be branded as unreliable and phased out of scientific practice. Note that this again requires an ensemble - i.e., many predictions - to be implementable. Again, a confidence level for a single prediction may serve only as a subjective guide.
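A minimal sketch of such a check, assuming the same Gaussian setting: generate many datasets from a known distribution, form nominal 95% intervals, and compare the observed coverage of the true value with the nominal level:

```python
import random
import statistics

random.seed(3)

TRUE_MEAN = 5.0
NOMINAL = 0.95          # nominal confidence level of the interval method
n_predictions = 2000    # the ensemble of predictions being assessed
misses = 0

for _ in range(n_predictions):
    sample = [random.gauss(TRUE_MEAN, 1.0) for _ in range(25)]
    m = statistics.mean(sample)
    # 1.96 is the two-sided 95% normal quantile (an approximation here).
    half_width = 1.96 * statistics.stdev(sample) / 25 ** 0.5
    if not (m - half_width <= TRUE_MEAN <= m + half_width):
        misses += 1

coverage = 1 - misses / n_predictions
print(f"nominal confidence {NOMINAL:.0%}, observed coverage {coverage:.1%}")
# If the observed coverage fell far below the nominal level over many
# predictions, the method would be judged unreliable.
```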
The statement ''Isotope X has a half-life of Y years'' is a statement about the ensemble of all atoms representing isotope X. A huge subensemble of the far larger full ensemble has been observed, so that we know the objective value of Y quite well, with a very small uncertainty, and we also know the underlying model of a Poisson process.
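A minimal sketch, assuming the usual exponential survival law and a placeholder half-life of Y = 5 years, of how the ensemble-level statement translates into model probabilities:

```python
import math

# Placeholder value for the measured half-life Y of isotope X.
Y_HALF_LIFE = 5.0                          # years
decay_constant = math.log(2) / Y_HALF_LIFE  # rate of the exponential model

# Model statements about the theoretical ensemble of atoms of isotope X:
for t in (1.0, 5.0, 20.0):
    survival = math.exp(-decay_constant * t)
    print(f"probability that an atom is still undecayed after {t:>4} years: {survival:.3f}")
```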
If we now have a group of N atoms of isotope X, we can calculate from this information a confidence interval for any statement of the form ''In a time interval T, between M-K and M+K of the N atoms will decay''. If the confidence is large enough, we can state it as a prediction that in the next experiment checking this, the statement will be found correct. And we would be entitled to publish it if X were a new or interesting isotope whose decay was measured by a new method, say.
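A minimal sketch of such a calculation, assuming independent decays so that the number of decays in a time interval T is binomial with p = 1 - exp(-lambda*T); the half-life, N, and T below are placeholders:

```python
import math

HALF_LIFE = 5.0                      # years (placeholder)
LAM = math.log(2) / HALF_LIFE
N, T = 1000, 1.0                     # number of atoms, length of interval in years

p = 1.0 - math.exp(-LAM * T)         # model probability that a given atom decays in T
M = round(N * p)                     # expected number of decays

def binom_pmf(k):
    return math.comb(N, k) * p**k * (1 - p)**(N - k)

# Widen the symmetric interval [M-K, M+K] until it reaches 95% confidence.
K = 0
prob = binom_pmf(M)
while prob < 0.95:
    K += 1
    if M - K >= 0:
        prob += binom_pmf(M - K)
    if M + K <= N:
        prob += binom_pmf(M + K)

print(f"prediction: between {M-K} and {M+K} of the {N} atoms decay "
      f"within {T} year(s) (model confidence {prob:.3f})")
```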
Nowhere in anything I said was any reference made to a ''measure of a state of knowledge'', so the ''Bayesian probability interpretation'' as defined in http://en.wikipedia.org/wiki/Bayesian_probability is clearly inapplicable.