WWGD said:
That is a workable definition. Still, how do you define a theory? Is it a collection of predictive algorithms, general rules of inference, etc.? Is it fixed or does it allow add-ons? Does it allow for infinitely many inference rules? Does it have a specific measure for detecting fit between what is predicted and what is modeled by the theory, etc.?
Good questions. After some reflection, my suggestions are:
Let's adopt the formal definition of 'theory' that is used in First Order Predicate Logic (FOPL), which is that a theory is a set of propositions, where a proposition is a statement that is either true or false.
We then define a 'Physical Theory' to be a set of propositions T that is the closure, under the operation of deduction, of a set of propositions G, such that every proposition in G is of the form 'the probability of B, given A, is p' (call a proposition of this type a predictive proposition), where A and B are both 'constructions' of physical observations, and the set of 'constructions' is the closure of the set of observations under the operations of conjunction (AND) and disjunction (OR).
The B propositions are the predicted physical observations and the A propositions are those on which the predictions are based.
A 'physical observation' is a proposition of the form 'result of measurement M <op> x', where <op> is one of the comparison operators <, = or >, and x is a real number.
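For concreteness, here is a toy predictive proposition, with all measurement labels and numbers invented purely for illustration. Let ##A## be the construction 'result of measurement ##M_1## > 0.5 AND result of measurement ##M_2## = 1', let ##B## be the observation 'result of measurement ##M_3## < 2', and take the predictive proposition to be ##P(B|A)=0.7##, i.e. 'the probability that ##M_3## yields a result below 2, given that ##M_1## yielded a result above 0.5 and ##M_2## yielded exactly 1, is 0.7'.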
We could say T is a 'deterministic theory' if all probabilities p in propositions in G are either 0 or 1.
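For instance (again with invented measurements), ##P(\text{result of measurement } M_2 = 0 \mid \text{result of measurement } M_1 > 3) = 1## would be a deterministic predictive proposition: given that ##M_1## yields a value above 3, the theory asserts that ##M_2## is certain to yield 0.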
Is it fixed or does it allow add-ons?
T is determined by G. If we add propositions to G or remove propositions from it, then the modified generating set G' generates (via closure under deduction) a theory T' that is different from T, unless the added or removed propositions were redundant.
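A toy illustration of redundancy, with invented propositions and assuming all conditioning events have nonzero probability: if G contains ##P(B|A)=1## and ##P(C|B)=1##, then ##P(C|A)=1## is deducible from them, so adding it to G (or removing it, if it was already listed) leaves the closure T exactly as it was; adding an independent proposition such as ##P(D|A)=0.5##, by contrast, genuinely changes T.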
Does it allow for infinitely many inference rules?
Inference rules are what generate G: we have a set R of inference rules, which is just another set of propositions, and R generates the set G of predictive propositions via closure under deduction, subject to the requirement that every deduced proposition be predictive. It is the set R of inference rules that we usually think of as a physical theory, as in the postulates of QM or of GR, but here we reserve the term 'theory' for T, the set of all propositions deducible from R, for consistency with the usual terminology of FOPL.
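To make the hierarchy concrete with a deliberately artificial example (the propositions are invented, and I assume the conditioning events have nonzero probability): take R to be the two postulates ##P(B|A)=0.5## and ##P(C|A \text{ AND } B)=1##. Then G contains these two propositions together with every predictive proposition deducible from them, such as ##P(B \text{ AND } C|A)=0.5## by the chain rule of conditional probability, and T is the full deductive closure of G.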
With that meaning of 'inference rules', there is no compelling reason to require R to be finite. If we want the theory to be comprehensible by finite beings like humans, we would have to require it to be finite, but I don't feel the need to apply that restriction, and I don't think it materially influences the issues under discussion here.
Note, however, that if we don't require R to be finite, the M-Law, which is the infinite set of all observations ever made of every particle in the universe, qualifies as a theory.
Does it have a specific measure for detecting fit between what is predicted and what is modeled by the theory?
The measures could be as follows:
- if the theory contains a proposition of the form ##P(B|A)=1## (respectively ##P(B|A)=0##), ##A## is observed to occur, and ##B## is observed not to occur (respectively, ##B## is observed to occur), then the theory has been falsified and must be discarded.
- if the theory contains a proposition of the form ##P(B|A)=p## with ##0<p<1##, ##A## is observed to occur, and ##B## is observed not to occur, then the 'degree of doubt' in the theory raised by those observations is ##p##.
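To put made-up numbers on the second measure: if the theory asserts ##P(B|A)=0.99##, and ##A## is observed but ##B## fails to occur, the degree of doubt raised is ##0.99##; the failure of a near-certain prediction counts heavily against the theory. If instead the theory only asserted ##P(B|A)=0.05##, the same pair of observations would raise a degree of doubt of just ##0.05##.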