Ken G said:
I think what I object to about the notion of "influence" (as a weaker version of "determine") is that we have two ways that an influence can actually occur: either it creates a probability of something happening, or else it is one of several other factors that completely determine what happens. I don't view either of those possibilities as making sense -- if an influence creates a probability, what actuates the probability?
Ken, from past discussions I think we are often more or less in agreement but you're right that we seem to differ on certain things. I'll try to focus on these differences in this post:
Here we get into the details of how to classify inference systems, and this also overlaps with the philosophy of science. A generic inference:
A ~> B
The most determinate form of inference is also the easiest one, logical deduction: A => B
In this picture, the inference itself is always perfect and any "uncertainty" is blamed on the premises.
The other, more complex, way is inductive reasoning. This could be expanded on a lot, but in short, the "simplest" form of induction is deductive probability: A => P(B)
But this is in fact a deduction applied to a new state space, that of probability distributions.
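To make that reading concrete, here is a toy sketch in Python (my own made-up numbers, nothing more): once you work on the space of distributions, the step from A to P(B) is a perfectly deterministic computation, so all the "inductive" character again sits in the premises, here the assumed P(A) and P(B|A).

```
# Toy illustration only: "A => P(B)" read as a deduction on the space of
# probability distributions. The labels and numbers are made up.

p_A = {"a1": 0.7, "a2": 0.3}               # assumed distribution over premise states
p_B_given_A = {                            # assumed conditional rule
    "a1": {"b1": 0.9, "b2": 0.1},
    "a2": {"b1": 0.2, "b2": 0.8},
}

# The inference step itself is exact: marginalisation is a deterministic map
# from (P(A), P(B|A)) to P(B).
p_B = {}
for a, pa in p_A.items():
    for b, pba in p_B_given_A[a].items():
        p_B[b] = p_B.get(b, 0.0) + pa * pba

print(p_B)  # {'b1': 0.69, 'b2': 0.31}
```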
But the obvious problem with this is that probability distributions, as opposed to frequencies, refer to abstractions, in particular infinite trials and an infinite amount of information. As is probably clear from most of my posts on here, I have a lot of objections to this.
The problem here has nothing to do with probability theory as mathematics. It has to do with the applicability of this mathematics to reality. IMO, physics operates at the interface between mathematical modelling and actual predictions about reality. But I won't expand on this in this thread.
The short comment is that what we really need is a model for rational reasoning based upon incomplete information. Surely in SOME cases probability theory is the answer, but in other cases it doesn't quite make sense. I'm advocating a reconstruction of what Jaynes did (probability as the logic of science), but done differently, paying attention to things he did not pay attention to. When you do this, one gets IMHO a discretised version of probability that is even MORE empirical, since it considers actual frequencies rather than limiting distributions (which are never established before the cards are on the table), and this introduces differences.
In such a reconstruction, the word "probability" would be replaced by a "plausibility measure" (which in limiting cases coincides with probability, but in other cases is a generalization thereof). And this is actuated by the observer's retained empirical experience.
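A minimal sketch of the kind of thing I have in mind (just an illustration of "discrete and built from retained counts", not the actual reconstruction): the observer's plausibility measure is computed from the finitely many outcomes it has actually retained, so it can only take a finite set of values, and it only approaches the usual limiting probability as the retained record grows.

```
# Toy sketch only: a frequency-based "plausibility measure" built from a finite
# retained record. The name and the rule are my illustration, not a finished
# construction.

from collections import Counter

def plausibility(retained_record, outcome):
    """Plausibility of `outcome`, given only the finitely many retained observations."""
    counts = Counter(retained_record)
    total = sum(counts.values())
    if total == 0:
        return None                 # nothing retained: no plausibility is defined yet
    return counts[outcome] / total  # only multiples of 1/total can occur

record = ["b1", "b2", "b1", "b1"]   # four retained observations
print(plausibility(record, "b1"))   # 0.75 -- one of the few values four trials allow
# With a long enough record this tends to the limiting probability, which is the
# sense in which ordinary probability appears only as a limiting case.
```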
I know this SOUNDS like ontological terminology, but that is not how I mean it, because all the "ontological terms" are always implicitly observer dependent, and thus subjective. This means that each observer holds what one might call a kind of "effective ontology", but which is not really a proper ontology.
The problem here is that it's hard to even explain this in English. Somehow any plain-text message is necessarily an imperfect analogy at best.
Ken G said:
I think where we differ here is that you like to elevate such expectations to a level of ontology, but I don't think they are ever anything more than our expectations.
Well, I perfectly agree with you! This is exactly what I meant to say above. The "effective" ontological terms I use are not really ontological; they are more like "expected ontologies" that are observer dependent, but it's hard to describe this without using those words :) Maybe you can do it better than I can.
The apparent disagreement is, I think, mainly due to the difficulty of formulating this in language.
/Fredrik