- #1

nomadreid

Gold Member


## Main Question or Discussion Point

Would it be possible to have a type of implication relation that weakens the truth value in a many-valued logic? For instance, a first attempt:

(1) Define ⇒_{k}, 0 < k ≤ 1, with ⇒_{1} ≡ the ordinary ⇒, so that, if V(.) is the valuation (the assignment of truth values), then A⇒_{k}B gives V(B) = k·V(A).

(2) The rule [A⇒_{k}B & B⇒_{m}C] ⇒ [A⇒_{(k·m)}C] would hold instead of transitivity.

(3) [A⇒_{k}B] ⇒ [~B⇒_{k}~A] (contraposition preserves the strength).

(4) If A⇒_{k}A, then k = 1.

The main problem here would be that the system would have to either

(a) be extremely fine-tuned so that two different inference paths to the same conclusion never yield two different truth values (inconvenient),

(b) adopt something like V(B) = the minimum of the truth values over all possible inference paths starting from the axioms (unlikely to work),

(c) be made part of a temporal logic, so that two different inference paths occur at different times and hence V(B, t_{0}) need not equal V(B, t_{1}) (that would end up being trivial),

(d) be inconsistent (unfortunate), or

(e) adopt some variation that I have not thought of.

The reason I would like something like this is that, if one is to model human reasoning, one faces the problem that humans put less confidence in conclusions that are more abstract, i.e., conclusions that take more steps to arrive at. (Perhaps I should be using confidence or preference values instead of truth values, but as far as I can see, these would be equivalent approaches.)

I am open to suggestions. Thanks.
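To make the proposal concrete, here is a toy sketch (all names are my own, nothing standard): each rule A⇒_{k}B transfers V(B) = k·V(A), chains compose multiplicatively as in rule (2), and a clash between inference paths is resolved by option (b), taking the minimum over all paths.

```python
# Toy model of the weakened implication. Each rule is a triple
# (premise, conclusion, k) with 0 < k <= 1; k = 1 is the ordinary "=>".
rules = [
    ("A", "B", 0.9),
    ("B", "C", 0.8),   # chained path A -> B -> C has strength 0.9 * 0.8 = 0.72
    ("A", "C", 0.7),   # direct path A -> C has strength 0.7
]

def path_strengths(start, goal, rules, k=1.0, seen=frozenset()):
    """Yield the product of k-values along every rule path from start to goal,
    composing strengths multiplicatively as in rule (2)."""
    if start == goal:
        yield k
        return
    for premise, conclusion, r in rules:
        if premise == start and conclusion not in seen:
            yield from path_strengths(conclusion, goal, rules,
                                      k * r, seen | {start})

def value(goal, axioms, rules):
    """V(goal) under option (b): the minimum over all inference paths
    from the axioms; None if the goal is not derivable."""
    strengths = [k * v
                 for axiom, v in axioms.items()
                 for k in path_strengths(axiom, goal, rules)]
    return min(strengths) if strengths else None

axioms = {"A": 1.0}              # V(A) = 1
print(value("C", axioms, rules))  # two paths, strengths 0.72 and 0.7
```

With these sample rules the two paths to C disagree (0.72 vs. 0.7), which is exactly the clash described in (a); the minimum rule picks 0.7. Note that the minimum also captures the motivating intuition: longer derivations can only lower confidence, never raise it.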