In traditional logic, a system is inconsistent if a contradiction can be derived from it. If, in addition, the underlying consequence relation is non-explosive (a contradiction does not entail every formula), the logic is paraconsistent. Neither notion distinguishes between the following two real-world scenarios.

Alice has a set of axioms which is, unbeknownst to her, inconsistent: she has not reasoned far enough to realize that a contradiction will arise. Once she reaches the point where the inconsistency becomes apparent, she rejects the system.

Bob has a set of axioms which is inconsistent, and he is fully aware of the fact. However, the contradiction is non-explosive, so Bob can live with it, and he continues to accept the system.

I thought that perhaps some arrangement of developing worlds à la Kripke might distinguish the two, but in a Kripke-style arrangement each individual world suffers the same inability to distinguish Alice from Bob. A "consistent histories interpretation" approach borrowed from quantum physics seems like killing a fly with a cannonball, and at first blush I am not even sure it would work. Nor do I see how epistemic logics with the operator K_A for "agent A knows that" resolve the dilemma. Some kind of Bayesian updating might be appropriate, but I am not sure what the details would look like.

How can traditional logic be adapted to the real-world reasoning of Alice and Bob?
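To make the explosion/non-explosion contrast concrete, here is a toy brute-force check I sketched. It uses Priest's three-valued Logic of Paradox (LP) as the paraconsistent example; the function names and encoding of truth values are my own, not from any library. It shows that {p, ¬p} classically entails an arbitrary q, while in LP it does not:

```python
from itertools import product

T, B, F = 1.0, 0.5, 0.0   # true, both (glut), false -- Priest's LP values
DESIGNATED = {T, B}       # values that count as "holding"

def neg(v):
    # LP negation: swaps T and F, leaves the glut value B fixed.
    return 1.0 - v

def entails(premises, conclusion, atoms, values):
    # Gamma |= phi iff every valuation that designates all premises
    # also designates the conclusion (checked by brute force).
    for combo in product(values, repeat=len(atoms)):
        val = dict(zip(atoms, combo))
        if all(prem(val) in DESIGNATED for prem in premises):
            if conclusion(val) not in DESIGNATED:
                return False
    return True

p     = lambda v: v['p']
not_p = lambda v: neg(v['p'])
q     = lambda v: v['q']

# Classical case: only {T, F} allowed, so {p, not p} has no model
# and q follows vacuously -- explosion.
print(entails([p, not_p], q, ['p', 'q'], [T, F]))     # True
# LP case: p may take the glut value B, designating both p and not p
# while q stays plain false -- no explosion.
print(entails([p, not_p], q, ['p', 'q'], [T, B, F]))  # False
```

This is exactly why Bob's position is coherent: in a non-explosive logic, accepting {p, ¬p} does not commit him to everything.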
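As for the Bayesian idea, one toy version I can imagine (every parameter here is hypothetical, nothing canonical) tracks an agent's credence that her axiom set is consistent, raising it each time a derivation step fails to surface a contradiction. Alice's rejection would then be the collapse of that credence to zero at the step where the contradiction finally appears:

```python
def update_no_contradiction(credence, d):
    """Posterior P(consistent) after a derivation step that revealed
    no contradiction. d is a made-up per-step probability that a step
    WOULD expose a contradiction if the set were inconsistent."""
    # P(no contra | consistent) = 1;  P(no contra | inconsistent) = 1 - d
    return credence / (credence + (1 - credence) * (1 - d))

credence = 0.5   # agnostic prior about consistency
d = 0.1          # hypothetical detection rate per derivation step
for _ in range(20):   # 20 uneventful derivation steps
    credence = update_no_contradiction(credence, d)
print(round(credence, 3))   # credence has crept up toward 1

# The moment a contradiction IS derived, P(consistent) drops to 0:
# that is where Alice rejects the system, while Bob's non-explosive
# consequence relation lets him keep reasoning past that point.
```

Whether this captures the Alice/Bob asymmetry, rather than merely restating it in probabilistic dress, is part of what I am asking.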