Inconsistency versus lack of knowledge

In summary: paraconsistent logics are possible, but they require a different logical language than classical logic; and if one rejects the logical axioms themselves, one cannot generate a theory at all.
  • #1
nomadreid
In traditional logic, a system is inconsistent if it can lead to a contradiction. Furthermore, if the inconsistency is non-explosive (not all consequences follow from the contradiction), then the system is paraconsistent.
Both definitions fail to distinguish between the following two real-world scenarios:
Alice has a set of axioms which is, unbeknownst to her, inconsistent. That is, she hasn't reasoned it out far enough to realize that a contradiction will arise. Once she reaches the point where the inconsistency is apparent, she rejects the system.
Bob has a set of axioms which is inconsistent, and he is in full knowledge of the fact. However, the contradiction is non-explosive so that Bob can live with it, so he continues to accept the system.
I thought that perhaps some sort of arrangement of developing worlds à la Kripke might distinguish the two, but in such an arrangement each world suffers the same inability to distinguish between Alice and Bob.
A "consistent histories interpretation" approach borrowed from quantum physics seems like killing a fly with a cannonball, and at first blush I am not even sure it would work.
I do not see how the epistemic systems with the operator ##K_A## for "known to agent A" solve the dilemma either.
Some kind of Bayesian updating might be appropriate, but I am not sure what the details would look like.
How to adapt the traditional logic to the real-world thinking of Alice and Bob?
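To make the asymmetry concrete, here is a hedged sketch in a time-indexed epistemic notation (##\mathrm{Incon}## for "is inconsistent" and ##\mathrm{Acc}## for "is accepted by" are my own ad hoc labels, not standard):
$$\begin{aligned} \text{Alice at } t_1 &: \ \mathrm{Incon}(T) \wedge \neg K_A\,\mathrm{Incon}(T) \wedge \mathrm{Acc}_A(T)\\ \text{Alice at } t_2 &: \ K_A\,\mathrm{Incon}(T) \wedge \neg \mathrm{Acc}_A(T)\\ \text{Bob at every } t &: \ K_B\,\mathrm{Incon}(T) \wedge \mathrm{Acc}_B(T) \end{aligned}$$
The classical definitions give ##\mathrm{Incon}(T)## no time index at all, while ##K## and ##\mathrm{Acc}## clearly need one; that mismatch is the dilemma.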
 
  • #2
We need to be more precise here about what we mean by 'system'. The OP is not clear whether it means 'theory' or 'language', and the two are very different.

A language L is just:
  • an alphabet that is a set of symbols, with a few additional properties such as 'arity';
  • a set of syntax rules about how expressions (wffs - well-formed formulas) may be formed from symbols;
  • a set of wffs X that are designated as 'logical axioms', which are very basic things like ##\forall x:x=x##;
  • a set of inference rules, which tell us what constitutes a proof (aka deduction).
A theory T is specific to a particular language L, and is a set of wffs in that language that is closed under deduction using the inference rules.

A set of wffs ##A## in L such that the closure of ##X\cup A## under deduction using the inference rules is the theory T is called a set of axioms for T. These are referred to as 'non-logical axioms', to distinguish them from X, the 'logical axioms'. We say that T is the theory generated by A.

The term consistent is relevant to a theory, not to a language, and means that the theory contains no self-contradictory statements. I suppose it is conceivable to have a language such that the theory generated solely by the language's logical axioms X is inconsistent, but I think any such language would be useless, and I have never come across anybody suggesting such a thing.

The term paraconsistent is relevant to a language, not a theory. A language is paraconsistent if it is not the case that, given a wff F and a self-contradictory wff C, it can be proved that ##C\to F## (explosion). All classical logical languages, of which the various forms of First-Order Predicate Logic are the best known, have explosion, and hence are not paraconsistent.

So if one wants to introduce paraconsistency (i.e. get rid of explosion), one has to change the entire logical language, not just the theory. In a non-paraconsistent language, any theory that contains a contradiction is exploded, and hence contains all wffs in the language.
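For concreteness, here is the standard textbook derivation behind explosion in a classical language (not specific to any one system): given the two halves of a contradiction, any wff ##F## whatsoever can be proved.
$$\begin{aligned} 1.\;& C && \text{premise (half of the contradiction)}\\ 2.\;& \neg C && \text{premise (the other half)}\\ 3.\;& C \vee F && \text{from 1 by } \vee\text{-introduction}\\ 4.\;& F && \text{from 2 and 3 by disjunctive syllogism} \end{aligned}$$
A paraconsistent language therefore has to block at least one of these steps; rejecting disjunctive syllogism is a common choice.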
 
  • #3
Thank you for laying out the distinctions clearly, andrewkirk. You are correct: by "system" I meant the language, theory, and semantics together. Since a contradiction remains a contradiction in all models, we can concentrate (at least at first) on the syntax. In these terms, Alice sticks to a classical language, in which an inconsistent theory is unacceptable, while Bob must change his language in order to avoid explosion in his theory.

So, let us forget Bob for a moment; the problem remains that Alice wrongly believes her theory to be consistent until the point in time when her theory derives the contradiction. That is, the consistency of a theory does not depend on an agent's knowledge (or, equivalently, does not depend on time), even in the modal logic into which one is dragged by the appearance of "believe" (although I am willing to be corrected on this point). This is fine in abstract considerations, but in practice humans do accept and reject the same theory at different stages of the available knowledge/time. (The temptation to drag semantics into the question here is almost irresistible, but I will refrain for the moment.)

The conclusion that humans necessarily adopt a paraconsistent language runs into a similar problem: the logical axioms that one would have to reject are only rejected (in real life) upon encountering a contradiction, and thus Bob too ends up in a knowledge/time dependence that is not reflected in classical logic. Unless there is yet another way to avoid the formal-logic / real-life divide, we must conclude that all humans are simply inconsistent but act as if they were consistent (sort of like "everyone dies, and everyone acts as if they will not die"). Yet if logic is to retain its traditional goal of being able to model human thought, even this principle needs to somehow be formalized. If classical logic is up to the task, I do not know how; and if not, I do not know which, if any, of the available alternatives successfully addresses the dissonance.

I hope that makes the question clearer (rather than muddling it further).
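To pin down the time dependence, here is one ad hoc way to write it (my notation, nothing standard intended): let ##T \vdash_n \bot## mean that some fixed contradiction ##\bot## is derivable from ##T## in at most ##n## inference steps. Then, reusing the ##\mathrm{Incon}## label from my first post,
$$\mathrm{Incon}(T) \iff \exists n \,\bigl(T \vdash_n \bot\bigr),$$
which carries no time index at all, whereas an agent who by time ##t## has only explored derivations of length up to ##f(t)## can at most assert ##\neg\bigl(T \vdash_{f(t)} \bot\bigr)##, a statement compatible with both consistency and not-yet-detected inconsistency. Alice at her earlier stage is exactly in that second situation.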
 
  • #4
nomadreid said:
we must conclude that all humans are simply inconsistent
If you mean 'logically inconsistent' then I don't see how this follows from what you wrote before it. Is it just that people change their beliefs over time? I don't see that as inconsistent. It is simply that the set of axioms they accept changes over time.

Some people do hold a set of beliefs without realising they are inconsistent. But I don't think that is the most common position.

On the other hand if you just mean 'inconsistent' as in changes over time, then certainly all humans are inconsistent. But we are no longer talking about logical consistency.

There's a quote from Emerson I like:

A foolish consistency is the hobgoblin of little minds, adored by little statesmen and philosophers and divines.

It continues here.

 
  • #5
The quote by Emerson is cute, but it reminds me of the planet Golgafrincham, on which there were three classes: the Thinkers and Workers sent the Middle Class, made up of phone sanitizers and other "useless people", away on spaceships, whereupon the remaining Thinkers and Workers were wiped out by a deadly plague that was spread via dirty phones. (The Restaurant at the End of the Universe, part of the Hitchhiker's Guide to the Galaxy series by Douglas Adams.) That is, consistency remains a key concern of logicians, whom Emerson would banish along with the phone sanitizers.
andrewkirk said:
Is it just that people change their beliefs over time? I don't see that as inconsistent. It is simply that the set of axioms they accept changes over time.
No, it is not just that people change their sets of axioms; the point is the reason for which they do, or don't. One reason they change is that they find out their previous set of axioms was inconsistent, whether internally (logically), as when naïve set theory gave way to ZFC, or "externally", producing statements that are false under a given model, as when the luminiferous aether gave way to Relativity. Sometimes they don't change: in the early days of calculus, mathematicians continued to work with an inconsistent theory of infinitesimals even though they knew it was inconsistent, because it worked too well; certain powerful leaders in the world today continue to assert statements which have been shown to be contrary to fact. Despite the fact that they accept contradictions, whether because they haven't worked that fact out yet or because they simply don't let it bother them, they don't necessarily go insane. There are various attempts to deal with this: besides paraconsistent logics, there are temporal logics, AGM belief revision, "Chunk and Permeate" (Brown & Priest), and various others. But it seems to me that some sort of hybrid would be needed to properly account for a logic which accepts an axiom system or a language at time ##t_1## but at ##t_2>t_1## either rejects it or ignores it. (I am reminded of the analogous problem in quantum mechanics in which, according to John Archibald Wheeler, "No phenomenon is a phenomenon until it is an observed phenomenon.")
 
  • #6
There are too many different explanations - lying to others, lying to self, not having seen the contradiction yet, not being convinced by an argument that there is a contradiction, faith that there is some explanation that resolves the contradiction but which we are not clever enough to see, etc. They are generally quite simple explanations and don't require fancy tools like paraconsistent logic.

I think an example is needed in order to make the investigation concrete and give it focus.
 
  • #7
OK, I would like to concentrate on the option "not having seen the contradiction yet". You state that there is a simple explanation; how would you formalize it?

I will put it another way. Suppose you outfit a robot, Roberta, with a syntax for general reasoning (a classical language plus axioms) but leave her to learn mathematics the hard way (whereby her perceptions provide the semantics). You send Roberta back in time to the early days of calculus. ("Hi, Newton and Leibniz. Which one of you... oh, sorry.") At first she accepts the assurances that the existence of infinitesimal quantities is consistent (this is before Bolzano), and works happily away until one day she discovers that they cause an inconsistency (this is before Robinson). There are three possibilities: Roberta crashes, she changes her axioms, or she changes her language. Now, of course you don't want her to crash, so you could have outfitted her with a collection of meta-rules that would allow her to switch her axioms without giving up the standard rules of inference, or to switch her language, or some such. But the idea is to keep to one collection of axioms and one language. As you said, there are various proposals, including adding a weaker negation, etc., but none of them seem adequate for "not having seen the contradiction yet" which allows one to operate in a logic which would eventually lead to contradiction.
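To make "not having seen the contradiction yet" concrete, here is a minimal, hypothetical sketch in Python (a toy forward-chainer, not a real prover; the sentences and rules are invented stand-ins for the infinitesimal story). Roberta derives one round of consequences per tick and only rejects her axioms once a falsum actually appears among what she has derived so far.
Code:
# Toy model of "not having seen the contradiction yet" (hypothetical
# sketch, not a real prover): forward-chain one round of consequences
# per tick; the agent rejects her axioms only once FALSUM is derived.

FALSUM = "FALSUM"

# Invented stand-ins for the infinitesimal story; each rule maps a
# tuple of premises to a conclusion.
axioms = {"infinitesimals exist"}
rules = [
    (("infinitesimals exist",), "dx > 0"),
    (("infinitesimals exist",), "dx may be discarded"),
    (("dx may be discarded",), "dx = 0"),
    (("dx > 0", "dx = 0"), FALSUM),
]

known = set(axioms)
tick = 0
while FALSUM not in known:
    tick += 1
    derived = {concl for premises, concl in rules
               if all(p in known for p in premises)}
    if derived <= known:        # fixpoint reached, no contradiction surfaced
        break
    known |= derived

if FALSUM in known:
    print(f"the contradiction became apparent at tick {tick}")  # tick 3 here
else:
    print("no contradiction ever surfaced")
Up to tick 2, Roberta's state is indistinguishable from that of an agent whose axioms really are consistent; "consistent as far as derived so far" is the only property she can act on, which is precisely the time-indexed notion that the classical definition lacks.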
 
  • #8
Isn't "not having seen the contradiction yet" equivalent to: either there is no contradiction, or my set of meta-rules isn't sufficient to decide it? But to decide the validity of meta-rules, we will need meta-meta-rules and so forth? Thus the condition "not having seen the contradiction [of system ##M##] yet" boils down to the question, whether ##M## is decidable or not, resp. the knowledge whether it is or not.
 
  • #9
nomadreid said:
How to adapt the traditional logic to the real-world thinking of Alice and Bob?

In real-world thinking, people don't rely on a particular logical system. They can hold contradictory ideas, reason by analogy, and so on. So to frame your question precisely, we need a specific model of the machinery executing the logical system(s). It's important to state whether a robot investigating a logical system is interacting with some external environment. For example, in real-world thinking, we have examples like Newton observing the falling apple.
 
  • #10
nomadreid said:
none of them seem adequate for "not having seen the contradiction yet" which allows one to operate in a logic which would eventually lead to contradiction.
It's simply a matter of recognising that all axioms are ultimately tentative and open to revision. We see this all the time in detective shows on telly. They adopt a working hypothesis (tentative axiom) that X is the criminal and seek new information and make deductions based on that and the other axioms until either a contradiction is reached (e.g. a new alibi turns up) or there is sufficient evidence to charge X. In the case where a contradiction is reached, the tentative axiom is discarded and alternatives are sought. In a typical 45-minute episode this usually happens two or three times before the correct lead is found. The meta-logic that's needed is simply the thinking process of a Holmes, a Poirot or a Marple. Or, for that matter, an Einstein or a Gödel.
 
  • #11
Thanks for the responses, fresh_42, Stephen Tashi and andrewkirk.

Stephen Tashi said:
In real-world thinking, people don't rely on a particular logical system.
This is a bit like saying that in real-world thinking, people do not rely on logarithms. Consciously, no. Yet if you want to write a program which would simulate human thinking, you are going to have to put some logs in there. That is, when someone says, "That's not logical", they are referring to Aristotelian logic. Yet they do use logic.

Stephen Tashi said:
It's important to state whether a robot investigating a logical system is interacting with some external environment.
This is an interesting observation. Yes, the model (in the sense of Model Theory) is changing, expanding, as new perceptions filter in. However, although existential statements are of course subject to change as the universe of a model expands, many statements are conservative under model expansion, and this is the relevant point here.

fresh_42 said:
Isn't "not having seen the contradiction yet" equivalent to: either there is no contradiction, or my set of meta-rules isn't sufficient to decide it?
No, there is a third option: that there is a contradiction, and my set of meta-rules (by which I presume you mean the function matching sentences to truth-values in the corresponding lattice of the model) is enough to decide it, given time. The problem is that textbook treatments of logical systems assume that if a contradiction can eventually be derived in a logic, then that logic is inconsistent, full stop. That is, the property transcends time. Humans do not.

andrewkirk said:
It's simply a matter of recognising that all axioms are ultimately tentative and open to revision. We see this all the time in detective shows on telly. They adopt a working hypothesis (tentative axiom) that X is the criminal and seek new information and make deductions based on that and the other axioms until either a contradiction is reached (e.g. a new alibi turns up) or there is sufficient evidence to charge X. In the case where a contradiction is reached, the tentative axiom is discarded and alternatives are sought. In a typical 45-minute episode this usually happens two or three times before the correct lead is found. The meta-logic that's needed is simply the thinking process of a Holmes, a Poirot or a Marple. Or, for that matter, an Einstein or a Gödel.
Yes, but the question is how to formalize this intuition so that the "meta-logic" can be brought down to the status of a logic: put another way, so that we could theoretically program it all in one program, perhaps with the ability to call up subprograms. (That is, each set of axioms previously considered to be the entire set of axioms is really only a subset of a larger set of axioms.) The aforementioned AGM approach wishes to do this, but gets awfully hand-wavy. "Chunk and Permeate" is a little better, but one still feels like this:
[attached image: handwaving.jpg]

As a side point, it always seems incongruous when a work of science fiction has an android (no, not the operating system...Google has ruined much of science-fiction) has on one side the capability to mimic humans marvelously, yet on the other side crashes when someone mentions the Liar Paradox to her.
 

  • #12
nomadreid said:
The problem is that textbook treatments of logical systems assume that if a contradiction can eventually be derived in a logic, then that logic is inconsistent, full stop. That is, the property transcends time. Humans do not.
So Fermat's Last Theorem isn't solved within your personal setup? Sorry, but such a premise would imply that centuries of scientific results are worthless, because I, as a human, cannot decide them in time. And who sets up whose meta-rules, meta-rules that do not transcend time, to define the measure? If Tao and I both had to decide FLT, it would probably exceed my lifetime, but not Tao's. Any human aspect will immediately result in inconsistency and thus cannot be used to determine inconsistency. That's a more or less hidden circular argument.
 
  • #13
fresh_42 said:
who sets up whose meta-rules, meta-rules that do not transcend time, to define the measure?
Good point: your logic would be different from Tao's, because, well, you think differently from Tao. There are two tasks for the study of logic: one is to serve as a basis for mathematics, and the other is to simulate human thought (as it exists, not as it "should" be). I am addressing the second one.
fresh_42 said:
such a premise would imply that centuries of scientific results are worthless, because I, as a human, cannot decide them in time.
Quite the contrary. The standard verdict "this system is worthless because it can eventually lead to a contradiction" would imply that the work of all the mathematicians and scientists who earlier laboured under the incorrect assumption that their mathematics was self-consistent was worthless, which is obviously not the case. Working with a calculus that was not yet known to be inconsistent did not stop Newton from deriving results that sent man to the Moon. A logic that allows for inconsistencies without resorting to the overly restrictive limitations of the usual paraconsistent logics would account for the fact that we do not view Newton's physics or mathematics as invalid just because we know something he didn't.
fresh_42 said:
Any human aspect will immediately result in inconsistency
That may well be, but humans don't crash under such inconsistencies, although if our thought processes followed the classic filter of consistent (or at least not provably inconsistent) versus inconsistent, they would. It is a bit like quantum phenomena: they are weird, but not illogical. However, whereas one was able to formalize quantum field theory, formalizing human thought with all its apparent inconsistencies appears to be harder.
 
  • #14
I would say that the reasoning used by actual human beings does not have the explosion property. If someone discovered an inconsistency in ZFC, for instance, they wouldn't conclude that 2+2=5. Because the way that we establish 2+2=4 doesn't actually rely on much (if any) of ZFC. And certainly discovering a contradiction in ZFC is not going to change our minds about real-world reasoning, that it's a bad idea to soak yourself in gasoline and then get close to a fire.

What's really true is that human reasoning, unlike the abstraction of first-order logic, isn't deductively closed. Just because we believe a collection of statements doesn't imply that we believe every logical consequence of that collection of statements. That's because our beliefs evolve over time, and it takes time to work out the logical consequences of our beliefs. Our beliefs don't just grow monotonically; as we work out the consequences of our beliefs, we might add new beliefs, which are the consequences of our old beliefs, but we also might retract a belief because we disbelieve some consequence of it.
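One hedged way to write this down (ad hoc notation, nothing standard): let ##B_t## be the finite set of sentences believed at time ##t## and ##Cn## the classical consequence operator. Deductive closure would demand ##B_t = Cn(B_t)##; actual belief dynamics look more like
$$B_{t+1} = B_t \cup \{\varphi\} \ \text{ for some newly worked-out } \varphi \in Cn(B_t)\setminus B_t, \qquad \text{or} \qquad B_{t+1} = B_t \setminus \Delta \ \text{ for some minimal } \Delta \text{ with } \varphi \notin Cn(B_t \setminus \Delta),$$
the second case firing when a worked-out ##\varphi## is disbelieved. So ##B_t \subsetneq Cn(B_t)## at every stage, and ##t \mapsto B_t## is not monotone. (This retraction step is essentially the contraction operation of the AGM framework mentioned earlier in the thread.)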
 
  • #15
nomadreid said:
No, there is a third option: that there is a contradiction, and my set of meta-rules (by which I presume you mean the function matching sentences to truth-values in the corresponding lattice of the model) is enough to decide it, given time. The problem is that textbook treatments of logical systems assume that if a contradiction can eventually be derived in a logic, then that logic is inconsistent, full stop. That is, the property transcends time. Humans do not.

Logic as taught in school is normative. If you believe A and you believe that A implies B, then you should believe B. Whether you do or not is irrelevant. Or to turn it around, if you disbelieve B, then that means that there is either something suspicious about A, or there is something suspicious about A implies B.
 
  • #16
nomadreid said:
the question is how to formalize this intuition so that the "meta-logic" can be brought down to the status of a logic: put another way, so that we could theoretically program it all in one program
I don't think the way people think can be represented by a computer program. But in some limited cases it can be approximated. An approximation in this case might be where there is a subject language-theory pair ##(L_s,T_s)##, in which the meta reasoning takes place, and an object language-theory pair ##(L_o,T_o)##, the elements of which can be referred to in ##L_s##, and which represents the current set of beliefs. If a contradictory statement ##C## is detected in ##T_o## then the following process could take place:
Code:
S  := set of axioms and axiom schemas that generate ##T_o##   {note: S is finite}
Ss := sorted list of S, ordered from most doubted to least doubted
      {often the most recently added axiom will be the most doubted,
       as that's how proof by contradiction works}
n := 1
inconsistent := TRUE
repeat
  S' := S ##\smallsetminus## {Ss[n]}
  try to deduce C from S', using the same proofs as were used to deduce it from S,
      and any obvious similar paths
  if (failed to deduce C) then inconsistent := FALSE
  else n := n + 1
until (NOT inconsistent OR n > |Ss|)
if (NOT inconsistent) then
  S := S'
  ##T_o## becomes the theory generated by S'
else
  accept the necessity to live with cognitive dissonance, or consider investigating the language ##L_o##
I don't go into the details of how to investigate the language ##L_o##, which would require considering different forms of logic, as I am not sure that branch would ever be reached. But the process would be essentially similar to the above, only working with the logical axioms of ##L_o## rather than the non-logical axioms Ss.
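For what it's worth, here is a minimal runnable sketch of the retraction loop above in Python (hypothetical names throughout; `derives` is a toy stand-in for a real theorem prover, which is of course the genuinely hard part):
Code:
# Toy sketch of the retraction loop above (hypothetical; `derives` is a
# stand-in for a real theorem prover).

def derives(axioms, contradiction):
    """Toy 'prover': a contradiction is modelled as a pair of clashing
    sentences, derivable iff both members are present among the axioms."""
    p, q = contradiction
    return p in axioms and q in axioms

def retract_until_consistent(axioms_by_doubt, contradiction):
    """axioms_by_doubt: axioms listed from most doubted to least doubted.
    Returns (new_axiom_set, retracted_axiom), or (original_set, None) if
    no single retraction blocks the contradiction (the 'live with
    cognitive dissonance' branch)."""
    full = set(axioms_by_doubt)
    for suspect in axioms_by_doubt:        # try the most doubted first
        trimmed = full - {suspect}
        if not derives(trimmed, contradiction):
            return trimmed, suspect        # retraction restores consistency
    return full, None

# Usage, in the spirit of the earlier detective example: the working
# hypothesis is the most doubted axiom and gets retracted first.
axioms = ["X is the criminal",             # tentative working hypothesis
          "X has an alibi",                # newly discovered evidence
          "an alibi rules out guilt"]      # background belief
new_set, dropped = retract_until_consistent(
    axioms, ("X is the criminal", "X has an alibi"))
print(dropped)   # -> "X is the criminal"
The most-doubted-first ordering does the same work as the sorted list Ss above: the tentative working hypothesis is retracted before any background belief is touched.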
 
  • #17
andrewkirk said:
We need to be more precise here about what we mean by 'system'. The OP is not clear whether it means 'theory' or 'language', and the two are very different.

A language L is just:
  • an alphabet that is a set of symbols, with a few additional properties such as 'arity';
  • a set of syntax rules about how expressions (wffs - well-formed formulas) may be formed from symbols;
  • a set of wffs X that are designated as 'logical axioms', which are very basic things like ##\forall x:x=x##;
  • a set of inference rules, which tell us what constitutes a proof (aka deduction).
Not that it really matters, but I would consider the language to be the set of wffs, independent of the logical axioms and the rules of inference.
 
  • #18
andrewkirk said:
If a contradictory statement ##C## is detected in ##T_o## then the following process could take place:
That is a good basic approach, although the devil is in the details, such as "accept the necessity to live with cognitive dissonance", i.e., don't let your contradictions cause explosions; how to do that without overly weakening one's reasoning powers is a tough question. Also, the process described would often take much longer than an average person's lifetime, and so would need to be trimmed somehow. Further, it would obviously be a higher-order logic, second-order at least, and if one assumes that some axioms are second-order, then at least third. A good try at filling in some of these details is given in the Journal of Philosophical Logic, Vol. 47, No. 5, October 2018, pp. 877-912, "Model Semantics for Theories: An Approach" by H. Andres, https://link.springer.com/article/10.1007/s10992-017-9453-y, but it can certainly be improved upon, although I am not sure how.

andrewkirk said:
I don't think the way people think can be represented by a computer program
Not at present, but as it is theoretically possible (that is, some rather messy computers known as the human nervous system contain the program; even though we don't know what that program looks like, it's there), one can strive in that direction.

stevendaryl said:
Logic as taught in school is normative.
Yes, that is good for the education of those dealing with most problems. It's those annoying questions on the fringe, such as why the radiation doesn't come out in a continuous spectrum from a heated black body...

stevendaryl said:
I would consider the language to be the set of wffs, independent of the logical axioms and the rules of inference.
Ah, yes, I was too quick to agree to the definition in post #2. I know Wikipedia is not a good source to quote, but it agrees with you: the language is a selected set of words, each formed by concatenation rules from the alphabet, and including a grammar is (an acceptable) abuse of language. Nothing about axioms or rules of inference, which are separate.

stevendaryl said:
the reasoning used by actual human beings does not have the explosion property.

stevendaryl said:
human reasoning, unlike the abstraction of first-order logic, isn't deductively closed.

stevendaryl said:
our beliefs evolve over time, and it takes time to work out the logical consequences of our beliefs. Our beliefs don't just grow monotonically; as we work out the consequences of our beliefs, we might add new beliefs, which are the consequences of our old beliefs, but we also might retract a belief because we disbelieve some consequence of it.

Agreed. The question is how to formalize this.
 