Undergrad "Theory" in multi-valued logic?

Summary
The discussion centers on the definition of "theory" in the context of multi-valued logics compared to binary-valued logics. It explores whether the term applies exclusively to binary systems or if it can encompass multi-valued frameworks, emphasizing that a theory includes axioms and rules of inference but may also allow for contradictions. Participants debate the implications of completeness in multi-valued logics, noting that traditional definitions may not hold due to the existence of statements that are neither true nor false. The conversation touches on the relationship between soundness and theorems in multi-valued logics, suggesting that soundness can be maintained even if not all theorems are fully true. Overall, the thread highlights the complexity of defining logical theories across different systems.
nomadreid
TL;DR
The definition of a "theory" for a model in classical (two-valued) logic is the collection of true sentences. But in multi-valued logic, is the theory the collection of only the purely true sentences, or of the "not-purely false" sentences, or what?
As in the summary: is the term "theory" only used for binary-valued logics? If not, how is it defined for multi-valued logics?
 
Try google.
 
I did. Perhaps it is my search skills, but I came up empty. Therefore I had hoped that someone here might know.
 
Somewhat ironically, google is useful when you know enough about the topic to do an effective filtering of the millions of hits you get. But if you already know enough...
 
According to my book* (which I thoroughly recommend; _..._ refers to a word referenced in the book), a theory is "a _formal language_ together with its _axioms_ and _rules of inference_. Such a system generates a set of truths, the _theorems_, but it cannot refer to the truth of its own sentences." Hope it was helpful. No special reference to multi-valued or two-valued logics. Note that one of the co-authors is a philosopher of mathematics.

*Borwein and Borowski's Dictionary of Mathematics.
 
Thanks, WWGD. According to your definition, the theory is only the axioms and rules of inference, but not the consequences, and can include contradictions. Upon reflection, I realize that the phrase "inconsistent theory" is permissible, meaning that even if the definition is altered to include the consequences, neither of the options in my original post would be valid; there is no reason to exclude any degree of truth value, including 0.

The point that threw me off was the following, from the classic Chang and Keisler: "Let A be a model for L: then the theory of A is the set of all sentences which hold in A." Since a contradictory sentence does not hold in a model, then it would seem from this definition that contradictions or even other falsehoods cannot be part of a theory of a model. So the conclusion I make is that "theory" is a much broader concept than "theory of a model."

Anyway, your allusion to Gödel points out that it is tricky to say whether a theory will produce a contradiction, so best in the case of a definition of such a general word as "theory" is not to require the absence of contradictions.

My question should perhaps have been about "consistent theories": if a consistent theory is one that does not produce a contradiction, and if one includes the consequences as part of the theory (apparently both usages are current: some, as you do, take only the axioms and rules of inference, while others take the collection of consequences), then the definition of a consistent multi-valued theory would match the second choice in my original post.

Would that be a fair summary?
 
(Starting with bivalent logic...)
Some good notes on formal methods:
http://web.stanford.edu/class/cs357/lecture9.pdf
They include your definition of a theory (which is the one I'm familiar with): a theory is a set of sentences in a formal system that is closed under implication. A model is an algebraic structure (a set of objects plus relations) which satisfies a theory. If you have a contradiction in your theory, then by the principle of explosion the theory is the set of all sentences, and this particular theory has no model to satisfy it.

The notes above go at least some distance to answering your question. In particular, in bivalent logic, a theory ##\mathcal{T}## is complete iff for every sentence ##\sigma##, either ##\sigma \in \mathcal{T}## or ##\neg\sigma \in \mathcal{T}##. In addition, given a model ##\mathcal{M}##, the theory of that model ##Th(\mathcal{M})## is the set of sentences true in that model. The notes assert that ##Th(\mathcal{M})## is complete, but this is not true in multivalent logic. You might want to search around for model theory in multivalent logic for a more complete (no pun intended) answer. Maybe the references included in SEP’s entry here:
https://plato.stanford.edu/entries/logic-manyvalued/
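The bivalent claim about ##Th(\mathcal{M})## can be checked mechanically on a toy language. Here is a minimal sketch (all names are made up for illustration): sentences over a single atom p with negation and conjunction, evaluated in a model where p is true; for every sentence, either it or its negation lands in the theory of the model.

```python
# Toy bivalent language: one atom "p", connectives "not" and "and", depth <= 2.
def sentences(depth):
    if depth == 0:
        return [("p",)]
    smaller = sentences(depth - 1)
    out = list(smaller)
    out += [("not", s) for s in smaller]
    out += [("and", s, t) for s in smaller for t in smaller]
    return out

def ev(s, model):
    # Evaluate a sentence in a two-valued model.
    if s[0] == "p":
        return model["p"]
    if s[0] == "not":
        return not ev(s[1], model)
    return ev(s[1], model) and ev(s[2], model)

model = {"p": True}
theory = [s for s in sentences(2) if ev(s, model)]  # Th(M): sentences true in M

# Bivalent completeness of Th(M): every sentence or its negation is in Th(M).
assert all(ev(s, model) or ev(("not", s), model) for s in sentences(2))
```

The completeness check goes through only because every sentence gets exactly one of two values; that is the step that breaks in a multivalent semantics.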
 
Maybe Mathematics Stack Exchange or MathOverflow can give better answers.
 
Thanks, TeethWhitener. It is interesting to compare your remark about the theory of a model being all those sentences which are true (by which you apparently mean fully true) under the given model with the statement that the theory is all those statements which are satisfied by the model. If you look at your Stanford link, you will find that the definitions at the end of Section 1.1 allow other values (so-called "designated values") to qualify a set of well-formed formulas as having a model. (It does not, however, use the term "theory" for this set.)

I would be interested to know the justification for the statement that a multi-valued logic cannot be complete. (In answering, you can assume that I am familiar with the basics of model theory.) The condition that, for every sentence, either it or its negation is satisfied by the model does not say that either the statement or its negation has to be given a valuation of full truth.

Thanks for the references, but as far as I could see, neither one of them explicitly defined the theory of a model for multi-valued logics.
 
  • #10
nomadreid said:
I would be interested to know the justification for the statement that a multi-valued logic cannot be complete
I was just referring to the definition of complete given in the notes. Since in a many-valued logic there are (presumably) statements that are neither true nor whose negations are true, you have a situation where ##\sigma \notin \mathcal{T}## and ##\neg\sigma\notin\mathcal{T}##. So by the definition in the notes, any such theory would not be complete.
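This situation is easy to exhibit concretely. A minimal sketch in Łukasiewicz's three-valued semantics (values 0, ½, 1; negation v(¬σ) = 1 − v(σ); only the value 1 designated as "true"): a sentence valued ½ is not in the theory, and neither is its negation.

```python
# Lukasiewicz 3-valued semantics: values 0, 0.5, 1; negation flips around 1/2.
def neg(v):
    return 1 - v

designated = {1}       # only fully-true sentences count as "true"

v_sigma = 0.5          # a sentence that is neither fully true nor fully false
assert v_sigma not in designated        # sigma is not in the theory
assert neg(v_sigma) not in designated   # and neither is its negation
```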
 
  • #11
TeethWhitener said:
I was just referring to the definition of complete given in the notes.
The notes referred to bivalent logics. That every sentence in the theory must be (fully) true is a consequence of there being only two possible values (since, if there were a false statement, explosion would stop the theory from being complete). But this implication does not hold in a multi-valued logic: having sentences that are merely not (fully) false does not lead to explosion.

Based on the definition of completeness, that for every sentence either it or its negation is in the theory, then
TeethWhitener said:
you have a situation where ##\sigma \notin \mathcal{T}## and ##\neg\sigma \notin \mathcal{T}##.
Not necessarily. This would only work if you defined the theory as having only fully true sentences, which is not part of the definition of a complete theory. To try to prove that you can't have partially true sentences (sentences with a valuation strictly between 0 and 1) by assuming this is circular.
 
  • #12
nomadreid said:
The notes referred to bivalent logics.
Yes I know. My point was simply that the bivalent definition of complete excludes multivalent logics from being complete. (Of course, it's possible that you could construct a multivalent logic where every possible sentence is either strictly true or strictly false, but this is kind of a pathological case that defeats the spirit of multivalency.)
 
  • #13
TeethWhitener said:
My point was simply that the bivalent definition of complete excludes multivalent logics from being complete. (Of course, it's possible that you could construct a multivalent logic where every possible sentence is either strictly true or strictly false,
Actually, no, even apart from the trivial degenerate cases. First, note that there are two types of completeness here: semantic and syntactic. I'll take the stronger version, the syntactic. The definition is that for every sentence in the language, either it or its negation is a theorem. This brings us to the question as to the definition of a theorem: a theorem of a theory is any statement that follows from the axioms using the rules of inference. In a multi-valued or fuzzy logic, the theorems need not acquire an evaluation of 1.

The bivalent definition of complete is not different from the multivalent definition, but in a bivalent logic, there is the consequence that all its theorems must be true; this consequence does not follow in a multivalent logic, and so this consequence does not stop a multivalued logic from being complete.
 
  • #14
Ok I'm trying to wrap my head around what you've written. The problem I'm having with your post involves this:
nomadreid said:
In a multi-valued or fuzzy logic, the theorems need not acquire an evaluation of 1.
I don't see how this is true, namely because any system like this would be unsound (since there would be theorems that are not true).
I think also maybe we're talking past each other. I'll think more about it.

Meanwhile, the following document might shed some light on your initial question:
http://www.personal.usyd.edu.au/~njjsmith/papers/smith-many-valued-logics.pdf
Particularly on page 6, where the author discusses the different logical systems that arise from interpreting tautologies in multivalent logic in different ways (always true vs. never false).
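For instance, in Łukasiewicz's three-valued logic the law of excluded middle A∨¬A is "never false" (its value never drops below ½) but not "always true". A quick brute-force sketch, assuming the standard tables (¬x = 1 − x, disjunction = max):

```python
# Lukasiewicz 3-valued connectives: negation is 1 - x, disjunction is max.
def neg(a): return 1 - a
def lor(a, b): return max(a, b)

values = [0, 0.5, 1]
lem = [lor(a, neg(a)) for a in values]  # v(A or not-A) for each value of A

assert all(v >= 0.5 for v in lem)    # "never false": value never drops below 1/2
assert not all(v == 1 for v in lem)  # but not "always true": fails at v(A) = 1/2
```

Which of these two notions one takes as "tautology" determines which logical system results.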
 
  • #15
Thanks for the link, Teeth Whitener. I have downloaded it, and will be looking through it.
In the meantime:
TeethWhitener said:
I don't see how this is true, namely because any system like this would be unsound (since there would be theorems that are not true).
No: all you need for soundness is that implication preserves truth, meaning that a false statement (valuation of 0) will never be a theorem. That does not imply that all theorems are fully true, and partially true theorems can perfectly well be part of a multi-valued logic. A somewhat stronger interpretation of soundness is that if A implies B, then the valuation of B in your truth-value lattice is greater than or equal to the valuation of A. In a bivalent logic this reduces to true statements implying true statements, and the property is maintained in a sound multi-valued logic, but you can have other theorems if at least one of your axioms has a valuation less than one.
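The stronger interpretation can be illustrated with the standard Łukasiewicz implication, v(A→B) = min(1, 1 − v(A) + v(B)): the implication is fully true exactly when v(B) ≥ v(A). A small sketch verifying this over a grid of rational values:

```python
from fractions import Fraction

def imp(a, b):
    # Standard Lukasiewicz implication: v(A -> B) = min(1, 1 - v(A) + v(B))
    return min(Fraction(1), 1 - a + b)

vals = [Fraction(i, 10) for i in range(11)]
for a in vals:
    for b in vals:
        # A -> B is fully true (value 1) exactly when v(B) >= v(A);
        # otherwise its value falls short of 1 by exactly v(A) - v(B).
        assert (imp(a, b) == 1) == (b >= a)
        if b < a:
            assert imp(a, b) == 1 - (a - b)
```

Exact rationals (`Fraction`) are used so the equality checks don't suffer floating-point rounding.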
 
  • #16
Yes this seems to be related to the issue of always true vs. never false. Interesting stuff.
 
  • #17
But are theories then syntactic objects or are they semantic ones too?
 
  • #18
There is an analogy here to the different views of classical physics and quantum mechanics. Classical physics is very deterministic and somewhat tied to 1st order predicate logic, whereas QM is oriented towards expressing truths in terms of probability of a truth, and hence you have to reason with the logic of probabilities instead of one of black and white truths. More to the point, when you have to deal with truths that have to be expressed in probabilities or levels of likelihood, then fuzzy logic is more applicable. Consult the works of Lotfi Zadeh for relevant theory, of which there is a lot. Also, consider reviewing the topic of fuzzy arithmetic and how it handles arithmetic logic with fuzzy variables.
There are many things in the real world (the physical world) of sufficient complexity that they can only be expressed in terms of probabilities. This is kind of equivalent to saying that some things cannot be given 100% certainty of outcome, only statistical meanings. In the abstract world (for example anything non-physical like mathematics), you can successfully use binary logic. But the real world is inherently uncertain in some ways and cannot be fully predicted, only estimated. For example, you cannot fully predict the stock market, or turbulent flow. Sometimes you can approximate it, but have to deal with acceptable error.
I work with AI and modeling and see that some situations can be handled with binary decision making but other models can only be handled with probability or statistical reasoning.
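For concreteness, Zadeh's original fuzzy connectives are simply min, max, and 1 − x on membership degrees in [0, 1]. A minimal sketch (the degree values are made up for illustration):

```python
# Zadeh's fuzzy connectives on membership degrees in [0, 1].
def f_and(a, b): return min(a, b)
def f_or(a, b):  return max(a, b)
def f_not(a):    return 1 - a

# Made-up degrees for illustration:
turbulent, calm = 0.75, 0.25   # "the flow is turbulent", "the market is calm"
assert f_and(turbulent, calm) == 0.25        # conjunction takes the weaker degree
assert f_or(turbulent, f_not(calm)) == 0.75  # disjunction takes the stronger one
```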
 
  • #19
rstone4500 said:
There is an analogy here to the different views of classical physics and quantum mechanics. ...
Well, the Jim Simons foundation seems to have done a good job of predicting the market.

I hope I don't throw things off by wondering whether we can somehow think of probability theory as an infinite-valued logic, and whether there may be some sort of limiting process that takes us from one to the other.
 
  • #20
"Good job" is a close approximation to "fully predicting" but certainly involves risk. Another example of non-fully-predictable models is social and political systems. An unpredictable political-economic decision can swing markets, and you can't know for sure beforehand, because the full system is too complex to model. For example, we could not have predicted with full certainty that some virus would pop up in China and disrupt world markets. We cannot know when a magnetic pole shift might reduce protection from radiation and cause population reductions.

Indeed, the probability approach is a way to handle infinite-valued logic, and it may even be the only way. I look on QM as requiring infinite-valued logic, a big paradigm shift from classical mechanics. I think that the eventual bridging between small-scale QM and large-scale classical models will require a new form of scalable causality reasoning.
 
  • #21
I think that if we want to escape from 'yes-no' logic, we should still stick to finite, integer-indexable subsets of the unit interval -- sayable numbers, please -- I hope that's ok. I'm sometimes inaccurate enough to suppose that there's such a thing as 'sorta true', but . . .
 
  • #22
I will only be satisfied with uncountable infinities! Okay, conceding, we have to be practical.
In my AI work I realize that somehow the human brain manages to deal with the serious amount of uncertainty in the world, without going over the cliff too often. Somehow we manage to mostly survive. When you look at it, this is a remarkable accomplishment and even more remarkable that evolution got us there.

But I agree, as far as I can see, that whatever logic we use to understand the universe needs to be manageable from a practical standpoint. It is interesting that our neural systems perform admirably while using only relatively low-resolution parameter values. The cells add up a bunch of spikes for a while and then make a hard yes or no, but the system as a whole manages to deal with very fuzzy logic. So the low end is deterministic, but the high levels work with uncertainty and reduce it to decisions. And even better, the brain somewhat works usably even when drunk; that is, a degraded system still logics. Kinda. At least well enough for us to pick up a woman in a bar. :)
 
  • #23
rstone4500 said:
I will only be satisfied with uncountable infinities! Okay, conceding, we have to be practical. ...
Sounds like an "Insights" in the making...
 
  • #24
Just curious: in many-valued logic, proof by contradiction in the standard sense does not work, as it makes use of the excluded middle; if ~A is shown to lead to a contradiction, then we conclude A. But this is because we assume there is no third option. What sort of rule of inference other than MP do we then use in multi-valued logic?
 
  • #25
First of all, the principle of excluded middle is usually stated as an axiom, not a rule of inference, and many classical logics also have only one rule of inference (MP). Also, there is not just one multi-valued logic; there are very many, and the rules of inference can differ from one to the other (although it is usually the axioms that differ). In general, given MP, axioms usually are sufficient for a logical system to deal with inference.
 
  • #26
nomadreid said:
First of all, the principle of excluded middle is usually stated as an axiom, not a rule of inference ...
I meant proof by contradiction as a rule of inference may not apply for 3+-valued logics.
 
  • #27
WWGD said:
I meant proof by contradiction as a rule of inference may not apply for 3+-valued logics
Well, no, but it also doesn't apply as a rule of inference for binary logics. It isn't a rule of inference, it's an axiom. In addition, a proof is a series of applications of the rules of inference to sentences, whereby the first step is a theorem (classifying axioms as trivial theorems). So, to better state your point: the principle of the excluded middle is not an axiom in multi-valued logics. However, there are principles that are similar: for example, in intuitionistic logic, ~(∀A:(A∨~A)) is consistent with the axioms. In Łukasiewicz's 3-valued logic, (~A→~B)→(B→A) and ((A→~A)→A)→A are axioms.
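Both of those formulas (which appear among Wajsberg's axioms for Łukasiewicz's three-valued logic) can be confirmed as Ł3 tautologies by brute force over the standard tables, ¬x = 1 − x and x→y = min(1, 1 − x + y):

```python
from fractions import Fraction
from itertools import product

VALUES = [Fraction(0), Fraction(1, 2), Fraction(1)]

def neg(a): return 1 - a
def imp(a, b): return min(Fraction(1), 1 - a + b)

# (~A -> ~B) -> (B -> A) takes the value 1 under every assignment:
assert all(imp(imp(neg(a), neg(b)), imp(b, a)) == 1
           for a, b in product(VALUES, repeat=2))

# ((A -> ~A) -> A) -> A likewise:
assert all(imp(imp(imp(a, neg(a)), a), a) == 1 for a in VALUES)
```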
 
  • #28
nomadreid said:
Well, no, but it also doesn't apply as a rule of inference for binary logics. It isn't a rule of inference, it's an axiom. In addition, a proof is a series of applications of the rules of inference to sentences, whereby the first step is a theorem (classifying axioms as a trivial theorems). So, to better state your point: the principle of the excluded middle is not an axiom in multi-valued logics. However, there are principles that are similar: for example, in intuitionist logic, ~(∀A:(A∨~A)) is consistent with the axioms. In Lukasiewicz's (with apologies for not having the first letter quite correct) 3-valued logic, (~A→~B)→(B→A)) and ((A→~A)→A)→A) are axioms.

IIRC, a rule of inference is just a function from an n-ple of premises into another premise called the conclusion. Then proof by contradiction:

Assume ~B
Prove C& ~C
--------------
Conclude B

is a rule of inference. It may not be used in the classical presentation, but it can be considered one.
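That reading can be made literal in code. A sketch (the tuple encoding of sentences is made up for illustration) treating modus ponens and the reductio rule above as functions from premise tuples to a conclusion:

```python
# Sentences as nested tuples, e.g. ("imp", "p", "q") for p -> q (made-up encoding).
def modus_ponens(premises):
    # From the pair (A, A -> B), return the conclusion B.
    a, imp_ab = premises
    assert imp_ab[0] == "imp" and imp_ab[1] == a
    return imp_ab[2]

def reductio(premises):
    # From a single premise (~B -> (C and ~C)), return the conclusion B.
    tag, neg_b, contra = premises[0]
    assert tag == "imp" and neg_b[0] == "not"
    assert contra[0] == "and" and contra[2] == ("not", contra[1])
    return neg_b[1]

assert modus_ponens(("p", ("imp", "p", "q"))) == "q"
assert reductio((("imp", ("not", "b"), ("and", "c", ("not", "c"))),)) == "b"
```

Here reductio is compressed into a one-premise rule by folding the subproof into an implication; in a natural-deduction presentation it would instead discharge an assumption.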
 
  • #29
It is an interesting question as to the difference between an axiom and a rule of inference, and in some systems there is no difference. However, in others there is, and I am sure that my characterization will not be the best one, so I would appreciate input by others, but here are a couple of ways one can distinguish them:

(a) sentences belong to the theory, while rules of inference belong to the metatheory; axioms are sentences

(b) rules of inference are functions taking collections of sentences, also known as the antecedents, to sentences; an axiom has no antecedents (here the distinction between material implication → and syntactic consequence, the turnstile |-, steps in).

However, the distinction appears to depend on one's point of view, and can be blurred: for example, one could consider an axiom as a rule of inference with no antecedents, thereby obliterating the difference, or one could make the deduction theorem implicit, etc.

Any input from those who can express this better?
 
  • #30
In some normally deterministic machines, logic circuits determine microcode response, microcode determines response to machine instructions, and machine instructions determine output based on input.
 
