I "Theory" in multi-valued logic?

AI Thread Summary
The discussion centers on the definition of "theory" in the context of multi-valued logics compared to binary-valued logics. It explores whether the term applies exclusively to binary systems or if it can encompass multi-valued frameworks, emphasizing that a theory includes axioms and rules of inference but may also allow for contradictions. Participants debate the implications of completeness in multi-valued logics, noting that traditional definitions may not hold due to the existence of statements that are neither true nor false. The conversation touches on the relationship between soundness and theorems in multi-valued logics, suggesting that soundness can be maintained even if not all theorems are fully true. Overall, the thread highlights the complexity of defining logical theories across different systems.
nomadreid
TL;DR Summary
The definition of a "theory" for a model in classical (two-valued) logic is the collection of true sentences. But in multi-valued logic, is the theory the collection of only the purely true sentences, or of the "not-purely false" sentences, or what?
As in the summary: is the term "theory" only used for binary-valued logics? If not, how is it defined for multi-valued logics?
 
Try Google.
 
I did. Perhaps it is my search skills, but I came up empty. Therefore I had hoped that someone here might know.
 
Somewhat ironically, Google is useful when you know enough about the topic to do an effective filtering of the millions of hits you get. But if you already know enough...
 
According to my book* (which I thoroughly recommend; _..._ marks a term cross-referenced in the book), a theory is "a _formal language_ together with its _axioms_ and _rules of inference_. Such a system generates a set of truths, the _theorems_, but it cannot refer to the truth of its own sentences." Hope that is helpful. There is no special reference to multi-valued or two-valued logics. Note that one of the co-authors is a philosopher of mathematics.

*Borwein and Borowski's Dictionary of Mathematics.
 
Thanks, WWGD. According to your definition, the theory consists only of the axioms and rules of inference, not their consequences, and it can include contradictions. Upon reflection, I realize that the phrase "inconsistent theory" is permissible, meaning that even if the definition is altered to include the consequences, neither of the options in my original post would be valid; there is no reason to exclude any degree of truth value, including 0.

The point that threw me off was the following, from the classic Chang and Keisler: "Let A be a model for L; then the theory of A is the set of all sentences which hold in A." Since a contradictory sentence does not hold in any model, it would seem from this definition that contradictions, or even other falsehoods, cannot be part of the theory of a model. So I conclude that "theory" is a much broader concept than "theory of a model."

Anyway, your allusion to Gödel points out that it is tricky to say whether a theory will produce a contradiction, so best in the case of a definition of such a general word as "theory" is not to require the absence of contradictions.

Probably my question should have been about "consistent theories": if a consistent theory is one that does not produce a contradiction, and if one includes the consequences as part of the theory (apparently both usages are current: some take only the axioms and rules, as you do, and some take the whole collection of consequences), then the definition of a consistent multi-valued theory would match the second choice in my original post.

Would that be a fair summary?
 
(Starting with bivalent logic...)
Some good notes on formal methods:
http://web.stanford.edu/class/cs357/lecture9.pdf
They include your definition of a theory (which is the one I'm familiar with: a theory is a set of sentences in a formal system that is closed under implication). A model is an algebraic structure (a set of objects plus relations) which satisfies a theory. If you have a contradiction in your theory, then by the principle of explosion the theory is the set of all sentences. This particular theory has no model to satisfy it.
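The explosion step can be checked semantically with a throwaway script (Python here purely for illustration; the two-atom toy language and the `entails` helper are my own constructions, not anything from the notes): since no valuation makes a contradiction true, the contradiction vacuously entails every sentence.

```python
from itertools import product

# Semantic check of the principle of explosion in bivalent logic:
# premises |= conclusion holds iff every valuation making the premise
# true also makes the conclusion true.  Nothing satisfies "P and not P",
# so it entails everything vacuously.

atoms = ["P", "Q"]

def entails(premise, conclusion):
    """premise, conclusion: functions from a valuation dict to bool."""
    for values in product([True, False], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if premise(v) and not conclusion(v):
            return False
    return True

contradiction = lambda v: v["P"] and not v["P"]        # P & ~P
print(entails(contradiction, lambda v: v["Q"]))        # True (vacuously)
print(entails(contradiction, lambda v: not v["Q"]))    # True (vacuously)
print(entails(lambda v: v["P"], lambda v: v["Q"]))     # False
```

Note that the contradiction entails both Q and ~Q, which is exactly why its deductive closure is the whole language.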

The notes above go at least some distance toward answering your question. In particular, in bivalent logic, a theory ##\mathcal{T}## is complete iff for every sentence ##\sigma##, either ##\sigma \in \mathcal{T}## or ##\neg\sigma \in \mathcal{T}##. In addition, given a model ##\mathcal{M}##, the theory of that model ##Th(\mathcal{M})## is the set of sentences true in that model. The notes assert that ##Th(\mathcal{M})## is complete, but this is not true in multivalent logic. You might want to search around for model theory in multivalent logic for a more complete (no pun intended) answer. Maybe the references included in SEP's entry here:
https://plato.stanford.edu/entries/logic-manyvalued/
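To make the completeness point concrete, here is a small illustrative sketch (Python, with a single hypothetical atom P; reading "true in M" as "has value exactly 1" is one common convention in many-valued semantics, not the only one):

```python
from fractions import Fraction

# Contrast Th(M) in bivalent vs Lukasiewicz 3-valued semantics
# (values 0, 1/2, 1; negation is 1 - x), reading "sigma holds in M"
# as "sigma has value exactly 1".  In the bivalent model every sentence
# or its negation lands in Th(M); once an atom takes value 1/2, neither
# P nor ~P does, so Th(M) fails the bivalent definition of completeness.

def neg(x):
    return 1 - x

def holds(value):                  # "true in M" = value exactly 1
    return value == 1

models = {"bivalent": 1, "three-valued": Fraction(1, 2)}  # value of atom P

for name, p in models.items():
    decided = holds(p) or holds(neg(p))
    print(name, "decides P:", decided)
# bivalent decides P: True
# three-valued decides P: False
```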
 
Maybe Math Stack Exchange or MathOverflow can give better answers.
 
Thanks, TeethWhitener. It is interesting to compare your remark that the theory of a model is all those sentences which are true (by which you apparently mean fully true) in the given model with the statement that the theory is all those sentences which are satisfied by the model. Returning to your Stanford link: the definitions at the end of Section 1.1 allow other values (so-called "designated values") to qualify a set of well-formed formulas as having a model. (It does not, however, use the term "theory" for this set.)

I would be interested to know the justification for the statement that a multi-valued logic cannot be complete. (In answering, you can assume that I am familiar with the basics of model theory.) The condition that, for every sentence, either it or its negation is satisfied by the model does not say that either the sentence or its negation has to be given a valuation of full truth.

Thanks for the references, but as far as I could see, neither one of them explicitly defines the theory of a model for multi-valued logics.
 
  • #10
nomadreid said:
I would be interested to know the justification for the statement that a multi-valued logic cannot be complete
I was just referring to the definition of complete given in the notes. Since in a many-valued logic there are (presumably) statements such that neither they nor their negations are true, you have a situation where ##\sigma \notin \mathcal{T}## and ##\neg\sigma\notin\mathcal{T}##. So by the definition in the notes, any such theory would not be complete.
 
  • #11
TeethWhitener said:
I was just referring to the definition of complete given in the notes.
The notes referred to bivalent logics. That every sentence in the theory must be (fully) true is a consequence of there being only two possible values (since, if there were a false statement, explosion would stop the theory from being complete). But this implication does not hold in a multi-valued logic: having sentences that are merely not (fully) false does not lead to explosion.

Based on the definition of completeness, that for every sentence either it or its negation is in the theory, then
TeethWhitener said:
you have a situation where ##\sigma \notin \mathcal{T}## and ##\neg\sigma\notin\mathcal{T}##.
Not necessarily. This would only work if you defined the theory as containing only fully true sentences, which is not part of the definition of a complete theory. To try to prove that you can't have partially true sentences (sentences with a valuation strictly between 0 and 1) by assuming this is circular.
 
  • #12
nomadreid said:
The notes referred to bivalent logics.
Yes, I know. My point was simply that the bivalent definition of completeness excludes multivalent logics from being complete. (Of course, it's possible to construct a multivalent logic where every possible sentence is either strictly true or strictly false, but this is a pathological case that defeats the spirit of multivalency.)
 
  • #13
TeethWhitener said:
My point was simply that the bivalent definition of complete excludes multivalent logics from being complete. (Of course, it's possible that you could construct a multivalent logic where every possible sentence is either strictly true or strictly false,
Actually, no, even apart from the trivial degenerate cases. First, note that there are two types of completeness here: semantic and syntactic. I'll take the stronger version, the syntactic one. The definition is that for every sentence in the language, either it or its negation is a theorem. This brings us to the definition of a theorem: a theorem of a theory is any statement that follows from the axioms using the rules of inference. In a multi-valued or fuzzy logic, the theorems need not receive a valuation of 1.

The bivalent definition of completeness is not different from the multivalent definition, but in a bivalent logic there is the consequence that all theorems must be true; this consequence does not follow in a multivalent logic, and so it does not stop a multi-valued logic from being complete.
 
  • #14
Ok I'm trying to wrap my head around what you've written. The problem I'm having with your post involves this:
nomadreid said:
In a multi-valued or fuzzy logic, the theorems need not acquire an evaluation of 1.
I don't see how this is true, namely because any such system would be unsound (since there would be theorems that are not true).
I think also maybe we're talking past each other. I'll think more about it.

Meanwhile, the following document might shed some light on your initial question:
http://www.personal.usyd.edu.au/~njjsmith/papers/smith-many-valued-logics.pdf
Particularly on page 6, where the author discusses the different logical systems that arise from interpreting tautologies in multivalent logic in different ways (always true vs. never false).
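For a quick feel for that page-6 distinction, one can tabulate excluded middle in Łukasiewicz's 3-valued semantics. The connective definitions below (negation 1−x, disjunction max) are the standard Łukasiewicz ones; the script itself is just an illustrative sketch:

```python
from fractions import Fraction

# Excluded middle in Lukasiewicz 3-valued logic: with values 0, 1/2, 1,
# negation 1-x and disjunction max, the sentence  A v ~A  is never false
# (its value is always >= 1/2) but it is not always true (at A = 1/2 it
# only reaches value 1/2).  Which reading counts as "tautology" depends
# on which values you designate.

values = [Fraction(0), Fraction(1, 2), Fraction(1)]
lem = [max(a, 1 - a) for a in values]          # value of A v ~A

print(all(v >= Fraction(1, 2) for v in lem))   # True: never false
print(all(v == 1 for v in lem))                # False: not always true
```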
 
  • #15
Thanks for the link, TeethWhitener. I have downloaded it and will be looking through it.
In the meantime:
TeethWhitener said:
I don't see how this is true, namely because any system like this would be unsound (since there would be theorems that are not true).
No; all you need for soundness is that implication preserves truth, meaning that a false statement (valuation 0) will never be a theorem. This does not imply that all theorems are fully true, which can perfectly well happen in a multi-valued logic. A somewhat stronger reading of soundness is that if A implies B, then the valuation of B in your truth-value lattice is greater than or equal to the valuation of A. In a bivalent logic this reduces to true statements implying true statements; the property is maintained in a sound multi-valued logic, but you can have other theorems if at least one of your axioms has a valuation less than 1.
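Assuming the Łukasiewicz implication min(1, 1−a+b) as one concrete instance (the post itself does not fix a particular implication), the "valuation of B ≥ valuation of A" reading can be checked mechanically:

```python
from fractions import Fraction

# Check of the stronger soundness reading, using the Lukasiewicz
# implication  val(A -> B) = min(1, 1 - a + b):  whenever the implication
# itself is fully true (value 1), the consequent's value is at least the
# antecedent's, so a chain of fully-true implications can never lower
# truth value -- in particular it can never reach 0 from a nonzero start.

values = [Fraction(k, 4) for k in range(5)]    # 0, 1/4, 1/2, 3/4, 1

def implies(a, b):
    return min(Fraction(1), 1 - a + b)

ok = all(b >= a
         for a in values for b in values
         if implies(a, b) == 1)
print(ok)  # True
```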
 
  • #16
Yes this seems to be related to the issue of always true vs. never false. Interesting stuff.
 
  • #17
But are theories then syntactic objects or are they semantic ones too?
 
  • #18
There is an analogy here to the different views of classical physics and quantum mechanics. Classical physics is very deterministic and somewhat tied to first-order predicate logic, whereas QM is oriented towards expressing truths in terms of the probability of a truth, and hence you have to reason with the logic of probabilities instead of one of black-and-white truths. More to the point, when you have to deal with truths that have to be expressed in probabilities or levels of likelihood, fuzzy logic is more applicable. Consult the works of Lotfi Zadeh for relevant theory, of which there is a lot. Also consider reviewing the topic of fuzzy arithmetic and how it handles arithmetic with fuzzy variables.
There are many things in the real (physical) world of sufficient complexity that they can only be expressed in terms of probabilities. This is roughly equivalent to saying that some things cannot be given 100% certainty of outcome, only statistical meaning. In the abstract world (for example, anything non-physical, like mathematics), you can successfully use binary logic. But the real world is inherently uncertain in some ways and cannot be fully predicted, only estimated. For example, you cannot fully predict the stock market, or turbulent flow. Sometimes you can approximate it, but you have to deal with acceptable error.
I work with AI and modeling and see that some situations can be handled with binary decision making, but other models can only be handled with probabilistic or statistical reasoning.
 
  • #19
rstone4500 said:
There is an analogy here to the different views of classical physics and quantum mechanics. [...]
Well, the Jim Simons foundation seems to have done a good job of predicting the market.

I hope I don't throw things off by wondering whether we can somehow think of probability theory as an infinite-valued logic, and whether there may be some sort of limiting process that takes us from one to the other.
 
  • #20
'Good job' is a close approximation to 'fully predicting', but it certainly involves risk. Another example of non-fully-predictable models is social and political systems. An unpredictable political or economic decision can swing markets, and you can't know for sure beforehand, because the full system is too complex to model. For example, we could not have predicted with full certainty that some virus would pop up in China and disrupt world markets. We cannot know when a magnetic pole shift might reduce protection from radiation and cause population reductions.

Indeed, the probability approach is a way to handle infinite-valued logic, and it may even be the only way. I look on QM as requiring infinite-valued logic, a big paradigm shift from classical mechanics. I think that the eventual bridging between small-scale QM and large-scale classical models will require a new form of scalable causal reasoning.
 
  • #21
I think that if we want to escape from 'yes-no' logic we should still stick to finite, integer-indexable subsets of the unit interval -- sayable numbers, please -- I hope that's OK. I'm sometimes inaccurate enough to talk as if there were such a thing as 'sorta true', but . . .
 
  • #22
I will only be satisfied with uncountable infinities! Okay, conceding: we have to be practical.
In my AI work I realize that somehow the human brain manages to deal with the serious amount of uncertainty in the world without going over the cliff too often. Somehow we manage to mostly survive. When you look at it, this is a remarkable accomplishment, and it is even more remarkable that evolution got us there.

But I agree, as far as I can see, that whatever logic we use to understand the universe needs to be manageable from a practical standpoint. It is interesting that our neural systems perform admirably while using only relatively low-resolution parameter values. The cells add up a bunch of spikes for a while and then make a hard yes or no, but the system as a whole manages to deal with very fuzzy logic. So the low end is deterministic, but the high levels work with uncertainty and reduce it to decisions. And even better, the brain still works usably even when drunk; that is, a degraded system still does logic. Kinda. At least well enough for us to pick up a woman in a bar. :)
 
  • #23
rstone4500 said:
I will only be satisfied with uncountable infinities! Okay, conceding, we have to be practical. [...]
Sounds like an "Insights" in the making...
 
  • #24
Just curious: in many-valued logic, proof by contradiction in the standard sense does not work, as it makes use of the excluded middle; if ~A is shown to lead to a contradiction, then we conclude A. But this is because we assume there is no third option. What sort of rule of inference other than MP do we then use in multi-valued logic?
 
  • #25
First of all, the principle of the excluded middle is usually stated as an axiom, not a rule of inference, and many classical logics also have only one rule of inference (MP). Also, there is not just one multi-valued logic; there are very many, and the rules of inference can differ from one to another (although it is usually the axioms that differ). In general, given MP, the axioms are usually sufficient for a logical system to handle inference.
 
  • #26
nomadreid said:
First of all, the principle of excluded middle is usually stated as an axiom, not a rule of inference, and many classical logics also have only one rule of inference (MP). Also, there is not just one multi-valued logic; there are very many, and the rules of inference can differ from one to the other (although it is usually the axioms that differ). In general, given MP, axioms usually are sufficient for a logical system to deal with inference.
I meant that proof by contradiction as a rule of inference may not apply in 3+-valued logics.
 
  • #27
WWGD said:
I meant proof by contradiction as a rule of inference may not apply for 3+-valued logics
Well, no, but it also doesn't apply as a rule of inference in binary logics: it isn't a rule of inference, it's an axiom. In addition, a proof is a series of applications of the rules of inference to sentences, whereby the first step is a theorem (counting axioms as trivial theorems). So, to better state your point: the principle of the excluded middle is not an axiom in multi-valued logics. However, there are similar principles: for example, in intuitionistic logic, ~(∀A:(A∨~A)) is consistent with the axioms. In Łukasiewicz's 3-valued logic, (~A→~B)→(B→A) and ((A→~A)→A)→A are axioms.
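Taking the standard Łukasiewicz 3-valued tables (implication min(1, 1−x+y), negation 1−x), one can verify by brute force that the two quoted schemes always take the designated value 1. The script is only a sanity check on these two schemes, not a claim about the axiomatization as a whole:

```python
from fractions import Fraction
from itertools import product

# Check that the two quoted Lukasiewicz 3-valued axiom schemes,
# (~A -> ~B) -> (B -> A)  and  ((A -> ~A) -> A) -> A,
# take the designated value 1 under every assignment of 0, 1/2, 1.

vals = [Fraction(0), Fraction(1, 2), Fraction(1)]

def imp(x, y):
    return min(Fraction(1), 1 - x + y)

def neg(x):
    return 1 - x

ax1 = all(imp(imp(neg(a), neg(b)), imp(b, a)) == 1
          for a, b in product(vals, repeat=2))
ax2 = all(imp(imp(imp(a, neg(a)), a), a) == 1 for a in vals)
print(ax1, ax2)  # True True
```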
 
  • #28
nomadreid said:
Well, no, but it also doesn't apply as a rule of inference for binary logics. It isn't a rule of inference, it's an axiom. [...]

IIRC, a rule of inference is just a function from an n-tuple of premises to another sentence, called the conclusion. Then proof by contradiction:

Assume ~B
Prove C& ~C
--------------
Conclude B

is a rule of inference. Maybe it is not used in classical presentations, but it can be considered a rule of inference.
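Read semantically, this reductio pattern can be packaged as the sentence (~B → (C ∧ ~C)) → B. A small sketch, using the Łukasiewicz 3-valued tables as one concrete multi-valued semantics (my choice of semantics, not something fixed by the thread), shows that this sentence is valid bivalently but not three-valuedly:

```python
from fractions import Fraction
from itertools import product

# The reductio pattern, read as the sentence (~B -> (C & ~C)) -> B,
# is valid in bivalent logic but fails in Lukasiewicz 3-valued logic:
# at B = C = 1/2 it only reaches value 1/2.
# Connectives: imp(x, y) = min(1, 1 - x + y), neg(x) = 1 - x, conj = min.

def imp(x, y):
    return min(Fraction(1), 1 - x + y)

def reductio(b, c):
    return imp(imp(1 - b, min(c, 1 - c)), b)

two_valued = [Fraction(0), Fraction(1)]
three_valued = two_valued + [Fraction(1, 2)]

print(all(reductio(b, c) == 1
          for b, c in product(two_valued, repeat=2)))    # True
print(all(reductio(b, c) == 1
          for b, c in product(three_valued, repeat=2)))  # False
print(reductio(Fraction(1, 2), Fraction(1, 2)))          # 1/2
```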
 
  • #29
It is an interesting question, the difference between an axiom and a rule of inference; in some systems there is no difference. However, in others there is, and I am sure that my characterization will not be the best one, so I would appreciate input from others, but here are a couple of ways one can distinguish them:

(a) sentences belong to the theory, and rules of inference are meta-theory; axioms are sentences

(b) rules of inference are functions taking collections of sentences (the antecedents) to sentences; an axiom has no antecedents (here the distinction between material implication → and syntactic consequence, the turnstile ⊢, steps in).

However, the distinction appears to depend on a point of view, and can be blurred: for example, one could consider an axiom as a rule of inference with no antecedents, thereby obliterating the difference, or one could make the deduction theorem implicit, etc.
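View (b) can be sketched directly in code; the toy rules and sentence encoding below are entirely hypothetical, chosen only to show axioms as zero-premise rules and theoremhood as closure under all the rules:

```python
# A rule of inference maps a tuple of premise sentences to a conclusion
# (or None when it doesn't apply); an axiom is just a rule with an empty
# premise tuple.  Theoremhood = closure of the empty set under the rules.
# Sentences are encoded as strings or ('->', X, Y) tuples.

def modus_ponens(premises):
    """From X and ('->', X, Y), conclude Y; else no conclusion."""
    if len(premises) == 2:
        x, cond = premises
        if isinstance(cond, tuple) and cond[0] == '->' and cond[1] == x:
            return cond[2]
    return None

def axiom_p(premises):                 # axiom = zero-premise rule
    return 'P' if premises == () else None

def axiom_p_implies_q(premises):       # another zero-premise rule
    return ('->', 'P', 'Q') if premises == () else None

rules = [modus_ponens, axiom_p, axiom_p_implies_q]

# Naive fixpoint: keep applying rules until nothing new appears.
theorems = set()
changed = True
while changed:
    changed = False
    candidates = [()] + [(a, b) for a in theorems for b in theorems]
    for rule in rules:
        for prem in candidates:
            out = rule(prem)
            if out is not None and out not in theorems:
                theorems.add(out)
                changed = True

print(sorted(theorems, key=str))  # the two axioms plus Q, by modus ponens
```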

Any input from those who can express this better?
 
  • #30
In some normally deterministic machines, logic circuits determine microcode response, microcode determines response to machine instructions, and machine instructions determine output based on input.
 
  • #31
A theory in mathematical logic is often understood to be a set of sentences closed under logical implication.

So understood, it doesn't matter whether the sentence was proved from axioms or using rules of inference. You can formalise the same theory either way. So the details about whether you're using a system of natural deduction or one with many axioms don't really matter.

So understood, the notion of a theory is indeed wider than 'the set of sentences true in a model M'. We can (and should) talk about theories without having to talk about the truth of the sentences of the theory at all. An inconsistent theory (classically understood) is the set of all sentences (of the formal language under study).

'The set of sentences true in model M' (assuming classical model theory) does pick out a theory, since 'true in M' is defined to be closed under classical logical implication.

This definition of a theory still works in non-classical logic -- for instance, an intuitionistic theory will be a set of sentences closed under intuitionistic implication.

How non-classical models treat truth in a model, and how they relate this notion to the non-classical logic they are models for, will probably vary in different theories. But I would avoid talking about the truth of a sentence of a theory independently of a given model for that theory (as some here seem to come close to doing) -- that discussion seems to go beyond what mathematical logic is about.
 
  • #32
yossell said:
'The set of sentences true in model M' (assuming classical model theory) does pick out a theory, ...
I don't quite understand your post but this sentence is interesting.

I don't understand how this would be the case. Consider the set of sentences true when we quantify over ##\mathbb{N}## (as usual). This set of sentences defines what can roughly be called the "truths of (true) arithmetic" [the "(true)" in brackets to indicate that with "different natural numbers/finite ordinals than ##\mathbb{N}##", we would have some "truths" that are not true in ##\mathbb{N}##]. Call this set ##A##. Yet different concrete theories seem to prove different subsets of ##A## [1]. So how does the set of truths for the (standard) natural numbers single out a unique theory? I suppose I am missing something?

[1] OK, I suppose one objection could be that a model for a stronger theory would have many more "objects" than just natural numbers. But still, at the very least, the stronger theory would pose all the questions that can be posed in a weaker theory such as PA.

P.S.
This also got me thinking: does model theory "guarantee" that set theory has a model with the same "natural numbers" as ##\mathbb{N}##? And if the answer to the previous question is "no", would the (possible) non-existence of such a model mean that the common truths/theorems of all the models of set theory with "different" natural numbers [assuming that set theory only proves "true things" (otherwise it seems to become hopelessly complicated for me if we only assume consistency)] would not agree with ##A##? Or could there still be agreement with ##A##?

But anyway, I don't have even a superficial understanding of the complicated model-theoretic issues underlying this. More specifically, the idea of non-standard natural numbers (and more generally ordinals) is quite unclear to me.

==========================================

Regarding the specific topic at hand, I don't have much to add.
 
  • #33
SSequence said:
Consider the set of sentences true when we are quantifying on ##\mathbb{N}## (as usual). [...] So how does the set of truths for the (standard) natural numbers single out a unique theory?

I can think of two things that may be worrying you.

1.
'Different concrete theories seem to prove different subsets.'
Yes -- but why is this a problem? Different theories and different subsets go hand in hand.

Begin with a first order axiomatisation of arithmetic and let A be the smallest set containing the axioms and closed under logical consequence. By Godel's incompleteness theorem, we know this theory is incomplete.

Contrast with the theory B: the set of sentences true in the intended model of PA. This theory is complete -- for every sentence ##\sigma## of first order arithmetic, either ##\sigma## or ##\neg\sigma## is contained in B. However, this theory cannot be recursively axiomatised.

Accordingly, A and B are different theories -- but the set of sentences true in the standard model is a unique set.

2.
Different concrete theories can contain different symbols -- one theory might contain '+' and '×'; another might contain '!' or other symbols. Yet, as '!' can be defined in standard PA in terms of + and ×, we do not think of the theory formulated with '!' as a genuinely different theory -- even though the set of sentences is, strictly speaking, different.

I reply: Fair point! -- I agree it could be good to treat theories which are mere definitional extensions as not genuinely distinct theories at all.

[1] OK I suppose one objection could be that a model for a stronger theory would have many more "objects" than just natural numbers. But still, on the very least, the stronger theory would still pose all the questions that can be posed (in weaker theory such as PA).
Stronger in what sense? The trouble with the standard first order formulation of arithmetic is that it has non-standard models -- models that contain 'numbers' which are greater than every natural number. 'Stronger' formulations -- such as second order formulations -- rule out these 'extra' objects.

P.S.
This also got me thinking that does model theory "guarantee" that set-theory has a model with same "natural numbers" as ##\mathbb{N}##? [...]

I'm not sure where you're going with this -- in model theory, we don't expect and can't get one theory's models to have the 'very same' domain as another theory's models. If T has a model M, and if M' is isomorphic to M, then M' is a model of T also, even if its domain is different.
 
  • #34
yossell said:
1.
'Different concrete theories seem to prove different subsets.'
Yes -- but why is this a problem? Different theories and different subsets go hand in hand. [...] Accordingly, A and B are different theories -- but the set of sentences true in the standard model is a unique set.

Yes, I guess I understand what you are saying here. I was "assuming" that something like (or similar to) "recursive enumerability of the theorems" was a necessary prerequisite when you wrote "theory".

yossell said:
Stronger in what sense? The trouble with the standard first order formulation of arithmetic is that it has non-standard models -- models that contain 'numbers' which are greater than every natural number. 'Stronger' formulations -- such as second order formulations -- rule out these 'extra' objects.
I just mean stronger in the sense that it proves "more" facts about the (real) natural numbers -- just as a "stronger theory" might prove Con(PA) [along with everything that PA proves] while PA itself wouldn't.
yossell said:
I'm not sure where you're going with this -- in model theory, we don't expect and can't get one theory's models to have the 'very same' domain as another theory's models. If T has a model M, and if M' is isomorphic to M, then M' is a model of T also, even if its domain is different.
What I was asking was whether model theory proves that set theory must have a model with the same "finite ordinals" as ##\mathbb{N}## (the actual natural numbers).

And the further (secondary) question was: if the answer is no, does it (necessarily) imply [assuming set theory proves only "true things"] that set theory proves facts about "finite ordinals" which do not match the "truths of arithmetic" [i.e., the truths of questions posed by quantification over ##\mathbb{N}##, the actual natural numbers]?

OK, I did a quick search. This question (it comes up as the very first link) seems to be close to what I am asking:
https://math.stackexchange.com/questions/647480

Point 1 by the user Carl Mummert seems to answer the first part, perhaps? (But I need to read carefully what an ##\omega##-model actually means...)
"First, three caveats:
1. Nothing that we can prove within ZFC can justify this. It would be naively possible that ZFC is consistent but not ω-consistent (and thus has no ω-model), and in that case we could still prove all the same things in ZFC."
 
  • #35
nomadreid said:
It is an interesting question as to the difference between an axiom and a rule of inference,
It's indeed an interesting question!

The study of formal languages (and mathematical exposition in general) assumes the student has certain basic perceptive abilities for symbols and patterns of symbols and that these symbols and patterns have basic permanent properties similar to permanent properties of physical objects.

For example, the student must be able to perceive whether two symbols at different locations "are the same" - i.e. are the same with respect to some properties but different with respect to the property of location - however location is imagined - different location in a sequence of printed symbols from left to right or different location on an imaginary tape in a Turing machine etc.

We use "ordinary" logic to reason about strings of symbols and think of the symbols as having properties than correspond to properties of physically implemented symbols (e.g. printed symbols).

There is a clear separation between the metalanguage and metalogic versus the content of the language until we begin to use symbols to denote things that happen in the metalanguage.

For example, if the rules in the metalanguage allow us to begin with the sequence of symbols "##P \land Q##" and write down other sequences that conclude with "##Q##", we can summarize this fact by saying "##Q##" is derivable from "##P \land Q##", and abbreviate it as ##P \land Q \ \vdash \ Q##.

Do we have terminology to distinguish statements that refer only to manipulations of symbols in the language from statements that refer to some interpretation of the symbols (e.g. the distinction between ##(P \land Q) \implies Q## and ##P \land Q \ \vdash \ Q##)? Is an axiom one type of statement and a rule of inference the other type?
 
  • #36
It has always seemed to me that a rule of inference occupies a funny place somewhere between semantics and higher-level syntax, between model and theory.

On one hand, it refers to the truth values of statements, and hence is attached to the semantics of the theory.

On the other hand, it is equivalent to a higher-order statement quantifying over the sentences of the theory, with the truth values now being constants in a corresponding higher-order theory.

So, clearly not belonging to the theory but also not belonging to an interpretation in the sense of sets which satisfy sentences, it seems to inhabit a third level somewhere.

I haven't come to terms with where that third level belongs. Just saying that it is metalogical seems to be hand-waving.
 
