# Definition of a proposition

In propositional logic we study rules of logical inference from propositions, such as ## (p\rightarrow q) \leftrightarrow (\lnot q \rightarrow \lnot p) ##, or ## \lnot (p \land \lnot p) \leftrightarrow (p \lor \lnot p) ##. Do we ever define the set of propositions we are dealing with? Some propositions, such as "this statement is false", appear problematic. Do we require some set theory to formally establish propositional logic?
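Both example equivalences are tautologies, which can be checked mechanically by enumerating all truth assignments. A quick illustrative sketch in Python (not from the thread; just a brute-force truth-table check):

```python
from itertools import product

def implies(a, b):
    # Material implication: a -> b is false only when a is true and b is false.
    return (not a) or b

# (p -> q) <-> (~q -> ~p): check all four truth assignments.
contrapositive_valid = all(
    implies(p, q) == implies(not q, not p)
    for p, q in product([False, True], repeat=2)
)

# ~(p & ~p) <-> (p | ~p): check both truth values of p.
lnc_lem_valid = all(
    (not (p and not p)) == (p or not p)
    for p in (False, True)
)

print(contrapositive_valid, lnc_lem_valid)  # True True
```

That both results come out True for every assignment is exactly what makes each formula a tautology rather than a merely contingent proposition.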

andrewkirk
Homework Helper
Gold Member
> Do we require some set theory to formally establish propositional logic?

We can't do that because the definitions would become circular. Modern mathematics is founded on set theory, and set theory is founded on logic. So we can't found logic on set theory, or our system will have no grounding!

Fortunately, we do not need to define the set of propositions in first order predicate logic (FOPL), and (IIRC) first order predicate logic is a sufficient foundation for Zermelo-Fraenkel set theory.

Higher order logics can refer to collections (not necessarily 'sets') of propositions. But they can be vulnerable to problems like 'this statement is false'.

Propositional logic is a zero-order logic. It is much more constrained than FOPL, because it does not have quantifiers or variables - only propositional constants and connectives.
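To illustrate that zero-order character, here is a small, purely illustrative Python sketch of a formula representation built from propositional constants and connectives only; no variables or quantifiers appear anywhere in the grammar:

```python
# A formula is either a propositional constant (a plain string) or a tuple
# (connective, subformula[, subformula]). Note that nothing in this grammar
# is a quantifier or a term variable: it is zero-order.
def evaluate(formula, valuation):
    """Truth value of a formula under an assignment of truth values to constants."""
    if isinstance(formula, str):  # a propositional constant
        return valuation[formula]
    op, *args = formula
    if op == "not":
        return not evaluate(args[0], valuation)
    if op == "and":
        return evaluate(args[0], valuation) and evaluate(args[1], valuation)
    if op == "or":
        return evaluate(args[0], valuation) or evaluate(args[1], valuation)
    raise ValueError(f"unknown connective: {op}")

# p or not-p comes out True under every valuation of p.
lem = ("or", "p", ("not", "p"))
print(evaluate(lem, {"p": True}), evaluate(lem, {"p": False}))  # True True
```

The representation and connective names are my own choices for the sketch; any encoding with the same constants-plus-connectives shape would do.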

> We can't do that because the definitions would become circular. Modern mathematics is founded on set theory, and set theory is founded on logic. So we can't found logic on set theory, or our system will have no grounding!

Indeed, that was my concern.

> Fortunately, we do not need to define the set of propositions in first order predicate logic (FOPL), and (IIRC) first order predicate logic is a sufficient foundation for Zermelo-Fraenkel set theory.

> Higher order logics can refer to collections (not necessarily 'sets') of propositions. But they can be vulnerable to problems like 'this statement is false'.

> Propositional logic is a zero-order logic. It is much more constrained than FOPL, because it does not have quantifiers or variables - only propositional constants and connectives.

But surely for propositional logic we at least need to work with a "collection" of propositions? And this collection may be infinite. Is it simply that we don't need to worry about introducing any of the machinery of set theory in order to introduce something as basic as a collection of propositions?

Stephen Tashi
andrewkirk
That's right, we don't need the machinery. Nor do we need the concept of a collection. We use predicates in place of collections. We work with statements such as:

$$\textrm{is.proposition}(p,L)\wedge \textrm{is.proposition}(q,L) \to \textrm{is.proposition} (\textrm{cat}(p,\textrm{cat}("\wedge", q)), L)$$

which says that if symbol strings p and q both represent propositions in language L, then so does the symbol string obtained by concatenating p and q with a ##\wedge## symbol in-between.

Note that the above statement is a proposition in another language L2 whose set of constants includes 'quoted' versions of all the symbols in L and contains functions like 'cat' (concatenate) and predicates like 'is.proposition'. By using such a meta-language that can refer to language L but not to itself, we avoid the self-reference problems like 'This sentence is false' that arise when we allow a language to refer to itself.
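One rough way to see such a rule at work is to treat the propositions of L as plain strings and implement is.proposition as an ordinary predicate in the meta-language. The atom set, the choice of "∧" as the only connective, and the added parentheses below are my own illustrative assumptions, not part of the post's formal statement:

```python
# Hypothetical atomic proposition symbols of a language L (chosen here purely
# for illustration; the post does not fix a particular atom set).
ATOMS = {"p", "q", "r"}

def is_proposition(s: str) -> bool:
    """Meta-level predicate: does the symbol string s represent a proposition of L?

    Base case: s is an atomic symbol.
    Inductive case (the concatenation rule, with parentheses added so the
    grammar is unambiguous): if p and q are propositions, so is
    "(" + p + "∧" + q + ")".
    """
    if s in ATOMS:
        return True
    if len(s) >= 2 and s[0] == "(" and s[-1] == ")":
        inner = s[1:-1]
        # Try every occurrence of "∧" as the main connective.
        for i, ch in enumerate(inner):
            if ch == "∧" and is_proposition(inner[:i]) and is_proposition(inner[i + 1:]):
                return True
    return False

print(is_proposition("(p∧q)"))      # True
print(is_proposition("(p∧(q∧r))"))  # True
print(is_proposition("p∧"))         # False
```

Note that the checker itself lives in the meta-language (here, Python) and only ever mentions L's symbols as quoted strings, mirroring the L/L2 separation described above.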

sysprog
> That's right, we don't need the machinery. Nor do we need the concept of a collection. We use predicates in place of collections. We work with statements such as:
>
> $$\textrm{is.proposition}(p,L)\wedge \textrm{is.proposition}(q,L) \to \textrm{is.proposition} (\textrm{cat}(p,\textrm{cat}("\wedge", q)), L)$$
>
> which says that if symbol strings p and q both represent propositions in language L, then so does the symbol string obtained by concatenating p and q with a ##\wedge## symbol in-between.
>
> Note that the above statement is a proposition in another language L2 whose set of constants includes 'quoted' versions of all the symbols in L and contains functions like 'cat' (concatenate) and predicates like 'is.proposition'. By using such a meta-language that can refer to language L but not to itself, we avoid the self-reference problems like 'This sentence is false' that arise when we allow a language to refer to itself.

Doesn't the language contain a set (or collection) of propositions to begin with? Wikipedia seems to say so (https://en.wikipedia.org/wiki/Propositional_calculus#Generic_description_of_a_propositional_calculus):

> "The alpha set is a countably infinite set of elements called proposition symbols or propositional variables. Syntactically speaking, these are the most basic elements of the formal language, otherwise referred to as atomic formulas or terminal elements. In the examples to follow, the elements of the alpha set are typically the letters p, q, r, and so on."

andrewkirk
A language L containing elements that a Wikipedia contributor chooses to refer to collectively as a set, using natural language in English (which is not the same as language L!), is not the same thing as language L itself containing formal elements that refer to the concept of a set. Loosely, you can think of Wikipedia as using a meta-language L2 to refer to the language L of propositional calculus.

The wiki author chose to use the language of set theory to describe L, presumably because it's easier to do it that way and she is aiming only to describe L (in this case a language of propositional calculus), not to give a construction that is grounded and free from any ultimate circularity. But the description can be written without using set-theoretic concepts, and we need to do it that way (using FOPL statements such as the one shown in post 4), although it takes a little longer, if we want to have a ground-up, circularity-free construction.

> A language L containing elements that a Wikipedia contributor chooses to refer to collectively as a set, using natural language in English (which is not the same as language L!), is not the same thing as language L itself containing formal elements that refer to the concept of a set. Loosely, you can think of Wikipedia as using a meta-language L2 to refer to the language L of propositional calculus.
>
> The wiki author chose to use the language of set theory to describe L, presumably because it's easier to do it that way and she is aiming only to describe L (in this case a language of propositional calculus), not to give a construction that is grounded and free from any ultimate circularity. But the description can be written without using set-theoretic concepts, and we need to do it that way (using FOPL statements such as the one shown in post 4), although it takes a little longer, if we want to have a ground-up, circularity-free construction.

Ok, but we at least agree that the language does contain elements? So therefore it can be considered a collection of elements, if not a formal set? I think my knowledge of formal set theory isn't sufficient to distinguish a collection of elements from a set, or to know at what stage it becomes necessary to use formal set theory in order to make things rigorous.

Stephen Tashi
> Some propositions, such as "this statement is false", appear problematic.

Not all sentences are propositions. "Propositions" are defined to have certain properties (e.g. a unique "truth value").

TeethWhitener
> Not all sentences are propositions. "Propositions" are defined to have certain properties (e.g. a unique "truth value").

Yes, that much is clear. So we need a way of constructing or defining valid propositions. The way that this seems to be done is to define a "set" of atomic propositions and another "set" of connectives, and some rules for concatenating atomic propositions with connectives to create more complex propositions. My question is how we can do all of this without worrying about first defining a "set"? I suspect that the answer will be that we only have to define a "set" once we want to do slightly more complex things with it.

TeethWhitener
Gold Member
Set theory doesn't even define "set." Sets are treated as primitive, and their theory is built up using axioms and rules of inference.

sysprog
Stephen Tashi
> Ok, but we at least agree that the language does contain elements?

It's difficult to define a standard set of "ground rules" for discussing topics in logic. There is general agreement that discussing a topic in logic requires using a "meta language" that is not a rigorously specified language. Exactly what concepts are allowed in the meta language is thus not rigorously specified.

> My question is how we can do all of this without worrying about first defining a "set"?

In my opinion, you cannot do anything formal without having some meta-language notion of a "set". But the meta-language notion can be restricted to a particular type of set (e.g. a set of symbols) as opposed to the general abstract notion of a set (of anything).

For example, suppose we begin the study of logic by studying formal languages. A model for this is a set of symbols that can be printed on a page. There are rules that define what a "well formed formula" is.

In describing such a formal language, what does our meta language assume? Among other things, it assumes the reader has some perceptive abilities - e.g. he can perceive that a symbol such as "X" in one location in a formula is the same symbol as "X" in a different formula. It assumes the reader can perceive a string of symbols as symbols in a particular order. So, in the meta-language, we assume the ability to perceive a type of equality (same symbol) and a type of order (first symbol in a string, second symbol in a string, etc.).

Perceiving the equality of symbols in different locations is a more restrictive idea than the general notion of an equivalence relation. Perceiving the order of symbols in a string is a more restricted notion than the concept of an ordinal number.
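Those two restricted primitives can be mimicked very concretely: same-symbol perception is just character comparison, and symbol order is just position in a string. A trivial illustrative sketch (the formulas are arbitrary examples of my own):

```python
wff_1 = "(p∧q)"
wff_2 = "(q∧p)"

# Primitive 1: perceiving that the symbol at one location is the *same symbol*
# as the symbol at another location (a far narrower notion than a general
# equivalence relation).
print(wff_1[1] == wff_2[3])  # both positions hold the symbol "p": True

# Primitive 2: perceiving the symbols of a string in a particular order
# (first, second, ...), a far narrower notion than that of an ordinal number.
print(list(enumerate(wff_1)))  # [(0, '('), (1, 'p'), (2, '∧'), (3, 'q'), (4, ')')]
```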

Fervent Freyja
Gold Member
You might want to start with the basics and the best. Russell and Whitehead’s Principia Mathematica covers set theory, and many of Russell’s other works on propositional logic are available online for free. Some terms are outdated, but he is a good place to start.

Fervent Freyja
Maybe I’m just old school. I don’t see where you read it yourself in the post, though. In my opinion, Russell remains the best logician of all time. Anyone serious about understanding logic should review some of his work.

> Maybe I’m just old school. I don’t see where you read it yourself in the post, though. In my opinion, Russell remains the best logician of all time. Anyone serious about understanding logic should review some of his work.

And you wouldn't recommend Principia Mathematica as a starting point for learning formal logic if you had covered some of the basic literature on it.

> In my opinion, Russell remains the best logician of all time.

I think that logicianship is not a contest about who is or was the best; it's a quest for the truth ##-## it's clearly true that Bertrand Russell was a luminary ##-## so was Kurt Gödel; so was Ludwig Wittgenstein; so were Alfred Tarski, Rudolf Carnap, and others ##-## I admire all the great and good thinkers and teachers ##\cdots##

> I think that logicianship is not a contest about who is or was the best; it's a quest for the truth ##-## it's clearly true that Bertrand Russell was a luminary ##-## so was Kurt Gödel; so was Ludwig Wittgenstein; so were Alfred Tarski, Rudolf Carnap, and others ##-## I admire all the great and good thinkers and teachers ##\cdots##

Don't forget Frege!

sysprog
> Don't forget Frege!

In his Tractatus Logico-Philosophicus, Wittgenstein referred to "Frege and Russell" when he was objecting to their use of the equal sign to add 'identity' to first-order logic ##-## he said something like: to say of two different things that they are equal is nonsense; to say of one thing that it is the same as itself is to say nothing at all . . .

> In his Tractatus Logico-Philosophicus, Wittgenstein referred to "Frege and Russell" when he was objecting to their use of the equal sign to add 'identity' to first-order logic ##-## he said something like: to say of two different things that they are equal is nonsense; to say of one thing that it is the same as itself is to say nothing at all . . .

Indeed, I watched a whole philosophy lecture on the difference between "a=a" and "a=b" (starting around 20 minutes in). Sadly my reading of the actual literature on these matters isn't up to scratch, but I hope to find the time to read everything from Frege to Kripke!

sysprog
I (on offhand recollection) said:
> Kripke was only 19 years old when he wrote his semantics.

while, presumably more accurately, the Wikipedia article on Saul Kripke says:
> He wrote his first completeness theorem in modal logic at 17, and had it published a year later.

and
> Kripke's contributions to philosophy include:
> 1. Kripke semantics for modal and related logics, published in several essays beginning in his teens.
