Question: What book would you recommend for reading about mathematical logic?

While I was studying Propositional Calculus from Elliott Mendelson's "Introduction to Mathematical Logic," I came across, on page 18, the proof of functional completeness of the logical connectives ##\neg, \wedge, \vee##, shown below:

[Screenshot of the proof from Mendelson, "Introduction to Mathematical Logic," p. 18]


The part that does not make sense to me starts from "Then ##C_k## has . . ." down to the end of the proof.

Question: Why does ##C_k## have the value T for this assignment?
 
Can you say which "then ##C_k##" you're referring to? There are at least three that I see:
- Then ##C_k## has the value T...
- then ##C_k## is a disjunct of D.
- then ##C_k## is not a disjunct of D
 
By that I meant the following sentences:

"Then C_{k} has the value T for this assignment, whereas every other C_{i} has the value F. If f has the value T for row k, then C_{k} is a disjunct of D. Hence, D would also have the value T for this assignment. If f has the value F for row k, then C_{k} is not a disjunct of D and all the disjuncts take the value F for this assignment. Therefore, D would also have the value F. Thus, D generates the truth function f."

I could not grasp the following steps. For instance, why does ##C_k## in the first sentence have to be T?
 
Maybe it would help to work out a particular example. Let ##f(x_1, x_2, x_3)## be "if ##x_1## then ##x_2## else ##x_3##". Then the truth table looks like this:

$$\left[ \begin{array}{c|ccc|c}
 & A_1 & A_2 & A_3 & f(A_1, A_2, A_3) \\
1 & T & T & T & T \\
2 & T & T & F & T \\
3 & T & F & T & F \\
4 & T & F & F & F \\
5 & F & T & T & T \\
6 & F & T & F & F \\
7 & F & F & T & T \\
8 & F & F & F & F
\end{array} \right]$$

There are 8 possible assignments of values to ##A_1, A_2, A_3##.
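In case it helps to check this mechanically, here is a small Python sketch (the function name `f` and the printing format are just my choices for illustration) that enumerates the eight assignments in the same order as the table and prints the value of ##f## for each:

```python
from itertools import product

def f(x1, x2, x3):
    # "if x1 then x2 else x3"
    return x2 if x1 else x3

# Rows 1-8, listing T before F so the order matches the table above.
for row, (a1, a2, a3) in enumerate(product([True, False], repeat=3), start=1):
    cells = ["T" if v else "F" for v in (a1, a2, a3, f(a1, a2, a3))]
    print(row, *cells)
```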

So we have the following values for ##C_i##:
$$\left[ \begin{array}{c|l|c}
 & C_i & f \\
1 & A_1 \wedge A_2 \wedge A_3 & T \\
2 & A_1 \wedge A_2 \wedge \neg A_3 & T \\
3 & A_1 \wedge \neg A_2 \wedge A_3 & F \\
4 & A_1 \wedge \neg A_2 \wedge \neg A_3 & F \\
5 & \neg A_1 \wedge A_2 \wedge A_3 & T \\
6 & \neg A_1 \wedge A_2 \wedge \neg A_3 & F \\
7 & \neg A_1 \wedge \neg A_2 \wedge A_3 & T \\
8 & \neg A_1 \wedge \neg A_2 \wedge \neg A_3 & F
\end{array} \right]$$

Then ##D## is the disjunction of all the ##C_i## where ##f## has value ##T##:

##D = C_1 \vee C_2 \vee C_5 \vee C_7 = (A_1 \wedge A_2 \wedge A_3) \vee (A_1 \wedge A_2 \wedge \neg A_3) \vee (\neg A_1 \wedge A_2 \wedge A_3) \vee (\neg A_1 \wedge \neg A_2 \wedge A_3)##

Now, let's augment table 1 to include the values of ##C_i## and ##D##. I'm bold-facing those ##C_k## that are in the disjunction ##D##.

$$\left[ \begin{array}{c|ccc|c|cccccccc|c}
 & A_1 & A_2 & A_3 & f(A_1, A_2, A_3) & \mathbf{C_1} & \mathbf{C_2} & C_3 & C_4 & \mathbf{C_5} & C_6 & \mathbf{C_7} & C_8 & D \\
1 & T & T & T & T & T & F & F & F & F & F & F & F & T \\
2 & T & T & F & T & F & T & F & F & F & F & F & F & T \\
3 & T & F & T & F & F & F & T & F & F & F & F & F & F \\
4 & T & F & F & F & F & F & F & T & F & F & F & F & F \\
5 & F & T & T & T & F & F & F & F & T & F & F & F & T \\
6 & F & T & F & F & F & F & F & F & F & T & F & F & F \\
7 & F & F & T & T & F & F & F & F & F & F & T & F & T \\
8 & F & F & F & F & F & F & F & F & F & F & F & T & F
\end{array} \right]$$

So what they're saying is that ##C_k## has value ##T## for assignment number ##k## (each conjunct of ##C_k## matches the truth value its variable takes in row ##k##), and that ##C_k## has value ##F## for every other assignment. If ##C_k## is in the disjunction ##D##, then ##D## has value ##T## for assignment ##k##; otherwise ##D## has value ##F##. So we can see that ##D## is true in exactly the same rows as ##f##.
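To connect this back to the general proof, here is a small Python sketch along the same lines (the helper names `C`, `D`, `assignments`, and `true_rows` are mine, not Mendelson's notation): it builds each ##C_k## as the conjunction that is true only on row ##k##, takes ##D## to be the disjunction of the ##C_k## for the rows where ##f## is true, and checks that ##D## agrees with ##f## on all eight assignments.

```python
from itertools import product

def f(x1, x2, x3):
    # "if x1 then x2 else x3", as in the table above
    return x2 if x1 else x3

# Rows 0..7 here correspond to rows 1..8 of the table.
assignments = list(product([True, False], repeat=3))

def C(k, values):
    # C_k is the conjunction of A_i (if A_i is T in row k) or not-A_i (if F),
    # so C_k is true exactly on row k's assignment and false everywhere else.
    return all(v == r for v, r in zip(values, assignments[k]))

# D is the disjunction of those C_k for which f has the value T on row k.
true_rows = [k for k, row in enumerate(assignments) if f(*row)]

def D(values):
    return any(C(k, values) for k in true_rows)

# D generates the same truth function as f.
assert all(D(row) == f(*row) for row in assignments)
print("D agrees with f on all", len(assignments), "assignments")
```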
 
I noticed my misunderstanding. All is clear now. Thank you very much!
 
In his book, a few pages before the referenced one (pp. 10-11), Elliott Mendelson illustrates the operations of the propositional calculus with electronic circuit diagrams. The theorem is that any truth function of the propositions A, B, C, ..., however complicated, can be realized using the propositional-calculus operations "and," "or," and negation. A truth function is a table that assigns the truth value "true" or "false" to each "true"/"false" distribution of the propositions A, B, C, .... Using this, I tried to illustrate the proof of the theorem.

[Circuit diagram (proposition.jpg): parallel through-lines of switches from X to Y, one for each row of the table where the truth function is true]


For the given truth function we can build the electronic circuit shown in the figure: each row of the table in which the truth function is true gets its own through-line from X to Y. If the value of A in that row is true, the first switch on the through-line is labeled A; if it is false, it is labeled not-A; and likewise for B, C, and so on. In every case where the truth function is true, current flows from X to Y.

For a row of the table where the truth function is false (that is, for that particular "true"/"false" distribution of the propositions A, B, C, ...), no current flows: any given through-line conducts only for the one "true"/"false" distribution of A, B, C, ... that corresponds to its own row of the table.
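As a rough sketch of the same idea in code (the names `truth_function`, `through_lines`, and `current_flows` are mine, and the series/parallel modeling is only an illustration of the figure): each through-line is a series connection of switches, one per proposition, closed exactly when the proposition has the truth value written on the switch, and the through-lines are wired in parallel, so current flows from X to Y precisely when some through-line has all of its switches closed.

```python
from itertools import product

# A hypothetical truth function of the propositions A, B, C.
def truth_function(a, b, c):
    return (a and b) or (not a and c)

rows = list(product([True, False], repeat=3))

# One through-line per row where the truth function is true; the switch for
# a proposition is closed when the proposition has that row's truth value.
through_lines = [row for row in rows if truth_function(*row)]

def current_flows(a, b, c):
    # Series within a line: all of its switches must be closed (all).
    # Parallel lines between X and Y: one conducting line suffices (any).
    return any(all(switch == value for switch, value in zip(line, (a, b, c)))
               for line in through_lines)

# The circuit conducts exactly when the truth function is true.
assert all(current_flows(*row) == truth_function(*row) for row in rows)
```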

Mendelson's book holds beautiful memories for me. I was 15 when I read the Russian version during the summer break; I learned the language and the mathematics at the same time. I remember first reading there about the liar paradox and, e.g., Russell's paradox, and it was where I first read about von Neumann's foundation of set theory (the gloomy von Neumann classes that are not sets). Everything seemed mysterious. I could only read a few pages a day because I didn't know the language well enough. I see these things quite differently today. However, if someone asked me which book I would recommend for reading, I wouldn't think for a minute: Ebbinghaus, H.-D., Flum, J., and Thomas, W., Mathematical Logic, Springer.
 