MHB Question on substitution of variables in natural deduction of predicate logic.

Mathelogician
Hi all,
I need an explanation of the attached image from Van Dalen's Logic and Structure, especially how the red part follows from the lines before it!
Regards.
 

Attachments: Qlogic.png
The question is whether we can take a derivation and systematically replace all free occurrences of a certain variable by a constant, or vice versa. The beginning of the paragraph before the underlined text points out that only variables can be generalized by (∀I). That is, if we have a formula A(x) containing x free (and x does not occur free in the open assumptions), then we can derive ∀x A(x). However, if we replace x with c everywhere in the derivation, and in particular in A(x), we can no longer apply (∀I). Therefore, a constant cannot in general take the place of a variable in a derivation.
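To see this concretely, here is a toy example of my own (not from the book). From a derivation $D$ of $A(x)$, with $x$ not free in any open assumption, (∀I) gives
$$\dfrac{A(x)}{\forall x\, A(x)}\ (\forall I).$$
If we now replace $x$ by $c$ everywhere, we obtain a derivation $D[c/x]$ of $A(c)$, but
$$\dfrac{A(c)}{\forall x\, A(x)}$$
is not an instance of (∀I): the rule generalizes only over variables, and there is no such formula as $\forall c\, A(c)$ at all.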

Why a constant can be replaced with a variable is best understood by trying to prove Theorem 2.8.3(i), which I recommend doing very explicitly and carefully. Propositional rules do not depend on whether the formulas involved contain variables or constants, so the inductive step for such rules follows directly from the inductive hypotheses (and the fact that substitution commutes with the propositional connectives). The step for (∀E) uses the fact that $A[s/x][t/y]=A[t/y][s[t/y]/x]$ (where exactly?). However, the case of (∀I) turned out to be trickier than I first thought, and I need to consult the book for the details of the definition of (∀I). It is also interesting that the theorem's assumption says that $x$ does not occur in $\varphi$ at all, rather than merely not occurring free. I invite you to try the proof and write your thoughts on whether the induction does or does not go through for (∀I). I'll try to post my thoughts tomorrow.
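As a concrete instance of that substitution identity (my own example, with $x \not\equiv y$ and $x$ not free in $t$): take $A = P(x, y)$, $s = f(y)$ and $t = c$. Then
$$A[s/x][t/y] = P(f(y), y)[c/y] = P(f(c), c),$$
while
$$A[t/y][s[t/y]/x] = P(x, c)[f(c)/x] = P(f(c), c),$$
so both orders agree.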
 
Well said. I completely disagreed with you until half-way through the second paragraph. In physics it is very common to see versions of a theorem made simpler by substituting constants for variables. But you are specifically talking about free variables, and that reminded me of some of the fields that come up in the GR generalizations of inner product spaces. The induced fields only have the property of satisfying the local boundary conditions; you could never replace those with any kind of scalar imaginable.

Thanks!

-Dan
 
Thanks...
I'll try it and share my thoughts.
 
As you suggested, I tried to work it out explicitly and carefully.
The book doesn't explicitly define derivations in the predicate calculus; it just adds two rules to the propositional rules of natural deduction. But if my guess about the atomic formulas is right, namely that they are the base step of the recursive definition of derivations, then the first part of my proof seems correct:

1- Base step: Since (as we assumed above) the atomic formulas are themselves derivations (with a single node), for any atomic formula $\varphi$ we have a derivation whose only hypothesis is $\varphi$ itself. The desired property then obviously holds, because $\varphi \vdash \varphi$ implies $\varphi[x/c] \vdash \varphi[x/c]$.
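For a concrete instance (my own example): with $\varphi = P(c)$, the single-node derivation $P(c)$ becomes the single-node derivation $P(x)$, and $P(c) \vdash P(c)$ becomes $P(x) \vdash P(x)$.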

2- For the propositional connectives, I believe they all behave similarly with respect to the problem, so I just solved the (∧I) case (also spelled out in symbols below). [Attached pic 1]

[Pic 1: VVcjlLX.jpg]
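In symbols, what my picture is meant to show (a sketch, so details may differ from the attachment): given derivations $D_1$ of $A$ and $D_2$ of $B$, the induction hypothesis gives derivations $D_1[x/c]$ of $A[x/c]$ and $D_2[x/c]$ of $B[x/c]$ from the substituted hypotheses, and one application of (∧I) yields $A[x/c] \wedge B[x/c] = (A \wedge B)[x/c]$, using that substitution commutes with $\wedge$.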


3- For the universal elimination rule, I solved it without using the substitution theorem you mentioned (though for comparison I also sketch that route below). [Attached pic 2]

[Pic 2: D6RyhDk.jpg]
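For comparison, here is how the step would go using the substitution identity above (not the route in my picture, so treat it as a sketch): if $D$ ends with $(\forall E)$ passing from $\forall y\, A$ to $A[t/y]$, the induction hypothesis gives a derivation of $(\forall y\, A)[x/c] = \forall y\,(A[x/c])$ (no capture, since $x$ does not occur in the original derivation and $x \not\equiv y$), and applying $(\forall E)$ with the term $t[x/c]$ yields $A[x/c][t[x/c]/y]$, which is exactly $A[t/y][x/c]$.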


4- For the universal introduction rule, I couldn't use the trick (red-starred around the green line!) from the previous solution, because here we have the problem of cancelling those hypotheses in $\Gamma$ that contain $x$ as a free variable. [Therefore, not every set containing all the hypotheses of the final derivation (the one using universal introduction) also contains the hypotheses of the first derivation (used as the inductive hypothesis).] I try to state the sticking point in symbols below.
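In symbols, the point where I am stuck (my own notation, so take it with a grain of salt): the induction hypothesis gives a derivation of $A[x/c]$ whose open hypotheses are $\Gamma[x/c]$, and to reapply $(\forall I)$ and obtain $\forall y\,(A[x/c])$ I must check that $y$ is not free in $\Gamma[x/c]$. I know $y$ is not free in $\Gamma$, but I cannot see how this side condition interacts with the hypotheses that get cancelled between the subderivation and the final derivation.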

Sorry for the bad handwriting! Ask about anything that is ambiguous...
I'll wait to see the other solutions.
 