Question on substitution of variables in natural deduction of predicate logic.

SUMMARY

This discussion centers on the substitution of variables in natural deduction within predicate logic, specifically referencing Van Dalen's "Logic and Structure." The key conclusion is that while variables can be generalized using the universal introduction rule (∀I), constants cannot replace variables in derivations without violating this rule. The participants emphasize the importance of understanding Theorem 2.8.3(i) to grasp the nuances of substitution and the behavior of propositional connectives in this context.

PREREQUISITES
  • Understanding of predicate logic and natural deduction
  • Familiarity with Van Dalen's "Logic and Structure"
  • Knowledge of universal quantification and its rules, particularly (∀I) and (∀E)
  • Basic principles of propositional logic and connectives
NEXT STEPS
  • Study Theorem 2.8.3(i) in detail to understand variable substitution
  • Review the definitions and applications of the universal introduction rule (∀I)
  • Explore the implications of substituting constants for variables in logical proofs
  • Examine the relationship between propositional connectives and their inductive properties
USEFUL FOR

Students and scholars of logic, particularly those studying predicate logic, natural deduction, and the intricacies of variable substitution in formal proofs.

Mathelogician
Hi all,
I need an explanation of the attached image from Van Dalen's Logic and Structure, specifically of how the red part follows from the lines before it!
Regards.
 

Attachments

  • Qlogic.png (16.2 KB)
The idea is whether we can take a derivation and systematically replace all free occurrences of a certain variable by a constant or vice versa. The beginning of the paragraph before the underlined text points out that only variables can be generalized by (∀I). That is, if we have a formula A(x) containing x free (and x does not occur free in open assumptions), then we can derive ∀x A(x). However, if we replace x with c everywhere in the derivation and, in particular, in A(x), we can no longer apply (∀I). Therefore, a constant cannot in general take the place of a variable in a derivation.
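To make this concrete, here is a small example of my own (not from the book). The standard derivation of $\forall x\,(P(x) \to P(x))$ is

$$\dfrac{\dfrac{[P(x)]^{1}}{P(x) \to P(x)}\,(\to I)_{1}}{\forall x\,(P(x) \to P(x))}\,(\forall I)$$

The assumption $P(x)$ is cancelled at $(\to I)$, so no open assumption contains $x$ free and $(\forall I)$ applies. Replacing $x$ by the constant $c$ throughout still yields a correct derivation of $P(c) \to P(c)$, but $(\forall I)$ cannot generalize the constant $c$, so the last step is lost.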

Why a constant can be replaced with a variable is best understood by trying to prove Theorem 2.8.3(i), which I recommend doing very explicitly and carefully. Propositional rules do not depend on whether formulas involved contain variables or constants. This means that the inductive step for such rules follows directly from the inductive hypotheses (and the fact that substitutions commute with propositional connectives). The step for (∀E) uses the fact that $A[s/x][t/y]=A[t/y][s[t/y]/x]$ (where exactly?). However, the case of (∀I) turned out to be trickier than I first thought. I need to consult the book for the details of the definition of (∀I). It is also interesting that the theorem assumption says that $x$ does not occur in $\varphi$ at all instead of occur free. I invite you to try the proof and write your thoughts on whether induction does or does not go through for (∀I). I'll try to post my thoughts tomorrow.
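To see concretely that substitution commutes with the propositional connectives, here is a quick computational sketch. The encoding is my own toy one, not anything from Van Dalen: terms and formulas are nested tuples, and quantifiers are omitted since only the propositional case is at issue here.

```python
# Toy encoding (purely illustrative): a term is a variable or constant
# (a string) or a function application (name, arg1, ...); a quantifier-free
# formula is an atom ('P', t1, ...) or a connective ('and'|'or'|'->', A, B).

def subst_term(t, x, s):
    """Replace every occurrence of the variable x in term t by the term s."""
    if isinstance(t, str):
        return s if t == x else t
    return (t[0],) + tuple(subst_term(a, x, s) for a in t[1:])

def subst(A, x, s):
    """Substitute s for x in a quantifier-free formula A.

    Substitution is pushed through connectives unchanged: this clause is
    exactly the fact that it commutes with the propositional connectives.
    """
    op = A[0]
    if op in ('and', 'or', '->'):
        return (op,) + tuple(subst(B, x, s) for B in A[1:])
    return (op,) + tuple(subst_term(t, x, s) for t in A[1:])

# Check A[s/x][t/y] = A[t/y][s[t/y]/x] on an example, with x distinct
# from y and x not occurring in t:
A = ('and', ('P', 'x'), ('Q', ('f', 'y')))
s = ('g', 'y', 'c')   # a term containing y
t = 'c'               # a closed term, so x does not occur in it
lhs = subst(subst(A, 'x', s), 'y', t)
rhs = subst(subst(A, 'y', t), 'x', subst_term(s, 'y', t))
print(lhs == rhs)  # True
```

Of course this only tests one instance; the general claim is proved by induction on $A$, with the connective clause of `subst` doing the work in the inductive step.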
 
Evgeny.Makarov said:
The idea is whether we can take a derivation and systematically replace all free occurrences of a certain variable by a constant or vice versa. [...]
Well said. I completely disagreed with you until halfway through the second paragraph. In physics it is very common to see versions of a theorem made simpler by substituting constants for variables. But you are specifically talking about "free variables," and that reminded me of some of the fields that come up in the GR generalizations of inner product spaces. The induced fields only have the property of satisfying the local boundary conditions. You could never replace those with any kind of scalars imaginable.

Thanks!

-Dan
 
Thanks...
I'll try it and talk about my thoughts.
 
As you suggested, I tried to work it out explicitly and exactly.
The book does not define derivations in the predicate calculus separately; it just adds two rules to the propositional rules of natural deduction. But if I am right in guessing that the atomic formulas form the basic step of the recursive definition of derivations, the first part of my proof seems to go through:

1. Base case: Since (as we assumed above) the atomic formulas are themselves derivations (single-node trees), for any atomic formula φ we have a derivation with φ as its sole hypothesis and conclusion. The desired property then holds trivially, because φ ⊢ φ immediately gives φ[x/c] ⊢ φ[x/c].

2. The propositional connectives all behave similarly with respect to the problem, so I only worked out the (∧I) case. [Attached pic 1]

VVcjlLX.jpg
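In symbols, the step I carried out (my own rendering; the picture has the details) is: given derivations $\mathcal{D}_1$ of $\varphi$ and $\mathcal{D}_2$ of $\psi$ from $\Gamma$, the induction hypothesis gives $\mathcal{D}_1[x/c]$ and $\mathcal{D}_2[x/c]$, and $(\wedge I)$ applies again:

$$\dfrac{\mathcal{D}_1 \quad \mathcal{D}_2}{\varphi \wedge \psi}\,(\wedge I) \quad\leadsto\quad \dfrac{\mathcal{D}_1[x/c] \quad \mathcal{D}_2[x/c]}{\varphi[x/c] \wedge \psi[x/c]}\,(\wedge I),$$

using that $(\varphi \wedge \psi)[x/c] = \varphi[x/c] \wedge \psi[x/c]$.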


3. For the universal elimination rule, I solved it without using the substitution identity you mentioned. [Attached pic 2]

D6RyhDk.jpg


4. For the universal introduction rule, I couldn't use the trick (red-starred around the green line!) from the previous solution, because here we face the problem of cancelling those hypotheses in Γ that contain x as a free variable. [Therefore not every set containing all the hypotheses of the final derivation (the one using universal introduction) also contains the hypotheses of the first derivation (used as the inductive hypothesis).]
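My tentative reading of the problematic step, in symbols: the original derivation ends with

$$\dfrac{\varphi}{\forall y\,\varphi}\,(\forall I), \qquad y \text{ not free in the open hypotheses } \Gamma,$$

and after substituting we would need

$$\dfrac{\varphi[x/c]}{\forall y\,(\varphi[x/c])}\,(\forall I), \qquad y \text{ not free in } \Gamma[x/c].$$

Since $[x/c]$ replaces a constant, the only variable it can introduce into $\Gamma$ is $x$ itself, so (if I am not mistaken) the side condition can only fail when $y = x$, which seems to be exactly what the theorem's assumption that $x$ does not occur in $\varphi$ at all is meant to rule out. I may well be missing something here.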

Sorry for the bad handwriting! Ask about any ambiguity...
I'll wait to see the other solutions.
 
