# Hypothesis in proof

1. Jun 20, 2006

### Mr.Cauliflower

Hello,

I wonder about the following. If I have this formula to prove:

$$(\forall x)(P(x) \rightarrow Q(x)) \rightarrow ((\forall x) P(x) \rightarrow (\forall x) Q(x))$$

is it correct to have both

$$(\forall x)(P(x) \rightarrow Q(x))$$

and

$$(\forall x) P(x)$$

as hypotheses in the proof of this formula? The first hypothesis is obvious, but I'm not sure with the second one.

I found it here:

http://en.wikipedia.org/wiki/Generalization_(logic)

Thank you.

Last edited by a moderator: May 2, 2017
2. Jun 20, 2006

### AKG

It's not entirely clear what you intend to do. Depending on the particulars of your deduction system, it may not even make sense to have two hypotheses: in some systems you may assume one hypothesis within the scope of another, but you cannot hypothesize two formulas simultaneously.

Just show some work, and it will be easier to tell whether you've done things right or not.

3. Jun 21, 2006

### Mr.Cauliflower

I work in first-order predicate logic. If by "deduction system" you mean a set of axioms and inference rules, mine are these.

Axioms:

$$A \rightarrow (B \rightarrow A)$$

$$(A \rightarrow (B \rightarrow C)) \rightarrow ((A \rightarrow B) \rightarrow (A \rightarrow C))$$

$$(\neg B \rightarrow \neg A) \rightarrow (A \rightarrow B)$$

Which are taken from propositional logic and

Axiom scheme of specification:
$$(\forall x) A \rightarrow A_{x}[t]$$
(where the term $t$ is substitutable for $x$ in $A$)

and the jump scheme (I really don't know what to call it in English). If $x$ has no free occurrence in $A$, then

$$(\forall x)(A \rightarrow B) \rightarrow (A \rightarrow (\forall x) B)$$

There are two inference rules: modus ponens and the generalization rule:

From

$$\vdash A$$

we deduce

$$\vdash (\forall x) A$$

And I think I finally understand why there are two hypotheses in that proof on Wikipedia: the proof later uses the deduction theorem, and it is clear now.
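For concreteness, here is how that Wikipedia proof can be laid out in the system above. This is only a sketch; the step numbering and annotations are mine:

```latex
\begin{array}{lll}
1. & (\forall x)(P(x) \rightarrow Q(x)) & \text{hypothesis} \\
2. & (\forall x)P(x) & \text{hypothesis} \\
3. & P(x) \rightarrow Q(x) & \text{specification, 1} \\
4. & P(x) & \text{specification, 2} \\
5. & Q(x) & \text{modus ponens, 3, 4} \\
6. & (\forall x)Q(x) & \text{generalization, 5} \\
7. & (\forall x)P(x) \rightarrow (\forall x)Q(x) & \text{deduction theorem, discharging 2} \\
8. & (\forall x)(P(x) \rightarrow Q(x)) \rightarrow ((\forall x)P(x) \rightarrow (\forall x)Q(x)) & \text{deduction theorem, discharging 1}
\end{array}
```

The generalization at step 6 is legitimate here because both hypotheses are closed, so $x$ has no free occurrence in either of them; this is exactly the situation in which the deduction theorem applies with no further tricks.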

What isn't clear to me yet is the "constants theorem" (it's probably called something else in English). It's the thing you use when you want to apply the deduction theorem but the formula, which the theorem requires to be closed, has free variables.

The theorem then says that if you substitute new constants for those variables (extending the language with these constants), the new formula is provable if and only if the original one was, and you can use the new one in the deduction theorem.

Could you please tell me the name of this theorem (so that I can find more information on English pages) and how it works? I just don't see why substituting constants (are these 'constants' in the usual sense?) for free variables lets me use the formula in the deduction theorem.

Thank you very much.

4. Jun 21, 2006

### HallsofIvy

Staff Emeritus
No. The hypothesis for the statement is just

$$(\forall x)(P(x) \rightarrow Q(x))$$

which does not imply that $P(x)$ is true. For example, if $P(x)$ is false and $Q(x)$ is false, then the statement is true. But, of course, you don't need $P(x)$ to be true for any $x$. It might be simplest to construct a truth table, dependent on $x$. By the way, while the statement is true, its converse is not.

5. Jun 21, 2006

### Mr.Cauliflower

It's not that

$$(\forall x)(P(x) \rightarrow Q(x))$$

implies that $P(x)$ is true. Now that (I hope) I understand it, I see the $P(x)$ hypothesis is there because the deduction theorem is applied later, and as can be seen from the proof, I think it's fine.

And a truth table in predicate logic? I don't think that's possible, since the universe (domain) can be infinite, or at least very large.

6. Jun 21, 2006

### StatusX

I think you're right. To prove $A \rightarrow B$, you assume $A$ and try to prove $B$. A simple extension of this: to prove $A \rightarrow (B \rightarrow C)$, you assume $A$ and try to prove $B \rightarrow C$, so you assume $B$ (in addition to $A$) and try to prove $C$. Is this what you meant?

7. Jun 21, 2006

### Mr.Cauliflower

Yes. I mean that the hypotheses (apart from the obvious hypothesis $A$) are determined by what I want to achieve. In that proof they show what can be deduced from those two hypotheses (which is fine to show), then move the result to the left-hand side via the deduction theorem, apply it, and we have exactly what we wanted.

8. Jun 21, 2006

### AKG

Yes, by the deduction theorem, we get:

$$A, B \vdash C \ \Leftrightarrow \ A \vdash (B \rightarrow C) \ \Leftrightarrow \ \vdash (A \rightarrow (B \rightarrow C))$$

As for this "constants theorem": could you write it out symbolically? It sounds similar to a couple of theorems in my book, but I'm not sure exactly what you're talking about, so just write it out in symbols.

9. Jun 21, 2006

### Mr.Cauliflower

Ok, I'll try that:

Let $T$ be a set of formulas of the language $L$ and let $A$ be a formula of $L$. Let $L'$ be obtained from $L$ by adding new constant symbols.

If $c_1, \dots, c_m$ are new constants and $x_1, \dots, x_m$ are variables, then

$$T \vdash A_{x_{1}, ..., x_{m}}[c_1, ..., c_m] \mbox{ if and only if } T \vdash A$$

Last edited: Jun 21, 2006
10. Jun 21, 2006

### AKG

I don't know the name of that theorem. It should be clear that if $T \vdash A$, then $T \vdash A_{x_1, \dots, x_m}[c_1, \dots, c_m]$. Conversely, the intuitive idea is that if $T$ can prove $A$ with the $x$'s replaced by $c$'s, then since the $c$'s have nothing to do with $T$ (since $T$ is in $L$, and the $c$'s are from $L'$), $T$ can really prove $A$ with arbitrary constants replacing the $x$'s, and thus can prove $A$ with the $x$'s left as variables.

I know of two particular deduction systems. In one, there is a rule more or less along these lines: "if $c$ is a new constant symbol, and $A$ is a formula with one free variable, then from $A[c]$ you may infer $(\forall x)A[x]$." It's called universal generalization. And rather than saying that $c$ is a "new constant", my book actually requires that $c$ not occur in any undischarged assumption, which is to say that nothing special has been assumed about $c$; that is akin to saying $c$ is a new constant. My other system has an unnamed lemma which says that if $\Sigma$ is any set of formulas and $\theta$ is a formula, then $\Sigma \vdash \theta$ if and only if $\Sigma \vdash (\forall x)\theta$ (with the proviso that $x$ does not occur free in $\Sigma$). Putting those two facts together, you can perhaps see why that theorem is true. But no, I don't know what the theorem, as you've stated it, is called.
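Putting those two ingredients together, the nontrivial direction of the theorem can be sketched as follows. This is my own sketch, for a single variable, and it assumes for simplicity that $x$ does not occur free in $T$:

```latex
% Suppose  T \vdash A_x[c],  where c is a constant occurring nowhere in T or A.
%
% Since no formula of T mentions c, nothing special has been assumed about it,
% so the universal-generalization rule of the first system applies:
%         T \vdash (\forall x) A .
% One instance of the specification axiom, (\forall x)A \rightarrow A_x[t]
% with t := x, together with modus ponens, then gives
%         T \vdash A .
%
% For the converse direction, take a proof of A from T and replace x by c
% throughout; since c is fresh and x is not free in T, every step of the
% substituted proof remains an axiom, a formula of T, or a correct inference.
```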

11. Jun 21, 2006

### Mr.Cauliflower

That's a nice idea, and now I maybe see why this theorem is true, BUT I still don't get why we use it (according to my book) when we want to apply the deduction theorem and the formula $A$ is not closed (i.e., it contains some free variable).

I will repeat what my book says about the deduction theorem:

Let $T$ be a set of formulas, let $A$ be a closed formula, and let $B$ be an arbitrary formula. Then

$$T \vdash A \rightarrow B\mbox{ if and only if } T, A \vdash B$$

EDIT: Maybe I now see why we use the "constants theorem" in such cases (when $A$ is not closed). If anything about it is unclear to me, I will ask.

12. Jun 21, 2006

### AKG

Could you elaborate on what your book says about using the constant theorem when using the deduction theorem?

13. Jun 21, 2006

### Mr.Cauliflower

Here it is, there is a note after the proof of the deduction theorem:

3.46 If $A$ is not closed and has free variables $y_1, \dots, y_n$, and we want to prove the implication $A \rightarrow B$ from hypotheses $T$, we extend the language with new constants $c_1, \dots, c_n$. Then

$$T \vdash A \rightarrow B$$

$$\mbox{if and only if}$$

$$T \vdash A_{y_1, ..., y_n}[c_1, ..., c_n] \rightarrow B_{y_1, ..., y_n}[c_1, ..., c_n]$$

$$\mbox{if and only if}$$

$$T, A_{y_1, ..., y_n}[c_1, ..., c_n] \vdash B_{y_1, ..., y_n}[c_1, ..., c_n]$$

The first equivalence follows from the Equivalence theorem, and the second from the Deduction theorem.

Thus, for proving the implication $A \rightarrow B$ from $T$, it's sufficient to prove the formula $B_{y_1, \dots, y_n}[c_1, \dots, c_n]$ from $T, A_{y_1, \dots, y_n}[c_1, \dots, c_n]$.

Substituting new constants for the free variables of $A$ ensures that, when proving $B$ from hypotheses $T, A$, we won't use the generalization rule on any variable with a free occurrence in $A$.

End of the note :)
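To see why that last restriction matters, here is a standard counterexample (my own illustration, not from the book): without it, the deduction theorem would let us derive an invalid formula.

```latex
% Take A = P(x) and apply generalization to a variable free in the hypothesis:
%         P(x) \vdash P(x)                      (hypothesis)
%         P(x) \vdash (\forall x) P(x)          (generalization on x -- illegal,
%                                                since x is free in the hypothesis)
% An unrestricted deduction theorem would then yield
%         \vdash P(x) \rightarrow (\forall x) P(x),
% which is not logically valid: in a two-element domain {a, b} with P true of
% a only, the antecedent holds under the assignment x := a, but the
% consequent fails.
%
% After the substitution, the hypothesis P(x) becomes P(c), and generalization
% on the variable x can no longer touch it, so the danger disappears.
```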

14. Jun 22, 2006

### AKG

Okay, so you want to prove $A \to B$ from hypotheses T. You say the first equivalence holds true from the Equivalence theorem, but it looks to me to hold true because of your "Constants theorem". The above note shows that in order to prove $A \to B$ from T, it suffices to prove $B_{y_1, \dots, y_n}[c_1, \dots, c_n]$ from $T, A_{y_1, \dots, y_n}[c_1, \dots, c_n]$. You need both the Deduction Theorem and Constants Theorem to get to this sufficient condition. If it's not clear to you why both these theorems are needed (well, it should be clear to you why the Deduction Theorem is needed), note that:

$$(A \to B)_{y_1,\dots ,y_n}[c_1,\dots ,c_n]$$

and

$$A_{y_1,\dots ,y_n}[c_1,\dots ,c_n] \to B_{y_1,\dots ,y_n}[c_1,\dots ,c_n]$$

are the same formula.

15. Jun 23, 2006

### Mr.Cauliflower

Of course, I meant "Constants theorem" instead of "Equivalence theorem". Now I get it, thank you.