Proving Hypothesis of Generalization Logic

  • Thread starter Mr.Cauliflower
  • #1
Mr.Cauliflower
Hello,

If I have this formula to prove:

[tex]
(\forall x)(P(x) \rightarrow Q(x)) \rightarrow ((\forall x) P(x) \rightarrow (\forall x) Q(x))
[/tex]

is it correct to have both

[tex]
(\forall x)(P(x) \rightarrow Q(x))
[/tex]

and

[tex]
(\forall x) P(x)
[/tex]

as hypotheses in the proof of this formula? The first hypothesis is obvious, but I'm not sure with the second one.

I found it here:

http://en.wikipedia.org/wiki/Generalization_(logic)

Thank you.
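For what it's worth, the target formula is provable with exactly those two hypotheses; here is a sketch in Lean 4 (my own natural-deduction encoding, not the thread's Hilbert-style system, but it shows the same two hypotheses at work):

```lean
-- Sketch in Lean 4 (not the book's Hilbert system): both hypotheses
-- from the question appear as `hpq` and `hp` below.
theorem forall_dist {α : Type} (P Q : α → Prop) :
    (∀ x, P x → Q x) → ((∀ x, P x) → (∀ x, Q x)) := by
  intro hpq hp x      -- assume (∀x)(P(x) → Q(x)) and (∀x)P(x), fix x
  exact hpq x (hp x)  -- P(x) → Q(x) applied to P(x) gives Q(x)
```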
 
  • #2
It's not entirely clear what you intend to do. Depending on the particulars of your deduction system, it may not even make sense to have two hypotheses (you may have one hypothesis assumed within the scope of another, but cannot simultaneously hypothesize two formulas - in some systems).

Just show some work, and it will be easier to tell whether you've done things right or not.
 
  • #3
AKG said:
It's not entirely clear what you intend to do. Depending on the particulars of your deduction system, it may not even make sense to have two hypotheses (you may have one hypothesis assumed within the scope of another, but cannot simultaneously hypothesize two formulas - in some systems).

Just show some work, and it will be easier to tell whether you've done things right or not.

I work in first-order predicate logic, and if by deduction system you mean a set of axioms and inference rules, those are:

(axioms)

[tex]
A \rightarrow (B \rightarrow A)
[/tex]

[tex]
(A \rightarrow (B \rightarrow C)) \rightarrow ((A \rightarrow B) \rightarrow (A \rightarrow C))
[/tex]

[tex]
(\neg B \rightarrow \neg A) \rightarrow (A \rightarrow B)
[/tex]

which are taken from propositional logic, and

Scheme of specification
[tex]
(\forall x) A \rightarrow A_{x}[t]
[/tex]

and the jump scheme (I really don't know what it is called in English).
If x has no free occurrence in A, then

[tex]
(\forall x)(A \rightarrow B) \rightarrow (A \rightarrow (\forall x) B)
[/tex]

There are two inference rules: modus ponens and the generalization rule:

From

[tex]
\vdash A
[/tex]

we deduce

[tex]
\vdash (\forall x) A
[/tex]


And I think I finally understand why there are two hypotheses in that proof on Wikipedia. The proof later uses the deduction theorem, and it is clear now.

What isn't clear to me yet is the "constants theorem" (it's probably called something else in English). It's what you use when you want to apply the deduction theorem but the formula, which (by the premises of the theorem) has to be closed, has some free variables.

Then the theorem says that if you substitute new constants for those variables (you extend the language with these constants), the new formula is provable iff the original one was, and you can use the new one in the deduction theorem.

Could you please tell me the name of this theorem (so that I can find more info on English pages), and how does it work? I just don't see why substituting constants (are these 'constants' as we normally understand them?) for free variables allows me to use the formula in the deduction theorem.

Thank you very much.
 
  • #4
Mr.Cauliflower said:
Hello,

If I have this formula to prove:

[tex]
(\forall x)(P(x) \rightarrow Q(x)) \rightarrow ((\forall x) P(x) \rightarrow (\forall x) Q(x))
[/tex]

is it correct to have both

[tex]
(\forall x)(P(x) \rightarrow Q(x))
[/tex]

and

[tex]
(\forall x) P(x)
[/tex]

as hypotheses in the proof of this formula? The first hypothesis is obvious, but I'm not sure with the second one.

I found it here:

http://en.wikipedia.org/wiki/Generalization_(logic)

Thank you.
No. The hypothesis for the statement is just

[tex]
(\forall x)(P(x) \rightarrow Q(x))
[/tex]

which does not imply that P(x) is true. For example, if P(x) is false and Q(x) is false, then the statement is true. But, of course, you don't need P(x) to be true for any x. It might be simplest to construct a truth table, dependent on x. By the way, while the statement is true, its converse is not.
 
  • #5
HallsofIvy said:
No. The hypothesis for the statement is just

[tex]
(\forall x)(P(x) \rightarrow Q(x))
[/tex]

which does not imply that P(x) is true. For example, if P(x) is false and Q(x) is false, then the statement is true. But, of course, you don't need P(x) to be true for any x. It might be simplest to construct a truth table, dependent on x. By the way, while the statement is true, its converse is not.

It's not that

[tex]
(\forall x)(P(x) \rightarrow Q(x))
[/tex]

implies that [itex]P(x)[/itex] is true. Now that (I hope) I understand it, I see the [itex]P(x)[/itex] hypothesis is there because the deduction theorem is used later, and as can be seen from the proof, I think it's fine.

And a truth table in predicate logic? I don't think that's possible, since the universe can be infinite (or very large).
 
  • #6
I think you're right. To prove A->B, you assume A and try to prove B. A simple extension of this is, to prove A->(B->C), you assume A and try to prove B->C, so you assume B (in addition to A), and try to prove C. Is this what you meant?
 
  • #7
StatusX said:
I think you're right. To prove A->B, you assume A and try to prove B. A simple extension of this is, to prove A->(B->C), you assume A and try to prove B->C, so you assume B (in addition to A), and try to prove C. Is this what you meant?

Yes, I mean that the hypotheses (apart from the obvious hypothesis A) are given by what I want to achieve. In that proof they show what can be deduced from those two hypotheses (it's fine to show that), then they move it to the left side in the deduction theorem, apply it, and... we have exactly what we wanted.
 
  • #8
Yes, by the deduction theorem, we get:

[tex]A, B \vdash C \ \Leftrightarrow \ A \vdash (B \rightarrow C) \ \Leftrightarrow \ \vdash (A \rightarrow (B \rightarrow C))[/tex]

As for this "constants theorem", could you write it out symbolically? It sounds similar to a couple of theorems in my book; I'm not sure exactly what you're talking about, though, so just write it out in symbols.
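The chain of equivalences above is essentially currying; here is a sketch in Lean 4 (reading the object-level turnstile as the meta-level arrow, which is an analogy on my part, not the book's system):

```lean
-- Deduction-theorem analogue in Lean 4: moving a hypothesis across
-- the turnstile corresponds to currying/uncurrying.
theorem ded_analogue (A B C : Prop) : (A ∧ B → C) ↔ (A → (B → C)) := by
  constructor
  · intro f a b; exact f ⟨a, b⟩   -- A, B ⊢ C  gives  ⊢ A → (B → C)
  · intro f ⟨a, b⟩; exact f a b   -- and conversely
```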
 
  • #9
AKG said:
Yes, by the deduction theorem, we get:

[tex]A, B \vdash C \ \Leftrightarrow \ A \vdash (B \rightarrow C) \ \Leftrightarrow \ \vdash (A \rightarrow (B \rightarrow C))[/tex]

As for this "constants theorem" could you write it out symbolically? It sounds similar to a couple of theorems in my book, I'm not sure exactly what you're talking about though so just write it out in symbols.

Ok, I'll try that:

Let [itex]T[/itex] be a set of formulas of the language [itex]L[/itex] and let [itex]A[/itex] be a formula of the language [itex]L[/itex]. Let [itex]L'[/itex] be created from [itex]L[/itex] by extending it with new constant symbols.

If [itex]c_1, ..., c_m[/itex] are new constants and [itex]x_1, ..., x_m[/itex] are variables, then

[tex]
T \vdash A_{x_{1}, ..., x_{m}}[c_1, ..., c_m] \mbox{ if and only if } T \vdash A
[/tex]
 
  • #10
I don't know the name of that theorem. It should be clear that if [itex]T \vdash A[/itex], then [itex]T \vdash A_{x_1,\dots ,x_m}[c_1,\dots ,c_m][/itex]. Conversely, the intuitive idea is that if T can prove A with the x's replaced with c's, then since the c's have nothing to do with T (since T is in L, and the c's are from L'), T can really prove A with any arbitrary constants replacing the x's, and thus can prove A where we leave the x's as variables.

I know of two particular deduction systems. In one, there is a rule that is more or less along the lines of, "if c is a new constant symbol, and A is a formula with one free variable, then from A[c] you may infer [itex](\forall x)A[x][/itex]". It's called universal generalization. And rather than saying that c is a "new constant", my book actually requires that c should not occur in any undischarged assumption, which is to say that nothing special has been assumed about c, which is akin to saying that c is a new constant. My other system has an unnamed lemma which says that if [itex]\Sigma[/itex] is any set of formulas, and [itex]\theta[/itex] is a formula, then [itex]\Sigma \vdash \theta[/itex] if and only if [itex]\Sigma \vdash (\forall x)\theta[/itex]. Putting those two facts together, you can perhaps see why that theorem is true. But no, I don't know what the theorem, as you've stated it, is called.
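AKG's unnamed lemma can be illustrated in Lean 4 (my sketch; "x not free in θ" becomes "θ does not mention x", and the right-to-left direction needs a nonempty domain, as in classical first-order logic):

```lean
-- If θ does not depend on x, then θ ↔ ∀ x, θ (domain must be nonempty).
theorem forall_vacuous {α : Type} [Inhabited α] (θ : Prop) :
    θ ↔ ∀ _ : α, θ := by
  constructor
  · intro h _; exact h          -- generalization over a vacuous variable
  · intro h; exact h default    -- instantiate at any point of the domain
```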
 
  • #11
AKG said:
I don't know the name of that theorem. It should be clear that if [itex]T \vdash A[/itex], then [itex]T \vdash A_{x_1,\dots ,x_m}[c_1,\dots ,c_m][/itex]. Conversely, the intuitive idea is that if T can prove A with the x's replaced with c's, then since the c's have nothing to do with T (since T is in L, and the c's are from L'), T can really prove A with any arbitrary constants replacing the x's, and thus can prove A where we leave the x's as variables.

That's a nice idea, and now I maybe see why this theorem is true, BUT I still don't get why we use it (according to my book) when we want to use the deduction theorem and the formula [itex]A[/itex] is not closed (i.e. it contains some free variable).

I will repeat what my book says about the deduction theorem:

Let [itex]T[/itex] be a set of formulas, let [itex]A[/itex] be a closed formula, and let [itex]B[/itex] be an arbitrary formula. Then

[tex]
T \vdash A \rightarrow B\mbox{ if and only if } T, A \vdash B
[/tex]

EDIT: Maybe I see now why we use the "constants theorem" in such cases (when [itex]A[/itex] is not closed). If I find something unclear about it, I will ask.
 
  • #12
Could you elaborate on what your book says about using the constant theorem when using the deduction theorem?
 
  • #13
AKG said:
Could you elaborate on what your book says about using the constant theorem when using the deduction theorem?

Here it is; there is a note after the proof of the deduction theorem:

3.46 If [itex]A[/itex] is not closed and has free variables [itex]y_1, ..., y_n[/itex], and we want to prove the implication [itex]A \rightarrow B[/itex] from hypotheses [itex]T[/itex], we extend the language with new constants [itex]c_1, ..., c_n[/itex]. Then

[tex]
T \vdash A \rightarrow B
[/tex]

[tex]
\mbox{if and only if}
[/tex]

[tex]
T \vdash A_{y_1, ..., y_n}[c_1, ..., c_n] \rightarrow B_{y_1, ..., y_n}[c_1, ..., c_n]
[/tex]

[tex]
\mbox{if and only if}
[/tex]

[tex]
T, A_{y_1, ..., y_n}[c_1, ..., c_n] \vdash B_{y_1, ..., y_n}[c_1, ..., c_n]
[/tex]

The first equivalence holds by the Equivalence theorem and the second one by the Deduction theorem.

Thus, to prove the implication [itex]A \rightarrow B[/itex] from [itex]T[/itex], it is sufficient to prove the formula [itex]B_{y_1, ..., y_n}[c_1, ..., c_n][/itex] from [itex]T, A_{y_1, ..., y_n}[c_1, ..., c_n][/itex].

Substituting new constants for the free variables of [itex]A[/itex] ensures that, when proving [itex]B[/itex] from the hypotheses [itex]T, A[/itex], we won't use the generalization rule on any variable with a free occurrence in [itex]A[/itex].

End of the note :)
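The closing remark of the note is the crucial soundness point: without the closedness restriction, the deduction theorem would fail for open formulas. A hypothetical counterexample in Lean 4 over the naturals makes this concrete. With generalization, P(x) ⊢ (∀y)P(y) is derivable in such a system, yet P(x) → (∀y)P(y) is not valid:

```lean
-- With P(x) := (x = 0): the implication x = 0 → ∀ y, y = 0 fails,
-- so generalizing on a variable free in the hypothesis is unsound.
example : ¬ ∀ x : Nat, (x = 0 → ∀ y : Nat, y = 0) := by
  intro h
  exact absurd (h 0 rfl 1) (by decide)  -- instantiate x := 0, then y := 1
```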
 
  • #14
Okay, so you want to prove [itex]A \to B[/itex] from hypotheses T. You say the first equivalence holds by the Equivalence theorem, but it looks to me like it holds because of your "Constants theorem". The above note shows that in order to prove [itex]A \to B[/itex] from T, it suffices to prove [itex]B_{y_1,\dots ,y_n}[c_1,\dots ,c_n][/itex] from T, [itex]A_{y_1,\dots ,y_n}[c_1,\dots ,c_n][/itex]. You need both the Deduction Theorem and the Constants Theorem to get to this sufficient condition. If it's not clear why both theorems are needed (it should be clear why the Deduction Theorem is needed), note that:

[tex](A \to B)_{y_1,\dots ,y_n}[c_1,\dots ,c_n][/tex]

and

[tex]A_{y_1,\dots ,y_n}[c_1,\dots ,c_n] \to B_{y_1,\dots ,y_n}[c_1,\dots ,c_n][/tex]

are the same formula.
 
  • #15
AKG said:
Okay, so you want to prove [itex]A \to B[/itex] from hypotheses T. You say the first equivalence holds by the Equivalence theorem, but it looks to me like it holds because of your "Constants theorem". The above note shows that in order to prove [itex]A \to B[/itex] from T, it suffices to prove [itex]B_{y_1,\dots ,y_n}[c_1,\dots ,c_n][/itex] from T, [itex]A_{y_1,\dots ,y_n}[c_1,\dots ,c_n][/itex]. You need both the Deduction Theorem and the Constants Theorem to get to this sufficient condition. If it's not clear why both theorems are needed (it should be clear why the Deduction Theorem is needed), note that:

[tex](A \to B)_{y_1,\dots ,y_n}[c_1,\dots ,c_n][/tex]

and

[tex]A_{y_1,\dots ,y_n}[c_1,\dots ,c_n] \to B_{y_1,\dots ,y_n}[c_1,\dots ,c_n][/tex]

are the same formula.

Of course, I mean "Constants theorem" instead of "Equivalence theorem". Now I get it, thank you.
 

