# Simple ZFC question

Staff Emeritus
Gold Member
Just to show how dumb I am, I have to ask, does this implication follow from the ZFC axioms?

$$x=y\ \land\ y\in z\rightarrow x\in z$$

It's obviously true if we use the meaning of the = symbol, i.e. that the formula "x=y" means that x and y are different symbols for the same set, but I thought proofs in ZFC (and other formal languages) shouldn't ever have to rely on the meaning of the symbols.

Hurkyl
Staff Emeritus
Gold Member
Just to show how dumb I am, I have to ask, does this implication follow from the ZFC axioms?

$$x=y\ \land\ y\in z\rightarrow x\in z$$
This is a theorem of first-order logic in a language with a binary relation symbol $\in$; y appears freely in $y \in z$, and so substitution says
$$y=x \implies (y \in z \implies x \in z)$$
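For what it's worth, this substitution step is pure logic and needs nothing set-theoretic. A minimal sketch in Lean 4, where `mem` is an arbitrary binary predicate standing in for $\in$ (the names are mine, not from any library):

```lean
-- Substitution is pure logic: for any binary relation `mem` on any type,
-- x = y lets us rewrite x for y in the goal `mem x z`.
example {α : Type} (mem : α → α → Prop) (x y z : α)
    (h : x = y) (hm : mem y z) : mem x z := by
  rw [h]; exact hm
```

Nothing about `mem` is used except that equal arguments can be exchanged, which is exactly the point of the substitution rule.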

Staff Emeritus
Gold Member
This is a theorem of first-order logic in a language with a binary relation symbol $\in$; y appears freely in $y \in z$, and so substitution says
$$y=x \implies (y \in z \implies x \in z)$$
Thanks, but by referring to "substitution", aren't you just telling me to use the meaning of the = symbol? In that case, you haven't addressed my concern, which was that proofs shouldn't depend on the meaning of the symbols. (Right?)

I would like to understand this approach as well, even though it seems that the other approach (see below) also solves my problem completely.

Does this not follow from the second formulation of the axiom of extensionality?

http://en.wikipedia.org/wiki/Zfc
Thank you. I didn't even look at that page because I knew that it was using Kunen's version of the axioms. I figured that if I don't understand this with Kunen's book in front of me, the Wikipedia page isn't going to help. But it did. I was thinking that we could view x=y as an abbreviation of something else, but I didn't realize that this "something else" was

$$\forall z(z\in x\leftrightarrow z\in y)\ \land\ \forall z(x\in z\leftrightarrow y\in z)$$

I was thinking that the first term would be enough, and that made the extensionality axiom very confusing; it looked more like a definition of an abbreviation than like an axiom. But I think I get it now. The implication I posted in #1 follows immediately from this definition of the = symbol, and the extensionality axiom is not just a definition of an abbreviation.
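Spelled out (a sketch, writing $w$ for the bound variable to keep it apart from the $z$ of the implication): under that abbreviation, $x=y$ gives in particular $\forall w(x\in w\leftrightarrow y\in w)$; instantiating at $z$ yields

$$x\in z\leftrightarrow y\in z$$

and combining this with $y\in z$ gives $x\in z$, which is the implication in #1.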

I'm still confused about one thing. This second form of the extensionality axiom is

$$\forall x\forall y(\forall z(z\in x\leftrightarrow z\in y)\rightarrow \forall z(x\in z\leftrightarrow y\in z))$$

Don't we need to replace the → in the middle with a ↔, now that we're not relying on the meaning of the = symbol? Are there two sets that are members of the same sets, but don't have the same members?

...aren't you just telling me to use the meaning of the = symbol?

In first-order logic with equality (that is, the usual predicate calculus plus the equality axioms), the "=" is always interpreted as the equality (diagonal) binary relation, for any domain D. This restriction is imposed because, if we try to define (strong) equality using pure first-order logic, we always end up with a weaker version.

Don't we need to replace the → in the middle with a ↔

No, for the reason above; that (unnecessarily complicated) expression just tells you that if two sets have the same elements, then, if one of them is an element of any other set, so is the other.

Hurkyl
Staff Emeritus
Gold Member
Thanks, but by referring to "substitution", aren't you just telling me to use the meaning of the = symbol? In that case, you haven't addressed my concern, which was that proofs shouldn't depend on the meaning of the symbols. (Right?)
But proofs do depend on the rules of inference, and substitution is one of those in first-order logic (with equality).

Staff Emeritus
Gold Member
In first-order logic with equality (that is, the usual predicate calculus plus the equality axioms), the "=" is always interpreted as the equality (diagonal) binary relation, for any domain D.
But aren't the symbols of a first-order language supposed to be meaningless? I mean, we obviously can interpret → as "implies", but that doesn't mean that we have to. All proofs involving → are valid even if we don't assign a meaning to the symbol. Are you really saying that = must be treated as a symbol with the specific meaning we're used to? That seems to contradict the whole idea of formalism.

The approach that makes the most sense to me is to define x=y as an abbreviation. And if we do, I still think we might need to replace the → with a ↔ in the axiom. (Not sure though). Once we have seen that this approach works, I wouldn't mind switching to your approach. I would then view the "x=y is an abbreviation" approach as a formalization of set theory, and the "= is the diagonal relation" approach as a simplified and not purely formal way to express the same ideas, or alternatively as a mnemonic for what x=y abbreviates.

But proofs do depend on the rules of inference, and substitution is one of those in first-order logic (with equality).
Aha. That was an interesting point. I need to think about what that means.

CRGreathouse
Homework Helper
But aren't the symbols of a first-order language supposed to be meaningless? I mean, we obviously can interpret → as "implies", but that doesn't mean that we have to. All proofs involving → are valid even if we don't assign a meaning to the symbol. Are you really saying that = must be treated as a symbol with the specific meaning we're used to? That seems to contradict the whole idea of formalism.

In "first-order logic", the symbols are a priori meaningless. In "first-order logic with equality", the symbol "=" has special meaning.

Staff Emeritus
Gold Member
In "first-order logic", the symbols are a priori meaningless. In "first-order logic with equality", the symbol "=" has special meaning.
Aah, that makes sense. Sounds almost obvious now that I've heard someone say it. Am I right to think that ZFC can be viewed either as "first-order logic" or as "first-order logic with equality"?

yossell
Gold Member
What do you mean by 'follows'?

In the model theory for first-order languages, the meaning of the logical symbols 'all', 'and', 'or' is respected (they have the same, and the 'right', meaning in all models), while the meaning of the predicates and terms is normally ignored (their meanings vary from model to model). However, you can have a model theory where the meaning of '=' is respected: '=' in the model is really interpreted as identity.

If your notion of 'follows' is model-theoretic (in all models where the premises are true, the conclusion is true), then different things will follow from your axioms depending upon whether your models respect identity or not.

If your notion of 'follows' is proof-theoretic, then NONE of the meanings of the terms matter: identity, the logical constants, whatever. It's a purely syntactic transformation. However, the rules that govern identity are good enough to give us the inference you originally asked about, and the logical rules underpinning ZFC are normally strong enough to allow us to derive it.

Staff Emeritus
Gold Member
Thanks yossell. That (in particular the last paragraph) made things a bit clearer. I meant the same kind of "follow" as when we prove that if x and y are both empty, then x=y. If x and y are both empty, they have the same members, so it "follows" from extensionality that x=y. (Note that if we haven't assigned = the usual meaning, we can't conclude from this that there's at most one empty set). Models don't enter into it, so I guess "proof theoretic" is the answer to your question, even though I didn't actually examine the axioms of any specific proof theory to verify that they support what I did in the second sentence of this post.

I think I get what Hurkyl was talking about. He's using axiom 3 here. Interestingly enough, I didn't see anything that looks like that in Kunen's book, at least not in the section titled "Formal proofs", where the rule of inference (modus ponens) and the logical axioms (stuff like x=x) are specified. Uhh, wait, now I see that number 11 (of definition II.10.1, page 119) with n=2 and p=$\in$ is

$$x_1=y_1\ \land\ x_2=y_2\rightarrow (x_1\in x_2\ \leftrightarrow y_1\in y_2)$$

That does look like a substitution rule. Unfortunately I have to get some sleep, so I don't have time to think about it now. I've been sick the past week and as a result I'm sleeping at very strange hours.

I should also mention that when I started this thread, I wasn't aware that there were any axioms for the = symbol in proof theory, but these things have been getting clearer with each post in this thread.

Last edited:
The approach that makes the most sense to me is to define x=y as an abbreviation.

Some authors still do it, but it's going out of favour; the reason is that strict equality seems to be so fundamental that it should be viewed as a purely logical relation.
Note that, when defining a first-order theory, the predicate symbols are non-logical symbols (particular to the theory), and there are authors who argue that equality should be among them, as a non-logical predicate. Another reason is that, when proving the completeness of FOL, equality introduces a technical complication, and for expository purposes some prefer to give the proof only for the case without equality.
On the other hand, if you stick to FOL without equality, then you must define equality for each particular theory (which I think is the case with your ZFC example), and this is somewhat ugly (at least for me and a few others).

It's possible to define equality purely in terms of predicates, but only in second order logic, by the usually called "Leibniz Law":

$$\forall x\forall y \left(x = y \leftrightarrow \forall P\left(P\left(x\right)\leftrightarrow P\left(y\right)\right)\right)$$

But this is philosophically touchy; for example, "x is identical to itself" seems to be a perfectly reasonable property (or predicate) but, if you include it in the second order quantification above, then the above expression is trivial.
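As an aside, in a proof assistant this biconditional is provable for any type, and the right-to-left direction uses precisely the 'trivializing' instance mentioned above. A sketch in Lean 4 (my own example, not tied to any set-theoretic library):

```lean
-- Leibniz's Law: equality is equivalent to indiscernibility.
-- The right-to-left direction instantiates P with `fun a => x = a`,
-- the very predicate that makes the definition look circular.
example {α : Type} (x y : α) : x = y ↔ ∀ P : α → Prop, P x ↔ P y := by
  constructor
  · intro h P; rw [h]
  · intro h; exact (h (fun a => x = a)).mp rfl
```

The quantification over `P : α → Prop` is what makes this second-order; there is no first-order formula that can play its role.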

On the other hand, if you stick to FOL without equality, then you must define equality for each particular theory (which I think is the case with your ZFC example) and this is somewhat ugly (at least for me and a few others).
More importantly, it's putting the cart before the horse philosophically. $x=y$ expresses that x and y are coreferential, but coreferentiality is precisely the notion you need in order to understand sentences like $\forall x \exists y ~ \varphi(x) \wedge R(x,y)$, as it is what ties the two occurrences of x together. A criterion of identity is what codefines a sortal concept like "set"; it's not something you can just add later as a sort of cherry on top.

It's for that same reason that while we might argue about whether Leibniz's law is analytical or not, the fact is that it cannot serve as a definition of equality.

Staff Emeritus
Gold Member
I think I'm starting to get it, but it would help to get a comment about this as well:

It still seems to me that we can't claim to have successfully formalized mathematics until we've seen that we can do all this language/model theory/proof theory stuff without using the meaning of the = symbol. Do you guys agree or disagree with that?

Once we have seen that it can be done, I don't have any objections against doing it the other way, if that's easier. Isn't the idea here that the meaning of the = symbol implies stuff like x=x, and that this means that these sentences don't have to be explicitly included in the proof theory axioms? In other words, if we want a complete formalization of mathematics, use first-order logic without identity, but if we don't want to have to type as much, and make things a little bit easier for our human brains, use first-order logic with identity.

I'm still pretty confused about formal proofs, so I would also appreciate if someone can show me how to do this proof, which I assume is easy: Suppose that we take the approach that x=y is an abbreviation of

$$\forall z(x\in z\leftrightarrow y\in z)\ \land\ \forall z(z\in x\leftrightarrow z\in y)$$

Now we should be able to show that x=x for all x. How do we do that? Let's see, the sentence we need to prove is

$$\forall x(\forall z(x\in z\leftrightarrow x\in z)\ \land\ \forall z(z\in x\leftrightarrow z\in x))$$

right? I understand that $x\in z\leftrightarrow x\in z$ is a tautology. This means that the truth table for $\leftrightarrow$ tells us that the formula is true, no matter what truth value we assign to the formula $x\in z$. I have also read that a proof is a sequence of formulas such that if $\psi$ is in the sequence, then if $\psi$ isn't a proof theory axiom or a member of the set $\Sigma$ of formulas that we started with, one of the formulas earlier in the sequence must be $\phi\rightarrow\psi$, where $\phi$ is a formula that also appears earlier in the sequence. So I know what I'm supposed to do. I know that my $\Sigma$ is the definition of x=y, I know that tautologies are considered proof theory axioms, and I know that the last line of the proof is the (longer version of) $\forall x\ x=x$, but I don't understand what the intermediate steps are, or what other proof theory axioms to use. (I have never seen a worked out example. That's why I need to ask).

yossell
Gold Member
It still seems to me that we can't claim to have successfully formalized mathematics until we've seen that we can do all this language/model theory/proof theory stuff without using the meaning of the = symbol. Do you guys agree or disagree with that?

I think that either answer is defensible, depending on what you want from formalisation.

If proving is your game, then the meaning of '=' shouldn't matter. What you want are axioms governing the behaviour of formulas containing this symbol, axioms that enable you to prove all the right sentences involving it: that x = x, that (x = y & y = z) -> x = z, and the like.

However, if proving is your game, then the meaning of ALL the symbols shouldn't matter. Including conjunction, disjunction, and the universal and existential quantifiers. Here, too, what we want are rules that tell us how these symbols work - that from P & Q, you can infer Q, from Fa you can infer ExFx, and the like. Again, whether a formula is provable in a system is logically independent of the meanings of the symbols that occur in the system - it's a purely deductive matter.

If you have a more model-theoretic notion of follows in mind, things are different. But that's because the models we usually use are defined and constructed in a way that automatically respects the meaning of the quantifiers and the truth-functional connectives, while it's easy to generate models which don't care about the meanings of the predicates and names of the relevant language.

I don't think there's a great deal of significance in this. First-order theories are introduced when we notice that there are various valid inference patterns (Every F is G; a is an F; ergo, a is a G) that hold whatever the meanings of the names and the predicates. First-order languages are then designed to have just enough structure to enable you to represent a range of valid inference patterns, and model theory for such languages was then created to study exactly such inferences.

If somehow one is worried that the meaning of a term can play a significant role in logic, I would wonder why it is felt that the meaning of '=' should be variable, while the meaning of 'all' and 'and' can be fixed across models.

Isn't the idea here that the meaning of the = symbol implies stuff like x=x, and that this means that these sentences don't have to be explicitly included in the proof theory axioms? In other words, if we want a complete formalization of mathematics, use first-order logic without identity, but if we don't want to have to type as much, and make things a little bit easier for our human brains, use first-order logic with identity.

Again, the proof theory doesn't change, whether we keep the meaning of identity fixed or whether we vary it across models. If we want to type a little less, we prove meta-theorems that allow us to use derived rules, such as transitivity of identity, rather than doing it all again.

If, on the other hand, you're concerned not with proof theory, but with, say, how much mathematics a theory can formulate (can set theory really formulate *all* mathematics? Peano Arithmetic? Group Theory? Category Theory?), then it seems to me that the meanings of the predicates are an important component: one really needs a way of formulating identity if you're going to say that every number has a unique successor.

In order to prove those sentences, you need to use the proof theory for predicate logic, that is, a proof theory that tells you when you can introduce and eliminate the universal and existential quantifiers. If you're only looking at the rules for propositional logic, you can't do it; my guess is that this is why it seems hard.

Basically, the rule for all introduction says that, if you've managed to prove Fx, and nowhere in this proof is there an assumption specifically about x, then you can conclude that AxFx. So you just need to apply this rule twice to the two tautologies you mention, then conjunction introduction gives you the result.
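The recipe above (generalize each tautology, then conjoin) can be sketched in Lean 4, again with an arbitrary relation `mem` standing in for $\in$ (my own illustration, not a formalization of Kunen's specific system):

```lean
-- Each conjunct is a universally generalized tautology (`Iff.rfl`);
-- the anonymous constructor ⟨_, _⟩ is conjunction introduction.
example {α : Type} (mem : α → α → Prop) (x : α) :
    (∀ z, mem x z ↔ mem x z) ∧ (∀ z, mem z x ↔ mem z x) :=
  ⟨fun _ => Iff.rfl, fun _ => Iff.rfl⟩
```

This is exactly the unfolded form of $x=x$ under the abbreviation proposed earlier in the thread.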

CRGreathouse
Homework Helper
Once we have seen that it can be done, I don't have any objections against doing it the other way, if that's easier. Isn't the idea here that the meaning of the = symbol implies stuff like x=x, and that this means that these sentences don't have to be explicitly included in the proof theory axioms? In other words, if we want a complete formalization of mathematics, use first-order logic without identity, but if we don't want to have to type as much, and make things a little bit easier for our human brains, use first-order logic with identity.

That's certainly not how I see it. For me, = is one of the most philosophically primitive notions; defining = in terms of $\in$ is backward. But we can do it, if we prefer.

It still seems to me that we can't claim to have successfully formalized mathematics until we've seen that we can do all this language/model theory/proof theory stuff without using the meaning of the = symbol. Do you guys agree or disagree with that?

If by formalization you mean the reduction of the entire corpus of mathematics to formal systems, we have very good reasons to believe that it's impossible in general (and certainly if we restrict ourselves to first-order logic). Furthermore, in my opinion, even if it were possible, I don't think it would really advance our knowledge or solve any important problems.
Regarding the equality issue, I think it's a false question; you simply cannot escape the pretheoretical meanings of your symbols: they are implicit in your axioms and rules of inference. Think of propositional logic: it can be axiomatized using a subset of all tautologies, but how could you single them out without the concept of tautology (which depends on the connectives' meanings)?
The fact is that mathematics (and logic) rests on a set of pretheoretical notions that we don't really have any idea where they come from, only that they seem to work pretty well. Strict equality, as CR already said, is one of the most basic of them, so we have every reason to include it as a primitive symbol.

Regarding your proof, let me tell you first that almost nobody has the patience to fully spell such proofs out (except in simple cases) and will only do so under pressure (like homework); after you cross that bridge, you may use what Kreisel termed "informal rigour": in your case, the expression is the generalization of a tautology, so we are done by completeness. A proof with all the details depends on the formal system, but in this case it could be something like this:

#### Attachments

• Proof.pdf
Staff Emeritus
Gold Member
Edit: I wrote this before I saw JSuarez's post above.

However, if proving is your game, then the meaning of ALL the symbols shouldn't matter. Including conjunction, disjunction, and the universal and existential quantifiers. Here, too, what we want are rules that tell us how these symbols work - that from P & Q, you can infer Q, from Fa you can infer ExFx, and the like. Again, whether a formula is provable in a system is logically independent of the meanings of the symbols that occur in the system - it's a purely deductive matter.
That's what I'm thinking. I'm not saying that this is the most practical way to think about mathematics, but it is how I interpret the word "formalism".

If, on the other hand, you're concerned not with proof theory, but with, say, how much mathematics a theory can formulate (can set-theory really formulate *all* mathematics? Peano Arithmetic? Group Theory? Category Theory?) then it seems to me that the meanings of the predicates is an important component - one really needs a way of formulating identity if you're going to say that every number has a unique successor.
I'm thinking that "formalism" is a philosophy that views all of mathematics as the manipulation of meaningless symbols according to a specific set of rules. By that philosophy, claims like "every number has a unique successor" (which only make sense if the symbols have their usual meaning), shouldn't be thought of as part of the theory. At least that's how it seems to me. I would say that they belong to an "interpretation" of the theory, which is defined by an assignment of meaning to the symbols.

In order to prove those sentences, you need to use the proof theory for predicate logic - that is, you need a proof theory that will tell you when you can introduce and eliminate the universal and existential quantifiers. If you're only looking at the rules for propositional logic, you can't do it - my guess is that this why it seems hard.
I'm looking at pages 119-120 of Kunen, which defines the proof theory used in that book, but this is my first attempt to prove anything this way. Unfortunately neither the book nor those specific pages seem to be available online.

Basically, the rule for all introduction says that, if you've managed to prove Fx, and nowhere in this proof is there an assumption specifically about x, then you can conclude that AxFx. So you just need to apply this rule twice to the two tautologies you mention, then conjunction introduction gives you the result.
Number 2 on Kunen's list of "logical axioms" is

$$\phi\rightarrow\forall x\phi,\textit{ where x is not free in }\phi.$$

Is that what you meant? One of my problems is that z is free in $z\in x\leftrightarrow z\in x$. Another problem is that the theorem is of the form $\psi_1\land\psi_2$, and by Kunen's definition of "proof", that means the earlier formulas must include $\psi\rightarrow\psi_1\land\psi_2$, where $\psi$ is another formula that appears earlier in the proof. I don't see an axiom that tells me that if I have obtained $\psi_1$ and $\psi_2$, I can combine them into $\psi_1\land\psi_2$.

That's certainly not how I see it. For me, = is one of the most philosophically primitive notions; defining = in terms of $\in$ is backward. But we can do it, if we prefer.
It seems to be the only way that's consistent with "formalism", but perhaps the whole concept of formalism is "backward".

Staff Emeritus
Gold Member
If by formalization, you mean the reduction of all of the Mathematic's corpus to formal systems, we have very good reasons to believe that it's impossible in general (and certainly if we restrict ourselves to first-order logic).
Are you thinking of Gödel's incompleteness theorem? If I understand it correctly, that only means that there's no single formal system that covers everything we might want to do. We can still think of each formal system as a specific theory which is completely formalized, with no meaning given to any of the symbols. So isn't it correct to say that all of mathematics can be reduced to formal systemS? It just can't be reduced to a formal system.

Furthermore, in my opinion, even if it was possible, I don't think it would really advance our knowledge or solve any important problems.
I think just having an exact definition of what a "proof" is is important enough.

Regarding the equality issue, I think it's a false question; you simply cannot escape the pretheoretical meanings of your symbols: they are implicit in your axioms and rules of inference;
I understand that the proof theory axioms (in particular the truth tables that we associate with the logical symbols) are chosen to match the preconceived meaning of the symbols. However, that doesn't mean that we have to use, or even know, the meaning of the symbols after that. The axioms enable us to forget what the symbols mean, and to do everything mechanically.

A proof with all the details depends of the formal system, but in this case it could be something like this:
Thank you. That's probably how I would have done it if I didn't have Kunen's axioms in front of me. I don't see how to use Kunen's axioms to justify these steps, but it's not like I need to figure that out now. Maybe it will come to me when I read more of the book.

I noticed that you and yossell both use terms like "introduction" and "generalization" as if they are standard terms. I'm not familiar with that terminology, but I found this at Wikipedia. There are a lot more "rules of inference" there than in Kunen's book. His proof theory has only one: modus ponens. I guess that's why I was confused and you guys thought it was easy. (He says "we are striving for a system which is easy to define and analyze mathematically, not one which is easy to use in practical settings".)

yossell
Gold Member
If you could list Kunen's axioms, that might be helpful.

There are many different formal systems. I find it easier to work with a system of natural deduction - a system that contains many rules other than modus ponens. Often, these rules are stated as introduction and elimination rules, telling you when and how you can introduce a logical symbol (if you've got a proof of P, then you can extend it to a proof of P v Q for any Q).

When one is trying to prove results ABOUT a system, metatheorems such as soundness - then it's often easier to focus on a system with very few axioms and rules of inference.

It's probably very tedious for you to write it all out, but it's not possible to work out the proof without the system in front of us.

Incidentally - I wasn't trying to get into the ins and outs of the purpose of formalisation - I was just pointing out that there were different projects that fall under this term. If it's formal proofs you want, the meaning of = doesn't matter - you can do the proof without knowing the meaning =. But that holds for ALL the symbols - so identity isn't special. There are other projects one (not necessarily you) might have in mind where the interpretation of the symbols does become relevant.

Staff Emeritus
Gold Member
If you could list Kunen's axioms, that might be helpful.
Definition II.10.1 A logical axiom of $\mathcal L$ is any sentence of $\mathcal L$ which is a universal closure (see Definition II.5.6) of a formula of one of the types listed below. Here, x,y,z, possibly with subscripts, denote arbitrary variables.

1. propositional tautologies.
2. $\varphi\rightarrow\forall x\varphi$, where x is not free in $\varphi$.
3. $\forall x(\varphi\rightarrow\psi)\rightarrow(\forall x\varphi\rightarrow\forall x\psi)$.
4. $\forall x\varphi\rightarrow\varphi(x\rightsquigarrow\tau)$, where $\tau$ is any term which is free for x in $\varphi$.
5. $\varphi(x\rightsquigarrow\tau)\rightarrow\exists x\varphi$, where $\tau$ is any term which is free for x in $\varphi$.
6. $\forall x\lnot\varphi\leftrightarrow\lnot\exists x\varphi$.
7. $x=x$.
8. $x=y\leftrightarrow y=x$
9. $x=y\land y=z\rightarrow x=z$.
10. $x_1=y_1\land\dots\land x_n=y_n\rightarrow (f(x_1\dots x_n)=f(y_1\dots y_n))$, whenever n>0 and f is an n-place function symbol of $\mathcal L$.
11. $x_1=y_1\land\dots\land x_n=y_n\rightarrow (p(x_1\dots x_n)\leftrightarrow p(y_1\dots y_n))$, whenever n>0 and p is an n-place predicate symbol of $\mathcal L$.

(My comment: I don't know what $\rightsquigarrow$ means.)

Definition II.10.3 If $\Sigma$ is a set of sentences of $\mathcal L$, then a formal proof from $\Sigma$ is a finite non-empty sequence of sentences of $\mathcal L$, $\varphi_0,\dots,\varphi_n$, such that for each i, either $\varphi_i\in\Sigma$ or $\varphi_i$ is a logical axiom or, for some j,k<i, $\varphi_i$ follows from $\varphi_j,\varphi_k$ by Modus Ponens (that is, $\varphi_k$ is $(\varphi_j\rightarrow\varphi_i)$). This sequence is a formal proof of its last sentence, $\varphi_n$.
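Incidentally, Definition II.10.3 is mechanical enough to check by machine. Here is a toy sketch in Python (the tuple encoding of formulas and all the names are mine, not Kunen's): a sequence counts as a formal proof iff every entry is in $\Sigma$, is a logical axiom, or follows from two earlier entries by Modus Ponens.

```python
def is_formal_proof(seq, axioms, sigma):
    """Toy check of Kunen's Definition II.10.3.

    Formulas are encoded as atoms (strings) or implications ('->', p, q).
    `axioms` plays the role of the logical axioms, `sigma` of the set Sigma.
    """
    if not seq:  # a formal proof is a non-empty sequence
        return False
    for i, phi in enumerate(seq):
        if phi in sigma or phi in axioms:
            continue
        earlier = seq[:i]
        # Modus Ponens: some earlier psi such that ('->', psi, phi) is also earlier
        if any(('->', psi, phi) in earlier for psi in earlier):
            continue
        return False
    return True

# For example, with Sigma empty and 'p' and 'p -> q' as "axioms",
# the sequence p, (p -> q), q is a formal proof of q:
print(is_formal_proof(['p', ('->', 'p', 'q'), 'q'],
                      axioms={'p', ('->', 'p', 'q')}, sigma=set()))  # True
```

Note how the checker never looks inside the formulas except to match the shape of an implication: the "meaninglessness" of the symbols is literal here.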

If it's formal proofs you want, the meaning of = doesn't matter - you can do the proof without knowing the meaning =. But that holds for ALL the symbols - so identity isn't special.
That's what I've been thinking this whole time, but it seemed to me when I started this thread that = was treated differently, because the extensionality axiom was written as

$$\forall x\forall y(\forall z(z\in x\leftrightarrow z\in y)\rightarrow x=y)$$

and the arrow going in the other direction was left out with the motivation that it follows from the fact that = has the meaning of equality. If we really need to use the meaning here, then = is treated differently, but we can just use the logical axioms instead. Since the logical axioms list the properties of all the symbols (= in 7-11 and the others in 1), = isn't really treated differently.

Last edited:
Hurkyl
Staff Emeritus
Gold Member
4. $\forall x\varphi\rightarrow(x\rightsquigarrow\tau)$, where $\tau$ is any term which is free for x in $\varphi$.
5. $\varphi(x\rightsquigarrow\tau)\rightarrow\exists x\varphi$, where $\tau$ is any term which is free for x in $\varphi$.
4 is missing a varphi?

(My comment: I don't know what $\rightsquigarrow$ means.)
It looks like it's indicating substitution, replacing x with tau.

yossell
Gold Member
I have to rush right now - but as a start, it seems that
Az( x e z <-> x e z)

counts as a logical axiom - as it is a universal closure of a tautology. We need definition II.5.6 to be sure. Clearly, universal closure is going to be doing some of the work of All introduction or generalisation here.

I don't think the fact that the extensionality axiom is written using = has any more significance than the fact it also contains all and if then etc. From a proof-theoretic point of view, there will be background axioms governing the behaviour of identity - so the proof theory will all be fine.

Staff Emeritus
Gold Member
4 is missing a varphi?

(My comment: I don't know what $\rightsquigarrow$ means.)
It looks like it's indicating substitution, replacing x with tau.
Thanks. I have edited 4 now.

We need definition II.5.6 to be sure.
Definition II.5.6 If $\varphi$ is a formula, a universal closure of $\varphi$ is any sentence of the form $\forall x_1\forall x_2\cdots\forall x_n\varphi$, where n≥0.

This sounds a bit strange to me. The example he gives is that if $\chi$ is the formula $x\cdot(y\cdot z)=(x\cdot y)\cdot z$, then $\forall x\forall y\forall z \chi$ is a universal closure of $\chi$, but it seems that even $\forall w\chi$ would be a universal closure of $\chi$, since he doesn't say that $x_1,\dots,x_n$ are the free variables of $\varphi$.

Staff Emeritus
I understand what you said in #23. $x\in z\leftrightarrow x\in z$ is a tautology. That means that $\forall z(x\in z\leftrightarrow x\in z)$ is a logical axiom, since it's the universal closure of a tautology. (Kunen's #1 implies that it's a logical axiom). By the same reasoning, $\forall z(z\in x\leftrightarrow z\in x)$ is a logical axiom too. However, the $\land$ of these two axioms isn't a tautology, so now I don't see which one of Kunen's axioms I should use to get the final result
$$\forall x(\forall z(x\in z\leftrightarrow x\in z)\land\forall z(z\in x\leftrightarrow z\in x))$$
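For reference, here is one way the remaining steps could go in Kunen's system (a sketch; since logical axioms must be sentences, I use the fully closed forms, and I write $P$ for $\forall z(x\in z\leftrightarrow x\in z)$ and $Q$ for $\forall z(z\in x\leftrightarrow z\in x)$). The missing conjunction step is itself a universal closure of an instance of the tautology $p\rightarrow(q\rightarrow p\land q)$, and axiom 3 pushes it under the $\forall x$:

1. $\forall x\,P$ (axiom 1: a universal closure of the tautology $x\in z\leftrightarrow x\in z$)
2. $\forall x\,Q$ (axiom 1: a universal closure of the tautology $z\in x\leftrightarrow z\in x$)
3. $\forall x\,(P\rightarrow(Q\rightarrow P\land Q))$ (axiom 1: a universal closure of an instance of $p\rightarrow(q\rightarrow p\land q)$)
4. $\forall x\,(P\rightarrow(Q\rightarrow P\land Q))\rightarrow(\forall x\,P\rightarrow\forall x\,(Q\rightarrow P\land Q))$ (axiom 3)
5. $\forall x\,P\rightarrow\forall x\,(Q\rightarrow P\land Q)$ (Modus Ponens, 3, 4)
6. $\forall x\,(Q\rightarrow P\land Q)$ (Modus Ponens, 1, 5)
7. $\forall x\,(Q\rightarrow P\land Q)\rightarrow(\forall x\,Q\rightarrow\forall x\,(P\land Q))$ (axiom 3)
8. $\forall x\,Q\rightarrow\forall x\,(P\land Q)$ (Modus Ponens, 6, 7)
9. $\forall x\,(P\land Q)$ (Modus Ponens, 2, 8)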