Fredrik said:
It still seems to me that we can't claim to have successfully formalized mathematics until we've seen that we can do all this language/model theory/proof theory stuff without using the meaning of the = symbol. Do you guys agree or disagree with that?
I think that either answer is defensible, depending on what you want from formalisation.
If proving is your game, then the meaning of '=' shouldn't matter. What you want are axioms governing the behaviour of formulas containing this symbol - axioms that enable you to prove all the right sentences involving it: that x = x, that (x = y & y = z) -> x = z, and the like.
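As a sketch of what "axioms governing the symbol" might look like, here identity is treated as just another binary predicate in Lean 4, with reflexivity and a substitution scheme taken as axioms and transitivity derived from them. All the names (Obj, eq', and so on) are placeholders of my own, not anything standard:

```lean
-- Sketch: eq' is an uninterpreted binary predicate; its behaviour is
-- fixed entirely by the axioms below, not by any intended meaning.
axiom Obj : Type
axiom eq' : Obj → Obj → Prop

-- Reflexivity and a Leibniz-style substitution scheme, taken as axioms.
axiom eq'_refl : ∀ x : Obj, eq' x x
axiom eq'_subst : ∀ (P : Obj → Prop) (x y : Obj), eq' x y → P x → P y

-- Transitivity is then provable from the axioms alone.
theorem eq'_trans (x y z : Obj) (h1 : eq' x y) (h2 : eq' y z) :
    eq' x z :=
  eq'_subst (fun w => eq' x w) y z h2 h1
```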
However, if proving is your game, then the meanings of ALL the symbols shouldn't matter - including conjunction, disjunction, and the universal and existential quantifiers. Here, too, what we want are rules that tell us how these symbols work - that from P & Q you can infer Q, that from Fa you can infer ExFx, and the like. Again, whether a formula is provable in a system is logically independent of the meanings of the symbols that occur in it - it's a purely deductive matter.
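The same point in miniature, again as a hedged Lean sketch (F and a are arbitrary placeholders): each one-line proof below uses only the introduction or elimination rule for the relevant symbol, never its "meaning".

```lean
-- Sketch: provability is a matter of the rules alone.
axiom Obj : Type
axiom F : Obj → Prop
axiom a : Obj

-- From P & Q, infer Q (conjunction elimination).
example (P Q : Prop) (h : P ∧ Q) : Q := h.2

-- From Fa, infer ExFx (existential introduction).
example (h : F a) : ∃ x, F x := ⟨a, h⟩
```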
If you have a more model-theoretic notion of 'follows' in mind, things are different. But that's because the models we usually use are defined and constructed in a way that automatically respects the meanings of the quantifiers and the truth-functional connectives, while it's easy to generate models that don't care about the meanings of the predicates and names of the relevant language.
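To make "models that don't care about the meaning of =" concrete, here is a toy example of my own in Lean: read the identity symbol over the integers as "same parity". That relation validates reflexivity and transitivity, so it passes the usual identity axioms, yet it is not genuine identity.

```lean
-- Sketch: interpreting the identity symbol as "same parity" on Int.
def sameParity (m n : Int) : Prop := m % 2 = n % 2

-- It validates the reflexivity and transitivity axioms...
example (n : Int) : sameParity n n := rfl
example (m n k : Int) (h1 : sameParity m n) (h2 : sameParity n k) :
    sameParity m k := h1.trans h2

-- ...but it is not true identity: it relates 0 and 2.
example : sameParity 0 2 := rfl
```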
I don't think there's a great deal of significance in this. First order theories are introduced when we notice that there are various valid inference patterns (Every F is G; a is an F; ergo, a is a G) that hold whatever the meanings of the names and the predicates. First order languages are then designed to have just enough structure to enable you to represent a range of such valid inference patterns, and model theory for such languages was then created to study exactly such inferences.
If one is somehow worried that the meaning of a term can play a significant role in logic, I would wonder why it is felt that the meaning of = should be variable, while the meanings of 'All' and 'and' can be fixed across models.
Isn't the idea here that the meaning of the = symbol implies stuff like x = x, so that these sentences don't have to be explicitly included among the proof-theory axioms? In other words: if we want a complete formalization of mathematics, use first-order logic without identity; but if we want to type a little less, and make things a bit easier for our human brains, use first-order logic with identity.
Again, the proof theory doesn't change, whether we keep the meaning of identity fixed or whether we vary it across models. If we want to type a little less, we prove meta-theorems that allow us to use derived rules, such as transitivity of identity, rather than doing it all again.
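For instance (continuing the earlier axiomatic sketch, with the same made-up names), symmetry of identity can be proved once as a derived rule and then cited wherever it's needed, instead of repeating the substitution argument each time:

```lean
-- Sketch: a derived rule, proved once from the axioms and then reused.
axiom Obj : Type
axiom eq' : Obj → Obj → Prop
axiom eq'_refl : ∀ x : Obj, eq' x x
axiom eq'_subst : ∀ (P : Obj → Prop) (x y : Obj), eq' x y → P x → P y

-- Symmetry follows from reflexivity plus substitution.
theorem eq'_symm (x y : Obj) (h : eq' x y) : eq' y x :=
  eq'_subst (fun w => eq' w x) x y h (eq'_refl x)
```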
If, on the other hand, you're concerned not with proof theory but with, say, how much mathematics a theory can formulate (can set theory really formulate *all* mathematics? Peano Arithmetic? Group Theory? Category Theory?), then it seems to me that the meanings of the predicates are an important component - you really need a way of formulating identity if you're going to say that every number has a unique successor.
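To illustrate that last point, here is a hedged Lean sketch of "every number has a unique successor" (Num and Succ are placeholder names): the uniqueness clause is stated with the identity symbol, and it's hard to see how to state it without one.

```lean
-- Sketch: "every x has a unique successor". The ∃! quantifier unfolds
-- to existence plus a uniqueness clause phrased with `=`.
axiom Num : Type
axiom Succ : Num → Num

example : ∀ x : Num, ∃! y : Num, Succ x = y :=
  fun x => ⟨Succ x, rfl, fun _ h => h.symm⟩
```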
In order to prove those sentences, you need to use the proof theory for predicate logic - that is, you need a proof theory that tells you when you can introduce and eliminate the universal and existential quantifiers. If you're only looking at the rules for propositional logic, you can't do it - my guess is that this is why it seems hard.
Basically, the rule for all-introduction says that if you've managed to prove Fx, and nowhere in the proof is there an assumption specifically about x, then you can conclude AxFx. So you just need to apply this rule twice to the two tautologies you mention, and then conjunction introduction gives you the result.
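Here is the shape of that proof as a Lean sketch. Since the two tautologies aren't quoted in this excerpt, I've used x = x and x = x -> x = x as stand-ins; the structure (generalize each one, then conjoin) is what matters.

```lean
-- Sketch: prove each formula with no special assumption about x,
-- generalize with ∀-introduction (the fun binder), then combine the
-- two universal statements with ∧-introduction (the ⟨_, _⟩ constructor).
example : (∀ x : Nat, x = x) ∧ (∀ x : Nat, x = x → x = x) :=
  ⟨fun _ => rfl, fun _ h => h⟩
```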