# X axioms

Gold Member
X axioms...

Let's assume we have a consistent system which has X axioms/definitions. Can we somehow infer (or deduce) from this how many theorems/lemmas there are in this system? (Or deduce the maximal number of theorems/lemmas that could be proved in this system.)

My initial answer would be no, but I think this question is metamathematical and perhaps has an answer in fields I am not aware of.

Homework Helper
There could easily be infinitely many theorems. Assuming there is something like --> in this system, then for any sentence A of the language, we'd have A --> A, and there could be infinitely many strings that are proper sentences of the language. What's more, we'd also have:

A --> (A --> A)
A --> (A --> (A --> A))
...
etc.
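As a toy illustration of how this one pattern already yields infinitely many distinct sentences, here is a small Python sketch (the helper name `nested_implication` is mine, not anything from the thread) that prints the family above:

```python
def nested_implication(a: str, depth: int) -> str:
    """Build the family A -> A, A -> (A -> A), A -> (A -> (A -> A)), ...
    depth 0 gives A -> A; each extra level nests one more implication."""
    expr = f"{a} -> {a}"
    for _ in range(depth):
        expr = f"{a} -> ({expr})"
    return expr

for d in range(3):
    print(nested_implication("A", d))
# A -> A
# A -> (A -> A)
# A -> (A -> (A -> A))
```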

Gold Member
I meant theorems besides sentences which are considered tautologies, such as the ones you have mentioned.

Homework Helper
Do you know the difference between a theorem and a tautology? In standard propositional logic, the things I've stated are both theorems and tautologies. A theorem is anything derivable from the set of axioms of the system using the rules of inference of the system, and a tautology is something that is true on every possible valuation (e.g. on every truth-value assignment of a sentence of propositional logic), if I recall correctly.

Gold Member
Yes, I do. And if you evaluate A -> A:
A = T (true): the implication is true.
A = F (false): the implication is true again.
Am I wrong here, or what?

Homework Helper
(A --> A) is a tautology of propositional logic because on every truth-value assignment of its atomic component(s), the compound sentence is true. (A --> A) is a theorem of propositional logic because it can be derived using the derivation system of propositional logic from the set of axioms of propositional logic, which happens to be empty. If I recall correctly, in propositional logic every theorem is a tautology and vice versa, while in predicate logic this is not the case. For some logics, every theorem is a tautology, and for some logics, every tautology is a theorem (and of course, for some logics, both are the case). One of these conditions is called soundness, the other completeness, although I can't remember which way is which. Gödel's completeness and incompleteness theorems were about showing which logics were complete and which weren't, roughly speaking, although again I might be off in this.
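To make the "true on every valuation" idea concrete, here is a brute-force truth-table checker in Python (a sketch of the standard method; the function names are mine):

```python
from itertools import product

def implies(p: bool, q: bool) -> bool:
    """Material implication: p -> q is false only when p is true and q is false."""
    return (not p) or q

def is_tautology(formula, num_atoms: int) -> bool:
    """True iff `formula` holds on every truth-value assignment
    of its atomic components (2**num_atoms assignments in all)."""
    return all(formula(*vals)
               for vals in product([True, False], repeat=num_atoms))

print(is_tautology(lambda a: implies(a, a), 1))   # True:  (A -> A) is a tautology
print(is_tautology(lambda a: a, 1))               # False: A alone is not
```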

Gold Member
We define a set of formulas X in language L. To get a set of tautologies Y in L, we define a truth-valuation on L (a mapping from X to {T, F}). To get a set of theorems Z in L, we define a set of axioms (a subset of X) and a set of rules of inference. These last two sets are called a calculus. Our calculus is sound and complete iff Y = Z.
which I think agrees with what AKG is saying.
The thing that might cause some confusion is that you usually don't use axioms but axiom schema, i.e., you use syntactic variables that range over whatever atomic symbols you have, because your set(s) of atomic symbols is (are) usually infinite. I'll give this some more thought when I'm fully awake.
My first guess is that you may be able to tell from your axioms & rules whether the set of theorems is empty, finite, or infinite, how infinite, etc., but not get an exact number. But that's just a first guess. Also, axioms are theorems, so that gives you a start.

Last edited:
Gold Member
I think you need to sleep a little bit more. How can an axiom be regarded as a theorem? Axioms aren't derived from other sentences like theorems are.

Gold Member
Based only on the number of axioms or axiom schema, no, you cannot infer the number of theorems.

For instance, if you have no inference rules (rules, for short), the number of theorems will just be the number of axioms (without rules, you have no way to generate any more theorems). However, if you have a non-empty set of axioms and, say, one rule that given P you can infer ~~P, where P is any formula, you have an infinite number of theorems (p, ~~p, ~~~~p, etc.). So 1 axiom and 0 rules gives you 1 theorem, while 1 axiom and 1 rule can give you an infinite number of theorems, and so on.
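A quick Python sketch of that last point (names mine): one axiom plus the single rule "from P infer ~~P" already enumerates infinitely many theorems.

```python
from itertools import islice

def double_negation_theorems(axiom: str):
    """Endlessly apply the one rule 'given P, infer ~~P' to the one axiom."""
    theorem = axiom
    while True:
        yield theorem
        theorem = "~~" + theorem

# The first few theorems of the 1-axiom, 1-rule system:
print(list(islice(double_negation_theorems("p"), 4)))
# ['p', '~~p', '~~~~p', '~~~~~~p']
```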

If you have no rules and a set of axiom schema, your theorems will depend on how many instances of those schema exist, which could be finite or infinite.

Also, the same system can have different axiomatizations. For instance, propositional logic has axiomatizations with no axioms or schema and a handful of rules. Obviously, keeping those same rules, you can just adopt however many theorems as axioms and still end up with the same set of theorems. So the same set of rules combined with any number of axioms or schema can generate the exact same set of theorems.

There are also axiomatizations with one schema and one rule, five schemas and one rule, four schema and five rules, and so on. You can pretty much come up with any combination you want.*

*There may be a limit on the number of rules you can have. Depending on how 'well-behaved' your system is, you may be able to change theorems to rules. For instance, say (P -> P) is a theorem. You can change (P -> P) to the rule 'Given P, infer P'. You can do this, for instance, in propositional logic. You can also change rules to theorems in a similar way.

So the numbers by themselves don't seem to tell you anything.

You may think there's a way to count theorems by grouping them thusly: The theorems that are instances of the axioms; theorems that can be proven from the axioms; theorems that can be proven from the previous theorems; and so on. But there may be different ways to prove the same theorem, so you could end up counting the same theorem more than once.
The most obvious way this happens is by just adding your conclusion to your set of premises: You can infer P from (P) and from (P & Q) and from (P & (Q v ~R)) and so on. But here's another example.
Axiom schema:
i) a -> (b -> a)
ii) a -> a
Inference rule:
Modus Ponens - Given a and a -> b, infer b
(p -> (p -> p)) -> (p -> (p -> p)) is an instance of (ii). But it can also be proven from (i):

1) (p -> (p -> p)) -> ((p -> (p -> p)) -> (p -> (p -> p))) [instance of i]
2) p -> (p -> p) [instance of i]
3) (p -> (p -> p)) -> (p -> (p -> p)) [1, 2, Modus Ponens]
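The three-line derivation can be checked mechanically. Here is a minimal sketch in Python (formulas as nested tuples; all names mine) that replays it:

```python
# Formulas: atoms are strings, implications are tuples ('->', antecedent, consequent).

def modus_ponens(premise, conditional):
    """Given a and (a -> b), return b; return None if the rule doesn't apply."""
    if isinstance(conditional, tuple) and conditional[0] == '->' \
            and conditional[1] == premise:
        return conditional[2]
    return None

PPP = ('->', 'p', ('->', 'p', 'p'))        # line 2: p -> (p -> p), instance of (i)
LINE1 = ('->', PPP, ('->', PPP, PPP))      # line 1: instance of (i) with a = b = PPP
LINE3 = modus_ponens(PPP, LINE1)           # lines 1 and 2, Modus Ponens

print(LINE3 == ('->', PPP, PPP))           # True: (p -> (p -> p)) -> (p -> (p -> p))
```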

So I don't see a way to count theorems this way either.

On the bright side, all of your theorems will be formulas (the 'grammatical sentences' of the language). So there is at least a limit on the number of theorems. You can reduce this number further; for instance, if you know that P and ~P are not both theorems, then the number of theorems is at most half the number of formulas.

Gold Member
loop quantum gravity said:
I think you need to sleep a little bit more. How can an axiom be regarded as a theorem? Axioms aren't derived from other sentences like theorems are.
You don't have to; it's just easier to define them that way. You want to say that P is a theorem if there is a proof of P in your system. The way you define proofs, an axiom is a proof of itself. So axioms are theorems, albeit special kinds of theorems.

Homework Helper
loop quantum gravity said:
I think you need to sleep a little bit more. How can an axiom be regarded as a theorem? Axioms aren't derived from other sentences like theorems are.
Well, I asked if you knew the difference between a tautology and a theorem, but it appears you don't. A theorem is something that can be derived from the axioms. Any reasonable logic will allow you to derive P from P, so you can derive any axiom from the axioms, hence every axiom is a theorem.

Homework Helper
I think it should be possible to deduce the minimum number of non-trivial (meant more as a lay concept than as a strict logical concept) axioms necessary for a system to be internally consistent. The easiest practical method I can think of is asking "how many plugs are needed for the system to hold water, so to speak?"

Neoclassical economists spent a lot of time finding out the minimum number of axioms (assumptions) necessary to prove that a competitive equilibrium exists in a market (free exchange) economy. The concise proof can be found in the seminal work of the 1983 Economics Nobel prize winner Gerard Debreu.

P.S. Oh, sorry! I completely reversed the original question! I think I need some more sleep (or coffee). If you knew the number of axioms, could you have deduced the number of non-trivial (again, a lay concept) theorems? It should be possible to go the other way in the example I gave and ask "starting from these axioms, what is the minimum number of theorems needed to prove that a competitive equilibrium exists?" I reserve the right to modify this as more coffee becomes available to the system.

Last edited by a moderator:
Staff Emeritus
Gold Member
Dearly Missed
Well, set theory has a finite number of axioms, and from it you can derive all the theorems of arithmetic (of course some of those theorems are undecidable, but you CAN derive them!). It seems likely that the distinct theorems of arithmetic are infinite, given the least noninteresting number theorem:

There are no noninteresting whole numbers. Proof by contradiction: if there were noninteresting numbers, then since the whole numbers are well-ordered, there would be a least one. But the least noninteresting number is interesting just for that property itself! Contradiction. Q.E.D.

Here a number is said to be interesting if it has some property, other than its value, that distinguishes it from all other numbers.

Homework Helper
But there should be an answer to the (more modest) question "what is the minimum number of theorems needed to prove the non-trivial statement XYZ starting from the set of axioms ABC?", shouldn't there? ("Non-trivial" meant as a layperson concept.) Sort of like finding the shortest route from point A to point X? (This may be a forced analogy, BTW. I haven't a clue what kind of a metric can be used in a space in which points are statements.)

P.S. I realize that is not what the OP asked, but that question is settled by your (selfAdjoint's) post.

Last edited:
Homework Helper
You might ask what is the fewest number of lines required to prove sentence P from the set of sentences S, but otherwise, assuming P can be proven from S, the minimum number of theorems is 0.

Homework Helper
So assuming a metric can be defined, what you are saying is that "P is provable from S" means zero distance, so P = S in that "metric" space. Is that even close?

Homework Helper
No, I mean "the number of theorems required" does not provide a good metric, and is barely meaningful. In fact, I don't think the idea of a metric is a suitable one. For one, statements are proven from sets of statements, so our "metric" would tell us the distance between points and sets of points, which is not right. Moreover, our metric would not be symmetric because even if we can prove P from Q, that does not mean we can prove Q from P. There's also no guarantee that we can prove P from Q or vice versa, so the metric could not even be defined, even if you defined the distance from P to Q to be the distance from either P to Q (if Q follows from P) or Q to P (if P follows from Q).

The problem with "the number of theorems needed" is that if you have a proof that uses theorem T, it's no different from proving theorem T within your proof, so you could always say you didn't need that theorem. Also, every line of your proof (that occurs in the scope of the primary assumptions, i.e. the axioms) would technically count as a theorem, so it's better just to count lines anyways.

Still, it's not a good metric for most of the same reasons above (it would give a distance from a set to a point, and it wouldn't be symmetric).

Homework Helper
AKG said:
I don't think the idea of a metric is a suitable one. For one, statements are proven from sets of statements, so our "metric" would tell us the distance between points and sets of points, which is not right.
It doesn't have to; if each statement is a point then you might have statements x, y, and z that can be proven using statements a, b and c. Formally, that's just like a vector space.
Moreover, our metric would not be symmetric because even if we can prove P from Q, that does not mean we can prove Q from P.
I am not sure whether a metric has to be symmetric. If it does, then you are right, a metric cannot be defined. But you may still have a space ordered by "needed to prove" relation: define relation "$\prec$" such that if a set of statements A = {a1,...,ak} is needed to prove the set of statements X = {x1,...,xk} then one writes A $\prec$ X.
There's also no guarantee that we can prove P from Q or vice versa, so the metric could not even be defined, even if you defined the distance from P to Q to be the distance from either P to Q (if Q follows from P) or Q to P (if P follows from Q).
Yes, that means that this ordering is incomplete. Alternatively, you could restrict it to the subspace of statements that are known to be provable. On that subspace, the ordering is complete.
The problem with "the number of theorems needed" is that if you have a proof that uses theorem T, it's no different from proving theorem T within your proof, so you could always say you didn't need that theorem.
That depends on definition. Let's say that if either T itself or a proof of T is needed in proving U then by definition T $\prec$ U.

Homework Helper
EnumaElish said:
It doesn't have to; if each statement is a point then you might have statements x, y, and z that can be proven using statements a, b and c. Formally, that's just like a vector space.
I'm not exactly sure what you're saying. Are you saying that if you can derive x from {a, b, c} in n lines (say that's our "metric") then d(x,a) = d(x,b) = d(x,c) = n? Well if you can derive x from {a, b, c} in n lines, then you can derive x from {a, b, c} U S in at most n lines for any set of statements S. In fact, since you can derive x from {x} in 1 line (or perhaps we should say 0 lines to get this looking more like a metric), you can derive x from {x, a, b, c} in 0 lines, so d(x, y) = 0 for all sentences x, y.
I am not sure whether a metric has to be symmetric. If it does, then you are right, a metric cannot be defined. But you may still have a space ordered by "needed to prove" relation: define relation "$\prec$" such that if a set of statements A = {a1,...,ak} is needed to prove the set of statements X = {x1,...,xk} then one writes A $\prec$ X.
What makes a set A a necessary set to prove X? Is it that all the sentences of A occur in the scope of the primary assumptions of the derivations of all the sentences of X?
Yes, that means that the this ordering is incomplete. Alternatively, you could restrict it to the subspace of statements that are known to be provable. On that subspace, the ordering is complete.
I'm not sure what you mean by complete or incomplete orderings.
That depends on definition. Let's say that if either T itself or a proof of T is needed in proving U then by definition T $\prec$ U.
Well, this is a different issue. That's fine for an ordering relation, but if you want a metric, it's still better to count lines.

Staff Emeritus
Gold Member
I think it should be possible to deduct the minimum number of non-trivial (meant more as a lay concept than a strict logical concept) axioms necessary for a system being internally consistent.

Zero.

If a collection of axioms is consistent, then so is any subcollection.

Neoclassical economists spent a lot of time to find out the minimum number of axioms (assumptions) necessary to prove that a competitive equilibrium exists in a market (free exchange) economy.

If it's not a tautology, then you only need one axiom. I'll give you one guess as to a good choice for that axiom.

It should be possible to go the other way in the example I gave and ask "starting from these axioms, what is the minimum number of theorems needed to prove that a competitive equilibrium exists?"

One: the theorem that a competitive equilibrium exists. :tongue2: (Assuming that a proof exists, of course.)

Well set theory has a finite number of axioms, and from it you can derive all the theorems of arithmetic (of course some of those theorems are undecidable, but you CAN derive them!).

If it's a theorem, then it can't be undecidable, because you have a proof of it. :tongue2:

But you may still have a space ordered by "needed to prove" relation: define relation "$\prec$" such that if a set of statements A = {a1,...,ak} is needed to prove the set of statements X = {x1,...,xk} then one writes A $\prec$ X.

I don't think the idea of "needed to prove" makes any sense at all. I can prove the set of statements {R} from the set of statements {R}. I can also prove it from the set of statements {~~R}.

Homework Helper
I think if you want to consider an ordering, I can help you clarify what you want. Let us deal only with provable sentences (provable from the axioms of the system, and nothing more). For any such sentence P, there is a set of derivations D(P) = {D(P)1, D(P)2, ...} for P. Define D(P)i' as the set of sentences in the derivation D(P)i that occur in the scope of the primary assumptions, plus the primary assumptions themselves. Next, define [D(P)i'] as the set of equivalence classes of the sentences in D(P)i' defined by the natural equivalence relation (p and q are equivalent iff (p iff q)). Finally, define N(P) by:

$$N(P) = \bigcap _{S \in D(P)} [S']$$

Then $\forall Q, [Q] \in N(P) \Rightarrow Q \prec P$. These Q are then all the theorems that occur in every proof of P, in some shape or form. At first glance, this does indeed give a partial ordering, which is the best we can hope for.
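A toy Python sketch of the N operator, using made-up derivation data and ignoring the equivalence-class step for simplicity (everything here is illustrative, not part of the formalism above):

```python
def necessary_sentences(derivations):
    """Intersect, over every known derivation of P, the set of sentences
    that derivation uses: the sentences appearing in *all* proofs of P."""
    return set.intersection(*derivations)

# Hypothetical data: two derivations of P, each recorded as the set
# of sentences it uses (axioms, lemmas, and P itself).
derivations_of_P = [
    {"A1", "L1", "P"},   # proof 1 uses axiom A1 and lemma L1
    {"A1", "L2", "P"},   # proof 2 uses axiom A1 and lemma L2
]

print(sorted(necessary_sentences(derivations_of_P)))   # ['A1', 'P']
```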

Last edited:
Homework Helper
In fact, you could even use this to define a metric. I believe you can define a graph whose nodes are equivalence classes of sentences. A directed arrow will go from [P] to [Q] if [P] is in N(Q). Given a proper algorithm, you could generate this graph from the nodes [A] for each axiom A. Anyways, you could use this directed graph to describe your system.

If you then made each edge undirected, you could then define a metric from [P] to [Q] as the length of the shortest path from [P] to [Q]. This would indeed satisfy the properties of a metric. Note that it is well-defined because the graph is connected: for every sentence P, at least one axiom AP must be directly connected to it, and the same is true for any sentence Q (i.e. there is an axiom AQ). The sentence R = AP & AQ is clearly connected to both axioms, so P --> AP --> R --> AQ --> Q gives a path from P to Q.

This is rather uninteresting, because any two sentences have a distance of at most 4 from each other. It might be better to say that if [Q], [R] are in N(P) and [Q] is in N(R), then omit the edge from [Q] to [P]. Without this omission, we could say that $Q \prec P$ if there is a directed edge from [Q] to [P]. Now, we would say that $Q \prec P$ if there is a path from [Q] to [P] (treating the edges still as directed; we only "undirect" them for our metric).
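The "distance at most 4" claim can be checked on a toy graph with a standard breadth-first search (a sketch; the node names mirror the P, AP, R, AQ, Q chain described above):

```python
from collections import deque

def shortest_path_length(graph, start, goal):
    """Breadth-first search over an undirected graph given as adjacency sets."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == goal:
            return dist
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None  # no path

# The connecting chain from the argument: P - A_P - R - A_Q - Q,
# where R = A_P & A_Q links the two axioms.
graph = {}
for a, b in [("P", "A_P"), ("A_P", "R"), ("R", "A_Q"), ("A_Q", "Q")]:
    graph.setdefault(a, set()).add(b)
    graph.setdefault(b, set()).add(a)

print(shortest_path_length(graph, "P", "Q"))   # 4
```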

Staff Emeritus
Gold Member
"implies" is a very simple, and quite natural (pre)order on the set of all statements. It's a fairly nice one too, because the equivalence classes form a lattice! It's even a Boolean lattice, I think.

Anyways, I'm not so sure your N operator is a very useful thing. For example, suppose I take as my set of axioms {P, Q}, and I set R := P or Q.

Then, I can exhibit two derivations of R:
P, therefore R
Q, therefore R

So, unless P happens to be equivalent to Q, N(R) is simply the set of statements equivalent to R.

Homework Helper
Hurkyl said:
"implies" is a very simple, and quite natural (pre)order on the set of all statements. It's a fairly nice one too, because the equivalence classes form a lattice! It's even a Boolean lattice, I think.

Anyways, I'm not so sure your N operator is a very useful thing. For example, suppose I take as my set of axioms {P, Q}, and I set R := P or Q.

Then, I can exhibit two derivations of R:
P, therefore R
Q, therefore R

So, unless P happens to be equivalent to Q, N(R) is simply the set of statements equivalent to R.
N(R) is the set of (equivalence classes of) statements that are necessary for R. "Implies" does give a nice partial ordering as well; in fact we can say that such an ordering is defined using S(R), the set of statements that are sufficient for R. In fact, if we define R as P & Q, then neither P --> R nor Q --> R holds: neither P nor Q is sufficient for R, but they're both necessary. This suggests some sort of duality between the two graphs. I don't know what you mean by "lattice" or "Boolean lattice" but given the above, I wouldn't be surprised if whatever "nice" properties the lattice has for ordering by S, it will have for ordering by N. The only advantage N has is that it does what EnumaElish was looking for (he/she was interested in dealing with the notion of some theorems being required to prove others).

One addition I need to make is that the graph will have to have an "empty" class, [].

Homework Helper
Hurkyl said:
Neoclassical economists spent a lot of time to find out the minimum number of axioms (assumptions) necessary to prove that a competitive equilibrium exists in a market (free exchange) economy.
If it's not a tautology, then you only need one axiom.
What I meant to say was, "min. number of fundamental set-theoretic and/or mathematical axioms that are needed to prove that in an economy with finite numbers of consumers, producers, and markets there is always a unique price in each market such that demand equals supply in that market." You could postulate this (existence of a price such that demand = supply for each market) as an axiom, but that wouldn't be very interesting. To be interesting, one would try to imagine the minimum "bare essentials" that have to be assumed about the characteristics of consumers, producers, and goods, and then prove that such a price can be shown to exist for each good by combining all the "bare essentials" in a clever and novel way.

For example, one could start by assuming that "consumer demand for a good is a decreasing function of price." But one could then ask, "how can this demand function be derived from even more fundamental consumer characteristics such as use value?" and "what is the minimum (the most general) axiomatic structure that can be imposed on use value such that this can be accomplished?" E.g. one could assume that use value of a banana = 2 for all consumers. This wouldn't be very general, because what if some of the consumers have no taste for banana? Or, what if the value of the tenth banana is much less than that of the first banana? Or, what if use value was not directly measurable?

Last edited:
Gold Member
AKG said:
Well, I asked if you knew the difference between a tautology and a theorem, but it appears you don't. A theorem is something that can be derived from the axioms. Any reasonable logic will allow you to derive P from P, so you can derive any axiom from the axioms, hence every axiom is a theorem.
But it's the most trivial theorem, and thus I haven't given it the importance you two have given it.
The notion of p -> p is by itself an undeniable truth (I wonder if you can give an example of a system where such a statement is not always true?).

Homework Helper
loop quantum gravity said:
But it's the most trivial theorem, and thus I haven't given it the importance you two have given it.
No one is giving it importance. If you know the definition of a theorem, then in any reasonable logic, every axiom is a theorem. I'll give you the definitions again, for propositional logic:

Tautology: A sentence is a tautology iff the sentence is true for every possible truth-value assignment of its atomic components.

Theorem: A sentence is a theorem iff it can be derived from an empty set of assumptions.

Now it looks like you think that a theorem is an interesting or useful result, and a tautology is a self-evident or trivial result. That's not what the words mean in logic, however. If you want to count the theorems in a logical system, then you had better use the meaning of the word "theorem" correctly; I don't know why you insist on arguing about this. On the other hand, if what you really meant to say is that you want a way to count the "interesting" theorems (and tautologies) a logic could produce, then you'd have to give a good definition of "interesting"; otherwise this is impossible.
The notion of p -> p is by itself an undeniable truth (I wonder if you can give an example of a system where such a statement is not always true?).
As far as I'm concerned, (p -> p) follows by definition of '->' so any reasonable logical system that has something, '*', that means the same thing as '->' will define it and will provide rules of inference such that (p * p) is both a tautology (or whatever the equivalent to "tautology" would be in this system) and a theorem.

Homework Helper
loop quantum gravity said:
The notion of p -> p is by itself an undeniable truth (I wonder if you can give an example of a system where such a statement is not always true?).
Thinking aloud, I could define an ordering relationship whereby p is said to "strict imply" q if "p ---> q and not (q ---> p)," couldn't I? In this relationship, "p does not strict imply p" is true. That is, "p strict implies p" is false.
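This "strict implies" relation can be tested by brute force over truth tables. A Python sketch (the function names are mine), reading "p ---> q" as "q is true on every assignment where p is true":

```python
from itertools import product

def entails(f, g, num_atoms):
    """f tautologically implies g: g holds on every assignment where f holds."""
    return all(g(*vals)
               for vals in product([True, False], repeat=num_atoms)
               if f(*vals))

def strict_implies(f, g, num_atoms):
    """The relation defined above: f ---> g holds but g ---> f does not."""
    return entails(f, g, num_atoms) and not entails(g, f, num_atoms)

conj = lambda a, b: a and b      # a & b
left = lambda a, b: a            # a

print(strict_implies(conj, left, 2))   # True:  (a & b) strict-implies a
print(strict_implies(left, left, 2))   # False: no formula strict-implies itself
```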

Homework Helper
EnumaElish said:
Thinking aloud, I could define an ordering relationship whereby p is said to "strict imply" q if "p ---> q and not (q ---> p)," couldn't I? In this relationship, "p does not strict imply p" is true. That is, "p strict implies p" is false.
You could also define "-->" such that every sentence that contains it is false, so (p --> p) would be false. However, in both your example and in mine, you're using the same symbol to mean something other than what loop quantum gravity means. That's cheating: there's nothing interesting about saying that you can have a system that uses the same symbols to mean different things, so that sentences in the two systems that appear to be the same actually mean different things, and thus have different truth values. What's interesting is not the symbols but what they mean, so it would only be interesting if you had a system where, regardless of what symbols you used (which is why I used '*' instead of '-->'), you had a sentence that means the same thing as some sentence in his language but has a different truth value. Of course, this would be "interesting" but impossible (assuming we're dealing with "reasonable" systems, although there is much debate as to what makes a system reasonable).

Homework Helper
AKG said:
You could also define "-->" such that every sentence that contains it is false, so (p --> p) would be false. However, in both your example and in mine, you're using the same symbol to mean something other than what loop quantum gravity means.
If you re-read my post carefully, you will see that I was using the ---> symbol in its ordinary meaning, "implies." The relation I made up was "strict implies," which doesn't have a symbolic representation as of this time. "Strict implies" and the ordinary "implies" can be used within the same "system." You are correct that "strict implies" is similar to your * symbol.

That's exactly what I had intended; you read my thoughts.