Does Artemov's "S-consistency" really revive the Hilbert Program?

nomadreid
TL;DR
Does Sergei Artemov's version of consistency (references given), which allows PA to prove its own consistency, lead anywhere?
I have come across several preprints by Prof. Sergei Artemov, for example

https://arxiv.org/abs/2508.20346

https://arxiv.org/abs/2403.12272v1

(If it has appeared in a peer-reviewed journal yet, I have not found it.)

I have questions about the general approach.

First, my main question: whether my general understanding of what he wishes to do is correct.

Roughly, it appears that he says the usual definition of consistency is too strong; he substitutes a weaker definition: PA is consistent iff every finite subset of its axioms (of Peano Arithmetic, PA) is consistent. He then proves (via "selectors") that PA satisfies this weaker version, so that PA can prove its own consistency. (Unlike Gentzen's proof of the consistency of PA, which is not carried out within PA itself.) He says that Gödel's proof is not violated because that proof concerns the stronger version of consistency. He concludes that Hilbert's program is therefore not dead after all, since his definition of consistency is, in his view, the more appropriate one. Corrections to my understanding would be highly welcome.

Secondly, a set of more general questions: whether this version of consistency can be of any use to the rest of logic or is just philosophical quibbling; whether there are any major weaknesses in his approach (is it rigorous enough? are questions of infinity appropriately handled?); whether his main goal (reviving the Hilbert program) is indeed achieved by his procedure; and whether this procedure for PA would be extendible to other systems, e.g. ZFC. Any other relevant remarks would also be appreciated.

Many thanks in advance.
 
So, having spent some time on this (looking at various resources too), this is the partial understanding of it that I can manage. Note that this is just my (incomplete) understanding/interpretation as it currently stands, and I don't know to what extent it does or doesn't represent the actual argument being made.

========================

Let us first number all statements of ##PA## [which we can also think of as "number-theoretic statements"]. By that I mean all well-formed formulas with no free variables. We want a function, say ##code1##, whose domain is the set of all ##PA## statements and whose range is ##\mathbb{N}##. We want ##code1## to be a 1-1 correspondence (that is, a bijection). Let us also define the inverse function ##reverse1##, whose domain is ##\mathbb{N}## and whose range is the set of all ##PA## statements.
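As a toy illustration of such a coding (entirely my own sketch, not taken from the papers: the symbol set ##ALPHABET## and the bijective base-k scheme are made up for the example), here is what a ##code1##/##reverse1## pair could look like if formulas are treated as plain strings:

```python
# Toy bijective coding between formula-strings and natural numbers,
# in the spirit of code1/reverse1 above.  ALPHABET is a made-up symbol
# set, not PA's actual language; the scheme is bijective base-k numeration.

ALPHABET = "0S+*=()~&|>Av,x"  # hypothetical stand-in for PA's symbols

def code1(s: str) -> int:
    """Map a string over ALPHABET to a unique natural number."""
    k = len(ALPHABET)
    n = 0
    for ch in s:
        n = n * k + (ALPHABET.index(ch) + 1)  # digits run 1..k, not 0..k-1
    return n

def reverse1(n: int) -> str:
    """Inverse of code1 (the empty string corresponds to 0)."""
    k = len(ALPHABET)
    out = []
    while n > 0:
        n, d = divmod(n - 1, k)  # undo the 1..k digit shift
        out.append(ALPHABET[d])
    return "".join(reversed(out))
```

Because every natural number decodes to exactly one string, this really is a bijection. (Real Gödel numberings only need the map to be injective and computable, but the discussion above asks for a bijection.)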

Now the other interesting aspect is that, similarly, we can also number all the "derivations of ##PA##". Consider any statement ##S## that can be expressed in ##PA##. Any ##PA## proof of this statement can be called a derivation [do note that, by the first incompleteness theorem, some statements expressible in ##PA## will not have any ##PA## proof/derivation]. A given "##PA## derivation of ##S##" will essentially be a string of statements:
1)--- ##A_1##
2)--- ##A_2##
....
N)--- ##A_N## (the statement ##S## itself)

So this particular derivation can be written as a finite sequence ##(A_1,A_2,....,A_N)##. Note that the statements ##A_1##, ##A_2##, ...., ##A_i##, ...., ##A_N## will all be theorems of ##PA##. The statement ##A_1## must be an axiom [in other words, essentially a theorem reached from nothing]. The statement ##A_2## will either be an axiom or a deduction from ##A_1##. The statement ##A_3## will similarly either be an axiom or a deduction from some combination of ##A_1##, ##A_2##. In general, for any ##M < N##, the statement ##A_{M+1}## will either be an axiom or a deduction from some combination of ##A_1##, ##A_2##, ...., ##A_M##.
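The "axiom or deduction from earlier lines" condition can be sketched as a checker (again my own toy version: the ##AXIOMS## set and the single inference rule, modus ponens written with ">", are hypothetical placeholders and not PA's real axiom schemas and rules):

```python
# Toy checker for the derivation property described above: each line must
# be an axiom or follow from earlier lines.  AXIOMS and the ">"-notation
# for implication are hypothetical placeholders, not PA's real system.

AXIOMS = {"0=0"}  # made-up stand-in axiom set

def follows(line, earlier):
    """Modus ponens only: Q follows if some P and 'P>Q' occur earlier."""
    return any((p + ">" + line) in earlier for p in earlier)

def is_derivation(lines):
    for m, line in enumerate(lines):
        earlier = set(lines[:m])  # the statements A_1 ... A_m
        if line not in AXIOMS and not follows(line, earlier):
            return False
    return True
```

For instance, a one-line derivation consisting of an axiom passes, while a line that is neither an axiom nor deducible from anything earlier fails.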

Now one of the points is that, just as we can number all the statements, we can also number all the ##PA## derivations. We want a function, say ##code2##, whose domain is the set of all ##PA## derivations and whose range is ##\mathbb{N}##, and we want ##code2## to be a 1-1 correspondence (that is, a bijection). Let us also define the inverse function ##reverse2##, whose domain is ##\mathbb{N}## and whose range is the set of all ##PA## derivations.
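One standard way to get such a ##code2##/##reverse2## pair (my own sketch; the papers may well use a different scheme) is to pack the sequence of statement codes into a single number with the Cantor pairing function:

```python
# Sketch of code2/reverse2: encode a finite sequence of statement codes
# as one natural number via repeated Cantor pairing.  This is a standard
# trick, chosen here for illustration.

def pair(a: int, b: int) -> int:
    """Cantor pairing: a bijection from N x N to N."""
    return (a + b) * (a + b + 1) // 2 + b

def unpair(n: int):
    """Inverse of pair."""
    w = int(((8 * n + 1) ** 0.5 - 1) // 2)
    while (w + 1) * (w + 2) // 2 <= n:  # guard against float rounding
        w += 1
    while w * (w + 1) // 2 > n:
        w -= 1
    b = n - w * (w + 1) // 2
    return w - b, b

def code2(derivation):
    """Encode a list [c1, ..., cN] of statement codes as one number."""
    n = 0
    for c in reversed(derivation):
        n = pair(c, n) + 1  # +1 so the empty sequence maps to 0 uniquely
    return n

def reverse2(n):
    """Inverse of code2: recover the list of statement codes."""
    out = []
    while n > 0:
        c, n = unpair(n - 1)
        out.append(c)
    return out
```

Combined with ##code1##, this turns the derivation ##(A_1,A_2,....,A_N)## into the single number ##code2([code1(A_1),....,code1(A_N)])##.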

Lastly, when can we say that a derivation such as the one above contains a contradiction? Precisely when we can find some numbers ##1 \leq i,j \leq N## and some ##PA## statement ##P## such that ##A_i \equiv P## and ##A_j \equiv \neg P##.
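That condition is easy to state directly on a derivation given as a list of formula-strings (toy sketch; using "~" as the negation sign is my own convention here):

```python
# The contradiction test from the paragraph above: a derivation contains
# a contradiction iff some statement P and its negation ~P both occur.
# "~" as the negation symbol is a convention chosen for this sketch.

def has_contradiction(derivation):
    """True iff some statement and its negation both occur in the list."""
    seen = set(derivation)
    return any(("~" + s) in seen or (s.startswith("~") and s[1:] in seen)
               for s in derivation)
```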

========================

Now I will try to proceed in points:
(1) We want to define a predicate ##F:\mathbb{N} \rightarrow \{0,1\}##. This function is essential for our purposes. Given an arbitrary natural number ##n## it is equal to ##1##/true iff:
"The derivation ##reverse2(n)## does not contain a contradiction."

The important point here is that the predicate ##F(n)## can be written as a well-formed formula in ##PA## (with ##n## as an open variable).

(2) Now let us think about how we can write the statement ##con(PA)## in ##PA##. We can write it as (with quantification over the natural numbers ##\mathbb{N}##):
##\forall i \,[\, F(i) \,]\,##
That is, ##con(PA)## is true iff the predicate ##F(i)## is true for all natural numbers ##i##; equivalently, ##con(PA)## is false iff ##F(i)## is false for at least one natural number ##i##.

(3) Now here is where the main part comes in. Consider the following statement (the quantification can be thought of over natural numbers ##\mathbb{N}##):
##\forall i \,\,[ \,\, PA \,\, proves \,\, F(i) \,\, ]##
In a more formal manner, it seems that the above statement is usually symbolically written as:
##\forall i \,\,[ \,\, PA \,\, \vdash \,\, F(i) \,\, ]##

Now there are two further important points here. Let me mention them separately:
(a) The first point is that the statement written just above is itself expressible in ##PA##. To put it slightly differently, there exists some specific natural number ##m## such that ##reverse1(m)## exactly represents the assertion ##\forall i \,\,[ \,\, PA \,\, \vdash \,\, F(i) \,\,]##.
(b) The second point is that the above statement is not only expressible in ##PA## but also provable in ##PA##. In other words, the statement ##reverse1(m)## is provable in ##PA## (that is, ##reverse1(m)## has a derivation in ##PA##).
note: Also take a look at the ##PA## derivation example that I mentioned in the beginning part of the post.

========================

NOTE:
I don't really understand the argument being made so the below might be a misrepresentation (but I don't know to what extent).

Now here is what it seems to be (based on my partial understanding). The author seems to give a ##PA## derivation of the statement I mentioned in point-(3) above (though I couldn't understand at all what "selector"/"verifier" mean). That part is technically correct.

The disagreement between the author and others seems to come in when the author suggests (or perhaps outright asserts?) that having a ##PA## derivation of the statement mentioned in point-(3) is equivalent to saying "##PA## proves its own consistency." Some other people, however, seem to be suggesting that it isn't.
In his support, the author seems to have given a number of qualitative arguments about how various statements can be "serialized" and then proven.


To be honest, I don't think I understand things well-enough here to evaluate arguments on either side, so I will just leave it at that.

========================

Lastly, related to ##PA## (but not necessarily to the topic on hand), I also suggest reading https://arxiv.org/abs/1807.05641.
 
Thank you for the analysis, SSequence. This can help me in attempting to make sense of Artemov's arguments. (Your analysis also confirms my reaction that his arguments are not written in a way that is clear to many readers.) Also, I read the Chow article, which I found very interesting (and, in contrast to the ones I cited, very clearly written), so many thanks for that as well.
 
Yeah, well, if we just want to look at it in a very high-level way, it seems that "perhaps" it can be summarized as what we might call a "part vs. whole" or "internal vs. external" issue.

What we have is a predicate ##F(i)## which can be expressed as a ##PA## formula. So, we will have ##F(0)##, ##F(1)##, ##F(2)##, ..... , ##F(i)##... as actual ##PA## statements. Further, the statement ##con(PA)## can also be represented as ##\forall i \,\, [\, F(i) \,]## [this will also be an actual ##PA## statement].

What is happening is something that is on two different levels which can be briefly described as follows:
(i) We can "externally" see/show that ##PA## proves all individual statements ##F(0)##, ##F(1)##, ##F(2)##, ....., ##F(i)##.... and so on.
(ii) Not only is the previous point true, but in fact ##PA## itself can see point-(i). That is, ##PA## itself can show that ##PA## proves all the individual statements ##F(0)##, ##F(1)##, ##F(2)##, ....., ##F(i)##... and so on.

However, despite the points above, ##PA## still just can't prove the statement ##\forall i \,\, [\, F(i) \,]## [due to the second incompleteness theorem]. All of this does look fairly tricky to comprehend, tbh. Nevertheless, it seems that one can reasonably say (based on the above) that ##PA## does come close to showing itself consistent; the point of difference is whether it really does so or not.
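Writing ##Prov_{PA}(\cdot)## for the formalized provability predicate (my notation for this summary, not necessarily the author's), the gap between the two levels can be displayed side by side:

##PA \,\, \vdash \,\, \forall i \,\,[\,\, Prov_{PA}(F(i)) \,\,]## (this is point-(ii)),

versus

##PA \,\, \nvdash \,\, \forall i \,\,[\,\, F(i) \,\,]## (the second incompleteness theorem, assuming ##PA## is consistent).

Passing from the first line to the second would require something like an ω-rule (inferring ##\forall i \,[\, F(i) \,]## from the infinitely many individual instances), and that is not a rule of ##PA##.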
 
