Was there any need, utility, or aim for Cantor's theory?

In summary: Cantor's work, in general, showed that things which were assumed to be simple were not so simple. His examples tended to show bizarre behavior that was counter-intuitive.
  • #2
What exactly do you mean by Cantor's theory? His theory of infinities and cardinality?
 
  • #3
dextercioby said:
What exactly do you mean by Cantor's theory? His theory of infinities and cardinality?
His theory of defining infinite set, and distinguishing sets as countable, uncountable, etc.
 
  • #4
If "problems" mean theoretical misconceptions, then yes, it cleared a lot up. If it means that some engineering problem was solved a different way, then probably no.
 
  • #5
This article talks about Cantor's life and mathematics:

https://www.storyofmathematics.com/19th_cantor.html

We probably will never know exactly why he worked on infinities. However, everyone who has ever studied higher math will tell you infinities are taught, infinities are present, and infinities frustrate you as you work your problems.

In Cantor's case, he didn't like how Galileo "dodged" the issue of infinities, so he set out to investigate further, showing that there are different kinds of infinities and that there are rules and a new kind of math for them.
 
  • #6
FactChecker said:
If "problems" mean theoretical misconceptions, then yes, it cleared a lot up.
What theoretical misconceptions were cleared? Or can you give any source where I can know more on this?

Thank you :smile:
 
  • #7
Cantor080 said:
What theoretical misconceptions were cleared? Or can you give any source where I can know more on this?

Thank you :smile:
It is not that easy to extract the successes of one individual mathematician from an entire period where fundamental insights took place and which was one of the most productive in mathematical history. I have a book about the history of mathematics (1700-1900) by Jean Dieudonné that has no fewer than ##22## references to Georg Cantor. Unfortunately I don't know of an English translation of the book, nor do I have the patience to extract all ##22## quotations for you, nor to do the research in general. I suggest looking for a similarly good book about the history of mathematics of the 19th and 20th century, or simply studying the publications of G. Cantor.
 
  • #8
You could well ask this question about any topic in abstract mathematics, precisely because it is "abstracted" away from practical problems. Cantor gave us the tools to define infinity and showed us how to do rigorous thinking about it. That's a huge advance. Though the real universe is not infinite, there are many things in mathematics that are (for instance, the list of numbers 1, 2, 3, ...). The properties of uncountable sets and countable sets are completely different, and Cantor showed us how to tell which is which and how to figure out their differences.

Now, as it turns out, abstract mathematics does have a way of helping us out years later in the real world. I'm sure reasoning about infinities is no exception, but I can't point you to specifics. I suggest a Google search with key words like "practical consequences of Cantor's theory".
 
  • #9
Cantor's work, in general, showed that things which were assumed to be simple were not so simple. His examples tended to show bizarre behavior that was counter-intuitive.
 
  • #10
Take my words with a grain of salt.

One thing is that you can just study it for its own sake, as a rich area of study.

It is true though that much of usual maths can be done without worrying about it. As I posted in a very recent thread:
One thing is that Cantor's notion of the infinite doesn't seem to be required to do much of "usual" mathematics. Apparently, there has been much work on this. Obviously trying to pursue/understand the exact detail (which I don't know very well) requires a time commitment ... (not to mention that there are a good number of variations on this). But still, I can include a number of pointers if someone is interested.

For example, it seems to be generally favoured (apparently with good reasons) that FLT is provable in PA. You can search for it and you will find a good number of links.
However, one thing that perhaps should be mentioned in addition is that there is the issue of "standardisation". Without the "standardised" version of maths, the problem becomes which version to use or favour. And the difference between versions would create too much confusion for someone who just wants to do mathematics. This is why, "usually", I think mathematicians continue with the "standard" version, and if someone were interested in this kind of foundational/revisionist matter, they would look into some variation(s).

An example perhaps might be undergraduate analysis? If one doesn't proceed in a fully standard way, you would possibly have several versions of it, with minor (or perhaps more than minor) changes all around, all depending on the variation one is using.

=========================

There is one other valid point though. Does the study of the theory have ramifications for statements of an ordinary nature that only use ℕ? The answer is yes ... to some extent.

The most immediate example could be con(ZFC). While there is no proof of it currently, generally the confidence about it seems to be high (though certainly not universal). And perhaps very high for set-theorists? For example, the well-known set-theorist Hugh Woodin expressed confidence in one of his essays (I will have to search the ref. ... but I will mention the exact quote/essay if someone is interested) that there would (might?) be no contradiction found in thousands of years (I have forgotten the exact time mentioned, but it was a similar time scale).

Then there is the study of (large) cardinals (whose existence is not provable in ZFC) within this field. As is fairly well-known, Prof. Harvey Friedman has worked extensively on finding simple/natural/intrinsic statements that correspond to them.


Edit:

Also should add con(PA) (provable from ZFC)****.

But it doesn't seem like anything near the full-strength of set-theory comes in here ... since the informal proof (even though I haven't studied it) would be the one that really increases one's confidence about it ... and certainly doesn't make any reference to uncountable infinities.

**** [Very rough ... take it with a grain of salt]
My amateur feeling is that one could easily "emulate" the "standard model"*** of PA (with the corresponding axioms) inside ZFC. That is, the axioms of PA should also be satisfied within ZFC (over ω).
So what that would show is that ~con(PA)→~con(ZFC). Its contrapositive would give con(ZFC)→con(PA).

***See this link, for example, for discussion of models of arithmetic ... which is certainly at least a little easier than models of set-theory (for which, I don't have any good intuition or understanding tbh).
 
  • #11
Cantor080 said:
Was there any need, utility, or aim for which Cantor created his theory? Did Cantor's theory clear any of the problems which existed before?

People get interested in things their friends are thinking about. This link describes events leading up to Cantor's work on infinite sets: https://plato.stanford.edu/entries/settheory-early/
 
  • #12
The idea of countability can be really useful. For example, there are clearly only countably many real/complex numbers which are roots of integer polynomials (algebraic), and hence transcendental numbers must exist. Trying to show this otherwise by giving an explicit example of a transcendental number and showing it is such is much harder.
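Spelling the counting argument out a little (just the standard reasoning behind the claim above, nothing new): for each degree ##d## and bound ##H## there are only finitely many polynomials ##a_d x^d + \dots + a_1 x + a_0## with integer coefficients satisfying ##|a_i| \le H##, and each such polynomial has at most ##d## roots. So the algebraic numbers are a countable union of finite sets, hence countable, while ##\mathbb{R}## is uncountable by the diagonal argument. The difference ##\mathbb{R} \setminus \{\text{algebraic numbers}\}## is therefore non-empty (indeed uncountable), so transcendental numbers exist.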
 
  • #13
Actually, the set of integers and the set of rationals are both countably infinite, since you can put each in one-to-one correspondence with the set of whole numbers. The real numbers, however, are uncountably infinite. They are an infinite set that is a level above the countably infinite sets. In fact, you can't even count the real numbers between 0 and 1.
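To make the correspondences concrete, here is a small sketch (my own illustration; the function names are mine): one routine walks through the positive rationals in a single list, and the other carries out the diagonal construction that defeats any claimed list of reals between 0 and 1.

Python:
from fractions import Fraction

def rationals():
    """Enumerate the positive rationals along the diagonals p + q = 2, 3, 4, ...,
    skipping duplicates such as 2/4 -- showing that they can be counted."""
    seen = set()
    s = 2
    while True:
        for p in range(1, s):
            f = Fraction(p, s - p)
            if f not in seen:
                seen.add(f)
                yield f
        s += 1

def diagonal(decimals):
    """Given a purported list of reals in (0, 1), each written as a string of
    decimal digits (with at least as many digits as there are entries),
    return digits of a number differing from the n-th entry in its n-th digit,
    so it cannot appear anywhere in the list."""
    return "".join("5" if row[n] != "5" else "6" for n, row in enumerate(decimals))

gen = rationals()
print([str(next(gen)) for _ in range(7)])  # 1, 1/2, 2, 1/3, 3, 1/4, 2/3
print(diagonal(["141", "718", "236"]))     # "555" differs from every listed entry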
 
  • #14
SSequence said:
My amateur feeling is that one could easily "emulate" the "standard model" of PA (with the corresponding axioms) inside ZFC. That is, the axioms of PA should also be satisfied within ZFC (over ω).
So what that would show is that ~con(PA)→~con(ZFC). Its contrapositive would give con(ZFC)→con(PA).
Sorry for the late bump on this thread. I should have emphasized (since re-reading this, not emphasizing this distinction was a bit misleading) that intuitively this just shows why "con(ZFC) implies con(PA)" should be true ... and not much else. Trivial argument, but an argument nevertheless.

But, as I mentioned in the main part of post#10, the statement "ZFC proves con(PA)" is also true. I am a logic dummy so I don't understand precisely why ... @stevendaryl would know the precise reasoning though.
 
  • #15
SSequence said:
Sorry for the late bump on this thread. I should have emphasized (since re-reading this, not emphasizing this distinction was a bit misleading) that intuitively this just shows why "con(ZFC) implies con(PA)" should be true ... and not much else. Trivial argument, but an argument nevertheless.

But, as I mentioned in the main part of post#10, the statement "ZFC proves con(PA)" is also true. I am a logic dummy so I don't understand precisely why ... @stevendaryl would know the precise reasoning though.

There are two different types of proofs of Con(PA) that can be done within ZFC: the first is a semantic proof and the second is a syntactic proof.

The semantic proof works this way:
  1. You define a mapping from the domain of PA (natural numbers) to sets, so that each element of the domain corresponds to a set. A convenient mapping is this one: Let ##m(0) = \{\}##. Let ##m(n+1) = \{ m(0), m(1), ..., m(n) \}##.
  2. For every function symbol in the language (PA has ##+, *, S##, where ##S(x) = x+1##), you define a corresponding set-theoretic function.
  3. For every relation symbol in the language, you define a corresponding set. (PA doesn't have any relation symbols, other than ##=##, so this isn't necessary).
Now you have a mapping between statements of PA and statements of ZFC. To prove consistency, you need to show the following:
  • Every axiom of PA is true, interpreted as a statement of ZFC.
If every axiom is true, then every theorem is true, and so it is impossible to prove a false statement.

This approach is a "relative consistency" proof. What that means is that you are not actually proving that PA is consistent, but you're proving that if it is inconsistent, then ZFC is inconsistent. ZFC actually proves something stronger, which is that if you formalize the statement "PA is consistent" in set theory, it's actually provable. That involves a second level of coding. You need to get a "code" for statements of PA so that it's possible to talk about the set of all theorems of PA. Then in terms of the coding, you define a formula ##true(x)## meaning ##x## is a code for a true statement of PA. Then you prove again that all axioms of PA are true (that is, their codes satisfy the formula ##true(x)##) and you show that every rule of inference takes you from true statements to true statements.
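As a toy illustration of the coding in step 1 (my own sketch; ##m## and ##S## are as above, while the `add` helper is a hypothetical stand-in for the ##+## clause of the interpretation):

Python:
def m(n):
    """von Neumann coding of the natural number n:
    m(0) = {}, m(n+1) = {m(0), m(1), ..., m(n)}."""
    return frozenset(m(k) for k in range(n))

def succ(s):
    """Interpretation of the successor symbol S: x is sent to x union {x}."""
    return s | frozenset([s])

def add(a, b):
    """A possible set-level interpretation of '+' (hypothetical helper), defined
    by recursion on b: a + 0 = a and a + S(c) = S(a + c)."""
    if b == frozenset():
        return a
    pred_b = frozenset().union(*b)  # for a von Neumann numeral, union(n+1) = n
    return succ(add(a, pred_b))

# The coding commutes with successor and addition -- the kind of fact used when
# checking that the axioms of PA come out true under the interpretation.
assert succ(m(3)) == m(4)
assert add(m(2), m(3)) == m(5)
print(len(m(4)))  # a von Neumann numeral n has exactly n elements, so this prints 4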

The syntactic proof:

The syntactic proof of consistency doesn't try to interpret PA as meaning anything at all. It just looks at the structure of proofs of PA. Proofs involve subproofs, and then combining the subproofs. For example, to prove a statement ##\forall x \phi(x)## by induction, you prove ##\phi(0)## and you prove ##\phi(n) \rightarrow \phi(n+1)##. So to show that the theory is consistent, you come up with an ordering on proofs. That is, a way to say that one proof is "harder" or "easier" than another proof. Then you show that for any proof of a contradiction, there must be an easier proof of a contradiction.
 
  • #16
I haven't read your post yet, but just a small addition. I think you can also emulate (surely?) Gerhard Gentzen's famous proof in ZFC.

Of course its purpose was quite (or rather completely) different!

======

It is possible that I have over-complicated things in the descriptions below

OK, as I understand, in the first part you are giving a sketch for:
ZFC proves "con(ZFC)→con(PA)"

One way I am finding it easier to visualise con(ZFC)→con(PA) is that:
if we denote all the valid formulas (syntactically correct, no free variables, etc.) of number theory as being divided into three parts (relative to a theory T):
T(0) --- T proves this formula to be false
T(1) --- T proves this formula to be true
T(2) --- this formula is independent of T

Now we replace T by "Z" and "PA" (for our appropriate systems). Then con(ZFC) simply means:
Z(0) ∩ Z(1) = ∅
So if we just show that:
PA(0) ⊆ Z(0)
PA(1) ⊆ Z(1)
then we are done.

Edit:
I think I may have added a redundancy with the condition PA(0) ⊆ Z(0). I will have to think about it a bit.

stevendaryl said:
This approach is a "relative consistency" proof. What that means is that you are not actually proving that PA is consistent, but you're proving that if it is inconsistent, then ZFC is inconsistent. ZFC actually proves something stronger, which is that if you formalize the statement "PA is consistent" in set theory, it's actually provable. That involves a second level of coding. You need to get a "code" for statements of PA so that it's possible to talk about the set of all theorems of PA. Then in terms of the coding, you define a formula ##true(x)## meaning ##x## is a code for a true statement of PA. Then you prove again that all axioms of PA are true (that is, their codes satisfy the formula ##true(x)##) and you show that every rule of inference takes you from true statements to true statements.
I don't fully understand what you mean. But I feel that what I describe below is probably something not too different(?) ... or maybe it is.
(I should add that what I have written below is really just a guess as to what you might have meant)

Now If we denote:
##TRUE=\{x | x \in PA(1) \}##

What we would want to show is that ##TRUE## doesn't contain any statements of the form ##p## and ~##p##. Now "probably" ... we can use an induction-like argument. For example, denote:
##S(n)## as all statements (in PA) obtained after ##n## steps (from application of all possible rules of inference ... at each step)

Then we just have to show that (using induction):
For all ##n \in \mathbb{N}##, ##S(n)## doesn't contain any statements of the form ##p## and ~##p##
 
  • #17
SSequence said:
I haven't read your post yet, but just a small addition. I think you can also emulate (surely?) Gerhard Gentzen's famous proof in ZFC.

Of course its purpose was quite (or rather completely) different!

That's what I meant by the syntactic proof.
 
  • #18
SSequence said:
I don't fully understand what you mean. But I feel that what I describe below is probably something not too different(?) ... or maybe it is.
(I should add that what I have written below is really just a guess as to what you might have meant)

Now If we denote:
##TRUE=\{x | x \in PA(1) \}##

The details are tedious, though not that complicated, conceptually.

We define a function ##m(\sigma, t)## which maps a sequence ##\sigma## of finite ordinals and a term ##t## of arithmetic into a finite ordinal.
  1. ##m(\sigma, x_j) = \sigma_j##. So the variable ##x_j## is mapped to the ##j^{th}## element of ##\sigma##. (##\sigma## is just a way of assigning values to variables)
  2. ##m(\sigma, 0) = \{\}##. ##0## is mapped to the empty set.
  3. ##m(\sigma, S(t)) = m(\sigma, t) \cup \{ m(\sigma, t) \}##. 2&3 define the mapping ##m(\sigma, 0) = \{\}##, ##m(\sigma, n+1) = \{ m(\sigma,0), m(\sigma,1), m(\sigma,2), ..., m(\sigma,n) \}##
  4. ##m(\sigma, t_1 + t_2) = ## whatever (I'm not going to bother to write it down)
  5. ##m(\sigma, t_1 \times t_2) = ## whatever
Just a point of notation: If ##\sigma## is a sequence of finite ordinals, and ##\alpha## is a finite ordinal, then ##\sigma[j \rightarrow \alpha]## is the sequence ##\sigma'## such that ##\sigma'_j = \alpha## and for ##j \neq k##, ##\sigma'_k = \sigma_k##. In other words, we're only changing the assignment to variable ##x_j##.

Now, we define a satisfaction set to be a set ##K## of pairs ##\langle \sigma, \phi \rangle## where ##\sigma## is again a sequence of finite ordinals, and where ##\phi## is a formula of arithmetic with the properties:
  1. If ##m(\sigma, t_1) = m(\sigma, t_2)##, then ##\langle \sigma, t_1 = t_2\rangle \in K##
  2. If for all ##\alpha##, ##\langle \sigma[j \rightarrow \alpha], \phi\rangle \in K## then ##\langle \sigma, \forall x_j \phi \rangle \in K##
  3. If there is an ##\alpha## such that ##\langle \sigma[j \rightarrow \alpha], \phi\rangle \in K## then ##\langle \sigma, \exists x_j \phi \rangle \in K##
  4. If ##\langle \sigma, \phi \rangle## is not in ##K##, then ##\langle \sigma, \neg \phi \rangle \in K##
  5. If ##\langle \sigma, \phi \rangle## is in ##K##, and ##\langle \sigma, \psi \rangle## is in ##K##, then ##\langle \sigma, \phi \wedge \psi \rangle \in K##
  6. If ##\langle \sigma, \phi \rangle## is in ##K## or ##\langle \sigma, \psi \rangle## is in ##K##, then ##\langle \sigma, \phi \vee \psi \rangle \in K##
Then we can say that a formula ##\phi## is true if for all satisfaction sets ##K##, and for all ##\sigma##, ##\langle \sigma, \phi \rangle \in K##.

Then you need to prove that it's impossible for a statement and its negation to both be true. That really is guaranteed by rule number 4.

Then you have to prove that every axiom of PA is true, and that the rules of inference preserve truth. That implies that every theorem of PA is true.
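As a concrete instance of checking one axiom (my own example): the PA axiom ##\forall x\, \neg(S(x) = 0)## comes out true under this interpretation, because ##m(\sigma, S(t)) = m(\sigma, t) \cup \{m(\sigma, t)\}## always has ##m(\sigma, t)## as an element and so can never be the empty set ##m(\sigma, 0) = \{\}##.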

What we would want to show is that ##TRUE## doesn't contain any statements of the form ##p## and ~##p##. Now "probably" ... we can use an induction-like argument. For example, denote:
##S(n)## as all statements (in PA) obtained after ##n## steps (from application of all possible rules of inference ... at each step)

Then we just have to show that (using induction):
For all ##n \in \mathbb{N}##, ##S(n)## doesn't contain any statements of the form ##p## and ~##p##

The semantic way of proving consistency does not use induction, but the Gentzen-style syntactic proof does. It's not induction on ##n##, though, it's an induction on the complexity of proofs, which is a much more complicated induction.
 
  • #19
A small correction from my prev. post. From post#16:
Now we replace T by "Z" and "PA" (for our appropriate systems). Then con(ZFC) simply means:
Z(0) ∩ Z(1) = ∅
It should be "con(ZFC) implies" instead of "con(ZFC) simply means".

=====

Thanks for the detailed response. I should admit though, I think I will need to learn some formal concepts better to be able to understand your post#18.

The qualitative reason I was asking was that, even though the details are a bit advanced, it is highly likely that proof-theoretic analysis yields a clear proof of con(PA). Because apparently, it is based upon placing formulas at unique positions (in a "reasonable way") below ε0 and using an induction argument. So it is very difficult for me to see how it could be unclear (given enough inspection).

On the other hand, the semantic proof you have mentioned, I don't know whether it can be made "completely" clear or not.

=====

Anyway, a brief point about the main topic ... I feel that perhaps proof-theoretic analysis is somewhat under-represented in these kinds of discussions. For example, to justify the consistency of some description of sets, there seem to be the following options:
i---- an empirical justification
ii---- give a justification in terms of how the picture/description makes sense
iii---- give a proof-theoretic analysis (which really should amount to a strict mathematical proof after enough inspection and filling-in of details)

The problem is that (i) is weak in my view. (ii) really varies from person to person. It is better than (i) for sure though. And we don't have (iii).

To give an idea of why (i) is weak, suppose we set up a reasonable specific statement equivalent to consistency, and then we write a highly optimised program corresponding to it. Even if we suppose the program was just of size 1000, say, the number of steps we would have to simulate (in the worst case) to be sure would be far too many to carry out in the (physical) universe.
And if we know for sure that we haven't even executed a tiny fraction of those steps, then even a probability argument based upon "purely" empirical justification seems unconvincing.

Though, to be fair, a lot of people seem to give a combination of (ii) and (i) for justification (rather than (i) alone).
 
  • #20
SSequence said:
The qualitative reason I was asking was that, even though the details are a bit advanced, it is highly likely that proof-theoretic analysis yields a clear proof of con(PA). Because apparently, it is based upon placing formulas at unique positions (in a "reasonable way") below ε0 and using an induction argument. So it is very difficult for me to see how it could be unclear (given enough inspection).

Well, it's very complicated, although no single step is very difficult, conceptually.

On the other hand, the semantic proof you have mentioned, I don't know whether it can be made "completely" clear or not.

I don't know, once you understand the details, it amounts to something pretty intuitive: A theory where all the theorems are true (under the right interpretation) can't possibly be inconsistent.

Anyway, a brief point about the main topic ... I feel that perhaps proof-theoretic analysis is somewhat under-represented in these kinds of discussions.

The proof-theoretic proof of consistency is a lot more work, and gives less insight. If you take PA and add another axiom, the result could be consistent, or it might not. But that additional axiom completely throws off the proof-theoretic consistency proof. Adapting the semantic proof to accommodate the new axiom is pretty straightforward (although difficult in a different way): just prove the new axiom using ZFC, which is more powerful than PA. Then it has to be consistent to add it (adding a true statement to a collection of true statements can't produce an inconsistent theory).

For example, to justify the consistency of some description of sets, there seem to be the following options:
i---- an empirical justification
ii---- give a justification in terms of how the picture/description makes sense
iii---- give a proof-theoretic analysis (which really should amount to a strict mathematical proof after enough inspection and filling-in of details)

Are you talking about giving a proof-theoretic proof of the consistency of set theory, itself? That's a completely unsolved problem, as far as I know. The kind of induction that is needed to prove the consistency of a theory requires induction on an ordinal that is beyond the ability of the theory itself to represent. Take PA for example. To represent an ordinal ##\alpha## within PA requires what's called an "ordinal notation", which is a way to code all the ordinals less than ##\alpha## as natural numbers in such a way that PA can define a relation ##R(x,y)## such that whenever ##\beta < \gamma < \alpha##, PA proves ##R(code(\beta), code(\gamma))## (where ##code(x)## means the natural number coding ##x##). PA can only represent in this sense ordinals less than ##\epsilon_0##.
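To make "ordinal notation" a bit more concrete, here is a rough sketch (my own, under the assumption that Cantor normal form below ##\epsilon_0## is the notation system in question) of how such ordinals can be coded as finite objects and compared; the comparison is the kind of computable relation ##R(x,y)## being talked about (here on nested tuples rather than on the natural numbers that would code them).

Python:
# An ordinal below epsilon_0 in Cantor normal form,
#   omega^e1 + omega^e2 + ... + omega^ek   with  e1 >= e2 >= ... >= ek,
# is coded as the tuple (e1, ..., ek) of its exponents, each exponent being
# a code of the same kind:  () codes 0,  ((),) codes 1,  (((),),) codes omega.

def less(a, b):
    """Decide a < b for two codes: compare exponents term by term;
    a sequence that runs out first (a proper prefix) is the smaller ordinal."""
    for x, y in zip(a, b):
        if less(x, y):
            return True
        if less(y, x):
            return False
    return len(a) < len(b)

def is_notation(a):
    """Check that the exponents are weakly decreasing (and recursively so),
    i.e. that the code really is in Cantor normal form."""
    return all(not less(a[i], a[i + 1]) for i in range(len(a) - 1)) and \
           all(is_notation(e) for e in a)

zero, one = (), ((),)
two            = (zero, zero)        # omega^0 + omega^0
omega          = (one,)              # omega^1
omega_plus_one = (one, zero)
omega_squared  = ((zero, zero),)     # omega^(omega^0 + omega^0)

assert less(two, omega) and less(omega, omega_plus_one) and less(omega_plus_one, omega_squared)
assert is_notation(omega_squared) and not is_notation((zero, one))  # omega^0 + omega^1 is not CNF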

In the case of set theory, the theory can represent ordinals much larger than ##\epsilon_0##. I'm not sure that there is a clear bound on the highest ordinal that can be represented. But to prove by induction that set theory is consistent, you'd have to do induction on an ordinal larger than any representable in set theory.

The proof-theoretic proof of consistency is very limited. I don't think it has been successfully applied to anything much more powerful than PA. (But I'm not sure).
 
  • #21
Cantor080 said:
His theory of defining infinite set, and distinguishing sets as countable, uncountable, etc.
I would say that the discovery of fractal geometry in the late 20th century was itself definitely an application of these concepts. I would argue, though, that the application of such mathematics to the real world hasn't come close to reaching its full potential, due to the inherent difficulty of the mathematics for those interested in applications, such as engineers and scientists outside the exact/physical sciences.

The Cantor set is itself an uncountable self-similar fractal with uniform scaling. In the field of dynamical systems and chaos theory these same concepts are used to characterize and classify strange attractors which occur widely in physics, applied mathematics and other sciences.

Moreover, for most fractals, which unlike the Cantor set display non-uniform scaling, a collection of more sophisticated mathematical techniques known as multifractal analysis is needed. These have already found applications in quantitative finance and theoretical economics; in fact, they were even discovered by mathematicians dabbling in those fields.

In fact, even the simplest nonlinear maps and their attractors display neither strict self-similarity nor uniform scaling, an example being the logistic map, which has a set of superstable ##2^n##-cycles; for ##n \to \infty## this set approaches a topological Cantor set. Our understanding of these objects would be much less clear without Cantor's early, prescient input.
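As a small illustration of the uniform-scaling point (my own sketch): after ##n## middle-thirds removals one is left with ##2^n## intervals of length ##3^{-n}##, which is exactly where the Cantor set's similarity dimension ##\log 2/\log 3 \approx 0.63## comes from.

Python:
from fractions import Fraction
from math import log

def cantor_stage(n):
    """Intervals remaining after n middle-thirds removals, as (left, right) pairs."""
    intervals = [(Fraction(0), Fraction(1))]
    for _ in range(n):
        nxt = []
        for a, b in intervals:
            third = (b - a) / 3
            nxt.append((a, a + third))   # keep the left third
            nxt.append((b - third, b))   # keep the right third
        intervals = nxt
    return intervals

stage = cantor_stage(4)
print(len(stage))                     # 2**4 = 16 intervals
print(stage[0], stage[-1])            # (0, 1/81) and (80/81, 1)
print(sum(b - a for a, b in stage))   # total length (2/3)**4 = 16/81, tending to 0
print(log(2) / log(3))                # similarity dimension ~ 0.6309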
 
  • #22
Also, the modern theory of measure is inconsistent without the countable/uncountable distinction. If you have a circle of area 1, and you poke a hole in it (remove one single point), then the area is still 1. Do that any countable number of times, and the area is still 1. In measure theory, removing any countable number of points from a set of points leaves the measure unchanged. So it immediately follows that all sets with nonzero measure must have uncountably many points.
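Spelled out (a standard computation, just making the claim above explicit): removing every rational from ##[0,1]## leaves ##\mu([0,1] \setminus \mathbb{Q}) = \mu([0,1]) - \sum_{q \in \mathbb{Q} \cap [0,1]} \mu(\{q\}) = 1 - 0 = 1## by countable additivity, so any set of positive measure must be uncountable.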
 
  • #23
A related question which really interests me is whether developments such as fractal geometry and measure theory would have occurred when they did without direct input from Cantor's ideas, i.e., were Cantor's ideas necessary for mathematicians to rigorously invent fractal geometry and measure theory?

If they were, then in my opinion Cantor's contributions to mathematics are not appreciated nearly well enough today; Cantor should be celebrated the way Newton (and Leibniz) are for discovering calculus and so jumpstarting the science of physics as we have known it since.
 
  • #24
Digression:
stevendaryl said:
Well, it's very complicated, although no single step is very difficult, conceptually.
Well to put it in a slightly different way. Suppose one wasn't completely convinced about the soundness of ZFC regarding arithmetic statements.
(Note that this is much weaker than taking a specific "negative" stance about soundness. A lack of very clear stance is enough.)

When I look at the syntactic proof, I can view it as completely separate from having anything to do with ZFC at all. I can just look at it on its own merit and decide whether the arguments can be justified or not. The issue is looking at the individual steps of the arguments from a revisionist perspective, so to speak. Quoting from this essay, for example:
Gödel's theorem does not actually say that the consistency of PA cannot be proved except in a system that is stronger than PA. It does say that Con(PA) cannot be proved in a system that is weaker than PA, in the sense of a system whose theorems are a subset of the theorems of PA—and therefore Hilbert's original program of proving statements such as Con(PA) or Con(ZFC) in a strictly weaker system such as PRA is doomed. However, the possibility remains open that one could prove Con(PA) in a system that is neither weaker nor stronger than PA, e.g., PRA together with an axiom (or axioms) that cannot be proved in PA but that we can examine on an individual basis, and whose legitimacy we can accept. This is exactly what Gerhard Gentzen accomplished back in the 1930s...

The author also describes an "easy" proof in section 2, but I have no idea how it relates to the semantic proof you mentioned (obviously because I don't understand either). My question roughly was to what extent you can weaken the axiomatic system while carrying over that sort of proof. It seems that in section 9 a very interesting comment is made (which seems to be, more or less, an answer to this ... I should add I have no idea as to "why" this is the answer):
1. If we believe that N must either have, or not have, every property expressible in the first-order language of arithmetic, then the straightforward set-theoretic proof should satisfy us that PA is consistent.
Personally, every such property having a truth value definitely is "satisfactory enough" to me. Honestly, if that's all the underpinning assumption is, then that is really far more satisfactory than I would have expected.

Slight digression: But indeed, asking why every such property must have a truth value is a reasonable point ... (a point that I have repeated perhaps more times than necessary on this forum). Basically my viewpoint (I should stress that this is just my viewpoint) is that the only definitive answer to such an objection can be to give (if possible) a sufficiently fast-growing function, while avoiding circularity (I have used this word informally here).
Perhaps, much more authoritatively, a very similar point is mentioned (towards the end of section 3):
Voevodsky noted in his talk that first-order properties of the natural numbers can be uncomputable. This means that if our plan is to react to a purported proof of P∧¬P by checking directly whether P or ¬P holds for the natural numbers, then we might be out of luck—we might not be able to figure out, in a finite amount of time, which of P and ¬P really holds of the natural numbers. In the absence of such a decision procedure, how confident can we really be that the natural numbers must either have the property or not? Maybe the alleged “property” is meaningless.

On Topic:
stevendaryl said:
The proof-theoretic proof of consistency is a lot more work, and gives less insight.

...

Are you talking about giving a proof-theoretic proof of the consistency of set theory, itself? That's a completely unsolved problem, as far as I know. The kind of induction that is needed to prove the consistency of a theory requires induction on an ordinal that is beyond the ability of the theory itself to represent. Take PA for example. To represent an ordinal ##\alpha## within PA requires what's called an "ordinal notation", which is a way to code all the ordinals less than ##\alpha## as natural numbers in such a way that PA can define a relation ##R(x,y)## such that whenever ##\beta < \gamma < \alpha##, PA proves ##R(code(\beta), code(\gamma))## (where ##code(x)## means the natural number coding ##x##). PA can only represent in this sense ordinals less than ##\epsilon_0##.

In the case of set theory, the theory can represent ordinals much larger than ##\epsilon_0##. I'm not sure that there is a clear bound on the highest ordinal that can be represented. But to prove by induction that set theory is consistent, you'd have to do induction on an ordinal larger than any representable in set theory.

The proof-theoretic proof of consistency is very limited. I don't think it has been successfully applied to anything much more powerful than PA. (But I'm not sure).
Yes, basically: I think we can define a predicate WO(x) (where x∈ℕ) which describes whether the program corresponding to index x (computing a function from ℕ² to {0,1}) describes a well-order (of ℕ, with any order-type) or not. Then we can define the smallest ordinal α such that "theory T can't prove any computable well-order relation with order-type α to be a well-order."
Though I am not fully sure whether this is the standard definition, if there is one (or whether there are multiple definitions?).

Anyway, two small points:
(1) If the theory were "really" correct about its answers to the predicate WO(x), then I think for sure the smallest such α can't equal ω1CK (obviously α>ω1CK is trivially ruled out). Because then we would have a trivial recipe for generating a computable relation for ω1CK (a contradiction).
(2) The smallest ordinal α should work because, given any computable β ... for any ##\beta_{small}## < β there is a trivial computable relation that we can write for ##\beta_{small}##, given the relation for β (and it seems quite reasonable to assume that any powerful-enough theory will prove that too, given that it proves the relation for β to represent a well-order).

======

But anyway, I think your comment about such a proof giving less insight might not be fully justified.
Just based on what I have read: usually, at the very least, such an analysis (possibly among a number of other things) also gives a collection of (recursive) increasing functions ##f_n## (n∈ω) such that the theory proves any given ##f_i## to be total, but not any function which eventually dominates all of these functions.

The totality of each such function should be expressible purely using quantifiers over natural numbers, I think. But the way this information is captured in the above paragraph seems interesting.
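To give a feel for the kind of functions meant (my own rough sketch, taking the finite levels of a fast-growing hierarchy as an assumed example; an actual proof-theoretic analysis of PA continues the hierarchy through ordinal notations below ε0):

Python:
def f(k, n):
    """Finite levels of a fast-growing hierarchy:
    f_0(n) = n + 1, and f_{k+1}(n) applies f_k to n a total of n times.
    Only the finite levels are sketched here; the hierarchy relevant to PA
    continues through ordinal notations below epsilon_0, and PA proves each
    fixed level to be total."""
    if k == 0:
        return n + 1
    result = n
    for _ in range(n):
        result = f(k - 1, result)
    return result

print(f(0, 5))   # 6
print(f(1, 5))   # 10  (adds 1 five times, i.e. doubling)
print(f(2, 5))   # 160 (doubles five times: 5 -> 10 -> 20 -> 40 -> 80 -> 160)

Already the next level, f(3, n), grows like a tower of exponentials, so it is not printed here.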

=====

You are right that it is an unsolved problem for standard set theory. That's why I said "we don't have (iii)" in post#19. Also, I think you are probably right that currently there is not a clear understanding (for standard set theory) on what the highest bounds may or may not be (but I might be wrong too ... I just remember reading something along these lines). Generally, one does expect it to be pretty difficult to put such bounds.

Finally, regarding the last comment, I think that may not be quite true. I am just basing this on the wiki page which seems to list about two dozen different systems.
Furthermore, one doesn't expect the wiki to list any partial results (also, these kinds of analyses might have applications outside of consistency, perhaps(?) ... but I don't really know).
 
  • #25
SSequence said:
Well to put it in a slightly different way. Suppose one wasn't completely convinced about the soundness of ZFC regarding arithmetic statements.
(Note that this is much weaker than taking a specific "negative" stance about soundness. A lack of very clear stance is enough.)

When I look at the syntactic proof, I can view it as completely separate from having anything to do with ZFC at all

Not quite. There is one step in a Gentzen-style proof, which is proving that a particular ordering of proofs is well-founded. ZFC is much better at that than PA.
 
  • #26
SSequence said:
Yes, basically: I think we can define a predicate WO(x) (where x∈ℕ) which describes whether the program corresponding to index x (computing a function from ℕ² to {0,1}) describes a well-order (of ℕ, with any order-type) or not. Then we can define the smallest ordinal α such that "theory T can't prove any computable well-order relation with order-type α to be a well-order."
Though I am not fully sure whether this is the standard definition, if there is one (or whether there are multiple definitions?).
Although the discussion has run its course, there is a small point that should be pointed out. If we look at the wiki definition:
"a well-order on a set S is a total order on S with the property that every non-empty of S has a least element in this ordering."

So, while the above definition would fit perfectly well when T can talk about "every non-empty subset of", it seems that it isn't straight-forward when that is not the case.

For example, quantification over ℕ (as in PA) really has to be insufficient to describe the property "WO(x)" that was described in the quote above. If it were sufficient, then the set containing the elements for which WO(x) is true would be an arithmetical set (which is known to be false).

So, if there is no explicit way to describe the property WO(x) for some T, then it seems the definition would definitely have to be different in that case (in one way or another).
 
  • #27
SSequence said:
Although the discussion has run its course, there is a small point that should be pointed out. If we look at the wiki definition:
"a well-order on a set S is a total order on S with the property that every non-empty of S has a least element in this ordering."

So, while the above definition would fit perfectly well when T can talk about "every non-empty subset of", it seems that it isn't straight-forward when that is not the case.

For example, quantification over ℕ (like in PA) really has to be insufficient to describe the property "WO(x)" that was described in the quote above. If that was true, then the set containing the elements for which WO(x) is true would be an arithmetic set (which is known to be false).

So, if there is no explicit way to describe the property WO(x) for some T, then it seems the definition would definitely have to be different in that case (in one way or another).

Right. Arithmetic cannot define whether a relation is well-ordered, or not. However, you can switch it around to the question of induction.

If ##R(x,y)## is a well-founded relation, then the following "induction schema" is true, for any formula ##\phi(x)##:

##(\forall y: (\forall x: R(x,y) \rightarrow \phi(x)) \rightarrow \phi(y)) \rightarrow \forall y: \phi(y)##

To see that this is valid:
  1. Assume that ##\forall y: (\forall x: R(x,y) \rightarrow \phi(x)) \rightarrow \phi(y)##.
  2. Let ##S## be the set of all ##y## such that ##\neg \phi(y)##
  3. Suppose ##S## is non-empty. Since ##R## is well-founded, there must be a "smallest" element of ##S##, ##y_0##. Smallest means that there is no ##y## in ##S## such that ##R(y,y_0)##
  4. So that means that if ##R(y,y_0)## holds, then ##y## is not in ##S##, which means that ##\phi(y)## holds (since ##S## is defined as those ys that don't satisfy ##\phi(y)##)
  5. So we conclude: ##\forall y: R(y,y_0) \rightarrow \phi(y)##
  6. But by the assumption in step 1, this implies ##\phi(y_0)##. Contradiction. So the assumption that ##S## was non-empty led to a contradiction, so ##S## is empty. So that means ##\forall y: \phi(y)##
So rather than talking about arithmetic proving that a relation ##R## is well-founded (it can't even state that), you can ask whether the ##R##-induction schema is provable for every formula ##\phi(y)##.
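For example, taking ##R(x,y)## to be the usual ##x < y## on ##\mathbb{N}##, the schema above is just the familiar strong (complete) induction schema, every instance of which PA does prove.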
 
  • #28
##(\forall y: (\forall x: R(x,y) \rightarrow \phi(x)) \rightarrow \phi(y)) \rightarrow \forall y: \phi(y)##

Well my intuition with orderings (except well-orders), well-founded relations etc. is very limited ... but I think I understand the gist of it.

So if we take ##R## to be a well-order relation (on ℕ), then this schema can really be thought of as a form of transfinite induction? So, for example, suppose we have some computable ##R##. There should always exist at least some such ##R## so that if ##R## represented an order-type <ε0, then PA would conclude ##\forall y: \phi(y)## (assuming the hypothesis to be true)? But if ##R## represented an order-type ≥ε0, then PA would never be able to conclude ##\forall y: \phi(y)## (once again, assuming the hypothesis to be true)?

(and similarly for other weaker theories perhaps)
 
  • #29
SSequence said:
##(\forall y: (\forall x: R(x,y) \rightarrow \phi(x)) \rightarrow \phi(y)) \rightarrow \forall y: \phi(y)##

Well my intuition with orderings (except well-orders), well-founded relations etc. is very limited ... but I think I understand the gist of it.

So if we take ##R## to be a well-order relation (on ℕ), then this schema can really be thought of as a form of transfinite induction?

It is transfinite induction.

So, for example suppose we have some computable ##R##. So there should always exist at least some ##R## so that if ##R## represented an order-type <ε0, then PA would conclude ##\forall y: \phi(y)## (assuming the hypothesis to be true)? But if ##R## represented an order-type ≥ε0, then PA would never be able to conclude ##\forall y: \phi(y)## (once again, assuming the hypothesis to be true)?

(and similarly for other weaker theories perhaps)

I would put it this way: For every order type less than ##\epsilon_0##, there is a computable ##R## defining a well-founded relation of this order type such that PA proves every instance of R-induction. For any order type greater than or equal to ##\epsilon_0##, there is no such ##R## (there is a computable ##R## of that order type, but PA doesn't prove the induction schema---meaning that it doesn't prove every instance; there is some ##\phi## that it fails to prove)
 
  • #30
I like this quote from Hilbert: "No one shall expel us from the paradise that Cantor has created," referring to set theory in general, I presume.
 

1. What is Cantor's theory?

Cantor's theory, also known as set theory, is a branch of mathematics that deals with the study of sets, which are collections of objects. It was developed by Georg Cantor in the late 19th century and has since become an important foundation for many areas of mathematics.

2. What was the need for Cantor's theory?

Cantor's theory grew out of his work on trigonometric series and out of questions about infinite collections that existing mathematics could not treat rigorously. The famous paradoxes of naive set theory, such as Russell's paradox, actually appeared after Cantor's work and were addressed by later axiomatizations; what Cantor supplied was a rigorous framework for comparing and reasoning about infinite sets.

3. What is the utility of Cantor's theory?

Cantor's theory has many practical applications in fields such as computer science, physics, and economics. It is used to model and analyze complex systems, and has also been used in the development of cryptography and data compression algorithms. Additionally, Cantor's theory has led to important developments in other areas of mathematics, such as topology and logic.

4. What was the aim of Cantor's theory?

The main aim of Cantor's theory was to provide a rigorous and consistent framework for understanding infinite sets and solving the paradoxes that arose in their study. Cantor also aimed to establish a foundation for mathematics that was free from contradictions and could be used to prove theorems and solve problems in various fields.

5. How did Cantor's theory impact mathematics?

Cantor's theory had a significant impact on mathematics, particularly in the areas of set theory, logic, and analysis. It provided a new understanding of infinite sets and their properties, and led to the development of transfinite arithmetic and to famous questions such as the continuum hypothesis. Cantor's theory also paved the way for other important developments in mathematics, such as the axiomatic method in set theory and the study of infinite-dimensional spaces.
