Computational learnability: jump from finite to infinite

  • Thread starter nomadreid
  • Start date

nomadreid

Gold Member
Summary: Starting on the basics of computational learnability, I am missing the key intuition that allows results about finite processes to reference results from the infinite domain.

Snowball:
(a) Hazard brought to my attention a popular article https://towardsdatascience.com/five-machine-learning-paradoxes-that-will-change-the-way-you-think-about-data-e100be5620d7, which had several blatant errors, so
(b) I went to the sources https://www.nature.com/articles/s42256-018-0002-3#ref-CR25 and
(c) https://arxiv.org/abs/1711.05195
which discussed ideas that I had no idea about, so
(d) I ended up clicking wildly on Wikipedia.

I did not mean to get in this deep, and I have no intention of devoting much time to it: I just want the main ideas. It is therefore possible that my question is too vague, for which I apologize in advance.

The upshot (most clearly expressed in (c)) is that ZFC is not a sufficient basis for machine learning, which is not surprising; what is more curious is the use of the Continuum Hypothesis (CH). In attempting to see how the authors extend their results to be able to invoke the independence of CH, I find that I am missing a key bit of intuition. I would be grateful for an enlightening explanation of the following:

In the above sources, I find constant references to finite sets, to their maxima, to their cardinality, and to approximations of convergence. Approximations are the order of the day, with some precise results in the form of bounds. At first, the only references to infinity are in the limits, in the idea that the functions are over the real numbers (even though, as I understand it, computers only actually use rationals), or in the collection of all finite sets. Then various results are extended to aleph-null, aleph-one, and the cardinality of the reals, bringing the Continuum Hypothesis into play.

Knowing that one must be careful when extending results about the finite over to the infinite, and from approximations to exact results, I am trying to find the justification of this jump (if possible, without working through each and every equation in the papers). Could someone enlighten me (knowing that I am presently close to zero in the theory of machine learning, having only begun to read into this field)? I will take into account that any answer on this level is bound to be a simplification, but that is fine with me at this point.

[Moderator's note: Link removed which breached copyright.]
 

berkeman

Mentor
Thread is in Moderation temporarily while dealing with some copyright issues...

Reopened after removing a link which violated copyrights.
 
pat8126

If you want to know more about computer representations of real numbers, check out the Wikipedia article on floating-point arithmetic.

 

nomadreid

Gold Member
Thank you, pat8126. The problem remains: floating-point arithmetic is an arithmetic of rational numbers used as approximations to reals, and some systems reference a few algebraic numbers as fixed constants, i.e., countable sets; in fact, further constraints keep the sets finite. The infinite, on the other hand, seems to be dealt with only by humans. If we are to subscribe to the Church-Turing thesis, then either computers should not be constrained to deal only with finite sets, or humans should be.
 

jbriggs444

Science Advisor
Homework Helper
nomadreid said: "…then either computers should not be constrained to deal only with finite sets, or humans should be."
Humans are constrained to deal with finite descriptions of sets, just as computers are.

"##\infty##" is a finite string.
 
pat8126

nomadreid said: "…The infinite, on the other hand, seems to be dealt with by humans. If we are to subscribe to the Church-Turing thesis, then either computers should not be constrained to deal only with finite sets, or humans should be."
The concept of infinity only suggests a pattern that potentially repeats forever, because no apparent reason exists for it to halt in the future.

If the universe were finite, the actual stopping point of a sequential pattern of numbers would be a rational number that no longer transforms under the rules of the previous infinite function.

If the universe actually continued forever, the best humans could do, as jbriggs444 mentioned, would be to round away the irrational values to a meaningful but inherently arbitrary level of precision.
 

.Scott

Homework Helper
nomadreid said: "The problem remains: floating point arithmetic is an arithmetic of the rational numbers as approximations to reals…"
As @jbriggs444 suggested, there are many ways to represent a number in a computer. In fact, any number that you can describe in a forum post will be a number stored in computers - because those are what we are using to communicate right now.

Floating point numbers can store some rational numbers precisely and some not-so-precisely. They will never get 1/3 perfect - unless you do something extraordinary such as implementing base-3 floating point arithmetic.
So, more precisely: "Floating point arithmetic is an arithmetic of a subset of the rational numbers, used as approximations to other rational numbers and to reals."
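The point about 1/3 can be made concrete with Python's standard library. This is my own illustrative sketch, not from the post: `Fraction` can expose the exact rational value that a binary double actually stores.

```python
from fractions import Fraction

# The double nearest to 1/3 is not exactly 1/3. Converting the float to a
# Fraction recovers the precise rational value the hardware stores, which
# necessarily has a power-of-two denominator.
third = 1.0 / 3.0
exact = Fraction(third)          # the exact rational the double represents
print(exact == Fraction(1, 3))   # False: the stored value only approximates 1/3
print(exact.denominator)         # a power of two, so 1/3 can never be hit exactly
```

The denominator check shows why no binary format can represent 1/3: its reduced denominator is 3, which is not a power of two.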

In a Turing test, the computer would not be constrained to floating point arithmetic. In fact, it should probably avoid using such skills since they would be a dead give-away to its non-human nature.
 

nomadreid

Gold Member
My thanks to jbriggs444, pat8126, and .Scott.
I guess the key point is that, if we think of infinity or infinitesimals as expressed by finite strings of syntactic symbols, then the semantic entity (the model or interpretation) that satisfies such a string is either not itself actually infinite or not comprehensible. Put another way: when we point to infinity, we are either pointing at another symbol or pointing at a blank spot. Even shorter: infinity is just a blotch on paper.
 
nomadreid said: "…we can point to infinity but we are either pointing at another symbol or at a blank spot. Even shorter: infinity is just a blotch on paper."
I concur.
 
Mark44
pat8126 said: "The concept of infinity only suggests a pattern that potentially repeats forever, because no apparent reason exists for it to halt in the future."
I'm not sure that thinking about this in terms of patterns is useful. Speaking of real numbers, the decimal representation of 1/3 is 0.333..., with the pattern repeating endlessly; yet that infinitely long representation stands for a perfectly ordinary rational number. The decimal representation of ##\pi## is also infinitely long, but there is no recognizable pattern, even if we go out several billion decimal digits.

Every rational number has a finite-length pattern that repeats in its decimal representation. No irrational number exhibits any such pattern.
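The rational-number claim can be checked by mimicking long division. A minimal Python sketch of my own: long division must eventually revisit a remainder (there are only finitely many), and that is exactly when the digits start to repeat.

```python
def repetend(numerator: int, denominator: int) -> str:
    """Return the repeating block in the decimal expansion of
    numerator/denominator, or '' if the expansion terminates.

    Long division revisits a remainder exactly when the digits begin to
    repeat, so we record where each remainder first appeared.
    """
    seen = {}                     # remainder -> index where it first occurred
    digits = []
    r = numerator % denominator
    while r != 0 and r not in seen:
        seen[r] = len(digits)
        r *= 10
        digits.append(str(r // denominator))
        r %= denominator
    if r == 0:
        return ""                 # terminating decimal, e.g. 1/8 = 0.125
    return "".join(digits[seen[r]:])

print(repetend(1, 7))   # 142857
print(repetend(1, 3))   # 3
print(repetend(1, 8))   # (empty string: 0.125 terminates)
```

Because there are at most `denominator - 1` distinct nonzero remainders, the repeating block of any rational is finite, which is the statement above.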

.Scott said: "Floating point numbers can store some rational numbers precisely and some not-so-precisely. They will never get 1/3 perfect"
Which we would expect, since the decimal representation of 1/3 repeats endlessly. More surprising is that fractions such as 1/5 and 1/10, whose decimal representations are 0.2 and 0.1, are also not stored precisely on computers that use binary fraction representations. For example, adding 0.1 to itself ten times comes out a little bit less than 1 in any programming language that uses the usual floating-point arithmetic, i.e., the IEEE Standard for Floating-Point Arithmetic, IEEE 754.
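That rounding behavior is easy to observe. A quick Python check of my own, using only the standard library:

```python
import math

# Ten copies of 0.1 summed left to right fall just short of 1, because the
# double nearest to 0.1 is slightly more than 1/10 and the intermediate
# rounding errors do not cancel.
s = 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1
print(s)         # 0.9999999999999999
print(s == 1.0)  # False

# math.fsum tracks the lost low-order bits and returns the correctly
# rounded sum of the exact values.
print(math.fsum([0.1] * 10))  # 1.0
```

`math.fsum` is the standard-library remedy when many small rounding errors would otherwise accumulate.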
 

.Scott

Homework Helper
Mark44 said: "…fractions such as 1/5 and 1/10 … are also not stored precisely on computers using binary fraction representations."
To be precise, the common floating point formats are IEEE-754 binary32 and binary64.
As their names suggest, they are binary implementations, not decimal. When using binary numbers with bits to the right of the binary point, denominators that are not powers of two produce repeating bit patterns. So although 1/5 and 1/10 do not produce repeating decimals, they do produce repeating binaries. More specifically, working entirely in binary: 1/101 = 0.001100110011...
 

jbriggs444

Science Advisor
Homework Helper
Mark44 said: "The decimal representation of ##\pi## is infinitely long, but there is no recognizable pattern"
As @Mark44 surely knows, much depends on what sorts of "patterns" one will accept. The decimal representation of pi is strongly patterned in the sense of Chaitin randomness. There are finite algorithms which can succeed in generating any desired digit.

##\pi## is believed to be free of patterns in the sense that every possible finite sequence of decimal digits occurs in the decimal expansion of pi with the same frequency as every other sequence of that length.
 
Mark44
##\pi## is believed to be free of patterns in the sense that every possible finite sequence of decimal digits occurs in the decimal expansion of pi with the same frequency as every other sequence of that length.
That was the sense I meant in my response.
 
pat8126

Take for instance the irrational number represented by 22 divided by 7.

The "pattern" is continuing to divide by 7, to any arbitrary level of precision.
 
Mark44
pat8126 said: "Take for instance the irrational number represented by 22 divided by 7."
##\frac {22}7## is very much a rational number, being the ratio of two integers. It's not a coincidence that the two words are related. An irrational number cannot be represented as the quotient of two integers; i.e., is not a ratio.
pat8126 said:
The "pattern" is continuing to divide by 7, to any arbitrary level of precision.
The "pattern," as it is usually understood, is the repeating sequence 142857 in the decimal representation, not that we are continuing to divide by 7.
 
Mark44
nomadreid said: "…we can point to infinity but we are either pointing at another symbol or at a blank spot. Even shorter: infinity is just a blotch on paper."
Not sure what you're getting at here. As @jbriggs444 said earlier, '"##\infty##" is a finite string.' We can point to this symbol and get the idea that it represents a number larger than any number we could possibly think of. I don't see the problem you raise with the match between the symbol (a blotch on paper) and what it represents.
 

.Scott

Homework Helper
Mark44 said: '"##\infty##" is a finite string.' We can point to this symbol and get the idea that it represents a number larger than any number we could possibly think of.
Some caution here: infinities are not ordinary numbers. Calling them "larger numbers" is a little sketchy; for example, you cannot put them into an equation as you would with common numbers.
But short of that, we can define a function ##G(0) = 1##; for ##n > 0##, ##G(n) = 10^{G(n-1)}##. Then consider ##G(G(G(100)))##.
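The tower function above is easy to write down and quickly impossible to evaluate, which is rather the point. A sketch of my own:

```python
def G(n: int) -> int:
    """The tower function from the post: G(0) = 1, G(n) = 10 ** G(n - 1)."""
    return 1 if n == 0 else 10 ** G(n - 1)

print(G(0))  # 1
print(G(1))  # 10
print(G(2))  # 10000000000  (i.e., 10^10)
# G(3) = 10^(10^10) already has ten billion decimal digits, and G(G(G(100)))
# is far beyond anything any computer could write down -- yet "G(G(G(100)))"
# is itself just a short finite string.
```

The definition fits on one line while its values outrun every physical representation: a concrete instance of finite descriptions pointing at inaccessibly large objects.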
 
