Demystifier said:
Here I would like to see interesting challenges for artificial intelligence -- challenges that seem very hard for AI and yet are in the spirit of what AI usually does.
Here is my proposal: Let us feed the machine with a big sample (containing, say, a million numbers) of prime numbers and a big sample of non-prime numbers, without feeding it with a general definition of the prime number. The machine must learn to recognize (not necessarily with 100% accuracy) prime numbers from non-prime numbers, for cases which were not present in the initial sample.
The challenge looks hard because the distribution of prime numbers looks rather random, yet not prohibitively hard because there is a strict simple rule which defines what is and what isn't a prime number.
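For concreteness, here is roughly what such an experiment might look like in practice -- a minimal sketch, assuming scikit-learn and sympy are available, with binary digits as one (arbitrary) choice of encoding:

# A minimal sketch of the proposed experiment. The library choices
# (scikit-learn, sympy) and the binary-digit encoding are illustrative
# assumptions, not part of the original proposal.
from sklearn.ensemble import RandomForestClassifier
from sympy import isprime  # used only to label the data

def bits(n, width=20):
    # someone has to decide how the numbers are encoded; here: binary digits
    return [(n >> i) & 1 for i in range(width)]

train = range(2, 100_000)
X = [bits(n) for n in train]
y = [isprime(n) for n in train]
clf = RandomForestClassifier(n_estimators=100).fit(X, y)

# evaluate on numbers absent from the training sample
test = range(100_000, 101_000)
preds = clf.predict([bits(n) for n in test])
accuracy = sum(p == isprime(n) for n, p in zip(test, preds)) / len(preds)
print(accuracy)  # such models tend to pick up shallow cues (e.g., the last
                 # bit, i.e., evenness) rather than primality itself

Notice how many decisions -- the encoding, the model, the labeling -- a human has to make before the machine "learns" anything.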
It's an interesting question, but it presupposes:
1) that there is some output from a computer which would mean "I understand what a prime number is," or even "I understand what an integer is";
2) that numbers (abstract mathematical objects, possibly infinite) can be "fed" into a machine without someone having to write a program, decide how the numbers are encoded, etc.;
3) that such a program exists -- without going into how it was written -- and has no prior knowledge of factorization built into it, into the programming language (e.g., the modulus operator; see the sketch below), or into the processor's instruction set.
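For what it's worth, the "strict simple rule" is ordinarily written with exactly the kind of built-in divisibility knowledge that point 3 excludes -- a minimal sketch:

def is_prime(n):
    # the textbook rule, stated as code
    if n < 2:               # by convention, 0 and 1 are not prime
        return False
    d = 2
    while d * d <= n:       # divisors need only be checked up to sqrt(n)
        if n % d == 0:      # the modulus operator: built-in divisibility knowledge
            return False
        d += 1
    return True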
If you are only going to try a finite number of cases, how can you ever be sure that the computer is detecting prime numbers, and not some other property those numbers happen to have in common? If you try N numbers and get the right answer every time, couldn't it fail on the very next number you tried?
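This is not a hypothetical worry. Fermat conjectured that every number of the form 2^(2^n) + 1 is prime after checking n = 0 through 4; Euler then showed the very next one is composite. A quick check (reusing is_prime from the sketch above):

# Fermat numbers: prime for n = 0..4, composite at n = 5.
for n in range(6):
    f = 2 ** (2 ** n) + 1
    print(n, f, is_prime(f))
# prints True for n = 0..4, then False for n = 5,
# since 4294967297 = 641 * 6700417 -- the pattern fails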
Here's a great rule for finding prime numbers: look them up in the table in the current CRC Handbook.
It works for a whole bunch of primes! :-)
If you ask a human mathematician, "what do these numbers have in common?", the answer you get may depend on his field of specialization: e.g., number theory, geometry, topology, etc. A historian might interpret the numbers as dates. In general, there is no single "right" answer. While it may seem like 500 prime numbers and 500 composite numbers admit only one possible answer, that's only because of our limitations as mathematicians and in imagination.
Even if the numbers are chosen at random, that doesn't necessarily mean they will not (by pure chance) have some other property in common. For example, the composite numbers could all be perfect squares -- in which case a reasonable answer is "this set of numbers is composed of perfect squares, and the other one isn't."
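To make that concrete -- a toy sample where two different hypotheses fit the data equally well (the numbers here are chosen by hand to make the point):

# class 1: primes; class 2: composites that happen to all be perfect squares
primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
composites = [k * k for k in range(2, 12)]   # 4, 9, 16, ..., 121

def is_square(n):
    r = int(n ** 0.5)
    return r * r == n

# Hypothesis A ("class 2 = composite") and Hypothesis B ("class 2 = perfect
# square") both classify this training sample perfectly...
print(all(not is_square(p) for p in primes))   # True
print(all(is_square(c) for c in composites))   # True
# ...yet they disagree on an unseen number like 6: composite, but not a square.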
If you try to pick cases where the numbers are disorganized, that counts as spoon-feeding the program with the desired answer.
Would the program classify the number 1 as prime or composite -- or would the programmer exempt it? 1 has no factors except itself. That 2 is the smallest prime is a convention based on convenience -- it avoids having to state an exception in every theorem involving prime numbers. But the program will have no concept of convenience or elegance, or even that there is such a thing as mathematics, mathematicians, or people -- because it has no general knowledge. The computer is just a calculating machine, and only knows what you chose to tell it.
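A toy illustration of why the exemption must be explicit: the informal rule "no divisors other than 1 and itself", taken literally, quietly admits 1.

def naive_is_prime(n):
    # "no divisors other than 1 and itself", taken at face value
    return all(n % d != 0 for d in range(2, n))

print(naive_is_prime(1))   # True -- there is nothing in range(2, 1) to test,
                           # so excluding 1 has to be coded in by hand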
The program you start with matters greatly. The human brain can learn to do math, play chess, compose music, or speak French. But a chess-playing program can't even learn tic-tac-toe. So far as I know, there is no general game-playing algorithm. There certainly is no general number-classifying algorithm. All AI programs are specialized for the task they perform. Then they "learn" (modify stored parameters) in order to improve their game.
From what I've seen of AI, more intelligence goes into writing the program than is ever displayed by the program. IBM spent 10 years and millions of dollars on Deep Blue: a purpose-built computer that filled a room. It took years to write the program, and then they spent more time feeding all of Kasparov's recorded games into it. All this to beat one middle-aged man at a board game.
Press release: "Caterpillar Company just announced that, after years and millions in investment, it has finally built a piece of heavy equipment that can move more dirt than one laborer with a shovel." :-)
Then IBM spent even more time and money to beat human players at a TV game show, doing even more research into past Jeopardy! categories to try to anticipate the topics. IBM staffers put a lot of thought into that program. That kind of AI is just "canned" human intelligence.
Back to prime numbers: even if the program gets it right in every case you try, has it "learned" something about math -- let alone learned what mathematicians mean by "prime number"? It may be that all this talk about "machine learning" and "machine intelligence" is merely metaphorical.
For example, it is possible to teach a dog to shake hands. But when the dog does this trick, it's not a greeting -- dogs greet each other by sniffing. More likely, it's a way of getting a reward. In describing the dog's behavior, "shaking hands" turns out to be a metaphor for "holding up its paw so the owner can shake it." (It doesn't even have hands!)
Would the program be learning about prime numbers, or merely raising its paw?
We care about prime numbers because of their usefulness and the theorems that have been proven about them. But the program will know nothing of this. It is like a person who has memorized a phrase in a foreign language without knowing what it means. If that's intelligence, it's a paltry kind.
If one chose instead to build a deduction engine, it is already known that any consistent axiom system powerful enough to state the theorems of arithmetic is undecidable and incomplete.
If there are mathematicians in the Alpha Centauri system, almost certainly they have the concept of a prime number. But that's because they are mathematicians, not calculators. To a mathematician, the natural numbers aren't strings of digits: they are a (countably) infinite set defined by Peano's Axioms. Mathematics is not about manipulating tokens.
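For reference, one standard statement of those axioms (S is the successor function; the last schema is induction):

\begin{align*}
&\text{(P1)} & 0 &\in \mathbb{N} \\
&\text{(P2)} & n \in \mathbb{N} &\implies S(n) \in \mathbb{N} \\
&\text{(P3)} & S(m) = S(n) &\implies m = n \\
&\text{(P4)} & S(n) &\neq 0 \\
&\text{(P5)} & \varphi(0) \wedge \forall n\,\big(\varphi(n) \to \varphi(S(n))\big) &\implies \forall n\,\varphi(n)
\end{align*}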
When an agency of the US government wanted to factor large integers efficiently, it had specialized hardware designed for that purpose. AI may be possible with neural networks, threshold elements, and new kinds of hardware we haven't dreamed up yet. But general-purpose computers are general only with regard to the usual kinds of applications: spreadsheets, word processing, etc. Computer architecture is nothing at all like brain architecture, nor is the underlying technology.
The AI technology horse is out in front of the computer science and neuroscience cart. When we can build a brain, we will be able to do AI. Until then, what passes for AI might be better called "artificial stupidity".
A hammer isn't stupid, and neither is a calculator: each does just what you tell it to do. But a car-driving program that slams into the back of a parked police car -- that is world-class stupidity!