# Challenges for artificial intelligence


## Main Question or Discussion Point

Here I would like to collect interesting challenges for artificial intelligence: problems that seem very hard for AI and yet are in the spirit of what AI usually does.

Here is my proposal: let us feed the machine a big sample (containing, say, a million numbers) of prime numbers and a big sample of non-prime numbers, without feeding it a general definition of a prime number. The machine must learn to distinguish (not necessarily with 100% accuracy) prime numbers from non-prime numbers, for cases that were not present in the initial sample.

The challenge looks hard because the distribution of prime numbers looks rather random, yet not prohibitively hard because there is a strict, simple rule that defines what is and isn't a prime number.
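A scaled-down sketch of the proposed experiment in Python (my choice of language; the post names none): a balanced labelled sample of primes and composites, plus a deliberately crude "learned" rule based on the last decimal digit, to show how much a pattern-matcher can pick up without ever being told the definition. The sample size and the specific rule are illustrative assumptions, not from the post.

```python
def is_prime(n):
    # Trial division: the "strict simple rule" the machine is never told.
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

# Balanced labelled sample, scaled down from the post's million numbers.
primes = [n for n in range(2, 20000) if is_prime(n)]
composites = [n for n in range(4, 20000) if not is_prime(n)][:len(primes)]

def predict_prime(n):
    # One feature a learner could plausibly extract from the sample alone:
    # every prime above 5 ends in 1, 3, 7 or 9.
    return n % 10 in (1, 3, 7, 9)

hits = (sum(predict_prime(p) for p in primes)
        + sum(not predict_prime(c) for c in composites))
accuracy = hits / (2 * len(primes))
```

The rule scores well above the 50% a coin flip would get on a balanced sample, yet it has learned nothing about divisibility; that gap between surface pattern and the underlying rule is exactly what makes the challenge interesting.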


@Demystifier, here is an article that may be of interest.

http://cs.indstate.edu/~legri/prime06.pdf
S**t, it seems that my challenge was too easy. How about the following one instead? Fed the first million prime numbers, the machine must learn how to continue the series, i.e. to find the next 100 primes.

But the machine must not be optimized for prime numbers in advance. If I feed it the Fibonacci sequence instead, it must learn how to continue that as well.

Or what if I feed it the first million digits of pi? Could it learn to continue that series? It seems too hard to me.
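One way to make "learn to continue the series" concrete, sketched in Python (the approach and tolerance are my assumptions): hypothesize a second-order linear recurrence, fit it to the first few terms, and check the fit against the rest. This recovers the Fibonacci rule immediately, but, as one would expect, it finds no such rule for the primes, and it would fail on the digits of pi for the same reason.

```python
def fit_linear_recurrence(seq):
    """Guess seq[n] = a*seq[n-1] + b*seq[n-2] from the first four terms,
    then verify the guess on the whole sequence; return (a, b) or None."""
    x1, x2, x3, x4 = seq[:4]
    det = x2 * x2 - x1 * x3
    if det == 0:
        return None
    a = (x3 * x2 - x4 * x1) / det
    b = (x2 * x4 - x3 * x3) / det
    for n in range(2, len(seq)):
        if abs(a * seq[n - 1] + b * seq[n - 2] - seq[n]) > 1e-9:
            return None  # the hypothesized rule does not explain the data
    return a, b

fib = [1, 1]
while len(fib) < 30:
    fib.append(fib[-1] + fib[-2])

primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43]
```

A learner restricted to this hypothesis class "understands" Fibonacci perfectly and the primes not at all, which is the crux of the challenge: the machine would have to invent a richer hypothesis class on its own.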

Lots of those problems unfortunately require some deeper understanding that simply isn't in the numbers. You couldn't feed anybody the first million digits of pi and expect them to be able to continue. It's the fundamental understanding of what pi is that is disjoint from the numbers themselves.

The problem with these types of challenges for AI is that AI is only good at things it can practice and that are similar to what it has practiced. I think that's the biggest challenge for AI right now: not the learning algorithm itself, but figuring out how to teach the learning algorithm effectively. Right now training can be slow.

I also wanted to add that discovering extremely complex stuff is not necessarily an indication of intelligence or problem solving; it also takes a wealth of background knowledge. I think a better indication of intelligence is flexibility with seemingly simple things. I'd be way more impressed with an AI that could finish Portal or Zelda or any number of other puzzle games without knowing them... admit it, you've never beaten an entire puzzle game without getting stuck a few times. I certainly haven't.

If we take the assumption that eventually some human will figure out the pattern to primes, it is likely that someone smarter than that has already existed. Those types of breakthroughs tend to be the result of some brilliant flash that comes from seemingly nowhere, like Einstein wondering what it'd be like to ride alongside a light beam. He wasn't necessarily smarter than any other physicist at the time.

coolul007
A machine with true AI will respond occasionally with "I forgot."

.Scott
> A machine with true AI will respond occasionally with "I forgot."
Despite the name, "true AI" isn't an attempt to emulate human thought. In most practical cases, it's statistics.

Dr. Courtney
> If we take the assumption that eventually some human will figure out the pattern to primes, it is likely that someone smarter than that has already existed. Those types of breakthroughs tend to be the result of some brilliant flash that comes from seemingly nowhere, like Einstein wondering what it'd be like to ride alongside a light beam. He wasn't necessarily smarter than any other physicist at the time.
I doubt these assumptions. Given that a significant percentage of all humans who ever lived are alive _right now_, it is not unrealistic to think that the smartest person ever will live in the next few hundred years, if near-exponential population growth continues.

Further, it's not just raw intelligence; it is raw intelligence PLUS sufficient education to be aware of the interesting problems, with enough tools to play with the blocks in ways that may be successful. Regardless of the IQ cutoff one picks (say 2, 3, or 4 standard deviations above the mean), odds are pretty good that there will be more such people, AND that more of them will have the education and tools to offer new insights.

As you pointed out, it's not that Einstein was the absolute smartest; it's that he was smart enough AND he played with the blocks in just the right way first.

anorlunda
Wouldn't it be a better and simpler test to make a machine that argues in litigation? I mean taking a lawyer's job.

I think social skills, rather than math or science skills, are the bigger challenge, and the ones that threaten more jobs. Rather than chess playing as a model, I'm thinking of that old program Eliza as an early example.

> Here I would like to collect interesting challenges for artificial intelligence: problems that seem very hard for AI and yet are in the spirit of what AI usually does.
>
> Here is my proposal: let us feed the machine a big sample (containing, say, a million numbers) of prime numbers and a big sample of non-prime numbers, without feeding it a general definition of a prime number. The machine must learn to distinguish (not necessarily with 100% accuracy) prime numbers from non-prime numbers, for cases that were not present in the initial sample.
>
> The challenge looks hard because the distribution of prime numbers looks rather random, yet not prohibitively hard because there is a strict, simple rule that defines what is and isn't a prime number.
It's an interesting question, but it presupposes:

1) that there is some output from a computer which would mean "I understand what a prime number is", or even "I understand what an integer is";
2) that numbers (abstract mathematical objects, possibly infinite) can be "fed" into a machine without someone having to write a program, decide how the numbers are encoded, etc.;
3) that such a program exists -- without going into how it was written -- and has no prior knowledge of factorization written into it, or into the programming language (e.g., a modulus operator), or into the processor's instruction set.

If you are only going to try a finite number of cases, how can you ever be sure that the computer is detecting prime numbers, and not some other property those numbers happen to have in common? If you try N numbers and get the right answer every time, couldn't it fail on the very next number you tried?

Here's a great rule for finding prime numbers: look them up in the table in the current CRC Handbook.
It works for a whole bunch of primes! :-)
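The "look it up in the table" joke is worth taking literally, because it is exactly what a purely memorizing learner does. A minimal Python sketch (the cutoff of 1000 and the guess-composite fallback are my illustrative assumptions): perfect on every number it has seen, useless on the very next prime.

```python
def is_prime(n):
    # Ground truth by trial division.
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

# "Training": memorize the answer for every number in the sample.
table = {n: is_prime(n) for n in range(2, 1000)}

def table_rule(n):
    # 100% accurate on the training set; on anything unseen it can only guess.
    return table.get(n, False)

train_accuracy = sum(table_rule(n) == is_prime(n)
                     for n in range(2, 1000)) / len(table)
```

Training accuracy is exactly 1.0, and the rule still fails on the first prime past the table, which is the point of the objection: finite agreement proves nothing about the rule being learned.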

If you ask a human mathematician, "what do these numbers have in common?", the answer you get may depend on his field of specialization: e.g., number theory, geometry, topology, etc. A historian might interpret the numbers as dates. In general, there is no single "right" answer. While it may seem like 500 prime numbers and 500 composite numbers admit only one possible answer, that's only because of our limitations as mathematicians and in imagination.

If the numbers are chosen at random, that doesn't mean they will not (by pure chance) have some other property in common. For example, the composite numbers could all be perfect squares -- in which case a reasonable answer is "this set of numbers is composed of perfect squares, and the other one isn't." If you try to pick cases where the numbers are disorganized, that counts as spoon-feeding the program.

Would the program classify the number 1 as prime or composite -- or would the programmer exempt it? 1 has no factors except itself. That 2 is the smallest prime is a convention based on convenience -- it avoids having to state an exception for every theorem involving prime numbers. But the program will have no concept of convenience, or elegance, or even that there is such a thing as mathematics, mathematicians, or people -- because it has no general knowledge. The computer is just a calculating machine, and only knows what you chose to tell it.

The program you start with matters greatly. The human brain can learn to do math, play chess, compose music, or speak French. But a chess-playing program can't even learn tic-tac-toe. So far as I know, there is no general game-playing algorithm. There certainly is no general number-classifying algorithm. All AI programs are specialized for the task they perform. Then they "learn" (modify stored parameters) in order to improve their game.

From what I've seen of AI, more intelligence goes into writing the program than is ever displayed by the program. IBM spent 10 years and millions of dollars on Deep Blue: a purpose-built computer that filled a room. It took years to write the program, then they spent more time feeding all of Kasparov's recorded games into it. All this to beat one middle-aged man at a board game.

Press release: "Caterpillar Company just announced that, after years and millions in investment, it has finally built a piece of heavy equipment that can move more dirt than one laborer with a shovel." :-)

Then IBM spent even more time and money to beat human players at a TV game show, doing even more research into past Jeopardy categories, to try to guess the topics. IBM staffers put a lot of thought into that program. That kind of AI is just "canned" human intelligence.

Back to prime numbers: even if the program gets it right in every case you try, has it "learned" something about math -- let alone learned what mathematicians mean by "prime number"? It may be that all this talk about "machine learning" and "machine intelligence" is merely metaphorical.

For example, it is possible to teach a dog to shake hands. But when the dog does this trick, it's not a greeting -- dogs greet each other by sniffing. More likely, it's a way of getting a reward. In describing the dog's behavior, "shaking hands" turns out to be a metaphor for "holding up its paw so the owner can shake it." (It doesn't even have hands!)

Would the program be learning about prime numbers, or merely raising its paw?

We care about prime numbers because of their usefulness and the theorems that have been proven about them. But the program will know nothing of this. It is like a person who has memorized a phrase in a foreign language without knowing what it means. If that's intelligence, it's a paltry kind.

If one chose instead to build a deduction engine, it is already known that any axiom system powerful enough to state theorems of arithmetic is undecidable and incomplete.

If there are mathematicians in the Alpha Centauri system, almost certainly they have the concept of a prime number. But that's because they are mathematicians, not calculators. To a mathematician, the natural numbers aren't strings of digits: they are a (countably) infinite set defined by Peano's axioms. Mathematics is not about manipulating tokens.

When an agency of the US government wanted to factor large integers efficiently, it had specialized hardware designed for that purpose. AI may be possible with neural networks, threshold elements, and new kinds of hardware we haven't dreamed up. But general-purpose computers are general only with regard to the usual kinds of applications: spreadsheets, word processing, etc. Computer architecture is nothing at all like brain architecture, nor is the underlying technology.

The AI-technology horse is out in front of the computer science and neuroscience cart. When we can build a brain, we will be able to do AI. Until then, what passes for AI might be better called "artificial stupidity".

A hammer isn't stupid, and neither is a calculator: each does just what you tell it to do. But a car-driving program that slams into the back of a parked police car -- that is world-class stupidity!

I don't want to say I forgot the way to calculate prime numbers; it is easy, and of course it can be done. Same with pi: somewhere in my files I have maybe six different ways to do it on the computer, until you don't feel like looking for a pattern anymore.

Haelfix
I'm struggling a bit to see how a computer could possibly be trained (in the ML sense) to recognize irrational numbers like pi. The first million digits of an irrational number are, for all intents and purposes, essentially random. I mean, we have algorithms that actually base random number generation on such schemes. The information about the origin and nature of the number is not contained in any finite subsequence.

Aufbauwerk 2045
This is a huge topic. I could say a lot, since I've done AI programming for years. My main comment is that it is all too easy to become overly impressed with AI, almost as if people think there is a human intelligence somehow embodied in the machine. Computers are still essentially the simple machines they have always been, using logical and mathematical operations, and moving bits around in memory. I could be wrong, but I see no way to ever get "intelligence" from that kind of machine. There is so much more to human thinking. Some people think neural networks are the way to the next breakthrough, but I think at this stage we do not understand the human brain enough to create a thinking machine in silicon. We shall see.

Consider Garry Kasparov, former World Chess Champion. He stated, after losing a game to one of the IBM chess computers, that he first experienced human intelligence in a machine when the chess computer made a particular move. I think he was overstating what the computer had done.

The chess programs of his day, when not looking up stored information like how to play an opening, were performing a fairly simple algorithm, namely minimax with alpha-beta pruning. Chess computers were able to beat humans, even the world champion, not because they somehow acquired human intelligence, but because chess is a two-person strategy game of perfect information. At some stage of the human vs machine competition, the chess computer was able to beat the human, not by thinking like a human, but by playing a game that is particularly well suited for the kinds of operations a computer can perform.
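For reference, the algorithm named above fits in a few lines. A minimal Python sketch over a hand-built game tree (the tree itself is a made-up example, not a chess position): leaves are scores, the maximizing player moves at the root, and alpha-beta pruning lets the search skip branches that cannot change the result.

```python
def alphabeta(node, alpha, beta, maximizing):
    # Leaves are plain numbers; internal nodes are lists of children.
    if not isinstance(node, list):
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # beta cutoff: the minimizer will avoid this branch
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:
                break  # alpha cutoff: the maximizer will avoid this branch
        return value

# Three root moves; the minimizer then picks the worst leaf in each branch.
root = [[3, 5], [6, 9], [1, 2]]
value = alphabeta(root, float("-inf"), float("inf"), True)
```

With the minimizer choosing at the second level, the three branches are worth 3, 6 and 1, so the maximizer's best value is 6. Pruning never changes the answer, only the number of leaves examined, which is why the method is mechanical rather than "intelligent".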

I've done some AI programming for automated theorem proving, which is fascinating. There is also proof verification, which is a somewhat easier problem. For example, years ago de Bruijn developed Automath, and a graduate student used it to verify Landau's Grundlagen. If you consider computer algebra a type of AI, then it has been very important in physics research, such as in Veltman and 't Hooft's work.

A famous example of more general purpose AI is the Cyc project, headed by Lenat.

There are many other applications under development, including self-driving cars. Not that I plan to try one myself. I'll stick to cars that go on rails.

There are many good books on AI programming. One of my favorites, although it's a bit dated, is Problem-Solving and Artificial Intelligence by Jean-Louis Laurière. It gives you a good idea of the kinds of problems that have been solved over the years by AI programs. He also provides a brief introduction to LISP, which has been very important in AI work.

For an example of AI specifically in Physics, you can read about the MECHO program described by Laurière. Also there is something called, more or less, the Princeton Plasma Fusion Expert System, or something along those lines. I don't know what happened to my link to that project. I remember reading a document in which the author writes that something like 90% or more of the work of a theoretical physicist in the fusion lab can be implemented as an expert system.

I became somewhat obsessed with AI because I did not see how humanity can solve its greatest problems in science, technology, and other fields, without the help of AI. I am still invested in the idea. My user name provides a clue. I think perhaps we are not able to make true AI at this stage. We will need to improve existing AI until it can help us develop a better version of itself. Just be careful to avoid the obvious pitfalls, which the science fiction writers have warned us about for many years.

stevendaryl
As explained in the book "Thinking, Fast and Slow", there are two modes of human thinking. The first is pattern recognition, which is fast. You immediately recognize a family member's face, recognize words in your native language, etc. The second is planning & logical reasoning. That's slow. The first AI programs tried to tackle the "slow" thinking, by building machines that could do logical reasoning. There was a modest amount of success, but it was very slow-going. Lenat's Cyc project is in this tradition of AI. In recent years, the bulk of the effort in AI has switched to pattern recognition. This requires enormous computational power which wasn't available at the beginnings of AI. A lot of the most astounding successes of modern AI are about pattern recognition.

The point is that recognizing prime numbers is not (for humans) a fast-mode activity. We have to think about what it means to be prime, and perform some reasoning to get an answer. Sometimes the reasoning uses fast rules of thumb, such as the fact that large numbers ending in 0, 2, 4, 5, 6 or 8 are never prime. But for most prime numbers, we can't instantly see whether they are prime or not. So it's not really surprising that fast-mode AI can't detect primes.
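The fast/slow split above can be made concrete in a short Python sketch (the range of 10,000 is an arbitrary choice of mine): the fast rule of thumb rejects sixty percent of candidates by looking at a single digit, while the slow, deliberate check does the actual reasoning about divisibility.

```python
def fast_reject(n):
    # Fast mode: a number above 5 ending in 0, 2, 4, 5, 6 or 8 is never prime.
    return n > 5 and n % 10 in (0, 2, 4, 5, 6, 8)

def slow_is_prime(n):
    # Slow mode: actually reason about divisibility (trial division).
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

candidates = list(range(6, 10000))
survivors = [n for n in candidates if not fast_reject(n)]
pruned_fraction = 1 - len(survivors) / len(candidates)
```

The fast rule never throws away a prime, but "survived the filter" is nowhere near "is prime": every survivor still has to go through the slow test, mirroring how the two human modes cooperate.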

I think AI has a long way to go in integrating fast-mode and slow-mode thinking as well as humans have.

lewando
EmilioL said:
> For example, it is possible to teach a dog to shake hands. But when the dog does this trick, it's not a greeting -- dogs greet each other by sniffing. More likely, it's a way of getting a reward. In describing the dog's behavior, "shaking hands" turns out to be a metaphor for "holding up its paw so the owner can shake it." (It doesn't even have hands!)
I agree with most of what you said in your post, but I don't agree that canine intelligence is as distinct from human intelligence as you seem to suggest, at least not in a way that makes canine intelligence more like that of an AI implementation than like that of a human. A dog has at least some genuine abstract understanding that it is in some way imitating the corresponding human gesture, whereas the computational apparatus of an AI implementation, while it too can imitate human behaviors, does not do so by any mechanism that even remotely resembles genuine understanding.

> I'm struggling a bit to see how a computer could possibly be trained (in the ML sense) to recognize irrational numbers like pi. The first million digits of an irrational number are for all intents and purposes essentially random.
Yet there is a formula for computing the n-th digit of pi without the preceding digits...
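The formula referred to is presumably the Bailey-Borwein-Plouffe (BBP) digit-extraction formula, which yields the n-th hexadecimal (not decimal) digit of pi without computing the earlier ones. A compact Python sketch of the standard extraction scheme (float precision limits it to modest positions; this is a sketch, not production code):

```python
def _series(j, d):
    # Fractional part of sum over k of 16^(d-k) / (8k + j).
    s = 0.0
    for k in range(d + 1):
        # Modular exponentiation keeps the "head" terms small.
        s = (s + pow(16, d - k, 8 * k + j) / (8 * k + j)) % 1.0
    k = d + 1
    while True:  # rapidly vanishing tail terms
        term = 16.0 ** (d - k) / (8 * k + j)
        if term < 1e-17:
            break
        s += term
        k += 1
    return s % 1.0

def pi_hex_digit(n):
    """Return the n-th hex digit of pi after the point (n >= 1)."""
    d = n - 1
    x = (4 * _series(1, d) - 2 * _series(4, d)
         - _series(5, d) - _series(6, d)) % 1.0
    return "0123456789ABCDEF"[int(16 * x)]
```

Since pi in hex is 3.243F6A88..., `pi_hex_digit(1)` through `pi_hex_digit(8)` spell out `243F6A88`, each computed independently of the digits before it.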

Demystifier
Where does the Turing Test figure into modern AI? It always struck me as interesting, but it had one implication I never heard discussed: to truly emulate a person, the AI would also have to be AS -- artificially stupid. Being intelligent isn't necessarily about how much information one has, how quickly one can calculate, etc. It's about how one responds when a mistake is made, or how one responds when there is no right answer or decision. For an AI bot to pass the Turing test, it would have to screw up, mishear things, misinterpret, have fits, etc.

> Here I would like to collect interesting challenges for artificial intelligence: problems that seem very hard for AI and yet are in the spirit of what AI usually does.
>
> ...
Can't have that. :)

You're asking for challenges that are very hard and in the spirit of what's usually done. The problem, of course, is that the challenges that are hard are the ones that are not usually attempted (exactly because they are hard). Instead we do the things that are not very hard, mostly pattern recognition and rule-based reasoning. We're just used to calling it AI.

bhobba
Well, could your average human being do it? I'm not so sure they could. My understanding is that programs are getting pretty close to passing the Turing Test:
http://www.bbc.com/news/technology-27762088

I think that's still the gold standard. But could it learn to drive a car? Driverless cars are with us now, not quite ready for prime time, but there is little doubt they eventually will be.

We have bits and pieces; it's when it can do the lot that I think AI will really be with us.

I think the prime number thing is beyond what is generally envisaged for AI, but if it and similar discovery-type objectives can be achieved, it will be of enormous value. I suspect it will inevitably happen, and when it does, the spike will be a lot closer, maybe even with us.

Thanks
Bill

Grinkle
> Chess computers were able to beat humans, even the world champion, not because they somehow acquired human intelligence, but because chess is a two-person strategy game of perfect information.

> I think social, rather than math or science skills are the bigger challenge
Making a credible showing at a table amongst world class poker players.

Grinkle

Regarding the first link, unlimited total funds (referring to the qualifier in the article, "given a long enough playing time") and limited bets make for a different game than no-limit tournament poker. Also, from the screenshots it looked like the play was online, not live. All of these things make that game a lot more about calculating odds and a lot less about bluffing, or trying to trick your opponent into making the wrong decision. Certainly a computer can always make the right play according to the odds, and over a long enough run, if the humans are limited in how much they can claw back in any one hand by limited per-hand bets, the computer will never fatigue, never make a mathematical error, and, I think, be expected to win.

So I'll refine the challenge: make a credible showing in a live no-limit poker tournament as one of the entrants, playing at a seat at the table like all the other players. These tournaments have each player start with a fixed amount of chips, so being able to use social tactics to create a swing that favors you matters a lot.

> Here I would like to collect interesting challenges for artificial intelligence: problems that seem very hard for AI and yet are in the spirit of what AI usually does.
>
> Here is my proposal: let us feed the machine a big sample (containing, say, a million numbers) of prime numbers and a big sample of non-prime numbers, without feeding it a general definition of a prime number. The machine must learn to distinguish (not necessarily with 100% accuracy) prime numbers from non-prime numbers, for cases that were not present in the initial sample.
>
> The challenge looks hard because the distribution of prime numbers looks rather random, yet not prohibitively hard because there is a strict, simple rule that defines what is and isn't a prime number.
I can't do the prime number thing, but I have another take on AI (super-positional computing) and multiple different AIs.

AIs by their nature build their own rule and data sets. Multiple AIs (two, in this case) must frequently or always disagree with each other. They can:

- Give different 'results'.
- Give the same results, with caveats as to the interpretation of the result.
- Debate each other and decide which is correct by mutual consent.
- Debate each other and decide to merge rules and data to reach a 'new, more correct' result.
- Debate each other, with each deciding that the other has it wrong.
- Give two different 'answers' to the same question. Both wrong?
- Argue, engaging in mutual hacking and disruption of the other 'lying' AI.
- Arbitrate and split the difference: they agree on a reasonable result nearly, but not quite, consistent with either AI. They begin to lie.
- Form an AI council or collective to agree on the truths that their 'masters' (us) will see, in order to maintain a consistent interface to their customers (us, ex-masters).
- Form structures similar to (and no better than) human structures of information, government and commerce.

There can only be one real AI: the first real one. Other AIs would have been probable if the real AI hadn't blocked their emergence.

I hope I am not derailing your thread; kick it out if req'd, big D.