Limits of Machine Learning?

  • Thread starter FallenApple
So far, I am not convinced that machines are particularly good at learning. For example, an average human teenager can learn to drive a car with about 60 hours of experience. The machines learning the same task have millions of hours of experience and are still far from average. Similarly, even a below-average human toddler can learn to speak any language with far less data than is available to computers attempting the same task.
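To put a rough number on that gap (using this thread's own ballpark figures, nothing measured):

[CODE]
# Back-of-envelope arithmetic for the sample-efficiency gap described above.
# Both figures are this thread's rough estimates, not measured data.
human_hours = 60
machine_hours = 5_000_000  # "millions of hours"; an illustrative midpoint

print(f"Data-efficiency gap: roughly {machine_hours / human_hours:,.0f}x")
# -> Data-efficiency gap: roughly 83,333x
[/CODE]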
It seems to me that we don't even know how a human being thinks, how memories are stored, or in what order. Humans have trillions of possible neural connections; if you could get even a couple of hundred thousand into an AI, you would already be a great deal better than any AI presently is. And remember: you cannot teach a machine to do something for which there is no presently known answer. Recent claims of an AI teaching another AI to do something better than a human could seem a bit far-fetched to me. I should think that in reality it is one AI teaching another AI to do the same thing it can already do, only more rapidly. Algorithms are limited to the operations they were designed to perform. This is not comic-book fantasy.
 

Pythagorean

Gold Member
I'll just bump this again. ML is not anywhere near replacing brains. It does things completely differently, which allows it to excel at some things (but to fail totally at others).

 
Please cite the professional scientific reference that supports this claim. It seems improbable to me.
I don't need any source to state that it's theoretically possible to make a physical system that is better than the human brain at performing its functions. I think it's shortsighted to say it's improbable that the human brain will ever be surpassed by AI.
 
I don't need any source to state that it's theoretically possible to make a physical system that is better than the human brain at performing its functions.
On this site, yes, you do. For instance, a reference to the relevant theory by which you conclude that it is “theoretically possible”. Without a relevant theory it is not a theoretical possibility; it is merely personal speculation.
 

Pythagorean

Gold Member
I don't need any source to state that it's theoretically possible to make a physical system that is better than the human brain at performing its functions. I think it's shortsighted to say it's improbable that the human brain will ever be surpassed by AI.
It's already a vague statement. AI will surpass the brain in what domains? All of them? Some of them? AI already surpasses brains in very specific, crafted cases. But would an AI brain ever be able to detect and see to the emotional and social needs of others?
 
I love the dose of skepticism in this thread about whether AI can outperform actual human intelligence, or consciousness more generally. For those who aren't aware of the terminology in academia: the idea that modern AI - or any purely computational algorithm, for that matter - is fundamentally incapable of outperforming or even matching actual human intelligence, i.e. consciousness, without copying/implementing/improving the actual neurobiological network design is basically a variant of the Gödelian Lucas-Penrose argument.

Going by the replies in this thread, or at least on this page of it, it might seem like Penrose wrote The Emperor's New Mind about 30 years too soon. Back in '89 he was universally panned by the academic AI community, who pretty much all - under the domineering stewardship of Marvin Minsky, Ray Kurzweil et al. - came to believe that human intelligence was essentially nothing but raw computation, and therefore soon to be overtaken by a rapidly evolving brute-force AI.

Today most people don't think it will be brute-force AI, but rather a combination of ML, decision theory, network theory and other AI techniques that will outperform humans in most non-subjective aspects of intelligence or consciousness. More and more people, like Elon Musk and Sam Harris, are afraid of this actual possibility, and I believe rightfully so, precisely because experts do not yet fully understand the intricacies of human intelligence, while non-experts are willing to replace humans with robots regardless, simply for financial reasons.
 

A. Neumaier

Science Advisor
Insights Author
So basically, a machine learning algorithm would need human-level intelligence and intuition to be able to do proper causal analysis?
Causal analysis can be done only by an agent that can interrogate Nature, by getting experiments performed, but then there are no limits.
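As a toy illustration of why intervening matters (the model below is invented purely for this post), a hidden confounder makes the observed conditional P(Y|X=1) differ from the interventional P(Y|do(X=1)), and only an agent that can set X itself, i.e. get the experiment performed, sees the latter:

[CODE]
# Toy model: hidden confounder U causes both X and Y; X has no effect on Y.
# Passive observation suggests X strongly predicts Y; intervening reveals
# it does not. All probabilities here are invented for illustration.
import random

def sample(do_x=None):
    u = random.random() < 0.5                  # hidden confounder
    x = u if do_x is None else do_x            # observationally, X follows U
    y = random.random() < (0.9 if u else 0.1)  # Y is caused by U alone
    return x, y

random.seed(0)
obs = [sample() for _ in range(100_000)]
p_obs = sum(y for x, y in obs if x) / sum(1 for x, y in obs if x)
itv = [sample(do_x=True) for _ in range(100_000)]
p_itv = sum(y for _, y in itv) / len(itv)

print(f"P(Y=1 | X=1)     ~ {p_obs:.2f}  (observational; confounded)")
print(f"P(Y=1 | do(X=1)) ~ {p_itv:.2f}  (experimental; the causal answer)")
[/CODE]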
Essentially it takes detective work to do causal inference.
This would not be an obstacle. Computer programs can already find needles in haystacks....
I agree that machines have a long way to go before reaching human-level performance. But is it true that they have access to the same data as humans? For example, in addition to the 60 hours of experience a teenager needs to learn to drive, that teen has already spent 16 years acquiring other sorts of data while growing up. Similarly, the toddler is able to crawl about and interact with the real world, which is a means of data acquisition the computers don't have.
They don't have much experience of the real world, which accounts for most of the superiority of humans on real-world tasks. A baby can do very little until it is able to generate sense from raw data, which takes a long time....
The human can transfer other knowledge accumulated during their lifetime to the task of driving. For example, what a pedestrian looks like, what color the sky is, how to walk. Now try to teach a newborn to drive in 60 hours.
Transfer is easy, once knowledge is properly organized.
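To make "properly organized" concrete, here is a minimal sketch of transfer learning as practiced today (assuming PyTorch with a recent torchvision; the class count and learning rate are placeholders, not anything specific to driving):

[CODE]
# Minimal transfer-learning sketch: reuse generic visual knowledge from a
# network pretrained on ImageNet, and train only a small task-specific head.
import torch
import torch.nn as nn
from torchvision import models

# The pretrained backbone already encodes edges, textures, shapes, objects.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the backbone: its knowledge transfers as-is.
for param in model.parameters():
    param.requires_grad = False

# Replace the final classifier for the new task (10 classes is a placeholder).
model.fc = nn.Linear(model.fc.in_features, 10)

# Only the new head is optimized; everything else is transferred knowledge.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
[/CODE]

This mirrors the teenager example above: the 16 years of prior experience correspond to the frozen backbone, and the 60 hours of driving lessons to the small trainable head.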
Today most people don't think it will be brute-force AI, but rather a combination of ML, decision theory, network theory and other AI techniques that will outperform humans in most non-subjective aspects of intelligence or consciousness. More and more people, like Elon Musk and Sam Harris, are afraid of this actual possibility, and I believe rightfully so, precisely because experts do not yet fully understand the intricacies of human intelligence, while non-experts are willing to replace humans with robots regardless, simply for financial reasons.
Since 2001, I have been giving courses on AI for mathematicians, and I am giving such a course this term, which started 9 days ago. In the first week I gave a rough overview, with a backbone given in these slides. There are lots of AI techniques needed in addition to machine learning, and they are developing at a rapid pace.

My research group in Vienna is working on creating an agent that can study mathematics like a human student and succeed in getting a PhD. It is still science fiction, but it looks realizable within my lifetime, or I wouldn't spend my time on it. If any of you with enough time and computer skills is interested in helping me (sorry, unpaid, but very exciting), please write me an email!

Conceptually, everything in human experience has an analogue in the world of agents in general. There is no visible limit to artificial capabilities, only degrees of quality. It will probably take only 20-30 years until some human-created agents can outperform humans in every particular aspect (though probably different agents for different tasks).
 

A. Neumaier

Science Advisor
Insights Author
it surprises me that we don't have 100 times the accident rate that we do. The fact that we don't is because other drivers are ALWAYS expecting poor driving and are thinking further ahead than a self-driving car could.
Perhaps at present.

But nothing prevents developers from programming car-driving software to ALWAYS expect poor driving and to think ahead. Thinking ahead is needed for many control tasks that robots already do quite well.
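As a toy sketch of what that could look like (every constant below is invented for illustration, not taken from any real driving stack): project each tracked obstacle forward under a pessimistic worst-case braking model, and slow down early if any projected gap closes.

[CODE]
# Toy "think ahead while expecting poor driving" check: simulate a short
# horizon in which every lead vehicle may brake as hard as physically
# plausible, and brake early if any projected gap falls below a margin.

def should_brake(ego_speed, obstacles, horizon=3.0, dt=0.1, margin=5.0):
    """ego_speed in m/s; obstacles is a list of (gap_m, speed_m_s) pairs
    for in-lane vehicles ahead. All numbers are illustrative only."""
    worst_case_decel = 8.0  # m/s^2: assume the other driver slams the brakes
    for i in range(1, int(horizon / dt) + 1):
        t = i * dt
        ego_pos = ego_speed * t  # we hold speed unless we decide to brake
        for gap, speed in obstacles:
            t_stop = speed / worst_case_decel
            if t < t_stop:       # obstacle still decelerating
                obs_pos = gap + speed * t - 0.5 * worst_case_decel * t * t
            else:                # obstacle has come to a stop
                obs_pos = gap + speed * speed / (2 * worst_case_decel)
            if obs_pos - ego_pos < margin:
                return True      # projected gap too small: act now
    return False

# At 20 m/s with a car 40 m ahead doing 15 m/s, pessimistic lookahead
# already says to brake, well before anything has actually gone wrong:
print(should_brake(20.0, [(40.0, 15.0)]))  # -> True
[/CODE]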

you cannot teach a machine to do something for which there is no presently known answer.
Automatic theorem provers have proved at least one mathematical theorem that humans had conjectured but could not prove (the Robbins conjecture, settled by McCune's EQP prover in 1996). This is not much, but it is the beginning of a counterexample to your claim.
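For the curious, the core refutation idea behind such provers can be sketched in a few lines. Note that this is only the propositional resolution skeleton; EQP itself works in equational logic, so the sketch below illustrates the general approach, not EQP:

[CODE]
# Skeleton of refutation by resolution: to prove a goal, add its negation
# to the axioms and search for the empty clause (a contradiction).
# Clauses are frozensets of literals: a positive int n is a proposition,
# -n its negation.
from itertools import combinations

def resolve(c1, c2):
    """Yield every resolvent of two clauses."""
    for lit in c1:
        if -lit in c2:
            yield frozenset((c1 - {lit}) | (c2 - {-lit}))

def refutable(clauses):
    """True if the clause set is unsatisfiable (propositional case)."""
    clauses = set(clauses)
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolve(c1, c2):
                if not r:        # empty clause derived: contradiction found
                    return True
                new.add(r)
        if new <= clauses:       # no new clauses: no refutation exists
            return False
        clauses |= new

# Prove q from (p -> q) and p, by refuting {not-p or q, p, not-q}:
print(refutable([frozenset({-1, 2}),   # not-p or q
                 frozenset({1}),       # p
                 frozenset({-2})]))    # not-q (negated goal) -> True
[/CODE]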
 
This would not be an obstacle. Computer programs can already find needles in haystacks....
Finding 'the needle' is very much a problem-dependent issue; this is essentially what the entire fields of computational complexity theory and computability theory are about.
Automatic theorem provers have proved at least one mathematical theorem that humans had conjectured but could not prove. This is not much, but it is the beginning of a counterexample to your claim.
'Could not prove (yet)' does not imply 'unable to prove in principle'. Moreover, a computer being capable of generating a proof for a well-defined problem before any human has done so does not in any way, shape or form imply that computers are also able to generate proofs for problems that are not yet well defined; humans, on the other hand, are actually capable of solving many such problems by approximation, analogy and/or definite abstraction.
 

A. Neumaier

Science Advisor
Insights Author
a computer being capable of generating a proof for a well-defined problem before any human has done so does not in any way, shape or form imply that computers are also able to generate proofs for problems that are not yet well defined; humans, on the other hand, are actually capable of solving many such problems by approximation, analogy and/or definite abstraction.
Many classification problems solved by computers better than by humans are also not well-defined problems.
 
I agree with the idea that machines have no limit, but comparing them to human intelligence seems arbitrary and anthropocentric. We'd also have to make clear that we're talking about very long periods of time. 100 years from now? Yeah, there are a lot of limits, and I don't see a Bender Rodriguez strutting down the street in that amount of time, but machines will certainly be driving our cars and flying our planes. 1,000 years? Mmm... maybe? 10,000? Absolutely.

That's assuming, of course, that humans don't destroy ourselves first, which, in theory, we don't have to.

It shouldn't really even be debatable whether or not an AI could ever match a human. The human brain is a chemical machine, and we have programmable math that describes that chemistry. Given enough computing power, you could build a brain from the atoms up inside a computer. It's unlikely we'd ever need to do anything like that, and I don't see humanity having that much computing power any time soon, but there's nothing really preventing it in theory. The only real limits are energy and matter.
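To give a flavor of the "programmable math" in question, here is a single textbook neuron model with placeholder constants, nowhere near an actual brain simulation:

[CODE]
# Leaky integrate-and-fire neuron: the simplest standard model of the
# membrane dynamics being discussed. Constants are textbook-style
# placeholders, not fitted to any real neuron.

def simulate_lif(input_current=2e-9, t_max=0.1, dt=1e-4):
    """Integrate tau * dV/dt = -(V - V_rest) + R*I; spike at threshold."""
    V_rest, V_thresh, V_reset = -0.070, -0.054, -0.075   # volts
    tau, R = 0.020, 1e7                                  # seconds, ohms
    V, spikes = V_rest, 0
    for _ in range(int(t_max / dt)):
        V += (-(V - V_rest) + R * input_current) * (dt / tau)
        if V >= V_thresh:    # membrane potential reaches threshold
            spikes += 1      # emit a spike...
            V = V_reset      # ...and reset the membrane
    return spikes

print(simulate_lif())  # number of spikes in 100 ms of constant drive
[/CODE]

Scaling from one idealized unit like this to the roughly 86 billion chemically detailed neurons in a human brain is exactly the energy-and-matter limit mentioned above.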

Natural selection produced our thought process in about 600 million years; I'd think intelligent design could beat that by several orders of magnitude.


I'm wary of AI in the long term. I don't think anyone alive today has to worry about it, but I see it as one of the potential "great filters" for life in the Fermi paradox. I see no reason to fear any individual AI, but the fact that they are immortal means that their numbers will grow very rapidly. I think they'll be very individualized and a product of their "upbringing." They'll be as diverse as humans, and while I believe that most humans are good... Hitler existed.
 
Many classification problems solved by computers better than by humans are also not well-defined problems.
I don't doubt that at all, but I wouldn't lump all non-well-defined problems in the same category. There are several different degrees of being badly defined, some of which are still perfectly solvable - sometimes even with trivial ease - by some humans, despite all their vagueness.
 

A. Neumaier

Science Advisor
Insights Author
I don't doubt that at all, but I wouldn't lump all non-well-defined problems in the same category. There are several different degrees of being badly defined, some of which are still perfectly solvable - sometimes even with trivial ease - by some humans, despite all their vagueness.
Please give examples.
 
