The time when AI can surpass humans

  • #1

Main Question or Discussion Point

How long until machines can surpass humans? I was thinking around 50 years, but I am not sure. I would appreciate it if you could help me clear up this doubt.
 

Answers and Replies

  • #3
Hi @anorlunda,

I'm asking about when a machine will surpass the human mind, consciousness.
For example, suppose I say to a machine:
"My lawyer is a vampire".
A human understands right away that my lawyer is bleeding me of money. I would like to know when AI will arrive at this point.
 
  • #4
DaveC426913
Gold Member
I'm asking about when a machine will surpass the human mind, consciousness.
For example, suppose I say to a machine:
"My lawyer is a vampire".
A human understands right away that my lawyer is bleeding me of money. I would like to know when AI will arrive at this point.
Our current AI likely can already parse this. I bet Siri might not be stumped by the analogy. The question is: by what criteria do you decide whether it has understood it? Probably by whether its response makes sense.

The problem is, it's a continuum. Every example requires a different amount of conceptualizing horsepower.

You might want to read up on the Turing Test. Essentially, it is a test that assumes there's no way to specifically quantify when a given machine has been successful at simulating a human. All you can do is try to stump it, and set a time limit on how long it has to keep up without the tester being able to tell.

I mean, how would you know it "understands"? Talk to it for a minute? An hour? A year?
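The imitation game described above can be sketched as a toy program. Everything here is a placeholder: `machine_reply`, `human_reply`, and the coin-flip judge are invented stand-ins, and a real test would involve live conversation under a time limit rather than canned replies.

```python
import random

def machine_reply(question: str) -> str:
    # Hypothetical stand-in for the program under test.
    return "That depends on what you mean."

def human_reply(question: str) -> str:
    # Hypothetical stand-in for the human control subject.
    return "Ha! Only in the financial sense."

def imitation_game(questions, judge) -> bool:
    """One round: the judge sees two unlabeled transcripts and must
    guess which respondent is the machine. Returns True if correct."""
    machine_is_a = random.random() < 0.5  # hide the machine's position
    a, b = ((machine_reply, human_reply) if machine_is_a
            else (human_reply, machine_reply))
    transcript_a = [(q, a(q)) for q in questions]
    transcript_b = [(q, b(q)) for q in questions]
    says_a = judge(transcript_a, transcript_b)  # True = "A is the machine"
    return says_a == machine_is_a

# The machine "passes" if judges do no better than chance over many rounds.
questions = ["My lawyer is a vampire. What do I mean?"]
guessing_judge = lambda ta, tb: random.random() < 0.5
rounds = 2000
correct = sum(imitation_game(questions, guessing_judge) for _ in range(rounds))
print(f"judge accuracy: {correct / rounds:.2f}")  # near 0.50 = indistinguishable
```

The "time limit" aspect of the test maps onto the number of questions per round: the more exchanges the judge gets, the more chances to stump the machine.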
 
  • #6
I didn't know that about Siri, Dave. Thanks!
 
  • #7
I'll read up on the Turing Test. I appreciate it, Dave.
 
  • #8
mathman
Science Advisor
Hi @anorlunda,

I'm asking about when a machine will surpass the human mind, consciousness.
For example, suppose I say to a machine:
"My lawyer is a vampire".
A human understands right away that my lawyer is bleeding me of money. I would like to know when AI will arrive at this point.
Machines can be programmed to respond sensibly, as long as the programmer can foresee such questions.
 
  • #9
DaveC426913
Gold Member
I didn't know that about Siri, Dave. Thanks!
Not a fact!

Just an offhand guess. Siri is capable of some amount of simulated human interpretation. But it's just clever software, not AI.
 
  • #10
rcgldr
Homework Helper
For example, machines already beat humans in chess and other games.
Most chess engines are conventional programs. The main exception is AlphaZero, which does use a form of neural network; its predecessor AlphaGo was the first program to beat the best human players of Go. A chess-playing version (using two neural networks, from what I understand) beat Stockfish 8, which is already stronger than any human. However, Stockfish 8 was limited to one move per minute, was not allowed to manage its own time, and ran without the opening book and endgame tablebases it normally uses; the then-current Stockfish 9 was not used in the match.

One of several articles about this:

https://en.chessbase.com/post/alpha-zero-comparing-orang-utans-and-apples

Still, this is an example of an AI for a specific task, not a generalized AI. One milestone yet to be achieved is for an AI to match the overall general intelligence of a cricket or a roach. I'm not sure this is possible, as some of the intelligence in living beings is "pre-programmed", for example monarch butterfly migration, where the monarchs go through three or four generations on the northward migration.

https://en.wikipedia.org/wiki/Monarch_butterfly_migration
 
  • #11
DaveC426913
Gold Member
I'm not sure this is possible, as some of the intelligence in living beings is "pre-programmed"
Of all things to have trouble with, why would a computer have trouble with pre-programmed behavior?

That's the easy part. The hard part is learned behavior and adaptation.
 
  • #12
I don't hold with narrow definitions of AI. I think our machines have effectively been doing AI in the broad sense since long before we had computers and software.
 
  • #13
DaveC426913
Gold Member
I don't hold with narrow definitions of AI. I think our machines have effectively been doing AI in the broad sense since long before we had computers and software.
I'm not sure what kind of definition of Artificial Intelligence would include machines that do not have software.

Intelligence is generally understood to be the ability to apply existing patterns to new circumstances. I can't think of any examples that precede the advent of modern computers. Can you?


Note that
  1. The term AI is generally invoked to distinguish it from the types of mechanical sophistication that have gone before it. Your definition essentially makes no distinction. So now we need a word for that.
  2. What term would you like to use to describe machines that can mimic sophisticated human behavior at a level better than mechanical devices have achieved before software? The rest of us call it AI.
  3. If you invoke your personal definition of AI when communicating with others, you will run into a lot of problems, since it already has a generally-accepted meaning.
 
  • #14
russ_watters
Mentor
I'm not sure what kind of definition of Artificial Intelligence would include machines that do not have software.
Here's the basic definition in the Wikipedia page:
"...any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals."

That sounds like a mechanical thermostat to me.
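For what it's worth, a device meeting that quoted definition does fit in a few lines. A minimal sketch (all constants invented) of a bang-bang thermostat that "perceives" the temperature and "acts" to push it toward its goal:

```python
def thermostat_step(temp: float, setpoint: float, heater_on: bool,
                    band: float = 0.5) -> bool:
    """Bang-bang control: 'perceive' the temperature, 'act' to push it
    toward the goal. This trivially satisfies the quoted definition."""
    if temp < setpoint - band:
        return True        # too cold: heater on
    if temp > setpoint + band:
        return False       # too warm: heater off
    return heater_on       # inside the deadband: leave it alone

# Toy simulation: the room loses 0.3 deg/min, the heater adds 1.0 deg/min.
temp, heater = 15.0, False
for _ in range(60):
    heater = thermostat_step(temp, setpoint=20.0, heater_on=heater)
    temp += (1.0 if heater else 0.0) - 0.3
print(f"temperature after an hour: {temp:.1f}")  # hovers near 20
```

Which rather makes the point: if three `if` statements (or a bimetallic strip doing the same thing) count as "perceiving" and "pursuing goals", the definition is casting a very wide net.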
 
  • #15
DaveC426913
Gold Member
Here's the basic definition in the Wikipedia page:
"...any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals."

That sounds like a mechanical thermostat to me.
That's quite a stretch for the term "intelligence".

A thermostat is, at its core, simply a bimetallic coil that expands and contracts with temperature.

If that qualifies as artificial intelligence, then the term has no useful meaning.

I'd argue that's quite a stretch for the terms "perceive", "maximize its chance" and "goals" as well.

I guess a windvane on a windmill counts as AI too.


So now we need a new term for a device that can match human-level intelligence in behavior. AI is out. How about AI-ii?
 
  • #16
russ_watters
Mentor
That's quite a stretch for the term intelligence.

A thermostat is, at its core, simply a bimetallic coil that expands and contracts with temperature.

If that qualifies as artificial intelligence, then the term has no useful meaning.
Agreed; that's exactly my opinion on the subject. I think it is a term used mostly to generate headlines, and it has no useful meaning. But I would welcome being shown a useful definition/application for the term.
So now we need a new term for a device that can match human-level intelligence in behavior. AI is out. How about AI-ii?
Don't be silly. If computers matching human-level intelligence is the definition you choose, so be it, but please recognize that that isn't anything special. Mechanical computers were doing that a hundred plus years ago. https://en.wikipedia.org/wiki/Difference_engine

The entire point of a computer - mechanical or digital - is that it exceeds human intelligence. We wouldn't use them otherwise!
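The Difference Engine's method is easy to sketch: the method of finite differences tabulates a degree-n polynomial using only additions, once its first n+1 values are known. A sketch below, seeded with f(x) = x² + x + 41 (reportedly one of Babbage's demonstration polynomials); the function names are mine, not Babbage's terminology:

```python
def leading_differences(values):
    """Leading entry of each row of the finite-difference table built
    from the first n+1 values of a degree-n polynomial."""
    rows = [list(values)]
    while len(rows[-1]) > 1:
        prev = rows[-1]
        rows.append([b - a for a, b in zip(prev, prev[1:])])
    return [row[0] for row in rows]

def tabulate(values, count):
    """Extend the table using only additions -- the Difference Engine's
    whole trick. No multiplication is ever performed."""
    state = leading_differences(values)
    out = []
    for _ in range(count):
        out.append(state[0])
        for i in range(len(state) - 1):
            state[i] += state[i + 1]  # each column absorbs the one below
    return out

# f(x) = x**2 + x + 41, seeded with f(0), f(1), f(2):
print(tabulate([41, 43, 47], 6))  # → [41, 43, 47, 53, 61, 71]
```

Whether cranking out sums by repeated addition counts as "intelligence" is, of course, exactly what is in dispute here.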
 
  • #18
DaveC426913
Gold Member
Agreed; that's exactly my opinion on the subject. I think it is a term used mostly to generate headlines, and it has no useful meaning.
Maybe so in the media, but that shouldn't be where the bar is set. Those who want to discuss artificial intelligence should have a useful working definition.

Don't be silly.
Of course I was being facetious. That was directed more at the folly of anorlunda's idea of eliminating the distinction between mere mechanical aptitude and bona fide intelligence.

Mechanical computers were doing that a hundred plus years ago. https://en.wikipedia.org/wiki/Difference_engine
I thought of the Difference Engine, but didn't mention it as I didn't expect anyone to actually consider it as an example of intelligence.

It does very dumb things, just fast. That is not any definition of intelligence I would go with.

I stand by the hallmark of intelligence as the ability to adapt (that is, self-adapt) to new situations. The Difference Engine cannot do that at all.

The entire point of a computer - mechanical or digital - is that it exceeds human intelligence. We wouldn't use them otherwise!
I completely disagree.

The Difference Engine, for example, doesn't exceed human intelligence at all. It is doing basic arithmetic - nothing more than a (very) patient Grade 2 student could do using a pencil and a (very) long sheet of foolscap.

The Difference Engine can't abstract concepts, or apply previous lessons to a problem it has not seen before.

The point of (current) computers is that they do mundane things very fast. I see virtually no intelligence in there excepting software that is so sophisticated it is capable of making decisions between a blurred set of competing goals (if they're not blurry, then straight mathematical calculations with unambiguous outcomes would suffice - i.e. what mainstream computers of today do. But they're starting to teach them fuzzy logic).
 
  • #19
I'll give a reason why I don't like narrow AI definitions. A narrow definition presupposes a particular approach to the solution, and excludes the chance that someone will succeed by a very different approach.

In the 80s, knowledge-based systems and inference engines were the only serious contenders for AI. Neural nets were on the fringe and were not (perhaps still not) considered intelligent.

Logic is an approach.
Neural nets are an approach.
Software is an approach.
What about biology?
What about approaches not yet conceived?

When debugging or inventing, it is very important to keep your mind open and not presuppose the path to the solution.
 
  • #20
rcgldr
Homework Helper
I'm not sure this is possible, as some of the intelligence in living beings is "pre-programmed"
Of all things to have trouble with, why would a computer have trouble with pre-programmed behavior? That's the easy part. The hard part is learned behavior and adaptation.
I wasn't considering AI that would include both pre-programmed and learned behavior. I'm not sure how much of the intelligence of a cricket or roach is pre-programmed versus learned, but I do recall that duplicating the intelligence of small insects like that was, at least at one time, considered an early goal for AI, though it's not clear to me why. It might have been related to how to react to a situation never encountered before: whether that is a problem-solving process (possibly based on the nearest similar experience) as opposed to a pre-programmed response. I think a key point made in an old article was trying to duplicate how living things actually think, or ways to emulate that process.
 
  • #21
I think a key point made in an old article was trying to duplicate how living things actually think, or ways to emulate that process.
There is no reason to make that a goal if the purpose is useful AI.
 
  • #22
russ_watters
Mentor
Maybe so in the media, but that shouldn't be where the bar is set. Those who want to discuss artificial intelligence should have a useful working definition.
Agreed! So can you provide one?
Of course I was being facetious. That was directed more at the folly of anorlunda's idea of eliminating the distinction between mere mechanical aptitude and bona fide intelligence.
I don't think moving the bar to a bad place proves it was previously in a bad place.
I stand by the hallmark of intelligence as the ability to adapt (that is, self-adapt) to new situations. The Difference Engine cannot do that at all.
Fair enough. Older thermostats could be tuned - but by humans - to adjust the rate or overshoot of their temperature control. New thermostats learn the response of the system and adjust their operation to maximize their ability to maintain the setpoint. That fits your definition of learning, doesn't it?
I completely disagree.

The Difference Engine, for example, doesn't exceed human intelligence at all. It is doing basic arithmetic - nothing more than a (very) patient Grade 2 student could do using a pencil and a (very) long sheet of foolscap.
I thought speed was everything? We use machines to automate physical tasks that we could theoretically do ourselves, but don't because we don't feel like it or it would take more effort than we are willing to put in. We could lift a ton by human power if we want, but we choose not to because it is faster and easier to use a crane - and a crane is more reliable. We use computers to do the same thing for mental tasks: they are faster and we can do other things with our time instead.
The point of (current) computers is that they do mundane things very fast. I see virtually no intelligence in there....
Fair enough -- but you'll need to work out a definition that excludes that component because as we've seen from the OP, many people seem to believe that that's a key part of the definition.

And me, personally, regardless of what we call it, I want a computer that can quickly do things I can't do or don't feel like doing. I don't need or necessarily even want it to mimic me to do that.
...excepting software that is so sophisticated it is capable of making decisions between a blurred set of competing goals (if they're not blurry, then straight mathematical calculations with unambiguous outcomes would suffice - i.e. what mainstream computers of today do. But they're starting to teach them fuzzy logic).
Well that's an interesting take. You're basically saying that a hallmark of "intelligence" is reduced precision/accuracy, aren't you? Do people really want that in a computer? In my opinion, there is no such thing as a "blurred set of competing goals". There is only indecisiveness.

My take is that people seem to want to build computers to mimic or actually be like flawed humans. I ask: why? That seems like a downgrade to me.
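The learning thermostat mentioned above can be sketched in code. This is a toy model, not any real product's algorithm: the one-step heating lag and all constants are invented. The idea is that the device shuts the heater off early by a learned margin, and adjusts that margin from the overshoot it observes each cycle:

```python
class LearningThermostat:
    """Learns the system's overshoot and shuts off early to cancel it."""
    def __init__(self, setpoint: float):
        self.setpoint = setpoint
        self.margin = 0.0     # learned early-shutoff margin (degrees)
        self.heating = False
        self.peak = None      # peak temperature of the current cycle

    def step(self, temp: float) -> bool:
        if self.heating:
            if temp >= self.setpoint - self.margin:
                self.heating = False        # shut off early by the margin
                self.peak = temp
        elif self.peak is not None:
            self.peak = max(self.peak, temp)   # still coasting upward
            if temp < self.peak:               # started falling: cycle done
                overshoot = self.peak - self.setpoint
                self.margin += 0.5 * overshoot # adapt toward zero overshoot
                self.peak = None
        elif temp < self.setpoint - 1.0:
            self.heating = True
        return self.heating

# Toy room: heat arrives one step late (thermal lag), so a naive
# thermostat overshoots; the learned margin grows until it doesn't.
stat = LearningThermostat(setpoint=20.0)
temp, was_on = 15.0, False
history = []
for _ in range(300):
    on = stat.step(temp)
    temp += (1.0 if was_on else 0.0) - 0.2   # lagged heating minus losses
    was_on = on
    history.append(temp)
print(f"first-hour peak {max(history[:60]):.1f}, "
      f"later peak {max(history[-60:]):.1f}, margin {stat.margin:.2f}")
```

The adaptation rule is one line, yet the device meets the "self-adapt to new situations" criterion: swap in a room with different lag and it re-learns the margin without reprogramming.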
 
  • #23
russ_watters
Mentor
I think a key point made in an old article was trying to duplicate how living things actually think, or ways to emulate that process.
There is no reason to make that a goal if the purpose is useful AI.
Agreed! I'm forgetful, emotional, and slow-witted. Why is it a useful goal to try to emulate my brain? Why is AI being researched if that's all it is? I thought the point of machines and computers was to do the things humans do better than humans do them (or just so we don't have to bother with doing them)?
 
  • #24
DaveC426913
Gold Member
I'll give a reason why I don't like narrow AI definitions. A narrow definition presupposes a particular approach to the solution,
No it doesn't. I don't know why you would think that.

Any definition will be about the result; only a bad definition would presume to specify how one might achieve the result.
 
  • #25
DaveC426913
Gold Member
Agreed! So can you provide one?
I think the Turing Test will have to suffice as the gold standard for now. Though, as I'm about to point out, intelligence is a continuum, with the TT near the top end.

Fair enough. Older thermostats could be tuned - but by humans - to adjust the rate or overshoot of their temperature control. New thermostats learn the response of the system and adjust their operation to maximize their ability to maintain the setpoint. That fits your definition of learning, doesn't it?
Yes.

So we have a scale of artificial intelligence.
Learning thermostats are akin to flatworms, while HAL 9000 is akin to humans.

But, to my earlier point: someone suggested that AI has been around since before software. The learning thermostat is software. So I still see no examples of mechanical AI.

I thought speed was everything? We use machines to automate physical tasks that we could theoretically do ourselves, but don't because we don't feel like it or it would take more effort than we are willing to put in. We could lift a ton by human power if we want, but we choose not to because it is faster and easier to use a crane - and a crane is more reliable. We use computers to do the same thing for mental tasks: they are faster and we can do other things with our time instead.
Yes. None of which involves intelligence.

Fair enough -- but you'll need to work out a definition that excludes that component because as we've seen from the OP, many people seem to believe that that's a key part of the definition.
Which goes back to the media being a low bar to look to for a good definition. We don't have to accept it.

But nor do we have to throw the AI baby out with the bathwater.

And me, personally, regardless of what we call it, I want a computer that can quickly do things I can't do or don't feel like doing. I don't need or necessarily even want it to mimic me to do that.
Right. The intelligence would be for it to do things for you without you having to wet-nurse it.

Well that's an interesting take. You're basically saying that a hallmark of "intelligence" is reduced precision/accuracy, aren't you?
No. The input has reduced precision - in its parameters. The AI's purpose is to figure out what I want without me having to explicitly tell it.

Do people really want that in a computer? In my opinion, there is no such thing as a "blurred set of competing goals". There is only indecisiveness.
My internet radio (Jango) plays me what I tell it to. But after a while, it starts figuring out what I might like, without me specifically telling it what I like.

The competing goals are:
Play me what I asked for.
Play me some stuff I didn't ask for.

Its intelligence quotient would be in how well it balances those two things. They are fuzzy, because I am fickle. Don't play me too much of what I've already heard, but don't stray too far outside my comfort zone.

"too much" and "too far" are ambiguous input parameters.
The fact that I'm quite happy with the balance it's found is an indication that it has (as best it can) met the goal of pleasing me.

Sure, I guide it once in a while, by saying "I don't like that song", but a dumb program would simply stop playing that song. An intelligent program extrapolates and adapts, to further refine my playlist without my further input.
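That balancing act can be caricatured in a few lines of code. This is a toy sketch, nothing like Jango's actual algorithm; the song names, tags, and constants are all invented:

```python
import random

def pick_song(scores: dict, explore_rate: float = 0.2, rng=random) -> str:
    """Balance the two fuzzy goals: usually play a known favourite,
    but sometimes stray outside the comfort zone."""
    if rng.random() < explore_rate:
        return rng.choice(list(scores))    # explore: something unasked-for
    return max(scores, key=scores.get)     # exploit: a known favourite

def dislike(song: str, scores: dict, tags: dict) -> None:
    """A dumb player would just ban the song; an adaptive one also
    demotes every song sharing a tag with it, extrapolating from
    a single piece of feedback."""
    scores[song] = float("-inf")
    for other, other_tags in tags.items():
        if other != song and tags[song] & other_tags:
            scores[other] -= 1.0

# Invented library: saying "I don't like that" to one power ballad
# also nudges down the other power ballad, but leaves the jazz alone.
tags = {"ballad_1": {"power-ballad"}, "ballad_2": {"power-ballad"},
        "jazz_1": {"jazz"}}
scores = {s: 0.0 for s in tags}
dislike("ballad_1", scores, tags)
print(scores["ballad_2"], scores["jazz_1"])  # → -1.0 0.0
```

The "too much" / "too far" ambiguity lives in the `explore_rate` knob: there is no correct value, only a balance the listener happens to be happy with.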
 
