The time when AI can overcome humans

In summary, the conversation discusses the topic of machines overcoming human intelligence and understanding. The participants mention examples such as machines beating humans in games and Siri's ability to interpret human input. They also discuss the concept of the Turing Test and the difficulty in determining when a machine truly understands something. The conversation ends with the suggestion that achieving overall general intelligence may not be possible for machines.
  • #1
José Ricardo
How long until a machine can overcome a human? I was thinking around 50 years, but I am not sure. I'd appreciate it if you could clear up this doubt for me.
 
  • #3
Hi @anorlunda,

I'm asking about when a machine will overcome the human mind, consciousness.
For example, say I tell a machine this:
"My lawyer is a vampire."
A human understands immediately that my lawyer takes too much of my money. I would like to know when AI will arrive at this point.
 
  • #4
José Ricardo said:
I'm asking about when a machine will overcome the human mind, consciousness.
For example, say I tell a machine this:
"My lawyer is a vampire."
A human understands immediately that my lawyer takes too much of my money. I would like to know when AI will arrive at this point.
Our current AI likely can already parse this. I bet Siri might not be stumped by the analogy. The question is: by what criteria do you decide whether it has understood it? Probably by whether its response makes sense.

The problem is, it's a continuum. Every example requires a different amount of conceptualizing horsepower.

You might want to read up on the Turing Test. Essentially, it is a test that assumes there's no way to specifically quantify when a given machine has been successful at simulating a human. All you can do is try to stump it, and set a time limit on how long it has to keep up without the tester being able to tell.

I mean, how would you know it "understands"? Talk to it for a minute? An hour? A year?
 
  • #5
DaveC426913 said:
Our current AI likely can already parse this. I bet Siri might not be stumped by the analogy. The question is: by what criteria do you decide whether it has understood it? Probably by whether its response makes sense.

The problem is, it's a continuum. Every example requires a different amount of conceptualizing horsepower.

You might want to read up on the Turing Test. Essentially, it is a test that assumes there's no way to specifically quantify when a given machine has been successful at simulating a human. All you can do is try to stump it, and set a time limit on how long it has to keep up without the tester being able to tell.

I mean, how would you know it "understands"? Talk to it for a minute? An hour? A year?

at the time.
 
  • #6
I didn't know that about Siri, Dave. Thanks!
 
  • #7
I'll read up on the Turing Test. Thanks, Dave.
 
  • #8
José Ricardo said:
Hi @anorlunda,

I'm asking about when a machine will overcome the human mind, consciousness.
For example, say I tell a machine this:
"My lawyer is a vampire."
A human understands immediately that my lawyer takes too much of my money. I would like to know when AI will arrive at this point.
Machines can be programmed to respond sensibly, as long as the programmer can foresee these questions.
 
  • #9
José Ricardo said:
I didn't know that about Siri, Dave. Thanks!
Not a fact!

Just an offhand guess. Siri is capable of some amount of simulated human interpretation. But it's just clever software, not AI.
 
  • #10
anorlunda said:
For example, machines already beat humans in chess and other games.
Most chess engines are conventional programs. The main exception is AlphaZero, which uses a form of neural network; its predecessor AlphaGo was the first to beat the best human players of Go. A version of AlphaZero (using two neural networks, from what I understand) beat Stockfish 8, which was already stronger than humans. However, Stockfish 8 was limited to one move per minute, preventing it from managing its time, and it ran without the opening book and endgame tablebases it normally uses; the then-current Stockfish 9 was not used in the match.

One of several articles about this:

https://en.chessbase.com/post/alpha-zero-comparing-orang-utans-and-apples

Still, this is an example of an AI for a specific task, not a generalized AI. One milestone yet to be achieved is for an AI to match the overall general intelligence of a cricket or a roach. I'm not sure this is possible, as some of the intelligence in living beings is "pre-programmed": Monarch butterfly migration, for example, in which the Monarchs go through 3 or 4 generations on the northward migration.

https://en.wikipedia.org/wiki/Monarch_butterfly_migration
 
  • #11
rcgldr said:
I'm not sure this is possible, as some of the intelligence in living beings is "pre-programmed"
Of all things to have trouble with, why would a computer have trouble with pre-programmed behavior?

That's the easy part. The hard part is learned behavior and adaptation.
 
  • #12
I don't hold with narrow definitions of AI. I think our machines have effectively been doing AI in the broad sense since long before we had computers and software.
 
  • #13
anorlunda said:
I don't hold with narrow definitions of AI. I think our machines have effectively been doing AI in the broad sense since long before we had computers and software.
I'm not sure what kind of definition of Artificial Intelligence would include machines that do not have software.

Intelligence is generally understood to be the ability to apply existing patterns to new circumstances. I can't think of any examples that precede the advent of modern computers. Can you?

Note that
  1. The term AI is generally invoked to distinguish it from the types of mechanical sophistication that have gone before it. Your definition essentially makes no distinction. So now we need a word for that.
  2. What term would you like to use to describe machines that can mimic sophisticated human behavior at a level better than mechanical devices have achieved before software? The rest of us call it AI.
  3. If you invoke your personal definition of AI when communicating with others, you will run into a lot of problems, since it already has a generally-accepted meaning.
 
  • #14
DaveC426913 said:
I'm not sure what kind of definition of Artificial Intelligence would include machines that do not have software.
Here's the basic definition in the Wikipedia page:
"...any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals."

That sounds like a mechanical thermostat to me.
 
  • #15
russ_watters said:
Here's the basic definition in the Wikipedia page:
"...any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals."

That sounds like a mechanical thermostat to me.
That's quite a stretch for the term "intelligence".

A thermostat is, at its core, simply a bimetallic coil that expands and contracts with temperature.

If that qualifies as artificial intelligence, then the term has no useful meaning.

I'd argue that's quite a stretch for the terms "perceive", "maximize its chance" and "goals" as well.

I guess a wind vane on a windmill counts as AI too.

So now we need a new term for a device that can match human-level intelligence in behavior. AI is out. How about AI-ii?
 
  • #16
DaveC426913 said:
That's quite a stretch for the term intelligence.

A thermostat is, at its core, simply a bimetallic coil that expands and contracts with temperature.

If that qualifies as artificial intelligence, then the term has no useful meaning.
Agreed; that's exactly my opinion on the subject. I think it is a term used mostly to generate headlines, and it has no useful meaning. But I would welcome being shown a useful definition/application for the term.
So now we need a new term for a device that can match human-level intelligence in behavior. AI is out. How about AI-ii?
Don't be silly. If computers matching human-level intelligence is the definition you choose, so be it, but please recognize that that isn't anything special. Mechanical computers were doing it a hundred-plus years ago. https://en.wikipedia.org/wiki/Difference_engine

The entire point of a computer - mechanical or digital - is that it exceeds human intelligence. We wouldn't use them otherwise!
 
  • #18
russ_watters said:
Agreed; that's exactly my opinion on the subject. I think it is a term used mostly to generate headlines, and it has no useful meaning.
Maybe so in the media, but that shouldn't be where the bar is set. Those who want to discuss artificial intelligence should have a useful working definition.

russ_watters said:
Don't be silly.
Of course I was being facetious. That was directed more at the folly of anorlunda's idea of eliminating the distinction between mere mechanical aptitude and bona fide intelligence.

russ_watters said:
Mechanical computers were doing that a hundred plus years ago. https://en.wikipedia.org/wiki/Difference_engine
I thought of the Difference Engine, but didn't mention it, as I didn't expect anyone to actually consider it an example of intelligence.

It does very dumb things, just fast. That is not any definition of intelligence I would go with.

I stand by the hallmark of intelligence as the ability to adapt (that is, self-adapt) to new situations. The Difference Engine cannot do that at all.

russ_watters said:
The entire point of a computer - mechanical or digital - is that it exceeds human intelligence. We wouldn't use them otherwise!
I completely disagree.

The Difference Engine, for example, doesn't exceed human intelligence at all. It does basic arithmetic: nothing more than a (very) patient Grade 2 student could do with a pencil and a (very) long sheet of foolscap.

It can't abstract concepts or apply previous lessons to a problem it has not seen before.

The point of (current) computers is that they do mundane things very fast. I see virtually no intelligence in there, except for software so sophisticated that it can make decisions among a blurred set of competing goals. (If the goals aren't blurry, straight mathematical calculations with unambiguous outcomes suffice, i.e. what mainstream computers do today. But we're starting to teach them fuzzy logic.)
 
  • #19
I'll give a reason why I don't like narrow AI definitions. A narrow definition presupposes a particular approach to the solution, and excludes the chance that someone will succeed by a very different approach.

In the 80s, knowledge-based systems and inference engines were the only serious contenders for AI. Neural nets were on the fringe and were not (perhaps still not) considered intelligent.

Logic is an approach.
Neural nets is an approach.
Software is an approach.
What about biology?
What about approaches not yet conceived?

When debugging or inventing, it is very important to keep your mind open and not presuppose the path to the solution.
 
  • #20
rcgldr said:
I'm not sure this is possible, as some of the intelligence in living beings is "pre-programmed"

DaveC426913 said:
Of all things to have trouble with, why would a computer have trouble with pre-programmed behavior? That's the easy part. The hard part is learned behavior and adaptation.
I wasn't considering AI that would include both pre-programmed and learned behavior. In that case, I'm not sure how much of a cricket's or roach's intelligence is pre-programmed versus learned. I do recall that duplicating the intelligence of small insects like these was at one time considered an early goal for AI, though it's not clear to me why. It might have been related to how to react to a situation never encountered before: a problem-solving process (possibly based on the nearest similar experience) as opposed to a pre-programmed response. I think a key point made in an old article was trying to duplicate how living things actually think, or ways to emulate that process.
 
  • #21
rcgldr said:
I think a key point made in an old article was trying to duplicate how living things actually think, or ways to emulate that process.

There is no reason to make that a goal if the purpose is useful AI.
 
  • #22
DaveC426913 said:
Maybe so in the media, but that shouldn't be where the bar is set. Those who want to discuss artificial intelligence should have a useful working definition.
Agreed! So can you provide one?
Of course I was being facetious. That was more directed at anorlunda's idea of the folly of eliminating the distinction between mere mechanical aptitude and bona fide intelligence.
I don't think moving the bar to a bad place proves it was previously in a bad place.
I stand by the hallmark of intelligence as the ability to adapt (that is, self-adapt) to new situations. The Difference Engine cannot do that at all.
Fair enough. Older thermostats could be tuned, but by humans, to adjust the rate or overshoot of their temperature control. Newer thermostats learn the response of the system and adjust their operation to better maintain the setpoint. That fits your definition of learning, doesn't it?
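That learning loop (observe the overshoot, tune your own cutoff) can be sketched in a few lines. This is a toy model with made-up numbers, not any real thermostat's algorithm:

```python
class LearningThermostat:
    """Toy sketch of an adaptive thermostat: it observes overshoot
    and tunes its own cutoff margin. Hypothetical model only."""

    def __init__(self, setpoint: float):
        self.setpoint = setpoint
        self.margin = 0.0  # how early to cut the heat, in degrees

    def heating_on(self, temp: float) -> bool:
        # Cut power slightly below the setpoint to absorb thermal lag.
        return temp < self.setpoint - self.margin

    def observe_peak(self, peak_temp: float) -> None:
        # Learn: if the room overshot, cut off earlier next cycle.
        overshoot = peak_temp - self.setpoint
        if overshoot > 0:
            self.margin += 0.5 * overshoot

t = LearningThermostat(setpoint=20.0)
t.observe_peak(21.0)  # the room overshot by 1 degree last cycle
print(t.margin)       # → 0.5: next cycle it cuts off half a degree earlier
```

The device's behavior changes in response to its own observations, with no human retuning it, which is the distinction being drawn here.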
I completely disagree.

The Difference Engine, for example, doesn't exceed human intelligence at all. It does basic arithmetic: nothing more than a (very) patient Grade 2 student could do with a pencil and a (very) long sheet of foolscap.
I thought speed was everything? We use machines to automate physical tasks that we could theoretically do ourselves, but don't, because we don't feel like it or it would take more effort than we are willing to put in. We could lift a ton by human power if we wanted to, but we choose not to because it is faster and easier to use a crane, and a crane is more reliable. We use computers to do the same thing for mental tasks: they are faster, and we can do other things with our time instead.
The point of (current) computers is that they do mundane things very fast. I see virtually no intelligence in there...
Fair enough -- but you'll need to work out a definition that excludes that component because as we've seen from the OP, many people seem to believe that that's a key part of the definition.

And me, personally: regardless of what we call it, I want a computer that can quickly do things I can't do or don't feel like doing. I don't need or even necessarily want it to mimic me to do that.
...excepting software that is so sophisticated it is capable of making decisions between a blurred set of competing goals (if they're not blurry, then straight mathematical calculations with unambiguous outcomes would suffice - i.e what mainstream computers of today do. But they're starting to teach them fuzzy logic).
Well that's an interesting take. You're basically saying that a hallmark of "intelligence" is reduced precision/accuracy, aren't you? Do people really want that in a computer? In my opinion, there is no such thing as a "blurred set of competing goals". There is only indecisiveness.

My take is that people seem to want to build computers to mimic or actually be like flawed humans. I ask: why? That seems like a downgrade to me.
 
  • #23
rcgldr said:
I think a key point made in an old article was trying to duplicate how living things actually think, or ways to emulate that process.
anorlunda said:
There is no reason to make that a goal if the purpose is useful AI.
Agreed! I'm forgetful, emotional, and slow-witted. Why is it a useful goal to try to emulate my brain? Why is AI being researched if that's all it is? I thought the point of machines and computers was to do the things humans do better than humans do them (or just so we don't have to bother doing them)?
 
  • #24
anorlunda said:
I'll give a reason why I don't like narrow AI definitions. A narrow definition presupposes a particular approach to the solution,
No, it doesn't. I don't know why you would think that.

Any definition will be about the result; only a bad definition would presume to specify how one might achieve the result.
 
  • #25
russ_watters said:
Agreed! So can you provide one?
I think the Turing Test will have to suffice as the gold standard for now. Though, as I'm about to point out, intelligence is a continuum, with the TT near the top end.

russ_watters said:
Fair enough. Older thermostats could be tuned - but by humans - to adjust the rate or overshoot of their temperature control. New thermostats learn the response of the system and adjust their operation to maximize their ability to maintain the setpoint. That fits your definition of learning, doesn't it?
Yes.

So we have a scale of artificial intelligence.
Learning thermostats are akin to flatworms, while HAL 9000 is akin to humans.

But, to my earlier point: someone suggested that AI has been around before software. The learning thermostat is software. So I still see no examples of mechanical AI.

russ_watters said:
I thought speed was everything? We use machines to automate physical tasks that we could theoretically do ourselves, but don't because we don't feel like it or it would take more effort than we are willing to put in. We could lift a ton by human power if we want, but we choose not to because it is faster and easier to use a crane - and a crane is more reliable. We use computers to do the same thing for mental tasks: they are faster and we can do other things with our time instead.
Yes. None of which involves intelligence.

russ_watters said:
Fair enough -- but you'll need to work out a definition that excludes that component because as we've seen from the OP, many people seem to believe that that's a key part of the definition.
Which goes back to the media being a low bar to look to for a good definition. We don't have to accept it.

But nor do we have to throw the AI baby out with the bathwater.

russ_watters said:
And me, personally, regardless of what we call it, I want a computer that can quickly do things I can't or don't feel like it. I don't need or necessarily even want it to mimic me to do that.
Right. The intelligence would be for it to do things for you without you having to wet-nurse it.

russ_watters said:
Well that's an interesting take. You're basically saying that a hallmark of "intelligence" is reduced precision/accuracy, aren't you?
No. The input has reduced precision in its parameters. The AI's purpose is to figure out what I want without me having to explicitly tell it.

russ_watters said:
Do people really want that in a computer? In my opinion, there is no such thing as a "blurred set of competing goals". There is only indecisiveness.
My internet radio (Jango) plays me what I tell it to. But after a while, it starts figuring out what I might like, without me specifically telling it what I like.

The competing goals are:
Play me what I asked for
Play me some stuff I didn't ask for.

Its intelligence quotient would be in how well it balances those two things. They are fuzzy because I am fickle: don't play me too much of what I've already heard, but don't stray too far outside my comfort zone.

"too much" and "too far" are ambiguous input parameters.
The fact that I'm quite happy with the balance it's found is an indication that it has (as best it can) met the goal of pleasing me.

Sure, I guide it once in a while by saying "I don't like that song", but a dumb program would simply stop playing that song. An intelligent program extrapolates and adapts, refining my playlist without further input from me.
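This balance is essentially the classic explore/exploit trade-off. A minimal epsilon-greedy sketch (the song names and ratings are made up for illustration, and real recommenders are far more elaborate):

```python
import random

def pick_song(ratings: dict, epsilon: float = 0.2) -> str:
    """With probability epsilon, explore (stray outside the comfort zone);
    otherwise exploit (play what is known to please)."""
    if random.random() < epsilon:
        return random.choice(list(ratings))
    return max(ratings, key=ratings.get)

ratings = {"song_a": 0.9, "song_b": 0.4, "song_c": 0.1}
# "I don't like that song" is feedback: lower the rating and keep adapting,
# rather than merely blacklisting the track.
ratings["song_a"] -= 0.6
print(pick_song(ratings, epsilon=0.0))  # → song_b
```

Tuning epsilon is exactly the "too much" / "too far" knob: a higher epsilon means more unfamiliar songs.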
 
  • #26
In his book "Life 3.0" (p. 49), Max Tegmark notes:

At a recent (approx. 2017) symposium on AI at the Swedish Nobel Foundation "... a panel of leading AI researchers were asked to define intelligence, they argued at length without reaching a consensus. ... So there's clearly no undisputed 'correct' definition of intelligence. Instead, there are many competing ones, including capacity for logic, understanding, planning, emotional knowledge, self-awareness, creativity, problem solving and learning."

Tegmark's definition of intelligence is the ability to accomplish complex goals.

He says that this is broad enough to include the goals mentioned above, as well as the Oxford Dictionary definition, "the ability to acquire and apply knowledge". It also includes @DaveC426913's criterion of the ability to adapt.

Will we know what intelligence in a machine is when we see it?
 
  • #28
rcgldr said:
I think a key point made in an old article was trying to duplicate how living things actually think, or ways to emulate that process.

anorlunda said:
There is no reason to make that a goal if the purpose is useful AI.
Perhaps it would be better stated as "problem solving" versus "thinking". That old article may have been focused on neural networks as opposed to AI in general.

rcgldr said:
insects - cockroaches
Although I first saw this mentioned in an old article, a web search for AI insects or AI cockroaches still produces quite a few current hits, so at least a sub-field of AI is trying to duplicate the problem-solving capabilities of some insects (cockroaches, worms, ...). As I mentioned before, some of this involves pre-programmed versus learned behaviors.
 
  • #29
Svein said:
It depends largely who is "seeing" it. Check out the old ELIZA program (https://en.wikipedia.org/wiki/ELIZA).

I didn't know about this program that acts like a psychologist or psychiatrist. Amazing!

@Svein, when do you think we are going to see AIs that give classes and teach writing, like a teacher or professor of Language and Literature, or of Mathematics, except that it's an AI?
 
  • #30
Svein said:
It depends largely who is "seeing" it. Check out the old ELIZA program (https://en.wikipedia.org/wiki/ELIZA).

In the early years of AI, people were overly fascinated with computers' seemingly intelligent properties (the "ELIZA effect"). Today there is much more skepticism about whether there is actual intelligence behind a computer's response. Many believe that true intelligence (however you wish to define it) may not be possible, at least for the foreseeable future. But could it happen that we will NOT recognize true intelligence if and when it occurs?
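For context on how shallow the mechanism was: ELIZA worked by keyword-and-template pattern matching. A minimal sketch in the same spirit (these patterns are illustrative, not Weizenbaum's original DOCTOR script):

```python
import random
import re

# A few DOCTOR-style rules: regex pattern -> response templates.
RULES = [
    (re.compile(r"\bI need (.+)", re.I),
     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (re.compile(r"\bI am (.+)", re.I),
     ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (re.compile(r"\bmy (\w+)", re.I),
     ["Tell me more about your {0}.", "Why does your {0} concern you?"]),
]
DEFAULT = ["Please go on.", "I see.", "Can you elaborate on that?"]

def respond(text: str) -> str:
    # Reflect the first matching pattern back at the user.
    for pattern, templates in RULES:
        m = pattern.search(text)
        if m:
            return random.choice(templates).format(*m.groups())
    return random.choice(DEFAULT)

print(respond("I need a vacation"))
```

That such shallow reflection can feel like understanding is precisely the ELIZA effect.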
 
  • #31
@José Ricardo, if you are interested in AI, how it could impact the future of mankind, and why all of us should think about what we want for our future with AI, read Max Tegmark's book "Life 3.0: Being Human in the Age of Artificial Intelligence".
 
  • #32
Svein said:
It depends largely who is "seeing" it. Check out the old ELIZA program (https://en.wikipedia.org/wiki/ELIZA).
One of my more ambitious programming attempts on my little C=64.

Once, I stopped my ELIZA simulation to do some debugging and typed the command to display a few lines for analysis. What I got back at the command prompt was: "I'm sorry Dave. I'm afraid I can't do that."

All the hairs on the back of my neck stood up. And I slowly and carefully reached behind and switched off the power. Then I went outside and watched the sunset for a bit.

True story.

Good times. Good times.
 
  • #33
gleem said:
@José Ricardo, If you are interested in AI , how it could impact the future of mankind, and why all of us should think about what we want for our future with AI read Max Tegmark's book "Life 3.0: Being Human in the Age of Artificial Intelligence"

Thanks, Gleem! <3
 

1. When will AI be able to overcome humans?

There is no definitive answer to this question as it depends on various factors such as advancements in technology, research, and development. Some experts predict that AI could surpass human intelligence in the next few decades, while others believe it may take much longer.

2. What are the potential consequences of AI surpassing human intelligence?

There are both positive and negative consequences that could arise from AI surpassing human intelligence. On the positive side, AI could help us solve complex problems, improve efficiency, and enhance our quality of life. However, there are also concerns about job displacement, ethical issues, and the potential for AI to become uncontrollable.

3. Can humans prevent AI from overcoming us?

It is difficult to say for sure whether humans can prevent AI from surpassing us. Some experts believe that we can regulate and limit the development of AI to prevent it from becoming too powerful. Others argue that once AI reaches a certain level of intelligence, it may be impossible to control or stop its advancement.

4. How does AI compare to human intelligence?

AI and human intelligence are fundamentally different. AI is based on algorithms and data, while human intelligence is based on emotions, creativity, and consciousness. AI can process and analyze vast amounts of data at a much faster rate than humans, but it lacks the ability to think critically and make complex decisions based on morality and emotions.

5. What are the current limitations of AI?

Despite advancements in AI technology, there are still many limitations that prevent it from surpassing human intelligence. AI lacks common sense, creativity, and emotional intelligence, which are essential for many tasks. It also struggles with understanding context and making ethical decisions. Additionally, AI is only as good as the data it is trained on, and biases in the data can lead to biased outcomes.
