Limits of AI

  • Thread starter Delong
  • #1
Delong

Main Question or Discussion Point

I am skeptical that strong, human-like AI is possible. I think AI can be made to look very much like human intelligence, but I don't think it will actually be like human intelligence in the way that you and I are. For example, I don't think a computer can actually feel pain, pleasure, fear, or desire. I could be wrong, but I am simply skeptical. I'm sure people can have those feelings toward robots and computers that look like they have them. But can computers really feel? I am doubtful. I think feeling comes from biology, and computers don't have any real biology, only a simulated kind. Therefore I think there are limits. I obviously don't know the specifics, but I am curious about it. Maybe computers can come very close, but I don't think they can ever be exactly the same. Who are the big people working on AI right now? I would like to follow their progress.
 

Answers and Replies

  • #2
Ryan_m_b
Staff Emeritus
Science Advisor
I have no strong opinion on whether we will one day create a humanesque conscious being in silico, but as for whether it's possible, I see no reason why not. If something exists, then you can simulate it, and the simulation will have the same characteristics as the original.
 
  • #3
DavidSnider
Gold Member
There is no way to determine whether another human being is "really feeling" something, let alone answer the even more abstract question of whether a simulation of a feeling is equivalent to a real one. People sometimes even doubt THEIR OWN feelings.

It's difficult for me to imagine how you would even test this.

Human-like AI is a bit of a waste of time, I think. We already have humans, billions of 'em. What we need computers for are things like summing a million digits in a fraction of a second, or crunching away at a problem for days on end without rest, and other "stupid" things that humans are incapable of.
 
  • #4
Ryan_m_b
Staff Emeritus
Science Advisor
Human-like AI is a bit of a waste of time, I think. We already have humans, billions of 'em. What we need computers for are things like summing a million digits in a fraction of a second, or crunching away at a problem for days on end without rest, and other "stupid" things that humans are incapable of.
Agreed. Developing better and more capable software is by no means synonymous with trying to create a digital sentient being. Hell, one day we may have software packages so good that they can pass a Turing test and perform almost any task a human could, but that in no way implies that what's going on under the hood is anything like what's going on in our grey matter.
 
  • #5
Delong
I also agree. There doesn't seem to be any good reason to pursue humanesque AI other than sensationalistic mad-science hoorah. First of all, is it even possible? And second, how will we treat these computers, and how will that change the distinction we make between humans and machines? Like you said, we could create computers that pass every Turing test, but would that really be something that can feel and think like a human, or just something that can pass every Turing test we have so far thought up? Anyway, I don't understand computer science or philosophy of mind extremely well; I am simply curious about the possibilities.

I know some AI already exists: things like Watson and the chess champion. Even calculators are a simple form of AI.
As for crazy future scenarios, I'm more worried about the things that happened in A.I. or I, Robot than the things that happened in Terminator or The Matrix. Can robots really feel? If so, how will that change how we think about ourselves and about machines?
Perhaps computers can sufficiently emulate the cognitive aspects of human and animal thought. They might even be able to have memories, learn, plan, make decisions, make judgements, or form "desires". But can they actually experience things like motivation or desire or curiosity? Well, like DavidSnider said, how can we tell? I suppose we first have to investigate what it means to really feel these things in humans, and how we know that this is the case, and then see whether it's possible to simulate that in computers. Perhaps I should study a little neuroscience and philosophy of mind before I try to answer that. COOL!
 
  • #6
Yes to all. It has always seemed to me that superimposing emotions on a calculator would have no effect other than to greatly compromise its ability to calculate.
 
  • #7
D H
Staff Emeritus
Science Advisor
Insights Author
I also agree. There doesn't seem to be any good reason to pursue humanesque AI other than sensationalistic mad-science hoorah.
There's money in them there hills. Lots and lots and lots of money. It turns out that a good amount of the work we think requires intelligence (whatever that is) is just as rote as the work of those whose jobs have already been automated by run-of-the-mill, non-AI software. Many aspects of the task of putting together a disparate bunch of parts to form some fantastic widget have already been automated by simple software and machines. Planning the process, ordering the parts, keeping the machines running: that is where tomorrow's job security lies. Wrong. That work, too, is rote and can be automated by yesterday's AI. The only part that isn't rote (so far) is coming up with the process in the first place.

First of all, is it even possible? And second, how will we treat these computers, and how will that change the distinction we make between humans and machines?
It's quite possible that it doesn't require real intelligence (whatever that is) at all.

Those of us who struggled through four years of college to get a bachelor's degree, and then even more to get an advanced degree, look down on our high-school cohorts who never went to college at all. They have a problem with unemployment; we don't. Wrong. A lot of what we do requires no more intelligence than knowing how to operate some machine in a factory. Whether this results in a neo-Luddite revolution remains to be seen. The nascent roots of this revolution are here right now in the Occupy Wall Street crowd.

Like you said, we could create computers that pass every Turing test, but would that really be something that can feel and think like a human, or just something that can pass every Turing test we have so far thought up?
Google the term "Chinese room". Here's the wiki article: http://en.wikipedia.org/wiki/Chinese_room.
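
To make the thought experiment concrete, here is a toy sketch (Python; the rule book and phrases are made-up stand-ins, not Searle's actual example) of a program that gives fluent-looking answers by pure rote lookup, with zero understanding of what it is saying:

[CODE]
# A toy "Chinese room": replies come from rote lookup in a rule book,
# so correct-looking answers involve no understanding whatsoever.
RULE_BOOK = {
    "How are you?": "I'm fine, thank you.",
    "Can you feel pain?": "That is a very deep question.",
}

def chinese_room(message: str) -> str:
    """Return whatever the rule book dictates; meaning never enters."""
    return RULE_BOOK.get(message, "Could you rephrase that?")

print(chinese_room("Can you feel pain?"))  # fluent-looking, yet mindless
[/CODE]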

I know some AI already exists: things like Watson and the chess champion. Even calculators are a simple form of AI.
A calculator is not AI. Neither is Deep Blue. Calculators are just abaci. There is zero intelligence behind an abacus or a calculator. Deep Blue, while it did beat Garry Kasparov, did so by means of dumb brute force. There was very little intelligence behind Deep Blue. Developers of chess programs long ago abandoned the AI approach to computerized chess.
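
To show in miniature what "dumb brute force" means, here is a sketch of exhaustive game-tree search (Python; the "race to 10" game is a made-up stand-in, and Deep Blue's real search, alpha-beta over chess positions on custom hardware, was vastly larger). It plays perfectly by trying every line of play, with no insight at all:

[CODE]
# Exhaustive minimax over a toy game: players alternately add 1 or 2 to
# a running total, and whoever reaches exactly 10 wins.
def minimax(total: int, maximizing: bool) -> int:
    if total == 10:
        # The player who just moved reached 10 and won.
        return -1 if maximizing else +1
    best = max if maximizing else min
    return best(minimax(total + m, not maximizing)
                for m in (1, 2) if total + m <= 10)

print(minimax(0, True))  # +1: the first player can force a win
[/CODE]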

Whether brute force will suffice to accomplish the things we deem to be signs of "intelligence" (whatever that is) remains to be seen. Whether AI researchers can use AI techniques to solve those problems is another question. Yet another question is what this thing we call intelligence actually is.
 
  • #8
Ryan_m_b
Staff Emeritus
Science Advisor
Yet another question is what this thing we call intelligence actually is.
Definitely. The biggest problem with talking about things like this is the severe lack of good definitions. When we can't define exactly what sentience or intelligence are, how are we going to have a meaningful discussion about creating them? I tend to find it better to talk in terms of capability, because at the end of the day that's what we want from machines: for them to do work so we don't have to. In fact, it would be far better if the machines, however capable and human-appearing*, were categorically nothing like humans in terms of consciousness or intelligence, because otherwise we get into a huge ethical quagmire.

*By human-appearing I mean along the lines of a natural-language user interface rather than an Asimov-type human-looking robot.
 
  • #9
Delong
It's quite possible that it doesn't require real intelligence (whatever that is) at all.

Those of us who struggled through four years of college to get a bachelor's degree, and then even more to get an advanced degree, look down on our high-school cohorts who never went to college at all. They have a problem with unemployment; we don't. Wrong. A lot of what we do requires no more intelligence than knowing how to operate some machine in a factory. Whether this results in a neo-Luddite revolution remains to be seen. The nascent roots of this revolution are here right now in the Occupy Wall Street crowd.


I don't understand how this connects to the question it was supposed to answer.
 
  • #10
Yet another question is what this thing we call intelligence actually is.
I don't think this is a very difficult question at all. Intelligence is, as far as I'm concerned, "the ability to solve problems." Since there are many different kinds of problems, there must necessarily be different kinds of intelligence. This can even be simplified to "knowing what to do".

Then there's the fact that people tend to use different definitions. However, since language is primarily a means of communication, this definition makes sense and does not seem to do injustice to most forms of what people call intelligence. Obviously, there are those who would say intelligence is primarily the ability to solve problems of kind X, or problems of kind Y that are most useful in situation Z. But I have yet to find a more pragmatic definition than the aforementioned one.
 
  • #11
PAllen
Science Advisor
2019 Award
I have long felt that it is enormously more likely that the first alien intelligence we communicate with will be one we built rather than an extraterrestrial one. That isn't to say how likely, or how soon, this is; just that I think there are fewer fundamental obstacles to the former.
 
  • #12
256bits
Gold Member
I don't think this is a very difficult question at all. Intelligence is, as far as I'm concerned, "the ability to solve problems." Since there are many different kinds of problems, there must necessarily be different kinds of intelligence. This can even be simplified to "knowing what to do".
Wouldn't "knowing what to do" be more like instinct, and easily programmable?
I bet you meant that intelligence can be thought of as solving a problem not encountered before.
 
  • #13
I bet you meant that intelligence can be thought of as solving a problem not encountered before.
If you meant "not encountered before by said person," then yes. If you had already encountered the problem, you would mostly be recalling the solution from memory, which I agree is not a sign of great intelligence per se.
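
The distinction is easy to make concrete in code (Python; the pair-sum puzzle is just a made-up stand-in): the first time a problem arrives it has to be solved by actual search, while a repeat of the same problem is answered by a bare memory lookup.

[CODE]
# Recall versus solving: a novel problem triggers a brute-force search;
# a previously seen problem is answered straight from memory.
memory = {}

def answer(numbers: tuple, target: int):
    key = (numbers, target)
    if key in memory:                      # seen before: pure recall
        return memory[key]
    solution = next(((a, b)                # novel: actually search
                     for i, a in enumerate(numbers)
                     for b in numbers[i + 1:] if a + b == target), None)
    memory[key] = solution
    return solution

print(answer((3, 9, 4, 7), 11))  # solved by search -> (4, 7)
print(answer((3, 9, 4, 7), 11))  # same problem again -> recalled
[/CODE]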
 
  • #14
D H
Staff Emeritus
Science Advisor
Insights Author
I don't think this is a very difficult question at all. Intelligence is, as far as I'm concerned, "the ability to solve problems." Since there are many different kinds of problems, there must necessarily be different kinds of intelligence. This can even be simplified to "knowing what to do".
Defining "intelligence" is a very hard problem. The only working definition is the terribly circular "intelligence is the quantity that IQ tests measure." IQ tests offer at best an ersatz measure of intelligence; they measure intelligence in the sense of a "Chinese room" test. True intelligence is, in my mind, the ability to solve problems that no one has yet solved. One big problem with this definition: how are you going to measure it? Detect it? Define it other than after the fact?

To exemplify the difference between true intelligence and the ersatz intelligence measured by IQ tests, one need look no further than Richard Feynman. He was without doubt one of the most intelligent of all recent physicists, yet his ersatz intelligence (his IQ test score) was a paltry 125.
 
  • #15
Defining "intelligence" is a very hard problem. The only working definition is the terribly circular "intelligence is the quantity that IQ tests measure." IQ tests offer at best an ersatz measure of intelligence; they measure intelligence in the sense of a "Chinese room" test. True intelligence is, in my mind, the ability to solve problems that no one has yet solved. One big problem with this definition: how are you going to measure it? Detect it? Define it other than after the fact?

To exemplify the difference between true intelligence and the ersatz intelligence measured by IQ tests, one need look no further than Richard Feynman. He was without doubt one of the most intelligent of all recent physicists, yet his ersatz intelligence (his IQ test score) was a paltry 125.
For what it's worth, the "Turing test" seems to be the putative standard for "humanesque" intelligence that many still use. There are many experimental designs consistent with Alan Turing's original description (1950), and, AFAIK, no machine has yet been developed that seriously warrants a comprehensive Turing test.
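
As a sketch of what a single blinded trial of such a design might look like (Python; judge, machine_reply, and human_reply are hypothetical callables, and a serious test would need many judges, many trials, and controlled conditions):

[CODE]
import random

# One blinded imitation-game trial: the judge questions two hidden
# respondents and must say which one is the machine.
def turing_trial(judge, machine_reply, human_reply, questions):
    hidden = {"A": machine_reply, "B": human_reply}
    if random.random() < 0.5:              # randomize which label is which
        hidden = {"A": human_reply, "B": machine_reply}
    transcripts = {label: [ask(q) for q in questions]
                   for label, ask in hidden.items()}
    guess = judge(questions, transcripts)  # judge returns "A" or "B"
    return hidden[guess] is machine_reply  # True: the machine was caught
[/CODE]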
 
  • #16
Defining "intelligence" is a very hard problem...

...To exemplify the difference between true intelligence and the ersatz intelligence measured by IQ tests, one need look no further than Richard Feynman. He was without doubt one of the most intelligent of all recent physicists, yet his ersatz intelligence (his IQ test score) was a paltry 125.
The "very hard problem" would be solved by defining Richard Feynman, then.
 
  • #17
256bits
Gold Member
If you meant "not encountered before by said person," then yes. If you had already encountered the problem, you would mostly be recalling the solution from memory, which I agree is not a sign of great intelligence per se.
Memory is as much a part of intelligence as problem solving. One has to make an assessment of a situation and determine whether to use the rules stored in memory that apply to the same old problem, or to devise a new set of rules for a never-before-encountered problem. Intelligence can range from that of a lobster, to a dog, to a chimpanzee, to a human.

So I agree with your statement that intelligence is not that hard to define. The problem is that you cannot give an IQ test to a lobster or a dog, so the level of intelligence may be more difficult to pin down. While the Turing test is to some the holy grail to strive for, so that one can say a computer is as smart as a human, I would seriously bet that very few humans could themselves make a passing grade as well as a computer could. It seems to be on the same level as Asimov's three laws of robotics, which are severely flawed as design constraints for AI built by humans; e.g. the military would love to have a robot that can kill.

At present, silicon needs support staff for repair and energy replenishment. Would we become slaves to our intelligent robots if they are not able to sustain themselves as units?
 
  • #18
Defining "intelligence" is a very hard problem. The only working definition is the terribly circular "intelligence is the quantity that IQ tests measure." IQ tests offer at best an ersatz measure of intelligence; they measure intelligence in the sense of a "Chinese room" test. True intelligence is, in my mind, the ability to solve problems that no one has yet solved. One big problem with this definition: how are you going to measure it? Detect it? Define it other than after the fact?

To exemplify the difference between true intelligence and the ersatz intelligence measured by IQ tests, one need look no further than Richard Feynman. He was without doubt one of the most intelligent of all recent physicists, yet his ersatz intelligence (his IQ test score) was a paltry 125.
I tend to disagree with this definition of intelligence for exactly this reason. Pragmatically, defining intelligence as the IQ quantity doesn't have any use. The ability to solve problems (which Feynman was very good at), however, does.

It's a lot harder to accurately test someone's ability to solve 'problems', though. After all, what kinds of problems? When is something considered a problem? Does age matter when testing this? And so on. We'll most likely stick with IQ tests for quite a while; I think they are currently the most reliable way to test one's potential for academic problem-solving (though I'm actually not sure of this; I've never really bothered to look up any studies to confirm it).
 
  • #19
People with low IQ (e.g. those with Down syndrome) still very much have feelings.

Meanwhile, even the most sophisticated software on the world's fastest computer doesn't have any. Simply put, without consciousness there aren't any feelings (or emotions).

Also, IMO, consciousness and intelligence aren't the same thing. Level of IQ depends on the quality of the brain, while consciousness either is or isn't present.

All life is conscious, so these two seem to be either one and the same thing, or two things that are part of one (e.g. a coin with two faces).

All life is conscious, but awareness and intelligence vary with the structure of biological cells (not just brains). Human brains have the most complex biological structure on Earth, or, say, in the known universe; thus they offer the best known ability to comprehend, imagine, create, etc., and an enormous capacity to associate and memorize. (In the capacity to store data, computers are already ahead of us humans, while in the ability to comprehend they are behind even bacteria, which know well how to survive.)

Computers/robots shall have feelings only when they become self-aware. And I don't think that's possible to achieve with software alone, no matter how sophisticated the software (simulation) is.
 
  • #20
OCR
If you meant "not encountered before by said person," then yes. If you had already encountered the problem, you would mostly be recalling the solution from memory, which I agree is not a sign of great intelligence per se.
http://www.dailywav.com/0904/quitelikethis.wav

Computers/robots shall have feelings only when they become self-aware.

http://www.dailywav.com/0106/fullestuse.wav

Note: links only clickable with IE8... copy and paste to address bar with Firefox. Opens in WMP.

OCR... :wink: ... lol
 
  • #21
Ryan_m_b
Staff Emeritus
Science Advisor
All life is conscious, so these two seem to be either one and the same thing, or two things that are part of one (e.g. a coin with two faces).

All life is conscious, but awareness and intelligence vary with the structure of biological cells (not just brains). Human brains have the most complex biological structure on Earth, or, say, in the known universe; thus they offer the best known ability to comprehend, imagine, create, etc., and an enormous capacity to associate and memorize. (In the capacity to store data, computers are already ahead of us humans, while in the ability to comprehend they are behind even bacteria, which know well how to survive.)

Computers/robots shall have feelings only when they become self-aware. And I don't think that's possible to achieve with software alone, no matter how sophisticated the software (simulation) is.
All life? Including bacteria, plants, and brain-dead patients? I highly doubt it. All the evidence points to consciousness being a product of a central nervous system. I also don't think it is fair to say that the human brain is the most complex biological structure; really, there isn't much difference in complexity between a brain and many other organs.

If we discover exactly how emotions are generated, then we may be able to emulate that on a chip. But that's hardly useful; what we want is computer programs that can solve problems in a mechanical, non-conscious way, and if such a program acts like a person, that's all the better for interfacing.
 
  • #22
MarcoD
If we discover exactly how emotions are generated, then we may be able to emulate that on a chip. But that's hardly useful; what we want is computer programs that can solve problems in a mechanical, non-conscious way, and if such a program acts like a person, that's all the better for interfacing.
I wonder about that. People in CS draw a distinction between (human) thought and computation: humans are good at thought but lousy at computation, whereas computers are good at computation but lousy at thought.

We don't really know what thought is, apart from the fact that, if we sidestep a lot of philosophical issues, it seems to be the byproduct of an organ, the human brain: a complex entity made up of an incredible number of neurons, but also hormones, which interacts with the other organs that make up a human body (and, of course, all while interacting with, or driven by, the environment, probably even with an evolutionary goal).

It therefore seems reasonable that if you want something close to an organic intelligence, you'd need to model all of that: a brain, neurons, hormones, and a body.
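
As a cartoon of the smallest piece of such a model (Python; every constant is made up, and real neuron models such as Hodgkin-Huxley are far richer), consider a leaky integrate-and-fire neuron whose excitability is scaled by a global "hormone" level:

[CODE]
# A cartoon neuron: leaky integrate-and-fire, with a "hormone" level
# crudely standing in for neuromodulation of excitability.
def simulate(inputs, hormone=1.0, leak=0.9, threshold=1.0):
    potential, spikes = 0.0, []
    for t, current in enumerate(inputs):
        potential = leak * potential + hormone * current
        if potential >= threshold:         # fire and reset
            spikes.append(t)
            potential = 0.0
    return spikes

drive = [0.3] * 20
print("calm:   ", simulate(drive, hormone=1.0))  # sparse firing
print("aroused:", simulate(drive, hormone=2.0))  # rapid firing
[/CODE]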

That's the thing with your comment: how would you motivate an organic-like intelligence if it didn't have emotions like love, curiosity, or ambition to drive it? It might well refuse to give a correct answer to the simplest calculation, like "3+5", since there would be nothing driving it.

(This is all under the assumption that you really want something like 'organic' intelligence in silicon, which I think we do. We humans are just so much better at solving 'easy' tasks, like cleaning the house, whereas computers have to be programmed by humans to do that and still fail at the easiest of tasks.)
 
  • #23
PAllen
Science Advisor
2019 Award
I think the main reason to pursue strong AI is not any particular belief about its utility, but simply that 'maybe we can', just like many other non-practical pursuits (is anyone expecting utility from understanding dark matter?). If we could, it would be cool that we could, and the result would presumably be cool. It would certainly shed light on the questions of what intelligence and consciousness are.
 
  • #24
Maybe I'm just being a reductionist, but saying that we can never create an AI engine that can truly feel because it doesn't have any biology is, in my opinion, just silly. Biology is a series of complex chemical interactions, and those interactions can be summarized by relatively simple physical laws; I don't think there is anything particularly special about calling it "biology", because computers are governed by the same physical laws as humans are. So saying that our notion of a feeling is somehow special, and that no machine could ever replicate it, seems absurd to me. I think the human brain is very scattered, and even if we had everything perfectly mapped out, it wouldn't be very computer-like. Logically, computers are much more rigorous and less prone to being misled, and I suppose if you define this weakness as a human trait, then computers won't seem very human-like to us, because they are too systematic and logical.

Personally, I don't see the point in developing human-like AI, because the whole point of computing is to do superhuman calculations, and developing emotions would hinder the efficiency of a computer. But when listening to EMI or Emily Howell I get conflicted. Ah well, I guess we'll see what the future holds :P
 
  • #25
DarkReaper
Firstly, I think that it is possible: plant a computer into a clone, allowing fluid interaction between the clone and the machine, and your "emotion AI" is formed.

Perhaps it is not ideal for the AI itself to possess 'real' emotion; IMO that should be the clone's duty, if we ever progress that far. Would we then prefer cyborgs? But the former possessing "fake emotion" would definitely be essential, for UI purposes and such. That would eliminate the need for rocket science just to interact with the machine.
 
