
Limits of AI

  1. Nov 12, 2011 #1
    I am skeptical that strong, human-like AI is possible. I think AI can be made to look very much like human intelligence, but I don't think it will actually be like human intelligence in the way that yours and mine are. For example, I don't think a computer can actually feel pain, pleasure, fear, or desire. I could be wrong, but I am simply skeptical. I'm sure people can project those feelings onto robots and computers that appear to have them. But can computers really feel? I am doubtful. I think feeling comes from biology, and computers don't have any real biology, only a simulated kind. Therefore I think there are limits. I obviously don't know the specifics, but I am curious about it. Maybe computers can come very close, but I don't think they can ever be exactly the same. Who are the major people working on AI right now? I would like to follow their progress.
     
  2. jcsd
  3. Nov 12, 2011 #2

    Ryan_m_b

    User Avatar

    Staff: Mentor

    I have no strong opinions over whether or not we will one day create a humanesque conscious being in silico, but as for whether it's possible, I see no reason why not. If something exists, then you can simulate it, and the simulation will have the same characteristics as the original.
     
  4. Nov 12, 2011 #3

    DavidSnider

    User Avatar
    Gold Member

    There is no way to determine if another human being is "really feeling" something, let alone the even more abstract question of whether a simulation of a feeling is equivalent to a real one. People even doubt THEIR OWN feelings sometimes.

    It's difficult for me to imagine how you would even test this.

    Human-like AI is a bit of a waste of time, I think. We already have humans, billions of 'em. What we need computers for are things like summing a million digits in a fraction of a second, or crunching away at a problem for days on end with no rest, and other "stupid" things that humans are incapable of.
     
  5. Nov 12, 2011 #4

    Ryan_m_b

    User Avatar

    Staff: Mentor

    Agreed. Developing better and more capable software is by no means synonymous with trying to create a digital sentient being. Hell, one day we may have software packages so good that they can pass a Turing test and perform almost any task a human could, but that in no way implies that what's under the hood is anything similar to what's going on in our grey matter.
     
  6. Nov 12, 2011 #5
    I also agree. There doesn't seem to be any good reason to pursue humanesque AI other than sensationalistic mad-science hoorah. First of all, is it even possible? And second, how will we treat these computers, and how will that change the distinction we make between humans and machines? Like you said, we could create computers that pass every Turing test, but is that really something that can feel and think like a human, or is it just something that can pass every Turing test we have so far thought up? Anyway, I don't understand computer science and philosophy of mind especially well; I am simply curious about the possibilities.

    I know some AI already exists. Things like Watson and the chess champion. Even calculators are a simple form of AI.
    As for crazy future scenarios, I'm more worried about the things that happened in A.I. or I, Robot than the things that happened in Terminator or The Matrix. Can robots really feel? If so, how will that change how we think about ourselves and machines?
    Perhaps computers can sufficiently emulate the cognitive aspects of human and animal thought. They might even be able to have memories, learn, plan, make decisions, make judgments, or form "desires". But can they actually experience things like motivation or desire or curiosity? Well, like DavidSnider said, how can we tell? I suppose we have to investigate what it means to really feel these things in humans first, and how we know that is the case, and then see if it's possible to simulate that in computers. Perhaps I should study a little bit of neuroscience and philosophy of mind before I go answer that. COOL!
     
  7. Nov 12, 2011 #6
    Yes to all. It has always seemed to me that superimposing emotions on a calculator would have no effect other than to greatly compromise its ability to calculate.
     
  8. Nov 12, 2011 #7

    D H

    User Avatar
    Staff Emeritus
    Science Advisor

    There's money in them there hills. Lots and lots and lots of money. It turns out that a good amount of the work that we think requires intelligence (whatever that is) is just as rote as is the work of those whose jobs have already been automated by run of the mill, non-AI software. Many aspects of the task of putting together a disparate bunch of parts to form some fantastic widget have been automated by simple software and machines. Planning the process, ordering the parts, keeping the machines running: That is where tomorrow's job security lies. Wrong. That work also is rote and can be automated by yesterday's AI. The only part that isn't rote (so far) is coming up with the process in the first place.

    It's quite possible it doesn't require real intelligence (whatever that is) at all.

    Those of us who have struggled through four years of college to get a bachelor's degree and then even more to get an advanced degree look down upon our high school cohorts who never went to college at all. They have a problem with unemployment; we don't. Wrong. A lot of what we do requires no more intelligence than does knowing how to operate some machine in a factory. Whether this results in a neo-Luddite revolution remains to be seen. The nascent roots of this revolution are here right now in the Occupy Wall Street crowd.

    Google the term "Chinese room". Here's the wiki article: http://en.wikipedia.org/wiki/Chinese_room.

    A calculator is not AI. Neither is Deep Blue. Calculators are just abaci. There is zero intelligence behind an abacus or a calculator. Deep Blue, while it did beat Garry Kasparov, did so by means of dumb brute force. There was very little intelligence behind Deep Blue. Developers of chess programs long ago abandoned the AI approach to computerized chess.
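    To make concrete what "dumb brute force" means here, the following is a minimal illustrative sketch (in Python; my own toy example, not Deep Blue's actual code, which added hand-tuned evaluation functions, opening books and specialised hardware) of exhaustive game-tree search, using the simple game of Nim as a stand-in for chess. The program "plays well" purely by enumerating every line of play, with no understanding involved.
    [code]
# Brute-force minimax over the toy game of Nim (take 1-3 stones; whoever
# takes the last stone wins). Real chess engines add alpha-beta pruning,
# evaluation heuristics and raw speed, but the principle is the same:
# enumerate the moves and recurse, no "understanding" required.

def legal_moves(pile):
    return [n for n in (1, 2, 3) if n <= pile]

def minimax(pile, maximizing):
    if pile == 0:
        # The previous player took the last stone and won.
        return -1 if maximizing else +1
    scores = [minimax(pile - take, not maximizing) for take in legal_moves(pile)]
    return max(scores) if maximizing else min(scores)

def best_move(pile):
    # Exhaustively score every legal move and pick the best one.
    return max(legal_moves(pile), key=lambda take: minimax(pile - take, False))

if __name__ == "__main__":
    print(best_move(10))  # prints 2: leaving a pile of 8 is a loss for the opponent
    [/code]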

    Whether brute force will suffice to accomplish that which we deem to be a sign of "intelligence" (whatever that is) remains to be seen. Whether AI researchers can use AI techniques to solve those problems is another question. Yet another question is what this thing we call intelligence actually is.
     
  9. Nov 12, 2011 #8

    Ryan_m_b

    User Avatar

    Staff: Mentor

    Definitely. The biggest problem with talking about things like this is the severe lack of good definitions. When we can't define exactly what sentience or intelligence are, how are we going to have a meaningful discussion about creating them? I tend to find it better to talk in terms of capability, because at the end of the day that's what we want from machines: for them to do work so we don't have to. In fact it would be far better if the mechanisms, however capable and human-appearing*, were categorically nothing like humans in terms of consciousness or intelligence, because otherwise we get into a huge ethical quagmire.

    *By human-appearing I mean along the lines of a natural-language user interface rather than an Asimov-type human-looking robot.
     
  10. Nov 12, 2011 #9
    [QUOTE="D H"]It's quite possible it doesn't require real intelligence (whatever that is) at all.

    Those of us who have struggled through four years of college to get a bachelor's degree and then even more to get an advanced degree look down upon our high school cohorts who never went to college at all. They have a problem with unemployment; we don't. Wrong. A lot of what we do requires no more intelligence than does knowing how to operate some machine in a factory. Whether this results in a neo-Luddite revolution remains to be seen. The nascent roots of this revolution are here right now in the Occupy Wall Street crowd.[/QUOTE]

    I don't understand how this connects to the question it was supposed to answer.
     
  11. Nov 12, 2011 #10
    I don't think this is a very difficult question at all. Intelligence is, as far as I'm concerned, "the ability to solve problems." Since there are many different kinds of problems, there must necessarily be different kinds of intelligence. This can even be simplified as "knowing what to do".

    Then there's the fact that people tend to use different definitions. However, as language is primarily a means of communication, this definition makes sense and does not seem to do injustice to most forms of what people call intelligence. Obviously, there are those who would say intelligence is primarily the ability to solve X kind of problem, or Y kind of problem that is most useful in situation Z. However, I have yet to find a more pragmatic definition than the aforementioned one.
     
  12. Nov 12, 2011 #11

    PAllen

    User Avatar
    Science Advisor
    Gold Member

    I have long felt that it is enormously more likely that the first alien intelligence we communicate with will be one we built rather than an ET one. Which isn't to say how likely, or how soon, this is. Just that I think there are fewer fundamental obstacles to the former.
     
  13. Nov 12, 2011 #12
    Wouldn't "knowing what to do." be more instinct, and easily programmable.
    I bet you meant intelligence can be thought of as solving a problem not encountered before.
     
  14. Nov 12, 2011 #13
    If you meant, "not encountered before by said person," then yes. If you had already encountered the problem, you would mostly be recalling the solution from memory, which I agree is not a sign of great intelligence, per se.
     
  15. Nov 12, 2011 #14

    D H

    User Avatar
    Staff Emeritus
    Science Advisor

    Defining "intelligence" is a very hard problem. The only working definition is the terribly circular "intelligence is the quantity that IQ tests measure." IQ tests offer what is at best an ersatz measure of intelligence. It measures intelligence in the sense of a "Chinese room" test. True intelligence is, in my mind, the ability to solve problems that no one has yet solved. One big problem with this definition: How are you going to measure it? Detect it? Define it other than after the fact?

    To exemplify the difference between true intelligence and the ersatz intelligence measured by IQ tests one needs look no further than Richard Feynman. He was without doubt one of the most intelligent of all recent physicists, yet his ersatz intelligence (his IQ test score) was a paltry 125.
     
  16. Nov 12, 2011 #15
    For what it's worth, the "Turing Test" seems to be the putative standard for "humanesque" intelligence that many still use. There are many experimental designs that are consistent with Alan Turing's original description (1950) and, afaik, no machine has yet been developed that seriously warrants a comprehensive Turing Test.
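    As a rough illustration of what the test involves procedurally, here is a minimal sketch (my own, in Python; the respond() callables and the pass criterion are hypothetical stand-ins, not any standardised implementation): an interrogator exchanges text with two hidden respondents, one human and one machine, and must say which is which. The machine "passes" if the interrogator does no better than chance.
    [code]
# Bare-bones sketch of the imitation-game protocol. The respondents are
# hypothetical callables mapping a question string to an answer string.
import random

def run_turing_test(human_respond, machine_respond, interrogate, rounds=10):
    """interrogate(ask) may call ask(i, question) on channels i = 0 or 1,
    then must return the channel index it believes belongs to the machine."""
    correct = 0
    for _ in range(rounds):
        respondents = [human_respond, machine_respond]
        random.shuffle(respondents)                  # hide which channel is which
        guess = interrogate(lambda i, q: respondents[i](q))
        if respondents[guess] is machine_respond:
            correct += 1
    # "Passing" here means the interrogator did no better than chance.
    return correct / rounds <= 0.5

# Toy usage with trivial stand-ins:
human = lambda q: "I'd have to think about that."
machine = lambda q: "I'd have to think about that."
judge = lambda ask: random.choice([0, 1])            # this judge can only guess
print(run_turing_test(human, machine, judge))
    [/code]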
     
    Last edited: Nov 12, 2011
  17. Nov 12, 2011 #16
    The "very hard problem" would be solved by defining Richard Feynman, then.
     
  18. Nov 12, 2011 #17
    Memory is a part of intelligence, just as much as problem solving. One has to make an assessment of a situation and determine whether to use the rules stored in memory that apply to the same old problem, or devise a new set of rules for a never-encountered problem. Intelligence can range from that of a lobster, to a dog, to a chimpanzee, to a human.

    So I agree with your statement that intelligence is not that hard to define. The problem is that you cannot give an IQ test to a lobster or a dog, so the level of intelligence may be more difficult to pin down. While the Turing test is, to some, the holy grail to strive for, so that one can say a computer is as smart as a human, I would seriously bet that very few humans themselves could make a passing grade, as much as a computer could. It seems to be on the same level as Asimov's three laws of robotics, which are severely flawed as a basis for designing AI, i.e. the military would love to have a robot that can kill.

    At present, silicon needs a support staff for repair and energy replenishment. Would we become slaves to our intelligent robots if they are not able to sustain themselves as units?
     
  19. Nov 13, 2011 #18
    I tend to disagree with this definition of intelligence for exactly this reason. Pragmatically, defining intelligence as the IQ quantity doesn't have any use. The ability to solve problems (which Feynman was very good at), however, does.

    It's a lot harder to accurately test someone's ability to solve 'problems', though. After all, what kind of problems? When is something considered a problem? Does age matter when testing this? etc. etc. We'll most likely stick with IQ-tests for quite a while, which I think are the most reliable way to test one's potential for academic problem-solving at the moment (though I'm actually not sure of this; I've never really bothered to look up any studies to see whether this can be confirmed).
     
  20. Nov 13, 2011 #19
    People with a low IQ (e.g. those with Down syndrome) still very much have feelings.

    Meanwhile, even the most sophisticated software on the world's fastest computer doesn't have any... Simply put, without consciousness there are no feelings (or emotions).

    Also, IMO, consciousness and intelligence aren't the same thing... The level of IQ depends on the quality of the brain, while consciousness either is or isn't present.

    All life is conscious, so these two seem to be either one and the same thing or two aspects of one thing (e.g. a coin with two faces).

    All life is conscious, but awareness and intelligence vary with the structure of biological cells (not just brains). Human brains have the most complex biological structure on Earth, or indeed in the known universe, and thus offer the best known ability to comprehend, imagine, and create, and an enormous capacity to associate and memorise (in the capacity to store data, computers are already ahead of us humans, while in the ability to comprehend they are behind even bacteria, which know well how to survive).

    Computers/robots will have feelings only when they become self-aware. And I don't think that's possible to achieve with software alone, no matter how sophisticated the software (simulation) is.
     
    Last edited: Nov 13, 2011
  21. Nov 13, 2011 #20

    OCR

    User Avatar

    http://www.dailywav.com/0904/quitelikethis.wav

    http://www.dailywav.com/0106/fullestuse.wav

    Note: links only clickable with IE8... copy and paste to address bar with Firefox. Opens in WMP.

    OCR... :wink: ... lol
     
    Last edited: Nov 13, 2011