
Are computers intelligent?

  1. May 18, 2009 #1
    I've been thinking about computers recently. These are the kinds of questions that have been on my mind:

    - Are computers intelligent?
    - Do computers think?
    - Is it possible for computers to feel or have emotions?
    - Can computers evolve?

    Here is some of what I have come up with on these questions:

    - I feel that computers sometimes make very intelligent decisions, but I don't know whether to credit the computer or the programmer for that intelligence. Maybe this will become clearer in the future as AI develops further and further.

    - I really have no clue whether a computer can think. I'm trying to think (lol) about what it even means to be thinking... and I am struggling to define it. Part of me wants to include emotions, but I'm not sure. (Anyone have any readings on intelligence/thinking to suggest?)

    - I think that computers currently do not have feelings or emotions. For instance, suppose we had developed a robot that knew how to skydive, but its parachute failed. It knows what is occurring and runs programs that it knows will likely help it survive, but is it having any feeling associated with the failure of its parachute? No. Though it may be that these feelings are too complex for this robot to have (or for us to recognize that it has them; is that prejudice?). So what about a robot that could organize things based on shape or colour? Would the robot get any feelings related to doing this task? I'm doubtful, but I don't think we could know. In any case, I think developing feelings in a machine is beyond our current capabilities.

    - This, to me, is a scary thought: computers learning and evolving on their own. I don't mean evolving by breeding (although they may learn to create other machines...); I mean, can my laptop in front of me evolve internally, get smarter, learn what I am doing and possibly get better at doing it? I think it's possible: those chess programs learn a lot as they play the game more and more, and keep huge databases of what works and what doesn't. This is evolving in a sense, I think.
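
    For instance, a chess program's "learning" can be as simple as keeping a record of how each opening worked out in past games and preferring the lines with the best track record. A toy Python sketch of the idea (the openings and results here are made up):

[code]
# Toy "experience database": record how each opening line worked out,
# then prefer the lines with the best track record.
from collections import defaultdict

results = defaultdict(lambda: {"wins": 0, "losses": 0})

def record_game(opening, won):
    results[opening]["wins" if won else "losses"] += 1

def score(opening):
    r = results[opening]
    games = r["wins"] + r["losses"]
    return r["wins"] / games if games else 0.5  # unseen lines get a neutral score

# After many games, the program "knows" which lines work for it.
record_game("sicilian", won=True)
record_game("sicilian", won=True)
record_game("kings_gambit", won=False)

print(max(["sicilian", "kings_gambit"], key=score))  # -> sicilian
[/code]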

    Anyway, post your input on these questions, or put up your own.
     
  3. May 18, 2009 #2
    Re: Computers

    Hi there,

    I believe the same as you, and would grant the "intelligence" of a computer to its programmer.

    It depends on what you define as thinking. A computer will never come up with something new; it can only improve. Without the ability to develop new ideas, I don't consider that thinking.

    Computers, since their beginning, have been built to follow a series of very well-defined codes or programs. Therefore, it is certainly possible to implement a fear of falling in a computer. But for humans, these fears are not pre-programmed into our brains; we tend to develop them as we go along.

    I heard something about that too. I would like to know more about the possibilities for computers to evolve.

    Cheers
     
  4. May 18, 2009 #3
    Re: Computers

    I have no doubt that someone can develop a program of 'fear' so that a computer knows when to be fearful. But is it actually feeling these emotions? What I'm saying is that there is a behavioural response, which we already know computers definitely have (for instance, acting a certain way under certain conditions). Then there is an emotional response. It's the latter I'm interested in. In philosophy I think these are called qualitative experiences, or qualia: the colours or tastes of fruit, the fear of falling to a possible death, etc.


    As well, here's another question; I don't know why I hadn't thought of it before:

    - Can computers have free will?

    It occurred to me after fatra2 posted that programmers can program a computer to do a variety of things. They could even possibly program a computer to be able to do things they hadn't previously thought of themselves. But the computer is still following the program, and will not stray from it, so I don't think it has free will.
     
    Last edited: May 18, 2009
  5. May 18, 2009 #4
    Re: Computers

    Why would you say that? We just have to give computers the ability to develop ideas. I have no doubt that one day we will be able to replicate a free-thinking organism through computers -- our brains are not something metaphysical. Something going on within the neurons causes us to be who we are, and eventually we will learn how to exploit this.
     
  6. May 18, 2009 #5
    Re: Computers

    Hi there,

    I could not agree more with you that our brains are electrical connections between neurons. Have you ever looked at the complexity of these connections? Have fun replicating that in an artificial device.

    Computer science is running into terrible difficulties just getting humanoid robots to stand and reproduce human steps. Try to imagine what is going on in our brains when we start thinking.

    I am not saying that it's impossible, just that science (in many different fields) would need to make outstanding progress.
     
  7. May 18, 2009 #6
    Re: Computers

    They're already making computers that can "think", but on a lower level. The father of a friend of mine worked for military intelligence developing neural networks (but even his son isn't allowed to know more than that). Effectively, as I've heard it described elsewhere, they're laying out computers to mimic the neural connections in the brain. But they're nowhere near as smart as humans; they're more like the intelligence level of an insect or a small amphibian or something. They're getting there, but it's slow. And computers aren't quite growing by the leaps and bounds that they were some 10 years ago, when computers became obsolete every 3 years or so.

    I'm not sure if Deep Blue was designed using a neural network, but it was supposedly coming up with creative chess moves that surprised its developers. I recall another AI developer being amazed that his AI successfully defeated him using more advanced tactics than he had "programmed in".

    As for the emotional side of things, I doubt we'll ever know for certain. If I assembled a human being out of sub-atomic particles, and attempted to make it act as though it felt emotion, what's the difference between that and it actually feeling those emotions? Could you hope to prove one way or another that it did or didn't feel? That is, is it REALLY feeling emotions, or is it just ACTING like it's feeling emotions?

    Personally, I believe emotions are an evolutionary benefit that helps provide us an incentive to do certain things. It gives us a non-physical benefit which can be achieved by thinking that encourages us to think more. For instance, thinking about how a killer with a knife will affect me, I'll feel fear of death or pain, and in turn I'll think about what I should do, like run away.

    DaveE
     
  8. May 18, 2009 #7
    Re: Computers

    What about free will? Is it possible for us to design something to be completely deterministic, but have it develop free will?
     
  9. May 18, 2009 #8
    Re: Computers

    "The question of whether a computer can think is no more interesting than the question of whether a submarine can swim."
    - Edsger Dijkstra
     
  10. May 18, 2009 #9
    Re: Computers

    Computers can only do what a programmer tells them to do.
     
  11. May 18, 2009 #10
    Re: Computers

    I don't believe the two cases are really analogous. The possibility that computers do these things just gets dismissed because they aren't living (so we say). Isn't that just prejudiced thinking, though?
     
  12. May 18, 2009 #11
    Re: Computers

    Computers as they exist today are not capable of what we would consider thought. Although it's probably inevitable that some future iterations will possess this ability, it's a little premature to begin throwing words like "prejudice" around.

    First you need to be clear by how you define "thinking". Is it simply the ability to perform a calculation? By that definition, your digital watch can think, but human babies cannot. Obviously, that definition isn't going to fly on its own.

    Thinking implies problem solving. For a problem to exist, there needs to be a perceived need that is not currently being met.

    I am hungry. I need to eat. Where can I find food? What things around me are food? Is that an apple? Is an apple food? How can I get the apple? Should I climb the tree? Will I fall? Is the risk of falling worth the reward of the apple?

    Computers don't want anything. They have no needs or motivations, rather they are complex tools which we use to realize our own wants and motivations.
     
  13. May 18, 2009 #12
    Re: Computers

    I don't think that's necessarily true. Imagine if we were to invent (say) a Roomba-style robot that "wants" never to run out of power. It's not that hard to have it recognize power outlets, plug itself into them when necessary, and even learn which ones aren't functional (like if they're hooked to a light switch):

    "I am low on power. I need to plug into a power outlet. Where is the nearest power outlet? Are the plugs available on this outlet? Does this outlet work? If no, where can I find another one? If yes, plug in and be contented."

    The difference is that the knowledge of what power outlets look like, and how to extract power from them, is "programmed" rather than learned. But that's arguably solved with neural networks, where you don't program it in but *teach* it, and that sort of thing has been done. The comparison in humans is that children are taught what is and isn't edible by their parents at a young age, while the child is "programmed" to stuff pretty much everything into its mouth.
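
    The "programmed" version of that want-power loop is almost trivial to write down. A minimal Python sketch (the outlet positions and battery model are invented for illustration):

[code]
# Toy version of the pre-programmed "seek power" behaviour.
BATTERY_LOW = 20  # percent

# What the robot has been told (or has learned) about outlets so far.
outlets = [
    {"pos": (0, 3), "works": True},
    {"pos": (5, 1), "works": False},  # e.g. discovered to be on a light switch
]

def nearest_working_outlet(pos):
    working = [o for o in outlets if o["works"]]
    if not working:
        return None
    # Manhattan distance from the robot's current position.
    return min(working, key=lambda o: abs(o["pos"][0] - pos[0]) +
                                      abs(o["pos"][1] - pos[1]))

def step(battery, pos):
    if battery > BATTERY_LOW:
        return "keep vacuuming"
    outlet = nearest_working_outlet(pos)
    if outlet is None:
        return "explore for a new outlet"
    return "go to %s and plug in" % (outlet["pos"],)

print(step(battery=15, pos=(1, 1)))  # -> go to (0, 3) and plug in
[/code]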

    DaveE
     
  14. May 18, 2009 #13
    Re: Computers

    I think along the same lines as Dave, but to me it comes down to the distinction between behavioural responses and emotional responses.
     
  15. May 18, 2009 #14

    chroot


    Re: Computers

    The majority of AI research focuses on producing emergent behavior. For example, a programmer might create a very simple artificial neuron that does nothing more than filter its input signals with a mathematical function to produce an output signal. The neuron by itself is certainly not "intelligent" by any definition of the word, but interesting things happen when you put many of them together. A large array of these simple neurons can "learn" to understand speech, or to diagnose heart attacks better than humans can. That's emergent behavior.
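
    Concretely, one of those artificial neurons is only a few lines of code. A minimal Python sketch (the weights and inputs are invented for illustration):

[code]
import math

# A single artificial neuron: weight each input, sum the results, then
# squash the total through a nonlinear "activation" function.
def neuron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # logistic sigmoid, output in (0, 1)

# On its own this is clearly not "intelligent"; the interesting behavior
# only emerges when many of these are wired together and trained.
print(neuron([0.5, 0.2], weights=[1.5, -2.0], bias=0.1))
[/code]
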
    In the purest possible sense, "fear" is just foresight that the current situation may result in death or dismemberment. A skydiving robot could evaluate its situation, a failed parachute, and reach the conclusion that it is about to be destroyed. That conclusion could be called fear; there's no reason to invoke some spooky superstition that our emotions are any more complicated than that.

    We humans just happen to hold our emotions in high regard, since they seem to transcend our rational thought processes. In fact, they seem to circumvent our rational thought processes. Your two minds (the rational and the emotional) each evaluate a given situation independently, and, if either is unsettled enough by the conclusion, a reaction is provoked. If a machine is shown the same situations and produces the same reactions as a human, then you might as well call it human. That's the essence of the Turing test, of course.

    The experience of emotion occurs in the limbic system, an ancient (and simpler) part of the brain. It evolved to quickly evaluate situations and produce strong responses -- fight or flight, for example. Its evaluations are frequently wrong, but it served us well earlier in our evolutionary development. Because it is simpler in nature, it stands to reason that the limbic system would be easier to emulate on a computer than would be our fancy, recently-evolved neocortex, where rational thought occurs. I believe that most people have an upside-down view of intelligence; the educated-guess responses of our emotional hardware are easier to emulate on computer hardware than are the rational, reasoned responses of our neocortex.

    Emotional responses are "stronger" than rational responses, in the sense that strong emotions can hijack the rest of our brains, at least temporarily. Many forms of entertainment take advantage of this situation. Rollercoasters, haunted houses, and even stand-up comedy all depend upon provoking a strong emotional response when it is rationally inappropriate.

    "Evolve" is the wrong word to use in this context; instead, stick to the word "learn." Computers are certainly capable of learning.

    People are somewhat prejudiced when it comes to declaring artificial neural networks "intelligent." Most people insist that computers can only do what programmers told them to do, but that's simply not true at all. No one sat down and codified heart attack diagnosis; the machine was simply shown examples of patients with and without heart attacks, and it learned to differentiate them. This is pretty much what happens in medical school, too.
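
    In toy form, "shown examples and it learned" can be as simple as the classic perceptron learning rule: nudge the weights whenever the current guess is wrong. (The "patients" below are invented two-number feature vectors, not real medical data.)

[code]
# Toy perceptron learning: nobody codifies the diagnosis, the weights
# just get nudged toward whatever separates the examples.
# Each made-up "patient" is (feature1, feature2, had_heart_attack).
patients = [
    (0.9, 0.8, 1), (0.8, 0.9, 1), (0.7, 0.9, 1),
    (0.1, 0.2, 0), (0.2, 0.1, 0), (0.3, 0.2, 0),
]

w1, w2, bias = 0.0, 0.0, 0.0
for _ in range(20):                      # a few passes over the examples
    for x1, x2, label in patients:
        guess = 1 if w1 * x1 + w2 * x2 + bias > 0 else 0
        error = label - guess            # -1, 0, or +1
        w1 += 0.1 * error * x1           # nudge weights toward the answer
        w2 += 0.1 * error * x2
        bias += 0.1 * error

# The learned weights now classify a new (made-up) patient.
print(1 if w1 * 0.85 + w2 * 0.8 + bias > 0 else 0)  # -> 1 ("heart attack")
[/code]
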
    Most people describe "intelligence" as the ability to come up with novel solutions, and then tacitly declare that machines cannot come up with novel solutions. That's not true, either. Chess computers, protein folding algorithms, and many other systems are capable of finding solutions that no human would likely have found; sometimes these solutions are silly and bizarre, but sometimes they are incredible. We owe many of our powerful new drugs to artificial intelligence.

    The claim that human fears are not pre-programmed is factually incorrect. Our brains are certainly pre-wired to have emotional responses; they occur in infants long before any rational thought. That may be the only reason we still have emotions -- they are simpler and "come online" very early in our development, protecting a child until the brain has developed and become capable of higher, rational thought.

    In my opinion, there's no real difference between rational and emotional responses. Each involves the response of some neural network to some pattern of input. When we are aware of our neocortex being temporarily hijacked by our limbic system, we call the experience "feeling an emotion." Does "feeling" have any deeper meaning than a multiplexer being flipped from one input to another? I would argue that it is no more complex.

    As for free will in a deterministic computer: the answer seems to be a solid "no." On the other hand, once you bring in non-deterministic events -- randomness, like the time between the receipt of network packets -- the answer may well be "yes."

    More specifically, computers probably can have as much free will as humans. A more interesting question, though, is whether or not humans have any free will in the first place. In my opinion, they do not.

    Our brains have more complexity than we can currently emulate in computer hardware, but that does not mean such complexity is really necessary for intelligence. It is possible that evolution rewards simplicity so strongly that our brains contain the bare minimum complexity capable of intelligence, but it seems that we can create intelligence with far fewer resources, particularly if you restrict the domain of problems to chess or heart attacks.

    The claim that computers have stopped advancing is incorrect. Moore's law is alive and well. It just happens that personal computers are now a mature market; most PCs do most of what most users want them to do. Bigger computers, however, continue to advance at an astounding rate.

    My stance on intelligence is that we humans have a delusion of grandeur about our own thought processes. It stands to reason that, to understand one "thinking machine," you would need a thinking machine of even greater power. Our brains may not be complex enough to understand their own complexity. As a result, it's very easy for people to write off any machine that they can understand as being unintelligent.

    Consider the statement:

    • "If a machine is understandable, it is not intelligent."
    The contrapositive of this statement, which is logically equivalent, is:

    • "If a machine is intelligent, it must not be understandable."
    That's very dangerous thinking! Any machine that we design, even if capable of emergent behavior, will necessarily be understandable. By that logic, we will never be able to create a machine that we will deem intelligent, no matter how capable it actually is.

    My own perspective, unpopular as it may be, is that we ourselves are not intelligent in the way that we usually define intelligence. The processes that occur in our brains are not magic, and they do not defy or transcend any laws of physics. I believe our thinking processes are based on a few small rules -- like those of the artificial neural networks that diagnose heart attacks -- compounded many billions of times until the emergent behavior is all we see. I believe that our thinking is probably every bit as mechanical as that of the machines we build. We deem ourselves "intelligent" simply because we do not yet understand ourselves.

    The gap between human and machine intelligence can be bridged in either direction. It seems inevitable that we will eventually make machines as complex as the human brain, but we may also need to relax the arrogant attitude that the human brain does something that no machine ever could.

    - Warren
     
    Last edited: May 18, 2009
  16. May 18, 2009 #15
    Re: Computers

    Is that robot ever doing any actual thinking during that process? The programmer has to do a great deal of thinking in order to anticipate the many circumstances the robot may have to deal with, but the nature of programming itself depends upon pure logic.

    Pure logic requires no thinking.
     
  17. May 18, 2009 #16
    Re: Computers

    Thanks for the really detailed responses... Warren, I agree with most of what you said; however, I don't feel that free will is completely dismissible in a deterministic universe. That's if the universe even is completely deterministic; at the fundamental level, it does not seem that anything is determined at all.
    Thanks for the post, though; I wasn't expecting anyone to go this in-depth. :p

    I'm on my BlackBerry right now, but when I get home I will most likely respond in more detail.
     
  18. May 18, 2009 #17
    Re: Computers

    See chroot's points about predicting heart attacks. You could program it to perform those tasks automatically as a programmer (in which case you could call it "instinct"), or you could program the robot to learn, and have it set "getting power" as a goal. Then, you can simply "teach" it by showing it different outlets and plugging it into them. Depending on the quality of the image/ultrasonic/whatever processing (quantifying images into different areas, colors, etc), it can learn to identify not only what power outlets look like, but how to plug itself into them, and where they are. In that case, you're not programming in anything about what power outlets look like, how high they are, how to plug into them, or anything else. You give it the ability to process its sensory input, and a goal of "obtain power". The rest it learns itself with your teaching.

    You could similarly build in automatic exploring, so that it could teach itself (much in the way that babies put random things in their mouths), rather than have you teach it how to plug into things-- you'd just give it a priority on plugging into random things in random ways in the event that it didn't know how to obtain power. It'd be slower to learn, but it could do the job.
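
    In toy form, that "plug into random things and keep what yields power" idea might look like the sketch below. The "world" is a made-up lookup table standing in for real camera/sonar input:

[code]
import random

# Toy trial-and-error learner: the robot is never told what an outlet is.
# It tries random actions on random objects and remembers what produced power.
world = {  # invented stand-in for the robot's real sensors and actuators
    ("wall_socket", "push_plug_in"): 1,  # reward: got power
    ("wall_socket", "bump_into"): 0,
    ("table_leg", "push_plug_in"): 0,
    ("table_leg", "bump_into"): 0,
}

value = {}  # learned estimate of how good each (object, action) pair is

for trial in range(200):
    obj = random.choice(["wall_socket", "table_leg"])
    action = random.choice(["push_plug_in", "bump_into"])
    reward = world[(obj, action)]
    old = value.get((obj, action), 0.0)
    value[(obj, action)] = old + 0.5 * (reward - old)  # running estimate

# After exploring, the robot "knows" what to do when it sees a socket.
print(max(["push_plug_in", "bump_into"],
          key=lambda a: value.get(("wall_socket", a), 0.0)))  # -> push_plug_in
[/code]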

    DaveE
     
    Last edited: May 18, 2009
  19. May 18, 2009 #18
    Re: Computers

    Unfortunately, chroot posted while I was typing, so it appears as if I'm ignoring everything he said. Quite to the contrary, I agree with his thoughts on AI. Maybe not so much his ideas on free will, but that's another discussion.

    For a machine to truly think, the conditions for an emergent intelligence need to be present. Even in the case of the heart attack machine, I'm not completely sold. It doesn't know anything besides heart attacks; it doesn't even really know anything about heart attacks. It's just really good at putting people into one of two categories.

    I'm of the opinion that we will eventually create an intelligent machine. It's just a matter of time. The real question is should we?
     
  20. May 18, 2009 #19
    Re: Computers

    Not necessarily one of two categories-- I'm not familiar with the specifics of that case, but for another: we had a class experiment in my old AI class where we taught a program to learn who would like what types of food. Everyone in the class fed in their information on about 100 different types of foods and how much they liked them. The programs we wrote could learn that people who liked "X" generally liked "Y". They learned to recognize different people's tastes, and what the good indicators were for particular foods.

    By comparison, if you were asked to predict whether I (for example) would like lasagna, how would you think your way to a solution? You might ask me if I liked spaghetti, because the two are somewhat similar (based on your experience), and predict my likings based on that information. The program basically did the same thing; it just knew that spaghetti was similar thanks to its statistical correspondence with lasagna, rather than how you knew it was similar because you've had each before and thought they tasted similar. The difference is that the program, if given enough data, would probably out-predict you, because as a computer it can analyze more data all at once, and it could tell if there were perhaps other good indicators for whether or not I liked lasagna.

    Basically, the program was not told "X is similar to Y", it figured that out on its own. Similarly, in the case of the learning robot, it wouldn't be given any initial knowledge about what power outlets looked like, but once it found some, it could quickly learn what in its range of sensory inputs correlated to "power", much in the way that your human example correlates "apple" to "food", based on human experience.
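
    The core of that "figured it out on its own" step is just correlation over everyone's ratings. A stripped-down Python sketch (all the people, foods, and ratings are invented):

[code]
# Toy item-based prediction: two foods are "similar" if the class rated
# them similarly; predict lasagna from the most similar known food.
ratings = {  # person -> {food: rating on a 1..5 scale}
    "ann":  {"spaghetti": 5, "lasagna": 5, "sushi": 2},
    "bob":  {"spaghetti": 4, "lasagna": 4, "sushi": 1},
    "carl": {"spaghetti": 1, "lasagna": 2, "sushi": 5},
}

def similarity(food_a, food_b):
    # Negative mean absolute rating difference: closer -> more similar.
    diffs = [abs(r[food_a] - r[food_b]) for r in ratings.values()
             if food_a in r and food_b in r]
    return -sum(diffs) / len(diffs)

def predict(person, target, known_foods):
    # Use the person's rating of the known food most similar to the target.
    best = max(known_foods, key=lambda f: similarity(f, target))
    return ratings[person][best]

# "dave" only rated spaghetti and sushi; predict his lasagna rating.
ratings["dave"] = {"spaghetti": 4, "sushi": 2}
print(predict("dave", "lasagna", ["spaghetti", "sushi"]))  # -> 4
[/code]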

    I honestly don't know if I'd consider it thought or not-- similar to how I don't know if I'd consider a mosquito capable of "thought". But they DO have brains, and computers are probably on the same level or higher.

    Just curious-- what reasons would you give against producing intelligent machines? I would assume that it might be one of:
    1) Machines doing our thinking and working for us (IE turning us into effectively slugs)
    2) Computers overthrowing humanity (like oh-so-many Sci-Fi movies)
    3) Humans enslaving computers (and we've got a moral obligation not to)

    DaveE
     
  21. May 18, 2009 #20
    Re: Computers

    Well, it's mostly 3, which leads to 2; and we're dealing with the early stages of 1 right now. I doubt it is as simple as any of those.

    I guess my real concern is that creating a truly intelligent machine is pretty much the same thing as creating a person. Once you succeed, you have transcended "machine" or "computer". We generally use human intelligence as the measuring stick. Anything less comes up short.

    Human intelligence is the result of billions of years of competition. We're at the top of the food chain, and we're extremely dangerous and effective predators because of this intelligence. Our intelligence is based on this competition for survival and supremacy. We won't be satisfied until we see ourselves looking right back at us.

    What do we do then? Now we have intelligent machines, but it's immoral for us to enslave them to do the tasks they were created to do. Do we give them equal status and accept that we've created superior replacements for ourselves? Who does the work previously assigned to machines? What need is there for people then? If we create a truly intelligent (Turing test) form of machine, they will fight us for survival and supremacy. They would be stupid not to.

    I mean seriously, nothing exists in a vacuum. The Terminator and Matrix movies have been made, so any human-level intelligence that is created will have access to that line of thought. Unless we start figuring out some Asimov-style laws right about now, we're just asking for it.
     