A Human Computer Program?

  1. Nov 16, 2007 #1
    Hello. In my philosophy class I have been studying a work by John Searle regarding whether it is possible to program a computer (hardware and software) to duplicate the functions of a human.

    I wrote up a six-page paper against Searle in support of the prospect of a human computer, but I do not think it would be a good idea to post that here.

    I just wanted the opinions of other PFers: Do you think that it is possible to duplicate a human with a computer? (This is a question about possibility, not practicality, and you are not allowed to suggest giving the computer the anatomy of the brain by duplicating the firing of neurons. This is purely hardware and programming.)
     
  2. jcsd
  3. Nov 16, 2007 #2

    mgb_phys

    User Avatar
    Science Advisor
    Homework Helper

    Assuming the brain consists of neurons in certain states with certain connections:

    There are a finite number of neurons.
    They have a finite number of states.
    They have a finite number of interconnections.

    And you have a big enough computer to simulate this configuration - then why not?
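
    The finite neurons/states/connections premise above can be sketched as a toy program. Everything here (the wiring, the state count, the sum-mod update rule) is invented purely for illustration; it only shows that a finite system of this kind is trivially simulable.

    ```python
    import random

    random.seed(0)

    N_NEURONS = 100          # finite number of neurons
    N_STATES = 4             # finite number of states per neuron
    N_CONNECTIONS = 5        # finite number of inputs per neuron

    # Hypothetical random wiring: each neuron listens to a few others.
    connections = [
        random.sample(range(N_NEURONS), N_CONNECTIONS) for _ in range(N_NEURONS)
    ]
    state = [random.randrange(N_STATES) for _ in range(N_NEURONS)]

    def step(state):
        """One synchronous update: each neuron's next state is some function
        (here, simply the sum mod N_STATES) of its inputs' current states."""
        return [
            sum(state[j] for j in connections[i]) % N_STATES
            for i in range(N_NEURONS)
        ]

    for _ in range(10):
        state = step(state)

    # Because everything is finite, the configuration space is bounded:
    # at most N_STATES ** N_NEURONS distinct global states.
    print(len(state))
    ```

    The point is only the "why not?" in the post above: a finite graph of finite-state units is a well-defined computation, however large the real numbers involved may be.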
     
  4. Nov 16, 2007 #3
    John Searle was the inventor of the so-called 'Chinese room' thought experiment, right? Both Dennett and Carrier have spent considerable time refuting it, and I am inclined to agree with them.
     
  5. Nov 16, 2007 #4
    Also, Douglas Hofstadter has repeatedly demolished Searle's argument.

    Roger Penrose might be considered on Searle's side.
     
  6. Nov 16, 2007 #5

    Evo

    User Avatar

    Staff: Mentor

    But the computer would also need to have judgment clouded by emotions and prior experiences, irrational fears, and superstitions. It would change its decision based on what a loved one thinks. Also, lack of sleep, health, etc. would affect how well it functioned.

    So while we might someday be able to simulate an "ideal" human brain, I don't think that we will ever be able to simulate a realistic human brain.
     
  7. Nov 16, 2007 #6
    Aren't all those things created by the combination of states and interconnections? If so, then why couldn't a computer replicate those emotions and feel them just as strongly as we do?
     
  8. Nov 16, 2007 #7

    mgb_phys

    User Avatar
    Science Advisor
    Homework Helper

    If you believe the brain is only neurons/states/connections then - yes.
    If you believe there is some 'higher level of being' or some quantum effect then no.
     
  9. Nov 16, 2007 #8
    Well, state is also influenced by the propagation of hormones and endorphins and whatnot, which travel outside the neat lines of the neuron interconnections. So, you will have to add to mgb_phys's 3-tuple a simulation of the endocrine system... :)

    Evo also mentions "prior experiences", which is a big deal. You may be able to model a human brain with a graph of simulated neurons, but do you know how to construct the graph in the first place? In order to tell which neurons should have interconnections between them, you'd basically have to take a real human brain apart neuron by neuron and record what's connected to what...

    None of this really prevents the simulation, though, it just complicates it.

    Same with "quantum effects", which also would only complicate the modeling-- since a quantum computer can't do anything a normal computer can't do too. Quantum computers just do it faster. And that's okay, since your neuron simulation is likely to run very, very slow already...

    The one thing that would really cause a problem in the simulation is all the sources of error the wet, meaty, decaying human brain adds to the computation, as blood sloshes around in your head and brain cells die off. It would probably not be reasonably possible to model all these biological sources of error-- at least so long as we're still thinking of the computation as "model a network of neurons" and not "model every atom in the human brain". The question of course then becomes whether these sources of error have an effect on behavior significant enough that you could in any way tell the difference...
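
    The distinction drawn above, between the clean neuron graph and the messy biology layered on top, can be pictured with a toy update rule: a global "endocrine" modulator plus a noise term standing in for all the sloshing blood and dying cells. All names and constants here are invented for illustration.

    ```python
    import random

    random.seed(1)

    def noisy_update(inputs, weights, modulator=1.0, noise_sd=0.1):
        """Toy neuron update: a weighted sum of inputs, scaled by a global
        'endocrine' modulator, plus Gaussian noise standing in for the
        biological sources of error discussed above."""
        drive = sum(w * x for w, x in zip(weights, inputs))
        return modulator * drive + random.gauss(0.0, noise_sd)

    # With noise_sd=0 this collapses to the deterministic graph computation;
    # the biological mess is everything the noise term papers over.
    clean = noisy_update([1.0, 0.5], [0.2, 0.4], noise_sd=0.0)
    print(clean)  # 0.2*1.0 + 0.4*0.5 = 0.4
    ```

    Whether the noise term matters is exactly the question the post ends on: if behavior is insensitive to it, the clean model suffices.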
     
    Last edited: Nov 16, 2007
  10. Nov 16, 2007 #9
    Think of it this way: emotions are merely a classification of an action. They are what we use to describe the mood of an action in words.

    You could also take into account the if/then factor in all actions as an extra boost to a separate argument.
     
  11. Nov 18, 2007 #10

    Q_Goest

    User Avatar
    Science Advisor
    Homework Helper
    Gold Member

    John Searle isn't the only one. Computational consciousness depends on functionalism, a concept which Hilary Putnam came up with and has since decided is flawed. In his book "Representation and Reality" he attempts to prove that if computationalism is true, then panpsychism is also true, and thus that computationalism is false. Mark Bishop and Tim Maudlin, among others, have also taken up this line of reasoning, so Searle and Putnam are not alone.

    Emotions are experienced. They are qualia. For a computer to classify an action is simply behaviorism. Just because something acts as if it is hurt, in love, angry etc... doesn't mean it actually is.
     
  12. Nov 18, 2007 #11
    There have been several critics of the qualia concept, such as Daniel Dennett and Paul Churchland.
     
  13. Nov 18, 2007 #12

    Q_Goest

    User Avatar
    Science Advisor
    Homework Helper
    Gold Member

    Hi Moridin
    True. They are essentially saying that once you've explained how and why all the neurons interact, you've explained everything there is to explain. In so explaining the interactions, you've done all that needs to be done to explain conscious phenomena.

    I don't buy that. The counterargument is to point out that such explanations don't explain how qualia can arise. Why should a given set of light wavelengths appear a given color such as red as opposed to being blue or green? Why should coffee have a specific taste or smell?

    One example (thought experiment) Dennett suggests is that such qualia might not be consistent, and he uses a Maxwell House coffee taste tester as an example. He asks: does this taste tester have the same experience over time? Does the taste change over the years, or does it stay the same? If it isn't consistent, this gives rise to the idea that qualia are totally illusory.

    Again, I don't buy this. The fact there is anything to explain at all about how coffee tastes or smells is an indication that there is something more to explain. And of course the zombie argument also points out that we are missing something if we don't try to explain qualia.

    Dennett counters that zombies can't exist. etc...

    Why should the interaction of switches produce any phenomenon such as experience which is more than simply behavior? And if a computer's switches can create this, then of course any similar computational device can also, including for example Ned Block's Chinese brain or Searle's Chinese room - something intuitively we'd like to avoid but can't. And if these examples are valid, then we must ask ourselves: 'What is a computation, and how do you define it?' That question is the biggest problem today, with no good answer despite attempts by a long laundry list of truly brilliant individuals.
     
  14. Nov 18, 2007 #13
    When a computer is capable of teaching a human child the first-person meaning of the word "pain", so that the child goes on to use the word correctly in future cases, then we will be forced to believe the computer if it says it has pain; and so on for every other mental state.

    Imagine such a computer in an android body being struck by a car, screaming and writhing on the ground. Could you deny that it had pain? Maybe at first, but as they became more widespread, society would quickly judge it correct to say the machine has mental states.

    Edit: If it is possible for me to say that other humans besides myself experience qualia, and to say this without doubt, then it will similarly one day be correct to make the same judgment about computers, when their behavior is in accord.
     
    Last edited: Nov 18, 2007
  15. Nov 18, 2007 #14

    Q_Goest

    User Avatar
    Science Advisor
    Homework Helper
    Gold Member

    Hi Math Jeans,
    How do you define a computer? Consider that the phenomena produced by any physical system can be determined by calculation, making the system a computational device of some sort. Consider also that computation is symbol manipulation, and those symbols (e.g. the position of a switch, the temperature of a rock, the shadow cast by a rock) can be interpreted in any way whatsoever. The symbols depend on people to define what they mean. Now try to explain what exactly a computer is.
     
  16. Nov 18, 2007 #15
    The problem with the Chinese Room argument is that the speed at which our brains process information is (by some estimates) 100,000,000,000 operations per second. Several critics point out that the man in the room would probably take millions of years to respond to a simple question, and would require "filing cabinets" of astronomical proportions.

    http://plato.stanford.edu/entries/chinese-room/

    "Steven Pinker (1997) also holds that Searle relies on untutored intuitions. Pinker endorses the Churchlands' (1990) counterexample of an analogous thought experiment of waving a magnet and not generating light, noting that this outcome would not disprove Maxwell's theory that light consists of electromagnetic waves. Pinker holds that the key issue is speed: "The thought experiment slows down the waves to a range to which we humans no longer see them as light. By trusting our intuitions in the thought experiment, we falsely conclude that rapid waves cannot be light either. Similarly, Searle has slowed down the mental computations to a range in which we humans no longer think of it as understanding (since understanding is ordinarily much faster)."

    [...]

    Thus several in this group of critics argue that speed affects our willingness to attribute intelligence and understanding to a slow system, such as that in the Chinese Room. The result may simply be that our intuitions regarding the Chinese Room are unreliable, and thus the man in the room, in implementing the program, may understand Chinese despite intuitions to the contrary (Maudlin and Pinker). Or it may be that the slowness marks a crucial difference between the simulation in the room and what a fast computer does, such that the man is not intelligent while the computer system is (Dennett)."
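
    The scale of the speed gap in the argument above is easy to make concrete. Taking the 10^11 operations-per-second estimate quoted earlier, and generously assuming the man in the room manages one rule lookup per second (a figure invented here just for scale):

    ```python
    BRAIN_OPS_PER_SEC = 1e11      # estimate quoted above
    MAN_OPS_PER_SEC = 1.0         # generous: one rule lookup per second

    SECONDS_PER_YEAR = 60 * 60 * 24 * 365

    # Years the man would need to simulate one second of brain activity.
    years = BRAIN_OPS_PER_SEC / MAN_OPS_PER_SEC / SECONDS_PER_YEAR
    print(round(years))  # roughly 3171 years per simulated second
    ```

    So even a one-second reply would take the man thousands of years, which is the intuition gap the Pinker/Churchland line of criticism is pointing at.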
     
  17. Nov 18, 2007 #16

    Q_Goest

    User Avatar
    Science Advisor
    Homework Helper
    Gold Member

    Hi Crosson,
    This is a fairly typical 'common sense' argument which I believe to be flawed. We already have these things; they're called computer games. We can create an image of a person on a video screen acting as if it is in pain, or being struck by a car, or whatever. We can make the computer game tell you it feels pain, so the obvious next step is to point out that we can give that computer game a body, and if that's all it takes to convince someone, then we allegedly have a computer which feels and has qualia. Hopefully you can see how absurd this is. We have to do more than simply show behavior which mimics what a person does to explain conscious phenomena.
     
  18. Nov 18, 2007 #17

    Q_Goest

    User Avatar
    Science Advisor
    Homework Helper
    Gold Member

    Hi Moridin,
    Regarding the argument that speed somehow affects the phenomenon of consciousness: if we take an allegedly conscious computer (whatever a computer is) and begin to slow it down, is there some speed at which it loses consciousness? Does the device slowly lose consciousness? What criteria should we use to define how the speed of computation affects consciousness?

    I don't think the point regarding speed of computation can be taken seriously.
     
  19. Nov 18, 2007 #18
    Q_Goest, what are your thoughts about Churchlands' counterexample?
     
  20. Nov 18, 2007 #19

    Q_Goest

    User Avatar
    Science Advisor
    Homework Helper
    Gold Member

    Hi Moridin,
    I think you're referring to the analogy regarding our intuitions; if not, please explain.

    Regarding the analogy,
    I'm not in favor of using any analogy or thought experiment unless it aids in understanding a logical argument. This is where the Chinese room thought experiment fails.
    (Same reference)

    I'd agree that we can't dismiss the possibility that the Chinese room is consciously aware, a point made in the article you referenced which I've seen made a few times previously. It's a good point. This is strictly an intuition we have. We can't say for sure that the system of the man, the room, and the instructions is not aware. Searle would like to have you believe it is not, but this is of course an intuition not based on any logical argument.

    If computationalism is true, then the system of man, room, instructions, input and output, must also be aware. I don't see any way around it, unless we find some way of defining a computation that excludes Searle's Chinese room. Searle is appealing to our intuitions.

    Regarding that last point (which is made by virtually everyone), that "... no neuron in my brain understands English, although my whole brain does": that's an interesting and often-cited point. Even this point, however, has critics. Steven Sevush and <someone else whose name escapes me> have proposed single-neuron theories of consciousness which actually look very appetizing to me. :)
     
  21. Nov 18, 2007 #20
    I think this is a pretty crucial question.

    Personally I would define a computer as a mechanism which can be configured to solve generalized problems. So a Macintosh is a computer, and a Turing machine is a computer, and the "Chinese room" mechanism with the man sitting in the middle is as a whole also a computer, and the human brain is a computer.

    Of course the way I've defined things Math Jeans' question is answered by "cheating", sort of: Can a computer be a person? Yes, because I defined a person as a kind of computer. I think that this indicates a problem with the question more than it does with my definition, though-- overall I see the "can a computer be like a person?" question as just not very interesting, because I don't think we're really all sure what it is we're even arguing about. I think any discussion on this subject invariably degrades into several different arguments which occur simultaneously, with people arguing past each other because they're fundamentally talking about totally different things. (This could be helped if people would bother to start out by stopping and defining such terms as "computer" or "person" or "self-aware", so that it isn't possible to accidentally equivocate between the things these terms mean in the different overlapping discussions.)
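
    One standard way to make "a mechanism which can be configured to solve generalized problems" precise is the Turing machine. A minimal sketch follows; the rule-table format and the example program (a trivial unary incrementer) are invented here purely for illustration.

    ```python
    def run_turing_machine(tape, rules, state="start", pos=0, max_steps=1000):
        """Minimal Turing machine: `rules` maps (state, symbol) to
        (new_state, new_symbol, move), where move is 'L' or 'R'.
        Blank cells read as '_'; the machine halts in state 'halt'."""
        cells = dict(enumerate(tape))  # sparse tape
        for _ in range(max_steps):
            if state == "halt":
                break
            symbol = cells.get(pos, "_")
            state, cells[pos], move = rules[(state, symbol)]
            pos += {"L": -1, "R": 1}[move]
        return "".join(cells[i] for i in sorted(cells)).strip("_")

    # Example program: scan right past the 1s, then append one more 1
    # (unary increment). The "configuration" is just the rule table.
    rules = {
        ("start", "1"): ("start", "1", "R"),
        ("start", "_"): ("halt", "1", "R"),
    }
    print(run_turing_machine("111", rules))  # prints "1111"
    ```

    On this definition, the "configuration" (the rule table) is what distinguishes one problem-solver from another, which is exactly the sense in which the Macintosh, the Chinese room, and (on this view) the brain all count as computers.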
     