
Artificial Intelligence

  1. Nov 16, 2007 #1
    I would like someone to tell me if the logic behind my theory is valid or if it isn't.

    My theory is this:

    True Artificial Intelligence is impossible.


    OK, let's talk about programming at the most basic level. Let's use Java. If I wanted to create a being through programming that could "think" for itself and make decisions for itself, I would have to code it. Essentially it would consist of if-then-else statements. For example, the AI sees a person. My code says: if you see a person, then say "Hi", else say "go die", else say "wasup", etc. Now, there are an infinite number of responses, and one cannot code infinite responses. And that is just one instance among infinite instances, such as deciding what to eat, what to like about a person, or which movie to watch. The human brain is the only such object that can actually decide what to do and contains infinite responses. By saying this I baffle myself at how anything could contain knowledge of the infinite. Hmm. But the point is, AI is not possible.
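    The hard-coded approach the post describes can be sketched in a few lines of Java (the class name and canned responses here are invented for illustration, not taken from any real AI system):

```java
// A toy sketch of the "hard-coded responses" approach described above.
public class ScriptedGreeter {
    public static String respond(String stimulus) {
        if (stimulus.equals("sees a friend")) {
            return "Hi";
        } else if (stimulus.equals("sees a stranger")) {
            return "Wasup";
        } else {
            // Every unanticipated stimulus falls through to a default:
            // this brittleness is exactly what the post is pointing at.
            return "...";
        }
    }

    public static void main(String[] args) {
        System.out.println(respond("sees a friend")); // Hi
        System.out.println(respond("sees a rhino"));  // ... (never coded for)
    }
}
```

    The sketch makes the objection concrete: every response the programmer did not anticipate collapses into the default branch.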
  3. Nov 16, 2007 #2


    Science Advisor
    Homework Helper

    There aren't an infinite number of decisions.
    Faced with each new set of circumstances there is a small range of possible decisions; you weigh these using some function and act.
    Then you come to another new set of inputs.

    It's like playing chess or Go - there are a large number of possible states but you only have to meet them one at a time.
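    The "weigh a small set of options and act" idea can be sketched as follows (the options and the scoring function are invented for illustration; real game programs use far more elaborate evaluation):

```java
import java.util.List;
import java.util.function.ToIntFunction;

// A minimal sketch of facing one situation at a time:
// score each candidate action and pick the best.
public class ChoiceSketch {
    public static String choose(List<String> options, ToIntFunction<String> score) {
        String best = options.get(0);
        for (String o : options) {
            if (score.applyAsInt(o) > score.applyAsInt(best)) best = o;
        }
        return best;
    }

    public static void main(String[] args) {
        // In any one position, only a handful of moves exist:
        String move = choose(List.of("advance", "retreat", "wait"),
                o -> o.equals("advance") ? 3 : o.equals("wait") ? 2 : 1);
        System.out.println(move); // advance
    }
}
```

    The point of the chess analogy survives in code: the program never enumerates all possible games, only the finite options in front of it right now.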
  4. Nov 16, 2007 #3
    I vote invalid because you have a major invalid assumption in there: human brains contain an infinite amount of knowledge? There are an infinite number of ways we can greet someone? These are pretty big claims that need backing up if they are to be believed, and even then I'd still doubt them.

    Also, as newborns, we don't know all this stuff. But by experiencing, we can learn. AI is often the same -- just give it the basics (it's 'born'), let it experience the world, and then it too can learn. You learned to say "hi" to some and "go die" to others, so why can't the computer?
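    The "born knowing nothing, learns by experience" point can be sketched directly (the class and the greetings are hypothetical, purely to illustrate the contrast with hard-coded responses):

```java
import java.util.HashMap;
import java.util.Map;

// A toy "learn from experience" greeter: nothing is pre-coded;
// responses are acquired as they are observed.
public class LearningGreeter {
    private final Map<String, String> learned = new HashMap<>();

    // "Experience": observe how a kind of person was greeted.
    public void observe(String kind, String greeting) {
        learned.put(kind, greeting);
    }

    public String greet(String kind) {
        return learned.getOrDefault(kind, "..."); // unknown until experienced
    }

    public static void main(String[] args) {
        LearningGreeter g = new LearningGreeter();
        System.out.println(g.greet("friend")); // ... ("newborn" state)
        g.observe("friend", "hi");
        System.out.println(g.greet("friend")); // hi (learned)
    }
}
```

    Nothing here requires an infinite table: the repertoire simply grows with experience, which is the reply being made to the original post.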
  5. Nov 16, 2007 #4
    Yes, there are an INFINITE number of ways to greet someone. The human brain is not perfect, which separates it from a machine. Human logic isn't all logic; by that I mean, what you do doesn't have to be logical. If you see your friend and want to greet them, you could raise your hand in the air, run around in a circle, and then greet them by saying whacawhacadingdongyo. Likewise, you could see them, tell them not to move, go to China, kill some Chinese farmer, take a plane back, and say hi. Imperfection is not possible in AI.
  6. Nov 16, 2007 #5
    Essentially, you call AI impossible because you can only code a finite number of instructions whereas true intelligence allows for an infinite number of responses. I might agree with your reasoning if AI actually relied only on a finite set of instructions, but that's not the case. Or if intelligence allowed for an infinite number of responses, but that's not the case either. Your proof suffers from incorrect premises.
  7. Nov 16, 2007 #6
    False, unless you both live for an infinite amount of time.

    Very false. Clearly you've never seen my programs.
  9. Nov 19, 2007 #8


    Science Advisor
    Gold Member

    I don't often wander over into the philosophy section, but I'll give you my two cents. You have this assumption that human intelligence, or at least the number of possible greetings for your friend, is infinite. This is not true. It is a perception. Whether you realize it or not, you are constantly learning things. Just by looking at something, some object in the room, you learn a new way to greet your friend. Why did you choose China in the example in a previous post? Whether you realized it or not, something made you think of it. One thing led to another, and in the end you had the kill-the-farmer example. It wasn't just lined up as the next entry in a lookup table. So what is actually happening is that you don't have an infinite number of ways to do it; you just keep learning new ways. From our perception, there is no difference.

    When we build computers that PROCESS information in the same way our brains do, then we will have something. It has more to do with that than with clock speed and all the stuff we hear about from people trying to sell PCs and software. Of course, this takes a whole new approach to computing, with components and devices that most likely don't exist today.
    Last edited: Nov 19, 2007
  10. Nov 22, 2007 #9
    I don't know if this is useful in any sense. In fact, it might take the conversation off topic a bit, but until we can identify the "essence" of something, it is unlikely that we will ever be able to describe to an AI how to recognize object X.

    For example, let's consider the simplest AI that can convey information believably enough to seem 'almost real.' I'd say this is probably something like a "chat bot" that can carry on an intelligent conversation with a human being via text (i.e., no visual or auditory input). So speech recognition is pretty darned important, but what does it mean to recognize speech? In my head at least, this is the main question: how DO we recognize speech? In order for this to work to our benefit, we'd have to answer this question, then figure out a way to convey this information to a computer. Personally, I feel that if we can logically answer the question of what the "essence" of an object is (notice how, for any shape and size of pen, cup, chair, car, lamp, etc., you KNOW what that object is and what its purpose is), via recognition of the word, then we might be able to make some progress in the AI world.

    At the current moment (I believe) AI isn't even as smart as a toddler. It can't 'walk' into a room and recognize objects as a toddler can. Sure, it can perform some simple pattern recognition for objects that may be pre-programmed; however, it couldn't walk into a new room, 'self-identify' objects already in the room, and evaluate their purposes. My understanding is that this inability to 'recognize' objects is very similar to its inability to recognize the essence of an object tied to the word as well.

    But I think the question of whether an 'essence' of a thing even exists is still up for debate as it is... I'm still curious how I can even recognize objects on a day-to-day basis. I could go on forever, but it's 6 a.m. and I haven't slept, so I'm probably just speaking nonsense at this point.
  11. Nov 29, 2007 #10
    You haven't played the Worms 2006 AI on the mobile phone, have you? Or at least any other similar game, like DOTA AI, perhaps? It all depends on the programmer... An excellent programmer creates excellent AI.

    As previously stated, AI is like chess... A chess player will react according to how his opponent moves. The player moved because his opponent made a move...

    I apologize if I cannot explain my thoughts well...
  12. Dec 1, 2007 #11
    I can give you a definitive answer that AI is possible.

    (I've been working in Strong AI since the mid-90s, and I have to say that most of the big theoretical problems were probably solved years ago. In my own work I have built up a system-level model of a working sentience. In my model, consciousness is the centre of sentience and mind, and of course consciousness is really only self-awareness. Once we have a working theory of consciousness, we have a foundation upon which we can build the rest of a complete working sentience.)

    The most difficult problems all centre around computability and the real problem is one of language.

    You are actually correct in saying that the mind can process near-infinite sets; it does this by a one-step synchronous computation cycle. Representing this in today's computers has proved very, very difficult. I am sure the problems are solvable, though. I did a lot of work looking at how brains build up the computation cycle, and one thing that's very clear is that the brain itself does not use the neural model alone. There is a complex layer of abstraction between the neural circuitry and what lies above it. In fact, neural models simply can't solve most of the problems. When you look at something like visual object breakdown and analysis, and parallel object-world memory storage, today's computers actually look like a much better option. There's a huge amount of interlocking logic involved and a complex dynamic hierarchy. Neural networks are terrible at this kind of thing.

    In fact, the real problem is that neither model can solve some of the problems, and we have to turn to something else. Roger Penrose suggested some kind of quantum interlock, but I have developed a much simpler model that uses a temporal interlock. In other words, the brain cheats by looking into its own future state using novel physics. I just know that we can reproduce this in a machine, but we don't have to, because once we know it's there we can build around it algorithmically using computers. Roger Penrose only thought the solution was incomputable because he stopped when it was only half finished.

    (Physics: once you go below the quantum limit in energy, temporal indeterminacy becomes one possibility; another way of looking at it is as a tachyonic interaction.)

    - Robert Lucien
    Last edited: Dec 1, 2007
  13. Dec 2, 2007 #12
    I'm uncomfortable with describing brain function as "infinite" when in fact we know that there are limits to the amount of information a brain can process: anything from simply not being able to memorize more than 7-9 numbers at a time, on average, to accurately recalling very long-term events (such as from childhood). Just the simple fact that brain function varies from one person to another reinforces the point.

    Is current technology able to duplicate the EXTREMELY HIGH functionality of the mind? Not yet, but with much faster processing and by solving some very solvable input processing issues, I believe it's possible.

    I think the problem is that people tend to look at machines as trying to duplicate the human mind and want to associate the mind with a "soul," or to say that a machine can't duplicate "emotional content" appropriately. I think the true perspective is that the human mind is itself a biological machine, and people have a hard time with that concept. They see emotional duplication as "fake." It's a perception problem. If you met someone and developed an emotional bond with them, and then afterwards discovered they were actually a convincing machine, would that change the emotional bond you've experienced? What if the machine didn't know it was a machine? Certainly it would change future perception, but the past is real. These are the issues of the future.

    When you set aside things like ego, jealousy, and competition, AI seems a lot easier to accept. We have to be willing to accept that we may one day create something that supersedes us. It happens to parents all the time. We just have to come to terms with the "artificial" aspect.
    Last edited: Dec 2, 2007
  14. Dec 2, 2007 #13


    Science Advisor

    Correct me if I'm wrong, but it seems this question could be summed up as:
    do you think your brain works on chemicals alone, or is there some other force involved?

    If your brain works entirely on chemicals alone, this would seem to imply that a computer could be created to duplicate what a brain does. It might take forever to build, but it would be possible, in theory.

    On the other hand, if you think the brain is more than chemicals, such as god controlling your morality, then AI would not be possible because god doesn't make computers.

    Does this sound about right?
  15. Aug 22, 2008 #14
    If we take a look at the origins of life on this planet, we begin to notice something important. There is a theory that life originated from long chains of molecules capable of reproduction, arising by mere chance. As life evolved, these long chains of molecules became larger and eventually became the basis for life in the sea. I think this notion is important because it illustrates where the human brain came from. In a sense, Darwin's idea of random advantageous traits initiated by mutations of genetic code begins to highlight how something completely random can produce something beautiful: life, which is built upon the premise of "it is what it is," because it's here and has survived.
    A programmer sits down at his computer and writes code: a sequence of instructions to act under certain conditions. Human beings are not like this. We have free will, which is exactly the reason why a conventional computer will never THINK. IMO, to create artificial intelligence, the computer must be born out of the same elements that created life in the first place: a completely random event aimed at self-reproduction. Perhaps this could be achieved with a quantum computer. But we have to remember that for a computer to be truly alive, as it were, we can't force it to think a certain way. It must have free will. Of course, this process took millions of years for us, so I don't know how you solve that problem. But it definitely interests me. How about you?
  16. Aug 22, 2008 #15
    Source, you really don't have a sophisticated enough understanding of computer science (if you think that programming is nothing but if-then statements) to understand AI possibilities.

    I assure you it is well known that AI is *possible* with sufficient computing power. When neurons are compared to bits, it is evident that we can most certainly emulate everything they do, but the storage and speed requirements are far beyond our current technology. That does not mean they will never be.

    You must also remember that the brain is not a linear microprocessor; it is like millions of microprocessors working together. You most certainly would not program such a thing with "if-then" statements.
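    The contrast with if-then programming can be made concrete with the simplest building block of that "many units working together" picture: a single artificial neuron. This is a generic textbook sketch, not the method of any poster here, and the weights are invented for illustration:

```java
// A minimal sketch of a single artificial neuron: a weighted sum
// compared against a threshold, rather than an if-then rule table.
public class Neuron {
    public static int fire(double[] inputs, double[] weights, double threshold) {
        double sum = 0;
        for (int i = 0; i < inputs.length; i++) {
            sum += inputs[i] * weights[i];
        }
        return sum >= threshold ? 1 : 0; // all-or-nothing output, like a spike
    }

    public static void main(String[] args) {
        // Weights here are hand-picked; real networks learn them from data.
        double[] w = {0.5, 0.5};
        System.out.println(fire(new double[]{1, 1}, w, 0.8)); // 1
        System.out.println(fire(new double[]{1, 0}, w, 0.8)); // 0
    }
}
```

    Behavior emerges from many such units and their learned weights, not from an enumerated list of responses, which is the point being made against the original post.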

    There have been naive attempts at AI involving scripts or "menus," as some have called them. They can mimic human attributes like a parrot but do little more. That is not where legitimate AI is headed.

    For what it's worth, I personally promise you there will be AI this century. I will bet anything on it.
  17. Aug 22, 2008 #16
    The human brain and consciousness are, like everything else in this universe, modeled on the laws of physics. A powerful computer can likewise be able to simulate the mechanisms of the brain.

    A human is no more than the sum of its parts. This includes feelings, emotions, love, etc.
  18. Aug 22, 2008 #17
    Do you actually know this as a truth? What exactly is 'imperfection'? What is 'perfection'? Why can't it be simulated on a computer?

    Before I delve further, I would like to know how initiated you are on the subject of intelligence. I'm not trying to make a cheap ad hominem argument here: I just need to know in order to choose my method of reasoning in our discussion.
  19. Aug 22, 2008 #18
    If the human mind exists, then it should be possible for us to create something that at least matches our level of intelligence. In that case, you would have to ask what level of intelligence the human brain retains (understanding our capabilities). The human mind is not an ultimate intelligence; there are limits (we cannot perceive beyond our consciousness: that is one noticeable limit among several others). I do believe, however, that statements about the rise of AI through simply enhancing memory functions in computers (or the current way of integrating computer functions) are not completely valid. Intelligence, in a sense, must involve some other physical meaning. For example, our solar system is a system of energy and so is the human CNS-PNS, so in some sense the two must relate; and if the human CNS-PNS can be considered an intelligent system, then why not the solar system? Another question to take into account might be how entropy relates to intelligence: how intelligent can a system be in the face of entropy?
  21. Aug 22, 2008 #20


    Homework Helper

    Re: Artificial Intelligence

    Self-awareness in humans is a learned thing. An infant takes some time to begin to see patterns and spatially relate them to its own body via the visual field, and likewise to learn to associate and interpret sound before even beginning to undertake language. And throughout are the constant demands of running the chemical factory that powers activity and fuels subsequent growth in the blossoming of its DNA.

    Until machines are built that understand in intimate detail their interface with the universe around them, that command the ability and responsibility to fuel themselves and their function (setting aside any drive toward self-replication: eunuch machines, as it were), and that understand, in their own terms, learned through layers of their own experience, then transplanting human-fabricated constructs, which mimic our own self-awareness and not theirs, seems to me doomed to be clever programming: adaptive expert systems, even, capable of great assistance to humans, but not artificially intelligent in any truly conscious sense.
  22. Aug 29, 2008 #21
    The bottom line is that, conceptually, all the processes and composition of the human mind will eventually be duplicated, including meeting and exceeding the processing power of the human brain, the ability to analyze and interpret external data through sensors, the ability to learn, and even eventually to procreate, as in the singularity. These are already solid concepts with no major blockers, given time and the geometric application of Moore's law. In fact, if Ray Kurzweil is correct, it will happen sooner rather than later, due to what can most simply be described as compound expansion: improvements made on top of improvements, with those improvements factoring into the next increase, adding up to an almost vertical line of processing-power increase. I have no doubt that the singularity is very possible in this century.

    So what separates us from machines? The ability to learn? To procreate? To be self-aware? Free will? The singularity would make all of these things possible; in fact, most of the aforementioned things would precede the singularity. So in every way, machines will likely one day duplicate humans.

    People will throw around terms like "soul," but what it really comes down to is emotional response: anger, jealousy, rage, contempt, love. This is what people mean when they say soul. And ultimately it is possible to teach even this behaviour. This has pluses and minuses. Beings more powerful than ourselves who get angry or jealous seem like a dangerous concept. But if you think about it, emotions can be taught. We learn to associate certain experiences with an emotional response. That can be programmed. The true challenge is how to deal with machines that exceed us. Then we go from being proud parents to annoying children who get in the way. Programming failsafes and methods of control is a stopgap, but when the machine can build itself, it can eliminate the code that allows us to retain control. Then it gets tricky from there.

    Now we can stop worrying about nuclear war and start worrying about the future of the Terminator (loosely) or nanobots consuming the planet.
  23. Aug 29, 2008 #22
    I saw a video lecture by John Searle once, which I think contained a very interesting argument:

    Let us start easy, with something like a calculator that can do basic arithmetic. Although the calculator works out of a microchip, you can make the exact same machine using ping-pong balls and cups with thread between them. It is bigger, but it has the same functionality.

    Now, you can do everything that a computer can do with an elaborate system of ping-pong balls and threaded cups (you just need a lot of space). Can you really imagine that the ping-pong system becomes sentient?

    That is a tough pill to swallow. So the argument suggests that perhaps intelligence depends on certain chemical reactions (or sub-atomic relations) that we simply cannot mimic with silicon.

    No one knows, of course, but the argument is interesting.

  24. Aug 29, 2008 #23
    Ask anyone who's used Windows Vista whether "imperfection" can't be simulated on a computer! :rofl:

    Guess what, guys: the human brain is a computer! Except in the case of rare neurological disorders (like schizophrenia), the neurons do exactly what the laws of biology say they should do. If humans make mistakes, the mistakes are either in the software or in the peripherals (the eyes, the ears, etc.).

    Look at it another way. What is actually happening when a human, say, misspells a word? Well, they were either going too fast and therefore being careless, or they actually forgot how the word is spelled. And what happens when a computer is forced to store more information than it can handle, or tries to perform faster than it can? Guess what: you see information get lost or garbled. It's the same idea.

    What computers WON'T have are emotions. Or will they? Emotions are chemical responses to scenarios that are programmed into our DNA for evolutionary reasons. (Love encourages reproduction and community/safety in numbers; hate encourages aggression and survival; fear encourages avoidance of harm; humor encourages release of tension and anxiety, etc.) There's no reason these responses cannot be programmed into a computer as well. Then there will be no scientific difference between the computer's emotions and ours. Any distinctions you could draw would either be philosophical or theological.

    Can you imagine a bunch of sub-atomic ping pong balls called protons, neutrons, and electrons, becoming sentient?

    A computer with sufficient power can simulate any deterministic physical process. A quantum computer will be able to simulate ANY physical process - deterministic OR probabilistic. Either way, computers WILL simulate the hardware of human brains in the near future. Programming the software is an entirely different challenge.
  25. Feb 8, 2010 #24
    Sorry to add to an old debate but someone might still read it.
    Re processing power: computers have more than enough processing power to execute AI today. In fact, an ordinary PC probably has enough. My spec, in ballpark figures: 10 GB of online memory, processing throughput around 1 GB per second, dedicated processing lines up to 10 billion operations per second. Ironically, the core of an AI could probably be run on a Z80.
    Of course, none of this matters; another part of the spec involves the operating system, hardware barriers, and minimum reliability, and no current system gets anywhere near these. I could build the project tomorrow; all I need is a silicon fab to build the CPU, a designer to design it, coders to write the OS, etc.
    - I can't even publish the thing; put into an open architecture, it would be a weapon of mass destruction within ten years. Ever watched The Phantom Menace?
  26. Feb 9, 2010 #25


    Science Advisor
    Gold Member

    It has long been thought that human intelligence has a quantum component, such as quantum tunneling between neurons. This kind of process is largely beyond the reach of current computer technology. A quantum computer, however, would be an entirely different story, and might have pretty astonishing [and possibly frightening] AI capabilities. Here is an interesting paper - Artificial and Biological Intelligence, http://arxiv.org/abs/cs/0601052