
Approaches to Artificial Intelligence.

  1. Mar 25, 2004 #1


    Science Advisor
    Gold Member

    Ok. This thread may border on similarity to the brain thread, but I thought I'd start fresh.

    I would just like to toss around some ideas about AI. I'm not seriously researching, I don't have some crazy dream about actually building something, just interested.

    Some basic requirements that pop into my head would be:

    1) Requires a certain amount of sensory input from its environment.
    2) Requires some form of a memory.
    3) Requires some form of motor action.
    4) Requires a basic emotion (fear, for instance).
    5) Naturally, if fear is an emotion, the AI would have to have an instinct of self preservation.
    6) The ability to learn.
    7) Self-awareness.
    8) Feel free to throw in other suggestions.

    Admittedly, this is a tall order. The one thing I want to stress is that the basic requirements are where it ends. It can be as huge as a city or as small as a bug. It need not look or act like a robot. It can certainly be stationary, yet have some part that can move, like a solar panel for battery charging, for instance. Rome wasn't built in a day, so let's take small steps. Let's start with basic building blocks. My main question is: how is it all tied together? How is the consciousness formed?
  3. Mar 26, 2004 #2
    I'm also fairly interested in this. I'm not sure if your requirement #3 is necessary, though. Indeed, if the AI being wanted to explore and interact with its environment then that would certainly be beneficial, but it wouldn't necessarily be needed to make an artificial being. Also, is emotion necessary? I'm not too sure...

    Bottom line, however: I think we're a long way off from creating any sort of artificial intelligent being, at least by any method we've been using so far. My reasoning? You're never going to be able to write a computer program that is intelligent. It comes down to this: fundamentally, all programs are IF and THEN statements. IF this happens THEN do this, etc. The program responds to an input in a certain way as defined by parameters set by the programmer. Basically, you can predict what the program is going to do if you know all the input variables.

    Sure, we could program something that acts like a cockroach. We could tell it to seek out food (or, if it were solar powered, sunlight), avoid running into walls or getting squashed. We could tell it to remember where the best source of food/light is, and what times and places to avoid so it doesn't get stepped on. Heck, we could even program it to seek out materials to build another cockroach, build it, and then copy its little brain program into it. (That's a looooong way off, I know.) Sure, it would be VERY impressive, but would this be intelligent? No. It's just doing exactly what we told it to do. We could tell the program to rewrite itself to become more efficient based on the cockroach's experiences. We could even have a program from one cockroach 'mate' with a program from another to create a more efficient third program and thus a third cockroach. But even then, it's still behaving in a predictable manner.
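    The "programmed cockroach" picture can be made concrete with a tiny, hypothetical Python sketch. All the rule names and thresholds below are invented for illustration; the point is just that everything reduces to explicit if/then branches, so the same inputs always produce the same action.

```python
# A tiny, hypothetical sketch of the "programmed cockroach": every
# behavior is an explicit if/then rule, so given the same inputs the
# program always produces the same action. All rule names and
# thresholds are invented for illustration.

def cockroach_step(light_level, wall_ahead, in_danger):
    """Pick an action from hand-written if/then rules."""
    if in_danger:              # avoid being squashed first
        return "flee"
    if wall_ahead:             # avoid running into walls
        return "turn"
    if light_level < 0.5:      # seek out its 'food' (sunlight)
        return "seek_light"
    return "bask"              # otherwise stay put and charge

# Fully predictable: same inputs, same action, every time.
assert cockroach_step(0.2, False, False) == "seek_light"
assert cockroach_step(0.9, True, False) == "turn"
```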

    Anyway, if I could tell you how consciousness is formed, I think I’d be a rich man. LoL
    My guess: you need a truly random source to rewrite the program in a manner that preserves the original to an extent but allows for change that is beneficial to the program. Make sense? However, if you start looking at it like that, again it would be the random source that's ultimately controlling the program/being... which is kind of interesting philosophically, if that's the way our own consciousness works.
  4. Mar 26, 2004 #3



    Have you considered neural networks?

    I am still researching the details, but here's what I know:

    Engineers have made neat little "robots" that respond to stimuli, learn, and run around doing all kinds of weird stuff WITHOUT ANY PROGRAMMING.

    Imagine a chaotic jumble of wires connecting the input sensors, the motors, and some number of electronic nodes [basically amplifiers].

    A working neural net will have some physically preferred electronic state (a strange attractor?) and will try to move towards this state. This state could be a fully charged battery or a freely spinning motor. For example, robots have been made with photo sensors and solar panels that move around until they find a bright spot. If the light changes they will become "unhappy" and move around until they find a brighter spot.
    Another had six legs and "liked" to move. If it ran into a wall it would become "unhappy" and try different things until it started to move freely again. It would learn by trial and error to step over objects put in its way. Each time it encountered a similar trial it would respond faster. Actions that made the robot "happy" were internally reinforced.

    These are small, simple robots. The first one I described can be built from a $60 mail-order kit. The inventor kept making ones with more sensors, motors, and nodes. One had a video camera on top of a set of long legs. It seemed to recognize certain people and would hide in the bushes around strangers and follow other people around...

    These are all very nice, but since no one understands the details of how they work, all we can do is plug things together and turn them on. Some will work, most won't. Then a computer programmer said, "I can simulate a neural net on my computer a lot faster than you can make one in real life!"

    So now we have simulated neural networks, which can be "trained" by altering the network until the random firing of simulated nodes outperforms the best conventional algorithms.
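    To make the "training by altering the network" idea concrete, here is a minimal toy in Python: a single simulated node whose weights are nudged after each wrong answer until it computes logical AND. Real networks chain many such nodes; every number here is invented for illustration.

```python
import random

# Minimal sketch of "training" one simulated neural node: nudge its
# weights after each wrong answer until it computes logical AND.
# (Real networks chain many such nodes; this is a toy.)

random.seed(0)
w = [random.uniform(-1, 1) for _ in range(3)]   # two inputs + bias weight

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

def fire(x):
    """The node 'fires' (outputs 1) when its weighted sum is positive."""
    return 1 if w[0] * x[0] + w[1] * x[1] + w[2] > 0 else 0

for _ in range(50):                 # training passes over the data
    for x, target in data:
        err = target - fire(x)      # -1, 0, or +1
        w[0] += 0.1 * err * x[0]    # alter the network toward the target
        w[1] += 0.1 * err * x[1]
        w[2] += 0.1 * err

assert [fire(x) for x, _ in data] == [0, 0, 0, 1]   # it learned AND
```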

    Neural networks work in a way loosely modeled on how our own brains do, hence the name. They are my favorite choice for AI.
    Last edited: Mar 29, 2004
  5. Mar 26, 2004 #4


    Staff Emeritus
    Science Advisor
    Gold Member

    It would have to be code which knows how to write itself; a fundamentally different type of program.

    I don't think the bulk of the programming would be "if this, then do this; if this, then do this; ad infinitum", it would be more like "if new input detected, then develop new responses".
  6. Mar 27, 2004 #5


    Science Advisor
    Gold Member

    I would say that the human brain uses a lot of if-then statements. There are some differences, though. One is that there are a HUGE number of them compared to any computer program. Another is that the outcome is not always obeyed. Kinda like fuzzy logic. For example, you can be in a room talking with someone and be fairly engrossed in the conversation. Someone else may say something, but you just don't hear it. Your eardrum moves, the nerve sends the signal to the brain, but you just don't consciously understand it. Enigma, you mention a program that is able to rewrite itself. This is exactly what I have thought. That certainly could be done on a simple level; one wonders what would happen as it evolved and learned.

    Lets make an example of simple sensory input and reaction.

    Your hand touches a very hot surface. A person who has no feeling in their hands couldn't tell, other than by seeing their hand burning, smelling it, or hearing it sizzle. But no feeling. Technically, that person is LESS intelligent than a person with feeling in their hand. I know, that is opening myself up for some serious flaming, but hear me out. Now, take a person with feeling in their hands. They touch the same very hot surface. A HUGE sensory input goes through the nerves to the spine, but not necessarily the brain. Then the signal is processed and a signal is sent to the muscles to retract the hand.

    Think about the things that have happened here. First, the system must detect the heat and how much of it there is. Then the system determines just how important it is that the hand move, or whether it needs to move at all. A number of things are done based on how important it is to move the hand. If the surface is not very hot, the hand will retract as an almost conscious movement. You may keep reading the directions for your coffee, not drop the container of water, etc. MANY things have most likely taken place in the brain while this occurred. If the surface is VERY hot, then you will likely drop whatever you are holding, stop reading, educate your children with some new words, etc. The point is that ONE sensory input can affect probably thousands of different areas of the brain.

    You have also LEARNED from this experience. If you happened to have a new stove, you may remember the incident and avoid it in the future. From a computer programming standpoint, it is most likely that billions of different 'processors' were running at the same time. They all 'broadcast' various information. They also 'listen' for information. Some inputs to these 'processors' have more priority than others. They also form a collective single output.

    An example of this collective output: you are sitting here reading this post and all of a sudden you smell something hot. Your adrenaline is probably slightly raised. Then you decide to investigate. You get up and hear something sizzling. Adrenaline goes up a little higher. Then you actually SEE what is going on. HOLY COW, one of the kids left something in the toaster and the flames are HUGE. The adrenaline goes through the roof. One output based on many inputs. Each input added to the output. In some cases more input would subtract from the output, like if your spouse was already pulling the plug and dumping water on the toaster. The problem is already under control. Adrenaline goes down. Making any sense?
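    The adrenaline story can be caricatured in a few lines of Python: each sensory channel "broadcasts" a signal with a priority weight, some excitatory and some inhibitory, and the single output is their running combination. All event names and weights below are invented for illustration.

```python
# Caricature of "one collective output from many prioritized inputs":
# each sensory event carries a weight (priority); excitatory events
# raise the single adrenaline output, inhibitory ones subtract from it.
# All event names and weights are invented for illustration.

weights = {
    "smell_smoke":   0.2,
    "hear_sizzle":   0.3,
    "see_flames":    0.5,
    "fire_handled": -0.8,   # inhibitory: spouse already dousing the toaster
}

def adrenaline(events):
    level = 0.0
    for event in events:
        level = max(0.0, level + weights[event])   # never below calm
    return round(level, 2)

assert adrenaline(["smell_smoke"]) == 0.2
assert adrenaline(["smell_smoke", "hear_sizzle", "see_flames"]) == 1.0
assert adrenaline(["smell_smoke", "hear_sizzle",
                   "see_flames", "fire_handled"]) == 0.2
```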
  7. Mar 28, 2004 #6

    Oh, I whole-heartedly agree that we have 'pre-programmed', so to speak, responses that govern how we react to certain situations. But touching a hot surface, noticing that it's burning your hand, automatically moving it away, and learning not to touch it again or avoiding similar situations in the future does not necessarily demonstrate intelligence. What would be intelligent would be being able to apply what you learned from your experience to something new. For example: You learn from touching that a stovetop is hot if it is on. You learn from touching that it is cold if it is off. You know from some past experience that heated food tastes better than food that isn't heated. (Let's say you once ate a chicken that was on fire and liked it.) Therefore, putting food on a stovetop that is on will make the food taste better.
    Another demonstration of intelligence would be if you asked WHY the stovetop burned your hand.

    In response to the neural net post: I was thinking about that too. Indeed, you don't need a computer program, just an electronic configuration that tries to reach a certain 'goal'. I suppose it could be argued that people operate in much the same way, since the neural net is modeled on the human brain. We have certain goals that we try to attain, and I think there are thousands of individual ones that sort of group together into one ultimate goal which we aren't aware of. Not that I have any proof of my claim, but I think humanity is striving for 'something' and we're not sure what.

    But, looking at the individual, which now that I think about it reminds me a lot of The Sims game, we have to satisfy social needs as well as basic survival needs and 'intellectual' needs. The thing is, unlike the robot described in one of the posts that searches for light to make itself satisfied, we are never completely satisfied. Or, perhaps like in The Sims, we are for a short period, but we need to continually re-satisfy these needs, so over the years we have been evolving and creating things to make our re-satisfying more efficient.
  8. Mar 28, 2004 #7


    Science Advisor
    Gold Member

    I disagree. It is a form of self preservation. If the AI we strive to 'create' had no form of self preservation wouldn't you say it is less intelligent than one that did? Ever hear of the phrase: 'Not smart enough to come in out of the rain'?

    Yes, it would be. But it is not a requirement in this discussion. At least not yet. Remember what I said in my original post? Rome wasn't built in a day.

    Humans are never completely satisfied because, unlike the simple robot, we are so complex that there is always something in line to be the next thing satisfied. Our lifespan is typically not long enough to satisfy all the items in line. You just don't realize it because the current 'project' takes priority and you forget about all other things. Then, when it is done and you are 'satisfied', the next thing in line comes up and you realize that there are still things you want to do. Another reason is that our memory is not perfect and we tend to want to do some things over again because the memory has worn off and it would almost be like doing it for the first time. If the human life span were multiplied by, oh, say 5, and our memory didn't improve, we would most likely TOTALLY forget about some things we had done and continue to repeat them several times thinking that we had NEVER done them. Just a hunch.
  9. Mar 28, 2004 #8
    Well, like I said, it wouldn't necessarily make you intelligent. By that, I mean that a single demonstration would not be enough proof of intelligence. Self-preservation alone, in my opinion anyway, does not constitute intelligence. For example: plants grow and orient their leaves toward a light source and extend their root structure toward a source of water. This is so the plant stays alive. This alone, to me, doesn't demonstrate intelligence. Another example would be viruses, which preserve themselves by seeking out hosts and multiplying as much as they can. I don't think of a virus as being intelligent. I do believe that self-preservation is an important part of intelligence, but in itself it cannot be used as proof of intelligence. It is a key attribute; however, you could still argue for intelligence without it.

    I absolutely agree that humans are much more complex than any robot. However, I do believe that there are brief moments in everyone's life when we are completely satisfied. It may be because our minds aren't perfect and we are forgetting some things that we should be worrying about, but for that moment, as far as you're concerned, you could stay like that forever. And you probably would, if your mind didn't start remembering that you need to do certain things such as eat, pay your bills, get a haircut... whatever. It is the imperfection of the mind that gives you the 'illusion' of being completely satisfied (mind you, those moments are few and far between), but I think many people do experience it at least once in their life.

    I see, though, that the intelligence I'm describing is similar to human intelligence and not very basic. To me, intelligence is something that, so far, only humans possess. I'm not saying that every other creature is just some mindless automaton; however, there is a huge distinction between human intelligence and any other living example that we have. I think it's because humans have moved beyond the goal of simply procreating and being well fed. We have a thirst for knowledge, and I don't see that in anything else.
  10. Mar 28, 2004 #9


    Science Advisor
    Gold Member

    I don't think that thirst for knowledge on its own can define intelligence. And I'm not really accusing you of saying that. My dog watches my EVERY move. It is not necessarily because he is constantly hungry. He is actually interested. It is his nature. If he were a human I'd most likely hate him. He is the snoopiest being I've ever known. It takes him about one time to learn something. What drives him, though, is human contact. While you mention that humans thrive on a thirst for knowledge, it seems that my dog, like most dogs, thrives on human contact. It is what makes him learn. To him, when there are no people, there is simply no point. Socializing with people is his one goal, what someone previously described as a strange attractor. Humans have a different strange attractor, but any reasonable strange attractor should cause the beings to evolve into more intelligent creatures.

    You also mention that it is possible to argue that something is intelligent without having any self preservation properties. The lowest form of life has self preservation properties as you mentioned. Something may be able to be created that is intelligent without self preservation, but it is my opinion that it will never evolve into anything MORE intelligent without it. In the real world, competition will snuff it out if pure accident and coincidence doesn't.

    My example of the hand/stove situation is only a very small part of building something that is intelligent. It simply demonstrates the LARGE number of if-thens and logic happening in a very simple case. Can anyone see how, if you have a large enough number of individual 'processors' communicating with each other, you should be able to form a somewhat intelligent device?

    I got to thinking about the above statement. You say that the little roaches would be behaving in a predictable manner. Don't most human beings behave in rather predictable manners? You also say that it would do the things it does because it was 'told' to by its creators. Don't you see a similarity to humans here? Haven't you ever heard of 'closet kids'? Someone has a kid and decides they are too much trouble, so they don't bother to teach them anything. They literally lock them in a closet and the kid never really develops an intelligence. Humans are intelligent partially due to the fact that we are TOLD things as children. If everyone had to learn things the hard way, we would be dumber than rocks. It is the combination of the complexity of our brains and the fact that we communicate with each other. THOSE are the reasons why humans are as intelligent as they are.

    It seems that check and I disagree on what defines intelligence. Let's just agree to disagree and try to get back on topic.
    Last edited: Mar 28, 2004
  11. Mar 28, 2004 #10
    Hahaha, fair enough. You make some good arguments and I’m really enjoying this thread.

    But getting back to how to create this. After some thinking, I believe the neural net approach is the best. I'm curious, though. One of the easiest ways to build an extremely large and sophisticated neural net would probably be to do so biologically, so in effect, you'd be creating a brain. If you could then add as many inputs as possible, which could be electronic, such as touch sensors, video cameras, microphones, etc., and somehow connect them to the brain, what would happen? Could this work? If you were to do this to a human brain, but cut out the functions that are used to sustain the body, somehow keep the brain alive, and then connect it to a bunch of artificial sensory inputs, would this be defined as AI? Assuming it was exhibiting intelligent behaviour.

    Also, I'm wondering about what WJ said, that these neural nets are arranged in a way that makes them want to achieve some sort of preferred electronic state. Does the builder of the robots set these states? If so, could you set two 'goals' for this robot that conflict with each other?
  12. Mar 29, 2004 #11



    More on Neural Nets

    If you want to design a relatively simple neural net with electronic parts, you can manipulate the known equations and figure out a combination that works.
    For these simple types, a builder chooses the desired state (or maybe states)
    // You could choose opposing states, and it might oscillate or do nothing, depending on the details

    But for more complex desired behavior, that deals with lots of inputs and must make complicated decisions, it becomes nearly impossible to just design one. But computerized brute force can solve the problem.

    Suppose I decide to try and make a really smart, complicated robot, with a dozen different sensors and motors.

    On a computer, I simulate a neural net and an environment.
    I simulate thousands of different designs and test them with an algorithm. Eventually I get one that matches my criteria. It acts smart. It will do things I never told it to do. Also, when it makes a decision, I won't really know why.
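    That brute-force design loop might look like the following toy sketch. The "network" here is deliberately trivial (just two weights steering a one-dimensional robot toward a light), and every number is invented for illustration.

```python
import random

# Toy version of the brute-force design loop: generate many random
# "network" designs, score each in a simulated environment, keep the
# best. The network is just two weights mapping a light-sensor reading
# to a movement step; all numbers are invented for illustration.

random.seed(1)

def score(w):
    """Robot starts at x=0; light is at x=10; closer at the end = better."""
    x = 0.0
    for _ in range(20):               # 20 simulated time steps
        sensed = 10.0 - x             # light-sensor reading
        x += w[0] * sensed + w[1]     # the 'net' decides the move
    return -abs(10.0 - x)

best_w, best_score = None, float("-inf")
for _ in range(1000):                 # test a thousand random designs
    w = [random.uniform(-1, 1), random.uniform(-1, 1)]
    s = score(w)
    if s > best_score:
        best_w, best_score = w, s

# Nobody "told" the winning design how to reach the light, and reading
# its two weights doesn't obviously explain why it works.
assert best_score > -2.0              # some design ends up near the light
```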

    If I designed my program to simulate available electronic parts, I could then wire together the electronic brain. To me it would appear to be a chaotic mass of resistors and amplifiers, but it would function. I would not be able to say exactly what its desired states were, and it would not follow one course perfectly. Just like organic brains, it would make mistakes, sometimes learning from them, BUT it would never achieve perfection.

    // These neural networks differ from the most common simulated types. The types we use for fast algorithms don't have an internal feedback loop, so they always work the same way. Electronic analog neural networks keep echoes running around inside their wires, which allows them to remember solutions. //

    Theoretically, you could simulate a neural network with the capacity to change itself. This is probably what our brains do.

    Firing from the hip, I would say this would work. Scientists have been doing less severe experiments on monkeys and immobilized human patients.

    Someone connected a mechanical arm to some wires and "attached" the wires to a monkey's brain. The monkey was shown a screen with a pincher and a ball that bounced around. When the arm moved (in another room), the virtual pincher on the screen would move. When the monkey "caught" the ball it would get some reward (probably a zap across its pleasure center, poor little monkey...)
    This study suggests that the brain can adapt to new, intrusive connections and still function.
    Similarly, someone implanted a remote control into the head of a willing patient, and he learned to move a mouse around on a computer screen to communicate and play games.

    Hope this helps.
    Last edited: Mar 29, 2004
  13. Apr 5, 2004 #12
    Firstly, an AI program would not only consist of a bunch of if-then statements but also of switch statements...similar to electric circuits...a multi-way expression calls a case, which in turn calls on a function to do a certain action...

    And as for whether you really need a computer program at all...
    For simpler robots or machines you may not need a program. Learning would not be involved here. However, when you are making a complex robot or simulating a complex neural network, you will definitely need computer programs. For example, when the robot is asked to do a certain task, the program calculates the number of ways to do the task [combinatorics]...and out of these possible ways of doing things, it does a probability check on which way is better at accomplishing the task...when the right way of doing it is found, the value is recorded...so there is learning taking place...the robot would be learning what to do for certain cases, but it would not be writing its own code.
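    A hedged sketch of that learning scheme in Python: enumerate the possible ways to do a task, run a probability check on each by trial, and record the winner so the search isn't repeated. The task names and success rates are invented for illustration.

```python
import random

# Sketch of the scheme described above: enumerate the ways to do a
# task, estimate each one's success probability by trial, and record
# the best so the robot "learns" without rewriting its own code.
# All task names and success rates are invented for illustration.

random.seed(0)
ways = {"push": 0.1, "lift": 0.9, "drag": 0.2}   # true (hidden) success rates
learned = {}                                      # task -> recorded best way

def do_task(task):
    if task in learned:               # already learned: skip the search
        return learned[task]
    # probability check: try each way 100 times and count the successes
    estimates = {way: sum(random.random() < p for _ in range(100)) / 100
                 for way, p in ways.items()}
    best = max(estimates, key=estimates.get)
    learned[task] = best              # record the value: learning
    return best

assert do_task("open_door") == "lift"   # best way found by sampling
assert "open_door" in learned           # ...and remembered for next time
```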