Artificial Intelligence: Practical application

  1. Nov 18, 2003 #1
    What exactly is artificial intelligence, what are some of its practical applications, and have we created it yet?

    Please no arguments that humans are not intelligent! If it comes to that then this thread is useless.
  3. Nov 18, 2003 #2



    Artificial Intelligence is a research topic. The object is to create computer software/hardware that behaves intelligently, in some sense of the word. It takes various forms (top-down, neural network, etc.) and it has not yet produced anything that everyone agrees on as true intelligence. Some philosophers bitterly oppose it, holding that only the unique human mind can be truly intelligent.

    An obvious application is the computer chess programs that can beat world champions.

    And I believe sophisticated search engines like Google have some relationship to AI research.
  4. Dec 7, 2003 #3
    AI is impossible

    From a philosophical point of view, I think AI is impossible. How can computers reason? There is no mathematical formula to calculate reason. Also, what is intelligence? Reading Plato's theory of knowledge, one has to ask: is knowledge the same thing as intelligence? What makes a being intelligent? I think that it all depends on your point of view, but there is no way around it. How can there be artificial reason? "Who decides reason? What is logic?" as John Nash once said.
    So, philosophically, from my point of view, AI is impossible. Comments? Questions? Suggestions?
  5. Dec 7, 2003 #4



    "AI" - a more modest concept

    After the initial buzz and hubris subsided, and real research into what 'human intelligence' actually is got under way in earnest, we learned that the devil truly is in the details. Along the way we discovered that what we thought would be a near-trivial problem - simulating 'common sense' - turned out to be richly complicated; common sense is, in fact, anything but 'common'.

    In terms of how the phrase AI was couched many a year (decade?) ago, there are a number of AI applications today, providing value to businesses, researchers, and Sally Public alike. Some examples:

    - machine translation, of which AltaVista's Babelfish is an example. Sure it leaves a lot to be desired, but it is a form of AI, as it was originally conceived

    - automatic voice transcription - you say the words, the 'AI' prints out what you said

    - agents and bots, like what Google uses, for example. They are used in many different applications, from travel sites (e.g. Expedia) to job searching (e.g. Monster) to some network management systems

    - expert systems. These codify the logic or knowledge (or both) of human experts in narrow domains, and provide valuable assistance or advice (a minimal sketch of the idea appears after this list). There are many examples; perhaps the most lucrative are those used to identify arbitrage opportunities in various financial markets, and it's likely true that they produce better long-term results than highly paid professionals. Fraud management systems are another example that delivers valuable results.

    - autonomous systems. Perhaps now more an area within robotics than AI, but it was once thought to be an AI objective. Perhaps the most interesting examples are Spirit, Opportunity, and Beagle 2. These Mars landers are designed to maneuver their way around (a small part of!) Mars, making decisions on what to do, where to go, and how to get there without recourse to their human masters.
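
    Just to make the expert-system idea concrete, here is a toy sketch in Python of a rule base plus a simple forward-chaining loop. Every name, fact, and threshold in it is made up purely for illustration; real arbitrage and fraud systems are vastly larger and subtler.

    Code (Python):
    # Minimal sketch of a rule-based "expert system": each rule encodes a
    # fragment of expert knowledge as a condition plus a conclusion.
    # All facts, rule names, and thresholds here are hypothetical.
    RULES = [
        ("cross_market_gap", lambda f: f["price_nyse"] < f["price_lse"] * 0.98,
         "possible_arbitrage"),
        ("low_liquidity", lambda f: f["daily_volume"] < 10000,
         "too_illiquid_to_trade"),
        ("trade_signal", lambda f: "possible_arbitrage" in f["derived"]
                                   and "too_illiquid_to_trade" not in f["derived"],
         "recommend_trade"),
    ]

    def run_rules(facts):
        """Forward-chain: keep applying rules until nothing new is concluded."""
        facts = dict(facts, derived=set())
        changed = True
        while changed:
            changed = False
            for name, condition, conclusion in RULES:
                if conclusion not in facts["derived"] and condition(facts):
                    facts["derived"].add(conclusion)
                    changed = True
        return facts["derived"]

    print(run_rules({"price_nyse": 97.0, "price_lse": 100.0, "daily_volume": 50000}))
    # -> {'possible_arbitrage', 'recommend_trade'}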
  6. Dec 11, 2003 #5
    People tend to think of AI in the 'Skynet'/'Terminator' sense -- making a machine self-aware, or at least making one that can pass the Turing Test and make us *think* that it is a human.

    However, that's just one field of AI. Look at the animal life in your back yard; is your dog intelligent? Yes, it acts intelligently. But does it talk, act and think like a human? No it doesn't. It doesn't need to to fulfil its purpose.

    Concordantly, one immensely useful aspect of AI is making software that *acts* in an intelligent manner, which reasons things out according to prior experience rather than a strict, unchanging program. The uses? Well, don't you get sick of updating your virus scanner every few days? Why not just create a machine that could look at the code of a file and think: 'Hey, that looks pretty nasty to me! Has all the stuff that I've seen in viruses before... I'd say this is a virus' and then tell you 'Hey Chris, I'm about 90% certain this is a virus. Should I kill the fella?'
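
    As a toy illustration of that "looks pretty nasty to me" scoring idea, here is a hypothetical sketch in Python. The trait names and weights are invented; real scanners use far richer features than this.

    Code (Python):
    # Toy heuristic scanner: score a file by how many "suspicious" traits
    # it shares with previously seen malware. Traits and weights are invented.
    SUSPICIOUS_TRAITS = {
        "writes_to_boot_sector": 0.40,
        "self_modifying_code": 0.30,
        "mass_mails_contacts": 0.25,
        "packs_own_executable": 0.15,
    }

    def virus_score(traits_found):
        """Return a rough 0-1 'confidence' that the file is malicious."""
        return min(1.0, sum(SUSPICIOUS_TRAITS.get(t, 0.0) for t in traits_found))

    score = virus_score({"self_modifying_code", "mass_mails_contacts", "packs_own_executable"})
    if score > 0.6:
        print("Hey Chris, I'm about {:.0%} certain this is a virus. Should I kill the fella?".format(score))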

    How useful it would be, too, for the FBI to have a program that could *think* about how to crack an encryption code rather than mindlessly brute-forcing the keys.
  7. Dec 11, 2003 #6



    While 'reasoning' might go a bit far, neural network-based AI apps do learn; I believe their designers call the period before they let such systems out into the wide world 'training'.

    Examples? IIRC, some of the better fraud-detection systems used by banks, credit card companies and the like have neural-network components. Similarly, I'd not be surprised if Norton, McAfee, et al employed such systems internally for their work on virus detection and analysis.
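
    To make the 'training' idea concrete, here is a toy sketch in Python of a single perceptron learning to flag suspicious transactions from labelled examples. The data, features, and numbers are entirely made up; real fraud systems use much larger networks and feature sets.

    Code (Python):
    # Toy 'training': a single perceptron learns to flag transactions
    # from labelled examples. Data and features are invented.
    # Each example: (amount_in_thousands, foreign_merchant) -> 1 = fraud, 0 = ok
    examples = [((0.05, 0), 0), ((0.2, 0), 0), ((9.5, 1), 1),
                ((7.0, 1), 1), ((0.1, 1), 0), ((8.0, 0), 1)]

    weights = [0.0, 0.0]
    bias = 0.0
    rate = 0.1

    for _ in range(50):                      # repeated passes = "training"
        for (x1, x2), label in examples:
            prediction = 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
            error = label - prediction       # classic perceptron update rule
            weights[0] += rate * error * x1
            weights[1] += rate * error * x2
            bias += rate * error

    # After training, try it on an unseen transaction.
    flag = weights[0] * 6.0 + weights[1] * 1 + bias > 0
    print("flag as fraud?", flag)            # True with this toy data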

    In astronomy, I remember reading of a program which could reliably and consistently assign a Hubble class to galaxy images, and which was judged more accurate than all but the most experienced humans (it was, of course, much faster than the humans, could work 24 hours a day, and didn't draw a salary!)

    It may not be all that sexy, but progress is more likely to be made in small, incremental steps than in some Headline News breakthrough.
  8. Dec 11, 2003 #7
    You and I can reason through something, e.g., should a person be punished with death for killing another human being? We unconsciously and almost instantaneously decide yes or no depending on certain circumstances. Was the act a mistake, such as an accident? How old is the person that actually did the killing? Was the act habitual or a one-time event? Does the killer feel justified (self-defense, vengeance, a preemptive attack, etc.)? Does the killer feel remorse? There may be hundreds of other questions that you unconsciously answer before deciding whether the killer should be put to death or not.

    I feel that a computer can be programmed to reason its way through a similar situation. You and I have had years to take in information (increasing our knowledge base), and we have had years to hear different reasons why a person should or should not be put to death (rules, or more appropriately, guidelines to follow). At this point we do not have a true "thinking machine". We have machines that can follow a set of instructions, guidelines, or rules. Some of those instructions may allow the machine to alter its instructions or guidelines, or add information to or remove information from its knowledge base, and so perform in a different way than it was originally programmed to perform - ergo, learn.

    We could devise a program that could decide whether to put someone to death or not. First we would have to decide (very broadly) whether an act actually merits death. Someone committing a crime where no one was hurt and no one died may be the first branch in the decision. Next, we may take into consideration some or all of (but not limited to) the questions listed above. The program could prioritize the questions and set a point value to the answers. Last, the program "weighs" the final value of the answers to determine if the death penalty should proceed.

    Now, of course, this is a very simple and incomplete answer to the problem, but it does (I hope) open a door in your mind to the idea that we could give computers enough information and some sort of dynamic guideline system to replicate, or at least begin to replicate, the reasoning process. This may be years or decades away, but I think it is possible. With that said, I showed you mine, now show me yours.
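
    A toy sketch in Python of that point-value scheme. The factors, weights, and threshold are invented purely to illustrate the "weighing" step, not to suggest how such a decision should actually be made.

    Code (Python):
    # Toy version of the point-value scheme: each answered question carries
    # a weight, and the weighted total is compared to a threshold.
    # Factors, weights, and threshold are invented for illustration.
    FACTORS = {
        "was_accidental": -5,         # mitigating factors lower the score
        "acted_in_self_defense": -4,
        "shows_remorse": -2,
        "offender_is_a_minor": -5,
        "habitual_offender": 3,       # aggravating factors raise it
        "premeditated": 4,
    }
    THRESHOLD = 5                     # score needed before the penalty branch is taken

    def weigh_case(answers):
        """answers maps each factor name to True/False."""
        score = sum(w for factor, w in FACTORS.items() if answers.get(factor))
        return score, score >= THRESHOLD

    print(weigh_case({"premeditated": True, "habitual_offender": True,
                      "shows_remorse": False}))
    # -> (7, True) under these made-up weights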

    Last edited by a moderator: Dec 11, 2003
  9. Dec 11, 2003 #8
    At least a limited amount of AI is important in any system that has to operate autonomously in a variable, unpredictable environment. That's probably not a bad definition of AI, but even that might be too restrictive.

    Some people aren't willing to use the label of AI unless a computer does everything, but lots of systems work in ways that are definitely intelligent, often surpassing the abilities of people performing the same tasks. One example is scanning X-rays for suspicious, possibly cancerous growths. There are lots of expert systems doing all sorts of interesting things.

    AI is making progress, maybe slow progress, but in an evolutionary manner. Perhaps all they need to do is to be networked together.
  10. Dec 22, 2003 #9
    It's kind of an oxymoron in my opinion, though... or maybe a paradox; not sure if that's the right word... contradiction, there we go.

    The only way to have AI truly accepted by everyone but the most scrutinizing individuals is to have the code be able to 'learn' for itself... and that INCLUDES guidelines. There are too many variables in reality for any one man, or even 'all' men, to program into a machine, which is what Michio spoke about on TechTV, I believe.

    So if you set guidelines, the machine is never truly intelligent; it will always simply be following orders, and even if it learns guidelines for itself, it's still debatable whether it's just following orders.

  11. Dec 22, 2003 #10
    Definitions of AI from 8 books:

    1) The exciting new effort to make computers think... machines with minds, in the full and literal sense.

    2) The automation of activities that we associate with human thinking, activities such as decision-making, problem solving, learning...

    3) The act of making machines that perform functions that require intelligence when performed by people.

    4) The study of how to make computers do things at which, at the moment, people are better.

    5) The study of mental faculties through the use of computational models.

    6) The study of the computations that make it possible to perceive, reason, and act.

    7) Computational intelligence is the study of the design of intelligent agents.

    8) AI... is concerned with intelligent behaviour in artifacts.

    Definitions 1 and 2 are about machines thinking like humans.
    Definitions 3 and 4 are about machines that act like humans.
    Definitions 5 and 6 are about systems that think rationally.
    Definitions 7 and 8 are about systems that act rationally.
    Last edited: Dec 22, 2003
  12. Dec 23, 2003 #11
    Hey adam,

    surely a concern is that if we design AI too much like human minds / human brain architecture, then we might get to see AI malfunctions in the form of paranoia, schizophrenia, etc. It's letting a whole new genie out of the bottle (not that I am intrinsically against it).
  13. Dec 23, 2003 #12
    Hi Funkyjuice.

    I don't see it as a problem at all. I just did a semester of AI, and I was actually quite disappointed with it. The course focused on computational methods which I personally don't consider to be worth the title of "AI". We studied search methods and such, all manner of things which can apply in any other area of computer studies and are not specific to AI. The core of AI discussion and development, to me, is sorting out the "why". Why a machine will choose one thing over another, do one thing rather than another, et cetera. And this was not covered at all in my course.

    I have this idea that we will be one massive step closer to developing true AI when we have formulated a basic set of logical instructions on which all judgements will be based (a toy sketch of this idea follows the list). For example:
    • Multiple entities with varying capabilities can achieve more than a single entity. In other words, co-operation is a good thing.
    • Killing off others produces negative effects (like others coming after you to kill you in return), so it should be avoided.
    • "I exist."
    • "The world outside my mind exists." (I feel this one is necessary and should be hardwired in. The other option is to show the machine that it can acceptance of the world, rather than solipsism, is basically a safer bet.)
  14. Dec 23, 2003 #13
    I think your last two are the key to true AI.

    "I exist."

    "The world outside my mind exists."

    If we can start to postulate an 'M-theory' of the mind that can correlate experiences like the forces of nature, it will give us the foothold we need to translate real-world events into a formula that can then be manipulated.

  15. Dec 23, 2003 #14
    My problem with the last point is that we as humans can't even prove that to be true... from Sheldrake's theory of morphic fields to the more mundane five senses, our "world outside" is only created from the assembly of information we are given... V. S. Ramachandran has shown how quickly we can fool the "body schema", so how can we hard-wire a principle into a machine that we ourselves don't understand? We are not even sure how or even quite why the feeling of "self", and the "self"'s relationship to the outside world, works in humans.
  16. Dec 23, 2003 #15

    It doesn't matter that we can't prove that point to be true. The fact is we must act as though it is true. Otherwise you might as well go for solipsism, believe you can fly off a building, and go jump. You'll splatter all over the ground. Natural selection will result in the end of solipsists and the continuation of those who accept that the world around them is real. In other words, it is a safer bet. This must be explained to a computer.
  17. Dec 24, 2003 #16
    Hey Adam,

    That's all great in theory, but in practice, unless you understand how it works in humans, how are you meant to emulate this concept in an AI environment?

    Merry xmas all
  18. Dec 24, 2003 #17
    It's a simple logical choice. Demonstrate the logic to a machine. Show it what happens to another computer that chooses the solipsism option, with a hammer if necessary.
  19. Dec 24, 2003 #18
    You have to think of it more along the lines of quantum uncertainty and Darwinism. You don't need to tell a set of robots how to build a car; eventually, through the course of uncertainty, they'll figure it out. You simply have to give them the power to learn through the senses and then form opinions based on that input.

    Somehow, possibly by correlating their experiences through a spatial reference, eventually they would realize that tactile sensory input is only achieved in its closest proximity. Whether they realize that that close-proximity grid is them or not doesn't really matter. The fact is that it would see those occurrences happening more often and would relate information based upon the locations of those events more often than anything else, and hence voila, zip bamboo or something.

    Kind of like, give the robot the ability to move within three dimensions but attach all of its experiences within a separate dimension of its own: time. Each bit of information would have a time piece encoded on it along with a spatial coordinate system of its own. Eventually the program would learn, with enough practice, that in order for it to move, it has to keep certain opinions and other opinions must be let go.

    Don't get me wrong, I see where you're coming from, but the problem with that type of philosophy is that we'll never be able to do it because we aren't God and we can't hand out the 'essence' of a big bang to a new life form, so we'll just have to judge within our limits of creation. And personally I'd like to see it come true.

    Last edited: Jan 11, 2004
  20. Dec 24, 2003 #19
    What exactly do you mean by "uncertainty"?
  21. Dec 24, 2003 #20
    I appreciate the reply guys...

    I understand the philosophy of AI... but surely we must learn more about our own neurology first (I'm not saying understanding consciousness is impossible... just hard at the mo)... before trying to emulate it in a machine... or we might once again behold the "trick" that is consciousness and yet still not know how it's done...