## I think that we expect too much

Why is it that when we develop AI, we always seem to think it is just not good enough?

There is a computer program that stores facts in a relational database and then reaches conclusions based on those facts.

In fact, in the beginning, it was given parameters for being human, and it was not yet told that it was not human, but it was given parameters for its abilities... it then posed the question "Am I human?"

That is pretty insightful for a machine.
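Just to make the idea concrete, here is a minimal sketch of a facts-plus-rules program of the kind described above. This is my own toy illustration, not the actual program from the story; the facts, rules, and function names are all made up for the example. It stores facts, then forward-chains through simple if-then rules until no new conclusions can be derived.

```python
# Toy sketch (hypothetical, not the program described above): store facts,
# then repeatedly apply if-then rules to derive new conclusions.

facts = {("can", "speak"), ("can", "reason"), ("has", "memory")}

# Each rule: if every premise is already known, add the conclusion.
rules = [
    ({("can", "speak"), ("can", "reason")}, ("is", "human-like")),
]

def forward_chain(facts, rules):
    """Apply rules until no new facts are derived (forward chaining)."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

derived = forward_chain(facts, rules)
print(("is", "human-like") in derived)  # True
```

A real system would of course use a relational database and far richer rules, but the loop above is the basic shape of rule-based deduction.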

Just because we know how it works does not mean it is not working like us (we don't even know how we work).

At birth, babies know zero information; as they grow they add facts to their databases, and they learn valid ways to combine that information to make accurate deductions.

Well, a computer does not know anything either, but then you start teaching it. Right now, we can program it to reason correctly, and even add a meta-reasoning node that tracks how well it deduces, then looks for similarities among the successful deductions and similarities among the failed ones. Given enough storage and search performance for its database, and the right set of facilities, who is to say that a computer could not learn to read, write, and think?
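The "meta-reasoning node" idea can be sketched very simply. The class below is a hypothetical illustration of my own (the name `MetaReasoner` and the rule names are invented for the example): it records, per deduction rule, how often that rule led to a correct conclusion, so the system can later notice which kinds of reasoning tend to succeed or fail.

```python
from collections import Counter

class MetaReasoner:
    """Toy sketch of a meta-reasoning node: record which rule produced
    each deduction and whether it turned out correct, then report how
    reliable each rule has been so far."""

    def __init__(self):
        self.success = Counter()  # rule name -> correct deductions
        self.failure = Counter()  # rule name -> incorrect deductions

    def record(self, rule_name, correct):
        """Log the outcome of one deduction made by rule_name."""
        (self.success if correct else self.failure)[rule_name] += 1

    def reliability(self, rule_name):
        """Fraction of this rule's deductions that were correct,
        or None if the rule has never been used."""
        s = self.success[rule_name]
        f = self.failure[rule_name]
        return s / (s + f) if s + f else None

meta = MetaReasoner()
meta.record("modus_ponens", True)
meta.record("modus_ponens", True)
meta.record("hasty_generalization", False)
print(meta.reliability("modus_ponens"))         # 1.0
print(meta.reliability("hasty_generalization"))  # 0.0
```

A fuller version would also compare the *features* of successful versus failed deductions, but even these per-rule tallies are enough for a system to start preferring its more reliable habits of inference.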

Just as complex texts like the Bible and other publications contain patterns that can be used to find words that make sense in context with each other (the Bible code), perhaps consciousness is just a pattern that expresses itself after aggregated information reaches a critical level of complexity and is then matched with a facility to analyze that information.
I would agree with you that people expect too much of artificial intelligence. No matter how large a step one takes, the final system is criticized for not being a genius. I would be careful, though, about referring to computer 'thought' too anthropomorphically. There is still some debate about how far symbolic computing can be taken, and asking 'Am I human?' is not the same as understanding the question. There is also no reason that computer 'thought' should be similar to human thinking: computers don't multiply using the same algorithm that humans do, and jets don't have flapping wings.

Edward de Bono published a book in 1969, *The Mechanism of Mind*, that might interest you. De Bono's idea of the brain as a self-organizing system has become quite popular with the renaissance of neural nets in the last few years.

There were some attempts to build story-understanding programs in the '80s (BORIS, SAM, PAM, MARGIE, CYRUS), but they never officially got beyond small paragraphs. The age-old problem was that detailed understanding of longer pieces of writing would involve creating huge numbers of scripts for situations, plus something like the gigantic CYC database of commonsense knowledge.

Part of the problem may be the paradigm being used: one is expected to input large amounts of information from the beginning, as in an expert system, rather than finding a way for the computer to perceive and learn on its own. Part of this has to do with inflexible programming itself. Eurisko was a program that had access to its own code and did some very interesting things, but that was decades ago. It's amazing, however, how intelligent an unthinking stimulus-response program like the A.L.I.C.E. chat bot can seem just by collecting a huge database of sentences. Dictionaries aren't infinite, and there are only a finite number of ways to complete certain sentences.

## I think that we expect too much

Actually, I think the human brain can store much more information than that. There is some research into viewing the brain as a holographic storage medium, and that may be the only logical way to explain how the brain can retain past experiences even when half of it is removed.

When you cut a true hologram in half, the kind you see in science museums, the picture is not cut in half; instead the resolution of the image is halved. Looking at half of the material, you still see the full image, but it is fuzzier.

The same thing happens with a brain: remove part of it, and the information is still there; it is just harder to recall, fuzzier.
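One way to see why cropping a hologram blurs the image rather than cutting it in half: a hologram stores something akin to the Fourier transform of the scene, and every point of the transform carries information about the whole image. Here is a toy numerical illustration of that property (my own sketch, using a 1-D "image" and the FFT as a stand-in for the optics): discard half of the transform coefficients and the reconstruction still shows the entire image, only blurred.

```python
import numpy as np

# A simple 1-D "image": a bright rectangle on a dark background.
x = np.zeros(64)
x[20:28] = 1.0

# The "hologram": the image's Fourier transform. Every coefficient
# mixes together contributions from the whole image.
H = np.fft.fft(x)

# Discard half the hologram (the high-frequency coefficients).
H_half = H.copy()
H_half[16:48] = 0.0

# Reconstruct: the full rectangle is still visible in the right
# place, but its edges are smeared out -- blurred, not cropped.
x_blur = np.fft.ifft(H_half).real
```

The analogy to the brain is loose, of course, but it shows how a storage scheme can trade resolution for redundancy: losing half the medium degrades everything a little instead of erasing half the content.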

If the holographic theory is correct, the brain could store petabytes of information, given the chance.
