lnx990 said:
In an article in Byte magazine (April 1985), John Stevens compares the signal processing ability of the cells in the retina with that of the most sophisticated computer designed by man, the Cray supercomputer:
"While today's digital hardware is extremely impressive, it is clear that the human retina's real-time performance goes unchallenged. Actually, to simulate 10 milliseconds (one hundredth of a second) of the complete processing of even a single nerve cell from the retina would require the solution of about 500 simultaneous nonlinear differential equations 100 times and would take at least several minutes of processing time on a Cray supercomputer. Keeping in mind that there are 10 million or more such cells interacting with each other in complex ways, it would take a minimum of 100 years of Cray time to simulate what takes place in your eye many times every second."
100 years divided by 1,000,000 (roughly 20 years of annual doubling, since 2^20 ≈ 1,000,000) is close to 1 hour.
1 hour divided by 32,000 (roughly 15 more years of doubling, since 2^15 ≈ 32,000) is just over 100 milliseconds.
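To make the arithmetic explicit, here is a rough sketch. The only hard number is Stevens' 1985 estimate of 100 Cray-years; the annual doubling of compute is an assumption implied by the ~20-doublings-in-~20-years reading above.

```python
# Rough check of the scaling argument above. The doubling rate is an
# assumption; the starting point is Stevens' 1985 figure of 100 Cray-years.
SECONDS_PER_YEAR = 365 * 24 * 3600
cray_1985 = 100 * SECONDS_PER_YEAR          # ~3.15e9 seconds of Cray time

after_20_years = cray_1985 / 1_000_000      # ~20 doublings -> ~1,000,000x faster
print(f"~20 years later: {after_20_years / 3600:.2f} hours")   # ~0.88 hours

after_35_years = after_20_years / 32_000    # ~15 more doublings -> ~32,000x faster
print(f"~35 years later: {after_35_years * 1000:.0f} ms")      # ~99 ms
```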
So the processing power being described should be available within one to two decades. And processing power isn't the only thing that has changed since 1985: estimates of the computational requirements for neural simulation have been revised, and the algorithms themselves have improved.
We have retinex algorithms today running in real time on consumer hardware; you would only run them on that Cray if you were looking for a good laugh.
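For a sense of what that means, here is a minimal single-scale Retinex sketch. The NumPy/SciPy implementation and the sigma value are illustrative assumptions, not any particular production code:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(image, sigma=80.0):
    """Single-scale Retinex: log of the image minus the log of a smoothed
    illumination estimate, leaving an approximation of surface reflectance.
    `image` is a 2-D array of non-negative intensities; `sigma` sets the
    blur scale used for the illumination estimate."""
    img = image.astype(np.float64) + 1.0        # offset to avoid log(0)
    illumination = gaussian_filter(img, sigma)  # large-scale illumination
    return np.log(img) - np.log(illumination)   # reflectance-like output
```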
Lyuokdea said:
Remember a computer can only do one calculation at a time.
I'm beginning to think I've fallen through a space-time rupture and Richard Nixon is about to be elected president. The year is 2009. A modern computer can perform hundreds of operations simultaneously, and that number is headed into the millions within the next decade.
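If it helps to see that concretely, here is a sketch of spreading independent calculations across CPU cores with Python's standard library; the work function and task count are made up for illustration:

```python
from concurrent.futures import ProcessPoolExecutor
import os

def simulate_cell(cell_id):
    # Stand-in for an expensive, independent per-cell computation.
    return sum(i * i for i in range(100_000))

if __name__ == "__main__":
    print(f"{os.cpu_count()} hardware threads available")
    # One worker process per core by default; the 1,000 tasks are spread
    # across them rather than being done one calculation at a time.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(simulate_cell, range(1_000)))
    print(f"{len(results)} results computed")
```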
michinobu said:
I remember reading in "Introduction to the Theory of Computation" by Michael Sipser, that Kurt Godel, Alan Turing, and Alonzo Church discovered that computers can't solve certain "basic" problems which are solvable to humans.
They discovered no such thing. I know Gödel believed this until the day of his lunatic death, but, to the surprise of no competent modern computer scientist, he was never able to prove it.
michinobu said:
such as being able to prove if a mathematical statement is true or false.
They regularly do just this, with full mathematical formalism. In fact, some modern proofs depend on computer assistance because the case analysis is intractable for human mathematicians; the Four Color Theorem is the classic example.
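As one concrete illustration of a machine checking whether mathematical statements are true, here is a minimal sketch using the Lean proof assistant (chosen only as an example; any proof assistant would do):

```lean
-- Two statements checked mechanically by Lean: a concrete arithmetic fact,
-- and a general theorem about natural numbers drawn from the library.
example : 2 + 2 = 4 := rfl
example (m n : Nat) : m + n = n + m := Nat.add_comm m n
```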
michinobu said:
Scientists in the field of neurology know very little about the human brain, the very fact that humans aren't digital shows what kind of difficulties an engineer might face in trying to recreate the human brain.
Digital computers aren't really digital either; they are complex analog devices from which Boolean behavior emerges. IBM's cognitive computing research and its lead researchers have what is perhaps the most sophisticated computational brain model available, and they, of all people, see the brain as a binary device. The fact is that the observing audience, web-forum conjecturists included, knows very little about what scientists in the field know. The scientists themselves, however, aren't nearly as uninformed.
michinobu said:
isn't the very definition of "artificial intelligence" that mimicked intelligence is intelligence?
No, the very definition of artificial intelligence is intelligence implemented by another intelligence, typically a first-order, evolved one. There is no concept of "faking intelligence"; that idea is demonstrably inane.