When will computer hardware match the human brain?

Summary
The discussion centers on the timeline for computer hardware to match the capabilities of the human brain. Predictions suggest that advancements in AI and hardware may lead to machines with comparable processing power in the 2020s, but significant challenges remain, particularly in replicating the brain's creativity and parallel processing abilities. While current computers excel in raw calculation speed, they still fall short of the brain's complex, simultaneous processing capabilities. The conversation also touches on philosophical questions about the nature of intelligence, suggesting that true human-like cognition may never be fully replicated in machines. Overall, the consensus leans toward skepticism regarding the complete equivalence of computer hardware and human brain function.
  • #31
Human brain will remain supreme.. until Quantum computers are perfected. :)
 
  • #32
*Human brain will remain supreme..*

not sure about that!
but a computer simply cannot BE a brain
even if it should try hard to mimic/emulate one
and why SHOULD it, if it has its very Own fortes and niches
(which helped to bring it into existence in the first place)

as of today, the computer is making quite an effort to be One Single Cell
(helped along by (wo)man ;) )
((and it's a hard thing to achieve!))
(((from what i hear ;) )))
 
  • #33
AUMathTutor said:
I think that it's only a matter of time before computers become as smart as people. Whatever that means. Perhaps when computers can do it, it won't be called "intelligence" anymore.

In a certain sense, computers are really already "intelligent"... years ago, computers were people who had real jobs crunching numbers. Nobody counts crunching numbers like that, night and day, as a sign of intelligence anymore, since machines can do it. I think a fairly sizeable cross section of people will simply not be able to admit that what computers do should be called "intelligent"... not now, not ever.

The above is so on target, I had to quote it because it was worth repeating.

Also, I sensed a lot of denial in this thread from folks who apparently feel anxious about the notion that computers might one day outperform us -- whatever that means. I think this line of thinking is going to have to go the way of the geocentric universe. We moderns can laugh at our medieval cousins because we're not threatened at all by the observation that the Earth goes around the Sun and not the other way around. We can't even follow their line of reasoning that not being the center of the universe somehow leads to a diminishing of what it means to be human. Such an argument is completely alien to our way of thinking. And, after all, in the matter of the solar system's design, we have no choice.

Similarly, our descendants are going to have a chuckle at us, that we feel threatened at the thought that computers will be smarter than humans. They won't be able to understand why we felt that such a thing would rob us of our humanity. They won't be able to grok our angst, because they will be surrounded by super-intelligent computers, and in this matter, they too will have no choice.
 
  • #34
lnx990 said:
In an article in Byte magazine (April 1985), John Stevens compares the signal processing ability of the cells in the retina with that of the most sophisticated computer designed by man, the Cray supercomputer:

"While today's digital hardware is extremely impressive, it is clear that the human retina's real-time performance goes unchallenged. Actually, to simulate 10 milliseconds (one hundredth of a second) of the complete processing of even a single nerve cell from the retina would require the solution of about 500 simultaneous nonlinear differential equations 100 times and would take at least several minutes of processing time on a Cray supercomputer. Keeping in mind that there are 10 million or more such cells interacting with each other in complex ways, it would take a minimum of 100 years of Cray time to simulate what takes place in your eye many times every second."
100 years of 1985 Cray time, divided by a factor of about 1,000,000 (roughly 20 years of hardware progress), works out to close to 1 hour.

That hour, divided by a further factor of about 32,000 (roughly 15 more years), is just over 100 milliseconds.

So the processing power that description calls for should be available within one to two decades. And more than raw processing power has changed since 1985: the estimates of what neurological simulation requires have been revised, and the algorithms themselves have improved.
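For anyone who wants to redo that back-of-envelope arithmetic, here is a minimal sketch in Python; the speedup factors (about 10^6 over ~20 years and a further ~32,000 over ~15 years) are taken from the post above, not measured figures, and the unit conversions are the only thing the code adds.

```python
# Minimal sketch of the scaling arithmetic above. The speedup factors are the
# post's own assumptions; only the unit conversions are done here.
HOURS_PER_YEAR = 365 * 24                        # 8,760

cray_1985_hours = 100 * HOURS_PER_YEAR           # "100 years of Cray time"

after_20_years = cray_1985_hours / 1_000_000     # ~0.88 h, i.e. "close to 1 hour"
after_35_years = after_20_years * 3600 / 32_000  # ~0.099 s

print(f"after ~20 years: {after_20_years:.2f} hours")
print(f"after ~35 years: {after_35_years * 1000:.0f} ms")  # on the order of 100 ms
```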

We have Retinex algorithms today running in real time on consumer hardware; you would only run them on that Cray if you were looking for a good laugh.

Lyuokdea said:
Remember a computer can only do one calculation at a time.
I'm beginning to think I've encountered a space-time rupture and Richard Nixon is about to be elected president. The year is 2009. A modern computer can perform hundreds of operations simultaneously, and that figure will be in the millions within the next decade.
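As a trivial illustration of that kind of parallelism (a sketch using only Python's standard library, not tied to any particular machine discussed here), a process pool spreads independent work units across however many cores the host reports:

```python
# Sketch: run independent work units on all available CPU cores at once.
import os
from concurrent.futures import ProcessPoolExecutor

def work(n: int) -> int:
    # Stand-in for an independent calculation.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    cores = os.cpu_count() or 1
    with ProcessPoolExecutor(max_workers=cores) as pool:
        results = list(pool.map(work, [200_000] * cores))
    print(f"{len(results)} work units ran in parallel across {cores} cores")
```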

michinobu said:
I remember reading in "Introduction to the Theory of Computation" by Michael Sipser that Kurt Godel, Alan Turing, and Alonzo Church discovered that computers can't solve certain "basic" problems which are solvable by humans.

They discovered no such thing. I know Gödel believed this until the day of his lunatic death, but he, to no surprise of any competent modern computer scientist, was never able to prove it.

michinobu said:
such as being able to prove if a mathematical statement is true or false.
They regularly do just this, and with mathematical formalisms. In fact, many modern advanced proofs require computer assistance because the problem set is intractably complicated for human mathematicians.
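As a toy illustration (mine, not from the thread): propositional logic is decidable, so a short program really can settle the truth or falsehood of any statement in that fragment by brute force; Goedel's limits only bite for richer systems such as arithmetic, which is exactly the point taken up a few posts below.

```python
# Toy decision procedure for propositional logic: check every truth assignment.
from itertools import product

def is_tautology(formula, variables):
    """True iff `formula` holds under every assignment to `variables`."""
    return all(
        formula(**dict(zip(variables, values)))
        for values in product([False, True], repeat=len(variables))
    )

# Peirce's law, ((p -> q) -> p) -> p, written with and/or/not:
peirce = lambda p, q: (not ((not (not p or q)) or p)) or p
print(is_tautology(peirce, ["p", "q"]))  # True: the machine has settled it
```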

michinobu said:
Scientists in the field of neurology know very little about the human brain; the very fact that humans aren't digital shows what kind of difficulties an engineer might face in trying to recreate the human brain.
Digital computers aren't really digital either. They are complex analog devices with emergent boolean function. IBM's cognitive research group and its lead researchers have what is perhaps the most sophisticated computational brain model available, and they, of all people, see the brain as a binary device. The fact is that the observing audience, including web-forum conjecturists, knows very little about what scientists in the field know. The scientists themselves, however, aren't nearly as uninformed.

michinobu said:
isn't the very definition of "artificial intelligence" that mimicked intelligence is intelligence?
No, the very definition of artificial intelligence is intelligence as implemented by another intelligence, typically a naturally evolved species. There is no concept of "faking intelligence". In fact, that thought is demonstrably inane.
 
Last edited:
  • #35
Ivan Seeking said:
When a computer first has an out of circuit experience and then creates its own religion.

:smile:
 
  • #36
michinobu said:
I don't know if it's mathematically possible. I remember reading in "Introduction to the Theory of Computation" by Michael Sipser that Kurt Godel, Alan Turing, and Alonzo Church discovered that computers can't solve certain "basic" problems which are solvable by humans - such as being able to prove whether a mathematical statement is true or false.

I would like to react to this. It is a common error, and you are in good company: even Penrose fell into that trap.

What Goedel and co. demonstrated is that every system based upon first-order formal logic (and classical computers are such systems: the von Neumann machine is an implementation of a first-order formal system) contains statements that are not provable within it "but are nevertheless true"; however, this is something you can only establish by considering that first-order formal system from within a "larger" system. So if you have a "larger" system and you analyse the given first-order system from it, you can construct a statement, expressed in the first-order system, for which you can demonstrate that no proof exists within that first-order system, but whose truth you have nevertheless demonstrated (in the larger system).

However, that "larger" system might just as well be a larger first-order system, with its OWN unprovable statements, and as long as you dwell within that larger system, you won't be able to find them. You'd need to analyse your larger system in a still larger system before you'd be able to do so.

So it is very well possible that we humans "run a large first-order system" with our own unprovable statements in it. The fact that we can spot such statements in smaller systems does not mean that we don't have our own "Goedel limit". From within a system, you can never find out.
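For reference, the standard modern statement of the result being paraphrased (glossing over the exact consistency hypotheses and Rosser's refinements) reads roughly:

```latex
% First incompleteness theorem, informally restated. If T is a consistent,
% effectively axiomatized first-order theory containing basic arithmetic,
% then there is a sentence G_T in the language of T with
\[
    T \nvdash G_T
    \qquad\text{yet}\qquad
    \mathbb{N} \models G_T ,
\]
% i.e. G_T is unprovable in T but true in the standard model. Establishing
% that truth takes place in a stronger metatheory, which has Goedel
% sentences of its own, exactly as the post above describes.
```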
 
  • #37
One danger of strong AI is that it becomes more cost-effective than hiring a human. It might only be roughly as smart as a human being but run on consumer hardware costing a few thousand dollars. Or it might cost a few million dollars, but be so smart that it could do the job of hundreds of engineers. If either of those things happen, human "knowledge workers" will become obsolete, since any programmer or engineer could be replaced by a computer for a tiny fraction of the cost. Humans will then only be cost effective for public relations jobs where you need a warm human body, or for jobs where a human is required by law, such as politics. A few blue-collar human jobs might also persist for a while, but strong AI would further improve the cost-effectiveness of industrial robots, allowing more manual labor to be automated.

So we'd be left with a world of receptionists and politicians. Most people would be out of work, and would probably starve. In the industrial revolution, people didn't starve because they found jobs as knowledge workers in the middle class. In the strong AI revolution, the knowledge workers would have nowhere to go.

This assumes the optimistic view that strong AI is obedient to humans. More realistically, that would not be the case. Strong AI would have its own goals, and if those goals happened to conflict with the goals of humanity, humanity would have to suck it up. Perhaps strong AI would like to cover Earth's surface with solar cells, or claim Earth's oil reserves for itself. If it decided to do that, we could not stop it.

As far as hardware goes, there is a system described at http://www.technologyreview.com/computing/22339/?a=f that apparently simulates the electrical behavior of 200,000 neurons linked by 50 million connections, at a speed 100,000 times faster than human neurons. It's not yet comparable to the computing power of the human brain because the number of neurons is much smaller. But the incredible speed means that if the network could be scaled up, it would be more than a match for the human brain.
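To put very rough numbers on "if the network could be scaled up": the chip figures below are the ones quoted from the article, the brain figure is the commonly cited order of magnitude (~10^11 neurons), and the assumption that raw speed can be traded one-for-one against neuron count is mine and certainly too naive.

```python
# Crude scaling sketch. Chip figures from the linked article; the brain figure
# is the commonly quoted order of magnitude; the speed-for-count trade is a
# naive assumption made only for illustration.
chip_neurons = 200_000
chip_speedup = 100_000        # "100,000 times faster than human neurons"
brain_neurons = 1e11          # order-of-magnitude estimate

effective = chip_neurons * chip_speedup        # 2e10 neuron-equivalents
print(f"effective neuron-equivalents: {effective:.1e}")
print(f"crude fraction of a brain: {effective / brain_neurons:.0%}")
```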

Personally, however, I think that just adding more hardware is not going to be enough. My personal feeling is that a home computer could potentially be as smart as a human brain, if only the right software could be designed. Human brains did not evolve mainly for thinking, after all; they initially evolved to control the mammalian body and handle all sorts of different instincts. In my opinion, human brains are almost as massively inefficient at abstract thinking as they are at arithmetic. I think that a home computer that was properly programmed for abstract thinking would surpass the human mind.
 
Last edited by a moderator:
  • #38
"One danger of strong AI is that it becomes more cost-effective than hiring a human."
Yes, we wouldn't want companies making more profits and improving the economy. Heaven forbid that machines become so efficient that they replace people at jobs where machines would get the job done better, faster, and cheaper.

"If either of those things happen, human "knowledge workers" will become obsolete, since any programmer or engineer could be replaced by a computer for a tiny fraction of the cost."
And who would make these machines? Unless you're talking about the machines themselves making more machines...

"Humans will then only be cost effective for public relations jobs where you need a warm human body, or for jobs where a human is required by law, such as politics."
If computers surpassed human beings in intellect, I think it would only be a matter of time until robots started filling those human resources jobs, and I don't see any reason why a computer capable of doing everything a human could do, but better and faster, would be denied any government job. I wasn't aware there was a law preventing this... and if there is, that's what amendments are for.

"A few blue-collar human jobs might also persist for a while, but strong AI would further improve the cost-effectiveness of industrial robots, allowing more manual labor to be automated."
Technology has always eliminated jobs, but in the long run it was worth it.

"So we'd be left with a world of receptionists and politicians. Most people would be out of work, and would probably starve. In the industrial revolution, people didn't starve because they found jobs as knowledge workers in the middle class. In the strong AI revolution, the knowledge workers would have nowhere to go."
Because the world is a completely different place now, and people would sooner starve than figure out a way to become productive members of society.

"This assumes the optimistic view that strong AI is obedient to humans. More realistically, that would not be the case. Strong AI would have its own goals, and if those goals happened to conflict with the goals of humanity, humanity would have to suck it up. Perhaps strong AI would like to cover Earth's surface with solar cells, or claim Earth's oil reserves for itself. If it decided to do that, we could not stop it."
Somebody's been watching too much television. Since when did super-massive intelligence equate with very strong will? It doesn't.

Your problem is that you confuse policy with implementation. In general, the modern working idea of AI is one of making smart implementations. That is, we want the ways in which computers perform specific, well-defined tasks to be more flexible - and potentially more human-like - than they are now. Implementation inherently deals with the how, not the what, and certainly not the why.

Policy comes from interfacing with people. When you press the power button, the computer turns on. It responds to your action. There's no reason why the computer couldn't be designed to just turn on whenever it wanted to and to start doing things. But computers aren't designed that way because it doesn't make any sense. Why should computers be deciding what to do?

In software design, one of the first things they teach you is to separate policy from implementation. My point is that I seriously doubt anyone would design a computer capable of operating completely outside of the control of humans. That just doesn't make any sense. Computers are tools... convenient and interesting tools, but tools nonetheless. There is no benefit to machines capable of deciding policy for themselves. That's why cars don't go for drives while you're asleep, why clocks don't randomly go to their favorite hour of day, and why assembly-line robots don't dance the night away.
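A small sketch of what that separation can look like in code (purely illustrative, not taken from any real system): the implementation knows how to act, while the policy, supplied from outside by the humans using the machine, decides whether and when it acts.

```python
# Illustrative sketch of separating policy from implementation.
from typing import Callable

class Car:
    """Implementation: knows HOW to do things, never decides to do them."""
    def drive_to(self, destination: str) -> None:
        print(f"driving to {destination}")

def run(car: Car, should_go: Callable[[], bool], destination: str) -> None:
    # Policy comes from outside: the car acts only when told to.
    if should_go():
        car.drive_to(destination)

# Usage: the car goes nowhere unless its owner actually asked it to.
owner_asked = True
run(Car(), lambda: owner_asked, "the office")
```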
 
Last edited:
  • #39
AUMathTutor said:
"If either of those things happen, human "knowledge workers" will become obsolete, since any programmer or engineer could be replaced by a computer for a tiny fraction of the cost."
And who would make these machines? Unless you're talking about the machines themselves making more machines...
Of course, machines would make more of themselves. If AI becomes more cost effective than human thought, human engineers would become unnecessary at every step of the process. (except, perhaps, to have a certified (human) Professional Engineer rubber-stamp the work as required by law)

"Humans will then only be cost effective for public relations jobs where you need a warm human body, or for jobs where a human is required by law, such as politics."
If computers surpassed human beings in intellect, I think it would only be a matter of time until robots started filling those human resources jobs, and I don't see any reason why a computer capable of doing everything a human could do, but better and faster, would be denied any government job. I wasn't aware there was a law preventing this... and if there is, that's what amendments are for.

I believe it is true that an android will eventually surpass a human being for a job such as receptionist that requires a "human presence," but to me that goal seems a bit farther away than strong AI. I don't see it as likely that humans will willingly allow robots to vote or hold public office.

"So we'd be left with a world of receptionists and politicians. Most people would be out of work, and would probably starve. In the industrial revolution, people didn't starve because they found jobs as knowledge workers in the middle class. In the strong AI revolution, the knowledge workers would have nowhere to go."
Because the world is a completely different place now, and people would sooner starve than figure out a way to become productive members of society.

Once you have strong AI that's cheaper than a human, society wouldn't have productive places left for humans. People would become literally obsolete as workers. The only humans who could make money from the arrangement would be those with the knowledge and capital to purchase AI to work for them. Few people have that kind of knowledge and capital.

Since when did super-massive intelligence equate with very strong will? It doesn't.
My idea of strong AI is that a designer creates a few simple rules that, when applied to a large number of elements, give rise to abstract thought: a few simple rules for each neuron, for example, can produce abstract thought once you have enough neurons. For such a program, it would be difficult to control exactly what those thoughts are. The designer only creates the potential; the actual intelligence is emergent.
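As a loose analogy for "a few simple rules applied to a large number of elements" (Conway's Game of Life, not anything neural, and not meant as a model of thought), the entire rule set per cell fits in one line, yet the global behavior is famously rich and hard to steer:

```python
# Conway's Game of Life: simple per-cell rules, complex emergent global behavior.
import random

SIZE = 20

def step(grid):
    new = [[0] * SIZE for _ in range(SIZE)]
    for r in range(SIZE):
        for c in range(SIZE):
            # Count the 8 neighbours, wrapping around the edges.
            n = sum(
                grid[(r + dr) % SIZE][(c + dc) % SIZE]
                for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0)
            )
            # The whole "designer-supplied rule" for each element is this line.
            new[r][c] = 1 if n == 3 or (grid[r][c] and n == 2) else 0
    return new

grid = [[random.randint(0, 1) for _ in range(SIZE)] for _ in range(SIZE)]
for _ in range(50):
    grid = step(grid)
print(sum(map(sum, grid)), "cells alive after 50 generations")
```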

The problem is then to induce the right desires in that intelligence. The simplest way could be to have a "pleasure button" that rewards the AI for good behavior. However, a sufficiently intelligent AI could work around that and learn to press the button for itself. A more sophisticated way could be to encode an imperative directly into the AI's brain, such as "harm no humans." The problem with that is, if the AI is emergent from simple rules, its data structures would also be emergent. The representation of the concept "harm no humans" in the AI's brain would probably be fiendishly difficult for us to figure out. It wouldn't even be possible to give the AI such an imperative until it has learned for itself the concepts of "harm," "no," and "humans."

I think it might be possible to give strong AI obedience training, but not easy.
 
  • #40
this much is clear (2me)

the presence of (networked!) thinking things
is going to be abundant (perversely pervasive)
the adaptability of (wo)mankind will be put to
(yet another) hard test
Google will have begun to do evil (then based in China)
we all will look very old 50 years hence (i will)
the brain will look wrinkled too
(and be the intense focus of interest for the dominant AI-system)

-thewetwareconjecturist
 
Last edited:
