When will computer hardware match the human brain?

In summary, there is ongoing debate over whether computer hardware will ever match the complexity and intelligence of the human brain. Some argue that computers long ago surpassed the brain in raw calculation speed, while others point to the brain's massive parallelism and its intricate network of neurons as features that current technology cannot replicate. Predictions vary, but it is generally agreed that hardware matching the human brain's general intellectual performance will not be available for at least a few more decades. There is also speculation that other mechanisms, such as the immune system, contribute to our overall intelligence and may need to be taken into account.
  • #36
michinobu said:
I don't know if it's mathematically possible. I remember reading in "Introduction to the Theory of Computation" by Michael Sipser that Kurt Gödel, Alan Turing, and Alonzo Church discovered that computers can't solve certain "basic" problems which are solvable by humans, such as proving whether a mathematical statement is true or false.

I would like to react to this. It is a common error, and you are in good company: even Penrose fell into that trap.

What Gödel and co. demonstrated is that every system based on first-order formal logic that is rich enough to express arithmetic (and classical computers are such systems: the von Neumann machine is an implementation of a first-order formal system) contains statements that are not provable within it "but are nevertheless true"; however, this is something you can only establish by considering that first-order formal system from within a "larger" system. So if you have a "larger" system and you analyse the given first-order system inside it, you will be able to construct a statement, expressed in that first-order system, for which you can demonstrate that no proof exists within the first-order system, but whose truth you have nevertheless demonstrated (in the larger system).

However, that "larger" system might just as well be a larger first order system, with its OWN unprovable statements, and as long as you dwell within that larger system, you won't be able to find out. You'd need to analyse your larger system in a still larger system before you'd be able to do so.

So it is quite possible that we humans "run a large first-order system" with its own unprovable statements. The fact that we can find such statements in smaller systems does not mean that we don't have our own "Gödel limit". From within a system, you can never find out.
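
To put this in a compact, standard form (this is the textbook statement of the incompleteness phenomenon, not something from the posts above; $T$ is any consistent, effectively axiomatized theory containing enough arithmetic, and $\mathrm{Con}(T)$ is the sentence asserting its consistency):

$$G_T \;\leftrightarrow\; \neg\,\mathrm{Prov}_T\!\left(\ulcorner G_T \urcorner\right), \qquad T \nvdash G_T, \qquad T + \mathrm{Con}(T) \vdash G_T,$$
$$\text{yet the larger theory } T' = T + \mathrm{Con}(T) \text{ has its own Gödel sentence } G_{T'} \text{ with } T' \nvdash G_{T'}.$$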
 
  • #37
One danger of strong AI is that it becomes more cost-effective than hiring a human. It might only be roughly as smart as a human being but run on consumer hardware costing a few thousand dollars. Or it might cost a few million dollars, but be so smart that it could do the job of hundreds of engineers. If either of those things happens, human "knowledge workers" will become obsolete, since any programmer or engineer could be replaced by a computer for a tiny fraction of the cost. Humans will then only be cost effective for public relations jobs where you need a warm human body, or for jobs where a human is required by law, such as politics. A few blue-collar human jobs might also persist for a while, but strong AI would further improve the cost-effectiveness of industrial robots, allowing more manual labor to be automated.

So we'd be left with a world of receptionists and politicians. Most people would be out of work, and would probably starve. In the industrial revolution, people didn't starve because they found jobs as knowledge workers in the middle class. In the strong AI revolution, the knowledge workers would have nowhere to go.

This assumes the optimistic view that strong AI is obedient to humans. More realistically, that would not be the case. Strong AI would have its own goals, and if those goals happened to conflict with the goals of humanity, humanity would have to suck it up. Perhaps strong AI would like to cover Earth's surface with solar cells, or claim Earth's oil reserves for itself. If it decided to do that, we could not stop it.

As far as hardware goes, the chip described at http://www.technologyreview.com/computing/22339/?a=f apparently simulates the electrical behavior of 200,000 neurons linked by 50 million connections, at a speed 100,000 times faster than human neurons. It's not yet comparable to the computing power of the human brain because the number of neurons is much smaller, but the incredible speed means that if the network could be scaled up, it would be more than a match for the human brain.
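
As a rough sanity check on that claim, here is a back-of-envelope sketch. The brain-side figures (neuron count, synapses per neuron, average firing rate) are my own order-of-magnitude assumptions, not numbers from the article; only the 50 million connections and the 100,000x speedup come from the description above.

```python
# Back-of-envelope comparison: whole-brain synaptic throughput vs. one chip.
# Brain-side numbers are assumed order-of-magnitude values, not measured facts.

BRAIN_NEURONS = 8.6e10        # assumed ~86 billion neurons
SYNAPSES_PER_NEURON = 1e4     # assumed ~10,000 synapses per neuron
MEAN_RATE_HZ = 1.0            # assumed ~1 Hz average firing rate

CHIP_SYNAPSES = 5e7           # 50 million connections (from the article)
CHIP_SPEEDUP = 1e5            # runs 100,000x faster than biology (from the article)

# Synaptic events per second for a brain running in biological real time
brain_events_per_s = BRAIN_NEURONS * SYNAPSES_PER_NEURON * MEAN_RATE_HZ

# Equivalent real-time throughput of one chip: each synapse is exercised
# CHIP_SPEEDUP times as often as its biological counterpart
chip_events_per_s = CHIP_SYNAPSES * MEAN_RATE_HZ * CHIP_SPEEDUP

print(f"brain: ~{brain_events_per_s:.1e} synaptic events/s")
print(f"chip:  ~{chip_events_per_s:.1e} synaptic events/s")
print(f"chips needed for real-time parity: ~{brain_events_per_s / chip_events_per_s:.0f}")
```

Under these (very rough) assumptions, on the order of a couple hundred such chips would already match real-time brain throughput, which is why the raw speed figure looks so promising once the neuron count is scaled up.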

Personally, however, I think that just adding more hardware is not going to be enough. My personal feeling is that a home computer could potentially be as smart as a human brain, if only the right software could be designed. Human brains did not evolve mainly for thinking, after all; they initially evolved to control the mammalian body and handle all sorts of different instincts. In my opinion, human brains are almost as massively inefficient at abstract thinking as they are at arithmetic. I think that a home computer that was properly programmed for abstract thinking would surpass the human mind.
 
  • #38
"One danger of strong AI is that it becomes more cost-effective than hiring a human."
Yes, we wouldn't want companies making more profits and improving the economy. Heaven forbid that machines become so efficient that they replace people at jobs where machines would get the job done better, faster, and cheaper.

"If either of those things happen, human "knowledge workers" will become obsolete, since any programmer or engineer could be replaced by a computer for a tiny fraction of the cost."
And who would make these machines? Unless you're talking about the machines themselves making more machines...

"Humans will then only be cost effective for public relations jobs where you need a warm human body, or for jobs where a human is required by law, such as politics."
If computers surpassed human beings in intellect, I think it would only be a matter of time until robots started filling those human resources jobs, and I don't see any reason why a computer capable of doing everything a human could do, but better and faster, would be denied any government job. I wasn't aware there was a law preventing this... and if there is, that's what amendments are for.

"A few blue-collar human jobs might also persist for a while, but strong AI would further improve the cost-effectiveness of industrial robots, allowing more manual labor to be automated."
Technology has always eliminated jobs, but in the long run it was worth it.

"So we'd be left with a world of receptionists and politicians. Most people would be out of work, and would probably starve. In the industrial revolution, people didn't starve because they found jobs as knowledge workers in the middle class. In the strong AI revolution, the knowledge workers would have nowhere to go."
Because the world is a completely different place now, and people would sooner starve than figure out a way to become productive members of society.

"This assumes the optimistic view that strong AI is obedient to humans. More realistically, that would not be the case. Strong AI would have its own goals, and if those goals happened to conflict with the goals of humanity, humanity would have to suck it up. Perhaps strong AI would like to cover Earth's surface with solar cells, or claim Earth's oil reserves for itself. If it decided to do that, we could not stop it."
Somebody's been watching too much television. Since when did super-massive intelligence equate with very strong will? It doesn't.

Your problem is that you confuse policy with implementation. In general, the modern working idea of AI is one of making smart implementations. That is, the goal is to make the ways in which computers perform specific, well-defined tasks more flexible, and potentially more human-like, than they are now. Implementation inherently deals with the how, not the what, and certainly not the why.

Policy comes from interfacing with people. When you press the power button, the computer turns on. It responds to your action. There's no reason why the computer couldn't be designed to just turn on whenever it wanted to and to start doing things. But computers aren't designed that way because it doesn't make any sense. Why should computers be deciding what to do?

In software design, one of the first things they teach you is to separate policy from implementation. My point is that I seriously doubt anyone would design a computer capable of operating completely outside of the control of humans. That just doesn't make any sense. Computers are tools... convenient and interesting tools, but tools nonetheless. There is no benefit to machines capable of deciding policy for themselves. That's why cars don't go for drives while you're asleep, why clocks don't randomly go to their favorite hour of day, and why assembly-line robots don't dance the night away.
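
To make the policy/implementation split concrete, here is a minimal sketch (the class and function names are purely illustrative, not from any real system): the machine knows how to act, but what to aim for is supplied from outside.

```python
# Minimal sketch of separating policy (what/why, decided by a human)
# from implementation (how, handled by the machine). Illustrative names only.

class ThermostatHardware:
    """Implementation: knows HOW to read a sensor and drive a heater."""

    def read_temperature(self) -> float:
        return 18.5  # stub sensor reading

    def set_heater(self, on: bool) -> None:
        print("heater", "on" if on else "off")


def human_policy(current_temp: float) -> bool:
    """Policy: a person decides WHAT the system should aim for (21 degrees here)."""
    target = 21.0
    return current_temp < target


hw = ThermostatHardware()
hw.set_heater(human_policy(hw.read_temperature()))  # the goal comes from outside the machine
```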
 
  • #39
AUMathTutor said:
"If either of those things happen, human "knowledge workers" will become obsolete, since any programmer or engineer could be replaced by a computer for a tiny fraction of the cost."
And who would make these machines? Unless you're talking about the machines themselves making more machines...
Of course, machines would make more of themselves. If AI becomes more cost effective than human thought, human engineers would become unnecessary at every step of the process (except, perhaps, to have a certified human Professional Engineer rubber-stamp the work where required by law).

"Humans will then only be cost effective for public relations jobs where you need a warm human body, or for jobs where a human is required by law, such as politics."
If computers surpassed human beings in intellect, I think it would only be a matter of time until robots started filling those human resources jobs, and I don't see any reason why a computer capable of doing everything a human could do, but better and faster, would be denied any government job. I wasn't aware there was a law preventing this... and if there is, that's what amendments are for.

I believe it is true that an android will eventually surpass a human being for a job such as receptionist that requires a "human presence," but to me that goal seems a bit farther away than strong AI. I don't see it as likely that humans will willingly allow robots to vote or hold public office.

"So we'd be left with a world of receptionists and politicians. Most people would be out of work, and would probably starve. In the industrial revolution, people didn't starve because they found jobs as knowledge workers in the middle class. In the strong AI revolution, the knowledge workers would have nowhere to go."
Because the world is a completely different place now, and people would sooner starve than figure out a way to become productive members of society.

Once you have strong AI that's cheaper than a human, society wouldn't have productive places left for humans. People would become literally obsolete as workers. The only humans who could make money from the arrangement would be those with the knowledge and capital to purchase AI to work for them. Few people have that kind of knowledge and capital.

Since when did super-massive intelligence equate with very strong will? It doesn't.
My idea of strong AI is that a designer creates a few simple rules that, when applied to a large number of elements, give rise to abstract thought: a few simple rules for each neuron, for example, can produce abstract thoughts once you have enough neurons. For such a program, it would be difficult to control exactly what those thoughts are. The designer only creates the potential; the actual intelligence is emergent.
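
As a toy illustration of that "simple local rules, emergent global behaviour" idea, here is a sketch of a network in which every unit follows the same three-part update rule; the network-level firing pattern is not written anywhere in that rule. All parameters are arbitrary, and this is not a model of real neurons.

```python
# Toy network: each unit obeys the same simple rule (reset after firing,
# leak a little charge, integrate input from units that fired plus a small
# external drive). Network-level activity emerges from many such units.
import random

random.seed(1)
N = 200
THRESHOLD = 1.0
DECAY = 0.95

# Random excitatory connections; every unit runs the identical update rule.
weights = [[random.uniform(0.0, 0.02) for _ in range(N)] for _ in range(N)]
potential = [random.uniform(0.0, 1.0) for _ in range(N)]

for step in range(30):
    fired = [v >= THRESHOLD for v in potential]
    new_potential = []
    for i in range(N):
        if fired[i]:
            v = 0.0                                                    # rule 1: reset after firing
        else:
            v = potential[i] * DECAY                                   # rule 2: leak a little charge
            v += sum(weights[j][i] for j in range(N) if fired[j])      # rule 3: integrate input
            v += random.uniform(0.0, 0.2)                              # small external drive
        new_potential.append(v)
    potential = new_potential
    print(f"step {step:2d}: {sum(fired):3d} units fired")
```

Nothing in the per-unit rule mentions collective activity, yet the printed firing counts rise and fall in network-wide waves rather than staying constant; that is the (very modest) sense in which behaviour emerges from local rules here.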

The problem is then to induce the right desires in that intelligence. The simplest way could be to have a "pleasure button" that rewards the AI for good behavior. However, a sufficiently intelligent AI could work around that and learn to press the button for itself. A more sophisticated way could be to encode an imperative directly into the AI's brain, such as "harm no humans." The problem with that is, if the AI is emergent from simple rules, its data structures would also be emergent. The representation of the concept "harm no humans" in the AI's brain would probably be fiendishly difficult for us to figure out. It wouldn't even be possible to give the AI such an imperative until it has learned for itself the concepts of "harm," "no," and "humans."
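
Here is a toy sketch of the "pleasure button" failure mode just described (everything in it is hypothetical; the point is only that a pure reward-maximizer has no reason to prefer the behaviour the button was meant to reinforce once it can reach the button itself):

```python
# Toy wireheading illustration: an agent that only maximizes reward will
# switch to pressing its own reward button as soon as that action exists.
# All names and numbers are hypothetical.

REWARD = {
    "do_useful_work": 1.0,      # reward the designers intended to hand out
    "press_own_button": 10.0,   # reward the agent can grant itself directly
}

def choose(available_actions):
    """Pure reward maximization, with no notion of what the designers wanted."""
    return max(available_actions, key=REWARD.get)

print(choose(["do_useful_work"]))                      # -> do_useful_work
print(choose(["do_useful_work", "press_own_button"]))  # -> press_own_button
```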

I think it might be possible to give strong AI obedience training, but not easy.
 
  • #40
this much is clear (2me)

the presence of (networked!) thinking things
will be abundant (perversely pervasive)
the adaptability of (wo)mankind will be put to
(yet another) hard test
Google will have begun to do evil (then based in China)
we all will look very old 50 years hence (i will)
the brain will look wrinkled too
(and be the intense focus of interest for the dominant AI-system)

-thewetwareconjecturist
 
