otownsend said:
Summary: I'm trying to understand how a brain computes versus how a computer computes, and why computers are immensely better than humans at certain tasks.
There are several differences between computers and brains that affect speed and efficiency for different tasks:
In computers, memory (of both data and algorithms) and processing are separated; in brains they are more or less combined.
Computers work (basically) serially; brains work in parallel.
A neuron works more like a leaky integrator that only fires if it receives a certain amount of input within a certain time window; a transistor is like a precise switch.
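To make that contrast concrete, here is a minimal Python sketch of a leaky integrate-and-fire neuron. The leak factor, threshold and input values are invented for illustration, not biologically accurate: the point is just that the same total input only triggers a spike if it arrives close together in time, whereas a transistor switches deterministically.

```python
# Minimal sketch of a leaky integrate-and-fire neuron.
# Parameters are invented for illustration, not biological accuracy.
def simulate_lif(inputs, leak=0.7, threshold=1.0):
    """Integrate a sequence of input currents; charge leaks away each
    step, and the neuron fires (emits 1) and resets when the membrane
    potential crosses the threshold."""
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = potential * leak + current  # old charge leaks away
        if potential >= threshold:
            spikes.append(1)   # enough input arrived close together: fire
            potential = 0.0    # reset after firing
        else:
            spikes.append(0)
    return spikes

# The same total input fires only when it arrives within a short window:
print(simulate_lif([0.6, 0.6, 0.0, 0.0]))  # [0, 1, 0, 0] -- fires
print(simulate_lif([0.6, 0.0, 0.0, 0.6]))  # [0, 0, 0, 0] -- charge leaked away
```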
The purpose of a brain is to transform vast amounts of input from all the senses into very small decisions that improve the chances of survival. One of the most important tasks in that context is recognition. The parallel operation of the brain is much better suited for that than a serial computer, because in recognition different data points may be of totally different importance and therefore need to be treated differently. A parallel brain can easily compute a diffuse bias towards one result or the other (dog or cat, friend or stranger, twig or snake). It can easily filter out noise and irrelevant details, or exaggerate tiny clues. And to improve the chances of survival further, the brain works with a fair amount of redundancy.
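As a toy illustration of weighting clues of very different importance, here is a sketch in Python. The feature names and weights are entirely invented, and of course a real brain would compute all the weighted contributions simultaneously rather than in a serial loop; the point is only how one strong clue can outweigh many weak ones while near-zero evidence is effectively filtered out.

```python
# Toy sketch of evidence weighting; features and weights are invented.
# Observed clue strengths, between 0 (absent) and 1 (clearly present):
features = {"pointed_ears": 0.9, "barks": 0.0, "whiskers": 0.7, "tail_wag": 0.1}
# How diagnostic each clue is: positive leans "cat", negative leans "dog".
weights  = {"pointed_ears": 0.8, "barks": -2.0, "whiskers": 1.5, "tail_wag": -0.6}

# All clues are combined into one diffuse bias; a strong clue ("barks")
# could outweigh several weak ones, and absent clues contribute nothing.
score = sum(weights[f] * v for f, v in features.items())
print("cat" if score > 0 else "dog", round(score, 2))  # -> cat 1.71
```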
But since memory and processing happen more or less in the same place, each category to be recognized has its own area: one for faces, one for cars, one for melodies, and so on. When it comes to arithmetic the human brain is indeed very fast, too, but only if trained, that is, if a certain area of the brain is dedicated to the task. People trained on the Chinese abacus, or some specially talented people, can perform huge arithmetic calculations very, very fast (I knew an accountant who could basically sum up all the numbers on a sheet of paper in the blink of an eye).
Not as fast as a computer, but we must take into account that input and output to and from a computer versus a human are very different.
A human needs to read or hear the question, interpret the images or sounds, transform them into something the brain can process, and then re-transform the result into muscle movements of some kind. That takes additional time; the actual processing is much faster. Since we generally train only a very small section of the brain to add numbers, the brain is very slow if it has to do very many very simple tasks (like adding 10,000 numbers) that deliver many simple results (thousands of sums that need to be remembered).
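A rough back-of-envelope comparison makes the gap vivid; both rates below are loose assumptions, not measurements:

```python
# Back-of-envelope comparison; both rates are rough assumptions.
n = 10_000                # numbers to add

human_rate = 1.0          # assume ~1 addition per second, done serially
computer_rate = 1e9       # assume ~10^9 simple additions per second, one core

print(f"human:    ~{n / human_rate:.0f} s (roughly three error-prone hours)")
print(f"computer: ~{n / computer_rate:.6f} s (about ten microseconds)")
```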
On the other hand, it is very fast when processing vast, noisy, conflicting input (like a distorted, incomplete image, or someone talking at a noisy party) and delivering a single result ("that is a cat", or a witty reply). Ask a computer by voice, in a room full of talking people, what one plus one is: even with the most advanced speech recognition software, connected to a supercomputer, it will not answer as fast as a human.
If we ask for the actual maximum speed of a brain, researchers disagree wildly, but we can assume the number lies somewhere above 100 Tflops. Because of the brain's partitioning into dedicated areas, very, very little of that computational power can be used for arithmetic: most of the brain is already dedicated to other tasks like language, driving a car, choosing and putting on clothes, preparing breakfast, and a thousand other things computers generally do not need to care about. But we can assume that if we grew a brain in a lab, provided it with some massively parallel input/output device, and trained it to do only arithmetic and nothing else, it would be able to do that very, very fast too. However, due to the leaky nature of neurons, the results might not always be precise.
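One common way such order-of-magnitude figures are produced is to multiply neuron count by synapses per neuron by firing rate. Every number below is a rough assumption, which is exactly why the published estimates disagree so wildly; this sketch merely shows that the result easily lands above 100 Tflops.

```python
# One common back-of-envelope estimate; every figure is a rough
# order-of-magnitude assumption, and such estimates vary wildly.
neurons = 8.6e10             # ~86 billion neurons in a human brain
synapses_per_neuron = 1e3    # on the order of 10^3 to 10^4 synapses each
avg_firing_rate = 10.0       # Hz; averages are often quoted around 1-10 Hz

ops_per_second = neurons * synapses_per_neuron * avg_firing_rate
print(f"~{ops_per_second:.1e} synaptic operations/s "
      f"(~{ops_per_second / 1e12:.0f} 'Tflop-equivalents')")
```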
To sum up the answer to the first question: the fundamental differences are the dedication of brain areas to specific tasks, parallel vs. serial computing, and leaky integrators vs. precise switches.
To answer the second question: the human brain is faster at processing the kind of noisy, incomplete information that reality usually provides.