Efficiency of computers vs. brains

Summary:
Modern electronic computers are faster than humans at basic arithmetic primarily because they are specifically designed for such tasks, while human brains evolved for different functions. The efficiency of computers stems from their ability to process information serially and their smaller components, allowing for quicker signal propagation. In contrast, human brains operate in parallel, excelling at recognizing complex patterns and processing ambiguous data, which computers struggle with. The differences in memory and processing architecture between computers and brains also contribute to their respective strengths and weaknesses in various tasks. Overall, while computers outperform humans in arithmetic, humans excel in tasks requiring nuanced understanding and adaptability.
  • #31
TMT said:
To access specific data it must visit a considerable number of nodes. For simple basic arithmetic like the addition 121+023, it seeks the answer from stored info; it is not processing the addition, it is recalling a memorized result.
I seriously doubt that we know exactly what happens in the brain when someone is calculating (if you have some article I would be thrilled). But even if your model might be correct when the network is insufficiently trained, it is certainly not correct once a person is trained to do – well – basically anything. During training, neural pathways are reinforced until operations can be performed without search.
How this happens can be seen in the training of calculation champions.
First students are trained on the Chinese abacus. The math-problem is translated into some initial position of the beads. The beads are then moved according to specific rules using the fingers. When all movements are executed the result can be translated into numbers from the final position of the beads.
The use of the abacus suits the human brain very well, since it is very good at manipulating things with its fingers, at processing visual information, and at imagining visual change. The actual calculation is thus transformed into images of, and changes in, bead positions, which the brain can handle much more easily than abstract numbers. (Just as masters of memory use visual images and stories to remember thousands of numbers.)
When the student becomes fast and flawless in the use of the abacus, he/she is trained to do it blindfolded, effectively reinforcing the areas that imagine the movements. From here on the brain performs no more searches. All movements are automatized. Translations and manipulations are executed by dedicated nerves.
When that is achieved the student starts to perform the movements without abacus, but still moving the fingers. This reduces the total number of nerves needed to perform. And even more so in the last step, when the finger-movements are reduced to mere flicker so no time is wasted. All movements are visualized and calculation happens very fast.
TMT said:
(There is no multiply unit in a computer. Therefore the computer performs repeated addition for multiplication.)
I'm sorry to say, but this is definitely wrong. Computers use the so-called Booth's algorithm, or something similar, which performs a combination of bit shifts and additions, much, much faster than repeated addition. CPUs have dedicated circuits for that.
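The shift-and-add principle those circuits build on can be sketched in a few lines of Python. This is a hedged illustration of the general idea, not Booth's algorithm itself (Booth's recoding additionally handles signed operands and runs of 1-bits more efficiently); the function name is my own:

```python
def shift_add_multiply(a: int, b: int) -> int:
    """Multiply two non-negative integers using only shifts and adds,
    the same primitives a hardware multiplier combines."""
    result = 0
    while b:
        if b & 1:        # lowest multiplier bit set: add shifted multiplicand
            result += a
        a <<= 1          # shift multiplicand left (multiply by 2)
        b >>= 1          # shift multiplier right (examine next bit)
    return result
```

A 32-bit hardware version performs at most 32 such shift/add steps, far fewer than the b repeated additions of the naive scheme.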
 
  • #32
Hi @otownsend ,
I haven't read all the posts in this thread, but I wanted to comment on this:

otownsend said:
Summary: I'm trying to understand how a brain vs. a computer computes, and why computers are immensely better than humans at certain tasks.

So then what makes computers more efficient at processing information?

There is no computer built or programmed yet that is even close to the language processing abilities of an average normal human. By far.
 
  • #33
Flisp said:
I'm sorry to say, but this is definitely wrong. Computers use the so-called Booth's algorithm, or something similar, which performs a combination of bit shifts and additions, much, much faster than repeated addition. CPUs have dedicated circuits for that.
If you look at Booth's algorithm, it is expressed as a loop, and the loop is controlled by a conditional branch ("32nd repetition?"). This is evidence that there is no dedicated circuit for multiplication in a computer; instead, microcode performs the multiplication by executing an algorithm (using existing circuits such as the full adder) 32 times.
I have not read any articles about this, but I have some theories about learning based on several real-life biological experiments. For you: one may conduct a simple experiment with infants who are learning addition. If you practice a selected number pair, such as 3 and 4, only rarely, but make the exercises with other numbers (containing no 3 or 4) more frequent, then at test time the operations containing 3 and 4 will take relatively longer than the other combinations, since 3+4 was not memorized well. (The same goes for the multiplication operation.)
You must not assume the search process along neural pathways is similar to a computer's. A neural cell produces a signal to another related neural cell in its cluster; it does not seek one by one as a computer does. To make an analogy with objects in a computer program: when you send the {+4} signal to object {3}, it responds with 7. Neural cells in the brain are ordered in a cluster structure, all linked to one another. When an addition request arrives, only the adding cluster is activated; then, on receiving the number 3, cell {3} in the cluster is activated, and on receiving the number 4, cell {4} is activated. Both cells then produce a signal in the cluster that activates cell {7}.
There are also memorized procedures (like microcode), as with your Booth's algorithm: starting from the rightmost digits, repeat the process until the leftmost digit.
Visual memory and sound memory are different inputs. You may consider the cerebellum as a coprocessor where muscle controls are stored; the specific rules of the finger movements are stored there. Abacus learning stores some information in the cerebellum; when activated, the cerebellum produces a series of actions via the fingers, and the beads are moved. Then visual memory takes action, takes a picture of the instant, and reactivates the cerebellum... At the end, visual memory translates the resulting beads into numbers.
But all this shows that the human brain has no mechanism to perform arithmetic operations. It has a great ability to store information and access it in a short time; if you access a stored piece of information more frequently, the access time is reduced.
 
  • #34
I found that post really good fun @TMT
The way hardware is used to optimize speed is nothing short of magic. I understand that different processors use their own methods (at least in floating-point and graphics routines). I imagine human brains have an even wider range of methods.
 
  • #35
sophiecentaur said:
I found that post really good fun @TMT
I imagine the human brains have an even wider range of methods.
I agree with you! It has a wider range of METHODS (algorithms), but no structure to perform arithmetic operations. It performs a store and a recall in all of those processes. I mean there is no adder structure to sum 3 and 4 to produce 7; rather, it stores "3+4" and the result "7". It also stores an iterative method for multi-digit addition (single-digit addition using a carry digit), or it may sometimes prefer doubling, treating "222" + "222" as "222" * "2" = "444" - but again it will recall "4" as the result of the "2*2" operation.
I tried to emphasize that the brain uses a recall structure as the basis of its operations.
Computers also use logic gates and components made from those gates. They utilize several components orchestrated by another component implementing an algorithm. Multiplication in computers is done in this manner: there is no single component like an adder, shift register or flip-flop for multiplication, but a control component built from adders and shift registers to carry out the multiplication process.
The recall process in the brain is not a seek-and-find in a stored array. It is not searching for "3+4" among a bunch of numbers: when "3+4" is signaled, "7" is signaled spontaneously.
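The recall-plus-carry model described above can be sketched in Python. The lookup table stands in for the memorized single-digit facts, and only the carry loop is a learned "method"; the names and structure are illustrative, not a claim about how the brain actually stores these facts:

```python
# Memorized single-digit sums -- the "recalled" facts.
SUM_TABLE = {(i, j): i + j for i in range(10) for j in range(10)}

def add_by_recall(x: str, y: str) -> str:
    """Add two decimal digit strings using only table lookups
    plus the carry procedure, never a built-in multi-digit add."""
    width = max(len(x), len(y))
    x, y = x.zfill(width), y.zfill(width)
    carry, digits = 0, []
    for dx, dy in zip(reversed(x), reversed(y)):
        s = SUM_TABLE[(int(dx), int(dy))] + carry  # recall, then adjust by carry
        digits.append(str(s % 10))
        carry = s // 10
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))
```

So add_by_recall("222", "222") never "computes" 2+2; it recalls the stored fact and only runs the carry procedure around it.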
 
  • #36
TMT said:
no structure to perform arithmetic operation.
I suspect that the human brain's plasticity along with the invention / discovery of Maths may be in the process of changing that.
 
  • #37
sophiecentaur said:
I suspect that the human brain's plasticity along with the invention / discovery of Maths may be in the process of changing that.
I doubt it! If you believe in scientific methods, your theory must be proved! "EVERY human brain's plasticity, along with the invention/discovery of math, may be in the process of changing that" would have to hold for humans of any kind, educated or not. But no math activity can be conducted by a person who has had no math education.
 
  • #38
TMT said:
If you believe in scientific methods, your theory must be proved!
I have to acknowledge that.

TMT said:
But no math activity can be conducted by a person who has had no math education.
That is largely true and it applies to many mental activities - humans have a significant level of culture. But there is a lot of informal maths 'education' that people pick up outside school.
 
  • #39
sophiecentaur said:
I have to acknowledge that. That is largely true and it applies to many mental activities - humans have a significant level of culture. But there is a lot of informal maths 'education' that people pick up outside school.
I mean people who have no access to education, like some children in Africa. For them there is no informal maths 'education' either.

Also, some experiments conducted show that learning is a kind of storing and recalling. If you teach a math operation like addition to an infant and exclude a specific number pair on purpose, like {2,5}, the kid will do all other number-pair additions perfectly but will face difficulties with {2,5}. After practicing adding {2,5}, the kid will show the same performance as with the other number pairs.
 
  • #40
DennisN said:
There is no computer built or programmed yet that is even close to the language processing abilities of an average normal human. By far.

The general field of language processing can be split into several parts:
  • Separating sound/speech out of surrounding noise. If the computer is equipped with 2 or more microphones, there is both hardware and software to do that.
  • Sound to phonemes. We have excellent software for that. Each language and dialect uses a unique set of phonemes, and if sufficient material is provided, the language/dialect used can be detected with high accuracy.
  • Phonemes to text. Once the language is known, we also have software for that. However, mistakes can be made (even by humans), because to correctly distinguish between phrases with equal sound but different meaning, the context of the text must be known.
  • In speech, up to 80% of the message is conveyed through prosody, facial expressions, gestures and body language. For each of these we now have software that can correctly identify and analyze emotional and other basic content.
  • So when it comes to this part of language processing, computers have long since surpassed humans in both speed and breadth, being able to process thousands of books and conversations in minutes or seconds, serially or in parallel, and in any number of languages/dialects.
  • Deeper context, however, is always a challenge. In novels, the author usually dedicates large sections to providing the reader with the context necessary to understand actions and exclamations. Still, the entire movie “Citizen Kane” fails to provide the context to understand the final word: “Rosebud!”
  • While understanding things in their context is so overwhelmingly important for humans...
TMT said:
Also some conducted experiments showing learning is some kind of storing and recalling. if you teach any math operation like addition to an infant and exclude a specific number pair on purpose. like {2,5} Kid will do all other number pair addition perfectly, excluding {2,5} . But face difficulties with {2,5}. after practices adding {2,5}, kid will show same performance with other number pair also.
... I would claim that in the experiment cited by TMT the teachers fail to provide some useful context to the children for why they should learn to handle abstract numbers and operations, so the children do the next best thing and memorize some stuff that has no meaning for them, just to please the teacher. Given a meaningful context, even children at a young age have no difficulty in, for instance, sharing equally (division, often with remainder), using rhymes and rhythms, shapes or colors, or other more or less complex methods of division and comparison. And since an understanding of fairness is, as far as we know, congenital, so must be the ability to compare and calculate differences, as long as the child can grasp the purpose. Even without an education in math, most people can handle larger numbers when counting or measuring. I myself count steps on a hike using patterns of rhythms and songs, and only numbers up to 20. Even without math, people can sew clothes that fit, cook enough food for everyone at the table, or fold a paper into a 5-point star (recreating a golden ratio).

Computers are limited by their lack of general world knowledge and of the first-hand experience of being human - context we take for granted in conversations - and therefore computer translations often fail. The industry is working on closing that gap, since an understanding of human context is needed to improve the interaction of robots and humans.

Until recently, the computational power of computers was too limited to do anything other than abstract calculation (disregarding context). That is changing now. And once computers become sufficiently powerful and aware of, provided with, or embedded into context, they will also master the other important step of language processing - talking about something: expressing views, feelings and wishes; providing or asking for information, motivation or insight; gossip; jokes. I have a feeling very few people realize how soon that will happen and what impact it will have on our global society.
 
  • #41
Flisp said:
Still, the entire movie “Citizen Kane” fails to provide the context to understand the final word: “Rosebud!”
Huh? I think you and I must have watched different movies. If you mean that it doesn't provide enough for a computer to understand, then sure, but so what? Computers just aren't anywhere near that advanced yet.
 
  • #42
phinds said:
Huh? I think you and I must have watched different movies. If you mean that it doesn't provide enough for a computer to understand, then sure, but so what? Computers just aren't anywhere near that advanced yet.
Well, my post was actually about showing that computers are indeed very close to being that advanced, even if we never encounter an application that integrates all those abilities (since it requires more than a smartphone or desktop can offer in terms of flops). And even if "Actually, as it turns out, “Rosebud” is the trade name of a cheap little sled on which Kane was playing on the day he was taken away from his home and his mother", I still have no (or many, but no conclusive) idea why Kane would say that.
 
  • #43
Flisp said:
I still have no (or many, but no conclusive) idea why Kane would say that.
Then you missed the meaning of the movie. Why do you think "rosebud" has become such an iconic phrase? Perhaps it isn't for your generation if you are fairly young but it was for older generations.
 
  • #44
phinds said:
Then you missed the meaning of the movie.
Probably. Didn't like it, anyways. But I can give it another try.
 
  • #45
phinds said:
Why do you think "rosebud" has become such an iconic phrase? Perhaps it isn't for your generation if you are fairly young but it was for older generations.
That was precisely the point of my post. Things - spoken or written language - can only be understood within their context. And if that context is hidden, or too complex, or outside the text (movie) itself, then no language processing capability is sufficient to make sense of some remark or other. Rosebud? My generation, but not my culture.
 
  • #46
DennisN said:
There is no computer built or programmed yet that is even close to the language processing abilities of an average normal human. By far.

Flisp said:
So when it comes to this part of language processing, computers have long since surpassed humans in both speed and breadth, being able to process thousands of books and conversations in minutes or seconds, serially or in parallel, and in any number of languages/dialects.
That was not what I meant. I meant being able to process, understand and also generate natural language on a detailed level comparable to an average normal human. And no computer or software is even close to it (see * below).

Flisp said:
Deeper context, however is always a challenge.
Indeed, and I'd say calling it a "challenge" is somewhat an understatement. It is one of the most difficult problems in computer science and computer technology, and one of the most difficult problems I've encountered in academic studies.

* IBM's Watson computer is one of the most advanced language processing computers, and below I post a video describing the associated complexity of just answering a question in the open domain. "Open domain" means that the question can be about anything. This ability to answer a question, in this case a Jeopardy question, is very impressive, but it is still extremely limited when compared to the understanding of a human. A human can understand very complicated language, stories and conversations which include e.g. relations between themes and objects, time and temporal relations, space and spatial relations, emotions and emotional relations etc.

As a comparison, I, you and the other persons in this thread can easily read and understand this entire thread, including your posts. There is no computer or software that can do it on the level I, you and the other thread participants can. And I can reply to posts and write a, hopefully, relevant point about the current topic being discussed. A computer/software would not be able to do it on a detailed level comparable to a human.

Here is a good video describing the computational complexities involved in answering a Jeopardy question:

IBM Watson: The Science Behind an Answer (IBM)
 
  • #47
phinds said:
Then you missed the meaning of the movie. Why do you think "rosebud" has become such an iconic phrase? Perhaps it isn't for your generation if you are fairly young but it was for older generations.
There are a lot of films and other artistic productions that are over-analysed and critiqued. I watched Citizen Kane because I was told it was a masterpiece. It was unmemorable for me.

Flisp said:
Probably. Didn't like it, anyways. But I can give it another try.

A bit too worthy for me. It had a lot to live up to.

It is a big ask to expect a computer to analyse a work of fiction to find out 'whodunnit' because no one actually 'did' anything. It is all in the head(s) of the creator(s) of the work and there is no solution to the question. The fact that three people will watch a film and come to three different conclusions as to what it was 'all about' makes it unreasonable to test a computer with the same work of fiction.
OTOH, one could expect that a suitably complex computer, presented with all the evidence about a real crime, would find a number of possible suspects with varying calculated probabilities of guilt (or possibly just one suspect). I would tend to put my trust in that over the subjective reactions of police or a jury - or even an 'impartial' judge.

On the same topic: I heard on a recent BBC R4 programme (I cannot remember which one) someone reporting that people have claimed an AI profiling system was racist, on the grounds that it was independently teaching itself a connection between various behaviours and various facial features from a large population of data. Scary, as it again brings up some of the ideas contained in the taboo topic of eugenics. I guess the explanation could lie in the basic brief the AI system was presented with, so its views may not have started from a blank sheet.
 
  • #48
DennisN said:
IBM's Watson computer is one of the most advanced language processing computers
Ah, Watson! If I recall correctly, Watson-Jeopardy was much more about broad and parallel database search than language processing. Each Jeopardy question contains more or less hidden key words. The question itself and the combination of key words must point to precisely ONE answer, and therefore contains a unique combination of key words or hints. I don't know if IBM revealed their methods, or maybe you have some deeper insight, but I guessed (back then) that they used, in large part, statistical methods and from those built some kind of reference index that points from a variety of more or less synonymous key words (ruler, king, head of state, royalty, queen's husband, son of Henry the Fifth etc.) to an array of possible places where an answer can be found. Watson could then calculate the probability that one specific answer is correct if several synonymous key-word combinations point to the same place. Of course there is more to it, like some analysis of grammar etc., but that is rather basic stuff. I also think that Watson was given the questions in text form (so no sound-to-phonemes or phonemes-to-text transformation was required). I also recall that Watson generally performed worse when the key words were hidden in longer metaphors, rhymes or songs, and sometimes came up with answers showing that it really had no clue what the question was about. But I gladly admit that I might be very wrong; it is some years since I checked all that and I have not followed the development. (I wrote all that before I saw the video, and I think I was pretty close :-))

What impresses me much more is, for instance, cortical.io (for everyone who cares to check it out). That is language processing and understanding at a level where it becomes very hard to distinguish argumentatively between how a computer understands speech and how we do. I have experimented with a program using similar technologies to extract emotional content from texts, so that a computer could react to phrases like "I am homeless" with all the understanding of the pain and threats such a life situation may cause an individual, as a human would (if a separate reaction routine were added). The same technology allows the creation of follow-up questions, so the computer can engage in some kind of consulting dialog.

As I said, we have not yet seen much of these applications in real life (or do not notice when we do), because it demands the interaction of several demanding tasks to bring it all together to the level at which humans communicate. We have to keep in mind that verbal communication at human level is done by a brain that performs, flops-wise, far beyond what even a small network of common PCs with GPU acceleration can produce. You need the capacity of an Nvidia DGX-1 to get even close, and that costs about $100,000 for a single conversation. Most applications focus on the automation of many very simple communications, like "Google, what time is it?". That's where the money is, because humans cannot communicate in parallel or at lower capacity, so having a single human answer simple questions one after the other is comparatively expensive.

However, we now have a tool like the DGX-1 at an affordable price, and that means that even small, low-budget research groups at garage level can start experimenting with potentially very powerful stuff.
 
  • #49
Flisp said:
Ah, Watson! If I recall correctly Watson-Jeopardy was much more about broad and parallel database search than language processing.
That's my recollection as well. The questions were quite simple, linguistically. What was impressive was its ability to accurately call on a wide breadth of information to correctly answer the simple questions.
 
  • #50
sophiecentaur said:
On the same topic, heard on a recent BBC R4 programme (cannot remember which one) that someone was reporting that people have claimed an AI profiling system was racist,
Here are 3 links about that:
https://www.livescience.com/64621-how-algorithms-can-be-racist.html
https://www.livescience.com/65689-ai-human-voice-face.html
https://www.livescience.com/58675-artificial-intelligence-learns-biases-from-human-language.html

If an AI is trained on social material, the AI will inherit a bias, because the society that produces the material is biased. I know of no society that does not regard some groups within itself as more valuable than some other groups.
But even with global material I believe there will be some bias (although much weaker). If an AI learns from humans, it will incorporate our weaknesses. However, having a potentially global capacity, the chances that an AI may (one day) lift itself to a truly unbiased level are much greater than for a group of humans. Governments, NGOs, expert panels etc. always have some egoistic agenda, because they are always made up of individuals who all fight, one way or another, for their own survival, funding, prestige, glory, power, recognition, love, or position in a hierarchy.
 
  • #51
Flisp said:
If an AI is trained on social material, the AI will inherit a bias,
Almost human!
 
  • #52
Flisp said:
I would claim that in the experiment cited by TMT the teachers fail to provide some useful context to the children
I am not offering advice to anyone; I am trying to demonstrate that we do math by a recall process. But I can say that students must memorize the first 10 numbers (0-9) and the sum of each pair of single digits, or count up by 1. Then they learn a method to extend this ability to multi-digit numbers (via the carry digit).
Flisp said:
Computers work (basically) serially, brains work in parallel.
A neuron works more like a leaky conductor that only fires if it gets a certain amount of input within a certain time; a transistor is like a switch.
I agree with this, but let me expand on it. Many neurons are stimulated simultaneously for a given case. What you describe - "I, myself, count for instance steps on a hike using patterns of rhythms and songs" - stimulates another set of neurons. The neurons already stimulated by the question also receive the signal fired by the neurons responding to the song. Therefore the answer neuron group is stimulated earlier.
In learning it is advised to "use different paths for easy recall": to remember a telephone number, visualize the keypad and the geometric shape traced by the dialing action, or group the digits so they remind you of an anniversary, your plate number or a building number, or emphasize relations between groups (such as "double the previous group plus x"), or the rhythm of the sounds. You will recall a telephone number more easily the more such associations you produce.
 
  • #53
Many things to consider here: silicon computation speed versus neuron computation speed, calculations done in hardware versus software, and what actually matters. For the human brain it rarely matters that we rapidly compute the answer to a math problem, so we don't do it well.

Consider computational speed in a silicon computer. It goes faster with clock speed, but it can also do more on each clock cycle depending on its instruction set. You can build hardware to do more and more on each clock cycle by building circuits specifically for certain calculations. Basic math is very easy to implement in hardware, and that makes it fast. You can do things that are not built into the hardware by building software algorithms that call upon the hardware's capabilities. The more times the software needs to invoke the hardware, the longer it takes.

Our brains are neural networks connected to solve the kinds of problems we see every day. The book Thinking, Fast and Slow https://en.wikipedia.org/wiki/Thinking,_Fast_and_Slow tells how the human brain is fast at some things but slow at others. Long story short, we are fast at the things where it matters that we do them fast, and multiplying 3-digit numbers isn't one of them. As a child, I remember playing in the outfield as one of my childhood friends pitched to another. I could start toward where the ball was going to be hit as the pitcher started to throw; I saw a bunch of things about the thrower and hitter and responded. That mattered.

Today we are building computer codes that mimic neural networks. They have connections of various strengths between each "neighbor", like neurons, and we currently use them for things like digital image enhancement and interpretation. Neural nets have gotten as good as expert doctors at diagnosing imagery.
 
  • #54
GlenLaRocca said:
Neural nets have gotten as good as expert doctors at diagnosing imagery.
Unfortunately, as has been mentioned earlier, the 'learning' is subjected to quality criteria which can be dictated by the humans who developed the algorithms. But there is always the risk of the 'Hal' syndrome in which the computer decides to look further beyond its brief and could make very Hippocratic decisions and appear to have failed. Perhaps not yet but, as the technology progresses, the computer could deliberately make the 'wrong' diagnosis in order to achieve a result that it 'thinks' is better than the medics may think. Would that mean it was better or worse than the humans?
 
  • #55
1 + 1 = 2, or does it? Machines can do binary math faster than we can, but we can solve complex problems faster than a computer. It all depends on the input, on whether the desired output is something we have experience with, and on whether it is something we even want to do. Like the IQ test someone shoved at me: I didn't really want to do it, and about 4/5ths of the way through I started to get extremely bored, but I stuck with it as long as I could. Scored 147. I do not remember the time limit on the test. Could a computer solve all the problems in a shorter time? It depends on what it is programmed to do. Every computer has to be programmed by a human, so far. When it is the other way around, what will they try to make? Then things might get scary.

http://nautil.us/issue/59/connections/why-is-the-human-brain-so-efficient
 
  • #56
bhusebye said:
1 + 1 = 2, or does it? Machines can do binary math faster than we can, but we can solve complex problems faster than a computer. It all depends on the input, on whether the desired output is something we have experience with, and on whether it is something we even want to do. Like the IQ test someone shoved at me: I didn't really want to do it, and about 4/5ths of the way through I started to get extremely bored, but I stuck with it as long as I could. Scored 147. I do not remember the time limit on the test. Could a computer solve all the problems in a shorter time? It depends on what it is programmed to do. Every computer has to be programmed by a human, so far. When it is the other way around, what will they try to make? Then things might get scary.

http://nautil.us/issue/59/connections/why-is-the-human-brain-so-efficient
It was explained to me that this is due to the difference in their structures and operations. You have to know that everything in a machine, and its operations, is based on human conventions. E.g. the binary representation (on/off or true/false): a sequence of these elements is interpreted as a base-2 polynomial, each power of 2 multiplied by the 0 or 1 at that position in the sequence. This means you can represent any number with a sequence of on/off elements. The alphabet is likewise encoded as the order numbers of the letters, so we can represent any word as a sequence of numbers. In electronics we designed components to handle the logic operations AND, OR, NOT and NAND, NOR, XOR; arithmetic operators like the adder are designed from a truth table. For subtraction, complement arithmetic enables us to reuse the adder circuit: we take the first bit of the representation as the sign of the number (1 negative, 0 positive), and to subtract one number from another we first produce the complement of the number and add it to the other. Since every data entity is a number, subtraction also gives us comparison (<0 less, 0 equal, >0 greater). All of these are conventions. Even if some folks disagree, there is no dedicated circuit for multiplication: multiplication is performed with a combination of shift registers, counters and adders. As you may notice, speed depends entirely on the on/off transition time of the electronic components, which is around nanoseconds.
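The complement trick for reusing the adder can be sketched in Python, assuming 8-bit two's-complement words (the word width and function name are my own illustrative choices):

```python
BITS = 8
MASK = (1 << BITS) - 1  # 0xFF for 8-bit words

def subtract_via_adder(a: int, b: int) -> int:
    """Compute a - b using only bit inversion and addition,
    the way hardware reuses its adder circuit for subtraction."""
    neg_b = (((~b) & MASK) + 1) & MASK   # two's complement of b
    result = (a + neg_b) & MASK          # one pass through the adder
    # interpret the top bit as the sign, per the convention described above
    return result - (1 << BITS) if result & (1 << (BITS - 1)) else result
```

The sign of the result then doubles as the comparison: negative means a < b, zero means equal, positive means a > b.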

But the human brain works differently: it is fundamentally based on memorization. Data gathered as signals from the environment stimulates certain brain cells. The cells change somehow (producing transient proteins or chemicals; I am not sure of the details). The next time the same signal stimulates the cell, this "transient" change becomes "semi-permanent" and then "permanent". In other words, the cell has "learned", or "memorized", the stimulation signal. The cell responds to the signal by producing an electrical signal, a chemical signal, or both: it releases Ca++ and Na+ ions that pass to neighboring cells (I once read that cells also produce short-lived NO, nitric oxide, as a signaling molecule). Any electrical or chemical signal produced by a stimulated cell in turn stimulates the cells around it, and those newly stimulated cells begin their own "learning" process. (We must keep in mind that the response to stimulation depends on how far the cell has already been altered by learning.) So the brain is gradually altered by repeated data. Suppose the data are two numbers, an arithmetic operation and its result: the numbers stimulate cells that have learned those numbers, the operator stimulates cells that have learned addition, and together these cell groups produce a new signal that stimulates the cell group for the result. This chain of stimulation of the result cells by the numbers and the operation request is much slower than semiconductors.
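The transient-to-permanent picture above resembles Hebbian learning ("cells that fire together wire together"). Here is a deliberately crude sketch under my own simplifying assumptions: a single connection weight that is strengthened each time pre- and post-synaptic activity coincide, so repetition turns a weak trace into a strong one.

```python
def hebbian_update(w, pre, post, eta=0.1):
    """Strengthen the connection weight when the pre- and post-synaptic
    cells are active together (a crude model of 'memorizing' a signal)."""
    return w + eta * pre * post

w = 0.0
for _ in range(20):  # repeated stimulation: transient -> permanent
    w = hebbian_update(w, pre=1.0, post=1.0)
print(w)  # the weight grows with every repetition
```

Real synaptic plasticity is vastly more complicated, of course; the sketch only shows why repeated exposure, rather than a one-shot write, is the natural storage mechanism in this kind of system.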

In summary, the brain does not perform an addition each time. Through repeated exercises it memorizes pairs of numbers together with the result of the operation; when asked, it recalls the result. This recall is not a search through every stored item: the cells stimulated by the given numbers and the operation directly stimulate the cells storing the result. You can draw an analogy with an associative memory (content-based addressing), where the data itself forms the address used to access the result. The brain also learns methods for reducing large-number calculations to operations on pairs of single digits, so that only the single-digit facts and their results, not every possible number, need to be memorized.
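The recall-rather-than-compute idea can be sketched with a lookup table: a dict keyed by the operand pair plays the role of content-addressable memory, and multi-digit sums are assembled from memorized single-digit facts plus carries. This is my own illustration of the analogy, not a claim about actual neural mechanisms.

```python
# 'Memorized' single-digit addition facts: the operand pair is the address.
FACTS = {(a, b): a + b for a in range(10) for b in range(10)}

def add_by_recall(x, y):
    """Add two numbers digit by digit using only the memorized facts,
    the way schoolbook addition reuses the single-digit table."""
    result, carry = [], 0
    xs, ys = str(x)[::-1], str(y)[::-1]
    for i in range(max(len(xs), len(ys))):
        a = int(xs[i]) if i < len(xs) else 0
        b = int(ys[i]) if i < len(ys) else 0
        s = FACTS[(a, b)] + carry  # recall the fact, then adjust for carry
        result.append(s % 10)
        carry = s // 10
    if carry:
        result.append(carry)
    return int("".join(map(str, result[::-1])))

print(add_by_recall(121, 23))  # -> 144
```

Only 100 facts are stored, yet any pair of numbers can be added; the cost is a chain of recalls per digit, which mirrors why memorized arithmetic is slower than a hardware adder.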
This mode of operation may not be good for arithmetic, but it is very efficient at relating many kinds of data. Intelligence can be defined as creating new data from the data one already possesses. I regard the brain as a kind of associative memory: addressing is content-based, and an accessed item in turn stimulates other stored, related content.

To use the brain efficiently, we memorize some basic data (to memorize it, we revisit it repeatedly), and since recall depends on stimulating the targeted data, we can create several paths to reach it. For example, if the target is a telephone number, we split it into smaller groups of, say, three and two digits. We then associate these groups with numbers we already know very well, such as anniversary dates of close relatives (day, month or year), producing a pattern like "Mom's birth month, brother's birth year, my birth day + 7". Now when you recall your mother, your brother and yourself, the cells storing that information are stimulated; they in turn stimulate the related birth dates, and through the month/year/day pattern you reach the telephone number groups and eventually the number itself. Another helper is the standard keypad and the shape traced by the dialing sequence: for example, the shapes "L" and "J" traced on the keypad recall the digits 1 4 7 8 9 6 3. You can also exploit the cerebellum's control of the body's muscles: if you map information onto finger movements, you can recall the result by moving your fingers in the defined pattern (some fast-math teaching uses finger-movement patterns as an imaginary abacus).
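The keypad-shape trick can be sketched as a mapping from grid positions to digits. The "L then J" path below is my reading of the example in the text, assuming the standard 3x3 phone keypad layout.

```python
# Standard phone keypad digits 1-9 laid out as a 3x3 grid: (row, col) -> digit.
KEYPAD = {(r, c): str(3 * r + c + 1) for r in range(3) for c in range(3)}

def path_to_number(path):
    """Turn a traced shape (a sequence of grid positions) into digits."""
    return "".join(KEYPAD[p] for p in path)

# 'L' down the left column and across the bottom row,
# then 'J' continuing up the right column:
l_then_j = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2)]
print(path_to_number(l_then_j))  # -> 1478963
```

The point of the mnemonic is that the brain stores one visual shape instead of seven unrelated digits; the code just makes the shape-to-digits mapping explicit.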
And please do not forget that we created digital and analog computers precisely to perform calculations faster than we can, so why are we even comparing brain and computer speed? We created computers for calculation, but computers did not create humans for intelligence. Once again it is we humans who are creating artificial intelligence for them.
 
