Medical: How Much Brain Capacity Does Each Person Have?

  • Thread starter: Winzer
  • Tags: Brain Capacity
Summary
Quantifying the "space" of the human brain in terms of memory capacity is complex and cannot be directly compared to computer hard drives. Memory types, such as episodic or procedural, significantly influence capacity, and the brain's dynamic nature allows for potentially infinite storage, unlike static computer systems. Current understanding suggests that memory is linked to neural activity and pathways rather than a fixed byte measurement. While some estimates suggest vast capacities, like hundreds of thousands of terabytes for those with photographic memory, the actual mechanisms of memory storage remain poorly understood. Overall, the brain's capacity for memory is more sophisticated than binary systems, making direct comparisons to computers largely speculative.
  • #31
jsgruszynski said:
These kinds of numbers have no meaning, as several people have pointed out.

Here's a human specification that completely flummoxes these comparisons or metrics:

The maximum information and data rate of the human retina is 500 Kbits/second for B&W and 600 Kbits/second for color.

http://en.wikipedia.org/wiki/Retina#Physiology (see end of section)

This is because: 1) the retina detects only contrasts in space and time - it has been found to act as a neural net performing entropy filtering, and only the filtered result is sent down the optic nerve; 2) the high-resolution portion of the eye is the fovea, which covers only a very small fraction of the retinal area; and 3) the full "field" of vision is assembled by scanning the fovea over the scene with fixational and saccadic movements that collect only a portion at a time, so the information rate for the "full visual field" is even lower than the numbers above! The brain patches these samples together to give the illusion of a full field of view. In short, most of the information that hits the retina as light is simply thrown away - to the tune of 99.9% or more.

To put this in perspective, the worst web cams have far higher performance than this:

640x480x30fps ≈ 9.2 Mbits/second even at 1 bit per pixel for B&W, and 3x this number for color - roughly 20-50x the retina's data rate. This is a sucky web cam.
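As a rough sanity check on those figures (assuming 1 bit per pixel for B&W and 3 bits per pixel for color - illustrative assumptions, not real webcam specs):

```python
# Back-of-the-envelope comparison of the quoted retina data rates with a
# basic 640x480 @ 30 fps webcam. Bit depths are assumptions chosen only to
# illustrate the scale of the gap.
RETINA_BW_BPS = 500e3      # quoted figure: 500 Kbits/s (B&W)
RETINA_COLOR_BPS = 600e3   # quoted figure: 600 Kbits/s (color)

width, height, fps = 640, 480, 30
pixels_per_second = width * height * fps        # 9,216,000 pixels/s

webcam_bw_bps = pixels_per_second * 1           # assume 1 bit/pixel for B&W
webcam_color_bps = pixels_per_second * 3        # assume 3 bits/pixel for color

print(f"Webcam B&W:   {webcam_bw_bps / 1e6:.1f} Mbits/s "
      f"(~{webcam_bw_bps / RETINA_BW_BPS:.0f}x the retina figure)")
print(f"Webcam color: {webcam_color_bps / 1e6:.1f} Mbits/s "
      f"(~{webcam_color_bps / RETINA_COLOR_BPS:.0f}x the retina figure)")
```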

This suggests the entire mechanism of computing by the human brain is completely and utterly different from a computer.

In fact, we all live a fraction of a second in the past because of this low information rate - whatever you see at any given moment actually happened a fraction of a second ago, and it is only catching up to your consciousness as you notice it.

There are cognitive theories that say that what we see is simply an internally created simulation that is corrected by low-information-rate hints from our eyes. Learning how to draw or paint involves learning ways to override these simulations so you can "see" and draw what you are actually seeing rather than what your brain simulates as the objects it has recognized. Learning to be truly "scientific" in the use of empirical reality is much the same.
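A toy sketch of that idea - an internal model that predicts continuously and is only nudged by sparse, low-rate hints (this is not a model of vision; every number below is invented for illustration):

```python
# Toy "internal simulation corrected by low-rate hints": the model's velocity
# is deliberately a little wrong, so it drifts between corrections.
import random

true_pos, true_vel = 0.0, 1.0      # the "world"
est_pos, est_vel = 0.0, 0.9        # the internal simulation (slightly wrong model)
GAIN = 0.5                         # how strongly a hint corrects the estimate

for t in range(1, 101):
    true_pos += true_vel                         # the world evolves every tick
    est_pos += est_vel                           # the simulation predicts every tick
    if t % 10 == 0:                              # a sparse, noisy "hint" arrives
        hint = true_pos + random.gauss(0, 0.5)
        est_pos += GAIN * (hint - est_pos)       # nudge the prediction toward the hint
        print(f"t={t:3d}  world={true_pos:6.1f}  simulation={est_pos:6.1f}")
```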

Or it could suggest that we have a far, far superior form of image compression, since our brains only receive a relatively low amount of data compared to our digital equivalents. I don't know about you, but my eyes receive much more detailed images than any camera can capture.
 
  • #32
Blenton said:
Or it could suggest that we have a far, far superior form of image compression, since our brains only receive a relatively low amount of data compared to our digital equivalents. I don't know about you, but my eyes receive much more detailed images than any camera can capture.

You're quite right. It certainly does suggest image compression already.

Except that it's lossy. It's called semiotic generalization and naming. We call anything that is vaguely apple-shaped and red simply an "apple". That's an enormous amount of information compression. It completely throws away any information about the particular apple (we don't care much for apple-rights, so this isn't much of an issue of "every apple is an individual"). This kind of compression seems to be the very basis of brain computation. Descartes should have said: "I generalize and name, therefore I am" - my Latin sucks so I can't translate that.
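A crude way to see how much "naming" throws away (the objects and attributes below are invented purely for illustration):

```python
# Toy illustration of compression by naming: distinct objects collapse onto one
# label, and the per-object detail is unrecoverable afterwards.
apples = [
    {"hue": 0.95, "diameter_cm": 7.2, "blemishes": 3},
    {"hue": 0.88, "diameter_cm": 6.5, "blemishes": 0},
    {"hue": 0.91, "diameter_cm": 8.0, "blemishes": 1},
]

# "Semiotic generalization": every one of these becomes just the label "apple".
labels = ["apple" for _ in apples]

detail_bits = sum(len(repr(a).encode()) * 8 for a in apples)  # crude size of the details
label_bits = sum(len(lab.encode()) * 8 for lab in labels)     # size of the labels alone
print(f"details: {detail_bits} bits, labels: {label_bits} bits "
      f"(~{detail_bits / label_bits:.0f}x smaller, but lossy: the details are gone)")
```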

This is very different from a digital computer, which represents things abstractly but doesn't have the gradation of representation that the human brain has in this respect. A computer can't bootstrap itself from its creation, nor can it bootstrap itself in a novel situation its programming didn't anticipate. The human brain can do both.

You can't compress below the pure information content without discarding information like this. The bandwidth limits of the retina are so low that, information-wise, it doesn't seem one could ever "catch up" and achieve a "real-time feed" of the world as we experience it in a computer system. The numbers just don't work out. That's really the point - we aren't connected to the outside world with anything close to 100% fidelity, except by possible tricks like internal simulation.
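That floor is just Shannon entropy; a tiny worked example with made-up symbol probabilities shows the bound a lossless code can't beat:

```python
# The lossless-compression floor: Shannon entropy gives the minimum average
# bits per symbol; going below it necessarily discards information.
import math

probs = {"apple": 0.5, "pear": 0.25, "plum": 0.125, "quince": 0.125}  # illustrative

entropy = -sum(p * math.log2(p) for p in probs.values())  # 1.75 bits/symbol floor
naive = math.log2(len(probs))                             # fixed-length code: 2 bits/symbol
print(f"entropy floor: {entropy:.2f} bits/symbol, naive fixed-length: {naive:.0f} bits/symbol")
```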

When you back these information rates into "smallest noticeable spatial change" and the like, they fit well with the limits described in experimental psychology and in the UI references by Card and Moran.

For example, the net information rate (information != data) from the outside environment through the visual system to consciousness is only 3-4 bits per second in terms of distinct, separable information events. Part of this is due to short-term memory for non-associative (pure) information being limited to about 5±2 items, which pop off the FIFO quickly. Thankfully the world, and our "codebook" for the world, offer a lot of redundancy for associative linking (sometimes known as "chunking"). Think of it as a one-time pad for all the common things we can semiotically generalize to. There's also the question of how you bootstrap the coding in the first place.
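Rough arithmetic behind those numbers (the slot count and the acronym chunks below are illustrative assumptions, not experimental data):

```python
# A short-term buffer of ~5 unrelated items holds very little raw information,
# but "chunking" lets each slot point at a rich, already-known pattern.
import math

SLOTS = 5                                  # assumed short-term capacity (the 5±2 above)
bits_per_random_letter = math.log2(26)     # ~4.7 bits if letters are unrelated
print(f"{SLOTS} random letters ~= {SLOTS * bits_per_random_letter:.1f} bits of pure information")

# Chunking: the same 5 slots can each hold a familiar 3-letter acronym,
# covering 15 raw letters because the patterns are already in the codebook.
chunks = ["FBI", "CIA", "NSA", "IBM", "DNA"]
raw_letters = sum(len(c) for c in chunks)
print(f"{len(chunks)} familiar chunks cover {raw_letters} letters "
      f"(~{raw_letters * bits_per_random_letter:.1f} bits had they been random)")
```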

In most designed compression codes there is knowledge of the redundancy on both sides (sender and receiver). For natural-language text there are known redundancies - the "Wheel of Fortune" trick, or the cryptographer's first pass using the letter-frequency order ETAOIN SHRDLU... For video compression like MPEG-4, the fact that the human eye detects nothing faster than ~200 ms is used to throw out information (that's the retina's compression at work). But if you are talking about bootstrapping the redundancies of the external environment, how does that happen? Essentially it is the same problem as distributing a one-time pad when neither side knows a common language or shares a common context.
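A minimal sketch of that shared-redundancy idea, assuming a rough (not exact) letter-frequency table that both sender and receiver already know:

```python
# Sender and receiver build the same Huffman code from the same frequency
# table; the compressed stream only makes sense because of that shared codebook.
import heapq
from itertools import count

freqs = {" ": 15.0, "E": 12.7, "T": 9.1, "A": 8.2, "O": 7.5,
         "I": 7.0, "N": 6.7, "S": 6.3, "H": 6.1, "R": 6.0}  # rough figures

def build_huffman(freqs):
    tiebreak = count()  # keeps heap entries comparable when weights tie
    heap = [(w, next(tiebreak), {sym: ""}) for sym, w in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (w1 + w2, next(tiebreak), merged))
    return heap[0][2]

code = build_huffman(freqs)                 # both sides derive this same table
message = "A RAT ATE THE OATS"
encoded = "".join(code[ch] for ch in message)
print(f"{len(message) * 8} bits as ASCII -> {len(encoded)} bits with the shared code")
```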

The likely means is "childhood development" - where we accumulate the external environment's redundancy through direct interaction and learning, starting from a quasi-tabula-rasa (the brain is prone to generalize, name, and then synthesize, so it's not a pure blank slate). This also fits with Wolfram's idea that there are systems of high computational complexity for which the fastest means of obtaining their results (akin to the halting problem) is to use the real thing and let it play out in real time, because no simulation will run faster.
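A minimal sketch of the kind of system Wolfram has in mind (Rule 30, with an arbitrary width and step count chosen here for illustration): there is no known shortcut to the pattern at step N other than computing every step up to it.

```python
# Rule 30 cellular automaton: new cell = left XOR (center OR right),
# on a ring of cells. The only way to know step N is to run steps 1..N.
def rule30_step(cells):
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n]) for i in range(n)]

cells = [0] * 31
cells[15] = 1                                   # single "on" cell in the middle
for step in range(16):
    print("".join("#" if c else "." for c in cells))
    cells = rule30_step(cells)
```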

One of the exercises in "Drawing on the Right Side of the Brain" (the distinction is probably not really about hemispheres, but that's the title) illustrates this. In people without art training, around the age at which childhood development predicts the onset of "abstract thinking", the ability to "draw naturally" tends to be lost, and their drawings share a common theme: objects with familiar geometric abstractions get drawn more like the abstraction than like the actual view of the object in the visual field. In other words, they draw the abstraction rather than what they are actually "seeing" because, as an artist would put it, "they haven't learned to 'see'" - to see literally rather than symbolically. This traces back to pre-Renaissance, pre-perspective drawing, which often depicted people at sizes proportional to their social standing rather than their literal size - symbolic drawing. Another way of saying it: drawing an internal simulation of reality rather than reality as it empirically exists.

Other references for this model/framework are Jeff Hawkins (co-founder of Palm) and Dan Dennett:

http://www.ted.com/index.php/talks/jeff_hawkins_on_how_brain_science_will_change_computing.html

http://www.ted.com/index.php/talks/dan_dennett_on_our_consciousness.html
 
