# The Importance of Connectivity to Strong AI

Q_Goest
ThoughtExperiment: What you've referred to here as "overall perception" is called "unity" in philosophy. Searle describes it this way in his paper here:

Unity.
It is important to recognize that in non-pathological forms of consciousness we never just have, for example, a pain in the elbow, a feeling of warmth, or an experience of seeing something red, but we have them all occurring simultaneously as part of one unified conscious experience. Kant called this feature 'the transcendental unity of apperception'. Recently, in neurobiology it has been called 'the binding problem'. There are at least two aspects to this unity that require special mention. First, at any given instant all of our experiences are unified into a single conscious field. Second, the organization of our consciousness extends over more than simple instants. So, for example, if I begin speaking a sentence, I have to maintain in some sense at least an iconic memory of the beginning of the sentence so that I know what I am saying by the time I get to the end of the sentence.
Chalmers describes it this way here:

At any given time, a subject has a multiplicity of conscious experiences. A subject might simultaneously have visual experiences of a red book and a green tree, auditory experiences of birds singing, bodily sensations of a faint hunger and a sharp pain in the shoulder, the emotional experience of a certain melancholy, while having a stream of conscious thoughts about the nature of reality. These experiences are distinct from each other: a subject could experience the red book without the singing birds, and could experience the singing birds without the red book. But at the same time, the experiences seem to be tied together in a deep way. They seem to be unified, by being aspects of a single encompassing state of consciousness.
The problem this feature of consciousness creates is called "the binding problem": how do all these different states of experience get bound together into a single unified whole? Regardless of whether you take the computationalist or the non-computationalist viewpoint, there is a problem here which philosophers and scientists have tried to attack head on. They've tried to wrestle it to the ground through brute force and sheer logic and have essentially gotten nowhere. Dennett seems to deny it even exists! From the second reference above (Chalmers) comes this:

Some (e.g. Dennett 1992) hold more strongly that consciousness is often or usually disunified, and that much of the apparent unity of consciousness is an illusion.
I can't find fault with your thought experiment yet; it seems very convincing. Perhaps rewrite it using the premise that unity exists, and let others discuss whether it actually does or doesn't. But assuming unity exists, this is how you would go about disproving computationalism.

Thanks for the post, one of the most interesting posts I've seen here yet.

My knowledge is limited here, so I can't guarantee that the following thoughts of mine make any sense:

I don't see why the human brain has to be mimicked by connecting billions of computers, instead of just using one computer with the memory and processing capacity required to run a consciousness simulation. What you have suggested seems to be a billion consciousnesses connected together, like "The Borg" from Star Trek.

Even if you use your model, is there not a difference in where the instruction code for consciousness is located? In your model, the instructions/programming are spread out among a billion computers, and each computer is supposed to be analogous to a neuron; but in the brain, the instructions/programming are not located in the neurons themselves, but rather in the patterns of synapses. In your model, the wires connecting the computers don't contain code, they just transmit it.
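This point can be sketched concretely. In a connectionist toy model (a hypothetical illustration, not anything from the thought experiment itself), every "neuron" runs the same trivial rule, and all of the machine's distinct behavior lives in the weight matrix on the connections:

```python
# Minimal sketch, purely illustrative: identical neurons, behavior
# determined entirely by the synaptic weight pattern between them.

def step(activations, weights, threshold=0.5):
    """One update of a simple binary-threshold network.

    activations: list of 0/1 neuron states
    weights: weights[i][j] is the synapse strength from neuron j to neuron i
    """
    new_states = []
    for row in weights:
        total = sum(w * a for w, a in zip(row, activations))
        new_states.append(1 if total > threshold else 0)
    return new_states

# Two weight patterns turn the same "neurons" into different machines:
weights_copy = [[0, 1], [1, 0]]   # each neuron copies the other's state
weights_none = [[0, 0], [0, 0]]   # no connectivity: all activity dies out

print(step([1, 0], weights_copy))  # → [0, 1]
print(step([1, 0], weights_none))  # → [0, 0]
```

The neurons contribute nothing individual here; swap the weight matrix and you swap the "program," which is the sense in which the code lives in the synapse pattern rather than in the units.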

Also, why would your computer model have to mimic every function of a neuron, when some functions are not related to the coding for consciousness, such as producing energy and carrying out metabolism, something that would be analogous to the power supply of your computer model?

You mentioned that if some of the computers in your model were disconnected and fed artificial data through recorders, your model would not be aware of it, while a human would. My understanding is that a human would also not know of this in many situations. Consider when we are dreaming: there are no genuine inputs, only recordings of memory bits already stored in our brains playing out, yet we think it's reality while we are asleep. Also, consider the Matrix scenario.

Finally, my understanding tells me that consciousness is not really an emergent property and that FEA would actually work. Consciousness is the result of different brain modules carrying out various parts of perception. Consider a dynamic computer program: the running program is not an emergent property, but rather the summation of the various modules the program is divided into. Each module does a certain thing, and those modules can be broken down even further, which is evident to anyone who has studied any programming language. So, I would think that if we can identify all the different modules of consciousness in human brains, we can see how it's additive. For example, a certain module would code for the perception of sight, another for sound, another for touch, and so forth. If any one of these modules died, consciousness would still exist, but be less dynamic, a simpler form of consciousness. For example, consider the consciousness of a cockroach, a lizard, a worm, and so forth. There are different complexities of consciousness, which contradicts the argument that consciousness is an emergent property.
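The "additive modules" idea above can be sketched as code. This is a hypothetical illustration (the module names and the scene format are made up): perception is just the union of what each independent module contributes, and removing a module degrades the result without destroying it:

```python
# Hypothetical sketch of additive perception modules. Each module reads
# the same scene and contributes its own fragment of the percept.

def sight(scene):
    return {"sight": scene.get("light", "dark")}

def sound(scene):
    return {"sound": scene.get("noise", "silence")}

def touch(scene):
    return {"touch": scene.get("contact", "none")}

def perceive(scene, modules):
    percept = {}
    for module in modules:
        percept.update(module(scene))   # each module adds independently
    return percept

scene = {"light": "red book", "noise": "birdsong", "contact": "warmth"}
full = perceive(scene, [sight, sound, touch])
reduced = perceive(scene, [sound, touch])   # the "sight module" has died

print(full)     # → {'sight': 'red book', 'sound': 'birdsong', 'touch': 'warmth'}
print(reduced)  # → {'sound': 'birdsong', 'touch': 'warmth'}
```

Note that this only models the additive view; whether binding/unity is captured by a simple union like this is exactly what is in dispute in the thread.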

Even a currently existing computer program can be considered conscious, but a simple form of it: the program seems to be aware of enough input to allow it to function as it's programmed to do.

One interesting point though: the human brain is said to carry out a great deal of parallel processing, so perhaps for human-level consciousness to exist in a computer, there will have to be a great deal of parallel processing as well.
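As a loose illustration of that point (the modality names and the per-modality work are invented placeholders), independent "perception" tasks can be farmed out to run concurrently rather than one after another:

```python
# Sketch only: concurrent handling of independent sensory streams,
# loosely analogous to the brain processing vision, hearing, and
# touch at the same time.
from concurrent.futures import ThreadPoolExecutor

def process(task):
    modality, data = task
    return modality, data.upper()   # stand-in for real per-modality work

inputs = [("vision", "red book"), ("hearing", "birdsong"), ("touch", "warmth")]

with ThreadPoolExecutor(max_workers=3) as pool:
    results = dict(pool.map(process, inputs))

print(results["vision"])  # → RED BOOK
```

Of course, threads sharing one machine are still a far cry from the brain's massively parallel architecture; this only shows the programming style, not the scale.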

Interesting article: http://www.transhumanist.com/volume1/moravec.htm
