[Thought experiment] The futility of consciousness and free will

SUMMARY

The forum discussion centers on a thought experiment that questions the nature of consciousness, free will, and self-awareness through a hypothetical scenario involving a swarm intelligence of humans operating in cubicles. The experiment suggests that if these humans are instructed to perform tasks without awareness of the larger system, they could simulate human brain functions, raising questions about the consciousness of the resulting simulation. The conversation references Ned Block's China Brain thought experiment and critiques computationalism, concluding that the debate on consciousness may be futile because it rests on subjective definitions and axioms.

PREREQUISITES
  • Understanding of basic neuroscience concepts, particularly neuron function.
  • Familiarity with thought experiments in philosophy, specifically the China Brain and Chinese Room scenarios.
  • Knowledge of computationalism and functionalism in the philosophy of mind.
  • Awareness of key philosophical figures such as Daniel Dennett and John Searle.
NEXT STEPS
  • Research the implications of Ned Block's China Brain thought experiment on consciousness.
  • Explore the critiques of computationalism and functionalism in contemporary philosophy.
  • Investigate the Chinese Room argument and its relevance to artificial intelligence.
  • Examine the works of philosophers like Daniel Dennett and their perspectives on consciousness.
USEFUL FOR

Philosophers, cognitive scientists, artificial intelligence researchers, and anyone interested in the intersection of consciousness, free will, and computational theories of mind.

svastikajla
All right, for argument's sake we shall assume that the neurons of the human central nervous system function in a simplified way (it shall soon become clear that the details aren't relevant to the experiment). We shall assume they work as follows:

1: They receive an electrical charge via one or more inputs.
2: They put out a charge through other outputs depending on what they receive (they fire).
3: The collective of all those neurons makes the human central nervous system function and, for one, makes my hands type; lovely piano fingers they are, elegantly moving over the keyboard.
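The simplified neuron above can be sketched in a few lines of code. This is purely illustrative: the weights and threshold are assumptions chosen for the example, not anything from the post, and real neurons are of course far more complex.

```python
# A minimal sketch of the simplified neuron described above.
# Weights and threshold are illustrative assumptions.

def neuron(inputs, weights, threshold):
    """Receive charges on one or more inputs (step 1); put out a
    charge, i.e. fire, if the weighted sum reaches the threshold
    (step 2)."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Both inputs charged: the weighted sum is 1.2, so the neuron fires.
print(neuron([1, 1], [0.6, 0.6], 1.0))  # -> 1
# Only one input charged: the sum is 0.6, so it stays silent.
print(neuron([1, 0], [0.6, 0.6], 1.0))  # -> 0
```

Step 3 is then just the claim that wiring enough of these units together yields the behaviour of a whole nervous system.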

All right, now imagine a room, a cubicle. What we have in this room are lights, L.E.D.'s if you like, which start to flicker at certain points. In it we place a human whom we have briefed thoroughly about what to do. This human is instructed to press buttons in precise combinations depending on the patterns of L.E.D.'s that person sees; very easy. Of course we could give this human instructions equivalent to how one random isolated neuron in, for instance, my central nervous system works based on firing. We just set a computer to give the light patterns and ask the person in the room to press the buttons.

I'm sure by now you get the idea: we can hook up the buttons to L.E.D.'s in another room, and so forth, to effectively create a functional replica of a human central nervous system, with in each room a human given the exactly analogous instructions. This is the part where we find out that the complexity of neurons is largely irrelevant, as we can give more complex instructions if we want. All right, then we make a computer feed it sensory input, and we have all those billions of humans simply doing their little boring task. The best part is, they needn't be aware of the others; we can just tell them we are performing a test on their reaction speed. In fact, we have thus created a swarm intelligence consisting of billions of humans which can compute all a human brain can... a lot slower, so we won't see Dell investing in this.
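The key move here, that the occupant needs no understanding, only instructions, can be made concrete with a small sketch. Everything in it is a hypothetical illustration (the three-input "fire if at least two are lit" rule is an assumption, not from the post): the occupant is just a lookup table from LED patterns to button presses, yet is functionally indistinguishable from the neuron it replaces.

```python
# A hypothetical sketch of one cubicle. The occupant knows nothing
# about neurons; they only follow a briefed lookup table mapping
# LED patterns to button presses.
from itertools import product

def neuron_rule(pattern):
    # The "real" neuron being replaced: fire iff at least two
    # inputs are lit (an illustrative assumption).
    return 1 if sum(pattern) >= 2 else 0

# The briefing: precompute an instruction for every possible
# 3-light pattern, exactly as the thought experiment prescribes.
instructions = {p: neuron_rule(p) for p in product((0, 1), repeat=3)}

def cubicle_occupant(pattern):
    # The person just looks up which button to press;
    # no understanding of the larger system is needed.
    return instructions[pattern]

# Functionally indistinguishable from the neuron on every input:
assert all(cubicle_occupant(p) == neuron_rule(p)
           for p in product((0, 1), repeat=3))
```

Chaining such rooms together, outputs of one wired to the LEDs of the next, is what gives the functional replica of the whole nervous system.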

Of course, we can also let the output of this vast complex of boring cubicles animate a real human; let's just do it in a computer simulation for ethics' sake. The input that fictive human gets is of course the fake sensory stimuli of that simulation. The human, which is now a replica of me, is just walking around in that simulated world, typing this message on this board, in reality calculated by a swarm intelligence, all of whose components have no idea what they are doing, nor that their little reaction tests contribute tiny bits to my automation.

Now come the questions of life:
1: Am I—the simulation—conscious?
2: Am I—the simulation—self-aware?
3: Do I—the simulation—have free will?

My whole brain function is just humans in cubicles, but to all functional effect they perform the same functions; from my perspective, there shouldn't be a difference. Yet a lot of people would find it pretty scary to say that I am by now a conscious form of life, which would also have the ethical implication that this little program here can't just be turned off that easily.

From my perspective, all three are true; from the perspective of a human in a cubicle, or of the one who orchestrated the experiment, they are not. There's also no way for them to test whether I am conscious, just as there's no way to test whether a computer is conscious. I react to everything in the same manner every human would, though slowed down considerably, and my reactions are still calculated by billions of people in cubicles who have no idea that they are calculating anything at all.

Also, if I did have all those three, where on Earth is it 'located'?

Thoughts? I arrived at this idea from late-night calls with a friend while she was at a boring team-working camp with her orchestra. I believe it's sufficient to demonstrate the dead end of the consciousness / free will / self-awareness debates.
 
What's consciousness?

Forum rules dictate that you should define the meaning of words. I'm not much for rules, but I think this is a nonsense word.
 
Phrak said:
What's consciousness?

Forum rules dictate that you should define the meaning of words. I'm not much for rules, but I think this is a nonsense word.
Ach, I considered whether to define it, and I hoped people would read far enough to note that what it is isn't actually relevant to the argument, because the argument attacks all 'properties' of the mind equally, except what it computes.

Assuming that consciousness does not equal the effective computations the human mind executes, which I hope we can infer from context, it's not relevant to define it.

This often-used note, which I also often pull, bears no relevance here. The experiment deals equally with all properties of the human mind save the computation.
 
Hi svastikajla,
I believe what you’re trying to accomplish is the same as Ned Block’s China Brain thought experiment. Try Wikipedia:
http://en.wikipedia.org/wiki/China_brain

The conclusion is ambiguous.
Some philosophers like Daniel Dennett, have concluded that the China brain does create a mental state. Our intuition that this is impossible is just a bias against non-neuron minds, furthered by the implausibility of the scenario. There is a natural desire for us to locate the mind at a singularity because the mind feels to us like it is just one thing. Functionalist philosophers of mind endorse the idea that something like the China brain can realize a mind, and that neurons are, in principle, not the only material that can create a mental state.

Note also there have been similar thought experiments aimed at computationalism, including the Chinese Room by Searle and one by Hilary Putnam which I thought was very good.
 
Q_Goest said:
Hi svastikajla,
I believe what you’re trying to accomplish is the same as Ned Block’s China Brain thought experiment. Try Wikipedia:
http://en.wikipedia.org/wiki/China_brain
In effect it's functionally equivalent, except that my version has the explicit requirement that all members in the cubicles do not know what they compute in total, and also demonstrates that whether they are told this or not is irrelevant to the outcome. I should note that I had not come across the name 'China brain' until now.

Q_Goest said:
The conclusion is ambiguous.
It doesn't really have a conclusion; a conclusion depends on axioms. Were we to take as an axiom that consciousness is determined only by function, then the conclusion is that the brain is conscious; if we take as an axiom that more is at stake, diving into magical consciousness, then the brain needn't be. Which shows that whether the brain is 'conscious' or not, and by extension has any property other than the ability to simulate human behaviour, is independent of all the things we can test and observe, demonstrating the futility of the debate as signalled in the title.


Q_Goest said:
Note also there have been similar thought experiments aimed at computationalism, including the Chinese Room by Searle and one by Hilary Putnam which I thought was very good.
I draw a critique of the Chinese Room very similar to the one I first made in this experiment, except that the many cubicles are now replaced by one cubicle: the single inhabitant of that room doesn't understand Chinese any more than one fake neuron in a cubicle understands what the simulated human behaviour does. However, the overlaying personality that that person creates by translating is said to 'understand Chinese'. It shows yet again the futility of 'consciousness' in a debate, as we come back to what this 'overlaying personality' is, which seems analogous to the consciousness of the China-brain replica: something which doesn't really seem to 'exist' as perceivable to humans.
 
I'm glad you can see all the problems with computationalism. Not sure exactly what you mean by this:
Which shows that whether the brain is 'conscious' or not, and by extension has any property other than the ability to simulate human behaviour, is independent of all the things we can test and observe, demonstrating the futility of the debate as signalled in the title.
But I suspect you're pointing to the fact that, regardless of what we consciously believe, and given computationalism & functionalism as the paradigm for consciousness, there is no effect from these beliefs or experiences on any of the individual neurons and, by extension, no effect on the brain as a whole. Thus, any computational theory of consciousness is 'empty'. Note that computationalism is attacked by this line of reasoning, and in defence computationalists have various arguments in return, the strongest being that neurons, to the best of our knowledge, are only affected 'classically' (i.e., per the rules of classical mechanics). There are those who have disagreed with this, such as Hameroff & Penrose, Stapp and many others.

So what's your conclusion?
 
Q_Goest said:
I'm glad you can see all the problems with computationalism. Not sure exactly what you mean by this:
This is my conclusion: that 'consciousness' and the like are independent of our current axiomatic system, to put it informally.

The question is largely one of truth by definition: of how we define consciousness and/or the fine distinctions of neurons.
 
