Replace a neuron with a computer chip?

In summary, the conversation discusses the two assumptions that can be made when replacing a brain cell with a computer chip and their implications for the concept of strong AI. The first assumption is that the internal workings of a neuron are pertinent to consciousness, while the second is that only the outward function of the neuron is important. The conversation also touches on the Chinese room argument and the Turing test's limitations in determining consciousness. Finally, the conversation concludes that the best assumption for a strong AI advocate is that the chip creates an outward function that duplicates the function of a neuron.
  • #1
ThoughtExperiment
Removing a brain cell and replacing it with a chip requires one of two possible assumptions. The first is that what goes on inside a neuron is pertinent to consciousness. Alternatively, we might assume that what goes on inside a neuron is NOT pertinent to consciousness and that only the outward function of the neuron matters. Thus we break the argument into two possibilities:

EITHER: We assume the internal workings of a neuron are pertinent to consciousness. In that case, for the computer-chip brain to become conscious, the chip would have to have the same internal workings the neuron does, which is obviously not true. So we can't assume the internal workings of the neuron are pertinent to consciousness if we want to accept strong AI: since the internal workings of a neuron and a computer chip are different, if what goes on inside a neuron is pertinent to the phenomenon of consciousness, then a neuron cannot be replaced with a computer chip and still produce consciousness.

OR: We assume that whatever a neuron is doing inside its cell membrane does not need to be duplicated; only its outward function need be duplicated. It only has to 'say' the same thing in response to a given input that a neuron does. This is the Chinese room argument at neuron scale. We could have any number of different methods of producing the same output from a given input. A recording, for example, is not the voice of a conscious individual; it is simply a duplicate of a conscious individual's voice. This is also identical in philosophy, and directly analogous, to the Turing test, which has been widely rejected as a method of determining consciousness.

It has been argued that the Turing test so defined cannot serve as a valid definition of machine intelligence or "machine thinking" for at least three reasons:
1. A machine passing the Turing test may be able to simulate human conversational behaviour, but this may be much weaker than true intelligence. The machine might just follow some cleverly devised rules. A common rebuttal in the AI community has been to ask, "How do we know humans don't just follow some cleverly devised rules?" Two famous examples of this line of argument against the Turing test are John Searle's Chinese room argument and Ned Block's Blockhead argument.
Ref: Wikipedia

Of the three reasons Wikipedia provides, this first one is the most applicable.

Conclusion: The best assumption a strong AI advocate can possibly make is that the chip creates an outward function that duplicates the function of a neuron. This is directly analogous to the Turing test, which holds that only the outward appearance of consciousness is necessary to assume consciousness exists. Further, one must assume the inner workings of a neuron have no effect whatsoever on the phenomenon of consciousness.

Comments/thoughts?
 
  • #2
We should make explicit what we mean by "strong AI" before we proceed. Is the following a satisfactory sense of the term? Would you make any amendments or additions?

Since Searle coined the term, we suggest that, for our purposes here, we first take a look at what "strong AI" is for Searle.

In his article "Minds, Brains, and Programs" (1980), Searle characterizes strong AI as follows: ". . . according to strong AI, the computer is not merely a tool in the study of the mind; rather, the appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states. In strong AI, because the programmed computer has cognitive states, the programs are not mere tools that enable us to test psychological explanations; rather, the programs are themselves the explanations."

http://www.ptproject.ilstu.edu/STRONGAI.HTM
 
  • #3
H, thanks for the clarification. Just to be sure we're all speaking about the same thing, I'd like to clarify my interpretation of what Searle says here:
the appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states.
He's saying, and it is my understanding, that the appropriately programmed computer is consciously aware like a human is. It experiences the phenomenon of unity, for example. The computer and the person would have identical experiences.

Further, that is my understanding of the neuron/computer chip thought experiment. The question of whether you would notice any difference if a neuron were replaced with a chip really implies that there is no difference, that the experience after having replaced all neurons is identical; or at least, that is the possibility being explored.
 
  • #4
ThoughtExperiment said:
Conclusion: The best assumption a strong AI advocate can possibly make is that the chip creates an outward function that duplicates the function of a neuron.

In thought experiments about neuronal replacement, the above isn't a conclusion so much as it is an initial assumption-- we assume that there exists some device D that can replace a neuron N in a neural network such that the behavior of the network does not change when N is replaced with D. For this assumption to hold, it must be the case that D computes the same input/output function as N, i.e. given the same electrical and chemical inputs, D generates the same electrical and/or chemical responses as N would have. From this initial assumption, we think about what would happen to consciousness as more and more Ns are replaced with Ds. The strong AI view would be that consciousness and other cognitive functions would not change a bit.
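
To make that initial assumption concrete, here is a minimal Python sketch (an illustration of the assumption only, with hypothetical class names and a made-up threshold rule, not anything from the literature): two units that expose the same input/output mapping, so that swapping one for the other is invisible to the surrounding network.

```python
# A minimal sketch of the replacement assumption. Class names and the
# threshold rule are hypothetical stand-ins, not a model of real neurons.

class BiologicalNeuron:
    """Stands in for neuron N: fires when summed input crosses a threshold."""
    def __init__(self, threshold=1.0):
        self.threshold = threshold

    def respond(self, inputs):
        return 1 if sum(inputs) >= self.threshold else 0


class ReplacementChip:
    """Stands in for device D: computes the same I/O function as N."""
    def __init__(self, threshold=1.0):
        self.threshold = threshold

    def respond(self, inputs):
        return 1 if sum(inputs) >= self.threshold else 0


# The thought experiment's starting point, expressed as a check: for every
# input pattern, D's response matches N's, so the network can't tell them apart.
n, d = BiologicalNeuron(), ReplacementChip()
patterns = [[0.2, 0.3], [0.6, 0.5], [1.0, 0.1], [0.0, 0.0]]
assert all(n.respond(p) == d.respond(p) for p in patterns)
```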

In fact, there is room for the strong AI view to be even stronger than the view that neuronal replacement would not change an entity's mental properties. Depending on what exactly a strong AI proponent believes is constitutive of 'appropriate programming,' such a proponent might hold that a computer need not bear as strong a resemblance to a human brain as a neuron-by-neuron replaced one does in order to support identical mental properties. That is, a strong AI proponent might believe that even more can be abstracted away from the specific implementation of the human brain than is imagined in the neuron replacement thought experiment while still preserving the associated mental properties.

Conversely, one can take a weaker stance of what 'appropriately programmed' means and still hold a strong AI-ish view on the neuron replacement thought experiment. For instance, suppose some internal cellular process P that occurs inside neurons is relevant to consciousness. One might then propose that a device D' can replace a neuron N in a neural network without changing the network's conscious properties just in case D' perfectly models N's P in addition to N's input/output function in the context of the network. So some version of this thought experiment can survive even if it turns out that some internal cellular functions are also relevant to consciousness.

As far as I understand it, the neuron replacement thought experiment is used not to assert that consciousness is only sensitive to physical activity on the level of inter-neuronal signals, but rather to argue that the biological properties of neurons are not relevant to consciousness (i.e., the same consciousness could be built from different, non-biological material). In general, it is used to argue that consciousness is sensitive to the functional activity of the physical system with which it is correlated, rather than the material properties of that system (what matters is not what the system is made of, but rather what the system does). The view that consciousness is sensitive primarily or only to physical activity on the level of inter-neuronal signaling seems to be the dominant one in cognitive science today and serves as a useful simplifying assumption for this thought experiment, but it isn't critical to the thought experiment's driving concerns.
 
  • #5
Just a note to clarify the direction of this thread. The intent of this thread is not to disprove strong AI. It seems the neuron/chip replacement thought experiment is generally a highly regarded thought experiment used to argue for strong AI by proponents such as Dennett, Chalmers and others. My understanding is that Searle disagrees with this, mostly for the reasons I'm using here. The purpose of this thread is to discuss the pros and cons of this thought experiment. I feel the thought experiment provides circumstantial evidence for strong AI at best, and is possibly very misleading or even intentionally misrepresentative of the facts.

As far as I understand it, the neuron replacement thought experiment is used not to assert that consciousness is only sensitive to physical activity on the level of inter-neuronal signals, but rather to argue that the biological properties of neurons are not relevant to consciousness (i.e., the same consciousness could be built from different, non-biological material).
Yes, I agree. The assertion is that the biological properties of neurons are not relevant. The underlying assumption in this thought experiment is also as you mention:
In thought experiments about neuronal replacement, the above (duplicating the outward function of the neuron) isn't a conclusion so much as it is an initial assumption . . .
The thought experiment makes the assumption that only the duplication of the outward function is sufficient. I'd call this a sleight of hand. The thought experiment as it stands disregards any processes within the neuron, and it is this objection which should be addressed as you've suggested:
For instance, suppose some internal cellular process P that occurs inside neurons is relevant to consciousness. One might then propose that a device D' can replace a neuron N in a neural network without changing the network's conscious properties just in case D' perfectly models N's P in addition to N's input/output function in the context of the network.

Problem: A computation can never replace cellular processes, and since we don't know which ones are responsible for consciousness, the thought experiment is based on a faulty assumption. That assumption is that all we need to do to create consciousness is produce the outward appearance of consciousness; this is exactly what the Turing Test does, and it is why the Turing Test fails. I realize the suggestion is that we only need to identify these pertinent processes and duplicate them in symbolic form (i.e., mathematics), but here's the objection . . .

The thought experiment chooses (rather arbitrarily) the level of the neuron. Let's take the literal interpretation of the experiment for a moment. Why not replace a tiny fraction of the neuron with a chip? How about replacing the DNA with a chip? Obviously this replacement won't work; the neuron needs the actual DNA to function, not a symbolic or mathematical representation. Similarly, the mitochondria can't be replaced with a chip; the remainder of the neuron will die from lack of energy. Any number of portions of the neuron can be replaced with a computer model of that portion, and the neuron will cease to function as a neuron. This is also a Searle argument in a sense, as he's pointed out that calculating rain, for example, won't make anyone wet. The point here is that if there are processes P that occur inside a neuron, then we might ask whether these processes require the actual process or not. Simply symbolizing a process using numbers does not result in the process being duplicated; it results in a process being calculated.

The obvious comeback to this argument assumes a non-literal interpretation of the thought experiment. Basically it contends that once all parts of the neuron are replaced, there won't be a need for DNA, mitochondria, etc… Only the "function" is needed. But we obviously can't get a chip to function like a neuron in reality. Inserting an actual chip in place of a neuron is no better than inserting a lead bullet.*

The term "function" really is the key IMO. In the case of the Turing test, the computer only has to provide the function, or outward appearance, of consciousness. So what we really mean by the computer chip providing the "function" of a neuron is not that it will actually have the same processes, we mean it will produce the outward appearance of the processes just as a Turing test looks for the outward appearance of consciousness.

IMO, it seems to me the logic of replacing a neuron with a chip is not a particularly strong argument in support of strong AI unless one can also defend the Turing test.

Further, it seems the true logic of this thought experiment hasn't been thought through or debated very strongly, or perhaps it has. Perhaps there is a paper that covers these objections and others in detail.


* I think there's one other point regarding neuron processes that is relevant: In order to allow a neuron to be replaced with a chip, the chip must obviously have sensors to detect signals, be they electrical signals, ions, glucose used for energy, or whatever. Neurons transmit and receive ions (calcium ions?) through synapses, but that is not the only effect. They are also affected very strongly by temperature: if a neuron's temperature rises or falls by about 10 degrees, it will cease to function. Similarly, pressure on a neuron might elicit a response, such as during a stroke. The electrical waves that pass throughout the brain would also need some type of sensor. For the chip to replace a neuron, one would require numerous sensors as well as transmitters for transmitting pressure and ions to other neurons, heating devices to transmit heat to other neurons, etc… So suggesting one replace a neuron with a chip is overly simplified. A chip cannot interface with neurons directly. The point of all this is to show just how different these two things are; they are as different as calculating rain and actually being wet.
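
To get a feel for how wide that interface would have to be, here is a rough Python sketch of the channels the footnote lists; the field names, types, and defaults are hypothetical placeholders, not a real device specification.

```python
# Illustrative only: the sensor/transmitter surface a replacement chip
# would need, per the footnote above. All names and units are made up.
from dataclasses import dataclass, field

@dataclass
class NeuronReplacementInterface:
    # Quantities the device must sense:
    synaptic_nt_in: dict = field(default_factory=dict)  # neurotransmitter levels per synapse
    temperature_c: float = 37.0      # ~10 degrees off and a real neuron stops functioning
    pressure_kpa: float = 0.0        # mechanical stress, e.g. during a stroke
    field_potential_mv: float = 0.0  # ambient electrical waves in the brain
    glucose_mmol: float = 0.0        # metabolic supply

    def emit(self):
        # Quantities the device must produce for its neighbors:
        return {
            "nt_release": {},  # chemical output per synapse, not a digital pin
            "heat_w": 0.0,     # thermal coupling to neighboring cells
        }
```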
 
  • #6
Comparing the neuron replacement thought experiment (NR) with the Turing Test thought experiment (TT) is pretty clever, and I can see why it would lead to concerns of the sort you're expressing. However, there are a couple of important distinctions to be made, such that rejecting the TT does not entail rejecting NR. (For my part, I don't buy the Turing Test, but I lean towards thinking that some form of NR is true.)

Before I talk about those distinctions, I should address the quite valid concerns you raise about the practicality of NR. It would indeed be quite a task to develop an artificial device that could seamlessly replace a neuron in a complicated neural network, and there is indeed a problem in discerning what, if any, intracellular processes are really relevant to consciousness. But I don't think we should get too caught up in all that. This is a thought experiment, after all; we're not concerned with actually doing this experiment or even with whether we could meet the technical and practical demands it would present, but only with reasoning out what would happen assuming that we could actually do it.

The first distinction between TT and NR we should observe is that TT is only concerned with attributing intelligence to computers, not (as far as I know) phenomenal consciousness per se. That's already a big disanalogy. But since you're concerned that NR is operating on some appearance-implies-reality fallacy in a way similar to common critiques of TT, it should be instructive to paper over this disanalogy and proceed as if TT really were concerned with attributing consciousness rather than just intelligence.

So now the distinction that becomes relevant is essentially the level of abstraction on which each thought experiment operates. NR is only committed to claiming that consciousness is maintained if the functional structure of the brain is maintained on the level of individual neurons (or perhaps even lower levels, including intraneuronal processes), whereas our modified TT is committed to claiming that consciousness can be attributed to any system that displays behavior identical to humans on the level of language input and output. The gap between these levels of abstraction is quite significant. Essentially, TT corresponds to the stronger form of strong AI I alluded to in my previous post, wherein pretty much all lower levels of the brain's structural and functional organization can be abstracted away while still maintaining (at least some form of) consciousness. TT entails NR, but NR does not entail TT, since a proponent of NR can hold that TT simply abstracts away too much of what is relevant to consciousness.

That said, NR does retain some of TT's appearance-implies-reality flavor. The reasoning behind NR basically goes that since each replacement device exactly mimics the behavior of the replaced neuron, the behavior of the brain as a whole should be identical to the way it was pre-replacement-- and that entails that the subject should continue to behave precisely as if he were conscious, including the production of verbal utterances like "of course I'm still conscious-- for instance, I am currently very much appreciating the deep and rich hues of this beautiful Da Vinci painting, which look the same to me as they always have." The argument goes that we should be compelled to believe such behaviors and utterances and infer that the subject is indeed still conscious.

Strictly speaking, does behaving as if one is conscious entail that one is indeed conscious? No, strictly speaking, it doesn't. There is no logical contradiction in supposing that our subject has indeed lost any semblance of phenomenal consciousness, even though he goes on behaving as if he hasn't.

But this touches on the deep epistemic problems we have with consciousness in general. The only phenomenal consciousness I can observe directly, the only consciousness I can be completely sure exists, is my own. For others, the best I can do is to infer that they are conscious from their behaviors. (More recently, brain imaging has provided another avenue of indirect, third person evidence about first person consciousness.) But the sort of evidence I have for the consciousness of others is precisely the sort of evidence I have for the consciousness of our hypothetical neuron replacement subject S. On what basis should I continue to attribute consciousness to normal humans but deny it to S?

Not only do I have no evidential basis on which to deny that S is conscious, but if I do make such a denial, I am also committed to a rather strange viewpoint. For instance, I am perhaps committed to claiming that S's phenomenal consciousness gradually 'fades' or otherwise becomes degraded as each neuron is replaced, and all the while he is essentially none the wiser. In the limit, I must claim that S is not conscious at all even though his behavior has not changed a bit, which, while perhaps logically consistent, seems an exceedingly odd or even absurd thing to actually occur in our world.
 
  • #7
www-formal.stanford.edu/jmc/whatisai/node1.html

--
Q. Are computers the right kind of machine to be made intelligent?

A. Computers can be programmed to simulate any kind of machine.

Many researchers invented non-computer machines, hoping that they would be intelligent in different ways than computer programs could be. However, they usually simulate their invented machines on a computer and come to doubt that the new machine is worth building. Because many billions of dollars have been spent in making computers faster and faster, another kind of machine would have to be very fast to perform better than a program on a computer simulating the machine.
--
 
  • #8
The only phenomenal consciousness I can observe directly, the only consciousness I can be completely sure exists, is my own. For others, the best I can do is to infer that they are conscious from their behaviors.
I'll agree that behavior is a valid method of determining whether or not a person is conscious, but I'd have to disagree that the best you can do is infer consciousness from behavior. This concern that you can't know except from behavior (and other phenomena like it) is based on a fear I'll call Logic Insanity Disease (LID) for lack of a better term. It's a concept that struck me quite a long time ago, and I think it's applicable here. I've heard many philosophers suggest exactly this difficulty, but I think the best way around it is as follows: If two systems are virtually identical (i.e., my brain operates using neurons and so does everyone else's), then those two systems should produce the same phenomena. Namely, if my brain seems to be aware and conscious, then I'm assuming all similar brains should be also. They should produce the same phenomena as my brain does, and since I'm aware of my own consciousness, and since I can assume my brain is functioning normally, I can assume other normally functioning brains produce the same phenomena mine does. One can go as far as to say that if my brain sees the color red, then the same system existing in another mechanism should see red in the same way.

But a computer is not a similar mechanism. One can't apply this logic to a computer because the two are not the same. The point I made in my previous post, in showing that the two systems are different to the point of being incompatible, is that one can't necessarily expect the same phenomenon to occur in the same way if the systems are not the same. So the 'cure' for LID cannot be applied here. I think we all intuitively use this logic to assume others are conscious already, but it is worth pointing out what subconscious logic we're using, and I think this is it.

If I can't apply the remedy for LID, then the next thing we do is to suggest that the function or behavior is the same. We examine the outward appearance of the function. But this doesn't get us there, and I don't believe it ever will.

The first distinction between TT and NR we should observe is that TT is only concerned with attributing intelligence to computers, not (as far as I know) phenomenal consciousness per se.
I'd like to get this one clarified. You're correct in that Alan Turing used the word "intelligent". My understanding, and it could be wrong, is that he meant "conscious" in the same way a human is conscious. His paper is online here:
http://cogprints.org/499/00/turing.html
I'd be glad to hear comments on whether or not Alan Turing meant for the TT to be a test for consciousness or not.

Searle says this about TT, consciousness and behavior:
The two most common mistakes about consciousness are to suppose that it can be analyzed behavioristically or computationally. The Turing test disposes us to make precisely these two mistakes, the mistake of behaviorism and the mistake of computationalism. It leads us to suppose that for a system to be conscious, it is both necessary and sufficient that it has the right computer program or set of programs with the right inputs and outputs. I think you have only to state this position clearly to enable you to see that it must be mistaken. A traditional objection to behaviorism was that behaviorism could not be right because a system could behave as if it were conscious without actually being conscious. There is no logical connection, no necessary connection between inner, subjective, qualitative mental states and external, publicly observable behavior. Of course, in actual fact, conscious states characteristically cause behavior. But the behavior that they cause has to be distinguished from the states themselves. The same mistake is repeated by computational accounts of consciousness. Just as behavior by itself is not sufficient for consciousness, so computational models of consciousness are not sufficient by themselves for consciousness. The computational model of consciousness stands to consciousness in the same way the computational model of anything stands to the domain being modeled. Nobody supposes that the computational model of rainstorms in London will leave us all wet. But they make the mistake of supposing that the computational model of consciousness is somehow conscious. It is the same mistake in both cases.
Ref: http://www.ecs.soton.ac.uk/~harnad/Papers/Py104/searle.prob.html
From reading that, I'd have to believe that Searle is implying the TT is a test for consciousness by way of testing for behavior and computation. Also, just a side note: although Searle uses these types of arguments against computationalism, I would disagree that anything he's said yet disproves computationalism. I don't think anyone has proven or disproven computationalism yet.

NR is only committed to claiming that consciousness is maintained if the functional structure of the brain is maintained on the level of individual neurons (or perhaps even lower levels, including intraneuronal processes),
This is an interesting point. There is a structure to the brain, and how various portions of the brain function and interact seems to be critical to how the brain works. But how is it we're maintaining them by removing them and replacing them with something else? A prosthetic limb is replacing the function of that limb, but it does not replace the limb. The problem with replacing a portion of the brain with a prosthetic device is that we don't yet have a theory for consciousness, so we don't know what other phenomena need to be duplicated in order for consciousness to emerge. If all it takes is for a device to be able to calculate, then an abacus can be used to replace a neuron, or a slide rule, or just pencil and paper. We could put a man in a room that gets bits of paper sent to him and he has rules to return bits of paper with funny Chinese markings on them that he doesn't understand, so the man in the Chinese room could be used to replace the neuron. Suggesting only the calculation is important is to suggest we already know that only calculations are necessary for consciousness, not hormones, amino acids, or any of the myriad complex molecules floating around in the brain.
 
  • #9
Q. Are computers the right kind of machine to be made intelligent?
Now there's a loaded question! LOL
 
  • #10
ThoughtExperiment said:
I'll agree that behavior is a valid method of determining whether or not a person is conscious, but I'd have to disagree that the best you can do is infer consciousness from behavior. This concern that you can't know except from behavior (and other phenomena like it) is based on a fear I'll call Logic Insanity Disease (LID) for lack of a better term. It's a concept that struck me quite a long time ago, and I think it's applicable here. I've heard many philosophers suggest exactly this difficulty, but I think the best way around it is as follows: If two systems are virtually identical (i.e., my brain operates using neurons and so does everyone else's), then those two systems should produce the same phenomena. Namely, if my brain seems to be aware and conscious, then I'm assuming all similar brains should be also. They should produce the same phenomena as my brain does, and since I'm aware of my own consciousness, and since I can assume my brain is functioning normally, I can assume other normally functioning brains produce the same phenomena mine does. One can go as far as to say that if my brain sees the color red, then the same system existing in another mechanism should see red in the same way.

That's a good point. But what makes NR more interesting and compelling than hard core computationalist/strong AI claims is that very much of the flavor of what you suggest here can be extended to NR. In the case of a brain whose neurons have been replaced by appropriate artificial devices, what is retained from the original brain is not just gross input/output functions on the level of the organism as a whole. Remaining similarities between the original, biological brain and the new, artificial brain include the following:

* identical functional behavior at least on the level of neurons
* identical temporal evolution of signal processing
* similar signal carrier (electrical pulses)
* similar spatial distributions and relationships of processing units and signals, again on the level of neurons

So, many characteristics of the original brain beyond just abstract functional behavior are retained. The only really striking difference is the very low-level structure of the individual processing units (types of atoms used, configurations thereof, etc). So although we cannot attribute consciousness to an NR brain as readily as we could to a normal one, we still have quite a large set of similarities from which to form such a basis. Your 'LID cure' would not be a given, but it would still remain quite viable.

ThoughtExperiment said:
The problem with replacing a portion of the brain with a prosthetic device is that we don't yet have a theory for consciousness, so we don't know what other phenomena need to be duplicated in order for consciousness to emerge.

Quite true. Again, NR does not take it for granted that we know what aspects of brain function are relevant to consciousness; rather, it is an argument that seeks to establish the conclusion that biology is not one of these relevant aspects.

ThoughtExperiment said:
If all it takes is for a device to be able to calculate, then an abacus can be used to replace a neuron, or a slide rule, or just pencil and paper. We could put a man in a room that gets bits of paper sent to him and he has rules to return bits of paper with funny Chinese markings on them that he doesn't understand, so the man in the Chinese room could be used to replace the neuron. Suggesting only the calculation is important is to suggest we already know that only calculations are necessary for consciousness, not hormones, amino acids, or any of the myriad complex molecules floating around in the brain.

I agree that consciousness must be dependent on more than just some abstract notion of pure calculation. I very much doubt that a natural phenomenon like phenomenal redness owes its existence to something so abstract that it could be implemented by any kind of physical structure according to any kind of suitable algorithm over any kind of conceivable time scale. The abacus example you mention is my personal favorite for drawing out the extreme counterintuitiveness and implausibility of the very staunchest of the strong AI positions.

However, it's important to realize that NR is not an argument for pure computationalism. A large number of details about the physical implementation of the brain are indeed maintained in NR, as outlined above, and this makes a big difference. For instance, suppose for the sake of argument that consciousness is only associated with physical systems in which electrical pulses are transmitted in some particular kinds of spatiotemporal patterns. If this is the case, then pure computationalism fails spectacularly, but NR still holds, because an NR brain exhibits electrical activity in particular kinds of spatiotemporal patterns in a fashion virtually identical to biological brains.
 
  • #11
Thanks for the feedback, and sorry it took so long to respond. I've had to do some additional research on this and that rolled into a vacation.

I see your point about trying to duplicate the function of a neuron. You're correct about that, and that's Chalmers' point too. So:
- If consciousness only requires duplicating a function in a general way, then it would seem that NR will work.
- If consciousness requires something more, if for example there's something else to it such as suggested by Hameroff and others, it can't work.
- If consciousness depends more strongly on exactly what functions are being performed, and if switches or chips can't duplicate those functions, then there may be something in between, and one may slowly lose consciousness as neurons are replaced. I believe this is the stance taken by Searle, Ned Block and others.

Debating which may be correct doesn't seem as productive as I'd thought it might be. It seems there's been healthy debate on each side, much more than I'd realized, and it has more to do with function than with how something behaves, as I'd originally thought. Thanks again for the feedback.
 
  • #12
If you copy the function of a neuron (the outward function), then you have effectively replaced its function in the neural network. It can now judge various factors (inputs) and provide an output based on these inputs and their importance (weights). It now becomes part of the neural network. It is not so much the neurons as the connections between them that are important; the brain as a whole is somewhat more than its parts. Thinking is a global process; each individual element certainly doesn't do much, but helps towards the global view. Once you have parts that weigh up various factors and parts that can associate data with other data, you start to get the basics of thought, but the exact knitting of the neurons is a little harder. People often fail to understand how a single neuron can think, but it is the combination of connections and neurons; the whole is more than the sum of its parts when assembled. I write neural network software every day, and the more you come to understand how neural networks work in their 'judging factors' and 'associating memories' (like tomatoes, and that day I dropped tomatoes on myself), the more you start to understand how similar AI neural nets are to our own brains, although consciousness is a little bit of a harder problem. Again, as a simple NN is more than the sum of its parts, so the brain may be more, and consciousness (or perceived consciousness) may be the result.
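
For readers who haven't seen the 'inputs weighted by their importance' idea in code, here is the standard textbook artificial neuron as a minimal Python sketch; it is a generic illustration, not the poster's actual software.

```python
import math

def artificial_neuron(inputs, weights, bias=0.0):
    """Weighted sum of inputs squashed through a sigmoid: one unit
    'judging various factors by their importance'."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # output between 0 and 1

# A single unit does very little on its own; thought-like behavior only
# shows up in the combination of many units and their connections.
print(artificial_neuron([0.5, 0.9, 0.1], [1.2, -0.4, 2.0]))  # ~0.61
```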
 
  • #13
The question is: is an amoeba self-aware? Is an ant? A cat? We can certainly get to a slug's level with an artificial NN. When there's so much going on in making a decision (memories, associations, going off on an associative-memory tangent), this could be construed as consciousness. The more going on, the more aware, is my philosophy.
 
  • #14
ThoughtExperiment said:
Removing a brain cell and replacing it with a chip requires one of two possible assumptions. The first is that what goes on inside a neuron is pertinent to consciousness. Alternatively, we might assume that what goes on inside a neuron is NOT pertinent to consciousness and that only the outward function of the neuron matters. Thus we break the argument into two possibilities:
I'm going to address only this point of your discussion. Why do you make this an either/or choice? Unless you're trying to make this decision for each individual neuron instead of any neuron, it doesn't make any sense. Not all neurons would be expected to be involved in consciousness (cognitive processes). There are neurons that have other functions, such as regulating hormone release, or control of other autonomic functions (heart rate, breathing, intestinal motility, temperature regulation, etc) that have no requirement of consciousness.

Also, since neurons don't have just one input and one output, a single neuron might be involved in both cognitive and non-cognitive processes...perhaps as a "relay hub" of sorts, so you might not be able to categorize such a neuron according to the dichotomy you start out with...it could be both.

Another issue I have with the idea of replacing a neuron with a computer chip is what sort of output you would have. How would you connect it to the other neurons? Neurons don't communicate via electrical signals; they communicate via chemical signals (neurotransmitters). Those neurotransmitters can be either stimulatory or inhibitory on efferent neurons, depending on the neurotransmitter. The neurotransmitter released can also be different depending on the signals received by the afferent neuron. Neurons don't just communicate via on/off switches either. You could get more or less neurotransmitter released, and depending on the receptors on the efferent neuron, that may or may not increase the signal received (in other words, if the receptors are few and already saturated, increasing neurotransmitter release may not matter, but if receptors are abundant, there may be more sensitivity to different concentrations of neurotransmitter).
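
As a toy illustration of that saturation effect, consider a simple hyperbolic binding curve; the function and its parameters are an illustrative simplification supplied here, not a physiological model.

```python
def postsynaptic_response(nt, n_receptors, k=1.0, sign=+1):
    """Toy saturating dose-response: bound receptors follow a simple
    binding curve, so extra neurotransmitter matters little once the
    receptors are saturated. sign=-1 models an inhibitory synapse."""
    return sign * n_receptors * nt / (k + nt)

# Few receptors, already near saturation: doubling release barely helps.
print(postsynaptic_response(5.0, n_receptors=10))    # ~8.3
print(postsynaptic_response(10.0, n_receptors=10))   # ~9.1
# Abundant receptors: much more sensitivity to concentration differences.
print(postsynaptic_response(5.0, n_receptors=100))   # ~83
print(postsynaptic_response(10.0, n_receptors=100))  # ~91
```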

I also wouldn't put much stock in what Searle says. He might be able to impress philosophers, but he gave a seminar for the neuroscience program I used to be a member of and we were NOT impressed. He clearly has not kept up-to-date on progress in neuroscience and used fairly naive arguments about the behavior of neurons.
 
  • #15
Ooh, Moonbear, do you have any papers on that seminar by Searle?

Oh, and about the neural chip to neuron... I thought only the synaptic knobs/synapse region dealt with NTs. If that is so, I wonder if you could chop off the ends and somehow hook the knobless dendrite/axon to a neural chip to exhibit potential signalling within the axon. The trick would be to maintain the knobless ends somehow onto the chip. Or perhaps the chip would be a cube that stores/pools Ca/K/Na (from CSF). That would be a funky design. By the way, have you read Geary's book called "The Body Electric"?
He tells of some interesting experiments, especially one researcher who put a remote into his own and his wife's arms to attempt emotional signal transfer via the remote. I don't know how that experiment panned out.
 
  • #16
Jim, thanks for the feedback. I take it that you feel the thought experiment provides a convincing argument for the concept that a brain made of computer chips and structured as neurons are structured can provide for the phenomenon of consciousness.

I think the thought experiment has (slightly) more validity now that I've had a chance to examine the function of neurons more closely, though I can't see how anyone can claim it provides proof in any way.

It is not so much the neurons as the connections between them that are important, …

How can you be sure it is the connections between the neurons that produce consciousness? The neuron gets information from numerous locations in the form of neurotransmitters and then manipulates that information somehow before 'deciding' whether or not to release neurotransmitters of its own. What assumptions does one need to make to conclude the interconnections are all that are necessary? If there are any at all, how can we be sure the differences we overlook are not critical in some way? And because a neuron will sometimes 'rewire' itself to new, different neurons, would we not also need to duplicate that function as well?

In concluding that the neuron itself serves only the function of some fancy set of switches, we assume that whatever is going on inside the neuron is unimportant. Whatever kinds of manipulations the neuron is doing are therefore assumed to have nothing to do with consciousness. With this in mind, we could do those manipulations by any variety of means, so long as the end result produces the same release or non-release of neurotransmitters. If this is true, one could have those internal functions performed with electrical switches, with water valves, by a man with an abacus, or by putting a man inside a Chinese room.
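
As a toy illustration of this 'any variety of means' point, here are two internally different realizations of the same outward function; the example only shows that identical input/output can hide very different internals, and says nothing about whether either would be conscious.

```python
# Two internally different 'neurons' with identical outward behavior.

def neuron_by_arithmetic(a, b):
    # Internal mechanism 1: compute the answer on demand.
    return 1 if a + b >= 1.0 else 0

# Internal mechanism 2: a precomputed lookup table (a man with an abacus
# could maintain this table by hand, Chinese-room style).
_table = {(a, b): neuron_by_arithmetic(a, b)
          for a in (0.0, 0.5, 1.0) for b in (0.0, 0.5, 1.0)}

def neuron_by_lookup(a, b):
    return _table[(a, b)]

# Outwardly indistinguishable on the shared input domain:
assert all(neuron_by_arithmetic(a, b) == neuron_by_lookup(a, b)
           for (a, b) in _table)
```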

I'll continue this by responding to Moonbear because MB is also making similar comments.
 
  • #17
Moonbear, thanks for the feedback. You're correct, of course; not all brain cells contribute to consciousness. There's a discussion of that on another thread regarding NCCs. Regardless, the thought experiment originally proposed by Chalmers replaces a brain cell with a chip and asks the question, "At what point might consciousness disappear?" So I suppose you could say Chalmers' original question is in error, but I think that to suggest a given brain cell doesn't contribute or only partially contributes to consciousness is a bit of a nit-pick. The point is not that all brain cells contribute; the point regards replacing the function of a neuron with a chip, that's all. My either/or choice, then (the one you quoted), is an attempt to expand on the function of the brain cells that contribute to consciousness, if you will.

Given that we've replaced the input/output of any given neuron with the equivalent input/output, will consciousness necessarily be unaffected? Also please note that I'm not arguing one can be assured one way or another, what I'm suggesting here is to break the problem up into various possibilities and see if there's a logical argument to suggest consciousness remains or does not.

I still think there's a valid argument here which suggests that we can't necessarily replace the function of a neuron simply by providing identical input/output via some interface. As you point out, neurons don't communicate via electrical signals like computer chips. I'll quote Roger Rigterink here regarding this point.

The point of the argument is to show that as long as the silicon chips carry out the same functions as the neurons which they replace, we can transform a human being, Robert, into a machine, Robot, without this individual losing consciousness. It is important to note, however, that the neurons do more than channel electrons; they are also chemical factories.

Chalmers himself notes this in passing by saying, "We can imagine that [the replacement chip] is equipped with tiny transducers that take in electrical signals and chemical ions and transforms these into a digital signal upon which the chip computes, with the result converted into the appropriate electrical and chemical outputs." (Emphasis added).
Obviously, something more than mere computing (that to which Church’s thesis applies) is taking place in this thought experiment. Apparently, the ‘silicon chip’ which is to replace a neuron has a pharmaceutical laboratory available through which it can create and dispense appropriate chemicals. Thus, it is disingenuous to suggest that a mere silicon chip can serve as a neuron replacement. In order to have the replacement ‘chip’ carry out all the appropriate functions (including the physical as well as the logical), one would have to have something that looks very much like and behaves like a neuron. What we do not know is to what extent the physical functions are responsible for the existence of consciousness.
Ref: http://www.manitowoc.uwc.edu/staff/awhite/roger97.htm

I remain unconvinced that the NR experiment is sufficiently convincing to rest on. In fact, it isn't really a logical argument at all. It doesn't offer us any reason to believe that consciousness can be created by switches, except to suggest that the only thing needed for consciousness is to create something that functions in a way similar to it. It offers no explanation as to why consciousness emerges; it only says that it does, and that if we do something similar to what the brain does, consciousness should similarly emerge. I'm not buyin' it.
 
  • #18
neurocomp2003 said:
Ooh, Moonbear, do you have any papers on that seminar by Searle?
It was just a department seminar. We had donors who are into the whole mind-brain thing and requested he speak, so in the interest of keeping donors happy, we brought him in (not worth it in my opinion...he has a ridiculous speaking fee...not very "academic" of him to demand that, and I'll admit it puts me off right from the start if someone demands a speaking fee when invited to give a seminar).

Oh, and about the neural chip to neuron... I thought only the synaptic knobs/synapse region dealt with NTs. If that is so, I wonder if you could chop off the ends and somehow hook the knobless dendrite/axon to a neural chip to exhibit potential signalling within the axon. The trick would be to maintain the knobless ends somehow onto the chip. Or perhaps the chip would be a cube that stores/pools Ca/K/Na (from CSF). That would be a funky design. By the way, have you read Geary's book called "The Body Electric"?
He tells of some interesting experiments, especially one researcher who put a remote into his own and his wife's arms to attempt emotional signal transfer via the remote. I don't know how that experiment panned out.
Chop off the ends? Then you'd have a damaged neuron. How is that going to work? Even the action potential is an ion gradient, not a pure electrical signal.

But, let's say, for the sake of argument, you could patch onto the synaptic end and somehow have your chip translate that into a usable signal (somewhat like the way an electrophysiologist can patch onto a neuron to measure membrane potential...though they patch onto the cell body, not the synapse). The first question that comes to mind is how do you then deal with synaptic plasticity? This seems to be dependent on both the presynaptic and the postsynaptic neuron to strengthen or weaken certain synapses. Normal brain function involves a huge amount of synaptic plasticity, so if we clamped a neuron onto a chip, that particular function is going to be disrupted. The second question is how do you send a signal to the neurons efferent to the one being replaced by a chip, especially if the signal is supposed to be inhibitory rather than stimulatory? Again, you have the problem of plasticity on the efferent side as well as the afferent side.

Of course I'm making the assumption we're talking about the CNS here. If you're talking about a motor neuron, you can stimulate contraction of a muscle with an electrical stimulus, and this is already being done...not with a single chip, but with electrodes...in order to start restoring some functionality to paralyzed limbs.
 
  • #19
ThoughtExperiment said:
Given that we've replaced the input/output of any given neuron with the equivalent input/output, will consciousness necessarily be unaffected?
If you're only talking about a single neuron, consciousness will be unaffected. It doesn't matter if you replace it or not, you can just kill a single neuron and have no noticeable effect. The brain has quite a few redundant systems and remarkable ability to "rewire" itself to bypass injured areas.

The more relevant questions to what I think you're trying to get at here would be how many neurons, and how much of a neural network, you would need to destroy, and where, to disrupt consciousness/cognitive function. (I keep replacing consciousness here with cognitive function... I suspect you're not asking about conscious vs unconscious, but about cognitive processes such as thinking, self-awareness, decision-making, language, memory formation, learning, etc. If I'm incorrect about that, please speak up.) As someone mentioned above, cognition is not an intrinsic property of a neuron, but an outcome of an entire network of neurons.
 
  • #20
The first question that comes to mind is how do you then deal with synaptic plasticity? This seems to be dependent on both the presynaptic and the postsynaptic neuron to strengthen or weaken certain synapses. Normal brain function involves a huge amount of synaptic plasticity, so if we clamped a neuron onto a chip, that particular function is going to be disrupted.

Arguing from the computationalist side, it isn't too difficult to deal with this. We simply assume all chips that replace the neurons are fully interconnected. In other words, we connect the new chip to ALL neurons, since we can't (physically) create new connections. The chip only uses the connections needed. At first, the chips only use those connections they initially replace and leave the rest unconnected. If the chip which replaces a neuron creates a connection with a different neuron, the computer simply opens that gate and allows the connection. One can assume there is a causal relationship which results in the new connection being created, which is calculable. If the chip (the one which replaces a neuron) breaks a connection with another neuron, the chip simply shuts off that connection. That's why one can't make a real argument around this capability. I had started to suggest this objection, but stopped because of this lack of an argument.
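
Here is a minimal Python sketch of that fully interconnected, gated scheme; the sizes, weights, and update rule are made up purely for illustration.

```python
# All chips wired to all chips; 'plasticity' emulated by opening and
# closing gates. Sizes and values are arbitrary.

n_units = 4
weights = [[0.5] * n_units for _ in range(n_units)]   # potential strengths
gates = [[False] * n_units for _ in range(n_units)]   # all closed at first

# Open only the gates for the synapses the replaced neurons actually had:
gates[0][1] = gates[1][2] = gates[2][0] = True

def step(activity):
    """One update: signals flow only through open gates."""
    return [sum(weights[i][j] * activity[i]
                for i in range(n_units) if gates[i][j])
            for j in range(n_units)]

# Rewiring: open a gate when the modeled neuron would have grown a
# synapse, close it when one would have been pruned.
gates[0][3] = True    # new connection formed
gates[2][0] = False   # old connection broken

print(step([1.0] * n_units))
```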

If you're only talking about a single neuron, consciousness will be unaffected. It doesn't matter if you replace it or not, you can just kill a single neuron and have no noticeable effect. The brain has quite a few redundant systems and remarkable ability to "rewire" itself to bypass injured areas.

Ok, but what about NCCs? What about the thalamus? Is a single neuron in the thalamus not going to affect consciousness? I'm not so sure, though it doesn't really matter much. Arguing this issue isn't productive. One can accept your proposition without coming to any conclusion regarding the replacement of a neuron with a chip. One can suggest that a single neuron replaced with a chip has no effect or a negligible effect, and both responses are equal from the POV of Chalmers. The issue is the eventual replacement of a large number of neurons with chips.

(I keep replacing consciousness here with cognitive function... I suspect you're not asking about conscious vs unconscious, but about cognitive processes such as thinking, self-awareness, decision-making, language, memory formation, learning, etc. If I'm incorrect about that, please speak up.)

Not sure what you really mean. Please elaborate.

As someone mentioned above, cognition is not an intrinsic property of a neuron, but an outcome of an entire network of neurons.

This is an assumption/opinion. It has no basis in theory. For example, one can equally suggest that the replacement of every neuron with a chip results in a P-zombie. Why should consciousness "emerge" at all? If one is to make this claim, there needs to be some theory as to why consciousness should emerge as opposed to a P-zombie.
 
  • #21
Moonbear: heh, I knew I was forgetting something, PNS != CNS... forgive me, I'm not really a biology/psychology person, more psych/math/cs :smile: :smile:

As for the neural chip design... about the issue of plasticity, it depends on whether you need it. If your goal was to connect one CNS neuron to another through one "synaptic chip", then I don't think you would worry about plasticity.

Now let's say you really wanted to... I wonder if you could build a neural chip with NT gates and sensors that detect the passing of NTs from one presynaptic neuron to a postsynaptic one. And yeah, the problem with cutting off the knobs would be to not allow the axon to rupture. I was under the impression that the axon and cell body have different maintenance structures; that is to say, if an axon gets destroyed, the cell body still holds its structure.
The solution would need to "sew" up the branches.

As for plasticity again: if you had a bunch of gates (a 2D surface with gates), you could try to grow multiple connection zones if you found a way to sew up the axon. Or perhaps the neuron will adapt its growth structure to the neural chip over some evolution time. Whether the 2D surface chip resembles a single neuron or multiple neurons then becomes a design choice of the maker.
 
  • #22
Hi All,

A neurochip is certainly possible, but examine a pyramidal cell with more than 30,000 "pins" and thousands of them changing every second.
Just think: the cell itself is modifying its behavior via some 80 neurotransmitters.

Plasticity is the key issue of all neural functioning. If you restrict this essential behavior, you'll simply be back to an unintelligent chip.

The brain is definitely not hardwired, but there are of course some "skeleton" networks that are modified by glia and neurons.
 
  • #23
Hoo hoo hoo, what you say really makes me laugh. Recent robots from many world-ranking companies are not as advanced as what you describe. But I think there are robots that can sense a little and have some feelings; that is also fun.
I don't think the replacement is good in terms of ethics either; it evolves machines but stops human evolution.
 
  • #24
OK, here is a variation of Neuronal Replacement. Instead of replacing each neuron with silicon chips, replace them with other neurons.

In this experiment, you identify all of the essential characteristics of each individual neuron, such as length and number of its connections. You then invent a way to grow individual neurons in the lab that have the exact properties you want.

Now, start replacing neurons one at a time. Has consciousness changed? What if you replace them all at once? If you believe that replacing neurons with silicon simulations makes consciousness fade away because there is some essential property of consciousness INSIDE the neurons, then surely consciousness will also fade away when they are replaced with lab-grown neurons, which have no way of knowing any consciousness or intrinsic meaning from the life history of that individual. The replacement neuron is simply a replacement machine in that sense, but a biological machine.

If your friend had each of their individual neurons replaced with replica neurons, and they continued to talk normally and seemed conscious, would you still lay down your life for them, or would you now consider them a zombie?
 
  • #25
Hi jeff,
Suggesting 'your friend' is no longer 'your friend' after having one's neurons replaced is a very dualistic perspective. Who is this 'you' being referred to? Even a person's neurons are replaced, bit by bit, by the body's own biological mechanisms, many times over the course of a lifetime. None of the molecules in my body are the same ones I had many years ago. People are not 'you' or 'me'; they are patterns of molecules which produce the phenomena of consciousness. There is no "your friend" other than the pattern of molecules which we loosely call that person's body.

Now take the hypothetical thought experiment of growing exact duplicate neurons in petri dishes and using them to replace the neurons in 'your friend'. How is that any different from what nature is doing right now to every person's body?
 

1. What is the purpose of replacing a neuron with a computer chip?

The purpose of replacing a neuron with a computer chip is to create a more efficient and advanced way of processing information in the brain. This technology, known as neuromorphic computing, aims to mimic the structure and function of the human brain to improve tasks such as pattern recognition and decision making.

2. How does the process of replacing a neuron with a computer chip work?

The process involves creating a computer chip that mimics the structure and function of a neuron, including the ability to receive and transmit information. This chip is then connected to the existing neural network and is able to communicate with other neurons in the brain, essentially acting as a replacement for a damaged or missing neuron.

3. What are the potential benefits of replacing a neuron with a computer chip?

Replacing a neuron with a computer chip has the potential to greatly improve the capabilities of artificial intelligence and machine learning systems. It could also help in treating neurological disorders and injuries, as the chip can take over the function of damaged neurons.

4. Are there any ethical concerns surrounding replacing a neuron with a computer chip?

As with any emerging technology, there are ethical concerns that need to be addressed. Some of the concerns surrounding this technology include issues of privacy, control over one's own thoughts and actions, and potential misuse of the technology. It is important for ethical guidelines to be established and followed in the development and use of this technology.

5. What are the limitations of replacing a neuron with a computer chip?

One of the main limitations is that the current technology is still in its early stages and has not yet reached the level of complexity and sophistication of the human brain. Additionally, the brain is a highly complex and interconnected network, so replacing a single neuron may not have the desired effect. There are also concerns about the long-term effects and potential risks of introducing a foreign object into the brain.
