The Importance of Connectivity to Strong AI

In summary, the conversation discusses a thought experiment that aims to challenge the concept of the computational mind, which states that any calculating machine can give rise to consciousness. The essay focuses on the importance of connectivity in understanding consciousness and compares it to the use of finite element analysis (FEA) and computational fluid dynamics (CFD) in engineering. It argues that without proper connectivity, there can be no conscious phenomena, and the machine cannot distinguish between signals, making it impossible to know if it is actually calculating anything. The essay also raises the question of whether the type of signal passing through the machine affects its ability to produce consciousness.
  • #1
ThoughtExperiment
My apologies for the length of this essay; it runs approximately 4,000 words, which is too long for a single post, so it is split across two posts. At the end of the first post, a link is provided to the second. Feel free to comment on either thread.

This is an attempt at a thought experiment to discount the computational mind, on par with Searle's "Chinese Room". I don't consider myself a philosopher; in fact, I've been an engineer for about 15 years now, so much of the language that goes into debates about consciousness involves concepts I'm not totally familiar or comfortable with. This essay instead works with concepts most engineers might be comfortable with: finite element analysis and computational fluid dynamics. I've provided an overview of how that type of engineering analysis is done; hopefully it will be sufficient to allow people to grasp the thought experiment I've proposed.

I've also not addressed some types of computational methods or computers, primarily because I feel doing so will only confuse the issue, and retorts to this essay based on different computational methods should be fairly easy to group and dismiss. If something is missing along those lines, such as "this won't work with a quantum computer because… ", I'd be glad to discuss that as a separate issue. My intention here is to provide a coherent thought experiment and to discuss any missing or overlooked concepts which might alter the conclusion. Thanks in advance for your time.


The Importance of Connectivity to Strong AI and the Computational Mind

Abstract:


Searle describes strong AI as follows: ". . . according to strong AI, the computer is not merely a tool in the study of the mind; rather, the appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states. In strong AI, because the programmed computer has cognitive states, the programs are not mere tools that enable us to test psychological explanations; rather, the programs are themselves the explanations." In other words, the mind is a calculating machine, and any calculating machine of similar qualities will give rise to consciousness in the same way. The assumption is that subjective experience is a by-product of the calculation, or some may say, it IS the calculation.

What this essay intends to demonstrate can be summarized as follows:
1a. If various portions of the computer ARE connected, the computational mind concept allows the possibility of conscious phenomena emerging.
1b. If various portions of the computer are NOT connected, there can be no conscious phenomena.
2. There is no mechanism allowing individual portions of the computer to discern whether or not they are connected.
3. If the individual portions of the computer can not discern whether they are connected or not, no overall phenomenon can emerge which requires them to be connected.

Further, this essay intends to show that the connectivity of the computing device is vitally important in understanding consciousness. Is the signal or information that is passed from one portion of the machine to another special in any way, or can any signal which allows the computation to be performed suffice? Although it would appear to be a superficial question whose answer is simply that any signal should suffice, that concept will be challenged after it is shown that the computational machine can not distinguish between signals. As a result, there is no way for the machine to know whether or not it is in one piece and actually calculating anything. To help illuminate this argument, a thought experiment will be used, applying the same concepts used by computational fluid dynamics and finite element analysis tools.


Finite Element Analysis and Computational Fluid Dynamics

Computational fluid dynamics (CFD) and finite element analysis (FEA) provide engineers with tools to calculate the behavior of complex systems. CFD is used to analyze how fluids flow - for example, how air flows over the surface of an aircraft, or how water might flow through a turbine used to create electricity at a hydroelectric power plant. FEA is used to analyze stresses and strains in materials. In the case of the aircraft, its wing might be analyzed to see what kind of stresses the aluminum structure is under, to ensure the structure can withstand the rigors of flight; or FEA could be used to examine the turbine casing at that hydroelectric power plant, to ensure the pressure of the water and the stresses created by the spinning turbine do not damage those structures.

CFD and FEA are very similar tools, in that they both take a large system, break it down into small chunks, and analyze each chunk individually, using information coming from one chunk to calculate what will happen inside the next. Each chunk is a three-dimensional piece of the phenomenon being modeled.

In the case of CFD analyzing the flow of air over a wing, for example, an imaginary three-dimensional grid is formed around the aircraft, and it is through this 3D grid that the air moves. If we examined one of those small three-dimensional chunks in the grid, we would find air moving into the chunk, air moving out of the chunk, perhaps some pressure waves reflecting off the aircraft passing through, and other phenomena. To calculate what is going on inside this chunk, a mathematical model, implemented as a subroutine, computes what happens as air moves into and out of the chunk using continuity and other governing equations. As the air moves over the wing, the model calculates the drop in pressure, and that pressure is converted into a force where the grid borders the wing to determine how much lift the wing might generate.

An FEA analysis works very similarly. In this case, it is a structure which is examined. Take the turbine case for example. The turbine case must support the bearings on which the turbine shaft rides, it must contain loads due to internal pressure, and it must be able to handle all the forces and loads on it. Again, a three-dimensional grid is used to model the entire case. Imagine a lump of clay, molded to look like the turbine case, and a thin wire knife used to chop it vertically lengthwise, vertically crosswise, and then horizontally, many many times, creating tiny rectangular chunks. Each chunk will have equations for the forces applied to its six faces, equations which calculate the stresses those forces create inside the volume, and equations which determine the strain, or amount of stretching, experienced by the material inside it.

Both of these methods of analysis use the same concept of breaking a finite amount of matter up into very tiny chunks. They both then apply equations which model the applicable physical laws for the phenomena of interest. One can extend this concept to analyze anything whatsoever, such as a piping system, or even a human brain. The overall behavior of any emergent phenomenon can then be determined to a high degree of accuracy, limited only by the accuracy of the model.
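
To make the chunk-by-chunk idea concrete, here is a minimal sketch in Python (my own illustration, not taken from any particular CFD or FEA package). A simple 1D heat-diffusion update stands in for the continuity and stress equations described above; the point is only the structure: each chunk is updated using its own state plus information passed in from its immediate neighbors.

```python
# Minimal sketch of the chunk-by-chunk update pattern described above.
# A 1D heat-diffusion stencil stands in for the continuity/stress equations;
# real CFD/FEA solvers use far richer equations and 3D meshes, but the
# structure is the same: each chunk is updated using only its own state
# and information passed in from its immediate neighbors.

def step(temps, alpha=0.1):
    """Advance every interior chunk one time step using its two neighbors."""
    new = temps[:]                          # copy of the current state
    for i in range(1, len(temps) - 1):      # each interior chunk
        flux_in = temps[i - 1] - temps[i]   # information from the left neighbor
        flux_out = temps[i] - temps[i + 1]  # information to the right neighbor
        new[i] = temps[i] + alpha * (flux_in - flux_out)
    return new

# Example: a hot spot diffusing through a bar split into ten chunks.
state = [0.0] * 10
state[5] = 100.0
for _ in range(50):
    state = step(state)
print([round(t, 1) for t in state])
```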


Overall Perception

The method of analysis just discussed, FEA or CFD, is a tremendously useful tool for analyzing nearly any phenomenon engineers and scientists can think of. This same concept - cutting a given chunk of the universe, from the examples given all the way up to a model of the universe itself, into pieces small enough that each is simple to analyze on its own, with information about what is going on in one chunk passed to the next - will be used to analyze the brain. But before that can be done, we need to point out one aspect of consciousness with which we will be most interested. I'll call this feature "overall perception" for lack of a better term.

We perceive our surroundings as a unified whole. It is a singular perception, not a disconnected one. When looking at a painting we perceive not just one spot on the painting; we perceive the entire painting, a complex painting, something which may fill our entire field of vision. And this painting will be perceived across a large chunk of the brain, not just in one tiny spot. Our perception and what we experience when looking at this painting is assumed to require a large part of our brain. So overall perception refers to the singular, unified experience we have when looking at a painting, for example. The thought experiment below will highlight the difference between a computer and a conscious human brain. It is intended to show that this "overall perception" can not exist in a computer. In addition, it is intended to show that there are no physical laws which can presently accommodate this phenomenon.

Part 2 can be located here.
 
  • #2
The Importance of Connectivity to Strong AI (Part 2)

This is the second half of a post. The first part of the post can be found here, along with an introduction: Part 1


Conscious Computers and FEA, a Thought Experiment

Let's apply the FEA concept to the human brain and create a thought experiment to examine the phenomenon of consciousness. In this case, each chunk might be drawn around something as small as, or even smaller than, a single human cell, be it a neuron, blood cell, or any other cell. Each chunk might be analyzed by a separate desktop computer which is powerful enough to mimic exactly everything that is going on inside the cell. Chemical reactions, heat transfer, pressure waves, electrical waves, and all such phenomena occurring inside an actual human cell could, in principle, be modeled mathematically by a computer. The human cell would be connected to others in its immediate vicinity, and likewise, each desktop computer would be connected to other computers simulating other cells. One might imagine a single large three-dimensional mountain of desktop computers connected to each other, constantly communicating and passing information to one another just as each cell interfaces with its neighbor. Molecules passing from one cell to another would be simulated by the information being passed back and forth by the individual computers. If heat or energy of some sort is being passed between cells in the brain, the computers would also pass the equivalent information between each other. Similarly, for every interaction occurring between cells in the brain, the corresponding information would be passed along the wires connecting each desktop computer, such that, per strong AI, the simulation of the brain was perfectly created, and every emotion the brain experienced, the mountain of computers simulating this brain would also experience.

This mountain of computers would model a human brain in every detail, right down to the splitting of DNA and every chemical reaction. In addition, the mountain of computers would have a set of cameras for eyes, computers for nerves, and inputs for each of the senses. Such a massive computer, modeling everything in a human brain, might then be thought of as being able to give rise to consciousness. This massive undertaking is obviously not required, but it may aid in understanding how computers might be used to model human consciousness, and thus become conscious entities per strong AI. By duplicating each portion of the brain using a computer, and modeling every conceivable chemical reaction, every electrical impulse, and every pressure wave or gradient in the system, we should be able to exactly duplicate the human brain. The sum total of this mountain of computers should result in the duplication of the human brain, along with the emergent phenomenon of consciousness.
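
As a rough illustration of how one "desktop computer" in this mountain might be structured, here is a minimal Python sketch. The names and the single-number "state" are my own simplifications, not a claim about what simulating a real cell would require; the point is only that each node computes from its own state plus the messages arriving on its six neighbor links, then sends its result back out.

```python
# Minimal sketch of one node in the mountain. A real simulation of a cell
# would track chemistry, heat, pressure, and electrical activity; here the
# state is a single number and the update rule is trivial, but the message
# structure (six neighbor faces, one exchange per tick) is the same.

class CellNode:
    def __init__(self, state=0.0):
        self.state = state
        self.inbox = {}                 # messages received this tick, keyed by face

    def receive(self, face, value):
        self.inbox[face] = value        # face is one of 'up', 'down', 'north', ...

    def tick(self):
        # Stand-in for "every chemical reaction, heat transfer, electrical wave":
        # here, just relax toward the average of the incoming signals.
        if self.inbox:
            avg = sum(self.inbox.values()) / len(self.inbox)
            self.state += 0.5 * (avg - self.state)
        self.inbox = {}
        return self.state               # value sent out on all six faces next tick

# Example: one node receiving values on its six faces for a single tick.
node = CellNode()
for face, value in zip(['up', 'down', 'north', 'south', 'east', 'west'],
                       [1.0, 0.5, 0.0, 0.25, 0.75, 1.0]):
    node.receive(face, value)
print(node.tick())
```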

Using this model, one might finally be convinced that consciousness is computable. By duplicating every unconscious chemical reaction, every unconscious cell using unconscious switches, what emerges as consciousness does not reside inside any single desktop machine, nor any of the trillions of switches inside each computer chip. But as a whole, the mountain of computers would experience emotions when examining a painting by Leonardo da Vinci called "Mona Lisa". The famous painting, shown to the cameras and highways of computers that simulate the cord of nerves connecting eyes to brains, would send information directly into the heart of this computer mountain. As a result, we might witness a flash of communication throughout this mountain of computers that duplicates the flash of communication seen inside the human brain. And this mountain of computers would experience the same delight a human might experience when viewing this painting. From a speaker attached to the mountain of computers which simulated a mouth, we might hear expressions of fascination with the painting, or reflections on how beautifully the brushstrokes form the paint into thin swirling ridges. As the cameras provide the mountain with a view of the painting, the mountain might even share a memory in return. In a sense, the concept of strong AI would suggest this mountain of computers would ultimately feel what it is like to be human and conscious.

Perhaps we might not want to tell the mountain that we chose not to give it arms or legs. Perhaps we'd keep secret from the mountain the fact that we developed it so that we might experiment on it. Because when we built this mountain, we did something to it: we built recorder/transmitters into each of the connections between each of the thousands of billions of separate desktop machines. As the cameras cast their gaze onto the Mona Lisa, these devices busily recorded every interaction between their respective computers. Every output was documented, and every input was duly recorded. The second thing these insidious devices could do was to communicate directly with another of their kind, as opposed to simply allowing the intervening signals to pass unhindered. Say, for example, the computers each have six others with which they communicate, just like the six faces of a cube. For each computer element representing a minute chunk of the human brain, there would be six communicators sending and receiving signals to the computers with which it was connected. And at any time, the mad scientist (MC) in charge could stop the transmissions of any and all communicators. In addition, the MC could replace reality with recordings, so that instead of any given computer receiving a signal from its neighbor, it would actually receive a recorded signal.
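
To make the recorder/transmitter idea concrete, here is a minimal Python sketch (my own illustration, with hypothetical names). Each device sits on one connection; it can pass the live signal through while logging it, or it can ignore the live signal entirely and replay the log. Crucially, the downstream computer receives exactly the same sequence of values either way.

```python
# Minimal sketch of one recorder/transmitter sitting on a single connection.

class RecorderLink:
    def __init__(self):
        self.log = []           # every value that ever crossed this link
        self.replaying = False
        self._pos = 0

    def start_replay(self):
        self.replaying = True
        self._pos = 0

    def transmit(self, live_value):
        """Return what the downstream computer receives for this tick."""
        if self.replaying:
            value = self.log[self._pos]     # recorded signal from the first viewing
            self._pos += 1
        else:
            self.log.append(live_value)     # record while passing the live signal through
            value = live_value
        return value                        # indistinguishable to the receiver

# During the first viewing: pass live values through while recording them.
link = RecorderLink()
for v in [3, 1, 4, 1, 5]:
    link.transmit(v)

# Later: the neighbor is disconnected and the log is replayed instead.
link.start_replay()
replayed = [link.transmit(None) for _ in range(5)]
print(replayed)   # [3, 1, 4, 1, 5] -- the receiver cannot tell the difference
```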

What the mountain of computers didn't realize was that as it cast its camera gaze upon the famous painting, the MC recorded every interaction between each of the computers. Now the MC needed only to replay the signal from the cameras to duplicate the exact sequence of events inside the mountain of computers. Instead of actually placing the painting in front of the cameras, the camera output was simulated by playing back the recording taken from the first viewing of the famous painting. And when this happened, one might imagine that, assuming the computers functioned flawlessly and were reset to their original configurations (effectively wiping out any memory), the exact same flash of communication we saw during the original viewing might happen exactly the same way a second time. And out of the speaker, the mountain of computers might utter the same phrase, and tell us how much it enjoyed the beauty of the painting.

With this experiment firmly under his belt, the MC might then proceed to perform other acts of sabotage on this mountain of computers. Initially, he might disconnect a single computer from the mountain, rendering it incapable of communicating with the others. In its place, the recording taken from the first viewing would be fed to the neighboring computers. And sure enough, when the recording of the painting was played back to the camera outputs, the speaker would assure us that the painting was created by a brilliant artist who had earned a place in history. No difference would be found in the performance of any of the computers. The flash of communication between the computers in this mountain would be identical to the flash seen before the one computer was disconnected from the system.

The MC would then ruthlessly proceed to cut each connection, one at a time. And with each cut, the playback of the cameras would be initiated. And with each initiation, we might observe the same flash of communication throughout the mountain, followed by the speaker resonating with the same words for the beloved painting. And those of us on the outside looking in would continue to see the same flash, the same words, knowing the computer was slowly being dismantled. But the recordings kept for each of the computers would ensure it never wavered from its original, allegedly conscious state.

We might wonder then, did the computer ever lose consciousness or the sensation of overall perception? We would not be able to tell. In the end, the MC would dismantle every single computer. And not a single communication would exist between any of the computers. But the recordings would ensure that each computer knew nothing of the plot. Each computer would receive the same exact messages, in the same exact order, as it received the first time. And once each computer had been isolated, we would still hear the words from the speaker, telling us how beautiful the Mona Lisa's smile is, and how it reminded the mountain of a distant memory. Because the speaker, too, was connected to a computer which had a recording played into it.

Once all of the desktop computers were disconnected, we would wonder: when did the mountain computer lose consciousness and the sensation of overall perception? Certainly no one would be able to tell. The flash of communication throughout the mountain would remain identical with the disconnection of each computer. The pulse of signals would move along the cord of nerve-cell computers to the center of the mountain, just as it had before. And the words uttered by the speaker would never miss a beat, always telling us how gorgeous that painting was, and how it admired the artist.

Just in case the MC would ever doubt that consciousness and overall perception had indeed ceased to exist, each computer with its dedicated recorders/transmitters would carefully be attached to a rocket and ignited, scattering them across the universe. And in unison, the recordings would play back the image of the viewing that occurred so many years ago. The flash of communication would occur faster than the speed of light across the eons. And out of the speaker, which we steadfastly refused to relinquish, would come the words we fully expected to hear: the painting was worthy, and the beautiful brush strokes formed paint into swirling ridges. But information can not travel faster than light, and so one must assume that both consciousness and overall perception had ceased to exist at some point in time.


Where Is Consciousness Given the Computational Theory?

The computationalist would say that each machine harbors a small piece of the conscious experience. There is no single location for consciousness in the brain. A computationalist would insist that consciousness is smeared out over some volume of the brain, such that when dividing the brain up into small bits as was done using the method described above, each individual desktop machine carried part of that conscious experience. There is no single location where one might find all the matter and energy required to provide for consciousness. In the case of the mountain of desktop machines, not all of the machines would necessarily contribute to conscious experience, but an extremely large number of machines would.


Is It Gone?

One has to ask, "Did the mountain of computers lose consciousness? What exactly did it lose, and at what point did the mountain lose it? And could it tell?" This thought experiment should stir up numerous questions.

From the vantage point of any single desktop computer, the inputs and outputs are identical regardless of whether it is connected or not. So from the perspective of any of the individual computers, nothing has changed, and everything which was computed prior to disconnecting should exist after disconnecting. Each machine should still have the same portion of the conscious experience after being disconnected that it had when it was connected. In other words, since consciousness is smeared out across all these computers, each computer experiences an exceedingly small but finite amount of consciousness, and it's this small amount that, when added together across all the other computers, results in the phenomenon we call consciousness.

Now we might examine what is different. For the fully connected mountain of computers, consciousness is understood to exist exactly as we humans perceive it, and the mountain would experience this "overall perception" which integrated its entire conscious experience. The mountain of computers sees an entire picture of the Mona Lisa, and the experience it has is a connected, fully formed view of this painting. It perceives everything. For the disconnected machine, nothing is truly perceived. Each of the individual machines is unaware of what its neighbor is doing, and overall perception is impossible for it; we can show by physical law that this is true. To prove it, we only need to move all the machines farther apart in space. Since no information can travel faster than light, we know the machines can not communicate, and so the entire experience of overall perception can not exist if the machines are not connected.

We are forced to accept that for the disconnected mountain of machines, performing the exact same calculations as the connected mountain, there is no conscious experience. Or more to the point, the conscious experience which is spread out across millions of machines still exists; but since it is disconnected, there can be no unified experience. The entire picture of the Mona Lisa can not be perceived, and thus the experience of an entire picture of the Mona Lisa has gone away.

Since the unified experience has disappeared, one might ask why. The computationalist answer is that the machines were disconnected. But that does not resolve the issue, and herein lies the most important part of the thought experiment. Each machine received the same exact signal as it had previously. The signals were identical in every way. The problem is that none of the computers in that mountain of desktop machines can distinguish between the real signal, which comes from another computer, and a signal which comes from a recording. There is no difference in the signal. The signal it receives is identical.

Now you and I have this funny "overall perception" which is able to tell us that we are in one piece. Our perception is that all these chunks are acting together, so we not only get to see the Mona Lisa's eyeball, we actually see the entire painting. But the computational machine has no way of knowing whether it's connected or not. We could disconnect half of the machines and there would be no change in any of the remaining connected machines. Nor would there be any change in any of the disconnected machines. And why should anything change? A computer is a deterministic mechanism, incapable of doing anything random. So the mountain of computers just keeps ticking away, and if there is any "overall perception" inside the computer, it certainly can't tell that half of its chips just went offline and half of its perception dropped away, as long as it keeps getting the signals maintained by the recordings.

So the bottom line is, the mountain of computers has no way of discerning the difference between when it is completely connected and acting like a conscious person, and when it is completely disconnected and acting like a zombie, because it has no "overall perception" of anything.

After all this, we are led to the conclusion that only the "correct" signal will provide for consciousness. Only the original signal sent from one machine to another will suffice to create a unified experience of consciousness. If machine A sends a signal to machine B, but it doesn't get there and only a duplicate signal arrives, then we are forced to conclude that the unified experience of consciousness can not arise. We are forced to conclude that only the original signal is acceptable, and that a duplicate signal is not. And such a conclusion must be defended, but it can not be, because there is no known way for a mathematical computation to indicate where it came from. It is simply impossible. A number 1 is a number 1 regardless of whether it came from computer A, recorder B, or Alpha Centauri. If the number 1 is input into the calculation, there is no mechanism by which the computational device accepting the value can determine where it came from.

The computationalist's final reply might be that there is no way for the computational machine to know it is disconnected: the perception that is lost when the machine is cut in half, for example, is unable to perceive any change, and neither would a human brain. A dissected computer and a dissected human brain might not be able to perceive any difference so long as all the information coming from the missing half was duplicated and sent to it. But the problem now is that there is no way for a machine to distinguish whether it is connected or not. So it could be connected, or it might not be. Whatever phenomena exist prior to disconnecting must exist after disconnecting, since there is no method by which any of the parts can distinguish between being connected or not. So the machine can not experience this "overall perception" which conscious humans can. The machine certainly could be a zombie which acts in every way exactly identical to a human, but there is no way for a computational mechanism to distinguish between being connected and being disconnected, so all phenomena which exist prior to disconnecting must exist after disconnecting. But overall perception can not exist after disconnecting, so it must not exist prior to disconnecting. We must conclude that computational consciousness is dead.

Finally, since this is a dead-end road, we must accept that our original premise was incorrect. A computational device can not experience consciousness. Simply duplicating the brain using a sufficiently powerful computer is not enough.


What Is Wrong?

The astute reader might now realize there is a quandary, because the actual brain itself is made up of neurons, blood cells, water, and other cells and material, such that in principle, one should be able to duplicate the brain using computers. But as we've seen, this is impossible, because the phenomenon of consciousness disappears. At this point we must be led to the conclusion that our present understanding of physical laws is lacking a key feature which may not be strictly computational.
 
  • #3
The problem with your attack is very simple. You are assuming that you understand how a cell works. Suppose you are making a fatal error in your understanding of the net functions of a cell and are omitting a necessary factor! Essentially you are presuming you have the correct answer to AI and using it to prove you are right.

Have fun -- Dick
 
  • #4
Of course, the obvious challenge to this would be "What if you did the same thing with human neurons?" How could it be any different?

The problem with your argument is that there is no meaningful distinction between your uses of "connected" and "disconnected". If you watch the news live, you are connected to it, right? But what if you watch a tape of the news, or a copy of the tape? Is your connection to that information any different? It's removed in time, but there is a direct line from it to you. Similarly, there is no less of an informational link between the computers when you are using recorded data. It's just removed in time. You might ask "But then what time is the computer experiencing?" Well, of course, it's experiencing the first time it looked at the Mona Lisa.

This was an interesting, well-written paper, but I don't think the conclusion was valid. If the laws of physics as they currently exist are correct, at least at the appropriate scales, then the brain does evolve deterministically. And this means a computer could simulate it exactly. Why would proteins doing things create experience while silicon chips doing the same things create zombie behavior?


Just two quick points. First, you assumed that informational links are what give rise to experience. I agree with this, but many might not, and it is far from proven, or even widely accepted. Second, you might want to just put it all in one thread next time. If it doesn't all fit in the first post, put the second part as a reply. The way you've done it makes it difficult to organize a discussion if people decide to post in different threads.
 
  • #5
You are assuming that you understand how a cell works. Suppose you are making a fatal error in your understanding of the net functions of a cell and are omitting a necessary factor!
Dr. Dick, thanks for the feedback. Actually, the assumption is on the part of a strong AI advocate (ex: Dennett). The assumption they would make is that the physical laws we are already familiar with are sufficient, in principle, to create conscious experience (ie: the brain is a computational device, so it should be, in principle, possible to create consciousness on any other computational device). The use of a computer mimicking a cell is intended to simplify the strong AI concept for those who might otherwise disagree.

We don't need to actually know how a brain works. The thought experiment only uses a computer to model one possible solution, to help simplify and provide a visual for the benefit of the strong AI concept. One could equally take any allegedly conscious computer model and dissect it to any degree for this thought experiment, putting recorders/transmitters on each connection. Those connections could even be placed on each switch in the computer; thus each switch would require only three recorder/transmitters (an in, an out, and a control). And by isolating each switch like this, it would no longer matter what kind of programming a strong AI advocate used (ex: parallel, bottom up, etc…); it would still all be reduced to a finite number of switches, each of which could be isolated using recorder/transmitters.
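
To illustrate the switch-level version, here's a minimal Python sketch of my own, assuming an idealized switch (a NAND gate here) and hypothetical recorded input values. Fed the recordings from its taps, the isolated switch reproduces exactly the outputs it produced when it was wired into the larger machine, whatever that machine's architecture was.

```python
# One idealized switch, isolated from the rest of the machine and driven
# only by recordings of its inputs. Its replayed outputs are identical to
# the outputs it produced in situ, because its behavior depends on nothing
# but the values it receives.

def nand(a, b):
    return 0 if (a and b) else 1

recorded_in_a = [0, 1, 1, 0]   # hypothetical values taken from the 'in' recorder/transmitter
recorded_in_b = [1, 1, 0, 0]   # hypothetical values taken from the second input's recorder

replayed_out = [nand(a, b) for a, b in zip(recorded_in_a, recorded_in_b)]
print(replayed_out)            # identical to the output recorded in situ: [1, 0, 1, 1]
```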

It simply doesn't matter what computer architecture one assumes, only that we can, in principle, create a conscious experience from a computational machine.
 
  • #6
Of course, the obvious challenge to this would be "What if you did the same thing with human neurons?" How could it be any different?
Status, you are obviously "the astute reader" I referred to in the last paragraph. Thanks for the comments, and thanks for the compliment on the writing. Yes, there is an issue here. The first issue, though, is to understand the thought experiment, which I've found from other feedback is not easy. The experiment doesn't try to explain actual human consciousness in any way, only to point out one difference between the computational model and the actual brain. There are many similarities, but there is one difference I'm pointing at here: the sensation or ability to connect all the disparate bits of experience throughout the brain, almost as if there's a little homunculus sitting inside watching things. That is the experience I'm calling "overall perception" for lack of a better term. Perhaps this could be better written in the paper.

For whatever reason, we have this overall perception, and it seems quite obvious to us. But how can that come about in a machine? You asked about a meaningful distinction between a recording and an actual signal. If we have a recording, it is not an experience. This is true regardless of our being a strong AI advocate or not. But for a strong AI advocate only, if we have an actual signal being transmitted through the computer, it is an experience. Now note that the difference between an actual signal and a recording can not be discerned by any portion of a computer. The computer can not make the distinction between a recording it receives and an actual signal. We could dissect this mountain of computers in any way at all. In the case of no dissection, a strong AI advocate would say there is experience. In the case of partial dissection, but with all the recordings, I think we would all have to say there is an impact on the experience since only recordings are being used to feed the "conscious part" of the machine. We can't remove a very large chunk of brain or machine and still have conscious experience or this sense of "overall perception" that we had before. But the machine can't make any distinction between any of this like you or I might. The signals throughout the computer brain are identical, and it would continue to do whatever it might, just as a single cell might continue to do whatever it might if it were fed the proper proteins, ions, electrical impulses, etc.

Now the difficulty is, how do our brains discern this sense of connection? If the computer or brain was receiving nothing but recordings to each portion of its computer brain, everything that was there before should be there afterwards. No part of the computer and no part of the brain should, in principle, be able to distinguish between an actual signal/protein and a recorded signal/protein. But in the case of the brain, our overall perception can not exist if only recordings are provided. I'm not suggesting I know why that might be. I'm only pointing out that a computer couldn't, even in principle, distinguish that difference, but a human brain would lose this sense of overall perception.
 
  • #7
ThoughtExperiment said:
If we have a recording, it is not an experience.

Experience is a vague word. Subjective experience is what's important here, not the meaning you're using above.

In the case of partial dissection, but with all the recordings, I think we would all have to say there is an impact on the experience since only recordings are being used to feed the "conscious part" of the machine.

Why would we have to say that? If the wires between the computers were a light year long, but the appropriate simulations were still carried out, that would work, right? So what's the difference between that and recording the signals, waiting a year, and then playing them back? The fact is, we don't know the necessary conditions for consciousness, and you are assuming you do, requiring some vaguely defined "direct connection."

The signals throughout the computer brain are identical, and it would continue to do whatever it might, just as a single cell might continue to do whatever it might if it were fed the proper proteins, ions, electrical impulses, etc.

Now the difficulty is, how do our brains discern this sense of connection? If the computer or brain was receiving nothing but recordings to each portion of its computer brain, everything that was there before should be there afterwards. No part of the computer and no part of the brain should, in principle, be able to distinguish between an actual signal/protein and a recorded signal/protein. But in the case of the brain, our overall perception can not exist if only recordings are provided. I'm not suggesting I know why that might be. I'm only pointing out that a computer couldn't, even in principle, distinguish that difference, but a human brain would lose this sense of overall perception.

This part is especially confusing. What do you mean you don't know why it might be? This is a thought experiment. There are no strange results that need to be explained like in an actual experiment. You decide what you think the results would be, and you must back them up. It is absurd to say "the result of my thought experiment was that computers can't be conscious, but I have no idea why this might be."
 
  • #8
sorry. my mistake.
 
  • #9
Status, from your reply, it seems the gist of this thought experiment hasn't been understood. Perhaps a summary of the experiment and results would help. I'll try to make this response brief and concise.

- The MC begins disconnecting the mountain of computers and providing only recordings to each machine. Note that instead of each machine, I could easily suggest each switch in the machine, since each switch obviously can have no overall perception; but the insinuation is that no individual computer in the mountain is large and powerful enough to have overall perception. Each machine is only representing a single human cell, which has exceedingly little conscious experience, let alone an ability to take in an entire painting.

- Once all machines are disconnected, and only recordings are played back to each machine, there can no longer be an "overall perception" of anything. If you're not convinced of this, spread the machines apart in space, spread them billions of light years apart, then allow all recordings to be played back. Information can not travel faster than light. The ability to perceive a single painting of the Mona Lisa, for example, is gone. There can no longer be an overall perception of this painting. Per strong AI, the painting can't be perceived unless each machine is connected and the actual signals are transmitted between machines. An identical substitute signal is insufficient to provide the overall perception of the painting. The recording is not good enough to allow an overall perception of the painting regardless of how far apart in space these machines are. They can be far apart, or right next to each other; it doesn't matter. Per strong AI, unless they are actually connected and, in principle, able to perform a calculation, they can not create an overall perception of the painting. I see no escaping this conclusion. It would be true for a conscious computer, and it would be true for a human brain.

- Each machine may still maintain its original portion of the overall perception, but the overall perception (the connected perception) is gone. This is what a strong AI advocate (ex: Dennett) would tell you. Someone who disbelieves in computational consciousness would tell you a computer can't ever be conscious to begin with; it can never have an "overall perception" as defined by the essay (ex: Penrose).

- None of the machines have a mechanism to distinguish between being connected and disconnected. I see no escaping this conclusion. If someone were to disagree, they would need to prove that a computer could tell the difference between an electrical impulse coming from another computer and an electrical impulse coming from a recorder. By definition, there is no difference in the signal, so there can be no escaping that conclusion.

- Finally, and most importantly, the contention is that one can not suggest that a phenomenon exists (ie: overall perception or consciousness) which requires a machine to be connected when the machine has no ability to discern whether it is in fact connected or not (as described above). This is the most important part, and is the only point worth attacking. If someone can come up with a valid reason why a sense of overall perception can exist only because the machine is connected - and in principle, able to perform a computation - then I'd love to hear the answer. I'm not sure how one can argue this point; it seems quite impossible to me. One can't say there's some special signal that tells each machine or switch it is connected; that was ruled out as I've mentioned above. How can a sense of overall perception emerge from all these machines ONLY when they are ACTUALLY connected to each other and NOT to the recordings described above? What differentiates the connected machine from the disconnected machine in such a way that the connected machine senses that the information each tiny part of it receives is actually from its neighbor, which is experiencing part of the overall perception, and not from a recording? Certainly no one doing an FEA analysis looks for an overall phenomenon which requires each computational chunk to be actually connected in order for the phenomenon to emerge. So the only valid conclusion is that it wasn't there to begin with. The only valid conclusion is that the machine was not conscious. And if we assume that, we have to face the fact that we're missing something very important regarding our ability to model physical laws.

I sincerely value your feedback and hope this helps in understanding the thought experiment. Any additional feedback would be very helpful.
 
  • #10
I was wrong. What I said before about the connected/disconnected ambiguity, while important, was not the main problem with your argument. I originally had a lot more here continuing along that line, but I realized it was mostly irrelevant and took it out.

The most important point is that these two systems (before and after the direct links are cut) are different systems. There is no reason to require the system as a whole to have an identical experience in each case just because the individual computers can't distinguish the difference. It is entirely possible (though not nearly as obvious as you've assumed) that the disconnected system does not experience. But so what? The experience of a system is a property of the system as a whole, and changing the system as a whole can change the experience, regardless of what the parts "know".

Consciousness is not something that can be pinpointed as being here or there in a system. The parts don't have part of the consciousness. It is a phenomenon that is, at present, extremely poorly understood, but it seems (as David Chalmers has argued) to arise as a consequence of the overall data processing that a system is doing. It is likely that the actual mechanism doing the data processing is unimportant, only the processing itself. And, I cannot stress this enough, it is not a consequence of the current laws of physics. There need not be any separate "signals" telling a system whether it's conscious or not. There will need to be new natural laws relating some property of a physical system (eg, information processing, according to Chalmers) to what that system experiences. But these "psychophysical" laws will undoubtedly apply to the entire system, not just to one computer or neuron at a time.

You hinted at some potential psychophysical laws yourself. If the computers are "directly connected" (vague, but I'll use it for now), an overall perception arises. If they aren't, maybe each computer experiences something very simple, but there is no larger consciousness. But you find this unacceptable for some reason. If most of the parts of the system are behaving the same, you believe any properties of it must be identical. But not all the parts are the same, the connections being completely different in the two cases. A brain spread across light years is obviously very different from a connected one. So why can't some of the properties, like consciousness, be different as well?
 
  • #11
Welcome to Physics Forums, ThoughtExperiment! Thanks for your very thoughtful and well-written essay. I have merged the two threads containing the first and second parts of your argument into one single thread to make things easier on all of us.

I'm not prepared to say your argument is flawless, but on the surface it seems compelling. One critique I can offer is that your characterization of brain cells (or fundamental computing elements, or whatever) as containing little 'bits' of consciousness is not necessarily faithful to the Strong AI position. Strong AI holds that consciousness is an emergent property of certain computational systems. It is not committed to claiming that the computational elements of such a system have some sort of consciousness; it is only committed to claiming that certain systems as a whole are conscious.

An analogy may be helpful here. We can consider the macroscopic fluidity of liquid water to be an emergent property of a system constituted by H2O molecules. The fluidity of water is a property that arises as a result of the behavior of its constituent parts, but fluidity is a property that belongs only to the system as a whole, or at least, only to sufficiently large chunks of the system. It is not the case that the individual H2O molecules have tiny bits of fluidity that contribute to macroscopic fluidity.

Having said that, though, I don't think this objection substantively affects your overall argument.

On a side note:

ThoughtExperiment said:
If someone can come up with a valid reason why a sense of overall perception can exist only because the machine is connected - and in principal, able to perform a computation, then I'd love to hear the answer.

Actually, you might be surprised to learn that a notion somewhat like this is proposed by Gregg Rosenberg in his book A Place for Consciousness, which you might have noticed is being discussed here at PF on a subforum of the Metaphysics & Epistemology board. Rosenberg argues that our traditional notions of causality are deficient, and goes about constructing a new theory of causation. Part of his proposal includes what he calls "receptivity," which sounds analogous to what you mean by "connectivity." Very roughly, receptivity is a system's capacity to be affected. In Rosenberg's model, receptivity can bind disparate 'stuff' into a unified system that can 'feel' causal effects as a unified system. He proposes that the unified conscious manifold, what you call "overall perception," is unified precisely because certain neural elements in the brain share a common receptivity and thus form a unified causal system.

On this view, your grounds for falsifying Strong AI turn out to be invalid, because something like connectivity indeed is responsible for overall perception. (That's not to say that Rosenberg's thesis is a form of Strong AI-- anything but.) It's a further question whether or not the initially connected mountain of computers has the proper causal structure to share a common receptivity analogous to the type supposedly exhibited by our brains, and thus, to support the existence of overall perception; depending on how the view is fleshed out, one could come to the conclusion that it does, or that it doesn't.

In any case, you might be interested in looking into the book; it seems to have direct relevance for your thought experiment.
 
  • #12
StatusX said:
The most important point is that these two systems (before and after the direct links are cut) are different systems. There is no reason to require the system as a whole to have an identical experience in each case just because the individual computers can't distinguish the difference. It is entirely possible (though not nearly as obvious as you've assumed) that the disconnected system does not experience. But so what? The experience of a system is a property of the system as a whole, and changing the system as a whole can change the experience, regardless of what the parts "know".

To be fair, we have to keep in mind that ThoughtExperiment is critiquing Strong AI. It's true that the two systems, before and after disconnection, are physically distinct. But the relevant question here, per Strong AI, is whether or not they are computationally distinct. It seems to me that the two systems are actually computationally identical for any sense of 'computational' that is relevant to a Strong AI view.
 
  • #13
StatusX

It is entirely possible (though not nearly as obvious as you've assumed) that the disconnected system does not experience.
Note that I've couched the argument you refer to here as being per a computationalist who accepts strong AI, someone such as Dennett. Dennett assumes the brain is a computational device and any computational device of similar complexity and computational power is sufficient to mimic the brain.

(consciousness) … is not a consequence of the current laws of physics. …
There will need to be new natural laws relating some property of a physical system (eg, information processing, according to Chalmers) to what that system experiences. But these "psychophysical" laws will undoubtedly apply to the entire system, not just to one computer or neuron at a time.
My understanding is that a computationalist (ie: Dennett, not Chalmers) would disagree with this statement. Again, I may agree with this, but my opinion doesn't matter here. What matters is maintaining a single viewpoint to determine if that viewpoint is consistent. And if we prove the viewpoint is inconsistent, we can learn from that by locating the fallacy. The viewpoint we must maintain is the computationalist view, so adding psychophysical laws which may or may not be computational in nature is outside the boundary of a discussion which uses computationalism as a basis.

Chalmers is a different type, I'm not sure if he's a computationalist or not. I've been told he is, and he's written he feels the thought experiment about replacing brain cells one at a time with computer chips is valid, but when he starts talking about psychophysical laws, it would appear he's arguing against computationalism, so I'd prefer to leave Chalmers out. As a side issue, I'd be interested in knowing if Chalmers considers himself a computationalist. I've tried to figure that one out myself, but haven't seen convincing proof either way. But since he seems to be a rival of Dennett, I have to believe he's not.

The most important point is that these two systems (before and after the direct links are cut) are different systems.
I've had to rethink this, and came up with a minor change to the overall thought experiment which I'll have to elaborate on later. Here's a quick summary: the only difference between the two machines (connected, and disconnected playing recordings) is that the disconnected machine can not, in principle, calculate anything new. If the computer were still able (in principle) to communicate and calculate anything new that came along, then I have to believe we would agree that the machine should be able to become conscious per strong AI. And if one were then to disprove that simply having the ability, in principle, to calculate anything new allowed for conscious experience (per strong AI), then do you believe that would aid this argument?
 
  • #14
hypnagogue

Thanks for the welcome, the compliments on the writing, and for assembling this into a single post. Very much appreciated.

I'm not prepared to say your argument is flawless, but on the surface it seems compelling.
To be honest, I'd have to say the same thing. And that is the primary reason I've posted here, so I can work out the flaws as you've pointed out. I believe the thought experiment could be strengthened by coming up with a way in which the individual computers could, in principle, still communicate (when a change occurred, for example), as I've mentioned to StatusX. Thus, if we pull the painting of the Mona Lisa away, and show it a different picture, the machines would be able to change their computation. This still seems a bit like a parlor trick, but like any parlor trick, there must be a valid explanation, so I'll have to show you how that's done and see what explanation (from a computationalist viewpoint) there might be. I'll have to work on that revision to the thought experiment soon.

Regarding the analogy of consciousness being similar to water, certainly the analogy is clear, but then analogies are not explanations. The problem I'm having is with some fundamental assumptions of the computationalist argument as you've noted. Any pointers/links which might provide a better explanation of the strong AI assumptions would be appreciated.

Very roughly, receptivity is a system's capacity to be affected. In Rosenberg's model, receptivity can bind disparate 'stuff' into a unified system that can 'feel' causal effects as a unified system. He proposes that the unified conscious manifold, what you call "overall perception," is unified precisely because certain neural elements in the brain share a common receptivity and thus form a unified causal system.
Yes, that sounds very familiar. I'll try to reword: the system which is conscious must be unified in such a way that it can (in principle) react to any input from any location within the system. Is that correct? Perhaps you could expand a bit.

It's a further question whether or not the initially connected mountain of computers has the proper causal structure to share a common receptivity analogous to the type supposedly exhibited by our brains, and thus, to support the existence of overall perception; …
Presently, the disconnected mountain of computers has no ability, in principle, to recognize when the input from the cameras has changed. It has no ability to autonomously recognize a change in the picture which we present it. Now if the original mountain of computers had the ability to react to a change in the incoming signal or input (ie: the picture of the Mona Lisa), but we were still able to prove that no "overall perception" existed, would that help to strengthen the case? That, in a nutshell, is what I'd planned on revising.

But the relevant question here, per Strong AI, is whether or not they are computationally distinct. It seems to me that the two systems are actually computationally identical for any sense of 'computational' that is relevant to a Strong AI view.
I believe I understand what you're saying here, but not well enough to put words in your mouth. If you could elaborate a bit, that would be helpful.
 
  • #15
hypnagogue

One critique I can offer is that your characterization of brain cells (or fundamental computing elements, or whatever) as containing little 'bits' of consciousness is not necessarily faithful to the Strong AI position. Strong AI holds that consciousness is an emergent property of certain computational systems. It is not committed to claiming that the computational elements of such a system have some sort of consciousness; it is only committed to claiming that certain systems as a whole are conscious.

Going back through the essay, I see I've made a mistake which you've pointed out, and need to clarify. I believe there's a difference between the overall perception I've referred to and consciousness. Overall perception is a feature of consciousness, or a subset of it. If a computer can be shown to lack certain features of consciousness, then the contention is that a computer can not be conscious. The original intent of this thought experiment was to point out a single difference between a supposedly conscious computer as defined by strong AI and a conscious human. That difference is the ability to perceive something as a unified whole, and I've called this "overall perception" for lack of a better term. When I introduced this term, I mentioned only briefly that it was "one aspect of consciousness", and now, looking back at what I've written, I see I actually began using the word consciousness when I should have been using the term overall perception. The thought experiment was only intended to focus on this single feature. I see that I've written "consciousness is smeared out" as you mention, so I wonder if it wouldn't be more technically accurate to say "overall perception" is smeared out, which was the original intent. Thoughts and feedback on that would be especially helpful. This isn't to say I honestly understand how consciousness isn't smeared out, but at least if the essay stays consistent I'd not confuse the reader so.
 
  • #16
hypnagogue said:
To be fair, we have to keep in mind that ThoughtExperiment is critiquing Strong AI. It's true that the two systems, before and after disconnection, are physically distinct. But the relevant question here, per Strong AI, is whether or not they are computationally distinct. It seems to me that the two systems are actually computationally identical for any sense of 'computational' that is relevant to a Strong AI view.
ThoughtExperiment said:
Note that I've couched the argument you refer to here as being per a computationalist who accepts strong AI, someone such as Dennett. Dennett assumes the brain is a computational device and any computational device of similar complexity and computational power is sufficient to mimic the brain.

So let me get this straight. Strong AI advocates claim that a sufficiently complex computer could, in principle, mimic every property of the brain, from learning to creativity to consciousness. Is this correct? I'll assume it is for the rest of this post, but please correct me if I'm wrong. And you are arguing against Strong AI, claiming that at least one property of humans can't be mimicked. I'm not 100% sure which one you mean, and I address two possibilities below.

Also, hypnagogue, they are not computationally identical, because one responds to the environment and one doesn't.

My understanding is that a computationalist (i.e., Dennett, not Chalmers) would disagree with this statement. Again, I may agree with this, but my opinion doesn't matter here. What matters is maintaining a single viewpoint to determine whether that viewpoint is consistent. And if we prove the viewpoint is inconsistent, we can learn from that by locating the fallacy. The viewpoint we must maintain is the computationalist view, so adding psychophysical laws which may or may not be computational in nature is outside the boundary of a discussion which uses computationalism as a basis.

Chalmers is a different type; I'm not sure whether he's a computationalist or not. I've been told he is, and he's written that he feels the thought experiment about replacing brain cells one at a time with computer chips is valid, but when he starts talking about psychophysical laws, it would appear he's arguing against computationalism, so I'd prefer to leave Chalmers out. As a side issue, I'd be interested in knowing whether Chalmers considers himself a computationalist. I've tried to figure that one out myself, but haven't seen convincing proof either way. But since he seems to be a rival of Dennett, I have to believe he's not.

They would probably both agree that the first machine could be identical to the human brain in every important way. The difference is that Dennett would claim neither were "conscious", in the intrinsic subjective experience sense, while Chalmers would claim both were.

I noticed you intend to change "consciousness" to "overall perception" in your argument. So you'll have to be clear on what exactly you mean by "overall perception." Is this a behavioral ability, like the ability to report information about internal states? If so, you're saying there will be things a human can do that a computer never could. Or is this a first person, subjective ability, like experience? If so, you're saying that computers are zombies.

If it's the former, I disagree. Neurons follow the laws of physics like everything else, and a sufficiently sophisticated computer could simulate a physical system to arbitrarily high precision.
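Just to make "simulate a physical system to arbitrarily high precision" concrete, here's a minimal sketch; the leaky integrate-and-fire neuron model, the function name, and the numbers are purely illustrative assumptions on my part, not anything from the thread:

```python
# Illustrative sketch only: a leaky integrate-and-fire neuron integrated with
# forward Euler. Shrinking the time step dt increases the precision of the
# simulation of the underlying physical law dv/dt = -v/tau + I.

def simulate_lif(i_in, dt=1e-4, t_end=0.05, tau=0.02, v_thresh=1.0):
    """Count spikes of a leaky integrate-and-fire neuron driven by constant current."""
    v, spikes = 0.0, 0
    for _ in range(int(t_end / dt)):
        v += dt * (-v / tau + i_in)   # forward-Euler step of dv/dt = -v/tau + I
        if v >= v_thresh:
            spikes += 1
            v = 0.0                   # reset after a spike
    return spikes

print(simulate_lif(i_in=60.0))            # coarse time step
print(simulate_lif(i_in=60.0, dt=1e-5))   # finer time step: same physics, higher precision
```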

In your experiment, the behavior of the two systems is identical, which leads me to believe that you intend the latter, that computers are zombies. If so, and if you believe we aren't zombies, then you have to decide what is necessary for a system to be conscious. Do you think the current laws of physics suffice, and that in any world with the same laws of physics (regardless of any other different properties they might have), any beings physically identical to us would also be conscious? If you do, I can explain why that is almost certainly not the case. And if you don't, you have to accept that there are further psychophysical laws on top of the physical ones. You can't ignore these in this thought experiment because they're the deciding factor in which systems experience and which don't.

I've had to rethink this, and came up with a minor change to the overall thought experiment which I'll have to elaborate on later. Here's a quick summary: The only difference between the two machines (connected, and disconnected playing recordings) is that the disconnected machine cannot, in principle, calculate anything new. If the computer were still able (in principle) to communicate and calculate anything new that came along, then I have to believe we would agree that the machine should be able to become conscious per strong AI. And if one were then to disprove that simply having the ability, in principle, to calculate anything new allowed for conscious experience (per strong AI), then do you believe that would aid this argument?

Well, how could it have the ability to calculate anything new and still be meaningfully different than the first system? The only difference I see is that you've taken away that power in the second.
 
  • #17
StatusX said:
Also, hypnagogue, they are not computationally identical, because one responds to the environment and one doesn't.

They are not identical in terms of what they could potentially compute (edit: this only holds if we keep the inputs from the recording devices constant), but they are identical in terms of what they actually do compute, with respect to the Mona Lisa input. The same computational processes occur in both.

edit: To expand a bit, it seems very strange to argue that, for instance, part of the reason I am conscious upon viewing the Mona Lisa is because my brain could be performing different computations if I were looking at something else. What seems relevant is what my brain is actually doing, and likewise the strong AI position is only concerned with what the system in question is actually computing. So it does seem as if a strong AI view would be committed to claiming that the two systems do indeed have identical experiences, since they are performing identical computations.

edit 2: Notice also that the disconnected system could still perform any computation that the connected system could; it would just need to receive the appropriate inputs from the recording devices. Nothing about the actual computational/algorithmic structures is different; the only difference is the way the individual computing units receive their inputs.
 
  • #18
StatusX

So let me get this straight. Strong AI advocates claim that a sufficiently complex computer could, in principle, mimic every property of the brain, from learning to creativity to consciousness. Is this correct?
Yes, and I really think the most fundamental reason that such an assumption is made is exactly the reason you point to here:

Neurons follow the laws of physics like everything else, and a sufficiently sophisticated computer could simulate a physical system to arbitrarily high precision.
And of course, it's not just neurons; it's glia, blood cells, or whatever cells (made of matter) that we assume follow physical laws. That in itself is an extremely powerful argument. It's hard to argue that such macroscopic bits of matter are anything but deterministic in nature. Nevertheless, the intuitive appeal of such a simple explanation is lacking for the vast majority of people. In fact, once you get beyond those of us who are very familiar with the idea of physical laws as the model by which effect follows cause, I'd dare say most people can't comprehend why we're even arguing about computationalism. From most of my friends, I'd only receive cockeyed looks. So the intuitive appeal of the computational model is most certainly lacking, but why?

The difference is that Dennett would claim neither were "conscious", in the intrinsic subjective experience sense, while Chalmers would claim both were.
Can you expand on that? Why do you say Dennett would claim neither were conscious, and what do you mean by "the intrinsic subjective experience sense"?

I noticed you intend to change "consciousness" to "overall perception" in your argument. So you'll have to be clear on what exactly you mean by "overall perception."
Yes, that's what hypnagogue noticed. There are a few instances where I should have put "overall perception" in instead of consciousness. But having reread this a few more times now, I don't think they really make a significant impact on the essay; I just get a bit pissed at times when I miss something like that.

To expand just a bit on "overall perception", the image of the painting in this example must be perceived over a large chunk of brain. Chalmers might refer to this as an easy problem. The painting doesn't reside on a single neuron, but more correctly, the image of the painting is perceived or spread out over a large chunk of brain. That's easy enough to imagine. And it's an easy problem. The hard part of this particular problem with consciousness is that the painting is perceived as a unified whole. There's something that consciousness does that pulls all the bits of brain together and provides us with a single unified experience. Thus, the term "overall perception" refers to that feature of consciousness which integrates the chunk of brain calculating the image of the painting into a unified, single experience. When we look at a painting, we don't experience a huge number of tiny pixels, we experience a single painting. The pixels are easy, they're just tiny bits of brain providing that tiny spot on the image. The integrated whole is hard, it requires a connection or integration between the disparate bits of the image.

In your experiment, the behavior of the two systems is identical, which leads me to believe that you intend the latter, that computers are zombies. If so, and if you believe …
The conclusion I reached is that computers are zombies, that they don't have consciousness. But what I believe has nothing to do with the argument, and it should never have anything to do with it. This needs to be as unbiased an assessment as possible; I don't understand how anyone trying to discuss this in good faith could try to maintain a bias.

Well, how could it have the ability to calculate anything new and still be meaningfully different than the first system? The only difference I see is that you've taken away that power in the second.
There are a couple of different ways of creating the ability for the machine to respond to new stimulus. But before I explain them, I'll say this. I don't like these two modifications (below) to the thought experiment much, which is why I wrote the one I did. The one I wrote up is much more faithful to the control volume concept that FEA and CFD analysis uses. The control volume concept is an exceedingly powerful tool, but why I say that probably isn't immediately obvious to someone who doesn't use that concept on a daily basis. It's applied to all sorts of mechanistic phenomena (just as the basic premise of strong AI contends the brain is a mechanistic object) because you can take anything that's mechanistic and break it down into small chunks, and it doesn't matter where you make the boundary (in principle). The sum of all the small chunks, analyzed separately, is equal to the whole. Unfortunately, that concept doesn't seem to work for consciousness. We can't have a perception of anything if we break our brains up into tiny chunks. I think hypnagogue "gets it" so to speak, but it's not an easy thing to relate.
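To make the control volume idea concrete, here's a minimal sketch; the heat-conduction example and all the names and numbers are my own illustration, not part of the essay. The rod is solved once as a connected whole with every cell's history recorded, and then the left half is re-solved on its own using only the recorded interface value, reproducing the connected solution exactly:

```python
# Illustrative only: control-volume style decomposition of 1D heat conduction.
# A chunk analyzed separately, fed recorded boundary data, equals the whole.

N, steps, r = 8, 50, 0.25   # cells, time steps, r = alpha*dt/dx^2 (stable for r <= 0.5)

def march(T, left_bc, right_bc):
    """One explicit finite-difference step with prescribed ghost-cell values."""
    ext = [left_bc] + T + [right_bc]
    return [T[i] + r * (ext[i] - 2.0 * ext[i + 1] + ext[i + 2]) for i in range(len(T))]

# Connected run: hot left half, cold right half, insulated ends (mirrored ghosts).
T = [1.0] * (N // 2) + [0.0] * (N // 2)
history = [T]
for _ in range(steps):
    T = march(T, T[0], T[-1])
    history.append(T)

# Disconnected run of the left half alone, fed the *recorded* interface value.
left = history[0][:N // 2]
for n in range(steps):
    left = march(left, left[0], history[n][N // 2])
    assert left == history[n + 1][:N // 2]   # identical to the connected solution
```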

Perhaps the easiest change is to suggest that if, and only if, a new stimulus enters the system, the recorder/transmitters send the new information. And if the receivers never receive new information, they simply provide the recording as an input. In other words, we have a transmitter on one machine which compares the output to a recording. If it is identical to the recording, it doesn't send. If it is NOT identical to the recording, it DOES send. In this case, so long as the recording from the cameras is played back, there will never be a change in any of the outputs, the transmitters will never send out a signal, and the receivers will always provide the recording to the desktop machine. Actually, this isn't so far from the control volume concept, and now that I think about it, it would probably work well with it. It maintains the ability of the computer to react to new stimulus, except that at that point it becomes an open issue again whether or not consciousness exists. But it does stay true to the control volume concept I originally introduced. This could be expanded by saying we record a lifetime of experiences, so that the transmitters never, even after a lifetime, have to transmit anything, and the machine just lived a lifetime with no 'overall perception'.
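Here's a minimal sketch of that "transmit only on change" scheme; the function names and the toy numbers are illustrative assumptions, not part of the original essay:

```python
# Illustrative only: a transmitter that stays silent while the live output
# matches the recording, so the receivers keep feeding the recording downstream.

def transmitter(live_output, recorded_output):
    """Send only when the live output differs from the recording."""
    if live_output != recorded_output:
        return ("LIVE", live_output)
    return ("SILENT", None)

def receiver(message, recorded_input):
    """Pass on the transmitted value, or fall back on the recording."""
    kind, value = message
    return value if kind == "LIVE" else recorded_input

recorded = [3, 1, 4, 1, 5]   # recorded outputs of some upstream desktop unit
live     = [3, 1, 4, 1, 5]   # live outputs while the Mona Lisa recording is replayed

for live_out, rec_out in zip(live, recorded):
    downstream_input = receiver(transmitter(live_out, rec_out), rec_out)
    assert downstream_input == rec_out   # nothing new ever has to be transmitted
```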

Another way would be to suggest that there are two duplicate machines, machine 1 and machine 2. Machine 1 is a normal, fully connected machine, and machine 2 is identical in every way to machine 1, except that it only has its inputs connected to machine 1, and all the outputs in machine 2 are ignored and discarded. So machine 1 is a fully computational machine which can provide outputs to each of its various parts. But identical machine 2 only takes its inputs from machine 1. Both machines do exactly the same thing; they mirror each other perfectly. Machine 1 is the leader and machine 2 is a follower (so to speak). Machine 2 only has its inputs connected to machine 1; all of machine 2's outputs are discarded. But that doesn't matter to machine 2, since it has these outputs provided by machine 1 as inputs. That might be confusing because I tried to stuff the explanation into a single paragraph. Anyway…
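A toy sketch of the leader/follower arrangement may help; the unit function and the numbers are purely illustrative stand-ins for whatever one desktop unit actually computes:

```python
# Illustrative only: machine 2 has identical units, but they take their inputs
# from machine 1's outputs, and machine 2's own outputs go nowhere.

def unit(k, inputs):
    """Stand-in for whatever desktop unit k computes from its inputs."""
    return (sum(inputs) + k) % 97

m1_outputs = [1, 2, 3]   # outputs of machine 1's three units at some instant
for _ in range(5):
    m1_next = [unit(k, m1_outputs) for k in range(3)]   # machine 1: wired to itself
    m2_next = [unit(k, m1_outputs) for k in range(3)]   # machine 2: wired only to machine 1
    assert m1_next == m2_next                           # perfect mirror, outputs discarded
    m1_outputs = m1_next
```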

In each of these two cases, computational consciousness dries up. It doesn't exist. It disappears. Only an actual signal can be used. The logic for these two additional thought experiments is probably difficult to grasp, and would take some significant writing to explain. Perhaps over a beer… Regardless, it's obvious we can create a machine which has various traits, all of which are zombie in nature. But that depends on how you define "the machine" or "the computational device", which in turn depends on how you define the way it's connected. It all seems to depend on how a computer is connected, and that's the real thrust of this thought experiment.


If so, and if you believe we aren't zombies, then you have to decide what is necessary for a system to be conscious. Do you think the current laws of physics suffice, and that in any world with the same laws of physics (regardless of any other different properties they might have), any beings physically identical to us would also be conscious?
As I've stressed a number of times already, what I "believe" doesn't matter at all; I'm not wedded to this argument. The only thing that matters is what can be proven (hopefully) through logical deduction. So far, strong AI seems to say that a computer which models physical laws as we presently understand them (i.e., we assume effect follows cause) can model any phenomenon in this universe. But strong AI covers one other item: the possibility of a truly random quantum event. The claim would be that truly random events, if modeled, have nothing more important to say about consciousness. For example: assume one or more of the switches in the mountain of computers is in fact actually random. One or more of the switches somehow uses quantum phenomena, such that at any time it could simply switch with no cause. To account for this in the existing thought experiment, all you need to do is record what position the random switch went to as it observed the painting, and replace it with a non-random switch. Done. Continue the thought experiment without pause. You simply record which direction the random switch went, replace it with a deterministic one, then continue, because at that point you have a recording of a perception, a recording of an allegedly conscious experience. Every time you play back the recording, the perfectly deterministic machine which used to have a random switch, but now does not, simply goes through exactly the same calculations it did during the recording. The experience and perception it has are identical in each case (per strong AI).
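Here's a small sketch of that record-and-replace move; the names and the toy calculation are my own illustration, not anything from the essay:

```python
# Illustrative only: run once with an unpredictable switch and log its positions,
# then replay with a deterministic switch that just reads the log. The
# deterministic machine retraces exactly the same calculation.
import random

def run(inputs, switch_source):
    state, trace = 0, []
    for x in inputs:
        s = switch_source()          # 0 or 1; random on the first run
        trace.append(s)
        state = state * 2 + (x ^ s)  # stand-in for the machine's calculation
    return state, trace

inputs = [0, 1, 1, 0]
result_random, recorded_switches = run(inputs, lambda: random.randint(0, 1))

replay = iter(recorded_switches)
result_replay, _ = run(inputs, lambda: next(replay))

assert result_random == result_replay   # identical calculation on every playback
```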
 
  • #19
ThoughtExperiment said:
To expand just a bit on "overall perception", the image of the painting in this example must be perceived over a large chunk of brain. Chalmers might refer to this as an easy problem. The painting doesn't reside on a single neuron, but more correctly, the image of the painting is perceived or spread out over a large chunk of brain. That's easy enough to imagine. And it's an easy problem. The hard part of this particular problem with consciousness is that the painting is perceived as a unified whole. There's something that consciousness does that pulls all the bits of brain together and provides us with a single unified experience. Thus, the term "overall perception" refers to that feature of consciousness which integrates the chunk of brain calculating the image of the painting into a unified, single experience. When we look at a painting, we don't experience a huge number of tiny pixels, we experience a single painting. The pixels are easy, they're just tiny bits of brain providing that tiny spot on the image. The integrated whole is hard, it requires a connection or integration between the disparate bits of the image.

Correct me if I'm wrong, but this paragraph seems to summarize the core of your analysis. Connectivity is meant to capture this property of bringing all the little pieces into one unified whole. You point out that simple classical mechanics doesn't recognize the whole as a different entity from the sum of the parts (see the thread "Does plurality exist" on the General Philosophy forum). Have you looked into complex systems, and particularly into autocatalytic systems? They seem to do something like what you are looking for. Notice also that a mob has a "personality" that is not a linear combination or superposition of the personalities of the people composing it.
 
  • #20
selfAdjoint

The paragraph you pointed out is only an attempt to define something. It is not a summary of the thought experiment. I tried to summarize the thought experiment in the 9th post down on page 1. See if that helps at all.

Regarding autocatalytic systems, if we modeled such a system on a computer by breaking it into small chunks (i.e., control volumes) and analyzing each part separately, do you think the phenomenon would lose some emergent property? That is, do you think we would fail to duplicate all the emergent properties because we were only using recorded data on each of the control volumes, as described in the thought experiment, but that when we used actual data the property would re-establish itself, as strong AI might predict for a conscious computer?
 
  • #21
hypnagogue

Thanks for the additional thoughts. I think what you've suggested is valid. You seem to understand the thought experiment very well. Do you have any suggestions as to how it might be refined to make it more understandable?

Here's another idea which might be of interest. Instead of the mountain of computers, let's put a mathematician in place of each desktop machine. And instead of the connections as proposed with recorder/transmitters, let's put each mathematician at his desk in some office building, sitting in front of his computer connected to the wireless internet. Strong AI would have to say that such a system is conscious, because the mathematicians are doing the calculating and the wireless internet is providing the connection. So how does a mathematician know whether the information he receives from the internet is recorded information or not? It is, in a sense, always recorded and retransmitted. But we could move all our mathematicians apart in space and only provide recordings, and the mathematicians would have no clue that they were only working with recordings. Again, there can't be any overall perception, so strong AI can't predict conscious experience. This simply helps to show that the individual computers have no ability to discern the difference between the recording and the actual message. And a computationalist would have to ask, "if the mathematicians are only receiving a recording, why isn't that sufficient to provide for overall perception?"

Again, the only retort may be that, in principle, the system must be connected and able to react. But as you've noted, that doesn't change any of the computations in the system. I'm not sure if that's helpful or not, but it was another idea I thought I'd throw out. It seems this thought experiment is rather subtle.
 
  • #22
Ok, I think I understand the experiment now, but let me know if I still don't. Strong AI says that the computations a system performs are what gives rise to all the important "mental" properties, regardless of what is doing the computing. You have thought of a system that is computationally identical to the brain, but cannot possibly experience, thus disproving Strong AI.

Now, if this is really all there is to Strong AI, and if "computational", as it appears in the definition, really only concerns what actual computers are doing, not the connections between them, then I would agree that Strong AI is false. If this is all you're trying to prove, then I'd say you've probably done that.

But if you mean to say that computers can't be conscious in general, I would have to disagree. Again, it is unimportant what a single computer in the mountain or a single neuron "knows" about the authenticity of its input. The important property is data flow over the entire system. In a connected brain or computer, data is flowing over the whole system, and in a disconnected, recorded-input computer, it (probably) isn't. (If it were, it would experience, and I'm still not sure whether it would or not.)

I think you are confusing two possible meanings of "mechanistic": the sense of being analyzable by breaking into parts, and the sense of being non-biological. I believe non-biological systems can be conscious, but whether they actually are conscious or not is not something that can be discerned by analyzing control volumes. It is a property of the entire system. Like I said, it is almost certainly not a physical property, in the usual sense, and it does not yield to a reductive explanation.

If anything, you've shown that the usual methods of divide and conquer used in traditional engineering/physics will not work for consciousness.
 
  • #23
ThoughtExperiment said:
The paragraph you pointed out is only an attempt to define something. It is not a summary of the thought experiment. I tried to summarize the thought experiment in the 9th post down on page 1. See if that helps at all.

Regarding autocatalytic systems, if we modeled such a system on a computer by breaking it into small chunks (i.e., control volumes) and analyzing each part separately, do you think the phenomenon would lose some emergent property? That is, do you think we would fail to duplicate all the emergent properties because we were only using recorded data on each of the control volumes, as described in the thought experiment, but that when we used actual data the property would re-establish itself, as strong AI might predict for a conscious computer?

I have now read your post #9, and I think I grasp your point. I notice a difficulty in some of your discussion, where you distinguish between "connected" and "unconnected but exchanging information". You use a lightspeed argument to show that the second state is different from the first, but of course the lightspeed limit is always with us; we are only ever seeing the past lightcone of the screen in front of us.

Your question about emulating autocatalytic systems is interesting to me. In order to recover ALL of the emergent properties you would have to model ALL of the interactions and ALL of their properties. This would place strong restrictions on what kind of system you would need to do the emulations; note that things like the rates of the interactions are critically important. So my belief is that yes, if you could meet all those conditions, you would indeed recover the emergent properties. Your machines would function like microstates in statistical dynamics. The microstates are simple too, but emergent properties like temperature and entropy do, unnh, emerge from their interaction.
 
  • #24
StatusX

Thanks, that's actually a very interesting response. My understanding of strong AI is that it refers only to "a computational device" being conscious, which I generally consider to be a device that manipulates symbols. One might interpret that to mean manipulating matter and energy instead; perhaps there's a difference. I'm not sure the strong AI concept distinguishes between the two.

I believe you're trying to disprove that a signal coming from a machine and a signal coming from a recorder are identical when you say, "The important property is data flow over the entire system. In a connected brain or computer, data is flowing over the whole system, and in a disconnected, recorded-input computer, … ".

Symbols are identical. And electron A is generally thought to be indistinguishable from electron B. So regardless of whether the signal (electron) going to switch X comes from the output of the conscious computer's switch Y, or from the output of a recorder's switch Z, one might suggest, as I have earlier, that those signals are identical and are not distinct in any way. I believe you're suggesting they are distinct. I believe you're suggesting the signal going into switch X must come from switch Y and not recorder Z, regardless of what that particular signal does to switch X. Note that either signal might make switch X change state, for example. Whatever the signal does, regardless of whether it came from switch Y or recorder Z, it has the same effect on switch X, whether that signal goes to the control wire of the switch and changes its position, or goes from inlet to outlet. The effect on the switch is the same. Look at that again… You are switch X and you get a signal, so you change state. But if the signal came from recorder Z, the system you are interacting with is not conscious. If the signal came from the "Conscious System Switch Y", then the system you are interacting with IS conscious (or COULD BE conscious). So why would the signal need to come from switch Y in order for consciousness to occur? Why can't the signal come from recorder Z? Both of these signals are the same symbol and are handled by the computer in the same way.
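Here's a toy illustration of that point; the class and the names are hypothetical, chosen only to mirror the switch X / switch Y / recorder Z labels above:

```python
# Illustrative only: a switch sees nothing but the symbol it receives, never its
# provenance, so a signal from switch Y and the identical symbol from recorder Z
# have exactly the same effect on it.

class SwitchX:
    def __init__(self):
        self.state = 0

    def receive(self, signal):
        self.state ^= signal   # the effect depends only on the symbol itself
        return self.state

x_fed_by_Y, x_fed_by_Z = SwitchX(), SwitchX()
signal_from_switch_Y = 1     # output of the allegedly conscious machine
signal_from_recorder_Z = 1   # the identical symbol, played back from a recording

assert x_fed_by_Y.receive(signal_from_switch_Y) == x_fed_by_Z.receive(signal_from_recorder_Z)
# Nothing in switch X's state or behavior encodes where the signal came from.
```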

I believe one can make a physical distinction, and I believe you may have a valid point. What do you think that physical distinction is, and why should that particular physical distinction make any difference to the computer?
 
  • #25
selfAdjoint

Hi selfAdjoint. The light speed comments you're referring to are not actually needed; they're only there to emphasize that there is no interaction between the various parts of the computer, and thus no overall perception. Even without mentioning the light speed part, one should be able to conclude there is no "overall perception" (as I've been calling it) for the disconnected machine. And it is this overall perception phenomenon which I'm pointing at: if this is an emergent property of computationalism, then why is it gone when we use recorded data for the entire machine and break the system up as I've described?

Just a funny thought, but per strong AI, even though one may have to admit that this phenomenon is gone, I think one could argue that once the machine was reconnected, the allegedly sentient machine would know nothing of the experience. All of its memories would have been recorded exactly as they would have been had it remained connected. This computer would insist it had never lost "overall perception" because it could remember everything perfectly clearly.

I don't know anything about autocatalytic systems, but if I wanted to determine an emergent property, I'm assuming one could (in principle) model it using the method mentioned, i.e., the finite element/CFD analysis in which the material is broken into small control volumes and analyzed separately. So from that aspect, autocatalytic systems are not unlike other emergent phenomena. That is, they exhibit no phenomena which simply disappear when the system is broken into small chunks and recorded inputs are used.
 
  • #26
ThoughtExperiment: What you've referred to here as "overall perception" is called "unity" in philosophy. http://www.ecs.soton.ac.uk/~harnad/Papers/Py104/searle.prob.html:

Unity.
It is important to recognize that in non-pathological forms of consciousness we never just have, for example, a pain in the elbow, a feeling of warmth, or an experience of seeing something red, but we have them all occurring simultaneously as part of one unified conscious experience. Kant called this feature `the transcendental unity of apperception'. Recently, in neurobiology it has been called `the binding problem'. There are at least two aspects to this unity that require special mention. First, at any given instant all of our experiences are unified into a single conscious field. Second, the organization of our consciousness extends over more than simple instants. So, for example, if I begin speaking a sentence, I have to maintain in some sense at least an iconic memory of the beginning of the sentence so that I know what I am saying by the time I get to the end of the sentence.

Chalmers describes it this way here:

At any given time, a subject has a multiplicity of conscious experiences. A subject might simultaneously have visual experiences of a red book and a green tree, auditory experiences of birds singing, bodily sensations of a faint hunger and a sharp pain in the shoulder, the emotional experience of a certain melancholy, while having a stream of conscious thoughts about the nature of reality. These experiences are distinct from each other: a subject could experience the red book without the singing birds, and could experience the singing birds without the red book. But at the same time, the experiences seem to be tied together in a deep way. They seem to be unified, by being aspects of a single encompassing state of consciousness.

The problem this feature of consciousness creates is called "the binding problem", which is to say: how do all these different states of experience get bound together into a single unified whole? Regardless of whether you take the computationalist viewpoint or the non-computationalist viewpoint, there is a problem here which philosophers and scientists have tried to attack head on. They've tried to wrestle it to the ground through brute force and sheer logic and have essentially gotten nowhere. Dennett seems to refuse to admit it even exists! From the second reference above (Chalmers) comes this:

Some (e.g. Dennett 1992) hold more strongly that consciousness is often or usually disunified, and that much of the apparent unity of consciousness is an illusion.

I can't find fault with your thought experiment yet; it seems very convincing. Perhaps rewrite it using the premise that unity exists, and let others discuss whether it actually does or doesn't. But assuming unity exists, this is how you would go about disproving computationalism.

Thanks for the post, one of the most interesting posts I've seen here yet.
 
  • #27
My knowledge is limited here, so I can't guarantee that the following thoughts of mine make any sense:

I don't see why the human brain has to be mimicked by connecting billions of computers, instead of just using one computer with the memory and processing capacity required to run a consciousness simulation. What you have suggested seems to be a billion consciousnesses connected together, like "The Borg" from Star Trek.

Even if you use your model, is there not a difference in where the instruction code for consciousness is located? In your model, the instructions/programming are spread out among a billion computers, and each computer is supposed to be analogous to a neuron; but in the brain, the instructions/programming are not located in the neurons but rather in the patterns of synapses themselves. In your model, the wires connecting the computers don't contain code, they just transmit code.

Also, why would your computer model have to mimic every function of a neuron, when some functions are not related to the coding for consciousness, such as the function of producing energy and carrying out metabolism, something that would be analogous to the power supply of your computer model?

You mentioned that if some of the computers in your model were disconnected and fed artificial data through recorders, your model would not be aware of it, while a human would. My understanding is that a human would also not know of this in many situations. Consider dreaming: there are no genuine inputs; it's recordings of memory bits already stored in our brains that are playing out, yet we think it's reality while we are asleep. Also, consider the Matrix scenario.

Finally, my understanding tells me that consciousness is not really an emergent property and that FEA would actually work. Consciousness is the result of different brain modules carrying out various parts of perception. Consider a dynamic computer program; the running program is not an emergent property, but rather the summation of the various modules the program is divided into. Each module does a certain thing, and those modules can be broken down even further, which is evident to anyone who has studied any programming language. So I would think that if we can identify all the different modules of consciousness in human brains, we can see how it's additive. For example, a certain module would code for the perception of sight, another for sound, another for touch, and so forth. If any one of these modules died, consciousness would still exist but be less dynamic, a simpler form of consciousness. For example, consider the consciousness of a cockroach, a lizard, a worm, and so forth. There are different complexities of consciousness, thus contradicting the argument that consciousness is an emergent property.
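As a toy sketch of what I mean by additive modules (the module names and the scene are just illustrative, not a claim about how the brain is actually organized):

```python
# Illustrative only: perception as the sum of whatever modules are present, so
# removing a module leaves a reduced but still functioning whole.

modules = {
    "sight": lambda scene: f"sees {scene['color']}",
    "sound": lambda scene: f"hears {scene['noise']}",
    "touch": lambda scene: f"feels {scene['texture']}",
}

def perceive(scene, active):
    """Combine the reports of whichever modules are still present."""
    return [modules[name](scene) for name in active if name in modules]

scene = {"color": "red", "noise": "birdsong", "texture": "rough"}
print(perceive(scene, ["sight", "sound", "touch"]))   # full, more dynamic perception
print(perceive(scene, ["sight", "touch"]))            # "sound" module lost: simpler, still works
```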

Even a currently existing computer program can be considered conscious, but a simple form of one: the program seems to be aware of enough input to allow it to function as it's programmed to do.

One interesting point though: the human brain is said to carry out great parallel processing, so perhaps for human level consciousness to exist in a computer, there will have to be a great deal of parallel processing as well.

Interesting article: http://www.transhumanist.com/volume1/moravec.htm
 

What is the definition of connectivity in the context of strong AI?

Connectivity refers to the ability of an AI system to connect and communicate with various sources of information, both internal and external. This includes connecting with other AI systems, databases, and sensors to gather and analyze data.

Why is connectivity important for strong AI?

Connectivity is crucial for strong AI because it allows the system to access a wide range of information and resources, enabling it to make more informed decisions and perform complex tasks. Without connectivity, the AI system would be limited in its capabilities and would not be able to adapt to changing situations.

What are some examples of connectivity in strong AI?

Some examples of connectivity in strong AI include the ability to connect with other AI systems for collaboration, access to large databases for information retrieval, and connection with various sensors for input and feedback. Additionally, strong AI may also have the ability to connect with human users for communication and learning purposes.

How does connectivity impact the development of strong AI?

Connectivity plays a crucial role in the development of strong AI as it enables the system to learn and improve over time. By connecting with various sources of information and feedback, the AI system can continuously update and refine its algorithms, leading to more advanced and intelligent capabilities.

What are the potential drawbacks of high connectivity in strong AI?

One potential drawback of high connectivity in strong AI is the risk of data overload. With access to vast amounts of information, the AI system may struggle to prioritize and filter out irrelevant data, leading to inaccurate or biased decision-making. Additionally, high connectivity may also increase the risk of cyber attacks and security breaches, making it crucial to implement robust security measures.
