Royce's Theorem: Intelligence Is Not Computational

AI Thread Summary
Roger Penrose's "Shadows of the Mind" argues that consciousness cannot be computational, prompting the introduction of Royce's Theorem, which posits that purely computational processes lack true intelligence. The discussion emphasizes that any intelligence present in a computational system originates from its human creators, as machines merely execute programmed algorithms without genuine awareness or creativity. The analogy of dominoes illustrates that increasing complexity in programming does not equate to consciousness, as the fundamental nature of these processes remains mechanical and reactive. Critics argue that while machines can simulate certain tasks, they cannot replicate the nuanced creativity and awareness inherent to human thought. Ultimately, the conversation highlights skepticism about the potential for artificial intelligence to achieve true consciousness or intelligence independent of human input.
Royce
I am reading Roger Penrose's Shadows of the Mind (just started). He opens the book by arguing again that consciousness cannot be computational even in principle. This got me thinking about AI, etc., and I came up with the following, which I call Royce's Theorem. :wink:

If a process is completely computational, the process is not intelligent and does not contain or take intelligence to perform or complete.

Whatever intelligence is involved is that of the inventor, designer and/or programmer of the computational algorithm. Once that is done, the rest is simply data processing according to the rules of the algorithm: not an intelligent process itself but one of duplication, repetition and rote. Such actions do not involve actual thinking, intelligence, awareness or consciousness. We used to call it plug and chug: once the formula is known or given, plug in the data and hit the go button.

This can and does have many implications, but I would rather toss this up and have it batted around a while before getting into any of them.
 
But there was an intelligence behind the program. I mean the programmer!
And I think it is very important
 
somy said:
But there was an intelligence behind the program. I mean the programmer! And I think it is very important

Yes, it is, because no one has been able to show any other way for a program to avoid turning repetitive, or into dumb randomness, except when human consciousness steps in and makes new adjustments.

It reminds me of someone lining up dominoes in the most creative ways possible, like in those big contests they have in Japan, where when the dominoes start falling they flip things on, send things flying, turn stuff, etc. Now what if someone creating that domino pattern hoped maybe it would learn to do it for itself? Every time he fails to produce a self-creating domino pattern, he believes it is because the pattern wasn't complicated enough. So he makes it more and more complicated. Yet the problem isn't solved no matter how complex he makes it, because at the root of the process is the same thing -- falling dominoes -- but falling dominoes isn't creativity, which is what organizes the dominoes.

So I think Royce is correct. To me it is the same problem with hoping computer programming will spawn consciousness. Programming is falling black or white dominoes, but if that isn't what consciousness is, then it's hopeless for programmers to keep trying more complexity to create consciousness.
 
Les Sleeth said:
It reminds me of someone lining up dominoes in the most creative ways possible, like in those big contests they have in Japan, where when the dominoes start falling they flip things on, send things flying, turn stuff, etc. Now what if someone creating that domino pattern hoped maybe it would learn to do it for itself? Every time he fails to produce a self-creating domino pattern, he believes it is because the pattern wasn't complicated enough. So he makes it more and more complicated. Yet the problem isn't solved no matter how complex he makes it, because at the root of the process is the same thing -- falling dominoes -- but falling dominoes isn't creativity, which is what organizes the dominoes.

But he could, conceivably, create a domino pattern that could add numbers, or prove number-theoretic theorems, or express a belief, assuming dominoes are capable of simulating Turing machines. A paraplegic couldn't organize a domino pattern either, but that doesn't mean he isn't creative.
 
How many parts of a human do you have to replace before it becomes nonhuman? And if nonhuman means nonhuman consciousness, you can add or subtract parts.

If we replaced nearly all its parts, could we then conclude that consciousness is not physical?

The ghost in the machine just might decide to try out the new toys we build.
 
Les and Rader, you both obviously see where this can lead. It makes AI, Artificial Intelligence, impossible to be anything more than artificial in principle as well as in fact.
And again, if we apply Chalmers's argument to intelligence, then when does it become intelligent or stop being intelligent?
If the theorem is always and completely true, then Penrose's position that consciousness is not computable in principle is true. Therefore intelligence and/or consciousness cannot be emergent phenomena.

But, is it always completely true? I obviously think it is.
 
StatusX said:
But he could, conceivably, create a domino pattern that could add numbers, or prove number-theoretic theorems, or express a belief, assuming dominoes are capable of simulating Turing machines.

That's correct, he could. But Royce allowed that it is possible to create programs that compute (and I'm pretty sure he'd agree programs can mindlessly express things). The issue is if computing power can get "intelligent," which I assumed he meant could creatively think up and design. If we downgrade intelligence to simply computing power, then sure, a computer is intelligent.


StatusX said:
A paraplegic couldn't organize a domino pattern either, but that doesn't mean he isn't creative.

Yes, but that seems off point. He couldn't physically organize the pattern, but he could still design it. As of now a computer can only design what human consciousness programs it to do.
 
Les Sleeth said:
That's correct, he could. But Royce allowed that it is possible to create programs that compute (and I'm pretty sure he'd agree programs can mindlessly express things). The issue is if computing power can get "intelligent," which I assumed he meant could creatively think up and design. If we downgrade intelligence to simply computing power, then sure, a computer is intelligent.

In my opinion, creativity occurs when randomly created ideas are analyzed to see if they are interesting. Probably most of this filtering occurs subconsciously, so sometimes we will get an inspiration and have no idea where it came from. I see no reason whatsoever why a mechanical computer couldn't replicate this process, and if it could, it would be indistinguishable from human creativity, even if you don't agree with my explanation of it.
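A minimal sketch of that generate-and-filter idea (the "interestingness" score below is an arbitrary, hypothetical stand-in; this only illustrates the process described above, not a claim about how minds work):

Code:
import random

random.seed(0)

def random_idea(length=6):
    # Generate a candidate "idea" as a random string of notes.
    return "".join(random.choice("ABCDEFG") for _ in range(length))

def interestingness(idea):
    # Hypothetical filter: reward variety, penalize immediate repetition.
    variety = len(set(idea))
    repeats = sum(1 for a, b in zip(idea, idea[1:]) if a == b)
    return variety - 2 * repeats

# Generate many random candidates, then keep the one the filter likes best.
candidates = [random_idea() for _ in range(1000)]
best = max(candidates, key=interestingness)
print(best, interestingness(best))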

Yes, but that seems off point. He couldn't physically organize the pattern, but he could still design it. As of now a computer can only design what human consciousness programs it to do.

My point was that the only reason a domino pattern couldn't construct another domino pattern is because it isn't physically equipped to do so. It could conceivably "think up" new arrangements by forming certain complex patterns that a computer could interpret and then create using a robotic arm.


Also, regarding the topic, what you have isn't really a theorem but is instead just your definition of intelligence, which I happen to disagree with. Intelligence is not well defined, and unless you have another definition of intelligence in mind that you are referring to here, this theorem has no real content besides defining a term.
 
StatusX said:
In my opinion, creativity occurs when randomly created ideas are analyzed to see if they are interesting. Probably most of this filtering occurs subconsciously, so sometimes we will get an inspiration and have no idea where it came from.

Okay, I'll go for that.


StatusX said:
I see no reason whatsoever why a mechanical computer couldn't replicate this process, and if it could, it would be indistinguishable from human creativity, even if you don't agree with my explanation of it.

I can't figure out why mechanists think a computer will recognize "interesting" or anything else that is determined by quality. How is a computer going to get inspired?

I was watching a special on Google and their founders last night. One of the criticisms by other analysts was that Google searches tend to prioritize what's most negative. The founders said they haven't been able to figure out how to stop their program from doing that (they have plenty of help too, and are hiring 25 more people per month). I know I have posted over 1400 times here, and over 500 times at the old PF. Yet when I do a Google search of my name, up pops the one humorous post I did about the biology of MASTURBATION ( :redface: that's how big it seems to me). Before that I posted in a thread where someone asked if it was okay to view porno at work, and that one popped up!

One reason I see why a "mechanical computer couldn't replicate" creativity/intelligence is because so far computers and their programs have shown themselves to be utterly stupid. It is only people's prior belief in physicalism, or believing consciousness is the result of neuronal complexity, that makes them claim computers can be conscious. There is not the slightest bit of objective evidence yet to indicate that.
 
  • #10
StatusX said:
In my opinion, creativity occurs when randomly created ideas are analyzed to see if they are interesting. Probably most of this filtering occurs subconsciously, so sometimes we will get an inspiration and have no idea where it came from. I see no reason whatsoever why a mechanical computer couldn't replicate this process, and if it could, it would be indistinguishable from human creativity, even if you don't agree with my explanation of it.
Ideas are a product of thinking. Weighing and connecting thoughts or ideas requires that the thinker is aware that it is thinking and is aware of its thoughts. Random thought is still thinking, but thinking randomly can only result in randomness. Combining thoughts in unique ways requires intelligence, including understanding, awareness of results, applicability and ramifications, as well as purpose and intent. A machine, even if composed of dominoes, can only do what it is programmed or designed to do. A domino has only two choices: to stand or to fall. If the next domino falls into it, it has no choice but to fall. This is not random, nor thinking, but unwillful reacting to its environment in the only way that it can. It is the same with a computer or any other machine to date. It will always be so for any machine as we understand the term, machine.


My point was that the only reason a domino pattern couldn't construct another domino pattern is because it isn't physically equipped to do so. It could conceivably "think up" new arrangements by forming certain complex patterns that a computer could interpret and then create using a robotic arm.

A domino or a billion dominoes can only stand or fall in response to influences outside of themselves or their control. A domino does not decide to fall or not to fall. It cannot act randomly or willfully. It can only do what it was designed to do.

Any and all intelligence in any such machine lies with the designer and/or programmer not the machine. If it is a machine it can only do what it is designed to do and nothing else or it is malfunctioning.
 
  • #11
Les Sleeth said:
One reason I see why a "mechanical computer couldn't replicate" creativity/intelligence is because so far computers and their programs have shown themselves to be utterly stupid. It is only people's prior belief in physicalism, or believing consciousness is the result of neuronal complexity, that makes them claim computers can be conscious. There is not the slightest bit of objective evidence yet to indicate that.
You just reminded me of a certain caveat my networking instructor used to issue to his students:

"Never anthropomorphize your computers. They don't like it."

:biggrin:
 
  • #12
Math Is Hard said:
You just reminded me of a certain caveat my networking instructor used to issue to his students:

"Never anthropomorphize your computers. They don't like it."

:biggrin:

:smile: :smile: :smile: Great observation! We should send this to Chalmers to help him with his next debate with Dennett.
 
  • #13
This is in response to Les Sleeth and Royce's posts, since you both argued essentially the same point that mechanical processes can never behave in a way we could reasonably call intelligent.

I hate to beat a dead horse, but look at the brain objectively. It is a collection of atoms that interacts with the environment in a way we can all agree is "intelligent". Does it experience thought? Maybe. We believe our own brain does, but we know nothing of other's brains. Just looking at the data, we have a collection of atoms which strictly obey physical laws and which produce complex, intelligent behavior. How is this not an intelligent machine?

I just can't understand any of the objections to this argument. Is the brain made of something besides atoms? How could that be if it arose from atoms (namely, those of the food we eat, or if you want to go back farther, star material) ? Do the laws of physics not apply in the brain? Again, there is no conceivable reason to suppose this is true.
 
  • #14
An organism has desires that fuel and give direction to that "machine," i.e., the brain, for starters. Maybe if you could wire in a desire to survive, or an aim, it would be different (or at least to appearances).
 
  • #15
If we want to go with the computer analogy then we must realize that the computer consists of switches only. The switches are hard-wired and can only turn from off to on or from on to off in response to their inputs. The fact that a switch is on or off has no meaning other than what the programmer assigns it. Take for example a light, an LED, on your computer panel. It is either illuminated or not in response to a switch at its input. We have no idea what the light being on means unless it is labeled or has a position that has previously been assigned a significance. It could mean that the power switch is on, it could mean that our hard drive is being accessed, or it could mean that there is a floppy or CD present in the drive.
The point is that the light is not intelligent, nor does it convey any intelligence or information unless it has been previously assigned a value by the designer and we know what the designer assigned it to mean. For all we know the light being off may be a good thing or bad, may indicate proper functioning or improper functioning, or, as is the case with many indicators on the simulator on which I work, may mean nothing at all because the lights don't apply to this model. It is still a light and it still switches on and off and it is either red or green or yellow; some are even labeled, but in this particular case they have no meaning and convey no information because none has been assigned to them.
Intelligence is designed into a machine. The machine can only respond as designed to the data fed into it, and the ones and zeroes, or the light or dark places, only have meaning and convey information if we all know what the designer intended them to mean. Someone who does not know, understand and read the English language as used in the United States could make no sense of what any of us have written here.
It would convey no information and no intelligence. In order to understand the results of a process we have to know and understand beforehand the form in which the information is encoded.
Where then does any intelligence lie within the machine?
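A small sketch of this point about assigned meaning (the register and status names below are hypothetical, made up only for illustration): the same bit pattern reports entirely different things depending on which legend the designer assigns to it.

Code:
# The same four indicator lights, lit or dark, with no intrinsic meaning.
status_register = 0b0101

# One designer's assignment of meanings to bit positions (hypothetical).
legend_a = {0: "power on", 1: "disk activity", 2: "media present", 3: "error"}
# A different assignment for the very same hardware (also hypothetical).
legend_b = {0: "lamp test", 1: "spare (unused)", 2: "overheat", 3: "network link"}

def read_lights(register, legend):
    # Report which assigned meanings are "on" for the lit bits.
    return [name for bit, name in legend.items() if register & (1 << bit)]

print(read_lights(status_register, legend_a))  # ['power on', 'media present']
print(read_lights(status_register, legend_b))  # ['lamp test', 'overheat']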
 
  • #16
StatusX said:
Not to beat a dead horse, but look at the brain objectively. It is a collection of atoms that interacts with the environment in a way we all can agree is intelligent. Does it experience thought? Maybe. We know our own brain does, but we know nothing of other's brains.

I don't think you are suggesting looking at the brain objectively. Rather, you are saying look objectively at what's observable by our senses. With our senses it is true we can only observe atoms, conformity to physical laws, complex behavior . . . If you sit still and experience your own consciousness however, it won't be with the senses. Yet no one wants to include that information in the model.

But there is another factor to consider, and that is the thus-far observed behavior of matter which is not "alive." It doesn't exhibit anything creative enough that should make us believe it can organize itself for billions of years to produce "livingness" and consciousness. It mostly just disorganizes. So why does anyone have faith that physicalness can produce life and consciousness?


StatusX said:
Looking at the data, we have a collection of atoms which strictly obey physical laws and which produce complex, intelligent behavior. How is this not an intelligent machine?

If you were Einstein, and you delivered your paper on relativity via the radio, what would you think if the world's population credited the radio with being a genius? Just because we see intelligence associated with the brain doesn't mean the brain is creating it.
 
  • #17
Les Sleeth said:
I don't think you are suggesting looking at the brain objectively. Rather, you are saying look objectively at what's observable by our senses. With our senses it is true we can only observe atoms, conformity to physical laws, complex behavior . . . If you sit still and experience your own consciousness however, it won't be with the senses. Yet no one wants to include that information in the model.

I edited my post, and I don't know if you saw it, but to reiterate: the brain is made of star material. Is this special quality you suggest, which we can't observe, present in stars too, or does it arise at some point in the formation of a brain? Like I said, there is no reason to believe in such a quality.

Keep in mind, a physicalist will claim your beliefs are physically caused, and do not necessarily say anything about the world except that something about it causes you to have those beliefs. That the cause is what you believe it to be (that the beliefs result from your interaction with this higher realm) is not a priori true. The same goes for me. I recognize that it feels as if I am making decisions for myself and that there is some essence of what it means to be me, call it a soul if you want. But I also recognize that if physicalism is true, it could conceivably explain why I have these feelings even if they aren't true.

But there is another factor to consider, and that is the thus-far observed behavior of matter which is not "alive." It doesn't exhibit anything creative enough that should make us believe it can organize itself for billions of years to produce "livingness" and consciousness. It mostly just disorganizes. So why does anyone have faith that physicalness can produce life and consciousness?

What about industrial assembly lines? These take pieces of metal or plastic and produce very highly organized structures, like cars and dishwashers. They are created by people, but once built, can keep working with very little maintenance. True, if you left them alone for a very long time, there would eventually be a problem that would cause the line to stop functioning. But surely better software would fix this problem, or at least increase its lifetime, which is perfectly acceptable since even people eventually break down and stop functioning. These don't contain an élan vital, do they?
 
  • #18
StatusX said:
I hate to beat a dead horse, but look at the brain objectively. It is a collection of atoms that interacts with the environment in a way we can all agree is "intelligent".

No we can't, because we cannot agree on or know where intelligence, awareness and consciousness lie. If the brain is made up of cells that switch on and off only in response to stimuli, inputs, then how many cells and how many connections does it take to become intelligent? If we then remove or kill one cell or one connection, does it become no longer intelligent or conscious?
That is Chalmers's argument.

Does it experience thought? Maybe. We believe our own brain does, but we know nothing of other's brains.

Tell me how a thinking machine can be aware first that it is thinking and second be aware of what it is thinking, if it is what is doing the thinking in the first place. There has to be something more that is aware of what the brain machine is doing and assigns its results value, intelligence and meaning. Is it another machine made up of another type of cells or connections? Where is it, and what is it about it that makes it aware and conscious?

Just looking at the data, we have a collection of atoms which strictly obey physical laws and which produce complex, intelligent behavior. How is this not an intelligent machine?

How do atoms create anything? What is it in atoms that makes them intelligent when they are in the brain of an intelligent human being but as dumb as dirt when they are in dirt?

I just can't understand any of the objections to this argument. Is the brain made of something besides atoms? How could that be if it arose from atoms (namely, those of the food we eat, or if you want to go back farther, star material) ? Do the laws of physics not apply in the brain? Again, there is no conceivable reason to suppose this is true.

Now you have finally addressed the real question that we, and hundreds of millions of others, have been trying to answer for thousands of years. I guess our atoms are just not intelligent enough yet. :rolleyes:
 
  • #19
StatusX said:
What about industrial assembly lines? These take pieces of metal or plastic and produce very highly organized structures, like cars and dishwashers. They are created by people, but once built, can keep working with very little maintenance. True, if you left them alone for a very long time, there would eventually be a problem that would cause the line to stop functioning. But surely better software would fix this problem, or at least increase its lifetime, which is perfectly acceptable since even people eventually break down and stop functioning. These don't contain an élan vital, do they?

When does the assembly line become bored with making cars and decide to make washing machines, or better yet decide to take a vacation and go fishing? When does this intelligent machine contemplate its designer and maker? When does it decide or intend or become aware of anything?

"Computers are the dumbest most aggravating machines ever made by man because they insist on doing exactly what they are told to do instead of being reasonable and doing what we want them to do."
 
  • #20
Royce said:
No we can't, because we cannot agree on or know where intelligence, awareness and consciousness lie. If the brain is made up of cells that switch on and off only in response to stimuli, inputs, then how many cells and how many connections does it take to become intelligent? If we then remove or kill one cell or one connection, does it become no longer intelligent or conscious?
That is Chalmers's argument.

What is? Chalmers says that experience arises in any information processing system, or something similar to that. Cells don't necessarily behave in a simple on/off manner, but neither do all possible components of machines. Atoms follow the laws of physics, and if one arrangement of atoms can give rise to intelligent behavior, then there is no reason to assume many others cannot. Biological matter is no different than silicon in that they are made of the same stuff and follow the same rules.

Tell me how a thinking machine can be aware first that it is thinking and second be aware of what it is thinking, if it is what is doing the thinking in the first place. There has to be something more that is aware of what the brain machine is doing and assigns its results value, intelligence and meaning. Is it another machine made up of another type of cells or connections? Where is it, and what is it about it that makes it aware and conscious?

We don't yet know what causes consciousness. Chalmers believes it's everywhere, while Dennett believes it's nowhere. What they agree on is that a brain and an extremely accurate model of a brain made of another material should act the same. They go further and claim they should also have the same state of consciousness. This is controversial, but it is less controversial to just assume they will both behave intelligently, since by premise, they behave identically.

How do atoms create anything? What is it in atoms that makes them intelligent when they are in the brain of an intelligent human being but as dumb as dirt when they are in dirt?

Complex systems can give rise to very complicated behavior. For example, there are only a few symbols, axioms and rules in boolean algebra. Because of this, it is pretty simple, and has the property that any well-formed statement can be proved true or false. Add some more axioms, symbols, and rules and you have whole numbers, which is so complicated that simple theorems have gone unproven for hundreds of years and some statements have the property that they can never be proven true or false. Even though there is not a huge difference in the specification of these two formal systems, the little extra complexity in the rules of whole numbers gives rise to enormously more complex behavior. It is not the symbols themselves that "contain" the complexity of the system, but together with the axioms and rules, complicated behavior can emerge. Likewise, atoms aren't intelligent, but arrangements of atoms can be.
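A minimal sketch of this contrast (the example formula and the Goldbach-style check below are illustrative choices, not from the post): a propositional statement can be settled by brute-force enumeration of its finitely many truth assignments, whereas a statement about all whole numbers cannot be settled by any finite enumeration.

Code:
from itertools import product

def is_tautology(formula, num_vars):
    # Exhaustively test a boolean formula over every truth assignment.
    return all(formula(*values) for values in product([False, True], repeat=num_vars))

# Decidable: "(p and q) implies p" checked over all 4 assignments.
print(is_tautology(lambda p, q: (not (p and q)) or p, 2))  # True

def goldbach_holds_up_to(n):
    # A claim quantifying over all whole numbers ("every even number > 2 is a
    # sum of two primes") can only be spot-checked over a finite range here;
    # no loop like the one above can run through infinitely many cases.
    primes = [x for x in range(2, n) if all(x % d for d in range(2, int(x**0.5) + 1))]
    return all(any((e - p) in primes for p in primes) for e in range(4, n, 2))

print(goldbach_holds_up_to(200))  # True for this finite range only; proves nothing in general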

When does the assembly line become bored with making cars and decide to make washing machines, or better yet decide to take a vacation and go fishing? When does this intelligent machine contemplate its designer and maker? When does it decide or intend or become aware of anything?

It doesn't, but are these the necessary conditions for life or intelligence? Is it inconceivable that a computer be programmed to do those things? Or when it does, will you just redefine intelligence to be what computers cannot do?
 
  • #21
StatusX said:
What about industrial assembly lines? These take pieces of metal or plastic and produce very highly organized structures, like cars and dishwashers. They are created by people, but once built, can keep working with very little maintenance. True, if you left them alone for a very long time, there would eventually be a problem that would cause the line to stop functioning. But surely better software would fix this problem, or at least increase its lifetime, which is perfectly acceptable since even people eventually break down and stop functioning. These don't contain an élan vital, do they?

How did those assembly lines get organized? You admit it is people, but then go on like that is some small matter. Yes, once organized and given a power source, assembly lines can indeed behave quite intelligently, but it was human intelligence that made it happen! If you create the software, guess what's behind that? Human consciousness. To me it is incredible that mechanists keep undervaluing what they themselves, as consciousness, contribute to any mechanical endeavor beyond what might happen if a bunch of rocks fall on a board and accidentally create a bit of lever action.
 
  • #22
Dear Royce,
I just want to know: what new thing does this theorem say? I mean, what do you want to say with this theorem?
We all know what we mean by the intelligence behind a program (especially the ability of control that a program contains).
So I really want to know what you are going to say with this theorem.
Anything new, I mean.
Thanks in advance.
Somy.
 
  • #23
StatusX said:
What is? Chalmers says that experience arises in any information processing system, or something similar to that. Cells don't necessarily behave in simple on/off manner, but neither do all possible components of machines. Atoms follow the laws of physics, and if one arrangement of atoms can give rise to intelligent behavior, then there is no reason to assume many other ones can. Biological matter is no different than silicon in that they are made of the same stuff and follow the same rules.

One of Chalmers's arguments against consciousness emerging from increasing complexity is: where is the cutoff point? If we start with an intelligent, conscious machine and remove tiny bits of silicon or whatever, would we reach a point where we remove one more bit and no longer have an intelligent, conscious machine? This is what I was referring to.

We don't yet know what causes consciousness. Chalmers believes it's everywhere, while Dennett believes it's nowhere. What they agree on is that a brain and an extremely accurate model of a brain made of another material should act the same. They go further and claim they should also have the same state of consciousness. This is controversial, but it is less controversial to just assume they will both behave intelligently, since by premise, they behave identically.
Complex systems can give rise to very complicated behavior. For example, there are only a few symbols, axioms and rules in boolean algebra. Because of this, it is pretty simple, and has the property that any well-formed statement can be proved true or false. Add some more axioms, symbols, and rules and you have whole numbers, which is so complicated that simple theorems have gone unproven for hundreds of years and some statements have the property that they can never be proven true or false. Even though there is not a huge difference in the specification of these two formal systems, the little extra complexity in the rules of whole numbers gives rise to enormously more complex behavior. It is not the symbols themselves that "contain" the complexity of the system, but together with the axioms and rules, complicated behavior can emerge. Likewise, atoms aren't intelligent, but arrangements of atoms can be.

This is exactly what Penrose is arguing against. His premise is that consciousness cannot be created by any purely computational process, no matter how complicated or sophisticated, in principle as well as in fact. The best that can be accomplished is a simulation of consciousness that is not itself conscious. I am extending this argument to include intelligence, as IMO intelligence is a necessary component of consciousness.
 
  • #24
somy said:
Dear Royce,
I just want to know: what new thing does this theorem say? I mean, what do you want to say with this theorem?
We all know what we mean by the intelligence behind a program (especially the ability of control that a program contains).
So I really want to know what you are going to say with this theorem.
Anything new, I mean.
Thanks in advance.
Somy.

original post by Royce said:
If a process is completely computational, the process is not intelligent and does not contain or take intelligence to perform or complete.

Whatever intelligence is involved is that of the inventor, designer and/or programmer of the computational algorithm. Once that is done, the rest is simply data processing according to the rules of the algorithm: not an intelligent process itself but one of duplication, repetition and rote. Such actions do not involve actual thinking, intelligence, awareness or consciousness. We used to call it plug and chug: once the formula is known or given, plug in the data and hit the go button.

What I am saying, or what the theorem says, in support of and in response to Penrose's argument that consciousness cannot be duplicated or created by purely computational methods, is that this also applies to intelligence.

I think that it is much easier to prove that intelligence is not involved in any purely computational process, as intelligence is IMO a necessary component of awareness and consciousness. The intelligence is involved in the design of the processor, the invention of the algorithm and the programming of the process into the processor. The computational process itself neither contains nor uses any intelligence, and therefore intelligence does not and cannot arise out of a purely computational process no matter how complicated or sophisticated it may be or become. This also implies that Artificial Intelligence cannot be true creative intelligence but only an artificial simulation of it.
 
  • #25
Royce said:
One of Chalmers's arguments against consciousness emerging from increasing complexity is: where is the cutoff point? If we start with an intelligent, conscious machine and remove tiny bits of silicon or whatever, would we reach a point where we remove one more bit and no longer have an intelligent, conscious machine? This is what I was referring to.

I think you are misreading Chalmers. His argument is that experience will arise from any information-processing system because you cannot mark a cutoff point. He even argues that a thermometer, the simplest form of information processor, might possess a very primitive capacity to experience.
 
  • #26
loseyourname said:
I think you are misreading Chalmers. His argument is that experience will arise from any information-processing system because you cannot mark a cutoff point. He even argues that a thermometer, the simplest form of information processor, might possess a very primitive capacity to experience.

Maybe so. We may be referring to different books or parts of the same book. I may be referring to a reference to his work that I read somewhere else. I don't know or remember, but it seemed to me that that was part of the easy problem of consciousness.

While a thermometer reacts to its environment, unless one believes that the universe and everything in it is conscious, I don't think that a thermometer experiences anything, as it is not conscious. Consciousness, including awareness and understanding, is required to experience anything in the real world. Subjective experience is of course something different, but I don't think that you or he are talking about that.

The whole point of this thread is to argue for the belief that there is something more to consciousness than purely computational processes of the brain that may some day be duplicated by a machine. That there is something about consciousness that is non-computational and that is now beyond the present theories and knowledge of science. Science will have to expand its present area of inquiry and its methods of investigation to address consciousness. I do not say that it is an unknowable thing, forever beyond science. Only that at present, with the tools at hand, consciousness cannot be explained, accounted for or duplicated by a machine even in principle.
 
  • #27
Royce said:
How do atoms create anything? What is it in atoms that makes them intelligent when they are in the brain of an intelligent human being but as dumb as dirt when they are in dirt?
You should read this new book by Gregg Rosenberg, A Place for Consciousness, that we are to discuss shortly. There are some interesting things in there with a new pitch; until then the rabbit will have to stay in its hat.
 
  • #28
Les Sleeth said:
How did those assembly lines get organized? You admit it is people, but then go on like that is some small matter. Yes, once organized and given a power source, assembly lines can indeed behave quite intelligently, but it was human intelligence that made it happen! If you create the software, guess what's behind that? Human consciousness. To me it is incredible that mechanists keep undervaluing what they themselves, as consciousness, contribute to any mechanical endeavor beyond what might happen if a bunch of rocks fall on a board and accidentally create a bit of lever action.

There are two forces we know of that are capable of creating organized systems and behavior: intelligence and evolution. Although the methods are very different, they are capable of achieving many common results: systems capable of seeing, hearing, moving, flying, swimming, digging, etc. Since we have already reached these goals, no one thinks there is anything mystical about them.

Evolution has proved capable of achieving intelligence, but the only intelligence that humans have managed to create so far has been the somewhat stupid computers. Now you may argue that the creation of real intelligence is beyond human ability, which I disagree with, but this does not mean it's non-physical. Evolution, a purely physical process, could do it. Nothing else that evolution has done, short of creating the first instance of life itself, is doubted to be purely physical. So why not intelligence, and to go a little farther, why not consciousness?
 
  • #29
Royce said:
While a thermometer reacts to its environment, unless one believes that the universe and everything in it is conscious, I don't think that a thermometer experiences anything, as it is not conscious. Consciousness, including awareness and understanding, is required to experience anything in the real world. Subjective experience is of course something different, but I don't think that you or he are talking about that.

Well if this is your opinion, Chalmers' is almost the polar opposite. And in fact, subjective experience is exactly what he's talking about. As you've differentiated this from consciousness, which is not done when talking about the hard problem, you'll have to be more specific about what exactly you mean by consciousness.

The whole point of this thread is to argue for the belief that there is something more to consciousness than purely computational processes of the brain that may some day be duplicated by a machine. That there is something about consciousness that is non-computational and that is now beyond the present theories and knowledge of science. Science will have to expand its present area of inquiry and its methods of investigation to address consciousness. I do not say that it is an unknowable thing, forever beyond science. Only that at present, with the tools at hand, consciousness cannot be explained, accounted for or duplicated by a machine even in principle.

That it isn't currently explained is indisputable. That it can't be duplicated by a machine, even in principle, is much less certain, and presumes you know something about consciousness that you have already claimed isn't currently known.
 
  • #30
THE PROBLEM WITH THE CONSCIOUS INTELLIGENCE IN HUMANS

There are several causal and relational problems with conscious intelligence:

1) ENERGY

Conscious intelligence needs Energy to work. It needs to be powered up by externally induced energy. The fundamental DESIGN and ENGINEERING problem confronting conscious intelligence is that it has no mechanism or process for recycling energy. This external dependency for energy shortchanges the hard-core epiphenomenalists.

2) LOGICALLY HARDWIRED

It needs to be hardwired with clear logical pathways to work in the first place. If you cut the key wires from the body, everything intelligible in a human system collapses. You can surgically destroy intelligence in humans by cutting the right wires. René Descartes saw this problem and attempted to counter it by introducing a dubious notion of spirits assisting the Non-physical Soul.

3) INDEPENDENCE

The notion of independence of conscious intelligence from the body is undermined by problems (1) and (2). Its dependence on the body and on carefully laid out, hardwired logical pathways to function undermines the very notion of independence. The claim that conscious intelligence pre-exists and post-exists the material body begs the question as to what it wants with the body in the first place. Worse still, the notion of 'Absolute Independence' is completely out of the question because this would imply something superior coming into a relation with something inferior (the mortal material body). In fact, absolute independence can also mean something that is so structurally and functionally perfect that it is incapable of relating to any other thing. Either way, the notion is fundamentally problematic.

4) MOTION

Conscious intelligence relies on motion. Grind an intelligent system to a halt and out goes conscious intelligence! Stop the world in its tracks, and conscious intelligence evaporates with it!

All these problems together seem to suggest that conscious intelligence comes into existence at the same time as the body and perhaps also perishes with the body. Hence, the notion of continuity of life after death seems very slim. Perhaps we are better off pursuing the notions of 'Life Continuity' and 'Immortality' scientifically, whereby we seek scientific methods for creating an immortal human being in a physical, material sense. At least, there is nothing which logically rules this option out. Although this is heavily contested in many quarters, it cannot by any device of logic be completely ruled out as a viable option.
 
  • #31
Good points Philocrat!
 
  • #32
StatusX said:
Well if this is your opinion, Chalmers' is almost the polar opposite. And in fact, subjective experience is exactly what he's talking about. As you've differentiated this from consciousness, which is not done when talking about the hard problem, you'll have to be more specific about what exactly you mean by consciousness.

I knew that introducing subjective experience at this point was a mistake.
I am trying simply to take it one step at a time; but that doesn't seem to be where this thread is going. IMHO all experience is ultimately subjective, as the actual experience happens in the mind/brain. I differentiated it because there is a difference between the physical, awake experience of life and the purely mental experience of dreams, imagination and meditation. As far as Chalmers is concerned, I liked his argument of subtracting the smallest possible part from a conscious entity until it becomes no longer conscious to determine the point or minimum complexity needed for consciousness. This shows, according to him, the absurdity of consciousness being nothing more than a matter of complexity. Unless of course I have completely misread that passage.

That it isn't currently explained is indisputable. That it can't be duplicated by a machine, even in principle, is much less certain, and presumes you know something about consciousness that you have already claimed isn't currently known.

Well, that, at least in part, is what the theorem is about. As a machine is only capable of performing computations, regardless of how complex or sophisticated, it involves no intelligence in performing those computations and therefore cannot be considered intelligent, nor is it possible for it to be or become conscious no matter how complex it may become. It seems to me that something else is needed that is as of yet undiscovered and not capable of being computed or duplicated by us.
 
  • #33
Philocrat said:
THE PROBLEM WITH THE CONSCIOUS INTELLIGENCE IN HUMANS

There are several causal and relational problems with conscious intelligence:

1) ENERGY

Conscious intelligence needs Energy to work. It needs to be powered up by externally induced energy. The fundamental DESIGN and ENGINEERING problem confronting conscious intelligence is that it has no mechanism or process for recycling energy. This external dependency for energy shortchanges the hard-core epiphenomenalists.

This is nothing more than an assumption. It may take physical energy for our brains to function and for our bodies to be alive and conscious, but it has not been shown that actual intelligence and consciousness are dependent on physical energy.

2) LOGICALLY HARDWIRED

It needs to be hardwired with clear logical pathways to work in the first place. If you cut the key wires from the body, everything intelligible in a human system collapses. You can surgically destroy intelligence in humans by cutting the right wires. René Descartes saw this problem and attempted to counter it by introducing a dubious notion of spirits assisting the Non-physical Soul.

It has been shown that the human brain is not hardwired but changes its connections as new things are learned, whether knowledge or physical skills such as learning to swing a golf club.

3) INDEPENDENCE

The notion of independence of conscious intelligence from the body is undermined by problems (1) and (2). Its dependence on the body and on carefully laid out, hardwired logical pathways to function undermines the very notion of independence. The claim that conscious intelligence pre-exists and post-exists the material body begs the question as to what it wants with the body in the first place. Worse still, the notion of 'Absolute Independence' is completely out of the question because this would imply something superior coming into a relation with something inferior (the mortal material body). In fact, absolute independence can also mean something that is so structurally and functionally perfect that it is incapable of relating to any other thing. Either way, the notion is fundamentally problematic.

Since 1 and 2 are not necessarily true and not proven (in the case of 2, proven not to be the case), the above does not follow at all. It may or may not be true, and there are many who are convinced through personal experience that it is not true.

4) MOTION

Conscious intelligence relies on motion. Grind an intelligent system to a halt and out goes conscious intelligence! Stop the world in its tracks, and conscious intelligence evaporates with it!

All these problems together seem to suggest that conscious intelligence comes into existence at the same time as the body and perhaps also perishes with the body. Hence, the notion of continuity of life after death seems very slim. Perhaps we are better off pursuing the notions of 'Life Continuity' and 'Immortality' scientifically, whereby we seek scientific methods for creating an immortal human being in a physical, material sense. At least, there is nothing which logically rules this option out. Although this is heavily contested in many quarters, it cannot by any device of logic be completely ruled out as a viable option.

This doesn't seem relevant at all. Why should intelligence and consciousness depend on physical motion at all? Yes, everything is in motion, but is there a causal or correlational relationship?
 
  • #34
Royce said:
This is nothing more than an assumption. It may take physical energy for our brains to function and for our bodies to be alive and conscious, but it has not been shown that actual intelligence and consciousness are dependent on physical energy.

I believe that what the fMRI images show is energy consumption, in the form of glucose usage. These studies have shown patterns of energy consumption during tasks like problem solving and meditation. It's hard to imagine what facet of consciousness you believe not to be captured in these studies; the subjects are experiencing "what it is like" to solve problems or meditate, and right alongside, their brains are consuming energy in specific areas. The burden of proof is on you to show that they can experience without thinking and using energy.
 
  • #35
Yes, as I said, the brain takes and uses energy to stay alive and function. Consciousness and intelligence have not been absolutely tied to the brain functioning.
It takes electrochemical energy in the brain to think, but is that all that awareness, intelligence and consciousness are? Is the production and use of electrochemical energy the cause or the effect? Is electrochemical energy all there is to consciousness and intelligence? This is the very thing that this thread is about. I, among many others, think that it is the effect rather than the cause, that there is more to intelligence and consciousness than the consumption of energy to drive computational processes.
There is no burden of proof on either side, as none of this speculation can be proved as of yet anyway. There is only a 'burden of proof' if one is committed to the physicalist paradigm that all mentality is computational processes in the physical brain.
 
  • #36
You say consciousness has not been "absolutely tied" to brain functioning. Have you any surveyable evidence of consciousness WITHOUT brain functioning? If you claim there is, or even might be, something like that, then the burden of proof is surely on you!

Alternatively, brain functioning is a necessary prerequisite of consciousness, and therefore to be conscious requires that your brain function, and hence that your body burn energy. Thus "to be conscious" necessarily entails "to process energy," whatever philosophical categories you construct.
 
  • #37
StatusX said:
Evolution, a purely physical process, could do it. Nothing else that evolution has done, short of creating the first instance of life itself, is doubted to be purely physical. So why not intelligence, and to go a little farther, why not consciousness?

When you say the word "evolution" you are simply equating it to whatever process generated human intelligence, so it isn't saying much to turn around and claim that therefore evolution has created human intelligence. It is true by definition. The trick comes in when you then equate this definition of "evolution"(whatever process that created humans) to the current scientific explanation for the existence of humans and the result is "evolution is a completely physical process".

Since the word "physical" means absolutely nothing to me, this conclusion doesn't either. But I do understand the intent. The intent is to claim that we understand almost everything there is to understand about how humans arrived. But as long as there are serious philosophical issues around consciousness I would expect there to always be conflicting theories. To simply gloss over the problems of consciousness because it is a result of what is known to be "a physical process" of evolution is simply the same as assuming your conclusion.

And "why not consciousness"? Because, as Royce is saying, there are serious issues suggesting that consciousness cannot be the result of complexity. If people here were claiming that consciousness cannot be created by humans because it is too complex then I would be agreeing with you. But that's not what is being stated here.


That it isn't currently explained is indisputable. That it can't be duplicated by a machine, even in principle, is much less certain, and presumes you know something about consciousness that you have already claimed isn't currently known.

What Royce is trying to say is that there are serious reasons to believe that, in principle, consciousness cannot be a creation of computational operations and therefore cannot be reductively explained. The correlation between brains and consciousness is indisputable. But this correlation tells us nothing about the relationship. Perhaps, given the presence of certain processes, a fundamental element of reality called consciousness manifests itself in a different way? So it may be conceivable that man could actually create a conscious machine. But this is very different from claiming that the computations themselves actually create consciousness. In this case, it would be like claiming that your internal plumbing actually creates the water instead of simply allowing it to flow into your home. Or that your radio is actually creating that unusual vocal sound.
 
  • #38
selfAdjoint said:
Alternatively, brain functioning is a necessary prerequisite of consciousness, and therefore to be conscious requires that your brain function, and hence that your body burn energy.

This statement and the one in your previous post are misleading. The discussion here is whether consciousness can be created by a reductive computational process. As I said in my previous post to StatusX, all we know is that there is a correlation between brains and consciousness. We don't know the nature of the relationship in the correlation (which is what Royce is saying, I believe). So your very first sentence is misleading. The way it's worded, it implies that consciousness is created by brain functions. A radio is a prerequisite for you to hear your favorite song, but that does not mean the creation of that song is dependent on the wiring of your radio. This implication equates the existence of the song with the existence of the actual "play" event on the radio, even though you have no clue how in the world that box of wires could compose such a sound. And there is reason to believe that, in principle, it could not possibly do such a thing.

Thus "to be conscious" necessarily entails "to process energy" whatever philosophical categories you costruct

If I need gasoline to drive to my grandparents house so that I can split wood, then we can say that gasoline entails having split wood. Would there be an implication here that gasoline somehow contributes some functional explanation as to how the process of splitting wood takes place? I would hope not considering that gasoline has no purpose at all in the actual process of splitting wood. The same could be true for consciousness. Hey, we could just simply say that the big bang entails everything and leave it at that! No more explanations required!:-p

These just seem like semantic debates which miss much of the point of the original post.
 
  • #39
Royce said:
If a process is completely computational, the process is not intelligent and does not contain or take intelligence to perform or complete.

Royce, it seems to me that much of the debate here could have been fueled by using the word "intelligence". It depends on how we define this word but I would have initially pushed back on the idea that intelligence cannot be generated by computations.

So I'm not sure using the word is very useful. It doesn't seem we need it. There are serious philosophical issues around consciousness, and any philosophical issues that you can cite about "intelligence" seem to be because of its dependence on consciousness. So why have this word as if it's a different issue? Am I misunderstanding? Are there other things which you think cannot be generated by computations that aren't covered in the concept of consciousness?
 
  • #40
Fliption said:
This statement and the one in your previous post are misleading. The discussion here is whether consciousness can be created by a reductive computational process. As I said in my previous post to StatusX, all we know is that there is a correlation between brains and consciousness. We don't know the nature of the relationship in the correlation (which is what Royce is saying, I believe). So your very first sentence is misleading. The way it's worded, it implies that consciousness is created by brain functions. A radio is a prerequisite for you to hear your favorite song, but that does not mean the creation of that song is dependent on the wiring of your radio. This implication equates the existence of the song with the existence of the actual "play" event on the radio, even though you have no clue how in the world that box of wires could compose such a sound. And there is reason to believe that, in principle, it could not possibly do such a thing.



If I need gasoline to drive to my grandparents house so that I can split wood, then we can say that gasoline entails having split wood. Would there be an implication here that gasoline somehow contributes some functional explanation as to how the process of splitting wood takes place? I would hope not considering that gasoline has no purpose at all in the actual process of splitting wood. The same could be true for consciousness. Hey, we could just simply say that the big bang entails everything and leave it at that! No more explanations required!:-p

These just seem like semantic debates which miss much of the point of the original post.


You could go to the concert or the station and hear the song without the radio. You cannot separate consciousness from the brain - or at least nobody has any evidence that you can. And your wood chopping example is exactly backwards; you could use the gas for some other chore, but in your example you could not get to the wood chopping chore without driving there. Review necessary and sufficient conditions.

In both examples it is you who are misleading, because you bring up situations where two things can be easily separated and liken them to consciousness and the brain, of which the outstanding fact is that they cannot be separated.
 
  • #41
Royce said:
I knew that introducing subjective experience at this point was a mistake.
I am trying simply to take it one step at a time; but that doesn't seem to be where this thread is going. IMHO all experience is ultimately subjective, as the actual experience happens in the mind/brain. I differentiated it because there is a difference between the physical, awake experience of life and the purely mental experience of dreams, imagination and meditation. As far as Chalmers is concerned, I liked his argument of subtracting the smallest possible part from a conscious entity until it becomes no longer conscious, to determine the point or minimum complexity needed for consciousness. This shows, according to him, the absurdity of consciousness being nothing more than a matter of complexity. Unless of course I have completely misread that passage.

Well, like I said earlier, all your theorem reduces to is a definition of the ambiguous word "intelligence." So to keep this discussion productive, you'll need to define intelligence in some other way so that this can be compared to it. For example, do you believe true intelligence requires subjective experience? Personally, I think intelligence refers to a purely behavioral characteristic, and if there is a corresponding phenomenal aspect, that is something else.

Now, I don't know that you agree with me that intelligence is behavioral, since you mentioned "intelligence/consciousness" in your earlier posts as if they were equivalent. They are not, and it is perfectly coherent to imagine a being which possesses one of these attributes but not the other. So I have two questions that should get to the bottom of this argument: Is consciousness necessary for the behavioral quality of intelligence? And can non-humans (or non-animals) be conscious?

And by the way, I don't know what passage you are referring to, but from what I've read of Chalmers', you are misinterpreting him. He argues that experience is a fundamental property of the universe, like space or energy, and arises wherever there is an information-processing system, such as a brain, a computer, or a thermostat. The complexity of the experience is in direct correlation to the complexity of the information processing, and there is no cut-off point below which consciousness vanishes.
 
  • #42
Fliption said:
When you say the word "evolution," you are simply equating it to whatever process generated human intelligence, so it isn't saying much to turn around and claim that evolution has therefore created human intelligence. It is true by definition. The trick comes in when you then equate this definition of "evolution" (whatever process created humans) with the current scientific explanation for the existence of humans, and the result is "evolution is a completely physical process".

Since the word "physical" means absolutely nothing to me, this conclusion doesn't either. But I do understand the intent. The intent is to claim that we understand almost everything there is to understand about how humans arrived. But as long as there are serious philosophical issues around consciousness, I would expect there to always be conflicting theories. To gloss over the problems of consciousness because it is a result of what is known to be "a physical process" of evolution is simply assuming your conclusion.

I don't think I'm assuming anything. I start with the fact that evolution seems to be a perfectly reasonable and supported hypothesis for the development of complexity in life. This process is understood conceptually, even if many specific properties of the organisms themselves have not been exhaustively accounted for. But most importantly, the theory requires no extra ingredient; presumably, a universe in which the laws of physics as we understand them today are all there is would still develop advanced life. I used this to argue that intelligence does not require an extra ingredient.

As for consciousness, this is more subtle. I don't disagree that the laws of physics as we know them do not account for consciousness. What I'm saying is this: The arrangement of atoms in our brain has given rise to experience. I cannot accept that there was some supernatural event during the formation of our brain that endowed it with a property that another identical collection of atoms would not have. Therefore, I agree with Chalmers that the right information processing system is automatically instilled with subjective experience.

And "why not consciousness"? Because, as Royce is saying, there are serious issues suggesting that consciousness cannot be the result of complexity. If people here were claiming that consciousness cannot be created by humans because it is too complex then I would be agreeing with you. But that's not what is being stated here.

I think we are getting off track. The discussion was about intelligence, and I believe that increasing the complexity of machines will eventually lead to human intelligence and beyond. But as I said in my last post, I agree with Chalmers that consciousness is fundamental, not a result of complexity.

What Royce is trying to say is that there are serious reasons to believe that, in principle, consciousness cannot be a creation of computational operations and therefore cannot be reductively explained. The correlation between brains and consciousness is indisputable. But this correlation tells us nothing about the relationship. Perhaps, given the presence of certain processes, a fundamental element of reality called consciousness manifests itself in a different way? So it may be conceivable that man could actually create a conscious machine. But this is very different from claiming that the computations themselves actually create consciousness. In that case, it would be like claiming that your internal plumbing actually creates the water instead of simply allowing it to flow into your home. Or that your radio is actually creating that unusual vocal sound.

This seems to be an argument of semantics. Whether or not the processes have "created" consciousness is immaterial. If those processes are always accompanied by experience, there is probably some kind of natural law relating them, and interpreting this law as anything more than a correlation would be misleading.
 
  • #43
To me intelligence is a part of consciousness. It involves thinking, or better, reasoning, understanding and awareness, so it is surely a part of consciousness as far as human consciousness is concerned, and I would think the same of non-human animals that are clearly conscious, aware and to some degree intelligent. Intelligence is probably as hard to define as consciousness. Penrose proposed using the term "genuinely intelligent" or "creatively intelligent" to distinguish it from so-called intelligent machines, which is common usage but, to him and to me, not the proper use of the word "intelligent" in this sense.

I think the ability to reason and understand, and to think thoughts that are unique, original and creative (at least to the one doing the thinking), are all part of being intelligent, as is seeing, understanding and making new connections, relationships, implications and ramifications beyond a new thought or idea. This of course leaves out a lot of people (but not as many animals) and of course especially includes blonds and liberals. :biggrin: (Sorry about that. I just couldn't resist the opportunity to take a jab.)

I'm not sure how you mean intelligence is behavioral unless you consider the above behavioral. I think of intelligence more as a quality or attribute.
 
  • #44
selfAdjoint said:
You could go to the concert or the station and hear the song without the radio. You cannot separate consciousness from the brain - or at least nobody has any evidence that you can. And your wood chopping example is exactly backwards; you could use the gas for some other chore, but in your example you could not get to the wood chopping chore without driving there. Review necessary and sufficient conditions.

In both examples it is you who are misleading, because you bring up situations where two things can be easily separated and liken them to consciousness and the brain, of which the outstanding fact is that they cannot be separated.

Can we think of a parallel situation?

You are reading my words and ideas right now on your computer. Is your computer creating my words and ideas? No. Could my words and ideas be in your presence without computer technology and our computers? No.

Can physical principles alone at this time account for the presence of consciousness? No. Can consciousness be present in this universe without physicalness? As far as we know, no.

The proper conclusion is, the brain is required for consciousness to be present, but consciousness cannot yet be attributed to the brain. It is entirely possible that the brain "holds" consciousness here in the physical universe.
 
  • #45
selfAdjoint said:
You could go to the concert or the station and hear the song without the radio. You cannot separate consciousness from the brain - or at least nobody has any evidence that you can. And your wood chopping example is exactly backwards; you could use the gas for some other chore, but in your example you could not get to the wood chopping chore without driving there. Review necessary and sufficient conditions.

In both examples it is you who are misleading, because you bring up situations where two things can be easily separated and liken them to consciousness and the brain, of which the outstanding fact is that they cannot be separated.

That is exactly my point. You seem to have missed it. The relationship between a radio and a song is the same as that between consciousness and the brain. Yes, it is true, you can hear a song without a radio. The relationship between the radio and the song does not preclude you from listening to that song without a radio. So what is it about the relationship between consciousness and the brain that suggests to you that one cannot exist without the other? Or that one caused the other? The only thing you have to argue is that (unlike the radio) you don't know of an instance of consciousness without a brain. Well, how on Earth would you know if such a thing did exist? The only argument you have is actually an illustration of the problems of a theory of emergence.

You simply assumed that it was a known fact that consciousness and brains cannot be separated. Whereas I was trying to illustrate that the relationship between the two suggests no such assumption...as the radio/song analogy shows.
 
  • #46
StatusX said:
I don't think I'm assuming anything. I start with the fact that evolution seems to be a perfectly reasonable and supported hypothesis for the development of complexity in life. This process is understood conceptually, even if many specific properties of the organisms themselves have not been exhaustively accounted for. But most importantly, the theory requires no extra ingredient; presumably, a universe in which the laws of physics as we understand them today are all there is would still develop advanced life. I used this to argue that intelligence does not require an extra ingredient.

I see your point and I agree. What I was trying to point out was that using the assumption that evolution is a complete theory to conclude that consciousness is physical is flawed, because consciousness isn't accounted for by any theory. It's simply assuming the conclusion.

I think we are getting off track. The discussion was about intelligence, and I believe that increasing the complexity of machines will eventually lead to human intelligence and beyond. But as I said in my last post, I agree with Chalmers that consciousness is fundamental, not a result of complexity.

I see. Then you and I completely agree. As I said earlier, I too would have argued that "intelligence" can be created by machines. But this is purely semantic. I think at the end of the day, Royce and I would likely agree.

This seems to be an argument of semantics. Whether or not the processes have "created" consciousness is immaterial. If those processes are always accompanied by experience, there is probably some kind of natural law relating them, and interpreting this law as anything more than a correlation would be misleading.

This I do not agree with. Whether consciousness is "created" is a very important distinction. If consciousness is fundamental and simply correlated with certain atom arrangements and processes, then we don't need to spend a lot of time trying to reductively explain the production of consciousness. We simply need to understand the correlation. But if consciousness is created, then we have to be able to reductively account for that creation. This is what I do not think is possible.

If this distinction were not important, then what on Earth is Chalmers all uptight about?
 
  • #47
Royce said:
To me intelligence is a part of consciousness. It involves thinking, or better, reasoning, understanding and awareness, so it is surely a part of consciousness as far as human consciousness is concerned, and I would think the same of non-human animals that are clearly conscious, aware and to some degree intelligent. Intelligence is probably as hard to define as consciousness. Penrose proposed using the term "genuinely intelligent" or "creatively intelligent" to distinguish it from so-called intelligent machines, which is common usage but, to him and to me, not the proper use of the word "intelligent" in this sense.

Again, I'm not sure exactly what it is you are denying machines the ability to do. We might as well change your and Penrose's term to "human intelligence," which shows how this definition makes your theorem trivially true. At present, machines can add, diagnose diseases, carry on simple conversations, and manipulate blocks in simple environments, among many, many other complicated tasks. Some of these must overlap with what most of us consider intelligence. And their abilities will only expand with time. Are you saying a machine doesn't have free will? Neither do you, in the strictest sense that your actions are constrained by physical law. It can't come up with original ideas? I disagree, and as evidence, a computer was able to prove an ancient geometric theorem in a new and original way that had never occurred to any human. So what can't they do?

I'm not sure how you mean intelligence is behavioral unless you consider the above behavioral. I think of intelligence more as a quality or attribute.

It is behavioral (a functional property) as opposed to subjective. For example, when someone takes an IQ test, it is measuring the ability of that chunk of matter, conscious or not, to reason.

Fliption said:
This I do not agree with. Whether consciousness is "created" is a very important distinction. If consciousness is fundamental and simply correlated with certain atom arrangements and processes, then we don't need to spend a lot of time trying to reductively explain the production of consciousness. We simply need to understand the correlation. But if consciousness is created, then we have to be able to reductively account for that creation. This is what I do not think is possible.

If this distinction were not important, then what on Earth is Chalmers all uptight about?

Well, I'm sure human consciousness can be reduced, but I think that pure qualia will be associated in some one-to-one way with some type of physical process. Once we know what this correlation is, the problem is solved for all intents and purposes.
 
  • #48
Les Sleeth said:
Can we think of a parallel situation?

You are reading my words and ideas right now on your computer. Is your computer creating my words and ideas? No. Could my words and ideas be in your presence without computer technology and our computers? No.

I don't have any qualms with the rest of your post, but this definitely isn't true. There are many ways to experience a person's words without a computer. There are still no known ways to experience consciousness without a brain. Fliption could be right, and the brain is only a radio-like conduit that allows us to channel consciousness from some other source, but it seems to me that explanations like that are a big time copout. If that is the case, then the actual source of consciousness becomes an unsolvable mystery. I guess that's just the rub with metaphysical questions, though. They're all unsolvable mysteries. I'd like to think that any phenomenon we can directly experience is not.
 
  • #49
loseyourname said:
I don't have any qualms with the rest of your post, but this definitely isn't true. There are many ways to experience a person's words without a computer. There are still no known ways to experience consciousness without a brain.

I assumed readers would see my point. I could have said, ". . . if you were someone who had never left your computer since birth, never met other humans, had been fed by tubes, etc." to make the analogy fit better. My point was that there are reasons not to assume yet that the brain is creating consciousness, and that there's another explanation for how consciousness could be present in the brain.


loseyourname said:
Fliption could be right, and the brain is only a radio-like conduit that allows us to channel consciousness from some other source, but it seems to me that explanations like that are a big time copout. If that is the case, then the actual source of consciousness becomes an unsolvable mystery.

It's not a copout if it is true. Just because we want to scientifically figure out everything doesn't mean we can. It's horrible to contemplate, but there might just be truths beyond human experience which therefore must ultimately remain a mystery. But so what? We still get to be consciousness; not knowing the source doesn't change that. In fact, maybe it would benefit us more if we made more effort to learn how to be consciousness than to figure out what causes it.


loseyourname said:
I guess that's just the rub with metaphysical questions, though. They're all unsolvable mysteries.

Physicalism is a metaphysical question. Metaphysics isn't synonymous with myth. It is just the meta-systems behind the specifics of what we see going on around us. Is there a physical meta-system? There must be, because we can't see all the causes of physical phenomena. Is everything we see the result of a physical meta-system? That's what we are arguing about.


loseyourname said:
I'd like to think that any phenomenon we can directly experience is not [an unsolvable mystery].

Are science researchers experienced with every aspect of consciousness there is to know? If, for example, someone is adept with their intellect, does that mean they understand how to use their consciousness in every way it has been demonstrated it can be used?

Here's what I don't understand. How do people justify remaining blissfully ignorant of the achievements of others (and I'm not specifically referring to you)? How do people develop their consciousness in one way, ignore everything which isn't their "way," and then try to act like they know how to evaluate everything? When the only thing one studies is science and physicalness, for instance, that is all one is going to know about. It doesn't mean what science finds is all there is to know, or that that's the only way one can develop one's consciousness!

Consciousness has been studied deeply since long, long before any brain researchers decided to take up the investigation. It was studied by people who dedicated their entire lives to learning to directly experience that "subjective" aspect which currently mystifies everyone. As far as I can tell from looking at both sides, the neuroscience side understands the role of the brain best, and the direct-experience side understands consciousness itself best. It is too bad the physicalists on the neuroscience side have already decided they know the metaphysical "truth," and so have closed off every bit of openness to any evidence except that which can be studied scientifically.
 
  • #50
At first glance the theorem may seem trivial and intuitively obvious; but it has implications and ramifications that directly address a number of threads and discussions going on here at PF, as well as in other places and times. There are three that I have in mind:

1. Artificial Intelligence, AI, can only be that, artificial: at best a simulation of genuine creative intelligence. Thus Mr. Data of Star Trek: The Next Generation could never be more than a data processor of great complexity and sophistication that simulated intelligence and consciousness. He could never be, in principle or in fact, an intelligent, conscious, sentient entity or being in his own right. Sentient, conscious robots or machines as we know them now are in principle impossible.

2. Intelligence and thus consciousness cannot be duplicated by merely increasing the size, computational complexity or sophistication of computational processes. Thus intelligence and consciousness cannot be emergent properties.

3. There is an aspect of intelligence and consciousness that is non-computational and thus not reducible by physics, science and/or mathematics at this time, given the tools and scope of science as it is.

----------

There is something more than computational processes to intelligence and consciousness, something beyond science at this time, and to study it science will have to broaden its scope and limits to include at least subjective experience.

---------
The fact that electrochemical activity can be observed in the brain while people are thinking, problem solving and meditating proves only that there is a correlation between thinking and electrochemical activity in the brain. It does not prove that such activity is the sole cause and origin of intelligence and consciousness. There is also activity in the brains of people in comas. There is anecdotal evidence, documented, verified and published, that people who are clinically dead or verified to be deeply anesthetized have retained their awareness, consciousness and identity, and could accurately report experiencing and seeing events going on around them and knowing and recognizing people that they had no way of knowing. This at least brings into question the necessity of a conscious, functioning brain being present for awareness, experience and consciousness.
There are also many reports of near-death and out-of-body experiences that cannot be easily explained by the traditional physical sciences. You may ignore or reject such evidence as not being scientific and nothing more than mystical hogwash; but how do you or science know until it is legitimately investigated? It is evidence that is repeatable and verifiable, and in order to get anywhere with intelligence/consciousness, any and all evidence must be investigated with an open mind, even if it is outside the realm of physical science.
Yes, it is presently within the realm of metaphysics, but it is not necessarily totally mystical and not necessarily forever so. I do believe that it is knowable.
 