Royce's Theorem: Intelligence Is Not Computational

In summary: Penrose argues that consciousness cannot be computational even in principle: if a process is completely computational, the process is not intelligent and does not contain or require intelligence to perform or complete. Whatever intelligence is involved belongs to the inventor, designer, or programmer of the computational algorithm, not to the process itself. If that is so, any attempt to create consciousness through computer programming is doomed. The practical implications follow: if computing power could get "intelligent," the line between human and machine would blur to the point where intelligence (or consciousness) might be said to be inherent in the machines themselves.
  • #1
Royce
I am reading Roger Penrose's Shadows of the Mind (just started). He opens the book by arguing again that consciousness cannot be computational even in principle. This got me thinking about AI etc., and I came up with the following, which I call Royce's Theorem. :wink:

If a process is completely computational, the process is not intelligent and does not contain or take intelligence to perform or complete.

Whatever intelligence is involved is that of the inventor, designer, and/or programmer of the computational algorithm. Once that is done, the rest is simply data processing according to the rules of the algorithm: not an intelligent process itself but one of duplication, repetition, and rote. Such actions do not involve actual thinking, intelligence, awareness, or consciousness. We used to call it plug and chug: once the formula is known or given, plug in the data and hit the go button.
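The "plug and chug" idea can be illustrated with a minimal sketch. The quadratic formula stands in for any given algorithm; the function name and example values are invented for illustration:

```python
import math

def solve_quadratic(a, b, c):
    # Mechanically apply a fixed formula: no judgment, just the algorithm's rules.
    disc = b * b - 4 * a * c      # discriminant
    root = math.sqrt(disc)        # this sketch assumes disc >= 0
    return ((-b + root) / (2 * a), (-b - root) / (2 * a))

# "Plug in the data and hit the go button":
print(solve_quadratic(1, -3, 2))  # (2.0, 1.0)
```

All the insight (deriving the formula) happened before the program ran; the run itself is pure rote.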

This can and does have many implications but I would rather toss this up and have it batted around awhile before getting into any implications.
 
  • #2
But there was an intelligence behind the program. I mean the programmer!
And I think it is very important
 
  • #3
somy said:
But there was an intelligence behind the program. I mean the programmer! And I think it is very important

Yes it is because no one has been able to show any other way for a program to avoid turning repetitive, or into dumb randomness, except when human consciousness steps in and makes new adjustments.

It reminds me of someone lining up dominoes in the most creative ways possible, like in those big contests they have in Japan, where when the dominoes start falling they flip things on, send things flying, turn stuff, etc. Now what if someone creating that domino pattern hoped maybe it would learn to do it for itself? Every time he fails to produce a self-creating domino pattern, he believes it is because the pattern wasn't complicated enough. So he makes it more and more complicated. Yet the problem isn't solved no matter how complex he makes it, because at the root of the process is the same thing -- falling dominoes -- but falling dominoes isn't creativity, which is what organizes the dominoes.

So I think Royce is correct. To me it is the same problem with hoping computer programming will spawn consciousness. Programming is falling black or white dominos, but if that isn't what consciousness is, then it's hopeless for programmers to keep trying more complexity to create consciousness.
 
  • #4
Les Sleeth said:
It reminds me of someone lining up dominoes in the most creative ways possible, like in those big contests they have in Japan, where when the dominoes start falling they flip things on, send things flying, turn stuff, etc. Now what if someone creating that domino pattern hoped maybe it would learn to do it for itself? Every time he fails to produce a self-creating domino pattern, he believes it is because the pattern wasn't complicated enough. So he makes it more and more complicated. Yet the problem isn't solved no matter how complex he makes it, because at the root of the process is the same thing -- falling dominoes -- but falling dominoes isn't creativity, which is what organizes the dominoes.

But he could, conceivably, create a domino pattern that could add numbers, or prove number-theoretic theorems, or express a belief, assuming dominoes are capable of simulating Turing machines. A paraplegic couldn't organize a domino pattern either, but that doesn't mean he isn't creative.
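The point that domino arrangements could in principle compute can be sketched abstractly. This is a toy logical model, not a physical domino layout; the gate names are illustrative:

```python
def domino_or(a, b):
    # Two chains merge: the output domino falls if either input chain falls.
    return a or b

def domino_and(a, b):
    # Physically this needs a blocking arrangement; abstractly it is conjunction.
    return a and b

def half_adder(a, b):
    # From such gates, domino "circuits" could add one-bit numbers.
    carry = domino_and(a, b)
    s = domino_or(a, b) and not carry   # XOR built from OR and AND
    return s, carry

print(half_adder(True, True))   # (False, True): 1 + 1 = binary 10
```

Chaining such half-adders is exactly the kind of construction that lets dominoes (or any switching medium) simulate a computer.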
 
  • #5
How many parts of a human do you have to replace until it becomes nonhuman? Where "nonhuman" means nonhuman consciousness, you can add or subtract parts.

If we replaced near all its parts could we then conclude that consciousness is not physical?

The ghost in the machine just might decide to try out the new toys we build.
 
  • #6
Les and Rader, you both obviously see where this can lead. It makes AI, Artificial Intelligence, impossible to be anything more than artificial in principle as well as in fact.
And again, if we apply Chalmers's argument to intelligence, then when does it become intelligent or stop being intelligent?
If the theorem is always and completely true, then Penrose's position that consciousness is not computable in principle is true. Therefore intelligence and/or consciousness cannot be emergent phenomena.

But, is it always completely true? I obviously think it is.
 
  • #7
StatusX said:
But he could, conceivably, create a domino pattern that could add numbers, or prove number-theoretic theorems, or express a belief, assuming dominoes are capable of simulating Turing machines.

That's correct, he could. But Royce allowed that it is possible to create programs that compute (and I'm pretty sure he'd agree programs can mindlessly express things). The issue is whether computing power can get "intelligent," which I assumed he meant could creatively think up and design. If we downgrade intelligence to simply computing power, then sure, a computer is intelligent.


StatusX said:
A paraplegic couldn't organize a domino pattern either, but that doesn't mean he isn't creative.

Yes, but that seems off point. He couldn't physically organize the pattern, but he could still design it. As of now a computer can only design what human consciousness programs it to do.
 
  • #8
Les Sleeth said:
That's correct, he could. But Royce allowed that it is possible to create programs that compute (and I'm pretty sure he'd agree programs can mindlessly express things). The issue is whether computing power can get "intelligent," which I assumed he meant could creatively think up and design. If we downgrade intelligence to simply computing power, then sure, a computer is intelligent.

In my opinion, creativity occurs when randomly created ideas are analyzed to see if they are interesting. Probably most of this filtering occurs subconsciously, so sometimes we will get an inspiration and have no idea where it came from. I see no reason whatsoever why a mechanical computer couldn't replicate this process, and if it could, it would be indistinguishable from human creativity, even if you don't agree with my explanation of it.
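This generate-and-filter picture of creativity can be sketched in a few lines. The "interestingness" test here is an arbitrary stand-in (strictly increasing digit triples); choosing the right filter is of course the whole dispute:

```python
import random

def generate_candidates(n, size=3):
    # Blindly generate random candidate "ideas" (here, just digit tuples).
    return [tuple(random.randint(0, 9) for _ in range(size)) for _ in range(n)]

def interesting(idea):
    # Stand-in filter: keep only strictly increasing tuples.
    return all(x < y for x, y in zip(idea, idea[1:]))

random.seed(0)  # reproducible run
keepers = [idea for idea in generate_candidates(1000) if interesting(idea)]
print(len(keepers) > 0)  # some random "ideas" survive the filter
```

Whether such blind generation plus filtering amounts to creativity, or merely simulates it, is exactly the disagreement in this thread.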

Yes, but that seems off point. He couldn't physically organize the pattern, but he could still design it. As of now a computer can only design what human consciousness programs it to do.

My point was that the only reason a domino pattern couldn't construct another domino pattern is because it isn't physically equipped to do so. It could conceivably "think up" new arrangements by forming certain complex patterns that a computer could interpret and then create using a robotic arm.


Also, regarding the topic, what you have isn't really a theorem but is instead just your definition of intelligence, which I happen to disagree with. Intelligence is not well defined, and unless you have another definition of intelligence in mind that you are referring to here, this theorem has no real content besides defining a term.
 
Last edited:
  • #9
StatusX said:
In my opinion, creativity occurs when randomly created ideas are analyzed to see if they are interesting. Probably most of this filtering occurs subconsciously, so sometimes we will get an inspiration and have no idea where it came from.

Okay, I'll go for that.


StatusX said:
I see no reason whatsoever why a mechanical computer couldn't replicate this process, and if it could, it would be indistinguishable from human creativity, even if you don't agree with my explanation of it.

I can't figure out why mechanists think a computer will recognize "interesting" or anything else that is determined by quality. How is a computer going to get inspired?

I was watching a special on Google and their founders last night. One of the criticisms by other analysts was that Google searches tend to prioritize what's most negative. The founders said they haven't been able to figure out how to stop their program from doing that (they have plenty of help too, and are hiring 25 more people per month). I know I have posted over 1400 times here, and over 500 times at the old PF. Yet when I do a Google search of my name, up pops the one humorous post I did about the biology of MASTURBATION ( :redface: that's how big it seems to me). Before that I posted in a thread where someone asked if it was okay to view porno at work, and that one popped up!

One reason I see why a "mechanical computer couldn't replicate" creativity/intelligence is because so far computers and their programs have shown themselves to be utterly stupid. It is only people's prior belief in physicalism, or believing consciousness is the result of neuronal complexity, that makes them claim computers can be conscious. There is not the slightest bit of objective evidence yet to indicate that.
 
Last edited:
  • #10
StatusX said:
In my opinion, creativity occurs when randomly created ideas are analyzed to see if they are interesting. Probably most of this filtering occurs subconsciously, so sometimes we will get an inspiration and have no idea where it came from. I see no reason whatsoever why a mechanical computer couldn't replicate this process, and if it could, it would be indistinguishable from human creativity, even if you don't agree with my explanation of it.
Ideas are a product of thinking. Weighing and connecting thoughts or ideas requires that the thinker is aware that it is thinking and is aware of its thoughts. Random thought is still thinking, but thinking randomly can only result in randomness. Combining thoughts in unique ways requires intelligence, including understanding, awareness of results, applicability, and ramifications, as well as purpose and intent. A machine, even if composed of dominoes, can only do what it is programmed or designed to do. A domino has only two choices: to stand or to fall. If the next domino falls into it, it has no choice but to fall. This is not random, nor thinking, but unwillful reacting to its environment in the only way that it can. It is the same with a computer or any other machine to date. It will always be so for any machine as we understand the term, machine.


My point was that the only reason a domino pattern couldn't construct another domino pattern is because it isn't physically equipped to do so. It could conceivably "think up" new arrangements by forming certain complex patterns that a computer could interpret and then create using a robotic arm.

A domino, or a billion dominoes, can only stand or fall in response to influences outside of itself or its control. It does not decide to fall or not to fall. It cannot act randomly or willfully. It can only do what it was designed to do.

Any and all intelligence in any such machine lies with the designer and/or programmer not the machine. If it is a machine it can only do what it is designed to do and nothing else or it is malfunctioning.
 
  • #11
Les Sleeth said:
One reason I see why a "mechanical computer couldn't replicate" creativity/intelligence is because so far computers and their programs have shown themselves to be utterly stupid. It is only people's prior belief in physicalism, or believing consciousness is the result of neuronal complexity, that makes them claim computers can be conscious. There is not the slightest bit of objective evidence yet to indicate that.
You just reminded me of a certain caveat my networking instructor used to issue to his students:

"Never anthropomorphize your computers. They don't like it."

:biggrin:
 
  • #12
Math Is Hard said:
You just reminded me of a certain caveat my networking instructor used to issue to his students:

"Never anthropomorphize your computers. They don't like it."

:biggrin:

:smile: :smile: :smile: Great observation! We should send this to Chalmers to help him with his next debate with Dennett.
 
  • #13
This is in response to Les Sleeth and Royce's posts, since you both argued essentially the same point that mechanical processes can never behave in a way we could reasonably call intelligent.

I hate to beat a dead horse, but look at the brain objectively. It is a collection of atoms that interacts with the environment in a way we can all agree is "intelligent". Does it experience thought? Maybe. We believe our own brain does, but we know nothing of others' brains. Just looking at the data, we have a collection of atoms which strictly obey physical laws and which produce complex, intelligent behavior. How is this not an intelligent machine?

I just can't understand any of the objections to this argument. Is the brain made of something besides atoms? How could that be if it arose from atoms (namely, those of the food we eat, or if you want to go back farther, star material)? Do the laws of physics not apply in the brain? Again, there is no conceivable reason to suppose this is true.
 
Last edited:
  • #14
An organism has desires that fuel and give direction to that "machine," i.e., the brain, for starters. Maybe if you could wire in a desire to survive, or an aim, it would be different (or at least in appearance).
 
  • #15
If we want to go with the computer analogy, then we must realize that the computer consists of switches only. The switches are hard-wired and can turn from off to on, or from on to off, only in response to their input. The fact that a switch is on or off has no meaning other than what the programmer assigns it. Take for example a light, an LED, on your computer panel. It is either illuminated or not in response to a switch at its input. We have no idea what the light being on means unless it is labeled or occupies a position that has previously been assigned a significance. It could mean that the power switch is on, it could mean that our hard drive is being accessed, or it could mean that there is a floppy or CD present in the drive.
The point is that the light is not intelligent, nor does it convey any intelligence or information, unless it has previously been assigned a value by the designer and we know what the designer assigned it to mean. For all we know, the light being off may be a good thing or a bad one; it may indicate proper functioning or improper functioning; or, as is the case with many indicators on the simulator on which I work, it may mean nothing at all because those lights don't apply to this model. It is still a light, it still switches on and off, it is red or green or yellow, and some are even labeled, but in this particular case they have no meaning and convey no information because none has been assigned to them.
Intelligence is designed into a machine. The machine can only respond as designed to the data fed into it, and the ones and zeroes, or the light and dark places, have meaning and convey information only if we all know what the designer intended them to mean. Someone who does not know, understand, and read the English language as used in the United States could make no sense of what any of us have written here.
It would convey no information and no intelligence. In order to understand the results of a process, we have to know and understand beforehand the form in which the information is encoded.
Where then does any intelligence lie within the machine?
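The panel-light example can be made concrete in a short sketch: the same bit pattern, read through two different designer-supplied legends, "means" entirely different things. The legends are invented for illustration:

```python
status_bits = 0b101  # three panel lights: first and third lit

# All the meaning lives in these designer-assigned legends, not in the bits.
LEGEND_A = {0: "power on", 1: "disk activity", 2: "CD present"}
LEGEND_B = {0: "overheat", 1: "fan fault", 2: "low battery"}

def read_panel(bits, legend):
    # The machine just switches; the legend supplies every bit of meaning.
    return [legend[i] for i in range(3) if (bits >> i) & 1]

print(read_panel(status_bits, LEGEND_A))  # ['power on', 'CD present']
print(read_panel(status_bits, LEGEND_B))  # ['overheat', 'low battery']
```

The bits are identical in both readings; only the externally assigned interpretation differs.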
 
Last edited:
  • #16
StatusX said:
Not to beat a dead horse, but look at the brain objectively. It is a collection of atoms that interacts with the environment in a way we can all agree is intelligent. Does it experience thought? Maybe. We know our own brain does, but we know nothing of others' brains.

I don't think you are suggesting looking at the brain objectively. Rather, you are saying look objectively at what's observable by our senses. With our senses it is true we can only observe atoms, conformity to physical laws, complex behavior . . . If you sit still and experience your own consciousness however, it won't be with the senses. Yet no one wants to include that information in the model.

But there is another factor to consider, and that is the thus-far observed behavior of matter which is not "alive." It doesn't exhibit anything creative enough that should make us believe it can organize itself for billions of years to produce "livingness" and consciousness. It mostly just disorganizes. So why does anyone have faith that physicalness can produce life and consciousness?


StatusX said:
Looking at the data, we have a collection of atoms which strictly obey physical laws and which produce complex, intelligent behavior. How is this not an intelligent machine?

If you were Einstein, and you delivered your paper on relativity via the radio, what would you think if the world's population credited the radio with being a genius? Just because we see intelligence associated with the brain doesn't mean the brain is creating it.
 
  • #17
Les Sleeth said:
I don't think you are suggesting looking at the brain objectively. Rather, you are saying look objectively at what's observable by our senses. With our senses it is true we can only observe atoms, conformity to physical laws, complex behavior . . . If you sit still and experience your own consciousness however, it won't be with the senses. Yet no one wants to include that information in the model.

I edited my post, and I don't know if you saw it, but to reiterate: the brain is made of star material. Is this special quality you suggest, which we can't observe, present in stars too, or does it arise at some point in the formation of a brain? Like I said, there is no reason to believe in such a quality.

Keep in mind, a physicalist will claim your beliefs are physically caused, and do not necessarily say anything about the world except that something about it causes you to have those beliefs. That the cause is what you believe it to be (that the beliefs result from your interaction with this higher realm) is not a priori true. The same goes for me. I recognize that it feels as if I am making decisions for myself and that there is some essence of what it means to be me; call it a soul if you want. But I also recognize that if physicalism is true, it could conceivably explain why I have these feelings even if they aren't true.

But there is another factor to consider, and that is the thus-far observed behavior of matter which is not "alive." It doesn't exhibit anything creative enough that should make us believe it can organize itself for billions of years to produce "livingness" and consciousness. It mostly just disorganizes. So why does anyone have faith that physicalness can produce life and consciousness?

What about industrial assembly lines? These take pieces of metal or plastic and produce very highly organized structures, like cars and dishwashers. They are created by people, but once built, can keep working with very little maintenance. True, if you left them alone for a very long time, there would eventually be a problem that would cause the line to stop functioning. But surely better software would fix this problem, or at least increase its lifetime, which is perfectly acceptable since even people eventually break down and stop functioning. These don't contain an elan vital, do they?
 
Last edited:
  • #18
StatusX said:
I hate to beat a dead horse, but look at the brain objectively. It is a collection of atoms that interacts with the environment in a way we can all agree is "intelligent".

No, we can't, because we cannot agree on or know where intelligence, awareness, and consciousness lie. If the brain is made up of cells that switch on and off only in response to stimuli, inputs, then how many cells and how many connections does it take to become intelligent? If we then remove or kill one cell or one connection, does it become no longer intelligent or conscious?
That is Chalmers's argument.

Does it experience thought? Maybe. We believe our own brain does, but we know nothing of other's brains.

Tell me how a thinking machine can be aware, first, that it is thinking and, second, of what it is thinking, if it is what is doing the thinking in the first place. There has to be something more that is aware of what the brain machine is doing and assigns its results value, intelligence, and meaning. Is it another machine made up of another type of cells or connections? Where is it, and what is it about it that makes it aware and conscious?

Just looking at the data, we have a collection of atoms which strictly obey physical laws and which produce complex, intelligent behavior. How is this not an intelligent machine?

How do atoms create anything? What is it in atoms that makes them intelligent when they are in the brain of an intelligent human being but as dumb as dirt when they are in dirt?

I just can't understand any of the objections to this argument. Is the brain made of something besides atoms? How could that be if it arose from atoms (namely, those of the food we eat, or if you want to go back farther, star material)? Do the laws of physics not apply in the brain? Again, there is no conceivable reason to suppose this is true.

Now you have finally addressed the real question that we, and hundreds of millions of others, have been trying to answer for thousands of years. I guess our atoms are just not intelligent enough yet. :rolleyes:
 
Last edited:
  • #19
StatusX said:
What about industrial assembly lines? These take pieces of metal or plastic and produce very highly organized structures, like cars and dishwashers. They are created by people, but once built, can keep working with very little maintenance. True, if you left them alone for a very long time, there would eventually be a problem that would cause the line to stop functioning. But surely better software would fix this problem, or at least increase its lifetime, which is perfectly acceptable since even people eventually break down and stop functioning. These don't contain an elan vital, do they?

When does the assembly line become bored with making cars and decide to make washing machines, or better yet, decide to take a vacation and go fishing? When does this intelligent machine contemplate its designer and maker? When does it decide, or intend, or become aware of anything?

"Computers are the dumbest most aggravating machines ever made by man because they insist on doing exactly what they are told to do instead of being reasonable and doing what we want them to do."
 
  • #20
Royce said:
No, we can't, because we cannot agree on or know where intelligence, awareness, and consciousness lie. If the brain is made up of cells that switch on and off only in response to stimuli, inputs, then how many cells and how many connections does it take to become intelligent? If we then remove or kill one cell or one connection, does it become no longer intelligent or conscious?
That is Chalmers's argument.

What is? Chalmers says that experience arises in any information-processing system, or something similar to that. Cells don't necessarily behave in a simple on/off manner, but neither do all possible components of machines. Atoms follow the laws of physics, and if one arrangement of atoms can give rise to intelligent behavior, then there is no reason to assume many others cannot. Biological matter is no different from silicon: they are made of the same stuff and follow the same rules.

Tell me how a thinking machine can be aware, first, that it is thinking and, second, of what it is thinking, if it is what is doing the thinking in the first place. There has to be something more that is aware of what the brain machine is doing and assigns its results value, intelligence, and meaning. Is it another machine made up of another type of cells or connections? Where is it, and what is it about it that makes it aware and conscious?

We don't yet know what causes consciousness. Chalmers believes it's everywhere, while Dennett believes it's nowhere. What they agree on is that a brain and an extremely accurate model of a brain made of another material should act the same. They go further and claim they should also have the same state of consciousness. This is controversial, but it is less controversial to just assume they will both behave intelligently, since, by premise, they behave identically.

How do atoms create anything? What is it in atoms that makes them intelligent when they are in the brain of an intelligent human being but as dumb as dirt when they are in dirt?

Simple rules can give rise to very complicated behavior. For example, there are only a few symbols, axioms, and rules in Boolean algebra. Because of this, it is pretty simple, and it has the property that any well-formed statement can be proved true or false. Add some more axioms, symbols, and rules and you have the whole numbers, which are so complicated that simple theorems have gone unproven for hundreds of years, and some statements have the property that they can never be proven true or false. Even though there is not a huge difference in the specification of these two formal systems, the little extra complexity in the rules of the whole numbers gives rise to enormously more complex behavior. It is not the symbols themselves that "contain" the complexity of the system; together with the axioms and rules, complicated behavior can emerge. Likewise, atoms aren't intelligent, but arrangements of atoms can be.
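The decidability contrast drawn here can be sketched concretely: Boolean algebra admits a finite decision procedure (check every truth assignment), while no analogous procedure exists for arithmetic. A minimal tautology checker:

```python
from itertools import product

def is_tautology(formula, nvars):
    # Boolean logic is decidable: exhaustively test every truth assignment.
    return all(formula(*vals) for vals in product([False, True], repeat=nvars))

# Law of the excluded middle holds under every assignment:
print(is_tautology(lambda p: p or not p, 1))   # True
# "p and q" is satisfiable but not a tautology:
print(is_tautology(lambda p, q: p and q, 2))   # False
```

For arithmetic over the whole numbers, Goedel's and Church's results rule out any such exhaustive procedure, which is the asymmetry the example above points at.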

When does the assembly line become bored with making cars and decide to make washing machines, or better yet, decide to take a vacation and go fishing? When does this intelligent machine contemplate its designer and maker? When does it decide, or intend, or become aware of anything?

It doesn't, but are these the necessary conditions for life or intelligence? Is it inconceivable that a computer could be programmed to do those things? Or when it does, will you just redefine intelligence to be what computers cannot do?
 
Last edited:
  • #21
StatusX said:
What about industrial assembly lines? These take pieces of metal or plastic and produce very highly organized structures, like cars and dishwashers. They are created by people, but once built, can keep working with very little maintenance. True, if you left them alone for a very long time, there would eventually be a problem that would cause the line to stop functioning. But surely better software would fix this problem, or at least increase its lifetime, which is perfectly acceptable since even people eventually break down and stop functioning. These don't contain an elan vital, do they?

How did those assembly lines get organized? You admit it is people, but then go on like that is some small matter. Yes, once organized and given a power source, assembly lines can indeed behave quite intelligently, but it was human intelligence that made it happen! If you create the software, guess what's behind that? Human consciousness. To me it is incredible that mechanists keep undervaluing what they themselves, as consciousness, contribute to any mechanical endeavor beyond what might happen if a bunch of rocks fall on a board and accidentally create a bit of lever action.
 
  • #22
Dear Royce,
I just want to know: what new thing does this theorem say? I mean, what do you want to say with this theorem?
We all know what we mean by the intelligence behind a program (especially the ability of control that a program contains).
So I really want to know what you are going to say with this theorem.
Anything new, I mean.
Thanks in advance.
Somy.
 
  • #23
StatusX said:
What is? Chalmers says that experience arises in any information processing system, or something similar to that. Cells don't necessarily behave in simple on/off manner, but neither do all possible components of machines. Atoms follow the laws of physics, and if one arrangement of atoms can give rise to intelligent behavior, then there is no reason to assume many other ones can. Biological matter is no different than silicon in that they are made of the same stuff and follow the same rules.

One of Chalmers's arguments against consciousness emerging from increasing complexity is: where is the cutoff point? If we start with an intelligent, conscious machine and remove tiny bits of silicon or whatever, would we reach a point where we remove one more bit and no longer have an intelligent, conscious machine? This is what I was referring to.

We don't yet know what causes consciousness. Chalmers believes it's everywhere, while Dennett believes it's nowhere. What they agree on is that a brain and an extremely accurate model of a brain made of another material should act the same. They go further and claim they should also have the same state of consciousness. This is controversial, but it is less controversial to just assume they will both behave intelligently, since, by premise, they behave identically.
Simple rules can give rise to very complicated behavior. For example, there are only a few symbols, axioms, and rules in Boolean algebra. Because of this, it is pretty simple, and it has the property that any well-formed statement can be proved true or false. Add some more axioms, symbols, and rules and you have the whole numbers, which are so complicated that simple theorems have gone unproven for hundreds of years, and some statements have the property that they can never be proven true or false. Even though there is not a huge difference in the specification of these two formal systems, the little extra complexity in the rules of the whole numbers gives rise to enormously more complex behavior. It is not the symbols themselves that "contain" the complexity of the system; together with the axioms and rules, complicated behavior can emerge. Likewise, atoms aren't intelligent, but arrangements of atoms can be.

This is exactly what Penrose is arguing against. His premise is that consciousness cannot be created by any purely computational process, no matter how complicated or sophisticated, in principle as well as in fact. The best that can be accomplished is a simulation of consciousness that is not itself conscious. I am extending this argument to include intelligence, as IMO intelligence is a necessary component of consciousness.
 
  • #24
somy said:
Dear Royce,
I just want to know: what new thing does this theorem say? I mean, what do you want to say with this theorem?
We all know what we mean by the intelligence behind a program (especially the ability of control that a program contains).
So I really want to know what you are going to say with this theorem.
Anything new, I mean.
Thanks in advance.
Somy.

original post by Royce said:
If a process is completely computational, the process is not intelligent and does not contain or take intelligence to perform or complete.

Whatever intelligence is involved is that of the inventor, designer, and/or programmer of the computational algorithm. Once that is done, the rest is simply data processing according to the rules of the algorithm: not an intelligent process itself but one of duplication, repetition, and rote. Such actions do not involve actual thinking, intelligence, awareness, or consciousness. We used to call it plug and chug: once the formula is known or given, plug in the data and hit the go button.

What I am saying, or what the theorem says, in support of and in response to Penrose's argument that consciousness cannot be duplicated or created by purely computational methods, is that this also applies to intelligence.

I think it is much easier to prove that intelligence is not involved in any purely computational process, as intelligence is IMO a necessary component of awareness and consciousness. The intelligence is involved in the design of the processor, the invention of the algorithm, and the programming of the process into the processor. The computational process itself neither contains nor uses any intelligence, and therefore intelligence does not and cannot arise out of a purely computational process, no matter how complicated or sophisticated it may be or become. This also implies that Artificial Intelligence cannot be true creative intelligence but only an artificial simulation of it.
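The "plug and chug" idea from the opening post can be sketched in code (my illustration, not from the thread; the function is hypothetical). Once the quadratic formula is given, producing the roots is rote rule-following, with every step dictated by the algorithm rather than decided by the machine:

```python
import math

def quadratic_roots(a, b, c):
    """Mechanically apply the quadratic formula: pure 'plug and chug',
    where every step is prescribed by the algorithm in advance."""
    disc = b * b - 4 * a * c
    if disc < 0:
        return ()  # no real roots
    root = math.sqrt(disc)
    return ((-b + root) / (2 * a), (-b - root) / (2 * a))

print(quadratic_roots(1, -3, 2))  # (2.0, 1.0)
```

On the theorem's view, whatever insight exists here lives in the derivation of the formula, not in its evaluation.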
 
  • #25
Royce said:
One of Chalmers' arguments against consciousness emerging from increasing complexity is: where is the cutoff point? If we start with an intelligent, conscious machine and remove tiny bits of silicon or whatever, would we reach a point where we remove one more bit and no longer have an intelligent, conscious machine? This is what I was referring to.

I think you are misreading Chalmers. His argument is that experience will arise from any information-processing system because you cannot mark a cutoff point. He even argues that a thermometer, the simplest form of information processor, might possess a very primitive capacity to experience.
 
  • #26
loseyourname said:
I think you are misreading Chalmers. His argument is that experience will arise from any information-processing system because you cannot mark a cutoff point. He even argues that a thermometer, the simplest form of information processor, might possess a very primitive capacity to experience.

Maybe so. We may be referring to different books or parts of the same book, or I may be remembering a reference to his work that I read somewhere else. I don't know or remember, but it seemed to me that that was part of the easy problem of consciousness.

While a thermometer reacts to its environment, unless one believes that the universe and everything in it is conscious, I don't think that a thermometer experiences anything, as it is not conscious. Consciousness, including awareness and understanding, is required to experience anything in the real world. Subjective experience is of course something different, but I don't think that you or he are talking about that.

The whole point of this thread is to argue for the belief that there is something more to consciousness than the pure computational processes of the brain that may some day be duplicated by a machine. That there is something about consciousness that is non-computable and that is now beyond the present theories and knowledge of science. Science will have to expand its present area of inquiry and its methods of investigation to address consciousness. I do not say that it is an unknowable thing forever beyond science, only that at present, with the tools at hand, consciousness cannot be explained, accounted for, or duplicated by a machine even in principle.
 
  • #27
Royce said:
How do atoms create anything? What is it in atoms that makes them intelligent when they are in the brain of an intelligent human being but as dumb as dirt when they are in dirt?
You should read the new book by Gregg Rosenberg, A Place for Consciousness, that we are to discuss shortly. There are some interesting things in there with a new pitch; until then the rabbit will have to stay in its hat.
 
  • #28
Les Sleeth said:
How did those assembly lines get organized? You admit it is people, but then go on like that is some small matter. Yes, once organized and given a power source, assembly lines can indeed behave quite intelligently, but it was human intelligence that made it happen! If you create the software, guess what's behind that? Human consciousness. To me it is incredible that mechanists keep undervaluing what they themselves, as consciousness, contribute to any mechanical endeavor beyond what might happen if a bunch of rocks fall on a board and accidentally create a bit of lever action.

There are two forces we know of that are capable of creating organized systems and behavior: intelligence and evolution. Although the methods are very different, they are capable of achieving many common results: systems capable of seeing, hearing, moving, flying, swimming, digging, etc. Since we have already reached these goals, no one thinks there is anything mystical about them.

Evolution has proved capable of achieving intelligence, but the only intelligence that humans have managed to create so far has been the somewhat stupid computers. Now you may argue that the creation of real intelligence is beyond human ability, which I disagree with, but this does not mean it's non-physical. Evolution, a purely physical process, could do it. Nothing else that evolution has done, short of creating the first instance of life itself, is doubted to be purely physical. So why not intelligence, and to go a little farther, why not consciousness?
 
  • #29
Royce said:
While a thermometer reacts to its environment, unless one believes that the universe and everything in it is conscious, I don't think that a thermometer experiences anything, as it is not conscious. Consciousness, including awareness and understanding, is required to experience anything in the real world. Subjective experience is of course something different, but I don't think that you or he are talking about that.

Well, if this is your opinion, Chalmers' is almost the polar opposite. And in fact, subjective experience is exactly what he's talking about. As you've differentiated this from consciousness, which is not done when talking about the hard problem, you'll have to be more specific about what exactly you mean by consciousness.

The whole point of this thread is to argue for the belief that there is something more to consciousness than the pure computational processes of the brain that may some day be duplicated by a machine. That there is something about consciousness that is non-computable and that is now beyond the present theories and knowledge of science. Science will have to expand its present area of inquiry and its methods of investigation to address consciousness. I do not say that it is an unknowable thing forever beyond science, only that at present, with the tools at hand, consciousness cannot be explained, accounted for, or duplicated by a machine even in principle.

That it isn't currently explained is indisputable. That it can't be duplicated by a machine, even in principle, is much less certain, and presumes you know something about consciousness that you have already claimed isn't currently known.
 
  • #30
THE PROBLEM WITH THE CONSCIOUS INTELLIGENCE IN HUMANS

There are several causal and relational problems with conscious intelligence:

1) ENERGY

Conscious intelligence needs energy to work. It needs to be powered up by externally induced energy. The fundamental DESIGN and ENGINEERING problem confronting conscious intelligence is that it has no mechanism or process for recycling energy. This external dependency for energy shortchanges the hard-core epiphenomenalists.

2) LOGICALLY HARDWIRED

It needs to be hardwired with clear logical pathways to work in the first place. If you cut the key wires from the body, everything intelligible in a human system collapses. You can surgically destroy intelligence in humans by cutting the right wires. René Descartes saw this problem and attempted to counter it by introducing a dubious notion of spirits assisting the non-physical Soul.

3) INDEPENDENCE

The notion of independence of conscious intelligence from the body is undermined by problems (1) and (2). Its dependence on the body and on carefully laid-out and hardwired logical pathways to function undermines the very notion of independence. The claim that conscious intelligence pre-exists and post-exists the material body begs the question of what it wants with the body in the first place. Worse still, the notion of 'Absolute Independence' is completely out of the question, because this would imply something superior coming into a relation with something inferior (the mortal material body). In fact, absolute independence can also mean something that is so structurally and functionally perfect that it is incapable of relating to any other thing. Either way, the notion is fundamentally problematic.

4) MOTION

Conscious intelligence relies on motion. Grind an intelligent system to a halt and out goes conscious intelligence! Stop the world in its tracks, and conscious intelligence evaporates with it!

All these problems together seem to suggest that conscious intelligence comes into existence at the same time as the body and perhaps also perishes with the body. Hence, the chance of continuity of life after death seems very slim. Perhaps we are better off pursuing the notions of 'Life Continuity' and 'Immortality' scientifically, whereby we seek scientific methods for creating an immortal human being in a physical, material sense. At least there is nothing which logically rules this option out. Although this is heavily contested in many quarters, it cannot by any device of logic be completely ruled out as a viable option.
 
  • #31
Good points Philocrat!
 
  • #32
StatusX said:
Well, if this is your opinion, Chalmers' is almost the polar opposite. And in fact, subjective experience is exactly what he's talking about. As you've differentiated this from consciousness, which is not done when talking about the hard problem, you'll have to be more specific about what exactly you mean by consciousness.

I knew that introducing subjective experience at this point was a mistake.
I am trying simply to take it one step at a time, but that doesn't seem to be where this thread is going. IMHO all experience is ultimately subjective, as the actual experience happens in the mind/brain. I differentiated it because there is a difference between the physical, awake experience of life and the purely mental experience of dreams, imagination, and meditation. As far as Chalmers is concerned, I liked his argument of subtracting the smallest possible part from a conscious entity until it is no longer conscious, to determine the point or minimum complexity needed for consciousness. This shows, according to him, the absurdity of consciousness being nothing more than a matter of complexity. Unless of course I have completely misread that passage.

That it isn't currently explained is indisputable. That it can't be duplicated by a machine, even in principle, is much less certain, and presumes you know something about consciousness that you have already claimed isn't currently known.

Well, that, at least in part, is what the theorem is about. As a machine is only capable of performing computations, regardless of how complex or sophisticated, it involves no intelligence in performing those computations and therefore cannot be considered intelligent, nor is it possible for it to be or become conscious no matter how complex it may become. It seems to me that something else is needed that is as yet undiscovered and not capable of being computed or duplicated by us.
 
  • #33
Philocrat said:
THE PROBLEM WITH THE CONSCIOUS INTELLIGENCE IN HUMANS

There are several causal and relational problems with conscious intelligence:

1) ENERGY

Conscious intelligence needs energy to work. It needs to be powered up by externally induced energy. The fundamental DESIGN and ENGINEERING problem confronting conscious intelligence is that it has no mechanism or process for recycling energy. This external dependency for energy shortchanges the hard-core epiphenomenalists.

This is nothing more than an assumption. It may take physical energy for our brains to function and for our bodies to be alive and conscious, but it has not been shown that actual intelligence and consciousness are dependent on physical energy.

2) LOGICALLY HARDWIRED

It needs to be hardwired with clear logical pathways to work in the first place. If you cut the key wires from the body, everything intelligible in a human system collapses. You can surgically destroy intelligence in humans by cutting the right wires. René Descartes saw this problem and attempted to counter it by introducing a dubious notion of spirits assisting the non-physical Soul.

It has been shown that the human brain is not hardwired but changes its connections as new things are learned, whether intellectual knowledge or a physical skill such as learning to swing a golf club.

3) INDEPENDENCE

The notion of independence of conscious intelligence from the body is undermined by problems (1) and (2). Its dependence on the body and on carefully laid-out and hardwired logical pathways to function undermines the very notion of independence. The claim that conscious intelligence pre-exists and post-exists the material body begs the question of what it wants with the body in the first place. Worse still, the notion of 'Absolute Independence' is completely out of the question, because this would imply something superior coming into a relation with something inferior (the mortal material body). In fact, absolute independence can also mean something that is so structurally and functionally perfect that it is incapable of relating to any other thing. Either way, the notion is fundamentally problematic.

Since 1 and 2 are not necessarily true and not proven (in the case of 2, proven not to be the case), the above does not follow at all. It may or may not be true, and there are many who are convinced through personal experience that it is not true.

4) MOTION

Conscious intelligence relies on motion. Grind an intelligent system to a halt and out goes conscious intelligence! Stop the world in its tracks, and conscious intelligence evaporates with it!

All these problems together seem to suggest that conscious intelligence comes into existence at the same time as the body and perhaps also perishes with the body. Hence, the chance of continuity of life after death seems very slim. Perhaps we are better off pursuing the notions of 'Life Continuity' and 'Immortality' scientifically, whereby we seek scientific methods for creating an immortal human being in a physical, material sense. At least there is nothing which logically rules this option out. Although this is heavily contested in many quarters, it cannot by any device of logic be completely ruled out as a viable option.

This doesn't seem relevant at all. Why should intelligence and consciousness depend on physical motion at all? Yes, everything is in motion, but is there a causal or correlational relationship?
 
  • #34
Royce said:
This is nothing more than an assumption. It may take physical energy for our brains to function and for our bodies to be alive and conscious, but it has not been shown that actual intelligence and consciousness are dependent on physical energy.

I believe that what the fMRI images show is energy consumption, in the form of glucose usage. These studies have shown patterns of energy consumption during tasks like problem solving and meditation. It's hard to imagine what facet of consciousness you believe not to be captured in these studies; the subjects are experiencing "what it is like" to solve problems or meditate, and all the while their brains are consuming energy in specific areas. The burden of proof is on you to show that they can experience without thinking and using energy.
 
  • #35
Yes, as I said, the brain takes and uses energy to stay alive and function. But consciousness and intelligence have not been absolutely tied to the brain's functioning.
It takes electrochemical energy in the brain to think, but is that all that awareness, intelligence, and consciousness are? Is the production and use of electrochemical energy the cause or the effect? Is electrochemical energy all there is to consciousness and intelligence? This is the very thing that this thread is about. I, among many others, think that it is the effect rather than the cause, and that there is more to intelligence and consciousness than the consumption of energy to drive computational processes.
There is no burden of proof on either side, as none of this speculation can be proved as of yet anyway. There is only a 'burden of proof' if one is committed to the physicalist paradigm that all mentality is computational processes in the physical brain.
 