# Royce's Theorem

1. Jan 3, 2005

### Royce

I am reading Roger Penrose's Shadows of the Mind (just started). He opens the book by arguing, again, that consciousness cannot be computational even in principle. This got me thinking about AI, etc., and I came up with the following, which I call Royce's Theorem.

If a process is completely computational, the process is not intelligent and does not contain or take intelligence to perform or complete.

Whatever intelligence is involved is that of the inventor, designer, and/or programmer of the computational algorithm. Once that is done, the rest is simply data processing according to the rules of the algorithm: not an intelligent process itself, but one of duplication, repetition and rote. Such actions do not involve actual thinking, intelligence, awareness or consciousness. We used to call it plug and chug. Once the formula is known or given, plug in the data and hit the go button.

This can and does have many implications, but I would rather toss this up and have it batted around a while before getting into any of them.

2. Jan 3, 2005

### somy

But there was an intelligence behind the program. I mean the programmer!!!
And I think that is very important.

3. Jan 3, 2005

### Les Sleeth

Yes, it is, because no one has been able to show any way for a program to avoid turning repetitive, or lapsing into dumb randomness, except when human consciousness steps in and makes new adjustments.

It reminds me of someone lining up dominos in the most creative ways possible, like in those big contests they have in Japan, where when the dominos start falling they flip things on, send things flying, turn stuff, etc. Now what if someone creating that domino pattern hoped it might learn to do it for itself? Every time he fails to produce a self-creating domino pattern, he believes it is because the pattern wasn't complicated enough. So he makes it more and more complicated. Yet the problem isn't solved no matter how complex he makes it, because at the root of the process is the same thing, falling dominos; but falling dominos isn't creativity, which is what organizes the dominos.

So I think Royce is correct. To me it is the same problem with hoping computer programming will spawn consciousness. Programming is falling black or white dominos, but if that isn't what consciousness is, then it's hopeless for programmers to keep trying more complexity to create consciousness.

4. Jan 3, 2005

### StatusX

But he could, conceivably, create a domino pattern that could add numbers, or prove number-theoretic theorems, or express a belief, assuming dominoes are capable of simulating Turing machines. A paraplegic couldn't organize a domino pattern either, but that doesn't mean he isn't creative.
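The claim that dominoes can simulate a Turing machine rests on a simpler fact: cascading binary events can implement logic gates, and gates compose into arithmetic. A minimal sketch of that idea (ordinary Python booleans stand in for falling/standing dominoes; none of this code is from the thread):

```python
# Toy illustration: chains of binary "dominoes" composed into logic gates.
# A fallen domino chain is True, a standing one is False.

def AND(a, b):
    # An arrangement that falls only if both input chains fall.
    return a and b

def OR(a, b):
    # An arrangement that falls if either input chain falls.
    return a or b

def NOT(a):
    # An inverting arrangement: "falls" exactly when the input does not.
    return not a

def half_adder(a, b):
    """Add two one-bit numbers using only gate compositions."""
    total = AND(OR(a, b), NOT(AND(a, b)))  # XOR built from AND/OR/NOT
    carry = AND(a, b)
    return total, carry

# 1 + 1 = binary 10: sum bit off, carry bit on
print(half_adder(True, True))   # (False, True)
print(half_adder(True, False))  # (True, False)
```

Nothing in any single domino "knows" arithmetic; the addition lives in how the chains are wired together, which is exactly the point under dispute in this thread.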

5. Jan 3, 2005

### Rader

How many parts of a human do you have to replace before it becomes nonhuman? If nonhuman means nonhuman consciousness, you can add or subtract parts.

If we replaced nearly all its parts, could we then conclude that consciousness is not physical?

The ghost in the machine just might decide to try out the new toys we build.

6. Jan 3, 2005

### Royce

Les and Rader, you both obviously see where this can lead. It makes AI, Artificial Intelligence, incapable of being anything more than artificial, in principle as well as in fact.
And again, if we apply Chalmers's argument to intelligence, then when does it become intelligent or stop being intelligent?
If the theorem is always and completely true, then Penrose's position that consciousness is not computable in principle is true. Therefore intelligence and/or consciousness cannot be emergent phenomena.

But, is it always completely true? I obviously think it is.

7. Jan 3, 2005

### Les Sleeth

That's correct, he could. But Royce allowed that it is possible to create programs that compute (and I'm pretty sure he'd agree programs can mindlessly express things). The issue is whether computing power can get "intelligent," which I assumed he meant could creatively think up and design. If we downgrade intelligence to simply computing power, then sure, a computer is intelligent.

Yes, but that seems off point. He couldn't physically organize the pattern, but he could still design it. As of now a computer can only design what human consciousness programs it to do.

8. Jan 3, 2005

### StatusX

In my opinion, creativity occurs when randomly created ideas are analyzed to see if they are interesting. Probably most of this filtering occurs subconsciously, so sometimes we will get an inspiration and have no idea where it came from. I see no reason whatsoever why a mechanical computer couldn't replicate this process, and if it could, it would be indistinguishable from human creativity, even if you don't agree with my explanation of it.

My point was that the only reason a domino pattern couldn't construct another domino pattern is because it isn't physically equipped to do so. It could conceivably "think up" new arrangements by forming certain complex patterns that a computer could interpret and then create using a robotic arm.

Also, regarding the topic, what you have isn't really a theorem but is instead just your definition of intelligence, which I happen to disagree with. Intelligence is not well defined, and unless you have another definition of intelligence in mind that you are referring to here, this theorem has no real content besides defining a term.

Last edited: Jan 3, 2005
9. Jan 3, 2005

### Les Sleeth

Okay, I'll go for that.

I can't figure out why mechanists think a computer will recognize "interesting" or anything else that is determined by quality. How is a computer going to get inspired?

I was watching a special on Google and their founders last night. One of the criticisms by other analysts was that Google searches tend to prioritize what's most negative. The founders said they haven't been able to figure out how to stop their program from doing that (they have plenty of help too, and are hiring 25 more people per month). I know I have posted over 1400 times here, and over 500 times at the old PF. Yet when I do a Google search of my name, up pops the one humorous post I did about the biology of MASTURBATION (that's how big it seems to me). Before that I posted in a thread where someone asked if it was okay to view porno at work, and that one popped up!

One reason I see why a "mechanical computer couldn't replicate" creativity/intelligence is because so far computers and their programs have shown themselves to be utterly stupid. It is only people's prior belief in physicalism, or believing consciousness is the result of neuronal complexity, that makes them claim computers can be conscious. There is not the slightest bit of objective evidence yet to indicate that.

10. Jan 3, 2005

### Royce

11. Jan 3, 2005

### Math Is Hard

Staff Emeritus
You just reminded me of a certain caveat my networking instructor used to issue to his students:

"Never anthropomorphize your computers. They don't like it."

12. Jan 3, 2005

### Les Sleeth

:rofl: :rofl: :rofl: Great observation! We should send this to Chalmers to help him with his next debate with Dennett.

13. Jan 3, 2005

### StatusX

This is in response to Les Sleeth's and Royce's posts, since you both argued essentially the same point: that mechanical processes can never behave in a way we could reasonably call intelligent.

I hate to beat a dead horse, but look at the brain objectively. It is a collection of atoms that interacts with the environment in a way we can all agree is "intelligent". Does it experience thought? Maybe. We believe our own brain does, but we know nothing of others' brains. Just looking at the data, we have a collection of atoms which strictly obey physical laws and which produce complex, intelligent behavior. How is this not an intelligent machine?

I just can't understand any of the objections to this argument. Is the brain made of something besides atoms? How could that be if it arose from atoms (namely, those of the food we eat, or if you want to go back farther, star material)? Do the laws of physics not apply in the brain? Again, there is no conceivable reason to suppose this is true.

14. Jan 3, 2005

### 0TheSwerve0

An organism has desires that fuel and give direction to that "machine," i.e. the brain, for starters. Maybe if you could wire in a desire to survive, or an aim, it would be different (or at least appear to be).

15. Jan 3, 2005

### Royce

If we want to go with the computer analogy then we must realize that the computer consists of switches only. The switches are hard wired and can only turn from off to on or from on to off only in response to its input. The fact that a switch is on or off has no meaning other than what the programmer assigns it. Take for example a light, led, on your computer panel. It is either illuminated or not in response to a switch at its input. We have no idea what the light being on means unless it is labeled or has a position that has previously been assigned a significance. It could mean that the power switch is on, it could mean that our hard drive is being accessed or it could mean that there is a floppy or CD present in the drive.
The point being that the light is not intelligent, nor does it convey any intelligence or information, unless it has been previously assigned a value by the designer and we know what the designer assigned it to mean. For all we know the light being off may be a good thing or bad, may indicate proper functioning or improper functioning, or, as is the case with many indicators on the simulator on which I work, may not mean anything because the lights don't apply to this model. It is still a light and it still switches on and off and it is either red or green or yellow, some are even labeled, but in this particular case they have no meaning and convey no information because none has been assigned in this particular case.
Intelligence is designed into a machine. The machine can only respond as designed to the data fed into it, and the ones and zeroes, or the light or dark places, have meaning and convey information only if we all know what the designer intended them to mean. Someone who does not know, understand and read the English language as used in the United States could make no sense of what any of us have written here.
It would convey no information and would convey no intelligence. In order to understand the results of a process we have to know and understand beforehand the form in which the information is encoded.
Where then does any intelligence lie within the machine?
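Royce's point that a switch state carries no meaning until a designer assigns one can be shown directly: the very same bit pattern is "a number," "a letter," or "a bank of indicator lights" depending entirely on the interpretation convention brought to it. A small sketch (the flag names below are invented for illustration, not any real panel):

```python
raw = 0b01000001  # one fixed pattern of eight on/off switches

# The pattern itself is just the pattern; each "meaning" below is a
# convention layered on top by a designer, not something in the bits.
as_number = raw           # read as an unsigned integer
as_letter = chr(raw)      # read as an ASCII character code
as_flags = {              # read as a bank of hypothetical indicator lights
    "power_on":  bool(raw & 0b00000001),
    "disk_busy": bool(raw & 0b01000000),
}

print(as_number)  # 65
print(as_letter)  # A
print(as_flags)   # {'power_on': True, 'disk_busy': True}
```

Three incompatible readings, one physical state: the interpretation lives in the reader's convention, which is the substance of Royce's question.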

16. Jan 3, 2005

### Les Sleeth

I don't think you are suggesting looking at the brain objectively. Rather, you are saying to look objectively at what's observable by our senses. With our senses it is true we can only observe atoms, conformity to physical laws, complex behavior . . . If you sit still and experience your own consciousness, however, it won't be with the senses. Yet no one wants to include that information in the model.

But there is another factor to consider, and that is the thus-far observed behavior of matter which is not "alive." It doesn't exhibit anything creative enough to make us believe it can organize itself for billions of years to produce "livingness" and consciousness. It mostly just disorganizes. So why does anyone have faith that physicalness can produce life and consciousness?

If you were Einstein, and you delivered your paper on relativity via the radio, what would you think if the world's population credited the radio with being a genius? Just because we see intelligence associated with the brain doesn't mean the brain is creating it.

17. Jan 3, 2005

### StatusX

I edited my post, and I don't know if you saw it, but to reiterate: the brain is made of star material. Is this special quality you suggest, which we can't observe, present in stars too, or does it arise at some point in the formation of a brain? Like I said, there is no reason to believe in such a quality.

Keep in mind, a physicalist will claim your beliefs are physically caused, and do not necessarily say anything about the world except that something about it causes you to have those beliefs. That the cause is what you believe it to be (that the beliefs result from your interaction with this higher realm) is not a priori true. The same goes for me. I recognize that it feels as if I am making decisions for myself and that there is some essence of what it means to be me, call it a soul if you want. But I also recognize that if physicalism is true, it could conceivably explain why I have these feelings even if they aren't true.

What about industrial assembly lines? These take pieces of metal or plastic and produce very highly organized structures, like cars and dishwashers. They are created by people, but once built, they can keep working with very little maintenance. True, if you left one alone for a very long time, there would eventually be a problem that would cause the line to stop functioning. But surely better software would fix this problem, or at least increase the line's lifetime, which is perfectly acceptable since even people eventually break down and stop functioning. These don't contain an elan vital, do they?

18. Jan 3, 2005

### Royce

No, we can't, because we cannot agree on or know where intelligence, awareness and consciousness lie. If the brain is made up of cells that switch on and off only in response to stimuli, inputs, then how many cells and how many connections does it take to become intelligent? If we then remove or kill one cell or one connection, does it become no longer intelligent or conscious?
That is Chalmers's argument.

Tell me how a thinking machine can be aware, first, that it is thinking and, second, of what it is thinking, if it is what is doing the thinking in the first place. There has to be something more that is aware of what the brain machine is doing and assigns its results value, intelligence and meaning. Is it another machine made up of another type of cells or connections? Where is it, and what is it about it that makes it aware and conscious?

How do atoms create anything? What is it in atoms that makes them intelligent when they are in the brain of an intelligent human being but as dumb as dirt when they are in dirt?

Now you have finally addressed the real question that we, and hundreds of millions of others, have been trying to answer for thousands of years. I guess our atoms are just not intelligent enough yet.

19. Jan 3, 2005

### Royce

When does the assembly line become bored with making cars and decide to make washing machines, or better yet decide to take a vacation and go fishing? When does this intelligent machine contemplate its designer and maker? When does it decide, or intend, or become aware of anything?

"Computers are the dumbest most aggravating machines ever made by man because they insist on doing exactly what they are told to do instead of being reasonable and doing what we want them to do."

20. Jan 3, 2005

### StatusX

What is? Chalmers says that experience arises in any information-processing system, or something similar to that. Cells don't necessarily behave in a simple on/off manner, but neither do all possible components of machines. Atoms follow the laws of physics, and if one arrangement of atoms can give rise to intelligent behavior, then there is no reason to assume many others cannot. Biological matter is no different from silicon: both are made of the same stuff and follow the same rules.

We don't yet know what causes consciousness. Chalmers believes it's everywhere, while Dennett believes it's nowhere. What they agree on is that a brain and an extremely accurate model of a brain made of another material should act the same. They go further and claim they should also have the same state of consciousness. This is controversial, but it is less controversial to just assume they will both behave intelligently, since by premise they behave identically.

Complex systems can give rise to very complicated behavior. For example, there are only a few symbols, axioms and rules in boolean algebra. Because of this, it is pretty simple, and it has the property that any well-formed statement can be proved true or false. Add some more axioms, symbols and rules and you have the whole numbers, which are so complicated that simple theorems have gone unproven for hundreds of years and some statements have the property that they can never be proven true or false. Even though there is not a huge difference in the specification of these two formal systems, the little extra complexity in the rules of whole-number arithmetic gives rise to enormously more complex behavior. It is not the symbols themselves that "contain" the complexity of the system; together with the axioms and rules, complicated behavior can emerge. Likewise, atoms aren't intelligent, but arrangements of atoms can be.
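The contrast StatusX draws here is a real one: propositional (boolean) logic has a mechanical decision procedure, while whole-number arithmetic provably has none (Gödel, Church, Turing). A brute-force truth-table checker, sketched below with formulas represented as Python functions, is exactly such a procedure for the boolean case:

```python
from itertools import product

def is_tautology(formula, n_vars):
    """Decide a propositional formula by checking every truth assignment.
    This exhaustive procedure always terminates for boolean logic; no
    analogous procedure can exist for statements of whole-number
    arithmetic (Church/Turing, 1936)."""
    return all(formula(*vals) for vals in product([False, True], repeat=n_vars))

# Law of excluded middle: a tautology, confirmed in 2 checks.
print(is_tautology(lambda p: p or not p, 1))           # True

# Contraposition: (p -> q) is equivalent to (not q -> not p).
print(is_tautology(lambda p, q: ((not p) or q) == (q or (not p)), 2))  # True

# "p or q" is not a tautology: it fails when both are False.
print(is_tautology(lambda p, q: p or q, 2))            # False
```

The checker's cost doubles with every added variable, but it always halts with a verdict; that guaranteed verdict is what the "little extra complexity" of arithmetic destroys.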

It doesn't, but are these the necessary conditions for life or intelligence? Is it inconceivable that a computer be programmed to do those things? Or when it does, will you just redefine intelligence to be what computers cannot do?
