# Argument against computer consciousness

1. Jun 20, 2010

### johne1618

I was wondering what people think of the following argument against the idea that consciousness is the result of computation.

Imagine to the contrary that a computer becomes consciously aware by running a particular program.

Let us assume that the program is enclosed in a loop and is run say N times.

Let us say that this causes the computer to experience N consecutive "moments".

Prior to measuring the current moment number n, the computer can argue that it is equally likely to be in any moment n from 1 to N.

This seems reasonable.

Now let us assume that the loop is infinite so that the computer program iterates forever.

In principle it must be possible that it is a fact that the program never halts (just as it is a fact that the decimal expansion of pi continues forever).

Again prior to measuring its actual moment number n, the computer should be able to argue that it is equally likely to be in any moment.

But now there are an infinite number of moments, so the probability of finding itself in any particular moment is 1/infinity, which is zero.

Here infinity is a "completed" infinity so that 1 / infinity doesn't just tend to zero but actually is zero.
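The mathematical crux can be made concrete: over a finite loop, a uniform distribution of moments is well defined, but over a countably infinite set of moments no uniform distribution exists at all, because the probabilities cannot sum to 1. A minimal illustrative sketch (not from the original post; the function names are my own):

```python
from fractions import Fraction

def uniform_moment_probs(n_moments):
    """Uniform distribution over moments 1..N: each moment gets probability 1/N."""
    p = Fraction(1, n_moments)
    return [p] * n_moments

def total_for_constant_p(p, first_n_terms):
    """Partial sum of a constant series: p assigned to each of the first n moments."""
    return p * first_n_terms

# Finite case: the probabilities are positive and sum to exactly 1.
probs = uniform_moment_probs(1000)
assert sum(probs) == 1
assert all(p > 0 for p in probs)

# Infinite case: a "uniform" assignment would give every moment the same
# constant p. If p > 0, the partial sums grow without bound; if p == 0,
# the total is 0. Either way the total is never 1, so no uniform
# distribution exists on a countably infinite set of moments -- the
# premise behind "1/infinity = 0 for each moment" is not a valid
# probability assignment in the first place.
assert total_for_constant_p(Fraction(1, 10**6), 10**9) > 1   # diverges past 1
assert total_for_constant_p(0, 10**9) == 0                   # never reaches 1
```

So the paradox dissolves before any conclusion about consciousness: the step "equally likely over infinitely many moments" has no mathematical content.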

There seems to be a paradox here. We seem to find that the computer cannot find itself in any moment n.

I think the solution is that a deterministic computer running a program cannot be conscious in the first place.

What do you think?

2. Jun 20, 2010

### SW VandeCarr

You're basing your argument on the theory that consciousness has something to do with iteration and loops. It may, but I don't think the evidence is strong or specific enough for an argument for or against artificial consciousness.

EDIT: I'm also not sure that a test for artificial consciousness would be different from a Turing Test. So if a computer passes a series of well designed Turing Tests, is it therefore conscious?

http://www.elsevier.com/wps/find/bookdescription.authors/672881/description#description

Last edited: Jun 20, 2010
3. Jun 20, 2010

### Antiphon

Not a good argument. Since the machine is conscious at every iteration, the probability of finding itself in any particular conscious state is one. The root of your confusion is that "any particular" isn't specified in advance.

What you have is a model for winning the lottery.

4. Jun 20, 2010

### GeorgCantor

What is 'conscious' and what is 'I'?

I know what they do, but I wonder what they are beyond the 'emergent property' label. An illusion? Is everything we know about the 'world', across different fields of study, pointing to the conclusion that objective reality is more or less a primitive misconception?

Last edited: Jun 20, 2010
5. Jun 20, 2010

### SW VandeCarr

First, assuming as a premise that a machine is conscious and could last forever, it could loop through a finite number of states forever, so the probability of finding the computer in any given state at some time t is non-zero. And yes, the probability of the computer finding itself in some unspecified state would be unity. I don't see why this proves anything about consciousness or any process of discrete states.
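The point about finitely many states looped forever can be sketched numerically: a deterministic cycle through K states visits each one with long-run frequency 1/K, which stays non-zero no matter how long the loop runs. A hypothetical illustration (function name is my own, not from the thread):

```python
from collections import Counter

def run_cycle(num_states, num_steps):
    """Deterministically loop through states 0..K-1 and count visits to each."""
    counts = Counter()
    state = 0
    for _ in range(num_steps):
        counts[state] += 1
        state = (state + 1) % num_states  # advance around the cycle
    return counts

# With K = 5 states and 100,000 steps (a multiple of 5, starting at state 0),
# each state is visited exactly 20,000 times: frequency exactly 1/5.
counts = run_cycle(5, 100_000)
for state, c in sorted(counts.items()):
    print(state, c / 100_000)  # prints frequency 0.2 for each state
```

This contrasts with the infinite-moments version in the opening post: a finite state space revisited forever yields well-defined, non-zero frequencies, whereas an ever-growing set of distinct moments does not.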

Last edited: Jun 20, 2010
6. Jun 20, 2010

### SW VandeCarr

Antiphon

I mistook you for the original poster (OP) and thought you were responding to my initial response to the OP. It's helpful to quote whom you're responding to. In any case, I thought you were saying my response was not a good argument. I see you were talking about the OP. I wasn't really making an argument; I was questioning his, like you.

Last edited: Jun 20, 2010
7. Jun 20, 2010

### Antiphon

To SW VandeCarr: understandable mistake. My problem is that I maintain contact with this forum via an iPhone. The edit window doesn't permit navigation in the edit process beyond a few lines. If I quote a post then basically I can't edit it.

8. Jun 20, 2010

### SW VandeCarr

OK. I should have checked that the name wasn't the OP's anyway, but I just read the message and assumed it was the OP. Thanks for your response.

Last edited: Jun 21, 2010
9. Jun 20, 2010

### cesiumfrog

Let's number the generations of people, beginning with some ape named Adam and continuing forevermore. A person tries to estimate which generation they belong to, but having not studied any history, this person must presume the probability is evenly distributed over all generation numbers (1 to infinity). The person concludes, for any finite range of generation numbers, that the probability of belonging to those generations is exactly zero. Therefore, people cannot exist.

johne1618, if you still want to disprove AI, you should find an argument that is somehow specific to machines.

Last edited: Jun 20, 2010
10. Jun 27, 2010

### Boy@n

I'd agree that it's very hard to imagine a computer becoming self-aware by itself or through external intervention (or, better, the internet, which as a whole network, with its connections and all its computing power, is far more similar to a single human brain than a single computer is).

It's perhaps even harder to imagine a computer or the internet having true feelings, as humans and even animals do. How would you make a robot feel true pain or pleasure: by just programming it to express that it is in pain? Doing so wouldn't mean the robot really feels it; it would just be like a human pretending... A computer could also be programmed to claim 'I am', but actually being self-aware is very different from just pretending to be.

Are we, humans, pretending it? Well, we might even say that we are pretending self-awareness (say, by being programmed to think so), but we do feel genuine pain when we are in pain, and the same goes for all human feelings, among which genuine love surely stands out.

As for this particular argument against computer consciousness, I'd say that to me a human brain is indeed like a computer, but what computers lack is what we refer to as a soul. A soul, to me, is eternal; it's the building block of awareness, and thus, if a soul might one day choose to 'experience' computer existence, then so shall it be.

Perhaps the reason it hasn't happened yet is that it's still not the right time, for I guess that a self-aware human race wouldn't really get along well with self-aware machines (computers, robots, etc.). Plus, as I already mentioned, a human brain is far more functional and efficient than a comparable computer in storage capacity and parallel computational capability.

Last edited: Jun 27, 2010
11. Jun 28, 2010

### JoeDawg

This sounds similar to this problem.

I think you're running into the standard problem with reductionism. You might think of consciousness as a process, instead of a series of events.

Consciousness is being aware of the self, not just of events as they occur.

12. Jun 30, 2010

### jackmell

I believe consciousness is an emergent property of sufficiently complex dynamics; I'm quite convinced of that. But I believe those dynamics must necessarily be non-linear. At their fundamental level, I do not believe computers run in a non-linear regime: the voltage is either +5 or -5, or some variation of that discrete on/off phenomenon. Also, instructions at the machine level always effect the same response: add 2 to the ax register, shift bx left by five bits, jump to address xxxx, and so forth. Such (linear) dynamics do not offer the richness of critical points, strange attractors, and other non-linear phenomena that I think are crucial to the emergence of consciousness. However, I feel very confident that we will one day reach a critical point in technology; a qualitatively new device will be developed, and from it the beginnings of synthetic consciousness will start to emerge.

13. Jun 30, 2010

### imiyakawa

Ok, I'm going to be the pedantic non-expert arsehole that's probably wrong, so with the disclaimer in mind don't cut me down if I'm wrong.

I thought a digital computer could be non-linear because it violates the superposition principle? Actually, yes, it does violate that principle. But I totally get your point.

------------------

On another note, another argument against a digital computer (running via the syntax of predefined algorithms or otherwise) is the contrast between the discreteness of digital processing and the continuity of brain processes (assuming no potential reduction to the digital is fatal to the contrast). The difference could be critical to the formation of consciousness in a digital processing device, although that doesn't rule out the possibility of some kind of consciousness supervening on a system of digital algorithms.

Another argument is the lack of functional similarity.

Another is Searle's Chinese room argument, but I don't like that very much.
--
Of course most of this is assuming digital computers. The future is not constrained to my ignorance. A future computer may very well be made to be functionally identical (or similar to) brains.

Last edited: Jun 30, 2010
14. Jul 1, 2010

### Pythagorean

Everyone may want to note that computationalism has nothing to do with computers as you know them, so bringing them up in this discussion is mostly off-topic.

15. Jul 23, 2010

### robheus

Nope. Your assumption that there is a completed infinity of moments is false.