Staff Emeritus
Gold Member
Originally posted by Mentat
I disagree. If a computer were to use the same stimuli on different people, the result should be the same. There needn't be any "underlying representation" (which strikes me as (to use your term) "phenomenological chairs flying around").
There are no phenomenological chairs flying around (beyond what is perceived by each individual, of course).

Take the case of two people in the matrix, A and B, looking at the same chair from different angles. The computer cannot be feeding the same stimuli into both people, or else they would see the chair from the same angle. There must be some representation of the chair stored as data in the computer to ensure that a) what A and B see is logically consistent with their respective POVs and b) A and B see a logically consistent construct when they look at the chair.

For instance, say A is looking at the chair from directly above, and say B is looking at the chair directly from the side. Suppose there is a circular stain on the seat of the chair. To satisfy a), A must see what looks essentially like a square, and B must see essentially what looks like an angular, lower case 'h'. To satisfy b), A must see the stain on the chair appear to be perfectly circular, and B must see the stain as a compressed ellipse, in such a way that it is consistent with looking at the circular stain from his glancing angle. The computer cannot satisfy a) or b) without keeping track of where the chair is located in the room, or where the observers are with respect to the chair. This mechanism of "keeping track" is simply the computer's internal representation of the room, the chair, and the observers' "matrix bodies." If there were no such internal mechanisms for keeping track of where things were, there could not be a logically consistent presentation of the room to both A and B.
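The bookkeeping being described can be put in a few lines of toy code (the names, coordinates, and flat 2-D geometry here are all invented for illustration): a single stored chair position is enough to derive a consistent view for any observer, which is exactly the "internal representation" at issue.

```python
import math

# Shared world state: the simulator's single internal representation of the chair.
world = {"chair": {"x": 2.0, "y": 3.0}}

def view_of_chair(observer_x, observer_y):
    """Derive what an observer sees from the one stored chair position."""
    chair = world["chair"]
    dx = chair["x"] - observer_x
    dy = chair["y"] - observer_y
    distance = math.hypot(dx, dy)
    bearing = math.degrees(math.atan2(dy, dx))
    return {"distance": round(distance, 2), "bearing": round(bearing, 1)}

# A and B look at the same chair from different positions...
view_a = view_of_chair(2.0, 0.0)
view_b = view_of_chair(0.0, 3.0)

# ...and get different, mutually consistent views, because both are
# projections of the same stored location.
print(view_a)
print(view_b)
```

The two views differ, yet neither can contradict the other, since both are computed from the same stored data.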

Thus your argument is that it is possible for Dualism to be logically consistent (note: you are not taking a neutral PoV, which would be to refute the arguments of the non-dualist, but are instead making a case for the logical consistency of dualism), and that argument should be able to stand, with fewer assumptions, against the argument to the contrary, shouldn't it?
My argument is simply that dualism cannot be ruled out on purely logical bases; that is, that dualism can be logically consistent. We may doubt its veracity on the basis of heuristics such as OR, but that is not a purely logical criterion of judgement; it says nothing about the logical consistency of the framework.

But I don't see any reason to agree to this. If there are objective entities that elicit subjective awareness in the minds of humans, then these objective entities must exist as "phenomenological chairs floating around", so to speak. If, OTOH, Dennett is right, and there are no objective entities in the matrix, then there is only one reality, in which electrochemical stimulation produces subjective awareness in humans (no extra entities required).
You are still thinking in the wrong terms. The matrix is basically a bunch of data stored in computers. What people hooked up to the matrix see is not the most fundamental aspect of the matrix-- the most fundamental aspect is the data in the computers. It is a simple analogy to how we usually think of the real world.

data in matrix : matrix perceivers :: atoms/photons : 'real world' perceivers

This analogy works insofar as in both cases, data and atoms/photons work as objectively existing generators of logically consistent input into human brains.

Originally posted by hypnagogue
There are no phenomenological chairs flying around (beyond what is perceived by each individual, of course).

Take the case of two people in the matrix, A and B, looking at the same chair from different angles. The computer cannot be feeding the same stimuli into both people, or else they would see the chair from the same angle.
Which is why Dennett doesn't believe that "matrix worlds" can be created: it takes too much information.

You see, as far as Dennett (along with other such Materialist philosophers) is concerned, there would not be a "representation of a chair"; instead there would be a slightly different stimulus for each possible angle. NOTE: These slightly different stimuli are released when necessary, and do not exist as an informational construct containing all the information about a chair (which would basically be a chair anyway).

btw, when I said that they "do not exist as...", I meant that they needn't exist as... because it works without that postulate, and thus good ol' Occam rules in my favor on this matter.

There must be some representation of the chair stored as data in the computer to ensure that a) what A and B see is logically consistent with their respective POVs and b) A and B see a logically consistent construct when they look at the chair.
Actually, that is not true. As stated above, there could just be a program that calculates (when necessary, and at no other time) the stimulus required for one to see a chair at a particular angle.
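A sketch of what this proposal amounts to (the function name and output format here are hypothetical): the stimulus is computed only at the moment it is needed, and nothing is cached between calls. Note, though, that even this lazy version must be handed the viewing angle from somewhere.

```python
def chair_stimulus(angle_degrees):
    """Compute, on demand, the stimulus for seeing the chair at this angle.

    Nothing here is precomputed or stored; the value exists only while
    the function is running and is discarded after use.
    """
    # Stand-in for a real rendering/stimulation calculation.
    return f"chair-stimulus@{angle_degrees % 360}deg"

print(chair_stimulus(90))
print(chair_stimulus(450))  # the same stimulus, recomputed from scratch
```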

For instance, say A is looking at the chair from directly above, and say B is looking at the chair directly from the side. Suppose there is a circular stain on the seat of the chair. To satisfy a), A must see what looks essentially like a square, and B must see essentially what looks like an angular, lower case 'h'. To satisfy b), A must see the stain on the chair appear to be perfectly circular, and B must see the stain as a compressed ellipse, in such a way that it is consistent with looking at the circular stain from his glancing angle. The computer cannot satisfy a) or b) without keeping track of where the chair is located in the room, or where the observers are with respect to the chair. This mechanism of "keeping track" is simply the computer's internal representation of the room, the chair, and the observers' "matrix bodies." If there were no such internal mechanisms for keeping track of where things were, there could not be a logically consistent presentation of the room to both A and B.
Again, there could, indeed, be such a logical presentation...it would just require a whole lot more information...ergo, Dennett thinks of "matrix worlds" as possibilities in principle, and barely that!

My argument is simply that dualism cannot be ruled out on purely logical bases; that is, that dualism can be logically consistent. We may doubt its veracity on the basis of heuristics such as OR, but that is not a purely logical criterion of judgement; it says nothing about the logical consistency of the framework.
But it does say that any two logically consistent ideas will be judged by the number of assumptions made...therefore, if you make yours full of assumptions, while the other has few, you may establish logical consistency, but this will quickly be cut off by the "Razor".

You are still thinking in the wrong terms. The matrix is basically a bunch of data stored in computers. What people hooked up to the matrix see is not the most fundamental aspect of the matrix-- the most fundamental aspect is the data in the computers. It is a simple analogy to how we usually think of the real world.
But, think about what you are saying. You are referring to "data" as though it were a static representation inside the computer. You are almost toying with the idea of an application of the h-problem to the matrix computers themselves, since they have to contain all these phenomenological chairs in their CPUs instead of just the capacity (in software and hardware) to produce them at will, the latter being the way that computers actually work AFAIK.

This analogy works insofar as in both cases, data and atoms/photons work as objectively existing generators of logically consistent input into human brains.
But, again, the data doesn't just "sit there" ready to be used as stimulus (that's what happens in the real world, but not inside a computer's CPU), it is only a matter of stimulation from human to computer, causing a stimulation back from the computer to the human.

Originally posted by Mentat
Actually, that is not true. As stated above, there could just be a program that calculates (when necessary, and at no other time) the stimulus required for one to see a chair at a particular angle.
Yes, but what happens in this case? The computer fetches data lying around. It says "there is a human perceiver whose 'matrix body' is in room 17, so I need to fetch information about room 17 and present it to the perceiver." Even if it calculates these things dynamically, it still must have an internal representation of what is there.

Again, there could, indeed, be such a logical presentation...it would just require a whole lot more information...ergo, Dennett thinks of "matrix worlds" as possibilities in principle, and barely that!
Possibility in principle is all that is needed.

But it does say that any two logically consistent ideas will be judged by the number of assumptions made...therefore, if you make yours full of assumptions, while the other has few, you may establish logical consistency, but this will quickly be cut off by the "Razor".
I am not concerned with that. I am concerned with objecting to statements to the effect that dualism cannot be a logically consistent framework.

But, think about what you are saying. You are referring to "data" as though it were a static representation inside the computer. You are almost toying with the idea of an application of the h-problem to the matrix computers themselves, since they have to contain all these phenomenological chairs in their CPUs instead of just the capacity (in software and hardware) to produce them at will, the latter being the way that computers actually work AFAIK.
The computers do not contain phenomenological chairs! They simply contain data sufficient for producing input into a human brain to elicit subjective perceptions of phenomenological chairs. The phenomenology only occurs when human brains are introduced into the mix.

But, again, the data doesn't just "sit there" ready to be used as stimulus (that's what happens in the real world, but not inside a computer's CPU), it is only a matter of stimulation from human to computer, causing a stimulation back from the computer to the human.
What you are saying is that the matrix doesn't (or needn't) calculate proper inputs into human brains when there are none around to perceive them. And I totally agree with this. Nonetheless, when a human perceiver is there, some data representing his environment must be fetched in order to stimulate him properly.

I want you two to continue on but I just thought I would interject and say that I completely understand what Hypnagogue is saying. It makes all the sense in the world to me. It appears to me that Mentat still has not really grasped the point of the argument. I too see whether the matrix has data ready-made for brain stimulation or whether it calculates on demand as irrelevant to the main point. The algorithms alone used to calculate are objective stimulators, equivalent to atoms in the analogy used. Carry on.

Originally posted by hypnagogue
Yes, but what happens in this case? The computer fetches data lying around. It says "there is a human perceiver whose 'matrix body' is in room 17, so I need to fetch information about room 17 and present it to the perceiver." Even if it calculates these things dynamically, it still must have an internal representation of what is there.
You mean it has to have a set of stimuli that it is programmed to produce under these particular circumstances, right?

The computers do not contain phenomenological chairs! They simply contain data sufficient for producing input into a human brain to elicit subjective perceptions of phenomenological chairs. The phenomenology only occurs when human brains are introduced into the mix.
I don't understand this last sentence. As it is, a computer doesn't contain a static set of data that equals "chair from this position"; it just has programs specifying which parts of the brain to stimulate at any given time, right?

What you are saying is that the matrix doesn't (or needn't) calculate proper inputs into human brains when there are none around to perceive them. And I totally agree with this. Nonetheless, when a human perceiver is there, some data representing his environment must be fetched in order to stimulate him properly.
And that's what I'm arguing against, since no environment needs to be "fetched" (think about the connotations of that term, since you already know what I'd say about it) at all; all that needs to happen is for the little probe to stimulate the right neurons, as it is programmed to do. What am I getting wrong?

Originally posted by Fliption
I want you two to continue on but I just thought I would interject and say that I completely understand what Hypnagogue is saying. It makes all the sense in the world to me. It appears to me that Mentat still has not really grasped the point of the argument. I too see whether the matrix has data ready-made for brain stimulation or whether it calculates on demand as irrelevant to the main point. The algorithms alone used to calculate are objective stimulators, equivalent to atoms in the analogy used. Carry on.
Algorithms and programs are what the "probe" acts on, nothing more (under Dennett's materialistic theory, that is). As it is, I may not have grasped the point of the argument, and I apologize if I'm being a slow learner; but I think I have grasped the points he's trying to make, and even admitted to their validity, but I disagree with them, and am trying to present a counter-argument.

Originally posted by Mentat
How exactly do you define an "unconscious mental event"?

An unconscious mental event is a prime example of a lucid dream, where one dreams the reality, but the imagining is the only thing the consciousness is supplying the mental event with for the unconscious state of dreaming.

Originally posted by me
However, consciousness is, by common consent, the most distinctive attribute of mind and it would be hard to make sense of a mind that never at any time became conscious. At all events the Matrix is, ex hypothesi, a purely physical or totally mindless universe.

Mentat said:

The consciousness basically is the most vital part of the brain, that and the heart. It adapts to other human characteristics and operates most of human activity. While the Matrix is an ex gratia imaginative world like a dream state. You think you are being very physical, but all you are doing, or more what aren't you doing, is -- the consciousness is making it seem rather physical and thus really mindless in the existent world.

For example,

Why should our actual world correspond with Universe A rather than with Universe 2A? If this is a valid question it admits of only two answers. Either there is no reason at all, it is just a God-given (contradictory depending on your belief) or contingent fact that that is how things actually are, like the fact that anything at all should exist rather than nothing; or else there is some reason, for example we might suppose that the world we know could not have evolved as it has done had it not been for the intervention of mind.

A fortiori, we should note that it is only in its derived sense that we can define or explicate what we mean by consciousness. In its basic sense it can no more be defined than any other primitive concept. With any primitive concept, either one understands what is intended or one fails to understand.

You see?

Originally posted by Jeebus
An unconscious mental event is a prime example of a lucid dream, where one dreams the reality, but the imagining is the only thing the consciousness is supplying the mental event with for the unconscious state of dreaming.
Rephrase, please...maybe it's a grammar thing, or something, but I'm having difficulty understanding your explanations of late. I apologize for this; be patient with me, English is my second language you know (not really...yes, I did learn it second, but I was 1 1/2 years old; now I speak it even better than my original language (environment, and all that), so it's not much of an excuse).

Originally posted by Jeebus
However, consciousness is, by common consent, the most distinctive attribute of mind and it would be hard to make sense of a mind that never at any time became conscious. At all events the Matrix is, ex hypothesi, a purely physical or totally mindless universe.

Mentat said:
The consciousness basically is the most vital part of the brain, that and the heart. It adapts to other human characteristics and operates most of human activity. While the Matrix is an ex gratia imaginative world like a dream state. You think you are being very physical, but all you are doing, or more what aren't you doing, is -- the consciousness is making it seem rather physical and thus really mindless in the existent world.

For example,

Why should our actual world correspond with Universe A rather than with Universe 2A? If this is a valid question it admits of only two answers. Either there is no reason at all, it is just a God-given (contradictory depending on your belief) or contingent fact that that is how things actually are, like the fact that anything at all should exist rather than nothing; or else there is some reason, for example we might suppose that the world we know could not have evolved as it has done had it not been for the intervention of mind.

A fortiori, we should note that it is only in its derived sense that we can define or explicate what we mean by consciousness. In its basic sense it can no more be defined than any other primitive concept. With any primitive concept, either one understands what is intended or one fails to understand.

You see?

Yes, I get it now (mostly). Joseph LeDoux made something of the same point, at the beginning of Synaptic Self (an excellent book, btw), and William H. Calvin mentioned it in passing in The Cerebral Code (also a good book, but very hard to understand if you haven't learned some neurophysiology, evolutionary biology, and philosophy of mind beforehand...fortunately, there's a good glossary at the end). People tend to make a little too much out of consciousness' role in the mind. Sure, it's the most noticeable, and it's hard to refer to a "mind" without referring to consciousness (like you mentioned in your post), but if it weren't for the myriad unconscious processes in the brain (Dennett's "stupid demons"), there would be no consciousness ITFP.

Also, as to the undefinability of consciousness, I tend to disagree, since Dennett has proposed a working hypothesis on how consciousness can be defined in Materialistic terms. Though he doesn't actually define it, he shows that it can be done, and explains some of the necessities for such a theory. William Calvin, in the aforementioned book, does a very good job of making a technical theory of consciousness (which does, btw, fit in with Dennett's guidelines), and it appears that Gerald M. Edelman and Giulio Tononi did a fine job as well, in A Universe of Consciousness, though I've really just started reading it. So, if Dennett (and these others I've mentioned) is right, then consciousness can indeed be defined, just (perhaps) not as we might have expected.

Originally posted by Mentat
You mean it has to have a set of stimuli that it is programmed to produce under these particular circumstances, right?
Yes. Let us consider for our argument a set of perceptual inputs that represents a matrix chair at a particular location under a particular set of circumstances (such as lighting, etc), as perceived by an observer at a particular location with respect to the chair. There are only really two coherent ways in which a matrix computer could feed this perceptual input into the brain of an observer at any given time.

1) Store sets of data representing objects in the matrix environment such as chairs (this corresponds roughly to our notion of the atoms which compose a 'real' chair). When an en-matrixed observer is present, use simulated laws of physics to dynamically generate perceptual input and feed it into the observer's brain. Let us refer to this type of object representation in the matrix as explicit representation.

2) Store sets of data representing every possible set of perceptual input from the matrix environment. This way, when an en-matrixed observer is present, the perceptual input needn't be dynamically generated, since it is already stored as static data in the computer. All the computer needs to do is check the observer's location and viewing angle and fetch the corresponding set of perceptual input from its database. In this scheme, there is no set of data uniquely designated as "chair," but rather, there are sets of perceptual data representing all the possible ways in which the chair could impress perceptual sensations onto the observer. Let us refer to the union of all the sets of perceptual representations related to a matrix object (such as the chair) as implicit representation.
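Scheme 2) amounts to a giant lookup table keyed by the observer's pose. A toy version (the keys and percept descriptions are invented for illustration) might look like:

```python
# Implicit representation: no single data object called "chair" anywhere,
# just a table of prerecorded percepts, one per (location, angle) pair.
percepts = {
    ("room17", "above"): "square seat with a circular stain",
    ("room17", "side"): "angular lowercase 'h' with an elliptical stain",
}

def fetch_percept(location, angle):
    # The computer only checks the observer's pose and fetches static data.
    return percepts[(location, angle)]

print(fetch_percept("room17", "above"))
print(fetch_percept("room17", "side"))
```

The "chair" exists here only implicitly, as the pattern running through all the stored percepts.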

Now, in either scheme, what I have argued for holds; the perceptual input perceived phenomenologically by the observer is indicative of an objectively existing thing, which is simply data in the computer matrix. In 1), the observer perceives a data object that is explicitly designated as a discrete object (a chair) in the computer, in an analogous fashion to how we say we see an objectively existing chair in the 'real world.' In 2), the observer perceives a subset of the data in the computer which implicitly connotes the existence of a discretely existing object. In both cases, the matrix must have some sort of representation of the objects in its world, be it explicit or implicit; if it did not, then a coherent perceptual world could not be displayed to the observers.

Notice, however, that there is a problem with 2). Here we have assumed that there is no explicit representation at all of any of the matrix environment. However, in order for 2) to work coherently, the computer must at least be able to keep track of the observer's location in the environment; in order to do this, there must at least be an explicit representation in the computer of a set of spacetime coordinates for the matrix environment.

But there is a much more grave problem with scenario 2). 2) works well enough in a static environment, but the matrix environment is dynamic, since en-matrixed people are allowed to interact with it and change it. So it is all well and good to have a prerecorded set of perceptual inputs that a chair could possibly impinge upon an observer without explicitly representing the chair, but what do we do if the observer actually moves the chair? The only way 2) can work coherently is if the matrix predicts the actions of all its inhabitants, starting from the initial conditions until termination of the simulation, and then generates the static perceptual inputs accordingly. Clearly 1) is the much more feasible scenario.

Think of a video game in which you observe, move through, and interact with a simulated 3-dimensional environment. Here, too, all the game console needs to do is produce the correct set of perceptual inputs (through the monitor) to give the gamer the illusion (albeit a much poorer illusion than the matrix) that s/he is immersed in an interactive 3-dimensional world. But, of course, the simplest (and indeed probably the only really feasible) way for the computer game to do this is to store explicit representations of the objects in the environment in conjunction with simulated laws of physics.
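In miniature, the video-game version of scheme 1) could be sketched like this (a hypothetical toy, not any real engine's API): objects are stored once, and each frame is derived from them.

```python
# Explicit representation: the scene is stored once, independently of any view.
scene = [
    {"name": "chair", "x": 4, "y": 2},
    {"name": "table", "x": 1, "y": 5},
]

def render_frame(camera_x, camera_y):
    """Derive the per-frame 'perceptual input' from the stored scene."""
    # Each object's on-screen position is recomputed from world state.
    return [(obj["name"], obj["x"] - camera_x, obj["y"] - camera_y)
            for obj in scene]

# Moving the camera changes every view consistently, because all views
# are projections of the same stored objects.
print(render_frame(0, 0))
print(render_frame(2, 1))
```

Only the frames are ever shown to the player, but the coherence of those frames comes from the stored scene they are computed from.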

Originally posted by hypnagogue
Yes. Let us consider for our argument a set of perceptual inputs that represents a matrix chair at a particular location under a particular set of circumstances (such as lighting, etc), as perceived by an observer at a particular location with respect to the chair. There are only really two coherent ways in which a matrix computer could feed this perceptual input into the brain of an observer at any given time.

1) Store sets of data representing objects in the matrix environment such as chairs (this corresponds roughly to our notion of the atoms which compose a 'real' chair). When an en-matrixed observer is present, use simulated laws of physics to dynamically generate perceptual input and feed it into the observer's brain. Let us refer to this type of object representation in the matrix as explicit representation.
There are two problems so far:

1) Information about what stimulus will be applied to the "en-matrixed observer"'s brain is not static, but is programmed to be activated by other activity in that brain. It is not activated until this other activity occurs.

2) What good would it do the computer to have both the program for the chair, and the program for the stimulus to make someone experience a chair, working at all times? It works just fine for it to respond only to his activity, and thus the opposite of existentialism is at work...the chair is only there when he's looking at it.

2) Store sets of data representing every possible set of perceptual input from the matrix environment. This way, when an en-matrixed observer is present, the perceptual input needn't be dynamically generated, since it is already stored as static data in the computer.
But "static data" just takes the place of phenomenological chairs flying around, since all of material reality is a collection of "static data". In a matrix computer system, there needn't be any such static data, but should instead be a set of programs that are activated by particular activities in the observer's brain, and that produce a chair for his inspection.

All the computer needs to do is check the observer's location and viewing angle and fetch the corresponding set of perceptual input from its database.
But this means that the observer moves (if the computer must check his location). We cannot remain in the realm of analogy, hypna, we also have to think about what is actually happening. The observer is tied to a chair on some ship near the Earth's core, and he hasn't moved an inch since he was "plugged in". Thus, certain brain activities may translate as part of the "movement" program of the Matrix, but he hasn't moved at all.

In this scheme, there is no set of data uniquely designated as "chair," but rather, there are sets of perceptual data representing all the possible ways in which the chair could impress perceptual sensations onto the observer. Let us refer to the union of all the sets of perceptual representations related to a matrix object (such as the chair) as implicit representation.
But this is like an existentialism in a computer program, and I don't think that's the way computers work. After all, we already have programs that can allow me to see a chair from all possible angles, and in different lighting, but there is no static set of data in the computer for the chair, merely for the program that elicits that particular representation on the monitor.

if it did not, then a coherent perceptual world could not be displayed to the observers.
Wrong, and that's practically the whole point of Consciousness Explained. Dennett was trying to show that we didn't need this paradoxical dualism in order to have a world with consciousness. In the matrix, there are programs that elicit certain stimuli due to particular activities in the brains of the en-matrixed people. As it is, this program would have to be very complex, since it would have to account for all possible factors, but it would not have to do this when there was no observer present. After all, what good is a static representation of a chair to the computer itself (with no "observers" to stimulate)?

But there is a much more grave problem with scenario 2). 2) works well enough in a static environment, but the matrix environment is dynamic, since en-matrixed people are allowed to interact with it and change it. So it is all well and good to have a prerecorded set of perceptual inputs that a chair could possibly impinge upon an observer without explicitly representing the chair, but what do we do if the observer actually moves the chair? The only way 2) can work coherently is if the matrix predicts the actions of all its inhabitants, starting from the initial conditions until termination of the simulation, and then generates the static perceptual inputs accordingly. Clearly 1) is the much more feasible scenario.

Think of a video game in which you observe, move through, and interact with a simulated 3-dimensional environment. Here, too, all the game console needs to do is produce the correct set of perceptual inputs (through the monitor) to give the gamer the illusion (albeit a much poorer illusion than the matrix) that s/he is immersed in an interactive 3-dimensional world. But, of course, the simplest (and indeed probably the only really feasible) way for the computer game to do this is to store explicit representations of the objects in the environment in conjunction with simulated laws of physics.
And yet this is not (AFAIK) what video games do. For example, if I'm playing Donkey Kong 64, and am in the room with K. Lumsy, there only need be the signals to my television - which, in turn, produces certain photonic emissions that stimulate my retina (which, in turn, stimulates certain triangular arrays in my neocortex)...there needn't be any representation whatsoever of Cranky Kong in his lab, or of Candy Kong in her shop, since I'm not there and the game console has no use for such representations.

Originally posted by Mentat
But "static data" just takes the place of phenomenological chairs flying around, since all of material reality is a collection of "static data". In a matrix computer system, there needn't be any such static data, but should instead be a set of programs that are activated by particular activities in the observer's brain, and that produce a chair for his inspection.
So the program produces a chair when the observer is looking. What happens when the observer looks away? In a logically consistent world, when he looks back to where the chair was, it should still be there. How does the computer take this into account? The only way is for it to store information about the chair, even when the observer is not looking at it. If this were not done, the computer would not be able to reliably reproduce the chair in that same location every time the observer looked there.

By way of analogy, the information that represents, stands for, codes for-- however you want to say it-- your web browser exists in your computer's hard drive, even when you are not actively running (looking at) your browser's program.
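The look-away problem can be put in toy form (all names here are invented): the render step reads stored state rather than creating it, which is why the chair can reappear where it was.

```python
# State persists whether or not anyone is "looking".
room = {"chair_position": "back left corner"}

def look():
    # Rendering reads the stored state; it does not invent it.
    return f"a chair in the {room['chair_position']}"

first_glance = look()
# ...the observer looks away; nothing is rendered, but `room` remains...
second_glance = look()

print(first_glance == second_glance)  # the chair is where it was
```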

But this means that the observer moves (if the computer must check his location). We cannot remain in the realm of analogy, hypna, we also have to think about what is actually happening. The observer is tied to a chair on some ship near the Earth's core, and he hasn't moved an inch since he was "plugged in". Thus, certain brain activities may translate as part of the "movement" program of the Matrix, but he hasn't moved at all.
No kidding. I didn't say anywhere that the observer was moving in the 'real' world. I meant that the computer must keep track of where the observer is located in the simulated matrix world. This of course is not a literal physical location, just abstract data representing a location in an abstract world made of bits.

But this is like an existentialism in a computer program, and I don't think that's the way computers work. After all, we already have programs that can allow me to see a chair from all possible angles, and in different lighting, but there is no static set of data in the computer for the chair, merely for the program that elicits that particular representation on the monitor.
If the simulated world is to be an interactive one, then there must be some internal representation of the objects within it, or the computer must be able to precisely predict all actions taken by the participants. See my previous post.

Wrong, and that's practically the whole point of Consciousness Explained. Dennett was trying to show that we didn't need this paradoxical dualism in order to have a world with consciousness.
Whoa, hold your horses. I never said we need dualism to explain consciousness. I said we need internal data representation to explain how an interactive world like the matrix can work.

In the matrix, there are programs that elicit certain stimuli due to particular activities in the brains of the en-matrixed people. As it is, this program would have to be very complex, since it would have to account for all possible factors, but it would not have to do this when there was no observer present. After all, what good is a static representation of a chair to the computer itself (with no "observers" to stimulate)?
Again, you misunderstand. The computer needn't compute all the necessary things for human perception when an observer is not present. But it does need to store some sort of information in order to retrieve it for when the observer comes around, so that it can then do its appropriate computations.

Assume we play a game where we navigate through a 3 dimensional world, except instead of doing this through a computer, we do it through pencils, paper, and imagination. I have written down on a paper, "Room 17: It is a plain, cubic room. There is a chair in the back left corner of the room." I read this information to you when you have 'entered' Room 17. When you 'come back' to Room 17, I read it to you again, and sure enough, the chair is still in the back left corner.

The paper is like information in the computer database, and my reading the information to you is like an actively running program in the matrix presenting stimuli to an observer. I am not constantly reading the information to you, but I still need to have the paper handy in order to ensure that the Room 17 I present to you is logically consistent.

Say we stop playing and then resume 3 months later, and you remember the details about Room 17 but I do not. I also seem to have lost the paper with the information about Room 17 written on it. So I make something up, and you say, "Hey! That's not an accurate description of Room 17." Without the information stored on the paper, I have lost the ability to make our imaginary world logically consistent. Likewise for your version of the matrix.
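The Room 17 game can be sketched in a few lines of code. This is a minimal illustration with hypothetical names, not an actual matrix implementation: the `rooms` dict plays the role of the paper (persistent stored state), and the description is only generated on demand.

```python
# The 'rooms' dict plays the role of the paper: persistent stored state.
rooms = {
    17: "A plain, cubic room. There is a chair in the back left corner.",
}

def describe(room_id):
    """Read the stored description aloud (generate stimuli on demand)."""
    return rooms[room_id]

# The description is only produced when the player 'enters' the room,
# but the data must persist between visits for consistency:
first_visit = describe(17)
second_visit = describe(17)
assert first_visit == second_visit  # same chair, same corner

# If the stored entry is lost (the paper goes missing), consistency is gone:
del rooms[17]
# describe(17) would now raise KeyError -- the world has 'forgotten' Room 17.
```

The point of the sketch is that `describe` runs only when invoked, yet it depends on data that exists continuously between invocations.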

And yet this is not (AFAIK) what video games do. For example, if I'm playing Donkey Kong 64, and am in the room with K. Lumsy, there only need be the stimulations to my television - which, in turn, stimulates my retina - to produce certain photonic emissions (which, in turn, stimulate certain triangular arrays in my neocortex)...there needn't be any representation whatsoever of Kranky Kong in his lab, or of Candy Kong in her shop, since I'm not there and the game console has no use for such representations.
This is like saying this very post you're reading needn't be stored as data on a computer somewhere-- after all, your computer only needs to make the appropriate stimulations to your monitor, which in turn stimulates your retina, and so on, to have the experience of reading this post.

How does the computer make those appropriate stimulations to the monitor if it is not drawing it from some stored information? Is it doing it randomly? No, clearly there must be some sort of data in the server hard drive that represents this post, which can be fetched and displayed to you when requested/needed. Likewise with Donkey Kong and the matrix.

Originally posted by hypnagogue
So the program produces a chair when the observer is looking. What happens when the observer looks away?
It stops producing this stimulus, since it has no reason to anymore.

In a logically consistent world, when he looks back to where the chair was, it should still be there.
Indeed. When he looks back, the stimulus from his brain to the computer, which produces the stimulus from computer to brain of a chair, is re-activated.

How does the computer take this into account? The only way is for it to store information about the chair, even when the observer is not looking at it. If this were not done, the computer would not be able to reliably reproduce the chair in that same location every time the observer looked there.
If it stores information about what a chair is supposed to look like under all of the given circumstances then what you have is a program that deduces, from the stimuli given by the human's brain, which stimulus it should (in turn) give back to his brain to produce the illusion of a chair.

By way of analogy, the information that represents, stands for, codes for-- however you want to say it-- your web browser exists in your computer's hard drive, even when you are not actively running (looking at) your browser's program.
True enough, but all the computer has to remember is the program, it doesn't have a static representation of this particular page at all times.

If the simulated world is to be an interactive one, then there must be some internal representation of the objects within it, or the computer must be able to precisely predict all actions taken by the participants. See my previous post.
Well, that prediction part is more of what Dennett was worried about (which is why he believed it would lead to combinatorial explosion). As it is, there should not be any static representations of chairs in the matrix; but even if there can be, there needn't be since such predictions (or reactions to current stimuli that will cause later stimuli) can occur.

Whoa, hold your horses. I never said we need dualism to explain consciousness. I said we need internal data representation to explain how an interactive world like the matrix can work.
But we don't, since we have the Dennett model of actual interaction, between the brain and the computer. Each new stimulus from the brain causes the computer to produce the proper subsequent stimulus for the brain.

Again, you misunderstand. The computer needn't compute all the necessary things for human perception when an observer is not present. But it does need to store some sort of information in order to retrieve it for when the observer comes around, so that it can then do its appropriate computations.
What if it has a program that dictates only "this stimulus means that that stimulus is the appropriate response; while this stimulus means that that other stimulus is the appropriate response"?

Assume we play a game where we navigate through a 3 dimensional world, except instead of doing this through a computer, we do it through pencils, paper, and imagination. I have written down on a paper, "Room 17: It is a plain, cubic room. There is a chair in the back left corner of the room." I read this information to you when you have 'entered' Room 17. When you 'come back' to Room 17, I read it to you again, and sure enough, the chair is still in the back left corner.

The paper is like information in the computer database, and my reading the information to you is like an actively running program in the matrix presenting stimuli to an observer. I am not constantly reading the information to you, but I still need to have the paper handy in order to ensure that the Room 17 I present to you is logically consistent.
But this is not a correct analogy to a computer's processes. If I were to come to you and you were to re-draw Room 17, then you would be doing what a computer does, since the computer has no use for such static representations until stimulated by an observer, and then only until stimulation ceases.

Say we stop playing and then resume 3 months later, and you remember the details about Room 17 but I do not. I also seem to have lost the paper with the information about Room 17 written on it. So I make something up, and you say, "Hey! That's not an accurate description of Room 17." Without the information stored on the paper, I have lost the ability to make our imaginary world logically consistent. Likewise for your version of the matrix.
But not if the program is written so as to produce each particular pixel of the representation in the order that it is supposed to in response to that particular stimulus.

This is like saying this very post you're reading needn't be stored as data on a computer somewhere-- after all, your computer only needs to make the appropriate stimulations to your monitor, which in turn stimulates your retina, and so on, to have the experience of reading this post.

How does the computer make those appropriate stimulations to the monitor if it is not drawing it from some stored information? Is it doing it randomly? No, clearly there must be some sort of data in the server hard drive that represents this post, which can be fetched and displayed to you when requested/needed.
Clearly you are not speaking from a knowledge of computers, but from a knowledge of what you believe "should" be the case with them (no offense is intended here, btw, I'm just making an observation).

However, I've been talking to some people, and it's becoming more and more apparent to me that the programs that run simulations are set up to respond to different stimuli (in this case, wherever I might click with my mouse or whatever key I might type on my keyboard) in the appropriate ways, meaning that there is nothing "written on paper" - merely a lot of "paper", a lot of "ink", and a lot of programs that teach it what to do in response to whatever stimulus.

Staff Emeritus
Gold Member
Originally posted by Mentat
It stops producing this stimulus, since it has no reason to anymore.
Yes.

Indeed. When he looks back the stimulus from his brain to computer, to produce the stimulus from computer to brain of a chair, is re-activated.
Yes.

If it stores information about what a chair is supposed to look like under all of the given circumstances then what you have is a program that deduces, from the stimuli given by the human's brain, which stimulus it should (in turn) give back to his brain to produce the illusion of a chair.
But the computer can't know what circumstance the chair is in unless it stores information to that effect.

True enough, but all the computer has to remember is the program, it doesn't have a static representation of this particular page at all times.

You seem to still think I am saying that the 'data object' of the chair includes the sensory output characteristic to it. I am not. I am saying this data object acts (partially) as a generator of those inputs by storing relevant information about the chair. As I have said, at least the chair's location in 'matrix space' must be recorded, and probably additional information (such as, Bob dropped grape juice on this chair and so it has a stain). This information can then be fetched from the database and used to produce appropriate sensory inputs when an observer is present.

Well, that prediction part is more of what Dennett was worried about (which is why he believed it would lead to combinatorial explosion). As it is, there should not be any static representations of chairs in the matrix; but even if there can be, there needn't be since such predictions (or reactions to current stimuli that will cause later stimuli) can occur.
A logically consistent world cannot be created entirely from information from observers' brains. For instance, say Bob spilled grape juice on the chair yesterday, but forgot about it. Jane saw him spill it, and remembers it vividly. We now have two contradictory sets of information about the chair with no way to decide which is right and which is wrong. The solution is to explicitly store information to the effect that grape juice has been spilled on the chair.

What if it has a program that dictates only "this stimulus means that that stimulus is the appropriate response; while this stimulus means that that other stimulus is the appropriate response"?
Then we can't have a logically consistent world. Jane's expectation that the chair should be stained means the appropriate response is to show her a chair with a stain on it. Bob's expectation that the chair should not be stained means the appropriate response is to show him a chair without a stain. Now Jane says to Bob, "nasty stain there, huh?" and Bob disagrees that there is even a stain on the chair. Logical inconsistency.
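The Bob-and-Jane inconsistency can be made concrete with a toy sketch (hypothetical names throughout, not an actual matrix design): one scheme generates the chair purely from each observer's own expectations, the other from a single stored world record.

```python
# Scheme 1: the response is determined only by each observer's own memory.
expectations = {"Jane": "stained", "Bob": "clean"}

def render_from_expectation(observer):
    return expectations[observer]

# Jane and Bob look at the same chair yet see different things:
assert render_from_expectation("Jane") != render_from_expectation("Bob")

# Scheme 2: one authoritative record of the spill, stored by the computer.
world_state = {"chair": {"stain": True}}  # Bob spilled juice yesterday

def render_from_world(observer):
    return "stained" if world_state["chair"]["stain"] else "clean"

# Now both observers see a consistent chair, whatever each remembers:
assert render_from_world("Jane") == render_from_world("Bob") == "stained"
```

Scheme 1 is internally coherent for each observer in isolation; the contradiction only appears when the two compare notes, which is exactly the interactive case at issue.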

But this is not a correct analogy to a computer's processes. If I were to come to you and you were to re-draw Room 17, then you would be doing what a computer does, since the computer has no use for such static representations until stimulated by an observer, and then only until stimulation ceases.
I do 're-draw' Room 17 every time you re-enter it, by reading aloud the description of it to you. I explicitly said this.

The paper is like information in the computer database, and my reading the information to you is like an actively running program in the matrix presenting stimuli to an observor. I am not constantly reading the information to you, but I still need to have the paper handy in order to ensure that the Room 17 I present to you is logically consistent.
Indeed, I have no use for the piece of paper that has information about Room 17 until you 'enter' it. But once you do enter Room 17, I need that information stored away on the piece of paper to tell you (generate stimuli) about it. Just like the matrix has no use for information about the location of a chair in a room until an observer enters the room; at that point, the matrix fetches information from its database about the room, so that it can use it to generate the proper stimuli for the observer. If the matrix did not do this, it would have no way of 'remembering' where the chair should be located in this room.

But not if the program is written so as to produce each particular pixel of the representation in the order that it is supposed to in response to that particular stimulus.
It doesn't know what it's supposed to do without information to this effect. If it draws this information entirely from human brains, many logical inconsistencies will arise, since everyone has differing internal representations of what the (matrix) world out there looks like. Therefore, to make a logically consistent world, the matrix must store some information about this world on its own database.

Clearly you are not speaking from a knowledge of computers, but from a knowledge of what you believe "should" be the case with them (no offense is intended here, btw, I'm just making an observation).
Clearly I should just hand over my degree in computer science to you right now and be done with this conversation.

However, I've been talking to some people, and it's becoming more and more apparent to me that the programs that run simulations are set up to respond to different stimuli (in this case, wherever I might click with my mouse or whatever key I might type on my keyboard) in the appropriate ways, meaning that there is nothing "written on paper" - merely a lot of "paper", a lot of "ink", and a lot of programs that teach it what to do in response to whatever stimulus.
There is still informational representation. If you are talking about something like neural nets, the representation is implicit and much more abstract, but it still exists.

Let's say I'm playing a flight simulator. There is a tall red building to my left. I turn right so that I can no longer see the building. Then I turn back around, and I can see the building again. How did the program remember that there was supposed to be a tall red building there? It had information stored that says something to the effect that "there is a tall red building here." When that 'here' is located in the current field of vision, then the computer actively uses the stored information to render the image of the building.
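The flight-simulator example can be sketched as follows. This is a deliberately simplified illustration with assumed names and an assumed one-building world, not real flight-simulator code: the building exists as stored data at all times, but it is rendered only while its bearing falls within the current field of view.

```python
# The world database: the building's record persists whether or not it is visible.
world = [{"name": "tall red building", "bearing_deg": 270.0}]

def visible_objects(heading_deg, fov_deg=90.0):
    """Return the stored objects whose bearing lies within the field of view."""
    seen = []
    for obj in world:
        # Smallest angular difference between our heading and the object's bearing.
        diff = abs((obj["bearing_deg"] - heading_deg + 180) % 360 - 180)
        if diff <= fov_deg / 2:
            seen.append(obj["name"])
    return seen

assert visible_objects(270.0) == ["tall red building"]  # facing the building
assert visible_objects(90.0) == []                      # turned away: not rendered
assert visible_objects(270.0) == ["tall red building"]  # turn back: still there
```

Rendering is on-demand, but the third assertion only succeeds because the record in `world` outlived the interval in which nothing was drawn.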

Quote-response didn't seem appropriate in this case, so I'm just going to try to respond to all the points that need responding to on this post, without the use of quotes...

I'd like to first of all say that I didn't mean to sound condescending in any way when I said you didn't sound like you were speaking from an understanding of computer programs...you were just using a lot of "I thinks", IMO, and so it didn't seem like you were basing this on actual knowledge about the way a computer works.

Anyway, I want to concede a little here, but want to be clear as to how much I'm conceding. You see, I understand that a certain bit (no pun intended) of information must exist that indicates that, when a plane is pointed in that direction there must appear a representation of a red building. However, that's not really what I was fighting. I was fighting against the idea that the sensory outputs would remain as a static representation, ready to be used at any given time. This is the only way I could see that you could apply the workings of a matrix computer to the dualistic idea of consciousness.

As it is, I'm willing to admit that these collections of information must exist in "computer language", but the sensory outputs are re-reproduced (maybe just "reproduced" is correct here, I'm not sure) every time the stimulus from the brain equals the appropriate cause for that particular effect. In this case, there doesn't appear to be any relevance of this analogy to the dualistic approach to consciousness.

Staff Emeritus
Gold Member
Originally posted by Mentat
I'd like to first of all say that I didn't mean to sound condescending in any way when I said you didn't sound like you were speaking from an understanding of computer programs...you were just using a lot of "I thinks", IMO, and so it didn't seem like you were basing this on actual knowledge about the way a computer works.
You could have said "It seems that..." instead of "Clearly..." But in any case, don't sweat it.

Anyway, I want to concede a little here, but want to be clear as to how much I'm conceding. You see, I understand that a certain bit (no pun intended) of information must exist that indicates that, when a plane is pointed in that direction there must appear a representation of a red building. However, that's not really what I was fighting. I was fighting against the idea that the sensory outputs would remain as a static representation, ready to be used at any given time. This is the only way I could see that you could apply the workings of a matrix computer to the dualistic idea of consciousness.
We'll get back to that last point. For now, all I wanted to do was show that it is incorrect to say that (for instance) a matrix chair does not exist. It does exist, insofar as it exists as information in the computer matrix which represents its attributes-- structure, mass, location, etc. I never claimed that this perpetually existent set of data includes the sensory outputs associated with the chair; rather, I tried to make it clear that these sensory outputs were dynamically produced as a function of the observer's reference frame, the information representing the object, and the computer's simulated 'physical laws.'
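This division of labor can be sketched with the circular-stain example from earlier in the thread (the names and the simple projection formula here are assumptions for illustration, not a claim about how a real matrix would work): the stored data holds only the chair's attributes, while the sensory output is computed fresh from the observer's viewing angle.

```python
import math

# Perpetually stored attribute of the chair: the stain's true radius.
# No image of the stain is ever stored.
chair = {"stain_radius": 1.0}

def apparent_stain_axes(view_angle_deg):
    """Project the circular stain as seen from a given elevation angle.

    90 degrees = looking straight down (the stain appears circular);
    angles near 0 = glancing view (it compresses into a thin ellipse).
    """
    r = chair["stain_radius"]
    minor = r * math.sin(math.radians(view_angle_deg))
    return (r, minor)  # (major axis, minor axis) of the apparent ellipse

major, minor = apparent_stain_axes(90.0)  # observer A, directly above
assert abs(major - minor) < 1e-9          # A sees a perfect circle

major, minor = apparent_stain_axes(10.0)  # observer B, near-glancing view
assert minor < major                      # B sees a compressed ellipse
```

One stored record, two different sensory outputs: the output depends on the observer's reference frame, but both outputs are generated from, and kept consistent by, the same underlying data.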

As it is, I'm willing to admit that these collections of information must exist in "computer language", but the sensory outputs are re-reproduced (maybe just "reproduced" is correct here, I'm not sure) every time the stimulus from the brain equals the appropriate cause for that particular effect. In this case, there doesn't appear to be any relevance of this analogy to the dualistic approach to consciousness.
Well, again, I just wanted to clarify that matrix chairs do exist. This is only a side issue for the main argument, which is well buried by now. I will look over the history of this thread and proceed with presenting the argument soon.

Originally posted by hypnagogue
We'll get back to that last point.
I'm gonna hold you to that.

For now, all I wanted to do was show that it is incorrect to say that (for instance) a matrix chair does not exist. It does exist, insofar as it exists as information in the computer matrix which represents its attributes-- structure, mass, location, etc. I never claimed that this perpetually existent set of data includes the sensory outputs associated with the chair; rather, I tried to make it clear that these sensory outputs were dynamically produced as a function of the observer's reference frame, the information representing the object, and the computer's simulated 'physical laws.'
So, doesn't that contradict the dualistic approach, which would give a separate (and static) existence to the matrix chair itself, and wouldn't allow it to be nothing but stimulus that the "probe" gives our brain that happens to be directed by an information structure in the computer's programming? IOW, dualism gives a sort of existentialist approach to phenomenological entities, which is what I thought was what was being considered in the matrix analogy.

Staff Emeritus
Gold Member
OK, let's clean the slate and start over from the beginning. I'm just going to post some propositions one by one, and hopefully we can come to an agreement on them before we proceed. (By the way, don't go assuming how I am going to use this or that in my argument-- I think that approach caused some confusion previously... just tell me if you agree or disagree with these statements. Hopefully I can make this clear and straightforward.)

Proposition 1:
An object/phenomenon/entity X is physical if and only if it is possible in principle to observe X in the objective world by using objective measurements. Otherwise it is non-physical.

Agree or disagree?

Originally posted by hypnagogue
OK, let's clean the slate and start over from the beginning. I'm just going to post some propositions one by one, and hopefully we can come to an agreement on them before we proceed. (By the way, don't go assuming how I am going to use this or that in my argument-- I think that approach caused some confusion previously... just tell me if you agree or disagree with these statements. Hopefully I can make this clear and straightforward.)
Thank you, hypna. I'm sorry for being so confused in my views, but I just don't see how you can be right here. This approach will probably make it easier for me...

Proposition 1:
An object/phenomenon/entity X is physical if and only if it is possible in principle to observe X in the objective world by using objective measurements. Otherwise it is non-physical.

Agree or disagree?
Disagree. I cannot observe an electron.

Staff Emeritus
Gold Member
Originally posted by Mentat
Disagree. I cannot observe an electron.
Sorry, bad phrasing. Replace "observed" with "detected."

Originally posted by hypnagogue
Sorry, bad phrasing. Replace "observed" with "detected."
Then, for the purpose of being a "good sport": Agree.

Originally posted by Mentat
Then, for the purpose of being a "good sport": Agree.
Wait a minute...sorry, but what about the possibility in principle of multiple Universes? They would be physical, but are impossible (in principle) to observe/detect, since they are separated/connected by nothing at all.

Staff Emeritus