Is Consciousness an Emergent Property of a Master Algorithm?

  • Thread starter: Mentat
  • Tags: Emergent Property

Summary:
The discussion centers on the concept of consciousness as an emergent property, specifically through the lens of a "master algorithm." The argument posits that while "subjective experience" is often cited in discussions of consciousness, it lacks a coherent definition and is therefore not a useful concept. Instead, consciousness can be understood through the complex interactions of numerous processes in the brain, which can be quantified as an algorithmic structure. This perspective aligns with reductionist scientific approaches, which aim to explain consciousness without relying on the ambiguous notion of subjective experience. The conversation highlights the ongoing debate between reductionist views and those that emphasize the significance of subjective experience in understanding consciousness.
  • #31
Al said:
I think you cannot apply the same argument to life and consciousness. Life is a definition, while experience is an appalling, self-imposing reality. If you tell me you experience life, it's just a tag you put on the fact of experiencing. Maybe "consciousness" is just another tag, but when you reduce all the tags, something remains, which is equal to your (my, our) existence, and cannot be denied. I think that's the problem with Dennett et al.'s argument. Now Dennett may be right in saying that experience can be explained in terms of a fundamental language, that is, in terms of the self-arising entities/interactions constituting the universe, for example mass, energy, logical operations. It may be that we cannot intrinsically grasp the explanation, as Marion Gothier argues. Yet if consciousness can be dissected into logical operations, how come we can grasp the operations but not their integration?

I'm not sure I think much of Marion Gothier's view. I've read the thread that confutatis provided, but I haven't responded to it because...well...because confutatis has me on ignore, lol. And he seems to be the only one who cares about this view. I still need to think more about it, but my initial impression is that claiming we cannot comprehend consciousness because of our relationship with it seems a bit like a cop-out. I understand that an argument was made that we achieve knowledge by "taking the experience out," but this just seems to be an illustration of the hard problem rather than a refutation of it. After spending so much time convincing us that consciousness is unique, then concluding that there is nothing mysterious about it, that it may very well be completely physical, seems confused and overreaching in a way that is typical of an a priori attempt to rationalize a complex situation. After all, "mysterious" is not an absolute condition; it is a relative statement about our ability to know and explain. So I agree with your question. Exactly where and how do the understood physical laws become the incomprehensible laws of consciousness? How does one create an incomprehensible property from comprehensible parts? All she has done, it seems, is create a whole new "hard" problem.

As for Dennett's view... it just seems obstinate and off topic.
 
Last edited:
  • #32
It might be helpful here to introduce some terms: phenomenal consciousness and access consciousness (or P-consciousness and A-consciousness for short). Access consciousness, very roughly, is taken to be those aspects of consciousness that play a functional role: attention, verbal report, intentionality (about-ness), motoric activity, perceptual discrimination, and so on can be taken to be instances of A-consciousness. Phenomenal consciousness, again roughly, is taken to be those aspects of consciousness that are experiential: the redness of an object, the timbre of a musical note, and the felt texture of a smooth tile can be taken to be instances of P-consciousness.

I am not sure if Mentat's position is that P-consciousness does not exist or if it is that P-consciousness is subsumed under A-consciousness (i.e., that it is impossible to have A-consciousness without P-consciousness); I would appreciate a response here from Mentat pinpointing which of these views he holds, as opponents of the hard problem often take positions that do not explicitly differentiate between the two. This will help clarify further discussion.

I would also like to say something about zombies. There is a bit of a fallacy of thought going on here that is easy to slip into, and I have done it myself in the past (even if only half-jokingly). First, to frame zombies in the nomenclature above, a zombie is a being with A-consciousness identical to that of a normally functioning human, but still lacking P-consciousness altogether. Thus a zombie behaves identically to a normally functioning human, even though the first person view of the zombie is non-existent.

The problematic notion I'd like to address is that one who denies the hard problem is acting in a zombie-like way by refuting, in some manner, the problem of P-consciousness. This seems like a natural position to take, since a zombie presumably could not understand the hard problem on the basis of its lack of P-consciousness. However, strictly speaking, this position cannot follow, since the zombie behaves identically to a normal human, including verbal reports indicating a belief in P-conscious qualities. Therefore a zombie could be just as much a proponent of the hard problem as an enemy of it. Indeed, if all zombies had systematic difficulties understanding the hard problem, then on average they would not have A-conscious properties identical to the average human, contradicting our initial definition.

This is a great complication, because it implies that if I were to suddenly become a zombie, my first person view would be dramatically different even though I could not know about it personally, let alone indicate it to others either directly or indirectly. I do not think that this defeats the hard problem, but rather it underscores its hardness by emphasizing the epistemic difficulties involved.
 
  • #33
hypnagogue said:
The problematic notion I'd like to address is that one who denies the hard problem is acting in a zombie-like way by refuting, in some manner, the problem of P-consciousness. This seems like a natural position to take, since a zombie presumably could not understand the hard problem on the basis of its lack of P-consciousness. However, strictly speaking, this position cannot follow, since the zombie behaves identically to a normal human, including verbal reports indicating a belief in P-conscious qualities. Therefore a zombie could be just as much a proponent of the hard problem as an enemy of it. Indeed, if all zombies had systematic difficulties understanding the hard problem, then on average they would not have A-conscious properties identical to the average human, contradicting our initial definition.

I'm impressed! The first person here who seems to understand that!

This is a great complication, because it implies that if I were to suddenly become a zombie, my first person view would be dramatically different even though I could not know about it personally, let alone indicate it to others either directly or indirectly. I do not think that this defeats the hard problem, but rather it underscores its hardness by emphasizing the epistemic difficulties involved.

It seems ironic to me that the hard problem is so hard that it can't even be properly stated. It's no surprise there's so much cynicism around it.

There is a hard problem, but it has nothing to do with consciousness in particular. Consciousness just happens to be a good example of a truly hard problem, one that is far more fundamental than anything Chalmers addresses. The issue is language and its ability to represent reality. The truly hard problem is how to explain the relationship between language and reality, or between explanation and explanandum, to use more pompous language. The problem comes from the fact that any description of the explanandum, or of the relationship between explanation and explanandum, is also an explanation. No matter how hard you try, it is impossible to come up with any explanation that transcends the domain of explanation. So one may be tempted to think that explanations are all that exist. This is Dennett's position, by the way.
 
Last edited by a moderator:
  • #34
hypnagogue said:
I am not sure if Mentat's position is that P-consciousness does not exist or if it is that P-consciousness is subsumed under A-consciousness (i.e., that it is impossible to have A-consciousness without P-consciousness);

Yes, this is the same question I'm asking I think. Judging from past conversation, I'm thinking the answer will be that they are one and the same thing.

This is a great complication, because it implies that if I were to suddenly become a zombie, my first person view would be dramatically different even though I could not know about it personally, let alone indicate it to others either directly or indirectly. I do not think that this defeats the hard problem, but rather it underscores its hardness by emphasizing the epistemic difficulties involved.

I don't follow this zombie clarification. I understand what you're saying. I just don't understand why you're saying it. The nature of the hard problem is one of explanation. Is this correct? The fact that consciousness cannot be reductively explained using the fundamental elements we currently assume. I can understand this issue because I can compare my experience to the explanation and see that something is missing. I don't understand how a zombie scientist could ever find the explanation of his A-consciousness unsatisfactory. He may believe himself to have some form of p-consciousness, but what characteristics could this P-consciousness have that would not allow it to be reductively explained? What nature could it possibly have that would make the zombie scientist feel that something is missing?

A further question to ask is, if a planet existed that consisted of nothing but zombies and no one was conscious, would there be a hard problem?
 
Last edited:
  • #35
Fliption said:
The nature of the hard problem is one of explanation. Is this correct? The fact that consciousness cannot be reductively explained using the fundamental elements we currently assume.

I hope hypnagogue replies, but I'd also like to offer my view. According to Chalmers, the hard problem applies to P-consciousness only. A-consciousness is what zombies have and that can be explained; Chalmers calls that the "easy problem".

I can understand this issue because I can compare my experience to the explanation and see that something is missing.

Yes, you see that P-consciousness is missing from an explanation of A-consciousness. That would be correct. But a zombie would think he sees it too. You must keep in mind that, according to Chalmers, there's nothing a zombie may say or do that would reveal his zombieness, because everything a zombie says and does is the result of A-consciousness - including statements about P-consciousness!

I don't understand how a zombie scientist could ever find the explanation of his A-consciousness unsatisfactory.

For the same reason you do: he doesn't see P-consciousness in it. Or, rather, the physical action of a zombie scanning the words of an explanation of A-consciousness causes the zombie to move his mouth and tongue and utter the phrase: "I don't see P-consciousness in it!".

He may believe himself to have some form of p-consciousness, but what characteristics could this P-consciousness have that would not allow it to be reductively explained?

A very simple fact: for the zombie, P-consciousness is an illusion. He thinks he has it but he doesn't. Therefore no explanation of anything real will appear to the zombie as an explanation of his P-consciousness, for the simple fact that no true explanation of anything real can imply that an illusion is real.

Ironically, that's exactly what Dennett and Mentat say about us non-zombies! That we believe in an illusion called P-consciousness, and then complain that their true theories of real phenomena can't account for something that, from their perspective, is not real. That's why Dennett and Mentat do not mind being called zombies - they are just being cynical.

A further question to ask is, if a planet existed that consisted of nothing but zombies and no one was conscious, would there be a hard problem?

I think even Chalmers acknowledges that zombies would also eventually come up with a hard problem, except in their case it would be a pseudo-problem whereas in our case it's a real problem :smile:
 
  • #36
confutatis said:
Yes, you see that P-consciousness is missing from an explanation of A-consciousness. That would be correct. But a zombie would think he sees it to. You must keep in mind that, according to Chalmers, there's nothing a zombie may say or do that would reveal his zombieness, because everything a zombie says and does is the result of A-consciousness - including statements about P-consciousness!

Right. I understand that. But are we saying that a zombie can't think for himself? The whole point of defining a zombie this way seems to be to make it impossible for "other people" to differentiate a zombie from a non-zombie to illustrate a point about consciousness.


For the same reason you do: he doesn't see P-consciousness in it. Or, rather, the physical action of a zombie scanning the words of an explanation of A-consciousness causes the zombie to move his mouth and tongue and utter the phrase: "I don't see P-consciousness in it!".

Again, this implies a zombie doesn't think for himself. I didn't realize that we were assuming that consciousness is what allowed me to think, calculate and make decisions. If we are that's fine. I'll just need to come up with another word to describe people like Mentat who don't know what the color red is.

He thinks he has it but he doesn't.

I'm trying to understand why. The only reason I can fathom is that they have been defined as deterministic robots who are simply programmed to say the same things that conscious people say.

I think even Chalmers acknowledges that zombies would also eventually come up with a hard problem, except in their case it would be a pseudo-problem whereas in our case it's a real problem :smile:

If they are allowed to think for themselves, I don't see how this can be true. But they may not be defined that way in which case I can see how that's true and I just need to come up with another word.
 
  • #37
Fliption said:
Well I have no idea what is being "worked on". I'm of the opinion that it cannot be solved with the current assumptions regardless of whether it's being worked on or not.

Yes, I do know what I'm referring to. When you ask "what is it?", what is it that you are looking for me to tell you? Are you looking for words that allow you to scientifically approach it? Don't you realize that my knowledge comes from experience and you asking me for words is like trying to explain the color red to a blind man?

What you are saying is that your own experience is more than can be explained under current assumptions, right? But you can't tell me what it is that remains unexplained?

Oh, btw, you can't explain the color red any better to a person capable of sight. You can only point out examples, which is what I asked you to do with regard to this thing which has eluded explanation but which definitely exists.

Are you suggesting that if you could step into the position of my PC when it is doing math calculations, that you would find it experiencing the act of doing math exactly as you do yourself?

Not at all. My method of processing is distinctly different, but it remains a "method of processing", nothing more.

This may be the case today but it does not necessarily have to be the case. Category labels can be useful if they are consistently defined. But this is not relevant to this topic.

No, but what is relevant is that we have the term before the definition. What I'm trying to tell you is that that is a terrible way to reason. The phenomenon is supposed to be understood as existing, distinct from other phenomena, before a word is assigned to it (because then, at least we'll know what "it" is, to which the word refers).

I think your answer to my question above will be critical to me understanding your point. I see a distinction between measuring the wave length of light and the experience of the color red.

I hate to pick at words (though, as you well know, I think it is necessary that the words be correct, so as to avoid the possibility of confusion), but I too see a difference between "measuring" a particular wavelength of light and experiencing the color. What I don't see is the difference between being stimulated by a particular wavelength of light, which you then process in terms of previous stimulations and remember, and "experiencing" a certain color. I don't see what's left to explain, and those things that I mention are all part of the "easy problem".

If you are saying that the experience of the color red does not exist, then we have nothing else to talk about.

I never said that. There's a difference between equating experience with computation, and saying that the experience never happened at all.

However, if you are saying that the eye and brain can pick up light and, based on wavelength computations, present what I am referring to as the experience of red, then this is fine.

"Present"? To whom?
 
  • #38
hypnagogue said:
It might be helpful here to introduce some terms: phenomenal consciousness and access consciousness (or P-consciousness and A-consciousness for short). Access consciousness, very roughly, is taken to be those aspects of consciousness that play a functional role: attention, verbal report, intentionality (about-ness), motoric activity, perceptual discrimination, and so on can be taken to be instances of A-consciousness. Phenomenal consciousness, again roughly, is taken to be those aspects of consciousness that are experiential: the redness of an object, the timbre of a musical note, and the felt texture of a smooth tile can be taken to be instances of P-consciousness.

I need a better definition of "P-consciousness", as you probably expected. "The redness of an object" is a matter of perceptual discrimination, is it not?

I am not sure if Mentat's position is that P-consciousness does not exist or if it is that P-consciousness is subsumed under A-consciousness (i.e., that it is impossible to have A-consciousness without P-consciousness); I would appreciate a response here from Mentat pinpointing which of these views he holds, as opponents of the hard problem often take positions that do not explicitly differentiate between the two. This will help clarify further discussion.

I just don't understand what P-consciousness means. Your assessment of opponents of the hard problem appears to hold true with me, since I don't so much think that P-consciousness doesn't exist, or that it is subsumed under A-consciousness. What I really think is that the term doesn't make sense.

I suppose I could say that, were you to give me a specific instance of what you'd consider P-consciousness, I'd show that it is really just A-consciousness. But, at the same time, to do so does seem to imply that P-consciousness doesn't exist at all.

I would also like to say something about zombies. There is a bit of a fallacy of thought going on here that is easy to slip into, and I have done it myself in the past (even if only half-jokingly). First, to frame zombies in the nomenclature above, a zombie is a being with A-consciousness identical to that of a normally functioning human, but still lacking P-consciousness altogether. Thus a zombie behaves identically to a normally functioning human, even though the first person view of the zombie is non-existent.

Hold on a second. While this is the best definition of "zombie" I've ever seen, it is also the one that lays bare the ridiculousness of the notion. There are specific neo-cortical activities (things that would fall under the category of A-consciousness, or so I'd suspect) which can fully explain having a first-person view of objective phenomena. Indeed, Dennett went into a lengthy evolutionary explanation of that very matter in Consciousness Explained.

So, being a "zombie" becomes having no P-consciousness, with which I have no problem, so long as we don't deny them any of the things that A-consciousness can be shown to entail - i.e. self-consciousness, emotion, intuition, creativity, memory, perceptual discrimination (in all of its forms; i.e. noticing, and responding to, the difference between textures, colors, shapes, and sounds), and reasoning ability.

The problematic notion I'd like to address is that one who denies the hard problem is acting in a zombie-like way by refuting, in some manner, the problem of P-consciousness. This seems like a natural position to take, since a zombie presumably could not understand the hard problem on the basis of its lack of P-consciousness. However, strictly speaking, this position cannot follow since the zombie behaves identically to a normal human, including verbal reports indicating a belief of P-conscious qualities. Therefore a zombie could be just as much a proponent of the hard problem as an enemy of it. Indeed, if all zombies had systematic difficulties understanding the hard problem, then on average they would not have A-conscious properties identical to the average human, contradicting our intial definition.

Has it not occurred to you that I might have been right when I told Fliption that everyone is a zombie? Think about it. I'm clearly a zombie, since I could claim to have P-consciousness, but I can't explain it. This exact statement holds true for all of you, does it not?

This is a great complication, because it implies that if I were to suddenly become a zombie, my first person view would be dramatically different even though I could not know about it personally, let alone indicate it to others either directly or indirectly. I do not think that this defeats the hard problem, but rather it underscores its hardness by emphasizing the epistemic difficulties involved.

Or it shows that, if the hard problem exists at all, then we are all zombies.
 
  • #39
Fliption said:
Yes, this is the same question I'm asking I think. Judging from past conversation, I'm thinking the answer will be that they are one and the same thing.

Technically speaking, they can't be. One presumes that P-consciousness is a subset of A-consciousness and the other that P-consciousness does not exist altogether. That is, the former says that phenomenal redness exists in virtue of (say) computation, while the latter says that redness does not exist in the first place.

I don't follow this zombie clarification. I understand what you're saying. I just don't understand why you're saying it. The nature of the hard problem is one of explanation. Is this correct? The fact that consciousness cannot be reductively explained using the fundamental elements we currently assume.

Yes. I am just trying to be conceptually precise about the terms (specifically "zombie") that we are using. I still think the hard problem is a valid one.

I can understand this issue because I can compare my experience to the explanation and see that something is missing. I don't understand how a zombie scientist could ever find the explanation of his A-consciousness unsatisfactory.

The process of comparing conceptual tokens is subsumed under A-consciousness. A zombie may not have P-consciousness, but he still has second-order beliefs that he does, and his beliefs are identical to a normal human's. (Belief here is used strictly in a functional sense, i.e. one's disposition to make certain verbal utterances, and does not refer to any experiential aspect of belief, e.g. the subjective feelings associated with believing something.)

If we presume that zombies think that something is missing from a physically reductive explanation of consciousness to a greater degree than humans do on average, then we are assuming that

a) there is some overlap between P- and A-consciousness in humans, i.e. that at least some aspect of A-consciousness is causally related to some aspect of P-consciousness (otherwise there would be no discernable difference in the behavior of a human and a zombie), and
b) the part of a zombie's A-consciousness corresponding to this P/A overlap is missing in virtue of its lack of P-consciousness (thereby accounting for the difference in its behavior, i.e. failing to recognize the hard problem).

However, this contradicts our initial definition that a zombie's A-consciousness must be identical to its human counterpart. Therefore, it is not possible that a human acknowledges the hard problem and his zombie counterpart does not acknowledge it to the same degree. Zombie Chalmers believes in the hard problem just as vigorously as human Chalmers, and zombie Dennett is no more set against the hard problem than is human Dennett.
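The reductio above can be compressed into a schematic form (the notation here is mine, purely for bookkeeping: $A(x)$ for $x$'s functional/access profile, $P(x)$ for "$x$ has phenomenal consciousness," and $H(x)$ for the functional disposition to affirm the hard problem, which by definition is part of $A(x)$):

```latex
\text{Zombie definition: } A(z) = A(h) \ \text{ and } \ \neg P(z). \\
\text{Assume for contradiction: } H(h) \wedge \neg H(z). \\
\text{Since } H \text{ is a functional property, } H(x) \text{ is determined by } A(x). \\
\text{Then } A(z) = A(h) \;\Rightarrow\; \bigl(H(z) \leftrightarrow H(h)\bigr), \\
\text{contradicting } H(h) \wedge \neg H(z).
```

So any behavioral divergence over the hard problem would already falsify the stipulation that the zombie's A-consciousness is identical to its human counterpart's.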
 
  • #40
You know, the more I think about it, the clearer it becomes to me that a zombie is simply one lacking a final destination for the stimuli entering his/her brain. As such s/he is simply an exception to the Cartesian Theater model. But, if this is so, then we are all, most definitely, zombies. The Cartesian Theater model has been shown to have no merit, and is sometimes even used as an epithet.

For those who don't know what's wrong with the Cartesian Theater model, it usually comes back to one question: What happens next?

If there really were a "center", wherein "experience" played itself out, who would be observing it? Would they be conscious? If so, would they not need to have an observer within their own brains? It goes on ad infinitum without ever getting any closer to explaining consciousness. Thus, it is discarded.

P.S. Forgive me, Fliption, for having resurrected the old homunculus problem. I remember I never explained it well enough for you to get what I meant, which led to many fruitless debates, but it just seemed necessary that I remove the Cartesian Theater, along with any theories that fall into the same trap.
 
  • #41
Mentat said:
What you are saying is that your own experience is more than can be explained under current assumptions, right? But you can't tell me what it is that remains unexplained?

I can tell you what remains to be explained. But the ability to communicate it to you is dependent on your ability to experience that which cannot be explained as well. If you do not experience it or you choose to deny that you experience it, then the only way I can tell you what it is is to explain it.

Oh, btw, you can't explain the color red any better to a person capable of sight. You can only point out examples, which is what I asked you to do with regard to this thing which has eluded explanation but which definitely exists.

Right. But the difference is that red is a visual subjective experience associated with a physical object, which allows me to point to something. Pointing to subjective experience in general is not so easy. I can't point to anything. I can only attempt to reference it to your own personal experience by saying "it feels like something to be Mentat". Why should it feel like anything at all?

Not at all. My method of processing is distinctly different, but it remains a "method of processing", nothing more.

Since it is nothing but another method of processing, you should be able to explain and recreate the whole process.

No, but what is relevant is that we have the term before the definition. What I'm trying to tell you is that that is a terrible way to reason. The phenomenon is supposed to be understood as existing, distinct from other phenomena, before a word is assigned to it (because then, at least we'll know what "it" is, to which the word refers).

It may be a terrible way to reason but no one here is doing that. I DO KNOW what it is. I keep saying this. Why would I attach a word to something that has no meaning to me?

I hate to pick at words (though, as you well know, I think it is necessary that the words be correct, so as to avoid the possibility of confusion), but I too see a difference between "measuring" a particular wavelength of light and experiencing the color. What I don't see is the difference between being stimulated by a particular wavelength of light, which you then process in terms of previous stimulations and remember, and "experiencing" a certain color. I don't see what's left to explain, and those things that I mention are all part of the "easy problem".

How can you be so sure of what the easy problem is when you can't see the hard problem?

I never said that. There's a difference between equating experience with computation, and saying that the experience never happened at all.

I agree. I was just trying to understand which one you were proposing. I think Hypnagogue is asking the same thing.

"Present"? To whom?

You didn't answer the question. One problem at a time.
 
  • #42
hypnagogue said:
Technically speaking, they can't be. One presumes that P-consciousness is a subset of A-consciousness and the other that P-consciousness does not exist altogether. That is, the former says that phenomenal redness exists in virtue of (say) computation, while the latter says that redness does not exist in the first place.

Wait a minute. Why can't P-consciousness and A-consciousness be the same thing? (I'm not saying I think they are, I just think I'll be better able to understand them after this question is answered.)

Oh, btw, what is redness?

Yes. I am just trying to be conceptually precise about the terms (specifically "zombie") that we are using. I still think the hard problem is a valid one.

In spite of the fact that you appear - until further clarification presents itself - to have shown yourself and Fliption to be zombies as well?

The process of comparing conceptual tokens is subsumed under A-consciousness. A zombie may not have P-consciousness, but he still has second order beliefs that he does, and his beliefs are identical to a normal human's. (Belief here is used strictly in a functional sense, i.e. one's disposition to make certain verbal utterances, and does not refer to any experiential aspect of belief-- eg the subjective feelings associated with believing something.)

Why not? From a purely A-consciousness PoV, belief is not just the disposition to make certain verbal utterances, but the disposition to rule in favor ("rulings" meaning simple computational processes of discrimination) of one idea over another, based on previous stimulations.

btw, you, too, may not have P-consciousness, and instead have but a second-order belief that you do. After all, you haven't really explained what it is, and that is the same predicament that a zombie would have, is it not?

If we presume that zombies think that something is missing from a physically reductive explanation of consciousness to a greater degree than humans do on average, then we are assuming that

a) there is some overlap between P- and A-consciousness in humans, i.e. that at least some aspect of A-consciousness is causally related to some aspect of P-consciousness (otherwise there would be no discernable difference in the behavior of a human and a zombie), and
b) the part of a zombie's A-consciousness corresponding to this P/A overlap is missing in virtue of its lack of P-consciousness (thereby accounting for the difference in its behavior, i.e. failing to recognize the hard problem).

However, this contradicts our initial definition that a zombie's A-consciousness must be identical to its human counterpart. Therefore, it is not possible that a human acknowledges the hard problem and his zombie counterpart does not acknowledge it to the same degree. Zombie Chalmers believes in the hard problem just as vigorously as human Chalmers, and zombie Dennett is no more set against the hard problem than is human Dennett.

So, I ask you again, what is the difference, after all that you've said here, between a zombie and everyone else?
 
  • #43
Mentat said:
What you are saying is that your own experience is more than can be explained under current assumptions, right? But you can't tell me what it is that remains unexplained?

I can tell you what remains to be explained. But the ability to communicate it to you is dependent on your ability to experience that which cannot be explained as well. If you do not experience it or you choose to deny that you experience it, then the only way I can tell you what it is is to explain it.

Oh, btw, you can't explain the color red any better to a person capable of sight. You can only point out examples, which is what I asked you to do with regard to this thing which has eluded explanation but which definitely exists.
Right. But the difference is that red is a visual subjective experience associated with a physical object, which allows me to point to something. Subjective experience in general is not so easy to point to. I can't point to anything. I can only attempt to reference it to your own personal experience by saying "it feels like something to be Mentat". Why should it feel like anything at all?

Not at all. My method of processing is distinctly different, but it remains a "method of processing", nothing more.

Since it is nothing but another method of processing, you should be able to explain and recreate the whole process.

No, but what is relevant is that we have the term before the definition. What I'm trying to tell you is that that is a terrible way to reason. The phenomenon is supposed to be understood as existing, distinct from other phenomena, before a word is assigned to it (because then, at least we'll know what "it" is, to which the word refers).
It may be a terrible way to reason but no one here is doing that. I DO KNOW what it is. I keep saying this. Why would I attach a word to something that has no meaning to me?

I hate to pick at words (though, as you well know, I think it is necessary that the words be correct, so as to avoid the possibility of confusion), but I too see a difference between "measuring" a particular wavelength of light and experiencing the color. What I don't see is the difference between being stimulated by a particular wavelength of light, which you then process in terms of previous stimulations and remember, and "experiencing" a certain color. I don't see what's left to explain, and those things that I mention are all part of the "easy problem".
How can you be so sure of what the easy problem is when you can't see the hard problem?

I never said that. There's a difference between equating experience with computation, and saying that the experience never happened at all.

I agree. I was just trying to understand which one you were proposing. I think Hypnagogue is asking the same thing.

"Present"? To whom?

You didn't answer the question. One issue at a time. It's much less confusing that way.
 
  • #44
Mentat said:
Has it not occurred to you that I might have been right when I told Fliption that everyone is a zombie? Think about it. I'm clearly a zombie, since I could claim to have P-consciousness, but I can't explain it. This exact statement holds true for all of you, does it not?
As I said to confutatis in another thread. This assumes that humans can objectively know everything that exists in reality.
 
  • #45
Fliption said:
I can tell you what remains to be explained. But the ability to communicate it to you is dependent on your ability to experience that which cannot be explained as well. If you do not experience it or you choose to deny that you experience it, then the only way I can tell you what it is is to explain it.

As per your first statement, I expect that you will give what explanation you have to give.

Right. But the difference is that red is a visual subjective experience associated with a physical object, which allows me to point to something. Subjective experiences in general is not so easy to do. I can't point to anything. I can only attempt to reference it to your own personal experience by saying "it feels like something to be Mentat". Why should it feel like anything at all?

First off, yes, the computation of "red" is a subjective one; I don't see how any computer could compute objectively.

Secondly, I don't expect you to point to something that is subjective, I expect you to give a definition. I can define "red". Can you define "subjective experience"?

Finally, it doesn't feel like anything to me, because I have nothing to compare it to...I've never been anyone else.

Since it is nothing but another method of processing then you should be able to explain and recreate the whole process.

Recreate! Surely you jest. As I explained at the very outset of the thread called "Faulty expectations of a theory of Consciousness", no scientific theory is ever expected to produce the phenomenon that it explains. It is merely expected to explain what it is and is not, when/under what circumstances it occurs, and then to be able to recreate the circumstances and prove that it does indeed occur under those circumstances.

So, since there is a certain computation that occurs whenever you are exposed to a certain wavelength of light, I need only explain which wavelength it is, which form of stimulation (and re-stimulation) is occurring in the pyramidal neurons of your neocortex, under what conditions this occurs (which I have already established: whenever the wavelength is present and stimulates your retina, this computation occurs), and then reproduce the conditions (which I could easily do by, for example, turning my words red).

It may be a terrible way to reason but no one here is doing that. I DO KNOW what it is. I keep saying this. Why would I attach a word to something that has no meaning to me?

I wouldn't expect you to, but you have to relate what that meaning is, and I can't just trust that you have a meaning in mind.

How can you be so sure of what the easy problem is when you can't see the hard problem?

Simple, I read the piece by Chalmers which defined the "easy problem".

You didn't answer the question. One issue at a time. It's much less confusing that way.

What question? I looked back at the antecedent post, and the only question I'm seeing is my own (namely: to whom is this thing being "presented"? That is the term you used, and that is the term that needs explanation).
 
  • #46
Fliption said:
As I said to confutatis in another thread. This assumes that humans can objectively know everything that exists in reality.

Only if you're bound to induction. If it can be deduced that there is nothing more to explain, after all the facets of A-consciousness have been explained - and that the definition of "zombie" is one having A-consciousness, but nothing more - then it can be logically concluded that everyone is a zombie.
 
  • #47
hypnagogue said:
However, this contradicts our initial definition that a zombie's A-consciousness must be identical to its human counterpart. Therefore, it is not possible that a human acknowledges the hard problem and his zombie counterpart does not acknowledge it to the same degree. Zombie Chalmers believes in the hard problem just as vigorously as human Chalmers, and zombie Dennett is no more set against the hard problem than is human Dennett.

But I don't see how this could ever exist. How could a zombie's A-consciousness be identical when a human's A-consciousness is connected somehow to P-consciousness and a zombie's is not? There has to be some difference somewhere, doesn't there?
 
  • #48
Mentat said:
Only if you're bound to induction. If it can be deduced that there is nothing more to explain, after all the facets of A-consciousness have been explained - and that the definition of "zombie" is one having A-consciousness, but nothing more - then it can be logically concluded that everyone is a zombie.

If you could deduce such things, yes. But you cannot.
 
  • #49
Fliption said:
If you could deduce such things, yes. But you cannot.

Sure I can. I can inductively or deductively prove the first proposition. The second stands as "accepted" since hypna posted it, and I find no fault with it. And the conclusion is valid, provided the premises are.
 
  • #50
Mentat said:
I can define "red". Can you define "subjective experience"?
You first. I don't think you can do it.

Finally, it doesn't feel like anything to me, because I have nothing to compare it to...I've never been anyone else.

I'm about to give up. Obviously you feel something. You don't refer to it as "what it feels like to be Mentat" for the reason you provided, but that's just the way that I, as an outsider, would refer to whatever feeling you may have.

Recreate! Surely you jest. As I explained at the very outset of the thread called "Faulty expectations of a theory of Consciousness", no scientific theory is ever expected to produce the phenomenon that it explains. It is merely expected to explain what it is and is not, when/under what circumstances it occurs, and then to be able to recreate the circumstances and prove that it does indeed occur under those circumstances.

Obviously producing the phenomenon is not always something that can be done. I understand this. But we're not talking about creating a black hole here. We're talking about computation. Simple instructions like a software program. Just so you don't think I am debating the role of a scientific theory with you, I'm not saying that you have explained it and now have to produce it to qualify as a valid theory and convince me. I'm saying that you have not explained it and this is why you cannot produce it.

So, since there is a certain computation that occurs whenever you are exposed to a certain wavelength of light, I need only explain which wavelength it is, which form of stimulation (and re-stimulation) is occurring in the pyrimidal neurons of your neocortex, under what conditions this occurs (which I have already established: whenever the wavelength is present and stimulates your retina, this computation occurs), and then reproduce the conditions (which I could easily do by, for example, turning my words red).

This doesn't satisfy me. It doesn't explain what I am referring to. You should be able to produce what you have described very easily.

I wouldn't expect you to, but you have to relate what that meaning is, and I can't just trust that you have a meaning in mind.

I will have to solve the hard problem before I can make you understand it. Makes sense to me. You get to keep your view either way.

Simple, I read the piece by Chalmers which defined the "easy problem".

Since he did such a good job explaining it, why not read his explanation of the hard problem as well?

What question? I looked back at the antecedent post, and the only question I'm seeing is my own (namely: to whom is this thing being "presented"? That is the term you used, and that is the term that needs explanation).

You were responding to the second option that I was giving you as part of a question. I think Hypnagogue is asking the same question and you may have answered it since then.
 
  • #51
Mentat said:
Sure I can. I can inductively or deductively prove the first proposition. The second stands as "accepted" since hypna posted it, and I find no fault with it. And the conclusion is valid, provided the premises are.
Ok fine, you can deduce it. But I cannot.
 
  • #52
Fliption said:
But I don't see how this could ever exist. How could a zombie's A-consciousness be identical when a human's A-consciousness is connected somehow to P-consciousness and a zombie's is not? There has to be some difference somewhere, doesn't there?

Congratulations! After much wrangling, you are finally beginning to see what's wrong with Chalmers' argument.

Apparently only two things can follow from Chalmers' definition of a zombie: either they can't possibly exist, as you realized, or we are all zombies, as Mentat says.

Where is that guy who said this discussion is merely about semantics? :smile:
 
  • #53
confutatis said:
Congratulations! After much wrangling, you are finally beginning to see what's wrong with Chalmers' argument.

Apparently only two things can follow from Chalmers' definition of a zombie: either they can't possibly exist, as you realized, or we are all zombies, as Mentat says.

Where is that guy who said this discussion is merely about semantics? :smile:

This is all true. But it isn't a semantic problem only, because none of this is relevant. I have been using the zombie concept when I should have been using some other word. I personally don't see the significance of the distinction hypnagogue has pointed out. It doesn't seem to me that the definition has to be this way to make the case that Chalmers is trying to make. My only beef with it is that it means I have to find another word to call Mentat. The issue remains regardless of what I call it, though.
 
Last edited:
  • #54
Mentat said:
I need a better definition of "P-consciousness", as you probably expected. "The redness of an object" is a matter of perceptual discrimination, is it not?

Again, I cannot precisely pick out the concept in words, but I can only point to it. When you look at a stop sign, what does it look like to you? Among its many apparent properties, it has a certain visual phenomenal quality that you call 'redness.'

Discrimination is clearly involved here (eg, discriminating the redness of the sign from the blueness of the sky), but discrimination alone does not exhaustively characterize this phenomenon. For instance, for a human there is something different about discriminating hues of color and pitches of tone. You may say that this difference is purely underpinned by computational differences, and that may be the case, but we are only trying here to point to instances of what we mean by P-consciousness, not explain them.

Let me put it another way. Imagine that one day you encounter a curious cognitive dissociation. Suddenly you can't see anything at all, that is, the world looks to you the same way it looked in the past when you would close your eyes. And yet, you can walk around just as well as you could before, and you can accurately describe the world (e.g. by telling someone "I see a red stop sign" when a red stop sign is placed at a distance before you) just as well as you could before. This would be a case of visual A-consciousness without visual P-consciousness.

I'm not claiming that this is possible in practice; indeed, I suspect it most probably is not. I am simply using this example to illustrate how we can conceptually delineate between A and P consciousness. Even if it turns out that they are one and the same thing, there still would seem to be the distinctive property that there are different aspects or viewpoints of that one thing.

I suppose I could say that, were you to give me a specific instance of what you'd consider P-consciousness, I'd show that it is really just A-consciousness. But, at the same time, to do so does seem to imply that P-consciousness doesn't exist at all.

If P-consciousness does not exist for you, then your personal experience of acting in the world would be the same as your current personal experience of deep sleep: i.e., you would have no personal experience at all. If you respond to this by saying that you would indeed have personal experience just in virtue of your A-consciousness as you acted in the world, then you would be acknowledging the existence of P-consciousness and adding some claims about its properties (eg it exists whenever certain A-conscious activities occur). This is not the same as denying its existence altogether.

So, being a "zombie" becomes having no P-consciousness, with which I have no problem, so long as we don't deny them any of the things that A-consciousness can be shown to entail - i.e. self-consciousness, emotion, intuition, creativity, memory, perceptual discrimination (in all of its forms; i.e. noticing, and responding to, the difference between textures, colors, shapes, and sounds), and reasoning ability.

A-consciousness entails the behavioral characteristics of, say, sadness, but it doesn't entail the personal feeling of sadness. If there is no P-consciousness, then by definition there is no personal feeling of sadness. This is the familiar schism; A-consciousness speaks of 3rd person observable properties, whereas P-consciousness speaks of 1st person observable properties. To the extent that sadness is characterized by objectively observable behaviors and brain activities, it has an A-conscious aspect; and to the extent that it is characterized by particular subjective feelings, it has a P-conscious aspect. Similar remarks can be made about the other members of your list.

Has it not occurred to you that I might have been right when I told Fliption that everyone is a zombie? Think about it. I'm clearly a zombie, since I could claim to have P-consciousness, but I can't explain it. This exact statement holds true for all of you, does it not?

It doesn't follow that your failure to explain P-consciousness entails that you are a zombie. If I can't explain how weather works, that doesn't mean there is no weather.

I maintain that I am not a zombie in virtue of my P-consciousness. To make this claim I am forced to assume that there is indeed some kind of overlap or causal connection between my A-conscious utterances and my P-conscious perceptions (otherwise I would have no basis in saying that I know I am P-conscious). So, ultimately, our viewpoints are probably not as far apart as they might seem on the surface-- we both acknowledge some sort of deep connection between A and P. Where we mainly disagree is on the nature of P.
 
  • #55
Fliption said:
But I don't see how this could ever exist. How could a zombie's A-consciousness be identical when a human's A-consciousness is connected somehow to P-consciousness and a zombie's is not? There has to be some difference somewhere, doesn't there?

There are at least two possibilities for how it could be that some creature has A-consciousness identical to a human but no P-consciousness.

1) It could be that A is not nomologically sufficient to influence a human's P consciousness in the way that it does. (Nomological sufficiency refers to a sufficiency that obtains in our reality as a result of its contingent natural laws, and as such is a stronger constraint than logical sufficiency.) If this were the case, then even though some aspects of my A-consciousness might always be accompanied by P-consciousness, a creature could exist in our reality with an A-consciousness identical to mine, such that it would not have my P-consciousness.

Note that A-consciousness is ultimately a functional concept, so this possibility might allow that a computer with an A-consciousness identical to mine would not have P-consciousness even though it might not allow that a human with A-consciousness identical to mine would not have my P-consciousness.

2) It could be that A is not logically sufficient to influence a human's P consciousness in the way that it does. If this were the case, then even though it might be the case that any creature in our universe which has an A-consciousness identical to mine has at least some sort of P-consciousness, it could still be the case that in some metaphysically possible world with different laws of nature, a creature with my A-consciousness would have no P-consciousness at all. This is the scenario Chalmers likes to use: there could be some metaphysical world physically identical to ours, in which a creature physically identical to me (and thus with identical A-consciousness) still does not have P-consciousness.

I think I can pinpoint the difficulty you are facing. You are assuming that there is some aspect of a human's A-consciousness that depends upon the human's P-consciousness, and that the presence of the human's P-consciousness is necessary for his A-consciousness to act in the way that it does (eg, you are assuming that a human's conceptual acceptance of the hard problem, as borne out by his behavior and verbal reports, is possible only if he has P-consciousness). I think this necessity is too strong a limit. I see why P interacting with A in this way would be sufficient to cause the human to behave as if he accepts the hard problem, but I don't see why it is necessary-- I think it is logically possible that a zombie have the proper brain activation such that he behaves as if he accepts the hard problem even without 'input' from P-consciousness.

Suppose human H enters a brain state B, indicating roughly his belief in the hard problem, as a result of his P-consciousness. It is logically possible that there exists some metaphysical zombie who has entered the same brain state as H by means other than input from P.
 
Last edited:
  • #56
confutatis said:
Apparently only two things can follow from Chalmers' definition of a zombie: either they can't possibly exist, as you realized, or we are all zombies, as Mentat says.

Neither follows, actually. It could be the case that if I build a computer functionally identical to me (eg with identical A-consciousness), it still might not be P-conscious. (Chalmers uses zombies that are physically identical to humans, but he places them in metaphysical worlds with contingent laws that are not identical to all the contingent laws of our world. He does not contend that a physical replica of a person in our world could possibly not be P-conscious.)

As for your second claim, we can note that if P-consciousness interacts with A-consciousness, this interaction may be sufficient, but not necessary, to produce utterances such as "I am seeing the color red."
 
  • #57
Fliption said:
How can you be so sure of what the easy problem is when you can't see the hard problem?

I have no intention of speaking for Mentat because he already gave his answer but, what I want to ask you, Fliption is: If you can't "see" the easy problem; what makes you believe there is even a hard problem without the fundamentals for its basis?

And I could also say "How can you be so sure of what the hard problem is when you can't see or explain the easy problem without verification of what constitutes either one's parts?" You could shirk this question forever both ways; neither will be explained unless you start off easy.
 
  • #58
hypnagogue said:
Suppose human H enters a brain state B, indicating roughly his belief in the hard problem, as a result of his P-consciousness. It is logically possible that there exists some metaphysical zombie who has entered the same brain state as H by means other than input from P.

Ok, I can accept this, but to me it implies a zombie is deterministically a slave of external forces. I agree that a brain state stating a belief in P could happen without an actual P event, but I just don't understand why this would ever happen. I can interject a state that I wish a computer program to be in as well. But if I don't purposefully interject this state and allow the program to run its course, it would have no reason to causally come into such a state on its own. This is why I say that such a creature would have to be a slave to external influences and have no causal logic in its own actions. It doesn't do anything because of its own calculations. It doesn't seem to think at all.

No matter. What word should I use to describe someone who denies the hard problem because they do not have consciousness and therefore can explain their cognitive existence easily with the reductive tools of science?
 
Last edited:
  • #59
hypnagogue said:
Neither follows, actually. It could be the case that if I build a computer functionally identical to me (eg with identical A-consciousness), it still might not be P-conscious.

Chalmers is not talking about computers, as you pointed out yourself. The point Chalmers makes is that P-consciousness is not required to explain A-consciousness. He bases his claim on the notion of a physically identical entity which exhibits identical A-consciousness but lacks P-consciousness. He doesn't base his claims on seemingly-conscious computers.

(Chalmers uses zombies that are physically identical to humans, but he places them in metaphysical worlds with contingent laws that are not identical to all the contingent laws of our world.)

I believe you are wrong about Chalmers, but if you are right then that claim is just ridiculous, as it would imply that the hard problem is only a problem in the zombie universe. I definitely don't think that's what Chalmers is saying.

As for your second claim, we can note that if P-consciousness interacts with A-consciousness, this interaction may be sufficient, but not necessary, to produce utterances such as "I am seeing the color red."

I didn't claim we may be zombies. All I said was that there's nothing in Chalmers' definition of what a zombie is that allows us to feel different from them. We believe we have P-consciousness and so do zombies. Exactly where is the difference? In the "fact" that we are right about our belief and the zombie is wrong? That doesn't make any sense.

(here's a paper by Chalmers in case people think I'm misrepresenting his position: http://jamaica.u.arizona.edu/~chalmers/papers/goldman.html )
 
Last edited by a moderator:
  • #60
Jeebus said:
I have no intention of speaking for Mentat because he already gave his answer but, what I want to ask you, Fliption is: If you can't "see" the easy problem; what makes you believe there is even a hard problem without the fundamentals for its basis?

I think you have misunderstood. I don't have an issue with understanding the easy problem. I just found it amusing that Mentat (who claims to not understand what the hard problem is all about) used the term "easy problem" as if he understood the distinction. Which he admittedly doesn't. When he labels a set of activities as "the easy problem", he can't be sure he is correct because he doesn't understand the hard problem.
 
Last edited:
