Is Consciousness an Emergent Property of a Master Algorithm?

  • Thread starter: Mentat
  • Tags: Emergent Property
AI Thread Summary
The discussion centers on the concept of consciousness as an emergent property, specifically through the lens of a "master algorithm." The argument posits that while "subjective experience" is often cited in discussions of consciousness, it lacks a coherent definition and is therefore not a useful concept. Instead, consciousness can be understood through the complex interactions of numerous processes in the brain, which can be quantified as an algorithmic structure. This perspective aligns with reductionist scientific approaches, which aim to explain consciousness without relying on the ambiguous notion of subjective experience. The conversation highlights the ongoing debate between reductionist views and those that emphasize the significance of subjective experience in understanding consciousness.
  • #51
Mentat said:
Sure I can. I can inductively or deductively prove the first proposition. The second stands as "accepted" since hypna posted it, and I find no fault with it. And the conclusion is valid, provided the premises are.
Ok fine, you can deduce it. But I cannot.
 
  • #52
Fliption said:
But I don't see how this could ever exist. How could a zombie's A-consciousness be identical when a human's A-consciousness is connected somehow to P-consciousness and a zombie's is not? There has to be some difference somewhere, doesn't there?

Congratulations! After much wrangling, you are finally beginning to see what's wrong with Chalmers' argument.

Apparently only two things can follow from Chalmers' definition of a zombie: either they can't possibly exist, as you realized, or we are all zombies, as Mentat says.

Where is that guy who said this discussion is merely about semantics? :smile:
 
  • #53
confutatis said:
Congratulations! After much wrangling, you are finally beginning to see what's wrong with Chalmers' argument.

Apparently only two things can follow from Chalmers' definition of a zombie: either they can't possibly exist, as you realized, or we are all zombies, as Mentat says.

Where is that guy who said this discussion is merely about semantics? :smile:

This is all true. But it isn't only a semantic problem, because none of this is relevant. I have been using the zombie concept when I should have been using some other word. I personally don't see the significance of the distinction hypnagogue has pointed out. It doesn't seem to me that the definition has to be this way to make the case that Chalmers is trying to make. My only beef with it is that it means I have to find another word to call Mentat. The issue remains regardless of what I call it, though.
 
Last edited:
  • #54
Mentat said:
I need a better definition of "P-consciousness", as you probably expected. "The redness of an object" is a matter of perceptual discrimination, is it not?

Again, I cannot precisely pick out the concept in words, but I can only point to it. When you look at a stop sign, what does it look like to you? Among its many apparent properties, it has a certain visual phenomenal quality that you call 'redness.'

Discrimination is clearly involved here (eg, discriminating the redness of the sign from the blueness of the sky), but discrimination alone does not exhaustively characterize this phenomenon. For instance, for a human there is something different about discriminating hues of color and pitches of tone. You may say that this difference is purely underpinned by computational differences, and that may be the case, but we are only trying here to point to instances of what we mean by P-consciousness, not explain them.

Let me put it another way. Imagine that one day you encounter a curious cognitive dissociation. Suddenly you can't see anything at all, that is, the world looks to you the same way it looked in the past when you would close your eyes. And yet, you can walk around just as well as you could before, and you can accurately describe the world (e.g. by telling someone "I see a red stop sign" when a red stop sign is placed at a distance before you) just as well as you could before. This would be a case of visual A-consciousness without visual P-consciousness.

I'm not claiming that this is possible in practice; indeed, I suspect it most probably is not. I am simply using this example to illustrate how we can conceptually delineate between A and P consciousness. Even if it turns out that they are one and the same thing, there still would seem to be the distinctive property that there are different aspects or viewpoints of that one thing.

I suppose I could say that, were you to give me a specific instance of what you'd consider P-consciousness, I'd show that it is really just A-consciousness. But, at the same time, to do so does seem to imply that P-consciousness doesn't exist at all.

If P-consciousness does not exist for you, then your personal experience of acting in the world would be the same as your current personal experience of deep sleep: i.e., you would have no personal experience at all. If you respond to this by saying that you would indeed have personal experience just in virtue of your A-consciousness as you acted in the world, then you would be acknowledging the existence of P-consciousness and adding some claims about its properties (eg it exists whenever certain A-conscious activities occur). This is not the same as denying its existence altogether.

So, being a "zombie" becomes having no P-consciousness, with which I have no problem, so long as we don't deny them any of the things that A-consciousness can be shown to entail - i.e. self-consciousness, emotion, intuition, creativity, memory, perceptual discrimination (in all of its forms; i.e. noticing, and responding to, the difference between textures, colors, shapes, and sounds), and reasoning ability.

A-consciousness entails the behavioral characteristics of, say, sadness, but it doesn't entail the personal feeling of sadness. If there is no P-consciousness, then by definition there is no personal feeling of sadness. This is the familiar schism; A-consciousness speaks of 3rd person observable properties, whereas P-consciousness speaks of 1st person observable properties. To the extent that sadness is characterized by objectively observable behaviors and brain activities, it has an A-conscious aspect; and to the extent that it is characterized by particular subjective feelings, it has a P-conscious aspect. Similar remarks can be made about the other members of your list.

Has it not occurred to you that I might have been right when I told Fliption that everyone is a zombie? Think about it. I'm clearly a zombie, since I could claim to have P-consciousness, but I can't explain it. This exact statement holds true for all of you, does it not?

It doesn't follow that your failure to explain P-consciousness entails that you are a zombie. If I can't explain how weather works, that doesn't mean there is no weather.

I maintain that I am not a zombie in virtue of my P-consciousness. To make this claim I am forced to assume that there is indeed some kind of overlap or causal connection between my A-conscious utterances and my P-conscious perceptions (otherwise I would have no basis in saying that I know I am P-conscious). So, ultimately, our viewpoints are probably not as far apart as they might seem on the surface-- we both acknowledge some sort of deep connection between A and P. Where we mainly disagree is on the nature of P.
 
  • #55
Fliption said:
But I don't see how this could ever exist. How could a zombie's A-consciousness be identical when a human's A-consciousness is connected somehow to P-consciousness and a zombie's is not? There has to be some difference somewhere, doesn't there?

There are at least two possibilities for how it could be that some creature has A-consciousness identical to a human but no P-consciousness.

1) It could be that A is not nomologically sufficient to influence a human's P consciousness in the way that it does. (Nomological sufficiency refers to a sufficiency that obtains in our reality as a result of its contingent natural laws, and as such is a stronger constraint than logical sufficiency.) If this were the case, then even though some aspects of my A-consciousness might always be accompanied by P-consciousness, a creature could exist in our reality with an A-consciousness identical to mine, such that it would not have my P-consciousness.

Note that A-consciousness is ultimately a functional concept, so this possibility might allow that a computer with an A-consciousness identical to mine would not have P-consciousness even though it might not allow that a human with A-consciousness identical to mine would not have my P-consciousness.

2) It could be that A is not logically sufficient to influence a human's P consciousness in the way that it does. If this were the case, then even though it might be the case that any creature in our universe which has an A-consciousness identical to mine has at least some sort of P-consciousness, it could still be the case that in some metaphysically possible world with different laws of nature, a creature with my A-consciousness would have no P-consciousness at all. This is the scenario Chalmers likes to use: there could be some metaphysical world physically identical to ours, in which a creature physically identical to me (and thus with identical A-consciousness) still does not have P-consciousness.

I think I can pinpoint the difficulty you are facing. You are assuming that there is some aspect of a human's A-consciousness that depends upon the human's P-consciousness, and that the presence of the human's P-consciousness is necessary for his A-consciousness to act in the way that it does (eg, you are assuming that a human's conceptual acceptance of the hard problem, as born out by his behavior and verbal reports, is possible only if he has P-consciousness). I think this necessity is too strong a limit. I see why P interacting with A in this way would be sufficient to cause the human to behave as if he accepts the hard problem, but I don't see why it is necessary-- I think it is logically possible that a zombie have the proper brain activation such that he behaves as if he accepts the hard problem even without 'input' from P-consciousness.

Suppose human H enters a brain state B, indicating roughly his belief in the hard problem, as a result of his P-consciousness. It is logically possible that there exists some metaphysical zombie who has entered the same brain state as H by means other than input from P.
 
Last edited:
  • #56
confutatis said:
Apparently only two things can follow from Chalmers' definition of a zombie: either they can't possibly exist, as you realized, or we are all zombies, as Mentat says.

Neither follows, actually. It could be the case that if I build a computer functionally identical to me (eg with identical A-consciousness), it still might not be P-conscious. (Chalmers uses zombies that are physically identical to humans, but he places them in metaphysical worlds with contingent laws that are not identical to all the contingent laws of our world. He does not contend that a physical replica of a person in our world could possibly not be P-conscious.)

As for your second claim, we can note that if P-consciousness interacts with A-consciousness, this interaction may be sufficient, but not necessary, to produce utterances such as "I am seeing the color red."
 
  • #57
Fliption said:
How can you be so sure of what the easy problem is when you can't see the hard problem?

I have no intention of speaking for Mentat, because he already gave his answer. But what I want to ask you, Fliption, is: if you can't "see" the easy problem, what makes you believe there is even a hard problem, without the fundamentals for its basis?

And I could also ask, "How can you be so sure of what the hard problem is when you can't see or explain the easy problem, without verification of what constitutes either one's parts?" You could shirk this question forever in both directions; neither will be explained unless you start with the easy one.
 
  • #58
hypnagogue said:
Suppose human H enters a brain state B, indicating roughly his belief in the hard problem, as a result of his P-consciousness. It is logically possible that there exists some metaphysical zombie who has entered the same brain state as H by means other than input from P.

Ok, I can accept this, but to me it implies a zombie is deterministically a slave of external forces. I agree that a brain state stating a belief in P could happen without an actual P event, but I just don't understand why this would ever happen. I can inject a state that I wish a computer program to be in as well. But if I don't purposefully inject this state and instead allow the program to run its course, it would have no reason to causally come into such a state on its own. This is why I say that such a creature would have to be a slave to external influences and have no causal logic in its own actions. It doesn't do anything because of its own calculations. It doesn't seem to think at all.
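The program analogy above can be made concrete with a toy sketch (all names here are hypothetical, chosen only to illustrate the point): a state machine whose own transition rules never lead into a certain state, so the only way it ends up there is if something external forces the state on it.

```python
# A minimal sketch of the analogy: a machine that never reaches a given
# state through its own causal logic, only via external interference.

class ToyMind:
    """A trivial state machine standing in for a zombie's cognition."""

    # The machine's own "causal logic": note that no rule ever
    # leads into the state "believes_hard_problem".
    TRANSITIONS = {
        "idle": "perceiving",
        "perceiving": "reporting",
        "reporting": "idle",
    }

    def __init__(self):
        self.state = "idle"

    def step(self):
        """Advance one step according to the machine's own rules."""
        self.state = self.TRANSITIONS.get(self.state, self.state)

    def force_state(self, state):
        """External interference: impose a state the machine
        would never reach by running its own course."""
        self.state = state


mind = ToyMind()
for _ in range(100):
    mind.step()
# After any number of steps, the state is still one of the three
# states reachable by the machine's own logic.
print(mind.state)

mind.force_state("believes_hard_problem")
# Only external forcing puts it into this state.
print(mind.state)
```

The sketch mirrors the worry: left to its own transitions, the machine never "comes to believe" anything about a hard problem; that state only appears when imposed from outside.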

No matter. What word should I use to describe someone who denies the hard problem because they do not have consciousness and therefore can explain their cognitive existence easily with the reductive tools of science?
 
Last edited:
  • #59
hypnagogue said:
Neither follows, actually. It could be the case that if I build a computer functionally identical to me (eg with identical A-consciousness), it still might not be P-conscious.

Chalmers is not talking about computers, as you pointed out yourself. The point Chalmers makes is that P-consciousness is not required to explain A-consciousness. He bases his claim on the notion of a physically identical entity which exhibits identical A-consciousness but lacks P-consciousness. He doesn't base his claims on seemingly-conscious computers.

(Chalmers uses zombies that are physically identical to humans, but he places them in metaphysical worlds with contingent laws that are not identical to all the contingent laws of our world.)

I believe you are wrong about Chalmers, but if you are right then that claim is just ridiculous, as it would imply that the hard problem is only a problem in the zombie universe. I definitely don't think that's what Chalmers is saying.

As for your second claim, we can note that if P-consciousness interacts with A-consciousness, this interaction may be sufficient, but not necessary, to produce utterances such as "I am seeing the color red."

I didn't claim we may be zombies. All I said was that there's nothing in Chalmers' definition of what a zombie is that allows us to feel different from them. We believe we have P-consciousness and so do zombies. Exactly where is the difference? In the "fact" that we are right about our belief and the zombie is wrong? That doesn't make any sense.

(here's a paper by Chalmers in case people think I'm misrepresenting his position: http://jamaica.u.arizona.edu/~chalmers/papers/goldman.html )
 
Last edited by a moderator:
  • #60
Jeebus said:
I have no intention of speaking for Mentat because he already gave his answer but, what I want to ask you, Fliption is: If you can't "see" the easy problem; what makes you believe there is even a hard problem without the fundamentals for its basis?

I think you have misunderstood. I don't have an issue with understanding the easy problem. I just found it amusing that Mentat (who claims to not understand what the hard problem is all about) used the term "easy problem" as if he understood the distinction. Which he admittedly doesn't. When he labels a set of activities as "the easy problem", he can't be sure he is correct because he doesn't understand the hard problem.
 
Last edited:
  • #61
confutatis said:
Chalmers is not talking about computers, as you pointed out yourself. The point Chalmers makes is that P-consciousness is not required to explain A-consciousness. He bases his claim on the notion of a physically identical entity which exhibits identical A-consciousness but lacks P-consciousness. He doesn't base his claims on seemingly-conscious computers.

I know, I was merely stating a possible case where a zombie as I have defined it (with A-consciousness identical to a human but no P-consciousness) could possibly exist in this reality.

Chalmers' point is not so much that P need not be invoked to explain A, as it is that completely explaining A does not completely explain P.

I believe you are wrong about Chalmers, but if you are right then that claim is just ridiculous, as it would imply that the hard problem is only a problem in the zombie universe. I definitely don't think that's what Chalmers is saying.

I think you need to brush up on your Chalmers. :biggrin:

The Conceivability Argument

According to this argument, it is conceivable that there be a system that is physically identical to a conscious being, but that lacks at least some of that being's conscious states. Such a system might be a zombie: a system that is physically identical to a conscious being but that lacks consciousness entirely. It might also be an invert, with some of the original being's experiences replaced by different experiences, or a partial zombie, with some experiences absent, or a combination thereof. These systems will look identical to a normal conscious being from the third-person perspective: in particular, their brain processes will be molecule-for-molecule identical with the original, and their behavior will be indistinguishable. But things will be different from the first-person point of view. What it is like to be an invert or a partial zombie will differ from what it is like to be the original being. And there is nothing it is like to be a zombie.

There is little reason to believe that zombies exist in the actual world. But many hold that they are at least conceivable: we can coherently imagine zombies, and there is no contradiction in the idea that reveals itself even on reflection. As an extension of the idea, many hold that the same goes for a zombie world: a universe physically identical to ours, but in which there is no consciousness. Something similar applies to inverts and other duplicates.

From the conceivability of zombies, proponents of the argument infer their metaphysical possibility. Zombies are probably not naturally possible: they probably cannot exist in our world, with its laws of nature. But the argument holds that zombies could have existed, perhaps in a very different sort of universe. For example, it is sometimes suggested that God could have created a zombie world, if he had so chosen. From here, it is inferred that consciousness must be nonphysical. If there is a metaphysically possible universe that is physically identical to ours but that lacks consciousness, then consciousness must be a further, nonphysical component of our universe. If God could have created a zombie world, then (as Kripke puts it) after creating the physical processes in our world, he had to do more work to ensure that it contained consciousness.

- David Chalmers, http://jamaica.u.arizona.edu/~chalmers/papers/nature.html

The claim, nonetheless, is not ridiculous. It is not an ontological claim about what exists, but an epistemic claim about what we can know about consciousness. (edit: scratch that; as Chalmers uses it, it is an ontological argument, although it can be modified to be purely an epistemic argument.) If there could be some world physically identical to ours that contains humans without P-consciousness, this underscores our conceptual difficulties with explaining P-consciousness in this world, where we are accustomed to being able to explain almost anything with a physically reductive explanation. Thus the hard problem obtains in our universe, and metaphysical zombies are only used to illustrate this point.

I didn't claim we may be zombies. All I said was that there's nothing in Chalmers' definition of what a zombie is that allows us to feel different from them. We believe we have P-consciousness and so do zombies. Exactly where is the difference? In the "fact" that we are right about our belief and the zombie is wrong? That doesn't make any sense.

If I am looking at a stop sign and I say, "I am seeing redness," I am referring to a certain mental state of mine. If I close my eyes, generate no internal visual imagery, and then say "I am seeing redness," then clearly my mental state is not the same as it was beforehand, even if my utterance is. The referent of the utterance has changed.
 
Last edited by a moderator:
  • #62
After reading the link that Confutatis provided, I'm trying to figure out why this isn't paradoxical.


"...so that qualia don't seem to play a primary role in the process by which we ascribe qualia to ourselves!"

"...he'll tell you that he thinks that Bob Dylan makes good music. How can this ability for self-ascription be explained? Clearly not by appealing to qualia, for Zombie Dave doesn't have any. The story will presumably have to be told in purely functional terms."

The claim is made here that qualia is simply "along for the ride". If this is true then I can understand why Hypnagogue says the zombie definition is what it is. And I would agree. But the problem I'm having is that I can't explain how someone could ever come to write an article such as this if not for the existence of the qualia itself. Am I misunderstanding this?

I actually agree with everything that is being said by Chalmers and Hypnagogue about zombies except for one thing. It makes sense to me that they could conceivably be identical in every way except for the belief in the hard problem itself. That's where I'm still not seeing the possibility. I just can't imagine why a planet of nothing but zombies would ever have a "hard problem" to solve.
 
Last edited:
  • #63
Forgive me if I am repeating what others have said; I only got to page 3 of the thread. But isn't the 'subjective experience' that philosophers are referring to merely personal experience, and isn't personal experience what something feels like for you?
And aren't philosophers merely saying that no matter how much detail you give in describing the systems and sub-systems of the brain, these descriptions could never (hypothetically) enable someone who wasn't conscious to understand what it 'feels' like to be conscious?
Kind of like how descriptions of the components of a TV could never articulate what it's like to watch an episode of Buffy the Vampire Slayer. (Not that I've ever watched it. *ahem*)
 
  • #64
hypnagogue said:
I know, I was merely stating a possible case where a zombie as I have defined it (with A-consciousness identical to a human but no P-consciousness) could possibly exist in this reality.

Well, I don't think you can possibly have A-consciousness exactly like a human without being a lot like a human. But that is a side issue anyway.

I think you need to brush up on your Chalmers.

I have read most of the papers on Chalmers' web page (my job bores me to death). I still see a contradiction in his ideas, and from my perspective your quote shows it explicitly:

Zombies are probably not naturally possible: they probably cannot exist in our world, with its laws of nature.

So - probably - our laws of nature are enough to explain why we are not zombies. Isn't that correct?

But the argument holds that zombies could have existed, perhaps in a very different sort of universe.

A very different sort of universe with very different natural laws? Could be, but in that case the hard problem should be stated in terms of those very different natural laws, not in terms of our own. That's not what Chalmers does; he conceives of a very different universe, and then uses his knowledge of facts about that universe to make claims about our own. But how can he know anything about a universe where the natural laws are very different from our own? That's what I find ridiculous.

For example, it is sometimes suggested that God could have created a zombie world, if he had so chosen.

I have no issue with this, but the problem, as Chalmers himself hints at, is the nagging feeling that that's exactly what God did: that we are all zombies according to the way Chalmers defines zombies. You have pointed out the problem yourself, and now you seem to be overlooking it.

From here, it is inferred that consciousness must be nonphysical.

That is not the only option. It can also be inferred that consciousness must be an illusion. That's what Mentat infers, and I don't see anything wrong with his argument.

If God could have created a zombie world, then (as Kripke puts it) after creating the physical processes in our world, he had to do more work to ensure that it contained consciousness.

Let me show you a similar idea:

If God could have created a world without water, then after creating the physical processes in our world, he had to do more work to ensure that it contained water.

Surely you don't want to claim that water is a metaphysical substance, do you? We all know that our universe didn't have water in the beginning, but the existence of water can be fully explained by the same laws that explain the universe without water.
 
  • #65
Fliption said:
I want to make sure you're clear on my position because it doesn't appear you are.

I understand your position. You think the way Chalmers defines a zombie doesn't make much sense. And I agree.

I have an issue with the way zombie is being defined here for two reasons. 1) I currently see a problem with it and 2) I don't feel it needs to be defined this way to illustrate the point of the hard problem.

You are right about #1, but you are partially wrong about #2. The zombie thing is central to Chalmers' ideas. You may have a different concept of a hard problem which does not require zombies, but then your hard problem is not the same as Chalmers'.

From various discussions I've seen here, people are making too much out of the whole zombie topic and don't seem to understand the real point.

It's probably because people are not really talking about what you think they are talking about. Happens all the time in forums like this.

I said that zombies must not believe in the hard problem.

That's not what you said. Here's the quote from your post:

It makes sense to me that they could conceivably be identical in everyway except for the belief in the hard problem itself.

It's not clear from that sentence alone whether you think zombies do not believe in the hard problem, or if belief in the hard problem is what defines one as a zombie.

This is a non-sequitur

It's not a non-sequitur if you were talking about a definition. This is always a problem; it's hard to know if people are defining things and then stating what they think follows from their definitions, or if they are simply stating what they believe to be true. You have now made it clear that you were talking about your beliefs, not about definitions. And there's no point arguing about someone's beliefs without knowing how they came up with them. I don't know why you think zombies must not believe in the hard problem, since my belief is that zombies can't possibly exist.

I have a feeling that when I do get there, I'll be all alone. :frown:

I have found that the more I understand about the world and about myself, the harder it is to make people understand things the way I do. But for the most part I consider understanding as just a form of entertainment, so it doesn't bother me that people don't see things the way I do, because more likely than not everyone is wrong, including myself.

There are far more important things to do in life than "understanding". Only bored people waste time with philosophy.
 
Last edited by a moderator:
  • #66
confutatis said:
A very different sort of universe with very different natural laws? Could be, but in that case the hard problem should be stated in terms of those very different natural laws, not in terms of our own. That's not what Chalmers does; he conceives of a very different universe, and then uses his knowledge of facts about that universe to make claims about our own. But how can he know anything about a universe where the natural laws are very different from our own? That's what I find ridiculous.

The metaphysical universe in which these zombies live is physically identical to our own. Chalmers' implication, then, is that we cannot give a physical explanation of consciousness in our own universe-- if we could, it should apply equally well to our physically identical zombie counterparts, but it doesn't.

I have no issue with this, but the problem, as Chalmers himself hints at, is the nagging feeling that that's exactly what God did: that we are all zombies according to the way Chalmers defines zombies. You have pointed out the problem yourself, and now you seem to be overlooking it.

Chalmers' argument doesn't imply that we are zombies. It does highlight the difficult issue of our epistemic access to our P-consciousness, however: by what means can/do we have knowledge of P? It would seem on the face of it that we can't have access to it if we have the same access to everything a zombie does, but there are ways around this. For instance, it could be that P-consciousness is in some way sufficient, but not necessary, to induce the kind of activity in me that leads me to say "I see a blue sky." A zombie might be led to say the same thing, but by means of a somehow different process.

That is not the only option. It can also be inferred that consciousness must be an illusion. That's what Mentat infers, and I don't see anything wrong with his argument.

The problem with this line of thinking is that it still leaves the big questions unanswered. P-consciousness is all about appearances in the first place, so saying that it is an illusion does not get us anywhere. It is still just as mystifying how such an illusion could be illusory in the way that it is.

If God could have created a world without water, then after creating the physical processes in our world, he had to do more work to ensure that it contained water.

Surely you don't want to claim that water is a metaphysical substance, do you? We all know that our universe didn't have water in the beginning, but the existence of water can be fully explained by the same laws that explain the universe without water.

If there is a world physically identical to ours, then it follows from this that there is water. It does not seem to follow in the same way that in a world physically identical to ours, there are conscious beings. This is the key difference in the argument you seem to be overlooking.
 
  • #67
Fliption said:
The claim is made here that qualia is simply "along for the ride". If this is true then I can understand why Hypnagogue says the zombie definition is what it is. And I would agree. But the problem I'm having is that I can't explain how someone could ever come to write an article such as this if not for the existence of the qualia itself. Am I misunderstanding this?

You're not misunderstanding, I think. This is a deep issue of our epistemic access to P-consciousness. It appears as if we take certain actions (such as saying "I see that the sky is blue") in virtue of our P-conscious contents/properties/attributes. If this is so, it does raise serious questions about the possible existence of a creature with an identical A-consciousness but a non-existent P.

In view of this quandary, we might be led to claim that any system with an A-consciousness identical to some conscious human must have an identical P. This is a purely functionalist notion, and accepting it forces us to make some apparently wild claims. For instance, Ned Block has composed the thought experiment of the Chinese Gym, where each individual in China communicates with the others by means of a walkie-talkie, such that their communications are functionally isomorphic to neurons sharing information in the brain. If we accept strictly that identical A-consciousness must imply identical P, then we must accept that if each member of this Chinese Gym took on the functional characteristics of one of your neurons, the ensemble would be P-conscious in precisely the same way you are. Given the proper input, it would 'say' "I see that the sky is blue" precisely when you would say the same thing, and presumably the whole ensemble collectively would be seeing the same phenomenal sky as you do when you make this statement.

So, it appears that there are ridiculous claims to be made all around the board when it comes to consciousness. Of course, it could be that the Chinese Gym really is P-conscious as a collective, but it seems to be a serious violation of our intuition-- just as bad, perhaps, as supposing that there could be a creature with an identical A but not an identical P.

For my own part, I currently think a panpsychist ontology is the best way to navigate the issue, but it's difficult to settle on any paradigm for this problem for long before more pressing issues arise.

I actually agree with everything that is being said by Chalmers and Hypnagogue about zombies except for one thing. It makes sense to me that they could conceivably be identical in everyway except for the belief in the hard problem itself. That's where I'm still not seeing the possibility. I just can't imagine why a planet of nothing but zombies would ever have a "hard problem" to solve.

Perhaps if a race of creatures evolved with A-consciousness similar to our own but no P, they wouldn't ever make reference to something like a P. But I think this is the wrong way to conceive of the problem. Presumably such a race would not have an A identical, on average, to humans, so they would not be zombies in the true sense.

Perhaps a better way to think of it is to suppose that tomorrow, by some strange occurrence, all humans on this planet retained their A but lost their P. Would we go on merrily talking about the blueness of the sky and so on? Well, that really depends on the nature of the relationship between A and P-- if P is necessary to get us to behave as if we have it, then there would soon be a drastic change in our collective As; if it is not necessary, then it could be the case that, by some means, we continue to behave as if we had it even though we don't. Perhaps it would turn out this way naturally, in which case P would have to be epiphenomenal and causally inefficacious; or, perhaps it would take some wild scenario like aliens interfering with our brain patterns in order to induce us to continue acting as if we had it, in which case we could conclude that P is sufficient, but not necessary, to get us to behave as if we have it.
 
  • #68
hypnagogue said:
The metaphysical universe in which these zombies live is physically identical to our own.

I do have trouble reconciling your "physically identical to our own" above with your previous mention of "different laws of nature". My perception is that you are contradicting yourself; how can you have a physically identical universe with different laws of nature?

But I might be missing something, so I'll await further comments.

Chalmers' argument doesn't imply that we are zombies.

Chalmers clearly states (and I need no brushing up for this :smile:) that the only difference between ourselves and a zombie is the truth about beliefs about P-consciousness. As I said, the only essential difference is that the zombie's belief that it has P-consciousness is false, while our belief that we have P-consciousness is true. Yet Chalmers also states that the zombie has no way to find out that his beliefs about consciousness are false.

Do you have a way to find out that your beliefs about consciousness are false? No? What makes you different from a zombie then?

The problem with this line of thinking is that it still leaves the big questions unanswered. P-consciousness is all about appearances in the first place, so saying that it is an illusion does not get us anywhere. It is still just as mystifying how such an illusion could present itself in the way that it does.

And yet Chalmers' zombies do not have P-consciousness and still have the illusion of having it. Can you explain how that is possible, because to me it makes no sense at all.

If there is a world physically identical to ours, then it follows from this that there is water.

I meant identical in the sense that it is described by the same natural laws. Our knowledge of physics and chemistry explains why there's so much water in Louisiana, and it also explains why there's so little water in Nevada.

It does not seem to follow in the same way that in a world physically identical to ours, there are conscious beings.

That is only because you are assuming, a priori, that the laws that explain the presence or absence of water are incapable of explaining the presence or absence of consciousness. It's a circular argument. But the contrary argument, that they can explain it, is also circular. This is the key point so many people seem to be overlooking.
 
  • #69
confutatis said:
Keep up the good work Fliption, and eventually you'll see why this hard problem is nonsense.

I want to make sure you're clear on my position because it doesn't appear you are.

No, you're not misunderstanding anything. It's a crucial point in Chalmers' argument that nothing that zombies do or say can be used to imply that they lack P-consciousness.

I have an issue with the way zombie is being defined here for two reasons. 1) I currently see a problem with it and 2) I don't feel it needs to be defined this way to illustrate the point of the hard problem. So just to be clear, I do believe there is a hard problem. I just don't think this problematic definition of zombies is needed.

From various discussions I've seen here, people are making too much out of the whole zombie topic and don't seem to understand the real point. I see that Hypnagogue has even felt that he had to clear up some confusion about this being a thought exercise of epistemology and not ontology. The same message I found myself saying over and over again in other threads, you may recall.

Since neither Mentat nor myself believe in the hard problem itself, then we must be zombies. Just look at your words above, that's what they imply.
No, they don't imply that. I said that zombies must not believe in the hard problem. That is not the same thing as saying that people who don't believe in the hard problem are zombies. There's a big difference. This is a non sequitur (specifically, the logical fallacy of affirming the consequent).

You are getting there.
Perhaps. I have a feeling that when I do get there, I'll be all alone. :frown:
 
  • #70
confutatis said:
I do have trouble reconciling your "physically identical to our own" above with your previous mention of "different laws of nature". My perception is that you are contradicting yourself; how can you have a physically identical universe with different laws of nature?

The claim is that the physical laws of nature do not exhaustively represent all the laws of nature.

Do you have a way to find out that your beliefs about consciousness are false? No? What makes you different from a zombie then?

Presumably if a zombie were to be granted P-consciousness, he would notice the difference.

Once again, there are deep issues here about epistemic access. Is there a way to find out that a belief as of a certain subjective experience is false? I don't know. It is a difficult issue with no clear answer.

On some views, P-consciousness is incorrigible-- that is, it is impossible to be wrong about a belief as of a certain subjective experience. Does this imply that a zombie in Chalmers' sense must have P-consciousness since it believes it does? No, I think not, since perhaps the metaphysical difference in the zombie's world that allows him to be functionally identical to a human without having attendant P-consciousness also allows him to be incorrect about his beliefs about P-consciousness. If this were true, then the incorrigibility of P-consciousness would be a result of the contingent laws of our universe involved in granting us P-consciousness. On the other hand, some claim that P-consciousness in this reality itself is not incorrigible.

And yet Chalmers' zombies do not have P-consciousness and still have the illusion of having it. Can you explain how that is possible, because to me it makes no sense at all.

Such zombies have second order beliefs of P-consciousness without first order P-conscious contents, whereas we have both. Strictly speaking, the zombie is under no illusion at all, since there is no 1st person view for the zombie from which it could be under an illusion. The zombie is under an illusion no more than a rock is under an illusion. It seems that to have an illusion in the first place, one must have some sort of subjective perspective with which to be aware of such an illusion.

I meant identical in the sense that it is described by the same natural laws. Our knowledge of physics and chemistry explains why there's so much water in Louisiana, and it also explains why there's so little water in Nevada.

If a conglomeration of H2O molecules exists in the zombie world, it follows that there is water. If a conglomeration of neurons exists in the zombie world, it does not follow that there is P-consciousness.

That is only because you are assuming, a priori, that the laws that explain the presence or absence of water are incapable of explaining the presence or absence of consciousness. It's a circular argument. But the contrary argument, that they can explain it, is also circular. This is the key point so many people seem to be overlooking.

It is not an a priori assumption; it is an a posteriori deduction from our apparent systematic failure to satisfactorily explain P-consciousness in physical terms. The deduction may or may not be false, but it is not something assumed at the outset-- it is a conclusion arrived at.
 
  • #71
Let me touch on the relevance of this particular definition of zombie. Isn't the conceivability argument about the idea that consciousness does not follow from the laws of the universe as we understand them? Is this the whole point? The whole idea to me seems to be claiming that consciousness is not explained or accounted for with a purely physical explanation. At least no one has been able to do it yet. So let's say we can create (or conceive of) a being where all the laws of nature are not broken. Is it conscious? We do not know because we do not understand how the laws of nature can lead to such a thing. This seems like the whole point to me. Correct me if I have misunderstood.

So why is it so relevant that this being must exhibit the exact same behaviour that I do? So what if he doesn't see the hard problem, because he doesn't see anything that cannot be explained via the laws of nature? We cannot know for certain whether this being is conscious (he could just be lying), and we have no reason to believe it is by simply looking at its physical make-up, which is still the point. What am I missing?
 
  • #72
Fliption said:
Let me touch on the relevance of this particular definition of zombie. Isn't the conceivability argument about the idea that consciousness does not follow from the laws of the universe as we understand them? Is this the whole point? The whole idea to me seems to be claiming that consciousness is not explained or accounted for with a purely physical explanation. At least no one has been able to do it yet. So let's say we can create (or conceive of) a being where all the laws of nature are not broken. Is it conscious? We do not know because we do not understand how the laws of nature can lead to such a thing. This seems like the whole point to me. Correct me if I have misunderstood.

You are referring to the problem of other minds vis a vis the problem of the metaphysical possibility (conceivability) of zombies. You are correct in pointing out that these two problems are intimately tied together. The zombie argument (as used by Chalmers) just goes a little further in making some metaphysical claims about the relationship between consciousness and physics. But yes, even if we suppose that the idea of a zombie is logically incoherent (metaphysically impossible), we are still left with the familiar core problems: the existence and nature of P-consciousness, asymmetry of access to P-conscious states, and so on.

So why is it so relevant that this being must exhibit the exact same behaviour that I do?

Zombies are used in this way to illustrate the seeming dissociation between reality as science/physics describes it and reality as it presents itself in P-consciousness. If there is a systematic difference in the behavior of zombies and of humans, then this systematic difference must be explicable in terms of physics (since 3rd person behavior is presumably explicable entirely in terms of physics). As you are stipulating that this difference must be due to lack of P-consciousness, it must follow then that P-consciousness is explicable entirely in terms of physics. In this case P-consciousness would literally be those physical processes missing from zombies such that they behave as if they have no P-consciousness.

So what if he doesn't see the hard problem because he doesn't see anything that cannot be explained via the laws of nature.

My Chalmerian zombie twin must believe in the hard problem just as much as I do. To frame it again: If he doesn't, then there is a difference in his 3rd person behavior which is explicable entirely in terms of physics. So if we say that this difference is due to his lack of P-consciousness, and that this difference is explicable in terms of physics, it follows that P-consciousness exists entirely in terms of extrinsic physics, contradicting the ontological intuition behind the hard problem.

We cannot know for certain whether this being is conscious (he could just be lying), and have no reason to believe it is by simply looking at its physical make-up, which is still the point. What am I missing?

You are correct; the central problems remain as real as ever.

For instance, let us suppose that zombies (in Chalmers' strong sense) are logically impossible, and therefore that P-consciousness exists entirely in virtue of physical laws. If this is the case, there is no ontological gap between physical reality and P-consciousness: they are the same thing. However, in this scenario, we are still left with massive epistemological gaps between the two: the problem of other minds, asymmetry of access, etc. Furthermore, we are left with a further mystifying problem: why should an epistemological gap exist if there is no ontological gap?
 
  • #73
Maybe because the physics explanation of reality is incomplete?
 
  • #74
hypnagogue said:
For instance, let us suppose that zombies (in Chalmers' strong sense) are logically impossible, and therefore that P-consciousness exists entirely in virtue of physical laws. If this is the case, there is no ontological gap between physical reality and P-consciousness: they are the same thing.

Can we consider that possibility for a moment, without falling in the materialist trap? I'm a monist and not a materialist, and even though I find my position difficult to explain, I see a lot of people share it.

However, in this scenario, we are still left with massive epistemological gaps between the two: the problem of other minds, asymmetry of access, etc. Furthermore, we are left with a further mystifying problem: why should an epistemological gap exist if there is no ontological gap?

All those problems have a simple explanation that's not mystifying at all. Our misunderstanding of language constrains our ability to understand things, because most of what we know we learn through language, yet we know very little about language itself. Our situation is not unlike that of a man who travels to a foreign country, hires an incompetent interpreter, and finds himself having trouble communicating with everyone. Until he realizes his interpreter is the source of the problem, he will be led to think the locals don't make any sense. Our languages often stand between us and reality, and they are not good at interpreting facts.

From that view, the source of your epistemological gap is the fact that any statement about anything must always include three distinct elements. In English those are the subject, the object, and the verb. In math, it's two quantities and an operation, or two sides of an equation and the equal sign. Whenever you look at anything from the point of view of language, you will always see two distinct entities and a relationship between them. Very often the two entities are exactly the same, and the relationship is just a fictitious linguistic device.
 
  • #75
If the epistemological gap is only linguistic, then it should be possible in principle for me to literally see your 'red' and see to what extent it is similar to my 'red.' Are you proposing the only reason I can't do this is because of some linguistic confusion? It seems to run much deeper to me.
 
  • #76
We still don't know if it's really impossible to look from another person's subjective perspective. We cannot exclude that some future technology could be developed to wire two brains together (some animals can do something like it, ants, for instance). I wonder what the two people would experience then. I suppose that they would keep individual consciousnesses if the dualists are right, and would "meld" into one consciousness if the materialists are right.
 
  • #77
hypnagogue said:
If the epistemological gap is only linguistic, then it should be possible in principle for me to literally see your 'red' and see to what extent it is similar to my 'red.'

All you are saying above is that you can't know everything. And the idea that there's anything wrong with that is an erroneous notion that's purely linguistic in nature.

Are you proposing the only reason I can't do this is because of some linguistic confusion?

No, the reason you can't know what's on my mind is because you are not omniscient. The linguistic confusion is involved in the fact that you think your limited, imperfect knowledge is an aspect of reality.
 
  • #78
confutatis said:
All you are saying above is that you can't know everything. And the idea that there's anything wrong with that is an erroneous notion that's purely linguistic in nature.

There's more to it than that. If P-consciousness exists entirely in virtue of physical phenomena, it should admit of physical analysis. We should be able to know which systems are P-conscious, and in exactly what way they are P-conscious. But it appears as if we cannot-- it appears as if it is impossible in principle to do this with any degree of certainty. If it is a physical phenomenon, why should it be opaque to physical analysis even in principle?

I'm not saying we should be able to know everything. I am saying that in principle, we should be able to objectively observe all phenomena that are rightly called physical-- there should be nothing asymmetric or hidden about consciousness from a 3rd person view. But there is, and there are strong arguments that it cannot be otherwise, even in principle.
 
  • #79
Fliption said:
I think you have misunderstood. I don't have an issue with understanding the easy problem. I just found it amusing that Mentat (who claims to not understand what the hard problem is all about) used the term "easy problem" as if he understood the distinction. Which he admittedly doesn't. When he labels a set of activities as "the easy problem", he can't be sure he is correct because he doesn't understand the hard problem.


Thanks for clearing that up for me. Can you direct me to a link where Mentat made this mistake? I must have skipped over it; if it is in this thread I surely missed it, since I only read selected posts.

Thanks, by the way.
 
  • #80
Thanks for this response to my question.


hypnagogue said:
If there is a systematic difference in the behavior of zombies and of humans, then this systematic difference must be explicable in terms of physics (since 3rd person behavior is presumably explicable entirely in terms of physics).

I understand what you're saying. But why is this the case? It seems as if there is an assumption that something non-physical cannot influence the behavior of something physical. Why is this assumption being made? I don't understand why it's necessary to make this assumption, because it is obviously not true in this case. If this were true, it would prove that consciousness is purely physical. Otherwise we wouldn't be talking about this issue right now. Surely this conversation is influenced by the fact that we have consciousness and can't explain why, and not because god is pulling strings?


For instance, let us suppose that zombies (in Chalmers' strong sense) are logically impossible, and therefore that P-consciousness exists entirely in virtue of physical laws. If this is the case, there is no ontological gap between physical reality and P-consciousness: they are the same thing. However, in this scenario, we are still left with massive epistemological gaps between the two: the problem of other minds, asymmetry of access, etc. Furthermore, we are left with a further mystifying problem: why should an epistemological gap exist if there is no ontological gap?

Exactly. I just thought that this was the main point of the zombie thought exercise to begin with. So I didn't see the definition clarification of zombie as being very relevant to the solution of the hard problem, like some people seem to be saying.
 
  • #81
Jeebus said:
Thanks for clearing that up for me. Can you direct me to a link where Mentat made this mistake? I must have skipped over it; if it is in this thread I surely missed it, since I only read selected posts.

Thanks, by the way.


Well I don't know if I'd call it a mistake. I just thought it was interesting. His quote is on page 3 of this thread and here is the paragraph:

I hate to pick at words (though, as you well know, I think it is necessary that the words be correct, so as to avoid the possibility of confusion), but I too see a difference between "measuring" a particular wavelength of light and experiencing the color. What I don't see is the difference between being stimulated by a particular wavelength of light, which you then process in terms of previous stimulations and remember, and "experiencing" a certain color. I don't see what's left to explain, and those things that I mention are all part of the "easy problem".

So he is listing out all the things he thinks the easy problem encompasses when he admits to not understanding the hard problem. Contrary to what he has said here, I would argue that some of these things he listed are indeed part of the hard problem and not the easy problem. The word "experience" is the key. He just assumes that all physical brain activity equals experience. He just glossed right over the hard problem. I realize that he thinks it's all easy problems, but he said that these were easy problems according to Chalmers, and that doesn't seem true at all.
 
  • #82
Fliption said:
I understand what you're saying. But why is this the case? It seems as if there is an assumption that something non-physical cannot influence the behavior of something physical. Why is this assumption being made?

Let me introduce a new term here to make discussion a little easier: a C-zombie is a zombie in Chalmers' sense, i.e. it is a creature physically identical to a human and existing in a metaphysical world physically identical to our own, such that its A-consciousness is identical to that of a human but it has no P-consciousness.

You postulated that no C-zombie should be able to behave as if it appreciates the hard problem, due to its lack of P-consciousness. If this is the case, then P-consciousness must be necessary for the existence of A-conscious behaviors indicating P-conscious beliefs (or "A as if P" for short). But, there is no problem in principle for physics to completely explain A-conscious properties of any kind. Therefore, if A-conscious properties indicating beliefs in P-consciousness must be caused by P-consciousness, and if such A-conscious properties are entirely in the domain of physics, then P-consciousness must be entirely in the domain of physics as well. There is no dissociation here between P-conscious properties and A-conscious properties, and so they wind up becoming the same thing: whenever there is A as if P, on this view, it must follow that there is P, and from this it looks as if physics' ability to explain all A implies that it can explain P.

(Otherwise, we would have to explain why certain physical phenomena-- those embodied by A as if P-- cannot occur without some non-physical component, even though there is every reason to believe that they should be able to occur quite naturally underneath the wing of purely physical laws. A far more natural and less ad hoc interpretation under this condition of necessity is to simply assume that P is A and nothing more, that there is no difference between the two. I don't think this is the view you want to take.)

We don't run into this problem if we suppose that P-consciousness is sufficient, but not necessary, to produce A-conscious properties indicating belief in P-conscious properties. If this is the case, then we can have A as if P but not P. In this scenario, physics does not automatically ensnare all the phenomena involved. Despite its ability to exhaustively explain A, there is something more about P that eludes the grasp of physics. So there is then a dissociation between the two that itself needs explanation.

If we say that P is sufficient but not necessary to produce A as if P, then that means A as if P can be produced via several mechanisms. One mechanism might be the kind of 'dead,' robotic, P-less production of A that you have alluded to before; we can imagine that a computer emulating a human brain's functional properties might be such an instance, where A is duplicated but there is no P. Another mechanism for generating A as if P would be that instantiated in P-conscious human brains, where the window could be open for the kind of interactionist dualism you refer to in your post (although this route is not a necessary one to take for a proponent of the hard problem).

Exactly. I just thought that this was the main point of the zombie thought exercise to begin with. So I didn't see the definition clarification of zombie as being very relevant to the solution of the hard problem, like some people seem to be saying.

The clarification just brings things into sharper focus. It's not surprising that opponents of the hard problem have found more problem with this interpretation of zombies than yours, since your interpretation of zombies (with the necessity of P for A as if P) actually turns out to be closer to their views of strict and complete identity between mind and brain, as I hope I showed successfully above.
 
  • #83
hypnagogue said:
It's not surprising that opponents of the hard problem have found more problem with this interpretation of zombies than yours, since your interpretation of zombies (with the necessity of P for A as if P) actually turns out to be closer to their views of strict and complete identity between mind and brain

Just for the record, I want to clarify that not everyone who opposes the hard problem does so because they hold a view of strict and complete identity between mind and brain. I do not hold such a view, but I still oppose the hard problem.

What Chalmers is trying to sell is nothing but good old Cartesian dualism. He certainly deserves the merit of finding a way of expressing Descartes' ideas in a more modern/scientific framework, but the central issue is the same. The "hard problem" is just a modern replacement for the cogito. And that means, with all due respect to the parties involved, that Chalmers and his followers are lagging some 350 years behind when it comes to philosophy. Cartesian dualism is not a tenable philosophical position; that has been shown by people far more qualified than I am, so I won't dwell on it.

That said, dualism, in some form or another, is part of anyone's worldview. Even die-hard materialists such as Dennett and his followers do not really believe in their theories when it comes to an understanding of themselves; their claims to deny the supremacy of a first-person worldview are betrayed by the language they use to describe their own world. Perhaps the only difference between the Cartesian dualist and the materialist monist is their attitude towards what they can't understand: the former accepts it, the latter rejects it. That's all there is to the debate as far as I can tell; the bottom line is neither side really understands why things are the way they appear to be.

However, there is an alternative. It's not well explored because it is somewhat new, at least compared to the two other currents of thought, but my study of the subject has revealed that it is at least about a century old. There is no clear label for the philosophy yet; the best name I've seen for it is "dual-aspect monism". It is a form of monism that successfully incorporates dualism as an attribute of perception rather than an attribute of reality. Central to the idea is an understanding of the role knowledge plays in perception, which also requires an understanding of the role language plays in knowledge. The idea is far from simple, but to those who understand it, it makes far more sense than the other two competing views.

I believe anyone who understands dual-aspect monism will reject both Chalmers' and Dennett's ideas, while still acknowledging both positions have some truth to them. That is my position, but I realize it sounds paradoxical to those who are not familiar with it. Fliption has been kind enough to point that out, even though I can only see his criticisms as failure to see past his current philosophical framework.

As a side note, according to dual-aspect monism the identity between mind and brain can be explained by asserting that, while it's true that the brain contains the mind, it's also true that the mind contains the brain, and that both mind and brain are equally real. It's their mutual containment relationship which makes it possible for both to exist, but it's not correct to say, as the materialists do, that the brain must evolve before the mind appears. Due to their mutual containment, they must necessarily evolve together, as the absence of one would imply the absence of the other.
 
  • #84
confutatis said:
What Chalmers is trying to sell is nothing but good old Cartesian dualism. He certainly deserves the merit of finding a way of expressing Descartes' ideas in a more modern/scientific framework, but the central issue is the same. The "hard problem" is just a modern replacement for the cogito. And that means, with all due respect to the parties involved, that Chalmers and his followers are lagging some 350 years behind when it comes to philosophy. Cartesian dualism is not a tenable philosophical position; that has been shown by people far more qualified than I am, so I won't dwell on it.

You are misreading Chalmers. Descartes was an interactionist substance dualist, and Chalmers is committed neither to interactionism nor a 'mind substance.' Chalmers leaves the door open for epiphenomenalism, and actually prefers monism over dualism.

As I see things, the best options for a nonreductionist are type-D dualism, type-E dualism, or type-F monism: that is, interactionism, epiphenomenalism, or panprotopsychism. If we acknowledge the epistemic gap between the physical and the phenomenal, and we rule out primitive identities and strong necessities, then we are led to a disjunction of these three views. Each of the views has at least some promise, and none have clear fatal flaws. For my part, I give some credence to each of them. I think that in some ways the type-F view is the most appealing, but this sense is largely grounded in aesthetic considerations whose force is unclear.

- Chalmers, http://jamaica.u.arizona.edu/~chalmers/papers/nature.html

Note also that there is no clear distinction between 'dual aspect monism' and 'aspect dualism.' Both pick out the same general concept of different ontological aspects or properties ultimately belonging to the same entity. And in fact, in formulating a tentative theory of consciousness in his paper http://jamaica.u.arizona.edu/~chalmers/papers/facing.html , Chalmers embraces precisely such an aspect dichotomy rather than one of substance:

This leads to a natural hypothesis: that information (or at least some information) has two basic aspects, a physical aspect and a phenomenal aspect. This has the status of a basic principle that might underlie and explain the emergence of experience from the physical. Experience arises by virtue of its status as one aspect of information, when the other aspect is found embodied in physical processing.

So Chalmers is most certainly not a redux of Descartes, and in fact is probably best described as an aspect dualist, or dual aspect monist if you prefer. Forgive me if I am using 'dual aspect monist' in a different manner from you, but if I am, I would like to know what distinguishes dual aspect monism from aspect dualism. In any case, it is clear that aspect dualism can be easily recast as at least some kind of monism, and this is the position that Chalmers seems to prefer.

The type-F monism that Chalmers describes may also be useful in illuminating our recent discussion of zombies. Again from "Consciousness and its Place in Nature":

A type-F monist may have one of a number of attitudes to the zombie argument against materialism. Some type-F monists may hold that a complete physical description must be expanded to include an intrinsic description, and may consequently deny that zombies are conceivable. (We only think we are conceiving of a physically identical system because we overlook intrinsic properties.) Others could maintain that existing physical concepts refer via dispositions to those intrinsic properties that ground the dispositions. If so, these concepts have different primary and secondary intensions, and a type-F monist could correspondingly accept conceivability but deny possibility: we misdescribe the conceived world as physically identical to ours, when in fact it is just structurally identical. Finally, a type-F monist might hold that physical concepts refer to dispositional properties, so that zombies are both conceivable and possible, and the intrinsic properties are not physical properties. The differences between these three attitudes seem to be ultimately terminological rather than substantive. (emphasis mine)

I myself also find this type-F monism the most attractive solution to the problem of consciousness, and as I maintain the metaphysical possibility of zombies, I would fall under the third category Chalmers describes above. That is, I maintain that physical properties refer to dispositional (extrinsic) properties only, and therefore there could exist a world with the same physical (extrinsic) properties as our world but different intrinsic properties. On the other hand, you may find yourself siding with one of the first two categories, thus accounting for your rejection of the possibility of zombies (although it should be pointed out that rejecting the possibility of zombies does not entail rejecting the hard problem).
 
Last edited by a moderator:
  • #85
hypnagogue said:
Therefore, if A-conscious properties indicating beliefs in P-consciousness must be caused by P-consciousness, and if such A-conscious properties are entirely in the domain of physics, then P-consciousness must be entirely in the domain of physics as well.

I decided to leave this for a day because I felt there was the potential that I wasn't seeing the forest for the trees. As usual, things look different when I come back. On a technical note, I'm not sure I understand why the domain of physics should cover everything that interacts with the physical. If this were really true, then I would reasonably conclude that consciousness is physical, because the fact that we are talking about this strongly suggests an interaction. But I will accept it for now because I think I see more clearly the intent of the definition, and perhaps I have been too picky.

I do understand what you mean when you say that a zombie can say things like "That object is red" and even "I believe in the hard problem". I understand that there is a certain A-consciousness state that relates to every single behavior that can be exhibited by a person with P-consciousness. I understand why this is an important point to make in the definition of a zombie. But while I agree that the A-consciousness state that allows a zombie to say and believe in the hard problem is possible, I do not believe that such a state would ever occur if we assume the zombie is calculating in a causal, logical way. This was the point I was making. Obviously, I agree that the state is possible in principle. I just don't believe the zombie would ever causally arrive in such a state. Now that I think about it, I'm not sure my point is all that different from saying sufficient but not necessary.

The clarification just brings things into sharper focus. It's not surprising that opponents of the hard problem have found more problems with this interpretation of zombies than with yours, since your interpretation of zombies (with the necessity of P for A as if P) actually turns out to be closer to their view of strict and complete identity between mind and brain, as I hope I showed successfully above.

Even though I do not believe that a zombie would ever believe in the hard problem, Chalmers' point stands because the A-consciousness state that allows me to say the hard problem exists can be mimicked in a zombie in principle. But I still struggle a bit with the issue from above about the domain of physics encompassing all that interacts with the physical. I still feel the most a scientist could ever get is to say "the belief in the hard problem is equivalent to the differences in these two A-consciousness states." To then conclude that a belief in the hard problem is equivalent to P-consciousness itself doesn't seem like very good logic.
 
Last edited:
  • #86
hypnagogue said:
You are misreading Chalmers. Descartes was an interactionist substance dualist, and Chalmers is committed neither to interactionism nor a 'mind substance.' Chalmers leaves the door open for epiphenomenalism, and actually prefers monism over dualism.

I agree I may have oversimplified things a bit. The point I was trying to make is that Chalmers' view is similar to Descartes' in the sense that it raises problems that are unsolvable in principle. Or, as Chalmers calls it, "hard".

Note also that there is no clear distinction between 'dual aspect monism' and 'aspect dualism.' Both pick out the same general concept of different ontological aspects or properties ultimately belonging to the same entity.

I disagree. In the philosophy I'm calling dual-aspect monism, language plays an extremely important role. Whatever it is that Chalmers has in mind, I do not see language being given enough emphasis to qualify his view as anything resembling the view I'm talking about.

And in fact, in formulating a tentative theory of consciousness in his paper, Chalmers embraces precisely such an aspect dichotomy rather than one of substance

But, again, he does not refer to language as playing any fundamental role.

Forgive me if I am using 'dual aspect monist' in a different manner from you, but if I am, I would like to know what distinguishes dual aspect monism from aspect dualism.

I'm quite positive you're using 'dual aspect monism' in a different manner, but there really isn't a standard vocabulary to talk about what I have in mind. It's just something I thought I came up with by myself, and later realized a lot of other people came up with very similar ideas.

I think the key difference from aspect dualism is that aspect dualism refers to reality made of some "substance" which takes different "aspects" depending on... depending on what? I'm not sure what in the philosophy gives rise to the dichotomy in perception. Dual-aspect monism, as I'm defining it anyway, makes it clear that the dichotomy is just an illusion caused by our misunderstanding of the nature of our knowledge.

Another key difference, I suppose, is that in dual-aspect monism there is no hard problem. The supposed inability to explain subjective experience in terms of objective knowledge is a misperception - objective knowledge itself is the explanation of subjective experience, because the world is perfectly isomorphic to the mind that observes it. It's just that our language tends to conceal that isomorphism. The reason that happens is because we tend to assign meaning to words, rather than to their relationships with other words. Just like you think the meaning of the word 'red' is this, whereas the real meaning of the word 'red' is defined by its relationship with all other words in the language.

you may find yourself siding with one of the first two categories, thus accounting for your rejection of the possibility of zombies (although it should be pointed out that rejecting the possibility of zombies does not entail rejecting the hard problem).

Actually, I reject the conceivability of zombies, and that does entail rejecting the hard problem, as I'm sure you'd agree (with the entailment, not the rejection).
 
  • #87
Fliption--

It's a difficult and subtle point, but I believe it stands. I think I can state it a little more clearly now, and it may be helpful to do so even though you seemed to have relaxed your conceptual requirement for the necessity of P for A as if P.

Our problem from earlier in this thread arises from the tension between the set of statements

1. P is necessary for A as if P
2. P is non-physical
3. all A is physical
4. all physical entities/states can be described entirely by the laws of physics

(By "physical" I mean extrinsic/relational/dispositional properties only.)

From premises 1-3, we conclude that a non-physical entity is necessary for the existence of certain physical entities. But this contradicts premise 4, which says essentially that no physical entity requires a non-physical cause. So either we must abandon premise 4, or we must abandon one of 1, 2, or 3.

To abandon premise 4, we would have to show that, for instance, my disposition to say things such as "The sky looks so blue today" is inexplicable, in principle, by physics. But this seems like an impossible task. Science can straightforwardly tell a causal story about light wavelengths striking my retina, being transduced into neural signals, and causing a cascade of neural events in my brain terminating in a set of motoric signals that move my mouth/tongue/throat/etc such that I utter "The sky looks so blue today." So abandoning premise 4 is off limits, and we must abandon one of the other premises.

Premise 3 is safe, since A-consciousness is defined in such a way that it is a purely physical phenomenon. That leaves 1 and 2. If we refuse to reject 1 (as you seemed reluctant to do previously), then we must reject premise 2. But rejecting 2 essentially makes us materialists and leaves us with all the familiar problems, so it becomes clear that we should reject premise 1.

But while I agree that the A-consciousness state that allows a zombie to say and believe in the hard problem is possible, I do not believe that such a state would ever occur if we assume the zombie is calculating in a causal, logical way. This was the point I was making. Obviously, I agree that the state is possible in principle. I just don't believe the zombie would ever causally arrive in such a state. Now that I think about it, I'm not sure my point is all that different from saying sufficient but not necessary.

Possibility in principle is all that is needed, since the possibility in principle for A as if P but not P implies that P is not necessary for A as if P.

There are a number of ways we could imagine this possibility in principle to be realized. If P is epiphenomenal and plays no causal role, then we can easily imagine a world with identical physical laws to our own which followed a course of history identical to our own. In this world, you and I are having the same discussion as we are in our own world, but we in fact do not have P. This is possible since all the causal agents in our world are duplicated in this particular zombie world, leading to the same events.

On the other hand, if P does play some causal role, then perhaps we can imagine a world with a surrogate causal agent for P, such that it replicates P's causal role without having P's phenomenal properties.

But I still struggle a bit with the issue from above about the domain of physics encompassing all that interacts with the physical. I still feel the most a scientist could ever get is to say "the belief in the hard problem is equivalent to the differences in these two A-consciousness states." To then conclude that a belief in the hard problem is equivalent to P-consciousness itself doesn't seem like very good logic.

We indeed do not have to conclude that P is physical based on what you have presented here. But in order to do this, we must suppose that P is not necessary for A as if P, as discussed above.
 
  • #88
hypnagogue said:
Fliption--
On the other hand, if P does play some causal role, then perhaps we can imagine a world with a surrogate causal agent for P, such that it replicates P's causal role without having P's phenomenal properties.

I do understand what you're saying, but I still found myself cringing every once in a while until I read this one paragraph above. It has put what you're saying into perspective, and I now think I fully understand your point and agree with what you're saying. I do personally believe beyond doubt that there is a causal relationship here, so I continued to struggle, but this point about a surrogate helped me to see exactly where you're coming from.

I think that while I agree that in principle a zombie can believe in the hard problem whether P-consciousness is causal or not, I still think it unlikely that these events would ever happen in practice, which is why we will probably continue to be tempted to call people like Mentat a zombie. :smile:

Thanks for the clarification.
 
  • #89
confutatis said:
I agree I may have oversimplified things a bit. The point I was trying to make is that Chalmers' view is similar to Descartes' in the sense that it raises problems that are unsolvable in principle. Or, as Chalmers calls it, "hard".

Chalmers does not hold that the hard problem is unsolvable even in principle (although some philosophers do, such as Colin McGinn with his concept of cognitive closure). If Chalmers thought the hard problem were literally unsolvable, I imagine he wouldn't bother trying to solve it.

Besides, materialist viewpoints seem to raise problems that are just as hard. If my visual percepts literally are just an illusion, how could they possibly have the illusory characteristics that they have in virtue of a purely materialist ontology?

I think the key difference from aspect dualism is that aspect dualism refers to reality made of some "substance" which takes different "aspects" depending on... depending on what?

In Chalmers' interpretation, the dual aspects arise as a result of the difference between intrinsic properties and extrinsic properties. As he puts it:

This view holds the promise of integrating phenomenal and physical properties very tightly in the natural world. Here, nature consists of entities with intrinsic (proto)phenomenal qualities standing in causal relations within a spacetime manifold. Physics as we know it emerges from the relations between these entities, whereas consciousness as we know it emerges from their intrinsic nature.

Another key difference, I suppose, is that in dual-aspect monism there is no hard problem. The supposed inability to explain subjective experience in terms of objective knowledge is a misperception - objective knowledge itself is the explanation of subjective experience, because the world is perfectly isomorphic to the mind that observes it. It's just that our language tends to conceal that isomorphism.

Thus far I don't see how your position is any different from Dennett's materialist eliminativism.

The reason that happens is because we tend to assign meaning to words, rather than to their relationships with other words. Just like you think the meaning of the word 'red' is this, whereas the real meaning of the word 'red' is defined by its relationship with all other words in the language.

To this I say, nonsense. Suppose I devise my own new language. My language has only one word, "unga." "Unga" refers precisely to my visual experience as of this. There are no other words to which unga may refer, and yet it has a clear referent.

Besides, if all words are defined by their relationship to other words, then it is impossible for a word to refer to reality (except that part of reality corresponding to these words). If this is the case, then words should have syntax but no semantics. I should be able to study the syntax of a Chinese dictionary and some Chinese texts and eventually come to as complete an understanding of Chinese as a citizen of China, without ever speaking to a Chinese speaker or seeing externally grounded diagrams such as 'table --> (picture of a table)'. By the same token, linguists should not have needed the Rosetta Stone to decode hieroglyphics. But obviously this cannot be the case; words acquire meaning in virtue of their relationship to the world. They must be grounded in something external to the linguistic system itself. In the case of "red" (as well as "unga"), the word is grounded in (refers to) my visual experience of this color.

Actually, I reject the conceivability of zombies, and that does entail rejecting the hard problem, as I'm sure you'd agree (with the entailment, not the rejection).

It depends on the type of zombies you're talking about. Arguably one could reject the conceivability of Chalmers' zombies without rejecting the conceivability of Block's Chinese Gym zombie.
 
Last edited:
  • #90
hypnagogue said:
Chalmers does not hold that the hard problem is unsolvable even in principle (although some philosphers do, such as Colin McGinn with his concept of cognitive closure).

He does hold that the hard problem is unsolvable within a materialistic context.

Besides, materialist viewpoints seem to raise problems that are just as hard.

I don't want to argue this point because I would seem to be arguing for materialism, which I'm not. I just want to point out that materialism does not, in principle, pose any unsolvable problem. To find unsolvable problems you have to transcend the materialist perspective.

Thus far I don't see how your position is any different from Dennett's materialist eliminativism.

It is different on a very fundamental point: from my perspective, "matter causes mind" is just as true as "mind causes matter". You seem to be overlooking the importance of the second assertion.

To this I say, nonsense. Suppose I devise my own new language. My language has only one word, "unga." "Unga" refers precisely to my visual experience as of this. There are no other words to which unga may refer, and yet it has a clear referent.

Your language doesn't allow you to make any true statements about 'unga'. It's not the kind of language I'm talking about.

Besides, if all words are defined by their relationship to other words, then it is impossible for a word to refer to reality

To this I say, nonsense :smile:

It's perfectly possible for words to refer to reality based on their relationships to other words alone, as long as those relationships reflect asymmetries in reality. When reality exhibits symmetries, words cannot refer to it, which is why we have all those inverted spectrum scenarios. For instance, all you know about 'right' is that it is 'not left', but you have no way to know if my right is the same as your right; for all you know, it could be your left. But it doesn't stop there, it goes into higher levels. For instance, all you know about 'top and bottom' is that it is 'neither right nor left', but again you have no way to know if what I experience as 'top and bottom' is what you experience as 'right and left'.
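The left/right symmetry argument above can be made concrete with a small toy model (my own illustrative sketch, not anything from Chalmers or the thread). If we treat a purely relational "language" as a set of (word, relation, word) facts, then any relabeling of the words that preserves every fact is undetectable from inside the language -- which is exactly the inverted-spectrum situation:

```python
# Toy model (illustrative only): a purely relational "language" whose
# only facts are opposition relations between direction words.
facts = {
    ("left", "opposite_of", "right"),
    ("right", "opposite_of", "left"),
    ("up", "opposite_of", "down"),
    ("down", "opposite_of", "up"),
}

def relabel(facts, mapping):
    """Rewrite every fact under a word-for-word relabeling."""
    return {(mapping.get(a, a), rel, mapping.get(b, b)) for a, rel, b in facts}

# Swapping 'left' and 'right' everywhere preserves every fact (it is an
# automorphism of the structure), so no statement expressible in this
# language can distinguish the original labeling from the swap.
swap = {"left": "right", "right": "left"}
print(relabel(facts, swap) == facts)   # True: the swap is undetectable

# A relabeling that breaks the relational structure IS detectable:
print(relabel(facts, {"left": "up"}) == facts)   # False
```

In this sketch, two speakers whose uses of 'left' and 'right' are systematically swapped would still agree on every sentence of the language, which is the sense in which relational meaning alone leaves symmetric referents undetermined.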

If you take that to the highest level possible, of the language as a whole, then you clearly see that language is far less connected to reality than you currently dream of. I have a very simple argument for this: if language really reflected reality, then all semantically correct statements would correspond to facts about reality. As you surely know, that is far from being the case.

If this is the case, then words should have syntax but no semantics. I should be able to study the syntax of a Chinese dictionary and some Chinese texts and eventually come to as complete an understanding of Chinese as a citizen of China

Dictionaries do not define a language; they don't expose enough word relationships. In order to learn Chinese, you need to be exposed to an awful lot of it, certainly far more than just a dictionary. But it is not true that you can't learn Chinese by studying the language alone. How do you suppose those geniuses at the army crack enemy codes?
 
Last edited by a moderator:
  • #91
confutatis said:
I don't want to argue this point because I would seem to be arguing for materialism, which I'm not. I just want to point out that materialism does not, in principle, pose any unsolvable problem. To find unsolvable problems you have to transcend the materialist perspective.

I would agree, in a way. Materialism, when applied to its own domain, poses no unsolvable problems. But P-consciousness does not appear to be in the domain of materialism, and it appears as if materialism is not suited to solving the problem of P-consciousness. Even if we choose to label P-consciousness an illusion, it is still paradoxical how it could even have the illusory properties that it does if materialism is true.

It is different on a very fundamental point: from my perspective, "matter causes mind" is just as true as "mind causes matter". You seem to be overlooking the importance of the second assertion.

I don't think you've gone into enough detail on this point.

Your language doesn't allow you to make any true statements about 'unga'. It's not the kind of language I'm talking about.

What if 'unga' means 'I see this color' (or if you prefer, 'there is this color')? Then clearly it can have a truth value, despite it being the only word in my language.

Here is what you said initially:

The reason that happens is because we tend to assign meaning to words, rather than to their relationships with other words. Just like you think the meaning of the word 'red' is this, whereas the real meaning of the word 'red' is defined by its relationship with all other words in the language.

What of a child who learns his first word? His father points to his mother and says "momma," and eventually the child learns to refer to his mother as "momma" himself. The child knows no other words, so there are no other words for his "momma" to achieve meaning from, and yet clearly the word "momma" now has meaning for the child. How can this be if the meaning of the word "momma" is strictly contingent upon other words?

Another scenario: before a hypothesized experimental result is determined empirically, what determines the truth value of the hypothesis? Does it not yet have a truth value? When does it attain a truth value, when the experimenters observe that it has been verified (or falsified), or when the experimenters think internally/speak/write about the empirical results?

If you take that to the highest level possible, of the language as a whole, then you clearly see that language is far less connected to reality than you currently dream of.

I'm not necessarily making claims about the connections between language and reality. What I am making claims about is the connection between language and perceptual experience.

Suppose there is a 5 year old child, A, who has seen and can perceptually distinguish between cats and dogs, but suppose that his limited vocabulary only allows him to make the crudest of linguistic distinctions regarding what makes a dog a dog, such that these distinctions alone are not sufficient to tell dogs and cats apart. To this end we might imagine that A would say "a dog is a furry animal with 4 legs and a tail, a snout, two eyes, a nose," etc.-- a description that agrees perfectly with any description of a cat. So A can perceptually discriminate between cats and dogs, even if he cannot say precisely what it is about dogs that makes them different from cats.

Now suppose that there is another child, B, with the same vocabulary set as A, except for words referring to furry, four legged animals (although he knows what furry, four, legged, and animal mean). Not only does B have no words for furry, four legged animals, he has never seen one. Suppose that B learns what dogs and cats are only in virtue of reading a simple, linguistic description of what they are-- perhaps A has written him a letter telling him about dogs, which matches precisely the description of cats in a children's book (with no pictures). What will B label a cat if one is presented to him? He may call it either a dog or a cat, since it is a furry, four legged animal, or he may claim that he doesn't know which one it is. Why can A distinguish between the two whereas B cannot, if they are working with the same linguistic tools? Because A's linguistic notions of cat and dog are associated with his past perceptual experiences of cats and dogs, whereas B has no such perceptual experience of cats or dogs to ground the semantics of these terms.

Dictionaries do not define a language; they don't expose enough word relationships. In order to learn Chinese, you need to be exposed to an awful lot of it, certainly far more than just a dictionary. But it is not true that you can't learn Chinese by studying the language alone. How do you suppose those geniuses at the army crack enemy codes?

They crack them by finding systematic relationships between the code and a natural language. But such schemes are made much easier by the fact that symbols in a code stand for letters in an alphabet. Chinese has no alphabet; it has distinct symbols for each concept.

Even putting that objection aside-- to borrow from your example, how would the interpreter, going only by syntax, differentiate between the words for 'left' and 'right'? Even if he manages to narrow things down enough such that he knows one word must mean 'left' and the other 'right,' how is he to differentiate between these without ultimately making some inference grounded in facts about the external world? For instance, if he finds that one word refers to the dominant hand of most people in China, he may conclude that this word means 'right,' but this inference is drawn via reference to an externally existing fact about Chinese people; or he may find that one word means 'left' by roundabout reference to the direction in which the sun sets, but this again relies on an empirical fact. (e.g., if the text of some human-like alien civilization fell to Earth tomorrow, we would not know which of their hands tends to be dominant, nor would we know in which direction their sun sets, and so we could not make sense of any of these.)
 
Last edited:
  • #92
hypnagogue said:
What if 'unga' means 'I see this color' (or if you prefer, 'there is this color')? Then clearly it can have a truth value, despite it being the only word in my language.

If your language only had that one word, could you think about other things? For instance, could you think about not-unga? And if you can think about not-unga, can you invent a word for it? If you can come up with a new word, that means you already have the concept in your mind. When I'm referring to language here, I'm referring to the totality of concepts you have in your mind, not the totality of arbitrary symbols which may or may not exist as expressions of those concepts.

What of a child who learns his first word? His father points to his mother and says "momma," and eventually the child learns to refer to his mother as "momma" himself.

The child may only know one word, but his/her head must already be full of concepts before the first word is learned. It's one thing to know that 'momma' is the sound that goes together with a particular concept; it's another thing to become aware of the concept in the first place. I'm talking about the latter, not the former.

Let me use a notation to make things easier: I will append a '+' sign whenever I'm talking about a concept a word refers to, and '-' when I'm talking about the word itself (eg: mother-, mère-, madre-, mutter-, are different words in different languages for the concept mother+)

The child knows no other words, so there are no other words for his "momma" to achieve meaning from, and yet clearly the word "momma" now has meaning for the child. How can this be if the meaning of the word "momma" is strictly contingent upon other words?

The meaning of momma- is momma+. The meaning of momma+ is contingent upon concepts such as object+, room+, person+, face+, eyes+, and so on. Even though it may take years for the child to learn the words object-, room-, person-, face-, eyes-, those concepts must be in place from a very early age.

Another scenario: before a hypothesized experimental result is determined empirically, what determines the truth value of the hypothesis?

Semantics.

When does it attain a truth value, when the experimenters observe that it has been verified (or falsified), or when the experimenters think internally/speak/write about the empirical results?

That depends. The experimenter learns something by observing the experiment, and that knowledge becomes true to him as concepts (eg: this+ causes+ that+). But concepts as such cannot be communicated, so the experimenter must choose some words in his vocabulary, and create a relationship between the words that mirrors the relationship between the concepts in his mind. And here is where semantics rears its ugly head: how can the experimenter choose words that perfectly recreate the concept "this+ causes+ that+" in the mind of everyone else?

I'm not necessarily making claims about the connections between language and reality. What I am making claims about is the connection between language and perceptual experience.

The connection may be clear for the speaker, but for the listener/reader it must be reconstructed. It's one thing to explain what momma- means by pointing your fingers at momma+. It's quite another thing to explain what "consciousness- is- an- epiphenomenon- of- the- brain-"; it's really difficult for anyone to figure out what concepts a person has in mind when uttering that sentence. However, no one is born a speaker, which means our knowledge of what words mean is always imperfect. Which means not everything we learn from other people is true, in the sense that it would be true if we had learned it from personal experience.

Suppose there is a 5 year old child, A, who has seen and can perceptually distinguish between cats and dogs, but suppose that his limited vocabulary only allows him to make the crudest of linguistic distinctions regarding what makes a dog a dog...

To cut a long story short, learning about cats+ and dogs+ is not the same thing as learning about cats- and dogs-. If you know nothing about cats- you can still think about cats+. If you know about cats- but do not know about cats+, then you may be tempted to think cats- is just another word for something you already know (such as dogs+). You may, in fact, enter into a long philosophical discussion as to whether dogs- really exist as every dog- can be shown to be a cat+ (which is of course nonsense if you know that dogs+ are not cats+)

They crack them by finding systematic relationships between the code and a natural language. But such schemes are made much easier due to the fact that symbols in a code stand for letters in an alphabet.

I'm sorry but you're wrong on this. Those forms of encryption (letter substitution) are no longer used since, as you said, they are so easy to crack. What makes cracking codes possible is that people usually know what a coded message probably means - there aren't many things one can talk about during war. But this is a side issue anyway.

Even putting that objection aside-- to borrow from your example, how would the interpreter, going only by syntax, differentiate between the words for 'left' and 'right'? Even if he manages to narrow things down enough such that he knows one word must mean 'left' and the other 'right,' how is he to differentiate between these without ultimately making some inference grounded in facts about the external world? For instance, if he finds that one word refers to the dominant hand of most people in China, he may conclude that this word means 'right,' but this inference is drawn via reference to an externally existing fact about Chinese people; or he may find that one word means 'left' by roundabout reference to the direction in which the sun sets, but this again relies on an empirical fact. (e.g., if the text of some human-like alien civilization fell to Earth tomorrow, we would not know which of their hands tends to be dominant, nor would we know in which direction their sun sets, and so we could not make sense of any of these.)

Even though my example was trying to address something different, I will comment on that as it touches on the same issue. The issue is what I referred to as symmetries. There is a symmetry between 'left' and 'right' that prevents you from knowing what other people mean by it, except that if something is on the right then it can't be on the left. That's all you know about left and right; for all you know your left+ might be my right+ and we'd still agree that most people prefer to use their right- hand. So the meaning of right- is not right+, it's something else close to "not left-". But of course there's more, because there are things that are neither right- nor left-. Even so, things that are neither right- nor left- tell you very little about what right+ and left+ could possibly be.

In the end, we can only discover what right- and left- mean to the extent that we can perceive asymmetries. And this has some very important consequences:

- the entirety of our perceptions cannot possibly exhibit any kind of asymmetry
- as such, any description of our perceptions that implies asymmetry (e.g. mind vs. body) is an artificial construct
- since descriptions are made of abstract symbols, the dichotomy between the description of our perceptions and the perceptions themselves must have been introduced by the symbols, not by our perceptions themselves

I'm not sure exactly how language, as expressed by symbols, creates this false dichotomy, but I'm sure that it does. The reason I'm so sure is because there is no dichotomy between any aspect of my perceptions and the entirety of them; in other words, I never experience anything that I believe I should not be experiencing. Clearly it is our theories that must be wrong, not our perceptions.
 
  • #93
If an infant has a concept of 'mother' before learning to say 'momma,' then surely a dog has a concept of 'master' despite never learning any words at all. Is a dog, then, a linguistic animal despite never speaking or writing?
 
  • #94
hypnagogue said:
If an infant has a concept of 'mother' before learning to say 'momma,' then surely a dog has a concept of 'master' despite never learning any words at all. Is a dog, then, a linguistic animal despite never speaking or writing?

Parrots can speak many words. I guess that makes them linguistic animals then :mad:
 
  • #95
Some parrots have shown the ability to use language relatively intelligently.

Anyway, my point is that you seem to refer to much more than is normally referred to by 'language.' Concepts of the kind you refer to can exist without linguistic tokens, and are probably best characterized as perceptual concepts (baby's concept of momma, pre-language, is defined by baby's visual perception/recognition of its mother's face). That was my point at the outset-- perception is not a purely linguistic phenomenon, although you seem to be trying to paint it as such.
 
  • #96
Confutatis

I've been following along here, taking advantage of the dialogue you're having with Hypnagogue to once again try to understand your view. It seems the last 2 or 3 posts have been some of the most comprehensive as far as describing the heart of your view that I've seen. It seems there are some arguments being presented that are crucial to understanding your view. I have read these posts several times trying to make sure I understand them before I post any questions or develop any opinions. I do not have an opinion right now. I need a little more clarification.

It seems to me a crucial thing to understand is what you mean by "symmetry" and "asymmetry". While I know what these words mean, I'm not sure how you're applying them here. We have one example, left- and right-, that you say has symmetry, which leads to the same problem that we have in inverted spectrum scenarios. Sometimes you used the "symmetry/asymmetry" concept when referring to the relationship between words. And other times you referred to these concepts as something that reality would exhibit. Exactly what is it that has symmetry or does not have symmetry? And what criteria classify it as having symmetry? I just need a little more clarification/examples of what you mean by these concepts.
 
Last edited:
  • #97
hypnagogue said:
you seem to refer to much more than is normally referred to by 'language.' Concepts of the kind you refer to can exist without linguistic tokens

That doesn't change the fact that we can assign tokens to those concepts, and apply the same rules as we do for all other concepts. There's nothing particularly different about a concept that currently lacks a word, except the fact that it currently lacks a word.

perception is not a purely linguistic phenomenon, although you seem to be trying to paint it as such.

Perception is a purely linguistic phenomenon as far as our theories go, because our theories are also purely linguistic phenomena. There are far more things than we can talk about, but there's nothing we can say about those things, except in the languages of art, myth, folklore, etc.
 
  • #98
Fliption said:
It seems to me a crucial thing to understand is what you mean by "symmetry" and "asymmetry".

It certainly is, because ultimately it can be shown that there is a symmetry between "mental" and "physical", and because of that symmetry we have no way to know exactly what is different about them. But let's save that for a future discussion.

While I know what these words mean, I'm not sure how you're applying them here. We have one example, left- and right-, that you say has symmetry, which leads to the same problem that we have in inverted spectrum scenarios.

I believe the left vs. right problem is also classified as an inverted spectrum scenario, but I'm not sure. In any case, the idea is the same: flip everything, and nothing in our descriptions changes.

Sometimes you used the "symmetry/asymmetry" concept when referring to the relationship between words. And other times you referred to these concepts as something that reality would exhibit.

Concepts certainly exhibit symmetry, as in left/right. You can replace every single instance of one word with the other, and your knowledge still remains intact. You can't do that with 'left' and 'top', for instance, so left and top are asymmetrical. Still, taken together, left and right are symmetrical with top and bottom.
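The swap test described above can be made precise with a small sketch. This is my own toy formalization, not anything from the thread: model a bit of knowledge as a set of relations between words, and call two words symmetrical when swapping them everywhere maps the set onto itself. All the relation names and the knowledge base below are hypothetical examples.

```python
def swap(word, a, b):
    """Exchange tokens a and b; leave every other token alone."""
    return b if word == a else a if word == b else word

def apply_swap(facts, a, b):
    """Apply the a <-> b swap to every token of every fact."""
    return {tuple(swap(w, a, b) for w in fact) for fact in facts}

def symmetric(rel, a, b):
    """A relation that holds in both directions."""
    return {(rel, a, b), (rel, b, a)}

# A toy knowledge base about directions (hypothetical example).
knowledge = (
      symmetric('opposite', 'left', 'right')
    | symmetric('opposite', 'top', 'bottom')
    | symmetric('perpendicular', 'left', 'top')
    | symmetric('perpendicular', 'left', 'bottom')
    | symmetric('perpendicular', 'right', 'top')
    | symmetric('perpendicular', 'right', 'bottom')
)

# Swapping 'left' and 'right' everywhere leaves the knowledge intact...
print(apply_swap(knowledge, 'left', 'right') == knowledge)   # True
# ...but swapping 'left' and 'top' does not: they are asymmetrical.
print(apply_swap(knowledge, 'left', 'top') == knowledge)     # False
```

In the same spirit, swapping the pair ('left', 'right') with ('top', 'bottom') simultaneously also preserves this set, which matches the remark that the two pairs, taken together, are symmetrical with each other.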

As to whether reality exhibits symmetries, the answer is a bit more complex. The existence of a certain symmetry between concepts implies that we have no way to know which aspects of reality the concepts refer to. For instance, if the words 'red' and 'green' are really symmetrical as some people think, then all you can know about reality is the relationship between 'red' and 'green', not what they really are. You can't know if 'red' means this or this.

This is where things start to get interesting, because things that appear different to different observers are not considered real; we usually call them 'illusions'. For instance, if there is no objective way to assert if grass looks like this or like this, then it necessarily follows that grass is neither this nor this, and our perception of color is an illusion. Still we do perceive something, so what is it that we perceive after all?

Let's not argue that last bit for now. First, we can't be sure that 'red' and 'green' are really symmetrical. Second, we're not yet ready to discuss what 'illusion' means in the context of the kind of monism I'm talking about.

Exactly what is it that has symmetry or does not have symmetry?

Language definitely has it. Reality exhibits symmetry to the extent that we are ignorant of some of its aspects. For instance, suppose we have two perfectly identical cards placed side by side on a table. We call the card on the left 'card A', the one on the right 'card B'. We leave them on the table, go away to get something, and when we come back we find the wind has blown them away. We can no longer tell which card is which, even though we are sure both are still there. So we say there is a symmetry between card A and card B by virtue of their identical appearance.

what criteria classify it as having symmetry?

We find symmetries by using thought experiments, such as the one above about two identical cards.
 
  • #99
Unfortunately, I'm still not clear on exactly what it means for things to be symmetric or asymmetric. It sounds as if the criterion for being symmetric has something to do with our ability, or lack thereof, to know. Know what? It sounds as if it means we can't know what aspect of reality a word refers to? Is that close? I'm just not clear. I'll try to be more specific below.



confutatis said:
Concepts certainly exhibit symmetry, as in left/right. You can replace every single instance of one word with the other, and your knowledge still remains intact. You can't do that with 'left' and 'top', for instance, so left and top are asymmetrical. Still, taken together, left and right are symmetrical with top and bottom.

Why are 'left' and 'right' symmetric while 'top' and 'left' are not?

This is where things start to get interesting, because things that appear different to different observers are not considered real; we usually call them 'illusions'. For instance, if there is no objective way to assert if grass looks like this or like this, then it necessarily follows that grass is neither this nor this, and our perception of color is an illusion. Still we do perceive something, so what is it that we perceive after all?

People seeing different things is different from the inability to objectively prove that people are seeing the same thing. Inverted spectrum scenarios are a statement about our ability to know whether we are referring to the same thing, with the color red for example. This doesn't mean that we necessarily DO see different things thus making it an illusion. But this may be getting too far ahead. I'm not sure I'm prepared to move this far until I understand symmetry better.

First, we can't be sure that 'red' and 'green' are really symmetrical.

Why can't we be sure? It certainly seems that we have an inverted spectrum scenario with them, so why would they not be symmetric? I'm hoping your answer will shed more light on what it means to be symmetric.

Language definitely has it. Reality exhibits symmetry to the extent that we are ignorant of some of its aspects. For instance, suppose we have two perfectly identical cards placed side by side on a table. We call the card on the left 'card A', the one on the right 'card B'. We leave them on the table, go away to get something, and when we come back we find the wind has blown them away. We can no longer tell which card is which, even though we are sure both are still there. So we say there is a symmetry between card A and card B by virtue of their identical appearance.

Does this symmetry exist if we had not originally labeled them as 'card A' and 'card B'? If not, then again it seems symmetry only applies to concepts and not reality.

The reason I'm trying to understand this distinction is that symmetry seems to be applied differently to concepts and to external objects, which makes the definition of symmetry more confusing to me. And I'm hoping to make it as simple as I can, at least at first. There should be only one definition of symmetry that can be applied to both concepts and reality, but I'm not sure what that single definition is yet.
 
Last edited:
  • #100
confutatis said:
That doesn't change the fact that we can assign tokens to those concepts, and apply the same rules as we do for all other concepts. There's nothing particularly different about a concept that currently lacks a word, except the fact that it currently lacks a word.

Perhaps you can find a better word to use than language. The way you are using it, we can easily speak of mice having language, but that doesn't square with the way the word 'language' is used.

Perhaps we might say that a language is some set of concepts existing within an organism's mind/brain that can be expressed externally by a systematic set of abstract symbols. Symbols as such may not be sufficient for language, as in the case of parrots (even if it is arguable that a parrot's 'speech' truly constitutes a symbol of a concept in the first place), but surely they are necessary. If I never speak or write a word or have internal mental chatter, but have at least some set of concepts in my mind, then surely I cannot be said to have any linguistic properties.

Perception is a purely linguistic phenomenon as far as our theories go, because our theories are also purely linguistic phenomena. There are far more things than we can talk about, but there's nothing we can say about those things, except in the languages of art, myth, folklore, etc.

Depends what you mean by theory. Is my perception of what differentiates this color from this one a theory? If so, then all animals with red/green color perception can be said to have such theories. If not, then you cannot say that subjective redness is a merely linguistic phenomenon.
 
Last edited: