Chapter 1: A Place for Consciousness

  • Thread starter: hypnagogue
  • Tags: Consciousness
AI Thread Summary
Rosenberg explores phenomenal consciousness, defining it through various philosophical lenses, including Nagel's "what it is like" and distinctions from access consciousness. He identifies the mind-body problem, questioning how to reconcile the ontology of mind with the physical world, and proposes Liberal Naturalism as a framework that avoids ad hoc claims. This paradigm seeks a coherent account of subjective experience, arguing against both physicalist and interactionist dualist perspectives. Rosenberg introduces a new understanding of causality, emphasizing effective properties, receptive properties, and carriers, which he believes are essential for a complete account of consciousness. The discussion raises concerns about the clarity of definitions and the implications of his arguments as the book progresses.
  • #51
honestrosewater said:
Stephen,
I think qualia are unique only in that they are a part of your experience, and you are unique as an experiencer (but what do I know? :rolleyes: ). It could be that red looks the same to you as it does to me and toothaches feel the same to you as they do to me and so on.

SH: Thank you for providing a well-considered response. This particular part
of the story seems OK to me. Our brain structures are only similar, not identical, so it seems unlikely we would obtain identical shades of color if we
both looked at the same object. This is not the part which bothers me.

Honestrosewater wrote:
Your descriptions of qualia seem to all include representations. I've been told that qualia are defined by Rosenberg as being nonrepresentational. Apparently, representational and nonrepresentational aspects of our experience are both meaningful to us. But, allegedly, they are meaningful in different ways (I really don't understand the difference).

SH: We get to the problem I also find with this, at least approximately :-)
I did some work looking into phenomena or sensory data that reach our eyes
but don't get passed on to consciousness/awareness.
P-consciousness is supposed to be composed of conscious phenomena. But
there is evidence of neuronal activity that occurs before we become
conscious of taking, or deciding to take, an action. For example, we start
to withdraw our hands from a hot stimulus before we reach such a decision.

http://www.ucc.uconn.edu/~wwwphil/pctall.html by Austen Clark
"Ned Block introduced the technical sense of the term
"phenomenal consciousness" (or P-consciousness) in the
course of contrasting it with what he called "access
consciousness". Of course since it cannot be analyzed
in terms of functional or psychological notions, it is
(regrettably) impossible to give a definition, but one
can at least list some synonyms and point to examples.

Block says:
"P-consciousness is experience. P-conscious properties are
experiential ones. P-conscious states are experiential,
that is, a state is P-conscious if it has experiential
properties. The totality of the experiential properties
of a state are "what it is like" to have it. Moving from
synonyms to examples, we have P-conscious states when we
see, hear, smell, taste, and have pains. P-conscious
properties include the experiential properties of
sensations, feelings, and perceptions, but I would also
include thoughts, wants, and emotions. (Block 1995, 230)"

SH: I had gone searching on the internet. I couldn't find a consistent
definition of "phenomenal consciousness" or of qualia. I don't think there is
a definition. I see lots of examples. But I couldn't come up with a rule
that would let me predict, for something like your grandfather's face,
what response it would elicit from your consciousness -- suppose
there was more than just an emotion evoked, but also a thought, like how
old is he now and when is his birthday. If you want to include "thoughts",
as Ned Block does in the above quote, how can one distinguish between
p-consciousness and a-consciousness? Chalmers seems to have a
different point of view, one that does not include thought.

Chalmers says a mental state is conscious if it has a
qualitative feel-an associated quality of experience.
These qualitative feels are also known as phenomenal
properties, or qualia for short. The problem of
explaining these phenomenal properties is just the
problem of explaining consciousness. This is the really
hard part of the mind-body problem. (Chalmers 1996, 4)

He says that "what it means for a state to be phenomenal
is for it to feel a certain way" (Chalmers 1996, 12).
By "feel a certain way" Chalmers means not just tactile
experience, but sensory appearances of any kind,
including visual, auditory, and so on. So conscious
mental states are states that have a "phenomenal feel".

SH: This just seems circular to me.

Honestrosewater wrote:
Your experience of the Mona Lisa probably has several levels of qualia and several levels of interpretation (by "interpretation" I mean the process of forming representations or a system consisting of representations). Just as a rough example, if you could see the Mona Lisa in the way I imagine a painter easily could, the first level of qualia may consist of just patches of various hues, values, and saturations. To this level of qualia, you can apply an interpretation; You may interpret some patch of hues, values, and saturations as representing a 3D shape and that shape as a smiling mouth, another as representing an eye, another as a dirt road, etc. Another level of qualia can arise from this interpretation. This level of qualia may consist of the feeling of being looked at and smiled at, of smiling or posing for a portrait, of traveling down a winding dirt road, etc. To this level of qualia, you can apply an interpretation; You may interpret your feeling of shyness and uneasiness arising from being smiled at as representing, well, I don't know, something about your personality. You may also bring other things to the painting, such as some knowledge or suspicion about the relationship between the painter and his subject or admiration for the painter's skill. You may be listening to some music and interpret the flow of the music as corresponding to the flow of the lines of the painting, the pitch to the colors, the timbre to the texture, and so on, qualia of the music and qualia of the painting representing each other. Another level of qualia can arise from these interpretations, and other interpretations can be applied to this level of qualia.
The point is that there are several types of qualia, and qualia themselves are not representations though they may arise from representations and representations may arise from them. That's my understanding of things anyway. I think Rosenberg makes the same point with the Necker Cube, but he speaks of conceptualization instead of interpretation (they mean the same thing to me).
There certainly were representational aspects to my examples, but I was focusing (or trying to focus) on the nonrepresentational aspects.

SH: I suppose you understand this better than I do. I will not dispute your
account of what counts as qualia. Rather, I have a problem with seeing these as
distinct types of consciousness; I guess the right way to put it is that they
seem like artificial categories constructed to advance a theory.

Remember the earlier post:

"CE is not a qualitative content or quale."
p-CE is a quale; a-CE is not.

"CE is not a bare difference."
According to Rosenberg, p-CE is not composed of bare difference because it is a quale. In the view Rosenberg will develop later on in the book, a-CE is not composed of bare differences either. According to physicalism, both a-CE and p-CE are composed of bare differences.

"Phenomenal consciousness includes facts about CE."
The facts about p-consciousness include facts about p-CE. The facts about p-consciousness do not include facts about a-CE (or, if they do, it is only in an indirect way).
------------------------------------------------------------------

SH: I suppose this addresses the issue -- I guess this is too hard for me,
and more work than I want to put into it. Nice chatting with you.

Good luck,
Stephen
 
  • #52
Just a question relevant to this discussion. Does Rosenberg discuss blindsight at all? This seems an example of perception that the perceivers are not conscious of. I know that this is a much discussed phenomenon in consciousness research.
 
  • #53
selfAdjoint said:
Just a question relevant to this discussion. Does Rosenberg discuss blindsight at all? This seems an example of perception that the perceivers are not conscious of. I know that this is a much discussed phenomenon in consciousness research.

Blindsight - the ability to respond appropriately to visual
inputs while lacking the feeling of having seen them - might
be something which only occurs in cases of brain damage, but
seems much more likely to be a significant phenomenon of
intact brain function as well. Indeed, it seems likely that
blindsight (and similar phenomena in other spheres) is an
important ingredient of a variety of activities where one
wants to move quickly and appropriately, without
"thinking about it".

SH: I haven't come across it yet, though some of my reading has been skimming.
This example may be interesting too:

"When we propose that a state of phenomenal consciousness is a
state of sensing something that looks red, which of these senses
[epistemic (knowing) or non-epistemic] of "looks" is meant? The
first possibility is that we are using "looks" in its epistemic
sense. Consider, for example, Wilfrid Sellars' account of what
it means to say "that looks red". Sellars argued that "looks
red" is logically more complicated than "is red"; he says
"being red is logically prior, is a logically simpler notion,
than looking red" (Sellars 1963, 142). To illustrate the
point Sellars tells a story about a necktie shop, whose owner
John has never used electric lighting, and has never reported
the colour of anything except under "standard conditions" of
northern daylight. John has no locutions for reporting
appearances or how colours look; he just reports what the
colour is. Then at last one day John installs electric lighting
and turns it on for the first time. Under the new lights a blue
necktie (as we would say) looks green. His friend Jim
demonstrates how the apparent colour of the necktie changes
depending on the illumination. Initially John is befuddled,
since he doesn't want to say that the blue necktie is green
when inside the shop, or that the lighting changes its colour.
But Sellars tutors him, when inside the lit shop, to stifle
his otherwise natural report that the necktie is green. He is
taught to say first "It is as though I were seeing the necktie
to be green" (Sellars 1963, 143). Finally he acquires a new
locution: the necktie looks green."

SH: There is an experiment which apparently confirms Sellars' position.
For Short Presentation Times, Two Stimuli Blend into One
"In one of Efron's experiments, a small red disk, shown for 10 msec on a
monitor, was immediately followed by a green disk at the same location,
also for 10 msec. Instead of seeing a red light turn into a green one, subjects
saw a single yellow flash. Similarly, if a 20 msec blue light was followed by a
20 msec yellow light, a white flash was perceived, but never a sequence of
two lights whose color changed."

SH: To me, this substantiates the view that there can be a difference
between seeing a color as it is and seeing what the color appears to be,
even though we may see the same shade of the color.
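
(As an illustrative aside, not from Efron's paper: the blended percepts are roughly what naive additive mixing of light would predict if the visual system integrates the two brief flashes. Below is a minimal sketch, assuming simple per-channel RGB addition with clipping; the values are purely illustrative and this is not a model of perception.)

# Illustrative sketch only: naive per-channel additive RGB mixing with clipping.
# It just shows why, under additive mixing, red + green light comes out
# yellowish and blue + yellow light comes out whitish.

def mix_additive(c1, c2):
    """Add two RGB colors channel by channel, clipping each channel at 255."""
    return tuple(min(a + b, 255) for a, b in zip(c1, c2))

red    = (255, 0, 0)
green  = (0, 255, 0)
blue   = (0, 0, 255)
yellow = (255, 255, 0)

print(mix_additive(red, green))    # (255, 255, 0)   -> yellow
print(mix_additive(blue, yellow))  # (255, 255, 255) -> white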

"Sellars notes that the experiences that would lead one to
report "it looks green" or "it is green" might as experiences
be indistinguishable from one another. "Two experiences may
be identical as experiences, and yet one be properly referred
to as a seeing that something is green, and the other merely
as a case of something's looking green" (Sellars 1963, 145).
The only difference is epistemic: in one the content of the
experience is endorsed, and in the other it is not.
"It looks green" might also be phrased "Visually it is just
as if I were seeing something green, but I do not endorse the
claim that it is green". I am visually representing something
to be green, but I do not endorse that representation.

The interesting implication is that reports in the epistemic
sense of "that looks green" are themselves reports of a higher
order thought. Unlike reports of the form "that is green", a
report "that looks green" expresses a thought about one's own
mental state. Suppose it means "I am in the sensory state that
would normally lead me to judge that thing to be green, but
something is amiss, and I wish to withhold judgement". It
follows that to be in a state in which something looks green
to me, I must be in a state whose content is "I am in a sensory
state that would normally lead me to judge that thing to be
green, but something is amiss, and I wish to withhold
judgement." This content includes a higher-order comment about
one's own visual state: that it is of a kind that would
normally lead me to judge the thing in question to be green.
Or: that I am in the same kind of visual state that I would
be in if the thing causing it were green."

SH: This "higher-order comment" would appear to dispute Ned Block's
claim that qualia included thought. I notice that Rosenberg distances
[edit: I mean this sounds more like a-consciousness than p-consciousness.]
himself from Block's position in chapter 3. I am not so sure this impacts
Gregg's argument; more like 'what is it like to see red' is not well-defined.
 
  • #54
selfAdjoint said:
Just a question relevant to this discussion. Does Rosenberg discuss blindsight at all? This seems an example of perception that the perceivers are not conscious of. I know that this is a much discussed phenomenon in consciousness research.

SH: Thanks for the hint. This is what Rosenberg says:

APFC.pdf Rosenberg, page 77

"While Liberal Naturalism might feel liberating, we have too much
freedom. To find a place for consciousness, we need tests for the
minimal adequacy of proposed explanations, and also a class of
problems able to provide clues that help us triangulate to the
point of fundamental incompleteness in our knowledge. As a
beginning for the effort, I wish to step back to examine
assumptions and to try to identify the deepest problems and clues
in the vicinity. ...

For example, the links between conscious experience, voluntary
action and functional awareness lead to very interesting puzzles
when considering multiple personality cases (Braude 1991), or
commissurotomy patients (Marks 1981) or blindsight patients
(Weiskrantz, 1986; 1988). These puzzle cases can be very seductive,
philosophically, but if Liberal Naturalism is correct they are
likely more intriguing than they are fundamental. Were we to focus
exclusively on overtly cognitive features of consciousness like
these, we would run the danger of confusing the inessential with
the essential, and overlooking promising paths in our search."
-------------------------------------------------------------

GR: "These puzzle cases can be very seductive, philosophically, but if
Liberal Naturalism is correct they are likely more intriguing than they are
fundamental."

SH: I think this should be argued the other way around: that these cases are
intriguing rather than fundamental, and thus that Liberal Naturalism is correct.

I find there is dispute about whether there are instances of P-Consciousness
without A-Consciousness, or vice versa, but usually both occur. (Ned Block)

Or Chalmers:
Chalmers claims that a clear conceptual distinction can be made
between access and phenomenal consciousness when one considers the
fact that we can imagine P-Consciousness without A-Consciousness
and A-Consciousness without P-Consciousness, and the fact that
A-Consciousness can be accounted for by cognitivist explanations
while P-Consciousness is resistant to such explanations. Unlike
Block, however, Chalmers believes that A-Consciousness and
P-Consciousness *always* occur together.

Or Block again:
http://www.royalinstitutephilosophy.org/articles/neural_block.htm
"Although phenomenal-consciousness and access-consciousness differ
conceptually (as do the concepts of water and H_2O), we don’t know
yet whether or not they really come to the same thing in the brain."

http://www.def-logic.com/articles/silby011.html

"Blindsight is a well documented phenomenon that occurs in people
who have suffered damage to certain areas of their visual cortex.
These people have a blind region in their visual field, and though
they are aware of their blind spot, they cannot see anything that
is presented to them in that area of space. The important feature
of blindsight is that although subjects are unaware of stimuli in
their blind spots, they have an uncanny ability to `guess' as to
the location, motion and direction of such stimuli. In these cases
there appears to be some visual awareness without the phenomenal
properties that normally occur with visual awareness. For Block,
cases of blindsight point to instances of absent P-consciousness.
Block cannot say, however, that these people have A-consciousness
of the stimuli in their blind region, because the content of the
blind region is not available for the rational control of action.
Blindsight patients must be prompted by an experimenter before
they will `take a guess'. It is unlikely that a hungry blindsight
patient would spontaneously reach for a chocolate in his blind
region. But, says Block, imagine a super-blindsighter who had
acquired the ability to guess when to guess about the content of
her blind field. Even though she doesn't see the objects in her
blind field, she can spontaneously offer verbal reports about
those objects. Information about her blind field just springs into
her thoughts. A super-blindsighter would be A-conscious but not
P-conscious. Whether there are any super-blindsighters is an
empirical question that has not been answered yet, but this does
not affect Block's point. It is enough for Block that they are
conceptually possible. To emphasize this conceptual possibility,
Block points to evidence that the human visual system is divided
into two separate subsystems - the ventral and dorsal subsystems.
In blindsight there seems to be damage to the ventral system,
which Block claims is closely connected to P-Consciousness.

The ventral system is responsible for object recognition and
classification, while the dorsal system is involved in computing
spatial features such as location and motion. Block believes that
because the visual system is comprised of these two visual
subsystems, it would also be conceptually possible to find cases
of P-Consciousness without A-Consciousness. This might occur if
someone incurred damage to their dorsal system, while their
ventral system remained intact. Of course, if Block's distinction
is accurate, we would probably not know if someone was P-Conscious
of events in their visual field without being A-Conscious of those
events because a lack of A-Consciousness implies that content is
not poised for the control of behavior. This includes behavior
such as making the statement: "I see a red object."
 
  • #55
Hi,

I just started reading this discussion, and have read these first 4 pages regarding the first chapter. I have no background of my own on this subject, just a very w(a/o)ndering mind.

If this drifts too far from the book discussion, do say so.

1. Regarding blind sight:

I believe Pinker defines two aspects of mind, namely sensation and perception. Is there any difference between p-consciousness and sensation, or a-consciousness and perception?
In any case, I think the plausible explanation of blind sight is that to make a conscious effort, your p-consciousness needs to be triggered; however, it's your a-consciousness that performs the mental functions. Therefore, if your p-consciousness is triggered by alternative means (i.e. the experimenter giving a verbal cue), you can still perform the mental function.

2. I fail to see why distinguishing between representational consciousness and non-representational consciousness is a good idea. I'm reading Gombrich's Art and Illusion, and it illustrates that our way of representing experiences is something that we learn, something that is a gradual process. I think it's very misleading to describe this representation as if it's an independent, or at least in some way isolatable, aspect of our consciousness. That is, I don't see how this is a process which needs any kind of special institution. I think it's far more likely that representation is just a different way of categorising experiential data. But of course, these objections might be solved further on. My apologies if this is the case, I've some serious catching up to do.
 
  • #56
Tsunami said:
1. Regarding blind sight:

I believe Pinker defines two aspects of mind, namely sensation and perception. Is there any difference between p-consciousness and sensation, or a-consciousness and perception?
I don't know if Pinker has any idiosyncratic uses for those terms, but in general sensation just refers to the detection of environmental information by the senses, whereas perception involves integrating that sensory information into a coherent model of the environment. So sensation per se isn't really an aspect of consciousness itself, but rather a link in the flow of information from the world to the mind. Perception need not be conscious (e.g. as in subliminal perception), but when perception is conscious we can say it usually involves both p-consciousness (what the experience of perception is like, phenomenologically) and a-consciousness (being able to use the perceptual information to guide thought and behavior).

Tsunami said:
In any case, I think the plausible explanation of blind sight is that to make a conscious effort, your p-consciousness needs to be triggered; however, it's your a-consciousness that performs the mental functions. Therefore, if your p-consciousness is triggered by alternative means (i.e. the experimenter giving a verbal cue), you can still perform the mental function.
Not sure exactly what you mean here. But in general, if you're talking about something in consciousness needing to be triggered in order to do something else, you're probably talking about its access-consciousness aspect.

Tsunami said:
2. I fail to see why distinguishing between representational consciousness and non-representational consciousness is a good idea. I'm reading Gombrich's Art and Illusion, and it illustrates that our way of representing experiences is something that we learn, something that is a gradual process. I think it's very misleading to describe this representation as if it's an independent, or at least in some way isolatable, aspect of our consciousness. That is, I don't see how this is a process which needs any kind of special institution. I think it's far more likely that representation is just a different way of categorising experiential data. But of course, these objections might be solved further on. My apologies if this is the case, I've some serious catching up to do.
Again, not quite sure if I catch you exactly-- you may be using the word "representation" differently than it's being used here. The issue of representation was only brought up briefly in this chapter to parenthetically address representationalist theories of consciousness.
 
