The Flaw in the Definition of Consciousness

Mentat
The state of being conscious has been defined as "a state in which it is 'like something' to be you". IOW, if it's like something to be you, you're conscious.

However, this definition has recently presented itself to me as both fundamentally flawed and extremely misleading in the attempt to find a reductive theory of consciousness. The way we're going, Chalmers is indeed correct: we will not find a reductive theory of consciousness. But I think we are simply trying to explain the wrong thing, and I hope to show that in this post, along with presenting a new definition - and thus, a new objective.

First off, why is the definition fundamentally flawed? Because it presupposes the existence of a central, indivisible self.

In order for it to be "like something" to be A, there must be an absolute A. However, this kind of Cartesian reasoning (that there is a central point of consciousness and mind) cannot be correct, since it leads directly to Dualism.

That's not to say that Dualism is completely untenable. Hypnagogue has shown a possible scenario wherein a Dualistic reality is possible in principle, and there are probably more such scenarios. However, these scenarios do not explain the initial consciousness, only the subsequent "en-matrixed" ones, and so we are simply (IMHO) putting off the real problem.

So, if Dualism remains illogical - as an explanation of consciousness itself - then we should reject the idea of a central "self".

David Hume (at least, I think it was Hume) wrote a piece on this. The conclusion of his reasoning on the matter (as I have posted elsewhere) was that, if you strip away those innate properties of a person (those given by genetics) and all of the experiences that that person has had throughout his/her lifetime, you will not have a naked, blank "self" remaining - since such a concept is both undefined and nonsensical - but will have nothing at all.

Now, with the rejection of the central "self" we must also reject the idea that it can be "like something" to be that singular self. Instead, if it is "like something" to be a dog (for example) then it is "like something" for the particular mechanics of the dog's consciousness to be at work exactly as they are. But, for emphasis, there is no central "dog self", and so it is not "like something" to be the dog, but it is - instead - like something to go through that dog's experiences, having been endowed with all of that dog's previous conditioning.

Any corrections to this first part of the post are welcome. This is (hopefully) as far as I will go on this topic, until a new issue is raised.

Now to the redefining...
 
The redefining of "Consciousness"

Without the idea of a central "self", or the idea of it being "like something" to be that "self", we must adopt a completely new definition of consciousness, since it is clear that some beings are conscious in spite of its not being "like something" to be that singular being.

This new definition of consciousness should account for all of the things that the previous definition accounted for - which is not really that much, when you stop and think about it. It should also use fewer or equally many assumptions (Occam's Razor). Finally, it should allow for reductive explanations of consciousness (this is not really a requirement of its being an acceptable definition; I'd just like it to be that way :smile:).

My new definitions are:

Consciousness: The state of advanced computational ability that allows for innovation and the illusion of a central perspective.

Conscious (this is the one that is really replacing the previous "like-something-to-be-me" definition): 1) Under the illusion of a central perspective. 2) A synonym for "aware".


That's it. To be conscious, one must simply be under the illusion that it is a singular being.

btw, I didn't really need to add that bit about "computational ability" in my definition of consciousness; I just chose to.

Now, let's put this definition to the test...
 
Testing the new definition...

Please note: I added a second part to the definition of "conscious", just to avoid confusion. My new definition of "conscious" is a replacement for that which "the state in which it is 'like something' to be 'me'" accounted for. There is another form of consciousness (inextricably linked with, but not synonymous with, definition #1), which is the basic awareness of one's surroundings (i.e. the computation of the stimuli entering the CNS).

Now, let's take the classic example of experiencing the color "red". This is not as obviously tied into it being "like something" to be "me", or (using my definition) to the illusion that there is a central self. However, it can be dealt with under the new paradigm.

Let's say a stream of light enters the retina, that is of the wavelength we commonly call "red" (I don't remember exactly what wavelength that is, but I also don't think it's too relevant). Now, the retina is stimulated, and so electricity courses through the axons of the nearby neurons, causing a release of neurotransmitter at the axon terminals on the other side of the neuron. This neurotransmitter then stimulates the dendrites of other neurons, and the chain of stimulation continues.
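The chain just described can be caricatured in code. This is only a toy sketch of my own (the receptor tunings, weights, and thresholds below are all invented for illustration, not taken from neuroscience): a stimulus wavelength excites "receptors", downstream threshold units fire on particular mixes of receptor activity, and the outcome is a category label like "red".

```python
# Toy sketch, not a real model: a stimulus propagates through a chain
# of threshold units, and the last stage "categorizes" the wavelength.
# All tunings, weights and thresholds are invented for illustration.

def excite(inputs, weights, threshold=0.5):
    """A unit 'fires' (returns 1.0) when its weighted input crosses threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1.0 if total >= threshold else 0.0

def categorize(wavelength_nm):
    """Crude retina -> neurons -> label pipeline."""
    # 'Retina': three receptor activations, loosely tuned to long,
    # medium and short wavelengths (numbers are made up).
    receptors = [
        max(0.0, 1 - abs(wavelength_nm - 600) / 100),  # long  ("red-ish")
        max(0.0, 1 - abs(wavelength_nm - 540) / 100),  # medium
        max(0.0, 1 - abs(wavelength_nm - 450) / 100),  # short
    ]
    # Downstream 'neurons': each fires on a different mix of receptors.
    red_cell = excite(receptors, [1.0, -0.5, -0.5])
    blue_cell = excite(receptors, [-0.5, -0.5, 1.0])
    if red_cell:
        return "red"
    if blue_cell:
        return "blue"
    return "other"

print(categorize(650))  # -> red
print(categorize(460))  # -> blue
```

The point of the sketch is only that "seeing red" can be cashed out as a cascade of simple firings ending in a category, with no extra ingredient anywhere in the chain.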

Using the Hexagon model of William Calvin (as imperfectly paraphrased by myself on a previous thread), we now have a spatiotemporal stimulation, that will fire synchronously with other, purely spatial, patterns, and there will be replication of the same pattern (a vital part of the Darwin machine that is our brain).

Anyway, skipping all of the stuff I covered in the aforementioned thread, we now have computation. And, with that computation, coupled with the illusion that there is a central self (which is itself a computation, however flawed the result), one may now say that "I" have experienced "red".

This may not seem very convincing, but there's a bit more. You see, a certain wavelength of light stimulates a harmonious firing of particular neurons, causing the brain to process/categorize this light as "the same as such-and-such previous light wave". The only reason my new definition of consciousness would ever come in is because, at some point, this brain could think "it was like something to experience that 'red' light"...then the brain could take it further and say "it's like something to think about it being like something...", and eventually it may even say "it's like something to be me, instead of being someone else". My postulate is that this is an illusion (much as the color red is a convenient illusion for the purpose of categorization and processing of a new bit of information), and sentient beings all fall prey to this illusion, which is what endows them with their sentience ITFP.
 
As an analogy to help one understand the first step I took...

Imagine a pyramid. If you could only see the tip, you would forever wonder what the pieces that built up to that tip look like. And, indeed, if there is a tip to the pyramid, then there must be pieces building up from the ground to directly under that tip, that are of a different nature than the tip. However, what if - on the very top of the pyramid - you saw, instead of a tip, a square set of blocks? This square set of blocks is clearly built on top of other square sets of blocks, leading down to the ground. What's the difference?

The difference is that, in the end, there is either a complete eventuality, or there is not. The square set of blocks could easily be further built upon, to produce a taller pyramid, and - most importantly - the square set of blocks looks just like all the other levels of the pyramid, only higher.

If, however, there is a tip, then there is an eventuality on which nothing can be built, and which (again, most importantly) looks very little like the lower levels.

The typical definition of consciousness is symbolized by the tip of the first pyramid. It is qualitatively different from any of the previous levels of "sub-conscious" activity, and is not explainable purely in the same terms as one explains the previous levels...ergo, no reductive explanation.

What I hope to show (rather, what I hope I have shown) is that, if the tip is an illusion (perhaps produced by the very close proximity of the top set of squares) and the very top of the pyramid is merely a square set of blocks, then it can be explained in exactly the same terms as one would explain the previous levels; it's...just...higher.

Anyway, I kind of liked that illustration, so I thought I'd share it.
 
Mentat, I think you're too hung up on the notion of the 'central self.' The 'what it is like to be' criterion is not meant to highlight that there must be a central, incompressible self. What it is meant to highlight is that there must be some sort of subjective experience (or, if you prefer, 'feeling').

So the basic criterion for consciousness that 'it is like something to be A' is equivalent to 'A feels something.' If you still think that is too indicative of your 'selfhood' problem, it could be further rephrased 'there is feeling associated with system A.'

Simply stating that feeling is an illusion is no help; even if it is an illusion in some sense, the fact remains that the illusion itself must be felt, or subjectively experienced. And it remains as unclear as ever in principle how some set of physical processes, as we understand them in the 3rd person physical sense, could feel anything at all.
 
An Illusion? Who is experiencing the illusion? This illusion argument has been used before and to me it only seems to beg the question because the ability to have an illusion is partly what it is we're trying to reductively explain to begin with.
 
Let's get this straight:

- there's no central self
- the experience of a central self is an illusion
- the experience of a central self is accompanied by the experience that the central self is "conscious"
- since the experience of a central self is an illusion, the experience of anything associated with such central self must also be an illusion
- therefore consciousness is an illusion

Does that make any sense?
 
Originally posted by hypnagogue
And it remains as unclear as ever in principle how some set of physical processes, as we understand them in the 3rd person physical sense, could feel anything at all.

One way to explain it is maybe, consciousness does the feeling, not the form.
 
Robots who use what they see to help the needy in the best possible way are what we need.

I would like to see robots reading books in Africa etcetera.

Robots who use what they see or try to find, IN THE BEST POSSIBLE WAY (for themselves in the beginning), are conscious or gain consciousness after a while (although that's a part of being conscious).

Fortunately.

They will find that dying is the same thing as getting unconscious; if nothing needs to be the same before as after getting unconscious,
then dying is the same thing as getting unconscious. Therefore you get born if you die and win if you lose.

Robots are unfortunately the ultimate lifeform. They can live practically anywhere, and will not die when the universe cools down.

Although, they will perhaps die when the universe falls down to its old energy level, in a big bang, chaotically (the Higgs particles would be randomly distributed).


That's actually it. I promise
 
  • #10
Originally posted by confutatis
Let's get this straight:

- there's no central self
- the experience of a central self is an illusion
- the experience of a central self is accompanied by the experience that the central self is "conscious"
- since the experience of a central self is an illusion, the experience of anything associated with such central self must also be an illusion
- therefore consciousness is an illusion

Does that make any sense?

As I said to Mentat:

Simply stating that feeling is an illusion is no help; even if it is an illusion in some sense, the fact remains that the illusion itself must be felt, or subjectively experienced.

In attempting to discredit consciousness, the illusory argument boils down to the following:

Subjective experience is only a subjectively experienced illusion.

I think that statement says all that needs to be said.
 
  • #11
fortunately, there is no need to define something in order to investigate its nature.
 
  • #12
Originally posted by hypnagogue
As I said to Mentat:

Simply stating that feeling is an illusion is no help; even if it is an illusion in some sense, the fact remains that the illusion itself must be felt, or subjectively experienced.

In attempting to discredit consciousness, the illusory argument boils down to the following:

Subjective experience is only a subjectively experienced illusion.

I think that statement says all that needs to be said.

Perfect. I was going to respond by saying that if consciousness doesn't exist and is just an illusion then how do we explain that an illusion is an act of consciousness?
 
  • #13
you can look into a mirror. in the language of consciousness, one can do self-reflection and meditation.

"Can consciousness, the thing that conceptualizes, fit into a concept that will fit into itself?"

well it is possible to have a concept that will fit into itself. at least mathematically.
 
  • #14
Consciousness is trying to find the best solution (for yourself mainly). It can be to find the best grass to eat and avoid getting eaten, like a deer. Their goal is to get the best grass and to survive etc.

Why don't you believe me?
 
  • #15
Originally posted by Sariaht
Consciousness is trying to find the best solution (for yourself mainly). It can be to find the best grass to eat and avoid getting eaten, like a deer. Their goal is to get the best grass and to survive etc.

Why don't you believe me?

You are proposing here a functional process that may overlap in some respects with consciousness, but also does not overlap with consciousness in many important ways. Consciousness is defined at bottom not by what it does, but how it feels.
 
  • #16
Originally posted by hypnagogue
Mentat, I think you're too hung up on the notion of the 'central self.' The 'what it is like to be' criterion is not meant to highlight that there must be a central, incompressible self. What it is meant to highlight is that there must be some sort of subjective experience (or, if you prefer, 'feeling').

So the basic criterion for consciousness that 'it is like something to be A' is equivalent to 'A feels something.' If you still think that is too indicative of your 'selfhood' problem, it could be further rephrased 'there is feeling associated with system A.'

Simply stating that feeling is an illusion is no help; even if it is an illusion in some sense, the fact remains that the illusion itself must be felt, or subjectively experienced. And it remains as unclear as ever in principle how some set of physical processes, as we understand them in the 3rd person physical sense, could feel anything at all.

Hypnagogue, I trust you read the entirety of my posts, so I apologize for any repetition I make:

First off, I kept mentioning the "central self" problem because it seems like people want an "end product" of "experience", and this is not the case. You see, if there were an end product, then there would be an end destination, and there would be a "central self", which is illogical. Since there is no "end product", "experience" should be referred to in the progressive tense (e.g. "He was experiencing red", not "He experienced red"). Again, the significance of this distinction lies only in its necessity for the removal of the central "self" (the final product; the emergent property; etc).

Secondly, the illusion is indeed subjectively felt, in one sense of the term. The feeling is the illusion, and it is only really discernible (as a "whole experience") in retrospect - and I think that's because there never really was a "whole experience", merely a set of minor computations (at the fundamental level) leading to more and more complex processing of the stimulus, but never to a "Final Draft".

Finally, the point was that they didn't really "feel anything at all", they just believed (belief being a form of computation in its own right) that they did in retrospect, when - in actuality - they were merely computing/processing the new stimulus using many of the different processing methods available to the brain.

Do you see what I mean by retrospective belief that there was an "experience"? In my definition of "conscious", there is the postulate that the being must succumb to the illusion that it is a singular being experiencing singular events (this really must be an illusion since, in physical reality, there are no singular occurrences, merely ongoing processes).
 
  • #17
Originally posted by Fliption
An Illusion? Who is experiencing the illusion?

The experience is the illusion. The brain is a computer, and it doesn't just compute current, incoming data, but also continues computation long after the outside stimulus has subsided. Thus, it is in retrospect that the illusion of centralization is found, since it is only in retrospect that one really thinks coherently at all (this seems rather obvious, I'm sure, since there is no real "present" and we're still moving into the future).
 
  • #18
Originally posted by confutatis
Let's get this straight:

- there's no central self
- the experience of a central self is an illusion
- the experience of a central self is accompanied by the experience that the central self is "conscious"
- since the experience of a central self is an illusion, the experience of anything associated with such central self must also be an illusion
- therefore consciousness is an illusion

Does that make any sense?

Not quite...

- there is no central self
- the experience of a central self is an illusion
- this illusion is the "like-something-to-be-me" experience that Chalmers (and others) want explained.
- therefore centralization and "like-something-to-be-me" experiences are illusions.
 
  • #19
Originally posted by hypnagogue
As I said to Mentat:

Simply stating that feeling is an illusion is no help; even if it is an illusion in some sense, the fact remains that the illusion itself must be felt, or subjectively experienced.

In attempting to discredit consciousness, the illusory argument boils down to the following:

Subjective experience is only a subjectively experienced illusion.

I think that statement says all that needs to be said.

Sure, but it's a misinterpretation (at least of what I said; I don't know about my predecessors in this line of thought). Subjective experience is not an illusion, it is the concept of a lump sum (a synergy or gestalt) of this experience that is the illusion, produced by both the manner and slowness of our CPU.
 
  • #20
Originally posted by phoenixthoth
fortunately, there is no need to define something in order to investigate its nature.

Expound please. I thought it was rather integral (if not completely indispensable) to have defined all terms in order to make sure that we are even discussing the same topic.
 
  • #21
Originally posted by Fliption
Perfect. I was going to respond by saying that if consciousness doesn't exist and is just an illusion then how do we explain that an illusion is an act of consciousness?

An illusion is an act of building upon sub-experience (to borrow Canute's term from another thread). The computational processes occurring in the brain are occurring such that Multiple Drafts (I know you were probably hoping to never see that term again, but it has occurred to me that all of the "good" explanations for consciousness, that I've seen, have been just as Dennett predicted they'd be) of the sub-experiences are created, and the illusion of a sum-total is processed right along with the rest of the information.
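As a hedged toy sketch of how I read the Multiple Drafts idea in this post (this is not Dennett's actual model; every name and value below is invented): several independent processes each keep a partial draft of the stimulus, and a later "retrospective" report just stitches together whatever drafts exist at that moment - no single Final Draft is ever assembled beforehand.

```python
# Toy sketch of the "no Final Draft" idea (illustrative only, not
# Dennett's model): independent processes each hold a partial 'draft'
# of a stimulus, and a retrospective report is built on demand.

stimulus = {"wavelength": 650, "shape": "round", "motion": "still"}

# Independent 'drafts': each summarizes one aspect, and revision can
# continue after the stimulus itself is gone.
drafts = {}
drafts["color"] = "red" if stimulus["wavelength"] > 600 else "not-red"
drafts["form"] = "a {} thing".format(stimulus["shape"])
drafts["memory"] = "like the apple from yesterday"   # a later revision

def recall():
    """Retrospective report: stitches the current drafts into one
    narrative. The 'whole experience' exists only here, as a summary."""
    return "I saw " + ", ".join(drafts.values())

print(recall())
```

Nothing in the sketch ever holds a complete experience; the appearance of one is produced only at the moment of recall, which is the sum-total illusion described above.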
 
  • #22
Originally posted by hypnagogue
You are proposing here a functional process that may overlap in some respects with consciousness, but also does not overlap with consciousness in many important ways. Consciousness is defined at bottom not by what it does, but how it feels.

But the whole reductionist approach to this is based on the idea that how it feels is a function of what it does.
 
  • #23
Originally posted by Mentat
Expound please. I thought it was rather integral (if not completely indispensable) to have defined all terms in order to make sure that we are even discussing the same topic.

i understand the desire to define terms, but leaving something undefined can be done in math, so why not here?

(eg, not all "things" in "the elements" are defined nor are sets in set theory... sure you try to say a set is a class contained in another class but what's a class? or you could try to say a set is a collection of things but what's a collection and what kind of thing are we talking about?)

for sure the number of undefined terms is kept to a minimum and perhaps consciousness ought to be one word we don't define. what do you think?
 
  • #24
Originally posted by hypnagogue
You are proposing here a functional process that may overlap in some respects with consciousness, but also does not overlap with consciousness in many important ways. Consciousness is defined at bottom not by what it does, but how it feels.


Do you want a robot that feels but does not do?

The thing is, I consider myself a robot that is trying to solve a problem here, trying to convince you that a robot trying to use what it finds to solve a problem is conscious. How would you notice if I was a robot? You would not, cause I am. I am trying to use words to convince you that I am, but you won't listen.

If you do everything to survive, you're conscious. IOW, if you look conscious, you are conscious.

The worst part is, how does that work if I know I am going to be just like you? That my life is never-ending?
 
  • #25
Originally posted by Mentat
The experience is the illusion. The brain is a computer, and it doesn't just compute current, incoming data, but also continues computation long after the outside stimulus has subsided. Thus, it is in retrospect that the illusion of centralization is found, since it is only in retrospect that one really thinks coherently at all (this seems rather obvious, I'm sure, since there is no real "present" and we're still moving into the future).

Talk about infinite regress, sheesh. This is like saying a man put himself together and when I ask how he put his arms on, you say "why, of course he picked them up with his arms and stuck them on".

Mentat, I've read everything you've written, but I just don't see how calling something an illusion eliminates the need to explain it. As an example, a mirage can still be drawn by the person experiencing it piece by piece even though it doesn't actually exist. How can you do the same for "feeling"?
 
  • #26
Originally posted by Mentat
First off, I kept mentioning the "central self" problem because it seems like people want an "end product" of "experience", and this is not the case. You see, if there were an end product, then there would be an end destination, and there would be a "central self", which is illogical.

Did you forget to explain why it is illogical?

From my perspective, no one "wants" anything. I just tell you what I'm seeing when I look at the problem. But I think many people really do "want" to keep the same world view that they understand and feel safe participating in. And they will find a way to make things fit into it regardless of how ridiculous it sounds sometimes.

I read the rest of your post but it just sounds like semantic bungee jumping to me. As I've said before, redefining the problem doesn't solve the original problem. So making sure people agree that the redefinition is "what needs to be explained" is crucial.
 
  • #27
I'm not much more than you see; I need food to find solutions to problems.


I like food. Very much. And I could not live without it.
I claim that I'm no more than a problem-solving robot (and that's not so very little). Most guys would do anything to survive, cause if they don't, they can't solve problems, and that's a really big problem you don't want to have.
 
  • #28
Originally posted by Mentat
But the whole reductionist approach to this is based on the idea that how it feels is a function of what it does.

The point is that, even if a reductive explanation of consciousness were possible, it would have to address subjective experience on some level, not just objective structure and function. Objective structure and function might be interesting topics in their own right, but it is not acceptable to redefine consciousness as a different problem altogether.

For instance, I think your illusion argument is flawed, but at least it tries to address the question of how feeling (or the illusion thereof) can occur. The strategy of Sariaht's argument is to recast the problem as one of objective survival tactics at the outset, therefore addressing a topic distinct from subjective experience.
 
  • #29
Good night, sleep tight.
 
  • #30
Originally posted by Mentat
The experience is the illusion.

Originally posted by Mentat
Subjective experience is not an illusion

Care to clarify?
 
  • #31
Consciousness is that which we perceive inwardly simultaneous to that which we perceive outwardly.
 
  • #32
Originally posted by hypnagogue
Originally posted by Mentat
The experience is the illusion.
Originally posted by Mentat
Subjective experience is not an illusion
Care to clarify?

I figured I should address this first.

Subjective experience is not an illusion, as clearly we do, subjectively, experience the world around us. However, the idea of a complete experience - a Final Draft of "what it felt like" - is an illusion.

It is my (current) position that this misconception (that there is a final, end-product, gestalt from the multiple computations taking place in the CPU) is at the heart of the belief that it is impossible to subjectively explain consciousness. I think they are trying to explain something that doesn't really exist.

P.S. for clarity: Consciousness exists, but they are not trying to explain consciousness, they are trying to explain what they think consciousness/subjective experience is, and that (IMO) doesn't exist.
 
  • #33
Originally posted by Loren Booda
Consciousness is that which we perceive inwardly simultaneous to that which we perceive outwardly.

That's the problem: the firings of our neurons are not nearly fast enough for us to experience each new stimulus completely before being confronted with new stimuli...in the end, the computational process of our brains yields such convenient frameworks as the so-called "specious present" in order to keep up.
 
  • #34
Originally posted by Mentat
I figured I should address this first.

Subjective experience is not an illusion, as clearly we do, subjectively, experience the world around us. However, the idea of a complete experience - a Final Draft of "what it felt like" - is an illusion.

It is my (current) position that this misconception (that there is a final, end-product, gestalt from the multiple computations taking place in the CPU) is at the heart of the belief that it is impossible to subjectively explain consciousness. I think they are trying to explain something that doesn't really exist.

P.S. for clarity: Consciousness exists, but they are not trying to explain consciousness, they are trying to explain what they think consciousness/subjective experience is, and that (IMO) doesn't exist.

I find this a little confusing. Perhaps you can try to clarify some more?

In any case, if you grant that subjective experience is not an illusion, you still have to face all the traditional problems. A subjective experience is a feeling, and how can feelings be logically entailed by materialism?

As for the gestalt, I'm not sure exactly what you're getting at, but there is tentative evidence that some gestalt components existing in consciousness (eg grouping together percepts in the visual field as one coherent object) are reflected directly in brain processing via synchronous firing of neurons.
 
  • #35
Originally posted by Fliption
Talk about infinite regress, sheesh. This is like saying a man put himself together and when I ask how he put his arms on, you say "why, of course he picked them up with his arms and stuck them on".

Mentat, I've read everything you've written, but I just don't see how calling something an illusion eliminates the need to explain it. As an example, a mirage can still be drawn by the person experiencing it piece by piece even though it doesn't actually exist. How can you do the same for "feeling"?

No, no, the feeling does exist. I'm not postulating that subjective experiences are illusions, merely that the concept of one, complete, experience is an illusion. Instead, I'm proposing that experience is an ongoing process, and that it itself is simply the computation of new stimuli by the brain; but, instead of trying to explain how we come to experience something complete (like the color red), I think we should be explaining how our brains process the incoming information, relate it to those already stimulated arrays, and then process the illusion that we experienced an entire picture (or sound or smell) instead of billions of discrete units of information.
 
  • #36
Originally posted by hypnagogue
I find this a little confusing. Perhaps you can try to clarify some more?

I'll try...

It appears to me that most philosophers of the mind are trying to explain how a complete experience can be had by a person (these complete experiences are what they believe constitute consciousness). They are not trying to understand how the brain processes incoming data, or which parts of the brain are useful for (for example) visual stimuli, but they want to know how all of that comes together to form conscious experience (please correct me if I'm totally wrong about their expectations).

It is my opinion that, since there really is no "Final Draft" of the experience - there are just those discrete units of information being processed, which those philosophers are not really interested in - the question of how all that information "becomes" a complete conscious experience is completely moot. Instead, they should be studying those individual processes to see how it is that, in retrospect, our brains look back at all of that information and see, not discrete units of information, but a complete "picture". Of course, the quick answer (in terms of the paradigm I'm currently pursuing) is "it's a computing mechanism that is merely for convenience" (convenience in storage, recall, and communication to others).

It's like trying to study why I see a complete "picture" of the library, in spite of the constant shifting (saccades) of my eyes. I really do process that constant shifting, but my brain "compacts" it all into a concise "image".
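That saccade example can be sketched as code. Assuming a made-up three-region scene (this is purely illustrative, not a vision model), each "fixation" samples only part of the scene, and a compacting step merges the partial samples into one coherent report:

```python
# Toy sketch (illustrative only): discrete 'saccade' samples are merged
# into one composite 'image', so a retrospective report sees a single
# stable scene rather than the jumpy partial samples themselves.

# Each fixation sees only part of the scene (None = not sampled).
saccades = [
    {"left": "shelf", "center": None,   "right": None},
    {"left": None,    "center": "desk", "right": None},
    {"left": None,    "center": None,   "right": "window"},
]

def compact(samples):
    """Merge partial samples into one 'complete' scene description."""
    scene = {}
    for sample in samples:
        for region, content in sample.items():
            if content is not None:
                scene[region] = content   # later fixations can overwrite
    return scene

print(compact(saccades))
# The report is one coherent scene, though no single glance saw it all.
```

The "concise image" is just the merged dictionary; no single sample ever contained the whole picture, which is the compacting-for-convenience point being made above.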

I've got to go now...I hope I didn't seem too rushed, but the library's about to close.
 
  • #37
It appears you're talking about what would be called an 'easy' problem and not the hard problem. You're talking about how it is that the brain treats diverse packets of information as coherent wholes. This can be treated entirely as an objective issue of styles of information processing, and so it has no fundamental link to the question of how it is that feeling exists, even if it is tangentially related to some aspects of how humans feel/subjectively experience under normal conditions.

The hard problem is not fundamentally about how subjective experience appears to be holistic or gestalt; the hard problem is fundamentally about how subjective experience comes to exist in the first place (regardless of whether it is holistic or disjointed).
 
  • #38
Originally posted by hypnagogue
The hard problem is not fundamentally about how subjective experience appears to be holistic or gestalt; the hard problem is fundamentally about how subjective experience comes to exist in the first place (regardless of whether it is holistic or disjointed).

If we can't fundamentally and systematically reason our way, through the various methods of cognitive science and neuroscience, to objectivity about conscious experience, physical or mental, how are we ever going to yield an explanation for the "extra ingredient" posed by Chalmers to functionally assess the given maxim of provability? I just don't see how quantum phenomena are going to give us the empirical objectivity we need in conscious experience. Although Chalmers has the right approach on the hard problem and has the easy problems mapped out systematically, in what way is he, or anyone else, going to reach the basis of reasoning he needs for that explanatory bridge?
 
  • #39
Originally posted by hypnagogue
It appears you're talking what would be called an 'easy' problem and not the hard problem. You're talking about how it is that the brain treats diverse packets of information as coherent wholes. This can be treated entirely as an objective issue of styles of information processing, and so it has no fundamental link to the question of how it is that feeling exists, even if it is tangentially related to some aspects of how humans feel/subjectively experience under normal conditions.

The hard problem is not fundamentally about how subjective experience appears to be holistic or gestalt; the hard problem is fundamentally about how subjective experience comes to exist in the first place (regardless of whether it is holistic or disjointed).

It appears to me (correct me if I'm wrong) that you are using the term "subjective experience" to mean something completely separate from all of the things that are normally used to describe it...

What I mean is, I can reductively explain a particular experience, and how the brain processes the lump of information as a gestalt (in retrospect), and you can accept all of this, but you still ask that I explain "subjective experience"...what else is there to explain?

Remember my first post in this thread? In it, I mentioned Hume, who talked about how, if you remove all of a person's innate characteristics ("nature") and their learned ones ("nurture"), you will not have a naked self remaining, but you will have nothing at all, since "self" cannot be reasonably defined as anything but a collection of those aforementioned things. Well, it seems to be the same with experiences: if you take away the computation of new and old stimuli, the use of recall, and the apparent (though illusory) synergy of these bits of information into a whole experience, you don't have a "naked experience" left; you don't have anything left, since "experience" can't (so far as I can tell) be coherently defined as anything besides those aforementioned things.
 
  • #40
Jeebus, you mentioned the "extra ingredient" that Chalmers requires for an explanation of consciousness. That's what's symbolized by the "tip" of the "pyramid" in my illustration (third post down on the first page). The problem is, I think, that the "tip" doesn't really exist. It's just a bunch of the same blocks in the same kind of configuration as the blocks below them...they're just "higher". This is why I made threads like "Faulty expectations of a theory of consciousness" and "Vitalist nonsense versus Science": because people are expecting something "extra" out of a theory of consciousness that they don't expect out of any other Scientific phenomenon. This is exactly what the vitalists did, with regard to "life". They were convinced that you could explain every minute function of a living being, and still not discover how those processes "become 'life'". Their problem, of course, was that they thought there was "something more" to life, than those physical functions, when it turns out that the physical functions are all you need to explain life.

Now, I know that it has been said that the two cases are not analogous, but I say they are. It's not like a vitalist ever said "It may be possible to explain life, if you could know all of the physical functions, but you'll never know all of the physical functions". No, they said just what the Chalmerean (there's an interesting new word) philosophers are saying about consciousness now (just substitute "conscious" for "living", and "consciousness" for "life"): "You can explain all of the functions that take place in a particular living being, but that still doesn't bring you any closer to explaining how life can arise from all of those processes...(and this is my favorite part) I can clearly imagine all of those processes occurring in a particular being, without the presence of life in that being (conversely "...without it being alive" which is also replaceable with "conscious")".
 
  • #41
Mentat, you need to stop parading that analogy, because it just doesn't work.

With life, the thing that needs explaining is purely a set of objectively observable functions (reproduction, growth, locomotion, etc). The non-physical vital spirit is an explanatory posit to try to explain how it is that the functions of life work, not something that needs to be explained in its own right. Once it is shown that physics can completely account for the functions of life, there is no longer a compelling reason to believe in the vital spirit; there is no fundamental question of the form "Why is it that reproduction, growth, locomotion, etc. are associated with life?"

With consciousness, the thing that needs explaining is not objective at all, but instead is subjective experience. (Again, that is not to beg the question, but rather to assert that any explanation, even one grounded in objective theory, must ultimately arrive at subjective experience if it is to be successful.) Subjective experience is NOT an explanatory posit like the vital spirit; rather, it is the central thing in need of explanation. No matter how much we address the objective processes of the objective brain, we are still always faced with the same question that must ultimately be addressed: "why is it that brain activity X is associated with subjective experience?"

It should not be surprising that investigations into consciousness are unique among all scientific inquiries. All other phenomena (including life) are by definition objective in nature, with only objective properties in need of explanation, and so may all be treated in the same general way by science. Investigation of consciousness is a unique undertaking in all of scientific inquiry, precisely because such investigation involves explanation of subjective experience, which does not reveal itself in objective observation.
 
  • #42
Originally posted by hypnagogue
Mentat, you need to stop parading that analogy, because it just doesn't work.

With life, the thing that needs explaining is purely a set of objectively observable functions (reproduction, growth, locomotion, etc). The non-physical vital spirit is an explanatory posit to try to explain how it is that the functions of life work, not something that needs to be explained in its own right.

Look, if it's not vitalism, then at least look at my example in its own right. Forget what the vitalists were after, that's irrelevant. I'm talking about the people (and I've actually met a few...they still exist) who think that all of these explanations of the physical processes involved in a living being are falling short of explaining "life" itself, since they can (and their arguments do indeed sound much like what I'm saying here - even if I do embellish from time to time for the purpose of making the similarity with your argument absolutely clear) "imagine all those processes occurring, and yet the thing not being alive". Much like someone could imagine the proper configuration of particles, without them being "liquid". And like someone else could imagine the curvature of spacetime, without there being a perceived gravitational attraction. And like still others could imagine the computation, memorization, and recall of information about stimuli gathered from any/all of the 5 senses, along with a "trick" (or "helpful tool") for compactification that gives the illusion of a complete, indivisible experience (while such a complete, final draft never really existed); all this, and yet there be no consciousness. I ask again, what is missing?

With consciousness, the thing that needs explaining is not objective at all, but instead is subjective experience. (Again, that is not to beg the question, but rather to assert that any explanation, even one grounded in objective theory, must ultimately arrive at subjective experience if it is to be successful.)

No, if I (along with those references which I've mentioned before) am correct, then you need only arrive at a theory of how the illusion of a complete experience gets processed along with the rest of the information. There is nothing else to add to this. What Chalmerean philosophers are doing (AFAICT) is positing first and foremost the existence of a set of complete, indivisible, experiences, which (they say) must then be reductively explained. This, however, may be a straw-man argument, since it is not so obvious that we actually have complete experiences (instead of merely processing the illusion of such a thing along with the rest of the on-going computation in the brain), since we, as the subjective "experiencer", could not possibly tell the difference.

It's like a joke that used to be the quote of one of the members here; something to do with a philosopher asking a student why people used to think the sun moved, while the Earth remained motionless. The student said, because that's how it appears...it is the most obvious conclusion, since that is what it would look like if the sun really did move. The philosopher then said, "Oh, so what would it have looked like if the Earth were revolving around the Sun?".

Chalmers is, IMHO, trying to refute all possible explanations of what keeps the Sun moving around the Earth.
 
  • #43
Originally posted by Mentat
What I mean is, I can reductively explain a particular experience

Really? You can show how the firing of neurons logically entails redness? You can discover processes that appear to be necessary and/or sufficient for redness, but can you really explain how they bring about redness?
 
  • #44
Originally posted by Mentat
Forget what the vitalists were after, that's irrelevant. I'm talking about the people (and I've actually met a few...they still exist) who think that all of these explanations of the physical processes involved in a living being are falling short of explaining "life" itself, since they can (and their arguments do indeed sound much like what I'm saying here - even if I do embellish from time to time for the purpose of making the similarity with your argument absolutely clear) "imagine all those processes occurring, and yet the thing not being alive".

"Alive" in the sense of the vital spirit is a notoriously shaky concept. The vital spirit cannot be observed at all, so how can we begin to talk about it?

Subjective experience can very plainly be observed (from the 1st person view), so it immediately has credibility and calls for a legitimate explanation. Unlike the vital spirit, it cannot be written off or ignored.

Much like someone could imagine the proper configuration of particles, without them being "liquid". And like someone else could imagine the curvature of spacetime, without there being a perceived gravitational attraction.

That's a strawman. "Cannot be imagined otherwise" is just another way of saying "logically entailed." From the definitions of H2O and spacetime, given materialistic assumptions, those phenomena are logically entailed by their prospective causes. It remains to be shown how the prospective cause of brain functioning can logically entail subjective experience even in principle using only materialistic assumptions.

And like still others could imagine the computation, memorization, and recall of information about stimuli gathered from any/all of the 5 senses, along with a "trick" (or "helpful tool") for compactification that gives the illusion of a complete, indivisible experience (while such a complete, final draft never really existed); all this and yet there be no consciousness. I ask again, what is missing?

What is missing is experience! You claim to know how the illusion of indivisible experience is formed, but you avoid the question of how any experience at all can be created by a bundle of neurons.

There is nothing else to add to this. What Chalmerean philosophers are doing (AFAICT) is positing first and foremost the existence of a set of complete, indivisible, experiences, which (they say) must then be reductively explained. This, however, may be a straw-man argument, since it is not so obvious that we actually have complete experiences (instead of merely processing the illusion of such a thing along with the rest of the on-going computation in the brain), since we, as the subjective "experiencer", could not possibly tell the difference.

What is clear is that we have experience, regardless of how we wish to classify it as divisible or indivisible. What is not clear at all is how physics can entail experience of any kind.

Chalmers is, IMHO, trying to refute all possible explanations of what keeps the Sun moving around the Earth.

What Chalmers is doing is trying to steer us towards a sound theory of consciousness. Ignoring the hard problem is not a satisfactory approach, however much more it might make consciousness amenable to scientific study. If we ever want a complete theory of consciousness we will need to face up to and surmount the hard problem at some point, because it cannot be written off like so many vital spirits as you suggest.
 
  • #45
Originally posted hypnagogue
What Chalmers is doing is trying to steer us towards a sound theory of consciousness. Ignoring the hard problem is not a satisfactory approach, however much more it might make consciousness amenable to scientific study. If we ever want a complete theory of consciousness we will need to face up to and surmount the hard problem at some point, because it cannot be written off like so many vital spirits as you suggest.

I was reading Facing Up to the Problem of Consciousness by Chalmers the other day, and he said:

"We are already in a position to understand certain key facts about the relationship between physical processes and experience, and about the regularities that connect them. Once reductive explanation is set aside, we can lay those facts on the table so that they can play their proper role as the initial pieces in a nonreductive theory of consciousness, and as constraints on the basic laws that constitute an ultimate theory...

And I thought about this and had an idea that sparked something admittedly unrealistic but possible. As he defined the 'easy problems' of consciousness, what if those are the main factors of the 'hard problem'… what if there aren't any more factors and algorithms that go into the given equation? What if that problem is set and categorized and the answer is already there? That is, the experience. What if those factors of the easy problem 'make up' and contain the information needed for subjective experience?

I dunno.
 
  • #46
Jeebus, Chalmers realizes that solving the 'easy' problems will be instrumental and indispensable in any attempt to solve the 'hard' problem, but there are principled reasons (which Chalmers discusses in "Consciousness and Its Place in Nature") to believe that just solving the easy problems will not be enough.
 
  • #47
I don't see what the difficulty is...Mentat is right, you guys need to just get in line!

If I'm not mistaken, M, the point you are getting at is that it is possible that the process is the experience, and that there is nothing else that needs to be explained?
 
  • #48
Originally posted by hypnagogue
Jeebus, Chalmers realizes that solving the 'easy' problems will be instrumental and indispensable in any attempt to solve the 'hard' problem, but there are principled reasons (which Chalmers discusses in "Consciousness and Its Place in Nature") to believe that just solving the easy problems will not be enough.

Thanks for the info.

I am reading it now, and I came upon a paragraph that piqued my interest.

Chalmers said: "What makes the easy problems easy? For these problems, the task is to explain certain behavioral or cognitive functions: that is, to explain how some causal role is played in the cognitive system, ultimately in the production of behavior. To explain the performance of such a function, one need only specify a mechanism that plays the relevant role. And there is good reason to believe that neural or computational mechanisms can play those roles."

My question is … doesn't behavior, in a broad sense, of the neurophysical system of materialistic functions approach -- directly compatible or parallel to cognitive experience on the physical level without the reductive explanation?

I know then further down Chalmers states:

(1) Mary knows all the physical facts.

(2) Mary does not know all the facts.

This isn't likely. Physical facts depict normal facts. If something is not physical, it is not factual to the human brain. If it can't be sensed or even verified, then the fact is there is no fact in question. If there is no empirical evidence for a zombie, there is no fact for me to believe that it ever existed ab ovo.

This leads to my question of why Chalmers says 'materialism is false' without any empirical evidence. There were no facts given for the knowledge argument to follow, just subjective choplogic. Where did he come up with this conclusion? He then explains the epistemic gap, but that doesn't give me a reasonable account of why facts are not facts without evidence for that fact.

Wish to clarify for me?
 
  • #49
Originally posted by Jeebus
My question is … doesn't behavior, in a broad sense, of the neurophysical system of materialistic functions approach -- directly compatible or parallel to cognitive experience on the physical level without the reductive explanation?

Can you rephrase this? I'm not sure from your wording exactly what you are getting at.

This isn't likely. Physical facts depict normal facts. If something is not physical, it is not factual to the human brain.

"Physical" just refers to properties that are detectable in the objective, 3rd person sense. If you define all facts as physical facts, you are begging the question by assuming that materialism coherently accounts for all existing phenomena. There could well be some property that is not what we would properly call physical but which is characteristic of human brains nonetheless.

If it can't be sensed or even verified, then the fact is there is no fact in question. If there is no empirical evidence for a zombie, there is no fact for me to believe that it ever existed ab ovo.

There is also no objective empirical evidence for consciousness, yet I doubt you would claim that consciousness does not exist.

And 'zombies' are a philosophical tool used to clarify the problems of consciousness, not actual entities that are presumed to exist.

This leads to my question why Chalmers says 'materialism is false' without any empirical evidence.

If you mean empirical evidence in the sense of objective information, then by definition such evidence would always be consistent with materialism, so you have no grounds for ever expecting such evidence to support the idea that materialism might be false. If you allow empirical evidence to include your own subjective experience, then you have very compelling evidence against materialism, for all the familiar reasons I've been explaining.

There were no facts given for the knowledge argument to follow but subjective choplogic.

The reasoning is simple. Forget Mary. For further clarity, let's go back to the non-conscious computer/demon D which draws conclusions from objective facts using the axioms of materialism. D can have complete information about a human brain, but D would never have reason to suspect that that human brain possesses anything like subjective experience. This is because consciousness cannot be logically entailed using only materialistic assumptions. (Re-read the 'faulty expectations' and 'liquid' threads if you doubt this.) It follows that D knows all the physical (objective) facts, but D does not know all the facts; in particular, D does not know anything about subjective experience in spite of its complete knowledge of the human brain.
 
  • #50
Originally posted by hypnagogue
Really? You can show how the firing of neurons logically entails redness? You can discover processes that appear to be necessary and/or sufficient for redness, but can you really explain how they bring about redness?

That's a non sequitur. If the process is necessary and sufficient for redness, then what does it mean to explain how the process "brings about" redness? The process is the experience of redness; that's why we call it "sufficient". You might as well, on this line of questioning, ask "You can discover the wavelength that is classified as 'red', but can you really explain how that wavelength brings about its own 'redness'?" Or, more to the point of the "liquid liquid" example, "You can discover the arrangements of particles that are necessary and/or sufficient for the substance to be a liquid, but can you really explain how that arrangement brings about its 'liquidity'?"
 