Why reductive explanations of consciousness must fail

  • Thread starter: hypnagogue
  • Tags: Consciousness
AI Thread Summary
Reductive explanations of consciousness fail because they cannot address why physical processes give rise to subjective experience. While physical accounts effectively explain structures and functions, they do not entail the emergence of experience, which remains conceptually distinct. Unlike vitalism, which doubted physical mechanisms could explain life, the challenge with consciousness lies in the fact that functions can be explained without accounting for experience. The discussion highlights that conscious experience is not merely an automatic consequence of physical processes, emphasizing the need for a deeper understanding. Ultimately, the problem of consciousness is fundamentally different from other scientific inquiries, requiring more than just reductive methods.
  • #151
From which piece of scientific research do you conclude that? From a scientific point of view it might be true, but there's no evidence that it is. There's not even any scientific evidence that bats are conscious (or people, come to that).

No, no scientific research, this is a PHILOSOPHY thread, and my construction is that familiar philosophical device, the idealized contraption. Here is a device that perfectly simulates all the physical aspects of a bat in its environment. And I say it's not a Zombie, but you apparently say it is. Upon what basis do you claim that?
 
  • #152
Originally posted by Dark Wing
This is where we diverge. I believe that it is a conceptual necessity just the same, and in fact is the only case possible with the laws set out inside the system. Agreed, consciousness cannot simply pop out of nowhere, and cannot just be formed from some higher level of complexity. Which means that you have to think that consciousness, or at least the building blocks for it, are always there, everywhere. This is bordering on panpsychism, I know, and that is a trap that I wish to avoid. So I will try to explain my thoughts on how consciousness might work.

I agree that, for a theory of consciousness to make sense, it must make reference to some sort of building blocks for consciousness; either in the form of an irreducible and fundamental entity, or in the form of some 'things' that are not themselves conscious but somehow combine to create consciousness.

Now the question becomes: are these building blocks included in our contemporary materialistic ontology? This is precisely where I believe that contemporary materialism must fail in any attempts to really explain consciousness, because I do not think any of the building blocks given to us in a materialistic ontology can do the job of showing us how to explain or deduce consciousness. We need more building blocks.

I would say that anything that shows an ability to react and interact with its environment would show sufficient conditions for the start of consciousness. I do not take consciousness to be a "you have it or you don't" thing; it is a matter of complexity, and a matter of how well you are able to interact with your environment. Something that could only show signs of conditioning as its environmental interaction would not be as conscious as something that could also deliberate over a reaction to a stimulus. Since we witness the ability to condition in every form of biology that I have encountered, I would say that things biological are the basis for consciousness, and it is a necessary thing that it is.

As a side point here: if your criterion for judging whether an entity is conscious or not is the degree to which it can interact with and be conditioned by its environment, why should physical constitution matter? I understand that you want to start off on surer footing by starting with safer assumptions, but we could (relatively) easily build a silicon-based robot that could do the same things. All I am suggesting is that, if biological constitution is to be the most fundamental factor for consciousness in your hypothesis, then that should be your primary assumption. Deriving (as opposed to fundamentally asserting) the necessity of biology for consciousness from an entity's ability to interact with the environment seems to be faulty, since you could just as well derive that a silicon robot should be conscious by the same criterion.

Like you, I have been wary of functionalism as a good starting ground for any hypothesis about consciousness. However, recently I read an argument with a functionalist flavor put forth by Chalmers that gives me pause. If you are interested, it might be appropriate to start another thread on the topic.

I know that you can't say "just because everything we see does this, therefore all things must do this," and I am aware that you are arguing that this might just be the observed phenomenon and not the necessary one; but I believe that the only reason we observe this time and time again is that the configuration of certain things will make a conscious mind, while a slightly different configuration creates gold. There is something about biology that does this. What is it exactly? That's what we are yet to find: we do not have the H2O formula for the brain yet.

OK, back to the building block discussion. Suppose for the sake of argument that we eventually isolate the motion of electrons as the most fundamental necessary and sufficient physical correlate for consciousness: whenever we see electrons moving about in such and such patterns, we are confident that there will be such and such conscious experience on the part of the system.

Now, what in our materialist ontology could account for this? Electron charge? Electron mass? Electron spatiotemporal configuration? What combination of these could you throw together to show a priori that consciousness must be the result? I argue that no combination of these could be thrown together to show that consciousness must result. Rather, at this point, we would have to rework our ontology to grant an entirely new property to electrons, such that we would be able to see a priori that such and such configuration of electrons must result in consciousness. This new property would have to be either a fundamentally irreducible aspect of consciousness on the part of electrons, or it would have to be some kind of microphenomenological property of electrons such that electrons by themselves are not conscious, but when combined in patterns just so, their microphenomenological properties combine to result in consciousness.

This argument applies to any H2O formula we may wish to hypothesize for consciousness. You say we have not found the formula yet; I say that for any formula built solely from materialist building blocks, we will still not be able to show a priori that this formula must necessarily result in consciousness. We just need more building blocks than materialism will give us.

Not a zombie, if it is a matter of configuration. Science has only recently had the technology to even consider such things: what needs to be looked at is the point where physics becomes biology. Find out what it is about biology that makes it biology, and not just another chunk of jasper on the plain. It is a round peg all right, but that's because the way it has been thought of and talked about has led to massive confusion. (Not that you are involved in that confusion; your point is very much aside from that.) It could be that it is impossible for a person with the right configuration NOT to have a mind.

I don't think anyone will dispute that, in our world, it is impossible for a person with the right configuration not to have a mind. The question is whether or not it is a metaphysical impossibility: if the world were different somehow, would consciousness still be the necessary result of the right brain configuration? For instance, in our world it is impossible for an electron not to be attracted to a proton. In a possible world with different laws of physics, this would not necessarily be the case.

It is a metaphysical impossibility that, in a world with H2O molecules and laws of physics identical to ours, those H2O molecules should not combine to form (given suitable circumstances) a macrophysical substance with properties identical to water in our world. A very straightforward argument involving physical structures and functions can be given to support this claim. It is not at all clear, however, that a possible world that is physically identical to ours must necessarily support consciousness. If it is claimed that such a world must support consciousness, no substantive argument can be given to support the claim, even in principle, for all the familiar reasons. This is another way of getting at the suggestion that there must be something more than just the physical involved in the phenomenon of consciousness.

So, with that in mind, the match analogy may still stand. It is in virtue of its constituents that it is so. There is no logical reason why it should not be so with the brain; it's just not an area that has had a lot of attention until recently, and we are still figuring out what each neuron of the brain does (the whole 70s-80s were devoted to one-to-one neuron-response link-up research; they have yet to head lower).

Under this view, a brain functioning without there being consciousness would not be conceivable.


I agree that there is no logical reason to say that it is not in virtue of some property of the brain and its constituents that consciousness exists. However, there is much logical reason to say that such a property is not included in our current materialist ontology.
 
  • #153
Originally posted by selfAdjoint
No, no scientific research, this is a PHILOSOPHY thread, and my construction is that familiar philosophical device, the idealized contraption. Here is a device that perfectly simulates all the physical aspects of a bat in its environment. And I say it's not a Zombie, but you apparently say it is. Upon what basis do you claim that?

The idea is not that your artificial bat must be a zombie. The idea is that we can't be certain what effect the different physical constitution has on its purported consciousness. Upon what basis do you claim that the artificial bat must have the exact same experience as the natural bat?

The argument you have put forth so far leaves much room for doubt. In fact, it does little more than beg the question; your argument rests firmly on the assumption that physical transitions and such are all that is responsible for consciousness, whereas this is precisely the issue that is open to question. To advance, you must propose an argument detailing how it must be that the functional constitution of the bat is sufficient for explaining its first person experiences.
 
  • #154
Originally posted by selfAdjoint
No, no scientific research, this is a PHILOSOPHY thread, and my construction is that familiar philosophical device, the idealized contraption. Here is a device that perfectly simulates all the physical aspects of a bat in its environment. And I say it's not a Zombie, but you apparently say it is. Upon what basis do you claim that?
I didn't claim that you were wrong. If you look I wrote that what you said might be true. I was just pointing out that it was pure conjecture unsupported by any evidence.

Edit: Whoops, just noticed hypnagogue said this for me.
 
  • #155
The argument you have put forth so far leaves much room for doubt. In fact, it does little more than beg the question; your argument rests firmly on the assumption that physical transitions and such are all that is responsible for consciousness, whereas this is precisely the issue that is open to question. To advance, you must propose an argument detailing how it must be that the functional constitution of the bat is sufficient for explaining its first person experiences.

Exactly. I'm trying to firm up the discussion by pinning down the issues in a case where we don't have all the baggage of human consciousness to contend with. IMHO that's exactly what Nagel did in switching from talking about qualia in people to presumptive qualia in bats. The point is exactly that nobody knows what goes on in a bat's mind, so the discussion can remain free of special pleading.

If you don't like the bat, here's another one. Could an AI be built to sense colors the way people do, with the three receptor bands, intensity differencing, and maybe a neural network for identification and memory? Suppose it could then be run through experiences with colors, some good and some bad according to a carefully designed program, so that it had various associations with various colors. If it then "discussed" its experience of colors with researchers and showed complex discussion behavior that was not programmed in advance, could you then say the device was experiencing color qualia?
 
  • #156
So you are attempting to point out the uncertainty of our knowledge of consciousness? I don't think anyone except the most extreme on either side really disputes that notion. At this stage of our understanding (and possibly forever), we just don't know enough to answer your question with much more than educated speculation. But this is a different matter from the question of whether or not materialism can explain consciousness in principle.
 
  • #157
Originally posted by hypnagogue
So you are attempting to point out the uncertainty of our knowledge of consciousness? I don't think anyone except the most extreme on either side really disputes that notion.
I would. Did you mean to say 'scientific knowledge of consciousness'?
 
  • #158
Originally posted by Canute
I would. Did you mean to say 'scientific knowledge of consciousness'?

Yes, that's what I meant. Sorry for my lack of clarity.
 
  • #159
Originally posted by Canute
Hmm. I don't think I'm clever enough to unpick this muddle. You seem to be confusing 'emptiness' and 'nothing'.

Per AHD (American Heritage Dictionary)...

Emptiness: Holding or containing nothing; vacant; meaningless, devoid, lacking force or power.

Nothing: no thing; not anything; insignificance; obscurity; absence of anything perceptible; someone or something of no consequence.

So let's dissect it...

Is a life -- or the Universe -- "holding or containing nothing"? While my life might not be a "big thing" within the cosmic context, it still contains SOMETHING ...which, at the very least -- and, ironically, the very most -- are EXPERIENCES that may or may NOT be being "recorded" in the Memory of the Universe ...and which would comprise the "spiritual plane".

Insignificant? Well, maybe I am -- or maybe I'm not -- but I know for sure that if ANYTHING IS "significant" it is the Universe.

Obscurity? Perhaps for me, although I think our existence as a species -- among many others -- makes a "contribution" to the Collective by way of these Experiences we're having. These "lifetimes" might "live on" in some sort of "information storage system" which, in my estimation, is what the "spiritual plane" of the Universe might be.

Absence of anything perceptible? Well, we seem to perceive a lot. And there seems to BE a lot to perceive. In fact, there may be MORE than that which can BE "perceived". For instance, I think there might be a "force" that we could call "intention" that operates at the QM level ...in effect, "plucking the strings" -- to "go" this way or that -- in the PROCESS OF MANIFESTATION!

Someone or something of no consequence? Everything has "consequence": it's a Cause & Effect Universe.

Not anything? Can the Universe be nothing?

Does it "lack meaning"? It may. I don't know ...yet.

Can It be "lacking in power" when it MAY be "all energy all the time"?

...as well as "all INFORMATION all the time" ...which will take me next to a thread on the theory of consciousness which I want to present under Theory Development. Since "consciousness" is "at least" a "part" of the Universe -- thus a part of "cosmology"...I think it would be appropriate to discuss it there.

However, I will have to ask them if a serious discussion about "An Evolving Theory About Information Exchange that Might Explain Consciousness as a Fundamental Ingredient/Process of the Universe and Point to a Possibility about Creation at the QM Level" is "off topic" under Theory Development?

Btw, I think you're "clever enough". :wink:
 
  • #160
Sorry, this is my fault, I wasn't clear. When I mentioned 'emptiness' I meant it as a Buddhist would mean it. It doesn't matter what that means, but I didn't mean 'nothing'. As you pointed out 'nothingness' appears to be seething with things.
 
  • #161
Hello again Hypnagogue, Sorry it took so long to reply...

Originally posted by hypnagogue
I agree that, for a theory of consciousness to make sense, it must make reference to some sort of building blocks for consciousness; either in the form of an irreducible and fundamental entity, or in the form of some 'things' that are not themselves conscious but somehow combine to create consciousness.
good. we have a starting point.


Now the question becomes: are these building blocks included in our contemporary materialistic ontology?

Well, it depends. Yes, the materialistic stance is quite shallow, and no doubt it needs to be fleshed out, but the basis for it may still be there. Take a look at Place and Smart's work on identity theory (I know they are Australian, but hell, we do have some minds all the way down here). It simply states that a mind state IS a brain state. What they have done is set up a field to explore: what is a brain state? If you can figure out what that is, then you have the next step to the reduction: take it down to biology, and then ultimately physics, and you have your building blocks for consciousness. But I will address that better where you have mentioned it below.


As a side point here: if your criterion for judging whether an entity is conscious or not is the degree to which it can interact with and be conditioned by its environment, why should physical constitution matter?

I think that we should at least start at a point where we know that consciousness is the case. (I am aware that people will argue that we are not conscious, and that we are all just robots, but I am going to presume consciousness on the basis of Searle's "seeming" argument.) If it is so that biology is conscious, then we can figure out what the constitution of biology is, and then see what the essential ingredients of the physics/biology boundary are. We can then say that they are the essential building blocks of consciousness, as they make biology, and biology is conscious, as it can react and interact with its environment. We can never argue the necessity of biology for consciousness. But we can say "check it out, we have a working example, let's see how that happens".

Deriving (as opposed to fundamentally asserting) the necessity of biology for consciousness from an entity's ability to interact with the environment seems to be faulty, since you could just as well derive that a silicon robot should be conscious by the same criterion.

So, what makes a biological cell that is reacting and interacting with its environment different from a robot that is showing the same behavioral patterns? Nothing, according to that definition. So the theory has to be expanded to show us how to tell imitation from the real thing (it is called "artificial intelligence" after all :o).

Besides the fact that my explanation of consciousness is based on biological, or at least physical, causation, and that programmed robots ignore the causation part of the initial condition for consciousness and just write the consciousness on top to be run on a bunch of silicon mapping, there seems to be a testable and verifiable way of seeing if a robot is conscious in the same kind of sense that a human is conscious: that it attributes meaning to its environment. It is reacting in a meaningful and productive way TO ITSELF as well as to the environment. I think Searle has pretty much covered this one with his Chinese Room argument, to which Dennett (or anyone else for that matter) has yet to reply decently. All I have seen in the literature is personal attacks and insults on Searle, demanding that he fall in line with the rest of the functionalist community. All I am saying is that my work on consciousness is an exposé of what we know to be conscious, and would need some work to expand as a proof or disproof of AI, even though AI breaks every rule I set up for a conscious being (as it has no causation) yet will act like one, as it is programmed to.

Like you, I have been wary of functionalism as a good starting ground for any hypothesis about consciousness. However, recently I read an argument with a functionalist flavor put forth by Chalmers that gives me pause. If you are interested, it might be appropriate to start another thread on the topic.

Absolutely. That would be great to see. As much as I am not too keen on Chalmers's work, he is an interesting writer to read.

OK, back to the building block discussion. Suppose for the sake of argument that we eventually isolate the motion of electrons as the most fundamental necessary and sufficient physical correlate for consciousness: whenever we see electrons moving about in such and such patterns, we are confident that there will be such and such conscious experience on the part of the system.


Not so much "these atoms moving like such means we will have this conscious experience"; rather, I am saying that a certain formation of atoms will produce consciousness in the system. The nature of the conscious experience will be dictated by the biology: what kind of biology does this thing have with which to experience the environment? All of that is higher-level stuff that we may or may not predict on an atomic level. All I am interested in is what combinations make consciousness possible; experiencing consciousness is another question altogether.

What combination of these could you throw together to show a priori that consciousness must be the result? I argue that no combination of these could be thrown together to show that consciousness must result. Rather, at this point, we would have to rework our ontology to grant an entirely new property to electrons, such that we would be able to see a priori that such and such configuration of electrons must result in consciousness.


Yes, the ontology that I follow in Place and Smart does not explicitly state this, but it is implied that you can go to the biology and find out what physical constituents made it possible to form. Have that, and you have your physical energy level of consciousness.

This new property would have to be either a fundamentally irreducible aspect of consciousness on the part of electrons, or it would have to be some kind of micro phenomenological property of electrons such that electrons by themselves are not conscious, but when combined in patterns just so, their micro phenomenological properties combine to result in consciousness.

Exactly what I am saying. It's the combination that matters: certain combinations make one thing, other combinations make consciousness.
And if you know the combination that makes something biology, then you will know a priori that so many atoms in this combination will make consciousness. It's like baking a microscopic cake.

The metaphysical question of "even if it is so in our universe, is it possible for consciousness NOT to result from this mix in another universe?" is to me a wonderful question to speculate on, but essentially one with no answer. How can we ever know whether consciousness of this sort is contingent here or a necessary factor of existence? That sort of thing keeps one awake at night.

Even if in another universe something other than the purely physical is needed to support consciousness, it means nothing to us here. I will argue that we have all the ingredients for consciousness right here in front of us; we are just not looking hard enough for them.
 
  • #162
Originally posted by Dark Wing
It's the combination that matters: certain combinations make one thing, other combinations make consciousness. And if you know the combination that makes something biology, then you will know a priori that so many atoms in this combination will make consciousness. It's like baking a microscopic cake.
So, if it is up to the combination, perhaps there is a critical distinction that needs to be made which I have never seen anyone make: perhaps there is no such THING as consciousness; perhaps there is a myriad of phenomena, each of which may be a 'conscious experience'.

So a reductive explanation of 'Consciousness' will fail, because there is no such thing as 'Consciousness'; there are instead attributes of consciousness. If you follow me...

I guess this is similar to saying there is no such thing as 'The Biological World', there are only creatures which may be said to be biological.

Does this make sense/Help?
 
  • #163
That's not a million miles from the Buddhist view.
 
  • #164
Is there anything in the physical Universe that doesn't receive and respond to something? Elementary particles receive and respond to the weak and strong forces. Larger systems "sense" and respond via gravity to each other's masses.

Perhaps we are being too narrow when we define consciousness as a process that "emerges" when a (biological) system becomes sufficiently complex. Perhaps consciousness could be said to be the sensing and responsiveness to ANY information, however minimal.

Perhaps we are being "elitist" to confer consciousness only to those biological systems with brains ...brains being "merely" a biological device that has evolved to process (receive and respond to) a LOT of information. Even one-celled creatures who, say, have an affinity to light, are sensing and responding to SOMETHING. This certainly constitutes an "awareness" of sorts, and possibly what could be considered a rudimentary consciousness.

Perhaps consciousness is on a continuum from very simple to very complex, and thus is FUNDAMENTAL to every part and parcel (particle and system) of the Universe. And if true, it would be a "reductive explanation of consciousness" that has SUCCEEDED.
 
  • #165
Originally posted by M. Gaspar
Is there anything in the physical Universe that doesn't receive and respond to something? Elementary particles receive and respond to the weak and strong forces. Larger systems "sense" and respond via gravity to each other's masses.

Perhaps we are being too narrow when we define consciousness as a process that "emerges" when a (biological) system becomes sufficiently complex. Perhaps consciousness could be said to be the sensing and responsiveness to ANY information, however minimal.

Perhaps we are being "elitist" to confer consciousness only to those biological systems with brains ...brains being "merely" a biological device that has evolved to process (receive and respond to) a LOT of information. Even one-celled creatures who, say, have an affinity to light, are sensing and responding to SOMETHING. This certainly constitutes an "awareness" of sorts, and possibly what could be considered a rudimentary consciousness.

Perhaps consciousness is on a continuum from very simple to very complex, and thus is FUNDAMENTAL to every part and parcel (particle and system) of the Universe. And if true, it would be a "reductive explanation of consciousness" that has SUCCEEDED.

A lot of philosophical considerations point to consciousness being a fundamental aspect of reality (this thread for example). But supposing that consciousness is on some level fundamental is actually the antithesis of a reductive explanation.

As for the biological view, I don't think it's elitist as much as it is pragmatic. We know for a fact that humans are conscious and we have good reason to believe that other animals are conscious as well. The further the systems we consider stray from being human, the less confidence we can have that these systems are conscious. So it is more a matter of starting in an area where we can be confident, learning what we can from that starting point, and then extrapolating to more general systems as our knowledge and theoretical frameworks progress. It may be true that an amoeba (or a rock) is conscious on some level, but for now that is just speculation.
 
  • #166
Originally posted by Another God
So, if it is up to the combination, perhaps there is a critical distinction that needs to be made which I have never seen anyone make: perhaps there is no such THING as consciousness; perhaps there is a myriad of phenomena, each of which may be a 'conscious experience'.

So a reductive explanation of 'Consciousness' will fail, because there is no such thing as 'Consciousness'; there are instead attributes of consciousness. If you follow me...

I guess this is similar to saying there is no such thing as 'The Biological World', there are only creatures which may be said to be biological.

Does this make sense/Help?

I don't know how I feel about that. You can perhaps say that there is no intrinsic property that differentiates a biological system from a non-biological one, but from the 1st-person view at least, there seems to be an obvious intrinsic difference between a conscious system and a non-conscious system.

Besides, even if we accept that what we need to describe are attributes of consciousness, all the familiar arguments still apply as to why we could not explain these attributes reductively in the materialist framework.
 
  • #167
Originally posted by Dark Wing
Well, it depends. Yes, the materialistic stance is quite shallow, and no doubt it needs to be fleshed out, but the basis for it may still be there. Take a look at Place and Smart's work on identity theory (I know they are Australian, but hell, we do have some minds all the way down here). It simply states that a mind state IS a brain state.

But this is still unintelligible under the conventional materialist framework. I am not saying it is impossible in reality-- obviously that is not the case. I am just saying that if you analyze a system that turns out to be a human brain using a materialistic analysis, you will never deduce that this system has consciousness. It may very well be the case that a mind state is a brain state; but if this is so, then the implication is that there is something about brain states that is not recognized in materialism.

I think that we should at least start at a point where we know that consciousness is the case.

A very sensible approach indeed, but I don't think using brains as the starting point necessitates that we build the theory explicitly around biology. If anything I think that should come as a result of empirical research rather than a starting assumption. (I don't think we need this assumption to do productive research on consciousness.)

If it is so that biology is conscious, then we can figure out what the constitution of biology is, and then see what the essential ingredients of the physics/biology boundary are. We can then say that they are the essential building blocks of consciousness, as they make biology, and biology is conscious, as it can react and interact with its environment. We can never argue the necessity of biology for consciousness. But we can say "check it out, we have a working example, let's see how that happens".

Doesn't this research paradigm just boil down to determining what makes biology, and then just baldly asserting that those things that make biology also make consciousness? I may have misunderstood, but it sounds as if you are begging the question here.

Besides the fact that my explanation of consciousness is based on biological, or at least physical, causation, and that programmed robots ignore the causation part of the initial condition for consciousness and just write the consciousness on top to be run on a bunch of silicon mapping, there seems to be a testable and verifiable way of seeing if a robot is conscious in the same kind of sense that a human is conscious: that it attributes meaning to its environment. It is reacting in a meaningful and productive way TO ITSELF as well as to the environment.

In what way does a machine act any less causally than a life form?

If you suppose that there is a one to one mapping of brain states onto mind states, then in principle the entire behavioral proclivities of a person should be encoded in their neural firing patterns. In principle, these neural firing patterns could be emulated perfectly by a complex computer. So, in principle, you could build a computer (with a robot body and so on) that would act indistinguishably from its human counterpart; how then would you conclude that the human attributes meaning in such and such a way and that the computer/robot does not?

I am familiar with the Chinese Room argument and I suppose you might invoke it here. However, from the point of view of materialism, the human brain might as well be just as void of semantics as the computer which blindly runs instructions. It is just an input-output device, after all; just as it is eminently unclear how/at what point a computer running instructions would somehow become conscious, it is equally unclear how/at what point a human brain interpreting sensory input should somehow become conscious. We know the CR argument does not apply to human brains not from some special caveat in the argument that explicitly distinguishes how biological brains are different from all other cases; rather, we know the CR argument does not apply to human brains because we are human brains and we have 1st person evidence of our own consciousness to the contrary. It could very well be equally the case for an AI robot; it could be that although the CR argument indicates that the robot should not be conscious, in fact it is conscious, and it knows this from its own 1st person subjective experience. So the only way to refute the CR argument for any physical system under our current understanding is to be that system, and accordingly I don't think it can be relied upon to guide our intuition. If anything, it is simply another way of showing how our current understanding is wholly inadequate.

Obviously there is something somewhere along the line that introduces semantics (consciousness); whatever that 'thing' is, we know the human brain has it, but the robot might have it as well (and, strictly from the 3rd person view, it would certainly at least appear to have it, although appearance does not constitute a proof). To state outright that that special thing must be biology is, again, too much of an assumption for me. It could be the case, but it could equally not be the case.

Absolutely. That would be great to see. As much as I am not too keen on Chalmers' work, he is an interesting writer to read.

OK, good, I will put up a post on Chalmers' functionalism argument sometime soon.

Yes, the ontology that I follow in Place and Smart does not explicitly state this, but it is implied that you can go to the biology and find out what physical constituents made it possible to form. Have that, and you have your physical energy level of consciousness.

Even if it did turn out that some property of biology accounts for consciousness, this would in turn imply that biology possesses some fundamental property pertaining to consciousness that is entirely omitted in the current materialist framework.

The metaphysical question of "even if it is so in our universe, is it possible for consciousness to NOT result from this mix in another universe" is to me a wonderful question to speculate on, but essentially one with no answer. How can we ever know whether consciousness of this sort is contingent here or a necessary factor of existence? That sort of thing keeps one awake at night.

The metaphysical argument is used more to highlight the notion that materialism alone is insufficient to explain consciousness. The argument simply stated says that it is conceivable that there be a metaphysical world that is physically identical to ours but in which a human brain is not conscious. The conclusion is that there is some non-physical property of brains that accounts for consciousness. (Or, if you prefer, it is conceivable that there is some metaphysical world where the CR argument serves as a sound refutation of consciousness in human brains; there is computation but no consciousness.)
 
  • #168
Ok Hypnagogue, I think I have lost you somewhere. I am a little confused, so let's go through this a little slower so I know what you are arguing.

-What do you think the current materialistic stance is? You keep saying that you don't believe that by looking at the brain you can deduce consciousness within. And you claim that this means that the paradigm of materialism is lacking in explanation of anything that it is trying to prove/study/look at. Basically, materialism is redundant in that it cannot explain what it is trying to, as it does not even know what it is looking at.

Well, exactly. Because the word "consciousness" is completely misleading. There is no such thing. You can't deduce consciousness from looking at the brain, as there is no Consciousness there. But you can deduce movement and interaction of environmental stimulus, and you can see responses at the neuro-chemical level. So I guess all I am saying is that consciousness is life, is movement of matter and energy. And, if you take the identity theory side of materialism, then this is exactly what they are saying. Functionalism is looking at a reduction, eliminative materialism is basically looking at physics and claims there is nothing to reduce, and the concept of deducing consciousness is a complete farce. It just depends on what you mean by the materialistic stance: there are many... I am obviously very confused, please re-explain for my ignorance.

Originally posted by hypnagogue
It may very well be the case that a mind state is a brain state; but if this is so, then the implication is that there is something about brain states that is not recognized in materialism.

Again, please explain to me what materialism is then. I have always taken it to be the study of the physical: I know the functionalist stance takes it from the top down, but again, I am not sure what you are saying here, I am sorry.

A very sensible approach indeed, but I don't think using brains as the starting point necessitates that we build the theory explicitly around biology. If anything I think that should come as a result of empirical research rather than a starting assumption. (I don't think we need this assumption to do productive research on consciousness.)

It does not necessitate anything, really, it just gives us a good ground to understand the case of consciousness that we know exists. It is all very well to claim all kinds of things about AI, but if we don't even know what we are dealing with when we say things like this, then it's all pure speculation and word games. Wittgenstein has a lot to say about the misleading terms of mind philosophy, as do the Churchlands, who put all of this down to "folk psychology", but that again is another topic.

Doesn't this research paradigm just boil down to determining what makes biology, and then just baldly asserting that those things that make biology also make consciousness? I may have misunderstood, but it sounds as if you are begging the question here.
No, it says that those things that are biology are consciousness. Consciousness is not made. It is just a term we have put on something that is. We think of it as something being made, we think of it as a by-product, we think of it as something other than simply movement of atoms, but it's not. It's just the physical world doing its thing.

In what way does a machine act any less causally than a life form?
If you suppose that there is a one to one mapping of brain states onto mind states, then in principle the entire behavioral proclivities of a person should be encoded in their neural firing patterns. In principle, these neural firing patterns could be emulated perfectly by a complex computer. So, in principle, you could build a computer (with a robot body and so on) that would act indistinguishably from its human counterpart; how then would you conclude that the human attributes meaning in such and such a way and that the computer/robot does not?
In general it does act less causally than a life form, as it has no attachment or ability to link into the world at all (hence the Chinese room argument). BUT in the second case, where we build something PHYSICAL that actually fires and does things like a brain, then fine, you can attribute to it everything that a human has. Why? There is no such thing as a 1-1 mapping of the mind to the brain. There is only brain activity. You can't take the experience of brain activity and write a program for it, or even explain it without a concept of the neural activity like the cognitivists do. You can't do it. We are experiencing neural activity. You can't scrape consciousness off like that and write it separately from causation.

Identity theorists and eliminative materialists all claim that if you make something like the brain, then you have a good reason to say it works like one. Functionalists like to claim that a pile of tin cans could form the function of the mind. They are very different claims.

I am familiar with the Chinese Room argument and I suppose you might invoke it here. However, from the point of view of materialism, the human brain might as well be just as void of semantics as the computer which blindly runs instructions.
No. The difference is in the seeming. There HAS to be more to the story, as we actually do attribute meaning to our actions and the world around us NO MATTER HOW WRONG THOSE ATTRIBUTIONS MIGHT BE. A computer has no way of making those attributions: we are an inside agent attributing to the outside world; we are attributing meanings to the computer's actions: outside in instead of inside out. A computer is only dealing with symbols: it has no way of understanding them at all. What the materialist stance, in my opinion, is looking at is why that might be the case. Some choose to take it down to our constituents. That is the paradigm in which we are working here.

So the only way to refute the CR argument for any physical system under our current understanding is to be that system, and accordingly I don't think it can be relied upon to guide our intuition. If anything, it is simply another way of showing how our current understanding is wholly inadequate.

It is showing why the approach of functionalism has a lot of faults, and why we have to start relying on a real physical basis for what we are doing, by stopping these stupid analogies and looking at exactly what we are dealing with. Instead of saying "the brain is like a hydraulics system" or "a brain is like a computer", we have to give this a rest and say "a brain is like a brain, let's find out what that is", instead of running around and studying computers like they are going to suddenly turn over all the answers to the universe to us on their little silver and green hard drives. That's all Searle was trying to point out. It can be used as a guide to show that mainstream thought is heading wildly in the wrong direction, and we should start seriously exploring the other ideas that are around before too much embarrassment is caused, and Dennett falls flat on his face when Cog finally shows them that it's not possible.

Obviously there is something somewhere along the line that introduces semantics (consciousness); whatever that 'thing' is, we know the human brain has it, but the robot might have it as well (and, strictly from the 3rd person view, it would certainly at least appear to have it, although appearance does not constitute a proof). To state outright that that special thing must be biology is, again, too much of an assumption for me. It could be the case, but it could equally not be the case.
The human brain does not "have" it, it IS it. And you can argue that it's all the same in physics: that's fine: we have found a universal consciousness causation in which we are all linked. But to me, consciousness is a meaningless term, and is simply a re-description of a physical system brought on by a misunderstanding of how the body functions.

Even if it did turn out that some property of biology accounts for consciousness, this would in turn imply that biology possesses some fundamental property pertaining to consciousness that is entirely omitted in the current materialist framework.
Or that biology does not contain anything: it is just the structure that allows consciousness to be experienced. Maybe something (like a rock) can possess the fundamentals of consciousness, but not the means by which to experience it. So even then a robot may have the attributes to HAVE consciousness: it does respond etc. BUT it has no means of "experiencing" it, as it has no way of meaningfully connecting to the world.

The metaphysical argument is used more to highlight the notion that materialism alone is insufficient to explain consciousness. The argument simply stated says that it is conceivable that there be a metaphysical world that is physically identical to ours but in which a human brain is not conscious. The conclusion is that there is some non-physical property of brains that accounts for consciousness. (Or, if you prefer, it is conceivable that there is some metaphysical world where the CR argument serves as a sound refutation of consciousness in human brains; there is computation but no consciousness.)

Even if it is conceivable, so what? Just because you can conceive something means nothing. I can conceive of a pink elephant in another universe; does that mean that it is relevant to what's happening here? In another universe, maybe there is a non-physical substance that causes mind. There is little evidence that that is the case here. We have found our non-physical substance: energy.
 
  • #169
Originally posted by M. Gaspar
Is there anything in the physical Universe that doesn't receive and respond to something? Elementary particles receive and respond to the weak and strong forces. Larger systems "sense" and respond via gravity to each other's masses.

Well, I was more talking about conditioned responses: being able to change behaviour patterns via conditioning. But if you want to use just basic movement in general, then why not?

Perhaps we are being too narrow when we define consciousness as a process that "emerges" when a (biological) system becomes sufficiently complex. Perhaps consciousness could be said to be the sensing and responsiveness to ANY information, however minimal.
Sure, as long as it can actually change its behaviour as well. Maybe this is more of a quantum mechanics question.

Perhaps we are being "elitist" to confer consciousness only to those biological systems with brains ...brains being "merely" a biological device that has evolved to process (receive and respond to) a LOT of information. Even one-celled creatures who, say, have an affinity to light, are sensing and responding to SOMETHING. This certainly constitutes an "awareness" of sorts, and possibly what could be considered a rudimentary consciousness.

I would confer consciousness to anything that has the ability to react to its environment: not just with a blind action, but something that can actually be conditioned away from its usual behaviour. Whether energy IS actually consciousness, or whether it is only certain structures of energy that allow a structure to interact meaningfully with the environment - that is something probably worth researching.

Perhaps consciousness is on a continuum from very simple to very complex, and thus is FUNDAMENTAL to every part and parcel (particle and system) of the Universe. And if true, it would be a "reductive explanation of consciousness" that has SUCCEEDED.
Sure. I am all for it being a structural difference: I just say that physics is the basis for it. It is the building blocks and starting point for all of this.
 
  • #170
Originally posted by Canute
That's not a million miles from the Buddhist view.

Sure. All things connected, consciousness is a continuum, and all life is one. Why not? Take physics as your basis and you hardly have anywhere else to go.
 
  • #171
Originally posted by Dark Wing
Sure. All things connected, consciousness is a continuum, and all life is one. Why not? Take physics as your basis and you hardly have anywhere else to go.
I don't think physicists would agree that universal consciousness follows from physics.
 
  • #172
Originally posted by Dark Wing
-What do you think the current materialistic stance is? You keep saying that you don't believe that by looking at the brain you can deduce consciousness within. And you claim that this means that the paradigm of materialism is lacking in explanation of anything that it is trying to prove/study/look at. Basically, materialism is redundant in that it cannot explain what it is trying to, as it does not even know what it is looking at.

Well, exactly. Because the word "consciousness" is completely misleading. There is no such thing. You can't deduce consciousness from looking at the brain, as there is no Consciousness there. But you can deduce movement and interaction of environmental stimulus, and you can see responses at the neuro-chemical level. So I guess all I am saying is that consciousness is life, is movement of matter and energy. And, if you take the identity theory side of materialism, then this is exactly what they are saying. Functionalism is looking at a reduction, eliminative materialism is basically looking at physics and claims there is nothing to reduce, and the concept of deducing consciousness is a complete farce.

I find your position here a little confusing in light of things you have said previously:

I think that we should at least start at a point where we know that consciousness is the case. (I am aware that people will argue that we are not conscious, and that we are all just robots, but I am going to presume consciousness on the basis of Searle's "seeming" argument).

On the one hand you present consciousness with a very behaviorist kind of flavor, saying it is nothing more than the movement of matter and energy, and that really there is no such thing as consciousness; on the other hand you say that consciousness is the case, on the basis of Searle's "seeming" argument.

Consciousness seems to be a bad word to use in these discussions, since it always gets twisted around at some point. It's too ambiguous. The relevant component of consciousness that I am talking about is experience or feeling (or equally well seeming, I suppose.) Despite confusions about what consciousness is and if it really exists or not, can we agree that it is certainly the case that humans have 1st person subjective experiences? I am presuming you answer yes to this question, otherwise, you deny the manifestly true and we cannot proceed.

Now, to state that problem very simply and succinctly: how is it that "movement of matter and energy" can "seem" to be anything at all? Based on even our most complete understanding, there is nothing in matter and energy that should ever give rise to "seeming." That is the crux of the issue at hand. What in physics can account for "seeming," even in principle, the same way the structure of H2O molecules accounts for water? The answer would seem to be 'nothing at all.' Accordingly, there should be more to our descriptions of reality than there currently is, in order to fully account for experience/feelings.

again, please explain to me what materialism is then. I have always taken it to be the study of the physical: i know the functionalist stance takes it from top down, but again, i am not sure what you are saying here, i am sorry.

Materialism: the stance that only the physical exists, and the description of that physical ontology (a catalogue of properties / fundamental entities such as charge, mass, spacetime, and so on, as interrelated by the laws of physics).

no, it says that those things that are biology are consciousness. consciousness is not made. it is just a term we have put on something that is. we think of it as something being made, we think of it as a bi product, we think of it as something other than simply movement of atoms, but its not. its just the physical world doing its thing.

Again, the way you have described things here, I don't think there is anything being done to objectively ascertain that biology is consciousness; it is just assumed from the beginning and then carried through to the end.

No. The difference is in the seeming. There HAS to be more to the story, as we actually do attribute meaning to our actions and the world around us NO MATTER HOW WRONG THOSE ATTRIBUTIONS MIGHT BE. A computer has no way of making those attributions: we are an inside agent attributing to the outside world; we are attributing meanings to the computer's actions: outside in instead of inside out. A computer is only dealing with symbols: it has no way of understanding them at all.

You could just as well say that a human brain is only dealing with symbols and has no way of understanding it at all. Of course, we know this is not the case, but it really should be the case if you follow the logic of the CR argument. The only reason we know that CR does not apply to the human brain is from 1st person experience. If this is not the case, explain to me what caveats exist in the CR argument such that CR does not apply to human brains. Are these really justified by the argument or are they ad hoc patch-ups to make it compatible with reality?

The human brain does not "have" it, it IS it. And you can argue that it's all the same in physics: that's fine: we have found a universal consciousness causation in which we are all linked. But to me, consciousness is a meaningless term, and is simply a re-description of a physical system brought on by a misunderstanding of how the body functions.

I disagree. No matter what you say about consciousness, it is really impossible to deny that experience or feelings exist. If you look over the materialist ontology (spacetime, mass, charge, matter, energy etc, and the laws of physics), from what in this ontology can it follow, even in principle, that experience or feelings should exist? I argue that experience cannot logically follow from any of these things, and thus should join them as an ontologically fundamental building block.

Even if it is conceivable, so what? Just because you can conceive something means nothing. I can conceive of a pink elephant in another universe; does that mean that it is relevant to what's happening here? In another universe, maybe there is a non-physical substance that causes mind. There is little evidence that that is the case here. We have found our non-physical substance: energy.

Conceivability ties into explanatory power, which ultimately ties into our understanding of the world. If we possess a good explanation of a certain phenomenon, then any rational agent who accepts our axioms and understands our logic will not even be able to conceive of an event contrary to that predicted by the explanation. (Here, again, the relevant axioms are the fundamental, nonreducible components of materialism: spacetime, mass/energy, laws of physics, etc.)

Example: suppose we have two competing theories about the properties of water. On the first theory, the properties of water are determined by the water god Wata. This is not a very good explanation for several reasons, one of which being that it leaves us free to rationally conceive of something to the contrary; if Wata determines the properties of water, then why didn't he decide to make water look red instead of clear/green/blue? (For that matter, why is it Wata and not Raja who determines the properties of water?) I can easily conceive under this theory that water should turn out to be red; it has not been adequately explained to me why it must be the case that water has its characteristic properties as observed to exist in nature.

The second theory is the standard scientific one involving H2O molecules. On this theory, we start off with the characteristic materialist properties of H2O molecules-- their atomic structure and bonding propensities, and so on-- and from these, we show how a large collection of such molecules under the proper conditions must combine to form a macroscopic substance with the properties of water. Under this explanation, it is not even possible to rationally conceive that H2O molecules could combine to comprise a substance with properties different from water. The explanandum (thing to be explained) follows as a necessary consequence of the explanation, leaving no room for a rational imagination to contradict it.

If one can fully understand a materialist theory of consciousness and still rationally conceive of a metaphysical world physically identical to ours where human brains are not conscious (do not experience perceptual feelings), then the implication is that that materialist theory of consciousness is a pretty lousy one. Specifically, the axioms (materialist ontology) are insufficient; it has not shown how the explanandum (consciousness/seeming/experiencing/feeling) must be a necessary consequence of the explanation.

If my explanations still seem confusing or incorrect to you, you may want to check out a couple of papers by Chalmers that essentially embody what I am trying to say here:

http://www.u.arizona.edu/~chalmers/papers/facing.html
http://www.u.arizona.edu/~chalmers/papers/nature.html
 
  • #173
Originally posted by hypnagogue
If one can fully understand a materialist theory of consciousness and still rationally conceive of a metaphysical world physically identical to ours where human brains are not conscious (do not experience perceptual feelings), then the implication is that that materialist theory of consciousness is a pretty lousy one. Specifically, the axioms (materialist ontology) are insufficient; it has not shown how the explanandum (consciousness/seeming/experiencing/feeling) must be a necessary consequence of the explanation.
Unless I am mistaken, I believe that this point is the crux to the whole discussion, and that Dark Wing's claim is that although it is possible to conceive of such a situation, we know from first hand experience that it is not the case with 'myself', and since you are biologically congruent to myself, it is not the case with you, and since Apes are biologically analogous to us, it is not the case with them, and since Dogs, mice, fish are all related to us, then it is also not the case with them.

The problem of other minds is a genuine problem, but if we accept current scientific theory (materialism, evolution, causality) then we are forced to accept that my mind (brain) is essentially no different to your brain, which in turn is only different by a matter of degrees to every other mind in nature. As such every single mind in nature most likely results in 'experiences' akin to our own.

The obvious next step is to realize that with this constant throughout nature, it is feasible to claim that there is something about the brain which necessarily gives rise to experience.

While I understand your point, and in fact agree with it, the contingent evidence points to the conclusion that biological brains of this vague layout must give experience. I don't know why, it's just what they do. (I think this is what Dark Wing is saying)

Admittedly, staying with the problem of consciousness side is much easier. There are fewer jumps involved in the logic and it is safe. Following the line of reasoning that I just attempted to lay out has two rather large assumptions/claims/hopes in it. The problem of other minds is ignorable because of common descent (we have to assume that 'seeming' is available in degrees, and is something that has evolved the whole way alongside us), and then there is the somewhat dodgy attempt to explain this constant by claiming that it simply must be a consequence of having a brain (or being biological, however far down we have to go).
 
  • #174
In other words, our ability to imagine a world where there could be brains without experience is simply a trick played on us by our ignorance.*

Just like someone who doesn't understand Newton's/Einstein's laws could imagine a world where the sun revolved around the earth. It seems entirely reasonable, but as soon as you understand the details of the system and the laws that dictate them, it no longer actually seems possible.

*That is, of course, if Dark Wing is correct, which is something that may not be known for...a very long time for all we know.
 
  • #175
Originally posted by Another God
In other words, our ability to imagine a world where there could be brains without experience is simply a trick played on us by our ignorance.

I never claimed (for example) that an actually existing brain in this world which is identical to mine could possibly not have the same conscious states that I have. By extension, I am not skeptical that you yourself are experiencing some kind of conscious states, or that any other normally functioning person does as well.

There is a distinction to be drawn here between nomological (or natural) possibility and metaphysical possibility. These are two very different things. For instance, it is metaphysically possible that the speed of light in some metaphysical world be different from c, although this is (to our best understanding) nomologically impossible (impossible in this actual world we find ourselves in). In general, those things which appear to be contingent facts of nature (such as the value of c) are nomologically fixed but are not metaphysically necessary in any strong logical sense.

My claim is not that the activity of the brain in nature cannot/does not/could potentially not account for consciousness in this world. Quite the contrary, I believe that a brain performing the appropriate activities under the appropriate circumstances will always be conscious.

My claim is that the brain as it is modeled by materialism cannot theoretically account for consciousness. This is a claim about our model of reality, not reality as it actually is. I claim this because it appears to be a logical impossibility to theoretically derive the existence of consciousness starting from materialistic assumptions. Given that I fully accept the natural existence of consciousness and its natural relationship with the natural brain, the only rational route is to re-examine our assumptions (materialism).

We have a fact of existence (consciousness) that is impossible to derive from materialism; hence, materialism must be an insufficient model of reality; hence, we must modify materialism by adding more assumptions/axioms/contingent entities than it currently possesses, such that with our revised model of reality we will be able to satisfactorily explain consciousness as we observe it to exist in nature. (On some level, this will involve making at least some aspects of consciousness, or those things that somehow combine to create consciousness, irreducible and fundamental.) That is, with our revised model, we should be able to derive the properties of consciousness as they are related to brain function as a necessary consequence of our starting assumptions. Our explanation should be good enough that it leaves no rational room for imagining a reality to the contrary of what is observed in nature, given that we accept the starting assumptions as true. This explanatory rigor characterizes the strength of our explanatory power for (eg) the properties of water in terms of H2O molecules, and it is precisely the kind of explanatory rigor that is impossible to build into an explanation of consciousness using just our current materialist assumptions.
 
  • #176
Hypno

I usually agree with you but you seem to have simply assumed here that consciousness is caused by brain. Or have I misread your words?
 
  • #177
Canute: I don't think it is a 'simple assumption', but rather a well-thought-through belief that the brain is the cause(/seat/existence/receptor) of consciousness.

And yes, once again Hypnagogue, I understand your point and accept it. I really do appreciate how clearly you explain yourself.

So, let me try to state things as I now understand them. You do not necessarily disagree with my last post, and in fact you probably agree with it: There is reasonable evidence to accept that there is necessarily something within the brain/biology which 'gives rise' to consciousness. The problem still remains though, that our understanding of the brain via materialism still simply cannot ever explain how that comes about.

LOL. OK, saying that now, I realize that no new ground has been reached. This is still exactly the initial complaint. Sorry about that, but it's all in good fun still.

I have always agreed with your position, since before I ever did any philosophy of the mind and then after as well. It was part of my rationale of the universe that there is 'the objective', and then there is 'the subjective'. The subjective comes about through the objective, but obviously it is different. The objective is the universe, the subjective is a perception of the universe. As such, the best I could do was to postulate that there was some objective occurrence/event/machinery that created subjective experience. Of course materialism will never find this subjectivity (because it only looks at the objective), but it may very well be able to find the objective machinery that causes it.

And so, yeah, this doesn't resolve the problem as such, but at least it gave me a basis for ignoring it for a while. Wait until we find the direct causality of experience and then we will start to understand whether there is an explanation for experience itself, or whether it is simply something we have to accept as a phenomenon...

I dunno. I am rambling. This topic does naught but confuse me every time I go there. I suspect that that is precisely why no progress has been made in it since the beginning of time.

Shane
 
  • #178
Originally posted by Canute
Hypno

I usually agree with you but you seem to have simply assumed here that consciousness is caused by brain. Or have I misread your words?

I don't think the brain necessarily causes consciousness as such, but I do think it's obvious (or at least, we have very good reason to believe) that the consciousness that we experience systematically varies as a function of brain activity. This does not imply that the brain actually creates or comprises consciousness; it could be that consciousness (or microphenomenological 'things') exists independently of the brain and brain activity somehow manipulates the pre-existing 'thing.' (I use 'thing' very loosely here, since whatever that 'thing' would be in this case, it would be ontologically distinct in several important ways from physical things, or at least our materialistic conception of physical things.) Either way, it is still reasonable to assume that (for instance) someone with similar brain activity as mine in a similar setting as I am in will have similar experiences.
 
  • #179
Originally posted by Another God
I have always agreed with your position, since before I ever did any philosophy of the mind and then after as well. It was part of my rationale of the universe that there is 'the objective', and then there is 'the subjective'. The subjective comes about through the objective, but obviously it is different. The objective is the universe, the subjective is a perception of the universe.

I don't think subjective experience should be thought of on the most fundamental level as a perception of something in the way I understand you to mean it, which is as a representation. The way I see it, experience on the most fundamental level is a property of nature just as much as mass or charge, albeit with obvious ontological differences; experience in itself is just experience, a neutral natural phenomenon that need not be inherently representational. Obviously life has evolved to use experiential phenomena as representations of objective reality, but that is just a function that experience has come to serve for life over time rather than an actual inherent property. (By analogy, even if we always observe color to be an element of sexual selection in a species of birds, it doesn't mean that color is inherently sexual-- that is just a function that color has come to serve for the birds over time.)

And so, yeah, this doesn't resolve the problem as such, but at least it gave me a basis for ignoring it for a while. Wait until we find the direct causality of experience and then we will start to understand whether there is an explanation for experience itself, or whether it is simply something we have to accept as a phenomenon...

If Chalmers' argument is correct, then it is impossible even in principle to fully explain consciousness solely in terms of materialistic entities and properties. Even given a perfect theoretical mapping of brain states onto conscious states, we would still not be able to dispense with the explanatory gap using only a materialist framework. (I suggest you read the two articles linked to in my last reply to Dark Wing if you are interested in a more thorough argument.)
 
Last edited:
  • #180
Originally posted by Another God
The obvious next step is to realize that with this constant throughout nature, it is feasible to claim that there is something about the brain which necessarily gives rise to experience.

AG, I commend you on your comments. It seems you have understood Hypnagogue's posts and even linked it to your own thoughts on the objective and subjective. I understand what you're saying and agree for the most part. I do have a question regarding the above quote however. Can I reasonably state this and it be just as accurate...

"The obvious next step is to realize that with this constant throughout nature, it is feasible to claim that there is something about radios which necessarily gives rise to music."

Yet, radios don't compose music. Tell me your thoughts on this analogy. To make the analogy work, assume that you know nothing of how radios work.
 
Last edited:
  • #181
Unfortunately there is no evidence that anything about the brain gives rise to consciousness. As Hypnogogue says, representational (or intentional, phenomenal etc) consciousness seems to be an evolved product of brain (or vice versa) but nothing suggests that brains are a necessary prerequisite for consciousness. Strictly speaking it isn't even completely clear how to define 'brain'. When you look at the behaviour of macrophages or slime mould you have to wonder.
 
  • #182
Originally posted by Canute
representional (or intentional, phenomenal etc) consciousness

Actually, I was trying to say in my response to AG that representational and phenomenal content are two distinct things. The representational content of a subjective experience is an extrinsic, relational property, an 'aboutness' that relates it to objective reality. The phenomenal content of a subjective experience is an intrinsic, inherent property that is entirely self-contained.

For instance, take the subjective experience of the blueness of the sky. This subjective blueness has a representational content insofar as it represents information about the objectively existing sky 'out there.' But this is to be distinguished from its phenomenal content, which is simply its inherent property of perceived blueness. The former seems to be a convenient usage of the latter that life has evolved, but in its most stripped down sense there need not be anything representational about a phenomenal percept.
 
  • #183
You're basically right, I was sloppy. However, although I agreed completely with what you said earlier about this, I'm still slightly unsure whether 'phenomenal' consciousness is fundamental, which you implied.

I suspect that a state of 'emptiness' is not properly described as 'phenomenal consciousness', at least in the sense of 'pertaining to a phenomenon'. That's why I lumped phenomenal consciousness in with the rest.
 
  • #184
Where does a state of 'emptiness' come into play?

Actually, I'm not sure I entirely understand that second paragraph you wrote.
 
  • #185
I was agreeing that phenomenal consciousness is intrinsic, not dependent on representations, and is more fundamental than representational consciousness. But in a not very clear way I was also suggesting that the fundamental consciousness that you argued may be an irreducible property of nature may not be phenomenal or intentional consciousness either. This is because it seems to me that fundamental experience of consciousness cannot really be called 'phenomenal'.

In a sense there is a phenomenon, namely the experience. But at the limit the phenomenon is the experience, as opposed to being separate from it, whereas 'phenomenal consciousness' suggests (to me anyway) an experience somehow separate from the experiencer, in other words a phenomenon and a consciousness of a phenomenon.

God I'm pedantic sometimes. Sorry.
 
  • #186
In my mind phenomenon is a word that describes the felt experience. So you can't have it separated from the consciousness.


Fliption: I will have to think about it for a while. I keep tossing around the idea that radios do more than just music; they also carry talkback, advertisements, static etc. Perhaps it would be more accurate to say radios necessarily give rise to noise.

With this new version though, it seems much more appropriate to me to say yes, the analogy may be correct, although I am sure your point would be missed. You wouldn't like the thought of 'static' being just as meaningful to the radio analogy as consciousness is meaningful to the brain. But that is the only way I can see the analogy as being useful.
 
  • #187
AG,

I think "noise" may technically be accurate, but I'm not sure it's relevant to the analogy. Assume you're an alien with no knowledge of radios and you have a radio that will only play one station. You cannot change stations to hear static. This station can play music or have people talking, it doesn't matter. You have knobs to control volume and EQ. Couldn't you, with the limited knowledge of an alien, make the same quote about this radio/music/talking that you made about brains/consciousness?

Here's the point: If you agree that this analogy applies then we have a good example of a situation where having good reasons to believe one way would actually be leading us astray. In this case, that music/talking is an emergent property of a radio's design. (I tried to leave out static because while it is obvious that voices and music are not generated by the radio, I'm not sure about the technical origin of white noise. The importance of static isn't so relevant. I'd include it if I knew you wouldn't say that "the radio does generate the static". Because I'm not technically savvy enough to debate that :smile:)
 
Last edited:
  • #188
Originally posted by Another God
In my mind phenomenon is a word that describes the felt experience. So you can't have it separated from the consciousness.
I'll try to be more clear about what I meant. It's important because if consciousness is not reducible then it must be consciousness in its most 'unevolved' state, or rest state, that is fundamental.

Often this fundamental state is taken to be phenomenal consciousness, as I think Hypnogogue did above, and Chalmers does also. However the term is ambiguous and therefore can give rise to misunderstandings.

This is Austen Clark from http://www.ucc.uconn.edu/~wwwphil/pctall.html

"States of phenomenal consciousness involve a special kind of quality: phenomenal qualities. A state of phenomenal consciousness is a state in which something appears somehow to someone, and phenomenal qualities characterize that appearance."

This is the danger, that phenomenal consciousness is defined too narrowly, as a state in which 'something appears somehow to someone'.

My point was that there are states more fundamental than this, states in which there is no distinction between the something that appears and the someone to whom it appears, and maybe 'phenomenal consciousness' is not the right term for them, given the above kind of useage and definition.
 
Last edited by a moderator:
  • #189
I think phenomenal consciousness could just as well be 'a state in which something appears somehow.' That's the way I have been taking it, at least. The 'to someone' part implies the presence of a further 'something that appears somehow,' which is the appearance of that 'someone' to whom the experiences are happening, or the appearance of selfhood. So to define phenomenal consciousness in the way you are using it may be a little circular or recursive. For instance, I would define the phenomenal consciousness of red as 'the appearance of redness,' whereas under the definition of phenomenal consciousness that you suggest, it would be 'the appearance of redness in conjunction with the appearance of self.'

In any case, it is just a matter of how we define our terms. But I agree with your central point, which is that experience on the most fundamental level need not necessarily include the experience of selfhood.
 
  • #190
Originally posted by hypnagogue
I agree that, for a theory of consciousness to make sense, it must make reference to some sort of building blocks for consciousness; either in the form of an irreducible and fundamental entity, or in the form of some 'things' that are not themselves conscious but somehow combine to create consciousness.

Now the question becomes: are these building blocks included in our contemporary materialistic ontology? This is precisely where I believe that contemporary materialism must fail in any attempts to really explain consciousness, because I do not think any of the building blocks given to us in a materialistic ontology can do the job of showing us how to explain or deduce consciousness. We need more building blocks.

I've decided to go back... if NOT to the "beginning of time" ...then at least to page 13 of this thread ...and to devote the next hour to catching up with as many posts as have caught my eye.

Let me first see if I get the term "contemporary materialistic ontology" as meaning how "we" (i.e., "scientists") currently think matter is behaving? Please correct me if I got it wrong.

Anyway, based on this definition, I will squeeze in my paradigm to fit.

Matter -- actually "bound-up energy" -- is "behaving" as it ALWAYS has: by "sensing" one another (either as particles or systems) and "responding to" one another. We can discuss examples, but my contention is that THIS represents "consciousness" at its bare minimum.

Might this not be the "building blocks" you've been hoping for?



As a side point here: if your criterion for judging whether an entity is conscious or not is the degree to which it can interact with and be conditioned by its environment, why should physical constitution matter? I understand that you want to start off on surer footing by starting with safer assumptions, but we could (relatively) easily build a silicon based robot that could do the same things. All I am suggesting is that, if biological constitution is to be the most fundamental factor for consciousness in your hypothesis, then that should be your primary assumption. Deriving (as opposed to fundamentally asserting) the necessity of biology for consciousness from an entity's ability to interact with the environment seems to be faulty, since you could just as well derive that a silicon robot should be conscious by the same criterion.
The DETECTION and RESPONSE TO stimuli ("forces") would be all that is necessary. "Conditioning" might BE a "response" but it is beyond my basic parameter for "consciousness".

Thus, I do not propose that "consciousness" is confined to biological systems and, while some may want to discuss "robots" there are plenty of non-biological systems (atoms through galaxies) that are "communicating with one another" even as we speak.

Like you, I have been wary of functionalism as a good starting ground for any hypothesis for consciousness. However, recently I read an argument with a functionalist flavor put forth by Chalmers that gives me pause. If you are interested, it might be appropriate to start another thread on the topic.
Wish you'd put it here in ONE SENTENCE. I'm hesitant to "go shopping" for additional threads to tempt me into discourse.



OK, back to the building block discussion. Suppose for the sake of argument that we eventually isolate the motion of electrons as the most fundamental necessary and sufficient physical correlate for consciousness: whenever we see electrons moving about in such and such patterns, we are confident that there will be such and such conscious experience on the part of the system.
First, why would it be necessary to "isolate the motion of electrons"? We "know" they're always moving, don't we (a serious question)? Perhaps it IS their "motion" that allows them to "pick up on" the presence of OTHER particles (like the positive charge of a proton) but then this would "simply" be part of the MECHANISM for "basic consciousness". No. All we have to do is consider whether the electron's "detection" and "response to" the proton can be considered a basic "unit of awareness".

Now, what in our materialist ontology could account for this? Electron charge? Electron mass? Electron spatiotemporal configuration? What combination of these could you throw together to show a priori that consciousness must be the result? I argue that no combination of these could be thrown together to show that consciousness must result. Rather, at this point, we would have to rework our ontology to grant an entirely new property to electrons, such that we would be able to see a priori that such and such configuration of electrons must result in consciousness. This new property would have to be either a fundamentally irreducible aspect of consciousness on the part of electrons, or it would have to be some kind of microphenomenological property of electrons such that electrons by themselves are not conscious, but when combined in patterns just so, their microphenomenological properties combine to result in consciousness.
There are MANY "forces" that "account for" this: electrical charge; magnetic force fields; ionic fields; the EM spectrum. Anything that mediates INFORMATION "accounts for" an entity's or a system's "ability" to "sense" and "respond to" something else. Nothing "new" is needed.

This argument applies to any H2O formula we may wish to hypothesize for consciousness. You say we have not found the formula yet; I say that for any formula built solely from materialist building blocks, we will still not be able to show a priori that this formula must necessarily result in consciousness. We just need more building blocks than materialism will give us.
It is probably true that we will not be able to EVER "show" that electrons, atoms and galaxies are "conscious" ...but we can THINK about it well enough.



I don't think anyone will dispute that, in our world, it is impossible for a person with the right configuration not to have a mind. The question is whether or not it is a metaphysical impossibility; if the world were different somehow, would consciousness still be the necessary result of the right brain configuration? For instance, in our world it is impossible for an electron not to be attracted to a proton. In a metaphysical world with different laws of physics, this would not necessarily be the case.
First, there isn't any "wrong" "configuration" IF Everything has a "mind" ...of sorts. As to the mind's dependency on having a brain? I contend not. And in a world -- both physical AND metaphysical -- with different "laws of physics", all that would be needed, once again, is that the parts of the "new Universe" be detecting and responding to one another for "consciousness" to be present. And, of course, having all parts NOT sensing and responding would then give us a STATIC -- hence DEAD -- Universe ...and nobody wants THAT.

It is a metaphysical impossibility for a world with identical H2O molecules and identical laws of physics to ours that these H2O molecules not combine to form (given suitable circumstances) a macrophysical substance with properties identical to water in our world. A very straightforward argument involving physical structures and functions can be given to support this claim. It is not at all clear, however, that a metaphysical world that is physically identical to ours should necessarily support consciousness. If it is claimed that this metaphysical world must support consciousness, no substantive argument can be given to support this claim, even in principle, for all the familiar reasons. This is another way of getting at the suggestion that there must be something more than just the physical involved in the phenomenon of consciousness.
Actually, I think I have given a good "substantive argument" to "support the claim" that ALL DYNAMIC WORLDS with systems that are "working together" will have "consciousness" at its most fundamental ...as well as its most complex.


I agree that there is no logical reason to say that it is not in virtue of some property of the brain and its constituents that consciousness exists. However, there is much logical reason to say that such a property is not included in our current materialist ontology.
The "brain" is an evolutionary "afterthought".
 
  • #191
Originally posted by hypnagogue
The idea is not that your artificial bat must be a zombie. The idea is that we can't be certain what effect the different physical constitution has on its purported consciousness. Upon what basis do you claim that the artificial bat must have the exact same experience as the natural bat?

The argument you have put forth so far leaves much room for doubt. In fact, it does little more than beg the question; your argument rests firmly on the assumption that physical transitions and such are all that is responsible for consciousness, whereas this is precisely the issue that is open to question. To advance, you must propose an argument detailing how it must be that the functional constitution of the bat is sufficient for explaining its first person experiences.
Would not ANY entity subject to CAUSE & EFFECT be having an "experience"? It's simply a matter of complexity of both the detection apparatus and the responsive possibilities of whatever.

And if I make it off page 13, I'll consider this hour well spent.
 
  • #192
Originally posted by selfAdjoint
I'm trying to firm up the discussion by pinning down the issues in a case where we don't have all the baggage of human consciousness to contend with. IMHO that's exactly what Nagel did in switching from talking about qualia in people to presumptive qualia in bats. The point is exactly that nobody knows what goes on in a bat's mind so the discussion can remain pure of special pleading.

If you don't like the bat, here's another one. Could an AI be built to sense colors the way people do, with the three receptor bands and intensity differencing and maybe a neural network for identification and memory, and if it could then be run through experiences with colors, some good some bad according to a carefully designed program so it had various associations with various colors, and if it then "discussed" its experience of colors with researchers and showed complex discussion behavior, not programmed in advance, could you then say the device was experiencing color qualia?
I say yes. And of course you know that there are creatures that can sense wavelengths that we cannot, in addition to magnetic and ionic fields, chemicals and other "mediators of information". Nor can we "sense" -- from our vantage point -- the "weak and strong forces" that elementary particles "sense" and "respond to".
 
  • #193
M. Gaspar, your position assumes that information transfer / manipulation is inextricably bound up with consciousness; you say that whenever information is exchanged (detected, responded to, etc.), there will be some sort of attendant experiential component.

This may well be the case, but if it is, it is not something that can be accounted for in the traditional materialistic framework. There is no postulate in materialism that states that information transfer is inherently experiential. By the materialist account, information transfer between two atoms (say) should just be characterized entirely by physically detectable energy transfer between the two; starting from materialist assumptions, we should have no reason to suspect that such an information transfer has anything at all to do with experiences.

So your hypothesis of the relation between information and consciousness would qualify as the beginnings of one possible type of "fundamental, irreducible addition" to the materialist framework that I have been talking about. It would function much like the traditional laws of physics; we would not be able to deduce why this relation holds, we would just accept it as a fundamental and contingent way in which our world works. But as it stands, no such fundamental principle exists in our materialist account of reality; there is no "law of conscious information transfer" in the materialist account, or anything even analogous to it. That is the point I have been trying to make.
 
Last edited:
  • #194
Originally posted by hypnagogue
So you are attempting to point out the uncertainty of our knowledge of consciousness? I don't think any except the most extreme on either side really dispute that notion. At this stage of our understanding (and possibly forever), we just don't know enough to answer your question with much more than educated speculation. But this is a different matter from the subject of whether or not materialism can explain consciousness in principle.

Done! ...both page 13 and a proposal of how "materialism can explain consciousness in principle."

No time to read -- let alone respond to -- the next 4 pages ...which is why I am not CONSCIOUS of their content ...yet.
 
  • #195
Originally posted by hypnagogue
I think phenomenal consciousness could just as well be 'a state in which something appears somehow.'

Fine by me, (although I'm still a bit wary of the term 'appears')
 
  • #196
Originally posted by Dark Wing
...the materialistic stance is quite shallow, and no doubt it needs to be fleshed out, but the basis for it may still be there. Take a look at Place and Smart's work on identity theory (I know they are Australian, but hell, we do have some minds all the way down here): it simply states that a Mind state IS a Brain state. What they have done is set up a field to explore: what is a brain state? If you can figure out what that is, then you have the next step to the reduction: take it down to biology, and then ultimately physics, and you have your building blocks for consciousness: but I will address that better where you have mentioned it below.
In the final analysis, the brain could "merely" be an organic device designed through evolution to detect a wide variety of signals, and to interpret them, store them and respond to them. The fact that we have developed instrumentation to further gather data (signals) gives us a "consciousness" that is "aware" of objects that are "invisible" to us by wavelength and by distance. Still, "consciousness" could just be the sum product of what any entity (from particle to any dynamic material system) can detect/interpret/store/respond-to.

So, yes, OUR "mind state" could be the same as our "brain state" ...but it is not necessarily the only TYPE of "mind" there is. We may just be on the "high end" of a "consciousness continuum" that is based on the complexity of detection and response possibilities of any given entity. Other entities less endowed ...like, say, an electron, might still have a "mind" by virtue of what IT could "detect" and "respond to" ...which would be, say, the positive charge of a proton.

Looks like I'm going to have to tackle the lengthy Dark Wing post on page 14 a bit at a time. No time to respond to it all now ...but don't want to skip it on my way to page 17.
 
  • #197
Originally posted by Dark Wing
I think that we should at least start at a point where we know that consciousness is the case. (I am aware that people will argue that we are not conscious, and that we are all just robots, but I am going to presume consciousness on the basis of Searle's "seeming" argument.) If it is so that biology is conscious, then we can figure out what the constitution of biology is, and then see what the essential ingredients of the physics/biology boundary are. We can then say that they are the essential building blocks of consciousness, as they make biology, and biology is conscious, as it can react and interact with its environment. We can never argue necessity of biology for consciousness. But we can say "check it out, we have a working example, let's see how that happens"
You may SAY that "we can never argue (the) necessity of biology for consciousness" ...but there are many -- maybe most -- who do just that. I say "consciousness" is NOT dependent on biology, but that biological organisms HAVE developed the capacity to "sense" more "stimuli" than, say, a rock. But a rock "senses" gravity ...and some rocks "sense" magnetism. And I say that each "detection and response" is a "level" of consciousness.

So, what makes a biological cell that is reacting and interacting with its environment different from a robot that is showing the same behavioral patterns? Nothing, according to that definition. So the theory has to be expanded to show us how to tell imitation from the real thing (it is called "Artificial Intelligence" after all :o)
If the reception, processing and response to information is the sole parameter and if a biological cell and a, what, nano-robot? could perceive EXACTLY the quantity and quality of incoming information, then there would be no distinction between their respective levels of consciousness. However, a living cell, I believe, probably DOES have the capacity to detect MORE than the nano-robot ...such as chemical information ...unless, of course, the robot was made to detect the exact same things.

Besides that my explanation of consciousness is based on biological or at least physical causation, and that programmed robots ignore the causation part of the initial condition for consciousness and just write the consciousness on top to be run on a bunch of silicon mapping, there seems to be a testable and verifiable way of seeing if a robot is conscious in the same kind of sense that a human is conscious: that it attributes meaning to its environment.
Why do you say that a programmed robot "ignores" physical causation of consciousness? If "detection" and "response" are within the physical realm, then consciousness has been caused by physicality.


It is reacting in a meaningful and productive way TO ITSELF as well as to the environment.
Is one elementary particle "reacting in a meaningful and productive way" when it SENSES and RESPONDS TO other particles via info mediated by the weak and strong forces? I think it does.

not so much "these atoms moving like such means we will have this conscious experience"; more, I am saying that a certain formation of atoms will produce consciousness in the system
No, not "certain formation of atoms" but ALL formation of atoms...

...the nature of the conscious experience will be dictated by the biology: what kind of biology does this thing have in order to experience the environment with?
The only relevance of biology is the fact that biological organisms are more apt to have more sensors plus a larger repertoire of possible responses to stimuli. In and of itself, consciousness is NOT dependent on biology.

all of that is higher-level stuff that we may or may not predict on an atomic level. All I am interested in is what combinations make consciousness possible:
Answer: the detection and response to information.

...experiencing consciousness is another question all together.
Anything that is detecting and responding to something is having an "experience" ...which includes EVERYTHING as there is nothing (that I can think of) that isn't detecting and responding to SOMETHING ...even at the QM level.
It's the combination that matters. Certain combinations make one thing; other combinations make consciousness.
And if you know the combination that makes something biology, then you will know a priori that a certain number of atoms in this combination will make consciousness. It's like baking a microscopic cake.
And I propose that any operational system -- from atoms, to bugs, to stars and galaxies up to an including the Universe Itself -- is "experiencing consciousness". It's just a matter of DEGREE based on WHAT can be perceived and the range of FREEDOM the system has to respond.

The metaphysical question of "even if it might not be so in our universe, is it possible for consciousness to NOT result from this mix in another universe?" is to me a wonderful question to speculate on, but essentially one with no answer. How can we ever know whether consciousness of this sort is contingent here or a necessary factor of existence? That sort of thing keeps one awake at night.
I sleep very well "knowing" that "consciousness" is FUNDAMENTAL to every constituent part of the Universe ...and has been so since the Universe was compressed into the "Primal Singularity" that preceded the last Big Bang. That's when the Universe "lost Its marbles" ...but It's more coherent now. :wink:

...even if in another universe something other than the pure physical is needed to support consciousness, it means nothing to us here. I will argue that we have all the ingredients for consciousness right here in front of us, we are just not looking hard enough for them.
One does not have to "look hard". One just has to look.

The above of course is merely speculation on my part, put forth with an air of "certainty" because I like to pretend I'm right.
 
Last edited:
  • #198
Originally posted by Another God
So, if it is up to the combination, perhaps there is a critical distinction that needs to be made which I have never seen anyone make: Perhaps there is no such THING as consciousness, perhaps there is a myriad of phenomena that each may be 'conscious experiences'.

So a reductive explanation of 'Consciousness' will fail, because there is no such thing as 'Consciousness'; there are instead attributes of consciousness. If you follow me...

I guess this is similar to saying there is no such thing as 'The Biological World', there are only creatures which may be said to be biological.

Does this make sense/Help?

Yes and no: it "makes sense" but doesn't "help".

I agree that the "myriad of phenomena" we have come to identify as "consciousness" may be something else. However, it is just as likely that there is a myriad of phenomena (like the detection of the physical forces ...or possibly "thought" itself) going on between systems that we have NOT come to identify as "consciousness" and yet ARE examples of it.
 
  • #199
Originally posted by hypnagogue
A lot of philosophical considerations point to consciousness being a fundamental aspect of reality (this thread for example). But supposing that consciousness is on some level fundamental is actually the antithesis of a reductive explanation.
Now THIS I do not understand. How is what I've been saying the "antithesis of a reductive explanation"? Do I not understand the word "reductive"? It means to "reduce down" to its barest essentials. Please advise.

As for the biological view, I don't think it's elitist as much as it is pragmatic. We know for a fact that humans are conscious and we have good reason to believe that other animals are conscious as well. The further the systems we consider stray from being human, the less confidence we can have that these systems are conscious. So it is more a matter of starting in an area where we can be confident, learning what we can from that starting point, and then extrapolating to more general systems as our knowledge and theoretical frameworks progress. It may be true that an amoeba (or a rock) is conscious on some level, but for now that is just speculation.
You're reminding me of the story of the guy who lost his keys at night, and the only place it made sense for him to LOOK for them was under the street light ...as everywhere else was DARK and so he wouldn't see them even if they were there.

Of course it's "speculation" ...and may ALWAYS be. Still, I don't mind "groping in the dark" for the "keys".

I am here to present as good a case as possible ...but I don't expect to "prove" anything.
 
  • #200
Originally posted by hypnagogue
I don't know how I feel about that. You can perhaps say that there is no intrinsic property that differentiates a biological system from a non-biological one, but from the 1st person view at least, there seems to be an obvious intrinsic difference between a conscious system and a non-conscious system.

Again: all systems are conscious in MY paradigm.

Besides, even if we accept that what we need to describe are attributes of consciousness, all the familiar arguments still apply as to why we could not explain these attributes reductively in the materialist framework.
The "problem" with "describing attributes" is that we (human beings) tend to equate everything with US! If it mothers its young and steals a banana, it's conscious. If it scampers away from us (like a cockroach when it SEES us!), it's "merely" "reacting" in a mechanical sort of way.

I believe that we have been too narrow in our definition of what constitutes consciousness. If one uses my parameter, one could actually QUANTIFY "consciousness" along a continuum based on what a system (or particle) can detect at a given point in time and its possible range of responses.
 
Last edited: