Faulty expectations of a theory of consciousness.

The discussion centers on the challenges of developing a scientific theory of consciousness, emphasizing that while scientific explanations can outline conditions and properties of phenomena, they do not necessarily account for the essence of consciousness itself. Participants argue that consciousness may need to be recognized as a fundamental, non-reducible entity, similar to gravity, rather than merely an association with physical processes. The conversation critiques the limitations of purely physical accounts in explaining why consciousness arises from certain brain activities, suggesting that our current understanding of reality may need expansion to include consciousness as a core component. Ultimately, the dialogue reflects a belief that a comprehensive explanation of consciousness remains elusive and may require a paradigm shift in scientific inquiry. The complexities of consciousness challenge existing scientific frameworks and highlight the need for deeper exploration into its nature.
  • #31
Originally posted by Fliption
Mentat, are you suggesting that these two examples cannot be understood reductively, in principle? To me, they appear to be examples of things that aren't intuitive given our current knowledge of physics, but this is probably a temporary situation. I suspect you agree with me on this and you're arguing the same thing applies to consciousness.

Yeah, I guess I'm thinking along the same lines as you are. To be sure: Are you saying that it is possible in principle, but not in practice, to reductively explain the examples I gave, and that consciousness is no different?

The view that Hypnagogue has presented in various threads is that consciousness cannot be reductively explained in principle. It's not just a question of current knowledge and technology. So from here you would need to explain why you think consciousness is no different from the examples you mentioned. You only said "it seems as rational" to think so. Why not address the specific points of the argument? It is extremely hard to do, so take your time. I've been trying to figure out a way myself but haven't been able to, and currently do not think it is possible.

I see what you mean.

I read your quote, and from it deduced that, in order to prove that consciousness is explainable in principle (even if not in practice), just like those other physical phenomena, I would have to actually take a step-by-step reductive approach to consciousness, myself.

In this attempt, I'd have to draw on work that's already been done by scientists and philosophers. Let me forewarn you all that this explanation may get a little confusing, but I will try to clarify any confusions that I can. Also, it may not seem readily evident that this is a theory of consciousness/sentience per se. However, I will put the idea to the test at the end, and see if it's workable. Here goes...

For consciousness to be reductively explainable, in the way that hurricanes are reductively explainable, something must first be clarified: The aim is to take a reductionist approach to consciousness, using the scientific method only.

This is obviously necessary, since we are trying to deduce the possibility of creating a scientific theory of consciousness. However, there is an important philosophical repercussion to this: One can no longer ask "why" these processes produce consciousness. Science can only show us 1) Which mechanisms are conscious; 2) Which mechanisms are certainly not; and 3) How to create a conscious mechanism.

Now, the only kind of reductive explanation of consciousness that I've been able to agree with has been a "selectionist" kind. Selectionist theories of consciousness are theories that, basically, give the basic unit of conscious experience, and then apply Darwinian mechanics to these units.

Basically, any Darwinian process must have 6 things:

1) There must be a pattern.
2) It (the pattern) must reproduce itself.
3) Variant patterns must sometimes be produced (by chance).
4) The pattern and its variant must compete with one another for occupation of a limited work space.
5) The competition is biased by a multifaceted environment.
6) New variants always preferentially occur around the more successful of the current patterns.

These are the six parts of any Darwinian process, as stated by William Calvin, in The Cerebral Code: Thinking a thought in the mosaics of the mind. (BTW, in his theory, the most basic units of consciousness are hexagonal arrays of synchronously-firing neurons, each of which comes from a synchronously-firing triangular array.)
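For concreteness, the six criteria above can be turned into a toy selection loop. This is a purely illustrative sketch, not Calvin's actual neural model: the bit-string patterns, the fitness function, and every parameter are invented for the example.

```python
import random

def darwinian_step(population, workspace, fitness, rng, mutation_rate=0.2):
    """One generation of the six-part selection process (toy model).

    1) each element of `population` is a pattern (a tuple of symbols);
    2) every selected parent copies itself;
    3) copies occasionally mutate at a random position (chance variation);
    4) parents and copies then compete for the limited `workspace`;
    5) the competition is biased by the `fitness` environment;
    6) the fitter half of the population reproduces twice, so new variants
       preferentially occur around the currently successful patterns.
    """
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked + ranked[: len(ranked) // 2]    # criterion 6
    offspring = []
    for pattern in parents:
        copy = list(pattern)                         # criterion 2
        if rng.random() < mutation_rate:             # criterion 3
            copy[rng.randrange(len(copy))] = rng.choice("01")
        offspring.append(tuple(copy))
    survivors = sorted(population + offspring, key=fitness, reverse=True)
    return survivors[:workspace]                     # criteria 4 and 5

# Usage: 8-symbol patterns in a workspace of 4; the "environment" rewards "1"s.
rng = random.Random(42)
pop = [tuple("00000000") for _ in range(4)]
for _ in range(200):
    pop = darwinian_step(pop, workspace=4, fitness=lambda p: p.count("1"), rng=rng)
best = max(p.count("1") for p in pop)  # the population drifts toward fitter patterns
```

Because survivors always include the previous generation's best pattern, fitness never decreases; the bias of criteria 5 and 6 is what steers the drift, which is the whole point of the six-part scheme.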

But, what is the relevance of all of this? Well, it is my position that, since this process has indeed been observed occurring in the neocortex of more advanced animals (including humans), this must be the process that brings forth consciousness.

So, let's say that we identify the most discrete unit of consciousness as a hexagonal array of synchronously-firing neurons (btw, I really suggest that you all read The Cerebral Code, as Calvin explains his theory much better than I can). We now have a pattern, and there is nothing philosophically wrong with postulating that this pattern occurs, as it is in no way conscious. Even saying that a spatiotemporal firing pattern is initiated whenever a pyramidal neuron (those specialized neurons in the neocortex) is stimulated has little (if any) philosophical repercussion.

Now that we have the pattern, we next need a process of reproduction. What must be remembered for this part of it is that each member of the aforementioned hexagonal arrays is capable of firing with neurons outside of its array, and does indeed do this. So, because of this tendency for synchronicity in self-restimulation (btw, I don't think I was clear enough on this previously: A set of "synchronously-firing" neurons is an array wherein each member is self-restimulating (IOW, each member is repeatedly re-stimulating itself, and is doing so in a pattern that finds synchronicity with the self-stimulation of the other neurons in its array)), one can easily see that other, nearby, neurons ("nearby", referring to the nearness of one neuron's axon terminals to the dendrites of another neuron) may be stimulated, and thus begin to re-stimulate themselves in sync with that "nearby" member of a hexagonal array.

If you're still with me, let's move on to the next step in the Darwinian process: The production of variant patterns (mutations).

I don't think too much time should be spent on this issue as it is a somewhat obvious circumstance that, where there is reproduction of a complex pattern, there will be "error". Thus, the production of variant patterns is just a natural consequence.

Next, the pattern and its variant patterns must compete for occupation of a limited work space. Now, let's say that we have a (hexagonal) array of neurons, that were initially stimulated by (for example) seeing an apple (more specifically: by the entrance of photons, bouncing off of an apple, into the retina, which then stimulated many "nearby" neurons, which, in turn, stimulated other "nearby" neurons, forming many spatiotemporal arrays). We'll call this array an "apple array". But, let's say that, one day, we saw an orange. Now, perhaps we associated these two, and so the orange can be seen as a variant on the original "apple" arrays (or, perhaps, as simply a new introduction). Well, these arrays constantly re-stimulate themselves, and must thus compete for supremacy whenever there is a new stimulation that bears resemblance to an "apple/orange".

IOW, when I see (let's say) an orange sitting on a table, there is a stimulation, and those spatiotemporal patterns which bear closer resemblance to the thing I'm seeing will "adopt" this new memory as one of their own...it will join their array. This takes up some "work space", which the "apple arrays" will have lost.

Points 5 and 6 were basically covered in that example, as well, since - if in this competition (for identification and memorization of the new object) there were neurons that had no previous array to belong to - you could have a "no-man's-land" where there was plenty of room for a completely new stimulus. However, in the case of the apples and oranges, there were arrays that were primed for a new spatiotemporal stimulus, and they quickly competed for this new space.

It's just like in biological evolution, where a population can either be isolated completely, or have varying degrees of interaction with already existent beings - the latter possibility is much more limiting to the new being's ability to vary.

Now, all this may not seem like much of a theory of consciousness, but think of this: If all these processes occur, and there are constant new stimulations (along with re-excitation of purely spatial arrays), then you have a working theory of the processing mechanism of the brain.

Now to test it out. Let's say you want to understand how someone makes "choices" (as this is clearly an integral part of being a sentient creature - being able to make choices). If the choice is something simple, like the old vanilla vs. chocolate ice cream illustration, then it's a bit too easy to just say: Arrays that correspond to "chocolate" happen to also correspond with previous enjoyable experience, whereas this is lacking in the "vanilla" arrays, and so chocolate is the biased choice. Actually, it seems that most (if not all) "choices" can be explained rather simply by saying "this array was the biased choice, and thus prevailed over the competition".
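The "biased choice" described here reduces to a tiny competition model. The following sketch is purely illustrative; the candidate names and all the numeric scores are made up for the example, not measured quantities.

```python
# Hypothetical activations for the competing arrays: each candidate's score
# is its resemblance to the current stimulus plus a bias contributed by
# previously associated enjoyable experience (all numbers invented).
candidates = {
    "chocolate": {"resemblance": 0.9, "prior_enjoyment": 0.8},
    "vanilla":   {"resemblance": 0.9, "prior_enjoyment": 0.2},
}

def activation(scores):
    # The environment "biases" the competition: resemblance alone ties,
    # so the prior-enjoyment term decides which array prevails.
    return scores["resemblance"] + scores["prior_enjoyment"]

# The "choice" is simply whichever array wins the biased competition.
choice = max(candidates, key=lambda name: activation(candidates[name]))
```

On this picture there is no extra "chooser" over and above the competition; the winning array just is the choice.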

No, I think the real test would be to show how it is that these functions translate to subjective experience. The problem is, from the scientific limitations we accepted at first, we can't really ask "why is this mechanism conscious, while others are not". We can, however, see if this approach meets up with the scientific criteria...Let's see, does it: 1) Equip us to decide whether a system is conscious or not? Sure it does - if the system does have this Darwinian process occurring, then it is conscious; 2) Does it help us decide when something is not conscious? Of course, by the inverse logic of #1; 3) Does it allow us to (in principle) produce a conscious mechanism of our own? Absolutely. If we can produce a machine (no matter what it's made of) whose discrete "thought units" are made up of synchronously-self-restimulating quanta, and these patterns can be reproduced, with allowance for error, and competition between the "parent" and "variant" copies in a biased environment...well, you have a Darwin machine, and you have consciousness.

Again, I recommend the book The Cerebral Code: thinking a thought in the mosaics of the mind, by William Calvin.

g2g now. I'll check for responses tomorrow, if I can.
 
  • #32
Mentat

I know you're only constructing an 'in principle' example. But in itself it seems pure conjecture, and it does not seem to do the job. Conjectures first.

Originally posted by Mentat

For consciousness to be reductively explainable, in the way that hurricanes are reductively explainable, something must first be clarified: The aim is to take a reductionist approach to consciousness, using the scientific method only.
Ok, but what is it that you are setting out to explain? Your theory is bound to be very ambiguous without a clear definition of what it is a theory of.

This is obviously necessary, since we are trying to deduce the possibility of creating a scientific theory of consciousness. However, there is an important philosophical repercussion to this: One can no longer ask "why" these processes produce consciousness. Science can only show us 1) Which mechanisms are conscious; 2) Which mechanisms are certainly not; and 3) How to create a conscious mechanism.
Ok, but I would have included 4) How consciousness arises. We know this for hurricanes etc.

Now, the only kind of reductive explanation of consciousness that I've been able to agree with has been a "selectionist" kind. Selectionist theories of consciousness are theories that, basically, give the basic unit of conscious experience, and then apply Darwinian mechanics to these units.
This may work for competition between thoughts (competition for conscious attention?). But this cannot explain how sentience or consciousness evolves.

...as stated by William Calvin, in The Cerebral Code: Thinking a thought in the mosaics of the mind. (BTW, in his theory, the most basic units of consciousness are hexagonal arrays of synchronously-firing neurons, each of which comes from a synchronously-firing triangular array.
Why hexagonal?

What is the relevance of all of this? Well, it is my position that, since this process has indeed been observed occurring in the neocortex of more advanced animals (including humans), this must be the process that brings forth consciousness.
Yes, but many other processes have also been observed in the neocortex. What makes this one so special? Also, as far as I know, there is no evidence that the neocortex is responsible for consciousness.

Now that we have the pattern, we next need a process of reproduction...snip
I'm not sure how you get from competition between neurons to competition between thoughts. How does Calvin know that neuron patterns are thoughts? And how does this mechanism, if it really is one, explain how we are conscious of our thoughts? Our thoughts come and go but our consciousness does not.

Now, all this may not seem like much of a theory of consciousness, but think of this: If all these processes occur, and there are constant new stimulations (along with re-exitation of purely spatial arrays), then you have a working theory of the processing mechanism of the brain.
It might be one of the mechanisms within the brain. However, if it is a theory of neuronal competition then it has nothing to say about consciousness. If it is a theory of competition between thoughts then it is similar (or identical) to meme-based theories. Unfortunately, while both may hold some truth about mechanical computation in the brain, neither has anything to say about how neurons, or hexagonal arrays of neurons, become conscious. Without some sort of additional theory an array of neurons is an array of neurons, no more.

Now to test it out. Let's say you want to understand how someone makes "choices" (as this is clearly an integral part of being a sentient creature - being able to make choices).
Hmm. What's your view on free will? According to science we don't make choices, or at least no more than a thermostat does. If you mean 'conscious choice' here then this theory is not scientific. I'll assume you mean 'choice' in the thermostat sense.

No, I think the real test would be to show how it is that these functions translate to subjective experience.
Agreed

The problem is, from the scientific limitations we accepted at first, we can't really ask "why is this mechanism conscious, while others are not".
True. But we can (and must) ask whether they are, and how this occurs.

We can, however, see if this approach meets up with the scientific criteria...Let's see, does it: 1) Equip us to decide whether a system is conscious or not? Sure it does - if the system does have this Darwinian process occurring, then it is conscious;
I don't think you thought that one through. It is clearly not true. We do not have a test for consciousness, and if we did it would be unlikely to be simply a matter of measuring whether some particular brain process occurs.

2) Does it help us decide when something is not conscious? Of course, by the inverse logic of #1;
Ditto in reverse.

3) Does it allow us to (in principle) produce a conscious mechanism of our own? Absolutely. If we can produce a machine (no matter what it's made of) whose discrete "thought units" are made up of synchronously-self-restimulating quanta, and these patterns can be reproduced, with allowance for error, and competition between the "parent" and "variant" copies in a biased environment...well, you have a Darwin machine, and you have consciousness.
That's true in a way. But how do you intend to create 'thought units'? Or do you just mean 'hexagonal arrays of synchronously-self-restimulating quanta which we will assume to be thought units of which someone is conscious'?

I'd say this theory doesn't work for practical and scientific reasons. But more importantly it does not work as an in principle explanation of consciousness, since it does not explain the physical mechanism whereby an array of neurons becomes an experience. That is just taken for granted.

While this sort of theory may have a lot to say about mental computation it can say nothing about how the brain gives rise to our consciousness of our thoughts. After all if the array of neurons is the thought, then where is the consciousness of that thought?

Sorry to be so negative, but many great minds have tried to produce a plausible in principle explanation and nobody has managed it yet.
 
  • #33
Originally posted by Mentat
Yeah, I guess I'm thinking along the same lines as you are. To be sure: Are you saying that it is possible in principle, but not in practice, to reductively explain the examples I gave, and that consciousness is no different?


I was trying to state what I thought your view was and yes, that is what I was stating it to be. So I think we're in sync.

For consciousness to be reductively explainable, in the way that hurricanes are reductively explainable, something must first be clarified: The aim is to take a reductionist approach to consciousness, using the scientific method only.

After reading and thinking about this idea I think I'm agreeing with Canute. I'm still thinking a reductive explanation is not possible. This idea seems to do the same thing that everyone tries to do. It draws a correlation and then makes conclusions based on that correlation. If we observe that "A" always exists in conjunction with "B" then we make the conclusion that "A" and "B" must be correlated. "A" must be causing "B". Therefore, scientifically we can answer the question of "when should "B" occur?". Whenever "A" occurs. And we can create "B" by simply creating "A".

All of this is founded on the correlation. But there is no explanation of the correlation itself. Why does "B" necessarily follow from "A"? We have this explanation for hurricanes but not consciousness. From my understanding we only accept these correlations as assumptions with no explanation when they deal with fundamental elements of nature.
 
  • #34
Originally posted by Canute
Mentat

I know you're only constructing an 'in principle' example...

Thank you for recognizing this. Selectionism needn't be the way to go at all; it's merely intended to show that someone has made a scientific attempt...

Ok, but what is it that you are setting out to explain? Your theory is bound to be very ambiguous without a clear definition of what it is a theory of.

It is a theory of what goes on in our neocortexes which gives rise to experience, memory, and creativity.

Ok, but I would have included 4) How consciousness arises. We know this for hurricanes etc.

Is "arises" really the correct word? I don't want to be too semantically picky, but a hurricane (for example) doesn't "arise" from counter-acting winds that are swirling at intense speeds, it is counter-acting winds that are swirling at intense speeds.

This may work for competition between thoughts (competition for conscious attention?). But this cannot explain how sentience or consciousness evolves.

That's the point though: Consciousness, if it is to be explained as a scientific phenomenon - in the manner of a hurricane - cannot "evolve" or "arise" from the functions of the neocortex, it must literally be those functions, as Dennett (I'm sure you're elated that I'm bringing him up again) already predicted in Consciousness Explained.

Why hexagonal?

It's been observed to be hexagonal. That's really the beauty of it, in my opinion: There is no why, and thus it is just like every other event in the Universe (as understood in the eyes of the scientist).

Yes but many other processes have been also been observed in the neocortex. What makes this one so special? Also, as far as I know, there is no evidence that the neocortex is responsible for consciousness.

No, this is simply a very special process, and there have been rudimentary (at best) experiments that link certain memories to certain synchronously-firing arrays. As I said at the outset, this is merely a way of showing that it is possible, in principle, to explain consciousness; however impossible it may currently be, in practice.

I'm not sure how you get from competition between neurons to competetion between thoughts. How does Calvin know that neuron patterns are thoughts? And how does this mechanism, if it really is one, explain how we are conscious of our thoughts? Our thoughts come and go but our consciousness does not.

And herein lies the dividing line between the physical sciences and the rest of philosophy. Science cannot postulate that spacetime curvature is gravity, and then speculate on what would happen if spacetime curved, and yet there was no gravity.

As to how Calvin knows that these patterns are thoughts: he doesn't. It's a simple postulate that may turn out to be true. Anyway, it's a necessary one, since the most discrete unit of memory must be established before an understanding of the re-stimulation of memories (integral to consciousness) can be created.

It might be one of the mechanisms within the brain. However if it is a theory of neuronal competition then it has nothing to say about consciousness. If it a theory of competition between thoughts then it is similar (or identical) to meme-based theories. Unfortunately while both may hold some truth about mechanical computation in the brain neither have anything to say about how neurons, or hexagonal arrays of neurons become conscious. Without some sort of additional theory an array of neurons is an array of neurons, no more.

And without some sort of additional theory a dip in spacetime is just a dip in spacetime, no more. You see, you are again demanding more of a scientific theory of consciousness than you would of any other such scientific theory. It's like in quantum mechanics: Any form of energetic reaction (aka "observation") can collapse the wave-function of a particle, and thus produce seemingly absolute properties out of the typical chaos of that realm. Sure, it's not conceivable, and not really understood yet; but no one says it's impossible in principle to understand it because we can't make the connection of why energetic reactions should collapse the wave-function ITFP...right? It's just the way it is, and the scientific quest is to understand exactly how it is, not why it is that way and not some other way.

Hmm. What's your view on freewill? According to science we don't make choices, or at least no more than a thermostat does. If you mean 'conscious choice' here then this theory is not scientific. I'll assume you mean 'choice' in the thermostat sense.

Ok...hold on, isn't psychology a science? Psychologists often refer to our free will. Of course, we may have free will, and all the "thermostat" scientists still may be correct at the same time...we'd simply have to say that the ability to make true conscious choices does exist, but is - at its most fundamental level, perhaps - as rudimentary as those choices that a thermostat makes (and then it is just the level of complexity - each rudimentary choice playing a role in a more complex one, and so on - that determines the level of "consciousness" in the choice).

True. But we can (and must) ask whether they are, and how this occurs.

And if, by experiment, it can be shown that all things performing these processes behave consciously, and can show exactly how it occurs (hopefully you mean how the physical process occurs), we will then have a working theory of consciousness, right?

I don't think you thought that one through. It is clearly not true. We do not have a test for consciousness, and if we did it would be unlikely to be simply a matter of measuring whether some particular brain process occurs.

Are you serious? I don't mean to be offensive in any way, I just think we must be discussing two different things here. I was talking about a physical, scientific, reductionist theory of consciousness. This would indeed give rise to a test of consciousness that would be a matter of measuring whether some particular process was occurring or not...that process being consciousness. That seems logical to me. What exactly is it that you expected from a scientific theory of consciousness?

That's true in a way. But how do intend to create 'thought units'? Or do you just mean 'hexagonal arrays of synchronously-self-restimulating quanta which we will assume to be thought units of which someone is conscious'?

I'd say this theory doesn't work for practical and scientific reasons. But more importantly it does not work as an in principle explanation of consciousness, since it does not explain the physical mechanism whereby an array of neurons becomes an experience. That is just taken for granted.

"Becomes an experience"?

While this sort of theory may have a lot to say about mental computation it can say nothing about how the brain gives rise to our consciousness of our thoughts. After all if the array of neurons is the thought, then where is the consciousness of that thought?

And if consciousness of a thought is simply thinking about that thought, then where is the problem?

Sorry to be so negative, but many great minds have tried to produce a plausible in principle explanation and nobody has managed it yet.

Just so long as you're not repeating that as a sort of mantra, and then setting out to prove it against all odds, I'm appreciative of your taking the time to educate me. I just don't want anyone to irrationally assume that something "can't be done", without first looking at all the possibilities. I, myself, am not perfectly convinced that it can be done. I'm just taking that side because most people take the other one, and because the current theories intrigue me :smile:.
 
  • #35
Originally posted by Fliption
After reading and thinking about this idea I think I'm agreeing with Canute. I'm still thinking a reductive explanation is not possible. This idea seems to do the same thing that everyone tries to do. It draws a correlation and then makes conclusions based on that correlation. If we observe that "A" always exists in conjunction with "B" then we make the conclusion that "A" and "B" must be correlated. "A" must be causing "B". Therefore, scientifically we can answer the question of "when should "B" occur?". Whenever "A" occurs. And we can create "B" by simply creating "A".

All of this is founded on the correlation. But there is no explanation of the correlation itself. Why does "B" necessarily follow from "A"? We have this explanation for hurricanes but not consciousness.

Do we? This strikes at the very heart of the matter, Fliption...what if I can imagine the counter-action of very powerful, swirling winds without there being a hurricane at all. What is it that causes a hurricane to arise from these physical processes?

Is not the rational answer "nothing, the hurricane doesn't 'arise' from these processes it is those processes"? Isn't that exactly what I - along with Dennett and precious few others - have been saying for quite some time on this topic?
 
  • #36
Originally posted by Mentat
Do we? This strikes at the very heart of the matter, Fliption...what if I can imagine the counter-action of very powerful, swirling winds without there being a hurricane at all. What is it that causes a hurricane to arise from these physical processes?

Is not the rational answer "nothing, the hurricane doesn't 'arise' from these processes it is those processes"? Isn't that exactly what I - along with Dennett and precious few others - have been saying for quite some time on this topic?

Then a hurricane is a poor analogy. The word "hurricane" is defined by humans as the very physical processes that you used to describe it, i.e. very powerful swirling winds. So it is correct to say that a hurricane doesn't arise from these winds. It is the winds. By definition this is true. However, this is not the case with consciousness. Consciousness as it is being discussed here has not been defined as any physical process. It is defined as "what it is like to be". It is defined this way because this is what needs to be explained. So to simply equate consciousness to some physical process the way you do a hurricane leaves an explanatory gap. How do you get from brain process to "what it's like to be"?

I was trying to show the difference between hurricanes and consciousness earlier and my last sentence threw you off the point entirely. That point was on the idea you posted.
 
  • #37
Originally posted by Mentat
Do we? This strikes at the very heart of the matter, Fliption...what if I can imagine the counter-action of very powerful, swirling winds without there being a hurricane at all. What is it that causes a hurricane to arise from these physical processes?

Is not the rational answer "nothing, the hurricane doesn't 'arise' from these processes it is those processes"? Isn't that exactly what I - along with Dennett and precious few others - have been saying for quite some time on this topic?

What we have, initially, are two sets of phenomena; the macroscopic hurricane and the microscopic actions of atoms and molecules. The question of how to coherently connect the two is best phrased not with the term 'arising' but with 'accounting for': How can the microscopic actions of atoms and molecules account for the macroscopic phenomenon of a hurricane? What we need is a bridge principle to show how the explanation of the latter completely satisfies our need to explain the former. The bridge principle is simple: the actions of those individual atoms and molecules, when taken as a whole, are the hurricane.

Why does this bridge principle work? Well, imagine that we are not concerned with hurricanes at all, but rather that we start off with a description of a set of atoms and molecules (from what turns out to be a hurricane) and we want to work out what this set of microscopic phenomena will 'look like' on the macroscopic scale. So we do a bunch of calculations using the laws of physics and, lo and behold, we have derived something that, on a macroscopic scale, looks exactly like a hurricane! Every salient feature of the hurricane that is in need of explaining has been completely derived from that set of microscopic phenomena. Given this result, our bridge principle stating that the hurricane is that set of dynamic atoms and molecules makes perfect sense, and appears to have been completely justified. So what we have is a successful reductive explanation.

Now, you propose a similar bridge principle for consciousness. You say that we can coherently connect conscious experience with physical neurons by just saying that subjective experience is the activity of the neurons.

Why does this bridge principle not work? Well, imagine that we are not concerned with consciousness at all, but rather that we start off with a description of a set of atoms and molecules (from what turns out to be a brain) and we want to work out what this set of microscopic phenomena will 'look like' on the macroscopic scale. So we do a bunch of calculations using the laws of physics and, lo and behold, we have derived something that, on a macroscopic scale, looks exactly like a brain, but looks nothing like subjective experience. Every salient feature of the objective brain that is in need of explaining has been completely derived from that set of microscopic phenomena. But no salient feature of subjective experience that is in need of explaining has been derived whatsoever from that set of microscopic phenomena. Given this result, our bridge principle stating that subjective experience is that set of dynamic atoms and molecules makes no sense at all, and indeed appears to have been invalidated. After all, if subjective experience is the activity of neurons, why has thoroughly analyzing the activity of neurons given us absolutely no indication of subjective experience? The end result is that we do not have a successful reductive explanation of subjective experience.

For instance, in the case of the hurricane, we see completely clearly why a bunch of molecules moving in such and such a way must account for a macroscopic wind current. But in the case of consciousness, we do not see at all why a bunch of molecules moving in such and such a way must account for a subjective experience of a certain color.

The way around this is to add certain fundamental postulates to our model of reality, such as "neurons doing such and such will always be accompanied by a subjective experience of such and such." Only after taking such postulates to be axiomatically true could we (trivially) see why molecules moving in such and such a way must account for a subjective experience of a certain color.

But in assuming such a fundamental, axiomatic existence for subjective experience, we have not reductively explained the existence of certain subjective experiences under certain circumstances at all. Rather, we have just taken it for granted that certain subjective experiences exist under certain circumstances. Using this approach we may be able to explain more complex features of subjective experience in terms of simpler ones, but we will still not be able to explain those most basic components of subjective experience in terms of anything else. So we will not have a truly reductive explanation of subjective experience.
 
  • #38
Good post, hypna.

I have a question relating to subjective experience. Since axiomatic principles would not reductively explain certain instances of subjective experience … or why the neurons do what they do in general. Do you think it is possible that Evolutionary Psychology could someday explain the fundamental, algorithmic co-option of neurons, and how subjective subconscious experience is correlated with and directly parallel to it? Or is (EP) not a matter of human-based subconsciousness in the realm of subjectivity?
 
  • #39
Mentat

I think Hypno answered your post very clearly so just a couple of additional points.


Originally posted by Mentat
It is a theory of what goes on in our neocortexes which gives rise to experience, memory, and creativity.
Not really it isn't. It is a theory predicated on the ad hoc conjecture that what goes on in our neocortex gives rise to experience. No account is given of how this occurs, or even of whether it does.

That's the point though: Consciousness, if it is to be explained as a scientific phenomenon - in the manner of a hurricane - cannot "evolve" or "arise" from the functions of the neocortex; it must literally be those functions, as Dennett (I'm sure you're elated that I'm bringing him up again) already predicted in Consciousness Explained.
Ah, He who must not be named. You say 'must literally be those functions'. Why is that, and what evidence leads you (or Calvin) to say this?


It's been observed to be hexagonal. That's really the beauty of it, in my opinion: There is no why, and thus it is just like every other event in the Universe (as understood in the eyes of the scientist).
Hexagonal arrays of neurons have been observed. That is all that has been observed. There is no reason to assume that these have anything to do with causing consciousness, (even if they correlate with certain states of consciousness).



No, this is simply a very special process, and there have been rudimentary (at best) experiments that link certain memories to certain synchronously-firing arrays. As I said at the outset, this is merely a way of showing that it is possible, in principle, to explain consciousness; however impossible it may currently be, in practice.
But this does not show that it is possible. In fact it suggests it is not possible, since it is forced to merely assume consciousness. As you acknowledge here...

As to how Calvin knows that these patterns are thoughts: he doesn't. It's a simple postulate that may turn out to be true. Anyway, it's a necessary one, since the most discrete unit of memory must be established before an understanding of the re-stimulation of memories (integral to consciousness) can be created.

And without some sort of additional theory a dip in spacetime is just a dip in spacetime, no more. You see, you are again demanding more of a scientific theory of consciousness than you would of any other such scientific theory.
That is not the case. No scientific theory that successfully accounts for the existence of a phenomenon starts by just assuming its existence. That would be back to front.


It's like in quantum mechanics: Any form of energetic reaction (aka "observation") can collapse the wave-function of a particle, and thus produce seemingly absolute properties out of the typical chaos of that realm. Sure, it's not conceivable, and not really understood yet; but no one says it's impossible in principle to understand it because we can't make the connection of why energetic reactions should collapse the wave-function ITFP...right? It's just the way it is, and the scientific quest is to understand exactly how it is, not why it is that way and not some other way.
Can't argue with that, but here you are talking about a physical reaction that is scientifically observable. Consciousness is not scientifically observable.

Ok...hold on, isn't psychology a science?
It's a long-standing debate. Psychologists would like it to be, and go to some trouble to make it look like it is, but the jury is still out.

Psychologists often refer to our free will. Of course, we may have free will, and all the "thermostat" scientists still may be correct at the same time...we'd simply have to say that the ability to make true conscious choices does exist,
Science is very clear on this issue. The physical universe is taken to be 'causally complete', entirely explicable in terms of physical interactions. Consciousness cannot be causal in this view. For science consciousness is a waste product, a whiff of wasted steam from the engine of a train driven by, well, er, steam, as Gilbert Ryle, Dennett's teacher, argued.

Ryle argued that we are making a mistake in thinking that consciousness is a real thing in need of a real explanation, saying that to do this is to make a category error of the same type as 'she came home in floods of tears and a sedan chair'. (I always liked that.)

Even if consciousness were causal, science could not accept that free will exists, since it contradicts the doctrine of physical determinism.

Are you serious? I don't mean to be offensive in any way, I just think we must be discussing two different things here. I was talking about a physical, scientific, reductionist theory of consciousness. This would indeed give rise to a test of consciousness that would be a matter of measuring whether some particular process was occurring or not...that process being consciousness. That seems logical to me.
And me also. A scientific theory of consciousness would indeed give rise to a test for it. The question is whether such a theory is possible.

What exactly is it that you expected from a scientific theory of consciousness?
I would be content with a proof that it is a scientific entity. Some indication of the physical mechanism by which it is caused would also be required.

And if consciousness of a thought is simply thinking about that thought, then where is the problem?
This approach is taken by 'higher-order thoughts' theories (HOTs). They don't work since ultimately they disappear up their own backside in self-reference.

Just so long as you're not repeating that as a sort of mantra, and then setting out to prove it against all odds,
Yeah, that's the danger. I don't think I'm doing this.

I'm appreciative of your taking the time to educate me.
Thank you also. We're all educating each other as far as I'm concerned.

I just don't want anyone to irrationally assume that something "can't be done", without first looking at all the possibilities. I, myself, am not perfectly convinced that it can be done. I'm just taking that side because most people take the other one, and because the current theories intrigue me.
That seems a good approach. In fact it's very difficult to prove that consciousness cannot be explained scientifically (mainly because there is no scientific definition of it, so there's nothing to get one's teeth into). What is not so difficult is to show that all the ways of scientifically explaining it proposed so far won't work, and that certain classes of explanation cannot ever work.

Apart from a short break during the reign of Behaviourism, thinkers have been speculating about the causes of consciousness for centuries. (Much of the best work was done in the 19th century.) I feel that if a scientific explanation were possible then by now someone would have come up with something at least workable in principle.

It is easy to forget that so far we do not have an in-principle explanation for the existence of matter. Nobody has yet ventured a book called 'Matter Explained'.
 
  • #40
Originally posted by Fliption
Then a hurricane is a poor analogy. The word "hurricane" is defined by humans as the very physical processes that you used to describe it, i.e. very powerful swirling winds. So it is correct to say that a hurricane doesn't arise from these winds. It is the winds. By definition this is true. However, this is not the case with consciousness. Consciousness as it is being discussed here has not been defined as any physical process. It is defined as "what it is like to be". It is defined this way because this is what needs to be explained. So to simply equate consciousness to some physical process the way you do a hurricane leaves an explanatory gap. How do you get from brain process to "what it's like to be"?

I was trying to show the difference between hurricanes and consciousness earlier and my last sentence threw you off the point entirely. That point bears on the idea you posted.

Fliption, you make a very interesting point. You say that consciousness is defined as "what it is like to be"...perhaps that's the problem. Doesn't that make the assumption that it really is like something to be you, and you don't just think it is? IOW (and I'll probably start a thread on this), doesn't this definition pre-suppose that there is a central self and that it's like something to be that central self?

The second assumption is perfectly sound, but the first one is up for a lot of debate.

(btw, the reason it must pre-suppose a central, indivisible, self is that, if there were no such "self", but were instead mere processes of the brain that could - with the right amount of complexity in computation - produce the illusion of "me" and the illusion that it is something that it is like to be "me", then reductionism is possible...if, however, there is a central "self", and that "self" is sentient, then it is like something to be that one, indivisible, being, and that may not be reductively explainable).
 
  • #41
Originally posted by hypnagogue
Now, you propose a similar bridge principle for consciousness. You say that we can coherently connect conscious experience with physical neurons by just saying that subjective experience is the activity of the neurons.

Why does this bridge principle not work? Well, imagine that we are not concerned with consciousness at all, but rather that we start off with a description of a set of atoms and molecules (from what turns out to be a brain) and we want to work out what this set of microscopic phenomena will 'look like' on the macroscopic scale. So we do a bunch of calculations using the laws of physics and, lo and behold, we have derived something that, on a macroscopic scale, looks exactly like a brain, but looks nothing like subjective experience.

Kudos on this post, Hypna. My hat's off to you.

There is (of course) one thing I don't get: Do we know what a subjective experience looks like? And, if we do, do we know what it looks like from all perspectives? Really, what does a hurricane feel like to a hurricane? This question is, obviously, a non sequitur, since the hurricane is not conscious, and thus that extra perspective doesn't exist. However, it does exist in the case of brains. If we can work up from the molecular scale, and produce a functioning brain, then, to that brain, it will indeed "feel like" consciousness. But it will only feel like consciousness from that perspective.

Do you see what I'm getting at?
 
  • #42
Originally posted by Mentat
There is (of course) one thing I don't get: Do we know what a subjective experience looks like? And, if we do, do we know what it looks like from all perspectives? Really, what does a hurricane feel like to a hurricane? This question is, obviously, a non sequitur, since the hurricane is not conscious, and thus that extra perspective doesn't exist. However, it does exist in the case of brains. If we can work up from the molecular scale, and produce a functioning brain, then, to that brain, it will indeed "feel like" consciousness. But it will only feel like consciousness from that perspective.

Do you see what I'm getting at?

Yes, I do, and it is certainly a reflection of the central problem. We have no knowledge of subjective experience strictly from objective observation. What we do have is individual knowledge of subjective experience coming directly from our own first hand perceptions of it, and from this we can try to infer associations between objective observations (such as listening to another human's verbal reports) and subjective experience, based on the assumption that this person is indeed conscious in the same general sense that we are. But this approach entirely presupposes our own firsthand knowledge of subjective experience, since just going by a purely objective, 3rd person approach there should be no reason to believe or even postulate such a thing as subjective experience in the first place.

Of course, this is a massive problem for any attempt at a scientific (objective) account of consciousness. It seems to strongly suggest that there is some component contributing to the existence of consciousness that we cannot objectively observe.

I think the best we can do at present is to guess at the nature of this unobservable component, using observations of brain activity in conjunction with assumptions of the validity of verbal reports and the like, to infer what subjective experience does and does not 'look like' from the 3rd person perspective of observing a human brain. This approach entirely acknowledges that there is nothing in our current 3rd person understanding of physical reality that can suggest, a priori, that system A is conscious but B is not, but rather that our key intuition about the existence and nature of subjective experience comes from our own personal, first-hand knowledge of it. Rather than try to artificially break our epistemic limitations when we try to understand and explain consciousness, we should work within them and use them to structure our approach. That is, rather than try to write off the explanatory gap by supposing that consciousness is entirely explainable via physical reductionism, we should acknowledge that the explanatory gap reflects a deep and significant fact about the nature of reality, and then very carefully go about fleshing out what exactly that deep and significant fact might be.
 
  • #43
Hypno

I couldn't possibly agree with you more. It mystifies me why this approach is so rarely explored. It leads straight to the solution. Perhaps it is too tainted with Buddhism for most academics. But it has to be the only way forward.
 
  • #44
Originally posted by Mentat
with the right amount of complexity in computation - produce the illusion of "me" and the illusion that it is something that it is like to be "me", then reductionism is possible...if, however, there is a central "self", and that "self" is sentient, then it is like something to be that one, indivisible, being, and that may not be reductively explanable).

Who is experiencing the illusion?
 
  • #45
Originally posted by Mentat
(btw, the reason it must pre-suppose a central, indivisible, self is that, if there were no such "self", but were instead mere processes of the brain that could - with the right amount of complexity in computation - produce the illusion of "me" and the illusion that it is something that it is like to be "me", then reductionism is possible...if, however, there is a central "self", and that "self" is sentient, then it is like something to be that one, indivisible, being, and that may not be reductively explainable).
Not quite sure what you mean but this does not seem to be quite right.

The reason that 'what it is like' is so widely accepted as a definition, and can be accepted by Buddhists etc., is that it does not entail the existence of a 'self'. In most (all?) idealist accounts of consciousness 'self' is an illusion (or evolved epiphenomenon). Thus 'self' is not necessary to the existence of consciousness or experience.

In other words, 'what it is like' entails the existence of an experience, but (at the limit) it does not entail the existence of a self that is apart from the experience. Consciousness can exist in a state of selflessness.

Thus Buddhists, who in one way claim that the universe arises from consciousness, also sometimes assert that consciousness does not exist. There are subtle differences in these two applications of the term 'consciousness', related directly to the treatment of 'self'.
 
  • #46
Originally posted by Jeebus
Good post, hypna.

I have a question relating to subjective experience. Since axiomatic principles would not reductively explain certain instances of subjective experience … or why the neurons do what they do in general. Do you think it is possible that Evolutionary Psychology could someday explain the fundamental, algorithmic co-option of neurons, and how subjective subconscious experience is correlated with and directly parallel to it? Or is (EP) not a matter of human-based subconsciousness in the realm of subjectivity?

I think evolutionary psychology's stance on consciousness will follow from neurobiology and philosophy, not the other way around. After learning more about the nature of consciousness and what functions it serves, we would be able to make better sense of how it is evolutionarily advantageous. But conjecturing how consciousness might be evolutionarily advantageous without getting a better handle on it in its own right would seem to be the wrong approach, perhaps even question begging.

However, analyzing in depth the ontogeny of the human brain and its relationship with the development of consciousness could lead to some fruitful results, and I suppose that has a bit of EP flavor to it.
 
  • #47
Originally posted by Fliption
Who is experiencing the illusion?

No one in particular. As explained in another thread, the many sub-experiences (I just really like that term, for some reason) - which are the basic computations of incoming information by the brain - do not ever produce a complete, final draft, but they do process the illusion that there is such a thing, so that it looks like it in retrospect. I think this is actually a very useful tool for the brain to have developed, as it allows for the compactification of lots of information.
 
  • #48
Originally posted by Canute
Not quite sure what you mean but this does not seem to be quite right.

The reason that 'what it is like' is so widely accepted as a definition, and can be accepted by Buddhists etc., is that it does not entail the existence of a 'self'. In most (all?) idealist accounts of consciousness 'self' is an illusion (or evolved epiphenomenon). Thus 'self' is not necessary to the existence of consciousness or experience.

In other words, 'what it is like' entails the existence of an experience, but (at the limit) it does not entail the existence of a self that is apart from the experience. Consciousness can exist in a state of selflessness.

Thus Buddhists, who in one way claim that the universe arises from consciousness, also sometimes assert that consciousness does not exist. There are subtle differences in these two applications of the term 'consciousness', related directly to the treatment of 'self'.

I see what you are getting at, but the central "illusion" is not really of the self, but of the idea that there is a coherent gestalt arising from those little sub-experiences. The reason I mention the self is because, when one introspects on what it is like to just "be 'me'", one is succumbing to the same illusion as always, but in this case it is in reference to their very "selves". And, since it is often said that consciousness = a state in which it is "like something" to be "me", it appeared intrinsically related.
 
  • #49
Originally posted by Mentat
I see what you are getting at, but the central "illusion" is not really of the self, but of the idea that there is a coherent gestalt arising from those little sub-experiences. The reason I mention the self is because, when one introspects on what it is like to just "be 'me'", one is succumbing to the same illusion as always, but in this case it is in reference to their very "selves". And, since it is often said that consciousness = a state in which it is "like something" to be "me", it appeared intrinsically related.
This is a confusing issue. You're right that when one introspects, in a way one is succumbing to the illusion of 'me'. 'Me' is where all introspection has to start. But when one introspects sufficiently successfully you find 'me' isn't really there, but somehow 'your' experience still is. I haven't got too far with this, but far enough to believe it.

This is why I feel in discussions of consciousness it's very important to distinguish between mind and consciousness, or at least be careful about how they are defined. Mind and brain may both arise from some fundamental state of consciousness.

(Is anyone else having trouble here? Posts keep disappearing. This post of Mentat's that I quoted isn't there any more, nor is the rest of that page. Is it just me?)
 
  • #50
Originally posted by Canute
This is a confusing issue. You're right that when one introspects, in a way one is succumbing to the illusion of 'me'. 'Me' is where all introspection has to start. But when one introspects sufficiently successfully you find 'me' isn't really there, but somehow 'your' experience still is. I haven't got too far with this, but far enough to believe it.

What does that mean? I don't mean to be offensive, I'm just confused at how clear something has to be to a person before they "believe" it.

Anyway, if you can accept such an outlook - wherein there is no "self" and no final "draft" of "experience", but merely a collection of "sub-experiences" - then what more do you want from a theory of consciousness than that which Calvin, LeDoux, Edelman and Tononi, and Dennett have proposed?

This is why I feel in discussions of consciousness it's very important to distinguish between mind and consciousness, or at least be careful about how they are defined. Mind and brain may both arise from some fundamental state of consciousness.

(Is anyone else having trouble here? Posts keep disappearing. This post of Mentat's that I quoted isn't there any more, nor is the rest of that page. Is it just me?)

I can still see my post there...has this happened to you on other threads?
 
  • #51
Originally posted by Mentat
What does that mean? I don't mean to be offensive, I'm just confused at how clear something has to be to a person before they "believe" it.
Yes, I didn't put it very well. What I meant was that it makes rational and reasonable sense, but that on top of that my experience confirms it. It is not provable so it has to be experience that decides it in the end.

Anyway, if you can accept such an outlook - wherein there is no "self" and no final "draft" of "experience", but merely a collection of "sub-experiences" - then what more do you want from a theory of consciousness than that which Calvin, LeDoux, Edelman and Tononi, and Dennett have proposed?
But that doesn't follow. It is widely agreed that the writers you mention do not explain consciousness. I certainly agree that they don't.

Also saying that self is an illusion is not at all the same as saying that consciousness is an illusion. (That was my point).

I can still see my post there...has this happened to you on other threads?
Yes it has, it's driving me nuts. I keep losing track of discussions and posting out of sync. I think it's something to do with pages not updating but I can't pin it down.
 
  • #52
Originally posted by Canute
But that doesn't follow. It is widely agreed that the writers you mention do not explain consciousness. I certainly agree that they don't.

They don't explain how all of the information-processing of the brain sums up to a Final Draft of conscious experience, if that's what you mean...but they are not trying to. They have each shown, in their own way, that such a Final Draft is never really produced, but is an illusion (a "trick" that the brain plays on itself, in Dennett's terms) which is processed right along with the rest of the information, which is useful (for the compactification, long-term memorization, and recall...as well as for the evolution of sentience) but misleading (in philosophical discussion, one can take the illusion of compactification to be the real thing, and can spend eternity trying to explain how the Final Draft can "arise" from information-processing, but will never find the answer, since the argument is based on a faulty premise).

Also saying that self is an illusion is not at all the same as saying that consciousness is an illusion. (That was my point).

Well, that's true.
 
  • #53
Mentat

I must admit I don't really understand your point about final drafts. However I'm not sure how it's relevant. 'Final drafts' is a term from Dennett that may or may not have some relevance to consciousness.

But heterophenomenology, the theory behind the term, cannot work as an explanation of consciousness since it excludes what we normally call consciousness, as has been pointed out by many of Dennett's colleagues, notably Stevan Harnad.

So to prove that the term 'final drafts' has any meaning in relation to consciousness one would first have to meet the well-rehearsed and so far wholly unanswered objections of just about everyone who isn't Daniel Dennett.

Somewhere online is an email discussion between Harnad and Dennett on this issue which pretty much settles the matter, but I've lost it. A search on the names together might uncover it.

I think you should change your mind. It really just cannot ever make sense to argue that consciousness is anything other than exactly what it appears to be, to you (or me), and what it has been like at other times, in your own experience and in your own words, as best you can tell or remember. If you are anything like me then what it seems to be like is a completely unified experience of what it is like to be conscious as me at this moment, and what it has been like in other remembered moments. That is what consciousness is, what it is that we're supposed to be explaining.

This seems so completely obvious that I cannot understand how anyone could argue otherwise. I don't mean you, I mean the thousands of professional academics who get paid to think clearly and deeply about these issues and who agree with you. The arguments go back and forth endlessly in the literature.

Half of these tenured academics and professional researchers writing about consciousness seem to be off their rockers to me, but perhaps I'm off mine. However I'm suspicious. The problem of consciousness has turned into a goldmine for academic philosophers and many others in the research and publication industry, I sometimes wonder if they're really trying to solve it.
 
  • #54
Originally posted by Canute
I think you should change your mind. It really just cannot ever make sense to argue that consciousness is anything other than exactly what it appears to be, to you (or me), and what it has been like at other times, in your own experience and in your own words, as best you can tell or remember. If you are anything like me then what it seems to be like is a completely unified experience of what it is like to be conscious as me at this moment, and what it has been like in other remembered moments. That is what consciousness is, what it is that we're supposed to be explaining.

That is certainly what it feels like. But then, to quote FZ+, "If reality were exactly the way it seemed, we wouldn't need science at all".

BTW, I want to clarify here and now that I don't believe any of the things I've stated about consciousness to be necessarily true. I just don't think they are necessarily false either, and I'm preferring them over the alternative specifically because it contradicts the belief that "seems obvious".

This seems so completely obvious that I cannot understand how anyone could argue otherwise. I don't mean you, I mean the thousands of professional academics who get paid to think clearly and deeply about these issues and who agree with you. The arguments go back and forth endlessly in the literature.

Half of these tenured academics and professional researchers writing about consciousness seem to be off their rockers to me, but perhaps I'm off mine. However I'm suspicious. The problem of consciousness has turned into a goldmine for academic philosophers and many others in the research and publication industry, I sometimes wonder if they're really trying to solve it.

I sometimes wonder if they are trying to solve the wrong question. The point, IMO, is to understand how a being is conscious and/or sentient. It is not to answer the "hard problem", since that is a compilation of postulates that needn't exist at all for sentience/consciousness to exist.
 
  • #55
Originally posted by Mentat
That is certainly what it feels like. But then, to quote FZ+, "If reality were exactly the way it seemed, we wouldn't need science at all".
I have to disagree. If we didn't study the way reality seems then science wouldn't exist, as how things seem is all that science can study.

BTW, I want to clarify here and now that I don't believe any of the things I've stated about consciousness to be necessarily true. I just don't think they are necessarily false either, and I'm preferring them over the alternative specifically because it contradicts the belief that "seems obvious".
Fair enough. But obviousness is nevertheless a useful guide to the truth.

I sometimes wonder if they are trying to solve the wrong question.
Yeah, that as well.

The point, IMO, is to understand how a being is conscious and/or sentient. It is not to answer the "hard problem", since that is a compilation of postulates that needn't exist at all for sentience/consciousness to exist.
But the hard problem is indistinguishable from the problem of understanding how a being can be conscious and/or sentient. That is the hard problem.
 
  • #56
Originally posted by Canute
I have to disagree. If we didn't study the way reality seems then science wouldn't exist, as how things seem is all that science can study.

That cannot be the case. Quantum Mechanics, just for one example, does not study how things seem, but studies how they are regardless of the fact that they actually seem to be the exact opposite of what QM has shown.

Fair enough. But obviousness is nevertheless a useful guide to the truth.

Maybe.

Yeah, that as well.

I just wish more people would ponder that (as, I believe, Dennett has - and has arrived at the conclusion that they are indeed asking the wrong question).

But the hard problem is indistinguishable from the problem of understanding how a being can be conscious and/or sentient. That is the hard problem.

That's not exactly how it was presented to me. The "hard problem" as I've seen it described is the problem of showing how certain physical functions can produce consciousness.

If one can avoid the problem of how something "produces" consciousness, by showing that consciousness is synonymous with those physical functions, then one will make the "hard problem" moot while consciousness is still being examined.
 
  • #57
Originally posted by Mentat
That cannot be the case. Quantum Mechanics, just for one example, does not study how things seem, but studies how they are regardless of the fact that they actually seem to be the exact opposite of what QM has shown.

In the double slit experiments, it seems (appears) that light can either accumulate on one specific point of a barrier or be dispersed across this barrier. From these appearances we infer certain properties of light that might contradict our usual notions of how light seems to be. The point is that science is based on observation. Our knowledge of reality stemming from observation is by definition mediated, not direct, and in this sense it is built entirely from appearances. (Appearances here does not necessarily mean how things literally appear in subjective experience.)

That's not exactly how it was presented to me. The "hard problem" as I've seen it described is the problem of showing how certain physical functions can produce consciousness.

If one can avoid the problem of how something "produces" consciousness, by showing that consciousness is synonymous with those physical functions, then one will make the "hard problem" moot while consciousness is still being examined.

You really need to circumvent this objection of yours. It amounts to a strawman. Think in terms of 'accounting for,' 'making intelligible how,' or whatever-- not 'produces' or 'gives rise to.' The hard problem is an epistemic problem relating to how we can know or understand the processes which underlie the phenomenon of consciousness, not a problem of explaining literal 'products.'
 
  • #58
Originally posted by Mentat
That cannot be the case. Quantum Mechanics, just for one example, does not study how things seem, but studies how they are regardless of the fact that they actually seem to be the exact opposite of what QM has shown.
I don't want to be brutal, but it is absolutely and certainly the case, and no scientist or philosopher has ever disagreed. And what hypnagogue said. This is why it is said that this is a world of appearances, in which we cannot know the essence of things.

I just wish more people would ponder that (as, I believe, Dennett has - and has arrived at the conclusion that they are indeed asking the wrong question).
IMHO all metaphysical questions without exception are the wrong questions. But that would take all day to explain.

That's not exactly how it was presented to me. The "hard problem" as I've seen it described is the problem of showing how certain physical functions can produce consciousness.
Isn't this exactly the same as the problem of how "to understand how a being is conscious and/or sentient" as you put it?

If one can avoid the problem of how something "produces" consciousness, by showing that consciousness is synonymous with those physical functions, then one will make the "hard problem" moot while consciousness is still being examined.
If you can do that you will be an international academic superstar overnight. Great minds have been trying for decades, if not centuries, perhaps even millennia.

Your arguments are all dealt with in full in the literature if you want to get a more trustworthy refutation of them.
 
  • #59
Originally posted by hypnagogue
In the double slit experiments, it seems (appears) that light can either accumulate on one specific point of a barrier or be dispersed across this barrier. From these appearances we infer certain properties of light that might contradict our usual notions of how light seems to be. The point is that science is based on observation. Our knowledge of reality stemming from observation is by definition mediated, not direct, and in this sense it is built entirely from appearances. (Appearances here does not necessarily mean how things literally appear in subjective experience.)

And so what does "appearance" mean when referring to subjective experience itself? What if your subjective experience appears one way when it really isn't that way (and don't say that it can't be, because it is the experience, since it is still perfectly logical to say that what "appears" to be a complete subjective thought is really a collection of "simple" thoughts, each being identical to an impression... ergo, a reductively explainable phenomenon with no synergistic reality)?

You really need to circumvent this objection of yours. It amounts to a strawman. Think in terms of 'accounting for,' 'making intelligible how,' or whatever-- not 'produces' or 'gives rise to.' The hard problem is an epistemic problem relating to how we can know or understand the processes which underlie the phenomenon of consciousness, not a problem of explaining literal 'products.'

And the "hard problem" itself, IMO, amounts to a strawman. What is it, exactly, that you are trying to explain? You haven't defined subjective experience in any logical terms; you have only stated that you definitely have "it".

The "hard problem" is based on the assumption that there is a subjective experience, but I still haven't seen that term defined.
 
  • #60
Originally posted by Canute
Isn't this exactly the same as the problem of how "to understand how a being is conscious and/or sentient" as you put it?

Nearly. I just don't think that the subjective experience and the physical function are at all different from each other, which precludes the drawing of a bridge between the two, since there is only one thing to explain.

The hard problem lacks substance, IMO, simply because it raises straw-men at every turn. For example, if I say that I see a red ball, the "hard problem" philosophers will ask "What is the relationship between the stimulation of your visual cortex and the experience of a 'red ball'?". But they are asking a moot and empty question. My question in return is: "What exactly do you people expect to happen when a visual cortex processes a certain wavelength of light? How do you separate the processing of that wavelength from the experience of the color, when the experience of the color is the only method available to a visual cortex to process that wavelength?"
 
