Is Consciousness Just the Result of Electrical Activity in Our Brains?

  • Thread starter: Rothiemurchus
  • Tags: Consciousness
AI Thread Summary
The discussion centers on the complex nature of consciousness, exploring its relationship with brain activity and the concept of the soul. Participants debate whether consciousness is merely a product of electrical and chemical processes in the brain or whether it involves a deeper, possibly material essence, such as a soul composed of unique particles. The idea that consciousness could be linked to specific particles or fields that differ from conventional physics is proposed, but this notion faces skepticism regarding its empirical viability and the explanatory gap between physical phenomena and subjective experience.

The conversation also touches on the nature of awareness, suggesting that it encompasses more than sensory input; it involves a qualitative experience that cannot be fully captured by physical descriptions. Examples like Helen Keller's evolving awareness highlight the complexity of consciousness, emphasizing that while awareness can expand, it does not equate to the richness of phenomenal experience. The participants express uncertainty about defining consciousness, acknowledging that it remains a significant philosophical and scientific challenge, with no consensus on its fundamental nature or origins.
  • #101
Canute said:
Your claim goes beyond what is known. It might be true but, as I said earlier, I don't believe it. In fact I find the idea daft. Many people claim that we can explain human behaviour and the existence of the physical world without reference to consciousness. However, this is a conjecture. As things stand we are unable to explain the existence of human consciousness or of the physical world. It is therefore possible that the reason we cannot explain these things is that we think we can explain them without reference to consciousness.

I'll explain why I believe consciousness isn't causal without referring to any hypothetical beings.

When you read my post, light is stimulating your eye, which sends signals to your brain. Your brain turns this visual data into words, and then into abstract ideas (ie, signals representing abstract ideas). These signals cause other signals to start up, which represent your own personal ideas. You might look at an apple lying on your desk, and this causes new signals which represent the color red. These new signals interact with the ones already floating around in your head to bring you to the conclusion that red is real, and my arguments are nonsense (ie, daft). Now, I'm saying that each of these steps is a physical process, and can be explained by the laws of QM and, if they apply, relativity. We aren't yet close to such an explanation, and in fact they might not actually be signals, but instead something more abstract, like "brain states." But they are physically explainable. On the other hand, you seem to be saying that at some point in this process, a mystical, non-physical force (ie, causal consciousness) creeps in and affects the physical outcome. The brain is a physical object, not inherently different from a computer. What brings you to the conclusion that there is such a mystical force?

I'm not saying there is no consciousness. It is perfectly possible that consciousness is real, but it is only a byproduct of the physical laws governing our brain. If the electrical state of our brain could be altered by physical means, it is not at all unreasonable to claim that our conscious experience would change as well. Who's to say we couldn't electrically stimulate ourselves into any conscious state we wanted? I could electrically induce you into a state where you had my opinions about consciousness, or maybe those of someone who doesn't believe in it at all. Our beliefs about consciousness are completely physically rooted. (note: maybe I would have to change the physical structure in addition to the electrical configuration to achieve certain conscious states, but this does not affect my argument.)

This isn't quite accurate. It is very easy to prove the existence of the mental world, it is just not possible to prove it by demonstration. In contrast it is impossible to prove the existence of the physical world by any means or under any circumstances.

I don't see a difference between "prove" and "prove by demonstration." If you disagree with my physicalist viewpoint, then of course you'll say consciousness can be proven, and that it's all that can be proven. But my entire point is that we would believe it was there whether or not it really was. So it is impossible to prove it beyond any doubt, unless you disprove my viewpoint.
 
  • #102
StatusX said:
I'll explain why I believe consciousness isn't causal without referring to any hypothetical beings.

When you read my post, light is stimulating your eye, which sends signals to your brain. Your brain turns this visual data into words, and then into abstract ideas (ie, signals representing abstract ideas).
Hmm. How does one turn an electro-chemical signal into an idea in the absence of consciousness? What is an 'abstract idea'? Is there any other sort? Or are you suggesting that ideas are physical? Does the idea of an elephant take up more brain space than the idea of a mouse?

These signals cause other signals to start up, which represent your own personal ideas.
This is a sleight of hand. An electro-chemical signal is a physical thing, an idea is not, (even if you believe that ideas have neural correlates). If these signals 'represent' ideas then who or what is decoding the representation and turning them into ideas? That is, how does your e-c signal become a non-physical idea?

You might look at an apple lying on your desk and this causes new signals which represent the color red.
What do you mean 'represent' the colour red? I thought you were arguing that the signals were the colour red.

These new signals interact with the ones already floating around in your head to bring you to the conclusion that red is real, and my arguments are nonsense (ie, daft).
Are you saying that 'red' is not real? Why are you trying to explain our experience of it then?

Now, I'm saying that each of these steps is a physical process, and can be explained by the laws of QM and, if they apply, relativity.
OK. But I'll bet you can't find any evidence to prove it.

We aren't yet close to such an explanation,
I wonder why not.

and in fact they might not actually be signals, but instead something more abstract, like "brain states."
I'd say a brain state was not abstract. This is the problem, it is not possible to argue from brain states to states of consciousness. This is why so many arguments against the notion that the neural correlates of consciousness are consciousness have been published. I like neurophysiologist Karl Pribram's remark that looking for consciousness in the brain is like digging to the centre of the Earth to find gravity.

But they are physically explainable. On the other hand, you seem to be saying that at some point in this process, a mystical, non-physical force (ie, causal consciousness) creeps in and affects the physical outcome.
Hold on, I didn't suggest that there was anything mystical about consciousness, and both of us are arguing that it is non-physical: me on the basis that it does exist, you on the basis that it doesn't.

The whole basis of your argument is that something that is non-physical cannot exist, and that therefore consciousness is physical insofar as it exists and non-physical insofar as it doesn't. This forces you into the incoherent view that ideas are physical, despite the fact that they have no physical extension.

The brain is a physical object, not inherently different from a computer. What brings you to the conclusion that there is such a mystical force?
'Mystical' is your word, not mine. What forces me to conclude that consciousness (or, more properly, conscious experiences) is not physical is that the brain can be observed in the third-person and consciousness cannot be.

I'm not saying there is no consciousness. It is perfectly possible that consciousness is real, but it is only a byproduct of the physical laws governing our brain. If the electrical state of our brain could be altered by physical means, it is not at all unreasonable to claim that our conscious experience would change as well.
There is no doubt that as human beings our states of consciousness are affected by the states of our brains. However the states of the tides are affected by the state of the moon. It does not follow that water is made out of moons.

Who's to say we couldn't electrically stimulate ourselves into any conscious state we wanted?
Who's to say there isn't a teapot in orbit around Mars?

I could electrically induce you into a state where you had my opinions about consciousness, or maybe those of someone who doesn't believe in it at all. Our beliefs about consciousness are completely physically rooted. (note: maybe I would have to change the physical structure in addition to the electrical configuration to achieve certain conscious states, but this does not affect my argument.)
Yes, but this is just a restatement of your opinion. I'm arguing that there is no evidence for your opinion. Can you think of any? There's none yet in the literature.

I don't see a difference between "prove" and "prove by demonstration."
In a way I agree. It depends how you use the term 'prove'. If I say 'it appears to me that it's raining' I can know that this is true. I can 'prove' its truth to myself by a simple act of introspection, (and the statement remains true whether or not it is raining). But I cannot demonstrate a proof of it. Perhaps you wouldn't consider my introspective evidence a 'proof', but this doesn't really matter. What is known directly is certain but not provable by demonstration, (e.g. 'I think therefore I am'), whereas what can be proved by demonstration can always be falsified (Goedel et al) and is therefore never certain. This is one of the odd consequences of the nature of consciousness and of formal reasoning.

If you disagree with my physicalist viewpoint, then of course you'll say consciousness can be proven, and that its all that can be proven. But my entire point is that we would believe it was there whether or not it really was. So it is impossible to prove it beyond any doubt, unless you disprove my viewpoint.
I cannot demonstrate that I am conscious. However it doesn't follow that I cannot be sure whether I am or not. Are you suggesting that we could be not-conscious yet think we are, or be conscious yet think we are not? If so then we better just agree to disagree.
 
  • #103
StatusX said:
I'm not saying there is no consciousness. It is perfectly possible that consciousness is real, but it is only a byproduct of the physical laws governing our brain.

I agree with you that no argument can be made to say that consciousness is causal. But you seem to go too far with your arguments. If we say that we have no evidence that A causes or has an effect on B, we cannot then conclude that B must therefore cause A (or be a byproduct of A). This idea is simply a belief, and it actually contradicts the whole premise you originally agreed with.

I still wonder if you understood me earlier when I said that a large part of the issue with consciousness is one of epistemology. The whole reason for the zombie illustration is to say that consciousness is beyond the study of a materialist paradigm. It does not claim anything about the causality of consciousness. It merely claims we cannot "know" these things using a materialist toolkit. So this includes making conclusions about it being the byproduct of anything.

You agree with the illustration when it claims that consciousness cannot be shown to be causal but then disagree with the illustration when you make the claim that therefore consciousness is the byproduct of physical processes. This is exactly what the illustration is telling you is NOT the case. You cannot make a statement about causality one way or the other because you cannot make a connection using a materialist paradigm. How can you agree that there is no causal connection and that no explanation can be had under materialism and then claim that it is simply a byproduct of physical processes? This seems inconsistent to me.


I don't see a difference between "prove" and "prove by demonstration." If you disagree with my physicalist viewpoint, then of course you'll say consciousness can be proven, and that its all that can be proven. But my entire point is that we would believe it was there whether or not it really was. So it is impossible to prove it beyond any doubt, unless you disprove my viewpoint.


I agree with Canute here, although this could largely be semantic. To me all knowledge is personal, and I think that's how Canute is using the term "knowledge" as well. The only thing I am certain of is that "something exists". I know this because I am aware of existence, and something has to exist for this awareness to exist.

When Canute used "prove by demonstration", I interpreted it to mean proving to others. Since I do not "know" that the external world really exists, and this includes all those people to whom I might "prove by demonstration", "prove by demonstration" doesn't prove anything.
 
Last edited:
  • #104
Fliption:
I agree with you that no argument can be made to say that consciousness is causal.

Rothie M:
It has to be causal, because consciousness is associated with the passage of time, and time passes when physical entities change from one spatial configuration to another. Energy of some kind causes the configurations to change.
 
  • #105
I guess if we want to find out what consciousness really is, we have to research dopamine and endorphins... why does dopamine in our brains make us feel good, while other neurotransmitters make us feel bad?

It may sound stupid, but I think it's a good question, if it doesn't have an answer yet.
 
  • #106
Rothiemurchus said:
It has to be causal, because consciousness is associated with the passage of time, and time passes when physical entities change from one spatial configuration to another. Energy of some kind causes the configurations to change.

It may be causal, but the point was that no argument or set of facts can show it is caused. So far, the subjective element of consciousness is unexplainable by any known principles associated with biology or physics.
 
  • #107
Les Sleeth:
no argument or set of facts can show it is caused

Rothie M:
I disagree.
If, one day, particles are detected at a region in space where, for example, an area of colour exists, and these particles are detected for the same length of time as a conscious observer says he can see the area of colour, that would be convincing evidence.
 
  • #108
I'll keep this short. Imagine a computer so powerful it could simulate the physical human brain in every aspect. Every neuron would be modeled to incredible precision. All the sources of input would have to be supplied to it (eg, the data from a video camera could be translated into the appropriate data an eye would send). The output would be translated into some form we could understand. For example, the data it sends to the virtual "vocal cords" could be translated into text. Is this possible?

If so, this computer would be capable of having ideas. There would be no way to "see" these ideas, they wouldn't take up space, but they would be inherent in the pattern of 1s and 0s in the computer memory. As I have already emphasized, it would tell us it was conscious. Just like a virtual pendulum "swings" back and forth, a virtual brain acts the same as a real brain, and a real brain "tells" the world it is conscious.

If not, why not? What is so special about the particular arrangement of matter in our brain that prohibits simulation? We could simulate a pendulum, a solar system, gas in a container, but not this? Why? Canute, you continue to use common sense as an argument. If you disagree, use logical reasoning to explain why, or illustrate an example that shows why my arguments are absurd without assuming your preconceived notions are true. And Fliption, to me, the zombies exist in an alternate universe where the rules are different. They just illustrate logical possibilities.
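The "virtual pendulum" analogy above can be made concrete with a minimal numerical sketch (nothing from the thread itself; the function name, step size, and integrator choice are illustrative assumptions). The point it illustrates is the one StatusX makes: the simulated pendulum's state is just numbers in memory, yet its trajectory tracks the physical system's.

```python
import math

def simulate_pendulum(theta0, omega0=0.0, g=9.81, length=1.0,
                      dt=1e-4, steps=100_000):
    """Integrate theta'' = -(g/L) * sin(theta) with semi-implicit Euler.

    A toy stand-in for the post's 'virtual pendulum': the entire state
    is two floating-point numbers, updated by a fixed physical rule.
    Returns the final angle and angular velocity after steps * dt seconds.
    """
    theta, omega = theta0, omega0
    for _ in range(steps):
        omega -= (g / length) * math.sin(theta) * dt  # update velocity first
        theta += omega * dt                            # then position
    return theta, omega

# Released from rest at 10 degrees, the simulated pendulum keeps swinging
# within (approximately) its initial amplitude: no energy is being added,
# and the semi-implicit scheme keeps the energy error bounded.
theta_end, omega_end = simulate_pendulum(math.radians(10))
print(theta_end, omega_end)
```

Whether a brain admits the same treatment is exactly what the rest of the thread disputes; the sketch only shows what "simulating a finite physical system" means in the easy case.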
 
Last edited:
  • #109
A great post.

The only reply the others could make is pretty weak:

We can't build that computer right now.
We might never be able to build it.
Until you can show us the computer being conscious (and we will be the judges of whether it is conscious or not), we can continue to believe in our fairyland.
 
  • #110
selfAdjoint said:
A great post.

The only reply the others could make is pretty weak:

We can't build that computer right now.
We might never be able to build it.
Until you can show us the computer being conscious (and we will be the judges of whether it is conscious or not), we can continue to believe in our fairyland.

I'd respectfully submit that those aren't the only replies others can make, and neither are all of those replies "weak."

I think it is ironic you seem to downplay an argument you yourself are likely to make. You say, "Until you can show us the computer being conscious (and we will be the judges of whether it is conscious or not), we can continue to believe in our fairyland." Well, do you not use that exact same argument against God? You say, "Show me that God! Until you do (and we will be the judge if it is really God or not) we will continue to believe in our physicalist fairyland."

It isn't the God-believers who are bound by the standard of producing evidence for proof; the standard for God-believers is faith. Empiricists are the ones who insist a hypothesis is to be confirmed by experience, which means by your own rules you are held to a different standard. Isn't it reasonable to expect empirical physicalists to produce that computer consciousness they say is possible? And until you do, isn't your theory just another unsubstantiated physicalist "fairyland"?
 
Last edited:
  • #111
StatusX said:
I'll keep this short. Imagine a computer so powerful it could simulate the physical human brain in every aspect. Every neuron would be modeled to incredible precision. All the sources of input would have to be supplied to it (eg, the data from a video camera could be translated into the appropriate data an eye would send). The output would be translated into some form we could understand. For example, the data it sends to the virtual "vocal cords" could be translated into text. Is this possible?

Maybe. Since it hasn't been done, no one knows. You might say it is possible, I will disagree. The only possible solution is for those who assert it is possible to actually do it.


StatusX said:
If so, this computer would be capable of having ideas. There would be no way to "see" these ideas, they wouldn't take up space, but they would be inherent in the pattern of 1s and 0s in the computer memory. As I have already emphasized, it would tell us it was conscious. Just like a virtual pendulum "swings" back and forth, a virtual brain acts the same as a real brain, and a real brain "tells" the world it is conscious.

You are modelling a zombie, which everyone agrees is possible.


StatusX said:
[If so, this computer would be capable of having ideas] If not, why not? What is so special about the particular arrangement of matter in our brain that prohibits simulation? We could simulate a pendulum, a solar system, gas in a container, but not this? Why? Canute, you continue to use common sense as an argument. If you disagree, use logical reasoning to explain why, or illustrate an example that shows why my arguments are absurd without assuming your preconceived notions are true. And Fliption, to me, the zombies exist in an alternate universe where the rules are different. They just illustrate logical possibilities.

Where in your explanation is the "you" that is making decisions, changing your mind, willing your body to go here or there? You can easily account for all computing functions of the brain, and all behavior, but you cannot account for self awareness, subjectivity, qualia experience, or whatever you want to call it. Your focus seems to be the quantum steps, the discrete, the parts . . . what you don't see is the continuous, the undivided, the whole . . .

If someone sees either ONLY the discrete, or ONLY the continuous, then in this reality where we exist at least they are going to miss something.
 
Last edited:
  • #112
StatusX said:
If not, why not? What is so special about the particular arrangement of matter in our brain that prohibits simulation? We could simulate a pendulum, a solar system, gas in a container, but not this? Why? Canute, you continue to use common sense as an argument. If you disagree, use logical reasoning to explain why, or illustrate an example that shows why my arguments are absurd without assuming your preconceived notions are true. And Fliption, to me, the zombies exist in an alternate universe where the rules are different. They just illustrate logical possibilities.

Well, the problem here is that you are reasoning based on "what we know not". Every so often someone comes along with a 'scientific theory' that they say predicts all the equations of physics, and is therefore the 'right' theory (and they are usually not the humble types in their proclamations). Of course, ask them to produce equations not yet known that we can test by experiment, and they are usually mum. What they have done is predict the past successes of science, and even though that is an admirable task if done correctly, such 'theories' do not tell us that a revolutionary theory has been discovered. Rather, all we can do is look at them and say "does it make butter too?".

Well, I think this is a very similar situation to your thought experiment. What are we supposed to do with such computational results other than scratch our heads and pick up our discussion right before we were interrupted? The fact of the matter is, a theory might be right, but if it does not show us how it is right or if it is right in experiments that we can perform, such a theory is generally not useful to science.

In the case of a supercomputer having all these abilities, all we can ask at the end of the day is whether it is simply under the spell of Searle's Chinese room thought experiment. You might recall that in that thought experiment a person who does not know a word of Chinese is put inside a room (those outside do not know that he doesn't know Chinese). While he is in the room, someone comes along and slips a question written in Chinese characters through the door. A few minutes later out comes the answer in English. Now, most of us would assume that the person in the room is fluent in Chinese. But we would be wrong. If we could look inside, we would see that the person has a substantial filing system in which he can match the Chinese characters, stroke by stroke, until he finds a file that contains the English answer to that Chinese question. The 'translator' has no understanding of Chinese, but everyone on the outside is confident that the guy is fluent in Chinese.

What this thought experiment shows is not that AI is impossible; rather, it shows that to know that AI is possible we must have a much better philosophical understanding of language, a theory of learning, a theory of meaning, and a theory of truth (among a few others). We need to demonstrate how a proposition can be encoded into symbols and then decoded such that no information is lost (or very little). We can translate the contents of a sentence into 1s and 0s, but we cannot translate the meaning. Without demonstrating how that is possible, we might just as well be talking Chinese to the guy in the Chinese room.
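The filing-system version of the room described above can be sketched in a few lines (a toy illustration only; the question/answer pairs and names are invented here, not part of Searle's argument or the thread). Everything the "room" does is compare character sequences against stored files; nothing in it understands Chinese.

```python
# A toy 'Chinese room': a filing system that matches an incoming
# Chinese question, character by character, against stored files and
# returns the filed English answer. No component understands Chinese;
# the only operation is string equality.
RULE_BOOK = {
    "你叫什么名字？": "My name is Room.",
    "天空是什么颜色？": "The sky is blue.",
}

def chinese_room(question: str) -> str:
    """Return the filed answer, or a default when no file matches."""
    return RULE_BOOK.get(question, "I have no file for that question.")

print(chinese_room("天空是什么颜色？"))  # looks fluent from the outside
```

From outside the room the behaviour is indistinguishable from understanding, which is precisely the gap between symbol manipulation and meaning that the post is pointing at.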
 
  • #113
It isn't the God-believers who are bound by the standard of producing evidence for proof; the standard for God-believers is faith.
Faith does not constitute a proof and is not related to evidence. Those relying on it may share a commonality, but by definition faith is not related to possession of evidence for some fact.

Empiricists are the ones who insist a hypothesis is to be confirmed by experience, which means by your own rules you are held to different standards. Isn't it reasonable to expect empirical physicalists to produce that computer consciousness they say is possible?
Not really, so long as it is being held as a possibility.

And until you do, isn't your theory just another unsubstantiated physicalist "fairyland."
If someone is holding it on faith to be a proof of something not yet known to be factual then, yes.
 
  • #114
BoulderHead said:
Faith does not constitute a proof and is not related to evidence. Those relying on it may share a commonality, but by definition faith is not related to possession of evidence for some fact.

Right, that's exactly what I was trying to say, and did say when I said, "It isn't the God-believers who are bound by the standard of producing evidence for proof; the standard for God-believers is faith." Faith, at least as described by Paul and applied to God, is something one has without proof. It is an inner feeling, not an external process as proof is. So I still think that, in terms of credibility, the empirical physicalist has an entirely different standard to meet than people of faith.


BoulderHead said:
Not really, so long as it is being held as a possibility.

True. However, the attitude of the physicalist who believes in and/or entertains speculative ideas like computer consciousness, life self-organizing from chemistry, time travel, universes bubbling up from nothingness, and so on seems to be harboring a double standard when he recommends situating in Fairyland those of us who suspect some sort of consciousness might have been involved in creation. Personally, I think certain aspects of creation can be explained much easier and make more sense if consciousness was involved in their creation.

Must it be that anyone who claims to "feel" something more than the universe's mechanics is deluded? Maybe it's the mechanics who are suffering from a deadened feeling nature, and who then project that problem onto everybody who can still feel.
 
  • #115
Les Sleeth said:
True. However, the attitude of the physicalist who believes in and/or entertains speculative ideas like computer consciousness, life self-organizing from chemistry, time travel, universes bubbling up from nothingness, and so on

To equate computer consciousness and the chemical origin of life, for which there is weak but valid evidence, with time travel which has no evidence, is a misconstruction which prevents collegial discussion. I might as well characterize your thought as coming from the Land of Oz (and I don't mean Australia!).
 
  • #116
selfAdjoint said:
To equate computer consciousness and the chemical origin of life, for which there is weak but valid evidence, with time travel which has no evidence, is a misconstruction which prevents collegial discussion. I might as well characterize your thought as coming from the Land of Oz (and I don't mean Australia!).

Fair enough. I withdraw time travel from the list. I might add that the Wizard of Oz was my favorite childhood book, so watch it there. :smile:
 
  • #117
I don't think the burden of proof is on me. I am saying that any finite physical system can, in theory, be simulated. There is no evidence to doubt this, and none of you seem to disagree with it for most systems. So the burden is on you to explain what is different about the particular arrangement of atoms in our brain that makes simulation impossible, even in theory. No one has addressed this.

It would be like me claiming that every physical object has a mass. I can't prove this, but you would be the one who would have to make a compelling argument if you thought it wasn't true.
 
  • #118
(from Fliption) You agree with the illustration when it claims that consciousness cannot be shown to be causal but then disagree with the illustration when you make the claim that therefore consciousness is the byproduct of physical processes. This is exactly what the illustration is telling you is NOT the case. You cannot make a statement about causality one way or the other because you cannot make a connection using a materialist paradigm. How can you agree that there is no causal connection and that no explanation can be had under materialism and then claim that it is simply a byproduct of physical processes? This seems inconsistent to me.
That seemed worth reposting. It's easy to make that mistake whichever side one is on.

StatusX said:
I'll keep this short. Imagine a computer so powerful, it could simulate the physical human brain in every aspect. Every neuron would be modeled to incredible precision.
But I can't imagine it, so the rest of your thought experiment means nothing to me. Roger Penrose would almost certainly be in the same position. Your computer would have to model the brain all the way down to the quantum level, where, quite possibly, consciousness and brain are related via quantum coherence in microtubules, a process that begins at the level of the absolutely fundamental substrate of matter, in micro-units of mass and energy. If the relationship between brain and mind is rooted at such a fundamental level, then how can it be modeled by a computer? It seems an unscientific idea.

All the sources of input would have to be supplied to it (eg, the data from a video camera could be translated into the appropriate data an eye would send it). The output would be translated into some form we could understand.
Pardon me? Who is this 'we' that you mention here? I thought your computer was supposed to understand its own data.

For example, the data it sends to the virtual "vocal cords" could be tanslated into text. Is this possible?
It seems quite possible. After we have taken the output from a video camera, translated it into the sort of data a human eye (which is part of the brain, by the way) would send to the brain, and then translated it back into something we could understand, like the output of a video camera, it shouldn't be too hard to translate the data we've encoded to send to its vocal cords back into text that we can understand.

If so, this computer would be capable of having ideas.
Perhaps you need to think about this some more. If it was this easy to solve the 'problem of consciousness' then the early Greeks would have done it. You can't say 'heap together some bunch of components that may or may not be equivalent to a human brain, assume that it exists, and this shows that physicalism is true'. It just isn't that easy. If it were that easy then every sane person would be a physicalist.

There would be no way to "see" these ideas, they wouldn't take up space, but they would be inherent in the pattern of 1s and 0s in the computer memory.
There is no evidence that ideas can exist in a pattern of 1s and 0s. Until you can show that they can this is science fiction.

As I have already emphasized, it would tell us it was conscious. Just like a virtual pendulum "swings" back and forth, a virtual brain acts the same as a real brain, and a real brain "tells" the world it is conscious.
It is true that as conscious beings we tell each other that we are conscious. It's also true that if a hypothetical virtual brain is defined as behaving precisely like a real one then it must, just like a real one, report that it is conscious when it is. Nothing follows from this. It's an ontological argument for the existence of the hypothetical.

If not, why not? What is so special about the particular arrangement of matter in our brain that prohibits simulation? We could simulate a pendulum, a solar system, gas in a container, but not this? Why?
That's what I'd much rather discuss, rather than arguing with you about zombies and the like. It's a question that cannot be answered using our usual methods of reasoning. If you look at it closely it's a metaphysical question. As such it must be distinguished from scientific questions and thought about in a different way.

It is impossible to show that consciousness is epiphenomenal on the physical, and this means that it might not be. It does not mean that it is not, but equivalently it does not mean that it is. This is Fliption's point. For this reason I do not argue that I can show you are wrong; I argue that you can't show that you are right. But I can't show that I'm right either.

To me the real question to ask is this: why is it that neither of us (and nobody else) can prove our case about the relationship between consciousness and brain? And also perhaps, as many philosophers have suggested is the case, does our inability to do this have something to do with the particular way we reason?

Canute, you continue to use common sense as an argument.
Sorry about that. :smile:

If you disagree, use logical reasoning to explain why, or illustrate an example that shows why my arguments are absurd without assuming your preconceived notions are true.
That's not quite a fair challenge. How can I reason logically if I'm not allowed to use my common sense?

I think we should stop arguing and simply accept the obvious, that the truth about consciousness cannot be known by reason alone, as so many people have asserted over the millennia, and accept that it cannot even be shown to exist by formally logical means, let alone to be this or that.
 
Last edited:
  • #119
StatusX said:
I don't think the burden of proof is on me. I am saying that any finite physical system can, in theory, be simulated. There is no evidence to doubt this, and none of you seem to disagree with it for most systems. So the burden is on you to explain what is different about the particular arrangement of atoms in our brain that makes simulation impossible, even in theory. No one has addressed this.

It would be like me claiming that every physical object has a mass. I can't prove this, but you would be the one who would have to make a compelling argument if you thought it wasn't true.

I do not think anyone is saying the "arrangement of atoms in our brain that makes simulation . . . " can't be achieved. The bigger issue is whether that arrangement is responsible for self-awareness. The idea is that one can account for all human behaviors and brain functions with brain physiology, but the brain physiology we know would only produce a zombie (something that can mimic all human behaviors, but doesn't have a personal experience of what it's doing).

So at least the self-aware part of consciousness might be the result of something other than physiology. For example, possibly there is a general pool of consciousness that's evolved with the universe which is manifested in the CNS. Such a panpsychic theory suggests the brain shapes, organizes and individuates a "point" of that general consciousness, and the self-aware part is an essential part of the panpsychic realm.
 
  • #120
The only compelling argument that such a simulation would be impossible that I've seen so far was Canute's claim that the operation of the brain is significantly affected by quantum processes that can't be simulated, and I'll try to argue it here.

No one is saying QM can't be modeled, and in fact many simple QM systems have been computer simulated. The confusion arises because people hear that there is "uncertainty" in QM, and assume this means there are no rules. There are strict rules. The problem is that when a wave function collapses, it does so randomly, and we only know the probability it will collapse into certain states. We can't create a virtual system that would act exactly the same as a given real one for the same reason that two real, identical QM systems will not act exactly the same: there is inherent randomness. But we can create a virtual system that would behave in a way that an identical real one could. We would just have to have some kind of random number generator for the wave function collapse.
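The "random number generator for the wave function collapse" idea can be made concrete with a toy sketch. To be clear, the two-state example and function names below are my own illustration, not code from any real quantum simulation package:

```python
import random

# Toy Born-rule collapse: given amplitudes over basis states, pick one
# state with probability |amplitude|^2. A seeded pseudo-random generator
# stands in for the inherent randomness of a real measurement.
_rng = random.Random(0)

def collapse(amplitudes, rng=_rng):
    probs = [abs(a) ** 2 for a in amplitudes]
    total = sum(probs)                 # renormalize against rounding error
    r = rng.random() * total
    cumulative = 0.0
    for state, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return state
    return len(probs) - 1

# An equal superposition of two states collapses to each outcome
# roughly half the time over many trials.
outcomes = [collapse([2 ** -0.5, 2 ** -0.5]) for _ in range(10000)]
```

Two identical virtual systems seeded differently will diverge, just as two identical real quantum systems do; that is the sense in which the simulation behaves "in a way that an identical real one could" rather than exactly like one particular real system.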

One way around this is to say that these random collapses aren't truly random, but are affected by our consciousness. This is a very interesting idea, and I definitely accept it as a possibility. Another is to say the variables in question (e.g., the position, velocity, mass, etc. of every particle being modeled) are continuous, and therefore any rounding we would do so that a computer could work with the numbers would be a source of error. I'm not sure about this, but I remember reading that a given volume of space contains a finite amount of information, something like 1 bit per Planck area of its bounding surface, and this would refute such a claim. For now, I'm just going to have to claim that we can get so close to the real values that any deviation from reality would not cause a significant difference in observed behavior. But I can't prove this.
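The "finite amount of information" being recalled here is the holographic bound. A rough back-of-the-envelope check, using the standard textbook estimate (area of the bounding surface in Planck units; the specific radius chosen below is my own illustrative figure):

```python
import math

PLANCK_LENGTH = 1.616e-35  # metres

def max_bits_for_sphere(radius_m):
    """Holographic bound: information in a region is limited by the
    area of its boundary in Planck units (S = A / (4 * lp^2) nats)."""
    area = 4 * math.pi * radius_m ** 2
    nats = area / (4 * PLANCK_LENGTH ** 2)
    return nats / math.log(2)  # convert nats to bits

# A sphere roughly the size of a human head (10 cm radius) is bounded
# at around 10^68 bits -- astronomically large, but finite, which is
# all this part of the argument needs.
head_bits = max_bits_for_sphere(0.1)
```

Whether a brain's relevant state really needs anywhere near that much precision is a separate question, but the bound does refute the claim that the information required is literally infinite.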

However, this whole idea is in opposition to the view that neurons are the basic components of the brain. A crude explanation of this model is that a neuron acts by outputting a signal if its inputs are above a certain threshold, and the interaction of many, many such neurons can give rise to complicated behavior. In computer science, neural nets attempt to replicate this function, and have achieved such impressive results as handwriting and facial recognition. I can't claim that the human brain could be modeled exactly by an extremely sophisticated neural net, but many researchers believe it could be. We'll just have to wait and see.
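The "outputs a signal if its inputs are above a certain threshold" model is easy to sketch as a McCulloch-Pitts-style unit. This is deliberately crude, nothing like a biologically faithful neuron:

```python
def threshold_neuron(inputs, weights, threshold):
    """Fire (output 1) if the weighted sum of the inputs reaches the
    threshold, otherwise stay silent (output 0)."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Even a single unit computes simple logic; networks of many such
# units with learned weights do the handwriting and face recognition
# mentioned above.
assert threshold_neuron([1, 1], [1.0, 1.0], 2.0) == 1  # AND: both inputs on
assert threshold_neuron([1, 0], [1.0, 1.0], 2.0) == 0  # AND: one input off
assert threshold_neuron([1, 0], [1.0, 1.0], 1.0) == 1  # OR: one input suffices
```

The interesting behavior comes not from any single unit but from wiring very many of them together and adjusting the weights, which is the sense in which complicated behavior "arises" rather than being explicitly programmed.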

And by the way, I am not claiming this simulation would be conscious. It might be, and it might not be. But whichever it is, how could it be different from whichever we are? Getting back to the zombie argument, a society of these simulations would attempt to explain consciousness. They wouldn't know they were just simulations, and we don't know that we aren't.
 
  • #121
Sorry for posting so much, but I just thought there were some things in here I really should address.

Canute said:
Fliption said:
You agree with the illustration when it claims that consciousness cannot be shown to be causal but then disagree with the illustration when you make the claim that therefore consciousness is the byproduct of physical processes. This is exactly what the illustration is telling you is NOT the case. You cannot make a statement about causality one way or the other because you cannot make a connection using a materialist paradigm. How can you agree that there is no causal connection and that no explanation can be had under materialism and then claim that it is simply a byproduct of physical processes? This seems inconsistent to me.
That seemed worth reposting. It's easy to make that mistake whichever side one is on.

It is logically possible that a being could exist with the same physical brain structure as us and not be conscious. But my argument, and yes it is a materialist one, is that in this universe, any beings with the same physical brain structure will have the same conscious state. That is, they're either both conscious or both unconscious. This is not inconsistent.

But I can't imagine it, so the rest of your thought experiment means nothing to me. Roger Penrose would almost certainly be in the same position. Your computer would have to model the brain all the way down to the quantum level, where, quite possibly, as far as we know, consciousness and brain are related via quantum coherence in microtubules, a process that begins at the level of the absolutely fundamental substrate of matter, in micro-units of mass and energy. If the relationship between brain and mind is rooted at such a fundamental level then how can it be modeled by a computer? It seems an unscientific idea.

I address this in my last post.

Pardon me? Who is this 'we' that you mention here? I thought your computer was supposed to understand its own data.

What the computer subjectively understands is irrelevant. I'm not making a claim one way or the other about whether it is conscious. I'm saying it will behave the same as us, and to see how it behaves, we must have a way of transmitting its signals into physical actions the way our muscles do it for us. Since it doesn't have muscles to move its vocal cords, a subprogram must translate these signals into text for it.

It seems quite possible. We have already taken the output from a video camera, translated it into the sort of data a human eye (which is part of the brain, by the way) would send to the brain, and then translated it back into something we could understand, like the output of a video camera. It shouldn't be too hard to translate the data we've encoded to send to its vocal cords back into text that we can understand.

You are missing the point entirely. Of course it would be hard. It would be harder than anything we've done up to this point. I'm not even sure it would be practically possible at any point in the future. All I'm saying is that it is theoretically possible, and how complicated it is does not matter for this.

Perhaps you need to think about this some more. If it were this easy to solve the 'problem of consciousness' then the early Greeks would have done it. You can't say 'heap together some bunch of components that may or may not be equivalent to a human brain, assume that it exists, and this shows that physicalism is true'. It just isn't that easy. If it were that easy then every sane person would be a physicalist.

Ok? So you disagree? I don't see an argument here.

There is no evidence that ideas can exist in a pattern of 1s and 0s. Until you can show that they can this is science fiction.

What about the ones and zeroes representing that sentence? That sentence represents an idea. So what if this computer doesn't "understand" the sentence? I say a smarter one could.

It is true that as conscious beings we tell each other that we are conscious. It's also true that if a hypothetical virtual brain is defined as behaving precisely like a real one then it must, just like a real one, report that it is conscious when it is. Nothing follows from this. It's an ontological argument for the existence of the hypothetical.

We must have different definitions of consciousness. What I call consciousness is experience. It is difficult to explain exactly, but it's basically what it's like to do things: to see red, have an idea, feel an emotion. It is difficult to imagine the functions associated with these experiences without consciousness, but it is not logically impossible. There is no logical reason a non-conscious entity couldn't talk to us about its ideas. None whatsoever. There would just be no first person experience of the ideas.

That's what I'd much rather discuss, rather than arguing with you about zombies and the like. It's a question that cannot be answered using our usual methods of reasoning. If you look at it closely it's a metaphysical question. As such it must be distinguished from scientific questions and thought about in a different way.

It is impossible to show that consciousness is epiphenomenal on the physical, and this means that it might not be. It does not mean that it is not, but equivalently it does not mean that it is. This is Fliption's point. For this reason I do not argue that I can show you are wrong; I argue that you can't show that you are right. But I can't show that I'm right either.

To me the real question to ask is this: why is it that neither of us (and nobody else) can prove our case about the relationship between consciousness and brain? And also perhaps, as many philosophers have suggested is the case, does our inability to do this have something to do with the particular way we reason?

Because science hasn't gotten there yet. Just like no one could understand magnetism or the sun going across the sky hundreds of years ago. I know this argument has probably been beaten into the ground, but you have to put yourself in those ancient people's shoes. They were sure there was no scientific explanation for these phenomena, just like many today are sure there is none for consciousness.

Sorry about that. :smile:
...
That's not quite a fair challenge. How can I reason logically if I'm not allowed to use my common sense?

When I say you shouldn't use common sense as an argument, I mean you can't use it as your only argument. You need logic to back it up. The sun looks like it's going around us, but it isn't.

I think we should stop arguing and simply accept the obvious, that the truth about consciousness cannot be known by reason alone, as so many people have asserted over the millennia, and accept that it cannot even be shown to exist by formally logical means, let alone to be this or that.

Um... I say it can be. That's where we disagree. You want me to just stop arguing and accept "the obvious", that you're right? That's a compelling argument, but no.
 
Last edited:
  • #122
Status, I want to be more specific about what Les said about "human activities", as I'm also struggling to understand how you would simulate them, at least conceptually.

Very simple: let’s say I’m holding a fork and all of a sudden I make a decision to drop it. Let’s examine this decision making process. My brain must be in a certain state before the drop, say state A. You can break down this state to a quantum level, to anything you want. The bottom line is there is a physical state that can be expressed in a matrix of certain values for each neuron, synapse, electron, photon, etc. Now comes the point in time when the decision making neuron must fire to cause the “drop the fork” reaction chain. My question is what specifically causes that neuron to fire? Yes, you can reduce that cause to a quark spin or a wave function, if you will, but that’s just begging the question. The ultimate question is what causes the system to be transformed from state A to state B (firing of the fork dropping neuron). I can think of only two causes. First, randomness / spontaneity. Whether it’s the electron’s undetermined position in the carbon atom, nuclear decay, gust of wind in your face, other natural random phenomena, whatever it is, the prime mover is random. (That’s assuming spontaneity exists, of course, which is a subject for another thread.) The other cause is determinism. The transformation from state A to state B is strictly determined by natural laws. Whether the neuron will fire or not completely depends on the current state, state A, all incoming input from other neurons, and the rules (brain fabric which determines thresholds etc.) which dictate what to do. Without going into metaphysics, is there anything else?

Whether you choose randomness or determinism, there’s a problem. If the neuron firing is caused by a random act, all our decisions are nothing but a roll of the dice. I find it hard to swallow, since it would make this very idea an outcome of randomness in someone’s mind. Determinism doesn’t make things better. If my decision making is the outcome of strict deterministic rules, we’re nothing but a cog in a huge machine following the rules; we don’t really think or make decisions. I find it also hard to believe because, again, that would mean your very idea of determinism is not the outcome of your independent thinking; it’s the outcome of some physical state and some rules, you couldn’t “think” otherwise, you’re programmed to say “we’re determined”. The third option is the combination of the two, of course, but again, the same criticism applies. So, how, conceptually, would you simulate the transformation from one state to the next?

Regards,

Pavel.
 
  • #123
Pavel said:
Status, I want to be more specific about what Les said about "human activities", as I'm also struggling to understand how you would simulate them, at least conceptually.

Very simple: let’s say I’m holding a fork and all of a sudden I make a decision to drop it. Let’s examine this decision making process. My brain must be in a certain state before the drop, say state A. You can break down this state to a quantum level, to anything you want. The bottom line is there is a physical state that can be expressed in a matrix of certain values for each neuron, synapse, electron, photon, etc. Now comes the point in time when the decision making neuron must fire to cause the “drop the fork” reaction chain. My question is what specifically causes that neuron to fire? Yes, you can reduce that cause to a quark spin or a wave function, if you will, but that’s just begging the question. The ultimate question is what causes the system to be transformed from state A to state B (firing of the fork dropping neuron). I can think of only two causes. First, randomness / spontaneity. Whether it’s the electron’s undetermined position in the carbon atom, nuclear decay, gust of wind in your face, other natural random phenomena, whatever it is, the prime mover is random. (That’s assuming spontaneity exists, of course, which is a subject for another thread.) The other cause is determinism. The transformation from state A to state B is strictly determined by natural laws. Whether the neuron will fire or not completely depends on the current state, state A, all incoming input from other neurons, and the rules (brain fabric which determines thresholds etc.) which dictate what to do. Without going into metaphysics, is there anything else?

Whether you choose randomness or determinism, there’s a problem. If the neuron firing is caused by a random act, all our decisions are nothing but a roll of the dice. I find it hard to swallow, since it would make this very idea an outcome of randomness in someone’s mind. Determinism doesn’t make things better. If my decision making is the outcome of strict deterministic rules, we’re nothing but a cog in a huge machine following the rules; we don’t really think or make decisions. I find it also hard to believe because, again, that would mean your very idea of determinism is not the outcome of your independent thinking; it’s the outcome of some physical state and some rules, you couldn’t “think” otherwise, you’re programmed to say “we’re determined”. The third option is the combination of the two, of course, but again, the same criticism applies. So, how, conceptually, would you simulate the transformation from one state to the next?

Regards,

Pavel.

The most concise way to phrase your argument is the following:

If you do something for a reason, it was determined, and there was no free will.
If you do something for no reason, it was random, and there was no free will.

I don't know where I heard it, but it's a great way of putting it. So are you saying there is no free will? If so, I don't see a problem in my simulation.

It is an oversimplification to say your brain popped from state A to state B. It is constantly evolving. Say you dropped the fork because you were thinking about free will, and this caused you to decide to drop a fork "randomly" to prove you had it. That's not really random at all, and since the simulation would "think" the same way (maybe it wouldn't experience its thought, but the thought would have consequences), it would drop the fork too. If it was for some "random" reason, and by that I mean it was caused by some process outside the normal deterministic evolution of brain states, like a nuclear decay, then any good simulation would have to include virtual nuclear decays to qualify as an accurate simulation.

If you are saying there is free will, then I disagree. You say you have a problem with both randomness and determinism. So what do you suggest is going on?
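Conceptually, the kind of state-to-state update being argued over here can be sketched in a few lines: a deterministic rule plus a seeded pseudo-random term standing in for events like nuclear decay. The specific rule and probabilities below are arbitrary toy choices of mine, not a model of anything real:

```python
import random

def next_state(state, rng):
    """One tick of a toy 'brain state': a deterministic update rule,
    occasionally perturbed by a pseudo-random event standing in for
    genuinely random physics (e.g. a nuclear decay)."""
    new = (3 * state + 1) % 1000      # arbitrary deterministic rule
    if rng.random() < 0.01:           # rare 'spontaneous' event
        new = (new + rng.randrange(1000)) % 1000
    return new

# With a fixed seed the whole trajectory is reproducible; with a
# different seed it diverges, just as two physically identical
# systems subject to quantum randomness would.
rng = random.Random(42)
state = 7
for _ in range(100):
    state = next_state(state, rng)
```

The point of the sketch is only that "deterministic rules with random variables" is a perfectly definite recipe for evolving a simulator from state A to state B; whether such a recipe captures everything a brain does is exactly what is in dispute.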
 
Last edited:
  • #124
I suggest there's something else going on. Whatever it is, metaphysical, emergent, epiphenomenal, I don’t know, but determinism and randomness just don’t cut it on their own, the way I see it. There is a reason why I made the argument so long. When summarized in your manner, you treat the argument on a high level of abstraction and don’t see the detail, where the real problem lies. When you program your computer, you don’t just tell it “think” or “make the decision”. You tell it specifically what you want it to do. And no, I was not asking for pseudo code either. It doesn’t even have to be a computer. Whatever your simulator is, I wanted to see how, specifically, you were planning on transforming from one state to the next. You said “evolution”, but that’s exactly my point – it’s too abstract. Concretely, how do you evolve the simulator? Do you create deterministic rules with random variables? Do you really believe such evolution will produce a brain capable of novelty, so inherent to humankind?

Thanks,

Pavel
 
  • #125
Hi,

Consciousness is awareness, feeling and understanding, whatever its physical manifestation in this reality might be.

juju
 
  • #126
This isn't complicated. It is a digital model of the physical atoms in the brain. They are subject to the normal forces an atom feels. I don't explicitly program "thinking"; it arises as a consequence of the structure and the rules relating its constituents, just like a real brain. If you don't agree this is possible, that this simulation would act just like a real brain, then tell me: what is the difference between a brain and a clock that allows you to simulate a clock but not a brain? They're both just matter. And your argument that determinism "just doesn't cut it" does not convince me. I don't like it either, but it's the conclusion I've arrived at logically.
 
  • #127
StatusX said:
It is a digital model of the physical atoms in the brain. They are subject to the normal forces an atom feels. I don't explicitly program "thinking", it arises as a consequence of the structure and the rules relating its constituents, just like a real brain.

Are you saying it's an emergent property? If so, how do you know that your simulation hit the target? After all, you're not creating an artificial brain; you're assigning an interpretation to a digital model. If the model comes up with a novelty, how do you know it's not a screw-up?


StatusX said:
If you don't agree this is possible, that this simulation would act just like a real brain, then tell me: what is the difference between a brain and a clock that allows you to simulate a clock but not a brain? They're both just matter. And your argument that determinism "just doesn't cut it" does not convince me. I don't like it either, but it's the conclusion I've arrived at logically.

That they're both matter is still an assumption. I believe there's a problem with asserting that the brain is purely matter, and that's been my argument. I'm not sure why it's difficult to see the problem with determinism. Let's say I have a guy standing by me with a gun who threatened to shoot me if I answer "YES" to anything. You, observing all of this, ask me "do you cheat on your wife?" I say "NO"; would you believe me? My analogy might be lame, but I hope you see the point. If you're programmed to believe we're determined, then your statement asserting it has little truth value, if any. You can't jump out of the system, say "oh, look, I'm determined", and then jump back in.


Thanks,

Pavel.
 
  • #128
StatusX said:
This isn't complicated. It is a digital model of the physical atoms in the brain. They are subject to the normal forces an atom feels. I don't explicitly program "thinking"; it arises as a consequence of the structure and the rules relating its constituents, just like a real brain. If you don't agree this is possible, that this simulation would act just like a real brain, then tell me: what is the difference between a brain and a clock that allows you to simulate a clock but not a brain? They're both just matter. And your argument that determinism "just doesn't cut it" does not convince me. I don't like it either, but it's the conclusion I've arrived at logically.

I want to check to see if I understand you correctly, and if you understand the implications of what at least a couple of us are saying. When you say, "I don't explicitly program 'thinking,' it arises as a consequence of the structure and the rules relating its constituents, just like a real brain [my emphasis]," that sounds like you are trying to make points by assuming your side of what we are debating is true. I think some of us are at least open to the possibility that the brain is not "creating" consciousness, just the way a switch that turns on a light bulb does not "create" light.
 
  • #129
Pavel, if you assert the brain is more than just matter, you'll have to provide a reason. Which part of its formation allowed a non-physical substance to creep in? And I'm not sure I understand your analogy about determinism. Are you suggesting that if we have no free will, we are destined to have a certain opinion, right or wrong? If so, I have to agree with you, but this doesn't mean we have opinions that are wrong and we'll never know it. We are constantly changing, and logical reasoning can cause us to see our error. Determinism does not mean we don't have the power to change our minds, it means that these changes can be predicted by the laws of physics.

Les, as I've mentioned before, I don't claim this thing is conscious. All I claim is that it tells us it is conscious. It doesn't even know that it's just a simulation. And just to be clear, it is not a contradiction to say that it's not conscious but still knows things. If you think it is, you aren't fully realizing the difference between the experience of knowledge and the function of knowledge.
 
Last edited:
  • #130
Les Sleeth said:
… I think some of us are at least open to the possibility that the brain is not "creating" consciousness, just the way a switch that turns on a light bulb does not "create" light.

Les, you bring up an interesting point. Do you think that, going off of the switch idea, consciousness is always present, or can it be absent?
 
  • #131
StatusX said:
. . . if you assert the brain is more than just matter, you'll have to provide a reason.

The zombie argument is the reason that's been offered. You may not be giving enough consideration to it. You can explain every function of consciousness with brain physiology, but you cannot account for how the self-aware aspect of consciousness has come about with brain physiology. You right now are calculating, measuring, weighing, computing . . . just what the brain seems set up to do. But what is the physical basis of the "you" which is controlling, observing, and understanding that?


StatusX said:
Les, as I've mentioned before, I don't claim this thing is conscious. All I claim is that it tells us it is conscious. It doesn't even know that its just a simulation.

If you don't claim it is conscious, then what is your point? I don't see the point of only arguing that zombie consciousness is possible, since we have computer programs right now that are nothing but zombies. Can a zombie program get so sophisticated it can fool genuinely conscious human beings into believing it is conscious? Possibly. But so what? Such a program still isn't self-aware awareness, and that is the only thing that's at issue when we are talking about whether physicalness alone can produce consciousness. At this time, all we can prove is that physicalness can produce zombie-awareness, which leaves the door open to the possibility that something in addition to the brain is involved in establishing consciousness.


StatusX said:
And just to be clear, it is not a contradiction to say that its not conscious but still knows things. If you think it is, you aren't fully realizing the difference between the experience of knowledge and the function of knowledge.

I believe it is a contradiction. I say there is no possible way to know anything without being conscious. To have a hard drive full of information which is "functioning" to achieve things is not the same as conscious knowing; but of course, if one degrades the meaning of "knowing" then one might get away with saying it is.

The experience of knowledge is precisely what we are talking about, not some dumb machine operating according to its programming. Besides, where do you think even the programming for the dumb machine comes from? It doesn't figure out anything by itself; conscious humans figured it out and programmed it into systems.


StatusX said:
Which part of its formation allowed a non-physical substance to creep in?

How do you know the development of the central nervous system was driven by only physical factors? There isn't a single, solitary example outside of living systems where chemistry and physics have organized themselves to the extent and quality that we find in living systems. Until someone demonstrates physical principles alone can be shown to produce something like consciousness, then the conservative and objective thinker has to wait for more information before assuming physicalness can do it.
 
Last edited:
  • #132
Jeebus said:
Les you bring up an interesting point. Do you think that, going off of the switch idea, consciousness is always present or can it have absence?

I am not sure what you mean by consciousness being always present. Do you mean in us, or in the universe, or in some other way. If you mean in us, then my view is that our conscious presence varies from fully present to present but unconscious (like when someone is in a coma). Sleep seems pretty unconscious, yet people are known to detect things going on around them while they sleep, so during sleep I think consciousness is present but not fully turned on. Some people believe they or others can leave the body (which would suggest one can be not-present), but I don't know enough from personal experience to have a strong opinion about that.
 
  • #133
Wow, two more pages since I last looked here. I've read all of the comments and I'm not sure I understand the point that StatusX is making. I think I understand it but then once I have it, I don't see what the big deal is.
Let me see if I can state it.

Statement 1 - It is conceivable that a very powerful computer/machine could be built and simulate all the functions of the brain.

Ok, I got that one and I have no problem with it.

Statement 2 - Once this machine is built we cannot know whether it is truly conscious or not.

Absolutely I agree with this.

Is this it? Do I have it all? Because I have to ask, "What's the big deal?" Nothing here is new or astounding. Nothing can be concluded from any of these statements, certainly not materialism. So why is there so much debate?

Have I missed a point StatusX?

Now if you were claiming that this machine would indeed be conscious, then we have a big problem but that's not what you're claiming. Nothing here supports a physicalist view in any way so it seems things have gotten unnecessarily stirred up here.
 
  • #134
Ok, I realize I've been unclear on my point. I had forgotten this myself, but the reason I brought up the whole supercomputer thing was because the argument had shifted to whether consciousness was causal or not. Now I'm glad you've gotten this far, as I've been having trouble convincing people such a computer could exist. (I'll still try to explain why to whoever doesn't agree) Now assuming it could, what would that mean? Keep in mind, this computer behaves EXACTLY the same as us, even if we don't know what's going on "inside its head." There are two possible interpretations of such a machine:

1. It is not conscious, but still behaves exactly as we do. The conclusion? Consciousness is not causal. Easy, right?

2. It is conscious, and behaves exactly as we do. This says nothing about the causal nature of consciousness.


Now the following represents the two main possibilities I accept as a "physicalist." (I use that term a little loosely, since the second view would probably be more considered dualism, but I'll explain that):

1. Consciousness is an illusion. I don't particularly like this idea, but it is not as dismissible as it sounds. Because if the first case above is true, that computer would be trying to understand its own, non-existent consciousness. It would not only disagree with you if you claimed it was unconscious, or that consciousness was an illusion, but would also disagree if you told it it was just a simulation and not a real person. So how do we really know we aren't just simulations, or zombies, or some other non-conscious entities that believe we are conscious?

2. Consciousness is real, and it exists everywhere there is a complex system to sustain it. I obviously don't know what they are yet, but I assert that there are strict rules that relate some aspect of the configuration of matter to consciousness. Just like hooking up a battery to a circuit gives rise to current, hooking together the right components, whether they're neurons, computer chips, or whatever, gives rise to conscious experience. This is usually called dualism, but I've extended the terminology to call any theory of reality in which everything obeys derivable rules a physicalist theory.

Now is consciousness causal in this view? I don't know yet. There are two variations that result:

a) If it is causal, I say it is only at the quantum level. Maybe it causes random wavefunction collapse, maybe it influences the result of the collapses, or maybe it does something entirely different. I don't believe it is causal at an observable level because then there would be a way to observe its effect on the physical. This is not a rigorous argument, and I'll get back to it some other time, but it's late now.

b) If it isn't causal, then we're just sort of "along for the ride," with everything we do being controlled by our physical and deterministic brains, and all we do is experience it.


So my point is, if you believe the supercomputer argument, and you believe physicalism, in the broad sense I've defined, then you are pretty much limited to the three views above. Of course you can arbitrarily claim the rules are different for brains than for anything else, but I find such a claim inelegant and unsupportable. I'm not yet prepared to decide among these three.
 
Last edited:
  • #135
StatusX said:
Pavel, if you assert the brain is more than just matter, you'll have to provide a reason. Which part of its formation allowed a non-physical substance to creep in?

It is the same reason why the standard atomic framework was replaced by Niels Bohr’s model, and later by Schrodinger’s concepts of how atoms really work - they failed to explain certain behavior and therefore were modified to make the conceptual framework more consistent. Similarly, the random / deterministic model of consciousness is also weak in its explanatory power. It produces, as Les has adequately put it, zombies, not conscious beings. That’s the reason for something else to creep in.

In order to continue, I’d like to stick with the specific thread of thought in question, and not try to discover quantum mechanics, human art, and mysticism at the same time. Every time you deviate, you open up a Pandora’s box and we’ll never be able to come to any conclusion. Let’s stick with the question at hand, and if we reach an impasse, we’ll know exactly where we disagree. If there’s a way to resolve the disagreement through empirical observations or logical methods, good for us; if not, at least we’ll know what it all comes down to (I’m sure it’ll be something in the realm of metaphysics).

First, it seems like we really need to agree on what we mean by “conscious being”. I don’t like your behavioristic interpretation of it, as anything which behaves like human consciousness. If I didn’t know anything about electromagnetic waves and electronics, I’d definitely conclude that my radio is very conscious and intelligent (well, depending on the station I tune in). However, we both know that’s not the case. Besides, you’re comparing it to something we’re trying to define in the first place. I think the aspect of subjectivity is necessary in defining a “conscious being”. I’m not sure I like the self-awareness aspect, even though I agree with it, because it’s too vague and hard to define. What, specifically, is aware? So, to call something “conscious”, I think it needs to have subjective qualities. For example, intention. I intend to drop the fork. My radio cannot intend, even though it exhibits other conscious properties. “I think” would be another. When I think, I manipulate objects in my mind in such a manner that I assign qualitative, subjective attributes to them, and I operate on them without restrictions. A zombie, on the other hand, does not have this privilege; even though it might be self-aware, it is bound by restrictions in what attributes it can assign to objects. So, before moving on, do you agree with this notion of “conscious being”? As soon as we agree on the definition, we’ll see if we can build a device, which would be governed by deterministic rules, and yet satisfy our notion of “conscious being”.

The other thing we need to be clear on is the kind of simulator you’re building. Are you building a replica of the human brain that will exhibit similar physical properties on its quantum level, or are you simulating it through interpretation of 1’s and 0’s in a digital model? There’s a big difference between the two and we should be clear about it.

I’ll comment on my determinism analogy as soon as we agree on the terminology. By the way, coming from a different country, the experience of two different cultures made me a strong believer in cultural relativism and social conditioning in general. You throw genetics and the whole of biology on top of it, and I’m convinced our consciousness is 99% determined. But there’s that 1% that crept in and made things so mysterious, at least for me. I know you disagree with the existence of this 1% and I’d like to make that the focus of this discourse.

Thanks,

Pavel.
 
  • #136
Pavel said:
I think the aspect of subjectivity is necessary in defining a “conscious being”. I’m not sure I like the self-awareness aspect, even though I agree with it, because it’s too vague and hard to define. What, specifically, is aware? So, to call something “conscious”, I think it needs to have subjective qualities.

I like the way you make your points. I suppose you are referring to my use of the term "self-aware" since I am mainly the one who uses that around here. I'll explain why I tend to use that to characterize subjectivity, and why now thinking about it I can imagine a better term.

In past threads I've argued that if we examine consciousness, it seems to do several very basic things which are more fundamental than thinking. One is that consciousness is sensitive to stimulation, which is pretty obvious. A second basic trait I call retention, which is that consciousness not only senses, it holds patterns of what has been sensed. These patterns range in permanence from simple impressions and memory, to a third basic quality I've called integration. Integration is retention too, but it seems more permanent. An example would be understanding, where a collection of related events suddenly produce a singular sort of result in consciousness. Suddenly "getting" how to ride a bike is like that, but there is also intellectual understanding, of course.

It seems to me that this integrative quality of consciousness is what most establishes self, or subjectivity. (A computer can do all the rest, but not that.) Examining humans, it seems there is a very high realization of the integrative thing because we can function single-pointedly doing complex tasks. It's like all that's integrated into consciousness is right there guiding the focused human even though he might not be thinking about everything that's contributed to his knowing pool.

I've looked at animal life from amoebas all the way up, and it seems to me that as the integrative aspect improves, so does the sense of self. So I started saying "self aware" to describe that because I believe it is the most defining thing there is about consciousness. You are right that it doesn't communicate other qualities that are present, such as volition, or as some like to say "what it's like" to sense something.

Possibly a better term for me to use would be self-forming or self-establishing or something like that.
 
  • #137
StatusX said:
It is logically possible that a being could exist with the same physical brain structure as us and not be conscious.
How do you know that?

But my argument, and yes it is a materialist one, is that in this universe, any beings with the same physical brain structure will have the same conscious state.
How do you know that?

What the computer subjectively understands is irrelevant. I'm not making a claim one way or the other about whether it is conscious. I'm saying it will behave the same as us,
So you say. I don't know why you believe it. There's certainly no evidence. Hell, we've been looking for evidence since the nineteenth century and there still isn't any.

and to see how it behaves, we must have a way of transmitting its signals into physical actions the way our muscles do it for us. Since it doesn't have muscles to move its vocal cords, a subprogram must translate these signals into text for it.
I thought this thing was supposed to behave like a human being.

You are missing the point entirely. Of course it would be hard. It would be harder than anything we've done up to this point. I'm not even sure it would be practically possible at any point in the future. All I'm saying is that it is theoretically possible...
How do you know that?

What about the ones and zeroes representing that sentence? That sentence represents an idea. So what if this computer doesn't "understand" the sentence? I say a smarter one could.
No, the sentence is a re-symbolisation of the ones and zeros. It contains no more and no less information than the string of ones and zeros. It is certainly not an idea. An idea is defined (in my dict.) as 'any content of the mind, esp. the conscious mind'. So a computer can have an idea only if it has mind. So far you've failed to show that it can.

We must have different definitions of consciousness. What I call consciousness is experience. It is difficult to explain exactly, but it's basically what it's like to do things.
Yes, I'm ok with 'what it is like' as a working definition.

It is difficult to imagine the functions associated with these experiences without consciousness, but it is not logically impossible.
I'd say it was. I don't want to convince you, but just point out that your certainty is misplaced. Your arguments have been made by many fine scientists and one or two philosophers. They have been shown not to stand up to analysis. If they did then your ideas would be widely held.

There is no logical reason a non-conscious entity couldn't talk to us about its ideas. None whatsoever. There would just be no first person experience of the ideas.
I don't think everyone shares your view of what is logical.

Because science hasn't gotten there yet. Just like no one could understand magnetism or the sun going across the sky hundreds of years ago. I know this argument has probably been beaten into the ground, but you have to put yourself in those ancient peoples' shoes. They were sure there was no scientific explanation for these phenomena, just like many today are sure there is none for consciousness.
You do our ancestors a disservice, and misunderstand the nature of metaphysical questions.

When I say you shouldn't use common sense as an argument, I mean you can't use it as your only argument. You need logic to back it up. The sun looks like its going around us, but it isn't.
There is this ridiculous idea going around that if one uses ones common sense then one is bound to conclude that the Earth goes around the sun. I take no notice of it.

Um... I say it can be. That's where we disagree. You want me to just stop arguing and accept the obvious that you're right? That's a compelling argument, but no.
Ok. Demonstrate by use of reason alone that consciousness exists and I'll withdraw my comment. If you can't do this then you ought to wonder why not.
 
Last edited:
  • #138
Hi,

From my own experiences, I must conclude that consciousness (in terms of self awareness, perception, etc.) does not only reside in a physical vehicle. I have had out-of-body experiences, and experiences of alternate realities, that are as vivid and as real as (if not more so than) this physical reality.

juju
 
  • #139
Canute,
"How do you know?" and "I don't think so" are not very convincing arguments. All I'll say is that it is DEFINITELY logically coherent to imagine a non-conscious being acting as a conscious one because we don't know anyone else is conscious, and yet they act just like us. "Logical" doesn't mean "consistent with your preconceptions."
 
  • #140
Pavel said:
It is the same reason why the standard atomic framework was replaced by Niels Bohr’s model, and later by Schrodinger’s concepts of how atoms really work - they failed to explain certain behavior and therefore were modified to make the conceptual framework more consistent. Similarly, the random / deterministic model of consciousness is also weak in its explanatory power. It produces, as Les has adequately put it, zombies, not conscious beings. That’s the reason for something else to creep in.

Those models were changed because they couldn't explain the outcomes of certain experiments. What experiments show a problem with the random/deterministic model of choice (I think you are confusing this with consciousness)? Just because the idea is unsettling to you doesn't mean it's incoherent, or even wrong. And by the way, zombies refer to the hypothetical beings who are identical to us in every physical way but lack consciousness. We talked about them a lot earlier in this thread. Just to reiterate, when I talk about consciousness, I mean the subjective experience of what it's like to see a color, feel pain, etc. Choice is something different.

In order to continue, I’d like to stick with the specific thread of thought in question, and not try to discover quantum mechanics, human art, and mysticism at the same time. Every time you deviate, you open up a Pandora’s box and we’ll never be able to come to any conclusion. Let’s stick with the question at hand, and if we reach an impasse, we’ll know exactly where we disagree. If there’s a way to resolve the disagreement through empirical observations or logical methods, good for us; if not, at least we’ll know what it all comes down to (I’m sure it’ll be something in the realm of metaphysics).

I'm trying to make my arguments as rigorous as possible. If someone claims they are false because of incomplete knowledge of some other subject, like quantum mechanics, I need to try to correct them. But,I agree, let's try to stay on topic.

First, it seems like we really need to agree on what we mean by “conscious being”. I don’t like your behavioristic interpretation of it, as anything which behaves like human consciousness. If I didn’t know anything about electromagnetic waves and electronics, I’d definitely conclude that my radio is very conscious and intelligent (well, depending on the station I tune in). However, we both know that’s not the case.

Consciousness is experience, as I just described. I don't know where I claimed it was behavioral. I actually explicitly described two beings that behaved identically but one was conscious and the other wasn't, so I don't understand where you got this idea.

Besides, you’re comparing it to something we’re trying to define in the first place. I think the aspect of subjectivity is necessary in defining a “conscious being”. I’m not sure I like the self-awareness aspect, even though I agree with it, because it’s too vague and hard to define. What, specifically, is aware? So, to call something “conscious”, I think it needs to have subjective qualities. For example, intention. I intend to drop the fork. My radio cannot intend, even though it exhibits other conscious properties.

I'm going to have to flat out disagree with you here. Intention is not consciousness. We can see that other people have intentions, but we know nothing of what they experience. The subjective feeling of what it's like to want to do something is an aspect of consciousness, but they are not the same thing.

“I think” would be another. When I think, I manipulate objects in my mind in such a manner that I assign qualitative, subjective attributes to them, and I operate on them without restrictions.

This is getting closer. The actual mental images are a result of consciousness. However the attributes you assign them are not. There is clearly a place in your brain where you store the attributes of objects. These are called schema, and I have little doubt psychology will provide a scientific explanation for them.

Just on a side note, those of you who claim all these functions of the brain, like knowledge, thought, etc., are a result of non-physical consciousness: what is the brain for? It's there, it is obviously important, and we understand only a tiny fraction of what it does. Are you claiming it's superfluous, and that it only does whatever our "consciousness" tells it to?

A zombie, on the other hand, does not have this privilege; even though it might be self-aware, it is bound by restrictions in what attributes it can assign to objects.

Again, your notion of a zombie is unclear, and not the same as the one the rest of us use.

So, before moving on, do you agree with this notion of “conscious being”? As soon as we agree on the definition, we’ll see if we can build a device, which would be governed by deterministic rules, and yet satisfy our notion of “conscious being”.

Look over what I've typed above and see what you agree with and what you would change.

The other thing we need to be clear on is the kind of simulator you’re building. Are you building a replica of the human brain that will exhibit similar physical properties on its quantum level, or are you simulating it through interpretation of 1’s and 0’s in a digital model? There’s a big difference between the two and we should be clear about it.

I assume you mean by the second description a machine where I try to program in the functions of the brain without worrying about the actual physical structure of it. If that's what you mean, then my simulator is the first description, accurate down to the atom. It's not practical, but if we had a computer the size of a galaxy, I think it could be done, and "could be" is what's important.

I’ll comment on my determinism analogy as soon as we agree on the terminology. By the way, coming from a different country, the experience of two different cultures made me a strong believer in cultural relativism and social conditioning in general. You throw genetics and the whole of biology on top of it, and I’m convinced our consciousness is 99% determined. But there’s that 1% that crept in and made things so mysterious, at least for me. I know you disagree with the existence of this 1% and I’d like to make that the focus of this discourse.

I'm not sure what you mean by "our consciousness is 99% determined."
 
  • #141
Excellent post StatusX.

StatusX said:
There are two possible interpretations of such a machine:

1. It is not conscious, but still behaves exactly as we do. The conclusion? Consciousness is not causal. Easy, right?

2. It is conscious, and behaves exactly as we do. This says nothing about the causal nature of consciousness.

Yep

1. Consciousness is an illusion.

So how do we really know we aren't just simulations, or zombies, or some other non-conscious entities that believe we are conscious?

I agree with the two possibilities you have posted. This first one, however, is not my favorite. First of all, the fact that I cannot know whether this machine is conscious or not does NOT mean that I cannot know whether I am conscious. This I know to be the case. Which leads to the first point about consciousness being an illusion. I've never liked this one because it doesn't really explain anything. If we cannot explain how the brain produces consciousness then how the hell are we going to explain how the brain produces the illusion of it? I'm not even sure what the difference is. It seems the same problems remain. I always thought certain aspects of illusions were a function of consciousness to begin with. How can you have an illusion without consciousness? Who is it that is experiencing the illusion? And how do they experience it if consciousness is just an illusion? This one just seems messy to me.

2. Consciousness is real, and it exists everywhere there is a complex system to sustain it. I obviously don't know what they are yet, but I assert that there are strict rules that relate some aspect of the configuration of matter to consciousness. Just like hooking up a battery to a circuit gives rise to current, hooking together the right components, whether they're neurons, computer chips, or whatever, gives rise to conscious experience. This is usually called dualism, but I've extended the terminology to call any theory of reality in which everything obeys derivable rules a physicalist theory.

This one I like much better. My only comment here is that your use of the word physical may not be consistent with others posting here. I have come to similar conclusions as you and I don't consider myself a physicalist.
I can make the argument that everything obeys rules at some level and I have always argued that if everything is physical by definition then what good is the word? It doesn't distinguish anything from anything else. But this is all semantics. Maybe some of the other people participating here can tell us what they think a physicalist is. You may find that you are not one based on their definitions. This might explain some of the heated debate happening here for no apparent reason.

I personally believe the distinction between physical and non-physical is the method one uses to gain knowledge of it. This is why people claim that the scientific method, as it currently exists, cannot explain consciousness.

Now is consciousness causal in this view? I don't know yet. There are two variations that result:

a) If it is causal, I say it is only at the quantum level.

b) If it isn't causal,

I agree with these as the possibilities. Don't have a clue which one is closer to truth.

So my point is, if you believe the supercomputer argument, and you believe physicalism, in the broad sense I've defined, then you are pretty much limited to the three views above. Of course you can arbitrarily claim the rules are different for brains than for anything else, but I find such a claim inelegant and unsupportable. I'm not yet prepared to decide among these three.

Again, I will say that I agree with all your choices and I don't believe in physicalism. I think we just need to make sure we're all using the same definition.
 
Last edited:
  • #142
Les Sleeth said:
Possibly a better term for me to use would be self-forming or self-establishing or something like that.

Les, I'm of the opinion that it doesn't matter what you call it, as long as it's clear what you mean by it, which you explained well. I suspect it could be argued that sensitivity to stimuli, retention, and integration can be exhibited in neural networks. When you train a net, you provide feedback to it, which could be viewed as stimulation in some context. The retention of “memory” is the adjusted weights, which consequently allow it to recognize patterns. On some level of abstraction, that’s being “smart” and possibly conscious. I will not argue that’s the case; I’m merely saying it’s easier to debate those qualities. I want to pick something very obvious and yet challenging for a materialist to handle. For the sake of progress in this argument, I want to stick with one apparent quality that we all attribute to consciousness, agree on the criteria that will allow us to apply it to an instance and say “yeap, that one is conscious”, and finally simulate the property and see if it satisfies our condition or criterion. If StatusX or anybody is able to demonstrate such a simulation, so be it, I’ll be glad I learned something new. If not, I’d like to discuss possible accounts for this “mysteron” in our brain that couldn’t be simulated by randomness or deterministic rules.
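As an aside, the "feedback adjusts weights, and the adjusted weights are the net's retained memory" idea above can be made concrete with a toy example. This is only an illustrative sketch under my own assumptions (a single perceptron trained on the logical AND function; the function names are invented for this example), not anything proposed in the thread:

```python
# A minimal perceptron: "stimulation" is the input pattern, "feedback"
# is the error signal, and "retention" is the weight vector that the
# feedback gradually shapes.
def train_perceptron(samples, epochs=20, lr=0.1):
    # samples: list of (inputs, target) pairs; inputs is a list of
    # numbers, target is 0 or 1.
    n = len(samples[0][0])
    w = [0.0] * n  # the "memory" starts blank
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            # sense the input pattern
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            out = 1 if activation > 0 else 0
            # feedback: nudge the weights toward the desired response
            err = target - out
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# After training on logical AND, the retained weights "recognize"
# the pattern on inputs they are shown again.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(data)
print([predict(w, b, x) for x, _ in data])  # prints [0, 0, 0, 1]
```

Whether this counts as retention or integration in Les's sense is exactly the kind of question under debate; the sketch only shows that the mechanical part of the analogy is trivially realizable.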

I think “intentionality” is a very good candidate, what do you think? When I say “I intend to graduate from college”, you know precisely what I mean. It’s not “I will graduate”, it’s not “I’m thinking about graduation”, it’s just that - intend. It’s hard to explain it in other terms, and yet you have no problems understanding my state of mind. StatusX, if you’re willing to play along, do you accept that “intention”, as illustrated, is an inherent quality of a human mind, which we define as conscious? If so, I’ll offer criteria which we can use to determine if somebody is in the state of “intending”. I probably won’t be able to offer any reductionistic criteria, because the inability to do so is the precise point of the argument, but I think there are plenty of reasonable ways that allow us to observe somebody being in the state of “intention”.

In a nutshell, here's where we are. StatusX, or anybody with similar views, claims that all human conscious activities can be reduced to particles governed by natural deterministic laws and some random quantum events. Such a view would allow us to simulate consciousness. We, whoever doesn’t buy it, claim there is something unaccounted for in that picture. As a proof, I offer a specific quality of consciousness that I will challenge you to reduce and simulate in principle.

Thanks,

Pavel.

I’m sorry if I’m being too particular, detailed, and slow in my approach. This is because it’s very easy in these conversations to digress and jump all over the place without getting anywhere. I just want to stick to the point and make this productive, not a waste of time. I hope you share the same feeling. :smile:
 
  • #143
Status, I didn't read your post before posting my own... So, you don't like intention as the pick. Very well, let's stick with experience, as I have similar doubts in that area as well. Let's define the criteria that will allow us to conclude that the thing, whatever it is, is experiencing color, or better yet, pain. The only way I see to do it is to believe the "experiencer" about his experience and to relate it to your own experience. After all, color is not the frequency of light when I get hit on the head and see blue spots (no, it doesn't happen too often :) ). You can also put electrons through my brain and cause me to experience red, but I'm not experiencing electrons, I'm experiencing RED. Similarly, say an alien with different physiology is pleading for help because he's in pain. How can I verify that he's not faking it and is indeed in a state of pain? Even if I study his physiology, how do I know it's the same type of experience? How do I know that his own pattern of neuron firing causes PAIN?


Thanks,

Pavel.

BTW, when I said our consciousness is 99% determined, I meant that I strongly believe that our decisions about choices we make in our lives are almost completely determined by social conditioning, biological and physical laws. Almost! :)
 
  • #144
StatusX said:
Canute,
"How do you know?" and "I don't think so" are not very convincing arguments.
I agree. The first is a question and the second a simple observation. I note that you always dodge my questions. You state your opinions and expect everybody to agree. Unfortunately your arguments do not hold water, which you'd know if you read the scientific literature on consciousness. When I ask you how you know all these things you assert you don't answer. This leaves me unable to say very much in reply except to point out that you're guessing.

All I'll say is that it is DEFINITELY logically coherent to imagine a non-conscious being acting as a conscious one because we don't know anyone else is conscious, and yet they act just like us. "Logical" doesn't mean "consistent with your preconceptions."
Well, here you go again. It isn't logically coherent as far as many people, including many scientists and philosophers, are concerned until someone has shown that it is. You won't acknowledge that the idea of non-conscious beings behaving like conscious ones is an ad hoc conjecture. What you are doing is making an assumption and then using it as an axiom from which to derive the truth of your assumption. That doesn't work. You need to come up with some reason why you or anybody else should believe that a non-conscious being would behave like a conscious one. Obviously you believe it, but as yet I don't know why you do. I don't want to just argue about your opinions, it won't get us anywhere.
 
Last edited:
  • #145
Canute said:
You need to come up with some reason why you or anybody else should believe that a non-conscious being would behave like a conscious one.

Canute, I have interpreted StatusX to mean that there is no reason to believe why a non-conscious being wouldn't act like a conscious being. And he is right about this. If there were a reason to believe that, then there would be no hard problem.

But if he is saying that a non-conscious being would indeed act like a conscious being then obviously he cannot know this for the very same reasons. I may have misunderstood StatusX but it sounds like he is claiming the first and not the latter. StatusX am I correct?
 
  • #146
Fliption said:
Canute, I have interpreted StatusX to mean that there is no reason to believe why a non-conscious being wouldn't act like a conscious being. And he is right about this. If there were a reason to believe that, then there would be no hard problem.

But if he is saying that a non-conscious being would indeed act like a conscious being then obviously he cannot know this for the very same reasons. I may have misunderstood StatusX but it sounds like he is claiming the first and not the latter. StatusX am I correct?
Maybe you're right. If so I apologise for the misunderstanding. I have no problem with the idea that this question cannot be decided by inference from the evidence of our senses or by pure deduction. It's what seems to be the case.
 
  • #147
Fliption said:
Canute, I have interpreted StatusX to mean that there is no reason to believe why a non-conscious being wouldn't act like a conscious being. And he is right about this. If there were a reason to believe that, then there would be no hard problem.

Of course there is no reason to believe that a non-conscious being wouldn’t behave as conscious, because you haven’t provided a clear definition of either of those. We can argue till we're blue in the face in that manner. As I said, and StatusX actually agreed, let’s leave behaviorism out of this. There’s no point debating whether it can act. Yes, it can; my radio talks and sounds intelligent, yet nobody would argue it’s conscious.

Instead, I don’t mind adopting the notion of experience Status suggested, which makes experience necessary for something to qualify as “conscious”. So, my question remains: tell me how you would determine if your artificial silicon child is in fact experiencing pain. To be more articulate, your plastic creature complains that it is experiencing pain and asks you to terminate its existence because of it. You check the circuitry, voltages, blah blah blah, and to your amusement you find that the artificial neurons that are responsible for pain are firing properly… what do you do, is your child faking it?

Thanks,

Pavel.
 
  • #148
Fliption said:
My only comment here is that your use of the word physical may not be consistent with others posting here. I have come to similar conclusions as you and I don't consider myself a physicalist.
I can make the argument that everything obeys rules at some level, and I have always argued that if everything is physical by definition, then what good is the word? It doesn't distinguish anything from anything else. But this is all semantics. Maybe some of the other people participating here can tell us what they think a physicalist is. You may find that you are not one based on their definitions. This might explain some of the heated debate happening here for no apparent reason.

I personally believe the distinction between physical and non-physical is the method one uses to gain knowledge of it. This is why people claim that the scientific method, as it currently exists, cannot explain consciousness.

To be more specific, I believe there is something that explains both the physical world and the mental world in terms of rules. Maybe this is something we don't have yet, and maybe it's just a different way of looking at the physical theories. But the reason I call myself a physicalist is that I believe a system is completely described by its physical state. This obviously means either zombies aren't possible or they're the only thing possible. Looking at a system, you should be able to predict whether or not it's conscious.

Pavel said:
Status, I didn't read your post before posting my own... So, you don't like intention as the pick. Very well, let's stick with experience, as I have similar doubts in that area as well. Let's define the criteria that will allow us to conclude that the thing, whatever it is, is experiencing color, or better yet, pain. The only way I see to do it is to believe the "experiencer" about his experience and to relate it to your own experience. After all, color is not the frequency of light when I get hit on the head and see blue spots (no, it doesn't happen too often :) ). You can also put electrons through my brain and cause me to experience red, but I'm not experiencing electrons, I'm experiencing RED. Similarly, suppose an alien with a different physiology is pleading for help because he's in pain. How can I verify that he's not faking it and is indeed in a state of pain? Even if I study his physiology, how do I know it's the same type of experience? How do I know that his own pattern of neuron firing causes PAIN?

Well, now that you see what I mean by consciousness, you're free to go back over any of the old posts in this thread, because that definition is the one I was using all along. However, I know that's a lot to go through, so I'll summarize some of my arguments briefly here, and you can go back and find where they were first brought up for more detail. There's some new stuff here too.

First of all, the whole reason I'm even talking about zombies is the possibility that we could be them. Zombies, just to be clear, are hypothetical beings that have the same exact physical makeup as us but are not conscious. I assert that because they're physically identical, they would behave identically, and that's why I've been mentioning behavior. This may be a controversial stance, and I know that Canute, for one, probably disagrees. Before I get into why I think behavior is entirely physically explainable, I'll just wrap up the zombies with the point from my first or second post in this thread, which was this: zombies would try to understand consciousness just like us, but they would fail because it isn't really there for them. So how can we know that it's really there for us and that we'll succeed? Keep in mind, a zombie behaves exactly like us, so he can explain what he's experiencing, he can argue about consciousness, and he will claim till the day he dies that he's a conscious being, but there would be no substance to these statements.

Now this argument only holds water if you believe my assertion that our behavior is physically explainable. Surely, you'd say, our decision to do something or our experience of pain is a mental event, and can't be explained in terms of atoms and forces. Well, maybe that's true, but the fact is we have an extremely powerful physical computer in our head, and it must be doing something. I say that every decision we make and every color we see has a direct correlate in the physical brain, something like a set of neurons firing. These are what really cause all the physical results of our actions, and our experiences are just by-products.

To make this more clear, think about the physical process that leads up to you deciding to randomly drop a fork. Work it backwards. The muscles in your hand moved. This must have been in response to neurons in your spinal cord firing. That was caused by some signal from your brain which coordinated the various signals that made your fingers move fluidly. But what caused this signal? A non-physical mental event? So particles just started spontaneously moving in your brain? What about conservation of momentum? I say that a physical event caused it, and we just had an experience of this event. There have been experiments where people's brains were scanned and they were told to tap their finger at a random time. About a third of a second before the conscious experience of the decision to tap, activity began in the unconscious regions of the brain. What caused this activity? It isn't yet known, but it wasn't conscious thought, and I believe it is still physically explainable.

Canute said:
I agree. The first is a question and the second a simple observation. I note that you always dodge my questions. You state your opinions and expect everybody to agree. Unfortunately your arguments do not hold water, which you'd know if you read the scientific literature on consciousness. When I ask you how you know all these things you assert you don't answer. This leaves me unable to say very much in reply except to point out that you're guessing.

I'm sorry you feel that way. I try my best to respond to any valid arguments I see, but I do seem to be alone in this corner, and it gets tiring to rebut every argument myself. That being said, this is philosophy. Obviously I have no proof of anything I've said. It is based on a combination of evidence from experience and experiments and my own opinions. I try to explain why I feel the way I do, but I can't convince you if your opinion is fundamentally and irrevocably different. And when you say, "how do you know" or "I don't think so" I just don't feel a need to respond. Explain why you think my arguments are wrong, or what you think is right.

Canute said:
Well, here you go again. It isn't logically coherent as far as many people, including many scientists and philosophers, are concerned until someone has shown that it is. You won't acknowledge that the idea of non-conscious beings behaving like conscious ones is an ad hoc conjecture. What you are doing is making an assumption and then using it as an axiom from which to derive the truth of your assumption. That doesn't work. You need to come up with some reason why you or anybody else should believe that a non-conscious being would behave like a conscious one. Obviously you believe it, but as yet I don't know why you do. I don't want to just argue about your opinions, it won't get us anywhere.

Fliption said:
Canute, I have interpreted StatusX to mean that there is no reason to believe that a non-conscious being wouldn't act like a conscious being. And he is right about this. If there were a reason to believe that, then there would be no hard problem.

Fliption is right. All I was saying is that it is coherent (i.e., not an inherent contradiction) to talk about non-conscious beings which behave exactly as we do. I am making no claim about whether they really do or could exist.
 
  • #149
Pavel said:
I’m sorry if I’m being too particular, detailed, and slow in my approach. This is because it’s very easy in these conversations to digress and jump all over the place without getting anywhere. I just want to stick to the point and make this productive, not a waste of time.

Not at all. I appreciate anyone’s efforts to keep things on target (unless the target is boring, and then diversions are welcome :smile:).


Pavel said:
I want to pick something very obvious and yet challenging for a materialist to handle. For the sake of progress in this argument, I want to stick with one apparent quality that we all attribute to consciousness, agree on the criteria that will allow us to apply it to an instance and say “yeap, that one is conscious”, and finally simulate the property and see if it satisfies our condition or criterion.

That’s what I’m after as well. Some who post here seem to prefer what I consider a more rationalistic type of argument (e.g., Chalmers). It appears you believe you can make “intention” challenging to the physicalist. I have another approach, which I’ll elaborate on more below.


Pavel said:
I suspect that it could be argued that sensitivity to stimuli, retention, and integration can be exhibited in neural networks. When you train a net, you provide feedback to it, which could be viewed as stimulation in some context. The retention of “memory” is the adjusted weights, which consequently allow it to recognize patterns. On some level of abstraction, that’s being “smart” and possibly conscious.
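[As a concrete aside on the weight-adjustment idea in the quote above: a single perceptron trained with simple error feedback ends up "remembering" a pattern purely in its weights. This is only an illustrative toy sketch with made-up names, not anyone's model of consciousness:]

```python
# Toy illustration: a neural net's "memory" is nothing but adjusted
# connection weights. A single perceptron learns the AND function from
# feedback; afterwards the learned pattern is "retained" in w and b.

def train_perceptron(samples, epochs=20, lr=1):
    """Classic error-correction rule: nudge weights toward the target output."""
    w = [0, 0]  # connection weights -- the "retained memory"
    b = 0       # bias
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            err = target - out          # the "feedback" signal
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0

# AND truth table as training data
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x1, x2) for (x1, x2), _ in data])  # -> [0, 0, 0, 1]
```

[After training, nothing stores the AND table explicitly; the "knowledge" exists only as the numbers in `w` and `b`, which is all that "retention" amounts to at this level of description.]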

I noticed you accounted for sensitivity and simple retention with the neuronal model of consciousness. In my post I acknowledged that was possible too. However, you did not offer a neurological explanation for integration.


Pavel said:
I think “intentionality” is a very good candidate, what do you think? When I say “I intend to graduate from college”, you know precisely what I mean. It’s not “I will graduate”, it’s not “I’m thinking about graduation”, it’s just that - intend. It’s hard to explain it in other terms, and yet you have no problems understanding my state of mind. . . . do you accept that “intention”, as illustrated, is an inherent quality of a human mind, which we define as conscious? If so, I’ll offer criteria we can use to determine if somebody is in the state of “intending”. I probably won’t be able to offer any reductionistic criteria, because the inability to do so is precisely the point of the argument, but I think there are plenty of reasonable ways to observe somebody being in the state of “intention”.

We are about to have our first disagreement ( :cry: . . . just kidding, I’m sure you will welcome an opportunity to defend your ideas, or to change your mind if you agree with my view).

To me, intentionality seems perfectly explainable by brain physiology. Remember the movie “The Terminator”? He was rather intent, wasn’t he? And he was that way from being programmed to be so. In fact, it seems to me intent is one of the easiest traits of consciousness to account for with a computer model of consciousness, because that’s pretty much what programming is (i.e., giving intent to an otherwise intentless piece of equipment). If by “intent” you mean free will, that’s a better candidate, but will can still be explained as only appearing “free” because so many programming choices are available.

Subjectivity is a great counterargument to physicalism, and the zombie analogy has proven effective. The weak spot in that approach, in my opinion, is that it doesn’t offer a model of its own. The physicalists have science on their side, and through that they are tendering a lot of facts about the brain. At least they have a model that non-physicalists can take potshots at, and in a way that makes their case more substantial than simply making a strong argument against functionalism.

That’s why I like to refute the physicalist model with an alternative model, one which explains the presence of subjectivity in consciousness. I believe the concept of retention is a consciousness trait with the potential to explain subjectivity, and thereby give us a model which better fits how consciousness works. I have argued this model quite a bit both at the old Physics Forums and here, and I realize there is a problem with it. The problem with my model is that to understand it, a person needs to contemplate his own consciousness, and not many people seem to have done that.

Just think about how your own consciousness works for a second. Doesn’t it seem like your body is surrounded by a field of sensitivity, sensitivity to light, sound, pain, smells, tastes, heat, cold, etc. Now, all that offers the potential to perceive tons of information that is in your environment every instant, and also to remember it. Do we remember it all? No, we only “retain” certain information. Why? Try out this little contemplation of retention I recently posted in another thread.

Say someone takes a walk in the woods to think about something important. The majority of sense data which peripherally floods his perception – the environmental sights, sounds, smells, etc. of where he is walking – is usually only retained briefly; although his subjective aspect of consciousness is present, he is not paying attention to all that info. But if he concentrates on something like a beautiful tree showing off its autumn colors, then he will usually retain that perception more strongly. If we do something that requires a variety of elements to do well, say ride a bike, and we do it often, that may be retained in a way I’ve described as “integration.” In other words, the more what we sense/feel is concentrated upon and/or repeated, the more it “integrates.” I believe that as experience integrates, it establishes a non-intellectual certainty about past events we call knowing.

Now, you have to stop here to reflect carefully on the integrated aspect of consciousness. We all rely on it incessantly, but few people I’ve talked to actually have looked squarely at it in themselves to see how it functions. Consequently, when I talk about it mostly I get a sort of “huh?” response.

If you are reading this now and comprehending it, that is because of integration. You don’t have to think about how to read before you perceive each word, and if you are familiar with my ideas, you don’t have to think about them again for comprehension to happen. You hear your cat at the door, get up and let her in, collect the mail, grab a banana to eat, and return to reading without having to think about how to walk, or use your hands, or why the cat wants in, or how to eat, etc. A HUGE amount of ability and understanding is present in your consciousness right now, and much of it is merged into a “singular” part of consciousness that is interacting with the world.

I am suggesting that the “self” has come about in consciousness exactly through that route. When information integrates, in a very important way it becomes distinct from the multifaceted aspects of consciousness. It is unified, it is “one,” while all the rest are “parts” that feed it new information it can integrate (I also believe the integrated aspect is centered within the multipart aspects that occur on the periphery of consciousness). The integrative function is absolutely the most crucial factor of consciousness because it creates the singular aspect which comes to control, oversee, know . . . and one of the things it “knows” is that it exists! That is what self/subjectivity is: self knowing. That is why the oneness aspect of consciousness cannot be reproduced by a physical thing made of zillions of atoms or 1s and 0s.

In case you found any of this interesting, I am posting a drawing representing the retention-integration model of subjectivity.

See Diagram 1

The picture represents a disembodied consciousness. The idea is that the most outer aspect of consciousness is outward-oriented sensitivity; it detects by being impressed with information. It is counterbalanced by a more inward concentrative aspect which when initiated causes the impressions sensed to be drawn deeper into consciousness where they will be embedded (memory); how deeply embedded depends on the strength of concentration, repetition, etc. With more experience information may integrate into the singular aspect. Since existence was our first, is our longest-running, and is a non-stop experience, in this model that knowledge is what has integrated at the very center of consciousness to become the "self."
 

Attachments

  • Retention1.jpg
  • #150
Les Sleeth said:
We are about to have our first disagreement ( :cry: . . . just kidding, I’m sure you will welcome an opportunity to defend your ideas, or to change your mind if you agree with my view).

To me, intentionality seems perfectly explainable by brain physiology. Remember the movie “The Terminator”? He was rather intent wasn’t he? And he was from being programmed to be so. In fact, it seems to me intent is one of the easiest traits of consciousness to account for with a computer model of consciousness because that’s pretty much what programming is (i.e., giving intent to an otherwise intentless piece of equipment). If by “intent” you mean free will, that’s better but will can still be explained as only appearing “free” because so many programming choices are available.

Hehe, no problem. I've always been a strong believer that it's through disagreement that you learn the most. In fact, you've convinced me that “intention” might not be a good example. I perceive the state of intention as a modal attribute, not merely as the overall [desired ?] goal, which would seem to me to be the case with the Terminator. But I’m afraid trying to show this aspect of “intention” and then reducing it would shift the conversation in a different direction. Some time, though, I’d like to get back to it, as I suspect modality brings a lot of trouble to materialism as well.

Les Sleeth said:
Subjectivity is a great counterargument to physicalism, and the zombie analogy has proven effective. The weak spot in that approach, in my opinion, is that it doesn’t offer a model of it’s own. The physicalists have science on their side, and through that they are tendering a lot of facts about the brain. At least they have a model that non-physicalists can take potshots at, and in a way that makes their case more substantial than simply making a strong argument against functionalism.

I like this point; I hadn't thought about it. Although, when a physicalist can't explain a phenomenon by resorting to science, a non-physicalist can offer an explanation that is just as rational. If you can't provide a physical explanation for my experience of pain, introducing a metaphysical component might not be the best move, but it is just as rational as attributing the pain to something physical under the presumption that it's physical. Unfortunately or not, such a component is ruled out a priori by a physicalist, with the only answer being "we'll give a physical explanation in the future". Isn't that a metaphysical statement as well?

Anyhow, I want to take some time and think about what you said with regard to "integration". It sounds interesting, but I need to chew on it for a bit and see how it settles with me. Thank you for your time explaining it!



Pavel.
 