Is Consciousness Just the Result of Electrical Activity in Our Brains?

In summary, consciousness is the awareness of space and time, or the existence of space and time relative to oneself. It is associated with electrical activity in the brain, but this does not fully explain its complexity. Some believe that consciousness is simply a chemical reaction, while others argue that it is influenced by both chemical and electrical impulses. There is still much we do not understand about consciousness, including the concept of a "soul" and the possibility of multiple existences or memories carrying over. However, it is clear that our brains play a crucial role in creating our conscious experiences.
  • #106
Rothiemurchus said:
It has to be causal, because consciousness is associated with the passage of time, and time passes when physical entities change from one spatial configuration to another. Energy of some kind causes the configurations to change.

It may be causal, but the point was that no argument or set of facts can show it is caused. So far, the subjective element of consciousness is unexplainable by any known principles associated with biology or physics.
 
  • #107
Les Sleeth:
no argument or set of facts can show it is caused

Rothie M:
I disagree.
If, one day, particles are detected at a region in space where, for example, an area of colour exists, and these particles are detected for the same length of time as a conscious observer says he can see the area of colour, that would be convincing evidence.
 
  • #108
I'll keep this short. Imagine a computer so powerful, it could simulate the physical human brain in every aspect. Every neuron would be modeled to incredible precision. All the sources of input would have to be supplied to it (e.g., the data from a video camera could be translated into the appropriate data an eye would send it). The output would be translated into some form we could understand. For example, the data it sends to the virtual "vocal cords" could be translated into text. Is this possible?

If so, this computer would be capable of having ideas. There would be no way to "see" these ideas, they wouldn't take up space, but they would be inherent in the pattern of 1s and 0s in the computer memory. As I have already emphasized, it would tell us it was conscious. Just like a virtual pendulum "swings" back and forth, a virtual brain acts the same as a real brain, and a real brain "tells" the world it is conscious.

If not, why not? What is so special about the particular arrangement of matter in our brain that prohibits simulation? We could simulate a pendulum, a solar system, gas in a container, but not this? Why? Canute, you continue to use common sense as an argument. If you disagree, use logical reasoning to explain why, or illustrate an example that shows why my arguments are absurd without assuming your preconceived notions are true. And Fliption, to me, the zombies exist in an alternate universe where the rules are different. They just illustrate logical possibilities.
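To make the shape of what I'm imagining a little more concrete, here is a minimal sketch of one simulation tick. To be clear, the neuron model, the I/O translation, and every number in it are my own simplifying placeholders, not a claim about how a real brain or a real simulator would work:

```python
import random

class SimNeuron:
    """Toy stand-in for one simulated neuron (a real model would be vastly richer)."""
    def __init__(self, threshold=1.0):
        self.threshold = threshold
        self.potential = 0.0

    def step(self, input_current):
        # Accumulate input; "fire" (return 1) when the threshold is crossed, then reset.
        self.potential += input_current
        if self.potential >= self.threshold:
            self.potential = 0.0
            return 1
        return 0

def camera_to_retina(frame):
    """Hypothetical translation of camera pixels into eye-like input signals."""
    return [pixel / 255.0 for pixel in frame]

def vocal_signals_to_text(spikes):
    """Hypothetical translation of vocal-cord output signals into text."""
    return "yes" if sum(spikes) > 5 else "no"

# One tick: feed sensory input through the virtual neurons and translate the output for us.
neurons = [SimNeuron() for _ in range(100)]
frame = [random.randint(0, 255) for _ in range(100)]
outputs = [n.step(i) for n, i in zip(neurons, camera_to_retina(frame))]
print(vocal_signals_to_text(outputs))
```

The point is only that every piece of the thought experiment (model the units, translate the input, translate the output) is an ordinary, well-defined computation; the open question is whether scaling it up captures everything a brain does.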
 
  • #109
A great post.

The only reply the others could make is pretty weak:

We can't build that computer right now.
We might never be able to build it.
Until you can show us the computer being conscious (and we will be the judges of whether it is conscious or not), we can continue to believe in our fairyland.
 
  • #110
selfAdjoint said:
A great post.

The only reply the others could make is pretty weak:

We can't build that computer right now.
We might never be able to build it.
Until you can show us the computer being conscious (and we will be the judges of whether it is conscious or not), we can continue to believe in our fairyland.

I'd respectfully submit that those aren't the only replies others can make, and neither are all of those replies "weak."

I think it is ironic you seem to downplay an argument you yourself are likely to make. You say, "Until you can show us the computer being conscious (and we will be the judges of whether it is conscious or not), we can continue to believe in our fairyland." Well, do you not use that exact same argument against God? You say, "Show me that God! Until you do (and we will be the judges of whether it is really God or not) we will continue to believe in our physicalist fairyland."

It isn't the God-believers who are bound by the standard of producing evidence for proof; the standard for God-believers is faith. Empiricists are the ones who insist a hypothesis is to be confirmed by experience, which means by your own rules you are held to different standards. Isn't it reasonable to expect empirical physicalists to produce that computer consciousness they say is possible? And until you do, isn't your theory just another unsubstantiated physicalist "fairyland"?
 
  • #111
StatusX said:
I'll keep this short. Imagine a computer so powerful, it could simulate the physical human brain in every aspect. Every neuron would be modeled to incredible precision. All the sources of input would have to be supplied to it (e.g., the data from a video camera could be translated into the appropriate data an eye would send it). The output would be translated into some form we could understand. For example, the data it sends to the virtual "vocal cords" could be translated into text. Is this possible?

Maybe. Since it hasn't been done, no one knows. You might say it is possible; I will disagree. The only possible solution is for those who assert it is possible to actually do it.


StatusX said:
If so, this computer would be capable of having ideas. There would be no way to "see" these ideas, they wouldn't take up space, but they would be inherent in the pattern of 1s and 0s in the computer memory. As I have already emphasized, it would tell us it was conscious. Just like a virtual pendulum "swings" back and forth, a virtual brain acts the same as a real brain, and a real brain "tells" the world it is conscious.

You are modelling a zombie, which everyone agrees is possible.


StatusX said:
[If so, this computer would be capable of having ideas] If not, why not? What is so special about the particular arrangement of matter in our brain that prohibits simulation? We could simulate a pendulum, a solar system, gas in a container, but not this? Why? Canute, you continue to use common sense as an argument. If you disagree, use logical reasoning to explain why, or illustrate an example that shows why my arguments are absurd without assuming your preconceived notions are true. And Fliption, to me, the zombies exist in an alternate universe where the rules are different. They just illustrate logical possibilities.

Where in your explanation is the "you" that is making decisions, changing your mind, willing your body to go here or there? You can easily account for all computing functions of the brain, and all behavior, but you cannot account for self awareness, subjectivity, qualia experience, or whatever you want to call it. Your focus seems to be the quantum steps, the discrete, the parts . . . what you don't see is the continuous, the undivided, the whole . . .

If someone sees either ONLY the discrete, or ONLY the continuous, then in this reality where we exist at least they are going to miss something.
 
  • #112
StatusX said:
If not, why not? What is so special about the particular arrangement of matter in our brain that prohibits simulation? We could simulate a pendulum, a solar system, gas in a container, but not this? Why? Canute, you continue to use common sense as an argument. If you disagree, use logical reasoning to explain why, or illustrate an example that shows why my arguments are absurd without assuming your preconceived notions are true. And Fliption, to me, the zombies exist in an alternate universe where the rules are different. They just illustrate logical possibilities.

Well, the problem here is that you are reasoning based on "what we know not". Every so often someone comes along with a 'scientific theory' that they say predicts all the equations of physics, and therefore is the 'right' theory (and they are usually not the humble types in their proclamation). Of course, ask them to produce equations that are not yet known and that we can test by experiment, and they are usually mum. What they have done is predict the past successes of science, and even though that is an admirable task if done correctly, such 'theories' do not tell us that a revolutionary theory has been discovered. Rather, all we can do is look at them and say "does it make butter too?".

Well, I think this is a very similar situation to your thought experiment. What are we supposed to do with such computational results other than scratch our heads and pick up our discussion right where we left off before we were interrupted? The fact of the matter is, a theory might be right, but if it does not show us how it is right, or whether it is right, in experiments that we can perform, such a theory is generally not useful to science.

In the case of a super computer having all these abilities, all we can ask at the end of the day is whether it is simply under the spell of Searle's Chinese room thought experiment. You might recall in that thought experiment that a person who does not know a lick of Chinese is put inside a room (those outside the room don't know that he doesn't know Chinese). While he is in the room, someone comes along and slips through the door a question written in Chinese characters. A few minutes later out comes an answer, also written in Chinese characters. Now, most of us would assume that the person in the room is fluent in Chinese. But we would be wrong. If we could look inside we would see that the person has a pretty substantial filing system in which he can match the Chinese characters, stroke by stroke, until he finds a file containing a suitable reply, which he copies out character by character. The 'translator' has no understanding of Chinese, but everyone on the outside is confident that the guy is fluent in Chinese.

What this thought experiment shows is not that AI is impossible; rather, it shows that to know that AI is possible we must have a much better philosophical understanding of language, theory of learning, theory of meaning, and a theory of truth (among a few others). We need to demonstrate how a proposition can be encoded into symbols and then decoded such that no information is lost (or very little information). We can translate the contents of a sentence into 1s and 0s, but we cannot translate the meaning. Without demonstrating how it is possible, we might just as well be talking Chinese to the guy in the Chinese room.
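Just to make the point vivid with a toy example of my own (nothing to do with Searle's actual wording, and the table entries are made up): a program that only matches incoming symbols against a stored table can look fluent from the outside while containing nothing that models meaning.

```python
# Toy "Chinese room": a canned question-to-answer lookup. No understanding is involved;
# the entries are made-up placeholders standing in for the room's filing system.
RULE_BOOK = {
    "你好吗": "我很好，谢谢。",
    "现在几点": "现在是下午三点。",
}

def room_reply(question):
    # The "person in the room" only matches symbols against the files;
    # nothing here represents what the question means.
    return RULE_BOOK.get(question, "对不起，我不明白。")

print(room_reply("你好吗"))  # Looks fluent from outside; there is no comprehension inside.
```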
 
  • #113
It isn't the God-believers who are bound by the standard of producing evidence for proof; the standard for God-believers is faith.
Faith does not constitute a proof and is not related to evidence. Those relying on it may share a commonality, but by definition faith is not related to possession of evidence for some fact.

Empiricists are the ones who insist a hypothesis is to be confirmed by experience, which means by your own rules you are held to different standards. Isn't it reasonable to expect empirical physicalists to produce that computer consciousness they say is possible?
Not really, so long as it is being held as a possibility.

And until you do, isn't your theory just another unsubstantiated physicalist "fairyland"?
If someone is holding it on faith to be a proof for something not yet known to be factual then, yes.
 
  • #114
BoulderHead said:
Faith does not constitute a proof and is not related to evidence. Those relying on it may share a commonality, but by definition faith is not related to possession of evidence for some fact.

Right, that's exactly what I was trying to say, and did say when I said, "It isn't the God-believers who are bound by the standard of producing evidence for proof; the standard for God-believers is faith." Faith, at least as described by Paul and applied to God, is something one has without proof. It is an inner feeling, not an external process as proof is. So I still think that, in terms of credibility, the empirical physicalist has an entirely different standard to meet than people of faith.


BoulderHead said:
Not really, so long as it is being held as a possibility.

True. However, the physicalist who believes in and/or entertains speculative ideas like computer consciousness, life self-organizing from chemistry, time travel, universes bubbling up from nothingness, and so on seems to be harboring a double standard when he recommends consigning to Fairyland those of us who suspect some sort of consciousness might have been involved in creation. Personally, I think certain aspects of creation can be explained more easily, and make more sense, if consciousness was involved in their creation.

Must it be that anyone who claims to "feel" something more than the universe's mechanics is deluded? Maybe it's the mechanics who are suffering from a deadened feeling nature, and who are then projecting that problem onto everybody who can still feel.
 
  • #115
Les Sleeth said:
True. However, the attitude of the physicalist who believes in and/or entertains speculative ideas like computer consciousness, life self-organizing from chemistry, time travel, universes bubbling up from nothingness, and so on

To equate computer consciousness and the chemical origin of life, for which there is weak but valid evidence, with time travel which has no evidence, is a misconstruction which prevents collegial discussion. I might as well characterize your thought as coming from the Land of Oz (and I don't mean Australia!).
 
  • #116
selfAdjoint said:
To equate computer consciousness and the chemical origin of life, for which there is weak but valid evidence, with time travel which has no evidence, is a misconstruction which prevents collegial discussion. I might as well characterize your thought as coming from the Land of Oz (and I don't mean Australia!).

Fair enough. I withdraw time travel from the list. I might add that the Wizard of Oz was my favorite childhood book, so watch it there. :smile:
 
  • #117
I don't think the burden of proof is on me. I am saying that any finite physical system can, in theory, be simulated. There is no evidence to doubt this, and none of you seem to disagree with it for most systems. So the burden is on you to explain what is different about the particular arrangement of atoms in our brain that makes simulation impossible, even in theory. No one has addressed this.

It would be like me claiming that every physical object has a mass. I can't prove this, but you would be the one who would have to make a compelling argument if you thought it wasn't true.
 
  • #118
(from Fliption) You agree with the illustration when it claims that consciousness cannot be shown to be causal but then disagree with the illustration when you make the claim that therefore consciousness is the byproduct of physical processes. This is exactly what the illustration is telling you is NOT the case. You cannot make a statement about causality one way or the other because you cannot make a connection using a materialist paradigm. How can you agree that there is no causal connection and that no explanation can be had under materialism and then claim that it is simply a byproduct of physical processes? This seems inconsistent to me.
That seemed worth reposting. It's easy to make that mistake whichever side one is on.

StatusX said:
I'll keep this short. Imagine a computer so powerful, it could simulate the physical human brain in every aspect. Every neuron would be modeled to incredible precision.
But I can't imagine it, so the rest of your thought experiment means nothing to me. Roger Penrose would almost certainly be in the same position. Your computer would have to model the brain all the way down to the quantum level, where, quite possibly, as far as we know, consciousness and brain are related via quantum coherence in microtubules, a process that begins at the level of the absolutely fundamental substrate of matter, in micro-units of mass and energy. If the relationship between brain and mind is rooted at such a fundamental level then how can it be modeled by a computer? It seems an unscientific idea.

All the sources of input would have to be supplied to it (e.g., the data from a video camera could be translated into the appropriate data an eye would send it). The output would be translated into some form we could understand.
Pardon me? Who is this 'we' that you mention here? I thought your computer was supposed to understand its own data.

For example, the data it sends to the virtual "vocal cords" could be translated into text. Is this possible?
It seems quite possible. After we have taken the output from a video camera, translated it into the sort of data a human eye (which is part of the brain, by the way) would send to the brain, and then translated it back into something we could understand, like the output of a video camera, it shouldn't be too hard to translate the data we've encoded to send to its vocal cords back into text that we can understand.

If so, this computer would be capable of having ideas.
Perhaps you need to think about this some more. If it were this easy to solve the 'problem of consciousness' then the early Greeks would have done it. You can't say 'heap together some bunch of components that may or may not be equivalent to a human brain, assume that it exists, and this shows that physicalism is true'. It just isn't that easy. If it were that easy then every sane person would be a physicalist.

There would be no way to "see" these ideas, they wouldn't take up space, but they would be inherent in the pattern of 1s and 0s in the computer memory.
There is no evidence that ideas can exist in a pattern of 1s and 0s. Until you can show that they can this is science fiction.

As I have already emphasized, it would tell us it was conscious. Just like a virtual pendulum "swings" back and forth, a virtual brain acts the same as a real brain, and a real brain "tells" the world it is conscious.
It is true that as conscious beings we tell each other that we are conscious. It's also true that if a hypothetical virtual brain is defined as behaving precisely like a real one then it must, just like a real one, report that it is conscious when it is. Nothing follows from this. It's an ontological argument for the existence of the hypothetical.

If not, why not? What is so special about the particular arrangement of matter in our brain that prohibits simulation? We could simulate a pendulum, a solar system, gas in a container, but not this? Why?
That's what I'd much rather discuss, rather than arguing with you about zombies and the like. It's a question that cannot be answered using our usual methods of reasoning. If you look at it closely it's a metaphysical question. As such it must be distinguished from scientific questions and thought about in a different way.

It is impossible to show that consciousness is epiphenomenal on the physical, and this means that it might not be. It does not mean that it is not, but equally it does not mean that it is. This is Fliption's point. For this reason I do not argue that I can show you are wrong; I argue that you can't show that you are right. But I can't show that I'm right either.

To me the real question to ask is this: why is it that neither of us (nor anybody else) can prove our case about the relationship between consciousness and brain? And also, perhaps, as many philosophers have suggested is the case, does our inability to do this have something to do with the particular way we reason?

Canute, you continue to use common sense as an argument.
Sorry about that. :smile:

If you disagree, use logical reasoning to explain why, or illustrate an example that shows why my arguments are absurd without assuming your preconceived notions are true.
That's not quite a fair challenge. How can I reason logically if I'm not allowed to use my common sense?

I think we should stop arguing and simply accept the obvious, that the truth about consciousness cannot be known by reason alone, as so many people have asserted over the millennia, and accept that it cannot even be shown to exist by formally logical means, let alone to be this or that.
 
  • #119
StatusX said:
I don't think the burden of proof is on me. I am saying that any finite physical system can, in theory, be simulated. There is no evidence to doubt this, and none of you seem to disagree with it for most systems. So the burden is on you to explain what is different about the particular arrangement of atoms in our brain that makes simulation impossible, even in theory. No one has addressed this.

It would be like me claiming that every physical object has a mass. I can't prove this, but you would be the one who would have to make a compelling argument if you thought it wasn't true.

I do not think anyone is saying the "arrangement of atoms in our brain that makes simulation . . . " can't be achieved. The bigger issue is whether that arrangement is responsible for self-awareness. The idea is that one can account for all human behaviors and brain functions with brain physiology, but the brain physiology we know would only produce a zombie (something that can mimic all human behaviors, but doesn't have a personal experience of what it's doing).

So at least the self-aware part of consciousness might be the result of something other than physiology. For example, possibly there is a general pool of consciousness that has evolved with the universe and is manifested in the CNS. Such a panpsychic theory suggests the brain shapes, organizes and individuates a "point" of that general consciousness, and the self-aware part is an essential part of the panpsychic realm.
 
  • #120
The only compelling argument that such a simulation would be impossible that I've seen so far was Canute's claim that the operation of the brain is significantly affected by quantum processes that can't be simulated, and I'll try to address it here.

No one is saying QM can't be modeled, and in fact many simple QM systems have been computer simulated. The confusion arises because people hear that there is "uncertainty" in QM, and assume this means there are no rules. There are strict rules. The problem is that when a wave function collapses, it does so randomly, and we only know the probability it will collapse into certain states. We can't create a virtual system that would act exactly the same as a given real one for the same reason that two real, identical QM systems will not act exactly the same: there is inherent randomness. But we can create a virtual system that would behave in a way that an identical real one could. We would just have to have some kind of random number generator for the wave function collapse.
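To be concrete about what I mean by a random number generator for the collapse, here is a minimal sketch; the states and probabilities are invented for illustration, not taken from any real system:

```python
import random

def collapse(states, probabilities):
    """Sample one measurement outcome from the given collapse probabilities."""
    return random.choices(states, weights=probabilities, k=1)[0]

# e.g. a made-up two-outcome measurement with a 70/30 split
print(collapse(["up", "down"], [0.7, 0.3]))
```

The simulated system won't reproduce a particular real system's history, but its statistics can match, which is all QM itself predicts.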

One way around this is to say that these random collapses aren't truly random, but are affected by our consciousness. This is a very interesting idea, and I definitely accept it as a possibility. Another is to say the variables in question (e.g., the position, velocity, mass, etc. of every particle being modeled) are continuous, and therefore any rounding we would do so that a computer could work with the numbers would be a source of error. I'm not sure about this, but I remember reading that a given volume of space contains a finite amount of information, something like one bit per Planck area of its bounding surface, and this would refute such a claim. For now, I'm just going to have to claim that we can get so close to the real values that any deviation from reality would not cause a significant difference in observed behavior. But I can't prove this.

However, this whole idea is in opposition to the view that neurons are the basic components of the brain. A crude explanation of this model is that a neuron acts by outputting a signal if its inputs are above a certain threshold, and the interaction of many, many such neurons can give rise to complicated behavior. In computer science, neural nets attempt to replicate this function, and have achieved such impressive results as handwriting and facial recognition. I can't claim that the human brain could be modeled exactly by an extremely sophisticated neural net, but many researchers believe it could be. We'll just have to wait and see.
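For anyone who hasn't seen it written down, the crude neuron model I mean is just a weighted sum and a threshold; the weights and threshold below are arbitrary numbers I picked, where a real neural net would learn them:

```python
def threshold_neuron(inputs, weights, threshold):
    """Crude model: output 1 (fire) if the weighted input sum reaches the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With suitable weights a single unit already computes simple functions, e.g. AND:
print(threshold_neuron([1, 1], [0.6, 0.6], 1.0))  # -> 1
print(threshold_neuron([1, 0], [0.6, 0.6], 1.0))  # -> 0
```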

And by the way, I am not claiming this simulation would be conscious. It might be, and it might not be. But whichever it is, how could it be different from whichever we are? Getting back to the zombie argument, a society of these simulations would attempt to explain consciousness. They wouldn't know they were just simulations, and we don't know that we aren't.
 
  • #121
Sorry for posting so much, but I just thought there were some things in here I really should address.

Canute said:
Fliption said:
You agree with the illustration when it claims that consciousness cannot be shown to be causal but then disagree with the illustration when you make the claim that therefore consciousness is the byproduct of physical processes. This is exactly what the illustration is telling you is NOT the case. You cannot make a statement about causality one way or the other because you cannot make a connection using a materialist paradigm. How can you agree that there is no causal connection and that no explanation can be had under materialism and then claim that it is simply a byproduct of physical processes? This seems inconsistent to me.
That seemed worth reposting. It's easy to make that mistake whichever side one is on.

It is logically possible that a being could exist with the same physical brain structure as us and not be conscious. But my argument, and yes it is a materialist one, is that in this universe, any beings with the same physical brain structure will have the same conscious state. That is, they're either both conscious or both unconscious. This is not inconsistent.

But I can't imagine it, so the rest of your thought experiment means nothing to me. Roger Penrose would almost certainly be in the same position. Your computer would have to model the brain all the way down to the quantum level, where, quite possibly, as far as we know, consciousness and brain are related via quantum coherence in microtubules, a process that begins at the level of the absolutely fundamental substrate of matter, in micro-units of mass and energy. If the relationship between brain and mind is rooted at such a fundamental level then how can it be modeled by a computer? It seems an unscientific idea.

I address this in my last post.

Pardon me? Who is this 'we' that you mention here? I thought your computer was supposed to understand its own data.

What the computer subjectively understands is irrelevant. I'm not making a claim one way or the other about whether it is conscious. I'm saying it will behave the same as us, and to see how it behaves, we must have a way of transmitting its signals into physical actions the way our muscles do it for us. Since it doesn't have muscles to move its vocal cords, a subprogram must translate these signals into text for it.

It seems quite possible. After we have taken the output from a video camera, translated it into the sort of data a human eye (which is part of the brain, by the way) would send to the brain, and then translated it back into something we could understand, like the output of a video camera, it shouldn't be too hard to translate the data we've encoded to send to its vocal cords back into text that we can understand.

You are missing the point entirely. Of course it would be hard. It would be harder than anything we've done up to this point. I'm not even sure it would be practically possible at any point in the future. All I'm saying is that it is theoretically possible, and how complicated it is does not matter for this.

Perhaps you need to think about this some more. If it were this easy to solve the 'problem of consciousness' then the early Greeks would have done it. You can't say 'heap together some bunch of components that may or may not be equivalent to a human brain, assume that it exists, and this shows that physicalism is true'. It just isn't that easy. If it were that easy then every sane person would be a physicalist.

Ok? So you disagree? I don't see an argument here.

There is no evidence that ideas can exist in a pattern of 1s and 0s. Until you can show that they can this is science fiction.

What about the ones and zeroes representing that sentence? That sentence represents an idea. So what if this computer doesn't "understand" the sentence? I say a smarter one could.

It is true that as conscious beings we tell each other that we are conscious. It's also true that if a hypothetical virtual brain is defined as behaving precisely like a real one then it must, just like a real one, report that it is conscious when it is. Nothing follows from this. It's an ontological argument for the existence of the hypothetical.

We must have different definitions of consciousness. What I call consciousness is experience. It is difficult to explain exactly, but it's basically what it's like to do things. To see red, have an idea, feel an emotion. It is difficult to imagine the functions associated with these experiences without consciousness, but it is not logically impossible. There is no logical reason a non-conscious entity couldn't talk to us about its ideas. None whatsoever. There would just be no first person experience of the ideas.

That's what I'd much rather discuss, rather than arguing with you about zombies and the like. It's a question that cannot be answered using our usual methods of reasoning. If you look at it closely it's a metaphysical question. As such it must be distinguished from scientific questions and thought about in a different way.

It is impossible to show that consciousness is epiphenomenal on the physical, and this means that it might not be. It does not mean that it is not, but equally it does not mean that it is. This is Fliption's point. For this reason I do not argue that I can show you are wrong; I argue that you can't show that you are right. But I can't show that I'm right either.

To me the real question to ask is this: why is it that neither of us (nor anybody else) can prove our case about the relationship between consciousness and brain? And also, perhaps, as many philosophers have suggested is the case, does our inability to do this have something to do with the particular way we reason?

Because science hasn't gotten there yet. Just like no one could understand magnetism or the sun going across the sky hundreds of years ago. I know this argument has probably been beaten into the ground, but you have to put yourself in those ancient people's shoes. They were sure there was no scientific explanation for these phenomena, just like many today are sure there is none for consciousness.

Sorry about that. :smile:
...
That's not quite a fair challenge. How can I reason logically if I'm not allowed to use my common sense?

When I say you shouldn't use common sense as an argument, I mean you can't use it as your only argument. You need logic to back it up. The sun looks like it's going around us, but it isn't.

I think we should stop arguing and simply accept the obvious, that the truth about consciousness cannot be known by reason alone, as so many people have asserted over the millennia, and accept that it cannot even be shown to exist by formally logical means, let alone to be this or that.

Um... I say it can be. That's where we disagree. You want me to just stop arguing and accept the obvious that you're right? That's a compelling argument, but no.
 
  • #122
Status, I want to be more specific about what Les said about "human activities", as I'm also struggling to understand how you would simulate them, at least conceptually.

Very simple: let's say I'm holding a fork and all of a sudden I make a decision to drop it. Let's examine this decision making process. My brain must be in a certain state before the drop, say state A. You can break down this state to a quantum level, to anything you want. The bottom line is there is a physical state that can be expressed in a matrix of certain values for each neuron, synapse, electron, photon, etc. Now comes the point in time when the decision making neuron must fire to cause the "drop the fork" reaction chain. My question is what specifically causes that neuron to fire? Yes, you can reduce that cause to a quark spin or a wave function, if you will, but that's just begging the question. The ultimate question is what causes the system to be transformed from state A to state B (firing of the fork dropping neuron). I can think of only two causes. First, randomness / spontaneity. Whether it's the electron's undetermined position in the carbon atom, nuclear decay, a gust of wind in your face, or other natural random phenomena, whatever it is, the prime mover is random. (That's assuming spontaneity exists, of course, which is a subject for another thread.) The other cause is determinism. The transformation from state A to state B is strictly determined by natural laws. Whether the neuron will fire or not completely depends on the current state, state A, all incoming input from other neurons, and the rules (brain fabric which determines thresholds etc.) which dictate what to do. Without going into metaphysics, is there anything else?

Whether you choose randomness or determinism, there's a problem. If the neuron firing is caused by a random act, all our decisions are nothing but a roll of the dice. I find that hard to swallow, since it would make this very idea an outcome of randomness in someone's mind… Determinism doesn't make things better. If my decision making is the outcome of strict deterministic rules, we're nothing but cogs in a huge machine following the rules; we don't really think or make decisions. I find that also hard to believe because, again, it would mean your very idea of determinism is not the outcome of your independent thinking; it's the outcome of some physical state and some rules, you couldn't "think" otherwise, you're programmed to say "we're determined". The third option is the combination of the two, of course, but again, the same criticism applies. So, how, conceptually, would you simulate the transformation from one state to the next?

Regards,

Pavel.
 
  • #123
Pavel said:
Status, I want to be more specific about what Les said about "human activities", as I'm also struggling to understand how you would simulate them, at least conceptually.

Very simple: let's say I'm holding a fork and all of a sudden I make a decision to drop it. Let's examine this decision making process. My brain must be in a certain state before the drop, say state A. You can break down this state to a quantum level, to anything you want. The bottom line is there is a physical state that can be expressed in a matrix of certain values for each neuron, synapse, electron, photon, etc. Now comes the point in time when the decision making neuron must fire to cause the "drop the fork" reaction chain. My question is what specifically causes that neuron to fire? Yes, you can reduce that cause to a quark spin or a wave function, if you will, but that's just begging the question. The ultimate question is what causes the system to be transformed from state A to state B (firing of the fork dropping neuron). I can think of only two causes. First, randomness / spontaneity. Whether it's the electron's undetermined position in the carbon atom, nuclear decay, a gust of wind in your face, or other natural random phenomena, whatever it is, the prime mover is random. (That's assuming spontaneity exists, of course, which is a subject for another thread.) The other cause is determinism. The transformation from state A to state B is strictly determined by natural laws. Whether the neuron will fire or not completely depends on the current state, state A, all incoming input from other neurons, and the rules (brain fabric which determines thresholds etc.) which dictate what to do. Without going into metaphysics, is there anything else?

Whether you choose randomness or determinism, there's a problem. If the neuron firing is caused by a random act, all our decisions are nothing but a roll of the dice. I find that hard to swallow, since it would make this very idea an outcome of randomness in someone's mind… Determinism doesn't make things better. If my decision making is the outcome of strict deterministic rules, we're nothing but cogs in a huge machine following the rules; we don't really think or make decisions. I find that also hard to believe because, again, it would mean your very idea of determinism is not the outcome of your independent thinking; it's the outcome of some physical state and some rules, you couldn't "think" otherwise, you're programmed to say "we're determined". The third option is the combination of the two, of course, but again, the same criticism applies. So, how, conceptually, would you simulate the transformation from one state to the next?

Regards,

Pavel.

The most concise way to phrase your argument is the following:

If you do something for a reason, it was determined, and there was no free will.
If you do something for no reason, it was random, and there was no free will.

I don't know where I heard it, but it's a great way of putting it. So are you saying there is no free will? If so, I don't see a problem in my simulation.

It is an oversimplification to say your brain popped from state A to state B. It is constantly evolving. Say you dropped the fork because you were thinking about free will, and this caused you to decide to drop a fork "randomly" to prove you had it. That's not really random at all, and since the simulation would "think" the same way (maybe it wouldn't experience its thought, but the thought would have consequences), it would drop the fork too. If it was for some "random" reason, and by that I mean it was caused by some process other than the normal deterministic evolution of brain states, like a nuclear decay, then any good simulation would have to include virtual nuclear decays to qualify as an accurate simulation.
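Conceptually, one update step of such a simulation could look like the sketch below. The "rules" function, the state, and the decay probability are stand-ins I made up to show that deterministic evolution plus occasional random events is a perfectly well-defined procedure, not a model of real neurons:

```python
import random

def next_state(state, rules, decay_prob=1e-6):
    """Apply the deterministic rules, then (rarely) inject a random decay-like event."""
    new_state = rules(state)                 # deterministic part
    if random.random() < decay_prob:         # rare random event, like a virtual nuclear decay
        new_state = new_state + ("decay_event",)
    return new_state

# Toy example: the "rule" just increments a counter carried in the state tuple.
state = (0,)
for _ in range(10):
    state = next_state(state, lambda s: (s[0] + 1,) + s[1:])
print(state)
```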

If you are saying there is free will, then I disagree. You say you have a problem with both randomness and determinism. So what do you suggest is going on?
 
  • #124
I suggest there's something else going on. Whichever it is, metaphysical, emergent, epiphenomenal, I don't know, but determinism and randomness just don't cut it on their own, the way I see it. There is a reason why I made the argument so long. When summarized in your manner, you treat the argument on a high level of abstraction and don't see the detail, where the real problem lies. When you program your computer, you don't just tell it "think" or "make the decision". You tell it specifically what you want it to do. And no, I was not asking for pseudo code either. It doesn't even have to be a computer. Whatever your simulator is, I wanted to see how, specifically, you were planning on transforming from one state to the next. You said "evolution", but that's exactly my point – it's too abstract; concretely, how do you evolve the simulator? Do you create deterministic rules with random variables? Do you really believe such evolution will produce a brain capable of novelty, so inherent to humankind?

Thanks,

Pavel
 
  • #125
Hi,

Consciousness is awareness, feeling and understanding, whatever its physical manifestation in this reality might be.

juju
 
  • #126
This isn't complicated. It is a digital model of the physical atoms in the brain. They are subject to the normal forces an atom feels. I don't explicitly program "thinking", it arises as a consequence of the structure and the rules relating its constituents, just like a real brain. If you don't agree this is possible, that this simulation would act just like a real brain, then tell me, what is the difference between a brain and a clock that allows you to simulate a clock but not a brain? They're both just matter. And your argument that determinism "just doesn't cut it" does not convince me. I don't like it either, but it's the conclusion I've arrived at logically.
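Written as a loop, just to show there is nothing hidden in the word "model" (the force law, masses, and time step below are placeholders, not a serious physical model of anything):

```python
def step(positions, velocities, masses, force_fn, dt=1e-15):
    """Advance every simulated atom one small time step under the forces acting on it."""
    forces = force_fn(positions)
    new_velocities = [v + f / m * dt for v, f, m in zip(velocities, forces, masses)]
    new_positions = [x + v * dt for x, v in zip(positions, new_velocities)]
    return new_positions, new_velocities

# Toy force: every "atom" pulled back toward the origin (a spring, not real chemistry).
spring = lambda xs: [-1e-3 * x for x in xs]
pos, vel = step([1.0e-10, -2.0e-10], [0.0, 0.0], [1.7e-27, 1.7e-27], spring)
print(pos, vel)
```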
 
  • #127
StatusX said:
It is a digital model of the physical atoms in the brain. They are subject to the normal forces an atom feels. I don't explicitly program "thinking", it arises as a consequence of the structure and the rules relating its constituents, just like a real brain.

Are you saying it's an emergent property? If so, how do you know that your simulation hit the target? After all, you're not creating an artificial brain, you're assigning an interpretation to a digital model. If the model comes up with a novelty, how do you know it's not a screw-up?


StatusX said:
If you don't agree this is possible, that this simulation would act just like a real brain, then tell me, what is the difference between a brain and a clock that allows you to simulate a clock but not a brain? They're both just matter. And your argument that determinism "just doesn't cut it" does not convince me. I don't like it either, but it's the conclusion I've arrived at logically.

That they're both matter is still an assumption. I believe there's a problem with asserting that the brain is purely matter, and that's been my argument. I'm not sure why it's difficult to see the problem with determinism. Let's say there is a guy standing by me with a gun who threatens to shoot me if I answer "YES" to anything. You, observing all of this, ask me "do you cheat on your wife?" If I say "NO", would you believe me? My analogy might be lame, but I hope you see the point. If you're programmed to believe we're determined, then your statement asserting it has little truth value, if any. You can't jump out of the system, say "oh, look, I'm determined", and then jump back in.


Thanks,

Pavel.
 
  • #128
StatusX said:
This isn't complicated. It is a digital model of the physical atoms in the brain. They are subject to the normal forces an atom feels. I don't explicitly program "thinking", it arises as a consequence of the structure and the rules relating its constituents, just like a real brain. If you don't agree this is possible, that this simulation would act just like a real brain, then tell me, what is the difference between a brain and a clock that allows you to simulate a clock but not a brain? They're both just matter. And your argument that determinism "just doesn't cut it" does not convince me. I don't like it either, but it's the conclusion I've arrived at logically.

I want to check to see if I understand you correctly, and if you understand the implications of what at least a couple of us are saying. When you say, "I don't explicitly program 'thinking,' it arises as a consequence of the structure and the rules relating its constituents, just like a real brain [my emphasis]," that sounds like you are trying to make points by assuming your side of what we are debating is true. I think some of us are at least open to the possibility that the brain is not "creating" consciousness, just the way a switch that turns on a light bulb does not "create" light.
 
  • #129
Pavel, if you assert the brain is more than just matter, you'll have to provide a reason. Which part of its formation allowed a non-physical substance to creep in? And I'm not sure I understand your analogy about determinism. Are you suggesting that if we have no free will, we are destined to have a certain opinion, right or wrong? If so, I have to agree with you, but this doesn't mean we are stuck with wrong opinions and will never know it. We are constantly changing, and logical reasoning can cause us to see our error. Determinism does not mean we don't have the power to change our minds; it means that these changes can be predicted by the laws of physics.

Les, as I've mentioned before, I don't claim this thing is conscious. All I claim is that it tells us it is conscious. It doesn't even know that it's just a simulation. And just to be clear, it is not a contradiction to say that it's not conscious but still knows things. If you think it is, you aren't fully realizing the difference between the experience of knowledge and the function of knowledge.
 
  • #130
Les Sleeth said:
… I think some of us are at least open to the possibility that the brain is not "creating" consciousness, just the way a switch that turns on a light bulb does not "create" light.

Les, you bring up an interesting point. Do you think that, going off the switch idea, consciousness is always present or can it be absent?
 
  • #131
StatusX said:
. . . if you assert the brain is more than just matter, you'll have to provide a reason.

The zombie argument is the reason that's been offered. You may not be giving enough consideration to it. You can explain every function of consciousness with brain physiology, but you cannot account for how the self-aware aspect of consciousness has come about with brain physiology. You right now are calculating, measuring, weighing, computing . . . just what the brain seems set up to do. But what is the physical basis of the "you" which is controlling, observing, and understanding that?


StatusX said:
Les, as I've mentioned before, I don't claim this thing is conscious. All I claim is that it tells us it is conscious. It doesn't even know that it's just a simulation.

If you don't claim it is conscious, then what is your point? I do not understand arguing only that zombie consciousness is possible, since we have computer programs right now that are nothing but zombies. Can a zombie program get so sophisticated it can fool genuinely conscious human beings into believing it is conscious? Possibly. But so what? Such a program still isn't self-aware awareness, and that is the only thing that's at issue when we are talking about whether physicalness alone can produce consciousness. At this time, all we can prove is that physicalness can produce zombie-awareness, which leaves the door open to the possibility that something in addition to the brain is involved in establishing consciousness.


StatusX said:
And just to be clear, it is not a contradiction to say that it's not conscious but still knows things. If you think it is, you aren't fully realizing the difference between the experience of knowledge and the function of knowledge.

I believe it is a contradiction. I say there is no possible way to know anything without being conscious. To have a hard drive full of information which is "functioning" to achieve things is not the same as conscious knowing; but of course, if one degrades the meaning of "knowing" then one might get away with saying it is.

The experience of knowledge is precisely what we are talking about. Not some dumb machine operating according to its programming. Besides, where do you think even the programming for the dumb machine comes from? It doesn't figure out anything by itself; conscious humans figured it out and programmed it into systems.


StatusX said:
Which part of its formation allowed a non-physical substance to creep in?

How do you know the development of the central nervous system was driven by only physical factors? There isn't a single, solitary example outside of living systems where chemistry and physics have organized themselves to the extent and quality that we find in living systems. Until someone demonstrates that physical principles alone can produce something like consciousness, the conservative and objective thinker has to wait for more information before assuming physicalness can do it.
 
  • #132
Jeebus said:
Les, you bring up an interesting point. Do you think that, going off the switch idea, consciousness is always present or can it be absent?

I am not sure what you mean by consciousness being always present. Do you mean in us, or in the universe, or in some other way? If you mean in us, then my view is that our conscious presence varies from fully present to present but unconscious (like when someone is in a coma). Sleep seems pretty unconscious, yet people are known to detect things going on around them while they sleep, so during sleep I think consciousness is present but not fully turned on. Some people believe they or others can leave the body (which would suggest one can be not-present), but I don't know enough from personal experience to have a strong opinion about that.
 
  • #133
Wow, two more pages since I last looked here. I've read all of the comments and I'm not sure I understand the point that StatusX is making. I think I understand it but then once I have it, I don't see what the big deal is.
Let me see if I can state it

Statement 1 - It is conceivable that a very powerful computer/machine could be built that simulates all the functions of the brain.

Ok, I got that one and I have no problem with it.

Statement 2 - Once this machine is built we cannot know whether it is truly conscious or not.

Absolutely I agree with this.

Is this it? Do I have it all? Because I have to ask, "What's the big deal?" Nothing here is new or astounding. Nothing can be concluded from any of these statements; certainly not materialism. So why is there so much debate?

Have I missed a point StatusX?

Now if you were claiming that this machine would indeed be conscious, then we would have a big problem, but that's not what you're claiming. Nothing here supports a physicalist view in any way, so it seems things have gotten unnecessarily stirred up here.
 
  • #134
Ok, I realize I've been unclear on my point. I had forgotten this myself, but the reason I brought up the whole supercomputer thing was that the argument had shifted to whether consciousness was causal or not. Now I'm glad you've gotten this far, as I've been having trouble convincing people such a computer could exist. (I'll still try to explain why to whoever doesn't agree.) Now assuming it could, what would that mean? Keep in mind, this computer behaves EXACTLY the same as us, even if we don't know what's going on "inside its head." There are two possible interpretations of such a machine:

1. It is not conscious, but still behaves exactly as we do. The conclusion? Consciousness is not causal. Easy, right?

2. It is conscious, and behaves exactly as we do. This says nothing about the causal nature of consciousness.


Now the following represents the two main possibilities I accept as a "physicalist." (I use that term a little loosely, since the second view would probably be considered closer to dualism, but I'll explain that):

1. Consciousness is an illusion. I don't particularly like this idea, but it is not as dismissible as it sounds. Because if the first case above is true, that computer would be trying to understand its own, non-existent consciousness. It would not only disagree with you if you claimed it was unconscious, or that consciousness was an illusion, but would also disagree if you told it it was just a simulation and not a real person. So how do we really know we aren't just simulations, or zombies, or some other non-conscious entities that believe we are conscious?

2. Consciousness is real, and it exists everywhere there is a complex system to sustain it. I obviously don't know what they are yet, but I assert that there are strict rules that relate some aspect of the configuration of matter to consciousness. Just like hooking up a battery to a circuit gives rise to current, hooking together the right components, whether they're neurons, computer chips, or whatever, gives rise to conscious experience. This is usually called dualism, but I've extended the terminology to call any theory of reality in which everything obeys derivable rules a physicalist theory.

Now is consciousness causal in this view? I don't know yet. There are two variations that result:

a) If it is causal, I say it is only at the quantum level. Maybe it causes random wavefunction collapse, maybe it influences the result of the collapses, or maybe it does something entirely different. I don't believe it is causal at an observable level because then there would be a way to observe its effect on the physical. This is not a rigorous argument, and I'll get back to it some other time, but it's late now.

b) If it isn't causal, then we're just sort of "along for the ride," with everything we do being controlled by our physical and deterministic brains, and all we do is experience it.


So my point is, if you believe the supercomputer argument, and you believe physicalism, in the broad sense I've defined, then you are pretty much limited to the three views above. Of course you can arbitrarily claim the rules are different for brains than for anything else, but I find such a claim inelegant and unsupportable. I'm not yet prepared to decide among these three.
 
  • #135
StatusX said:
Pavel, if you assert the brain is more than just matter, you'll have to provide a reason. Which part of its formation allowed a non-physical substance to creep in?

It is the same reason why the earlier atomic framework was replaced by Niels Bohr's model, and later by Schrödinger's concepts of how atoms really work: they failed to explain certain behavior and were therefore modified to make the conceptual framework more consistent. Similarly, the random / deterministic model of consciousness is also weak in its explanatory power. It produces, as Les has aptly put it, zombies, not conscious beings. That's the reason for something else to creep in.

In order to continue, I’d like to stick with the specific thread of thought in question, and not try to discover quantum mechanics, human art, and mysticism at the same time. Every time you deviate, you open up a Pandora’s box and we’ll never be able to come to any conclusion. Let’s stick with the question at hand, and if we reach an impasse, we’ll know exactly where we disagree. If there’s a way to resolve the disagreement through empirical observations or logical methods, good for us; if not, at least we’ll know what it all comes down to (I’m sure it’ll be something in the realm of metaphysics).

First, it seems like we really need to agree on what we mean by “conscious being”. I don’t like your behavioristic interpretation of it, as anything which behaves like human consciousness. If I didn’t know anything about electromagnetic waves and electronics, I’d definitely conclude that my radio is very conscious and intelligent (well, depending on the station I tune in). However, we both know that’s not the case. Besides, you’re comparing it to something we’re trying to define in the first place. I think the aspect of subjectivity is necessary in defining a “conscious being”. I’m not sure I like the self-awareness aspect, even though I agree with it, because it’s too vague and hard to define. What, specifically, is aware? So, to call something “conscious”, I think it needs to have subjective qualities. For example, intention. I intend to drop the fork. My radio cannot intend, even though it exhibits other conscious properties. “I think” would be another. When I think, I manipulate objects in my mind in such a manner that I assign qualitative, subjective attributes to them, and I operate on them without restrictions. Zombie, on the other hand, does not have this privilege, even though it might be self-aware, it is bound by restrictions in what attributes it can assign to objects. So, before moving on, do you agree with this notion of “conscious being”? As soon as we agree on the definition, we’ll see if we can build a device, which would be governed by deterministic rules, and yet satisfy our notion of “conscious being”.

The other thing we need to be clear on is the kind of simulator you’re building. Are you building a replica of the human brain that will exhibit similar physical properties on its quantum level, or are you simulating it through interpretation of 1’s and 0’s in a digital model? There’s a big difference between the two and we should be clear about it.

I'll comment on my determinism analogy as soon as we agree on the terminology. By the way, coming from a different country, the experience of two different cultures made me a strong believer in cultural relativism and social conditioning in general. You throw genetics and the whole of biology on top of it, and I'm convinced our consciousness is 99% determined. But there's that 1% that crept in and made things so mysterious, at least for me. I know you disagree with the existence of this 1% and I'd like to make that the focus of this discourse.

Thanks,

Pavel.
 
  • #136
Pavel said:
I think the aspect of subjectivity is necessary in defining a “conscious being”. I’m not sure I like the self-awareness aspect, even though I agree with it, because it’s too vague and hard to define. What, specifically, is aware? So, to call something “conscious”, I think it needs to have subjective qualities.

I like the way you make your points. I suppose you are referring to my use of the term "self-aware" since I am mainly the one who uses that around here. I'll explain why I tend to use that to characterize subjectivity, and why now thinking about it I can imagine a better term.

In past threads I've argued that if we examine consciousness, it seems to do several very basic things which are more fundamental than thinking. One is that consciousness is sensitive to stimulation, which is pretty obvious. A second basic trait I call retention: consciousness not only senses, it holds patterns of what has been sensed. These patterns range in permanence from simple impressions and memory to a third basic quality I've called integration. Integration is retention too, but it seems more permanent. An example would be understanding, where a collection of related events suddenly produces a singular sort of result in consciousness. Suddenly "getting" how to ride a bike is like that, but there is also intellectual understanding, of course.

It seems to me that this integrative quality of consciousness is what most establishes self, or subjectivity. (A computer can do all the rest, but not that.) Examining humans, it seems there is a very high realization of this integrative quality because we can function single-pointedly while doing complex tasks. It's as if everything that has been integrated into consciousness is right there guiding the focused human, even though he might not be thinking about everything that's contributed to his knowing pool.

I've looked at animal life from amoebas all the way up, and it seems to me that as the integrative aspect improves, so does the sense of self. So I started saying "self-aware" to describe that, because I believe it is the most defining thing there is about consciousness. You are right that it doesn't communicate other qualities that are present, such as volition, or, as some like to say, "what it's like" to sense something.

Possibly a better term for me to use would be self-forming or self-establishing or something like that.
 
  • #137
StatusX said:
It is logically possible that a being could exist with the same physical brain structure as us and not be conscious.
How do you know that?

But my argument, and yes it is a materialist one, is that in this universe, any beings with the same physical brain structure will have the same conscious state.
How do you know that?

What the computer subjectively understands is irrelevant. I'm not making a claim one way or the other about whether it is conscious. I'm saying it will behave the same as us,
So you say. I don't know why you believe it. There's certainly no evidence. Hell, we've been looking for evidence since the nineteenth century and there still isn't any.

and to see how it behaves, we must have a way of transmitting its signals into physical actions the way our muscles do it for us. Since it doesn't have muscles to move its vocal cords, a subprogram must translate these signals into text for it.
I thought this thing was supposed to behave like a human being.

You are missing the point entirely. Of course it would be hard. It would be harder than anything we've done up to this point. I'm not even sure it would be practically possible at any point in the future. All I'm saying is that it is theoretically possible...
How do you know that?

What about the ones and zeroes representing that sentence? That sentence represents an idea. So what if this computer doesn't "understand" the sentence? I say a smarter one could.
No, the sentence is a re-symbolisation of the ones and zeros. It contains no more and no less information than the string of ones and zeros. It is certainly not an idea. An idea is defined (in my dict.) as 'any content of the mind, esp. the conscious mind'. So a computer can have an idea only if it has a mind. So far you've failed to show that it can.

We must have different definitions of consciousness. What I call consciousness is experience. It is difficult to explain exactly, but its basically what its like to do things.
Yes, I'm ok with 'what it is like' as a working definition.

It is difficult to imagine the functions associated with these experiences without consciousness, but it is not logically impossible.
I'd say it was. I don't want to convince you, but just point out that your certainty is misplaced. Your arguments have been made by many fine scientists and one or two philosophers. They have been shown not to stand up to analysis. If they did then your ideas would be widely held.

There is no logical reason a non-conscious entity couldn't talk to us about its ideas. None whatsoever. There would just be no first person experience of the ideas.
I don't think everyone shares your view of what is logical.

Because science hasn't gotten there yet. Just like no one could understand magnetism or the sun going across the sky hundreds of years ago. I know this argument has probably been beaten into the ground, but you have to put yourself in those ancient people's shoes. They were sure there was no scientific explanation for these phenomena, just like many today are sure there is none for consciousness.
You do our ancestors a disservice, and misunderstand the nature of metaphysical questions.

When I say you shouldn't use common sense as an argument, I mean you can't use it as your only argument. You need logic to back it up. The sun looks like it's going around us, but it isn't.
There is this ridiculous idea going around that if one uses one's common sense then one is bound to conclude that the Earth goes around the sun. I take no notice of it.

Um... I say it can be. That's where we disagree. You want me to just stop arguing and accept the obvious, that you're right? That's a compelling argument, but no.
Ok. Demonstrate by use of reason alone that consciousness exists and I'll withdraw my comment. If you can't do this then you ought to wonder why not.
 
Last edited:
  • #138
Hi,

From my own experiences, I must conclude that consciousness (in terms of self-awareness, perception, etc.) does not reside only in a physical vehicle. I have had out-of-body experiences, and experiences of alternate realities, that are as vivid and as real as (if not more so than) this physical reality.

juju
 
  • #139
Canute,
"How do you know?" and "I don't think so" are not very convincing arguments. All I'll say is that it is DEFINITELY logically coherent to imagine a non-conscious being acting as a conscious one because we don't know anyone else is conscious, and yet they act just like us. "Logical" doesn't mean "consistent with your preconceptions."
 
  • #140
Pavel said:
It is for the same reason that the standard atomic framework was replaced by Niels Bohr’s model, and later by Schrödinger’s picture of how atoms really work - they failed to explain certain behavior and were therefore modified to make the conceptual framework more consistent. Similarly, the random/deterministic model of consciousness is also weak in its explanatory power. It produces, as Les has aptly put it, zombies, not conscious beings. That’s the reason for something else to creep in.

Those models were changed because they couldn't explain the outcomes of certain experiments. What experiments show a problem with the random/deterministic model of choice (I think you are confusing this with consciousness)? Just because the idea is unsettling to you doesn't mean it's incoherent, or even wrong. And by the way, zombies refer to the hypothetical beings who are identical to us in every physical way but lack consciousness. We talked about them a lot earlier in this thread. Just to reiterate, when I talk about consciousness, I mean the subjective experience of what it's like to see a color, feel pain, etc. Choice is something different.

In order to continue, I’d like to stick with the specific thread of thought in question, and not try to discover quantum mechanics, human art, and mysticism at the same time. Every time you deviate, you open up a Pandora’s box and we’ll never be able to come to any conclusion. Let’s stick with the question at hand, and if we reach an impasse, we’ll know exactly where we disagree. If there’s a way to resolve the disagreement through empirical observations or logical methods, good for us; if not, at least we’ll know what it all comes down to (I’m sure it’ll be something in the realm of metaphysics).

I'm trying to make my arguments as rigorous as possible. If someone claims they are false because of incomplete knowledge of some other subject, like quantum mechanics, I need to try to correct them. But, I agree, let's try to stay on topic.

First, it seems like we really need to agree on what we mean by “conscious being”. I don’t like your behavioristic interpretation of it, as anything which behaves like human consciousness. If I didn’t know anything about electromagnetic waves and electronics, I’d definitely conclude that my radio is very conscious and intelligent (well, depending on the station I tune in). However, we both know that’s not the case.

Consciousness is experience, as I just described. I don't know where I claimed it was behavioral. I actually explicitly described two beings that behaved identically but one was conscious and the other wasn't, so I don't understand where you got this idea.

Besides, you’re comparing it to something we’re trying to define in the first place. I think the aspect of subjectivity is necessary in defining a “conscious being”. I’m not sure I like the self-awareness aspect, even though I agree with it, because it’s too vague and hard to define. What, specifically, is aware? So, to call something “conscious”, I think it needs to have subjective qualities. For example, intention. I intend to drop the fork. My radio cannot intend, even though it exhibits other conscious properties.

I'm going to have to flat out disagree with you here. Intention is not consciousness. We can see that other people have intentions, but we know nothing of what they experience. The subjective feeling of what it's like to want to do something is an aspect of consciousness, but they are not the same thing.

“I think” would be another. When I think, I manipulate objects in my mind in such a manner that I assign qualitative, subjective attributes to them, and I operate on them without restrictions.

This is getting closer. The actual mental images are a result of consciousness. However, the attributes you assign to them are not. There is clearly a place in your brain where you store the attributes of objects. These are called schemas, and I have little doubt psychology will provide a scientific explanation for them.

Just on a side note, those of you who claim all these functions of the brain, like knowledge, thought, etc., are a result of non-physical consciousness: what is the brain for? It's there, it is obviously important, and we understand only a tiny fraction of what it does. Are you claiming it's superfluous, and that it only does whatever our "consciousness" tells it to?

A zombie, on the other hand, does not have this privilege; even though it might be self-aware, it is bound by restrictions on what attributes it can assign to objects.

Again, your notion of a zombie is unclear, and not the same as the one the rest of us use.

So, before moving on, do you agree with this notion of “conscious being”? As soon as we agree on the definition, we’ll see if we can build a device, which would be governed by deterministic rules, and yet satisfy our notion of “conscious being”.

Look over what I've typed above and see what you agree with and what you would change.

The other thing we need to be clear on is the kind of simulator you’re building. Are you building a replica of the human brain that will exhibit similar physical properties on its quantum level, or are you simulating it through interpretation of 1’s and 0’s in a digital model? There’s a big difference between the two and we should be clear about it.

I assume you mean by the second description a machine where I try to program in the functions of the brain without worrying about its actual physical structure. If that's what you mean, then my simulator is the first description, accurate down to the atom. It's not practical, but if we had a computer the size of a galaxy, I think it could be done, and "could be" is what's important.
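
Just to make the "1's and 0's" picture concrete in the crudest possible way, here is a toy sketch of what a time-stepped digital model looks like. It is only an illustration: the "leaky integrate-and-fire" style units and every number in it (threshold, leak, weights, step count) are invented for this example, and it is nothing like the atom-level simulator described above.

Code:
# Toy sketch only: a tiny time-stepped network of "leaky integrate-and-fire"
# style units. All parameters are invented for illustration.
import random

random.seed(0)                 # fixed seed so the run is repeatable
NUM_UNITS = 100
THRESHOLD = 1.0                # potential at which a unit "fires"
LEAK = 0.9                     # fraction of potential kept each step
WEIGHT = 0.2                   # contribution of one incoming spike
STEPS = 50

# sparse random wiring: each unit listens to 5 others
inputs = {i: random.sample(range(NUM_UNITS), 5) for i in range(NUM_UNITS)}
potential = [0.0] * NUM_UNITS
fired = [False] * NUM_UNITS

for step in range(STEPS):
    next_potential, next_fired = [], []
    for i in range(NUM_UNITS):
        p = potential[i] * LEAK                                  # decay
        p += WEIGHT * sum(1 for j in inputs[i] if fired[j])      # incoming spikes
        p += random.uniform(0.0, 0.1)                            # background noise
        if p >= THRESHOLD:
            next_fired.append(True)
            next_potential.append(0.0)                           # reset after firing
        else:
            next_fired.append(False)
            next_potential.append(p)
    potential, fired = next_potential, next_fired
    print(f"step {step}: {sum(next_fired)} units fired")

The only point of the sketch is that the entire state is a finite list of numbers updated by fixed rules; whether a model like this, or one fine-grained enough to track every atom, would also be conscious is exactly what is in dispute.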

I’ll comment on my determinism analogy as soon as we agree on the terminology. By the way, coming from a different country, the experience of two different cultures made me a strong believer in cultural relativism and social conditioning in general. Throw genetics and the whole of biology on top of that, and I’m convinced our consciousness is 99% determined. But there’s that 1% that crept in and made things so mysterious, at least for me. I know you disagree with the existence of this 1%, and I’d like to make that the focus of this discourse.

I'm not sure what you mean by "our consciousness is 99% determined."
 
