The Role of Consciousness in the Evolution of Interpretation

  • Thread starter: PIT2
  • Tags: Self
AI Thread Summary
The discussion centers on the conceptual and physical emergence of the self in nature, exploring definitions and the nature of existence. A self is defined as the experience of distinguishing between what is part of the experiencer and what is not, leading to the assertion that the first self arose as the first entity capable of making this distinction. The conversation highlights the complexity of defining the self, with concerns about circular definitions and the distinction between knowing and experiencing. Participants argue about whether the self is a mental construct or an enduring entity, emphasizing that a self requires the recognition of non-self experiences. Ultimately, the dialogue reflects on the nature of consciousness and the evolving understanding of selfhood.
PIT2
How do you guys think the first self arose in nature, conceptually and physically?

Also what is a good definition of a self?
How about this one:

A self is the feeling that some things are part of that which experiences them and that other things are not part of that which experiences them. In other words, it is the experience of "this is me, and that is not me".
 
PIT2 said:
How do you guys think the first self arose in nature, conceptually
I think the first (and only) self was the first entity to exist. Whether it came into existence at some point out of nothing, or whether it always existed and had no beginning is IMHO an insoluble mystery.
PIT2 said:
and physically?
If, by "physically", you mean made of matter or energy as our physical universe is, I don't think the self is, or ever was, physical. (I am a Cartesian dualist).
PIT2 said:
Also what is a good definition of a self?
I don't know if you would call this definition good, but I think the self is an entity capable of knowing.
PIT2 said:
How about this one:

A self is the feeling that some things are a part of urself and that other things are not a part of urself. In other words, it is the experience of "this is me, and that is not me".
One problem is that you use the term 'self' (actually you used "urself") in the definition of 'self'. That makes the definition circular and thus it isn't very clear. Also, rather than being a feeling, I think it would be better to say the self is the thing, or entity, which has the feeling. That seems to be more consistent with the ordinary understanding of the term.

Another problem is that your definition uses the term 'me'. You haven't defined the term 'me' but it seems reasonable to consider it to be a synonym of 'self'. This brings on more circularity.

As for my definition, I suppose I better define 'the capability of knowing'. I suspect that each reader of this post knows at least something. This means that each reader must be capable of knowing. So I define 'the capability of knowing' to be what each reader knows it to mean. You know what I mean.

Paul
 
Paul Martin said:
If, by "physically" you mean made of matter or energy as our physical universe is, I don't think the self is, or ever was, physical.

I wasn't really suggesting it was physical or nonphysical, just wondering at which point it arose in the physical universe.

Paul Martin said:
(I am a Cartesian dualist.) I don't know if you would call this definition good, but I think the self is an entity capable of knowing. One problem is that you use the term 'self' (actually you used "urself") in the definition of 'self'. That makes the definition circular and thus it isn't very clear.

I noticed that it was circular just before you replied and quickly replaced 'urself' with 'that which experiences them'. The logic is that the replacement phrase does not itself equal a self under the definition, so the definition is no longer circular. For instance, if something only has experiences, without making the distinction that some are part of that which experiences them and others are not, then there is still something which experiences them, but it no longer constitutes a self, because it makes no distinction. In simple words: the 'self' is a feeling had by that which experiences.

Does this solve the circularity of the definition?

Paul Martin said:
Also, rather than being a feeling, I think it would be better to say the self is the thing, or entity, which has the feeling. That seems to be more consistent with the ordinary understanding of the term.
But I think the self is a mental construct, not a thing or the thing having the experience. In my view, if one thinks the sun is part of that which experiences it, then the sun is part of one's self.

Paul Martin said:
As for my definition, I suppose I better define 'the capability of knowing'. I suspect that each reader of this post knows at least something. This means that each reader must be capable of knowing. So I define 'the capability of knowing' to be what each reader knows it to mean. You know what I mean.

Does knowing equal experiencing? And in your definition, there is always a self as long as there is knowing (experiencing), correct?

In my definition there can be experience without a self (though not without a 'that which experiences it').
 
PIT2 said:
I noticed that it was circular just before you replied and quickly replaced 'urself' with 'that which experiences them'. The logic is that the replacement phrase does not itself equal a self under the definition, so the definition is no longer circular. For instance, if something only has experiences, without making the distinction that some are part of that which experiences them and others are not, then there is still something which experiences them, but it no longer constitutes a self, because it makes no distinction.

Does this solve the circularity of the definition?
I think it helps. Your definition boils down to an entity which experiences and which can make a distinction. A pot of water on a hot burner experiences heating, and maybe boiling, but as you observe, it doesn't make any distinction so it is not a self. A thermostat experiences warming, but it can also make a distinction between the temperature being above or below some threshold. Still, I don't think either of us would say a thermostat has (or is) a self. I think the critical capability, in addition to experience and distinction, is that of knowing. I'd say that if the thermostat knew that it had just clicked, it would qualify as a self.
PIT2 said:
But I think the self is a mental construct, not a thing or the thing having the experience.
But doesn't being a mental construct imply that there is (was) some mind which did the constructing?
PIT2 said:
Does knowing equal experiencing?
Of course it would depend on your definitions. But I would say that they are not the same. I tried to explain the difference above with the thermostat example.
PIT2 said:
And in your definition, there is always a self as long as there is knowing (experiencing), correct?
Correct (except that knowing is not the same as experiencing).
PIT2 said:
In my definition there can be experience without a self (though not without a 'that which experiences it').
I'd agree with that, as I explained above.

Warm regards,

Paul
 
Paul Martin said:
I think it helps. Your definition boils down to an entity which experiences and which can make a distinction. A pot of water on a hot burner experiences heating, and maybe boiling, but as you observe, it doesn't make any distinction so it is not a self. A thermostat experiences warming, but it can also make a distinction between the temperature being above or below some threshold. Still, I don't think either of us would say a thermostat has (or is) a self.

In the thermostat example, I am not saying that any distinction it makes between experiences would make it a self. Something could experience all the things happening in the universe, but unless it experiences some of them as not being part of it, it has no self. The only distinction that would make it a self is the one between "this is me" and "that is not me".

So the thermostat could experience a temperature being above or below some threshold, but that doesn't mean it experiences above, below, or the threshold as part or not part of it. If it experienced above, below, and the threshold as all being part of it, then it has no self.

Think about when you're sleeping: you may have all kinds of experiences, but (sometimes) there is (at least when I sleep) little or no distinction made between experiences coming from something 'me' or 'not-me'.

Paul Martin said:
I think the critical capability, in addition to experience and distinction, is that of knowing. I'd say that if the thermostat knew that it had just clicked, it would qualify as a self.
But what is this knowing? Isn't it an experience?

Paul Martin said:
But doesn't being a mental construct imply that there is (was) some mind which did the constructing?
Yes, because the self is a feeling had by that which experiences (mind). The feeling can change, expand, or disappear entirely, but the experiencing entity doesn't have to, because it is not the same as the self.
 
PIT2 said:
In the thermostat example, I am not saying that any distinction it makes between experiences would make it a self. Something could experience all the things happening in the universe, but unless it experiences some of them as not being part of it, it has no self. The only distinction that would make it a self is the one between "this is me" and "that is not me".
I don't think we are very far apart. We just describe it differently. To say that "it experiences any of them as..." seems to imply that "it knows them as...". And, that to experience "this is me" is the same thing as knowing that this is me. Similarly for things that are experienced as "not me" it's the same as saying "I know that is not me." I still think that knowing, in the sense I described earlier, is the key to the self.
PIT2 said:
So the thermostat could experience a temperature being above or below some threshold, but that doesn't mean it experiences above, below, or the threshold as part or not part of it.
That depends on your definition of 'experience'. I would define 'experience' as something that happens (to the experiencer), and by this definition, heating is experienced by the thermostat. So, by this definition, I would agree with you that the thermostat could experience a temperature being above or below some threshold. And, I think we agree that the thermostat does not experience that temperature as a part, or not a part, of the thermostat. Using my language, I would say that the thermostat does not know that it is warm, or that it crossed a threshold.
PIT2 said:
If it experienced above, below, and the threshold as all being part of it, then it has no self.
I'm not sure I get what you mean here.
PIT2 said:
Think about when you're sleeping: you may have all kinds of experiences, but (sometimes) there is (at least when I sleep) little or no distinction made between experiences coming from something 'me' or 'not-me'.
Here I think you are getting into a very complex and mysterious area, so I don't think it can help us sort out your original question. In my opinion, sleep itself is a baffling mystery which I think science can't begin to explain. Add dreams to that mystery and it only deepens. In the context of sleep and dreams, the very definitions of 'you', 'self', 'consciousness', 'knowing', 'feeling', etc. etc. become ambiguous and a lot messier than they are in a waking state.
PIT2 said:
But what is this knowing?
Good question. I think it is a profound question at the heart not only of the question of consciousness, but of epistemology and ontology as well.
PIT2 said:
...the self is a feeling had by that which experiences (mind). The feeling can change, expand, or disappear entirely, but the experiencing entity doesn't have to, because it is not the same as the self.
That implies that the self can change, expand, or disappear entirely and the mind endures. It makes more sense to me to define 'self' as the thing that endures and the feelings, which are had by the self, come and go.

Warm regards,

Paul
 
Paul Martin said:
I'm not sure I get what you mean here.
In my definition a self can only exist if there is also an experience of non-self: something that is not part of the experiencer. If you have a thought, then you experience it as part of the thing that experiences; you identify with the thought and it is 'owned' by you. If you had an experiential world consisting entirely of 'owned' experiences like this, then there wouldn't be a self, because there is also no non-self.

Paul Martin said:
That implies that the self can change, expand, or disappear entirely and the mind endures. It makes more sense to me to define 'self' as the thing that endures and the feelings, which are had by the self, come and go.
What about our human selves? Do you think they endure?
I remember you have talked about a primordial consciousness, but if so, then doesn't the spawning of human and animal selves from that show that these selves can change, expand, and disappear?
 
PIT2 said:
In my definition a self can only exist if there is also an experience of non-self: something that is not part of the experiencer.
Being the dualist that I am, I can easily agree with your definition. I'd say, like Descartes would say, that there is the mind, or self, and then there is everything that is not mind, or that is outside of the self. In my view, that other stuff is simply thoughts in the mind of the self. So you have the duality of the knower and the known.
PIT2 said:
If you have a thought, then you experience it as part of the thing that experiences; you identify with the thought and it is 'owned' by you.
This is a legitimate and reasonable way of thinking about it. But I think it is just one arbitrary way of using Phaedrus' knife to distinguish and categorize things. You can think of the knower and the known as one, or you can think of them as distinct.
PIT2 said:
If you had an experiential world consisting entirely of 'owned' experiences like this, then there wouldn't be a self, because there is also no non-self.
I think that's a reasonable conclusion. That's just about exactly what the Buddhists teach: There is no self; there is just the One and a bunch of illusions.
PIT2 said:
What about our human selves?
I'm with the Buddhists here. I don't think there is any such thing as a human self.
PIT2 said:
Do you think they endure?
No, because I don't think they exist. But, of course, there is the familiar idea of a human individual, and it is reasonable in vernacular conversation to think of each one as a self. In my view, this "self" is equivalent to Gregg Rosenberg's idea of a "Natural Individual". The idea is that the body is a vehicle through which some outside "self" vicariously experiences what happens to the Natural Individual and which can deliberately control some of the actions of the Natural Individual. In short, the body is a vehicle being driven by the self, or mind. Thus, what we normally refer to as a human self is really a paired combination of a real self and a body. This is analogous to the pairing of a car and driver. This pair acts completely differently from a car without a driver, or a driver without a car. It can accomplish feats that neither could do without the other.

So, to address the question, the pair does not endure for any longer than the intervals between sleep episodes, and it disappears completely on death of the body. So, if the pair is what you mean by a "human self", then it does not endure. But IMHO the real self, or mind, does indeed endure. I think it has always existed, although not in the same form, and I think it always will.
PIT2 said:
I remember you have talked about a primordial consciousness, but if so, then doesn't the spawning of human and animal selves from that show that these selves can change, expand, and disappear?
Yes. I use the term Primordial Consciousness (PC) to refer both to the one single self, or mind, that animates all conscious entities now, and also to refer to the primordial state of that mind, which I suspect at the very beginning was extremely simple and, well, primordial. As I explained above, the human and animal "selves" appear, change, function, and disappear.

Warm regards,

Paul
 
The test that is often used to determine if a creature has a concept of "self" is to see if it is capable of recognizing itself in a mirror as distinct from other creatures. After all, you cannot say "that is me" if you cannot conceive of a "me".

Basically, the test is accomplished by watching the subject acknowledge the presence of a dot of paint on their own forehead, which they can only see in the mirror.

Both dolphins and chimps have passed the test. Dogs and cats and others do not.
 
  • #10
Hi Dave,

I am not impressed by that test. Neither am I by the Turing test. I just read in a recent "Popular Science" magazine (I think it was), which had a cover story on robots, that someone has built a robot that recognizes itself in a mirror. Whether it can or not, I don't think the robot is conscious and I don't think it has a self in the sense we are talking about.

As for dogs and cats, I think that anyone who is very familiar with them would agree that they have a strong sense of self exhibited by their concern for their own preservation and safety. My dog is well aware of when his brain is maxed out and he can't figure out how to get his ball out from somewhere where it is stuck. He looks at me with the unmistakable message that he can't get the ball and that he wants me to retrieve it. He knows his own capabilities and limitations and he looks outside himself for help when he needs it. I also think he has absolutely no interest in any dot he might see on anyone's forehead, including his.

Warm regards,

Paul
 
  • #11
DaveC426913 said:
Basically, the test is accomplished by watching the subject acknowledge the presence of a dot of paint on their own forehead, which they can only see in the mirror.

Both dolphins and chimps have passed the test. Dogs and cats and others do not.
I think the test probably is a good way to tell if some animals have a self. However, I don't think it can be used to determine whether animals that do not pass the test have no self. This test is based on visual experience and compares the animal's response to a human response. The self being tested is the already well-advanced human form, but I think that other (perhaps all) animals have a less evolved and less testable form of a self. Also, I think any type of experience should count, whether it is seen, heard, felt, thought, etc.

I'm going to mention an extreme example which you probably won't agree with: bacteria. A bacterium can recognise a virus as not being part of itself, and other bacteria or the colony as being part of, or similar to, itself. I'd say that if this recognition is based on experience, then the bacterium has a self, because it recognises a non-self. However, the self here is so weak that it doesn't look anything like our human self. Maybe our human self is a much more complex self/non-self recognition system?
 
  • #12
Paul Martin said:
I use the term Primordial Consciousness (PC) to refer both to the one single self, or mind, that animates all conscious entities now, and also to refer to the primordial state of that mind, which I suspect at the very beginning was extremely simple and, well, primordial. As I explained above, the human and animal "selves" appear, change, function, and disappear.
Does the PC have a self according to the definition I gave? Does it experience/know/make a distinction that some things are, and other things are not, part of it?
 
  • #13
Paul Martin said:
I am not impressed by that test. Neither am I by the Turing test. I just read in a recent "Popular Science" magazine (I think it was), which had a cover story on robots, that someone has built a robot that recognizes itself in a mirror. Whether it can or not, I don't think the robot is conscious and I don't think it has a self in the sense we are talking about.
You are mistaking the word 'test' for the word 'definition'.

Computers can be programmed to simulate all sorts of life-like or intelligent behaviours, none of which cause us to question whether the computer could be considered alive or truly intelligent.
 
  • #14
PIT2 said:
I think the test probably is a good way to tell if some animals have a self. However, I don't think it can be used to determine whether animals that do not pass the test have no self.

The 'self' I'm talking about is really 'self as distinct from others'.

There's a passage in Jay Ingram's book 'Theatre of the Mind' that defines four stages of awareness. I think this is it:

I know.
I know I know.
I know you know.
I know you know I know.

It has to do with the recognition of, first, one's self [step 2], and then the realization that others are equivalent, yet distinct [step 3]. It also leads to the concept of deliberate deception [step 4], inasmuch as a critter can 'anticipate' another's thoughts (such as "he saw me bury that bit of food here"). Some intelligent birds have been tested and shown to demonstrate this behaviour, and it's not just instinct.

Anyway, the point is, a critter that sees a dot of paint on its paw will tend to try to lick it off. A critter that sees a dot of paint on its forehead will also tend to pay attention to it - but only if it recognizes that the reflection it sees is itself.
 
  • #15
PIT2 said:
Does the PC have a self according to the definition I gave? Does it experience/know/make a distinction that some things are, and other things are not, part of it?
GOOD question! I don't think there is a simple 'yes' or 'no' answer to your question, but I think by addressing the implications of the question and possible answers, some of the complexity of reality might be glimpsed. Here's how I see it. (Bear in mind that all my speculation is based on my premise of a single consciousness in all of reality. Also keep in mind that by now, PC is extremely evolved and the label of 'PC' is really a misnomer; PC is no longer Primordial.)

PC is the only entity in reality that can think, but nonetheless much (if not most) of PC's thinking is done vicariously. That is, some of PC's thinking is done by using human brains as devices to gather information about parts of physical reality and also, possibly, to store information about the history of experiences of that particular human. So just this gives us about six billion different modes of thinking for PC in its "current" state. Add to that all the other animals, organisms on other planets, people and animals that lived in other times and are now long since dead, the possibility of other Natural Individuals existing in dimensions outside of our physical reality, and you can see that the various modes of PC's thought are very complex indeed. And finally there is a possibility that PC can think independently of any of the "crutches" of these Natural Individuals.

So to answer your question, we would have to be specific about which of these modes of thought we are talking about. For example, if we were talking about a specific human Natural Individual, as the vehicle used by PC in his/her/its thought, and that individual, say, is a person who is an enlightened Buddhist, then the answer would probably be that the PC would make no distinction between self and other, or between inside and outside, or any other distinction. In this case, PC would declare that there is no self.

On the other hand, if the particular human Natural Individual was someone of the persuasion of, say, Daniel Dennett, PC may conclude and declare that there was only the outside and there is really no self at all.

But if the particular human was more typical of the normal mix, PC would probably declare that, yes there is a self, bounded by the skin of the human organism, and everything outside that skin is the outside world.

Warm regards,

Paul
 
  • #16
Paul Martin said:
On the other hand, if the particular human Natural Individual was someone of the persuasion of, say, Daniel Dennett, PC may conclude and declare that there was only the outside and there is really no self at all.

I want to know if your PC has/had a self in:

1. its most primal form, or
2. its most-knowing form (perhaps all-knowing), which would be a PC that can access all experiences like a one-way mirror: the PC can experience the human experiences (as well as everything else), but the humans and other parts can't experience the PC's other experiences.

Do you think either of those forms of PC exist and have a self?

As for the human examples:
On the other hand, if the particular human Natural Individual was someone of the persuasion of, say, Daniel Dennett, PC may conclude and declare that there was only the outside and there is really no self at all.
Do you really think Dennett has no self?
He may reason that we don't, but he still experiences being a human.
 
  • #17
PIT2 said:
I want to know if your PC has/had a self in:

1. its most primal form.
After thinking about this for many years, my guess is that, No. PC in its most primal form did not have a self according to your definition of being aware of, or being able to make, a distinction between itself and other. In my opinion, the most primal PC was simply a rudimentary ability to know, but with nothing whatsoever known. I think of it sort of like a primal pure energy field (which is some people's favorite ontological primitive). The field, being energy, is the ability to do work, but in that primal state, no work has yet been done. Once some work starts getting done, the universe can begin unfolding. In my view, instead of that ability to do work, the origin of reality was instead the ability to know. And once something -anything at all- became known, then knowledge began to accumulate and the universe began unfolding. Somewhere down the line, after the construction and recognition of many bits of knowledge (known information), the capability to distinguish between something and something else developed. Only after this capability was used to identify some part of reality as "self" and then to distinguish between that and other, did a notion of self come to be. But it is only a notion and it only exists and makes sense in some context of assumptions. That is why there is no absolute answer to the question, Is there a self? As I pointed out before, from some points of view there is, and from other points of view, there is not.
PIT2 said:
2. its most-knowing form (perhaps all-knowing), which would be a PC that can access all experiences like a one-way mirror:
I think that identifying that "most-knowing form" might be hard to do. How would you judge? For example, who is the most-knowing human? Is it the most published scientist? The most successful evangelist? The most tenured philosopher? The most enlightened guru? The most successful businessman? The person with the highest IQ? Someone who just emerged from an NDE and who "saw it all"? Who? Since IMHO these are all PC in various modes of thought, they are all candidates for your question. But, in addition to human Natural Individuals, there is the possibility of other modes for PC thoughts. So if we are asking who knows the most about our particular physical reality, maybe one of those human individuals would be the answer. But if the question were who knows the most about the greater reality including hyperdimensional space, human knowledge might look pretty puny. On the other hand, it might be possible that knowledge decreases, at least in detail, as you move through levels beyond our 4D spacetime. It might be, as the Wizard of Oz sort of hints at, that if you finally get to that ultimate Natural Individual at the very top of the hierarchy, it might not know anything at all. Maybe all knowledge is distributed among the Natural Individuals throughout the hierarchy with the most acute and detailed knowledge occurring down at the bottom with us humans. Who knows?
PIT2 said:
the PC can experience the human experiences (as well as everything else), but the humans and other parts can't experience the PC's other experiences.
Yes. I'm not sure if you mean the same thing I do, but IMHO PC is the only thing that can experience, so what we think of as human experience is really the vicarious experience of PC.
PIT2 said:
Do you think either of those forms of PC exist and have a self?
Yes, I think they exist, but whether they have a self or not is nothing more than a semantic question. It all depends on how you define self. As I have tried to explain, the answer is sometimes 'yes' and sometimes 'no'.
PIT2 said:
Do you really think Dennett has no self?
I think he has the same self that you and I do. It is the same self shared by all humans, animals, PC, and anything else that might seem to have a self.
PIT2 said:
He may reason that we don't, but he still experiences being a human.
In my view, it is really PC having the experiences of being Daniel Dennett regardless of what the vocal cords of Dennett's body might utter.

Warm regards,

Paul
 
  • #18
So Paul, your PC is a thing that exists outside the human self, even outside and prior to the known universe -- how does this differ from the concept of god?
 
  • #19
Rade said:
So Paul, your PC is a thing that exists outside the human self, even outside and prior to the known universe -- how does this differ from the concept of god?
Several ways:

1. PC is not omniscient. It only knows what it has painstakingly learned over time by experience. At the very beginning, it knew absolutely nothing.

2. PC is not eternal. Either PC itself had a beginning, or the very first bit it ever knew marked the beginning of time when it first became known.

3. PC is not omnipotent. It cannot, for example, violate the laws of physics. It cannot interfere with the unitary evolution of QM unless it stays under the HUP threshold.

4. PC is not perfect. The fossil record here on Earth shows the horrendous trial and error pattern of the development of modern flora and fauna. Individual species also contain many design flaws and poor choices of structural material.

5. PC is not infinite. It had a finite beginning and the result of the evolution of reality is now, and always will be, finite in extent and duration.

6. PC is not wholly good (omni-benevolent). Since PC is the driver of all organisms, it follows that PC was also the driver of Adolf Hitler and Pol Pot.

7. PC is not immutable. PC has been evolving since the beginning, learning all the while.

8. PC is not complete. Being finite and growing, PC still has a long way to go.

9. PC is limited. PC has a difficult time communicating the knowledge it has gained from one Natural Individual to another. PC can't avert natural disasters like volcanos, hurricanes, asteroid impacts, etc. that cause a lot of trouble for life.

10. PC did not do the things attributed to God, like stop the sun, part the Red Sea, flood the earth, etc.

11. PC is us, with all our frailty and power.

I hope that helps clear things up.

Warm regards,

Paul
 
  • #20
Paul Martin said:
Several ways...I hope that helps clear things up.
Well, at least I now know that PC ≠ god, and that god must be prior to PC; thus PC must be created by god (that is, god creates the potential as well as the actual). From classical physics we have the equation H = T + V, where H is the Hamiltonian function (the total energy of a system), T is the kinetic energy, and V is the potential energy. So, next question: would it be correct to think of your "PC" concept as being the "V", thus PC = V?
 
  • #21
Rade said:
Well, at least I now know that PC ≠ god, and that god must be prior to PC; thus PC must be created by god (that is, god creates the potential as well as the actual). From classical physics we have the equation H = T + V, where H is the Hamiltonian function (the total energy of a system), T is the kinetic energy, and V is the potential energy. So, next question: would it be correct to think of your "PC" concept as being the "V", thus PC = V?
You could think of it that way, but I don't think it would be correct.

Warm regards,

Paul
 
  • #22
Paul Martin said:
So I define 'the capability of knowing' to be what each reader knows it to mean. You know what I mean.
Oh dear, Paul – and you accuse PIT2 of using a circular definition?

PIT2’s definition was a good start, but as you pointed out it is a little circular. IMHO a better operational definition of “the feeling of self” (avoiding self-referentiality in the use of the word self) is:

The ability of an agent to register and classify perceptual phenomena under at least two headings: phenomena which are considered to be associated with a part of the agent on the one hand, and phenomena which are considered to be associated with objects external to the agent on the other.

This definition implies that an agent needs to possess the ability to register and classify perceptual phenomena before it can claim to have a feeling of self, which I think entirely rational.

The agent’s belief that it “knows” (where I define knowledge as justified true belief) things about the world then simply arises quite naturally from this as a consequence of the agent’s registration and classification of perceptual phenomena.

Best Regards
 
  • #23
moving finger said:
Oh dear, Paul – and you accuse PIT2 of using a circular definition?
"Accusation" is a little strong, but I did point out the circularity.
moving finger said:
PIT2’s definition was a good start, but as you pointed out it is a little circular.
You seem to agree.
moving finger said:
IMHO a better operational definition of “the feeling of self” (avoiding self-referentiality in the use of the word self) is:

The ability of an agent to register and classify perceptual phenomena under at least two headings: phenomena which are considered to be associated with a part of the agent on the one hand, and phenomena which are considered to be associated with objects external to the agent on the other.
You have defined the "feeling of self" but you have not defined the self itself, which is what I think the original question asked. What you have offered is a criterion for testing whether one's claim to having a feeling of self is legitimate, or believable.

I think all this borders on Sophism. The purpose of defining terms is not to learn anything, or discover any truth, since neither can be done that way. The only reasonable purpose of defining terms is to try to help us communicate. Once we understand the meaning of a term as used by one of us, we might then be able to begin to understand what the person is trying to say. When we finally understand that, we may have some criticisms or objections to the idea, but attacking or judging the definition doesn't help anything.
moving finger said:
This definition implies that an agent needs to possess the ability to register and classify perceptual phenomena before it can claim to have a feeling of self, which I think entirely rational.
According to your definition, I could program a computer to register and classify information from perceptual phenomena and then have it utter a claim to possessing a self. The machine/program would then qualify as having a self. I think it is perfectly legitimate to define 'self' that way, but I don't think it helps us understand the feeling of self that we experience. Of course you already know that we disagree on this and why.
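For instance, a deliberately trivial sketch along these lines (just my own toy illustration; the part names and the wording of the "claim" are arbitrary) would already satisfy the letter of your definition:

Code:
# A trivially simple "agent" that registers and classifies perceptual
# phenomena under two headings (self / non-self) and then utters a claim
# to possessing a self. The labels and rules are arbitrary.

class ToyAgent:
    def __init__(self, own_parts):
        # Labels the agent treats as parts of itself.
        self.own_parts = set(own_parts)
        self.registered = []  # every percept registered so far

    def register(self, percept):
        """Register a percept and classify it as 'self' or 'non-self'."""
        label = "self" if percept in self.own_parts else "non-self"
        self.registered.append((percept, label))
        return label

    def claim(self):
        # The machine can be made to utter the claim, for what that is worth.
        return "I possess a self." if self.registered else "Nothing registered yet."


agent = ToyAgent(own_parts=["left sensor", "right sensor", "battery"])
print(agent.register("left sensor"))    # -> self
print(agent.register("warm sunlight"))  # -> non-self
print(agent.claim())                    # -> I possess a self.

Nothing in this program knows anything in the sense I have been using the word; it merely sorts labels into two bins and prints a sentence.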
moving finger said:
The agent’s belief that it “knows” (where I define knowledge as justified true belief) things about the world then simply arises quite naturally from this as a consequence of the agent’s registration and classification of perceptual phenomena.
Here you tacitly assume that a machine, suitably programmed, can hold beliefs. I haven't heard (at least I don't remember) your definition of 'belief'. But to me, belief is a form of knowledge, which I think is primary, and it is IMHO not possible for a machine to know or believe. I know that you hold belief to be primary, and knowledge to be "justified, true, belief" but that raises a few questions in my mind:

1. What is your definition of 'belief'?
2. Who is the judge that has the job of "justifying" the true belief?
3. What is the criterion of truth that the judge applies to the belief?

The answers to those will help me better understand you.

Good talking with you again, MF

Warm regards,

Paul
 
  • #24
Hi Paul

moving finger said:
The ability of an agent to register and classify perceptual phenomena under at least two headings: phenomena which are considered to be associated with a part of the agent on the one hand, and phenomena which are considered to be associated with objects external to the agent on the other.
Paul Martin said:
You have defined the "feeling of self" but you have not defined the self itself, which is what I think the original question asked. What you have offered is a criterion for testing whether one's claim to having a feeling of self is legitimate, or believable.
As you correctly pointed out, we are here talking about the definition of “self” and NOT “the conscious feeling of self” (an important point to bear in mind).

“Self” is simply a particular classification of parts of the world by an agent – it is an attempt by an agent to draw a boundary between two distinct parts of the world. Once an agent is able to classify perceptual phenomena as defined above, the agent will be able to distinguish between parts of the world which it considers “self” on the one hand (ie those parts of the world the agent considers to be internal to its physical and operational structure), and “non-self” on the other hand. That’s all there is to it. Nothing mystical or supernatural (such as a primordial consciousness) is needed.

Paul Martin said:
I think all this borders on Sophism. The purpose of defining terms is not to learn anything, or discover any truth, since neither can be done that way. The only reasonable purpose of defining terms is to try to help us communicate. Once we understand the meaning of a term as used by one of us, we might then be able to begin to understand what the person is trying to say. When we finally understand that, we may have some criticisms or objections to the idea, but attacking or judging the definition doesn't help anything.
I agree with all but the first sentence. Why does this have anything to do with solipsism?

Though I agree that the purpose of defining terms is not to arrive at truth, it is critically important to clearly define and understand terms if one wants to be sure (in one's own mind) that one has arrived at truth in understanding the world - to conclude that one has arrived at a true understanding of the world based on poor, ambiguous, or sloppy definitions is a very dubious conclusion.

moving finger said:
This definition implies that an agent needs to possess the ability to register and classify perceptual phenomena before it can claim to have a feeling of self, which I think entirely rational.
Paul Martin said:
According to your definition, I could program a computer to register and classify information from perceptual phenomena and then have it utter a claim to possessing a self. The machine/program would then qualify as having a self. I think it is perfectly legitimate to define 'self' that way, but I don't think it helps us understand the feeling of self that we experience. Of course you already know that we disagree on this and why.
That is precisely why it is important to define terms before debating them. I think you are assuming “self” entails “consciousness”. That is a natural confusion, since in normal daily life the only agents we come across who claim to possess a self are conscious agents. But it does not follow that self necessarily entails consciousness.

moving finger said:
The agent’s belief that it “knows” (where I define knowledge as justified true belief) things about the world then simply arises quite naturally from this as a consequence of the agent’s registration and classification of perceptual phenomena.
Paul Martin said:
Here you tacitly assume that a machine, suitably programmed, can hold beliefs. I haven't heard (at least I don't remember) your definition of 'belief'. But to me, belief is a form of knowledge, which I think is primary, and it is IMHO not possible for a machine to know or believe.
You have it back to front. Belief is not a “form” of knowledge – because knowledge entails truth - I cannot know something (to be true) which is in fact false, whereas belief does not necessarily entail truth - I can certainly believe things to be true which are in fact false. Thus knowledge is a form of belief, not the other way around.
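In shorthand (K, B and J here are just my labels for "the agent knows X", "the agent believes X" and "the agent is justified in believing X"; this merely restates justified true belief, it adds nothing new):

$$K(X) \iff B(X) \wedge J(X) \wedge X$$

where the bare X stands for "X is true". Written this way it is plain that knowledge entails belief (knowledge is a species of belief), while belief on its own entails neither justification nor truth.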

Paul Martin said:
I know that you hold belief to be primary, and knowledge to be "justified, true, belief" but that raises a few questions in my mind:
1. What is your definition of 'belief'?
For an agent to believe a proposition X is for that agent to accept that X is a true proposition. (This of course does not mean that X is true).

Paul Martin said:
2. Who is the judge that has the job of "justifying" the true belief?
Some JTB definitions of knowledge qualify justification by referring to evidential justification, which I think gives us a good clue to the meaning of justification. Justification is a tricky and subjective area – but justification is a necessary condition for a claim to knowledge (how can you claim to know some X is true unless you can justify, at least to yourself, the reasons why you think X is true? – belief without evidential justification is faith, and not knowledge). There is no absolute judge on justification, which is why knowledge is not absolute and depends on perspective (which is also why many people have problems understanding knowledge, and why so many Gettier-style examples have been proposed in an attempt to show that knowledge is not justified true belief – but they all fail when one takes into account the perspectival nature of knowledge). An agent’s belief is justified (from that agent’s perspective) when the agent has taken reasonable logical and rational steps to evaluate the validity of that belief, and the agent deems that it has sufficient evidence in support of that belief (hence the reference to evidential justification). The agent may of course be mistaken in its assessment of the available evidence (no agent is infallible) – but this again simply underscores the fallible nature of knowledge.

Paul Martin said:
3. What is the criterion of truth that the judge applies to the belief?
Are you asking “what is truth?”, or “how do we decide what is true?”? At the end of the day, we are all fallible agents dependent on our beliefs. Nobody has access to absolutely certain knowledge (of truth and falsity) of the world, everything we think we know of the world is built upon our premises, and infallible (certain) knowledge of the external world is an impossible goal. An agent may have a justified belief that X, but if X is false then the agent does not possess knowledge. One cannot possess “false knowledge” about something – if the agent believes that it knows X to be true, but X is in fact false, then the agent does not “know something which is false”, rather it is simply mistaken in its belief that it knows X to be true.

Best Regards
 
  • #25
moving finger said:
As you correctly pointed out, we are here talking about the definition of “self” and NOT “the conscious feeling of self” (an important point to bear in mind).
I happen to think both are the same: the self that we think we have is an experience had by the agent. This also opens the possibility of the agent existing without that experience.

moving finger said:
“Self” is simply a particular classification of parts of the world by an agent – it is an attempt by an agent to draw a boundary between two distinct parts of the world.
...
That’s all there is to it. Nothing mystical or supernatural (such as a primordial consciousness) is needed.
Do you think that if the agent no longer draws boundaries, the agent would also no longer exist? If so, what makes you think this?
 
  • #26
PIT2 said:
I happen to think both are the same: the self that we think we have is an experience had by the agent. This also opens the possibility of the agent existing without that experience.
This is why many people believe that only conscious agents can have a self. "Self" and "the conscious feeling of self" are not synonymous however - when you fall asleep your conscious feeling of self disappears, but your "self" continues to exist (you are just not aware of your "self" whilst unconscious).

PIT2 said:
Do you think that if the agent no longer draws boundaries, the agent would also no longer exist? If so, what makes you think this?
There is no in principle reason why an agent must draw a boundary; I could envisage an agent which believes the entire universe to be its "self" (isn't this what Zen Buddhism tries to achieve - oneness with the universe?). I could also envisage an agent with fluid or ambiguous boundaries. But one of the main reasons why agents have fairly definite boundaries in practice is for survival purposes - if you need to eat in order to survive it's better if you don't eat your "self". And if there are any other agents in existence which might eat you, you also need to know which parts are parts of you, and which are parts of your predators.

Best Regards
 
  • #27
moving finger said:
This is why many people believe that only conscious agents can have a self. "Self" and "the conscious feeling of self" are not synonymous however - when you fall asleep your conscious feeling of self disappears, but your "self" continues to exist (you are just not aware of your "self" whilst unconscious).
The 'self' you are talking about is what I call 'that which experiences' (or 'agent' is also a good word). I think when one is asleep, the agent has ambiguous self-boundaries; it still has some experience of self and still registers the environment it needs to respond to. Now, what a truly selfless experience feels like is probably something you mention below: the Zen Buddhism oneness feeling.

moving finger said:
I could envisage an agent which believes the entire universe to be its "self" (isn't this what Zen Buddhism tries to achieve - oneness with the universe?). I could also envisage an agent with fluid or ambiguous boundaries. But one of the main reasons why agents have fairly definite boundaries in practice is for survival purposes - if you need to eat in order to survive it's better if you don't eat your "self". And if there are any other agents in existence which might eat you, you also need to know which parts are parts of you, and which are parts of your predators.

So the self is useful for the organism's survival, I agree, and if we take the union experience (as Les Sleeth calls it) as an experience of selflessness, then we can also say that an agent can exist without a self (I believe the self to be a feeling).

So what made you say this:

...once an agent is able to classify perceptual phenomena as defined above, the agent will be able to distinguish between parts of the world which it considers “self” on the one hand (ie those parts of the world the agent considers to be internal to its physical and operational structure), and “non-self” on the other hand. That’s all there is to it. Nothing mystical or supernatural (such as a primordial consciousness) is needed.

How does it follow that there is nothing mystical (like a PC) to it? The people who have these experiences say something different, don't they?
 
  • #28
PIT2 said:
How does it follow that there is nothing mystical (like a PC) to it? The people who have these experiences say something different, don't they?
Why do we need to posit something mystical or supernatural? What is left unexplained about the "self" in my naturalistic account?

(Bearing in mind that I do not consider the terms "self" and "conscious feeling of self" to be synonymous)

Best Regards
 
  • #29
moving finger said:
Why do we need to posit something mystical or supernatural? What is left unexplained about the "self" in my naturalistic account?

(Bearing in mind that I do not consider the terms "self" and "conscious feeling of self" to be synonymous)

First of all, your explanation itself may be supernatural and mystical, since we do not know what is and what isn't natural.

Secondly, I don't see how your explanation of the arising of a self says anything about whether the process requires something mystical or not. You spoke of an agent that draws boundaries:

There is no in principle reason why an agent must draw a boundary
It doesn't explain the nature of the agent.
 
  • #30
PIT2 said:
First of all, your explanation itself may be supernatural and mystical, since we do not know what is and what isn't natural.

Secondly, I don't see how your explanation of the arising of a self says anything about whether the process requires something mystical or not. You spoke of an agent that draws boundaries:

It doesn't explain the nature of the agent.
I'm quite happy to discuss whether my explanation contains supernatural elements or not, if you care to point out where you think those supernatural elements are.

Best Regards
 
  • #31
moving finger said:
I'm quite happy to discuss whether my explanation contains supernatural elements or not, if you care to point out where you think those supernatural elements are.
I meant that we shouldn't decide up front what nature is, and then dismiss other options as being inferior because they don't fit the definition.
 
  • #32
PIT2 said:
I meant that we shouldn't decide up front what nature is, and then dismiss other options as being inferior because they don't fit the definition.
As a physicist called Jim Al-Khalili said: Be open-minded, but not so open-minded that your brain falls out.

To make any progress in interpreting the world we must make value judgements about different forms of explanation. If I said that consciousness is actually created by pink fairies (who live at the bottom of my garden) sprinkling magic consciousness-dust over us while we are asleep, I wouldn't expect you to take me seriously. There is a line to be drawn between credible explanations and incredible explanations - but we don't all draw that line in the same place.

Best Regards
 
  • #33
moving finger said:
As a physicist called Jim Al-Khalili said: Be open-minded, but not so open-minded that your brain falls out.

I completely agree, and that's why I value empirical evidence over a theory designed to fit scientific criteria and their limits. I am not so open-minded that I accept the latter as the absolute path to truth.
 
  • #34
moving finger said:
I agree with all but the first sentence. Why does this have anything to do with solipsism?
It has nothing to do with solipsism. My first sentence was "I think all this borders on Sophism." I meant sophism, or sophistry, and from what you wrote, it sounds like you agree with me.
moving finger said:
“Self” is simply a particular classification of parts of the world by an agent – it is an attempt by an agent to draw a boundary between two distinct parts of the world. Once an agent is able to classify perceptual phenomena as defined above, the agent will be able to distinguish between parts of the world which it considers “self” on the one hand (ie those parts of the world the agent considers to be internal to its physical and operational structure), and “non-self” on the other hand. That’s all there is to it. Nothing mystical or supernatural (such as a primordial consciousness) is needed.
I agree with all but the last sentence. Not that I think anything mystical or supernatural is needed, but that I think we can't agree or disagree with your last sentence until we define the terms 'mystical' and 'supernatural'.

In my opinion, we are in exactly the same boat here as we were with the definition of the term 'self'. As our conversation with PIT2 has revealed, all three of us seem to agree that there are various ways of defining the term and different definitions yield different implications. As I tried to point out, if we claim to have learned any truth from any of these implications, we deceive ourselves -- it is nothing more than sophism or sophistry to do so.

Similarly, we can define 'supernatural' in several reasonable ways. The different ways depend on assumptions about the nature of existence. If, for example, one believes that nothing exists but physicality, then everything that is natural would be physical in that belief system, so it follows that 'supernatural' would be synonymous with 'non-physical'. And, in that system, it would also follow that nothing supernatural is required to explain anything that exists.

If, on another hand, one believes that existence comprises more than the physical, and if 'natural' is defined to be everything and anything that exists, then it would also follow in this belief system that nothing supernatural is required to explain anything that exists.

But, if, on a third hand, one believes that existence comprises more than the physical and if 'natural' is defined to be only things that are physical, then something supernatural would be required to explain anything that exists. That simply follows from the definition of 'natural' and the conclusion tells us nothing new about reality.

Now, as for your parenthetical remark implying that my PC is supernatural or mystical, you can define it however you like and it won't change anything or tell us anything. What we have, as I think you have agreed, is a difference in a fundamental assumption concerning existence. That is: can concepts exist in the total absence of mind, or not? I say no and you say yes. So, with my assumption, mind is necessary for anything to exist and therefore must be primary and primordial. That primordial mind needn't be complex or powerful but at least some rudimentary capability must be there in order to grasp the hair and pull it up out of the swamp of nothingness.

With your assumption, there must have existed some sort of primordial concept (a field, a false vacuum, a principle, a set of laws, some rules of logic, some set of possibilities, etc.) in order for reality to exist at all. And, with your assumption, all the complexities we find in reality can be explained as nothing but variations and combinations in the evolution of this primordial (set of) concept(s).

As I have tried to point out, I don't think your explanation is all that much different from mine. You claim that PC can't be simple but must be complex, which I deny. I claim that your primordial (set of) concept(s) can't be simple (especially if it includes the "infinite" set of all possibilities) and furthermore it is inconceivable to me that a concept can exist without being conceived.

But, if we agree that this controversy about the exact nature of "primordiality" is too clouded and murky to resolve, I don't think that the evolution of reality in our respective scenarios is all that much different. There is only the relatively minor difference in the timing of when consciousness first appeared with respect to when life first appeared. And, until we have a good definition both of 'life' and 'consciousness', it would be a waste of time to debate this issue.
moving finger said:
I think you are assuming “self” entails “consciousness”. That is a natural confusion, since in normal daily life the only agents we come across who claim to possess a self are conscious agents. But it does not follow that self necessarily entails consciousness.
Sometimes I make that assumption but in this conversation I don't. If I were asked to define 'self', I would either say that no such thing exists, or I would claim that there is only one such and it is PC. In this conversation I was responding to PIT2's question about his definition of 'self'. And, as I have indicated, I don't think the concept of self is very interesting if it is nothing but the ability to make distinctions. It simply depends on your definitions and your preferences.

Consciousness, on the other hand, is quite another kettle of fish. Here, IMHO, is a deep mystery which I have spent much of my lifetime thinking about and trying to understand and explain. If 'self' = 'consciousness', then the concept of self is mysterious and interesting to me. But if you define 'self' in some other way, it seems uninteresting to me. It is the 'C' in PC that is of primary interest to me.
moving finger said:
You have it back to front. Belief is not a “form” of knowledge – because knowledge entails truth - I cannot know something (to be true) which is in fact false, whereas belief does not necessarily entail truth - I can certainly believe things to be true which are in fact false. Thus knowledge is a form of belief, not the other way around.
Yes, you and I definitely do have our respective carts and horses in different orders. As I said above, we have our different assumptions about the existence of concepts, and we also have different definitions of the terms. Let me start with yours:

Belief:
moving finger said:
For an agent to believe a proposition X is for that agent to accept that X is a true proposition. (This of course does not mean that X is true)
So, in order to have a belief, we must first have an agent (the believer) and a proposition (a concept expressed in language). Fair enough.

Knowledge:
moving finger said:
I define knowledge as justified true belief. ...Justification is a tricky and subjective area – but justification is a necessary condition for a claim to knowledge (how can you claim to know some X is true unless you can justify, at least to yourself, the reasons why you think X is true?)
So, in order to have knowledge, i.e. in order to know, we must first have a proposition (a concept expressed in language which asserts something about reality) and we must have the agent, or knower, who claims that the assertion made by the proposition is consistent with reality, and who has some reasons to justify the acceptance of that proposition and the claim about it.
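
As a rough shorthand (my own notation, not MF's wording), that definition can be compressed into a single line:

$$K(a, X) \;\Longleftrightarrow\; X \,\wedge\, B(a, X) \,\wedge\, J(a, X)$$

where $B(a, X)$ stands for "agent $a$ believes $X$", $J(a, X)$ for "agent $a$ is justified in believing $X$", and $K(a, X)$ for "agent $a$ knows $X$".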
moving finger said:
An agent’s belief is justified (from that agent’s perspective) when the agent has taken reasonable logical and rational steps to evaluate the validity of that belief, and the agent deems that it has sufficient evidence in support of that belief (hence the reference to evidential justification). The agent may of course be mistaken in its assessment of the available evidence (no agent is infallible) – but this again simply underscores the fallible nature of knowledge.
The agent may indeed be mistaken. We have many examples from history to make this point. Euclid thought he was justified in accepting his axioms as obviously true. This justification held sway among thinkers for about two thousand years before people realized that the axioms weren't necessarily consistent with reality. Similarly, Newton's laws of motion and gravitation were believed to be justifiably true for about two hundred years until they, too, were found to be inconsistent with reality. The same fate befell the various conservation laws, and modern scientists aren't as willing to assert that any of their propositions are actually consistent with reality. In fact, the only proposition I am willing to say might be consistent with reality is the proposition that "Thought happens". I think all other propositions can be reasonably doubted. And even this proposition demands definitions of 'thought' and 'happens' which haven't yet been satisfactorily given.

From the foregoing, I think we can conclude that by these definitions, there is no knowledge. I.e. nothing is known.

We can also draw the same conclusion from an agreement you and I came to some years ago, MF. You convinced me that the term 'certain knowledge' was redundant. That is, if knowledge is not certain, it is not knowledge. You said essentially the same thing when you said
moving finger said:
An agent’s belief may be justified, but if X is false then the agent does not possess knowledge. One cannot possess “false knowledge” about something – if the agent believes that it knows X to be true, but X is in fact false, then the agent does not “know something which is false”, rather it is simply mistaken in its belief that it knows X to be true.
You say that justification is not sufficient to guarantee truth. So until someone comes up with a way of guaranteeing a proposition to be true, there will not be, and cannot be, any knowledge except by accident. It might accidentally be the case that someone knows something, but we have shown that they can't know that they know it.

This gets to Rade's (I think it was) quote: "I know./ I know I know./ I know you know/..." and shows that no one can be certain in making any of these assertions. The best we can do is to say "I think I know", or as you said, "I believe I know."

Now this is pure sophistry, but I think it shows that by defining the terms the way you did, we don't end up with anything useful in trying to understand reality.

Furthermore, we made two big assumptions at the very beginning which IMHO steep this whole approach in mystery: The first assumption is the existence of an agent. What, exactly, is an agent? The second assumption is the existence of a set of concepts which can be expressed in language. Where, exactly, did they come from?


I just spent quite a bit of time trying to explain my view with the horse and cart interchanged. I put knowledge ahead of belief, but ahead of that of course, is PC. PC, IMHO, is prior to everything, including concepts and language. It started getting too big, so instead of presenting it here, I'll post this much now and I'll post my explanation in a separate and new thread. I think I'll call it "Let's start at the beginning".

Warm regards,

Paul
 
Last edited:
  • #35
PIT2 said:
I completely agree, and that's why I value empirical evidence over a theory designed to fit scientific criteria and their limits.
Whether you choose to use a "scientific" approach to try to explain the universe, or choose to use some other approach preferable to you, you will at some stage need to fit explanation to observation. Thus we should ask the question: where is the evidence that single-celled organisms exhibit consciousness?

Best Regards
 
Last edited:
  • #36
Hi Paul

I started to prepare a response to all of the points in your post, then realized that most of our disagreement comes down to definitions, and a detailed point-by-point reply is simply a waste of time if we don’t agree on the definition of the single word on which this entire thread is based – “self”.

The meaning of words is derived from their usage in language, not dictated by fiat (except possibly in French :wink: ). In the English language, “self” is often used in contexts where there is no consciousness involved (as in a “self-priming pump”, or “self-sufficient economy”, or even "self-fulfilling prophecy", and in IT/AI contexts such as “self-organising”, “self-configuring” and “self-defending” networks and systems). Whether you think such a concept of self devoid of consciousness is “interesting” or not is your personal value judgement, but to be honest I don’t see why self must be defined simply so that it is interesting from your point of view. To insist that self must be defined in terms of consciousness seems an artificially and unnecessarily parochial view, intended simply to prove what the definition already assumes.

Best Regards
 
Last edited:
  • #37
moving finger said:
Whether you choose to use a "scientific" approach to try to explain the universe, or choose to use some other approach preferable to you, you will at some stage need to fit explanation to observation. Thus we should ask the question: where is the evidence that single-celled organisms exhibit consciousness?
That's a good question of course. I happen to believe that any intelligence (AI or not) requires subjective experience, and that this subjective experience is what allows organisms to respond to their environment in the selfish manner that is needed for survival. If this is so, then we would need to know if bacteria are intelligent. When looking closely at the behaviour of bacteria, one can see signs that this is the case.

Just as your question was valid, I would ask what evidence there is that phenomenal consciousness arose somewhere on the evolutionary timeline (perhaps at the beginning of the first brain?).
 
  • #38
PIT2 said:
That's a good question of course. I happen to believe that any intelligence (AI or not) requires subjective experience, and that this subjective experience is what allows organisms to respond to their environment in the selfish manner that is needed for survival. If this is so, then we would need to know if bacteria are intelligent. When looking closely at the behaviour of bacteria, one can see signs that this is the case.
One can? Such as?

How do you define intelligence? Are all intelligent systems necessarily conscious? Are all conscious systems necessarily intelligent?

PIT2 said:
Just as your question was valid, I would ask what evidence there is that phenomenal consciousness arose somewhere on the evolutionary timeline (perhaps at the beginning of the first brain?).
Are you asking for empirical evidence? Consciousness is not fossilised, and we have no direct access to the past in order to acquire such evidence. The only access we have to evidence on whether consciousness emerged (or has always existed) is to develop rational explanatory models of consciousness, and then to construct a model of if and how such consciousness could have arisen (if it did) in the past.

Best Regards
 
  • #39
moving finger said:
One can? Such as?

How do you define intelligence? Are all intelligent systems necessarily conscious? Are all conscious systems necessarily intelligent?

A possible definition is very simple and abstract:
"to understand and profit from experience".
From this definition (which I agree with) it follows that intelligence requires consciousness.

Looking at bacteria and other cells, several researchers have suggested that their behaviour may qualify as intelligent:

The molecular properties of the sum of the two-component systems in a typical bacterium, such as E. coli, can therefore be summarized as follows: (i) multiple (branched) systems operate in parallel; (ii) key components carry out logical operations; (iii) the basic elements of this network are subject to auto-amplification; and (iv) crosstalk does occur between the pathways. The extent, to which this latter process occurs, however, remains to be characterized in more detail (Figure 2). Strikingly, the characteristics of such a network are identical to the properties that have been assigned as the prerequisites to make any network perform as a ‘neural’ network [31]. This leads to the idea, as formulated earlier [32], that the combined activity of all two-component systems in a single bacterium, because of their biochemical properties, could bestow bacteria with properties associated with intelligent cellular behaviour, such as associative memory and learning, and thus with a minimal form of intelligence.
http://star.tau.ac.il/~eshel/Bio_co...elligence.pdf#search="bacterial intelligence"

http://star.tau.ac.il/~eshel/bacterial_linguistic.html
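
To make the "associative memory and learning" claim in the quote above a bit more concrete, here is a deliberately minimal sketch (my own toy example in Python, not the authors' model) of how a handful of threshold units coupled by Hebbian weights can recall a stored pattern from a corrupted cue:

Code:
# Toy sketch: a tiny Hebbian associative memory built from threshold units.
import numpy as np

def train(patterns):
    """Hebbian rule: store +/-1 patterns in a symmetric weight matrix."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)              # no self-connections
    return W / len(patterns)

def recall(W, cue, steps=10):
    """Repeatedly threshold the summed inputs until the state settles."""
    state = cue.copy()
    for _ in range(steps):
        state = np.where(W @ state >= 0, 1, -1)
    return state

stored = np.array([[ 1, -1,  1, -1,  1, -1],
                   [ 1,  1,  1, -1, -1, -1]])
W = train(stored)
cue = np.array([-1, -1,  1, -1,  1, -1])   # first stored pattern with its first unit flipped
print(recall(W, cue))                      # recovers [ 1 -1  1 -1  1 -1]

Whether one wants to call that kind of pattern completion "intelligence" is exactly the definitional question being argued in this thread, but it does show that such recall needs nothing more than a few simple coupled switches.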

Some of the observed behaviours of mammalian cells:

The results suggest that mammalian cells, indeed, possess intelligence. The experimental basis for this conclusion is presented in the following web pages. The most significant experimental results are:
  • 1. The motile machinery of cells contains subdomains ('microplasts') that can be isolated from the cell and then are capable of autonomous movements. Yet, inside the cell they do not exercise their ability. The situation is comparable to a person's muscles that are capable of contraction outside a person's body, but do not contract at will once they are part of the person, suggesting that they are subject to a control center.
  • 2. The cell as a whole is capable of immensely complex migration patterns for which their genome cannot contain a detailed program, as they are responses to unforeseeable encounters (Cell movement is not random).
  • 3. Cells can 'see', i.e. they can map the directions of near-infrared light sources in their environment and direct their movements toward them. No such 'vision' is possible without a very sophisticated signal processing system ('cell brain') that is linked to the movement control of the cell. (The larger their light scattering, the larger the distance from which aggregating cells came together.)
In addition, there is the supporting theoretical consideration that the hitherto completely unexplained complex structure of centrioles is predicted in every detail if one asks what structure a cellular 'eye' would have. (The best design for a cellular eye is a pair of centrioles)
http://www.basic.northwestern.edu/g-buehler/cellint0.htm

And some examples of possibly intelligent bacterial behaviour are described in this link:
http://www.world-science.net/exclusives/050418_bactfrm.htm
 
Last edited:
  • #40
PIT2 said:
A possible definition is very simple and abstract:
"to understand and profit from experience".
From this definition (which I agree with) it follows that intelligence requires consciousness.
Ahhh well, I don't. Intelligence (to me) does not entail experience, or understanding. Intelligence (imho) is simply the ability to achieve goals by solving problems, which does not necessarily require consciousness.

Though the articles you quote suggest that bacteria may exhibit some intelligence in accord with my definition, they are silent on the question of consciousness (they do not even refer to consciousness).

(there is no empirical evidence that the bacteria in question "understand" anything - thus it is debatable whether they qualify as being intelligent according to your definition).

My original question, recall, was:

where is the evidence that single-celled organisms exhibit consciousness?

Best Regards
 
  • #41
Hi MF,
moving finger said:
Hi Paul

I started to prepare a response to all of the points in your post, then realized that most of our disagreement comes down to definitions, and a detailed point-by-point reply is simply a waste of time if we don’t agree on the definition of the single word on which this entire thread is based – “self”.
I understand and agree. I think our primary disagreement is over the definition of 'concept'. As I have said, it is inconceivable to me how you can conceive of unconceived concepts. I invite you to join my attempt at defining this term in my thread "Let's start at the beginning". I would dearly love to hear your comments.
moving finger said:
The meaning of words is derived from their usage in language, not dictated by fiat (except possibly in French :wink: ). In the English language, “self” is often used in contexts where there is no consciousness involved (as in a “self-priming pump”, or “self-sufficient economy”, or even "self-fulfilling prophecy", and in IT/AI contexts such as “self-organising”, “self-configuring” and “self-defending” networks and systems). Whether you think such a concept of self devoid of consciousness is “interesting” or not is your personal value judgement, but to be honest I don’t see why self must be defined simply so that it is interesting from your point of view.
I'm sorry I gave you the impression that I was advocating for, or insisting on, any particular definition of 'self'. I simply pointed out that the term could be defined in several ways and that these definitions led to different answers to PIT2's questions.
moving finger said:
To insist that self must be defined in terms of consciousness seems an artificially and unnecessarily parochial view, intended simply to prove what the definition already assumes.
Those may be the motives of some who insist, but I don't, and haven't, insisted on any definition. I only insist that people do define their terms before they make claims using them.

Warm regards,

Paul
 
  • #42
moving finger said:
Ahhh well, I don't. Intelligence (to me) does not entail experience, or understanding. Intelligence (imho) is simply the ability to achieve goals by solving problems, which does not necessarily require consciousness.
Doesn't a goal require an intention? And do problems exist without anyone experiencing them as problems?

where is the evidence that single-celled organisms exhibit consciousness?
It could be in the bacterium's mind! :smile:
 
  • #43
PIT2 said:
Doesn't a goal require an intention? And do problems exist without anyone experiencing them as problems?
Imho you are to a large extent correct - things like goals, problems, intentions (and hence also intelligence) are interpretations placed on the behaviours and actions of some agents by other agents. But none of this requires consciousness as a necessary condition. We might say that the "goal" of a chess-playing computer is to win games of chess, but that is simply an interpretation that we (as interpreting agents) are placing on the actions and behaviour of the chess-playing computer.

Dennett describes it very well in his "intentional stance".
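
As a hedged illustration of that point (my own toy example, a crude stand-in for the chess computer's evaluation function, not anything from Dennett), here is a trivial program whose "goal" is nothing more than a number it drives to zero; any talk of it wanting to reach the target is an interpretation we place on its behaviour:

Code:
def distance_to_target(position, target):
    """The 'goal', reduced to a number: how far the agent is from the target state."""
    return abs(target - position)

def step(position, target):
    """Greedy rule: move one unit in whichever direction shrinks the distance."""
    if position < target:
        return position + 1
    if position > target:
        return position - 1
    return position

def run(start=0, target=5):
    position, trace = start, [start]
    while distance_to_target(position, target) > 0:   # an observer may describe this loop as "pursuing a goal"
        position = step(position, target)
        trace.append(position)
    return trace

print(run())   # [0, 1, 2, 3, 4, 5] -- goal-directed behaviour, with nothing that experiences anything

The program reliably achieves its "goal", yet there is nothing in it that could plausibly be said to be conscious of doing so.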

PIT2 said:
It could be in the bacterium's mind! :smile:
Does the bacterium have a mind?

Best Regards
 
  • #44
Paul Martin said:
Hi MF,
I understand and agree. I think our primary disagreement is over the definition of 'concept'.
In this thread, it seems to be on the definition and meaning of "self".

Paul Martin said:
I simply pointed out that the term could be defined in several ways and that these definitions led to different answers to PIT2's questions.
Precisely - the first thing in any debate is to agree on the meanings of the fundamental terms being used. There is not much point in moving beyond that if agreement cannot be reached on those meanings.

Best Regards
 
  • #45
moving finger said:
Does the bacterium have a mind?
Possibly. Here is an interesting paper from the JCS that deals with the issue (there is plenty of talk of bacteria in it too):

Clearly, the corporeal path by which we can trace the evolution of consciousness can be richly elaborated in terms of the inherent kinetic spontaneity of animate forms. Such elaboration decisively challenges the putative evolutionary notion of an agent as something that ‘does something and then looks to see what moves’. Attention to corporeal matters of fact demonstrates that a bona fide evolutionary account of consciousness begins with surface recognition sensitivity. It thereby acknowledges a meta-corporeal consciousness. It furthermore takes into account the emergence of a diversity of animate forms, showing how surface recognition sensitivity, while mediated by touch, is actually in the service of movement for creatures all the way from bacteria to protists to invertebrate forms to vertebrate ones. It strongly suggests how a form of corporeal consciousness is present in bacteria. Indeed, it shows how a bacterium, being an animate form of life, is something first of all that moves and is capable of moving on its own power rather than being always impelled to move from without; it shows further how it is something that feeds, that grows, that changes direction, that, in effect, can stop doing what it is doing and begin doing something else. A bona fide evolutionary account shows how, with the evolution of varied and complex external sensors, a different form of corporeal consciousness is present, and how, with the evolution of internal sensors from external ones, a still different form of corporeal consciousness is present. It shows how each of these forms of corporeal consciousness is coincident with the evolution of varied and complex animate forms themselves, and equally, how each form of proprioception that evolved, from the most rudimentary to the most complex of kinesthetic systems, is coincident with particular forms of life. It shows all this by paying attention to corporeal matters of fact and by presenting concrete sensory-kinetic analyses.
http://www.imprint.co.uk/sheet.htm

As for the chess computer being interpreted as having a goal: I think this is the case (that the interpretation is only in our minds). But don't you think that if the chess computer were able to interpret goals by itself (as opposed to another agent doing so), it would result in intelligent behaviour? The chess game may be ruined, though.
 
Last edited by a moderator:
  • #46
PIT2 said:
Possibly, here is an interesting paper from the JCS that deals with the issue (there is plenty talk of bacteria in it too.):
Sorry, but I couldn't stop laughing at the pretentiously flowery language used in that quote! It reads more like poetry or an art form than descriptive text. What a load of (imho) BS.

PIT2 said:
As for the chess computer being interpreted as having a goal: I think this is the case (that the interpretation is only in our minds). But don't you think that if the chess computer were able to interpret goals by itself (as opposed to another agent doing so), it would result in intelligent behaviour?
Sure - but what does this have to do with consciousness?

Best Regards
 
Last edited:
  • #47
Hi MF,

I think you may have misunderstood what I have been saying in this thread. When I said that the idea of self, when taken to include such things as self-priming pumps and self-referential statements, is not interesting to me, I did not mean that the discussion in this thread was not interesting. Far from it. I think PIT2 raised an interesting question and I have been interested in all the discussion that followed.

What I meant was that since I think 'self' can be defined in many ways, and different conclusions can be drawn from the different definitions (just as is the case for definitions of any words), I have no problem accepting PIT2's definition. As far as I am concerned, there is no debate here. At least neither you nor PIT2 has said anything concerning self with which I disagree.

However, as I tried to point out, I think the fundamental disagreement between you and me is over the definition, or the very notion of, the term 'concept'. On that issue, I would very much like to have a debate with you. That was the purpose of my introducing a new thread.

I hope to see you over there.

Warm regards,

Paul
 
  • #48
moving finger said:
Sorry, but I couldn't stop laughing at the pretentiously flowery language used in that quote! It reads more like poetry or an art form than descriptive text. What a load of (imho) BS.
Some of the flowery words are remnants of earlier sections, but the thing is funny to read, like the part where he discusses some of Dennett's ideas. I agree with the general idea of the paper, though.

Sure - but what does this have to do with consciousness?

Well... I thought consciousness would be needed for something to make an interpretation. But apparently not? What role do you think consciousness has in organisms?
 
  • #49
PIT2 said:
Some of the flowery words are remnants of earlier sections, but the thing is funny to read, like the part where he discusses some of Dennett's ideas. I agree with the general idea of the paper, though.
We'll have to agree to disagree here.

PIT2 said:
Well... I thought consciousness would be needed for something to make an interpretation. But apparently not? What role do you think consciousness has in organisms?
Role? In the sense of the “purpose” of consciousness (i.e. why do some agents possess it and some not)?

I agree that interpretation is one of the roles of consciousness, but it does not follow from this that everything which interprets is therefore conscious. In the same way, "transporting passengers" is one of the roles of a car, but it does not follow that everything which transports passengers is therefore a car.

I agree with Dennett on this. The development of consciousness may simply be a competitive evolutionary mechanism that enables us to develop and test ideas (hypotheses) about what might be going on in the minds of others. If you are going to think about my thinking, then I need to start thinking about your thinking to stay even. When communication (of any form) arises in a species, pure honesty may not always be the best policy (from a survival perspective) since it will be all too easily exploitable by one’s competitors. In the arms race of “producing future” you have a tremendous advantage if you can produce more and better future about your competitor than he can produce about you, so it is always an advantage to keep one’s own control systems inscrutable. Unpredictability is a fine protective feature, but must be spent wisely. Consciousness enables us to do this very effectively.

Best Regards
 
Last edited:
