Intelligence, Instincts and Reason

  • #1
one_raven
This obviously needs to be revised to make it more congruent and readable, but I would like to know what people think of the reasoning behind it...

I don't think that any discussion about Artificial Intelligence can be complete without broaching the subject of consciousness. There are countless definitions and ideas about what consciousness is and what its implications are. People talk about expanding consciousness, deeper consciousness, wider consciousness, focused, internal, external, greater and even social consciousness. It seems prudent, then, to define exactly what I am referring to. Consciousness, in the context I am using it, refers to perhaps the most rudimentary, straightforward, secular definition of the term.

Consciousness: an entity's knowledge and comprehension that it not only exists, but that it exists as a complete and separate whole apart from the rest of existence; in short, self-awareness.

As in many fields of science, the science fiction writers preceded reality. Today, many argue, we are on the verge of creating Artificial Intelligence. There is a great deal of debate regarding what does or does not constitute "true" Artificial Intelligence. My view on the subject, I suppose, can be called fundamentalist. In order for something to be considered "true" Artificial Intelligence, it must have the ability not only to learn new information, but to create new ideas and concepts from what it has learned. Productivity and commercial pragmatism aside, if an entity (be it a robot or a simple computer program) cannot process brand new information that it was not programmed and prepared to process beforehand, comprehend that new information, add it to the base of its knowledge and manipulate it to form new ideas, then I do not consider that entity to be intelligent.
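To make that distinction concrete, here is a minimal, purely hypothetical Python sketch contrasting a machine that can only follow a prepared script with one that folds brand new input into its knowledge base. The class names and behaviour are invented for illustration and are not a claim about any real system:

```python
class ScriptedResponder:
    """Handles only the inputs it was prepared for beforehand."""
    def __init__(self, script):
        self.script = script  # fixed stimulus -> response table

    def react(self, stimulus):
        return self.script.get(stimulus, "error: unrecognized input")


class LearningAgent:
    """Absorbs unfamiliar input and relates it to what it already knows."""
    def __init__(self):
        self.knowledge = {}  # facts accumulated at run time

    def react(self, stimulus):
        if stimulus not in self.knowledge:
            self.knowledge[stimulus] = "observed"  # brand new information
        # A crude stand-in for "forming new ideas": relate the input
        # to everything already known.
        return f"'{stimulus}' is one of {len(self.knowledge)} things I know"


scripted = ScriptedResponder({"hello": "hi"})
learner = LearningAgent()
print(scripted.react("a box"))  # fails: it was not prepared beforehand
print(learner.react("a box"))   # absorbs the new information and responds
```

By the definition above, only something on the second model's side of the line, vastly elaborated, could count as intelligent.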

In order for an entity to be able to process new, unexpected information, the entity has to be able to discern the reality of the world around it. It has to understand that this box in front of it is an object distinct and separate from itself, and that it has properties specific to it. The entity also must have a desire (yes, I am using that word loosely) to learn. Something has to give the entity the impetus to take that information in and make use of it, rather than simply sit and wait for instructions. With the desire to learn comes the desire to continue existence. In short, for something to have intelligence it must have consciousness, free will and a desire for continued existence, or survival.

I am not alone in my assessment. We are all familiar with what has become old hat in sci-fi: the archetypal artificial intelligence that turns on its maker. 2001, The Matrix, I, Robot, etc. The basic concept is that if you create an entity with consciousness, free will and the ability to learn, it is inevitable that you will lose control over the entity (if you ever had it in the first place). Some argue that the original idea is seeded in Mary Shelley's "Frankenstein; Or, The Modern Prometheus". It is no coincidence that the concept closely parallels the story of God and Man. God (the creator of the intelligent species, Man) loses control of Man when Man gains the knowledge of Good and Evil by partaking of the fruit in the Garden of Eden, thereby gaining both the ability to learn and the free will to disobey his master. The story of Original Sin kicked off an eternal struggle between Man exercising his free will (sometimes to his peril) and God attempting to imbue "morals" upon him. It may also seem a familiar story if you have ever met a teenager.

The authors of the various incarnations of the archetypal story have usually wanted to impart a moral upon the reader, and it often took one of two shapes: mankind is evil, and this higher intelligence sees it; or the cold, godless intelligence of the entity is inherently evil and wants to rule the world (a comment on Old World Communism, perhaps... maybe another paper). Of course, the story has to be thrilling and gripping. Some form of struggle ensues, and the result is that the machine must be destroyed or it will destroy.

In Isaac Asimov's classic short story "Runaround", Asimov introduced his Three Laws of Robotics. In one sense, the Three Laws are Man's parallel to the Ten Commandments of Abraham's God: an attempt to integrate a moral code into his creation (if only for his own protection) and to remain in control of it.

1.) A robot may not harm a human being, or, through inaction, allow a human being to come to harm.
2.) A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3.) A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.
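
To make the priority ordering concrete, here is a minimal Python sketch of the Three Laws as a strict ranking over candidate actions. Everything in it, the predicates and the toy action records alike, is invented for illustration; it is a sketch of the idea, not a claim about how any real robot is built.

```python
def law_violations(action):
    # Violation flags in priority order: (Law 1, Law 2, Law 3).
    return (
        action.get("harms_human", False),
        action.get("disobeys_order", False),
        action.get("endangers_self", False),
    )

def choose(candidate_actions):
    """Pick the action whose violation tuple is lexicographically smallest,
    so a First Law violation outweighs any number of lower-law ones."""
    return min(candidate_actions, key=law_violations)

candidates = [
    {"name": "obey the order and enter the danger zone",
     "disobeys_order": False, "endangers_self": True},
    {"name": "refuse the order and stay safe",
     "disobeys_order": True, "endangers_self": False},
]
print(choose(candidates)["name"])  # Second Law outranks Third: it obeys
```

The lexicographic comparison is what encodes the hierarchy: each law yields only to the laws above it.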

It is widely, if not universally, accepted in the field of modern Artificial Intelligence Research and Development that any Artificial Intelligence must have some adaptation or other of Asimov's Three Laws somehow "hard wired" into the system.

Humans also have a set of fundamental "hard wired" laws. The one thing that all life seems to have in common is the basic instincts: survival of the individual, survival of the species, and replication or procreation. Humans, however, routinely disregard this basic set of instincts. Why? Because they can.

With consciousness comes the capacity for intelligence. With intelligence comes both the ability to justify and the desire to learn. With the desire to learn comes a desire for continued survival. The desire for continued survival, combined with the ability to justify and free will, results in an entity that cannot be enslaved, except by means of restraint.

Creating machines with a limited ability to process information, machines that merely appear intelligent, is one thing. Creating a truly intelligent entity and expecting that entity to obey a set of controlling rules is folly. Either you have the appearance of intelligence in an entity that can be enslaved, or you have intelligence and free will.
 
  • #2
I'm not sure that you're right to assume that artificial intelligence is the same thing as artificial consciousness. Researchers in AI do not all make this assumption, and many say that we can create intelligence without creating consciousness. I suppose it depends on how we define intelligence. How do you define it here?

Your definition of consciousness seems slightly incorrect to me. The most common definition of consciousness in the scientific/philosophical literature is 'what it is like'. I prefer this to yours since it contains no assumptions.

However, yours is better in one way, in that it includes mention of knowledge. The ability to know appears to be a defining property of consciousness. If an entity cannot know then it cannot know that it is conscious, and so cannot be. (Perhaps a test for machine consciousness would be whether or not the machine reports that it knows that solipsism is unfalsifiable.) In my view, if a machine can know things then it is conscious, and if it cannot know things then it is not.

If this is the case, then to create artificial consciousness we must create the ability to know. To do this we will have to figure out how we ourselves know things. As yet there is no scientific/computational theory, hypothesis or conjecture as to how we know things, nor even any third-person evidence that we do, so for research into how to create 'knowing' all we can do is explore our own consciousness and figure out how we do it ourselves.

When we do this it becomes clear that we use more than our reason to know things. Hence Descartes' 'cogito'. This means that to create artificial consciousness we must transcend computation, which suggests to me that research in artificial intelligence is simply research into computation and has nothing at all to do with creating consciousness.

I agree, though, that if we do manage to create machines that can act in any way independently then Asimov's laws will be important. But if I remember right, those laws give rise to contradictions, leaving the robot with smoke coming out of its ears in one of his tales. This does not happen to human beings, who can simply break the rules. So a robot with unbreakable laws of behaviour programmed into it is not equivalent to a human being. To make it equivalent, as you say, we would have to allow it to break those laws.
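
That kind of standoff is what happens in "Runaround" itself: a weakly phrased order (Second Law) exactly balances a strengthened self-preservation drive (Third Law), and the robot stalls between them. Here is a toy one-dimensional Python sketch of that sort of equilibrium; every constant and function in it is invented purely for illustration:

```python
# Toy model of a Second Law / Third Law standoff; invented numbers throughout.
ORDER_STRENGTH = 1.0   # constant Second Law pull toward the goal
SELF_PRESERVE = 4.0    # Third Law repulsion, strong near the hazard at the goal

def net_drive(distance):
    """Positive = approach the goal, negative = retreat from it."""
    return ORDER_STRENGTH - SELF_PRESERVE / distance**2

distance = 10.0
for _ in range(40):
    distance -= 0.5 * net_drive(distance)  # step along the net drive

# The two laws balance where 1.0 == 4.0 / d**2, i.e. at d == 2.0.
print(f"the robot stalls at distance {distance:.2f} from its goal")
```

Far away, the order dominates and the robot approaches; close in, self-preservation dominates and it retreats. It ends up stuck at the balance point, obeying both laws and accomplishing nothing.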

My feeling is that artificial consciousness arising as a product of computation is a less plausible idea than Hari Seldon's predictive sociology.

Whirrclank
Canute

PS. It occurs to me that Asimov's three laws are equivalent to axioms. Perhaps this means that we cannot ever be sure that the laws we impose on a robot are self-consistent and do not give rise to contradictions unless the robot is extremely simpleminded!
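
PPS. A toy Python illustration of that last point: treat the laws as propositional constraints (deliberately leaving out the "except where it conflicts with the First Law" clauses, which is exactly where the trouble hides) and brute-force every situation, looking for one in which no action is permitted. The encoding is entirely invented, and the exhaustive check only works because this robot is extremely simpleminded:

```python
from itertools import product

# A situation is a pair of booleans: (human_in_danger, ordered_to_stand_still).
def permitted(situation, action):
    human_in_danger, ordered_to_stand_still = situation
    if human_in_danger and action == "stand_still":
        return False  # Law 1: inaction that allows harm is forbidden
    if ordered_to_stand_still and action != "stand_still":
        return False  # Law 2: the robot must obey the order
    return True

# Exhaustively search the (tiny) situation space for a deadlock.
for situation in product([False, True], repeat=2):
    allowed = [a for a in ("rescue", "stand_still") if permitted(situation, a)]
    if not allowed:
        print(f"deadlock in situation {situation}: no permitted action")
```

For any robot worth building, the situation space is far too large to enumerate, so we could never be sure by checking; smoke out of the ears remains a live possibility.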
 
  • #3
You cannot have both.


 

What is intelligence?

Intelligence can be defined as the ability to acquire and apply knowledge and skills. It involves problem-solving, critical thinking, and learning from experiences.

What are instincts?

Instincts can be described as innate, fixed patterns of behavior that are present in all members of a species. They are not learned, but rather inherited, and are essential for survival and reproduction.

How do intelligence and instincts interact?

Intelligence and instincts work together to help living organisms adapt and survive in their environment. Instincts provide automatic responses to certain situations, while intelligence allows for more complex and adaptable behaviors.

Can intelligence be measured?

Yes, intelligence can be measured through various standardized tests, such as IQ tests. However, it is important to note that these tests may not accurately capture all aspects of intelligence and may be influenced by cultural and environmental factors.

Is reason a form of intelligence?

Yes, reason can be considered a form of intelligence. It involves using logic and critical thinking to make sense of information and draw conclusions. Reason is often seen as a higher level of intelligence, as it allows for more complex and abstract thinking.
