Can computers truly replicate human intelligence and emotions?

  • Thread starter Sorry!
  • Start date
  • Tags
    Computers
In summary, the conversation discusses various aspects of computers, including their intelligence, ability to think, capacity for emotions, and potential for evolution. The speakers debate whether computers can truly exhibit these traits or if they are simply programmed by humans. They also consider the possibility of computers having free will and the challenges of replicating the complexity of the human brain in artificial devices. Overall, the conversation highlights the ongoing advancements and potential of AI technology.
  • #1
Sorry!
I've been thinking recently about computers. These are the types of things I've been thinking about:

- Are computers intelligent?
- Do computers think?
- Is it possible for computers to feel or have emotions?
- Can computers evolve?

Here is some of what I have come up with for these questions:

- I feel that computers sometimes make very intelligent decisions, but I don't know whether to credit the computer or the programmer for the intelligence. Maybe this will become clearer in the future as AI develops further.

- I really have no clue if a computer can think. I'm trying to think (lol) about what it even means to be thinking... and I am struggling to define it. A part of me wants to include emotions but I'm not sure. (Anyone have any readings on intelligence/thinking to suggest?)

- I think that currently computers do not have feelings or emotions. For instance, say we had developed a robot that knew how to skydive but its parachute failed: it knows what is occurring and runs programs that it knows will likely help it survive, but is it having any feeling associated with the failure of its parachute? No. Though it may be that these feelings are too complex for this robot to have (or for us to think it has; is that prejudice?). So what if we had a robot that could organize things based on shape or colour? Would the robot get any feelings related to doing this task? I'm doubtful, but I don't think we could know. In any case, I think developing feelings in a machine is beyond our current capabilities.

- This to me is a scary thought: if computers could learn and evolve on their own. I don't mean evolve by breeding (although they may learn to create other machines...); I mean, can my laptop in front of me evolve internally and get smarter? Learn what I am doing and possibly get better at doing it? I think it is possible, as those chess programs learn a lot as they play the game more and more and keep huge databases of what works and what doesn't. That is evolving in a sense, I think (a toy sketch of that kind of record-keeping follows below).
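A very rough sketch, purely illustrative (real chess programs are far more elaborate, and every name here is invented):

```python
# Toy "opening book": remember how each move has turned out from a
# position, and prefer the moves with the better track record.
import random
from collections import defaultdict

results = defaultdict(lambda: {"wins": 0, "tries": 0})

def choose_move(position, legal_moves, explore=0.1):
    # Occasionally try something new; otherwise pick the move with the
    # best win rate recorded so far for this position.
    if random.random() < explore:
        return random.choice(legal_moves)
    def win_rate(move):
        r = results[(position, move)]
        return r["wins"] / r["tries"] if r["tries"] else 0.5
    return max(legal_moves, key=win_rate)

def record_result(position, move, won):
    # After the game, update the database of what worked and what didn't.
    r = results[(position, move)]
    r["tries"] += 1
    r["wins"] += int(won)
```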

Anyways post your input on these questions or put up your own questions.
 
  • #2


Hi there,

- Are computers intelligent?
I believe the same as you, and would grant the "intelligence" of a computer to its programmer.

- Do computers think?
It depends what you define as thinking. A computer will never come up with something new. It can only improve. Therefore, without the ability to develop new ideas, I don't consider that thinking.

- Is it possible for computers to feel or have emotions?
Computers, since their beginning, are built to follow a series of very well-defined codes or programs. Therefore, it is certainly possible to implement the fear of falling in a computer. But for humans, these fears are not pre-programmed into our brain. We tend to develop them as we go along.

- Can computers evolve?
I heard something about it too. I would like to know more about the possibilities for computers to evolve.

Cheers
 
  • #3


fatra2 said:
Computers, since their beginning, are built to follow a series of very well-defined codes or programs. Therefore, it is certainly possible to implement the fear of falling in a computer. But for humans, these fears are not pre-programmed into our brain. We tend to develop them as we go along.

I have no doubt that someone can develop a program of 'fear' so a computer knows when to be fearful. But is it actually feeling these emotions? What I'm saying is there is a behavioural response, which we already know computers definitely have (for instance, acting a certain way under certain conditions). Then there is an emotional response. It's the latter I'm interested in. In philosophy I think it's called qualitative experience; qualia. Such as the colours or tastes of fruit, the fear of falling to possible death, etc.

Here's another question. Don't know why I hadn't thought of it before:

- Can computers have free will?

It occurred to me after fatra2 posted that programmers can program a computer to do a variety of things. They could even possibly program a computer to be able to do things they hadn't previously thought of themselves. But the computer is still following the program and will not stray from it, so I don't think it has free will.
 
Last edited:
  • #4


It depends what you define as thinking. A computer will never come up with something new. It can only improve. Therefore, without the ability to develop new ideas, I don't consider that thinking.

Why would you say that? We just have to give computers the ability to develop ideas. I have no doubt that one day we will be able to replicate a free-thinking organism through computers. Our brains are not something metaphysical. Something going on within the neurons causes us to be who we are, and eventually we will learn how to exploit this.
 
  • #5


Hi there,

I could not agree more with you that our brains are electrical connections between neurons. Have you ever looked at the complexity of these connections? Have fun replicating that in an artificial device.

Computer science is running into terrible difficulty getting humanoid robots to stand and reproduce human steps. Try to imagine what is going on in our brains when we start thinking.

I am not saying that it's impossible. Just that science (and from many different fields) would need to make outstanding progress.
 
  • #6


They're already making computers that can "think", but on a lower level. A friend of mine's father worked for military intelligence developing neural networks (but even his son's not allowed to know more than that). Effectively, as I've heard it described elsewhere, they're laying out computers to mimic the neural connections in the brain. But they're nowhere near as smart as humans. They're more like the intelligence level of an insect or a small amphibian or something. They're getting there, but it's slow. And computers aren't quite growing by the leaps and bounds that they were some 10 years ago, when computers became obsolete every 3 years or so.

I'm not sure if Deep Blue was designed using a neural network, but it was supposedly coming up with creative chess moves that surprised its developers. I recall another AI developer being amazed that his AI successfully defeated him using more advanced tactics than he had "programmed in".

As for the emotional side of things, I doubt we'll ever know for certain. If I assembled a human being out of sub-atomic particles, and attempted to make it act as though it felt emotion, what's the difference between that and it actually feeling those emotions? Could you hope to prove one way or another that it did or didn't feel? That is, is it REALLY feeling emotions, or is it just ACTING like it's feeling emotions?

Personally, I believe emotions are an evolutionary benefit that helps provide us an incentive to do certain things. It gives us a non-physical benefit which can be achieved by thinking that encourages us to think more. For instance, thinking about how a killer with a knife will affect me, I'll feel fear of death or pain, and in turn I'll think about what I should do, like run away.

DaveE
 
  • #7


What about free will? Is it possible for us to design something that is completely deterministic but have it develop free will?
 
  • #8


“The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.”
- Edsger Dijkstra
 
  • #9


Computers can only do what a programmer tells them to do.
 
  • #10


jimmysnyder said:
“The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.”
- Edsger Dijkstra

I don't believe the two cases are really that similar. It just gets dismissed that computers don't do these things because they aren't living (so we say). Isn't that just prejudiced thinking though?
 
  • #11


Sorry! said:
I don't believe the two cases are really that similar. It just gets dismissed that computers don't do these things because they aren't living (so we say). Isn't that just prejudiced thinking though?

Computers as they exist today are not capable of what we would consider thought. Although it's probably inevitable that some future iterations will possess this ability, it's a little premature to begin throwing words like "prejudice" around.

First you need to be clear by how you define "thinking". Is it simply the ability to perform a calculation? By that definition, your digital watch can think, but human babies cannot. Obviously, that definition isn't going to fly on its own.

Thinking implies problem solving. For a problem to exist, there needs to be a perceived need that is not currently being met.

I am hungry. I need to eat. Where can I find food? What things around me are food? Is that an apple? Is an apple food? How can I get the apple? Should I climb the tree? Will I fall? Is the risk of falling worth the reward of the apple?

Computers don't want anything. They have no needs or motivations, rather they are complex tools which we use to realize our own wants and motivations.
 
  • #12


OB 50 said:
Computers don't want anything. They have no needs or motivations, rather they are complex tools which we use to realize our own wants and motivations.

I don't think that's necessarily true. Imagine if we were to invent (say) a Roomba-style robot, who "wants" never to run out of power. It's not that hard to have it recognize power outlets, and plug itself into them when necessary, and even learn which ones aren't functional (like if they're hooked to a light switch):

"I am low on power. I need to plug into a power outlet. Where is the nearest power outlet? Are the plugs available on this outlet? Does this outlet work? If no, where can I find another one? If yes, plug in and be contented."

The difference is that the knowledge of what power outlets look like and how to extract power from them is "programmed" rather than learned. But that's arguably solved with neural networks, where you don't program it in, but *teach* it, and that sort of thing's been done. The comparison in humans is where children are taught what is and isn't edible by their parents at a young age, and the child is "programmed" to stuff pretty much everything into its mouth.

DaveE
 
  • #13


I think along the same lines as Dave but I think it comes down to behaviour and emotional responses.
 
  • #14


Sorry! said:
I feel that computers sometimes make very intelligent decisions, but I don't know whether to credit the computer or the programmer for the intelligence. Maybe this will become clearer in the future as AI develops further.

The majority of AI research focuses on producing emergent behavior. For example, a programmer might create a very simple artificial neuron that does nothing more than filter its input signals with a mathematical function to produce an output signal. The neuron by itself is certainly not "intelligent" by any definition of the word, but interesting things happen when you put many of them together. A large array of these simple neurons can "learn" to understand speech, or to diagnose heart attacks better than humans can. That's emergent behavior.
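As a rough sketch of such a simple artificial neuron (illustrative only; the weights, learning rule, and data here are invented, not any particular research system):

```python
import math
import random

class Neuron:
    """One artificial neuron: it filters its inputs with a weighted sum
    pushed through a squashing function to produce an output signal."""

    def __init__(self, n_inputs):
        self.weights = [random.uniform(-1, 1) for _ in range(n_inputs)]
        self.bias = 0.0

    def output(self, inputs):
        total = sum(w * x for w, x in zip(self.weights, inputs)) + self.bias
        return 1.0 / (1.0 + math.exp(-total))  # sigmoid, output in (0, 1)

    def learn(self, inputs, target, rate=0.1):
        # Nudge each weight to reduce the error on this example
        # (a crude delta rule; nothing "intelligent" happens here).
        error = target - self.output(inputs)
        for i, x in enumerate(inputs):
            self.weights[i] += rate * error * x
        self.bias += rate * error

# One neuron is trivially simple, but it can already learn a pattern from
# examples; the interesting ("emergent") behaviour appears only when many
# are connected into a network and trained on real data.
n = Neuron(2)
for _ in range(1000):
    n.learn([1, 0], 1.0)
    n.learn([0, 1], 0.0)
print(round(n.output([1, 0]), 2), round(n.output([0, 1]), 2))
```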
I think that currently computers do not have feelings or emotions. For instance, say we had developed a robot that knew how to skydive but its parachute failed: it knows what is occurring and runs programs that it knows will likely help it survive, but is it having any feeling associated with the failure of its parachute?

In the purest possible sense, "fear" is just foresight that the current situation may result in death or dismemberment. A skydiving robot could evaluate its situation, a failed parachute, and reach the conclusion that it is about to be destroyed. That conclusion could be called fear; there's no reason to invoke some spooky superstition that our emotions are any more complicated than that.

We humans just happen to hold our emotions in high regard, since they seem to transcend our rational thought processes. In fact, they seem to circumvent our rational thought processes. Your two minds (the rational and the emotional) each evaluate a given situation independently, and, if either is unsettled enough by the conclusion, a reaction is provoked. If a machine is shown the same situations and produces the same reactions as a human, then you might as well call it human. That's the essence of the Turing test, of course.

The experience of emotion occurs in the limbic system, an ancient (and simpler) part of the brain. It evolved to quickly evaluate situations and produce strong responses -- fight or flight, for example. Its evaluations are frequently wrong, but it served us well earlier in our evolutionary development. Because it is simpler in nature, it stands to reason that the limbic system would be easier to emulate on a computer than would be our fancy, recently-evolved neocortex, where rational thought occurs. I believe that most people have an upside-down view of intelligence; the educated-guess responses of our emotional hardware are easier to emulate on computer hardware than are the rational, reasoned responses of our neocortex.

Emotional responses are "stronger" than rational responses, in the sense that strong emotions can hijack the rest of our brains, at least temporarily. Many forms of entertainment take advantage of this situation. Rollercoasters, haunted houses, and even stand-up comedy all depend upon provoking a strong emotional response when it is rationally inappropriate.
I mean, can my laptop in front of me evolve internally and get smarter? Learn what I am doing and possibly get better at doing it?

"Evolve" is the wrong word to use in this context; instead, stick to the word "learn." Computers are certainly capable of learning.
I believe the same as you, and would grant the "intelligence" of a computer to its programmer.

People are somewhat prejudiced when it comes to declaring artificial neural networks "intelligent." Most people insist that computers can only do what programmers told them to do, but that's simply not true at all. No one sat down and codified heart attack diagnosis; the machine was simply shown examples of patients with and without heart attacks, and it learned to differentiate them. This is pretty much what happens in medical school, too.
It depends what you define as thinking. A computer will never come up with something new. It can only improve. Therefore, without the ability to develop new ideas, I don't consider that thinking.

Most people describe "intelligence" as the ability to come up with novel solutions, and then tacitly declare that machines cannot come up with novel solutions. That's not true, either. Chess computers, protein folding algorithms, and many other systems are capable of finding solutions that no human would likely have found; sometimes these solutions are silly and bizarre, but sometimes they are incredible. We owe many of our powerful new drugs to artificial intelligence.
Computers, since their beginning, are built to follow a series of very well-defined codes or programs. Therefore, it is certainly possible to implement the fear of falling in a computer. But for humans, these fears are not pre-programmed into our brain. We tend to develop them as we go along.

This is factually incorrect. Our brains are certainly pre-wired to have emotional responses; they occur in infants long before any rational thought. That may be the only reason we still have emotions -- they are simpler and "come online" very early in our development, protecting a child until the brain has developed and becomes capable of higher, rational thought.
Sorry! said:
But is it actually feeling these emotions? What I'm saying is there is a behavioural response, which we already know computers definitely have (for instance, acting a certain way under certain conditions). Then there is an emotional response.

In my opinion, there's no real difference between rational and emotional responses. Each involves the response of some neural network to some pattern of input. When we are aware of our neocortex being temporarily hijacked by our limbic system, we call the experience "feeling an emotion." Does "feeling" have any deeper meaning than a multiplexer being flipped from one input to another? I would argue that it is no more complex.
Can computers have free will?

If the computer is deterministic, the answer seems to be a solid "no." On the other hand, once you bring in non-deterministic events -- randomness, like the time between the receipt of network packets -- the answer may well be "yes."

More specifically, computers probably can have as much free will as humans. A more interesting question, though, is whether or not humans have any free will in the first place. In my opinion, they do not.
I could not agree more with you that our brains are electrical connections between neurons. Have you ever looked at the complexity of these connections? Have fun replicating that in an artificial device.

Our brains have more complexity than we can currently emulate in computer hardware, but that does not mean such complexity is really necessary for intelligence. It is possible that evolution rewards simplicity so strongly that our brains contain the bare minimum complexity capable of intelligence, but it seems that we can create intelligence with far fewer resources, particularly if you restrict the domain of problems to chess or heart attacks.
And computers aren't quite growing by the leaps and bounds that they were some 10 years ago, when computers became obsolete every 3 years or so.

This is incorrect. Moore's law is alive and well. It just happens that personal computers are now a mature market; most PCs do most of what most users want them to do. Bigger computers, however, continue to advance at an astounding rate.

My stance on intelligence is that we humans have a delusion of grandeur about our own thought processes. It stands to reason that, to understand one "thinking machine," you would need a thinking machine of even greater power. Our brains may not be complex enough to understand their own complexity. As a result, it's very easy for people to write off any machine that they can understand as being unintelligent.

Consider the statement:

  • "If a machine is understandable, it is not intelligent."
The contrapositive of this statement, which is logically equivalent, is:

  • "If a machine is intelligent, it must not be understandable."
That's very dangerous thinking! Any machine that we design, even if capable of emergent behavior, will necessarily be understandable. By that logic, we will never be able to create a machine that we will deem intelligent, no matter how capable it actually is.

My own perspective, unpopular as it may be, is that we ourselves are not intelligent in the way that we usually define intelligence. The processes that occur in our brains are not magic, and they do not defy or transcend any laws of physics. I believe our thinking processes are based on a few small rules -- like those of the artificial neural networks that diagnose heart attacks -- conflated many billions of times until the emergent behavior is all we see. I believe that our thinking is probably every bit as mechanical as that of the machines we build. We deem ourselves "intelligent" simply because we do not yet understand ourselves.

The gap between human and machine intelligence can be bridged in either direction. It seems inevitable that we will eventually make machines as complex as the human brain, but we may also need to relax the arrogant attitude that the human brain does something that no machine ever could.

- Warren
 
Last edited:
  • #15


davee123 said:
I don't think that's necessarily true. Imagine if we were to invent (say) a Roomba-style robot, who "wants" never to run out of power. It's not that hard to have it recognize power outlets, and plug itself into them when necessary, and even learn which ones aren't functional (like if they're hooked to a light switch):

"I am low on power. I need to plug into a power outlet. Where is the nearest power outlet? Are the plugs available on this outlet? Does this outlet work? If no, where can I find another one? If yes, plug in and be contented."

The difference is that the knowledge of what power outlets look like and how to extract power from them is "programmed" rather than learned. But that's arguably solved with neural networks, where you don't program it in, but *teach* it, and that sort of thing's been done. The comparison in humans is where children are taught what is and isn't edible by their parents at a young age, and the child is "programmed" to stuff pretty much everything into its mouth.

DaveE

Is that robot ever doing any actual thinking during that process? The programmer has to do a great deal of thinking in order to anticipate the many circumstances the robot may have to deal with, but the nature of programming itself depends upon pure logic.

Pure logic requires no thinking.
 
  • #16


Thanks for the really detailed responses... Warren, I agree with most of what you said; however, I don't feel that free will is completely dismissible in a deterministic universe, if the universe is even completely deterministic anyway. At the fundamental level it does not seem at all that anything is determined.
Thanks for the post though, wasn't expecting anyone to go this in-depth. :p

I'm on my BlackBerry right now but when I get home I will most likely respond myself in more detail.
 
  • #17


OB 50 said:
Is that robot ever doing any actual thinking during that process? The programmer has to do a great deal of thinking in order to anticipate the many circumstances the robot may have to deal with, but the nature of programming itself depends upon pure logic.

See chroot's points about predicting heart attacks. You could program it to perform those tasks automatically as a programmer (in which case you could call it "instinct"), or you could program the robot to learn, and have it set "getting power" as a goal. Then, you can simply "teach" it by showing it different outlets and plugging it into them. Depending on the quality of the image/ultrasonic/whatever processing (quantifying images into different areas, colors, etc), it can learn to identify not only what power outlets look like, but how to plug itself into them, and where they are. In that case, you're not programming in anything about what power outlets look like, how high they are, how to plug into them, or anything else. You give it the ability to process its sensory input, and a goal of "obtain power". The rest it learns itself with your teaching.

You could similarly build in automatic exploring, so that it could teach itself (much in the way that babies put random things in their mouths), rather than have you teach it how to plug into things-- you'd just give it a priority on plugging into random things in random ways in the event that it didn't know how to obtain power. It'd be slower to learn, but it could do the job.
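A rough sketch of that "give it a goal and teach it" idea, with every name and feature encoding invented purely for illustration (real robot perception is of course far harder):

```python
import random

class OutletLearner:
    """Hypothetical robot learner: it is given only a goal ("obtain power")
    and a way to learn from outcomes. Nothing about what an outlet looks
    like is programmed in; it learns weights over raw sensor features."""

    def __init__(self, n_features):
        self.weights = [0.0] * n_features

    def looks_like_power(self, features):
        # Score the current sensor reading; positive means "try plugging in".
        return sum(w * f for w, f in zip(self.weights, features)) > 0

    def learn(self, features, got_power, rate=0.05):
        # Reinforce features that preceded getting power, weaken those that
        # didn't (a perceptron-style update standing in for the "teaching").
        target = 1.0 if got_power else -1.0
        for i, f in enumerate(features):
            self.weights[i] += rate * target * f

# "Teaching": show it readings near real outlets and near duds, let it try,
# and feed back whether power was actually obtained.
robot = OutletLearner(n_features=4)
experiences = [([1, 1, 0, 0], True),    # readings near a working outlet
               ([0, 0, 1, 1], False)]   # readings near a switched-off dud
for _ in range(200):
    features, got_power = random.choice(experiences)
    robot.learn(features, got_power)
print(robot.looks_like_power([1, 1, 0, 0]), robot.looks_like_power([0, 0, 1, 1]))
```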

DaveE
 
Last edited:
  • #18


davee123 said:
See chroot's points about predicting heart attacks. You could program it to perform those tasks automatically as a programmer (in which case you could call it "instinct"), or you could program the robot to learn, and have it set "getting power" as a goal. Then, you can simply "teach" it by showing it different outlets and plugging it into them. Depending on the quality of the image/ultrasonic/whatever processing (quantifying images into different areas, colors, etc), it can learn to identify not only what power outlets look like, but how to plug itself into them, and where they are. In that case, you're not programming in anything about what power outlets look like, how high they are, how to plug into them, or anything else. You give it the ability to process its sensory input, and a goal of "obtain power". The rest it learns itself with your teaching.

You could similarly build in automatic exploring, so that it could teach itself (much in the way that babies put random things in their mouths), rather than have you teach it how to plug into things-- you'd just give it a priority on plugging into random things in random ways in the event that it didn't know how to obtain power. It'd be slower to learn, but it could do the job.

DaveE

Unfortunately, chroot posted while I was typing, so it appears as if I'm ignoring everything he said. Quite to the contrary, I agree with his thoughts on AI. Maybe not so much his ideas on free will, but that's another discussion.

For a machine to truly think, the conditions for an emergent intelligence need to be present. Even in the case of the Heart Attack machine, I'm not completely sold. It doesn't know anything besides heart attacks. It doesn't even really know anything about heart attacks. It's just really good at putting people into one of two categories.

I'm of the opinion that we will eventually create an intelligent machine. It's just a matter of time. The real question is should we?
 
  • #19


OB 50 said:
It doesn't know anything besides heart attacks. It doesn't even really know anything about heart attacks. It's just really good at putting people into one of two categories.

Not necessarily one of two categories-- I'm not familiar with the specifics of that case, but for another, we had a class experiment in my old AI class where we taught a program how to learn who would like what types of food. Everyone in the class fed in their information on about 100 different types of foods and how much they liked them. The programs we wrote could correlate that people who liked "X" generally liked "Y". It learned to recognize different people's tastes, and what the good indicators were for particular foods.

By comparison, if you were asked to predict if I (for example) would like lasagna, how would you think your way to a solution? You might ask me if I liked spaghetti, because the two are somewhat similar (based on your experience), and predict my likings based on that information. The program basically did the same thing, it just knew that spaghetti was similar thanks to its statistical correspondence with lasagna, rather than how you knew it was similar because you've had each before, and thought they tasted similarly. The difference is that the program, if given enough data, would probably out-predict you, because as a computer it can analyze more data all at once, and it could tell if there were perhaps other good indicators for whether or not I liked lasagna.

Basically, the program was not told "X is similar to Y", it figured that out on its own. Similarly, in the case of the learning robot, it wouldn't be given any initial knowledge about what power outlets looked like, but once it found some, it could quickly learn what in its range of sensory inputs correlated to "power", much in the way that your human example correlates "apple" to "food", based on human experience.
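A toy sketch of that kind of prediction (the ratings and the similarity measure below are invented purely for illustration; the class project was presumably more sophisticated):

```python
# Ratings are 0-10; we predict an unrated food from the foods whose rating
# patterns across other people resemble it. All data here is made up.
ratings = {
    "alice": {"spaghetti": 9, "lasagna": 8, "sushi": 3},
    "bob":   {"spaghetti": 2, "lasagna": 3, "sushi": 9},
    "dave":  {"spaghetti": 8, "sushi": 2},   # no lasagna rating yet
}

def similarity(food_a, food_b):
    # Crude similarity: 10 minus the average rating gap between the two
    # foods, over everyone who has rated both.
    gaps = [abs(r[food_a] - r[food_b])
            for r in ratings.values() if food_a in r and food_b in r]
    return 10 - sum(gaps) / len(gaps) if gaps else 0

def predict(person, food):
    # Weight each food the person has already rated by how similar it is
    # to the food we are trying to predict.
    pairs = [(similarity(food, other), score)
             for other, score in ratings[person].items()]
    total = sum(s for s, _ in pairs)
    return sum(s * score for s, score in pairs) / total if total else None

# Will dave, who likes spaghetti, probably like lasagna?
print(predict("dave", "lasagna"))   # -> 6.0 with this toy data
```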

I honestly don't know if I'd consider it thought or not-- similar to how I don't know if I'd consider a mosquito capable of "thought". But they DO have brains, and computers are probably on the same level or higher.

OB 50 said:
The real question is should we?

Just curious-- what reasons would you give against producing intelligent machines? I would assume that it might be one of:
1) Machines doing our thinking and working for us (IE turning us into effectively slugs)
2) Computers overthrowing humanity (like oh-so-many Sci-Fi movies)
3) Humans enslaving computers (and we've got a moral obligation not to)

DaveE
 
  • #20


davee123 said:
Just curious-- what reasons would you give against producing intelligent machines? I would assume that it might be one of:
1) Machines doing our thinking and working for us (IE turning us into effectively slugs)
2) Computers overthrowing humanity (like oh-so-many Sci-Fi movies)
3) Humans enslaving computers (and we've got a moral obligation not to)

DaveE

Well, it's mostly 3, which leads to 2; and we're dealing with the early stages of 1 right now. I doubt it is as simple as any of those.

I guess my real concern is that creating a truly intelligent machine is pretty much the same thing as creating a person. Once you succeed, you have transcended "machine" or "computer". We generally use human intelligence as the measuring stick. Anything less comes up short.

Human intelligence is the result of billions of years of competition. We're at the top of the food chain, and we're extremely dangerous and effective predators because of this intelligence. Our intelligence is based on this competition for survival and supremacy. We won't be satisfied until we see ourselves looking right back at us.

What do we do then? Now we have intelligent machines, but it's immoral for us to enslave them to do the tasks they were created to do. Do we give them equal status and accept that we've created superior replacements for ourselves? Who does the work previously assigned to machines? What need is there for people then? If we create a truly intelligent (Turing test) form of machine, they will fight us for survival and supremacy. They would be stupid not to.

I mean seriously, nothing exists in a vacuum. The Terminator and Matrix movies have been made, so any human level intelligence that is created will have access to that line of thought. Unless we start figuring out some Asimov laws right about now, we're just asking for it.
 
  • #21


davee123 said:
Just curious-- what reasons would you give against producing intelligent machines? I would assume that it might be one of:
1) Machines doing our thinking and working for us (IE turning us into effectively slugs)
2) Computers overthrowing humanity (like oh-so-many Sci-Fi movies)
3) Humans enslaving computers (and we've got a moral obligation not to)

DaveE

Technology has already interrupted our need to evolve, because we evolve through our machines.

I think it's not so much that computers would overthrow humanity as much as make us obsolete.
 
  • #22


Kronos5253 said:
Technology has already interrupted our need to evolve, because we evolve through our machines.
This does not mean we have stopped evolving; it's just that the evolutionary drivers have changed.
 
  • #23


What is this "moral obligation" not to enslave our computers? If we can "enslave" livestock, then why not a circuit board?
 
  • #24


I wish my laptop would learn...I'd save a bundle on software.
 
  • #25


OAQfirst said:
What is this "moral obligation" not to enslave our computers? If we can "enslave" livestock, then why not a circuit board?
They are our slaves. We created them solely to do our bidding.

There is no moral issue here. They are not entities of free-will, thus their own freedom is not something they are entitled to.
 
  • #26


DaveC426913 said:
They are not entities of free-will, thus their own freedom is not something they are entitled to.

I'll use that line when I import cheap Puerto-Rican slaves considering that conjectural philosophy is sufficient ground for enslavement now.
 
  • #27


Negatron said:
I'll use that line when I import cheap Puerto-Rican slaves considering that conjectural philosophy is sufficient ground for enslavement now.
Puerto Ricans have free will; ergo, they should not be enslaved.

Those who enslave other people are not hampered by sound logic. If they need to label other races as sub-human to enslave them, then label them they will.
 
  • #28


DaveC426913 said:
Puerto Ricans have free will; ergo, they should not be enslaved.
See, this is my point about intangible reasoning. You can throw around the term "free will" and enslave or not enslave creatures of any kind as you see fit.

Sound logic doesn't hamper my enslavement of Puerto Ricans, much as it doesn't hamper your intended enslavement of creatures of any other form, to which you apply a convenient label to appease your arbitrary qualifications.

You're fortunate enough to equate all -biological- humans; however, those who do not have an explanation no worse than your own. I for one am convinced I have a soul and midgets do not. I don't care to evaluate what the presence of a soul would imply, however; that is an unnecessary inconvenience to my suppositions.

Perhaps you should define some objective measures by which something qualifies for the right of freedom, which can be empirically evaluated, rather than rely on a poor philosophical dichotomy of no quantitative merit.

I like your face, ergo, you have free will, please move to the right.

You on the other hand have too much silicon in your cognitive hardware therefore have no free will and are not subject to the slightest bit of decency. Please move to the left and jump right into the fire pit.
 
Last edited:
  • #29


Negatron said:
See, this is my point about intangible reasoning. You can throw around the term "free will" and enslave or not enslave creatures of any kind as you see fit.

Sound logic doesn't hamper my enslavement of Puerto Ricans, much as it doesn't hamper your intended enslavement of creatures of any other form, to which you apply a convenient label to appease your arbitrary qualifications.

You're fortunate enough to equate all -biological- humans; however, those who do not have an explanation no worse than your own. I for one am convinced I have a soul and midgets do not. I don't care to evaluate what the presence of a soul would imply, however; that is an unnecessary inconvenience to my suppositions.

Perhaps you should define some objective measures by which something qualifies for the right of freedom, which can be empirically evaluated, rather than rely on a poor philosophical dichotomy of no quantitative merit.

I like your face, ergo, you have free will, please move to the right.

You on the other hand have too much silicon in your cognitive hardware therefore have no free will and are not subject to the slightest bit of decency. Please move to the left and jump right into the fire pit.
I see your point about "free will", I think, but I'm not conversant with philosophy and wouldn't know "tangible" from "intangible" reasoning.

In the matter of computers my reasoning is that they are not conscious so the issue of enslavement is absurd. You can no more "enslave" a computer than you could "set it free".
 
  • #30


zoobyshoe said:
I see your point about "free will", I think, but I'm not conversant with philosophy and wouldn't know "tangible" from "intangible" reasoning.

In the matter of computers my reasoning is that they are not conscious so the issue of enslavement is absurd. You can no more "enslave" a computer than you could "set it free".

The point I think is being made is not about today's computers, but more about robots of the future, which may develop the attributes I posted in the OP, such as thinking/intelligence, feelings or free will.

I haven't replied to much of the conversation on here because it's taken a turn for something different than what I intended lol :P I was more interested in what we could learn about the mind from robots. This would lead to greater knowledge of ourselves.

To say, though, that a computer that functions the same way we function, or have functioned previously (before the rise of intelligence), doesn't have free will or isn't feeling is, I think, prejudiced towards the robot. Which is, I believe, what Negatron is attempting to point out. We can't just label computers as non-free-will creatures and enslave them because they aren't built out of flesh and bone, etc. If we can do that, then we can draw the same conclusion about ANY creatures on Earth and enslave anything at will, including other humans we deem to 'not have free will,' like his example of Puerto Ricans. Imagine, however, that we found an ancient civilization living in the Amazon. They have not developed intelligence to the point that we have today; does that mean, because they are slightly more primitive, that we can enslave them?
 
  • #31


Negatron said:
See, this is my point about intangible reasoning. You can throw around the term "free will" and enslave or not enslave creatures of any kind as you see fit.

Sound logic doesn't hamper my enslavement of Puerto Ricans, much as it doesn't hamper your intended enslavement of creatures of any other form, to which you apply a convenient label to appease your arbitrary qualifications.

You're fortunate enough to equate all -biological- humans; however, those who do not have an explanation no worse than your own. I for one am convinced I have a soul and midgets do not. I don't care to evaluate what the presence of a soul would imply, however; that is an unnecessary inconvenience to my suppositions.

Perhaps you should define some objective measures by which something qualifies for the right of freedom, which can be empirically evaluated, rather than rely on a poor philosophical dichotomy of no quantitative merit.

I like your face, ergo, you have free will, please move to the right.

You on the other hand have too much silicon in your cognitive hardware therefore have no free will and are not subject to the slightest bit of decency. Please move to the left and jump right into the fire pit.

Well, ok...:uhh: that's your opinion...what does your computer think...or better yet, how does it feel about the topic?
 
  • #32


I'm sorry I haven't contributed much to this thread up til now, but I've been busy. Race conditions had reached such a point on the master computer that I had to kill some processes and shift the work over to the slave.
 
  • #33


Sorry! said:
I haven't replied to much of the conversation on here because it's taken a turn for something different than what I intended lol :P I was more interested in what we could learn about the mind from robots. This would lead to greater knowledge of ourselves.
For what it's worth:

Personally, I don't "think" in any non-emotional way. All "thinking" I do is a kind of sorting through emotions in the light of facts from memory and by employing the ability to mentally model future situations. I have never had a "rational" (if that means: emotionless) thought in my life: the whole process of cogitation is always driven by some emotion, subtle or gross.

To the extent I follow logical procedures like subtracting the amounts of bills I must pay from my income, I am doing so to avoid the emotional upset of having my cable or internet service cut off. I might "rationally" decide to eat at home, instead of having a more expensive but better-tasting restaurant meal, in order to be able to afford a particular book I anticipate I will enjoy very much, but this is emotion-driven behavior: I anticipate the enjoyment of the book will be greater than the enjoyment of the meal.

I am not sure what's going on in chroot's mind, but in my case my cortex is always pressed into service doing mental modeling for the sake of eventual emotional rewards. To the extent "rational" means emotions are not applicable, that never happens in my head. I frequently make myself calm down and "think rationally", but that is only because I anticipate the results are going to be emotionally much more pleasant than being upset and frustrated.
 
  • #34


Yes, that's my point, but all 'thinking' done by a computer is only rational. If we could develop feelings in a robot, it would further allow us to understand ourselves...

Also, I wasn't talking about 'emotional thinking' but about emotional RESPONSES and BEHAVIOURAL responses.

As in actions and feelings. For instance, we can have a robot that separates red from any other colour; we just need to make it able to recognize red. So now when it sees red it will respond. This is behavioural. Is it having any emotional response to the colour red, though? When I see red, I know it triggers feelings, etc. What about tasting, say, pineapple? We can design a robot to recognize the taste of pineapple, but is it actually fully experiencing the taste of pineapple?
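A minimal sketch of that purely behavioural response (the threshold values are arbitrary, chosen only for illustration):

```python
def is_red(pixel):
    # Crude rule for "this pixel is red" from an (R, G, B) reading.
    r, g, b = pixel
    return r > 150 and g < 100 and b < 100

def sort_object(pixel):
    # Behavioural response only: act one way under one condition and
    # another way otherwise. Nothing here resembles an experience of red.
    return "red bin" if is_red(pixel) else "other bin"

print(sort_object((200, 30, 40)))   # -> red bin
print(sort_object((40, 200, 240)))  # -> other bin
```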
 
  • #35


Sorry! said:
Yes, that's my point, but all 'thinking' done by a computer is only rational. If we could develop feelings in a robot, it would further allow us to understand ourselves...

Also, I wasn't talking about 'emotional thinking' but about emotional RESPONSES and BEHAVIOURAL responses.

As in actions and feelings. For instance, we can have a robot that separates red from any other colour; we just need to make it able to recognize red. So now when it sees red it will respond. This is behavioural. Is it having any emotional response to the colour red, though? When I see red, I know it triggers feelings, etc. What about tasting, say, pineapple? We can design a robot to recognize the taste of pineapple, but is it actually fully experiencing the taste of pineapple?
What?
 
