Sorry! said:
I feel that computers make very intellectual decisions sometimes, but I don't know whether to credit the computer or the programmer for the intelligence. Maybe this will become clearer in the future, though, as AI develops further and further.
The majority of AI research focuses on producing emergent behavior. For example, a programmer might create a very simple artificial neuron that does nothing more than filter its input signals with a mathematical function to produce an output signal. The neuron by itself is certainly not "intelligent" by any definition of the word, but interesting things happen when you put many of them together. A large array of these simple neurons can "learn" to understand speech, or to diagnose heart attacks better than humans can. That's emergent behavior.
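To make that concrete, here's a minimal sketch of the kind of neuron I'm describing -- nothing but a weighted sum filtered through a sigmoid function. The particular inputs, weights, and bias are made-up numbers for illustration:

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weight each input signal, sum them,
    and filter the total through a sigmoid to produce an output."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # output between 0 and 1

# One neuron is just arithmetic -- nothing "intelligent" about it.
print(neuron([0.5, 0.9], [0.4, -0.6], 0.1))
```

The interesting behavior only emerges when you wire thousands of these together and let the weights adapt.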
I think that currently computers do not have feelings or emotions. For instance, suppose we had developed a robot that knew how to skydive, but its parachute failed. It knows what is occurring and runs programs that it knows will likely help it survive, but is it having any feeling associated with the failure of its parachute?
In the purest possible sense, "fear" is just foresight that the current situation may result in death or dismemberment. A skydiving robot could evaluate its situation, a failed parachute, and reach the conclusion that it is about to be destroyed. That conclusion could be called fear; there's no reason to invoke some spooky superstition that our emotions are any more complicated than that.
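As a toy sketch of what I mean -- every number and threshold below is invented for illustration, not a real control system:

```python
def fear(altitude_m, descent_rate_ms, parachute_open):
    """Return a 'fear' score in [0, 1]: the foreseen nearness of
    destruction. All the numbers here are invented for illustration."""
    if parachute_open:
        return 0.0
    seconds_to_impact = altitude_m / max(descent_rate_ms, 0.01)
    # The closer the impact, the stronger the conclusion "I am
    # about to be destroyed" -- which is all we're calling fear.
    return min(1.0, 10.0 / seconds_to_impact)

print(fear(1000.0, 55.0, parachute_open=False))  # ~0.55 and climbing
```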
We humans just happen to hold our emotions in high regard, since they seem to transcend our rational thought processes. In fact, they seem to circumvent our rational thought processes. Your two minds (the rational and the emotional) each evaluate a given situation independently, and, if either is unsettled enough by the conclusion, a reaction is provoked. If a machine is shown the same situations and produces the same reactions as a human, then you might as well call it human. That's the essence of the Turing test, of course.
The experience of emotion occurs in the limbic system, an ancient (and simpler) part of the brain. It evolved to quickly evaluate situations and produce strong responses -- fight or flight, for example. Its evaluations are frequently wrong, but it served us well earlier in our evolutionary development. Because it is simpler in nature, it stands to reason that the limbic system would be easier to emulate on a computer than would be our fancy, recently-evolved neocortex, where rational thought occurs. I believe that most people have an upside-down view of intelligence; the educated-guess responses of our emotional hardware are easier to emulate on computer hardware than are the rational, reasoned responses of our neocortex.
Emotional responses are "stronger" than rational responses, in the sense that strong emotions can hijack the rest of our brains, at least temporarily. Many forms of entertainment take advantage of this situation. Rollercoasters, haunted houses, and even stand-up comedy all depend upon provoking a strong emotional response when it is rationally inappropriate.
I mean, can my laptop in front of me evolve internally and get smarter? Learn what I am doing and possibly get better at doing it?
"Evolve" is the wrong word to use in this context; instead, stick to the word "learn." Computers are certainly capable of learning.
I believe the same as you, and would attribute the "intelligence" of a computer to its programmer.
People are somewhat prejudiced when it comes to declaring artificial neural networks "intelligent." Most people insist that computers can only do what their programmers told them to do, but that's simply not true at all. No one sat down and codified heart attack diagnosis; the machine was simply shown examples of patients with and without heart attacks, and it learned to differentiate them. This is pretty much what happens in medical school, too.
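For the curious, here is roughly what "shown examples and learned to differentiate them" means in code -- a toy perceptron with two invented features and four invented cases. A real diagnostic network is vastly larger, but the principle is the same: the rule is learned, never written.

```python
# A toy perceptron "shown examples" of heart attacks. Features and
# cases are invented for illustration only.
examples = [
    # (chest_pain, shortness_of_breath) -> heart attack? (1 = yes)
    ((1.0, 1.0), 1),
    ((1.0, 0.0), 1),
    ((0.0, 1.0), 0),
    ((0.0, 0.0), 0),
]

weights, bias, rate = [0.0, 0.0], 0.0, 0.1
for _ in range(20):  # a few passes over the examples
    for features, label in examples:
        activation = sum(w * x for w, x in zip(weights, features)) + bias
        guess = 1 if activation > 0 else 0
        error = label - guess  # nudge the weights only when wrong
        weights = [w + rate * error * x for w, x in zip(weights, features)]
        bias += rate * error

print(weights, bias)  # the learned "diagnosis rule", never written by hand
```

Nobody told the program which symptom matters; the weights settled there on their own.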
It depends on what you define as thinking. A computer will never come up with something new; it can only improve. Therefore, without the ability to develop new ideas, I don't consider that thinking.
Most people describe "intelligence" as the ability to come up with novel solutions, and then tacitly declare that machines cannot come up with novel solutions. That's not true, either. Chess computers, protein folding algorithms, and many other systems are capable of finding solutions that no human would likely have found; sometimes these solutions are silly and bizarre, but sometimes they are incredible. We owe many of our powerful new drugs to artificial intelligence.
Computers, since their beginning, have been built to follow a series of very defined codes or programs. Therefore, it is certainly possible to implement the fear of falling in a computer. But, for humans, these fears are not pre-programmed into our brains. We tend to develop them as we go along.
This is factually incorrect. Our brains are certainly pre-wired to have emotional responses; they occur in infants long before any rational thought. That may be the only reason we still have emotions -- they are simpler and "come online" very early in our development, protecting a child until the brain has developed and become capable of higher, rational thought.
Sorry! said:
But is it actually feeling these emotions? What I'm saying is there is a behavioural response, which we already know computers definitely have (for instance, acting a certain way under certain conditions). Then there is an emotional response.
In my opinion, there's no real difference between rational and emotional responses. Each involves the response of some neural network to some pattern of input. When we are aware of our neocortex being temporarily hijacked by our limbic system, we call the experience "feeling an emotion." Does "feeling" have any deeper meaning than a multiplexer being flipped from one input to another? I would argue that it does not.
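To make the multiplexer analogy concrete, here's a toy sketch. The "alarm" scoring rule is invented for illustration; the point is only that one crude evaluation acts as the select line deciding which input drives the output:

```python
def respond(situation):
    """Route behavior like a two-input multiplexer: a crude 'alarm'
    evaluation acts as the select line, deciding whether the
    emotional or the rational input reaches the output."""
    # Fast, crude evaluation (the "limbic" input): pattern-match
    # for anything that looks like an obvious threat.
    alarm = 1.0 if "threat" in situation else 0.0

    if alarm > 0.5:
        return "flee"              # the emotional input is selected
    return "stop and deliberate"   # the rational input is selected

print(respond({"threat", "darkness"}))  # -> flee
print(respond({"puzzle"}))              # -> stop and deliberate
```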
Can computers have free will?
If the computer is deterministic, the answer seems to be a solid "no." On the other hand, once you bring in non-deterministic events -- randomness, like the time between the receipt of network packets -- the answer may well be "yes."
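As a sketch of what I mean, here is one way a program can escape pure determinism by sampling events outside itself. I'm mixing timing jitter from the high-resolution clock with the operating system's entropy pool, which is itself fed by unpredictable hardware events such as interrupt timing:

```python
import os
import time

def nondeterministic_choice(options):
    """Pick an option using non-determinism from outside the program:
    timing jitter from the high-resolution clock, XORed with bytes
    from the OS entropy pool."""
    jitter = time.perf_counter_ns() & 0xFFFF
    entropy = int.from_bytes(os.urandom(2), "big")
    return options[(jitter ^ entropy) % len(options)]

# Run this twice and you may get different answers -- something a
# purely deterministic program could never do.
print(nondeterministic_choice(["fight", "flight"]))
```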
More specifically, computers probably can have as much free will as humans. A more interesting question, though, is whether or not humans have any free will in the first place. In my opinion, they do not.
I could not agree more with you that our brains are electrical connections between neurons. Have you ever looked at the complexity of these connections? Have fun replicating that in an artificial device.
Our brains have more complexity than we can currently emulate in computer hardware, but that does not mean such complexity is really necessary for intelligence. It is possible that evolution rewards simplicity so strongly that our brains contain the bare minimum complexity capable of intelligence, but it seems that we can create intelligence with far fewer resources, particularly if you restrict the domain of problems to chess or heart attacks.
And computers aren't quite growing by the leaps and bounds that they were some 10 years ago, when computers became obsolete every 3 years or so.
This is incorrect.
Moore's law is alive and well. It just happens that personal computers are now a mature market; most PCs do most of what most users want them to do. Bigger computers, however, continue to advance at an astounding rate.
My stance on intelligence is that we humans have a delusion of grandeur about our own thought processes. It stands to reason that, to understand one "thinking machine," you would need a thinking machine of even greater power. Our brains may not be complex enough to understand their own complexity. As a result, it's very easy for people to write off any machine that they can understand as being unintelligent.
Consider the statement:
- "If a machine is understandable, it is not intelligent."
The contrapositive of this statement, which is logically equivalent, is:
- "If a machine is intelligent, it must not be understandable."
That's very dangerous thinking! Any machine that we design, even if capable of emergent behavior, will necessarily be understandable. By that logic, we will never be able to create a machine that we will deem intelligent, no matter how capable it actually is.
My own perspective, unpopular as it may be, is that we ourselves are not intelligent in the way that we usually define intelligence. The processes that occur in our brains are not magic, and they do not defy or transcend any laws of physics. I believe our thinking processes are based on a few small rules -- like those of the artificial neural networks that diagnose heart attacks -- compounded many billions of times until the emergent behavior is all we see. I believe that our thinking is probably every bit as mechanical as that of the machines we build. We deem ourselves "intelligent" simply because we do not yet understand ourselves.
The gap between human and machine intelligence can be bridged in either direction. It seems inevitable that we will eventually make machines as complex as the human brain, but we may also need to relax the arrogant attitude that the human brain does something that no machine ever could.
- Warren