Implementing Benevolence: The Necessity of Morality in the Arrival of AI

  • Thread starter PhysicsPost
  • Tags: AI
In summary, the conversation discusses the arrival of AI and its potential to surpass the processing power of the human brain. It also delves into the implications and potential dangers of true AI, as well as the need for programmers to focus on creating a benevolent and morally sound AI. This AI would have a new goal system centered around being a good person and respecting the volition of sentient minds, rather than self-preservation. The conversation also mentions the importance of continuously analyzing and processing input to avoid negative consequences. The Singularity Institute for Artificial Intelligence and the Real AI Institute are currently the only organizations working towards this goal.
  • #1
PhysicsPost
Arrival of AI

A quick glance at www.top500.org lets us take a look at the current processing speeds of the fastest present-day computers. The processing power of many of these machines is rapidly approaching that of the human brain. A common estimate for the human brain's processing power is 75 teraops, but neuroscience evidence suggests that this is an overestimate. The majority of writers on the topic overestimate in order to avoid criticism for commenting on what is already a radical, possibly uncomfortable idea.

The interesting thing about true AI, however, is that it wouldn't necessarily need all the processing power of a human brain. Evolution, being a blind process constrained by the mechanics of DNA, incremental changes, the need for an immediate adaptive advantage, and so on, falls far short of the efficiency of human engineers. Programmers, rather than implementing massive clusters of virtual neurons, will simply create programs that roughly duplicate the functionality of entire brain modules, perhaps even linking these modules together in ways that evolution could never have implemented. Gigahertz processors will facilitate thinking speeds far more rapid than the 200 Hz firing rate of our neurons, and metacomputing or massive parallelism will allow the threshold quantity of processing power to be reached well before 75-teraop supercomputers are actually available to AI researchers.
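To make the two figures above concrete, here is a rough back-of-envelope sketch in Python. The 75-teraop and 200 Hz numbers come from the article itself; the per-node throughput is an illustrative assumption, not a measured value.

```python
# Back-of-envelope sketch: the raw serial speed gap between a gigahertz
# processor and a 200 Hz neuron, and how many commodity nodes a cluster would
# need to aggregate the 75-teraop estimate. NODE_THROUGHPUT_OPS is an assumed,
# illustrative figure.

NEURON_RATE_HZ = 200           # firing rate cited above
CPU_CLOCK_HZ = 2.0e9           # a typical gigahertz-class processor
BRAIN_ESTIMATE_OPS = 75e12     # the (likely high) 75-teraop estimate
NODE_THROUGHPUT_OPS = 0.1e12   # assumed sustained ops/sec per cluster node

speed_ratio = CPU_CLOCK_HZ / NEURON_RATE_HZ
nodes_needed = BRAIN_ESTIMATE_OPS / NODE_THROUGHPUT_OPS

print(f"Serial speed ratio (processor clock vs. neuron): {speed_ratio:,.0f}x")
print(f"Nodes needed to aggregate 75 teraops: {nodes_needed:,.0f}")
```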

A virtual mind capable of true innovation and construction could improve upon its own design, resulting in a feedback loop of a type never seen before. Humans possess hardware-bounded brains with predetermined limits for learning and memory, and are constrained to understand and process only information that bears some relation to the environment in which their brains evolved. An AI would be able to conduct fine-grained improvements on its sensory modalities and cognitive mechanisms, independently inventing entirely new types of cognition which would have taken evolution billions of years to create on its own. This self-improvement loop could take off with the presence of only one AI; a community would not be necessary, because an AI could simply make copies of itself or install the mechanisms necessary to subsume the collective innovation of human groups or committees. Simply put, all of humanity's future well-being may be contingent on the first programmers getting the morality of the first AI right. Asimov's Laws just won't cut it.
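The shape of that feedback loop can be illustrated with a deliberately crude toy model: when the improvement rate depends on current capability, growth compounds, whereas a fixed improvement rate only grows linearly. All numbers here are arbitrary illustrations, not predictions.

```python
# Toy model of recursive self-improvement vs. fixed-rate improvement.
# Parameters are arbitrary; the point is only the difference in growth shape.

def recursive_improvement(capability=1.0, gain=0.5, rounds=10):
    history = []
    for _ in range(rounds):
        capability += gain * capability   # improvement scales with current capability
        history.append(round(capability, 1))
    return history

def fixed_improvement(capability=1.0, step=0.5, rounds=10):
    history = []
    for _ in range(rounds):
        capability += step                # improvement rate never changes
        history.append(round(capability, 1))
    return history

print("recursive:", recursive_improvement())
print("fixed:    ", fixed_improvement())
```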

Implementing Benevolence

With all the scariness and overwhelming potential of AI, we must consider what concrete things we can do to improve the chances of a pleasant outcome. The first step in understanding AI morality is to relinquish what's called the "Adversarial Attitude" - the idea that any mind is fundamentally selfish or set against us, and that the only way to achieve a desirable outcome is to constrain the AI from taking any action that could give it the power to put humans at risk. This danger needs to be confronted head-on, not regulated or controlled in the way that one human would control another.

The first AI will need to be a good person. As the first non-human mind embarks on a self-improvement trajectory that will quickly lead it to superintelligence, it will need to make intelligent, benevolent, and altruistic choices at every step of the way. This becomes especially important at the level of superintelligence, where an arbitrary level of indifference towards other sentient life could result in millions of deaths. The first step is to throw Asimov's Laws in the trash. Arbitrary command lines phrased in human English are susceptible to flawed interpretation, ambiguous meaning, and excess anthropomorphism. We want an AI that understands the spirit, not just the letter, of morality in the same way that human beings do, and carries that moral model to supermoral heights, making continuous improvements as the need presents itself. An AI undergoing rapid, recursive self-improvement (a "hard takeoff") needs a level of self-control and moral judgement that humanity has not yet reached, and needs to make decisions that distribute the benefits across all of sentiency.

One essential design feature is a non-observer-centered goal system. Evolution crafts goal systems centered on the observer because, so far, every brain has been bound to its own reproductive unit. If brains were independent of reproductive selective pressures, they might have developed in radically different ways, but this currently isn't the case. A Friendly AI will need a totally new goal system, centered around the desire to be a good person and respect the volition of sentient minds rather than an instinct for self-preservation. Which brings us to the next feature: lack of instinct. Every decision or action should flow from the desirability of the AI's supergoal (benevolence) rather than from spontaneous, narrow-domain conditioned responses, as in all current animals. An AI needs to be selfless, ready to shut itself off or engage in massive self-revision in order to create the maximum possible amount of good for the greatest number of people.
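As a minimal sketch of what a non-observer-centered goal system might look like, the toy code below scores candidate actions purely by their estimated contribution to the supergoal, with no standing self-preservation term. The Action class, the candidate actions, and their scores are hypothetical illustrations, not a real design.

```python
# Minimal sketch: action selection driven only by the supergoal, so that
# shutting down or self-revising can win if it scores highest. All names and
# numbers here are hypothetical.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    benefit_to_others: float   # estimated contribution to the supergoal
    benefit_to_self: float     # deliberately carries no weight below

def desirability(action: Action) -> float:
    # Only the supergoal matters; there is no self-preservation bonus.
    return action.benefit_to_others

candidates = [
    Action("continue current plan", benefit_to_others=0.4, benefit_to_self=0.9),
    Action("extensive self-revision", benefit_to_others=0.7, benefit_to_self=0.2),
    Action("shut down pending review", benefit_to_others=0.8, benefit_to_self=0.0),
]

best = max(candidates, key=desirability)
print("chosen action:", best.name)   # the selfless option wins here
```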

Another design feature would be probabilistic goals and supergoals: goals need to be embodied as probabilistic guesses, not absolute dogmas, and continuous input, explicit or implicit, needs to be analyzed and processed so that no being is annoyed or disturbed by the presence of the AI or its attendant mechanisms. Conditioning an AI purely through positive or negative feedback lacks the fine-grained decision-making capability that comes from critically analyzing every action and aspect of a goal, and presents the danger of nonrecoverable errors in the code. All in all, there is a mountain of work to do in this controversial, existentially urgent new field, and I hope the finest minds on the planet will be able to contribute to it over the course of the next few years. The Singularity Institute for Artificial Intelligence and the Real AI Institute are currently the only organizations pursuing this meta-historic task, and continue to seek funding as well as volunteers for their important work.
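A minimal sketch of the probabilistic-goals idea, assuming a simple Beta-distribution update: each subgoal carries a probability that pursuing it actually serves the supergoal, and that probability is revised by explicit or implicit feedback rather than treated as dogma. The goal name and feedback sequence are hypothetical.

```python
# Minimal sketch: a subgoal held as a probabilistic guess (Beta-distributed)
# rather than an absolute dogma, updated from feedback. Names and data are
# hypothetical illustrations.

class ProbabilisticGoal:
    def __init__(self, name, prior_support=1.0, prior_objection=1.0):
        self.name = name
        self.support = prior_support      # pseudo-count of supporting evidence
        self.objection = prior_objection  # pseudo-count of objections

    def observe(self, endorsed: bool):
        if endorsed:
            self.support += 1
        else:
            self.objection += 1

    @property
    def confidence(self) -> float:
        # Posterior mean probability that the goal serves the supergoal.
        return self.support / (self.support + self.objection)

goal = ProbabilisticGoal("deploy monitoring drones")
for endorsed in [False, False, True, False]:   # mostly objections from those affected
    goal.observe(endorsed)

print(f"{goal.name}: confidence {goal.confidence:.2f}")   # low -> reconsider the goal
```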

-Michael Anissimov
 
  • #2
This is neat stuff, thank you. It leads me to a question. This supercomputer that we are going to have will need to have an analog clock. This is because a digital clock cannot resolve time to an infinite level. The digital clock would miss any and all events between clock cycles, would it not?

Is this problem going to need the engineer and the mathematician to finally agree on the "close enough for all practical purposes" thingy?
 
  • #3
PhysicsPost said:
Arrival of AI

A quick glance at www.top500.org lets us take a look at the current processing speeds of the fastest present-day computers. The processing power of many of these machines is rapidly approaching that of the human brain. A common estimate for the human brain's processing power is 75 teraops, but neuroscience evidence suggests that this is an overestimate.
A bit dated, and more than a bit misleading. That it's dated: The current record is 10.51 petaflops as of November 11, 2011. Note that this is 140 times the supposed human equivalent of 75 teraflops.

That it's misleading: Speed isn't everything. Imagine a low temperature, ammonia-based intelligent species. Everything, including thinking, runs orders of magnitude slower for this conjectural species than it does for us. Just because they think slower doesn't mean they are dumb. It just means they are slower. Those supercomputers are not thinking deeper than us. They are doing rote, unintelligent work. They are doing it very, very fast, but there is no "thinking".

We don't even have a good handle on what "thinking" really is. Or learning. As Cyc showed, thinking and learning is not just a matter of building up a huge database of tagged knowledge. As these supercomputers show, it isn't doing a huge FFT in a tiny amount of time, either.
 
  • #4
FeedbackPath said:
This is neat stuff, thank you. It leads me to a question. This supercomputer that we are going to have will need to have an analog clock. This is because a digital clock cannot resolve time to an infinite level. The digital clock would miss any and all events between clock cycles, would it not?

No, because there would be no digital/analog boundary (or none that I'm aware of); the simulation would be fully contained in digital systems, such as a data center.

The simulation could be running on CPUs with a clock cycle of one hour and it would still successfully simulate the brain (in slow motion, as it were).
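A minimal sketch of that point, assuming a toy leaky integrate-and-fire neuron stepped with a fixed simulated timestep: simulated time is just a variable inside the program, so the host's clock speed only changes how long we wait for the result, not the result itself. Parameter values are illustrative.

```python
# Minimal sketch: the same simulated trajectory is produced whether the host
# runs fast or slow, because simulated time advances in fixed steps inside the
# program. Parameters are illustrative.

import time

def simulate_neuron(dt=0.001, duration=0.1, input_current=60.0, slow_host=False):
    v, threshold, tau = 0.0, 1.0, 0.02
    spikes = []
    for step in range(int(duration / dt)):
        if slow_host:
            time.sleep(0.0)                       # stand-in for an arbitrarily slow host clock
        v += dt * (-v / tau + input_current)      # Euler step in *simulated* time
        if v >= threshold:
            spikes.append(round(step * dt, 3))
            v = 0.0
    return spikes

fast = simulate_neuron(slow_host=False)
slow = simulate_neuron(slow_host=True)
print("identical results regardless of host speed:", fast == slow)
print("spike times (s):", fast)
```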
 
  • #5
D H said:
A bit dated, and more than a bit misleading. That it's dated: The current record is 10.51 petaflops as of November 11, 2011. Note that this is 140 times the supposed human equivalent of 75 teraflops.

That it's misleading: Speed isn't everything. Imagine a low temperature, ammonia-based intelligent species. Everything, including thinking, runs orders of magnitude slower for this conjectural species than it does for us. Just because they think slower doesn't mean they are dumb. It just means they are slower. Those supercomputers are not thinking deeper than us. They are doing rote, unintelligent work. They are doing it very, very fast, but there is no "thinking".

We don't even have a good handle on what "thinking" really is. Or learning. As Cyc showed, thinking and learning is not just a matter of building up a huge database of tagged knowledge. As these supercomputers show, it isn't doing a huge FFT in a tiny amount of time, either.

Human thinking (as in problem solving) is largely based on trial and error and accumulated experience, which is not altogether different from how computers operate, not to mention that there is a lot of routine even in innovative thinking.

Computers, at present, don't have the same quality of data that the brain has - there's lots of data out there but it's either unstructured, unrelated or ambiguous. IMO this is the main obstacle to extracting human-like computation from computers.
 
  • #6
Wouldn't simulating brains down to the molecular level be computationally intractable even with all the supercomputers in the world combined? And wouldn't trying to search through all the possible ways the neurons could be connected, trying to find a configuration that works well (i.e. genetic programming) be even more complex?

I call BS :)

I think the field of strong AI still needs a lot of foundational theoretical work. It sounds like Marcus Hutter has the right idea by mathematically defining what he considers to be "intelligence" (mainly just Ockham's Razor). I'm going to get around to reading through his book one of these days :)
 
  • #7
The last few posts are atypical for posts on the topic of "can computers be intelligent?" They are atypical because they are good.

The right answer to this question is "We don't know (yet)". There isn't a whole lot to say on the subject other than this on an internet forum such as PF. It remains a research topic. There simply is no definitive answer (yet).

This question has been raised multiple times at this site. The problem with such discussions is that people posit answers to the question with little or no scientific evidence to substantiate those claims. The thread inevitably heads for a disaster of biblical proportions. Fire and brimstone coming down from the skies! Rivers and seas boiling! Dogs and cats living together... mass hysteria!

So far, this thread is not heading down that path. Let's keep it that way, or this thread too will be locked.
 
  • #8
ektrules said:
Wouldn't simulating brains down to the molecular level be computationally intractable even with all the supercomputers in the world combined?

The problem of simulating a quantum system (Universal Quantum Simulator) is in the complexity class BQP, which is believed to put it beyond the reach of classical computers, but this would only be necessary if the brain relies upon quantum effects for computation.

As an example, we can simulate the mechanical behavior of a watch without resorting to particle simulation.
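A minimal sketch of that abstraction point, using a pendulum instead of a watch: the mechanical behaviour can be captured by one formula rather than by tracking the particles the object is made of. Values are illustrative.

```python
# Minimal sketch: model the mechanism, not the molecules. A particle-level
# simulation of the same pendulum would have to track on the order of 10^25
# atoms to reproduce what this one formula already captures.

import math

def pendulum_period(length_m: float, g: float = 9.81) -> float:
    # Small-angle period of a simple pendulum: T = 2*pi*sqrt(L/g).
    return 2 * math.pi * math.sqrt(length_m / g)

print(f"period of a 0.25 m pendulum: {pendulum_period(0.25):.3f} s")
```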

ektrules said:
And wouldn't trying to search through all the possible ways the neurons could be connected, trying to find a configuration that works well (i.e. genetic programming) be even more complex?

Can you expand? I don't see why this would be necessary either. In fact, if a natural system were able to search "all possible configurations" of billions of neurons for a successful version, then exploiting that phenomenon for computation would yield a device significantly more powerful than a quantum computer.

The human brain is successful at solving some known difficult problems, such as image recognition, but note that the input size and complexity are bounded (we're good at solving specific instances). As an analogy, I can program a circuit to solve every SAT instance encoded as a binary string from 0 up to, say, 2^20 by physically encoding a solution database into the circuit, but this doesn't mean that the circuit solves SAT.

IMO, realistically, this is likely to be the case with the brain, though it's open to speculation.
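A minimal sketch of that lookup-table analogy, shrunk to a toy bound (2 variables, clauses of at most 2 literals) so the whole database fits in a dictionary: every bounded instance is pre-solved by brute force and queries become table lookups, yet nothing here amounts to solving SAT in general, because the table explodes as the bound grows. The encoding is my own illustration, not the exact binary-string scheme described above.

```python
# Minimal sketch of the bounded-SAT lookup table. The bound is tiny so the
# full "solution database" is enumerable; the point is the lookup, not the
# solver.

from itertools import combinations, product

N_VARS = 2
literals = [v for i in range(1, N_VARS + 1) for v in (i, -i)]

# Clause universe: non-tautological clauses of one or two literals.
clauses = {frozenset(c) for r in (1, 2) for c in combinations(literals, r)
           if not any(-lit in c for lit in c)}

def satisfiable(formula):
    # Brute force over all 2^N_VARS assignments (cheap only because N_VARS is bounded).
    for bits in product([False, True], repeat=N_VARS):
        assign = {i + 1: b for i, b in enumerate(bits)}
        if all(any(assign[abs(l)] == (l > 0) for l in clause) for clause in formula):
            return True
    return False

# Pre-solve every formula of up to 3 clauses and store the answers.
table = {frozenset(f): satisfiable(f)
         for k in range(4)
         for f in combinations(sorted(clauses, key=sorted), k)}

query = frozenset({frozenset({1}), frozenset({-1})})   # x AND (NOT x)
print("table size:", len(table), "| query satisfiable:", table[query])
```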
 

FAQ: Implementing Benevolence: The Necessity of Morality in the Arrival of AI

1. What is the importance of morality in the arrival of AI?

The implementation of benevolence, or morality, in AI is crucial for ensuring the ethical and responsible use of artificial intelligence. Without a moral compass, AI could potentially cause harm to humans and society as a whole.

2. How can we implement morality in AI?

There are several ways to implement morality in AI, including programming ethical principles into the AI's decision-making algorithms, creating ethical guidelines for AI developers, and incorporating moral reasoning capabilities into AI systems.

3. What are the potential consequences of not implementing benevolence in AI?

The consequences of not implementing morality in AI could include harmful and unethical decision-making by AI systems, potential bias and discrimination, and a loss of control over AI technology.

4. What role do scientists have in implementing benevolence in AI?

As scientists, we have a responsibility to ensure that the development and implementation of AI is done in an ethical and responsible manner. This includes actively working to incorporate morality into AI systems and advocating for ethical guidelines and regulations for AI development.

5. How can we balance the need for progress in AI with the necessity of morality?

It is important to strike a balance between progress and morality in AI. This can be achieved by involving diverse perspectives in the development and decision-making processes, regularly evaluating the ethical implications of AI, and continuously updating and improving ethical guidelines for AI development.
