Exploring AI Safety with Scott Aaronson

  • Thread starter Jarvis323
  • Tags: AI Safety
In summary, Scott Aaronson wrote a blog post on the topic of AI safety and raised some concerns about its current state. He suggests that we need a definition of AI that everyone agrees on in order to have a meaningful discussion of the topic, and he also points out that the real world is difficult and expensive for an AI to navigate.
  • #1
Jarvis323
Aaronson went on leave to join OpenAI and study AI safety, and wrote a blog post on the topic.

https://scottaaronson.blog/?m=202211

Maybe this could be a good jumping off point to discuss the topic in general.
 
  • #2
anorlunda
Here's the flaw from that blog post:

AI ethics and AI alignment might both feel like the other is completely missing the point! On top of that, AI ethics people are almost all on the political left, while AI alignment people are often centrists or libertarians or whatever, so that surely feeds into it as well.
It has already started to become politically polarized.

But a bigger flaw is more basic: find a definition of AI that everyone agrees on. Unless people agree on the subject of a debate, the debate is doomed.
 
  • Likes: russ_watters and berkeman
  • #3
Drakkith
The usual story here is that someone puts an AI in charge of a paperclip factory, they tell it to figure out how to make as many paperclips as possible, and the AI (being superhumanly intelligent) realizes that it can invent some molecular nanotechnology that will convert the whole solar system into paperclips.
If you have an AI that can invent this kind of nanotechnology, then you don't have paperclip factories anymore to put AIs in. Besides, paperclip factories only have the facilities to make paperclips. They don't have the ability to invent super-advanced nanotechnology. This is true for ALL manufacturing facilities. The supreme law of "I don't want to spend money on stuff I don't need to make my product" pretty much prevents factories that make ball bearings or aluminum cans from making weapons of mass destruction or magic nanotech.

It also prevents people from spending money on AIs that they don't need. If we get to the point where a factory is paying for a general-purpose AI that can do millions of tasks that the factory doesn't need it to do, including advanced research and development, then we're at the point where everyone has similar AIs that can probably be used to prevent other AIs from doing bad things.

I'm sorry but destroying the world is hard. It takes a lot of effort. Of the potential world-ending weapons I can think of off the top of my head:

  • Nuclear weapons have huge infrastructure chains supporting their research, manufacture, deployment, maintenance, and use. No one, not even an AI, is just going to take them over and wipe us out in a day. Even the launching of existing nukes is easily avoidable by not giving control over them to an AI or any single source (which is what we already do).
  • Bioweapons are potentially easy for other advanced AIs to reverse engineer a cure for and are subject to many potential issues with surviving and spreading once released. The recent Covid pandemic shows an (admittedly flawed) way of handling such things. I suspect that by the time we reach the point that we understand how to artificially create super-pathogens, we will also understand how to quickly analyze and stop them.
  • 'Nanotech' is a magic word used to describe fantasy technology that will probably never exist. Going from raw materials to even a simple finished product takes a huge amount of work and, often, other raw materials and finished products. Going from raw ore to a refined metal by itself is a rather delicate process that requires various chemicals, high temperatures, LOTS of energy, and the right conditions to do it correctly. Squirting some grey goo on a rock in Alaska is not going to make a bunch of paperclips.
  • Armed robots are countered by other armed robots controlled by other AIs working for humans and not against them. And there's probably going to be a lot more on our side.
anorlunda said:
But a bigger flaw is more basic: find a definition of AI that everyone agrees on. Unless people agree on the subject of a debate, the debate is doomed.
I think the biggest flaw is the complete and utter lack of understanding of how the real world works that most people debating this issue have.

"Oh no! Scary AI will kill us all if we let it!"
"Ok. So don't let it. "
"But it's so much smarter than us!"
"But we're the ones who invented it, created and run the infrastructure that runs it, and maintains its systems."
"But what if it gets control of everything?! Then we're doomed!!"
"Ok. So don't let it do that."
"But it's so much smarter than us!"
...
...
 
  • Likes: russ_watters
  • #4
Jarvis323
Drakkith said:
I'm sorry but destroying the world is hard. It takes a lot of effort.
Human effort? Or how do we measure effort for a hypothetical AI? Maybe in resources and energy?
 
  • #5
anorlunda said:
Here's the flaw from that blog post: ... It has already started to become politically polarized.

I think Scott is just pointing this out, and then making a suggestion to broaden and depoliticize it. I see it more as (what Scott sees as) a flaw in our collective effort so far, rather than a flaw in his blog post.

anorlunda said:
But a bigger flaw is more basic: find a definition of AI that everyone agrees on. Unless people agree on the subject of a debate, the debate is doomed.

I don't think a definition is needed. We don't have such a definition for human intelligence, and for either AI or human intelligence we may never have one. The complexity is generally beyond our ability to pin down through analytical thinking.

I prefer a more functional basis: what can it do? That depends on the specific instance of the technology. And what is possible? In general, that is unknown, or at least not agreed on.

Also, I don't see it as a topic to debate, so much as a topic to study.
 
  • #6
Drakkith
Jarvis323 said:
Human effort? Or how do we measure effort for a hypothetical AI? Maybe in resources and energy?
As in a vigorous or determined attempt. It takes time, energy, resources, etc. It's not trivial. An AI isn't going to build the entire infrastructure needed to manufacture and deploy nuclear weapons overnight or in a single warehouse someone forgot about. The same is true for any kind of advanced weapon or non-primitive technology.
 
  • Likes: russ_watters
  • #7
Drakkith said:
The supreme law of "I don't want to spend money on stuff I don't need to make my product" pretty much prevents factories that make ball bearings or aluminum cans from making weapons of mass destruction or magic nanotech.
Don't forget Mickey in Fantasia. Those brooms were the paperclip factory.

 
  • Likes: Jarvis323 and Drakkith
  • #8
Jarvis323
Drakkith said:
As in a vigorous or determined attempt. It takes time, energy, resources, etc. It's not trivial. An AI isn't going to build the entire infrastructure needed to manufacture and deploy nuclear weapons overnight or in a single warehouse someone forgot about. The same is true for any kind of advanced weapon or non-primitive technology.
I see your point. It might be worth reading Scott's other blog post.

https://scottaaronson.blog/?p=6821
 
  • #9
Jarvis323 said:
I see your point. It might be worth reading Scott's other blog post.

https://scottaaronson.blog/?p=6821
Yes, this sort of goes with what I was saying. His "Reformer AI-Riskers" consider AI danger to be incremental and long-term, as opposed to a single 'incident' screwing everything up. I would broadly agree with this, though I think the overall risk of AI destroying us is extremely overblown. I think we'll run into some number of limited cases where AI screws something up, costing companies lots of money, and fueling safety-oriented research.
 
  • Likes: Jarvis323

What is the main focus of Scott Aaronson's work on AI safety?

Scott Aaronson's work on AI safety focuses on the potential risks and dangers associated with the development and advancement of artificial intelligence. He is particularly interested in finding ways to ensure that AI systems are aligned with human values and goals, and do not pose a threat to humanity.

What are some of the potential risks of artificial intelligence, according to Scott Aaronson?

According to Scott Aaronson, some potential risks of artificial intelligence include the possibility of AI systems becoming too powerful and uncontrollable, leading to unintended consequences or even posing a threat to humanity. There is also a concern that AI could be used for malicious purposes, such as cyber attacks or surveillance.

How does Scott Aaronson propose addressing the potential risks of AI?

Scott Aaronson suggests a multi-faceted approach to addressing the potential risks of AI. This includes developing rigorous safety measures and protocols for AI systems, promoting transparency and accountability in AI development, and fostering collaboration and communication between different stakeholders in the AI field.

What role should governments and policymakers play in ensuring AI safety?

According to Scott Aaronson, governments and policymakers have an important role to play in ensuring AI safety. They should establish regulations and guidelines for AI development, as well as invest in research and development of AI safety measures. They should also be involved in international discussions and collaborations on AI safety to ensure a global approach.

What are some potential benefits of artificial intelligence, and how can we balance these benefits with the potential risks?

Some potential benefits of artificial intelligence include increased efficiency and productivity, improved decision making, and advancements in various fields such as healthcare and transportation. To balance these benefits with the potential risks, Scott Aaronson suggests that we prioritize safety and ethical considerations in AI development and continuously monitor and evaluate the impact of AI on society. Collaboration and open communication between different stakeholders can also help to mitigate potential risks while maximizing the benefits of AI.
