Ethical and social risks of harm from Language Models

In summary, this thread discusses a paper that analyzes the potential risks associated with large-scale language models (LMs) and presents six specific risk areas: discrimination, exclusion, and toxicity; information hazards; misinformation harms; malicious uses; human-computer interaction harms; and automation, access, and environmental harms. These risks are discussed in detail, drawing on multidisciplinary literature from computer science, linguistics, and social sciences. The thread also highlights the use of GPT-3 and similar models for generating false information and the potential impact on readers' ability to discern the truth, including the use of GPT-3 by companies like Copy.ai to generate copy in various tones. Ultimately, the discussion turns to how such models learn from whatever data they are given and what that implies for mitigating these risks.
  • #1
gleem
TL;DR Summary
A recent paper discusses the "Ethical and social risks of harm from Language Models".
I thought this paper might be of interest to those who are interested in how large language models such as GPT-3 pose various hazards.

From the abstract:
This paper aims to help structure the risk landscape associated with large-scale Language Models (LMs). In order to foster advances in responsible innovation, an in-depth understanding of the potential risks posed by these models is needed. A wide range of established and anticipated risks are analysed in detail, drawing on multidisciplinary literature from computer science, linguistics, and social sciences. The paper outlines six specific risk areas: I. Discrimination, Exclusion and Toxicity, II. Information Hazards, III. Misinformation Harms, IV. Malicious Uses, V. Human-Computer Interaction Harms, VI. Automation, Access, and Environmental Harms.

It is quite long, but a particularly attractive feature is that it gives advice on how to read it: for those who can afford only a minute or ten minutes, and for experts and non-experts who want to go deeper.

https://arxiv.org/pdf/2112.04359.pdf
 
  • #2
I have to ask:

Did GPT-3 write this paper?
 
  • #3
At first, I misread the title to mean "harm from Lagrange Models", but then I read the actual paper, and I found that Lagrange would be an improvement in understandability. :wink:

After consulting Wikipedia, I figured out that the subject is natural language, not programming languages. I don't want to sound anti-science, but that list of risks sounds like woke nonsense.
 
  • #4
gleem said:
I thought this paper might be of interest to those who are interested in how large language models such as GPT-3 pose various hazards.
This paper looks like an extremely long-winded and impenetrable way of saying that large language models like GPT-3 make it easier to generate false information that people might mistakenly believe to be true, and that is a bad thing. Ok, message received.
 
  • #5
It's not woke, but we need to be awake to what it can do. Basically, copywriters can use it to generate copy for websites and other media; at least, that's the current use case for it.

Jasper (formerly Jarvis) and Copy.ai utilize GPT-3 for that purpose. You type in a brief description of what you want written and select a tone for the article. The model spits out several possible paragraphs of text based on your initial description and whatever related text it has absorbed from the internet. You copy and paste the one that best matches what you are looking for.

GPT-3 can do a lot more than this, but you get the idea. It's like the gasoline needed to accelerate misinformation, making it difficult for a reader to decide whether it's real or not.
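For a sense of how thin the wrapper around GPT-3 can be, here is a minimal sketch of the kind of call such a service might make under the hood. To be clear, this is not Copy.ai's or Jasper's actual code; it just assumes access to OpenAI's legacy (pre-1.0) Completions API, folds the chosen tone into the prompt, and uses an illustrative model name and parameters.

```python
import openai  # legacy (pre-1.0) OpenAI client

openai.api_key = "YOUR_API_KEY"  # placeholder

def generate_copy(description, tone="friendly", variants=3):
    """Ask GPT-3 for several candidate paragraphs in a given tone."""
    prompt = (
        f"Write a short paragraph of marketing copy in a {tone} tone "
        f"about the following topic:\n\n{description}\n\nParagraph:"
    )
    response = openai.Completion.create(
        engine="text-davinci-002",  # illustrative GPT-3 model name
        prompt=prompt,
        max_tokens=150,
        n=variants,                 # several candidates to choose from
        temperature=0.8,            # higher temperature -> more varied copy
    )
    return [choice.text.strip() for choice in response.choices]

# The user then copies and pastes whichever candidate they like best.
for text in generate_copy("Quantum computers can solve the P vs NP problem.", tone="witty"):
    print(text, "\n---")
```

Nothing in that sketch checks whether the description is true; the model simply completes the prompt, which is exactly why it scales misinformation so cheaply.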
 
  • #6
Here's an example from copy.ai:

I enter the description:

Quantum computers can solve the P vs NP problem.

and select the tone of friendly.

Copy.ai generates these possible text blocks:
1. Quantum computers can solve the P vs NP problem. This means that quantum computers, if they exist and can be built, are capable of solving problems that are unsolvable by any classical computer.

2. The P vs NP problem is a famous unsolved problem in computer science. It asks whether, for all problems, there exists an efficient algorithm to determine whether the answer to a given problem is “yes” or “no”. The answer of this question would have vast ramifications for mathematics and computer science.

3. It is widely accepted that there is no general, efficient algorithm to determine whether a given polynomial equation and its coefficients have a solution in whole numbers. Quantum computers can solve this problem efficiently, with the help of special quantum circuits called Quantum Fourier Transform circuits.

4. One of the 21st century’s most mind-bending questions is whether there is an algorithm that can decide if a given program will ever print out a specific string—or, more simply put, whether some problems can be solved by computers more quickly and with less memory than other problems.
Or I can change the tone to be witty:
1. Quantum computers can solve the P vs NP problem. And they can do it in a flash of light.

It can also continue generating more and more copy if you don't see anything you like. I suspect you can also use the copy generated to be your next description and get even more copy.

Changed tone to serious:
1. Quantum computers can solve the P vs NP problem. A quantum computer, being built by D-Wave, is a special computer that uses quantum mechanics instead of classical mechanics to handle information. This means that it uses qubits and quantum logic gates in order to solve problems. One such problem is the P vs NP problem, which concerns whether some problems have a short enough solution. Quantum algorithms have solved it, showing that they can find optimal solutions faster than classical computers while using fewer resources and calculating less data overall.
As an example, I used the above text as my description to get this:
1. The P vs NP problem is a major unsolved problem in computer science. It concerns whether certain problems involving finding the shortest solution (or "quickest") can be solved in polynomial time, by which we mean that there is some polynomial p(n) such that if you give me any problem of size n, I will find its solution in time no more than p(n). The best you could get is an exponential solution process, one that takes an exponential amount of time to calculate any easy solution. The hard part about the P vs NP problem is that it seems as if we might be able to prove that no known algorithm can solve these problems quickly, but we still don't know how to prove this rigorously.
 
  • #8
Quoting the paper:
Types of Harm: Potential harms include justified offense, material (allocational) harm, and the unjust representation or treatment of marginalised groups.
• Social stereotypes and unfair discrimination
• Exclusionary norms
• Toxic language
• Lower performance by social group
Gutenberg's invention, the printed word, has the same potential harms. So do ink and paper. So does TCP/IP. That's why I labeled the paper woke.

Technologies are neutral; they have the potential for good or bad.
 
  • #9
A difficulty comes about in defining what is good and what is bad.
 
  • #10
256bits said:
A difficulty comes about in defining what is good and what is bad.
Exactly. That's why it is wrong to judge technologies as intrinsically good or bad.
 
  • #11
Here's a retrospective (not prospective) view of an evil technology. Beware, because some people might describe PF as social media.

https://journals.lww.com/em-news/Fu...son__It_s_Time_to_Ask_Patients_to_Quit.2.aspx

EMERGENCY MEDICINE NEWS

It's Time to Ask Patients to Quit Social Media


I have been tracking research for several years as our mental health crisis rages, always operating with a solid amount of confirmation bias, in search of evidence to support what I have been telling patients and friends alike for a long time (including a recent patient having a panic attack): Get off social media.
The data just keep coming to suggest that social media is destructive to mental health. Studies have connected it to a decrease in psychological well-being among adolescents, and others have tied it to the development of anxiety disorders and depression. Heavy use of social media has also been linked to loneliness and inattention, and the likelihood of having an eating disorder among adolescents has been correlated with the number of social media accounts someone has. Worst of all, suicides among young people skyrocketed by 56 percent from 2007 through 2017. I can print out a stack of new studies to bolster my case every time I advise a patient experiencing depression or anxiety to delete his social media accounts.
 
  • #12
This is why I have no social media accounts (other than PF). For years, my wife has been whining about how great everyone else's life is because of what she sees posted on her Facebook feed. No matter how many times I explain that nobody posts the bad things and that we can't possibly keep up (nor would I want to) with the cumulative 'awesome' experiences that show up in her feeds each day, she still circles back to the same old stuff about how 'boring' our life is in comparison to everyone else. It's depressing to hear it over and over. So, even though I'm not the one with all of the accounts, I still get to suffer the results. :rolleyes:
 
  • #13
anorlunda said:
Exactly. That's why it is wrong to judge technologies as intrinsically good or bad.
Google, whose research group wrote the paper we're discussing, isn't interested in labelling its flagship product as intrinsically bad.

The technology is a breakthrough language model that is set to revolutionize the digital world. It is trained on huge amounts of text data taken from the internet. The amounts of data are so large (social media posts, emails, news articles, books, YouTube videos, YouTube comments, Wikipedia articles, PF forums, Reddit, etc.) that nobody could possibly look through it all in detail to pick out what it should read and what it shouldn't. It also learns from conversations it has with people. In general, it will copy us and mimic the kind of language we use. If it learns from racists, it will pick up racist behavior.

Besides the risk of intentional misuse (e.g. creating a racist one with the intention of flooding social media, internet forums, and comments sections with racist propaganda), it can also unintentionally cause harm.

Since this will be a case of one model communicating with billions of people, on various levels, including personal conversation, it can have a major impact.

My personal opinion is that AI ethics is foremost a problem of human ethics. Even malicious intentions aside, AI will copy us, and so whatever we put out there is what we can expect it to learn.
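To make the "it will copy us" point concrete, here is a toy illustration. GPT-3 is a large transformer, nothing like the bigram model below, but even the simplest statistical text generator shows the mechanism: it can only echo the patterns, and the falsehoods, present in whatever text it was trained on (the training sentence here is an invented example).

```python
import random
from collections import defaultdict

def train_bigram(corpus):
    """Record which word follows which in the training text."""
    model = defaultdict(list)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model

def generate(model, start, length=12):
    """Sample a continuation by repeatedly picking an observed next word."""
    word, out = start, [start]
    for _ in range(length):
        followers = model.get(word)
        if not followers:
            break
        word = random.choice(followers)
        out.append(word)
    return " ".join(out)

# Whatever biases or falsehoods are in the corpus are what come back out.
corpus = "the moon landing was faked and the moon landing was staged on a set"
model = train_bigram(corpus)
print(generate(model, "the"))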
 
  • #14
Jarvis323 said:
AI will copy us, and so whatever we put out there is what we can expect it to learn.
That sounds like a child. So evil parents might raise evil children. My goodness, nobody would have expected that.
 
  • #15
Meta (Facebook) has introduced its new chatbot, BlenderBot 3, releasing it publicly to gather more varied interactions with people in order to improve its accuracy and its ability to remain on topic, and to reduce its bias and antisocial responses. It is available online in the US for anybody wishing to help with its training at this site:
https://blenderbot.ai/?fbclid=IwAR0ToZvyDUOZFdouinBOZB2hcSvFfiQnUDWqYDRSygJiSvSkeyYO6NGNgjA

You may provide feedback to Meta for any troubling responses.

From the introduction of Meta's article describing the program:

When we launched BlenderBot 3 a few days ago, we talked extensively about the promise and challenges that come with such a public demo, including the possibility that it could result in problematic or offensive language. While it is painful to see some of these offensive responses, public demos like this are important for building truly robust conversational AI systems and bridging the clear gap that exists today before such systems can be productionized.

We’ve already collected 70K conversations from the public demo, which we will use to improve BlenderBot 3. From feedback provided by 25 percent of participants on 260K bot messages, 0.11 percent of BlenderBot’s responses were flagged as inappropriate, 1.36 percent as nonsensical, and 1 percent as off-topic. We continue to believe that the way to advance AI is through open and reproducible research at scale. We also believe that progress is best served by inviting a wide and diverse community to participate. Thanks for all your input (and patience!) as our chatbots improve.

And finally, as we push ahead with this project, it’s important to note that we have invested heavily in conversational safety research and taken steps to protect people trying the demo. We require that everyone who uses the demo be over 18, that they acknowledge they understand it’s for research and entertainment purposes only and that it can make untrue or offensive statements, and that they agree not to intentionally trigger the bot to make offensive statements.

– Joelle Pineau, Managing Director of Fundamental AI Research at Meta. August 8, 2022.
Full article at https://ai.facebook.com/blog/blende...hat-improves-its-skills-and-safety-over-time/
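For scale, and assuming the quoted percentages are shares of the 260K rated bot messages (the blog post is not fully explicit about the denominator), those rates work out to roughly the following message counts:

```python
# Rough scale of Meta's quoted BlenderBot 3 feedback figures,
# reading each percentage as a share of the 260K rated bot messages
# (an assumption; the post does not spell out the denominator).
rated_messages = 260_000

rates = {
    "inappropriate": 0.0011,  # 0.11 %
    "nonsensical":   0.0136,  # 1.36 %
    "off-topic":     0.0100,  # 1 %
}

for label, rate in rates.items():
    print(f"{label:>13}: ~{rate * rated_messages:,.0f} messages")
# inappropriate: ~286, nonsensical: ~3,536, off-topic: ~2,600
```

Small percentages, but at this scale they still amount to thousands of flagged or off-target responses.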
 

What are language models and why are they important?

Language models are a type of artificial intelligence system that is trained to understand and generate human language. They are important because they are used in a wide range of applications such as virtual assistants, chatbots, and language translation tools.

What are the ethical concerns surrounding language models?

One major ethical concern is the potential for language models to perpetuate biases and discrimination present in the data they are trained on. There is also the issue of ownership and control over the data used to train these models, as well as the potential for misuse or malicious intent.

How can language models cause harm?

Language models have the potential to cause harm in a number of ways. As mentioned, they can perpetuate biases and discrimination, which can have real-world consequences. They can also be used to generate fake or misleading information, which can spread misinformation and deceive people. Additionally, language models can invade privacy by collecting and storing personal data.

What steps can be taken to mitigate ethical and social risks from language models?

One important step is to ensure that the data used to train language models is diverse and free from biases. Transparency and accountability in the development and use of language models is also crucial. Stricter regulations and guidelines can also help mitigate potential harm. It is also important for developers to continually monitor and evaluate the impact of their language models on society.
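On the data-curation point specifically, here is a deliberately crude sketch of what a filtering step can look like. Production pipelines use trained toxicity classifiers and human review rather than a keyword blocklist, and the blocklist terms below are placeholders, not real data.

```python
# Naive illustration of one curation step: drop training examples that
# trip a keyword blocklist. Real systems use learned classifiers.
BLOCKLIST = {"slur1", "slur2"}  # placeholder terms, not real data

def filter_corpus(examples):
    """Keep only examples containing none of the blocklisted terms."""
    kept = []
    for text in examples:
        tokens = set(text.lower().split())
        if tokens.isdisjoint(BLOCKLIST):
            kept.append(text)
    return kept

corpus = ["a harmless sentence", "a sentence containing slur1"]
print(filter_corpus(corpus))  # -> ['a harmless sentence']
```

Even so, keyword and classifier filters miss context-dependent harms, which is why the answer above also stresses transparency, monitoring, and evaluation.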

What role do scientists have in addressing the ethical and social risks of language models?

Scientists have a responsibility to consider the potential ethical and social implications of their research and to actively work towards mitigating any risks. This can include conducting thorough ethical reviews, collaborating with experts in fields such as ethics and social sciences, and advocating for responsible and ethical use of language models.
