AI and Ethics in Journalism: The Sports Illustrated Scandal

  • #1
russ_watters
TL;DR Summary: If you can't tell it's fake, does it even matter?
Interesting article about an AI writing scandal at Sports Illustrated:
https://www.cnn.com/2023/11/29/opinions/sports-illustrated-ai-controversy-leitch/index.html

I hadn't heard about it in real time, which is probably indicative of how far SI has fallen*. In short, the article discusses how SI was caught using AI and, worse, fake reporter photos/profiles to write game summaries. Game summaries are the short articles that recap last night's Phillies game. They are so formulaic that AI can write them without people noticing they are AI-written. I think that's fine except, ya know, the lying.
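For what it's worth, this is why game recaps were automatable long before chatbots: pre-LLM "robot journalism" mostly just filled templates from box-score data. A toy sketch of that approach (all team names, stats, and field names below are made up for illustration):

```python
# Toy sketch of template-driven game-recap generation, the pre-LLM
# "robot journalism" approach: no model at all, just a fill-in-the-blanks
# template populated from structured box-score data.

RECAP_TEMPLATE = (
    "{winner} beat {loser} {w_score}-{l_score} on {day}. "
    "{star} led {winner} with {stat}."
)

def write_recap(game: dict) -> str:
    """Fill the boilerplate recap template from a box-score dict."""
    return RECAP_TEMPLATE.format(**game)

# Made-up box score for demonstration.
game = {
    "winner": "Phillies", "loser": "Mets",
    "w_score": 5, "l_score": 2, "day": "Tuesday",
    "star": "Harper", "stat": "two home runs",
}

print(write_recap(game))
# → Phillies beat Mets 5-2 on Tuesday. Harper led Phillies with two home runs.
```

Swap in a real data feed and a few dozen template variants and you get copy that's hard to distinguish from a human-written recap, which is exactly the point.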

This is juxtaposed against an NFL sideline reporter who made up reports when coaches wouldn't talk to her (and she wasn't allowed to report on what she overheard). She got away with it for years because even when the coaches do talk they always use the same bland platitudes. Here in Philly it was a running joke how Andy Reid (now in KC) always says "we gotta do a better job" in response to any question about negative performance:

[Video omitted: the owner doesn't allow embedding.]

So I get it. I don't like being lied to, but I sympathize with a BS job you're just trying to do while you can. Of course, I don't sympathize enough to be very sad for them when that job disappears because it's so pointless that AI can do it just as well and cheaper. Maybe I will when the jobs are more....human.

*I don't subscribe anymore, but at least I don't have to worry if there's AI content when I buy the Swimsuit Edition in an airport bookstore...yet.

Disclaimer 1: I'm not sure if this thread is about AI, journalism, ethics or economics.
Disclaimer 2: This isn't AI, but I can't argue that every time the term is misused, which is basically all the time right now.
 
  • #2
jedishrfu
There was some buzz about Journatic, a company that used computer-generated and overseas writers to write articles for local newspapers.

They got into hot water when they started adding bylines with the names of reporters who hadn't written the articles. The practice became known as "pink slime journalism."

https://en.wikipedia.org/wiki/Pink-slime_journalism?wprov=sfti1

So an AI doing it could cause similar ripples.
 
  • #3
256bits
russ_watters said:
TL;DR Summary: If you can't tell it's fake, does it even matter?

Disclaimer 1: I'm not sure if this thread is about AI, journalism, ethics or economics.
AI: a misused term, coming from the AI community itself.
Journalism: journalism schools seem to have been sideswiped. Perhaps the Society of Professional Journalists could solve the problem.
Ethics: unethical behavior from the "AI" community in promoting their products. (Just wondering if the AI industry should be held accountable to a standard as high as, say, the pharmaceutical or aviation industries before "release".)
Economics: the publishing industry is hurting, so they will try what might work, including using substandard AI releases.
They have to backpedal with egg on their face when trouble hits. I would think that the "AI" people should be the ones red-faced, but they seem to control the messaging and face little scrutiny in their marketing approach.
 
  • #4
russ_watters
jedishrfu said:
There was some buzz about Journatic, a company that used computer-generated and overseas writers to write articles for local newspapers.

They got into hot water when they started adding bylines with the names of reporters who hadn't written the articles. The practice became known as "pink slime journalism."

https://en.wikipedia.org/wiki/Pink-slime_journalism?wprov=sfti1

So an AI doing it could cause similar ripples.
The lie matters, sure, but I think the people lying about it do it because they know that the identity of the human reporters matters. I think it's about humans knowing humans are humans.

Hollywood doesn't really need actors anymore to make movies. So why do they use them? Because people know the identity of the actors and want to see those actors in movies. Real Tom Cruise has been a bankable movie star for almost 40 years and people want to see him in movies. So even if Digital Generic Action Hero is taller and better looking than Real Tom Cruise, it doesn't matter. I went to see 40 Years Later Top Gun in large part because Real Tom Cruise was the star. If the studio had decided Real Tom Cruise was too old and substituted Digital Generic Action Hero Son of Tom Cruise nobody would have bothered to go see it.

The risk in media, therefore, is if people stop caring about the human involvement. If I don't recognize the name on the byline of a news article, does it matter if that's a person or an AI bot? This issue doesn't just apply to print media; it's already starting in video. I was watching my favorite aviation vlogger today. I'm reasonably certain he's real. But he was critiquing another aviation vlogger's video that was clearly AI-generated (really badly) and didn't seem to know it. That fake vlogger video had something like a million views. But someone who doesn't know anything about aviation, watching a video by someone (or an AI) that also doesn't know anything about aviation, and that also has poor writing and speech synthesis, isn't going to be able to identify it.
 
  • #5
russ_watters
256bits said:
(Just wondering if the AI industry should be held accountable to a standard as high as, say, the pharmaceutical or aviation industries before "release".)
Probably. But even in heavily regulated industries such as cars, the regulation of new capabilities has lagged.
256bits said:
Economics: the publishing industry is hurting, so they will try what might work, including using substandard AI releases.
They have to backpedal with egg on their face when trouble hits.
Sure. And linked to your prior point about ethics, they are sacrificing their ethics for the money. I think they don't realize that even if ethics doesn't matter to them, it matters to the consumer. Or maybe they do, but hope they don't get caught in the lie.
256bits said:
I would think that the "AI" people should be the ones red-faced, but they seem to control the messaging and face little scrutiny in their marketing approach.
I really don't blame OpenAI and others for this. They do what works to market their product. Heck, I blame it more on the public, since we're the ones falling for the salesmanship. OpenAI's website is an awful corporate marketing caricature, like an evil company from an '80s/'90s movie. It's Initrode. But wow, do people buy into the schtick. So why not keep doing it?
 
  • #6
Algr
russ_watters said:
Hollywood doesn't really need actors anymore to make movies. So why do they use them? Because people know the identity of the actors and want to see those actors in movies. Real Tom Cruise has been a bankable movie star for almost 40 years and people want to see him in movies.

This doesn't need to be an actor, though. "Pixar" was a guarantee of quality for a while, and that guaranteed profits for a while after the actual quality started slipping. It could be that an actor has enough pull that (a) they can influence things beyond acting, like the script and story, and (b) they know how to use that influence to improve quality. That could result in the audience recognizing their name as an indication of quality.
 
  • #7
Vanadium 50
russ_watters said:
I don't have to worry if there's AI content when I buy the Swimsuit Edition in an airport bookstore...yet.
How do you know? And is AIbrushing better or worse than airbrushing?
 
  • #8
russ_watters
Algr said:
This doesn't need to be an actor though. "Pixar"
Yeah, I was mulling that over after I posted it. It's a tough one. It's definitely true that the actors, for the most part, don't exist or matter in cartoons. I think cartoons are different, but it's tough to put my finger on why. And Pixar's formula is somehow unique in that, and I don't know why, either. Maybe the uniqueness of the technology helped? Some thoughts that may or may not go anywhere:
  1. Some cartoon characters do become iconic: Mickey > Tom Cruise.
  2. Pixar doesn't really create iconic characters, but in the beginning it didn't seem to matter.
  3. Nobody cares who the voice actors are in cartoons, for the most part. Nobody knows who voices most of them. Woody? Tom who?
  4. But Shrek? Iconic. And he had to be Mike Myers. To a lesser extent, The Simpsons.
Can you replace the entire staff and contractors of Pixar with a computer program? Maybe. Cartoons are fake to begin with, though; there's no lie. But I don't think you can replace human celebrities with AI look-alikes. I don't think people will accept that. I think that's the problem with the story in the OP.
 
  • #9
russ_watters
Vanadium 50 said:
How do you know? And is AIbrushing better or worse than airbrushing?
I was talking about the articles.
[edit] Eh, maybe I wasn't. I don't even remember anymore. Point taken though. There's probably a threshold, but I don't know what it is.
 

