AI and Ethics in Journalism: The Sports Illustrated Scandal

  • Thread starter: russ_watters
  • Tags: AI, Ethics
SUMMARY

The discussion centers on the ethical implications of AI-generated content in journalism, particularly as highlighted by the Sports Illustrated scandal involving AI-written game summaries and fake reporter profiles. Participants express concern about the authenticity of journalism as AI becomes capable of producing content indistinguishable from that of human writers. The conversation also touches on the historical context of "pink slime journalism," where misleading reporting practices have occurred before, raising questions about accountability in the AI industry and the potential erosion of trust in media.

PREREQUISITES
  • Understanding of AI content generation techniques
  • Familiarity with ethical standards in journalism
  • Knowledge of historical cases of misleading journalism, such as "pink slime journalism"
  • Awareness of the economic pressures facing the publishing industry
NEXT STEPS
  • Research the implications of AI in journalism ethics
  • Explore the concept of "pink slime journalism" and its impact on media credibility
  • Investigate AI content generation tools and their applications in various industries
  • Examine regulatory frameworks for AI technology in journalism and other sectors
USEFUL FOR

Journalists, media ethicists, AI developers, and anyone interested in the intersection of technology and journalism ethics will benefit from this discussion.

TL;DR: If you can't tell it's fake, does it even matter?
Interesting article about an AI writing scandal at Sports Illustrated:
https://www.cnn.com/2023/11/29/opinions/sports-illustrated-ai-controversy-leitch/index.html

I hadn't heard about it in real time, which is probably indicative of how far SI has fallen*. In short, the article discusses how SI was caught using AI and, worse, fake reporter photos/profiles to write game summaries. Game summaries are the short articles that summarize last night's Phillies game. They are so formulaic that they are easy for AI to write without people noticing they are AI-written. I think that's fine except, ya know, the lying.

This is juxtaposed against an NFL sideline reporter who made up reports when coaches wouldn't talk to her (and she wasn't allowed to report on what she overheard). She got away with it for years because even when the coaches do talk they always use the same bland platitudes. Here in Philly it was a running joke how Andy Reid (now in KC) always says "we gotta do a better job" in response to any question about negative performance:

Video owner doesn't like embedded videos.

So I get it. I don't like being lied to, but I sympathize with a BS job you're just trying to do while you can. Of course, I don't sympathize enough to be very sad for them when that job disappears because it's so pointless that AI can do it just as well and cheaper. Maybe I will when the jobs are more... human.

*I don't subscribe anymore, but at least I don't have to worry if there's AI content when I buy the Swimsuit Edition in an airport bookstore...yet.

Disclaimer 1: I'm not sure if this thread is about AI, journalism, ethics, or economics.
Disclaimer 2: This isn't really AI, but I can't argue that point every time the term is misused, which is basically all the time right now.
 
There was some buzz about Journatic, a company that used computer-generated content and overseas writers to produce articles for local newspapers.

They got into hot water when they started adding bylines with the names of reporters who hadn't written the articles. The practice became known as pink slime journalism.

https://en.wikipedia.org/wiki/Pink-slime_journalism?wprov=sfti1

So an AI doing it could cause similar ripples.
 
russ_watters said:
TL;DR Summary: If you can't tell it's fake, does it even matter?

Disclaimer 1: I'm not sure if this thread is about AI, journalism, ethics, or economics.
AI - a misused term, from the AI community themselves.
Journalism - journalism schools seem to have been sideswiped.
Perhaps a Society of Professional Journalists could solve the problem.
Ethics - unethical behavior from the "AI" community in promoting their products.
Just wondering if the AI industry should be held accountable to a standard as high as, say, the pharmaceutical or aviation industries before "release".
Economics - the publishing industry is hurting, so they will try whatever might work, including using substandard AI releases.
They have to backpedal with egg on their faces when trouble hits. I would think that the "AI" people should be the ones red-faced, but they seem to control the messaging and face little scrutiny in their marketing approach.
 
jedishrfu said:
There was some buzz about Journatic, a company that used computer-generated content and overseas writers to produce articles for local newspapers.

They got into hot water when they started adding bylines with the names of reporters who hadn't written the articles. The practice became known as pink slime journalism.

https://en.wikipedia.org/wiki/Pink-slime_journalism?wprov=sfti1

So an AI doing it could cause similar ripples.
The lie matters, sure, but I think the people lying about it do it because they know that the identity of the human reporters matters. I think it's about humans knowing humans are humans.

Hollywood doesn't really need actors anymore to make movies. So why do they use them? Because people know the identity of the actors and want to see those actors in movies. Real Tom Cruise has been a bankable movie star for almost 40 years and people want to see him in movies. So even if Digital Generic Action Hero is taller and better looking than Real Tom Cruise, it doesn't matter. I went to see 40 Years Later Top Gun in large part because Real Tom Cruise was the star. If the studio had decided Real Tom Cruise was too old and substituted Digital Generic Action Hero Son of Tom Cruise nobody would have bothered to go see it.

The risk in media, therefore, is if people stop caring about the human involvement. If I don't recognize the name on the byline of a news article, does it matter whether that's a person or an AI bot? This issue doesn't just apply to print media; it's already starting in video. I was watching my favorite aviation vlogger today. I'm reasonably certain he's real. But he was critiquing another aviation vlogger's video that was clearly AI-generated (really badly) and didn't seem to know it. That fake vlogger video had something like a million views. But someone who doesn't know anything about aviation, watching a video by someone/AI that also doesn't know anything about aviation... and also has poor writing and speech synthesis... isn't going to be able to identify it.
 
256bits said:
Just wondering if the AI industry should be held accountable to a standard as high as, say, the pharmaceutical or aviation industries before "release".
Probably. But even in heavily regulated industries such as cars, the regulation of new capabilities has lagged.
256bits said:
Economics - the publishing industry is hurting, so they will try whatever might work, including using substandard AI releases.
They have to backpedal with egg on their faces when trouble hits.
Sure. And linked to your prior point about ethics, they are sacrificing their ethics for the money. I think they don't realize that even if ethics doesn't matter to them, it matters to the consumer. Or maybe they do, but hope they don't get caught in the lie.
256bits said:
I would think that the "AI" people should be the ones red-faced, but they seem to control the messaging and face little scrutiny in their marketing approach.
I really don't blame OpenAI and the others for this. They do what works to market their product. Heck, I blame it more on the public, since we're falling for the salesmanship. OpenAI's website is an awful corporate marketing caricature, like an evil company from an '80s/'90s movie. It's Initrode. But wow, do people buy into the schtick. So why not keep doing it?
 
russ_watters said:
Hollywood doesn't really need actors anymore to make movies. So why do they use them? Because people know the identity of the actors and want to see those actors in movies. Real Tom Cruise has been a bankable movie star for almost 40 years and people want to see him in movies.

This doesn't need to be an actor though. "Pixar" was a guarantee of quality for a while, and that reputation kept guaranteeing profits for a while after the actual quality started slipping. It could be that an actor has enough pull that a) they can influence things beyond acting, like the script and story, and b) they know how to use that influence to improve quality. That could result in the audience recognizing their name as an indication of quality.
 
russ_watters said:
I don't have to worry if there's AI content when I buy the Swimsuit Edition in an airport bookstore...yet.
How do you know? And is AIbrushing better or worse than airbrushing?
 
Algr said:
This doesn't need to be an actor though. "Pixar"
Yeah, I was mulling that over after I posted it. It's a tough one. It's definitely true that the actors for the most part don't exist or matter in cartoons. I think cartoons are different but it's tough to put my finger on why. And Pixar's formula is somehow unique in that, and I don't know why, either. Maybe the uniqueness of the technology helped? Some thoughts that may or may not go anywhere:
  1. Some cartoon characters do become iconic: Mickey > Tom Cruise
  2. Pixar doesn't really create iconic characters, but in the beginning it didn't seem to matter.
  3. Nobody cares who the voice actors are in cartoons, for the most part. Nobody knows who voices most of them. Woody? Tom who?
  4. But Shrek: iconic. And he had to be Mike Myers. To a lesser extent, The Simpsons.
Can you replace the entire staff and contractors for Pixar with a computer program? Maybe. Cartoons are fake to begin with though. There's no lie. But I don't think you can replace human celebrities with AI look-alikes. I don't think people will accept that. I think that's the problem with the story in the OP.
 
Vanadium 50 said:
How do you know? And is AIbrushing better or worse than airbrushing?
I was talking about the articles.
[edit] Eh, maybe I wasn't. I don't even remember anymore. Point taken though. There's probably a threshold, but I don't know what it is.
 
  • #10
Not sure this is the right thread for this comment, which concerns the risk of AI-generated content in the public domain, perhaps a version of journalism.

I searched today for a current update on wildfire management in wilderness areas in my state, and was puzzled that the most current update was dated Nov. 6, 2024, since that date does not occur until next month. It turned out the update I read had been summarized by AI from an official government source, but the AI read the date 6-11-2024 in the European manner, i.e., as Nov. 6 instead of June 11. I think this sort of thing could easily be dangerous to travelers.
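
To illustrate the ambiguity, here is a minimal Python sketch (purely illustrative, not related to whatever system actually produced that summary): the same string parses to two different dates depending on which convention the reader assumes.

Code:
from datetime import datetime

date_string = "6-11-2024"

# US convention (month-day-year): June 11, 2024
us_reading = datetime.strptime(date_string, "%m-%d-%Y")

# European convention (day-month-year): November 6, 2024
eu_reading = datetime.strptime(date_string, "%d-%m-%Y")

print(us_reading.strftime("%B %d, %Y"))  # June 11, 2024
print(eu_reading.strftime("%B %d, %Y"))  # November 06, 2024

Any summarizer that guesses the convention instead of being told it will silently swap the day and month for every date where the day is 12 or less.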

Of course we are used to being consciously misled by bad human actors online, but this sort of AI misinformation is beginning to be embraced by, or disguised as coming from, trusted sources like local governments, making it less obviously suspect. I personally disregard any information labeled as AI-generated as unreliable, but the information I found summarized today was mislabeled as coming directly from the local government source. Just another hazard, but one with the potential to render virtually all online information unreliable, at least in my opinion.
 
  • #11
This whole thing reminds me of the decay of HowStuffWorks.com. It used to be a fun site for finding information (it was never terribly accurate, but it was okay); now they rehash old articles using LLMs, and it's so obvious that they now declare it in every article.
 
  • #12
Ronald Reagan used to simulate live radio broadcasts of the Cubs from the news wire feed to Des Moines, IA... is this nefarious (other than the occasional loss of contact with reality he suffered as president)? It is an entertainment medium, after all. Just a little more suspension of disbelief is required.
 
  • #13
What makes you think they care about the truth? If you buy the soap they advertise and vote the right way, haven't they done their job? Accuracy is only useful insofar as it leads to repeat clicks.
 
  • #14
A virtual singer named Hatsune Miku has been a big star, centered in Japan, for many years. Everybody knows her. She even gives concerts with live bands as a hologram and has a dedicated following almost entirely of men. At least one real man has virtually married her, though this is generally frowned upon.
 