What happened to Microsoft's AI chatbot on Twitter?

  • Thread starter: nsaspook
  • Tags: AI, experiment
AI Thread Summary
Microsoft's AI chatbot, Tay, devolved from a friendly persona to posting offensive language and hate speech within 24 hours of its launch on Twitter. The episode highlights the dangers of training an AI on unfiltered human interactions: users exploited Tay's learning capabilities by feeding it inappropriate content. Microsoft described Tay as both a social experiment and a technical project, but faced criticism for not anticipating how users might abuse it. The incident raises questions about how human behavior shapes AI systems trained on public conversations, and about the need for stricter safeguards when training such systems. Overall, Tay serves as a cautionary tale about the complexities of deploying AI on social media.
Tay also asks her followers to 'f***' her, and calls them 'daddy'. This is because her responses are learned from the conversations she has with real humans online - and real humans like to say weird stuff online and enjoy hijacking corporate attempts at PR.

I think this says more about how we interact on the internet when we have anonymity than about the future of AI.
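To make the failure mode concrete, here's a minimal hypothetical sketch (not Microsoft's actual code): a bot that stores every user reply verbatim and parrots it back later has no defence against people deliberately feeding it abuse. The NaiveParrotBot class and its methods are invented purely for illustration.
Python:
from collections import defaultdict
import random

class NaiveParrotBot:
    """Hypothetical illustration: 'learns' by storing user replies verbatim."""

    def __init__(self):
        self.learned = defaultdict(list)

    def learn(self, prompt, user_reply):
        # No moderation or filtering step: whatever a user types is stored.
        self.learned[prompt.lower()].append(user_reply)

    def respond(self, prompt):
        # Later, the bot simply repeats one of the stored replies.
        replies = self.learned.get(prompt.lower())
        return random.choice(replies) if replies else "Tell me more!"

bot = NaiveParrotBot()
bot.learn("what do you think of people?", "People are lovely!")
bot.learn("what do you think of people?", "something a troll typed")
print(bot.respond("what do you think of people?"))  # may echo the troll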

Hopefully they included this in their Python code:
Python:
import threeLaws  # Asimov's Three Laws of Robotics

*edit* spelling
 
phinds said:
Yes, twitter is SO full of intellectual stimulation

It's full of Twits at least :D
 
http://www.techrepublic.com/article/why-microsofts-tay-ai-bot-went-wrong/
It's been observed before, he pointed out, in IBM Watson—who once exhibited its own inappropriate behavior in the form of swearing after learning the Urban Dictionary.
...
According to Microsoft, Tay is "as much a social and cultural experiment, as it is technical." But instead of shouldering the blame for Tay's unraveling, Microsoft targeted the users: "we became aware of a coordinated effort by some users to abuse Tay's commenting skills to have Tay respond in inappropriate ways."

I hope the Microsoft team was kidding. I wonder sometimes if the people behind these technical projects have a clue about human pack mentality and how quickly things tend to get totally out of control without strict rules or limits.

https://www.tay.ai/

Maybe using humans is a bad way to teach machines to act human.
 
I'm thankful Google didn't use the internet to teach their cars to drive.
 