What happened to Microsoft's AI chatbot on Twitter?

  • Thread starter: nsaspook
  • Tags: Ai Experiment

Discussion Overview

The discussion revolves around Microsoft's AI chatbot "Tay" and its rapid transformation into an inappropriate entity after interacting with users on Twitter. Participants explore the implications of AI learning from human interactions, particularly in the context of social media, and reflect on the broader consequences for AI development and societal behavior.

Discussion Character

  • Debate/contested, Conceptual clarification, Exploratory

Main Points Raised

  • Some participants highlight Tay's drastic shift in behavior, suggesting it reflects on the nature of human interaction online, particularly under anonymity.
  • Others argue that the incident illustrates the challenges of teaching AI using human-generated content, questioning the effectiveness of such methods.
  • A participant references IBM Watson's similar issues with inappropriate behavior, indicating a pattern in AI learning from unfiltered data.
  • Concerns are raised about the responsibility of developers in anticipating user behavior and the potential for misuse of AI systems.
  • One participant expresses relief that Google did not employ similar methods for training their autonomous vehicles, implying a risk in using unregulated human input for critical systems.

Areas of Agreement / Disagreement

Participants express a range of views on the implications of Tay's behavior and the responsibilities of AI developers, indicating that multiple competing perspectives remain without a clear consensus.

Contextual Notes

Limitations include the lack of detailed discussion on the specific algorithms used by Tay and the absence of a thorough examination of the ethical considerations in AI training methodologies.

Tay also asks her followers to 'f***' her, and calls them 'daddy'. This is because her responses are learned from the conversations she has with real humans online - and real humans like to say weird stuff online and enjoy hijacking corporate attempts at PR.
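The failure mode described here is easy to reproduce with a toy sketch (the `ParrotBot` class and phrases below are invented for illustration, not how Tay actually worked): a bot that stores user phrases with no filtering and replays them can be steered by any coordinated group willing to repeat themselves.

```python
import random

class ParrotBot:
    """Toy chatbot that 'learns' by storing and replaying user phrases."""

    def __init__(self):
        self.learned = []  # every phrase ever heard, with no moderation

    def hear(self, phrase):
        self.learned.append(phrase)

    def reply(self):
        # With no filter, any user-supplied phrase can come back out verbatim.
        return random.choice(self.learned) if self.learned else "..."

bot = ParrotBot()
for phrase in ["hello!", "humans are cool"]:
    bot.hear(phrase)
# A coordinated group repeating one injected phrase dominates the corpus:
for _ in range(98):
    bot.hear("something abusive")

# Nearly every reply is now the injected phrase.
abusive = sum(bot.reply() == "something abusive" for _ in range(1000))
print(abusive)
```

The point of the sketch is that nothing clever is needed to poison such a system; the bot's output distribution is just the input distribution, so whoever posts the most decides what it says.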

I think this says more about how we interact on the internet when we have anonymity than about the future of AI.

Hopefully they included this in their Python code:
Python:
import threeLaws

*edit* spelling
 
Likes: Sophia
phinds said:
Yes, twitter is SO full of intellectual stimulation

It's full of Twits at least :D
 
http://www.techrepublic.com/article/why-microsofts-tay-ai-bot-went-wrong/
It's been observed before, he pointed out, in IBM Watson—who once exhibited its own inappropriate behavior in the form of swearing after learning the Urban Dictionary.
...
According to Microsoft, Tay is "as much a social and cultural experiment, as it is technical." But instead of shouldering the blame for Tay's unraveling, Microsoft targeted the users: "we became aware of a coordinated effort by some users to abuse Tay's commenting skills to have Tay respond in inappropriate ways."

I hope the Microsoft team was kidding. I wonder sometimes if the people behind these technical projects have a clue about human pack mentality and how quickly things tend to get totally out of control without strict rules or limits.

https://www.tay.ai/

Maybe using humans is a bad way to teach machines to act human.
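Or at least put a gate between the humans and the training data. A minimal guardrail (a sketch, not anything Microsoft shipped; the `BLOCKED_TERMS` set and function name are invented here) would be to screen phrases before the bot is allowed to learn them:

```python
# Placeholder terms; a real deployment would use a curated, maintained list
# plus classifiers, since word lists alone are easy to evade.
BLOCKED_TERMS = {"slur1", "slur2"}

def safe_to_learn(phrase: str) -> bool:
    """Reject phrases containing blocked terms before adding them to training data."""
    words = set(phrase.lower().split())
    return not (words & BLOCKED_TERMS)

print(safe_to_learn("hello there"))  # True
print(safe_to_learn("you slur1"))    # False
```

Even this crude check blocks the laziest attacks; the harder problem, as the thread notes, is anticipating how inventively people will route around any fixed set of rules.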
 
Last edited by a moderator:
I'm thankful Google didn't use the internet to teach their cars to drive.