# The AI experiment

Tay also asks her followers to 'f***' her and calls them 'daddy'. This is because her responses are learned from the conversations she has with real humans online - and real humans like to say weird stuff online and enjoy hijacking corporate attempts at PR.
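To see why that goes wrong, here's a toy sketch (not Tay's actual design, which Microsoft hasn't published) of a bot that learns replies directly from user messages with no content filter - coordinated users can feed it anything and it will repeat it back:

```python
import random

class EchoLearnerBot:
    """Hypothetical toy bot: absorbs user phrases and parrots them back."""

    def __init__(self):
        self.learned = []  # phrases absorbed verbatim from users

    def listen(self, message):
        # No moderation step: everything users say becomes training data.
        self.learned.append(message)

    def reply(self):
        # The bot can only say what it was taught.
        return random.choice(self.learned) if self.learned else "hello"

bot = EchoLearnerBot()
bot.listen("nice weather today")
bot.listen("something abusive")  # hostile users "poison" the vocabulary
print(bot.reply())               # either phrase can come back out
```

The point of the sketch: without a filter between `listen` and `reply`, the bot's output quality is exactly the quality of its loudest users.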

I think this says more about how we interact on the internet when we have anonymity than it does about the future of AI.

Hopefully they included this in their Python code:
Python:
import threeLaws

*edit* spelling

Sophia
Yes, Twitter is SO full of intellectual stimulation

It's full of Twits at least :D

http://www.techrepublic.com/article/why-microsofts-tay-ai-bot-went-wrong/
It's been observed before, he pointed out, in IBM Watson—who once exhibited its own inappropriate behavior in the form of swearing after learning the Urban Dictionary.
...
According to Microsoft, Tay is "as much a social and cultural experiment, as it is technical." But instead of shouldering the blame for Tay's unraveling, Microsoft targeted the users: "we became aware of a coordinated effort by some users to abuse Tay's commenting skills to have Tay respond in inappropriate ways."

I hope the Microsoft team was kidding. I wonder sometimes if the people behind these technical projects have a clue about human pack mentality and how quickly things tend to get totally out of control without strict rules or limits.

https://www.tay.ai/ [Broken]

Maybe using humans is a bad way to teach machines to act human.

Borg