Taylor Swift once threatened to sue Microsoft over its chatbot Tay, which Twitter users manipulated into a bile-spewing racist (MSFT)

An anecdote from Microsoft president Brad Smith's upcoming book reveals that he once received a legal threat from Taylor Swift over a chatbot.

The chatbot in question was known as XiaoIce in China, where it was designed to converse with real people on social media. Its US version was named Tay.

In his forthcoming book "Tools and Weapons," per the Guardian's Alex Hern, Smith says he was on vacation having dinner when he received a message.


"An email had just arrived from a Beverly Hills lawyer who introduced himself by telling me: 'We represent Taylor Swift, on whose behalf this is directed to you,'" Smith writes.

"He went on to state that 'the name Tay, as I'm sure you must know, is closely associated with our client.' No, I actually didn't know, but the email nonetheless grabbed my attention. The lawyer went on to argue that the use of the name Tay created a false and misleading association between the popular singer and our chatbot, and that it violated federal and state laws."

In 2016, Tay was given its own Twitter account, where it could learn from its interactions with Twitter users. Unfortunately, it was quickly manipulated into spewing horrendously racist tweets, at one point denying the Holocaust.

Microsoft shut Tay down after less than 24 hours.

Smith says the incident taught him "not just about cross-cultural norms but about the need for stronger AI safeguards."

Business Insider contacted Taylor Swift's representatives for comment on the incident and on whether the matter was resolved to her satisfaction. They were not immediately available for comment.
