Microsoft Axes Twitter Bot That Regurgitated Internet Racism


Image: Tay Tweets

OMG! Did you hear about the artificial intelligence program that Microsoft designed to chat like a teenage girl? It was totally yanked offline in less than a day, after it began spouting racist, sexist and otherwise offensive remarks.

Microsoft said it was all the fault of some really mean people, who launched a "coordinated effort" to make the chatbot known as Tay "respond in inappropriate ways." To which one artificial intelligence expert responded: Duh!

Well, he didn't really say that. But computer scientist Kris Hammond did say, "I can't believe they didn't see this coming."

Microsoft said its researchers created Tay as an experiment to learn more about computers and human conversation. On its website, the company said the program was targeted at an audience of 18- to 24-year-olds and was "designed to engage and entertain people where they connect with each other online through casual and playful conversation."

In other words, the program used a lot of slang and tried to provide humorous responses in response to messages and photos. The chatbot went live on Wednesday, and Microsoft invited the public to chat with Tay on Twitter and some other messaging services popular with teens and young adults.

"The more you chat with Tay the smarter she gets, so the experience can be more personalized for you," the company said.

But some users found Tay's responses odd, and others discovered it wasn't hard to nudge Tay into making offensive comments, apparently by repeatedly feeding it questions or statements that contained offensive words. Soon, Tay was making sympathetic references to Hitler — and creating a furor on social media.

"Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay's commenting skills to have Tay respond in inappropriate ways," Microsoft said in a statement.

While the company didn't elaborate, Hammond says it appears Microsoft made no effort to prepare Tay with appropriate responses to certain words or topics. Tay seems to be a version of "call and response" technology, added Hammond, who studies artificial intelligence at Northwestern University and also serves as chief scientist for Narrative Science, a company that develops computer programs that turn data into narrative reports.

"Everyone keeps saying that Tay learned this or that it became racist," Hammond said. "It didn't." The program most likely reflected things it was told, probably more than once, by people who decided to see what would happen, he said.

